I was at a conference this past weekend where a famous astrophysicist explained that being a practicing scientist means constantly trying to figure out how your own claims are wrong. It is easy, all too easy, to look at some data, draw a conclusion and stop there. But science demands that we look at all conclusions, even our own conclusions, to determine if there is an alternative explanation for the data we have in hand. All too often, there is.
That’s why the Karolinska Institute’s decision to create a publicity campaign for a new paper on the possibility that C-sections might lead to epigenetic changes in newborn DNA is thoroughly irresponsible and violates the fundamental principles of science.
The paper is entitled "Cesarean delivery and hematopoietic stem cell epigenetics in the newborn infant: implications for future health?" Let me tell you exactly what it shows:
NOTHING!
It is a tiny set of preliminary observations, without demonstrated reproducibility and with no evidence that it is clinically relevant in any way.
To give you an idea of just how preliminary it is, and just how irresponsible it is to promote it, imagine if I published the following study:
Cesarean delivery and newborn blood type: implications for future health?
This was an observational study of 64 healthy, singleton, newborn infants (33 boys) born at term. Cord blood was sampled after elective CS (n = 27) and vaginal delivery (n = 37). Blood type was determined in the standard fashion.
Results
Infants delivered by CS were more likely to be type O+ than infants delivered vaginally. In relation to mode of delivery, an antigen-specific analysis of multiple antigens (Duffy, Kell, etc.) showed differences of 10% or greater for a number of different antigens.
Conclusion
A possible interpretation is that mode of delivery affects blood type.
Ridiculous, right?
We know that mode of delivery has no impact on blood type, but it is completely possible that if we looked at the blood types of 64 infants (27 born by elective C-section), we might get exactly these results simply by chance.
Why might we get these results by chance? (The quick simulation sketched after this list makes the point concrete.)
- We looked at too few babies
- We never checked the background rate of these findings
- We assumed that differences of 10% were statistically relevant
- We failed to look at the difference over time by sampling blood type before birth and at multiple intervals after birth
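To see just how easily small groups and multiple comparisons manufacture "differences," here is a minimal simulation sketch. It is my own illustration, not anything from the paper: the number of markers tested (8) and the underlying marker rate (40%) are assumed figures chosen for the example. It randomly assigns every infant a marker status from the same distribution in both groups, then counts how often the two groups nonetheless differ by 10% or more on at least one marker.

```python
import random

N_CS, N_VAG = 27, 37     # group sizes from the hypothetical study above
N_MARKERS = 8            # number of antigens tested (an assumed figure)
TRUE_RATE = 0.4          # identical underlying rate in both groups (assumed)
TRIALS = 10_000

hits = 0
for _ in range(TRIALS):
    for _ in range(N_MARKERS):
        # draw each infant's marker status from the SAME distribution,
        # so any group difference is pure sampling noise
        cs_rate = sum(random.random() < TRUE_RATE for _ in range(N_CS)) / N_CS
        vag_rate = sum(random.random() < TRUE_RATE for _ in range(N_VAG)) / N_VAG
        if abs(cs_rate - vag_rate) >= 0.10:   # a "difference of 10% or greater"
            hits += 1
            break   # count each trial at most once

print(f"Trials with at least one spurious 10% gap: {hits / TRIALS:.0%}")
```

With these assumptions, nearly every run turns up at least one marker "differing" by 10% or more between groups that are, by construction, identical. That is all a 10% gap across 64 infants and multiple comparisons tells you.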
All these deficiencies would apply to the study of methylation of newborn DNA, plus:
- The authors have not demonstrated that the DNA methylation differences they measured are a proxy for meaningful epigenetic changes
The result is a paper that shows an observed difference between tiny groups, with no evidence that the difference has any relevance to anything at all.
There is nothing wrong with publishing a paper that simply reports observed differences. That is often what basic science is about. But there is something very wrong with speculating on the meaning of those differences without any evidence to support the speculation. And there is something grossly irresponsible about publicizing an observed difference that may simply reflect chance.
I understand that it is very difficult to get funding for basic science research. And I understand that implying that your basic research has relevance to a current area of speculation might improve your chances of funding. But trumpeting a finding of essentially nothing to the public by implying that it means something is unethical. It only serves to scare people and to undermine trust in the sciences when better quality data inevitably reveal something entirely different.
Responsible scientists should not publish data until they have reproduced it and until they have carefully considered and rejected all possible reasons why their conclusions are wrong. That’s science; this paper, in contrast, is just self-serving publicity.