I am disappointed and very, very frustrated.
Scientific journals continue to publish more and more junk. Do the editors read what they publish? Do they think about whether it is true? Or do they simply print it, send out a press release and wait for attention?
The latest piece of junk published in a reputable scientific journal is Publicly funded homebirth in Australia: a review of maternal and neonatal outcomes over 6 years, just published in the Medical Journal of Australia.
I reviewed the findings when they were first presented at a medical conference in April (Australian midwives boast about terrible homebirth death rate):
During the 5 years of the study, there were 1807 women who intended, at the start of labor, to give birth at home. 83% had a homebirth, 52% in water (I have no idea why they mention this except to check women’s performances against the midwifery ideal.) The transfer rate was 17%. The C-section rate was 5.4% and the neonatal death rate was 2.2/1000. That’s more than 5X the rate of 0.4/1000 found in a 2009 report on birth in South Australia. In addition, 2 babies suffered hypoxic ischemic encephalopathy (brain damage due to lack of oxygen).
And that probably undercounts the deaths and complications because reporting was voluntary and only 9 of 13 program directors responded. Nonetheless, the authors conclude:
This study provides the first national evaluation of a significant proportion of women choosing publicly funded homebirth in Australia; however, the sample size does not have sufficient power to draw a conclusion about safety. More research is warranted into the safety of alternative places of birth within Australia.
Actually, the study is not underpowered to detect an extremely high death rate.
What is statistical power?
In plain English, statistical power is the likelihood that a study will detect an effect when there is an effect there to be detected. If statistical power is high, the probability of making a Type II error, or concluding there is no effect when, in fact, there is one, goes down.
Statistical power is affected chiefly by the size of the effect and the size of the sample used to detect it. Bigger effects are easier to detect than smaller effects, while large samples offer greater test sensitivity than small samples.
In most studies we find very small differences between the two groups under investigation. Therefore, we need a lot of individuals in each group in order to be sure that the difference we have found is real, and not the result of chance.
In contrast, if we find a very large difference, we don’t need a lot of individuals in each group in order to be sure that the result is real. A more than 400% increase in the death rate is an extremely large difference.
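To make this concrete, here is a back-of-the-envelope power calculation — my own illustrative sketch, not anything from the paper — using an exact one-sided binomial test. It asks: with 1807 births, how likely is a study to detect a jump in neonatal mortality from the 0.4/1000 benchmark to 2.2/1000?

```python
# Illustrative power sketch (not the authors' analysis; they performed none).
# Exact one-sided binomial test at alpha = 0.05, using only the numbers
# reported in the paper: n = 1807 births, benchmark rate 0.4/1000,
# observed rate 2.2/1000.
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def upper_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return 1 - sum(binom_pmf(i, n, p) for i in range(k))

n = 1807
p0 = 0.4 / 1000   # benchmark rate from the 2009 South Australia report
p1 = 2.2 / 1000   # rate reported in the homebirth study
alpha = 0.05

# Smallest death count that would be statistically significant
# if the true rate were the benchmark p0
k_crit = next(k for k in range(n + 1) if upper_tail(k, n, p0) < alpha)

# Power: probability of reaching that count if the true rate is p1
power = upper_tail(k_crit, n, p1)
print(k_crit, round(power, 2))
```

Under these assumptions, as few as 3 deaths would already be significant, and a true rate of 2.2/1000 would be detected roughly three times out of four — hardly a study too small to say anything.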
The authors never bothered to conduct a statistical analysis of any kind, which means that they literally have no idea whether any of their claims are valid. They simply announced that they could make no determination of safety, but nonetheless boasted about excellent outcomes. You can’t have it both ways. Either the study has too few individuals to draw ANY conclusions, in which case the entire paper is meaningless, or the study contains enough individuals to provide a meaningful result.
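The analysis the authors skipped takes a few lines. The sketch below is mine, not theirs: the paper reports only the rate of 2.2/1000, so the death count of 4 is my back-calculation (2.2/1000 × 1807 ≈ 4). An exact binomial test then asks how likely 4 or more deaths would be if the true rate were the 0.4/1000 benchmark.

```python
# One-sided exact binomial test; the death count is back-calculated
# from the reported rate (2.2/1000 of 1807 births ~ 4 deaths).
from math import comb

n = 1807
deaths = 4            # my back-calculation; the paper reports only the rate
p0 = 0.4 / 1000       # benchmark rate from the 2009 South Australia report

# p-value: P(X >= deaths) under Binomial(n, p0)
p_value = 1 - sum(comb(n, k) * p0**k * (1 - p0)**(n - k) for k in range(deaths))
print(round(p_value, 4))  # ≈ 0.006, far below the usual 0.05 threshold
```

In other words, if the data are as reported, the excess mortality is statistically significant — the opposite of "too underpowered to draw a conclusion about safety."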
Caroline Homer, one of the authors of the study, and Hannah Dahlen, a spokesperson for the Australian College of Midwives, take to the lay press to boast about the results of the study (Study of low risk women reveals good news on the home birth front):
Hannah Dahlen, Professor of Midwifery at University of Western Sydney, said the findings were “very reassuring” and showed a very low perinatal mortality rate, comparable with birth centres.
That is an utter falsehood.
The study shows a VERY HIGH neonatal mortality rate, more than 400% higher than comparable risk hospital birth.
Which raises the question: Is Dahlen deliberately trying to trick readers, since a neonatal mortality rate of 2.2/1000 is more than 5X that of comparable risk hospital birth? Or are she and the authors of the study so ignorant of childbirth safety statistics that they don’t realize that the homebirth death rate is more than 400% higher than that of comparable risk hospital birth?
And what about the MJA?
Why did they publish such a misleading paper? Why didn’t they insist on a discussion of the very high death rate? Why did they allow the authors to declare that the study is underpowered to determine safety when the authors did no statistical calculations of any kind? If the study is underpowered, why did they bother to publish it?
The publication of this study is disappointing and very, very frustrating. The very best we can say about this paper is that it is utterly misleading.
As I said above, I don’t know if Hannah Dahlen and Australian midwives are trying to trick the Australian public into believing that homebirth is safe when it clearly is not, or whether they are so ignorant of basic science, statistics, and mortality data that they don’t realize that they have shown that homebirth is dangerous.
It doesn’t really matter. Boasting about a hideous death rate is both bizarre and unacceptable.