Suppose you wanted to do a study that showed that a larger role for midwives could improve US healthcare outcomes. Imagine that you believed in your heart of hearts that midwifery care was better and you were sure that statistical analysis would prove your claims. Or, if you are cynical like me, imagine that you were intent on increasing midwife employment opportunities and you planned to massage the data until it showed what you wanted.
Where would you start?
I know! You’d start by showing that outcomes improve as the number of midwives increases. You could show what happened to infant mortality as the number of midwives decreased dramatically in the early 20th Century and rose steeply at its end.
Oops! This shows that infant mortality dropped steeply as the number of midwives decreased sharply and that the increase in midwives at the end of the 20th Century had little to no impact.
Let’s look at maternal mortality.
That certainly doesn’t show that midwives improve outcomes. Moreover, US maternal mortality appears to have risen in the early 21st Century as the role of midwives increased (not shown).
Drat! I know. You could look at the density of midwives in each state and compare it to outcomes.
Here’s the density of CNMs per state:
And here’s perinatal mortality rate by state:
Dammit! That doesn’t prove the point, either. There are states with lots of CNMs that have poor mortality rates (Maine, Colorado) and states with few CNMs that have excellent outcomes (Nebraska, Iowa).
I’ve got it! You could create a composite score for midwifery integration and massage the components until it shows what you want it to show! That’s just what midwives have done.
The new paper is Mapping integration of midwives across the United States: Impact on access, equity, and outcomes by a group of authors including Holly Powell Kennedy, former head of the American College of Nurse-Midwives, Melissa Cheyney, former Director of Research for the Midwives Alliance of North America, Marian MacDorman, Editor of the Lamaze-owned Birth: Issues in Perinatal Care, and Eugene DeClercq, advisor to Ricki Lake on The Business of Being Born.
You’ll never guess what they found!
Using a modified Delphi process, we selected 50/110 key items to include in a weighted, composite Midwifery Integration Scoring (MISS) system. Higher scores indicate greater integration of midwives across all settings. We ranked states by MISS scores; and, using reliable indicators in the CDC-Vital Statistics Database, we calculated correlation coefficients between MISS scores and maternal-newborn outcomes by state …
The MISS scoring system assesses the level of integration of midwives and evaluates regional access to high quality maternity care. In the United States, higher MISS Scores were associated with significantly higher rates of physiologic birth, less obstetric interventions, and fewer adverse neonatal outcomes.
Who could have seen that coming??!!
Nina Martin of ProPublica reports on the results:
Now a groundbreaking study, the first systematic look at what midwives can and can’t do in the states where they practice, offers new evidence that empowering them could significantly boost maternal and infant health. The five-year effort by researchers in Canada and the U.S., published Wednesday, found that states that have done the most to integrate midwives into their health care systems, including Washington, New Mexico and Oregon, have some of the best outcomes for mothers and babies. Conversely, states with some of the most restrictive midwife laws and practices — including Alabama, Ohio and Mississippi — tend to do significantly worse on key indicators of maternal and neonatal well-being.
Sounds impressive on the surface, but if you dig deeper, you find a lot of problems with the analysis.
The most obvious problem is related to race. Since African Americans have 3X higher rates of perinatal and maternal mortality, we KNOW that the mortality rates in each state are related to the proportion of African Americans within the state. Let’s compare the midwifery integration scores to the “whiteness” of each state.
Here’s the midwifery integration scores:
Here is the “whiteness” of each state (created by inverting the colors on a map of the proportion of African Americans per state):
Although it doesn’t map exactly, it’s pretty clear that the whiter the state, the greater the midwifery integration. That’s not surprising since midwifery in the US is almost exclusively the province of white women. So while it looks as though midwifery integration is correlated with better outcomes, the reality is that midwifery integration is correlated with race and it is RACE that is correlated with outcomes.
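If the confounding argument sounds abstract, here’s a minimal simulation (Python, with entirely made-up numbers, not the study’s data) of how a third factor can manufacture precisely this kind of correlation: racial composition drives both the integration score and mortality, integration itself does nothing, and yet the crude correlation looks impressive until you control for the confounder.

```python
# Illustrative sketch only: simulated, hypothetical state-level data.
# The confounder (racial composition) drives BOTH the integration score
# and mortality; integration has no effect on mortality at all.
import numpy as np

rng = np.random.default_rng(0)
n_states = 50

# Hypothetical fraction of each state's population that is white.
pct_white = rng.uniform(0.4, 0.95, n_states)

# Integration rises with "whiteness" (plus noise); it never touches mortality.
integration = 60 * pct_white + rng.normal(0, 5, n_states)

# Mortality falls with "whiteness" (plus noise); integration is not used.
mortality = 10 - 6 * pct_white + rng.normal(0, 0.5, n_states)

# Crude correlation: integration looks strongly "protective"...
print(np.corrcoef(integration, mortality)[0, 1])   # strongly negative

# ...but the partial correlation, controlling for pct_white, vanishes.
resid_int = integration - np.poly1d(np.polyfit(pct_white, integration, 1))(pct_white)
resid_mort = mortality - np.poly1d(np.polyfit(pct_white, mortality, 1))(pct_white)
print(np.corrcoef(resid_int, resid_mort)[0, 1])    # near zero
```

That is exactly the pattern the maps above suggest.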
That is not the only way that the authors have played fast and loose with the truth. Remember, the reason this paper exists is precisely because there is NO direct relationship, either historical or by state, between the number of midwives and childbirth outcomes. The whole point of the paper is to try to create a relationship, no matter how spurious or tenuous.
The authors created a composite score of midwifery integration.
Using a modified Delphi process, we selected 50/110 key items to include in a weighted, composite Midwifery Integration Scoring (MISS) system.
What does that mean? It means that the authors convened a group of people deemed experts to decide what constituted midwifery integration and to give different weights to different factors.
They offered this example:
What is a summary score?
Summary scores combine many measures into one “overall” score, even though the individual measures may address quite different aspects of quality. While composites include a few measures that are highly related, a summary score reflects many more measures that may address different issues. However, all the measures are about a single specific provider or service.
What is weighting?
Summary scores must either give the same “weight” to all the measures they include or give some measures more weight than others. Weightings inherently involve judgments of what is more important and consequential. Individual report users may have different views on this than report sponsors, so the summary score may not reflect their preferences.
As you might imagine there are serious limitations to creating a weighted scoring system.
Sponsors who decide to set weights will need a strong rationale for their decision. Tips for weighting the measures include the following:
Involve people with multiple perspectives (clinicians, patients, managers, and payers) in setting the weights to make sure they are not biased in the direction of a single group’s perspective…
As the brief example offered by the authors demonstrates, weighting is deeply subjective. The authors of this study offer no rationale for the weights they chose. Moreover, weighting offers an easy way to manipulate the data: correlations that otherwise would not exist can be created by careful manipulation of relative weights.
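To see how much room the weighting step leaves for mischief, here’s a toy example (hypothetical, simulated numbers; I am not claiming this is how the MISS weights were actually derived). With 50 weighted items and only about 50 states, a weighted composite has enough free parameters that it can be tuned to track almost any outcome, even when every single item is pure noise:

```python
# Illustrative sketch only: hypothetical random data, NOT the MISS items.
# 50 "integration items" of pure noise across 50 states, and an outcome
# that has nothing to do with any of them.
import numpy as np

rng = np.random.default_rng(1)
n_states, n_items = 50, 50

items = rng.normal(size=(n_states, n_items))    # noise "items"
outcome = rng.normal(size=n_states)             # unrelated outcome

# Equal weights: only a chance-level correlation with the outcome.
equal_score = items.mean(axis=1)
print(np.corrcoef(equal_score, outcome)[0, 1])  # weak, chance-level

# "Tuned" weights (here, a least-squares fit, purely to show the ceiling):
# with as many free weights as states, the composite can track the outcome
# almost perfectly even though every item is noise.
weights, *_ = np.linalg.lstsq(items, outcome, rcond=None)
tuned_score = items @ weights
print(np.corrcoef(tuned_score, outcome)[0, 1])  # close to 1.0 by construction
```

No one has to cheat consciously for this to happen; an iterative “expert” process that keeps tweaking weights until the score looks right can drift toward the same result.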
The validity of a composite scoring system can be evaluated, but as far as I can determine, the authors made no attempt to validate their scoring system.
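Validation doesn’t require heroics, either. One routine (though far from the only) check is internal consistency, for example Cronbach’s alpha; here is a minimal sketch on hypothetical, made-up item data:

```python
# Illustrative sketch only: one routine validity check (internal consistency
# via Cronbach's alpha) applied to hypothetical, made-up item data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_observations, n_items) matrix of item scores."""
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

rng = np.random.default_rng(2)
fake_items = rng.normal(size=(51, 50))   # 51 jurisdictions x 50 items
print(cronbach_alpha(fake_items))        # near zero: unrelated items don't cohere
```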
The authors conclude (not surprisingly) that “greater midwifery integration” is associated with better outcomes:
This greater integration was significantly associated with higher rates of spontaneous vaginal birth, VBAC and breastfeeding at birth and at six months, as well as lower rates of obstetric interventions, preterm birth, low birth weight infants, and neonatal death…
They give lip service to the fact that correlation is not causation and then proceed to ignore it, spinning all sorts of scenarios in which outcomes could be improved and money saved if only there were more jobs and autonomy for midwives.
The bottom line is this: there is NO historical correlation between number of midwives and outcomes and there is NO contemporary relationship between availability of midwives per state and outcomes. In response, midwives have created an unvalidated, subjective scoring system that purports to measure midwifery integration. Adjusting the weights of the variables leads to a possibly spurious correlation between midwifery integration and outcomes, a correlation that in no way proves causation.
What have these midwifery partisans demonstrated (beyond the fact that they are willing to go to great lengths to generate some sort of correlation)? Absolutely nothing.