Johnson and Daviss are still trying to salvage their study
Johnson and Daviss are still trying desperately to salvage their BMJ 2005 study, now that it has been exposed that the study actually shows homebirth with a CPM in 2000 had almost triple the neonatal death rate of hospital birth.
The paper itself relied on a scam, and the excuses are meant to obscure or minimize the scam, or to pretend that it didn't matter anyway. Now the excuses have been published as a PDF designed to look like a scientific paper.
Johnson and Daviss are offering it as a free download on their website Understanding Birth Better. Through the miracle of modern technology, PDF documents can be marked up with comments. I have taken the liberty of commenting directly on the document to point out the various inaccuracies. You can access the document with comments here.
Labels: Johnson and Daviss

A professor of statistics says ...
Bravo, maria! Maria asked Henci Goer the question that sparked the debate on her site. She has not settled for the non-answer that Goer supplies or for the fact that my posts were deleted. She went to someone who she believes is an independent expert.
Ok, I posted this to a prof. in statistics and here is his response:
The study I am looking at is this study:
http://www.bmj.com/cgi/content/full/330/7505/1416
The following explanation was given by Johnson and Daviss about their study:
http://understandingbirthbetter.com/section.php?ID=31&Lang=En&Nav=Section
Some people say they have used the wrong comparison groups and that the correct comparison would prove that homebirth has triple the neonatal mortality rate of hospital birth.
***
OK, here’s my take on it…
When I read the executive summary of the BMJ, I was struck by its modest claims in the results. By modest, I mean that it essentially reported the percentages of differing outcomes within its own data set. It was the conclusion, however, that struck me: it claims that their study group was similar to a group not in the study, namely, low risk hospital births in the US.
That seems to be the basis of the criticism. The comparison group has one obvious difference that masks for lots of other potential discrepancies: it was retrospective data. The authors of the study actually point this out in the study, however, so, to me, it isn’t fair to fault them for making the comparison. Perhaps they could have added a footnote to the conclusion in the exec summary, but that’s a bit picky. The disclaimer is clear in the discussion section:
"Regardless of methodology, residual confounding of comparisons between home and hospital births will always be a possibility. Women choosing home birth (or who would be willing to be randomised to birth site in a randomised trial) may differ for unmeasured variables from women choosing hospital birth…."
Consistent with this disclaimer, the biggest factor (in my opinion) is the demographics of their study group. This is visible in Table 1, which shows the characteristics of the mothers in the two groups:
- More women above the age of 25
- Likelihood of having already given birth is much higher
- Typical education levels are higher
- 95% had partners—which I would wager is significantly larger than the comparison group, whose rate is reported as N/A
Their study group is a self-selecting subpopulation of women—they are different from other women in ways that move them to choose a birth method that is out of the "main stream." This fact alone (supported by the items I just listed) suggests to me that they were better prepared for birth, and more aware of risks and of ways to handle them.
They did attempt to sort the data from the Nat. Center for Health Stats into "low risk" mothers, in order to make a better comparison. Assuming that sorting method is valid, they arrive at the result that their group is, essentially, equivalent to the in hospital "low risk" group. Not shocking, given the kind of mom in their population.
I hope this is helpful.
***
Now my question is, what numbers did Amy Tuteur use to arrive at her conclusion that homebirth has triple the risk of hospital birth? Where can I find these numbers and how are they a better comparison?
I think in the end, on one hand, even though this study has lots of merit, the homebirth advocates should maybe not take it as a decisive study about the safety of homebirth, as they tend to do now, saying 'see!'
However, I do not think Amy's claims are grounded either so I would like to present to this prof. the numbers Amy is talking about and see what he comes up with.
Henci, would you please refer me to where I can find the numbers Amy is talking about? My apologies if they are posted here before!
Thanks!
maria.
ps: I asked two other people knowledgeable in statistics to look at this and I am waiting for their responses as well
****
Maria, I will refer you to the data and explain what I am talking about. Don't hesitate to ask additional questions or request additional data if you think it will be helpful.
The original problem:
According to Johnson and Daviss, when analyzing the different intervention rates of home and hospital:
We compared medical intervention rates for the planned home births with data from birth certificates for all 3 360 868 singleton, vertex births at 37 weeks or more gestation in the United States in 2000, as reported by the National Center for Health Statistics [Births: final data for 2000. National vital statistics reports. Martin JA, Hamilton BE, Ventura SJ, Menacker F, Park MM. Hyattsville, MD: National Center for Health Statistics, 2002;50(5)]
When analyzing the different mortality rate of home and hospital, Johnson and Daviss used a group derived from out of date homebirth studies. I have always thought that was strange. Why not use the neonatal mortality data of the group that served as the comparison for interventions?
I went back and looked at the neonatal mortality data for this group, the EXACT group that Johnson and Daviss felt was the perfect comparison for intervention rates. I did this by reviewing the exact same paper that Johnson and Daviss used... Looking at the raw data we find a death rate of 0.9/1000 (white women, age 20-44, 37+ weeks, 2500+ gm).
The hospital neonatal death rate for white babies at term of 0.9/1000 is not corrected for congenital anomalies, pre-existing medical conditions, pregnancy complications or multiple births. The true rate is substantially lower. Nonetheless, we can make an important comparison. Johnson and Daviss reported a neonatal death rate at homebirth of 2.7/1000 (uncorrected for congenital anomalies, breech or twins). The neonatal death rate in the comparison group THAT THEY USED was less than 0.9/1000.
So now we have an explanation for why Johnson and Daviss used two different comparison groups. They used one group (births in the year 2000) for comparing medical interventions. The neonatal death rate in that exact group was 0.9/1000, less than half the rate of neonatal deaths at homebirth. They suppressed that information by using an entirely different group (drawn primarily from the 1970's and 1980's) instead of using the death rate from the year 2000.
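The arithmetic behind the "almost triple" claim is simple division and can be checked directly; this is a minimal sketch using only the two rates quoted above:

```python
# Rates quoted above, per 1000 live births
homebirth_rate = 2.7  # CPM2000 neonatal death rate (uncorrected for anomalies, breech, twins)
hospital_rate = 0.9   # white women, age 20-44, 37+ weeks, 2500+ g, hospital birth in 2000

# Ratio of the two rates
ratio = homebirth_rate / hospital_rate
print(f"Homebirth vs. hospital neonatal death rate: {ratio:.1f}x")  # → 3.0x
```

Since the hospital figure of 0.9/1000 is itself uncorrected (and so overstated), 3.0x is a floor on the true disparity, which is the point being made above.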
Here's where you can find more about the new explanation:
Johnson and Daviss acknowledge the validity of my criticism.
Johnson and Daviss have recently "re-analyzed" their own data and lowered the homebirth neonatal mortality rate:
Johnson and Daviss: If at first you can't trick them, try, try again.
Labels: Henci Goer, Johnson and Daviss

Johnson and Daviss: If at first you can't trick them, try, try again.
My persistent criticism of Johnson and Daviss for their bait-and-switch in the BMJ 2005 paper is evidently having an effect. In the paper, Johnson and Daviss compared intervention rates for homebirths in 2000 with intervention rates for low risk hospital births in 2000. Then they compared neonatal mortality rates for homebirth in 2000 with .... a bunch of out of date studies extending back 40 years. They deliberately omitted the correct comparison with neonatal mortality rates for low risk hospital birth in 2000 because that would have shown that homebirth with a CPM had a neonatal mortality rate almost triple that of hospital birth.
Since I've exposed that trick, they've searched long and hard for a new way to fool lay people. Now they're going to claim that the neonatal mortality rate for CPM attended homebirth in 2000 was lower than they said it was. They've previewed that approach on their website, and it's time to take the show on the road. Where is the show opening? Anyone who has followed this story will be able to guess the answer: the American Public Health Association Annual Meeting in October 2008. Check it out:
Neonatal mortality and prematurity: Comparison of 5,418 planned home births with full-term hospital births in the USA
Betty-Anne Daviss, MSc, RM, Midwifery Collective of Ottawa, 36 Glen Ave, Ottawa, ON K1S 2Z7, Canada and Kenneth C. Johnson, PhD, Evidence and Risk Assessment, Public Health Agency of Canada, 120 Colonnade Rd, Ottawa, ON K1A 0K9, Canada, 613 730 0282, Ken_LCDC_Johnson@phac-aspc.gc.ca.
We compared the neonatal mortality rate among 5,418 planned homebirths attended by Certified Professional Midwives in the year 2000 (CPM2000 study) to the U.S. National Institutes of Health (NIH) neonatal mortality rate for births in hospital to U.S. non-Hispanic white women of 37 weeks plus gestation. Prematurity rates were also examined for the two populations.
Adjustments were made to ensure that the comparisons were as close as possible to comparing like with like. (my emphasis) This included removal from the CPM2000 study death rate of intrapartum mortality, 3 deaths involving lethal birth defects unlikely to have been carried to term in the hospital population, and 1 death and 286 births among African-American and Hispanic women. After making the necessary adjustments that were possible, the neonatal death rate in both datasets was just under 1 death per 1000. The premature birth rate for the NIH non-Hispanic white births in hospital was 11.3%, more than double the rate for the women who started care with Certified Professional Midwives.
Our conclusions remain unchanged from those in the original article -- the neonatal mortality rate for low risk women in North America using Certified Professional Midwives is similar to that for low risk women in hospital in the U.S., and the intervention rates are much lower. Additionally, higher prematurity is a serious concern for the care of women planning hospital births, because prematurity is associated with higher perinatal mortality and morbidity.
In the original paper, Johnson and Daviss tried to scam people by inflating the neonatal mortality rate in the hospital group. Now that the scam has been exposed, they are going to scam people by simply pretending that the homebirth group had a lower death rate than what they originally claimed.
In other words, Johnson and Daviss originally said that homebirth in 2000 had a neonatal death rate of 2.6/1000 (including congenital anomalies). Now that I've pointed out that the neonatal death rate for low risk hospital births in 2000 was 0.9/1000, they've responded: Did we say the death rate for CPM attended births in 2000 was 2.6/1000 (including congenital anomalies)? Guess what, we just discovered we were wrong. It was actually 0.9/1000. Lucky for us that we figured that out at the same time someone publicly accused us of using the wrong numbers for the hospital group in 2000. Oh, and the hospital neonatal death rate in 2000 was 0.9/1000? What an amazing coincidence!
However, we can apply the same adjustments that Johnson and Daviss applied. According to the 2000 dataset on CDC Wonder, in the group of white women, 37+ weeks, 2500+ gm, with singletons who delivered in the hospital in 2000, we find that there were 1863 deaths, of which 1001 were due to lethal congenital anomalies. That means that the neonatal death rate for hospital birth in 2000 was 0.34/1000 after we performed the EXACT SAME adjustment that Johnson and Daviss performed on the homebirth data. Now that the groups are once again comparable, the neonatal mortality rate for homebirth in 2000 is STILL almost TRIPLE the neonatal death rate for hospital birth in 2000.
Frankly, I think what Johnson and Daviss are doing is reprehensible. They've been caught trying to trick people and instead of apologizing, they've simply switched to a new way of trying to trick people. It's too late, though. The US government is now collecting statistics on homebirth with direct entry midwives and those statistics show homebirth with a DEM to have a neonatal death rate almost triple that of low risk hospital birth.
Addendum: We cannot disregard intrapartum mortality, much as Johnson and Daviss would like to do so. They write on their website:
5 intrapartum deaths need to be removed as the NIH data report only on live births and thus include only neonatal deaths.
Just because they are removed does not mean that they can be ignored. Intrapartum deaths at homebirth must be compared to intrapartum deaths at hospital birth.
At homebirth in 2000, the intrapartum death rate was astronomical, 0.92/1000. We know from other studies that the intrapartum death rate in the hospital is approximately 0.3/1000 including ALL gestational ages and ALL pregnancy complications. The intrapartum death rate for low risk women at term is vanishingly small. For example, during the years I was practicing, I worked at 2 major urban hospitals that had a combined total of approximately 75,000 deliveries. During that time, there was one low risk intrapartum death, and that death was considered a scandal resulting in an investigation and action against the personnel involved.
The bottom line is that Johnson and Daviss can "adjust" the data to their hearts' content, but those "adjustments" must also be applied to the hospital data. When both data sets are treated the same way, the conclusions remain the same. Homebirth has an increased rate of neonatal death almost triple that of hospital birth for low risk women AND homebirth has a much higher rate of intrapartum death than hospital birth.
Labels: Johnson and Daviss

For their next trick, Johnson and Daviss will ...
They must be really desperate, and it's easy to see why. Their own research shows homebirth increases the risk of neonatal death. The latest US statistics show homebirth increases the risk of neonatal death. The MANA statistics show homebirth increases the risk of neonatal death.
In response, Johnson and Daviss are planning to resurrect the incredibly out of date, unpublished SOCIOLOGY dissertation of Peter Schlenzka. I'm not kidding. On Oct. 27, 2008, at an American Public Health Association conference (where else?), they plan to present
Safety of planned out-of-hospital birth similar to low-risk hospital birth in California: A large retrospective cohort study.
Before we address the fact that the study doesn't even show what they claim (of course!), let's look at a few fundamental facts.
1. The study is unpublished. It has never been subjected to peer review. Presumably it has been submitted to many journals and rejected.
2. The study is totally outdated. It refers to California data from almost 20 years ago.
3. It is a SOCIOLOGY dissertation. It has not been read or evaluated by anyone with expertise in obstetrics or public health. We don't even know what the sociology department examiners thought of it.
4. Peter Schlenzka has never published any research of any kind.
5. Peter Schlenzka appears to be unemployed. He describes himself as a private consultant, but I cannot find any evidence of that.
6. As he usually does, Johnson fails to mention his close ties to MANA (Midwives Alliance of North America), including the fact that he is the former Director of Research of MANA.
Johnson and Daviss must be really desperate to even think of presenting this as if it were scientific information.
Now let's look at the dissertation itself. You can download it here. I read the whole thing. It perfectly epitomizes that old saying: "If you can't dazzle them with brilliance, baffle them with BS." It is an absolute horror, using every trick in the book to confuse and obfuscate. The bottom line is that it does NOT show homebirth to be as safe as hospital birth. Despite lots of pious declarations about comparing like with like, Schlenzka NEVER compares homebirth in 1989-1990 with low risk hospital birth in the same years.
I did, though; you knew I would. I took Schlenzka's raw data and calculated mortality rates. Taking the most charitable view, the data shows that homebirth increases the risk of perinatal death from 1.5/1000 to 2.2/1000. In reality, the gap is probably much wider. I can't get a better estimate because, although Schlenzka acknowledges that 48.5% of the hospital group is African American compared to only 1.4% of the homebirth group, he does not tell us how many deaths come from the African American group.
Peter Schlenzka could not get his claims published by any peer reviewed scientific journal. His data has never even been reviewed by anyone with expertise in obstetrics or public health. Johnson and Daviss don't really care about that, since they know no scientist would take them seriously. This is part of a public relations effort to dress up an outdated, unpublished sociology dissertation in the mantle of respectability for the gullible homebirth advocates. They plan to change the designation of this paper ("unpublished dissertation") to something that sounds impressive to those who don't know better ("presented at the APHA").
This is part of an ongoing campaign by professional homebirth advocates to keep the truth from American women. All the existing scientific evidence shows that homebirth increases the risk of neonatal death. The MANA safety data is so damaging to the cause that it must be hidden. The only thing left to do is to try to rehabilitate an out of date piece of junk and pass it off as legitimate to those who can be tricked.
Labels: Johnson and Daviss

Look what Johnson and Daviss are up to now
No, they are not publishing new research. How could they do that? They have no data to show that homebirth is as safe as hospital birth. Instead they are doing 3 things:
1. Selectively using the MANA statistics that are being withheld from the public to influence legislative debates on midwifery and to provide court testimony when midwives are prosecuted
2. Attempting to fend off questions and criticism of their BMJ 2005 study that claims to show that homebirth is as safe as hospital birth but ACTUALLY shows homebirth in 2000 to have a neonatal death rate almost TRIPLE that of hospital birth for low risk women in 2000.
3. Collaborating with Jennifer Block, author of Pushed, to answer questions about the BMJ study coming to her on her website.
How do I know this? They've written about it in the latest Winter 2007-2008 NARM (North American Registry of Midwives) Bulletin. On using the MANA statistics:
The CPM2000 study continues to be accessed from the BMJ website by more than 1,000 different individuals every month. With Wisconsin using the BMJ article in their legislative effort to make the case for the safety of CPM attended out-of-hospital births.
1. A record number of states have turned towards legislation (11 at last count). We have produced documentation to educate agency staff and policymakers for South Dakota, Wisconsin, Indiana, California, Missouri, New York, Minnesota, and Maine so far, and will continue to develop and make presentations when requested.
2. At least 10 midwives are presently under investigation. We have provided testimony for four court cases over the last two years.
3. We presented "Evidence Used, Evidence Ignored: the case of home birth policy," at the American Public Health Association meeting in Washington, D.C. in November.
We want CPMs to know that we are available for presenting state-focused statistics for the purpose of educating agency staff and policy makers and for testimony for individual midwives. We do not charge for this service. It is important to understand that meaningful statistics require more than a simple tabulation of births. They require comparison to a control group and for midwives attending home births, the CPM2000 study serves as the best comparison group for either the individual midwife or for the state as a whole. Thus we are able to provide the midwife and the courts with high quality, statistically valid information on birth outcomes from a highly reputable source that any judge/prosecutor/lawyer can download from the BMJ website. We realize that there are few epidemiologists who can offer this type of support to midwives who are on trial because it is time consuming, generally involves dropping everything else when suddenly asked to produce a report in a very tight time frame on the specific case, and requires specific expertise.
On handling questions about the BMJ study:
Some of you have written emails to us asking questions about the BMJ study. We have responded to these questions by placing a section called "Answers to Questions" on our website at UnderstandingBirthBetter.com.
And on collaborating with Jennifer Block:
Jennifer Block also consults with us periodically, as we were quoted frequently in her new book "Pushed." Because of her popularity, we are posting answers to questions coming to her on the website as well.
As I have written in the past, the Johnson and Daviss BMJ study 2005 is not an impartial study produced by independent researchers. It was undertaken by homebirth advocates in collaboration with MANA, and funded by a homebirth advocacy foundation. It was designed specifically to serve as an evidentiary foundation of a campaign on behalf of direct entry midwifery. There was never any question about what the results would show. They would only show that homebirth was "as safe as hospital birth" even if it required, as it did, withholding information about neonatal mortality in the hospital in 2000 and comparing homebirth to out of date papers extending back decades.
Labels: Johnson and Daviss

Johnson and Daviss acknowledge the validity of my criticisms
Johnson and Daviss have been forced to acknowledge the validity of my analysis of the 2005 BMJ study. They updated their website 2 weeks ago to address my specific criticisms. This is implicit recognition of the fact that comparing the homebirth rates in 2000 to out of date hospital birth studies is invalid.
According to their website
Understanding Birth Better:
... Since our article was submitted for publication in 2004, the NIH has published analysis more closely comparable than was available at that time, and some have tried to use it as a comparison. While we still do not offer the comparison as a completely direct one, as it is the closest we have and the comparison is occurring regardless of our cautions, we offer the following adjustments that have to be made to provide the comparison of the CPM2000 analysis in as accurate a manner as is possible with the published NIH analysis. (my emphasis)
Finally, they are coming to grips with the central issue. Even now, though, they continue to offer disingenuous excuses for their failure to appropriately analyze the data. Consider this claim: "Since our article was submitted for publication in 2004, the NIH has published analysis more closely comparable than was available at that time". However, the relevant data was published in 2002, long before their paper was submitted (Infant Mortality Statistics from the 2000 Period Linked Birth/Infant Death Data Set, published August 29, 2002). Moreover, even before publication of the analysis, Johnson and Daviss had the raw data in their possession. They used that raw data from 2000 to calculate the rates of hospital interventions, so they were fully aware of the mortality data at all times.
As they say in politics, it's not the crime, but the cover up. Johnson and Daviss are now acknowledging that they used inappropriate data for comparison with homebirth, but claiming that the correct data was not available at that time. The relevant data was in their possession the entire time, and it was even released publicly years before they made their erroneous comparisons. It is difficult to imagine a legitimate reason why a professional statistician would deliberately use the wrong statistics for comparison when the right statistics were available and actually in his possession. It seems to me that the only possible explanation is that they knew all along that their study showed that homebirth has an increased risk of preventable neonatal death compared to hospital birth.
Johnson and Daviss also publicly acknowledge that my analysis of the hospital mortality rate in the year 2000 is correct:
Thus a crude comparison of the comparable rates for non-Hispanic white >37 week babies in hospital in the year 2000 would be about 0.91 neonatal deaths/1000 live births ...
That is almost exactly the figure I reached in my analysis of the hospital data. Here is what I wrote in January of 2007 in my post Johnson and Daviss study shows death rate more than double the hospital group:
Looking at the raw data we find 2,824,196 births to white women at term (37+ weeks; see Table 2) and 2,602 deaths of white babies weighing more than 2500 gm (see Table 6), for a death rate of 0.9/1000.
The hospital neonatal death rate for white babies at term of 0.9/1000 is not corrected for congenital anomalies, pre-existing medical conditions, pregnancy complications or multiple births.
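The quoted rate follows directly from those two raw counts; a minimal check using only the figures quoted above:

```python
# Raw counts from the 2000 period linked birth/infant death data set
births = 2_824_196  # births to white women at term (37+ weeks), Table 2
deaths = 2_602      # deaths of white babies over 2500 g, Table 6

# Deaths per 1000 live births
rate_per_1000 = deaths / births * 1000
print(f"Hospital neonatal death rate: {rate_per_1000:.2f}/1000")  # → 0.92/1000
```

Rounded to one decimal place, this is the 0.9/1000 hospital figure used throughout the comparison.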
Having acknowledged the real neonatal death rate in the hospital in 2000 of 0.9/1000, they face a serious problem; their study reported a neonatal death rate at homebirth of 2.6/1000 (uncorrected for congenital anomalies, breech or twins). Once again, they resort to disingenuous and deliberately misleading claims.
Let's look at their efforts to extricate themselves from the inevitable conclusion that homebirth is not as safe as hospital birth and why those attempts are misleading and invalid. Johnson and Daviss claim:
"A crude comparison of the CPM2000 death rate to the neonatal mortality rate among U.S. Non-Hispanic White women with 37 week plus births would also require the following exclusions:
5 intrapartum deaths need to be removed as the NIH data report only on live births and thus include only neonatal deaths"
According to Johnson and Daviss (farther down the page): Intrapartum Mortality - baby who died during labour (before birth). So a true intrapartum death is one in which the baby is born without any sign of life at all, not even one pulsation of the umbilical cord. Yet if you look at the descriptions of the "intrapartum deaths" in the BMJ study, it is clear that some, if not all, of them are misclassified. For example, one baby is even listed as having an initial Apgar score of 1. It is very important to understand that a baby who cannot be resuscitated is NOT an intrapartum death. Unless Johnson and Daviss can show that these babies were born with absolutely no sign of life, and therefore never received birth certificates, we must assume that these are neonatal deaths.
Johnson and Daviss also try to exclude congenital anomalies from the homebirth group, even though they are included in the hospital birth group:
3 neonatal deaths caused by fatal birth defects need to be removed. All three of these deaths would have occurred regardless of whether the birth was planned at initiation of labour to be in hospital or at home.
If congenital anomalies are in the hospital group, they MUST be included in the homebirth group, no matter how much or why Johnson and Daviss wish to exclude them. However, their excuse for excluding them is particularly unpersuasive and disingenuous: "Had these three birth defect deaths occurred among the hospital population in the present medical culture, they would have been far more likely than not to have been induced or terminated before term." This is an absurd claim: fully 25% of the neonatal deaths in the hospital group were due to congenital anomalies. There was actually a lower rate of congenital anomalies in the homebirth group than in the hospital group, not an artificially higher rate.
Finally, they also want to exclude "1 home birth neonatal death that was among the 286 Hispanic and African-American births in the dataset. Both the death and 286 births need to be removed from the comparison as they did not fit the non-Hispanic white women category provided by the NIH." That's perfectly legitimate, but that doesn't mean that we don't need to take that death into account. It simply means that we must compare the death rate among Hispanics and African-Americans at homebirth to the same groups giving birth in the hospital.
The bottom line is that the 5 "intrapartum" deaths and the 3 congenital anomalies CANNOT be removed from the homebirth deaths. The comparable death rate is not 5 among 5,132 but 13 among 5,132 for a homebirth death rate of 2.5/1000. The homebirth death rate is almost triple that of the hospital death rate for low risk white women at term.
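Putting the pieces together, the comparable homebirth rate can be sketched from the counts above (5 "intrapartum" deaths, 3 congenital-anomaly deaths, and the remaining neonatal deaths, 13 in all, among 5,132 births):

```python
# Comparable homebirth figures from the analysis above
homebirth_deaths = 13       # includes the 5 "intrapartum" and 3 anomaly deaths
homebirth_births = 5_132

homebirth_rate = homebirth_deaths / homebirth_births * 1000
print(f"Homebirth rate: {homebirth_rate:.1f}/1000")  # → 2.5/1000

# Uncorrected hospital rate for low risk white women at term, year 2000
hospital_rate = 0.9
print(f"Ratio: {homebirth_rate / hospital_rate:.1f}x")  # → 2.8x
```

Since the 0.9/1000 hospital figure is itself uncorrected for anomalies and complications, 2.8x understates the disparity, hence "almost triple."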
The Johnson and Daviss 2005 BMJ study always showed and continues to show that homebirth has a higher neonatal death rate than hospital birth. Indeed, the rate is almost 3 times higher. Johnson and Daviss deliberately and disingenuously tried to obscure that fact in the original article. They now acknowledge that they used an inappropriate comparison group, yet their explanation is completely unbelievable. They claim that the appropriate data was not available, even though it had been published 2 years before. In addition, they are now making new invalid and misleading claims in an attempt to avoid the inevitable and obvious conclusion that their study showed that homebirth has an increased risk of neonatal death.
Labels: Johnson and Daviss

Johnson & Daviss can't answer my criticism
Johnson and Daviss have been stung by my criticism. They have created a website to answer the questions I have raised. However, they can't answer the most important question.
Why didn't they compare homebirth in 2000 with hospital birth of low risk women in 2000? I know why. The neonatal mortality rate for low risk white women at term in the year 2000 was only 0.72/1000. So the neonatal death rate in their homebirth study is 3-4 times HIGHER than the rate for the comparable risk group in the hospital in that year.
The basic point is that homebirth in 2000 had a higher neonatal death rate than hospital birth of low risk women in 2000, and Johnson and Daviss left that out of the study. Keep that fact in mind as we analyze their response.
How do I know that Johnson and Daviss are attempting to respond to my criticism? Understanding Birth Better.com was created on February 8, 2007. My first post on the topic,
Johnson and Daviss study shows death rate more than double the hospital group, was posted on January 12, 2007. I followed up with further analysis,
More finely grained results for neonatal mortality in 2000, on January 16, 2007. Johnson and Daviss recognize that my criticism of their study is truly serious and essentially invalidates their conclusions. They are attempting to respond, but as I said above, they do not address the central issue: they left out the only comparison that really matters, the comparison between homebirth in 2000 and low risk hospital birth in 2000.
I would draw your attention to several aspects of their "response".
1. It is written to obfuscate and confuse. This is a classic tactic. They are trying to dazzle their supporters with "scientese" that means nothing.
2. It is heavily padded with irrelevant information. No one asked and no one cares why the study was published in BMJ, yet the bulk of the beginning of the response includes this information.
3. It includes the apples-and-oranges "riff" created in conjunction with Henci Goer, who also could not respond to my criticism of the study,
What led up to Henci Goer's refusal to debate.
4. Johnson & Daviss recognize that their failure to disclose conflicts of interest calls the results of the study into question. As I wrote in
Research and special interests/the BMJ 2005 study:
Therefore, I was distressed to find that Johnson and Daviss, authors of the 2005 BMJ study that was the largest study of homebirth to date, are NOT independent researchers. In the paper, Johnson describes his professional position as "senior epidemiologist, Surveillance and Risk Assessment Division, Centre for Chronic Disease Prevention and Control, Public Health Agency of Canada", but he neglected to mention that he holds another position: head of the MANA Statistics and Research Committee. In fact, Johnson and Daviss have been passionate homebirth advocates for many years, long before they embarked on the study. Daviss, who is Johnson's wife, is a homebirth midwife. Furthermore, the study was not funded by an academic institution or a government agency. Rather, it was funded by Foundation for the Advancement of Midwifery, a homebirth advocacy group.
So using money from a homebirth advocacy group, NARM, a homebirth advocacy group, hired homebirth advocates Johnson and Daviss to produce a study on homebirth. The conclusion appears to be predetermined. When an industry hires known allies to do a study about that industry, the results are going to be favorable.
Let's look at what Johnson & Daviss have to say in their defense:
The study was not commissioned by any national or state midwifery group. However, we are greatly indebted to the North American Registry of Midwives for their role in requiring all CPMs to participate in the data collection as a requirement for renewal of their CPM credential.
Clearly, NARM collaborated with Johnson & Daviss to create the study. Midwives were required to participate or their CPM credential would not be renewed. NARM was, at a minimum, an undisclosed partner to Johnson and Daviss.
"In fact, the risk to NARM, should the outcomes reflect negatively on the care by CPMs, was substantial and significant."
Oh, no it wasn't. If the results were negative, the study would not have been submitted for publication. As it is, the authors had to resort to a deceptive and invalid comparison in order to fabricate a positive conclusion. Moreover, the same data has been collected in every year since 2000. Not only has none of it been published, none of it is available to the public. It will only be released to persons who promise to use it for "the benefit of midwifery" and who sign a legal confidentiality requirement that prohibits them from sharing the data with anyone else. This strongly suggests that the existing data shows that homebirth is not as safe as hospital birth.
"Initial and primary funding was obtained from the Benjamin Spencer Fund, a small, private foundation with program interests in the environment, women and families and reproductive rights."
And the remainder of the funding was obtained from The Foundation for the Advancement of Midwifery, as the authors themselves disclosed in the paper.
In other words, all my assertions are correct. Johnson & Daviss are known, longtime public advocates of homebirth. Johnson has been professionally associated with MANA. NARM was intimately involved in the creation of the study and enforced the participation of the midwives. The study was funded by money from a midwifery advocacy group.
In summary, Johnson & Daviss recognize that my criticism of their study is legitimate and serious. They have set up a website to respond. However, they fail to address the key issue. The neonatal death rate at homebirth in 2000 is 3-4 times higher that the neonatal death rate in the hospital for comparable risk women. They don't even mention it. Their study NEVER showed that homebirth was as safe as hospital birth and should not be quoted as such. Indeed, as I have maintained, it shows that homebirth has an excess rate of preventable neonatal death in the range of 1-2/1000, which makes it consistent with other existing homebirth research.
Labels: Johnson and Daviss

More finely grained results for neonatal mortality in 2000
I have been quoting a neonatal mortality rate for white women at term in 2000 of 0.9/1000. This included both singletons and multiples. CDC Wonder allows us to generate detailed statistics on neonatal deaths. Therefore, I could find out the neonatal death rate for white women at term who were carrying singletons.
According to CDC Wonder, in 2000 there were:
2,725,633 births to white women with singleton pregnancies at 37-42+ weeks
1,970 deaths within that group
for a neonatal death rate of 0.72/1000
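The 0.72/1000 figure is simple division of the CDC Wonder counts; as a quick sanity check of the arithmetic (using only the numbers quoted above):

```python
# Neonatal death rate per 1,000 births, from the CDC Wonder counts
# quoted above: white women, singleton pregnancies, 37-42+ weeks, 2000.
births = 2_725_633
deaths = 1_970

rate_per_1000 = deaths / births * 1000
print(round(rate_per_1000, 2))  # → 0.72
```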
We are coming ever closer to the true neonatal mortality rate in the appropriate comparison group. Johnson and Daviss found a neonatal mortality rate of 2.7/1000 at homebirth of pregnancies at term (including congenital anomalies, breeches and twins). The hospital neonatal mortality rate for white women with singleton pregnancies at term was 0.72/1000.
The neonatal mortality rate at homebirth in the Johnson and Daviss study was more than 3 and almost 4 times higher than in the hospital. This is despite the fact that the homebirth group specifically excluded women with pre-existing medical conditions and pregnancy complications, while the hospital group still contains these women.
Labels: Johnson and Daviss, neonatal mortality

Johnson and Daviss study shows death rate more than double the hospital group
Anyone who has followed the discussions on Homebirth Debate knows that I have repeatedly criticized the Johnson and Daviss study of homebirth,
Outcomes of planned home births with certified professional midwives, because it used the wrong comparison group. Now I have found the death rate for the correct comparison group by using the same data that Johnson and Daviss used.
According to Johnson and Daviss, when analyzing the different intervention rates of home and hospital:
We compared medical intervention rates for the planned home births with data from birth certificates for all 3 360 868 singleton, vertex births at 37 weeks or more gestation in the United States in 2000, as reported by the National Center for Health Statistics [Births: final data for 2000. National vital statistics reports. Martin JA, Hamilton BE, Ventura SJ, Menacker F, Park MM. Hyattsville, MD: National Center for Health Statistics, 2002;50(5)]
When analyzing the different mortality rates of home and hospital, Johnson and Daviss used a group derived from out-of-date homebirth studies. I have always thought that was strange. Why not use the neonatal mortality data of the group that served as the comparison for interventions?
I went back and looked at the neonatal mortality data for this group, the EXACT group that Johnson and Daviss felt was the perfect comparison for intervention rates. I did this by reviewing the exact same paper that Johnson and Daviss used. The paper is 105 pages long and has been divided into subsets for ease of research. The subset on neonatal mortality is
Infant Mortality Statistics from the 2000 Period Linked Birth/Infant Death Data Set. Looking at the raw data we find:
2,824,196 births to white women at term (37+ weeks), see Table 2, and
2,602 deaths of white babies weighing more than 2500 g, see Table 6, for
a death rate of 0.9/1000.
The hospital neonatal death rate for white babies at term of 0.9/1000 is not corrected for congenital anomalies, pre-existing medical conditions, pregnancy complications or multiple births. The true rate is substantially lower. Nonetheless, we can make an important comparison.
Johnson and Daviss reported a neonatal death rate at homebirth of 2.7/1000 (uncorrected for congenital anomalies, breech or twins). The neonatal death rate in the comparison group THAT THEY USED was less than 0.9/1000. So now we have an explanation for why Johnson and Daviss used two different comparison groups. They used one group (births in the year 2000) for comparing medical interventions. The neonatal death rate in that exact group was 0.9/1000, less than half the rate of neonatal deaths at homebirth. They suppressed that information by using an entirely different group (drawn primarily from the 1970s and 1980s) instead of using the death rate from the year 2000.
Labels: Johnson and Daviss, neonatal mortality

Additional data: hospital death rates are lower than claimed by Johnson & Daviss
I have returned repeatedly to the fact that Johnson & Daviss used the wrong comparison (cohort) group in their study in order to make the neonatal mortality rate of 2/1000 look better by comparison. They claim a neonatal death rate of 1.7/1000 for low risk women intending to deliver at home. They removed congenital anomalies from the deaths, but for the purposes of the following comparison, we need to put them back. That's because US statistics for neonatal mortality do not remove congenital anomalies. If you add back the 3 babies who died of congenital anomalies, the neonatal death rate was 2.3/1000 among babies delivered by a CPM in 2000.
Now let's take a look at
US Birth Weight/Gestational Age-Specific Neonatal Mortality: 1995–1997 Rates for Whites, Hispanics, and Blacks. This paper breaks down neonatal mortality rates by race, by gestational age and by birth weight. Therefore, we can find out the neonatal mortality rate for white women who delivered a single baby at term. The neonatal mortality rates range from 0.8/1000 at 40-41 weeks up to 1.1/1000 at 42-43 weeks.
So, the neonatal death rate at homebirth in the Johnson and Daviss study was 2.3/1000. The neonatal death rate in this study was less than 1.1/1000. That means that the neonatal death rate in the Johnson and Daviss study was more than double that of white women delivering a single baby at term between 1995-1997.
Keep in mind that the hospital group in this study includes high risk women including those with pre-existing medical conditions, pregnancy complications and babies in the breech or transverse position. The neonatal death rate would be much lower if high risk women were excluded. Moreover, these numbers were from 1995-1997. The comparable numbers from 2000 are almost certainly lower.
So, at a minimum, the neonatal death rate at homebirth in the Johnson and Daviss study was more than double the neonatal death rate in the hospital.
Labels: Johnson and Daviss, neonatal mortality

Confounders in cohort studies
In a previous
post, I have used the Rochon article on cohort studies to evaluate the validity of the
Johnson and Daviss homebirth study. The initial analysis showed that the study suffers from selection bias. The second article in the series, Reader's guide to critical appraisal of cohort studies:
2. Assessing potential for confounding, gives us more information about the potential sources of selection bias, the confounders.
According to the article:
For a characteristic to be a confounder in a particular study, it must meet two criteria. The first is that it must be related to the outcome in terms of prognosis or susceptibility...
The second criterion that defines a confounder is that the distribution of the characteristic is different in the groups being compared.
The article highlights three questions that must be asked to identify confounders in a cohort study:
Has there been a systematic effort to identify and measure potential confounders?
Is there information on how the potential confounders are distributed between the comparison groups?
What methods are used to assess differences in the distribution of potential confounders?
Let's ask these three questions about the Johnson and Daviss study.
1. Has there been a systematic effort to identify and measure potential confounders?
The article describes what an effort to identify and measure potential confounders looks like:
Information on the distribution of potential confounders in the intervention and comparison groups is usually provided in the first table of the paper. Confounding is a problem only if these characteristics are unevenly distributed between the intervention and comparison groups.
Johnson and Daviss do provide a table at the beginning of their study (Table 1). The table shows us characteristics of the homebirth group and the group of all singleton, vertex births at term in the US in 2000. It does not provide any information about the characteristics in the comparison group derived from out-of-date homebirth studies.
2. Is there information about how the potential confounders are distributed between the comparison groups?
Table 1 shows us that various confounders are distributed quite differently between the two groups that are listed. For example, African Americans make up only 1.3% of the homebirth group, but 14.1% of the comparison group. We know that the neonatal mortality rate for African Americans is 2-3 times higher than for white women. Therefore, this is a very serious confounder.
There are potential confounders that are not addressed in the table. For example, pre-existing medical conditions and pregnancy complications have a profound effect on neonatal mortality. Johnson and Daviss do not provide us with any information about these important confounders.
Of course, they provide no information at all about the comparison group derived from out-of-date homebirth studies and this is a serious omission.
3. What methods are used to assess differences in the distribution of potential confounders?
None. Johnson and Daviss made no effort to assess the distribution of potential confounders. According to Rochon: "Perhaps the most common strategy to identify important imbalances in individual confounders between intervention and comparison groups is to use significance tests such as χ² tests (for dichotomous variables) or t tests (for continuous variables)."
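As a sketch of the kind of check Rochon describes, here is a minimal χ² test in pure Python on one confounder from Table 1, the proportion of African American women. The counts are illustrative only: they are back-calculated from the percentages quoted above (1.3% vs 14.1%) and an assumed homebirth group size of about 5,400; they are not taken from the paper's tables.

```python
# Sketch of a Pearson chi-square test for one potential confounder (race)
# arranged as a 2x2 table. Counts are ILLUSTRATIVE, back-calculated from
# the percentages quoted above and an ASSUMED homebirth group size.

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

home_aa, home_other = 70, 5_348            # homebirth group, ~1.3% African American
hosp_aa, hosp_other = 473_882, 2_886_986   # comparison group, ~14.1%

stat = chi_square_2x2(home_aa, home_other, hosp_aa, hosp_other)

# 3.84 is the 5% critical value for chi-square with 1 degree of freedom;
# a statistic far above it means the two groups differ markedly on race.
print(stat > 3.84)  # → True
```

This is exactly the analysis a cohort study is expected to report for each characteristic in its baseline table, and the kind of analysis that is missing from the paper.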
The bottom line is that the limited information that Johnson and Daviss provide shows the presence of serious confounders, but no attempt was made to assess the differences in distribution of these confounders and their impact. Every cohort study must include this analysis, and the fact that it is missing renders the conclusions invalid.
Labels: Johnson and Daviss

Evaluating cohort studies
I have repeatedly criticized the
Johnson and Daviss study for using the wrong comparison group, rendering the conclusions of the study invalid.
To understand this problem, you need to know about the appropriate design of a cohort study. An excellent series of articles in the British Medical Journal addresses this very question. These articles are completely independent of the issue of homebirth; they apply to the design of any cohort study. The first article in the series is
Reader's guide to critical appraisal of cohort studies: 1. Role and design by Rochon et al. I encourage everyone to read the whole article, and I have quoted the relevant portions here:
In cohort studies care must be taken to minimise, assess, and deal with selection bias. A comprehensive approach is needed that includes the selection of appropriate comparison groups, the identification and assessment of the comparability of potential confounders between those comparison groups, and the use of sophisticated statistical techniques in the analysis...
Ideally, the comparison group in the cohort study should be identical to the intervention group, apart from the fact that they did not receive the intervention... Part of the art of designing a cohort study is choosing comparison groups that approach this ideal in order to minimise selection bias while maintaining clinical relevance...
The authors offer three questions that must be answered when determining the validity of a cohort study:
What comparison is being made?
Published studies may include more than one type of comparison, but the focus of any appraisal of a cohort study is on an individual comparison between an intervention group and a comparison group in a defined population. A well written study should contain a clear definition of why the two groups were selected and how they were defined. This information is essential for assessment of clinical relevance and potential for selection bias.
Does the comparison make clinical sense?
...Cohort studies should not only describe the populations being compared but also include a discussion of the clinical context for that comparison and provide a justification for the comparison. Readers of these studies should determine if the study makes a comparison that is realistic and relevant to their decision needs.
What are the potential selection biases?
Selection bias occurs when there is something inherently different between the groups being compared that could explain differences in the observed outcomes.
It is important to keep in mind the effect the choice of comparison groups will have on potential selection bias when evaluating a cohort study... [A] form of selection bias, referred to as channelling bias or confounding by indication, occurs when patients are assigned to one intervention or another on the basis of prognostic factors and is a key issue in cohort studies.
Readers should recognise the potential for selection bias in all cohort studies and carefully consider possible sources of bias...
So let's ask these three questions about the Johnson and Daviss study.
What comparison is being made?
Johnson and Daviss have constructed a prospective cohort study to determine the effect of homebirth on mortality and intervention rates. One cohort contains all women who delivered at home with a CPM in 2000. This is compared to women who delivered in the hospital. According to Rochon et al., "Ideally, the comparison group in the cohort study should be identical to the intervention group, apart from the fact that they did not receive the intervention". So the comparison group MUST be women with the same level of risk who delivered in the hospital in the same year.
Does the comparison make sense?
Yes, it does make sense to compare women who delivered at home with a CPM to women of the same risk level as women who delivered in the hospital in the same year.
What are the potential selection biases?
According to Rochon et al., "Selection bias occurs when there is something inherently different between the groups being compared that could explain differences in the observed outcomes."
The Johnson and Daviss study suffers from selection bias. The patients in the homebirth group differ from the patients in the comparison groups on the basis of prognostic factors. Pre-existing medical and obstetric complications are prognostic factors for both mortality outcomes and intervention outcomes. ONLY low risk women can be in the homebirth group. However, women in the comparison group are of all risk levels. Because of this, the comparison groups that Johnson and Daviss created (one group consisting of all women who gave birth at term from the vertex presentation in 2000, and the second group fished from out-of-date homebirth papers) suffer from selection bias. Therefore, any conclusions drawn from a direct comparison of either of those groups with the homebirth group are invalid.
Labels: Johnson and Daviss

Another serious problem with the Johnson and Daviss study
I have concentrated my focus on the faulty neonatal mortality statistics in the
Johnson and Daviss study, since the study is used most often to claim that homebirth is as safe as hospital birth. This is not the only inappropriate comparison in the study. The problem with the intervention rate comparison is equally serious, and in some ways, perhaps, easier for the layperson to understand.
When looking at the rate of various interventions, Johnson and Daviss use an entirely different comparison group. This, in itself, is a problem: a study should have a single control (cohort) group, not two. Even more important, this comparison group is also the wrong group, and that has a profound effect on the results and on the conclusions.
According to Johnson and Daviss:
We compared medical intervention rates for the planned home births with data from birth certificates for all 3 360 868 singleton, vertex births at 37 weeks or more gestation in the United States in 2000, as reported by the National Center for Health Statistics, which acted as a proxy for a comparable low risk group.
The problem is that all women who gave birth from the vertex position at term are NOT the appropriate comparison group. The appropriate comparison group is all LOW RISK, WHITE women giving birth from the vertex presentation at term. I suspect that anyone can see that including high risk women, women with pre-existing medical conditions, and women of other races is not acceptable.
Where can we find the information that we need? Again
MacDorman et al. come through for us, having already analyzed the data from all birth certificates between 1998-2001. They don't report each year individually, but an average of the years 1998-2001 is likely to be pretty close to the figure for the year 2000. The C-section rate for low risk white women was 4.7%. How do I know that? MacDorman et al. report 3,571,332 births to low risk white women from 1998-2001. Of these births, 166,814 were C-sections. Doing the division yields a C-section rate of 4.7%.
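That division can be checked in a couple of lines, using only the MacDorman counts quoted above:

```python
# C-section rate for low risk white women, 1998-2001
# (counts as quoted above from MacDorman et al.).
births = 3_571_332
c_sections = 166_814

rate_percent = c_sections / births * 100
print(round(rate_percent, 1))  # → 4.7
```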
Johnson and Daviss left that information out of their study. In table 3, they report a C-section rate in the homebirth group of 3.7%. Then they disingenuously compare that rate to the C-section rate for "Singleton, vertex births at 37 weeks gestation in US" of 19%, and to "Survey of singleton births in all risk categories in US 2000-2001" of 24%. However, as we have just seen the rate for the APPROPRIATE comparison group is only 4.7%.
I think it is pretty easy for anyone to understand what has gone on here. Johnson and Daviss have used a completely unacceptable comparison group to make the results in the homebirth group look good. They assumed that most people would not read the paper closely enough to realize they were using the WRONG group, and indeed most people did not notice.
There are two take-home messages from this exercise. First,
the homebirth group's C-section rate of 3.7% was not substantially lower than the C-section rate of 4.7% in the hospital group. Second,
Johnson and Daviss published a grossly misleading statistical analysis. The true analysis was easy to do. They were already in possession of all the necessary data.
I have remarked before that this study was commissioned by a homebirth advocacy group and funded by yet another homebirth advocacy group. The authors themselves had been passionate and public homebirth advocates long before they undertook the study. The entire point of the study was to convince others that homebirth was safe. The ONLY way that Johnson and Daviss could accomplish that is by rigging the data. They used inappropriate comparison groups because using the correct comparison groups would have demonstrated what we can now see to be true. Homebirth is FAR more dangerous for babies (a neonatal death rate of 1.7-2/1000 at home compared to 0.3/1000 in the hospital) and has only a marginally lower C-section rate.
Labels: Johnson and Daviss, neonatal mortality

What is the neonatal death rate in the correct comparison group?
Johnson and Daviss described their
study as a prospective cohort study. They are comparing the cohort of all women who gave birth at home with a direct entry midwife in the year 2000 to the cohort of women who gave birth in the hospital. In order for the results to be valid, the cohorts must be matched very carefully in all possible variables. The correct cohort with which the homebirth group SHOULD have been matched is low risk white women with vertex babies at term, and from which congenital anomalies have been excluded (since Johnson and Daviss removed congenital anomalies from the homebirth group).
As we have seen, Johnson and Daviss did not use the correct group for comparison. They created their own group by amalgamating data from a series of out of date homebirth studies. This is an unacceptable comparison and renders the conclusion of their study invalid. What would the correct comparison have looked like?
In order to make the correct comparison, we would need to find the data for low risk white women with vertex babies at term who gave birth in the hospital in the year 2000. Then we would need to subtract out the deaths caused by congenital anomalies. All the relevant data is available from the National Center for Health Statistics in Maryland. Although it would take some work, the exact figures can be calculated. We know that Johnson and Daviss had access to this data, because they used it to create the comparison group for their conclusions about interventions at homebirths.
We are spared from doing the tedious calculations because someone has already done it for us.
MacDorman et al. did these calculations as part of their study of neonatal mortality and C-sections. I have argued that their numbers are artificially high because the group is higher risk than they claim. I'll ignore that problem for the moment, and use their numbers. The real numbers are undoubtedly lower.
MacDorman et al. performed their calculations on all births from the years 1998-2001. The neonatal mortality rates were similar in each of those years, so the combined figure can serve as an acceptable proxy for neonatal mortality in 2000. Keep in mind that we are not using someone else's control group, as Johnson and Daviss did. We are simply using the same birth certificate data that anyone must use.
MacDorman et al. reported 3,571,332 births to low risk white women with vertex babies at term. Within that group, there were 2,344 neonatal deaths, for a total neonatal death rate of 0.65/1000. MacDorman et al. also found that 55% of the neonatal deaths were caused by congenital anomalies. We need to subtract the congenital anomalies from our group because Johnson and Daviss subtracted the congenital anomalies from their group. The final neonatal death rate is 0.3/1000. As I have remarked elsewhere, this number is almost certainly higher than the true number since the MacDorman "low risk" group undoubtedly included some high risk women.
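That two-step calculation (overall rate, then removing the 55% of deaths attributed to congenital anomalies) can be verified in a few lines from the counts quoted above:

```python
# Neonatal death rate for low risk white women with vertex babies at term
# (MacDorman et al., 1998-2001 counts quoted above), then corrected by
# removing the 55% of deaths attributed to congenital anomalies.
births = 3_571_332
deaths = 2_344
anomaly_fraction = 0.55

overall_per_1000 = deaths / births * 1000          # ≈ 0.66, quoted above as 0.65
corrected_per_1000 = overall_per_1000 * (1 - anomaly_fraction)
print(round(corrected_per_1000, 1))  # → 0.3
```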
The neonatal death rate for low risk white women giving birth to vertex babies at term (and excluding congenital anomalies) is
0.3/1000 or less. According to Johnson and Daviss:
After we excluded four stillborns who died before labour but whose mothers still chose home birth, and three babies with fatal birth defects, five deaths were intrapartum and six occurred during the neonatal period (see box). This was a rate of 2.0 deaths per 1000 intended home births. The intrapartum and neonatal mortality was 1.7 deaths per 1000 low risk intended home births after planned breeches and twins (not considered low risk) were excluded.
I am not convinced that removing the breeches and twins from the group is warranted, but for the sake of argument, I will accept their lower rate of 1.7/1000.
The bottom line is this: Johnson and Daviss quote a neonatal death rate at homebirth of 1.7/1000. The neonatal death rate in the comparable hospital group is 0.3/1000 or less. The appropriate conclusion drawn from the Johnson and Daviss data is that
the neonatal death rate at homebirth is more than 5 times higher than the neonatal death rate at hospital birth! Not only have Johnson and Daviss NOT shown that homebirth is as safe as hospital birth, they have actually shown that homebirth is considerably more dangerous for babies than hospital birth.
I have been participating in a similar discussion of this issue over on the Lamaze blog. Neither Henci Goer nor the 6 other homebirth advocates have responded to a direct, yes or no question about the control group. I take that as an acknowledgement that Johnson and Daviss used an inappropriate comparison (cohort) group. I have now posted the above information on the Lamaze blog to demonstrate that the neonatal death rate at home in the study is more than 5 times higher than the neonatal death rate in the hospital. It will be interesting to see if there is an acknowledgement of the true numbers.
Labels: Johnson and Daviss, neonatal mortality

The authors of homebirth papers do not seem to engage with scientific peers
It strikes me as notable that the authors of the most prominent homebirth studies have never publicly addressed their scientific peers about the issue of homebirth. They restrict themselves to speaking before, and writing for, the midwifery or lay audience, most of whom are not able to evaluate the papers for statistical or scientific accuracy.
This is especially surprising when you consider that many of these authors are statisticians or epidemiologists, not midwives. So, for example, if the Johnson and Daviss paper is supposed to be such a breakthrough in the scientific study of homebirth, why hasn't it ever been presented at a conference of statisticians, epidemiologists or obstetricians? It makes me wonder whether the homebirth authors fear that their papers will be indefensible to an audience of scientifically literate peers.
I know that I am not able to do an exhaustive search into the issue, so I would be grateful for information from anyone who knows about any instances in which the prominent homebirth authors have ever put themselves in a position where they would be required to take questions from their scientific peers. I'm not talking about midwifery conferences or conferences of homebirth advocates. I am asking only about conferences on the topics of statistics, epidemiology or obstetrics. I'm not talking about the American Public Health Association, either, since that organization appears to take stands on political issues (such as wars) as opposed to purely public health issues.
Labels: Johnson and Daviss

A colleague of Johnson & Daviss has been defending their study on this board
I have written repeatedly about the serious flaws in the Johnson & Daviss 2005 study of homebirth. In addition to the serious methodological flaws (two different control groups, both of them inappropriate), I recently learned that Johnson & Daviss were never the unbiased researchers that they represented themselves to be. Both are longtime homebirth advocates closely associated with professional homebirth organizations.
Johnson and Daviss were invited to be a part of this discussion by a participant. They declined as I predicted that they would. Why would they willingly acknowledge a conflict of interest in a public forum? Furthermore, they have never been publicly questioned by another physician (to my knowledge) and evidently weren't about to start now.
What I didn't know until yesterday is that a professional colleague has been posting in their defense without revealing her identity. Wendy CPM has been on the Board of Research of MANA with Ken Johnson, and may have actually been involved in the study itself. She appeared on this board only after the invitation to participate was extended to Johnson and Daviss.
I will take this opportunity to extend a public invitation to Johnson and Daviss to address the methodological and ethical issues that I have raised. Surely they could do a better job.
Labels: Johnson and Daviss

Research and special interests/the BMJ 2005 study
If I find that a study is funded by special interest money, I immediately become concerned that the results are biased. For example, when I hear about a study that claims that the Arctic wilderness will be "improved" by oil drilling, I am naturally suspicious. If I find out that the study was conducted by scientists known to be associated with the oil industry, and that the study was funded by an oil company, I discount the results. The study was designed from the outset to produce a pre-determined conclusion, one that meshed with the interests of the researchers and the oil company.
Therefore, I was distressed to find that Johnson and Daviss, authors of the 2005 BMJ study that was the largest study of homebirth to date, are NOT independent researchers. In the paper, Johnson describes his professional position as "senior epidemiologist, Surveillance and Risk Assessment Division, Centre for Chronic Disease Prevention and Control, Public Health Agency of Canada", but he neglected to mention that he holds another position: head of the MANA Statistics and Research Committee. In fact, Johnson and Daviss have been passionate homebirth advocates for many years, long before they embarked on the study. Daviss, who is Johnson's wife, is a homebirth midwife. Furthermore, the study was not funded by an academic institution or a government agency. Rather, it was funded by the Foundation for the Advancement of Midwifery, a homebirth advocacy group.
So NARM, a homebirth advocacy group, used money from another homebirth advocacy group to hire homebirth advocates Johnson and Daviss to produce a study on homebirth. The conclusion appears to be predetermined. When an industry hires known allies to do a study about that industry, the results are going to be favorable.
That explains why a study done by an epidemiologist failed to follow the standards of epidemiology. Johnson and Daviss sliced and diced the data in every possible way. They went beyond that by selecting comparison groups that were not matched for risk, since they almost certainly knew that low risk women have a lower neonatal mortality rate than the homebirth group.
Their own comments are quite illuminating. In a
NARM bulletin from summer 2005, Johnson and Daviss actually advise midwives how to generate publicity for the paper, and how to spin the data. This is not what you would expect from independent researchers.
For example:
We invite you, if you have not already done so, to contact your local radio stations and newspapers this week about the study...
When contacting the media take the time to educate them on the CPM credential and make sure they know that NARM, MEAC, CfM, MANA, and NACPM have information on these maternity care providers.
On spinning the data:
We purposely reported transfers as: “over 87% of mothers and neonates did not require transfer to hospital,” and most of the transfers were for lack of progress, because the mother was tired or wanted pain relief. This kind of detail is especially important when communicating with the media. For example “over 87% of the mothers…” conveys a sense of confidence, while “thirteen per cent of women still had to be transferred,” which one television broadcast did (even though it was overall a positive study) focuses on the negative end of the curve.
And:
Policy Implications: The study suggests that legislators and policy makers should pay attention to the fact that this study supports the American Public Health Association’s resolution to increase out of hospital births attended by direct entry midwives. The American College of Obstetricians and Gynecologists still opposes home birth, but has no valid evidence to support this position. The Society of Obstetricians and Gynecologists of Canada and several provinces have written statements either acknowledging that women have the right to choose their place of birth or supporting it.
For continuing information on creative and effective ways to highlight this study in the policy arena, consider joining the BirthPolicy listserve. It is a great resource for midwifery policy discussion. Plus list moderators Katie Prown and Steve Cochran have their own personal tips on how to become more media savvy.
Needless to say, "policy implications" which dovetail with the authors' pre-existing advocacy are not the typical purview of a truly independent researcher. Furthermore, it is my understanding from reading the bulletin that this letter was unsolicited. It was the authors' idea to offer tips on how to publicize the article, how to spin the data, and how to exploit the paper for policy purposes.
I always said that this paper was poor, since it actually shows multiple preventable neonatal deaths at homebirth, and a neonatal death rate that is approximately double the neonatal death rate for low risk groups in the hospital. Now I know why the paper is poor. This paper was produced at the request of a special interest group, by special interest advocates, using special interest money. Not only is the conclusion of homebirth safety unjustified by the data in the actual paper, but the involvement of a special interest group and special interest money renders the paper ethically suspect.
At a minimum, the authors should have been required to disclose their ongoing association with homebirth advocacy organizations and the funding from a homebirth special interest group. Readers of the paper deserve to have this information.
Labels: Johnson and Daviss
