I've mentioned a couple of times now that I know the REACH data aren't very good (certainly not good enough to use for any public health planning), despite my two months of effort to make them otherwise. The question that should arise, as with any research project, is "How do you know?" It's a particularly thorny question in this case, because REACH is effectively a cross-sectional survey of an entire population (one block's worth of villages). In theory, it's a complete census. So how could I tell if it's got holes in it? For instance, up to this point in the year 2009, REACH tells me that I've had 765 live births and 33 infant deaths, giving me a crude birth rate of 9.7 per 1,000 population and an infant mortality rate of 43.1 per 1,000 live births. (For reference, the infant mortality rate of the United States is about 6 babies per 1,000 live births.)
The answer, perhaps unsurprisingly, is to find similar entities and compare against them. In this regard, the Government of India has helped us out by constructing the National Family Health Survey and then releasing NFHS reports for the various states of India. Of course, the problem is that their "rural Rajasthan" figure is for all of Rajasthan, from the best to the worst. In theory, this district, where child development services are done by an NGO that cares a bit more about the average person's well-being, should be doing better than the average.
Thankfully, said NGO is also linked to a health research institute, whose services it can commission to study its own activities. From that, we have an estimate of the rates for my specific block of villages, as of January 2007 -- BUT, that estimate turns out to have less-than-ideal methodology and not to have tracked some rather important indicators, meaning it has to be treated with a fair-sized pinch of salt.
At any rate, depending on which of those external indicators you trust, I should be seeing between 1400 and 2200 births thus far based on my population -- roughly two to three times what I've got. I should also have between 50 and 140 infant deaths. The mortality rate, on the other hand, should be about where we've pegged it, telling us that we've not got under-reporting of deaths or births in isolation, we've got a generalized problem of under-reporting in ALL our data. The potential causes of that include confusion (our forms can be complex, and not all are in Hindi), laziness (it's rather tedious work copying information into them), bad data entry (both of the above apply to our data entry techs as well as to our female field workers), or a failure on our part to explain what it is we want. Probably, some combination of the above.
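For the curious, the arithmetic behind that reasoning is simple enough to write down. This is just a back-of-the-envelope sketch using the figures quoted above; the comparison surveys' own birth rates aren't reproduced here, so the expected-birth range stands in for them, and the "implied population" is derived from REACH's own crude birth rate rather than being an official count.

```python
# Back-of-the-envelope completeness check for the REACH counts,
# using only the figures quoted in the post above.

observed_births = 765        # live births recorded by REACH so far in 2009
observed_infant_deaths = 33  # infant deaths recorded by REACH
crude_birth_rate = 9.7       # per 1,000 population, per REACH

# Population implied by REACH's own numbers (CBR = births / population * 1000)
population = observed_births / crude_birth_rate * 1000  # roughly 79,000

# Expected births per the external surveys (the range quoted above)
expected_low, expected_high = 1400, 2200
completeness_low = observed_births / expected_high   # worst case, ~0.35
completeness_high = observed_births / expected_low   # best case, ~0.55

# The infant mortality rate is a ratio of two under-reported counts,
# so it can still come out "about right" even when both numerator and
# denominator are missing events at similar rates.
infant_mortality_rate = observed_infant_deaths / observed_births * 1000  # ~43.1
```

The last line is the key point: a plausible-looking rate is no proof of complete data, because proportional under-reporting of both deaths and births cancels out in the ratio.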
And that, in a nutshell, is what I've spent the past two months doing -- working out the evidence that problems exist, showing it to others, letting them shout at underlings, and trying to channel that shouting into something resembling progress. It's a little bit of a shaky thing to be doing, as all of this is reasoning based on low-grade math, and even the comparison data can't truly be trusted. Still, it's better than just plowing ahead assuming these REACH data are correct, which would have led us into serious deep camel dung.