Earlier, I wrote about mortality statistics coming out of Libya to try to shed some light on where these numbers come from, what some of the challenges are in obtaining them, and why you really ought to know what they mean before using them to plan a program. The challenge still remains: how do we accurately calculate these numbers, and why should we believe them?
Over the past month, world events have been such that we can examine these figures from a few different perspectives. There are the ongoing chronic crises in places like Sudan or the Democratic Republic of the Congo, where data are often of questionable quality outside of well-designed mortality surveys – generally we hear of X number of deaths in a given area through media sources or NGO reports. There is the unfolding crisis in Libya, which has produced interesting sets of data from both within and outside of the country. And there is the evolving crisis in Japan, where data seem to be fairly reliable and prospectively collected as they become available.
We are essentially dealing with some very different types of data, as I have mentioned previously. First, we have the anecdotal reports of deaths and the health effects of violence or disease that show up in the media, NGO reports, and so on. That’s not to suggest they’re wholly inaccurate, per se, but these reports are often difficult to validate, and it is hard to establish causation from them. What we want to know in order to plan interventions (or assess the effectiveness of our interventions) is how many deaths are in excess of what would normally be expected, and why they happened. It’s difficult to get that much out of a few BBC articles.
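The "excess deaths" idea above is simple arithmetic once you have a baseline: observed deaths minus the deaths you would expect under the pre-crisis crude mortality rate for the same population over the same period. A minimal sketch, with entirely hypothetical numbers (the population size, baseline rate, and death count below are invented for illustration):

```python
# Sketch of an excess-mortality calculation. All figures are made up.

def expected_deaths(population, baseline_cmr_per_10k_day, days):
    """Deaths expected under the pre-crisis crude mortality rate
    (expressed, as is conventional, per 10,000 people per day)."""
    return population * (baseline_cmr_per_10k_day / 10_000) * days

def excess_deaths(observed, population, baseline_cmr_per_10k_day, days):
    """Observed deaths minus the baseline expectation."""
    return observed - expected_deaths(population, baseline_cmr_per_10k_day, days)

# A hypothetical population of 500,000 with a baseline CMR of
# 0.5 deaths/10,000/day, followed for 90 days, in which 3,000
# deaths were recorded:
baseline = expected_deaths(500_000, 0.5, 90)          # about 2,250 expected
excess = excess_deaths(3_000, 500_000, 0.5, 90)       # about 750 in excess
print(baseline, excess)
```

The hard part, of course, is not the subtraction – it is knowing the baseline rate and the true death count at all, which is exactly what anecdotal reports cannot give you.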
Second, we have data released by ministries of health or by hospitals – this would be the case in Libya, in many instances. The challenge is that we don’t know how representative these data are of the overall situation. I don’t know much about the population demographics or their distribution (both inside and outside of the current conflict), so trying to understand what 46 deaths at a hospital means is a challenge – I don’t know where that hospital is or what share of the overall population it is likely to serve.
Third, there are the mortality surveys, often conducted retrospectively, that I have written about previously – more details on the methodologies used are coming soon in another post. These are usually carried out as a means of gaining some sense of what is happening in an (often displaced) population. To this end, we’re seeing some fairly good data in the UN-OCHA reports about the health of the displaced populations in Tunisia, Egypt, Niger and Algeria. UNHCR and IOM are known for having a robust set of indicators for the camps that they manage, so we’re seeing some solid surveillance data from that end.
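To give a sense of what these surveys actually estimate: the headline indicator is usually a crude mortality rate (CMR), computed as deaths over person-time during the recall period and reported per 10,000 people per day, with 1.0 death/10,000/day the commonly cited emergency threshold. A minimal sketch, with invented survey figures and the simplifying assumption that the mid-period population stands in for person-time:

```python
# Sketch of the CMR estimate a retrospective mortality survey produces.
# The survey figures here are hypothetical.

def cmr_per_10k_day(deaths, mid_period_population, recall_days):
    """Crude mortality rate per 10,000 person-days, using the
    mid-period population as a simple person-time denominator."""
    person_days = mid_period_population * recall_days
    return deaths / person_days * 10_000

# A hypothetical survey covering 5,200 people over a 60-day recall
# period that records 28 deaths:
rate = cmr_per_10k_day(28, 5_200, 60)
print(round(rate, 2))  # roughly 0.9, just under the 1.0 emergency threshold
```

Real surveys are messier than this – the denominator has to account for births, arrivals, and departures during the recall period, and the estimate carries a confidence interval driven by the cluster design – but this is the quantity the camp indicator sets are tracking.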
The problem remains getting reliable data from within Libya itself – data which I have yet to see. The most recent OCHA situation report indicates that the number of internally displaced persons (IDPs) in Libya is still unknown. This also suggests that we don’t know much about the health of these populations, and we are left to rely on some of the less robust sources of data that we have.
All in all, we rarely have reliable, robust public health datasets available to us. Even in Japan, where a prospective surveillance and monitoring system is in place, the scale of the crisis is creating a number of problems for producing reliable and accurate estimates. The most recent health situation report, however, contains about the most detailed tables you could ask for in this situation, detailing, by prefecture, the numbers of dead, missing, injured, and evacuated persons. The large number of people still unaccounted for suggests, though, that there remain logistical challenges in gaining access to reliable data across the board.