I am attending the Third Global Symposium on Health Systems Research in Cape Town this week. The theme of the conference this year is people-centered health systems: putting patients, their families, and communities at the center of everything we do. Intersecting this goal is, of course, the need to measure the impact of new (and old) policies and interventions within the health system, from the national and international levels all the way to the bedside.

I have long been interested in methods and indicators for monitoring the availability and functionality of health services across health systems. More specifically, I am interested in how we can monitor fluctuations in the availability of health services under very difficult circumstances – in fragile and conflict-affected states, natural disasters, and other humanitarian emergencies. One of the challenges we have often run into, however, is that a plethora of data are available, with a great deal of uncertainty about how to compare them, how to integrate them, and what nuances or limitations come with the different datasets, compiled as they are from different tools and indicators.

One of the papers that arose from my PhD thesis was a systematic review of existing tools for conducting health facility assessments, where we found a large number of these tools in use in low- and middle-income countries. We analyzed these tools using a health systems framework, and from this we established a set of criteria that a health facility assessment should evaluate – everything from the number of health workers present to the availability of essential services like surgery and pediatric care. Interestingly, we found that many tools being used to guide decision-making were largely incomplete: they reflected donor or other priorities, and could offer little in the way of a broad assessment of the health system's capacity to deliver essential functions at the facility level. There was a general emphasis on primary care services, which wasn't surprising, but even universally necessary services (like a morgue, healthcare waste disposal, or the number of healthcare workers present) were neglected by some of the tools. The large number of tools in use, and the discrepancies in what they measured and how they measured it, make the data incomparable in many instances.

Within the criteria we defined for health facility assessments, many have existing, well-defined, well-used, standardized assessments, often endorsed by the World Health Organization. One example is the Situational Analysis tool to Assess Emergency and Essential Surgical Care (PDF), which evaluates the availability of drugs, equipment, human resources, and other essential items for surgical care, with the results compiled into a database of facilities around the world. This week, I also attended a session on the Workload Indicators of Staffing Need (WISN), another WHO tool, used to assess the availability of health workers, identify gaps in availability, and monitor workload imbalances. For many other services, however, no standardized assessment tools exist yet, and we really need to map out where those gaps are.

All of this has got me thinking: how can we practically measure progress across the health system when all of our data are so fragmented?

My experience suggests that a myriad of indicators and standardized assessment tools are in use across all of the health system building blocks, but there is currently no way of consolidating all of this information into a big-picture assessment of the relationships between them. In the absence of this, will we know what success in health systems strengthening looks like if and when we get there?

This becomes particularly problematic when health systems begin to collapse – we rapidly need to know where the health facilities are, what services they provided prior to the emergency, how many staff are available, and how well they are functioning. Taking the example of the ongoing Ebola epidemic, it has been difficult to know what capacities existed at the outset of the outbreak in these now very disrupted health systems, which makes planning very difficult. This information needs to be rapidly synthesized in a way that is usable and easily updatable. When it isn't consolidated (as was the case during the Haiti earthquake), a large number of databases tend to emerge, often providing only limited assessments of the health system (we have a paper forthcoming on this).

I think that an extremely useful contribution to health systems research would be to establish a system capable of consolidating the outputs of all of the assessments currently in use to allow us to conduct analyses and syntheses of these data in a format that’s usable for making decisions and evaluating outcomes. We need to understand how these data could become interoperable and comparable. This seems like a massive undertaking, but would undoubtedly be of incredible value to the health systems research community, and health systems themselves. Any collaborators out there?
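To make concrete what "interoperable and comparable" might mean in practice, here is a toy Python sketch of the core step: mapping records from two hypothetical assessment tools into a common schema and merging them by facility ID, flagging disagreements for review. The tool names, field names, and facility records are all invented for illustration – real consolidation would of course involve far messier codings and metadata.

```python
# Records from a hypothetical "Tool A": counts health workers,
# codes surgical availability as "yes"/"no".
tool_a = [
    {"facility_id": "F001", "hcw_count": 12, "surgery": "yes"},
    {"facility_id": "F002", "hcw_count": 4, "surgery": "no"},
]

# Records from a hypothetical "Tool B": different field names and
# codings for the same underlying concepts.
tool_b = [
    {"site": "F001", "staff_total": 12, "surgical_care_available": 1},
    {"site": "F003", "staff_total": 7, "surgical_care_available": 0},
]

def normalize_a(rec):
    """Map a Tool A record onto the common schema."""
    return {"facility_id": rec["facility_id"],
            "health_workers": rec["hcw_count"],
            "surgery_available": rec["surgery"] == "yes"}

def normalize_b(rec):
    """Map a Tool B record onto the common schema."""
    return {"facility_id": rec["site"],
            "health_workers": rec["staff_total"],
            "surgery_available": bool(rec["surgical_care_available"])}

def consolidate(records):
    """Merge normalized records by facility ID, flagging disagreements."""
    merged = {}
    for rec in records:
        fid = rec["facility_id"]
        if fid not in merged:
            merged[fid] = dict(rec)
            continue
        for key, value in rec.items():
            if merged[fid].get(key) not in (None, value):
                merged[fid][key] = "CONFLICT"  # sources disagree; needs manual review
            else:
                merged[fid][key] = value
    return merged

records = [normalize_a(r) for r in tool_a] + [normalize_b(r) for r in tool_b]
facilities = consolidate(records)
```

Even in this toy form, the sketch shows where the hard work lies: the `normalize_*` mappings encode judgments about which fields from different tools measure the same thing, and those judgments are exactly what a shared consolidation system would need to make explicit and auditable.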