Figure 1. Schematic illustration of continuous surveys with temporal-spatial sampling and small-area estimates (hypothetical example). aClusters are primary sampling units. For household surveys, clusters are villages and city neighborhoods; for health facility surveys, clusters are health facilities (see text for details and Box 1 for assumptions).

Figure 2. Relationships among the integrated continuous survey system, quality management structures, and partners. NGO, non-governmental organization.

  • 1

    Anonymous, 2008. The GAVI Alliance’s new vaccine strategy. Lancet 372: 2.

  • 2

    The Global Fund to Fight AIDS, Tuberculosis, and Malaria, 2008. How the global fund works. Available at: http://www.theglobalfund.org/en/about/how. Accessed May 27, 2008.

  • 3

    US Institute of Medicine, Committee for the Evaluation of the President’s Emergency Plan for AIDS Relief (PEPFAR) Implementation, 2007. PEPFAR Implementation: Progress and Promise. Washington, DC: National Academies Press.

  • 4

    Millennium Development Goals, 2005. Global data monitoring information system. Available at: http://www.developmentgoals.org/. Accessed November 2, 2005.

  • 5

    Roll Back Malaria Partnership, 2006. Roll back malaria global strategic plan 2005–2015. Available at: http://www.rollbackmalaria.org/forumV/docs/gsp_en.pdf. Accessed September 20, 2006.

  • 6

    The Bellagio Study Group on Child Survival, 2003. Knowledge into action for child survival. Lancet 362: 323–327.

  • 7

    Bryce J, el Arifeen S, Pariyo G, Lanata CF, Gwatkin D, Habicht J-P, and the Multi-Country Evaluation of IMCI Study Group, 2003. Reducing child mortality: can public health deliver? Lancet 362: 159–164.

  • 8

    Sanders D, Haines A, 2006. Implementation research is needed to achieve international health goals. PLoS Med 3: e186.

  • 9

    World Bank, 2004. The World Development Report 2004: Making Services Work for Poor People. New York: Oxford University Press.

  • 10

    Rowe AK, de Savigny D, Lanata CF, Victora CG, 2005. How can we achieve and maintain high-quality performance among health workers and managers in low-resource settings? Lancet 366: 1026–1035.

  • 11

    World Health Organization, 2006. The World Health Report 2006: Working Together for Health. Geneva: World Health Organization.

  • 12

    Victora CG, Habicht J-P, Bryce J, 2004. Evidence-based public health: moving beyond randomized trials. Am J Public Health 94: 400–405.

  • 13

    Walshe K, 2007. Understanding what works—and why—in quality improvement: the need for theory-driven evaluation. Int J Qual Health Care 19: 57–59.

  • 14

    Chen L, Evans T, Anand S, Boufford JI, Brown H, Chowdhury M, Cueto M, Dare L, Dussault G, Elzinga G, Fee E, Habte D, Hanvoravongchai P, Jacobs M, Kurowski C, Michael S, Pablos-Mendez A, Sewankambo N, Solimano G, Stilwell B, de Waal A, Wibulpolprasert S, 2004. Human resources for health: overcoming the crisis. Lancet 364: 1984–1990.

  • 15

    Gething PW, Noor AM, Gikandi PW, Ogara EAA, Hay SI, Nixon MS, Snow RW, Atkinson PM, 2006. Improving imperfect data from health management information systems in Africa using space-time geostatistics. PLoS Med 3: e271.

  • 16

    Mathers CD, Ma Fat D, Inoue M, Rao C, Lopez AD, 2005. Counting the dead and what they died from: an assessment of the global status of cause of death data. Bull World Health Organ 83: 171–177.

  • 17

    Murray CJL, Shengelia B, Gupta N, Moussavi S, Tandon A, Thieren M, 2003. Validity of reported vaccination coverage in 45 countries. Lancet 362: 1022–1027.

  • 18

    Ronveaux O, Rickert D, Hadler S, Groom H, Lloyd J, Bchir A, Birmingham M, 2005. The immunization data quality audit: verifying the quality and consistency of immunization monitoring systems. Bull World Health Organ 83: 503–510.

  • 19

    Lim SS, Stein DB, Charrow A, Murray CJL, 2008. Tracking progress towards universal childhood immunisation and the impact of global initiatives: a systematic analysis of three-dose diphtheria, tetanus, and pertussis immunisation coverage. Lancet 372: 2031–2046.

  • 20

    MEASURE DHS, 2006. Demographic and health surveys. Available at: http://www.measuredhs.com. Accessed December 3, 2006.

  • 21

    UNICEF, 2006. Multiple indicator cluster survey 3. Available at: http://www.childinfo.org/mics/mics3. Accessed December 3, 2006.

  • 22

    US Bureau of Labor Statistics, 2008. Internet website. Available at: http://www.bls.gov/. Accessed November 5, 2008.

  • 23

    Centers for Disease Control and Prevention, 2003. Behavioral risk factor surveillance system User’s guide. Available at: www.cdc.gov/brfss/usersguide.htm. Accessed August 27, 2003.

  • 24

    Centers for Disease Control and Prevention, 2008. Internet website. Available at: http://www.cdc.gov/nchs/nhanes.htm. Accessed May 27, 2008.

  • 25

    Soleman N, Chandramohan D, Shibuya K, 2006. Verbal autopsy: current practices and challenges. Bull World Health Organ 84: 239–245.

  • 26

    World Health Organization, Department of Child and Adolescent Health and Development, 2003. Health Facility Survey: Tool to Evaluate the Quality of Care Delivered to Sick Children Attending Outpatients Facilities (Using the Integrated Management of Childhood Illness Clinical Guidelines as Best Practices). Geneva, Switzerland: World Health Organization.

  • 27

    Korenromp EL, Arnold F, Williams BG, Nahlen BL, Snow RW, 2004. Monitoring trends in under-5 mortality rates through national birth history surveys. Int J Epidemiol 33: 1293–1301.

  • 28

    Rowe AK, Steketee RW, Arnold F, Wardlaw T, Basu S, Bakyaita N, Lama M, Winston CA, Lynch M, Cibulskis RE, Shibuya K, Ratcliffe AA, Nahlen BL for the Roll Back Malaria Monitoring and Evaluation Reference Group, 2007. Methods for evaluating the impact of malaria control efforts on mortality in sub-Saharan Africa. Trop Med Int Health 12: 1524–1539.

  • 29

    Kuykendall RJ, 2000. County specific prevalence of regular physical activity in Georgia: a comparison of three analytic methods using Behavioral Risk Factor Surveillance System data. MS thesis, Rollins School of Public Health, Emory University, Atlanta, GA.

  • 30

    Howard G, Howard VJ, Katholi C, Oli MK, Huston S, 2001. Decline in US stroke mortality. An analysis of temporal patterns by sex, race, and geographic region. Stroke 32: 2213–2220.

  • 31

    Berwick DM, Godfrey AB, Roessner J, 1990. Curing Health Care. San Francisco: Jossey-Bass.

  • 32

    Victora CG, Wagstaff A, Armstrong Schellenberg J, Gwatkin D, Claeson M, Habicht J-P, 2003. Applying an equity lens to child health and mortality: more of the same is not enough. Lancet 362: 233–241.

  • 33

    Murray CJL, Frenk J, 2008. Health metrics and evaluation: strengthening the science. Lancet 371: 1191–1199.

  • 34

    Garvin DA, 1988. Managing Quality: The Strategic and Competitive Edge. New York: Free Press.

  • 35

    Langley GJ, Nolan KM, Nolan TW, Norman CL, Provost LP, 1996. The Improvement Guide. A Practical Approach to Enhancing Organizational Performance. San Francisco: Jossey-Bass.

  • 36

    Boonyasai RT, Windish DM, Chakraborti C, Feldman LS, Rubin HR, Bass EB, 2007. Effectiveness of teaching quality improvement to clinicians: a systematic review. JAMA 298: 1023–1037.

  • 37

    Øvretveit J, Bate P, Cleary P, Cretin S, Gustafson D, McInnes K, McLeod H, Molfenter T, Plsek P, Robert G, Shortell S, Wilson T, 2002. Quality collaboratives: lessons from research. Qual Saf Health Care 11: 345–351.

  • 38

    Institute for Healthcare Improvement, 2003. The breakthrough series: IHI’s collaborative model for achieving breakthrough improvement. Available at: http://www.ihi.org/ihi. Accessed October 27, 2008.

  • 39

    Berwick DM, 2004. Lessons from developing nations on improving health care. BMJ 328: 1124–1129.

  • 40

    Catsambas TT, Franco LM, Gutmann M, Knebel E, Hill P, Lin Y-S, 2008. Evaluating Health Care Collaboratives: The Experience of the Quality Assurance Project. Bethesda, MD: University Research Corporation.

  • 41

    The HIVQUAL Project, 2008. Country profile: HIVQUAL-Uganda pushes ahead at full speed. The HIVQUAL international update. Available at: http://www.hivguidelines.org/admin/Files/qoc/HIVQUAL/international%20update/HQIU_May_2008_epublication.pdf. Accessed October 21, 2008.

  • 42

    Reerink IH, Sauerborn R, 1996. Quality of primary health care in developing countries: recent experiences and future directions. Int J Qual Health Care 8: 131–139.

  • 43

    Supawitkul S, Srisongsom S, Ningsanond P, Lolekha R, Fox K, Agins B, Levine W, Tappero J, Thanprasertsuk S, 2006. Quality improvement of HIV care in Thailand through the HIVQUAL-T model, 2002–2005. XVI International AIDS Conference 2006, 13–18 August 2006, Toronto, Canada.

  • 44

    Weinberg M, Fuentes JM, Ruiz AI, Lozano FW, Angel E, Gaitan H, Goethe B, Parra S, Hellerstein S, Ross-Degnan D, Goldmann DA, Huskins C, 2001. Reducing infections among women undergoing cesarean section in Colombia by means of continuous quality improvement methods. Arch Intern Med 161: 2357–2365.

  • 45

    Withanachchi N, Karandagoda W, Handa Y, 2004. A performance improvement programme at a public hospital in Sri Lanka: an introduction. J Health Organ Manag 18: 361–369.

  • 46

    Withanachchi N, Handa Y, Karandagoda KKW, Pathirage PP, Tennakoon NCK, Pullaperuma DSP, 2007. TQM emphasizing 5-S principles: a breakthrough for chronic managerial constraints at public hospitals in developing countries. Int J Public Sector Management 20: 168–177.

  • 47

    Roethlisberger FJ, Dickson WJ, 1939. Management and the Worker: An Account of a Research Program Conducted by Western Electric Company, Hawthorne Works, Chicago. Cambridge, MA: Harvard University Press.

  • 48

    Guerra CA, Gikandi PW, Tatem AJ, Noor AM, Smith DL, Hay SI, Snow RW, 2008. The limits and intensity of Plasmodium falciparum transmission: implications for malaria control and elimination worldwide. PLoS Med 5: e38.

  • 49

    Kaur H, Goodman C, Thompson E, Thompson K-A, Masanja I, Kachur SP, Abdulla S, 2008. A nationwide survey of the quality of antimalarials in retail outlets in Tanzania. PLoS One 3: e3403.

  • 50

    Smith SC, Joshi UB, Grabowsky M, Selanikio J, Nobiya T, Aapore T, 2007. Evaluation of bednets after 38 months of household use in Northwest Ghana. Am J Trop Med Hyg 77 (Suppl 6): 243–248.

  • 51

    Loevinsohn B, Harding A, 2005. Buying results? Contracting for health service delivery in developing countries. Lancet 366: 676–681.

  • 52

    Siddiqi S, Masud TI, Sabri B, 2006. Contracting but not without caution: experience with outsourcing of health services in countries of the Eastern Mediterranean Region. Bull World Health Organ 84: 867–875.

  • 53

    Meessen B, Musango L, Kashala J-PI, Lemlin J, 2006. Reviewing institutions of rural health centres: the Performance Initiative in Butare, Rwanda. Trop Med Int Health 11: 1303–1317.

  • 54

    Yamada T, 2008. In search of new ideas for global health. N Engl J Med 358: 1324–1325.

  • 55

    Alliance for Health Policy and Systems Research, 2008. Internet website. Available at: http://www.who.int/alliance-hpsr/en/. Accessed November 5, 2008.

  • 56

    Global Health Workforce Alliance, 2008. Internet website. Available at: http://www.ghwa.org/. Accessed November 5, 2008.

  • 57

    Health Metrics Network, 2008. Internet website. Available at: http://www.who.int/healthmetrics/about/en. Accessed October 27, 2008.

  • 58

    Institute for Health Metrics and Evaluation, 2008. Internet website. Available at: http://www.healthmetricsandevaluation.org/. Accessed October 27, 2008.

  • 59

    Institute for Healthcare Improvement, 2008. Internet website. Available at: http://www.ihi.org/ihi. Accessed October 27, 2008.

  • 60

    International Household Survey Network, 2008. Internet website. Available at: http://www.internationalsurveynetwork.org/home/. Accessed October 27, 2008.

  • 61

    International Network for Rational Use of Drugs, 2008. Internet website. Available at: http://www.inrud.org/. Accessed November 5, 2008.

  • 62

    Partnership in Statistics for Development in the 21st Century, 2008. Internet website. Available at: http://www.paris21.org/. Accessed October 27, 2008.

  • 63

    World Alliance for Patient Safety, 2008. Internet website. Available at: http://www.who.int/patientsafety/en/. Accessed November 5, 2008.

  • 64

    Jones G, Steketee RW, Black RE, Bhutta ZA, Morris SS, and the Bellagio Child Survival Study Group, 2003. How many child deaths can we prevent this year? Lancet 362: 65–71.

  • 65

    Morel CM, Lauer JA, Evans DB, 2005. Cost effectiveness analysis of strategies to combat malaria in developing countries. BMJ 331: 1299.

  • 66

    Rowe AK, Steketee RW, 2007. Predictions of the impact of malaria control efforts on all-cause child mortality in Sub-Saharan Africa. Am J Trop Med Hyg 77 (Suppl 6): 48–55.

  • 67

    Skarbinski J, Winston CA, Massaga JJ, Kachur SP, Rowe AK, 2008. Assessing the validity of health facility-based data on insecticide-treated bednet possession and use: comparison of data collected via health facility and household surveys—Lindi Region and Rufiji District, Tanzania, 2005. Trop Med Int Health 13: 396–405.

Potential of Integrated Continuous Surveys and Quality Management to Support Monitoring, Evaluation, and the Scale-Up of Health Interventions in Developing Countries
  • 1 Malaria Branch, Division of Parasitic Diseases, Centers for Disease Control and Prevention, Atlanta, Georgia

Well-funded initiatives are challenging developing countries to increase health intervention coverage and show impact. Despite substantial resources, however, major obstacles include weak health systems, a lack of reasonably accurate monitoring data, and inadequate use of data for managing programs. This report discusses how integrated continuous surveys and quality management (I-Q), which are well-recognized approaches in wealthy countries, could support intervention scale-up, monitoring and evaluation, quality control for commodities, capacity building, and implementation research in low-resource settings. Integrated continuous surveys are similar to existing national cross-sectional surveys of households and health facilities, except that data are collected over several years by permanent teams and most results are reported monthly at the national, province, and district levels. Quality management involves conceptualizing work as processes, involving all workers in quality improvement, monitoring quality, and using teams that improve quality with “plan-do-study-act” cycles. Implementing and evaluating I-Q in a low-income country would provide critical information on the value of this approach.

INTRODUCTION

In developing countries, today’s billion-dollar disease control initiatives and the Millennium Development Goals are challenging the global community to increase the coverage of health interventions rapidly and equitably, evaluate scale-up efforts with rigorous methods, strengthen health systems, and build capacity of in-country staff.1–5

As resources have greatly increased the availability of commodities such as medicines, vaccines, laboratory supplies, and other technologies (e.g., insecticide-treated nets [ITNs] to prevent malaria), the major challenge now is scaling up coverage. Scale-up efforts generally involve delivering commodities to communities and health facilities, and changing behaviors of community members and health workers. For example, in malarious areas, individuals with fever must promptly seek care from healthcare providers, providers must prescribe medicines correctly, and everyone should sleep under an ITN.

In countries benefiting from these new resources, if one assumes that interventions remain efficacious and countries are relatively stable, a fundamental obstacle to scaling up coverage is weak health systems.6–8 Specifically, countries face at least four key challenges. First, delivering commodities is often hampered by poor supply chains, forecasting systems, and transportation infrastructure. Second, changing human behaviors can be notoriously difficult. Numerous studies have shown inadequate healthcare quality despite often-used interventions for improving health worker adherence to clinical guidelines (e.g., training and guidelines dissemination),9–11 and there are virtually no proven strategies for rapidly changing and maintaining health practices of large populations in developing countries. Indeed, the lack of such evidence has led to the call for implementation research to evaluate the effectiveness and costs of scale-up approaches. 8,10,12,13 Third, many countries lack a sufficient number of qualified health workers, especially in areas severely affected by HIV/AIDS. 14 Finally, as most countries have varying physical, epidemiologic, cultural, and socioeconomic conditions, programmatic activities often must be tailored by region and adjusted over time.

Given these complexities, a common-sense scale-up approach is to develop a reasonable plan based on existing data and in-country experience, begin implementation, monitor results, and use the monitoring data to modify the plan, as needed. Programs, however, often have difficulty with the last two steps. Reasonably accurate monitoring data on intervention coverage and impact that reflect local variation are generally unavailable, and program staff frequently lack the time and expertise to analyze and use existing data to make decisions. For example, regarding the availability of locally relevant monitoring data, although health information systems (HIS) typically include health facility-based counts of disease cases and deaths and administrative estimates of vaccination coverage, these data often have limited practical utility because of problems with completeness, timeliness, representativeness, and accuracy. 15–19 Moreover, HIS rarely include indicators on the quality of services in health facilities. To help provide accurate data on intervention coverage and health status, a variety of high-quality community-based surveys are conducted, such as the Demographic and Health Survey 20 (DHS) and Multiple Indicator Cluster Survey. 21 However, these surveys are usually done only every few years and rarely provide district-level estimates.

The objective of this report is to discuss how integrated continuous surveys (ICS) and quality management principles could be applied together in developing countries to support the scale-up of health interventions, monitoring and evaluation, quality control for commodities, capacity building, and implementation research. The ICS-quality management (I-Q) approach is described, and advantages and disadvantages are discussed. I-Q has not been implemented at a national scale in any developing country, although parts of it have been used in sub-national projects (discussed below). Because the intent is to focus on concepts for policy-makers, program managers, and researchers, not all technical aspects of I-Q are presented in detail.

SYSTEM DESCRIPTION

I-Q has two components: a high-quality data collection arm (ICS) and an action arm (program strengthening with quality management).

Integrated continuous surveys.

ICS are similar to existing high-quality national cross-sectional surveys of communities and health facilities. The differences are that, with ICS, 1) data are collected continuously by a few permanent teams over several years; 2) sampling is done in time and place; 3) results are reported continuously; and 4) indicators are estimated at the national, province, and district levels. The idea of ICS comes from cross-sectional surveys in developing countries such as the DHS, which measure numerous indicators for a variety of health programs (and thus could be considered “integrated” surveys), and continuous surveys in industrialized countries such as those conducted by the US Bureau of Labor Statistics, 22 which report on economic indicators monthly and are a key source of real-time information for decision-makers across the globe. Examples of continuous health surveys include the current Peru DHS (M. Vaessen, Macro International, personal communication, December 7, 2006), the US Behavioral Risk Factor Surveillance System, 23 and the US National Health and Nutrition Examination Survey, 24 although notably none of these surveys reports results monthly. Thus, ongoing rigorous community-based surveys with continuous reporting are an established method, but one that has not yet been applied to the health sector in developing countries.

A hypothetical example of ICS is presented in Figure 1 and Box 1. Three permanent household survey teams and two health facility teams work year-round collecting data throughout the country. To minimize cost, the number of teams in this example was based on the minimum sample needed to assess typical health indicators (discussed below). To prevent burn-out, survey teams spend about 15 days in the field collecting data each month and 15 days at a central ICS office on data management and preparation for the next round of field work. In addition to the survey teams, a supervision team is dedicated to maintaining high-quality data collection, and an analyst and a communications specialist prepare reports and other dissemination materials.

Box 1. Assumptions for integrated continuous household and health facility surveys in the example in Figure 1 (a hypothetical country with three provinces and 18 districts)

Assumptions for household surveys

  1. Three survey teams (each with two surveyors and one driver) in the field about 15 days per month.
  2. Average time to complete one cluster (including travel, mapping, and interviewing) is about 2.5 days; cluster size is 14 households.
  3. On average, each team completes 6 clusters/month, and the three teams together collect 18 clusters/month; each province has an average of 6 clusters/month, and each district has an average of 1 cluster/month.
  4. At the national level, indicators are evaluated with a “running average” estimate each month (based on 30 clusters, with a precision of about ±7% points for household-level indicators), and a “high-precision” estimate each year (based on 216 clusters, with a precision of about ±3% points for household-level indicators). An example household-level indicator is the proportion of households that own an insecticide-treated bednet. Precision estimates assume an indicator value of 50% (for a conservative estimate of precision), a design effect of 2, and that the indicator is measured in all 12 months of the year.a Regarding power, a comparison of 2 high-precision estimates (i.e., comparing 2 years) could detect a 6% point change (e.g., an increase in coverage from 50% to 56%) with 90% power.

Assumptions for health facility surveys

  1. Two survey teams (each with four surveyors and one driver) in the field about 15 days per month.
  2. Average time to complete one cluster (including travel) is about 1.7 days; a cluster is one health facility (i.e., one outpatient facility or inpatient ward) and one antenatal care clinic.
  3. On average, each team completes 9 clusters/month, and the two teams together collect 18 clusters/month; each province has an average of 6 clusters/month, and each district has an average of 1 cluster/month.
  4. At the national level, indicators are evaluated with a “running average” estimate each month (based on 30 clusters, with a precision of about ±18% points for health facility-level indicators), and a “high-precision” estimate each year (based on 216 clusters, with a precision of about ±7% points for health facility-level indicators). An example health facility-level indicator is the proportion of facilities with essential medicines in stock. Precision estimates assume an indicator value of 50% (for a conservative estimate of precision), a design effect of 1, and that the indicator is measured in all 12 months of the year.a

aPrecision would be lower if the indicator were measured in only 6 months of the year (e.g., if the survey content varied over the year—see text for example).
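The precision and power figures quoted in Box 1 can be checked with the standard normal-approximation formula for a proportion under cluster sampling. The sketch below is illustrative only, using the Box's stated assumptions (indicator value of 50%, the given cluster sizes and design effects, and a 95% confidence level):

```python
import math
from statistics import NormalDist

def ci_half_width(n_clusters, cluster_size, deff, p=0.5, z=1.96):
    """95% CI half-width for a proportion under cluster sampling."""
    n = n_clusters * cluster_size             # total observation units
    se = math.sqrt(deff * p * (1 - p) / n)    # design-adjusted standard error
    return z * se

# Household surveys: 14 households/cluster, design effect 2
monthly_hh = ci_half_width(30, 14, deff=2)    # running average: about ±7% points
annual_hh = ci_half_width(216, 14, deff=2)    # high precision: about ±3% points

# Health facility surveys: 1 facility/cluster, design effect 1
monthly_hf = ci_half_width(30, 1, deff=1)     # about ±18% points
annual_hf = ci_half_width(216, 1, deff=1)     # about ±7% points

# Power to detect a change from 50% to 56% between two annual estimates
se_diff = math.sqrt(2) * ci_half_width(216, 14, deff=2) / 1.96
power = NormalDist().cdf(0.06 / se_diff - 1.96)  # about 0.90
```

These values reproduce the "±7%", "±3%", "±18%", "±7%", and "90% power" figures in Box 1 to within rounding.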

For household surveys, after obtaining consent, standard questionnaires are used to measure health, demographic, and equity indicators (Box 2). As with other surveys, including the DHS, mortality measurements for children younger than 5 years old are based on complete birth histories of women of reproductive age. Cause-specific mortality is estimated with verbal autopsies. 25 Responses to all survey questions are entered directly into an electronic database with handheld computers equipped with a global positioning system to aid with mapping and selecting households. Depending on country needs, a small blood sample might be collected to measure biomarkers such as anemia (Box 2).

Box 2. Examples of indicators that could be measured by integrated continuous household and health facility surveys

Indicators from a household survey

  • Example outcome indicators: proportion of households that own an insecticide-treated bednet; proportion of households with access to safe water; proportion of adults who know HIV prevention messages; proportion of pregnant women receiving antenatal care; proportion of births that occur in a health facility; proportion of children 12–23 months old that are fully vaccinated
  • Example health impact indicators: anemia prevalence among children < 5 years old; malaria parasite prevalence among children < 5 years old; HIV seroprevalence; all-cause mortality rate among children < 5 years old; cause-specific mortality rates among children < 5 years old, ascertained by verbal autopsy 25
  • Example indicators for assessing equity of intervention coverage and impact: demographics (age, sex); wealth index; race or ethnicity

Example indicators from a health facility survey

  • Example output indicators: proportion of health facilities with essential medicines in stock; proportion of health facilities that can perform microscopy; proportion of health workers who received training in the national case-management policy; proportion of health workers who received supervision in the past 3 months; proportion of health workers who demonstrate knowledge of the national case-management policy
  • Example outcome indicators: proportion of pregnant women seen at antenatal clinics who receive care recommended by the national policy; proportion of ill children < 5 years old who receive care recommended by the national policy; proportion of patients with suspected malaria who are correctly diagnosed according to the national policy; proportion of patients with HIV/AIDS seen at HIV/AIDS clinics who receive care recommended by the national policy
  • Example health impact indicators: proportion of pregnant women seen at antenatal clinics who are HIV-positive; proportion of patients with a febrile illness who have malaria parasitemia

For health facility surveys, in this example, facilities have separate consultation rooms for antenatal and curative care; thus, at each facility, the team divides in two (other arrangements could be made if additional settings, such as vaccination clinics, are included). To assess antenatal care, one surveyor spends the day silently observing consultations and recording findings on a standard instrument. To evaluate curative care in outpatient departments, at least two surveyors are needed: one to silently observe consultations and one to interview and re-examine patients when they are ready to leave the facility (out of view of the observed health worker) using national guidelines to obtain a “gold standard” diagnosis (against which the observed health worker’s diagnoses and treatments are compared). This methodology is a well-recognized standard. 26 In high-volume clinics, extra surveyors are used to maximize sample size and reduce the time patients wait for the interview and re-examination. To assess inpatient care, surveyors perform observations and conduct chart reviews. In all facilities, at the end of the day, health workers are interviewed, and a facility assessment is performed to document the availability of supplies and equipment. Data from interviews and facility assessments are collected with handheld computers, and data from consultation observations are collected on paper forms, which are keypunched during the 15-day period at the central ICS office. Depending on a country’s needs, the health facility surveys could focus only on public facilities or could be expanded to include private clinics, pharmacies, and drug shops.

Cluster sampling is used for all surveys to obtain probability samples of target populations. In household surveys, the target population is the general population, and clusters are villages or city neighborhoods (e.g., census enumeration areas). Within clusters, households are randomly sampled. In health facility surveys, the target populations are health facilities and patients, and clusters are health facilities and all the patients seen at a given facility during regular working hours on a given weekday. In facilities with multiple consultation rooms, a fixed number of rooms (e.g., one per observer) are randomly selected.

Because the surveys are continuous, temporal-spatial sampling of clusters is needed. Every 2 months, for the household and health facility surveys, a new, independently selected sample of 36 clusters is chosen (Figure 1, point A). Thus, each year, 216 villages or city neighborhoods (i.e., 3,024 households) and 216 health facilities are surveyed. The number of households sampled annually is a little less than one half the sample of a typical DHS in sub-Saharan Africa. 27
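The arithmetic of the temporal-spatial design (36 independently selected clusters every 2 months, six rounds per year) can be sketched as follows. The enumeration-area frame and the use of simple random sampling are illustrative assumptions; an actual survey would sample clusters with probability proportional to size from a census frame:

```python
import random

def bimonthly_samples(frame, clusters_per_round=36, rounds_per_year=6, seed=1):
    """Draw an independent sample of clusters for each 2-month round.
    Simple random sampling is used here for brevity; a real design would
    sample with probability proportional to size."""
    rng = random.Random(seed)
    return [rng.sample(frame, clusters_per_round) for _ in range(rounds_per_year)]

frame = [f"EA-{i:04d}" for i in range(5000)]  # hypothetical enumeration areas
year = bimonthly_samples(frame)
total = sum(len(r) for r in year)             # 216 clusters surveyed per year
```

Six rounds of 36 clusters give the 216 villages or neighborhoods (and, analogously, 216 health facilities) surveyed per year.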

At the national level, most indicators (Box 2) are estimated monthly with a running average of the 30 most recently surveyed clusters (Figure 1, point B). The use of 30 clusters is based on the need for at least 20–30 clusters for a reasonably stable estimate. Mortality indicators, which require large samples, are only estimated annually. The monthly estimates are for program management only and are disseminated widely to local partners (Figure 2). For formal reporting at the global level (e.g., to the World Health Organization headquarters), indicator values are based on an entire year of data—thereby ensuring acceptably precise estimates that are not affected by seasonality. Special analyses are performed for disease-specific initiatives (e.g., Roll Back Malaria recommends measuring ITN use during the malaria transmission season, 28 which might only last several months).
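Pooling the 30 most recently surveyed clusters into a single monthly estimate can be sketched as below. The data are hypothetical, and equal cluster weights are assumed for simplicity; a real analysis would apply sampling weights and design-based variance estimation:

```python
def running_average(cluster_results, window=30):
    """Pool the `window` most recently surveyed clusters into one estimate.
    Each element is (numerator, denominator), e.g., ITN-owning households
    out of households surveyed in that cluster."""
    recent = cluster_results[-window:]
    return sum(n for n, _ in recent) / sum(d for _, d in recent)

# Hypothetical history: 40 clusters of 14 households, 7 ITN owners in each
history = [(7, 14)] * 40
coverage = running_average(history)   # 0.5, i.e., 50% household ITN ownership
```

Each month, the window simply slides forward as newly surveyed clusters replace the oldest ones.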

At the province and district level, small-area estimates of indicators (except mortality) are calculated monthly. As with the running-average national estimates, the monthly sub-national estimates are for program management only. Small-area estimates are based on data from a small geographic area at a given time plus data that are “borrowed” from the recent past and from neighboring geographic areas. 29 For example (Figure 1, point C), in District Z for October 2008, data from only about one cluster would be available (Box 1, assumption 3). In the inset illustration at the bottom of Figure 1, this cluster is indicated by the shaded square. For the minimum 20–30 clusters for a reasonably stable estimate, data from at least another 19 clusters must be borrowed. As indicated by the open squares inside District Z, two clusters are borrowed from the recent past (e.g., from August to September 2008). As indicated by the open squares outside District Z, 17 clusters are borrowed from neighboring geographic areas from August–October 2008. For larger areas (e.g., Province Y), fewer clusters are borrowed. Data from borrowed clusters are given lower analysis weights than clusters that are not borrowed, and weights of borrowed clusters decrease the further they are from the target area and time period. 30 This approach assumes that results from clusters that are spatially and temporally proximate are more similar than results from more distant clusters. When this assumption is violated because of an obvious geographic boundary (e.g., a poor district adjacent to a wealthy urban neighborhood), the boundary is incorporated into the algorithm that selects which clusters are borrowed to prevent making estimates for one district with data from an obviously dissimilar area. Although this approach is clearly imperfect, recall that the goal is to produce continuous, local-level data of reasonable validity for program management. 
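
The borrowing-and-down-weighting scheme can be sketched with a weight that decays with spatial and temporal distance from the target district and month. The exponential decay form below is an illustrative assumption (the cited methods 29,30 use related but more elaborate weighting), and the data are hypothetical:

```python
import math

def small_area_estimate(clusters, decay=0.5):
    """Weighted estimate for a target district and month.

    Each cluster is a dict with:
      value: cluster-level indicator (e.g., proportion covered)
      n:     cluster sample size
      dist:  spatial distance from the target district (0 = inside it)
      lag:   months before the target month (0 = current month)
    Borrowed clusters (dist > 0 or lag > 0) get exponentially smaller weights.
    """
    num = den = 0.0
    for c in clusters:
        w = math.exp(-decay * (c["dist"] + c["lag"]))
        num += w * c["n"] * c["value"]
        den += w * c["n"]
    return num / den if den else None

clusters = [
    {"value": 0.80, "n": 14, "dist": 0, "lag": 0},  # the one local cluster
    {"value": 0.60, "n": 14, "dist": 0, "lag": 1},  # borrowed from last month
    {"value": 0.20, "n": 14, "dist": 2, "lag": 2},  # borrowed from a neighbor
]
estimate = small_area_estimate(clusters)  # pulled toward the local 0.80
```

An obvious geographic boundary would be handled upstream, by excluding dissimilar clusters from the input list before weighting.
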
To help with interpretation, district-level results are presented with a color-coded map showing broad categories of estimated intervention coverage (e.g., < 25%, 25–49%, 50–74%, ≥ 75%). In industry and some healthcare settings, such continuously updated displays of key indicators for managers and stakeholders are known as balanced scorecards or corporate dashboards. 31 Monthly running averages and small-area analyses are not routinely performed for mortality, although at infrequent intervals (e.g., every 5 years), mortality is estimated for a small number of sub-national geographic areas with adequate sample size (e.g., the provinces in Figure 1).
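
For the dashboard display, mapping an estimate to a color band is a simple threshold lookup; the exact cut-points and district names here are illustrative:

```python
def coverage_band(coverage_pct):
    """Map an estimated coverage percentage to a dashboard category."""
    if coverage_pct < 25:
        return "< 25%"
    if coverage_pct < 50:
        return "25-49%"
    if coverage_pct < 75:
        return "50-74%"
    return ">= 75%"

# Hypothetical monthly small-area estimates for three districts.
bands = {district: coverage_band(pct) for district, pct in
         {"District X": 18, "District Y": 52, "District Z": 81}.items()}
```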

Regarding survey content, each bimonthly national sample need not measure the same indicators. For example, to avoid overlong interviews, surveys of the January–February, May–June, and September–October samples could measure one set of indicators, whereas other indicators could be measured in the remaining samples. Indicators requiring large sample sizes (e.g., mortality), that are of special programmatic interest, or that are needed to assess equity 32,33 are measured in all months. Indicators measured less frequently are reported less frequently (e.g., every 4 months). Survey content might also change over time because health interventions might change. As surveys might evolve, ICS staff must be skilled in questionnaire development, sampling, and analysis; the ICS team would be led by someone with high-level analytic competence. Thus, ICS would create and sustain a foundation of survey expertise that could be lent to other programs and on which additional capacity could be built.

ICS are envisioned to be based in a non-governmental institution (e.g., local university or research organization) to promote high-quality data collection and capacity building, prevent overstretched Ministry of Health (MOH) staff from being saddled with additional work, and avoid conflicts of interest that might arise if a government ran the ICS (e.g., governments might benefit from results appearing better or worse than reality 19). Although ICS would be based outside the government, government programs would be closely involved in the design (e.g., choosing which indicators to measure) and would include ICS results in the HIS. To further promote acceptance of results, national and local MOH staff would be invited to observe and learn from the survey process. That said, it is possible that some countries could house ICS in a governmental or quasi-governmental agency that was sufficiently protected from political influence.

It is important to state that ICS are not intended to replace all other monitoring and evaluation efforts. Rather, ICS provide an external measure of reality that can strengthen programs. ICS would be part of the HIS, filling a critical gap in most developing countries. Moreover, in countries with multiple, overlapping vertical surveys, ICS might be more economical.

Quality management and the quality improvement process.

Quality management principles evolved over the past century. They have been widely applied in industrial settings since the 1950s and have been cited as the reason for dramatic improvements in numerous companies. 31,34 Briefly, quality management involves conceptualizing work as processes (e.g., health workers systematically following the steps of a case-management algorithm), designing processes to reduce errors (e.g., developing clear case-management algorithms, with input from workers that will use them), recognizing that the main source of low quality is problems in processes and that low quality is costly, focusing improvement efforts on the most vital processes, satisfying external customers (patients) and internal customers (employees), monitoring quality, using scientific and statistical thinking, creating new organizational structures (e.g., quality improvement teams), and involving all workers in quality improvement (“everyone has two jobs: their job, and the job of helping to improve their job”). 31

Another critical part of quality management is the quality improvement process, which uses teams to improve quality with continuous “plan-do-study-act” cycles 35 (monitor indicators, identify problems, understand causes, implement solutions, check if solutions are working, and modify solutions as needed). Notably, this process is similar to the approach that clinicians use every day: assess the patient’s problem, diagnose the cause, develop and implement a treatment plan, and follow-up to assess how well the treatment is working. Quality improvement teams are typically small, time-limited, multi-disciplinary groups that form around solving a specific problem or improving a specific process. Large companies can have thousands of teams working on quality improvement projects at any one time, 34 and the overarching principle is that large improvements occur through many small incremental steps. 31

Since the late 1980s, the quality improvement process and other aspects of quality management have been applied to healthcare organizations in industrialized countries. Although quality management does not thrive in all settings, most published experiences reported that the approach led to improvements. 31,36,37 To accelerate the process, some organizations establish networks of teams (improvement collaboratives) that work on a single problem in multiple health facilities. 38 Since the late 1990s, these quality improvement methods have been implemented in varying degrees in at least 22 low- and middle-income countries 39–46; although the evidence has important limitations, most evaluations suggested improvements—some quite large. Because the best studies were conducted in hospitals, it is still not clear how effective these approaches would be in the multitude of small rural clinics and community health worker networks that comprise a large portion of health systems in developing countries. Nor is it clear how well the approaches respond to common, intractable problems that might only be solved at the ministerial or international level (e.g., insufficient budgets or salaries, or a lack of qualified health workers).

Figure 2 shows how quality management would be implemented under I-Q. The box at the top represents health workers, local non-governmental organizations (NGOs), and other partners in communities and health facilities forming teams that use the quality improvement process to solve problems and strengthen the scale-up of health interventions. This design assumes that the most important improvements occur in communities, out-patient clinics, and district hospitals, and it requires that staff be trained to use simple quality improvement methods. This training emphasizes the use of routinely collected local data to monitor the effects of quality improvement activities.

As teams will require assistance (especially at the beginning), the center of Figure 2 indicates a multi-tiered support system. At the national level, a quality improvement team, which reports directly to the Minister of Health, includes staff with expertise in quality improvement methods and problem-solving who have both the authority and resources to implement quality improvement activities. Specifically, the national team 1) provides strategic direction for quality improvement activities; 2) teaches quality improvement methods, especially to provincial and district managers; 3) coaches quality improvement teams, especially at the provincial and district level; 4) assists local teams in overcoming institutional barriers (e.g., obtaining permission to change a process from a national program manager); and 5) helps act as the voice of executive leadership (e.g., Minister of Health or national program manager) to express support for quality improvement activities and to convey specific messages (e.g., on the importance of supervision or following a new guideline). The emphasis on working with provincial and district managers reflects the broader goal of transforming supervision to “quality improvement support” for front-line health workers and identifying and supporting local quality champions to be agents of change. To actually solve problems, quality improvement teams would choose from a broad array of strategies, such as reorganizing processes, targeted training, job aids, and incentives. Strategies are tailored to fit specific contexts within the country.

While these quality improvement activities occur, ICS collect data to assess coverage and impact (right side of figure) and disseminate results monthly (bottom of figure). Widespread dissemination is intended to be an intervention in itself through the Hawthorne effect (i.e., people are motivated to improve their practices when they know they are observed 47). Hence, the arrow on the left side of Figure 2 indicates that partners and the public can help influence the health system to improve.

Estimated costs.

Although variation among countries makes it impossible to calculate a generic cost of the I-Q system, Table 1 shows example estimates for the scenario in Figure 1. Assuming a small country, 5-year time frame, and illustrative quality improvement activities, the estimated cost of I-Q is $6.8 million (annually, $1.4 million or $0.15 per person). If one subtracts the $0.5–3.95 million in cost-savings from household and health facility surveys that would be replaced by ICS over 5 years, the incremental cost of ICS alone (on top of “replaced” survey costs) would range from $0 (no additional cost) to $3.4 million (annually, $0–0.7 million), and the incremental cost of I-Q would range from $2.8–6.3 million (annually, $0.6–1.3 million or $0.06–0.14 per person). Note that estimates of cost-savings from replaced surveys are conservative, because they exclude sub-national surveys that governments and NGOs often conduct to evaluate their activities and the occasional multi-million dollar survey.
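
The incremental-cost ranges quoted above follow directly from the 5-year total and the savings bounds; a quick check of the arithmetic (all figures in $ millions, from the Table 1 scenario):

```python
total_iq_5yr = 6.8                     # I-Q cost over 5 years
annual_iq = total_iq_5yr / 5           # 1.36, reported as ~$1.4M per year

savings_lo, savings_hi = 0.5, 3.95     # replaced-survey savings over 5 years
iq_incremental = (total_iq_5yr - savings_hi,   # 2.85, reported rounded to 2.8
                  total_iq_5yr - savings_lo)   # 6.3
```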

Given I-Q’s integrated nature, it could be funded by a partnership of disease-specific initiatives and NGOs, as well as by general development resources—like basket-funding for surveys. Smaller NGOs could even “buy” questionnaire modules for particular ICS data collection rounds in a sub-national project area, as needed.

Advantages and disadvantages.

The I-Q approach has nine potential advantages.

  1. Most importantly, through teamwork at the local, intermediate, and national levels, it establishes an ongoing quality improvement process that directly links decision-making with health information and encourages action. It establishes governmental structures with the authority and resources to support the process and overcome obstacles to scaling-up interventions; and by engaging front-line health workers in the process, which includes training in quality improvement methods, it contributes to professional development and serves as an incentive to perform better.
  2. As a result of the Hawthorne effect, the presence of ICS teams and the widespread dissemination of monitoring results are an intervention to motivate health system actors to improve performance and reach goals.
  3. ICS produce methodologically rigorous national-level estimates of intervention coverage and impact to satisfy the data needs of programs, donors, other development partners, advocacy groups, and global health partnerships. In addition, as concerns have been raised about improving the quality and validity of data used by quality improvement teams 40 and collecting and using data, 37 high-quality ICS data could help keep quality improvement teams grounded in truth—even if local data were biased. Furthermore, high-quality ICS data (e.g., cause-specific mortality rates, malaria parasitemia prevalence, and HIV prevalence) could help researchers map 48 and track trends in disease burden and assess progress towards Millennium Development Goals. 33
  4. The system’s flexibility allows it to assess and strengthen multiple programs simultaneously and adapt to new programmatic needs over time.
  5. ICS teams could support quality control systems through continuous sampling of commodities in the field, which would be sent for testing (e.g., for the quality of drugs 49 and rapid diagnostic tests, and the durability and insecticide activity of ITNs 50).
  6. I-Q frees up some of the time that national and local MOH staff spend collecting and analyzing data—time that can be used to act on results and improve programs.
  7. It builds in-country technical capacity by creating monitoring and evaluation specialists, which might in turn increase country ownership and use of the information.
  8. ICS results could support accountability-dependent scale-up approaches, such as performance-based contracting, by providing objective measures of performance levels. 19,51–53 Indeed, I-Q might improve accountability more generally in the health sector and thus promote good governance.
  9. High-quality ICS data can serve as a platform for conducting targeted implementation research (example topics in Box 3).

Box 3

Examples of key implementation research questions related to health intervention scale-up and program monitoring and evaluation

Questions related to health intervention scale-up

  • What is the cost and effectiveness of strategies to improve and maintain health worker adherence to clinical guidelines? What are the best ways to promote health worker adherence?
  • What is the cost and effectiveness of strategies (e.g., behavior change communication and information/education/communication strategies) to increase and maintain routine use of health interventions (e.g., safe water or insecticide-treated bednets)?
  • What are the determinants or predictors of intervention coverage? (e.g., what factors influence health worker adherence to clinical guidelines? what factors are associated with use of insecticide-treated bednets?)

Questions related to program monitoring and evaluation

  • What is the validity of epidemiologic models 64–66 that predict averted deaths based on changes in health intervention coverage?
  • What is the validity, cost, and utility of “inexpensive” monitoring methods based on data from health facilities (e.g., data collected at immunization clinics 67) or schools?

I-Q has four potential disadvantages.

  1. As it probably involves some reorientation and restructuring of the health system, it requires buy-in and genuine support from governmental leadership and other development partners. It might encounter resistance from being perceived as too complex, ambitious, or unsustainable for a low-income country. However, it seems no less sustainable than million-dollar surveys every 2–5 years and billion-dollar disease control initiatives. Moreover, given that weak health systems are now recognized as a major bottleneck to scale-up, an ambitious strategy might be what is needed to ensure that these enormous health investments are successful. 54
  2. A related issue is that quality management does not thrive in all settings. 31,37,40 Clearly, a considerable amount of preparation and buy-in is needed.
  3. Small-area coverage estimates based on very small samples could be criticized. Aside from the obvious response that greater precision increases costs, perhaps managers would find it more acceptable to have estimates for small groups of similar districts so less data borrowing is needed. Also, the assumption of the similarity of spatially and temporally proximate clusters could be examined with existing data-sets, and such analyses are planned.
  4. ICS might be considered duplicative in countries where recurrent data collection already exists.

DISCUSSION

The I-Q approach involves the application of existing methods to healthcare settings in developing countries at national scale. It is not, however, a “one size fits all” approach; the hypothetical example in this report was for descriptive purposes only. In places where I-Q is implemented, it would need adaptation to the country-specific context.

The intent of this discussion is to contribute to a dialog on strengthening health systems of developing countries with state-of-the-art methods for monitoring, evaluation, and quality improvement. I-Q systems could contribute to and benefit from groups that are working to improve the quality, availability, and comparability of health data; strengthen monitoring and evaluation capacity; increase the use of health data for program management; assess population health and the performance of health systems; and measure and improve the quality and safety of health services (e.g., Alliance for Health Policy and Systems Research, 55 Global Health Workforce Alliance, 56 Health Metrics Network, 57 Institute for Health Metrics and Evaluation, 58 Institute for Healthcare Improvement, 59 International Household Survey Network, 60 International Network for Rational Use of Drugs,61 Partnership in Statistics for Development in the 21st Century, 62 and World Alliance for Patient Safety 63).

In conclusion, in countries that are under pressure to rapidly increase intervention coverage and that already have resources for commodities, I-Q might be a useful approach for supporting the scale-up of health interventions, monitoring and evaluation, quality control for commodities, capacity building, and implementation research. Given the scrutiny of today’s large health initiatives, the stakes are high. Weak health systems pose a real threat to success, and decision-makers need to think creatively and ambitiously about solutions. Implementing and evaluating I-Q in at least one low-income country would provide critical information on the value of this approach.

Table 1

Estimated costs and cost savings of integrated continuous surveys (ICS) and the quality improvement process in a small African country*
Figure 1.

Schematic illustration of continuous surveys with temporal-spatial sampling and small-area estimates (hypothetical example). aClusters are primary sampling units. For household surveys, clusters are villages and city neighborhoods; for health facility surveys, clusters are health facilities (see text for details and Box 1 for assumptions).

Citation: The American Journal of Tropical Medicine and Hygiene Am J Trop Med Hyg 80, 6; 10.4269/ajtmh.2009.80.971

Figure 2.

Relationships among the integrated continuous survey system, quality management structures, and partners. NGO, non-governmental organization.


* Address correspondence to Alexander K. Rowe, Centers for Disease Control and Prevention, Mailstop F22, 4770 Buford Highway, Atlanta, GA 30341-3724. E-mail: axr9@cdc.gov

Author’s address: Alexander K. Rowe, Centers for Disease Control and Prevention, Mailstop F22, 4770 Buford Highway, Atlanta, GA 30341-3724, Tel: 770-488-3588, Fax: 770-488-7761, E-mail: axr9@cdc.gov.

Disclosure: The author reports no conflict of interest.

REFERENCES

  • 1

    Anonymous, 2008. The GAVI Alliance’s new vaccine strategy. Lancet 372 :2.

  • 2

    The Global Fund to Fight AIDS, Tuberculosis, and Malaria, 2008. How the global fund works. Available at: http://www.theglobalfund.org/en/about/how. Accessed May 27, 2008.

  • 3

    US Institute of Medicine, Committee for the Evaluation of the President’s Emergency Plan for AIDS Relief (PEPFAR) Implementation, 2007. PEPFAR Implementation: Progress and Promise. Washington, DC: National Academies Press.

  • 4

    Millennium Development Goals, 2005. Global data monitoring information system. Available at: http://www.developmentgoals.org/. Accessed November 2, 2005.

  • 5

    Roll Back Malaria Partnership, 2006. Roll back malaria global strategic plan 2005–2015. Available at: http://www.rollbackmalaria.org/forumV/docs/gsp_en.pdf. Accessed September 20, 2006.

  • 6

    The Bellagio Study Group on Child Survival, 2003. Knowledge into action for child survival. Lancet 362 :323–327.

  • 7

    Bryce J, el Arifeen S, Pariyo G, Lanata CF, Gwatkin D, Habicht J-P, and the Multi-Country Evaluation of IMCI Study Group, 2003. Reducing child mortality: can public health deliver? Lancet 362 :159–164.

    • Search Google Scholar
    • Export Citation
  • 8

    Sanders D, Haines A, 2006. Implementation research is needed to achieve international health goals. PLoS Med 3 :e186.

  • 9

    World Bank, 2004. The World Development Report 2004: Making Services Work for Poor People. New York: Oxford University Press.

  • 10

    Rowe AK, de Savigny D, Lanata CF, Victora CG, 2005. How can we achieve and maintain high-quality performance among health workers and managers in low-resource settings? Lancet 366 :1026–1035.

    • Search Google Scholar
    • Export Citation
  • 11

    World Health Organization, 2006. The World Health Report 2006: Working Together for Health. Geneva: World Health Organization.

  • 12

    Victora CG, Habicht J-P, Bryce J, 2004. Evidence-based public health: moving beyond randomized trials. Am J Public Health 94 :400–405.

  • 13

    Walshe K, 2007. Understanding what works—and why—in quality improvement: the need for theory-driven evaluation. Int J Qual Health Care 19 :57–59.

    • Search Google Scholar
    • Export Citation
  • 14

    Chen L, Evans T, Anand S, Boufford JI, Brown H, Chowdhury M, Cueto M, Dare L, Dussault G, Elzinga G, Fee E, Habte D, Hanvoravongchai P, Jacobs M, Kurowski C, Michael S, Pablos-Mendez A, Sewankambo N, Solimano G, Stilwell B, de Waal A, Wibulpolprasert S, 2004. Human resources for health: overcoming the crisis. Lancet 364 :1984–1990.

    • Search Google Scholar
    • Export Citation
  • 15

    Gething PW, Noor AM, Gikandi PW, Ogara EAA, Hay SI, Nixon MS, Snow RW, Atkinson PM, 2006. Improving imperfect data from health management information systems in Africa using space-time geostatistics. PLoS Med 3 :e271.

    • Search Google Scholar
    • Export Citation
  • 16

    Mathers CD, Ma Fat D, Inoue M, Rao C, Lopez AD, 2005. Counting the dead and what they died from: an assessment of the global status of cause of death data. Bull World Health Organ 83 :171–177.

    • Search Google Scholar
    • Export Citation
  • 17

    Murray CJL, Shengelia B, Gupta N, Moussavi S, Tandon A, Thieren M, 2003. Validity of reported vaccination coverage in 45 countries. Lancet 362 :1022–1027.

    • Search Google Scholar
    • Export Citation
  • 18

    Ronveaux O, Rickert D, Hadler S, Groom H, Lloyd J, Bchir A, Birmingham M, 2005. The immunization data quality audit: verifying the quality and consistency of immunization monitoring systems. Bull World Health Organ 83 :503–510.

    • Search Google Scholar
    • Export Citation
  • 19

    Lim SS, Stein DB, Charrow A, Murray CJL, 2008. Tracking progress towards universal childhood immunisation and the impact of global initiatives: a systematic analysis of three-dose diphtheria, tetanus, and pertussis immunisation coverage. Lancet 372 :2031–2046.

    • Search Google Scholar
    • Export Citation
  • 20

    MEASURE DHS, 2006. Demographic and health surveys. Available at: http://www.measuredhs.com. Accessed December 3, 2006.

  • 21

    UNICEF, 2006. Multiple indicator cluster survey 3. Available at: http://www.childinfo.org/mics/mics3. Accessed December 3, 2006.

  • 22

    US Bureau of Labor Statistics, 2008. Internet website. Available at: http://www.bls.gov/. Accessed November 5, 2008.

  • 23

    Centers for Disease Control and Prevention, 2003. Behavioral risk factor surveillance system User’s guide. Available at: www.cdc.gov/brfss/usersguide.htm. Accessed August 27, 2003.

  • 24

    Centers for Diseases Control and Prevention, 2008. Internet website. Available at: http://www.cdc.gov/nchs/nhanes.htm. Accessed May 27, 2008.

  • 25

    Soleman N, Chandramohan D, Shibuya K, 2006. Verbal autopsy: current practices and challenges. Bull World Health Organ 84 :239–245.

  • 26

    World Health Organization, Department of Child and Adolescent Health and Development, 2003. Health Facility Survey: Tool to Evaluate the Quality of Care Delivered to Sick Children Attending Outpatients Facilities (Using the Integrated Management of Childhood Illness Clinical Guidelines as Best Practices). Geneva, Switzerland: World Health Organization.

  • 27

    Korenromp EL, Arnold F, Williams BG, Nahlen BL, Snow RW, 2004. Monitoring trends in under-5 mortality rates through national birth history surveys. Int J Epidemiol 33 :1293–1301.

    • Search Google Scholar
    • Export Citation
  • 28

    Rowe AK, Steketee RW, Arnold F, Wardlaw T, Basu S, Bakyaita N, Lama M, Winston CA, Lynch M, Cibulskis RE, Shibuya K, Ratcliffe AA, Nahlen BL for the Roll Back Malaria Monitoring and Evaluation Reference Group, 2007. Methods for evaluating the impact of malaria control efforts on mortality in sub-Saharan Africa. Trop Med Int Health 12 :1524–1539.

    • Search Google Scholar
    • Export Citation
  • 29

    Kuykendall RJ, 2000. County specific prevalence of regular physical activity in Georgia: a comparison of three analytic methods using Behavioral Risk Factor Surveillance System data. MS thesis, Rollins School of Public Health, Emory University, Atlanta, GA.

  • 30

    Howard G, Howard VJ, Katholi C, Oli MK, Huston S, 2001. Decline in US stroke mortality. An analysis of temporal patterns by sex, race, and geographic region. Stroke 32 :2213–2220.

    • Search Google Scholar
    • Export Citation
  • 31

    Berwick DM, Godfrey AB, Roessner J, 1990. Curing Health Care. San Francisco: Jossey-Bass.

  • 32

    Victora CG, Wagstaff A, Armstrong Schellenberg J, Gwatkin D, Claeson M, Habicht J-P, 2003. Applying an equity lens to child health and mortality: more of the same is not enough. Lancet 362 :233–241.

    • Search Google Scholar
    • Export Citation
  • 33

    Murray CJL, Frenk J, 2008. Health metrics and evaluation: strengthening the science. Lancet 371 :1191–1199.

  • 34

    Garvin DA, 1988. Managing Quality: The Strategic and Competitive Edge. New York: Free Press.

  • 35

    Langley GJ, Nolan KM, Nolan TW, Norman CL, Provost LP, 1996. The Improvement Guide. A Practical Approach to Enhancing Organizational Performance. San Francisco: Jossey-Bass.

  • 36

    Boonyasai RT, Windish DM, Chakraborti C, Feldman LS, Rubin HR, Bass EB, 2007. Effectivness of teaching quality improvement to clinicians: a systematic review. JAMA 298 :1023–1037.

    • Search Google Scholar
    • Export Citation
  • 37

    Øvretveit J, Bate P, Cleary P, Cretin S, Gustafson D, McInnes K, McLeod H, Molfenter T, Plsek P, Robert G, Shortell S, Wilson T, 2002. Quality collaboratives: lessons from research. Qual Saf Health Care 11 :345–351.

    • Search Google Scholar
    • Export Citation
  • 38

    Institute for Healthcare Improvement, 2003. The breakthrough series: IHI’s collaborative model for achieving breakthrough improvement. Available at: http://www.ihi.org/ihi. Accessed October 27, 2008.

  • 39

    Berwick DM, 2004. Lessons from developing nations on improving health care. BMJ 328 :1124–1129.

  • 40

    Catsambas TT, Franco LM, Gutmann M, Knebel E, Hill P, Lin Y-S, 2008. Evaluating Health Care Collaboratives: The Experience of the Quality Assurance Project. Bethesda, MD: University Research Corporation.

  • 41

    The HIVQUAL Project, 2008. Country profile: HIVQUAL-Uganda pushes ahead at full speed. The HIVQUAL international update. Available at: http://www.hivguidelines.org/admin/Files/qoc/HIVQUAL/international%20update/HQIU_May_2008_epublication.pdf. Accessed October 21, 2008.

  • 42

    Reerink IH, Sauerborn R, 1996. Quality of primary health care in developing countries: recent experiences and future directions. Int J Qual Health Care 8 :131–139.

    • Search Google Scholar
    • Export Citation
  • 43

    Supawitkul S, Srisongsom S, Ningsanond P, Lolekha R, Fox K, Agins B, Levine W, Tappero J, Thanprasertsuk S, 2006. Quality improvement of HIV care in Thailand through the HIVQUAL-T model, 2002–2005. XVI International AIDS Conference 2006, 13–18 August 2006, Montreal, Canada.

  • 44

    Weinberg M, Fuentes JM, Ruiz AI, Lozano FW, Angel E, Gaitan H, Goethe B, Parra S, Hellerstein S, Ross-Degnan D, Goldmann DA, Huskins C, 2001. Reducing infections among women undergoing cesarean section in Colombia by means of continuous quality improvement methods. Arch Intern Med 161 :2357–2365.

    • Search Google Scholar
    • Export Citation
  • 45

    Withanachchi N, Karandagoda W, Handa Y, 2004. A performance improvement programme at a public hospital in Sri Lanka: an introduction. J Health Organ Manag 18 :361–369.

    • Search Google Scholar
    • Export Citation
  • 46

    Withanachchi N, Handa Y, Karandagoda KKW, Pathirage PP, Tennakoon NCK, Pullaperuma DSP, 2007. TQM emphasizing 5-S principles: a breakthrough for chronic managerial constraints at public hospitals in developing countries. Int J Public Sector Management 20 :168–177.

    • Search Google Scholar
    • Export Citation
  • 47

    Roethlisberger FJ, Dickson WJ, 1939. Management and the Worker: An Account of a Research Program Conducted by Western Electric Company, Hawthorne Works, Chicago. Cambridge, MA: Harvard University Press.

  • 48

    Guerra CA, Gikandi PW, Tatem AJ, Noor AM, Smith DL, Hay SI, Snow RW, 2008. The limits and intensity of Plasmodium falciparum transmission: implications for malaria control and elimination worldwide. PLoS Med 5 :e38.

    • Search Google Scholar
    • Export Citation
  • 49

    Kaur H, Goodman C, Thompson E, Thompson K-A, Masanja I, Kachur SP, Abdulla S, 2008. A nationwide survey of the quality of antimalarials in retail outlets in Tanzania. PLoS One 3 :e3403.

    • Search Google Scholar
    • Export Citation
  • 50

    Smith SC, Joshi UB, Grabowsky M, Selanikio J, Nobiya T, Aapore T, 2007. Evaluation of bednets after 38 months of household use in Northwest Ghana. Am J Trop Med Hyg 77 (Suppl 6):243–248.

  • 51

    Loevinsohn B, Harding A, 2005. Buying results? Contracting for health service delivery in developing countries. Lancet 366 :676–681.

  • 52

    Siddiqi S, Masud TI, Sabri B, 2006. Contracting but not without caution: experience with outsourcing of health services in countries of the Eastern Mediterranean Region. Bull World Health Organ 84 :867–875.

  • 53

    Meessen B, Musango L, Kashala J-PI, Lemlin J, 2006. Reviewing institutions of rural health centres: the Performance Initiative in Butare, Rwanda. Trop Med Int Health 11 :1303–1317.

  • 54

    Yamada T, 2008. In search of new ideas for global health. N Engl J Med 358 :1324–1325.

  • 55

    Alliance for Health Policy and Systems Research, 2008. Internet website. Available at: http://www.who.int/alliance-hpsr/en/. Accessed November 5, 2008.

  • 56

    Global Health Workforce Alliance, 2008. Internet website. Available at: http://www.ghwa.org/. Accessed November 5, 2008.

  • 57

    Health Metrics Network, 2008. Internet website. Available at: http://www.who.int/healthmetrics/about/en. Accessed October 27, 2008.

  • 58

    Institute for Health Metrics and Evaluation, 2008. Internet website. Available at: http://www.healthmetricsandevaluation.org/. Accessed October 27, 2008.

  • 59

    Institute for Healthcare Improvement, 2008. Internet website. Available at: http://www.ihi.org/ihi. Accessed October 27, 2008.

  • 60

    International Household Survey Network, 2008. Internet website. Available at: http://www.internationalsurveynetwork.org/home/. Accessed October 27, 2008.

  • 61

    International Network for Rational Use of Drugs, 2008. Internet website. Available at: http://www.inrud.org/. Accessed November 5, 2008.

  • 62

    Partnership in Statistics for Development in the 21st Century, 2008. Internet website. Available at: http://www.paris21.org/. Accessed October 27, 2008.

  • 63

    World Alliance for Patient Safety, 2008. Internet website. Available at: http://www.who.int/patientsafety/en/. Accessed November 5, 2008.

  • 64

    Jones G, Steketee RW, Black RE, Bhutta ZA, Morris SS, and the Bellagio Child Survival Study Group, 2003. How many child deaths can we prevent this year? Lancet 362 :65–71.

  • 65

    Morel CM, Lauer JA, Evans DB, 2005. Cost effectiveness analysis of strategies to combat malaria in developing countries. BMJ 331 :1299.

  • 66

    Rowe AK, Steketee RW, 2007. Predictions of the impact of malaria control efforts on all-cause child mortality in Sub-Saharan Africa. Am J Trop Med Hyg 77 (Suppl 6):48–55.

  • 67

    Skarbinski J, Winston CA, Massaga JJ, Kachur SP, Rowe AK, 2008. Assessing the validity of health facility-based data on insecticide-treated bednet possession and use: comparison of data collected via health facility and household surveys—Lindi Region and Rufiji District, Tanzania, 2005. Trop Med Int Health 13 :396–405.


Footnotes

a

Precision would be lower if the indicator were measured in only 6 months of the year (e.g., if the survey content varied over the year; see text for an example).
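The footnote's point can be illustrated numerically: under simple random sampling, the half-width of a 95% confidence interval for a proportion scales as 1/√n, so collecting an indicator in only 6 of 12 monthly survey rounds roughly halves the annual sample size and widens the interval by a factor of √2. The sketch below uses hypothetical sample sizes (1,200 vs. 600 respondents) and ignores design effects from clustering.

```python
import math

def ci_half_width(p, n, z=1.96):
    """Approximate 95% CI half-width for a proportion under
    simple random sampling (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical annual sample sizes for one indicator:
full_year = ci_half_width(0.5, 1200)  # measured in all 12 monthly rounds
half_year = ci_half_width(0.5, 600)   # measured in only 6 of 12 rounds

print(round(full_year, 3))              # half-width with the full sample
print(round(half_year, 3))              # wider half-width with half the sample
print(round(half_year / full_year, 2))  # ratio, approximately sqrt(2) = 1.41
```

In practice, cluster sampling inflates both intervals by the design effect, but the relative loss of precision from halving the months of data collection is the same.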
