Evidence and Transparency are Needed to Develop a Frontline Health Worker mHealth Assessment Platform

J. Mark Ansermino
Department of Anesthesiology, Pharmacology & Therapeutics
University of British Columbia
Vancouver, Canada
Centre for International Child Health
BC Children’s Hospital
Vancouver, Canada
E-mail: mansermino@cw.bc.ca

Matthew O. Wiens
Walimu
Kampala, Uganda
Centre for International Child Health
BC Children’s Hospital
Vancouver, Canada
E-mail: matthew.wiens@cw.bc.ca

Niranjan Kissoon
Centre for International Child Health
BC Children’s Hospital
Vancouver, Canada
Department of Pediatrics and Emergency Medicine
University of British Columbia
Vancouver, Canada
BC Children’s Hospital Research Institute
BC Children’s Hospital
Vancouver, Canada
E-mail: nkissoon@cw.bc.ca


Dear Sir,

We would like to compliment Finette et al.1 on the progress they have made in the development of a frontline health worker mHealth assessment platform for children 2–60 months of age and on their innovative study design. Mobile health platforms such as theirs have tremendous potential to bridge the gap created by shortages of skilled health-care personnel in low- and middle-income countries (LMICs). However, they should be developed using rigorous evidence and a transparent process to engender confidence and foster adoption in LMICs. We are therefore concerned about the lack of transparency and detail in the reporting of the clinical decision process implemented within the platform. The platform uses 42 clinical data points that are interpreted based on WHO Integrated Management of Childhood Illness and integrated community case management protocols and “other” evidence-based data points. The individual risk assessments are based on “physician-based logic” and “Bayesian weighting and cluster pattern analysis.” No detail or references are provided on the processes used to produce this physician-based logic or on how it was validated. Indeed, the WHO protocols were not written for machine implementation and require a significant degree of interpretation before they can be expressed as computer logic. It is, therefore, important that the logic used to implement these protocols, and its integration with the physician-based logic, be described. To further confound interpretation, the algorithm also appears to have been updated during the course of the study.

Patients, providers, and funders are rightly keen to adopt new technologies to overcome limitations such as lack of access and inadequate training. However, the allure of significant positive potential, which has driven the very rapid growth in medical applications, should not trump a rigorous evidence-based approach to developing and adopting these technologies. The dangers of overpromising without adequate validation are amply evidenced by the recent tragic consequences of IBM Watson2 and should be a cautionary tale for us all. We, therefore, need to uphold rigorous processes of development, reporting,3 and regulation. The importance of fidelity and rigor to performance has not gone unnoticed by authorities, including the World Health Organization/International Telecommunication Union4 and the Food and Drug Administration in a new draft document on software as a medical device.5 We would also encourage an open, unbiased, and transparent approach to clinical algorithms, as is being promoted for clinical guidelines.6

We applaud and would strongly support an evidence-based, data-driven, and algorithmic approach to revolutionizing the current WHO protocols and significantly improving the outcomes of children around the world. However, lack of transparency, and potential bias due to personal and commercial interests, can introduce confusion and mistrust between patients and providers that could derail worthy initiatives by limiting their adoption, tarnishing their reputation, and, hence, adversely impacting their potential. Before a platform such as that of Finette and colleagues is integrated into a clinical setting, we would recommend that well-established standards for model development, transparent reporting, and validation (both internal and external) be followed. We will then be able to confidently harness the growing potential of computing power and machine learning to produce useful and safe tools at the bedside.

REFERENCES

1. Finette B et al., 2019. Development and initial validation of a frontline health worker mHealth assessment platform (MEDSINC®) for children 2–60 months of age. Am J Trop Med Hyg 100: 1556–1565.
2. Strickland E, 2019. IBM Watson, heal thyself: how IBM Watson overpromised and underdelivered on AI health care. IEEE Spectr 56: 24–31.
3. Collins GS, Moons KGM, 2019. Reporting of artificial intelligence prediction models. Lancet 393: 1577–1579.
4. Wiegand T, Krishnamurthy R, Kuglitsch M, Lee N, Pujari, Salathe M, Wenzel M, Xu S, 2019. WHO and ITU establish benchmarking process for artificial intelligence in health. Lancet 394: 9–11.
5. FDA, 2019. Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD)—Discussion Paper and Request for Feedback. Available at: https://www.regulations.gov/docket?D=FDA-2019-N-1185. Accessed May 29, 2019.
6. Incze M, Ross JS, 2019. On the need for (only) high-quality clinical practice guidelines. JAMA Intern Med 179: 561.

