Editorial

Artificial intelligence in medicine: is the genie out of the bottle?

Bruce Wilder^

Interprofessional Systems, New Orleans, LA, USA

^ORCID: 0000-0001-8875-5974.

Correspondence to: Bruce Wilder, MD, MPH, JD. Interprofessional Systems, 2803 Ursulines Avenue, New Orleans, LA 70119, USA. Email: bwild@interprofessional.com.

Received: 30 April 2020; Accepted: 30 July 2020; Published: 30 September 2020.

doi: 10.21037/jmai-20-36


“Code-based regulation—especially of those who are not themselves technically expert—risks making regulations invisible. Controls are imposed for particular policy reasons, but people experience these controls as nature.” (1).

It is probably a given that artificial intelligence (AI) will become an integral part of health care delivery and of our public health infrastructure (2). What is not a given is that we will easily reach that point, or maintain progress in a way that maximizes AI's effectiveness in achieving the goals we have come to expect of it: efficient and improved health care and public health systems; in other words, making people's health better in a cost-effective way. Responsible commentators have already begun to question the value of AI in medicine (3).

People may differ on what is and what is not AI. I choose to define the concept in general terms that are inclusive of its various sub-categories. AI can be thought of as a method of automating human thought processes by creating algorithms (4) that exploit information to make, or at least suggest (augmented intelligence) (5), a course of action, whether that action is an end result or a modification of the algorithm based on the end result so that it becomes more accurate (machine learning). AI assumes that a person accurately perceives what those thought processes are, and that the algorithm that person creates accurately reflects those thought processes as perceived. In other words, there are three points at which human error can come into play: understanding human thought processes (including how data are selected and presented), reducing that understanding to an algorithm, and the process (another algorithm, if you will) by which the human-made algorithm is auto-modified.
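To make those three points of potential error concrete, here is a minimal sketch in Python; the clinical rule, the weights, and the cases are entirely hypothetical inventions for illustration, not clinical values:

```python
# A minimal, hypothetical sketch of the three points where human error can
# enter. All rules, thresholds, and weights are illustrative inventions.

def risk_score(temp_c: float, heart_rate: int, weight: float) -> float:
    """Point 2: a human-made algorithm encoding a perceived clinical rule."""
    # Point 1: the encoding assumes fever and tachycardia are what the
    # clinician actually weighs; if that perception is wrong, so is the code.
    fever = max(0.0, temp_c - 37.0)
    tachycardia = max(0, heart_rate - 100) / 10.0
    return weight * fever + (1.0 - weight) * tachycardia

def update_weight(weight: float, predicted: float, observed: float,
                  learning_rate: float = 0.05) -> float:
    """Point 3: the 'algorithm that modifies the algorithm' (machine
    learning). A bug or a biased sample here silently reshapes every
    future prediction."""
    return weight + learning_rate * (observed - predicted)

if __name__ == "__main__":
    w = 0.5
    # (temperature C, heart rate, observed outcome): fabricated toy cases
    cases = [(38.5, 110, 1.0), (36.8, 72, 0.0), (39.1, 125, 1.0)]
    for temp_c, hr, outcome in cases:
        predicted = risk_score(temp_c, hr, w)
        w = update_weight(w, predicted, outcome)
        print(f"predicted={predicted:.2f} observed={outcome} weight={w:.2f}")
```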

The transformation of the paper chart into the electronic health record (EHR), jump-started by the HITECH Act of 2009, has geometrically increased the amount of easily accessible data accumulated on an individual patient in the healthcare setting. These data come to us in somewhat structured form, including that required by the design of the particular EHR, as well as in unstructured form, such as free text. The former has given rise to much dissatisfaction on the part of clinicians, as well as to the unanticipated tendency to collect and propagate inaccurate information in the EHR. The use of clinical natural language processing (cNLP) has enabled the extraction and structuring of data from free text, but it is not infallible and its methods are still evolving. Moreover, rapidly evolving technical advances in diagnostic and monitoring techniques have added to the mountains of data acquired in the care of a patient.
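A deliberately naive sketch can convey the idea behind cNLP, namely pulling structured fields out of free text. The note and patterns below are fabricated for illustration; real cNLP pipelines (tokenization, negation detection, mapping to ontologies such as SNOMED CT) are far more complex, which is precisely why they are not infallible:

```python
import re

# Fabricated clinical note for demonstration only.
NOTE = "Pt afebrile. BP 128/82, HR 74. Denies chest pain. Metformin 500 mg BID."

def extract_vitals(note: str) -> dict:
    """Naive pattern-matching stand-in for a real cNLP extraction step."""
    bp = re.search(r"BP (\d{2,3})/(\d{2,3})", note)
    hr = re.search(r"HR (\d{2,3})", note)
    return {
        "systolic": int(bp.group(1)) if bp else None,
        "diastolic": int(bp.group(2)) if bp else None,
        "heart_rate": int(hr.group(1)) if hr else None,
    }

print(extract_vitals(NOTE))
# -> {'systolic': 128, 'diastolic': 82, 'heart_rate': 74}
# One classic failure mode: a naive keyword match on "chest pain" would
# flag this patient even though the note says "Denies chest pain".
```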

Add to this the sheer volume of new data accumulating almost daily in the medical sciences that is relevant to the care of a particular patient, and the need for reliable AI systems, including up-to-date clinical decision support (CDS), to maintain an acceptable standard of care seems undeniable (6).

AI has already been shown to improve diagnostic accuracy in body imaging and in the interpretation of microscopic specimens. Predictive analytics is increasingly being used to assess readmission risk, to detect changes in a patient's status early, and to advance personalized medicine. The elephant in the room, though, is the potential for AI's use as a business tool by healthcare organizations, and the attendant possibility that conflicts with what is best for the individual patient will emerge.
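As an illustration of what such predictive analytics looks like in miniature, here is a sketch of a readmission-risk model. The features, data, and coefficients are synthetic; a real model would require curated clinical features, external validation, and bias auditing:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500

# Hypothetical features: age, prior admissions, length of stay (days)
X = np.column_stack([
    rng.integers(20, 90, n),
    rng.poisson(1.0, n),
    rng.integers(1, 15, n),
]).astype(float)

# Synthetic outcome loosely tied to the features, for demonstration only
logits = 0.03 * X[:, 0] + 0.8 * X[:, 1] + 0.1 * X[:, 2] - 5.0
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
patient = np.array([[78.0, 3.0, 10.0]])  # one hypothetical patient
print(f"Estimated readmission risk: {model.predict_proba(patient)[0, 1]:.0%}")
```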

So, the task before us requires some forethought on how AI systems will be integrated into health care and public health practice with a minimum of unanticipated adverse consequences. Let's start with the supposition that physicians, to a large degree, need to be in control of the process.

The traditional model of makers of different products competing for the business of health care providers (institutions, physicians, etc.) doesn't work very well for AI systems: a "customer" is hardly in a position to compare products so as to choose the "best" one, since the product is constantly changing and the logistics of switching to a new system can be prohibitive. Healthcare is more like a public good than a consumer product, and thus AI systems ought to be transparent. Moreover, developers of AI systems face the marketing pressure of creating a product that appeals to the business needs of an institution, their potential customer. Michael Crichton could as well have been talking about AI in medicine as about molecular biology when he wrote "The commercialization of molecular biology is the most stunning ethical event in the history of science…" and "Genetic research continues, at a more furious pace than ever. But it is done in secret, and in haste, for profit." (7).

Medical software is generally considered to be a device and thus subject to regulation by the FDA (8). But who is going to be liable when things go wrong and a patient, or a group of patients, is harmed? New ethical and legal issues relating to patient privacy and informed consent are sure to arise. Medical device manufacturers are ever on the lookout to avoid product liability lawsuits and tend to include in their contracts with providers "hold harmless" clauses (in which the provider agrees to indemnify the manufacturer if it is a defendant in a lawsuit) or "gag" clauses (in which the customer agrees not to disclose defects in the product). If an algorithm is public and crowd-sourced, users will be likely to equate its processes with the standard of care, which would help eliminate or ameliorate the "black box" problem (9). The American Medical Association has called for user-centered design and transparency of AI systems. It remains to be seen how that policy is implemented.

Progress in medical science is fast-moving, and AI algorithms will likely need constant tweaking in order to function accurately. A better paradigm than strict intellectual property protection would be one that exploits crowdsourcing, i.e., input from the entire community of users. Such a scheme would require some central governing entity consisting of health care providers, experts with technical proficiency in AI (developers), government regulators, and legal experts attuned to issues of liability and intellectual property. It would also require public algorithms, a relatively new concept in the field of AI generally. "[Algorithms] reflect existing biases in our data and society and in the very questions asked of them. Algorithms can reinforce and even accelerate existing discrimination patterns." (10). In the healthcare setting, hidden biases and assumptions in algorithms can lead to medical misadventure as well, as the toy example below illustrates.
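The following toy numerical example, built on entirely fabricated data, makes the quoted mechanism concrete: when historical spending is used as a proxy label for health need, a group that historically received less care for the same need is scored as healthier than it really is:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
need = rng.normal(5.0, 2.0, n)                    # true health need (unobservable)
group = rng.integers(0, 2, n)                     # group 1: historically under-served
spending = need * np.where(group == 0, 1.0, 0.6)  # same need, less recorded care

flagged = spending > np.percentile(spending, 80)  # top-20% "high-risk" program
for g in (0, 1):
    mask = flagged & (group == g)
    print(f"group {g}: {mask.sum():4d} flagged, "
          f"mean true need among flagged = {need[mask].mean():.2f}")
# Group 1 patients must be considerably sicker to be flagged at all; the
# algorithm reproduces, and will propagate, the historical disparity.
```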

In summary, there is potential for wrong turns on the road to successfully implementing and integrating AI into our health care system, and, in the case of patient care, necessarily into the EHR, bringing about the transformation of the EHR into something else. For now, let's call it the electronic health care module. In any event, this inevitable marriage of the EHR and AI is sure to radically transform how we practice medicine. To achieve the goals of user-centered design and transparency in AI, we will need algorithms to be public. It is therefore critical that we not allow the potential for commercialization and venture capital interests to co-opt the process and rush us into premature adoption.


Acknowledgments

This article was originally published in the Bulletin of the Allegheny County (Pennsylvania) Medical Society, April 20, 2020.

Funding: None.


Footnote

Provenance and Peer Review: This article was a standard submission to the journal. The article has undergone external peer review.

Conflicts of Interest: The author has completed the ICMJE uniform disclosure form (available at http://dx.doi.org/10.21037/jmai-20-36). The author has no conflicts of interest to declare.

Ethical Statement: The author is accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the non-commercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/.


References

  1. Lessig L. Code: And Other Laws of Cyberspace, Version 2.0. Basic Books, 2006.
  2. Davenport T, Kalakota R. The potential for artificial intelligence in healthcare. Future Healthc J 2019;6:94-8. [Crossref] [PubMed]
  3. Emanuel EJ, Wachter RM. Artificial Intelligence in Health Care: Will the Value Match the Hype? JAMA 2019;321:2281-82. [Crossref] [PubMed]
  4. Merriam-Webster. Definition of algorithm. Available online: https://www.merriam-webster.com/dictionary/algorithm
  5. American Medical Association. Policy H-480.940, Augmented Intelligence in Health Care (2018). Accessed 3/14/20. Available online: https://policysearch.ama-assn.org/policyfinder/detail/augmented%20intelligence?uri=%2FAMADoc%2FHOD.xml-H-480.940.xml
  6. Wilder BL. On the Need for a Universal Health Record. Open Health News, January 1, 2017. Accessed 3/14/20. Available online: http://www.openhealthnews.com/story/2017-01-01/need-universal-health-record
  7. Crichton M. From the introduction to Jurassic Park. New York: Alfred A. Knopf, 2006.
  8. US Food and Drug Administration. Guidances with Digital Health Content. Accessed 3/14/20. Available online: www.fda.gov/medical-devices/digital-health/guidances-digital-health-content
  9. "Black box" in the AI world refers to the inability of users, and sometimes even developers, to understand how an AI system reaches a result. See AMA Journal of Ethics 2019;21:E119-197.
  10. PITT CYBER. Pittsburgh Task Force on Public Algorithms. Accessed 3/14/20. Available online: www.cyber.pitt.edu/algorithms
doi: 10.21037/jmai-20-36
Cite this article as: Wilder B. Artificial intelligence in medicine: is the genie out of the bottle? J Med Artif Intell 2020;3:12.
