Executive Summary

Artificial intelligence, including machine learning, presents exciting opportunities to transform the health and life sciences spaces. It offers tantalizing prospects for swifter, more accurate clinical decision making and amplified R&D capabilities. However, open issues around regulation and clinical relevance remain, causing both technology developers and potential investors to grapple with how to overcome today's barriers to adoption, compliance, and implementation. This article explains the key obstacles and offers ways to overcome them.

Here are key obstacles to consider and how to handle them:

Developing regulatory frameworks. Over the past few years, the U.S. Food and Drug Administration (FDA) has been taking incremental steps to update its regulatory framework to keep up with the rapidly advancing digital health market. In 2017, the FDA released its Digital Health Innovation Action Plan to clarify the agency's role in advancing safe and effective digital health technologies and to address key provisions of the 21st Century Cures Act.

The FDA has also been enrolling select software-as-a-medical-device (SaMD) developers in its Digital Health Software Precertification (Pre-Cert) Pilot Program. The goal of the Pre-Cert pilot is to help the FDA determine the key metrics and performance indicators required for product precertification, while also identifying ways to make the approval process easier for developers and help advance health care innovation.

Most recently, in September the FDA released its "Policy for Device Software Functions and Mobile Medical Applications" - a series of guidance documents that describe how the agency plans to regulate software that aids in clinical decision support (CDS), including software that utilizes machine-learning-based algorithms.

In a related statement from the FDA, Amy Abernethy, its principal deputy commissioner, explained that the agency plans to focus regulatory oversight on "higher-risk software functions," such as those used for more serious or critical health circumstances. This also includes software that utilizes machine-learning-based algorithms, where users might not readily understand the program's "logic and inputs" without further explanation.

An example of CDS software that would fall under the FDA's "higher-risk" oversight category is a tool that identifies a patient as being at risk for a potentially serious medical condition - such as a postoperative cardiovascular event - but does not explain why it made that identification.

Achieving FDA approval. To account for the FDA's shifting oversight and approval processes, software developers must carefully think through how best to design and roll out their products so they are well positioned for FDA approval, especially if the software falls under the agency's "higher-risk" category.

One factor to consider is that AI-powered therapeutic or diagnostic tools will, by their nature, continue to evolve. For example, it is reasonable to expect that a software product will be updated and will change over time (e.g., security updates, new features or functionalities, updated algorithms). But because each change technically alters the product, its FDA approval status could be put at risk after every update or new iteration.

In this case, planning to take a version-based approach to the FDA approval process might be in the developer's best interest. In this approach, a new version of the software is created each time its internal ML algorithm(s) is trained on a new set of data, with each new version being subjected to independent FDA approval.

Although cumbersome, this approach sidesteps the FDA's concerns about approving software products whose function changes after approval. Solution providers must weigh these strategic development considerations carefully.
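As a rough illustration of what such version discipline could look like in practice, the Python sketch below freezes each retrained model as an immutable, fingerprinted release record. The fields, file formats, and status values are illustrative assumptions rather than any FDA-prescribed schema; the point is simply that every retrained model gets its own reviewable record that can be tied to a specific regulatory submission.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class ModelRelease:
    """A frozen, reviewable snapshot of one trained model version (illustrative fields)."""
    version: str               # e.g., "2.1.0" -- incremented whenever the model is retrained
    training_data_sha256: str  # fingerprint of the exact dataset used for training
    model_sha256: str          # fingerprint of the serialized model weights
    release_date: str
    regulatory_status: str     # e.g., "submitted", "cleared" -- tracked per version

def fingerprint(path: str) -> str:
    """Hash a file so any change to data or weights yields a new, distinct version."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def register_release(version: str, data_path: str, model_path: str, manifest: str) -> ModelRelease:
    """Append an immutable release record; the deployed artifact must match this record."""
    release = ModelRelease(
        version=version,
        training_data_sha256=fingerprint(data_path),
        model_sha256=fingerprint(model_path),
        release_date=date.today().isoformat(),
        regulatory_status="submitted",
    )
    with open(manifest, "a") as f:
        f.write(json.dumps(asdict(release)) + "\n")
    return release
```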

Similarly, investors must have a clear understanding of a company's product development plans and its intended approach to continued FDA approval, as this can provide clear differentiation from competitors in the same space. Clinicians will be hard-pressed to adopt technologies that haven't been validated by the FDA, so investors need to be sure the companies they are considering supporting have a clear product development roadmap - including an approach to FDA approvals as both the software products themselves and the regulatory guidelines continue to evolve.

AI is a black box. Beyond regulatory ambiguity, another key obstacle to the adoption of AI applications in the clinical setting is their black-box nature and the trust issues that result.

One challenge is tracking: If a negative outcome occurs, can an AI application's decision-making process be tracked and assessed - for example, can users identify the training data and/or machine learning (ML) paradigm that led to the AI application's specific action? To put it more simply, can the root cause of the negative outcome be identified within the technology so that it can be prevented in the future?

From reclassifying the training data to redesigning the ML algorithms that "learn" from it, the discovery process is complex - and could even result in the application being removed from the marketplace.
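One practical way to make that discovery process tractable is to log provenance with every prediction, so an outcome can later be traced to a specific model version and, through the release record, to its training data. The sketch below is a minimal illustration; the field names, the hypothetical CDS output, and the cds_audit.jsonl path are assumptions, not a standard.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_prediction(audit_log_path: str, model_version: str, patient_inputs: dict, output: dict) -> None:
    """Record which model version saw which inputs and what it returned, so a later
    root-cause review can tie a negative outcome back to a specific model version
    (and, via the release manifest, to the data that version was trained on)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash rather than store raw patient inputs in the audit trail
        "input_sha256": hashlib.sha256(
            json.dumps(patient_inputs, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
    }
    with open(audit_log_path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example: a hypothetical CDS risk score wrapped with audit logging
risk = {"postop_cardiac_event_risk": 0.82, "threshold_exceeded": True}
log_prediction("cds_audit.jsonl", "2.1.0",
               {"age": 71, "bmi": 31.2, "prior_mi": True}, risk)
```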

Another concern raised about the black-box aspect of AI systems is that someone, either on purpose or by mistake, could feed incorrect data into the system, causing erroneous conclusions (e.g., misdiagnosis, incorrect treatment recommendations). Luckily, detection algorithms designed to identify doctored or incorrect inputs could reduce, if not eliminate, this risk.
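As a simple illustration of such a safeguard, a plausibility check can reject inputs that fall outside physiologically possible ranges before they ever reach the model. The ranges and field names in this sketch are illustrative only, and real systems would pair checks like these with more sophisticated anomaly detection.

```python
# Minimal input plausibility checks -- one simple form of input-detection logic.
# The ranges below are illustrative, not clinical reference values.
PLAUSIBLE_RANGES = {
    "age_years": (0, 120),
    "heart_rate_bpm": (20, 250),
    "systolic_bp_mmhg": (50, 260),
    "temperature_c": (30.0, 43.0),
}

def validate_inputs(inputs: dict) -> list[str]:
    """Return a list of problems; an empty list means the record passed the checks."""
    problems = []
    for field, (low, high) in PLAUSIBLE_RANGES.items():
        value = inputs.get(field)
        if value is None:
            problems.append(f"missing field: {field}")
        elif not (low <= value <= high):
            problems.append(f"{field}={value} outside plausible range [{low}, {high}]")
    return problems

record = {"age_years": 71, "heart_rate_bpm": 480, "systolic_bp_mmhg": 135, "temperature_c": 36.8}
issues = validate_inputs(record)
if issues:
    print("Input rejected before inference:", issues)  # flag rather than silently misdiagnose
```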

A bigger challenge posed by AI systems' black-box nature is that physicians are reluctant to trust (due in part to malpractice-liability risk) - and therefore adopt - something that they don't fully understand. For example, there are a number of emerging AI imaging diagnostic companies with FDA-approved AI software tools that can assist clinicians in diagnosing and treating conditions such as strokes, diabetic retinopathy, intracranial hemorrhaging, and cancer.

However, clinical adoption of these AI tools has been slow. One reason is that physician certification bodies such as the American College of Radiology (ACR) have only recently started releasing formalized use cases for how AI software tools can be reliably used. Patients are also likely to have trust issues with AI-powered technologies. While they may accept the reality that human errors can occur, they have very little tolerance for machine error.

While efforts to help open up the black box are underway, AI's most useful role in the clinical setting during this early period of adoption may be to help providers make better decisions rather than to replace them in the decision-making process. Most physicians may not trust a black box, but they will use it as a support system if they remain the final arbiter.

To gain physicians' trust, AI software developers will have to clearly demonstrate that when their solutions are integrated into the clinical decision-making process, they help the clinical team do a better job. The tools must also be simple and easy to use. Applying AI initially to lower-stakes tasks such as billing and coding, rather than to higher-stakes ones (e.g., diagnostics, AI-assisted treatments), should also help increase trust over time.

At the industry level, there needs to be a concerted effort to publish more formalized use cases that support AI's benefits. Software developers and investors should work with professional associations such as the ACR to publish additional use cases and develop frameworks that spur industry adoption and build credibility.

Lower hurdles in life sciences. While AI's application in the clinical care setting still faces many challenges, the barriers to adoption are lower for specific life sciences use cases. For instance, ML is an exceptional tool for matching patients to clinical trials, for drug discovery, and for identifying effective therapies.

But whether it's in a life sciences capacity or the clinical care setting, the fact remains that many stakeholders stand to be affected by AI's proliferation in health care and life sciences. Obstacles to AI's wider adoption certainly exist - from regulatory uncertainties to a lack of trust to a dearth of validated use cases. But the opportunities the technology presents to change the standard of care, improve efficiencies, and help clinicians make more informed decisions are worth the effort to overcome them.
