One out of three in-hospital deaths in the U.S. is caused by sepsis, a condition that can kill within hours and is one of the leading causes of death in hospitals.

Academics and electronic-health-record companies have developed automated systems that remind providers to check patients for sepsis, but these alerts fire so often that health care providers tend to ignore or disable them. Machine learning can fine-tune such programs and reduce the number of unnecessary notifications. With the help of one such artificial-intelligence system, doctors and nurses in real hospitals were able to treat sepsis cases nearly two hours earlier on average.

When the body's response to an infection spirals out of control, the result is sepsis, which can cause organ failure, limb loss and death. About 270,000 people die from sepsis in the U.S. each year, according to the CDC, making the condition a major cause of death in hospitals. Catching the problem as soon as possible is essential: without timely treatment, sepsis spirals very fast, often in a matter of hours, says Suchi Saria of Johns Hopkins University, who lost a nephew to the condition.

Sepsis can be hard to diagnose in a busy hospital. According to Saria, standard screening tells a health care provider to watch for any two of four warning signs in a patient. But many patients display at least two of the four criteria at some point during a hospital stay, which gives such warning programs a high false positive rate. "A lot of other programs have a high false alert rate," agrees Karin Molander, chair of the Sepsis Alliance board, who was not involved in developing the new sepsis detector. To separate true cases from false alarms, physicians must weigh factors such as a person's age, medical history and recent lab test results. Sepsis patients, however, do not have time for providers to slowly piece together all the relevant information.
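The two-of-four screening rule described above can be sketched in a few lines. The thresholds below are the standard SIRS (systemic inflammatory response syndrome) cutoffs, but the function names and the example patient are illustrative; real screening tools draw on far more than these four values.

```python
def sirs_flags(temp_c, heart_rate, resp_rate, wbc_k):
    """Return which of the four SIRS criteria a patient meets."""
    return {
        "temperature": temp_c > 38.0 or temp_c < 36.0,
        "heart_rate": heart_rate > 90,        # beats per minute
        "resp_rate": resp_rate > 20,          # breaths per minute
        "wbc": wbc_k > 12.0 or wbc_k < 4.0,   # white cells, thousands/uL
    }

def simple_sepsis_alert(temp_c, heart_rate, resp_rate, wbc_k):
    """Alert when two or more criteria are met: sensitive but noisy."""
    return sum(sirs_flags(temp_c, heart_rate, resp_rate, wbc_k).values()) >= 2

# A post-surgical patient with a mild fever and a fast pulse, a common
# combination that is usually not sepsis, still trips the alarm,
# illustrating why threshold-only rules produce so many false positives.
print(simple_sepsis_alert(38.3, 95, 16, 9.0))  # True
```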

Even in a well-connected electronic records system, finding known sepsis risk factors takes time. That is where machine learning comes in. Several academic and industry groups are training such programs to recognize sepsis risk factors and to warn health care providers about which patients are most vulnerable. In 2015 Saria and her colleagues at the Machine Learning and Healthcare Lab at Johns Hopkins University began work on one such program, the Targeted Real-time Early Warning System (TREWS). The program scans patients' electronic health records for factors that increase sepsis risk and combines this information with current vital signs and lab tests to produce a score indicating which patients are likely to develop sepsis. Saria and her team have since used machine learning to keep improving the program's sensitivity, accuracy and speed.
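The scoring idea described above, combining static risk factors from the record with current readings into a single probability, can be sketched as a toy logistic-regression-style model. The features and weights here are invented for illustration only; TREWS's actual inputs and model are far richer and are learned from real patient data.

```python
import math

FEATURE_WEIGHTS = {          # hypothetical learned coefficients
    "age_over_65": 0.8,
    "recent_surgery": 0.5,
    "high_lactate": 1.4,
    "low_blood_pressure": 1.1,
    "elevated_heart_rate": 0.7,
}
BIAS = -3.0                  # keeps the baseline risk low

def sepsis_risk_score(patient_features):
    """Combine present risk factors into a probability-like score in [0, 1]."""
    z = BIAS + sum(weight for name, weight in FEATURE_WEIGHTS.items()
                   if patient_features.get(name))
    return 1.0 / (1.0 + math.exp(-z))   # logistic (sigmoid) function

# An older patient with high lactate and a racing heart scores much
# higher than the baseline, flagging them for earlier attention.
patient = {"age_over_65": True, "high_lactate": True,
           "elevated_heart_rate": True}
print(round(sepsis_risk_score(patient), 2))
```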

Saria and a group of researchers then evaluated TREWS in the real world. Over the course of two years, the program was incorporated into the workflows of about 2,000 health care providers at five sites, where doctors and nurses used it during more than 760,000 patient encounters. The results of this trial, which suggest TREWS led to earlier sepsis diagnosis and reduced mortality, are described in three papers.

Molander thinks the machine-learning model may prove to be as important to sepsis care as the electrocardiogram machine has been to cardiology: it will let the clinician go from the computer to the bedside and assess the patient more quickly.

TREWS is not the first such program to be put through large-scale trials, notes Mark Sendak, who works on a sepsis-detection program called Sepsis Watch at the Duke Institute for Health Innovation. Other machine-learning systems focused on health care have already undergone trials of this kind; one of the earliest tested an artificial-intelligence-based system for detecting diabetic retinopathy, an eye condition caused by diabetes, in a trial designed with input from the FDA.

Sendak says the new system is an example of how such tools can be used to improve care. Still, he wants to see more standardized trials, with research support and guidance from external partners, such as the FDA, that do not have a stake in the results. Designing health care trials of machine-learning systems is difficult, he acknowledges, so any project that takes an algorithm, puts it into practice and studies how it is used in the peer-reviewed literature is a remarkable achievement.

Molander was also impressed that the artificial intelligence does not make decisions for health care providers. When doctors or nurses check a patient's electronic health record, they see a note that the patient is at risk of sepsis, along with a list of reasons why. Molander points out that the TREWS alert does not prevent the clinician from doing any other work on the computer. There is a reminder off in the corner of the system that says, 'Look, this person is at higher risk of decompensation due to sepsis, and these are the reasons why we think you need to be concerned.' The system helps doctors and nurses prioritize which patients to check on first, but they still make their own decisions. Saria says the team does not want to take autonomy away from the provider: the program is a tool to help, not a substitute for clinicians' own judgment.
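The "alert plus reasons" pattern Molander describes, a non-blocking note that surfaces a risk flag together with the factors behind it, might look something like the sketch below. The structure and field names are illustrative assumptions, not TREWS's actual interface.

```python
def build_alert(patient_id, risk_score, contributing_factors, threshold=0.7):
    """Return a non-blocking advisory note, or None if risk is below threshold."""
    if risk_score < threshold:
        return None
    return {
        "patient": patient_id,
        "message": "Higher risk of decompensation due to sepsis",
        "reasons": contributing_factors,  # shown so providers can verify the call
        "action_required": False,         # advisory only; never blocks other work
    }

# The clinician sees the flag and the reasoning, then decides what to do.
alert = build_alert("bed-12", 0.82,
                    ["rising lactate", "new fever", "dropping blood pressure"])
print(alert["message"], "-", ", ".join(alert["reasons"]))
```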

The trial also collected data on doctors' and nurses' willingness to use the alert system: 89 percent of its notifications were evaluated rather than dismissed. And according to a press release from Bayesian Health, TREWS cut the typically high rate of false sepsis-warning notifications by a factor of 10. Molander calls that result mind-blowing and says it will increase provider trust in machine learning.


Building trust matters, but so does collecting evidence. Health care institutions are unlikely to accept machine-learning systems without it, and people are more likely to adopt a new tool when they understand the reasoning behind it. In medicine, Saria says, you really need rigorous data and prospective studies to support the claims if you want to get scalable adoption.

Sendak says such groups are building the products while also building the evidence base and the standards for how the work should be conducted. Achieving widespread adoption of an alert system remains difficult because different hospitals may use different records software or already have a competing system in place. And hospitals have limited resources, which makes it hard for them to assess the effectiveness of a new alert tool.

Saria wants to build on the trial data to expand the use of TREWS. She says she is working with several electronic-records companies to incorporate the algorithm into more hospital systems. She also wants to find out whether machine learning could warn about other hospital dangers, such as cardiac arrest and heavy bleeding, which can threaten patients' health during hospital stays and recovery.

Much has been published about what artificial intelligence looks like and how it works; this trial, Saria says, shows that AI can actually get providers to adopt it. By incorporating an AI program into existing records systems, where it becomes part of a health care provider's workflow, developers can suddenly start chopping away at all these preventable harms in order to improve outcomes.