Algorithms that detect cancer can be fooled by hacked images


A new study shows that artificial intelligence programs that analyze medical scans can be fooled by hacks and cyberattacks: researchers demonstrated that a computer program can add or remove evidence of cancer from mammograms.

Those changes could lead to an incorrect diagnosis: an artificial intelligence program that helps screen mammograms might call a scan healthy when there are signs of cancer, or flag cancer in a patient who is actually cancer-free. The new study adds to a growing body of research suggesting that healthcare organizations need to be prepared for such hacks.

Hospitals and healthcare institutions are already frequent targets of cyberattacks. Most of those attacks steal patient data or lock up an organization's computer systems until it pays a ransom. Both types of attack can harm patients and make it harder for healthcare workers to deliver good care by gumming up a hospital's operations.

Experts are also worried about more direct attacks on people's health: security researchers have shown that hackers can remotely break into internet-connected insulin pumps and deliver dangerous doses of medication.

Hacks that alter medical images could also affect a diagnosis. In the new study, published in Nature Communications, researchers at the University of Pittsburgh created a computer program that made mammograms showing signs of cancer look cancer-free and made cancer-free mammograms look like they showed cancer. They then fed the tampered images to an artificial intelligence program trained to spot signs of breast cancer.
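To make that setup concrete, here is a minimal sketch in PyTorch of the kind of evaluation described: run a trained classifier over original and tampered scans and measure how often the tampering changes the prediction. The tiny stand-in model, the random tensors, and the noise-based "tampering" below are illustrative assumptions, not the study's actual code or attack.

```python
import torch
import torch.nn as nn

def flip_rate(model: nn.Module, originals: torch.Tensor, tampered: torch.Tensor) -> float:
    """Fraction of scans whose predicted label changes after tampering."""
    model.eval()
    with torch.no_grad():
        original_pred = model(originals).argmax(dim=1)
        tampered_pred = model(tampered).argmax(dim=1)
    return (original_pred != tampered_pred).float().mean().item()

if __name__ == "__main__":
    # Stand-in classifier with two outputs (cancer / no cancer); a real
    # mammogram model and dataset would be loaded here instead.
    model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 2))
    originals = torch.rand(32, 1, 64, 64)                                    # untouched scans
    tampered = (originals + 0.1 * torch.randn_like(originals)).clamp(0, 1)   # simulated tampering
    print(f"Tampering flipped the prediction on {flip_rate(model, originals, tampered):.0%} of scans")
```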

The artificial intelligence was fooled by about 70 percent of the manipulated images, wrongly reporting scans that had cancer signs removed as cancer-free and scans altered to show cancer as cancerous. Human radiologists who reviewed the images fared unevenly: some were better at spotting manipulated images than others, with accuracy ranging from 29 percent to 71 percent.

The study adds to evidence that a cyberattack on medical images could lead to incorrect diagnoses: another group of researchers previously showed that they could add or remove evidence of lung cancer from scans, and those changes fooled both human radiologists and artificial intelligence programs.

There are no publicly known cases of a hack like this happening, but there are a few reasons a hacker might want to carry one out. A hacker might be interested in targeting a specific patient, like a political figure, or they might want to alter their own scans to get money from their insurance company. Hackers might also manipulate images at random and refuse to stop tampering with them unless a hospital pays a ransom.

Whatever the reason, demonstrations like this one show that healthcare organizations and people designing artificial intelligence models should be aware that hacks that alter medical scans are a possibility. The study author said that models should be shown manipulated images to teach them to spot fake ones. Radiologists may need to be trained to identify fake images.
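As a rough illustration of that suggestion, the sketch below augments training data with simulated tampered scans and trains a small detector to flag them. The simple network, the noise-based stand-in for real image manipulation, and the short training loop are all assumptions for illustration, not the paper's method.

```python
import torch
import torch.nn as nn

def simulate_tampering(images: torch.Tensor) -> torch.Tensor:
    """Placeholder for a real image-manipulation attack."""
    return (images + 0.2 * torch.randn_like(images)).clamp(0.0, 1.0)

def train_fake_detector(real_images: torch.Tensor, epochs: int = 5) -> nn.Module:
    # Binary detector: 0 = authentic scan, 1 = tampered scan.
    detector = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 2))
    optimizer = torch.optim.Adam(detector.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        fake_images = simulate_tampering(real_images)
        images = torch.cat([real_images, fake_images])
        labels = torch.cat([torch.zeros(len(real_images), dtype=torch.long),
                            torch.ones(len(fake_images), dtype=torch.long)])
        optimizer.zero_grad()
        loss = loss_fn(detector(images), labels)
        loss.backward()
        optimizer.step()
    return detector

if __name__ == "__main__":
    scans = torch.rand(64, 1, 64, 64)   # stand-in for authentic mammograms
    detector = train_fake_detector(scans)
```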

Hopefully, the research will get people thinking about the safety of medical artificial intelligence models and how to defend them against attacks.