Deep learning models can identify a patient's race from X-ray images alone, something that would be impossible for a human doctor looking at the same scans.

The findings raise troubling questions about the role of artificial intelligence in medical diagnosis, assessment, and treatment: could computer software unintentionally apply racial bias when studying images like these?

An international team of health researchers from the US, Canada, and Taiwan trained their artificial intelligence system on hundreds of thousands of existing X-ray images labeled with details of the patient's race, then tested it on X-ray images the software hadn't seen before.

Even when the scans were taken from people of the same age and the same sex, the artificial intelligence could predict the patient's race with surprising accuracy, hitting 90 percent on some sets of images.
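To make the setup concrete, here is a minimal sketch of that kind of pipeline, assuming PyTorch and hypothetical `train_loader`/`test_loader` objects built from race-labeled X-rays; it illustrates the general approach, not the authors' actual code or data.

```python
# Minimal sketch: fine-tune a standard image classifier on X-rays labeled
# with self-reported race, then measure accuracy on held-out scans.
# train_loader and test_loader are hypothetical stand-ins.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # number of self-reported race labels in the dataset (assumed)

# An off-the-shelf architecture; the study used standard deep learning
# models rather than anything purpose-built. ImageNet-pretrained weights
# are a common starting point.
model = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

def train(model, loader, epochs=5, lr=1e-4):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:  # images: (B, 3, H, W), labels: (B,)
            opt.zero_grad()
            loss_fn(model(images), labels).backward()
            opt.step()

@torch.no_grad()
def accuracy(model, loader):
    model.eval()
    correct = total = 0
    for images, labels in loader:
        correct += (model(images).argmax(dim=1) == labels).sum().item()
        total += labels.numel()
    return correct / total

# train_loader holds the labeled training scans; test_loader holds scans the
# model has never seen, ideally from a different hospital, mirroring the
# paper's external validation:
# train(model, train_loader)
# print(f"Held-out accuracy: {accuracy(model, test_loader):.1%}")
```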

The researchers wrote in their published paper that they aimed to conduct a comprehensive evaluation of the ability of artificial intelligence to recognize a patient's racial identity from medical images.

"We show that standard deep learning models can be trained to predict race from medical images with high performance, which was sustained under external validation conditions," they wrote.

A previous study found that artificial intelligence analysis of X-ray images was more likely to miss signs of illness in Black patients. To stop that from happening, scientists need to understand why it happens in the first place.

Artificial intelligence mimics human thinking to spot patterns in data, but that means it can be susceptible to the same biases we are, and the prejudices we have woven into these systems are hard to untangle.

The scientists don't yet know why the system is so good at identifying race from images that don't appear to contain that information. Even when limited information was provided, for example by focusing on just a small part of the body, the models were still remarkably good at guessing the race reported in the patient's file.
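One way to probe that behavior, sketched below under the same assumptions as the earlier snippet (and again not the authors' actual protocol), is to blank out everything except a small window of each scan before inference and check how far accuracy falls.

```python
# Sketch of a degradation probe: keep only a small patch of each image and
# re-measure accuracy. If performance barely drops, the racial signal is not
# confined to one obvious region. Crop coordinates here are illustrative.
import torch

def keep_patch(images, top, left, size):
    """Zero out everything except a size x size window of each image."""
    masked = torch.zeros_like(images)
    masked[:, :, top:top + size, left:left + size] = \
        images[:, :, top:top + size, left:left + size]
    return masked

@torch.no_grad()
def patch_accuracy(model, loader, top=64, left=64, size=56):
    model.eval()
    correct = total = 0
    for images, labels in loader:
        preds = model(keep_patch(images, top, left, size)).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total

# Compared against the unmasked baseline from the sketch above:
# print(f"Patch-only accuracy: {patch_accuracy(model, test_loader):.1%}")
```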

It is possible that the system is picking up on signs of melanin, the pigment that gives skin its color, that are as yet unknown to science.

"Our finding that artificial intelligence can accurately predict self-reported race, even when clinical experts cannot, creates an enormous risk for all model deployment in medical imaging," the researchers warn.

There is growing evidence that artificial intelligence systems can reflect biases and prejudices of human beings, whether that is racism, sexism, or something else. Skewed training data can lead to skewed results.

That needs to be balanced against the potential of artificial intelligence to work through far more data, far more quickly, than humans can.

It is important to be aware of the potential for racial bias to show up in artificial intelligence systems, especially if we are going to hand more responsibility over to them in the future.

A research scientist and physician from the Massachusetts Institute of Technology told the Boston Globe that researchers need to take a pause.

"We can't rush to bring the algorithms to hospitals and clinics until we know they aren't making sexist or racist decisions," the researcher said.

The research has been published in The Lancet Digital Health.