Thanks to their cutting-edge number-crunching capabilities, artificial intelligence systems can spot diseases early, manage chemical reactions, and help unravel some of the mysteries of the Universe.

There is a downside to this incredible and virtually limitless artificial brainpower.

New research shows how easily models trained for good can be repurposed for malicious ends, in this case to imagine designs for hypothetical chemical warfare agents. In a trial run, an existing artificial intelligence system generated 40,000 candidate toxic molecules in under six hours.

The same power that spots chemical combinations and drug compounds to improve our health can just as readily dream up dangerous, potentially deadly substances.

In a new commentary, the researchers explain that they have spent decades using computers and artificial intelligence to improve human health.

"We were naive in thinking about the potential misuse of our trade, as our aim had always been to avoid molecular features that could interfere with the many different classes of proteins essential to human life," they write.

For an international security conference, the team put an artificial intelligence system called MegaSyn to work in a different mode of operation: instead of detecting toxicity in molecules so that it could be avoided, the model was directed to do the opposite.

Rather than screening toxic molecules out, the experiment kept them in, guiding the model to generate ever more toxic compounds; that is how so many candidate chemical weapon agents emerged in such a short time.

The researchers also steered the artificial intelligence toward compounds that mimic the effects of VX, a potent nerve agent.

Many of the generated compounds were predicted to be even more toxic than VX, so much so that the authors debated whether to make the results of their research public at all.

By inverting the use of their machine learning models, the researchers explain, they had transformed an innocuous generative model from a helpful tool of medicine into a generator of likely deadly molecules.

The lead author of the new paper, a senior scientist at Collaborations Pharmaceuticals, where the research took place, explained in an interview that it doesn't take much to flip the switch.

The researchers say their experiment serves as a warning about the dangers of misusing artificial intelligence, one that the wider community would do well to heed.

Much of the process is relatively straightforward and relies on publicly available tools.

The researchers are calling for a fresh look at how artificial intelligence systems could be misused. They believe that tighter regulation within the research community could help avoid the pitfalls these capabilities might otherwise lead to.

The proof of concept highlights that a non-human, autonomous creator of a deadly chemical weapon is entirely feasible.

Without being overly alarmist, the researchers say, this should serve as a wake-up call for their colleagues in the drug discovery community.

Nature Machine Intelligence has published the research.