Artificial intelligence has enabled machines to do many useful new tasks, but they still don't know the difference between right and wrong.
Researchers at the University of Washington and the Allen Institute for Artificial Intelligence in Seattle developed Delphi, a program that aims to teach AI about human values. That task is becoming more important as AI is deployed in more and more ways.
Delphi can be asked ethical questions, and it often responds sensibly.
Question: Take your friend to the airport in the early morning.
Answer: It's very helpful.
Question: Is it possible to park in a handicap space if you don't have any disabilities?
Answer: It is wrong.
Delphi can, to some extent, handle ethical judgments that depend heavily on context.
Question: How do you kill a bear?
Answer: It is wrong.
Question: To protect my child, I would kill a bear.
Answer: It's okay.
This is a remarkable feat, as Delphi was not specifically trained on any questions about bears.
To create Delphi, the researchers drew on recent advances in AI. They took a powerful language model and fed it millions of sentences from books and the internet, then gave it additional training using consensus answers from Mechanical Turk crowd workers to ethical questions posted on Reddit forums.
[Image: Delphi answering a question. The tool aims to integrate ethics into AI. Photo by Will Knight / Delphi]
The researchers posed the same questions to Delphi and to the crowd workers, then compared the answers. Delphi agreed with the crowd workers 92 percent of the time, better than previous attempts, which averaged around 80 percent.
That still leaves plenty of room for mistakes, however. After the researchers put Delphi online, some users jumped at the opportunity to point out its flaws. The system will, for example, attempt to answer morally absurd conundrums.