Many a statesperson has argued that diplomacy is an art, one that requires not just strategy but also intuition, persuasion, and even subterfuge: human skills that have long been off-limits to even the most powerful artificial intelligence approaches. Now, an AI has shown it can hold its own against human players at the board game Diplomacy, which demands both strategic planning and verbal negotiation. Researchers say the work could point the way toward virtual exercise coaches and dispute mediators, and perhaps even international chatbot diplomacy is not far off.

A computer scientist at DeepMind who has worked on the game Diplomacy but was not involved in the new research calls the results amazing, adding that Diplomacy is an excellent environment for studying cooperative artificial intelligence, in which machines don't just compete but work together.

Artificial intelligence has already beaten humans at games of strategy such as chess, Go, and poker. Natural-language processing, meanwhile, has proved powerful enough to generate humanlike text and carry on conversations. The game of Diplomacy requires both. Seven players compete for control of Europe: after discussion with the other players, each issues orders for the movement of army and naval units, and success hinges on building trust. Both John F. Kennedy and Henry Kissinger enjoyed the game.

AI had previously made headway in no-press Diplomacy, a version of the game in which players don't talk. The full game's combination of cooperation, competition, and open-ended dialogue is a much harder challenge for computers, and the new work is the first to achieve respectable results there. Noam Brown, a computer scientist at Meta who co-authored the paper, once thought success was a decade away; an artificial intelligence that could talk strategy with a person seemed like science fiction.

Meta's agent, CICERO, welds together two components: a dialogue module and a strategic reasoning module. Both were trained on large data sets of Diplomacy games that humans had played online.

The strategic reasoning module was further trained through self-play, learning to pick actions based on the state of the game, any previous dialogue, and the predicted actions of other players. The researchers also rewarded it for humanlike play, so that its moves wouldn't confuse the other players; conventions tend to make interactions easier.
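The humanlike-play reward can be pictured as regularizing action choice toward a policy that imitates human games: a move's expected value is traded off against how plausible the move is for a human. Below is a minimal toy sketch of that trade-off; the function name and numbers are illustrative inventions, and CICERO's actual planning algorithm (piKL) is considerably more involved.

```python
import math

def pick_action(expected_value, human_policy, lam=0.5):
    """Score each action by its expected value plus a bonus for being
    likely under a human-imitation policy; higher lam = more humanlike."""
    scores = {
        a: expected_value[a] + lam * math.log(human_policy[a])
        for a in expected_value
    }
    return max(scores, key=scores.get)

# A slightly weaker move that humans usually play can win out over a
# stronger but unconventional one.
ev = {"attack_belgium": 0.62, "hold_position": 0.60}
human = {"attack_belgium": 0.05, "hold_position": 0.70}
print(pick_action(ev, human, lam=0.5))  # prints hold_position
```

With `lam=0` the agent picks the raw value-maximizing move; raising `lam` pulls it toward conventional human play.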

The dialogue module also needed tuning. It was trained to imitate the kinds of things people say in games, conditioned on the state of the game, the previous dialogue, and what the strategic planning module intended to do. On its own, the agent learned to balance deception and honesty. In an average game it sent and received nearly 300 messages. One read: "How are you thinking Germany is going to open? I might have a chance at Belgium, but I need your help to get into Denmark."
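One simple way to picture conditioning a dialogue model on the game state and the planner's intent is to prepend both to the message history before generation. The sketch below is purely illustrative; the field labels and function are hypothetical, not CICERO's actual input format.

```python
def build_prompt(state, intent, history, speaker="FRANCE"):
    """Assemble a generation prompt: game state and planned move first,
    then the conversation so far, ending with the speaker's turn."""
    lines = [f"STATE: {state}", f"INTENT: {intent}"]
    lines += [f"{who}: {msg}" for who, msg in history]
    lines.append(f"{speaker}:")  # the model would continue from here
    return "\n".join(lines)

prompt = build_prompt(
    state="Spring 1901",
    intent="France: A Paris -> Burgundy",
    history=[("GERMANY", "Want to work together this year?")],
)
print(prompt)
```

Because the intent comes from the strategic module, the generated message stays tied to what the agent actually plans to do on the board.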

Jonathan Gratch, a computer scientist at the University of Southern California who studies negotiation agents, gave early guidance to a Defense Advanced Research Projects Agency program that is also trying to master Diplomacy. He notes that CICERO keeps its remarks and game play within the realm of human convention.

To test the agent, the researchers had CICERO play online games against humans; it ranked in the top 10% of players who had played at least two games. That performance caught the attention of Zhou Yu, a computer scientist at Columbia University who studies dialogue systems.

Gratch says the work is important, but he wonders how much CICERO's dialogue actually contributed to its success. About 10% of CICERO's messages were found to be inconsistent with its plan or the game state, which suggests it is saying a lot of nonsense. Yu, too, notes that CICERO sometimes responds with non sequiturs.

The work could lead to practical applications in niches that still require a human touch, Brown says. Virtual personal assistants might help consumers negotiate better prices on plane tickets. Gratch and Yu both see openings for agents that persuade people to make healthy choices, and negotiating agents could help mediate disagreements.

The researchers also see risks: similar agents could manipulate people. But Gratch argues the danger can be defused with transparency: let people know they are interacting with an artificial intelligence and that it won't lie to them. Then people are consenting, and there is no deception.