DeepMind, a Google sister company, is claiming a breakthrough for artificial intelligence after it mastered the complex video game StarCraft II.

The London-based AI research company's "AlphaStar" bots reached the top "grandmaster" tier of the StarCraft league, ranking them above 99.8 per cent of human players - often without their real-life opponents realising that they were playing against an algorithm.

"You could say we passed the Turing test of StarCraft," said David Silver, principal research scientist at Alphabet-owned DeepMind, referring to the widely used yardstick for machines that give the appearance of human-like intelligence.

A "real-time strategy" game, StarCraft has come to the fore as a testing ground for AI in recent years because marshalling resources, deploying troops and battling opponents requires both tactical responses and longer-range strategic planning.

Unlike in the board game Go, at which DeepMind's AlphaGo defeated world champion Lee Sedol in 2016, StarCraft players must operate with imperfect information about their opponent's moves. This "fog of war" brings StarCraft closer to the conditions an AI would face in the real world, making the lessons from AlphaStar applicable well beyond games.

"AlphaStar's neural network architecture can model very long sequences of likely actions, with games often lasting up to an hour, tens of thousands of moves and based on imperfect information," said Oriol Vinyals, a DeepMind research scientist and former StarCraft champion player, who led the AlphaStar project.

That could make the lessons from AlphaStar useful in weather prediction, language understanding, personal assistants or self-driving cars, he said.
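AlphaStar's real network is far more elaborate than anything that fits here, but the short Python sketch below illustrates the core idea in Mr Vinyals's description: a recurrent policy whose hidden state carries a summary of everything observed so far, so the agent can keep acting over a very long sequence of partial observations. It is a minimal sketch only - the class name, dimensions and random inputs are placeholders, not values from AlphaStar.

```python
import torch
import torch.nn as nn

class RecurrentPolicy(nn.Module):
    """A minimal recurrent policy: the LSTM's hidden state accumulates a memory
    of everything seen so far, which is how a sequence model can act under
    'fog of war' where each step reveals only part of the battlefield."""
    def __init__(self, obs_dim=64, hidden_dim=128, n_actions=10):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, hidden_dim)              # embed one partial observation
        self.core = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.action_head = nn.Linear(hidden_dim, n_actions)

    def forward(self, observations, state=None):
        # observations: (batch, time, obs_dim) - one partial view of the game per step
        x = torch.relu(self.encoder(observations))
        x, state = self.core(x, state)
        return self.action_head(x), state  # one action distribution per step

# A game lasting tens of thousands of steps is processed one chunk at a time,
# with the hidden state carried across chunks so nothing observed is forgotten.
policy = RecurrentPolicy()
state = None
for _ in range(3):                       # three chunks of 100 steps each
    chunk = torch.randn(1, 100, 64)      # stand-in for encoded game observations
    logits, state = policy(chunk, state)
print(logits.shape)                      # torch.Size([1, 100, 10])
```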

DeepMind was acquired by Google in 2014 for £400m but, despite investing hundreds of millions more since then, it remains in the very early stages of commercialising its technology beyond the confines of Alphabet.

Revenues in 2018 nearly doubled to £102.8m, according to its most recent UK accounts, but staff costs and other expenses rose by more than two-thirds, year on year, to £568m.

Earlier this year, news that co-founder Mustafa Suleyman was taking a prolonged leave of absence was widely seen as a sign that the balance of power was shifting between Google and DeepMind, which has long sought to protect its autonomy.

DeepMind says that it is working on dozens of projects within Alphabet, from its Waymo autonomous cars unit to improving the efficiency of its data centres.

Video games have long been used as a testing ground for the latest developments in AI. In April, bots developed by OpenAI, an Elon Musk-backed research company, beat a champion esports team at Dota 2, an online multiplayer game.

AlphaStar honed its play using a combination of general-purpose AI techniques, including "imitation learning" - in which the system learns by watching human players - and "reinforcement learning", in which it plays against itself. The neural network that reached grandmaster status required just a month and a half of training.
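As a rough illustration of that two-phase recipe, the sketch below applies the same idea - first imitate a replay of human moves, then improve through self-play - to a toy game of rock-paper-scissors standing in for StarCraft. It is a minimal sketch of the general technique, not DeepMind's implementation; the game, update rule and numbers are all assumptions chosen for brevity.

```python
import random
from collections import Counter

ACTIONS = ["rock", "paper", "scissors"]  # toy stand-in for StarCraft's action space

def beats(a, b):
    """Return True if action a beats action b in the toy game."""
    return (a, b) in {("rock", "scissors"), ("paper", "rock"), ("scissors", "paper")}

class Policy:
    """A tiny stochastic policy: one weight per action, normalised into probabilities."""
    def __init__(self):
        self.weights = {a: 1.0 for a in ACTIONS}

    def probs(self):
        total = sum(self.weights.values())
        return {a: w / total for a, w in self.weights.items()}

    def act(self):
        p = self.probs()
        return random.choices(ACTIONS, weights=[p[a] for a in ACTIONS])[0]

# Phase 1: imitation learning - fit the policy to a replay of "human" moves by
# matching their frequencies, the simplest analogue of learning from human games.
def imitate(policy, human_replays):
    counts = Counter(human_replays)
    for a in ACTIONS:
        policy.weights[a] = counts[a] + 1  # +1 keeps every action possible

# Phase 2: reinforcement learning via self-play - two copies of the policy play
# each other and the winning action is reinforced.
def self_play(policy, rounds=10_000, lr=0.05):
    for _ in range(rounds):
        a, b = policy.act(), policy.act()
        if beats(a, b):
            policy.weights[a] += lr
        elif beats(b, a):
            policy.weights[b] += lr

if __name__ == "__main__":
    policy = Policy()
    imitate(policy, ["rock"] * 60 + ["paper"] * 30 + ["scissors"] * 10)
    print("after imitation:", policy.probs())
    self_play(policy)
    print("after self-play:", policy.probs())
```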

"There is really no programming," Mr Vinyals said.

DeepMind won at StarCraft while operating under the same conditions as human players, including a restricted view of the battlefield and limits on how frequently it could issue mouse clicks.

Operating under those constraints also helps DeepMind's algorithms to work better alongside humans, increasing the system's broader potential, he said.
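One of those human conditions is a ceiling on how quickly actions can be issued. DeepMind's published constraints are more detailed than this, so the snippet below is only a hypothetical, minimal version of such a limiter: a rolling window that refuses further actions once a per-window budget is spent. The cap and window length are illustrative values, not DeepMind's.

```python
import time
from collections import deque

class ActionRateLimiter:
    """Caps how many actions an agent may issue per rolling time window,
    a rough stand-in for the click-frequency limits human players face.
    The defaults below are illustrative, not DeepMind's settings."""
    def __init__(self, max_actions=22, window_seconds=5.0):
        self.max_actions = max_actions
        self.window = window_seconds
        self.timestamps = deque()

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Drop actions that have fallen out of the rolling window.
        while self.timestamps and now - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        if len(self.timestamps) < self.max_actions:
            self.timestamps.append(now)
            return True
        return False  # the agent must wait, just as a human's hands would

limiter = ActionRateLimiter()
allowed = sum(limiter.allow(now=t * 0.01) for t in range(1000))  # 10 simulated seconds
print(allowed)  # roughly two windows' worth of actions get through
```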

"The success of AlphaStar in StarCraft II suggests that general-purpose machine learning algorithms may have a substantial effect on complex real-world problems," DeepMind's researchers wrote in a peer-reviewed paper published in Nature this week.
