Earlier this year, Google’s AlphaGo AI beat world champion Go player Lee Se-dol four games to one. This was a significant milestone, given the sheer number of positions that are possible within the game, and the difficulty of creating an AI that could evaluate them efficiently before the heat death of the universe. Now, Blizzard is teaming up with Google to teach a next-generation AI to play an actual computer game: StarCraft II.
At first glance, this might not seem to make much sense. After all, playing against an “AI” has been a feature of computer games for decades, in everything from first-person shooters to RPGs to chess simulators. The difference between game AI and the kind of AI Google is developing is simple: most of what we call artificial intelligence in gaming is remarkably bereft of anything resembling intelligence. In many titles, increasing the difficulty level simply gives the computer player more resources, faster build times, or inside information about player activities, or loosens constraints on how many actions the CPU can perform simultaneously. That turns the bots into overpowered thugs, but it doesn’t really make them better at what they do.

Game AI typically makes extensive use of scripts to determine how the computer should respond to player activity (we know StarCraft’s AI does this because it has been studied in a great deal of depth). At the most basic level, this consists of a build order for units and buildings, plus some rules for how the computer should respond to various scenarios. To seem even somewhat realistic, a game AI has to be capable of responding differently to an early rush, to an expansionist player who builds a second base, and to a player who turtles up and plays defensively. In an RPG, a shopkeeper might move around his store until he notices you stealing something, at which point a new script governs his responses to the player.
Game AI, therefore, is largely an illusion, built on scripts and carefully programmed conditions. One critical difference between game AI and the type of AI that DeepMind and Blizzard want to build is that game AI doesn’t really learn. It may respond to your carrier rush by building void rays, or counter your siege tanks with a zergling rush. But the game isn’t actually learning anything at all; it’s just reacting to conditions. Once you quit the match the computer doesn’t remember anything about your play, and it won’t make adjustments to its own behavior based on who it’s facing.
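The scripted, reactive behavior described above can be boiled down to a lookup table of canned counters. Here is a minimal sketch of the idea; the strategy and response names are illustrative, not Blizzard's actual scripts:

```python
# A sketch of scripted game "AI": hand-authored counters keyed to whatever the
# computer scouts. Nothing is learned; unrecognized situations fall back to a
# default build order, and nothing persists between matches.

SCRIPTED_COUNTERS = {
    "carrier_rush": "build_void_rays",
    "siege_tanks": "zergling_rush",
    "early_rush": "build_static_defense",
    "second_base": "pressure_expansion",
}

def respond(scouted_strategy: str) -> str:
    """Return the pre-written counter for a scouted strategy, if one exists."""
    return SCRIPTED_COUNTERS.get(scouted_strategy, "default_build_order")

print(respond("carrier_rush"))   # build_void_rays
print(respond("proxy_gate"))     # default_build_order (no script for this)
```

However elaborate the table gets, the structure is the same: fixed conditions mapped to fixed responses, with no memory of who the opponent is or how they played last time.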
The AI that Google and Blizzard want to build would be capable of learning, adapting, and even teaching new players the ropes of the game in ways far beyond anything contemplated by current titles. It will still be important to constrain the AI in ways that allow humans to win, since games like StarCraft are (to a computer) basically just giant math problems, and an unconstrained CPU opponent can micro at speeds that would make the best Korean players on Earth weep.
According to Oriol Vinyals, a research scientist with Google DeepMind, the company is looking forward to the challenge. “It’s a game I played a long time ago in quite a serious way,” Vinyals told Technology Review. “And as a player, I can attest that there are many interesting things about StarCraft. For instance, an agent will need to learn planning and utilize memory, which is a hot topic in machine learning.”
It’s still not clear how easily these initiatives could be translated back into shipping games; Google’s AlphaGo runs on the company’s custom tensor processing units (TPUs), chips built to accelerate its TensorFlow machine learning framework, alongside a varying number of CPU and GPU cores ranging from 48 CPUs and one GPU to 1,920 CPUs and 280 GPUs. Either way, you’re not going to be setting up a home system to handle your gaming unless you happen to live in a server room. This doesn’t mean that computer games couldn’t benefit from these kinds of projects, though. If Blizzard can teach an AI how to play StarCraft, it may well be able to teach the AI how to generate scripts and decision trees that accurately model its own play.
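What would "generating a decision tree that models its own play" look like in practice? A rough, hypothetical sketch: the heavyweight trained agent is distilled into a small tree of threshold checks that could ship with a game and run on consumer hardware. All the branch conditions and thresholds below are invented for illustration:

```python
# Hypothetical distilled policy: a compact decision tree whose branches would
# be derived from a trained agent's observed play, rather than hand-authored
# by designers. Thresholds and action names here are made up.

def distilled_policy(enemy_army_size: int, own_worker_count: int) -> str:
    if enemy_army_size > own_worker_count:
        # The learned agent prioritized survival when outnumbered.
        return "build_defenses"
    if own_worker_count < 16:
        # It consistently built economy up to a saturation point.
        return "build_workers"
    # Once the economy was established, it expanded.
    return "expand"

print(distilled_policy(30, 10))  # build_defenses
print(distilled_policy(5, 10))   # build_workers
print(distilled_policy(5, 20))   # expand
```

The appeal of this approach is that the expensive learning happens once, in the data center; the cheap, fixed tree it produces is what players would actually face.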
The idea of an AI that teaches a game how to play StarCraft II against humans might sound like science fiction, and neither Google nor Blizzard has proposed anything quite this advanced. But it wouldn’t surprise me if that’s the big-picture, long-term idea. After all, what’s the point of teaching a computer to play StarCraft II if humans never get to play against it?