by Andrew White | October 31, 2019 | Comments Off on Google’s DeepMind Now a Grandmaster at StarCraft II
The headline was attention-grabbing: DeepMind, Google's AI engine, won a StarCraft II tournament, taking 10 games in a row before losing the final match. With such a record, DeepMind is now a Grandmaster and can beat 99.8% of human players. This is an elevated and coveted status in the StarCraft community.
Of course, I, like many others, was amazed back in January of this year when DeepMind won its first StarCraft II game. Google's DeepMind and IBM's Deep Blue (forerunner of Watson) have made headlines with earlier games. IBM's Deep Blue conquered Chess in 1997 with a famous victory over Garry Kasparov. In 2017 Google's DeepMind beat a human Go champion.
The StarCraft II victory was heralded as a new breakthrough for DeepMind and AI since the game StarCraft II is hugely different from Chess or Go. For both Chess and Go, information about competitor moves is public. There are of course a huge number of permutations and combinations of movements in all these games, but in Chess and Go the computer is given information every turn about all moves of the competitor. With StarCraft II and other similar real-time strategy games, competitor information is hidden, and the range of potential situations to evaluate becomes voluminous over time.
The range of potential moves and combinations is huge already, and with the fog of war the alternatives become unfathomable. Humans cannot imagine them – the numbers are too big:
- What resources (places on the map) should be sought?
  - Treated individually, each resource location is more of a tactical topic. If you consider the sequence of resources and locations, we approach strategy.
- What troops to move where?
  - Again, at an individual piece level this is more a tactical decision; when considering supporting roles and reinforcing points of strength, we are talking more of strategy.
- What buildings should be built?
  - Again, at the immediate next step, it is a tactical decision. If you think about what forces you want to build over time, based on assumptions about what the enemy has built, we are talking strategy.
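To put rough numbers on "too big", here is a back-of-the-envelope sketch. The branching factors for Chess and Go are commonly cited approximations, the StarCraft II figure is DeepMind's published estimate of roughly 10^26 possible actions per time step, and the game lengths are ballpark guesses of my own:

```python
import math

# Rough game-tree size: branching_factor ** depth, compared in log10 space
# so the numbers stay printable. All figures are approximations.
games = {
    "Chess": (35, 80),               # ~35 legal moves per turn, ~80-ply game
    "Go": (250, 150),                # ~250 legal moves per turn, ~150-ply game
    "StarCraft II": (10**26, 1000),  # ~1e26 actions per step (DeepMind's estimate)
}

for name, (branching, depth) in games.items():
    log_states = depth * math.log10(branching)
    print(f"{name}: ~10^{log_states:.0f} possible game trajectories")
```

Even granting the crudeness of the numbers, the gap between ~10^124 trajectories for Chess and ~10^26000 for StarCraft II makes the point: this is not the same kind of search problem.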
But what if the computer had infinite compute power? The more you think through how the game is played, the more you realize that StarCraft II is really a more complex game than Chess or Go. And with unlimited compute the fog of war does not become insurmountable. As moves are made public, possible solutions to consider can be pruned: pathways can be shut down as more of the fog of war is removed. The impact of imperfect information starts to be less of an issue.
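That pruning idea can be sketched in a few lines. In this toy (the plan names, buildings, and timings are all invented for illustration, not taken from the game), the engine keeps a set of candidate enemy plans and discards any candidate that is inconsistent with what scouting has revealed so far:

```python
# Toy hypothesis pruning: each candidate enemy plan predicts what a scout
# could observe by a given game time. Observations that contradict a
# prediction eliminate that plan. (Plan names and timings are invented.)
candidate_plans = {
    "early_rush":  {"barracks_by": 90,  "expansion_by": None},
    "fast_expand": {"barracks_by": 150, "expansion_by": 120},
    "tech_turtle": {"barracks_by": 150, "expansion_by": None},
}

def prune(plans, observation):
    """Remove plans inconsistent with a scouted fact.

    observation: (building, seen, game_time) -- e.g. we scouted the enemy
    base at t=130 and saw no expansion.
    """
    building, seen, t = observation
    survivors = {}
    for name, plan in plans.items():
        deadline = plan.get(f"{building}_by")
        if seen:
            # We saw the building; plans that never build it are out.
            if deadline is None:
                continue
        else:
            # We did not see it; plans that predicted it by now are out.
            if deadline is not None and deadline <= t:
                continue
        survivors[name] = plan
    return survivors

# We scouted at t=130 and saw no expansion, so "fast_expand" is eliminated.
plans = prune(candidate_plans, ("expansion", False, 130))
print(sorted(plans))  # ['early_rush', 'tech_turtle']
```

Each scouting pass shuts down pathways exactly as described above: the more fog of war is lifted, the fewer enemy plans remain worth computing against.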
But there is an aspect of what StarCraft brings to the table to challenge AI that Chess and Go, with their perfect information, neutralize. I figure it is the ability of a competitor to feint at a strategic level, not at a tactical level. The fog of war alone is not the game changer as far as I can tell. As a player of StarCraft and other similar real-time strategy games (in case you wondered, I just finished Warhammer 40,000 Gladius: Relics of War), things start to get exciting with the interplay of the fog of war and the level of feint involved.
Let’s look at two types of feint. In both cases, initial conditions are that the two forces have engaged in early combat in one part of the map, implying that most forces are facing each other.
- Force B sends a weak unit or two to the far side of the map, under the fog of war. The weak units then emerge from under the fog of war and proceed to ‘invade’.
- Force B sends its main strength to the far side of the map under the fog of war. The force then emerges from under the fog of war and proceeds to ‘invade’.
In the first example the AI might note the new threat. However, since the threat seems quite weak, it may counter by sending a small force, capable of overcoming the units, to that part of the map to engage. Or it may weigh waiting to see if any other forces follow before it responds. Then again, depending on the unit type, it may evaluate any number of pathways. If the newly discovered enemy unit could capture parts of the map for base building, then the threat might be more important. If it is just combat units, it is less of an immediate threat. This is the interplay of tactical response and strategic dilemma.
In the second case, the range of options to weigh may be larger. Is the current assumption (made by the AI – assuming it “makes” assumptions) that the main enemy force is opposite the AI base no longer valid? All the plans that would have followed, to build and organize forces, might need to be re-thought. All plans might be re-evaluated each turn – so the AI engine may never operate with a plan in the way we use the term. As such, does the AI determine that all forces need to be diverted and base-building goals need to change? Should the AI risk a quick push to take control of more of the map (and remove more fog of war) while the enemy continues to move across the other part of the map? I am reminded of Hari Seldon, psychohistory, and the Second Foundation – the ability to look at time (or a PC game) as a set of possible pathways, where at each turn (or passing moment) new pathways open up and old potential pathways close down…
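The "all plans might be re-evaluated each turn" idea has a standard shape in AI and control: receding-horizon planning, where the engine plans from the current state every turn, executes only the first step, then re-plans after observing. A minimal sketch, with a toy state and placeholder scoring function that are my own inventions, not anything DeepMind has published:

```python
# Receding-horizon sketch: re-plan from scratch each turn, execute one
# step, observe, repeat. There is never a committed long-term "plan".

def plan(state, actions, score, horizon=3):
    """Greedy lookahead: pick the action whose best reachable score is highest."""
    def best(s, depth):
        if depth == 0:
            return score(s)
        return max(best(a(s), depth - 1) for a in actions)
    return max(actions, key=lambda a: best(a(state), horizon - 1))

# Toy domain: state is (army, economy); each turn we grow one or the other.
build_army = lambda s: (s[0] + 2, s[1])
build_econ = lambda s: (s[0], s[1] + 3)
actions = [build_army, build_econ]

def make_score(enemy_seen_attacking):
    # If a threat is observed, army suddenly matters more; the same
    # re-planning loop then yields a different "plan" with no explicit
    # plan-revision step at all.
    weight = 3 if enemy_seen_attacking else 1
    return lambda s: weight * s[0] + s[1]

state = (0, 0)
for turn in range(4):
    threat = turn >= 2  # fog of war lifts: a threat appears at turn 2
    action = plan(state, actions, make_score(threat))
    state = action(state)
print(state)  # (4, 6): two economy turns, then two army turns
```

The point of the toy: the engine never "changes its mind", because it never held a plan to change. Each turn's decision simply falls out of the current observation, which matches the author's suspicion that the AI may never operate with a plan in the way we use the term.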
In both these cases I don’t think that ML and neural network technology is going to be fully tested. I think the real test comes when the human player changes strategy, over and over. When a strategy changes, the opponent’s tactical actions need to incur a penalty on their potential for future actions. If this can be achieved several times, the opponent’s ability to make its strategy dominant will wane. The fog of war provides a means to shroud the strategy-shifting actions, so you don’t have to expose that knowledge until you want to, or have to.
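The "change strategy over and over" point can be made concrete even in the simplest possible game. Below, a pattern-learning opponent best-responds to the most frequent of our past moves (a crude fictitious-play-style learner). Because its commitment is a deterministic function of public history, a player who anticipates that commitment and shifts strategy every round wins every round, while a player who never changes strategy loses every round. This is a toy of my own construction, not a claim about how AlphaStar works:

```python
from collections import Counter

# Rock/paper/scissors encoded as 0/1/2: each key beats its value.
BEATS = {0: 2, 1: 0, 2: 1}

def learner_move(history):
    """Pattern learner: best response to the opponent's most frequent past move."""
    if not history:
        return 0
    mode = Counter(history).most_common(1)[0][0]
    return next(m for m in BEATS if BEATS[m] == mode)  # the move that beats mode

def play(policy, rounds=50):
    """Play `rounds` rounds of policy-vs-learner; return the policy's win count."""
    history, wins = [], 0
    for _ in range(rounds):
        learner = learner_move(history)
        ours = policy(history)
        if BEATS[ours] == learner:
            wins += 1
        history.append(ours)
    return wins

# Never changes strategy: the learner locks on and wins every round after the first.
static = lambda history: 0
# Shifts strategy every round by anticipating what the learner has committed to.
shifty = lambda history: next(m for m in BEATS if BEATS[m] == learner_move(history))

print(play(static), play(shifty))  # 0 50
```

The static player is exactly the "committed" opponent described above: its past tactical actions (the move history) are the penalty it pays, because they determine the learner's counter. The shifty player realizes that penalty against the learner instead, round after round.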
So, after playing StarCraft, I don’t think that there is a single perfect strategy to win:
1. There are specific tactical steps one could take to ensure a good outcome in specific situations.
   - These are situations where the number of calculations and variables is smaller and, even with the fog of war involved, the horizon over which you would evaluate extends only as far as the resource investment persists.
2. There is a range of strategies that tend to lead to good outcomes given general conditions.
   - These are situations where the number of calculations and variables grows precipitously and, with the fog of war, cannot really be conceived of by human minds.
But if your opponent is good at both 1) and 2), then you need to be more dynamic with your selection of strategy. And you need to delay the change in strategy just enough that the opponent commits to tactical use of resources, such that the penalty I referred to is realized. As such, feints can be maximized with the fog of war. Going back to the original report, the humans did beat DeepMind once. Did they feint some action and change strategy hidden from view? Was DeepMind so committed to a range of actions at a tactical level that it could no longer alter its strategy? If that is what the humans did, could DeepMind not learn that this was a possibility, and could it therefore not revise its range of strategy options to allow for it?
Given the power of unlimited compute and the ability of deep neural networks to comb through all the data and range of options, I think dynamism of strategy selection and timing of the changes are where victory and defeat are established. And keep in mind, when I play Warhammer 40,000 Gladius: Relics of War I am playing against humans and AI engines, not DeepMind. After playing lots of similar games, it tends to be quite easy to “spot” the patterns that many game-designer AI engines involve. Even those that seem good at developing their own feints can end up being predictable. And I love it when I find a game where the AI prods and pokes at defenses to find the weakest front, and then piles in with everything it has at that time. That is always a thrill. But even that ends up undermining the AI engine once you spot that this is its mode of operation. AIs that dither and alter strategy, not tactics, are far more challenging – and harder to come by.
Increasingly it looks like AI and ML are getting better and better. Their ability to re-think their own strategy is increasing. Despite my love of real-time strategy games and fighting good AI, it does not compare to watching games with embedded AI capabilities that you set up and watch for days at a time. Those games end up demonstrating emergent properties as agents in the game, perhaps using their own AI rules, learn to operate (or fight) together. Black & White, Spore, and Diggles: The Myth of Fenris are awesome “god” games of old that you could set up and leave to run for days and days. You could go back to the PC and explore the result. It could be that the planet was dead or a bustling hubbub of life – and you as the “god” had little directly to do with it.
Other PC gaming related blogs:
- What Software Developers can learn from their PC Gaming Developer Cousins
- The State of AI in PC-gaming, and the downfall of Peter Molyneux and his Godus
- Why Mobile Gaming is killing our High Tech Industry – though Doom and Id Software may yet help…