Chess – the game of Kings.
A game of skill, strategy, and generally, little luck (at the highest levels).
“In his book Chess Metaphors, Diego Rasskin-Gutman points out that a player looking eight moves ahead is already presented with as many possible games as there are stars in the galaxy. Another staple, a variation of which is also used by Rasskin-Gutman, is to say there are more possible chess games than the number of atoms in the universe.” 1
Chess must be the quintessential big data problem! And Deep Blue was tuned to exploit a computer’s ability to crunch through thousands, even millions, of moves, patterns, and games in order to outsmart the world’s greatest chess champion.
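The scale in the quote above can be sanity-checked with a quick back-of-the-envelope calculation. As a rough sketch (the branching factor of ~35 legal moves per ply is a commonly cited estimate, not a figure from the source, and "eight moves ahead" is read here as eight full moves, i.e. 16 ply):

```python
# Rough illustration of chess's combinatorial explosion.
# Assumptions (not from the source): ~35 legal moves per ply on average,
# and "eight moves ahead" meaning eight moves by each side (16 ply).
BRANCHING_FACTOR = 35
PLIES = 16  # eight moves by each player

continuations = BRANCHING_FACTOR ** PLIES
stars_in_galaxy = 10 ** 11  # order-of-magnitude estimate for the Milky Way

print(f"~{continuations:.2e} possible continuations after {PLIES // 2} moves each")
print(f"vs roughly {stars_in_galaxy:.0e} stars in the galaxy")
```

Even under these crude assumptions the number of continuations dwarfs the number of stars in the galaxy by many orders of magnitude, which is the quote's point.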
Man versus Machine. In 1996 Kasparov faced Deep Blue in the first of the “man versus machine” chess matches. Kasparov was victorious, 4-2. It seemed that man was safe, for now. But a year later, after a lot of tuning of Deep Blue by some very smart programmers, there was a rematch. The world was stunned when the match ended in a victory for Deep Blue. In fact, the manner of Kasparov’s defeat was also of considerable importance for the discussion of Man versus Machine. It went like this…
Game 1 May 3rd 1997
Kasparov was a known aggressor. He was also at the top of his game. He knew that Deep Blue would be better than before, but he did not know how or where this “better” would show. So through the first 14 moves by each player, no pieces had been taken, and remarkably Kasparov’s pieces had not moved beyond the third rank. Nothing had advanced to the middle of the board. Kasparov was prodding, testing, looking for a character perhaps.
Eventually a piece was offered by Kasparov. Would the computer take it? Yes. Then an exchange (i.e. like pieces) was offered and again the computer pounced. But Kasparov did not stick to one plan; he moved around the board, trying multiple different strategies and set pieces, and the lack of a single strategy seemed to confuse the computer. Professionals would later say that Deep Blue actually wasted time! At one point Deep Blue moved a bishop to one square, only to move it back to where it started a move later. This was not characteristic of a winning strategy.
Finally Kasparov set a trap, and Deep Blue followed “the book” and entered. The door snapped shut: Kasparov had a couple of pawns threatening promotion, and Deep Blue was in trouble. At this point, move 44, Deep Blue made what professionals at the time and since have called a mistake. It seems Deep Blue had run out of longer-term strategies and instead played a “safe” move with its rook, threatening nothing, at a time when its own position was perilous. Two moves later Deep Blue resigned. Why did it waste its 44th move? What was it thinking? We now know that this seemingly insignificant “bug” (?) would actually bring Kasparov down and hand victory in the entire match to Deep Blue. But at this point that was not known, and Kasparov was happy – for now – 1-0 Kasparov.
The biggest data cruncher of the day, solving what is arguably one of the largest big data problems ever, lost. Though it could think through every possible move, it could not appreciate the subtleties of an array of strategies designed not to win on their own, but to test out an understanding of how the computer played.
Game 2 May 4th 1997
This time Deep Blue teed off as white. The game developed a little more predictably, as each player drew on numerous patterns (i.e. previous games played) proven over the years. Kasparov relied on his limited human memory, enriched with innovation and change. Deep Blue could only create something ‘new’ once its algorithms predicted less-than-favorable outcomes from the predictable moves. For the computer, innovation was a last resort.
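That “innovation as a last resort” behavior can be sketched in a few lines. This is a conceptual illustration only, not Deep Blue’s actual architecture; the `opening_book`, `evaluate`, and `search` components here are hypothetical stand-ins:

```python
# Conceptual sketch (NOT Deep Blue's actual code): prefer "book" moves
# drawn from stored games, and fall back to a fresh search for a novel
# move only when every book move evaluates below an acceptability threshold.
def choose_move(position, opening_book, evaluate, search, threshold=0.0):
    book_moves = opening_book.get(position, [])
    # Rank the known moves by the engine's own evaluation of each one.
    scored = sorted(((evaluate(position, m), m) for m in book_moves), reverse=True)
    if scored and scored[0][0] >= threshold:
        return scored[0][1]  # a proven pattern still looks favorable
    return search(position)  # innovation as a last resort

# Toy usage with stand-in components:
book = {"start": ["e4", "d4"]}
move = choose_move("start", book,
                   evaluate=lambda p, m: {"e4": 0.3, "d4": 0.2}[m],
                   search=lambda p: "novelty")
print(move)  # → e4
```

The point of the sketch is the ordering of preferences: stored patterns first, and a computed novelty only when the patterns run out of promise, which matches the behavior described above.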
The first piece was taken after both players had moved 27 times. Another 18 moves each and it was all over. Deep Blue drew a resignation from the champion. However, on inspection, many other chess champions have since shown that Kasparov seems to have thrown the game! At move 43 Kasparov could have made a move that would have led to perpetual check. That would have been a draw. It seems Kasparov may have attributed more capability, or “brains,” to the computer than he should have. Perhaps Kasparov was overthinking? In the first game Kasparov had jumped from one recognized pattern to the next, throwing in, now and then, moves that didn’t conform to any book or game stored deep in Deep Blue’s memory banks. In game 2 Kasparov reverted to more formal play. Deep Blue had drawn Kasparov in and won. Interestingly, both games ended on turn 45.
The End Game
Games 3 (48 moves), 4 (56 moves), and 5 (49 moves) each ended in a draw. Reports suggest that Kasparov continued to mix it up with his somewhat unorthodox use of moves that didn’t quite conform to what the books said should happen. It seems sometimes this worked; other times it may have done as much damage to Kasparov’s own chances of building a winning platform.
Miraculously, game six ended in only 19 moves! Deep Blue won the game, and the match, 3.5 to 2.5. Looking at the game myself, you might have thought you could keep up with both players, at least up until move 10. The board looked very strange and underdeveloped. Even at this stage, a weak chess player like me would assume, on looking at the board, that Kasparov’s troops were better placed than Deep Blue’s. But by turn 17, Kasparov was led into trading his queen for less important pieces. Two moves later, history was made.
Many have since written that Kasparov was tired. Apparently he may have assumed that the computer had human traits and could keep up with his somewhat unpredictable moves. Kasparov played as if his opponent were human! In his efforts to confuse an opponent that could not actually ‘see’ his machinations, Kasparov may have forgotten how to play to win. Some writers even went as far as to say that Kasparov did not play as well as he could. That may be so too.
Deep Blue was a computer program designed, at this time, to beat Kasparov. It had lost to him in 1996, and IBM’s programmers were bent on improving Deep Blue for the rematch in 1997. Its horsepower was doubled. The data available to it, representing every conceivable game of any import, sat ready in its memory banks for matching, dissecting, reassembling, and so on. But Kasparov forgot it was a cold, unfeeling machine. We surmise now that the very idea of a bug existing in Deep Blue never entered Kasparov’s mind as Deep Blue made strange moves. Kasparov may have thought the computer was smarter than he was. And this led to mistakes, and a loss of confidence in his own capabilities.
Deep Blue was an amazing computer, backed by an amazing group of very smart programmers. But even they have their limitations.
Man versus Machine OR Man and Machine
“Although I [Kasparov] had prepared for the unusual format, my match against the Bulgarian Veselin Topalov, until recently the world’s number one ranked player, was full of strange sensations. Having a computer program available during play was as disturbing as it was exciting. And being able to access a database of a few million games meant that we didn’t have to strain our memories nearly as much in the opening, whose possibilities have been thoroughly catalogued over the years. But since we both had equal access to the same database, the advantage still came down to creating a new idea at some point.” 1
So when competitors have similar capabilities or technologies (or in this case, sources of truth for possible winning chess strategies), the differentiator comes down to the human and their application of the (same or similar) technology. Thus what you do with the tools you have is far more important and impactful than the tool itself.
Beyond Man v Machine
“In 2005, the online chess-playing site Playchess.com hosted what it called a ‘freestyle’ chess tournament in which anyone could compete in teams with other players or computers. The surprise came at the conclusion of the event. The winner was revealed to be not a grandmaster with a state-of-the-art PC but a pair of amateur American chess players using three computers at the same time. Their skill at manipulating and ‘coaching’ their computers to look very deeply into positions effectively counteracted the superior chess understanding of their grandmaster opponents and the greater computational power of other participants. Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.” 1
Clearly this example shows how a pure strategy (i.e. the “best” anything) does not necessarily win. One might even surmise that in any long-cycle game or set of interactions (think of a business cycle), a mixed or hybrid (and adaptive) strategy should win out over any number of pure strategies. Purity will, eventually, give itself away.
Thus information leaders need to think less about silver bullets and the best this or that. They would be better served trying to come up with a reasonable set of assets and investments, and then trying to optimize their application. The leaders that count will be those who figure out not the best technology, or even the best people, but the optimized balance of people (organization), process (strategy), and technology (capabilities).
The key points of this narrative:
- We need to think more of “man and machine” and forget concepts like “man v machine”
- Business leaders should not take, on blind faith, the outcomes of their business systems. There always needs to be some defined level of checks and balances (read: information governance)
- The ‘winningest’ strategies will not be pure; they will tend to be adaptive and even mixed.
- Information can only offer possibilities; leaders and stakeholders realize value
Notes and Sources
- The Signal and the Noise, Nate Silver, 2013.
- The Chess Master and the Computer, http://www.nybooks.com/articles/archives/2010/feb/11/the-chess-master-and-the-computer/?pagination=false
- Kasparov and Deep Blue, Bruce Pandolfini, Simon and Schuster, 1997