Thursday, August 14, 2008

Chess, Computers, and Crystal Balls

I've written before about the significance (or lack thereof) of Deep Blue's now 11-year-old victory over Garry Kasparov, but this is a topic that Eripsa and I invariably end up arguing over, so my recent three weeks working with him have made me think about the issue again, and I think I've come up with a few additions to my argument.

Briefly, my position is this. Contrary to what some functionalists would have us believe, Deep Blue's "victory"--while undoubtedly a great achievement in design--isn't terribly significant in any deep way. However, I also don't think Dan Dennett is right in saying that the question isn't interesting because human brains aren't unique in the first place: instead, it seems wrong to me to call what happened "Deep Blue's victory" at all, as Deep Blue was never in the game to begin with. Playing chess with a computer is no more a competitive affair than a game of tetherball is a competition with the pole or a game of solitaire is a competition with the cards. To truly participate in a game is an inherently intentional act--that is, an act that requires the ability to understand how one thing can be about or stand for another--and digital computers are fundamentally incapable of intentionality. In other words, to ascribe a victory over Garry Kasparov to Deep Blue is tacitly to treat Deep Blue as an independent agent capable of its own successes and defeats, and that doesn't seem like the right way to talk about machines.

Clearly something is happening here--that is, Kasparov is really doing something when he sits down at the chessboard with Deep Blue and its handlers--so if that something is not a game between man and machine, then what is it? While I still find the above argument (at least in its non-brief form) compelling, it occurs to me that it is a strictly negative argument--it contends that Deep Blue is not playing a game at all and so has no real "victory" over Kasparov to speak of--leaving the question of what is actually going on unanswered. It is this question I wish to try to address here.

Suppose you and I are standing in a room with hardwood floors arguing about whether or not the floor is level. To settle the issue, I pull out a flawless crystal ball and set it carefully in the center of the room, knowing that if the floor really isn't level, the ball will roll down the incline, however slight; sure enough, the ball rolls off to the south, and we agree that we really do need to deal with that sinkhole out back. What's happened here? On a strong externalist account like Andy Clark's, I've externalized some of my cognition into a tool, letting it do the information processing for me in a way that my un-extended meat mind just couldn't: this is the position that lies at the root of the intuition that Deep Blue is an agent in its own right, capable of playing chess, and it is this position against which I want to argue.

Rather than somehow externalizing my cognition, it seems to me that I'm simply cleverly manipulating my environment in order to make my internal cognition more powerful. When I set the ball in the middle of the room, it is with the knowledge that--thanks to the action of some basic physical laws--one sort of result will occur if the floor is level and another sort will occur if it is not. In short: I don't know whether the floor is level, but I do know that if it is not, the ball will roll downhill; since I can certainly see whether the ball moves, I infer that placing it in the middle of the floor is a good way to find out whether there is a tilt. The ball is not doing any information processing of its own, nor is it some kind of metaphysical receptacle for my own cognition; it is just a reliable indicator that I can use to make a judgment about the environment around me.

Let's extend (so to speak) this argument to computers in general (and Deep Blue in particular). A digital computer is a physical system just like the crystal ball--albeit a much more complex one--so the analogy is preserved: any apparent "information processing" done by the computer (that is, any native cognition OR extended cognition) is nothing more than a very complicated ball rolling down a very complicated hill. A computer isn't actually doing anything cognitive; it's just a physical system whose operation is reliable enough that I can use it to help me make certain judgments about the environment. Given a hill, the ball will--just in virtue of what it is--roll, and given certain inputs, the digital computer will--just in virtue of what it is--give certain outputs. In both the case of the ball and the case of the computer, the tool's interactions with the environment can be informative, but only when interpreted by a mind capable of consciously attaching significance to that interaction. That's all a computer is, then: a physical system we use to help us make judgments about the environment.
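To put the analogy in concrete terms, here is a toy sketch--with names and values invented purely for illustration--of the point that both the ball and the computer are deterministic systems whose outputs only become "information" when an observer attaches significance to them:

```python
# A toy illustration (invented names and values): the system deterministically
# maps input conditions to behavior, just as physics maps a tilted floor to a
# rolling ball. Nothing here is "processing information" on its own.

def ball_on_floor(tilt_degrees: float) -> str:
    """Physical law guarantees the outcome given the input conditions."""
    if tilt_degrees > 0:
        return "rolls south"
    if tilt_degrees < 0:
        return "rolls north"
    return "stays put"

# The ball just does what it does; "the floor isn't level" is a judgment
# the observer makes by interpreting the output.
observation = ball_on_floor(0.5)
floor_is_level = (observation == "stays put")
print(observation, floor_is_level)  # rolls south False
```

The computer case is this same sketch writ large: a vastly more complicated function, but still one whose outputs mean nothing until a mind interprets them.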

That still doesn't address the central question, though, of what exactly is going on in the Deep Blue vs. Kasparov game (or of what's going on when anyone plays a computer game, for that matter). Clearly Kasparov at least is doing something cognitive (he's working hard), and clearly that something is at least partially based on the rules of chess, but if he's not playing chess with Deep Blue, then--at the risk of sounding redundant--what is he doing? Perhaps he is, as others have argued, actually playing chess with Deep Blue's programmers (albeit indirectly). I've advanced this argument before, and have largely gotten the following response.

Kasparov can't actually be playing against Deep Blue's programmers, because the programmers--either individually or collectively--wouldn't stand a chance in a match against Kasparov, whereas Deep Blue was able to win the day in the end. If the competition really were between Kasparov and the people behind the design and development of Deep Blue, those people should, at least as a group, be able to perform as well as Deep Blue itself did in the chess match. This is an interesting objection, but one that I do not think ultimately holds water. To see why, I'll beg your pardon for engaging in a bit more thought experimentation.

I'm not much of a chess player. I know the rules, and can win a game or two against someone who is as inexperienced as I am, but those wins are as much a product of luck as anything I've done. Kasparov would undoubtedly mop the floor with me even with a tremendous handicap--say, the handicap of not being able to see the chess board, but rather having to keep a mental model of the game and call out his moves verbally. I have, as I said, no doubt that I would be absolutely annihilated even given this advantage, but we can certainly imagine a player much more skilled than I am: a player who would tax Kasparov more, one whom he could reliably beat in a normal chess match, but to whom he might risk losing were he denied the environmental advantage of being able to use the board as an aid to represent the current state of the game. The board (and who has access to it) is making a real difference in the outcome of the game--are we to say, then, that it is a participant in the game in the same way that Deep Blue is? In the case where our mystery challenger beats Kasparov, does the board deserve credit in the victory? It does not seem to me that it does.

Here's another example of the same sort of thing. Suppose I challenge you to an arithmetic competition to see which of us can add a series of large numbers most quickly. There's a catch, though: while I can use a pen and paper in my calculations, you have to do the whole thing in your head. You'd be right to cry foul at this, I think--the fact that I can engage in even the rudimentary environmental manipulation of writing down the figures as I progress in my addition gives me an enormous advantage, and might allow me to win the contest when I otherwise would have lost. This is true in just the same way that Kasparov might lose a chess game to an "inferior" opponent if that opponent were able to manipulate the environment to aid him in a way that Kasparov was not (say, by using a chess board to help keep track of piece position).

I suspect that most of you can now see where I'm going with this, but let me make my point explicit: Deep Blue is nothing more than a very complicated example of its programmers' ability to manipulate the environment to give themselves an advantage. Contending that Kasparov couldn't have been matching wits against those programmers just because he could have mopped the floor with them if they'd been without Deep Blue is akin to saying that because Kasparov might lose to certain players that had access to the board when he did not (even if he'd beat them handily in a "fair fight"), the board is the important participant in the game, or that I'm simply better at arithmetic than you are because I can win the competition when I have access to pen and paper and you do not.

Deep Blue is its programmers' pen and paper--the product of their careful environmental manipulation (and no one manipulates the environment like a computer programmer does), designed to help them perform certain cognitive tasks (e.g., chess) better and more quickly. So with whom was Kasparov playing chess? On this view, the answer is simple and (it seems to me) clearly correct: he was playing against the programmers in the same sense that he would have been if they'd been sitting across the board from him directly--he just had a disadvantage: they were a hell of a lot better at using the environment to enhance their cognition than he was.

3 comments:

RaplhCramden said...

The significance of Deep Blue is that humans can design a machine that can beat any human in a chess game. We didn't used to be able to do that, and now we are able to do that. Whether that machine gets sad or angry or anxious or loves its children or fears death is NOT part of the significance of Deep Blue.

An implication of Deep Blue is that we can probably design machines that will be able to outperform at least some other genius humans at the things THEY excel at. Who knows which tasks will fall first: Tennis playing? Driving a race car? Leading a battle? Inventing new math? Teaching Philosophy?

Do we say about the car that wins a drag race that it wasn't really the car that won, because cars are just machines? Sure we do, but we as often as not mistakenly ID the driver as the flagship component rather than the designers. Perhaps Deep Blue is a fancy notepad that allows its handlers to beat Kasparov at chess?

Deep Blue and Kasparov are BOTH chess-playing machines. At least one of them is much more than that, but there is no gain in contemplating that here, or rather almost all of what is interesting about Deep Blue does not depend on Kasparov's non-chess activities.

Indeed, Kasparov the machine was either designed by an intelligence or evolved, at least those are the only two major theories I am aware of. It doesn't matter WHICH of those you choose, the point of Deep Blue is: there is a new top designer in town when it comes to chess playing machines. Whether we have beaten God or Darwin is a matter of some contention, but that we have won as designers of chess playing machines seems indisputable to me.

I agree with you, if your point is Deep Blue probably isn't conscious.

Inferring from this that no machine designed by humans can ever be conscious is a leap too far for me to take. From my point of view, if consciousness is in the machine of the brain, then 1) it could probably exist in other kinds of machines, and 2) even if it needed to be in organic electrochemical processors to get it, humans, with their silicon-based tools, will eventually be able to design those gooey machines as well.

Deep Blue proves, if nothing else, that stuff we couldn't do in the past we can do now. Whether the particular thing--designing and building a conscious machine--is one of them is, deductively, still an open question.

Mike

Derek said...

I agree with the idea that Deep Blue's victory is not significant in terms of advancing the understanding of cognitive processes or AI in general, but not because of the reasons in this post.

Deep Blue is essentially an expert system, a hyper-specialist specifically engineered to perform an extremely narrow function using algorithms (primarily brute force tree search) that do not scale or generalize to the wide range of cognitive skills we want to understand (invariant object recognition, language, motor interaction with real-world environments, etc.).
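To make "brute force tree search" concrete, here is a minimal sketch of the minimax idea that such engines scale up--the toy tree and leaf evaluations are invented purely for illustration, and nothing here resembles Deep Blue's actual code, which added alpha-beta pruning, a hand-tuned evaluation function, and custom hardware searching on the order of 200 million positions per second:

```python
# A minimal minimax sketch over an explicit toy game tree -- the skeleton of
# "brute force tree search". Internal nodes are strings; leaves are numeric
# static evaluations. All names and values are invented for illustration.

def minimax(node, tree, maximizing):
    """Return the best value achievable from `node` assuming both sides
    play perfectly against each other."""
    children = tree.get(node)
    if children is None:  # leaf node: return its static evaluation
        return node
    values = [minimax(child, tree, not maximizing) for child in children]
    return max(values) if maximizing else min(values)

# If we move to "a", the opponent replies with min(3, 12) = 3;
# if we move to "b", the opponent replies with min(2, 8) = 2.
tree = {
    "root": ["a", "b"],
    "a": [3, 12],
    "b": [2, 8],
}

print(minimax("root", tree, maximizing=True))  # -> 3 (so "a" is best)
```

The point stands either way: this kind of exhaustive search is a hyper-specialized trick, and deepening the tree tells you nothing about vision, language, or motor control.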

Jon says:

A digital computer is a physical system just like the crystal ball--albeit a much more complex one--so the analogy is preserved: any apparent "information processing" done by the computer (that is, any native cognition OR extended cognition) is nothing more than a very complicated ball rolling down a very complicated hill. A computer isn't actually doing anything cognitive; it's just a physical system whose operation is reliable enough that I can use it to help me make certain judgments about the environment.

But isn't a person just a "physical system", albeit a much more complex one? Why couldn't you use the same argument put forth here in an example where the programmers raised a small child and taught him to play chess well enough to beat Kasparov? Why wouldn't the trained prodigy just be an extension of their cognition?

No, Kasparov was playing Deep Blue, and he lost to Deep Blue, just as much as the mythological John Henry competed directly against the steam drill. The programmers may have imbued the machine with its properties, but Deep Blue did the work.

Jon said...

Good points both of you. I'm going to give this another day or two to garner more comments, then I'll post a response. I was particularly hoping someone would raise the "but aren't human brains just physical systems" response, Derek (the short answer is 'yes, but in a different way'). Thanks as always for the comments!