Thursday, August 14, 2008
Chess, Computers, and Crystal Balls
Briefly, my position is this. Contrary to what some functionalists would have us believe, Deep Blue's "victory"--while undoubtedly a great achievement in design--isn't terribly significant in any deep way. However, I also don't think Dan Dennett is right in saying that the question is uninteresting because human brains aren't unique in the first place: instead, it seems wrong to me to call what happened "Deep Blue's victory" at all, as Deep Blue was never in the game to begin with. Playing chess with a computer is no more a competitive affair than a game of tetherball is a competition with the pole or a game of solitaire is a competition with the cards. To truly participate in a game is an inherently intentional act--that is, an act that requires the ability to understand how one thing can be about or stand for another--and digital computers are fundamentally incapable of intentionality. In other words, to ascribe a victory over Garry Kasparov to Deep Blue is to tacitly treat Deep Blue as an independent agent capable of its own successes and defeats, and that doesn't seem like the right way to talk about machines.
Clearly something is happening here--that is, Kasparov is really doing something when he sits down at the chessboard with Deep Blue and its handlers--so if that something is not a game between man and machine, then what is it? While I still find the above argument (at least in its non-brief form) compelling, it occurs to me that it is a strictly negative argument--it contends that Deep Blue is not playing a game at all and so has no real "victory" over Kasparov to speak of--leaving the question of what is actually going on unanswered. It is this question I wish to try to address here.
Suppose you and I are standing in a room with hardwood floors, arguing about whether or not the floor is level. To settle the issue, I pull out a flawless crystal ball and set it carefully in the center of the room, knowing that if the floor really isn't level, the ball will roll down the incline, however slight; sure enough, the ball rolls off to the south, and we agree that we really do need to deal with that sinkhole out back. What's happened here? On a strong externalist account like Andy Clark's, I've externalized some of my cognition into a tool, letting it do the information processing for me in a way that my un-extended meat mind just couldn't. This is the position that lies at the root of the intuition that Deep Blue is itself an agent capable of playing chess, and it is this position against which I want to argue.
Rather than somehow externalizing my cognition, it seems to me that I'm simply cleverly manipulating my environment in order to make my internal cognition more powerful. When I set the ball in the middle of the room, it is with the knowledge that--thanks to the action of some basic physical laws--one sort of result will occur if the floor is level and another sort will occur if it is not. In short: I don't know if the floor is level, but I know that if the floor is not level, then the ball will roll downhill; thus, since I can certainly see the ball move, I infer that placing it in the middle of the floor is a good way to find out whether there is a tilt. The ball is not doing any information processing of its own, nor is it some kind of metaphysical receptacle for my own cognition; it is just a reliable indicator that I can use to make a judgment about the environment around me.
Let's extend (so to speak) this argument to computers in general, and to Deep Blue in particular. A digital computer is a physical system just like a crystal ball--albeit a much more complex one--so the analogy is preserved: any apparent "information processing" done by the computer (that is, any native or extended cognition) is nothing more than a very complicated ball rolling down a very complicated hill. A computer isn't actually doing anything cognitive; it's just a physical system whose operation is reliable enough that I can use it to help me make certain judgments about the environment. Given a hill, the ball will--just in virtue of what it is--roll, and given certain inputs, the digital computer will--just in virtue of what it is--produce certain outputs. In both cases the tool's interactions with the environment can be informative, but only when interpreted by a mind capable of consciously attaching significance to that interaction. That's all a computer is, then: a physical system we use to help us make judgments about the environment.
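To make that "given certain inputs, certain outputs" claim concrete, here's a minimal sketch of my own (in Python; the board encoding, piece values, and one-ply lookahead are all invented for illustration and bear no resemblance to Deep Blue's actual design) of the kind of deterministic mapping from position to move I have in mind:

```python
# A toy illustration: given the same input (a board state), a
# deterministic procedure yields the same output (a move). The
# encoding and evaluation here are hypothetical, for illustration only.

PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

def evaluate(board):
    """Score a position as (my material) - (opponent's material).
    `board` maps square names to piece codes; uppercase pieces are
    mine, lowercase are the opponent's."""
    score = 0
    for piece in board.values():
        value = PIECE_VALUES[piece.upper()]
        score += value if piece.isupper() else -value
    return score

def choose_move(board, legal_moves, apply_move):
    """Return the legal move whose successor position evaluates highest.
    `apply_move(board, move)` must return the resulting position. Ties
    break by move order, so the output is fully fixed by the input."""
    return max(legal_moves, key=lambda move: evaluate(apply_move(board, move)))
```

Run twice on the same position, a procedure like this returns the same move every time; whatever "choice" appears in the output was fixed by the program and its input, in just the way the ball's rolling is fixed by the tilt of the floor.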
That still doesn't address the central question, though, of what exactly is going on in the Deep Blue vs. Kasparov game (or of what's going on when anyone plays a computer game, for that matter). Clearly Kasparov at least is doing something cognitive (he's working hard), and clearly that something is at least partially based on the rules of chess, but if he's not playing chess with Deep Blue, then--at the risk of sounding redundant--what is he doing? Perhaps he is, as others have argued, actually playing chess with Deep Blue's programmers (albeit indirectly). I've advanced this argument before, and have largely gotten the following response.
Kasparov can't actually be playing against Deep Blue's programmers, the objection goes, because the programmers--either individually or collectively--wouldn't stand a chance in a match against Kasparov, whereas Deep Blue was able to win the day in the end. If the competition really were between Kasparov and the people behind the design and development of Deep Blue, those people should (at least as a group) be able to perform at least as well as Deep Blue itself did in the chess match. This is an interesting objection, but one that I do not think ultimately holds water. To see why, I'll beg your pardon for engaging in a bit more thought experimentation.
I'm not much of a chess player. I know the rules, and I can win a game or two against someone as inexperienced as I am, but those wins are as much a product of luck as of anything I've done. Kasparov would undoubtedly mop the floor with me even under a tremendous handicap--say, that of not being able to see the chessboard, but rather having to keep a mental model of the game and call out his moves verbally. I have, as I said, no doubt that I would be absolutely annihilated even with this advantage on my side, but we can certainly imagine a player much more skilled than I am: one who would tax Kasparov more, whom he could reliably beat in a normal chess match, but to whom he might risk losing were he denied the environmental advantage of using the board as an aid to represent the current state of the game. The board (and who has access to it) makes a real difference in the outcome of the game--are we to say, then, that it is a participant in the game in the same way that Deep Blue is? In the case where our mystery challenger beats Kasparov, does the board deserve credit for the victory? It does not seem to me that it does.
Here's another example of the same sort of thing. Suppose I challenge you to an arithmetic competition to see which of us can add a series of large numbers most quickly. There's a catch, though: while I can use pen and paper in my calculations, you have to do the whole thing in your head. You'd be right to cry foul at this, I think--the fact that I can engage in even the rudimentary environmental manipulation of writing down figures as I progress in my addition gives me an enormous advantage, and might allow me to win the contest when I otherwise would have lost. This is true in just the same way that Kasparov might lose a chess game to an "inferior" opponent if that opponent were able to manipulate the environment to aid himself in a way that Kasparov could not (say, by using a chessboard to help keep track of piece positions).
I suspect that most of you can now see where I'm going with this, but let me make my point explicit: Deep Blue is nothing more than a very complicated example of its programmers' ability to manipulate the environment to give themselves an advantage. Contending that Kasparov couldn't have been matching wits with those programmers just because he would have mopped the floor with them had they been without Deep Blue is akin to saying that the board is the important participant in the game because Kasparov might lose to certain players who had access to it when he did not (even if he'd beat them handily in a "fair fight"), or that I'm simply better at arithmetic than you are because I can win the competition when I have access to pen and paper and you do not.
Deep Blue is its programmers' pen and paper--the product of their careful environmental manipulation (and no one manipulates the environment like a computer programmer does), designed to help them perform certain cognitive tasks (e.g. playing chess) better and more quickly. So whom was Kasparov playing chess with? On this view, the answer is simple and (it seems to me) clearly correct: he was playing against the programmers in the same sense that he would have been had they been sitting directly across the board from him--he just labored under a disadvantage, because they were a hell of a lot better at using the environment to enhance their cognition than he was.
Friday, August 1, 2008
100th Post - An Ode to Technology
I'm 2/3 of the way through my second CTY session, and this time I'm teaching philosophy of mind with Eripsa, who, despite being dreadfully wrong about consciousness, is an all-around awesome dude. He works primarily on the philosophy of technology, a disappointingly underrepresented field that deals with questions like "What is the ontological status of a tool?", "What is necessary to create an artificial mind?", and "How does technology influence human thought?" He does a lot of really interesting work (particularly on robots), so I encourage you to go check out his blog.
Anyway, being around him inevitably gets me thinking even more about technology than I usually do (which is saying something)--I'm particularly interested in that last question I posed above, though: how does technology influence human thought? Eripsa wants to follow Andy Clark and David Chalmers in endorsing the strong-externalist extended mind thesis, which claims that there is a relevant sense in which my cognition and mental states (including beliefs) spend a lot of time in the external world. Their paradigm case for this is that of Otto, a hypothetical Alzheimer's patient who, in lieu of using his deteriorating biological memory, writes down facts in a notebook, which he carries with him at all times. Clark claims that when Otto consults his notebook for a fact (e.g. the location of a restaurant he wants to go to), the notebook is serving as a repository for his beliefs about the world in just the same way that my (or your) biological memory does; that is, his belief about the location of the restaurant is literally stored in the external world.
This thesis seems fraught with problems to me, but that's not the point I want to make (at least not in this post). While I think that Clark (and by extension Eripsa) is wrong about the ontology of technology (Otto's notebook is supposed to stand for a whole host of technological "extensions" of our biological minds into the world), I think he's precisely right about its importance in a cognitive sense. Human beings are, by their very nature, tool users; it's a big part of what makes us human. Of course other primates (and even some birds) can use--or even manufacture--tools to accomplish certain tasks, but nothing else in the known natural world comes even close to doing it as well as humans do. Technology use is a part of who we are, and always has been; we created language as a tool to manipulate our environment, learning to create compression waves in the air for the purpose of communicating our ideas to each other, and in the process beginning the long, slow march toward the incredibly sophisticated tools we have today--tools like the one you're using right now.
Language might have been our first tool--and perhaps it is still our best--but in recent years the computer (and more specifically the Internet) has proven to be one of our most important in terms of cognition. I've argued before that the advent of the information age should herald a radical change in educational strategy, but I want to reiterate that point here. Today's kids are growing up in a world where virtually any fact they want is immediately and reliably accessible at any time. I'd say that at least 1/3 of the kids I'm teaching at CTY--and these are 12-15 year olds--have Internet-enabled cell phones that they keep on their person at all times; this is a very, very big deal, and our educational strategy should reflect it.
100 years ago, a good education was an education of facts. Students memorized times tables, theorems, names and dates, literary styles, and an endless list of other factual statements about the world, because that's what it took to be an "educated citizen." Information was available, but it was cumbersome (physical books), difficult to access (most areas didn't have high-quality libraries), and generally hard to come by for the average citizen--even an educated one. The exact opposite is true today--students don't need to memorize (say) George Washington's birthday, because they can pull that information up within seconds. This frees up an enormous "cognitive surplus" (to borrow Clay Shirky's term) that can be used to learn _how to analyze and work with facts_ rather than memorize the facts themselves.
I've postulated before that the so-called "Flynn Effect"--that is, the steadily increasing IQ of every generation since the close of the 19th century--might be due to the increasing availability of information, and thus to the increasingly analysis- and abstraction-oriented brain of the average citizen. If I'm right, we're going to see a huge leap in the IQ of this generation, but only if we start to educate them appropriately. We need a radical shift of emphasis as early as the kindergarten classroom; students need to be taught that it's not what you know, but how well you can work with the almost infinite array of facts available to you. The spotlight should be taken off memorizing names and dates, facts and figures, and focused squarely on approaches to thinking about those facts and figures. Today's child is growing up in a world where he is not a passive consumer of information but an active participant in the process of working with it, in a way that humans have never been before.
This leads me to my final point, which is that you should all go read this speech by Clay Shirky, author of the book Here Comes Everybody. It's very, very well articulated, and makes exactly the kind of point I'm driving at here. Snip:
I was having dinner with a group of friends about a month ago, and one of them was talking about sitting with his four-year-old daughter watching a DVD. And in the middle of the movie, apropos nothing, she jumps up off the couch and runs around behind the screen. That seems like a cute moment. Maybe she's going back there to see if Dora is really back there or whatever. But that wasn't what she was doing. She started rooting around in the cables. And her dad said, "What you doing?" And she stuck her head out from behind the screen and said, "Looking for the mouse."
Here's something four-year-olds know: A screen that ships without a mouse ships broken. Here's something four-year-olds know: Media that's targeted at you but doesn't include you may not be worth sitting still for. Those are things that make me believe that this is a one-way change. Because four year olds, the people who are soaking most deeply in the current environment, who won't have to go through the trauma that I have to go through of trying to unlearn a childhood spent watching Gilligan's Island, they just assume that media includes consuming, producing and sharing.
It's also become my motto, when people ask me what we're doing--and when I say "we" I mean the larger society trying to figure out how to deploy this cognitive surplus, but I also mean we, especially, the people in this room, the people who are working hammer and tongs at figuring out the next good idea. From now on, that's what I'm going to tell them: We're looking for the mouse. We're going to look at every place that a reader or a listener or a viewer or a user has been locked out, has been served up passive or a fixed or a canned experience, and ask ourselves, "If we carve out a little bit of the cognitive surplus and deploy it here, could we make a good thing happen?" And I'm betting the answer is yes.
I'm betting the same. Thanks for reading, and here's to the next 100 posts.