Monday, August 9, 2010

AI and Consciousness

So I've been at CTY again (as per usual). This time I was the instructor for Human Nature and Technology during the first session, and then the instructor for Philosophy of Mind during the second. Yes, I am moving up in the world. As is normal at CTY, I was thinking about philosophy a lot. So today, when a friend of mine asked me to share my thoughts on the AI debate, I decided to finally try to put pen to paper and express the nascent argument I've been developing for a while now. This is how it turned out:

I think one of the most important things to press on here is the distinction between simulation and duplication. A lot of the disagreements about artificial intelligence, it seems to me, come down to disagreements about whether or not AI is supposed to be simulating a (human) mind or duplicating a (human) mind. This point is widely applicable, but one of the most salient ways that it comes up here is as a distinction between consciousness and intelligence. Let me say what I mean.

Some (many) properties of systems are functional properties: they're interesting just in virtue of what function they discharge in the system, not in virtue of HOW they discharge that function. For most purposes, human hearts are like this: an artificial heart is just as good as a normal one, because we don't really care about having a meat heart so much as having something to pump our blood. Functional properties are "medium-independent": they are what they are irrespective of the kind of stuff the system is made out of. Not all properties are like that, though. Here's a story.

Suppose I've got a few different copies of Moby Dick. Thanks to the wonders of technology, though, all of these copies are in different formats: I've got a regular old leather-bound book, a PDF on my hard drive, and a CD with the audio book (read by...let's say Patrick Stewart). These are all representations with the same content: that is, they're all representations of the events described in Moby Dick. The content of the story is medium-independent. There are, however, facts about each of the representations that don't apply to the other representations: the physical book is (say) 400 pages long, but the audiobook has no page length. The PDF is in a certain file format (PDF), but the book has no file format. The audiobook has a peak frequency and volume level, but the PDF has neither of those. This list could be continued for quite a while; you get the picture--the important thing to emphasize is that while they all have the same content, each of the representations is instantiated in a different physical form, and thus has facts that are unique to that physical form (e.g. page length, format, peak frequency, &c.). That's all true (once again) in spite of their identical content.
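
To put the same point in a programmer's idiom (a purely illustrative sketch of my own, in Python; all of the class names and attributes below are invented for the example, not anything from the philosophers I discuss): the shared content is like a common interface that every copy implements, while the medium-dependent facts are like implementation details that only one format has.

```python
class LeatherBoundBook:
    """Physical copy: has a page count, but no file format or audio properties."""
    def __init__(self, text, page_count):
        self.text = text              # the medium-independent content
        self.page_count = page_count  # a fact unique to this medium

    def content(self):
        return self.text


class PdfFile:
    """Digital copy: has a file format, but no page binding or audio properties."""
    def __init__(self, text):
        self.text = text
        self.file_format = "PDF"      # a fact unique to this medium

    def content(self):
        return self.text


class AudioBook:
    """Audio copy: has a peak frequency and volume, but no pages or file format."""
    def __init__(self, text, peak_frequency_hz, peak_volume_db):
        self.text = text
        self.peak_frequency_hz = peak_frequency_hz  # facts unique to this medium
        self.peak_volume_db = peak_volume_db

    def content(self):
        return self.text


# All three copies agree on the medium-independent content...
copies = [LeatherBoundBook("Call me Ishmael...", 400),
          PdfFile("Call me Ishmael..."),
          AudioBook("Call me Ishmael...", 8000, 85)]
assert len({c.content() for c in copies}) == 1

# ...but asking the PDF for its page count is as confused as asking
# the audiobook for its file format:
# copies[1].page_count  # AttributeError -- no such property for this medium
```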

Now, let's add another representation to the mix. Suppose I, like the fellows at the end of Fahrenheit 451, have decided to memorize Moby Dick. It takes quite some time, but eventually I commit the entire book to memory, and can recite it at will. This is a fourth kind of representation--a representation instantiated in the complex interconnection of neurons in my brain. Paul Churchland (2007) has developed a very nice account of how to think about neural representation, and he talks about a semantic state space (SSS)--that is, a complicated, high-dimensional vector space that represents the state of each of my neurons over time. This SSS--the changing state of my brain as I run through the story in my head--represents Moby Dick just as surely as the book, the audiobook, or the PDF does, and it remains a totally physical system.
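
A rough way to picture the state-space idea (a toy sketch of my own, not Churchland's actual model; the dimensions and the random numbers are just stand-ins): treat each neuron's activation level as one axis of a vector space, so that the brain's state at a moment is a single point, and running through the story is a trajectory through that space.

```python
import numpy as np

rng = np.random.default_rng(0)

n_neurons = 10_000   # each neuron contributes one dimension to the space
n_timesteps = 500    # snapshots taken while "reciting" the story

# Each row is the brain's state at one moment: a single point in a
# 10,000-dimensional activation space (toy random data, obviously).
trajectory = rng.random((n_timesteps, n_neurons))

# "Running through the story in my head" is then a path through that
# space: a sequence of points, one per moment.
point_at_t0 = trajectory[0]                                      # one state = one point
step_sizes = np.linalg.norm(np.diff(trajectory, axis=0), axis=1)

print(point_at_t0.shape)   # (10000,) -- one coordinate per neuron
print(step_sizes.shape)    # (499,)   -- how far the state moves at each step
```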

OK, so we can ask, then, what unique things we can say about the SSS representation of Moby Dick that we can't say about the other representations. It has no page length, no file format, and no peak volume. What it does have, though, is a qualitative character--what Thomas Nagel calls a "what-it-is-like-ness." It has (that is) a certain kind of feeling associated with it. That seems very strange. Why should a certain kind of representation have this unique quality? Asking this question, though, just shows a certain kind of representational chauvinism on our part: we might as well ask why the book has a page count but the PDF doesn't, or why the PDF has a file format but the book doesn't. The answer to all of these questions is precisely the same: each representation has the features it does just in virtue of how the physical system in which it is instantiated is arranged. The qualitative nature of the memorized book is no more mysterious than the page count of the leather-bound book, but (like page length) it isn't medium-independent either, and that's the point that cuts to the heart of the AI debate, I think. Here's why.

Think about Turing and Searle as occupying opposite sides of the divide here. Turing says something like this: when we want AI, we want to create something that's capable of acting intelligently--something that can pass the Imitation Game. Searle points out, though, that this approach still leaves something out--it leaves out the semantic content that our minds enjoy. Something could pass the Imitation Game and still not have a mind like ours, in the sense that it wouldn't be conscious. This whole argument, I suggest, is just a fight about whether we should be going after the medium-independent or the medium-dependent features of our minds when we're building thinking systems. That is, should we be trying to duplicate the mind (complete with features that depend on how systems like our brains are put together), or should we be trying to simulate (or functionalize) the mind and settle for something that discharges all the functions, even if it discharges those functions in a very different way? There's no right or wrong answer, of course: these are just very different projects.
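
One way to see how thoroughly behavioral Turing's criterion is: the judge in the Imitation Game only ever sees transcripts. Here's a minimal sketch of that setup (my own toy formulation, not Turing's; the function names and the simple pass/fail rule are invented for illustration):

```python
import random

def imitation_game(judge, human_respond, machine_respond, questions):
    """Return True if the judge fails to identify which respondent is the machine.

    The judge sees only the answers -- never the internals of either
    respondent -- so anything that produces the right behavior can pass,
    regardless of what it's made of or how it works inside.
    """
    # Randomly assign the two respondents to anonymous channels A and B.
    channels = {"A": human_respond, "B": machine_respond}
    if random.random() < 0.5:
        channels = {"A": machine_respond, "B": human_respond}

    transcript = {label: [respond(q) for q in questions]
                  for label, respond in channels.items()}

    guess = judge(transcript)                      # judge names "A" or "B" as the machine
    truth = "A" if channels["A"] is machine_respond else "B"
    return guess != truth                          # the machine "wins" if misidentified
```

Nothing in that setup mentions what the respondents are made of or how they generate their answers, and that's exactly the sense in which Turing's target is medium-independent.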

Searle's point is that digital computers won't have minds like ours no matter what kind of program they run, or how quickly they run it. This makes sense--in the language of the book analogy from above, it's like asserting that no matter how much fidelity we give to the audiobook recording, it's never going to have a page length. Of course that's true. That's the sense in which Searle is correct. Turing has a point too, though: for many applications, we care a lot more about the content of the story than we do about the format in which we get it. Moreover, it looks like duplicating our sort of minds--building a conscious system--is going to be a lot harder than just building a functionally intelligent system. Ignoring the medium-dependent features of mentality and focusing on the medium-independent ones lets us build systems that behave intelligently, but gives us the freedom to play around with different materials and approaches when building the systems.

So in some sense, the question "can digital computers think?" is ambiguous, and that's why there's so much disagreement about it. If we interpret the question as meaning "can digital computers behave intelligently?" then the answer is clearly "yes," in just the same sense that the answer to the question "can you write a story in the sand?" is clearly "yes," even though sand is nothing like a book. If we interpret the question as meaning "can digital computers think in precisely the way that humans with brains think?" then the answer is clearly "no," in just the same sense that the answer to the question "will a story written in the sand have a page count?" is clearly "no," no matter what kind of sand you use. Searle is right to say that consciousness is special, and isn't just a functional notion (even though, as he says, it is a totally physical phenomenon): it's a consequence of the way our brains are put together. Turing is right to say, though, that what we're usually after when we're trying to make intelligent machines is a certain kind of behavior, and that our brains' way of solving the intelligent-behavior problem isn't the only solution out there.

I'm not sure if any of that is helpful, but I've been meaning to write this idea down for a while, so it's a win either way.

Monday, June 7, 2010

On the Future of Philosophy and Science

A friend of mine asked me for my opinion about the future of philosophy. There's a popular perception that, in light of the tremendous advances science has made in the last century or so, philosophy is a dying discipline. As most of you know, I disagree with this assessment, but I do think that philosophy needs to adapt if it is to survive. Here are my brief thoughts on the future of my discipline, and its relationship to the scientific project.

Philosophy as an isolated discipline is certainly in decline--the number of questions that are purely "philosophical" (and worth answering) is shrinking. That's less a reflection on philosophy, though, and more a reflection of the state of academia in general: disciplinary lines are blurring. Physics is (at least in part) informed by biology, information theory, and other special sciences. The special sciences themselves are (and have been for a while) mutually supportive and reinforcing; there's no clear line between a question for (say) sociology and a question for economics. None of that is to say that physics, biology, information theory, sociology, or economics is in decline, though--it just means that academia is becoming increasingly interdisciplinary.

On the face of it, this shift looks to have hit philosophy particularly hard: we've fallen quite far from our position as the queen of the Aristotelian sciences to where we are today, and there's a pervasive attitude both among other academics and among lay-people, I think, that philosophy is basically obsolete, having been replaced by more reputable scientific investigation. There's a perception, that is, that metaphysics has been supplanted by mathematical physics, that ethics has been rendered obsolete by sociobiology and evolutionary game theory, and that questions about the nature of the mind have been reduced to questions about neurobiology (or maybe computation theory). All of this is, I think, more or less true: the days of philosophy pursued as a stand-alone competitor to science are over, or at least they ought to be. This is emphatically not the same thing as saying that philosophy is dead or dying, though--it just means that philosophy needs to undergo the same kind of shift that other disciplines have gone through as they've entered the modern era. Philosophy needs to be incorporated into the unified structure of science generally.

It's not immediately obvious how to do this, but there are some clues. We should start by looking at the areas of science where philosophers--that is, people trained in or employed by philosophy departments, using methods marked by careful attention to argument, critical examination of underlying assumptions, and concern with big-picture issues--are still making useful contributions to the scientific enterprise. There are, I think, two pretty clear paradigm cases here: quantum mechanics and cognitive science. In both of these fields, philosophers have made contributions that, far from being idle navel-gazing and linguistic trickery, have had a real impact on scientific understanding. In QM, philosophers like David Albert, Hilary Greaves, David Deutsch, David Wallace, Barry Loewer, Tim Maudlin, Frank Arntzenius, and others have helped tremendously in clarifying foundational issues and resolving (or at least explicating) some of the trickier conceptual problems lurking behind dense mathematical formalism. Similarly, philosophers like Daniel Dennett, John Searle, Andy Clark, Ken Aizawa, and others have been instrumental in actually getting the field of cognitive science off the ground; just as in QM, these philosophers are responsible both for clarifying foundational concepts and for designing ingenious experiments to test hypotheses developed in the field.

What does the work these people are doing have in common in virtue of which it is philosophical? Again, the answer isn't clear, but this just reinforces the point that I'm making: there's no longer a clear division between philosophy and the rest of the scientific project to which philosophers ought to be contributing. If anything, the line between philosophy qua philosophy and science (insofar as there's a line at all) seems more and more to be a methodological line rather than a topical one; a philosopher differs from a "normal" scientist not in virtue of the subject matter he investigates, but in virtue of the way he approaches that subject matter. Scientists, by and large, are trained as specialists: by the time a physicist or biologist reaches the later stages of his PhD, his work is usually sharpened to a very fine point, and his area of expertise is narrow but very deep. Many (but not all) practicing scientists know a tremendous amount about their own fields, but are content to leave thinking about other fields to other specialists. Philosophers, on the other hand, are often generalists (at least when compared to physicists). In virtue of our general training in logic, argumentation, critical thinking, and, well, philosophy, we're often better equipped than most to see the bigger picture--to see the way the whole scientific enterprise fits together, and to notice problems that are only apparent from a sufficiently high level of abstraction. Training in philosophy means sacrificing a certain amount of depth of knowledge--I'll never know as much about particle physics as Brian Greene--for a certain amount of breadth and flexibility; by the time my training is done, I'll know a bit about particle physics, a bit about evolutionary theory, a bit about computer science, a bit about cognitive neurobiology, a bit about statistical mechanics, a bit about climate science, a bit about the foundations of mathematics, and so on. That kind of breadth certainly has its drawbacks--a philosopher is unlikely to make the kind of experimental breakthroughs that a scientist dedicating his life to a single problem might achieve--but it also has its benefits; philosophers are in a unique position to (as it were) care for the whole forest rather than just a few trees.

Philosophers are uniquely situated, that is, to engage in the project of "bridge-building" between the individual sciences--uniquely situated to facilitate the continuing breakdown of disciplinary barriers that threatened philosophy's existence to begin with. Philosophy's toolkit is sufficiently general to be applied to any of the special sciences, given a little bit of study and localization. This isn't to suggest that philosophers should (or even can) make pronouncements about scientific issues from the armchair; that's the model of philosophy that's dying, and I'm not the only one to have said "good riddance" to it. Doing philosophy of physics means learning physics, and doing philosophy of biology means learning biology. We need to engage with the disciplines to which we contribute; the edges of the bridges need to be anchored on solid ground before they can help us cross the interdisciplinary gaps. The "big picture" questions that have been the hallmark of philosophy for millennia--questions like "what is humanity's place in the universe?" and "what do our best theories of the structure of the world mean for who we are?" and even "what's special about consciousness?"--still have a place in contemporary science. Science has room for both specialists and generalists, and questions like "what's the right way to think about a real physical system's being in a state that's represented by a linear combination of eigenvectors?" have an important place in it. The scientific enterprise takes all kinds, and there's room for philosophers to contribute, if we can just get our collective head out of our collective ass and come back to the empirical party with the rest of science.