So I've been at CTY again (as per usual). This time, I was the instructor for Human Nature and Technology during the first session, and then the instructor for Philosophy of Mind during the second. Yes, I am moving up in the world. As is normal at CTY, I was thinking about philosophy a lot. So today, when a friend of mine asked me to share my thoughts on the AI debate, I decided to finally try to put pen to paper and express the nascent argument I've been developing for a while now. This is how it turned out:
I think one of the most important things to press on here is the distinction between simulation and duplication. A lot of the disagreements about artificial intelligence, it seems to me, come down to disagreements about whether or not AI is supposed to be simulating a (human) mind or duplicating a (human) mind. This point is widely applicable, but one of the most salient ways that it comes up here is as a distinction between consciousness and intelligence. Let me say what I mean.
Some (many) properties of systems are functional properties: they're interesting, that is, just in virtue of what function they discharge in the system, not in virtue of HOW they discharge that function. For most purposes, human hearts are like this: an artificial heart is just as good as a normal one, because we don't really care about having a meat heart so much as having something to pump our blood. Functional properties are "medium-independent": they are what they are irrespective of the kind of stuff the system is made out of. Not all properties are like that, though. Here's a story.
Suppose I've got a few different copies of Moby Dick. Thanks to the wonders of technology, though, all of these copies are in different formats: I've got a regular old leather-bound book, a PDF on my hard drive, and a CD with the audio book (read by...let's say Patrick Stewart). These are all representations with the same content: that is, they're all representations of the events described in Moby Dick. The content of the story is medium-independent. There are, however, facts about each of the representations that don't apply to the other representations: the physical book is (say) 400 pages long, but the audiobook has no page length. The PDF is in a certain file format (PDF), but the book has no file format. The audiobook has a peak frequency and volume level, but the PDF has neither of those. This list could be continued for quite a while; you get the picture--the important thing to emphasize is that while they all have the same content, each of the representations is instantiated in a different physical form, and thus has facts that are unique to that physical form (e.g. page length, format, peak frequency, &c.). That's all true (once again) in spite of their identical content.
Now, let's add another representation to the mix. Suppose I, like the fellows at the end of Fahrenheit 451, have decided to memorize Moby Dick. It takes quite some time, but eventually I commit the entire book to memory, and can recite it at will. This is a fourth kind of representation--a representation instantiated in the complex interconnection of neurons in my brain. Paul Churchland (2007) has developed a very nice account of how to think about neural representation, and he talks about a semantic state space (SSS)--that is, a complicated, high-dimensional vector space that represents the state of each of my neurons over time. This SSS--the changing state of my brain as I run through the story in my head--then, represents Moby Dick just as surely as the book, audiobook, or PDF does, and remains a totally physical system.
Ok, so we can ask ourselves, then, what unique things we can say about the SSS representing Moby Dick that we can't say about the other representations. It has no page length, no file format, and no peak volume. What it does have, though, is a qualitative character--what Thomas Nagel calls a "what-it-is-like-ness." It has (that is) a certain kind of feeling associated with it. That seems very strange. Why should a certain kind of representation have this unique quality? Asking this question, though, just shows a certain kind of representational chauvinism on our part: we might as well ask why it is that a book has a certain number of pages, or why a PDF has a file format (but not vice versa). The answer to all of these questions is precisely the same: this representation has that feature just in virtue of how the physical system in which it is instantiated is arranged. The qualitative nature of the memorized book is no more mysterious than the page count of the leather-bound book, but (like page length) it isn't medium-independent either, and that's the point that cuts to the heart of the AI debate, I think. Here's why.
Think about Turing and Searle as occupying opposite sides of the divide here. Turing says something like this: when we want AI, we want to create something that's capable of acting intelligently--something that can pass the Imitation Game. Searle points out, though, that this approach still leaves something out--it leaves out the semantic content that our minds enjoy. Something could pass the Imitation Game and still not have a mind like ours, in the sense that it wouldn't be conscious. This whole argument, I suggest, is just a fight about whether we should be going after the medium-independent or the medium-dependent features of our minds when we're building thinking systems. That is, should we be trying to duplicate the mind (complete with features that depend on how systems like our brains are put together), or should we be trying to simulate (or functionalize) the mind and settle for something that discharges all the functions, even if it discharges those functions in a very different way? There's no right or wrong answer, of course: these are just very different projects.
Searle's point is that digital computers won't have minds like ours no matter what kind of program they run, or how quickly they run it. This makes sense--in the language of the book analogy from above, it's like asserting that no matter how much fidelity we give to the audiobook recording, it's never going to have a page length. Of course that's true. That's the sense in which Searle is correct. Turing has a point too, though: for many applications, we care a lot more about the content of the story than we do about the format in which we get it. Moreover, it looks like duplicating our sort of minds--building a conscious system--is going to be a lot harder than just building a functionally intelligent system. Ignoring the medium-dependent features of mentality and focusing on the medium-independent ones lets us build systems that behave intelligently, while giving us the freedom to play around with different materials and approaches when building those systems.
So in some sense, the question "can digital computers think?" is ambiguous, and that's why there's so much disagreement about it. If we interpret the question as meaning "can digital computers behave intelligently?" then the answer is clearly "yes," in just the same sense that the answer to the question "can you write a story in the sand?" is clearly "yes," even though sand is nothing like a book. If we interpret the question as meaning "can digital computers think in precisely the way that humans with brains think?" then the answer is clearly "no," in just the same sense that the answer to the question "will a story written in the sand have a page count?" is clearly "no," no matter what kind of sand you use. Searle is right to say that consciousness is special, and isn't just a functional notion (even though, as he says, it is a totally physical phenomenon): it's a product of the way our brains are put together. Turing is right to say, though, that what we're usually after when we're trying to make intelligent machines is a certain kind of behavior, and that our brains' way of solving the intelligent-behavior problem isn't the only solution out there.
I'm not sure if any of that is helpful, but I've been meaning to write this idea down for a while, so it's a win either way.
Comments:
What I believe from direct experience is that associated with my brain is a consciousness. What I infer but cannot prove is that associated with each other human brain out there is yet another consciousness. Consciousness is such that I can imagine a machine like a brain doing most of what we see people do without a consciousness associated with it.
The interesting question, I think, is whether there could ever be a consciousness associated with a man-made machine.
It is not something as trivial as saying that "consciousness" is to human brains what "page count" is to paper books. Rather, consciousness is an interesting concept which we do not know to be medium-dependent. It could be more like "irony": a statement that is ironic is ironic no matter what medium it is recorded in.
I think equating "consciousness" and "page count" severely misses out on why this is generally considered an interesting set of questions.
Parenthetically, the difference between a simulation and a copy is, I think, addressed by the concept of "strong A.I." I take "strong A.I." to be equivalent to the claim that intelligence cannot be merely simulated--that is, if you do simulate it, you have actually created it. I think strong A.I. is probably wrong myself. I think it is magical thinking, cargo-cult thinking. If we build something that looks like an airplane out of coconuts and palm fronds, we cannot fly in it.
The ability to logically formulate and format an idea within the mind stands apart from absorbing outside information and processing it through our "feeling factors," for the brain has the unconscious ability to enhance certain factors the original subject matter could not, in direct relevance or contrast to the person whose mind is in charge of processing. Therefore, making the AI more "complex in nature" only means its ideas could still end up as confused and corrupted as its human counterpart's.
Thus, is creating AI as a logical solution to certain functions acceptable? Perhaps. But to try to simulate human consciousness? Perhaps not. Please, for now, let us simply continue to leave that matter as only a theory!