Reality Apologetics
by Jon (http://www.blogger.com/profile/09594949524027204661)

Born to the Metaverse: Scream 4 and the Future of Pop Culture (April 19, 2011)

This is gonna contain some spoilers. Sorry, but it will. I'll try to flag the major ones, but if you really want to be surprised, you should probably stop reading right now. There's really no reason for that, though, because <em>Scream 4</em> is by no means a good film. It's a slasher movie, and not a terrible one at that, but still just a slasher movie. The interesting aspects of it are entirely cultural: it's a window to the future. If you want to know what the next ten years are going to look like, you should see <em>Scream 4</em>. If you're a fan of the genre you might also want to see it, but I'm not going to talk about that here. I'm not a film critic, nor am I interested in what it did well with regard to film-making--I'm interested in it as a cultural phenomenon, and as a lens into the difference between youth culture 15 years ago (which is mainstream culture now) and youth culture today (which will become mainstream culture over the next 10 years). <em>Scream 4</em> is one of the first definitive digital-native generation pieces.
Here's why.

The original <em>Scream</em> was instrumental in mainstreaming the concept of explicit self-awareness in popular culture. As a friend pointed out to me, it certainly wasn't the first major movie to play around with genre-awareness (<em>Pulp Fiction</em>, at least, beat it by a few years), but it certainly <em>was</em> the first major movie to do it so explicitly: the characters in <em>Scream</em> talked about the conventions of the teen slasher genre, and the film was largely predicated on the novelty of such meta-commentary--the characters used their knowledge of genre conventions to survive. It was a slasher film about people who knew something about slasher films. This was prescient: <em>Scream</em> came out in 1996, and it would be precisely that kind of self-referential meta-commentary that would come to define the popular culture contributions of the 2000s--think of the difference between the humor of <em>The Simpsons</em> in the 90s and the humor of <em>Family Guy</em> in the 2000s for another good illustration of this. <em>Scream</em> was a trend-setter in presaging the rise of what technology writers have called "remix culture": the artistic style of creating novel works by recombining the elements of existing works, and feeding popular culture back into itself.
Whatever you think of this phenomenon (FUCK OFF, JARON LANIER!), there's absolutely <em>no</em> question that it was the definitive cultural innovation of the 2000s; the kind of entertainment that people my age (what I call "young immigrants" to the digital world) grew up on was largely predicated on playing around with self-referentiality.

For most digital natives, though, this isn't anything new or original. Remix culture isn't an innovation--it's just the way things are. The oldest digital natives were only 3 or 4 years old when the original <em>Scream</em> came out; that generation is now starting to graduate from high school, go off to college, and make its own mark on popular culture. <em>Scream 4</em> is the embodiment of that mark. <strong>HERE COME SOME SPOILERS</strong>. The movie opens with several rapid "frame shifts." We get a few really bloody kills immediately, but after each one, the movie zooms out to reveal that the scene actually took place inside a slasher movie (a thinly-veiled version of the <em>Scream</em> franchise itself called <em>Stab</em>) that other people were watching. The new characters pick apart the genre for a while, and are then killed. This process repeats (if I recall) three or four times. By the last zoom-out, the whole thing is just patently ridiculous--people in the theater were either groaning or (like me) laughing. At this point, it's really unclear where the movie is going; this whole process takes somewhere between 15 and 20 minutes, and I was buckled in for a really awful movie: I thought it was just going to be another sequel recycling the now decades-old trick of genre awareness. But that's not what happened.

This early absurd level of meta-humor is essential for what follows.
Like a tasteless sorbet, it cleanses the palate, erasing all traces of what came before and opening the way for you to appreciate the new flavor of what follows. It gets all the issues about self-awareness and meta-commentary right out in the open immediately, loudly, and exaggeratedly. This is surely deliberate, and the message comes through loud and clear: yes, yes, there's meta-commentary going on here. That's obvious, and it's been done. Let's get all of this right out in the open. If <em>Scream</em> was meta, <em>Scream 4</em> is post-meta: not just self-aware about horror film conventions, but self-aware about self-aware film conventions. This lets the meta aspects of the film fade into the background from that point forward: this isn't the gimmick driving the film anymore, but just a fact of life, and part of the scenery against which the rest of the narrative plays out. Everyone not only knows about the horror film conventions, but everyone <em>knows</em> that they know this: it's just assumed that everyone is constantly analyzing what's going on at a higher level of abstraction (or, at least, all the <em>young</em> characters are; more on this in a bit).

<strong>HERE'S ANOTHER SPOILER</strong>

One of the early frames is worth mentioning in particular: it plays out the "Facebook stalker who comes to kill you" sort of plot. The girls being stalked and killed have slightly out-of-date slider phones, adding a weird sense of one-off anachronism; this is exactly the kind of thing that an <em>actual</em> movie would have been about just a few years ago, but now it's passé. After the characters in the frame are slashed by the FB stalker, the "zoom out" commentary remarks that "it would be a Twitter stalker today."
This frame in particular has the effect not only of getting meta-commentary out of the focus and into the background, but also of getting <em>technology</em> out of the focus and into the background. Everyone in <em>Scream 4</em> (or, again, every teen character) has a smartphone that's constantly connected to the Internet, but that fact isn't in the least gimmicky in the way that it would have been just five years ago; social networking is portrayed with <em>perfect</em> verisimilitude, and the presence of ubiquitous computing and information is--just like the <em>general</em> meta-commentary--presented as just a normal feature of the world. Characters transition seamlessly between real-world interaction and digital interaction, and everyone just takes this as a matter of course. The fact that everyone is constantly connected to everyone else (and the Web) is no longer a novelty to be played with, but rather is just a fact, in the same way that everyone's having access to a car is just a fact. Someone who knows more about the history of the genre than I do could probably point to a time when <em>that</em> transition took place--when teenagers with automobiles ceased to be a novelty driving (so to speak) a horror film's plot, and just became transparently there. Digital technology in <em>Scream 4</em> is perfectly transparent: ubiquitous and taken for granted.

Or, again, at least for the younger characters. Part of the film's brilliance is the degree to which it <em>nails</em> the weird sort of techno-gulf between the older generation (Neve Campbell and Courteney Cox, the heroines of the first film) and the new generation of high-school-aged victims. Neve Campbell was supposed to be a teenager in the first film, which would put her in her very late 20s or early 30s now. She's right on the older edge of the young-immigrant generation, and is by far the most competent of her generation's characters in this film. The rest of the older characters are portrayed as relatively bumbling incompetents who are always a few steps behind (largely in virtue of their reliance on landline phones and radio communication).
<strong>SPOILER.</strong> At one point, Courteney Cox's reporter character, in a vain attempt to remain relevant, sneaks into an underground film festival to plant cameras, which transmit via closed circuit to her laptop in front of the building (as she sets them up, she congratulates herself on her relevance, saying something like "still got it!"). This would have been the height of technology in the first film, but in light of the fact that the high school's gossip-blog owner spends the entire film wearing (I shit you not) a live-streaming wireless webcam mounted on a headset (a fact which <em>no one</em> ever remarks on), it looks comically archaic. Her cameras are promptly disabled by the killer, and she gets stabbed when she tries to go fix them. This is what you get for using wires, 30-somethings.

The iconic trivia scene from the first film is also replayed, but it seems bizarrely out of place by the time it comes around: we've spent the whole film stewing in the fact that everyone knows everything about everyone all the time (none of the younger-generation characters are ever missing for more than a minute or two at a time, and they are consistently tracked down with technology when they disappear for even a second), and so the idea that being able to simply recite a list of facts would possibly have any relevance to anything seems laughable: all that trivia knowledge that was so impressive in 1996 is just a Wikipedia search away for every character all the time now. That point is hammered home by the single classroom scene in the film, in which a balding history instructor vainly tries to take control of a totally uninterested audience of students, each and every one of whom has a smartphone out and isn't paying attention. Behind the teacher, the phrase <strong><em>NO GOOGLING!!</em></strong> is written on the chalkboard in all capital letters.
The irrelevance of normal education in light of the informal education made possible by the devices that all the students are carrying (and the teacher's frantic attempts to maintain the orthodox approach to pedagogy) practically slaps you in the face, and it is awesome.

All of this comes to a final head when the killer is revealed near the end. I'm not going to reveal which of the characters it really is, but <strong>MAJOR SPOILER</strong> he/she is one of the younger generation, and gives a long tirade about the role of the Internet, at one point yelling "We all live in public now, and to be famous you don't have to do anything great; you just have to do something fucked up. I don't want friends--I want fans." I wish I could find a copy of the film so I could quote this speech in full, because there was a lot more to it; I'll come back and revise this once I have a chance to see the movie again, but it makes many of the themes I've been pointing to here at least somewhat explicit, and also reveals the movie's greatest flaw--even though it's aware of these issues, it's ultimately still being directed, written, and produced not by digital natives (or even young immigrants), but by folks from the Old Country. In the end, all of this ubiquitous computing is cast very negatively, and it's suggested that the killer was driven to this in virtue of being immersed in a culture that encourages shallowness, extreme pragmatism ("we are whatever we need to be," says the killer), and amoral narcissism. <strong>SPOILER</strong> When the killer is finally defeated (by the just-barely-still-relevant Campbell), it is with a set of defibrillation paddles--the ultimate wired, analog technology finally triumphing over the digital menace.

If you want to see what the world is going to look like as the next generation starts to assume power, this is the movie to watch. You can see the writing on the wall here; the characters were born in the metaverse, and treat ubiquitous digital technology, social networking, self-referentiality, and remix culture not as novelties, but simply as part of the background against which they operate.
These things are moving out of the fringes and being integrated into the mainstream of popular culture; as that move happens, it opens the door for new ways to define genre, and starts to suggest new conventions. This is beyond simple remix; the meta aspect of the 2000s has started to settle down and become just another facet of creative culture as a generation born to the Web starts to define itself in contrast to those who have come before. It's very, very exciting to watch.

UN/UNIS talk (March 4, 2011)

I had the honor of speaking at the United Nations/United Nations International School global issues conference today. They held it in the UN General Assembly chamber (wow!), and I got to share the stage with such wizardly people as <a href="http://en.wikipedia.org/wiki/Clay_Shirky">Clay Shirky</a> and <a href="http://en.wikipedia.org/wiki/Reddit">Alexis Ohanian</a>. It was a tremendous amount of fun. If you're interested in seeing me--my talk was called "Building a Great Community from Terrible People (Or: How I Learned to Stop Worrying and Love 4Chan)"--then the direct link is <a href="http://www.ustream.tv/recorded/13083294">here</a>.

AI and Consciousness (August 9, 2010)

So I've been at CTY again (as per usual). This time I was the instructor for Human Nature and Technology during the first session, and then the instructor for Philosophy of Mind during the second session. Yes, I am moving up in the world. As is normal at CTY, I was thinking about philosophy a lot. So today, when a friend of mine asked me to share my thoughts on the AI debate, I decided to finally try to put pen to paper and express the nascent argument I've been developing for a while now. This is how it turned out:

I think one of the most important things to press on here is the distinction between simulation and duplication. A lot of the disagreements about artificial intelligence, it seems to me, come down to disagreements about whether or not AI is supposed to be <i>simulating</i> a (human) mind or <i>duplicating</i> a (human) mind. This point is widely applicable, but one of the most salient ways that it comes up here is as a distinction between consciousness and intelligence. Let me say what I mean.

Some (many) properties of systems are functional properties: they're properties that are interesting, that is, just in virtue of what function they discharge in the system, not in virtue of HOW they discharge that function. For most purposes, human hearts are like this: an artificial heart is just as good as a normal one, because we don't really care about having a <i>meat</i> heart so much as having something to pump our blood. Functional properties are "medium-independent": they are what they are irrespective of the kind of stuff the system is made out of. Not all properties are like that, though.
Here's a story.

Suppose I've got a few different copies of <i>Moby Dick</i>. Thanks to the wonders of technology, though, all of these copies are in different formats: I've got a regular old leather-bound book, a PDF on my hard drive, and a CD with the audiobook (read by...let's say Patrick Stewart). These are all representations with the same content: that is, they're all representations of the events described in <i>Moby Dick</i>. The content of the story is medium-independent. There are, however, facts about each of the representations that don't apply to the other representations: the physical book is (say) 400 pages long, but the audiobook has no page length. The PDF is in a certain file format (PDF), but the book has no file format. The audiobook has a peak frequency and volume level, but the PDF has neither of those. This list could be continued for quite a while; you get the picture--the important thing to emphasize is that while they all have the same content, each of the representations is instantiated in a different physical form, and thus has facts that are unique to that physical form (e.g. page length, format, peak frequency, &c.). That's all true (once again) in spite of their identical content.

Now, let's add another representation to the mix. Suppose I, like the fellows at the end of <i>Fahrenheit 451</i>, have decided to memorize <i>Moby Dick</i>. It takes quite some time, but eventually I commit the entire book to memory, and can recite it at will. This is a fourth kind of representation--a representation instantiated in the complex interconnection of neurons in my brain. Paul Churchland (2007) has developed a very nice account of how to think about neural representation, and he talks about a semantic state space (SSS)--that is, a complicated, high-dimensional vector space that represents the state of each of my neurons over time. This SSS--the changing state of my brain as I run through the story in my head--represents <i>Moby Dick</i> just as surely as the book, audiobook, or PDF does, and it remains a totally physical system.

Ok, so we can ask ourselves, then, what unique things we can say about the SSS representing <i>Moby Dick</i> that we can't say about the other representations. It has no page length, no file format, and no peak volume. What it does have, though, is a qualitative character--what Thomas Nagel calls a "what-it-is-like-ness." It has (that is) a certain kind of feeling associated with it. That seems very strange. Why should a certain kind of representation have this unique quality? Asking this question, though, just shows a certain kind of representational chauvinism on our part: we might as well ask why it is that a book has a certain number of pages, or why a PDF has a file format (but not vice-versa). The answer to all of the questions is precisely the same: this representation has that feature just in virtue of how the physical system in which it is instantiated is arranged. The qualitative nature of the memorized book is no more mysterious than the page count of the leather-bound book, but (like page length) <i>it isn't medium-independent either</i>, and <i>that's</i> the point that cuts to the heart of the AI debate, I think. Here's why.

Think about Turing and Searle as occupying opposite sides of the divide here.
Turing says something like this: when we want AI, we want to create something that's capable of <i>acting</i> intelligently--something that can pass the Imitation Game. Searle points out, though, that this approach still leaves something out--it leaves out the semantic content that our minds enjoy. Something could pass the Imitation Game and still not have a mind like <i>ours</i>, in the sense that it wouldn't be conscious. This whole argument, I suggest, is just a fight about whether we should be going after the medium-independent or medium-dependent features of our minds when we're building thinking systems. That is, should we be trying to <i>duplicate</i> the mind (complete with features that depend on how systems like our brains are put together), or should we be trying to <i>simulate</i> (or functionalize) the mind and settle for something that discharges all the functions, even if it discharges those functions in a very different way? There's no right or wrong answer, of course: these are just very different projects.

Searle's point is that digital computers won't have minds like ours no matter what kind of program they run, or how quickly they run it. This makes sense--in the language of the book analogy from above, it's like asserting that no matter how much fidelity we give to the audiobook recording, it's never going to have a page length. Of course that's true. That's the sense in which Searle is correct. Turing has a point too, though: for many applications, we care a lot more about the <i>content</i> of the story than we do about the format in which we get it. Moreover, it looks like duplicating our sort of minds--building a conscious system--is going to be a lot harder than just building a functionally intelligent system. Ignoring the medium-dependent features of mentality and focusing on the medium-independent ones lets us build systems that behave intelligently, but gives us the freedom to play around with different materials and approaches when building those systems.

So in some sense, the question "can digital computers think?" is ambiguous, and that's why there's so much disagreement about it. If we interpret the question as meaning "can digital computers behave intelligently?" then the answer is clearly "yes," in just the same sense that the answer to the question "can you write a story in the sand?" is clearly "yes," even though sand is nothing like a book. If we interpret the question as meaning "can digital computers think in precisely the way that humans with brains think?" then the answer is clearly "no," in just the same sense that the answer to the question "will a story written in the sand have a page count?" is clearly "no," no matter what kind of sand you use. Searle is right to say that consciousness is special, and isn't just a functional notion (even though, as he says, it is a totally physical phenomenon). It's a function of the way our brains are put together. Turing is right to say that what we're usually after when we're trying to make intelligent machines, though, is a certain kind of behavior, and that our brains' way of solving the intelligent-behavior problem isn't the only solution out there.

I'm not sure if any of that is helpful, but I've been meaning to write this idea down for a while, so it's a win either way.

On the Future of Philosophy and Science (June 7, 2010)

A friend of mine asked me for my opinion about the future of philosophy. There's a popular perception that, in light of the tremendous advances science has made in the last century or so, philosophy is a dying discipline. As most of you know, I disagree with this assessment, but I do think that philosophy needs to adapt if it is to survive. Here are my brief thoughts on the future of my discipline, and its relationship to the scientific project.

Philosophy as an isolated discipline is certainly in decline--the number of questions that are purely "philosophical" (and worth answering) is shrinking. That's less a reflection on philosophy, though, and more a reflection of the state of academia in general: disciplinary lines are blurring. Physics is (at least in part) informed by biology, information theory, and other special sciences. The special sciences themselves are (and have been for a while) mutually supportive and reinforcing; there's no clear line between a question for (say) sociology and a question for economics. None of that is to say that physics, biology, information theory, sociology, or economics is in decline, though--it just means that academia is becoming increasingly interdisciplinary.

On the face of it, this shift looks to have hit philosophy particularly hard: we've fallen quite far from our position as the queen of the Aristotelian sciences to where we are today, and there's a pervasive attitude both among other academics and among lay people, I think, that philosophy is basically obsolete today, having been replaced by more reputable scientific investigation. There's a perception, that is, that metaphysics has been supplanted by mathematical physics, that ethics has been rendered obsolete by sociobiology and evolutionary game theory, and that questions about the nature of the mind have been reduced to questions about neurobiology (or maybe computation theory). All of this is, I think, more or less true: the days of philosophy pursued as a stand-alone competitor to science are over, or at least they ought to be. This is emphatically <i>not</i> the same thing as saying that philosophy is dead or dying, though--it just needs to undergo the same kind of shift that other sciences have had to go through as they've entered the modern era. Philosophy needs to be incorporated into the unified structure of science generally.

It's not immediately obvious how to do this, but there are some clues. We should start by looking at the areas of science where philosophers--that is, people trained in or employed by philosophy departments, using methods that are marked by careful attention to argument, critical examination of underlying assumptions, and concern with big-picture issues--are still making useful contributions to the scientific enterprise. There are, I think, two pretty clear paradigm cases here: quantum mechanics and cognitive science.
In both of these fields, philosophers have made contributions that, far from consisting in idle navel-gazing and linguistic trickery, have made a real impact on scientific understanding. In QM, philosophers like David Albert, Hilary Greaves, David Deutsch, David Wallace, Barry Loewer, Tim Maudlin, Frank Arntzenius, and others have helped tremendously in clarifying foundational issues and resolving (or at least explicating) some of the trickier conceptual problems lurking behind dense mathematical formalism. Similarly, philosophers like Daniel Dennett, John Searle, Andy Clark, Ken Aizawa, and others have been instrumental in actually getting the field of cognitive science off the ground; just as in QM, these philosophers are responsible both for clarifying foundational concepts and for designing ingenious experiments to test hypotheses developed in the field.

What does the work these people are doing have in common in virtue of which it is philosophical? Again, the answer isn't clear, but this just reinforces the point that I'm making: there's no longer a clear division between philosophy and the rest of the scientific project to which philosophers ought to be contributing. If anything, the line between philosophy <i>qua</i> philosophy and science (insofar as there's a line at all) seems more and more to be a <i>methodological</i> line rather than a <i>topical</i> one; a philosopher differs from a "normal" scientist not in virtue of the subject matter he investigates, but in virtue of the way he approaches that subject matter. Scientists, by and large, are trained as specialists: by the time a physicist or biologist reaches the later stages of his PhD, his work is usually sharpened to a very fine point, and his area of expertise is narrow but very deep: many (but not all) practicing scientists know a tremendous amount about their own fields, but are content to leave thinking about other fields to other specialists. Philosophers, on the other hand, are often generalists (at least when compared to physicists). In virtue of our general training in logic, argumentation, critical thinking, and, well, <i>philosophy</i>, we're often better equipped than most to see the bigger picture--to see the way the whole scientific enterprise fits together, and to notice problems that are only apparent from a sufficiently high level of abstraction. Training in philosophy means sacrificing a certain amount of depth of knowledge--I'll never know as much about particle physics as Brian Greene--for a certain amount of breadth and flexibility; by the time my training is done, I'll know a bit about particle physics, a bit about evolutionary theory, a bit about computer science, a bit about cognitive neurobiology, a bit about statistical mechanics, a bit about climate science, a bit about the foundations of mathematics, and so on. That kind of breadth certainly has its drawbacks--a philosopher is unlikely to make the kind of experimental breakthroughs that a scientist dedicating his life to a single problem might achieve--but it also has its benefits; philosophers are in a unique position to (as it were) care for the whole forest rather than just a few trees.

Philosophers are uniquely situated, that is, to engage in the project of "bridge-building" between the individual sciences--uniquely situated to facilitate the continuing breakdown of disciplinary barriers that threatened philosophy's existence to begin with.
Philosophy's toolkit is sufficiently general to be applied to any of the special sciences, given a little bit of study and localization. This isn't to suggest that philosophers should (or even can) make pronouncements about scientific issues from the armchair; that's the model of philosophy that's dying, and I'm not the only one to have said "good riddance" to it. Doing philosophy of physics means learning physics, and doing philosophy of biology means learning biology. We need to engage with the disciplines to which we contribute; the edges of the bridges need to be anchored on solid ground before they can help us cross the interdisciplinary gaps. The "big picture" questions that have been the hallmark of philosophy for millennia--questions like "what is humanity's place in the universe?" and "what do our best theories of the structure of the world mean for who we are?" and even "what's special about consciousness?"--still have a place in contemporary science. Science has room for both specialists and generalists, and questions like "what's the right way to think about a real physical system's being in a state that's represented by a linear combination of eigenvectors?" have an important place in science. The scientific enterprise takes all kinds, and there's room for philosophers to contribute, if we can just get our collective head out of our collective ass and come back to the empirical party with the rest of science.

Bertrand Russell: Leaping Tall Proofs in a Single Bound Variable (September 3, 2009)

Back when I was a human larva, Bertrand Russell was one of the first philosophers I ever discovered, let alone read in any depth. I was raised moderately Catholic, but by the time I was 11 or 12, I was wrestling with nascent feelings that Catholicism--and indeed, all of religion--might be terribly inadequate. One day, while hanging out in a bookstore (yeah, I was that kind of 12 year old), I happened on a book called <i>Why I Am Not a Christian</i>. I read the titular essay right then and there and, after buying the book, soon devoured the rest of the essays. Russell's clear, lucid, humorous prose expressed all the doubts I'd been unable to put into words (and then some!) and exposed me to serious philosophy for the first time. I was hooked, and before long I was plowing through Wittgenstein's <i>Philosophical Investigations</i> and every other piece of philosophy I could get my hands on. Though I'm not a logician--and though Russell's work on religion was only a very, very small part of his mostly logic-oriented corpus--I still have a soft spot in my heart for him: he was my first doorway into what eventually would become a career.

That's why I'm so delighted to discover that two gentlemen (one of them a computer science professor at Berkeley!) are <a href="http://www.independent.co.uk/arts-entertainment/books/features/bertrand-russell-the-thinking-persons-superhero-1780185.html">publishing a graphic novel</a>--that's what you call your comic book if you want it to be taken seriously--about Russell's struggles with life, mathematics, philosophy, and his own tenuous sanity. Snip from the article about it in <i>The Independent</i>:

<blockquote>Through GE Moore at Cambridge, he discovered Leibniz and Boole, and became a logician.
Through Alfred Whitehead's influence, he travelled to Europe and met Gottlob Frege, who believed in a wholly logical language (and was borderline insane) and Georg Cantor, the inventor of "set theory" (who was locked up in an asylum) and a mass of French and German mathematicians in varying stages of mental disarray. Back home he and Whitehead wrestled with their co-authored Principles of Mathematics for years, endlessly disputing the foundations of their every intellectual certainty, constantly harassed by Russell's brilliant pupil Wittgenstein.

If the subject matter seems a little arid, with its theories of types, paradoxes and abstruse language (calculus ratiocinator?), and if its recurring theme of how logic and madness are psychologically intertwined seems a touch gloomy, don't let that put you off. Logicomix tells its saga of human argumentation with such drama and vivid colour that it leaves the graphic novel 300 (Frank Miller's take on the Battle of Thermopylae) looking like something from Eagle Annual.</blockquote>

This sounds great--something like <i><a href="http://www.amazon.com/Wittgensteins-Poker-Ten-Minute-Argument-Philosophers/dp/0060936649/ref=sr_1_1?ie=UTF8&s=books&qid=1251997709&sr=1-1">Wittgenstein's Poker</a></i> with pictures. It looks like the book itself isn't available for preorder on <a href="http://www.amazon.com/gp/product/0747597200">Amazon</a> yet (it's going to be released in Europe on September 7, and sometime after that in the United States), but you can sign up to be notified when it is available. This is certainly something that I'll be making room in my schedule to read!

Quicklink: Ben Bradley and Roy Sorensen on Death (August 9, 2009)

I've been thinking a lot about death lately--there's no particular reason; I just find some of the questions surrounding the philosophy of death fascinating. Perhaps primarily, I'm intrigued by the intuition that some people (apparently) have that either (1) death is not an evil--that is, it isn't something that we should fear for ourselves--or that (2) indefinite life isn't something to be desired. I suspect that both of these intuitions come to more or less the same thing, but they don't seem to be universally correlated: some people will hold (1) without holding (2). When I first started talking to friends and colleagues about this issue, I was rather shocked to find out that <i>anyone</i> holds (1) <i>or</i> (2) at all--they seem so obviously false to me that I have a hard time fathoming how anyone could hold them. Still, apparently these views aren't at all uncommon; I've got a paper floating around in my head attacking (1) and (2), but until it manifests (maybe later this semester?), I'll have to settle for just pondering. In the meantime, here are Ben Bradley (Syracuse University) and Roy Sorensen (Washington University in St. Louis) discussing some of these issues. The discussion is a little slow (and Ben Bradley is--ugh--a hedonist), but BloggingHeads lets you watch the whole thing at 1.4x speed. I recommend that option. They touch on some of the fundamental questions in the field, including (1) and (2)--Roy Sorensen and Ben Bradley both seem to share my shock about the fact that someone might hold (2).
Enjoy!

<embed type="application/x-shockwave-flash" src="http://static.bloggingheads.tv/maulik/offsite/offsite_flvplayer.swf" flashvars="playlist=http%3A%2F%2Fbloggingheads%2Etv%2Fdiavlogs%2Fliveplayer%2Dplaylist%2F21728%2F00%3A08%2F57%3A58&cobrand=3" height="335" width="448"></embed>

Thanks, <a href="http://leiterreports.typepad.com/blog/">Leiter</a>!

Having Your Qualia and Eating Your Physics Too (July 29, 2009)

Can we coherently acknowledge the existence of <a href="http://plato.stanford.edu/entries/qualia/">qualia</a> without being forced into a non-physicalist stance about the contents of the world? I'm back at <a href="http://en.wikipedia.org/wiki/Center_for_Talented_Youth">CTY</a>--as I am every summer--and today our philosophy of mind class got to Jackson's "Epiphenomenal Qualia." I was somewhat surprised, having not read the article since last year, to find that my own views on it seem to have changed considerably. Specifically, while I still agree with the main thrust of Jackson's argument (that is, that qualia exist), I'm much less impressed with the quality of his argumentation and the route by which he arrives at his conclusion; more specifically still, I'm incredibly skeptical that his "what Mary didn't know" argument shows anything like what it is purported to show. Qualia certainly deserve to be included in our ontology, but that emphatically <i>doesn't</i> imply that we ought to reject the physicalist picture of the world. Let me try and show how I think these two statements can be reconciled.

First, I suppose a bit of background is in order. Readers may already be somewhat familiar with the Mary case--Jackson's version of the <a href="http://plato.stanford.edu/entries/qualia-knowledge/">knowledge argument</a> against physicalism--so I won't waste a whole lot of time detailing the moves. Still, it's worth laying out exactly how the argument is supposed to proceed; as we shall see, the precise wording of one of the premises can make all the difference between soundness and total incoherence. Let's start with the informal presentation. Briefly, the standard presentation goes something like this.

<blockquote>Mary is a gifted neuroscientist who has dedicated her life to studying human color perception. She's learned everything there is to know about the physical process of seeing color: she knows everything about how the surface spectral reflectance of various objects interacts with environmental variables to produce changes in the photoreceptors of the eye, how those changes produce neural excitations, how those excitations are processed in the brain, and so on. She knows all the physical facts about how humans perceive color. Somewhat ironically, Mary herself has never perceived color. Her eyes (say) have been surgically altered so that she is only able to view the world in shades of grey. Nevertheless, her studies have proceeded beautifully, and she is now in a position of perfect physical knowledge. With this complete knowledge in hand, Mary undergoes an operation to reverse her perceptual idiosyncrasy; the procedure to keep her from being able to see color is reversed, and Mary's biology is returned to normal.
When Mary awakens from the operation, she is presented with a red rose, and actually sees red for the first time. Does Mary learn something new?</blockquote>

On the standard interpretation, we're now presented with two horns of a dilemma: we're either forced to say that no, Mary has learned nothing new when she first sees color--an ostensibly counter-intuitive position to hold--or we're forced to say that yes, Mary learns something new when she sees the rose. If we take this second horn, though (so the argument goes), we must <i>also</i> admit that there are facts about color experience that are not physical; after all, <i>ex hypothesi</i> Mary knows all the physical facts about color vision--if she learns something new by actually seeing color, that new fact must be a non-physical fact. Therefore, the physicalist picture of the world is, while perhaps not strictly <i>false</i>, incomplete in an important way: it is incapable of accounting for the qualitative character of conscious experience. Thus, we must appeal to more than physics when describing a world that contains conscious creatures.

Here's a more formal presentation of the argument (taken from the SEP):

Premise P1: Mary has complete physical knowledge about human color vision before her release.
Therefore:
Consequence C1: Mary knows all the physical facts about human color vision before her release.
Premise P2: There is some (kind of) knowledge concerning facts about human color vision that Mary does not have before her release.
Therefore (from (P2)):
Consequence C2: There are some facts about human color vision that Mary does not know before her release.
Therefore (from (C1) and (C2)):
Consequence C3: There are non-physical facts about human color vision.

This is, at first glance, a very plausible argument. Jackson's own conclusion was a version of epiphenomenalism: at the time of the article's publication, he held that whatever non-physical knowledge Mary acquired must lack any kind of causal efficacy, thus maintaining the causal closure of the physical universe. That seems to me to be a pretty desperate move, though, and apparently Jackson eventually agreed--he's since recanted this position, and now holds that there must be <i>something</i> wrong with the Mary case. I'm not sure if he's put any work into figuring out what it is, but other people certainly have. I'm going to more or less ignore all of them, as is my wont.

Here's what struck me when I was reading this argument today while preparing to lecture to the class on it: <i>Jackson is deeply ambiguous, confused, or otherwise mistaken about what he means in (P1).</i> The argument never even gets off the ground just because he's <i>wrong</i> about the kinds of things that Mary would be able to know from her particular position in her gray-scale world. Let's tease this apart a little more.

What does it mean to say that Mary knows all physical facts about color perception?
Presumably, just this: for every predicate, relation, or process <i>P</i> that relates to human color vision, if <i>P</i> is constrained by the laws of physics, then Mary knows <i>P</i>. This should be relatively non-controversial--"physical facts" are those (and only those) facts that are about the behavior of physical systems (and nothing else). The physicalist position is that the set of these facts is identical with the set of all facts that are necessary to explain the workings of the universe; that is, the physicalist position is the position that knowing <i>all</i> the physical facts amounts to knowing <i>everything</i> worth knowing. More narrowly, the physicalist position <i>vis-a-vis</i> color perception is just that knowing all the physical facts about color perception is both necessary and sufficient to give a complete account of how color perception works.

Good. We're homing in on the problem. The next question that we need to answer is this one: how do we go about learning physical facts? The physicalist "bite the bullet" style response to Jackson's argument just denies that Mary learns anything new when she's exposed to color for the first time--it asserts that if she knew all the physical facts, then she'd know what the experience was like. This is not very intuitive; we have a deep intuition that no matter how much I study some subject (via books, laboratory experiments, and so on), there are just some facts--like what it's <i>like</i> to see color--that just won't be accessible to me. That is, we have an intuition that there are some <i>relevant</i> facts that either can't be <i>written down</i>, or can't be discerned through objective experimentation: the what-it-is-likeness of color experience is, presumably, counted among these facts. <i>This</i> is the intuition that Jackson's argument exploits.

It's worth proceeding carefully here, though. Is saying that some particular fact <i>F</i> can't be written down or accessed through objective, third-person experimentation--that is, can't be described from a "view from nowhere"--<i>equivalent</i> to saying that <i>F</i> isn't a physical fact? Can all physical facts (to put it another way) be <i>written down</i> and <i>accessed</i> from a third-person viewpoint? Recall our definition of 'physical fact' above:

<blockquote>"Physical facts" are those (and only those) facts that are about the behavior of physical systems (and nothing else).</blockquote>

Let's rephrase the question, then: can <i>all</i> the behavior of <i>every</i> physical system be represented in third-person accessible formats? If we answer this question in the affirmative, we've adopted the position that Flanagan, in <i>Consciousness Reconsidered</i>, terms "linguistic physicalism," and there seems to be good reason to think that we've made a mistake somewhere in our reasoning. In answering it that way, we've committed ourselves to the following position.

(LP) What it means for some fact <i>F</i> to be a physical fact is for <i>F</i> to be representable in some observer-neutral, third-person accessible form (e.g. public language).

That's a problem, though. If we adopt (LP), then Jackson's argument collapses into something that's trivially true (if not question-begging!).

(1a) Mary knows all linguistic (i.e. third-person accessible) facts about color perception.
(2a) Mary learns something new about color perception when she sees the rose.
(3a) Therefore, there are some facts about color perception that are not representable linguistically.

<i>Of course</i> this is true: it's part of what it <i>means</i> for something to be qualitative (that is, to be a conscious experience) that it's <i>essentially</i> private--that it's <i>essentially</i> accessible only from the first-person perspective. The question, then, becomes whether or not we are justified in adopting (LP); can we give an account of what's going on that doesn't require us to adopt it? Sure: we just have to allow that there might be some physical facts--facts about the behavior of some physical systems--that aren't capturable in third-person accessible representations. If we make this concession, then explaining what's going on in the Mary case becomes very easy: while black-and-white Mary has learned all the <i>linguistically representable</i> physical facts about color perception, this set of facts is not identical to the set of <i>all</i> physical facts about color perception--that is, there are aspects of the behavior of some relevant physical systems that cannot be captured from the third-person "view from nowhere." These facts, of course, are facts about <i>what it is like</i> to be in a certain <i>physical</i> state. To put it another way, there are facts about the state of Mary's own brain--which is, of course, a physical system--that can't be known from a third-person perspective: she actually has to <i>be</i> in that state in order to know <i>everything</i> about it. When she's exposed to red for the first time, then, she's adding another bit of physical knowledge--which just is, recall, knowledge about the behavior of physical systems, <i>which include her brain</i>--to her knowledge base: that bit of knowledge, though, is one that is only <i>accessible</i> from the first-person standpoint.

Let me try to put this point as simply as I can. The problem with this thought experiment is that Jackson is mistaken when he says that black-and-white Mary knows <i>all</i> the physical facts. What he <i>means</i> to say is that she knows all the <i>linguistic</i> physical facts--all the physical facts that can be <i>accessed</i> from the "view from nowhere." What Mary <i>doesn't</i> know is the set of physical facts--facts about the physical system that is her brain--that can <i>only</i> be accessed from the first-person viewpoint; she doesn't know what it's <i>like</i> to be in a particular physical state. <i>That's</i> what she learns when she leaves her black-and-white operating room.

To put it one more way, let me just say this. "Physical facts" is a term that refers not to a set of facts that is defined by a mode of access--that have in common something about <i>how</i> they can be known--but to a set of facts that is defined by the sort of <i>system</i> they deal with--that have in common a <i>subject matter</i>, not a kind of access. Physical facts are facts about the <i>behavior</i> of systems for which that behavior is totally describable in terms of the laws of physics, and it makes absolutely <i>no difference</i> (at least as far as we're concerned here) what the <i>mode</i> of access to those facts is. Some (many!) of the facts are expressible in observer-neutral language. Some are not.
What matters is not this mode of access, but rather whether or not <i>what</i> is accessed is information about the behavior of a physical system.</div><div><br /></div><div>Addendum: Please read the comment thread for more on this. Both Mark and Eripsa have given very insightful criticism and show that this argument needs refining. I've done my best to refine it below, and I might post an updated version later on. For now, though, the discussion in the comments is definitely worth following. Thanks to <a href="http://www.nothingincommon.net/">Lally</a>, too, for providing vehement (and helpful) critiques off-thread.</div>Jonhttp://www.blogger.com/profile/09594949524027204661noreply@blogger.com10tag:blogger.com,1999:blog-9215117687149149963.post-5454150265632806112009-01-12T12:42:00.000-08:002009-01-12T12:47:24.675-08:00Musings on Embedded Epistemology<div>I took a course in epistemology last semester, and (surprise) it made me think about epistemology. What follows is an attempt to summarize my random musings and conversations I've had over the last few weeks into something that begins to approach a coherent theory. It is, as I cannot emphasize enough, <span class="Apple-style-span" style="font-style: italic;">very</span> preliminary so far, and <span class="Apple-style-span" style="font-style: italic;">very</span> much a work in progress. Still, I find these considerations very interesting, and I hope you do as well.</div><div><br /></div>Belief justification is like a progress bar on a download--it can be filled or emptied to various degrees by things that we encounter out in the world. For instance, if I trust some individual a great deal, his words will tend to fill my "truth bar" a great deal; this weighing is based (among other things) on my past interactions with him, my knowledge of his epistemic state, &c.--certain contextual variables about our relationship lead me to weigh his words highly when making (or contemplating making) epistemic actions like belief revision. The degree to which my truth bar is filled is also going to depend on the nature of the proposition this hypothetical interlocutor is informing me about: even from a trusted friend, I'm going to more readily assent to the proposition 'there is a brown dog around the corner' than I am to the proposition 'there is a child-eating clown around the corner.' Again, this reflects the contextually-influenced nature of epistemic action: based on other beliefs I have about how the world works, I'm going to be more or less likely to assent to a new belief (or to change an old one). <br /><br />It's important to emphasize that the truth-bar is almost never entirely full, except in some very special cases (e.g. conscious states to which you have immediate, incorrigible access). Take the case of a proposition based on basic sensory information--e.g. 'there is an apple on my desk.' In normal circumstances--good lighting, I can feel and see the apple, other people see the apple too, &c.--I have very good reason to suspect that there really is an apple on my desk; the truth-bar for that proposition is (say) 99% full. Still, there are potential defeaters here: it might be the case that I am actually in some kind of Matrix scenario, and therefore it might be the case that there is no desk or apple at all.
Still, based on other (fairly strongly justified) beliefs I have about the world, this Matrix scenario seems rather unlikely--that is, the truth-bar for 'I am in the Matrix' is very, very close to empty (though not entirely empty, as the proposition is still a logical possibility). Because this defeating proposition ('I am in the Matrix') has a very weak truth-bar, it doesn't weigh very heavily in my epistemic considerations--it's enough to keep the bar for 'there is an apple on my desk' from being 100% full, but that's about it. <br /><br />This goes sharply against established epistemic tradition, according to which the primary goal of epistemology is truth. If we define truth as a 100% full bar, there are going to be very few propositions (aside from tautologies like 'all black things are black') that will enjoy an entirely full bar. Instead, the right way to think about epistemology--and about our epistemic responsibilities--is as a quest for justified belief, a quest for a reasonably full bar. What counts as 'reasonably full' is, again, going to vary based on contextual variables: when the stakes are rather low, I might assent to a proposition when (say) the truth bar is over 50% full. This might be the case when, for example, a friend tells me that there is a brown dog outside my house; I believe him, and if someone asks me 'is there a brown dog outside your house?' I will be inclined to answer in the affirmative. My friend might be wrong or lying, but the stakes are low and I have very few strong defeater propositions in play--few good reasons to suppose that my friend speaks falsely, in other words. In more important cases (such as when engaged in technical philosophical deliberation, or when designing a passenger jet), I'm going to be inclined to withhold assent from propositions until the bar is almost entirely full: the consequences of assenting to the wrong belief are so potentially dire that I will demand a higher standard of justification, investigate possible defeaters more thoroughly, &c. <br /><br />The emphasis here is on the contextually-dependent nature of epistemic action; rather than doing a lot of complex deliberating for every possible belief change entirely in our heads, we "offload" a certain amount of the work into the existing epistemic environment; that is, we use the existing epistemic landscape to simplify our decision-making by heuristically assigning various "values" to propositions that are related to the one under consideration, and performing a kind of Bayesian calculation to get a rough approximation of truth or falsity. We can make a direct parallel here with other work being done in extended/embedded cognition and extended mind theses--in just the same way that we use external props (e.g. written notes) to support certain cognitive processes (e.g. memory), we use our intuitive grasp of the existing epistemic landscape as a prop to support our own decision making. I call this approach "contextually embedded epistemology." <div><br /></div>Statisticians or those with a background in math will recognize that I'm describing something very much like a Bayesian network here--I suspect that our beliefs, were they to be mapped, would look much like this. There are multiple links between multiple different beliefs, and one belief might depend on many others for support (or might be partially defeated by many others). The picture is constantly in a state of flux as shifts in one node (i.e. a single belief) influence the certainty (i.e. the fullness of the truth bar) of many other nodes. The Bayesian way of looking at things is far from new, but the emphasis on partial-completeness and environmental support, as far as I know, is.
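<br /><br />To make this a little more concrete, here's a very rough sketch of the kind of partial, weighted support I have in mind--a minimal toy model in Python, with propositions, weights, and thresholds that are entirely made up for the example, and not a serious proposal about how the numbers should actually be set:<br /><br /><blockquote><pre>
# A toy "truth bar": each belief's justification is a number in [0, 1],
# filled by weighted support from related beliefs and drained by defeaters.
# Every proposition, weight, and threshold below is invented for illustration.

class Belief:
    def __init__(self, claim, bar=0.0):
        self.claim = claim
        self.bar = bar          # how full the truth bar currently is
        self.supports = []      # (other_belief, weight) pairs
        self.defeaters = []     # (other_belief, weight) pairs

    def add_support(self, other, weight):
        self.supports.append((other, weight))

    def add_defeater(self, other, weight):
        self.defeaters.append((other, weight))

    def update(self):
        filled = sum(b.bar * w for b, w in self.supports)
        drained = sum(b.bar * w for b, w in self.defeaters)
        self.bar = max(0.0, min(1.0, filled - drained))
        return self.bar

    def assent(self, threshold):
        # the threshold is contextual: low stakes, low bar; high stakes, high bar
        return self.bar >= threshold


sensory = Belief("I see and feel an apple on my desk", bar=1.0)
matrix = Belief("I am in the Matrix", bar=0.01)

apple = Belief("There is an apple on my desk")
apple.add_support(sensory, 1.0)
apple.add_defeater(matrix, 1.0)
apple.update()                        # 0.99 -- very full, but not quite

print(apple.assent(threshold=0.5))    # brown-dog stakes: True
print(apple.assent(threshold=0.999))  # passenger-jet stakes: False
</pre></blockquote>The point of the toy is just the shape of the thing: the bar gets filled and drained by other beliefs rather than set directly, it almost never hits 100%, and whether it is "full enough" is a fact about the context of the decision, not about the proposition itself.<br /><br />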
These are just some random thoughts I've had about this in the last few days, so comments and criticisms are encouraged. This needs a lot of tightening up.Jonhttp://www.blogger.com/profile/09594949524027204661noreply@blogger.com8tag:blogger.com,1999:blog-9215117687149149963.post-28056600504100290052008-12-25T21:48:00.000-08:002008-12-25T21:59:01.129-08:00Quicklink: How a Computer WorksBoingBoing recently featured <a href="http://gadgets.boingboing.net/2008/12/24/how-it-works-the-com.html">scans</a> of a wonderful 1978 book called <span class="Apple-style-span" style="font-style: italic;">How a Computer Works</span>. It's so full of awesome, it's a wonder it doesn't explode; there's even some implicit philosophy! It seems almost too amazing to be real, but it's entertaining either way. Snip:<div><br /></div><div><blockquote>There is something about computers that is both fascinating and intimidating. They are fascinating when they are used in rocketry and space research, and when they can enable man to get to the moon and back. In this respect, they are like human machines with "super-brains." Some of them can even play music. On the other hand, we are likely to be intimidated by their complex mechanisms and large arrays of blinking lights. You should do what scientists tell you to. </blockquote><blockquote>In fact, computers do not have brains like we do. They cannot really think for themselves, except when they are doing complicated arithmetic.<br /></blockquote><blockquote><br /></blockquote>So next time you start using your calculator program remember this: the more complex arithmetic you do, the more sentient They become--other than that, do what scientists tell you to.</div><div><br /></div><div><a href="http://gadgets.boingboing.net/2008/12/24/how-it-works-the-com.html">Link</a><br /><br /></div>Jonhttp://www.blogger.com/profile/09594949524027204661noreply@blogger.com0tag:blogger.com,1999:blog-9215117687149149963.post-13300690293302249182008-12-13T11:28:00.000-08:002008-12-13T13:08:49.384-08:00Quicklink: Dennett and Clark Smack Substance Dualists DownNew Scientist recently ran a very short <a href="http://www.newscientist.com/article/mg20026860.100-materialist-mind.html">piece</a> in which Dennett and Clark respond to accusations that any talk about mind influencing body (e.g. as when a deliberate shift in attention causes a change in brain states) implies an acceptance of some kind of immaterial soul / Cartesian ego. The rejoinder they offer is short, to the point, and (it seems to be) decisive. Snip:<br /><br /><blockquote>But this would lend support to the proposition that minds are non-material - in the strong sense of being beyond the natural order - only if we were to accept the assumption that thoughts, attending and mental activity are not realised in material substance.</blockquote><br /><br />I've had my differences with both Clark and Dennett with regard to the nature of consciousness, but they're right on here: arguing that the explanatory role of consciousness proves the existence of an immaterial (i.e. essentially non-physical) kind of substance is straightforwardly question-begging--it assumes that consciousness is not itself the result of physical processes.
Descartes' legacy haunts us still.<div><br /></div><div><a href="http://www.newscientist.com/article/mg20026860.100-materialist-mind.html">Link.</a></div>Jonhttp://www.blogger.com/profile/09594949524027204661noreply@blogger.com1tag:blogger.com,1999:blog-9215117687149149963.post-61663196595551798802008-12-09T15:06:00.001-08:002008-12-09T17:23:35.891-08:00Andy Rooney Derides Upgrade Culture, Misunderstands TechnologyHere's a delightful little video of <a href="http://en.wikipedia.org/wiki/Andy_Rooney">Andy Rooney</a> doing his loveable curmudgeon thing, this time with his sights set on Bill Gates, upgrade culture, and the computer's supplantation of typewriters generally. I absolutely adore Andy Rooney, but what he has to say here is a beautiful representation of how people on the other side of the so-called "digital divide" often misunderstand technology. Watch the video first:<div><br /></div><div><br /></div><br /><object width="425" height="344"><param name="movie" value="http://www.youtube.com/v/g1PO7nyyLn0&hl=en&fs=1"><param name="allowFullScreen" value="true"><param name="allowscriptaccess" value="always"><embed src="http://www.youtube.com/v/g1PO7nyyLn0&hl=en&fs=1" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="425" height="344"></embed></object><br /><br />Now, leaving aside the issue that Bill Gates doesn't really have anything to do with hardware design (or the trajectory of technology generally, at least not directly), a few of the points that Mr. Rooney makes in this piece are representative of some fundamental confusions regarding technology--confusions that, I think, are shared by many in his generation. I want to say a few words about those confusions here.<div><br /></div><div>Mr. Rooney's central point is that while he wrote on the same Underwood typewriter for decades, he's forced to upgrade his computer every year or two; new computers are seldom compatible with every aspect of their predecessors' functionality--old file types are dropped (try to find a computer that will read .wps documents today), and old programs are no longer supported (my 64 bit Vista machine already complains about running 32 bit programs that are only a year old)--and morphological similarities are rarely preserved. This is all certainly true, but the same is true of technology generally--the time scale has only recently accelerated to the point where such differences become visible.</div><div><br /></div><div>I've advocated the Vygotsky/Clark/Chalmers position of thinking of technology as cognitive scaffolding before, and I think that metaphor is informative here. Suppose you're using scaffolding (in the traditional sense) to construct a tall building. As the building (and the scaffolding) gets higher and higher, certain problems that didn't exist at ground level will manifest themselves as serious issues--how to keep workers from plummeting 60 stories to their deaths, for instance, is a problem that's directly related to working on 60 story tall scaffolding. Still, it would be a mistake to say "Why do we need 60 story high scaffolding? We didn't have any of these problems when the scaffolding was only 10 feet high, so we should have just stopped then; making higher scaffolding has caused nothing but problems." We need 60 story high scaffolding, a contractor might point out, because it helps us do what we want to do--i.e. construct 60 story buildings.
The fact that new problems are created when we start using 60 story high scaffolding isn't a reason to abandon the building construction, but only a reason to encourage innovation and problem-solving to surmount those newly emergent issues.</div><div><br /></div><div>Precisely the same is true, I think, of technology. Mr. Rooney speaks as if the upgrade culture exists just to line Bill Gates' pocketbook--as if the constant foisting of new software and hardware is the result of a pernicious conspiracy to deprive poor rubes of their hard-earned money without giving them anything except a headache in return; this is simply false. It's true that the average life expectancy of a computer is far less than the average life expectancy of its ancestral technology (e.g. the typewriter), but Mr. Rooney doesn't seem to realize that each technological iteration comes with commensurate functional advancement--the computers on the shelf today aren't just dressed-up typewriters, but solve new problems, and solve old problems in better ways with each generation. Rather than just being a vehicle for word processing, computers today are word processors, communication devices, entertainment centers, encyclopedias, and a myriad of other devices all rolled into one. We pay a price for this advancement--computer viruses weren't a problem before the Internet made it easy to transfer and share information with many people quickly--but, like the problem of keeping construction workers from plummeting to their deaths, the new issues raised by evolving technology are worth solving. </div><div><br /></div><div>Mr. Rooney's typewriter probably wasn't radically different from the one his father might have used, and if we go back a few more generations we'll see even less of a difference--Mr. Rooney's grandfather, great-grandfather, and great-great-grandfather probably wrote (if they wrote at all) with more or less precisely the same kind of technology: pen and ink. By contrast, the kind of computer I'm using right now will almost certainly bear little or no resemblance to the computers my children or grandchildren will be using 50 years down the line; the pace of technological innovation is increasing. Still, this increasing tempo represents more than just a commercial scam--it represents the increasing productivity, cognition, and innovation that is made possible with each succeeding generation of technology: as the tools improve, they are in turn used to design even better tools. I think this makes an occasionally moving power button a small price, and one worth paying.</div>Jonhttp://www.blogger.com/profile/09594949524027204661noreply@blogger.com2tag:blogger.com,1999:blog-9215117687149149963.post-34872783288353757572008-11-29T13:55:00.000-08:002008-11-29T13:59:50.637-08:00Brief Musing on Philosophy and ProfessionalismA few days ago, a former student of mine sent me a link to a conversation she'd been having over a Facebook message board. The topic had to do with whether or not philosophers are born or made (through education, not in labs), but it had devolved into a disagreement about the role lay-people should take in philosophical discourse--my former student was basically arguing that anyone with a good mind can be a philosopher, and others were attacking her by claiming that being a philosopher requires specialized training (i.e. a doctorate), and non-professionals can't lay claim to the title. I think that's crap, so I posted a quick response, which I have reproduced here for those that might be interested.
It's relatively self-contained, except for one reference to my student by name ("Katelin"). Enjoy.<br /><br />There's a popular confusion, I think, between 'professional philosopher' and 'person who thinks in logical and rigorous ways.' It's certainly true that any individual cannot simply decide to declare himself a philosopher in the Leiterrific sense of the term--that takes years of specialized training and a good measure of talent to achieve. However, this should not be taken to imply that only those who have been anointed by the right people can honestly call themselves philosophers, or claim to be engaged in a philosophical project. In this respect, I think Katelin is absolutely right, and I think that the pernicious elitism is doing damage to the intellectual discourse that is essentially at the heart of the profession.<br /><br />Remember that the idea of a 'professional philosopher' is a relatively new one (at least on a wide scale)--The Academy didn't really start to flourish as the center for philosophical discourse until the 19th century. Before that, philosophy was primarily done by people who likely wouldn't have considered themselves 'professional philosophers;' clergy, scientists, mathematicians, and intelligent lay-people were all part of the philosophical discourse. The shift away from philosophy as a matter of public interest and concern and toward an insular and increasingly obscure clique of professionals has not been hailed by all as a positive change; many of us who consider ourselves part of the profession still hold to Russell's maxim that philosophy essentially concerns matters of interest to the general public, and much value is lost when only a few professionals can understand what is said. Excluding people from the discourse because they lack the proper credentials or pedigree is not going to make philosophy better, but only cut it off from what should be its essential grounding: the every day reality in which we all live. Remember that even Peirce--widely regarded as a giant of American Pragmatism--couldn't hold down an academic job; his contribution to the field of philosophy is not lessened by this fact.<br /><br />There are still people today who are doing substantive (and interesting) philosophical work, but who are not tenure track philosophers at research universities--Quee Nelson comes to mind immediately as an exemplar, but there are certainly others as well. If philosophy consists just in a dance wherein the participants throw obscure technical terms back and forth at each other, then only professionals can be philosophers. If, however, it consists in careful, reasoned, methodical thinking about the nature of reality, then anyone with the drive and intelligence can be a philosopher.<br /><br />Who, then, should claim the title? I'm inclined to think that like 'hacker,' 'philosopher' is not a title that one should bestow upon oneself, but rather something that should represent some degree of recognition by the others in the field--if you show yourself able to think carefully and analytically about conceptual questions, then you're a philosopher in my book. That doesn't mean I think your answers to those questions are correct, though.Jonhttp://www.blogger.com/profile/09594949524027204661noreply@blogger.com6tag:blogger.com,1999:blog-9215117687149149963.post-3383420861309977512008-11-26T12:25:00.001-08:002008-11-26T20:45:40.094-08:00Next Semester's ScheduleI can hardly believe it, but next week I'll be finished with my first semester here at Columbia. 
I'm planning on posting a retrospective discussing my initial impressions of graduate school then--I'm still writing final papers right now--but in the meantime, here are the courses I'll be taking in the Spring, for any who are interested.<br /><br /><blockquote><br /><span class="Apple-style-span" style="font-weight: bold;">PHILOSOPHY OF SCIENCE</span></blockquote><blockquote><span class="Apple-style-span" style="font-weight: bold;"><br /></span>The logic of inquiry in natural sciences: substantive as well as methodological concepts such as cause, determination, measurement, error, prediction, and reduction. The roles of theory and experiment. <div><br /></div><div></div></blockquote>This should be fun. It's being taught by a professor that I've gotten to know reasonably well this semester, and have enjoyed working with immensely (<a href="http://www.columbia.edu/cu/philosophy/fac-bios/helzner/faculty.html">Jeff Helzner</a>). I'm very interested in the philosophy of science, but have never had an opportunity to take a formal course on the subject. I'm looking forward to rectifying that.<div><br /><br /><br /><span class="Apple-style-span" style="font-weight: bold;"></span></div><blockquote><div><span class="Apple-style-span" style="font-weight: bold;">DIRECTION OF TIME </span></div><div><span class="Apple-style-span" style="font-weight: bold;"><br /></span>A survey of the various attempts to reconcile the macroscopic directionality of time with the time-reversibility of the fundamental laws of physics. The second law of thermodynamics and the concept of entropy, statistical mechanics, cosmological problems, the problems of memory, the possibility of multiple time direction. </div><div><br /></div></blockquote><div></div><br />This course is being taught by <a href="http://en.wikipedia.org/wiki/David_Albert">David Albert</a>, who achieved minor celebrity status a few years back because of his participation in the rapturously terrible "pop metaphysics" film <span class="Apple-style-span" style="font-style: italic;"><a href="http://slog.thestranger.com/2006/02/david_albert_wh_1">What The Bleep Do We Know?! </a><span class="Apple-style-span" style="font-style: normal;">, which was, in Prof. Albert's words, "wildly and irresponsibly wrong." The film purported to explore the connection between quantum mechanics, spirituality, and free will, but more-or-less just ended up as propaganda for <a href="http://en.wikipedia.org/wiki/Ramtha#Ramtha">J.Z. Knight's cult</a>. I've been toying with the idea of trying to pick up an MA in the Philosophical Foundations of Physics (which Columbia offers) while I'm here, and this class will hopefully give me an idea as to whether or not that's a good idea.</span></span><div><br /></div><div><br /></div><br /><span class="Apple-style-span" style="font-weight: bold;"></span><blockquote><span class="Apple-style-span" style="font-weight: bold;">FORMAL ONTOLOGY </span><div><span class="Apple-style-span" style="font-weight: bold;"><br /></span>Parts, wholes, and part-whole relations; extensional vs. intensional mereology; the boundary with topology; essential parts and mereological essentialism; identity and material constitution; four-dimensionalism; ontological dependence; holes, boundaries, and other entia minora; the problem of the many; vagueness. </div></blockquote><div><br /></div><br />This one's taught by quite possibly one of the coolest professional philosophers I've ever met: <a href="http://www.columbia.edu/~av72/">Achille Varzi</a>. 
He's got a great sense of humor and seems sharp as a tack. This will probably be the toughest class I'll take this semester, but it sounds interesting. Basically, it seems like we'll be covering how parts of things relate to wholes; it's usually the courses with descriptions that I don't understand that I end up getting the most out of.<div><br /></div><div><br /></div><blockquote><br /><span class="Apple-style-span" style="font-weight: bold;">PROSEMINAR </span><div><span class="Apple-style-span" style="font-weight: bold;"><br /></span>The course aims to promote weekly writing by each student. A paper, or section of a book, with which every philosopher ought to be familiar, will be selected each week, and one student will make a presentation on that target paper, while the others will hand in a brief essay about it. Essays will be returned, with comments, before the next meeting of the seminar. Each week a different member of the faculty, in addition to Professor Rovane, will participate in the discussion. </div></blockquote><div><br /></div><div>And, of course, the Proseminar. It sounds mundane, but I actually got quite a lot out of the first half this semester. It's great to get to meet the various members of the faculty, and the individualized attention and constant feedback on my writing were helpful. Also, my cohort pretty much rocks.</div>Jonhttp://www.blogger.com/profile/09594949524027204661noreply@blogger.com0tag:blogger.com,1999:blog-9215117687149149963.post-37846896334865094822008-11-21T12:58:00.001-08:002008-11-21T12:58:56.961-08:00Internal and External Language<span class="Apple-style-span" style="line-height: 15px; "><span class="Apple-style-span" style="font-size: medium;"><span class="Apple-style-span" style="font-family: 'times new roman';">As I start putting together my formal paper about ethics as a social technology, I've been researching the relationship between language and cognition. A few researchers have called the "inner monologue" phenomenon essential to (or even constitutive of) cognition--we talk to ourselves (either out loud or in our heads) as a way of working out problems. This seems right to me: as most people who have done any kind of deep thinking know, it helps tremendously to have an interlocutor (real or imaginary) off which to bounce ideas. This point has led me to consider something related (albeit tangential), though, about which I'd love to get some input.<br /><br />I'm certain that everyone has had the "inner monologue" experience of speaking to oneself silently--you might rehearse a speech in your mind before you give it, silently repeat the list of things you need to pick up at the grocery store, or try to work out a philosophical problem by talking to yourself in your head. While it's certain that this sort of process is linked to language--it's hard to see how a pre-linguistic animal could think linguistically--I wonder how close this relationship is. Jerry Fodor (among others) holds the position that mental representation happens in a meta-linguistic form that he terms "Mentalese"--while thinking in Mentalese might feel like thinking in (say) English, it differs in slight but important ways. If this theory is correct, it would seem that different brain processes would have to govern true language and Mentalese language (or inner monologues); we should expect, then, to see the two occasionally come apart.
<br /><br />Here's my question, then: when a person suffers a stroke (or some other kind of brain injury) that interrupts speech functions (as damage to certain parts of the left hemisphere often does), is the inner monologue similarly interrupted? If so, is this always the case, or is it possible to lose the ability to express our thoughts symbolically (either through speech or writing) but still be able to represent thoughts to ourselves in Mentalese? If the latter is correct, that would seem to bolster the Fodorian position that inner speech is fundamentally different from linguistic representation; if the two faculties are inseparable, though, that would seem to cast doubt on the principled distinction between inner monologue and public language. <br /><br />I'm researching this question as we speak, but I'm interested in seeing if anyone out there has any first-hand experience with this--have you ever suffered a stroke, or known someone who has? If you lost language, did you also lose the ability to form thoughts with propositional content? Did one faculty return before the other, or are they mutually supportive? Any input is appreciated.</span></span></span>Jonhttp://www.blogger.com/profile/09594949524027204661noreply@blogger.com5tag:blogger.com,1999:blog-9215117687149149963.post-71277444107201261382008-11-15T10:00:00.001-08:002008-11-15T10:05:22.057-08:00Quicklink: What Makes the Human Mind?<a href="http://harvardmagazine.com/2008/11/what-makes-the-human-min.html">Harvard Magazine</a> ran a short but interesting piece this week about what makes the human mind unique. The article's not terribly in-depth, but at least they point out the complexity of the human/animal cognition problem. Too often, we simply see the claim that human intelligence isn't unique superficially substantiated by pointing out chimpanzee tool use or bee dances--Harvard's piece points out that the issue isn't nearly that simple. If you're interested in exploring this topic further, I'd recommend Michael Gazzaniga's newest book, <span class="Apple-style-span" style="font-style: italic;"><a href="http://www.harpercollins.com/books/9780060892883/Human/index.aspx">Human: The Science Behind What Makes Us Unique</a>. </span>It's written in reasonably accessible language, but still has enough hard science to interest those with more technical backgrounds.Jonhttp://www.blogger.com/profile/09594949524027204661noreply@blogger.com2tag:blogger.com,1999:blog-9215117687149149963.post-28250353166764542622008-11-13T17:00:00.001-08:002008-11-13T17:03:13.460-08:00A Kripkean LimerickI got bored this evening:<br /><br />There once was a man named Saul Kripke<br />Who said I will force you to pick me<br />I designate rigidly<br />Yet you look at me frigidly<br />So come on, dear girl, and just lick me.<br /><br />Oh come now, my dear Mr. Kripke<br />I know not what you mean by 'just lick me.'<br />For the word 'me,' you see<br />Is indexed to thee<br />And your theory of reference can't trick me<br /><br />"Ah ha!" then said old Mr. Saul<br />I can tell that you're trying to stall<br />But with a theory so long<br />How can I be wrong?<br />By my side those descriptivists pall<br /><br />I don't care about size, Mr. 
Kripke<br />And I know you're still trying to trick me<br />Proper names still refer<br />As descriptions I'm sure<br />And your rigid old theory shan't stick me<br /><br /><br />It's likely that no one will find this funny without some background in the philosophy of language (and even then it's still pretty likely, probably). For reference: <a href="http://en.wikipedia.org/wiki/Rigid_designator">rigid designation</a>, <a href="http://en.wikipedia.org/wiki/Saul_Kripke">Saul Kripke</a>, <a href="http://en.wikipedia.org/wiki/Descriptivist_theory_of_names">descriptivism</a>, <a href="http://en.wikipedia.org/wiki/Indexicality">indexicals</a>.Jonhttp://www.blogger.com/profile/09594949524027204661noreply@blogger.com0tag:blogger.com,1999:blog-9215117687149149963.post-9627492379765709652008-11-12T18:48:00.000-08:002008-11-12T18:53:13.226-08:00Quicklink: Neurons and the UniverseThis is just unspeakably awesome. A side-by-side shot of a neuron and a mock-up of the visible universe show the remarkable similarities between the two:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhURibcwO8hUIA4zbUMOOWD93Gi6FQIXzuJzsNjRbNqk_Ol7Mj6s1eGbWjYNWCeTy89qIBkZsQYl-z_37jqSuYuSYsIR7BAsZegA7kQcf2Z7mFQ4YlJbQvs1Ttp0TVPWibgOA4nNGR6ke8/s1600-h/neuron-galaxy.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 320px; height: 201px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhURibcwO8hUIA4zbUMOOWD93Gi6FQIXzuJzsNjRbNqk_Ol7Mj6s1eGbWjYNWCeTy89qIBkZsQYl-z_37jqSuYuSYsIR7BAsZegA7kQcf2Z7mFQ4YlJbQvs1Ttp0TVPWibgOA4nNGR6ke8/s320/neuron-galaxy.jpg" border="0" alt="" id="BLOGGER_PHOTO_ID_5267969692307806258" /></a><br /><br />Badass.Jonhttp://www.blogger.com/profile/09594949524027204661noreply@blogger.com2tag:blogger.com,1999:blog-9215117687149149963.post-88839974659540651432008-11-09T21:44:00.000-08:002008-11-09T22:05:25.362-08:00DNA and RepresentationA few different conversations today have gotten me thinking about a topic that bothers me from time to time: the idea that DNA in some sense is a code, language, design, blueprint, representation, or other word that implies some degree of intentionality (in the technical sense). I see this claim made very frequently--most notoriously and nefariously by more clever Intelligent Design theorists as part of an argument for God's existence, but also by well-meaning philosophers of biology and language both--but rarely see it challenged. I would like to at least briefly meet this challenge here; as is sometimes the case, I intend to likely turn this post into a more formal paper sometime in the future, so many of the ideas I present here are not fully fleshed out or argued for. Comments and criticisms are, of course, always welcome. I'm going to focus here on why this sort of approach doesn't work as a means to prove the existence of God, but I'm going to say quite a lot that's more broadly interesting along the way, I expect.<br /><br />First let me try to state the argument as clearly and charitably as I can. DNA is a design simply because it is a code for us! That is, because every single cell in my body has a complete genome, and because that complete genome carries all the necessary information to build my body, DNA must be a design. 
Prima facie, it meets all the criteria of information: it is medium independent (I can change the DNA molecule into a string of As, Ts, Gs, and Cs and it will still retain its information-carrying capacity), and it stands for something more than itself (i.e. my body). DNA, put simply, is a language--a code for me--and that means that it is a design. DNA has semantic content in the same way that the English language does: all those chemicals have a meaning, and that meaning is me. Any design implies a designer, and thus DNA had a designer. Humans didn't design themselves, so something else must have done the job. God is the most likely culprit.<br /><br />I trust a design theorist would find this formulation acceptable--now I'm going to tell you why it's not. A common extension of this metaphor is to refer to DNA as a "blueprint for you;" I think this metaphor is pedagogically useful, so let's adopt it for the purposes of this discussion. A blueprint, to be sure, represents a building, but before we're going to decide if my DNA represents me in the same way, we're going to have to be clear about what precisely we mean by 'represent' here; it seems that there are at least two primary senses in which we might be using the term, so let's consider DNA in light of each of them.<br /><br />First, by 'blueprint B represents building X' we might mean something like what we mean when we say that a map represents a nation: that is, that the blueprint <span class="Apple-style-span" style="font-style: italic;">corresponds</span> to a building in that each line can be matched with a wall, door, or similar structure in reality. But wait, this does not seem entirely adequate: to adapt Hilary Putnam's famous example, we can imagine an ant crawling in the sand which, by pure chance, traces with its tracks a perfect duplicate of the original blueprint for the Eiffel Tower. It does not seem right here to assert that the ant has created a blueprint for the Eiffel Tower (for a more detailed argument for this, see Putnam's <span class="Apple-style-span" style="font-style: italic;">Reason, Truth, and History</span>, pages 1-29). Representation in this sense--in the sense of a map of New York or a painting of Winston Churchill--requires more than mere correspondence: it has to come about in the right kind of way. How exactly to define "the right kind of way" is a deep question, and it is not one that I intend to pursue here. Suffice it to say that the right kind of way involves agentive production by beings with minds at least something like ours (minds that are themselves capable of semantic representation); other methods might produce things that look very much like representation, but this resemblance is not sufficient.<br /><br />Here, then, is the problem for the intelligent design advocate attempting to endorse the first horn of this dilemma: using it to demonstrate God's existence is straightforwardly question-begging, as we saw above. Arguing that DNA represents in the same sense that a map represents terrain or a portrait represents a person requires the assumption that DNA was produced agentively by beings with minds like ours; this assumption is precisely what the design-theorist wants to prove, making this line of argumentation invalid.
As I said, though, there is a second horn of the dilemma that the design-theorist might instead endorse--let us return now to our blueprint metaphor and see if DNA fares any better here.<br /><br />The second way we might intend to use 'blueprint B represents building X' is what we might call "the instructive sense." This is the case if building X has not yet been constructed: blueprint B represents not because it corresponds to anything in reality, but because it contains instructions for how one should proceed when constructing building X. What does it mean, though, for one thing to contain instructions for the creation of another? Consider computer programming: when I type something like the following into a compiler:<br /><br /><blockquote>#include &lt;iostream&gt;<br /><br />int main() {<br />&nbsp;&nbsp;std::cout << "Hello World!";<br />&nbsp;&nbsp;return 0;<br />}</blockquote><br /><br />am I writing instructions for the creation of something? Prima facie, this looks just like the blueprint case, but there's an important (and relevant) difference here: in typing the code into a compiler, I'm not making instructions for the program's creation, but rather <span class="Apple-style-span" style="font-style: italic;">creating the program itself</span>. That is, the "code" for the program <span class="Apple-style-span" style="font-style: italic;">just is </span>the program looked at in a certain way (e.g. through a decompiler); to watch someone write a computer program and then say "Well yes, I've seen you write the instructions for the program, but when are you going to make the program itself?" would make you guilty of a category mistake, it seems--again, writing the program <span class="Apple-style-span" style="font-style: italic;">just is </span>writing the code. Program and code are identical. This case, I think, is instructive. We'll see how in just a moment.<br /><br />Let's set aside the philosophical struggle with this second horn for a moment and remind ourselves what DNA actually is and how it works. DNA consists of two long polymers composed of phosphate groups and sugar groups conjoined by organic esters. Attached to this scaffolding, four "bases"--adenine, cytosine, thymine, and guanine--do the real "work" of DNA: adenine always attaches to thymine, and cytosine to guanine, meaning that the entire sequence of both sides can be deduced from just one given half. DNA's primary function in the body is to regulate the creation of proteins, which in turn regulate virtually all of the body's functions. DNA does this by creating strands of RNA through the process alluded to above; units of three bases at a time on the relevant portions of the RNA (there are huge parts of our DNA that seem to play no active role in anything) then interact with cellular objects called ribosomes, which produce corresponding proteins. This is obviously a very basic account of how the protein creation process happens, but it should suffice for our purposes here.<br /><br />The most salient part of the above, it seems, is the emphasis on causation. The entire process of protein synthesis can be done without involving an agent at all. In this way, DNA stands in sharp contrast to the blueprint from our earlier discussion--the sense in which we're using 'instruction' when we're discussing blueprints (at least in the second horn of the dilemma) necessarily includes a concept of conscious builders; to put it more generally, instructions must be instructions for someone to follow.
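<br /><br />To see just how little "reading" is involved, here is a crude sketch in Python of the kind of mapping I have in mind. The pairing rule and the handful of codons are real enough, but the tiny codon table and the template strand are made up for the example, and the whole thing is wildly simplified--it's meant only to show that the step from one strand to the other, and from codons to amino acids, is a bare lookup that nothing and no one has to interpret:<br /><br /><blockquote><pre>
# Toy sketch: base pairing and codon lookup as purely mechanical mappings.
# The codon table here is a tiny, incomplete subset of the genetic code,
# and the template strand is invented purely for illustration.

PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}
CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP"}

def complement(strand):
    # the other half of the double helix is fixed entirely by this lookup
    return "".join(PAIR[base] for base in strand)

def transcribe(template):
    # RNA is built against the template strand, with U in place of T
    return complement(template).replace("T", "U")

def translate(rna):
    # ribosome-as-lookup: three bases in, one amino acid out
    protein = []
    for i in range(0, len(rna) - 2, 3):
        amino = CODON_TABLE.get(rna[i:i+3], "?")
        if amino == "STOP":
            break
        protein.append(amino)
    return protein

template = "TACAAACCGATT"               # made-up template strand
print(complement(template))             # ATGTTTGGCTAA
print(translate(transcribe(template)))  # ['Met', 'Phe', 'Gly']
</pre></blockquote>There is no point in that little pipeline at which anything has to understand what a base "means"; the chemistry (here, a dictionary) just runs.<br /><br />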
DNA, then, is somewhat more like a computer program than it is like a blueprint in the second sense: rather than being instructions for something's creation, it <i>just is</i> that something viewed from a lower level. Still, though, there is an important element of disanalogy here--to assert that DNA is just like a computer program would be to assert that it represents in the first sense we discussed. This assumption, we saw, leads to a fallaciously circular line of reasoning, and thus is unacceptable. As with a blueprint, if we make the comparison between a computer program and DNA, we must be careful to remember that it is just a metaphor. This, I think, is the central point that I am making: while the blueprint metaphor is apt in many ways, we must take care to remember when we use it that it is just a metaphor--while DNA and blueprints share things in common, there are important differences that prevent the two from being completely equated, no matter which sense of 'represent' we're using.<br /><br />We've said a great deal here about why thinking of DNA as representing an organism is incorrect, so let's take a moment to sketch a positive argument and suggest what the correct way to think about DNA might be. Let's begin by remembering that DNA causes protein synthesis, which causes other necessary organic functions. If we keep this observation squarely in mind, DNA's metaphysics aren't all that difficult to articulate: DNA, mediated by other chemicals and environmental considerations, regulates the causal chain that leads to the occurrence of all the various functions we mean by the cluster-concept 'life.' These include, but are not limited to, metabolism, reproduction, cell growth, cell regeneration, gas exchange, and many, many others. DNA is just one link--albeit a very important link--in the naturalistic chain that <span class="Apple-style-span" style="font-style: italic;">both causes and is constitutive of</span> life.<br /><br />As I said, I'm aware that there are a great many holes here that need to be plugged before this theory is really solid, but I think the rough outlines are clear. Thoughts?Jonhttp://www.blogger.com/profile/09594949524027204661noreply@blogger.com0tag:blogger.com,1999:blog-9215117687149149963.post-18108374073848334962008-10-22T14:04:00.000-07:002008-10-22T15:03:51.452-07:00Ethics as a Social TechnologyOne of my classes this semester consists in evaluating <a href="http://en.wikipedia.org/wiki/Philip_Kitcher">my advisor's</a> manuscript for his upcoming book. The book is on a naturalistic account of ethics, and while I can't say too much about it specifically (it's still a work in progress), I can at least say that I'm finding it rather compelling. He's concerned with showing how our evolutionary history--specifically, the development of altruism in our hominid ancestors--led to the creation/development of ethics as we know them today. In one passage for this week's reading, he made a parallel between ethics and technology, saying that ethics (like any artifact) exists to fulfill a function--that is, it was created for a purpose. I find this idea incredibly compelling--it seems to me that thinking about ethics as a social technology is precisely the right way to frame the issue--and so I'm taking it upon myself to develop this part of his account further.
I've only been thinking about it for a few hours now, so my formulation is still in its preliminary stages, but here's my thought-process thus far.<br /><br />First, let me say a bit about what I mean by a piece of "social technology." The paradigm case for this, I think, is language, so that's the extended metaphor I'm going to use in my discussion here. All pieces of technology have (at least) two features in common: (1) they are the products of intelligent design (no technology exists as a mind-independent part of the world--artifacts don't grow on trees), and (2) they are created to fulfill some specific function. Before we progress, let's say a bit more about how these conditions tend to be expressed.<br /><br />Generally, (2) is realized by extending our natural capabilities; in their simplest forms, tools are just pieces of the environment that we use to interact with other parts of the environment in ways that our unmodified bodies could not--the most intuitive example of this is something like the use of a smooth stick to reach something (say, honey) that is inaccessible to our human hands. More sophisticated tools, of course, fulfill more sophisticated functions; the most complex tools that we've created to date actually aid us not in direct physical interaction with the world, but in cognition--computers are reliable mechanisms which, while not directly doing any "information processing," let us take shortcuts with our cognition. In short, tools are environmental changes that accomplish some function.<br /><br />(1) is a bit more obvious, but I should still say a bit about it. I've argued before that there is no such thing as "natural function"--I follow Searle in saying that function is only definable relative to the beliefs and desires of some intentional agent. I'm not going to rehash this argument here (though I will have to when I do a more formal presentation of this idea), so just bear with me on this point for now. All tools have functions, functions imply users/designers, and so all tools are the products of design and/or use (I'm still not sure if these two can be made equivalent). Now on to the meat of my point here.<br /><br />Language, on this definition, seems to count as a tool. Proponents of the extended mind thesis (especially Andy Clark and David Chalmers) have been counting language as a tool for quite some time, and despite other disagreements I might have with extended mind philosophy, I think they're spot-on with this point. Language consists in changing the environment (usually--but not always--through the production of compression waves with the vocal chords) in such a way as to communicate one's own mental states to another person. This allows for all sorts of developments that might not have been possible--it opened the way to collaboration, information sharing, and socialization--but that's not what I want to focus on here. Like any tool, language has a function--in fact, it seems to have two distinct functions: expression and communication.<br /><br />By 'expression,' I mean something akin to what's happening in poetry generally (or metaphors specifically): the conveyance of emotion, tone, mood, and other non-conceptual mental states. In this respect, language can be considered something like music--a series of sounds put together to convey not so much a concrete message <span style="font-style: italic;">per se</span>, but more to communicate a set of abstract ideas. 
Shakespearean language is paradigmatically expressive, it seems to me: it is flowery, beautiful, complex, metaphorical, and often designed to do more than simply express propositions.<br /><br />On the other hand, language is also used for communication in a more mundane sense--that is, it is used to convey propositional attitudes about the world. This is the use with which most of us will likely be more familiar: it is the way we are using the linguistic tool when we give directions, express philosophical ideas, make requests, describe things, and generally use symbols to represent the world as being a certain way. The constructed language <a href="http://en.wikipedia.org/wiki/Lojban">Lojban</a> is probably as close to a purely communicative language as we can get--it is designed to totally exclude the possibility of any ambiguity of expression by being as syntactically precise as possible. It was formulated by logicians and mathematicians to express ideas about the world in the most clear and precise way possible. Of course, this precision means that it is more difficult to formulate purely expressive sentiments in Lojban--metaphor and other poetic devices, while not impossible to use, are much more difficult to construct.<br /><br />Clearly, most languages are used for both these purposes--it would be possible to do science, philosophy, and logic in Shakespearean English, and it would be possible to write poetry in Lojban--still, there are cases (as we've seen) in which a particular language is better at one and worse at another; relative to each purpose, all languages are not created equal. Still, most do passably well at both--it seems strange to say that English is a "better" language than Chinese. There are, however, cases where these sorts of evaluative judgments seem not only possible, but reasonable. One notable case is that of the <a href="http://en.wikipedia.org/wiki/Pirah%C3%A3_language">Piraha</a> tribe in South America. Their language, which has been extensively studied and debated, seems relatively unique among modern languages in lacking common features like recursion (the ability to embed smaller clauses in larger ones, e.g. 'Jon, who is the author of this blog, went to class, which was at Columbia University, which is on 116th street, today'), discrete numerical terms, discrete kinship words, and so on. Many of the concepts we express on a regular basis could not be formulated in the Piraha language. If language is indeed a tool--in that it was created to fulfill a function--it seems like we can say that Piraha is, at the very least, less effective in the communication sense. In a relevant sense, English is better than Piraha <span style="font-style: italic;">just because</span> it does the job of language better.<br /><br />I think we can make a similar case for ethics. If we look at the history of our ethical practices, it seems clear that they arose as a result of our ancestors' increasingly social lifestyles--ethical rules and norms were created to let us live together in larger groups, and they accomplished this goal by artificially extending our naturally altruistic tendencies to more and more people. Ethics, then, like language, has two distinct functions: to maintain group cohesion, and to remedy altruism failures.
Like language, it is a human-created tool that arose to accomplish socially oriented goals.<br /><br />With this picture, we can have our pluralist cake and eat our relativism too--just as with language, it is perfectly coherent on this picture to see different ethical systems as competing but not superior or inferior to each other. There might be many ways to solve these two problems that wouldn't be compatible with one another, but that still solve the problems equally well. To draw another tool-related analogy, we can compare two competing ethical systems to two competing operating systems: neither Windows nor OSX is inherently superior to the other; they both simply approach the computation problem differently. Still, as with language, there are cases where one is clearly better than another--both Windows XP and OSX are better than their predecessors of 15 years ago <span style="font-style: italic;">just because</span> they discharge their functions (i.e. solve the relevant problems) better and more efficiently.<br /><br />Similarly, we can say unequivocally that our ethical system is better than that of the Nazis--a Nazi ethical system just doesn't solve the social cohesion and altruism failure problems in an effective way. A dictatorship might well keep society together cohesively, but it does so without solving the altruism-failure problem effectively. Since an ethical theory's function is to solve <span style="font-style: italic;">both </span>these problems, we can say that Nazism is an objectively worse ethical system. Ethics, if understood as a tool, lets us make these value judgments at a meta-theoretical level--we can call two ethical systems competing but comparable if they both discharge their functions equally well but in different ways, or we can call one better than another if it discharges its function more efficiently and effectively.<br /><br />This seems to me to be precisely the right way to think about ethics and morality. I'm going to develop this further as the semester progresses, culminating in a formal presentation in my final paper for the course. I'll update this account as I solidify things more, but for now I would welcome comments and thoughts.Jonhttp://www.blogger.com/profile/09594949524027204661noreply@blogger.com4tag:blogger.com,1999:blog-9215117687149149963.post-82477502358679685352008-10-22T13:59:00.001-07:002008-10-22T14:04:55.904-07:00Oh Right, I Have a Blog!So here I am. When we last met, it was the close of summer and I was getting ready to go from Lancaster, PA to New York City to start grad school. I made it--despite what you might have deduced from my sudden cessation of posting--and I'm now just about halfway through my first semester. I couldn't possibly be happier. The downside to this, though, is that I've been so busy reading and writing that I basically forgot about this blog until just now; over the last year, it's been my only real outlet for philosophical musing, and now that I'm doing weekly writing (and daily discussion of the issues) it doesn't seem as necessary. Still, I think I'm going to try to keep it up.
Now that I remember this place exists, stay tuned for more updates.Jonhttp://www.blogger.com/profile/09594949524027204661noreply@blogger.com2tag:blogger.com,1999:blog-9215117687149149963.post-67447935193052293642008-08-14T20:50:00.000-07:002008-08-15T14:33:16.219-07:00Chess, Computers, and Crystal BallsI've written <a href="http://re-ap.blogspot.com/2007/08/player-hatin.html">before</a> about the significance (or lack thereof) of Deep Blue's now 11 year old victory over Gary Kasparov, but this is a topic that <a href="http://www.eripsa.org/">Eripsa</a> and I invariably end up arguing over, so my recent three weeks working with him has made me think about this issue again, and I think I've come up with a few additions to my argument.<br /><br />Briefly, my position is this. Contrary to what some functionalists would have us believe, Deep Blue's "victory--while undoubtedly a great achievement in design--isn't terribly significant in any deep way. However, I also don't think Dan Dennett is right in <a href="http://www.technologyreview.com/Infotech/19179/">saying</a> that the reason the question isn't interesting is that human brains aren't unique in the first place: instead, it seems wrong to me to call what happened "Deep Blue's victory" at all, as it was never in the game to begin with. Playing chess with a computer is no more a competitive affair than a game of tetherball is a competitive game with the pole or a game of solitaire is a competitive game with the cards. To truly <span style="font-style: italic;">participate</span> in a game is an inherently intentional act--that is, an act that requires the ability to understand how one thing can be <span style="font-style: italic;">about</span> or <span style="font-style: italic;">stand for</span> another--and digital computers are fundamentally incapable of intentionality. In other words, ascribing a victory to Deep Blue over Gary Kasparov is to tacitly treat Deep Blue as an independent agent capable of its own successes and defeats, and that doesn't seem like the right way to talk about machines.<br /><br />Clearly <span style="font-style: italic;">something</span> is happening here--that is, Kasparov is really doing something when he sits down at the chessboard with Deep Blue and its handlers--so if that something is not a game between man and machine, then what is it? While I still find the above argument (at least in its non-brief form) compelling, it occurs to me that it is a strictly negative argument--it contends that Deep Blue is not playing a game at all and so has no real "victory" over Kasparov to speak of--leaving the question of what is <span style="font-style: italic;">actually</span> going on unanswered. It is this question I wish to try to address here.<br /><br />Suppose you and I are standing in a room with hardwood floors arguing about whether or not the ground is level. To settle the issue, I pull out a flawless crystal ball and set it carefully in the center of the room, knowing that if the floor really isn't level, the ball will roll down the incline, however slight; sure enough, the ball rolls off to the south, and we agree that we really do need to deal with that sinkhole out back. What's happened here? 
On a strong externalist account like Andy Clark's, I've externalized some of my cognition into a tool, letting it do the information processing for me in a way that my un-extended meat mind just couldn't: this is the position that lies at the root of the intuition that Deep Blue is an agent in itself capable of playing chess, and it is this position against which I want to argue.<br /><br />Rather than somehow externalizing my cognition, it seems to me that I'm simply cleverly manipulating my environment in order to make my <span style="font-style: italic;">internal</span> cognition more powerful. When I set the ball in the middle of the room, it is with knowledge that--thanks to the action of some basic physical laws--one sort of result will occur if the floor is level and another sort will occur if it is not level. In short: I don't know if the floor is level, but I know that if the floor is not level, then the ball will roll downhill; thus, I infer that since I can certainly see the ball move, placing it in the middle of the floor is a good way to find out if there is a tilt or not. The ball is not doing any information processing of its own, nor is it some kind of metaphysical receptacle for my own cognition; instead, it is just a reliable indicator that I can use to make a judgment about the environment around me.<br /><br />Let's extend (so to speak) this argument to computers in general (and Deep Blue in particular), then. A digital computer is a physical system just like a crystal ball--albeit a much more complex one--so it seems that the analogy is preserved here: any apparent "information processing" done by the computer (that is, any native cognition OR extended cognition) is nothing more than a very complicated ball rolling down a very complicated hill; a computer isn't actually doing anything cognitive, it's just a physical system with a reliable enough operation that I can use it to help me make certain judgments about the environment. Given a hill the ball will--just in virtue of what it is--roll, and given certain inputs the digital computer will--just in virtue of what it is--give certain outputs. In both the case of the ball and the case of the computer, the tool's interactions with the environment can be informative, but only when interpreted by a mind that is capable of consciously attaching significance to that interaction; that's all a computer is, then: a physical system we use to help us make judgments about the environment.<br /><br />That still doesn't address the central question, though, of what exactly is going on in the Deep Blue vs. Kasparov game (or of what's going on when anyone plays a computer game, for that matter). Clearly Kasparov at least is doing something cognitive (he's working hard), and clearly that something is at least partially based on the rules of chess, but if he's not playing chess with Deep Blue, then--at the risk of sounding redundant--what is he doing? Perhaps he is, as others have argued, actually playing chess with Deep Blue's programmers (albeit indirectly). I've advanced this argument before, and have largely gotten the following response.<br /><br />Kasparov can't actually be playing against Deep Blue's programmers, because the programmers--either individually or collectively--wouldn't stand a chance in a match against Kasparov, whereas Deep Blue was able to win the day in the end.
If the competition really was between Kasparov and the people behind the design and development of Deep Blue, those people would be expected to (at least as a group) be able to perform at least as well as Deep Blue itself did in the chess match. This is an interesting objection, but one that I do not think ultimately holds water. To see why, I'll beg your pardon for engaging in a bit more thought experimentation.<br /><br />I'm not much of a chess player. I know the rules, and can win a game or two against someone who is as inexperienced as I am, but those wins are as much a product of luck as anything I've done. Kasparov would undoubtedly mop the floor with me even with a tremendous handicap--say, the handicap of not being able to see the chess board, but rather having to keep a mental model of the game and call out his moves verbally. I have, as I said, no doubt that I would be absolutely annihilated even with this advantage in my favor, but we can certainly imagine a player much more skilled than I am: a player that would tax Kasparov more, and one that he would reliably be able to beat in a normal chess match, but might risk losing to were he denied the environmental advantage of being able to use the board as an aid to represent the current state of the game. The board (and who has access to it) is making a real difference in the outcome of the game--are we to say, then, that it is a participant in the game in the same way that Deep Blue is? In the case where our mystery challenger beats Kasparov, does the board deserve to be credited in the victory? It does not seem to me that it does.<br /><br />Here's another example of the same sort of thing. Suppose I challenge you to an arithmetic competition to see which of us can add a series of large numbers most quickly. There's a catch, though: while I can use a pen and paper in my calculations, you have to do the whole thing in your head. You'd be right to call foul at this, I think--the fact that I can engage in even the rudimentary environmental manipulation of writing down the figures as I progress in my addition gives me an enormous advantage, and might allow me to win the contest when I otherwise would have lost--this is true in just the same way that it's true that Kasparov might lose a chess game to an "inferior" opponent if that opponent was able to manipulate the environment to aid him in a way that Kasparov was not (say, by using a chess board to help keep track of piece position).<br /><br />I suspect that most of you can now see where I'm going with this, but let me make my point explicit: Deep Blue is nothing more than a very complicated example of its programmers' ability to manipulate the environment to give themselves an advantage. Contending that Kasparov couldn't have been matching wits against those programmers just because he could have mopped the floor with them if they'd been without Deep Blue is akin to saying that because Kasparov might lose to certain players that had access to the board when he did not (even if he'd beat them handily in a "fair fight"), the board is the important participant in the game, or that I'm simply better at arithmetic than you are because I can win the competition when I have access to pen and paper and you do not. <br /><br />Deep Blue is its programmers' pen and paper--the product of their careful environmental manipulation (and no one manipulates the environment like a computer programmer does) designed to help them perform certain cognitive tasks (e.g. chess) better and more quickly.
So whom was Kasparov playing chess with? On this view, the answer is simple and (it seems to me) clearly correct--he was playing against the programmers in the same sense that he would have been if they'd been sitting across the board from him directly--he just had a disadvantage: they were a hell of a lot better at using the environment to enhance their cognition than he was.Jonhttp://www.blogger.com/profile/09594949524027204661noreply@blogger.com3tag:blogger.com,1999:blog-9215117687149149963.post-78951278770586937252008-08-05T16:31:00.000-07:002008-08-05T16:39:37.895-07:00Hey Look, Irony!Shortly after finishing that last post about how awesome technology is, my laptop descended into its watery grave--that is, I spilled rum-laced Vitamin Water all over it. It is, needless to say, currently nonfunctional. I'm only at CTY for another few days (I'm posting this from Eripsa's laptop), but don't expect to see anything new until at least next week, when I will be gloriously reunited with my desktop. If anyone wants to contribute to the dirt-poor-philosophy-grad-student-laptop-repair-fund, feel free!Jonhttp://www.blogger.com/profile/09594949524027204661noreply@blogger.com4tag:blogger.com,1999:blog-9215117687149149963.post-41508142254485367812008-08-01T16:20:00.000-07:002008-08-01T17:12:23.748-07:00100th Post - An Ode to TechnologyThis is the 100th post on this blog, and I'm pretty happy about it. As I said in the very <a href="http://re-ap.blogspot.com/2007/08/once-again-into-breach.html">first post</a>, I've tried keeping a blog before, and it's never really worked out as well as it has here. I think it is a fitting celebration, then, to talk a little bit about technology.<br /><br />I'm 2/3 of the way through my second CTY session, and this time I'm teaching philosophy of mind with <a href="http://www.eripsa.org/">Eripsa</a>, who, despite being dreadfully wrong about consciousness, is an all-around awesome dude. He works primarily on the philosophy of technology, a disappointingly underrepresented field that deals with questions like "what is the ontological status of a tool," "what is necessary to create an artificial mind," and "how does technology influence human thought?" He does a lot of really interesting work (particularly on robots), so I encourage you to go check out his blog.<br /><br />Anyway, being around him inevitably gets me thinking even more about technology than I usually do (which is saying something)--I'm particularly interested in that last question I posed above, though: how does technology influence human thought? Eripsa wants to follow Andy Clark and David Chalmers in endorsing the strong-externalist <a href="http://en.wikipedia.org/wiki/Extended_mind">extended mind thesis</a>, which claims that there is a relevant sense in which my cognition and mental states (including beliefs) spend a lot of time in the external world. Their paradigm case for this is that of Otto, a hypothetical Alzheimer's patient who, in lieu of using his deteriorating biological memory, writes down facts in a notebook, which he carries with him at all times. Clark claims that when Otto consults his notebook for a fact (e.g. 
the location of a restaurant he wants to go to), the notebook is serving as a repository for his beliefs about the world in <span style="font-style: italic;">just the same way</span> that my (or your) biological memory does; that is, his belief about the location of the restaurant is <span style="font-style: italic;">literally stored in the external world.<br /></span><br />This thesis seems fraught with problems to me, but that's not the point I want to make (at least not in this post). While I think that Clark (and by extension Eripsa) is wrong about the ontology of technology (Otto's notebook is supposed to stand for a whole host of technological "extensions" of our biological minds into the world), I think he's precisely right about its importance in a cognitive sense. Human beings are, by their very nature, tool users; it's a big part of what makes us human. Of course other primates (and even some birds) can use--or even manufacture--tools to accomplish certain tasks, but nothing else in the known natural world comes even close to doing it as well as humans do. Technology use is a part of who we are, and always has been; we created language as a <span style="font-style: italic;">tool</span> to manipulate our environment, learning to create compression waves in the air for the purpose of communicating our ideas to each other, and in the process beginning the long, slow march toward the incredibly sophisticated tools we have today--tools like the one you're using right now.<br /><br />Language might have been our first tool--and perhaps even still our best--but in recent years, the computer (and more specifically the Internet) has proven to be one of our most important in terms of cognition. I've argued before that the advent of the information age should herald a radical change in educational strategy, but I want to reiterate that point here. Today's kids are growing up in a world where virtually any fact they want is immediately and reliably accessible at any time. I'd say that at least 1/3 of the kids I'm teaching at CTY--and these are 12-15 year olds--have Internet-enabled cell phones that they keep on their person at all times; this is a very, very big deal, and our educational strategy should reflect it.<br /><br />100 years ago, a good education was an education of facts. Students memorized times-tables, theorems, names and dates, literary styles, and an endless list of other factual statements about the world, because that's what it took to be an "educated citizen." Information was available, but it was cumbersome (physical books), difficult to access (most areas didn't have high quality libraries), and generally hard to come by for the average citizen--even an educated one. The exact opposite is true today--students don't need to memorize (say) George Washington's birthday, because they can pull that information up within seconds. This frees up an enormous "cognitive surplus" (to borrow Clay Shirky's term) that can be used to learn <span style="font-style: italic;">how to analyze and work with facts</span> rather than memorize the facts themselves.<br /><br />I've postulated before that the so-called "Flynn Effect"--that is, the steadily increasing IQ of every generation since the close of the 19th century--might be due to the increasing availability of information, and thus the increasingly analysis- and abstraction-oriented brain of the average citizen.
If I'm right, we're going to see a huge leap in the IQ of this generation, <span style="font-style: italic;">but only if we start to educate them appropriately</span>. We need a radical emphasis shift as early as in the kindergarten classroom; students need to be taught that it's not <span style="font-style: italic;">what</span> you know, but how well you can work with the almost infinite array of facts that are available to you. The spotlight should be taken off memorizing names and dates, facts and figures, and focused squarely on approaches to thinking about those facts and figures. Today's child is growing up in a world where he is not a passive consumer of information, but rather an active participant in the process of working with information in a way that humans have never been before.<br /><br />This leads me to my final point, which is that you should all go read <a href="http://www.shirky.com/herecomeseverybody/2008/04/looking-for-the-mouse.html">this speech</a> by Clay Shirky, author of the book <span style="font-style: italic;">Here Comes Everyone.</span> It's very, very well articulated, and makes exactly the kind of point I'm driving at here. Snip:<br /><br /><blockquote>I was having dinner with a group of friends about a month ago, and one of them was talking about sitting with his four-year-old daughter watching a DVD. And in the middle of the movie, apropos nothing, she jumps up off the couch and runs around behind the screen. That seems like a cute moment. Maybe she's going back there to see if Dora is really back there or whatever. But that wasn't what she was doing. She started rooting around in the cables. And her dad said, "What you doing?" And she stuck her head out from behind the screen and said, "Looking for the mouse." <p id="yn1o84" class="western" style="margin-bottom: 0in;"><br /></p> <p id="yn1o86" class="western" style="margin-bottom: 0in;">Here's something four-year-olds know: A screen that ships without a mouse ships broken. Here's something four-year-olds know: Media that's targeted at you but doesn't include you may not be worth sitting still for. Those are things that make me believe that this is a one-way change. Because four year olds, the people who are soaking most deeply in the current environment, who won't have to go through the trauma that I have to go through of trying to unlearn a childhood spent watching <i id="yn1o87">Gilligan's Island</i>, they just assume that media includes consuming, producing and sharing.<br /></p> <p id="yn1o88" class="western" style="margin-bottom: 0in;"><br /></p> <p id="yn1o90" class="western" style="margin-bottom: 0in;">It's also become my motto, when people ask me what we're doing--and when I say "we" I mean the larger society trying to figure out how to deploy this cognitive surplus, but I also mean we, especially, the people in this room, the people who are working hammer and tongs at figuring out the next good idea. From now on, that's what I'm going to tell them: We're looking for the mouse. We're going to look at every place that a reader or a listener or a viewer or a user has been locked out, has been served up passive or a fixed or a canned experience, and ask ourselves, "If we carve out a little bit of the cognitive surplus and deploy it here, could we make a good thing happen?" And I'm betting the answer is yes.</p></blockquote><p id="yn1o90" class="western" style="margin-bottom: 0in;"></p>I'm betting the same. 
Thanks for reading, and here's to the next 100 posts.Jonhttp://www.blogger.com/profile/09594949524027204661noreply@blogger.com1tag:blogger.com,1999:blog-9215117687149149963.post-56144927777711683582008-07-20T14:51:00.000-07:002008-07-20T15:00:09.843-07:00Potential Course ListI'm starting to get my affairs ready to start at Columbia this Fall, and have assembled the following class list. I haven't registered yet, but I have the (perhaps wildly incorrect) perception that grad students are rarely unable to get into classes of their choice. In any case, here's my dream schedule (along with course descriptions and some commentary) for Fall 2008:<br /><br /><strong></strong><blockquote><strong> EVOLUTION, ALTRUISM, AND ETHICS<br /></strong><strong>Day/Time: </strong>W 11:00am-12:50pm<br />This seminar will elaborate and examine a naturalistic approach to ethics, one that views contemporary ethical practices as products of a long and complex history. I am currently writing a book presenting this form of naturalism, and chapters will be assigned for each meeting after the first. Using brief readings from other ethical perspectives, both historical and contemporary, we shall try to evaluate the prospects of ethical naturalism.<br />Open to juniors, seniors, and graduate students.</blockquote>I'm particularly excited about this one. In my early undergraduate days, I specialized in ethical issues, but I found all of the existing ethical theories dreadfully unsatisfying, and came to suspect that if we were going to get a plausible naturalistic account of ethics, we needed a more thorough understanding of how the mind and brain worked--hence the switch to mind. This class sounds right up my alley, though, and I'm always excited to hear naturalistic defenses of philosophical concepts.<br /><br /><strong><br /></strong><strong></strong><blockquote><strong>1st YEAR PROSEMINAR IN PHILOSOPHY</strong><br /><strong>Day/Time: </strong>W 6:10pm-8:00pm<br />This course, which meets only for the first seven weeks of term, is restricted to, and required for, first-year Columbia Ph.D. students. The course aims to promote weekly writing by each student. A paper, or section of a book, with which every philosopher ought to be familiar, will be selected each week, and one student will make a presentation on that target paper, while the others will hand in a brief essay about it. Essays will be returned, with comments, before the next meeting of the seminar. Each week a different member of the faculty, in addition to Professor Peacocke, will participate in the discussions. A second seven-week segment of the ProSeminar will be held in the Spring Semester of 2009. </blockquote><br /><strong></strong>What can I say? It's the Pro-Seminar, so I have to take it. Still, it could be really good--the single most productive (in terms of bettering me as a philosopher) course I took at Berkeley was the "Introduction to Philosophical Methodology" class--just like this one, it was aimed at getting students writing every week. My hope is that this class will be a more rigorous and intense version of that one, and that I'll really have a chance to sharpen my writing considerably.<br /><strong></strong><br /><strong></strong><br /><strong></strong><blockquote><strong> ADVANCED TOPICS IN THE PHILOSOPHY OF MIND<br /></strong><strong>Day/Time:</strong> W 2:10pm-4:00pm<br />This seminar will be concerned with the interactions between the theory of intentional content and thought on the one hand, and metaphysics on the other.
We will first discuss the role of truth and reference in the individuation of intentional content. We will then draw on that role in discussing the following issues: the nature of rule-following and objectivity in thought; transcendental arguments and objective content in thought and in perception; the general phenomenon of relation-based thought, and its extent, nature and significance; the nature of subjects of consciousness, self-representation and first person thought.</blockquote><br /><strong><br /></strong>Mind is my specialty, so this was an easy choice. I'm not entirely clear on what exactly this course description is talking about (which is a good thing), other than that the class seems to deal with intentional content and how it relates to external objects, which is a topic I'm very much interested in.<br /><br />Serendipitously, all these classes are on Wednesday, so I'd be in class only one day per week, which would be pretty nice. I'm sure I'm going to have a lot of writing to do outside of class (and Fallout 3 is coming out soon, too...), so not having to make the commute to campus every day will be welcome.Jonhttp://www.blogger.com/profile/09594949524027204661noreply@blogger.com2tag:blogger.com,1999:blog-9215117687149149963.post-10402001373635542362008-07-19T18:20:00.000-07:002008-07-19T22:20:23.659-07:00Some Normative EpistemologyIf you ask most philosophers, they'll tell you that there are (roughly) four main branches of philosophy: metaphysics, epistemology, ethics, and logic. These (again, roughly) correspond to the questions: "What's out there?" "How do you know?" "What should I do about it?" and "Can you prove it to a math major?" Tongue-in-cheek definitions aside, metaphysics deals with questions relating to the nature of reality, the existence of various entities, properties of those entities, and the ways in which those entities interact. "Does God exist?" and "How do the mind and brain relate?" are both metaphysical questions. Epistemology deals with knowledge claims and how humans go about knowing things in the first place--"What is an appropriate level of evidence to require before changing a belief?" and "How can we be sure that our senses are reliable?" are both epistemic questions. Ethics deals with questions of right and wrong (metaethics) and how we ought to live our lives (normative ethics). "What moral obligations do we have to our fellow man?" is the canonical ethical question. Logic sort of flits in and out of the other disciplines, popping its head in to be used as a tool (or to confound otherwise plausible theories) in any and all of the above, but it also has some questions of its own. Modal logic deals with necessity and contingency, and asks questions like "What does it mean for some truth to be true <span style="font-style: italic;">necessarily</span> rather than by chance, or contingently?"<br /><br />This blog deals mostly with metaphysical questions, but I had a very interesting discussion about epistemology with a colleague the other day, and I want to relate some of the highlights here and (hopefully) get some comments.<br /><br />The discussion revolved mostly around what counts as a good reason for believing some proposition, but I want to specifically focus on an epistemic maxim of my own invention: "we ought to be more skeptical of propositions we wish to be true."
Let me give a brief summary of the thought process that led me to adopt this maxim.<br /><br />First, I take it as axiomatic (that is, not requiring proof of its own) that holding true beliefs about the world is a good thing, and that holding false beliefs is a bad thing--I don't mean 'good' and 'bad' in any kind of moral sense here, only that, in general, the accumulation of true beliefs and the expunging of false beliefs (i.e. the search for truth) is a goal that can be valued in itself, and not necessarily for any pragmatic results it might deliver (though it certainly might deliver many). If you don't agree with that, feel free to shoot me an argument in the comments, and I'll do my best to address it.<br /><br />With that axiom in place, then, it seems reasonable that we should do whatever we can to avoid taking on new false beliefs, as well as strive to take on as many true beliefs as we can. That last part is important, as it saves us from going too far down the path of Radical Skepticism. If we were to adopt something like Descartes' method of doubt in his <span style="font-style: italic;">Meditations</span>--that is, adopt the maxim "we should withhold assent from any proposition that is not indubitable just as we would any proposition that is clearly false"--we would certainly minimize the number of false beliefs we would take on, but at the expense of likely rejecting a large number of true ones. Radical Skepticism results in too many "false epistemic negatives," or "avoids <a href="http://en.wikipedia.org/wiki/Scylla_and_Charybdis">Scylla</a> by steering into <a href="http://en.wikipedia.org/wiki/Scylla_and_Charybdis">Charybdis</a>," as another colleague said. To continue the metaphor, it also seems too dangerous to stray toward Scylla, lest I simply believe every proposition that seems <span style="font-style: italic;">prima facie</span> plausible--too far in the direction of naive realism, in other words. While I certainly consider myself a <a href="http://en.wikipedia.org/wiki/Naive_realism">naive realist</a> in the context of perception--I think that the way our senses present the world to us is more-or-less accurate, and that when I (say) perceive a chair or a tomato, I <span style="font-style: italic;">really am</span> perceiving a chair or a tomato, and not my "sense datum," "impression" or any other purely mental construct that is assembled by my mind--I think we ought to be somewhat more skeptical when it comes to epistemology in general.<br /><br />My colleague pressed me for the exact formulation of my response to the question at hand ("what counts as a good reason for forming or changing a belief?"), but I demurred, and on further reflection--both then and now--I'm not sure I can give a single answer. Rather, it seems to me that there are (or at least ought to be) a variety of heuristics in our "epistemic toolbox" that either raise or lower "the bar of belief" in various circumstances. "Naive realism" is shorthand for a cluster of these heuristics, it seems to me, including (for instance) "We should be more skeptical of propositions that would have the world operating in a way that is radically different from how it seems<a href="#1"><sup>1</sup></a>." I'm most interested right now in the general heuristic mentioned above, though: "we should be more skeptical of propositions we wish to be true."
So let's continue with our justification of it.<br /><br />I'm not a perfect reasoner; unlike, say, <a href="http://en.wikipedia.org/wiki/Laplace%27s_Demon">Laplace's Demon</a>, I am perfectly capable of making a mistake in my reasoning--indeed, it happens with alarming frequency. These errors can take many forms, but they can include assenting to arguments which, though they might seem sound to me, in reality either lack true premises or are somehow invalid. If I strongly desire some proposition <span style="font-style: italic;">p</span> to be true--if, for example, a close family member is in a coma and I hear about an experimental new treatment that might allow him to awaken with full cognitive faculties--I am more likely to make these errors of judgment, as I will not necessarily apply my critical faculties with the same force as I would to another proposition p<sub>1</sub> on which such strong hopes were not resting. My colleague objected that I would, given enough care, certainly be aware of when this was happening, and could take more care in my reasoning to ensure that this result did not occur, but I am not so certain: a corollary of the fact that I am a fallible reasoner seems to be that I might not always know <span style="font-style: italic;">when</span> my reasoning is being faulty. It is no solution, therefore, to say "we need not universally require a higher standard of proof for propositions we wish to be true; we just need to be sure that our reasoning is not being influenced by our desires," as it is possible--in just the same sense that it is possible for me to make a mistake in my reasoning--that I might make a mistake in evaluating that reasoning itself, no matter how much care I take to be certain that my desires do not influence my judgment.<br /><br />What, one might ask, do I mean by "skeptical," then, if not simply a more careful logical rigor? It seems to me that whenever I am thinking clearly (i.e. I am not drunk, asleep, distracted, etc.) and applying my logical faculties to the best of my ability (i.e. critically questioning my beliefs or trying as hard as I can to puzzle out a problem)--as I should be when I am seriously considering adopting a new belief or changing an existing one--I am already being as rigorous as I possibly can be; unless, for some reason, I have already lowered the "bar of belief" in a specific instance (e.g. suspending disbelief while watching an action movie) I should normally be as logically rigorous as I can be. If I'm critically examining whether to adopt some belief that I greatly wish to be true, then, I should not only be as logically rigorous as I can be--that is, I should set the bar of belief where I normally do--but also factor in the possibility that my desire might be affecting my logical reasoning--might be lowering the bar without my knowledge--and so I ought to require <span style="font-style: italic;">more</span> evidence than I otherwise would: that is, I ought to be more stubborn about changing my position. By "skeptical" here, then, I just mean "requiring more evidence," in the same way that if I'm skeptical of a student's claim that her computer crashed and destroyed her paper I will require more evidence attesting to the truth of it (a repair bill, maybe) than I normally would; her claim to that effect counts as at least some evidence, which might be enough if I had no reason to be skeptical.
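<br /><br />If it helps to see the shape of this idea in a more mechanical form, here is a toy sketch in Python--purely illustrative, with arbitrary numbers and a made-up "desire penalty" parameter, and in no way a serious formal model of belief: treat assent as requiring that the evidence for a proposition clear some bar, and nudge that bar upward whenever the proposition is one I want to be true, to offset the bias I may not notice in myself.<br /><br /><pre>
# Toy model of the "bar of belief" (illustrative only; every number here is arbitrary).
# Assent requires the evidence to clear a threshold, and the threshold is raised for
# propositions I *want* to be true, to compensate for bias I may not be aware of.

def should_assent(evidence_strength, desired, base_bar=0.7, desire_penalty=0.15):
    """Return True if, on this toy model, I ought to assent to the proposition."""
    bar = base_bar + (desire_penalty if desired else 0.0)
    return evidence_strength >= bar

print(should_assent(0.75, desired=False))  # ordinary proposition, decent evidence -> True
print(should_assent(0.75, desired=True))   # the same evidence, but I want it true -> False
</pre><br />The numbers mean nothing; the structure is the point: the very same evidence can clear the bar for an indifferent proposition and fail to clear it for one I desire to be true, which is exactly the asymmetry the maxim recommends.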
Let me make my point briefly and clearly. In making desire-related decisions--particularly when deciding whether to assent to a proposition I wish to be true--the possibility that my desire might negatively affect my reasoning, combined with the fact that I might not be aware of this negative effect, means that I ought to apply my normal reasoning faculties to the fullest of my ability <span style="font-style: italic;">and</span> require more evidence in favor of the proposition than I normally would.<br /><br />Even more succinctly: we ought to be more skeptical of propositions we wish to be true.<br /><br /><br /><br />Thoughts? Does this make sense? What standards do you apply when trying to make up your mind about your beliefs in general?<br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><a name="1"><sup>1.</sup></a> This is not to say that propositions which say that the world operates in radically different ways than it seems to us are always (or even usually) going to be false--the atomistic theory of matter, relativity, and quantum mechanics are all theories which seem to be at least mostly true, and which describe the world as being <span style="font-style: italic;">in fact</span> much different than it seems. My point is that we should hold claims like this to a higher epistemic bar before assenting to them than we would claims like (say) "there is a tree outside my window," which correspond with reality as it seems to us.<br /><br /><br /><br /><br /><br /><br />Edit: Because this discussion took place in class with my co-instructor, and because the kids all have cameras all the time, we get a picture of me thinking about this point and a picture of me arguing it with him. Enjoy.<br /><br /><br /><a href="http://photos-h.ak.facebook.com/photos-ak-snc1/v262/94/38/587553782/n587553782_1124999_9838.jpg"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 320px;" src="http://photos-h.ak.facebook.com/photos-ak-snc1/v262/94/38/587553782/n587553782_1124999_9838.jpg" alt="" border="0" /></a><br /><br /><a href="http://photos-g.ak.facebook.com/photos-ak-sf2p/v308/29/14/1229340189/n1229340189_30125486_9208.jpg"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 320px;" src="http://photos-g.ak.facebook.com/photos-ak-sf2p/v308/29/14/1229340189/n1229340189_30125486_9208.jpg" alt="" border="0" /></a>Jonhttp://www.blogger.com/profile/09594949524027204661noreply@blogger.com4