Tuesday, April 19, 2011

Born to the Metaverse: Scream 4 and the Future of Pop Culture


This is gonna contain some spoilers. Sorry, but it will. I'll try to flag the major ones, but if you really want to be surprised, you should probably stop reading right now. There's really no reason for that, though, because Scream 4 is by no means a good film. It's a slasher movie, and not a terrible one at that, but still just a slasher movie. The interesting aspects of it are entirely cultural: it's a window to the future. If you want to know what the next ten years are going to look like, you should see Scream 4. If you're a fan of the genre you might also want to see it, but I'm not going to talk about that here. I'm not a film critic, nor am I interested in what it did well with regard to film-making--I'm interested in it as a cultural phenomenon, and as a lens into the difference between youth culture 15 years ago (which is mainstream culture now), and youth culture today (which will become mainstream culture over the next 10 years). Scream 4 is one of the first definitive digital-native generation pieces. Here's why.


The original Scream was instrumental in mainstreaming the concept of explicit self-awareness in popular culture: as a friend pointed out to me, it wasn't the first major movie to play around with genre-awareness (Pulp Fiction, at least, beat it by a few years), but it was the first major movie to do it so explicitly. The characters in Scream talked about the conventions of the teen slasher genre, and the film was largely predicated on the novelty of such meta-commentary: the characters used their knowledge of genre conventions to survive. It was a slasher film about people who knew something about slasher films. This was prescient: Scream came out in 1996, and it would be precisely that kind of self-referential meta-commentary that would come to define the popular culture contributions of the 2000s--think of the difference between the humor of The Simpsons in the 90s and the humor of Family Guy in the 2000s for another good illustration of this. Scream was a trend-setter, presaging the rise of what technology writers have called "remix culture": the artistic style of creating novel works by recombining the elements of existing ones, and feeding popular culture back into itself. Whatever you think of this phenomenon (FUCK OFF JARON LANIER!), there's absolutely no question that it was the definitive cultural innovation of the 2000s; the kind of entertainment that people my age (what I call "young immigrants" to the digital world) grew up on was largely predicated on playing around with self-referentiality.


For most digital natives, though, this isn't anything new or original. Remix culture isn't an innovation--it's just the way things are. The oldest digital natives were only 3 or 4 years old when the original Scream came out; that generation is now starting to graduate from high school, go off to college, and make its own mark on popular culture. Scream 4 is the embodiment of that mark. HERE COME SOME SPOILERS. The movie opens with several rapid "frame shifts." We get a few really bloody kills immediately, but after each one, the movie zooms out to reveal that the scene actually took place inside a slasher movie (a thinly-veiled version of the Scream franchise itself called Stab) that other people were watching. The new characters pick apart the genre for a while, and are then killed. This process repeats (if I recall) three or four times. By the last zoom-out, the whole thing is just patently ridiculous--people in the theater were either groaning or (like me) laughing. At this point, it's really unclear where the movie is going; this whole process takes somewhere between 15 and 20 minutes, and I was buckled in for a really awful movie: I thought it was just going to be another sequel recycling the now decades-old trick of genre awareness. But that's not what happened.


This early absurd level of meta-humor is essential for what follows. Like a tasteless sorbet, it cleanses the palate, erasing all traces of what came before, and opening the way for you to appreciate the new flavor of what comes next. It gets all the issues about self-awareness and meta-commentary right out in the open immediately, loudly, and exaggeratedly. This is surely deliberate, and the message comes through loud and clear: yes, yes, there's meta-commentary going on here. That's obvious, and it's been done. Let's get all of this right out in the open. If Scream was meta, Scream 4 is post-meta: not just self-aware about horror film conventions, but self-aware about self-aware film conventions. This lets the meta aspects of the film fade into the background from that point forward: this isn't the gimmick driving the film anymore, but just a fact of life, and part of the scenery against which the rest of the narrative plays out. Everyone not only knows about the horror film conventions, but everyone knows that they know this: it's just assumed that everyone is constantly analyzing what's going on at a higher level of abstraction (or, at least all the young characters; more on this in a bit).


HERE'S ANOTHER SPOILER


One of the early frames is worth mentioning in particular: it plays out the "Facebook stalker who comes to kill you" sort of plot. The girls being stalked and killed have slightly out-of-date slider phones, adding a weird sense of one-off anachronism; this is exactly the kind of thing that an actual movie would have been about just a few years ago, but now it's passé. After the characters in the frame are slashed by the FB stalker, the "zoom out" commentary remarks that "it would be a Twitter stalker today." This frame in particular has the effect of not only getting meta-commentary out of the focus and into the background, but also of getting technology out of the focus and into the background. Everyone in Scream 4 (or, again, every teen character) has a smartphone that's constantly connected to the Internet, but that fact isn't in the least gimmicky in the way that it would have been just five years ago; social networking is portrayed with perfect verisimilitude, and the presence of ubiquitous computing and information is--just like the general meta-commentary--presented as just a normal feature of the world. Characters transition seamlessly between real-world interaction and digital interaction, and everyone just takes this as a matter of course. The fact that everyone is constantly connected to everyone else (and the Web) is no longer a novelty to be played with, but rather is just a fact in the same way that everyone's having access to a car is just a fact. Someone who knows more about the history of the genre than I do could probably point to a time when that transition took place--when teenagers with automobiles ceased to be a novelty driving (so to speak) a horror film's plot, and just became transparently there. Digital technology in Scream 4 is perfectly transparent: ubiquitous and taken for granted.


Or, again, at least for the younger characters. Part of the film's brilliance is the degree to which it nails the weird sort of techno-gulf between the older generation (Neve Campbell and Courteney Cox, the heroines of the first film) and the new generation of high-school aged victims. Neve Campbell was supposed to be a teenager in the first film, which would put her in her very late 20s or early 30s now. She's right on the older edge of the young-immigrant generation, and is by far the most competent of her generation's characters in this film. The rest of the older characters are portrayed as relatively bumbling incompetents who are always a few steps behind (largely in virtue of their reliance on landline phones and radio communication). SPOILER. At one point, Courteney Cox's reporter character, in a vain attempt to remain relevant, sneaks into an underground film festival to plant cameras, which transmit via closed circuit to her laptop in front of the building (as she sets them up, she congratulates herself on her relevance, saying something like "still got it!"). This would have been the height of technology in the first film, but in light of the fact that the high school's gossip-blog owner spends the entire film wearing (I shit you not) a live-streaming wireless webcam mounted on a headset (a fact which no one ever remarks on), it looks comically archaic. Her cameras are promptly disabled by the killer, and she gets stabbed when she tries to go fix them. This is what you get for using wires, 30-somethings.


The iconic trivia scene from the first film is also replayed, but it seems bizarrely out of place by the time it comes around: we've spent the whole film stewing in the fact that everyone knows everything about everyone all the time (none of the younger-generation characters are ever missing for more than a minute or two at a time, and are consistently tracked down with technology when they disappear for even a second), and so the idea that being able to simply recite a list of facts would possibly have any relevance to anything seems laughable: all that trivia knowledge that was so impressive in 1996 is just a Wikipedia search away for all characters all the time now. That point is hammered home by the single classroom scene in the film, in which a balding history instructor vainly tries to take control of a totally uninterested audience of students, each and every one of whom has a smartphone out and isn't paying attention. Behind the teacher, the phrase NO GOOGLING!! is written on the chalkboard in all capital letters. The irrelevance of normal education in light of the informal education made possible by the devices that all the students are carrying (and the teacher's frantic attempts to maintain the orthodox approach to pedagogy) practically slaps you in the face, and it is awesome.


All of this comes to a final head when the killer is revealed near the end. I'm not going to reveal which of the characters it really is, but MAJOR SPOILER he/she is one of the younger generation, and gives a long tirade about the role of the Internet, at one point yelling "We all live in public now, and to be famous you don't have to do anything great; you just have to do something fucked up. I don't want friends--I want fans." I wish I could find a copy of the film to find this speech again, because there was a lot more to it; I'll come back and revise this once I have a chance to see the movie again, but it makes many of the themes I've been pointing to here at least somewhat explicit, and also reveals the movie's greatest flaw--even though it's aware of these issues, it's ultimately still being directed, written, and produced not by digital natives (or even young immigrants), but by folks from the Old Country. In the end, all of this ubiquitous computing is cast very negatively, and it's suggested that the killer was driven to this in virtue of being immersed in a culture that encourages shallowness, extreme pragmatism ("we are whatever we need to be," says the killer), and amoral narcissism. SPOILER When the killer is finally defeated (by the just-barely-still-relevant Campbell), it is with a set of defibrillation paddles--the ultimate wired, analog technology finally triumphing over the digital menace.


If you want to see what the world is going to look like as the next generation starts to assume power, this is the movie to watch. You can see the writing on the wall here; the characters were born in the metaverse, and treat ubiquitous digital technology, social networking, self-referentiality, and remix culture not as novelties, but simply as part of the background against which they operate. These things are moving out of the fringes and being integrated into the mainstream of popular culture; as that move happens, it opens the door for new ways to define genre, and starts to suggest new conventions. This is beyond simple remix; the meta aspect of the 2000s has started to settle down and become just another facet of creative culture as a generation born to the Web starts to define itself in contrast to those who have come before. It's very, very exciting to watch.



Friday, March 4, 2011

UN/UNIS talk

I had the honor of speaking at the United Nations/United Nations International School global issues conference today. They held it in the UN General Assembly chamber (wow!), and I got to share the stage with such wizardly people as Clay Shirky and Alexis Ohanian. It was a tremendous amount of fun. If you're interested in seeing me--my talk was called "Building a Great Community from Terrible People (Or: How I Learned to Stop Worrying and Love 4Chan)"--then the direct link is here.

Monday, August 9, 2010

AI and Consciousness

So I've been at CTY again (as per usual). This time, I was the instructor for Human Nature and Technology during the first session, and then the instructor for Philosophy of Mind during the second. Yes, I am moving up in the world. As is normal at CTY, I was thinking about philosophy a lot. So today, when a friend of mine asked me to share my thoughts on the AI debate, I decided to finally try to put pen to paper and express the nascent argument I've been developing for a while now. This is how it turned out:

I think one of the most important things to press on here is the distinction between simulation and duplication. A lot of the disagreements about artificial intelligence, it seems to me, come down to disagreements about whether or not AI is supposed to be simulating a (human) mind or duplicating a (human) mind. This point is widely applicable, but one of the most salient ways that it comes up here is as a distinction between consciousness and intelligence. Let me say what I mean.

Some (many) properties of systems are functional properties: they're properties that are interesting just in virtue of what function they discharge in the system, not in virtue of HOW they discharge that function. For most purposes, human hearts are like this: an artificial heart is just as good as a normal one, because we don't really care about having a meat heart so much as having something to pump our blood. Functional properties are "medium-independent": they are what they are irrespective of the kind of stuff the system is made out of. Not all properties are like that, though. Here's a story.

Suppose I've got a few different copies of Moby Dick. Thanks to the wonders of technology, though, all of these copies are in different formats: I've got a regular old leather-bound book, a PDF on my hard drive, and a CD with the audio book (read by...let's say Patrick Stewart). These are all representations with the same content: that is, they're all representations of the events described in Moby Dick. The content of the story is medium-independent. There are, however, facts about each of the representations that don't apply to the other representations: the physical book is (say) 400 pages long, but the audiobook has no page length. The PDF is in a certain file format (PDF), but the book has no file format. The audiobook has a peak frequency and volume level, but the PDF has neither of those. This list could be continued for quite a while; you get the picture--the important thing to emphasize is that while they all have the same content, each of the representations is instantiated in a different physical form, and thus has facts that are unique to that physical form (e.g. page length, format, peak frequency, &c.). That's all true (once again) in spite of their identical content.
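A quick toy sketch, if code is your thing (the classes and numbers here are entirely my own invention, purely for illustration): the same content lives in several representations, and each representation has format-specific properties that the others simply lack.

```python
from dataclasses import dataclass

@dataclass
class PrintedBook:
    content: str
    page_count: int           # a fact about the paper, not about the story

@dataclass
class PdfFile:
    content: str
    file_format: str = "PDF"  # a fact about the encoding, not about the story

@dataclass
class Audiobook:
    content: str
    peak_volume_db: float     # a fact about the recording, not about the story

story = "Call me Ishmael..."  # stand-in for the full text of Moby Dick
copies = [PrintedBook(story, 400), PdfFile(story), Audiobook(story, -3.0)]

# The content is medium-independent: every copy carries the same story.
assert all(c.content == story for c in copies)

# The other properties are medium-dependent: asking a PDF for its page count
# isn't a deep mystery, it's just a question that doesn't apply to that medium.
print(hasattr(copies[1], "page_count"))  # False
```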

Now, let's add another representation to the mix. Suppose I, like the fellows at the end of Fahrenheit 451, have decided to memorize Moby Dick. It takes quite some time, but eventually I commit the entire book to memory, and can recite it at will. This is a fourth kind of representation--a representation instantiated in the complex interconnection of neurons in my brain. Paul Churchland (2007) has developed a very nice account of how to think about neural representation, and he talks about a semantic state space (SSS)--that is, a complicated, high-dimensional vector space that represents the state of each of my neurons over time. This SSS--the changing state of my brain as I run through the story in my head--represents Moby Dick just as surely as the book, audiobook, or PDF does, and it remains a totally physical system.

OK, so we can ask ourselves, then, what unique things we can say about the SSS representing Moby Dick that we can't say about the other representations. It has no page length, no file format, and no peak volume. What it does have, though, is a qualitative character--what Thomas Nagel calls a "what-it-is-like-ness." It has (that is) a certain kind of feeling associated with it. That seems very strange. Why should a certain kind of representation have this unique quality? Asking this question, though, just shows a certain kind of representational chauvinism on our part: we might as well ask why it is that a book has a certain number of pages, or why a PDF has a file format (but not vice-versa). The answer to all of the questions is precisely the same: this representation has that feature just in virtue of how the physical system in which it is instantiated is arranged. The qualitative nature of the memorized book is no more mysterious than the page count of the leather-bound book, but (like page length) it isn't medium-independent either, and that's the point that cuts to the heart of the AI debate, I think. Here's why.

Think about Turing and Searle as occupying opposite sides of the divide here. Turing says something like this: when we want AI, we want to create something that's capable of acting intelligently--something that can pass the Imitation Game. Searle points out, though, that this approach still leaves something out--it leaves out the semantic content that our minds enjoy. Something could pass the Imitation Game and still not have a mind like ours, in the sense that it wouldn't be conscious. This whole argument, I suggest, is just a fight about whether we should be going after the medium-independent or medium-dependent features of our minds when we're building thinking systems. That is, should we be trying to duplicate the mind (complete with features that depend on how systems like our brains are put together), or should we be trying to simulate (or functionalize) the mind and settle for something that discharges all the functions, even if it discharges those functions in a very different way? There's no right or wrong answer, of course: these are just very different projects.

Searle's point is that digital computers won't have minds like ours no matter what kind of program they run, or how quickly they run it. This makes sense--in the language of the book analogy from above, it's like asserting that no matter how much fidelity we give to the audiobook recording, it's never going to have a page length. Of course that's true. That's the sense in which Searle is correct. Turing has a point too, though: for many applications, we care a lot more about the content of the story than we do about the format in which we get it. Moreover, it looks like duplicating our sort of minds--building a conscious system--is going to be a lot harder than just building a functionally intelligent system. Ignoring the medium-dependent features of mentality and focusing on the medium-independent ones lets us build systems that behave intelligently, but gives us the freedom to play around with different materials and approaches when building the systems.

So in some sense, the question "can digital computers think" is ambiguous, and that's why there's so much disagreement about it. If we interpret the question as meaning "can digital computers behave intelligently?" then the answer is clearly "yes," in just the same sense that the answer to the question "can you write a story in the sand?" is clearly yes, even though sand is nothing like a book. If we interpret the question as meaning "can digital computers think in precisely the way that humans with brains think?" then the answer is clearly "no," in just the same sense that the answer to the question "will a story written in the sand have a page count?" is clearly "no," no matter what kind of sand you use. Searle is right to say that consciousness is special, and isn't just a functional notion (even though, as he says, it is a totally physical phenomenon). It's a function of the way our brains are put together. Turing is right to say that what we're usually after when we're trying to make intelligent machines, though, is a certain kind of behavior, and that our brains' way of solving the intelligent behavior problem isn't the only solution out there.

I'm not sure if any of that is helpful, but I've been meaning to write this idea down for a while, so it's a win either way.

Monday, June 7, 2010

On the Future of Philosophy and Science

A friend of mine asked me for my opinion about the future of philosophy. There's a popular perception that, in light of the tremendous advances science has made in the last century or so, philosophy is a dying discipline. As most of you know, I disagree with this assessment, but I do think that philosophy needs to adapt if it is to survive. Here are my brief thoughts on the future of my discipline, and its relationship to the scientific project.

Philosophy as an isolated discipline is certainly in decline--the number of questions that are purely "philosophical" (and worth answering) is shrinking. That's less a reflection on philosophy, though, and more a reflection of the state of academia in general: disciplinary lines are blurring. Physics is (at least in parts) informed by biology, information theory, and other special sciences. The special sciences themselves are (and have been for a while) mutually supportive and reinforcing; there's no clear line between a question for (say) sociology and a question for economics. None of that is to say that physics, biology, information theory, sociology, or economics is in decline, though--it just means that academia is becoming increasingly interdisciplinary.

On the face of it, this shift looks to have hit philosophy particularly hard: we've fallen quite far from our position as the queen of the Aristotelean sciences to where we are today, and there's a pervasive attitude both among other academics and among lay-people, I think, that philosophy is basically obsolete today, having been replaced by more reputable scientific investigation. There's a perception, that is, that metaphysics has been supplanted by mathematical physics, ethics has been rendered obsolete by sociobiology and evolutionary game theory, and that questions about the nature of the mind have been reduced to questions about neurobiology (or maybe computation theory). All of this is, I think, more or less true: the days of philosophy pursued as a stand-alone competitor to science are over, or at least they ought to be. This is emphatically not the same thing as saying that philosophy is dead or dying, though--it just needs to undergo the same kind of shift that other sciences have had to go through as they've entered the modern era. Philosophy needs to be incorporated into the unified structure of science generally.

It's not immediately obvious how to do this, but there are some clues. We should start by looking at the areas of science where philosophers--that is, people trained in or employed by philosophy departments using methods that are marked by careful attention to argument, critical examination of underlying assumptions, and concern with big-picture issues--are still making useful contributions to the scientific enterprise. There are, I think, two pretty clear paradigm cases here: quantum mechanics and cognitive science. In both of these fields, philosophers have made contributions that, far from consisting in idle navel-gazing and linguistic trickery, have made a real impact on scientific understanding. In QM, philosophers like David Albert, Hilary Greaves, David Deutsch, David Wallace, Barry Loewer, Tim Maudlin, Frank Arntzenius, and others have helped tremendously in clarifying foundational issues and resolving (or at least explicating) some of the trickier conceptual problems lurking behind dense mathematical formalism. Similarly, philosophers like Daniel Dennett, John Searle, Andy Clark, Ken Aizawa, and others have been instrumental in actually getting the field of cognitive science off the ground; just as in QM, these philosophers are responsible both for clarifying foundational concepts and for designing ingenious experiments to test hypotheses developed in the field.

What does the work these people are doing have in common in virtue of which it is philosophical? Again, the answer isn't clear, but this just reinforces the point that I'm making: there's no longer a clear division between philosophy and the rest of the scientific project to which philosophers ought to be contributing. If anything, the line between philosophy qua philosophy and science (insofar as there's a line at all) seems more and more to be a methodological line rather than a topical one; a philosopher differs from a "normal" scientist not in virtue of the subject matter he investigates, but in virtue of the way he approaches that subject matter. Scientists, by and large, are trained as specialists: by the time a physicist or biologist reaches the later stages of his PhD, his work is usually sharpened to a very fine point, and his area of expertise is narrow but very deep. Many (but not all) practicing scientists know a tremendous amount about their own fields, but are content to leave thinking about other fields to other specialists. Philosophers, on the other hand, are often generalists (at least when compared to physicists). In virtue of our general training in logic, argumentation, critical thinking, and, well, philosophy, we're often better equipped than most to see the bigger picture--to see the way the whole scientific enterprise fits together, and to notice problems that are only apparent from a sufficiently high level of abstraction. Training in philosophy means sacrificing a certain amount of depth of knowledge--I'll never know as much about particle physics as Brian Greene--for a certain amount of breadth and flexibility; by the time my training is done, I'll know a bit about particle physics, a bit about evolutionary theory, a bit about computer science, a bit about cognitive neurobiology, a bit about statistical mechanics, a bit about climate science, a bit about the foundations of mathematics, and so on. That kind of breadth certainly has its drawbacks--a philosopher is unlikely to make the kind of experimental breakthroughs that a scientist dedicating his life to a single problem might achieve--but it also has its benefits; philosophers are in a unique position to (as it were) care for the whole forest rather than just a few trees.

Philosophers are uniquely situated, that is, to engage in the project of "bridge-building" between the individual sciences--uniquely situated to facilitate the continuing break-down of disciplinary barriers that threatened philosophy's existence to begin with. Philosophy's tool-kit is sufficiently general to be applied to any of the special sciences, given a little bit of study and localization. This isn't to suggest that philosophers should (or even can) make pronouncements about scientific issues from the armchair; that's the model of philosophy that's dying, and I'm not the only one to have said "good riddance" to it. Doing philosophy of physics means learning physics, and doing philosophy of biology means learning biology. We need to engage with the disciplines to which we contribute; the edges of the bridges need to be anchored on solid ground before they can help us cross the interdisciplinary gaps. The "big picture" questions that have been the hallmark of philosophy for millennia--questions like "what is humanity's place in the universe?" and "what do our best theories of the structure of the world mean for who we are?" and even "what's special about consciousness?" still have a place in contemporary science. Science has room for both specialists and generalists, and questions like "what's the right way to think about a real physical system's being in a state that's represented by a linear combination of eigenvectors?" have an important place in science. The scientific enterprise takes all kinds, and there's room for philosophers to contribute, if we can just get our collective head out of our collective ass and come back to the empirical party with the rest of science.

Thursday, September 3, 2009

Bertrand Russell: Leaping Tall Proofs in a Single Bound Variable

Back when I was a human larva, Bertrand Russell was one of the first philosophers I ever discovered, let alone read in any depth. I was raised moderately Catholic, but by the time I was 11 or 12, I was wrestling with nascent feelings that Catholicism--and indeed, all of religion--might be terribly inadequate. One day, while hanging out in a bookstore (yeah, I was that kind of 12 year old), I happened on a book called Why I Am Not a Christian. I read the titular essay right then and there and, after buying the book, soon devoured the rest of the essays. Russell's clear, lucid, humorous prose expressed all the doubts I'd been unable to put into words (and then some!) and exposed me to serious philosophy for the first time. I was hooked, and before long I was plowing through Wittgenstein's Philosophical Investigations and every other piece of philosophy I could get my hands on. Though I'm not a logician--and though Russell's work on religion was only a very, very small part of his mostly logic-oriented corpus--I still have a soft spot in my heart for him: he was my first doorway into what eventually would become a career.

That's why I'm so delighted to discover that two gentlemen (one of them a computer science professor at Berkeley!) are publishing a graphic novel--that's what you call your comic book if you want it to be taken seriously--about Russell's struggles with life, mathematics, philosophy, and his own tenuous sanity. Snip from the article about it in The Independent:

Through GE Moore at Cambridge, he discovered Leibniz and Boole, and became a logician. Through Alfred Whitehead's influence, he travelled to Europe and met Gottlob Frege, who believed in a wholly logical language (and was borderline insane) and Georg Cantor, the inventor of "set theory" (who was locked up in an asylum) and a mass of French and German mathematicians in varying stages of mental disarray. Back home he and Whitehead wrestled with their co-authored Principles of Mathematics for years, endlessly disputing the foundations of their every intellectual certainty, constantly harassed by Russell's brilliant pupil Wittgenstein.

If the subject matter seems a little arid, with its theories of types, paradoxes and abstruse language (calculus ratiocinator?), and if its recurring theme of how logic and madness are psychologically intertwined seems a touch gloomy, don't let that put you off. Logicomix tells its saga of human argumentation with such drama and vivid colour that it leaves the graphic novel 300 (Frank Miller's take on the Battle of Thermopylae) looking like something from Eagle Annual.

This sounds great--something like Wittgenstein's Poker with pictures. It looks like the book itself isn't available for preorder on Amazon (it's going to be released in Europe on September 7, and sometime after that in the United States), but you can sign up to be notified when it is available. This is certainly something that I'll be making room in my schedule to read!

Sunday, August 9, 2009

Quicklink: Ben Bradley and Roy Sorensen on Death

I've been thinking a lot about death lately--there's no particular reason, I just find some of the questions surrounding the philosophy of death fascinating. Perhaps primarily, I'm intrigued by the intuition that some people (apparently) have that either (1) death is not an evil--that is, it isn't something that we should fear for ourselves--or that (2) indefinite life isn't something to be desired. I suspect that both of these intuitions come to more or less the same thing, but they don't seem to be universally correlated: some people will hold (1) without holding (2). When I first started talking to friends and colleagues about this issue, I was rather shocked to find out that anyone holds (1) or (2) at all--they seem so obviously false to me that I have a hard time fathoming how anyone could hold them. Still, holding them is apparently not all that unusual; I've got a paper floating around in my head attacking (1) and (2), but until it manifests (maybe later this semester?), I'll have to settle for just pondering. In the meantime, here are Ben Bradley (Syracuse University) and Roy Sorensen (Washington University in St. Louis) discussing some of these issues. The discussion is a little slow (and Ben Bradley is--ugh--a hedonist), but BloggingHeads lets you watch the whole thing at 1.4x speed. I recommend that option. They touch on some of the fundamental questions in the field, including (1) and (2)--Roy Sorensen and Ben Bradley both seem to share my shock about the fact that someone might hold (2). Enjoy!




Thanks, Leiter!

Wednesday, July 29, 2009

Having Your Qualia and Eating Your Physics Too

Can we coherently acknowledge the existence of qualia without being forced into a non-physicalist stance about the contents of the world? I'm back at CTY--as I am every summer--and today our philosophy of mind class got to Jackson's "Epiphenomenal Qualia." I was somewhat surprised, having not read the article since last year, to find that my own views on it seem to have changed considerably. Specifically, while I still agree with the main thrust of Jackson's argument (that is, that qualia exist), I'm much less impressed with the quality of his argumentation and the route by which he arrives at his conclusion; more specifically still, I'm incredibly skeptical that his "what Mary didn't know" argument shows anything like what it is purported to show. Qualia certainly deserve to be included in our ontology, but that emphatically doesn't imply that we ought to reject the physicalist picture of the world. Let me try and show how I think these two statements can be reconciled.

First, I suppose a bit of background is in order. Readers may already be somewhat familiar with the Mary case--Jackson's version of the knowledge argument against physicalism--so I won't waste a whole lot of time detailing the moves. Still, it's worth laying out exactly how the argument is supposed to proceed; as we shall see, the precise wording of one of the premises can make all the difference between soundness and total incoherence. Let's start with the informal presentation. Briefly, the standard presentation goes something like this.

Mary is a gifted neuroscientist who has dedicated her life to studying human color perception. She's learned everything there is to know about the physical process of seeing color: she knows everything about how the surface spectral reflectance of various objects interacts with environmental variables to produce changes in the photoreceptors of the eye, how those changes produce neural excitations, how those excitations are processed in the brain, and so on. She knows all the physical facts about how humans perceive color. Somewhat ironically, Mary herself has never perceived color. Her eyes (say) have been surgically altered so that she is only able to view the world in shades of grey. Nevertheless, her studies have proceeded beautifully, and she is now in a position of perfect physical knowledge. With this complete knowledge in hand, Mary undergoes an operation to reverse her perceptual idiosyncrasy; the procedure to keep her from being able to see color is reversed, and Mary's biology is returned to normal. When Mary awakens from the operation, she is presented with a red rose, and actually sees red for the first time. Does Mary learn something new?


On the standard interpretation, we're now presented with two horns of a dilemma: we're either forced to say that no, Mary has learned nothing new when she first sees color--an ostensibly counter-intuitive position to hold--or we're forced to say that yes, Mary learns something new when she sees the rose. If we take this second horn, though (so the argument goes), we must also admit that there are facts about color experience that are not physical; after all, ex hypothesi Mary knows all the physical facts about color vision--if she learns something new by actually seeing color, that new fact must be a non-physical fact. Therefore, the physicalist picture of the world is, while perhaps not strictly false, incomplete in an important way: it is incapable of accounting for the qualitative character of conscious experience. Thus, we must appeal to more than physics when describing a world that contains conscious creatures.

Here's a more formal presentation of the argument (taken from the SEP):

Premise P1: Mary has complete physical knowledge about human color vision before her release.

Therefore (from (P1)):

Consequence C1: Mary knows all the physical facts about human color vision before her release.

Premise P2: There is some (kind of) knowledge concerning facts about human color vision that Mary does not have before her release.

Therefore (from (P2)):

Consequence C2: There are some facts about human color vision that Mary does not know before her release.

Therefore (from (C1) and (C2)):

Consequence C3: There are non-physical facts about human color vision.


This is, at first glance, a very plausible argument. Jackson's own conclusion was a version of epiphenomenalism: at the time of the article's publication, he held that whatever non-physical knowledge Mary acquired must lack any kind of causal efficacy, thus maintaining the causal closure of the physical universe. That seems to me to be a pretty desperate move, though, and apparently Jackson eventually agreed--he's since recanted this position, and now holds that there must be something wrong with the Mary case. I'm not sure if he's put any work into figuring out what it is, but other people certainly have. I'm going to more or less ignore all of them, as is my wont.

Here's what struck me when I was reading this argument today while preparing to lecture to the class on it: Jackson is deeply ambiguous, confused, or otherwise mistaken about what he means in (P1). The argument never even gets off the ground just because he's wrong about the kinds of things that Mary would be able to know from her particular position in her gray scale world. Let's tease this apart a little more.

What does it mean to say that Mary knows all physical facts about color perception? Presumably, just this: for every predicate, relation, or process P that relates to human color vision, if P is constrained by the laws of physics, then Mary knows P. This should be relatively non-controversial--"physical facts" are those (and only those) facts that are about the behavior of physical systems (and nothing else). The physicalist position is that the set of these facts is identical with the set of all facts that are necessary to explain the workings of the universe; that is, the physicalist position is the position that knowing all the physical facts amounts to knowing everything worth knowing. More narrowly, the physicalist position vis-a-vis color perception is just that knowing all the physical facts about color perception is both necessary and sufficient to give a complete account of how color perception works.
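Putting that gloss into symbols (my own notation, not Jackson's), with $K_{\mathrm{Mary}}$ as a knowledge operator, (P1) amounts to roughly:

$$\forall P\,\big[\big(\mathrm{ColorVision}(P) \wedge \mathrm{Physical}(P)\big) \rightarrow K_{\mathrm{Mary}}(P)\big]$$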

Good. We're homing in on the problem. The next question that we need to answer is this one: how do we go about learning physical facts? The physicalist "bite the bullet" style response to Jackson's argument just denies that Mary learns anything new when she's exposed to color for the first time--it asserts that if she knew all the physical facts, then she'd know what the experience was like. This is not very intuitive; we have a deep intuition that no matter how much I study some subject (via books, laboratory experiments, and so on), there are just some facts--like what it's like to see color--that just won't be accessible to me. That is, we have an intuition that there are some relevant facts that either can't be written down, or can't be discerned through objective experimentation: the what-it-is-likeness of color experience is, presumably, counted among these facts. This is the intuition that Jackson's argument exploits.

It's worth proceeding carefully here, though. Is saying that some particular fact F can't be written down or accessed through objective, third-person experimentation--that is, can't be described from a "view from nowhere"--equivalent to saying that F isn't a physical fact? Can all physical facts (to put it another way) be written down and accessed from a third-person viewpoint? Recall our definition of 'physical fact' above:

"Physical facts" are those (and only those) facts that are about the behavior of physical systems (and nothing else)
Let's rephrase the question, then: can all the behavior of every physical system be represented in third-person accessible formats? If we answer this question in the affirmative, we've adopted the position that Flanagan, in Consciousness Reconsidered, terms "linguistic physicalism," and there seems to be good reason to think that we've made a mistake somewhere in our reasoning. If we answer the question in the affirmative (that is), we've committed ourselves to the following position.

(LP) What it means for some fact F to be a physical fact is for F to be representable in some observer-neutral, third-person accessible form (e.g. public language).

That's a problem, though. If we adopt (LP), then Jackson's argument collapses into something that's trivially true (if not question-begging!).

(1a) Mary knows all linguistic (i.e. third-person accessible) facts about color perception.
(2a) Mary learns something new about color perception when she sees the rose.
(3a) Therefore, there are some facts about color perception that are not representable linguistically.

Of course this is true: it's part of what it means for something to be qualitative (that is, to be a conscious experience) that it's essentially private--that it's essentially accessible only from the first-person perspective. The question, then, becomes whether or not we are justified in adopting (LP); can we give an account of what's going on that doesn't require us to adopt it? Sure: we just have to allow that there might be some physical facts--facts about the behavior of some physical systems--that aren't capturable in third-person accessible representations. If we make this concession, then explaining what's going on in the Mary case becomes very easy: while black-and-white Mary has learned all the linguistically representable physical facts about color perception, this set of facts is not identical to the set of all physical facts about color perception--that is, there are aspects of the behavior of some relevant physical systems that cannot be captured from the third-person "view from nowhere." These facts, of course, are facts about what it is like to be in a certain physical state. To put it another way, there are facts about the state of Mary's own brain--which is, of course, a physical system--that can't be known from a third person perspective: she actually has to be in that state in order to know everything about it. When she's exposed to red for the first time, then, she's adding another bit of physical knowledge--which just is, recall, knowledge about the behavior of physical systems, which includes her brain--to her knowledge-base: that bit of knowledge, though, is one that is only accessible from the first-person standpoint.

Let me try to put this point as simply as I can. The problem with this thought-experiment is that Jackson is mistaken when he says that black-and-white Mary knows all the physical facts. What he means to say is that she knows all the linguistic physical facts--all the physical facts that can be accessed from the "view from nowhere." What Mary doesn't know is the set of physical facts--facts about the physical system that is her brain--that can only be accessed from the first-person viewpoint; she doesn't know what it's like to be in a particular physical state. That's what she learns when she leaves her black-and-white operating room.

To put it one more way, let me just say this. "Physical facts" is a term that refers not to a set of facts that is defined by a mode of access--facts that have in common something about how they can be known--but to a set of facts that is defined by the sort of system they deal with--facts that have in common a subject matter, not a kind of access. Physical facts are facts about the behavior of systems for which that behavior is totally describable in terms of the laws of physics, and it makes absolutely no difference (at least as far as we're concerned here) what the mode of access to those facts is. Some (many!) of these facts are expressible in observer-neutral language. Some are not. What matters is not the mode of access, but rather whether or not what is accessed is information about the behavior of a physical system.

Addendum: Please read the comment thread for more on this. Both Mark and Eripsa have given very insightful criticism and show that this argument needs refining. I've done my best to refine it below, and I might post an updated version later on. For now, though, the discussion in the comments is definitely worth following. Thanks to Lally, too, for providing vehement (and helpful) critiques off-thread.

Monday, January 12, 2009

Musings on Embedded Epistemology

I took a course in epistemology last semester, and (surprise) it made me think about epistemology. What follows is an attempt to distill the random musings and conversations I've had over the last few weeks into something that begins to approach a coherent theory. It is, as I cannot emphasize enough, very preliminary so far, and very much a work in progress. Still, I find these considerations very interesting, and I hope you do as well.

Belief justification is like a progress bar on a download--it can be filled or emptied to various degrees by things that we encounter out in the world. For instance, if I trust some individual a great deal, his words will tend to fill my "truth bar" a great deal; this weighting is based (among other things) on my past interactions with him, my knowledge of his epistemic state, &c.--certain contextual variables about our relationship lead me to weigh his words highly when making (or contemplating making) epistemic actions like belief revision. The degree to which my truth bar is filled is also going to depend on the nature of the proposition this hypothetical interlocutor is informing me about: even from a trusted friend, I'm going to more readily assent to the proposition 'there is a brown dog around the corner' than I am to the proposition 'there is a child-eating clown around the corner.' Again, this reflects the contextually-influenced nature of epistemic action: based on other beliefs I have about how the world works, I'm going to be more or less likely to assent to a new belief (or to change an old one).

It's important to emphasize that the truth-bar is almost never entirely full, except in some very special cases (e.g. conscious states to which you have immediate, incorrigible access). Take the case of a proposition based on basic sensory information--e.g. 'there is an apple on my desk.' In normal circumstances--good lighting, I can feel and see the apple, other people see the apple too, &c.--I have very good reason to suspect that there really is an apple on my desk; the truth-bar for that proposition is (say) 99% full. Still, there are potential defeaters here: it might be the case that I am actually in some kind of Matrix scenario, and therefore it might be the case that there is no desk or apple at all. Still, based on other (fairly strongly justified) beliefs I have about the world, this Matrix scenario seems rather unlikely--that is, the truth-bar for 'I am in the Matrix' is very, very close to empty (though not entirely empty, as the proposition is still a logical possibility). Because this defeating proposition ('I am in the Matrix') has a very weak truth-bar, it doesn't weigh very heavily in my epistemic considerations--it's enough to keep the bar for 'there is an apple on my desk' from being 100% full, but that's about it.

This goes sharply against established epistemic tradition, according to which the primary goal of epistemology is truth. If we define truth as a 100% full bar, there are going to be very few propositions (aside from tautologies like 'all black things are black') that will enjoy an entirely full bar. Instead, the right way to think about epistemology--and about our epistemic responsibilities--is as a quest for justified belief, a quest for a reasonably full bar. What counts as 'reasonably full' is, again, going to vary based on contextual variables: when the stakes are rather low, I might assent to a proposition when (say) the truth bar is over 50% full. This might be the case when, for example, a friend tells me that there is a brown dog outside my house; I believe him, and if someone asks me 'is there a brown dog outside your house?,' I will be inclined to answer in the affirmative. My friend might be wrong or lying, but the stakes are low and I have very few strong defeater propositions in play--few good reasons to suppose that my friend speaks falsely, in other words. In more important cases (such as when engaged in technical philosophical deliberation, or when designing a passenger jet), I'm going to be inclined to withhold assent from propositions until the bar is almost entirely full: the consequences of assenting to the wrong belief are so potentially dire that I will demand a higher standard of justification, investigate possible defeaters more thoroughly, &c.
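To make this a little more concrete, here's a minimal toy sketch of the picture I have in mind--the update rule, trust weights, and thresholds are all invented purely for illustration, not a worked-out theory. Testimony fills the bar in proportion to trust and prior plausibility, and assent depends on a stakes-sensitive threshold rather than on certainty.

```python
def updated_bar(prior: float, trust: float, plausibility: float) -> float:
    """Nudge the truth bar toward 1.0 in proportion to trust and prior plausibility."""
    evidence_weight = trust * plausibility
    return prior + (1.0 - prior) * evidence_weight

def assent(bar: float, stakes: str) -> bool:
    """Contextual assent: low-stakes contexts tolerate a half-full bar."""
    thresholds = {"low": 0.5, "ordinary": 0.9, "passenger jet": 0.999}
    return bar >= thresholds[stakes]

# A trusted friend reports a brown dog outside; the claim is mundane, so the bar fills a lot.
brown_dog = updated_bar(prior=0.2, trust=0.9, plausibility=0.9)
# The same friend reports a child-eating clown; the claim is wildly implausible.
clown = updated_bar(prior=0.001, trust=0.9, plausibility=0.01)

print(round(brown_dog, 3), assent(brown_dog, "low"))  # 0.848 True
print(round(clown, 3), assent(clown, "low"))          # 0.01  False
```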

The emphasis here is on the contextually-dependent nature of epistemic action; rather than doing a lot of complex deliberating for every possible belief change entirely in our heads, we "offload" a certain amount of the work onto the existing epistemic environment; that is, we use the existing epistemic landscape to simplify our decision-making by heuristically assigning various "values" to propositions that are related to the one under consideration, and performing a kind of Bayesian calculation to get a rough approximation of truth or falsity. We can make a direct parallel here with other work being done on extended/embedded cognition and the extended mind thesis--in just the same way that we use external props (e.g. written notes) to support certain cognitive processes (e.g. memory), we use our intuitive grasp of the existing epistemic landscape as a prop to support our own decision making. I call this approach "contextually embedded epistemology."

Statisticians or those with a background in math will recognize that I'm describing something very much like a Bayesian network here--I suspect that our beliefs, were they to be mapped, would look much like this. There are multiple links between multiple different beliefs, and one belief might depend on many others for support (or might be partially defeated by many others). The picture is constantly in a state of flux as shifts in one node (i.e. a single belief) influence the certainty (i.e. the fullness of the truth bar) of many other nodes.  The Bayesian way of looking at things is far from new, but the emphasis on partial-completeness and environmental support, as far as I know, is.  These are just some random thoughts I've had about this in the last few days, so comments and criticisms are encouraged.  This needs a lot of tightening up.
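To gesture at that Bayesian-network picture, here's a brute-force toy version of the apple example from above; all of the probabilities are made up, and a real map of anyone's beliefs would be enormously bigger. The point is just that, after conditioning on the evidence, the truth bar for 'there is an apple on my desk' ends up very full but never completely full, because the wildly improbable Matrix defeater keeps a sliver of probability in play.

```python
from itertools import product

P_MATRIX = 1e-6        # prior: I am in a Matrix-style simulation
P_APPLE_IF_REAL = 0.3  # prior: an apple happens to be on my desk, if the world is real

def p_see(apple: bool, matrix: bool) -> float:
    """Likelihood of my seeming to see an apple."""
    if matrix:
        return 0.3                   # the simulation could be showing me anything
    return 0.99 if apple else 0.01   # reliable-but-imperfect perception

def joint(apple: bool, matrix: bool, see: bool) -> float:
    """Joint probability of one full assignment to the three variables."""
    p_m = P_MATRIX if matrix else 1 - P_MATRIX
    if matrix:
        p_a = 0.0 if apple else 1.0  # if I'm in the Matrix, there is no real apple
    else:
        p_a = P_APPLE_IF_REAL if apple else 1 - P_APPLE_IF_REAL
    p_s = p_see(apple, matrix) if see else 1 - p_see(apple, matrix)
    return p_m * p_a * p_s

# Condition on the evidence "I seem to see an apple" and read off the truth bar.
numerator = sum(joint(True, m, True) for m in (False, True))
denominator = sum(joint(a, m, True) for a, m in product((False, True), repeat=2))
print(f"truth bar for 'there is an apple on my desk': {numerator / denominator:.4f}")
# ~0.9770: very full, but the Matrix possibility keeps it from ever hitting 1.0
```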

Thursday, December 25, 2008

Quicklink: How a Computer Works

BoingBoing recently featured scans of a wonderful 1978 book called How a Computer Works.  It's so full of awesome, it's a wonder it doesn't explode; there's even some implicit philosophy!  It seems almost too amazing to be real, but it's entertaining either way.  Snip:

There is something about computers that is both fascinating and intimidating.  They are fascinating when they are used in rocketry and space research, and when they can enable man to get to the moon and back.  In this respect, they are like human machines with "super-brains."  Some of them can even play music.  On the other hand, we are likely to be intimidated by their complex mechanisms and large arrays of blinking lights.  You should do what scientists tell you to.  
In fact, computers do not have brains like we do.  They cannot really think for themselves, except when they are doing complicated arithmetic.

So next time you start using your calculator program remember this: the more complex arithmetic you do, the more sentient They become--other than that, do what scientists tell you to.

Saturday, December 13, 2008

Quicklink: Dennett and Clark Smack Substance Dualists Down

New Scientist recently ran a very short piece in which Dennett and Clark respond to accusations that any talk about mind influencing body (e.g. as when a deliberate shift in attention causes a change in brain states) implies an acceptance of some kind of immaterial soul / Cartesian ego.  The rejoinder they offer is short, to the point, and (it seems to me) decisive.  Snip:

But this would lend support to the proposition that minds are non-material - in the strong sense of being beyond the natural order - only if we were to accept the assumption that thoughts, attending and mental activity are not realised in material substance.


I've had my differences with both Clark and Dennett with regard to the nature of consciousness, but they're right on here: arguing that the explanatory role of consciousness proves the existence of an immaterial (i.e. essentially non-physical) kind of substance is straightforwardly question-begging--it assumes that consciousness is not itself the result of physical processes.  Descartes' legacy haunts us still.

Tuesday, December 9, 2008

Andy Rooney Derides Upgrade Culture, Misunderstands Technology

Here's a delightful little video of Andy Rooney doing his lovable curmudgeon thing, this time with his sights set on Bill Gates, upgrade culture, and the computer's supplantation of the typewriter generally.  I absolutely adore Andy Rooney, but what he has to say here is a beautiful representation of how people on the other side of the so-called "digital divide" often misunderstand technology.  Watch the video first:





Now, leaving aside the issue that Bill Gates doesn't really have anything to do with hardware design (or the trajectory of technology generally, at least not directly), a few of the points that Mr. Rooney makes in this piece are representative of some fundamental confusions regarding technology--confusions that, I think, are shared by many in his generation.  I want to say a few words about those confusions here.

Mr. Rooney's central point is that while he wrote on the same Underwood typewriter for decades, he's forced to upgrade his computer every year or two; new computers are seldom compatible with every aspect of their predecessors' functionality--old file types are dropped (try to find a computer that will read .wps documents today), and old programs are no longer supported (my 64 bit Vista machine already complains about running 32 bit programs that are only a year old)--and morphological similarities are rarely preserved.  This is all certainly true, but the same is true of technology generally--the time scale has only recently accelerated to the point where such differences become visible.

I've advocated the Vygotsky/Clark/Chalmers position of thinking of technology as cognitive scaffolding before, and I think that metaphor is informative here.  Suppose you're using scaffolding (in the traditional sense) to construct a tall building.  As the building (and the scaffolding) gets higher and higher, certain problems that didn't exist at ground level will manifest themselves as serious issues--how to keep workers from plummeting 60 stories to their deaths, for instance, is a problem that's directly related to working on 60 story tall scaffolding.  Still, it would be a mistake to say "Why do we need 60 story high scaffolding?  We didn't have any of these problems when the scaffolding was only 10 feet high, so we should have just stopped then; making higher scaffolding has caused nothing but problems."  We need 60 story high scaffolding, a contractor might point out, because it helps us do what we want to do--i.e. construct 60 story buildings.  The fact that new problems are created when we start using 60 story high scaffolding isn't a reason to abandon the building's construction, but only a reason to encourage innovation and problem-solving to surmount those newly emergent issues.

Precisely the same is true, I think, of technology.  Mr. Rooney speaks as if the upgrade culture exists just to line Bill Gates' pocketbook--as if the constant foisting of new software and hardware on consumers is the result of a pernicious conspiracy to deprive poor rubes of their hard-earned money without giving them anything except a headache in return; this is simply false.  It's true that the average life expectancy of a computer is far less than the average life expectancy of its ancestral technology (e.g. the typewriter), but Mr. Rooney doesn't seem to realize that each technological iteration comes with commensurate functional advancement--the computers on the shelf today aren't just dressed up typewriters, but solve new problems, and solve old problems in better ways with each generation.  Rather than just being a vehicle for word processing, computers today are word processors, communication devices, entertainment centers, encyclopedias, and a myriad of other devices all rolled into one.  We pay a price for this advancement--computer viruses weren't a problem before the Internet made it easy to transfer and share information with many people quickly--but, like the problem of keeping construction workers from plummeting to their deaths, the new issues raised by evolving technology are worth solving.

Mr. Rooney's typewriter probably wasn't radically different from the one his father might have used, and if we go back further we'll see even less of a difference--Mr. Rooney's grandfather, great-grandfather, and great-great-grandfather probably wrote (if they wrote at all) with more or less precisely the same kind of technology: pen and ink.  By contrast, the kind of computer I'm using right now will almost certainly bear little or no resemblance to the computers my children or grandchildren will be using 50 years down the line; the pace of technological innovation is increasing.  Still, this increasing tempo represents more than just a commercial scam--it represents the increasing productivity, cognition, and innovation that is made possible with each succeeding generation of technology: as the tools improve, they are in turn used to design even better tools.  I think this makes an occasionally moving power button a small price, and one worth paying.

Saturday, November 29, 2008

Brief Musing on Philosophy and Professionalism

A few days ago, a former student of mine sent me a link to a conversation she'd been having over a Facebook message board.  The topic had to do with whether or not philosophers are born or made (through education, not in labs), but it had devolved into a disagreement about the role lay-people should take in philosophical discourse--my former student was basically arguing that anyone with a good mind can be a philosopher, and others were attacking her by claiming that being a philosopher requires specialized training (i.e. a doctorate), and non-professionals can't lay claim to the title.  I think that's crap, so I posted a quick response, which I have reproduced here for those that might be interested.  It's relatively self-contained, except for one reference to my student by name ("Katelin").  Enjoy.

There's a popular confusion, I think, between 'professional philosopher' and 'person who thinks in logical and rigorous ways.' It's certainly true that an individual cannot simply declare himself a philosopher in the Leiterrific sense of the term--that takes years of specialized training and a good measure of talent to achieve. However, this should not be taken to imply that only those who have been anointed by the right people can honestly call themselves philosophers, or claim to be engaged in a philosophical project. In this respect, I think Katelin is absolutely right, and I think that this pernicious elitism is doing damage to the intellectual discourse that is essentially at the heart of the profession.

Remember that the idea of a 'professional philosopher' is a relatively new one (at least on a wide scale)--the Academy didn't really start to flourish as the center for philosophical discourse until the 19th century. Before that, philosophy was primarily done by people who likely wouldn't have considered themselves 'professional philosophers': clergy, scientists, mathematicians, and intelligent lay-people were all part of the philosophical discourse. The shift away from philosophy as a matter of public interest and concern and toward an insular and increasingly obscure clique of professionals has not been hailed by all as a positive change; many of us who consider ourselves part of the profession still hold to Russell's maxim that philosophy essentially concerns matters of interest to the general public, and that much value is lost when only a few professionals can understand what is said. Excluding people from the discourse because they lack the proper credentials or pedigree is not going to make philosophy better, but only cut it off from what should be its essential grounding: the everyday reality in which we all live. Remember that even Peirce--widely regarded as a giant of American Pragmatism--couldn't hold down an academic job; his contribution to the field of philosophy is not lessened by this fact.

There are still people today who are doing substantive (and interesting) philosophical work, but who are not tenure track philosophers at research universities--Quee Nelson comes to mind immediately as an exemplar, but there are certainly others as well. If philosophy consists just in a dance wherein the participants throw obscure technical terms back and forth at each other, then only professionals can be philosophers. If, however, it consists in careful, reasoned, methodical thinking about the nature of reality, then anyone with the drive and intelligence can be a philosopher.

Who, then, should claim the title? I'm inclined to think that like 'hacker,' 'philosopher' is not a title that one should bestow upon oneself, but rather something that should represent some degree of recognition by the others in the field--if you show yourself able to think carefully and analytically about conceptual questions, then you're a philosopher in my book. That doesn't mean I think your answers to those questions are correct, though.