Thursday, December 25, 2008

Quicklink: How a Computer Works

BoingBoing recently featured scans of a wonderful 1978 book called How a Computer Works.  It's so full of awesome, it's a wonder it doesn't explode; there's even some implicit philosophy!  It seems almost too amazing to be real, but it's entertaining either way.  Snip:

There is something about computers that is both fascinating and intimidating.  They are fascinating when they are used in rocketry and space research, and when they can enable man to get to the moon and back.  In this respect, they are like human machines with "super-brains."  Some of them can even play music.  On the other hand, we are likely to be intimidated by their complex mechanisms and large arrays of blinking lights.  You should do what scientists tell you to.  
In fact, computers do not have brains like we do.  They cannot really think for themselves, except when they are doing complicated arithmetic.

So next time you start using your calculator program remember this: the more complex arithmetic you do, the more sentient They become--other than that, do what scientists tell you to.

Saturday, December 13, 2008

Quicklink: Dennett and Clark Smack Substance Dualists Down

New Scientist recently ran a very short piece in which Dennett and Clark respond to accusations that any talk about mind influencing body (e.g. as when a deliberate shift in attention causes a change in brain states) implies an acceptance of some kind of immaterial soul / Cartesian ego.  The rejoinder they offer is short, to the point, and (it seems to me) decisive.  Snip:

But this would lend support to the proposition that minds are non-material - in the strong sense of being beyond the natural order - only if we were to accept the assumption that thoughts, attending and mental activity are not realised in material substance.


I've had my differences with both Clark and Dennett with regard to the nature of consciousness, but they're right on here: arguing that the explanatory role of consciousness proves the existence of an immaterial (i.e. essentially non-physical) kind of substance is straightforwardly question-begging--it assumes that consciousness is not itself the result of physical processes.  Descartes' legacy haunts us still.

Tuesday, December 9, 2008

Andy Rooney Derides Upgrade Culture, Misunderstands Technology

Here's a delightful little video of Andy Rooney doing his lovable curmudgeon thing, this time with his sights set on Bill Gates, upgrade culture, and the computer's supplantation of typewriters generally.  I absolutely adore Andy Rooney, but what he has to say here is a beautiful representation of how people on the other side of the so-called "digital divide" often misunderstand technology.  Watch the video first:





Now, leaving aside the issue that Bill Gates doesn't really have anything to do with hardware design (or the trajectory of technology generally, at least not directly), a few of the points that Mr. Rooney makes in this piece are representative of some fundamental confusions regarding technology--confusions that, I think, are shared by many in his generation.  I want to say a few words about those confusions here.

Mr. Rooney's central point is that while he wrote on the same Underwood typewriter for decades, he's forced to upgrade his computer every year or two.  New computers are seldom compatible with every aspect of their predecessors' functionality--old file types are dropped (try to find a computer that will read .wps documents today), old programs are no longer supported (my 64-bit Vista machine already complains about running 32-bit programs that are only a year old), and morphological similarities are rarely preserved.  This is all certainly true, but the same is true of technology generally--the time scale has only recently accelerated to the point where such differences become visible.

I've advocated the Vygotsky/Clark/Chalmers position of thinking of technology as cognitive scaffolding before, and I think that metaphor is informative here.  Suppose you're using scaffolding (in the traditional sense) to construct a tall building.  As the building (and the scaffolding) gets higher and higher, certain problems that didn't exist at ground level will manifest themselves as serious issues--how to keep workers from plummeting 60 stories to their deaths, for instance, is a problem that's directly related to working on 60-story scaffolding.  Still, it would be a mistake to say "Why do we need 60-story scaffolding?  We didn't have any of these problems when the scaffolding was only 10 feet high, so we should have just stopped then; making higher scaffolding has caused nothing but problems."  We need 60-story scaffolding, a contractor might point out, because it helps us do what we want to do--i.e. construct 60-story buildings.  The fact that new problems are created when we start using 60-story scaffolding isn't a reason to abandon the building's construction, but only a reason to encourage innovation and problem-solving to surmount those newly emergent issues.

Precisely the same is true, I think, of technology.  Mr. Rooney speaks as if the upgrade culture exists just to line Bill Gates' pocketbook--as if the constant foisting of new software and hardware were the result of a pernicious conspiracy to deprive poor rubes of their hard-earned money without giving them anything but a headache in return; this is simply false.  It's true that the average life expectancy of a computer is far less than that of its ancestral technology (e.g. the typewriter), but Mr. Rooney doesn't seem to realize that each technological iteration comes with commensurate functional advancement--the computers on the shelf today aren't just dressed-up typewriters; they solve new problems, and solve old problems in better ways, with each generation.  Rather than just being a vehicle for word processing, computers today are word processors, communication devices, entertainment centers, encyclopedias, and a myriad of other devices all rolled into one.  We pay a price for this advancement--computer viruses weren't a problem before the Internet made it easy to transfer and share information with many people quickly--but, like the problem of keeping construction workers from plummeting to their deaths, the new issues raised by evolving technology are worth solving.

Mr. Rooney's typewriter probably wasn't radically different from the one his father might have used, and if we go back further generations we'll see even less of a difference--Mr. Rooney's grandfather, great-grandfather, and great-great-grandfather probably wrote (if they wrote at all) with more or less precisely the same kind of technology: pen and ink.  By contrast, the kind of computer I'm using right now will almost certainly bear little or no resemblance to the computers my children or grandchildren will be using 50 years down the line; the pace of technological innovation is increasing.  Still, this increasing tempo represents more than just a commercial scam--it represents the increased productivity, cognition, and innovation made possible with each succeeding generation of technology: as the tools improve, they are in turn used to design even better tools.  I think this makes an occasionally moving power button a small price, and one worth paying.

Saturday, November 29, 2008

Brief Musing on Philosophy and Professionalism

A few days ago, a former student of mine sent me a link to a conversation she'd been having on a Facebook message board.  The topic had to do with whether philosophers are born or made (through education, not in labs), but it had devolved into a disagreement about the role lay-people should take in philosophical discourse--my former student was basically arguing that anyone with a good mind can be a philosopher, and others were attacking her by claiming that being a philosopher requires specialized training (i.e. a doctorate), and that non-professionals can't lay claim to the title.  I think that's crap, so I posted a quick response, which I have reproduced here for those who might be interested.  It's relatively self-contained, except for one reference to my student by name ("Katelin").  Enjoy.

There's a popular confusion, I think, between 'professional philosopher' and 'person who thinks in logical and rigorous ways.' It's certainly true that no individual can simply decide to declare himself a philosopher in the Leiterrific sense of the term--that takes years of specialized training and a good measure of talent to achieve. However, this should not be taken to imply that only those who have been anointed by the right people can honestly call themselves philosophers, or claim to be engaged in a philosophical project. In this respect, I think Katelin is absolutely right, and I think that this sort of pernicious elitism is doing damage to the intellectual discourse that is at the heart of the profession.

Remember that the idea of a 'professional philosopher' is a relatively new one (at least on a wide scale)--the academy didn't really start to flourish as the center for philosophical discourse until the 19th century. Before that, philosophy was primarily done by people who likely wouldn't have considered themselves 'professional philosophers': clergy, scientists, mathematicians, and intelligent lay-people were all part of the philosophical discourse. The shift away from philosophy as a matter of public interest and concern and toward an insular and increasingly obscure clique of professionals has not been hailed by all as a positive change; many of us who consider ourselves part of the profession still hold to Russell's maxim that philosophy essentially concerns matters of interest to the general public, and that much value is lost when only a few professionals can understand what is said. Excluding people from the discourse because they lack the proper credentials or pedigree is not going to make philosophy better, but only cut it off from what should be its essential grounding: the everyday reality in which we all live. Remember that even Peirce--widely regarded as a giant of American Pragmatism--couldn't hold down an academic job; his contribution to the field of philosophy is not lessened by this fact.

There are still people today who are doing substantive (and interesting) philosophical work, but who are not tenure track philosophers at research universities--Quee Nelson comes to mind immediately as an exemplar, but there are certainly others as well. If philosophy consists just in a dance wherein the participants throw obscure technical terms back and forth at each other, then only professionals can be philosophers. If, however, it consists in careful, reasoned, methodical thinking about the nature of reality, then anyone with the drive and intelligence can be a philosopher.

Who, then, should claim the title? I'm inclined to think that like 'hacker,' 'philosopher' is not a title that one should bestow upon oneself, but rather something that should represent some degree of recognition by the others in the field--if you show yourself able to think carefully and analytically about conceptual questions, then you're a philosopher in my book. That doesn't mean I think your answers to those questions are correct, though.

Wednesday, November 26, 2008

Next Semester's Schedule

I can hardly believe it, but next week I'll be finished with my first semester here at Columbia.  I'm planning on posting a retrospective discussing my initial impressions of graduate school then--I'm still writing final papers right now--but in the meantime, here are the courses I'll be taking in the Spring, for any who are interested.


PHILOSOPHY OF SCIENCE

The logic of inquiry in natural sciences: substantive as well as methodological concepts such as cause, determination, measurement, error, prediction, and reduction. The roles of theory and experiment. 

This should be fun.  It's being taught by a professor that I've gotten to know reasonably well this semester, and have enjoyed working with immensely (Jeff Helzner).  I'm very interested in the philosophy of science, but have never had an opportunity to take a formal course on the subject.  I'm looking forward to rectifying that.



DIRECTION OF TIME 

A survey of the various attempts to reconcile the macroscopic directionality of time with the time-reversibility of the fundamental laws of physics. The second law of thermodynamics and the concept of entropy, statistical mechanics, cosmological problems, the problems of memory, the possibility of multiple time direction. 


This course is being taught by David Albert, who achieved minor celebrity status a few years back because of his participation in the rapturously terrible "pop metaphysics" film What The Bleep Do We Know?!, which was, in Prof. Albert's words, "wildly and irresponsibly wrong."  The film purported to explore the connection between quantum mechanics, spirituality, and free will, but more or less just ended up as propaganda for J.Z. Knight's cult.  I've been toying with the idea of trying to pick up an MA in the Philosophical Foundations of Physics (which Columbia offers) while I'm here, and this class will hopefully give me an idea as to whether or not that's a good idea.



FORMAL ONTOLOGY

Parts, wholes, and part-whole relations; extensional vs. intensional mereology; the boundary with topology; essential parts and mereological essentialism; identity and material constitution; four-dimensionalism; ontological dependence; holes, boundaries, and other entia minora; the problem of the many; vagueness.


This one's taught by quite possibly one of the coolest professional philosophers I've ever met: Achille Varzi.  He's got a great sense of humor and seems sharp as a tack.  This will probably be the toughest class I take next semester, but it sounds interesting.  Basically, it seems like we'll be covering how parts of things relate to wholes; it's usually the courses with descriptions that I don't understand that I end up getting the most out of.



PROSEMINAR

The course aims to promote weekly writing by each student. A paper, or section of a book, with which every philosopher ought to be familiar, will be selected each week, and one student will make a presentation on that target paper, while the others will hand in a brief essay about it. Essays will be returned, with comments, before the next meeting of the seminar. Each week a different member of the faculty, in addition to Professor Rovane, will participate in the discussion.

And, of course, the Proseminar.  It sounds mundane, but I actually got quite a lot out of the first half this semester.  It's great to get to meet the various members of the faculty, and the individualized attention and constant feedback on my writing were helpful.  Also, my cohort pretty much rocks.

Friday, November 21, 2008

Internal and External Language

As I start putting together my formal paper about ethics as a social technology, I've been researching the relationship between language and cognition. A few researchers have called the "inner monologue" phenomenon essential to (or even constitutive of) cognition--we talk to ourselves (either out loud or in our heads) as a way of working out problems. This seems right to me: as most people who have done any kind of deep thinking know, it helps tremendously to have an interlocutor (real or imaginary) off which to bounce ideas. This point has led me to consider something related (albeit tangential), though, about which I'd love to get some input.

I'm certain that everyone has had the "inner monologue" experience of speaking to oneself silently--you might rehearse a speech in your mind before you give it, silently repeat the list of things you need to pick up at the grocery store, or try to work out a philosophical problem by talking to yourself in your head. While it's certain that this sort of process is linked to language--it's hard to see how a pre-linguistic animal could think linguistically--I wonder how close this relationship is. Jerry Fodor (among others) holds the position that mental representation happens in a meta-linguistic form that he terms "Mentalese"--while thinking in Mentalese might feel like thinking in (say) English, it differs in slight but important ways. If this theory is correct, it would seem that different brain processes would have to govern true language and Mentalese language (or inner monologues); we should expect, then, to see the two occasionally come apart. 

Here's my question, then: when a person suffers a stroke (or some other kind of brain injury) that interrupts speech functions (as damage to certain parts of the left hemisphere often does), is the inner monologue similarly interrupted? If so, is this always the case, or is it possible to lose the ability to express our thoughts symbolically (either through speech or writing) but still be able to represent thoughts to ourselves in Mentalese? If the latter is correct, that would seem to bolster the Fodorian position that inner speech is fundamentally different from linguistic representation; if the two faculties are inseparable, though, that would seem to cast doubt on the principled distinction between inner monologue and public language. 

I'm researching this question as we speak, but I'm interested in seeing if anyone out there has any first-hand experience with this--have you ever suffered a stroke, or known someone who has? If you lost language, did you also lose the ability to form thoughts with propositional content? Did one faculty return before the other, or are they mutually supportive? Any input is appreciated.

Saturday, November 15, 2008

Quicklink: What Makes the Human Mind?

Harvard Magazine ran a short but interesting piece this week about what makes the human mind unique.  The article's not terribly in-depth, but at least they point out the complexity of the human/animal cognition problem.  Too often, we simply see the claim that human intelligence isn't unique superficially substantiated by pointing out chimpanzee tool use or bee dances--Harvard's piece points out that the issue isn't nearly that simple.  If you're interested in exploring this topic further, I'd recommend Michael Gazzaniga's newest book, Human: The Science Behind What Makes Us Unique.  It's written in reasonably accessible language, but still has enough hard science to interest those with more technical backgrounds.

Thursday, November 13, 2008

A Kripkean Limerick

I got bored this evening:

There once was a man named Saul Kripke
Who said I will force you to pick me
I designate rigidly
Yet you look at me frigidly
So come on, dear girl, and just lick me.

Oh come now, my dear Mr. Kripke
I know not what you mean by 'just lick me.'
For the word 'me,' you see
Is indexed to thee
And your theory of reference can't trick me

"Ah ha!" then said old Mr. Saul
I can tell that you're trying to stall
But with a theory so long
How can I be wrong?
By my side those descriptivists pall

I don't care about size, Mr. Kripke
And I know you're still trying to trick me
Proper names still refer
As descriptions I'm sure
And your rigid old theory shan't stick me


It's likely that no one will find this funny without some background in the philosophy of language (and even then it's still pretty likely, probably). For reference: rigid designation, Saul Kripke, descriptivism, indexicals.

Wednesday, November 12, 2008

Quicklink: Neurons and the Universe

This is just unspeakably awesome.  A side-by-side shot of a neuron and a mock-up of the visible universe show the remarkable similarities between the two:



Badass.

Sunday, November 9, 2008

DNA and Representation

A few different conversations today have gotten me thinking about a topic that bothers me from time to time: the idea that DNA is in some sense a code, language, design, blueprint, representation, or some other term that implies a degree of intentionality (in the technical sense).  I see this claim made very frequently--most notoriously and nefariously by the more clever Intelligent Design theorists as part of an argument for God's existence, but also by well-meaning philosophers of biology and language alike--but I rarely see it challenged.  I would like to at least briefly take up that challenge here; as is sometimes the case, I will likely turn this post into a more formal paper sometime in the future, so many of the ideas I present here are not fully fleshed out or argued for.  Comments and criticisms are, of course, always welcome.  I'm going to focus here on why this sort of approach doesn't work as a means to prove the existence of God, but I expect to say quite a lot that's more broadly interesting along the way.

First let me try to state the argument as clearly and charitably as I can. DNA is a design simply because it is a code for us! That is, because every single cell in my body has a complete genome, and because that complete genome carries all the necessary information to build my body, DNA must be a design. Prima facie, it meets all the criteria of information: it is medium independent (I can change the DNA molecule into a string of As, Ts, Gs, and Cs and it will still retain its information-carrying capacity), and it stands for something more than itself (i.e. my body). DNA, put simply, is a language--a code for me--and that means that it is a design. DNA has semantic content in the same way that the English language does: all those chemicals have a meaning, and that meaning is me. Any design implies a designer, and thus DNA had a designer. Humans didn't design themselves, so something else must have done the job. God is the most likely culprit.

I trust a design theorist would find this formulation acceptable--now I'm going to tell you why it's not. A common extension of this metaphor is to refer to DNA as a "blueprint for you;" I think this metaphor is pedagogically useful, so let's adopt it for the purposes of this discussion. A blueprint, to be sure, represents a building, but before we're going to decide if my DNA represents me in the same way, we're going to have to be clear about what precisely we mean by 'represent' here; it seems that there are at least two primary senses in which we might be using the term, so let's consider DNA in light of each of them.

First, by 'blueprint B represents building X' we might mean something like what we mean when we say that a map represents a nation: that is, that the blueprint corresponds to a building in that each line can be matched with a wall, door, or similar structure in reality. But wait, this does not seem entirely adequate: to adapt Hilary Putnam's famous example, we can imagine an ant crawling in the sand which, by pure chance, traces with its tracks a perfect duplicate of the original blueprint for the Eiffel Tower. It does not seem right here to assert that the ant has created a blueprint for the Eiffel Tower (for a more detailed argument for this, see Putnam's Reason, Truth, and History, pages 1-29). Representation in this sense--in the sense of a map of New York or a painting of Winston Churchill--requires more than mere correspondence: it has to come about in the right kind of way. How exactly to define "the right kind of way" is a deep question, and it is not one that I intend to pursue here. Suffice it to say that the right kind of way involves agentive production by beings with minds at least something like ours (minds that are themselves capable of semantic representation); other methods might produce things that look very much like representation, but this resemblance is not sufficient.

Here, then, is the problem for the intelligent design advocate attempting to endorse the first horn of this dilemma: using it to demonstrate God's existence is straightforwardly question-begging, as we saw above. Arguing that DNA represents in the same sense that a map represents terrain or a portrait represents a person requires the assumption that DNA was produced agentively by beings with minds like ours; this assumption is precisely what the design-theorist wants to prove, making this line of argumentation invalid. As I said, though, there is a second horn of the dilemma that the design-theorist might instead endorse--let us return now to our blueprint metaphor and see if DNA fares any better here.

The second way we might intend to use 'blueprint B represents building X' is what we might call "the instructive sense." This is the case if building X has not yet been constructed: blueprint B represents not because it corresponds to anything in reality, but because it contains instructions for how one should proceed when constructing building X. What does it mean, though, for one thing to contain instructions for the creation of another? Consider computer programming: when I type something like the following into a compiler:

#include <iostream>

int main() {
    std::cout << "Hello World!";
    return 0;
}


am I writing instructions for the creation of something? Prima facie, this looks just like the blueprint case, but there's an important (and relevant) difference here: in typing the code into a compiler, I'm not making instructions for the program's creation, but rather creating the program itself. That is, the "code" for the program just is the program looked at in a certain way (e.g. through a decompiler); to watch someone write a computer program and then say "Well yes, I've seen you write the instructions for the program, but when are you going to make the program itself?" would make you guilty of a category mistake, it seems--again, writing the program just is writing the code. Program and code are identical. This case, I think, is instructive. We'll see how in just a moment.

Let's set aside the philosophical struggle with this second horn for a moment and remind ourselves what DNA actually is and how it works.  DNA consists of two long polymers of alternating sugar and phosphate groups joined by phosphodiester bonds.  Attached to this scaffolding, four "bases"--adenine, cytosine, thymine, and guanine--do the real "work" of DNA: adenine always pairs with thymine, and cytosine with guanine, meaning that the entire sequence of both strands can be deduced from just one given half.  DNA's primary function in the body is to regulate the creation of proteins, which in turn regulate virtually all of the body's functions.  DNA does this by creating strands of RNA through the pairing process alluded to above; units of three bases at a time on the relevant portions of the RNA (there are huge parts of our DNA that seem to play no active role in anything) then interact with cellular structures called ribosomes, which produce the corresponding proteins.  This is obviously a very basic account of how the protein creation process happens, but it should suffice for our purposes here.
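
To make the mechanical character of that pairing rule concrete, here's a minimal sketch in C++--entirely my own illustration, with made-up function names and a made-up sample sequence.  Given one strand as a string of As, Ts, Gs, and Cs, the other strand falls out of a blind lookup, with nothing doing any interpreting along the way:

#include <iostream>
#include <string>

// Complement a single base according to the pairing rules described above
// (A-T, C-G).  A purely mechanical mapping--no agent, no interpretation.
char complement(char base) {
    switch (base) {
        case 'A': return 'T';
        case 'T': return 'A';
        case 'C': return 'G';
        case 'G': return 'C';
        default:  return '?';  // not a valid base
    }
}

int main() {
    std::string strand = "ATGGCATTC";  // one given half (made-up sequence)
    std::string other;
    for (std::string::size_type i = 0; i < strand.size(); ++i) {
        other += complement(strand[i]);  // the other half is fully determined
    }
    std::cout << strand << "\n" << other << "\n";  // prints ATGGCATTC / TACCGTAAG
    return 0;
}

The point of the toy example is just that the "deduction" of the second strand is the sort of thing a causal process can do all by itself--which is exactly the feature of DNA the next paragraph leans on.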

The most salient part of the above, it seems, is the emphasis on causation.  The entire process of protein synthesis can be carried out without involving an agent at all.  In this way, DNA stands in sharp contrast to the blueprint from our earlier discussion--the sense in which we're using 'instruction' when we're discussing blueprints (at least in the second horn of the dilemma) necessarily includes a concept of conscious builders; to put it more generally, instructions must be instructions for someone to follow.  DNA, then, is somewhat more like a computer program than it is like a blueprint in the second sense: rather than being instructions for something's creation, it _just is_ that something viewed from a lower level.  Still, there is an important element of disanalogy here--to assert that DNA is just like a computer program would be to assert that it represents in the first sense we discussed, and that assumption, we saw, leads to a fallaciously circular line of reasoning.  This, I think, is the central point: the blueprint metaphor (like the program metaphor) is apt in many ways, but we must take care to remember that it is just a metaphor--while DNA and blueprints share things in common, there are important differences that prevent the two from being completely equated, no matter which sense of 'represent' we're using.

We've said a great deal here about why thinking of DNA as representing an organism is incorrect, so let's take a moment to sketch a positive account and suggest what the correct way to think about DNA might be.  Let's begin by remembering that DNA causes protein synthesis, which in turn causes other necessary organic functions.  If we keep this observation squarely in mind, DNA's metaphysics isn't all that difficult to articulate: DNA, mediated by other chemicals and environmental considerations, regulates the causal chain that leads to all the various functions we mean by the cluster-concept 'life.'  These include, but are not limited to, metabolism, reproduction, cell growth, cell regeneration, and gas exchange, among many others.  DNA is just one link--albeit a very important link--in the naturalistic chain that both causes and is constitutive of life.

As I said, I'm aware that there are a great many holes here that need to be plugged before this theory is really solid, but I think the rough outlines are clear.  Thoughts?

Wednesday, October 22, 2008

Ethics as a Social Technology

One of my classes this semester consists in evaluating my advisor's manuscript for his upcoming book. The book presents a naturalistic account of ethics, and while I can't say too much about it specifically (it's still a work in progress), I can at least say that I'm finding it rather compelling. He's concerned with showing how our evolutionary history--specifically, the development of altruism in our hominid ancestors--led to the creation and development of ethics as we know it today. In one passage from this week's reading, he drew a parallel between ethics and technology, saying that ethics (like any artifact) exists to fulfill a function--that is, it was created for a purpose. I find this idea incredibly compelling--it seems to me that thinking about ethics as a social technology is precisely the right way to frame the issue--and so I'm taking it upon myself to develop this part of his account further. I've only been thinking about it for a few hours now, so my formulation is still in its preliminary stages, but here's my thought process thus far.

First, let me say a bit about what I mean by a piece of "social technology." The paradigm case for this, I think, is language, so that's the extended metaphor I'm going to use in my discussion here. All pieces of technology have (at least) two features in common: (1) they are the products of intelligent design (no technology exists as a mind-independent part of the world--artifacts don't grow on trees), and (2) they are created to fulfill some specific function. Before we progress, let's say a bit more about how these conditions tend to be expressed.

Generally, (2) is realized by extending our natural capabilities; in their simplest forms, tools are just pieces of the environment that we use to interact with other parts of the environment in ways that our unmodified bodies could not--the most intuitive example of this is something like the use of a smooth stick to reach something (say, honey) that is inaccessible to our human hands. More sophisticated tools, of course, fulfill more sophisticated functions; the most complex tools that we've created to date actually aid us not in direct physical interaction with the world, but in cognition--computers are reliable mechanisms which, while not directly doing any "information processing," let us take shortcuts with our cognition. In short, tools are environmental changes that accomplish some function.

(1) is a bit more obvious, but I should still say a bit about it. I've argued before that there is no such thing as "natural function"--I follow Searle in saying that function is only definable relative to the beliefs and desires of some intentional agent. I'm not going to rehash this argument here (though I will have to when I do a more formal presentation of this idea), so just bear with me on this point for now. All tools have functions, functions imply users/designers, and so all tools are the products of design and/or use (I'm still not sure if these two can be made equivalent). Now on to the meat of my point here.

Language, on this definition, seems to count as a tool. Proponents of the extended mind thesis (especially Andy Clark and David Chalmers) have been counting language as a tool for quite some time, and despite other disagreements I might have with extended mind philosophy, I think they're spot-on here. Language consists in changing the environment (usually--but not always--through the production of compression waves with the vocal cords) in such a way as to communicate one's own mental states to another person. This allows for all sorts of developments that might not otherwise have been possible--it opened the way to collaboration, information sharing, and socialization--but that's not what I want to focus on here. Like any tool, language has a function--in fact, it seems to have two distinct functions: expression and communication.

By 'expression,' I mean something akin to what's happening in poetry generally (or metaphors specifically): the conveyance of emotion, tone, mood, and other non-conceptual mental states.  In this respect, language can be considered something like music--a series of sounds put together not so much to convey a concrete message per se as to communicate a set of abstract ideas.  Shakespearean language is paradigmatically expressive, it seems to me: it is flowery, beautiful, complex, metaphorical, and often designed to do more than simply express propositions.

On the other hand, language is also used for communication in a more mundane sense--that is, it is used to convey propositional attitudes about the world. This is the use with which most of us are likely more familiar: it is the way we are using the linguistic tool when we give directions, express philosophical ideas, make requests, describe things, and generally use symbols to represent the world as being a certain way. The constructed language Lojban is probably as close to a purely communicative language as we can get--it is designed to exclude ambiguity of expression as far as possible by being completely syntactically precise. It was formulated by logicians and mathematicians to express ideas about the world in the clearest and most precise way possible. Of course, this precision means that it is more difficult to formulate purely expressive sentiments in Lojban--metaphor and other poetic devices, while not impossible to use, are much more difficult to construct.

Clearly, most languages are used for both these purposes--it would be possible to do science, philosophy, and logic in Shakespearean English, and it would be possible to write poetry in Lojban--still, there are cases (as we've seen) in which a particular language is better at one and worse at the other; relative to each purpose, all languages are not created equal. Even so, most do passably well at both--it seems strange to say that English is a "better" language than Chinese. There are, however, cases where these sorts of evaluative judgments seem not only possible, but reasonable. One notable case is that of the Piraha tribe in South America. Their language, which has been extensively studied and debated, seems relatively unique among modern languages in lacking common features like recursion (the ability to embed smaller clauses in larger ones, e.g. 'Jon, who is the author of this blog, went to class, which was at Columbia University, which is on 116th Street, today'), discrete numerical terms, discrete kinship words, and so on. Many of the concepts we express on a regular basis could not be formulated in the Piraha language. If language is indeed a tool--in that it was created to fulfill a function--it seems like we can say that Piraha is, at the very least, less effective in the communicative sense. In a relevant sense, English is better than Piraha just because it does the job of language better.
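
Since 'recursion' gets used loosely, a toy sketch in C++ may help pin it down (this is entirely my own illustration, not anything from the linguistics literature): a rule that is allowed to invoke itself can embed a clause inside a clause inside a clause, with no upper bound built into the rule--exactly the kind of structure Piraha reportedly does without.

#include <iostream>
#include <string>

// A toy recursive rule: a phrase may contain a clause that itself contains
// another phrase, and so on; the depth is limited only by how far we recurse.
std::string phrase(int depth) {
    if (depth == 0) {
        return "the class";
    }
    return phrase(depth - 1) + ", which was at a place";
}

int main() {
    // depth 0: "Jon went to the class."
    // depth 2: "Jon went to the class, which was at a place, which was at a place."
    for (int depth = 0; depth <= 3; ++depth) {
        std::cout << "Jon went to " << phrase(depth) << ".\n";
    }
    return 0;
}

None of this settles the empirical debate about Piraha, of course; the sketch is only meant to make explicit what the embedding rule in the paragraph above amounts to.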

I think we can make a similar case for ethics. If we look at the history of our ethical practices, it seems clear that they arose as a result of our ancestors' increasingly social lifestyles--ethical rules and norms were created to let us live together in larger groups, and they accomplished this goal by artificially extending our naturally altruistic tendencies to more and more people. Ethics, then, like language, has two distinct functions: to maintain group cohesion, and to remedy altruism failures. Like language, it is a human-created tool that arose to accomplish socially oriented goals.

With this picture, we can have our pluralist cake and eat our relativism too--just as with language, it is perfectly coherent on this picture to see different ethical systems as competing without being superior or inferior to one another. There might be many ways to solve these two problems that wouldn't be compatible with one another, but that still solve the problems equally well. To draw another tool-related analogy, we can compare two competing ethical systems to two competing operating systems: neither Windows nor OSX is inherently superior to the other; they simply approach the computation problem differently. Still, as with language, there are cases where one is clearly better than another--both Windows XP and OSX are better than their predecessors of 15 years ago just because they discharge their functions (i.e. solve the relevant problems) better and more efficiently.

Similarly, we can say unequivocally that our ethical system is better than that of the Nazis--a Nazi ethical system just doesn't solve the social cohesion and altruism-failure problems in an effective way. A dictatorship might well keep society together cohesively, but it does so without solving the altruism-failure problem effectively. Since an ethical theory's function is to solve both these problems, we can say that Nazism is an objectively worse ethical system. Ethics, if understood as a tool, lets us make these value judgments at a meta-theoretical level--we can call two ethical systems competing but comparable if they both discharge their functions equally well but in different ways, or we can call one better than another if it discharges its functions more efficiently and effectively.

This seems to me to be precisely the right way to think about ethics and morality. I'm going to develop this further as the semester progresses, culminating in a formal presentation in my final paper for the course. I'll update this account as I solidify things more, but for now I would welcome comments and thoughts.

Oh Right, I Have a Blog!

So here I am. When we last met, it was the close of summer and I was getting ready to go from Lancaster, PA to New York City to start grad school. I made it--despite what you might have deduced from my sudden cessation of posting--and I'm now just about half way through my first semester. I couldn't possibly be happier. The downside to this, though, is that I've been so busy reading and writing that I basically forgot about this blog until just now; over the last year, it's been my only real outlet for philosophical musing, and now that I'm doing weekly writing (and daily discussion of the issues) it doesn't seem as necessary. Still, I think I'm going to try to keep it up. Now that I remember this place exists, stay tuned for more updates.

Thursday, August 14, 2008

Chess, Computers, and Crystal Balls

I've written before about the significance (or lack thereof) of Deep Blue's now-11-year-old victory over Gary Kasparov, but this is a topic that Eripsa and I invariably end up arguing over, so my recent three weeks working with him has made me think about this issue again, and I think I've come up with a few additions to my argument.

Briefly, my position is this. Contrary to what some functionalists would have us believe, Deep Blue's "victory"--while undoubtedly a great achievement in design--isn't terribly significant in any deep way. However, I also don't think Dan Dennett is right in saying that the question isn't interesting because human brains aren't unique in the first place: instead, it seems wrong to me to call what happened "Deep Blue's victory" at all, as Deep Blue was never in the game to begin with. Playing chess with a computer is no more a competitive affair than a game of tetherball is a competitive game with the pole, or a game of solitaire a competitive game with the cards. To truly participate in a game is an inherently intentional act--that is, an act that requires the ability to understand how one thing can be about or stand for another--and digital computers are fundamentally incapable of intentionality. In other words, to ascribe a victory over Gary Kasparov to Deep Blue is tacitly to treat Deep Blue as an independent agent capable of its own successes and defeats, and that doesn't seem like the right way to talk about machines.

Clearly something is happening here--that is, Kasparov is really doing something when he sits down at the chessboard with Deep Blue and its handlers--so if that something is not a game between man and machine, then what is it? While I still find the above argument (at least in its non-brief form) compelling, it occurs to me that it is a strictly negative argument--it contends that Deep Blue is not playing a game at all and so has no real "victory" over Kasparov to speak of--leaving the question of what is actually going on unanswered. It is this question I wish to try to address here.

Suppose you and I are standing in a room with hardwood floors, arguing about whether or not the floor is level. To settle the issue, I pull out a flawless crystal ball and set it carefully in the center of the room, knowing that if the floor really isn't level, the ball will roll down the incline, however slight; sure enough, the ball rolls off to the south, and we agree that we really do need to deal with that sinkhole out back. What's happened here? On a strong externalist account like Andy Clark's, I've externalized some of my cognition into a tool, letting it do the information processing for me in a way that my un-extended meat mind just couldn't: this is the position that lies at the root of the intuition that Deep Blue is an agent in itself capable of playing chess, and it is this position against which I want to argue.

Rather than somehow externalizing my cognition, it seems to me that I'm simply cleverly manipulating my environment in order to make my internal cognition more powerful. When I set the ball in the middle of the room, it is with the knowledge that--thanks to the action of some basic physical laws--one sort of result will occur if the floor is level and another sort will occur if it is not. In short: I don't know whether the floor is level, but I do know that if the floor is not level, then the ball will roll downhill; since I can certainly see the ball move, I infer that placing it in the middle of the floor is a good way to find out whether there is a tilt. The ball is not doing any information processing of its own, nor is it some kind of metaphysical receptacle for my own cognition; instead, it is just a reliable indicator that I can use to make a judgment about the environment around me.

Let's extend (so to speak) this argument to computers in general (and Deep Blue in particular), then. A digital computer is a physical system just like a crystal ball--albeit a much more complex one--so it seems that the analogy is preserved here: any apparent "information processing" done by the computer (that is, any native cognition OR extended cognition) is nothing more than a very complicated ball rolling down a very complicated hill; a computer isn't actually doing anything cognitive, it's just a physical system with a reliable enough operation that I can use it to help me make certain judgments about the environment. Given a hill the ball will--just in virtue of what it is--roll, and given certain inputs the digital computer will--just in virtue of what it is--give certain outputs. In both the case of the ball and the case of the computer the tool's interactions with the environment can be informative, but only when interpreted by a mind that is capable of consciously attaching significance to that interaction; that's all a computer is, then: a physical system we use to help us make judgments about the environment.

That still doesn't address the central question, though, of what exactly is going on in the Deep Blue vs. Kasparov game (or of what's going on when anyone plays a computer game, for that matter). Clearly Kasparov at least is doing something cognitive (he's working hard), and clearly that something is at least partially based on the rules of chess, but if he's not playing chess with Deep Blue, then--at the risk of sounding redundant--what is he doing? Perhaps he is, as others have argued, actually playing chess with Deep Blue's programmers (albeit indirectly). I've advanced this argument before, and have largely gotten the following response.

Kasparov can't actually be playing against Deep Blue's programmers, because the programmers--either individually or collectively--wouldn't stand a chance in a match against Kasparov, whereas Deep Blue was able to win the day in the end. If the competition really was between Kasparov and the people behind the design and development of Deep Blue, those people would be expected to (at least as a group) be able to perform at least as well as Deep Blue itself did in the chess match. This is an interesting objection, but one that I do not think ultimately holds water. To see why, I'll beg your pardon for engaging in a bit more thought experimentation.

I'm not much of a chess player. I know the rules, and can win a game or two against someone who is as inexperienced as I am, but those wins are as much a product of luck as anything I've done. Kasparov would undoubtedly mop the floor with me even with a tremendous handicap--say, the handicap of not being able to see the chess board, but rather having to keep a mental model of the game and call out his moves verbally. I have, as I said, no doubt that I would be absolutely annihilated even with this advantage, but we can certainly imagine a player much more skilled than I am: a player that would tax Kasparov more, and one that he would reliably be able to beat in a normal chess match, but might risk losing to were he denied the environmental advantage of being able to use the board as an aid to represent the current state of the game. The board (and who has access to it) is making a real difference in the outcome of the game--are we to say, then, that it is a participant in the game in the same way that Deep Blue is? In the case where our mystery challenger beats Kasparov, does the board deserve to be credited in the victory? It does not seem to me that it does.

Here's another example of the same sort of thing. Suppose I challenge you to an arithmetic competition to see which of us can add a series of large numbers most quickly. There's a catch, though: while I can use a pen and paper in my calculations, you have to do the whole thing in your head. You'd be right to cry foul at this, I think--the fact that I can engage in even the rudimentary environmental manipulation of writing down the figures as I progress through my addition gives me an enormous advantage, and might allow me to win the contest when I otherwise would have lost. This is true in just the same way that Kasparov might lose a chess game to an "inferior" opponent if that opponent were able to manipulate the environment to aid him in a way that Kasparov was not (say, by using a chess board to help keep track of piece positions).

I suspect that most of you can now see where I'm going with this, but let me make my point explicit: Deep Blue is nothing more than a very complicated example of its programmers' ability to manipulate the environment to give themselves an advantage. Contending that Kasparov couldn't have been matching wits against those programmers just because he could have mopped the floor with them if they'd been without Deep Blue is akin to saying that because Kasparov might lose to certain players that had access to the board when he did not (even if he'd beat them handily in a "fair fight"), the board is the important participant in the game, or that I'm simply better at arithmetic than you are because I can win the competition when I have access to pen and paper and you do not.

Deep Blue is its programmers' pen and paper--the product of their careful environmental manipulation (and no one manipulates the environment like a computer programmer does), designed to help them perform certain cognitive tasks (e.g. chess) better and more quickly.  So whom was Kasparov playing chess with?  On this view, the answer is simple and (it seems to me) clearly correct--he was playing against the programmers in the same sense that he would have been if they'd been sitting across the board from him directly; he just had a disadvantage: they were a hell of a lot better at using the environment to enhance their cognition than he was.

Tuesday, August 5, 2008

Hey Look, Irony!

Shortly after finishing that last post about how awesome technology is, my laptop descended into its watery grave--that is, I spilled rum-laced Vitamin Water all over it. It is, needless to say, currently nonfunctional. I'm only at CTY for another few days (I'm posting this from Eripsa's laptop), but don't expect to see anything new until at least next week, when I will be gloriously reunited with my desktop. If anyone wants to contribute to the dirt-poor-philosophy-grad-student-laptop-repair-fund, feel free!

Friday, August 1, 2008

100th Post - An Ode to Technology

This is the 100th post on this blog, and I'm pretty happy about it. As I said in the very first post, I've tried keeping a blog before, and it's never really worked out as well as it has here. I think it is a fitting celebration, then, to talk a little bit about technology.

I'm 2/3 of the way through my second CTY session, and this time I'm teaching philosophy of mind with Eripsa, who, despite being dreadfully wrong about consciousness, is an all-around awesome dude. He works primarily on the philosophy of technology, a disappointingly underrepresented field that deals with questions like "what is the ontological status of a tool," "what is necessary to create an artificial mind," and "how does technology influence human thought?" He does a lot of really interesting work (particularly on robots), so I encourage you to go check out his blog.

Anyway, being around him inevitably gets me thinking even more about technology than I usually do (which is saying something)--I'm particularly interested in that last question I posed above, though: how does technology influence human thought? Eripsa wants to follow Andy Clark and David Chalmers in endorsing the strong-externalist extended mind thesis, which claims that there is a relevant sense in which my cognition and mental states (including beliefs) spend a lot of time in the external world. Their paradigm case for this is that of Otto, a hypothetical Alzheimer's patient who, in lieu of using his deteriorating biological memory, writes down facts in a notebook, which he carries with him at all times. Clark claims that when Otto consults his notebook for a fact (e.g. the location of a restaurant he wants to go to), the notebook is serving as a repository for his beliefs about the world in just the same way that my (or your) biological memory does; that is, his belief about the location of the restaurant is literally stored in the external world.

This thesis seems fraught with problems to me, but that's not the point I want to make (at least not in this post). While I think that Clark (and by extension Eripsa) is wrong about the ontology of technology (Otto's notebook is supposed to stand for a whole host of technological "extensions" of our biological minds into the world), I think he's precisely right about its importance in a cognitive sense. Human beings are, by their very nature, tool users; it's a big part of what makes us human. Of course other primates (and even some birds) can use--or even manufacture--tools to accomplish certain tasks, but nothing else in the known natural world comes even close to doing it as well as humans do. Technology use is a part of who we are, and always has been; we created language as a tool to manipulate our environment, learning to create compression waves in the air for the purpose of communicating our ideas to each other, and in the process beginning the long, slow march toward the incredibly sophisticated tools we have today--tools like the one you're using right now.

Language might have been our first tool--and perhaps it is still our best--but in recent years the computer (and more specifically the Internet) has proven to be one of our most important in terms of cognition. I've argued before that the advent of the information age should herald a radical change in educational strategy, but I want to reiterate that point here. Today's kids are growing up in a world where virtually any fact they want is immediately and reliably accessible at any time. I'd say that at least 1/3 of the kids I'm teaching at CTY--and these are 12-15 year olds--have Internet-enabled cell phones that they keep on their person at all times; this is a very, very big deal, and our educational strategy should reflect it.

100 years ago, a good education was an education of facts. Students memorized times-tables, theorems, names and dates, literary styles, and an endless list of other factual statements about the world, because that's what it took to be an "educated citizen." Information was available, but it was cumbersome (physical books), difficult to access (most areas didn't have high-quality libraries), and generally hard to come by for the average citizen--even an educated one. The exact opposite is true today--students don't need to memorize (say) George Washington's birthday, because they can pull that information up within seconds. This frees up an enormous "cognitive surplus" (to borrow Clay Shirky's term) that can be used to learn _how to analyze and work with facts_ rather than memorize the facts themselves.

I've postulated before that the so-called "Flynn Effect"--that is, the steadily increasing IQ of every generation since the close of the 19th century--might be due to the increasing availability of information, and thus the increasingly analysis- and abstraction-oriented brain of the average citizen. If I'm right, we're going to see a huge leap in the IQ of this generation, but only if we start to educate them appropriately. We need a radical emphasis shift as early as the kindergarten classroom; students need to be taught that it's not what you know, but how well you can work with the almost infinite array of facts that are available to you. The spotlight should be taken off memorizing names and dates, facts and figures, and focused squarely on approaches to thinking about those facts and figures. Today's child is growing up in a world where he is not a passive consumer of information, but rather an active participant in the process of working with information, in a way that humans have never been before.

This leads me to my final point, which is that you should all go read this speech by Clay Shirky, author of the book Here Comes Everybody. It's very, very well articulated, and makes exactly the kind of point I'm driving at here. Snip:

I was having dinner with a group of friends about a month ago, and one of them was talking about sitting with his four-year-old daughter watching a DVD. And in the middle of the movie, apropos nothing, she jumps up off the couch and runs around behind the screen. That seems like a cute moment. Maybe she's going back there to see if Dora is really back there or whatever. But that wasn't what she was doing. She started rooting around in the cables. And her dad said, "What you doing?" And she stuck her head out from behind the screen and said, "Looking for the mouse."


Here's something four-year-olds know: A screen that ships without a mouse ships broken. Here's something four-year-olds know: Media that's targeted at you but doesn't include you may not be worth sitting still for. Those are things that make me believe that this is a one-way change. Because four year olds, the people who are soaking most deeply in the current environment, who won't have to go through the trauma that I have to go through of trying to unlearn a childhood spent watching Gilligan's Island, they just assume that media includes consuming, producing and sharing.


It's also become my motto, when people ask me what we're doing--and when I say "we" I mean the larger society trying to figure out how to deploy this cognitive surplus, but I also mean we, especially, the people in this room, the people who are working hammer and tongs at figuring out the next good idea. From now on, that's what I'm going to tell them: We're looking for the mouse. We're going to look at every place that a reader or a listener or a viewer or a user has been locked out, has been served up passive or a fixed or a canned experience, and ask ourselves, "If we carve out a little bit of the cognitive surplus and deploy it here, could we make a good thing happen?" And I'm betting the answer is yes.

I'm betting the same. Thanks for reading, and here's to the next 100 posts.

Sunday, July 20, 2008

Potential Course List

I'm getting my affairs in order to start at Columbia this Fall, and have assembled the following class list. I haven't registered yet, but I have the (perhaps wildly incorrect) perception that grad students are rarely unable to get into classes of their choice. In any case, here's my dream schedule (along with course descriptions and some commentary) for Fall 2008:

EVOLUTION, ALTRUISM, AND ETHICS
Day/Time: W 11:00am-12:50pm
This seminar will elaborate and examine a naturalistic approach to ethics, one that views contemporary ethical practices as products of a long and complex history. I am currently writing a book presenting this form of naturalism, and chapters will be assigned for each meeting after the first. Using brief readings from other ethical perspectives, both historical and contemporary, we shall try to evaluate the prospects of ethical naturalism.
Open to juniors, seniors, and graduate students.
I'm particularly excited about this one. In my early undergraduate days, I specialized in ethical issues, but I found all of the existing ethical theories dreadfully unsatisfying, and came to suspect that if we were going to get a plausible naturalistic account of ethics, we needed a more thorough understanding of how the mind and brain worked--hence the switch to mind. This class sounds right up my alley, though, and I'm always excited to hear naturalistic defenses of philosophical concepts.


1st YEAR PROSEMINAR IN PHILOSOPHY
Day/Time: W 6:10pm-8:00pm
This course, which meets only for the first seven weeks of term, is restricted to, and required for, first-year Columbia Ph.D. students. The course aims to promote weekly writing by each student. A paper, or section of a book, with which every philosopher ought to be familiar, will be selected each week, and one student will make a presentation on that target paper, while the others will hand in a brief essay about it. Essays will be returned, with comments, before the next meeting of the seminar. Each week a different member of the faculty, in addition to Professor Peacocke, will participate in the discussions. A second seven-week segment of the ProSeminar will be held in the Spring Semester of 2009.

What can I say? It's the Pro-Seminar, so I have to take it. Still, it could be really good--the single most productive (in terms of bettering me as a philosopher) course I took at Berkeley was the "Introduction to Philosophical Methodology" class--just like this one, it was aimed at getting students writing every week. My hope is that this class will be a more rigorous and intense version of that one, and that I'll really have a chance to sharpen my writing considerably.


ADVANCED TOPICS IN THE PHILOSOPHY OF MIND
Day/Time: W 2:10pm-4:00pm
This seminar will be concerned with the interactions between the theory of intentional content and thought on the one hand, and metaphysics on the other. We will first discuss the role of truth and reference in the individuation of intentional content. We will then draw on that role in discussing the following issues: the nature of rule-following and objectivity in thought; transcendental arguments and objective content in thought and in perception; the general phenomenon of relation-based thought, and its extent, nature and significance; the nature of subjects of consciousness, self-representation and first person thought.


Mind is my specialty, so this was an easy choice. I'm not entirely clear on what exactly this course description is talking about (which is a good thing), other than that the class seems to deal with intentional content and how it relates to external objects, which is a topic I'm very much interested in.

Serendipitously, all these classes are on Wednesday, so I'd be in class only one day per week, which would be pretty nice. I'm sure I'm going to have a lot of writing to do outside of class (and Fallout 3 is coming out soon, too...), so not having to make the commute to campus every day will be welcome.

Saturday, July 19, 2008

Some Normative Epistemology

If you ask most philosophers, they'll tell you that there are (roughly) four main branches of philosophy: metaphysics, epistemology, ethics, and logic. These (again, roughly) correspond to the questions: "What's out there?" "How do you know?" "What should I do about it?" and "Can you prove it to a math major?" Tongue-in-cheek definitions aside, metaphysics deals with questions relating to the nature of reality, the existence of various entities, properties of those entities, and the ways in which those entities interact. "Does God exist?" and "How do the mind and brain relate?" are both metaphysical questions. Epistemology deals with knowledge claims and how humans go about knowing things in the first place--"What is an appropriate level of evidence to require before changing a belief?" and "How can we be sure that our senses are reliable?" are both epistemic questions. Ethics deals with questions of right and wrong (metaethics) and how we ought to live our lives (normative ethics). "What moral obligations do we have to our fellow man?" is the canonical ethical question. Logic sort of flits in and out of the other disciplines, popping its head in to be used as a tool (or to confound otherwise plausible theories) in any and all of the above, but it also has some questions of its own. Modal logic deals with necessity and contingency, and asks questions like "What does it mean for some truth to be true necessarily rather than by chance, or contingently?"

This blog deals mostly with metaphysical questions, but I had a very interesting discussion about epistemology with a colleague the other day, and I want to relate some of the highlights here and (hopefully) get some comments.

The discussion revolved mostly around what counts as a good reason for believing some proposition, but I want to specifically focus on an epistemic maxim of my own invention: "we ought to be more skeptical of propositions we wish to be true." Let me give a brief summary of the thought process that led me to adopt this maxim.

First, I take it as axiomatic (that is, not requiring proof of its own) that holding true beliefs about the world is a good thing, and that holding false beliefs is a bad thing--I don't mean 'good' and 'bad' in any kind of moral sense here, only that, in general, the accumulation of true beliefs and the expunging of false beliefs (i.e. the search for truth) is a goal that can be valued in itself, and not necessarily for any pragmatic results it might deliver (though it certainly might deliver many). If you don't agree with that, feel free to shoot me an argument in the comments, and I'll do my best to address it.

With that axiom in place, then, it seems reasonable that we should do whatever we can to avoid taking on new false beliefs, as well as strive to take on as many true beliefs as we can. That last part is important, as it saves us from going too far down the path of Radical Skepticism. If we were to adopt something like Descartes' method of doubt in his Meditations--that is, adopt the maxim "we should withhold assent from any proposition that is not indubitable just as we would any proposition that is clearly false"--we would certainly minimize the number of false beliefs we would take on, but at the expense of likely rejecting a large number of true ones. Radical Skepticism results in too many "false epistemic negatives," or "avoids Scylla by steering into Charybdis," as another colleague said. To continue the metaphor, it also seems too dangerous to stray toward Scylla, lest I simply believe every proposition that seems prima facie plausible--too far in the direction of naive realism, in other words. While I certainly consider myself a naive realist in the context of perception--I think that the way our senses present the world to us is more-or-less accurate, and that when I (say) perceive a chair or a tomato, I really am perceiving a chair or a tomato, and not my "sense datum," "impression" or any other purely mental construct assembled by my mind--I think we ought to be somewhat more skeptical when it comes to epistemology in general.

My colleague pressed me for the exact formulation of my response to the question at hand ("what counts as a good reason for forming or changing a belief?"), but I demurred, and on further reflection--both then and now--I'm not sure I can give a single answer. Rather, it seems to me that there are (or at least ought to be) a variety of heuristics in our "epistemic toolbox" that either raise or lower "the bar of belief" in various circumstances. "Naive realism" is shorthand for a cluster of these heuristics, including (for instance) "We should be more skeptical of propositions that would have the world operating in a way that is radically different from how it seems."¹ I'm most interested right now in the general heuristic mentioned above, though: "we should be more skeptical of propositions we wish to be true." So let's continue with our justification of it.

I'm not a perfect reasoner; unlike, say, Laplace's Demon, it is possible for me to make a mistake in my reasoning--indeed, it happens with alarming frequency. These errors can take many forms, but they include assenting to arguments which, though they might seem sound to me, in reality either lack true premises or are somehow invalid. If I strongly desire some proposition p to be true--if, for example, a close family member is in a coma and I hear about an experimental new treatment that might allow him to awaken with full cognitive faculties--I am more likely to make these errors of judgment, as I will not necessarily apply my critical faculties with the same force as I would to another proposition p1 on which such strong hopes were not resting. My colleague objected that I would, given enough care, certainly be aware of when this was happening, and could take more care in my reasoning to ensure that this result did not occur, but I am not so certain: a corollary of the fact that I am a fallible reasoner seems to be that I might not always know when my reasoning is faulty. It is no solution, therefore, to say "we need not universally require a higher standard of proof for propositions we wish to be true, we just need to be sure that our reasoning is not being influenced by our desires," as it is possible--in just the same sense that it is possible for me to make a mistake in my reasoning--that I might make a mistake in evaluating that reasoning itself, no matter how much care I take to be certain that my desires do not influence my judgment.

What do I mean by "skeptical," then, if not a more careful logical rigor? It seems to me that whenever I am thinking clearly (i.e. I am not drunk, asleep, distracted, etc.) and applying my logical faculties to the best of my ability (i.e. critically questioning my beliefs or trying as hard as I can to puzzle out a problem)--as I should be when I am seriously considering adopting a new belief or changing an existing one--I am already being as rigorous as I possibly can be; unless, for some reason, I have already lowered the "bar of belief" in a specific instance (e.g. suspending disbelief while watching an action movie), I should normally be as logically rigorous as I can be. If I'm critically examining some belief that I greatly wish to be true, then, I should not only be as logically rigorous as I can be--that is, set the bar of belief where I normally do--but also factor in the possibility that my desire might be affecting my logical reasoning--might be lowering the bar without my knowledge--and so require more evidence than I otherwise would: that is, I ought to be more stubborn about changing my position. By "skeptical" here, then, I just mean "requiring of more evidence," in the same way that if I'm skeptical of a student's claim that her computer crashed and destroyed her paper, I will require more evidence attesting to the truth of it (a repair bill, maybe) than I normally would; her claim to that effect counts as at least some evidence, which might be enough if I had no reason to be skeptical.

Let me make my point briefly and clearly. When making desire-related decisions--particularly when deciding whether to assent to a proposition I wish to be true--the possibility that my desire might negatively affect my reasoning, combined with the fact that I might not be aware of this negative effect, means that I ought to apply my normal reasoning faculties to the best of my ability and require more evidence in favor of the proposition than I normally would.

Even more succinctly: we ought to be more skeptical of propositions we wish to be true.



Thoughts? Does this make sense? What standards do you apply when trying to make up your mind about your beliefs in general?

1. This is not to say that propositions which say that the world operates in radically different ways than it seems to us are always (or even usually) going to be false--the atomistic theory of matter, relativity, and quantum mechanics are all theories which seem to be at least mostly true, and which describe the world as being in fact much different than it seems. My point is that we should hold claims like this to a higher epistemic bar before assenting to them than we would claims like (say) "there is a tree outside my window," which correspond with reality as it seems to us.

Edit: Because this discussion took place in class with my co-instructor, and because the kids all have cameras all the time, we get a picture of me thinking about this point and a picture of me arguing it with him. Enjoy.

Friday, July 18, 2008

Searle on Derrida and Deconstruction

Frequent readers will undoubtedly know that I have an ongoing cold war with postmodernism, poststructuralism, deconstructionism, or whatever you want to call the very confusing (and often very nonsensical) "philosophical" position that seeks to "deconstruct" philosophy, science, and rationality in general, revealing them as "social constructs" or--even worse--"mere textualities." The claims espoused by proponents of these positions include such gems as "reality is a text," "truth is a kind of fiction," and (my personal favorite) "what we think of as the innermost spaces and places of the body—vagina, stomach, intestine—are in fact pockets of externality folded in."

This philosophical style (and I use the term loosely) is exemplified by Jacques Derrida, a French "philosopher" who is commonly credited with having founded the field. His writing, as far as I've seen, is spectacularly confused and cloaked in so much obfuscation and deliberately obscure language as to be almost unreadable, either in French or in translation. He, like most other proponents of his field, is fond of masking his almost universally ridiculous claims in language that makes them seem profound--he could have been the very subject that Nietzsche (no bastion of clarity himself) had in mind when he said "Those who know they are profound strive for clarity. Those who would like to seem profound strive for obscurity." Here's an example from Writing and Difference, just to give you a taste of his style:

The entire history of the concept of structure, before the rupture of which we are speaking, must be thought of as a series of substitutions of centre for centre, as a linked chain of determinations of the centre. Successively, and in a regulated fashion, the centre receives different forms or names. The history of metaphysics, like the history of the West, is the history of these metaphors and metonymies. Its matrix [...] is the determination of Being as presence in all senses of this word. It could be shown that all the names related to fundamentals, to principles, or to the centre have always designated an invariable presence - eidos, arche, telos, energeia, ousia (essence, existence, substance, subject), transcendentality, consciousness, God, man, and so forth.
If you find yourself thinking "well, that doesn't really say anything at all!" congratulations: you're sane. The central thrust of my objection to the entire deconstructionist thesis is (briefly) just this: it isn't accurate to portray all of reality as a "text" to be interpreted, as a "social construct," or as a relative phenomenon. There is, in fact, a difference between a word and the thing it refers to, between subjectivity and objectivity, and between truth and fiction. When I make a statement like "there is a tree outside my window," I'm making a claim about how the world really is--a claim that, depending on various facts about the real world, is either going to be true or false. It isn't a matter of interpretation, opinion, or "textual construction" (whatever that even means), and cloaking these sorts of inanities in sophisticated (or deep-sounding) language isn't going to change that basic fact.

I'm not going to continue with my critique here, because John Searle did a much better job than I ever could. The depth and ferocity of his attack on Derrida, his disciples, and deconstructionist ideology in general is breathtaking in its effectiveness. Snip:

What are the results of deconstruction supposed to be? Characteristically the deconstructionist does not attempt to prove or refute, to establish or confirm, and he is certainly not seeking the truth. On the contrary, this whole family of concepts is part of the logocentrism he wants to overcome; rather he seeks to undermine, or call in question, or overcome, or breach, or disclose complicities. And the target is not just a set of philosophical and literary texts, but the Western conception of rationality and the set of presuppositions that underlie our conceptions of language, science, and common sense, such as the distinction between reality and appearance, and between truth and fiction. According to Culler, "The effect of deconstructive analyses, as numerous readers can attest, is knowledge and feelings of mastery" (p. 225).

The trouble with this claim is that it requires us to have some way of distinguishing genuine knowledge from its counterfeits, and justified feelings of mastery from mere enthusiasms generated by a lot of pretentious verbosity. And the examples that Culler and Derrida provide are, to say the least, not very convincing. In Culler's book, we get the following examples of knowledge and mastery: speech is a form of writing (passim), presence is a certain type of absence (p. 106), the marginal is in fact central (p. 140), the literal is metaphorical (p. 148), truth is a kind of fiction (p. 181), reading is a form of misreading (p. 176), understanding is a form of misunderstanding (p. 176), sanity is a kind of neurosis (p. 160), and man is a form of woman (p. 171). Some readers may feel that such a list generates not so much feelings of mastery as of monotony. There is in deconstructive writing a constant straining of the prose to attain something that sounds profound by giving it the air of a paradox, e.g., "truths are fictions whose fictionality has been forgotten" (p. 181).


The direct target of his attack is a book by Derrida's disciple Jonathan Culler, but the criticisms hold true for Derrida himself, as well as for much of the deconstructionist movement in general. If you are--like me--inclined to view academia in general (and philosophy in particular) as a project that aims to get at clear and rational truth about an objective world, I urge you to read Searle's criticisms: they are spot on.

Link.

Thursday, July 17, 2008

Escaping the Amish

I'm currently in Lancaster, PA, teaching CTY at Franklin & Marshall College. Lancaster, for those who aren't from around here, is the heart of Amish country--you can't really go out in public without encountering at least a few of them wandering around, even in the mall (which seems kind of strange to me). As both an atheist and somewhat of a Server Monk, the Amish have always kind of baffled me--they're more or less the opposite of everything I stand for--but I've always considered them one of the more tolerable (if odd) religious sects; at the very least, they seem peaceful, and choose to eschew those they disagree with rather than clash with them violently.

This perception, widespread as it may be, is apparently not entirely accurate. I came across an interview today with Torah Bontrager, a 28-year-old woman (and recent Columbia graduate!) who "escaped" from the Amish when she was 15. The picture she paints of Amish life contradicts the gentle, tolerant, pastoral image we're usually presented with. Snip:

For as long as I can remember, I had always envisioned a life such that wouldn’t be compatible with the Amish religion and lifestyle.

I loved learning, and cried when I couldn’t go back to school the fall after graduating from Amish 8th grade. The Amish do not send their children to formal schooling past 8th grade. A Supreme Court case prevented forcing Amish children into high school on grounds of religious freedom. I knew that, by US law, I wasn’t considered an adult until eighteen. I didn’t want to wait until then to go to high school.

[...]


The Amish take the Bible verse “spare the rod and spoil the child” in a literal sense. Parents routinely beat their children with anything from fly swatters, to leather straps (the most typical weapon), to whips (those are the most excruciating of), to pieces of wood.

[...]

One of my acquaintances stuttered when he was little and his dad would make him put his toe under the rocking chair, and then his dad would sit in the chair and rock over the toe and tell him that’s what he gets for stuttering.

Even little babies get abused for crying too much during church or otherwise “misbehaving.” I’ve heard women beat their babies — under a year old — so much that I cringed in pain.

Neat, eh? Though Torah is careful to stress that she was raised in what's called an "Old Order Amish" community (apparently the Anabaptist equivalent of the Hasidim), I suspect that "normal" Amish life isn't all sunshine and horse-drawn buggies either. In any case, it's a compelling story, and she tells it with an intense, religious fervor that is almost certainly a by-product of her first 15 years. She's apparently got a book forthcoming--I look forward to it.

Part 1
Part 2

Wednesday, July 16, 2008

Philosophical Army

As some of you know, I'm currently teaching Philosophy of Mind with Johns Hopkins' Center for Talented Youth. Between classes today, I spotted a group of my students marching in formation with one in the lead shouting "SUPER SPARTANS, WHAT IS YOUR PROFESSION?" and the rest calling out (in perfect unison) "DISPROVING BEHAVIORISM! AH OOO! AH OOO! AH OOO!" It was delightfully geeky, and I thought some of you out there might appreciate it.

Thursday, July 10, 2008

Biological Naturalism vs. Functionalism

Derek asks:

Can you please clearly distinguish between Biological Naturalism and Functionalism? I don't get the difference. I thought a Functionalist basically said that the mind was what the brain did, like digestion is what a stomach does. So how are the schools of thought different?
I certainly can. In order to really get clear on what we're talking about, though, I think I need to say a little bit about the history of philosophy of mind. In the early-to-mid 20th century, the "in vogue" idea about how the mind and the body related was logical behaviorism. The logical behaviorist thesis, briefly stated, is that any talk about mental states (e.g. pains, tickles, beliefs, desires, etc.) is really reducible to talk about behaviors and dispositions to behave--if Jon believes that it is raining, that just means that Jon is disposed to behave in a certain way (to carry an umbrella if he wants to stay dry, to close the windows of his car, to turn off his sprinklers, etc.), and if Jon is in pain, that just means that Jon is disposed to behave in another certain way (to retreat from the stimulus causing the pain, to say 'ouch,' to writhe around on the floor, etc.).

There are obvious problems with this--Hilary Putnam, for one, raises the logical possibility of a race who have pain-experiences without any disposition to pain behavior as evidence that mental statements are not logically identical to behavioral statements--but the one I find most telling is that it seems impossible to reduce intentional (in the technical sense) language into behavioral disposition language without simultaneously introducing another intentional term. To carry on with the above example, while it might be right to say that "if Jon believes it is raining he will carry an umbrella," that statement only seems true if Jon also has a desire to stay dry; similarly, Jon's desire to stay dry can only be translated into a behavioral disposition to wear galoshes if he believes that it is wet outside. This problem doesn't arise for all mental terms, but the fact that it arises for even one is enough to destroy the behaviorist thesis--the notion that all mental states are logically reducible to statements about actual or potential behaviors is false.

With the death of logical behaviorism, a new doctrine--Functionalism--arose to captivate the philosophical profession, this one based on a simple idea: what if the brain just is a digital computer, and our minds just are certain software programs? Whereas behaviorism is concerned only with the system's (i.e. your) inputs and outputs, Functionalism is also concerned with the functional states that combine with inputs to produce given outputs. On this view, mental states are really just certain functional states in the complex Turing Machine that is our brain, and those mental states (including consciousness as a whole) are defined strictly in terms of function--there's nothing special about my mind, and (given the right programming and sufficiently powerful hardware) there's nothing stopping me from creating a functional equivalent of it implemented in silicon rather than in "meat."

To put it more precisely, Functionalism defines everything in terms of its place in the complex causal structure that is my brain; rather than ignoring what's going on in the "black box" of the brain (as a behaviorist would want to), a functionalist will admit that mental processes are essential for the system to function as it does, but will deny that there is anything essentially "mental" about those processes; a computer program with the same set of causal states, inputs, and outputs as my brain would, on this view, have a mind by definition, as all it means to have a mind is to have a system that functions in a certain way; how that system is implemented doesn't matter.

This point is easier to see in simpler cases, so let's take the case of an adding machine. There are many different possible ways that we could "realize" (that is, implement) a machine to add numbers: my pocket calculator, MATLAB, this awesome device, and an abacus will all get the job done. Functionally, all these devices are equivalent--though they're instantiated in different forms, they have internal states that are directly analogous and, in the long run, produce the same functionality across the board. The brain, on this view, is just one implementation of "having a mind," and anything (say, a digital computer running a very complex program) could, given the right functional states, also be said to have a mind.
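
To make the multiple-realizability point a bit more concrete, here's a minimal sketch--in Python, chosen purely for illustration; the function names are my own and nothing in the argument hangs on the details--of two toy "adders" whose internals are quite different but whose input-output profiles are identical:

    def add_builtin(a, b):
        # Realization 1: lean on the machine's native arithmetic.
        return a + b

    def add_by_counting(a, b):
        # Realization 2: count up one step at a time, abacus-style.
        # (For simplicity, this sketch assumes b is a non-negative integer.)
        total = a
        for _ in range(b):
            total += 1
        return total

    # From the outside the two are indistinguishable: same inputs, same outputs.
    for x, y in [(2, 3), (10, 0), (7, 5)]:
        assert add_builtin(x, y) == add_by_counting(x, y)

For the functionalist, that sameness of functional profile is what matters; whether the adding is carried out by silicon, beads on a rod, or neurons simply drops out of the story.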

Biological Naturalism (BN) rejects this last point. Those of us who endorse BN (or something like it) point out that defining the mind purely in terms of functional states seems to leave something vital out--the qualitative character of consciousness, as well as its capacity to represent (that is, to have intentionality). Searle's Chinese Room argument is supposed to show exactly where Functionalism goes wrong: though the behavior of the Chinese Room system as a whole is functionally identical to that of a human who speaks Chinese, there seems to be something important missing from the Room--understanding, or semantics. Our minds, then, have to be defined by something other than their functional roles, since a system with functionally identical states seems to be missing both intentionality and the subjective character of experience--two defining characteristics of minds like ours.

BN proposes a simple, intuitive, and (it seems to me) correct answer to the mind/body problem: consciousness exists as an ontologically irreducible feature of the world--it can't be eliminated in the way that rainbows can be dismissed as illusory--yet it is entirely caused by and realized in neuronal processes. Statements about mental events--beliefs, desires, pains, tickles--say something true and irreducible about the organism and can't be reduced to talk of brain states without the loss of something essential: the qualitative character of consciousness. The analogy with digestion--while not exact, as there's no essentially subjective character to digestion--is instructive here: consciousness is just something the brain does, in much the same way that digestion is just something the stomach and intestines do.

That's a rather brief characterization, and if you want a more detailed account, I urge you to read Searle's latest formulation here. It's not without problems, and I'm working on a modified account that I think is better able to deal with certain objections, but it's a great place to start. I hope that answers your question, Derek!

Wednesday, July 9, 2008

Damn You, Obama

I'm pissed, so I'm going to rant a little bit. Obama voted today to continue the warrantless wiretapping without FISA oversight, and to give telecom companies legal immunity for helping in this Constitutional violation. It seems that as soon as he landed the nomination, all pretense of being "different" was gone. "Change we can believe in" my ass.

He preyed on people's desire to believe that things could be different and better; it doesn't get any lower than that. This is going to have wide-ranging implications for our democratic system, too--a lot of young people rallied behind Obama as the first real candidate they could believe in; now that he's turned out to be just another politician, many of them will be a lot less likely to participate in the political process in the future. If someone actually comes along who REALLY DOES represent change, the fact that this shyster sold us a line of bullshit is going to make it harder for him to make his case. In case you can't tell, I'm monumentally angry about this--he fooled me, and if that costs the Democrats this election, that's something I won't easily forgive.

He's also alienating his base. I was 100% behind him during the primaries--I donated money to the campaign, talked other people into supporting him, and generally held him up as a candidate who really might have a shot at fixing our broken system. I'm not sure if he realizes this, but the grassroots (mostly online) far-left progressives are the ones who really created his momentum in the first place. At the outset of the race, Hillary was the presumptive nominee--both from the party's perspective, and from the mass media's. Obama's message of hope and change resonated with those of us who are sick to death of the status quo, of the backstabbing "compromises" made in the name of getting votes and getting power, and of the general "play the game" sentiment that most politicians have. Obama presented himself as a fresh, young, idealistic and--most of all--truly progressive candidate who would remain above the sordid "you scratch my back and I'll scratch yours" world of Washington politics, and for a time it really seemed like he was. As soon as he became the presumptive nominee, though, all that vanished--he shifted radically toward the center, and started to pander to various groups just in the name of getting votes. His chance to win this election for the Democrats rested squarely on his image of REAL change in the White House--not just a switch from Right-Center to Left-Center, but a fresh vision; he represented a message of hope that not every election had to be a pseudo-choice between the puppet on the right and the puppet on the left. He's now working steadily to destroy that image, and whether it's because he never was the candidate he purported to be or because he's getting (and following) some very bad advice about "what he needs to do to win," turning into just another politician might well lose this for the Democrats, which is something we cannot afford.

So much for hope. Damn you, Barack Obama.