Saturday, November 29, 2008

Brief Musing on Philosophy and Professionalism

A few days ago, a former student of mine sent me a link to a conversation she'd been having on a Facebook message board.  The topic had to do with whether philosophers are born or made (through education, not in labs), but it had devolved into a disagreement about the role lay-people should take in philosophical discourse--my former student was basically arguing that anyone with a good mind can be a philosopher, and others were attacking her by claiming that being a philosopher requires specialized training (i.e. a doctorate), and that non-professionals can't lay claim to the title.  I think that's crap, so I posted a quick response, which I have reproduced here for those who might be interested.  It's relatively self-contained, except for one reference to my student by name ("Katelin").  Enjoy.

There's a popular confusion, I think, between 'professional philosopher' and 'person who thinks in logical and rigorous ways.' It's certainly true that no individual can simply declare himself a philosopher in the Leiterrific sense of the term--that takes years of specialized training and a good measure of talent to achieve. However, this should not be taken to imply that only those who have been anointed by the right people can honestly call themselves philosophers, or claim to be engaged in a philosophical project. In this respect, I think Katelin is absolutely right, and I think this pernicious elitism is doing damage to the intellectual discourse that is at the heart of the profession.

Remember that the idea of a 'professional philosopher' is a relatively new one (at least on a wide scale)--the Academy didn't really start to flourish as the center for philosophical discourse until the 19th century. Before that, philosophy was primarily done by people who likely wouldn't have considered themselves 'professional philosophers': clergy, scientists, mathematicians, and intelligent lay-people were all part of the philosophical discourse. The shift away from philosophy as a matter of public interest and concern and toward an insular and increasingly obscure clique of professionals has not been hailed by all as a positive change; many of us who consider ourselves part of the profession still hold to Russell's maxim that philosophy essentially concerns matters of interest to the general public, and that much value is lost when only a few professionals can understand what is said. Excluding people from the discourse because they lack the proper credentials or pedigree is not going to make philosophy better, but only cut it off from what should be its essential grounding: the everyday reality in which we all live. Remember that even Peirce--widely regarded as a giant of American Pragmatism--couldn't hold down an academic job; his contribution to the field of philosophy is not lessened by this fact.

There are still people today who are doing substantive (and interesting) philosophical work, but who are not tenure track philosophers at research universities--Quee Nelson comes to mind immediately as an exemplar, but there are certainly others as well. If philosophy consists just in a dance wherein the participants throw obscure technical terms back and forth at each other, then only professionals can be philosophers. If, however, it consists in careful, reasoned, methodical thinking about the nature of reality, then anyone with the drive and intelligence can be a philosopher.

Who, then, should claim the title? I'm inclined to think that like 'hacker,' 'philosopher' is not a title that one should bestow upon oneself, but rather something that should represent some degree of recognition by the others in the field--if you show yourself able to think carefully and analytically about conceptual questions, then you're a philosopher in my book. That doesn't mean I think your answers to those questions are correct, though.

Wednesday, November 26, 2008

Next Semester's Schedule

I can hardly believe it, but next week I'll be finished with my first semester here at Columbia.  I'm planning on posting a retrospective discussing my initial impressions of graduate school then--I'm still writing final papers right now--but in the meantime, here are the courses I'll be taking in the Spring, for any who are interested.


The logic of inquiry in natural sciences: substantive as well as methodological concepts such as cause, determination, measurement, error, prediction, and reduction. The roles of theory and experiment. 

This should be fun.  It's being taught by a professor that I've gotten to know reasonably well this semester, and have enjoyed working with immensely (Jeff Helzner).  I'm very interested in the philosophy of science, but have never had an opportunity to take a formal course on the subject.  I'm looking forward to rectifying that.


A survey of the various attempts to reconcile the macroscopic directionality of time with the time-reversibility of the fundamental laws of physics. The second law of thermodynamics and the concept of entropy, statistical mechanics, cosmological problems, the problems of memory, the possibility of multiple time direction. 

This course is being taught by David Albert, who achieved minor celebrity status a few years back because of his participation in the rapturously terrible "pop metaphysics" film What The Bleep Do We Know?!, which was, in Prof. Albert's words, "wildly and irresponsibly wrong."  The film purported to explore the connection between quantum mechanics, spirituality, and free will, but more-or-less just ended up as propaganda for J.Z. Knight's cult.  I've been toying with the idea of trying to pick up an MA in the Philosophical Foundations of Physics (which Columbia offers) while I'm here, and this class will hopefully give me an idea as to whether or not that's a good idea.


Parts, wholes, and part-whole relations; extensional vs. intensional mereology; the boundary with topology; essential parts and mereological essentialism; identity and material constitution; four-dimensionalism; ontological dependence; holes, boundaries, and other entia minora; the problem of the many; vagueness.

This one's taught by quite possibly one of the coolest professional philosophers I've ever met: Achille Varzi.  He's got a great sense of humor and seems sharp as a tack.  This will probably be the toughest class I'll take this semester, but it sounds interesting.  Basically, it seems like we'll be covering how parts of things relate to wholes; it's usually the courses with descriptions that I don't understand that I end up getting the most out of.


The course aims to promote weekly writing by each student. A paper, or section of a book, with which every philosopher ought to be familiar, will be selected each week, and one student will make a presentation on that target paper, while the others will hand in a brief essay about it. Essays will be returned, with comments, before the next meeting of the seminar. Each week a different member of the faculty, in addition to Professor Rovane, will participate in the discussion.

And, of course, the Proseminar.  It sounds mundane, but I actually got quite a lot out of the first half this semester.  It's great to get to meet the various members of the faculty, and the individualized attention and constant feedback on my writing were helpful.  Also, my cohort pretty much rocks.

Friday, November 21, 2008

Internal and External Language

As I start putting together my formal paper about ethics as a social technology, I've been researching the relationship between language and cognition. A few researchers have called the "inner monologue" phenomenon essential to (or even constitutive of) cognition--we talk to ourselves (either out loud or in our heads) as a way of working out problems. This seems right to me: as most people who have done any kind of deep thinking know, it helps tremendously to have an interlocutor (real or imaginary) off which to bounce ideas. This point has led me to consider something related (albeit tangential), though, about which I'd love to get some input.

I'm certain that everyone has had the "inner monologue" experience of speaking to oneself silently--you might rehearse a speech in your mind before you give it, silently repeat the list of things you need to pick up at the grocery store, or try to work out a philosophical problem by talking to yourself in your head. While it's certain that this sort of process is linked to language--it's hard to see how a pre-linguistic animal could think linguistically--I wonder how close this relationship is. Jerry Fodor (among others) holds the position that mental representation happens in a meta-linguistic form that he terms "Mentalese"--while thinking in Mentalese might feel like thinking in (say) English, it differs in slight but important ways. If this theory is correct, it would seem that different brain processes would have to govern true language and Mentalese language (or inner monologues); we should expect, then, to see the two occasionally come apart. 

Here's my question, then: when a person suffers a stroke (or some other kind of brain injury) that interrupts speech functions (as damage to certain parts of the left hemisphere often does), is the inner monologue similarly interrupted? If so, is this always the case, or is it possible to lose the ability to express our thoughts symbolically (either through speech or writing) but still be able to represent thoughts to ourselves in Mentalese? If the latter is correct, that would seem to bolster the Fodorian position that inner speech is fundamentally different from linguistic representation; if the two faculties are inseparable, though, that would seem to cast doubt on the principled distinction between inner monologue and public language. 

I'm researching this question as we speak, but I'm interested in seeing if anyone out there has any first-hand experience with this--have you ever suffered a stroke, or known someone who has? If you lost language, did you also lose the ability to form thoughts with propositional content? Did one faculty return before the other, or are they mutually supportive? Any input is appreciated.

Saturday, November 15, 2008

Quicklink: What Makes the Human Mind?

Harvard Magazine ran a short but interesting piece this week about what makes the human mind unique.  The article's not terribly in-depth, but at least they point out the complexity of the human/animal cognition problem.  Too often, we simply see the claim that human intelligence isn't unique superficially substantiated by pointing out chimpanzee tool use or bee dances--Harvard's piece points out that the issue isn't nearly that simple.  If you're interested in exploring this topic further, I'd recommend Michael Gazzaniga's newest book, Human: The Science Behind What Makes Us Unique.  It's written in reasonably accessible language, but still has enough hard science to interest those with more technical backgrounds.

Thursday, November 13, 2008

A Kripkean Limerick

I got bored this evening:

There once was a man named Saul Kripke
Who said I will force you to pick me
I designate rigidly
Yet you look at me frigidly
So come on, dear girl, and just lick me.

Oh come now, my dear Mr. Kripke
I know not what you mean by 'just lick me.'
For the word 'me,' you see
Is indexed to thee
And your theory of reference can't trick me

"Ah ha!" then said old Mr. Saul
I can tell that you're trying to stall
But with a theory so long
How can I be wrong?
By my side those descriptivists pall

I don't care about size, Mr. Kripke
And I know you're still trying to trick me
Proper names still refer
As descriptions I'm sure
And your rigid old theory shan't stick me

It's likely that no one will find this funny without some background in the philosophy of language (and even then it's still pretty likely, probably). For reference: rigid designation, Saul Kripke, descriptivism, indexicals.

Wednesday, November 12, 2008

Quicklink: Neurons and the Universe

This is just unspeakably awesome.  A side-by-side shot of a neuron and a mock-up of the visible universe shows the remarkable similarities between the two:


Sunday, November 9, 2008

DNA and Representation

A few different conversations today have gotten me thinking about a topic that bothers me from time to time: the idea that DNA is in some sense a code, language, design, blueprint, representation, or other word that implies some degree of intentionality (in the technical sense).  I see this claim made very frequently--most notoriously and nefariously by the more clever Intelligent Design theorists as part of an argument for God's existence, but also by well-meaning philosophers of biology and language alike--but rarely see it challenged.  I would like to at least briefly meet this challenge here; as is sometimes the case, I will likely turn this post into a more formal paper sometime in the future, so many of the ideas I present here are not fully fleshed out or argued for.  Comments and criticisms are, of course, always welcome.  I'm going to focus here on why this sort of approach doesn't work as a means to prove the existence of God, but I expect to say quite a lot that's more broadly interesting along the way.

First let me try to state the argument as clearly and charitably as I can. DNA is a design simply because it is a code for us! That is, because every single cell in my body has a complete genome, and because that complete genome carries all the necessary information to build my body, DNA must be a design. Prima facie, it meets all the criteria of information: it is medium independent (I can change the DNA molecule into a string of As, Ts, Gs, and Cs and it will still retain its information-carrying capacity), and it stands for something more than itself (i.e. my body). DNA, put simply, is a language--a code for me--and that means that it is a design. DNA has semantic content in the same way that the English language does: all those chemicals have a meaning, and that meaning is me. Any design implies a designer, and thus DNA had a designer. Humans didn't design themselves, so something else must have done the job. God is the most likely culprit.

I trust a design theorist would find this formulation acceptable--now I'm going to tell you why it's not. A common extension of this metaphor is to refer to DNA as a "blueprint for you;" I think this metaphor is pedagogically useful, so let's adopt it for the purposes of this discussion. A blueprint, to be sure, represents a building, but before we're going to decide if my DNA represents me in the same way, we're going to have to be clear about what precisely we mean by 'represent' here; it seems that there are at least two primary senses in which we might be using the term, so let's consider DNA in light of each of them.

First, by 'blueprint B represents building X' we might mean something like what we mean when we say that a map represents a nation: that is, that the blueprint corresponds to a building in that each line can be matched with a wall, door, or similar structure in reality. But wait, this does not seem entirely adequate: to adapt Hilary Putnam's famous example, we can imagine an ant crawling in the sand which, by pure chance, traces with its tracks a perfect duplicate of the original blueprint for the Eiffel Tower. It does not seem right here to assert that the ant has created a blueprint for the Eiffel Tower (for a more detailed argument for this, see Putnam's Reason, Truth, and History, pages 1-29). Representation in this sense--in the sense of a map of New York or a painting of Winston Churchill--requires more than mere correspondence: it has to come about in the right kind of way. How exactly to define "the right kind of way" is a deep question, and it is not one that I intend to pursue here. Suffice it to say that the right kind of way involves agentive production by beings with minds at least something like ours (minds that are themselves capable of semantic representation); other methods might produce things that look very much like representation, but this resemblance is not sufficient.

Here, then, is the problem for the intelligent design advocate attempting to endorse the first horn of this dilemma: using it to demonstrate God's existence is straightforwardly question-begging, as we saw above. Arguing that DNA represents in the same sense that a map represents terrain or a portrait represents a person requires the assumption that DNA was produced agentively by a being with a mind like ours; this assumption is precisely what the design-theorist wants to prove, making this line of argumentation invalid. As I said, though, there is a second horn of the dilemma that the design-theorist might instead endorse--let us return now to our blueprint metaphor and see if DNA fares any better here.

The second way we might intend to use 'blueprint B represents building X' is what we might call "the instructive sense." This is the case if building X has not yet been constructed: blueprint B represents not because it corresponds to anything in reality, but because it contains instructions for how one should proceed when constructing building X. What does it mean, though, for one thing to contain instructions for the creation of another? Consider computer programming: when I type something like the following into a compiler:

#include <iostream>

int main() {
    std::cout << "Hello World!";
    return 0;
}

am I writing instructions for the creation of something? Prima facie, this looks just like the blueprint case, but there's an important (and relevant) difference here: in typing the code into a compiler, I'm not making instructions for the program's creation, but rather creating the program itself. That is, the "code" for the program just is the program looked at in a certain way (e.g. through a decompiler); to watch someone write a computer program and then say "Well yes, I've seen you write the instructions for the program, but when are you going to make the program itself?" would make you guilty of a category mistake, it seems--again, writing the program just is writing the code. Program and code are identical. This case, I think, is instructive. We'll see how in just a moment.

Let's set aside the philosophical struggle with this second horn for a moment and remind ourselves what DNA actually is and how it works. DNA consists of two long polymers of alternating sugar and phosphate groups joined by phosphodiester bonds. Attached to this scaffolding, four "bases"--adenine, cytosine, thymine, and guanine--do the real "work" of DNA: adenine always pairs with thymine, and cytosine with guanine, meaning that the entire sequence of both strands can be deduced from just one given half. DNA's primary function in the body is to regulate the creation of proteins, which in turn regulate virtually all of the body's functions. DNA does this by creating strands of RNA through the pairing process alluded to above; units of three bases at a time on the relevant portions of the RNA (there are huge parts of our DNA that seem to play no active role in anything) then interact with cellular objects called ribosomes, which produce corresponding proteins. This is obviously a very basic account of how the protein creation process happens, but it should suffice for our purposes here.
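The base-pairing rule above can be sketched in a few lines of code--this is a minimal illustration of the purely mechanical point that one strand fully determines the other, with a made-up example sequence rather than real genomic data:

```python
# Each base has exactly one partner: A-T and C-G.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand: str) -> str:
    """Deduce the opposite strand from one given half."""
    return "".join(PAIR[base] for base in strand)

# A hypothetical fragment of a single strand:
print(complement("ATGCCGTA"))  # -> TACGGCAT
```

Nothing in this procedure requires an interpreter who grasps a meaning; the "deduction" is just a mechanical substitution, which is exactly the causal (rather than semantic) character of the process that the next paragraph emphasizes.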

The most salient part of the above, it seems, is the emphasis on causation. The entire process of protein synthesis can be done without involving an agent at all. In this way, DNA stands in sharp contrast to the blueprint from our earlier discussion--the sense in which we're using 'instruction' when we're discussing blueprints (at least in the second horn of the dilemma) necessarily includes a concept of conscious builders; to put it more generally, instructions must be instructions for someone to follow. DNA, then, is somewhat more like a computer program than it is like a blueprint in the second sense: rather than being instructions for something's creation, it _just is_ that something viewed from a lower level. Still, though, there is an important element of disanalogy here--to assert that DNA is just like a computer program would be to assert that it represents in the first sense we discussed. This assumption, we saw, leads to a fallaciously circular line of reasoning, and thus is unacceptable. As with a blueprint, if we make the comparison between a computer program and DNA, we must be careful to remember that it is just a metaphor. This, I think, is the central point that I am making: while the blueprint metaphor is apt in many ways, we must take care to remember when we use it that it is just a metaphor--while DNA and blueprints share things in common, there are important differences that prevent the two from being completely equated, no matter which sense of 'represent' we're using.

We've said a great deal here about why thinking of DNA as representing an organism is incorrect, so let's take a moment to sketch a positive argument and suggest what the correct way to think about DNA might be. Let's begin by remembering that DNA causes protein synthesis, which causes other necessary organic functions. If we keep this observation squarely in mind, DNA's metaphysics aren't all that difficult to articulate: DNA, mediated by other chemicals and environmental considerations, regulates the causal chain that leads to the occurrence of all the various functions we mean by the cluster-concept 'life.' These include, but are not limited to, metabolism, reproduction, cell growth, cell regeneration, gas exchange, and many, many others. DNA is just one link--albeit a very important link--in the naturalistic chain that both causes and is constitutive of life.

As I said, I'm aware that there are a great many holes here that need to be plugged before this theory is really solid, but I think the rough outlines are clear.  Thoughts?