Sunday, July 20, 2008

Potential Course List

I'm starting to get my affairs in order to start at Columbia this Fall, and have assembled the following class list. I haven't registered yet, but I have the (perhaps wildly incorrect) perception that grad students are rarely unable to get into classes of their choice. In any case, here's my dream schedule (along with course descriptions and some commentary) for Fall 2008:

EVOLUTION, ALTRUISM, AND ETHICS
Day/Time: W 11:00am-12:50pm
This seminar will elaborate and examine a naturalistic approach to ethics, one that views contemporary ethical practices as products of a long and complex history. I am currently writing a book presenting this form of naturalism, and chapters will be assigned for each meeting after the first. Using brief readings from other ethical perspectives, both historical and contemporary, we shall try to evaluate the prospects of ethical naturalism.
Open to juniors, seniors, and graduate students.
I'm particularly excited about this one. In my early undergraduate days, I specialized in ethical issues, but I found all of the extant ethical theories dreadfully unsatisfying, and came to suspect that if we were going to get a plausible naturalistic account of ethics, we needed a more thorough understanding of how the mind and brain worked--hence the switch to mind. This class sounds right up my alley, though, and I'm always excited to hear naturalistic defenses of philosophical concepts.


1st YEAR PROSEMINAR IN PHILOSOPHY
Day/Time: W 6:10pm-8:00pm
This course, which meets only for the first seven weeks of term, is restricted to, and required for, first-year Columbia Ph.D. students. The course aims to promote weekly writing by each student. A paper, or section of a book, with which every philosopher ought to be familiar, will be selected each week, and one student will make a presentation on that target paper, while the others will hand in a brief essay about it. Essays will be returned, with comments, before the next meeting of the seminar. Each week a different member of the faculty, in addition to Professor Peacocke, will participate in the discussions. A second seven-week segment of the ProSeminar will be held in the Spring Semester of 2009.

What can I say? It's the Pro-Seminar, so I have to take it. Still, it could be really good--the single most productive (in terms of bettering me as a philosopher) course I took at Berkeley was the "Introduction to Philosophical Methodology" class--just like this one, it was aimed at getting students writing every week. My hope is that this class will be a more rigorous and intense version of that one, and that I'll really have a chance to sharpen my writing considerably.


ADVANCED TOPICS IN THE PHILOSOPHY OF MIND
Day/Time: W 2:10pm-4:00pm
This seminar will be concerned with the interactions between the theory of intentional content and thought on the one hand, and metaphysics on the other. We will first discuss the role of truth and reference in the individuation of intentional content. We will then draw on that role in discussing the following issues: the nature of rule-following and objectivity in thought; transcendental arguments and objective content in thought and in perception; the general phenomenon of relation-based thought, and its extent, nature and significance; the nature of subjects of consciousness, self-representation and first person thought.


Mind is my specialty, so this was an easy choice. I'm not entirely clear on what exactly this course description is talking about (which is a good thing), other than that the class seems to deal with intentional content and how it relates to external objects, which is a topic I'm very much interested in.

Serendipitously, all these classes are on Wednesday, so I'd be in class only one day per week, which would be pretty nice. I'm sure I'm going to have a lot of writing to do outside of class (and Fallout 3 is coming out soon, too...), so not having to make the commute to campus every day will be nice.

Saturday, July 19, 2008

Some Normative Epistemology

If you ask most philosophers, they'll tell you that there are (roughly) four main branches of philosophy: metaphysics, epistemology, ethics, and logic. These (again, roughly) correspond to the questions: "What's out there?" "How do you know?" "What should I do about it?" and "Can you prove it to a math major?" Tongue-in-cheek definitions aside, metaphysics deals with questions relating to the nature of reality, the existence of various entities, properties of those entities, and the ways in which those entities interact. "Does God exist?" and "How do the mind and brain relate?" are both metaphysical questions. Epistemology deals with knowledge claims and how humans go about knowing things in the first place--"What is an appropriate level of evidence to require before changing a belief?" and "How can we be sure that our senses are reliable?" are both epistemic questions. Ethics deals with questions of right and wrong (metaethics) and how we ought to live our lives (normative ethics). "What moral obligations do we have to our fellow man?" is the canonical ethical question. Logic sort of flits in and out of the other disciplines, popping its head in to be used as a tool (or to confound erstwhile plausible theories) in any and all of the above, but it also has some questions of its own. Modal logic deals with necessity and contingency, and asks questions like "What does it mean for some truth to be true necessarily rather than by chance, or contingently?"

This blog deals mostly with metaphysical questions, but I had a very interesting discussion about epistemology with a colleague the other day, and I want to relate some of the highlights here and (hopefully) get some comments.

The discussion revolved mostly around what counts as a good reason for believing some proposition, but I want to specifically focus on an epistemic maxim of my own invention: "we ought to be more skeptical of propositions we wish to be true." Let me give a brief summary of the thought process that led me to adopt this maxim.

First, I take it as axiomatic (that is, not requiring proof of its own) that holding true beliefs about the world is a good thing, and that holding false beliefs is a bad thing--I don't mean 'good' and 'bad' in any kind of moral sense here, only that, in general, the accumulation of true beliefs and the expunging of false beliefs (i.e. the search for truth) is a goal that can be valued in itself, and not necessarily for any pragmatic results it might deliver (though it certainly might deliver many). If you don't agree on that, feel free to shoot me an argument in the comments, and I'll do my best to address it.

With that axiom in place, then, it seems reasonable that we should do whatever we can to avoid taking on new false beliefs, as well as strive to take on as many true beliefs as we can. That last part is important, as it saves us from going too far down the path of Radical Skepticism. If we were to adopt something like Descartes' method of doubt in his Meditations--that is, adopt the maxim "we should withhold assent from any proposition that is not indubitable just as we would any proposition that is clearly false"--we would certainly minimize the number of false beliefs we would take on, but at the expense of likely rejecting a large number of true ones. Radical Skepticism results in too many "false epistemic negatives," or "avoids Scylla by steering into Charybdis," as another colleague said. To continue the metaphor, it also seems too dangerous to stray toward Scylla, lest I simply believe every proposition that seems prima facie plausible--too far in the direction of naive realism, in other words. While I certainly consider myself a naive realist in the context of perception--I think that the way our senses present the world to us is more-or-less accurate, and that when I (say) perceive a chair or a tomato, I really am perceiving a chair or a tomato, and not my "sense datum," "impression" or any other purely mental construct that is assembled by my mind--I think we ought to be somewhat more skeptical when it comes to epistemology in general.

My colleague pressed me for the exact formulation of my response to the question at hand ("what counts as a good reason for forming or changing a belief?"), but I demurred, and on further reflection--both then and now--I'm not sure I can give a single answer. Rather, it seems to me that there are (or at least ought to be) a variety of heuristics in our "epistemic toolbox" that either raise or lower "the bar of belief" in various circumstances. "Naive realism" is shorthand for a cluster of these heuristics, it seems to me, including (for instance) "We should be more skeptical of propositions that would have the world operating in a way that is radically different from how it seems."¹ I'm most interested right now in the general heuristic mentioned above, though: "we should be more skeptical of propositions we wish to be true." So let's continue with our justification of it.

I'm not a perfect reasoner; unlike, say, Laplace's Demon, it is possible for me to make a mistake in my reasoning--indeed, it happens with alarming frequency. These errors can take many forms, but they can include assenting to arguments which, though they might seem sound to me, in reality either lack true premises or are somehow invalid. If I strongly desire some proposition p to be true--if, for example, a close family member is in a coma and I hear about an experimental new treatment that might allow him to awaken with full cognitive faculties--I am more likely to make these errors of judgment, as I will not necessarily apply my critical faculties with the same force as I would to another proposition p1 on which such strong hopes were not resting. My colleague objected that I would, given enough care, certainly be aware of when this was happening, and could take more care in my reasoning to ensure that this result did not occur, but I am not so certain: a corollary of the fact that I am a fallible reasoner seems to be that I might not always know when my reasoning is being faulty. It is no solution, therefore, to say "we need not universally require a higher standard of proof for propositions we wish to be true, we just need to be sure that our reasoning is not being influenced by our desires," as it is possible--in just the same sense that it is possible for me to make a mistake in my reasoning--that I might make a mistake in evaluating that reasoning itself, no matter how much care I take to be certain that my desires do not influence my judgment.

What, one might ask, do I mean by "skeptical," then, if not a more careful logical rigor? It seems to me that whenever I am thinking clearly (i.e. I am not drunk, asleep, distracted, etc.) and applying my logical faculties to the best of my ability (i.e. critically questioning my beliefs or trying as hard as I can to puzzle out a problem)--as I should be when I am seriously considering adopting a new belief or changing an existing one--I am already being as rigorous as I possibly can be; unless, for some reason, I have already lowered the "bar of belief" in a specific instance (e.g. suspending disbelief while watching an action movie), I should normally be as logically rigorous as I can be. If I'm critically examining adopting some belief that I greatly wish to be true, then, I should not only be as logically rigorous as I can be--that is, I should set the bar of belief where I normally do--but also factor in the possibility that my desire might be affecting my logical reasoning--might be lowering the bar without my knowledge--and so I ought to require more evidence than I otherwise would: that is, I ought to be more stubborn about changing my position. By "skeptical" here, then, I just mean "requiring of more evidence," in the same way that if I'm skeptical of a student's claim that her computer crashed and destroyed her paper, I will require more evidence attesting to the truth of it (a repair bill, maybe) than I normally would; her claim to that effect counts as at least some evidence, which might be enough if I had no reason to be skeptical.
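For the programmatically inclined, here's a minimal toy sketch of the "bar of belief" idea. Everything in it--the 0-to-1 evidence scale, the particular numbers, the names--is my own invention purely for illustration, not anything more rigorous:

```python
# Toy sketch of the "bar of belief" heuristic. All numbers and names are
# illustrative assumptions: evidence is scored on an arbitrary 0-1 scale, and
# the "desire penalty" models demanding extra evidence for propositions we
# wish to be true.

BASELINE_BAR = 0.70     # evidence I normally require before assenting
DESIRE_PENALTY = 0.15   # extra evidence demanded when I want the proposition to be true


def should_assent(evidence_strength: float, desired: bool) -> bool:
    """Return True if the evidence clears the (possibly raised) bar of belief."""
    bar = BASELINE_BAR + (DESIRE_PENALTY if desired else 0.0)
    return evidence_strength >= bar


# The same body of evidence can clear the bar for a neutral proposition
# but fall short for one I badly want to be true.
print(should_assent(0.75, desired=False))  # True
print(should_assent(0.75, desired=True))   # False
```

The point isn't the particular numbers, of course--only that the threshold moves in one direction whenever desire enters the picture, regardless of how careful I think I'm being.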

Let me make my point briefly and clearly. In making desire-related decisions--particularly when deciding to assent to a proposition I wish to be true--the possibility that my desire might negatively affect my reason, combined with the fact that I might not be aware of this negative effect, means that I ought to apply my normal reasoning faculties with my full ability and require more evidence in favor of the proposition than I normally would.

Even more succinctly: we ought to be more skeptical of propositions we wish to be true.



Thoughts? Does this make sense? What standards do you apply when trying to make up your mind about your beliefs in general?









1. This is not to say that propositions which say that the world operates in radically different ways than it seems to us are always (or even usually) going to be false--the atomistic theory of matter, relativity, and quantum mechanics are all theories which seem to be at least mostly true, and which describe the world as being in fact much different than it seems. My point is that we should hold claims like this to a higher epistemic bar before assenting to them than we would claims like (say) "there is a tree outside my window," which correspond with reality as it seems to us.






Edit: Because this discussion took place in class with my co-instructor, and because the kids all have cameras all the time, we get a picture of me thinking about this point and a picture of me arguing it with him. Enjoy.




Friday, July 18, 2008

Searle on Derrida and Deconstruction

Frequent readers will undoubtedly know that I have an ongoing cold war with postmodernism, poststructuralism, deconstructionism, or whatever you want to call the very confusing (and often very nonsensical) "philosophical" position that seeks to "deconstruct" philosophy, science, and rationality in general, revealing them as "social constructs" or--even worse--"mere textualities." The claims espoused by proponents of these positions include such gems as "reality is a text," "truth is a kind of fiction," and (my personal favorite), "what we think of as the innermost spaces and places of the body—vagina, stomach, intestine—are in fact pockets of externality folded in."

This philosophical style (and I use the term loosely) is exemplified by Jacques Derrida, a French "philosopher" who is commonly credited with having founded the field. His writing, as far as I've seen, is spectacularly confused and cloaked in so much obfuscation and deliberately obscure language as to be almost unreadable, either in French or in translation. He, like most other proponents of his field, is fond of masking his almost universally ridiculous claims in language that makes them seem profound--he could have been the very subject that Nietzsche (no bastion of clarity himself) had in mind when he said "Those who know they are profound strive for clarity. Those who would like to seem profound strive for obscurity." Here's an example from Writing and Difference, just to give you a taste of his style:

The entire history of the concept of structure, before the rupture of which we are speaking, must be thought of as a series of substitutions of centre for centre, as a linked chain of determinations of the centre. Successively, and in a regulated fashion, the centre receives different forms or names. The history of metaphysics, like the history of the West, is the history of these metaphors and metonymies. Its matrix [...] is the determination of Being as presence in all senses of this word. It could be shown that all the names related to fundamentals, to principles, or to the centre have always designated an invariable presence - eidos, arche, telos, energeia, ousia (essence, existence, substance, subject), transcendentality, consciousness, God, man, and so forth.
If you find yourself thinking "well, that doesn't really say anything at all!" congratulations: you're sane. The central thrust of my objection to the entire deconstructionist thesis is (briefly) just this: it isn't accurate to portray all of reality as a "text" to be interpreted, as a "social construct," or as a relative phenomenon. There is, in fact, a difference between a sign and its referent, between subjectivity and objectivity, and between truth and fiction. When I make a statement like "there is a tree outside my window," I'm making a claim about how the world really is--one that, depending on various facts about the real world, is either going to be true or false. It isn't a matter of interpretation, opinion, or "textual construction" (whatever that even means), and cloaking these sorts of inanities in sophisticated (or deep-sounding) language isn't going to change that basic fact.

I'm not going to continue with my critique here, because John Searle did a much better job than I ever could. The depth and ferocity of his attack on Derrida, his disciples, and deconstructionist ideology in general is breathtaking in its effectiveness. Snip:

What are the results of deconstruction supposed to be? Characteristically the deconstructionist does not attempt to prove or refute, to establish or confirm, and he is certainly not seeking the truth. On the contrary, this whole family of concepts is part of the logocentrism he wants to overcome; rather he seeks to undermine, or call in question, or overcome, or breach, or disclose complicities. And the target is not just a set of philosophical and literary texts, but the Western conception of rationality and the set of presuppositions that underlie our conceptions of language, science, and common sense, such as the distinction between reality and appearance, and between truth and fiction. According to Culler, "The effect of deconstructive analyses, as numerous readers can attest, is knowledge and feelings of mastery" (p. 225).

The trouble with this claim is that it requires us to have some way of distinguishing genuine knowledge from its counterfeits, and justified feelings of mastery from mere enthusiasms generated by a lot of pretentious verbosity. And the examples that Culler and Derrida provide are, to say the least, not very convincing. In Culler's book, we get the following examples of knowledge and mastery: speech is a form of writing (passim), presence is a certain type of absence (p. 106), the marginal is in fact central (p. 140), the literal is metaphorical (p. 148), truth is a kind of fiction (p. 181), reading is a form of misreading (p. 176), understanding is a form of misunderstanding (p. 176), sanity is a kind of neurosis (p. 160), and man is a form of woman (p. 171). Some readers may feel that such a list generates not so much feelings of mastery as of monotony. There is in deconstructive writing a constant straining of the prose to attain something that sounds profound by giving it the air of a paradox, e.g., "truths are fictions whose fictionality has been forgotten" (p. 181).


The direct target of his attack is a book by Derrida's disciple Jonathan Culler, but the criticisms hold true for Derrida himself, as well as for much of the deconstructionist movement in general. If you are--like me--inclined to view academia in general (and philosophy in particular) as a project that aims to get at clear and rational truth about an objective world, I urge you to read Searle's criticisms: they are spot on.

Link.

Thursday, July 17, 2008

Escaping the Amish

I'm currently in Lancaster, PA, teaching CTY at Franklin & Marshall College. Lancaster, for those who aren't from around here, is the heart of Amish country--you can't really go out in public without encountering at least a few of them wandering around, even in the mall (which seems kind of strange to me). As both an atheist and somewhat of a Server Monk, the Amish have always kind of baffled me--they're more or less the opposite of everything I stand for--but I've always considered them one of the more tolerable (if odd) religious sects; at the very least, they seem peaceful, and choose to eschew those they disagree with rather than clash with them violently.

This perception, widespread as it may be, is apparently not entirely accurate. I came across an interview today with Torah Bontranger, a 28-year-old woman (and recent Columbia graduate!) who "escaped" from the Amish when she was 15. The picture she paints of Amish life contradicts the gentle, tolerant, pastoral image we're usually presented with. Snip:

For as long as I can remember, I had always envisioned a life such that wouldn’t be compatible with the Amish religion and lifestyle.

I loved learning, and cried when I couldn’t go back to school the fall after graduating from Amish 8th grade. The Amish do not send their children to formal schooling past 8th grade. A Supreme Court case prevented forcing Amish children into high school on grounds of religious freedom. I knew that, by US law, I wasn’t considered an adult until eighteen. I didn’t want to wait until then to go to high school.

[...]


The Amish take the Bible verse “spare the rod and spoil the child” in a literal sense. Parents routinely beat their children with anything from fly swatters, to leather straps (the most typical weapon), to whips (those are the most excruciating of all), to pieces of wood.

[...]

One of my acquaintances stuttered when he was little and his dad would make him put his toe under the rocking chair, and then his dad would sit in the chair and rock over the toe and tell him that’s what he gets for stuttering.

Even little babies get abused for crying too much during church or otherwise “misbehaving.” I’ve heard women beat their babies — under a year old — so much that I cringed in pain.

Neat, eh? Though Torah is careful to stress that she was raised in what's called an "Old Order Amish" community (apparently the Anabaptist equivalent of Hasidim), I suspect that this implies that "normal" Amish life isn't all sunshine and horse-drawn buggies either. In any case, it's a compelling story, and she tells it with an intense, religious fervor that is almost certainly a by-product of her first 15 years. She's apparently got a book forthcoming--I look forward to it.

Part 1
Part 2

Wednesday, July 16, 2008

Philosophical Army

As some of you know, I'm currently teaching Philosophy of Mind with Johns Hopkins' Center for Talented Youth. Between classes today, I spotted a group of my students marching in formation with one in the lead shouting "SUPER SPARTANS, WHAT IS YOUR PROFESSION?" and the rest calling out (in perfect unison) "DISPROVING BEHAVIORISM! AH OOO! AH OOO! AH OOO!" It was delightfully geeky, and I thought some of you out there might appreciate it.

Thursday, July 10, 2008

Biological Naturalism vs. Functionalism

Derek asks:

Can you please clearly distinguish between Biological Naturalism and Functionalism? I don't get the difference. I thought a Functionalist basically said that the mind was what the brain did, like digestion is what a stomach does. So how are the schools of thought different?
I certainly can. In order to really get clear on what we're talking about, though, I think I need to say a little bit about the history of philosophy of mind. In the early-to-mid 20th century, the "in vogue" idea about how the mind and the body related was logical behaviorism. The logical behaviorist thesis, briefly stated, held that any talk about mental states (e.g. pains, tickles, beliefs, desires, etc.) is really reducible to talk about behaviors and dispositions to behave--if Jon believes that it is raining, that just means that Jon is disposed to behave in a certain way (to carry an umbrella if he wants to stay dry, to close the windows of his car, to turn off his sprinklers, etc.), and if Jon is in pain, that just means that Jon is disposed to behave in another certain way (to retreat from the stimulus causing the pain, to say 'ouch,' to writhe around on the floor, etc.).

There are obvious problems with this--Hilary Putnam, for one, raises the logical possibility of a race who have pain-experiences without any disposition to pain behavior as evidence that mental statements are not logically identical to behavioral statements--but the one I find most telling is that it seems impossible to reduce intentional (in the technical sense) language into behavioral disposition language without simultaneously introducing another intentional term. To carry on with the above example, while it might be right to say that "if Jon believes it is raining he will carry an umbrella," that statement only seems true if Jon also has a desire to stay dry; similarly, Jon's desire to stay dry can only be translated into a behavioral disposition to wear galoshes if he believes that it is wet outside. This problem doesn't arise for all mental terms, but the fact that it arises for even one is enough to destroy the behaviorist thesis--the notion that all mental states are logically reducible to statements about actual or potential behaviors is false.

With the death of logical behaviorism, a new doctrine--Functionalism--arose to captivate the philosophical profession, this one based on a simple idea: what if the brain just is a digital computer, and our minds just are certain software programs? Whereas behaviorism is concerned only with the system's (i.e. your) inputs and outputs, Functionalism is concerned with the functional states that combine with inputs to produce given outputs. On this view, mental states are really just certain functional states in the complex Turing Machine that is our brain, and those mental states (including consciousness as a whole) are defined strictly in terms of function--there's nothing special about my mind, and (given the right programming and sufficiently powerful hardware), there's nothing stopping me from creating a functional equivalent of it implemented in silicon rather than in "meat."

To put it more precisely, Functionalism defines each mental state in terms of its place in the complex causal structure that is my brain; rather than ignoring what's going on in the "black box" of the brain (as a behaviorist would want to), a functionalist will admit that internal processes are essential for the system to function as it does, but will deny that there is anything essentially "mental" about those processes; a computer program with the same set of causal states, inputs, and outputs as my brain would, on this view, have a mind by definition, as all it means to have a mind is to have a system that functions in a certain way; how that system is implemented doesn't matter.

This point is easier to see in simpler cases, so let's take the case of an adding machine. There are many different possible ways that we could "realize" (that is, implement) a machine to add numbers: my pocket calculator, MatLab, this awesome device, and an abacus will all get the job done. Functionally, all these devices are equivalent--though they're instantiated in different forms, they have internal states that are directly analogous and, in the long run, produce the same functionality across the board. The brain, on this view, is just one implementation of "having a mind," and anything (say, a digital computer running a very complex program) could, given the right functional states, also be said to have a mind.
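If it helps to see the multiple-realizability point in code rather than in hardware, here's a tiny sketch of my own (not anything drawn from Putnam or Searle): two "realizations" of addition that work in completely different ways internally but are functionally equivalent from the outside.

```python
# Two different "realizations" of the same function: addition.
# The internal workings differ wildly, but inputs and outputs always match,
# which is all that matters on the functionalist picture.

def adder_arithmetic(a: int, b: int) -> int:
    """Realization 1: lean on the hardware's built-in addition."""
    return a + b


def adder_abacus(a: int, b: int) -> int:
    """Realization 2: count one "bead" at a time, abacus-style."""
    total = a
    for _ in range(abs(b)):
        total += 1 if b > 0 else -1
    return total


# From the outside, the two devices are indistinguishable.
for a, b in [(2, 3), (10, -4), (0, 0)]:
    assert adder_arithmetic(a, b) == adder_abacus(a, b)
print("functionally equivalent on the tested inputs")
```

The functionalist bets that "having a mind" works the same way: what matters is the pattern of causal and functional states, not whether the implementation is neurons or silicon--and that is exactly the step Biological Naturalism resists.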

Biological Naturalism (BN) rejects this last point. Those of us who endorse BN (or something like it) point out that defining the mind purely in terms of functional states seems to leave something vital out--the qualitative character of consciousness, as well as its capacity to represent (that is, to have intentionality). Searle's Chinese Room argument is supposed to show exactly where Functionalism goes wrong: though the behavior of the Chinese Room system as a whole is functionally identical to a human who speaks Chinese, there seems to be something important missing from the Room--understanding, or semantics. Our minds, then, have to be defined by something other than their functional roles, as a system with functionally identical states seems to be missing both intentionality and the subjective character of experience, both of which are defining characteristics of minds like ours.

BN proposes a simple, intuitive, and (it seems to me) correct answer to the mind/body problem: consciousness exists as an ontologically irreducible feature of the world--it can't be explained away as illusory in the way that rainbows can be--yet it is entirely caused by and realized in neuronal processes. Statements about mental events--beliefs, desires, pains, tickles--say something true and irreducible about the organism and can't be reduced to talk of brain states without the loss of something essential: the qualitative character of consciousness. The analogy with digestion--while not exact, as there's no essentially subjective character to digestion--is instructive here: consciousness is just something the brain does, in much the same way that digestion is just something the stomach and intestines do.

That's a rather brief characterization, and if you want a more detailed account, I urge you to read Searle's latest formulation here. It's not without problems, and I'm working on a modified account that I think is better able to deal with certain objections, but it's a great place to start. I hope that answers your question, Derek!

Wednesday, July 9, 2008

Damn You, Obama

I'm pissed, so I'm going to rant a little bit. Obama voted today to continue the warrantless wiretapping without FISA oversight, and to give telecom companies legal immunity for helping in this Constitutional violation. It seems that as soon as he landed the nomination, all pretense of being "different" was gone. "Change we can believe in" my ass.

He preyed on people's desire to believe that things could be different and better; it doesn't get any lower than that. This is going to have wide-ranging implications for our democratic system, too--a lot of young people rallied behind Obama as the first real candidate they could believe in; now that he's turned out to be just another politician, many of them will be a lot less likely to participate in the political process in the future. If someone actually comes along who REALLY DOES represent change, the fact that this shyster sold us a line of bullshit is going to make it harder for him to make his case. In case you can't tell, I'm monumentally angry about this--he fooled me, and if that costs the Democrats this election, that's something I won't easily forgive.

He's also alienating his base. I was 100% behind him during the primaries--I donated money to the campaign, talked other people into supporting him, and generally held him up as a candidate who really might have a shot at fixing our broken system. I'm not sure if he realizes this, but the grassroots (mostly online) far-left progressives are the ones who really created his momentum in the first place. At the outset of the race, Hillary was the presumptive nominee--both from the party's perspective, and from the mass media's. Obama's message of hope and change resonated with those of us who are sick to death of the status quo, of the backstabbing "compromises" made in the name of getting votes and getting power, and of the general "play the game" sentiment that most politicians have. Obama presented himself as a fresh, young, idealistic and--most of all--truly progressive candidate who would remain above the sordid "you scratch my back and I'll scratch yours" world of Washington politics, and for a time it really seemed like he was. As soon as he became the presumptive nominee, though, all that vanished--he shifted radically toward the center, and started to pander to various groups just in the name of getting votes. His chance to win this election for the Democrats rested squarely on his image of REAL change in the White House--not just a switch from Right-Center to Left-Center, but a fresh vision; he represented a message of hope that not every election had to be a pseudo-choice between the puppet on the right and the puppet on the left. He's now working steadily to destroy that image, and whether it's because he never was the candidate he purported to be or because he's getting (and following) some very bad advice about "what he needs to do to win," turning into just another politician might well lose this for the Democrats, which is something we cannot afford.

So much for hope. Damn you, Barack Obama.

Implicit Biological Naturalism

My view on the mind-body problem--a species of John Searle's Biological Naturalism--goes something like this. Brains cause minds--that is, if you knew everything about the physical structure of my brain, you could see how I couldn't but be in the mental state that I'm in--but minds are not eliminatively reducible to brain states. When I say "I believe George Bush is President," I'm saying something literally true, and I'm not just making a disguised (or confused) reference to my neuronal states. Similarly, when I (truly) say "I am in pain," I'm making a statement about a phenomenal sensation that is most certainly not true of my neurons yet most certainly is true of the system as a whole; the behavior of my neurons is the causal force directly behind my experience of pain, but that experience is not itself reducible to the neurons themselves. Consciousness, in short, is an emergent property of physical objects; it is something that the brain does in just the same way that digestion is something that the stomach does, and while we can say that my neuronal structure is causally sufficient for consciousness, it is not itself conscious.

That's a fairly brief characterization, and I'm working on cleaning it up a bit; there are problems with Searle's original formulation--mostly due to confused terminology and an unwillingness to accept certain things (like emergentism)--but I think he's basically got the right idea. Still, this isn't a position that's widely accepted in the philosophical community--most people still cling to one form of Functionalism or another.

There is, however, mounting evidence that Biological Naturalism (or something very like it) is starting to catch on in the scientific community. Today's issue of Nature contains a fascinating article about the difficulty of linking specific genotypes--that is, specific genes or specific kinds of damage to specific genes--to individual mental disorders (e.g. schizophrenia or autism). The authors suggest that this might be because many psychiatric disorders are cluster phenomena--in other words, constellations of related disorders that have radically different causes but share similar effects. A gene that sometimes seems to increase the risk factor for schizophrenia might be subtly altering some aspect of brain structure, and this alteration might in turn predispose one toward a certain behavior that might, in combination with another genetic accident, lead to psychosis; the system, in short, must be considered holistically in order to say anything meaningful about higher-level features (such as thought disorders).

Vaughan over at Mind Hacks puts the issue even more explicitly in biological naturalist terms:

Genetics is a complex business, but psychiatric genetics even more so, because it attempts to find links between two completely different levels of description.

Genes are defined on the neurobiological level, while psychiatric diagnoses are defined on the phenomenological level - in other words, verbal descriptions of behaviour, or verbal descriptions of what it is like to have certain mental states.

There is no guarantee, and in many people's opinion, probably no likelihood, that these 'what it is like' descriptions actually clearly demarcate distinct processes at the biological level.

I couldn't have said it better myself. Mental illness, like conscious states in general, seems to me to be an emergent phenomenon: it can't be reduced to any one gene, neuronal process, or even type of brain activity, because the same phenomenal state can, given variations in the environment (in my broad sense, 'environment' includes neurobiological facts about the subject), be produced by very different physical phenomena. That's precisely why I think intentional (in the technical sense) language can't be eliminated from our "mental" vocabulary--statements about beliefs, desires, hopes, thoughts, and ideas are not reducible to statements about neurobiology, neither in a token/token sense nor in a type/type sense.

Link.

Tuesday, July 8, 2008

Contact Juggling

Contrary to popular belief (including mine, sometimes), I do have hobbies other than philosophy. One of them is contact juggling; if you're like me before I started doing it, though, you have no idea what that is, so here's a short routine I put together showing off a few simple isolations I've been working on lately. I think it looks pretty cool, but I'm always rather impressed with myself, so that might not be saying much.


Saturday, July 5, 2008

Experimental Philosophy

I know you all want to help out with the world of experimental philosophy, so head on over and take an anonymous survey designed to probe your moral intuitions and beliefs. Unlike most studies of its type, this one is specifically recruiting philosophers and people with a background in moral psychology, so anyone should be able to participate. Enjoy!

Brain Scratchingly Awesome

The New Yorker recently ran an installment of its "Annals of Medicine" series describing the neuroscience and philosophy behind the itch sensation. The first half of the article (roughly) deals with how the itch sensation is generated, using a case study of a woman they call "M." as an example. M., apparently, suffered from shingles as a result of an active HIV infection, and after they subsided, began experiencing a persistent and maddening itch in the scalp. One passage is particularly grisly, so (of course) I'm including it here. Snip:
For M., certainly, it did: the itching was so torturous, and the area so numb, that her scratching began to go through the skin. At a later office visit, her doctor found a silver-dollar-size patch of scalp where skin had been replaced by scab. M. tried bandaging her head, wearing caps to bed. But her fingernails would always find a way to her flesh, especially while she slept.

One morning, after she was awakened by her bedside alarm, she sat up and, she recalled, “this fluid came down my face, this greenish liquid.” She pressed a square of gauze to her head and went to see her doctor again. M. showed the doctor the fluid on the dressing. The doctor looked closely at the wound. She shined a light on it and in M.’s eyes. Then she walked out of the room and called an ambulance. Only in the Emergency Department at Massachusetts General Hospital, after the doctors started swarming, and one told her she needed surgery now, did M. learn what had happened. She had scratched through her skull during the night—and all the way into her brain.

The second half of the article discusses the philosophical implications of M.'s case (and of itching in general), discussing naive realism, Berkeley's idealism, active perception, and phantom limb cases. It's well written (as one might expect in the New Yorker), and very, very interesting.

EDIT: Some of you might be wondering, as I was, how it is possible to scratch through one's own skull over the course of a single night. It turns out that, because of her HIV, M.'s wound became infected, and that turned into osteomyelitis, which softened the bone to the point that this was possible. The more you know.

An Inventory of My Recent Life

At last, things begin to settle down and I find that I have a chance to blog again. Some of you may know that I'm about to start grad school, and that said grad school is on the opposite side of the USA from where I had been located. For the last three weeks or so, I've been packing, saying goodbye to friends and family, and generally getting ready to depart the West Coast on a more-or-less permanent basis. I'm now at the halfway point of my journey--Lancaster, PA--where I'm teaching Philosophy of Mind for CTY again. I'll be here for another five weeks or so, and then I'll be continuing on to New York City and starting at Columbia.

Anywho, I hope that explains the recent absence; I've been psychotically busy, and haven't even had a chance to read (much less write) blog posts, but that should be changing now. I am (for the moment at least) settled and stable, so look for more posts to be forthcoming!