Friday, December 28, 2007

So You Want to Study Philosophy: A Reading List

In the last post, I promised to provide a list of solid introductory books for someone starting an investigation into the free will problem. I got to thinking a bit, and decided that I might as well take the time to give a reading list for someone starting to get into philosophy on a wide variety of topics. These books are, for the most part, what I'd classify as "advanced introductory" texts. In other words, they presume minimal prior philosophical knowledge, but also that the reader is intelligent and at least somewhat academically minded. Reading philosophy is very different than reading most other texts, including other academic texts; you'll see what I mean when you begin. In general, these are texts that would be likely to be included in the reading lists of advanced undergraduate courses. I'm only going to deal with a few fields (those in which I feel most confident in recommending books) at first, though I might add more subject areas to the list later. Without further ado, here's the list:

Philosophy of Mind

Philosophy of mind encompasses a wide variety of topics, including the mind-body problem (what's the relationship between the brain and the mind?), questions about the nature of consciousness, artificial intelligence, representation, and a myriad of other questions. Here are a few good texts to get you going (I'll warn you right now that I'm heavily biased toward John Searle, as I studied under him at Berkeley):

  • Mind: A Brief Introduction by John Searle. ISBN 978-0195157345. This is a great introductory text. Searle writes with an accessible, conversational tone, and covers many of the major ideas in the field before presenting his own.
  • Philosophy of Mind: Classical and Contemporary Readings, an anthology edited by David Chalmers. ISBN 978-0195145816. Great anthology that covers most aspects of the field from the Greeks onward. Some of the articles are a little dense (Kripke shows up more than once), but this is a perfect anthology to get you started.
  • How the Mind Works by Steven Pinker. ISBN 978-0140244915. Like Searle, Pinker writes in a very accessible tone, and can easily be understood by the uninitiated. Unlike Searle, Pinker embraces the computational theory of mind, which is the dominant theory today. Worth reading to understand where many philosophers and cognitive scientists are coming from today. This was published in 1999, so it's a bit dated now, but good nonetheless.
  • Intentionality: An Essay in the Philosophy of Mind by John Searle. ISBN 978-0521273022. This one's a bit more academic, but still benefits from Searle's characteristically clear and clean style. Intentionality is absolutely central to philosophy of mind, and this is a wonderful text discussing it.

Free Will and Action Theory


Questions about whether or not we are free tend to get lumped together with questions about how our actions come to be executed, or what it means to act rationally. The whole field is called "free will and action theory."

  • Four Views on Free Will by John Fischer, Derk Pereboom, Manuel Vargas, and Robert Kane. ISBN 978-1405134866. I had the opportunity to review an early copy of this book when I took an undergraduate action theory class with Manuel Vargas. I was impressed with it then, and I imagine that it's only gotten better. Each author advocates for one of the four major positions within the field (Kane for libertarianism, Pereboom for hard incompatibilism, Fischer for [semi]compatibilism, and Vargas for revisionism). This volume contains the essay "My Compatibilism" by Fischer, which is the article I referenced in the series of free-will-themed posts.
  • The Metaphysics of Free Will: An Essay on Control by John Fischer. ISBN 978-1557868572. This is a more detailed exploration of John Fischer's ideas about free will and moral responsibility, including his account of the distinction between guidance and regulative control. A great text on his brand of compatibilism.
  • The Oxford Handbook of Free Will, an anthology edited by Robert Kane. ISBN 978-0195178548. The Oxford Handbook series is predictably excellent, and this is no exception. As with any anthology, the content runs the gamut from the reasonably accessible to the very dense and technical. Still, it offers well-rounded coverage of most of the classical and contemporary ideas in the field.
  • Free Will (Blackwell Readings in Philosophy), another anthology edited by Robert Kane. ISBN 978-0631221029. The Blackwell series is also very good overall. This anthology focuses much more on the current debate than the Oxford anthology does, and is perhaps a bit more accessible to the layman, though somewhat less comprehensive.
  • Rationality in Action by John Searle. ISBN 978-0262692823. Searle's a die-hard libertarian about free will, one of the few positions he holds with which I disagree. In this book, he outlines the traditional philosophical concept of "acting rationally" (the orthodox view is basically that a 'rational' action is one motivated purely by the actor's own beliefs and desires), then, in predictable Searle fashion, turns the debate on its head and argues for exactly the opposite point. Even if you disagree with his conclusions, this book is worth reading for its clear treatment of difficult technical issues, and for the sake of appreciating Searle's gift for ingenious arguments against orthodoxy.

Ethics and Morality

Ethics is one of the five major fields of philosophy (the others are metaphysics, epistemology, aesthetics, and logic), and deals with questions of how we ought to live our lives. Here are some good places to start:

  • Consequentialism (Blackwell Readings in Philosophy), an anthology edited by Stephen Darwall. ISBN 978-0631231080. Another Blackwell entry here. Consequentialism is the idea that the relevant moral facts of the matter (i.e. the facts that determine the moral status of an action) are facts about that action's consequences, not facts about the actor's intentions. It is one of the most popular (if not the most popular) constellations of views in ethics today, and Darwall's anthology offers a broad introduction to arguments for and against it.
  • Virtue Ethics (Oxford Readings in Philosophy), an anthology edited by Slote and Crisp. ISBN 978-0198751885. Virtue ethics originated with Aristotle, and is based on the idea that being moral consists in having one or more virtues (e.g. courage, temperance, etc.). I'm not a big fan of this theory, but it's an important one in the ethics debate, and this is as good an anthology as any.
  • Groundwork of the Metaphysics of Morals, by Immanuel Kant. ISBN 978-0521626958. Ah, the Groundwork, how I love/hate thee. Kant advocates the idea that acting morally has nothing to do with bringing about the best consequences, but rather depends entirely on acting in accord with an overarching moral principle called the "Categorical Imperative." The essay is short, brilliant, and very, very difficult. Read it, read it again, read it again, and then maybe you'll have some idea what he's saying. Then maybe you can explain it to me. This is a tough one, but well worth the effort--when you finally finish it, you'll feel like you've just run a mental marathon, and you'll understand why Kant is widely considered to be one of the most brilliant humans to have ever lived (even if his theories might not be correct).
  • The Ethical Brain: The Science of Our Moral Dilemmas by Michael Gazzaniga. ISBN 978-0060884734. While perhaps not a philosophy book per se (Gazzaniga is a neuroscientist, not a philosopher), this is still an excellent book for anyone interested in the field of ethics. I've got a soft spot for anyone who attempts to resolve seemingly intractable philosophical problems with an appeal to neuroscience (if I had to sum up my overarching academic project in one sentence, that would be it), which is precisely what Gazzaniga does in this book. He considers various aspects of morality and various ethical dilemmas, then shows how they can be illuminated or resolved by a careful look at the underlying neuroscience. This is a "pop science" book--i.e. written for mass consumption--so it is very readable and interesting, though those with a significant background in philosophy, neuroscience, or cognitive science will probably find it lacking in detail.

Philosophy of Perception

I'm fascinated with issues surrounding perception, particularly color perception. This is a very diverse field, with contributions from philosophers, biologists, cognitive scientists, computer scientists, and others; a lot of philosophy really intersects here. I consider it to be a subset of philosophy of mind, but it's worth its own list here.


  • Action in Perception by Alva Noe. ISBN 978-0262640633. Noe begins the book by laying out his central thesis: "Perception is not something that happens to us, or in us; it is something we do." From this thought-provoking beginning, Noe goes on to discuss various aspects of perception (focusing primarily on vision, though touching on other modalities) with a clear, lucid style and judicious use of science and philosophy. This is philosophy at its best, in my opinion: Noe presents a concise and well-thought-out exploration of the problems in the field, and offers what he thinks is a plausible solution to them, all designed to help cognitive scientists untangle the conceptual confusion that he thinks surrounds this topic. Noe's a rising star in philosophy, and this is a great example of why.
  • Vision and Mind: Selected Readings in the Philosophy of Perception, an anthology edited by Alva Noe and Evan Thompson. ISBN 978-0262640473. Another very good anthology, which focuses mostly on modern (i.e. last 50 years or so) explorations of perception in all its forms. Some of the articles are a little dense, but that's to be expected.
  • Readings on Color, Volume One: The Philosophy of Color, an anthology edited by Byrne and Hilbert. ISBN 978-0262522304. Within perception, I have a particular interest in color perception. My senior thesis as an undergraduate / writing sample for graduate school admissions was on color perception, and this anthology helped me a lot as I was writing. I could probably include an entire section just on color, but we'll leave it at two books. The second one being...
  • The Quest for Reality: Subjectivism and the Metaphysics of Color, by Barry Stroud. ISBN 978-0195151886. This is a detailed and rigorous examination of what we mean when we say "Tomatoes are red." Barry Stroud writes in a lucid style (though not quite as clear as Searle's) that is very easy to follow. His eventual argument (that colors aren't really objective or subjective at all, but something in between) also resonates strongly with me, as it is very much in line with my own ideas (though Stroud appeals less directly to contemporary neuroscience).

I think that'll do to start with. Perhaps more to come later...

More Free Will Stuff: Comments From the Studio Audience

Michael, a physicist in the studio audience, had a lot to say about the last two posts on free will. His comments were well thought out and substantive (who says scientists don't know how to write?), so I thought I'd post them here for all to see, and for me to respond to.
My reaction is 180 degrees from yours. What possible point does moral responsibility have if I have no free will? What possible benefit is it to the world or to myself or to you if you assign moral responsibility to actions over which I had no choice, because I choose not to flagellate myself about those actions? Such a thing would seem to be a sort of moral version of an untestable hypothesis. Absolutely nothing in the world is changed by your decision to assign moral responsibility to me for my actions, compared to if you didn't assign any responsibility.

That's an interesting point. The push you're making here has vague shades of American Pragmatism--you're asking, in a nutshell, why this is even a question worth considering if we agree from the start that nothing substantive will change in the universe one way or the other. I share that concern, which is why I'm reluctant to embrace anything but semicompatibilism. As you said, much of the free will discussion is, at this point, made up almost entirely of untestable hypotheses (e.g. that quantum level indeterminacy is enough to make us free). The advantage of embracing semicompatibilism, it seems to me, is that it doesn't have anything to say one way or the other about the tension between freedom and causal determinism. What semicompatibilism offers, then, is an explanation of how we can intelligibly assign moral responsibility (whether or not this assignment itself is a free action is irrelevant--we only care whether or not it is intelligible) while at the same time remaining neutral about genuine alternative possibilities.


The only positive purpose I can think of for assigning moral responsibility for actions is if my doing so might change my future actions in a future similar situation. But you have hypothesized that this is impossible. So why would you or I ever choose to assign moral responsibility for actions I cannot control?

I'm not hypothesizing one way or another about whether or not your actions are free (though if pressed I'd say it's more likely that they're not than that they are)--again, that's the point of semicompatibilism, as far as I'm concerned. Fischer would say that you're suffering from a conceptual confusion when you ask "So why would you or I ever choose to assign moral responsibility for actions I cannot control," because you can control your actions in the relevant sense. Remember, he wants to make a distinction between regulative control (which you may or may not have), and guidance control (which you certainly do have). The whole point of the stories about the locked room, the evil neuroscientist, and your drive to work is to show that there is some real sense in which you can be said to have control over your actions, even if at no point did you have robust alternative possibilities.


I am really enjoying your blog and checking it every few days. If I am missing something important about this, please let me know. Also, if there are any reasonably accessible books on this topic I could read, I would love a recommendation.

So glad to hear it! Sometimes I wonder if anyone is actually reading this thing; glad to know that at least one person is. I'll throw up a book list in a separate post when I'm done here.

If you have any good things to read on the libertarian point of view I would appreciate those. I am a PhD physicist who has worked a lot on quantum effects in superconductivity, and on quantum noise. When you write "I've yet to see a libertarian account that impresses me as much of anything except wishful thinking (they also tend to make pseudoscientific-style appeals to quantum mechanics)," I have to tell you that it does seem possible that the loophole in determinism is in quantum mechanics.

I think it is important to realize that the "billiard ball" determinism of classical Newtonian physics does not extend to QM. I believe it is at worst an open question, and at best clear enough, that QM is NOT deterministic. So if the physical universe is NOT deterministic, then free will creates no conflict with physics.

Ok, good. I've always wanted to ask a real live physicist about this (you people are so tough to find): is QM indeterministic merely in the sense that we can't predict the behavior of the particles (i.e. their behavior really is a necessary product of the past plus the laws of nature, but in such a way that we are unable to predict it), or in the sense that the behavior really is undetermined? Do we even know?

In any case, I'm not convinced that any amount of quantum indeterminacy is sufficient for freedom in any relevant sense of the term. This intuition has two primary sources. First, it seems (at least from what we know so far) that quantum indeterminacy "cancels out" when we get to macro-level structures; that is, while it might be the case that the behavior of particles at the very, very, very small level cannot be reliably predicted, this has no impact when you get a large group of particles together, as for every particle that randomly "swerves left," another randomly "swerves right." By way of analogy, we might picture a large office building dedicated to producing 'widgets;' while we might not be able to predict with absolute certainty what any individual worker in the office is doing at any given time, we know that the company as a whole (i.e. at the macro scale) is doing some particular activity (i.e. producing widgets). Is this or is this not the case? I'll admit that I haven't had much formal instruction in quantum mechanics, and most of what I know is the result of having lived with a physics major for four years in college, and having done a fair bit of reading on the 'net.
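
For what it's worth, the statistical picture I'm leaning on here is basically the law of large numbers. Here's a toy sketch in Python--emphatically a cartoon of the averaging intuition, not a model of actual quantum mechanics:

import random

# Toy illustration only: each "particle" swerves left (-1) or right (+1)
# at random. Any single swerve is unpredictable, but the average swerve
# of a large collection settles closer and closer to zero as the count grows.
for n in (10, 1_000, 1_000_000):
    swerves = [random.choice((-1, 1)) for _ in range(n)]
    print(f"{n:>9} particles: average swerve = {sum(swerves) / n:+.4f}")

The aggregate prediction ("the average is roughly zero") gets sharper and sharper even though no individual swerve ever becomes predictable, which is all the office-building analogy above is meant to capture.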

Let's suppose for the sake of argument, though, that indeterminacy doesn't really cancel out at the low level--a typical libertarian line would be something about how the brain is a sufficiently complex system that it is affected by minute variations in quantum structure that simply don't affect less complex systems--and that there really is a degree of randomness at the macro level. Even if this is the case, I'm still not convinced that we can get from there to a robust kind of freedom. After all, it doesn't seem to me that the fact that my brain is influenced by random events in its microstructure gets me any closer to being "free" than the fact that my brain is just a big deterministic electro-chemical machine. In order to get real freedom out of randomness, we'd have to explain just how it is that randomness at the quantum level somehow ceases to be truly "random" at the macro level, yet still retains enough indeterminacy for some aspect of my consciousness to influence it. I'm not saying that this won't happen, but there are a whole lot of "ifs" in there.


I don't understand the appeal of a theory that gives us free will in determining what we think of some actions, but no free will over any actual actions. First, it is puzzling: if I ask you what you think, it is determined what you will tell me, since that is an action. Then it seems random whether your conversation about such a thing has any meaning at all. If you tend to be honest, then it is an amazing coincidence that your beliefs (over which you have free will) just happen to align with your predetermined statements about your beliefs.

I'm with Searle on this one (not that I know anything about him beyond what you said). If you ask me if I have free will, I choose to say yes. It is hard to even parse what is going on if I "choose" to say no.

To me it seems clunky to separate free will about what I believe from determinism about what I do, especially if the motivation for doing so is an idea that is at best unproven and at worst mistaken: that physics, as far as we know, is deterministic. As far as I know, QM theories in which a wave function collapses on observation are NOT deterministic. Attempts to make them deterministic (for example, hidden variable theories) have failed.

Anyway, great stuff! Thanks!

Mike


Again, the appeal is in reconciling our intuitions that A) the universe is deterministic and B) we are free and/or morally responsible for our actions. It [semicompatibilism] is not a separation of beliefs from actions per se, but a separation of alternative-possibilities ("could have done otherwise") control from the kind of control necessary for us to feel justified in attributing responsibility.

Thanks for the comments, Mike! Keep 'em coming!

Tuesday, December 25, 2007

Happy Holidays

Happy Holidays, blogosphere.

Wednesday, December 19, 2007

Some More Musings on Free Will

Someone once famously asked John Searle "If determinism were scientifically shown to be 100% true, would you accept it?" His response was "Think about what you're asking me: if it were scientifically proven that there are no free and deliberate actions, would you freely and deliberately accept that?" His response, while somewhat arrogantly dismissive (he's great like that), is pretty telling: no matter what we learn, we're never going to be able to escape the illusion (if it is an illusion) that our actions are free. In some sense, then, the problem of free will isn't really a problem at all--no matter what the answer is, nothing will change in our practical experience.

It's worth asking, then, why we even care about the problem in the first place, given the fact that even if we learn beyond a shadow of a doubt that we are not free, we will be unable to stop experiencing the world as if we are. One good reason to still care about free will, it seems to me, lies with moral responsibility. We have a powerful intuition that we cannot be held praiseworthy or blameworthy for anything we don't do of our own free will; if you put a gun to my head and demand that I rob a bank, it doesn't seem fair to blame me for robbing the bank. Similarly, if you put a mind control chip in my brain and force me to run around town fighting crime, it doesn't seem that I deserve to be praised for my law enforcement efforts. It seems that we need freedom to get responsibility.

This is the jumping-off point for semicompatibilism, a view proposed by John Martin Fischer in his essay "My Compatibilism;" it is this idea that I want to explore today. The first question we should ask is "what sort of freedom do we think we need in order to get moral responsibility?" Our intuition, I think, is that we need alternative possibilities--if we fail to praise or blame someone for actions taken while under the effect of mind control, it is precisely because we see that the person had no alternative possibilities; in other words, it only makes sense to praise or blame someone for actions he takes when he could have done otherwise. The idea is that without alternative possibilities, we can't have moral responsibility; the two notions are stuck together in a fundamental way (this is called the Principle of Alternative Possibilities, or PAP). Let's see if we can pry them apart.
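
To make the target explicit, PAP can be put as a rough schema (rough, because getting the formulation exactly right is itself a point of contention):

Responsible(x, A) --> CouldHaveDoneOtherwise(x, A)

That is, if an agent x is morally responsible for an action A, then x could have done otherwise than A. Prying the two notions apart means finding a case where the left side holds but the right side fails--which is exactly what the following thought experiments try to do.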

Let's suppose I kidnap you while you're asleep, and bring you to a locked room. When you wake up, you find yourself in strange, unfamiliar surroundings. On closer inspection, though, it turns out that the room you're in is full of all sorts of interesting things to explore and interact with (populate the room with whatever appeals to you). While you notice the closed door, it doesn't occur to you to try to open it, so taken are you with the diversions in the room. Let's suppose that you stay there for 45 minutes playing around before you try to leave the room, only to discover the locked door. You can't leave the room, nor could you have during the past 45 minutes.

Now, it seems to me that during the 45 minutes in which you were playing around in the room, you were staying there freely--that is, you chose to stay in the room, and any adverse consequences of your stay (say, if you forgot to pick up your kid from school) would be your moral responsibility. However, it turns out that you actually didn't have any alternative possibilities during your stay there--you couldn't have left the room even if you had wanted to (think about this in relation to the conditional analysis discussed last time). It looks like we're getting somewhere here in our attempt to pry apart responsibility and alternative possibilities.

That last thought experiment was originally penned by John Locke, but critics quite reasonably pointed out that in that situation, you would still have at least two distinct alternative possibilities--namely, to try to leave the room or not to try to leave the room. In response to this, the philosopher Harry Frankfurt proposed moving Locke's Locked Room inside the head, giving rise to what are now called "Frankfurt cases." Here's a typical Frankfurt case.

Suppose that I, an evil neuroscientist, have implanted a control chip in your brain. This control chip is designed to let you take your actions normally when I'm not in control, but gives me the option to override at any time and take control of you. It also lets me know what your intentions are a moment before you actually take the action, so I have a chance to countermand you if I'm quick. Suppose further that in addition to being an evil neuroscientist, I'm also a devout Republican, and that I decide to test my chip in the upcoming election by forcing you to vote Republican. Election day rolls around, and you step into the booth, totally unaware that I'm watching my chip's readout with my finger on the button to make you vote Republican. After some time considering your options, you decide that the Republican candidate best represents your beliefs, and cast your vote for him. I take my finger off the button, satisfied (though perhaps a bit disappointed at not being able to test my invention).

Here we can see how Locke's room has been moved inside your head, and once again the results of this thought experiment are interesting. In the case I just described, it seems that it would be very reasonable to assign moral responsibility to you for voting Republican, even though you really had no alternative possibilities (even if you'd wanted to vote Democrat, you wouldn't have). It looks like we've succeeded in prying those two notions apart.

Fischer talks about two different kinds of control that we might have over our actions: guidance control and regulative control. The kind that we tend to think we have is regulative control--that is, regulative control implies true freedom, and the ability to do otherwise at any juncture. Guidance control, though, is the kind of control being exercised in the stories presented above. Here's another example that will let you see the difference more clearly.

Suppose I secretly install a computerized guidance system in your car before you go to work. I've been researching your route for a few days now, so I know that you always proceed to work in exactly the same way. I program the guidance computer to kick in as soon as you get ready to back out of your driveway, and to quit as soon as you park. The computer guides your car to work just as if you had driven it there; in fact, it is so convincing and well designed that you never even notice that anything is amiss--you simply feel like you're driving as normal, as the car responds appropriately when you brake, turn the wheel, or change gears.

In this case, Fischer (and I) would argue that though you lack regulative control (if you tried to diverge from the normal path to work, the computer wouldn't let you), you do have guidance control. Just as in Locke's room and other Frankfurt-style cases, you can be said to have had a certain degree of control over your circumstances, even though you lacked regulative control (and alternative possibilities). Fischer contends that it is only this lesser notion of control that is necessary for moral responsibility, leaving open the question of whether or not we actually have regulative control.

I like this doctrine. It has problems, but I think it is parsimonious enough that a careful formulation will avoid many of them, particularly in light of the fact that it has nothing to say one way or the other about regulative control and alternative possibilities. It does seem to me that the major reason to care about the free will problem is for the sake of moral responsibility, and semicompatibilism shows us that we can be responsible without being truly free.

Tuesday, December 18, 2007

Some Musings About Free Will

When I was on my way to work today, I saw a car being pulled by a pickup truck with an attached tow-rope. To keep the car from careening off course, someone was sitting in the driver's seat and turning the steering wheel to keep the wheels aligned with the truck's. Being me, this got me thinking about free will--specifically, about the relationship between freedom and the ability to have done otherwise in any given situation.

Here's the free will problem briefly. As modern humans, we have two competing intuitions that seem to contradict each other. On one hand, we feel as if our actions are free; I feel like I freely chose to sit down and write this blog post, and that I could just as easily have chosen to do something else (my job, for instance). On the other hand, I also know enough modern physics to know that (at least at the level of cars and cats and computers and people) things don't work that way--I know that the behavior of everything is governed by deterministic physical laws, and see no reason to suppose that I, who am really just a complex physical system, should be any more exempt from those laws than (say) a bouncing ball. This idea--that the current state of any physical system is completely fixed by the state immediately preceding it plus the laws governing it--is known as causal determinism.


So, at least prima facie, it looks like we've got a contradiction here: it seems to us that our actions are totally free, but as well-educated members of 21st-century society, we have a hard time seeing how this can be in light of causal determinism. What's the solution to this puzzle? Philosophers, predictably enough, are divided on the answer. Roughly speaking, there are three camps that philosophers tend to fall into: libertarianism, incompatibilism, and compatibilism.

Libertarianism (not to be confused with the equally crazy political ideology) holds that causal determinism is false, and that we (humans) do indeed have free will. I've yet to see a libertarian account that impresses me as much of anything except wishful thinking (they also tend to make pseudoscientific-style appeals to quantum mechanics), so I'm not going to give it much consideration here. It could well turn out that libertarianism is the right position, but to know for certain we're going to need to advance physics, biology, and the rest of the natural sciences a lot more.

Incompatibilism (sometimes called "hard incompatibilism" to distinguish it from libertarianism, which is also technically an incompatibilist position) takes the position that freedom of the will and causal determinism are incompatible, and that causal determinism is in fact true. Letting D stand for "causal determinism is true" and F for "we have free will," the incompatibility claim can be put symbolically as:

~(D & F)
or, equivalently,
(D --> ~F) & (F --> ~D)
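
Since the hard incompatibilist asserts D outright in addition to the incompatibility claim, the position commits us to denying F. Spelled out as a quick derivation:

1. D             (premise: determinism is true)
2. ~(D & F)      (premise: determinism and free will are incompatible)
3. D --> ~F      (equivalent restatement of 2)
4. ~F            (from 1 and 3)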


Incompatibilism seems significantly more plausible to me than does libertarianism, but it too involves giving up a strongly held intuition--that our actions are free. This certainly doesn't necessarily mean that incompatibilism is wrong (lots of theories with radically counterintuitive implications turn out to be true; see relativity), but the idea that we are free agents is so strongly held (and seems so obviously true from our subjective frame of reference) that I think it warrants some serious investigation before we abandon it.

Some philosophers, investigating this problem, have endorsed a third view--called compatibilism--which holds that freedom of the will and causal determinism are not contradictory, as they first appear, but can actually coexist. There are almost as many flavors of compatibilism as there are compatibilists (it's probably safe to call this the dominant view among professional philosophers today), so I'm going to be as brief as I can in characterizing the view as a whole. Compatibilists, in general, reject the traditional analysis of freedom (we'll get there in a moment), and instead embrace what's called the "conditional analysis of 'can.'" Briefly, the conditional analysis (CA) goes something like this.

Suppose we have a man--let's call him Bill--who is trying to decide if he should go to work or stay home and watch TV all day. Under the traditional compatibilist (i.e. CA) account of freedom, it is accurate to say that Bill can go to work, and it is accurate to say that he can stay home all day. So far so good. What compatibilists mean by 'can,' though, is something very specific, namely: "Some agent x can do y iff it is the case that 'If x wanted to do y, x would do y' is true." That seems complex at first, but the idea is actually fairly intuitive: for any action (say, going to work), it is fair to say "Bill can go to work" if and only if (there's the 'conditional' part of the 'conditional analysis') it is the case that the proposition "If Bill wanted to go to work, Bill would go to work" is true. This maps fairly nicely onto our colloquial usage of 'can': I can get up and dance right now (if I wanted to get up and dance right now, I would), but I can't fly to the moon right now (if I wanted to fly to the moon right now, I still wouldn't). This view is compatible with causal determinism because it is entirely possible that my (and Bill's) desires might be the result of a deterministic system, but that does not matter (according to the compatibilists)--even if my desires are causally determined, as long as I act in accord with them (i.e. I am not being coerced, controlled, etc.), then I am acting freely.
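
To put the analysis in the same shorthand as before (this is just a schematic gloss, nothing official):

Can(x, y) <--> (Wants(x, y) > Does(x, y))

where '>' is the subjunctive "if...would..." conditional rather than the plain material '-->': the claim is about what x would do if x wanted to, not merely about what x's wants and doings happen to be. For Bill: Can(Bill, go to work) holds just in case "If Bill wanted to go to work, Bill would go to work" is true.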

There are, of course, problems with this, and I don't think it's entirely satisfactory. I have to leave work now, though, so more tomorrow on my favorite view: semicompatibilism.

Monday, December 17, 2007

Applications Done!

My graduate school applications for philosophy PhD programs are complete as of today. I applied to (in no particular order):

Berkeley
MIT
NYU
Princeton
Columbia
Chapel Hill
Ann Arbor
Rutgers
Chicago
Washington


Wish me luck...

Sunday, December 9, 2007

A for Atheism

Written without a dictionary or thesaurus:

Ah ha! Allies and enemies alike: attend with all acumen as an ardent and altogether alethiometric activist for all things accurate and actual allows access to astoundingly alliterative advice about appearance and reality. Anxiously, we animals await an awakening explanation according to which our adamant acceptance of an astonishingly inaccurate attitude about all things--an attitude altogether absurd according to which all are made for us--is accepted as accurate and argentine. Apropos, as anti-realism attacks the annals of human achievement, we, as able defenders of actuality, must arise and arrest acidic erosion of our agentive minds! Again, far from ancillary, we must attend to this aggravating and egregious addition of atrocious abandonment of reality in all active education with able acumen. Accordingly, the only allowance is atheism, an appropriate acceptance of attitudes a bit more attached to actuality, and antipodean to allegiance to a God--there is no acceptable alternative. Alas, as an ardent (albeit alliterative) atheist, our activist allows his tongue all astonishing freedom, so let me just add that it is my amazingly good honor to meet you, and you may address me as 'A.'


That is all.

Tuesday, December 4, 2007

Keeping Half an Eye on Behe

Decidedly unintelligent intelligent-design proponent Michael Behe has long been one of the loudest voices in the shouting match that the attempt to get supernaturalism into public schools often becomes. Behe, a biochemist at Lehigh University, is the man responsible for first coining the term "irreducible complexity," which refers to the supposed property of some biological features (at both the micro and macro level) which, Behe thinks, are far too complicated to have evolved by natural selection. Ignoring for the moment that this is a piss-poor argument that amounts to little more than "Well, I can't imagine how it could have happened; therefore, God did it!", I'm happy to report that Behe's been dealt a bit of a blow today.

One of Behe's favorite macro examples of an irreducibly complex feature is the human eye, which, he claims, is such a marvel of adaptation that it couldn't possibly have come about over time (he likes to point to a quote attributed to Darwin, in which the scientist laments "the difficulty of believing that a perfect and complex eye could be formed by natural selection"); after all, Behe asks, "what good is half an eye?" Now, it seems that scientists have answered that question, and the answer is "better than no eye at all." Researchers at the Australian National University, the University of Queensland, and the University of Pennsylvania have, through a cooperative effort, proposed a way in which the vertebrate "camera eye" could plausibly have evolved.

Their research paper is long and full of biological jargon (it's here if you want to read it anyway), but the first paragraph more or less sums up their point:

More than 600 million years ago (Mya), early organisms evolved photoreceptors that were capable of signalling light, and that presumably mediated phototaxis, predator evasion by shadow detection or vertical migration, and the entrainment of circadian rhythms. However, it was not until the Cambrian explosion, beginning around 540 Mya, that animal body plans began evolving very rapidly and image-forming eyes and visual systems emerged. The possession of advantageous capabilities or attributes, such as sight, rapid movement and armour, might have become crucial to survival, and might have led to an 'arms race' in the development of defensive and offensive mechanisms. In the various phyla eyes evolved with diverse forms, but apparently based on certain common underlying features of patterning and development, as exemplified by genes such as PAX6 and RAX (also known as RX), which have critical roles during neurulation and brain regionalization.
So what good is half an eye? Well, just having a light-sensitive organ (as opposed to the camera-like eye of modern vertebrates) could let you avoid predators (fast-moving shadow approaching? Better scoot back under cover!), navigate (Remember, the sun and moon are up!), or know when to go out and feed (No more light outside the shelter? Time to party!).

There's a lot more to this discovery that, while interesting, is beyond the scope (and readership) of this blog. If you want to read more about it, check the study itself! For now, though, suck it, IDers.