Friday, December 28, 2007

So You Want to Study Philosophy: A Reading List

In the last post, I promised to provide a list of solid introductory books for someone starting an investigation into the free will problem. I got to thinking a bit, and decided that I might as well take the time to give a reading list for someone starting to get into philosophy on a wide variety of topics. These books are, for the most part, what I'd classify as "advanced introductory" texts. In other words, they presume minimal prior philosophical knowledge, but also that the reader is intelligent and at least somewhat academically minded. Reading philosophy is very different from reading most other texts, including other academic texts; you'll see what I mean when you begin. In general, these are texts that would be likely to be included in the reading lists of advanced undergraduate courses. I'm only going to deal with a few fields (those in which I feel most confident in recommending books) at first, though I might add more subject areas to the list later. Without further ado, here's the list:

Philosophy of Mind

Philosophy of mind encompasses a wide variety of topics, including the mind-body problem (what's the relationship between the brain and the mind?), questions about the nature of consciousness, artificial intelligence, representation, and a myriad of other questions. Here are a few good texts to get you going (I'll warn you right now that I'm heavily biased toward John Searle, as I studied under him at Berkeley):

  • Mind: A Brief Introduction by John Searle. ISBN 978-0195157345. This is a great introductory text. Searle writes with an accessible, conversational tone, and covers many of the major ideas in the field before presenting his own.
  • Philosophy of Mind: Classical and Contemporary Readings, an anthology edited by David Chalmers. ISBN 978-0195145816. Great anthology that covers most aspects of the field from the Greeks onward. Some of the articles are a little dense (Kripke shows up more than once), but this is a perfect anthology to get you started.
  • How the Mind Works by Steven Pinker. ISBN 978-0140244915. Like Searle, Pinker writes in a very accessible tone, and can easily be understood by the uninitiated. Unlike Searle, Pinker embraces the computational theory of mind, which is the dominant theory today. Worth reading to understand where many philosophers and cognitive scientists are coming from. This was first published in 1997, so it's a bit dated now, but good nonetheless.
  • Intentionality: An Essay in the Philosophy of Mind by John Searle. ISBN 978-0521273022. This one's a bit more academic, but still benefits from Searle's characteristically clear and clean style. Intentionality is absolutely central to philosophy of mind, and this is a wonderful text discussing it.

Free Will and Action Theory


Questions about whether or not we are free tend to get lumped together with questions about how our actions come to be executed, or what it means to act rationally. The whole field is called "free will and action theory."

  • Four Views on Free Will by John Fischer, Derk Pereboom, Manuel Vargas, and Robert Kane. ISBN 978-1405134866. I had the opportunity to review an early copy of this book when I took an undergraduate action theory class with Manuel Vargas. I was impressed with it then, and I imagine that it's only gotten better. Each author advocates for one of the four major positions within the field (Kane for libertarianism, Pereboom for hard incompatibilism, Fischer for [semi]compatibilism, and Vargas for revisionism). This volume contains the essay "My Compatibilism" by Fischer, which is the article I referenced in the series of free-will-themed posts.
  • The Metaphysics of Free Will: An Essay on Control by John Fischer. ISBN 978-1557868572. This is a more detailed exploration of John Fischer's ideas about free will and moral responsibility, including his account of the distinction between guidance and regulative control. A great text on his brand of compatibilism.
  • The Oxford Handbook of Free Will, an anthology edited by Robert Kane. ISBN 978-0195178548. The Oxford Handbook series is predictably excellent, and this volume is no exception. As with any anthology, the content runs the gamut from the reasonably accessible to the very dense and technical. Still, it offers well-rounded coverage of most of the classical and contemporary ideas in the field.
  • Free Will (Blackwell Readings in Philosophy), another anthology edited by Robert Kane. ISBN 978-0631221029. The Blackwell series is also very good overall. This anthology focuses much more on the current debate than the Oxford anthology does, and is perhaps a bit more accessible to the layman, though somewhat less comprehensive.
  • Rationality in Action by John Searle. ISBN 978-0262692823. Searle's a die-hard libertarian about free will, one of the few positions he holds with which I disagree. In this book, he outlines the traditional philosophical concept of "acting rationally"--the orthodox view is basically that a 'rational' action is one motivated purely by the actor's own beliefs and desires--then, in predictable Searle fashion, turns the debate on its head and argues for exactly the opposite point. Even if you disagree with his conclusions, this book is worth reading for its clear treatment of difficult technical issues, and for the sake of appreciating Searle's gift for ingenious arguments against orthodoxy.

Ethics and Morality

Ethics is one of the five major fields of philosophy (the others are metaphysics, epistemology, aesthetics, and logic), and deals with questions of how we ought to live our lives. Here are some good places to start:

  • Consequentialism (Blackwell Readings in Philosophy), an anthology edited by Stephen Darwall. ISBN 978-0631231080. Another Blackwell entry here. Consequentialism is the idea that the relevant moral facts of the matter (i.e. the facts that determine the moral status of an action) are facts about that action's consequences, not facts about the actor's intentions. It is one of the most popular (if not the most popular) constellations of views on ethics today, and Darwall's anthology offers a broad introduction to arguments for and against it.
  • Virtue Ethics (Oxford Readings in Philosophy), an anthology edited by Slote and Crisp. ISBN 978-0198751885. Virtue ethics originated with Aristotle, and is based around the idea that being moral consists in having one or more virtues (e.g. courage, temperance, etc.). I'm not a big fan of this theory, but it's an important one in the ethics debate, and this is as good an anthology as any.
  • Groundwork of the Metaphysics of Morals, by Immanuel Kant. ISBN 978-0521626958. Ah, the Groundwork, how I love/hate thee. Kant advocates the idea that acting morally has nothing to do with bringing about the best consequences, but rather depends entirely on acting in accord with an overarching moral principle called the "Categorical Imperative." The essay is short, brilliant, and very, very difficult. Read it, read it again, read it again, and then maybe you'll have some idea what he's saying. Then maybe you can explain it to me. This is a tough one, but well worth the effort--when you finally finish it, you'll feel like you've just run a mental marathon, and you'll understand why Kant is widely considered to be one of the most brilliant humans to have ever lived (even if his theories might not be correct).
  • The Ethical Brain: The Science of Our Moral Dilemmas by Michael Gazzaniga. ISBN 978-0060884734. While perhaps not a philosophy book per se (Gazzaniga is a neuroscientist, not a philosopher), this is still an excellent book for anyone interested in the field of ethics. I've got a soft spot for anyone who attempts to resolve seemingly intractable philosophical problems with an appeal to neuroscience (if I had to sum up my overarching academic project in one sentence, that would be it), which is precisely what Gazzaniga does in this book. He considers various aspects of morality and a number of ethical dilemmas, then explains how they can be illuminated, and perhaps resolved, by a detailed understanding of neuroscience. This is a "pop science" book--i.e. written for mass consumption--so it is very readable and interesting, though those with a significant background in philosophy, neuroscience, or cognitive science will probably find it lacking in detail.

Philosophy of Perception

I'm fascinated with issues surrounding perception, particularly color perception. This is a very diverse field, with contributions from philosophers, biologists, cognitive scientists, computer scientists, and others; a lot of disciplines really intersect here. I consider it to be a subset of philosophy of mind, but it's worth its own list here.


  • Action in Perception by Alva Noe. ISBN 978-0262640633. Noe begins the book by laying out his central thesis: "Perception is not something that happens to us, or in us; it is something we do." From this thought-provoking beginning, Noe goes on to discuss various aspects of perception (focusing primarily on vision, though touching on other modalities) with a clear, lucid style and judicious use of science and philosophy. This is philosophy at its best, in my opinion: Noe presents a concise and well-thought-out exploration of the problems in the field, and offers what he thinks is a plausible solution to them, all designed to help cognitive scientists untangle the conceptual confusion that he thinks surrounds this topic. Noe's a rising star in philosophy, and this is a great example of why.
  • Vision and Mind: Selected Readings in the Philosophy of Perception, an anthology edited by Alva Noe and Evan Thompson. ISBN 978-0262640473. Another very good anthology, which focuses mostly on modern (i.e. last 50 years or so) explorations of perception in all its forms. Some of the articles are a little dense, but that's to be expected.
  • Readings on Color, Volume One: The Philosophy of Color, an anthology edited by Byrne and Hilbert. ISBN 978-0262522304. Within perception, I have a particular interest in color perception. My senior thesis as an undergraduate / writing sample for graduate school admissions was on color perception, and this anthology helped me a lot as I was writing. I could probably include an entire section just on color, but we'll leave it at two books. The second one being...
  • The Quest for Reality: Subjectivism and the Metaphysics of Color, by Barry Stroud. ISBN 978-0195151886. This is a detailed and rigorous examination of what we mean when we say "Tomatoes are red." Barry Stroud writes with a lucid style (though not so clear as Searle's) that is very easy to follow. His eventual argument (that colors aren't really objective or subjective at all, but something in between) also resonates strongly with me, as it is very much in line with my own ideas (though Stroud appeals less directly to contemporary neuroscience).

I think that'll do to start with. Perhaps more to come later...

More Free Will Stuff: Comments From the Studio Audience

Michael, a physicist in the studio audience, had a lot to say about the last two posts on free will. His comments were well thought out and substantive (who says scientists don't know how to write?), so I thought I'd post them here for all to see, and for me to respond to.
My reaction is 180 degrees from yours. What possible point does moral responsibility have if I have no free will? What possible benefit is it to the world or to myself or to you if you assign moral responsibility to actions over which I had no choice, because I choose not to flagellate myself about those actions? Such a thing would seem to be a sort of moral version of an untestable hypothesis. Absolutely nothing in the world is changed by your decision to assign moral responsibility to me for my actions, versus if you didn't assign any responsibility.

That's an interesting point. The push you're making here has vague shades of American Pragmatism--you're asking, in a nutshell, why this is even a question worth considering if we agree from the start that nothing substantive will change in the universe one way or the other. I share that concern, which is why I'm reluctant to embrace anything but semicompatibilism. As you said, much of the free will discussion is, at this point, made up of almost entirely untestable hypotheses (e.g. that quantum level indeterminacy is enough to make us free). The advantage of embracing semicompatibilism, it seems to me, is that it doesn't have anything to say one way or the other about the tension between freedom and causal determinism. What semicompatibilism offers, then, is an explanation of how we can intelligibly assign moral responsibility (whether or not this assignment itself is a free action is irrelevant--we only care whether or not it is intelligible) while at the same time remaining neutral about genuine alternative possibilities.


The only positive purpose I can think of for assigning moral responsibility for actions is if my doing so might change my future actions in a future similar situation. But you have hypothesized that this is impossible. So why would you or I ever choose to assign moral responsibility for actions I cannot control?

I'm not hypothesizing one way or another about whether or not your actions are free (though if pressed I'd say it's more likely that they're not than that they are)--again, that's the point of semicompatibilism, as far as I'm concerned. Fischer would say that you're suffering from a conceptual confusion when you ask "So why would you or I ever choose to assign moral responsibility for actions I cannot control," because you can control your actions in the relevant sense. Remember, he wants to make a distinction between regulative control (which you may or may not have), and guidance control (which you certainly do have). The whole point of the stories about the locked room, the evil neuroscientist, and your drive to work is to show that there is some real sense in which you can be said to have control over your actions, even if at no point did you have robust alternative possibilities.


I am really enjoying your blog and checking it every few days. If I am missing something important about this, please let me know. Also, if there are any reasonably accessible books on this topic I could read, I would love a recommendation.



So glad to hear it! Sometimes I wonder if anyone is actually reading this thing; glad to know that at least one person is. I'll throw up a book list in a separate post when I'm done here.



If you have any good things to read on the libertarian point of view I would appreciate those. I am a PhD physicist who has worked a lot on quantum effects in superconductivity, and on quantum noise. When you write "I've yet to see a libertarian account that impresses me as much of anything except wishful thinking (they also tend to make pseudoscientific-style appeals to quantum mechanics)," I have to tell you that it does seem possible that the loophole in determinism is in quantum mechanics.

I think it is important to realize that the "billiard ball" determinism of classical Newtonian physics does not extend to QM. I believe it is at worst an open question, and at best clear enough, that QM is NOT deterministic. So if the physical universe is NOT deterministic, then free will creates no conflict with physics.

Ok, good. I've always wanted to ask a real live physicist about this (you people are so tough to find): is QM indeterministic merely in the sense that we can't predict the behavior of the particles (i.e. their behavior really is a necessary product of the past plus the laws of nature, but in such a way that we are unable to predict it), or in the sense that the behavior is genuinely undetermined? Do we even know?

In any case, I'm not convinced that any amount of quantum indeterminacy is sufficient for freedom in any relevant sense of the term. This intuition has two primary sources. First, it seems (at least from what we know so far) that quantum indeterminacy "cancels out" when we get to macro-level structures; that is, while it might be the case that the behavior of particles at the very, very, very small level cannot be reliably predicted, this has no impact when you get a large group of particles together, as for every particle that randomly "swerves left," another randomly "swerves right." By way of analogy, we might picture a large office building dedicated to producing 'widgets;' while we might not be able to predict with absolute certainty what any individual worker in the office is doing at any given time, we know that the company as a whole (i.e. at the macro scale) is doing some particular activity (i.e. producing widgets). Is this or is this not the case? I'll admit that I haven't had much formal instruction in quantum mechanics, and most of what I know is the result of having lived with a physics major for four years in college, and having done a fair bit of reading on the 'net.
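To make the "canceling out" intuition concrete, here's a toy simulation in Python (and I stress toy: this is just the law of large numbers dressed up in quantum clothing, not real physics--actual quantum mechanics involves amplitudes and interference, not coin flips):

import random

def average_swerve(n_particles):
    """Give each particle a random 'swerve' of -1 (left) or +1 (right),
    then return the average swerve across the whole ensemble."""
    total = sum(random.choice((-1, 1)) for _ in range(n_particles))
    return total / n_particles

# Each individual particle is maximally unpredictable, but the ensemble's
# average swerve hugs zero more and more tightly as the ensemble grows.
for n in (10, 1000, 100000, 1000000):
    print(n, "particles: average swerve =", average_swerve(n))

The individual "swerves" stay completely unpredictable, but the macro-level average gets pinned ever more tightly to zero--which is exactly the shape of my worry about extracting freedom from micro-level randomness.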

Let's suppose for the sake of argument, though, that indeterminacy at the quantum level doesn't really cancel out on the way up--a typical libertarian line would be something about how the brain is a sufficiently complex system that it is affected by minute variations in quantum structure that simply don't affect less complex systems--and that there really is a degree of randomness at the macro level. Even if this is the case, I'm still not convinced that we can get from there to a robust kind of freedom. After all, it doesn't seem to me that the fact that my brain is influenced by random events in its microstructure gets me any closer to being "free" than the fact that my brain is just a big deterministic electro-chemical machine. In order to get real freedom out of randomness, we'd have to explain just how it is that randomness at the quantum level somehow ceases to be truly "random" at the macro level, yet still retains enough indeterminacy for some aspect of my consciousness to influence it. I'm not saying that this won't happen, but there are a whole lot of "ifs" in there.


I don't understand the appeal of a theory that gives us free will in determining what we think of some actions, but no free will over any actual actions. First, it is puzzling: if I ask you what you think, it is deterministic what you will tell me, since that is an action. Then it seems random whether your conversation about such a thing has any meaning at all. If you tend to be honest, then it is an amazing coincidence that your beliefs (over which you have free will) just happen to align with your predetermined statements about your beliefs.

I'm with Searle on this one (not that I know anything about him beyond what you said). If you ask me if I have free will, I choose to say yes. It is hard to even parse what is going on if I "choose" to say no.

To me it seems clunky to separate free will about what I believe from determinism about what I do. And especially if the motivation to do so is what is at best an unproven and at worst a mistaken idea that physics, as far as we know, is deterministic. As far as I know, QM theories in which a wave function collapses on observation are NOT deterministic. Attempts to make them deterministic (for example, hidden variable theories) have failed.

Anyway, great stuff! Thanks!

Mike


Again, the appeal is in reconciling our intuitions that A) the universe is deterministic and B) we are free and/or morally responsible for our actions. It [semicompatibilism] is not a separation of beliefs from actions per se, but a separation of the alternative-possibilities, could-have-done-otherwise kind of control from the kind of control necessary for us to feel that we'd be justified in attributing responsibility.

Thanks for the comments, Mike! Keep 'em coming!

Tuesday, December 25, 2007

Happy Holidays

Happy Holidays, blogosphere.

Wednesday, December 19, 2007

Some More Musings on Free Will

Someone once famously asked John Searle "If determinism were scientifically shown to be 100% true, would you accept it?" His response was "Think about what you're asking me: if it were scientifically proven that there are no free and deliberate actions, would you freely and deliberately accept that?" His response, while somewhat arrogantly dismissive (he's great like that), is pretty telling: no matter what we learn, we're never going to be able to escape the illusion (if it is an illusion) that our actions are free. In some sense, then, the problem of free will isn't really a problem at all--no matter what the answer is, nothing will change in our practical experience.

It's worth asking, then, why we even care about the problem in the first place, given the fact that even if we learn beyond a shadow of a doubt that we are not free, we will be unable to stop experiencing the world as if we are. One good reason to still care about free will, it seems to me, lies with moral responsibility. We have a powerful intuition that we cannot be held praiseworthy or blameworthy for anything we don't do of our own free will; if you put a gun to my head and demand that I rob a bank, it doesn't seem fair to blame me for robbing the bank. Similarly, if you put a mind control chip in my brain and force me to run around town fighting crime, it doesn't seem that I deserve to be praised for my law enforcement efforts. It seems that we need freedom to get responsibility.

This is the jumping-off point for semicompatibilism, a view proposed by John Martin Fischer in his essay "My Compatibilism;" it is this idea that I want to explore today. The first question we should ask is "what sort of freedom do we think we need in order to get moral responsibility?" Our intuition, I think, is that we need alternative possibilities--if we fail to praise or blame someone for actions taken while under the effect of mind control, it is precisely because we see that the person had no alternative possibilities; in other words, it only makes sense to praise or blame someone for actions he takes when he could have done otherwise. The idea is that without alternative possibilities, we can't have moral responsibility; the two notions are stuck together in a fundamental way (this is called the Principle of Alternative Possibilities, or PAP). Let's see if we can pry them apart.
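To put PAP in symbolic shorthand (my own gloss, in the same style as the formulas from the last post), for any agent S and action A:

MR(S, A) --> AP(S, A)

That is, being morally responsible for an action requires having had alternative possibilities with respect to it. Prying the notions apart, then, means hunting for a counterexample: a case where MR(S, A) holds but AP(S, A) fails, which would falsify the conditional.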

Let's suppose I kidnap you while you're asleep, and bring you to a locked room. When you wake up, you find yourself in strange, unfamiliar surroundings. On closer inspection, though, it turns out that the room you're in is full of all sorts of interesting things to explore and interact with (populate the room with whatever appeals to you). While you notice the closed door, it doesn't occur to you to try to open it, so taken are you with the diversions in the room. Let's suppose that you stay there for 45 minutes playing around before you try to leave the room, only to discover the locked door. You can't leave the room, nor could you have during the past 45 minutes.

Now, it seems to me that during the 45 minutes in which you were playing around in the room, you were staying there freely--that is, you chose to stay in the room, and any adverse consequences (say, if you forgot to pick up your kid from school) from your stay would be your moral responsibility. However, it turns out that you actually didn't have any alternative possibilities during your stay there--you couldn't have left the room even if you wanted to (think about this in relation to the conditional analysis discussed last time). It looks like we're getting somewhere here in our attempt to pry apart responsibility and alternative possibilities.

That last thought experiment was originally penned by John Locke, but critics quite reasonably pointed out that in that situation, you would have at least two distinct alternative possibilities--namely, to try to leave the room or not to try to leave the room. In response to this, philosopher Harry Frankfurt proposed moving Locke's Locked Room inside the head, giving rise to what are now called "Frankfurt cases." Here's a typical Frankfurt case.

Suppose that I, an evil neuroscientist, have implanted a control chip in your brain. This control chip is designed to let you take your actions normally when I'm not in control, but gives me the option to override at any time and take control of you. It also lets me know what your intentions are a moment before you actually take the action, so I have a chance to countermand you if I'm quick. Suppose further that in addition to being an evil neuroscientist, I'm also a devout Republican, and that I decide to test my chip in the upcoming election by forcing you to vote Republican. Election day rolls around, and you step into the booth, totally unaware that I'm watching my chip's readout with my finger on the button to make you vote Republican. After some time considering your options, you decide that the Republican candidate best represents your beliefs, and cast your vote for him. I take my finger off the button, satisfied (though perhaps a bit disappointed at not being able to test my invention).

Here we can see how Locke's room has been moved inside your head, and once again the results of this thought experiment are interesting. In the case I just described, it seems that it would be very reasonable to assign moral responsibility to you for voting Republican, even though you really had no alternative possibilities (even if you'd wanted to vote Democrat, you couldn't have). It looks like we've succeeded in prying those two notions apart.

Fischer talks about two different kinds of control that we might have over our actions: guidance control and regulative control. The kind that we tend to think we have is regulative control--that is, regulative control implies true freedom, and the ability to do otherwise at any juncture. Guidance control, though, is the kind of control being exercised in the stories presented above. Here's another example that will let you see the difference more clearly.

Suppose I secretly install a computerized guidance system in your car before you go to work. I've been researching your route for a few days now, so I know that you always proceed to work in exactly the same way. I program the guidance computer to kick in as soon as you get ready to back out of your driveway, and to quit as soon as you park. The computer guides your car to work just as if you had driven it there; in fact, it is so convincing and well designed that you never even notice that anything is amiss--you simply feel like you're driving as normal, as the car responds appropriately when you brake, turn the wheel, or change gears.

In this case, Fischer (and I) would argue that though you lack regulative control (if you tried to diverge from your normal path to work, the computer wouldn't let you), you do have guidance control. Just as in Locke's room and other Frankfurt-style cases, you can be said to have had a certain degree of control over your circumstances, even though you lacked regulative control (and alternative possibilities). Fischer contends that it is only this lesser notion of control that is necessary for moral responsibility, leaving the question of whether or not we actually have regulative control open.

I like this doctrine. It has problems, but I think it is parsimonious enough that a careful formulation will avoid many of them, particularly in light of the fact that it has nothing to say one way or the other about regulative control and alternative possibilities. It does seem to me that the major reason to care about the free will problem is for the sake of moral responsibility, and semicompatibilism shows us that we can be responsible without being truly free.

Tuesday, December 18, 2007

Some Musings About Free Will

When I was on my way to work today, I saw a car being pulled by a pickup truck with an attached tow-rope. To keep the car from careening off course, someone was sitting in the driver's seat and turning the steering wheel to keep the wheels aligned with the truck's. Being me, this got me thinking about free will--specifically, about the relationship between freedom and the ability to have done otherwise in any given situation.

Here's the free will problem briefly. As modern humans, we have two competing intuitions that seem to contradict each other. On one hand, we feel as if our actions are free; I feel like I freely chose to sit down and write this blog post, and that I could just as easily have chosen to do something else (my job, for instance). On the other hand, I also know enough modern physics to know that (at least at the level of cars and cats and computers and people) things don't work that way--I know that the behavior of everything is governed by deterministic physical laws, and see no reason to suppose that I, who am really just a complex physical system, should be any more exempt from those laws than (say) a bouncing ball. This idea--that the current state of any physical system is in principle knowable if you know the state immediately preceding it plus the laws governing it--is known as causal determinism.


So, at least prima facie, it looks like we've got a contradiction here: it seems to us that our actions are totally free, but as well-educated members of 21st-century society, we have a hard time seeing how this can be in light of causal determinism. What's the solution to this puzzle? Philosophers, predictably enough, are divided on the answer. Roughly speaking, there are three camps that philosophers tend to fall into: libertarianism, incompatibilism, and compatibilism.

Libertarianism (not to be confused with the equally crazy political ideology) holds that causal determinism is false, and that we (humans) do indeed have free will. I've yet to see a libertarian account that impresses me as much of anything except wishful thinking (they also tend to make pseudoscientific-style appeals to quantum mechanics), so I'm not going to give it much consideration here. It could well turn out that libertarianism is the right position, but to know for certain we're going to need to advance physics, biology, and the rest of the natural sciences a lot more.

Incompatibilism (sometimes called "hard incompatibilism" to distinguish it from libertarianism, which is also technically an incompatibilist position) takes the position that freedom of the will and causal determinism are incompatible and that causal determinism is true. To put the incompatibility claim symbolically:

~(D & F)
or
(D --> ~F) & (F --> ~D)

Note that these formulas capture only the claim that determinism (D) and freedom (F) can't coexist; the hard incompatibilist adds that D is in fact true, and so concludes ~F.


Incompatibilism seems significantly more plausible to me than does libertarianism, but it too involves giving up a strongly held intuition--that our actions are free. This certainly doesn't necessarily mean that incompatibilism is wrong (lots of theories with radically counterintuitive implications turn out to be true; see relativity), but the idea that we are free agents is so strongly held (and seems so obviously true from our subjective frame of reference) that I think it warrants some serious investigation before we abandon it.

Some philosophers, investigating this problem, have endorsed a third view--called compatibilism--which holds that freedom of the will and causal determinism are not contradictory, as they first appear, but can actually coexist. There are almost as many flavors of compatibilism as there are compatibilists (it's probably safe to call this the dominant view among professional philosophers today), so I'm going to be as brief as I can in characterizing the view as a whole. Compatibilists, in general, reject the traditional analysis of freedom (we'll get there in a moment), and instead embrace what's called the "conditional analysis of 'can.'" Briefly, the conditional analysis (CA) goes something like this.

Suppose we have a man--let's call him Bill--who is trying to decide if he should go to work or stay home and watch TV all day. Under the traditional compatibilist (i.e. CA) account of freedom, it is accurate to say that Bill can go to work, and it is accurate to say that he can stay home all day. So far so good. What compatibilists mean by 'can,' though, is something very specific, namely: "Some agent x can do y iff it is the case that 'If x wanted to y, x would y' is true." That seems complex at first, but the idea is actually fairly intuitive: for any action (say, going to work), it is fair to say "Bill can go to work" if and only if (there's the 'conditional' part of the 'conditional analysis') it is the case that the proposition "If Bill wanted to go to work, Bill would go to work" is true. This maps fairly nicely onto our colloquial usage of 'can': I can get up and dance right now (if I wanted to get up and dance right now, I would), but I can't fly to the moon right now (if I wanted to fly to the moon right now, I still wouldn't).

This view is compatible with causal determinism because it is entirely possible that my (and Bill's) desires might be the result of a deterministic system, but that does not matter (according to the compatibilists)--even if my desires are deterministic, as long as I act in accord with them (i.e. I am not being coerced, controlled, etc.), then I am acting freely.
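To put the conditional analysis in the same symbolic style as the formulas above (my own shorthand, not anyone's official notation):

Can(x, y) <--> (Want(x, y) --> Do(x, y))

Strictly speaking, the right-hand side should be read subjunctively ('if x wanted to y, x would y') rather than as a plain material conditional, but the shorthand captures the shape of the view: 'can' gets unpacked entirely in terms of what you would do if you wanted to, with no mention of where the wants themselves come from.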

There are, of course, problems with this, and I don't think it's entirely satisfactory. I have to leave work now, though, so more tomorrow on my favorite view: semicompatibilism.

Monday, December 17, 2007

Applications Done!

My graduate school applications for philosophy PhD programs are complete as of today. I applied to (in no particular order):

Berkeley
MIT
NYU
Princeton
Columbia
Chapel Hill
Ann Arbor
Rutgers
Chicago
Washington


Wish me luck...

Sunday, December 9, 2007

A for Atheism

Written without a dictionary or thesaurus:

Ah ha! Allies and enemies alike: attend with all acumen as an ardent and altogether alethiometric activist for all things accurate and actual allows access to astoundingly alliterative advice about appearance and reality. Anxiously, we animals await an awakening explanation according to which our adamant acceptance of an astonishingly inaccurate attitude about all things--an attitude altogether absurd according to which all are made for us--is accepted as accurate and argentine. Apropos, as anti-realism attacks the annals of human achievement, we, as able defenders of actuality, must arise and arrest acidic erosion of our agentive minds! Again, far from ancillary, we must attend to this aggravating and egregious addition of atrocious abandonment of reality in all active education with able acumen. Accordingly, the only allowance is atheism, an appropriate acceptance of attitudes a bit more attached to actuality, and antipodean to allegiance to a God--there is no acceptable alternative. Alas, as an ardent (albeit alliterative) atheist, our activist allows his tongue all astonishing freedom, so let me just add that it is my amazingly good honor to meet you, and you may address me as 'A.'


That is all.

Tuesday, December 4, 2007

Keeping Half an Eye on Behe

Decidedly unintelligent intelligent-design proponent Michael Behe has long been one of the loudest voices in the shouting match that the attempt to get supernaturalism into public schools often becomes. Behe, a biochemist at Lehigh University, is the man responsible for coining the term "irreducible complexity," which refers to the supposed property of some biological features (at both the micro and macro level) which, Behe thinks, are far too complicated to have evolved by natural selection. Ignoring for the moment that this is a piss-poor argument that amounts to little more than "Well I can't imagine how it could have happened; therefore, God did it!", I'm happy to report that Behe's been dealt a bit of a blow today.

One of Behe's favorite macro examples of an irreducibly complex feature is the human eye which, he claims, is such a marvel of adaptation that it couldn't possibly have come about over time (he likes to point to a quote attributed to Darwin, in which the scientist laments "the difficulty of believing that a perfect and complex eye could be formed by natural selection"); after all, Behe asks, "what good is half an eye?" Now, it seems that scientists have answered that question, and the answer is "better than no eye at all." Researchers at the Australian National University, the University of Queensland, and the University of Pennsylvania have, through a cooperative effort, proposed a way in which the vertebrate "camera eye" could plausibly have evolved.

Their research paper is long and full of biological jargon (it's here if you want to read it anyway), but the first paragraph more or less sums up their point:

More than 600 million years ago (Mya), early organisms evolved photoreceptors that were capable of signalling light, and that presumably mediated phototaxis, predator evasion by shadow detection or vertical migration, and the entrainment of circadian rhythms. However, it was not until the Cambrian explosion, beginning around 540 Mya, that animal body plans began evolving very rapidly and image-forming eyes and visual systems emerged. The possession of advantageous capabilities or attributes, such as sight, rapid movement and armour, might have become crucial to survival, and might have led to an 'arms race' in the development of defensive and offensive mechanisms. In the various phyla eyes evolved with diverse forms, but apparently based on certain common underlying features of patterning and development, as exemplified by genes such as PAX6 and RAX (also known as RX), which have critical roles during neurulation and brain regionalization.
So what good is half an eye? Well, just having a light-sensitive organ (as opposed to the camera-like eye of modern vertebrates) could let you avoid predators (fast-moving shadow approaching? Better scoot back under cover!), navigate (Remember, the sun and moon are up!), or know when to go out and feed (No more light outside the shelter? Time to party!).

There's a lot more to this discovery that, while interesting, is beyond the scope (and readership) of this blog. If you want to read more about it, check the study itself! For now, though, suck it, IDers.

Sunday, November 25, 2007

Project Prevarication, Part One: Portents of a Perniciously Potent Problem

I do love alliteration. Last night, a friend of mine and I were drinking and talking, and the subject of relationships came up. Perhaps understandably, the conversation turned from there to talk of lying. My friend told me two stories in which he...let's say "miscommunicated"...information either to someone with whom he was in a relationship or about someone with whom he was in a relationship (we'll get to the stories in a second). My philosopher sense started tingling at both stories, and we discussed them in relation to the definition of a lie for a while. For much of last night and virtually all of today, I've been preoccupied with the issues we raised last night, and it seems that the more I think about them the stickier the problems become.

In this first post, I just want to lay out the problem as I see it--I want to show that it is sufficiently complex to warrant further investigation. Once that's done, I'll start exploring some of the potential solutions in more depth, and see if I can find a solution that seems satisfactory. For now, though, an introduction to the problem.

Those verbally inclined readers who saw the title will no doubt have deduced that the problem at hand has to do with lying. Specifically, I'm concerned with doing a conceptual analysis of the notion of a "lie" to see what exactly we mean by that term; I want to investigate what counts as a lie (and what doesn't), and see what those things that count as lies have in common--if we succeed at this task, we'll be in a position to give a tight definition of the concept of a "lie."

Let me start by laying out the two cases my friend told me about. I'm going to call them Case 1 and Case 2.

1. My friend was involved in a long term relationship, but had recently cheated on his partner. His partner became suspicious, and asked him if there was something going on between him and the mistress. My friend responded with a sarcastic tone of voice, saying "Oh yeah, I'm totally sleeping with X." His partner, assuming that the sarcasm indicated that he hadn't really slept with X, was mollified.

2. In another instance, this same friend (this time single) was involved in a relationship that, for various reasons, he wanted to keep under wraps, preferring to give the public impression of a platonic relationship. The girl was coming to stay with him for a weekend, and a third party asked where she planned on sleeping. My friend replied "Well, my couch folds out into a bed, and I have a spare set of sheets." The third party, assuming that his question had been answered, dropped the inquiry.

My question, then, is a relatively simple one: did my friend lie in either case (or in both)? Case 1 in particular is problematic, I think--his literal words were truthful, but their intended purpose was (by his own admission) to give a false impression to his interlocutor. What are we to make of cases like this? Is a lie tied to the actual symbolized semantic content (i.e. words), or to the intention of the speaker? Can one lie with body language? What about (arguably) non-symbolic things like voice inflection or tone?

A few answers to these questions spring to mind, but the more I think about them, the less satisfactory they appear to be. I'm still thinking about this, and will be posting more in the near future. In the meantime, I'd love to hear your thoughts, dear reader.

Wednesday, November 21, 2007

Signs That the New "Breakthrough" You Read About Might Be Pseudoscience

My last post got me thinking that it might be useful to come up with a list of signs that the new invention/breakthrough/idea you heard about might be pseudoscience. As I said in the last post, in many instances pseudoscience is coming to take the place of spirituality or mysticism as the leading purveyor of crap. This is not to say that spirituality and mysticism aren't still crap (they most definitely are), just that there's a new kid on the scam artist block, and he's wearing a lab coat that looks suspiciously big on him.

Suppose you turn on the morning news, and see that the world is all atwitter because someone has apparently invented...let's say a cloaking device. Like any normal human being, you're excited about the prospect of invisibility. Before you start planning your grand bank heist, though, you might want to stop and ask yourself if this is for real. There are a few warning signs that you should look for that might point to the conclusion that said cloaking device is (unfortunately) bogus. Here they are:


1. The inventor announced his discovery in the press (or advertisements) before the journals.

This is a good early warning sign, as real scientists will virtually always present a legitimate discovery to the peer-reviewed community before touting that discovery in the public sphere (or trying to sell it). There's good reason for this: if the inventor made a mistake in his measurements, forgot to carry the one in his calculations, or has simply created something that is not reproducible, the peer-review process will catch the mistake before everyone gets all excited about something like free energy. That's one of the reasons the scientific process works so well--new discoveries get scrutinized from every angle before they go into production. Scammers know this--and also know that they've got nothing legitimate to offer--so they will announce their "amazing new product" to the much more credulous mainstream media first.

2. You've never heard of the guy pitching it (and neither has anyone else)

Is the inventor the night janitor at McDonald's? Be skeptical. Of course, there really are unrecognized prodigies out there, and it is within the realm of possibility that some undiscovered genius tinkering in his garage might give the world the flying car, but it is highly unlikely. Perhaps unfortunately, the fact of the matter is that someone without a scientific background is most likely not knowledgeable enough in physics, electrical engineering, chemistry, and the other disciplines that would be integral to the creation of our cloaking device. Most big discoveries come from people who have dedicated their lives to their discipline, and who have access to the resources (e.g. grad students) necessary to develop groundbreaking new technology.

3. It seems like too big a leap

Would it surprise you to learn that the very first computer had 2 gigs of RAM and a 3.0 GHz CPU? It should, because it's false. The first computer (depending on how you define the term) was either the abacus (~2500 BCE) or Babbage's Analytical Engine (1837) which, though never constructed, paved the way for modern computing in terms of design. The important point is that the modern computer was not invented from scratch overnight, but rather evolved slowly as small improvements on previous designs accrued. Isaac Newton famously said of his advancements (e.g. the invention of calculus and the revolution of physics) "If I have seen a little further it is by standing on the shoulders of Giants." Newton recognized that his own work built upon centuries of work by others, and most other legitimate inventors recognize the same; be skeptical if someone claims to have invented a cloaking device if you've not seen any prototypes/earlier work he might have built on. Giant steps don't just happen overnight.

4. The Inventor won't explain how it works, and won't demonstrate it in public

When asked to explain how his cloaking device works, does Mr. Inventor demur, dodge the question, or simply refuse to explain? That should be another warning sign. Someone with an actual amazing invention to share is going to want to tell everyone about it, describe how it works in detail, and give as many transparent demonstrations as possible--the more people who know about it, the better! If Mr. Inventor really has created a cloaking device, he should be shouting it from the rooftops and showing it off everywhere (the better to win a Nobel Prize). Be wary too of scheduled demonstrations that are called off at the last second because of "technical difficulties," or if Mr. Inventor claims that how his device works is "a secret."

5. The Inventor is willing to explain how it works, but the explanation is full of buzzwords and empty of substance.

This one's a little trickier, and really only applies to those with at least a modicum of scientific understanding (which likely means you, if you're reading this blog). Watch out for inventions whose inner workings are explained with hand-wavy appeals to "quantum mechanics," "electromagnetism" and the like. Just as with Number 1, this has to do with the fact that the average person doesn't know much about science (and knows it), and thus is easily wowed by lots of buzzwords. If you're a computer person, you already know this is true: next time someone asks you to fix his computer, try explaining the problem in totally nonsensical (but impressive sounding) terms (e.g. "Well, it looks like your fiber optics are overclocked, which is causing problems in your heat sink. I'll have to defragment your RAM and remagnetize your transistors. This might take a while"). He'll accept it, in just the same way that many people will accept an explanation of a cloaking device along the lines of "It creates quantum electromagnetic interference, which blocks light in the visible spectrum." It does what? That doesn't really make any sense, but if you didn't know that, you might accept it as an explanation. Do a little research, and see if the buzzword-heavy statement you saw on TV goes any deeper--if it doesn't, you've probably got a fraud on your hands.


It is worth mentioning that though these signs should make you skeptical, the presence of one (or even more) doesn't necessarily indicate a scam, and a clever scammer might find a way to hawk his product without triggering these warning signs. Be critical, be skeptical, and make up your own mind.

Tuesday, November 20, 2007

Some Reality Apologetics

The ostensible purpose of this blog (if you believe the title) is to present a cogent and entertaining defense of reality. In practice, that means that when I see evidence of egregious supernaturalism or magical thinking in the mainstream (Fox News doesn't count), I like to point it out as the poppycock it is. This is particularly important when the hucksters peddling the poppycock in question are using it to prey on the hopes and dreams of the innocent, which brings us to today's post.

Most of the time, supernaturalism of the sort that irritates me (so really any sort) comes in the guise of religion or spirituality. See this post for an example of this. I suppose it can only be expected, though, that given the more scientific and technical nature of modern society, supernaturalism has arisen in another guise--science as magic. This is not terribly new, I guess, as it dates back at least to the proverbial "snake oil" that's been around for centuries, but the level of sophistication has certainly risen; instead of potions and elixirs, now we get quantum entanglement and DNA.

Our lesson for tonight comes in the form of an "invention" built by one Mr. Danie Krugel, an ex-cop from South Africa. This "invention" (and I use the term loosely) is (wait for it) a "quantum DNA-GPS box" that can supposedly locate anyone anywhere in the world if it is fed a strand of hair or a bit of dead skin. Seeing the word "quantum" in an invention's title should immediately set off alarm bells, because it's a beloved buzzword of the modern-day shyster; there's so much we don't understand about quantum mechanics (and the average lay-person understands only a fraction of that) that an unscrupulous salesman can explain just about any seemingly magical effect by an appeal to quantum mechanics. Little-understood science, here, has taken the place of little-understood magic.

Leaving aside for a moment the question of how an ex-cop has the know-how to create ANYTHING harnessing quantum mechanics or DNA, let's have a look at how Mr. Krugel's device (supposedly) works. According to him, you insert a sample of the subject's DNA into a little box, it does "something" and then (somehow) uses quantum mechanics to spit out the subject's location in GPS coordinates. Useful? Hell yes. Plausible? Less so. Mr. Krugel has been less than forthcoming about how his device works, which should be another immediate warning sign--if he could really do what he claims, he'd be first in line for a Nobel Prize (among many other awards), and so we're forced to ask why he isn't publishing in scientific journals, distributing the device for other scientists to look at, and generally doing all the things that a legitimate inventor with a legitimate (and totally amazing!) invention would be doing.

Thus far, it seems that we've got nothing more than a run-of-the-mill crank on our hands--guy claims to invent something spectacular and revolutionary, demurs when asked to explain how it works, announces a big test of the device, fails to deliver on this promise, and then makes up excuses about why he failed to deliver. This is nothing new (Steorn's free energy machine, anyone?), but what Mr. Krugel is doing with his "technology" is disgustingly novel: he's claiming to be able to find abducted children, leading their parents on months-long treks through Africa only to have their quarry (who, he assured the parents, was "alive and on the move") discovered very dead in the forest, the victim of a snakebite, and then reporting the finding as a successful demonstration of his invention. This is, I think it goes without saying, absolutely despicable. Even if Mr. Krugel does not have malicious intent, does he really think that the best place to refine his prototype is in such a high-stakes arena? What about the countless man-hours that might be wasted looking in the wrong place if he is incorrect (as it seems he often is)? Does this man have no conscience?

Questions of morality aside, this device seems to "operate" on some very sketchy science. How does it pinpoint the location of the sample's "sister" DNA? How is it not fooled by the myriad of skin cells and hairs each of us sheds every day? How does a little tiny box extract DNA from a strand of hair, something that generally takes a forensic laboratory and copious amounts of time? Why would DNA exhibit any kind of special quantum interaction? It's just a molecule like any other; the claim seems akin to saying "put some salt in this box, and I'll locate all the other salt in the universe." Why does it seem to work only once in a while?

Here are a few easy things Mr. Krugel could do to demonstrate that his product is real:

1. Publish. Get a paper in a peer-reviewed journal about how the device works, and let other scientists critique it.

2. Give a public, open, clear demonstration of the device's function. Let's get a random sample of people, have them donate some DNA, and see how close Mr. Krugel's device comes to pinpointing their location. I'd be impressed by a 30% success rate (especially if he could get a location narrowed down to a half-mile radius or less), which is far less than his claimed 90% success rate--see the quick back-of-envelope sketch after this list for why even 30% would be astonishing.

3. Explain to the public how it works. No mystical appeals to mysterious physics, no jargon--just a simple, clear explanation about a new technology. People do it all the time, even with very complex equipment.
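For a sense of scale, here's that back-of-envelope calculation (all the numbers are my own assumptions, not anyone's published figures: a "hit" means naming a spot within a half mile, roughly 0.8 km, of the subject, and the subject could be anywhere in South Africa):

import math

# Area a single guess covers: a circle of radius ~0.8 km (half a mile).
hit_area = math.pi * 0.8 ** 2      # roughly 2.0 square km
# Area the subject could plausibly be in: South Africa, ~1,221,000 square km.
search_area = 1221000.0
print("blind-guess hit rate: about 1 in", round(search_area / hit_area))

Blind guessing hits about once in 600,000 tries, so a genuine 30% rate would be a six-orders-of-magnitude improvement over chance--Nobel Prize territory, if he could actually demonstrate it.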

Do these three things, Mr. Krugel, and you'll at least have me listening. Until then, I'm pretty sure you're a liar, and quite possibly a horrible human being as well.

Edit: One additionally disappointing thing about this story is the degree to which the media has jumped on the Danie Krugel bandwagon, reporting the story about his work "helping" to find a missing girl with the same dry earnestness you'd expect them to employ when discussing, you know, actual forensic techniques. This, more than anything else, is why shysters like Mr. Krugel pitch their "inventions" to the media rather than the scientific community; normal media is much easier to fool, and much easier to wow with terms like "quantum entanglement."

Friday, November 16, 2007

Hobo Names

Because it's Friday night, and I'm bored and alone at work, here's some comic relief.

Most of you are undoubtedly familiar with John Hodgman, Daily Show correspondent and "PC" in the "Mac vs. PC" ad campaign. For those not familiar, he's very, very funny and has an amazing gift for timing (which is 90% of comedy). I stumbled today on an hour-long MP3 of him reciting (in his usual deadpan) 700 hobo names he made up, while someone else plays a meandering rendition of "Big Rock Candy Mountain" on the guitar in the background. If you, like me, have an hour to kill, I'd highly suggest a listen: some of the names are absolutely hilarious. To whet your appetite, a sampling of some of my favorites:

Stingo the Bandana Origami Prodigy
Thermos H. Christ
Lord Winston Two Monocles
Abraham the Secret Collector of Decorative China
Linny, the Lint Collector
Pa Churchill
The Young Churchill
The Young Churchill's Hated Bride
Gooseberry Johnson, Head Brain of the Hobosphere
Blind Buck, and Woozy the Invisible Seeing Eye Dog
Fake Cockney Accent Allan Strip
Sir Francis Drank
Mariah Duck Face, the Beaked Woman
Shape-shifting Demon
Irving Alva Edison, Inventor of the Hobophone
Pring, Ultra-Lord of the Hobo Jungle
Forty Nine State Frank, the Alaskaphobe
Panzo the Spiral Cut Ham
Sanford Who Lacks Fingerprints
Hando Whatever That Lizard Is That Walks on Water
El Caballo, The Spanish Steed
Father Christian Irish, the Deep Fat Friar
Bum Hating Virgil Hatebum
Thor the Bum Hammer
Most Agree It's Killpatrick
Myron Biscuit Spear, the Dumpster Archaeologist
Shagrat, Orc of the Ozarks
Unger and his Dust Storm Bride
Rocky Shit Stained Mankowitz
Rocky Shit Stained Mankowitz Part II: The Quickening
Experimental Hobo Infiltration Droid 61-K

I especially like "Sir Francis Drank."

Link.

Philosophy for Kids

Courtesy of the British Psychological Society's Research Digest Blog comes confirmation of something I've been saying for a long time: philosophy (i.e. advanced conceptual critical thinking skills) ought to be included in any school's curriculum in just the same way that training in math, reading, and social studies is. Snip from the digest:

One hundred and five children in the penultimate year of primary school (aged approximately ten years) were given one hour per week of philosophical-inquiry based lessons for 16 months. Compared with 72 control children, the philosophy children showed significant improvements on tests of their verbal, numerical and spatial abilities at the end of the 16-month period relative to their baseline performance before the study.

Now Topping and Trickey [the study authors] have tested the cognitive abilities of the children two years after that earlier study finished, by which time the children were nearly at the end of their second year of secondary school. The children hadn't had any further philosophy-based lessons but the benefits of their early experience of philosophy persisted. The 71 philosophy-taught children who the researchers were able to track down showed the same cognitive test scores as they had done two years earlier.

These ten-year-olds got one hour per week of philosophical training for 16 months, and were still showing "significant improvements" two years later (I don't know precisely what that means, because I don't want to pay $18 to read the study--anyone at a university, feel free to look it up and comment). This is even more interesting when you consider that the study also found that, two years later, the control group (those without any philosophical training) had actually declined cognitively, while the study group had continued to gain. Those are rather impressive results.

Critical thinking--the ability to objectively and critically evaluate the statements and arguments of others, form reasoned opinions, and express those opinions clearly and precisely--is an absolutely vital skill these days (even if you're not going to dedicate your life to philosophy), and this is definitely something we should be training our kids in. Who knows--maybe we'll even get a better President out of the deal some day...

Thursday, November 15, 2007

Some More Extended Mind Stuff: External Memory

National Geographic recently ran a feature on memory, highlighting two well-known case studies: "EM" (who has both retrograde and anterograde amnesia as the result of a viral infection that destroyed most of his hippocampus, the part of the brain responsible, among other things, for turning experiences into long-term memories, leaving him in a "spotlight" of consciousness, aware only of his sensations from moment to moment), and "AJ" (who has a hyper-accurate memory, and can recall in detail every day of her life since seventh grade). Additionally, they discuss memory in general, saying the following about the effect of technology on memory:

But over the past millennium, many of us have undergone a profound shift. We've gradually replaced our internal memory with what psychologists refer to as external memory, a vast superstructure of technological crutches that we've invented so that we don't have to store information in our brains. We've gone, you might say, from remembering everything to remembering awfully little. We have photographs to record our experiences, calendars to keep track of our schedules, books (and now the Internet) to store our collective knowledge, and Post-it notes for our scribbles. What have the implications of this outsourcing of memory been for ourselves and for our society? Has something been lost?

My disagreement with the extended mind thesis is documented, but that ontological dispute aside, I'm certainly willing to admit that the average person--even the average educated person--must remember many fewer facts today than 50 or 100 years ago in order to function. This is, as the National Geographic indicates, because the average person has immediate access to much more reliable information than the average person 50 or 100 years ago. Does the fact that the access to information is immediate and trustworthy (with the aid of the appropriate technology) make it memory in anything like the sense of biological memory? No--but that's not my point here. I want to discuss the question raised in the last two sentences of the National Geographic snip above: has something been lost in the switch from storing information to skill at accessing information? Again, I think the answer is a resounding 'no.'

It seems, actually, that something has been gained. The October/November issue of "Scientific American: Mind" magazine includes an article by James R. Flynn about the 'Flynn Effect'--the enormous increase in IQ in the last 100 years. The article notes that "Gains in Full Scale IQ and Raven's [IQ test score] suggest that our parents are some nine to 15 points duller than we are, and that our children [i.e. me and, more than likely, you] are some 9 to 15 points brighter." Taking the midpoint of that range and stepping back two generations from today's mean of 100, that would imply that our (i.e. post-baby-boomers') grandparents' generation had an average IQ of around 75--that's mildly mentally retarded by today's standards. I think most people will agree that their grandparents were not mentally handicapped, so it seems that something odd is going on here.
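
To make the arithmetic behind that claim explicit, here's a back-of-the-envelope sketch of my own (the 12-point step is simply the midpoint of the quoted 9-to-15-point range, not a figure from the article):

```python
# Back-of-the-envelope Flynn effect projection. My own illustration;
# the 12-point figure is an assumed midpoint of the article's 9-to-15 range.
MEAN_IQ_TODAY = 100       # IQ tests are re-normed so the current mean is 100
GAIN_PER_GENERATION = 12  # assumed midpoint of the quoted range

def implied_mean_iq(generations_back: int) -> int:
    """Mean IQ of an earlier generation, scored against today's norms."""
    return MEAN_IQ_TODAY - GAIN_PER_GENERATION * generations_back

print(implied_mean_iq(1))  # parents: 88
print(implied_mean_iq(2))  # grandparents: 76, i.e. "around 75"
```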

The Scientific American article argues (convincingly, I think) that this paradox is explained by the fact that people today are, from a very young age, trained much more for analysis and data processing than ever before. The obvious question, which I don't think the article satisfactorily addresses, is "why is this the case?" The answer, I think, should be relatively obvious: the fact that I have very easy access to vast information--more information than my grandparents could have dreamed of--so long as I know how to access it means that much, much more of my brain is dedicated to skill at accessing and processing data compared to my grandparents, who had to dedicate substantial portions of their brains to storage of data.

Again, I think the question of whether or not my skill at using Google constitutes memory (it doesn't) is irrelevant to this discussion: the point is that the fact that I know I can access virtually any fact virtually whenever I want means that I don't have to remember, say, the formula for the area of a pyramid. Instead, I can spend the effort that I might have spent memorizing all those facts on increasing my critical thinking skills--which, again, is what IQ tests primarily measure.

Access to information has increased at an amazing rate even within my (relatively short) lifetime, and I suspect that it will continue to increase as technology advances. It is interesting to consider how human intelligence will advance in kind.

Tuesday, November 13, 2007

Freerice.com

I'm a big fan of vocabulary use, and I'm a big fan of people not starving to death. Thus, I'm a very big fan of http://www.freerice.com. The project, run by the same folks that run http://www.poverty.com, is the sort of thing that makes you wonder "why didn't I think of this?"

The crux of the site is a multiple-choice vocabulary quiz--you'll be given a word and four choices for the definition, then asked to pick the correct one. For every word you correctly define, FreeRice will donate 10 grains of rice to the UN's End World Hunger campaign. Ten grains may not seem like much, but it adds up rather quickly; correctly define ten words, and you've donated a bowl of rice that will feed one person.


The game has 50 "levels," and dynamically scales as you progress; when you miss a few questions in a row, it dials itself back a difficulty level, and when you get a few questions right in a row, it bumps the difficulty up. According to the website, it is "very rare" for people to get past level 48. My current record stands at 47 (up from 46), so I urge you to go to the website and try to beat me (oh, and feed some people while you're at it).
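
For the curious, the scaling mechanic is easy to picture in code. The sketch below is my own guess at the logic, not FreeRice's actual implementation: the 50 levels and 10 grains per correct answer come from the description above, while the three-in-a-row threshold is pure assumption.

```python
# Hypothetical sketch of FreeRice-style adaptive difficulty.
# Streak-based up/down adjustment as described in the post;
# the threshold of 3 is assumed, not published by the site.
class VocabQuiz:
    MAX_LEVEL = 50
    GRAINS_PER_CORRECT = 10
    STREAK_TO_ADJUST = 3  # assumption

    def __init__(self):
        self.level = 1
        self.grains_donated = 0
        self.streak = 0  # positive: consecutive correct; negative: consecutive misses

    def record_answer(self, correct: bool) -> None:
        if correct:
            self.grains_donated += self.GRAINS_PER_CORRECT
            self.streak = max(self.streak, 0) + 1
            if self.streak >= self.STREAK_TO_ADJUST:
                self.level = min(self.level + 1, self.MAX_LEVEL)
                self.streak = 0
        else:
            self.streak = min(self.streak, 0) - 1
            if self.streak <= -self.STREAK_TO_ADJUST:
                self.level = max(self.level - 1, 1)
                self.streak = 0

quiz = VocabQuiz()
for _ in range(10):
    quiz.record_answer(correct=True)
print(quiz.level, quiz.grains_donated)  # 4 100: ten right answers, one bowl of rice
```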

Monday, November 12, 2007

New Look

How do you like the new look? Thanks to my kick-ass friend Brian (who, sadly, has no blog to link to) for making the header graphic!

No Really, Democrats Are Different (Somehow...Maybe)

I hope everyone's getting excited for the approaching National Bible Week (November 18th--one week from today!). Congress certainly is, as evidenced by this gut-wrenching parade of Democrats taking up actual Congressional time to talk about how awesome a 2000+ year old piece of poorly written fiction is. Watch the video (care of Shakesville), but maybe take an anti-emetic first:

The Democrats like to talk about being the party of reason and intellect (opposing themselves to those dirty irrational Republicans), but every now and then something like this comes along to expose that for the egregious lie it is. How is it possible that in the 21st century, with all the problems of war, terrorism, global climate change, unemployment, and more war, these leaders of the Democratic party think that it's acceptable to take up any Congressional time to praise this garbage?

PZ Myers over at Pharyngula takes this opportunity to ask "Can We Form a Rationalist Party Now?", a dream that I certainly share. However, Alonzo Fyfe, the Atheist Ethicist, points out that this might not be the best idea. Snip:

Assume, for the sake of argument, that rationalists tend to support Democrats over Republicans. If a Rationalist Party removes the rationalist vote from the Democratic Party, the Democratic candidates are going to have to make up those votes somehow. The only option is to embrace theocracy even more strongly than it has in the past, in order to seduce a larger percentage of theocratic voters out of the Republican Party. The result is to drive both major parties (the only parties capable of fielding viable candidates) even closer to theocracy.

This is a real danger, as we saw in the 2000 election with the Green Party--in a two-party system, mobilizing any relatively large minority group (as rationalist voters would be) always runs the risk of simply making them not count. As Alonzo points out, the magic number in the American system is 51%; anything lower than that and you might as well not exist. This, of course, isn't true across the board, but at a national scale it is more or less the case. Our system, in contrast to democracies with proportional representation, is a winner-take-all two-party system: it doesn't matter who came in second or third--only the winning party counts. Unfortunately, that means that even if the Green Party (or the Rationalist Party) makes a very good showing, they don't win anything.
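
To see the spoiler dynamic concretely, here's a toy plurality election (all vote shares invented for illustration):

```python
# Toy winner-take-all election illustrating the spoiler effect.
# Vote shares are invented for illustration only.
votes = {"Theocrat": 48, "Democrat": 47, "Rationalist": 5}

print(max(votes, key=votes.get))  # Theocrat: wins on a mere plurality

# Fold the rationalist vote back into the Democratic coalition,
# and the outcome flips:
votes["Democrat"] += votes.pop("Rationalist")
print(max(votes, key=votes.get))  # Democrat
```

Five percent of the electorate organizes around its actual preferences, and the net effect is to elect the candidate it likes least. That's the whole worry in miniature.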

So what's the answer? In the long run, election reform; the system in the United States is outdated and in need of several serious overhauls. In the short term, I suspect that Alonzo is correct in that rationalist voters should focus on establishing a "Rationalist Caucus" within the Democratic party rather than a "Rationalist Party" itself. In other words, the best option is to try to steer the Democrats away from blindly endorsing superstition (as they do in that video) and toward recognizing that a substantial number of their constituents are not impressed when they profess to blindly accept the false beliefs of people long dead.

Oh, and how about a "National Constitution Week" to go with that National Bible Week?

Tuesday, November 6, 2007

Slow...

I haven't abandoned this blog, there just haven't really been any items I've felt like commenting on recently. Things will pick up again--they always do...

Thursday, November 1, 2007

Brain Bag

Every now and then, I wish it were socially acceptable for me to carry a purse. Now is one of those times, given the design of Jun Takahashi's new handbag:

Kick. Ass.

Linky.

(Hat-tip to BoingBoing)

Wednesday, October 24, 2007

Harry Potter and the Ontology of Fiction (Oh, and Han Solo's Here Too)

In case you haven't heard, Dumbledore is apparently gay. This has, predictably enough, caused an enormous uproar among fans and critics of the Harry Potter series alike--witchcraft and homosexuality: that's one recipe sure to piss off the religious right. Nice job, JKR. Now, I'm sure you're asking yourself "How can this possibly get any more ridiculous? I mean, we've spent days now discussing the sexuality of a fictional character, right? Nothing can be more inane than that."

Well, lucky for you, I'm here to save the day and take things to a whole new level. I think (and I'm not alone here) that this revelation raises very interesting questions about the ontology of fiction. Yes, that's right: we're going to go from discussing whether or not Dumbledore is gay to discussing what philosophical issues it raises to ask whether or not Dumbledore is gay. Awesome.

This discussion started over at Show-me the Argument, the blog of the University of Missouri philosophy department (which is, I just realized, titled with a pun). Here's the quote that got this whole thing rolling:

The big revelation of the night came when she was asked if Dumbledore had ever found love. With a sigh, she seemed on the verge of saying no, but then revealed, “my truthful answer to you… I always thought of Dumbledore as gay.” After a collective gasp, the audience roared with applause. Rowling was clearly astonished by the positive reaction and exclaimed, “if I’d known it would make you so happy, I would have announced it years ago!”

Scott goes on to note that this raises a very interesting question with two possible answers. The first:

...the question came up as to whether Rowling’s revelation changed the truth of the matter or the text stood alone as a sole authority. For those who accept textual authority, something like the following conditional is affirmed:

(TA) A proposition p about a story S (revealed by a text T) is true only if (1) p is explicitly stated in T or (2) p is entailed by propositions explicitly stated in T.
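
For those who like their conditionals formal, here is one way to render TA symbolically (the predicate names are my own shorthand, not Scott's):

```latex
% One possible formalization of (TA); the notation is mine.
% True_S(p): p is true in story S.
% Stated_T:  the set of propositions explicitly stated in text T.
% \vDash:    entailment.
\mathrm{True}_S(p) \;\rightarrow\;
  \Big( p \in \mathrm{Stated}_T
    \;\lor\; \exists\, \Gamma \subseteq \mathrm{Stated}_T .\; \Gamma \vDash p \Big)
```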

In other words, Rowling is not the authority on her own work--she has no more say as to whether or not Dumbledore is gay than I do. While it's true that she did have the freedom to say one way or the other during the course of the series, the fact that she didn't now means that she's lost her opportunity--once her work left her own mind and entered the "collective consciousness," she lost any privileged access to it, and became just another reader. The other possibility, as outlined by Scott:

[Y]ou could hold the Lockean view that the truths about the Harry Potter world are all “in the mind of Rowling” and what she has chosen to reveal in the seven books are only a portion of the story. I think this view is supported by the quote above: “if I’d known it would make you so happy, I would have announced it years ago.” This seems to suggest that Dumbledore’s sexuality was a fact about the story that she already knew – but chose not to explicitly reveal.

More support for this Lockean picture comes from the same event:

Rowling told the audience that while working on the planned sixth Potter film, “Harry Potter and the Half-Blood Prince,” she spotted a reference in the script to a girl who once was of interest to Dumbledore. A note was duly passed to director David Yates, revealing the truth about her character.


I think this second position is more compelling, but there are many other issues raised here. To explore them, I have to bring in Han Solo and Star Wars. Yes, wizards and Jedi in the same post. Am I going downhill? Here's a revised and expanded copy of my comments from Show-Me:

I see the question as a matter very much related to the question of non-existent intentional objects, which most people seem to accept as a legitimate field of philosophical investigation. This question, briefly, goes something like this: when I have a belief about (say) Albus Dumbledore, what exactly is the content of my belief? If I believe, for instance, the proposition 'That cat is white,' it seems fairly easy to parse my intentional state--I have a belief about this creature here (a cat) involving certain color properties (its whiteness). The same, however, cannot necessarily be said of fictional characters. If I believe the proposition 'Dumbledore has a three-foot beard,' what do I really believe? I can't literally believe 'Dumbledore has a three-foot beard,' because there is no Dumbledore really, and thus he doesn't have a three-foot beard (or anything at all)--it seems akin to saying 'The present King of France is bald.' Still, it seems that I really do have a belief about something.

While this is an interesting question, it's not the one I want to address here. I only bring it up in response to critics who might try to argue, as commenter David did on Show-Me's post, that:

I was quite surprised when I first realised that there is a serious philosophical discourse concerning fictional objects, apart from the usual problems with any non-existent intentional objects. It struck me as a complete non-issue. There is no truth, no fact of the matter, concerning what happens in fiction - that’s what makes it fiction. Fiction happens when we use the forms of language that we usually reserve for reporting truth to say something that everyone knows isn’t true (but has some value for us anyway) [...] This is not to suggest that there is no problem of intentional objects. Just that there is no extra problem of fictional objects.


It seems to me that the question being posed here is a question concerning "fictional ontology"--which is to say, a question concerning the nature of fictional characters. This question, I think, is a natural extension of the question of non-existent intentional objects.

The "Han Shot First" debate, I think, really gets to the heart of this, because it's a situation where both the first instance (H1, where Han shot first) and the second instance (H2, where Han shot second) are "canon"--that is, they both originated with the creator of the franchise. If we want to buy the Lockean-ish view and reject TA, how can we reconcile the fact that the "author" of the series (i.e. Lucas) created BOTH H1 and H2, since H1 implies not H2 and vice versa? In other words, if we're right that the author of a series has some kind of privileged epistemic and ontological access to the world he creates (i.e. he knows the way the world is, and by knowing simultaneously makes it the case that the world is that way), how can we deal with an author changing his mind?

As far as I can see, there are three ways we could go with this. First, we could say that H1 is true because it was the original decree by the author. This approach has the advantage of parsimony--it is quite clear in any given dispute which account is the correct account (just check which one the author advocated first). It does, however, break down in more borderline cases: what if the author just hints at something, but later decides that the opposite of what he hinted at is true? Does this "law" only apply to explicit assertions? How explicit? This also seems to be in conflict with the "author as the final authority" stance on fictional ontology, which is something we've already accepted (at least for the sake of discussion)--if the author really is the ultimate authority on his work, then it seems like he must always have the option to revise his account.

Second, we could say that H2 is true, because it is the most recent account offered by the author. I'm more inclined to lean toward this one, but it carries problems of its own. If we accept the proposition that an author is always (and infallibly) correct about the universe he creates, then if he changes his mind, when does the old account cease to be right and the new one begin to be right? Rowling touched on this when she said "If I'd known it would make you so happy, I would have announced it years ago." To get back to our original case, suppose for a moment that instead of no clues to Dumbledore's sexuality, we'd been given subtle hints that he was heterosexual (think of the hint mentioned above as part of the movie script). If, despite these hints, Rowling were to come back later and say "Dumbledore's gay, and those hints were just red herrings," would it be correct to say Dumbledore was gay or straight? If Rowling has known for some time that Dumbledore was gay, when did he "become" gay? Was it when Rowling decided he was gay, or when she announced it publicly? If the latter, what is so special about giving the idea over to the public (again, to the "collective consciousness") that makes the proposition become true?

This might incline us toward the third possibility, which is that he's neither gay nor straight until it's explicitly stated one way or another. While this might seem tempting at first, I'm not sure if this position will ultimately turn out to be viable, as it simply pushes the dispute back a step. If Dumbledore's sexuality is indeterminate (Schroedinger's.....no, let's not go there) until it's made explicit in the fictional world, how might it be made explicit? Again, it seems like we're forced to take one of two horns: either he's gay if and only if Rowling says so, or he's gay if and only if Rowling says so in the original text. Back to square one, and forced to grapple with the dilemma again. Additionally, we still don't know how to handle cases like H1 vs. H2, when the author explicitly states one thing, then explicitly states another.

How might we resolve this dilemma? It seems to me that we can break this problem up into two parts: the epistemic problem of fiction and the ontological problem of fiction--separating it into these two constituents might get us closer to a resolution.

As I said, I'm inclined more toward an H2-style explanation--that the author is infallibly correct in any statement he makes about his fictional universe, up to and including when he revises earlier accounts. On this account, there are two things that must obtain before I can truly utter the proposition 'I know that Dumbledore is gay' (leave aside for the moment questions concerning the content of that belief): it must be true that Dumbledore is gay (a question of fictional ontology), and I must have access to that information (a question of fictional epistemology). To put it simply, it seems that we've got some conceptual confusion here between the question of whether or not Dumbledore is gay (which depends on what Rowling thinks), and whether or not you and I know that Dumbledore is gay (which depends on what information Rowling has revealed).
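
To make that split vivid, here's a toy model of the view I'm defending (entirely my own construction, purely illustrative): the truth of a proposition tracks the author's latest decree, while what readers can know tracks the latest decree that has actually been revealed.

```python
# A toy model of the "latest authorial decree wins" (H2-style) account.
# Entirely illustrative; the names and structure are my own invention.
class FictionalWorld:
    def __init__(self):
        self._decrees = []  # ordered (proposition, value, revealed) triples

    def decree(self, prop: str, value: bool, revealed: bool = False) -> None:
        """The author settles a fact, optionally making it public."""
        self._decrees.append((prop, value, revealed))

    def truth(self, prop: str):
        """Fictional ontology: the author's most recent decree, public or not."""
        for p, value, _ in reversed(self._decrees):
            if p == prop:
                return value
        return None  # indeterminate: no decree yet, so neither true nor false

    def known_truth(self, prop: str):
        """Fictional epistemology: the most recent decree readers can access."""
        for p, value, revealed in reversed(self._decrees):
            if p == prop and revealed:
                return value
        return None  # nothing revealed; readers can't say anything meaningful

potterverse = FictionalWorld()
print(potterverse.truth("Dumbledore is gay"))        # None: indeterminate
potterverse.decree("Dumbledore is gay", True)        # Rowling decides...
print(potterverse.truth("Dumbledore is gay"))        # True: a fact of the matter
print(potterverse.known_truth("Dumbledore is gay"))  # None: we don't know it yet
potterverse.decree("Dumbledore is gay", True, revealed=True)  # ...and announces
print(potterverse.known_truth("Dumbledore is gay"))  # True: now we know
```

Run the Han Solo case through the same model and you get exactly the vacation-robbery structure discussed below: truth() flips the moment Lucas changes his mind, while known_truth() lags behind until the revised film reaches us.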

This sort of view is supported, I think, by the Lockean account that Scott touched on: that which has made it into the Harry Potter series is only part of the story (only the relevant part for the events Rowling wanted to convey, perhaps). There are untold details (ranging from minutiae like the color of Harry's shirt on any given day to larger issues like Dumbledore's sexuality, or the date on which Harry was married). Now, I think it is fair to say that if these details haven't been specified at all (one way or another), they are indeterminate--that is, when I say 'Harry Potter got married on June 27th,' I'm not saying anything true or false, because it hasn't been determined what day Harry Potter got married. However, once Rowling has decided on a date (supposing that she hasn't already), then it becomes true that Harry got married on that date, simply because Rowling just is the ultimate authority on her universe.

However, this is a far cry from saying that we know the date Harry got married. Until Rowling decides to reveal that date, I still can't really say anything meaningful about it, though there is now a fact of the matter. In contrast to the above case, my statement 'Harry Potter got married on June 27th' is now devoid of meaning not because it isn't truth-functional (once Rowling decides on a date it becomes either true or false that Harry married on a certain day), but because my statement, one way or another, just isn't verifiable. If this sounds a little too much like logical positivism for your taste, you might be right. I don't claim to have thought this through extensively, but rather am thinking it out as I write this; I'd welcome comments saying why this account doesn't work. It seems that once Rowling has decided on a date (but before she's revealed it), any statement I make about Harry's marriage is essentially akin to saying 'There is one and only one sphere made entirely of gold with a diameter greater than one foot in the Universe': it's certainly truth-functional (that statement is either true or false), but there's no real way to tell, and thus the statement is rather empty.

This account also seems to let us address the "Han Shot First" debate. It was originally true that Han shot first, as that's what Lucas decided upon and that's what he told us (via the medium of film). However, at some point after the movie was made, Lucas changed his mind, and began to endorse the proposition 'Han shot second.' At this point, the ontology of the matter changed, and it became true that Han did shoot second. It wasn't until the revised version was released, though, that you and I were in an epistemic position to know of the switch--just as if my house is robbed while I'm on vacation, and I (incorrectly) believe 'My computer is on my desk' until I return home, we incorrectly believed 'Han shot first' until Lucas told us otherwise.

So, to answer the original question: yes, Dumbledore is gay. He's either been gay all along (if Rowling decided he was gay when she first began), or his sexuality was indeterminate for a time, then he "became" gay when Rowling endorsed the proposition 'Dumbledore is gay.' As I said, I'm far from confident that this account is perfect, and would love to hear some comments. What do you all think?

Friday, October 19, 2007

2012: Just Another Year, Or The Year Some As-Yet Undetermined Thing That Has To Do With Consciousness Happens?!

The first half of this post is a little bit rantish. Sorry about that--I try to keep this blog as academic as possible, but sometimes something bothers me to the point that I just have to vent about it on the Internet. Just skip to the [/rant] tag if you don't want to read the rant.

[rant]

One of the things that really irritates me to no end is the popular confusion of actual philosophy and New Age garbage. This confusion is fairly evident anywhere either of these two are mentioned--you'll hear New Age "teachers" with credentials like:

[name removed] has studied with Alberto Villoldo and The Four Winds Society and is a graduate of the Healing the Light Body School. [He] is a mesa carrier in the lineage of the Andean Shamans and practices Energy Medicine through Soulpaths, LC, in Kansas City, Missouri.


being billed as "philosophers." Go into virtually any bookstore and you'll likely find the philosophy section (complete with Plato, Descartes, Bertrand Russell, and John Searle) right next to the "metaphysical" or "new age" section (complete with crystal healing, transcendental meditation, and volumes upon volumes on "the power of pyramids"). Whom that description above refers to doesn't really matter (it's the author of an article I read on today's topic): it's supposed to stand for a general trend.

I'm not saying that things like "crystal healing" or "energy medicine" don't work. Wait, yes, I am saying that, but that's not my point. Whether or not getting in touch with your power animal and putting a chunk of silicon dioxide arranged in a tetrahedral lattice on your face will cure your brain tumor more reliably than, you know, surgery and science, people who push these kinds of ideas (and I use the term very loosely) are most emphatically not philosophers, and the fact that they bill themselves as such gives those of us actually doing serious philosophical work a bad name.

Philosophy, perhaps more than any other non-scientific discipline, values clarity of thought, precision of expression, and rationality of ideas--these are the cornerstones of philosophy. Philosophers strive to eliminate confusion and reduce mental clutter--we do with concepts and ideas what physicists do with experimental data. Practically, this means that a good work of philosophy is clear, easily understandable by those with the right background (the less background necessary to understand it the better), and follows a coherent and logical train of thought. Most New Agers, in my experience, have few of these qualities. Case in point: tonight's discussion.

[/rant]

BoingBoing ran a short piece today on Daniel Pinchbeck, the "psychedelic author" behind the recent book 2012: The Return of Quetzalcoatl. If you've missed this little bit of cultural wisdom, the world is apparently going to end in 2012--December 21st, 2012, to be precise. Why, you ask? Well, because that's the day that the Mayan Calendar ends its current cycle, of course, you silly rationalist! As insane as this sounds/is, this theory (a species of Terence McKenna's Novelty Theory) has gained quite a large following among New Agers, spawning a host of websites devoted to the event that's going to happen in 2012. What exactly is this event? I'm so glad you asked. Here's Mr. Pinchbeck's response, as related to a Rolling Stone correspondent, and published here:

Whether there will be a complete collapse of the world before 2012 is not for him to say, he says. All he knows is that the upsurge of militarism and terrorism -- as well as an increase in coincidences in his own life -- presage a time when spirit and matter will converge into one. We will then be released from the occult power of the Gregorian calendar, which is keeping us out of synchronicity with our psychic powers. We will receive the powers of telepathy and get to speak to our alien neighbors, not necessarily by mounting spaceships but through psychic evolution.

Is that clear enough for you? Spirit and matter will converge into one, and we'll be released from the occult power of the Gregorian calendar, of course! This is really helpful, because there's nothing more oppressive in the modern world than those damn Gregorians and their arbitrary way of marking time. Good thing the Mayans were on top of the whole thing.

The blurb on BoingBoing caught my eye because it included the word "consciousness." Come to find out, it links to a short video interview with Mr. Pinchbeck, in which he, er, explains (?) his beliefs, saying:

The modern way of thinking about indigenous and tribal cultures is that they were myth-based and superstitious. But it may be that indigenous cultures like the Mayans did have their own knowledge system that was as meaningful as ours, but they were interested in very different aspects of reality, being, and experience [...] Someone who just has a rational scientific mind is going to find all this very hard to accept, but there is a change happening in the Psyche. In my own life and in the lives of people I'm connected to, there seems to be an increasing level of synchronicity, so that if you have an intention about something, you get a quicker level of manifestation.

Excuse me, Mr. Pinchbeck, this might just be because I'm limited by my "rational and scientific mind," but just what the hell do you mean by that? For instance, just what is a "level of manifestation," and how can it be quicker? This writing, speech, and thought are so confused and murky that I can barely understand your point, let alone argue against it; I suppose I should have known what I was in for when the website I was directed to in order to watch the video was titled "Post Modern Times."

Allow me to make a prediction of my own: 12/21/2012 (oooo, numbers!) will come and go. The world will stay fucked up in places, and magnificent in places. People will keep scraping by to survive, and keep trying to make the world a better (or worse) place. Pinchbeck and those who follow him will come up with some lame excuse for why nothing of import happened, and will start looking for the next new, confusing, and vaguely mystical idea to latch onto. Life goes on.

The point I'd like to make with this post is that people like Daniel Pinchbeck are NOT philosophers, and it does a great disservice to those who have spent their lives in pursuit of clarity, wisdom, and truth to call people who trumpet this tripe by that title. Writing about ideas, using big words, and saying things like "increasing level of synchronicity" does not make you a philosopher. Well, maybe it does, but it makes you an extraordinarily bad one. Or a postmodernist...but I repeat myself.