Can you please clearly distinguish between Biological Naturalism and Functionalism? I don't get the difference. I thought a Functionalist basically said that the mind was what the brain did, like digestion is what a stomach does. So how are the schools of thought different?

I certainly can. In order to really get clear what we're talking about, though, I think I need to say a little bit about the history of philosophy of mind. In the early-mid 20th century, the "in vogue" idea about how the mind and the body related was logical behaviorism. The logical behaviorist thesis, briefly stated, argued that any talk about mental states (e.g. pains, tickles, beliefs, desires, etc.) is really reducible to talk about behaviors and dispositions to behave--if Jon believes that it is raining, that just means that Jon is disposed to behave in a certain way (to carry an umbrella if he wants to stay dry, to close the windows of his car, to turn off his sprinklers, etc.), and if Jon is in pain, that just means that Jon is disposed to behave in another certain way (to retreat from the stimulus causing the pain, to say 'ouch,' to writhe around on the floor, etc.).
There are obvious problems with this--Hilary Putnam, for one, raises the logical possibility of a race who had pain-experiences without any disposition to pain behavior as evidence that mental statements are not logically identical to behavioral statements--but the one I find most telling is that it seems impossible to reduce intentional (in the technical sense) language into behavioral disposition language without simultaneously introducing another intentional term. To carry on with the above example, while it might be right to say that "if Jon believes it is raining he will carry an umbrella," that statement only seems true if Jon also has a desire to stay dry; similarly, Jon's desire to stay dry can only be translated into a behavioral disposition to wear galoshes if he believes that it is wet outside. This problem doesn't arise for all mental terms, but the fact that it arises for even one is enough to destroy the behaviorist thesis--the notion that all mental states are logically reducible to statements about actual or potential behaviors is false.
With the death of logical behaviorism, a new doctrine--Functionalism--arose to captivate the philosophic profession, this one based on a simple idea: what if the brain just is a digital computer, and our minds just are certain software programs? Whereas behaviorism is concerned only with the system's (i.e. your) inputs and outputs, Functionalism is concerned with the functional states that combine with inputs to produce given outputs. On this view, mental states are really just certain functional states in the complex Turing Machine that is our brain, and those mental states (including consciousness as a whole) are defined strictly in terms of function--there's nothing special about my mind, and (given the right programming and sufficiently powerful hardware), there's nothing stopping me from creating a functional equivalent of it implemented in silicon rather than in "meat."
To put it more precisely, Functionalism defines every mental state by its place in the complex causal structure that is my brain. Rather than ignoring what's going on in the "black box" of the brain (as a behaviorist would want to), a functionalist will admit that internal processes are essential for the system to function as it does, but will deny that there is anything essentially "mental" about those processes. A computer program with the same set of causal states, inputs, and outputs as my brain would, on this view, have a mind by definition: all it means to have a mind is to have a system that functions in a certain way, and how that system is implemented doesn't matter.
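To make that idea concrete, here is a minimal sketch of my own (not something from the functionalist literature) of what it means to define a state purely by its causal role. The state names and stimuli are made up for illustration; the point is that nothing in the table says what the states are made of, only which inputs produce them and which outputs and successor states they produce in turn.

```python
# A toy "functional specification": each state is defined only by its
# causal role -- (current state, input) -> (behavioral output, next state).
FUNCTIONAL_SPEC = {
    ("calm", "tissue damage"): ("say 'ouch'", "pain"),
    ("pain", "tissue damage"): ("wince", "pain"),
    ("pain", "aspirin"):       ("sigh with relief", "calm"),
}

def step(state, stimulus):
    """Return (output, next state) for a state/input pair."""
    return FUNCTIONAL_SPEC.get((state, stimulus), ("do nothing", state))

# On the functionalist picture, anything that runs through these transitions --
# neurons, silicon, or beer cans and string -- counts as being in "pain"
# whenever it occupies the state that plays this causal role.
print(step("calm", "tissue damage"))  # ("say 'ouch'", "pain")
```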
This point is easier to see in simpler cases, so let's take the case of an adding machine. There are many different possible ways that we could "realize" (that is, implement) a machine to add numbers: my pocket calculator, MATLAB, this awesome device, and an abacus will all get the job done. Functionally, all these devices are equivalent--though they're instantiated in different forms, they have internal states that are directly analogous, and they produce the same outputs for the same inputs. The brain, on this view, is just one implementation of "having a mind," and anything (say, a digital computer running a very complex program) could, given the right functional states, also be said to have a mind.
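Here is a hedged sketch of the same multiple-realizability point in code (again my own illustration, not from the original post): two "adding machines" with completely different internals that are nonetheless functionally equivalent, in that the same inputs always yield the same outputs.

```python
def add_calculator(a, b):
    """Realization 1: native machine arithmetic, like a pocket calculator."""
    return a + b

def add_abacus(a, b):
    """Realization 2: an abacus-style realization that can only slide one
    bead at a time (repeated increment), with no built-in addition."""
    total = a
    for _ in range(b):
        total += 1  # slide one bead
    return total

# Different internal states and mechanisms, identical input-output profile:
assert all(add_calculator(a, b) == add_abacus(a, b)
           for a in range(20) for b in range(20))
```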
Biological Naturalism (BN) rejects this last point. Those of us who endorse BN (or something like it) point out that defining the mind purely in terms of functional states seems to leave something vital out--the qualitative character of consciousness, as well as its capacity to represent (that is, to have intentionality). Searle's Chinese Room argument is supposed to show exactly where Functionalism goes wrong: though the behavior of the Chinese Room system as a whole is functionally identical to that of a human who speaks Chinese, there seems to be something important missing from the Room--understanding, or semantics. Our minds, then, have to be defined by something other than their functional roles, as a system with functionally identical states seems to be missing both intentionality and the subjective character of experience, both of which are defining characteristics of minds like ours.
BN proposes a simple, intuitive, and (it seems to me) correct answer to the mind/body problem: consciousness exists as an ontologically irreducible feature of the world--it can't be explained away as illusory in the way that rainbows can--yet it is entirely caused by and realized in neuronal processes. Statements about mental events--beliefs, desires, pains, tickles--say something true and irreducible about the organism and can't be reduced to talk of brain states without the loss of something essential: the qualitative character of consciousness. The analogy with digestion--while not exact, as there's no essentially subjective character to digestion--is instructive here: consciousness is just something the brain does, in much the same way that digestion is just something the stomach and intestines do.
That's a rather brief characterization, and if you want a more detailed account, I urge you to read Searle's latest formulation here. It's not without problems, and I'm working on a modified account that I think is better able to deal with certain objections, but it's a great place to start. I hope that answers your question, Derek!
4 comments:
Thanks for the response. I took Philosophy of Mind last year, but we didn't talk specifically about Biological Naturalism.
So if I'm understanding you correctly, is it fair to say that the primary distinction is this notion of multiple realizability? They both think that the mind is what the brain does, but the BN thinks that the only thing that can give rise to a mind is a biological system made of neurons, while a functionalist believes a mind can be instantiated on a number of substrates (like a digital computer)?
Since we don't understand how biological brains give rise to consciousness, isn't it a bit premature to hold that they are the only substrate that can produce consciousness? And if you agree that consciousness is purely the result of a physical phenomenon, then what would necessarily restrict it from being realized on a non-biological substrate?
Not precisely. The biggest problem that I see with Functionalism is that it seems to totally ignore one of the essential features of consciousness--the fact that it has a subjective quality that seems independent of its causal powers. We can certainly create a functional model of consciousness: I'm totally willing to admit that, given a sophisticated enough program and good enough hardware, we could create a computer that would behave in every way as if it were conscious. It even seems plausible that, were we able to simulate exactly the workings of my brain on a neuron-for-neuron basis, we could get something that would be functionally identical to me. Still, it seems that something important would be left out--the computer program qua computer program would be incapable of intentionality, would have no semantic understanding, and would lack any subjective experience. That's what the Chinese Room is supposed to show, and I think it does its job.
Notice that in saying that brains are causally sufficient for minds I'm not saying that they're causally necessary. It certainly seems plausible that, at some point in the future, we might construct an artificial creature that has a mind in just the sense that you and I do, but that creature will necessarily have something with at least the same causal powers that our brains have; digital computers, while they might be functionally identical to our brains, seem to lack this causal power.
If consciousness is just the mental "instant playback" of whatever process occasions it and its associated action (which seems most apparent in quick, reflexive actions), would it not lose its "causal power"?