Monday, January 12, 2009

Musings on Embedded Epistemology

I took a course in epistemology last semester, and (surprise) it made me think about epistemology.  What follows is an attempt to gather the random musings and conversations I've had over the last few weeks into something that begins to approach a coherent theory.  It is, as I cannot emphasize enough, very preliminary so far, and very much a work in progress.  Still, I find these considerations very interesting, and I hope you do as well.

Belief justification is like a progress bar on a download--it can be filled or emptied to various degrees by things that we encounter out in the world. For instance, if I trust some individual a great deal, his words will tend to fill my "truth bar" a great deal; this weighting is based (among other things) on my past interactions with him, my knowledge of his epistemic state, &c.--certain contextual variables about our relationship lead me to weigh his words heavily when making (or contemplating making) epistemic actions like belief revision. The degree to which my truth bar is filled is also going to depend on the nature of the proposition this hypothetical interlocutor is informing me about: even from a trusted friend, I'm going to assent more readily to the proposition 'there is a brown dog around the corner' than to the proposition 'there is a child-eating clown around the corner.' Again, this reflects the contextually-influenced nature of epistemic action: based on other beliefs I have about how the world works, I'm going to be more or less likely to assent to a new belief (or to change an old one).
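To make that weighing a little more concrete, here's a toy sketch of testimony as Bayesian evidence. Everything in it--the reliability figure, the assumption that a speaker's errors are symmetric--is an invented simplification of mine, not part of the theory proper:

# A minimal sketch of trust-weighted testimony, treating the "truth bar"
# as a probability and a speaker's reliability as the weight his words carry.

def update_on_testimony(prior, reliability):
    """Update the truth bar for a claim after a speaker asserts it.

    prior:       how full the bar is before the testimony (0..1)
    reliability: P(speaker asserts the claim | claim is true); for
                 simplicity we assume P(asserts | false) = 1 - reliability
    """
    p_true = reliability * prior
    p_false = (1 - reliability) * (1 - prior)
    return p_true / (p_true + p_false)

# A trusted friend (reliability 0.9) reporting something mundane:
print(update_on_testimony(prior=0.5, reliability=0.9))    # ~0.90

# The same friend reporting a child-eating clown (tiny prior):
print(update_on_testimony(prior=0.001, reliability=0.9))  # ~0.009 -- barely moves

Notice that the very same testimony from the very same friend fills the bar a lot in one case and almost not at all in the other--the difference is carried entirely by my other beliefs about how the world works.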

It's important to emphasize that the truth-bar is almost never entirely full, except in some very special cases (e.g. conscious states to which you have immediate, incorrigible access). Take the case of a proposition based on basic sensory information--e.g. 'there is an apple on my desk.' In normal circumstances--good lighting, I can feel and see the apple, other people see the apple too, &c.--I have very good reason to suspect that there really is an apple on my desk; the truth-bar for that proposition is (say) 99% full. Still, there are potential defeaters here: it might be the case that I am actually in some kind of Matrix scenario, and therefore it might be the case that there is no desk or apple at all. Still, based on other (fairly strongly justified) beliefs I have about the world, this Matrix scenario seems rather unlikely--that is, the truth-bar for 'I am in the Matrix' is very, very close to empty (though not entirely empty, as the proposition is still a logical possibility). Because this defeating proposition ('I am in the Matrix') has a very weak truth-bar, it doesn't weigh very heavily in my epistemic considerations--it's enough to keep the bar for 'there is an apple on my desk' from being 100% full, but that's about it.
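Here's the same point as back-of-the-envelope arithmetic; the numbers are invented purely for illustration:

# How a nearly-empty defeater keeps the apple-bar from reaching 100%.

p_matrix = 0.001               # bar for 'I am in the Matrix': nearly, but not entirely, empty
p_apple_if_not_matrix = 0.999  # sensory evidence is excellent in normal conditions
p_apple_if_matrix = 0.0        # if I'm in the Matrix, there is no real apple

p_apple = (p_apple_if_not_matrix * (1 - p_matrix)
           + p_apple_if_matrix * p_matrix)
print(p_apple)  # ~0.998 -- very full, but the defeater keeps it short of 1.0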

This goes sharply against established epistemic tradition, according to which the primary goal of epistemology is truth. If we define truth as a 100% full bar, there are going to be very few propositions (aside from tautologies like 'all black things are black') that will enjoy an entirely full bar. Instead, the right way to think about epistemology--and about our epistemic responsibilities--is as a quest for justified belief, a quest for a reasonably full bar. What counts as 'reasonably full' is, again, going to vary based on contextual variables: when the stakes are rather low, I might assent to a proposition when (say) the truth bar is over 50% full. This might be the case when, for example, a friend tells me that there is a brown dog outside my house; I believe him, and if someone asks me 'is there a brown dog outside your house?,' I will be inclined to answer in the affirmative. My friend might be wrong or lying, but the stakes are low and I have very few strong defeater propositions in play--few good reasons to suppose that my friend speaks falsely, in other words. In more important cases (such as when engaged in technical philosophical deliberation, or when designing a passenger jet), I'm going to be inclined to withhold assent from propositions until the bar is almost entirely full: the consequences of assenting to the wrong belief are potentially so dire that I will demand a higher standard of justification, investigate possible defeaters more thoroughly, &c.
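If you like, you can picture the contextual threshold as a little function; the cutoff values below are arbitrary stand-ins, of course, not claims about where the lines actually fall:

# A sketch of contextually varying assent: the bar must be fuller
# before I will act on a belief when the stakes are higher.

def assent(truth_bar, stakes):
    """Return True if the bar is full enough for this context."""
    thresholds = {"low": 0.5, "medium": 0.9, "high": 0.999}
    return truth_bar >= thresholds[stakes]

print(assent(0.8, "low"))   # True  -- brown dog outside: a friend's word suffices
print(assent(0.8, "high"))  # False -- jet design: demand near-certainty first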

The emphasis here is on the contextually-dependent nature of epistemic action; rather than doing a lot of complex deliberating for every possible belief change entirely in our heads, we "offload" a certain amount of the work onto the existing epistemic environment; that is, we use the existing epistemic landscape to simplify our decision-making by heuristically assigning various "values" to propositions related to the one under consideration, and performing a kind of rough Bayesian calculation to approximate truth or falsity. We can draw a direct parallel here with other work being done on extended/embedded cognition and the extended mind thesis--in just the same way that we use external props (e.g. written notes) to support certain cognitive processes (e.g. memory), we use our intuitive grasp of the existing epistemic landscape as a prop to support our own decision making. I call this approach "contextually embedded epistemology."
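To give a feel for what this heuristic value-assignment might look like, here's a toy sketch; the nodes, weights, and update rule are all invented assumptions of mine, not a serious model of anyone's beliefs:

# A miniature belief web: each node's truth bar is recomputed from the
# nodes it depends on, so a shift in one belief ripples outward.

beliefs = {"friend_is_reliable": 0.9}

def propagate(beliefs):
    # 'brown dog outside' mixes a base rate with a testimony-weighted boost...
    beliefs["brown_dog_outside"] = 0.55 + 0.4 * beliefs["friend_is_reliable"]
    # ...and 'I will hear barking soon' leans, in turn, on the dog belief.
    beliefs["will_hear_barking"] = 0.9 * beliefs["brown_dog_outside"]
    return beliefs

print(propagate(dict(beliefs)))
# Suppose I catch my friend in a lie -- one node shifts, and the
# certainty of everything downstream shifts with it:
beliefs["friend_is_reliable"] = 0.3
print(propagate(dict(beliefs)))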

Statisticians and those with a background in math will recognize that I'm describing something very much like a Bayesian network here--I suspect that our beliefs, were they to be mapped, would look much like one. There are multiple links between many different beliefs, and one belief might depend on many others for support (or might be partially defeated by many others). The picture is constantly in a state of flux, as shifts in one node (i.e. a single belief) influence the certainty (i.e. the fullness of the truth bar) of many other nodes.  The Bayesian way of looking at things is far from new, but the emphasis on partial completeness and environmental support, as far as I know, is.  These are just some random thoughts I've had over the last few days, so comments and criticisms are encouraged.  This needs a lot of tightening up.

7 comments:

That 0ne Guy said...

Maybe I'm a pessimist, but I can't help but think of ignorance.

What kind of epistemic state is that? Furthermore, how does it relate to your "download progress bar?"

I could be going in the wrong direction.

Oh, and the term "epistemic landscape" is kick ass.

Jon said...

That kind of reasoning error would be explained in these terms as an error in weighting the Bayesian network, or an error in calculating how full certain "truth bars" are. Religiously motivated intelligent design theory is a good case-study here: if you begin a scientific investigation with a fundamental conceptual error vis-a-vis your basic beliefs (e.g. a belief that the Earth is only 6,000 years old), the fact that you're looking at evidence through that lens can distort your scholarship. This is easily modeled in this "truth bar" terminology: ID theorists assign 'the Earth is 6,000 years old' a value at (or near) 100%, whereas most people assign it a very low (near Matrix-level) value. This single error is magnified to a great extent, though, as many subsequent epistemic calculations partially depend on it--that is, 'the Earth is 6,000 years old' is a node that connects to many others in the "epistemic web," so altering its value will alter the values of a large number of other propositions.
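A toy illustration of the fan-out, with a network and numbers invented out of whole cloth:

# One mis-set node touches every belief wired to it. This is a made-up
# miniature, not a serious model of the ID debate.

def evaluate(young_earth):
    """Recompute a few dependent truth bars from the 'Earth is 6,000
    years old' node; each mixes that node with independent evidence."""
    return {
        "fossils_are_recent":     0.05 + 0.90 * young_earth,
        "radiometric_dating_ok":  0.95 - 0.90 * young_earth,
        "geology_textbook_sound": 0.90 - 0.85 * young_earth,
    }

print(evaluate(young_earth=0.01))  # near-empty bar: the mainstream picture
print(evaluate(young_earth=0.99))  # near-full bar: the whole web flips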

This is a big reason why philosophy is important: part of its job is to articulate an approximate value for the "truth bar" of many low-level conceptual beliefs so as to increase the likelihood of more accurate higher level beliefs.

That 0ne Guy said...

So essentially, ignorance is a "false-filler" (if you will) on a progress bar.

It registers more progress on the truth bar than the proposition actually warrants.

RaplhCramden said...

http://www.newscientist.com/blogs/shortsharpscience/2009/03/human-fails-turing-test.html?DCMP=NLC-nletter&nsref=blog2

Human fails Turing test! I have often thought that alien intelligences would fail the Turing test, which sort of made it a rather high bar for Machine Intelligence. I.e., long before M.I. passed the Turing test it would already be intelligent in a non-human way. This points out the limits of this metric.

Anyway, I comment this here as a way of communicating with you. I hope school is fascinating. I miss your blog, looking forward to hearing some good stuff you thought of.

Cheers,
Mike

Biagio said...

I keep wanting to ask scholars this... so here goes... is knowledge cumulative?

Mark said...

Okay, so I'm drunk and I was waiting for a reply to one of my facebook comments, and I thought I'd offer something to think about on this subject.

I'm generally inclined towards the sort of epistemic landscape you propose here, but (as far as I can tell) there's a deep problem facing any such calculus of belief. Specifically, we have a really hard time fitting this system into any deontic system of epistemic norms, and (again, I think) we have good reason to think that our epistemic norms are deontic (e.g., never believe a contradiction).

I'm not going to go into this too much now, but we should meet up for a drink sometime and discuss it. It's something I've been struggling with ever since my last epistemology seminar.

J.Vlasits said...

I'm also tempted to disagree with the whole progress bar metaphor from the perspective of Pyrrhonian skepticism and belief justification. However, I do like your Bayesian network idea, and I think it even lends credence to the kind of skepticism that discredits your progress bar heuristic. If your beliefs are organized in a network where they are connected in a sufficiently complex way, it seems to me that there could be situations where you would have indeterminate certainty. The image that I have right now is that of a protein in your body that has several stable conformations which it alternates between. This seems especially likely if your beliefs aren't in some sort of hierarchy where some beliefs are simply taken as "given" (even at a certain probability, like your ID example). It seems like your way of looking at things sheds a lot of light on the problems of a coherentist approach to epistemology.

This all seems a little sketchy to me, but if we had any situation where A->B and B->A (where -> is logical entailment and A and B are propositions or sets of propositions with a given truth probability) anywhere in the network, would you be able to determine truth probabilities for the whole network? I'm too weak on the math myself to answer that question, but it seems to me to be one worth considering. It might also be argued that we don't (or shouldn't) engage in such circular reasoning, but I'm not confident that people don't just naturally have those kinds of beliefs, and one might just have to deal with them in the model. Anyway, you have a really interesting idea and I can't wait to see some further thought on it.
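I can't do the math analytically, but even a crude numerical toy (with an update rule I just made up) suggests the worry is real:

# Two beliefs that support each other (A -> B and B -> A) can settle
# into more than one stable assignment, depending on where you start.

import math

def support(x, strength=8.0):
    """Map a supporting belief's truth bar to the supported belief's bar."""
    return 1 / (1 + math.exp(-strength * (x - 0.5)))

def settle(a, b, iterations=100):
    """Iterate the mutual-support updates until they stabilize."""
    for _ in range(iterations):
        a, b = support(b), support(a)
    return round(a, 3), round(b, 3)

print(settle(0.6, 0.6))  # climbs to a 'both nearly certain' fixed point
print(settle(0.4, 0.4))  # sinks to a 'both nearly empty' fixed point

Different starting points settle into different stable assignments, which looks a lot like the several-conformations protein picture.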
