Thursday, August 30, 2007

Content Awareness (Or Not...)

My friend, colleague, and bickering partner over at http://www.eripsa.org is all atwitter about this new image resizing software. While I'll admit that it's pretty fuckin' awesome, I don't think it has the far-reaching implications for artificial intelligence that he seems to think it does. We had a brief discussion about it the other night. Watch the video (it should be shown here), and then read the conversation. Let me know what you think.


[22:16] Jon: holy shit, that's awesome

[22:16] Jon: wow....really clever too

[22:17] Eripsa: it also shows that you can be aware of image content without doing any kind of sophisticated object recognition

[22:18] Jon: I knew you were going to say that

[22:18] Jon: no, it really doesn't....it shows you can prioritize certain things mathematically for certain operations

[22:18] Eripsa: object recognition would probably help it to draw more efficient lines, and it would eliminate the need to do the faces hack thingy, but it shows that meaningful content can be evaluated even prior to that level of cognition

[22:19] Eripsa: sorry, I should have put scare quotes around 'content' and 'awareness'

[22:20] Jon: there's no meaning there, though, it's just exploiting flaws in our perceptual system to rework pictures without any obvious change

[22:20] Jon: what do you mean by 'awareness'?

[22:20] Eripsa: well I dont want to make a metaphysical point

[22:20] Eripsa: I want to make a cognitive point

[22:21] Jon: but there isn't any cognition going on here!

[22:21] Jon: it's just math

[22:21] Eripsa: yeah, but the math suggests something about the so-called 'flaws' in our own perceptual system

[22:22] Eripsa: namely, that content recognition takes place prior to object recognition

[22:22] Jon: sure it does--that our discrimination system isn't perfect

[22:22] Jon: hmmm

[22:22] Eripsa: and therefore you can functionally isolate the former without the need to waste cognitive (or computational) resources on the latter

[22:23] Eripsa: 'perfect' relative to what?

[22:23] Jon: I don't see how this shows anything about content recognition at all--we recognize the two photos as having the same content only because we recognize them as having the same object

[22:23] Jon: not perfect meaning we can't always see when changes have been made

[22:23] Eripsa: when the system locates so-called "meaningful content" in the image, it is evaluating that content based on what our perceptual system finds meaningful

[22:23] Eripsa: as you just said

[22:24] Eripsa: so it might not be cognition in the sense of "that's not what our brain does", but it is certainly cognitive in the sense of "it is tracking meaning-relations that are meaningful to us"

[22:25] Jon: by 'it' you mean the program?

[22:25] Eripsa: and it can maintain those meaning relations without any need for recognizing objects

[22:25] Eripsa: yeah

[22:27] Jon: it isn't recognizing meaning at all though--it's just the smoke detector phenomenon gone a bit more sophisticated. We designed a system that exploits an idiosyncrasy of our cognitive system. That doesn't mean that the system we've designed has any meaning-understanding abilities at all. And it isn't tracking meaning relations--it's tracking relationships between numbers that, by our design, will tend to correspond to meaning relations.

[22:28] Eripsa: see this is why I dont like your position.

[22:28] Jon: heh

[22:28] Eripsa: my first reaction to this video is as follows:

[22:28] Eripsa: It adds a whole new dimension to data corruption: it demonstrates quite clearly that our machines are aware of the meaningful content of information and are in a position to change it as they see fit.

[22:28] Jon: what?! how the hell does it demonstrate that at all?

[22:28] Eripsa: and my reaction doesn't even get off the ground on your view

[22:28] Eripsa: exactly

[22:29] Jon: sounds like a personal problem to me :P

[22:29] Eripsa: for instance, it shows them trying to compensate for our special wetware for identifying faces and people, which is especially sensitive to distortion

[22:29] Eripsa: and when you see that in action, you realize that this compensation is a hack, pure and simple

[22:30] Eripsa: ideally the computer ought to be able to do all of that work automatically

[22:30] Jon: right--tracking the relationship between the numbers and our meaning-identifying systems is harder because our system is more refined when it comes to faces

[22:30] Jon: sure it's a hack

[22:30] Eripsa: but if it's a hack, that means the machines are doing things that we don't really want them to do, and they are doing it automatically

[22:30] Eripsa: and so we have to go out of our way to correct them

[22:30] Eripsa: (speaking loosely, of course)

[22:31] Jon: no, it means that our original program just wasn't written well enough and it isn't doing what we want it to do

[22:31] Eripsa: yeah

[22:31] Eripsa: which means it is doing something other than what we want it to do

[22:32] Jon: which is because of our error

[22:32] Eripsa: the changes they make (unconsciously) still have meaningful implications for the products of their processing

[22:32] Jon: if I throw a ball and I accidentally hit you in the head, we don't attribute that to something the ball is doing that was outside my control, we attribute it to the fact that I can't throw for shit

[22:33] Eripsa: well sure. That's because we have a naive distinction between animate and inanimate objects.

[22:33] Eripsa: that Aristotelian distinction holds no scientific water, of course.

[22:34] Jon: ...

[22:34] Eripsa: Look, if your point is that those meaningful implications are 'merely' meaningful to us, then sure

[22:34] Jon: so would you attribute the ball hitting you to something the ball did and not something I did?

[22:35] Eripsa: if the ball were lopsided, then maybe I would

[22:35] Eripsa: because it did something you didn't intend it to do, because of a quirk of its own physical makeup

[22:35] Jon: it seems more plausible to say that I threw badly by not compensating for the lopsidedness

[22:35] Jon: the ball didn't do anything--I did something with the ball

[22:35] Eripsa: or at least, I wouldn't hold you responsible, and I would 'blame' the ball. It seems perfectly natural to say "I didn't mean to do that, it was the ball's fault. It's lopsided."

[22:36] Eripsa: the content awareness system is changing meaningful information automatically.

[22:36] Jon: well I wouldn't hold be responsible either, but that's just because I didn't intend the outcome. What happened was still ultimately caused by something I did and not something the ball did (or didn't do)

[22:36] Eripsa: without our direct control, assent, or even oversight

[22:36] Jon: *hold me responsible

[22:37] Eripsa: I mean surely you agree that the possibility for intentional data corruption is magnified with these sorts of tools

[22:37] Jon: definitely

[22:37] Jon: but I don't think you can derive from that the fact that the program is aware of anything at all, or is doing anything cognitive even

[22:38] Eripsa: but the magnification of the problem doesn't come from the possibility of manipulating images; that's been around for a long time

[22:38] Eripsa: it comes from the fact that the machines are doing it automatically

[22:39] Eripsa: autonomously, in a metaphysically neutral sense

[22:39] Eripsa: in just the sense that they are algorithmically selecting which lines to eliminate

[22:39] Jon: no, it comes from the fact that such manipulation is now much easier for people to accomplish, as the tools have become more sophisticated

[22:39] Eripsa: what content to keep, and what content to ignore.

[22:39] Jon: that's your central mistake, I think--you don't want to admit that computers are fudamentally just tools

[22:39] Jon: *fundamentally

[22:39] Eripsa: yeah

[22:40] Eripsa: ok, I'll take a running start from there next time

[22:40] Eripsa: :)

[22:40] Jon: hahaha

[22:40] Eripsa: you'd absolutely hate my technology class, heh

[22:41] Jon: you need to start publishing....so I can start counter-publishing

[22:41] Jon: excellent

[22:41] Eripsa: I'm gonna be at a conference in Virginia making almost exactly this argument

[22:41] Eripsa: I'll let you know how it goes
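
Postscript, for anyone curious what the program is actually "deciding" when it picks out meaningful content: below is a rough sketch, in Python, of the seam-carving idea as I understand it from the video. This is my own reconstruction, not the authors' code, and the energy function (plain gradient magnitude) is an assumption on my part. The point is just that "content awareness" here cashes out as finding and deleting the cheapest connected path of pixels. Just math, as I said.

import numpy as np

def energy(gray):
    # Per-pixel "importance": how sharply the image changes around each pixel.
    dy = np.abs(np.diff(gray, axis=0, append=gray[-1:, :]))
    dx = np.abs(np.diff(gray, axis=1, append=gray[:, -1:]))
    return dx + dy

def find_vertical_seam(e):
    # Dynamic programming: find the cheapest connected top-to-bottom path.
    h, w = e.shape
    cost = e.copy()
    for row in range(1, h):
        for col in range(w):
            lo, hi = max(col - 1, 0), min(col + 2, w)
            cost[row, col] += cost[row - 1, lo:hi].min()
    seam = [int(np.argmin(cost[-1]))]  # cheapest pixel in the bottom row
    for row in range(h - 2, -1, -1):   # walk back up, one row at a time
        col = seam[-1]
        lo, hi = max(col - 1, 0), min(col + 2, w)
        seam.append(lo + int(np.argmin(cost[row, lo:hi])))
    return seam[::-1]                  # top-to-bottom list of column indices

def remove_seam(gray, seam):
    # Delete one pixel per row; the image comes out one column narrower.
    h, w = gray.shape
    keep = np.ones((h, w), dtype=bool)
    for row, col in enumerate(seam):
        keep[row, col] = False
    return gray[keep].reshape(h, w - 1)

# Shrink a toy grayscale "image" by ten columns; the low-energy (boring)
# regions are what disappear first.
img = np.random.rand(40, 60)
for _ in range(10):
    img = remove_seam(img, find_vertical_seam(energy(img)))
print(img.shape)  # (40, 50)

As far as I can tell, the "faces hack" Eripsa mentions amounts to artificially inflating the energy of regions we care about (faces, people) so the algorithm leaves them alone--which is exactly why I say the program is tracking numbers that we have arranged to correspond to meaning, rather than tracking meaning itself.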

Wednesday, August 29, 2007

Start 'em Young

This story, published on the BPS Research Digest blog today, talks about a recent longitudinal study of young children. It seems that the more often parents use mental-content words like "thinking" or "believing" while reading to their young kids, the better developed the child's theory of mind ends up being. In other words, it seems that parents "prime" a theory of mind in kids by getting them used to considering the mental states of others. This is interesting on a number of levels. Perhaps most importantly, it suggests that genetics might not play as exclusive a role in theory of mind development (or cognitive development in general) as many people think. It also underscores the vital importance of beginning education at a very young age. A young child's brain is incredibly plastic, and it is well known that a child who is used to exercising that brain is more likely to keep exercising it throughout life. If even something as basic as a theory of mind is influenced by early childhood interactions, imagine what else rests on the strength (or weakness) of those same interactions.

A Correlation

While browsing through Digg, I came across the following two articles posted back-to-back. Does anyone else sense a correlation here?

SAT Scores lowest in years

and

How Creationists Are Brainwashing Kids Against Science

It made me chuckle, though in a heart-wrenching kind of way.

Tuesday, August 28, 2007

Player Hatin'

There has been a lot of buzz going around the blogs lately about the question of whether Deep Blue's now ten-year-old victory over Garry Kasparov is significant in any way. This discussion was kicked off by an article by Tufts philosopher (and Santa Claus look-alike) Dan Dennett in this month's Technology Review. Dennett, predictably enough, takes a vaguely Quinean approach to this question--he wants to deny that it is even an interesting question to begin with. To Dennett, the question of whether computers are better than humans at chess is only a question because of some wrongly held (and, Dennett implies, rather naive) beliefs about the uniqueness of our own brains. The intuition driving the idea that Deep Blue didn't really beat Kasparov, Dennett contends, is based on the notion that Deep Blue's processes while playing chess were so radically different from Kasparov's that the machine's victory didn't rightly deserve the title. Dennett argues that this intuition is wrong, and attempts to show that the functional processes that Kasparov and Deep Blue went through in preparation for each move were (functionally) equivalent; Deep Blue no more engaged in pure "brute force" computation than Kasparov did.

While I'm not convinced that Dennett is right, I'm willing to grant him this point, as I don't think it is the crucial one. Dennett, like many philosophers working on this problem, overlooks an even more basic question that must be answered before we can ask whether Deep Blue's victory is significant: was Deep Blue playing chess in the first place? Note the difference between this question and the one that Dennett asks: I'm not asking about how Deep Blue plays chess, or arguing that its approach to the game differs from Kasparov's to a significant degree--I'm making a much stronger claim. This claim, put simply, is the following: machines don't play games in the first place, so Deep Blue (while certainly an impressive achievement) didn't prove anything by "winning" the match with Kasparov; Deep Blue was never in the match to begin with.

I know that I promised reasonably short posts on this blog, but I think this claim deserves at least some explanation, considering the fact that virtually everyone disagrees with it. Why wasn't Deep Blue playing chess? After all, it was making moves that were constrained by the rules of chess, and it was playing the same functional role in the match that a human grandmaster would have played in a normal chess match--surely this is good enough? Even asking this question flies in the face of the conventional wisdom; virtually every blogger covering this story has sought to address whether or not Deep Blue's victory over Kasparov says something significant about Deep Blue's intelligence--my position is that Deep Blue didn't really win anything, because it wasn't playing in the first place.

Playing a game like chess is an inherently social activity--it requires the participation of two (or more) entities that are capable of playing the game intentionally. Ignore, for the moment, games like solitaire. We'll get to that later. Suppose you and I sit down at a chess board, but without knowing what it is (or how to use it). We decide to pass the time by taking turns moving the pieces around the board randomly. It just so happens that each of the moves we make corresponds to the actual rules of chess, but this is totally accidental--we had no intention to move them in this way. It seems to me that, though we're certainly doing something, we're not playing chess; there must be more to the game than acting in certain ways or fulfilling certain functional roles.

The case gets stronger, though. Playing a game like chess requires the capability to create meaning, and that in turn requires the capacity to represent. Without getting too much into the technical details, there are some pretty compelling reasons to believe that this is something machines just cannot do. When you and I play chess, we're doing something more than just shuffling pieces around a board. We're even doing something more than shuffling pieces around a board according to prescribed rules. The difference, at least as I see it, between Kasparov and (say) Bobby Fischer playing a game of chess and Kasparov and Deep Blue "playing" a game of chess is that in the former scenario both participants are conscious entities and intentional participants in the game. In the latter case, one is conscious and the other is (by definition) not--try and guess which is which!

Once again, this is not a point about how Kasparov and Deep Blue play chess; whether or not someone is a participant in a game has virtually nothing to do with their strategy for that game. It is entirely possible that a chess savant might one day be able to "brute force" calculate moves in the same way that Deep Blue does--would that mean that the chess savant was not playing chess? Certainly not. Once again, the defining attribute of a game is the presence of intentional participants.

This is even more clear if you think about two computers "playing chess." Suppose we network Deep Blue and Deeper Blue and let them have at each other. Electrical signals fly back and forth across the connection medium, and both machines are processing at full power. Are they playing chess? From the perspective of the computers (such as it is), there's no difference between playing chess, playing tic-tac-toe, holding a conversation about the weather, or working together to solve an equation. In all cases, the machines' actions just boil down to mathematical computation or (at an even lower level) electricity flowing over wires.

Summing up: the salient question here is not whether or not Deep Blue's victory is significant for artificial intelligence, and it is not whether or not the fact that Deep Blue's approach to a chess problem is different from Kasparov's is significant. The question we should be asking is whether or not machines can be participants in games at all, and the answer to that question seems to me to be a resounding "no." More on this later, probably.

Monday, August 27, 2007

Post One: A Definition

I suppose I should begin this blog with a definition--I find that people aren't always familiar with the definition of "apologetics," and I don't want to misrepresent myself.

An apologist, according to the dictionary, is "one who speaks or writes in defense of something." It has nothing to do with apologizing in the colloquial sense: think of Plato's Apology. Apologetics is usually associated with (unsuccessful) attempts to defend religion (generally Christianity) against objections; an essay explaining why the problem of evil isn't really a problem would, for instance, be a work of apologetics.

So why is this blog called "reality apologetics"? Put simply, I think reality is placed under attack rather consistently in everyday life--I'd like to write in reality's defense. Let the games begin.

Once again into the breach

I've tried keeping blogs before. Some of you might remember The Left Coast--which has since been snapped up by one of those despicable companies that buy domains and turn them into advertising sites--and its several incarnations before it was finally shut down. Here are a few reasons why I think this blog will be different.

1. I have more time now. I'm not a student and, you know, what the hell else am I going to do at work all day? My job? Fuck that shit.

2. This one will be more about philosophy and neuroscience, and less about politics. While I'm not promising that politics won't be mentioned occasionally (Alberto Gonzales resigned today!), I'll be writing about my actual passions. That should help.

3. Shorter posts. When I was working on The Left Coast, I (for some reason) felt obligated to make every post an entire fucking essay. I'd start writing and, somewhere around the sixth page, get bogged down. That's not really a blog and, having spent more time in the blogosphere, I've realized that I don't even like to read blogs that do that. Short and pithy posts are the way to go.

There you have it; I hope this actually works out. Look for the first "real" post soon...