[22:16] Jon: holy shit, that's awesome
[22:16] Jon: wow....really clever too
[22:17] Eripsa: it also shows that you can be aware of image content without doing any kind of sophisticated object recognition
[22:18] Jon: I knew you were going to say that
[22:18] Jon: no, it really doesn't....it shows you can prioritize certain things mathematically for certain operations
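The "mathematical prioritization" they appear to be discussing is content-aware resizing (seam carving), where every pixel gets an importance score before any object recognition happens. A minimal sketch, assuming a grayscale image and a simple gradient-magnitude energy (the function name and the choice of energy are illustrative, not taken from the conversation):

```python
import numpy as np

def energy_map(img):
    """Per-pixel 'importance' as the sum of absolute horizontal and
    vertical intensity differences (a simple gradient-magnitude energy)."""
    img = img.astype(float)
    # Append the last row/column so the output keeps the input's shape.
    dx = np.abs(np.diff(img, axis=1, append=img[:, -1:]))
    dy = np.abs(np.diff(img, axis=0, append=img[-1:, :]))
    return dx + dy

# Flat regions score low; edges score high, so the resizer removes
# flat regions first. Nothing here knows what the pixels depict.
flat = np.zeros((4, 4))
edge = np.zeros((4, 4))
edge[:, 2] = 255.0
```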
[22:18] Eripsa: object recognition would probably help it to draw more efficient lines, and it would eliminate the need to do the faces hack thingy, but it shows that meaningful content can be evaluated even prior to that level of cognition
[22:19] Eripsa: sorry, I should have put scare quotes around 'content' and 'awareness'
[22:20] Jon: there's no meaning there, though, it's just exploiting flaws in our perceptual system to rework pictures without any obvious change
[22:20] Jon: what do you mean by 'awareness'?
[22:20] Eripsa: well I don't want to make a metaphysical point
[22:20] Eripsa: I want to make a cognitive point
[22:21] Jon: but there isn't any cognition going on here!
[22:21] Jon: it's just math
[22:21] Eripsa: yeah, but the math suggests something about the so-called 'flaws' in our own perceptual system
[22:22] Eripsa: namely, that content recognition takes place prior to object recognition
[22:22] Jon: sure it does--that our discrimination system isn't perfect
[22:22] Jon: hmmm
[22:22] Eripsa: and therefore you can functionally isolate the former without the need to waste cognitive (or computational) resources on the latter
[22:23] Eripsa: 'perfect' relative to what?
[22:23] Jon: I don't see how this shows anything about content recognition at all--we recognize the two photos as having the same content only because we recognize them as having the same object
[22:23] Jon: not perfect meaning we can't always see when changes have been made
[22:23] Eripsa: when the system locates so-called "meaningful content" in the image, it is evaluating that content based on what our perceptual system finds meaningful
[22:23] Eripsa: as you just said
[22:24] Eripsa: so it might not be cognition in the sense of "that's not what our brain does", but it is certainly cognitive in the sense of "it is tracking meaning-relations that are meaningful to us"
[22:25] Jon: by 'it' you mean the program?
[22:25] Eripsa: and it can maintain those meaning relations without any need for recognizing objects
[22:25] Eripsa: yeah
[22:27] Jon: it isn't recognizing meaning at all though--it's just the smoke detector phenomenon made a bit more sophisticated. We designed a system that exploits an idiosyncrasy of our cognitive system. That doesn't mean that the system we've designed has any meaning-understanding abilities at all. And it isn't tracking meaning relations--it's tracking relationships between numbers that, by our design, will tend to correspond to meaning relations.
[22:28] Eripsa: see this is why I don't like your position.
[22:28] Jon: heh
[22:28] Eripsa: my first reaction to this video is as follows:
[22:28] Eripsa: It adds a whole new dimension to data corruption: it demonstrates quite clearly that our machines are aware of the meaningful content of information and are in a position to change it as they see fit.
[22:28] Jon: what?! how the hell does it demonstrate that at all?
[22:28] Eripsa: and my reaction doesn't even get off the ground on your view
[22:28] Eripsa: exactly
[22:29] Jon: sounds like a personal problem to me :P
[22:29] Eripsa: for instance, it shows them trying to compensate for our specialized wetware for identifying faces and people, which is especially sensitive to distortion
[22:29] Eripsa: and when you see that in action, you realize that this compensation is a hack, pure and simple
[22:30] Eripsa: ideally the computer ought to be able to do all of that work automatically
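The "faces hack" being described is plausibly just a mask that inflates the energy of detector-flagged regions so the carving step routes around them. A hedged sketch (the detector, the mask, and the boost value are all assumptions, not details from the conversation):

```python
import numpy as np

def protect_regions(energy, face_mask, boost=1e6):
    """Hypothetical 'faces hack': add a large constant to the energy
    wherever a separate face detector flagged pixels, so the seam
    search never routes through them. The detector itself is assumed
    to exist upstream; this function only consumes its binary mask."""
    return energy + boost * face_mask.astype(float)

# Toy example: one flagged pixel in a uniform-energy image.
e = np.ones((2, 2))
m = np.array([[1, 0], [0, 0]])
p = protect_regions(e, m)
```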
[22:30] Jon: right--tracking the relationship between the numbers and our meaning-identifying systems is harder because our system is more refined when it comes to faces
[22:30] Jon: sure it's a hack
[22:30] Eripsa: but if it's a hack, that means the machines are doing things that we don't really want them to do, and they are doing it automatically
[22:30] Eripsa: and so we have to go out of our way to correct them
[22:30] Eripsa: (speaking loosely, of course)
[22:31] Jon: no, it means that our original program just wasn't written well enough and it isn't doing what we want it to do
[22:31] Eripsa: yeah
[22:31] Eripsa: which means it is doing something other than what we want it to do
[22:32] Jon: which is because of our error
[22:32] Eripsa: the changes they make (unconsciously) still have meaningful implications for the products of their processing
[22:32] Jon: if I throw a ball and I accidentally hit you in the head, we don't attribute that to something the ball is doing that was outside my control, we attribute it to the fact that I can't throw for shit
[22:33] Eripsa: well sure. That's because we have a naive distinction between animate and inanimate objects.
[22:33] Eripsa: that Aristotelian distinction holds no scientific water, of course.
[22:34] Jon: ...
[22:34] Eripsa: Look, if your point is that those meaningful implications are 'merely' meaningful to us, then sure
[22:34] Jon: so would you attribute the ball hitting you to something the ball did and not somthing I did?
[22:35] Eripsa: if the ball were lopsided, then maybe I would
[22:35] Eripsa: because it did something you didn't intend it to do, because of a quirk of its own physical makeup
[22:35] Jon: it seems more plausible to say that I threw badly by not compensating for the lopsidedness
[22:35] Jon: the ball didn't do anything--I did something with the ball
[22:35] Eripsa: or at least, I wouldn't hold you responsible, and I would 'blame' the ball. It seems perfectly natural to say "I didn't mean to do that, it was the ball's fault. It's lopsided."
[22:36] Eripsa: the content awareness system is changing meaningful information automatically.
[22:36] Jon: well I wouldn't hold me responsible either, but that's just because I didn't intend the outcome. What happened was still ultimately caused by something I did and not something the ball did (or didn't do)
[22:36] Eripsa: without our direct control, assent, or even oversight
[22:37] Eripsa: I mean surely you agree that the possibility for intentional data corruption is magnified with these sorts of tools
[22:37] Jon: definitely
[22:37] Jon: but I don't think you can derive from that the fact that the program is aware of anything at all, or is doing anything cognitive even
[22:38] Eripsa: but the magnification of the problem doesn't come from the possibility of manipulating images; that's been around for a long time
[22:38] Eripsa: it comes from the fact that the machines are doing it automatically
[22:39] Eripsa: autonomously, in a metaphysically neutral sense
[22:39] Eripsa: in just the sense that they are algorithmically selecting which lines to eliminate
[22:39] Jon: no, it comes from the fact that such manipulation is now much easier for people to accomplish, as the tools have become more sophisticated
[22:39] Eripsa: what content to keep, and what content to ignore.
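The "algorithmic selection of which lines to eliminate" is a dynamic-programming seam search: accumulate the minimal energy reaching each pixel from the top, then backtrack the cheapest connected vertical path. A minimal, illustrative implementation over a 2-D energy array (not the actual tool's code):

```python
import numpy as np

def min_seam(energy):
    """Find the lowest-energy connected vertical seam: one column index
    per row, each within one column of the row above."""
    h, w = energy.shape
    cost = energy.astype(float).copy()
    # Forward pass: cumulative minimal cost of any seam ending at (i, j).
    for i in range(1, h):
        for j in range(w):
            lo, hi = max(j - 1, 0), min(j + 2, w)
            cost[i, j] += cost[i - 1, lo:hi].min()
    # Backtrack from the cheapest bottom-row pixel.
    seam = [int(np.argmin(cost[-1]))]
    for i in range(h - 2, -1, -1):
        j = seam[-1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        seam.append(lo + int(np.argmin(cost[i, lo:hi])))
    return seam[::-1]

# A zero-energy column is 'content to ignore': the seam follows it.
energy = np.ones((3, 4))
energy[:, 1] = 0
```

Removing the returned seam from each row shrinks the image by one column while leaving high-energy content untouched, which is exactly the "keep vs. ignore" selection under discussion.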
[22:39] Jon: that's your central mistake, I think--you don't want to admit that computers are fundamentally just tools
[22:39] Eripsa: yeah
[22:40] Eripsa: ok, I'll take a running start from there next time
[22:40] Eripsa: :)
[22:40] Jon: hahaha
[22:40] Eripsa: you'd absolutely hate my technology class, heh
[22:41] Jon: you need to start publishing....so I can start counter-publishing
[22:41] Jon: excellent
[22:41] Eripsa: I'm gonna be at a conference in
[22:41] Eripsa: I'll let you know how it goes