Monday, January 31, 2011

Gallagher on representationalism

I moved to WordPress. Please comment on this note at its new address: http://mlnode.wordpress.com/2011/02/01/gallagher-on-representationalism/

Today's readings included Gallagher's "Are minimal representations still representations?", which is a bit of a political choice: the author is visiting Montreal in three weeks. But it fell squarely within my current preoccupations.

The general line goes this way: Wheeler's action-oriented representations (AOR), Gallagher says, simply lack the main characteristics they would need in order to qualify as representations. But Wheeler is not alone in this: Mark Rowlands, Andy Clark and Rick Grush are in the same boat. The representations they see in coupled systems fail to be, among other things, strongly instructional or decouplable; that is, they do not carry their interpretation in themselves, and they fail to make any sense outside of the tightly coupled systems on which these authors focus.

Now, Gallagher points out, you can't really have a representation that is easy to identify in a system, easy to decouple from it, and strongly instructional in itself. Dreyfus and Kripkenstein taught us better, and philosophers who work in cognitive science have learned that lesson. So the point of Gallagher's article is that Clark and Wheeler are not the representationalists they claim to be: they hold a bit of a middle ground. They may be critical of naive anti-representationalists who make wild extrapolations from relatively jejune examples (Clark 2006), but their own position is not entirely spared by anti-representationalism.

So is representation the locus of the divide between traditional philosophy of mind and the new one that relies heavily on cognitive science (as I was alluding to in my previous post)? Strangely, Gallagher sees a link between the two phenomena, but with the causality running in the opposite direction:

“... the commitment to some version of this idea of extended or situated cognition is what motivated anti-representationalism in the first place.” p.7

So, which way did it go? Given how he cites Dreyfus, it's quite possible Gallagher didn't think about it very thoroughly. In any case, the article doesn't answer this question.

______________________

Gallagher, S. (2008). Are Minimal Representations Still Representations? International Journal of Philosophical Studies, 16(3), 351-369. Routledge.

Clark, A., & Toribio, J. (2006). Doing Without Representing? Synthese, 101(3), 401-431. Springer. Retrieved from http://hdl.handle.net/1842/1301

Monday, January 24, 2011

Dennett's ill-formed concept of pattern

I moved to WordPress. Please comment on this note at its new address: http://mlnode.wordpress.com/2011/01/24/dennetts-ill-formed-concept-of-pattern/

I'm reading Dennett's "Real Patterns" – and I can't really get over one rather important problem.

Dennett's whole point is that intentional attitudes are real because there are patterns in our behaviour which act like affordances for an intentional interpretation. I could follow him there if patterns were affordances, but they're not. In order to express what they are, he uses Chaitin's definition of mathematical randomness:

“A series of numbers is random if the smallest algorithm capable of specifying it to a computer has about the same number of bits of information as the series itself” (Chaitin, p. 48, via Dennett, p. 32)

which he interprets:

“A series (of dots or numbers or whatever) is random if and only if the information required to describe (transmit) the series accurately is incompressible: nothing shorter than the verbatim bit map will preserve the series. Then a series is not random—has a pattern—if and only if there is some more efficient way of describing it.” (Dennett, p. 32)
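To make the criterion concrete, here is a small Python sketch of my own (an illustration, not Dennett's or Chaitin's formalism): zlib compression stands in, very crudely, for "the smallest algorithm", since true algorithmic complexity is not computable. The regular series admits a description far shorter than its verbatim bit map; the random one does not.

    import random
    import zlib

    n = 10_000  # pixels in the series, stored one byte per pixel (0 or 1)

    # A highly regular series: alternating bars eight pixels wide, like a barcode row.
    patterned = bytes((i // 8) % 2 for i in range(n))

    # A pseudorandom series of the same length.
    rng = random.Random(0)
    noisy = bytes(rng.getrandbits(1) for _ in range(n))

    print(len(zlib.compress(patterned)))  # a few dozen bytes: a much shorter description exists
    print(len(zlib.compress(noisy)))      # around n/8 bytes, i.e. roughly the verbatim bit map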

You'll notice that in the interpretation, the "computer" has been dropped. That omission, however, is huge.

Say you have a very basic computer plugged into a simple screen, on which you are trying to print barcodes (this isn't unlike Dennett's example). Say there's a bug in the main board that causes the pixels in the second horizontal line to be written from right to left instead of left to right. It might take you more code, but you could still print any barcode you need.

Now, say your main board is really messed up: pixels are so scattered that if you enter a program which writes a regular barcode, you get a random pattern (in Chaitin's sense). Conversely, because of this malfunction, in order to actually print the barcode, you need to specify every bit one by one. Then the pattern that is most regular to us and to a standard computer cannot be output by anything short of a "verbatim bit map", while the pattern that looks random to our eyes and to a standard computer can be output by an algorithm that is much shorter than the bit map itself.
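Here is a rough Python sketch of that thought experiment, under my own assumed model of the broken board: the hardware applies a fixed pseudorandom scrambling to video memory before it reaches the screen, and compressed size again stands in for description length. Relative to this machine, the image that looks perfectly regular to us needs something close to a verbatim bit map.

    import random
    import zlib

    n = 10_000
    rng = random.Random(0)

    # The wiring fault: a fixed pseudorandom scrambling of video-memory positions.
    scramble = list(range(n))
    rng.shuffle(scramble)

    # The image we want on screen: a barcode, perfectly regular to our eyes.
    barcode = bytes((i // 8) % 2 for i in range(n))

    # The screen shows memory[scramble[i]] at position i, so to display the barcode
    # we must load its pre-image into memory, bit by bit.
    preimage = bytearray(n)
    for i, value in enumerate(barcode):
        preimage[scramble[i]] = value

    print(len(zlib.compress(barcode)))          # short description on a healthy board
    print(len(zlib.compress(bytes(preimage))))  # close to the verbatim bit map on this one

    # Conversely, the short program that writes the barcode straight into memory
    # would, on this board, put a random-looking image on the screen.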

Dennett's mistake is that "computer" could mean anything to him. He forgets that a computer is a corporeal object and, as such, has its own perceptual bias. The only way to salvage his theory is to consider patterns as affordances.