Wednesday, April 20, 2011

Supersizing the Mind - Part I: making sense of the mess

I've set out to read Clark's Supersizing the Mind. Not that it pleases me, mind you, but it's hard to be credible writing on the extended mind without having read Clark seriously. Sure, I've read his famous 1998 paper, but that paper is somewhat programmatic and filled with loose threads, and what positive thesis it defends (the Parity Principle) is more of an eye-opener, or an "intuition pump", than an actual draft of a theory of the mind or the cognitive. (By the way, I've never understood why commentators make so much fuss about it... it feels like the old joke: C&C are pointing at the sky, and people are looking at the finger.)

The first part of Supersizing the Mind does not make for very exciting reading. The idea, it seems, is to lay out background knowledge to be picked up in the discussions of the two following parts of the book (although, honestly, I'm still in the process of reading them, so I can't say for sure). In fact, it feels like reading Aristotle for the first time: it seems purely encyclopedic, with no thread holding everything together into a narrative.

Take this, for instance: at the end of the first chapter, he discusses Dynamical Systems Theory, first mentioning how its holistic approach lets us understand phenomena that would otherwise be incomprehensible, then pointing out that, despite van Gelder's optimism, it is ill-suited to many problems, such as those involving "independent variable causally interacting substates". There are parts in systems, and sometimes it's worth examining the parts in order to understand them. Thus he proposes a hybrid approach which, he claims, cognitive scientists are already using: he sees "computational, representational, information-theoretic and dynamic approaches as deeply complementary".

This might tip you in a direction: sure, the traditional information-theoretic approach commits you to a part-whole ontology, but it's insufficient to account for the mind or the cognitive system. You need the "flow" of the dynamic approach. But then comes section 4.7: EC is about vehicles, not content. He even describes how the vehicle is a part of the content-generating system, thereby excluding from what could count as mind any content-generating mechanisms that are not representation vehicles (in Clark's sense). It's hard not to link this part-whole distinction with the one at play when he discussed the dynamic approaches. In fact, the link is probably valid: you need a holistic, probably dynamic approach to make sense of content, and yet what counts as the vehicle is something, hopefully localized, that holds information in a way information-theoretic approaches can grasp. But then, if EC is vehicular, why is the dynamic/holistic approach so important?

Well, Clark talks a lot about recruitment. The agent, he says, is constantly renegotiating its boundaries. To illustrate this, he describes how tools become integrated into our activities once they become "transparent", how our brain comes to treat the space around a tool as if it were part of our body, and so on. That would be the cyborg argument, and the conception of agency underlying it would not be the conscious, theory-laden one we usually see in cognitive science, but rather the one implicit in our problem-solving and in our actions, in our "body schema", to use Gallagher's terminology. The dynamic/holistic approach is just a way to study this scientifically.

I think even Rupert, careful as he is, fails to appreciate the cyborg argument. It's not about cognitive systems made on the fly; it's about an agent renegotiating itself. The tool has to become familiar before it becomes transparent. You have to work for this recharacterization, but it gets done.

Wednesday, February 16, 2011

Moving to Wordpress

I've been having problems with Google arbitrarily deactivating content without either notifying me or answering my emails. As a result, I'm pulling the plug on Blogger and going back to Wordpress. Please update your RSS readers and your bookmarks!

Here is the new address:

Monday, January 31, 2011

Gallagher on representationalism

I moved to wordpress. Please comment this note at its new address: http://mlnode.wordpress.com/2011/02/01/gallagher-on-representationalism/

Today's readings included Gallagher's "Are minimal representations still representations?", which is a bit of a political choice: the author is visiting Montreal in three weeks. But it fell squarely within my preoccupations.

The general line goes this way: Wheeler's action-oriented representations (AORs), he says, simply lack the main characteristics they would need in order to qualify as representations. But Wheeler is not alone in this: Mark Rowlands, Andy Clark and Rick Grush are in the same boat. The representations they see in coupled systems fail to be, among other things, strongly instructional or decouplable; that is, they do not in themselves carry their interpretation, and they fail to make any sense outside of the tightly coupled systems on which these authors focus.

Now, Gallagher points out, you can't really have a representation that's easy to identify in a system, easy to decouple from it, and strongly instructional in itself. Dreyfus and Kripkenstein taught us better, and philosophers who dwell in cognitive science have learned that lesson. So the point of Gallagher's article is that Clark and Wheeler are not the representationalists they claim to be: they hold something of a middle ground. They might be critical of naive anti-representationalists who make wild extrapolations from relatively jejune examples (Clark & Toribio 2006), but they aren't completely spared by anti-representationalism.

So is representation the locus of the divide between traditional philosophy of mind and the new one which relies heavily on cognitive science (as I was alluding to in my previous post)? Strangely, Gallagher sees a link between the two phenomena, but he sees the causality acting in the opposite direction:

“... the commitment to some version of this idea of extended or situated cognition is what motivated anti-representationalism in the first place.” p.7

So, which way did it go? Given how he cites Dreyfus, it's quite possible Gallagher didn't think about it very thoroughly. In any case, the article doesn't answer this question.

______________________

Gallagher, S. (2008). Are Minimal Representations Still Representations? International Journal of Philosophical Studies, 16(3), 351-369. Routledge.

Clark, A., & Toribio, J. (2006). Doing Without Representing? Synthese, 101(3), 401-431. Springer. Retrieved from http://hdl.handle.net/1842/1301

Monday, January 24, 2011

Dennett's ill-formed concept of pattern

I moved to wordpress. Please comment this note at its new address: http://mlnode.wordpress.com/2011/01/24/dennetts-ill-formed-concept-of-pattern/

I'm reading Dennett's "Real Patterns" – and I can't really get over one rather important problem.

Dennett's whole point is that intentional attitudes are real because there are patterns in our behaviour which act like affordances for an intentional interpretation. I could follow him there if patterns were affordances, but they're not. In order to express what they are, he uses Chaitin's definition of mathematical randomness:

“A series of numbers is random if the smallest algorithm capable of specifying it to a computer has about the same number of bits of information as the series itself” (Chaitin, p. 48, via Dennett, p. 32)

which he interprets:

“A series (of dots or numbers or whatever) is random if and only if the information required to describe (transmit) the series accurately is incompressible: nothing shorter than the verbatim bit map will preserve the series. Then a series is not random—has a pattern—if and only if there is some more efficient way of describing it.” (Dennett, p. 32)

You'll notice that in the interpretation, "computer" has been removed. That omission, however, is huge.
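The compressibility criterion itself is easy to make concrete. Here is a minimal sketch of my own (not Dennett's or Chaitin's), using zlib's compressed size as a crude, computable stand-in for the length of the "smallest algorithm"; true Chaitin-Kolmogorov complexity is uncomputable, so this is only an approximation:

```python
import os
import zlib

# A highly regular series: "01" repeated 500 times (1000 bytes).
patterned = b"01" * 500

# 1000 random bytes: with overwhelming probability, incompressible.
random_like = os.urandom(1000)

# zlib's compressed size is a rough proxy for the length of the
# shortest description of the series.
size_patterned = len(zlib.compress(patterned))
size_random = len(zlib.compress(random_like))

# The patterned series compresses to a tiny fraction of its length:
# it "has a pattern" in Dennett's sense. The random series does not
# compress at all (zlib even adds a few bytes of overhead).
print(size_patterned, len(patterned))
print(size_random, len(random_like))
```

Note that the sketch already smuggles in a particular machine: zlib running on ordinary hardware. That is exactly the dependence the next example turns on.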

Say you have a very basic computer plugged into a simple screen, on which you are trying to print barcodes (this isn't unlike Dennett's example). Say there's a bug in the main board that causes the pixels in the second horizontal line to be written from right to left instead of left to right. It might take you more code, but you could still print any barcode you need.

Now, say your main board is really messed up: pixels are so scattered that if you enter a program which draws a regular barcode, you get a random pattern (in Chaitin's sense). Conversely, because of this malfunction, in order to actually print that barcode, you need to specify every bit one by one. Then the pattern that is most regular to us and to a standard computer can be output by nothing short of a "verbatim bit map", while the pattern that is random to our eyes and to a standard computer's can be output by an algorithm much shorter than the bit map itself.
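The scrambled-board point can be simulated directly. In this sketch (again my own toy example, with zlib standing in for description length), a fixed but shuffled wiring of the display turns a regular barcode into something far less compressible, even though nothing about the image "itself" has changed:

```python
import random
import zlib

rng = random.Random(0)  # fixed seed: one particular broken main board

# A regular "barcode": alternating runs of 8 black (1) and 8 white (0)
# pixels, 1024 pixels in all.
barcode = bytes(([1] * 8 + [0] * 8) * 64)

# The broken board's wiring: the pixel sent to position i actually
# comes from position perm[i]. The permutation is fixed, just messy.
perm = list(range(len(barcode)))
rng.shuffle(perm)
screen_output = bytes(barcode[perm[i]] for i in range(len(barcode)))

# Relative to a standard machine, the straight barcode is a pattern;
# the same image pushed through the scrambled board is far less so.
print(len(zlib.compress(barcode)))        # short description
print(len(zlib.compress(screen_output)))  # much longer description
```

The shuffled output still compresses a little (it only contains two symbol values), but the gap in description length shows that "how short an algorithm suffices" depends on the machine doing the outputting, which is the relativity Dennett's interpretation quietly drops.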

Dennett's mistake is to let "computer" stand for anything whatsoever. He forgets that a computer is a corporeal object and, as such, has perceptual biases. The only way to salvage his theory is to consider patterns as affordances.