Wednesday, April 20, 2011

Supersizing the Mind - Part I: making sense of the mess

I've set out to read Clark's Supersizing the Mind. Not that I enjoy it, mind you, but it's hard to be credible writing on the extended mind without having read Clark seriously. Sure, I read his infamous 1998 paper, but that paper is somewhat programmatic, filled with loose threads, and what positive thesis it defends—the Parity Principle—is more of an eye-opener, or an "intuition pump", than an actual draft of a theory of the mind or the cognitive. (By the way, I've never understood why commentators make so much fuss about it... it feels like the old joke: C&C are pointing at the sky, and people are looking at the finger.)

The first part of Supersizing the Mind is not very exciting reading. The idea, it seems, is to lay out some background knowledge so that it can be picked up in the discussions featured in the two following parts of the book (although, honestly, I'm still in the process of reading them, so I can't say for sure). In fact, it feels like reading Aristotle for the first time: it seems purely encyclopedic; there doesn't seem to be any thread holding everything together in a narrative.

Take this, for instance: at the end of the first chapter, he talks about Dynamical Systems Theory, first to mention how its holistic approach enables us to understand phenomena which would otherwise be incomprehensible, and then to point out that, despite van Gelder's optimism, it fails to be appropriate for many problems, such as those involving "independent variable causally interacting substates". There are parts in systems, and sometimes it's worth examining the parts to understand them. Thus, he proposes a hybrid approach which, he claims, is already being used by cognitive scientists: he sees "computational, representational, information-theoretic and dynamic approaches as deeply complementary".

This might tip you in a direction: sure, the traditional information-theoretic approach commits you to a parts-relation ontology, but it's insufficient to account for the mind or the cognitive system. You need the "flow" from the dynamic approach. But then, section 4.7: EC is about vehicles, not content. He even describes how the vehicle is a part of the content-generating system, thus excluding content-generating mechanisms which are not representation (in Clark's sense) vehicles from what could count as mind. It's hard not to link this part-whole distinction with the one that was at play when he talked about the dynamic approaches. In fact, it's probably valid: you need a holistic, probably dynamic approach to make sense of content, and yet what counts as the vehicle is something hopefully localized, which holds information in a way that can be understood by information-theoretic approaches. But then, if EC is vehicular, why is the dynamic/holistic approach so important?

Well, Clark talks a lot about recruitment. The agent, he says, is constantly renegotiating his boundaries—and to illustrate this, he talks about how tools become integrated in our activities when they become "transparent", about how our brain begins to treat nearby space and tools as if they were part of our body, etc. That would be the cyborg argument, and the conception of an agent underlying it would not be the conscious, theory-laden one we usually see in cognitive science, but rather the one implicit in our problem-solving and in our actions—in our "body schema", to use Gallagher's terminology, as he mentions. The dynamic/holistic approach is just a way to study this scientifically.

I think even Rupert, careful as he is, fails to appreciate the cyborg argument. It's not about cognitive systems made on the fly; it's an agent renegotiating itself. The tool has to become familiar before it becomes transparent. You have to work for this recharacterization, but it gets done.

Wednesday, February 16, 2011

Moving to Wordpress

I've been having problems with Google arbitrarily deactivating content without either notifying me or answering my emails. As a result, I'm pulling the plug on Blogger and going back to Wordpress. Please update your RSS readers and your bookmarks!

Here is the new address:

Monday, January 31, 2011

Gallagher on representationalism

I moved to Wordpress. Please comment on this note at its new address:

Today's readings included Gallagher's "Are minimal representations still representations?", which is a bit of a political choice: the author is visiting Montreal in three weeks. But it fell right in with my preoccupations.

The general line goes this way: Wheeler's action-oriented representations (AORs), he says, simply lack the main characteristics they would need in order to qualify as representations. But Wheeler is not alone in this: Mark Rowlands, Andy Clark and Rick Grush are in the same boat. The representations they see in coupled systems fail to be, among other things, strongly instructional or decouplable: they do not carry their interpretation in themselves, and they make no sense outside of the tightly coupled systems on which these authors focus.

Now, Gallagher points out, you can't really have a representation that's easy to identify in a system, easy to decouple from it, and strongly instructional in itself. Dreyfus and Kripkenstein taught us better, and philosophers who dwell in cognitive science have learned that lesson. So the point of Gallagher's article is that Clark and Wheeler are not the representationalists they claim to be: they hold a bit of a middle ground. They might be critical of naive anti-representationalists who make wild extrapolations from relatively jejune examples (Clark 2006), but they aren't completely spared by anti-representationalism either.

So is representation the locus of the divide between traditional philosophy of mind and the new one which relies heavily on cognitive science (as I was alluding to in my previous post)? Strangely, Gallagher sees a link between the two phenomena, but he sees the causality acting in the opposite direction:

“... the commitment to some version of this idea of extended or situated cognition is what motivated anti-representationalism in the first place.” p.7

So, which way did it go? Given how he cites Dreyfus, it's quite possible Gallagher didn't think about it very thoroughly. In any case, the article doesn't answer this question.


Gallagher, S. (2008). Are Minimal Representations Still Representations? International Journal of Philosophical Studies, 16(3), 351–369.

Clark, A., & Toribio, J. (2006). Doing Without Representing? Synthese, 101(3), 401–431.

Monday, January 24, 2011

Dennett's ill-formed concept of pattern

I'm reading Dennett's "Real Patterns" – and I can't really get over one rather important problem.

Dennett's whole point is that intentional attitudes are real because there are patterns in our behaviour which act like affordances for an intentional interpretation. I could follow him there if patterns were affordances, but they're not. In order to express what they are, he uses Chaitin's definition of mathematical randomness:

“A series of numbers is random if the smallest algorithm capable of specifying it to a computer has about the same number of bits of information as the series itself” (Chaitin, p. 48, via Dennett, p. 32)

which he interprets:

“A series (of dots or numbers or whatever) is random if and only if the information required to describe (transmit) the series accurately is incompressible: nothing shorter than the verbatim bit map will preserve the series. Then a series is not random—has a pattern—if and only if there is some more efficient way of describing it.” (Dennett, p. 32)

You'll notice that in the interpretation, "computer" has been removed. That omission, however, is huge.

Say you have a very basic computer plugged into a simple screen, on which you are trying to print barcodes (this isn't unlike Dennett's example). Say there's a bug in the main board that causes the pixels in the second horizontal line to be written from right to left instead of left to right. It might take you more code, but you could still print any barcode you need.

Now, say your main board is really messed up: pixels are so scattered that if you enter a program which writes a regular barcode, you get a random pattern (in Chaitin's sense). Conversely, because of this malfunction, in order to actually print the barcode, you need to specify every bit one by one. Then the pattern that is most regular to us and to a standard computer can be output by nothing short of a "verbatim bit map", while the pattern that is random to our eyes and to a standard computer's can be output by an algorithm that is much shorter than the bit map itself.
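The point can be made concrete with a toy sketch (my own, not Dennett's): using zlib's compressed length as a rough, computable proxy for Chaitin-style complexity, a fixed scrambling of pixel positions stands in for the broken main board. The same bits that are highly regular for a standard machine become random relative to the scrambled one, and vice versa.

```python
import random
import zlib

def complexity(bits: str) -> int:
    """Compressed length in bytes: a crude, computable stand-in for
    Chaitin's algorithmic complexity (which is uncomputable)."""
    return len(zlib.compress(bits.encode()))

# A "barcode": a highly regular pattern of bits.
barcode = "10" * 500

# The "really messed up main board": a fixed but arbitrary permutation
# of pixel positions, standing in for the hardware malfunction.
rng = random.Random(0)
perm = list(range(len(barcode)))
rng.shuffle(perm)

def display(bits: str) -> str:
    """What the broken screen actually shows when fed `bits`."""
    out = [""] * len(bits)
    for i, p in enumerate(perm):
        out[p] = bits[i]
    return "".join(out)

scrambled = display(barcode)

# Relative to the broken board, barcode and scramble trade places: the
# short recipe "write '10' 500 times" no longer yields a barcode on
# screen, and the bits that do look regular to it look random to us.
print(complexity(barcode), complexity(scrambled))
```

The first number comes out far smaller than the second, even though `scrambled` contains exactly the same bits as `barcode`: compressibility, like patternhood, is relative to the machine doing the describing.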

Dennett's mistake is that "computer" can be anything to him. He forgets that a computer is a corporeal object and, as such, has perceptual biases. The only way to salvage his theory is to consider patterns as affordances.

Monday, November 15, 2010

Wittgenstein's dinosaurs

"Les gardiens du bon usage" is a critical paper written by Pierre Poirier and Nicolas Payette in response to Bennett & Hacker's Philosophical Foundations of Neuroscience.

"Le bon usage" is a phrase one should translate as something like "proper usage, according to convention". Manuals with titles such as "Savoir-vivre et usages du monde" (from the infamous Berthe Bernage) told young women how to act as a perfect housewife in any mundane situation, and "Le bon usage" was the title of the most popular prescriptive grammar handbook. If anything, "les gardiens du bon usage" ("the keepers of proper usage") is a phrase which will strike one as rather dubious praise: one could indeed see in it the figure of the patriarch, keeper of traditions, but also the figure of the "grammar nazi" or the authoritarian grade school teacher. One could also add that there is a generational issue in it: if the older generation fondly remembers "Les insolences du frère Untel", which denounced the use of Québec's slang as boneless and servile, their children have not carried on the fight for "le bon français". In any case, the ambiguity in the phrase carries into the paper, a careful criticism of Bennett & Hacker's book which can be seen as benign or destructive depending on who you are.

B&H's argument revolves around the mereological fallacy, which consists in attributing a property of the whole to one of its parts, or a property of a part to the whole. In the case of the neuroscientists, B&H think they commit the latter, attributing psychological states and other properties of the mind to the brain or brain tissues. The point of their book is to expose this.

Obviously, as P&P point out, some conditions apply: if, philosophically, the theory of the brainbound mind were true (as opposed to, say, the extended mind), then there would be no problem in literally attributing psychological states to the brain, since it would, in a sense, be identical to the mind. But notwithstanding this, exposing the fallacy is no easy task. As P&P note, it would be very surprising if everyone in neuroscience—including researchers who are aware of the philosophical issues concerning their field, like Damasio—had committed the kind of fallacy college freshmen learn to avoid in philosophy 101 courses. Thus, some charity is expected when we read phrases such as "the brain decides" or "neurons estimate probabilities".

B&H consider four ways in which neuroscientists could be warranted in their use of these phrases: they could be using them as analogies, as metaphors, they could be modifying their meaning, or using them in a completely new sense.

Analogies have been fruitful in science: take the analogy between water flow and electricity in circuits, which has yielded successful predictions. However, it worked because electricity in a circuit does indeed behave like water in pipes. No such structural correspondence can be found between the brain and the mind.

Metaphors are less consequential; however, in B&H's opinion, it is hard not to get carried away. Illustrating their point with an example, they contend that using psychological metaphors will confuse neurologists and lead them to mereological fallacies. P&P seem unconvinced by B&H's evidence and point out that there is no necessity in metaphors causing this confusion. A careful usage should be a sufficient remedy.

In the third and fourth scenarios, neurologists would be using words such as "think", "decide", "believe", etc. not to mean think, decide, believe, etc., but rather *think, *decide, *believe, etc., which have only vaguely similar meanings to the original concepts. This, however, would mean that we'd also have to modify the meanings of all the terminology that revolves around them, and talk about *information, *knowledge, *memory, etc. That's a bit too much in one gulp, in B&H's opinion: neuroscientists haven't done the conceptual work to warrant the use of a whole new terminology. P&P think that language doesn't evolve by definitions: people use new words (or new meanings) first; it's only when they reflect on their usage that they come up with definitions. It's the usage that gives meaning to the words, not the other way round. Opposing new ways of using words would cement language and prevent its semantic evolution.

P&P invoke Patricia Churchland: drawing on Quine's web of belief, she argues that a new theory could very well change the meaning of the words it uses, such that usages which were previously unwarranted become perfectly legitimate. For instance, before Archimedes found a way to tell true gold from fool's gold by measuring its density, you could imagine that the definition of gold was something like "yellow metal". After Archimedes, density became part of the definition, and as science evolved, so did the concept of gold, until we could even talk of "white gold" without contradiction. Similarly, they argue, neurological advances could modify our understanding of psychological properties, to the point that behaviour would be neither necessary nor sufficient to establish their presence.

P&P admit that B&H are right in thinking that neurologists should be careful about the way they link psychological and neurological phenomena—if only because mistakes could lead to bad diagnoses. However, banning new usages of psychological words altogether would probably be unwise: we should judge the tree by its fruits, and those are not ripe yet.

Saturday, October 9, 2010

Crispin Wright's Kripke

Reading Crispin Wright's "Kripke's Account of the Argument Against Private Language", you'd think he got the gist. In fact, his exposition of the skeptical paradox captures in a few pages much of the complexity of Kripke's seemingly innocent dialectics, and he does realize that Kripke makes a point which really is all about semantics:

"A second response to the skeptical argument which Kripke discusses (41-50) is the idea that meaning green by 'green' "denotes an irreducible experience, with its own special quale, known directly to each of us by introspection." If there were such an experience "as unique and irreducible as that of seeing yellow or feeling a headache," then—in the presence of the relevant idealizations—it could simply be recalled in response to the skeptic's challenge and that would be that. Kripke's response to this proposal, drawing extensively on themes explicit in the Investigations, is surely decisive. Quite apart from the introspective implausibility of the suggestion, it is impossible to see how such an experience could have the content that understanding is conceived as having, could have, as it were, something to say about the correct use of E in indefinitely many situations." p. 772

I'm not sure anyone ever understood the second part well—I think it's for the good reason that Kripke himself didn't consider it an exhaustive framework, but rather a thought experiment which could give us a hint as to where the solution may lie. For this reason, I'll skip this part.

However, I couldn't pass over the last section, "Resisting the Skeptical Argument", where Crispin Wright asks the very interesting question which Kripke fails to explicitly answer: what kind of rule is the skeptic after? What is the kind of rule he argues to be impossible? Well, inferential rules, it seems: rules we could infer from past usages or, for that matter, from any considerations, so long as they are factual. He complains about it in these terms:

"What is unsatisfactory about the suggestion is that it gets the intuitive epistemology of understanding wrong. Recognition that a certain use of an expression fits one's former (and current) understanding of it would not, it seems, except in the most extraordinary circumstances, have to proceed by inference to the best semantic explanation of one's previous uses of that expression." p.773

Thus, "Kripke's skeptic persuades his victim to search for recalled facts from which the character of his former understanding of E may be derived." (p. 774) At this point, Crispin Wright's understanding of "infer" seems to encompass even causality (at least of the classical type). As such, I would argue that his complaints are unfounded: it's as if meaning had to come from nothingness.

Hence there ought to be some sort of deus ex machina to save meaning: Crispin Wright proposes that such things as intentions can play savior. They can stop the skeptic, according to Crispin Wright, because they have content by their own virtue:

"To come to know that you have a certain intention is not to have it dawn on you that you have an intention of some sort and then to recover an account of what the intention is by reflecting upon recent or accompanying thoughts. It is the other way round: you recognize thoughts as specifying the content of an intention that you have because you know what the intention is an intention to do." (p. 776)

Well, I'll be damned, we have found the source of all semantics! It's all in this magical thing called intention. Now, to be fair, Crispin Wright doesn't believe that intention is the right kind of thing, but he seems to argue that the solution lies in something similar, something which has the same quality of being uninferred, of getting its content ex nihilo.

For my part, I fail to see how this makes for an appropriate response. Just as life isn't an explanation for cells' growth but rather a term that enters into its description, intention can't explain how it has content, i.e. how it relates to its realisation.

Sunday, September 26, 2010

Searle as a Closet Dualist?

Sometimes, you think you know an author. And then you wonder.

Take John Searle. There is nothing ambiguous about his monism: that mental stuff like intentionality is a physical phenomenon is, to him, utterly obvious. The only reason we still talk about it is that previous philosophers have been arguing about it for the last few millennia (to be fair, Searle stops at Descartes). If we didn't, we'd see the light without problem. In "Intentionality and its place in nature", he writes:

"One group of philosophers sees itself as defending the progress of science against residual superstitions. The other group sees itself as asserting obvious facts that any moment of introspection will reveal. But both accept the assumption that naive mentalism and naive physicalism must be inconsistent. Both accept the assumption that a purely physical description of the world could not mention any mental entities." p. 9

But then, he goes on to say that mental stuff and physical stuff beg for different explanations:

"In any causal explanation, the propositional content of the explanation specifies a cause. But in intentional explanations the cause specified is itself an intentional state with its own propositional content."

As a matter of fact, he claims that intentional explanations have very specific features. For instance, they repeat the explanandum. Why did I go to Rome? Because I wanted to go to Rome. Furthermore, they generally aren't covered by laws (in the scientific sense) and they frequently take final causes (the end) as complete explanans.

"These features have no analogue in the standard physical sciences. If I explain the rate of acceleration of a falling body in terms of gravitational attraction together with such other forces as friction operating on the body, the propositional content of my explanation makes reference to features of the event such as gravity and friction, but the features themselves are not propositional contents or parts of propositional contents."

Ok, let's repeat this. Intentional and mental stuff is real. It has causal powers. But the explanations that account for it are radically different.

Let's consider again this other sentence from p. 9: "Both accept the assumption that a purely physical description of the world could not mention any mental entities." Would a purely physical description of the world be made of purely physical explanations? One would expect so. So we could call Searle's description of the world, with intentionality, "not purely physical".

Of course, he could claim that he accounts for intentionality in a way which puts it in the causal loop. But the causality, it seems, is different, since it requires a description of a different kind, one which the physical sciences would never accept. Descartes—dualism incarnate, to Searle—also accounted for a form of causality in his res cogitans, although, just like Searle's, it was of a particular kind.

Anyway, I rest my case. From this article, Searle's a closet dualist.