Here’s a short note on Andy Clark’s Supersizing the Mind.
My immediate reaction is that the mechanisms he’s positing are not all that surprising. I would expect any task we perform frequently to be accompanied by various physical affordances that reduce our cognitive load. Sometimes these affordances are inherent to the task, sometimes they are actively introduced to improve performance, but they are almost always there (just try working with a keyboard that has a different key layout!).
Clark’s position is admittedly a bit more subtle and is built around the Parity Principle:
“It also contained the crucial Parity Principle, which held that if a process in the world works in a way that we should count as a cognitive process if it were done in the head, then we should count it as a cognitive process all the same.”
Clark’s concerns are focused on explicitly cognitive functions rather than all affordances that assist in performing a task. A paradigmatic example is the use of a notebook by Otto, a patient with mild Alzheimer’s, who uses the notebook as a component of his extended cognitive system: a trusted and always available supplement to his reduced (neural) memory capacity.
This is controversial to those who are reluctant to attribute “cognition” to anything outside the body, though I’m not sure why this reluctance exists. It seems clear to me that the “within brain”/“outside of brain” trade-off is constantly shifting: my own ability to add has been handed off to calculators and computers over the years, and our ability to read maps and follow directions has declined since the introduction of handheld GPS systems (I don’t have a good reference for this, so a newspaper account will have to do).
Clark fills in some of the details of how this cognitive enhancement operates. One of the more striking examples is the extent of the changes that tool use produces in the brain: when primates become acclimated to using sticks to extend their reach, the receptive fields of the corresponding neurons change:
“The experimenters found that after just five minutes of rake use, the responses of some bimodal neurons whose original vRFs picked out stimuli near the hand had expanded to include the entire length of the tool.”
Although the steady stream of scare stories along the lines of “this is your brain on Google” has made me resistant to such claims, further reflection leads to acceptance: otherwise practice would never make perfect (repeated use makes things almost automatic; reduced use makes them harder).
My take on these examples is that they potentially support a stronger claim: if it affects our brain, it affects our cognition. Alzheimer’s is an extreme, and depressing, example. Whether any particular adaptation is good or bad is a separate question. Certainly “your brain on Google” doesn’t spend any time arguing over who was the 31st president of the United States. Your brain either:
- Prompts the reach for the smartphone to answer the question (if you have a data connection),
- Sets a task to check the answer to your “to do” list,
- Or just punts and realizes that the question just wasn’t that important anyway.
The valuation of these changes is an orthogonal question: if we become accustomed to using the “hive mind” to answer questions, or a stick to extend our reach, are we crippling ourselves for a time when these affordances go away? Feel free to answer after you have learned to start a fire by rubbing dry sticks together.
Significantly, this cognitive adaptation is reflexive and extends to our own cognitive processes.
“One intriguing speculation is that to gain this kind of benefit from the token training requires the presence of neural resources keyed to the processing and evaluation of internally generated information. In particular, there is emerging evidence that the anterior or rostrolateral prefrontal cortex (RLPFC) is centrally involved in a variety of superficially quite different tasks, all of which involve the evaluation of self-generated information (Christoff et al. 2003). Such tasks include the evaluation of possible moves in a Tower of London task (Baker et al. 1996), the processing of self-generated subgoals during working memory tasks (Braver and Bongiolatti 2002), and remembering to carry out an intended action after a delay (Burgess, Quayle, and Frith 2001).3 In general, the RLPFC is known to be recruited in a wide variety of tasks involving reasoning, long-term memory retrieval, and working memory. What unites all the cases, according to Christoff et al. (2003) is the need to explicitly (attentively, consciously) evaluate internally generated information of various kinds. The relational matching-to-sample task, Christoff et al. believe, requires just this kind of processing; that is, it requires the explicit directing of attention to internally generated information concerning, in[…]”
My initial reaction was that even though the accessibility of this information might be unique to humans, it reminds me a bit of the “carrot and bunny” case discussed in Beyond the Brain. However, on further consideration, I realize it is more accurately the dual of that case: rather than packing the information involved in making the decision into a unit and pushing it below the level of awareness, it packs the information into a bright, shiny, labeled ball that is available for further explicit cognitive processing.
The manifold nature of the consciousness described by Clark reminds me of Marvin Minsky’s Society of Mind.
To pick one example: different subsystems of our bodies apparently extract the information needed to perform their tasks at different points. For instance, one would think that gripping something and consciously judging its size would tap into the same data at the same point, but no:
“grip size was determined entirely by the true size of the target disc [and] the very act by means of which subjects indicated their susceptibility to the visual illusion (that is, picking up one of two target circles) was itself uninfluenced by the illusion” (Milner and Goodale 1995, 168; the citation is to The Visual Brain in Action).
We might be able to make an evolutionary argument for this, but no matter: it speaks volumes about how opaque our own “conscious” actions (e.g., reaching) are to our consciousness.
Ultimately I will admit to being strongly biased towards believing this sort of thing, but I do think the evidence is both compelling and in line with my experience:
- The classical AI approach of a totally conscious, totally planned world and set of activities didn’t pan out.
- The neural network paradigms were designed to be less rigidly structured but were not geared towards understanding consciousness.
- The supersized mind/extended, embodied cognition approach represents a compellingly adaptable and extensible framework for understanding our own embodied cognitive processes: non-conscious, subconscious, and conscious (and “super conscious” 🙂).