Extended Cognition and Empirical Evidence

Extending Cognition to Include Other Minds

I’ve discussed extended cognition here previously and recently read a paper by Slaby & Gallagher 1 that “extends the extension” to include other minds, specifically those involved in a community of scientific practice. This is reminiscent of Hardwig’s Epistemic Dependence paper, which discussed the implications of a cutting-edge physics publication with more than 100 authors 2 and which also served as a meditation on the inability of any individual to know everything required to certify Truth, or even what is as true as we can tell given the current state of our knowledge, even within a relatively narrow field.

If we grant the existence of such an externalized truth-reference peer group, it raises the issue of how we select a peer group and ensure its continued reasonableness as the world changes. A striking example of this is the old Gingrich/Pelosi global warming video, in which they say that despite their differences they both agree on the need to address climate change. It’s not controversial to say that, as of this moment (2017), they certainly don’t agree on global warming anymore and that they are both fully embedded in peer groups that support each of them in their current views. Obviously, change is possible over a relatively short period of time. 3

So how does change happen?

If the peer group changes its assessment, it’s very natural for individuals to go along with the change; the social scaffolding will be in place to facilitate it. (The exception is when the change involves a reassessment of people like you, in which case you’re likely to become an outcast.)

If the individual’s assessment changes, it’s generally problematic for the individual, since this change of assessment often entails very costly actions, e.g., abandoning a group of close friends.

For this discussion, I’ll assume that you can have more than one peer group, and I will not require that the domains of their evaluations be completely disjoint. In addition, there’s no reason to assume that the evaluative axes of each peer group are similar: in the above example it’s useful to contrast the empirical with the political; in others it might be the economic vs. the cultural. This is especially important when groups diverge in their evaluation of a particular issue, e.g., climate change, since the easiest way for that to happen is for them to disagree on the fundamental evaluation criteria. 4

Given this framing, how does one evaluate the quality of one’s peer group relative to an empirical question? Again, think in terms of a situation in which two peer groups have very different assessments of a particular issue: a superset of topics such as climate change, where at one point in time the scientific consensus and the non-partisan political consensus overlapped to a large degree but diverged over time.

There seem to be three necessary preconditions (plus an optional fourth) for errors/feedback from mistakes to overcome the influence of your socially extended mind:

    • Successful prediction must be important
    • “Clear” causes, or at least clear failures, must be identifiable
    • Success/failure must be determinable in reasonable timeframes
    • (Optionally) An alternative mind extension that offers a better framework

I’m assuming that importance is an obvious prerequisite: you’re not going to engage in much effort, let alone abandon your peers, if the issue isn’t important. What counts as important and how that determination is reached are individual, idiosyncratic issues, but the error must be important relative to the importance of the peer group to your emotional identity/well-being.

The need for clear causes/reasonable timeframes addresses the unavoidable ambiguity of events: is the failure unambiguous (noise-free), or do the predictions mostly fail but occasionally succeed? Similarly, is the failure immediate, or does the outcome require time to manifest itself? In an ideal situation, we would be able to detect the cause of failure and tie it to the portion of the belief/prediction that was in error.

This is the process goal of a Six Sigma analysis: tighten your feedback loop so that the measurement becomes unequivocally tied to the process that caused it.

Note that this is difficult even for “simple” manufacturing processes. It is very, very hard for things like clinical trials in drug development, which require double-blind studies, with lots of analysis, to determine the better of two alternatives. It becomes nearly impossible for social systems with thousands of variables, which may require decades to make a call on the outcomes, let alone the causes. Just to pick one example, we’ve gone back and forth multiple times on the causes of the Great Depression, something with clear impact that has been the subject of significant study. Given this, it’s understandable why people have a hard time moving out of their peer group.
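To make the noise/timeframe point concrete, here is a toy sketch of my own (an illustration I’m adding, not anything from Hardwig or Slaby & Gallagher): count how many noisy right/wrong observations it takes before one predictive framework clearly pulls ahead of another.

    import random

    def trials_to_distinguish(p_good, p_bad, confidence_gap=10, seed=0):
        """Count noisy outcomes until the better framework leads the worse
        one by confidence_gap correct predictions -- a crude stand-in for
        'unequivocal' evidence."""
        rng = random.Random(seed)
        good = bad = trials = 0
        while good - bad < confidence_gap:
            trials += 1
            good += rng.random() < p_good  # better framework predicts correctly
            bad += rng.random() < p_bad    # worse framework predicts correctly
        return trials

    # Clean, manufacturing-style signal: correct 95% vs. 50% of the time.
    print(trials_to_distinguish(0.95, 0.50))  # resolves within a few dozen trials

    # Messy, social-systems-style signal: correct 55% vs. 50% of the time.
    print(trials_to_distinguish(0.55, 0.50))  # can take hundreds of trials

If each outcome also takes years to manifest, those hundreds of trials become decades of calendar time before the evidence is anywhere near unequivocal, which is roughly the position most contested social questions are in.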

These preconditions highlight the reasons why we often require the existence of an alternative group that makes better predictions before we are able to conclude that our extended cognitive peer group is in error.

If presented with evidence that clearly indicates the superiority of an alternative peer group, we could well choose to switch. However, even in the case of such unequivocal clarity we’re presented with a bleak picture: it’s very difficult to move out of one’s comfort group even when its errors are both “important” and salient.

Caveat

As you might have noticed, there’s an assumption that’s crept into the discussion: that your extended community cannot or will not change.

There are two cases in which that may not be true. The first is if you’re influential enough within the community to pull it to your new position.

The second would be that your cognitive peer group is empirically based. If it is, it seems potentially feasible to change the group’s predictions/estimations, i.e., to convince it to change in the same manner in which you convinced yourself. Although this is possible, the difficulty of doing it shouldn’t be underestimated. After all, the quote

a new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.

is from Max Planck. Planck was a physicist, and physicists don’t constitute the least empirical community on the planet. In any case, since the community itself has a culture of vetting and embracing these types of changes, the chances of ostracization are reduced.
