The central idea in Edelman’s work is neuronal group selection, shown below (figure 7.2 from the book — I’m reproducing it here, since Wikipedia doesn’t adequately capture the idea).
“FIGURE 7.2 DIAGRAM OF THE THREE MAIN TENETS OF THE THEORY OF NEURONAL GROUP SELECTION. (1) Developmental selection leads to a highly diverse set of circuits, one of which is shown. (2) Experiential selection leads to changes in the connection strengths of synapses favoring some pathways over others (see the black lines). (3) Reentrant mapping. Brain maps are coordinated in space and time through ongoing signaling across reciprocal connections. The black dots in the maps indicate strengthened synapses.”
The book posits that neuronal group selection forms the fundamental units that are the basis of consciousness.
In their own words:
“Before addressing this task, we adopt three related working assumptions as a methodological platform for the rest of this book: the physics assumption, the evolutionary assumption, and the qualia assumption.”
- That the mechanism has to be strictly realizable with the neuronal machinery available to the brain (they’re dismissive of quantum effects as being the substrate of consciousness and I share their skepticism. Plus, if your model is adequately explanatory without quantum effects, why bother?).
- It must be on a plausible evolutionary path — it should consist of extensions of machinery we see in our neighbors on the evolutionary tree.
- There is no attempt to describe or explain the experience of consciousness, only the structures that give rise to and support it.
One of their most useful design constraints is that although a number of “conscious states” might be possible at a given time, only one drives our experience at any moment.
The number of subconscious/unconscious states is effectively unlimited and consists of multiple parallel paths that might be activated by a given set of stimuli. This architecture sharpens selection, and provides a rich substrate of options to choose from.
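The “many parallel candidates, one winner” architecture can be sketched as a toy winner-take-all selection. This is my own illustration, not the book’s; the state names and response profiles are made up:

```python
# Toy winner-take-all sketch of "many parallel candidates, one winner"
# (illustrative only): several candidate states are scored against a
# stimulus in parallel, and only the strongest becomes the driving state.
candidate_states = {
    "approach": [0.9, 0.1, 0.3],   # made-up response profiles
    "freeze":   [0.2, 0.8, 0.1],
    "flee":     [0.1, 0.7, 0.9],
}

def select(stimulus):
    """Score every candidate in parallel; only the argmax 'drives'."""
    scores = {
        name: sum(w * s for w, s in zip(profile, stimulus))
        for name, profile in candidate_states.items()
    }
    return max(scores, key=scores.get)

print(select([1.0, 0.0, 0.0]))  # -> approach
```

The unselected candidates aren’t discarded: they remain as the “rich substrate of options” that a shift in stimulus can promote to the winning state.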
At first glance, this is reminiscent of spreading activation, though quite different in detail. However, neuronal group selection is “preconceptual” and designed to explain the subsecond development of active neural structures. In contrast, spreading activation is a model of how conscious thought proceeds, not a model of the brain processes enabling conscious awareness. The timescale of neuronal group selection is cellular (subsecond) rather than conscious (multi-second to multi-year).
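For readers who haven’t run into spreading activation, a minimal sketch of the contrast case (the toy semantic network, decay constant, and threshold are all my own illustrative choices):

```python
# Minimal sketch of spreading activation over a toy semantic network.
# Activation starts at a source concept and propagates outward,
# attenuating by `decay` per link until it falls below `threshold`.
network = {
    "dog": ["cat", "bone", "bark"],
    "cat": ["dog", "milk"],
    "bone": ["dog"],
    "bark": ["dog", "tree"],
    "milk": ["cat"],
    "tree": ["bark"],
}

def spread(source, decay=0.5, threshold=0.1):
    activation = {source: 1.0}
    frontier = [source]
    while frontier:
        next_frontier = []
        for node in frontier:
            out = activation[node] * decay
            if out < threshold:
                continue
            for neighbor in network[node]:
                if neighbor not in activation:
                    activation[neighbor] = out
                    next_frontier.append(neighbor)
        frontier = next_frontier
    return activation

print(spread("dog"))
```

Note that this operates over already-formed concepts, which is exactly why it sits at a different level than the “preconceptual” neuronal group selection.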
Although the figure above looks a bit like a feedback loop, neuronal group selection operates more like correlation than feedback — think of it as a “big data” approach: not built upon any a priori sense of causal relationships. Edelman’s Darwin IV system demonstrated how cycles in the reentrant connections allow for selection and persistence of a conscious state even with transient stimuli, enabling temporal sequencing of conscious states.
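The “persistence despite transient stimuli” point can be illustrated with a toy reentrant loop. This is my own sketch, not Darwin IV’s actual model; the gain, durations, and threshold are arbitrary. Two mutually connected groups keep each other active after a brief stimulus ends:

```python
# Toy reentrant loop (illustrative, not Darwin IV): two reciprocally
# connected groups with recurrent gain g. The stimulus lasts only
# 3 steps, yet the loop keeps activity well above threshold afterward.
def run(steps=10, stimulus_steps=3, g=0.95, threshold=0.2):
    a, b = 0.0, 0.0
    trace = []
    for t in range(steps):
        s = 1.0 if t < stimulus_steps else 0.0
        # simultaneous update: each group is driven by the stimulus
        # plus the other group's previous output
        a, b = min(1.0, s + g * b), min(1.0, g * a)
        trace.append(a)
    return trace

trace = run()
print(trace)
```

With the gain below 1 the activity decays slowly rather than persisting forever, leaving room for the next state to be selected — a crude stand-in for the temporal sequencing the text describes.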
Now, Darwin IV was published in 1992 — the modeling is over 20 years old, so the simulation doesn’t have the level of detail we would expect from a current effort, but, of course, that doesn’t render it invalid.
If there’s a problem that I have with the book, it is that there hasn’t been much movement in the description of neuronal group selection over the last 20 years. I also consulted a newer reference: the Nengo/Spaun work, which appeared in Science in 2012.
The Spaun authors consider neuronal group selection complementary to Spaun, rather than the two being precursor/successor efforts, e.g., from the Spaun paper:
However, Spaun has little to say about how that complex, dynamical system develops from birth. (Spaun references some new work published by Edelman and his colleagues using more current simulation techniques.)
Edelman is definitely more concerned with the development (both evolutionary and within a particular organism) of the mechanistic substrates that support conscious activity:
“Primary consciousness — the ability to generate a mental scene in which a large amount of diverse information is integrated for the purpose of directing present or immediate behavior — occurs in animals with brain structures similar to ours”
“Higher-order consciousness is built on the foundations provided by primary consciousness and is accompanied by a sense of self and the ability in the waking state explicitly to construct and connect past and future scenes”
Which leads to concepts:
“By concept, we do not mean a sentence or proposition that is subject to the tests of the philosopher’s or logician’s truth table. Instead, we mean the ability to combine different perceptual categorizations related to a scene or an object and to construct a “universal” reflecting the abstraction of some common feature across a variety of such percepts. For example, different faces have many different details, but the brain somehow manages to recognize that they all have similar general features.”
Again, we return to the correlation/“big data” approach: coalescing features from a set of self-selected exemplars into a more abstract group of criteria for set membership.
I’m being unfair calling it a correlation model. Correlational importance is assigned using an information-flavored approach, although they hide it a bit (no ln anywhere). They do seem to use an entropy-based measure, occupying the same conceptual space as current “big data” approaches to unsupervised data clustering.
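By “entropy-based” I mean something in the family of Shannon entropy over discretized feature distributions. This is my reading of the conceptual space, not a formula from the book:

```python
# Shannon entropy over an empirical distribution: the basic building
# block of information-flavored clustering criteria. Example data is
# illustrative only.
from collections import Counter
from math import log2

def entropy(samples):
    """H(X) = -sum p(x) * log2 p(x) over the empirical distribution."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * log2(c / n) for c in counts.values())

print(entropy(["a", "a", "b", "b"]))  # uniform over 2 symbols -> 1.0
print(entropy(["a", "a", "a", "a"]))  # no uncertainty -> 0.0
```

A clustering criterion built on this would, roughly, prefer groupings whose within-cluster feature distributions have low entropy — which is the same spirit as coalescing common features across exemplars.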
Although an easy read, the book could use more technical depth. Of the three Edelman books mentioned, I’d recommend Neural Darwinism for the first introduction. This new book fills in the outline of that work, but elides some of the details.