Aka How important is it that our thought experiments plausibly reflect the developmental paths implicitly necessary for their postulates to hold?
I was kicked into this train of thought after reading Berit Brogaard’s The Status of Consciousness in Nature, which discusses the Mary’s room thought experiment and contains the line:
“While it is somewhat plausible that pre-release Mary already knows what it’s like to see blue, it is highly implausible that pre-release Mary already knows how it feels to see blue.”
Which made me think that, given what we know of infant development (neural connections being made and broken continually, neurons dying off for lack of feedback), it is distinctly possible that Mary, having had no exposure to color during her development, would be unable to see color when she left her room.
Fortunately, a paper backing up this concern was easy to find: Experience in Early Infancy Is Indispensable for Color Perception, by Yoichi Sugita.
The title says it all. Of course this doesn’t dismiss the notion of qualia; rather, it highlights how alien Mary’s perceptions and understanding of visual perception would be, much more so than the initial framing leads us to expect. Depending upon Mary’s developmental path, perhaps her retinal cones would atrophy and she’d be limited to rod-based vision: low resolution, high light sensitivity, and the inability to see red (which is part of why nighttime displays are red — since rods are insensitive to red, red lights don’t change the rods’ light adaptation).
I don’t want to comment on Brogaard’s paper in its specifics; I’m more concerned with the types of things it discusses and the manner in which it discusses them. The provocation is that the paper talks of consciousness primarily in terms of states, while lately I’ve been moving towards thinking of consciousness as being characterized by processes. This has a lot to do with my reading of Dehaene. As with most things, once you start looking for consciousness as a process, it pops up everywhere, e.g., Thomas Metzinger’s interview in 3AM magazine.
On page 1, the fourth sentence of Being No One already says, “The phenomenal self is not a thing, but a process…”
If being conscious of something is roughly equivalent to making dynamic global connections about that thing, then positing a static state is starting off in the wrong direction. Although it might in theory be possible to build up a dynamic model using a succession of static states and work with that as a base, the simpler (fewer bits) description would involve dynamics: attentiveness to this, coupled with a slow changing of the identity of the this; frequently there is no this, just background processing that eventually foregrounds a this. The deictic this is the grounded focus of the dynamic activity.
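The “fewer bits” point can be made concrete with a toy sketch. Everything here is hypothetical (the drift rule and trajectory length are invented for illustration): describing the same attention trajectory as an explicit succession of static states grows with its length, while describing it as a rule plus initial conditions does not.

```python
import json

# Toy "attention" dynamics: the focus drifts to a neighboring item
# every other step. The rule itself is tiny.
def dynamic_rule(focus, step):
    return focus + 1 if step % 2 == 0 else focus

# Description (a): a succession of static states, spelled out in full.
static_states = []
focus = 0
for step in range(200):
    focus = dynamic_rule(focus, step)
    static_states.append(focus)
static_description = json.dumps(static_states)

# Description (b): the dynamic rule plus initial conditions, as a string.
dynamic_description = "focus+1 if step%2==0 else focus; 200 steps from 0"

# The state list grows with the trajectory; the rule's length doesn't.
print(len(static_description), ">", len(dynamic_description))
```

Run the trajectory ten times longer and description (a) gets ten times bigger while (b) barely changes, which is the sense in which the dynamic description is simpler.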
I know this sounds a lot like a certain kind of mysticism: a “Ram Dass-ian” Be Here Now. I wouldn’t deny a certain resemblance, but it’s really about the deictic reference — a hammer and a coffee cup kind of thing. The amount of processing that goes on “below” the conscious level is a continual surprise; a dynamic analysis would necessarily account for all those things trying to bubble up from the pre-conscious to command conscious attention — remember Pylyshyn’s work on visual tracking.
But what are we actually talking about?
As I wrote that, it occurred to me that it might be interesting to simulate that kind of activity. However, it’s already been done, repeatedly, in countless contexts. It’s the top-level spec for any type of notification/“important event” detection system: monitor in the background; alert the user and move the anomaly, along with the context necessary for its evaluation, into the foreground. As a species our efforts along these lines have extended from cowbells to firewall/intrusion detection systems. Multiple techniques, varying levels of sophistication, but the top-level charter remains the same.
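That top-level charter (monitor in the background, then foreground the anomaly together with the context needed to evaluate it) fits in a few lines. This is a hypothetical toy, with the window size, threshold, and deviation-from-mean test chosen arbitrarily:

```python
from collections import deque

class BackgroundMonitor:
    """Toy notification system: watch a stream in the background and,
    when a reading is anomalous, surface it together with the recent
    context needed to evaluate it."""

    def __init__(self, window=5, threshold=3.0):
        self.context = deque(maxlen=window)  # recent "pre-conscious" history
        self.threshold = threshold
        self.alerts = []  # the "foreground"

    def observe(self, value):
        # Anomaly test: deviation from the running mean of the window.
        if self.context:
            mean = sum(self.context) / len(self.context)
            if abs(value - mean) > self.threshold:
                # Foreground the anomaly plus its evaluation context.
                self.alerts.append((value, list(self.context)))
        self.context.append(value)

monitor = BackgroundMonitor(window=4, threshold=2.0)
for reading in [1.0, 1.1, 0.9, 1.0, 9.0, 1.0]:
    monitor.observe(reading)

# The 9.0 reading is foregrounded along with the history needed to judge it.
print(monitor.alerts)
```

The same skeleton underlies everything from a cowbell (window of one, threshold of “moved at all”) to an intrusion detection system with learned baselines; only the anomaly test and the notion of context grow in sophistication.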
This makes me wonder: what are we talking about when we discuss consciousness? Do we mean conscious awareness, conscious activity, both, or something else entirely? When discussing Dehaene, we’re really just talking about conscious awareness. When discussing machine consciousness, we might be thinking of conscious activity. When discussing Mary, we’re probably starting to include something else entirely (qualia).
This brings me back to a lecture Marvin Minsky once gave at Massachusetts College of Art, in which he said something along the lines of “the things that we think of as being difficult to do are relatively tractable for computers, while the ‘simple things’ that children can do are pretty hard.” The examples at the time (~1985) were chess vs. facial recognition. The net being that we’re underestimating the preconditions that must be satisfied to achieve the end results we (unconsciously) have in mind, because they’re embedded so deeply in our embodied existence that we can’t imagine ourselves without them, partly because we don’t even know they’re there, as in the Pylyshyn example above.
Arguably the first two “meanings” (I’m not at all sure we know what these meanings mean, so I’ll scare-quote them) have been simulated at some level: by learning systems and alerting systems. Heck, we even have simulated (and successful) art collaboration.
Still, something fundamental is missing. I’d argue that the missing piece is that these systems/imagined actors are not operating in an emotional context (more next post).