The Missing Piece(s)
In my last post, I was trying to unpack a bit of what we mean when we use the term conscious. My expectation was that this follow-up post would be about an active, purposeful process, focused and driven by emotion. Upon further consideration, I've come to the position that it must also consist of anticipatory, reactive processes driven by awareness.
Emotions: processes for purposes. If you’re familiar with Antonio Damasio’s work, you’ve probably become sympathetic to his position (backed by his research) that emotions are what provide us with our goals, buttressed by our likes and dislikes, which evolve over time as we have new experiences. This is nicely captured in this short YouTube video of Damasio describing some of his patient interactions.
Without this emotional drive, it just looks like machines all the way down (kind of like turtles, only less interesting): possessing neither goals nor drives, just "mindlessly" pursuing whatever task is set by some "master controller". It's symptomatic of the situation that all of the modifiers that come to mind when describing this are mindless ones: drone, robotic, etc.
The claim, then, would be that for us to accurately assign something to a conscious state, we'd have to consider the attribution of such traits to be accurate, or at least defensible (as noted below, accuracy probably implies that the process possesses sufficient language abilities to describe its intentions).
Environmental Sensing and Reaction:
The impressive performance of Google's Go-playing program AlphaGo, along with some of the attendant slams (notably from Facebook), was much in the news while I was writing this. I'm not particularly concerned with the whole supervised/unsupervised learning issue, since I expect that there has to be "some" supervision, even if it is only what I consider the absolute minimum: specifying which features are important (a toy illustration of what I mean by that is sketched just below).
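To make that "bare minimum of supervision" concrete, here's a toy sketch of my own (it has nothing to do with AlphaGo's actual pipeline, and all the names and data are made up): the learning step itself is unsupervised, but a human still points the learner at the features that matter.

```python
# A toy sketch of "the absolute minimum of supervision": the learning itself
# is unsupervised, but a human still decides which features of the raw data
# matter. Everything here is illustrative, not drawn from any real system.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Fake "raw" observations: 6 measurements per example, but only columns 0
# and 3 actually carry the structure we care about.
raw = rng.normal(size=(200, 6))
raw[:100, 0] += 3.0   # group A is shifted on feature 0
raw[100:, 3] += 3.0   # group B is shifted on feature 3

# The "supervision" is just this one line: a human names the features of importance.
features_of_importance = [0, 3]
selected = raw[:, features_of_importance]

# Everything past this point is unsupervised.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(selected)
print(labels[:5], labels[-5:])   # the two groups separate cleanly
```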
However, it did bring to mind a broader point which I first encountered in Agre & Chapman's Pengo-playing program Pengi (pdf), which posited a key role for dynamic, reactive systems: we wouldn't consider anything conscious which didn't react to, and act upon, its environment. Upon further consideration, this situated/dynamicist view is a core driver motivating consciousness as being a process, rather than a set of states.
The net result is that we are left with a loosely purposeful process: one that evaluates what it desires and sets out to get it, responding in "real time" to changes in its environment (something like the toy loop sketched below).
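For concreteness, here's a small toy loop of my own (every name and number in it is hypothetical, and the "emotions" are nothing more than weights on goals) gesturing at what such a loosely purposeful, reactive process might look like:

```python
# A toy sketch of the "loosely purposeful process" above: the agent carries
# emotion-like weights over goals, senses its environment every step, and
# re-plans when the world changes under it. All names are hypothetical.

import random

class Agent:
    def __init__(self):
        # "Emotions" as evolving likes/dislikes that weight the goals.
        self.desires = {"food": 0.6, "safety": 0.4}
        self.current_goal = None

    def sense(self, world):
        return {"food_visible": world["food"], "threat_nearby": world["threat"]}

    def evaluate(self, percept):
        # Pick the most strongly desired goal, modulated by what is sensed.
        scores = {
            "food": self.desires["food"] * (1.0 if percept["food_visible"] else 0.2),
            "safety": self.desires["safety"] * (1.0 if percept["threat_nearby"] else 0.1),
        }
        return max(scores, key=scores.get)

    def act(self, goal):
        return {"food": "move_toward_food", "safety": "retreat"}[goal]

    def explain(self):
        # The "why" the process could offer if asked (see the language
        # discussion further down).
        return f"I chose {self.current_goal} because I currently want it most."

    def step(self, world):
        percept = self.sense(world)
        self.current_goal = self.evaluate(percept)   # re-evaluated every step
        return self.act(self.current_goal)

world = {"food": True, "threat": False}
agent = Agent()
for t in range(3):
    world["threat"] = random.random() < 0.5   # the environment changes on its own
    print(t, agent.step(world), "|", agent.explain())
```

The point isn't the (trivial) behaviour, it's the shape of the loop: desire-weighted evaluation happens on every step, so the goal can change the moment the environment does. The explain() method is a nod to the language question taken up below.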
As a counterpoint, I'll mention the criteria referenced in the book How to Build a Brain (http://nengo.ca/build-a-brain):
• Dynamicism (van Gelder, 1995, pp. 375-376): "[C]ognition is distinguished from other kinds of complex natural processes… by at least two deep features: … a dependence on knowledge; and distinctive kinds of complexity as manifested most clearly in the structural complexity of natural languages."
• Connectionism (Rumelhart & McClelland, 1986b, p. 13): Rumelhart and McClelland explicitly identify the target of their well-known PDP research as "cognition"; to address it, they feel that they must explain "motor control, perception, memory, and language".
• The symbolic approach (Newell, 1990, p. 15): Newell presents the following list of behaviors in order of their centrality to cognition: (1) problem solving, decision making, routine action; (2) memory, learning, skill; (3) perception, motor behavior; (4) language; (5) motivation, emotion; (6) imagining, dreaming, daydreaming.
What's interesting here is how often language appears in these lists. I keep looking at that, wondering why. I'd buy it if "language" didn't mean only our conventional verbal or written languages, and was broad enough to include anything with detachable, moveable parts whose patterns could be used as signifiers, e.g., placing an obstacle in a doorway, or a fence at a cliff: something that demonstrates that you're conscious, and that others are as well.
However, even then, it seems to be conflating self-consciousness with consciousness. On the other hand, perhaps the ability of the process to explain why it is doing something is the definitive counter to a charge of anthropomorphization?
Update 18 Mar 2016: IEEE Spectrum just published an article raising some similar points.