“Making Things Happen” by James Woodward
This book has been out for over a decade, but only recently hit my radar. It begins with the assumption that we are unlikely to (ever) get a fully detailed causal explanation of human-scale events, e.g., a full-up quantum-level analysis of why bumping the table with your knee spilled the tea on the rug. Despite that, we still have this concept of causality, and we still find it useful, however poorly defined it may be when subjected to detailed examination.
Using this observation as a probe, Woodward develops a manipulationist approach (his term) to thinking about causality: what are the things (causes) that we can manipulate that will have an impact on the outcomes (effects) of interest? This immediate focus on the aspects of a situation that we can modify is appealing. It points us toward considering changes that are ready to hand, rather than those that would only have an impact in unusual/futuristic situations, e.g., “what if we use our spacetime-curvature adjustment tool to stabilize the cup?”
One of the other reasons I find Woodward’s work so appealing is that it is reminiscent of (and influenced by) Judea Pearl’s work on Bayesian networks. The structure is similar, but the goals are a bit different; best to let Woodward describe that himself (note: PI is Pearl’s notion of intervention):
I emphasize again that I do not intend these remarks as a criticism of PI for the purposes to which Pearl puts it. Because PI is characterized by reference to the correct graph for the system under investigation, this will be a graph in which, in the above example, there is an arrow from Z to R but no arrow from X to R, and in which, as Pearl says, the experimental manipulation manipulates both X and Z. Given that this is stipulated to be the graph, the causal effect of X on R, as defined by Pearl, will be null, as it should be. My concern is not that Pearl’s account gives the wrong answer in this case, but that PI is not a notion of intervention that can be used to characterize what it is for X to cause R. Instead, we must presuppose some independent characterization of what it is for X to cause R when we use PI.
Now, there’s nothing within the manipulationist mindset that mandates practicality: gedanken experiments of all sorts are permitted, and “natural experiments,” such as colliding black holes, are readily addressed within the framework. The manipulationist approach also tends to be self-correcting. Since it specifies causes and their effects, observing or inducing a cause without its effect strongly signals that there is something wrong with your understanding of the situation.
Similar to Pearl’s Bayesian networks, a manipulationist model is conceptually a directed graph from C (cause) to E (effect). If you have the model right, initiating C results in E, in a consistent manner. If not, you’re missing something, and experiments/analysis ensue. Now, at the level of Woodward’s discussion, there’s no inherent guidance as to where the problem might be:
- Are there other causes of E,
- Are there intermediate steps between C and E which rely on preconditions which you haven’t captured, or (my fave)
- Are C and E both actually effects of some underlying cause?

Woodward uses the example here of storms and falling indicators of barometric pressure: changing the barometer reading won’t induce a storm.
In the last case, if you were using a causal graph model indicating that B causes S, you’d quickly come to the conclusion that a direct link between B and S didn’t exist, and go back to the drawing board (literally, in this case). In the first two cases, it will likely be more difficult to determine exactly what’s wrong, but with sufficient experimentation you should make progress, assuming some manipulations/natural experiments generate repeatable results. Woodward covers a number of potentially confounding situations and delineates some heuristics for distinguishing them.
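The barometer case is easy to make concrete. Here is a minimal, illustrative Python sketch (the graph, probabilities, and names are my own inventions, not Woodward’s or Pearl’s): pressure P drives both the barometer reading B and the storm S, and an intervention in Pearl’s do-operator sense fixes a variable while cutting the arrows into it.

```python
import random

def storm_rate(do_barometer=None, do_pressure=None, n=10_000, seed=0):
    """Simulate the common-cause graph P -> B, P -> S.

    Passing do_barometer or do_pressure forces that variable to a fixed
    value, mimicking an intervention that cuts its incoming arrows.
    Returns the fraction of simulated days with a storm.
    """
    rng = random.Random(seed)
    storms = 0
    for _ in range(n):
        u_p, u_b, u_s = rng.random(), rng.random(), rng.random()
        # Latent common cause: low pressure on ~30% of days.
        low_pressure = (u_p < 0.3) if do_pressure is None else do_pressure
        # Barometer tracks pressure with 5% noise, unless set by hand.
        barometer_low = ((u_b < 0.95) == low_pressure) if do_barometer is None else do_barometer
        # Storms depend only on pressure; note barometer_low never appears here.
        storms += u_s < (0.8 if low_pressure else 0.05)
    return storms / n

baseline = storm_rate()
forced_reading = storm_rate(do_barometer=True)   # pin the needle to "low"
forced_pressure = storm_rate(do_pressure=True)   # actually lower the pressure
```

Forcing the barometer reading leaves the storm rate at baseline (identical here, since the same random draws are reused), while intervening on the pressure itself raises it sharply. That asymmetry under manipulation, rather than mere correlation, is what marks pressure (and not the barometer) as a cause of storms.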
As a nice bonus, satisficing naturally arises from the framework. Since there is no mandate to find all the interactions that could occur, you’re free to quit when you have an understanding sufficiently detailed to satisfy your needs. For example, you can simply say that you ducked because you saw that a falling rock was about to hit you in the head, without having to discover/investigate/learn about saccades to justify using the phrase “saw the rock.” My guess is that this matches well with human experience, since it seems to me that humans were ducking rocks aimed at their heads long before the discovery of saccades in the 1880s.
This reflects another theme of the book: however we look at cause currently, we know that the nature of a satisfactory explanation has changed over time and will likely change in the future. Although our written record of intuitions of causality dates back to the ancient Greeks, concepts like conservation of momentum were unknown to them. Despite this deficiency, they likely felt that the potential to be hit in the head with a rock was a good cause for them to duck. For that matter, other animals also lack a detailed grasp of conservation of momentum, even though work cited by Woodward indicates that a number of (non-human) animals do have a sense of causality. Said differently: a sense/understanding of causality need not require a detailed understanding of the entire causal chain.
The book is also noteworthy in its relative lack of “notation” — situations are primarily described by words rather than specialized symbols. This has a couple of side effects. The benefit is that, if you’re unfamiliar with the specialized notation (aka me), you don’t get bogged down looking up symbols that are difficult to look up/remember (especially if there isn’t a glossary, which is all too often the case, since seemingly everyone knows the notation) and can just proceed with reading the book. The downside is that as the author exhaustively pushes his model through a large set of potential test cases and counter examples, the wordiness becomes exhausting, as well as exhaustive. I don’t have an opinion as to the best approach, either for the author or the reader. All I can say is that it is long but rewarding work.
In summary, manipulationism provides a nice combination of flexibility and rigor: there’s nothing preventing you from positing a causal structure that would, let’s say, cover the case of two billiard balls hitting each other, but not address the situation where a micro black hole hits a billiard ball. Similarly, there’s nothing precluding one from saying that “a miracle occurs here,” provided your model identifies causes and effects at a sufficient level of detail to satisfy your current needs. Of course, this works both ways: there’s nothing preventing another model from refining this one and reducing the scope of the miracles required. (As one would expect, Woodward generally prefers more refined predictions, provided they don’t introduce an intractable level of complexity.)