Not “On What Matters”

This very short post isn’t about Derek Parfit’s book On What Matters, per se, but about the thoughts that came up while reading it.

His phrasing of the arguments reminded me a lot of the way we phrased rules “back in the day,” when expert systems were at the cutting edge of artificial intelligence: you’d initially phrase a rule in a way that approximated an (expert) common-sense formulation of the desired action; you’d then find that there were edge cases requiring ever greater subtlety of phrasing to get reasonable results, often rendering the so-called understandability of expert systems moot.
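To make the pattern concrete, here’s a minimal, entirely hypothetical sketch of how such a rule decays. The Promise type and every clause below are my own inventions for illustration, not anything from Parfit or from a real expert system; the point is only that version 1 is a single common-sense line, and each patched edge case adds a clause until the rule’s vaunted readability is gone:

```python
from dataclasses import dataclass

@dataclass
class Promise:
    made_under_duress: bool = False
    keeping_causes_grave_harm: bool = False
    promisee_released_you: bool = False
    circumstances_changed: bool = False
    reaffirmed: bool = False

def should_keep(p: Promise) -> bool:
    # Version 1 read, in effect, "return True": keep your promises.
    # Each later clause patches a real edge case; the conjunction of
    # individually defensible patches is what kills the rule's readability.
    if p.made_under_duress:
        return False
    if p.keeping_causes_grave_harm:
        return False
    if p.promisee_released_you:
        return False
    if p.circumstances_changed and not p.reaffirmed:
        return False
    return True

print(should_keep(Promise()))                        # True: the easy case
print(should_keep(Promise(made_under_duress=True)))  # False: an edge case
```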

After wrestling with this for a while, I eventually developed the practice of reverting to an apparently simpler approach: pack the necessary complexity into a function whose communicative name elided the true complexity. The internal complexity of the software implementing that function could range from the trivial to the intractable. For me, much of the excitement around second-generation neural nets (positing three generations as seen from 2019: Hebbian; early Hinton backpropagation; CNN/deep learning) was that, by fitting a smooth, complex function to the problem space, the system’s edge-case behavior became less brittle. Machine learning addressed the fact that the requisite functions were (and still are) simply beyond what we can explicitly specify a priori.
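A toy illustration of the brittleness point (all names and numbers here are invented for this post): a crisp hand-written threshold rule flips abruptly at its boundary, while a smooth function, sketched below as a sigmoid standing in for a fitted or learned model, degrades gracefully near the same edge.

```python
import math

def hand_written_rule(x: float) -> float:
    # Crisp expert rule: everything hinges on a single threshold.
    return 1.0 if x > 0.5 else 0.0

def smooth_rule(x: float, k: float = 10.0, x0: float = 0.5) -> float:
    # A sigmoid standing in for a fitted smooth function of the inputs.
    return 1.0 / (1.0 + math.exp(-k * (x - x0)))

for x in (0.49, 0.50, 0.51):
    print(x, hand_written_rule(x), round(smooth_rule(x), 3))
# Near the boundary the crisp rule jumps from 0 to 1;
# the smooth one merely moves from ~0.475 to ~0.525.
```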

“You need to think in terms of higher-order polynomials over a much broader set of inputs” was a thought that occurred to me again and again while reading Parfit. I wouldn’t by any means claim that “neural nets solve philosophy,” but I would claim that the complexity of our thought process exceeds our ability to state its operation explicitly in linear verbal form.
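For what the polynomial image amounts to in practice, here’s a small sketch (my own toy data, nothing from the book): a straight line badly misfits a curved relationship that a higher-degree polynomial over the same inputs captures easily.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 50)
y = np.sin(3 * x) + 0.1 * rng.standard_normal(50)  # curved "truth" plus noise

linear = np.polynomial.Polynomial.fit(x, y, deg=1)
higher = np.polynomial.Polynomial.fit(x, y, deg=7)

def rms_error(p) -> float:
    return float(np.sqrt(np.mean((p(x) - y) ** 2)))

print("linear fit error:  ", round(rms_error(linear), 3))
print("degree-7 fit error:", round(rms_error(higher), 3))
# The degree-7 error sits near the noise floor; the line's is far worse.
```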
