CEP As A Model For Knowledge

I internalized CEP (Circular Error Probable, http://en.wikipedia.org/wiki/Circular_error_probable) during the Cold War, as it pertained to ICBM effectiveness (for those unfamiliar, it is in some ways similar to the difference between precision and accuracy).

CEP came to mind during the polling averaging controversy of the recent 2012 presidential election. In my mind, this controversy centered on an ignorance of the value of repeated experiments. The polling averagers agreed that their methods would be unable to detect systematic bias/error in the polls, and that such error would unquestionably impact their predictions. However, they couldn't see any indications of such error. One clear indication of systematic error in the individual polls would have been the centroid of the average falling far outside the confidence intervals (the expected random error) of the polls, which was not the case. This is not to say that systematic error could not have generated the observed results; it is to say that even if it did, it would only be detectable at the time of the election, elections being the empirical tests that give polling methodologies credibility. The results of recent elections have not given us reason to think that such systematic error exists.
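To make the random-versus-systematic distinction concrete, here is a minimal sketch (not any pollster's actual model; every number is invented for illustration) of why averaging polls shrinks random sampling error but leaves a shared systematic bias untouched, and invisible until election day:

```python
# Sketch with invented numbers: simulate many polls of a race whose true
# margin is +2 points. Random sampling error averages out across polls;
# a shared systematic bias does not, and nothing in the polls reveals it.
import random

TRUE_MARGIN = 2.0       # hypothetical true margin, in points
SAMPLING_SD = 3.0       # per-poll random sampling error (standard deviation)
SYSTEMATIC_BIAS = 0.0   # set to e.g. -3.0 to model an industry-wide house effect
NUM_POLLS = 20

random.seed(1)
polls = [TRUE_MARGIN + SYSTEMATIC_BIAS + random.gauss(0, SAMPLING_SD)
         for _ in range(NUM_POLLS)]
average = sum(polls) / len(polls)

# The average lands well inside the individual polls' confidence intervals,
# so the polls themselves never flag the bias; only the election can.
print(f"poll average: {average:+.2f}, true margin: {TRUE_MARGIN:+.2f}")
```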

This split between systematic and random error, and its grounding in repeated (almost ritualized) experiment, is what I like about a CEP view of the world. Ballistic missiles became very precise during the Cold War. This precision meant that when multiple warheads were aimed at a target they would come down in a tight circle, a.k.a. a small CEP. However, the centroid of that circle might or might not be close to the actual desired target. In the ICBM case, this error would be primarily due to variations in the gravity field along the flight path (I'm ignoring here some of the improvements achieved through the use of post-boost vehicles).
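A toy illustration of that picture, with made-up numbers rather than anything from real weapons data: the shots cluster tightly around a centroid (small CEP, high precision), but that centroid sits some distance from the aim point (systematic error, limited accuracy).

```python
# Sketch: precision is the spread about the centroid (CEP); accuracy is how
# far that centroid is from the intended target. All values are illustrative.
import math
import random
import statistics

random.seed(2)
AIM_POINT = (0.0, 0.0)                # the desired target
SYSTEMATIC_OFFSET = (400.0, -250.0)   # stand-in for unmodeled gravity along the path, meters
RANDOM_SD = 90.0                      # per-shot scatter, meters

impacts = [(AIM_POINT[0] + SYSTEMATIC_OFFSET[0] + random.gauss(0, RANDOM_SD),
            AIM_POINT[1] + SYSTEMATIC_OFFSET[1] + random.gauss(0, RANDOM_SD))
           for _ in range(50)]

# Centroid of the impact pattern: where the shots actually cluster.
cx = statistics.mean(x for x, _ in impacts)
cy = statistics.mean(y for _, y in impacts)

# Precision: CEP, taken here as the median radial miss measured from the centroid.
cep = statistics.median(math.hypot(x - cx, y - cy) for x, y in impacts)

# Accuracy: how far the centroid sits from the intended target.
offset = math.hypot(cx - AIM_POINT[0], cy - AIM_POINT[1])
print(f"CEP (precision): {cep:.0f} m; centroid offset from target (accuracy): {offset:.0f} m")
```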

From a practical standpoint this meant that you could "tune out" the systematic errors by repeatedly traversing the same trajectory and eventually physically hitting the predetermined target. As long as no new systematic error was introduced into the system, repeatability was expected. Of course, on a new trajectory the actual accuracy to expect would be unknown. The CEP was also well behaved in this case, since the major source of systematic error for an inertial system, gravity, is well acknowledged. Under normal circumstances, a refinement of the gravity model along the new path is all that is required to "true up" the system. "Sensing" the appropriate compensation was not possible, since gravity cannot be sensed by the "objects" that constitute an inertial guidance system, which is reminiscent of the OOO perspective of the previous few posts.
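A rough sketch of that "truing up" step, again with invented numbers: average the misses from repeated shots on the same trajectory, treat that average as the systematic error, and fold it back in as an aiming correction (a simple stand-in here for refining the gravity model).

```python
# Sketch: calibration shots reveal the mean miss; subtracting it leaves only
# the random scatter. Numbers are illustrative, not from any real system.
import random
import statistics

random.seed(3)
SYSTEMATIC_ERROR = (400.0, -250.0)   # unknown to the guidance system, meters
RANDOM_SD = 90.0                     # shot-to-shot scatter, meters

def fly(correction=(0.0, 0.0)):
    """One shot at a target at the origin, after applying an aiming correction."""
    return (SYSTEMATIC_ERROR[0] - correction[0] + random.gauss(0, RANDOM_SD),
            SYSTEMATIC_ERROR[1] - correction[1] + random.gauss(0, RANDOM_SD))

# Calibration shots down the well-trodden trajectory; their mean miss is
# attributed to systematic error and used as the correction.
shots = [fly() for _ in range(25)]
correction = (statistics.mean(x for x, _ in shots),
              statistics.mean(y for _, y in shots))

# With the estimated bias removed, later shots cluster around the target itself.
trued = [fly(correction) for _ in range(25)]
print("residual mean miss (m):",
      (round(statistics.mean(x for x, _ in trued), 1),
       round(statistics.mean(y for _, y in trued), 1)))
```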

I find CEP a useful framework for understanding our "knowledge in the world," that is, the way of knowing that is relevant to our embedded, embodied existence. In keeping with previous epistemological thought, there is nothing that guarantees that the object we "see" in front of us is really a "tree," but if our previous experiences with objects of a similar appearance have met our expectations, it is prudent to act as if it were real.

This is the takeaway: there is no guarantee that our sense impressions are accurate, and no guarantee that decisions (in the broadest sense) based on them will be prudent. However, the results of repeated experiences in similar situations rightly inform our expectations. The more uncertainty in our sensory data (the greater the distance of the object, the more intense the heat shimmer, the less our ability to pay attention), the greater our chance for error.

Put more succinctly, it is just one giant Bayesian updating rule: prior results adjust our future expectations.
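As a minimal illustration of that rule (a Beta-Binomial model of my own choosing, not anything specified above), each past encounter in which a "tree-looking" object met expectations nudges up the expected probability that the next one will:

```python
# Sketch: Beta-Binomial updating. Each observation shifts the belief that a
# tree-looking object will behave like a tree. The history below is made up.
alpha, beta = 1.0, 1.0          # uninformative prior

def update(alpha, beta, confirmed):
    """Fold one observation into the belief; confirmed=True means expectations were met."""
    return (alpha + 1, beta) if confirmed else (alpha, beta + 1)

history = [True, True, True, False, True, True]   # hypothetical past encounters
for outcome in history:
    alpha, beta = update(alpha, beta, outcome)

expected = alpha / (alpha + beta)   # posterior mean: expectation for the next encounter
print(f"P(next tree-looking object meets expectations) ~ {expected:.2f}")
```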
