Check, One, Two

Myself and (then) SSgt Clark doing some math in Fallujah, Iraq

I used to be an artillery officer in the Marine Corps, and it's sometimes fun to bring mathematical details of my former life into the classroom.  Not only is there some useful and interesting math to be found there, but it also buys me the occasional attention of the Call of Duty crowd.  Here is a simple application of Bayes' Theorem to artillery safety.

For about two years I was a Fire Direction Officer (FDO).  My job was to run the team of Marines who receive targeting data and subsequently calculate technical firing solutions to send to the individual artillery pieces.  As you might imagine, getting good at this requires a lot of training.  And, as you might also imagine, safety is a nontrivial aspect of any sort of training that involves 100-lb. exploding bullets on an 11-mile flight.

Before any exercise, there are days of prep work to maximize the probability that everything goes smoothly and safely, but there inevitably comes the supremely intense interval between the time the very first round leaves the muzzle and the time it (fingers crossed) hits the target.  To give you an idea of just how intense this interval is, FDOs only half-jokingly call the button that transmits data to the gun line the "go-to-jail button."

One of our standard safety procedures involves the firing of two "check rounds."  The observer selects a target near the center of the impact area.  Then the check rounds are fired, one at a time, while the observer verifies that they impact reasonably near (more on this later) the correct grid coordinates.  If the rounds do, in fact, land in the vicinity of the target, we assume that the guns are laid correctly and our computational tools are functioning properly, the FDO resumes breathing, and we begin processing missions for the exercise.  But on what basis can we justify making this assumption?

Notice that what we're essentially doing is evaluating the posterior probability that the guns are laid correctly given the evidence provided by the check rounds.  This is where Bayes' Theorem comes in.  For the rest of this example, let H be the event "everything is set and functioning properly," and let E be the event "the check rounds impact near the target."

Bayes' Theorem tells us that:

$latex P(H|E)=\frac{P(E|H)P(H)}{P(E)}&s=2$

If we expand the denominator using the Law of Total Probability, partitioning on H and ¬H, we end up with:

$latex P(H|E)=\frac{P(E|H)P(H)}{P(E|H)P(H)+P(E|\neg H)P(\neg H)}&s=2$

Now we'll evaluate the individual elements, but for the time being notice that the posterior probability approaches 1 as the second term in the denominator approaches 0.
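To see that claim numerically, here is a minimal sketch of the expanded formula. The probabilities plugged in below are illustrative placeholders I've chosen for the demonstration, not official values:

```python
def posterior(p_e_given_h, p_h, p_e_given_not_h):
    """P(H|E) computed via the Law of Total Probability in the denominator."""
    p_not_h = 1 - p_h
    numerator = p_e_given_h * p_h
    denominator = numerator + p_e_given_not_h * p_not_h
    return numerator / denominator

# As the second denominator term shrinks, the posterior climbs toward 1:
for p_e_given_not_h in (0.1, 0.01, 0.001):
    print(round(posterior(0.99, 0.95, p_e_given_not_h), 4))
```

Notice that the hypothetical priors barely matter here; it's the shrinking P(E|¬H)P(¬H) term that drives the posterior toward 1.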

P(E|H): How likely is the check round evidence if the safety hypothesis is true?

Each family of propellant charges and projectiles that we use comes with an experimental measure of spread called "probable error" that is roughly analogous, but not identical, to standard deviation.  Actually, it comes with two measures of spread, one along the direction parallel to the line of fire ("probable error in range"), and one along the direction perpendicular to the line of fire ("probable error in deflection").  I don't know exactly why the military eschews standard deviation in favor of probable error, though perhaps it's slightly more intuitive.  By definition, 50% of rounds land within one probable error of the mean point of impact, so it's equally likely that any particular round lands inside or outside of this interval.  (Compare this to the empirical rule based on standard deviation.)
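The relationship between the two measures of spread is easy to check. If we model impacts as normally distributed (an assumption on my part; the firing tables themselves don't say), one probable error corresponds to roughly 0.6745 standard deviations, since that's the half-width that captures exactly 50% of a normal distribution:

```python
import math

def normal_cdf(x):
    """CDF of the standard normal distribution."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

PE_PER_SIGMA = 0.6744898  # one probable error, in standard deviations

# By construction, half of all rounds land within one probable error
# of the mean point of impact:
p_within_1pe = normal_cdf(PE_PER_SIGMA) - normal_cdf(-PE_PER_SIGMA)
print(round(p_within_1pe, 3))  # → 0.5
```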

Notice that the tails are truncated. Any round outside of ± 4 PE is considered "erratic" (not properly ballistic) and thus unhelpful in terms of ballistic computation, so it's ignored.

So, when we say that a check round lands "near" the target, what we really mean is that it lands within 4 probable errors in range and 4 probable errors in deflection.  Looking at the distribution above, P(E|H) = 1.  Of course, this isn't actually true.  There are always lurking potential errors that we can neither detect nor control (they get lumped into an aggregated disturbance term called "position constants"), but, if we are accurately aiming at the mean point of impact (target), the probability of observing a round within 4 probable errors is extremely high.  And, without wearing ourselves out untangling the relationship between the two trials, the likelihood of observing two rounds within those range and deflection limits is still very high.
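We can put a rough number on "extremely high." Sticking with the normal model (and, as a simplifying assumption the text deliberately avoids, treating range and deflection as independent, and the two check rounds as independent of each other), 4 probable errors is about 2.7 standard deviations:

```python
import math

def prob_within(z):
    """P(|Z| < z) for a standard normal variable Z."""
    return math.erf(z / math.sqrt(2))

FOUR_PE_IN_SIGMA = 4 * 0.6744898          # ≈ 2.7 standard deviations

p_one_axis   = prob_within(FOUR_PE_IN_SIGMA)  # within 4 PE in range (or deflection)
p_one_round  = p_one_axis ** 2                # both axes at once (independence assumed)
p_two_rounds = p_one_round ** 2               # both check rounds (independence assumed)
print(round(p_one_axis, 4), round(p_two_rounds, 4))
```

Even with the independence assumptions stacked four deep, P(E|H) still comes out above 0.97, which squares with the "still very high" claim.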

P(H): What is the prior probability of everything being set up properly?

You should not be surprised to learn that the Marine Corps is big on procedures, especially when explosions are involved.  I mentioned before that artillery exercises involve days of preparation, a large portion of which is devoted to safety.  During that time, a lot of things happen.  The databases for two fire direction computers are built independently by two safety-certified Marines.  The safety information is also computed by hand, again twice, again independently.  Then firing data is generated for some hypothetical missions.  The primary and backup computers must agree within zero tolerance, and must also agree with the manual computations within formally defined limits.

Once we actually get to the firing position, the guns are laid in place using a combination of GPS data, inertial navigation systems, and both primary and secondary manual survey devices.  The azimuth of fire is transmitted to the guns, who then must read back another azimuth based on a second aiming point (the details of which aren't known to the gunners) to avoid the possibility of Marines parroting back the correct azimuth instead of actually verifying it.  Current meteorological data is checked for suspicious fluctuations before being input into the two computers separately, line by line, and read back for accuracy.  We make corrections for 11 specific known conditions, including muzzle velocity, rotation of the Earth, variation in altitude, propellant temperature, and the inherent right-hand drift a round experiences because of the barrel's rifling.  During the actual processing of missions, basically everything is said, heard, and done twice, digit-by-digit.

All of this is a long way of saying that the prior, P(H), is also very close to 1.

P(E|¬H): What is the likelihood of the check round evidence if we're not set up safely?

The last several paragraphs were intended to give you the (correct) impression that coaxing artillery rounds into landing close enough to a given spot to blow it up is extremely difficult and sensitive to lots of variables.  I think I'm still understating my case.  Given all the ways that things can go wrong, even when we work really hard to get it right, it seems to me (and to the Marine Corps as a whole, since they endorse this line of reasoning) that it would require an extremely intricate cosmic conspiracy to get two consecutive artillery rounds to land near our target if we weren't actually aiming there in the first place!  Let's go out on a limb and say this value is close to zero.

P(¬H): What is the probability we're not set up safely?

Since I've already established (I think) that we spend an awful lot of time and energy to get P(H) close to 1, it follows that P(¬H) must be close to 0.  It's probably larger than P(E|¬H), but still pretty small.


Since P(¬H) is small, and P(E|¬H) is even smaller, their product contributes very little to the denominator.  As noted above, this implies that the posterior probability of interest is close to 1.  In other words, check rounds near the target offer very strong evidence that we've managed not to screw anything up too badly in the run-up to our training.  We can--and sometimes do--measure the actual errors in range and deflection to increase our precision as we fire more missions and accumulate more data, but Bayes' Theorem tells us that the check round heuristic is a justifiable shortcut.
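Putting it all together with some hypothetical plug-in values (chosen only to match the qualitative claims above; none of these are Marine Corps figures):

```python
# Hypothetical values consistent with the discussion in the text:
p_e_given_h     = 0.97   # check rounds land within limits if all is well
p_h             = 0.99   # days of redundant, independent safety checks
p_e_given_not_h = 0.001  # the "cosmic conspiracy" scenario

p_not_h = 1 - p_h
p_h_given_e = (p_e_given_h * p_h) / (
    p_e_given_h * p_h + p_e_given_not_h * p_not_h
)
print(round(p_h_given_e, 5))  # → 0.99999
```

Even deliberately conservative inputs leave the posterior within rounding distance of certainty, which is exactly why the check-round heuristic is trustworthy.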

Of course, the go-to-jail button is always there, smiling its bright red smile, occasionally whispering, "But are you sure?"
