

Bayesian NDE

Your NonDestructive Evaluation system signals a "hit."  Is it really a crack?  Or is it a "false call?"  Here's an application of Bayes's Theorem that answers the question.



NonDestructive Evaluation provides a numerical example of Bayes's Theorem:

(Disclaimer: All probabilities in this example are fictional, intended for pedagogical purposes only.)


Probability of Detection (POD) ...
... depends on crack size, and MIL-STD 1823 provides methods for measuring and modeling this relationship. For the sake of simplicity in explaining Bayes's Theorem, however, assume here that POD is a constant, independent of crack size. This is really P(hit | crack), the probability of a positive test indication (a hit) given that a crack is present. But no inspection is perfect, and we must contend with false calls - a positive indication when no crack is present. Call this P(hit | no crack). For the sake of exposition assume P(hit | crack) = 0.95 and P(hit | no crack) = 0.01. The frequentist interpretation here is that 100 similar inspections of cracked parts would find the crack 95 times, and 100 inspections of clean parts would produce one false call.

Let us further suppose ...

... that cracks exist in parts with a frequency of occurrence of 1 per 10,000. Now, we inspect the part, and get a hit. What is the probability that there really exists a crack and this is not a false call? What we want is P(crack | hit). Bayes's Theorem is how we get it.

First, we'll need the total probability of a positive indication, P(hit), which is

P(hit | crack) P(crack) + P(hit | no crack) P(no crack).

P(hit) = 0.95 * 0.0001 + 0.01 * (1 - 0.0001) = 0.010094. (Notice that P(no crack) = 1 - P(crack), since the two events are complementary: either there is a crack or there is not.)
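The total-probability calculation can be checked directly, using the fictional values from the example:

```python
# Total probability of a hit, combining the two ways a positive indication
# can occur: a true detection or a false call.
p_hit_given_crack = 0.95      # POD, assumed constant for this example
p_hit_given_no_crack = 0.01   # false-call probability
p_crack = 1 / 10_000          # incidence of cracking

p_hit = (p_hit_given_crack * p_crack
         + p_hit_given_no_crack * (1 - p_crack))
print(p_hit)  # 0.010094
```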

Now Bayes's Theorem states:

P(crack | hit) = P(hit | crack) P(crack) / P(hit)

= 0.95*0.0001/0.010094

= 0.0094115

This result may seem surprising until you consider that, in this example, the probability of a false indication is two orders of magnitude greater than the incidence of cracking. The prior probability of there being a crack was 0.0001. The posterior probability of a crack, 0.0094, is about 94 times greater. (Although illustrating Bayes's Theorem, not assessing NDE effectiveness, was the primary purpose of this example, it is interesting to see how this number changes when the incidence of cracking is much higher, or the false call rate is much lower.)
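The whole calculation fits in a few lines, which also makes it easy to try the what-ifs just mentioned. The alternative incidence and false-call values below are illustrative choices, not part of the original example:

```python
# Bayes's Theorem for a positive indication, with the example's numbers,
# plus a quick look at how the posterior responds to different inputs.
def posterior(p_hit_given_crack, p_hit_given_no_crack, p_crack):
    """P(crack | hit) by Bayes's Theorem."""
    p_hit = (p_hit_given_crack * p_crack
             + p_hit_given_no_crack * (1 - p_crack))
    return p_hit_given_crack * p_crack / p_hit

print(posterior(0.95, 0.01, 1e-4))    # ~0.0094  (the example above)
print(posterior(0.95, 0.001, 1e-4))   # ~0.087   (false calls 10x rarer)
print(posterior(0.95, 0.01, 1e-2))    # ~0.49    (cracks 100x more common)
```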

This simple example has a single-valued prior and a single-valued posterior, rather than a range of possible values represented by probability densities. Suppose, for example, that you didn't know the incidence of cracked parts was 1/10,000, and that your experts' opinion put the number somewhere between 1/1,000 on the high side and 1/1,000,000 on the low side. Now the prior wouldn't be a single value but a distribution of possible values, and it would be necessary to integrate over all of them. It is these integrations of products of probability densities that make Bayesian methods computationally cumbersome, though recent advances in Bayesian computational methods have alleviated this considerably.
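A hedged sketch of that distributed-prior case: assume, purely for illustration, that the experts' uncertainty about the incidence is uniform on the log scale between 1/1,000,000 and 1/1,000 (the original text does not specify a prior shape), and integrate numerically on a grid:

```python
# Distributed prior on the crack incidence theta, integrated numerically.
# The log-uniform prior shape is an illustrative assumption.
pod, pfc = 0.95, 0.01   # P(hit | crack), P(hit | no crack), as before

# Candidate incidence values, equally spaced in log10 from 1e-6 to 1e-3;
# equal log spacing gives each grid point equal weight under this prior.
grid = [10 ** (-6 + 3 * i / 1000) for i in range(1001)]

# P(crack | hit) = integral of POD*theta over the prior, divided by the
# integral of P(hit | theta) over the prior.
num = sum(pod * p for p in grid) / len(grid)
den = sum(pod * p + pfc * (1 - p) for p in grid) / len(grid)
p_crack_given_hit = num / den
print(p_crack_given_hit)  # roughly 0.014 under these assumptions
```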

Defining the Prior Distribution is Fundamental:

Clearly the estimated posterior probability density depends on the outcome of the experiment and on the behavior of the prior. This dependence on a subjective decision about the prior makes some engineers (and all frequentists) uneasy. But engineers make subjective decisions all the time. It's an unavoidable part of anything creative, as the art of engineering most surely is. And, say the Bayesians correctly, you can't avoid it by ignoring it. As it turns out, the worry is greatly overblown. The choice of prior, while important, is not as influential as it may first appear, and a judicious choice will acquiesce to the data when the data speak clearly yet provide stability when they speak in muffled tones. With meaningful data, the resulting posterior distribution is rather insensitive to differences in the prior.

Bayesian Networks are Concatenated Applications of Bayes's Theorem:

It is easy to see conceptually that the posterior distribution of one application can provide the prior for a subsequent one. Furthermore, since a prior can be based on non-experimental knowledge such as analytic predictions or engineering experience, these different sources of what is known can be linked together using networks of Bayesian calculations.
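The "posterior becomes the next prior" idea can be sketched with the example's numbers: two inspections of the same part, both returning hits. The assumption that the inspections are independent given the crack state is ours, for illustration:

```python
# Chained Bayesian updates: the posterior from one inspection serves as
# the prior for the next. Values are the fictional ones from the example.
def update(prior, pod=0.95, pfc=0.01):
    """One application of Bayes's Theorem for a positive indication."""
    return pod * prior / (pod * prior + pfc * (1 - prior))

p = 1e-4          # initial prior: 1 crack per 10,000 parts
p = update(p)     # after the first hit:  ~0.0094
p = update(p)     # after a second, independent hit: ~0.47
print(p)
```

A single hit leaves a crack unlikely, but two independent hits raise the probability to near even odds, which is the intuition behind confirming inspections.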

(Another, very different, application of Bayesian methods can be found in Annis and Griffiths, "Staircase Testing and the Random Fatigue Limit," presented 6 March 2001, at the National HCF Conference in JAX.)



Copyright © 1998-2008 Charles Annis, P.E.
Last modified: June 08, 2014