**Contents**

3.9.9.1 Measurement uncertainty (Error)

3.9.9.2 Systematic and random uncertainties

3.9.9.3 Statistical counting uncertainty

3.9.9.4 Uncertainty propagation

3.9.9.5 Reporting confidence intervals

When radiometric measurements are made, it is always important to maintain an awareness of uncertainties in the data and to take appropriate precautions so that the data which are obtained are adequate for their intended purpose. In this regard, pre-measurement consideration of the data quality objectives may be especially important.

Three limits of uncertainty are commonly quoted, and there is much confusion as to their meaning:

- *The limit of confidence* (also known as the limit of decision) is the amount of a radionuclide that would need to be detected by a measurement in order to be confident that the identification is genuine.
- *The limit of detection* is the amount of radionuclide that one can be confident would be detected by a measurement.
- *The limit of quantification* (also known as the limit of determination, and often referred to as the minimum detectable activity (MDA)) is the amount of radionuclide that must be present in order to be confident that a measurement is adequate for quantification.

Whenever quoting results and uncertainties of counting measurements on low-activity samples, it is important to ensure that a consistent standard of reporting has been specified and adhered to.

Before making a radiometric determination, it will be necessary to decide what sensitivity (limit of detection or limit of quantification) is required and to design the measurement such that this can be achieved. Failure to do so may result in having to repeat the measurement or in drawing an unwarranted conclusion that a particular isotope is not present.

If a given radioisotope is present in sufficient quantity, it may be possible to terminate the measurement early once the results have reached the desired statistical accuracy. An adaptive approach here can save much effort and time. Care should be taken that an overly conservative measurement (i.e., one with an unnecessarily low level of uncertainty) is not required. In many cases, the overall uncertainty in a radiation measurement result will be dominated by factors other than counting statistics (in particular, the large variability inherent in sampling).

#### 3.9.9.1 Measurement uncertainty (Error)

The quality of measurement data is directly impacted by the magnitude of the measurement uncertainty associated with it. Some uncertainties, such as statistical counting uncertainties, can be easily calculated from the count results using mathematical procedures. Evaluating other sources of uncertainty requires more effort and in some cases is not possible. For example, if an alpha measurement is made on a porous concrete surface, the observed instrument response, when converted to units of activity, will probably not exactly equal the true activity under the probe. The absorption properties of the surface for particulate radiation vary from point to point and therefore create some level of variation in the expected detection efficiency. This variability in the expected detector efficiency results in uncertainty in the final reported result. In addition, quality control (QC) measurement results provide an estimate of the random and systematic uncertainties associated with the measurement process.

The measurement uncertainty for every analytical result or series of results, such as for a measurement system, should be reported. This uncertainty, while not directly used for demonstrating compliance with the release criterion, is used for survey planning and data assessment throughout the Radiation Survey and Site Investigation (RSSI) process. In addition, the uncertainty is used for evaluating the performance of measurement systems using quality control measurement results. Uncertainty can also be used for comparing individual measurements to the DCGL. This is especially important in the early stages of decommissioning (i.e., scoping, characterization, remedial action support) when decisions are made based on a limited number of measurements.

For most sites, evaluations of uncertainty associated with field measurements are important only for data being used as part of the final status survey documentation. The final status survey data, which is used to document the final radiological status of a site, should state the uncertainties associated with the measurements. Conversely, detailing the uncertainties associated with measurements made during scoping or characterization surveys may or may not be of value depending on what the data will be used for – i.e., the data quality objectives (DQOs). From a practical standpoint, if the observed data are obviously greater than the DCGL and will be eventually cleaned up, then the uncertainty may be relatively unimportant. Conversely, data collected during early phases of a site investigation that may eventually be used to show that the area is below the DCGL – and therefore does not require any clean-up action – will need the same uncertainty evaluation as the final status survey data. In summary, the level of effort needs to match the intended use of the data.

#### 3.9.9.2 Systematic and random uncertainties

Measurement uncertainties are often broken into two sub-classes termed systematic (i.e., methodical) uncertainty and random (i.e., stochastic) uncertainty. Systematic uncertainties derive from a lack of knowledge about the true distribution of values associated with a numerical parameter and result in data that are consistently higher (or lower) than the true value. An example of a systematic uncertainty would be the use of a fixed counting efficiency value even though it is known that the efficiency varies from measurement to measurement, but without knowledge of the frequency distribution. If the fixed counting efficiency value is higher than the true but unknown efficiency – as would be the case for an unrealistically optimistic value – then every measurement result calculated using that efficiency would be biased low. Random uncertainties refer to fluctuations associated with a known distribution of values. An example of a random uncertainty would be a well-documented chemical separation efficiency that is known to fluctuate with a regular pattern about a mean. A constant recovery value is used during calculations, but the true value is known to fluctuate from sample to sample with a fixed and known degree of variation.

To minimize the need for estimating potential sources of uncertainty, the sources of uncertainty themselves should be reduced to a minimal level by using practices such as:

- The detector used should minimize the potential uncertainty. For example, when making field surface activity measurements for 238U on concrete, a beta detector such as a thin-window Geiger-Mueller ‘pancake’ may provide better quality data than an alpha detector depending on the circumstances. Less random uncertainty would be expected between measurements with a beta detector such as a pancake, since beta emissions from the uranium will be affected much less by thin absorbent layers than will the alpha emissions.
- Calibration factors should accurately reflect the efficiency of a detector being used on the surface material being measured for the contaminant radionuclide or mixture of radionuclides (see Section 3.8.5). For most field measurements, variations in the counting efficiency on different types of materials will introduce the largest amount of uncertainty in the final result.
- Uncertainties should be reduced or eliminated by use of standardized measurement protocols (e.g., standard operating procedures) when possible. Special effort should be made to reduce or eliminate systematic uncertainties, or uncertainties that are the same for every measurement simply due to an error in the process. If the systematic uncertainties are reduced to a negligible level, then the random uncertainties, or those uncertainties that occur on a somewhat statistical basis, can be dealt with more easily.
- Instrument operators should be trained and experienced with the instruments used to perform the measurements.
- Quality assurance/Quality control should be conducted as described in Section 2.13 and Section 3.3.9.

Uncertainties that cannot be eliminated need to be evaluated such that the effect can be understood and properly propagated into the final data and uncertainty estimates. As previously stated, non-statistical uncertainties should be minimized as much as possible through the use of good work practices.

Overall random uncertainty can be evaluated using the methods described in the following sections. Section 3.9.9.3, ‘Statistical counting uncertainty’, describes a method for calculating random counting uncertainty. Section 3.9.9.4, ‘Uncertainty propagation’, discusses how to combine this counting uncertainty with other uncertainties from the measurement process using uncertainty propagation.

Systematic uncertainty is derived from calibration errors, incorrect yields and efficiencies, non-representative survey designs, and ‘blunders’. It is difficult – and sometimes impossible – to evaluate the systematic uncertainty for a measurement process, but bounds should always be estimated and made small compared to the random uncertainty, if possible. If no other information on systematic uncertainty is available, it is recommended to use 16% as an estimate for systematic uncertainties (1% for blanks, 5% for baseline, and 10% for calibration factors).

#### 3.9.9.3 Statistical counting uncertainty

When performing an analysis with a radiation detector, the result will have an uncertainty associated with it due to the statistical nature of radioactive decay. To calculate the total uncertainty associated with the counting process, both the background measurement uncertainty and the sample measurement uncertainty must be accounted for. The standard deviation of the net count rate, or the statistical counting uncertainty, can be calculated by:

σ_{n} = √(C_{s+b} / (T_{s+b})^{2} + C_{b} / (T_{b})^{2}) ……………………….. (3-24)

where:

σ_{n} = standard deviation of the net count rate result [counts/s].

C_{s+b} = number of gross (sample plus background) counts [counts].

T_{s+b} = gross count time [s].

C_{b} = number of background counts [counts].

T_{b} = background count time [s].
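As a sketch, Equation 3-24 can be implemented directly in code, assuming Poisson counting statistics so that the variance of each count equals the count itself (the function name below is illustrative, not from the source):

```python
import math

def net_count_rate_sigma(c_sb, t_sb, c_b, t_b):
    """Standard deviation of the net count rate (Equation 3-24).

    Under Poisson statistics the variance of a count equals the count
    itself; dividing by the square of the count time converts count
    variance to count-rate variance.
    """
    return math.sqrt(c_sb / t_sb**2 + c_b / t_b**2)

# e.g., 100 gross counts in 60 s against 50 background counts in 60 s
sigma_n = net_count_rate_sigma(100, 60, 50, 60)  # ~0.204 counts/s
```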

#### 3.9.9.4 Uncertainty propagation

Most measurement data will be converted to different units or otherwise included in a calculation to determine a final result. The standard deviation associated with the final result, or the total uncertainty, can then be calculated. Assuming that the individual uncertainties are relatively small, symmetrically distributed about zero and independent of one another, the total uncertainty for the final calculated result can be determined by evaluating the following first-order error propagation formula:

σ_{u} = √((∂u / ∂x)^{2} σ_{x}^{2} + (∂u / ∂y)^{2} σ_{y}^{2} + (∂u / ∂z)^{2} σ_{z}^{2} + …) ……………………….. (3-25)

where:

u = function, or formula, that defines the calculation of a final result as a function of the collected data. All variables in this equation, i.e., x, y, z…, are assumed to have a measurement uncertainty associated with them and do not include numerical constants.

σ_{u} = standard deviation, or uncertainty, associated with the final result.

σ_{x}, σ_{y},… = standard deviation, or uncertainty, associated with the parameters x, y, z, …

Equation 3-25, generally known as the error propagation formula, can be evaluated to determine the standard deviation of a final result from calculations involving measurement data and their associated uncertainties. The solutions for common calculations along with their uncertainty propagation formulas are included below.

| Data calculation | Uncertainty propagation |
|---|---|
| u = x + y, or u = x − y | σ_{u} = √(σ_{x}^{2} + σ_{y}^{2}) |
| u = x ÷ y, or u = x × y | σ_{u} = u √((σ_{x}/x)^{2} + (σ_{y}/y)^{2}) |
| u = c × x, where c is a positive constant | σ_{u} = c σ_{x} |
| u = x ÷ c, where c is a positive constant | σ_{u} = σ_{x} / c |

*Note: In the above examples, x and y are measurement values with associated standard deviations, or uncertainties, equal to σ _{x} and σ_{y} respectively. The symbol ‘c’ is used to represent a numerical constant which has no associated uncertainty. The symbol σ_{u} is used to denote the standard deviation, or uncertainty, of the final calculated value u.*
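The four rules above can be sketched as small helper functions (the names are illustrative); each is a direct special case of Equation 3-25:

```python
import math

def sigma_sum(sigma_x, sigma_y):
    """u = x + y or u = x - y: absolute uncertainties add in quadrature."""
    return math.sqrt(sigma_x**2 + sigma_y**2)

def sigma_prod(u, x, sigma_x, y, sigma_y):
    """u = x * y or u = x / y: relative uncertainties add in quadrature."""
    return abs(u) * math.sqrt((sigma_x / x)**2 + (sigma_y / y)**2)

def sigma_scale(c, sigma_x):
    """u = c * x for a positive constant c (use 1/c for u = x / c)."""
    return c * sigma_x
```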

#### 3.9.9.5 Reporting confidence intervals

Throughout Section 3.9.9, the term ‘measurement uncertainty’ is used interchangeably with the term ‘standard deviation’. In this respect, the uncertainty is taken to be numerically identical to the standard deviation associated with a normally distributed range of values. When reporting a confidence interval for a value, one provides the range of values that represents a pre-determined level of confidence (e.g., 95%). To make this calculation, the final standard deviation, or total uncertainty σ_{u} as shown in Equation 3-25, is multiplied by a constant factor k representing the area under a normal curve as a function of the standard deviation. The values of k representing various intervals about the mean of a normal distribution as a function of the standard deviation are given in Table 3.51. The following example illustrates the use of this factor in context with the propagation and reporting of uncertainty values.

| Interval μ ± kσ | Area |
|---|---|
| μ ± 0.674σ | 0.500 |
| μ ± 1.00σ | 0.683 |
| μ ± 1.65σ | 0.900 |
| μ ± 1.96σ | 0.950 |
| μ ± 2.00σ | 0.954 |
| μ ± 2.58σ | 0.990 |
| μ ± 3.00σ | 0.997 |

Table 3.51 Areas under various intervals about the mean of a normal distribution
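The entries in Table 3.51 can be reproduced from the standard normal distribution: the area within μ ± kσ equals erf(k/√2). A minimal check in Python:

```python
import math

def coverage(k):
    """Area under a normal distribution within mu ± k·sigma."""
    return math.erf(k / math.sqrt(2))

# coverage(1.96) ≈ 0.950, coverage(2.58) ≈ 0.990, coverage(3.00) ≈ 0.997
```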

*Example 3.19: Calculation of an uncertainty propagation and confidence interval*


* A measurement process with a zero background yields a count result of 28 ± 5 counts in 5 minutes, where the ± 5 counts represents one standard deviation about a mean value of 28 counts. The detection efficiency is 0.1 counts per disintegration ± 0.01 counts per disintegration, again representing one standard deviation about the mean.

* Calculate the activity of the sample, in dpm, total measurement uncertainty, and the 95% confidence interval for the result.

* The total number of disintegrations is:

28 counts/(0.1 c/d) = 280

* Using the equation for error propagation for division, total uncertainty is:

280 × √((5/28)^{2} + (0.01/0.1)^{2}) = 57 disintegrations

* The activity will then be 280 ÷ 5 minutes = 56 dpm, and the total uncertainty will be 57 ÷ 5 minutes = 11 dpm. (Since the count time is considered to have negligible variance, it is treated as a constant.)

* Referring to Table 3.51, a k value of ± 1.96 represents a confidence interval equal to 95% about the mean of a normal distribution. Therefore, the 95% confidence interval would be 1.96 × 11 dpm = 22 dpm. The final result would be 56 ± 22 dpm.
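Example 3.19 can be reproduced end to end with the propagation rule for division (Equation 3-25); the variable names below are illustrative:

```python
import math

counts, t_min = 28.0, 5.0      # gross counts and count time (background is zero)
sigma_counts = 5.0             # one standard deviation on the counts
eff, sigma_eff = 0.1, 0.01     # counts per disintegration, one standard deviation

disintegrations = counts / eff                    # 280 disintegrations
sigma_dis = disintegrations * math.sqrt(
    (sigma_counts / counts)**2 + (sigma_eff / eff)**2
)                                                 # ~57 disintegrations

activity = disintegrations / t_min   # 56 dpm; count time treated as exact
sigma_act = sigma_dis / t_min        # ~11 dpm
ci95 = 1.96 * sigma_act              # ~22 dpm

print(f"{activity:.0f} ± {ci95:.0f} dpm")  # 56 ± 22 dpm
```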