
3.10.7 Verify the assumptions of applied statistical tests


An evaluation of whether the data are consistent with the underlying assumptions made for the statistical procedures helps to validate the use of a test. One may also determine that certain departures from these assumptions are acceptable given the actual data and other information about the study. The non-parametric tests described in this section assume that the data from the reference area or survey unit consist of independent samples from each distribution.

Spatial dependencies that potentially affect these assumptions can be assessed using posting plots. More sophisticated tools for determining the extent of spatial dependencies are also available (e.g., EPA QA/G-9 [EPA-1996d]); these methods tend to be complex and are best used with guidance from a professional statistician.
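A posting plot is simply a map of the sample locations with each point marked by the magnitude of its measurement; clusters of high values suggest spatial dependence. The sketch below produces a crude text-based posting plot. All names, coordinates, and values are hypothetical, and real applications would plot the data graphically.

```python
def posting_plot(samples, ncols=8, nrows=4):
    """Crude text posting plot.

    samples: list of (x, y, value) tuples with x, y scaled to [0, 1).
    Each sample is binned into a grid cell and marked by the tercile
    of its value: '.' (low), 'o' (middle), 'X' (high).  Clusters of
    'X' symbols suggest spatial dependence worth a closer look.
    """
    values = sorted(v for _, _, v in samples)
    lo = values[len(values) // 3]        # lower tercile boundary
    hi = values[2 * len(values) // 3]    # upper tercile boundary
    grid = [[" "] * ncols for _ in range(nrows)]
    for x, y, v in samples:
        symbol = "." if v < lo else ("o" if v < hi else "X")
        grid[int(y * nrows)][int(x * ncols)] = symbol
    return ["".join(row) for row in grid]

# Hypothetical survey-unit data: high values clustered in one corner
samples = [(0.05, 0.05, 9.0), (0.15, 0.10, 8.0), (0.10, 0.20, 9.0),
           (0.60, 0.60, 1.0), (0.70, 0.80, 2.0), (0.80, 0.50, 1.0),
           (0.90, 0.90, 2.0), (0.50, 0.70, 1.0), (0.60, 0.90, 2.0)]
for row in posting_plot(samples):
    print(row)
```

In this contrived data set the 'X' marks gather in one corner of the grid, which is the kind of pattern that would prompt a more formal assessment of spatial dependence.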

Asymmetry in the data can be diagnosed with a stem-and-leaf display, a histogram, or a quantile plot. Data transformations can sometimes be used to minimize the effects of asymmetry.
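A numerical companion to these graphical diagnostics is the sample skewness coefficient, which is near zero for symmetric data and large and positive for right-skewed data. The sketch below computes it in pure Python for a hypothetical set of concentration measurements and shows how a logarithmic transformation can reduce the asymmetry; the data are illustrative only.

```python
import math
import statistics

def sample_skewness(data):
    """Adjusted Fisher-Pearson sample skewness coefficient.

    Roughly zero for symmetric data; large positive values indicate
    a long right tail, which can undermine tests that assume symmetry.
    """
    n = len(data)
    mean = statistics.fmean(data)
    s = statistics.stdev(data)
    m3 = sum((x - mean) ** 3 for x in data) / n
    return (m3 / s ** 3) * math.sqrt(n * (n - 1)) / (n - 2)

# Hypothetical survey-unit concentrations (right-skewed by one high value)
concentrations = [0.11, 0.13, 0.14, 0.16, 0.18, 0.21, 0.25, 0.32, 0.55, 1.40]
print(round(sample_skewness(concentrations), 2))

# A log transformation pulls in the right tail and reduces the skewness
logged = [math.log(x) for x in concentrations]
print(round(sample_skewness(logged), 2))
```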

One of the primary advantages of the non-parametric tests used in this guidance is that they involve fewer assumptions about the data than their parametric counterparts. If parametric tests are used (e.g., Student's t-test), then any additional assumptions made in using them should be verified (e.g., by testing for normality). These issues are discussed in detail in [EPA-1996d].

One of the more important assumptions made in the survey design described in Section 3.3 is that the sample sizes determined for the tests are sufficient to achieve the data quality objectives set for the Type I (α) and Type II (β) error rates. Verification of the power of the tests (1-β) to detect adequate remediation may be of particular interest. Methods for assessing the power are discussed in Appendix E.1.3. If the hypothesis that the residual radioactivity in the survey unit exceeds the release criterion is accepted, there should be reasonable assurance that the test is equally effective at determining that a survey unit has residual contamination below the DCGLW. Otherwise, unnecessary remediation may result. For this reason, it is better to plan the surveys cautiously, even to the point of:

  • Overestimating the potential data variability;
  • Taking too many samples;
  • Overestimating minimum detectable concentrations (MDCs).
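The power of a planned test can be estimated before (or checked after) the survey by simulation. The sketch below estimates the power of the one-sample Sign test by Monte Carlo, under the hypothetical assumption of normally distributed measurement errors; the sample sizes, significance level, and effect size are illustrative only.

```python
import random
from math import comb

def sign_test_power(n, delta_over_sigma, alpha=0.05, trials=2000, seed=1):
    """Monte Carlo power estimate for the one-sample Sign test of
    H0: survey-unit median >= DCGLw.

    delta_over_sigma: how far the true median sits below the DCGLw,
    in units of the measurement standard deviation (assumed normal
    errors -- an illustrative assumption, not part of the test itself).
    """
    # Critical value: smallest k such that, under H0 (p = 0.5),
    # P(at least k of n measurements fall below the DCGLw) <= alpha.
    def upper_tail(k):
        return sum(comb(n, j) for j in range(k, n + 1)) / 2 ** n
    k_crit = next(k for k in range(n + 1) if upper_tail(k) <= alpha)

    rng = random.Random(seed)
    rejections = 0
    for _ in range(trials):
        # Measurements centred delta below the DCGLw (taken as zero here)
        below = sum(rng.gauss(-delta_over_sigma, 1.0) < 0.0 for _ in range(n))
        rejections += below >= k_crit
    return rejections / trials

print(sign_test_power(10, 1.0))   # power with a small sample
print(sign_test_power(30, 1.0))   # power improves with more samples
```

Running such a sketch at the planning stage makes the trade-off concrete: with the true median one standard deviation below the DCGLw, a larger sample size markedly increases the chance of correctly releasing the survey unit.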

If one is unable to show that the DQOs were met with reasonable assurance, a resurvey may be needed. Examples of assumptions and possible methods for their assessment are summarized in Table 3.57.

Assumption             Diagnostic
Spatial Independence   Posting Plot
Symmetry               Histogram, Quantile Plot
Data Variance          Sample Standard Deviation
Power is Adequate      Retrospective Power Chart

Table 3.57 Methods for checking the assumptions of statistical tests
Alternate null hypothesis

The selection of the null hypothesis in EURSSEM is designed to be protective of human health and the environment as well as consistent with current methods used for demonstrating compliance with regulations. EURSSEM also acknowledges that site-specific conditions (e.g., high variability in background, lack of measurement techniques with appropriate detection sensitivity) may preclude the use of the null hypothesis that the survey unit is assumed to be contaminated. Similarly, a different null hypothesis and methodology could be used for different survey units (e.g., Class 3 survey units). NUREG-1505 provides guidance on:

  • Determining when background variability might be an issue;
  • Designing surveys based on the null hypothesis that the survey unit concentration is indistinguishable from the concentration in the reference area;
  • Performing statistical tests to demonstrate that the survey unit is indistinguishable from background [USNRC-1997b].
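A comparison of survey-unit data against a reference area is commonly made with the Wilcoxon rank-sum test. The sketch below implements a one-sided version using the normal approximation in pure Python; the data are hypothetical, and production work would normally use a vetted statistics package rather than this hand-rolled version.

```python
from statistics import NormalDist

def wrs_pvalue(reference, survey):
    """One-sided Wilcoxon rank-sum test (normal approximation,
    midranks for ties, no tie correction of the variance).

    Small p-values indicate the survey-unit measurements tend to
    exceed the reference-area measurements, i.e. the survey unit is
    NOT indistinguishable from background.
    """
    values = list(reference) + list(survey)
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):                  # assign midranks to ties
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        midrank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = midrank
        i = j + 1
    n_ref, n_svy = len(reference), len(survey)
    w = sum(ranks[n_ref:])                 # rank sum of survey-unit data
    mean = n_svy * (n_ref + n_svy + 1) / 2
    var = n_ref * n_svy * (n_ref + n_svy + 1) / 12
    z = (w - mean) / var ** 0.5
    return 1 - NormalDist().cdf(z)

# Hypothetical data: a clearly elevated survey unit vs. one that
# matches the reference area exactly
reference = [1.0, 2.0, 3.0, 4.0, 5.0]
print(wrs_pvalue(reference, [6.0, 7.0, 8.0, 9.0, 10.0]))
print(wrs_pvalue(reference, [1.0, 2.0, 3.0, 4.0, 5.0]))
```

When the survey-unit values all exceed the reference-area values the p-value is small, while a survey unit drawn from the same distribution as the reference area yields a p-value near 0.5, consistent with being indistinguishable from background.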