
3.10.8 Data quality assessment process

Contents
  • Assessment phase
  • Data verification
  • Data validation
  • Data quality assessment
  • Review the data quality objectives of the site characterisation survey and sampling design
  • Conduct a preliminary data review
  • Calculation of the basic statistical quantities: mean, standard deviation and median
  • Data review by graphics
  • Review the selected statistical test
  • Verify the assumptions of the statistical test
  • Draw conclusions from the data
  • Elevated Measurement Comparison
  • Interpretation of statistical test results
  • If the survey unit fails
  • Removable activity

The strength of data quality assessment (DQA) lies in its design: it progresses in a logical and efficient manner to promote an understanding of how well the collected site characterisation data meet their intended use.

Assessment phase

In the assessment phase, the collected site characterisation data are evaluated to determine whether they meet the objectives of the survey and whether they are sufficient to determine compliance with the DCGL. The assessment phase of the data life cycle consists of three phases: data verification, data validation, and data quality assessment (DQA).

Data verification

Data verification is used to ensure that the requirements stated in the planning documents (e.g., quality assurance project plan, standard operating procedures) are implemented as prescribed. This means that deficiencies or problems that occur during implementation should be documented and reported, and that activities performed during the implementation phase are assessed regularly, with findings documented and reported to management. Corrective actions undertaken in response to the findings should be reviewed for adequacy and appropriateness and documented. Data verification activities should be planned and documented in the quality assurance project plan (see Section 2.13). These assessments may include, but are not limited to, inspections, quality control checks, surveillance, technical reviews, performance evaluations, and audits.

To ensure that conditions requiring corrective actions are identified and addressed promptly, data verification activities should be initiated as part of data collection during the implementation phase of the survey. The performance of tasks by personnel is generally compared to a prescribed method documented in the standard operating procedures, and is generally assessed using inspections, surveillance, or audits. Self-assessments and independent assessments may be planned, scheduled, and performed as part of the survey. Self-assessment also means that personnel doing the work should document and report deficiencies or problems that they encounter to their supervisors or management.

The performance of equipment, such as radiation detectors, measurement systems, and their human operators, can be monitored using control charts. Control charts record the results of quantitative quality control checks, such as background and daily calibration or performance checks. They document instrument and measurement system performance on a regular basis and identify conditions requiring corrective action in real time. Control charts are especially useful for surveys that extend over a significant period of time (e.g., weeks rather than days) and for company-owned equipment that is frequently used to collect survey data. Surveys that are completed in one or two days using rented instruments may not benefit significantly from the preparation and use of control charts. The use of control charts is usually documented in the standard operating procedures.
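As a minimal sketch, a Shewhart-style control chart check of daily instrument QC results might look as follows; the baseline counts and the 3-sigma limits are hypothetical illustrations, not values prescribed by this section:

```python
from statistics import mean, stdev

def control_limits(baseline_counts):
    """Derive the centre line and 3-sigma control limits from baseline QC checks."""
    centre = mean(baseline_counts)
    sigma = stdev(baseline_counts)
    return centre - 3 * sigma, centre, centre + 3 * sigma

def check_daily_qc(daily_count, lower, upper):
    """Return True if a daily background/source check falls inside the control limits."""
    return lower <= daily_count <= upper

# Hypothetical daily background counts (counts per minute) for a detector.
baseline = [52, 48, 50, 51, 49, 53, 47, 50, 52, 48]
lo, centre, hi = control_limits(baseline)

# A daily check of 58 cpm would fall above the upper control limit and
# would be flagged for corrective action.
in_control = check_daily_qc(58, lo, hi)
```

A point outside the limits would trigger the corrective actions documented in the standard operating procedures (e.g., recalibration or repair before further survey measurements are accepted).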

A technical review is an independent assessment that provides an in-depth analysis and evaluation of documents, activities, material, data, or items that require technical verification to ensure that established requirements are satisfied. A technical review typically requires a significant effort in time and resources and may not be necessary for all surveys. A complex survey using a combination of scanning, direct measurements, and sampling for multiple survey units is more likely to benefit from a detailed technical review than a simple survey design calling for relatively few measurements using one or two measurement techniques for a single survey unit.

Data validation

Data validation is used to ensure that the results of the data collection activities support the objectives of the survey as documented in the quality assurance project plan, or permit a determination that these objectives should be modified. Data usability is the process of ensuring or determining whether the quality of the data produced meets the intended use of the data. Data verification compares the collected data with the prescribed activities documented in the standard operating procedures; data validation compares the collected data to the data quality objectives documented in the quality assurance project plan. Corrective actions may improve data quality and reduce uncertainty, and may eliminate the need to qualify or reject data.

Qualified data are any data that have been modified or adjusted as part of statistical or mathematical evaluation, data validation, or data verification operations. Data may be qualified or rejected as a result of data validation or data verification activities. Data qualifier codes or flags are often used to identify data that have been qualified. Any scheme used should be fully explained in the quality assurance project plan and survey documentation. The following are examples of data qualifier codes or flags (see Table 3.58), derived from the national qualifiers assigned to results in the contract laboratory program.

U or < MDC The radionuclide of interest was analyzed for, but the radionuclide concentration was below the minimum detectable concentration (MDC). Section 3.11 recommends that the actual result of the analysis be reported, so this qualifier would inform the reader that the result reported is also below the MDC.

J The associated value reported is a modified, adjusted, or estimated quantity. This qualifier might be used to identify results based on surrogate measurements or gross activity measurements (e.g., gross alpha, gross beta). The implication of this qualifier is that the estimate may be inaccurate or imprecise, which might make the result inappropriate for the statistical evaluation of the results. Surrogate measurements that are neither inaccurate nor imprecise may or may not be associated with this qualifier. It is recommended that the potential uncertainties associated with surrogate or gross measurements be quantified and included with the results.

R The associated value reported is unusable. The result is rejected due to serious analytical deficiencies or quality control results. These data would be rejected because they do not meet the data quality objectives of the survey.

O The associated value reported was determined to be an outlier.

Table 3.58 Examples of data qualifier codes or flags

Data validation is often defined by six data descriptors. These six data descriptors are summarized in Table 3.59 and discussed in detail in Section 3.11.2. The decision maker or reviewer examines the data, documentation, and reports for each of the six data descriptors to determine if performance is within the limits specified in the data quality objectives during planning. The data validation process for each data descriptor should be conducted according to procedures documented in the quality assurance project plan.

Data collected should meet the performance objectives for each data descriptor. If they do not, deviations should be noted and corrective action should be taken to improve data usability.

Data quality assessment

Data quality assessment (DQA) is the scientific and statistical evaluation of data to determine if the data are of the right type, quality, and quantity to support their intended use.
There are five steps in the data quality assessment process:

  1. Review the data quality objectives (DQOs) and survey design.
  2. Conduct a preliminary data review.
  3. Select the statistical test.
  4. Verify the assumptions of the statistical test.
  5. Draw conclusions from the data.

These five steps are presented in a linear sequence, but the data quality assessment process is applied in an iterative fashion much like the data quality objectives process.

Reports to decision maker
Suggested content or consideration:
  • Site description
  • Survey design with measurement locations
  • Analytical method and detection limit
  • Detection limits (MDCs)
  • Background radiation data
  • Results on a per-measurement basis, qualified for analytical limitations
  • Field conditions for media and environment
  • Preliminary reports
  • Meteorological data, if indicated by DQOs
  • Field reports
Impact if not met:
  • Unable to perform a quantitative radiation survey and site investigation
Corrective action:
  • Request missing information
  • Perform qualitative or semi-quantitative site investigation

Documentation
Suggested content or consideration:
  • Chain-of-custody records
  • Standard operating procedures
  • Field and analytical records
  • Measurement results related to geographic location
Impact if not met:
  • Unable to identify appropriate concentration for survey unit measurements
  • Unable to have adequate assurance of measurement results
Corrective action:
  • Request that locations be identified
  • Resurveying or re-sampling
  • Correct deficiencies

Data sources
Suggested content or consideration:
  • Historical data used meet data quality objectives
Impact if not met:
  • Potential for Type I and Type II decision errors
  • Lower confidence in data quality
Corrective action:
  • Resurveying, re-sampling or re-analysis for unsuitable or questionable measurements

Analytical method and detection limit
Suggested content or consideration:
  • Routine methods used to analyse radionuclides of potential concern
Impact if not met:
  • Unquantified precision and accuracy
  • Potential for Type I and Type II decision errors
Corrective action:
  • Re-analysis
  • Resurveying, re-sampling or re-analysis
  • Documented statements of limitation

Data review
Suggested content or consideration:
  • Defined level of data review for all data
Impact if not met:
  • Potential for Type I and Type II decision errors
  • Increased variability and bias due to analytical process, calculation errors or transcription errors
Corrective action:
  • Perform data review

Data quality indicators
Suggested content or consideration:
  • Surveying and sampling variability identified for each radionuclide
  • QC measurements to identify and quantify precision and accuracy
  • Surveying, sampling and analytical precision and accuracy quantified
Impact if not met:
  • Unable to quantify levels of uncertainty
  • Potential for Type I and Type II decision errors
Corrective action:
  • Resurveying or re-sampling
  • Perform qualitative site investigation
  • Documented discussion of potential limitations
Table 3.59 Suggested content or consideration, impact if not met, and corrective actions for data descriptors
Review the data quality objectives of the site characterisation survey and sampling design

The following activities are associated with this step in the DQA process:

  • The first step in the data quality assessment evaluation is a review of the data quality objective outputs to ensure that they are still applicable. For example, if the data suggest the survey unit was misclassified as Class 3 instead of Class 1, then the original data quality objectives should be redeveloped for the correct classification.
  • Review the translation of the data user’s objectives into a statement of the hypotheses to be tested using the collected site characterisation data. These objectives should have been documented as part of the DQO process, so this activity is reduced to translating them into the statement of hypotheses.
  • Translating the objectives into limits on the probability of committing Type I or Type II decision errors.
  • Reviewing the survey design and noting any special features or potential problems. The goal of this activity is to familiarize the analyst with the main features of the survey design used to generate the site characterisation data. Review the survey design documentation with the data user’s objectives in mind. Look for design features that support or contradict these objectives.
  • Review the sampling design and data collection documentation. This documentation should be reviewed for consistency with the data quality objectives. For example, the review should check that the appropriate number of samples was taken in the correct locations and that they were analyzed with measurement systems with appropriate sensitivity.

Determining that the sampling design provides adequate power is important to decision making, particularly in cases where the levels of residual radioactivity are near the derived concentration guideline level (DCGLW). This can be done both prospectively, during survey design, to test the efficacy of a proposed design, and retrospectively, during interpretation of survey results, to determine whether the objectives of the design were met. The procedure for generating power curves for specific tests is discussed in Appendix E. Note that the accuracy of a prospective power curve depends on estimates of the data variability, σ, and the number of measurements. After the data are analyzed, a sample estimate of the data variability, namely the sample standard deviation (s), and the actual number of valid measurements will be known. The consequence of inadequate power is that a survey unit that actually meets the release criterion has a higher probability of being incorrectly deemed not to meet the release criterion.

Conduct a preliminary data review

In this step of the DQA process, the analyst conducts a preliminary evaluation of the data set, calculating some basic statistical quantities and looking at the data through graphical representations. By reviewing the data both numerically and graphically, the analyst can learn the ‘structure’ of the data and thereby identify appropriate approaches and limitations for their use.
This step includes the activities:

  • Reviewing quality assurance reports.
  • Calculating basic statistical quantities (e.g., mean, standard deviation and median, relative standing, central tendency, dispersion, shape, and association).
  • Graphical data review (e.g., histograms, scatter plots, confidence intervals, ranked data plots, quantile plots, stem-and-leaf diagrams, spatial or temporal plots).

Calculation of the basic statistical quantities: mean, standard deviation and median

Example 3.21: Calculation of the basic statistical quantities (mean, standard deviation and median)

Suppose the following 20 concentration values are from a survey unit:
* 90.7; 83.5; 86.4; 88.5; 84.4; 74.2; 84.1; 87.6; 78.2; 77.6;
* 86.4; 76.3; 86.5; 77.4; 90.3; 90.1; 79.1; 92.4; 75.5; 80.5.

First, the average of the data (83.5) and the sample standard deviation (5.7) should be calculated.

The average of the data can be compared to the reference area average and the derived concentration guideline level (DCGLW) to get a preliminary indication of the survey unit status. Where remediation is inadequate, this comparison may readily reveal that a survey unit contains excess residual radioactivity, even before applying statistical tests. For example, if the average of the data exceeds the DCGLW and the radionuclide of interest does not appear in background, then the survey unit clearly does not meet the release criterion. On the other hand, if every measurement in the survey unit is below the DCGLW, the survey unit clearly meets the release criterion.

The value of the sample standard deviation is especially important. If it is too large compared to the value assumed during survey design, this may indicate that an insufficient number of samples was collected to achieve the desired power of the statistical test. Again, inadequate power can lead to unnecessary remediation.

The median is the middle value of the data set when the number of data points is odd, and is the average of the two middle values when the number of data points is even. Thus 50% of the data points are above the median, and 50% are below the median. Large differences between the mean and the median would be an early indication of skewness in the data. This would also be evident in a histogram of the data. For the example data above, the median is 84.25 (i.e., (84.1 + 84.4)/2). The difference between the median and the mean (i.e., 84.25 – 83.5 = 0.75) is a small fraction of the sample standard deviation (i.e., 5.7). Thus, in this instance, the mean and median would not be considered significantly different.
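The quantities above can be reproduced with Python's standard statistics module; this is a minimal sketch using the 20 values from Example 3.21:

```python
from statistics import mean, median, stdev

data = [90.7, 83.5, 86.4, 88.5, 84.4, 74.2, 84.1, 87.6, 78.2, 77.6,
        86.4, 76.3, 86.5, 77.4, 90.3, 90.1, 79.1, 92.4, 75.5, 80.5]

avg = mean(data)     # average of the data, about 83.5
s = stdev(data)      # sample standard deviation (n - 1 in the denominator), about 5.7
med = median(data)   # with n = 20 (even), the average of the two middle values: 84.25

print(f"mean = {avg:.1f}, s = {s:.1f}, median = {med:.2f}")
```

Note that stdev computes the sample (not population) standard deviation, which is the appropriate estimator when comparing against the variability assumed during survey design.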

Examining the minimum, maximum, and range of the data may provide additional useful information. The minimum in this example is 74.2 and the maximum is 92.4, so the range is 92.4 – 74.2 = 18.2. This is only 3.2 standard deviations. Thus, the range is not unusually large. When there are 30 or fewer data points, values of the range much larger than about 4 to 5 standard deviations would be unusual. For larger data sets the range might be wider.

Data review by graphics

Example 3.22: Data review by graphics

At a minimum, a graphical data review should consist of a posting plot and a histogram. Quantile plots are also useful diagnostic tools, particularly in the two-sample case, to compare the survey unit and reference area. These are discussed in Appendix E, Section E.3.3.

A posting plot is simply a map of the survey unit with the data values entered at the measurement locations. This potentially reveals heterogeneities in the data – especially possible patches of elevated residual radioactivity. Even in a reference area, a posting plot can reveal spatial trends in background data that might affect the results of the two-sample statistical tests.

Figure 3.14 Examples of posting plots

If the data above were obtained using a triangular grid in a rectangular survey unit, the posting plot might resemble the display in Figure 3.14. Figure 3.14a shows no unusual patterns in the data. Figure 3.14b shows a different plot of the same values, but with individual results associated with different locations within the survey unit. In this plot there is an obvious trend towards smaller values as one moves from left to right across the survey unit. This trend is not apparent in the simple initial listing of the data. The trend may become more apparent if isopleths are added to the posting plot.

If the posting plot reveals systematic spatial trends in the survey unit, the cause of the trends would need to be investigated. In some cases, such trends could be due to residual radioactivity, but they may also be due to inhomogeneities in the survey unit background. Other diagnostic tools for examining spatial data trends may be found in [EPA-1996d]. The use of geostatistical tools to evaluate spatial data trends may also be useful in some cases [EPA-1989a].
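A posting plot is straightforward to sketch in code. The layout below is hypothetical: the 20 values of Example 3.21 are arranged on a 4 x 5 grid so that a left-to-right trend is visible, and column averages serve as a crude numerical check for such a trend:

```python
# Hypothetical arrangement of the 20 example values at grid locations;
# in a real posting plot each value is printed at its measurement
# location on a map of the survey unit.
grid = [
    [90.7, 88.5, 86.4, 80.5, 77.4],
    [92.4, 87.6, 84.4, 79.1, 76.3],
    [90.3, 86.5, 84.1, 78.2, 75.5],
    [90.1, 86.4, 83.5, 77.6, 74.2],
]

# Crude text posting plot.
for row in grid:
    print("  ".join(f"{v:5.1f}" for v in row))

# Column averages from left to right; a steady decrease across the
# survey unit suggests a spatial trend that should be investigated.
ncols = len(grid[0])
col_means = [sum(row[j] for row in grid) / len(grid) for j in range(ncols)]
trend = all(col_means[j] > col_means[j + 1] for j in range(ncols - 1))
```

Here the column means decrease monotonically, which would flag the kind of systematic spatial trend discussed above.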

A frequency plot (or histogram) is a useful tool for examining the general shape of a data distribution. This plot is a bar chart of the number of data points within a certain range of values. A frequency plot of the example data is shown in Figure 3.15. A simple method for generating a rough frequency plot is the stem-and-leaf display discussed in Appendix E, Section E.3.2. The frequency plot will reveal any obvious departures from symmetry, such as skewness or bimodality (two peaks), in the data distributions for the survey unit or reference area. The presence of two peaks in the survey unit frequency plot may indicate the existence of isolated areas of residual radioactivity. In some cases it may be possible to determine an appropriate background for the survey unit using this information. The interpretation of the data for this purpose will generally be highly dependent on site-specific considerations and should only be pursued after consultation with the responsible regulatory agency.

The presence of two peaks in the background reference area or survey unit frequency plot may indicate a mixture of background concentration distributions due to different soil types, construction materials, etc. The greater variability in the data due to the presence of such a mixture will reduce the power of the statistical tests to detect an adequately remediated survey unit. These situations should be avoided whenever possible by carefully matching the background reference areas to the survey units, and choosing survey units with homogeneous backgrounds.

Figure 3.15 Example of a frequency plot
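A rough text frequency plot of the Example 3.21 data can be produced by binning the values; the 5-unit bin width below is an arbitrary choice for illustration:

```python
from collections import Counter

data = [90.7, 83.5, 86.4, 88.5, 84.4, 74.2, 84.1, 87.6, 78.2, 77.6,
        86.4, 76.3, 86.5, 77.4, 90.3, 90.1, 79.1, 92.4, 75.5, 80.5]

bin_width = 5.0
# Count how many values fall in each bin [lower, lower + bin_width).
bins = Counter(int(v // bin_width) * bin_width for v in data)

# Crude text frequency plot (histogram).
for lower in sorted(bins):
    print(f"{lower:5.1f}-{lower + bin_width:5.1f} | {'#' * bins[lower]}")
```

A single, roughly symmetric cluster of bars is consistent with a homogeneous distribution; two distinct peaks would prompt the investigations described above.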

Skewness or other asymmetry can impact the accuracy of the statistical tests. A data transformation (e.g., taking the logarithms of the data) can sometimes be used to make the distribution more symmetric. The statistical tests would then be performed on the transformed data. When the underlying data distribution is highly skewed, it is often because there are a few high areas. Since the elevated measurement comparison (EMC) is used to detect such measurements, the difference between using the median and the mean as a measure of the degree to which uniform residual radioactivity remains in a survey unit tends to diminish in importance.

Review the selected statistical test

The statistical tests presented in Sections 3.10.2 to 3.10.7 are applicable to most sites contaminated with radioactive material; those sections also discuss in more detail the rationale for selecting the statistical methods recommended for the different surveys. Additional guidance on selecting alternative statistical methods can be found in Table 3.26.

An appropriate procedure for summarizing and analyzing the data should be selected based on the preliminary data review.

Verify the assumptions of the statistical test

In this step, the analyst assesses the validity of the statistical test by examining the underlying assumptions in light of the collected site characterisation data. The two key questions to be resolved are:

  • Do the data support the underlying assumptions of the test?
  • Do the data suggest that modifications to the statistical analysis are warranted?

The underlying assumptions for the statistical tests are discussed in Section 3.10.2. Graphical representations of the data, such as those described earlier in this Section and in Appendix E, can provide important qualitative information about the validity of the assumptions. Documentation of this step is always important, especially when professional judgement plays a role in accepting the results of the analysis.
There are three activities included in this step:

  • Determining the approach for verifying assumptions. For this activity, determine how the assumptions of the hypothesis test will be verified, including assumptions about distributional form, independence, dispersion, type, and quantity of data. Methods for verifying the assumptions of the final status survey statistical tests during the preliminary data review are discussed in Sections 3.10.1 to 3.10.7.
  • Performing tests of the assumptions. Perform the calculations selected in the previous activity for the statistical tests. Guidance on performing the tests recommended for the final status survey is included in Section 3.10.
  • Determining corrective actions (if any). Sometimes the assumptions underlying the hypothesis test will not be satisfied and some type of corrective action should be performed before proceeding. In some cases, the data for verifying a key assumption may not be available and existing data may not support the assumption. In this situation, it may be necessary to collect new data, transform the data to correct a problem with the distributional assumptions, or select an alternative hypothesis test. Section discusses potential corrective actions.

Draw conclusions from the data

The final step of the DQA process is performing or verifying the statistical test and drawing conclusions that address the data user’s objectives. The procedure for implementing the statistical tests is described earlier in Section 3.10.
There are three activities associated with this final step:

  • Performing the calculations for the statistical hypothesis test (see Sections 3.10.1 up to 3.10.7).
  • Evaluating the statistical test results and drawing the study conclusions. The result of the statistical test will be either to accept or to reject the null hypothesis.
  • Evaluating the performance of the survey design if the design is to be used again. If the survey design is to be used again, either in a later phase of the current study or in a similar study, the analyst will be interested in evaluating the overall performance of the design. To evaluate the survey design, the analyst performs a statistical power analysis that describes the estimated power of the test over the full range of possible parameter values. This helps the analyst evaluate the adequacy of the sampling design when the true parameter value lies in the vicinity of the action level (which may not have been the outcome of the current study). It is recommended that a statistician be consulted when evaluating the performance of a survey design for future use.
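For the Sign test used elsewhere in this section, a retrospective power analysis can be sketched as follows. This is a simplified illustration, not EURSSEM's prescribed procedure: the critical value is assumed to come from published tables, and measurements are assumed normally distributed so that the power of the test reduces to a binomial calculation:

```python
from math import comb
from statistics import NormalDist

def sign_test_statistic(measurements, dcgl_w):
    """S+ for the Sign test: the number of measurements below DCGL_W."""
    return sum(1 for m in measurements if m < dcgl_w)

def binomial_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def sign_test_power(n, critical_value, true_mean, sigma, dcgl_w):
    """Power = probability of rejecting H0 when the true mean is true_mean,
    assuming measurements ~ Normal(true_mean, sigma)."""
    p_below = NormalDist(true_mean, sigma).cdf(dcgl_w)
    return binomial_sf(critical_value, n, p_below)

# Hypothetical design: n = 20 measurements, critical value 15 (assumed to
# come from tables), sigma = 5.7; power over possible true concentrations.
curve = [(mu, sign_test_power(20, 15, mu, 5.7, 160.0))
         for mu in (140.0, 150.0, 160.0, 170.0)]
```

Plotting such a curve shows how the probability of correctly releasing the survey unit falls off as the true concentration approaches and exceeds the DCGLW, which is exactly the behaviour a retrospective power analysis is meant to expose.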

Once the data and the results of the tests are obtained, the specific steps required to achieve site release depend on the procedures instituted by the governing regulatory agency and site-specific ALARA considerations. The following suggested considerations are for the interpretation of the test results with respect to the release limit established for the site or survey unit. Note that the tests need not be performed in any particular order.

Elevated Measurement Comparison

The elevated measurement comparison (EMC) consists of comparing each measurement from the survey unit with the investigation levels discussed in Section 3.10.2. The elevated measurement comparison is performed both for measurements obtained on the systematic sampling grid and for locations flagged by scanning measurements. Any measurement from the survey unit that is equal to or greater than an investigation level indicates an area of relatively high concentrations that should be investigated, regardless of the outcome of the nonparametric statistical tests.

The statistical tests may not reject H0 when only a very few high measurements are obtained in the survey unit. The use of the elevated measurement comparison against the investigation levels may be viewed as assurance that unusually large measurements will receive proper attention regardless of the outcome of those tests and that any area having the potential for significant dose contributions will be identified. The elevated measurement comparison is intended to flag potential failures in the remediation process. This should not be considered the primary means to identify whether or not a site meets the release criterion.
The derived concentration guideline level for the elevated measurement comparison is:

DCGLEMC = Am × DCGLW    (3-30)

where:
Am = area factor for the area of the systematic grid;
DCGLEMC = an a priori limit, established both by the DCGLW and by the survey design (i.e., grid spacing and scanning MDC).

The true extent of an area of elevated activity can only be determined after performing the survey and taking additional measurements. Upon completion of the further investigation, the a posteriori limit, DCGLEMC = Am × DCGLW, can be established using the value of Am appropriate for the actual area of elevated concentration. The area of elevated activity is generally bordered by concentration measurements below the DCGLW. An individual elevated measurement on a systematic grid could conceivably represent an area four times as large as the systematic grid area used to define the DCGLEMC. This is the area bounded by the nearest neighbours of the elevated measurement location. The results of the investigation should show that the appropriate DCGLEMC is not exceeded. Area factors are discussed in Section 3.5.
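Equation 3-30 and the flagging step of the elevated measurement comparison can be sketched as follows; the area factor, DCGLW, and measurement values are hypothetical:

```python
def dcgl_emc(area_factor, dcgl_w):
    """Equation 3-30: DCGL_EMC = A_m x DCGL_W."""
    return area_factor * dcgl_w

def elevated_measurements(measurements, investigation_level):
    """EMC: flag every measurement at or above the investigation level."""
    return [m for m in measurements if m >= investigation_level]

# Hypothetical values: area factor of 1.6 for the systematic grid area
# and a DCGL_W of 140 (same units as the measurements).
limit = dcgl_emc(1.6, 140.0)
flagged = elevated_measurements([120.0, 250.0, 90.0, 230.0], limit)
```

Each flagged location would then be investigated, and the a posteriori limit recomputed with the area factor for the actual area of elevated concentration.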

If measurements above the stated scanning minimum detectable concentration (MDC) are found by sampling or by direct measurement at locations that were not flagged by the scanning survey, this may indicate that the scanning method did not meet the DQOs.
The preceding discussion primarily concerns Class 1 survey units. Measurements exceeding DCGLW in Class 2 or Class 3 areas may indicate survey unit mis-classification. Scanning coverage for Class 2 and Class 3 survey units is less stringent than for Class 1. If the investigation levels are exceeded, an investigation should:

  • Ensure that the area of elevated activity discovered meets the release criterion;
  • Provide reasonable assurance that other undiscovered areas of elevated activity do not exist.

If further investigation determines that the survey unit was mis-classified with regard to contamination potential, then a resurvey using the method appropriate for the new survey unit classification may be appropriate.

Interpretation of statistical test results

The result of the statistical test is the decision to reject or not to reject the null hypothesis. Provided that the results of any investigations triggered by the elevated measurement comparison have been resolved, a rejection of the null hypothesis leads to the decision that the survey unit meets the release criterion. However, estimating the average residual radioactivity in the survey unit may also be necessary so that dose or risk calculations can be made. This estimate is designated δ. The average concentration is generally the best estimator for δ [EPA-1992b]. However, only the unbiased measurements from the statistically designed survey should be used in the calculation of δ.

If residual radioactivity is found in an isolated area of elevated activity, in addition to residual radioactivity distributed relatively uniformly across the survey unit, the unity rule can be used to ensure that the total dose is within the release criterion:

δ / DCGLW + (average concentration in elevated area – δ) / (area factor for elevated area × DCGLW) < 1    (3-31)

If there is more than one elevated area, a separate term should be included for each. When calculating δ for use in this inequality, measurements falling within the elevated areas may be excluded, provided that the overall average in the survey unit is less than the DCGLW. As an alternative to the unity rule, the dose or risk due to the actual residual radioactivity distribution can be calculated if an appropriate exposure pathway model is available. Note that these considerations generally apply only to Class 1 survey units, since areas of elevated activity should not exist in Class 2 or Class 3 survey units.
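Inequality 3-31 can be applied directly in code. The sketch below extends it to several elevated areas with one term per area; all numbers are hypothetical:

```python
def unity_rule_sum(delta, elevated_areas, dcgl_w):
    """Left-hand side of inequality 3-31, with one term per elevated area.

    delta          -- estimate of the uniform residual radioactivity
    elevated_areas -- list of (average concentration, area factor) pairs
    dcgl_w         -- derived concentration guideline level DCGL_W
    """
    total = delta / dcgl_w
    for avg_conc, area_factor in elevated_areas:
        total += (avg_conc - delta) / (area_factor * dcgl_w)
    return total

# Hypothetical survey unit: delta = 50, DCGL_W = 140, and one elevated
# area averaging 300 with an area factor of 8.
lhs = unity_rule_sum(50.0, [(300.0, 8.0)], 140.0)
meets_release_criterion = lhs < 1
```

A result below 1 indicates that the combined contribution of the uniform residual radioactivity and the elevated areas stays within the release criterion.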

A retrospective power analysis for the test will often be useful, especially when the null hypothesis is not rejected (see Appendix E, Section E.1.3). When the null hypothesis is not rejected, it may be because it is in fact true, or it may be because the test did not have sufficient power to detect that it is not true. The power of the test will be primarily affected by changes in the actual number of measurements obtained and their standard deviation. An effective survey design will slightly overestimate both the number of measurements and the standard deviation to ensure adequate power. This ensures that a survey unit is not subjected to additional remediation simply because the final status survey is not sensitive enough to detect that residual radioactivity is below the guideline level. When the null hypothesis is rejected, the power of the test becomes a somewhat moot question. Nonetheless, even in this case, a retrospective power curve can be a useful diagnostic tool and an aid to designing future surveys.

If the survey unit fails

The guidance provided in EURSSEM is fairly explicit concerning the steps that should be taken to show that a survey unit meets release criteria. Less has been said about the procedures that should be used if at any point the survey unit fails. This is primarily because there are many different ways that a survey unit may fail the final status survey, e.g.:

  • The overall level of residual radioactivity may not pass the non-parametric statistical tests.
  • Further investigation following the elevated measurement comparison may show that there is a large enough area with a concentration too high to meet the release criterion.
  • Investigation levels may have caused locations to be flagged during scanning that indicate unexpected levels of residual radioactivity for the survey unit classification.

Site-specific information is needed to fully evaluate all of the possible reasons for failure, their causes, and their remedies.

When a survey unit fails to demonstrate compliance with the release criterion, the first step is to review and confirm the data that led to the decision and to communicate with the stakeholders, e.g., the regulators. Once this is done, the DQO process (see Section 2.7) can be used to identify and evaluate potential solutions to the problem. The level of residual radioactivity in the survey unit should be determined to help define the problem. Once the problem has been stated, the decision concerning the survey unit should be developed into a decision rule. Next, the additional data, if any, needed to document that the survey unit demonstrates compliance with the release criterion should be determined. Alternatives for resolving the decision statement should be developed for each survey unit that fails the tests. These alternatives are evaluated against the DQOs, and a survey design that meets the objectives of the project is selected.

Example 3.23: A Class 2 survey unit passes the Sign test but several measurements exceed the DCGLW

A Class 2 survey unit passes the non-parametric statistical tests, but has several measurements on the sampling grid that exceed the DCGLW. This is unexpected in a Class 2 area, so these measurements are flagged for further investigation. Additional sampling confirms that there are several areas where the concentration exceeds the DCGLW. This indicates that the survey unit was mis-classified. However, the scanning technique that was used was sufficient to detect residual radioactivity at the DCGLEMC calculated for the sample grid, and no areas exceeding the DCGLEMC were found. Thus, the only difference between the final status survey actually performed and that which would be required for a Class 1 area is that the scanning may not have covered 100% of the survey unit area. In this case, one might simply increase the scan coverage to 100%. The reasons why the survey unit was mis-classified should be noted. If no areas exceeding the DCGLEMC are found, the survey unit essentially demonstrates compliance with the release criterion as a Class 1 survey unit.

If, in the example above, the scanning technique was not sufficiently sensitive, it may be possible to re-classify as Class 1 only that portion of the survey unit containing the higher measurements. This portion would be re-sampled at the higher measurement density required for a Class 1 survey unit, with the rest of the survey unit remaining Class 2.

Example 3.24: A Class 1 survey unit passes the Sign test but some areas were flagged

A Class 1 survey unit that passes the non-parametric statistical tests contains some areas that were flagged for investigation during scanning. Further investigation, sampling and analysis indicate that one area is truly elevated. This area has a concentration that exceeds the DCGLW by a factor greater than the area factor calculated for its actual size. This area is then remediated. Remediation control sampling shows that the residual radioactivity was removed, and no other areas were contaminated with removed material. In this case one may simply document the original final status survey, the fact that remediation was performed, the results of the remedial action support survey, and the additional remediation data. In some cases, additional final status survey data may not be needed to demonstrate compliance with the release criterion.
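The pass/fail logic for an individual elevated area in examples such as this reduces to a comparison against the DCGLEMC. A minimal sketch, with the function name assumed for illustration and the area factor supplied directly (in practice it is derived from the elevated area's actual size):

```python
def exceeds_dcgl_emc(avg_conc, area_factor, dcgl_w):
    """Flag an elevated area that fails the elevated measurement
    comparison: the limit for a small area of elevated activity
    (DCGL_EMC) is DCGL_W scaled by the area factor for that size."""
    dcgl_emc = area_factor * dcgl_w
    return avg_conc > dcgl_emc
```

In the example above, the flagged area exceeded the DCGLW by a factor greater than its area factor, so this check would return True and remediation of that area follows.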

Example 3.25: A Class 1 survey unit fails the Sign test

Consider a Class 1 area which fails the non-parametric statistical tests. Confirmatory data indicate that the average concentration in the survey unit does exceed the DCGLW over a majority of its area. This indicates remediation of the entire survey unit is necessary, followed by another final status survey. Reasons for performing a final status survey in a survey unit with significant levels of residual radioactivity should be noted.

These examples are meant to illustrate the actions that may be necessary to secure the release of a survey unit that has failed to meet the release criterion. The DQO process should be revisited to plan how to attain the original objective, that is, to safely release the survey unit by showing that it meets the release criterion. Whatever data are necessary to meet this objective will be in addition to the final status survey data already in hand.

Removable activity

Some regulatory agencies may require that smear samples be taken at indoor grid locations as an indication of removable surface activity. The percentage of removable activity assumed in the exposure pathway models has a great impact on dose calculations. However, measurements of smears are very difficult to interpret quantitatively. Therefore, the results of smear samples should not be used for determining compliance. Rather, they should be used as a diagnostic tool to determine if further investigation is necessary.


1 It can be verified that if every measurement is below the derived concentration guideline level (DCGLW), the conclusion from the statistical tests will always be that the survey unit does not exceed the release criterion.