
3.8.5 Instrument calibration

Calibration refers to the determination and adjustment of the instrument response in a particular radiation field of known intensity. Proper calibration procedures are an essential prerequisite for providing confidence in measurements made to demonstrate compliance with clean-up criteria. Certain factors, such as energy dependence and environmental conditions, require consideration in the calibration process, depending on the conditions under which the instrument will be used in the field. Routine calibration of radiation detection instruments refers to calibration for normal use under typical field conditions.

Considerations for the use and calibration of instruments include:

  • Use of the instrument for radiation of the type for which the instrument is designed;
  • Use of the instrument for radiation energies within the range of energies for which the instrument is designed;
  • Use under environmental conditions for which the instrument is designed;
  • Use under influencing factors, such as magnetic and electrostatic fields, for which the instrument is designed;
  • Use of the instrument in an orientation such that geotropic effects are not a concern;
  • Use of the instrument in a manner that will not subject the instrument to mechanical or thermal stress beyond that for which it is designed.

Routine calibration commonly involves the use of one or more sources of a specific radiation type and energy, and of sufficient activity to provide adequate field intensities for calibration on all ranges of concern.

Actual field conditions under which the radiation detection instrument will be used may differ significantly from those present during routine calibration. Factors which may affect calibration validity include:

  • The energies of radioactive sources used for routine calibration may differ significantly from those of radionuclides in the field;
  • The source-detector geometry (e.g., point source or large area distributed source) used for routine calibration may differ from that found in the field;
  • The source-to-detector distance typically used for routine calibration may not always be achievable in the field;
  • The condition and composition of the surface being monitored (e.g., sealed concrete, scabbled concrete, carbon steel, stainless steel, and wood) and the presence of overlaying material (e.g., water, dust, oil, paint) may result in a decreased instrument response relative to that observed during routine calibration.

If the actual field conditions differ significantly from the calibration assumptions, a special calibration for specific field conditions may be required. Such an extensive calibration need only be done once to determine the effects of the range of field conditions that may be encountered at the site. If responses under routine calibration conditions and proposed use conditions are significantly different, a correction factor or chart should be supplied with the instrument for use under the proposed conditions.
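
As an illustration of how such a correction factor might be applied, the following sketch derives the factor as the ratio of the instrument response under routine calibration conditions to the response under simulated field conditions, and applies it to a field reading. All numerical values are assumed for illustration only and do not represent measured data.

  # Hypothetical illustration: deriving and applying a field-condition
  # correction factor. All numerical values are assumed, not measured.

  # Net instrument response (counts per minute) to the same reference source
  # under routine calibration conditions and under simulated field conditions
  # (e.g., different substrate, geometry or overlaying dust).
  response_calibration_cpm = 2400.0   # assumed value
  response_field_cpm = 1800.0         # assumed value

  # Correction factor: how much the field response is suppressed relative
  # to the routine calibration response.
  correction_factor = response_calibration_cpm / response_field_cpm

  # A reading taken in the field is corrected before the routine
  # calibration factor is applied.
  field_reading_cpm = 950.0           # assumed net field reading
  corrected_reading_cpm = field_reading_cpm * correction_factor

  print(f"Correction factor: {correction_factor:.2f}")
  print(f"Corrected field reading: {corrected_reading_cpm:.0f} cpm")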

At a minimum, each measurement system (detector/readout combination) should be calibrated annually and its response checked with a source following calibration; national directives and regulations should be checked at this point. Instruments may require more frequent calibration if recommended by the manufacturer. Re-calibration of field instruments may also be required if an instrument fails a performance check or if it has undergone repair or any modification that could affect its response.

The user may decide to perform calibrations following industry-recognized procedures [ANSI-1996a], [NCRP-1978], [NCRP-1985], [NCRP-1991], [ISO-1988], [HPS-1994], [HPS-1994a], or may choose to obtain calibration from an outside service, such as a major instrument manufacturer or a health physics services organization.

Calibration sources should be traceable to national standards. Where national standards are not available, standards obtained from an industry recognized organization (e.g., traceable standards from neighbouring countries) may be used.

Calibration of instruments for measurement of surface contamination should be performed such that a direct instrument response can be accurately converted to the 4π (total) emission rate from the source. An accurate determination of activity from a measurement of count rate above a surface is in most cases an extremely complex task because of the need to determine appropriate characteristics of the source, including decay scheme, geometry, energy, scatter, and self-absorption. For the purpose of releasing contaminated areas from radiological control, measurements must provide sufficient accuracy to ensure that clean-up standards have been achieved, and the variables that affect instrument response should be understood well enough that measurement inaccuracies can be controlled and the consequences of decision errors minimized. Therefore, the calibration should account for the following factors (where necessary):

  • Calibrations for point and large area source geometries may differ, and both may be necessary if areas of activity smaller than the probe area and regions of activity larger than the probe area are present.
  • Calibration should either be performed with the radionuclide of concern, or with appropriate correction factors developed for the radionuclide(s) present based on calibrations with nuclides emitting radiations similar to the radionuclide of concern.
  • For portable instrumentation, calibrations should account for the substrate of concern (e.g., concrete, steel), or appropriate correction factors should be developed for the substrates relative to the actual calibration standard substrate. This is especially important for beta emitters because backscatter is significant and varies with the composition of the substrate. Conversion factors developed during the calibration process should be for the same counting geometry to be used during the actual use of the detector.

For clean-up standards for building surfaces, the contamination level is typically expressed in terms of the particle emission rate per unit time per unit area, normally Bq/m² or disintegrations per minute (dpm) per 100 cm². In many facilities, surface contamination is assessed by converting the instrument response (in counts per minute) to surface activity using one overall total efficiency. The total efficiency may be considered to represent the product of two factors: the instrument (detector) efficiency and the source efficiency. Use of the total efficiency is not a problem provided that the calibration source exhibits characteristics similar to the surface contamination (i.e., radiation energy, backscatter effects, source geometry, self-absorption). In practice, this is rarely the case; more likely, instrument efficiencies are determined with a clean, stainless steel source, and those efficiencies are then used to determine the level of contamination on a dust-covered concrete surface. By separating the efficiency into two components, the surveyor has a greater ability to consider the actual characteristics of the surface contamination.

The instrument efficiency is defined as the ratio of the net count rate of the instrument to the surface emission rate of a source for a specified geometry. The surface emission rate is defined as the number of particles of a given type above a given energy emerging from the front face of the source per unit time. The surface emission rate is the 2π particle fluence, and it embodies both the absorption and scattering processes that affect the radiation emitted from the source.

The instrument efficiency is determined during calibration by obtaining a static count with the detector over a calibration source that has a traceable activity or surface emission rate. In many cases, a source emission rate is measured by the manufacturer and has a traceable certification. The source activity is then calculated from the surface emission rate based on assumed backscatter and self-absorption properties of the source. The maximum value of instrument efficiency is 1.
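
A minimal sketch of this determination is given below: the instrument efficiency is formed as the ratio of the net count rate to the certified 2π surface emission rate of the calibration source. The count rates and the emission rate are assumed values, used for illustration only.

  # Sketch of an instrument (detector) efficiency determination from a static
  # count over a certified calibration source. All values are assumed.

  gross_count_rate_cpm = 5200.0        # static count over the source (assumed)
  background_count_rate_cpm = 60.0     # detector background (assumed)
  surface_emission_rate_per_s = 180.0  # certified 2π surface emission rate,
                                       # particles per second (assumed)

  net_count_rate_cps = (gross_count_rate_cpm - background_count_rate_cpm) / 60.0

  # Instrument efficiency: net count rate divided by surface emission rate
  # (dimensionless, maximum value 1).
  instrument_efficiency = net_count_rate_cps / surface_emission_rate_per_s

  print(f"Instrument efficiency: {instrument_efficiency:.2f}")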

The source efficiency is defined as the ratio of the number of particles of a given type emerging from the front face of a source to the number of particles of the same type created or released within the source per unit time. The source efficiency takes into account the increased particle emission due to backscatter effects, as well as the decreased particle emission due to self-absorption losses. For an ideal source (i.e., no backscatter or self-absorption), the value of the source efficiency is 0.5. Many real sources exhibit values less than 0.5, although values greater than 0.5 are possible, depending on the relative importance of the absorption and backscatter processes.

Source efficiencies may be determined experimentally. Alternatively, ISO-7503-1 [ISO-1988] makes recommendations for default source efficiencies: 0.5 for beta emitters with maximum energies above 0.4 MeV, and at least 0.25 for alpha emitters and for beta emitters with maximum beta energies between 0.15 and 0.4 MeV. Source efficiencies for some common surface materials and overlaying materials are provided in [USNRC-1997].
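
To illustrate how the two efficiencies are combined, the sketch below converts a net field count rate into surface activity using an instrument efficiency from calibration, a default source efficiency of the kind recommended in ISO-7503-1, and the active probe area. All numerical values are assumptions for illustration only.

  # Sketch: converting a net count rate into surface activity using separate
  # instrument and source efficiencies. All numerical values are assumed.

  net_count_rate_cpm = 450.0        # net field count rate (assumed)
  instrument_efficiency = 0.48      # from calibration (assumed)
  source_efficiency = 0.5           # default for beta emitters with maximum
                                    # energy above 0.4 MeV (ISO-7503-1)
  probe_area_cm2 = 100.0            # active probe area (assumed)

  total_efficiency = instrument_efficiency * source_efficiency

  # Surface activity in dpm per 100 cm2 and in Bq/m2.
  activity_dpm_per_100cm2 = net_count_rate_cpm / (total_efficiency * probe_area_cm2 / 100.0)
  activity_bq_per_m2 = (activity_dpm_per_100cm2 / 60.0) * (10000.0 / 100.0)

  print(f"Surface activity: {activity_dpm_per_100cm2:.0f} dpm per 100 cm2")
  print(f"Surface activity: {activity_bq_per_m2:.0f} Bq/m2")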

Instrument efficiency may be affected by detector-related factors such as detector size (probe surface area), window density thickness, geotropism, instrument response time, counting time (in static mode), scan rate (in scan mode), and ambient conditions such as temperature, pressure, and humidity. Instrument efficiency also depends on solid angle effects, which include source-to-detector distance and source geometry.

Source efficiency may be affected by source-related factors such as the type of radiation emitted and its energy, source uniformity, surface roughness and coverings, and surface composition (e.g., wood, metal, concrete).

The calibration of gamma detectors for the measurement of photon radiation fields should also provide reasonable assurance of acceptable accuracy in field measurements. Use of these instruments for demonstration of compliance with clean-up standards is complicated by the fact that most clean-up levels produce exposure rates of at most a few nSv/h. Several of the portable survey instruments currently commercially available for exposure rate measurements of ~10 nSv/h have full-scale readings of ~30 to 50 nSv/h on the first range. This is below the ambient background for most low-radiation areas and most calibration laboratories. (A typical background dose equivalent rate of 1 mSv/y corresponds to a background exposure rate of about 100 nSv/h.) Even on the second range, the ambient background in the calibration laboratory is normally a significant part of the range and must be taken into consideration during calibration. The instruments commonly are not energy-compensated and are very sensitive to the scattered radiation that may be produced by the walls and floor of the room or by the additional shielding required to lower the ambient background.
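
The background figure quoted above follows from a simple unit conversion, sketched below for completeness.

  # Sketch of the background-rate arithmetic quoted above: an annual ambient
  # dose of about 1 mSv/y corresponds to roughly 100 nSv/h.

  annual_background_msv = 1.0             # mSv per year
  hours_per_year = 365.25 * 24.0          # about 8766 h

  background_nsv_per_h = annual_background_msv * 1.0e6 / hours_per_year

  print(f"Ambient background: about {background_nsv_per_h:.0f} nSv/h")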

Low intensity sources and large distances between the source and detector can be used for low-level calibrations if the appropriate precautions are taken. Field characterization of low-level sources with traceable transfer standards is difficult because of the poor signal-to-noise ratio in the standard chamber. In order to achieve adequate ionization current, the distance between the standard chamber and the source generally will be as small as possible while still maintaining good geometry (5 to 7 detector diameters). Generally it is not possible to use a standard ionization chamber to characterize the field at the distance necessary to reduce the field to the level required for calibration. A high quality GM detector, calibrated as a transfer standard, may be useful at low levels.
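
As a rough sketch of how distance can be used to obtain a low-level field, the inverse-square estimate below scales a field characterized at a reference distance to the larger distance needed to reach a low calibration level. Scatter and air attenuation are neglected, and all numerical values are assumed.

  import math

  # Sketch: inverse-square estimate of the source-to-detector distance needed
  # to reduce a characterized field to a low calibration level. Scatter and
  # air attenuation are neglected; all numerical values are assumed.

  reference_distance_m = 1.0        # distance at which the field was characterized
  reference_rate_nsv_h = 2000.0     # field at the reference distance (assumed)
  target_rate_nsv_h = 50.0          # desired low-level calibration point (assumed)

  required_distance_m = reference_distance_m * math.sqrt(reference_rate_nsv_h / target_rate_nsv_h)

  print(f"Required distance: about {required_distance_m:.1f} m")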

Corrections for scatter can be made using a shadow-shield technique, in which a shield of sufficient density and thickness to eliminate virtually all the primary radiation is placed about midway between the source and the detector. The dimensions of the shield should be the minimum required to reduce the primary radiation intensity (this primary radiation also includes build-up) at the detector location to less than 2% of its unshielded value. The change in reading when the shield is removed is attributed to the primary field from the source at the detector position.
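
The shadow-shield correction itself reduces to a simple subtraction, sketched below with assumed readings: the reading taken with the shield in place is attributed to scatter and background, and the difference between the unshielded and shielded readings is attributed to the primary field.

  # Sketch of the shadow-shield scatter correction. Both readings are assumed
  # values taken with the detector at the calibration position.

  reading_unshielded_nsv_h = 180.0   # source exposed (assumed)
  reading_shielded_nsv_h = 35.0      # shadow shield in place; primary beam
                                     # blocked, scatter and background remain

  primary_field_nsv_h = reading_unshielded_nsv_h - reading_shielded_nsv_h

  print(f"Primary field at detector: {primary_field_nsv_h:.0f} nSv/h")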

For energy-dependent gamma scintillation instruments, such as NaI(Tl) detectors, calibration for the gamma energy spectrum at a specific site may be accomplished by comparing the instrument response to that of a pressurized ionization chamber, or equivalent detector, at different locations on the site. Multiple radionuclides with various photon energies may also be used to calibrate the system for the specific energy of interest.
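
A sketch of such a site-specific cross-calibration is given below: co-located readings from the energy-dependent scintillation instrument and the pressurized ionization chamber (or equivalent reference) are used to derive a single conversion factor. The paired readings are assumed values for illustration only.

  # Sketch of a site-specific cross-calibration of an energy-dependent
  # NaI(Tl) instrument against a pressurized ionization chamber (PIC).
  # Paired readings at co-located points are assumed values.

  nai_readings_cps = [120.0, 145.0, 98.0, 160.0, 132.0]    # NaI count rates
  pic_readings_nsv_h = [62.0, 75.0, 51.0, 83.0, 68.0]      # reference dose rates

  # Conversion factor (nSv/h per cps) as the mean ratio over the paired points.
  ratios = [pic / nai for nai, pic in zip(nai_readings_cps, pic_readings_nsv_h)]
  conversion_factor = sum(ratios) / len(ratios)

  print(f"Conversion factor: {conversion_factor:.3f} nSv/h per cps")

  # Applying the factor to a new NaI reading at this site.
  new_nai_reading_cps = 140.0
  estimated_dose_rate = new_nai_reading_cps * conversion_factor
  print(f"Estimated dose rate: {estimated_dose_rate:.0f} nSv/h")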

In the interval between calibrations, the instrument should receive a performance check prior to use. In some cases, a performance check following use may also provide valuable information. This performance check is merely intended to establish whether or not the instrument is operating within certain specified, rather large, uncertainty limits. The initial performance check should be conducted following the calibration by placing the source in a fixed, reproducible location and recording the instrument reading. The source should be identified along with the instrument, and the same check source should be used in the same fashion to demonstrate the instrument’s operability on a daily basis when the instrument is in use. For analog readout (count rate) instruments, a variation of ± 20% is usually considered acceptable. Alternatively, instruments that integrate events and display the total on a digital readout typically use an acceptable response range of 2 or 3 standard deviations about the average. This is achieved by performing a series of repetitive measurements (10 or more is suggested) of background and check source response and determining the average and standard deviation of those measurements. From a practical standpoint, a maximum deviation of ± 20% is usually adequate when compared with other uncertainties associated with the use of the equipment. The amount of uncertainty allowed in the response checks should be consistent with the level of uncertainty allowed in the final data. Ultimately, the stakeholders determine what level of uncertainty is acceptable.
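
The statistical acceptance range described above could be established along the lines of the following sketch, which takes a series of repeated check-source measurements (assumed values) and derives a mean ± 3 standard deviation band, alongside the simpler ± 20% criterion.

  import statistics

  # Sketch: establishing a response-check acceptance range from repeated
  # check-source measurements. The ten measurements below are assumed values.

  check_source_counts = [1520, 1485, 1510, 1498, 1532, 1476, 1505, 1519, 1490, 1508]

  mean_response = statistics.mean(check_source_counts)
  std_dev = statistics.stdev(check_source_counts)

  # Acceptance band of +/- 3 standard deviations around the mean.
  lower_3sd, upper_3sd = mean_response - 3 * std_dev, mean_response + 3 * std_dev

  # Simpler +/- 20% criterion often applied to analog (count rate) readouts.
  lower_20, upper_20 = 0.8 * mean_response, 1.2 * mean_response

  print(f"Mean response: {mean_response:.0f} counts")
  print(f"3-sigma acceptance range: {lower_3sd:.0f} to {upper_3sd:.0f} counts")
  print(f"+/-20% acceptance range: {lower_20:.0f} to {upper_20:.0f} counts")

  # Daily check: a reading outside the established range flags the instrument
  # for removal from use until the cause is resolved.
  todays_reading = 1462
  print("Within range:", lower_3sd <= todays_reading <= upper_3sd)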

Instrument response, including both the background and check source response of the instrument, should be tested and recorded at a frequency that ensures the data collected with the equipment is reliable. For most portable radiation survey equipment, EURSSEM recommends that a response check be performed twice daily when in use – typically prior to beginning the day’s measurements and again following the conclusion of measurements on that same day. Additional checks can be performed if warranted by the instrument and the conditions under which it is used. If the instrument response does not fall within the established range, the instrument should be removed from use until the reason for the deviation can be resolved and acceptable response again demonstrated. If the instrument fails the post-survey source check, all data collected during that time period with the instrument must be carefully reviewed and possibly adjusted or discarded, depending on the cause of the failure. Ultimately, the frequency of response checks must be balanced against the stability of the equipment being used under field conditions and the quantity of data being collected. For example, if the instrument experiences a sudden failure during the course of the day’s work due to physical damage, such as a punctured probe, then the data collected up until that point is probably acceptable even though a post-use performance check cannot be performed. Likewise, if no obvious failure occurred but the instrument failed the post-use response check, then the data collected with that instrument since the last response check should be viewed with great skepticism and possibly re-collected or randomly checked with a different instrument. Additional corrective action alternatives are presented in Section 3.10.8. If re-calibration is necessary, acceptable response ranges must be re-established and documented.

Record-keeping requirements vary considerably and depend heavily on the needs of the user; specific requirements are established by the applicable quality assurance and quality control programmes.