10.3 DATA QUALITY ASSURANCE PROCEDURES
Implementing QA/QC procedures from start to finish in an investigation helps ensure that data are usable and will meet and support the DQO. Procedures for Data Quality Assurance are presented within this subsection. Specifically, QA/QC parameters for
precision, accuracy, representativeness, completeness, and comparability (commonly
referred to as the "PARCC parameters") must be evaluated. The parameters of precision,
accuracy, and completeness are quantitative measures, while representativeness and
comparability are largely qualitative.
10.3.1 Precision and Accuracy
Precision and accuracy are evaluated quantitatively by collecting the types of QC samples listed in Table 10-1. While these QC samples are primarily intended for evaluation of precision and accuracy, the results are also used as necessary information for evaluating the other quality parameters.
Table 10-1. Recommended QC Sample Frequency

QC Sample Type: Default Frequency (1)
Soil replicates/triplicates: Depends on the number of Decision Units (DUs), COPCs, and site characteristics. See Section 4.2.3 regarding field replicates (triplicates for MIS).
Groundwater field duplicates: 1 per day for every 10 samples.
Equipment rinsate blank: Not required routinely when effective decontamination protocols are documented in the SAP. When required (e.g., investigations for trace levels), 1 per day per type of non-disposable sampling equipment.
Trip blank: 1 per shipping container containing volatile samples.
Field source water blank: 1 per water source per investigation, if used to decontaminate equipment for re-use.
Laboratory duplicates: 1 per every 20 samples.
Laboratory sub-sampling replicates: 1 per every 20 samples for soil analyses of non-volatile contaminants (triplicates recommended).
MS/MSD percent recovery: 1 per every 20 samples.
LCS/LCSD or blank spikes percent recovery: 1 per every 20 samples.
Surrogate standard percent recovery: Every sample for organic analysis by gas chromatography.

LCS/LCSD = Laboratory Control Sample/Laboratory Control Sample Duplicate
MS/MSD = Matrix Spike/Matrix Spike Duplicate
(1) Based on HEER Office guidance and SW-846 guidance (USEPA, 2003a) pertaining to laboratory QC.
The default, or preferred, frequency for these parameters is listed; however, different project-specific frequencies may be proposed to best meet project DQO. If proposing different QC sampling frequencies for a specific investigation, the proposed QC sampling program and the rationale should be presented in detail in the project-specific SAP or QAPP and should receive approval from the HEER Office prior to the field investigation. More detailed descriptions of the individual types of QC samples and the modes of collection and handling are presented in Subsection 10.6.
Precision is the degree of mutual agreement between individual measurements of the same property under similar conditions. For soil samples, combined field and laboratory precision is typically evaluated by collecting and analyzing field triplicates and then calculating the variance between the samples as a Relative Standard Deviation (RSD). Groundwater field duplicates are evaluated by determining an RPD for the replicates, using the RPD formula as noted below for laboratory MS/MSD precision determinations.
Laboratory analytical precision is evaluated by analyzing laboratory duplicates or MS and MSD samples, typically utilizing the following formula:

RPD (%) = |A - B| / [(A + B) / 2] × 100

where:
A = First duplicate concentration
B = Second duplicate concentration

The results of the analysis of each MS/MSD and sample duplicate pair will be used to calculate an RPD for evaluating precision (USEPA, 2003a).
These are default values that laboratories may use until they develop in-house QC limits for each method, in accordance with the guidelines established in SW-846.
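As an informal illustration (not part of the HEER Office guidance), the RPD formula above can be scripted for quick checks during data review; the function name and the duplicate concentrations below are hypothetical.

def relative_percent_difference(a, b):
    """RPD (%) between duplicate results A and B, per the formula above."""
    mean = (a + b) / 2.0
    if mean == 0:
        return 0.0
    return abs(a - b) / mean * 100.0

# Hypothetical MS/MSD pair (mg/kg); prints 18.2
print(round(relative_percent_difference(1.2, 1.0), 1))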
Laboratory sub-sampling poses the greatest potential for error in soil sample analyses
for non-volatile contaminants; therefore, the HEER Office recommends laboratories
perform triplicate sub-sampling analyses from at least one in every 20 of these
soil samples (original sub-sample plus two additional sub-sample replicates collected
independently from the entire mass of soil in the sample). Laboratory sub-sampling precision is typically calculated as a percent RSD (for triplicates or more). The lab sub-sampling precision measure is also helpful for comparing the degree of lab sub-sampling and analysis error to the total error (i.e., the field replicate precision data, which represent total error from field sampling plus lab sub-sampling and analysis).
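A minimal sketch of the percent RSD calculation, using the Python standard library and hypothetical triplicate results; applying the same function to field replicate results allows the lab sub-sampling error to be compared against the total error, as discussed above.

import statistics

def percent_rsd(replicates):
    """Percent relative standard deviation for a set of replicate results."""
    mean = statistics.mean(replicates)
    if mean == 0:
        return 0.0
    return statistics.stdev(replicates) / mean * 100.0

# Hypothetical results (mg/kg): lab sub-sampling triplicate vs. field triplicate
lab_rsd = percent_rsd([0.95, 1.05, 1.00])    # ~5% (lab sub-sampling and analysis error)
field_rsd = percent_rsd([0.80, 1.10, 1.20])  # ~20% (total error)
print(f"Lab RSD: {lab_rsd:.1f}%  Field RSD: {field_rsd:.1f}%")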
Soil sub-sample replicates (as well as sub-samples for any other soil analyses for non-volatiles) are collected by the laboratory from the entire mass of available sample using a sectorial splitter or by hand using a Multi-Increment sampling approach, as described in Section 4.2.2. This laboratory sub-sampling QC guidance applies to soil samples collected by Multi-Increment or discrete sampling approaches.
Sample spiking will be conducted to evaluate laboratory accuracy. This includes
analysis of the MS and MSD samples, laboratory control samples (LCS) and laboratory
control sample duplicates (LCSD), or blank spikes, surrogate standards, and method
blanks. MS and MSD samples will be prepared and analyzed at a frequency of 5 percent.
LCS or blank spikes are also analyzed at a frequency of 5 percent. Surrogate standards,
where available, are added to every sample analyzed for organic constituents. The
results of the spiked samples are used to calculate the percent recovery (%R) for evaluating accuracy (USEPA, 2003a), typically using the following formula:

%R = [(S - C) / T] × 100

where:
S = Measured spike sample concentration
C = Sample concentration
T = True or actual concentration of the spike
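A brief illustrative sketch (with hypothetical function names and concentrations) of the percent recovery calculation defined above:

def percent_recovery(spiked_result, sample_result, spike_added):
    """%R = (S - C) / T * 100, using the variable definitions above."""
    return (spiked_result - sample_result) / spike_added * 100.0

# Hypothetical matrix spike: 5.0 mg/kg spike added to a sample measured at 2.0 mg/kg;
# the spiked aliquot measures 6.5 mg/kg, giving 90.0 percent recovery.
print(percent_recovery(6.5, 2.0, 5.0))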
Results that fall outside the project-specific accuracy goals will be further evaluated on the basis of the results of other QC samples. Table 10-1
summarizes recommended default frequencies for QC sample types. Example default
precision and accuracy goals for laboratory analyses are described in
10.3.2 Representativeness
Representativeness is a qualitative measure that expresses the degree to which field data accurately and precisely represent a characteristic of a population, parameter variations at a sampling point, a process condition, or an environmental condition. For
purposes of environmental investigation, representativeness is how well the media
(e.g., soil) sampled represents impact (i.e., contamination) at the site. In the
initial planning stages of an investigation, representativeness of data collected
is first ensured by proper sampling design. Project planners account for the difficulty
in knowing when, where, and how to collect representative samples by developing
a statistical or random sampling approach; collecting adequate numbers of increments
or samples to determine a representative average COPC concentration in each decision
unit; collecting samples at several different phases of natural or anthropogenic
cycles; sampling at different locations within the project area; collecting Multi-Increment samples as opposed to grab
samples; and verifying and validating the sampling techniques. The general strategies
for ensuring representativeness are described in Section 3.
The specific strategy used by the investigation team for each site is to be documented
in detail in the project-specific QAPP or SAP.
One measurement of representativeness is the degree to which implementation of the
sampling program has ensured that results reflect the site contaminant conditions
and not outside impacts related to analytical preparation, field sampling, field
decontamination, sample handling, sample shipping and other aspects of field investigation.
The degree to which the sampling strategy has achieved representativeness can be
measured as a qualitative parameter based on the proper implementation of the sampling
program and laboratory analytical program (i.e., the QA/QC program set out in the
QAPP). The results of field QC samples (i.e., replicates, trip blanks, field source blanks, or equipment blanks) may indicate that compounds have been introduced into the samples, possibly to an extent that would affect representativeness of the overall data set.
Representativeness may also be measured by how well samples were delivered to the analytical laboratory within the holding times and at the holding temperatures prescribed for individual analyses. Potential impacts to data quality measured by the QA/QC methods include (but are not limited to) the following:
- Insufficiency or lack of cleanliness of sample collection containers, materials, or preservatives provided by the analytical laboratory prior to field work, which could allow outside contaminants to be introduced into the analytical process
- Impurities detected in final decontamination rinse water that may not have originated
from the site
- Contaminants originating from exposure during transport of samples from the field
to the analytical laboratory
- Sample transport where delivery time to the laboratory exceeds holding time or sample
temperature exceeds allowable temperature limits. Occurrence of either may indicate
loss of contaminants during transport prior to extraction and analysis
Representativeness should be assessed for each matrix (media) and for each COPC.
In addition to trip blanks for sites with volatile organics sampling (see
Subsection 10.6.2.1) or equipment rinsate blanks and field source blanks
(as described in Subsections 10.6.2.2 and
10.6.2.3), the following field QC procedures are used in evaluating representativeness:
- Temperature measurement, usually of the samples themselves and sometimes via separate temperature blanks. These blanks are containers of analyte-free water included with field samples, handled and transported in the same manner, and measured for temperature upon delivery to the analytical laboratory. Trip blanks sometimes double as temperature blanks.
- Chain-of-custody forms that document date and time of sampling and sample preservation
for each sample
If analyses of field QC blank samples result in detected contaminants, the field
procedures for decontamination, sample handling, and sample transport should be
evaluated for how well procedures were followed, for any potential introduction
of contaminants from outside sources, or for potential losses in the course of sample
handling or transport.
10.3.3 Completeness
Completeness is a measure of the percentage of data that are valid. Data validation is performed by evaluating field and laboratory QC analyses, combined with field QC logs and chain-of-custody form information, to determine how well field samples were collected and analyzed in accordance with QC procedures outlined in the QAPP.
Field analytical data are acceptable if log and Chain-of-Custody (COC) information show that field QC procedures were properly followed, no significant levels of analytes are detected in QC blank analyses, and none of the QC objectives that affect data usability are exceeded. Data validation is also performed to determine when data should be rejected or declared unusable due to improper field QC, detection of analytes in blanks, or laboratory QC limit exceedances. Completeness will also
be evaluated as part of the data quality assessment process. This evaluation will
help determine whether any limitations are associated with the decisions to be made
based on the data collected.
Completeness is a percentage value, calculated to determine whether an acceptable amount of usable data was obtained so that a valid scientific site assessment may be completed. The QAPP should present completeness goals (e.g., commonly 95%) to evaluate the degree of completeness. Percent completeness is calculated using the following equation:

%C = [(T - R) / T] × 100

where:
%C = percent completeness
T = total number of sample results
R = total number of rejected sample results
Completeness at a minimum should be determined for all field analytical results by method, but should also be determined by comparing the number of usable results to the planned number of samples per method and specific matrix.
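A short sketch (with hypothetical function names and counts) of the percent completeness calculation, both overall and relative to the number of samples planned for a given method and matrix:

def percent_completeness(total_results, rejected_results):
    """%C = (T - R) / T * 100, using the variable definitions above."""
    return (total_results - rejected_results) / total_results * 100.0

def completeness_vs_plan(usable_results, planned_samples):
    """Completeness relative to the planned number of samples for a method/matrix."""
    return usable_results / planned_samples * 100.0

# Hypothetical: 40 soil VOC results reported, 1 rejected; 42 samples were planned.
print(round(percent_completeness(40, 1), 1))       # 97.5
print(round(completeness_vs_plan(40 - 1, 42), 1))  # 92.9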
10.3.4 Comparability
Comparability is a qualitative parameter that expresses the confidence with which one data set can be compared with another. It is important that data sets be comparable if they are used in conjunction with other data sets. This type of comparison most commonly arises in (but is not limited to) the following scenarios:
- Data from the same site but collected during different investigations.
- Data from the same site but collected during widely separated time-frames.
- Data from the same site and investigation, but analyzed by different laboratories.
Comparability of data can be achieved by consistently following standard field and laboratory procedures and by using standard measurement units in reporting analytical data. The factors affecting comparability include sample collection and handling techniques, matrix type, and analytical method. If these aspects of sampling and analysis are carried out according to standard analytical procedures and the procedures are implemented properly, the data may be considered comparable. Comparability is also
dependent upon other quality criteria, because only when precision, accuracy, and
representativeness are known may data sets be compared with confidence. In some
cases, additional care must be taken to evaluate comparability. For instance, groundwater
samples handled in the exact same fashion, collected within the same sampling event,
and analyzed by the same analytical method may not be directly comparable if one
sample was filtered and the other was not.