
TGM for the Implementation of the Hawai'i State Contingency Plan
Section 3.8


After the environmental data are collected, the data are validated in accordance with the QAPP. This data validation and assessment process, known as Data Quality Assessment (DQA), establishes whether the type, quantity, and quality of sampling data are adequate to support the decision-making process. DQA is performed during Step 8 of systematic planning. Given the quality of the data collected, the DQA process verifies whether the estimated contaminant concentrations at the site meet the level of confidence specified in the SAP. Additional information regarding data validation and data quality assessment is available from USEPA in Guidance for Data Quality Assessment: Practical Methods for Data Analysis (USEPA, 2000d) and Data Quality Assessment: A Reviewer’s Guide (USEPA, 2006).


Data validation is the process used to determine whether the environmental data are accurate; specifically, it confirms that the methods specified in the SAP and QAPP were correctly recorded on the chain-of-custody document(s) and carried out by the laboratory, so that the data are usable for their intended purpose(s). The data validation process begins at the analytical laboratory: the laboratory analyst verifies instrumental data, calculations, transfers, and documentation, and corrects any errors detected. The laboratory provides QA/QC information to support data validity. Laboratories selected to conduct analyses should have well-documented QA/QC procedures. Participation in an established laboratory certification program, such as the NELAC certified laboratory program, helps establish that a laboratory has well-documented QA/QC procedures that are periodically audited by the certifying body. Technical department managers, quality control specialists, or project managers should review the laboratory data reports and supporting documentation.


Data Quality Assessment (DQA) is a five-step process with the goal of determining whether the type, quantity, and quality of sampling data are adequate to support the decision-making process.

Step 1: Review the DQO and Sampling Design

Review the DQO and sampling design to ensure the issues at the site have been adequately addressed. If the data are not sufficient to move forward with selection of a remedy or other next step, additional sampling may be required. For example, if sampling did not delineate the vertical or horizontal extent of contamination, or if groundwater was not encountered due to drilling refusal at a site where groundwater was believed to be impacted, then additional sampling would typically be required.

Step 2: Conduct a Preliminary Data Review

Conduct a preliminary data review. Start with a review of the data validation assessment. Look for data patterns, relationships, or potential anomalies.

Step 3: Select the Statistical Method

Select statistical methods to assess the data. During the DQO development process, limits on decision error tolerance are specified. Uncertainty limits are typically proposed by establishing performance goals for the analytical data in terms of the precision, accuracy, representativeness, completeness, and comparability parameters. In addition, uncertainty limits and performance data are developed in more detail in the QAPP (see Subsection 3.7). Examine uncertainty limits through statistical evaluation, which is an important tool used in the data assessment to determine:

  • Whether the data meet the assumptions under which the DQO and the data collection design were developed
  • Whether the total error in the data is small enough to indicate that the data are of sufficient quality to support decisions within the tolerable error rates expressed in the DQO
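As a rough illustration of the second bullet, the check of measured performance against DQO goals can be framed as a simple pass/fail screen. The goal values, parameter names, and function below are hypothetical assumptions for illustration only; project-specific goals come from the DQO and QAPP, not from this sketch.

```python
# Sketch: screening measured performance against hypothetical DQO goals.
# The limits below (35% RSD, 90% completeness) are illustrative
# assumptions, not values taken from the TGM.

goals = {
    "precision_rsd_max": 0.35,   # maximum acceptable relative standard deviation
    "completeness_min": 0.90,    # minimum fraction of planned results that are valid
}

def assess(measured_rsd: float, n_valid: int, n_planned: int) -> dict:
    """Return a pass/fail result for each hypothetical performance goal."""
    completeness = n_valid / n_planned
    return {
        "precision_ok": measured_rsd <= goals["precision_rsd_max"],
        "completeness_ok": completeness >= goals["completeness_min"],
        "completeness": completeness,
    }

result = assess(measured_rsd=0.22, n_valid=37, n_planned=40)
# 0.22 <= 0.35 and 37/40 = 0.925 >= 0.90, so both checks pass here
```

A project would normally document each goal, the measured value, and the pass/fail outcome in the DQA discussion rather than simply reporting a boolean.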

During field sampling, a triplicate sample is typically collected in one DU for each batch of up to 10 similar DUs to allow for statistical calculation of several important quantities, including the standard deviation of the mean, the relative standard deviation (RSD) of the mean, and/or the 95% UCL of the mean. These quantities are the statistical measures typically selected for evaluating the overall precision of the contaminant sampling. Use of field sampling replicates together with laboratory subsampling/analysis replicates to evaluate MI sample precision allows consideration of total sampling error (a combination of field sampling/field processing error and laboratory subsampling and analysis error), as well as evaluation of the magnitude of the field sampling error relative to the laboratory subsampling and analysis error. The latter is evaluated by subtracting the laboratory subsampling/analysis error (estimated from the laboratory replicate data) from the total error (estimated from the field replicate data) to determine the magnitude of the field sampling error.
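The replicate statistics above can be sketched as follows. The concentration values are hypothetical, and the subtraction of the laboratory error from the total error is carried out on variances (one common convention, since independent error components combine in quadrature); the one-sided Student's t value used for the 95% UCL assumes n = 3 replicates (2 degrees of freedom).

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def sample_sd(xs):
    # Sample standard deviation (n - 1 in the denominator)
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

# Hypothetical triplicate field MI samples from one DU (mg/kg)
field_reps = [120.0, 145.0, 132.0]
# Hypothetical triplicate laboratory subsample analyses of one MI sample (mg/kg)
lab_reps = [128.0, 131.0, 126.0]

# RSD of the field replicates reflects total error
# (field sampling/processing + laboratory subsampling/analysis)
rsd_total = sample_sd(field_reps) / mean(field_reps)
# RSD of the laboratory replicates reflects lab subsampling/analysis error only
rsd_lab = sample_sd(lab_reps) / mean(lab_reps)
# Field sampling error estimated by subtracting variances
# (clamped at zero in case lab error exceeds total error by chance)
rsd_field = math.sqrt(max(rsd_total**2 - rsd_lab**2, 0.0))

# 95% UCL of the mean using the one-sided Student's t value
# for n = 3 (2 degrees of freedom): t = 2.920
t_95_2df = 2.920
n = len(field_reps)
ucl95 = mean(field_reps) + t_95_2df * sample_sd(field_reps) / math.sqrt(n)
```

With so few replicates these estimates are coarse; they indicate whether field or laboratory error dominates rather than providing precise error magnitudes.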

Step 4: Verify the Assumptions of the Statistical Method

Evaluate whether the underlying assumptions of the statistical methods hold, or whether departures are acceptable, given the actual data.

Step 5: Draw Conclusions from the Data

Draw conclusions about the data collected. Discuss the validity of the data that do not meet the performance criteria established in the DQO.

Note: The HEER Office requires that an Environmental Hazard Evaluation be prepared and submitted with a site investigation report. Representative COPC concentrations developed as part of this evaluation may involve further statistical evaluation, including, for example, the assessment of non-detect data. A detailed discussion of Environmental Hazard Evaluations is presented in Section 13.