National Weather Service Training Center

Quality Control Concepts

Introduction | Quality Control | Data Errors
General Quality Control Methods | Specific Quality Control Methods
Automated Quality Control | Actions Required to Correct Bad Data | References

Return to Main Page of Web Module

Introduction

The purpose of this web document is to describe quality control from a conceptual perspective and to suggest applications of these concepts at a National Weather Service (NWS) office.

Quality Control

What is "Quality Control"?

By definition, "Quality Control" (QC) deals with the degree or grade of excellence of products and services provided by an organization.  For the NWS, products and services include warnings, watches, forecasts, statements, summaries, and any other public issuances or contacts, as well as observational data either collected by NWS employees or received by the NWS as a primary distributor.

Every NWS office needs to have a formal "QC" program and every person on an office staff needs to place strong emphasis on the quality of office products, services, and data.  Formal "QC" programs at NWS offices should meet the following set of goals:

  • Ensure quality in what goes out to the user
  • Ensure high quality data are used in the creation of products
  • Ensure the quality of products, services and data during all types of weather, but with particular emphasis on significant weather events
  • Keep the workload, particularly at the local level, within reasonable limits
  • Provide rapid feedback on data and product quality to local staff and management
  • Provide different types and amounts of information to NWS groups, as needed
  • Be fair and be perceived as fair by the office staff

The extensive data-observing program of the NWS and related agencies, combined with the expansion of automated observing systems, requires a certain level of "QC" on data passing through a Weather Forecast Office (WFO).  It is the responsibility of all members of the operations staff to routinely review these observations, identify potentially erroneous data, investigate these discrepancies, and issue corrections, if necessary.

The purpose of this guide is twofold: to familiarize the WFO staff with the concepts of quality control, and to suggest applications for these concepts to the routine operations of a typical WFO.

Why is "Quality Control" important?

In this age of automation, data entered into a computer-driven communications system such as AWIPS are unlikely to be "touched by human hands" again.  Data become input to NCEP and RFC models, flow into a variety of display systems used by NWS and private forecasters, are published in collectives and summaries issued to the public, and are used by other Federal, state, and local agencies for their operations.  "Bad data" distributed to these various users can lead to minor embarrassments for the NWS or, if ingested by analysis and forecast models, to misleading model output.  For example, data from a stuck hygristor on a rawinsonde launched from Denver entered the LFM model and resulted in a bull's-eye over Colorado in the Model Output Statistics (Schwartz, 1990).

It is critically important that good quality data flow from the observing sites to data users!  The WFO is the primary reviewer in this quality control process.

Good quality data are also important for verification and research purposes.  Assessing forecast accuracy, testing the ability of a technique to forecast a weather or hydrologic event, and validating remote sensing methods such as the WSR-88D precipitation accumulation algorithm all require good quality data.  If, for example, routine data are not accurate, a poor technique may be accepted as good or a good technique may be discarded due to poor test results.

Good quality data are the basis for everything the NWS does.  All forecasts and warnings start with data, and data must be of the best possible quality.  There is a certain level of morality associated with providing quality data.  WFO staffs are obligated to maintain a high standard for data quality and to ensure high quality data are available to NWS, public and research users.

Return to the Top of the Page

Review Questions

Question 1

A WFO quality control program should:

A. Ensure quality in what goes out to the user
B. Ignore workload considerations
C. Provide little feedback on quality to the local staff and management
D. All of the above

Question 2

Bad data rarely affect numerical models run by the National Centers for Environmental Prediction.

A. True
B. False

Data Errors

Random and Systematic Error

Every measurement is approximate by its very nature and includes the actual parameter value plus any errors resulting from or affecting the measurement method.  Therefore, an "error" is the difference between the "observed" or "measured" value of some parameter and the "actual" value of the parameter.

Estimation of error can raise many questions.  For example, which method will give the smaller error in wind speed: a wind speed measured objectively by ASOS, or one determined subjectively by a human observer using a wind recorder?

Following the discussion of Gandin (1988), data errors, whether of meteorological or other origin, fall into one of two categories: random errors and systematic errors.

Random errors, distributed more or less symmetrically about zero, do not depend upon the measured value.  Random errors sometimes result in overestimation and sometimes in underestimation of the actual value.  On the average, these errors cancel each other out.  Examples of random errors include small errors in judgment or interpolation by an observer, e.g., the measurement of snow depth when drifting has occurred, or small vibrations in an apparatus when trying to use it.

Systematic errors, distributed asymmetrically about zero error, tend to bias the measured value either above or below the actual value.  The two main causes of systematic errors are a scale shift of the instrument and the influence of an unaccounted-for, but persistent, factor.  An example of systematic error is a poorly calibrated instrument.

Taylor (1982), paraphrased below, offers an excellent example of the difference between random and systematic error.

Consider an experiment in which we time a revolution of a steadily rotating turntable.  Our reaction time in starting and stopping the stopwatch is one possible source of error.  If our reaction time were the same, these two times would cancel one another.  Our reaction time, however, will vary each time we start or stop the watch.  If we delay more in starting, we underestimate the time of a revolution; if we delay more in stopping, we overestimate the time.  Because either possibility is equally likely, the sign of the effect is random.  If we repeat the measurement several times, we will sometimes overestimate and sometimes underestimate.  Our reaction time will vary from one repetition to another and will show up as a variation in our measurements of turntable rotation.

If, on the other hand, the stopwatch consistently runs slow, all times will be underestimates.  No amount of repetition (with the same watch) will reveal this source of error.  This type of error is systematic because it forces our results to err in the same direction.
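
As a minimal sketch of this distinction, the following Python simulation mimics the stopwatch experiment; the 0.15-second reaction-time spread and the 2 percent watch error are assumed values chosen only for illustration.

    import random

    TRUE_PERIOD = 10.0    # actual turntable rotation period (seconds)
    N_TRIALS = 10000
    random.seed(1)

    # Random error: independent jitter in starting and stopping the watch.
    # The stopping delay adds to the reading; the starting delay subtracts.
    random_only = [TRUE_PERIOD + random.gauss(0.0, 0.15) - random.gauss(0.0, 0.15)
                   for _ in range(N_TRIALS)]

    # Systematic error: the same watch also runs 2 percent slow, so every
    # reading is biased low no matter how many times the trial is repeated.
    with_slow_watch = [t * 0.98 for t in random_only]

    print(f"True period:          {TRUE_PERIOD:.3f} s")
    print(f"Mean, random only:    {sum(random_only) / N_TRIALS:.3f} s")
    print(f"Mean, watch 2% slow:  {sum(with_slow_watch) / N_TRIALS:.3f} s")

The random errors average to nearly zero over many trials, while the slow watch shifts the mean by the same amount in every trial, exactly the behavior described above.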

Wade and Barnes (1988) reported an example of a systematic error.  When the rawinsonde station at Denver, Colorado, moved in the spring of 1983, the rawinsonde calculations used an incorrect baseline pressure.  This resulted in a consistent negative bias of 16 to 30 meters in the geopotential heights at all standard pressure levels above Denver from 14 April 1983 to 02 March 1988.

Another example of a systematic error is the consistent underestimation of rainfall amounts by tipping bucket rain gages.  During heavy rainfall, tipping buckets underreport precipitation.  This systematic error has been "corrected" in ASOS by use of a calibration formula for increasing one-minute rainfall amounts during heavy rainfall events.
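
A sketch of how such a rate-dependent correction might be applied to one-minute amounts follows; the rate threshold and scaling factor below are hypothetical placeholders, not the operational ASOS formula.

    def correct_one_minute_rain(raw_amount_in):
        """Scale up heavy one-minute tipping bucket amounts to offset
        undercatch.  The 0.10 in/min threshold and 1.05 factor are assumed,
        illustrative values, not the actual ASOS calibration."""
        HEAVY_RATE = 0.10  # one-minute amount treated as "heavy" (assumed)
        if raw_amount_in > HEAVY_RATE:
            return raw_amount_in * 1.05
        return raw_amount_in

    # A very heavy minute of rain is nudged upward; light rain is untouched.
    print(correct_one_minute_rain(0.25))   # 0.2625
    print(correct_one_minute_rain(0.03))   # 0.03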

Meteorological and Hydrologic Error

The complexity of the meteorological observation process adds another aspect to error analysis.  Errors associated with meteorological data can be divided into two types: micrometeorological (or representativeness) errors, and rough errors.

Micrometeorological or representativeness errors are the result of small-scale perturbations or weather systems affecting a weather observation.  Observing systems do not adequately sample these phenomena because the temporal or spatial resolution of the observing system is too coarse.  Nevertheless, when such a phenomenon occurs during a routine observation, the results may look strange compared to surrounding observations taken at the same time.  For example, if, on a bright sunny day, a temperature taken over asphalt were included with a group of temperature observations taken over grass, the temperature over asphalt would appear too warm.

Rough (or large) errors have long been a source of problems.  Most rough errors are large in magnitude and easily detected, either by visual examination of a data display or by some kind of automated "gross error check".  Rough errors result from malfunctioning measuring devices or from mistakes made during processing, transmission, or receipt of data.  For example, an observer misreads a thermometer by 10 degrees, or moisture in anemometer circuitry causes 180-degree errors in wind direction readings.

Return to the Top of the Page

Review Questions

Question 3

Random errors, distributed more or less symmetrically about zero, do not depend upon the measured value.

A. True
B. False

Question 4

Examples of meteorological errors include:

A. Small-scale perturbations affecting a weather observation
B. Temperatures taken over asphalt
C. Misreading a thermometer
D. All of the above.

General Quality Control Methods

General QC methods fall into four broad categories: checks of plausibility; checks for contradiction; spatial and temporal continuity checks; and use of diagnostic equations.

Checks for plausibility examine each piece of data independently of other data.  The basic question posed by this check is whether the value or magnitude of the parameter is within the expected range of the parameter.  For example, the staff should investigate a reported 103-inch 24-hour rainfall total.  Did the observer omit a decimal point?  Was the actual 24-hour rainfall total 1.03 inches or 10.3 inches?  The world-record 24-hour rainfall total is just under 72 inches.
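
A minimal plausibility check is easy to automate, as in the Python sketch below; the limits shown are illustrative, not official NWS thresholds.

    # Illustrative plausibility limits for selected parameters (assumed values).
    PLAUSIBLE_RANGE = {
        "temperature_F": (-80.0, 135.0),
        "dew_point_F":   (-80.0, 95.0),
        "rain_24h_in":   (0.0, 72.0),   # world 24-hour record: just under 72 in
    }

    def plausibility_check(parameter, value):
        """Return True if the value lies inside the expected range."""
        low, high = PLAUSIBLE_RANGE[parameter]
        return low <= value <= high

    # The 103-inch report fails and triggers an investigation; the two
    # decimal-point candidates pass.
    print(plausibility_check("rain_24h_in", 103.0))   # False
    print(plausibility_check("rain_24h_in", 10.3))    # True
    print(plausibility_check("rain_24h_in", 1.03))    # True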

Checks for contradiction involve the use of redundant analyses or systems.  If two independent sources of information contradict each other, one of them is likely to be incorrect.  For example, if a station reports a 24-hour rainfall amount of 0.75 inch, but skies were clear for the last two days, something is wrong.  ASOS uses this approach to compare pressure values from its barometers to ensure valid altimeter settings for aviation operations.
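
A contradiction check between two such independent records can be sketched as follows; the record format is hypothetical.

    def contradiction_check(rain_24h_in, hourly_sky_conditions):
        """Flag a nonzero rainfall report that contradicts an independent
        record of clear skies over the same period (hypothetical format)."""
        skies_stayed_clear = all(c == "CLR" for c in hourly_sky_conditions)
        if rain_24h_in > 0.0 and skies_stayed_clear:
            return "FLAG: rain reported under clear skies -- investigate"
        return "OK"

    # 0.75 inch of rain against 48 hours of clear-sky observations:
    print(contradiction_check(0.75, ["CLR"] * 48))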

Spatial and temporal continuity (or consistency) checks are based on the fact that the values or magnitudes of nearby observations are usually similar.  The process compares observations adjacent in either space or time with each other and, if specified differences are noted, initiates further investigation.  For example, if two airports on opposite sides of a town report dew point values 10 degrees apart, an instrument check may be in order.  The evaluator in these situations must first ensure the observed differences are not due to some mesoscale feature, such as a front or dryline, that would justify the observed values.

Diagnostic equations determine whether data obey certain physical principles.  For example, are the observed winds on an upper air chart consistent with the geostrophic wind speed and direction indicated by the contour field?
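
As a sketch of such a diagnostic, the geostrophic wind speed can be computed from the height gradient as Vg = (g/f)(dz/dn) and compared with the observed wind; the numbers below are illustrative.

    import math

    def geostrophic_speed(delta_z_m, delta_n_m, latitude_deg):
        """Geostrophic wind speed Vg = (g / f) * (dz/dn), where dz is the
        height change (m) over a distance dn (m) normal to the contours."""
        g = 9.81           # gravitational acceleration, m/s^2
        omega = 7.292e-5   # Earth's angular velocity, rad/s
        f = 2.0 * omega * math.sin(math.radians(latitude_deg))
        return (g / f) * abs(delta_z_m / delta_n_m)

    # 60 m of height change over 300 km at 40 degrees N gives about 21 m/s;
    # an observed 75 m/s wind in that gradient would deserve a second look.
    print(f"{geostrophic_speed(60.0, 300_000.0, 40.0):.1f} m/s")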

Return to the Top of the Page

Specific Quality Control Methods

General Plausibility Check

As they review incoming data, the WFO staff should ask the following questions:

Is the value of the parameter consistent with the typical range of parameter values? 

For example, if an observation reports a dew point of 92 degrees F, check the source of the information, check instrument calibration, and correct the observation.

Is the value of the parameter in the climatological range for the current month?

For example, an Arctic air mass has penetrated a region and high temperatures for the last two days have been in the teens.  One cooperative station, however, has consistently reported a high temperature of 32 degrees F.  The observer forgot to reset the maximum temperature thermometer for the last two days.  Send missing values for temperature in the corrected observations since reconstructing the missing data is impossible.

The National Climatic Data Center routinely compares the monthly temperature averages on cooperative observer monthly reports with the climatological average for the month.  If there is a significant difference between the observed average and the climatological average, it is frequently because the observer put the wrong month on the form.
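
That comparison can be sketched in a few lines; the 10-degree tolerance below is an assumed, illustrative threshold.

    def monthly_average_check(observed_avg_F, climo_avg_F, tolerance_F=10.0):
        """Compare a station's monthly mean temperature with climatology.
        The default tolerance is an assumed, illustrative value."""
        if abs(observed_avg_F - climo_avg_F) > tolerance_F:
            return "FLAG: check the form -- a wrong month is a common cause"
        return "OK"

    # A January form accidentally labeled July stands out immediately:
    print(monthly_average_check(observed_avg_F=28.5, climo_avg_F=78.1))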

Cross-Check/Redundancy Checks

One of the easiest ways to check the quality of an observation at a particular site is to check two independent systems at the site against one another.  The following are WFO systems used to check weather or hydrologic information and observations.

Rainfall Accumulation:

  • WSR-88D rainfall product
  • Cooperative observer reports
  • ASOS observations
  • Automated rainfall collection systems

Thunderstorm Occurrence:

Mesoscale Features:

  • Satellite imagery
  • ASOS observation (one-minute data)
  • WSR-88D data

Severe Weather:

  • WSR-88D products
  • Spotter networks
  • Cooperative observers

Observation Content:

  • ASOS observation
  • Human backup observer

Streamflow:

  • ALERT or similar systems
  • Cooperative observers
  • Automated gage data

The availability of several systems allows the WFO staff to check an observation from several views and judge its validity.  For example, assume a line of thunderstorms moves across a county in the warning area.  The ASOS at the county airport reports 2.25 inches of rainfall during the passage of this system.  The WSR-88D rainfall product shows 3.55 inches.  A cooperative observer 5 miles from the airport calls in with 2.75 inches of rain.  Which one is believable?  Which one should be used as the basis for deciding to issue a flash flood warning?  There was large hail with this storm, which means the WSR-88D estimate may be high due to hail contamination.  On the other hand, the rainfall with these storms fell over a very short time period, which implies ASOS may be a little low due to tipping bucket errors.  Even though the cooperative observer is 5 miles from the airport, the storm movement and intensity indicate these two locations should have similar rainfall totals.  Thus, professional judgment in this situation suggests the 2.75 inches is a good rainfall estimate.
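
The reasoning in this example can be sketched in code; the rules below are a simplification for illustration, not an operational algorithm.

    import statistics

    def reconcile_rainfall(radar_in, asos_in, coop_in,
                           hail_reported=False, short_duration=False):
        """Pick a believable storm-total rainfall from three sources by
        discarding estimates with a known reason to be biased, then taking
        the median of whatever remains (a sketch, not operational logic)."""
        candidates = {"WSR-88D": radar_in, "ASOS": asos_in, "coop": coop_in}
        if hail_reported:
            candidates.pop("WSR-88D")   # hail contamination -> radar too high
        if short_duration:
            candidates.pop("ASOS")      # tipping bucket undercatch -> ASOS low
        return statistics.median(candidates.values())

    # WSR-88D 3.55 in, ASOS 2.25 in, cooperative observer 2.75 in,
    # with large hail and a fast-moving storm:
    print(reconcile_rainfall(3.55, 2.25, 2.75,
                             hail_reported=True, short_duration=True))  # 2.75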

Horizontal Spatial Consistency Checks

The horizontal spatial consistency or "buddy" check compares adjacent or nearby observations for consistency of parameters.  One way to conduct this check is to display available data on a computer screen or plot these data on a map.  In some cases, a visual scan of these data may suffice to identify "outliers" while in other cases a hand analysis will be necessary to find more subtle data problems.
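
A minimal, unweighted version of the buddy check might look like the sketch below; the station identifiers and tolerance are hypothetical, and operational versions typically weight buddies by distance.

    def buddy_check(observations, tolerance):
        """Flag any station whose value differs from the mean of all other
        stations ("buddies") by more than the tolerance."""
        flagged = []
        for station, value in observations.items():
            buddies = [v for s, v in observations.items() if s != station]
            buddy_mean = sum(buddies) / len(buddies)
            if abs(value - buddy_mean) > tolerance:
                flagged.append((station, value, round(buddy_mean, 1)))
        return flagged

    dew_points_F = {"KAAA": 61, "KBBB": 63, "KCCC": 62, "KDDD": 74}
    print(buddy_check(dew_points_F, tolerance=8.0))
    # [('KDDD', 74, 62.0)] -- investigate before crediting a mesoscale feature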

When an observation looks out of place, investigate from several perspectives to determine what is occurring.  It may be "bad data" or a system malfunction, but it may also be a mesoscale system creating the "out of place" look.  For example, the outflow from a nearby thunderstorm may cause a wind direction considerably different from the surrounding gradient flow.

The most difficult horizontal spatial checks are those requiring long-term evaluation.  A prime example is the cooperative network.  Rainfall reported at a particular site may fit quite well with surrounding stations during the late fall, winter, and early spring, yet deviate from the surrounding sites during the late spring, summer, and early fall.  This pattern tends to point to a rain gage sheltered by a tree, with the leaves decreasing the rainfall catch of the gage.  Horizontal spatial consistency checks are excellent tools for finding equipment or site problems.

Vertical Consistency Checks

It is important to examine the vertical plot of temperature, dew point, and winds from a rawinsonde run prior to transmission of the coded messages.  The Micro-ARTS system allows the technician to QC data on the computer screen and correct any obvious problems.  Forecasters should analyze sounding information for vertical consistency after data transmission.

Profiler data have vertical consistency checks built into the software to process and prepare the information.  Nevertheless, "odd" wind data occasionally appear in profiler plots.  If using profiler data to modify VAD winds on the WSR-88D, ensure the winds are vertically consistent prior to entering them into the WSR-88D via the Unit Control Position (UCP).
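
Two basic vertical consistency tests, heights increasing with decreasing pressure and a lapse-rate limit, are sketched below; the thresholds and sounding values are illustrative.

    DRY_ADIABATIC_MARGIN = 10.5   # C/km; a bit above 9.8 C/km (assumed cutoff)

    def vertical_consistency_check(levels):
        """Scan a sounding, ordered bottom-up, for suspect layers."""
        problems = []
        for lower, upper in zip(levels, levels[1:]):
            dz_km = (upper["hgt_m"] - lower["hgt_m"]) / 1000.0
            if dz_km <= 0:
                problems.append(f"height not increasing above {lower['pres_mb']} mb")
                continue
            lapse = (lower["temp_c"] - upper["temp_c"]) / dz_km
            if lapse > DRY_ADIABATIC_MARGIN:
                problems.append(f"superadiabatic layer above {lower['pres_mb']} mb")
        return problems

    sounding = [
        {"pres_mb": 850, "hgt_m": 1457, "temp_c": 12.0},
        {"pres_mb": 700, "hgt_m": 3012, "temp_c": -8.0},   # ~12.9 C/km: suspect
        {"pres_mb": 500, "hgt_m": 5571, "temp_c": -21.0},
    ]
    print(vertical_consistency_check(sounding))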

Temporal Continuity Checks

Temporal continuity checks involve plotting time series of specific parameters in order to identify discrepancies in data.  This method of identifying data errors tends to be the most difficult in the WFO environment because data quality evaluations tend to focus on hourly or daily data changes.

For example, the National Climatic Data Center applies temporal continuity checks to monthly cooperative reports to check for consistency among stations.

[Figure: Daily maximum temperature time series for three stations]

The figure above shows a typical temporal consistency check diagram.  The time series of daily maximum temperatures for three stations from the second of the month through the 25th show well-correlated patterns.  However, the trace for the period from the sixth through the 10th at Station 1 (in red) is flat compared to the other two stations.  Does this indicate the observer at Station 1 forgot to reset the maximum temperature thermometer during the period or, perhaps, was on vacation?
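
A flat trace like the one at Station 1 is straightforward to detect automatically.  The sketch below flags runs of identical consecutive daily maxima; the three-day run length and the temperature values are assumed for illustration.

    def flat_trace_check(daily_max_F, min_run=3):
        """Return (start index, end index, value) for each run of `min_run`
        or more identical consecutive daily maximum temperatures."""
        flagged = []
        run_start, run_len = 0, 1
        for i in range(1, len(daily_max_F)):
            if daily_max_F[i] == daily_max_F[i - 1]:
                run_len += 1
            else:
                if run_len >= min_run:
                    flagged.append((run_start, i - 1, daily_max_F[run_start]))
                run_start, run_len = i, 1
        if run_len >= min_run:
            flagged.append((run_start, len(daily_max_F) - 1, daily_max_F[run_start]))
        return flagged

    station1 = [41, 44, 47, 47, 47, 47, 47, 38, 35, 40]   # hypothetical maxima
    print(flat_trace_check(station1))
    # [(2, 6, 47)] -- five identical maxima: was the thermometer reset?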

Coding Errors

Most data transmitted over weather communication lines are in coded form in order to conserve transmission time and computer storage space.  The WFO staff must be familiar with these codes and have the ability to identify miscoded data.  Several of the methods described above may assist in this identification process.

Error Logs

When an error is found, it is useful for the WFO staff to enter a note about it in a Quality Control Error Log.  NWS staff should examine the log on a regular basis for repeated or consistent errors by the same observing system or observer.  If an error is intermittent or only occasional, this approach provides a better chance of detecting its source.

For example, review the sample Quality Control Error Log that accompanies this module, noting in particular the log entries marked in red and green.  Several possible sources of error present themselves.

First, there are three entries about Cottonwood Falls.  On 06 February, the water equivalent of snow measurement was too high; on 10 February, the water equivalent was too low; and on 20 February, the amount of snow on the ground seemed high.  Does the observer at Cottonwood Falls need snow measuring and water equivalent training?  The HMT responsible for this station needs to investigate.

Also, note the dew point at CNU (Chanute) is mentioned twice.  This notation is consistent with a known error in the ASOS hygrothermometer: the reported dew point often sticks near 32 degrees F as the actual dew point rises from below 32 degrees F to above that value.

The notation for 08 February raises a question about the QC program of this station: Why did it take three days to discover the max temperature error at Admire?
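
The log itself can be as simple as a spreadsheet or a comma-separated file.  As a sketch using hypothetical field names, the snippet below scans such a log for stations that appear repeatedly, the way Cottonwood Falls does above.

    import csv
    from collections import Counter

    # Hypothetical log layout; adapt the field names to local practice.
    FIELDS = ["date", "station", "parameter", "problem", "action_taken"]

    def repeat_offenders(log_path, min_entries=2):
        """Count log entries per station and return those appearing at least
        `min_entries` times -- candidates for training or a site visit."""
        with open(log_path, newline="") as f:
            counts = Counter(row["station"] for row in csv.DictReader(f))
        return {s: n for s, n in counts.items() if n >= min_entries}

    # A station logged three times in one month would surface here:
    # print(repeat_offenders("qc_error_log.csv"))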

Return to the Top of the Page

Review Questions

Question 5

Match the specific quality control method listed below to its use in a typical situation. 

General Plausibility Check

Cross-Check/Redundancy Check

Horizontal Spatial Consistency Check

Vertical Consistency Check

Temporal Consistency Check

Coding Error

A. Meteograph of observational values
B. Comparison of a WSR-88D rainfall estimate to cooperative observations
C. 120 degrees F temperatures in Lansing, Michigan
D. Review of the temperature and dew point profile using the AWIPS skew-T.
E. 20BKN OVC65.
F. Comparing winds at LaGuardia Airport, JFK International Airport, and Newark (NJ) Airport.


Question 6

Error logs allow the user to:

A. Pass error information from shift to shift.
B. Easily spot inconsistencies in data.
C. Correct errors with minimal effort.
D. Show higher management there is a QC program.

Automated Quality Control

The data assimilation systems used by major weather centers such as NCEP employ a wide variety of automated quality control procedures for data entering numerical models.  These procedures use a variety of sophisticated statistical methods to detect questionable data.  For example, NCEP uses a method called Complex Quality Control (CQC).  NCEP personnel check any data flagged by automated methods.
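
As a deliberately simplified stand-in for this kind of statistical screening (CQC itself is far more sophisticated), the sketch below flags values that depart from the sample mean by several standard deviations.

    import statistics

    def zscore_flags(values, threshold=3.0):
        """Flag values more than `threshold` standard deviations from the
        sample mean.  A toy screen, not NCEP's Complex Quality Control."""
        mean = statistics.fmean(values)
        sd = statistics.stdev(values)
        return [(i, v) for i, v in enumerate(values)
                if abs(v - mean) > threshold * sd]

    heights_500mb = [5640, 5652, 5648, 5655, 5810, 5643]   # one gross outlier
    print(zscore_flags(heights_500mb, threshold=2.0))      # [(4, 5810)]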

Despite the use of computer-based methods, a need for the QC methods described here remains.  Even if all data flowed through NCEP's data assimilation system prior to reaching the public, there would still be a need for local QC.  Much of the data reaching NWS users never undergoes the NCEP QC system.

Actions Required to Correct Bad Data

As stated previously, one primary duty of the WFO staff is to routinely review observations, identify potentially erroneous data, investigate these discrepancies, and issue corrections, if necessary.  For example, while ASOS might take an observation, it is the responsibility of the WFO staff to ensure observation quality, i.e., observation accuracy and validity.

Most of this guide has discussed ideas and methods to accomplish the first three tasks noted in the previous paragraph.  Just as important as determining whether data quality is good is the need to enter corrections or missing-data entries into the communications system.  The correction allows the data processing systems to place the correct data into a model run, a table, or other product.  Missing-data reports allow elimination of poor data.  The WFO staff carries out this process routinely and ensures the quality of data entering the NWS distribution system.

Return to the Top of the Page

Review Question

Question 7

Think of ways to adapt these general quality control concepts to local operations.

References

Baker, Nancy L., 1992: Quality control for the Navy operational atmospheric database.  Weather and Forecasting, 7, 250-261.

Daley, Roger, 1991: Atmospheric Data Analysis.  Cambridge University Press, New York, 457 pp.

Gandin, Lev S., 1988: Complex quality control of meteorological observations.  Monthly Weather Review, 116, 1137-1156.

McNulty, Richard P., 1989: On verification and quality control.  4 pp. (unpublished, copy available from the author).

Schwartz, Barry E., 1990: Regarding the automation of rawinsonde observations.  Weather and Forecasting, 5, 167-171.

Taylor, John R., 1982: An Introduction to Error Analysis.  University Science Books, Mill Valley, 270 pp.

Wade, Charles G., and Stanley L. Barnes, 1988: Geopotential height errors in NWS rawinsonde data at Denver.  Bulletin of the American Meteorological Society, 69, 1455-1459.

Return to the Top of the Page

Updated 08/06/2007