What is Margin of Error?
A margin of error (MOE) represents the uncertainty, or error, in an estimate due to sampling variability. It is therefore a measure of the precision of an estimate at a given confidence level. A margin of error is inherent in any estimate based on a sample. It provides a range that likely contains the true population value at a specific confidence level. MOEs for published ACS estimates are typically provided at the 90 percent confidence level. If an ACS population estimate is 5,000 with a margin of error of ± 100, it means that if the same survey were repeated under identical conditions many times using independent samples, the range from 4,900 to 5,100 would include the true, or population, value of the estimate 90 percent of the time. Adding the MOE to, and subtracting it from, the point estimate yields the 90 percent confidence interval for that estimate. The smaller the margin of error, the higher the precision of the estimate.
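As a simple illustration (the helper function below is hypothetical, not part of any ACS tooling), the confidence interval can be computed by adding and subtracting the MOE from the point estimate:

```python
# Minimal sketch: derive the 90 percent confidence interval from an
# ACS-style point estimate and its published margin of error (MOE).
# The numbers reuse the example from the text (5,000 with MOE of 100).

def confidence_interval(estimate: float, moe: float) -> tuple[float, float]:
    """Return (lower, upper) bounds: estimate minus and plus the MOE."""
    return estimate - moe, estimate + moe

lower, upper = confidence_interval(5000, 100)
print(f"90 percent confidence interval: {lower:,.0f} to {upper:,.0f}")
# -> 90 percent confidence interval: 4,900 to 5,100
```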
Related measures of sampling error
Standard error
A standard error (SE) is a statistical measure of the variability of a sample estimate around its true population value due to sampling. It indicates how much an estimate derived from a sample can be expected to deviate from the true value of the parameter for the full population it represents. It is, in effect, the standard deviation of the sampling distribution of the estimate.
The standard error is used to calculate the margin of error using the relationship:
MOE = Z × SE, where Z is the Z-score for the desired confidence level (1.645 for a 90 percent confidence level)
The margin of error and standard error are directly proportional: the higher the standard error of an estimate, the higher its margin of error and the greater the uncertainty in the estimate due to sampling.
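A minimal sketch of this relationship, assuming the 90 percent confidence level used for published ACS estimates (the function names are illustrative only):

```python
# Convert between standard error (SE) and margin of error (MOE)
# using MOE = Z * SE at the 90 percent confidence level.

Z_90 = 1.645  # Z-score for a 90 percent confidence level

def moe_from_se(se: float, z: float = Z_90) -> float:
    """Margin of error implied by a standard error at the given Z."""
    return z * se

def se_from_moe(moe: float, z: float = Z_90) -> float:
    """Standard error implied by a published margin of error."""
    return moe / z

# Example: a published MOE of 100 implies an SE of roughly 60.8
print(round(se_from_moe(100), 1))  # -> 60.8
```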
Coefficient of variation
Another useful measure of the uncertainty of an estimate due to sampling is the coefficient of variation (COV), which represents the relative amount of sampling error in an estimate. It is calculated as the ratio of the standard error of a sample estimate to the estimate itself, expressed as a percentage.
The coefficient of variation is a standardized measure of the reliability of an estimate based on its size and its standard error, which makes it useful for comparing the degree of sampling error in estimates from different data series with different mean values.
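As a sketch, reusing the illustrative estimate of 5,000 with a published MOE of ± 100 (so SE ≈ 100 / 1.645):

```python
def coefficient_of_variation(estimate: float, se: float) -> float:
    """Coefficient of variation: the SE as a percentage of the estimate."""
    return 100 * se / estimate

# Example: estimate of 5,000 with SE derived from a MOE of 100 at the
# 90 percent confidence level.
cv = coefficient_of_variation(5000, 100 / 1.645)
print(f"{cv:.1f}%")  # -> 1.2%
```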
Determining high, medium, and low reliability of estimates based on their measures of sampling error
While the Census Bureau does not provide explicit guidelines for objectively and quantitatively determining the acceptable range of error in ACS estimates, CUSP Public uses reliability thresholds previously applied in some Census case studies. The reliability indicator in CUSP Public, which describes aggregate county-level population estimates for the “Under 6” population from ACS table B17024 as having high, medium, or low reliability, is based on the following thresholds for coefficients of variation:
- High reliability: Coefficients of variation less than 15 percent
- Medium reliability: Coefficients of variation between 15 and 30 percent
- Low reliability: Coefficients of variation higher than 30 percent
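A sketch of how such thresholds could be applied in code; note that the handling of values falling exactly at 15 or 30 percent is an assumption, since the thresholds above do not specify it:

```python
def reliability(cv_percent: float) -> str:
    """Map a coefficient of variation (in percent) to the reliability
    categories described above. Boundary handling is assumed here."""
    if cv_percent < 15:
        return "high"
    if cv_percent <= 30:
        return "medium"
    return "low"

for cv in (8.2, 22.5, 41.0):
    print(cv, "->", reliability(cv))
# 8.2 -> high
# 22.5 -> medium
# 41.0 -> low
```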