What the heck is a confidence interval? A guide for social workers

Sometimes social work research and evaluation documents report “confidence intervals.” These can be reported as ranges of numbers or as lines on a bar graph. Often, social workers I know have not been taught how to interpret confidence intervals, so this post is intended as a guide to demystifying this statistical technique.

A confidence interval (CI) can be thought of, *very* loosely, along the lines of a “margin of error.” CIs are used to assess the reliability of an estimate (either a mean/average or a proportion/percentage; there are different mathematical formulas for each) at a chosen confidence level.

There are two parts to a confidence interval: 

Confidence level:  This is how “sure” you want to be that your interval captures the true value (as in a 95% confidence level, which corresponds to a significance level of p<.05). 

Confidence interval:  This is the range above and below your actual measurement that reflects the uncertainty that comes from measuring a sample rather than the whole population.

A 95% confidence interval indicates that if the study were conducted many times, we would expect 95% of the resulting confidence intervals to include the true population mean (or proportion) (Tan and Tan, 2010).   
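That “many repeated studies” idea can be sketched in a short simulation. The population values below (a true mean of 10, a standard deviation of 2, and a sample size of 50) are all made up for illustration: we draw many samples, build a 95% CI from each, and count how often the interval captures the true mean.

```python
import random
import statistics

random.seed(0)  # fixed seed so the simulation is reproducible

TRUE_MEAN = 10.0  # the "true" population mean (invented for this sketch)
n, trials = 50, 1000
covered = 0

for _ in range(trials):
    # Draw one sample of n people from the (hypothetical) population
    sample = [random.gauss(TRUE_MEAN, 2.0) for _ in range(n)]
    mean = statistics.mean(sample)
    # 95% CI via the normal approximation: mean +/- 1.96 standard errors
    se = statistics.stdev(sample) / n ** 0.5
    lo, hi = mean - 1.96 * se, mean + 1.96 * se
    if lo <= TRUE_MEAN <= hi:
        covered += 1

print(covered / trials)  # typically close to 0.95
```

The point of the exercise: no single interval is guaranteed to contain the truth, but across many repetitions roughly 95 in 100 of them do.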

Where you will see CIs reported numerically: 

CIs are typically given alongside the report of means/averages or proportions/percentages as ranges of numbers. 

For example, in political polls: “We are 90% confident that between 35% and 45% of voters favor Raphael Warnock.” 

In this example, 90% is our “confidence level” and 35%-45% is our “confidence interval.” Confidence levels are typically given alongside statistics resulting from sampling.


CIs are sometimes reported in visual format, where numbers translate into lines, or error bars:

Error bars are used to compare groups. Generally, if the lines (“error bars”) from two columns overlap on a figure, there is probably no statistically significant difference between the groups; if the lines from two columns don’t overlap, there probably is one. See Figure 5, below, for an example. Remember that comparing confidence intervals is only a very rough check for statistically significant differences, and it is better to use a formal test where possible, e.g. a t-test or an odds ratio.
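The overlap rule of thumb can be sketched in code, using invented outcome scores for two program groups (again, a rough check only, not a substitute for a t-test):

```python
import statistics

def ci95(sample):
    """95% CI for a sample mean (normal approximation)."""
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / len(sample) ** 0.5
    return m - 1.96 * se, m + 1.96 * se

# Made-up outcome scores for two hypothetical program groups
group_a = [62, 70, 65, 68, 74, 66, 71, 69, 64, 73]
group_b = [80, 85, 78, 88, 82, 79, 86, 84, 81, 87]

lo_a, hi_a = ci95(group_a)
lo_b, hi_b = ci95(group_b)

# The "error bars" overlap if each interval starts before the other ends
overlap = lo_a <= hi_b and lo_b <= hi_a
print(overlap)  # False: the bars don't overlap, hinting at a real difference
```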

What’s up with the length of the lines? 

The shorter or smaller the CI, the larger the sample from which the data were drawn. The longer or bigger the CI, the smaller the sample from which the data were drawn. All other things being equal, a survey result with a small CI is more reliable than a result with a large CI.
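You can watch the interval shrink as the sample grows by recomputing the margin of error for a few hypothetical sample sizes (keeping the observed percentage and confidence level fixed):

```python
import math

p_hat, z = 0.40, 1.96  # hypothetical proportion; z for a 95% confidence level

margins = []
for n in (100, 400, 1600):
    # Margin of error for a proportion: z * sqrt(p(1-p)/n)
    margin = z * math.sqrt(p_hat * (1 - p_hat) / n)
    margins.append(round(margin, 3))
    print(n, margins[-1])  # prints 100 0.096, then 400 0.048, then 1600 0.024
```

Each time the sample size quadruples, the margin of error is cut in half: precision improves with sample size, but with diminishing returns.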

I hope this simple guide to confidence intervals has helped you to feel more confident about this topic!