An easy guide to interpreting odds ratios for social work evaluators
Odds ratios are a statistic commonly used in the medical and public health arenas. Increasingly, social work is practiced in these host environments. Therefore, social workers need to be familiar with the literature that professionals draw upon in these settings. An odds ratio represents the odds that an outcome will occur given a particular “exposure,” compared to the odds of the outcome occurring in the absence of that exposure. I get it, that’s some technical language. Let’s break it down.
If we think of practice in the context of evaluating a mental health service outcome, odds ratios might be used as follows:
An odds ratio might represent the odds that a positive mental health outcome such as “improved mental health status score” would occur for one group, say a treatment group, as a result of receiving cognitive behavioral therapy compared to the odds of the outcome occurring among members of the waitlist comparison group, who did not receive therapy. In this way, we can think of an odds ratio as being used to compare groups on an outcome.
Technically, what you need to know is that odds ratios are used to compare two groups at a time on a nominal or “dummy” variable, such as “improved mental health status score,” which is operationalized as either “yes” or “no.”
So, an odds ratio can be used to compare two groups in the evaluation context, such as a treatment group and a control group. You always need to know which group is being compared to the other in order to correctly interpret the odds ratio score. The score that you interpret is about the group you are focused on, and it is reported in comparison to the other group, which is known as the “referent.” Usually the treatment group is the group you are focused on and the control group is the referent.
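If you like to see things in numbers, here is a minimal sketch of how an odds ratio is actually computed from a two-by-two table of counts. The function name and the client counts are invented for illustration; they do not come from any real study.

```python
# Sketch: computing an odds ratio from hypothetical evaluation counts.

def odds_ratio(treat_yes, treat_no, control_yes, control_no):
    """Odds ratio for the focal (treatment) group relative to the referent."""
    treat_odds = treat_yes / treat_no        # odds of the outcome in the treatment group
    control_odds = control_yes / control_no  # odds of the outcome in the referent group
    return treat_odds / control_odds

# Suppose 40 of 60 treatment clients improved (40 yes, 20 no),
# and 25 of 60 control clients improved (25 yes, 35 no).
score = odds_ratio(40, 20, 25, 35)
print(round(score, 2))  # 2.8
```

Notice that the odds ratio divides odds (yes over no), not percentages (yes over total); that is why the score can differ from a simple comparison of the two groups’ percentages.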
Let’s look at a different example. Let’s say that this time we are comparing a treatment group and a control group on their likelihood of maintaining sobriety for 90 days post treatment. You will have data about what percentage of each group maintained sobriety for that timeframe, but you will also want to know if there is a statistically significant difference between groups. The odds ratio, reported together with a p-value or confidence interval, will give you information about whether there is a statistically significant difference, AND you will get another very special score: the odds ratio score will help you to determine whether there is a clinically meaningful difference between the groups.
If we have an odds ratio score of exactly 1.0, it means that the treatment and control groups are equally likely to have the outcome. If we have an odds ratio of anything over 1 (as in 2.0, 5.5, 7.2 or 12.6), it means that the treatment group (or the group we are focused on, in other words, NOT the referent) is more likely to have the outcome. If we have an odds ratio score of 2.3 (p<.001), it means that the odds of maintaining sobriety are 2.3 times higher for the treatment group than for the control group (commonly read as the treatment group being “2.3 times more likely” to maintain sobriety), and it is a statistically significant finding. Now let’s say we have an odds ratio score of 2.3 with a p-value of .99. Odds ratios are still reported in data tables when there is no statistical significance, but we don’t interpret them, because the two groups are considered statistically equal on the outcome.
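In published tables, the significance call usually comes from a 95% confidence interval reported next to the odds ratio: if the interval excludes 1.0, the difference is statistically significant at p < .05. Here is a minimal sketch of the standard (Wald) interval, using invented counts; this is one common approach, not necessarily the one used in any particular study.

```python
# Sketch: a Wald 95% confidence interval for an odds ratio,
# built from the four cells of a 2x2 table (invented counts).
import math

def or_with_ci(a, b, c, d):
    """a, b = treatment yes/no counts; c, d = control (referent) yes/no counts."""
    score = (a / b) / (c / d)              # the odds ratio itself
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of ln(OR)
    lower = math.exp(math.log(score) - 1.96 * se)
    upper = math.exp(math.log(score) + 1.96 * se)
    return score, lower, upper

score, lower, upper = or_with_ci(45, 15, 30, 30)
print(f"OR = {score:.2f}, 95% CI [{lower:.2f}, {upper:.2f}]")
# Here the interval excludes 1.0, so the difference is statistically significant.
```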
Now, here’s where the interpretation flips a little bit, which is something to get used to. Have patience, you will get used to it. Let’s say that our odds ratio had been 0.23 instead of 2.3. In this situation, we subtract 0.23 from 1 and get .77. Remember how 1.00 is the same as 100%? That’s what we are doing here: converting a decimal into a percentage. This would tell us that the treatment group was 77% less likely to maintain sobriety for 90 days (meaning there’s a problem with our program). When our odds ratios are above 1, we talk about “times more likely,” and when they are below 1 (meaning they start with a zero), we talk about “percent less likely.” Note that an odds ratio can never actually be negative or zero; 0 point anything is as low as it goes, and it is always about lower percentage likelihood.
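The flip described above is simple enough to write down as a one-line conversion. A minimal sketch, with a function name of my own choosing:

```python
# Sketch: reading an odds ratio below 1 as "percent less likely."

def percent_less_likely(odds_ratio):
    """Convert an odds ratio below 1 into a 'percent less likely' figure."""
    if not 0 < odds_ratio < 1:
        raise ValueError("This reading only applies to odds ratios between 0 and 1.")
    return round((1 - odds_ratio) * 100)

print(percent_less_likely(0.23))  # 77
```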
OK, now let’s talk about the way that odds ratio scores help us to determine clinical meaningfulness through the use of “effect size” benchmarks. The deal is that we only start treating odds ratio scores above 1 as meaningful at a certain cutoff point. In “research as a second language,” we talk about this as an “effect size.” As Chen, Cohen, and Chen (2010) note, “the odds ratio (OR) is probably the most widely used index of effect size in epidemiological studies” (p. 860). Further, these authors suggest that odds ratios of 1.68, 3.47, and 6.71 are equivalent to Cohen’s d effect sizes of 0.2 (small), 0.5 (medium), and 0.8 (large) (p. 860). Cohen’s d is another effect size statistic, one that comes with widely used benchmarks for what constitutes a small, medium, or large effect. So unless your odds ratio score is 1.68 or above, you shouldn’t really consider it to be a clinically meaningful difference between groups. That’s a good rule of thumb.
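The rule of thumb above can be sketched as a small lookup using the Chen, Cohen, and Chen (2010) benchmarks; the function name and labels are my own shorthand, not standard terminology.

```python
# Sketch: labeling odds ratios with the benchmarks from Chen, Cohen,
# and Chen (2010): 1.68 (small), 3.47 (medium), 6.71 (large).

def effect_size_label(odds_ratio):
    if odds_ratio < 1.68:
        return "below the small-effect cutoff"
    if odds_ratio < 3.47:
        return "small"
    if odds_ratio < 6.71:
        return "medium"
    return "large"

print(effect_size_label(2.3))   # small
print(effect_size_label(1.42))  # below the small-effect cutoff
```

These benchmarks apply to odds ratios above 1; a protective odds ratio below 1 is usually inverted (1 divided by the score) before being compared to the cutoffs.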
Let’s take a look at how you interpret odds ratios from a table of data, because findings are not always written out in sentences. This table is from a study I did with Salem State MSW graduate Jordan Jensen on parents with intellectual disabilities (as compared to parents without intellectual disabilities, the referents, or the group the odds ratio score does NOT focus on) in the child welfare system nationwide.
When we interpret a table, we always start by grounding ourselves with a good deep breath before reading the title of the table. It is easy to get anxious when faced with a table of data, but slowing yourself down and assessing the parts of the table first, to see what’s there, can really make a difference in helping you build confidence. You can start by identifying what is in each column. In this case, the variables compared between groups are in the first column, percentage data are reported for the sample and comparison group in the second and third columns, and statistical data are reported after that.
Let’s interpret three different types of findings. Look at the first line of data, which reports on rates of physical abuse allegations among parents with and without intellectual disabilities in the child welfare system. We see the percentage of parents that had this allegation in each group, but the odds ratio information tells us that there is no statistically significant difference between the groups, meaning that they are statistically equal, despite having slightly different percentages. Interpreting data is no good unless you then think about the implications of those data, or how to act on them. From a practice perspective, this means that neither group is more at risk of this type of child maltreatment, so parents with intellectual disabilities should not be stigmatized as being at higher risk for child abuse.
Moving on to allegations of sexual abuse, we see that parents with intellectual disabilities had a lower rate of this type of allegation than parents without intellectual disabilities. The odds ratio gives us a measure of effect size for this difference, telling us that parents with intellectual disabilities are 47 percent less likely to have this type of allegation than their counterparts and that the finding is statistically significant. Thinking about practice implications, this might mean that the higher numbers of service providers involved with a family could reduce the potential for sexual abuse to occur in a family led by parents with intellectual disabilities. Or, potentially, parents with intellectual disabilities may be parenting in intergenerational families, where there are more eyes on children, which functions as a protective factor. More research is needed to determine why this result occurs.
Finally, let’s look at the psychological or emotional maltreatment example, which shows a higher rate among parents with intellectual and developmental disabilities. This odds ratio score indicates that the odds of this allegation are 1.42 times higher for parents with intellectual disabilities, and that the finding was statistically significant. However, this odds ratio score does not reach the effect size cutoff for a small effect, meaning that it falls in the “grey area” between perfect equality at 1.0 and a small effect at 1.68. This finding has practice implications for prevention work with parents with intellectual disabilities, who may need extra parenting guidance around this aspect of childrearing.
I hope this short guide and its examples have been helpful to you in understanding how to interpret odds ratios as a clinical practitioner in an evaluation context. Good luck interpreting the data!