On Methodological, Logistical and Ethical Issues in Research Related to People with Disabilities
1) A true voice: Depending on how data are collected, the “voice” captured may – or may not – be the truest voice of participants whose disabilities affect communication (e.g. people who are deaf/hard of hearing, people with vocal constraints such as advanced Parkinson’s disease, people with cognitive disabilities). This may lead, for example, to the use of interpreters or proxies in telephone surveys, which affects the potential validity of responses. In studies that address intimate partner violence, for instance, the use of a partner or child as an interpreter or proxy may compromise validity. This may differentially affect this community within the larger disability community with respect to data collected for large, nationally representative surveys and the ability to draw on such resources to develop a broader understanding of the range of concerns faced by people with disabilities in a given topic area.
I. Receipt of disability-related entitlements
a. Example: SSI, SSDI
b. Example: Medicare or Medicaid administrative claims data
c. Example: Adoption and Foster Care Analysis and Reporting System (AFCARS)
o Example of data file available on ICPSR: Collaborative Psychiatric Epidemiology Surveys (CPES)
o Example of data file available on ICPSR: Neuropsychological and Emotional Deficits as Predictors of Correctional Treatment Response in Maryland, 2003-2005
2) Implications for consent and assent in this specific population: There are obvious implications for consent and assent, and a need to word these documents very carefully to facilitate a sound pre-data-collection process.
3) Concerns about “dubious” data on the part of agency IRBs (concerned about liability, funding) emanating from discussions of consent/assent concerns: Also with respect to consent and assent, these issues have become a sticking point for me in work with agency-based IRBs on sensitive topics (i.e. implementation of the dignity of risk in community-based shared living settings around sexual relationships and substance use). I have had a proposal stalled based on IRB committee concerns that study participants with intellectual disabilities might incriminate themselves during an interview – regardless of whether they had done anything illegal that would need to be reported. Part of the problem here relates to what is, in my opinion, overzealous protection of people with intellectual disabilities combined with stereotypes about the population (e.g. that they are unable to control sexual impulses around children or when using substances). However, conversations about the challenges of consent and assent with people with intellectual disabilities have also led to discussions about the agency’s liability if word of misbehavior “gets out” based on “faulty” data collected from agency clients.
1) Lack of “normed” instruments: With client satisfaction/program evaluation survey processes in mind, and given concerns about response bias among other issues, a number of vetted client satisfaction instruments exist for adults in outpatient mental health settings (Fischer and Valley, 2000), prison-based settings (Baker, Zucker and Gross, 1998), youth in community-based service settings (Stüntzner-Gibson, Koren and DeChillo, 1995) and the families of children with emotional disturbance in community agencies (Koren, DeChillo and Friesen, 1992), but few exist for people with intellectual disabilities (Duvdevany, Ben-Zur, and Ambar, 2002). The Lifestyle Satisfaction Scale (LSS) was developed to assess levels of satisfaction with residential and community-based services among people with intellectual disability (Heal and Chadsey-Rusch, 1985). Other researchers have developed vetted mechanisms for measuring quality of life among people with intellectual disability (Cummins, 1997; Schalock and Keith, 1993), although this approach to measuring client satisfaction has been criticized (Hatton, 1998). A number of other studies have demonstrated assessments of the views of clients with intellectual disability on satisfaction with community-based worksites (Eggleton, Robertson, Ryan and Kober, 1999; Kraemer, McIntyre and Blacher, 2003), group homes (Cummins, 1994; Henry, Keys, Jopp and Balcazar, 1996) and long-term planning (Heller, Miller, Hsieh and Sterns, 2000).
2) Lack of access to what “normed” instruments do exist in agency settings: However, most agency-based evaluators do not have access to any of these instruments and their scoring mechanisms, largely due to permissions/copyright issues, cost, and lack of training in the use and/or scoring of specialized instruments. Further, while these instruments may be reported upon in the academic literature – which agency-based social workers may or may not be able to access – journal articles tend not to include full copies of either a particular instrument or its scoring mechanism. As a result of this resource unavailability, many agencies rely on their staff to develop “home grown” client satisfaction survey approaches without the benefit of the evidence-based literature on how to engage in this activity. While the documented “proliferation of ‘home-made’ satisfaction instruments” allows for agency-specific foci in the survey process (e.g. measurement of mission-specific items), this trend does not allow for comparability of findings across settings and across instruments, even assuming the presence of similar populations. While the use of “home-made” surveys is noted to be “analytically troublesome and imprudent,” this is nonetheless the reality that many social workers face in agency practice. Home-made efforts to survey clients on their satisfaction with services may be further challenged by the well-documented communication difficulties that can arise in interactions with people with intellectual disability with respect to eliciting “valid” responses (i.e. “response bias”) in the direct practice context (Heal and Sigelman, 1995).
3) Potential for coercion in data collection: Closely related to this discussion is the issue of potential coercion. Clients with intellectual disabilities may be susceptible to potentially coercive situations with community-based support staff who are implementing satisfaction surveys (Michel, Gordon, Ornstein and Simpson, 2000). In this context, use of the term “coercion” is not meant to imply that an interviewer has bad intentions toward a consumer; rather, if the person interviewing the client holds some form of potential power over the client, this may bias responses (Weisstub and Arboleda-Florez, 1997). This may be particularly important to consider when surveying offenders with intellectual disability and/or when front-line or clinical workers support clients with intellectual disability during the data collection/survey process (Michel, Gordon, Ornstein and Simpson, 2000; Weisstub and Arboleda-Florez, 1997).
1) Liability and exposure of sensitive subject matter to the larger community: When seeking buy-in from a community partner – even after years of building a relationship and a history of research or evaluation collaboration – it can be challenging to move forward on “sensitive” topics as a result of liability concerns. For example, I am very interested in learning more about the implementation of a central disability policy goal – the dignity of risk concept – in group home settings for adults with intellectual disabilities. Specifically, I am interested in learning more about how group home workers support individuals with intellectual disabilities in their decision making around sexual practices and alcohol and/or drug use, given what is often a lack of explicit agency policy around these topics. Although agencies acknowledge their struggles in managing/supporting individuals with intellectual disabilities around both topics, exposing the veritable “soft white underbelly” on these matters can be threatening and can lead to an end point in possible research discussions. Even if a particular community partner is willing to engage in a co-created effort to explore such topics, obtaining permission for such research often means seeking IRB approval from a state funding and/or regulatory agency, which has often resulted in significant delays and/or eventual denials.
2) Inter-agency collaboration and permission to interview subjects/use secondary data: The above-mentioned challenges related to liability and exposure of agencies/state agencies to sensitive topics also come into play when researchers are interested in populations with child protection and/or criminal justice involvement. For example, I am currently involved in a project with an outpatient mental health clinic that has had an influx of clients with intellectual and developmental disabilities – all of whom are also involved with the state’s child protection and/or juvenile justice systems.
The agency’s attorneys insist that we obtain permission from the relevant state authorities before proceeding with our own work. This has led to delays of over two years as meetings between the community agency research team and state agency authorities continue. While this may reflect a good, thoughtful process that has at its heart the protection of vulnerable human subjects, it presents a problem for researchers in academic settings regarding what we can be funded to do and in what time frame, among other matters.