Research Briefs
Client Perceptions of Satisfaction With AIDS Services: An Instrument Development

Joe Burrage, Jr., PhD, RN
David Vance, PhD, MGS

The purpose of this article is to show how AIDS Service Organizations (ASOs) can develop their own instruments to assess client satisfaction by using support from academic partners. The Client Satisfaction Questionnaire (CSQ) is an example of this process. The initial 12-item CSQ was piloted using a sample of 46 HIV-infected men and women, resulting in a revised 8-item CSQ that was assessed using a sample of 121 HIV-infected men and women. The initial 12-item CSQ yielded three factors (volunteer's skill/access, volunteer's attitude, and volunteer's caring), accounting for 74.6% of the explained variance (Cronbach's alpha = 0.84). The revised 8-item CSQ resulted in one factor accounting for 67% of the explained variance (Cronbach's alpha = 0.92). Findings indicated acceptable reliability and validity of the CSQ to assess client satisfaction as an outcome of ASO client-agency interaction. Guidelines for instrument development by ASOs are proposed. Strategies to collaborate with the academic community to facilitate instrument development are discussed.

Key words: community-academic collaboration, community-based AIDS service organizations, instrument development, satisfaction
The majority of persons living with HIV (PLWH) receive some or all of their support through services provided by local community-based AIDS service organizations (ASOs). ASOs funded through the Ryan White HIV/AIDS Modernization Act (Health Resources and Services Administration, 1999, 2007) provide services in 51 U.S. metropolitan areas that have populations of 500,000 or more with at least 2,000 cumulative AIDS cases reported during the last 5 years. Social issues such as HIV-related stigma often create barriers to health service utilization and create health care delivery gaps (Burrage & Demi, 2003; Burrage & Porche, 2003; Kelly et al., 2006). ASOs fill these gaps by providing services such as counseling/education, early intervention services, physician referral, case management, and support groups and buddy programs for PLWH. Provision of such services involves interactions among agency staff, volunteers, and clients. With the introduction of effective combination antiretroviral therapy, HIV has become a chronic condition requiring prolonged service availability (Gifford & Groessl, 2002; Lekas, Siegel, & Schrimshaw, 2006).

Adults with HIV are socially vulnerable, and many are medically underserved (Burrage & Rocchiociolli, 2003), as reflected in HIV care disparities related to morbidity, mortality, life expectancy, and access to care. Weinrich, Boyd, and Herman (2003) define the medically underserved as those who are underinsured or uninsured, have little formal education, live in rural and inner-city areas, are unemployed, and have a low socioeconomic status.

Joe Burrage, Jr., PhD, RN, is an associate professor at Indiana University School of Nursing, Indianapolis. David Vance, PhD, MGS, is an assistant professor at the University of Alabama School of Nursing, Birmingham.

The increasing need to reduce health disparities requires accurate evaluation of the ability of ASO
JOURNAL OF THE ASSOCIATION OF NURSES IN AIDS CARE, Vol. 19, No. 3, May/June 2008, 228-234. doi:10.1016/j.jana.2008.03.001. Copyright © 2008 Association of Nurses in AIDS Care.
services to identify the needs of vulnerable PLWH, to reduce disparities, and to provide positive outcomes (Burrage & Porche, 2003). The increasing need to reduce health disparities among diverse groups who use ASO services requires the development of research instruments that reflect specific client groups.

One method of assessing a program's services and effectiveness is to determine consumer perceptions of satisfaction with those services. Satisfaction with agency personnel can foster access to care, improve adherence to recommendations, and facilitate continued client-agency interactions. Although general patient satisfaction instruments are available, they were not designed in an agency-specific context (agency services and activities). Thus, ASOs often use general instruments that were not designed to assess client satisfaction with agency services. The alternative is to design an instrument specific to the organization. Unfortunately, many ASOs do not possess the knowledge, experience, support, or resources to do this well.

The purpose of this report is to describe the development of an instrument to assess PLWH satisfaction with ASO services. An overview of the instrument development process is presented with two related studies that show how to develop such an instrument, and a five-step guide for instrument development is proposed.
Overview of the Instrument Development Process

Constructing a psychosocial instrument involves several phases of development. First, there is a conceptualization phase in which the topic of interest is clearly defined. Second, during a qualitative phase, information on the topic of interest is gathered from participants. Third, a content analysis is performed to determine themes surrounding the topic (Budd, Thorp, & Donohew, 1967; Carley, 1990; Strauss & Corbin, 1990). Fourth, items that correspond to the themes are taken from the qualitative research, reworded for readability, and then used as instrument items and tested in a small sample (Summers, 1992). Finally, the instrument is tested on a much larger sample to determine its validity and reliability (Carmines & Zeller, 1979; Dunteman, 1989; Huck & Cormier, 1996; Pike, 1996; Tabachnick & Fidell, 1996).
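The reliability side of that final testing phase is straightforward to automate. The sketch below, in Python with NumPy, computes Cronbach's alpha from a respondents-by-items matrix; the pilot data are hypothetical and for illustration only, not the study's data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items matrix of Likert scores."""
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # per-item variance
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical pilot data: 6 respondents x 4 items on a 4-point Likert scale.
pilot = np.array([
    [4, 4, 3, 4],
    [3, 3, 3, 3],
    [4, 3, 4, 4],
    [2, 2, 1, 2],
    [3, 4, 3, 3],
    [1, 2, 2, 1],
])
print(f"Cronbach's alpha = {cronbach_alpha(pilot):.2f}")
```

Values above roughly .70 are conventionally treated as acceptable internal consistency, which is the threshold the studies below also apply.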
Study 1: Development of Initial AIDS Service Organization Client Satisfaction Instrument

Initial development of a new instrument for a pilot study (Buddy Programs for People with HIV: Clients' and Volunteers' Perceptions) was performed to assess the psychosocial outcomes of social support interventions provided by ASOs to PLWH (Burrage & Demi, 2003). It was crucial that instrumentation be succinct and tailored to this unique group of ASO clients. Because no instruments could be found that measured satisfaction with volunteer interaction within the context of an ASO, an instrument to measure client satisfaction with services was developed.

Client Satisfaction Questionnaire Scale Construction

A literature review of satisfaction with services within the context of chronic illness, HIV infection, and health care service provision was conducted. A concept analysis was then performed, which validated the researchers' beliefs about client satisfaction in this context. This was followed by consultation with experts in the field and dialogue with clients to develop items that would specifically measure client satisfaction with ASO client/staff interactions. Content validity was established by a panel of experts who reviewed the items. The panel consisted of a psychiatric clinical nurse specialist, a family nurse practitioner, two social workers, and a local ASO program manager, as well as three long-term ASO clients; all three clients had been participating with local ASOs for at least 3 years. From this, a 12-item client satisfaction questionnaire (CSQ) was drafted, with each item assessed on a four-point Likert scale ranging from 1 (extremely dissatisfied) to 4 (extremely satisfied), as shown in Table 1. The client survey addressed satisfaction with the ASO within the context of client relationships with the agency's staff volunteers. Readability analysis indicated that the instrument was written at an eighth-grade reading level.
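An instrument like the CSQ maps naturally onto a simple data structure for scoring. The sketch below uses two illustrative CSQ-style items, not the full instrument, and hypothetical responses; the middle anchor labels are assumptions, since the source names only the scale endpoints.

```python
# Illustrative CSQ-style items (not the full instrument); the middle anchor
# labels below are assumptions; the source names only the scale endpoints.
items = [
    "The courtesy and friendliness of my volunteer",
    "The amount of respect my volunteer shows to me",
]
SCALE = {1: "extremely dissatisfied", 2: "dissatisfied",
         3: "satisfied", 4: "extremely satisfied"}

# Each row is one hypothetical respondent's answers, in item order.
responses = [
    [4, 3],
    [3, 3],
    [4, 4],
]

for idx, text in enumerate(items):
    scores = [row[idx] for row in responses]
    mean = sum(scores) / len(scores)
    print(f"{text}: mean {mean:.2f} ({SCALE[round(mean)]})")
```

Item means like these are what the agency reports back to its management team once the instrument is validated.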
Before use, the CSQ was tested for validity and comprehensiveness by content experts and piloted by two ASO clients to determine clarity and feasibility. Participants were able to complete the questionnaire in about 5 minutes and described the instrument as clear and understandable.

Table 1. Original and Revised Client Satisfaction Questionnaire

Original 12-Item Instrument
1. The courtesy and friendliness of my volunteer
2. My volunteer's warmth and personal interest in me
3. The amount of respect my volunteer shows to me
4. The consideration my volunteer shows for my feelings
5. The amount of concern my volunteer expresses about my problems
6. The activities my volunteer does for me or with me
7. My volunteer's ability to meet my needs
8. The amount of time my volunteer spends with me
9. The activities the Buddy Program plans for volunteers and clients
10. The spiritual support my volunteer provides to me
11. The degree to which my volunteer seems to be familiar with my kind of problem
12. The degree to which my volunteer exhibits knowledge about HIV infection, its effects, and its treatment

Revised 8-Item Instrument
1. The courtesy and friendliness of my volunteer
2. The staff's warmth and personal interest in me
3. The amount of respect the staff shows to me
4. The consideration the staff shows for my feelings
5. The amount of concern the staff expresses about my problems
6. The staff's ability to meet my needs
7. The degree to which the staff seems to be familiar with my kind of problem
8. The degree to which the staff exhibits knowledge about HIV infection, its effects, and its treatment

Methods

A total of 56 ASO clients in buddy programs in five metropolitan areas in the southeastern United States were approached by research team members to participate in the study. Of these clients, 46 agreed to participate. The majority of the clients were men (85%), with a mean age of 44 years (range: 27-68), and at least 1 year of college. More than half (57%) were White, 26% were African American, 11% were Latino, and 6% reported their race as Other. Participants had been infected with HIV for an average of 9.4 years (range: 2-18), and most had a diagnosis of AIDS.

Results

Principal component analysis was used to assess the construct validity of the researcher-developed CSQ. Cronbach's alpha (.84) for the CSQ showed satisfactory internal consistency reliability. A factor loading of > .60 was set to determine groupings of items into subscales. The cutoff point was set high to support robust item selection and provide better conceptual fit.
As seen in Tables 2 and 3, three factors emerged. Six items loaded on the first component (volunteer skill/access); items 6, 7, 8, 10, 11, and 12 addressed the competence and availability of agency staff volunteers to the client. Three items loaded on the second component (volunteer attitude); items 1, 3, and 4 addressed perceived attitudes of ASO volunteers. Two items loaded on the third component (volunteer caring); items 2 and 5 addressed perceptions of the quality of caring in ASO volunteers.

Discussion

Data from Study 1 provided direction for agency-focused changes and program redesign to enhance client satisfaction with agency services, such as redesigning training for buddy program volunteers to focus on interaction skills and engaging clients in social activities. The next step in the refinement of the CSQ was to use the outcome of the principal component analysis to refine the initial items so that the instrument's construct validity could be strengthened using a larger sample.
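The extraction-and-rotation procedure behind these tables can be sketched in NumPy. The data below are simulated (two latent factors driving two groups of items), not the study's responses; the code extracts principal-component loadings from the item correlation matrix, applies a Kaiser varimax rotation, and then groups items using the same > .60 loading cutoff.

```python
import numpy as np

def pca_loadings(data: np.ndarray, n_factors: int) -> np.ndarray:
    """Principal-component loadings (eigenvector * sqrt(eigenvalue)) of the item correlations."""
    corr = np.corrcoef(data, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)
    order = np.argsort(eigvals)[::-1][:n_factors]  # largest eigenvalues first
    return eigvecs[:, order] * np.sqrt(eigvals[order])

def varimax(loadings: np.ndarray, n_iter: int = 100, tol: float = 1e-6) -> np.ndarray:
    """Kaiser varimax rotation, so each item loads mainly on one factor."""
    p, k = loadings.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(n_iter):
        Lam = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (Lam ** 3 - Lam @ np.diag((Lam ** 2).sum(axis=0)) / p)
        )
        R = u @ vt
        d_new = s.sum()
        if d_new < d * (1 + tol):  # converged
            break
        d = d_new
    return loadings @ R

# Simulated responses: items 1-3 driven by one latent factor, items 4-6 by another.
rng = np.random.default_rng(0)
latent = rng.normal(size=(120, 2))
data = np.column_stack([latent[:, 0]] * 3 + [latent[:, 1]] * 3)
data = data + 0.5 * rng.normal(size=data.shape)

L = varimax(pca_loadings(data, n_factors=2))
for i, row in enumerate(L, start=1):
    strong = [j + 1 for j, v in enumerate(row) if abs(v) > 0.60]  # the .60 cutoff
    print(f"item {i} loads on factor(s) {strong}")
```

With clean structure like this, each item exceeds the cutoff on exactly one rotated factor, which is the pattern the subscale groupings above rely on.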
Study 2: Revision/Application of the AIDS Service Organization Client Satisfaction Instrument

Study 2 was implemented at an ASO in a large metropolitan area with an expressed need to obtain data on their clients' satisfaction with the agency's office staff, which consisted of a mix of volunteer and paid staff. A project was undertaken to revise the CSQ to help the ASO measure its effectiveness through an analysis of HIV-infected clients' perceived levels of satisfaction with the office staff. It was believed that this would provide data reflective of the overall support and services provided by the agency (Burrage & Porche, 2003).

The original CSQ was reexamined and modified by deleting five items (see Table 1): Item 6 (The activities my volunteer does for or with me), Item 7 (My volunteer's ability to meet my needs), Item 8 (The amount of time my volunteer spends with me), Item 9 (The activities the buddy program plans for volunteers and clients), and Item 10 (The spiritual support my volunteer provides me). These items did not conform to the agency's objective of using the questionnaire to assess client satisfaction with paid agency staff. Also, the word staff replaced volunteer given the new purpose of the measure. At the suggestion of clients who reviewed the instrument, a five-point Likert scale ranging from 1 (extremely dissatisfied) to 5 (extremely satisfied) was used. Statistically, increasing the scale from four points to five should not dramatically change an instrument. The instrument was assessed for readability and was reported to be written at a seventh-grade reading level.

Table 2. Principal Component Analysis Statistics for the Client Satisfaction Questionnaire (N = 46)

Factor Extracted              Eigenvalue   Percent of Variance   Cumulative Percent
F1 (Volunteer Skill/Access)   5.5          45.7                  45.7
F2 (Volunteer Attitude)       2.1          17.8                  63.5
F3 (Volunteer Caring)         1.3          11.2                  74.6

Table 3. Varimax-Rotated Principal Component Factor Patterns and Communalities for the Original Client Satisfaction Questionnaire (N = 46). Items 6, 7, 8, 10, 12, and 11 (volunteer activities, meets needs, time, spiritual support, knowledge, familiarity) loaded on F1; items 1, 3, and 4 (courtesy and friendliness, respect, consideration for feelings) loaded on F2; items 2 and 5 (warmth, concern) loaded on F3. Note: F = factor, h² = communality. [Individual factor structure coefficients and communalities were not recoverable from the source layout.]

Table 4. Principal Component Factor Patterns and Communalities for the Revised Client Satisfaction Questionnaire (N = 121). All eight items (staff courtesy and friendliness, warmth, respect, consideration, concern, ability to meet my needs, familiarity, knowledge) loaded on the single factor F1. Note: F = factor, h² = communality. [Individual factor structure coefficients and communalities were not recoverable from the source layout.]

Methods

The revised CSQ was mailed to 300 clients receiving services from the agency; 121 questionnaires were returned, a 40% return rate. Although the return rate seems low, consideration should be given to the fact that the survey was mailed to a stigmatized group of people who were offered no incentive for participation other than the opportunity to provide input into agency operations. Of the 121 clients who participated, the majority were White (75%), male (77%), age 20 to 45 (67%), and had been HIV-infected for more than 5 years. African American and Latino clients accounted for 31% of the respondents. More than half were unemployed. Thus, Study 2 participant characteristics were similar to those of the participants who piloted the original version of the CSQ in Study 1.
Table 5. Five-Step Guide Based on the Five Phases of Instrument Development
Step 1: Partner With Universities. AIDS service organizations (ASOs) offer opportunities for academicians to do research, and academicians offer expertise in exchange for participating in service and publications. Universities can help agencies design and validate instruments and provide data analysis support. Universities benefit by having data available for analysis and publication opportunities; ASOs benefit through results based on sound data analysis. Begin the process of conceptualization together.

Step 2: Be Flexible When Designing Instruments. Succinct and accurate methods of measuring outcomes in agencies that focus on specific populations, such as people living with HIV, are essential. Being flexible and open to a broad spectrum of ideas will help to develop an instrument that captures truly meaningful information. The need for support and information on outcome measurement must be emphasized. Take the time to brainstorm as a team.

Step 3: Question/Item Formatting. Put all questions into a uniform format to help with standardization of items and ease of responding.

Step 4: Assess Grade Reading Level. Use readily available computer programs, such as those provided with basic word processing programs, to determine reading difficulty levels.

Step 5: Pilot Test. Take the time and care to pilot test the instruments for usability prior to use.
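Step 4 can also be scripted directly. The sketch below implements the Flesch-Kincaid grade-level formula in plain Python; the syllable counter is a rough vowel-group heuristic, so treat the result as an estimate (dictionary-based tools, like those built into word processors, are more accurate).

```python
import re

def count_syllables(word: str) -> int:
    """Rough heuristic: count vowel groups, dropping a common silent final 'e'."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def fk_grade(text: str) -> float:
    """Flesch-Kincaid grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / len(sentences) + 11.8 * syllables / len(words) - 15.59

item = "The amount of respect my volunteer shows to me."
print(f"estimated grade level: {fk_grade(item):.1f}")
```

Running every candidate item through a check like this makes it easy to hold a whole questionnaire at or below a target reading level.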
Results

The eight items from the original instrument used for the revised CSQ were those that conceptually fit the assessment of general satisfaction with the agency's staff interactions with clients. These previously tested questions focused on knowledge about HIV, courtesy and friendliness, concern, consideration, and familiarity with the client's problem. The internal consistency reliability of the items was quite high (Cronbach's alpha = .83, p < .05) (Burrage, 2000; Burrage & Demi, 2003). Clients in Study 2 reported high levels of satisfaction with staff/agency interactions (mean = 3.5). Principal components analysis was used to analyze the data. As before, a factor loading of > .60 was set to determine how items should be grouped into subscales. This level was used to support robust item selection and provide a better conceptual fit. Principal components analysis of the eight staff/agency satisfaction items resulted in one factor, which was named staff courtesy and warmth (eigenvalue = 5.35; 67% of variance) (Table 4). The Cronbach's alpha for this scale was .92, an improvement in internal consistency reliability over that reported in Study 1 (.84).

Discussion

Even though client satisfaction levels were high, the investigator met with the agency management staff to explain the data. Based on the results of the analysis, a discussion followed of how the staff could maximize client satisfaction by enhancing staff courtesy and a warm approach. Educational sessions about communication techniques and posted reminders about being warm and courteous were suggested. Thus, data provided by this evaluation were used to assist the agency's management team in making decisions about ways to enhance office staff performance and further improve client satisfaction with the agency.
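The reported 67% follows directly from the eigenvalue: in a principal components analysis of standardized items, each item contributes one unit of variance, so an eight-item instrument has a total variance of 8 and a factor explains eigenvalue / 8 of it. A quick check:

```python
eigenvalue = 5.35  # reported for the single factor in Study 2
n_items = 8        # revised CSQ items; each standardized item contributes 1 unit of variance
pct_variance = 100 * eigenvalue / n_items
print(f"{pct_variance:.0f}% of variance explained")
```

The exact value is 66.875%, which rounds to the 67% reported in the text.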
Summary of the Five Phases

The development and revision of the instrument took place over five phases. In Phase 1 (conceptualization), client satisfaction was clearly defined and operationalized. From this conceptualization, Phase 2 (qualitative data) generated or reused questions to gather the data needed to explicate components of satisfaction through informal discussions with clients, volunteers, agency staff, and clinical experts. Phase 3 (content analysis) yielded categories of satisfaction within the ASO context; information obtained from informal discussions and a literature review led to the initial instrument subscales. Phase 4 (item development) produced items for the initial instrument to measure these satisfaction subscales. The items were checked for readability using a readability program and input from clients and agency staff when the instruments were piloted for feasibility. Phase 5 (validity and reliability) was accomplished through statistical analyses (see Tables 2, 3, and 4). Because no other instruments that assessed this type of satisfaction within this population could be identified, it was not possible to correlate these instruments with similar ones. Internal validity was assessed via confirmatory factor analysis, and items that did not fit statistically or conceptually were deleted when the instrument was revised. Reliability statistics were above .70 for both the initial instrument and its subsequent revision.
Influence of Population/Vulnerability on Sample Size

The sample sizes of both studies were relatively small, which had implications for interpretation of the statistical analyses and for the conclusions drawn from those results. Recruiting an adequate sample was influenced by the vulnerability and stigmatization of the populations served by the ASOs. However, the strength of the outcome evaluations for these instruments was evidence of an ecologically valid instrument, developed for and tested with its intended population.
Conclusion

ASOs have unique needs that may not be captured by available validated instruments. This experience provided the investigator and participating ASOs with an opportunity to develop a five-step guide based on the five phases of instrument development (see Table 5). The guide includes strategies to promote collaboration between ASOs and academic communities to facilitate agency-specific instrument development. This experience and the suggested guide met the agencies' needs for expertise, support, and resources and provided a service/research opportunity for the academician. In conclusion, this report provides an example of how this type of measure can be designed for population-specific outcomes in agencies that serve these clients. The use of the suggested methods for instrument development and revision resulted in a succinct measure specific to vulnerable and medically underserved groups, in this case, clients with HIV infection.
References

Budd, R. W., Thorp, R. K., & Donohew, L. (1967). Content analysis of communication. New York: Macmillan.

Burrage, J. (2000). A descriptive study of buddy programs for people infected with HIV: Clients' and volunteers' perceptions. Unpublished doctoral dissertation, Georgia State University, Atlanta, GA.

Burrage, J., & Demi, A. (2003). Buddy programs for people infected with HIV. Journal of the Association of Nurses in AIDS Care, 14, 52-62.

Burrage, J., & Porche, D. (2003). Community based AIDS service organizations/academic partnerships: A model for evaluation of services to vulnerable populations. Journal of Multicultural Nursing and Health, 9, 7-12.

Burrage, J., & Rocchiociolli, J. (2003). HIV related stigma: Implications for multi-cultural nursing. Journal of Multicultural Nursing and Health, 9, 13-17.

Carley, K. (1990). Content analysis. In R. E. Asher (Ed.), The encyclopedia of language (pp. 725-732). Edinburgh, UK: Pergamon Press.

Carmines, E., & Zeller, R. (1979). Reliability and validity assessment. In M. Lewis-Beck (Ed.), Sage University papers series: Quantitative applications in the social sciences. Newbury Park, CA: Sage.

Dunteman, G. (1989). Principal components analysis. In M. Lewis-Beck (Ed.), Sage University papers series: Quantitative applications in the social sciences. Newbury Park, CA: Sage.

Gifford, A., & Groessl, E. (2002). Chronic disease self-management and adherence to HIV medications. Journal of Acquired Immune Deficiency Syndromes, 31, 163-166.

Health Resources and Services Administration. (1999). Outcomes evaluation technical assistance guide: Primary medical care outcomes, Titles I and II of the Ryan White CARE Act. Rockville, MD: Author.

Health Resources and Services Administration. (2007). HHS awards $1.1 billion to help states, territories deliver HIV/AIDS care. Retrieved March 5, 2008, from http://hhs.gov/news/press/2007pres/20070405a.html

Huck, S., & Cormier, W. (1996). Reading statistics and research (2nd ed.). New York: Harper Collins.

Kelly, J., Somlai, A., Benotsch, E., Amirkhanian, Y., Fernandez, M., Stevenson, L., et al. (2006). Programmes, resources, and needs of HIV-prevention nongovernmental organizations (NGOs) in Africa, Central/Eastern Europe and Central Asia, Latin America and the Caribbean. AIDS Care, 18, 12-21.

Lekas, H., Siegel, K., & Schrimshaw, E. (2006). Continuities and discontinuities in the experiences of felt and enacted stigma among women with HIV/AIDS. Qualitative Health Research, 16, 1165-1190.

Pike, C. K. (1996). Development and initial validation of the Social Work Values Inventory. Research on Social Work Practice, 6, 337-352.

Strauss, A., & Corbin, J. (1990). Basics of qualitative research. Newbury Park, CA: Sage.

Summers, S. (1992). Instrument development: Writing the items. Journal of Post Anesthesia Nursing, 7, 407-410.

Tabachnick, B. G., & Fidell, L. S. (1996). Using multivariate statistics. New York: Harper Collins.

Weinrich, S., Boyd, M., & Herman, J. (2003). Tool adaptation to reduce health disparities. In M. Frank-Stromborg & S. Olsen (Eds.), Instruments for clinical health-care research (pp. 21-30). Sudbury, MA: Jones and Bartlett Publishers.