Perceived Program Qualities and Outcomes of a Youth Program in Hong Kong Based on the Views of the Workers

R.C.F. Sun PhD 1, Daniel T.L. Shek PhD, FHKPS, BBS, SBS, JP 2,3,4,5,*

1 Faculty of Education, The University of Hong Kong, P.R. China
2 Department of Applied Social Sciences, The Hong Kong Polytechnic University, Hong Kong, P.R. China
3 Centre for Innovative Programs for Adolescents and Families, The Hong Kong Polytechnic University, Hong Kong, P.R. China
4 Kiang Wu Nursing College of Macau, Macau, P.R. China
5 Department of Social Work, East China Normal University, P.R. China

The authors indicate no conflicts of interest.
* Address correspondence to: Professor Daniel T.L. Shek, PhD, FHKPS, BBS, SBS, JP, Department of Applied Social Sciences, The Hong Kong Polytechnic University, Hunghom, Hong Kong. E-mail address: [email protected] (D.T.L. Shek).

Abstract

Study Objectives: Based on data collected in the extension phase of the Project P.A.T.H.S. in Hong Kong, this study examined the views of 9,765 program implementers on the universal curricula-based program (ie, Tier 1 Program).
Design: After the Tier 1 Program was completed, workers responded to a client satisfaction scale (Form B). Utilizing the data supplied by the participating schools, the profiles and correlates of the client satisfaction data were examined.
Results: Program attributes, implementer attributes, and benefits of the program were viewed positively by the program implementers, with a high proportion of the instructors perceiving the program as contributing to the development of the students. Small grade differences in client satisfaction levels were found. Regarding predictors of perceived program effectiveness, perceived program and instructor attributes predicted perceived effectiveness of the program.
Conclusion: Consistent with the data collected from the students, the present findings suggest that the Tier 1 Program was well received by the major stakeholders, particularly with respect to its ability to promote positive development in Chinese junior high school students. The present findings replicated the previous observation that perceived program and implementer qualities were significant determinants of perceived program effectiveness.
Key Words: Subjective outcome evaluation, Program implementers, Positive youth development program, Chinese, Adolescents

Introduction

It is a common practice for youth workers and helping professionals to use the client satisfaction approach to examine service recipients' views of and satisfaction with a program.1,2 As far as "perceived outcomes" are concerned, program participants are usually asked about the perceived benefits and effectiveness of the program, such as the degree to which the program has helped them to overcome their problems.3 Subjective outcome evaluation is widely used in youth development programs, particularly with respect to the framework of positive youth development. For example, Heinze et al4 adopted a positive youth development framework to examine programs designed for youth experiencing homelessness. For the program dimensions, they included relationships with staff and peers in the agency context, opportunities to cultivate belongingness and skill building, and school as well as community integration efforts. Results showed that the program dimensions were strongly correlated with one another and significantly related to overall satisfaction. It was also found that intangible resources such as agency climate, interactions with others, and opportunities for personal growth and development had stronger influences on overall satisfaction than did material goods, services, or other resources obtained.

In another line of research investigating after-school programs for young people, 2 studies were conducted.5 In the first study, young people completed a standardized questionnaire containing measures of ethnic identity, self-esteem, and perceptions of the program at pretest and posttest. Results showed that higher perceived program quality was related to a higher level of self-esteem. In the second study, compared with those who did not attend after-school programs regularly, those who attended regularly displayed higher concentration and regulation skills after roughly 8 months. In an evaluation study of an intervention program for children, Jordans et al6 showed that children and parents reported high levels of satisfaction with the service and positive changes in emotional and behavioral functioning. Styron et al7 conducted an evaluation assessing the effectiveness of a young adult services program using multiple methods. Results showed that strengths-focused and community-focused treatment planning generated positive objective and subjective outcomes; longer duration of stay in the program was associated with higher service satisfaction and quality of life.

As far as measurement is concerned, subjective outcome evaluation has been carried out in different ways. Although qualitative evaluation data can easily be collected by asking open-ended questions, client satisfaction has commonly been assessed by standardized items8,9 or by standardized scales, such as the Consumer Satisfaction Questionnaire10 and the Client Satisfaction Questionnaire.11-13 For example, with reference to the assessment of client satisfaction in the youth mental health field, Athay and Bickman14 developed a brief 4-item Service Satisfaction Scale (SSS) for clinically referred youths and their caregivers. Based on different statistical analyses, such as confirmatory factor analysis, the measures for young people and caregivers showed good psychometric properties. However, validated client satisfaction scales are not common in the pediatric field. In addition, very few studies have investigated the different components of client satisfaction scales within the context of youth enhancement programs.

It is noteworthy that most of the existing client satisfaction studies were based on the perceptions of those who joined the programs. However, it is argued that it is equally important to understand the views of the program implementers15 because different stakeholders should be involved in the evaluation process16 to reveal a full picture of the program implementation. Besides, as pointed out by Flannery and Torquati,17 "teachers who are not satisfied with a program are less likely to use the program materials, regardless of whether their principal or district administration is supportive of the program" (p 395). Hence, it is valuable to understand the subjective views of the workers who implement the programs.

There are several additional justifications for examining subjective outcome evaluation findings among program implementers. First, the views of program implementers should be understood because they are major stakeholders of the programs.16,18 Second, although it can be argued that program implementers may have a vested interest (hence subjective bias) in the developed program, program implementers may actually be competent in assessing the program because program evaluation is normally covered in professional training programs. Third, asking implementers for feedback can give them a sense of fairness, respect, and engagement. Fourth, subjective outcome evaluation among the program implementers can facilitate their own reflective practice.19,20 Finally, if one adopts the strategy of triangulation, there is a need to collect subjective outcome evaluation data from different data sources. Essentially, a researcher has to address the issue of whether different types of evaluation data would produce similar conclusions (ie, triangulation in terms of different data sources).

In response to different adolescent problem behaviors, researchers in the West have developed adolescent prevention and positive youth development programs.21,22 Unfortunately, with specific attention to China, Shek and Yu23 showed that related programs are severely lacking, with the exception of the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong. The Hong Kong Jockey Club Charities Trust initiated the project and invited 5 universities in Hong Kong to design and implement the related programs. The initial earmarked grant for the project was HK$400 million.24 Besides developing a youth enhancement program for Secondary 1 to Secondary 3 students in Hong Kong, the research team also developed training programs and evaluated the developed program.

Because of the initial success of the project, the Trust supported the project for another cycle covering the 2009/10 to 2011/12 school years, with an additional grant of HK$350 million. In the extension phase of the project, the client satisfaction approach was mainly used to evaluate the perceived outcomes of the project. There are 2 tiers of programs in the project. For the Tier 1 Program, students in Grade 7 to Grade 9 were invited to participate in a curricula-based positive youth development program lasting 10 to 20 hours per year. The curriculum was based on the positive youth development elements covered in existing successful programs. The Tier 2 Program was designed for adolescents with greater needs, and it was developed by school social workers in collaboration with the school staff.

In the initial phase of the project, subjective outcome evaluation of the Tier 1 Program based on the perspective of the students was carried out (N = 206,313 students). Shek and Sun25 showed that the students held positive views of the program, the implementers, and program effectiveness. At the same time, subjective outcome evaluation data were collected from the program implementers. Similar findings were obtained based on the views of the workers in 9 datasets collected from 244 schools (N = 7,926).26 Taken as a whole, the findings showed that different stakeholders in the initial implementation phase were generally satisfied with the program, the instructors, and the benefits of the program. In the extension phase, findings based on the client satisfaction approach similarly revealed that the program was able to promote positive development in the program participants.

As the previous studies focused on data from the 2009/10 and 2010/11 school years, the purpose of this study was to examine the views of the program implementers across the 3 junior secondary grades over the 2009 to 2012 school years. Across the 3 years, there are 3 sets of data for each grade (ie, 9 sets of data in total). In this study, the quantitative responses of the implementers to the items in a subjective outcome evaluation form were examined. Although qualitative subjective outcome evaluation data were available, the focus of this paper was on the quantitative evaluation data. Besides examining the descriptive profiles of responses across the 3 years, the question of whether subjective outcome evaluation differed across grades was examined. As the program might appear less novel in the upper grades and adolescent rebellion might set in, it was expected that client satisfaction would gradually drop across grades (Hypothesis 1). Furthermore, predictors of perceived benefits of the program were investigated, with particular reference to perceived program qualities and implementer qualities. Specifically, 2 additional general hypotheses were examined in the study. First, as the 3 dimensions represent different aspects of subjective outcome evaluation that are conceptually related, it was predicted that perceived program attributes, instructor attributes, and perceived benefits would be significantly correlated among themselves (Hypothesis 2).25,26 Second, as some theories (eg, invitational education) maintain that the program and instructor would influence program success,27 it was hypothesized that perceived program and instructor attributes would be significant predictors of perceived benefits of the program (Hypothesis 3).

Table 1
Summary of the Program Implementers' Perceptions toward the Program Qualities (respondents with positive responses, Options 4-6)

Item | S1 n (%) | S2 n (%) | S3 n (%) | Overall N (%)
1. The objectives of the curriculum are very clear. | 3,451 (95.46) | 3,130 (94.53) | 2,652 (94.51) | 9,233 (94.87)
2. The design of the curriculum is very good. | 3,193 (88.28) | 2,805 (84.77) | 2,429 (86.56) | 8,427 (86.59)
3. The activities were carefully planned. | 3,295 (91.22) | 2,991 (90.47) | 2,543 (90.63) | 8,829 (90.80)
4. The classroom atmosphere was very pleasant. | 3,280 (90.78) | 2,870 (86.92) | 2,432 (86.73) | 8,582 (88.30)
5. There was much peer interaction among the students. | 3,205 (88.81) | 2,825 (85.48) | 2,387 (85.34) | 8,417 (86.67)
6. Students participated actively during lessons (including discussions, sharing, games, etc.). | 3,210 (88.80) | 2,740 (82.90) | 2,298 (81.95) | 8,248 (84.82)
7. The program has a strong and sound theoretical support. | 3,155 (87.32) | 2,867 (86.77) | 2,456 (87.68) | 8,478 (87.24)
8. The teaching experience I encountered enhanced my interest towards the lessons. | 2,966 (82.09) | 2,647 (80.07) | 2,256 (80.51) | 7,869 (80.95)
9. Overall speaking, I have very positive evaluation of the program. | 3,068 (84.87) | 2,667 (80.70) | 2,302 (82.07) | 8,037 (82.64)
10. On the whole, students like this curriculum very much. | 3,061 (84.79) | 2,607 (78.93) | 2,219 (79.28) | 7,887 (81.21)

S1, Secondary 1 level; S2, Secondary 2 level; S3, Secondary 3 level. All items are on a 6-point Likert scale with 1 = strongly disagree, 2 = disagree, 3 = slightly disagree, 4 = slightly agree, 5 = agree, 6 = strongly agree. Only respondents with positive responses (Options 4-6) are shown in the table.

Methods

Participants and Procedure

A total of 247 schools joined the extension phase of the project (2009 to 2012 school years). The data were collected from all participating schools across the 3 grade levels over the 3 years (Grade 7: 667 schools; Grade 8: 620 schools; Grade 9: 537 schools). After completion of the program, workers were invited to respond to a client satisfaction scale (Form B). From 2009 to 2012, a total of 9,765 questionnaires were collected. To facilitate the program evaluation, an evaluation manual with standardized instructions for data collection and analyses was sent to each school. In addition, the workers received training on how to collect and analyze the data during the 20-hour training workshops held before program implementation.

Instruments

The Subjective Outcome Evaluation Form (Form B) was used.26 There are several sections in this measure, including views on the program (10 items), views on the worker (10 items), views on the program effectiveness (16 items), tendency to recommend the program to others with similar needs (1 item), tendency to teach similar programs in the future (1 item), degree to which the program had promoted one's professional growth (1 item), and 4 open-ended questions (eg, difficulties encountered). In this study, only the findings based on the structured (ie, non open-ended) items were analyzed and reported. The program and worker items are assessed on a 6-point scale with the response options "strongly disagree," "disagree," "slightly disagree," "slightly agree," "agree," and "strongly agree"; the remaining structured items are assessed on 4-point or 5-point scales (see Tables 3 and 4). Previous research showed that Form B possessed good psychometric properties.28

For the quantitative subjective outcome evaluation data, the schools entered and analyzed the data using an EXCEL program developed by the research team. The program produced frequency and percentage profiles of the different ratings for each item. When the schools submitted their reports to the funding body, a soft copy of the consolidated data was also submitted. Using the soft copies of the data, the overall client satisfaction profiles were reconstructed.

Data Analyses

To understand the views of the program implementers, descriptive statistical analyses using frequency analyses (ie, percentages of responses) were carried out. As 6 response options were used for the program and worker items, it is possible to combine the "disagree" responses (ie, "strongly disagree," "disagree," and "slightly disagree") to form the "negative" responses and the "agree" responses (ie, "strongly agree," "agree," and "slightly agree") to form the "positive" responses. For the first 3 domains of the scale (program, worker, and benefits), an average score was created for each domain by dividing the total score in that domain by the number of items in that domain. To examine grade differences in the perceived program, instructor, and effectiveness domains, several 1-way ANOVAs were used. To examine the relationships among the 3 basic dimensions of Form B, several Pearson correlation analyses were performed. To clarify the predictors of program effectiveness, multiple regression analyses were carried out.
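As an illustration of the procedures just described, the following Python sketch computes item-level positive-response percentages, domain scores, 1-way ANOVAs, and Pearson correlations. It is a minimal sketch only, not the EXCEL program developed by the research team, and, unlike the published analyses, it treats individual respondents rather than schools as the units of analysis; the file name, the grade column, and the item names (q1-q36) are hypothetical, and pandas and SciPy are assumed to be available.

```python
# Minimal sketch (not the research team's EXCEL program) of the analyses described above.
# Assumptions: a hypothetical CSV with one row per respondent, a "grade" column (7, 8, or 9),
# program items q1-q10 and worker items q11-q20 scored 1-6, and effectiveness items
# q21-q36 scored 1-5.
import pandas as pd
from scipy import stats

df = pd.read_csv("form_b_responses.csv")  # hypothetical file name

program_items = [f"q{i}" for i in range(1, 11)]
worker_items = [f"q{i}" for i in range(11, 21)]
benefit_items = [f"q{i}" for i in range(21, 37)]

# Percentage of "positive" responses per item: options 4-6 on the 6-point program and
# worker items, options 3-5 on the 5-point effectiveness items (cf. Tables 1-3).
positive_pct = pd.concat([
    (df[program_items + worker_items] >= 4).mean() * 100,
    (df[benefit_items] >= 3).mean() * 100,
])

# Domain scores: average of the items in each domain (domain total divided by item count).
df["program"] = df[program_items].mean(axis=1)
df["worker"] = df[worker_items].mean(axis=1)
df["benefit"] = df[benefit_items].mean(axis=1)

# Grade differences in each domain: 1-way ANOVA across the 3 grades.
anova = {
    domain: stats.f_oneway(*[g[domain].dropna() for _, g in df.groupby("grade")])
    for domain in ["program", "worker", "benefit"]
}

# Relationships among the 3 dimensions: Pearson correlations (cf. Table 6).
correlations = df[["program", "worker", "benefit"]].corr(method="pearson")

print(positive_pct.round(2))
print(anova)
print(correlations.round(2))
```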

Results

This paper has several objectives. First, based on the subjective views of the workers, descriptive profiles of their views on the program, the workers' own performance, and program effectiveness in the extension phase of the project were presented. Second, 3 hypotheses (ie, grade differences in subjective outcome evaluation, correlations among the program, implementer, and effectiveness dimensions, and whether program quality and instructor quality predicted perceived effectiveness of the Tier 1 Program) were examined.

Positive results were found from the descriptive statistical analyses. Regarding the perceptions toward the program, over four-fifths of the implementers responded positively (Table 1).

Table 2
Summary of the Program Implementers' Perceptions toward their Own Performance (respondents with positive responses, Options 4-6)

Item | S1 n (%) | S2 n (%) | S3 n (%) | Overall N (%)
1. I had a good mastery of the curriculum. | 3,216 (89.86) | 2,902 (88.39) | 2,461 (88.21) | 8,579 (88.88)
2. I prepared well for the lessons. | 3,228 (90.50) | 2,950 (89.88) | 2,526 (90.70) | 8,704 (90.35)
3. My teaching skills were good. | 3,262 (91.53) | 2,972 (90.67) | 2,538 (91.23) | 8,772 (91.15)
4. I had good professional attitudes. | 3,452 (96.61) | 3,172 (96.65) | 2,677 (96.02) | 9,301 (96.45)
5. I was very involved. | 3,380 (94.55) | 3,101 (94.40) | 2,611 (93.62) | 9,092 (94.23)
6. I gained a lot during the course of instruction. | 3,064 (85.92) | 2,753 (83.96) | 2,375 (85.19) | 8,192 (85.04)
7. I cared for the students. | 3,512 (98.16) | 3,217 (97.87) | 2,728 (97.81) | 9,457 (97.96)
8. I was ready to offer help to students when needed. | 3,530 (98.58) | 3,234 (98.42) | 2,749 (98.50) | 9,513 (98.50)
9. I had much interaction with the students. | 3,400 (95.03) | 3,085 (93.91) | 2,621 (94.01) | 9,106 (94.35)
10. Overall speaking, I have very positive evaluation of myself as an instructor. | 3,470 (97.01) | 3,150 (95.95) | 2,677 (96.02) | 9,297 (96.36)

S1, Secondary 1 level; S2, Secondary 2 level; S3, Secondary 3 level. All items are on a 6-point Likert scale with 1 = strongly disagree, 2 = disagree, 3 = slightly disagree, 4 = slightly agree, 5 = agree, 6 = strongly agree. Only respondents with positive responses (Options 4-6) are shown in the table.

Most of the participants agreed that the program had clear objectives (94.87%), good planning (90.80%), and a positive classroom climate (88.30%). Similarly, many implementers also had very positive evaluations of their own performance (Table 2). For example, nearly all the program implementers indicated that they were ready to help students in need (98.50%) and that they cared for the students (97.96%). Almost all of them agreed that they possessed good professional attitudes (96.45%). Regarding the perceived effectiveness of the program (Table 3), most workers endorsed the beneficial nature of the program. For example, most of them agreed that the program was able to promote self-awareness (92.73%) and social competence (92.53%) in students. Besides, many implementers agreed that their professional growth was promoted by the program. Moreover, nine-tenths of the workers would recommend the program to students in need and roughly four-fifths of them would teach programs of a similar nature in the future (Table 4).

Table 5 shows the internal consistency measures for Form B. The findings generally suggest that Form B has high reliability (α for the overall 36-item scale = .98). The 3 basic domains of the scale were also reliable, including the 10-item program domain (α = .95), the 10-item worker domain (α = .94), and the 16-item program effectiveness domain (α = .97).

Several 1-way analyses of variance were conducted to analyze grade differences in program qualities, implementer qualities, and program effectiveness. Significant results were found for the program (F(2,1819) = 6.98, P < .01), worker (F(2,1817) = 3.31, P < .05), and program benefits (F(2,1806) = 4.05, P < .05) domains. Post hoc analyses with Bonferroni adjustment revealed significant differences in perceived program qualities between implementers teaching Secondary 1 (M = 4.50; SD = .44) and those teaching Secondary 2 (M = 4.40; SD = .48) (P < .01), and between those teaching Secondary 1 and Secondary 3 (M = 4.42; SD = .51) (P < .05). Similar patterns were found in their perceptions of their own performance and of program benefits. Implementers of the Secondary 1 Program (M = 4.70; SD = .34) rated their own performance more positively than did implementers of the Secondary 2 Program (M = 4.64; SD = .38) (P < .05). Similarly, implementers of the Secondary 1 Program (M = 4.10; SD = .37) rated program effectiveness more positively than did implementers of the Secondary 2 Program (M = 4.04; SD = .39) (P < .05). Taken as a whole, Hypothesis 1 was supported.
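For completeness, the following sketch shows how internal consistency figures of the kind reported in Table 5 (Cronbach's alpha and the mean inter-item correlation) could be computed from item-level data. It is a hypothetical illustration, not the authors' code; the DataFrame df and the item list program_items are assumed from the earlier sketch.

```python
# Hypothetical helpers for the reliability figures of the kind shown in Table 5.
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    items = items.dropna()
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

def mean_interitem_r(items: pd.DataFrame) -> float:
    """Average of the off-diagonal Pearson correlations among the items."""
    corr = items.dropna().corr().to_numpy()
    off_diag = corr[~np.eye(corr.shape[0], dtype=bool)]
    return off_diag.mean()

# eg, for the hypothetical 10 program items defined in the earlier sketch:
# print(cronbach_alpha(df[program_items]), mean_interitem_r(df[program_items]))
```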

Table 3
Summary of the Program Implementers' Perceptions toward the Program Effectiveness (respondents with positive responses, Options 3-5)

The extent to which the Tier 1 Program (ie, the program in which all students have joined) has helped your students:

Item | S1 n (%) | S2 n (%) | S3 n (%) | Overall N (%)
1. It has strengthened students' bonding with teachers, classmates and their families. | 3,314 (92.16) | 2,987 (90.85) | 2,524 (90.50) | 8,825 (91.23)
2. It has strengthened students' resilience in adverse conditions. | 3,217 (89.36) | 2,899 (87.98) | 2,487 (89.14) | 8,603 (88.83)
3. It has enhanced students' social competence. | 3,378 (93.78) | 3,023 (91.86) | 2,558 (91.72) | 8,959 (92.53)
4. It has improved students' ability in handling and expressing emotions. | 3,295 (91.63) | 2,962 (90.19) | 2,518 (90.61) | 8,775 (90.85)
5. It has enhanced students' cognitive competence. | 3,164 (88.06) | 2,859 (86.95) | 2,478 (89.04) | 8,501 (87.97)
6. Students' ability to resist harmful influences has been improved. | 3,233 (89.98) | 2,902 (88.23) | 2,489 (89.28) | 8,624 (89.18)
7. It has strengthened students' ability to distinguish between the good and the bad. | 3,365 (93.55) | 3,028 (91.98) | 2,570 (92.38) | 8,963 (92.68)
8. It has increased students' competence in making sensible and wise choices. | 3,268 (90.98) | 2,941 (89.53) | 2,512 (90.39) | 8,721 (90.32)
9. It has helped students to have life reflections. | 3,109 (86.55) | 2,898 (88.14) | 2,502 (89.65) | 8,509 (87.98)
10. It has reinforced students' self-confidence. | 3,029 (84.19) | 2,708 (82.34) | 2,331 (83.55) | 8,068 (83.37)
11. It has increased students' self-awareness. | 3,349 (93.60) | 3,021 (91.99) | 2,571 (92.48) | 8,941 (92.73)
12. It has helped students to face the future with a positive attitude. | 3,123 (87.06) | 2,863 (87.15) | 2,477 (89.00) | 8,463 (87.65)
13. It has helped students to cultivate compassion and care about others. | 3,180 (88.63) | 2,908 (88.47) | 2,473 (88.80) | 8,561 (88.62)
14. It has encouraged students to care about the community. | 2,967 (82.76) | 2,707 (82.30) | 2,342 (84.00) | 8,016 (82.96)
15. It has promoted students' sense of responsibility in serving the society. | 2,939 (81.98) | 2,707 (82.30) | 2,350 (84.29) | 7,996 (82.76)
16. It has enriched the overall development of the students. | 3,400 (94.79) | 3,067 (93.11) | 2,615 (93.73) | 9,082 (93.91)

S1, Secondary 1 level; S2, Secondary 2 level; S3, Secondary 3 level. All items are on a 5-point Likert scale with 1 = unhelpful, 2 = not very helpful, 3 = slightly helpful, 4 = helpful, 5 = very helpful. Only respondents with positive responses (Options 3-5) are shown in the table.

Table 4
Other Aspects of Subjective Outcome Evaluation Based on the Program Implementers' Perceptions

Question | S1 n (%) | S2 n (%) | S3 n (%) | Overall N (%)
If you have a student/client whose needs and conditions are similar to those of your students who have joined the program, will you suggest him/her to participate in this program? (positive responses: Options 3-4) | 3,255 (92.00) | 2,889 (89.53) | 2,444 (89.10) | 8,588 (90.32)
If there is a chance, will you teach similar programs again in the future? (positive responses: Options 3-4) | 3,046 (86.14) | 2,654 (82.14) | 2,287 (83.19) | 7,987 (83.93)
Do you think the implementation of the program has helped you in your professional growth (eg, enhancement of your skills)? (positive responses: Options 3-5) | 3,002 (85.55) | 2,673 (83.85) | 2,304 (84.92) | 7,979 (84.79)

S1, Secondary 1 level; S2, Secondary 2 level; S3, Secondary 3 level. The first item is on a 4-point Likert scale with 1 = definitely will not suggest, 2 = will not suggest, 3 = will suggest, 4 = definitely will suggest. The second item is on a 4-point Likert scale with 1 = definitely will not teach, 2 = will not teach, 3 = will teach, 4 = definitely will teach. The third item is on a 5-point Likert scale with 1 = unhelpful, 2 = not very helpful, 3 = slightly helpful, 4 = helpful, 5 = very helpful. Only respondents with positive responses are shown in the table.

Regarding the relationships among program qualities, worker qualities, and perceived benefits, analyses showed that both program and worker qualities were significantly correlated with perceived program benefits (r = .78, P < .01 and r = .66, P < .01, respectively) across the different grades. The findings provided support for Hypothesis 2 (Table 6).

The multiple regression analysis results are presented in Table 7. It was found that views toward the program were positively associated with perceived program benefits (Secondary 1: β = .59, P < .001; Secondary 2: β = .69, P < .001; Secondary 3: β = .71, P < .001). Similarly, more positive views toward the implementers' own performance were associated with higher perceived program effectiveness (Secondary 1: β = .21, P < .001; Secondary 2: β = .15, P < .001; Secondary 3: β = .11, P < .01). In the overall sample, perceived program qualities (β = .66, P < .001) and implementer qualities (β = .17, P < .001) had significant predictive effects on program effectiveness, and the model explained 62% of the variance in program benefits. Taken together, the findings supported Hypothesis 3.
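To illustrate how correlation and standardized regression coefficients of the kind reported in Tables 6 and 7 could be obtained from domain scores, a brief sketch follows. It is a simplified, hypothetical example rather than the authors' actual analysis; it assumes a pandas DataFrame with respondent-level program, worker, and benefit domain scores (as in the earlier sketch) and uses statsmodels for ordinary least squares regression.

```python
# Illustrative sketch only: z-scoring the outcome and both predictors yields standardized
# regression coefficients (betas) comparable in kind to those reported in Table 7.
import pandas as pd
import statsmodels.api as sm

def standardized_betas(df: pd.DataFrame):
    """Regress perceived program effectiveness on program and implementer qualities."""
    data = df[["program", "worker", "benefit"]].dropna()
    z = (data - data.mean()) / data.std()          # standardize all three variables
    X = sm.add_constant(z[["program", "worker"]])  # predictors: program and implementer qualities
    model = sm.OLS(z["benefit"], X).fit()          # outcome: perceived program benefits
    return model.params.drop("const"), model.rsquared

# Pearson correlations with the outcome (cf. Table 6) and the regression (cf. Table 7):
# print(df[["program", "worker"]].corrwith(df["benefit"]))
# betas, r_squared = standardized_betas(df)
```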

Discussion

This study examined the views of the workers on the program, the workers' own performance, and the benefits of the Tier 1 Program in the extension phase of the Project P.A.T.H.S. in Hong Kong. As few research studies have been devoted to understanding the views of program implementers, who are important stakeholders of the program, this study focused on their views. Although both closed-ended items and open-ended questions were used in this study, only quantitative findings were reported in this paper.

There are several strengths of this study. First, in view of the fact that the program implementers' perspective is seldom captured in the client satisfaction literature, this study is an interesting addition to the Chinese scientific literature. Second, the sample size was respectable because a large number of program implementers participated in the study over a period of 3 years. Third, a validated measure of subjective outcome evaluation with good psychometric properties was used.28 Fourth, different predictors of perceived effectiveness as viewed by program implementers were covered in the study. Finally, in addition to the client satisfaction data collected from the program participants, the findings provided an integrated and triangulated picture of the perceived attributes and benefits of the Tier 1 Program in the extension phase of the project.

As in the previous studies,26 the present study showed that the program was perceived in a positive manner by the program implementers. For example, roughly 95% of them agreed that the program had clear objectives and 88% of the implementers perceived the class atmosphere to be very positive. The program implementers also viewed themselves in a favorable light. For example, 94% felt that they had much interaction with the students and 96% of the implementers agreed that they were satisfied with themselves. More importantly, most of the implementers perceived the Tier 1 Program to have benefits for different domains of development in the students.

One straightforward interpretation of the above positive profiles is that the program was successful and effective. However, in the absence of a control group, we should be cautious about the existence of alternative explanations.

Table 5
Means, Standard Deviations, Cronbach's Alphas, and Mean Inter-item Correlations among the Variables by Grade

Variable | S1 Mean (SD), α (mean r) | S2 Mean (SD), α (mean r) | S3 Mean (SD), α (mean r) | Overall Mean (SD), α (mean r)
Program qualities (10 items) | 4.50 (.44), .94 (.62) | 4.40 (.48), .95 (.65) | 4.42 (.51), .96 (.68) | 4.44 (.48), .95 (.65)
Implementer qualities (10 items) | 4.70 (.34), .93 (.59) | 4.64 (.38), .94 (.62) | 4.67 (.40), .95 (.66) | 4.67 (.37), .94 (.62)
Program effectiveness (16 items) | 3.49 (.43), .97 (.70) | 3.44 (.43), .97 (.71) | 3.47 (.44), .98 (.72) | 3.47 (.43), .97 (.71)
Total scale (36 items) | 4.10 (.37), .98 (.53) | 4.04 (.39), .98 (.56) | 4.07 (.41), .98 (.58) | 4.07 (.39), .98 (.56)

S1, Secondary 1 level; S2, Secondary 2 level; S3, Secondary 3 level; mean r, mean inter-item correlation.

Table 6
Correlation Coefficients on the Relationship between Program Components and Program Effectiveness by Grade

Variable | S1 | S2 | S3 | Overall
Program qualities (10 items) | .75* | .80* | .80* | .78*
Implementer qualities (10 items) | .64* | .67* | .66* | .66*

* P < .01.

First, developmental maturation may account for the perceived benefits of the program. Second, as the instructors delivered the programs themselves, they might have had biased perceptions because of vested interest. However, this possibility may not be high because the views of the program participants converged with those of the program implementers.25,26 Third, although demand characteristics may explain the observations, there is no evidence supporting this explanation. Finally, as the measure adopted in the study (ie, Form B) possesses very good psychometric properties, the alternative explanation in terms of random responding by the program implementers is also not strong.

Regarding grade differences in the subjective outcome evaluation findings (ie, Hypothesis 1), program implementers teaching the Secondary 1 (Grade 7) Program gave better evaluations of the program, instructor, and benefits than did those teaching the Secondary 2 (Grade 8) and Secondary 3 (Grade 9) Programs. The findings showed that the implementers teaching the Tier 1 Program in the first year of the secondary school years had the highest level of satisfaction with the program. It is noteworthy that as the Tier 1 Program focused on experiential learning, it would appeal to Secondary 1 students. However, such novelty might diminish in the Secondary 2 and Secondary 3 years, hence leading to relatively lower satisfaction among students in the higher grades. Nevertheless, it is noteworthy that the differences across grades were of small effect size only. In fact, the satisfaction findings were all in the positive direction.

Consistent with our predictions, both perceived program quality and instructor quality were significantly related to perceived program benefits. Furthermore, multiple regression analyses showed that program quality and instructor quality predicted perceived program effectiveness. These observations replicated the previous findings that perceived program and instructor qualities were predictive of perceived benefits of the program.26,29 The findings are also consistent with the theoretical assertion of the 5P model, which argues that program, people, policy, process, and place determine program effectiveness.27,30

Table 7
Multiple Regression Analyses Predicting Program Effectiveness by Grade

Model | Program qualities β* | Implementer qualities β* | R | R²
S1 | .59‡ | .21‡ | .76 | .58
S2 | .69‡ | .15‡ | .81 | .66
S3 | .71‡ | .11† | .81 | .65
Overall | .66‡ | .17‡ | .79 | .62

* Standardized coefficients.
† P < .01.
‡ P < .001.

The present study underscores the importance of understanding client satisfaction from the viewpoint of the workers. As presented in the Introduction, there are several justifications for collecting client satisfaction data from the workers. On the other hand, although Shek and Ma26 highlighted several arguments against the collection of evaluation data from program implementers, including the lack of evaluation skills, role strain and role confusion in the evaluation process, the existence of several sources of bias, and the demoralization argument, they also presented several counter-arguments in response to those criticisms. These include the following: evaluation is included in the training of teachers and social workers; program implementers are generally expected to deliver programs and to assess program effects; and collecting data from program implementers is legitimate based on different theoretical perspectives. Besides, there are research findings showing that objective outcomes converged with client satisfaction and perceived program outcomes.31,32

While the present findings replicated the findings in the initial phase of the project, we should recognize the limitations of the study. First, the inherent limitations of subjective outcome evaluation should be considered.33-36 Second, instead of using individuals as the units of analysis, schools were used as the units of analysis. Although the use of schools as units of analysis may lead to statistical artifacts, previous findings showed that there was not much difference when individual data or school data were used as the bases of analyses. Third, as only quantitative data were presented, it would be helpful to include qualitative analyses in the future. Finally, it was assumed that the "disagree" responses formed the negative responses whereas the "agree" responses formed the positive responses. Although this literally makes sense, the degree of correspondence between agreement and experience may be unclear. Notwithstanding these limitations, the present study can be regarded as a successful replication of the previous studies. When different sources of evaluation findings for the Project P.A.T.H.S. are taken into account, the present study reinforces the conclusion that different stakeholders are satisfied with the content, delivery, workers, and benefits of the Tier 1 Program.37-40

Acknowledgments

The preparation for this paper and the Project P.A.T.H.S. were financially supported by The Hong Kong Jockey Club Charities Trust. The authorship is equally shared between the first author and the second author.

References

1. Shek DT, Sun RC: Evaluation of the Project P.A.T.H.S. (extension phase) based on the perspective of the program participants. Int J Adolesc Med Health 2013; 25:405
2. Shek DT, Sun RC: Post-lecture evaluation of a university course on leadership and intrapersonal development. Int J Disabil Hum Dev 2013; 12:185
3. Shek DT, Sun RC: Post-course subjective outcome evaluation of a course promoting leadership and intrapersonal development in university students in Hong Kong. Int J Disabil Hum Dev 2013; 12:193
4. Heinze HJ, Hernandez Jozefowicz DM, Toro PA: Taking the youth perspective: assessment of program characteristics that promote positive development in homeless and at-risk youth. Child Youth Serv Rev 2010; 32:1365

5. Riggs NR, Bohnert AM, Guzman MD, et al: Examining the potential of community-based after-school programs for Latino youth. Am J Community Psychol 2010; 45:417
6. Jordans MJ, Komproe IH, Tol WA, et al: Practice-driven evaluation of a multi-layered psychosocial care package for children in areas of armed conflict. Community Ment Health J 2011; 47:267
7. Styron TH, O'Connell M, Smalley W, et al: Troubled youth in transition: an evaluation of Connecticut's special services for individuals aging out of adolescent mental health programs. Child Youth Serv Rev 2006; 28:1088
8. Kocher MS, Steadman JR, Briggs KK, et al: Relationships between objective assessment of ligament stability and subjective assessment of symptoms and function after anterior cruciate ligament reconstruction. Am J Sports Med 2004; 32:629
9. Krupat E, Bell RA, Kravitz RL, et al: When physicians and patients think alike: patient-centered beliefs and their impact on satisfaction and trust. J Fam Pract 2001; 50:1057
10. Holcomb WR, Adams NA, Ponder HM, et al: The development and construct validation of a consumer satisfaction questionnaire for psychiatric inpatients. Eval Program Plann 1989; 12:189
11. Attkisson CC, Zwick R: The client satisfaction questionnaire: psychometric properties and correlations with service utilization and psychotherapy outcome. Eval Program Plann 1982; 5:233
12. Vandiver V, Jordan C, Keopraseuth K: Family empowerment and service satisfaction: an exploratory study of Laotian families who care for a family member with mental illness. Psychiatr Rehabil J 1995; 19:47
13. Walsh T, Lord B: Client satisfaction and empowerment through social work intervention. Soc Work Health Care 2004; 38:37
14. Athay MM, Bickman L: Development and psychometric evaluation of the youth and caregiver Service Satisfaction Scale. Adm Policy Ment Health 2012; 39:71
15. Shek DT, Siu AM, Lee TY: Subjective outcome evaluation of the Project P.A.T.H.S.: findings based on the perspective of the program implementers. ScientificWorldJournal 2007; 7:195
16. Patton MQ: Utilization-Focused Evaluation, (4th ed.). Thousand Oaks, CA, Sage, 2008, pp 1-688
17. Flannery DJ, Torquati J: An elementary school substance abuse prevention program: teacher and administrator perspectives. J Drug Educ 1993; 23:387
18. Joint Committee on Standards for Educational Evaluation: The Program Evaluation Standards. Thousand Oaks, CA, Sage, 1994, pp 1-217
19. Osterman KF, Kottkamp RB: Reflective Practice for Educators. Thousand Oaks, CA, Corwin Press, 2004, pp 1-197
20. Taggart GL, Wilson AP: Promoting Reflective Thinking in Teachers: 44 Action Strategies. Thousand Oaks, CA, Corwin Press, 1998, pp 1-245
21. Catalano RF, Fagan AA, Gavin LE, et al: Worldwide application of prevention science in adolescent health. Lancet 2012; 379:1653
22. Shek DT, Yu L: Longitudinal impact of the Project P.A.T.H.S. on adolescent risk behavior: what happened after five years. ScientificWorldJournal 2012; 2012:316029

23. Shek DT, Yu L: A review of validated youth prevention and positive youth development programs in Asia. Int J Adolesc Med Health 2011; 23:317
24. Shek DT, Sun RC: The Project P.A.T.H.S. in Hong Kong: development, training, implementation, and evaluation. J Pediatr Adolesc Gynecol 2013; 26:S2
25. Shek DT, Sun RC: Participants' evaluation of the Project P.A.T.H.S.: are findings based on different datasets consistent? ScientificWorldJournal 2012; 2012:187450
26. Shek DT, Ma CM: Program implementers' evaluation of the Project P.A.T.H.S.: findings based on different datasets over time. ScientificWorldJournal 2012; 2012:918437
27. Shek DT, Sun RC: Implementation quality of a positive youth development program: cross-case analyses based on seven cases in Hong Kong. ScientificWorldJournal 2008; 8:1075
28. Shek DT, Yu L: Testing factorial invariance across groups: an illustration using AMOS. Int J Disabil Hum Dev (In press)
29. Shek DT, Ma CM, Tang CY: Subjective outcome evaluation of the Project P.A.T.H.S.: findings based on different datasets. Int J Adolesc Med Health 2011; 23:237
30. Smith KH: Invitational education: a model for teachers and counselors [2013]. Available at: http://iaie.webs.com/ie/PDFs/K_Smith.pdf. Accessed March 6, 2014
31. Shek DT, Lee TY, Siu AM, et al: Convergence of subjective outcome and objective outcome evaluation findings: insights based on the Project P.A.T.H.S. ScientificWorldJournal 2007; 7:258
32. Shek DT: Subjective outcome and objective outcome evaluation findings: insights from a Chinese context. Res Soc Work Pract 2010; 20:293
33. Weinbach RW: Evaluating Social Work Services and Programs. Boston, Allyn and Bacon, 2005, pp 1-258
34. Yi Y: A critical review of consumer satisfaction. In: Zeithami V, editor. Review of Marketing 1990. Chicago, American Marketing Association, 1990, pp 68-123
35. Grigoroudis E, Siskos Y: Customer Satisfaction Evaluation: Methods for Measuring and Implementing Service Quality. New York, Springer, 2010, pp 1-311
36. Royse D: Research Methods in Social Work, (4th ed.). Pacific Grove, CA, Thomson Brooks/Cole, 2004, pp 1-287
37. Shek DT, Ma CM: Subjective outcome evaluation of the Project P.A.T.H.S. in different cohorts of students. ScientificWorldJournal 2012; 2012:493957
38. Shek DT, Yu L: Subjective outcome evaluation of the Project P.A.T.H.S. (extension phase) based on the perspective of program implementer. ScientificWorldJournal 2012; 2012:589257
39. Shek DT, Yu L, Ho VY: Subjective outcome evaluation and factors related to perceived effectiveness of the Project P.A.T.H.S. in Hong Kong. ScientificWorldJournal 2012; 2012:490290
40. Shek DT, Sun RC, editors. Development and Evaluation of Positive Adolescent Training through Holistic Social Programs (P.A.T.H.S.). Heidelberg, Springer, 2013, pp 1-328