
Teaching & Teacher Education, Vol. 6, No. 2, pp. 149-163, 1990. Printed in Great Britain.

0742-051X/90 $3.00+0.00  (c) 1990 Pergamon Press plc

EMPIRICAL EVIDENCE THAT SCHOOL-UNIVERSITY COLLABORATION IS MULTICULTURAL EDUCATION

SUSAN M. BROOKHART
Duquesne University, U.S.A.

WILLIAM E. LOADMAN
The Ohio State University, U.S.A.

Abstract: Four frames of reference existing in school and university cultures were defined: (1) work tempo and the nature of professional time; (2) professional focus, from theoretical to practical; (3) career reward structure and the nature and importance of rewards for work; and (4) sense of personal power and efficacy, or the degree to which one perceives one's efforts lead to intended outcomes. A survey instrument indexing the four dimensions with four summated rating scales was mailed to a sample of 1225 educators from the 19 Midwest Holmes Group universities and the public schools with which they work. Results indicated that Tempo, Focus, Reward, and Power standardized scale scores differed by position, collaboration experience, and university affiliation. These differences are presented and discussed with respect to their meaning in the context of school-university collaboration.

In the last decade, "school-university collaboration" has changed from a general description to a technical term and from an occasional pursuit to a groundswell. Increasing the amount and quality of school-university collaboration promises to be a way for educators to pursue important goals with integrity between means and ends. Goals like better educational research and higher standards for professional teacher preparation are consistent with methods which actively involve both university and public school people.

Culture is the framework within which people interpret life, helping them decide what meaning to attach to persons and events and how to act themselves (Geertz, 1973). A group of people together share a culture or meaning-making framework. When two different groups, with two different ways to interpret the world, interact, the resulting activity is multicultural. This study presents descriptive evidence about ways in which school-university collaboration projects are multicultural, that is, evidence about how school and university educators look at their work differently.

Others have already observed that universities and public schools have two different cultures (Sarason, 1982). In order for meaningful communication to occur between them, participants in school-university collaboration must understand the perspectives of both university and public school educators (Gifford & Gabelko, 1987). Such "cross-cultural" understanding is more important for the second wave of educational reforms proposed in the middle and later 1980s than ever before because of the stress placed on both school and university involvement. In this study, then, the term "multicultural education" is a double-entendre, referring both to the intended educational outcomes of school-university projects and to the fact that this work educates the participants themselves, who become more aware of their colleagues' perspectives.

The purpose of studying culture is to access the conceptual world of the subjects, because only then can one converse with them (Geertz, 1973). Advocating school-university collaboration assumes conversation is possible between school and university people (Buchmann, 1985). Committees can plan together only if the members understand one another when they talk. People with two different frames of reference within which to understand goals and strategies for meeting them will make slightly different sense out of the same proceedings. For example, at a planning session, a university professor might talk about a curriculum change taking "a long time" to implement. By this phrase, he or she might mean several years. A classroom teacher hearing the phrase might understand it to mean 6 or 8 weeks. Miscommunication can occur where none was intended because of cultural differences. School-university project committees can write their own prescriptions for particular actions if and only if those on each side of the hyphen comprehend one another.

Literature about the cultures of schools and universities and about the processes and outcomes of collaborative projects suggested that there are at least four dimensions of meaning to which educators attend and along which they interpret their experiences.¹ Briefly, four dimensions along which educators in school and university cultures can be expected to differ include the following.

1. Work tempo: Work time is perceived differently at the public school and the university. Events happen quickly and simultaneously when there are large groups of children involved. University professors need more time for review and reflection and less emphasis on a daily routine and regimented schedules. These differences lead to different perceptions of professional time (Lieberman & Miller, 1984; Porter, 1987).

2. Professional focus: "Theory" and "practice" are two traditional names for the ends of a continuum in any field. Those on the practical end of the continuum are concerned mainly with the daily application of ideas in activities and concrete plans. Those on the theoretical end of the continuum do their work by thinking in broader terms (Tikunoff, Ward, & Griffin, 1979; Hering & Howey, 1982).

3. Career reward structure: University rewards come in the form of publication, recognition in an academic field, and academic rank. Public school people are motivated most by intrinsic rewards, especially relationships with students. University people are more geared toward extrinsic rewards (Lortie, 1975; Kottkamp, Provenzo, & Cohn, 1986; Gifford, 1986).

4. Sense of personal power and efficacy: Sense of efficacy means a feeling that one's efforts lead to intended outcomes, that what one does makes a difference. In comparison with teachers, university educators have traditionally felt more power and status. They perceive themselves as agents whose work yields results. Collaboration for research can lead to a reported increase in feelings of professional competence for classroom teachers (McLaughlin & Marsh, 1978; Cooper, 1988; Porter, 1987; Parkay, 1986).

Objectives

The purpose of this study was to investigate these four dimensions across the two cultures which are involved whenever school-university collaboration occurs. In this study, "collaboration" was defined as work on a joint school-university project for research or teacher training. A fully collaborative project required input from both school and university people in setting project goals, carrying out project activities, and sharing in project benefits. This definition is similar to Schlechty and Whitford's (1988) concept of "organic collaboration" or to Hord's (1986) concept of collaboration as opposed to cooperation. It probably had its first formal expression in Tikunoff and Ward's (Tikunoff, Ward, & Griffin, 1979) Interactive Research and Development on Teaching model. The operational definition of "collaboration experience" for this study was self-reported participation in a project fitting this description for at least 3 months, the length of one academic quarter.

¹For a more detailed review of the literature, including a discussion of this use of the term "culture" and a review of currently used definitions of school-university collaboration, see Brookhart & Loadman (1989).

Conceptually, the population of interest for this study was all educators associated with the 19 Midwest (U.S.A.) Holmes Group institutions, who will increasingly be called upon to practice collaboration because of their institution's membership in the group. The Holmes Midwest Region steering committee had targeted Holmes Group goal four, "to create partnerships between schools of education and schools," for study in 1987-88. The 19 Midwest Holmes institutions produce a disproportionately large number of teachers for the Midwest region of the United States. Operationally, this target population was defined as two groups. One group was full-time faculty members in the college or school of education in any one of the Midwest Holmes institutions. The second group in the target population was classroom teachers and building administrators in school districts which already have regular working relationships (e.g., as field work sites) with one of these institutions and would, therefore, be likely first-choice candidates for future collaboration as well. Building administrators were included with teachers in the school population because the support of administrators has been cited as important in project reports. District-level administrators were not included in the population because projects are usually carried out at the building level.

The purpose of this study, to gather evidence about the multicultural nature of school-university collaboration by investigating four dimensions, involved two objectives. First, the study had a methodological objective, to use summated rating scales to measure shared perceptions. The traditional methods for studying culture have been various kinds of observation and interviews, most suitable for single-case studies. But broad program and policy recommendations, such as the Holmes Group goal "to connect schools of education with schools" (Holmes Group, 1986), call for coordinated decisions which affect several, or even many, institutions at the same time. A paper-and-pencil approach permits broad sampling. A traditional method for indexing attitudes and perceptions is a summated rating scale (Likert, 1974). This study used a survey methodology and summated rating scales to test whether the four frames of reference described in the literature could be empirically validated across institutions. It was expected that a paper-and-pencil instrument could be written and that four scales could be established, confirmed with factor analysis, and supported with appropriate reliability and validity checks.


Further, the investigators hypothesized that a meaningful interpretation of the data collected would be consistent with the literature reviewed, offering empirical evidence to validate the constructs discovered there.

Second, the study had a substantive objective. The substantive research questions were as follows. What are the (1) Tempo, (2) Focus, (3) Reward, and (4) Power perceptions of Midwest Holmes-related public school teachers and administrators and university faculty, and do they differ with respect to position? Do these perceptions vary across institutions? Do they vary with collaboration experience? These questions are important to the Midwest Holmes Group because the answers have implications for the methods that participants in school-university collaboration will be able to use effectively. Results would also increase mutual understanding, raising awareness of four areas in which care must be taken to communicate clearly during school-university collaboration.

It was hypothesized that respondents' scale scores on these dimensions would show a main effect for position. This would be evidence that, despite individual variation and institutional ("subcultural") variation, universities and public schools in general have two different cultures. Public school educators should have a higher mean Tempo score, indicating a busier pace, and a lower mean Power score, indicating less sense of personal ability to effect change, than university educators. Classroom teachers should have a higher Focus score, signifying a more practical than theoretical orientation. It was also hypothesized that teachers' scale scores would vary with collaboration experience, with those who report having worked with such projects scoring higher on the Power scale.

Method

Instrument Development

A draft instrument was prepared. Four scales of 15 items each reflected agreement or disagreement with four sets of items (Likert, 1974). Four scale scores were defined as the sum of responses to the items on each scale. Items were scored from 1, "strongly disagree," through 5, "strongly agree." The Tempo scale measured speed and pace, with a higher score indicating a more frenetic work life. The Focus scale measured theory/practice concerns, with a higher score indicating a more practical orientation and more concern with detail and application. The Reward scale measured degree of concern with being rewarded for one's work, with a higher score indicating more interest in a variety of rewards. The Power scale measured the degree to which one perceived that one's activities yielded results, with a higher score indicating a more "can-do" feeling.

Content validity was established by consultation with a panel of teachers, school administrators, and university education faculty members. The instrument was piloted with 100 classroom teachers in Columbus, Ohio, and Pittsburgh, Pennsylvania. Exploratory factor analysis of the 60 items did find four factors. Scales were revised by inspecting factor loadings, item intercorrelations, contribution to internal consistency alpha, and item content. Revised scales, ranging in length from 11 to 13 items, each met an alpha reliability criterion of .70. The revised scales were rechecked for content validity by repeating the consultation with the experts. The revised scales were put together, alternating items to minimize context effects, in a 45-item instrument. Seven reverse-scored items were included. Items were not labeled by scale, so that confirmatory factor analysis could test whether respondents continued to see four different dimensions among the items. An explanation and directions page was added as a first page. An information page was added as a last page, asking about position and collaboration experience.

Sample

Responses were solicited from each of the 19 Midwest Holmes universities. All 19 agreed to participate. Contact people in each university provided information about education faculty. In 17 universities, faculty were sampled randomly by taking every other name on the School or College of Education faculty mailing list. In two universities, the contact people were willing only to provide teacher education faculty names; for both of these, the number sampled was about half the number of education faculty.

The survey was sent to a total of 960 university faculty, with institutional representation proportional to the size of the school or college of education. Results reported are based on 559 returns. University return rate was 58% overall. The range of the 19 individual university return rates was 24% to 76%. Eighteen of the 19 universities provided names of public school teachers and building administrators with whom they worked. Public school sample sizes were made roughly equivalent to their partner university sample sizes. Contact people used their knowledge of public school sites to intentionally sample some public school educators with collaboration experience and some without. Eleven of the contact people sent or delivered surveys to the public school sample themselves; seven provided mailing lists for individual mailings. A total of 1050 public school surveys were mailed. Reported results are based on 666 returns. Public school return rate overall was 63%. The range of the 18 individual public school groups' return rates was 31% to 96%.

Results

Statistical analysis of item responses proceeded in the following order. Reliability analysis included calculating Cronbach's alpha for each scale, examining item intercorrelations, and revising scales for final use. The scales used for MANOVA and follow-up analyses were as follows: Tempo (10 items, alpha = .76), Focus (11 items, alpha = .63), Reward (11 items, alpha = .62), and Power (13 items, alpha = .75).

To test whether the items did group into four dimensions, confirmatory factor analysis by the method of maximum likelihood was done to test whether a four-factor model did fit the correlation structure of the items. Some of the distributions of responses for individual items were quite skewed, depressing the value of item intercorrelations. Since the data departed from multivariate normality, it was expected that the confirmatory factor analysis would not be able to fit an entirely satisfactory model to the data, as tested by a chi-square goodness of fit test (Joreskog & Sorbom, 1986). This was indeed the case, but by another measure, root mean square residual, the model did not fit badly.

The Reward and Power scales were allowed to be correlated. A four-factor model had a root mean square residual of .086, even though chi-square with 944 degrees of freedom was 5992.42. Thus it appeared that a four-factor model was reasonable, given the skewed distributions in the data. To demonstrate that the poor fit of the four-factor model was because of lack of multivariate normality and not because another number of factors would be more appropriate, maximum likelihood exploratory factor analysis was done on the same item correlation structure, checking models with five, six, and seven factors. Even a seven-factor model did not attain a good fit, although seven factors explained 100% of the common variance. Chi-square for this model was 1898.10 with 696 degrees of freedom, and the Tucker-Lewis coefficient was .85, still short of the .90 or better generally considered to indicate an acceptable model. The exploratory factor analyses demonstrated that no model would fit the data very well. The skewed distributions were therefore judged to be the cause of the problem. The four-factor model fitted as well as could be expected, and it succeeded in reproducing the original correlation matrix fairly well, as measured by the root mean square residual. In addition, it was interpretable, and so the four-factor model specified in the confirmatory factor analysis was accepted as the best one.

Scale scores were standardized, after summing items, with a mean of 50 and a standard deviation of 10. This put each scale into the same metric. Scale intercorrelations were mixed, ranging from -.21 to .52. While the factors were not highly correlated, the correlations were not uniform. Thus, a multivariate analysis of variance (MANOVA) was chosen over the other option, univariate repeated measures analysis of variance, for an overall first look at the data. MANOVA makes use of dependent variable correlations, while the univariate repeated measures ANOVA assumes equal dependent variable correlations. Factors investigated with MANOVA included position (teacher, principal, or university faculty), collaboration experience (yes or no), and university affiliation. These factors matched the research objectives.
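The two steps just described, rescaling each summed scale to a mean of 50 and a standard deviation of 10 and then testing group effects multivariately, can be sketched as follows. This is an illustration under assumed column names, not the authors' analysis; statsmodels' MANOVA is used only as a readily available source of Wilks' Lambda tests, and this simplified between-groups formulation does not reproduce the article's treatment of the four scores as repeated measures ("Dimensions").

import pandas as pd
from statsmodels.multivariate.manova import MANOVA

def standardize_50_10(raw: pd.Series) -> pd.Series:
    """Rescale raw summed scale scores to mean 50, standard deviation 10."""
    return 50 + 10 * (raw - raw.mean()) / raw.std(ddof=1)

# df is assumed to hold raw scale sums plus 'position', 'collab', and 'university' codes.
# for scale in ["tempo", "focus", "reward", "power"]:
#     df[scale + "_t"] = standardize_50_10(df[scale + "_raw"])

# Three-factor multivariate test on the four standardized scores.
# fit = MANOVA.from_formula(
#     "tempo_t + focus_t + reward_t + power_t ~ C(position) * C(collab) * C(university)",
#     data=df,
# )
# print(fit.mv_test())  # reports Wilks' Lambda, F, and p for each term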


The results of a three-factor MANOVA on the four standardized scale scores indicated a main effect for position (Wilks' Lambda = .701, F(6,2254) = 73.13, p = .0000). There was a main effect for collaboration experience (Wilks' Lambda = .987, F(3,1127) = 4.89, p = .0022). There was a main effect for university (Wilks' Lambda = .939, F(54,3358.83) = 1.33, p = .0566); however, both two-way interactions involving university were also significant. The university by position interaction effect was significant (Wilks' Lambda = .888, F(87,3372.98) = 1.57, p = .0006). The university by collaboration interaction effect was also significant (Wilks' Lambda = .938, F(54,3358.83) = 1.35, p = .0468). Neither the three-way interaction (position by collaboration by university) nor the position by collaboration interaction was significant.

Mean scale scores on each dimension, for each position for each university, were examined to interpret the position by university interaction. Two things were apparent. First, cell sizes ranged from 1 to 59, causing some means to be stable estimates and some unstable. Second, among the cells with reasonable sample sizes, the pattern of mean scale scores by position for individual universities generally matched the overall pattern of means by position, with differences in degree rather than in direction. For example, each university's teacher group had a higher Focus score than that university's faculty group, but how much higher differed slightly among the universities. University affiliation thus did make a difference in scale scores, and the amount of difference related to position and collaboration experience varied among universities. Some of this variation may be attributed to the differences in public school sampling procedures among universities, and some probably reflects genuine differences among subcultures. Because most of the university variations were differences in degree, not in kind, scale scores were collapsed across universities for analysis of patterns of difference by position and collaboration experience. Interpretations and conclusions drawn from the results of these further analyses were tempered with the results of this three-factor MANOVA: Perceptions of educators from any given university can be partly explained by the larger educational culture to which they belong, but are partly influenced by the local setting or subculture.
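For readers less familiar with the statistic reported above: Wilks' Lambda for an effect is the ratio of the determinant of the error SSCP (sums of squares and cross-products) matrix to the determinant of the error-plus-hypothesis SSCP matrix, so values near 1 indicate little multivariate group separation and values near 0 indicate strong separation. The toy computation below, with made-up matrices for two dependent variables, is only meant to make that definition concrete; it is not taken from the article.

import numpy as np

# Hypothetical SSCP matrices for two dependent variables (illustration only).
H = np.array([[80.0, 30.0],     # hypothesis (between-groups) SSCP for some effect
              [30.0, 55.0]])
E = np.array([[900.0, 120.0],   # error (within-groups) SSCP
              [120.0, 760.0]])

wilks_lambda = np.linalg.det(E) / np.linalg.det(E + H)
print(round(wilks_lambda, 3))   # closer to 1.0 means a weaker multivariate effect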


A second MANOVA on the four standardized scale scores was done with position and collaboration as the factors of interest. The results in Table 1 show that there was no interaction between position and collaboration (Wilks' Lambda = .997, F(6,2434) = .54, p = .78). Therefore, the main effects were examined. The position effect was large (Wilks' Lambda = .659, F(6,2434) = 94.07, p = .0000). There was a significant collaboration effect (Wilks' Lambda = .993, F(3,1217) = 3.02, p = .03). The number of respondents in each group is presented in Table 2. After differences were identified with the multivariate procedure, however, univariate follow-up analysis of mean scale scores was chosen for interpretation. No assumptions about the data were violated by proceeding in this way, and the research question demanded a univariate treatment: Information about group differences was sought for particular scale scores, not some combination of them. Once position and collaboration effects had been identified with the MANOVA, univariate analyses of variance (ANOVAs) looked for the location of significant differences by position and collaboration, for each scale separately. The results of these ANOVAs and their follow-up contrasts, as well as the means themselves, are presented in Tables 3 through 10.

For smoothness of style, the word "significantly" will not be repeated in most of the rest of this Results section. If a mean score is pronounced "higher" or "lower" than another mean, the reader may assume they are statistically significantly different. As an aid in interpretation, the reader should keep in mind that 50 was the mean of the standardized scores for each scale. Group means above 50 represented relatively higher scores for the group's members on the scale. Group means below 50 represented relatively lower scores for the group's members on the scale.
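The univariate follow-ups described above amount to an ordinary two-way analysis of variance on each standardized scale, one scale at a time. The sketch below shows one way such a follow-up might be coded; it is an assumption for illustration, with the hypothetical column names carried over from the earlier sketches, not the authors' analysis.

import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

def follow_up_anova(df: pd.DataFrame, scale: str) -> pd.DataFrame:
    """Two-way ANOVA (position x collaboration) for one standardized scale score."""
    model = ols(f"{scale} ~ C(position) * C(collab)", data=df).fit()
    return anova_lm(model, typ=2)  # adjusted sums of squares

# Example use, assuming df holds the standardized scores and group codes:
# for scale in ["tempo_t", "focus_t", "reward_t", "power_t"]:
#     print(scale)
#     print(follow_up_anova(df, scale))
#
# Cell means corresponding to Tables 4, 6, 8, and 10 could be tabulated with:
# df.groupby(["position", "collab"])["tempo_t"].mean().unstack()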

Table 1
Two-factor MANOVA on Four Scale Scores, by Position and Collaboration Experience

Source                                  df        Wilks' Lambda    F        p
Dimensions¹                             3,1217    .976             10.15    .0001
Dimensions x position²                  6,2434    .659             94.07    .0000
Dimensions x collaboration³             3,1217    .993             3.02     .0288
Dimensions x position x collaboration   6,2434    .997             0.54     .7771

¹Dimensions: Four standardized scale scores (Tempo, Focus, Reward, Power).
²Position: Teacher, public school administrator, or university education faculty.
³Collaboration: Yes or No (Yes means at least 3 months experience).

Table 2
Number of Respondents in Each Group, by Position and Collaboration Experience

                          Collaboration
Position                  No      Yes     Total
Teacher                   321     215     536
School administrator      65      65      130
University faculty        199     360     559

Tempo

Standardized Tempo scale scores (see Tables 3 and 4) differed by position (F(2,1219) = 8.24). The highest mean Tempo score was that of school principals, 52.56. This was higher than the mean Tempo score for teachers, 50.36, which in turn was higher than the mean Tempo score for university faculty, 49.06. This pattern also held among those who did not have collaboration experience: Administrators' mean score was higher than teachers' mean score, which was higher than the faculty mean score. Faculty with collaboration experience, however, had higher Tempo scores than their colleagues without collaboration experience: 49.61 as opposed to 48.05, F(1,1219) = 3.12. Among those with collaboration experience, then, only the extreme scores were still significantly different (administrators' mean of 52.51 was higher than the faculty mean of 49.61, F(1,1219) = 4.69). The collaboration effect for faculty was not strong enough to cause a main effect for collaboration in the overall ANOVA because there was no difference at all in mean Tempo scores between teachers or administrators with and without collaboration experience.

Focus

Standardized Focus scale scores (see Tables 5 and 6) differed by position (F(2,1219) = 255.88) and by collaboration experience (F(1,1219) = 4.05). The Focus mean scores were the most extreme of the four scale means (see Figure 1). The difference was between teachers, whose mean Focus score (56.19) was much higher, and the other two groups, whose mean Focus scores (44.97 for administrators and 45.24 for faculty) were much lower. This pattern held in the contrasts involving those who had collaboration experience as well as those who had not had collaboration experience: Teachers had higher Focus scores. The collaboration effect was most noticeable between teachers with and without collaboration experience (F(1,1219) = 3.34). Regardless of position, however, the direction of the change in mean Focus scores between those with and without collaboration experience was the same. Mean scores among collaborators were higher than for non-collaborators.

Table 3
Follow-up Analysis of Tempo Scale Score Means, by Position and Collaboration Experience

Source                     df      SS (adjusted)    MS        F
Position                   2       1630.92          815.46    8.24*
Collaboration              1       71.21            71.21     0.72
Position x collaboration   2       123.44           61.72     0.62
Error                      1219    120652.19        98.98

Contrasts                                 F
Collab. vs. not for teachers              0.17
Collab. vs. not for administrators        0.00
Collab. vs. not for faculty               3.12**
Teachers vs. admin. with no collab.       3.11**
Admin. vs. faculty with no collab.        10.25*
Teachers vs. faculty with no collab.      5.83*
Teachers vs. admin. with collab.          1.89
Admin. vs. faculty with collab.           4.69*
Teachers vs. faculty with collab.         1.28
Teachers vs. administrators               4.89*
Administrators vs. faculty                14.56*
Teachers vs. faculty                      6.39*

*p < .05, **p < .10.

Table 4
Tempo Scale Score Means, by Position and Collaboration Experience

                          Collaboration
Position                  No       Yes      Total
Teacher                   50.22    50.58    50.36
School administrator      52.61    52.51    52.56
University faculty        48.05    49.61    49.06

[Figure 1. Mean scale scores, by position and collaboration experience. The figure plots mean Tempo, Focus, Reward, and Power scale scores for teachers, administrators, and faculty, each with and without collaboration experience.]

Reward

Standardized Reward scale scores (see Tables 7 and 8) showed both position (F(2,1219) = 5.50) and collaboration (F(1,1219) = 17.65) main effects. The mean Reward scale score for teachers who had collaboration experience was 50.26, as compared with 47.57 for teachers without collaboration experience, F(1,1219) = 9.61. The mean Reward scale score for administrators with collaboration experience was 51.77, as compared with 48.07 for administrators without collaboration experience, F(1,1219) = 4.59. The mean Reward scale score for faculty with collaboration experience was 52.21, as compared with 49.70 for faculty without collaboration experience, F(1,1219) = 8.36. The position effect was that teachers' mean Reward scale score (48.65) was lower than the mean score for faculty (51.32). Administrators' mean score (49.92) occupied a middle position not different from either of the extremes. Regardless of position, mean Reward scale scores were higher for those who had collaborated. Collaboration groups all had means above 50, and the non-collaboration groups all had means below 50. For the Reward scale, the collaboration effect was stronger than the position effect. For the Focus and Power scales, the other two scales which showed a collaboration effect, the position effect was the stronger effect. On Focus and Power, group means ranged above or below 50 according to position.

Table 5
Follow-up Analysis of Focus Scale Score Means, by Position and Collaboration Experience

Source                     df      SS (adjusted)    MS          F
Position                   2       35913.36         17956.68    255.88*
Collaboration              1       284.04           284.04      4.05*
Position x collaboration   2       48.29            24.15       0.34
Error                      1219    85543.49         70.18

Contrasts                                 F
Collab. vs. not for teachers              3.34**
Collab. vs. not for administrators        1.28
Collab. vs. not for faculty               0.69
Teachers vs. admin. with no collab.       102.08*
Admin. vs. faculty with no collab.        0.35
Teachers vs. faculty with no collab.      204.25*
Teachers vs. admin. with collab.          89.16*
Admin. vs. faculty with collab.           0.09
Teachers vs. faculty with collab.         255.37*
Teachers vs. administrators               190.68*
Administrators vs. faculty                0.05
Teachers vs. faculty                      456.79*

*p < .05, **p < .10.

Table 6
Focus Scale Score Means, by Position and Collaboration Experience

                          Collaboration
Position                  No       Yes      Total
Teacher                   55.65    57.00    56.19
School administrator      44.13    45.80    44.97
University faculty        44.84    45.46    45.24

Power

Standardized Power scale scores (see Tables 9 and 10) showed a main effect for position (F(2,1219) = 13.53) and a main effect for collaboration (F(1,1219) = 4.88). The mean Power scale score for teachers (48.28) was lower than the mean scores for administrators (52.36) and faculty (51.10), which were not different. The collaboration effect was that those with collaboration experience had higher mean Power scale scores than those without. This was particularly noticeable among administrators (F(1,1219) = 3.50). The mean Power scale score for administrators with collaboration experience was 53.98, as compared with 50.74 for administrators without collaboration experience.

Table 7
Follow-up Analysis of Reward Scale Score Means, by Position and Collaboration Experience

Source                     df      SS (adjusted)    MS         F
Position                   2       1068.01          534.01     5.50*
Collaboration              1       1711.98          1711.98    17.65*
Position x collaboration   2       36.60            18.30      0.19
Error                      1219    118263.63        97.02

Contrasts                                 F
Collab. vs. not for teachers              9.61*
Collab. vs. not for administrators        4.59*
Collab. vs. not for faculty               8.36*
Teachers vs. admin. with no collab.       0.14
Admin. vs. faculty with no collab.        1.34
Teachers vs. faculty with no collab.      5.72*
Teachers vs. admin. with collab.          1.17
Admin. vs. faculty with collab.           0.11
Teachers vs. faculty with collab.         5.28*
Teachers vs. administrators               1.08
Administrators vs. faculty                1.15
Teachers vs. faculty                      11.01*

*p < .05.

Table 8
Reward Scale Score Means, by Position and Collaboration Experience

                          Collaboration
Position                  No       Yes      Total
Teacher                   47.57    50.26    48.65
School administrator      48.07    51.77    49.92
University faculty        49.70    52.21    51.32



Table 9
Follow-up Analysis of Power Scale Score Means, by Position and Collaboration Experience

Source                     df      SS (adjusted)    MS         F
Position                   2       2638.80          1319.40    13.53*
Collaboration              1       475.87           475.87     4.88*
Position x collaboration   2       194.21           97.11      1.00
Error                      1219    118911.60        97.55

Contrasts                                 F
Collab. vs. not for teachers              1.17
Collab. vs. not for administrators        3.50**
Collab. vs. not for faculty               0.35
Teachers vs. admin. with no collab.       4.48*
Admin. vs. faculty with no collab.        0.00
Teachers vs. faculty with no collab.      10.41*
Teachers vs. admin. with collab.          13.54*
Admin. vs. faculty with collab.           4.11*
Teachers vs. faculty with collab.         8.26*
Teachers vs. administrators               16.97*
Administrators vs. faculty                1.89
Teachers vs. faculty                      18.64*

*p < .05, **p < .10.

Table 10
Power Scale Score Means, by Position and Collaboration Experience

                          Collaboration
Position                  No       Yes      Total
Teacher                   47.90    48.84    48.28
School administrator      50.74    53.98    52.36
University faculty        50.77    51.29    51.10

Discussion

All four scale scores were measures of self-perceptions: Scale scores represented how things felt to the respondent, not necessarily how they would appear to a more objective observer. This is appropriate to Geertz's (1973) concept of culture as a shared system for constructing meaning out of experiences. People will decide how to behave and how to interpret others' behavior based on how they perceive things. Perceptions are not entirely the product of individual psychology, however; they are based on shared frames of reference (Mitchell, Ortiz, & Mitchell, 1987).

The four dimensions or frames of reference indexed by the scale scores in this study were Tempo, Focus, Reward, and Power. The Tempo scale measured the self-perceived amount of time available to do one's allotted tasks, including how quickly one needed to move and how often one was interrupted. The Focus scale measured how practical or theoretical one perceived one's work. A more practical orientation implied work concerned with application of knowledge in concrete situations, with planning for specific activities. A more theoretical orientation implied work concerned with ideas themselves, rather than their application.

The Reward scale measured how much one agreed that a variety of different rewards were important. A more reward-oriented person was more likely to agree that respect, status, collegiality, money, and fame were important to him or her. The Power dimension measured how likely one was to agree that one's professional activities led to intended outcomes, that one could make decisions that would have effects, and that one could choose what ought to be done.

The results supported an affirmative answer to the methodological objective. The summated rating scales did function to measure shared perceptions, interpretable from a cultural perspective. This was anticipated, since the summated rating scale was developed for the purpose of measuring attitudes and perceptions (Likert, 1974).

Collaboration and School Culture dence that this measurement methodology was useful for investigating research questions set, like the ones for this study, in a cultural framework. First, the items used in scale development, and the four scales themselves, were a product of a literature review which emphasized the cultural perspectives of educators. Scales constructed in this way showed themselves to be amenable to the traditional development process for summated rating scales: pilot testing, inspection of alpha and item intercorrelations, and revision based on both statistical and content concerns. A confirmatory factor analysis established that, in final form, a four-factor model fitted the data. This means that among that group of items, the public school and university educators continued to respond along the four hypothesized dimensions. Respondents grouped items in their own minds into four different sets, corresponding to the hypothesized dimensions of professional T e m p o , Focus, Reward, and Power. The second kind of evidence that summated rating scales did tap shared frames of reference in the cultural sense was that the scale scores did correspond to the substantive hypotheses. This was evidence that the constructs indexed by the scale scores were validly measured by their respective scales. One traditional check for the construct validity of any measurement instrument is to see whether the scores behave as one would expect, given what the scores are supposed to measure. Although alpha reliability for the main study was lower than for the pilot test, the impact on results was negligible, since significant, hypothesized differences were consistently found. With higher reliability, probably even greater degrees of difference would have been found. The importance of demonstrating the usefulness of summated rating scales to measure shared perceptions should be emphasized. Rosenholtz (1987) insisted that research should be one tool used for making policy decisions. She argued that information should be gathered and analyzed before important policy changes involved' in educational reforms are set in place. Data gathered for policy research could help avoid many difficulties and setbacks, help predict areas of particular strength or potential weakness, and help select appropriate policies and methods for implementing them. If an edu-


If an educational reform is to touch many institutions, then it will not be possible to do in-depth case analysis for policy research; there will simply be too many settings involved. A paper-and-pencil approach would be useful in these cases, so these results have substantial implications for policy.

The results of this study also met the substantive objective of describing both school and university educators' perspectives on the four dimensions of Tempo, Focus, Reward, and Power. Patterns of differences were interpretable as cultural variation and supported the thesis that school-university collaboration is a multicultural activity. Return rate was 61%, so the issue of representativeness arises. Strictly speaking, the results may not reflect the Midwest Holmes population. It is likely that the results are fairly representative, however, because they support hypotheses which came from case studies and other literature. The large sample size also contributes to confidence in the generalizability of results.

Overall, scale scores showed both position-by-university and collaboration-by-university interaction effects and a main effect for university affiliation. This is evidence for what Feiman-Nemser and Floden called subcultures. They acknowledged a tension between the need to explore the commonalities among educators in various educational settings and the fact that each educational setting is different. Their metaphor for this was "finding common threads in a complex carpet" (Feiman-Nemser & Floden, 1986, p. 507). The interaction effects on the four dimensions offered evidence that subcultural variation existed. How separated teachers, administrators, and faculty were, in any given university and affiliated public school grouping, differed among the 19 Midwest Holmes institutions. How much of a difference collaboration made differed among universities. With few exceptions, however, the interactions were differences in degree. The patterns of mean scores for teachers, administrators, and faculty from each university were more or less severely different from one another, but the position effects were similar for each university. This implies that some universities will be working with groups which are more at odds than others as they initiate collaborative projects or continue work on present projects.



Comparisons of response patterns among individual universities cannot be definitively interpreted because faculty samples were random but public school samples were not. The existence of a university effect and position-by-university and collaboration-by-university interactions suggests two conclusions. First, subcultural differences were found among educational settings. When any particular university sits at the conference table with its public school colleagues to work on any collaborative project, participants must keep their eyes and ears open for the particular dynamics of the group. The second conclusion follows from the first. Given the findings of this study, it would be wise policy to enter any collaborative activities mindful of, and ready to deal with, the most severe differences found among positions. Better to be more sensitive than necessary than not sensitive enough. This study did highlight some important differences that held up despite local variations. There are some "common threads" in the warp and woof of the education tapestry.

The substantive results of this study confirmed all hypotheses. It had been hypothesized that teachers who collaborated would score higher on the Power scale than those who had not collaborated. This hypothesis was influenced by individual cases in which teachers who collaborated reported feeling more professional and capable (Parkay, 1986; Hering & Howey, 1982; Tikunoff, Ward, & Griffin, 1979). Collaboration showed a significant main effect on the Power scale scores in this study, but the effect was most pronounced for administrators. Since this was not an experimental study, these results cannot be pressed to say that the experience of collaboration caused an increase in educators' feelings of ability and accomplishment. This idea is nevertheless supported by these results in conjunction with the literature just cited. A recommendation for further research is to do a longitudinal study, starting with a group of educators prior to the commencement of a school-university collaborative project and following them through the process, to see whether collaboration can be demonstrated to cause an increase in Power perceptions.

Collaboration also had an effect on the Focus and Reward scale scores.

Those who reported they had collaboration experience had higher mean scores on each scale. These differences are interpretable within the theoretical framework of this study. All those who have collaborated, from universities and public schools, have experienced complicated committee work, arrangements, and new professional tasks. A higher Focus score indicated a more practical orientation, which may be an expected outcome of coordinating one's efforts with others. The Reward scale scores were correlated with Power scale scores (r = .52). This means the Reward scale scores, which were higher for those with collaboration experience than for those without, were moderately related to the Power scale scores for which this effect was expected. The literature which indicated that teachers who collaborated felt more "professional" (Parkay, 1986, p. 389) can sensibly be read to imply that teachers were pleased to be accorded professional status, indexed in this study by the Reward scale. On the other hand, in at least one case history of a particular school-university consortium, the report was that for school administrators and teachers, tangible rewards from participation were not forthcoming (Sirotnik & Goodlad, 1988, pp. 163-164). While a higher mean Reward scale score indicated those who collaborated were more interested in career rewards, this may remain at the wish level until school-university collaboration as an activity develops to a greater degree.

The Reward scale scores showed a primary effect for collaboration. A question worth pursuing is whether collaboration can be a strategy used in efforts to further professionalize teaching. Professional rewards are the result of professional practice, not the creators of it (Cooper, 1988). Cooper argued, therefore, that instead of imposing outside rewards in an effort to make teaching more closely match other professions, educators should search for practices which foster professional rewards. The surprisingly strong effect of collaboration on the Reward scale scores in this study suggests that school-university collaboration may work to help professionalize teaching. Further investigation of this possibility is warranted. A longitudinal study which examines the question of causality would be a good first step.

If it can be demonstrated that participation in collaborative projects causes educators to become more interested in professional rewards, then a second step should be taken, a look at the cognitive processes by which this change is mediated. Mitchell, Ortiz, and Mitchell (1987) have developed some categories for investigating what is rewarding to teachers and why. Perhaps these two lines of research can inform each other.

The strongest differences among scale scores were by position. Results by position were consistent with hypotheses. Teachers, school administrators, and university faculty did not approach the four dimensions with the same perspectives. The three groups all differed with respect to Tempo. Administrators reported running the fastest; teachers were in the middle; faculty perceived themselves as having the slowest work tempo. Implications for collaboration are simple but easy to overlook. Differing expectations for time commitments can have adverse effects on collaboration (Sirotnik & Goodlad, 1988). These results suggest that on project work, time expectations should be made explicit and not assumed. Once they are explicit, collaborators can expect to differ as they discuss time concerns. Faculty should realize that a quicker pace is the norm for administrators, and should be careful not to attribute a desire to move ahead quickly to impatience. Administrators and teachers should be warned that faculty who seem to move too slowly for their tastes are not dragging their feet, but are merely proceeding according to different norms. In light of these understandings, participants may make a clear and concerted effort to communicate time expectations with each other and work out what must be done on any given project.

The remaining three frames of reference demonstrated a pattern with perhaps more serious implications than the Tempo differences. The Focus results pointed out that teachers' orientations were very much more practical than those of either public school administrators or university faculty. This was the largest difference found in the study, as was expected. The practicality of teachers is well documented (Lieberman & Miller, 1984), as are the differences in practicality between teachers and researchers (Tikunoff, Ward, & Griffin, 1979). One teacher's written comment illustrated the effects of the difference in practicality between teachers and administrators, too.


The teacher wrote, by an item about knowing ahead of time what one would do on a given day, "I know what I plan to do, what I need to do and what I want to do. Poor planning and disorganization on the administrative level often makes my plans impossible to implement (interruptions, unannounced happenings on a daily basis)." This teacher clearly valued planning ahead and perceived administrator actions to mean the administrator was "poor" at this important function. The Focus differences found in this study suggest that the administrator did not intend to scuttle plans, but rather was operating from a different perspective, according to which activity planning was less central than it was for the teacher.

The Power scale results created a similar grouping; teachers perceived themselves as significantly less powerful than either administrators or faculty. This point has been made directly in essays on the topic (Cooper, 1988; Buchmann, 1985). In addition, a power differential between teachers and faculty has been cited as a problem in descriptions of particular collaborative projects (Sirotnik & Goodlad, 1988; Gifford & Gabelko, 1987). The Reward scale pattern was similar in separating teachers from faculty; faculty registered a larger interest in career rewards than did teachers. The administrators were in the middle on the Reward scale. Intrinsic rewards have been found to be more important to teachers than extrinsic ones in other studies, as well (Kottkamp, Provenzo, & Cohn, 1986; Lortie, 1975). In response to one item's query about promotions and raises, one teacher wrote: "Teachers are not rewarded in this way. Their reward is self-gratification!"

Notice the pattern here, emphasized by its being repeated three times. On three major dimensions of work perceptions, central to the culture of one's educational setting, teachers were different from both administrators and faculty, who were similar to each other. At work on collaborative projects, the teacher group can be expected to be the "odd man out." This implies teacher representatives in collaboration are at the greatest risk of misunderstanding proceedings and of being misunderstood. It implies that what teachers see as their contributions may not be evaluated against the norms under which they were conceived.



It implies that even with equal representation on committees, teachers may hold a minority opinion. The Focus, Reward, and Power differences are three specific areas in which committees may either flounder or, recognizing cultural differences for what they are, handle issues with sensitivity.

Teachers are not the only ones who risk being misunderstood in school-university collaboration if they are a cultural minority, however. Teachers may unfairly and too quickly dismiss faculty or administrators as unrealistic, judging being practical and classroom-centered as the only way to be correct. Administrators, faculty, and the whole school-university collaborative project will be at risk to the degree that participants talk past each other. Differences in norms, expectations, and perceptions may, if not made explicit, be taken as personal affronts. Communication may break down. Are there ways to use differences to maximize payoffs for all? That is one of the greatest challenges of the collaboration movement.

It is at this point that the results of this study can make concrete contributions. The study identifies four particular areas on which communication, including making values and assumptions clear, must occur. These results flag the dimensions of Tempo, Focus, Reward, and Power as particular issues for collaborators. School-university committees will want to include these in specific discussions of agenda items. They will want to consider what is normative for each participant, to discuss expectations and negotiate points of compromise so that all participants feel positive and gain from the collaborative experience. Each has a contribution to make to the work.

Finally, the results of this study emphasize that school-university collaboration, for all its difficulties, is a necessary method for educators to use if they really want different points of view represented. This study has given evidence that university and public school educators have different normative perspectives on four dimensions central to the definition of educational goals and strategies for meeting them: Tempo, Focus, Reward, and Power. There are two cultures, and within them many subcultures. For a project to be sensitive to and benefit from different points of view, it must include knowledgeable and open-minded participants from both cultures.

This study's results will help such participants begin dialogue with an idea of where their colleagues are coming from and thus will help maximize how far they can go.

References

Brookhart, S. M., & Loadman, W. E. (1989, March). School-university collaboration: Why it's multicultural education. Paper presented at the annual meeting of the American Educational Research Association, San Francisco.

Buchmann, M. (1985). Improving education by talking: Argument or conversation? Teachers College Record, 86, 441-453.
Cooper, M. (1988). Whose culture is it, anyway? In A. Lieberman (Ed.), Building a professional culture in schools (pp. 45-54). New York: Teachers College Press.
Feiman-Nemser, S., & Floden, R. (1986). The cultures of teaching. In M. C. Wittrock (Ed.), Handbook of research on teaching (3rd ed., pp. 505-526). New York: Macmillan.
Geertz, C. (1973). The interpretation of cultures. New York: Basic Books.
Gifford, B. R. (1986). The evolution of the School-University Partnership for Educational Renewal. Education and Urban Society, 19, 77-106.
Gifford, B. R., & Gabelko, N. H. (1987). Linking practice-sensitive researchers to research-sensitive practitioners. Education and Urban Society, 19, 368-388.
Hering, W. M., & Howey, K. R. (1982). Research in, on, and by teacher centers: Summary report. San Francisco: Far West Laboratory for Educational Research and Development.
Holmes Group. (1986). Tomorrow's teachers: A report of the Holmes Group. East Lansing: Michigan State University.
Hord, S. M. (1986). A synthesis of research on organizational collaboration. Educational Leadership, 43(6), 22-26.
Joreskog, K. G., & Sorbom, D. (1986). LISREL VI (4th ed.). Mooresville, IN: Scientific Software.
Kottkamp, R. B., Provenzo, E. F., & Cohn, M. M. (1986). Stability and change in a profession: Two decades of teacher attitudes, 1964-1984. Phi Delta Kappan, 67, 559-567.
Lieberman, A., & Miller, L. (1984). Teachers, their world, and their work. Alexandria, VA: Association for Supervision and Curriculum Development.
Likert, R. (1974). A method of constructing an attitude scale. In G. M. Maranell (Ed.), Scaling: A sourcebook for behavioral scientists (pp. 233-243). Chicago: Aldine Publishing.
Lortie, D. C. (1975). Schoolteacher. Chicago: University of Chicago Press.
McLaughlin, M. W., & Marsh, D. D. (1978). Staff development and school change. Teachers College Record, 80, 69-94.
Mitchell, D. E., Ortiz, F. I., & Mitchell, T. K. (1987). Work orientation and job performance: The cultural basis of teaching rewards and incentives. Albany: State University of New York Press.


Parkay, F. W. (1986). A school/university partnership that fosters inquiry-oriented staff development. Phi Delta Kappan, 67, 386-389.
Porter, A. C. (1987). Teacher collaboration: New partnerships to attack old problems. Phi Delta Kappan, 69, 147-152.
Rosenholtz, S. J. (1987). Education reform strategies: Will they increase teacher commitment? American Journal of Education, 95, 534-562.
Sarason, S. B. (1982). The culture of the school and the problem of change (2nd ed.). Boston: Allyn and Bacon.


Schlechty, P. C., & Whitford, B. L. (1988). Types and characteristics of school-university collaboration. Kappa Delta Pi Record, 24, 81-83.
Sirotnik, K. A., & Goodlad, J. I. (Eds.). (1988). School-university partnerships in action: Concepts, cases, and concerns. New York: Teachers College Press.
Tikunoff, W. J., Ward, B. A., & Griffin, G. A. (1979). Interactive research and development on teaching: Executive summary. San Francisco: Far West Laboratory for Educational Research and Development. (ERIC Document No. ED 186 386)

Received 15 August 1989