Studies in Educational Evaluation, Vol. 7, pp. 75-84. © Pergamon Press Ltd. 1981. Printed in Great Britain.
0191-491X/81/0401-0075$05.00/0
CLASSROOM ENVIRONMENTS AND PROCESSES IN A MULTILEVEL FRAMEWORK

B. Treiber
Department of Psychology, Heidelberg University

INTRODUCTION
Empirical teaching-learning research over the last five years has produced, accumulated and consolidated a small but increasingly coherent knowledge base concerning classroom process-outcome relationships, particularly linkages between teacher behavior and student learning of basic (maths and reading) skills in the elementary grades. This becomes quite obvious when summary statements of earlier (Dunkin & Biddle, 1974; Heath & Nielson, 1974; Rosenshine & Furst, 1973) and more recent research reviews (Brophy, 1979; Gage, 1978; Good, 1979; Rosenshine, 1976) are contrasted. The results of several major correlational as well as (quasi-)experimental studies are often reduced to a now well-established relationship between time on task, student engagement, teacher classroom management and instruction skills, and student learning, which has come to be known as "direct instruction" (DI) (Rosenshine, 1979). Components of a Direct Instruction model also serve as the conceptual and theoretical framework of the IEA Classroom Environment Study (cf. Postlethwaite et al., 1979).

Closer examination of several model implications of direct instruction reveals a number of shortcomings concerning their internal and external validity (Anderson, 1980; Berliner, 1980):

1. It is still unclear how the individual achievement determinants selected in the DI model can best be represented empirically in descriptive indicator models, connected with each other, and finally specified in their functional status for cognitive development. And yet, these variables are of sufficient prognostic value, at least when taken together. This is essentially due to the fact that, in empirical studies, individual process variables of direct instruction are usually clustered, summed together, or otherwise combined into large blocks of variables (cf. Coleman, 1975; Lohnes, 1979) before they enter structural equation analyses. Hence, they overdetermine student achievement effects and thus compensate for the notorious explanatory weakness of individual predictors. But their overlap (indicated by a moderate to high multicollinearity) also masks the theoretical and analytical weakness of the DI model, which is, after all, of only suboptimal informative value: both too expensive in its explanatory components and too unproductive in the empirical implications to be derived from it. This input-output ratio can only be improved by slimming down the predictor side of the DI model and by sharpening its internal validity.

2. Empirical tests of DI model versions have been most successful in explaining the mean student achievement level of basic skills in the elementary grades, typically (but not always) in schools serving primarily low socioeconomic status populations (Brophy, 1979). It is uncertain, however, whether the findings explicated within these narrow boundary lines can also be generalized
over the most important analytical facets (Erlich & Shavelson, 1978; Gage, 1979; Shavelson & Atwood, 1977), i.e. remain invariant over persons, situations, learning tasks, settings, time, and measurement methods. Hence, generalization studies will have to delineate the exact application range (or external validity) of theoretical assumptions concerning the effectiveness of DI.
DIRECT INSTRUCTION IN INTERNATIONAL COMPARATIVE STUDIES

Future classroom teaching-learning research will be occupied, then, with analyzing and improving the internal and external validity of the DI model. For that, international comparative studies are of utmost importance. The following points may serve to illustrate this argument and to describe in more detail some of the options cross-national educational research may have in validating the DI model.
Internal Validity

Cognitive achievement effects have mostly been conceptualized as learning growth, increment or yield and then assessed in cross-sectional one-wave or longitudinal two-wave designs. Several more or less problematic versions for measuring achievement change have also been used rather routinely (e.g., difference or residualized achievement test scores or analysis-of-covariance correction procedures). These data assessment and analysis procedures are rarely appropriate for the reconstruction of individual achievement development. To avoid some of their problems, it is therefore frequently suggested (cf. Baltes et al., 1979) that longitudinal multi-wave designs be used, which allow a less restricted assessment of cognitive developmental characteristics in specific student populations.

The estimation of schooling effects, even on a strictly descriptive level (i.e. without specifying the exact effect conditions in classroom instruction or interaction), is bound to consider the simultaneous membership of students in classrooms, schools, and school systems (Barr, 1980a, b; Ferguson, 1980). Even the merely quantitative length of schooling may have differential effects in various educational (and hence analytical) units. To assess these effects properly it is necessary to disentangle global schooling effects in hierarchical models of additive variance decomposition (Brimer et al., 1979; Rakow et al., 1978; Kellaghan et al., 1979; Haney, 1974); a minimal sketch of such a decomposition is given below. Previous IEA studies used only separate between-students, between-schools, and between-school-systems analyses (Peaker, 1975). These are obviously only insufficient approximations to a multilevel decomposition of schooling effects, which may be done in conventional nested analysis-of-variance designs. It should be of particular interest for international studies of educational effects to assess and compare the relative importance of school system, school and classroom influences simultaneously over several effect criteria and different student populations. This may also give useful information about the specific level on which educational interventions appear most promising for a given target population.

Educational effect variables have mostly been measured by using mean student achievement in classrooms, schools, school districts, or even in school systems. This restriction to only a single effect variable has both research-strategic and practical disadvantages. Educational inputs often have different cognitive and affective effects on the student side. With only one effect criterion, there is always the danger, then, that their effect potential is underestimated or even completely misjudged if other, more important effects are neglected and masked (Klitgaard, 1975; Lohnes, 1972).
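The following is a minimal sketch, in Python and on simulated data, of the kind of additive variance decomposition over nested analytic units referred to above (students within classes within schools within school systems). All column names and numbers are hypothetical assumptions made for illustration only; the sketch is meant to make the logic of the nested between/within partition concrete, not to reproduce any particular IEA analysis.

```python
# Sketch: naive additive decomposition of achievement variation over
# nested levels (system > school > class > student). All data are simulated
# and all names are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

rows = []
for system in range(2):                      # 2 school systems
    system_eff = rng.normal(0, 2)
    for school in range(5):                  # 5 schools per system
        school_eff = rng.normal(0, 2)
        for classroom in range(4):           # 4 classes per school
            class_eff = rng.normal(0, 3)
            for _ in range(25):              # 25 students per class
                score = 50 + system_eff + school_eff + class_eff + rng.normal(0, 8)
                rows.append((system, school, classroom, score))

df = pd.DataFrame(rows, columns=["system", "school", "class", "score"])

# Attach the mean of each nesting unit to every student record.
df["system_mean"] = df.groupby("system")["score"].transform("mean")
df["school_mean"] = df.groupby(["system", "school"])["score"].transform("mean")
df["class_mean"] = df.groupby(["system", "school", "class"])["score"].transform("mean")

grand_mean = df["score"].mean()
total_ss = ((df["score"] - grand_mean) ** 2).sum()

components = {
    "between school systems":         ((df["system_mean"] - grand_mean) ** 2).sum(),
    "between schools within systems": ((df["school_mean"] - df["system_mean"]) ** 2).sum(),
    "between classes within schools": ((df["class_mean"] - df["school_mean"]) ** 2).sum(),
    "within classes (students)":      ((df["score"] - df["class_mean"]) ** 2).sum(),
}
for level, ss in components.items():
    print(f"{level:32s} {ss / total_ss:6.1%} of total variation")
```

Because the four components sum exactly to the total sum of squares, the printout can be read as a rough profile of the relative importance of the nested levels, which is the kind of comparison the text calls for.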
On the other hand, educational goals are usually quite complex and embrace, even in the cognitive area, a plurality of differing, and sometimes conflicting, ends (e.g. the mastery of basic skill requirements by all students, the attainment of high qualification standards by a large student quota, the equalization of educational opportunities, the furtherance of specially talented student groups, etc.). Educational research, by its focus on mean student achievement alone, has tended to consider in its analytic designs only a very small segment of a much broader goal complex. Ironically, mean achievement may, at least in comparing educational system effectiveness between OECD countries, have lost its previous prevalence in favor of equality of opportunities or the special education of intellectually gifted youth: neither educational criterion would be covered by previous research that concentrates on increases in mean achievement only. On the other hand, the simultaneous pursuit of equalizing educational opportunities, improving their quality and extending their quantity confronts developing countries with a severe dilemma which, in the face of scarce resources, leaves only a number of suboptimal solutions (cf. Naik, 1979).

It seems promising, then, to develop appropriate indicator models for different effect types and to compare different educational outcomes of Direct Instruction in some form of 'social accounting' (a sketch of such an accounting over several classroom-level criteria is given at the end of this subsection). It is perfectly conceivable that increases in mean achievement come at the expense of equal educational opportunities or of the special opportunities for intellectually talented (Stanley, 1980) or handicapped (Zigmond et al., 1980) students. Yet these social dilemmas have to be assessed before solutions for optimizing goal attainment can be developed. Teaching-learning models with only one effect criterion are of limited value (Brown & Saks, 1975a, b; Ritzen et al., 1979).

It is not only the dependent variable side of DI models, however, that has to be re-examined, but also their context and independent variables and their interaction with student, family, and peer group variables.

* It is still unclear, for example, under what organizational, institutional and interpersonal conditions components of DI are at all effective. Obvious constraint variables are the financial or personnel resources of schools, the size and composition of classes according to social background, ability, and achievement motivation, or the total instruction time available. According to Dahllöf (1971) and Lundgren (1972, 1977), possible options in the case of limited instructional resources are: omitting certain subject areas, lowering the level of achievement standards, or neglecting low-ability students. In no case, however, do classroom teaching-learning processes remain uninfluenced by these constraints: they often determine the instructional pace directly and, thus, also the mean achievement as well as the achievement dispersion within a given class (Barr & Dreeben, 1978). Though some of these constraint implications for the utilization of direct instructional and interactional strategies are obvious, there are scarcely any data demonstrating the precise nature, or explaining the mediational mechanisms, of these boundary conditions on educational effects.

* A second, equally unsettled research problem is the choice of appropriate indicators for those theoretical constructs which are invariant components of DI models.
Several motivational and instructional quality components do overlap in the core assumptions of a number of prominent teaching-learning models (cf. Berliner, 1979; Bloom, 1976; Carroll, 1963; Cooley & Leinhardt, 1980; Harnischfeger & Wiley, 1977; Walberg, 1976, 1981). Earlier empirical tests of their usefulness in predicting student learning were mostly restricted to the assessment of some surface-structure variables with little or no theoretical background (Brophy, 1979; Fasen, 1979). Little attention was also paid to reconciling or otherwise clarifying the obvious discrepancies between different assessment approaches (i.e. between teacher self-perception, student perception and objective classroom observation) (cf. Borich et al., 1978; Seidman et al., 1979). It is only in more recent investigations that narrowly circumscribed classroom teaching events
(like teacher reinforcement behavior) are anchored in a coherently organized theoretical model (of causal attributions, social cognitions, reference-group mechanisms, etc.) (cf. Weiner, 1979). Only in this case can it be specified sufficiently for what student subgroups and under what classroom contextual and situational conditions the 'same' reinforcing behavior of a teacher will have an informative or evaluative function for a student, be translated and internally represented as such, and subsequently motivate or de-motivate future achievement-related student behavior. Another series of analyses tries to explicate the construct of 'instructional quality' and applies various notational systems for the deep-structural description of domain-specific knowledge bases (Cooley et al., 1979; Leinhardt & Seewald, 1980; Leinhardt et al., 1979; Porter et al., 1978; Schmidt, 1978). Teacher classroom utterances, passages in text material, or student engaged learning time can then be related to specific deep-structural elements. It is apparent that both research programs which analyze motivational and instructional quality aspects of classroom teaching greatly depend in their progress on parallel developments of basic research in general and developmental cognitive psychology (cf. Nicholls, 1980a, b; Glaser & Pellegrino, in press; Pellegrino & Glaser, 1979). This is especially clear for achievement test items whose effective solution depends on the availability and accessibility of semantic, declarative and procedural knowledge bases, appropriate optional operation systems, and heuristic, executive or meta-cognitive problem-solving strategies.

* Another consequence of further elaborating previous macro-constructs of the DI model is that their formal status in a multilevel network of schooling conditions can be clarified. Here, indicator models for aggregate, individual, frog-pond and composition effects can be contrasted (Firebaugh, 1979a, b; Karweit & Fennessey, 1978); a minimal sketch contrasting such specifications follows below. Components of direct instruction have previously been considered only as aggregate treatment variables. Future studies will have to be more precise in the specification of direct instruction effects and their mediation. This may require the proper representation of classroom effect conditions on different levels of analysis (Barr, 1980a, b; Burstein, 1978; Cooley, 1979; McPartland & Karweit, 1979).

* Empirical investigations of DI models have mostly been under-controlled with regard to the antecedent or concurrent influences of individual student attributes, family background, peer groups, and the local opportunity structure of the community. Only a few (mostly student ability or social status) parameters have usually been included as covariates and entered the resulting structural equations. This consideration of out-of-school variables in empirical tests of instructional models may be insufficient for at least two reasons. The usual background assumption which justifies the choice of these covariates postulates either a high (dispositional) stability or, in the case of socio-structural variables, a direct influence on student achievement. This is clearly questionable (Boulanger, 1980): nominally stable cognitive ability and aptitude attributes of students can still be changed in school and altered in their impact on student achievement. And socio-economic or socio-cultural variables of family background are not in themselves of direct functional significance for the achievement process.
Rather, their impact is mediated by more proximal variables of the home environment, i.e. by parental attitudes and practices, or by differential treatment in interaction with teachers and co-students. To correct the notorious under-control of non-experimental classroom teaching-learning research, it seems necessary in future model tests to use relatively complex variable structures that simultaneously integrate different types of (family, student, peer group, classroom and school) determinants of the achievement process and extend them over different levels of analysis (Cronbach, 1976; Keesling, 1976, 1978; Keesling & Wiley, 1976; Treiber, 1980; Weinert & Treiber, 1980).
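The following is a minimal sketch, in Python and on simulated data, of how individual, composition (aggregate) and frog-pond specifications in the sense of Firebaugh (1979a, b) can be contrasted within one such multilevel variable structure. All variable names (x, y, class_id) are hypothetical; the sketch illustrates the logic only and is not a proposal for an actual model test.

```python
# Sketch: contrasting individual, composition (aggregate) and frog-pond
# regression specifications for students nested in classes.
# All data are simulated; variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

n_classes, n_students = 40, 25
class_id = np.repeat(np.arange(n_classes), n_students)
class_context = rng.normal(0, 1, n_classes)[class_id]                 # class composition
x = class_context + rng.normal(0, 1, class_id.size)                   # e.g. pretest ability
y = 0.5 * x + 0.3 * class_context + rng.normal(0, 1, class_id.size)   # e.g. posttest

df = pd.DataFrame({"class_id": class_id, "x": x, "y": y})
df["x_classmean"] = df.groupby("class_id")["x"].transform("mean")     # aggregate indicator
df["x_relative"] = df["x"] - df["x_classmean"]                        # frog-pond indicator

m_individual = smf.ols("y ~ x", df).fit()                        # individual effect only
m_composition = smf.ols("y ~ x + x_classmean", df).fit()         # adds class composition
m_frogpond = smf.ols("y ~ x_relative + x_classmean", df).fit()   # relative standing + context

for name, model in [("individual", m_individual),
                    ("composition", m_composition),
                    ("frog-pond", m_frogpond)]:
    print(name, model.params.round(3).to_dict())
```

The composition and frog-pond equations are re-parameterizations of one another; which is preferred depends on whether the absolute class context or the student's relative standing within the class is taken to be the theoretically meaningful indicator.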
* Another problem of connecting different types of explanatory variables in multilevel teaching-learning models is their implicit assumption that a parsimonious set of necessary conditions of the achievement process is thereby chosen (Weinert, 1980). This is an unusually strong assumption which presumably cannot be maintained. Particularly with learning tasks or achievement test items of medium difficulty, it is more realistic to see student ability and effort parameters, their active learning time, the quantity and quality of instruction, and out-of-school assistance by family and peer-group members as merely sufficient achievement determinants which can (partially) be substituted for each other (Weinert & Petermann, 1980; Weinert et al., 1980; Treiber & Schneider, 1980). Only with this assumption can the striking elasticity and flexibility of the achievement process be fully understood and adequately represented in explanatory models.

With this liberalization of achievement explanations that would otherwise have used only necessary conditions, it is of particular importance to find out which educational instances (school, family or peer groups) fulfill the hypothesized necessary functions for the cognitive development of specific student populations. It is around this hard core of necessary achievement conditions (not to be substituted or compensated by antecedent or concurrent variables) that universalistic models of schooling can be developed. Sufficient conditions, however, will have to be represented in versions of compensation models which relate the changing function of individual explanatory parameters to socio-structural or institutional characteristics of the larger context of the schooling process.

The point is, then, that only international comparative studies will be able to discriminate between both types of achievement models. Re-analyses of previous IEA data (for the case of the USA, Sweden and England) show, for example (cf. Burstein et al., 1978, 1979), that, with reduced social disparities in a welfare state like Sweden and a strongly centralized administrative school control (leaving little between-school variance), both social background and school input factors account for relatively little of the total student achievement variance, leaving all the more influence to instructional or interpersonal factors within schools. Frequently, classroom studies report explanatory variable structures of student learning that differ considerably between schools and classrooms in the relative weight given to specific achievement conditions. This may now be understood as a result of the substitution of variables so typical in explanations of cognitive development (Weinert & Treiber, 1980). The greatest benefit to be expected from international classroom studies would then be to predict the onset and degree of this substitution process from differences between educational or even societal systems.

It becomes apparent now that the internal validity of the previously most successful teaching-learning model, which explains student achievement by direct instruction parameters in classrooms, can be improved considerably in an international comparative study. This is particularly so because the future elaboration of the DI model will transform its oversimplified core assumptions into a complex network of both necessary and sufficient achievement conditions, stretched over multiple effect criteria, measurement points, indicators, and levels of analysis.
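To make concrete what a 'social accounting' over multiple effect criteria, as discussed earlier in this subsection, could look like at the lowest level of aggregation, the following sketch (in Python, on simulated data) computes three classroom-level criteria side by side: mean achievement, achievement dispersion, and the quota of students reaching a basic standard. The mastery cut-off and all names are arbitrary assumptions made for illustration only.

```python
# Sketch: several effect criteria per classroom instead of the mean alone.
# Data are simulated; the cut-off of 60 points is an arbitrary assumption.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "class_id": np.repeat(np.arange(10), 30),      # 10 classes, 30 students each
    "score": rng.normal(60, 12, 300),
})

accounting = df.groupby("class_id")["score"].agg(
    mean_achievement="mean",                       # the usual single criterion
    dispersion="std",                              # within-class inequality
    mastery_quota=lambda s: (s >= 60).mean(),      # share reaching a basic standard
)
print(accounting.round(2))
```

Ranking the same classrooms by each column typically produces different orderings, which is precisely the kind of goal conflict the text argues has to be assessed before solutions for optimizing goal attainment can be developed.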
External Validity of the DI Model

The most obvious benefit of international studies of teaching-learning models lies, of course, in testing the external validity of these models. The usual generalization facets of person, task, situation, setting, time and measurement instrument can all be examined systematically in theory-guided comparisons of instructional effects. This point has already been dealt with at length elsewhere (Postlethwaite et al., 1979; Travers, 1980). It may suffice here to touch briefly on two arguments.

* One refers to the obvious ecological constraints of learning tasks (as they are indexed by traditional norm-referenced achievement test items) for the study of cognition in everyday life. The classical argument here asserts that a major
benefit of education is that school learning transfers to new extra-curricular situations. Yet the empirical evidence which could be relevant to this assumption is scarce and often inappropriate. In particular, the analogues of achievement-test tasks in everyday life are obscure. Hence, this still leaves open the possibility that schooling may have no effects, or even negative effects, on intellectual development (see Ginsburg, 1979).

* Another point is to extend earlier arguments concerning context or setting differences to a consideration of period and/or cohort effects (cf. Baltes et al., 1980; Schaie & Willis, 1979) in time-dependent analyses of student achievement and to connect socio-cultural change with cognitive development in sequential designs. This point has been most neglected in teaching-learning research and may therefore have contributed to the under-estimation of the time- and context-specificity of teaching-learning models (Cronbach, 1975). On the other hand, the more recent and still increasing speculations concerning the adequacy of nomothetic generalizations in educational research in particular, and in social science research in general, do require rigorous empirical tests. Obviously, the IEA Classroom Environment Study would be an excellent testing ground for just these metatheoretical and theoretical claims.
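As a small illustration of the sequential designs mentioned in the last point, the following sketch (in Python, with arbitrary cohort and measurement years chosen purely for illustration) lays out a cohort-sequential observation plan. Each cell shows the age that would be observed for a given birth cohort at a given measurement occasion, which makes the confounding of age, period, and cohort, and the way a sequential design partially disentangles it, visible at a glance.

```python
# Sketch: layout of a simple cohort-sequential design.
# Birth cohorts and measurement years are arbitrary illustrative values.
import pandas as pd

cohorts = [1968, 1970, 1972]          # birth cohorts (rows)
waves = [1978, 1980, 1982, 1984]      # measurement occasions (columns)

design = pd.DataFrame(
    [[wave - cohort for wave in waves] for cohort in cohorts],
    index=pd.Index(cohorts, name="birth cohort"),
    columns=pd.Index(waves, name="measurement year"),
)
print(design)  # the same age is observed in different cohort-by-period combinations
```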
CONCLUSIONS

None of the points raised here is new, though some of the solutions are. It is important, however, that their coordinated implementation in an international comparative study may greatly contribute to a now much-needed improvement of the DI model, which has become the most promising prototheoretical framework for modeling student achievement in a classroom teaching-learning context. The research options which this paper has suggested have both methodological and theoretical implications, i.e.:

-- The application of more sophisticated data analysis procedures to already given data sets. This includes a hierarchical decomposition of schooling effects, the estimation of multiple effect criteria, and the identification of level-specific effect determinants (such as individual, aggregate, reference-group, and group composition conditions).

-- The elaboration of the traditional design for data assessment, tied more or less closely to a theoretical background. It encompasses more measurement points in versions of an age-cohort sequential design, but also allows for the sampling of more fine-grained deep-structural dependent, independent, mediational, and framework variables of cognitive outcomes of schooling, whose nomological network may also entail the possibility of variable substitution and transfer.

It is this theoretical augmentation and differentiation of the original DI model that may finally result in the expected progress of research into classroom teaching-learning processes.
REFERENCES

ANDERSON, L.W. New directions for research on instruction and time-on-task. Paper presented at the Annual Meeting of the American Educational Research Association, Boston, 1980.
BALTES, P.B., CORNELIUS, S.W., & NESSELROADE, J.R. Cohort effects in developmental psychology. In J.R. Nesselroade & P.B. Baltes (Eds.), Longitudinal research in the study of behavior and development. New York: Academic Press, 1979, pp. 61-88.
BALTES, P.B., REESE, H.W., & LIPSITT, L.P. Life-span developmental psychology. Annual Review of Psychology, 1980, 31.
BARR, R. School, class, group, and pace effects on learning. Paper presented at the Annual Meeting of the American Educational Research Association, Boston, 1980 (a).
BARR, R. The productivity of multilevel educational organizations. Paper presented at the Annual Meeting of the American Educational Research Association, Boston, 1980 (b).
BARR, R., & DREEBEN, R. Instruction in classrooms. In L.S. Shulman (Ed.), Review of Research in Education, Vol. 5. Itasca, Ill.: Peacock, 1977, pp. 89-162.
BERLINER, D.C. Tempus educare. In P.L. Peterson & H.J. Walberg (Eds.), Research on teaching. Berkeley: McCutchan, 1979.
BERLINER, D.C. Teacher behavior: Research trends, research needs. Paper presented at the Annual Meeting of the American Educational Research Association, Boston, 1980.
BLOOM, B.S. Human characteristics and school learning. New York: McGraw-Hill, 1976.
BORICH, G.D., MALITZ, D., & KUGLE, C.L. Convergent and discriminant validity of five classroom observation systems: Testing a model. Journal of Educational Psychology, 1978, 70, 119-128.
BOULANGER, F.D. The relationship of ability and instructional constructs to learning in science and general education. Paper presented at the Annual Meeting of the American Educational Research Association, Boston, 1980.
BRIMER, A., MADAUS, G.F., CHAPMAN, B., KELLAGHAN, T., & WOOD, R. Sources of differences in school achievement. Windsor: National Foundation for Educational Research, 1979.
BROPHY, J.E. Teacher behavior and its effects. Journal of Educational Psychology, 1979, 71.
BROWN, W., & SAKS, D.H. The production and distribution of cognitive skills within schools. Journal of Political Economy, 1975, 83, 571-593 (a).
BROWN, W., & SAKS, D.H. Proper data aggregation for economic analysis of school effectiveness. Public Data Use, 1975, 3, 13-18 (b).
BURSTEIN, L. The role of levels of analysis in the specification of educational effects. University of Chicago, Finance and Productivity Center, 1978.
BURSTEIN, L., FISCHER, K.B., & MILLER, M.D. Social policy and school effects: A cross-national comparison. Paper presented at the IX World Congress of Sociology, Uppsala, Sweden, 1978.
BURSTEIN, L., FISCHER, K.B., & MILLER, M.D. The multilevel effects of background on science achievement: A cross-national comparison. University of California, Los Angeles. Unpublished paper, 1979.
CARROLL, J.B. A model of school learning. Teachers College Record, 1963, 64, 723-733.
COLEMAN, J.S. Methods and results in the IEA studies of effects on school learning. Review of Educational Research, 1975, 45, 335-386.
COOLEY, W.W. Unit of analysis. Paper presented at the Johns Hopkins University National Symposium of Educational Research, Washington, DC, 1979.
COOLEY, W.W., & LEINHARDT, G. The Instructional Dimensions Study. Educational Evaluation and Policy Analysis, 1980, 2(1), 7-25.
COOLEY, W.W., LEINHARDT, G., & ZIGMOND, N. Explaining reading performance of learning disabled students. Pittsburgh, PA: University of Pittsburgh, Learning Research and Development Center, 1979.
CRONBACH, L.J. Beyond the two disciplines of scientific psychology. American Psychologist, 1975, 30, 116-127.
CRONBACH, L.J. (with assistance of J.E. DEKEN & N. WEBB). Research on classrooms and schools: Formulation of questions, design and analysis. Occasional Paper. Stanford Evaluation Consortium, 1976.
DAHLLÖF, U. Ability grouping, content validity, and curriculum process analysis. New York: Teachers College Press, 1971.
DUNKIN, M.J., & BIDDLE, B.J. The study of teaching. New York: Holt, Rinehart & Winston, 1974.
ERLICH, O., & BORICH, G. Occurrence and generalizability of scores on a classroom interaction instrument. Journal of Educational Measurement, 1979, 16(1), 11-18.
FERGUSON, T.L. Variation in grouping and instructional format patterns: Using a multilevel framework. Paper presented at the Annual Meeting of the American Educational Research Association, Boston, 1980.
FIREBAUGH, G. Groups as contexts and frog ponds. Unpublished ms., Vanderbilt University, 1979 (a).
FIREBAUGH, G. Assessing group effects: A comparison of two methods. Sociological Methods & Research, 1979, 7, 384-395 (b).
GAGE, N.L. The scientific basis of the art of teaching. New York: Teachers College Press, 1978.
GAGE, N.L. The generalizability of dimensions of teaching. In P.L. Peterson & H.J. Walberg (Eds.), Research on teaching: Concepts, findings, and implications. Berkeley, CA: McCutchan, 1979.
GINSBURG, H.P. Commentary to D. Sharp, M. Cole & C. Lave, Education and cognitive development: The evidence from experimental research. Monographs of the Society for Research in Child Development, 1979, 44(1-2, Serial No. 178).
GLASER, R., & PELLEGRINO, J. Improving the skills of learning. Intelligence (in press).
GOOD, T.L. Teacher effectiveness in the elementary grades. Journal of Teacher Education, 1979, 30(2), 52-64.
GOULET, L.R. Longitudinal and time-lag designs in educational research: An alternate sampling model. Review of Educational Research, 1975, 45, 505-523.
HANEY, W. Units of analysis issues in the evaluation of Project Follow Through. Cambridge, Mass.: Huron Institute, 1974.
HARNISCHFEGER, A., & WILEY, D. Conceptual issues in models of school learning. Studies of Educative Processes, No. 10. Chicago, Ill.: CEMREL, Inc., 1977.
HEATH, R.W., & NIELSON, M.A. The research base for performance-based teacher education. Review of Educational Research, 1974, 44, 463-484.
HUSEN, T. General theories in education: A twenty-five years perspective. International Review of Education, 1979, 25, 325-345.
KARWEIT, N., & FENNESSEY, J. Examination of a pragmatic framework for studying school effects. Paper presented at the Annual Meeting of the American Educational Research Association, Toronto, 1978.
KEESLING, J.W. Components of variance models in multilevel analyses. Paper presented at a conference on Methodology for Aggregating Data in Educational Research, Stanford, 1976.
KEESLING, J.W. Some explorations in multilevel analysis. Paper presented at the Annual Meeting of the American Educational Research Association, Toronto, 1978.
KEESLING, J.W., & WILEY, D.E. Regression models of hierarchical data. Paper presented at the Annual Meeting of the Psychometric Society, Stanford, 1974.
KELLAGHAN, T., MADAUS, G.F., & RAKOW, E.A. Within-school variance in achievement: School effects or error? Studies in Educational Evaluation, 1979, 5, 101-107.
KLITGAARD, R.E. Going beyond the mean in educational evaluation. Public Policy, 1975, 23, 59-79.
LEINHARDT, G., & SEEWALD, A.M. Overlap: What's tested, what's taught. Paper presented at the Annual Meeting of the American Educational Research Association, Boston, 1980.
LEINHARDT, G., ZIGMOND, N., & COOLEY, W.W. Reading instruction and its effects. Paper presented at the Annual Meeting of the American Educational Research Association, Boston, 1980.
LOHNES, P.R. Statistical descriptors of school classes. American Educational Research Journal, 1972, 9, 547-556.
LOHNES, P.R. Factorial modeling in support of causal inference. American Educational Research Journal, 1979, 16, 323-340.
LUNDGREN, U.P. Model analysis of pedagogical processes. Stockholm: CWK Gleerup, 1977.
McPARTLAND, J.M., & KARWEIT, N. Research on educational effects. In H.J. Walberg (Ed.), Educational environments and effects. Berkeley: McCutchan, 1979, pp. 371-385.
NAIK, J.P. Equality, quality and quantity: The elusive triangle in Indian education. International Review of Education, 1979, 25, 167-185.
NICHOLLS, J.G. Motivation for intellectual development and performance: An integrative framework. Paper presented at the Annual Meeting of the American Educational Research Association, Boston, 1980 (a).
NICHOLLS, J.G. Motivation theory and equality of optimum motivation. Paper presented at the Annual Meeting of the American Educational Research Association, Boston, 1980 (b).
PEAKER, G.F. An empirical study of education in twenty-one countries: A technical report. Stockholm: Almqvist & Wiksell, 1975.
PEDHAZUR, E.J. Analytic methods in studies of educational effects. In F.N. Kerlinger (Ed.), Review of Research in Education, Vol. 3. Itasca, Ill.: Peacock, 1975, pp. 243-286.
PELLEGRINO, J.W., & GLASER, R. Cognitive correlates and components in the analysis of individual differences. Intelligence, 1979, 3, 187-214.
PORTER, A.C., SCHMIDT, W.H., FLODEN, R.E., & FREEMAN, D.J. Impact on what? The importance of content covered. Research Series No. 2. East Lansing: Institute for Research on Teaching, Michigan State University, 1978.
POSTLETHWAITE, T.N., GAGE, N.L., & LEVIN, T. A proposal for an IEA Classroom Environment Study: Teaching for learning, 1979.
RAKOW, E.A., AIRASIAN, P.W., & MADAUS, G.F. Assessing school and program effectiveness: Estimating teacher level effects. Journal of Experimental Education, 1979, 47(4), 311-319.
ROSENSHINE, B.V. Classroom instruction. In N.L. Gage (Ed.), The psychology of teaching methods. Seventy-fifth Yearbook of the National Society for the Study of Education, Pt. 1. Chicago: University of Chicago Press, 1976, pp. 335-371.
ROSENSHINE, B.V. Content, time, and direct instruction. In P.L. Peterson & H.J. Walberg (Eds.), Research on teaching. Berkeley: McCutchan, 1979, pp. 28-56.
ROSENSHINE, B.V., & FURST, N. The use of direct observation to study teaching. In K.J. Travers (Ed.), Second handbook on research on teaching. Chicago: Rand McNally, 1973.
SCHAIE, K.W., & WILLIS, S.L. Life-span development: Implications for education. In Review of Research in Education, Vol. 6. American Educational Research Association, 1979, pp. 120-156.
SCHMIDT, W.H. Measuring the content of instruction. Research Series No. 35. East Lansing, Michigan: Institute for Research on Teaching, Michigan State University, 1978.
SEIDMAN, E., LINNEY, J.A., RAPPAPORT, J., HERZBERGER, S., KRAMER, J., & ALDEN, L. Assessment of classroom behavior: A multiattribute, multisource approach to instrument development and validation. Journal of Educational Psychology, 1979, 71, 451-464.
SHAVELSON, R., & ATWOOD, N. Generalizability of measures of teaching process. In G.D. Borich (Ed.), The appraisal of teaching: Concepts and process. Reading, Mass.: Addison-Wesley, 1977.
SLOAT, K.C.M. Characteristics of effective instruction. Paper presented at the Annual Meeting of the American Educational Research Association, Boston, 1980.
STANLEY, J.C. On educating the gifted. Educational Researcher, 1980, 9(3), 8-12.
TRAVERS, K.J. Problems and prospects in cross-national research: The Second IEA Mathematics Study as a case study. Paper presented at the Annual Meeting of the American Educational Research Association, Boston, 1980.
TREIBER, B. Qualifizierung und Chancenausgleich in Schulklassen. Vols. 1 and 2. Las Vegas: Lang, 1980.
TREIBER, B., & SCHNEIDER, W. Qualification and equalization effects of classroom teaching. Zeitschrift für Entwicklungspsychologie und Pädagogische Psychologie, 1980, 12, 261-283. (In German)
WALBERG, H.J. Psychology of learning environments: Behavioral, structural or
perceptual? In L.S. Shulman (Ed.), Review of Research in Education, Vol. 4. Itasca, Ill.: Peacock, 1976, pp. 142-176.
WALBERG, H.J. Psychological theories of educational productivity. In F.H. Farley & N. Gordon (Eds.), Contemporary perspectives on educational psychology. Chicago: National Society for the Study of Evaluation, University of Chicago Press, 1981.
WEINER, B. A theory of motivation for some classroom experiences. Journal of Educational Psychology, 1979, 71, 3-25.
WEINERT, F.E. Programmatische Anmerkungen zur Überwindung allzu optimistischer und pessimistischer Sackgassen in der Bildungsforschung. Bildung und Erziehung, 1980, 33(1), 39-43.
WEINERT, F.E., & PETERMANN, F. Erwartungswidrige Schülerleistungen oder unterschiedlich determinierte Schulleistungen? In H. Heckhausen (Hrsg.), Fähigkeit und Motivation in erwartungswidriger Schulleistung. Göttingen: Hogrefe, 1980.
WEINERT, F.E., & TREIBER, B. School socialization and cognitive development. In W.W. Hartup (Ed.), Review of Child Development Research, Vol. 6. Chicago: University of Chicago Press, 1980.
ZIGMOND, N., LEINHARDT, G., & COOLEY, W.W. Improving instructional environments for learning disabled students. Paper presented at the Annual Meeting of the American Educational Research Association, Boston, 1980.
THE AUTHOR

BERNHARD TREIBER received an M.Sc. from Strathclyde University (UK), and a Diploma in Psychology and a Ph.D. from Heidelberg University. At present, he is Assistant Professor of Developmental Psychology in the Institute of Psychology, Heidelberg University, specializing in theories and methods of research on cognitive development in schools and on teaching-learning processes.