The structure of research methodology competency in higher education and the role of teaching teams and course temporal distance


Learning and Instruction 21 (2011) 68–76, www.elsevier.com/locate/learninstruc

Karl Schweizer, Merle Steinwascher, Helfried Moosbrugger, Siegbert Reiss
Department of Psychology, Goethe University Frankfurt, Mertonstr. 17, 60054 Frankfurt a. M., Germany
Received 23 February 2009; revised 31 July 2009; accepted 23 November 2009

Abstract

The development of research methodology competency is a major aim of the psychology curriculum at universities. Usually, three courses concentrating on basic statistics, advanced statistics and experimental methods, respectively, serve the achievement of this aim. However, this traditional curriculum-based course structure gives rise to the question of whether an integrative research methodology competency is actually achieved or whether independent course-specific sub-competencies are established instead. To find out whether the course structure is favourable for the development of research methodology competency, items representing the contents of the three courses were applied to a sample of university students. Content validity was assured by a close relationship of the items with course contents and course tests. The investigation revealed a three-dimensional first-order structure in combination with a common second-order dimension. Differences between teaching teams and course temporal distance were shown to have no influence.

© 2009 Elsevier Ltd. All rights reserved.

Keywords: Competency; Competency structure; Curriculum

1. Introduction

This article investigates competency development as an aim of education at university. The focus is on the effect of characteristics of the curriculum on competency development. Competency development is considered the establishment of the capability to meet a specific set of predefined demands (Koeppen, Hartig, Klieme, & Leutner, 2008). Accordingly, in this paper competency is perceived as closely linked to a specific set of demands that are defined by the societal environment of the individual, and as enabling the individual to adopt a specific role within society. It is the social and cultural definition of the demands that implicitly or explicitly gives special emphasis to competency development. Since learning is a decisive factor in the achievement of a good competency level, Weinert (2001) regards learning as a major characteristic of competency.


The influence of the societal environment on competency development is especially obvious in the curricula of university education. A curriculum defines the various contents that need to be acquired, imposes a structure on the course of teaching, and contributes to the establishment of conditions that are essential for successful learning (Bereiter & Scardamalia, 1989). The curriculum usually proposes an ordered sequence of courses, which concentrate on different contents and include instances of special skill training and other provisions that contribute to the development of the competency. An increasing number of curricula are conceptualized in such a way that the development of competencies is presented as the major aim (Cubic & Gatewood, 2008; van Zuilen et al., 2008). These curricula concentrate on sets of competencies that are associated with general professional perspectives. Such competencies usually show a considerable degree of complexity (Kegan, 2001). Competencies that have so far received considerable attention in research are, for example, reading literacy, mathematical literacy, and scientific literacy. Such competencies clearly differ from skills not only according to complexity but also according to the possibility of automation (Weinert, 2001), which is less likely in competency development than in skill acquisition.

The present study investigated whether the structure imposed on teaching by the curriculum influences competency development. It concentrated on a special competency of the psychology curriculum, namely research methodology competency. In the following paragraphs competency is considered in more detail, and findings concerning the conditions of teaching research methodology competency are reviewed, before the results of an empirical investigation are reported.

1.1. The concept of competency

In this article competency is considered much in the same way as in studies by the Organisation for Economic Cooperation and Development (OECD, 2000). These studies focused on the knowledge, capacities and skills necessary for meeting the complex demands of types of literacy (Banta, 2001; Harvey, 2001). In other studies the demands are matched with specific professional roles (Kaslow, 2004). The perspective selected for these studies emphasizes the demand-related aspects of competent performance and downplays social and motivational aspects that may also be important for demonstrating competency successfully. The advantage of the restriction to the demand-related aspects is a better conceptualization of the measured competencies for scientific investigation, since focusing on demand-related aspects creates the preconditions for a precise definition that is in agreement with the demands (Ghiselli, Campbell, & Zedeck, 1981). Accordingly, this article concentrates on the knowledge and the characteristic capacities or skills that are necessary for appropriate performance in the field of research methodology.

Since neither the knowledge nor the capacities or skills characterizing research methodology are innate, this competency, like other competencies, can only be the result of a learning process (Weinert, 2001). The emphasis on learning makes it obvious that competencies have a cultural origin, and an ongoing shaping process reflecting the influence of cultural forces is rather likely. An implication of this dependency on cultural forces is that competencies normally change over time.

The notion of competency that underlies the OECD studies may be considered rather narrow, since nowadays there is a tendency to perceive competencies as a means that leads individuals to "a successful and responsible life and for society to face the challenges of the present and near future" (Rychen, 2001, p. 7). Furthermore, competency is frequently associated with effectiveness, efficiency, proficiency and similar terms that suggest favourable outcomes for the individual (Weinert, 2001). Although all these associations and effects describe highly desirable things, their scientific value is questionable, since it is difficult or even impossible to represent them appropriately by a measure. In order to avoid vagueness and ambiguity in the present study, we decided to stay with a restricted working definition that emphasizes whether an individual is able to meet a set of demands.

The restriction of the definition of competency to meeting a set of demands in a particular domain brings about a kind of homogenization, since the focus is on what may be considered the basis of a competency. These demands can be assumed to relate to each other in one way or another. Because common basic concepts and common basic principles exist that underlie the defined demands and guide the ways of meeting the various demands in a domain, an underlying structural unity between them can be assumed. Therefore, an empirical investigation of the various demands should reveal either a one-dimensional structure or possibly related dimensions that might give rise to a hierarchical structure of competency. This kind of unity is desirable because it facilitates communication within the community of persons sharing the same competency.

1.2. Research methodology competency

Research methodology competency is the object of the investigation reported in the following sections. Research methodology competency denotes the knowledge, capacities and skills necessary for conducting all sorts of empirical investigations based on psychological research questions, independently and according to current standards. The contents of the courses for the development of research methodology competency reflect the research traditions in this field (Wagner & Maree, 2007). Major traditions are the experimental and differential traditions, which are associated with rather independent research methods (Cronbach, 1957). The development of research methodology competency must take these traditions into consideration in order to prepare the students appropriately for all the requirements of psychological research. Generally, it means learning about methods for planning investigations and for collecting and analyzing data.

Research methodology competency shows a sufficient degree of complexity for the investigation of the effect of course structure, since the curriculum of research methodology usually comprises several courses and specifies dependencies among these courses. In a way, the differentiation of the contents across the three courses is even amplified by the availability of textbooks that concentrate on the contents of one course, for example research designs and data collection or the corresponding methods for statistical analysis. However, despite the specificity of the courses there are also unifying aspects that should not be downplayed. There are many common basic concepts (e.g., the concepts known as variable, scale, variance and covariance), and there are common basic principles (e.g., the principles of hypothesis testing and of standardization) that should guide the ways of meeting the various demands. These research methodology courses are usually not among the students' favourites; one study even showed that exposure to such a course reduces interest in scientific activities (Manning, Zacher, Ray, & LoBello, 2006).

The curriculum for the development of research methodology competency is rather voluminous, and the courses are the result of subdividing the whole teaching load into manageable parts that show dependencies among each other. The first course, Basic Statistics, is expected to provide fundamental ideas and basic concepts, whereas the other courses, Advanced Statistics and Experimental Methods, relate to sophisticated methods and applications.¹ Since the research methodology courses are usually taught in the first and second semester after admission to university (as the guidelines of professional associations in German-speaking countries suggest), it can be assumed that in the beginning all the students show approximately the same degree of preparedness. Such an initial equality is a favourable precondition for an empirical investigation. Furthermore, the dependency among the courses should guarantee systematic knowledge growth.

1.3. Course structure as a possible source of multidimensionality

A considerable degree of complexity, reflecting the great variety of research methods necessary for mastering the manifold research problems, characterizes research methodology competency. Because of this high degree of complexity, the development of such a competency by means of a single course is not a realistic option: there is simply too much that needs to be learnt. A highly demanding approach to learning, aimed at comprehension and active conceptual analysis and known as the deep approach (Struyven, Dochy, Janssens, & Gielen, 2006), appears to be necessary for the successful development of such a complex competency.

From research focusing on the micro level of learning we know that transfer of learning is normally limited. Cognitive load theory provides a major reason for this limitation in transfer (Sweller, 2006). According to this theory it is necessary to create mental representations as part of the process of learning (Renkl, Gruber, Weber, Lerche, & Schweizer, 2003). If the object of learning is too complex, an inappropriate mental representation may obstruct learning. The next steps must assure the integration of the object of learning into the available knowledge structure. The knowledge structure that constitutes long-term memory (LTM) gives guidance to cognitive processing in complex learning. As a result, there is knowledge elaboration (Kalyuga, 2009). Appropriate external instruction can supplement this process of knowledge elaboration. However, it cannot outweigh basic working memory and knowledge limitations or avoid the time-consuming, gradual build-up of the knowledge structure, since complex knowledge can only be acquired in a step-wise fashion (Amadieu, van Gog, Paas, Tricot, & Marine, 2009).

¹ In German-speaking countries the guidelines of professional associations suggest this structure. More general evidence supporting this structure is provided by the textbooks that are available for the three courses: for experimental research methods see Elmes, Kantowitz, and Roediger (2009); for basic statistics see Freund and Simon (1997); and for advanced statistics see Tabachnick and Fidell (2007).

Although the need for several courses, which accomplish the gradual build-up of the knowledge structure, is rather obvious, it is not clear whether the course structure is really instrumental with respect to the aim of the curriculum: the development of research methodology competency. To find out something about the consequences of the course structure, it is necessary to consider the possible consequences of having separate courses, which may give rise to all sorts of differences in learning.

First, the consequences of different courses for the integration of the knowledge structure need to be questioned. Does enlarging the knowledge structure by adding new parts give rise to a well-integrated whole, or does the upgrading occur in a course-dependent fashion? Unfortunately, elaboration has been found to be specific and to occur within domains (Kalyuga, 2009). However, although this specificity is a disadvantage, it need not lead to the disintegration of the individual knowledge structure. The result depends on instruction: reasonably planned instruction can avoid disintegration.

Second, different learning environments may be created within different courses, and learning environments have been found to be important for successful learning (Bereiter & Scardamalia, 1989). These environments can differ according to their appropriateness for the occurrence of deep learning (Entwistle & Entwistle, 1991). Although a learning environment is rather restricted and, therefore, may be considered unimportant in comparison to the content of the curriculum, teachers may generate series of similar learning environments, so that differences in such environments may count in the long run. Similar learning environments may support the elaboration of particular domains of the knowledge structure rather than others.

Third, the various teaching styles are to be considered another source of possible differences in student learning. It can be assumed that different persons teaching different courses affect learning in different ways due to their teaching styles. Fortunately, severe differences need not be expected, since students seem to gain most from their preferred teaching style (Vermetten, Vermunt, & Lodewijks, 2002). Under the assumption of a random mixture of students preferring different teaching styles, it is rather likely that there are always some students who gain a lot whereas other students are disadvantaged (Goldman, 2009; Scheiter, Gerjets, Vollmann, & Catrambone, 2009).

Fourth, motivation may be a crucial variable, since the effects of teaching and of the learning environment on learning seem to be either moderated or mediated by motivation, as demonstrated in several recent studies (Alonso-Tapia & Pardo, 2006; Frenzel, Pekrun, & Goetz, 2007). Fortunately, motivation is a property of the individual or of the learning environment and, therefore, probably does not vary as a result of the course structure.

All in all, guidance in the elaboration of the knowledge structure is crucial for the development of an integrative research methodology competency. The possible effect of learning environments seems to be less serious, since differences between them must be created in a very consistent way in order to be effective. Teaching style seems to be of even lower importance. Finally, there is motivation, which may have an indirect effect on the acquisition of competency; however, it becomes a problem only if there are systematic differences in motivation between students, classes, or courses.

1.4. The present study – hypotheses

The aim of the present study was, first, to find out whether the course structure according to the curriculum has an effect on competency development with respect to research methodology. Two alternative outcomes need to be considered. One outcome is that the students show successful performance with respect to all the competency-characterizing demands; the competent individual meets the various demands associated with the competency. In this case the students have acquired research methodology competency, as is the aim of the curriculum. The alternative outcome is that there is a lack of overall consistency in students' research methodology competency, despite the fact that each course leads to the development of a kind of sub-competency. In this case teaching according to the curriculum would give rise to three correlated sub-competencies. Since high consistency implies one dimension, a structural investigation that could confirm this expectation needed to be conducted. If one dimension is sufficient for representing the data, this would mean that the education at university with respect to the competency-characterizing demands has generated a knowledge structure with a high degree of integration (Hypothesis 1a). Otherwise, the knowledge structure can be assumed to show disintegrated domains corresponding to the courses (Hypothesis 1b).

The second aim was to find out whether there was an effect of the learning environment on the structure of research methodology competency. Assuming that the teacher and the associated teaching assistants are a major component of the learning environment, different teaching teams were assumed to be a potential source of different outcomes. The prediction was that the consistency between the courses depended on whether the teaching teams were the same or different: a higher degree of consistency should be observed for courses taught by the same teaching team (Hypothesis 2).

The third aim was to find out whether the temporal distance between courses had an effect on the structure of research methodology competency. It could be envisaged that a small temporal distance might promote the amalgamation of the contents of different courses, whereas a large temporal distance might be less favourable. The prediction was that an increase in temporal distance would decrease the consistency of the courses (Hypothesis 3).

2. Method

2.1. Research design

The present study was quasi-experimental and took place in an ecologically valid context, namely the courses and students of a particular German university. The curriculum at this university associates three courses with the development of research methodology competency: Basic Statistics, Advanced Statistics and Experimental Methods.

In order to evaluate similarities and differences with respect to competency development appropriately, it is reasonable to try to quantify the relationships among the courses. Three major characteristics distinguished these courses: contents, teacher and semester. Each course had its own contents. The teaching was done by two professors (A and B) and several teaching assistants (the teams of A and B). One professor and the corresponding teaching assistants were responsible for the courses entitled Experimental Methods and Basic Statistics, and the other professor together with the corresponding teaching assistants for the course entitled Advanced Statistics. Furthermore, one course (Basic Statistics) was usually assigned to the first semester, whereas the other two courses (Experimental Methods and Advanced Statistics) were assigned to the second semester. These characteristics of the courses give rise to an incomplete but interesting and important design, as is evident in Fig. 1. The first combination of courses (Experimental Methods and Basic Statistics) may have similar learning environments due to the same teaching team, whereas another combination of courses (Experimental Methods and Advanced Statistics) is taught during the same semester. These design characteristics are partly due to the curriculum and partly due to the department-specific organisation of teaching.

                          1st Semester         2nd Semester
Professor A (+ Team A)    Basic Statistics     Experimental Methods
Professor B (+ Team B)    -                    Advanced Statistics

Fig. 1. Characteristics of the design reflecting the two major aspects of the learning environments: teaching teams and temporal distance between the courses.

2.2. Sample

The sample consisted of 127 psychology students, 97 females and 30 males, with a mean age of 22.02 years (SD = 5.09). About 80% of them were first-year students who were enrolled in the research methodology courses of the first and second semesters. The remaining 20% were senior students and students preparing for their final examinations; these students had completed the courses in previous years. The resulting heterogeneity is considered favourable, since the students' different developmental stages caused a broad distribution of the performance scores. In order to avoid frustration, the participants were informed that the measure might include task demands that were beyond their educational level. Furthermore, participants were instructed to skip tasks that appeared too difficult. They received course credit for participation.

2.3. Measure

In the construction process special emphasis was given to the achievement of content validity. In the first step of constructing the measure for the assessment of research methodology competency, the curriculum was consulted, the respective course teachers were interviewed, and a list including all the major topics was compiled. In the next step a scheme that specified the items with respect to course and topic was prepared. In the following step, items applied as part of the examinations in previous years were checked for their appropriateness. Furthermore, their empirical properties were considered.

This procedure led to the selection of an assortment of items that were considered appropriate for the measure of research methodology competency. In this way 10 items representing contents of experimental methods, 17 items representing contents of basic statistics and 12 items representing contents of advanced statistics were compiled. The items were arranged in such a way that they could be expected to show an increasing degree of difficulty. The internal consistencies of the final versions of the corresponding measures are reported in Section 3.2. In checking the items with respect to their empirical appropriateness for representing research methodology competency and possible sub-competencies, some items were removed from the item set; five of the eliminated items originated from basic statistics and two from advanced statistics. The results presented in the following sections are based on the items retained after the item analysis.

2.4. Procedure

Nearly all first-year students attending the corresponding courses for the development of research methodology competency participated in the study. They were tested in four groups under supervision, outside of the teaching context. Senior students and students preparing for the final examinations were tested in smaller groups of up to five participants; in a few cases, individual test sessions were conducted. The participants were instructed in written form. Although they were told that completing the measure would take about 90 min, no strict time limit was imposed.

2.5. Analyses

The statistical investigation of the structural properties of the data was performed in two major steps. In the first step the items associated with the three courses were investigated separately by means of confirmatory factor analysis, in order to check whether these item sets could be considered homogeneous measures. In the second step confirmatory factor analysis was applied to parcels of items. Item parcels were computed in order to achieve a favourable ratio of the number of manifest variables to the number of participants in the sample. The items were assigned to parcels following a method proposed by Little, Cunningham, Shahar, and Widaman (2002). Each course was represented by four item parcels consisting of two to three items. LISREL (Jöreskog & Sörbom, 2001) served as the program for the confirmatory factor analyses.
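To make the parceling step concrete, the following is a minimal sketch of an item-to-construct balancing heuristic in the spirit of Little et al. (2002): items are ranked by their loadings and dealt out to parcels in a serpentine order, so that each parcel receives a mix of strong and weak indicators. The paper does not publish its exact assignment rule, so the function below, including its name and arguments, is an illustrative assumption rather than the authors' procedure.

```python
import numpy as np

def build_parcels(scores, loadings, n_parcels=4):
    """Assign items to parcels by item-to-construct balance: rank the
    items by their standardized loading, deal them out in a serpentine
    order (0, 1, 2, 3, 3, 2, 1, 0, 0, 1, ...), then average the items
    within each parcel.

    scores:   (n_persons, n_items) array of item scores
    loadings: (n_items,) array of standardized loadings
    """
    order = np.argsort(loadings)[::-1]   # strongest item first
    bins = [[] for _ in range(n_parcels)]
    i, direction = 0, 1
    for item in order:
        bins[i].append(item)
        i += direction
        if i in (-1, n_parcels):         # bounce back at either end
            direction *= -1
            i += direction
    return np.column_stack([scores[:, b].mean(axis=1) for b in bins])
```

With the course-specific item counts reported below (12, 10 and 10 retained items) and n_parcels = 4, such a scheme yields parcels of two or three items, matching the description in Section 3.3.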

3. Results

Section 3.1 provides a description of the data. Section 3.2 reports findings concerning the three courses separately. Section 3.3 presents the results of investigating the first hypothesis, while the investigation of the second and third hypotheses led to the findings reported in Section 3.4.

3.1. Overall item descriptions

After adjusting the ranges of the possible values of all the items to a lower limit of zero and an upper limit of one, arithmetic means and standard deviations were computed. The descriptive statistics are included in Table 1. The first column gives the item number, which identifies the position of the item within the measure; the arithmetic means and standard deviations are provided in the second and third columns. The arithmetic means varied between .04 and .78 and the standard deviations between .19 and .50.

Table 1. Arithmetic means, standard deviations, and standardized loadings on the latent factors associated with the three courses.

Basic statistics
Item    M      SD     Loading
4       .780   .387   .25
7       .656   .294   .31
13      .615   .371   .61
14      .217   .401   .43
17      .752   .398   .42
19      .264   .344   .64
24      .441   .498   .70
27      .415   .228   .39
29      .467   .306   .52
32      .163   .237   .79
36      .122   .287   .58
37      .213   .256   .36

Advanced statistics
Item    M      SD     Loading
6       .276   .392   .58
8       .181   .387   .93
11      .123   .293   .70
15      .224   .375   .85
16      .382   .396   .72
20      .185   .320   .43
23      .158   .349   .42
26      .281   .443   .78
28      .292   .307   .69
35      .282   .363   .68

Experimental methods
Item    M      SD     Loading
1       .608   .267   .70
3       .199   .301   .67
10      .639   .193   .51
18      .563   .355   .74
22      .567   .314   .54
25      .514   .366   .49
31      .447   .311   .61
33      .095   .242   .38
34      .232   .352   .75
39      .626   .274   .35

Furthermore, overall scores were computed for all participants by summing up all the items. The overall scores representing research methodology competency varied between 2.50 and 27.71, while the theoretical range was from 0 to 32. The mean of the distribution of the overall scores was 11.98 and the standard deviation 6.61. The distribution showed a small degree of positive skew (skewness = .74); the skew was probably due to the senior students who had already completed all the courses. Additionally, the consistency of the items according to Cronbach's alpha was computed, which yielded a very favourable coefficient of .93. All in all, the arithmetic means suggested a desirably broad range of item difficulties and a broad distribution of the participants' performances.

3.2. Results for the course-specific item sets

In this section the results achieved for the course-specific item sets are reported. First, procedures for determining the number of underlying dimensions were applied to the data: the Kaiser criterion, the scree test and parallel analysis. In each item set several eigenvalues were larger than one (Experimental Methods items: 2, Basic Statistics items: 5, Advanced Statistics items: 2). However, the results of the scree test and of the parallel analysis suggested one component only. Since the Kaiser criterion is known to overestimate the true number of components, preference was given to the results of the scree test and the parallel analysis.
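As an illustration of the third retention criterion, Horn's parallel analysis compares each observed eigenvalue with eigenvalues obtained from random data of the same dimensions; components are retained only while the observed eigenvalue exceeds its random counterpart. A minimal numpy sketch follows; the simulation count and quantile are illustrative defaults, not values taken from the paper.

```python
import numpy as np

def parallel_analysis(data, n_sims=1000, quantile=0.95, seed=0):
    """Horn's parallel analysis for an (n_persons, n_items) matrix."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    # Eigenvalues of the observed correlation matrix, largest first.
    observed = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    # Eigenvalues from correlation matrices of random normal data.
    random_eigs = np.empty((n_sims, p))
    for s in range(n_sims):
        noise = rng.standard_normal((n, p))
        random_eigs[s] = np.sort(
            np.linalg.eigvalsh(np.corrcoef(noise, rowvar=False)))[::-1]
    threshold = np.quantile(random_eigs, quantile, axis=0)
    # Count leading components, stopping at the first failure.
    n_components = 0
    for obs, thr in zip(observed, threshold):
        if obs <= thr:
            break
        n_components += 1
    return n_components
```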

The computation of the consistencies according to Cronbach's alpha led to .84 for the Experimental Methods items, .82 for the Basic Statistics items, and .89 for the Advanced Statistics items.
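For reference, Cronbach's alpha, used here and in Section 3.1, follows directly from the item variances and the variance of the sum score. A minimal numpy sketch (the function name is hypothetical):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_persons, n_items) score matrix:
    alpha = k / (k - 1) * (1 - sum of item variances / variance of sum).
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)
```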

Next, confirmatory factor analysis was conducted separately for each of the course-specific item sets. The models representing the courses were investigated by means of LISREL (Jöreskog & Sörbom, 2001). The fit results (χ², df, RMSEA, GFI, CFI and NNFI) are presented in Table 2; each row provides the results for the items of one course.

Table 2. Fit indices of the three congeneric models constructed for representing the items of the three courses designed for the development of research methodology competency.

Course                   χ²      df   RMSEA   GFI   CFI   NNFI
Basic statistics         79.57   54   .061    .91   .96   .95
Advanced statistics      54.30   35   .066    .92   .98   .98
Experimental methods     40.69   35   .036    .94   .99   .98

All three ratios of chi-square to degrees of freedom are smaller than 2; such ratios indicate a good model fit. All the other statistics make it obvious that the model fit is good or acceptable, and the model for the experimental methods items fits especially well. These fit results provide additional evidence for the assumption that the items of each course show one underlying dimension.

The fourth column of Table 1 gives the standardized loadings. The sizes of the loadings are somewhat heterogeneous, but all loadings reached the level of significance. There was one loading below the conventional .30 level; it was retained because the corresponding item represents the upper tail of the range of arithmetic means. Furthermore, items with loadings below .35 were not removed, in order to assure content validity.

3.3. Overall structure

The overall structure was investigated at the level of parcels by means of confirmatory factor analysis. Following Little et al. (2002), the items associated with each course were assigned to item parcels on the basis of their empirical properties. The items associated with basic statistics gave rise to four parcels of three items each; the parcels of advanced statistics included either two or three items; and in experimental methods four item parcels including either two or three items were computed.

Three models served the investigation of the overall structure. The first model (M1) was the general factor model; it assumed that one latent variable is sufficient for representing the data. This model represented the notion of a general research methodology competency: all item parcels were assumed to load on the general latent variable (research methodology competency). The second model (M2) assumed that three uncorrelated latent factors, associated with the three courses, accounted for the data. This model represented the notion that sub-competencies were developed instead of a general research methodology competency. The third model (M3) allowed the first-order latent factors to correlate with each other and assumed an additional second-order latent variable, so that a hierarchy of latent variables was achieved (see Schweizer, Moosbrugger, & Schermelleh-Engel, 2003).² This model reconciled the two opposing positions concerning the development of research methodology competency, since latent variables associated with both types of competencies are considered.

² It should be noted that correlations among the three first-order factors are a necessary precondition for the establishment of a second-order factor.
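The three specifications can be written compactly in lavaan-style model syntax, which the Python package semopy largely mirrors. The sketch below is an assumption-laden translation, not the authors' LISREL code: the parcel names P1-P12 follow Fig. 2, and the `0*` device for fixing the latent covariances to zero in M2 follows lavaan conventions, which semopy versions may handle differently.

```python
import pandas as pd
import semopy

# M1: a single general factor (research methodology competency).
M1 = """
RMC =~ P1 + P2 + P3 + P4 + P5 + P6 + P7 + P8 + P9 + P10 + P11 + P12
"""

# M2: three course-specific factors with covariances fixed to zero.
M2 = """
Basic =~ P1 + P2 + P3 + P4
Advanced =~ P5 + P6 + P7 + P8
Experimental =~ P9 + P10 + P11 + P12
Basic ~~ 0*Advanced
Basic ~~ 0*Experimental
Advanced ~~ 0*Experimental
"""

# M3: hierarchical model; a second-order factor accounts for the
# correlations among the three first-order sub-competencies.
M3 = """
Basic =~ P1 + P2 + P3 + P4
Advanced =~ P5 + P6 + P7 + P8
Experimental =~ P9 + P10 + P11 + P12
RMC =~ Basic + Advanced + Experimental
"""

def fit_and_report(description: str, data: pd.DataFrame):
    """Fit one specification and return its fit indices."""
    model = semopy.Model(description)
    model.fit(data)
    return semopy.calc_stats(model)  # chi2, df, RMSEA, GFI, CFI, AIC, ...

# Usage (parcel_data: DataFrame with columns P1..P12):
#   for name, desc in [("M1", M1), ("M2", M2), ("M3", M3)]:
#       print(name, fit_and_report(desc, parcel_data))
```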

The fit results for the three models are presented in Table 3.

Table 3. Fit indices of the three models investigating the structure of research methodology competency.

Type of model                                        χ²       df   RMSEA   GFI   CFI    NNFI   AIC
General factor model (M1)                            195.60   54   .144    .79   .95    .94    243.60
Three-factor first-order model (uncorrelated) (M2)   198.82   54   .146    .79   .92    .91    246.82
Hierarchical model (M3)                              45.26    51   .000    .94   1.00   1.00   99.26

Most of the fit coefficients achieved for the general factor model (M1) suggested the rejection of this model. The results for the three-factor first-order model (M2), presented in the second row, were even worse: not even one of the fit statistics could be considered acceptable. Only the hierarchical model (M3) yielded results suggesting a good model fit. Its ratio of chi-square to degrees of freedom was good, and all the other statistics met the boundaries for good results. The hierarchical model also showed a substantially better model fit than the general factor model according to the chi-square difference test, χ²(3) = 150.34, p < .05.

Fig. 2 illustrates this successful model with completely standardized estimates. The standardized coefficients of the relationships between the first- and second-order latent factors were .88, .86 and .93 for Basic Statistics, Advanced Statistics and Experimental Methods, respectively. These coefficients indicated that more than 50% of the variance of each first-order latent factor could be explained by the second-order latent factor of research methodology competency, whereas the remaining variance of the first-order latent variables provided evidence for the existence of sub-competencies. The loadings of the manifest variables varied between .66 and .87, and the error components ranged between .25 and .56. Obviously, the model accounted for a considerable amount of the variance of the item parcels.

[Fig. 2. Illustration of the hierarchical research methodology competency model based on item parcels, with standardized estimates of loadings and error components: the second-order factor (research methodology competency) loads on the three first-order sub-competencies (basic statistics, advanced statistics, experimental methods), each of which is measured by four item parcels (P1-P12).]
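The model comparison above is an ordinary chi-square difference test for nested models; the reported value can be checked in a few lines with scipy.

```python
from scipy.stats import chi2

def chi2_difference(chi2_nested, df_nested, chi2_full, df_full):
    """Chi-square difference test for nested models: the difference of
    the two chi-square values is itself chi-square distributed, with
    df equal to the difference of the degrees of freedom."""
    d_chi2 = chi2_nested - chi2_full
    d_df = df_nested - df_full
    return d_chi2, d_df, chi2.sf(d_chi2, d_df)

# Values from Table 3: M1 (195.60, df = 54) vs. M3 (45.26, df = 51)
print(chi2_difference(195.60, 54, 45.26, 51))
# -> (150.34, 3, p far below .05), matching the reported chi2(3) = 150.34
```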

3.4. Results concerning teaching teams and temporal distance

The investigation of the effects of the different teaching teams and of the different temporal distances between the courses was considered helpful for the interpretation of the results presented in the previous section. First, the effects of the different teaching teams were considered. The investigation of these effects required the comparison of the correlation between experimental methods and basic statistics with the correlation between experimental methods and advanced statistics; these combinations of courses were characterized by the same temporal distance. The correlations in Table 4 indicate an increase from .78 to .80 when the teaching team was constant. However, the statistical test did not indicate a substantial difference (Z = .42).

Table 4. Correlations between the latent variables representing sub-competencies associated with courses.

Sub-competency           Basic Statistics   Advanced Statistics   Experimental Methods
Basic Statistics         1.00
Advanced Statistics      .72**              1.00
Experimental Methods     .78**              .80**                 1.00

**p < .01.

Second, the effects of the temporal distance were the focus of the next investigation: did teaching during the same semester lead to a higher degree of consistency than teaching during different semesters? In order to investigate this research question, the correlation between experimental methods and advanced statistics had to be compared with the correlation between basic statistics and advanced statistics; each combination was characterized by different teaching teams. According to the correlations in Table 4 (.80 and .72), there was constancy (Z = 1.49), so the assumption of an effect due to temporal proximity had to be rejected.
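The reported Z values are consistent with the classical Fisher r-to-z comparison of two correlations with n = 127. The sketch below reproduces them; note that the compared correlations share a sample and one variable, so this independent-samples formula is only an approximation (a test for dependent correlations, e.g. Steiger's, would be slightly more appropriate).

```python
import numpy as np
from scipy.stats import norm

def fisher_z_compare(r1, r2, n1, n2):
    """Compare two correlations via Fisher's r-to-z transform;
    returns the Z statistic and the two-sided p-value."""
    z1, z2 = np.arctanh(r1), np.arctanh(r2)
    se = np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    z = (z1 - z2) / se
    return z, 2 * norm.sf(abs(z))

print(fisher_z_compare(.80, .78, 127, 127))  # Z = 0.42 (teaching team)
print(fisher_z_compare(.80, .72, 127, 127))  # Z = 1.50 (temporal distance)
```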

4. Discussion

The present study was guided by three research questions. The first regarded whether a curriculum prescribing three different courses is really appropriate for creating a unique and consistent competency, or whether three specific sub-competencies may result instead. Two alternative hypotheses (Hypothesis 1a vs. Hypothesis 1b) were formulated. Hypothesis 1a, expecting the development of a unique and consistent competency, is in agreement with the aim of the curriculum, that is, the development of research methodology competency as an unrestricted whole based on a number of common basic concepts. Hypothesis 1b is justified by the purposeful subdivision of the contents associated with research methodology competency; in this case the resulting knowledge structure should show some degree of disintegration, since the subdivision was expected to maximize homogeneity within the parts of the knowledge structure associated with the courses. Our investigation yielded results that supported both hypotheses. There is evidence in favour of three sub-competencies (Hypothesis 1b), since it was not possible to represent the data by a model with one factor only, whereas three correlated factors were sufficient for representation. Furthermore, there is also evidence in favour of a general competency (Hypothesis 1a), since three uncorrelated factors did not lead to a good model fit, whereas the model including a second-order factor provided the best representation. In addition, there may be extracurricular reasons supporting the development of sub-competencies.

Hypotheses 2 and 3 addressed two factors that might have an effect on the structure of research methodology competency. The findings suggest the exclusion of two possible major reasons, namely temporal distance and teaching team. Hypotheses 2 and 3 were not supported, since the hypothesized differences in consistency were not observed. A small temporal distance was expected to be favourable for the occurrence of learning according to the deep approach, since the contents of different

courses can be integrated into concurrent attempts to comprehend and store the contents (Struyven et al., 2006). However, our study does not suggest an advantage of a small temporal distance (Hypothesis 3). The other possible reason was the teaching team. Teachers are a source of difference because they can differ considerably in commitment and teaching style (Costa, van Rensburg, & Rushton, 2007; Postareff & Lindblom-Ylänne, 2008; Walker, 2008). For the present study it is possible to exclude the teaching team as a source of diversity, since differences between the teaching teams were not observed (Hypothesis 2).

We presume that the combination of the differences between course contents and the course teachers' attempts to present the course contents as consistent wholes was responsible for the development of sub-competencies. This combination is a rather likely source of discrepancy, because the necessity to support learning by giving a proper structure to the course contents creates a special homogeneity within the individual courses. This tendency is even strengthened by course evaluation, which concentrates on single courses and is routinely performed at many universities. In contrast, there is usually no pressure to take care of a good coordination of the contents of the various courses. Instead, there may be a tendency to make a course unique and distinguishable from the other courses, or there may even be some kind of competition among the various teaching teams.

Finally, a problem needs to be addressed that results from the relationship of competency and ability. There is a well-established source of achievement that needs to be taken into consideration, namely general ability (Jensen, 1998). A major characteristic of this ability is its especially close link to fluid ability (see Schweizer, Goldhammer, Rauch, & Moosbrugger, 2007); some researchers even assume the identity of general and fluid ability (Gustafsson, 1984). This link to fluid ability is important with respect to competency, since there is a developmental theory of fluid ability that is of great relevance in the field of achievement: Cattell's (1957, 1971; Schweizer & Koch, 2001) Investment Theory, which suggests an influence of fluid ability on the development of other abilities and of more specific capabilities in the field of achievement. Moreover, this theory suggests that the general source contributes to all kinds of performances. There is even the possibility that it contributes to performances that are ascribed to competencies. Therefore, the question arises whether research methodology competency is really something special or simply reflects fluid ability. Further studies are necessary to clarify this question.


References

Alonso-Tapia, J., & Pardo, A. (2006). Assessment of learning environment motivational quality from the point of view of secondary and high school learners. Learning and Instruction, 16, 295–309.
Amadieu, F., van Gog, T., Paas, F., Tricot, A., & Marine, C. (2009). Effects of prior knowledge and concept-map structure on disorientation, cognitive load, and learning. Learning and Instruction, 19, 376–386.
Banta, T. W. (2001). Assessing competencies in higher education. In C. A. Palomba & T. W. Banta (Eds.), Assessing student competencies in accredited disciplines (pp. 1–12). Sterling, VA: Stylus.
Bereiter, C., & Scardamalia, M. (1989). Intentional learning as a goal of instruction. In L. B. Resnick (Ed.), Knowing, learning, and instruction: Essays in honor of Robert Glaser (pp. 361–392). Hillsdale, NJ: Erlbaum.
Cattell, R. B. (1957). Personality and motivation: Structure and measurement. New York: World Book.
Cattell, R. B. (1971). Abilities: Their structure, growth, and action. Boston: Houghton Mifflin.
Costa, M. L., van Rensburg, L., & Rushton, N. (2007). Does teaching style matter? A randomised trial of group discussion versus lectures in orthopaedic undergraduate teaching. Medical Education, 41, 214–217.
Cronbach, L. J. (1957). The two disciplines of scientific psychology. American Psychologist, 12, 671–684.
Cubic, B. A., & Gatewood, E. E. (2008). ACGME core competencies: Helpful information for psychologists. Journal of Clinical Psychology in Medical Settings, 15, 28–39.
Elmes, D. G., Kantowitz, B. H., & Roediger, H. L., III. (2009). Research methods in psychology (9th ed.). Pacific Grove, CA: Brooks/Cole.
Entwistle, N. J., & Entwistle, A. (1991). Contrasting forms of understanding for degree examinations: The student experience and its implications. Higher Education, 22, 205–227.
Frenzel, A. C., Pekrun, R., & Goetz, T. (2007). Perceived learning environment and students' emotional experiences: A multilevel analysis of mathematics classrooms. Learning and Instruction, 17, 478–493.
Freund, J. E., & Simon, G. A. (1997). Modern elementary statistics (9th ed.). Upper Saddle River, NJ: Prentice-Hall.
Ghiselli, E. E., Campbell, J. P., & Zedeck, S. (1981). Measurement theory for the behavioral sciences. San Francisco: Freeman.
Goldman, S. R. (2009). Explorations of relationships among learners, tasks, and learning. Learning and Instruction, 19, 451–454.
Gustafsson, J. E. (1984). A unifying model for the structure of intellectual abilities. Intelligence, 8, 179–203.
Harvey, L. (2001). The British experiences in assessing competence. In C. A. Palomba & T. W. Banta (Eds.), Assessing student competencies in accredited disciplines (pp. 217–244). Sterling, VA: Stylus.
Jensen, A. R. (1998). The g factor: The science of mental ability. Westport, CT: Praeger.
Jöreskog, K., & Sörbom, D. (2001). LISREL 8.5: User's reference guide. Chicago, IL: Scientific Software International.
Kalyuga, S. (2009). Knowledge elaboration: A cognitive load perspective. Learning and Instruction, 19, 402–410.
Kaslow, N. J. (2004). Competencies in professional psychology. American Psychologist, 59, 774–781.
Kegan, R. (2001). Competencies as working epistemologies: Ways we want adults to know. In D. S. Rychen & L. H. Salganik (Eds.), Defining and selecting key competencies (pp. 193–203). Ashland, OH: Hogrefe & Huber.
Koeppen, K., Hartig, J., Klieme, E., & Leutner, D. (2008). Current issues in competence modeling and assessment. Journal of Psychology, 216, 61–73.
Little, T. D., Cunningham, W. A., Shahar, G., & Widaman, K. F. (2002). To parcel or not to parcel: Exploring the question, weighing the merits. Structural Equation Modeling, 9, 151–173.
Manning, K., Zacher, P., Ray, G. E., & LoBello, S. (2006). Research methods courses and the scientist and practitioner interests of psychology majors. Teaching of Psychology, 33, 194–196.
Organisation for Economic Cooperation and Development (OECD). (2000). Measuring student knowledge and skills: The PISA 2000 assessment of reading literacy, mathematical literacy, scientific literacy. Paris: Author.
Postareff, L., & Lindblom-Ylänne, S. (2008). Variation in teachers' description of teaching: Broadening the understanding of teaching in higher education. Learning and Instruction, 18, 109–120.
Renkl, A., Gruber, H., Weber, S., Lerche, T., & Schweizer, K. (2003). Cognitive load beim Lernen aus Lösungsbeispielen [Cognitive load in learning from examples]. Zeitschrift für Pädagogische Psychologie, 17, 93–101.
Rychen, D. S. (2001). Introduction. In D. S. Rychen & L. H. Salganik (Eds.), Defining and selecting key competencies (pp. 1–15). Ashland, OH: Hogrefe & Huber.
Scheiter, K., Gerjets, P., Vollmann, B., & Catrambone, R. (2009). The impact of learner characteristics on information utilization strategies, cognitive load experienced, and performance in hypermedia learning. Learning and Instruction, 19, 387–401.
Schweizer, K., Goldhammer, F., Rauch, W., & Moosbrugger, H. (2007). On the validity of Raven's matrices test: Does spatial ability contribute to performance? Personality and Individual Differences, 43, 1998–2010.
Schweizer, K., & Koch, W. (2001). A revision of Cattell's investment theory: Cognitive properties influencing learning. Learning and Individual Differences, 13, 57–82.
Schweizer, K., Moosbrugger, H., & Schermelleh-Engel, K. (2003). Models for investigating hierarchical structures in differential psychology. Methods of Psychological Research Online, 8, 159–180.
Struyven, K., Dochy, F., Janssens, S., & Gielen, S. (2006). On the dynamics of students' approaches to learning: The effect of the teaching/learning environment. Learning and Instruction, 16, 279–294.
Sweller, J. (2006). The worked example effect and human cognition. Learning and Instruction, 16, 165–169.
Tabachnick, B. G., & Fidell, L. S. (2007). Using multivariate statistics (5th ed.). Boston: Pearson/Allyn & Bacon.
Vermetten, Y., Vermunt, J. D., & Lodewijks, H. G. (2002). Powerful learning environments? How university students differ in their response to instructional measures. Learning and Instruction, 12, 263–284.
Wagner, C., & Maree, D. (2007). Teaching research methodology: Implications for psychology on the road ahead. South African Journal of Psychology, 37, 121–134.
Walker, J. M. (2008). Looking at teacher practices through the lens of parenting style. Journal of Experimental Education, 76, 218–240.
Weinert, F. E. (2001). Concept of competence: A conceptual clarification. In D. S. Rychen & L. H. Salganik (Eds.), Defining and selecting key competencies (pp. 45–65). Ashland, OH: Hogrefe & Huber.
van Zuilen, M. H., Mintzer, M. J., Milanez, M. N., Kaiser, R. M., Rodriguez, O., Paniagua, M. A., et al. (2008). A competency-based medical student curriculum targeting key geriatric syndromes. Gerontology & Geriatrics Education, 28, 29–45.

Koeppen, K., Hartig, J., Klieme, E., & Leutner, D. (2008). Current issues in competence modeling and assessment. Journal of Psychology, 216, 61e73. Little, T. D., Cunningham, W. A., Shahar, G., & Widaman, K. F. (2002). To parcel or not to parcel: exploring the question, weighing the merits. Structural Equation Modeling, 9, 151e173. Manning, K., Zacher, P., Ray, G. E., & LoBello, S. (2006). Research methods courses and the scientist and practitioner interests of psychology majors. Teaching of Psychology, 33, 194e196. Organisation for Economic Cooperation and Development(OECD). (2000). Measuring student knowledge and skills: The PISA 2000 assessment of reading literacy, mathematical literacy, scientific literacy. Paris: Author. Postareff, L., & Lindblom-Yla¨nne, S. (2008). Variation in teachers’ description of teaching: broadening the understanding of teaching in higher education. Learning and Instruction, 18, 109e120. Renkl, A., Gruber, H., Weber, S., Lerche, T., & Schweizer, K. (2003). Cognitive load beim lernen aus lo¨sungsbeispielen. [Cognitive load in learning from examples]. Zeitschrift fu¨r Pa¨dagogische Psychologie, 17, 93e101. Rychen, D. S. (2001). Introduction. In D. S. Rychen, & L. H. Salanik (Eds.), Defining and selecting key competencies (pp. 1e15). Ashland, OH: Hogrefe & Huber. Scheiter, K., Gerjets, P., Vollmann, B., & Catrambone, R. (2009). The impact of learner characteristics on information utilization strategies, cognitive load experienced, and performance in hypermedia learning. Learning and Instruction, 19, 387e401. Schweizer, K., Goldhammer, F., Rauch, W., & Moosbrugger, H. (2007). On the validity of Raven’s matrices test: does spatial ability contribute to performance? Personality and Individual Differences, 43, 1998e2010. Schweizer, K., & Koch, W. (2001). A revision of Cattell’s investment theory: cognitive properties influencing learning. Learning and Individual Differences, 13, 57e82. Schweizer, K., Moosbrugger, H., & Schermelleh-Engel, K. (2003). Models for investigating hierarchical structures in differential psychology. Methods of Psychological Research Online, 8, 159e180. Struyven, K., Dochy, F., Janssens, S., & Gielen, S. (2006). On the dynamics of students’ approaches to learning: the effect of the teaching/learning environment. Learning and Instruction, 16, 279e294. Sweller, J. (2006). The worked example effect and human cognition. Learning and Instruction, 16, 165e169. Tabachnick, B. G., & Fidell, L. S. (2007). Using multivariate statistics (5th ed.). Boston: Peason, Allyn & Bacon. Vermetten, Y., Vermunt, J. D., & Lodewijks, H. G. (2002). Powerful learning environments? How university students differ in their response to instructional measures. Learning and Instruction, 12, 263e284. Wagner, C., & Maree, D. (2007). Teaching research methodology: Implications for psychology on the road ahead. South African Journal of Psychology, 37, 121e134. Walker, J. M. (2008). Looking at teacher practices through the lens of parenting style. Journal of Experimental Education, 76, 218e240. Weinert, F. E. (2001). Concept of competence: a conceptual clarification. In D. S. Rychen, & L. H. Salanik (Eds.), Defining and selecting key competencies (pp. 45e65). Ashland, OH: Hogrefe & Huber. van Zuilen, M. H., Mintzer, M. J., Milanez, M. N., Kaiser, R. M., Rodriguez, O., Paniagua, M. A., et al. (2008). A competency-based medical student curriculum targeting key geriatric syndromes. Gerontology & Geriatrics Education, 28, 29e45.