A structural examination of the Learning Experiences Questionnaire


Journal of Vocational Behavior 80 (2012) 50–66


A structural examination of the Learning Experiences Questionnaire☆ David M. Tokar ⁎, Taneisha S. Buchanan, Linda M. Subich, Rosalie J. Hall, Christine M. Williams University of Akron, USA


Article history: Received 12 May 2011. Available online 6 August 2011.
Keywords: Social Cognitive Career Theory; Learning experiences; Holland's theory.

Abstract

The underlying factor structure of the Learning Experiences Questionnaire (LEQ; Schaub, 2004) was examined using data from 742 male and female college-age respondents. The LEQ items reflect a variety of learning experiences (generated based on Bandura's (1986, 1997) four sources of self-efficacy perceptions) that might occur in each of Holland's (1997) six RIASEC domains. The LEQ was expected to have a multi-dimensional factor structure, with dimensions representing both RIASEC context and learning experience type. A multi-trait multi-method framework was used to compare the relative fits of a series of models with different theoretically implied dimensionalities, estimated using confirmatory factor analysis. Results supported the proposed multi-dimensional structure. Results of follow-up tests of measurement invariance for men and women revealed that configural and partial factorial invariance held, implying that the LEQ measure functions similarly for men and women.

Social Cognitive Career Theory (SCCT; Lent, Brown, & Hackett, 1994) has emerged as a prominent force in vocational psychology research. Briefly, SCCT suggests that background and contextual variables lead persons to differential learning experiences. Among other things, as detailed in Bandura's (1986) Social Cognitive Theory, learning experiences in general provide important information that helps develop an individual's self-efficacy (i.e., confidence) and outcome expectations for specific tasks. In turn, self-efficacy and outcome expectations inform the individual's occupational interests and goals. Recent reviews (e.g., Betz, 2008; Lent, 2005) underscore the frequency of SCCT's use and attest to the strength of research support for its utility in understanding processes that contribute to occupational interest and choice. The focus of the current study is the psychometric investigation of a recently developed measure – the Learning Experiences Questionnaire (LEQ; Schaub, 2004) – which allows the assessment of different types of learning experiences categorized within a career theory-based framework. The rationale for the development of this measure is presented next.

Much of the extant SCCT research has operationalized the content of people's career interests or choices using Holland's (1997) RIASEC domains (i.e., Realistic, Investigative, Artistic, Social, Enterprising, Conventional). For example, SCCT has proven useful for understanding RIASEC-based occupational interests and choice goals of diverse populations, including Mexican American college students (Flores, Robitschek, Celebi, Andersen, & Hoang, 2010), Portuguese high school students (Lent, Paixão, da Silva, & Leitão, 2010), and US college students in the computing disciplines from historically Black and predominately White universities (Lent, Lopez, Lopez, & Sheu, 2008). Indeed, Sheu et al. (2010) offered evidence derived from meta-analytic path analyses that relationships of self-efficacy and outcome expectations with interests and goals as implied by SCCT are well supported for the RIASEC domains.

However, there has been little empirical examination of learning experiences in the SCCT literature, even though theory suggests they are immediate precursors of self-efficacy and outcome expectations (Bandura, 1986; Lent et al., 1994). This omission does not appear to reflect authors' discounting of, or lack of interest in, these learning experiences.

☆ We thank Michael Schaub for kindly granting us permission to include the LEQ items and scoring instructions in this article.
⁎ Corresponding author at: Department of Psychology, University of Akron, Akron, OH 44325-4301, USA. Fax: +1 330 972 5174. E-mail address: [email protected] (D.M. Tokar).
doi:10.1016/j.jvb.2011.08.003


Although researchers have not empirically assessed learning experiences, they often refer in the discussion of their findings to the likely role learning experiences played in contributing to their participants' self-efficacy or outcome expectations (e.g., Byars-Winston & Fouad, 2008; Flores et al., 2010). Indeed, in a recent chapter on SCCT, Lent and Fouad (2011) highlighted the role of learning experiences as sources of self-efficacy and their pivotal role in career counseling interventions. Thus, the lack of empirical study of learning experiences likely reflects the dearth, until recently, of measurement tools to assess them.

Schaub (2004) developed the Learning Experiences Questionnaire (LEQ) to address this void in the literature. The LEQ enables researchers to assess career-related learning experiences relevant to each of Holland's (1997) RIASEC domains. Its 120 items are scored for 24 different learning experiences, four types for each of the six RIASEC domains. The LEQ items were developed rationally by Schaub, based on two theoretical literatures: (a) Holland's (1997) descriptions of the RIASEC themes and his Self-Directed Search (SDS; Holland, Fritzsche, & Powell, 1994), which operationalizes those themes, and (b) Bandura's (1986, 1997) description of the four sources of self-efficacy expectations: personal performance accomplishments, vicarious learning, verbal persuasion, and physiological arousal. Thus, each of the 24 subscale scores of the LEQ indicates the extent to which the respondent has had learning experiences that provide a specific source of efficacy expectations relevant to a particular RIASEC domain, e.g., Realistic Performance Accomplishments or Social Verbal Persuasion. Subsequent research with the LEQ has supported its utility for predicting RIASEC self-efficacy and outcome expectations, as well as the importance of considering personality and gender in relation to RIASEC learning experiences (Dickinson, 2008; Schaub & Tokar, 2005; Tokar, Thompson, Plaufcan, & Williams, 2007; Williams & Subich, 2006). By extension, these results offer preliminary validity evidence for the LEQ as an adequate operationalization of RIASEC-based learning experiences. However, the LEQ is a relatively new instrument, and more could be known about its construct validity and psychometric properties. Thus, the current investigation is intended to advance knowledge about the LEQ by evaluating its internal factor structure.

LEQ structural issues

A potential limitation of the LEQ is that each of the 24 specific learning experiences is assessed with only five items. The brevity of the subscales contributes to somewhat low internal consistency reliability estimates for some of the LEQ subscales. For example, in his original development of the LEQ, Schaub (2004) reported alpha estimates below .60 for several subscales, and other researchers have reported similarly low alphas for their samples (Dickinson, 2008; Williams & Subich, 2006). Although it is not clear that low internal consistency has the same detrimental effects on validities as do other forms of unreliability (McCrae, Kurtz, Yamagata, & Terracciano, 2011), the low internal consistency estimates for these subscales may be concerning to some.

An alternative to scoring the LEQ for all 24 specific learning experiences subscales is to aggregate across the four sources of self-efficacy for each of the RIASEC domains, thus creating a set of six subscale scores which indicate the totality of learning experiences within a specific RIASEC domain. Several recent investigations of SCCT have used this strategy in scoring the LEQ (Dickinson, 2008; Schaub & Tokar, 2005; Tokar et al., 2007). Advantages of scoring the LEQ for RIASEC-based total scales include (a) increased breadth of construct coverage, (b) greater internal consistency of the subscale score, and (c) a more parsimonious operationalization of learning experiences for the RIASEC constructs, which can be particularly important in complex model testing.

These advantages, however, are offset by at least two potentially serious limitations. First, research findings showing relationships (or non-relationships) of global RIASEC-based learning experiences with other constructs such as self-efficacy, outcome expectations, and person inputs may leave unclear which specific learning experiences contributed to the observed results. Such detailed information can be helpful in more fully understanding career development. For example, Williams and Subich's (2006) analyses based on individual LEQ subscales demonstrated that, for all RIASEC domains, domain-specific performance accomplishments contributed significantly to the prediction of domain-specific self-efficacy. For some RIASEC domains, verbal persuasion and physiological arousal also contributed predictive utility for domain-specific self-efficacy. For outcome expectations, verbal persuasion was the only learning experience which consistently and uniquely contributed to prediction when the individual's level of self-efficacy was included in the prediction model. It is also noteworthy that these patterns varied somewhat by gender. For men, performance accomplishments contributed strongly to the prediction of self-efficacy across RIASEC domains, whereas for women performance accomplishments along with physiological arousal were consistent and strong predictors of self-efficacy across domains; verbal persuasion contributed more weakly and differentially by gender across domains. Thus, the finer-grained analysis possible with the 24 LEQ subscales compared to the LEQ total scales may be of interest, as illustrated by the Williams and Subich study of antecedents and consequences of RIASEC-based learning experiences.

Second, although the LEQ can be scored both for the 24 types of specific learning experiences and also for the more global RIASEC themes, so far there has been little empirical investigation of the psychometric basis of the latter, more aggregate scores.
Schaub and Tokar (2005), Tokar et al. (2007), and Williams and Subich (2006) reported acceptable internal consistency estimates for the LEQ RIASEC summary scales (ranging from .70 to .90, with a median of .83 across these three studies), but other findings call into question the LEQ's implied hierarchical structure of learning experiences. For example, Schaub (2004) found that within each RIASEC theme, LEQ subscale intercorrelations varied widely depending upon the specific learning experiences involved. Within each of the six RIASEC themes, performance accomplishments correlated moderately to strongly with the other three corresponding learning experiences (median rs ranged from .26 for Conventional to .59 for Realistic), as would be expected if they were all interrelated indicators of the same factor. However, the comparable correlations of physiological arousal with (a) vicarious learning (ranging from −.01 to .33, mdn = .22) and (b) verbal persuasion (ranging from .02 to .36, mdn = .24) were quite modest.
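To make the within-theme (and, analogously, within-source) correlation patterns just described concrete, a 24 x 24 matrix of LEQ subscale intercorrelations can be summarized by grouping subscale pairs either by RIASEC theme or by learning-experience source and taking the median correlation in each grouping. The Python sketch below illustrates one way to do this; the subscale column labels (e.g., "RPA" for Realistic Performance Accomplishments) and the data frame of subscale scores are hypothetical, not part of the published scoring materials.

```python
# Sketch: contrast median within-theme and within-source LEQ subscale correlations.
# corr_matrix is assumed to be a pandas DataFrame holding the 24 x 24 matrix of
# subscale intercorrelations, with hypothetical labels such as "RPA", "RVL", ...
import itertools
import numpy as np

themes = ["R", "I", "A", "S", "E", "C"]
sources = ["PA", "VL", "VP", "PHA"]

def median_r(corr_matrix, groups):
    """Median correlation among all pairs of subscales that share a group."""
    rs = [corr_matrix.loc[a, b]
          for group in groups
          for a, b in itertools.combinations(group, 2)]
    return float(np.median(rs))

by_theme = [[t + s for s in sources] for t in themes]   # e.g., ["RPA", "RVL", "RVP", "RPHA"]
by_source = [[t + s for t in themes] for s in sources]  # e.g., ["RPA", "IPA", ..., "CPA"]

# Example usage, given a DataFrame `leq_scores` with one column per subscale:
# corr_matrix = leq_scores.corr()
# print(median_r(corr_matrix, by_theme), median_r(corr_matrix, by_source))
```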


To date, no published study has reported LEQ subscale intercorrelations or demonstrated other potential support for the LEQ's implied hierarchical structure. Therefore, it is an open question whether Bandura's (1986, 1997) four different sources of learning experiences covary meaningfully and substantively as constituent facets, or components, of broader, higher-order RIASEC-based learning experience constructs. Certainly this is plausible from a theoretical perspective. Some empirical guidance on this issue may come from Dickinson (2008), who scored the LEQ for the 24 subscales as well as the six global RIASEC themes in her dissertation research. She reported subscale intercorrelations for specific sources of self-efficacy (e.g., vicarious learning) across the six RIASEC themes that equaled or exceeded intercorrelations among the four learning experiences within a particular theme (e.g., Realistic). For example, the 15 correlations among the six different verbal persuasion subscales ranged from .35 to .66, with a median of .47. Even more striking, given its previously described modest correlations with corresponding RIASEC vicarious learning and verbal persuasion scales, were the correlations among the physiological arousal subscales, which ranged from .35 to .59, with a median of .44.

Dickinson's (2008) results suggest an alternative plausible underlying factor structure for the LEQ. That is, individual differences in people's characteristic pursuit of different types of learning experiences (as categorized according to Bandura's (1986, 1997) four sources of self-efficacy) may drive observed responses on the LEQ. However, a possibility even more consistent with the theory underlying the LEQ's development than a structure based only on RIASEC domains or only on efficacy sources is that the 24 LEQ subscale scores are multi-dimensional in nature. That is, responses to the LEQ items may reflect the influence of individual differences both in the pursuit of learning experiences in different RIASEC domains and in the pursuit of different types of efficacy sources. Finally, the evidence just discussed was all in the form of correlation coefficients. However, factor loadings, because of their partialled nature, may be preferable to zero-order correlations for assessing convergent validity in measures such as the LEQ, where responses are believed to reflect multiple sources of variability.

Study purpose and model tests

Previous research with the LEQ (i.e., Dickinson, 2008; Schaub & Tokar, 2005; Tokar et al., 2007) has fruitfully utilized the six broad RIASEC learning experience constructs. Future researchers also may want to score the LEQ for those broad RIASEC constructs and/or broadly for Bandura's (1986, 1997) four types of learning experiences aggregated across RIASEC themes. Given these past and potential future varied scoring practices for the LEQ, which extend beyond use of the 24 subscale scores, it is important to conduct an empirical investigation of the LEQ's internal structure to determine which alternative scorings are psychometrically defensible and to better understand their implications. Thus, the primary purpose of this study was to examine the underlying factor structure of the LEQ. To this end, we proposed and estimated a series of hierarchically nested factor models of varying factor dimensionalities.
More specifically, six different nested CFA models were specified according to a multi-trait multi-method (MTMM) framework (Campbell & Fiske, 1959), implemented in a structural equation modeling context which allowed statistical comparisons of the fit of different competing factor structures (see Widaman, 1985). Each model incorporated different dimensionalities for "trait" (RIASEC) and "method" (type of learning experience) factors, including models that had only trait or only method factors. (Although the choice of labels is fairly arbitrary and has no implications for the model tests, the four types of learning experiences are labeled "method" factors because they represent sources of efficacy information, as distinct from the RIASEC-based content of the learning experiences.) In comparisons of model fit, models with more complex factor structures involving a greater number of factors were preferred to less complex models with a smaller number of factors only if their additional complexity resulted in a significantly improved fit to the data. The first model was based on Schaub's (2004) scoring scheme, such that differences in the 24 LEQ subscales were assumed to reflect only common variance due to six underlying RIASEC trait constructs. Thus, Model 1 included six correlated trait factors and no method factor (see Fig. 1).


Fig. 1. Confirmatory factor analysis model with six RIASEC trait factors. R = Realistic. I = Investigative. A = Artistic. S = Social. E = Enterprising. C = Conventional. PA = Performance Accomplishments. VL = Vicarious Learning. VP = Verbal Persuasion. PHA = Physiological Arousal.


reflect only the four experiential sources of self-efficacy proposed by Bandura (1986, 1997); these four correlated learning experiences factors were modeled across the six RIASEC themes. Thus, Model 2 included four correlated method factors and no trait factor (see Fig. 2). Model 3 was a global, one-factor model of learning experiences that encompassed all four learning experiences across all six RIASEC themes. This model made no distinction between common variance due to trait effects and common variance due to method effects. It implies that there is no discriminant validity in LEQ subscale scores due either to RIASEC themes or self-efficacy sources. The fourth, fifth, and sixth models were multi-dimensional measurement models that included the effects of both traits and methods as two independent sources of common variance for the manifest LEQ variables. Model 4 was a 10-factor correlated trait– correlated method (CT–CM) model (Widaman, 1985). Specifically, it had six RIASEC trait factors and four learning experiences method factors (see Fig. 3). This model allowed for covariances among the set of trait factors and among the set of method factors; however, the trait factors were not allowed to covary with the method factors (i.e., these covariances were fixed to zero). Model 4 was most conceptually consistent with the development of the LEQ, as it implies that responses to the LEQ items reflect both RIASEC themes and specific types of learning experiences. Statistically significant factor loadings for Model 4 on RIASEC latent constructs or on learning type latent constructs would indicate convergent validity for different LEQ subscale scores for RIASEC themes and for types of learning experiences, respectively. The fifth and sixth models were simpler, but still multi-dimensional, alternatives to the 10-factor CT–CM model. Model 5 included six covarying RIASEC trait factors and one general learning method factor (see Fig. 4). Model 6 included one general RIASEC trait factor and four covarying learning method factors (see Fig. 5). As was the case with the complete CT–CM model, all of the trait factors of Model 5 and all of the method factors of Model 6 were allowed to covary with each other; however, all trait– method factor covariances were set to zero. Models 5 and 6 allow the assessment of discriminant validity. Specifically, if the fit of Model 5 is not significantly poorer than that of Model 4, it implies a lack of discriminant validity among the four types of learning experiences. Similarly, if the fit of Model 6 is not significantly poorer than that of Model 4, it implies a lack of discriminant validity among the six RIASEC themes. In sum, the statistical comparisons of the fits of specific pairs of models allowed us to address the following two questions. First, do multi-dimensional structures incorporating both trait and method factors (i.e., Models 4, 5, and 6) better represent the LEQ subscale structure than any of the uni-dimensional models (Models 1, 2 and 3)? Second, if there is support for a multi-dimensional model, does the 10-factor CT–CM model (Model 4) which implies discriminant validity of both the six RIASEC themes and the four learning types have a significant improvement in fit over the simpler multi-dimensional models that include one general trait factor (Model 6) or one general method factor (Model 5)? 
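For readers who want to see the competing structures side by side, the following is a minimal, illustrative sketch of how Model 1 and the loading pattern of the Model 4 CT–CM structure could be written in lavaan-style syntax with the Python semopy package. This is not the authors' code (the models in this study were estimated in Mplus 6.1), and the subscale names are hypothetical column labels for a data set of the 24 LEQ subscale scores.

```python
# Hypothetical sketch of Model 1 (six correlated RIASEC trait factors) and the
# loading pattern of Model 4 (CT-CM), expressed in lavaan-style syntax for semopy.
import semopy

model1_desc = """
Realistic     =~ RPA + RVL + RVP + RPHA
Investigative =~ IPA + IVL + IVP + IPHA
Artistic      =~ APA + AVL + AVP + APHA
Social        =~ SPA + SVL + SVP + SPHA
Enterprising  =~ EPA + EVL + EVP + EPHA
Conventional  =~ CPA + CVL + CVP + CPHA
"""

# Model 4 adds four learning-experience "method" factors that cross the six
# RIASEC themes, so each subscale loads on one trait and one method factor.
model4_desc = model1_desc + """
PerfAcc  =~ RPA + IPA + APA + SPA + EPA + CPA
VicLearn =~ RVL + IVL + AVL + SVL + EVL + CVL
VerbPers =~ RVP + IVP + AVP + SVP + EVP + CVP
PhysAr   =~ RPHA + IPHA + APHA + SPHA + EPHA + CPHA
"""
# In the authors' CT-CM specification, traits covary with traits and methods with
# methods, while every trait-method covariance is fixed to zero; the exact syntax
# for imposing those constraints differs across SEM packages and is omitted here.

# Example usage (assuming `leq` is a pandas DataFrame of the 24 subscale scores):
# model = semopy.Model(model4_desc)
# model.fit(leq)                   # maximum likelihood estimation
# print(semopy.calc_stats(model))  # chi-square, CFI, TLI, RMSEA, and related indices
```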
Finally, after determining the best-fitting model from the MT–MM comparisons, we investigated whether this model fit the data equally well for men and women in our sample, and whether the parameters of the model were invariant for the two gender groups.

Method

Participants and procedure

Data were drawn from two previously published studies (Tokar et al., 2007; Williams & Subich, 2006) and two unpublished studies (Williams, 2010; Williams, Thompson, & Robinson, 2006) which used the LEQ for other research purposes. Tokar et al. used the LEQ, along with measures of personality and conformity to masculine and feminine gender role norms, to examine precursors of RIASEC-based learning experiences. Williams and Subich used the LEQ and corresponding measures of self-efficacy and outcome expectations to investigate gender differences in these constructs as well as their interrelations. The two unpublished studies examined the role of learning experiences in the prediction of women's self-efficacy and outcome expectations. More specifically, the Williams et al. study focused on African American women, while the Williams (2010) study considered gender role norm conformity as a predictor of learning experiences. Participants in all four studies (N = 766) were solicited from psychology courses at the same large Midwestern university, and they were offered course extra credit for their participation.


Fig. 2. Confirmatory factor analysis model with four learning experiences method factors. R = Realistic. I = Investigative. A = Artistic. S = Social. E = Enterprising. C = Conventional. PA = Performance Accomplishments. VL = Vicarious Learning. VP = Verbal Persuasion. PHA = Physiological Arousal.


Fig. 3. Correlated trait–correlated method (CT–CM) confirmatory factor analysis model of six RIASEC traits and four learning experiences methods. Values represent standardized maximum likelihood parameter estimates for factor loadings and correlations among factors. Additional parameter estimates not depicted are: (a) the factor loading of Performance Accomplishments to RVL (λ = .31, p < .05), (b) the correlation of the uniquenesses of EPHA and SPHA (r = .43, p < .05), and (c) the correlation of the uniquenesses of IPHA and IPA (r = .36, p < .05). R = Realistic. I = Investigative. A = Artistic. S = Social. E = Enterprising. C = Conventional. PA = Performance Accomplishments. VL = Vicarious Learning. VP = Verbal Persuasion. PHA = Physiological Arousal. *p < .05.


Fig. 4. Correlated trait–single method confirmatory factor analysis model of the six RIASEC traits and one global learning experience method. R = Realistic. I = Investigative. A = Artistic. S = Social. E = Enterprising. C = Conventional. PA = Performance Accomplishments. VL = Vicarious Learning. VP = Verbal Persuasion. PHA = Physiological Arousal.


Fig. 5. Single trait-correlated method confirmatory factor analysis model of one global RIASEC trait and four learning experiences methods. R = Realistic. I = Investigative. A = Artistic. S = Social. E = Enterprising. C = Conventional. PA = Performance Accomplishments. VL = Vicarious Learning. VP = Verbal Persuasion. PHA = Physiological Arousal.

Those who agreed to participate gave their consent and completed the LEQ (and other measures not used in this study) either on paper or online. Persons who chose not to participate had additional extra credit options available, and all participants were free to withdraw their participation at any time.

Instrument

Learning Experiences Questionnaire (LEQ; Schaub, 2004)

The LEQ is a 120-item self-report measure of career-related learning experiences.

Table 1
LEQ subscale intercorrelations, internal consistencies, means, and standard deviations (N = 742).

Scale       α     M       SD
1. RPA     .85   19.77   5.95
2. RVL     .78   22.63   5.05
3. RVP     .76   17.11   5.16
4. RPHA    .73   19.02   4.98
5. IPA     .67   20.20   4.73
6. IVL     .71   18.08   4.79
7. IVP     .73   17.82   5.02
8. IPHA    .67   18.82   4.93
9. APA     .72   18.01   5.71
10. AVL    .63   17.89   5.16
11. AVP    .68   16.79   5.45
12. APHA   .51   18.23   4.44
13. SPA    .64   24.23   3.63
14. SVL    .66   21.22   4.79
15. SVP    .73   20.74   5.01
16. SPHA   .72   20.23   4.90
17. EPA    .72   21.05   4.58
18. EVL    .60   20.01   4.60
19. EVP    .70   18.16   4.80
20. EPHA   .72   19.04   4.84
21. CPA    .57   21.71   4.29
22. CVL    .65   22.06   4.58
23. CVP    .58   19.07   4.09
24. CPHA   .69   19.65   4.70

Note. R = Realistic. I = Investigative. A = Artistic. S = Social. E = Enterprising. C = Conventional. PA = Performance Accomplishments. VL = Vicarious Learning. VP = Verbal Persuasion. PHA = Physiological Arousal.


The LEQ was developed rationally to tap each of Bandura's (1986) four sources of self-efficacy information (i.e., personal performance accomplishments, vicarious learning, verbal persuasion, and physiological arousal) for each of Holland's (1997) six RIASEC themes. Each of the four types of learning experiences is assessed by five items, yielding four learning experience subscales for each RIASEC domain. See the Appendix for the LEQ items and scoring instructions. Respondents indicate their level of agreement with each item using a 6-point Likert-type scale (1 = strongly disagree, 6 = strongly agree). Each specific learning experience subscale score is calculated as the sum of the relevant five items. Consistent with the LEQ scoring key (Schaub, 2003), physiological arousal subscale scores were reversed, so that higher scores reflected lower recollected levels of physiological arousal experienced while engaged in RIASEC-based learning experiences.

To initially develop the LEQ items, Schaub (2004) consulted the Activities, Competencies, and Occupations items of the Self-Directed Search (SDS; Holland et al., 1994), the most widely used measure of Holland's (1997) RIASEC personality types, in order to ensure the items adequately sampled the RIASEC domains. Consistent with Bandura's (1997) conceptualization of self-efficacy beliefs as constructed "from the cognitive processing and conscious evaluation of the consequences of experienced events" (Schaub, 2004, p. 104), LEQ items were worded as recollections of previous experiences (e.g., "I have made repairs around the house"). To develop the LEQ, a pilot version of the questionnaire was completed by 222 college students and then evaluated for internal consistency using the criterion of a minimum corrected item-total correlation of .30. Three Ph.D.-level vocational researchers with extensive knowledge of SCCT and Holland's (1997) theory (i.e., "experts") were also asked to judge the content of the proposed LEQ items for correspondence with their intended construct and for clarity, as well as to evaluate the extent to which each Holland theme was represented across the four sources of self-efficacy information. Where necessary based on the results of item analysis and expert judges' feedback, items were revised to improve their clarity and content validity (see Schaub, 2004 for additional details). The resulting LEQ subscale α's ranged from the mid .40's to mid .80's, with a median α of about .67, in two independent samples of college students (Schaub, 2004; Williams & Subich, 2006). Construct validity information includes reported positive associations of LEQ subscale and RIASEC summary scores with corresponding RIASEC self-efficacy and outcome expectations scores (Schaub & Tokar, 2005; Williams & Subich, 2006).
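A minimal sketch of the scoring scheme just described is shown below, assuming the 120 item responses are held in a pandas DataFrame; the item-to-subscale mapping passed in as `key` is hypothetical and would need to follow Schaub's (2003) scoring key.

```python
# Sketch of LEQ subscale scoring: each subscale is the sum of its five items, and
# physiological arousal items are reverse-scored on the 1-6 scale (7 - response)
# so that higher subscale scores reflect lower recollected arousal.
import pandas as pd

def score_leq(items, key):
    """items: respondents x 120 item responses (1-6); key: subscale name -> list of 5 item columns."""
    scores = {}
    for subscale, item_cols in key.items():
        block = items[item_cols]
        if subscale.endswith("PHA"):           # physiological arousal subscales
            block = 7 - block                  # reverse-score each item
        scores[subscale] = block.sum(axis=1)   # each subscale score ranges from 5 to 30
    return pd.DataFrame(scores)

# Example usage with hypothetical item column names:
# key = {"RPA": ["item01", "item07", "item13", "item19", "item25"], ...}
# leq_scores = score_leq(raw_items, key)
```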
Analytic approach

Confirmatory factor analyses (CFAs) were performed to test the fit of the LEQ data to six different measurement models (described below) in order to determine the LEQ factor dimensionality. Following this, multi-sample CFA models with structured means were estimated to determine the extent of measurement invariance across gender groups. For all models tested, the 24 LEQ subscale scores served as observed indicators. Mplus version 6.1 (Muthén & Muthén, 1998–2010) was used to estimate the CFA models, using maximum likelihood procedures. The fit of the data to each model was evaluated using the χ2 goodness-of-fit test. Because the chi-square statistic can be sensitive to small and potentially unimportant sources of model misfit when sample size is large (Kline, 2005), we also consulted the following alternative fit indices: comparative fit index (CFI), Tucker–Lewis index (TLI), root mean square error of approximation (RMSEA), and standardized root mean square residual (SRMR). CFI and TLI values ≥ .95 and RMSEA and SRMR values ≤ .08 were used as thresholds to indicate good fit (Browne & Cudeck, 1993; Hu & Bentler, 1998; MacCallum, Browne, & Sugawara, 1996).

To determine whether the fit of our best-fitting CFA model was comparable for men and women in our sample, we conducted a series of exploratory tests of measurement invariance across gender. Broadly speaking, measurement invariance is the extent to which measurements collected under different conditions (e.g., from different populations, across time, paper versus computer administration) are the same psychometrically. Confirmatory factor analysis can be used to test for several different forms of measurement invariance, ranging from weak to strict. Establishing invariance of a measure is important because at least partial invariance of factor loadings across groups is necessary before comparisons can be made between those groups on the measure (Byrne, Shavelson, & Muthén, 1989).
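As a point of reference for the fit indices just listed, the sketch below shows standard formulas for RMSEA, CFI, and TLI computed from model and baseline (independence-model) chi-square values; this is illustrative only, since the baseline chi-square values are not reported in the article, and the example call uses RMSEA, which needs no baseline.

```python
# Minimal sketch of the approximate-fit indices consulted above, computed from
# chi-square values; formulas follow common presentations (e.g., N - 1 in RMSEA).
import math

def rmsea(chi2, df, n):
    """Root mean square error of approximation."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2_m, df_m, chi2_b, df_b):
    """Comparative fit index: model vs. baseline (independence) noncentrality."""
    d_m = max(chi2_m - df_m, 0.0)
    d_b = max(chi2_b - df_b, d_m, 1e-12)
    return 1.0 - d_m / d_b

def tli(chi2_m, df_m, chi2_b, df_b):
    """Tucker-Lewis index based on chi-square / df ratios."""
    return ((chi2_b / df_b) - (chi2_m / df_m)) / ((chi2_b / df_b) - 1.0)

# Example: RMSEA for Model 4 from the chi-square and df reported in Table 2.
print(round(rmsea(721.88, 208, 742), 3))  # approximately .058
```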

Table 2
Summary of fit statistics for all models.

Model description                                                       χ2        df    CFI   TLI   RMSEA   SRMR
Models for trait and method effects comparisons
  Model 1: 6 RIASEC                                                    3229.26    237   .65   .59   .130    .117
  Model 2: 4 Learning                                                  3596.76    246   .61   .56   .135    .091
  Model 3: 1 Global                                                    4975.76    252   .44   .39   .159    .131
  Model 4: 6 RIASEC, 4 Learning                                         721.88    208   .94   .92   .058    .043
  Model 5: 6 RIASEC, 1 Learning                                        1442.83    213   .86   .81   .088    .051
  Model 6: 1 RIASEC, 4 Learning                                        2493.26    222   .73   .67   .117    .070
Multi-group MT–MM models for tests of measurement invariance
  Configural invariance: no equality constraints                        725.04    410   .96   .95   .046    .047
  Invariance of factor loadings                                         788.78    449   .96   .95   .045    .053
  Partial invariance of factor loadings                                 774.52    447   .96   .95   .045    .052
  Partial invariance of factor loadings, invariance of intercepts       846.55    461   .95   .94   .048    .054

Note. N = 742 for Models 1–6. N = 736 for models for tests of measurement invariance. df = degrees of freedom; CFI = comparative fit index; TLI = Tucker–Lewis Index; RMSEA = root mean square error of approximation; SRMR = standardized root mean square residual.


In the current study, we tested for (a) configural invariance, (b) invariance of factor loadings, and (c) invariance of the measured indicator intercepts (also known as "scalar invariance"). Configural invariance is a weak form of invariance that occurs when the same general form of factor structure holds across different groups. In other words, going across groups there is the same pattern of indicators loading on the same number of latent constructs, although the values of the factor loadings and intercepts may vary across groups. Invariance of factor loadings occurs when there is not only configural invariance, but the values of the unstandardized factor loadings for a given indicator are also equal across groups. Finally, invariance of intercepts is generally assessed while also holding factor loadings invariant, and is a stricter form of invariance in which the intercepts of the measured variables are constrained to be equal, a form of invariance which suggests that there are no mean differences between groups in the measured indicator variables.

Results

Preliminary data screening and descriptive statistics

Twenty-two cases were dropped because of missing data, and two others were dropped due to minimal or no variability in LEQ responses. The final combined data set included 742 undergraduate students (446 women and 290 men; 6 did not report). The mean age of participants was 21.90 years (SD = 6.40). The majority of the participants were European American (79.5%), with 13.1% African-American, 1.8% Asian/Pacific Islander, 1.3% Hispanic/Latino, 0.3% Native American, 2.8% who identified as other ethnicities, and 1.2% who did not report their ethnicity. Among the participants, 43.3% were first-year students, with 25.2% sophomores, 13.9% juniors, 8.4% seniors, 7.8% reporting other years, and 1.5% who did not report their year of study. Participants were drawn from 113 different academic majors, the most heavily represented of which were psychology (15.8%) and nursing (13.9%), with a large contingent of undecided students (10.1%).

Table 1 presents means, standard deviations, internal consistency reliability estimates (i.e., alphas), and intercorrelations for the 24 LEQ subscales. LEQ subscale α's ranged from .51 (Artistic Physiological Arousal) to .85 (Realistic Performance Accomplishments), with a median alpha estimate of .70. Twelve of the 24 LEQ subscales had αs ≥ .70, and 21 of 24 had αs ≥ .60. Given the rather modest reliabilities for some LEQ subscales, we examined the item-total correlations for each LEQ subscale; 111 of 120 (93%) LEQ items had corrected item-total correlations ≥ .30 with their intended LEQ subscale. The other nine items (which were spread across four of six RIASEC themes and all four learning experiences) had corrected item-total correlations ranging from .17 to .29 (mdn = .25), and only one item correlated < .23 with the remaining four items composing its intended subscale. Importantly, we noted that in six of the nine cases, the subscale alphas would either remain the same or actually decrease if the item were removed from the subscale. These results suggested reasonable consistency of item content within each 5-item LEQ subscale. The modest α's observed for some LEQ subscales were likely due to the small number of items used to estimate each LEQ learning experience.
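The reliability checks described above can be illustrated with a short sketch: coefficient alpha for a five-item subscale and corrected item-total correlations (each item correlated with the sum of the remaining four items). The DataFrame of item responses is assumed; this is not the authors' code.

```python
# Sketch of coefficient alpha and corrected item-total correlations for one
# five-item LEQ subscale; `items` is a DataFrame of item responses (rows =
# respondents, columns = the five items, already reverse-scored if applicable).
import pandas as pd

def cronbach_alpha(items):
    """alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def corrected_item_total(items):
    """Correlation of each item with the sum of the remaining items."""
    total = items.sum(axis=1)
    return pd.Series({col: items[col].corr(total - items[col]) for col in items.columns})

# Items with corrected item-total correlations below .30 were flagged during the
# LEQ's development (Schaub, 2004); the same criterion could be applied here.
```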
Based on results of item-total correlations and reliability analyses, we concluded that the LEQ subscales were reasonably reliable, and thus appropriate to use as measured indicators (i.e., measured variables) for the confirmatory factor analyses.

Confirmatory factor analyses

As a precondition for the focal analyses, the LEQ subscale scores were screened for normality to determine whether the levels of skew and kurtosis posed a problem for the proposed maximum likelihood (ML) estimation of the CFA model parameters and fit statistics. Although skew and/or kurtosis was statistically significant for some of the LEQ subscales, the actual levels of skew (−.8 to +.1) and kurtosis (−.7 to +.8) were not substantial for any of the subscales, with no values greater than 1.0. Thus, we concluded the data were appropriate for ML estimation (Finney & DiStefano, 2006). To verify this conclusion, the focal Model 4 was re-estimated, requesting the Satorra–Bentler adjusted chi-square and robust standard errors, a frequently used adjustment for non-normality. As expected, the resulting parameter estimates (e.g., factor loadings and intercorrelations) were identical to the previous estimation. Only very minimal changes in the chi-square test and standard errors were observed, and no conclusions about statistical significance changed.

Model 1

The first model tested was a six-factor model based on Schaub's (2004) scoring scheme for the six LEQ RIASEC scales. For each of the six RIASEC constructs (e.g., Realistic), the four LEQ subscales corresponding to that Holland theme (e.g., Realistic Performance Accomplishments, Realistic Vicarious Learning, Realistic Verbal Persuasion, Realistic Physiological Arousal) served as the indicator variables. The six RIASEC themes were modeled as correlated latent variables (see Fig. 1). Table 2 summarizes the fit statistics for Model 1 as well as the other models. Model 1 had a poor fit to the data, although the resulting factor loadings suggested the potential usefulness of a more complex model which incorporated RIASEC latent constructs.

Model 2

The second model tested was a four-factor model in which each of four correlated learning experiences factors (e.g., Performance Accomplishments) had six manifest indicator variables, one corresponding to each of the six RIASEC themes (e.g., Realistic Performance Accomplishments, Investigative Performance Accomplishments; see Fig. 2). As shown in Table 2, Model 2 had a poor fit to the data. Again, however, the resulting factor loadings suggested that learning experience factors might explain some portion of the variance in responding to the LEQ items.


Model 3

The third model tested was a one-factor learning experiences model with 24 manifest indicator variables. As shown in Table 2, Model 3 had a poor fit to the data.

Model 4

The fourth model tested was a 10-factor correlated trait–correlated method (CT–CM) model in which RIASEC themes (e.g., Realistic) were modeled as six correlated trait factors, and learning experiences (e.g., Performance Accomplishments) were modeled as four "method" factors. Trait factor covariances and method factor covariances were freely estimated; however, no trait–method (e.g., Realistic with Performance Accomplishments) covariances were allowed. Each RIASEC trait factor had four manifest indicator variables, corresponding to the four learning experiences for that particular Holland theme, whereas each learning experiences method factor had six manifest indicator variables, one corresponding to each of the six RIASEC themes. Thus, in this model, each LEQ subscale served as an indicator of one RIASEC trait factor and one learning experiences method factor (see Fig. 3). Initially, the CT–CM model did not reach an admissible solution, due to an out-of-bounds estimate, namely, a large negative residual variance for the indicator variable of Conventional Physiological Arousal (CPA) (unstandardized estimate = −465.25, p = .69, 95% CI [−2753.84, 1823.34]). Negative estimates of residual variance parameters are relatively common, and may be set to zero if they are statistically non-significant and if the confidence interval includes zero (Chen, Bollen, Paxton, Curran, & Kirby, 2001). Therefore, the CT–CM model was re-estimated fixing the residual variance for CPA to zero. As shown in Table 2, although this revised Model 4 had a statistically significant goodness-of-fit test statistic, values of the alternative fit indices were acceptable.

Model 5

The fifth model tested was a multi-dimensional model with six correlated trait factors and one global learning experiences method factor. Trait factor covariances were freely estimated; however, all trait–method covariances were fixed to zero. In this model, each LEQ subscale served as an indicator of one of six RIASEC trait factors (identical to Models 1 and 4) as well as the global learning experiences method factor (see Fig. 4). As shown in Table 2, Model 5 did not fit the data well.

Model 6

The sixth model tested was a multi-dimensional model with four correlated method factors and one global trait factor. Method factor covariances were freely estimated; however, all trait–method covariances were fixed to zero. In this model, each LEQ subscale served as an indicator of one of four learning experiences method factors (identical to Models 2 and 4) as well as the global trait factor (see Fig. 5). As shown in Table 2, Model 6 provided a poor fit to the data.

Model comparisons

A preliminary comparison was made of Model 3 with the two uni-dimensional Models 1 and 2, to rule out the possibility that a single global factor explained all of the systematic variance in the LEQ subscales. Both of these chi-square difference tests were statistically significant at p < .001, indicating that both Model 1 and Model 2 were preferable to Model 3. Next, we compared the goodness-of-fit of different nested models using a hierarchical chi-square test to determine whether a multi-dimensional structure (Models 4, 5, and 6, which all include both trait and method effects) better represented the LEQ subscale structure than did a uni-dimensional structure (Models 1 and 2).
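The nested-model comparisons reported next rest on the chi-square difference (likelihood ratio) test; a minimal sketch using scipy is shown below, with the Model 1 versus Model 6 comparison worked through using the values reported in Table 2 and in the text.

```python
# Sketch of the hierarchical (nested-model) chi-square difference test.
from scipy.stats import chi2

def chi2_difference(chi2_reduced, df_reduced, chi2_full, df_full):
    """Return delta chi-square, delta df, and the p-value for nested ML models."""
    d_chi2 = chi2_reduced - chi2_full
    d_df = df_reduced - df_full
    return d_chi2, d_df, chi2.sf(d_chi2, d_df)

# Model 1 (6 RIASEC trait factors, more constrained) vs. Model 6 (1 global trait
# + 4 method factors), using the chi-square and df values reported in Table 2:
print(chi2_difference(3229.26, 237, 2493.26, 222))  # delta chi2 = 736.0, df = 15, p < .001
```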
In every case, the multi-dimensional models provided a better fit to the data. For example, Model 6, which had the poorest fit among the multi-dimensional models, fit significantly better than did Model 1, which was the best-fitting uni-dimensional model, Δχ2(15, N = 742) = 736.01, p < .001. We next compared the fit of the 10-factor complete CT–CM model (Model 4) to that of the multi-dimensional models specifying one general trait factor (Model 6) or one general method factor (Model 5). Model 4 fit the data better than did either Model 6 (Δχ2[14, N = 742] = 1771.37, p < .001) or Model 5 (Δχ2[5, N = 742] = 720.95, p < .001). The improvement in fit of Model 4 over Model 5 suggested discriminability of the four correlated learning type methods, whereas the superior fit of Model 4 over Model 6 provided support for the discriminant validity of the LEQ as a measure of six correlated RIASEC traits versus one global trait factor (Widaman, 1985).

Results of model comparisons indicated that Model 4, the complete CT–CM model, provided the best relative fit to the data, with acceptable alternative fit index values. Nevertheless, CFI and TLI values for Model 4 (.94 and .92, respectively) were slightly below the recommended threshold to indicate good fit (i.e., .95). Therefore, to achieve a conventionally good fit to the data, we retested Model 4 with a few minor modifications determined by an inspection of empirical modification indices and a consideration of conceptual sensibility. These modifications were allowing a secondary factor loading of the Realistic Vicarious Learning subscale on the Performance Accomplishments factor, and allowing two sets of uniquenesses to covary (Enterprising Physiological Arousal with Social Physiological Arousal, and Investigative Physiological Arousal with Investigative Performance Accomplishments). The minor modifications resulted in a good fit for Model 4, χ2(205, N = 742) = 526.78, p < .001, CFI = .96, TLI = .95, RMSEA = .046, SRMR = .041. Parameter estimates for the modified Model 4 are presented in Fig. 3. The factor loadings in this model for all ten factors were generally sizable and significant at p < .05, excepting the loading from the Conventional factor to Conventional Vicarious Learning (.06, p = .13). Trait factor loadings ranged from .06 to .83 (mdn = .46), and method factor loadings ranged from .34 to .75 (mdn = .60).


The factor loadings both generally support the convergent validity of LEQ subscale scores which load on the same factor, and also suggest that the learning experiences methods tended to account for more variance in the scores than did the RIASEC trait constructs. Correlations among the six RIASEC trait factors ranged (in absolute magnitude) from .01 (p = .83) for Investigative with Social to .33 (p < .001) for Realistic with Investigative and Artistic with Social. Four of the five highest positive correlations involved adjacent RIASEC constructs in Holland's (1997) hexagonal model, and the strongest negative correlation (that between Realistic and Social, r = −.25) involved RIASEC constructs located on opposite poles of the hexagon, suggesting some consistency with the circumplex structure of the hexagonal model. (Future researchers may want to investigate this issue more formally in another sample.) All correlations among learning experiences method factors were significant at p < .05, ranging (in absolute magnitude) from −.11 for Vicarious Learning with Physiological Arousal to .85 for Vicarious Learning with Verbal Persuasion. The pattern of method factor intercorrelations indicated considerable commonality among Performance Accomplishments, Vicarious Learning, and Verbal Persuasion (rs ranging from .63 to .85), whereas Physiological Arousal shared little variance with the other three factors (rs ranging from −.17 to .15).

Examination of invariance by gender

Four models based on our best-fitting Model 4 were tested in order to determine whether invariance held; their fit statistics are summarized in the bottom half of Table 2. The first, and least constrained, model allowed factor loadings and indicator intercepts for men and women to be freely estimated. This model would be expected to fit the data adequately in an absolute sense, as well as to be the best-fitting model of the sequence, given its lack of constraints. We estimated this multiple-group model, applying the Model 4 specifications (including the aforementioned minor modifications; conclusions do not change whether those modifications are included or not) to both males and females. Results revealed an acceptable fit in the multi-group model. These results suggested that configural invariance was obtained, as the model fit well in both groups.

The next model estimated constrained all factor loadings on both the RIASEC and the learning type factors to be equal across gender groups. Its fit (see Table 2) was compared to that of the configural model. The difference in chi-square between the two models was significant, Δχ2(39, N = 736) = 63.739, p = .0059, suggesting that at least one factor loading is different across the two gender groups. However, an inspection of the values of the factor loadings that were freely estimated in the previous configural invariance model suggested that many, if not most, of the factor loadings were likely to be the same in the two groups. Thus, we explored partial factorial invariance models, and found that freeing only two factor loadings to differ across the two groups resulted in a model which did not significantly differ in fit from the fully freely estimated configural model. Specifically, when the loadings of the Artistic Physiological Arousal subscale on the Artistic construct and the Social Physiological Arousal subscale on the Social construct were allowed to be freely estimated in both groups, the difference in chi-square between the two models was not significant, Δχ2(37, N = 736) = 49.478, p = .0823.
Thus, factorial invariance across genders appears to hold except for the two aforementioned physiological arousal indicators. The Social Physiological Arousal subscale had a considerably stronger, statistically significant loading on the Social construct for women, but the loading was weaker and non-significant for men. The Artistic Physiological Arousal subscale had significant loadings on the Artistic construct for both men and women, but the loading was somewhat stronger for women.

Finally, we estimated a model incorporating the partial invariance of factor loadings as just described plus invariance of the measured variable intercepts. This model had a significantly poorer fit than did the partial factor invariance model, Δχ2(14, N = 736) = 72.032, p < .0001. However, this model provided interesting information about group differences in latent factor means because, in contrast to the prior multi-group models which fixed factor means to zero in both groups in order to achieve identification, this model allowed factor means in the two groups to be freely estimated. The results showed that the male sample had significantly higher means on the Realistic, Investigative, and Enterprising latent constructs, while the female sample had significantly higher means on the Social, Performance Accomplishments, Vicarious Learning, and Verbal Persuasion latent constructs. The latent construct means were not significantly different between the two groups for the Artistic, Conventional, and Physiological Arousal factors.

In sum, results of gender invariance analyses provided strong support for the complete CT–CM model (Model 4) as the best-fitting model to the LEQ subscale data for both men and women. In addition, the partial factorial invariance results suggest that, with the exception of two of the 48 loadings of the LEQ subscale scores on the 10 latent constructs, the relationships of the measured variables to the latent constructs are equivalent for men and women. This high level of factorial invariance made it possible to compare the latent construct means for men and women, resulting in the identification of several significant gender differences.

Discussion

The overall purpose of this study was to examine the internal structure of the LEQ. Specifically, using the MT–MM approach with nested structural CFA models (Widaman, 1985), we estimated and compared the goodness-of-fit of six different models specifying different numbers of RIASEC "traits" and learning experiences "methods." Finally, we investigated whether the best-fitting model was invariant across gender. Results supported a 10-factor complete CT–CM model (Model 4) as the best structural representation of the LEQ subscales for both women and men.


The tests of invariance showed that configural and partial factorial invariance held, implying that the LEQ measure functions similarly for men and women. Several gender differences in latent construct means, however, were identified. In the paragraphs that follow, we present a more detailed discussion of our findings and their research implications, as well as offer directions for future research involving the LEQ.

Perhaps the most important finding revealed by our study is that the LEQ is a multi-dimensional measure of RIASEC-based learning experiences applicable to both men and women. That is, responses to the LEQ items and the resulting differences in the 24 LEQ subscale scores reflect both Holland's (1997) six RIASEC trait constructs and the four types of learning experiences posited by Bandura (1986, 1997). Comparisons of the fit of the complete CT–CM model to that of models with only the six RIASEC trait factors (Model 1) or four learning experiences method factors (Model 2) suggested that the traits-only and methods-only models were inadequate (and, relative to Model 4, worse) representations of the LEQ data. We also compared the complete CT–CM model to multi-dimensional models with one global method factor + 6 RIASEC factors (Model 5) or one global trait factor + 4 learning type factors (Model 6). Neither of the multi-dimensional models with a general factor produced an adequate fit to the data, and both models were improved significantly when the general (either trait or method) factor was modeled as six correlated traits or four correlated methods.

The improved fit of the complete model (Model 4) over the multi-dimensional model with one general trait factor (Model 6) represents preliminary evidence of the LEQ's discriminant validity with respect to distinguishing different types of learning experiences according to their relevance for the six RIASEC domains. Indeed, Schaub (2004) developed the LEQ specifically to measure learning experiences for each of Holland's (1997) six RIASEC domains, and the present results on the structure of the LEQ offer support that his objective was achieved. In addition, we note that the pattern of intercorrelations among the LEQ RIASEC trait factors showed some consistency with Holland's conceptualization of the hexagonal arrangement of the RIASEC vocational interests. Specifically, the LEQ RIASEC traits were modestly to moderately inter-correlated, with the strongest positive correlations occurring between RIASEC constructs adjacent on the hexagon (e.g., Realistic and Investigative, Artistic and Social), and the strongest negative correlation occurring between RIASEC constructs positioned on opposite sides of the hexagon (i.e., Realistic and Social). Also consistent with past research on RIASEC interest dimensions are the gender differences observed for the RIASEC latent construct means. That the male sample had significantly higher means on the Realistic, Investigative, and Enterprising latent constructs, while the female sample had significantly higher means on the Social latent construct, is quite consistent with the extensive research literature on gender and Holland-based interests (Spokane & Cruza-Guet, 2006). In contrast, the gender differences in the learning latent constructs (i.e., women had higher latent means for Performance Accomplishments, Vicarious Learning, and Verbal Persuasion) are unexpected and require additional study.
Additionally, the superiority of Model 4 over Model 5, which included one general method factor, suggests that the LEQ assesses experiential sources of self-efficacy that are distinct rather than redundant indicators of a global learning experiences construct. In Model 4, the factor loadings for the RIASEC learning experiences and the experiential sources of self-efficacy were generally comparable; however, sources of self-efficacy explained more of the variance in the LEQ subscales. We can infer that variations in LEQ subscale scores are driven more by the types of learning experiences respondents endorse than by RIASEC content alone, thus distinguishing the LEQ as a measure of learning experiences distinct from RIASEC measures of other vocational constructs.

The performance accomplishments, vicarious learning, and verbal persuasion constructs were robustly correlated, albeit clearly distinct. The correlation of vicarious learning and verbal persuasion was particularly strong, but perhaps not surprising; in the case of in vivo modeling, conversation between the model and observer is likely, and when the modeling is intended to be instructional the model may verbally guide the learner. Physiological arousal, in contrast, had substantially lower relationships with the other three types of learning experiences. The origin of these differential factor relations may lie in items that address affect experienced while engaged in learning experiences versus cognition and behavior related to learning experiences. In addition, Bandura (1986) suggested that physiological arousal differs from the other types of learning in that it is a mode of learning (e.g., "I observe my anxiety as I begin to speak to a group and interpret my performance as likely to be poor") as well as a personal response to learning (e.g., "I wander off point in my speech and as a result become anxious"); these two aspects of arousal may be confounded in responses to the LEQ items. Examination of the individual items composing the LEQ physiological arousal subscales (e.g., "I have felt uneasy while using tools to build something," "I have become nervous while developing new friendships") suggests that these items also may be tapping individual differences in the personality trait of anxiety as well as respondents' recollections of affective components of their past learning experiences. Such a possibility was raised by Lent et al. (1994) in their presentation of Social Cognitive Career Theory when they stated that experiential data at times may be "filtered" by persons' affective dispositions; this too might contribute to our observation that the Physiological Arousal factor measures something very different from the other three learning experiences factors, and it may suggest a potential challenge to the LEQ's construct validity that warrants future investigation.

Research implications

Overall, the present findings support the use of the LEQ with men and women to assess RIASEC learning experiences and/or the four experiential sources of learning discussed by Bandura (1986). These data further bolster the limited literature on the LEQ, and this is encouraging news for Social Cognitive Career Theory researchers, who have had limited choices of instruments to assess this critical SCCT construct.

D.M. Tokar et al. / Journal of Vocational Behavior 80 (2012) 50–66

63

behavior (i.e., RIASEC). In using the LEQ to assess either more global RIASEC learning or the generic types of learning experiences, though, researchers clearly need to remain aware that they are capturing only part of the “story” behind participants' responses as both structural dimensions are present in LEQ responses. This being the case, the 24 LEQ subscales may continue to be useful to researchers as they allow examination of trait and method factors simultaneously. Other researchers may wish to add items to the LEQ, which may also strengthen the internal consistency of the 24 subscales, but this may come at the cost of respondent fatigue. Indeed, to the extent that the RIASEC domains do not comprehensively account for the content domains of learning, the LEQ may be insufficient for some research purposes, and this may also affect the sufficiency of the LEQ's coverage of the four types of learning experiences (e.g., perhaps there are other domains that are important to include in a global assessment of vicarious learning). It may be, too, that different RIASEC domains are best captured by different mixes of types of learning; the present equal distribution of LEQ items across all four types of learning for each RIASEC domain may do a disservice to measurement of a particular RIASEC domain (e.g., Realistic learning experiences may be best captured with more performance accomplishments and fewer verbal persuasion items). These issues may be clarified with additional research into the instrument's construct and criterion validity. The role and function of physiological arousal in learning, and the challenge of how to assess it apart from dispositional tendencies and personal responses to learning, offer additional avenues for research with the LEQ. It would be interesting to examine LEQ responses in conjunction with measures of dispositional affectivity, and to investigate how arousal specific to a learning experience may be interpreted differently depending on the person's mood or temperament; one person's interpretation and labeling of arousal cues as anxiety may be another's excitement. Relatedly, the assessment of physiological arousal in the LEQ may be limited in that items focus only on negative affective interpretations (e.g., anxious, nervous). Excitement, serenity and relaxation are other interpretations of arousal cues not captured by the current LEQ items. It is unknown how this omission may affect the utility of the LEQ's physiological arousal subscales for predicting self-efficacy or outcome expectations. Williams (2010) offered an initial such inquiry into LEQ subscale content and function when she explored the nuances of the vicarious learning subscales, specifically the role of similar others in vicarious learning, by modifying LEQ items to reflect observation of a similar model rather than a generic other; these modified subscales were compared to the original ones, but no differences were found in the functioning of the original and modified subscales. Further such theory driven examinations will extend our understanding of the sophistication of the LEQ. The growing body of evidence for the reliability and validity of the LEQ also may open pathways for its use for research on propositions of Lent et al.'s (1994) SCCT that have thus far gone untested. Examinations of the comparative power of the types of learning experiences in determining social cognitive constructs and guiding goals and actions may now be more readily accomplished. 
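To make the preceding point concrete, one simple form such an examination could take is to enter the four learning experience subscale scores for a given RIASEC domain as simultaneous predictors of the corresponding self-efficacy score. The sketch below is a hypothetical illustration, not an analysis from the present study; the file and column names are assumptions, and a latent-variable approach would be preferable when measurement error is a concern.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical data frame: one row per respondent, containing previously
# computed Realistic learning experience subscale scores and a Realistic
# self-efficacy score. The file and column names are placeholders.
df = pd.read_csv("leq_scores.csv")

cols = ["R_perf_accomp", "R_vicarious", "R_verbal_pers", "R_physio_arousal"]
z = (df[cols] - df[cols].mean()) / df[cols].std()   # standardize predictors

X = sm.add_constant(z)          # add an intercept term
y = df["R_self_efficacy"]

fit = sm.OLS(y, X).fit()
print(fit.summary())            # compare the relative weights of the four types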
Appendix A. Items of the Learning Experiences Questionnaire

Reproduced with author permission. Copyright 2003 by Michael Schaub, Ph.D. The Learning Experiences Questionnaire (LEQ) may be used with the written consent of Michael Schaub (email: [email protected]).

1. I performed well in biology courses in school.
2. People whom I respect have encouraged me to work hard in math courses.
3. I remember seeing my family plan out the details of vacations.
4. I have made simple car repairs.
5. While growing up, I saw people whom I admire work in youth ministry.
6. I have become nervous while solving math problems.
7. I have become uptight while trying to repair something that was broken.
8. I have seen people whom I respect read business magazines.
9. I have seen family members perform work which involved organizing information.
10. People I respect have urged me to learn how to fix things that are broken.
11. I was successful performing science experiments in school.
12. In school, I saw teachers whom I admired work on science projects.
13. I have felt uneasy when people would come to me with their problems.
14. I have seen people whom I trust successfully manage a business.
15. The artwork I have created usually turned out well.
16. I remember my family telling me that it is important to be able to solve science problems.
17. People whom I looked up to told me that it is important to read scholarly articles.
18. I remember watching members of my family create art.
19. My teachers have encouraged me to explore jobs in the helping professions (e.g., counseling).
20. I have kept accurate records of my financial documents.
21. I have been able to sell a product effectively.
22. I have observed members of my family build things.
23. I have made repairs around the house.
24. I have become anxious while learning new computer software.

25. I received good grades in my art courses in school.
26. I have become nervous when working on mechanical things (e.g., appliances).
27. I have seen people whom I respect enter the teaching profession.
28. I have done a good job at proofreading my papers for mistakes.
29. I have seen my parents keep organized records of their important financial documents.
30. I have been successful when I used tools to work on things.
31. I have felt anxious when I had to act in a play.
32. I have been successful at caring for children.
33. I have listened to members of my family speak in public.
34. I received high scores on the math section of my college entrance exam (e.g., SAT).
35. I have felt nervous when I had to sell something.
36. I have been successful at teaching people.
37. I have felt nervous while debating a topic.
38. I watched people whom I respect work in the outdoors.
39. I have felt anxious about creating artwork.
40. Teachers I admired encouraged me to take classes in which I can use my mechanical abilities.
41. I watched my friends as they participated in school plays.
42. People whom I admire have told me that it is important to learn new computer software.
43. While growing up, I saw people I respected using math to solve problems.
44. I have felt anxious while taking a science course in school.
45. I have seen people whom I respect participating in activities that require math abilities.
46. I have seen people whom I respect enter politics.
47. I have become nervous while teaching something new to a classmate.
48. I have felt uneasy while using tools to build something.
49. I have felt anxious while organizing resources for a term paper.
50. I have seen people whom I admire dedicate their lives to helping others.
51. I recall seeing adults whom I admire working in a research laboratory.
52. I have successfully persuaded people to do things my way.
53. I have done a good job at writing poetry.
54. People whom I respect have encouraged me to play a musical instrument.
55. I have observed people whom I admire perform volunteer work.
56. I have felt uneasy while learning new topics in biology courses.
57. I have easily understood new math concepts after learning about them in class.
58. My parents have encouraged me to pursue jobs that involve keeping track of records.
59. I observed people whom I respect repair mechanical things.
60. My family encouraged me to take social science courses (e.g., psychology).
61. Teachers whom I respect have told me that it is important to have good organizational skills.
62. I have demonstrated skill at conducting research for my term papers.
63. While growing up, I watched adults whom I respect fix things.
64. I have seen people whom I admire write fiction stories.
65. Reading scientific articles has made me feel uneasy.
66. I have felt anxious while performing basic repairs on a car.
67. My family has encouraged me to find a job which involves performing basic office tasks.
68. I have accurately balanced a checkbook.
69. I have been successful at creating a sculpture with clay.
70. My family taught me that it is important to develop my interpersonal communication skills.
71. I have watched people whom I respect perform detail-oriented work.
72. I have been able to hold a conversation with all types of people.
73. I have felt nervous learning how to operate office machines.
74. During school, I admired teachers whom I saw create art.
75. Teachers whom I respect have encouraged me to take a business management course.
76. Adults whom I admire have urged me to enter a profession in which I manage others.
77. I have been successful at playing a musical instrument.
78. I have listened well to people who are having personal difficulties.
79. Teachers whom I respect have encouraged me to take an art class.
80. I have done a good job at things that involved physical labor (e.g., landscaping).
81. People whom I respect have encouraged me to develop my leadership skills.
82. I have felt uneasy about taking a leadership role in a group.
83. I have done a good job at operating new computer programs (e.g., word processing).
84. I have felt uptight while entering data at a computer terminal.
85. I have felt dread while using math in a job.

86. During school, I have felt uptight while working as a part of a small group.
87. While growing up, I recall seeing people I respected reading scientific articles.
88. I have seen people whom I respect hold jobs which involved performing routine office work.
89. I remember feeling anxious while working on something that required manual labor.
90. I have done a good job at performing basic office work (e.g., filing).
91. Family members have urged me to learn how to sing.
92. People whom I trust have told me that it is important to be able to persuade others to do things.
93. I have become anxious initiating conversations with people I do not know.
94. I have felt uptight while writing a short story for school.
95. I have been a successful leader in school.
96. My friends have encouraged me to use my research abilities.
97. Teachers whom I admire have encouraged me to take science courses.
98. I have seen people whom I admire lead others.
99. I remember feeling uptight when I had to keep clear, precise records.
100. I observed people whom I admire work in a garden.
101. While growing up, adults I respected encouraged me to work with tools.
102. While growing up, I listened to family members play musical instruments.
103. People whom I respect have encouraged me to be a detail-oriented person.
104. I have felt uneasy while supervising the work of others.
105. I have done well in building things.
106. People whom I admire have encouraged me to be a salesperson.
107. I have done well at public speaking.
108. While growing up, adults whom I admired told me that it is important to be a good writer.
109. I have felt uneasy while drawing something.
110. I have felt uncomfortable while playing a musical instrument for other people.
111. Friends have urged me to act in a play.
112. I have become nervous while developing new friendships.
113. People whom I look up to have urged me to pursue activities that require manual dexterity.
114. I have felt anxious when I attempted to persuade someone to do things my way.
115. I have seen people I know enter work in the helping professions (e.g., social work).
116. People whom I respect have encouraged me to perform volunteer work.
117. I earned good grades in social science courses.
118. Family members have encouraged me to pursue activities that involve working outdoors.
119. My friends have urged me to help others resolve their personal difficulties.
120. I have successfully supervised the work of others.

Scoring instructions: All items are rated on a 6-point Likert scale ranging from 1 (strongly disagree) to 6 (strongly agree), and subscale scores are the summed ratings of the five items composing that subscale.

Realistic Performance Accomplishments = 4, 23, 30, 80, and 105
Realistic Vicarious Learning = 22, 38, 59, 63, and 100
Realistic Verbal Persuasion = 10, 40, 101, 113, and 118
Realistic Physiological Arousal = 7, 26, 48, 66, and 89 (all reverse-scored)
Investigative Performance Accomplishments = 1, 11, 34, 57, and 62
Investigative Vicarious Learning = 12, 43, 45, 51, and 87
Investigative Verbal Persuasion = 2, 16, 17, 96, and 97
Investigative Physiological Arousal = 6, 44, 56, 65, and 85 (all reverse-scored)
Artistic Performance Accomplishments = 15, 25, 53, 69, and 77
Artistic Vicarious Learning = 18, 41, 64, 74, and 102
Artistic Verbal Persuasion = 54, 79, 91, 108, and 111
Artistic Physiological Arousal = 31, 39, 94, 109, and 110 (all reverse-scored)
Social Performance Accomplishments = 32, 36, 72, 78, and 117
Social Vicarious Learning = 5, 27, 50, 55, and 115
Social Verbal Persuasion = 19, 60, 70, 116, and 119
Social Physiological Arousal = 13, 47, 86, 93, and 112 (all reverse-scored)
Enterprising Performance Accomplishments = 21, 52, 95, 107, and 120
Enterprising Vicarious Learning = 8, 14, 33, 46, and 98
Enterprising Verbal Persuasion = 75, 76, 81, 92, and 106
Enterprising Physiological Arousal = 35, 37, 82, 104, and 114 (all reverse-scored)
Conventional Performance Accomplishments = 20, 28, 68, 83, and 90
Conventional Vicarious Learning = 3, 9, 29, 71, and 88
Conventional Verbal Persuasion = 42, 58, 61, 67, and 103
Conventional Physiological Arousal = 24, 49, 73, 84, and 99 (all reverse-scored)
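For researchers scoring the LEQ by syntax, the key above translates directly into code. The following is a minimal sketch rather than an official scoring program: it assumes item responses are stored in columns named item_1 through item_120 (a naming convention adopted here purely for illustration) and that reverse-scoring a rating x on the 1-6 scale means replacing it with 7 - x.

```python
import pandas as pd

# Item numbers for each subscale, taken from the scoring key above.
# Only the Realistic subscales are written out; the remaining domains
# follow the same pattern from the key.
SUBSCALES = {
    ("Realistic", "Performance Accomplishments"): [4, 23, 30, 80, 105],
    ("Realistic", "Vicarious Learning"): [22, 38, 59, 63, 100],
    ("Realistic", "Verbal Persuasion"): [10, 40, 101, 113, 118],
    ("Realistic", "Physiological Arousal"): [7, 26, 48, 66, 89],
}
REVERSE_SCORED_TYPE = "Physiological Arousal"

def score_leq(responses: pd.DataFrame) -> pd.DataFrame:
    """Return one column of summed ratings per subscale.

    Assumes `responses` has columns item_1 ... item_120 with ratings 1-6.
    """
    scores = {}
    for (domain, learning_type), items in SUBSCALES.items():
        cols = responses[[f"item_{i}" for i in items]]
        if learning_type == REVERSE_SCORED_TYPE:
            cols = 7 - cols              # assumed reverse-scoring on the 1-6 scale
        scores[f"{domain} {learning_type}"] = cols.sum(axis=1)
    return pd.DataFrame(scores)
```

Extending the dictionary to all 24 subscales reproduces the full scoring key.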


References

Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Englewood Cliffs, NJ: Prentice-Hall.
Bandura, A. (1997). Self-efficacy: The exercise of control. New York: W. H. Freeman.
Betz, N. E. (2008). Advances in vocational theories. In S. Brown, & R. Lent (Eds.), Handbook of counseling psychology (pp. 357–374). New York: Wiley.
Browne, M. W., & Cudeck, R. (1993). Alternative ways of assessing model fit. In K. A. Bollen, & J. S. Long (Eds.), Testing structural equation models (pp. 136–162). Newbury Park, CA: Sage.
Byars-Winston, A. M., & Fouad, N. A. (2008). Math and science social cognitive variables in college students: Contributions of contextual factors in predicting goals. Journal of Career Assessment, 16, 425–440.
Byrne, B. M., Shavelson, R. J., & Muthén, B. (1989). Testing for the equivalence of factor covariance and mean structures: The issue of partial measurement invariance. Psychological Bulletin, 105, 456–466.
Campbell, D. T., & Fiske, D. W. (1959). Convergent and discriminant validation by the multitrait–multimethod matrix. Psychological Bulletin, 56, 81–105.
Chen, F., Bollen, K. A., Paxton, P., Curran, P. J., & Kirby, J. B. (2001). Improper solutions in structural equation models: Causes, consequences, and strategies. Sociological Methods & Research, 29, 468–508.
Dickinson, J. (2008). An examination of the applicability of social cognitive career theory for African American college students. Dissertation Abstracts International: Section B: The Sciences and Engineering, 68, 6298.
Finney, S. J., & DiStefano, C. (2006). Nonnormal and categorical data in structural equation modeling. In G. R. Hancock, & R. O. Mueller (Eds.), Structural equation modeling: A second course (pp. 269–314). Greenwich, CT: Information Age Publishing.
Flores, L. Y., Robitschek, C., Celebi, E., Andersen, C., & Hoang, U. (2010). Social cognitive influences on Mexican Americans' career choices across Holland's themes. Journal of Vocational Behavior, 76, 198–210.
Holland, J. L. (1997). Making vocational choices: A theory of vocational personalities and work environments (3rd ed.). Odessa, FL: Psychological Assessment Resources.
Holland, J. L., Fritzsche, B. A., & Powell, A. B. (1994). The Self-Directed Search (SDS) technical manual. Odessa, FL: Psychological Assessment Resources.
Hu, L., & Bentler, P. M. (1998). Fit indices in covariance structure modeling: Sensitivity to underparameterized model misspecification. Psychological Methods, 3, 424–453.
Kline, R. B. (2005). Principles and practice of structural equation modeling (2nd ed.). New York: Guilford.
Lent, R. W. (2005). A social cognitive view of career development and counseling. In S. D. Brown, & R. W. Lent (Eds.), Career development and counseling: Putting theory and research to work (pp. 101–127). Hoboken, NJ: John Wiley & Sons.
Lent, R. W., Brown, S. D., & Hackett, G. (1994). Toward a unifying social cognitive theory of career and academic interest, choice, and performance. Journal of Vocational Behavior, 45, 79–122.
Lent, R. W., & Fouad, N. A. (2011). The self as agent in social cognitive career theory. In P. J. Hartung, & L. M. Subich (Eds.), Developing self in work and career: Concepts, cases, and contexts (pp. 71–87). Washington, DC: American Psychological Association.
Lent, R. W., Lopez, A. M., Lopez, F. G., & Sheu, H. (2008). Social cognitive career theory and the prediction of interests and choice goals in the computing disciplines. Journal of Vocational Behavior, 73, 52–62.
Lent, R. W., Paixão, M. P., da Silva, J. T., & Leitão, L. M. (2010). Predicting occupational interests and choice aspirations in Portuguese high school students: A test of social cognitive career theory. Journal of Vocational Behavior, 76, 244–251.
MacCallum, R. C., Browne, M. W., & Sugawara, H. M. (1996). Power analysis and determination of sample size for covariance structure modeling. Psychological Methods, 1, 130–149.
McCrae, R. R., Kurtz, J. E., Yamagata, S., & Terracciano, A. (2011). Internal consistency, retest reliability, and their implications for personality scale validity. Personality and Social Psychology Review, 15, 28–50.
Muthén, L. K., & Muthén, B. O. (1998–2010). Mplus user's guide (6th ed.). Los Angeles, CA: Muthén & Muthén.
Schaub, M. (2003). Learning Experiences Questionnaire (LEQ) scoring key. Unpublished manuscript.
Schaub, M. (2004). Social cognitive career theory: Examining the mediating role of sociocognitive variables in the relation of personality to vocational interests. Dissertation Abstracts International Section A: Humanities & Social Sciences, 64 (7-A), 2463.
Schaub, M., & Tokar, D. M. (2005). The role of personality and learning experiences in social cognitive career theory. Journal of Vocational Behavior, 66, 304–325.
Sheu, H., Lent, R. W., Brown, S. D., Miller, M. J., Hennessy, K. D., & Duffy, R. D. (2010). Testing the choice model of social cognitive career theory across Holland themes: A meta-analytic path analysis. Journal of Vocational Behavior, 76, 252–264.
Spokane, A. R., & Cruza-Guet, M. C. (2006). Holland's theory of vocational personalities in work environments. In S. D. Brown, & R. W. Lent (Eds.), Career development and counseling: Putting theory and research to work (pp. 24–41). Hoboken, NJ: John Wiley & Sons.
Tokar, D. M., Thompson, M. N., Plaufcan, M. R., & Williams, C. M. (2007). Precursors of learning experiences in Social Cognitive Career Theory. Journal of Vocational Behavior, 71, 319–339.
Widaman, K. F. (1985). Hierarchically nested covariance structure models for multitrait–multimethod data. Applied Psychological Measurement, 9, 1–26.
Williams, C. M. (2010). Gender in the development of career-related learning experiences. Ph.D. dissertation, The University of Akron, United States, Ohio. Retrieved April 22, 2011, from Dissertations & Theses @ University of Akron. (Publication No. AAT 3417890).
Williams, C. M., & Subich, L. M. (2006). The gendered nature of career related learning experiences: A social cognitive career theory perspective. Journal of Vocational Behavior, 69, 262–275.
Williams, C. M., Thompson, M. N., & Robinson, R. P. (2006). The career related learning experiences of African American college students. Paper presented at the annual conference of the American Psychological Association, New Orleans, LA.