Computers in Human Behavior 23 (2007) 127–145
www.elsevier.com/locate/comphumbeh

The development of a measure of subjective computer experience

Brooke Smith a, Peter Caputi a,*, Patrick Rawstorne b

a Department of Psychology, University of Wollongong, Northfields Ave, Wollongong, NSW 2522, Australia
b University of New South Wales, Australia

* Corresponding author. E-mail address: [email protected] (P. Caputi).

Available online 12 May 2004
doi:10.1016/j.chb.2004.04.001
Abstract

The present study examined the psychometric properties of a recently developed measure of subjective computer experience using a sample of 179 first year psychology students. The Subjective Computer Experience Scale (SCES) was developed to measure the construct of subjective computer experience, defined, for present purposes, as a private psychological state reflecting the thoughts and feelings a person ascribes to some previous or existing computing event. Factor analysis revealed five factors that were labelled Frustration–Anxiety, Autonomy–Assistance, Training–Education, Enjoyment–Usefulness and Negative Performance Appraisal. Acceptable internal-consistency estimates for the five subscales were obtained. Convergent validity was evidenced by significant correlations between the SCES and measures of computer attitude and objective computer experience. Evidence for divergent validity was obtained, with scores on four of the five subscales of the SCES being unrelated to dispositional coping style. In sum, the SCES was found to have promise as a psychometrically sound instrument for measuring subjective computer experience.
© 2004 Elsevier Ltd. All rights reserved.

Keywords: Subjective computer experience; Measurement
1. Introduction

An important construct addressed in computer-behaviour research is that of computer experience (Kay, 1992). However, what constitutes "computer experience" has not always been clearly defined in the research literature (Potosky & Bobko, 1998).
Consequently, the multitude of terms and measures used to examine computer experience has added considerable complexity to understanding the association between computer experience and other psychological constructs such as computer attitudes, computer anxiety and self-efficacy for computer-related tasks (Beckers & Schmidt, 2003; Colley, Gale, & Harris, 1994; Durndell, Macleod, & Siann, 1987; Ertmer, Evenbeck, Cennamo, & Lehman, 1994; Hasan, 2003; McInerney, McInerney, & Sinclair, 1994; Robertson, Calder, Fung, Jones, & O'Shea, 1995; Robinson-Staveley & Cooper, 1990; Shashaani, 1994; Todman & Monaghan, 1994; Yelland, 1995). After considering how computer experience has been defined and measured in over 40 different studies in the research literature (see Smith, 1998; Smith, Caputi, Crittenden, Jayasuriya, & Rawstorne, 1999), we proposed that computer experience be conceptualised as bi-dimensional, consisting of subjective and objective constituents. Therefore, in the present paper, objective computer experience (OCE) and subjective computer experience (SCE) are considered distinct and are defined separately.

In the past, no clear distinction has been made between OCE and SCE. Rather, these concepts and their measures have frequently been classified together under the general rubric of "computer experience". For the purpose of this paper, OCE is defined as the totality of externally observable, direct and/or indirect human–computer interactions that transpire across time. Instruments developed to measure OCE range from simple one-item questions concerned with the amount of computer experience acquired over time (Bandagliacco, 1990; Burkes, 1991; Busch, 1995; Caputi, Jayasuriya, & Fares, 1995; Culpan, 1995; Czaja & Sharit, 1991; Hall & Cooper, 1991; Miller & Varma, 1994; Pope-Davis & Twing, 1991; Ryker & Nath, 1995; Scarpa, Smeltzer, & Jaison, 1992) through to a comprehensive 90-item measure (Lambert & Lewis, 1989, cited in Lambert, 1991). The sheer number and diverse range of items comprising these measures of computer experience has made it difficult to identify a common theme among the instruments (Kay, 1993).

However, the adoption of Jones and Clarke's (1995) multi-component approach provides a uniform framework for classifying current measures of OCE. Drawing on Jones and Clarke's work, OCE can be conceptualised and measured as a composite of the following four components: opportunity to use computers, diversity of experience, amount of experience and sources of information. Amount of computer use and opportunity to use computers reflect the cumulative use of computers and the availability of resources contributing to, or resulting in, the use of computer technologies within and across various settings (e.g., home, school, work). The diversity of experience variable examines a person's usage of a variety of computing software packages (e.g., word processing, programming languages). Therefore, based on Jones and Clarke's position, direct OCE comprises three basic variables, namely, amount of computer experience, diversity of computer experience and opportunity to use computers. In contrast, indirect OCE consists of a single variable measuring available sources of information through which information about computers may be acquired (Jones & Clarke, 1995).
Sources of information are considered a measure of indirect OCE because individuals are not required to engage personally in any computer-related activities. Rather, relevant information about computers is acquired through remote or indirect channels, such as observing, reading or hearing about another's computing experiences (Eagly & Chaiken, 1993). These measures reflect OCE insofar as they focus specifically on the externally observable properties of the computing experience (Reber, 1985). In contrast, SCE is a latent process, with the fundamental nature of the computing event being experienced internally or privately (Reber, 1985).

Importantly, this study provides a new measure of SCE that begins with a construct definition of what is measured. Consistent with recent theorising (Eagly & Chaiken, 1993; Farthing, 1992; Lundin, 1991), SCE is defined as a private psychological state, reflecting the thoughts and feelings a person ascribes to some previous or existing computing event. In referring to SCE as a private psychological state, we propose that SCE is a latent process that individuals possess but that cannot be observed directly. The latent process conceptualisation implies that psychological and physiological events underlie SCE (see Smith et al., 1999). Moreover, inherent in our definition is that the cognitive and affective processes underlying SCE are elicited by some existing computing event. By existing computing event, we mean any immediate behavioural experience that directly or indirectly involves computers. Specifically, direct behavioural experience involves participation through personal action (Barki & Hartwick, 1994). In this context, direct experience encompasses a broad range of overt behaviours involving actual interaction with, or manipulation of, the object in question, that is, the computer (Fazio & Zanna, 1978, 1981). Therefore, when people directly encounter computers, the subjective experience would most likely be formed on the basis of cognitive and affective processes (Barki & Hartwick, 1994; Eagly & Chaiken, 1993). Alternatively, individuals may learn about computers entirely on the basis of reading, watching others or discussing computers with a significant other. Under these circumstances of indirect experience, SCE would be formed primarily on the basis of acquiring knowledge and forming beliefs about computers (Eagly & Chaiken, 1993). Therefore, to fully comprehend SCE formation, researchers need to consider the direct and/or indirect nature of the behavioural experience and the cognitive and affective processes associated with it.

The intimate thoughts and feelings concerning the computing experience can only be retrieved through a process of reflective introspection, best described as a personal observation of one's own private subjective experience (Farthing, 1992). Through this reflective process, the person asks "What did I perceive/think/feel?" (Farthing, 1992, p. 13). Therefore, whenever individuals try to think about or reflect upon their percepts and thoughts objectively, as they happen or after the fact, they are necessarily engaging in reflective introspection (Farthing, 1992). This reflective process enables both the participant and the researcher to acquire information about the psychological and physiological events underlying the computing experience. Therefore, to measure the construct of subjective computer experience, respondents should be encouraged to reflect upon and report their private thoughts and feelings concerning some previous or existing computing event(s). The exact description of the psychological and physiological events underlying the computing experience can best be obtained through the development of a psychometrically sound measure of the construct of SCE. At present, an instrument is needed to measure subjective computer experience in a reliable and valid fashion.
One instrument that holds some promise in this regard is the 31-item Subjective Computer Experience Scale (SCES: Rawstorne, Caputi, & Smith, 1998), developed for the purpose of this research. To date, no studies have examined the psychometric properties of the SCES. Hence, the current study was designed to explore the reliability and construct validity of scores on this instrument among first year psychology students. An exploration of the internal consistency, factor structure, and convergent and divergent validity of the SCES was conducted to accomplish this aim. With regard to convergent validity, scores on the SCES should be related to measures of computer attitude and objective computer experience. With regard to divergent validity, scores on the SCES were expected to be unrelated to scores on the Miller Behavioral Style Scale (MBSS: Miller, 1987), a measure of dispositional coping style. Therefore, the objective of the present study was to develop a measure of subjective computer experience, explore the measure's underlying factor structure, and examine the convergence and divergence between scores on the new measure and scores on previously developed instruments.

2. Method

2.1. Participants

One hundred and eighty-five first year undergraduate psychology students participated in return for partial course credit. Of the 185 response sheets returned, 6 were incomplete, leaving 179 students (122 female, 40 male and 17 unknown) for analysis. Of the 166 respondents reporting their age, the mean age was 20 years (SD = 5.26), and ages ranged from 17 to 52 years.

2.2. Subjective Computer Experience Scale

The SCES (Rawstorne et al., 1998) was developed following tape-recorded interviews with two small focus groups, each consisting of four psychology undergraduate students varying considerably in levels of computer experience and expertise. The two focus groups provided participants with the opportunity to discuss freely the qualitative aspects of their previous computing experiences. Both tape recordings were subsequently transcribed and the following salient themes identified: benefits of computer use, challenge and enjoyment associated with computer use, accessibility of technical support and assistance, and frustration and anxiety relating to computer use. A set of 31 statements (including 8 positive and 17 negative items) was generated to reflect these salient themes and concerns about technical support and computer assistance. Responses were scored on a 5-point Likert-type scale ranging from strongly disagree (1) to strongly agree (5). If an item seemed irrelevant, respondents were provided with the opportunity to circle a "Not Applicable" option. Scoring for the negative items was reversed so that high scores indicated a more positive subjective computer experience.

In the present study, maximum likelihood factor analysis with oblique rotation produced a five-factor solution that explained 48% of the total variance. These factors were labelled Frustration–Anxiety, Autonomy–Assistance, Training–Education, Enjoyment–Usefulness and Negative Performance Appraisal. Reliability analyses were conducted on each of the five factors. Cronbach alpha was 0.86 for the Frustration–Anxiety subscale, 0.68 for the Autonomy–Assistance subscale, 0.83 for the Training–Education subscale, 0.75 for the Enjoyment–Usefulness subscale and 0.79 for the Negative Performance Appraisal subscale. The SCES is presented in Appendix A.
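Operationally, the scoring rule just described, reverse-keying the negative items on the 5-point scale and treating "Not Applicable" as missing, is straightforward. The sketch below is illustrative only: the item names and the choice of reverse-keyed items are placeholders rather than the SCES scoring key, and the same pattern applies to the reverse-scored CATT items described in Section 2.3.

```python
import numpy as np
import pandas as pd

# Illustrative scoring sketch for a 5-point Likert scale with reverse-keyed
# items and a "Not Applicable" option. The item names and the choice of
# reverse-keyed items below are placeholders, not the SCES scoring key.
NEGATIVE_ITEMS = ["item06", "item11"]  # hypothetical reverse-keyed items

def score_scale(responses: pd.DataFrame) -> pd.Series:
    """Return per-respondent mean scores; higher = more positive experience."""
    scored = responses.replace("NA", np.nan).astype(float)
    # Reverse 5-point responses: 1 <-> 5, 2 <-> 4, 3 unchanged.
    scored[NEGATIVE_ITEMS] = 6 - scored[NEGATIVE_ITEMS]
    # Average over the items each respondent actually answered.
    return scored.mean(axis=1, skipna=True)

df = pd.DataFrame({
    "item06": [1, 5, "NA"],  # negative item; "NA" treated as missing
    "item11": [2, 4, 3],     # negative item
    "item09": [5, 1, 4],     # positive item, left as scored
})
print(score_scale(df))
```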
2.3. Computer Attitude Scale

The Computer Attitude Scale (CAS: Loyd & Gressard, 1984) was used as one measure of computer attitudes. The CAS consists of three subscales of 10 items each: computer anxiety (CAS-Anxiety), computer liking (CAS-Liking) and computer confidence (CAS-Confidence). To improve the conceptual precision of the factors and to achieve a more parsimonious scale, the following three items were removed from the corresponding CAS subscales: "I could get good grades in computer courses" (CAS-Confidence), "When there is a problem that I can't immediately solve, I would stick with it until I have the answer" (CAS-Liking), and "I feel aggressive and hostile towards computers" (CAS-Anxiety). Each item is rated on a 5-point Likert-type scale ranging from strongly agree (1) to strongly disagree (5). Higher scores on the anxiety subscale correspond to higher anxiety, whilst higher scores on the computer liking and computer confidence subscales correspond to more positive attitudes toward working with and learning about computers. Loyd and Gressard (1984) reported coefficient alpha reliabilities of 0.86, 0.91 and 0.91 for the Computer Liking, Computer Anxiety and Computer Confidence subscales, respectively, and 0.95 for the total score. In the present study, a Cronbach alpha of 0.87 was obtained for the CAS-Liking subscale, 0.90 for the CAS-Anxiety subscale and 0.90 for the CAS-Confidence subscale.

Dambrot, Watkins-Malek, Silling, Marshall, and Garver (1985) developed a 20-item scale to measure gender differences in computer attitudes and usage. The Computer Attitude Scale (CATT) consists of 9 positive (e.g., "I think computers are facilitating") and 11 negative statements (e.g., "I feel very negative about computers in general") regarding computers. The item "Computers are being forced on us; we are having our decision process replaced by them, making us lose control of our lives" was modified by dividing it into two separate items ("Computers are being forced on us"; "We are having our decision process replaced by computers, making us lose control of our lives"). Respondents rated the resulting 21 items on a 5-point Likert-type scale from strongly agree (1) to strongly disagree (5). A total CATT score is derived by summing the ratings for the 21 items, with scoring for the negative items reversed; a high score represents more positive attitudes toward computers.

Dambrot et al. (1985) reported a coefficient alpha of 0.84 when the CATT was administered to a sample of 841 students from a freshman introductory psychology class. Moreover, the CATT was shown to be related to student math anxiety scores (r = 0.24, p < 0.05), math experience (r = 0.09, p < 0.05), computer experience (r = 0.19, p < 0.05), math aptitude (r = 0.14, p < 0.05), high school achievement (r = 0.07, p < 0.05) and gender (r = 0.24, p < 0.05). Subsequently, Dambrot, Silling, and Zook (1988) reported a coefficient alpha of 0.81 when administering the inventory to 192 university students who had completed an assembly language computer programming course. Further, Zakrajsek, Waters, Popovich, Craft, and Hampton (1990) reported a coefficient alpha of 0.86 for the total scale. An alpha coefficient of 0.85 was obtained in the present study.

2.4. Objective Computer Experience: Amount of experience

Participants completed four questions assessing amount of computer experience.
Responses to the first question, "On an average day when you are actually using a computer, how much time do you spend on the system?", were rated on a 6-point scale ranging from almost never (1) to more than 3 h (6). Responses to the second question, "On average, how frequently do you use a computer?", were also rated on a 6-point scale, ranging from less than once a month (1) to several times a day (6) (Igbaria, Iivari, & Maragahh, 1995). The remaining two experience items parallel the questions used by Jones and Clarke (1995). Respondents were asked to indicate how often they used the computer at home and whether they had used the computer for purposes other than study or work (Jones & Clarke, 1995). Both of these questions were rated on a 5-point Likert-type scale ranging from never (1) to daily (5).

2.5. Objective Computer Experience: Opportunity to use computers

The Opportunity to Use Computers measure consisted of three items reflecting the perceived availability of computer resources within academic facilities and the home (Jones & Clarke, 1995). These items were adapted from Jones and Clarke's (1995) 16-item Computer Experience Scale. For each item, respondents were provided with a 5-point response scale ranging from none (scored 0) to a great deal (scored 4).

2.6. Miller Behavioral Style Scale

The MBSS (Miller, 1987) differentiates monitors from blunters on the basis of individual preference for information (i.e., monitoring) or distraction (i.e., blunting) in four hypothetical stress-evoking scenes. Respondents are asked to imagine each of the hypothetical situations vividly and to indicate (by placing a tick next to the appropriate response) which of the eight possible coping responses they would most likely adopt for each situation. Four of the statements following each hypothetical scene are of a monitoring variety (e.g., in the aeroplane situation: "I would read and reread the safety instruction booklet"). The other four statements are of a blunting variety (e.g., "I would watch the in-flight movie even if I had seen it before").

Miller (1987) reported coefficient alphas of 0.79 and 0.69 for the two MBSS subscales, namely monitoring and blunting, when administering the MBSS to a sample of 30 students faced with a physical stressor (i.e., the prospect of electric shock). Miller (1987) also administered the MBSS to 40 students faced with a psychological stressor (i.e., a light signalling performance level on a series of tests). Alpha coefficients of 0.75 and 0.67 were reported for the monitoring and blunting subscales, respectively. Moreover, the MBSS (Miller, 1987) reliably predicts individual coping responses in a range of real-life situations. Dispositional coping style (as indicated by scale responses) has been shown to predict situational coping style under the threat of aversive laboratory situations (i.e., electric shock) (Miller, 1987) and prior to a diagnostic medical procedure (Miller, 1989; Miller & Mangan, 1983). In other research (see Miller, 1992), the MBSS was unrelated to demographic variables and to trait measures such as repression-sensitisation, anxiety, depression and Type A personality.

In the present study, individuals were not classified according to their style of coping (monitors or blunters), but were assigned a single monitoring score. The number of blunting responses endorsed was subtracted from the number of monitoring responses endorsed, such that a higher score reflects a greater tendency toward information seeking (i.e., monitoring).
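The monitoring difference score just described reduces to simple counting. A minimal sketch follows, assuming each respondent's ticks have already been coded as booleans; the variable names and example data are illustrative.

```python
# Sketch of the MBSS difference score described above: blunting endorsements
# subtracted from monitoring endorsements. The tick coding and example data
# are illustrative; the scale itself presents 4 scenes with 8 options each.

def mbss_score(monitoring_ticks: list[bool], blunting_ticks: list[bool]) -> int:
    """Higher scores indicate a greater tendency toward information seeking."""
    return sum(monitoring_ticks) - sum(blunting_ticks)

# One respondent's ticks across the four hypothetical scenes
# (16 monitoring options and 16 blunting options in total).
monitoring = [True, True, False, True] * 4   # 12 monitoring responses endorsed
blunting = [False, True, False, False] * 4   # 4 blunting responses endorsed
print(mbss_score(monitoring, blunting))      # 12 - 4 = 8
```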
2.7. Procedure

Participants were presented with two booklets containing the relevant questionnaires and instructed to take the booklets home, complete them and return them as soon as possible. It was emphasised that there were no right or wrong answers and that responses were confidential. On the front of each booklet were detailed instructions concerning the generation of a personal code number. Before departing, these instructions were discussed and participants were asked to ensure that the same code be provided on both booklets to enable subsequent matching of responses. The first booklet contained the SCES, MBSS and the CAS. The second booklet contained the CATT and measures of objective computer experience, namely, amount of computer experience and opportunity to use computers.

3. Results

3.1. Factor analysis of SCES

Given that the SCES is a relatively new measure, preliminary factor analyses with three extraction methods, namely, principal components, maximum likelihood and principal axis factoring, were undertaken to examine the dimensionality of the scale and ascertain the stability of the factor structures. Oblique rotation was used to obtain a simple structure, as some of the factors were assumed to be correlated (Kline, 1993). Since a five-factor solution was evident across all three extraction methods, the most appropriate factor analytic solution was determined by considering (i) the proportion of items which failed to load significantly on any one factor; (ii) the simplicity of the factor structure; and (iii) the proportion of variance for which it accounted. The maximum likelihood factor analysis with oblique rotation was most appropriate, accounting for 48% of the total variance. The scree test also suggested that there were five main factors (Tabachnick & Fidell, 1996). To test the degree to which the model could account for the covariance among the scores on the SCES, a χ² goodness-of-fit statistic was calculated, χ²(320) = 375.26, p > 0.01. This finding suggests a reasonable fit of the solution to the data.

After inspecting the factor matrix, a reduced matrix consisting of 25 statements was ascertained. Six of the original 31 items entered into the analysis (2, 5, 15, 17, 26 and 28) failed to load above 0.33 on any of the five factors and were thus omitted from the five-factor solution (Tabachnick & Fidell, 1996). The factor loadings for the reduced item set are presented in Table 1.

The five factors were labelled according to the major items defining them: Frustration–Anxiety, Autonomy–Assistance, Training–Education, Enjoyment–Usefulness and Negative Performance Appraisal. Inter-rater agreement demonstrated consistency in the naming of these factors. Factor 1, Frustration–Anxiety, accounted for most of the covariance (27.63%) and consisted of five items with loadings ranging from 0.47 to 0.89. The six items of the Autonomy–Assistance subscale accounted for 8.57% of the covariance, with loadings ranging from 0.34 to 0.76. The three items of the Training–Education subscale explained 6.51% of the covariation, with factor loadings ranging from 0.64 to 0.86. Factor 4, Enjoyment–Usefulness, explained a small amount of covariance (2.61%) and was defined by five items with loadings ranging from 0.38 to 0.92. The six items of the Negative Performance Appraisal subscale explained the least amount of covariation (2.35%), with loadings ranging from 0.36 to 0.79.
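For readers wishing to reproduce this style of analysis, the sketch below runs a maximum likelihood extraction with oblique (oblimin) rotation and applies the 0.33 loading cut-off, using the third-party factor_analyzer package on simulated responses. The package, the simulated data and the oblimin criterion are our assumptions for illustration; they are not the software or data used in the study.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer  # third-party: pip install factor_analyzer

# Simulated stand-in for the 179 x 31 response matrix; the study's raw data
# are not available, so the numbers below are random and purely illustrative.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.integers(1, 6, size=(179, 31)),
                 columns=[f"item{i:02d}" for i in range(1, 32)])

# Maximum likelihood extraction with an oblique (oblimin) rotation.
fa = FactorAnalyzer(n_factors=5, method="ml", rotation="oblimin")
fa.fit(X)

# Eigenvalues of the correlation matrix, as used for a scree test.
eigenvalues, _ = fa.get_eigenvalues()
print("First six eigenvalues:", np.round(eigenvalues[:6], 2))

# Drop items that fail to load above |0.33| on any factor, as in the paper.
loadings = pd.DataFrame(fa.loadings_, index=X.columns)
retained = loadings[(loadings.abs() > 0.33).any(axis=1)]
print(f"{len(retained)} of {len(loadings)} items retained")
```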
Table 1
Rotated factor loadings, descriptive statistics and internal consistency estimates of reliability for subjective computer experience items and subscales

Factor 1: Frustration–Anxiety (Cronbach alpha = 0.86)
06. I usually get frustrated when using a computer (loading 0.89; M = 2.95, SD = 1.21)
11. I usually get frustrated when using certain software (loading 0.72; M = 3.04, SD = 1.16)
07. In the past I have felt anxious when required to use certain software (loading 0.53; M = 2.99, SD = 1.19)
18. I often feel scared when using a computer (loading 0.52; M = 2.26, SD = 1.17)
27. I often feel isolated from other people when using a computer (loading 0.47; M = 2.74, SD = 1.32)

Factor 2: Autonomy–Assistance (Cronbach alpha = 0.68)
16. Instead of asking for assistance with a computer-related problem, I prefer to try and solve it myself (loading 0.76; M = 2.78, SD = 1.25)
12. From past experience, I would prefer to learn a new computer software package on my own (loading 0.67; M = 2.46, SD = 1.22)
08. I am reluctant to ask for help when using a computer (loading 0.54; M = 2.30, SD = 1.14)
31. When I encounter a computer-related problem that I cannot resolve myself, I feel comfortable about asking an expert (loading 0.39; M = 2.17, SD = 1.12)
30. I feel more at ease using a computer when alone than with a group of people (loading 0.39; M = 3.29, SD = 1.34)
01. When using a computer, I prefer to learn through trial and error (loading 0.34; M = 3.23, SD = 1.17)

Factor 3: Training–Education (Cronbach alpha = 0.83)
29. In the past, computer training has improved my ability to use computer software (loading 0.86; M = 3.76, SD = 1.01)
22. The training I have received in computer usage has been very beneficial (loading 0.79; M = 3.75, SD = 1.08)
25. In the past, computer education has facilitated my understanding of computer software capabilities (loading 0.64; M = 3.46, SD = 1.10)

Factor 4: Enjoyment–Usefulness (Cronbach alpha = 0.75)
09. I enjoy exploring new applications/uses for the computer or software (loading 0.92; M = 3.00, SD = 1.25)
03. I have generally enjoyed learning how to use computer software (loading 0.60; M = 3.59, SD = 1.00)
13. I am usually curious to use the latest version computer software (loading 0.58; M = 2.65, SD = 1.31)
04. In situations where I have had to learn how to use a computer system, I have found the operating manuals difficult to understand (loading 0.42; M = 2.34, SD = 1.11)
14. Computer support staff talk in computer jargon with which I am unfamiliar (loading 0.38; M = 2.78, SD = 1.17)

Factor 5: Negative Performance Appraisal (Cronbach alpha = 0.79)
21. I feel incompetent when having to ask for computer assistance (loading 0.79; M = 2.64, SD = 1.15)
19. When I seek advice about a computer-related question, I feel stupid when I am told that the answer is simple (loading 0.77; M = 3.16, SD = 1.29)
23. When I cannot understand how to use computer software I evaluate my own performance in a negative way (loading 0.67; M = 2.64, SD = 1.19)
10. Other people seem to be more skilful at using a computer than myself (loading 0.40; M = 4.03, SD = 1.12)
20. I often feel concerned that I might do damage to the computer if I make a mistake (loading 0.37; M = 2.97, SD = 1.47)
24. I feel quite powerless when I am being instructed to use a computer or computer software for the first time (loading 0.36; M = 2.97, SD = 1.26)

Note. Only items with factor loadings >0.33 are included in Table 1.
Three items (18, 27 and 16) loaded on more than one factor but are presented under the factor on which the loading was highest. These factor loadings suggest a good correspondence between the observed variables and the underlying construct of subjective computer experience (Tabachnick & Fidell, 1996).

3.2. Internal consistency

In the initial analysis of the correlation matrix for the 25 statements, none of the item–item correlations exceeded 0.73. Item–item correlations ranged from 0.42 to 0.74 for the Frustration–Anxiety subscale, from 0.59 to 0.65 for the Training–Education subscale, from 0.26 to 0.65 for the Enjoyment–Usefulness subscale, from 0.24 to 0.61 for the Negative Performance Appraisal subscale and from 0.02 to 0.58 for the Autonomy–Assistance subscale. For the Autonomy–Assistance subscale, only 53% of the item–item correlations were in the expected range of 0.20–0.70. However, more than 90% of the item–item correlations for the four remaining subscales were in the expected range, indicating homogeneity among the subscale items (Anastasi, 1990).

On each of the five subscales, corrected item–total correlations were above 0.29 (Anastasi, 1990), except for item 31 ("When I encounter a computer-related problem that I cannot resolve myself, I feel comfortable about asking an expert"). Since this item did not adversely affect the overall value of alpha, and because it loaded above 0.33 on the Autonomy–Assistance subscale (see Table 1), it was retained in the factor structure. Deletion of any other item either reduced alpha or left alpha unchanged. Therefore, the items retained for each of the subscales appear to measure their respective underlying constructs (Kaplan & Saccuzzo, 1993).

A further estimate of the internal consistency of the factor solution was given by the squared multiple correlations (SMC) of the factor scores. The SMC ranged from 0.33 to 0.62 (M = 0.50) for the Frustration–Anxiety subscale, 0.08 to 0.44 (M = 0.26) for the Autonomy–Assistance subscale, 0.43 to 0.50 (M = 0.47) for the Training–Education subscale, 0.23 to 0.55 (M = 0.34) for the Enjoyment–Usefulness subscale and 0.22 to 0.50 (M = 0.34) for the Negative Performance Appraisal subscale. The observed variables accounted for substantial variance in the factor scores. Therefore, the five factors are clearly defined by the observed variables comprising each subscale (Tabachnick & Fidell, 1996).
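Two of the reliability statistics used in this section, Cronbach's alpha and the corrected item–total correlation, can be computed directly from a respondents-by-items matrix. The sketch below uses simulated data for a hypothetical five-item subscale, not the study's data.

```python
import numpy as np
import pandas as pd

# Simulated respondents x items matrix for one hypothetical five-item
# subscale; not the study's data.
rng = np.random.default_rng(1)
latent = rng.normal(size=179)
subscale = pd.DataFrame({f"item{i}": latent + rng.normal(size=179)
                         for i in range(1, 6)})

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def corrected_item_total(items: pd.DataFrame) -> pd.Series:
    """Correlate each item with the total computed from the remaining items."""
    return pd.Series({col: items[col].corr(items.drop(columns=col).sum(axis=1))
                      for col in items.columns})

print(round(cronbach_alpha(subscale), 2))
print(corrected_item_total(subscale).round(2))
```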
Internal consistency reliability analyses were conducted on each of the five factors. As shown in Table 1, Cronbach alpha was 0.86 for the Frustration–Anxiety subscale, 0.68 for the Autonomy–Assistance subscale, 0.83 for the Training–Education subscale, 0.75 for the Enjoyment–Usefulness subscale and 0.79 for the Negative Performance Appraisal subscale. These coefficients indicate good internal consistency for each of the factors. These findings, along with the item–item correlations, corrected item–total correlations and modest SMCs, provide some evidence for the internal consistency of the SCES.

3.3. Correlations of SCES scores

The correlations among the subscales are presented in Table 2. Frustration–Anxiety yielded moderate, negative associations with Training–Education (r = -0.32) and Enjoyment–Usefulness (r = -0.58), and a strong positive association with Negative Performance Appraisal (r = 0.66). Autonomy–Assistance was not related to Frustration–Anxiety (r = 0.13), Training–Education (r = 0.09) or Negative Performance Appraisal (r = 0.08). The Training–Education subscale yielded a moderate, positive relationship with the Enjoyment–Usefulness subscale (r = 0.39) and a smaller negative relationship with Negative Performance Appraisal (r = -0.23). The significant correlations suggest that these variables were all measures of the subjective computer experience construct. However, the correlation pattern among the five subscales showed that Autonomy–Assistance was relatively independent of the other subscales, with the exception of a small significant relationship (r = 0.21) between the Autonomy–Assistance and Enjoyment–Usefulness subscales. Further, no correlation among the factors was equal to or larger than 0.80, indicating that each factor provided different information to the total score (Jones & Clarke, 1995).

Correlations between each of the SCES subscales and a total composite measure of the scale are also presented in Table 2. With the exception of Autonomy–Assistance, these correlations ranged from 0.48 (Training–Education) to 0.86 (Frustration–Anxiety). The low but noteworthy correlation between the total SCES and the Autonomy–Assistance subscale (r = 0.26, p < 0.01) indicates that although the Autonomy–Assistance factor appears to be orthogonal to the other factors, this subscale still measures an important dimension of the subjective computer experience construct.

Table 2
Correlations among factors and correlations between five SCES factors and the SCES

Variable                              1        2        3        4        5       6
1. Frustration–Anxiety               1.0
2. Autonomy–Assistance               0.13     1.0
3. Training–Education               -0.32**   0.09     1.0
4. Enjoyment–Usefulness             -0.58**   0.21*    0.39**   1.0
5. Negative Performance Appraisal    0.66**   0.08    -0.23**   0.46**   1.0
6. Total subjective experience       0.86**   0.26**   0.48**   0.80**   0.77**  1.00

* p < 0.05. ** p < 0.01.
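A lower-triangular correlation table in the style of Table 2 can be produced from pairwise Pearson correlations with significance flags. The following sketch does so for simulated subscale scores; the abbreviated subscale names and the data are placeholders.

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

# Simulated subscale scores; the abbreviated names are placeholders.
rng = np.random.default_rng(2)
scores = pd.DataFrame(rng.normal(size=(179, 3)),
                      columns=["FrustAnx", "AutonAssist", "TrainEduc"])

def flag(r: float, p: float) -> str:
    """Format a coefficient with the significance markers used in Table 2."""
    stars = "**" if p < 0.01 else "*" if p < 0.05 else ""
    return f"{r:.2f}{stars}"

cols = scores.columns
table = pd.DataFrame("", index=cols, columns=cols)
for i, a in enumerate(cols):
    for j, b in enumerate(cols):
        if j <= i:  # lower triangle only, as in Table 2
            r, p = pearsonr(scores[a], scores[b])
            table.loc[a, b] = flag(r, p)
print(table)
```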
3.4. Evidence for construct validity

Evidence for convergent validity was sought by correlating the SCES with two measures of objective computer experience, opportunity to use computers and amount of computer experience (Jones & Clarke, 1995), and with two measures of computer attitudes, namely, the CAS (Loyd & Gressard, 1984) and the CATT (Dambrot et al., 1985). Table 3 presents the validity coefficients between subscale scores on the SCES and these criterion variables.

Table 3
Correlations between SCES subscales and validation measures (computer attitude, OCE and coping style)

Variable                  F–A       A–A       T–E       E–U       NPA
Convergent evidence
OCE-Amount               -0.30**    0.26**    0.18*     0.33**   -0.28**
OCE-Opportunity          -0.31**    0.17*     0.25**    0.33**   -0.26**
CAS-Anxiety               0.74**    0.13     -0.45**   -0.66**    0.58**
CAS-Liking               -0.57**    0.17*     0.45**    0.73**   -0.39**
CAS-Confidence           -0.67**    0.18*     0.48**    0.68**   -0.52**
CATT                     -0.60**    0.23**    0.39**    0.62**   -0.41**
Discriminant evidence
Monitoring                0.05      0.00      0.01      0.07      0.25**

Note. F–A = Frustration–Anxiety; A–A = Autonomy–Assistance; T–E = Training–Education; E–U = Enjoyment–Usefulness; NPA = Negative Performance Appraisal. A higher monitoring score reflects a greater tendency toward information seeking. * p < 0.05. ** p < 0.01.
The correlations between the five subscales of the SCES and the three subscales of the CAS ranged from -0.67 to 0.74. It should be noted that, with the exception of Autonomy–Assistance, the correlations between the subscales of the SCES and the three CAS subscales were in the expected direction and were moderate to high in magnitude. Specifically, Frustration–Anxiety correlated positively with the CAS-Anxiety subscale and negatively with the CAS-Liking and CAS-Confidence subscales.

Moderate correlations were obtained between Training–Education and each of the CAS subscales: a negative correlation with the CAS-Anxiety subscale, and positive correlations with the CAS-Liking and CAS-Confidence subscales. As shown in Table 3, the directions of the correlations between the three CAS subscales and the Enjoyment–Usefulness variable were consistent with those reported for Training–Education, although the correlations were stronger. Negative Performance Appraisal correlated positively with CAS-Anxiety and negatively with CAS-Liking and CAS-Confidence. Correlations between Autonomy–Assistance and the subscales of the CAS ranged from 0.13 to 0.18, suggesting that a respondent's tendency to persist autonomously with a computer-related problem was not associated with feelings of computer confidence, anxiety or liking. Taken together, these findings provide some evidence for the convergent validity of the subjective computer experience construct.

Correlations between the five subscales of the SCES and the CATT also appear in Table 3. Negative and moderate to strong correlations were found between the CATT and both the Frustration–Anxiety and Negative Performance Appraisal variables. Moderate, positive associations were found between the CATT and the Training–Education and Enjoyment–Usefulness variables. The correlation between the Autonomy–Assistance subscale and the CATT was small. These findings provide further evidence for the convergent validity of the SCES.

Validity coefficients between the five subjective computer experience subscales and the two measures of objective computer experience were low to moderate and in the expected direction.
Scores on the Frustration–Anxiety and Negative Performance Appraisal subscales correlated negatively with opportunity to use computers and amount of computer use, whereas Autonomy–Assistance, Training–Education and Enjoyment–Usefulness were positively associated with both measures of objective computer experience. Evidence of convergent validity was also established through these validity coefficients.

Evidence for discriminant validity was established through correlations between subscale scores on the SCES and scores on the MBSS (Miller, 1987), a dispositional measure of coping. Table 3 presents these validity coefficients. With the exception of Negative Performance Appraisal, all variables were unassociated with monitoring, as predicted. These coefficients were taken as evidence for the discriminant validity of the SCES.

4. Discussion

The findings from this study provide initial evidence of the construct validity and internal consistency of the modified 25-item SCES (Rawstorne et al., 1998). The factor structure of the original 31-item SCES was assessed via factor analyses using three methods of extraction, namely, principal axis factoring, principal components and maximum likelihood. Each of the factor analyses suggested a five-factor solution. Factors two and three emerged as identical constructs across all three analyses, attesting to the robustness of the Training–Education and Autonomy–Assistance subscales. The inability to detect a replicable factor pattern for the Frustration–Anxiety, Enjoyment–Usefulness and Negative Performance Appraisal subscales suggests that these variables were less robust. Nonetheless, it is conceivable that these results might have been different had a more heterogeneous sample been examined. Therefore, it would be useful for future studies to explore the dimensions of the SCES using a different subject pool.

Using maximum likelihood with oblique rotation, a five-factor solution was ascertained, accounting for 48% of the systematic variance among SCES items. The deletion of items with loadings less than 0.33 resulted in a structure with five factors measured by 25 items, with factor loadings ranging from 0.34 to 0.92. These five variables were labelled Frustration–Anxiety, Autonomy–Assistance, Training–Education, Enjoyment–Usefulness and Negative Performance Appraisal. Overall, the five-factor solution explained a significant amount of variance in the student data.

Acceptable item–item correlations, corrected item–total correlations and squared multiple correlations provided initial support for the internal reliability of the SCES. These findings, along with uniformly moderate alpha reliability estimates for the subscales of the SCES (ranging from 0.68 to 0.86), provide evidence that the SCES reliably measures five distinct dimensions of subjective computer experience (Tabachnick & Fidell, 1996).

The results of the study also provided support for the construct validity of the modified 25-item instrument. Initial evidence for convergent validity of the SCES scores was demonstrated through low but noteworthy correlations between the subscales of the SCES and two measures of objective computer experience, namely, opportunity to use computers and amount of computer use. To further establish evidence of construct validity, SCES scores were correlated with two computer attitude measures, namely, the CAS (Loyd & Gressard, 1984) and the CATT (Dambrot et al., 1985).
As predicted, moderate to high correlations were found between the subscales of the SCES and scores on the CATT and the CAS-Anxiety, CAS-Liking and CAS-Confidence subscales.
However, small and non-significant correlations were obtained between the Autonomy–Assistance subscale of the SCES and the subscales of the CAS (r's = 0.13 to 0.18, p > 0.01). These findings suggest that a person's decision to work alone with computers or to seek assistance when challenged by a computer-related problem is not influenced by the person's general attitude toward working with computers, reflecting liking, confidence and freedom from anxiety (Dambrot et al., 1985; Loyd & Gressard, 1984). On the whole, however, subscale scores on the SCES, with the exception of the Autonomy–Assistance subscale, were moderately to highly related to computer attitudes (r's = -0.66 to 0.74, p < 0.01).

To establish evidence of discriminant validity, scores on the five subscales of the SCES were correlated with the MBSS (Miller, 1987), a measure of dispositional coping style. In the present study, the number of blunting responses endorsed was subtracted from the number of monitoring responses endorsed, such that higher scores reflected a tendency toward information seeking (i.e., monitoring). Overall, the hypothesis of no relationship between the dimensions of SCE and information seeking (i.e., monitoring) was supported, with one exception: a significant correlation was found between scores on the Negative Performance Appraisal subscale of the SCES and monitoring scores on the MBSS. This association seems plausible when examining the composition of the items loading on the Negative Performance Appraisal variable. Specifically, higher scores on the MBSS reflect a greater tendency towards information seeking (i.e., monitoring), whilst items loading on the Negative Performance Appraisal component of the SCES reflect one's negative feelings towards "asking for assistance" or "seeking advice" when challenged by a computer-related problem. Moreover, research has shown that monitors, as opposed to blunters, spend more time attending to information concerning their own personal performance on a presumed task (Miller, 1987). Given that items on the Negative Performance Appraisal subscale assess respondents' evaluations of their own computing performance (e.g., "When I cannot understand how to use computer software I evaluate my own performance in a negative way", "Other people seem to be more skilful at using a computer than myself"), we should expect some relationship between Negative Performance Appraisal and high monitoring (i.e., information seeking). Hence, the noteworthy correlation between the MBSS and Negative Performance Appraisal (r = 0.25, p < 0.01) seems consistent with theoretical expectations (Miller, 1987, 1992).

The negligible correlations between the remaining four subscales of the SCES and the MBSS provide evidence for discriminant validity by indicating that subjective computer experience is not related to dispositional coping style. Together, these results seem especially encouraging, providing initial evidence for the convergent and divergent validity of the SCES. However, validation efforts cannot be considered complete. It will be important to replicate these results using different samples and analytic techniques, such as confirmatory factor analysis.

4.1. Practical implications

The aim of this study was to provide a psychometrically sound instrument for studying subjective computer experience. One possible use of this instrument is to assess pre–post changes in respondents' subjective responses towards computer technologies and computer training.
Pre-measures may inform educators about significant aspects of the computing environment viewed as positive or negative by the individual.
Post-measures may inform educators as to whether new training techniques or educational experiences have affected participants' feelings of frustration, anxiety and autonomy, and improved or hindered understanding and confidence when working with computer-related technologies. Importantly, the SCES may be used to determine the extent to which satisfactory or unsatisfactory early experiences with computers influence subsequent computer attitudes and computer-related behaviours. Therefore, the usefulness of the SCES as a predictor of subsequent computer-related behaviour needs to be established in future studies.

4.2. Limitations

Although this study represents a necessary and useful step toward improving the measurement of subjective computer experience, it nevertheless suffered from some limitations that should be addressed in subsequent research. The sample used in this study was predominantly female first year psychology students (76%), aged in their late teens to early twenties. This sample characteristic limits the generalisability of the findings. Drawing on recent empirical findings, such a sample may have more (e.g., compared with mature adults) or less (e.g., compared with males) computer experience than other groups (Durndell et al., 1987; Miller & Varma, 1994; Robertson et al., 1995; Robinson-Staveley & Cooper, 1990; Shashaani, 1994; Yelland, 1995). At this point, any generalisations concerning the results must be made with caution. In future research, the SCES should be administered to other male and female age groups, in various organisational, industrial, health informatics and educational settings.

A further limitation of this study concerns permitting respondents to complete the questionnaires outside the laboratory in their own time. This choice was made because of the length of the questionnaires administered. Allowing participants to complete the questionnaires in their own time meant that, if needed, items could be completed over the course of a few days, thus reducing cognitive demands and mental fatigue (Krosnick, Boninger, Chuang, Berent, & Carnot, 1990). Nonetheless, future studies might attempt to replicate our findings whilst providing researcher supervision.

A third possible limitation concerns sample size. Confirmatory factor analysis verifies the structure of the attributes of a set of measures, thus providing an assessment of the construct validity of the measure. Because of the limited sample size, this procedure was not feasible, and a replicable factor structure of the SCES could not be ascertained. Consequently, one possibility for future research is to conduct confirmatory factor analysis on the SCES data, using a different subject pool (Tabachnick & Fidell, 1996).

4.3. Additional areas for future research

Several potential directions for future research are indicated by the present study. This study provides preliminary support for the reliability and validity of the SCES for use with first year psychology students. To improve the internal consistency of the Autonomy–Assistance subscale of the SCES, subsequent research should attempt to develop additional items reflecting personal preference for seeking computer-related support and/or working alone with computer technologies. Although the present study demonstrates the viability of the five dimensions of subjective computer experience, other dimensions
may yet be identified and, if so, should be incorporated in the measurement of subjective computer experience. Moreover, there is a need for cross-validation of the present findings. Item loadings and alpha coefficients based on a selected sample need to be tested against a second sample drawn from the same population (Friedman, 1995). In addition, there is a need for validity generalisation, where alphas and item loadings based on the student population are sought in another population which differs systematically from the current population in some characteristic. Applying such comparisons enables the researcher to determine whether validity will generalise to different populations (Friedman, 1995).

It is important to obtain further evidence of convergent and discriminant validity. In particular, it would be of interest to examine whether the SCES correlates with other measures reflecting the qualitative nature of the computing experience (e.g., computer anxiety, enjoyment, control, self-efficacy) and whether it can be differentiated from other attitude measures. Such research would provide more detailed information concerning the utility of this scale as a measure of subjective computer experience as opposed to computer attitudes.

5. Conclusion

In sum, the contribution of this study lies in the development of a measure to assess subjective computer experience. This aim is important in light of the apparent weaknesses of measures used in previous research focusing on the qualitative nature of early computer experiences (Todman & Monaghan, 1994). What is evident from the present research is that the SCES is a useful and relatively easily administered measure of a broad range of physiological and psychological responses relating to previous computing experiences. The psychometric properties indicate satisfactory reliabilities and give evidence for the construct validity of the SCES. Therefore, scores on the SCES could be used to explore the effects of qualitatively different forms of early experiences with computers on users' attitudes towards computers, performance in computer training courses, computer anxiety and subsequent computer usage. The SCES provides an effective tool for exploring the nature of early, formative computer experiences, and we therefore encourage researchers to use this scale to help provide a better understanding of the relationship between the qualitative nature of the experience provided and subsequent computer use.

Appendix A. Subjective Computer Experience Scale (SCES), Rawstorne et al. (1998)

Listed below are a series of statements that reflect the way that people interpret their experience(s) with computers. Please indicate whether you agree or disagree with each statement (i.e., how accurately does each statement reflect your own experience with computers?). For example, if you agree strongly with the statement, then circle 5. If you strongly disagree, circle 1, and so forth. Use the following scale to guide your responses to each statement:

1 = Strongly disagree
2 = Mostly disagree
3 = Uncertain
4 = Mostly agree
5 = Strongly agree
NA = Not applicable
Each statement below is rated on this scale, from (strongly disagree) 1 2 3 4 5 (strongly agree), with NA (not applicable) also available.

1. When using a computer, I prefer to learn through trial and error.
2. In the past, computers have made my task(s) far simpler.
3. I have generally enjoyed learning how to use computer software.
4. In situations where I have had to learn how to use a computer system, I have found the operating manuals difficult to understand.
5. I feel inadequate when receiving training at the computer.
6. I usually get frustrated when using a computer.
7. In the past I have felt anxious when required to use certain software.
8. I am reluctant to ask for help when using a computer.
9. I enjoy exploring new applications/uses for the computer or software.
10. Other people seem to be more skilful at using a computer than myself.
11. I usually get frustrated when using certain software.
12. From past experience, I would prefer to learn a new computer software package on my own.
13. I am usually curious to use the latest version computer software.
14. Computer support staff talk in computer jargon with which I am unfamiliar.
15. I have not received sufficient training at the computer.
16. Instead of asking for assistance with a computer-related problem, I prefer to try and solve it myself.
17. When seeking advice from computer support staff (technician), I am often unable to state clearly what my query or question is about.
18. I often feel scared when using a computer.
19. When I seek advice about a computer-related question, I feel stupid when I am told that the answer is simple.
20. I often feel concerned that I might do damage to the computer if I make a mistake.
21. I feel incompetent when having to ask for computer assistance.
22. The training I have received in computer usage has been very beneficial.
23. When I cannot understand how to use computer software I evaluate my own performance in a negative way.
24. I feel quite powerless when I am being instructed to use a computer or computer software for the first time.
25. In the past, computer education has facilitated my understanding of computer software capabilities.
26. In the past I have had insufficient time at work to learn to use computer software.
27. I often feel isolated from other people when using a computer.
28. Most computer manuals need to be read from front to back to be understood.
29. In the past, computer training has improved my ability to use computer software.
30. I feel more at ease using a computer when alone than with a group of people.
31. When I encounter a computer-related problem that I cannot resolve myself, I feel comfortable about asking an expert.
References

Anastasi, A. (1990). Psychological testing (6th ed.). New York: Macmillan.
Bandagliacco, J. M. (1990). Gender and race in computing attitudes and experience. Social Science Computer Review, 8(1), 42–63.
Barki, H., & Hartwick, J. (1994). Measuring user participation, user involvement and user attitude. MIS Quarterly, 18(1), 39–77.
Beckers, J. J., & Schmidt, H. G. (2003). Computer experience and computer anxiety. Computers in Human Behavior, 19(6), 785–797.
Burkes, M. (1991). Identifying and relating nurses' attitudes toward computer use. Computers in Nursing, 9(5), 190–201.
Busch, T. (1995). Gender differences in self-efficacy and attitudes toward computers. Journal of Educational Computing Research, 12(2), 147–158.
Caputi, P., Jayasuriya, R., & Fares, J. (1995). The development of a measure of attitudes toward computers in nursing. In H. Hasan & N. Nicastri (Eds.), HCI a light into the future: Conference proceedings (pp. 138–141). Australia: Computer Human Interaction Special Interest Group.
Colley, A. M., Gale, M. T., & Harris, T. A. (1994). Effects of gender role identity and experience on computer attitude components. Journal of Educational Computing Research, 10, 129–137.
Culpan, O. (1995). Attitudes of end-users towards information technology in manufacturing and service industries. Information and Management, 28, 167–176.
Czaja, S. J., & Sharit, J. (1991). Age differences in the performance of computer-based work. Psychology and Aging, 8(1), 59–67.
Dambrot, F. H., Silling, S. M., & Zook, A. (1988). Psychology of computer use: Sex differences in prediction of course grades in a computer language course. Perceptual and Motor Skills, 66, 627–636.
Dambrot, F. H., Watkins-Malek, M. A., Silling, S. M., Marshall, R. S., & Garver, J. (1985). Correlates of sex differences in attitudes toward and involvement with computers. Journal of Vocational Behavior, 27, 71–86.
Durndell, A., Macleod, H., & Siann, G. (1987). A survey of attitudes to, knowledge about and experience of computers. Computer Education, 11(3), 167–175.
Eagly, A. H., & Chaiken, S. (1993). The psychology of attitudes. Fort Worth: Harcourt Brace Jovanovich.
Ertmer, P. A., Evenbeck, E., Cennamo, K. S., & Lehman, J. D. (1994). Enhancing self-efficacy for computer technologies through the use of positive classroom experiences. Educational Technology, Research and Development, 42, 45–62.
Farthing, G. W. (1992). The psychology of consciousness. Englewood Cliffs, NJ: Prentice-Hall.
Fazio, R. H., & Zanna, M. P. (1978). Attitudinal qualities relating to the strength of the attitude–behaviour relationship. Journal of Experimental Social Psychology, 14, 398–408.
Fazio, R. H., & Zanna, M. P. (1981). Direct experience and attitude–behaviour consistency. Advances in Experimental Social Psychology, 14, 161–202.
Friedman, I. A. (1995). Measuring school principal-experienced burnout. Educational and Psychological Measurement, 55(4), 641–651.
Hall, J., & Cooper, J. (1991). Gender, experience and attributions to computers. Journal of Educational Computing Research, 7(1), 51–60.
Hasan, B. (2003). The influence of specific computer experiences on computer self-efficacy beliefs. Computers in Human Behavior, 19(4), 443–450.
Igbaria, M., Iivari, J., & Maragahh, H. (1995). Why do individuals use computer technology? A Finnish case study. Information and Management, 29, 227–238.
Jones, T., & Clarke, V. A. (1995). Diversity as a determinant of attitudes: A possible explanation of the apparent advantage of single-sex settings. Journal of Educational Computing Research, 12, 51–64.
Kaplan, R. M., & Saccuzzo, D. P. (1993). Psychological testing: Principles, applications, and issues (3rd ed.). California: Brooks/Cole.
Kay, R. H. (1992). An analysis of methods used to examine gender differences in computer related behaviour. Journal of Educational Computing Research, 8, 323–336.
Kay, R. H. (1993). An exploration of theoretical and practical foundations for assessing attitudes toward computers: The computer attitude measure (CAM). Computers in Human Behavior, 9, 371–386.
Kline, P. (1993). The handbook of psychological testing. London: Routledge.
Krosnick, J. A., Boninger, D. S., Chuang, Y. C., Berent, M. K., & Carnot, C. G. (1990). Attitude strength: One construct or many related constructs? Journal of Personality and Social Psychology, 65(6), 1132–1151.
Lambert, M. E. (1991). Effects of computer use during coursework on computer aversion. Computers in Human Behavior, 7, 319–331.
Loyd, B. H., & Gressard, C. (1984). Reliability and factorial validity of computer attitude scales. Educational and Psychological Measurement, 44(2), 501–505.
Lundin, R. W. (1991). Theories and systems of psychology (4th ed.). Massachusetts: Heath and Company.
McInerney, V., McInerney, D., & Sinclair, K. (1994). Student teachers, computer anxiety and computer experience. Journal of Educational Computing Research, 11(1), 27–50.
Miller, F., & Varma, N. (1994). The effects of psychosocial factors on Indian children's attitudes towards computers. Journal of Educational Computing Research, 10(3), 223–238.
Miller, S. M. (1987). Monitoring and blunting: Validation of a questionnaire to assess styles of information seeking under threat. Journal of Personality and Social Psychology, 52, 345–353.
Miller, S. M. (1989). Cognitive information styles in the process of coping with threat and frustration. Advances in Behavioral Research and Therapy, 11, 223–234.
Miller, S. M. (1992). Individual differences in the coping process: What to know and when to know it. In B. Carpenter (Ed.), Personal coping: Theory, research, and applications. New York: Praeger.
Miller, S. M., & Mangan, C. E. (1983). The interacting effects of information and coping style in adapting to gynecologic stress: Should the doctor tell all? Journal of Personality and Social Psychology, 45, 223–236.
Pope-Davis, D., & Twing, J. S. (1991). The effects of age, gender, and experience on measures of attitude regarding computers. Computers in Human Behavior, 7, 333–339.
Potosky, D., & Bobko, P. (1998). The computer understanding and experience scale: A self-report measure of computer experience. Computers in Human Behavior, 14, 337–348.
Rawstorne, P., Caputi, P., & Smith, A. (1998). Towards the development of a measure of subjective computer experience: Working paper 2. Unpublished manuscript, University of Wollongong.
Reber, A. S. (1985). Dictionary of psychology. London: Penguin Group.
Robertson, S. I., Calder, J., Fung, P., Jones, A., & O'Shea, T. (1995). Computer attitudes in an English secondary school. Computer Education, 24(2), 73–81.
Robinson-Staveley, K., & Cooper, J. (1990). Mere presence, gender, and reactions to computers: Studying human–computer interaction in the social context. Journal of Experimental Social Psychology, 26, 168–183.
Ryker, R., & Nath, R. (1995). An empirical examination of the impact of computer information systems on users. Information and Management, 29(4), 207–214.
Scarpa, R., Smeltzer, S. C., & Jaison, B. (1992). Attitudes of nurses towards computerization: A replication. Computers in Nursing, 10(2), 72–80.
Shashaani, L. (1994). Gender differences in computer experience and its influence on computer attitudes. Journal of Educational Computing Research, 11(4), 347–367.
Smith, B. L. (1998). Subjective computer experience: Construct validation and relations to computer attitude. Unpublished honours thesis, University of Wollongong, Australia.
Smith, B. L., Caputi, P., Crittenden, N., Jayasuriya, R., & Rawstorne, P. (1999). A review of the construct of computer experience. Computers in Human Behavior, 15, 227–242.
Tabachnick, B. G., & Fidell, L. S. (1996). Using multivariate statistics (3rd ed.). New York: Harper Collins.
Todman, J., & Monaghan, E. (1994). Qualitative differences in computer experience, computer anxiety, and students' use of computers: A path model. Computers in Human Behavior, 10(4), 529–539.
Yelland, N. (1995). Young children's attitudes to computers and computing. Australian Journal of Early Childhood, 20(2), 21–25.
Zakrajsek, T. D., Waters, L. K., Popovich, P. M., Craft, S., & Hampton, W. T. (1990). Convergent validity of scales measuring computer-related attitudes. Educational and Psychological Measurement, 50, 343–349.