Organizational Behavior and Human Decision Processes 97 (2005) 161–177 www.elsevier.com/locate/obhdp
An examination of the effect of computerized performance monitoring feedback on monitoring fairness, performance, and satisfaction

G. Stoney Alder a,*, Maureen L. Ambrose b

a Department of Management, University of Nevada, Las Vegas, 4505 Maryland Parkway, P.O. Box 456009, Las Vegas, NV 89154-6009, USA
b Department of Management, University of Central Florida, P.O. Box 161400, Orlando, FL 32816-1400, USA

Received 23 August 2002
Available online 27 April 2005
Abstract

Research has examined how the design and implementation of computerized performance monitoring (CPM) systems affects individuals' performance and attitudes. In this study, we examine how the attributes of the feedback received in a CPM context affect individuals' reactions to monitoring. One hundred and sixty-five individuals participated in an experiment that examined the effect of three feedback attributes (feedback control, feedback constructiveness, and feedback medium) on monitoring fairness judgments, performance, and satisfaction. Results demonstrate feedback constructiveness significantly predicted monitoring fairness. Additionally, supervisor-mediated feedback was associated with higher levels of monitoring fairness than was computer-mediated feedback. Moreover, monitoring fairness mediated the relationship between these feedback attributes and performance and satisfaction. However, contrary to expectations, feedback control did not affect perceptions of monitoring fairness. Implications for future research on the design of CPM systems are discussed.
© 2005 Elsevier Inc. All rights reserved.

Keywords: Computer performance monitoring; Fairness; Performance feedback
Organizations have monitored their employees for centuries (US Congress, 1987). However, recent advances in computer technology are transforming the nature of employee performance monitoring. Although there has been substantial debate about the benefits and costs of this technology, most researchers suggest that monitoring technology itself is neutral; it is the design and implementation of the technology that affects employee reactions (Ambrose & Kulik, 1994; Attewell, 1987; DeTienne & Abbott, 1993; Stanton, 2000a; Westin, 1992). In this paper, we explore the effect of three feedback attributes on individuals' reactions to a computer
* Corresponding author.
E-mail addresses: [email protected] (G.S. Alder), [email protected] (M.L. Ambrose).
0749-5978/$ - see front matter © 2005 Elsevier Inc. All rights reserved.
doi:10.1016/j.obhdp.2005.03.003
performance monitoring system. Additionally, we suggest that the perceived fairness of monitoring mediates the relationship between these feedback attributes and task performance and job satisfaction.
Computer performance monitoring

There is a wide range of technology-aided monitoring techniques (e.g., video surveillance, call observation, telephone call accounting, keystroke or computer time accounting, cards and beepers to monitor locations, computer file monitoring, screen sharing capabilities on networks, and telephone call observation). In this paper, we focus on computer performance monitoring (CPM). CPM is the use of computer hardware and software to
collect, store, analyze, and report individual or group actions or performance (Nebeker & Tatum, 1993). An increasing number of organizations are turning to CPM in an attempt to increase the effectiveness of their monitoring efforts. In 1987, the Office of Technology Assessment (OTA) estimated that 6 million US workers were electronically monitored (US Congress, 1987). By 1994, this number had grown to 10 million (Flanagan, 1994). Recent estimates indicate that as many as 75% of large companies electronically monitor their employees (American Management Association, 2000) and at least 40 million US workers may be subject to electronic monitoring (Botan, 1996).

CPM has engendered debate about its benefits and costs (Hays, 1999; Kovach, Conner, Livneh, Scallan, & Schwartz, 2000). Opponents of monitoring claim that such systems invade worker privacy, increase work pressure and stress, reduce social interaction and support, create an atmosphere of mistrust, and undermine employee loyalty (DeTienne, 1993; Frey, 1993; Jenero & Mapes-Riordan, 1992; Parenti, 2001). In contrast, proponents of monitoring argue that it is an invaluable management tool that can benefit organizations and their employees. Monitoring systems provide managers with more useful information on employee performance because they can continuously monitor performance, record a great deal of data, and measure these data unobtrusively (Stanton, 2000b). CPM systems may also enhance the objectivity of performance appraisals (Angel, 1989).

In general, researchers take a middle ground, suggesting that CPM technology itself is neutral. For example, Attewell (1987) suggested that the use of performance monitoring is as old as industry itself and, although CPM provides new methods for examining employees' work, the fundamental purposes, uses, and results of electronic and computer monitoring do not differ from more traditional forms. Aiello and Svec (1993) similarly concluded that the way employers, managers, and employees use the information gathered through monitoring carries the greatest weight in determining the reactions to and the impact of computer monitoring. According to this perspective, it is how the system is designed, implemented, and used that affects employee reactions and the system's effectiveness (Alder, 1998; Chalykoff & Kochan, 1989; Stanton, 2000a; Stanton & Weiss, 2000).

CPM researchers suggest fairness is an important determinant of employees' behavioral and attitudinal reactions to CPM (Ambrose & Alder, 2000; Kidwell & Bennett, 1994; Stanton, 2000b). Although little empirical research exists on the effects of perceived monitoring fairness, research considers a variety of CPM features that might affect the perceived fairness of the CPM system. For example, Stanton (2000b) found monitoring consistency, monitoring control, knowledge of performance from monitoring, and monitoring justification
affected the perceived fairness of both electronic and traditional (in-person) monitoring. Douthitt and Aiello (2001) found participation affected CPM fairness. Alge (2001) found the relevance of the activities monitored, participation, and perceptions of privacy invasion affected the perceived fairness of CPM.

We suggest that monitoring-based feedback is an important aspect of CPM that influences its perceived fairness. The importance of performance feedback is widely acknowledged in both theoretical and empirical work in organizational behavior. For example, Klein's (1989) control theory of motivation positions feedback and goals as dual elements in the motivation process. In this perspective, goals and performance standards without feedback are of little value. Feedback similarly plays a significant role in other models of motivation such as goal setting (Locke, Shaw, Saari, & Latham, 1981), positive reinforcement (Luthans & Kreitner, 1985), and job design (Hackman & Oldham, 1980). Feedback research indicates that feedback strongly influences individuals' behaviors and attitudes (Fedor, 1991; Fried & Ferris, 1987). Indeed, Taylor, Fisher, and Ilgen (1984) suggest that feedback is essential to organizational effectiveness, and Cusella (1987) argues that understanding feedback is central to our understanding of organizational behavior.

Similarly, feedback may be central to individuals' reactions to CPM. The central purpose of both in-person and computer performance monitoring is to gather information to provide feedback to employees, adjust operations, and make reward allocation decisions (Komaki, 1986; Niehoff & Moorman, 1996). Feedback is a fundamental aspect of any electronic performance monitoring system (Amick & Smith, 1992). In early work on CPM, Chalykoff and Kochan (1989) suggested that feedback processes were critical to individuals' reactions to CPM. Yet, despite the central role of feedback in CPM systems, little research empirically examines the relationship between the feedback received by individuals who are monitored by computer systems and job outcomes. In this manuscript, we explore this relationship. We first examine three attributes of CPM-related feedback—control over the frequency of feedback, feedback constructiveness, and feedback medium—and their effect on perceptions of CPM fairness. We subsequently explore the effects of CPM fairness on performance and satisfaction.
Feedback attributes and CPM fairness

Although previous empirical research has not considered the effect of feedback on CPM system fairness, the importance of feedback for CPM fairness has not been completely overlooked. Ambrose and Alder (2000) present the most comprehensive model of CPM fairness, and we draw on their model as a foundation for our study. Ambrose and Alder (2000) describe a variety of CPM
attributes that are expected to affect perceptions of monitoring fairness and, consequently, monitoring outcomes such as performance and satisfaction. Among these attributes, they identify feedback timing and feedback tone. Their feedback attributes correspond generally with ours. Timing of feedback reflects when employees receive feedback, which is reflected in our control over feedback attribute. Feedback tone is equivalent to feedback constructiveness. (Indeed, Ambrose and Alder use the two terms interchangeably.) To these two feedback attributes, we add feedback medium. This is a construct Ambrose and Alder acknowledge (pp. 211–212), but do not include in their model.

Drawing on Ambrose and Alder (2000), and given the central role feedback plays in CPM, we suggest feedback attributes warrant empirical investigation. We expect feedback attributes to have a direct effect on the perceived fairness of monitoring and an indirect effect on performance and satisfaction. Specifically, we suggest the relationship between feedback attributes and outcomes is mediated by the perceived fairness of the monitoring. Our proposed relationships are depicted in Fig. 1. In the remainder of this section, we explore the relationship between these feedback attributes and CPM fairness. We then consider the relationship between perceived CPM fairness and satisfaction and performance.

Feedback control

Control plays an important role in human behavior. A belief in personal control over one's environment has
been viewed as an essential aspect of human motivation (Averill, 1973; Ganster & Fusilier, 1989; Terry & Jimmieson, 1999). For example, Kelly (1955) argued that individuals' ultimate aim is to predict and control events. Similarly, Lefcourt (1973) concluded that the perception of control is a common predictor of responses to aversive events and that the sense of control, the illusion that one can exercise personal choice, has a positive role in sustaining life. In industrial-organizational psychology, personal control is central to the research on stress and well-being (Ganster & Fusilier, 1989; Terry & Jimmieson, 1999). Research demonstrates that perceived control over the onset, timing, or termination of an aversive event can make the anticipation and experience of the event more tolerable (Averill, 1973; Ganster & Fusilier, 1989; Wright, 1998). Moreover, higher levels of job control are associated with higher levels of organizational fairness (Elovainio, Kivimaki, & Helkama, 2001).

In the organizational justice literature, control is an essential aspect of organizational fairness (Cropanzano, Byrne, Bobocel, & Rupp, 2001). Indeed, the effect of control on perceptions of fairness is perhaps one of the most robust effects in the organizational justice literature (Bies & Shapiro, 1988; Lind, Kanfer, & Earley, 1990; Lind, Kurtz, Musante, Walker, & Thibaut, 1980; Thibaut & Walker, 1975). In their seminal work, Thibaut and Walker (1975) found that when individuals were allowed control over the decision or process, they perceived the process as more fair and were more satisfied with their outcomes than when they were not allowed control.
Fig. 1. Antecedents and consequences of monitoring fairness.
Cropanzano et al. (2001) suggest one can view this finding as a manifestation of individuals' need for control. Recent meta-analyses demonstrate the consistent effect of control on perceived fairness (Colquitt, Conlon, Wesson, Porter, & Ng, 2001; Cohen-Charash & Spector, 2001).

Control in the context of CPM

Research on reactions to CPM has also considered the effect of control. Stanton and Barnes-Farrell (1996) draw on Greenberger and Strasser's (1986) theory of personal control to suggest that control over CPM is a type of personal control. They demonstrated that individuals who were able to control monitoring (i.e., delay or prevent monitoring) had better performance than individuals without control. Similarly, Douthitt and Aiello (2001) and Aiello and Svec (1993) found computer-monitored individuals performed better when they had control over monitoring, specifically, when they could turn monitoring off or interrupt the monitoring. Chalykoff and Kochan (1989) reported that surveyed employees who had voiced opinions about how a monitoring system would be used were more accepting of the system.

Two studies examine the relationship between monitoring control and fairness. Stanton (2000b) found that perceived control over the time and setting of monitoring was associated with greater perceptions of fairness in both electronically and traditionally monitored workplaces. Douthitt and Aiello (2001) found mixed results for control. Direct control over monitoring (the ability to turn monitoring off) did not affect the perceived fairness of monitoring. However, monitored individuals who were allowed input about the work process rated monitoring as fairer than individuals who were not allowed input.

Feedback control

In previous CPM research on monitoring, control has usually been manipulated by allowing participants to turn off, interrupt, or delay the monitoring (versus no control). However, organizations may be reluctant to relinquish this much control over such a powerful management tool as CPM. Perhaps in recognition of this, monitoring researchers have suggested that different targets of control should be investigated (Douthitt & Aiello, 2001). We agree there may be other ways to increase worker control in the context of CPM. We suggest one such way is to provide monitored employees with control over when they receive CPM-related performance feedback. Workers might be permitted to turn monitoring-related feedback on or off rather than monitoring itself.

In the feedback literature, the issue of control has often been associated with research on feedback frequency. The research on feedback frequency and feedback control is complex. Research on feedback frequency suggests frequency may affect perceptions of
control. Indeed, both insufficient amounts of feedback and too frequent feedback can affect perceptions of personal control. Ashford and Cummings (1983) suggest that if individuals are given insufficient feedback they may feel as though they lack the informational resources necessary to monitor their progress and may therefore feel as though they have less control over the situation. However, Ilgen, Fisher, and Taylor (1979) suggest that as the frequency of feedback increases, the degree to which the recipient is controlled by the source also may increase, leading to a perceived loss of personal control. Similarly, Chhokar and Wallin (1984) suggest too frequent feedback may diminish individuals' feelings of personal control. Research on feedback seeking indicates that employees attempt to exert some control over the amount of feedback they receive (Ashford, 1986; Ashford & Cummings, 1983; Larson, 1989). Giving individuals control over the feedback may restore their sense of personal control.

Previous research demonstrates that increased personal control over job attributes is associated with increased perceptions of fairness (Elovainio et al., 2001). From an organizational justice perspective, enabling employees to control the frequency of feedback they receive affords them a form of process control and, thus, enhances fairness. We hypothesize:

H1: CPM systems will be perceived as fairer when monitored individuals have control over the frequency of feedback they receive.

Feedback constructiveness

Our second feedback attribute is constructiveness. Ambrose and Alder (2000) draw on research by Baron (1988, 1993) for their discussion of constructiveness, and we draw on Baron's research as well. We define constructive feedback as feedback that is specific and sensitive (considerate in tone, contains no threats, and does not attribute poor performance to internal causes). Constructive feedback may be contrasted with destructive feedback, which is general and insensitive. For example, participants in Baron's (1988) constructive conditions received feedback that stated, "I think there is a lot of room for improvement. A better product name would help…" In contrast, participants in the destructive condition were told, "I wasn't impressed at all… I had the impression that you didn't try (or maybe it's just a lack of talent). If your work doesn't improve, I'd get someone else to do it" (p. 200).

Baron's (1988) research indicates that constructiveness impacts recipient reactions to feedback. Destructive feedback may result in counter-productive behaviors and attitudes (Baron, 1988, 1993). Specifically, Baron (1988) found that destructive feedback related positively to increased anger, tension, and conflict. Additionally,
participants who received destructive feedback set lower goals and reported lower self-efficacy than participants who received constructive feedback. Baron (1993) suggests that constructive feedback may be more effective than destructive feedback because recipients consider it more interpersonally sensitive and fair.

There is substantial research in the organizational justice literature that suggests feedback constructiveness will affect monitoring fairness. Research on organizational justice indicates that the quality of interpersonal treatment individuals receive from authority figures influences perceptions of fairness (Bies & Moag, 1986; Colquitt et al., 2001; Folger & Bies, 1989; Tyler & Lind, 1992). Research has identified two dimensions of interpersonal treatment that affect perceptions of fairness: interpersonal sensitivity and respect (Greenberg, 1994; Shapiro & Brett, 1993) and adequate justifications (Bies & Shapiro, 1988; Bies, Shapiro, & Cummings, 1988; Shapiro, Buttner, & Barry, 1994). Feedback that accords recipients courtesy, respect, and politeness should be perceived as more interpersonally sensitive and fair. We suggest the threats and internal attributions inherent in destructive feedback may be considered impolite and lacking in courtesy and thereby diminish perceptions of fairness. In contrast, constructive feedback is devoid of attributions and threats and is therefore more interpersonally sensitive. Additionally, constructive feedback is specific whereas destructive feedback is general. To the extent that the added information provided by more specific feedback explains or justifies performance evaluations, it may also help satisfy the justification criteria and further bolster perceptions of fairness.

Feedback constructiveness has been considered indirectly in the CPM literature. One of the criticisms of CPM is that it is used to punish low performers with feedback that is intimidating, harassing, and hostile (Nine to Five, 1990; Nussbaum & duRivage, 1986). DiTecco, Cwitco, Arsenault, and Andre (1992) found that the quality of feedback was a concern for monitored operators because they were simply told to speed up in a very general way. For example, CPM systems are sometimes designed to provide workers with messages such as "work faster." This type of feedback would be categorized as destructive because it is both general and insensitive. In contrast, Chalykoff and Kochan's (1989) research indicates that constructive feedback may enhance reactions to monitoring. Chalykoff and Kochan surveyed 960 employees in the Automated Collection System, which is part of the Tax Collection Division of the Internal Revenue Service (IRS). They conclude that a majority of employees will respond positively to monitoring to the extent that a positive, developmental approach is fostered. Similarly, Aiello and Shao (1992) found that when monitoring was used to help monitored individuals,
participants viewed the experiment and the supervisor in a more favorable and positive way, were more satisfied with their performance, and experienced less anxiety than when monitoring was used to "catch" low performance. Constructive feedback is more conducive to a developmental approach to monitoring than is destructive feedback. We expect individuals will perceive monitoring as fairer when they receive constructive feedback rather than destructive feedback. We hypothesize:

H2: Higher perceptions of CPM fairness will be associated with constructive feedback than with destructive feedback.

Feedback medium

Our third feedback attribute is feedback medium. Feedback research indicates that the source of the feedback is a major determinant of recipient responses to the feedback (Cusella, 1982; Fedor, 1991; Ilgen et al., 1979). In the feedback literature, there are two primary approaches to research on the effect of feedback source. One approach is to consider the effects of various source characteristics including source credibility (e.g., trustworthiness and expertise), source power, and source intentions (Bannister, 1986; Ilgen et al., 1979; Podsakoff & Farh, 1989). A second approach—and the one we take in this paper—is to consider the effect of feedback from superiors versus other sources of feedback (e.g., peers, the task environment; Ashford & Cummings, 1983; Earley, 1988; Fedor, 1991; Kluger & Adler, 1993; Greller, 1980; Herold, Liden, & Leatherwood, 1987). In general, research demonstrates the source of feedback affects employee reactions.

CPM researchers have also recognized that source¹ may affect employee reactions (Alder & Tompkins, 1997; Amick & Smith, 1992). CPM research suggests individuals' reactions to CPM are influenced by whether the CPM system provides information to employees, their supervisor, or both. Grant and Higgins (1989) found that employees were more accepting of CPM systems that provide data only to the employee. Carayon (1993) similarly argues that monitored employees will experience reduced work pressure when they receive feedback directly from the CPM system.

However, a common criticism of CPM is that it reduces interaction between workers and their supervisors (Aiello, 1993; Amick & Smith, 1992; Irving, Higgins, & Safayeni, 1986; Nussbaum & duRivage, 1986). Amick and Smith suggest electronic performance monitoring systems, such as CPM, change the interactions between supervisors and subordinates. CPM systems perform many tasks that previously required interaction among workers and their supervisors. In essence, the "electronic supervisor" may often replace the human supervisor and, in the process, eliminate or greatly reduce human interaction (Nussbaum & duRivage, 1986; US Congress, 1987). Aiello (1993) found that monitored workers perceived less social support and felt lonelier at work. CPM may depersonalize the work environment, creating feelings of isolation and loneliness (Aiello, 1993; Amick & Smith, 1992; Carayon, 1993).

Research suggests that supervisor-provided face-to-face feedback may mitigate the negative effect of depersonalization and result in more positive employee attitudes than feedback provided by the CPM system (Alder & Tompkins, 1997; Amick & Smith, 1992; Chalykoff & Kochan, 1989; Kidwell & Bennett, 1994). Thus, to the extent that face-to-face feedback facilitates supervisory support, it may serve to humanize the workplace and enhance employee reactions. Justice research leads to similar conclusions. For example, research indicates that the quality of interpersonal treatment individuals receive impacts their perceptions of fairness (Bies & Moag, 1986; Lind & Tyler, 1988; Tyler & Bies, 1990; Tyler & Lind, 1992). Tyler (1989) argues that individuals are concerned with their standing in an organization. As a result, fairness perceptions are enhanced when individuals feel they are valued by the organization. To the extent that feedback from the CPM system reduces interaction and depersonalizes the work environment, interpersonal sensitivity is diminished. As a result, individuals will feel less valued by the organization, their perceived standing within the organization will be threatened, and they will perceive the system as less fair.

However, the suggestion that person-mediated feedback may enhance employee attitudes and fairness perceptions runs counter to feedback research (Earley, 1988; Kluger & Adler, 1993). Earley (1988) found that computer-provided feedback was more trusted and led to better performance than supervisor-provided feedback. Kluger and Adler (1993) found participants in a laboratory experiment were more likely to seek feedback from a computer than from a person. In short, monitoring and justice research lead to different conclusions regarding the effect of person- versus computer-mediated feedback than does feedback research. This discrepancy suggests that it may be insufficient to assess only the medium of feedback. Other factors may influence the relative impact of computer-mediated and face-to-face feedback. We suggest the constructiveness of the feedback is one such factor.

¹ "Source" refers to the origin of the information; "medium" is the channel (e.g., face-to-face, phone, and memo) used to communicate the information. In most feedback research, the source of the feedback and the medium are the same—the supervisor both generates and delivers feedback. However, in CPM research a distinction between the two is necessary. In our study, the computer is the source of all information, but the medium—computer or face-to-face—varies.

Feedback medium and feedback constructiveness

The source of feedback is one attribute of the feedback process. However, organizational justice research
demonstrates that how a procedure is enacted affects perceptions of fairness. Thus, the source is not the only relevant feedback characteristic. Research on communication media is relevant to the relationship between constructiveness, medium, and individuals' perceptions of monitoring fairness. Communication research indicates that face-to-face, personal communication is the richest and most emotional medium (Daft & Lengel, 1984, 1986; Huber & Daft, 1987; Williams, 1978). Face-to-face communication is more personal and meaningful and, therefore, may engender stronger reactions than leaner media such as computer-mediated communication.

Thus, we suggest that the effect of constructiveness will be magnified by person-mediated feedback relative to computer-mediated feedback. Specifically, we expect that when constructive feedback is provided by a human supervisor, this will enhance the interpersonal sensitivity of the communication, which will result in greater perceptions of fairness than will constructive communication provided by a computer. In contrast, we expect that when a human supervisor provides destructive feedback, the interpersonal insensitivity will be heightened, resulting in greater perceptions of unfairness than will destructive feedback provided by a computer.

Although individuals may be more inclined to seek computer-mediated feedback than person-mediated feedback, constructiveness will influence their reactions to the feedback. Kluger and Adler's (1993) study provides indirect support for this conclusion. Although participants in Kluger and Adler's study were more likely to seek feedback from a computer than from a person, they received no support for their hypothesis that person-mediated feedback would produce negative effects relative to computer-mediated feedback. We suggest that the effect of feedback medium on individuals' reactions depends on the constructiveness of the feedback. We predict:

H3: Feedback constructiveness will moderate the effect of feedback medium. When individuals receive constructive feedback, face-to-face feedback will be associated with higher perceptions of CPM fairness than will computer-provided feedback. When feedback is destructive, computer-provided feedback will be associated with higher perceptions of CPM fairness than will face-to-face feedback.
Outcomes of CPM fairness

Conceptual research suggests that CPM fairness is a key determinant of individuals' behavioral and attitudinal reactions to CPM (Ambrose & Alder, 2000; Stanton, 2000a). However, little empirical research has examined the effects of CPM fairness on performance and satisfaction. In this section, we explore these relationships.
Monitoring fairness and performance

We suggest that the perceived fairness of monitoring influences recipients' performance by influencing perceived expectancies and utilities. According to control theories of motivation, performance feedback acts as a sensor that provides information about the recipient's task performance. The recipient subsequently compares that information to individual goals or established performance standards. If the comparison reveals a discrepancy, the recipient may be motivated to eliminate the discrepancy either by taking corrective action (increasing effort) or through physical and/or psychological withdrawal (Carver & Scheier, 1981; Klein, 1989). Klein suggests that the course of action individuals receiving negative feedback will take is determined by the perceived subjective expected utility (SEU) of pursuing the original goal or standard. When perceived SEU is high, individuals will remain committed to the goal and attempt to reduce the performance–standards gap with increased effort. In contrast, when perceived SEU is low, individuals may respond with behavioral or cognitive withdrawal. An individual who has withdrawn mentally would be expected to reduce his or her efforts, to simply go through the motions, and to avoid feedback that would increase the salience of the discrepancy.

Fairness signals the existence of a system that will permit organizational members to ultimately attain valued outcomes (Brett, 1986; Lind & Tyler, 1988; Thibaut & Walker, 1975). In contrast, unfair processes indicate the existence of a system that may preclude the attainment of valued outcomes. Thus, perceptions of fairness likely influence subjective expectancy. Additionally, fairness is a central antecedent of status evaluations such as pride and respect (Tyler, 1999; Tyler & Blader, 2000; Tyler, Degoey, & Smith, 1996). Fairness enhances individuals' sense of pride in the group and instills a feeling that they are respected by the group. As a result of this pride, fairness also encourages individuals to behave in ways that promote the welfare of the group and help the group accomplish its goals (Tyler & Blader, 2002). In essence, when fairness is high, individuals may be expected to place higher value on their performance because individual performance presumably promotes the welfare of the group and organization.

In short, we expect perceptions of monitoring fairness to influence monitored individuals' task performance because of their impact on perceived subjective expected utility. This is consistent with Lind and Tyler's (1988) conclusion that fairness increases individuals' willingness to exert extra effort. Additionally, recent meta-analyses demonstrate a significant relationship between perceived fairness and performance (Cohen-Charash & Spector, 2001; Colquitt et al., 2001). We predict that CPM fairness will be positively associated with participants' task performance.
H4: CPM fairness will be positively associated with task performance.

Monitoring fairness and satisfaction

Kidwell and Bennett (1994) found that the perceived procedural fairness of monitoring was positively related to individuals' satisfaction with the monitoring system. We suggest that the influence of CPM fairness extends beyond individuals' satisfaction with the system to include their job satisfaction. Lind, Kulik, Ambrose, and Park (1993), Lind (2001), and van den Bos, Lind, and Wilke (2001) argue for the existence of fairness heuristics. This work suggests it is impractical for organization members to evaluate the fairness of all encounters with the organization, so they form a global impression about the organization based on judgments of the fairness of a limited number of critical events. Drawing on Lind's work, Ambrose and Alder (2000) argue that, because CPM is a salient part of the work environment, it likely serves as a basis for broader organizational fairness judgments and influences individuals' attitudes toward their job and organization. Research on organizational justice demonstrates perceptions of fairness are positively related to satisfaction. Indeed, recent meta-analyses show a strong relationship between fairness and job satisfaction (Cohen-Charash & Spector, 2001; Colquitt et al., 2001). Similar conclusions may be drawn from CPM research. Thus, we predict that monitoring fairness will be positively related to job satisfaction.

H5: CPM fairness will be positively related to job satisfaction.

In summary, our model and hypotheses suggest that feedback attributes influence perceived monitoring fairness. Monitoring fairness, in turn, affects performance and satisfaction. In combination, the five hypotheses imply a mediated model in which monitoring fairness mediates the relationship between feedback attributes and satisfaction and performance. Our tests of the hypotheses will indicate whether mediation was supported.
Methods

Participants and design

A total of 165 undergraduates (97 men and 68 women) participated in the study. Participants received partial course credit for their participation. The study employed a 2 × 2 × 2 between-subjects design in which the following variables were manipulated: (1) Control over feedback (Control versus No control), (2) Constructiveness (Constructive versus Destructive), and (3) Feedback Medium (Computer versus Supervisor).
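To make the factorial structure concrete, the sketch below (not part of the original article; the condition labels follow the text, everything else is illustrative) enumerates the eight cells of the 2 × 2 × 2 design and shows one way a participant pool could be balanced across them.

```python
# Illustrative sketch of the 2 x 2 x 2 between-subjects design (not the authors' code).
import itertools
import random

factors = {
    "feedback_control": ["Control", "No control"],
    "constructiveness": ["Constructive", "Destructive"],
    "medium": ["Computer", "Supervisor"],
}

cells = list(itertools.product(*factors.values()))  # 8 unique conditions

def assign(participant_ids, seed=42):
    """Randomly assign participants to cells, cycling so cell sizes stay balanced."""
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    return {pid: cells[i % len(cells)] for i, pid in enumerate(ids)}

if __name__ == "__main__":
    assignment = assign(range(165))  # 165 participants, as in the study
    print(len(cells), "conditions;", len(assignment), "participants assigned")
```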
Procedure

On reporting to the session, participants were told that the purpose of the study was to examine the impact of management practices and techniques on the motivation of data entry workers. They were then given a stack of catalog orders and were informed that they would be entering catalog orders for a major retailer into a computer database for a period of 2 h. Although the actual orders were fictitious, the order forms were authentic forms from an actual retailer. Additionally, the forms contained catalog numbers and item descriptions that accurately correspond to the numbers and descriptions utilized by the company. Finally, the addresses, zip codes, phone numbers, and area codes on each order form corresponded to real-life addresses and phone numbers. To create a situation across conditions in which the level of difficulty did not differentially affect attitudes and performance, each participant, regardless of condition, was given an identical stack of orders.

Participants were told that everyone would be entered into a lottery drawing for cash prizes and that, based on performance, they would be awarded up to five lottery tickets. The first name drawn would be awarded $500, the next two names would receive $200, and the next three names would get $100. The experimenter told participants the number of orders they would need to successfully enter to earn a specified number of tickets.

The instructions varied at this point based on the feedback medium and control manipulations. Participants in the Computer medium conditions were told that their computers were programmed to monitor their performance (including both the number of orders and errors) and to provide them with feedback on their work. Participants in the Supervisor medium conditions were informed that their computers were connected to the "master" computer in the supervisor's office and that the supervisor would use the master computer to monitor their data-entry performance (including quantity and quality) while they worked. To ensure the believability of this manipulation, participants were shown the supervisory computer as well as how their computers networked with the supervisory computer.

The experimenter provided participants with instructions for performing the task and demonstrated a completed order form. After receiving instructions as a group, participants were each assigned a private office. This was done to ensure that participants would not overhear the feedback given to other participants. Participants then performed the data entry task for 2 h. Following the task session, participants received an evaluation of their performance and were told how many lottery tickets they would be awarded. They were then asked to respond to a series of questions appearing on the display terminal. At the completion of this stage of the experiment,
individuals were thanked again for their involvement in the study and debriefed concerning the purpose of the study. At the conclusion of the study, participants received a second debriefing via email that informed them of the procedures employed for the lottery drawing (i.e., that all participants had an equal chance in the lottery).²

² Although the fact that all participants ultimately received the same number of lottery tickets potentially raises a question about fairness to the higher performers who left the session believing they had earned a higher number of tickets, there were two reasons this was necessary. First, we hypothesized that the experimental manipulations would lead to lower performance for some participants. To award these participants fewer tickets would raise a greater concern for fairness than did awarding all participants an equal number of tickets. Second, the validity of the study would be threatened to the extent that participants later in the study knew they would get the same number of tickets regardless of their performance. Thus, although nothing significant happened between the two debriefings, we delayed the second debriefing to mitigate the risk of individuals informing future participants that everyone received the same number of lottery tickets.

Manipulations

Three feedback attributes were manipulated in the study. In the Computer medium conditions, the computer automatically flashed a feedback message on the screen for participants to read. In the Supervisor medium conditions, the session supervisor entered the participants' offices and orally provided the same message. Participants in all conditions received accurate feedback based on difficult performance standards (two standard deviations above the mean). Specifically, participants were informed that they would receive 0 tickets if they entered fewer than 33 orders, 1 ticket for entering 33–40 orders, 2 tickets for entering 41–48 orders, 3 tickets for entering 49–57 orders, 4 tickets for entering 58–66 orders, and 5 tickets for entering more than 66 orders. Baseline performance standards were established in the pilot study and served as a benchmark for feedback provided to participants in the main study.

Participants in the Destructive conditions received negative feedback consistent with concerns raised by critics of monitoring that CPM systems are often designed to provide workers with intimidating feedback messages such as "You're not working as fast as the person next to you" or "work faster" (DeTienne & Abbott, 1993). Our destructive feedback and constructive feedback manipulations were modeled after Baron's (1988) but were adapted to the task and environment of the present research. For example, a sample destructive feedback message stated, "At this rate, you will only receive 2 lottery tickets. It appears you aren't even trying." A sample constructive feedback message included, "You have entered 20 orders. At your current rate, you would enter a total of 50 orders and earn 2 lottery tickets. You can shoot for more." Pilot testing on an initial pool of feedback statements identified eight constructive and 10 destructive feedback statements that best conformed to Baron's descriptions. These items were retained for the actual simulation. Appendix A lists the constructive and destructive feedback messages that were used in the study.

Finally, the feedback control manipulations varied as to whether or not participants had control over the frequency of performance feedback. Participants in the Control conditions could receive feedback upon demand by clicking on a feedback button on their video display terminal. When participants in the Supervisor medium condition clicked on the feedback button, an indicator light appeared on the supervisor's monitor. The supervisor then entered the participant's office and gave her/him feedback on her/his performance. When participants in the Computer medium condition clicked on the feedback button, the computer flashed a feedback message on the screen for her/him to read. In contrast, there was no feedback button on the display terminal for participants in the No Control conditions. Rather, participants in the No Control conditions automatically received performance feedback at pre-determined intervals. Participants in the No Control conditions were yoked with a participant in the Control condition and received feedback at the same interval as that participant. Participants received feedback an average of 4.9 times during the 2-h session. There was no significant relationship between participant performance and the number of times they received feedback (r = .06, p < .45).

Dependent variables

At the end of the task session, participants responded to a series of questionnaire items directly on the computer. This survey obtained demographic information, the dependent measures of interest, and several manipulation checks.

Monitoring fairness

Greenberg (1990) observes that justice measures should be specific to the context in which the study is being conducted. We assessed perceptions of monitoring fairness with a two-item scale developed for this study ("I think the computer monitoring procedures used in this experiment were fair." "The way the computer monitored my performance was unfair."). Participants used a five-point Likert-type scale to respond to the fairness items. Table 1 presents scale reliabilities, standard deviations, and intercorrelations for all dependent variables in the study.

Satisfaction

Job satisfaction was assessed with a three-item scale from the Michigan Organizational Assessment Questionnaire (Cammann, Fichman, Jenkins, & Klesh, 1979; Seashore, Lawler, Mirvis, & Cammann, 1982).
Table 1
Means, SDs, and intercorrelations among dependent variables

Variable                  Mean    SD      1         2       3        4
1. Monitoring fairness    3.66    1.03    (.77)
2. Satisfaction           1.55    .71     .21**     (.62)
3. Orders entered         48.18   10.16   .17*      .12     —
4. Error rate             .90     .50     −.24**    −.10    −.27**   —

Note. Coefficient ω scale reliabilities are shown in parentheses on the diagonal.
* p < .05.
** p < .01.
We adapted the original scale by asking participants to assume they were to accept a permanent job performing the same task under the same conditions and then respond to the items. A sample item included, "All in all, I would be satisfied with the job." Participants responded to the satisfaction items using a five-point Likert-type scale.

Performance

We assessed two aspects of performance: quantity and quality. We assessed quantity by the total number of orders participants entered during the simulation. Participants' error rate served as a measure of performance quality. Error rate was determined by dividing the total number of errors keyed by the total number of orders entered.

Control variable

We used participants' self-reported typing speed (in words/min) as a control variable because previous research indicates that baseline ability influences reactions to feedback.
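For concreteness, the small sketch below (ours, with a hypothetical record format) scores the two performance measures described above for a single participant: quantity as the count of orders entered, and error rate as total errors divided by orders entered.

```python
# Illustrative only (hypothetical record format): scoring the two performance measures.
def score_performance(entries):
    """entries: list of (order_id, error_count) tuples for one participant."""
    orders_entered = len(entries)                       # quantity
    total_errors = sum(errors for _, errors in entries)
    error_rate = total_errors / orders_entered if orders_entered else 0.0  # quality
    return orders_entered, error_rate

print(score_performance([("A1", 0), ("A2", 2), ("A3", 1)]))  # -> (3, 1.0)
```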
Results

We conducted manipulation checks for feedback control and constructiveness. First, we assessed the effectiveness of the control manipulation with a single item: "I had control over when I received feedback." Participants responded to this item on a five-point scale ranging from Strongly Disagree (1) to Strongly Agree (5). As expected, participants in the Control condition reported feeling a much stronger sense of control over the feedback they received than did participants in the No Control conditions (t(164) = 10.28, p < .01; M = 3.89 and 1.78). Next, we tested the effectiveness of the constructiveness manipulation with a four-item scale that assessed participants' perceptions of feedback constructiveness. These items were based on the definition of constructiveness (e.g., "How sensitive was the tone of the feedback you were given?"). As expected, participants in the Constructive conditions reported that the feedback they received was more constructive than did participants in the
Destructive conditions (t(164) = 15.29, p < .01; M = 3.88 and 2.07).

We conducted a confirmatory factor analysis (CFA) for the items for monitoring fairness and job satisfaction. First, we assessed the fit of a two-factor model. The analyses demonstrated that the two-factor model provided a good fit (χ² = 3.33, df = 4, p = .50; RMSEA = 0.0, IFI = 1.0, CFI = 1.0) (Hu & Bentler, 1999). The two factors were correlated at .35. The fairness items' standardized factor loadings were .86 and .72. However, the standardized factor loadings for the job satisfaction items were .38, .99, and .34. As described below, the pattern of standardized item loadings for the job satisfaction items indicates that coefficient α may provide an artificially low estimate of reliability. We also compared the fit of the two-factor model to that of a one-factor model that combined all items (χ² = 43.86, df = 5; RMSEA = .22, IFI = .71, CFI = .71). The two-factor model produces a significant improvement in χ² over the one-factor model (χ² difference = 40.53, df = 1, p < .01), suggesting a better fit than this alternative model (Schumacker & Lomax, 1996).

As shown in Table 1, the scale reliabilities for our monitoring fairness and job satisfaction measures using coefficient ω were .77 and .62, respectively. When estimated with Cronbach's α, the reliabilities of our fairness and satisfaction measures were .76 and .54, respectively. However, coefficient α is based on the assumption of equivalence (i.e., the loadings of items assigned to a scale are equal). When equivalence is violated, as is the case with our job satisfaction items, the reliability estimate given by α is biased downward (Edwards, 2003). In contrast, ω relaxes the assumption of equivalence. If a scale's items are equivalent, ω reduces to α. Otherwise, ω provides a more accurate, unbiased estimate of reliability than that given by α (Edwards, 2003). Therefore, ω is the appropriate indicator of reliability for the measures in our study.

There has been discussion in the literature about the appropriate method for testing mediation. MacKinnon, Lockwood, Hoffman, West, and Sheets (2002) recently conducted a Monte Carlo simulation to compare 14 methods of testing mediation. Results indicate the widely used method proposed by Baron and Kenny (1986) has Type I error rates that are too small in all conditions and very low power unless the effect or sample size is large. Rather, they suggest an indirect effects approach better evaluates the mediation relationship. (See also Collins, Graham, & Flaherty, 1998.) Specifically, they note that, in contrast to the Baron and Kenny approach, the test of joint significance provides the best balance of Type I error and statistical power. Based on their results, MacKinnon et al. "strongly recommend this test for experimental investigations involving simple intervening variable models" (p. 99). We use this approach to test our model and hypotheses.
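As a side note on the reliability estimates just reported: the sketch below is ours, not the authors' code, and it assumes the standard composite-reliability form of coefficient ω discussed by Edwards (2003). Applied to the standardized loadings reported above, it reproduces the ω values of .77 and .62.

```python
# Illustrative sketch: coefficient omega from standardized CFA loadings,
# assuming uncorrelated uniquenesses (theta_i = 1 - loading_i^2).
def coefficient_omega(std_loadings):
    # omega = (sum of loadings)^2 / [(sum of loadings)^2 + sum of uniquenesses]
    loading_sum_sq = sum(std_loadings) ** 2
    uniqueness_sum = sum(1 - l ** 2 for l in std_loadings)
    return loading_sum_sq / (loading_sum_sq + uniqueness_sum)

print(round(coefficient_omega([0.86, 0.72]), 2))        # monitoring fairness items -> 0.77
print(round(coefficient_omega([0.38, 0.99, 0.34]), 2))  # job satisfaction items -> 0.62
```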
The joint significance test approach suggests one can conclude mediation occurs when two conditions are met: (1) the independent variable predicts the mediator and (2) the mediator predicts the dependent variable, controlling for the independent variables (MacKinnon et al., 2002). We follow this approach here by conducting two regressions. The first regression assesses the relationship between feedback attributes and perceptions of monitoring fairness. This analysis tests the first condition for mediation and Hypotheses 1–3. The second regression assesses the relationship between monitoring fairness and performance and satisfaction, assesses Hypotheses 4 and 5, and indicates whether mediation was supported.

The results for the first step are shown in Table 2. This analysis provides some support for our hypotheses. Contrary to expectations, Hypothesis 1 was not supported. Feedback Control did not significantly affect the perceived fairness of monitoring. However, Hypothesis 2 was supported. Constructiveness is a significant predictor of monitoring fairness. Participants in the Constructive conditions rated the monitoring as fairer than did those in the Destructive conditions (B = .79, p < .001; M = 4.11 and 3.20). Hypothesis 3 predicted a significant interaction between Feedback Constructiveness and Medium on monitoring fairness. This hypothesis was not supported. However, there was a significant main effect for Feedback Medium on monitoring fairness such that participants in the Supervisor-medium conditions rated the monitoring as fairer than did those in the Computer-medium conditions (B = .34, p < .05; M = 3.89 and 3.43).

Table 2
Effects of feedback attributes on monitoring fairness

Variable                      B^a
Typing speed                  0.01 (0.01)
Constructiveness              0.79** (0.20)
Medium                        0.34* (0.20)
Control                       −0.03 (0.14)
Constructiveness × Medium     0.24 (0.29)
Constant                      2.92** (0.24)
Model F                       10.51**
Multiple R                    .50
R²                            .25

^a Unstandardized regression coefficients (SE in parentheses).
* p < .05.
** p < .001.

The second set of analyses tests the second step for mediation and examines our fourth and fifth hypotheses. Table 3 depicts the results of this analysis.

Table 3
Effects of monitoring fairness on task performance and satisfaction

Variable                      No. of orders entered, B^a    Error rate, B^a      Satisfaction, B^a
Typing speed                  .18**** (0.05)                −.003 (0.002)        0.01** (.003)
Constructiveness              −1.72 (2.22)                  −0.06 (0.11)         −0.09 (0.16)
Medium                        −1.71 (2.17)                  0.13 (0.11)          −0.15 (0.15)
Control                       −0.90 (1.51)                  0.15** (0.07)        −0.01 (0.11)
Constructiveness × Medium     2.45 (3.04)                   0.10 (0.15)          0.41* (0.21)
Monitoring fairness           1.67** (0.84)                 −0.13**** (0.04)     0.10** (0.06)
Constant                      36.96**** (3.52)              1.35**** (0.17)
Model F                       3.52****                      3.83****             3.28***
Multiple R                    .36                           .36                  .33
R²                            .13                           .13                  .11

^a Unstandardized regression coefficients (SE in parentheses).
* p < .10.
** p < .05.
*** p < .01.
**** p < .001.

Consistent with Hypothesis 4, the perceived fairness of monitoring significantly predicts performance (both quantity and
quality). Monitoring fairness was positively related to quantity of orders entered (B = 1.67, p < .05) and negatively related to error rate (B = −.13, p < .001). Our fifth hypothesis predicted that monitoring fairness would be positively related to job satisfaction. As shown in Table 3, the results provide support for this hypothesis. Monitoring fairness was positively related to job satisfaction (B = .10, p < .05).

The pattern of results provides partial support for our mediation model. Monitoring fairness mediated the relationship between Feedback Constructiveness and outcomes. Monitoring fairness also mediated the relationship between Feedback Medium and outcomes. However, as there was no significant relationship between Feedback Control and monitoring fairness (failing step 1 in the test for mediation), monitoring fairness did not mediate the relationship between Feedback Control and outcomes.
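For readers who want to see the two-step logic in code, the sketch below is a minimal illustration, not the study's data or analysis script: the dataset is synthetic and the variable names are hypothetical. Step 1 regresses the mediator on the feedback attributes; step 2 regresses an outcome on the mediator while controlling for the attributes; mediation is inferred when both coefficients of interest are significant.

```python
# Minimal sketch of the joint-significance mediation test described above.
# Synthetic data and variable names are purely illustrative (not the study's dataset).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 165
df = pd.DataFrame({
    "typing_speed": rng.normal(40, 10, n),          # control variable
    "constructiveness": rng.integers(0, 2, n),      # 1 = constructive, 0 = destructive
    "medium": rng.integers(0, 2, n),                # 1 = supervisor, 0 = computer
    "control": rng.integers(0, 2, n),               # 1 = feedback control, 0 = no control
})
# Arbitrary generating values, chosen only so the example runs end to end.
df["fairness"] = 3 + 0.8 * df["constructiveness"] + 0.3 * df["medium"] + rng.normal(0, 1, n)
df["orders_entered"] = 40 + 1.7 * df["fairness"] + 0.2 * df["typing_speed"] + rng.normal(0, 8, n)

ivs = "typing_speed + constructiveness + medium + control + constructiveness:medium"

# Step 1: do the feedback attributes predict the mediator (monitoring fairness)?
step1 = smf.ols(f"fairness ~ {ivs}", data=df).fit()

# Step 2: does the mediator predict the outcome, controlling for the attributes?
step2 = smf.ols(f"orders_entered ~ {ivs} + fairness", data=df).fit()

mediated = (step1.pvalues["constructiveness"] < 0.05) and (step2.pvalues["fairness"] < 0.05)
print(step1.params["constructiveness"], step2.params["fairness"], mediated)
```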
Discussion

This study examined how three attributes of feedback affected individuals' responses to CPM as well as their performance and satisfaction. The results demonstrate feedback attributes directly affect the perceived fairness of CPM and indirectly affect performance and satisfaction. Consistent with expectations, monitoring fairness mediated the relationship between Constructiveness and Medium and performance and satisfaction. Perceptions of monitoring fairness significantly influenced
participants' performance and satisfaction. Monitoring fairness was associated with higher quantity and higher quality performance as well as greater satisfaction with the job.

The results also support the expectation that feedback attributes will affect the perceived fairness of monitoring. Two of the three feedback attributes affected participants' perceptions of monitoring fairness. Feedback constructiveness significantly and positively influenced participants' perceptions of monitoring fairness. Participants felt monitoring was fairer when they received constructive feedback than when they received destructive feedback. Results also supported the expectation that feedback medium would influence participants' fairness judgments. However, contrary to the hypothesis, there was no significant Medium × Constructiveness interaction. Rather, Medium and Constructiveness both exerted significant main effects on monitoring fairness. Perceptions of monitoring fairness were higher among participants who received face-to-face feedback than among those who received computer-mediated feedback.

The main effect for Medium is important for our understanding of the design of CPM systems. This finding demonstrates the benefits of supplementing CPM with supervisor-provided feedback. Previous conceptual research suggested that supplementing CPM with supervisor-mediated feedback may enhance employee reactions to monitoring (Alder & Tompkins, 1997). However, we predicted that the enhancing effect of face-to-face feedback would come at significant risk because destructive supervisor feedback would give rise to strong negative reactions. The results suggest that this risk associated with supervisor feedback does not exist. Providing individuals with supervisor-mediated feedback enhances the perceived fairness of both constructive and destructive feedback. One reason for the overall positive effect of face-to-face feedback on the perceived fairness of monitoring may lie in the potential for the face-to-face interaction to provide employees an opportunity for voice. For example, Greenberg (1986) demonstrated that giving recipients an opportunity to respond to performance feedback and appraisal enhances perceptions of fairness. In a CPM setting, face-to-face feedback may provide such an opportunity. For example, in the Supervisor medium conditions participants often responded to the feedback either by offering an explanation for their performance (e.g., "sorry, I am just a slow typist") or by expressing their disapproval of the feedback.

Although consistent with conceptual work on monitoring, the Medium main effect runs counter to previous feedback research. Previous feedback research indicated that individuals more readily solicit computer-mediated feedback than supervisor-mediated feedback (Kluger & Adler, 1993). Other research indicates that computer-mediated feedback may be more trusted than supervisor-mediated feedback (Earley, 1988). However, this study
indicates that supervisor-mediated feedback may enhance the perceived fairness of CPM and that CPM fairness is positively associated with performance and job satisfaction. This discrepancy may be due to differences in the dependent variables of interest. Kluger and Adler (1993) focused on the effects of supervisor versus computer feedback on feedback seeking. Earley (1988) assessed the relationship between feedback medium and trust in the feedback. In contrast, our study was concerned with the relationship between feedback medium and fairness.

Alternatively, this divergence from previous feedback research may reflect broader, societal-level developments. The explosion of technology in the last decade and a half has resulted in less and less face-to-face interaction. Referring to this development, Hallowell (1999) speaks of the human moment and argues that people in organizations need face-to-face communication. To the extent that face-to-face, supervisor-provided feedback helps satisfy this need, it may address a gap that was not as strong when Earley (1988) and Kluger and Adler (1993) conducted their research. As a result, the relative effects of face-to-face vis-à-vis computerized feedback may have changed in recent years such that feedback recipients now respond more positively to face-to-face feedback than to computerized feedback.

Although the results did not reveal the predicted interaction between Medium and Constructiveness, the additive effect of the two main effects demonstrates that, consistent with our expectations, the highest perceptions of fairness are associated with face-to-face, constructive feedback. However, contrary to our expectations, these effects combine such that the lowest perceptions are associated with computerized, destructive feedback. On the surface, these results suggest that monitoring organizations may obtain optimal results by complementing CPM with face-to-face feedback from supervisors. However, there remains one issue that warrants note. Constructiveness has a stronger effect on the perceived fairness of monitoring than does Medium (as demonstrated by the larger coefficient in Table 2). Indeed, post hoc tests indicate face-to-face destructive feedback was associated with significantly lower judgments of fairness than was computerized constructive feedback (M = 3.30 and 4.02, p < .05). Thus, if an organization's supervisors tend to give destructive feedback, computerized feedback may appear to be the best alternative. Organizations can guarantee that computerized feedback is constructive. However, they relinquish control over the constructiveness of feedback when their supervisors are the feedback medium. Although some organizations may be tempted to reduce the risk associated with destructive feedback by eliminating the supervisor from the feedback loop, the results of this study indicate that this may only guarantee middle-of-the-road outcomes. It is constructive feedback in addition to face-to-face feedback that
provides the most positive perceptions of CPM and, consequently, more positive performance and attitudes. Accordingly, we encourage organizations to train supervisors to provide constructive feedback and to keep supervisors in the monitoring process.

In contrast to our expectations regarding the effects of Medium and Constructiveness on monitoring fairness, giving participants control over feedback did not enhance their assessments of the fairness of monitoring. Although our findings are consistent with those of Douthitt and Aiello (2001), they are contrary to our expectations because justice research consistently demonstrates a positive relationship between control and fairness perceptions.

One explanation for this lack of significance may lie with our conceptualization and operationalization of control. Some previous research shows that control (e.g., turning monitoring off or on) affects CPM fairness perceptions (Stanton & Barnes-Farrell, 1996). However, Douthitt and Aiello (2001) showed that different types of control had different effects on fairness perceptions. They found that input into the work process more generally (e.g., how participants felt performance should be evaluated) influenced perceptions of fairness, whereas control over monitoring did not. It may be that the narrow outcome participants were able to control in this study (i.e., the frequency of feedback rather than overall evaluations) limited its effect. Control over the frequency of feedback may not have been sufficiently meaningful to the participants. Although we argued that organizations might be reluctant to give employees control over monitoring itself, an alternative might be to provide control regarding broader issues. The opportunity to express an opinion about a larger issue may have a more profound effect than the ability to exercise greater control over a smaller one. This suggestion is consistent with CPM research indicating that employees perceive monitoring as fairer when they have input into the process (Aiello & Kolb, 1995; Alge, 2001; Douthitt & Aiello, 2001).

Alternatively, the lack of effect for control may be due to the fact that participants had to ask for feedback. Larson (1989) suggests that employees will be less likely to inquire about how others evaluate their performance when the nature of their work makes this strategy time consuming. Thus, the positive effects of control may be diminished to the extent that participants perceive it is costly (in terms of time or effort) to ask for feedback. However, virtually no time or effort was required for participants to request feedback in this study. They simply had to click a button on their display terminal and could receive feedback without leaving their workstation or interrupting their work. Indeed, this may be the most efficient and least burdensome way to give employees control over feedback in a computer-monitored environment.
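To give a concrete sense of how lightweight such a feedback-on-demand mechanism can be, the sketch below illustrates one way a request of this kind might be handled. It is a hypothetical illustration rather than the software used in this study; the class, function, and field names are ours, and the message text simply mirrors the templates in Appendix A.

```python
# Hypothetical sketch of a feedback-on-demand handler (not the software
# used in this study): the monitored worker clicks a button and receives
# a progress summary immediately, without leaving the task.
from dataclasses import dataclass


@dataclass
class MonitoringRecord:
    orders_entered: int      # orders completed so far
    minutes_elapsed: float   # time worked so far, in minutes
    session_minutes: float   # planned session length (e.g., 120 for 2 h)


def on_feedback_request(record: MonitoringRecord) -> str:
    """Build a short progress message from the current monitoring data."""
    rate = record.orders_entered / max(record.minutes_elapsed, 1.0)
    projected_total = round(rate * record.session_minutes)
    return (f"You have entered {record.orders_entered} orders. "
            f"At your current rate you would enter a total of "
            f"{projected_total} orders.")


# Example: 40 orders after 60 minutes of a 2-h session.
print(on_feedback_request(MonitoringRecord(40, 60.0, 120.0)))
```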
Finally, it may be that the duration of the task reduced the observed impact of control on fairness judgments. That is, there may not be a large difference in the amount of feedback individuals consider fair over a 2-h period even though there may be significant differences over a longer period of time (e.g., an 8-h shift, a week, etc.). Averill (1973) argued that even when control has little or no effect on short-term stress reactions, it may facilitate long-term adaptation to stress. Similarly, control over feedback may influence fairness on tasks that last longer than the task session in this study.

As a whole, the results of this study offer important contributions to monitoring and feedback research. First, the study identifies feedback attributes as important antecedents of monitoring fairness. Feedback Constructiveness and Medium both affected perceptions of monitoring fairness. Second, the study extends and provides empirical support for conceptual research on CPM that suggests fairness perceptions influence reactions to CPM. In this study, monitoring fairness was positively related to monitored participants’ satisfaction. Monitoring fairness also enhanced performance. As fairness increased, so too did the number of orders participants completed. Notably, this increased productivity was accompanied by a simultaneous increase in performance quality: participants who felt the monitoring was fair both entered more orders and made fewer errors. This result suggests that to the extent monitoring organizations utilize CPM in ways that cultivate perceptions of fairness, they may have more satisfied and better performing employees.

As with all research, this study has some limitations. First, we assessed participants’ perceptions of monitoring fairness and satisfaction with the same instrument. This raises the issue of common method variance. However, the results of our confirmatory factor analysis suggest that participants discriminated between these two constructs and thus mitigate this concern. Second, we have suggested that monitoring fairness mediates the relationship between feedback attributes and satisfaction and performance. However, the simultaneous collection of the data does not allow us to rule out the possibility that satisfaction and performance precede perceptions of monitoring fairness. Third, the reliability of our satisfaction measure (.62) falls below conventional standards (i.e., .70, as recommended by Nunnally, 1978). On the one hand, this low reliability raises questions regarding the interpretation of the satisfaction measure and concerns regarding construct validity. We measured participants’ satisfaction with the job satisfaction measure from the Michigan Organizational Assessment Questionnaire (Cammann et al., 1979). Use of this scale required participants to equate the term “job” in the original items with the task they performed. Although this assumption is reasonable within the context of our study, the required extension
may have lowered the scale’s reliability. On the other hand, measurement error in a dependent variable reduces explained variance but does not bias coefficient estimates. As a result, the low reliability of our satisfaction scale likely produced a conservative test of the relationship between monitoring fairness and satisfaction. Nonetheless, the results for the job satisfaction measure should be considered in light of the measure’s reliability.

Finally, although the use of a laboratory simulation is a standard paradigm for CPM research (Aiello & Kolb, 1995; Aiello & Svec, 1993; Griffith, 1993; Nebeker & Tatum, 1993; Stanton & Barnes-Farrell, 1996), it is also a limitation, and concerns about external validity warrant attention. It is possible that different results might emerge if this study were conducted in an organizational setting. We attempted to address some of these concerns in the choice of our participants and task. College undergraduates who need part-time work to finance their education constitute a substantial portion of the work force in industries where monitoring dominates (e.g., telemarketing, data entry). Additionally, our study utilized a task comparable to that used in CPM environments. Participants in previous monitoring research have performed tasks such as solving anagrams and entering a series of six-digit numbers from a work sheet into a computer database. Finally, in contrast to previous studies, tangible outcomes associated with participant performance were used to enhance participants’ perceptions of the importance of the task situation. Clearly, however, field research may help further establish the generalizability of the results obtained here. Future research may seek to replicate our study in an intact organization.

This study suggests several additional potential avenues for future research efforts. Feedback is a complex phenomenon that may vary in a number of ways. Although constructiveness, medium, and amount are critical feedback attributes, other variations in the feedback process may influence fairness perceptions and, consequently, behavioral and attitudinal reactions. Thus, future research should examine the relationship between variations in other aspects of feedback and reactions to CPM. For example, all participants in our study received accurate negative feedback. Future research could extend this study by examining the effects of our three feedback attributes on monitored individuals’ reactions to positive feedback. Additionally, empirical research (Greenberg, 1986) and theoretical arguments by both justice (Leventhal, 1980) and monitoring researchers (Ambrose & Alder, 2000; Hawk, 1994) suggest that giving individuals the chance to respond to monitoring feedback should enhance fairness judgments and reactions to CPM. We argued above that implicit opportunities to respond may have enhanced the perceived fairness of supervisor-provided feedback. Researchers should explore the relationship between explicit opportunities to respond to monitoring-related feedback and reactions to CPM.
This paper examines the effects of feedback attributes and monitoring fairness on performance and satisfaction. Future research could explore the relationship between fairness and additional outcomes. For example, CPM research documents the potentially stressful effects of CPM (e.g., Smith, Carayon, Sanders, & LeGrande, 1992). Recent research demonstrates that increased fairness is associated with lower levels of job strain and job stress and with more positive health outcomes (Elovainio et al., 2001; Elovainio, Kivimaki, Eccles, & Sinervo, 2002; Elovainio, Kivimaki, & Vahtera, 2002; Elovainio, Kivimaki, Vahtera, Keltikangas-Jaervinen, & Virtanen, 2003; Tepper, 2001). Thus, it seems plausible that fairness perceptions might help mitigate stress associated with CPM.

Although this study enables us to draw conclusions about the effects of variations in the application of CPM technology, it does not enable us to draw new conclusions about computer monitoring relative to other forms of monitoring. Thus, future empirical research should assess the effect of CPM relative to other forms of monitoring. Along these lines, Stanton (2000b) found differences in perceptions of monitoring control and monitoring justification between electronically monitored and traditionally monitored work environments. Future research could examine the interaction between feedback dimensions and type of monitoring. For example, although the effect of control over feedback was not significant in this study, it may be that control over the frequency of feedback is a significant determinant of fairness when monitoring occurs in person rather than by computer.

Finally, the experimental design employed in this study controlled for individual differences. However, future research could explicitly examine the effect of individual differences on reactions to CPM and to our three dimensions of CPM-based feedback. For example, individuals differ in the amount and frequency of feedback they desire (Fedor, 1991; Northcraft & Ashford, 1990). These differences may also influence individuals’ desire to control the frequency of that feedback. Indeed, reviews of the work control research suggest that the effects of control may depend on a number of dispositional variables, including individuals’ desire for control, locus of control, and negative affectivity (Ganster & Fusilier, 1989; Terry & Jimmieson, 1999).

There are proponents and opponents of CPM. Although these groups may disagree on the benefits and costs of computer monitoring, it is clear that CPM systems are here to stay. Thus, we should focus not on whether monitoring is productive or counterproductive but rather on how organizations may best integrate CPM into the overall performance management process. This study begins to answer that question by drawing on research on feedback and fairness. The results suggest that careful attention to the feedback given to employees in connection with CPM may enhance fairness
perceptions associated with CPM and, as a result, affect both performance and satisfaction.
Appendix A. Constructive and destructive feedback messages

Constructive feedback messages

1. You have entered __ orders. At your current rate you would enter a total of __ orders and earn __ lottery tickets. You can shoot for more.
2. You have entered __ orders. At your current rate you would enter a total of __ orders and earn __ lottery tickets. Although, it may seem like the extra credit you are getting for this project is not enough, your help really is appreciated.
3. You have entered __ orders. At your current rate you would enter a total of __ orders and earn __ lottery tickets. You can do better.
4. You have entered __ orders. At your current rate you would enter a total of __ orders and earn __ lottery tickets. Data entry can be quite tedious but we really need your best effort anyway.
5. You have entered __ orders. At your current rate you would enter a total of __ orders and earn __ lottery tickets. Others are working fast enough to earn more. You can too.
6. You have entered __ orders. At your current rate you would enter a total of __ orders and earn __ lottery tickets. We understand it is hard to do the same task for 2 h but you only need to push it for a little while longer.
7. You have entered __ orders. At your current rate you would enter a total of __ orders and earn __ lottery tickets. You can improve on these numbers.
8. You have entered __ orders. At your current rate you would enter a total of __ orders and earn __ lottery tickets. Although data entry can be harder than it appears, you really can do better.

Destructive feedback messages

1. At this rate, you will only receive __ lottery tickets. Maybe you simply don’t have the ability to do this job well.
2. Others are working faster than you. It is obvious that they care more about the project and will have a much better chance at the lottery than you will.
3. You are still working too slow. Clearly, we should not use you again. At this rate, you might as well forget about the lottery.
4. At this rate, you will only receive __ lottery tickets. Apparently, you aren’t even trying.
5. At this rate, you would get __ lottery tickets. Why do you insist on working so slow?
6. The way you are working, it is clear that you are not very good at data entry even though it really is pretty easy to do.
7. At this rate, you will only receive __ lottery tickets. Either you are lazy or simply do not care about this job.
8. Your productivity is too low. You must increase your productivity or we may have to pull you and offer the extra credit to someone else.
9. You are only working fast enough to get __ lottery tickets. Obviously you were not cut out for data entry work.
10. You are working considerably slower than others on this project. It really is unfair that some people like you think they should get the same amount of credit as others without doing their share of the work.
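As a purely illustrative companion to the templates above, the following sketch shows how a CPM system could fill in the blanks from monitored counts and select a message pool by experimental condition. It is not the software used in this study; the template subset and the orders-per-ticket conversion are assumptions made for the example. The point it illustrates is the one made in the discussion: with computer-mediated feedback, the organization retains full control over whether the message is constructive.

```python
# Hypothetical sketch (not the study's software) of filling the template
# blanks from monitoring data and selecting a message pool by condition.
# ORDERS_PER_TICKET is an assumed parameter, not a value reported here.
import random

CONSTRUCTIVE = [
    "You have entered {orders} orders. At your current rate you would "
    "enter a total of {projected} orders and earn {tickets} lottery "
    "tickets. You can shoot for more.",
    "You have entered {orders} orders. At your current rate you would "
    "enter a total of {projected} orders and earn {tickets} lottery "
    "tickets. You can do better.",
]

DESTRUCTIVE = [
    "At this rate, you will only receive {tickets} lottery tickets. "
    "Apparently, you aren't even trying.",
]

ORDERS_PER_TICKET = 10  # assumed conversion rate


def feedback_message(condition: str, orders: int, minutes_elapsed: float,
                     session_minutes: float) -> str:
    """Pick a template for the given condition and fill in its blanks."""
    projected = round(orders / max(minutes_elapsed, 1.0) * session_minutes)
    tickets = projected // ORDERS_PER_TICKET
    pool = CONSTRUCTIVE if condition == "constructive" else DESTRUCTIVE
    return random.choice(pool).format(orders=orders, projected=projected,
                                      tickets=tickets)


# Example: a constructive message for 40 orders after 60 of 120 minutes.
print(feedback_message("constructive", 40, 60.0, 120.0))
```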
References

Aiello, J. R. (1993). Computer-based work monitoring: Electronic surveillance and its effects. Journal of Applied Social Psychology, 23, 499–507.
Aiello, J. R., & Kolb, K. J. (1995). Electronic performance monitoring and social context: Impact on productivity and stress. Journal of Applied Psychology, 80, 339–353.
Aiello, J. R., & Shao, Y. (1992). Effects of computer monitoring on task performance. Paper presented as part of “Computerized Performance Monitoring: Its impact on employees, supervisors, and organizations” (John R. Aiello, Chair). A symposium at the seventh annual conference of the Society for Industrial and Organizational Psychology.
Aiello, J. R., & Svec, C. M. (1993). Computer monitoring of work performance: Extending the social facilitation framework to electronic presence. Journal of Applied Social Psychology, 23, 537–548.
Alder, G. S. (1998). Ethical issues in electronic performance monitoring: A consideration of deontological and teleological perspectives. Journal of Business Ethics, 17, 729–743.
Alder, G. S., & Tompkins, P. K. (1997). Electronic performance monitoring: An organizational justice and concertive control perspective. Management Communication Quarterly, 10, 259–288.
Alge, B. J. (2001). The effects of computer surveillance on perceptions of privacy and procedural fairness. Journal of Applied Psychology, 86, 797–804.
Ambrose, M. L., & Alder, G. S. (2000). Designing, implementing, and utilizing computer performance monitoring: Enhancing organizational justice. In G. R. Ferris (Ed.), Research in personnel and human resource management (Vol. 18, pp. 187–219). Greenwich, CT: JAI Press.
Ambrose, M. L., & Kulik, C. T. (1994). The effect of information format and performance pattern on performance appraisal judgments in a computerized performance monitoring context. Journal of Applied Social Psychology, 24, 801–823.
American Management Association (2000). Workplace testing and monitoring. New York: Author.
Amick, B. C., & Smith, M. J. (1992). Stress, computer-based work monitoring and measurement systems: A conceptual overview. Applied Ergonomics, 23, 6–16.
Angel, N. F. (1989). Personnel Administrator, 67–72.
Ashford, S. J. (1986). Feedback-seeking in individual adaptation: A resource perspective. Academy of Management Journal, 29, 465–487.
Ashford, S. J., & Cummings, L. L. (1983). Feedback as an individual resource: Personal strategies of creating information. Organizational Behavior and Human Performance, 32, 370–398.
Attewell, P. (1987). Big Brother and the sweatshop: Computer surveillance in the automatic office. Sociological Theory, 5, 87–99.
Averill, J. R. (1973). Personal control over aversive stimuli and its relationship to stress. Psychological Bulletin, 80, 286–303.
Bannister, B. D. (1986). Performance outcome feedback and attributional feedback: Interactive effects on recipient responses. Journal of Applied Psychology, 71, 203–210.
Baron, R. A. (1988). Negative effects of destructive criticism: Impact on conflict, self-efficacy, and task performance. Journal of Applied Psychology, 73, 199–207.
Baron, R. A. (1993). Criticism (informal negative feedback) as a source of perceived unfairness in organizations: Effects, mechanisms, and countermeasures. In R. Cropanzano (Ed.), Justice in the workplace (pp. 150–170). Hillsdale, NJ: Lawrence Erlbaum Associates.
Baron, R. M., & Kenny, D. A. (1986). The moderator-mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology, 51, 1173–1182.
Bies, R. J., & Moag, J. S. (1986). Interactional justice: Communicative criteria of fairness. In R. J. Lewicki, B. H. Sheppard, & B. H. Bozeman (Eds.), Research on negotiation in organizations (pp. 43–55). Greenwich, CT: JAI Press.
Bies, R., & Shapiro, D. (1988). Voice and justification: Their influence on procedural fairness judgments. Academy of Management Journal, 31, 676–685.
Bies, R. J., Shapiro, D. L., & Cummings, L. L. (1988). Causal accounts and managing organizational conflict: Is it enough to say it’s not my fault? Communication Research, 15, 381–399.
Botan, C. (1996). Communication work and electronic surveillance: A model for predicting panoptic effects. Communication Monographs, 6, 293–313.
Brett, J. M. (1986). Commentary on procedural justice papers. In R. Lewicki, M. Bazerman, & B. Sheppard (Eds.), Research on negotiation in organizations (Vol. 1, pp. 81–90). Greenwich, CT: JAI Press.
Cammann, C., Fichman, M., Jenkins, D., & Klesh, J. (1979). The Michigan Organizational Assessment Questionnaire. Unpublished manuscript, University of Michigan, Ann Arbor.
Carayon, P. (1993). Effects of electronic performance monitoring on job design and worker stress: Review of the literature and conceptual model. Human Factors, 35, 385–395.
Carver, C. S., & Scheier, M. F. (1981). The self-attention-induced feedback loop and social facilitation. Journal of Experimental Social Psychology, 17, 545–568.
Chalykoff, J., & Kochan, T. A. (1989). Computer-aided monitoring: Its influence on employee job satisfaction and turnover. Personnel Psychology, 42, 807–834.
Chhokar, J. S., & Wallin, J. A. (1984). A field study of the effect of feedback frequency on performance. Journal of Applied Psychology, 69, 524–530.
Cohen-Charash, Y., & Spector, P. E. (2001). The role of justice in organizations: A meta-analysis. Organizational Behavior and Human Decision Processes, 86, 278–324.
Collins, L. M., Graham, J. W., & Flaherty, B. P. (1998). An alternative framework for defining mediation. Multivariate Behavioral Research, 33, 295–312.
Colquitt, J. A., Conlon, D. E., Wesson, M. J., Porter, C., & Ng, K. Y. (2001). Justice at the millennium: A meta-analytic review of 25 years of organizational justice research. Journal of Applied Psychology, 86, 425–445.
Cropanzano, R., Byrne, Z. S., Bobocel, D. R., & Rupp, D. R. (2001). Moral virtues, fairness heuristics, social entities, and other denizens of organizational justice. Journal of Vocational Behavior, 58, 164–209.
Cusella, L. P. (1982). The effects of source expertise and feedback valence on intrinsic motivation. Human Communication Research, 9, 17–32.
Cusella, L. P. (1987). Feedback motivation and performance. In F. M. Jablin, L. L. Putnam, K. H. Roberts, & L. W. Porter (Eds.), Handbook of organizational communication (pp. 624–678). Newbury Park, CA: Sage.
Daft, R. L., & Lengel, R. H. (1984). Information richness: A new approach to managerial information processing and organizational design. In B. Staw & L. L. Cummings (Eds.), Research in organizational behavior (pp. 191–233). Greenwich, CT: JAI Press.
Daft, R. L., & Lengel, R. H. (1986). Organizational information requirements, media richness, and structural design. Management Science, 32, 554–571.
DeTienne, K. B. (1993). Big brother or friendly coach. Computer monitoring in the 21st century. The Futurist, 27, 33–37.
DeTienne, K. B., & Abbott, N. T. (1993). Developing an employee centered electronic monitoring system. Journal of Systems Management, 44, 12–15.
DiTecco, D., Cwitco, G., Arsenault, A., & Andre, M. (1992). Operating stress and monitoring practices. Applied Ergonomics, 23, 29–34.
Douthitt, E. A., & Aiello, J. R. (2001). The role of participation and control in the effects of computer monitoring on fairness perceptions, task satisfaction, and performance. Journal of Applied Psychology, 86, 867–874.
Earley, P. C. (1988). Computer generated performance feedback in the magazine subscription industry. Organizational Behavior and Human Decision Processes, 41, 50–64.
Edwards, J. R. (2003). Construct validation in organizational behavior research. In J. Greenberg (Ed.), Organizational behavior: The state of the science (2nd ed., pp. 327–371). Mahwah, NJ: Erlbaum.
Elovainio, M., Kivimaki, M., & Helkama, K. (2001). Organizational justice evaluation, job control and occupational strain. Journal of Applied Psychology, 86, 418–424.
Elovainio, M., Kivimaki, M., Eccles, M., & Sinervo, T. (2002). Team climate and procedural justice as predictors of occupational strain. Journal of Applied Social Psychology, 32, 359–374.
Elovainio, M., Kivimaki, M., & Vahtera, J. (2002). Organizational justice: Evidence of a new psychosocial predictor of health. American Journal of Public Health, 92, 105–108.
Elovainio, M., Kivimaki, M., Vahtera, J., Keltikangas-Jaervinen, L., & Virtanen, M. (2003). Sleeping problems and health behaviors as mediators between organizational justice and health. Health Psychology, 22, 287–293.
Fedor, D. B. (1991). Recipient responses to performance feedback: A proposed model and its implications. Research in Personnel and Human Resources Management, 9, 73–120.
Flanagan, J. (1994). Restricting electronic monitoring in the private workplace. Duke Law Journal, 43, 1256–1281.
Folger, R., & Bies, R. J. (1989). Managerial responsibilities and procedural justice. Employee Responsibilities and Rights Journal, 2, 79–89.
Frey, B. S. (1993). Does monitoring increase work effort? The rivalry with trust and loyalty. Economic Inquiry, 31, 663–670.
Fried, Y., & Ferris, G. R. (1987). The validity of the Job Characteristics Model: A review and meta-analysis. Personnel Psychology, 40, 287–322.
Ganster, D. C., & Fusilier, M. R. (1989). Control in the workplace. In C. L. Cooper & I. T. Robertson (Eds.), International review of industrial and organizational psychology (Vol. 4, pp. 235–280). Chichester, UK: Wiley.
Grant, R., & Higgins, C. (1989). Monitoring service workers via computer: The effect on employees, productivity, and service. National Productivity Review, 8, 101–112.
Greenberg, J. (1986). Determinants of perceived fairness of performance evaluation. Journal of Applied Psychology, 71, 340–342.
Greenberg, J. (1990). Looking fair vs. being fair: Managing impressions of organizational justice. In B. M. Staw & L. L. Cummings (Eds.), Research in organizational behavior (Vol. 12, pp. 111–157). Greenwich, CT: JAI Press.
Greenberg, J. (1994). Using socially fair treatment to promote acceptance of a worksite smoking ban. Journal of Applied Psychology, 79, 288–297.
Greenberger, D. B., & Strasser, S. (1986). Development and application of a model of personal control in organizations. Academy of Management Review, 11, 164–177.
Greller, M. M. (1980). Evaluation of feedback sources as a function of role and organizational level. Journal of Applied Psychology, 65, 24–27.
Griffith, T. L. (1993). Monitoring and performance: A comparison of computer and supervisor monitoring. Journal of Applied Social Psychology, 23, 549–572.
Hackman, J. R., & Oldham, G. R. (1980). Work redesign. Reading, MA: Addison-Wesley.
Hallowell, E. M. (1999). The human moment at work. Harvard Business Review, 77, 58–65.
Hawk, S. (1994). The effects of computerized performance monitoring: An ethical perspective. Journal of Business Ethics, 13, 949–957.
Hays, S. (1999, October). To snoop or not to snoop? Workforce, 78(10), 136.
Herold, D. M., Liden, R. C., & Leatherwood, M. L. (1987). Using multiple attributes to assess sources of performance feedback. Academy of Management Journal, 30, 826–835.
Hu, L., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling, 6, 1–55.
Huber, G. P., & Daft, R. L. (1987). The information environments of organizations. In F. M. Jablin, L. L. Putnam, K. H. Roberts, & L. W. Porter (Eds.), Handbook of organizational communication (pp. 130–164). Newbury Park, CA: Sage.
Ilgen, D. R., Fisher, C. D., & Taylor, M. S. (1979). Consequences of individual feedback on behavior in organizations. Journal of Applied Psychology, 64, 349–371.
Irving, R. H., Higgins, C. A., & Safayeni, F. R. (1986). Computerized performance monitors: Use and abuse. Communications of the ACM, 29, 794–801.
Jenero, K. A., & Mapes-Riordan, L. D. (1992). Electronic monitoring of employees and the elusive right to privacy. Employee Relations Law Journal, 18, 71–102.
Kelly, G. A. (1955). The psychology of personal constructs. New York: Norton.
Kidwell, R. E., Jr., & Bennett, N. (1994). Employee reactions to electronic control systems: The role of procedural fairness. Group and Organization Management, 19, 203–218.
Klein, H. J. (1989). An integrated control theory model of work motivation. Academy of Management Review, 14, 150–172.
Kluger, A. N., & Adler, N. A. (1993). Person versus computer-mediated feedback. Computers in Human Behavior, 9, 1–16.
Komaki, J. L. (1986). Toward effective supervision: An operant analysis and comparison of managers at work. Journal of Applied Psychology, 71, 270–279.
Kovach, K., Conner, S., Livneh, K., Scallan, K., & Schwartz, R. (2000). Electronic communication in the workplace—something’s got to give. Business Horizons, 43(4), 59.
Larson, J. R. (1989). The dynamic interplay between employees’ feedback-seeking strategies and supervisors’ delivery of performance feedback. Academy of Management Review, 14, 408–422.
Lefcourt, H. M. (1973). The function of the illusion of control and freedom. American Psychologist, 28, 417–425.
Leventhal, G. S. (1980). What should be done with equity theory? In K. J. Gergen, M. S. Greenberg, & R. H. Willis (Eds.), Social exchange: Advances in theory and research (pp. 27–55). New York: Plenum.
Lind, E. A. (2001). Fairness heuristic theory: Justice judgments as pivotal cognitions in organizational relations. In J. Greenberg & R. Cropanzano (Eds.), Advances in organizational justice (pp. 56–88). Stanford, CA: Stanford University Press.
Lind, E. A., Kanfer, R., & Earley, P. C. (1990). Voice, control, and procedural justice: Instrumental and non-instrumental concerns in fairness judgments. Journal of Personality and Social Psychology, 59, 952–959.
Lind, E. A., Kulik, C. T., Ambrose, M. L., & Park, M. V. (1993). Individual and corporate dispute resolution: Using procedural fairness as a decision heuristic. Administrative Science Quarterly, 38, 224–251.
Lind, E. A., Kurtz, S., Musante, L., Walker, L., & Thibaut, J. (1980). Procedure and outcome effects on reactions to adjudicated resolutions of conflicts of interest. Journal of Personality and Social Psychology, 39, 643–653.
Lind, E. A., & Tyler, T. R. (1988). The social psychology of procedural justice. New York: Plenum.
Locke, E. A., Shaw, K. N., Saari, L. M., & Latham, G. P. (1981). Goal setting and task performance: 1969–1980. Psychological Bulletin, 90, 125–152.
Luthans, F., & Kreitner, R. (1985). Organizational behavior modification and beyond. Glenview, IL: Scott Foresman.
MacKinnon, D. P., Lockwood, C. M., Hoffman, J. M., West, S. G., & Sheets, V. (2002). A comparison of methods to test mediation and other intervening variable effects. Psychological Methods, 7, 83–104.
Nebeker, D. M., & Tatum, C. B. (1993). The effects of computer monitoring, standards, and rewards on work performance, job satisfaction, and stress. Journal of Applied Social Psychology, 23, 508–536.
Niehoff, B. P., & Moorman, R. H. (1996). The influence of job type and position tenure on performance monitoring and workplace justice: An application of agency theory to supervisor–subordinate relationships. Paper presented at the annual meetings of the Academy of Management, Boston, MA.
Nine to Five, Working Women Education Fund (1990). Stories of mistrust and manipulation: The electronic monitoring of the American work force. Cleveland, OH: Author.
Northcraft, G. B., & Ashford, S. J. (1990). The preservation of self in everyday life: The effects of performance expectations and feedback context on feedback inquiry. Organizational Behavior and Human Decision Processes, 47, 42–65.
Nunnally, J. C. (1978). Psychometric theory (2nd ed.). New York: McGraw-Hill.
Nussbaum, K., & duRivage, V. (1986). Computer monitoring: Mismanagement by remote control. Business and Society Review, 56, 16–20.
Parenti, C. (2001). Big brother’s corporate cousin. The Nation, 273(5), 26–31.
Podsakoff, P. M., & Farh, J. L. (1989). Effects of feedback sign and credibility on goal setting and task performance. Organizational Behavior and Human Decision Processes, 44, 45–67.
Schumacker, R. E., & Lomax, R. G. (1996). A beginner’s guide to structural equation modeling. Mahwah, NJ: Lawrence Erlbaum Associates.
Seashore, S., Lawler, E., Mirvis, P., & Cammann, C. (1982). Observing and measuring organizational change: A guide to field practice. New York: Wiley.
Shapiro, D. L., & Brett, J. M. (1993). Comparing three processes underlying judgments of procedural justice: A field study of mediation and arbitration. Journal of Personality and Social Psychology, 65, 1167–1177.
Shapiro, D. L., Buttner, E. H., & Barry, B. (1994). Explanations for rejection decisions: What factors enhance their perceived adequacy and moderate their enhancement of justice perceptions? Organizational Behavior and Human Decision Processes, 58, 346–368.
Smith, M. J., Carayon, P., Sanders, K. J., & LeGrande, D. (1992). Employee stress and health complaints in jobs with and without electronic performance monitoring. Applied Ergonomics, 23, 17–27.
Stanton, J. M. (2000a). Reactions to employee performance monitoring: Framework, review, and research directions. Human Performance, 13, 85–113.
Stanton, J. M. (2000b). Traditional and electronic monitoring from an organizational justice perspective. Journal of Business and Psychology, 15, 129–147.
Stanton, J. M., & Barnes-Farrell, J. L. (1996). Effects of electronic performance monitoring on personal control, task satisfaction, and task performance. Journal of Applied Psychology, 81, 738–745.
Stanton, J. M., & Weiss, E. M. (2000). Electronic monitoring in their own words: An exploratory study of employees’ experiences with new types of surveillance. Computers in Human Behavior, 16, 423–440.
Taylor, M. S., Fisher, C. D., & Ilgen, D. R. (1984). Individuals’ reactions to performance feedback in organizations: A control theory perspective. In K. M. Rowland & G. R. Ferris (Eds.), Research in personnel and human resources management (pp. 81–124). Greenwich, CT: JAI Press.
Tepper, B. J. (2001). Health consequences of organizational injustice: Tests of main and interactive effects. Organizational Behavior and Human Decision Processes, 86, 197–215.
Terry, D. J., & Jimmieson, N. L. (1999). Work control and employee well-being: A decade review. In C. L. Cooper & I. T. Robertson (Eds.), International review of industrial and organizational psychology (Vol. 14, pp. 95–148). Chichester, UK: Wiley.
Thibaut, J., & Walker, L. (1975). Procedural justice: A psychological analysis. Hillsdale, NJ: Lawrence Erlbaum Associates.
Tyler, T. R. (1989). The psychology of procedural justice: A test of the group-value model. Journal of Personality and Social Psychology, 57, 830–838.
Tyler, T. R. (1999). Why people cooperate with organizations. Research in Organizational Behavior, 21, 201–246.
Tyler, T. R., & Bies, R. J. (1990). Beyond formal procedures: The interpersonal context of procedural justice. In J. S. Carroll (Ed.), Applied social psychology and organizational settings (pp. 77–98). Hillsdale, NJ: Lawrence Erlbaum Associates.
Tyler, T. R., & Blader, S. L. (2000). Cooperation in groups: Procedural justice, social identity, and behavioral engagement. Philadelphia, PA: Psychology Press.
Tyler, T. R., & Blader, S. L. (2002). Autonomous vs. comparative status: Must we be better than others to feel good about ourselves? Organizational Behavior and Human Decision Processes, 89, 813–838.
Tyler, T. R., Degoey, P., & Smith, H. J. (1996). Understanding why the justice of group procedures matters: A test of the psychological dynamics of the group-value model. Journal of Personality and Social Psychology, 70, 913–920.
Tyler, T. R., & Lind, E. A. (1992). A relational model of authority in groups. Advances in Experimental Social Psychology, 25, 115–191.
US Congress, Office of Technology Assessment (1987). The electronic supervisor: New technology, new tensions (OTA-CIT-333). Washington, DC: U.S. Government Printing Office.
van den Bos, K., Lind, E. A., & Wilke, H. (2001). The psychology of procedural and distributive justice viewed from the perspective of fairness heuristic theory. In R. Cropanzano (Ed.), Justice in the workplace: Vol. 2. From theory to practice. Hillsdale, NJ: Lawrence Erlbaum Associates.
Westin, A. F. (1992). Two key factors that belong in a macroergonomic analysis of electronic monitoring: Employee perceptions of fairness and the climate of organizational trust or distrust. Applied Ergonomics, 23, 35–42.
Williams, E. (1978). Teleconferencing: Social and psychological factors. Journal of Communication, 28(3), 125–131.
Wright, R. A. (1998). Ability perception and cardiovascular response to behavioral challenge. In M. Kofta, G. Weary, & G. Sedek (Eds.), Personal control in action: Cognitive and behavioral mechanisms (pp. 197–232). New York: Plenum.