Psychological climate and decision-making performance in a GDSS context


Reza Barkhi a,*, Yi-Ching Kao b

a Department of Accounting & Information Systems, Pamplin Hall, Virginia Tech, Blacksburg, VA 24061-0101, United States
b Menlo College, CA 94027-4301, United States
* Corresponding author. Tel.: +1 540 231 5869; fax: +1 540 231 2511. E-mail address: [email protected] (R. Barkhi).
doi:10.1016/j.im.2011.02.003

Article history: Received 20 February 2009; accepted 25 September 2010; available online 26 February 2011.

Keywords: Decision-making; Group Decision Support Systems; Context; Data envelopment analysis

Abstract

We studied how psychological climate can influence the performance of members of groups that interact using Group Decision Support Systems (GDSS). Drawing on theories in psychology, we conducted an experiment to examine the impact of psychological climate (the individual's perceptions of the environment) on decision-making performance. Controlling for the settings of the GDSS session, we found that individual performance depended on two dimensions of psychological climate. First, GDSS users perceiving a higher level of psychological safety made more effective and efficient decisions. Second, GDSS users perceiving a higher level of psychological meaningfulness made better decisions if they had a clear understanding of the decision goal. Our study therefore highlighted the importance of individual psychological perceptions in a GDSS context.

© 2011 Elsevier B.V. All rights reserved.

1. Introduction

Group Decision Support Systems (GDSS) are now being used to facilitate the interactions between people engaged in electronic transactions; their primary objective is to improve the efficiency and effectiveness of the group. Numerous studies have examined the effects of GDSS on group outcomes, but their results have been mixed [7]. Some found that GDSS improved decision-making (e.g., [4]), while others showed that GDSS made no impact or impeded the process (e.g., [3]). In the GDSS literature, a contingency model has been used to explain this by noting that performance depends upon the factors defining the decision-making context. The contingency factors studied have included communication modes, decision process structure, group size, task characteristics, and individual capabilities. In our study, we examined contingencies that impact GDSS performance from a psychological perspective, focusing on the constructs of psychological climate [9]. Understanding the role of psychological climate in a GDSS environment could help identify ways to improve decision-making performance [17,19].

Psychological climate is a set of perceptions that describe how an individual cognitively appraises the environment, based on personal experience. Numerous studies have demonstrated that an individual's perception and evaluation of the job environment, in addition to the actual job attributes and work structure, can influence a person's job attitude and performance [10]. In our research, we considered psychological climate as an individual GDSS user's perception of the environment and measured it empirically. We then examined whether these factors, in addition to the arrangements of the GDSS sessions, affected the individual user's decision-making performance.

Psychological climate shapes one's experience of the work context and thus impacts one's work motivation and task performance [16,18]. We focused on two dimensions of psychological climate, psychological safety and psychological meaningfulness, which have been shown to explain the effort and performance of work groups and hence are likely to explain the performance of individuals within a group. Psychological safety refers to the freedom that GDSS users feel in expressing themselves without negative effects on their images or careers (i.e., how safe it is to participate in the decision process). Psychological meaningfulness refers to the reciprocity that GDSS users feel in terms of the returns they receive from their input to the decision-making team using the GDSS (i.e., how meaningful it is to participate in the group decision-making context). We examined how these perceptions could influence a person's decision-making performance under different GDSS session settings. We varied the GDSS session settings by manipulating the communication channel, incentive structure, and group leadership, and we evaluated the performance of an individual decision-maker using various effectiveness and efficiency measures.

2. Theoretical background and research hypotheses

2.1. Psychological climate and work performance


Psychological climate is a term that is generally used to refer to an individual’s perception and interpretation of the workplace.


Research on the effect of psychological climate (an individual rather than a group attribute) has focused on how personal experience and current task factors affect a person's response to the work and its environment; thus, measurement and analysis of psychological climate must be conducted at the individual level. The prior literature has described several models that attempt to predict the effects of psychological climate on work performance [14]. Yet no study has considered psychological climate as one of the factors that determine decision-making performance.

2.2. Research hypotheses

2.2.1. Psychological safety

When the task environment seems trustworthy, secure, and predictable, the user sees it as offering psychological safety; in a GDSS environment, supportive and trusting interpersonal relationships promote it, as they foster an atmosphere that allows people to operate, and even fail, without fearing the consequences. Trust is the expectation that an individual's group will reliably accomplish its tasks without any need to attempt to control its behavior. A sense of security and flexibility of choice is likely to increase work motivation and promote better performance. Group commitment is the degree to which the group members value their group membership and want to continue to belong to the group [8]. With commitment, members tend to try to accomplish the group task and take pleasure in belonging to the group [15]; they also tend to work more closely together to find a mutually agreeable solution. Overall, we expected that GDSS users who perceive a higher level of psychological safety would be more strongly motivated to exert effort. Therefore, we made the following hypothesis:

H1. GDSS users who perceive a higher level of psychological safety will achieve better decision-making performance.

2.2.2. Psychological meaningfulness

Psychological meaningfulness can alter the degree of self-efficacy, which has been found to influence how individuals use computers [5]. People experience meaningfulness when they believe that they can make a difference and feel valued; it reflects how people invest themselves in tasks and roles to satisfy their own needs and related expectations [6]. Its underlying elements are power and influence, as well as a sense of being valued and needed (people look for ways to feel important and special). Therefore, roles that allow people to feel they influence the external world create a sense of caring about a given task and provide a feeling of meaningfulness. To a GDSS user, the perception that one's input can significantly affect the decision process is likely to increase the person's involvement, and the increase in work involvement and task input is in turn positively associated with task performance. Therefore, we expected that GDSS users with a higher level of psychological meaningfulness would perform better in decision-making. Accordingly, we hypothesized:

H2. GDSS users who perceive a higher level of psychological meaningfulness will achieve better decision-making performance.

2.2.3. Moderating effect of goal clarity

A central tenet of Action Regulation Theory is that work behavior is predominantly goal-directed behavior initiated by internal goals set by the individual. A goal regulates the user's actions by guiding his or her attention and effort towards the desired outcome; thus the goal should be clear and specific. When a person is clear about the goals of a task, he or she is more likely to perform the task successfully; as a result, the exerted effort is likely to yield measurable performance improvement. A clear goal also increases a member's commitment and arouses his or her interest in the task. This suggests that goal clarity is an important variable that moderates the effect of psychological meaningfulness. We therefore hypothesized:

[Fig. 1. GDSS context, psychological climate, goal clarity, and GDSS performance. The framework relates the settings of GDSS sessions (communication channel, incentive structure, leadership structure) and the dimensions of psychological climate (psychological safety, psychological meaningfulness), with goal clarity as a moderator, to GDSS performance measures (task reward points, DEA efficiency, satisfaction, time efficiency). The legend distinguishes links existing in the literature and controlled in our study, the links hypothesized in our study, and an uncertain link controlled in our study to explore a possible effect.]


H3. For GDSS users who perceive a higher level of psychological meaningfulness, recognizing their decision goals clearly will enhance their decision-making performance.

Goal clarity may also moderate the performance impact of the other dimension of psychological climate. To account for this, we included an interaction term between goal clarity and psychological safety in our empirical model to explore and control for a possible moderating effect of goal clarity. Fig. 1 shows the framework of our study and illustrates the hypothesized influence of psychological climate on GDSS performance and the impact of different GDSS session types.

2.3. GDSS context

In our experiment, we manipulated three important GDSS context factors to create variations in GDSS environments.

2.3.1. Communication mode

Users of a distributed GDSS (DGDSS) must obviously communicate differently from those of a face-to-face GDSS (FGDSS). According to social presence theory, face-to-face communication has a relatively high level of social presence, allowing nonverbal reactions and attraction, which reduces anxiety and hostility. Media Richness Theory (MRT) also suggests that an FGDSS, with its richer media, is more suitable for decision-making tasks with higher levels of uncertainty and equivocality. Consequently, it should be easier for members of FGDSS groups than of DGDSS groups to develop common goals and reach better final solutions; thus we expected that members of FGDSS groups would outperform those of DGDSS groups.

2.3.2. Incentive structure

Incentive structures for members of GDSS groups can affect the way members cooperate or compete, and either perform well or shirk their responsibilities. In our experiment, we included both group-based and individual-based incentives. The group-based incentive structure rewarded the members equally, based on the performance of the group, encouraging cooperation, while the individual-based incentive structure rewarded each member based on his or her own performance, encouraging an individualistic orientation and making it difficult to converge to a consensus decision [1]. We expected that a GDSS user would perform better under the group-based incentive.

2.3.3. Leadership structure

The final variable that we used to define different GDSS contexts was leadership structure. Classical studies of leadership found that groups with leaders can be more effective than groups without leaders. To operationalize the role of the leader in traditional organizations, our experiment empowered a designated leader of the group with the authority to override a consensus recommendation of the group members or to select among competing recommendations.


3. Research methodology

We captured and evaluated psychological climate along the dimensions of psychological safety and psychological meaningfulness. We did not examine its availability dimension because all our subjects were students attending a required course that included 2 h of regular class time to perform the assigned task using the same GDSS. We first piloted the study to confirm that the availability of resources was not significantly different among the experiment participants.

3.1. Experimental design

Our experiment manipulated three GDSS performance contingencies by setting the GDSS session arrangements. In contrast to the psychological climate variables, which reflected individual perceptions, these GDSS session setting variables defined the GDSS group contexts, where users in the same group shared the same values. Our objective was to show that, in addition to the GDSS group settings that had been found important in prior work, psychological climate explained variations in GDSS users' decision-making performance. Communication mode had two values: face-to-face or distributed; leadership mode had two values: group with a leader or without a leader; incentive structure had two values: individual- or group-based.

3.1.1. Experimental participants

The participants in the experiment were 156 junior and senior undergraduate business students enrolled in business decision-making course sections. Approximately 40% of the participants were female, and the average age was 21. In the context of our study, the participants had extensive experience with online decision tools and shared similar characteristics with most GDSS users. The participants were asked to use the GDSS to solve a production-planning problem as members of a management team. They were randomly assigned to the eight different GDSS contexts resulting from the combinations of the three context variables: communication channel, incentive structure, and leadership structure. The distribution of the group assignment is shown in Table 1. Each group had three members; some groups also had a designated leader who could override member solutions, while the other groups were democratic. The random group assignment was made in the first week of class. The students were given multiple group assignments during the term before they participated in our experiment; thus, the groups had learned to use the GDSS and had developed some group history prior to working on our case.

To acquaint the subjects with the GDSS, a training session was held and a sample problem was solved. The formal experiment began after it was determined that the subjects fully understood the problem and the features of the GDSS, which supported real-time communication between the group members. Members of the FGDSS groups were seated in a meeting room and could communicate both verbally and via a computer during the problem-solving session. Members of the DGDSS groups were physically separated, did not have face-to-face contact, and communicated only via the communication subcomponent of the GDSS. All groups had access to the same GDSS. There was no time limit imposed; on average, it took a group about 2 h to complete the experiment. The groups (managers of manufacturing, marketing, and purchasing) met and exchanged information to decide on a mutually acceptable solution to a production planning problem. The task is described in Appendix A.

We had two types of incentive structures; for comparison purposes, the reward points generated by the two incentive structures were designed to be comparable, as explained in the first two sections of Appendix A.

Table 1
Between-subjects factors.

Factor               Treatment     Number of subjects
Leadership           NO LEADER     72
                     LEADER        84
Communication mode   DGDSS         84
                     FGDSS         72
Incentive            GROUP         82
                     INDIVIDUAL    74


We have also, in Appendix B, given a numerical example to illustrate the decision problems faced by the group members. In groups with leaders, the leader tried to maximize the organizational profit, as his or her reward was directly tied to organizational profit. The leader's problem is shown in Appendix C. The leader originally had only an estimate of the departmental costs, and the members could send him or her updated information using the GDSS. The reward for each member was computed from the objective function value resulting from the solution adopted.

To make the participants take the experiment seriously, 15% of their course grade was allocated to it. Each student's grade for the experiment was directly proportional to the reward points he or she received on the exercise. Each subject could use the optimization capability of the GDSS to find the optimal solution from his or her perspective; this value was an estimate of the reward points that the student could receive on the exercise and was used as a measure of performance. Once the group agreed on a compromise solution, the performance of each member was compared against those achieved by others in the same treatment condition. The highest performer received the full 15% grade and the others received a share proportional to the percentage of the highest value. In this way we provided an incentive system that encouraged students to maximize their points so that they obtained close to 15%.

3.2. GDSS features

The GDSS used by the groups was developed to support the problem solving and communication needs of cross-functional groups and to operate in a client-server environment. Both face-to-face and distributed groups used this GDSS. The GDSS provided modeling and optimization capabilities, information exchange facilities, a "what-if" capability, and a way of capturing group memory. The GDSS provided process support (information exchange via pre-defined templates), task structure (modeling and "what-if" capability), and task support (optimization based on the model formulated for each decision maker). The GDSS also provided communication support (real-time chat and pre-defined templates), some process structuring (a complete record of group interaction), and information processing capabilities (information evaluation and cross-impact analysis). The GDSS consisted of three main screens: GDSS Menu, Outgoing Messages, and Public Message Board.

The information exchange facility allowed the user to send textual messages as well as task-specific templates of data. The screen for textual message exchange had two major windows: in one, the user typed messages for others, while in the other, the user observed all messages that had been sent to him or her. The task-specific templates provided the means to exchange information via pre-defined templates. This aided the exchange of structured and numeric information (e.g., it allowed members to transmit their departmental cost information to others and to propose solutions to other members). When a task-specific template was transmitted, the local database of each recipient was automatically updated.

The modeling capability of the GDSS selected the appropriate model for the problem from the modelbase component of the GDSS. It used the information in a group member's database to formulate specific problem instances and to solve each one optimally. Because each member had to specify the effort he or she exerted in achieving a solution, a preliminary screen asked each member to assign a specific effort level to each order. The GDSS then added the costs associated with these effort levels to formulate the model of the problem. The GDSS would then use an optimization module to compute the optimal solution to the problem along with the corresponding reward to the user (see Fig. 2). A participant evaluated another participant's proposed solution by using the system's "what-if" capability, which allowed each person to examine the incremental effect of changes to a solution (see Fig. 3). The "group memory" capability kept a history of all proposed solutions, so group members had information available about the solutions that had already been proposed. This feature aided the negotiation process by providing participants with information about the preferences of others and how these preferences were changing over time.

3.3. Data collection

We collected data on the 135 group members in our experiment, excluding the group leaders from our analysis as their task objectives were different from those of the other group members.

Fig. 2. The GDSS and optimization module.


Fig. 3. Solution evaluation module.

To understand how the group members deployed the GDSS in various group settings, our system kept logs of each GDSS user's time spent on and frequency of using different features, including the optimization tool, the evaluation tool, what-if analysis, and solution proposals. To reveal unobservable factors in the decision process, we had each user fill out a short survey after completing the experiment. Each item in the survey was measured on a seven-point Likert scale from 1 to 7, where 1 = strongly disagree, 4 = neither agree nor disagree, and 7 = strongly agree.

We measured the constructs of psychological safety and psychological meaningfulness using multiple-item scales; the measurement items were developed based on the work of May et al. [12], adapted to the context of our study. We conducted a pilot study of the instruments to assess their adequacy and made further modifications according to the feedback of the pilot study participants. In our survey, psychological safety was measured by three items: (1) the level of trust and openness in the group, (2) the level of empathy members had for each other, and (3) the degree of belonging to the group. Psychological meaningfulness was measured by four items: (1) the level of participation in group decisions, (2) the level of effectiveness in utilizing the group resources, (3) the level of distribution of leadership among all members, and (4) the level of empowerment of members during the decision process.

We used a single Likert-scaled item to measure the clarity of the perceived group goal. Although the use of a single-item measure is usually not recommended, it was adequate here because the construct being measured was simple and clear to the respondent; goal clarity was obvious, and a single-item measure was therefore appropriate. We also used one item to measure the user's overall satisfaction with the final group solution and another item to assess the level of effort that each user exerted on the group task.

4. Empirical analysis

4.1. Measuring psychological climate and decision-making performance

We performed confirmatory factor analysis (CFA) on the seven survey items measuring psychological climate to validate the grouping of the items.

Table 2
Results of confirmatory factor analysis on survey items for psychological safety and meaningfulness.

Item                                            Loading on factor   Cronbach's alpha
Psychological safety                                                0.73
  Trust and openness in the group               0.75
  Empathetic to each other                      0.74
  Feel belong to the group                      0.63
Psychological meaningfulness                                        0.70
  Leadership distributed among all members      0.62
  Fully participated in the group decisions     0.48
  Effectively utilized the group resources      0.61
  Fully empowered during the decision process   0.70

Goodness-of-fit measures: chi-square (df): 28.99 (13); chi-square/df: 2.23; goodness-of-fit index: 0.94; adjusted goodness-of-fit index: 0.87.

The results of the CFA are shown in Table 2. The standardized factor loadings of the items were all greater than 0.4 and significant at the 0.1% level (t > 3.3), indicating effective measurement of the corresponding constructs. The ratio of chi-square to degrees of freedom fell in the acceptable range, between 2 and 5. The goodness-of-fit index and the adjusted goodness-of-fit index were higher than 0.90 and 0.80, respectively. Cronbach's alpha coefficients of the two psychological climate factors were both greater than 0.70, showing an acceptable level of reliability. Therefore, the model demonstrated reasonable fit to the data set. The correlation between the two psychological climate factors was 0.37.

We then constructed two variables to measure psychological climate, SAFETY and MEANINGFULNESS, by taking the average of the corresponding items. In addition, a dummy variable GOAL measuring goal clarity was assigned the value 1 if the answer to the related survey item was greater than 4 on the seven-point Likert scale, and 0 otherwise.

To evaluate the performance of GDSS users, we considered several outcome variables frequently used in the literature: decision quality [11], decision satisfaction, and decision time [13]. Decision quality and satisfaction measure the effectiveness of the user's decision process and hence are critical measures of the impact of GDSS on decision-making. Decision time measures how efficiently the GDSS influenced the decision-making process.
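For illustration, the construction of the psychological-climate variables described above could be scripted along the following lines; this is a minimal sketch, and the item column names (saf1-saf3, mean1-mean4, goal_item) are hypothetical, not the authors' actual data layout.

```python
import pandas as pd

# Hypothetical survey data: one row per GDSS user, items on a 1-7 Likert scale.
df = pd.DataFrame({
    "saf1": [6, 5, 4], "saf2": [7, 5, 3], "saf3": [6, 6, 4],                        # safety items
    "mean1": [5, 4, 6], "mean2": [6, 3, 5], "mean3": [5, 4, 6], "mean4": [6, 4, 5],  # meaningfulness items
    "goal_item": [6, 3, 5],                                                           # single goal-clarity item
})

# SAFETY and MEANINGFULNESS: average of the corresponding items.
df["SAFETY"] = df[["saf1", "saf2", "saf3"]].mean(axis=1)
df["MEANINGFULNESS"] = df[["mean1", "mean2", "mean3", "mean4"]].mean(axis=1)

# GOAL: dummy equal to 1 when the goal-clarity item exceeds 4 on the 7-point scale.
df["GOAL"] = (df["goal_item"] > 4).astype(int)

print(df[["SAFETY", "MEANINGFULNESS", "GOAL"]])
```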


Table 3
Descriptive statistics.

Variable           Mean   Standard deviation
REWARD             95     22
SATISFACTION       5.0    1.7
DEA_EFFICIENCY     0.83   0.56
TIME               115    25
SAFETY             5.5    1.0
MEANINGFULNESS     4.9    0.9
GOAL               0.6    0.5

- REWARD: This objective measure of decision quality was the number of reward points the user received in the experiment.
- SATISFACTION: This subjective measure of decision quality was the GDSS user's level of overall satisfaction with the final group solution.
- TIME: This objective measure of decision time was the number of minutes the user spent during the GDSS decision process, as recorded by the computer clock at the start and end of the session.

In addition, we derived an overall efficiency measure to evaluate the GDSS user's performance using data envelopment analysis (DEA) [2]. We modeled the individual decision-making process in a GDSS environment from the viewpoint of production economics, treating the GDSS decision process as a production process in which participants use a set of decision resources (time and various GDSS features) to produce a set of decision outputs. During the decision process, each GDSS user spent time and effort and used GDSS features as decision inputs and converted them into decision outputs, which we measured from the perspectives of quality and user satisfaction. Therefore, we constructed a linear program with six inputs and two outputs for each GDSS user in our data set to conduct output-oriented DEA (see Appendix C). The linear program compared the production performance of user i against the other individuals in the sample. Since there were 135 members in our sample, we iteratively ran 135 DEA programs to obtain the estimated DEA score, $\hat{u}_i$, for each user. In the output-oriented setting, $\hat{u}_i$ is an inefficiency indicator for each user's decision-making in the GDSS environment: a larger value of $\hat{u}_i$ represents a less efficient decision maker, and the most efficient decision maker scores 1. To ease interpretation, we took the reciprocal of $\hat{u}_i$ to generate a positive indicator of decision efficiency, labeled DEA_EFFICIENCY_i. The DEA efficiency score measured how efficiently a GDSS user converted various decision resources into the final solution. Table 3 shows the descriptive statistics of the variables.

In addition, we constructed the following variables representing the GDSS contexts:

- PROXIMITY: The communication mode was a dummy variable assigned 1 for FGDSS and 0 for DGDSS.
- LEADER: The leadership structure was a dummy variable assigned 1 for a group with a leader and 0 otherwise.
- GROUP_INC: The incentive structure was a dummy variable assigned 1 for the group-based and 0 for the individual-based incentive.

4.2. Estimation models

We constructed four regression models to test the impacts of the psychological climate variables on the four decision-making performance variables under different GDSS contexts: communication mode, leadership structure, and incentive structure. We modeled GOAL as a moderator variable that affected the strength of the relation between MEANINGFULNESS and the performance measures, as well as that between SAFETY and the performance measures. The equations were:

REWARD = \alpha_0 + \alpha_1 SAFETY + \alpha_2 SAFETY \times GOAL + \alpha_3 MEANINGFULNESS + \alpha_4 MEANINGFULNESS \times GOAL + \alpha_5 PROXIMITY + \alpha_6 LEADER + \alpha_7 GROUP\_INC + \epsilon_1   (1)

SATISFACTION = \beta_0 + \beta_1 SAFETY + \beta_2 SAFETY \times GOAL + \beta_3 MEANINGFULNESS + \beta_4 MEANINGFULNESS \times GOAL + \beta_5 PROXIMITY + \beta_6 LEADER + \beta_7 GROUP\_INC + \epsilon_2   (2)

TIME = \gamma_0 + \gamma_1 SAFETY + \gamma_2 SAFETY \times GOAL + \gamma_3 MEANINGFULNESS + \gamma_4 MEANINGFULNESS \times GOAL + \gamma_5 PROXIMITY + \gamma_6 LEADER + \gamma_7 GROUP\_INC + \epsilon_3   (3)

DEA\_EFFICIENCY = \delta_0 + \delta_1 SAFETY + \delta_2 SAFETY \times GOAL + \delta_3 MEANINGFULNESS + \delta_4 MEANINGFULNESS \times GOAL + \delta_5 PROXIMITY + \delta_6 LEADER + \delta_7 GROUP\_INC + \epsilon_4   (4)
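As a minimal sketch, Eq. (1) could be estimated with ordinary least squares as shown below; the snippet assumes a data frame df with one row per group member and the variable names defined above, and is an illustration rather than the authors' estimation code. Eqs. (2)-(4) only swap the dependent variable.

```python
import statsmodels.formula.api as smf

# Eq. (1): REWARD regressed on the psychological-climate variables, their
# interactions with GOAL, and the three GDSS-context dummies.
model = smf.ols(
    "REWARD ~ SAFETY + SAFETY:GOAL + MEANINGFULNESS + MEANINGFULNESS:GOAL"
    " + PROXIMITY + LEADER + GROUP_INC",
    data=df,
).fit()
print(model.summary())

# Eqs. (2)-(4): replace REWARD with SATISFACTION, TIME, or DEA_EFFICIENCY.
```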

Table 4 displays the estimation results obtained using ordinary least squares (OLS). We also conducted several tests to ensure that our analysis did not violate basic econometric assumptions.

Table 4
Estimation results (coefficients with t-statistics in parentheses).

Variable                 Individual reward      Individual satisfaction     Decision time         Decision efficiency
                         REWARD (R2 = 0.33)     SATISFACTION (R2 = 0.27)    TIME (R2 = 0.15)      DEA_EFFICIENCY (R2 = 0.23)
SAFETY                   4.50 (1.75)**          0.53 (2.68)***              10.57 (3.25)***       0.06 (2.70)***
GOAL × SAFETY            4.11 (1.14)            0.20 (0.72)                 3.36 (0.74)           0.04 (1.27)
MEANINGFULNESS           3.79 (1.02)            0.39 (1.78)**               6.44 (1.79)**         0.007 (0.27)
GOAL × MEANINGFULNESS    6.24 (1.55)*           0.33 (1.03)                 3.52 (0.69)           0.06 (1.71)**
PROX                     10.78 (3.12)***        0.15 (0.56)                 2.52 (0.59)           0.06 (1.89)**
LEADER                   2.48 (0.74)            0.23 (0.89)                 7.79 (1.84)**         0.002 (0.09)
GROUP_INC                20.02 (6.14)***        0.46 (1.84)**               12.24 (2.98)***       0.09 (2.92)***

* Significant at the 10% level (one-sided test). ** Significant at the 5% level (one-sided test). *** Significant at the 1% level (one-sided test).


We checked the studentized residuals for influential outliers and found that all observations were in the acceptable range, with absolute studentized residuals smaller than three. The Belsley–Kuh–Welsch condition indices indicated that multicollinearity was not a problem, and the Shapiro–Wilk test revealed that the residuals from the model did not violate the normality assumption.

4.3. Estimation results

Psychological safety was significantly and positively associated with DEA_EFFICIENCY, REWARD, and SATISFACTION, and significantly and negatively associated with TIME. This suggested that psychological safety increased the user's overall decision efficiency (DEA_EFFICIENCY), decision quality, satisfaction, and time efficiency. These results provide evidence for accepting hypothesis H1. The interaction term of psychological safety and goal clarity did not have a significant impact in any of the four equations; therefore, goal clarity apparently did not moderate the effect of psychological safety on decision-making performance. Since psychological safety positively influenced GDSS users' work attitude and task performance, managers should provide a psychologically safe environment for GDSS users to enhance their performance. The design of GDSS features can also potentially induce a psychologically safe environment; for example, if the GDSS allowed members to pose questions on the social network and also provided an effective knowledge management system that induced knowledge sharing, the feeling of safety might be enhanced.

The psychological meaningfulness variable itself was significantly and positively associated with SATISFACTION and TIME but had no significant impact on DEA_EFFICIENCY and REWARD. These results show that a higher level of perceived psychological meaningfulness by itself increased the time the participant spent on the tasks and allowed him or her to feel more satisfied with the decision process and results. However, a higher level of perceived meaningfulness by itself did not improve the quality of the solution (measured by REWARD), nor did it impact the overall efficiency of the decision process (measured by DEA_EFFICIENCY). Consequently, our research hypothesis H2 about the impact of psychological meaningfulness was only partially supported. Our empirical results suggested that GDSS users who perceived a higher level of psychological meaningfulness achieved better decision-making performance in terms of higher self-reported satisfaction, but not in terms of objective performance measures such as reward level, time efficiency, and overall decision-making efficiency.

Interestingly, we found that when GDSS users clearly recognized the decision goal, higher perceived psychological meaningfulness helped them earn higher reward points and achieve higher decision-making efficiency. The interaction term of psychological meaningfulness and goal clarity was significantly and positively associated with DEA_EFFICIENCY and REWARD, but not significantly associated with SATISFACTION and TIME. Thus, research hypothesis H3 about the moderating effect of goal clarity was supported when decision-making performance was evaluated from the perspective of decision quality and decision efficiency.

We conclude that stronger psychological meaningfulness by itself enables a higher level of user satisfaction, regardless of how clearly the GDSS users perceived their decision goal. On the other hand, psychological meaningfulness by itself impacted neither decision quality nor overall decision efficiency; it was only when the GDSS user was clear about the decision goal that psychological meaningfulness had a positive and significant impact on decision quality and overall decision efficiency. Therefore, to improve decision quality and overall decision efficiency, ways of enhancing the user's understanding of the decision goal should be implemented together with measures to increase psychological meaningfulness.


For the impact of the GDSS group settings, we found that PROXIMITY had a significant and positive impact on overall decision efficiency and reward. This confirmed our theoretical expectation that FGDSS users may benefit from the richer communication mode and perform better than those using a DGDSS. The leadership structure did not have a significant influence on any of our performance indicators. The incentive structure variable significantly affected all four performance indicators, showing that GDSS users under the group-based incentive scheme performed significantly more efficiently than those under the individual-based structure.

5. Summary and implications

Our empirical results demonstrate how psychological climate and GDSS contexts can impact decision-making performance in a GDSS environment. We showed that GDSS users' perceptions of their group decision environment were related to their decision-making performance, and that psychological climate plays a critical role in decision-making in GDSS contexts. We found that higher levels of psychological safety led to better decision-making performance. In contrast, perceiving a high level of psychological meaningfulness by itself enabled only a higher satisfaction level. Therefore, in addition to proper design of the GDSS context and features, managers designing a GDSS context may benefit by promoting an atmosphere that ensures a feeling of security and meaningfulness, allowing users to reap the full benefit of the GDSS. Making sure that all participants in a group clearly understand its goals is a major way of improving performance, given the strong interaction goal clarity has with psychological meaningfulness.

Of course, our study has its limitations. First, the reader should be cautious about extending the results beyond the experimental conditions. Though it is not uncommon to use students as subjects for learning about human decision-making, we acknowledge the limitation associated with using students as subjects. In addition, the way we designed the different GDSS contexts in our study does not capture all organizational contexts; for example, there may be many different ways that leadership can be introduced in a GDSS context, and the communication channels can be implemented in other ways, such as video teleconferencing. Another potential limitation is that, given the time constraints, we used single-item questions to measure clarity and satisfaction. Finally, we found that two dimensions of psychological climate play a critical role in determining the effective use of GDSS, but psychological climate can have other dimensions beyond those we adopted in our study. Despite these limitations, we believe that the results of our study provide significant insights into the influence of psychological climate on decision-making.

Appendix A. The task and its incentives

Consider a company that manufactures four products. A customer order consists of some combination of all four products; for example, an order might consist of 200 units of product 1, 450 units of product 2, 200 units of product 3, and 400 units of product 4. The three members of a group represent the managers of three departments who meet to decide which orders to fill for the company. The information available to the subject includes the details of the order and the total revenue the order generates. Associated with each order is a departmental Projected Cost (PC), which is the best estimate of the cost the company expects each department to incur to fill the particular order. This information is available to all departments (all group members). Each department, however, has information about its internal costs, the Actual Departmental Cost (ADC), which is not available to the other departments unless the department wishes to reveal these numbers. The ADC is not necessarily equal to the departmental PC.


Further, each department has information about the Uncompensated Departmental Effort Cost (UDEC), which represents the departmental effort and associated costs the department incurs in filling an order but which is invisible to the company; hence the department is not directly compensated for these costs. In general, the harder a department works, the lower its ADCs become; however, the harder the department works, the more departmental resources are used. The UDEC captures the extra cost of departmental resources that the department has to absorb internally without compensation from the company. In essence, the ADC is a decreasing function of effort while the UDEC is an increasing function of effort in the department. Each department makes its effort level decision based on the incentive structure and the trade-offs between ADC and UDEC.

Notation:

$Y_i$ = 1 if order $i$ is taken, and 0 otherwise
$X_{ij}$ = 1 if order $i$ is taken at effort level $j$, and 0 otherwise
$Q_{ik}$ = quantity of product $k \in K$ required in order $i \in S$
$E$ = set of effort level choices = {1, 2, 3, 4}
$S$ = set of orders = {1, 2, ..., 20}
$K$ = set of products = {1, 2, 3, 4}
$D$ = set of departments; 1 = Marketing, 2 = Production, 3 = Purchasing
$Rev_i$ = revenue generated by filling order $i$
$PC_{id}$ = Projected Cost of filling order $i$ at department $d$; this is the best estimate the organization has of how much it should cost the department to fill the order
$ADC_{ijd}$ = Actual Departmental Cost of filling order $i$, expending effort level $j$, at department $d$; the ADC differs across effort levels, and this information is internal to each department
$UDEC_{ijd}$ = Uncompensated Departmental Effort Cost that department $d$ incurs, but is not directly compensated for by the organization, for filling order $i$ at effort level $j$; this information is internal to each department

A.1. Individual-based incentive

Typical of many organizational incentive structures, a member's (department manager's) bonus is based on how well the department controls its Actual Departmental Costs (ADC) compared to the Projected Costs (PC). A department receives a bonus equal to a percentage (60%, without loss of generality) of the difference between the PC and the ADC. To lower the ADCs in an attempt to maximize the bonus, each department incurs some Uncompensated Departmental Effort Costs (UDEC); the UDEC is borne by the department and is not directly compensated by the organization, hence the name uncompensated. As a member increases his or her effort level, the ADC decreases and the deviation between ADC and PC widens, resulting in a higher bonus. However, because an increased bonus is associated with an increased UDEC, the reward, i.e., the bonus minus the UDEC, is not necessarily an increasing function of the effort level. For ease of exposition and to avoid overly complicating the experiment, we selected four discrete levels of effort (including the optimal one) and presented the corresponding values of ADC and UDEC to the subjects. The reward from selecting a subset of the 20 orders and expending an effort level (1, 2, 3, or 4) at department $d$ (one of the three departments of marketing, production, and purchasing) is the objective that, along with the product capacity constraints, leads to the model of the problem, PROB-INDIVIDUAL, given below.

PROB-INDIVIDUAL:

\[
\max \sum_{i \in S} \sum_{j \in E} \left[ (PC_{id} - ADC_{ijd}) \times 60\% - UDEC_{ijd} \right] X_{ij}
\]
subject to
\[
\sum_{i \in S} Q_{ik} Y_i \le Capacity_k \quad \forall k \in K
\]
\[
\sum_{j \in E} X_{ij} = Y_i \quad \forall i \in S
\]

A.2. Group-based incentive

A percentage of the organizational profit (15%, without loss of generality) is assigned for bonuses, and each member receives an equal percentage of this group outcome. Organizational profit is calculated by subtracting the sum of the ADCs incurred at the three departments from the revenues generated by the selected orders. Hence, the bonus each member receives depends on the Actual Departmental Costs (ADC) of the other members as well as his or her own. If the costs of the other members are assumed constant, then each member can find the optimal effort level for each order using the information available locally at the department. However, due to the interaction between a member's bonus and the other members' costs, isolated local effort decisions do not result in the best reward for all members. As a member increases his or her effort level, the ADC decreases and the deviation between the revenue and the sum of the ADCs at the three departments widens, resulting in a higher bonus for each member. However, because an increased bonus is associated with an increased UDEC, the reward, i.e., the bonus minus the UDEC, is not necessarily an increasing function of the effort level; although the increased bonus is shared by the others, the increased UDEC is borne by the department alone. As with the individual-based incentive structure, we provided the subjects with the values of ADC and UDEC corresponding to the four effort level choices. Each department's problem under the group-based incentive structure, PROB-GROUP, can be modeled as follows.

PROB-GROUP:

\[
\max \sum_{i \in S} \sum_{j \in E} \left[ 15\% \left( Rev_i - \sum_{t \in D,\, t \neq d} ADC_{it} - ADC_{ijd} \right) - UDEC_{ijd} \right] X_{ij}
\]
subject to
\[
\sum_{i \in S} Q_{ik} Y_i \le Capacity_k \quad \forall k \in K
\]
\[
\sum_{j \in E} X_{ij} = Y_i \quad \forall i \in S
\]
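To make the structure of PROB-INDIVIDUAL concrete, the following sketch enumerates all order/effort choices for a tiny instance; the data are illustrative only, not the experimental parameters, and PROB-GROUP differs only in the objective function. Each order is either skipped or taken at exactly one effort level, subject to a single product-capacity constraint.

```python
from itertools import product

# Toy data: 2 orders, 1 product, 4 effort levels (illustrative values only).
PC = {1: 200, 2: 300}                       # projected cost of each order for this department
ADC = {1: [190, 150, 140, 120],             # actual departmental cost by effort level 1-4
       2: [280, 260, 250, 245]}
UDEC = {1: [0, 10, 20, 30],                 # uncompensated effort cost by effort level 1-4
        2: [0, 15, 25, 40]}
Q = {1: 200, 2: 450}                        # units of the product required by each order
CAPACITY = 500

best_value, best_plan = float("-inf"), None
# Each order's choice: 0 = not taken, or an effort level 1-4.
for plan in product(range(5), repeat=len(PC)):
    choices = dict(zip(PC, plan))
    # Capacity constraint: sum of Q_ik * Y_i <= Capacity_k.
    if sum(Q[i] for i, j in choices.items() if j > 0) > CAPACITY:
        continue
    # Objective: sum over taken orders of (PC - ADC) * 60% - UDEC.
    value = sum((PC[i] - ADC[i][j - 1]) * 0.60 - UDEC[i][j - 1]
                for i, j in choices.items() if j > 0)
    if value > best_value:
        best_value, best_plan = value, choices

print(best_plan, best_value)
```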


Appendix B. A simple example of the problem faced by a group

We provide a simple example to illustrate the problem faced by the group members and the interplay of the key variables. Table A.1 shows the information available to the three group members for a specific order. Table A.2 shows the ADCs and UDECs for each department (group member); this information is provided to that department only and is not available to the other departments (other group members).

By selecting the order depicted above, a revenue of $865 is generated. For this example, assume that the managers of the marketing, production, and purchasing departments take effort levels 2, 2, and 3, respectively. This results in an organizational profit of $185. The bonus for each of the department managers is 15% of this, i.e., $27.75. The reward is found by subtracting the UDEC from the bonus, resulting in $17.5, $7.5, and $7.5 for the three managers, respectively. If they had all selected effort level one, the reward would have been $9.75 for each. It is straightforward to calculate the impact of one member expending effort level 1 while the others expend higher effort levels, and to see that there is a motivation for members to select low effort levels, to take free rides on others, or to prevent others from taking free rides on them.

Table A.1
Public information about an order.

Order 1
Revenue = $865.00
Projected Costs (Marketing) = 200
Projected Costs (Production) = 400
Projected Costs (Purchasing) = 250

Table A.2
Local information available at each department for each order.

Marketing                  Production                 Purchasing
Effort   ADC    UDEC       Effort   ADC    UDEC       Effort   ADC    UDEC
1        190    0          1        360    0          1        250
2        150    10         2        330    20         2        240    10
3        140    20         3        300    40         3        200    20
4        120    30         4        300    50         4        190    30
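The group-incentive arithmetic of the Appendix B example can be reproduced in a few lines; this sketch uses the Table A.1/A.2 figures, with the purchasing UDEC at effort level 3 taken as 20 (an assumption, since the first entry of that column is missing above), and its results approximate the dollar amounts quoted in the text.

```python
# Appendix B example under the group-based incentive (15% bonus share per member).
revenue = 865.0
adc = {"marketing": 150, "production": 330, "purchasing": 200}   # ADCs at effort levels 2, 2, 3
udec = {"marketing": 10, "production": 20, "purchasing": 20}     # purchasing value assumed (see lead-in)

profit = revenue - sum(adc.values())          # organizational profit: 865 - 680 = 185
bonus = 0.15 * profit                         # each member's bonus: 27.75
rewards = {d: bonus - udec[d] for d in adc}   # reward = bonus - own UDEC
print(profit, bonus, rewards)                 # close to the $17.5 / $7.5 / $7.5 quoted in the text
```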

Appendix C. The LP model

The following linear program with six inputs and two outputs was computed for each user i in our data set:

\[
\begin{aligned}
\max \quad & u_i \\
\text{s.t.} \quad & \sum_{n=1}^{135} \lambda_n X_{nj} \le X_{ij} && j = 1, 2, 3, 4, 5, 6 \\
& u_i Y_{ik} \le \sum_{n=1}^{135} \lambda_n Y_{nk} && k = 1, 2 \\
& \sum_{n=1}^{135} \lambda_n = 1 \\
& u_i > 0, \quad \lambda_n > 0, \; n = 1, 2, \ldots, 135
\end{aligned}
\]


where
$Y_{i1}$ = REWARD: the number of reward points the member received in the experiment (an objective measure of decision quality);
$Y_{i2}$ = SATISFACTION: the member's level of satisfaction with the final group solution, measured on a seven-point Likert scale;
$X_{i1}$ = EFFORT: the level of effort the member brought to bear on the group task, measured on a seven-point Likert scale;
$X_{i2}$ = TIME: the number of minutes the member spent during the GDSS decision process, as recorded by the computer clock at the start and end of the session;
$X_{i3}$ = OPT: the number of times the member employed the GDSS optimization tool;
$X_{i4}$ = EVAL: the number of times the member used the GDSS solution evaluation tool;
$X_{i5}$ = W_IF: the number of times the member conducted the GDSS what-if analysis;
$X_{i6}$ = PROP: the number of unique solutions the member proposed to the group using the GDSS.

The linear program compares the production performance of user i against the other individuals in the sample. The output-oriented setting implies that each decision maker seeks to maximize his or her outputs given the available set of inputs.
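A minimal sketch of this output-oriented (variable returns to scale) DEA program using scipy is given below; X is an N×6 input matrix and Y an N×2 output matrix over all N members (random toy data here, with an assumed function name), and the efficiency score for member i is the reciprocal of the optimal u_i, as in the text. It illustrates the model above and is not the authors' implementation.

```python
import numpy as np
from scipy.optimize import linprog

def dea_output_oriented(X, Y, i):
    """Solve the output-oriented DEA program of Appendix C for unit i; returns u_i."""
    n = X.shape[0]
    # Decision vector: [u_i, lambda_1, ..., lambda_n]; maximize u_i -> minimize -u_i.
    c = np.r_[-1.0, np.zeros(n)]
    # Input constraints: sum_n lambda_n * X[n, j] <= X[i, j] for each of the 6 inputs.
    A_in = np.hstack([np.zeros((X.shape[1], 1)), X.T])
    b_in = X[i]
    # Output constraints: u_i * Y[i, k] - sum_n lambda_n * Y[n, k] <= 0 for each of the 2 outputs.
    A_out = np.hstack([Y[i][:, None], -Y.T])
    b_out = np.zeros(Y.shape[1])
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[b_in, b_out]
    # Convexity constraint: sum_n lambda_n = 1.
    A_eq = np.r_[0.0, np.ones(n)][None, :]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], bounds=(0, None))
    return res.x[0]

rng = np.random.default_rng(0)
X = rng.uniform(1, 10, size=(20, 6))   # 6 decision inputs per member (toy data)
Y = rng.uniform(1, 10, size=(20, 2))   # 2 decision outputs per member (toy data)
u = dea_output_oriented(X, Y, i=0)
print("DEA_EFFICIENCY for member 0:", 1.0 / u)
```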

References

[1] R. Barkhi, The effects of decision guidance and problem modeling on group decision-making, Journal of Management Information Systems 18 (3), 2001, pp. 259–282.
[2] R. Barkhi, Y. Kao, Evaluating decision making performance in the GDSS environment using data envelopment analysis, Decision Support Systems 49 (2), 2010, pp. 162–174.
[3] R.S. Batenburg, F.J. Bongers, The role of GSS in participatory policy analysis – a field experiment, Information and Management 39 (1), 2001, pp. 15–30.
[4] A. Dennis, M.J. Garfield, The adoption and use of GSS in project teams: toward more participative processes and outcomes, MIS Quarterly 27 (2), 2003, pp. 289–323.
[5] S. Djamasbi, D.M. Strong, The effect of positive mood on intention to use computerized decision aids, Information & Management 45, 2006, pp. 43–51.
[6] W. Hacker, Action Regulation Theory: a practical tool for the design of modern work processes? European Journal of Work and Organizational Psychology 12 (2), 2003, pp. 105–130.
[7] W. Huang, D. Li, Opening up the black box in GSS research: explaining group decision outcome with group process, Computers in Human Behavior 23 (1), 2007, pp. 58–78.
[8] A.H. Huang, D.C. Yen, X. Zhang, Exploring the potential effects of emotions, Information & Management 45, 2008, pp. 466–473.
[9] L.R. James, C.C. Choi, C.E. Ko, P.K. McNeil, M.K. Minton, M.A. Wright, K. Kim, Organizational and psychological climate: a review of theory and research, European Journal of Work and Organizational Psychology 17 (1), 2008, pp. 5–32.
[10] T.A. Judge, C. Thoreson, J.E. Bono, G.K. Patton, The job satisfaction–job performance relationship: a qualitative and quantitative review, Psychological Bulletin 127, 2001, pp. 376–407.
[11] M. Limayem, P. Banerjee, L. Ma, Impact of GDSS: opening the black box, Decision Support Systems 42, 2006, pp. 945–957.
[12] D.R. May, R.L. Gilson, L.M. Harter, The psychological conditions of meaningfulness, safety and availability and the engagement of the human spirit at work, Journal of Occupational and Organizational Psychology 77, 2004, pp. 11–37.
[13] M. Parikh, B. Fazlollahi, S. Verma, The effectiveness of decision guidance: an empirical evaluation, Decision Sciences 32 (2), 2001, pp. 303–331.
[14] C.P. Parker, B.B. Baltes, A.Y. Scott, J.W. Huff, R.A. Altmann, H.A. Lacost, J.E. Roberts, Relationships between psychological climate perceptions and work outcomes: a meta-analytic review, Journal of Organizational Behavior 24, 2003, pp. 389–416.
[15] A. Schwarz, C. Schwarz, The role of latent beliefs and group cohesion in predicting Group Decision Support Systems success, Small Group Research 38 (2), 2007, pp. 195–229.
[16] J. Schepers, A. Jong, W. Martin, K. Ruyter, Psychological safety and social support in groupware adoption: a multi-level assessment in education, Computers & Education 51, 2008, pp. 757–775.
[17] M. Strite, J.E. Galvin, M.K. Ahuja, E. Karahanna, Effects of individuals' psychological states on their satisfaction with the GSS process, Information & Management 44, 2007, pp. 535–546.
[18] J.B. Thatcher, M.L. Loughry, J. Lim, D.H. McKnight, Internet anxiety: an empirical study of the effects of personality, beliefs, and social support, Information & Management 44, 2007, pp. 353–363.
[19] H. Van Der Heijden, T. Verhagen, Online store image: conceptual foundations and empirical measurement, Information & Management 41 (5), 2004, pp. 609–618.

Reza Barkhi is Associate Professor and PricewaterhouseCoopers Research Fellow in the Department of Accounting and Information Systems at Virginia Tech (USA). He served as department Head of MIS at the American University of Sharjah during 2006-2009 while on leave from Virginia Tech, and will serve as Head of the Accounting & Information Systems department at Virginia Tech starting July 2011. His current research interests are in the areas of Computer Supported Collaboration, Electronic Commerce, Information Systems, and IT Audit & Control. He has published in journals such as Location Science, European Journal of Operational Research, Computers & OR, Group Decision and Negotiation, Decision Support Systems, Communication Research, Communications of AIS, Information Technology and Management, Information & Management, and Journal of Management Information Systems. He received a BS in Computer Information Systems, an MBA, an MA, and a Ph.D. in Business focusing on Information Systems and Decision Sciences, all from The Ohio State University.

Yi-Ching Kao is an Assistant Professor of Accounting at Menlo College. She received her Ph.D. in Management Science from the University of Texas at Dallas. Her current research interest is performance evaluation in various services industries, including public accounting, software production, electronic commerce, and non-profit organizations. She has published in the European Journal of Information Systems, Decision Support Systems, Information & Management, and the Journal of Information Systems.