Int. J. Man-Machine Studies (1986) 25, 139-152

Computer decision support for senior managers: encouraging exploration

TIM SMITHIN AND COLIN EDEN

School of Management, University of Bath, Claverton Down, Bath BA2 7AY, U.K.

(Received 30 October 1985)

This paper discusses issues involved in designing a computer-based decision support system for senior decision makers in business organizations. It is based on the authors' experiences of developing and using such a system over several years in a number of large U.K. companies. The paper focusses upon the nature of decision-making for senior managers and emphasizes the highly political and turbulent environment in which they work, and the implications this has for designing a system which can compete for the attention of a busy manager. Rather than describe "another system" we are anxious to discuss the problems involved in creating decision support systems that will be practical, and assist with decisions that "really matter".

Introduction

In this paper we discuss issues involved in designing computer decision support systems based on our work researching and developing a decision support system to encourage the participation of senior managers in corporate planning activities within a large U.K. company. Whilst the use of computer decision support is not new, and there is a rapidly growing interest in its practical application, we believe that our system is a unique experience for managers in that it provides informed and relevant help set against the practical and political realities of everyday working life.

Our work is firmly grounded in a psychological and political approach to understanding organizations rather than being concerned with computers and computer science, and this provides a different interpretation of the issues involved in the employment of computing power. In our work we emphasize the psychological and political aspects of decision support because we believe that it is at this level that we must consider the use of computers and decision support if they are to make a practical and relevant contribution to organizational life. Whilst our backgrounds in the psychology of organizational decision making predispose us to this view, it is noticeable that as computer decision support systems gain wider visibility these issues are coming to the fore.

The growth of computer aid for decision-making

In recent years there has been a growing interest in the provision of computer aid for complex decision making. Some of the earliest work attempted to "ape" the skills of a management consultant in a way which foreshadowed current interest in providing intelligent support for managers (Joyner & Tunstall, 1970). More recently in Europe there have been a variety of government initiatives to encourage research into a wide range of information technology projects, a significant part of which should foster
developments in decision support systems (Alvey Committee, 1982; ESPRIT, 1985). Work on computer aid for complex decision-making has also been pursued extensively by groups which are closer to the decision maker's perspective: for example, early work on our system was influenced by computer analysis of foreign-policy decision-making (Bonham, Shapiro & Nozicka, 1976; Axelrod, 1976). Closely related work on rule-guided computer models to aid organizational policy analysis was also influential (Bossel, 1977; Eden, 1978). More recently, Barbaric (1981) surveyed a number of computer-based decision support systems, all of which have a background in the psychology of organizational decision-making and organizational analysis. The use of decision support within a managerial or organizational perspective is also becoming increasingly researched in the U.S.A. (Power, 1982) and in the U.K., where systems have been developed to aid decision-making from a background in personal psychology and decision theory (Boxer, 1979; Phillips, 1984; Wooler, 1982).

As well as these general decision-making systems there have also been changes to many other computer modelling systems which bring them into the category of computer-assisted decision support. For example, work in the U.K. at the University of Warwick on visual interactive simulation has changed the use of simulation models into an interactive user-based problem-solving activity, albeit over a restricted range of problems (Withers & Hurrion, 1982), and work at the University of Kent has a similar aim (O'Keefe, 1984) of making decision support software more approachable by managers. Similarly, interactive report generators move traditional database work more obviously into this arena (Blanning, 1984). Another computer development which has grown towards this area of decision support is that of computer mail and conferencing systems, where the increasing range of facilities on these systems, and their availability, has meant that they are now frequently used as a significant part of a group decision-making process (Featheringham, 1977).

The above is by no means an exhaustive portrayal of work on computer-aided decision making, but it does illustrate the amount of effort now going into this area of research, the widely differing backgrounds of those involved, and the variety of theoretical perspectives, not all of which are computer-based. In view of this, recent worries expressed about the lack of knowledge of the wider context of decision support can be partly answered by paying attention to ideas, systems and work which are already available, but not usually associated with the area of decision support systems. Two researchers have recently suggested that: "Whilst considerable enthusiasm for DSS exists in many quarters there are still a number of fundamental issues which have not been seriously addressed. Of particular concern is how DSS is likely to affect organizations." (Klein & Hirschheim, 1985.) Such worries are also seen in the related area of work on the development of expert systems, which has frequently been criticized for neglecting the human aspects of system use in organizations (O'Keefe, 1985), and even where decision support systems are considered more widely, in a problem-solving context, the description of that context is often oversimplified, and tends to ignore the influence of subjectivity within the decision making process (Landry, Pascot & Briolat, 1985).
In our attempts to develop and use a decision support system, we work with a broad view of the decision-making task, and have become increasingly convinced that any serious work in the use of decision support systems in organizations should be grounded in an understanding of the nature of decision-making in such environments, rather

issues, and is significant in setting users' expectations when they "sit in front of" a particular system. The notions of utility and alienation are perhaps just the most recent indicators of a long cultural heritage of our relationships with machinery. Heidegger, for example, has argued that our conception of technology as an objectified aspect of experience, in contrast to ourselves as subjects, stems from a Cartesian separation of the individual from the world, and technology becomes an instrument for the achievement of human desires and purposes quite separate from the human subject. This separation effectively splits the world of experience into two distinct parts, the subjective and the objective, and the idea of technology defines those aspects of the world which are capable of manipulation in a specific way, i.e. those which behave as objects, and can be treated as such (Heidegger, 1977; Descartes, 1637). Therefore all things "technological" are placed in a different realm of experience to all things "personal", and there is an intrinsic alienation in our experience of technology, which contains at its root a sense of alienation of subject from object. However, aspects of experience placed in the objective realm have the contrasting property of being capable of control and manipulation to a degree not envisaged in the subjective realm. Technology therefore suggests a degree of utility and presents inanimate objects capable of manipulation.

Our ambivalence to modern technology, which is manifested in many ways in our reactions to computers, is, according to this analysis, hardly surprising. The use of a computer, almost more than anything else, seems to bring out extremes of behaviour; on the one hand, for some, the suggestion of using a computer to help in their work generates immense hostility; on the other hand, the existence of computer addicts is now a matter of significant social concern. Computers and all their trappings seem to encourage extreme reactions. We do not believe that this is solely due to social factors such as job security and civil rights [though these are important (Jenkins & Sherman, 1979; Cooley, 1980)], but that it reflects a psychological unease with the man-machine interaction.

The roots of this alienation seem to lie in our difficulties in placing the computer system in an appropriate role in our social world, for as a combination of hardware and software the system has apparent properties of "intelligence" which place it as a subject in our world, and equally properties which suggest it is simply an inanimate object. This confusion is evidenced in the lack of distinction most users make between hardware and software in the use of the system: for example, we usually conceive of the machine as an objective, inanimate, technological entity, and yet when we "dialogue" with it we expect it to behave like a human being. It is not surprising that most interactions with computers are described as being "unsatisfactory" (Smith, 1980). Weizenbaum described this conflict when discussing early work on artificial intelligence: "Most men don't understand computers to even the slightest degree. So unless they are capable of very great scepticism they can explain the computer's intellectual feats only by bringing to bear the single analogy available to them, that is their model of their own capacity to think." (Weizenbaum, 1976.)

Perhaps today we understand computers a little better than 9 years ago, or at least have experiences that reinforce more scepticism, but the point still remains that most people generally have only one main model of intelligent behaviour, and using this model as a basis for interacting with "intelligent" machines may be very confusing. There are both practical and philosophical issues of considerable magnitude to tackle in this context (Dennett, 1978; Hofstadter, 1979; Michie & Johnston, 1984) but in this
paper we focus upon the practical implications for the man-machine dialogue of attributing human characteristics to a computer system. The result of this "object/person" confusion about computer use is that neither designers nor users have a very clear idea of what it is that the computer is "supposed to be". Whilst the interaction with a program may give the impression that the computer is responding as an intelligent being, we are simultaneously aware that it cannot be "alive" in the same way that we are. Moreover, when considered more closely, even the computer's characteristics which make it appear to be an intelligent being are idealized in a way that makes it very non-human; it never forgets, it is invariably consistent and it is always polite. Consequently, our judgements about the interaction are often misleading; we expect the machine to "know" and do things which we would not expect of a human consultant. Is it a "pseudo person", an "intelligent machine", or an "inanimate number cruncher" (Shallis, 1984)? This confusion is exacerbated by the presentation of each of these "machine personas" within a single interaction.

Machine "intelligence" and human intelligence This understanding of the differences between what is generally labelled machine "intelligence" and human intelligence is very important for successfully construing machine "behaviour" and enabling the human actor to place the machine in an appropriate social role with respect to his workplace needs. Evidence to date suggests that most users, especially managers who have little previous experience using a computer, are unable to construe the machine in an appropriate way. As we stress elsewhere: "In many important respects it can never be the same as a human interface but should be seen as a different type of experience with advantages and disadvantages all of its own." (Eden, Williams & Smithin, 1986). An example may make this clearer. Modern chess-playing computer systems can, apparently, play a very good game of chess and recently (as has been the case throughout the history of computer chess) there was a strong expectation that the best systems could take on and regularly beat players of international status. The results of a number of matches between machines and good players fuelled this expectation. Therefore, the result of the match between "Cray Blitz" (running on a very powerful mainframe machine) against a rather "rusty" international master came as a considerable surprise when Levy had no difficulty in defeating "Cray Blitz" over a series of four games (Levy, 1984a). The critical factor was that unlike most good chess players Levy is also a writer of chess-playing software and his chess-playing strategy is based on a very good knowledge of what the machine does well or badly, i.e. he has a good understanding of the "intelligence" of the machine. Levy is also able to provide a model of machine behaviour for others in terms of a few general rules, so it is likely that machine play will be less successful than previously against human opponents. In this case construing the machine as "a good human player" ignores the possibility of strategies which can exploit the weaknesses which are peculiar to machine play (Levy, 1984b). It is precisely this problem which may face users of any system. If they attribute the wrong characteristics to the software then they will become dissatisfied and frustrated because the system "will not make sense". More importantly, they will not be able to make use of the specific advantages that are available when using a machine to aid thinking. If we can,

144

T. S M I T H I b l A N D C. E D E N

in one way or another, provide the user with a way of attributing the correct characteristics, then the interaction will inevitably be more rewarding. This implies that in the design of any decision support system it is important to construct an appropriate role for the machine: to present the system in such a way that the user is able to construe it as an "intelligent" machine rather than as an intelligent person. Put another way, this is the task of designing appropriate role expectations in the interaction: to recognize that the machine plays a role in the humanity or intellect of the decision-maker's existence.

Computer images

Designing appropriate roles for the user and the machine is therefore a central task of system design, and this requires a true understanding of the user's world as he sees it, and not just attention to the ergonomics of computer use. "... concentrating on the design of the physical interface ... is about as valuable as specifying an ideal secretary in terms of what he or she looks like. What we need is a framework for considering the whole person-machine transaction" (Canter, 1984). Many system designers are now involved in this wider conception of design and have moved away from the lure of technical attractions to focus on an interaction which takes place on a cognitive rather than a physical level, yet even designs which attempt to define a wider context for the system fall prey to an overemphasis on the physical features of the environment. For instance the Star user interface, which was the precursor of modern window technology (where computer operations are represented graphically rather than textually), links its operations to the familiar environment of its intended business users: an office desk top. In this way it was presumed that users would already have some constructs for predicting the operation of the software (Cranfield Smith, Irby, Kimball & Verplank, 1982; Clanton, 1983), but there is a fundamental conceptual difference between the "electronic desk" and the "paper desk", and users rarely realize that the electronic equivalent of their desk is much less safe than the paper one, since in the former it is possible to completely lose a day's work (through a machine crash for instance) in a way which is not conceivable for a paper-based medium. Similarly the analogy ignores the significance of the fuzzy boundary of a physical desk, which extends over a considerable breadth of vision, compared with the much more tightly defined computer screen. Behind the attractive facade something very unfriendly lurks.

Similarly, suppose we attempt to reassure the manager that the system is really quite easy to use, and needs little training or expertise to master; it is easy-going, friendly and infallible. Typically such statements, in many variants, come from those who market computer systems as a way of countering sales resistance, and from computer professionals who see it as a way of reducing users' fears (Shallis, 1984). This may be initially comforting, but a user who gets involved in an interaction in this frame of mind is placed in an impossible psychological position when errors occur (as they must), since the only person left to blame for the difficulties is himself; feeling embarrassed, foolish and incompetent are hardly positive results from an interaction with a computer system.

It seems that man is just not destined to get to grips with the computer, but must we conclude that it should only be left to the enthusiastic elite who seem to be conversant
with the physics of chips, knowledgeable on the chemistry of silicon, adept at electronic engineering, familiar with the intricacies of microcode, happy to hack away at machine code, and confident in the tangles of Fortran? To date computer systems seem to have been designed for this kind of person, be they hobbyist or data-processing manager. The user looking for help with problem solving or decision making does not really want a simple guide to using the keyboard or knowing what discs do. What energizes him is the need to solve a management problem, and the software and guidance need to be related to this: "the use of computers is only possible because it occurs in a context which provides a degree of support for that use". "A good system not only wins respect and makes you want to use it, but it generates satisfying feelings and confidence in your capacity to use it effectively." (Shneiderman, 1980.) How might we then design systems for managerial tasks which busy managers are prepared, and want, to use?

Making system use more interesting

Decision support is necessarily a more complex and open-ended computing task than using a word processor, and it is often the case that users of such systems feel that a lot of effort is required in relation to the results; it is very easy to become lost in the detail and complexity of the system. The population of users for such a system (senior managers) is one in which time (and patience) are often at a premium, and for such users the system often seems like a vast passive "data sink" into which precious energy and ideas are sunk but from which nothing ever emerges (Eden et al., 1986). This is a very one-sided interaction and is also, in Alter's terms, conversationally uninteresting; in fact it is not a "conversation" at all. The net result for the user is frustration and boredom. Alter suggests that systems would be much more interesting if they were able to:

--give answers
--give opinions or make suggestions
--give orientation

rather than:

--ask for answers
--ask for opinions
--ask for orientation.

Most systems provided for business use today still seem to be rooted firmly at the "ask" end of the above spectrum, and whilst such a system is undoubtedly interactive it is not really drawing the user into the interaction and engaging his attention (Pinsky, 1983). There is a need for a system to be responsive and not just interactive (the sketch at the end of this section illustrates this contrast). One solution to this problem of passivity is to design a more prescriptive system, but this usually implies that clear goals can be identified and the user can be guided in relation to them, and here the "confidence" displayed by the system can be shallow and inappropriate. But this is not the only way to make an interaction interesting; a consultant, for example, provides another model for an interesting and informative discussion. We do not expect him to provide simple and clear-cut answers, indeed we
would feel that he had not fully understood our problem if he did, but we do expect that he: "is not boring", "does not ask unnecessary things", "appears coherent and consistent", and "is able to explain his actions" (Smith, Lee & Hand, 1983). Perhaps what we have argued is that systems generally enable a considerable measure of control (the user can make something happen), but only poorly aid prediction (the user cannot make sense of what has happened, or of what is going to happen in anything but the short term).

The preceding paragraphs have focused on the psychology of computer use, working from the presumption that many of the problems of using computers are psychologically rather than technically based. This is by no means to deny the importance of technical innovation in hardware and software, but to emphasize that no amount of attention to technical detail can provide a system which it is worth spending time to use. Our work, significantly, supports the view that: "People do not mind dealing with complexity if they have some way of controlling or handling it" (Jagodinski, 1983), and we have alternatively defined "friendliness" much more in terms of responsiveness, "intelligence", and the provision of facilities which enable the user to get on with a task confident in the available tools and support. In this way we aim to encourage an informed way of working rather than a directed and prescribed problem-solving activity.

But even if we are able to design a system which plays a useful and informed role within a decision-making context, there is still more to understand about the difference between the role the machine plays and that of a decision maker. This was brought out in a famous earlier controversy about the capabilities of chess-playing systems (Dreyfus, 1979) and is reflected more practically in our work. Can the computer play a good game of chess? Can the computer make business decisions? To focus attention on the operational aspects of these questions is to partly miss the whole point, because computer "play", no matter how operationally effective, is very different to human play: it is qualitatively different. This qualitative difference is linked to what it is for a human being to be committed to a course of action or decision. Both machine and man can pursue objectives, but in the end a man is "gripped by commitment", is involved in the decision and its consequences in a way that no machine can be, and this is why, in an organization, things can be made to happen, rather than just be decided. This is to say that man is involved as a purposive and political actor. It is essential, therefore, for the models which the computer uses, and the way in which those models are used in the context of decision making, to reflect this fundamental difference. It also implies that there is an aspect of human judgement and involvement which cannot be delegated to the machine.

The conclusion that we take from the previous sections is that a decision support system will only be effective if it takes account of the realities of the situation in which it is to be used. That is to say, it must be seen as personally relevant to the needs of those using it in their work and be modelled into a political environment of purposive individuals.
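To make Alter's contrast concrete, the following fragment is a minimal sketch, written in Python purely for illustration: it is not part of COPE, and the function names and seeded data are our own invention. It shows a toy exploration loop that, after each entry, gives back related material and a suggestion rather than only asking for the next item.

# A toy "responsive" exploration loop: after each entry the system gives
# suggestions and orientation instead of only asking the next question.
# Illustrative sketch only; all names and data are hypothetical.

def suggest_related(issue, model):
    """Return the concepts already linked to this issue in the user's model."""
    return model.get(issue, [])

def explore(model):
    while True:
        issue = input("Issue on your mind (blank line to stop): ").strip()
        if not issue:
            break
        related = suggest_related(issue, model)
        if related:
            # "Give" rather than "ask": offer material and a suggestion back.
            print("You have previously linked this to:", ", ".join(related))
            print("Suggestion: consider which of these explain it, and which follow from it.")
        else:
            # Orientation when the system has nothing yet to give.
            print("Nothing is linked to this yet. Try stating one consequence of it.")
        model.setdefault(issue, [])

explore({"falling margins": ["price war", "new entrant"]})

Even a loop as crude as this sits at the "give" end of the spectrum; the design question for a real system is what it has that is worth giving.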

The mundane practicalities of computer use

But what is it like actually to use a computer system as part of everyday managerial work? For someone busy at work with a variety of other interruptions and demands
on their time, the availability of the system is critical to its use or non-use. Two of the most frequently cited reasons for non-use of computer systems are failure to get a connection to the computer and the difficulty of accessing terminal equipment (Bryant & Corner, 1982). As Tracz (1980) suggests: "... crucial for successful man-machine interaction on a (computer conferencing system) is complete ease of access to a terminal, ideally to one located in one's office..."

The physical access barrier is often underrated by computer professionals, who characterize it as sloth or obstinacy on the part of the user, and the provision of a computer terminal room or access point is often seen as sufficient. Computer conferencing, for example, has had to tackle these issues since the client group for the service, and the nature of the activity, militate against use in a public setting. For our work we have argued elsewhere (Eden et al., 1986) that the amount of "formal" time a manager has for planning may be very limited, and so the activity, if it is to be done at all, will be irregular and the sessions of short duration. Many of the activities of senior managers are characterized by brevity and discontinuity; there is an emphasis on verbal interactions rather than written reports, and much of the information they use to evaluate situations is never written down or coherently espoused. This is the reality in which decision support systems operate (Mintzberg, 1975). In one study it was found that senior managers typically used a database inquiry system at frequent but irregular intervals and that sessions rarely lasted more than 30 min (Eason, Damodaran & Stewart, 1975). So access to the system must be constantly available, otherwise it simply will not be able to compete with other demands in this kind of environment.

Even if the equipment is readily available, access to the system may still present an insuperable barrier, which is heightened by the contrasting need for security in such systems. A "logging on" procedure which seems quick and simple to those in a management services section can be highly irritating in a more pressured environment. Also, the personal costs of failing to "log on", or of being made to look incompetent, are more serious for senior decision makers. Our recent experiences with the use of an electronic mail system for communicating with senior managers fell down largely because of the complicated access procedures. Automating these through a batch file made little improvement, since any failure of the automatic system left the user completely in the dark as to what to do next. The consequence was that they simply did not use it!
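The point about failures that leave the user in the dark can be made concrete. The following is a minimal sketch, again in Python and purely illustrative (the host name, port and advice text are hypothetical, and it is not the batch file referred to above), of an automated "log on" step that reports failure in the user's own terms and says what to do next.

import socket

# Illustrative sketch: an automated access step that explains its failures.
# The host, port and advice wording are hypothetical.
def open_session(host="dss.example.com", port=23, attempts=3):
    for attempt in range(1, attempts + 1):
        try:
            with socket.create_connection((host, port), timeout=5):
                print(f"Connected to {host}; starting your session.")
                return True
        except OSError as error:
            print(f"Attempt {attempt} of {attempts} failed: {error}")
    # Tell the user what has happened and what to do, rather than failing silently.
    print("The system cannot be reached at the moment. No work has been lost.")
    print("Check the terminal lead, or telephone the support desk before trying again.")
    return False

open_session()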

The politics of computer decision support

Our work in tackling complex decision making has stressed the need to pay attention to the politics involved in the use of systems by individuals or within a task-oriented group. The nature of the impact of a decision support system on the effectiveness of the organization will not only be dependent upon the technical prowess of the system but also upon the ability of the designer to anticipate fully the changes in power and control. Any decision support system has both a legitimate and espoused effect on the organization, and also an illegitimate effect which can rarely be openly admitted. The latter is rarely a consequence of the unreasoning paranoia of those faced by computer systems for the first time, but is more likely to be a reasoned appreciation that the introduction of such a system has political consequences and will change the balance of power and skills within the organization. In a practical sense the use of a computer
system necessarily entails different skills to those required for effective participation in a face-to-face meeting. For example, skills related to writing and composing may be more helpful than social skills or skills of spoken rhetoric. Such a change will inevitably alter the ability (for better or worse) of individuals to influence the process.

One major area of research into group behaviour has stressed the significance of dramaturgical models (Mangham, 1979; Goffman, 1959). These researchers focus on the way in which the particular dynamics of a decision-making team predominantly influence the outcome of the meeting. If decision support becomes successfully entrenched within organizations then some of these dynamics will be replaced, and some fundamentally changed, with new machine-centred dynamics of significance. If we take these interactionist models of organizational life seriously then we shall need to concern ourselves with anticipating the importance of the sequence in which different participants interact with a support system. Similarly, individuals who are used to controlling the availability of information see its potential wider dissemination as a change in their role, and may well look for facilities which enable them to pursue their role without change (Hiltz, 1978). Also, the "experts" who run the decision support system have a different and potentially more powerful role than previously.

In addition to the above issues of changes in the balance of power, the medium itself is influential in changing the nature of the event. For example, the formal and "permanent" nature of displayed or printed text both alters the things that can be said, and the way in which they are said (in a dramaturgical sense it changes the scripts and characters). Although a number of ingenious ways in which users have attempted to make computer communication more emphatic have been reported, it is a difficult medium in which to convey emphasis, humour, scorn, and all the other expressions achieved by manner, gesture, and tone of voice. These imbalances create tensions, and to the extent that a user is significantly disadvantaged by them he will attempt to move the "real" debate away from the machine environment; the system will then be defined as irrelevant by some and crucial by others. The effects of the computer medium are most advantageous to those with a commitment to a bureaucratic way of operating, where any attempt to reduce the impact of intuition, politics, and the social hither and thither of decision making is regarded as proper. In a bureaucratic framework a de-emphasis on social skills, and the formality and anonymity of the input, are believed to focus attention on the content of the debate and to encourage a wider level of participation and ownership of the resulting ideas. Advantages of this sort are indicated by those who report that conferencing is better for "report writing" and encourages the reaching of consensus without compromise (Rice, 1980).

These issues are the consequence of an organization consisting of purposeful individuals who have a commitment to, and stake in, their decisions. A critical part of a decision support system as we envisage it is to maintain and support the decision maker in that role, that is, to be able to make a clear distinction between machine "intelligence" and human intelligence, and to provide relevant support in a turbulent political environment.

Computers as a relevant part of our social world

When we set out to write this paper we had intended a more detailed and technical description of our COPE system. However, we soon found that we were continually
looking to justify each design decision and feature in terms of a much wider context of its intended use than a more technical paper would allow. This concern is reflected in the first and final sections of this paper. The early paragraphs attempt to explore the uneasy nature of man's "relationship" with computing machinery, which is reflected in many of the difficulties of, and varied reactions to, the use of computer systems, and the final sections consider the use of a decision support system as an intrinsically political activity. We have argued that unless the system design addresses these fundamental issues then no amount of careful "human factors" design will turn the system into one which is "really" used. Our understanding of the role that computers can play should be enhanced by paying attention to these ways in which they can participate in our social world.

But we must tread carefully here, for in setting up a tool to aid decision making we enshrine a view of what is involved in decision making which is inevitably a reduced and limited view of the process, and the danger is that this representation then alters our view of the process itself. We can be caught in a feedback loop which continually diminishes our view of our own faculties, and some computer-based approaches to decision aid reflect this particularly vividly, to the extent, for example, that there are now psychological theories of man "as just an information processor". As Weizenbaum argues, there is a real issue of what the computer will do to man's view of himself (Weizenbaum, 1980).

This diminishing feedback is most graphically portrayed in Thomas Hardy's novel Tess of the D'Urbervilles, where the arrival of a mechanical baler drastically transforms the farm workers' definitions of themselves and the task of haymaking. The engineer, particularly, defines himself solely in relation to his machine. "What he looked he felt. He was in the agricultural world, but not of it. He served fire and smoke... his thoughts being turned inwards upon himself, his eye on his iron charge, hardly perceiving the scenes around him, and caring for them not at all: holding only strictly necessary intercourse with the natives, as if some ancient doom compelled him to wander here against his will in the service of his Plutonic master" (Hardy, 1891).

The parallel between this passage and Tracy Kidder's description of workers developing a new minicomputer, where the intensity of their work creates a social world which revolves around and is primarily defined by the nature and needs of the new machine, is quite striking, and a powerful reminder of the ways in which technology reimposes our ideas on ourselves in a restrictive and diminished way. In Kelly's terms technology seems to foster pre-emptive constructs, i.e. constructs which do not admit the elements they define to be within other constructs. They create unidimensional and categorized interpretations of experience, for example, "man is just an information processor" (Kidder, 1981; Kelly, 1963).

In design we need therefore to create a technology which can avoid these problems, and can enable the decision maker to use the system and still retain a realistic view of the task, and not be beguiled by "magical computer solutions". The user must stay in touch with the full variety and complexity of the decision task (Eden, 1976).
This necessarily includes an attempt to broaden the view of decision support by stressing the practical and political realities with which it has to cope. The whole event must include a recognition of this fact, and include facilities which enable users to behave politically. Anything less creates a diminished view of the task, and leads to the system's non-use.

Whilst an appreciation of the potential problems of the use of decision support systems is critical to finding ways of making these activities practically viable, it would
be wrong to over-stress the negative aspects of technology. Much of our work is a practical demonstration of our belief that computer power can aid human reason. We finally, therefore, describe what it should be like to use a decision support system. The form of this description is based partly on what Carroll has called an "exploratory environment" (Carroll, 1982).

You should look forward to using a decision support system and not see it as a chore. When you do something you should get some response, and as you use it there should be a sense of becoming skilled with the tool in the prosecution of a task. You should not be worried if there are things that you do not understand, nor should you "live in fear" of causing a disaster. You should be able to design and plan your own actions, and you should be able to learn about this as you go along. Much of what you learn should be applicable throughout the system, and you should not need to refer to manuals too often. If you get stuck or bored, new ideas and opportunities should arise spontaneously. Overall you should feel that you have some control of the system, and that when using it, you are getting some informed and relevant help with your task.

References

ALVEY COMMITTEE (1982). A Program for Advanced Information Technology: the report of the Alvey Committee. London: H.M.S.O.
AXELROD, R. (1976). Structure of Decision. Princeton: Princeton University Press.
BARBARIC, A. J. (1981). A decision support system based on mental representations. Working Paper WP 81-25. Laxenburg, Austria: IIASA.
BLANNING, R. W. (1984). Conversing with management information systems in natural language. Communications of the ACM, 27, 201-207.
BONHAM, G., SHAPIRO, M. J. & NOZICKA, G. T. (1976). A cognitive process model of foreign policy decision making. Simulation and Games, 7, 123-151.
BOSSEL, H., Ed. (1977). Concepts and Tools of Computer Assisted Policy Analysis, Vols 1-3. Basle: Birkhauser.
BOXER, P. J. (1979). Reflective analysis. International Journal of Man-Machine Studies, 11, 547-584.
BRYANT, J. & CORNER, L. (1982). Training service managers using a microcomputer. Journal of the Operational Research Society, 33, 977-982.
CANTER, D. (1984). From knobs and dials to knowledge. Design, 428, 21.
CARROLL, J. M. (1982). The adventure of getting to know a computer. IEEE Computer, 15, 49-58.
CLANTON, C. (1983). The future of metaphor in man-computer systems. Byte, 8, December, 263-280.
COOLEY, M. (1980). Children with a hammer: or viewing the world as a nail. Computer Age, 11, 4-8.
CRANFIELD-SMITH, D., IRBY, C., KIMBALL, R. & VERPLANK, B. (1982). Designing the Star user interface. Byte, 7, 242-282.
DENNETT, D. C. (1978). Brainstorms: Philosophical Essays on Mind and Psychology. Sussex: Harvester Press.
DESCARTES, R. (1637). Discourse on Method. Modern translation by SUTCLIFFE, F. E. (1968). In Discourse on Method and the Meditations. Harmondsworth: Penguin.
DREYFUS, H. L. (1979). What Computers Can't Do. New York: Harper Colophon Books.
EASON, K. D., DAMODARAN, L. & STEWART, T. F. (1975). Interface problems in the man-computer interaction. In MUMFORD, E. & SACKMAN, H., Eds, Human Choice and Computers. North Holland: Elsevier.
EDEN, C. (1976). Decision models, system design and dynamic goal systems. Journal of Systems Engineering, 4, 107-116.
EDEN, C. (1978). Computer assisted policy analysis: contributions from Germany. Policy Sciences, 9, 345-360.
EDEN, C., WILLIAMS, H. & SMITHIN, T. (1986). Synthetic wisdom: the design of a mixed mode modelling system of organizational decision making. Journal of the Operational Research Society, 37, 233-241.
EDEN, C., SMITHIN, T. & WILTSHIRE, J. (1980). Cognition simulation and learning. Journal of Experiential Learning and Simulation, 2, 131-143.
EDEN, C., SMITHIN, T. & WILTSHIRE, J. (1985). COPE User Guide and Reference Manual. Bristol: Bath Software Research.
ESPRIT (1985). ESPRIT: Europe challenges U.S. and Japanese competitors. Future Generations Computer Systems, 1, 61-68.
FEATHERINGHAM, T. R. (1977). Computerized conferencing and human communication. IEEE Transactions on Professional Communication, PC-20, 4, 207-213.
GOFFMAN, E. (1959). The Presentation of Self in Everyday Life. New York: Doubleday Anchor.
HARDY, T. (1891). Tess of the D'Urbervilles. Republished in 1978. Harmondsworth: Penguin.
HEIDEGGER, M. (1977). The Question Concerning Technology and Other Essays. Translated by LOVITT, W. New York: Harper and Row.
HILTZ, S. R. (1978). The computer conference. Journal of Communication, 28, 157-163.
HOFSTADTER, D. R. (1979). Gödel, Escher, Bach: an Eternal Golden Braid. Harmondsworth: Penguin.
JAGODINSKI, A. P. (1983). A theoretical basis for the representation of on-line computer systems to naive users. International Journal of Man-Machine Studies, 18, 215-252.
JENKINS, C. & SHERMAN, B. (1979). The Collapse of Work. London: Methuen.
JOYNER, R. & TUNSTALL, K. (1970). Computer augmented organizational problem solving. Management Science, 17, B212-B225.
KELLY, G. A. (1963). A Theory of Personality: the Psychology of Personal Constructs. New York: Norton.
KIDDER, T. (1981). The Soul of a New Machine. Harmondsworth: Penguin.
KLEIN, H. K. & HIRSCHHEIM, R. (1985). Fundamental issues of decision support systems. Decision Support Systems, 1, 5-24.
LANDRY, M., PASCOT, D. & BRIOLAT, D. (1985). Can DSS evolve without changing our view of the concept of 'problem'? Decision Support Systems, 1, 25-36.
LEVY, D. (1984a). Squaring up to the Cray. Practical Computing, 7, 7, 82-84.
LEVY, D. (1984b). The Chess Computer Handbook. London: Batsford.
MANGHAM, I. L. (1979). The Politics of Organizational Change. London: Associated Business Press.
MICHIE, D. & JOHNSTON, R. (1984). The Creative Computer. Harmondsworth: Viking.
MINTZBERG, H. (1975). The manager's job: folklore and fact. Harvard Business Review, 53, 4, 49-61.
O'KEEFE, R. M. (1984). Truly interactive: Inter Sim. Discussion Paper No. 67, Studies in Quantitative Social Science and Management Science. Canterbury: University of Kent.
O'KEEFE, R. M. (1985). Expert systems and operational research: mutual benefits. Journal of the Operational Research Society, 36, 125-129.
PHILLIPS, L. D. (1984). Decision support for top managers. In OTWAY, H. J. & PELTU, M., Eds, The Managerial Challenge of New Office Technology. London: Butterworth.
PINSKY, L. (1983). What kind of "dialogue" is it when working with a computer? In GREEN, T. R. G., PAYNE, S. J. & VAN DER VEER, G. C., Eds, The Psychology of Computer Use. London: Academic Press.
POWER, D. J. (1982). Case study of the design and development of a decision support system. Working Paper, College of Business and Management, University of Maryland at College Park, U.S.A.
RICE, R. E. (1980). Computer conferencing. In DERVIN, B. & VOIGT, M. J., Eds, Progress in Communication Sciences, Vol. 2. Norwood, U.S.A.: Ablex.
SHALLIS, M. (1984). The Silicon Idol: the Micro Revolution and its Social Implications. Oxford: Oxford University Press.
SHNEIDERMAN, B. (1980). Human Factors in Computer and Information Systems. Cambridge, Mass.: Winthrop.
SMITH, A. M. R., LEE, L. S. & HAND, D. J. (1983). Interactive user-friendly interfaces to statistical packages. The Computer Journal, 21, 199-204.
SMITH, H. T. (1980). Human-computer communication. In SMITH, H. T. & GREEN, T. R. G., Eds, Human Interaction with Computers. London: Academic Press.
TRACZ, G. (1980). Computerized conferencing: an eye-opening experience with EIES. Canadian Journal of Information Science, 5, 11-20.
WEIZENBAUM, J. (1976). Computer Power and Human Reason. San Francisco: Freeman.
WEIZENBAUM, J. (1980). Where are we going? In FORESTER, T., Ed., The Microelectronics Revolution. Oxford: Blackwell.
WELLBANK, M. (1983). A review of knowledge acquisition techniques for expert systems. Memorandum No. R19/022/83. Martlesham Heath, Ipswich: British Telecom.
WITHERS, S. J. & HURRION, R. D. (1982). The interactive development of visual simulation models. Journal of the Operational Research Society, 33, 973-975.
WOOLER, S. (1982). A decision aid for structuring and evaluating career choice options. Journal of the Operational Research Society, 33, 343-351.