Int. J. Man-Machine Studies (1989) 31, 323-347
Pictorial dialogue methods

P. G. BARKER AND K. A. MANJI

Interactive Systems Research Group, School of Information Engineering, Teesside Polytechnic, County Cleveland, UK

(Received January 1988, and in revised form September 1988)
Human-computer communication provides the basic mechanisms by which computer users are able to express their requirements and influence the mode of operation of sophisticated information processing machines. In the past, textual dialogue has been the primary mode of facilitating such communicative encounters. Increasingly, pictorial dialogue methods are being employed in order to overcome some of the limitations and inefficiencies of textual exchange. This paper describes and discusses some of our work relating to the use of pictorial dialogue methods to support: (1) end-user interaction with electronic books; (2) mixed-mode consultations with expert systems; and (3) multi-media instruction through the use of computer assisted learning techniques.
1. Introduction

The world in which we exist consists of a complex assembly of interacting systems. Each of these may have a large number of possible goal states and/or objectives that may either support or conflict with each other. In his systems map of the universe, Checkland (1972) identifies five basic types of system. These are: natural systems; transcendental systems; designed abstract systems; designed physical systems; and human-activity systems. It is this latter class of system which is of primary importance in the context of: (1) the application and use of advanced information processing technologies within human societies; and (2) the design of appropriate human interfaces to these technologies. There are a number of approaches to designing interfaces to computer-based technology. One approach is through the use of serial communication channels such as text and speech. An alternative method is via the use of highly parallel communication channels such as those involved in pictorial communication (Kindborg & Kollerbaur, 1987) or multi-media techniques that combine a number of channels simultaneously (Naffah & Karmouch, 1986; Weinstein, 1987). We have argued elsewhere about the limitations of conventional text as a communication medium for use in time-critical applications (Barker, 1987a; Barker, Najah & Manji, 1987). Our major criticism of it was its relatively slow rate of assimilation due to its inherently serial nature. Another important limitation is its lack of expressive capability. Thus, the printed form is unable to communicate (in an effective way) many scientific and engineering phenomena such as: flight; flow and fluid dynamics; molecular structure; weather conditions; surface and deep-lying stress patterns in objects; seismic data; radar signals; or the thermal distributions that might exist within a heat exchanger or nuclear reactor.
0020-7373/89/030323+25 $03.00/0 © 1989 Academic Press Limited

The limitations of conventional text have also been discussed and debated by
Waern & Rollenhagen (1983). They have made extensive studies of text readability and the potential benefits of CRT-based text compared with its paper-based equivalent. Within an interactive environment a number of methods and techniques can be employed to improve the utility of text as a communication vehicle. Unfortunately, no matter how text is improved, human-computer interface designers will always be subject to its serial nature. Consequently, for many applications its use is much less acceptable than many other forms of human-computer communication. This is particularly true in environments involving, for example, various types of visual design activity, computer-based training, medical diagnosis, image processing and manipulation, and the control of complex manufacturing and production processes. In many of these situations there is a great need for more technical and cognitive bandwidth. For this reason we have been making extensive studies of pictorial communication and of pictorial dialogue methods. In this paper we describe, discuss and evaluate some of the techniques that we have been using to provide low-cost, easy-to-use means of facilitating human communication with computers via the use of pictorial forms. We commence by outlining the basic nature of pictorial communication and its effects on human cognition. We then discuss some approaches to the fabrication of dialogue that is based either upon pictures alone or (more importantly) upon multi-media dialogue in which images play an important role. We then give a description of some case studies in which our methods have been used. These case studies deal with: the use of electronic books; pictorial consultations with expert systems; and the use of surrogation for the creation of new types of learning metaphor for use in interactive computer-based learning environments.

2. The nature of pictorial dialogue

There is an old proverb, possibly Chinese, which claims that “a picture is worth 10,000 words”. Although it is difficult to justify this claim on purely computational grounds, some evidence to support it has been presented (in the context of problem solving activity) in a paper by Larkin & Simon (1987). There is also some evidence to suggest that for certain types of cognitive activity (such as category matching) the time taken to understand pictures is less than the time needed to understand words (Potter & Faulconer, 1975). These findings suggest two important outcomes with respect to the utility of pictures as a communicative resource. First, pictorial forms offer a high bandwidth mechanism of communication. Second, pictures and images may be easier to assimilate than text and other forms of the printed word. Because of their importance as a communication aid this paper is concerned with the use of both conventional paper-based images and electronic imagery for initiating and developing interactive human-computer dialogues. Further descriptions and discussions of the relative merits of images as a communication mechanism have been presented by Adams (1987), Kindborg & Kollerbaur (1987) and Barker (1988c). Before discussing the more detailed nature of pictorial dialogue it would be appropriate to describe the basic nature of human-computer dialogue in terms of a simple state transition graph. Such a description is presented in Fig. 1. This diagram uses three primitive states to describe the fundamental principle underlying
FIG. 1. The oscillatory nature of human-computer dialogue. Note *: dependent upon system semantics.
communicative exchange. As far as information exchange is concerned the three important states of both the human and the computer components of the system are: transmit, receive and idle. Movement between these states is controlled by messages transmitted and received over a suitable communication channel. Within Fig. 1 the following abbreviations have been used: EOT_t = end of transmission (transmitted); EOT_r = end of transmission (received); EOM_t = end of message (transmitted); and EOM_r = end of message (received). The conventions used to implement these signalling functions will depend upon the way in which any particular human-computer system is designed and the semantics associated with the specific application that it embeds. Of course, the state transition graph shown in Fig. 1 represents system behaviour with respect to just one communication channel. In a multi-media dialogue, such a graph would naturally need to be applied to many channels simultaneously. A possible scenario for a pictorial dialogue might now proceed as follows: (1) a user presents an image, an image sequence, or an image referent to the computer; (2) this pictorially orientated input embeds the knowledge, information or data needed to sustain some ongoing application; (3) the computer extracts the relevant items from the input data and reacts accordingly; (4) where it is appropriate, the computer then synthesizes or retrieves an image (or sequence of images); and (5) this image sequence is then presented to the user. Obviously, a variety of methods exist whereby a pictorial dialogue sequence could be fabricated. For example, the user might sketch (in real time) a picture on a high resolution digitizer or employ some form of sophisticated image acquisition facility. The computer might create an image using computational graphics or it might retrieve image segments from a video disc or CD-ROM. Through the technique of
“value added imagery” these two approaches might be combined in order to optimize the efficiency of interaction. In the context of the output of information, the term “visualization” is often used to describe computational processes that analyse data and information and then present it in a pictorial format that is easily, rapidly and, hopefully, unambiguously interpreted. For their successful realization, visualization processes depend critically upon the effective use of both graphics and image processing technologies. Unfortunately, at present, one of the most difficult aspects of exploiting visualization facilities is the provision of conceptual/semantic processing resources needed to (a) derive concepts from “massed” (and possibly unstructured) data; and (b) interconvert between different representations of a given concept. The mode of implementing pictorial dialogue depends critically upon the nature of the image processing equipment that is available and the characteristics of the application involved. For example, in the context of expert systems one simple way of implementing pictorial dialogue is through the use of pointing operations using either paper-based or CRT based images (for user input) and video disc images as the source of the advice that is offered during the course of the consultation. It is these techniques that are described in the case studies that are described later in this paper. Pictures, image segments and iconic forms can be used in a dialogue in a variety of different ways. As we have suggested above they can constitute the only channel of communication between the computer and its user or they can form part of a multi-media integrated message that also includes sound and text. Notice that because the sonic part of a message cannot be seen its presence is often represented through the use of a graphic icon. 
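The oscillatory transmit/receive/idle exchange described earlier (Fig. 1) can be illustrated with a small state machine. The following Python sketch is only indicative: the transition table, signal names and the wake-up convention are assumptions for illustration, since (as noted above) the actual signalling conventions depend upon the semantics of the particular application.

```python
from enum import Enum

class State(Enum):
    IDLE = "idle"
    TRANSMIT = "transmit"
    RECEIVE = "receive"

class Party:
    """One side (human or computer) of a single communication channel."""
    def __init__(self):
        self.state = State.IDLE

    def handle(self, signal):
        # EOT_t: we finished transmitting -> wait to receive the reply.
        # EOT_r: the other side finished -> our turn to transmit.
        # EOM_t / EOM_r: the whole message is over -> return to idle.
        transitions = {
            (State.IDLE, "wake_up"): State.TRANSMIT,
            (State.TRANSMIT, "EOT_t"): State.RECEIVE,
            (State.RECEIVE, "EOT_r"): State.TRANSMIT,
            (State.TRANSMIT, "EOM_t"): State.IDLE,
            (State.RECEIVE, "EOM_r"): State.IDLE,
        }
        # Unrecognized signals leave the state unchanged.
        self.state = transitions.get((self.state, signal), self.state)
        return self.state
```

In a multi-media dialogue one such object would be maintained per channel, with the channels stepping independently but simultaneously.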
For example, in a multi-lingual computer-assisted learning package the user might select a suitable accompanying audio narrative by touching an icon (such as a flag or some other national emblem) that represents the language to be used. From what has been said above it is easy to see that within a pictorial dialogue (as in any other type of dialogue) the syntactic and semantic elements embedded within the pictorial forms that are used can be employed to perform a number of important functions simultaneously. In other words, the different elements contained within an image will each perform a slightly different function. Of course, this is one of the reasons why there are so many psycho-motor and cognitive benefits associated with the use of pictures. Some of the functions that the components of a pictorial message have to perform are depicted schematically in Fig. 2. The first function, and perhaps the most important, is the transmission of the message itself. The second function is the setting of any context(s) that may need to be taken into account when the message is interpreted. Of course, the image itself is also likely to embed the rules that are to be used in its interpretation; this is the third function of the picture components. If the image is reactive then it must also contain mechanisms for controlling its own presentation and delivery. Finally, certain parts of the image might be used to communicate aesthetic values. Examples of some of the many functions that a pictorial multi-media interface might perform are depicted within the paper-based device overlay that is illustrated schematically in Fig. 3. This overlay is intended to be used in conjunction with a “concept” keyboard (see below) that supports low resolution pointing activity. This
FIG. 2. Basic functions of a reactive pictorial form. (Labelled components include message content, aesthetic content and control functions.)
activity is achieved by means of finger touch operations on the surface of the overlay, which, in turn, is mounted in a suitable position on top of the concept keyboard. Essentially, the overlay makes available pictorial/iconic menus. Depending upon where the pointing operations are made, a different context needs to be applied to their interpretation. Thus, pointing operations made on the upper part of the overlay are used for the specification and selection of information. However, pointing operations made on the bottom part are used to control the presentation of the information. The rules governing the way in which control operations can be used are embedded in the pictorial syntax diagram shown in the picture control and
FIG. 3. Multiple functional roles within paper-based interfaces. (Overlay topics include disc manufacture (LaserVision and robotics), information storage (the Megadoc system), new video cameras and the Taiwan culture centre; control areas include PREVIOUS, FREEZE, NEXT PICTURE, CHAPTER control, REPEAT CHAPTER, RESTART, EXIT CHAPTER and the prompt “To freeze a picture press FREEZE”.)
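The context-dependent interpretation of touches on such an overlay can be sketched as a simple region-keyed dispatch. This Python sketch is hypothetical throughout: the cell coordinates, topic names and control labels are assumptions loosely based on the overlay of Fig. 3, not a description of the actual system.

```python
# Upper part of the overlay: specification and selection of information.
SELECTION_TOPICS = {
    (0, 0): "disc manufacture",
    (0, 1): "information storage",
    (0, 2): "new video cameras",
}

# Lower part of the overlay: control of the presentation.
CONTROL_ACTIONS = {
    (7, 0): "PREVIOUS", (7, 1): "FREEZE", (7, 2): "NEXT PICTURE",
    (7, 3): "REPEAT CHAPTER", (7, 4): "RESTART", (7, 5): "EXIT CHAPTER",
}

def interpret_touch(row, col):
    """Apply a region-dependent context to a raw concept-keyboard touch."""
    if (row, col) in SELECTION_TOPICS:
        return ("select", SELECTION_TOPICS[(row, col)])
    if (row, col) in CONTROL_ACTIONS:
        return ("control", CONTROL_ACTIONS[(row, col)])
    return ("ignore", None)
```

The point of the sketch is that the same physical act (a finger touch) is interpreted under a different context depending on where on the overlay it lands.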
chapter control areas of the interface. The grammar of interaction that this interface supports is discussed in more detail later in the paper. It is important to emphasize that the aesthetic values associated with colour and touch are not portrayed in Fig. 3 because of the limitations inherent in “black and white” paper-based reprography. The actual interfaces that are used are highly coloured (in order to bring out different functionality) and “nice” to touch (in order to make them pleasant to use). As another example of the multiplicity of function that can be embedded within a multi-media interface, consider the use of a “bit-mapped” screen facility. Imagine that such a high resolution display device is fitted with a touch sensitive overlay facility (Pickering, 1986) that is capable of responding to certain types of touch operation made upon its surface by its user. From the interface designer’s point of view a facility of this type offers a very attractive communication medium. Figure 4 depicts a typical spatial arrangement of display items within such a communication facility. In keeping with Figs. 2 and 3, this diagram again brings out the idea of parallelism within an image-based multi-media message system. However, unlike Fig. 2, this interface is, of course, CRT-based and so can be made reactive; that is, it can be made to respond in a dynamic way to touch operations made on its surface. By “dynamic response” we mean the ability of the interface to change its own form and appearance. Within Fig. 4 the arrows that are shown denote locations within the text and/or image segments at which hyper-text (or hyper-image) branches may be taken (Conklin, 1987). It is important to emphasize that the picture segment illustrated in Fig. 4 could contain either static or animated images. For example, at some particular instant in time the picture segment might contain a photo of an automobile engine. Touching a certain part of the engine might then cause a close-up view of a particular component to appear. Touching another option might cause the operation of that component (such as a fan, a pump or a motor) to be displayed, possibly accompanied by appropriate sound effects.

FIG. 4. Parallelism within a CRT-based image. (The display comprises a bit-mapped CRT containing a text window and a picture segment.)

Most of the discussion in this section has concentrated on the use of images and pictorial forms for display purposes or in order to control resources via direct manipulation. Obviously, a truly pictorial dialogue is likely to involve substantial image analysis. There are numerous reasons why images need to be analysed by automatic means. Many of these are concerned with security and surveillance, robot vision, and studies in automation (Agin, 1980; Brady, 1982; Bhanu, 1987). The basic paradigm upon which most work on image analysis is based has been described in some detail by Rosenfeld (1984). From the point of view of pictorial dialogue one of the major objectives of image analysis is to locate and identify its syntactic and semantic elements and the relationships between them (if any exist) so that meaning can be extracted from the image. A number of ways might be used to implement the analysis depending upon the source of the image, its quality, where it is resident and whether or not the analysis is to take place in real time, that is, as the image is being generated or acquired. In order to illustrate the wide variety of possibilities that exist some examples will be cited. We have already mentioned the possibility of analysing sketches in real time and the interpretation of line drawings (Barrow & Tenenbaum, 1981). Another approach (involving high degrees of parallelism) is the analysis of images that are being created in real time from a number of different data sources simultaneously. Partial image analysis can also be conducted; an example of this approach is the use of an image scanner to extract particular image segments from a paper-based image (Barker & Manji, 1988).
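The reactive picture segment described above, in which touching part of an engine image yields a close-up view or an animation, amounts to hit-testing touch positions against “hot-spot” regions of the image. A minimal sketch follows; the region coordinates and action names are invented for the example.

```python
class HotSpot:
    """A rectangular reactive region within a picture segment."""
    def __init__(self, x, y, w, h, action):
        self.x, self.y, self.w, self.h = x, y, w, h
        self.action = action

    def contains(self, px, py):
        return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h

# Illustrative hot spots for the automobile-engine example.
ENGINE_IMAGE = [
    HotSpot(10, 10, 40, 30, "show close-up of fan"),
    HotSpot(60, 10, 40, 30, "animate pump, with sound"),
]

def on_touch(px, py, hot_spots=ENGINE_IMAGE):
    """Return the action for the first hot spot containing the touch, if any."""
    for spot in hot_spots:
        if spot.contains(px, py):
            return spot.action
    return None
```

An animated or self-modifying interface would additionally replace the hot-spot list when the displayed image changes, which is one way of reading the “dynamic response” property discussed above.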
Very often in order to extract the items of interest from an image it is necessary to embed “hooks” (or markers) in the picture so that the software can “pick up” the items that it is looking for. We use this approach to study the “meaning” in pictures that are composed from various types of icons that bear different kinds of spatial relationship with each other (Barker & Manji, 1987).
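The kind of spatial-relationship analysis described above can be sketched as the derivation of relational facts from the marked positions of icons. The relation vocabulary and icon names below are assumptions for illustration (screen coordinates are used, with y increasing downwards, so a smaller y means “above”).

```python
def relations(icons):
    """icons: dict mapping icon name -> (x, y) centre position.
    Returns (a, relation, b) triples describing the icons' spatial layout."""
    facts = []
    names = sorted(icons)
    for a in names:
        for b in names:
            if a == b:
                continue
            ax, ay = icons[a]
            bx, by = icons[b]
            if ax < bx:
                facts.append((a, "left-of", b))
            if ay < by:  # smaller y is higher on the screen
                facts.append((a, "above", b))
    return facts
```

Facts of this kind can then be matched against domain rules, so that the “meaning” of a composed picture is read off from how its marked icons are arranged.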
3. Fabricating pictorial dialogue

We suggested in the previous section that a variety of different ways of generating pictorial dialogue currently exist. The approach used in any given situation is strongly influenced by the nature of the images involved, by the amount of effort needed to generate/analyse the pictorial forms that are employed and, of course, by any financial constraints that might influence the type of equipment that is used. Increasingly, there is a need to utilize low-cost equipment that is reliable, resilient and easy to use. Because it would be inappropriate to offer a comprehensive review of pictorial dialogue generation techniques, in this section we concentrate on describing the methods we have been employing within the case studies that are presented later in the paper. Generally, the control of end-user interfaces within an interactive system is delegated to a user interface management system, or UIMS (Bennett, 1986). This is the approach that we have adopted in the case studies that are presented in section 4. The major components of our UIMS are: a database
management system, a library of device support routines for handling the different workstation peripherals and a library of application software modules that run within the workstation. By means of the data held within the database facility, the UIMS is able to set up linkages between end-user interface subsystems and the application software that depends upon these interfaces for their operation. Application modules may need to access any of a pool of application support devices that are relevant to their specific application domain. Communication between any particular support device and an application module is mediated by the overall workstation control system (OWCS) via a communication subsystem. The relationship between the various system components is illustrated schematically in Fig. 5. The basic architecture embedded within this diagram has been used as a basis for the production of a number of different workstations. In addition to a conventional keyboard, the main types of interaction peripheral that we employ within our
FIG. 5. Interface control through a UIMS. (Components include interaction devices, an application module, support devices, a database, interface support routines, interface support data and application software.)
workstations are: concept keyboards, digitizers, touch screens and a mouse. This range of pointing (and drawing) devices allows both low and high resolution selection (or drawing) operations to be made in conjunction with both paper-based and CRT-based images. Such operations are used for textual and pictorial menu selection (Arthur, 1986; Koved & Shneiderman, 1986; Barker & Najah, 1985), for implementing direct manipulation techniques (Shneiderman, 1987) and for the generation of multi-media messages (IEEE, 1985). Based upon the peripheral devices listed above we are able to identify six classes of pictorial/multi-media image-based interface facility that might be used to fabricate end-user dialogue within our workstations. These are: (A) paper-based interfaces to support menu selection based upon stylus pecking operations; (B) paper-based interfaces to support menu selections made by touching with a finger; (C) screen-based interfaces to support menu selection based upon cursor manipulation using a mouse; (D) screen-based interfaces to support item selection operations based upon touching a screen with a finger; (E) screen-based interfaces to support area specification based upon “ring and click” operations using a mouse; and (F) screen-based interfaces to support area specification based upon the use of “boxing” operations. The first of the above types of interface has been described in considerable detail in a previous paper (Barker et al., 1987). Because it is based upon points in a plane this type of interface offers a very large menu selection address space. Type B interfaces are somewhat analogous to the previous class except that no stylus is needed. Furthermore, the menu selection address space is much reduced because it is based upon the use of a highly structured set of discrete areas within a plane.
The devices (concept keyboards) that we use to support this type of interface usually offer 128 or 256 discrete rectangular areas; the size of the individual areas depends upon the dimensions of the concept keyboard that is employed. A typical example of a paper-based device overlay for use in a concept keyboard has been presented earlier in Fig. 3; using an A4 size keyboard the individual touch areas (shown shaded in Fig. 3) are each about 1.8 cm square. Type C interfaces are well documented in the literature (Shneiderman, 1987; Parker, Kennard & King, 1987). A similar comment applies to interfaces of Type D (Pickering, 1986; Whitefield, 1986). Examples of interfaces of type E can be found in many of the drawing and pointing packages that are currently used for preparing tables and graphical illustrations for use in integrated desk-top publishing systems. Fundamental to the successful operation of this type of package are a number of quite sophisticated mouse-based object manipulation dialogues; these involve several different kinds of primitive specification operations. From the point of view of the dialogues used within our workstations (see section 4.3) two of the more important of these are (1) the ability to generate an arbitrary open or closed planar geometrical shape; and (2) movement of that shape around the display area of the CRT screen using a mouse-based “dragging” dialogue. Our use of this type of operation is for the generation of closed geometrical figures (called rings) that specify an area of an image that is to be manipulated or processed in some way. Because the operation of a ring involves a sequence of mouse-move and button-clicking tasks, this type of interaction protocol is often referred to as “ring and click”. Interfaces of type E allow arbitrary image segments to be specified and
manipulated. In many situations, however, it is more convenient to specify an image segment by generating a rectangular box that contains the area of interest. The tasks involved in generating a box, changing its shape and moving it around the screen to capture the image segment are referred to as “boxing” operations. Each of these primitive operations is implemented by means of appropriately designed “point and click” and dragging dialogues using a mouse.
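The “boxing” operation just described can be sketched as the reduction of a press-drag-release mouse event sequence to a rectangle. The event representation below is an assumption made for the example; real windowing systems deliver equivalent press, move and release events through their own APIs.

```python
def box_from_events(events):
    """events: list of (kind, x, y) tuples, kind in 'press', 'drag', 'release'.
    Returns the final box as (left, top, width, height), or None if no box
    was started."""
    start = end = None
    for kind, x, y in events:
        if kind == "press":
            start = end = (x, y)        # anchor corner of the box
        elif kind in ("drag", "release") and start is not None:
            end = (x, y)                # opposite corner follows the mouse
    if start is None or end is None:
        return None
    left = min(start[0], end[0])
    top = min(start[1], end[1])
    return (left, top, abs(end[0] - start[0]), abs(end[1] - start[1]))
```

The resulting rectangle is what would be handed on to the image-manipulation software as the captured image segment; a “ring and click” dialogue differs only in accumulating an arbitrary polygon of clicked vertices rather than two corners.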
3.1. DIALOGUE DESIGN CONSIDERATIONS
When designing user interfaces for use within a workstation attention must be given to a number of important issues. Amongst these we include: (1) the design of appropriate icons for selecting various system functions or specifying particular processes that are to be initiated within particular end-user applications; (2) deciding upon the relative utility of paper-based versus CRT-based interfaces for any given application (where a choice is available); (3) determining the relative merits of the various interaction devices (again, where a choice is possible); and (4) layout and presentation considerations for multi-dimensional and/or multi-media messages that are displayed upon either a single screen or multiple screens. Useful sets of guidelines for the design and utilization of icons have been presented by both Gittins (1986) and Rubinstein & Hersh (1984). Most of the comments and recommendations made by these authors relate to the use of screen-based icons. Our own work has involved the use of both screen-based and paper-based icons. Our screen-based icons have been produced by means of a sprite editing system. The icons that are produced can be utilized in a static way by positioning them on a screen and then making them reactive (either by means of a touch screen or by mouse pointing operations). Our CRT-based icons can be made dynamic by using simple turtle graphics operations to move them around the screen. Unlike the CRT-based icons, the paper-based icons are reactive but not dynamic. Guidelines for choosing between paper-based and CRT-based interfaces are not well documented in the literature. Consequently, we feel that this is an area that is worthy of more in-depth study. As we discuss later, we believe that certain isomorphisms exist between the two types of media so that guidelines for the use of one medium may be carried across to situations involving the use of the other.
Provided this is so, choosing between the use of a particular medium in any given design situation could then be based upon a relatively small number of criteria. A number of studies have been made of the relative merits of different types of interaction device. The studies by Ewing, Mehrabanzad, Sheck, Ostroff & Shneiderman (1986), Whitefield (1986), Karat, McDonald & Anderson (1986) and Whitfield, Ball & Bird (1983) are typical of those reported in the recent literature. Unfortunately, although some of this work includes studies of conventional keyboards, there is little documented work on the use of a concept keyboard. Consequently, this is an area where we hope to develop evaluation techniques and, hence, design guidelines. CRT screen layout and presentation considerations have been discussed by a number of researchers such as Norman, Weldon & Shneiderman (1986), Rubinstein & Hersh (1984), Jenkin (1982) and Moreland (1983). Our own interests in this topic lie in the area of evaluating the use of special effects that can be produced
through the use of a real time video effects frame store (Videologic, 1987). Such a frame store allows the rapid digitization of video images that originate from a video disc, a video camera or some other source. Once digitized, the images can be manipulated and processed in various ways and then presented on a high resolution screen similar to that depicted in Fig. 4. Using an arrangement of this type it is possible, for example, to use images from a video disc as part of a windowed screen display or (through image compression) to present multiple images on a segmented screen display. A wide range of other special presentational effects are also possible such as wipes, pushes, pulls, reveals, conceals, zoom and so on (Barker & Yeates, 1985). As we discuss later, it is our intention to use a frame store of this type to investigate dialogue design guidelines for use in electronic books and sophisticated workstations that support interactive learning activities.
4. Some case studies
In the previous sections of this paper the nature of pictorial dialogue has been discussed and some methods for its implementation have been outlined. In this section we present some case studies that describe some of the work in which we have been involved and which incorporates these approaches to pictorial dialogue.

4.1. ELECTRONIC BOOKS
An “electronic book” might be defined, in an informal way, as an organized collection of data, information, and knowledge that has some binding themes and which is accessed by means of some form of computer system (Barker & Manji, 1988b). Techniques for the construction of electronic books have been described by a number of researchers (Weyer & Borning, 1985; Yankelovich, Meyrowitz & van Dam, 1985; Morariu & Shneiderman, 1986). Such books may be based upon the use of conventional magnetic and electronic computer storage facilities or they may utilize optical storage methods involving the use of video disc or compact disc read-only memory. The electronic books that have been produced to date have involved both the use of a single medium (text or graphics) and multi-media communication techniques. In the electronic books that we have been constructing, we have employed optical laser disc as the storage medium to hold collections of static and animated images that together constitute an electronic encyclopedia. The arrangement of our system is depicted schematically in Fig. 6. The system is constructed from four basic components: an optical laser disc player; a controlling microcomputer; a concept keyboard; and a conventional television set. The microcomputer is interfaced to the video disc player and controls all its operations by passing appropriate control codes over the RS-232-C interface by which the two devices are linked (Barker, 1985). The output from the video disc is passed directly to the television set. This has been fitted with a special decoder that enables textual information and simple graphics to be displayed on its screen at the same time as video images from the disc are being displayed. This overlay information is encoded into the signal sent to the television by the video disc player as a result of special commands sent to it from the microcomputer.
Thus, the overlay information that appears on the television screen is under the direct control of the microcomputer.
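The microcomputer's control of the video disc player can be sketched as the transmission of short command strings over the serial link. The command codes below are placeholders, not those of any real player (each player has its own control protocol); the port argument can be any object with a `write` method, and a stand-in port is included so the sketch can be exercised without hardware.

```python
def search_frame(port, frame):
    """Ask the player to seek to a given frame and display it.
    'FR...SE' is a hypothetical command format, for illustration only."""
    command = f"FR{frame:05d}SE\r"
    port.write(command.encode("ascii"))

def show_still(port):
    """Hold the current frame as a still picture (hypothetical 'ST' code)."""
    port.write(b"ST\r")

class FakePort:
    """Stand-in for an RS-232-C port, useful for testing without a player."""
    def __init__(self):
        self.sent = b""
    def write(self, data):
        self.sent += data
```

In the real system the same `write` calls would go to a serial port object opened on the RS-232-C line, and further commands of this kind would trigger the player's text/graphics overlay encoding.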
FIG. 6. Basic arrangement for an electronic book. (Components include a video disc player, a microcomputer, a television set and an A2 concept keyboard.)
A large A2 concept keyboard is also interfaced to the microcomputer. This controls all user interaction with the electronic book. Indeed, as far as the user is concerned, the only two visibly apparent components of the system are the television screen and the concept keyboard that is positioned in front of it. Users interact with the electronic book by means of overlay devices similar to that which was depicted previously in Fig. 3. A formal description of the type of dialogue that interfaces of this kind support is presented in Table 1. In this table an augmented BNF-like metanotation is used to express the nature of the basic steps, objects and tasks involved in using our electronic books. It is important to realize that in this definition emphasis has only been given to the “touch part” of the dialogue. No attempt has been made to formally describe the “watching” operations associated with the pictorial dialogue. Each video disc that constitutes an electronic encyclopedia can have either a single concept keyboard overlay or a collection of such overlays associated with it. Device overlays are designed by the users themselves and so represent their particular views of the information and knowledge that is embedded within any particular encyclopedia. Users may select: (1) their own topic titles; (2) what information they choose to include within these topics; and (3) the actual control operations they wish to embed within their particular interfaces. Within a multi-user system it is important to maintain control over both the usage of video discs and the pictorial interfaces that go with them. For this reason the user interface management system that is embedded within the control software of the encyclopedia system is heavily dependent upon the use of a database management facility (Barker & Najah, 1985; Barker et al. 1987). 
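The UIMS's role in matching overlays to discs and users can be sketched as a keyed lookup with a per-disc default, standing in for the database management facility mentioned above. All the records and file names here are invented for illustration.

```python
# Hypothetical overlay registry: (disc identifier, user) -> overlay definition.
OVERLAYS = {
    ("encyclopedia-1", "default"): "overlay-std.def",
    ("encyclopedia-1", "manji"): "overlay-manji.def",
}

def overlay_for(disc, user):
    """Prefer the user's own overlay design; fall back to the disc's default.
    Returns None if the disc has no registered overlays at all."""
    return OVERLAYS.get((disc, user)) or OVERLAYS.get((disc, "default"))
```

The design choice reflected here is the one described in the text: overlays are user-designed views onto a disc, so the mapping must be controlled centrally if several users (and several discs) are to coexist.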
The design of the video discs and the supporting database software for our electronic books is discussed in more detail elsewhere (Barker & Manji, 1988b; Manji, 1988). We have not yet made any formal evaluation of the users’ acceptability of our interfaces to the electronic books. However, we have used books of this type quite widely in a number of different contexts. Our observations suggest that these interfaces are easy to prepare and easy to use. A major objective of our future research with this system will therefore be a more formal evaluation of the interface system.
TABLE 1
Formal definition of a touch dialogue with an electronic book

<dialogue> ::= <start-up> <dialogue-body> <termination>
<dialogue-body> ::= [ <watch-part> <touch-part> ]^N
<touch-part> ::= <valid-context>
<valid-context> ::= [ <valid-touch-sequence> | <timeout-exit> ]
<valid-touch-sequence> ::= [ <selection> | <control> | <selection> <valid-touch-sequence> | <control> <valid-touch-sequence> ]
<selection> ::= <shaded-box-touch>
<control> ::= <control-touch-sequence>
<control-touch-sequence> ::= [ <no-loop-touch> | <loop-touch> ]
<loop-touch> ::= <freeze-touch> <control-touch-sequence> | <repeat>
<no-loop-touch> ::= [ <prev> | <next> | <exit> | <restart> ] [<no-loop-touch>]^M
<watch-part> ::= [ <screen-gaze> | <keyboard-gaze> ] etc.
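Read operationally, the touch grammar of Table 1 amounts to accepting any non-empty run of selection and control touches. The flattened recognizer below is an illustrative assumption (the token names mirror the nonterminals), not the authors' implementation.

```python
# Minimal recognizer for the touch part of the Table 1 dialogue grammar.
# A valid touch sequence is one or more selections or control touches.

SELECTIONS = {"shaded-box-touch"}
NO_LOOP = {"prev", "next", "exit", "restart"}
LOOP = {"freeze-touch", "repeat"}

def is_valid_touch_sequence(tokens):
    """True if every token is a recognized selection or control touch."""
    if not tokens:
        return False
    return all(t in SELECTIONS | NO_LOOP | LOOP for t in tokens)

assert is_valid_touch_sequence(["shaded-box-touch", "freeze-touch", "next"])
assert not is_valid_touch_sequence(["mouse-click"])
```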
4.2. EXPERT SYSTEMS
An expert system consists of three basic components: an inference engine, a knowledge base, and a conversational facility (Barker, 1988c). The knowledge base is created by means of an appropriate knowledge representation language (KRL). The conversational facility is sometimes called a consultation shell; its function is depicted schematically in Fig. 7. Such a shell provides the mechanism by which users express their problem parameters and obtain help and guidance with respect to the solution of the problems they have within the domain that the expert system covers. The KRL and consultation shell together provide two of the most important end-user interfaces to expert system software. For the majority of currently available expert systems the end-user interfaces referred to above usually take the form of a dialogue involving textual exchange. That is, the user employs a keyboard device in order to type a character string that embodies some aspect of the consultation domain; the computer then makes an appropriate response upon a CRT screen, again based upon the use of text. Dialogues of this form place severe limitations on human-computer communication in terms of both (1) transmission bandwidth and (2) the nature of objects that the user can "talk" about. Two alternative approaches towards a more efficient interaction are the provision of various types of natural language interface (such as speech, writing or gestures) or the use of pictorial/graphical interfaces. In this case
FIG. 7. Multi-media consultation dialogue.
study we are concerned with the latter approach. Pictures represent a high bandwidth communication medium. Therefore pictorial dialogue (based upon the use of icons, direct manipulation, image segments and complete pictures) could provide an opportunity to overcome some of the limitations of text-based systems. When this type of dialogue is combined with knowledge that is encoded in pictorial form, sophisticated types of expert system can be constructed (Barker & Manji, 1987). The remainder of this case study describes our progress towards the realization of such systems based upon workstation environments that are capable of supporting a variety of different types of mixed-mode, multi-media dialogue involving the use of pictorial forms.

We have constructed a number of workstations in order to enable us to study the problems associated with the provision of expert system consultation dialogues. These workstations allow a variety of different peripherals to be used in order to generate human-computer dialogue by means of appropriate interfaces and interaction protocols.

Obviously, from what has been said above, user-interface control is an important aspect of virtually all expert systems. In general two aspects of expert system UIMS software will be of major concern to the user. First, the facilities that it provides for knowledge representation. Second, the dialogue control mechanisms that are made available during consultation processes. The latter must cater for two basic functions: (1) input from the user, and (2) the presentation of knowledge and data to the user by the system. We have been examining techniques to allow pictorial dialogue methods to be used in conjunction with both (a) conventional expert systems (based on textual KRLs) and (b) new types of expert system based upon the use of pictorial knowledge representation.
However, in the two workstation environments described below we only discuss pictorial dialogue to support the first of these two categories. The design and fabrication rationales for these workstations are based upon the ideas inherent in multi-media communication methodologies. Extensive studies have been made of the use of paper-based pictorial images that act as overlay documents on a cluster of interaction peripherals (Workstation 1). We have also been investigating the types of dialogue needed to support the use and control of images taken from a video disc system (Workstation 2). The use of a special workstation control language for deployment within the UIMS support software has also been investigated.
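The UIMS concern of routing input from a cluster of interaction peripherals to the consultation shell can be caricatured in a few lines. Every name below (the handler table, the device names, the event payloads) is a hypothetical sketch, not the actual workstation control language.

```python
# Sketch of UIMS input routing: each interaction peripheral registers a
# handler, and incoming events are dispatched to the consultation shell.

def route_event(device, payload, handlers):
    """Dispatch an input event from a peripheral; unknown devices are
    rejected explicitly rather than silently ignored."""
    if device not in handlers:
        raise ValueError(f"no handler registered for {device!r}")
    return handlers[device](payload)

handlers = {
    "concept-keyboard": lambda cell: f"select:{cell}",
    "digitizer":        lambda xy:   f"point:{xy[0]},{xy[1]}",
    "bar-code":         lambda code: f"verify-overlay:{code}",
}

print(route_event("digitizer", (120, 45), handlers))   # -> point:120,45
```

The complementary UIMS function, presenting knowledge back to the user, would be a second table mapping output channels (CRT text, video disc frames) to presentation routines.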
4.2.1. Workstation 1: paper-based interfaces
The first workstation that was constructed contained two concept keyboards (one of size A4 and another of size A2), a bar-code reader and a high resolution digitizer. There was also a connection via a local area network to a minicomputer system. The usefulness of this type of workstation lies in the fact that it allows the fabrication of human-computer dialogues that are based upon the use of pictorial forms using images and icons that are drawn on paper. The concept keyboards are used to support low resolution finger touch selections whereas the digitizer (and its associated stylus) is used when higher resolution touching operations need to be implemented (for example, with maps or circuit diagrams).

The successful operation of a workstation of this type is critically dependent upon the design of an appropriate set of paper-based interface documents (or overlays). For a particular application, a given collection of overlays is often referred to as a "document cluster" (Barker et al. 1987). Each document cluster consists of: (1) zero, one or more paper overlays for use with the digitizer; (2) a number of documents for use with the A2 concept keyboard; and (3) a set of overlays for use with the A4 concept keyboard. Each of the documents within the cluster is identified with its own unique bar-code. As individual documents are used with their supporting host device the bar-code reader is used to scan their label; the computer is therefore able to check that the correct document is in place for any specific context within any given application. Figure 8 shows a schematic illustration of the relationship between a set
FIG. 8. Pictorial interfaces to expert systems (overlay documents for locality, bird pictures, claw shapes and beak shapes, linked through a video disc and the UIMS, the user interface management system).
of overlay documents intended for use in a bird identification system. The interfaces, interaction protocols and pictorial dialogue methods used in this application are discussed in more detail elsewhere (Barker & Manji, 1987).

4.2.2. Workstation 2: using a video disc
The second workstation that we have constructed is illustrated schematically in Fig. 9. It contains a laser optical disc unit that is interfaced to a personal microcomputer system that is fitted with a hard disc to store the expert system software.

FIG. 9. An image store based on video disc.

Both the KRL compiler and the consultation shell for the expert system development tool that we have been using are written in Prolog. Within the KRL, facilities exist to embed end-user routines that are also written in this language. Consequently, the software that has been developed for controlling the video disc system is all Prolog based; however, there are some limitations in the current implementations. Images from the video disc are displayed on a conventional colour TV screen; textual dialogue associated with consultations is presented on the microcomputer CRT screen. This "twin screen" system has both advantages and disadvantages. Control of image presentation is effected by means of keyboard entries made in conjunction with a CRT-based menu. A variety of control options is available: frame freeze; single step forward; single step backward; forward at slow speed; exit the presentation; and so on. During a consultation dialogue images from the video disc can be used in two ways. First, they can be used for information gathering during the problem solving process, by showing the user pictures of possibilities that would be difficult to express textually (for example, "point to the type of spots that the patient has on his chest"). Second, at the end of a consultation dialogue, to provide the user with pictorial advice relating to the solution of the problems being
solved (for example, "follow the procedure involved in the following pictorial sequence"). Obviously, the design of optical discs to support pictorial dialogue methods for use with expert systems is a difficult task. The procedures involved are discussed in more detail elsewhere (Manji, 1988). Although each of the workstations described in this case study is operational, both are incomplete. The first requires a more substantive and rigorous approach to the development of the UIMS facility that we are using. The second requires the interfacing of peripheral devices that will permit more user-friendly styles of pictorial dialogue to be accomplished; for example, the use of a touch sensitive screen and windowing facilities to enable the simultaneous presentation of text, computational graphics and video imagery. Our future investigations will therefore be directed at overcoming each of these current limitations.

4.3. INTERACTIVE LEARNING ENVIRONMENTS
Computers have been used for educational purposes for many decades both in academic environments and in industry. Typical approaches to the use of computers for pedagogic applications have been described and discussed elsewhere (Barker & Yeates, 1985; Barker, 1987b). Recently, because of the widespread availability of powerful microcomputers and the significant developments that have been made in the technology of human-computer interaction, there has been considerable interest in the development of interactive learning systems. An interactive learning system is one that is highly participative. It provides an environment in which either a single learner or a group of collaborative learners can develop a variety of different physical and/or intellectual skills. These skills may be needed in order to perform some technical, social, personal or managerial job function. Such systems may also be used as a general educational resource within many different types of learning and training situation. In the interactive learning systems that we have been developing we have been using pictorial dialogue methods for two fundamental types of operation: courseware authoring and the implementation of new types of learning metaphor. Each of these is briefly described in this case study.

4.3.1. Courseware authoring
Courseware is the technical term used to describe software that is written for instructional applications of the computer. Barker & Yeates (1985) have given a detailed description of techniques for its preparation and a taxonomy of the different learning styles and strategies that such software might embed. The process of preparing courseware for use in an instructional situation is often referred to as "authoring". Detailed descriptions of several different types of courseware development tool have been given by Barker (1987b).
Conventional approaches to authoring usually employ an author language or an authoring system in order to embed knowledge about the subject domain that is to be taught. By using an appropriately designed learning centre, the knowledge that is embedded in the courseware can be delivered to a student or students thereby stimulating learning processes and the acquisition of new skills.
Unfortunately, by embedding knowledge in a conventional program it is often rendered inflexible. This means that: (a) it cannot easily be used for anything other than the specific application for which it was intended; and (b) it becomes difficult to make the instructional software adaptable to the individual learning needs of particular students. In order to overcome these limitations we have been exploring the possibility of developing courseware that is driven by sophisticated knowledge-based structures that allow for different views of knowledge. Such an approach means that adaptable instruction is easily feasible and the embedded knowledge is available for more than one application. The technical detail of this approach is documented elsewhere (Barker & Proud, 1987; Barker, 1988b). Consequently, in this case study we concentrate on providing an overview description of the pictorial interface to the authoring facility that is used to create the knowledge-based structures upon which our system is based. An in-depth description of the nature of these structures and the types of primitive operation needed for their creation and maintenance has been presented by Barker (1988b). For convenience, their basic format is depicted schematically in Fig. 10. Figure 10 illustrates the type of CRT-based, menu driven interface facility that we have been developing. It is based upon the use of pull-down menus, icons and mouse selection grammars involving various types of "point and click" operation. The CRT screen depicted in the upper part of Fig. 10 shows a selection operation being made. This enables the user to create a relationship between two objects in the knowledge structure that is portrayed beneath the menu area; this knowledge structure exists within a domain called 047.
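The create-relationship primitive that Fig. 10 exposes can be sketched as operations on a labelled graph. The concept labels, relation names and class below follow the figure; the data structure itself is our illustrative assumption, not the authors' implementation.

```python
# Sketch of the knowledge-based structures of Fig. 10: concept nodes
# linked by relationships that may be named or unnamed.

class KnowledgeStructure:
    def __init__(self, domain):
        self.domain = domain
        self.edges = []                     # (from, to, name-or-None)

    def create_relationship(self, a, b, name=None):
        """Link two concept nodes; the relationship may carry a name."""
        self.edges.append((a, b, name))

kb = KnowledgeStructure("047")
kb.create_relationship("C1", "C2", "R1")    # named, as in the upper screen
kb.create_relationship("C1", "C3")          # unnamed
kb.create_relationship("C3", "C4")          # unnamed
kb.create_relationship("C2", "C4", "Mary")  # named, as in the lower screen

named = [n for (_, _, n) in kb.edges if n]
print(named)    # -> ['R1', 'Mary']
```

"Threads" through the stored knowledge would then be paths over these edges, which is what makes alternative instructional routes possible.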
By means of pointing and clicking operations (using the mouse that is attached to the authoring station) the user is able to create both named and unnamed relationships between concepts that exist within the domain; concepts are represented by nodes in the graph. In the upper part of Fig. 10 the relationships between C1 and C3 and between C3 and C4 are unnamed while that between C1 and C2 is called R1. In the CRT screen shown in the lower part of the diagram another named relationship (called Mary) has been created between the concept nodes C2 and C4 of the knowledge structure. As soon as knowledge-based structures (of the type discussed above) can be created it will be possible to use them in two important ways. First, as a means of supporting different user-views of the stored knowledge; various techniques of user-modelling will be important here. Second, by means of appropriate "threads", alternative routes through the stored knowledge will be possible, thereby enabling significant flexibility to be introduced into the instructional software. We are currently exploring the nature of the courseware that is needed in order to exploit these possibilities.

4.3.2. Paradigms, metaphors and myths for interactive learning
Interactive learning systems are heavily dependent upon the "reactive media" paradigm for their successful implementation (Barker, 1988c). Such systems often embed many different types of learning metaphor. The nature of the metaphor that is used within any particular learning system is reflected through the external myths that are embedded in end-user interfaces to the system (Rubinstein & Hersh, 1984; Barker, 1988c; Barker & Manji, 1988c). We have been building a number of different types of multi-media workstation to
FIG. 10. Courseware authoring using pictures. (Two menu-driven CRT screens: (A) item selection operations and (B) item creation operations, each with Tools, Help, Media, Modes, Models and Views entries; the CREATE RELATIONSHIP prompt asks the author to specify (1) the entities involved [mouse], (2) link points [mouse], (3) an optional name [keyboard] and (4) directionality [F-key]; HELP is available.)
support interactive learning through the use of new types of learning metaphor. A multi-media workstation is one which supports the use of text, sound, graphics and a range of highly participative learning devices. The particular types of resource used in any given situation will depend upon the nature of the pedagogic processes that are to be initiated. Naturally, within these learning/training processes there is a large emphasis on realism; that is, making the learning experience as near to reality as is possible. This realism can be achieved in a variety of ways: for example, through simulation, game playing, the creation of physical models and by using surrogations. A surrogation is essentially a highly interactive pictorial simulation that depends for its success upon: rapid access to static images and animation; high-speed image manipulation; appropriate use of text and icons; and the provision of any supporting sound effects that may be necessary. The best-known examples of the principle of surrogation are surrogate experiments and surrogate travel (Barker, 1987b). Interactivity within such systems can be accomplished using
a variety of different types of peripheral, such as a concept keyboard, a mouse, a digitizer, a training panel or a touch screen. A particularly useful device for use in the fabrication of multi-media workstations is the bit-mapped screen that was described in earlier sections of this paper. When used in conjunction with appropriate software (such as a window system or an image processing package) and hardware (for example, a video frame store), bit-mapped screens can be used to produce many novel and exciting ways of interacting with a computer system. By means of suitably designed interactive dialogue and appropriate knowledge engineering techniques the student can use screens of this type to explore hyper-text and hyper-image networks; any supporting sound effects can also be accessed if and when they are needed. Of course, the creation of knowledge networks to support this type of interactive learning depends critically upon the availability of suitable media for the storage of large volumes of text, pictures and sound. In this context, optical media such as video disc and compact disc read-only memory have a very important role to play (Barker, 1987a).

The ability of a multi-media workstation to access large volumes of pictorial information is extremely important. Because this information will probably need to be shared between many users, the problem of access control needs to be solved. This can easily be achieved through the use of a database management system. Indeed, this was the approach that we adopted in the electronic books that we described in section 4.1. This approach is also used to support the new types of learning paradigm and metaphor that we are currently developing. The successful implementation of these also depends upon the availability of low-cost image processing equipment and the development of algorithms to support a variety of different types of high-speed image manipulation primitive.
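The database-backed access control just mentioned can be sketched as a simple permission check in front of the shared image store. All names here (ImageStore, the user and disc identifiers) are assumptions made for illustration only.

```python
# Sketch of DBMS-style access control over a shared pictorial image store:
# a frame is returned only if the requesting user has been granted the disc.

class ImageStore:
    def __init__(self):
        self.permissions = {}               # user -> set of disc ids

    def grant(self, user, disc):
        """Record that a user may access a given video disc."""
        self.permissions.setdefault(user, set()).add(disc)

    def fetch(self, user, disc, frame):
        """Return a frame reference only if the user may read that disc."""
        if disc not in self.permissions.get(user, set()):
            raise PermissionError(f"{user} may not read {disc}")
        return (disc, frame)

store = ImageStore()
store.grant("student-1", "disc-NH1")
print(store.fetch("student-1", "disc-NH1", 1200))   # -> ('disc-NH1', 1200)
```

In a multi-user learning centre the permission table would of course live in the database management system rather than in memory.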
The principle of surrogation which was described above is fundamental to the implementation of our new learning metaphors ("scrapbooks" and "shopping baskets"). So too is the creation of suitable types of human-computer dialogue. Our systems employ mixed-mode, multi-media dialogues that are based upon the use of mouse grammars and both paper-based and screen-based icons. Within our workstations the database facility is an important part of the UIMS. In this context it has two major roles to play. First, it keeps track of the many different kinds of interface that are employed. Second, it maintains a record of the images used within any given surrogation and, at the same time, it keeps details of any picture processing operations that were initiated during the user's interactive dialogue with the system. The type of workstation that we have been using to explore the problems associated with the development of new learning metaphors is illustrated schematically in Fig. 11. It uses a conventional keyboard, a concept keyboard, a mouse and touch screens to facilitate end-user interaction. The video disc provides high quality images that may be passed via a "genlock" facility (Barker, 1987b) to a high resolution display monitor that is equipped with a touch screen. This arrangement is used to generate "value added images" since the genlock facility enables overlay graphics (and text) created by the microcomputer to be superimposed upon images coming from the video disc. The images coming from the video disc may also be passed to an image processing unit that is able to rapidly convert the analogue video images into digital format. In this form they can be manipulated in various ways so that they can form part of multi-media messages that are displayed on the second
FIG. 11. Multi-media workstation to study isomorphism (image scanner, laser printer, database, image processing unit, and a video disc and micro exchanging image segments and overlay graphics with a high resolution display fitted with a touch screen).
screen (also fitted with a touch screen). The author of these messages creates and manipulates image segments (taken from the video disc) using suitably designed "boxing" and "ring and click" interaction protocols (Manji, 1988). Currently, we are using this equipment to generate new learning metaphors for children based upon the principle of surrogation that was discussed earlier. These allow children to create electronic scrapbooks based upon surrogate journeys that they undertake; they also allow students to make shopping expeditions around surrogate shops. Although we have not yet formally evaluated these approaches to learning, our initial impressions are that students find them exciting, motivating and fun to learn with.
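The "boxing" protocol mentioned above can be sketched as follows: the author rubber-bands a rectangle over a video frame and the UIMS records the resulting image segment. The function and field names are illustrative assumptions, not Manji's actual design.

```python
# Sketch of the "boxing" image-segment selection protocol: two corner
# points dragged by the author are normalized into a segment record.

def box_segment(frame_no, p1, p2):
    """Normalize two corner points into an image-segment record, so that
    the box is the same whichever corner the author started from."""
    (x1, y1), (x2, y2) = p1, p2
    left, right = sorted((x1, x2))
    top, bottom = sorted((y1, y2))
    return {"frame": frame_no, "box": (left, top, right, bottom)}

# Dragging from bottom-right to top-left yields the same normalized box:
segment = box_segment(2310, (180, 40), (60, 120))
print(segment["box"])    # -> (60, 40, 180, 120)
```

A record of this kind is exactly what the database facility would store against each surrogation, alongside any picture processing operations applied to the segment.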
5. Future directions In the previous sections of this paper we have described three different types of multi-media workstation each of which is capable of supporting the use of pictorial forms as part of various human-computer dialogue facilities. Currently, all three workstations are essentially prototype systems that are intended: (1) to investigate some of the problems involved in defining and using
pictorial dialogue methods; and (2) to develop and catalogue dialogue design guidelines for future applications of the techniques that are being researched. While the majority of technical problems involved in workstation fabrication and device interfacing have been solved, the development of design guidelines for pictorial multi-media dialogues still remains one of our most necessary future research objectives. Of course, there are also many other directions in which our future research in this area must progress. We discuss these in terms of the three basic workstation environments that were described in section 4.

Future development of our electronic books requires the solution of four basic problems. First, an attempt must be made to assess and document the relative merits of the various user-interface design options that we have available. Second, methods for more accurately assessing the cost of producing video discs and CD-ROMs need to be developed (probably through the use of expert systems that embed design and production expertise). Third, software tools to facilitate disc indexing need to be produced. Fourth, concept and control dictionaries need to be developed in order to facilitate the automation of human-computer interface creation by passing the design and development tasks back to the users themselves.

Our workstations for pictorially driven expert systems require significant development before they will become a useful and practical aid outside of the laboratory in which they have been developed. A major problem that we see for the future is the development of techniques for easily encoding knowledge in pictorial form rather than in a textual format. Of course, a major associated problem will be the production of inference engines that are able to analyse and use knowledge that is represented in this way.
In our own work we are hoping to make a small contribution towards the solution of these problems through our studies on icon associations and their spatial distribution within both static and animated imagery (Barker & Manji, 1987). However, significant research effort needs to be devoted to this problem before useful results are likely to be produced.

The workstations that we have built to study interactive learning processes are amongst the most sophisticated that we have developed to date. They embed low-cost image capture, image manipulation and image display facilities. They also utilize a wide range of interaction peripherals. Currently, we are using these workstations to study the development of new learning metaphors and the specification of algorithms and user interfaces to support the use of these. An important future study that we need to undertake is an investigation of the possible isomorphic relationships that exist between CRT-based and paper-based images. As we suggested earlier, this could lead to a substantial reduction in both the number and the complexity of the dialogue design guidelines that need to be considered during the development phases involved in fabricating human-computer interfaces.

6. Conclusion
Over the last few decades information storage, processing and dissemination technologies have each made rapid and substantial advances. Today, these technologies can be combined in a variety of ways to make computer-based data, information and knowledge available to a much wider audience of people than has ever been the case previously. An understanding of the needs and limitations of
people within the systems that we build is therefore vital; so too is the development of efficient and effective human interfaces to the technology that we use for information acquisition and display. Technology has done much to accelerate the speed with which we can generate and disseminate information. However, it has achieved much less in terms of improving the speed with which we can assimilate and digest this information. We believe that the pictorial representation and communication of knowledge can do much to help rectify this limitation. However, to support this approach to information exchange appropriate pictorial dialogue methods need to be explored, defined and developed. This paper has attempted to describe some of the approaches that we have been exploring in order to realize these objectives.
References

ADAMS, D. M. (1987). Communicating with electronic images: transforming attitude, knowledge and perception. British Journal of Educational Technology, 18, 15-21.
AGIN, G. J. (1980). Computer vision systems for industrial inspection and assembly. IEEE Computer, 13(5), 11-20.
ARTHUR, J. D. (1986). A descriptive/prescriptive model for menu-based interaction. International Journal of Man-Machine Studies, 25, 19-32.
BARKER, P. G. (1985). Programming a video disc. Microprocessing and Microprogramming, 15, 263-276.
BARKER, P. G. (1987a). The potential of optical media for creating adult learning opportunities. Bulletin Leren Van Volwassenen, 20, 165-180.
BARKER, P. G. (1987b). Author Languages for CAL. London: Macmillan Press.
BARKER, P. G. (1988a). Expert systems in engineering education. Engineering Applications of Artificial Intelligence, 1, 47-58.
BARKER, P. G. (1988b). Knowledge engineering for CAL. In F. LOVIS & E. D. TAGG, Eds. Proceedings of ECCE 88, IFIP European Conference on Computers in Education, 24-29 July, Lausanne, Switzerland, pp. 529-535. Amsterdam: North-Holland.
BARKER, P. G. (1988c). Basic Principles of Human-Computer Interface Design. London: Hutchinson Education.
BARKER, P. G. & MANJI, K. A. (1987). Pictorial knowledge bases. In Proceedings of the British Computer Society HCI '87 Conference, University of Exeter, 7-11 September, 1987, pp. 161-173. Cambridge: Cambridge University Press.
BARKER, P. G. & MANJI, K. A. (1988a). Use of a low-cost image scanner to analyse paper-based images. Working Paper, Interactive Systems Research Group, School of Information Engineering, Teesside Polytechnic, UK.
BARKER, P. G. & MANJI, K. A. (1988b). New books for old. Programmed Learning and Educational Technology, 25, 310-313.
BARKER, P. G. & MANJI, K. A. (1988c). Paradigms, metaphors and myths for interactive learning. Paper presented at the British Computer Society's HCI '88 Conference, 5-9 September 1988, University of Manchester Institute of Science and Technology, UK.
BARKER, P. G. & NAJAH, M. (1985). Pictorial interfaces to databases. International Journal of Man-Machine Studies, 23, 423-442.
BARKER, P. G., NAJAH, M. & MANJI, K. A. (1987). Pictorial communication with computers. International Journal of Man-Machine Studies, 27, 315-366.
BARKER, P. G. & PROUD, A. (1987). A practical introduction to authoring for computer assisted instruction. Part 10: knowledge-based CAL. British Journal of Educational Technology, 18, 140-160.
BARKER, P. G. & YEATES, H. (1985). Introducing Computer Assisted Learning. Englewood Cliffs, NJ: Prentice-Hall.
BARROW, H. G. & TENENBAUM, J. M. (1981). Interpreting line drawings as three-dimensional surfaces. Artificial Intelligence, 17, 75-117.
BENNETT, J. L. (1986). Tools for building advanced user interfaces. IBM Systems Journal, 25(3/4), 354-368.
BHANU, B. (1987). CAD-based robot vision. IEEE Computer, 20(8), 13-16.
BRADY, M. (1982). Computational approaches to image understanding. Computing Surveys, 14, 3-71.
CHECKLAND, P. B. (1972). A systems map of the universe. In J. BEISHON & G. PETERS, Eds. Systems Behaviour, pp. 50-55. London: Open University Press.
CONKLIN, J. (1987). Hypertext: an introduction and survey. IEEE Computer, 20(9), 17-41.
EWING, J., MEHRABANZAD, S., SHECK, S., OSTROFF, D. & SHNEIDERMAN, B. (1986). An experimental comparison of a mouse and arrow-jump keys for an interactive encyclopedia. International Journal of Man-Machine Studies, 24, 29-45.
GITTINS, D. (1986). Icon-based human-computer interaction. International Journal of Man-Machine Studies, 24, 519-543.
IEEE (1985). Special issue on multi-media communications. IEEE Computer, 18(10).
JENKIN, J. M. (1982). Some principles of screen design and software for their support. Computers and Education, 6, 25-31.
KARAT, J., MCDONALD, J. E. & ANDERSON, M. (1986). A comparison of menu selection techniques: touch panel, mouse and keyboard. International Journal of Man-Machine Studies, 25, 73-88.
KINDBORG, M. & KOLLERBAUR, A. (1987). Visual languages and human computer interaction. In Proceedings of the British Computer Society HCI '87 Conference, University of Exeter, 7-11 September, 1987, pp. 175-187. Cambridge: Cambridge University Press.
KOVED, L. & SHNEIDERMAN, B. (1986). Embedded menus: selecting items in context. Communications of the ACM, 29, 312-318.
LARKIN, J. H. & SIMON, H. A. (1987). Why a picture is (sometimes) worth ten thousand words. Cognitive Science, 11, 65-99.
MANJI, K. A. (1988). Pictorial Communication with Computers. Draft Ph.D. Thesis, Teesside Polytechnic, Cleveland, UK.
MORARIU, J. & SHNEIDERMAN, B. (1986). Design and research on the electronic encyclopedia system (TIES). Proceedings of the 29th Conference of the Association for the Development of Computer Based Instructional Systems, pp. 19-21.
MORELAND, D. V. (1983). Human factors guidelines for terminal interface design. Communications of the ACM, 26, 484-494.
NAFFAH, N. & KARMOUCH, A. (1986). Agora: an experiment in multi-media message systems. IEEE Computer, 19(5), 56-66.
NORMAN, K. L., WELDON, L. J. & SHNEIDERMAN, B. (1986). Cognitive layouts of windows and multiple screens for user interfaces. International Journal of Man-Machine Studies, 25, 229-248.
PARKER, J., KENNARD, A. & KING, D. (1987). The 'window' terminal. The Computer Journal, 30, 558-564.
PICKERING, J. A. (1986). Touch sensitive screens: the technologies and their application. International Journal of Man-Machine Studies, 25, 249-269.
POTTER, M. C. & FAULCONER, B. A. (1975). Time to understand pictures and words. Nature, 253, 437-438.
ROSENFELD, A. (1984). Image analysis: problems, progress and prospects. Pattern Recognition, 17(1), 3-12.
RUBINSTEIN, R. & HERSH, H. (1984). The Human Factor: Designing Computer Systems for People. Burlington, MA: Digital Press.
SHNEIDERMAN, B. (1987). Designing the User Interface: Strategies for Effective Human-Computer Interaction. Reading, MA: Addison-Wesley.
VIDEOLOGIC (1987). VEFS Product Specification. Hertfordshire, UK: Videologic.
WEARN, Y. & ROLLENHAGEN, C. (1983). Reading text from visual display units (VDUs). International Journal of Man-Machine Studies, 18, 441-465.
WEINSTEIN, S. B. (1987). Telecommunications in the coming decades. IEEE Spectrum, 24(11), 62-67.
WEYER, S. A. & BORNING, A. H. (1985). A prototype electronic encyclopedia. ACM Transactions on Office Information Systems, 3(1), 63-88.
WHITEFIELD, A. (1986). Human factors aspects of pointing as an input technique in interactive computer systems. Applied Ergonomics, 17(2), 97-104.
WHITFIELD, D., BALL, R. G. & BIRD, J. M. (1983). Some comparisons of on-display and off-display touch input devices for interaction with computer generated displays. Ergonomics, 26, 1033-1053.
YANKELOVICH, N., MEYROWITZ, N. & VAN DAM, A. (1985). Reading and writing the electronic book. IEEE Computer, 18(10), 15-30.