Interacting with Computers 16 (2004) 897–915
www.elsevier.com/locate/intcom

On physiological computing with an application in interactive art

Ernest Edmonds a,*, Dave Everitt b, Michael Macaulay c, Greg Turner a

a Creativity and Cognition Studios, University of Technology, Sydney, NSW, Australia
b Eco Consulting Partnership, Melton Mowbray, Leicestershire LE13 1DZ, UK
c Department of Information Systems and Information Technology, South Bank University, London, UK

* Corresponding author. E-mail address: [email protected] (E. Edmonds).

Available online 21 October 2004

Abstract

The paper discusses why investigation into the area of physiological computing is needed and reviews empirical work by some of the authors. In particular, it discusses the reliability of the information that can be inferred from certain biological sensor data, and ways in which the positive benefits of the resulting feedback can be ensured or measured. One important and emerging application area for physiological feedback in interactive computing is interactive art systems. In some respects, this application has been making strong progress precisely because the interactive experience itself, rather than more abstract and problematic information handling, is at its core. Another interesting aspect of the applications in art is that they provide informal experimental investigations into these new forms of human–computer interaction, and artists are already devising new applications and interfaces for physiological information. The paper describes an art work employing physiological feedback, including a discussion of how it was built and of the participating audience's reactions when it was exhibited.

© 2004 Elsevier B.V. All rights reserved.

Keywords: Interactive art; Physiological computing; Human–computer interaction

1. Introduction

The interaction between one human and another is particularly effective because multiple signals are involved, produced either consciously or unconsciously. Those produced consciously usually include visible and audible signals deliberately aimed at complementing each other in order to communicate a message effectively.

The signals produced unconsciously are usually not under the control of the person who produces them. Some of these subconsciously produced signals are visible, such as gesticulations, or audible, such as exclamations, whereas others, such as physiological signals, are not. What makes the interaction of one human with another as effective as it usually is, is the human ability to read the perceptible signals among these, analyse them within the prevailing context, and decide on an action deemed appropriate for the situation. This decision may not always be right, but most humans learn and improve this art of observing, analysing, and taking a course of action aimed at making an interaction relatively effective. Of course, the action taken does not necessarily have to be towards continuing an interaction; effective interaction with an undesirable human may simply be to discontinue the interaction.

The signals exhibited by a human can result from a number of things. They may be related to the specific emotions being experienced, to specific thought processes, or to both, particularly since affective and cognitive processes can occur simultaneously. Equally, some of these signals may be fake. Whatever their cause or causes, what a human does is perceive them, estimate their authenticity within a given context based on experience and knowledge, and then take an appropriate action as to how to manage the interaction, based on whether or not the capacity to take that action exists. For example, a human might see signals from another human but be unable to interpret them and associate them with a human state, and therefore not take the actions that would be appropriate for improving the interaction. In other words, what makes a human decide to take an action towards improving an interaction is the ability to perceive signals and guess what they mean. Equally, a human might perceive and understand the signals from another human, but not feel any urge to do something as a result of them, and therefore not take any action. Yet another element in the equation is knowing the appropriate action to take once signals have been perceived and understood.

These types of human properties, which are driven by both affective and cognitive processes, are now increasingly seen as constituting a 'more wholesome' or 'true' intelligence. That is, whereas emotion did not previously feature in the measurement of intelligence, it is now considered an important component known as emotional intelligence. Indeed, the notion of human intelligence has shifted so much recently in favour of emotional intelligence that a human without it can hardly be said to be 'truly' intelligent (Salovey and Mayer, 1990; Goleman, 1995). Therefore, what makes a human interact well with another is substantially emotional intelligence, plus other types of intelligence, whose properties are largely associated with the various signals manifested by humans. Described another way, what makes human-to-human interaction as effective as it can be is the ability of humans to interact via multiple channels of communication.

In contrast to the human, the computer is presently at a severe disadvantage in its ability to perceive or understand what is necessary for high-level interaction with the human.
As it stands, the computer is generally capable of reading only a limited number of signals, such as mouse clicks and key presses, from its human user. Therefore, even if it had the necessary processing power, it still would not be able to interact with the human at a level anywhere near that achievable between even one child and another.

Put another way, if a human could only perceive from another human as little as a computer perceives from its user, it would be difficult not to find interaction with such a human frustrating, not very intelligent, and unsociable. Yet it has been implied that the interaction between the human and the computer is largely natural and social (Reeves and Nass, as cited in Picard, 2000). Furthermore, the computer is increasingly being seen as a social machine. What this suggests is that, in order for the computer to interact with the human at a higher level than it presently does, factors that are important in human–human interaction, such as affective and cognitive factors, need to be included in the equation. Consequently, it seems inevitable that the future of human–computer interaction will demand that the computer be able to read and analyse a range of signals from the human user and use the resulting 'understanding' as a basis for actions taken to improve the interaction.

One of the immediate tasks in getting the human and the computer to interact at a 'higher level' is the acquisition of the essential signals from the human by the computer. While this has been difficult, advances in sensing technology, and its relative availability and affordability, are now making it possible. Indeed, it is not only becoming increasingly possible to capture the signals considered central to effective human–human interaction, such as gestures, facial expressions and eye movements; other signals that are covert, such as the electroencephalogram (EEG), electromyogram (EMG), electrocardiogram (EKG) and galvanic skin resistance (GSR), and whose management can enhance interaction, are now also detectable with increasing reliability. Although much work has been successfully done on capturing and understanding overt signals, it is the covert signals that some (Scheirer et al., 2002) believe may have a greater and more acceptable role in human–computer interaction, in that they offer the individual control over access to their psychological state during interaction. At any rate, using one type should not preclude the other, since capturing and understanding as many signals as possible during human–computer interaction makes for a better estimation of the psychological state of the user.

Presumably, a computer with this sort of interaction capability would be a product of affective computing and/or cognitive computing. It is this theory, together with the enabling technologies and research, that constitutes part of the field known as physiological computing. The other main part of this field involves the use of physiological signals to control the computer and/or its peripherals towards performing a specific task or expressing some sort of artistic creativity. One of the application areas of this is interactive art. With applications like CubeLife and BodySynth in interactive art, there is some evidence that much fun is already being had in some application areas of physiological computing. However, progress in other areas, such as affective and cognitive computing, is not as visible yet. This might be due in part to the fact that there are more difficult obstacles to negotiate in these areas. For example, there are serious technological issues, both hardware and software, some of which seem insurmountable given our present depth and scope of knowledge and perception.
But there is no reason why the quest for the seamless involvement of additional channels of communication in human–computer interaction should not continue. This paper discusses issues relating to this need and trend via physiological computing, particularly as evident in the application area of interactive art.

2. Towards enhancing human–computer interaction

The need to enhance the effectiveness of human–computer interaction via various routes cannot be overstated if the computer is one day to assume the role that many have dreamt for it in our society. Although the absence of a much better means of interaction than is presently available may not necessarily lead to the defenestration of computers from homes and offices, given the increasingly long time spent with the computer and its further inclusion in more of our daily activities, the unnatural nature of the interaction is unlikely to be free of problems, at least over the long term. Already there is evidence of related health and productivity problems. One health problem area is repetitive strain injury, which can result from the repetitive use of current interaction devices. Equally, there are suggestions that users are finding it increasingly difficult to cope with some interaction task demands.

Some would say that an interaction with adverse side effects is an ineffective one, irrespective of whether the interaction leads to task completion, and an increasing number are questioning further. More specifically, it seems no longer acceptable that task-related success should automatically translate into effective interaction. A revised way of thinking has long been taking root which holds that many factors contribute to effective interaction, not least ease of completion, the number of different ways available to complete a task, and the psychological state of the user during interaction. As some (Benyon and Murray, 1988; Macaulay, 2000) have suggested, human–computer interaction as a process involves a multitude of activities, ranging from the actual act of operating the computer, to what is happening inside the user, to the surrounding environment. This implies the recognition that user-related covert activities, such as cognitive and affective activities, play a significant role in the effectiveness of a human–computer interaction, irrespective of the domain in which the interaction takes place. Indeed, some (Macaulay, 2000) have suggested that affective factors such as anxiety should be monitored during human–computer interaction, and that the success (i.e. the effectiveness) of the interaction should be based on a combination of measures relating to the task being completed and the affective state of the computer user. For surely, if a task was completed, and in good time, but the computer user was overly anxious or stressed during its completion, this should not be seen as a wholly successful human–computer interaction, particularly when the interaction is in the area of learning, where affective factors have been suggested to have a reducing effect on learning performance (Pekrun, 1992; Hembree, 1988).

Simply, there is a need to continually match the characteristics of the task being performed (e.g. level of difficulty and pace) to the ability of the user, where ability can be seen as a function of factors such as knowledge level, cognitive and affective readiness to perform the task, and the interaction environment. What this means is that new ways must continually be sought to measure, monitor and evaluate the effectiveness of human–computer interaction. As previously implied, one of the principal elements contributing to the effectiveness of interaction between one human and another is the availability of multiple channels of communication.
That is, this interaction requires both humans to be able to perceive, understand and communicate over the same multiple channels.

This, incidentally, also makes the case that in order for the human and the computer to communicate at a level similar to that between one human and another, the computer would need to 'express' itself over the same channels the human does. In other words, the computer would have to become 'pseudo-human', which is still a very difficult task to contemplate. However, it is also true that in human–human interaction, even if only one person can express over multiple channels, interaction can still be very effective, as long as the other can understand and act on the message being communicated. By this logic, the computer does not really have to be human-like, or express itself over the same channels as humans, to have a highly effective interaction with the human, as long as it can perceive over the channels that the human organism uses to communicate, and understand the communicated message.

So, what are the prospects of increasing the number of human signals that the computer can perceive and understand? The prevailing notion is that progress is being made, both in technology and in research. Central to this drive are the overt and covert signals manifested by the user. Overt signals here refer to signals that are visible or audible, such as gesture, facial expression, eye movement, and voice intonation, whereas covert signals refer to those that are not normally visible to the naked eye, such as physiological signals. All of these have been shown to have roles to play in effective human–computer interaction. The literature suggests that there are two main ways in which they feature in the drive towards improving the interaction between the human and the computer, and both have been implied earlier.

The first is using the signals produced subconsciously by a user during the interaction to estimate the user's cognitive or affective state, and then manipulating the necessary interaction parameters accordingly, towards a more effective interaction. In this category, no more is required of the user than normal human–computer interaction skills. The types of data collected and analysed include involuntary overt signals such as eye movements, facial expression (Ekman, 1993) and vocal intonation (van Bezooyen, 1984), and covert signals such as physiological signals.

The second way in which overt and covert signals are being explored in human–computer interaction is the use of the signals to directly instruct the computer to perform specific tasks. Under this category are various feedback systems, with applications in areas such as music, interactive art and industry; examples include CubeLife and BodySynth, which are discussed below. In this category, the human user is usually required to wear some sort of device, and bears the onus of consciously generating the signals necessary to instruct the computer. The role of physiological signals in realising this second category, and its application in interactive art, is the main focus of this paper.

3. Physiological signals

Physiological signals are the signals produced by the processes of the elements of an organism. It is believed that these signals mirror the states of those elements, which in turn mirror the various cognitive and affective states of a human. Those that have generated most research interest to date include EEG, EMG, the electrooculogram (EOG), GSR, EKG, blood volume pulse (BVP), skin temperature, and respiration.

The interest in these signals, in how they can be used to estimate human states, and in how humans might use them to improve their own capabilities, spans decades. In particular, early findings provided much hope. For example, studies (Clites, 1936; Diggory et al., 1964; Malmo, 1975) reported correlations between EMG and cognitive activities such as thinking, memorisation, mental multiplication, and problem solving. Studies on ECG (Hodges and Spielberger, 1966; Kelly et al., 1970) reported positive correlations between ECG and anxiety. And studies on EEG (Mundy-Castle, 1951) reported positive correlations between frontal EEG beta and arousal.

Yet another area that generated great interest was biofeedback. This is based on the principle that people can learn to change the patterns of physiological signals they produce if those signals are fed back to them in a way they can understand. Applying this principle, some (Kamiya, 1971) reported that, using EKG feedback, it was possible to decrease heart rate in an effort to reduce anxiety or induce relaxation. However, in spite of all the initial enthusiasm and promise that came with the discovery of physiological signals, their widespread application was limited by a number of difficulties, such as the complexity of interpretation, the need for dedicated and complicated devices and the expertise to run them, and a lack of reliability in data analysis. This meant that only the medical-related fields, where dedicated, expensive and often cumbersome devices are used, were able to avail themselves of the possible messages provided by these signals. But with recent advancements in both computer and sensing technologies, interest and hopes have been renewed, and new fields of study are being created to harness the technological flourish.

Central to the capturing and harnessing of physiological signals are sensors (also known as biosensors), which are available in different forms. Typically, they are placed at strategic points on the body, and the way they work depends on the type of signal they have been designed to collect. For example, while a sensor designed to measure EMG can use a small electrode to measure tiny voltage differences from a muscle when it contracts, a sensor for measuring BVP signals may use a photosensor and photoplethysmography, a process of applying a light source to the skin and measuring the light reflected. The difference in the intensity of the reflection results from the contraction of the heart and the forcing of blood through vessels, which then change in opacity and in the amount of light they reflect (a minimal sketch of how such a signal might be processed appears below). Similarly, a sensor for measuring EKG signals may use a few electrodes to measure the electrical activity of the muscles of the heart. A physiological sensing system will normally come with a number of different sensors to measure a number of signals, and will also include the technology necessary to encode the signals into a form readable by the computer, filter out unwanted data, and connect to the computer. An increasing number of these systems are becoming commercially available and affordable. A further inducement to experiment is that most of these sensing devices are compact, some being wireless and able to transmit over several feet. Advancement in sensor technology is also beginning to point in the direction of remote non-contact sensors; for example, non-contact sensors that can read muscle signals even through a layer of clothing are already being tested (Bluck, 2004).
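
To make the photoplethysmography description above concrete, here is a minimal sketch, not the authors' implementation, of how a stream of raw light-intensity samples from a fingertip photosensor might be turned into beat times and a heart-rate estimate. The sampling rate, smoothing window and refractory period are illustrative assumptions.

```python
# Sketch: deriving pulse events from raw photoplethysmograph samples.
# Assumptions (not from the paper): 50 Hz sampling, arbitrary ADC units.

SAMPLE_RATE_HZ = 50          # assumed sensor sampling rate
REFRACTORY_S = 0.3           # ignore 'pulses' closer together than this

def moving_average(samples, window=5):
    """Simple low-pass filter to suppress high-frequency noise."""
    smoothed = []
    for i in range(len(samples)):
        lo = max(0, i - window + 1)
        smoothed.append(sum(samples[lo:i + 1]) / (i + 1 - lo))
    return smoothed

def detect_pulses(samples):
    """Return the times (in seconds) of detected heartbeats.

    A beat is registered when the smoothed signal crosses above its
    mean, subject to a refractory period that discards implausibly
    fast 'pulses' (e.g. noise from finger movement).
    """
    smoothed = moving_average(samples)
    mean = sum(smoothed) / len(smoothed)
    beats, last_beat, above = [], -REFRACTORY_S, False
    for i, value in enumerate(smoothed):
        t = i / SAMPLE_RATE_HZ
        if value > mean and not above:        # upward crossing
            if t - last_beat >= REFRACTORY_S:
                beats.append(t)
                last_beat = t
            above = True
        elif value <= mean:
            above = False
    return beats

def heart_rate_bpm(beats):
    """Estimate heart rate from the mean interval between beats."""
    if len(beats) < 2:
        return None
    intervals = [b - a for a, b in zip(beats, beats[1:])]
    return 60.0 / (sum(intervals) / len(intervals))
```

A real system would filter adaptively and process samples as a stream rather than offline, but the general shape (smooth, threshold, reject implausible intervals) is the one that pulse-driven works such as cubeLife (Section 4.3) depend on.
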
Compact, wireless and non-contact sensors in particular improve the possibility of application in areas where the wearer has to move around, such as interactive art or music.

More recent studies have added to these earlier findings. For example, in the area of EEG, there is now a reasonable belief that some sort of correlation exists between amplitude changes in specific EEG frequencies and specific states of cognition (Ray and Cole, 1985; Kiroy et al., 1996; Markand, 1990; Oken and Salinsky, 1992; Crews and Landers, 1992; Inouye et al., 1993; Jacobs et al., 1996) and affect (Petruzzello and Landers, 1994; Field et al., 1998), between ECG and self-reported emotion (Nagane, 1990; Morse, 1993; Sharpley, 1994; Brand et al., 1995; Calvo et al., 1996), and between EMG and cognitive activities (Wærsted et al., 1991, 1996) as well as emotion (Lundberg et al., 1994; Larsson et al., 1995). Equally, there is a history of lack of correlation among physiological signals, as well as of inconsistencies in the use of each as an index of a specific human state. One reason suggested for this is that a physiological signal can often be associated with more than one state; for example, an increased heart rate is related not only to anxiety but also to other states of arousal, such as sexual arousal (Gatchel, 1979). In spite of this problem, however, the suggestion is that the use of all possible categories of physiological signals is likely to minimise inconsistencies and inaccuracies (Gatchel, 1979; Whittenberger, 1979; Nagane, 1990); a sketch of this multi-signal idea follows at the end of this section.

It is these types of findings, and the belief that human cognitive and affective states can be influenced through appropriate stimuli, that have created and are sustaining the general optimism behind some areas of physiological computing, such as affective and cognitive computing. In these areas, it is also recognised that much more work is still required on how to map patterns of physiological signals to specific psychological states. In contrast, in the areas premised on biofeedback, where the focus is the use of physiological signals to control the computer, the problems are less difficult: whether or not a signal is used is determined by the extent to which it affords control by the human.
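
To illustrate the multi-signal suggestion above: a minimal, purely hypothetical sketch might normalise each available signal against its own resting baseline and average the results, so that no single ambiguous signal (such as heart rate alone) dominates the estimate. The feature names, baselines and equal weighting below are assumptions for illustration, not a method from the literature cited.

```python
# Hypothetical sketch: fusing several normalised physiological features
# into a single crude 'arousal' estimate. Baselines, feature names and
# equal weighting are illustrative assumptions only.

from statistics import mean, stdev

def z_score(value, baseline):
    """Normalise a reading against a list of resting-state readings."""
    return (value - mean(baseline)) / (stdev(baseline) or 1.0)

def arousal_estimate(readings, baselines):
    """Average the z-scores of all available signals (GSR, heart rate,
    EMG, respiration, ...). Signals without a baseline are skipped."""
    scores = [z_score(readings[k], baselines[k])
              for k in readings if k in baselines]
    return sum(scores) / len(scores) if scores else 0.0

# Example usage with made-up numbers:
baselines = {"gsr": [4.9, 5.1, 5.0], "heart_rate": [64, 66, 65]}
readings = {"gsr": 6.2, "heart_rate": 81}
print(arousal_estimate(readings, baselines))
```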

4. Physiological computing

Physiological computing covers the issues relating to the application of physiological signals in human–computer interaction. This means that it ranges from the study of physiological signals and the technologies used to capture and analyse them, to their application areas, and it therefore has many goals. The literature, however, seems to suggest that three of these are the most fundamental, and they can be described as the realisation of: (1) an affective computer, (2) a cognitive computer, and (3) a biofeedback computer. These main goals fall into two categories: user psychological state monitoring and biofeedback-based computer control. The first concerns the monitoring of the user's psychological state during human–computer interaction, and the second the use of physiological signals to manipulate the computer.

In a sense, this suggests that a physiologically enabled computer or device can come in many flavours. It could be, for example, a computer that counsels an individual based on the physiological signals received from him or her. It could be a computer that monitors the cognitive and affective state of a learner and manages relevant factors to ensure a wholesomely rewarding learning experience. It could be a physiologically enabled fridge that says to the human: "May I suggest you have orange juice instead of beer, as your heart rate is quite high and beer might further aggravate it", or "No, you can't take beer, only orange", with only the compartment for orange juice opening. It could be an item of clothing that changes its properties based on physiological signals from its wearer.

It could be a device or computer that allows the human to express his or her creativity via the manipulation of physiological signals, as is presently evident in some interactive artworks.

The literature also suggests that physiological computing consists of three main stages: data acquisition, data analysis, and action, with the details of the steps within each stage determined by the goal. The rest of this section focuses on the categories of goals identified above, with emphasis on the use of physiological signals for computer control and its application in interactive art. This category of physiological computing concerns the use of the principles of biofeedback to operate the computer, either towards performing a specific task or towards expressing some form of creativity. In contrast to the goals of the previous category, the goal of this category seems relatively easy to achieve, particularly because the difficult task of estimating human psychological state is not involved. Indeed, the goal of controlling a device with a physiological signal had already been achieved by the 1970s; it is only that advancements in computer and physiological-signal sensing technologies have improved the ease with which this goal can be achieved, thereby opening up the possibility of application in many areas. In theory, any physiological signal can be used, since it is believed that humans are capable of manipulating all of them. However, the extent to which a signal can easily be manipulated naturally plays an important role in determining its application in a biofeedback system. For example, because EMG is easier to manipulate than EEG, the former is more likely to be used when accurate and meaningful control of the computer is needed.

The biofeedback principles, coupled with the notion that creativity is associated with cognitive processes, and that these in turn are represented in physiological signals, form the basis for the application of physiological computing in many areas. One of these areas is interactive art. The rest of this paper looks at applications in this area, with special focus on a specific example.

4.1. The context of interactive art: a brief history of art-technology

Art has long sought to make an impact on its audience and participants and, historically, has been an early adopter of new ideas concerning perception, sensation, and emotion, including those arising from technology. For example, in European art there is the crucial depiction of emotion in Giotto's stage-like religious scenes, and Brunelleschi's innovations in mathematical perspective. These and other key developments throughout art history were all intended to increase the viewer's mental and emotional involvement, and were usually designed to have a tangible impact. In the earlier part of the previous century, artists began to adopt procedural and mathematical methods of creating work from sets of rules, and abstraction had some of its roots in technological development: on the one hand, exciting new possibilities and methods for constructing art, and on the other, a disruption of the sense of reality. One of the pioneering abstract painters, Kandinsky, wrote of Rutherford's famous breakthrough: "the crumbling of the atom was for me like the crumbling of the whole world. Suddenly the heaviest walls toppled. Everything became uncertain, tottering and weak." (Lindsay and Vergo, 1994).
Later that century, and much earlier than might be supposed, artists were amongst the first to experiment informally, yet with impressive commitment, with early computing devices.

Harold Cohen (Cohen) and Edward Ihnatowicz (Zivanovic, 2003) developed what are now seen as historically significant applications. The 1968 show Cybernetic Serendipity (MacGregor, 2002) is usually cited as an important seed for much of this later activity (Brown, 2003). In particular, pioneering artists such as Edward Ihnatowicz were experimenting with machine intelligence at a notably early date:

"One of the works shown at Cybernetic Serendipity was Edward Ihnatowicz' 'Sound Activated Mobile —or— SAM'. It consisted of four parabolic reflectors formed like the petals of a large flower on an articulating 'stem' or neck. Microphones placed at the foci of the reflectors enabled SAM to accurately detect the location of sounds and to track them as they moved around the exhibition. The visitor was left with an uncanny sensation of being 'watched' as they walked around."

...and even with early forms of what may be seen as physiological computing:

"Ihnatowicz' final robot piece was 'The Bandit', part financed by the Computer Arts Society and exhibited at their show 'Interact' at the 1974 Edinburgh Festival. It was based on the familiar 'One Armed Bandit' gambling machine. Visitors interacted with the lever and the system was able to make pretty accurate analyses of their gender and temperament. […] Contemporary roboticists and AI specialists working in the now popular bottom-up methodologies (like, for example, evolutionary robotics) are often astounded to learn of Ihnatowicz's work, particularly when they are told its early date. Ihnatowicz died in 1986." (Brown, 2003).

Much technology-based art has featured the computer as part of the visible presence of the work, either as an object or as a result of making computational processes evident (Verostko, 2002). More recently, the urge to challenge, disturb and shock has also played an active role; see the tongue-in-cheek work of the Experimental Interaction Unit (Experimental Interaction Unit) or the anarchic Survival Research Labs and 'extreme Java' of Mark Pauline (Pauline). Other artists (Davies, 1998; Turrell) have instead used technology more gently, to seduce the participant through the time-honoured techniques of manipulating colour and subtle imagery.

Research initiatives in HCI, computing and the arts from the 1970s onwards involved artists from the start (Edmonds, 2003). Magazines, organisations and online resources grew up around this activity to promote and support the cross-pollination of art-technology partnerships (Leonardo, Arts Catalyst, Fine Art Forum, Rhizome) and, in the UK, even a government-funded organisation (NESTA). For those areas of contemporary art that focus on physiological and overtly medical information (for example, the SciArt organisation is funded by the Wellcome Trust), the relevance of physiological computing is obvious. Today, after some resistance, the involvement of technology in art practice has become an accepted part of the contemporary art world, although it is still seen by some to follow a 'separate aesthetic'. However, it is among those artists whose work seriously crosses over into computing that the possibilities for innovation in physiological computing exist.

4.2. Physiological computing and the art-technology field

Picard (1998) gives examples of possible applications of physiological computing under affective computing.

However, although in affective computing applications are still being predicted, in other sub-fields of physiological computing, such as the use of physiological signals to operate computers, applications are already a reality. Some example areas of application are music and art. Examples of applications in music include (Marrin, 2000): BodySynth (by Ed Severinghaus and Chris van Raalte), which uses EMG to generate music and lighting effects; BioMuse (by Hugh Lusted and Benjamin Knapp), which uses EOG and EEG and was designed initially to enable people with movement impairments and paralysis to operate the computer, but can also be used for generating music; HeartBeat (by Chris Janney), a wireless device which amplifies and converts the electrical impulses that stimulate the heart into sound, which is then combined with other types of sounds, such as jazz scat; and the Conductor's Jacket (by Teresa Marrin), a wearable physiological device that uses four types of sensors to monitor EMG, respiration, heart rate, GSR and temperature in order to provide physiological data about a conductor during performance. It was designed to investigate the relationship between the nature of musical expression and how it is conveyed via gestures.

Easily gathered data, such as the effect of relaxation on EEGs, figure in several works (Gabriel, 2001; for other examples see Everitt et al., 2002). Although this kind of data is under the control of the participant, it raises questions about what kind of control some applications of physiological computing should be aiming for, an issue which is explored below. Physiological computing offers the possibility of putting the human sense of self in the central position, with the added option of creating a sense of personal extension or enhancement.

Some examples of the use of sensors and physiological interfaces in art are of historical importance. Artist Char Davies' 'Osmose' and 'Éphémère' (Davies, 1998) are immersive virtual worlds navigated by employing breath and balance sensors. Davies is the former vice-president of Softimage, later acquired by Microsoft. In 1998 she founded Immersence, and was also a PhD fellow at the Centre for Advanced Inquiry in the Interactive Arts (CAiiA) at the University of Wales College, Newport (Grau, 2003). Davies' works might be considered only partially physiological, but they are groundbreaking because they combine physiological monitoring with a sense of what might be called 'deep engagement':

"Osmose cultivates the user-interface—a central parameter of virtual art—at a level that is still unequaled; an independent treatise could be written on this aspect alone. Osmose is a technically advanced and visually impressive simulation of a series of widely branching natural and textual spaces: a mineral/vegetable, intangible sphere. […] in the data space […] phosphorescing points of light glimmer in the dark in soft focus. Osmose is an immersive interactive environment, involving head mounted display (HMD), 3-D computer graphics, and interactive sound, which can be explored synaesthetically" (Grau, 2003).

Crucially, Davies challenges the cultural stereotypes that are unconsciously, but commonly, reproduced in interface device design, and raises an issue that has to be relevant to physiological computing:

"The user interface […] is based on tracking the participant's breathing and balance [to] navigate through the spatial realms of the work—they breathe in to rise, out to fall. This interface was intended to pose an alternative to conventional approaches to VR, whereby the interface usually involves the hands. There may be exceptions of course, but in general, hand-held interface devices reinforce a dominating stance to the world in terms of 'I'm doing this to that'. And this not only reflects, but reinforces the conventional sensibility of our culture […] If a joystick is involved, the work is merely repeating our habitual approach to controlling, or rather, mastering the world around us." (Gigliotti, 2002).

There are hints here that have powerful implications for physiological computing interface design. The use of 'intimate' physiological data invites a fresh approach to interaction, since physiological computing might be seen to connect users with the computer (and potentially other users) at a deeper psychological level than conventional interfaces. Indeed, the use of physiological data itself (as in cubeLife, below) brings a sense of the personal to the interaction: the distinction between 'doing' something to achieve a goal (trackpad, keyboard, screen, windows, etc.) and 'being' something to have an effect (physiological indicators of affective states represented as output) alters the sense of the word 'interaction'. When physiological input is used not to 'control' something directly, but to 'affect' it, the relationship between user and machine is altered in such a way that conventional 'doing' models of interaction would seem in need of revision. In some art-technology applications (and perhaps in other instances), it might be more advantageous to think in terms of a user influencing the digital environment without doing anything conscious at all. This subtler sense of interaction demands a different HCI paradigm to the 'I'm doing this to that' model. Wireless ambient devices are already available that indicate the state of whatever variables they are programmed to track without the need for overt interaction: a glance at the 'ambient orb' will do to see if my stocks are 'glowing' green, red, orange or whatever (Ambient Devices). Such subtle signals (vibration, discrete sounds, colours) might also provide appropriate feedback for physiological data without demanding so much of the user's conscious attention that they cannot concentrate on anything else.

Work on increasing the accuracy of physiological sensors means that their incorporation in art becomes an increasingly attractive prospect to artists working with technology. However, more invasive devices, which usually yield more reliable signals, often require surgical installation, and are inevitably bound up with medical research areas such as enabling technologies (Kyberd, 2002; Huggins et al., 2003; Bayliss). The use of such technologies by artists is difficult due to obvious logistical barriers such as cost and access to surgery. However, where possible, more challenging work by artists might seek to exploit more physically invasive sensor technology such as implants (Warwick, 2002; Warwick and Gasson, 2004; Direct Brain Interface Project) or assistive and prosthetic technologies (Poulton et al., 2002; Adaptive Technology Resource Center), but typically this has so far extended only to the artist as a performer or exhibit, where the audience are unable to participate directly.

Artist Ansuman Biswas combines physiological sensors with video and the cultivation of states of being intended to affect the data gathered:

"Self/Portrait […] is a durational art work prompted by recent research into the relationship between emotions and physiological states.
In this performance, Ansuman harnesses the pulsating energy of the heart to paint and compose.

Small electrodes on the artist's skin will sense the internal electrical weather of his body and feed it into a computer. The external view of the artist will also be fed into the computer via a video camera. These internal and external views will then be mingled and projected onto the wall in front of the performer. Through various contemplative practices, Ansuman will cultivate particular states of mind and thus modulate the video portrait of himself. Ansuman will conduct a special closing ceremony on the final evening involving a musical performance and including, as one of his instruments, an ECG device." (Biswas, 2001).

Stelarc is more overt and, as well as using his own body as an experimental workshop, involves participants in the experience of non-surgical prosthetics and the semi-invasive (non-surgical) process of muscle stimulation. His 'Amplified Body' includes information from 'brainwaves (EEG), muscles (EMG), pulse (plethysmogram) and bloodflow (Doppler flow meter). Other transducers and sensors monitor limb motion and indicate body posture' (Stelarc). 'Stimbod' invites participants to manipulate his body via physiological devices:

"a touch-screen interface to a multiple muscle stimulator allows the body's movements to be programmed by touching the muscle-sites on the computer model. […] The body performs in a structured and interactive lighting installation which flickers and flares in response to the electrical discharges of the body—sometimes synchronising, sometimes counterpointing. Light is treated not as an external illumination of the body but as a manifestation of the body's rhythms. The performance is a choreography of controlled, constrained and involuntary motions of internal rhythms and external gestures. It is an interplay between physiological control and electronic modulation, of human functions and machine enhancement."

At the extreme end of the spectrum, the philosophical extension of physiological computing into cybernetic totalism is an issue of current debate (Dennett and Hofstadter, 1981; Lanier, 2002; Transhumanism). However, the formal involvement of artists in surgical experimentation is a feasible option, especially when artists such as Orlan, the French artist who has offered her face and body as a canvas of flesh where plastic surgeons create art by surgery, are already practising voluntary body modification (Orlan). Further, in the drive to develop neural interfaces, some formerly invasive technologies offering more reliable data, such as electrocorticogram (ECoG) implants, are becoming subtler, more accurate and less damaging (Nicolelis and Chapin, 2002; Kennedy, 1999), and neurotrophic electrodes are advancing this area of technology:

"Neuroscientists […] who last year implanted a neurotrophic electrode into the brain of a paralyzed, speech-impaired patient, continue to help the patient learn to communicate by moving a cursor on a computer screen. Following the brain implant almost a year ago in March 1998, the patient first learned to express himself by indicating phrases on the computer screen such as 'I am thirsty' and 'It was nice talking to you.' More recently, he has learned to move the cursor to letters of the alphabet and spell his own name and the name of his doctors." (Kennedy, 1998).

Laudable though the work is, it has to be lamented that 'moving a cursor' to spell out phrases is still the goal, when a more innovative interface might provide greater ease and motivation.

Enabling physiological computing sometimes falls into this trap, i.e. using advanced technology to laboriously control a conventional cursor-and-windows interface. Since there are cultural implications to using devices in art that are more usually seen in a medical context, perhaps one of the roles of the artist using such devices is to help form new contexts and interfaces for physiological technology. A wider set of users might thus become familiar with physiological computing applications incorporating medically connected components in non-medical contexts. Judging by some of the examples given above, the ability of the artist, through physiological computing, to involve participants at such a level is proving an attractive prospect, and is likely to stimulate innovation in this field. The ability to reflect a person's inner state is almost more intimate than interpersonal contact, simply because sensors can gather information that human senses cannot. The impact of this 'intimate data' within art practice is potentially capable of eliciting responses that traditional digital media cannot, and it seems likely from current trends that artists working with technology will be among the first to take up the challenge of devising interfaces and applications in physiological computing that break the mould of the 'doing' interface challenged by Davies (Gigliotti, 2002).

4.3. An application in interactive art

CubeLife (Everitt and Turner, 1999) is an interactive digital artwork in which the sole means of input is audience participants' heartbeats. These are used to trigger sound files and generate graphical representations of the structures of magic cubes¹. Each magic cube is unique to the individual whose heartbeat created it, and is able to move around a virtual space, and over the internet, to interact with cubes created by other participants, according to various adjustable rules.

¹ A magic cube is an n×n×n array of non-repeating (and in this case sequential) integers in which the n² rows, n² columns, n² pillars, and four triagonals each sum to the same number (this property is verified mechanically in the sketch below).

A primary aim of the exhibited participatory module of cubeLife was to create a digital artwork in which no usually recognisable computer interface existed: the computer and its associated familiar peripherals were not visible. The work depended entirely upon the digital processing of a physiological input from participants; in fact, without input from the heartbeat monitor, nothing takes place. The technology was made as unobtrusive as possible; all that was visible to participants was a small fingertip/earlobe heartbeat monitor clip and a large wall-sized back-projection (see Fig. 1).

The physical heartbeat detector device is simple in operation, comprising a clip worn on the finger (or earlobe) containing an optosensor which measures the amount of light passing through the finger. Each pulse of blood causes the finger to darken momentarily, and when the measured light readings over a period of time are digitised, the graphed result is similar to the familiar ECG readouts seen at hospital bedsides.
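
Returning to the footnote above, the magic-cube property is easy to check mechanically. The following is a minimal sketch, not part of the exhibited cubeLife code, that tests whether a candidate n×n×n array of integers qualifies:

```python
# Sketch: verifying the magic-cube property from the footnote above.
# A magic cube of order n holds non-repeating integers such that all
# n^2 rows, n^2 columns, n^2 pillars and the 4 triagonals share one sum.

def is_magic_cube(cube):
    n = len(cube)
    cells = [cube[x][y][z] for x in range(n) for y in range(n) for z in range(n)]
    if len(set(cells)) != len(cells):      # entries must not repeat
        return False
    target = sum(cube[0][0])               # one line's sum as the reference
    lines = []
    for a in range(n):
        for b in range(n):
            lines.append([cube[a][b][k] for k in range(n)])   # vary 3rd axis
            lines.append([cube[a][k][b] for k in range(n)])   # vary 2nd axis
            lines.append([cube[k][a][b] for k in range(n)])   # vary 1st axis
    # the four triagonals (corner-to-corner space diagonals)
    lines.append([cube[k][k][k] for k in range(n)])
    lines.append([cube[k][k][n - 1 - k] for k in range(n)])
    lines.append([cube[k][n - 1 - k][k] for k in range(n)])
    lines.append([cube[n - 1 - k][k][k] for k in range(n)])
    return all(sum(line) == target for line in lines)
```

A generated candidate might then be screened with is_magic_cube(candidate) before being rendered and 'grown'.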

Fig. 1. CubeLife by Dave Everitt and Greg Turner, exhibited in Loughborough University Art Gallery, UK, 1999.

There are many possible ways in which a blood-flow signal might be processed to inform the work (intervals or variations between peaks/troughs, total length of input, various comparisons with other inputs, etc.), but although some of these were used as variables, a recognisable initial output (mirroring the user's heartbeat) was required in order to foster a 'baseline sense of control' and to represent in real time the direct relation of the visual display to a known physiological signal which users could immediately relate to their input.

Since the heartbeat sensor was the sole means of input to the system, the detection algorithm, as well as the process of detecting pulses, had to fulfil four requirements that would not be necessary in, say, a piece of exercise equipment (a sketch of this detection logic follows below). First, the response to each pulse had to be low-latency, to allow the participant to identify the effect (output from the system) with the cause (their heartbeat input). Secondly, the presence of an audience member had to be detected in order to set up the generation of a new magic cube; this was achieved by detecting the first pulse. Thirdly, the algorithm consequently had to filter out the noise created by attaching and removing the sensor, or by sudden finger movements, to avoid spurious readings being used in the generation of magic cubes. This was achieved by applying a low-pass filter and introducing self-calibration in the pulse-rate domain, so that unexpectedly sudden 'pulses' were ignored. Lastly, it was necessary to signal the absence of a heartbeat, which was triggered either by the maximum amount of light reaching the sensor (meaning that no finger was interposed) or by a timeout after the last detected pulse (meaning that the clip had recently been removed).

These additions made it possible to generate unique magic cubes based upon the number of pulses and the length of the input session. The generation process was an animated 'growing' of the cube every n pulses, from 1³ to 3³ (no 2×2×2 magic cubes exist) to 4³ and so on, with different patterns of cubes being used for different input sessions.
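
The four requirements just described can be read as a small state machine: detect a first pulse, self-calibrate in the pulse-rate domain, reject implausibly sudden 'pulses', and signal absence via a light ceiling or a timeout. The sketch below is a plausible reconstruction under stated assumptions (all constants are invented), not the actual cubeLife implementation:

```python
# Sketch of the pulse-detection requirements as a state machine.
# All constants are illustrative assumptions, not the exhibited values.

LIGHT_CEILING = 1000       # raw reading when no finger is interposed
TIMEOUT_S = 3.0            # absence assumed this long after the last pulse
MIN_INTERVAL_FACTOR = 0.5  # reject intervals under half the running mean

class PulseDetector:
    def __init__(self):
        self.last_pulse_t = None
        self.mean_interval = 1.0   # seconds; updated as pulses arrive
        self.session_active = False

    def on_pulse_candidate(self, t):
        """Called when the low-pass-filtered signal shows a beat at time t."""
        if not self.session_active:
            self.session_active = True          # first pulse: new cube begins
            self.last_pulse_t = t
            return "start_new_cube"
        interval = t - self.last_pulse_t
        if interval < self.mean_interval * MIN_INTERVAL_FACTOR:
            return "ignore_noise"               # self-calibration: too sudden
        self.mean_interval = 0.8 * self.mean_interval + 0.2 * interval
        self.last_pulse_t = t
        return "grow_cube"

    def on_sample(self, t, light_level):
        """Called for every raw sample to detect the clip being removed."""
        timed_out = (self.last_pulse_t is not None
                     and t - self.last_pulse_t > TIMEOUT_S)
        if self.session_active and (light_level >= LIGHT_CEILING or timed_out):
            self.session_active = False
            return "release_cube"
        return None
```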

When the finger is removed from the sensor, the cube is 'released' to move about in virtual space and interact with cubes created by other people.

The preliminary version showed that, although audience members were aware that the cubes were somehow related to their heartbeat, it was not readily apparent in what way they were related, possibly due to a lack of sensor feedback (analogous to not being able to see your mouse pointer). Visual feedback of the readily recognisable ECG-style signal was consequently added: the solution was to move the embryonic cube toward and away from the vision plane as a function of the amount of blood in the finger (a minimal sketch of this mapping follows at the end of this section). This simple addition resulted in a profound change of attitude in audience members. Instead of detachedly witnessing the generation of their cube by some mysterious process, the connection between the human internals and the computer internals became far more apparent. Although the resulting cubes would have been the same given the same pulse, the physiological feedback meant that audiences became much more engaged and invested more emotion in the interaction, some experimenting with willing their pulse rate to slow, others jumping around to speed it up (although this did not work so well, as the jumping introduced noise).

Some research on affective computing (Kort and Reilly, 2002) concentrates on enhanced learning and the process of monitoring engagement via physiological indicators of affective response. The prerequisites for this process of engagement were part of the initial approach to cubeLife. One of the features of the exhibited work was the creation of an environment, a physical cinema-like area, that implied a sense of shared intimacy: a kind of 'sacred space' set aside for a specific purpose and designed to heighten participants' sense of involvement in the work. This space offered a gentle introduction to what was required from participants: intimate, although not necessarily private, data in the form of their own heartbeat. A general sense of group contribution was created through the persistence of the created objects and sounds, which were given a finite 'life' as a function of various variables. If left alone after participant interaction, computational activity ceases as each object dies off, and a static screen (representing a composite of all inputs) remains until further input.

In designing the screen-based (i.e. non-immersive) virtual world created by cubeLife, some of the recognised shortcomings and limitations (as well as the strengths) of VR were taken into account (Lanier, 2003; Howarth and Costello, 1996); hence a 'semi-immersive' environment was conceived. A large 'responsive wall' was built, with a single visual and audio entity representing each individual input. A little like being contained behind a computer screen, this environment transcended traditional output by drawing in participants through a 'sense' of shared contribution, rather than through interaction based on representations of conventional social-computer interaction (e.g. a game or chat room with avatars). In this way, the work aimed at a level of human response below the conscious, attempting to build a sense of 'belonging'; in contrast, 'participation' implies a consciously controlled interaction with a strong distinction between 'self' and 'other'. This distinction has enormous implications for the process of engagement in physiologically based art and, by implication, in physiological computing.
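
As promised above, the depth feedback that changed audience attitudes amounts to a direct mapping from the smoothed blood-volume reading to the embryonic cube's position relative to the vision plane. A minimal sketch of such a mapping, with invented signal and depth ranges rather than the exhibited values, might look like this:

```python
# Sketch: mapping the smoothed blood-volume reading to the embryonic
# cube's depth, so the cube visibly pulses with the participant's
# heartbeat. Signal and depth ranges are illustrative assumptions.

NEAR_Z, FAR_Z = 2.0, 6.0        # assumed depth range of the vision plane

def cube_depth(sample, lo, hi):
    """Map a blood-volume sample in [lo, hi] to a z-position, clamped."""
    span = (hi - lo) or 1.0
    fraction = min(1.0, max(0.0, (sample - lo) / span))
    return FAR_Z - fraction * (FAR_Z - NEAR_Z)   # more blood = nearer

# Example: lo/hi could track the session's minimum and maximum readings.
print(cube_depth(512, lo=300, hi=700))
```
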
The refinement of physiological input devices requires that output technologies begin to transcend the self-other barrier; in other words, the user needs to experience the technology as a natural extension of their own personal space, or even identity.

The experience of cubeLife suggests a basic set of requirements for successful engagement in physiologically driven computing and art:

(1) ease of connection to input (both non-invasive and surgical technologies);
(2) immersion in output (outputs providing adequate and appropriate feedback);

with these two leading to

(3) a sense of personal connection with the technologies ('somatic extension').

The third can only occur if the first two are successful. Whether these factors are likely to elicit a more powerful emotional investment from the user, and thus a desire to 'understand', or at least empathise with, a system that may be perceived as an extension of the sense of self, depends upon the success or failure of the interaction.

5. Conclusion

The paper discussed why investigation into physiological computing is needed and reviewed empirical work by some of the authors. In particular, it discussed the reliability of the information that can be inferred from certain biological sensor data, and ways in which the positive benefits of the resulting feedback can be ensured or measured. One important and emerging application area for physiological feedback in interactive computing is interactive art systems. In some respects, this application has been making strong progress precisely because the interactive experience itself, rather than more abstract and problematic information handling, is at its core. Another interesting aspect of the applications in art is that they provide informal experimental investigations into these new forms of human–computer interaction. The paper described an art work employing physiological feedback, including a discussion of how it was built and of the participating audience's reactions when it was exhibited.

References

Adaptive Technology Resource Center. Neural interfaces, University of Toronto. Available at http://www.utoronto.ca/atrc/reference/tech/neuralinterface.html, last accessed April 28, 2004.
Ambient Devices, Inc. http://www.ambientdevices.com/cat/orb/orborder.html, last accessed April 27, 2004.
Arts Catalyst, The. http://www.artscatalyst.org/, last accessed April 27, 2004.
Bayliss, J. Alternative Interface Lab, Department of Computer Science, Rochester Institute of Technology. Papers available at http://www.cs.rit.edu/~jdb/research/, last accessed April 25, 2004.
Biswas, A., 2001. Self-portrait (digital artwork). Details at http://www.thelab.org/archive01/gateway/selfportrait.htm, last accessed April 25, 2004.
Brand, H., Gortgak, R., Abraham-Inpijn, L., 1995. Anxiety and heart rate correlation prior to dental checkup. International Dental Journal 45, 347–351.
Brown, P., 2003. The idea becomes a machine: AI and Alife in early British computer arts. In: Proceedings of Consciousness Reframed 2003. Available at http://www.paul-brown.com/WORDS/INDEX.HTM, last accessed April 27, 2004.

Calvo, M., Szabo, A., Capafons, J., 1996. Anxiety and heart rate under psychological stress: the effects of exercise-training. Anxiety, Stress, and Coping 9, 321–337.
Clites, M.S., 1936. Certain somatic activities in relation to successful and unsuccessful problem solving. Journal of Experimental Psychology 10, 8–20.
Cohen, H. http://crca.ucsd.edu/~hcohen/; biography on KurzweilAI.net: http://www.kurzweilai.net/bios/frame.html?main=/bios/bio0028.html, last accessed April 28, 2004.
Crews, D., Landers, D., 1992. Electroencephalographic measures of attentional patterns prior to the golf putt. Medicine and Science in Sports and Exercise 25 (1), 116–126.
Davies, C., 1998. Éphémère and (1995) Osmose (digital artworks). Available at http://www.immersence.com, last accessed April 25, 2004.
Dennett, D., Hofstadter, D., 1981. The Mind's I: Fantasies and Reflections on the Self and the Soul (ISBN 0553345842).
Diggory, J., Klein, S., Cohen, M., 1964. Muscle-action potentials and estimated probability of success. Journal of Experimental Psychology 68, 449–455.
Direct Brain Interface Project. Electrocorticogram (ECoG) implants. University of Michigan. Available at http://www.engin.umich.edu/dbi/nih2000/subjects.html, last accessed April 28, 2004.
Edmonds, E., 2003. Logics for constructing generative arts systems. Digital Creativity 14 (1), 23–28.
Everitt, D., Turner, G., 1999. cubeLife (digital artwork). Information at http://www.cubelife.org, last accessed April 25, 2004.
Everitt, D., Turner, G., Quantrill, M., Robson, J., 2002. Arts and Disability Interfaces: New Technology, Disabled Artists and Audiences, Part 2 of 4: Technology Report. Arts Council England. Available at http://www.fased.org/research/distech-report.pdf, last accessed April 25, 2004.
Experimental Interaction Unit. http://www.eiu.org, last accessed April 28, 2004.
Field, T., Martinez, A., Nawrocki, T., Pickens, J., Fox, N., Schanberg, S., 1998. Music shifts frontal EEG in depressed adolescents. Adolescence 33 (129), 109–116.
Fine Art Forum. http://www.msstate.edu/Fineart_Online/home.html, last accessed April 27, 2004.
Gabriel, U., 2001. Terrain '01 (digital artwork). Details at http://www.foro-artistico.de/english/program/system.htm, last accessed April 25, 2004.
Gigliotti, C., 2002. Reverie, Osmose and Éphémère. n.paradoxa 9 ((Eco)Logical), 64–73. Available at http://www.immersence.com/publications/CGigliotti-nparadoxa-N.html, last accessed April 27, 2004.
Goleman, D., 1995. Emotional Intelligence. Bantam Books, New York.
Grau, O., 2003. Charlotte Davies: Osmose. In: Virtual Art, From Illusion to Immersion (revised and expanded edition). MIT Press, Cambridge, MA, pp. 193–211. Available at http://www.immersence.com/publications/OGrau-VirtualArt-N.html, last accessed April 26, 2004.
Howarth, P., Costello, P., 1996. Studies into the Visual Effects of Immersion in Virtual Environments. Available at http://www.lboro.ac.uk/departments/hu/groups/viserg/9603v2.htm, last accessed April 25, 2004.
Huggins, J., Levine, S., Fessler, J., Sowers, W., Pfurtscheller, G., Graimann, B., Schloegl, A., Minecan, D., Kushwaha, R., 2003. Electrocorticogram as the basis for a direct brain interface: opportunities for improved detection accuracy. In: The First International IEEE EMB Conference on Neural Engineering, March 20–22, pp. 587–590.
Inouye, T., Shinosaki, K., Iyama, A., Matsumoto, Y., 1993. Localisation of activated areas and directional EEG pattern during mental arithmetic. Electroencephalography and Clinical Neurophysiology 86, 224–230.
Jacobs, G., Benson, H., Friedman, R., 1996. Topographic EEG mapping of the relaxation response. Biofeedback and Self-Regulation 21 (2), 121–129.
Kelly, D., Brown, C.C., Shaffer, J.W., 1970. A comparison of physiological and psychological measurements of anxious patients and normal controls. Psychophysiology 6, 429–441.
Kennedy, P., 1998. Emory neuroscientists use brain implant to help paralyzed and speech-impaired patients communicate via computer. Emory University, Robert W. Woodruff Health Sciences Center, press release. Available at http://www.emory.edu/WHSC/HSNEWS/releases/feb99/022399brain.html, last accessed April 28, 2004.
Kennedy, P., 1999. Brain implants: a new interface for communication. http://www.cis.upenn.edu/~bracy/brain/, last accessed April 28, 2004.
Kiroy, V., Warsawskaya, L., Voynov, V., 1996. EEG after prolonged mental activity. International Journal of Neuroscience 85, 31–43.
Kort, B., Reilly, R., 2002. Theories for Deep Change in Affect-sensitive Cognitive Machines: A Constructivist Model. MIT Media Laboratory. Available at http://ifets.ieee.org/periodical/vol_4_2002/kort.html, last accessed April 25, 2004.
Kyberd, P., 2002. In: Keates, S., Langdon, P. (Eds.), Universal Access and Assistive Technology. Springer, Berlin. ISBN 1852335955. Available at http://www.cyber.rdg.ac.uk/research/publications.htm?viewpublication&ID=01157, last accessed April 25, 2004.
Lanier, J., 2002. One-Half of a Manifesto. Available at http://www.edge.org/3rd_culture/lanier/lanier_index.html, last accessed April 25, 2004.
Lanier, J., 2003. The Top Eleven Reasons VR has not yet become commonplace (talk at the San Francisco Bay Area Chapter of ACM SIGCHI). Available at http://www.baychi.org/calendar/20030909/; summary at http://www.advanced.org/jaron/topeleven.html, last accessed April 25, 2004.
Larsson, S., Larsson, R., Zhang, Q., Cai, H., Öberg, P., 1995. Effects of psychophysiological stress on trapezius muscles blood flow and electromyography during static load. European Journal of Applied Physiology 71, 493–498.
Leonardo. http://mitpress2.mit.edu/e-journals/Leonardo/, last accessed April 27, 2004.
Lindsay, K., Vergo, P. (Eds.), 1994. Vasily Kandinsky: Complete Writings on Art. Da Capo Press, New York.
Lundberg, U., Kadefors, R., Melin, B., Palmerud, G., Hassmén, P., Engström, M., Elfsberg Dohns, I., 1994. Psychophysiological stress and EMG activity of the trapezius muscle. International Journal of Behavioral Medicine 1, 354–370.
Macaulay, M., 2000. Monitoring, treating and compensating for the effects of anxiety in human–computer interaction. PhD Thesis, Loughborough University, UK.
MacGregor, B., 2002. Cybernetic Serendipity Revisited. In: Proceedings of Creativity and Cognition 2002. ACM, pp. 11–13. Available from http://portal.acm.org/citation.cfm?doid=581710.581713, last accessed April 27, 2004.
Malmo, R., 1975. On Emotions, Needs, and Our Archaic Brain. Holt, Rinehart and Winston, New York.
Markand, O., 1990. Alpha rhythms. Journal of Clinical Neurophysiology 7 (2), 163–189.
Marrin, T., 2000. Inside the Conductor's Jacket: Analysis, Interpretation and Musical Synthesis of Expressive Gesture. Available from http://web.media.mit.edu/~marrin/HTMLThesis/Dissertation.htm, last accessed April 28, 2004; examples mentioned available from http://web.media.mit.edu/~marrin/HTMLThesis/2.6.htm.
Mundy-Castle, A.C., 1951. Theta and beta in the electroencephalogram of normal adults. Electroencephalography and Clinical Neurophysiology 3, 477–486.
Nagane, M., 1990. Development of psychological and physiological sensitivity indices to stress based on state anxiety and heart rate. Perceptual and Motor Skills 70, 611–614.
NESTA (National Endowment for Science, Technology and the Arts). http://www.nesta.org.uk/, last accessed July 7, 2004.
Nicolelis, M., Chapin, J., 2002. Controlling robots with the mind. Scientific American, October 2002, online page 2. Available at http://www.sciam.com/article.cfm?articleID=00065FEA-DAEA-1D8090FB809EC5880000&pageNumber=2&catID=2, last accessed April 25, 2004.
Oken, B., Salinsky, M., 1992. Alertness and attention: basic science and electrophysiologic correlates. Journal of Clinical Neurophysiology 9 (4), 480–494.
Orlan. Orlan, the Living Masterpiece. Gene Ragalie's Homepage, Western Illinois University website. Available at http://www.wiu.edu/users/gjr100/orlan.htm, last accessed April 25, 2004.
Pauline, M. Survival Research Labs. http://www.srl.org, last accessed April 27, 2004.
Petruzzello, S., Landers, D., 1994. State anxiety reduction and exercise: does hemispheric activation reflect such changes? Medicine and Science in Sports and Exercise 26 (8), 1028–1035.
Poulton, A., Kyberd, P., Gow, D., 2002. Progress of a modular prosthetic arm. Available at http://www.cyber.rdg.ac.uk/research/publications.htm?viewpublication&ID=01157, last accessed April 25, 2004.
Ray, W., Cole, H., 1985. EEG alpha activity reflects attentional demands, and beta activity reflects emotional and cognitive processes. Science 228, 750–752.
Rhizome. http://www.rhizome.org/, last accessed April 27, 2004.
Salovey, P., Mayer, J., 1990. Emotional intelligence. Imagination, Cognition and Personality 9 (3), 185–211.
SciArt. http://www.sciart.org/, last accessed July 7, 2004.
Sharpley, C., 1994. Differences in pulse rate and heart rate and effects on the calculation of heart rate reactivity during periods of mental stress. Journal of Behavioral Medicine 17 (1), 99–109.
Stelarc. Personal website: http://www.stelarc.va.com.au, last accessed April 28, 2004. Amplified Body: http://www.stelarc.va.com.au/ampbod/ampbod.html; Stimbod: http://www.stelarc.va.com.au/stimbod/stimbod.html.
Transhumanism, 'physical enhancement' and bionics links. Available at http://www.aleph.se/Trans/Individual/Body/index.html, last accessed April 25, 2004.
Turrell, J. Guggenheim Museum biography. http://www.guggenheimcollection.org/site/artist_bio_155.html, last accessed April 27, 2004.
Verostko, R., 2002. Personal site. http://www.invisum.com/, last accessed April 28, 2004.
Warwick, K., 2002. Project Cyborg 2.0. Details available at http://www.rdg.ac.uk/KevinWarwick/html/project_cyborg_2_0.html, last accessed April 25, 2004.
Warwick, K., Gasson, M., 2004. Practical Interface Experiments with Implant Technology. In: International Workshop on Human–Computer Interaction (HCI2004), Prague, May 16, 2004, in press.
Wærsted, M., Bjorklund, R., Westgaard, R., 1991. Shoulder muscle tension induced by two VDU-based tasks of different complexity. Ergonomics 34, 137–150.
Wærsted, M., Eken, T., Westgaard, R.H., 1996. Activity of single motor units in attention-demanding tasks: firing pattern in the human trapezius muscle. European Journal of Applied Physiology 72, 323–329.
Whittenberger, G., 1979. Correlation of magnitude estimates of state anxiety with heart rate, finger pulse volume, skin conductance, and EMG responses for subjects under threat of shock. PhD Thesis, Florida State University.
Zivanovic, A., 2003. A web site dedicated to the work of Edward Ihnatowicz. http://www.senster.com/ihnatowicz/index.htm, last accessed April 28, 2004.