Robotics and Autonomous Systems 58 (2010) 1238–1245
Mental tasks-based brain–robot interface

Eduardo Iáñez a,∗, José María Azorín a, Andrés Úbeda a, José Manuel Ferrández b, Eduardo Fernández c

a Virtual Reality and Robotics Lab, Industrial Systems Engineering Department, Miguel Hernández University of Elche, Spain
b Electronics and Computer Technology Department, Polytechnic University of Cartagena, Murcia, Spain
c Bioengineering Institute, Miguel Hernández University of Elche, Spain
Article history: Available online 7 September 2010
Keywords: Brain computer interface; Human–robot interface; Electroencephalography; Linear discriminant analysis; Wavelet transform
Abstract
This paper describes a Brain Computer Interface (BCI) based on electroencephalography (EEG) that allows control of a robot arm. This interface will enable people with severe disabilities to control a robot arm to assist them in a variety of tasks in their daily lives. The BCI system developed differentiates between three cognitive processes related to motor imagery by registering the rhythmic brain activity through 16 electrodes placed on the scalp. The feature extraction algorithm is based on the Wavelet Transform (WT). A Linear Discriminant Analysis (LDA) based classifier has been developed in order to differentiate between the three mental tasks; it combines four LDA-based models simultaneously through a score-based system. The experimental results of six volunteers performing several trajectories with a robot arm are presented.
1. Introduction

A Brain Computer Interface (BCI) system is based on the use of the mental activity of a person to generate control commands for a device [1,2]. To this end, the EEG signals of the person have to be registered and processed appropriately in order to differentiate between several cognitive processes or ‘‘mental tasks’’. These interfaces have been used in different applications, from the control of a keyboard or mouse on a computer to the control of a robot arm [3,4]. The generation of control actions without the need for any kind of movement by the person makes these interfaces very useful for people with severe disabilities.

Brain activity can be registered in several ways because, apart from the electrical activity, magnetic and metabolic signals (which reflect changes in blood flow) are produced. The magnetic fields can be measured through magnetoencephalography (MEG) and the metabolic activity through Positron Emission Tomography (PET) or Functional Magnetic Resonance Imaging (fMRI) [5]. However, these techniques require sophisticated and expensive equipment whose availability is restricted to large institutions and hospitals.

Invasive techniques can also be used in the development of BCI systems. In this case, the activity of a neuron or of small groups of neurons can be registered using microelectrodes implanted directly in the brain. These techniques have been used to determine the
∗ Corresponding author. Tel.: +34 96522 2179; fax: +34 965222033.
E-mail address: [email protected] (E. Iáñez).
URL: http://nbio.umh.es/ (E. Iáñez).
doi:10.1016/j.robot.2010.08.007
intention of movements in animals [4,6] or to control a cursor on a screen [7]. However, for humans, non-invasive techniques based on EEG signals are preferable because of ethical aspects and medical risks, and they can offer similar results [8]. These techniques use electrodes placed on the scalp to measure the electroencephalographic (EEG) signals.

Non-invasive BCIs can be classified as evoked or spontaneous. In an evoked BCI, the registered signals reflect the automatic response of the brain to certain external stimuli [9,10]. However, the need for external stimuli restricts the number of applications. This paper is focused on a spontaneous BCI, where the person carries out a cognitive process of their own will [11].

A BCI can use a synchronous or an asynchronous protocol. In synchronous protocols, the system indicates to the user the moments when he/she must think of the cognitive process. The signals are then processed during a specific interval of time and the decision is taken, which makes these systems slow. Synchronous protocols make the EEG processing easier because the starting time is known and the differences with respect to the background activity can be measured [12]. Asynchronous protocols, on the other hand, are more flexible because the user does not need to wait for a specific time and can think of the cognitive processes freely [13]. In short, in synchronous protocols the EEG signals are time-locked to externally paced cues (‘‘the system controls the user’’), while in asynchronous protocols the user can think of the mental tasks at any time (‘‘the user controls the system’’).

BCI systems have been applied to the robotic field, such as the control of a small robot moving through different rooms with spontaneous techniques [14] or the control of a wheelchair using an evoked
Fig. 2. Motor cortex area of the brain (left) and location of the electrodes following the standard placement of 64 scalp electrodes as an extension to the 10/20 International System (right).
Fig. 1. Image of the local environment with the g.tec USBAmp device, the PC used for registration and processing, and the user.
technique [15]. However, there are only a few approaches that use a BCI to control a robot arm. In one of them a monkey is able to control a robot arm and feed itself, but using an invasive technique [4]. Another example is the control of a robot arm using the amplitude of the alpha waves [16], but with a synchronous protocol.

This paper describes a spontaneous, non-invasive, EEG-based BCI that uses an asynchronous protocol. The BCI uses the Wavelet Transform to extract the relevant characteristics of the EEG signals, and a new LDA-based classifier has been developed to differentiate between three mental tasks. This classifier combines four LDA-based models to classify the mental tasks. The BCI allows control of a real robot arm: users can control the robot end effector to perform the desired trajectory. The efficiency and accuracy of the interface have been evaluated with six users in different experimental tests.

The paper is organized as follows. Section 2 describes the brain computer interface developed, including the feature extraction algorithm. The classifier implemented is explained in Section 3. Next, in Section 4 the experimental tests with different volunteers and the results are described. Finally, the main conclusions are summarized in Section 5.

2. Brain interface protocol

In this section, the procedure used to register the EEG signals and to extract their relevant features is explained.

The EEG signals have been registered through the commercial g.tec1 USBAmp device (Fig. 1). This device provides 16 channels. A sampling frequency of 1200 Hz has been used. A band-pass filter between 0 and 60 Hz has been applied, as well as a notch filter at 50 Hz to eliminate the perturbations of the electrical network. The software used to register the EEG signals has been developed in Matlab using the API (Application Programming Interface) offered with the device (gUSBAmp MATLAB API).

Three cognitive processes or ‘‘mental tasks’’ related to motor imagery have been considered: the rest state and two mental tasks. As certain studies indicate [17], the imagination of a movement generates the same mental and even physical processes as the performance of the movement, except that the movement is blocked. The mental tasks considered in our BCI are the imagination of the movement of the right or left arm, or the imagination of the right or left hand movement. The user chooses the two tasks on which it is easier for him/her to concentrate.
1 http://www.gtec.at/.
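The registration software itself relies on the gUSBAmp MATLAB API; the following is only a minimal Matlab sketch of the conditioning stage described above (1200 Hz sampling, 0–60 Hz band and 50 Hz notch are taken from the text; the filter orders, the zero-phase implementation and the placeholder data are assumptions).

```matlab
% Conditioning of the raw EEG (sketch, not the original acquisition code).
fs  = 1200;                            % sampling frequency (Hz)
eeg = randn(10*fs, 16);                % placeholder for 10 s of 16-channel EEG

[bLow,  aLow]  = butter(4, 60/(fs/2), 'low');        % 0-60 Hz band
[bStop, aStop] = butter(2, [49 51]/(fs/2), 'stop');  % 50 Hz notch (power line)

eegFilt = filtfilt(bLow,  aLow,  eeg); % zero-phase filtering, column-wise
eegFilt = filtfilt(bStop, aStop, eegFilt);
```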
The electrodes have been placed following the standard placement of 64 scalp electrodes as an extension to the 10/20 International System [18,19]. It has been verified in [20] that the mental tasks related to motor imagery produce a cerebral activation in the motor cortex of the brain (Fig. 2, left), so the electrodes have been placed over this area (Fig. 2, right): nine effective electrodes around the Cz position have been used (FC1, FCz, FC2, C1, Cz, C2, CP1, CPz and CP2). As the g.tec device provides 16 channels, the seven remaining electrodes have been placed around the main ones in order to use them during the preprocessing.

The registered EEG signals are processed in sequences of 1 s length, with an overlap of half a second with the previous sequence, so decisions are taken every half second in real time. The protocol used to register the data will be explained in Section 4. Once the data are registered, they are preprocessed and a feature extraction algorithm is applied. Then, the characteristics are classified to obtain the cognitive process that the user is thinking of at that moment. The classifier implemented will be explained in Section 3.

Before extracting the main characteristics of the registered data, a preprocessing has been applied to the signals. The baseline of each electrode has been removed by eliminating the mean value of the signal registered by that electrode. An artifact rejection has been applied to the signals. A Surface Laplacian has been applied to the main electrodes, removing from each one the mean value of the adjacent electrodes in order to improve the signal-to-noise ratio of each electrode [21–23]. In this preprocessing all 16 electrodes have been used.

Next, a feature extraction algorithm is applied to the data. The function of the algorithm is to extract the main characteristics of the EEG signals in order to facilitate the subsequent classification. The algorithm works in the frequency domain: the spectrum between 0 and 60 Hz has been analyzed to study the variations of the rhythmic activity. The Wavelet Transform (WT) has been used to extract the features [24]. It divides the signal into its frequency components and studies each component with a resolution according to its scale. The Wavelet Packet Decomposition (WPD) of Matlab has been used, generating the tree of Fig. 3. The db3 filter (of the Daubechies family) has been used, and the energy of the 14 coefficients of level 7 (from 0 to 13) of the tree has been calculated. The characteristic output vector is formed by the concatenation of the energy coefficients of the 9 main electrodes.

3. Classifier

A new Linear Discriminant Analysis (LDA) based classifier has been developed. LDA is related to the Fisher linear discriminant, which is used in statistics and machine learning to find the best linear combination of the input characteristics to separate two classes [25,26]. This method performs a dimensional reduction that maximizes the ratio of the between-class variance to the within-class variance of any set of data, guaranteeing the maximum separability between the classes.
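Before going into the details of the classifier, the following minimal Matlab sketch illustrates the preprocessing and WPD-based feature extraction of Section 2 for one 1-s sequence (baseline removal, Surface Laplacian over the nine main electrodes, db3 wavelet packet energies of the 14 level-7 nodes). The electrode adjacency map, variable names and placeholder data are assumptions, and artifact rejection is omitted.

```matlab
% Preprocessing and WPD feature extraction for one 1-s sequence (sketch).
fs      = 1200;
eegFilt = randn(fs, 16);               % stands in for one second of conditioned EEG
win     = eegFilt - repmat(mean(eegFilt), fs, 1);   % baseline (mean) removal

% Channels 1-9 are assumed to be FC1, FCz, FC2, C1, Cz, C2, CP1, CPz, CP2;
% channels 10-16 the seven outer electrodes. The adjacency map is hypothetical.
mainIdx = 1:9;
neigh   = {[2 4 10], [1 3 11], [2 6 12], [1 5 13], [2 4 6 8], ...
           [3 5 14], [4 8 15], [7 9 5], [6 8 16]};

features = [];
for e = 1:numel(mainIdx)
    % Surface Laplacian: subtract the mean of the adjacent electrodes.
    x = win(:, mainIdx(e)) - mean(win(:, neigh{e}), 2);

    % Wavelet packet decomposition, Daubechies db3, 7 levels (Fig. 3).
    T = wpdec(x, 7, 'db3');
    energies = zeros(1, 14);
    for k = 0:13                       % nodes (7,0) ... (7,13)
        c = wpcoef(T, [7 k]);
        energies(k+1) = sum(c.^2);     % energy of the node coefficients
    end
    features = [features energies];    %#ok<AGROW> 9 x 14 = 126 values per sequence
end
```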
Fig. 3. Coefficients tree obtained using the wavelet packet decomposition (WPD).
Table 1
Classes 1 and 2 considering the four LDA models created.

            Class 1 (main)    Class 2
Model 1     Right             Left & rest state
Model 2     Left              Right & rest state
Model 3     Rest state        Right & left
Model 4     Right             Left
Fig. 4. Example of separation of two classes with LDA and threshold selection.
As LDA is not directly appropriate for classifying three classes simultaneously, four different models have been created to classify the characteristics of the three mental tasks (the rest state and the mental tasks assigned to right and left). In each model only two classes are differentiated, as shown in Table 1. In each model, a main class, composed only of the data corresponding to one mental task, is separated from a secondary class, composed of the data corresponding to the other tasks. The output of each model has two dimensions because of the dimensional reduction produced by LDA.

Fig. 4 has been obtained after applying one of the models. Each data point corresponding to class 1 is represented by a cross, while each data point corresponding to class 2 is represented by a circle. The procedure to calculate the thresholds that separate the classes is explained later. For model 1, class 1 (main) corresponds to the characteristics of the mental task assigned to right, whereas class 2 corresponds to the characteristics of the mental task assigned to left and to the rest state. The same procedure is followed for models 2 and 3 according to Table 1. The fourth model, however, only considers the right and left tasks. Therefore, there is a model to differentiate each mental task from the other mental tasks, and a support model to differentiate between the right and left mental tasks.

Once the models have been created, the threshold that best separates the two classes in each model has to be selected. A function that automatically selects the thresholds has been designed. A single threshold would be enough to separate two regions; however, two thresholds have been selected in order to create three regions, corresponding to class 1, class 2, and an intermediate region of uncertainty (Fig. 4). The thresholds have been selected by maximizing the number of characteristics of each class in their own region and minimizing them in the opposite region. The uncertainty region corresponds to the central region where the classes cannot be properly separated. The thresholds have been created only in the horizontal dimension (see Fig. 4), because it is the dimension that best separates the classes. The visualization in the two reduced dimensions obtained from LDA has been used in Fig. 4 to clarify the explanation. The thresholds and the models are calculated during the adjustment of the classifier (explained in Section 4).

Each time the models are applied to the characteristic vector provided by the feature extraction algorithm, the following values are returned by each model according to the region where the vector is classified: ‘‘1’’ for the region of class 1, ‘‘−1’’ for the region of class 2, and ‘‘0’’ for the uncertainty region.

3.1. Decision system

Once the models have been created and the thresholds selected, a score-based system has been developed to obtain the output of the BCI: right, rest state, left or uncertainty. The uncertainty state is returned as output when it is not clear which mental task the user is performing. Introducing the uncertainty state improves the accuracy of the system by decreasing the error percentage. The uncertainty state is treated like the rest state during the BCI operation.

After applying the four models to each characteristic vector, the mental tasks are scored depending on the regions where the vector is classified in each model. These scores are then added in order to select the task with the highest score. This score must exceed a threshold, which has been set to ‘‘1’’ based on the experimental results. If it does not exceed the threshold, or the maximum is not unique (i.e. several tasks have the same maximum value), the output is uncertainty. The scores are given in the following way:

Models 1 to 3:
• If the vector is classified in the region belonging to class 1 or main class (value 1): 1 point is given to the task of class 1 (see Table 1).
• If the vector is classified in the region belonging to class 2 (value −1): 0.5 points are given to each task assigned to class 2 (see Table 1).
• If the vector is classified in the uncertainty region (value 0): no task is scored.

Model 4:
• If the vector is classified in the region belonging to class 1 or main class (value 1): 1 point is given to the task of class 1, in this case the right task (see Table 1).
• If the vector is classified in the region belonging to class 2 (value −1): 1 point is given to the task of class 2, in this case the left task (see Table 1).
• If the vector is classified in the uncertainty region (value 0): no task is scored.

The equation group (1) shows how these scores are computed and how the output is determined.
Fig. 5. User interface in Matlab.
score_right = 1 · r_1^1 + 0.5 · r_2^3 + 0.5 · r_3^3 + 1 · r_4^1
score_rest = 0.5 · r_1^3 + 0.5 · r_2^3 + 1 · r_3^1      (in model 4, the rest state is not scored)
score_left = 0.5 · r_1^3 + 1 · r_2^1 + 0.5 · r_3^3 + 1 · r_4^3

r_m^r = 1 if the vector is classified in region r of model m (region 1: class 1, region 2: uncertainty, region 3: class 2), and 0 otherwise

score = max{score_right, score_rest, score_left}

if (the maximum is not unique) then output = NaN (uncertainty)
elseif (score < threshold) then output = NaN (uncertainty)
elseif (score == score_right) then output = 1 (Right)
elseif (score == score_rest) then output = 0 (Rest State)
elseif (score == score_left) then output = −1 (Left).        (1)
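A minimal Matlab sketch of this decision rule is given below; the helper (a hypothetical bci_decision.m, not the original implementation) receives in r the region outputs (1, −1 or 0) of the four models and uses the threshold of 1 stated above.

```matlab
function output = bci_decision(r, threshold)
% Score-based decision of Eq. (1) (hypothetical helper, bci_decision.m).
% r(m): output of LDA model m for the current characteristic vector
%       (1 = class-1 region, -1 = class-2 region, 0 = uncertainty region).
% output: 1 = right, 0 = rest state, -1 = left, NaN = uncertainty.
score_right = (r(1)== 1)*1   + (r(2)==-1)*0.5 + (r(3)==-1)*0.5 + (r(4)== 1)*1;
score_rest  = (r(1)==-1)*0.5 + (r(2)==-1)*0.5 + (r(3)== 1)*1;   % model 4 never scores rest
score_left  = (r(1)==-1)*0.5 + (r(2)== 1)*1   + (r(3)==-1)*0.5 + (r(4)==-1)*1;

scores = [score_right, score_rest, score_left];
[best, idx] = max(scores);
if best < threshold || sum(scores == best) > 1
    output = NaN;                      % uncertainty
else
    labels = [1 0 -1];                 % right, rest state, left
    output = labels(idx);
end
end
```

For instance, bci_decision([1 -1 0 1], 1) reproduces the worked example that follows: scores of 2.5, 0.5 and 0 points and output 1 (right).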
For example, if the models are applied to a characteristic vector and it is classified in the following regions:
• Model 1: 1 (right)
• Model 2: −1 (right and rest state)
• Model 3: 0 (uncertainty)
• Model 4: 1 (right)

the scores obtained are:
• Right: 1 (model 1) + 0.5 (model 2) + 0 (model 3) + 1 (model 4) = 2.5 points
• Left: 0 (model 1) + 0 (model 2) + 0 (model 3) + 0 (model 4) = 0 points
• Rest state: 0 (model 1) + 0.5 (model 2) + 0 (model 3) + 0 (model 4) = 0.5 points.

As the highest score corresponds to right and it exceeds the threshold, this task is selected as the output. If the score did not exceed the threshold, or the maximum score were not unique, the output would be NaN (Not a Number in Matlab), which indicates uncertainty.

4. Experimental tests

For the experimental tests, a user interface in Matlab has been developed (Fig. 5). This interface allows the training of the users. The interface provides options to connect to the device, select the kind of test to perform, and start/stop the test. Three kinds of tests have been made with this interface: the measurement of the online error, where the user is shown the task that must be performed during the test; the performance of trajectories, where the user has to pass as close as possible to the center
of certain targets; and finally, the trajectories made with the robot arm, where the user interface is only used in the background and the image of the camera is used as visual feedback for the user.

In this interface, when START is executed, the dot situated at the top moves automatically downwards with a constant speed, drawing a trajectory. The position of the dot is refreshed every half second, which is when the decision is taken, as explained in Section 2. The movement of the dot is controlled with the decisions made on the mental task that the user is thinking about at that moment. If the right task is detected, the dot moves right, painting the trajectory in green. If the left task is detected, the dot moves left, painting the trajectory in yellow. Finally, if the rest state is detected, the dot only moves downwards, without any lateral movement, painting the trajectory in red.

First, each user performs some tests in order to make the adjustment of the classifier, allowing the creation of a specific model for each user with the initial data registered. In these tests, the user must think of the mental task specified at each moment. With these tests, the four LDA-based models are created and the thresholds are calculated. These models and thresholds are used during the rest of the experimental tests. Afterwards, the user can use the interface in real time to train and finally control the robot arm. For the training, the corresponding test in the user interface is first selected to measure the online error in the three tasks, indicating what the user must think about in each one. After this, a configuration of targets is selected for the user to make a trajectory. In this case the user is not told what to think; he/she should think of the mental tasks necessary to pass as close as possible to the center of the targets. Finally, after the training, the user performs several trajectories controlling the robot arm directly, with visual feedback coming directly from the robot environment through the camera.

Six able-bodied volunteers took part in the experiments (user#i, i = 1, . . . , 6): four men and two women, all healthy and aged between 19 and 28 years. Each user chose, from among the four cognitive processes offered (explained in Section 2), the two that were easier for him/her to imagine.

In this section, the test protocol followed in each session is explained. In addition, the adjustment procedure of the classifier is described. Finally, the offline and online tests and the trajectories performed by the robot arm are shown.

4.1. Test protocol

Each user performs three sessions on different days: two days between sessions 1 and 2, and four days between sessions 2 and 3. The process followed in each session is the following:
• Session 1:
  – Four files are registered from the user to create a specific model for him/her and to make the adjustment of the classifier. The adjustment of the classifier is very fast and is done while the user rests briefly after the first part of the session.
  – At this point the offline error is measured to assess the performance of the classifier.
  – Next, five tests for each mental task are made by the user, with a duration of 50 s each. The online errors for this session are obtained.
  – Finally, the user makes two kinds of trajectories (each one with a different configuration of targets), five times each, obtaining a score for passing through the targets.
• Session 2:
  – The session starts using directly the models created in the first session.
  – Next, five tests for each mental task are made by the user, with a duration of 50 s each. The online errors for this session are obtained.
  – Then, the user makes two kinds of trajectories (each one with a different configuration of targets), five times each, obtaining a score for passing through the targets.
• Session 3:
  – The same procedure as in session 2 is followed in the last session.
  – Finally, five trajectories with the robot arm are made by the user, obtaining a score for passing through the targets. In this case the visual feedback corresponds directly to the robot.
Table 2
Offline results for the three tasks: success percentage.

          Right    Rest state    Left
User#1    54.00    24.50         49.66
User#2    35.68    32.32         33.34
User#3    59.98    33.66         58.34
User#4    49.34    24.34         50.66
User#5    41.68    22.18         42.32
User#6    56.02    22.00         57.98
The duration of each session, including the placement of the electrodes, all the tests performed by the user and the rest times between tests, was around an hour and a half, in order to avoid fatiguing the volunteers.

4.2. Adjustment of the classifier

First, the adjustment of the classifier is made. For that, the EEG signals have been registered while indicating to the user which mental task he/she must perform at each moment. For every user, four files have been registered showing alternately a red dot, which indicates that the user should be in the rest state, and a green dot, which indicates that the user must think of one of the two mental tasks selected by him/her. In files 1 and 2, the EEG signals related to the first selected mental task and the rest state are registered (e.g. imagining the movement of the right arm, and the rest state). In files 3 and 4, the EEG signals related to the other mental task and the rest state are registered (e.g. imagining the movement of the left arm, and the rest state). Each dot is shown on the screen for 6 s, and the two dots are alternated for 20 iterations each. Afterwards, the files are separated into sequences of 1 s, obtaining 320 trials for the mental task assigned to left, 320 for the task assigned to right and 640 for the rest state (this last state is present in the registration of all four files). The registered data are processed and the four LDA-based models are created to be used during the rest of the tests and the successive sessions.
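A minimal sketch of this adjustment stage is given below, assuming the characteristic vectors of the trials are stacked in a matrix X with one label per trial. The one-dimensional Fisher projection, the exhaustive threshold search and all function, field and variable names are illustrative assumptions (two hypothetical helpers, each saved as its own file), not the original implementation, which works on the two-dimensional LDA output and thresholds its horizontal dimension.

```matlab
function model = fit_lda_model(X, y, mainLabel, otherLabels)
% One of the four LDA-based models of Table 1 (sketch).
% X: trials x features, y: one label per trial (1 = right, 0 = rest, -1 = left).
X1 = X(y == mainLabel, :);                      % class 1 (main task)
X2 = X(ismember(y, otherLabels), :);            % class 2 (remaining tasks)
m1 = mean(X1);  m2 = mean(X2);
Sw = cov(X1) + cov(X2);                         % within-class scatter
model.w = Sw \ (m1 - m2)';                      % Fisher/LDA projection direction

% Two thresholds on the projected dimension create the class-1, uncertainty
% and class-2 regions; a simple exhaustive search maximizes the number of
% training vectors falling in their own region.
p1 = X1 * model.w;   p2 = X2 * model.w;
cand = linspace(min([p1; p2]), max([p1; p2]), 50);
best = -inf;
for i = 1:numel(cand)
    for j = i:numel(cand)
        hits = sum(p1 > cand(j)) + sum(p2 < cand(i)) ...
             - sum(p1 < cand(i)) - sum(p2 > cand(j));
        if hits > best
            best = hits;
            model.thr = [cand(i) cand(j)];      % [low, high]
        end
    end
end
end
```

```matlab
function r = apply_lda_model(model, x)
% Region output for a characteristic vector x: 1 (class 1), -1 (class 2)
% or 0 (uncertainty), as used by the decision system of Section 3.1.
p = x * model.w;
r = (p > model.thr(2)) - (p < model.thr(1));
end
```

With the labels above, the four models of Table 1 would be obtained as fit_lda_model(X, y, 1, [-1 0]), fit_lda_model(X, y, -1, [1 0]), fit_lda_model(X, y, 0, [1 -1]) and fit_lda_model(X, y, 1, -1).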
Fig. 6. Online success percentage on rest state.
4.3. Offline results

Next, from the registered data obtained for the adjustment of the classifier, a cross validation has been made in order to obtain the offline error of the classifier. The following procedure has been used: 70% of these data have been randomly selected to calculate the four LDA-based models and the thresholds, and the remaining 30% of the data have been used to test them. This process has been repeated five times, selecting the data randomly in each repetition. The results have been averaged and are shown in Table 2, which gives the success percentage obtained for the three mental tasks. The success percentage is low because the user does not have any kind of visual feedback, so he/she does not know whether he/she is performing well or not. It will be shown in later sections that these results improve with training, and especially when the user performs the trajectories.

4.4. Online results

The models created during the adjustment of the classifier in Section 4.2 are selected to start the real-time tests. These models are used during the rest of the tests and sessions made by the users. The online results of the users have been obtained using the Matlab user interface (Fig. 5). The task that the user must think of is indicated in order to measure the error percentage produced. But, in this
Fig. 7. Online success percentage on right task.
case, as was mentioned before, the user has visual feedback of the results provided by the classifier, so the user knows whether he/she is performing well or not. The results of the classification are shown in Figs. 6–8. Each figure corresponds to one mental task and shows the success percentage averaged for each session and each user.

Based on the experimental results, it has been observed that the characteristics of the EEG signals are not invariant over time. This fact causes a poor fit of the new data to the initially calculated models. Even so, as will be shown in the next two subsections, during the execution of the tests where the task to be thought of is not indicated to the user, the results improve significantly, achieving a good control of the trajectory to pass through the designated targets.

4.5. Trajectory results

For the drawing of the trajectories, the Matlab user interface of Fig. 5 has been used. In this case the mental task to think of is not indicated to the user. The user is free to think of the
Fig. 8. Online success percentage on left task.
Fig. 10. Score of the two trajectories made by the users.
Fig. 9. Trajectories made by user#2, trajectory 1 (left) and trajectory 2 (right).
mental task that he/she considers necessary to pass through the targets marked in the test. The user performs two kinds of trajectories, with five trials of each one. Fig. 9 shows the two kinds of trajectories with the targets that the user must cross. A score is given to the user as a function of the distance to the center of each target: if the user passes through the center of a target, he/she gets 10 points; if he/she touches the border, he/she gets 1 point; intermediate values are assigned linearly. The average of the points obtained in the two trajectories done by the users is shown in Fig. 10, which gives the average score for each session. An improvement of the results is obtained with training.

4.6. Results of the trajectories made with the robot arm

Once the training is performed, the user can control the robot arm. The FANUC LR Mate 200iB robot arm has been used for the final tests. This robot has 6 degrees of freedom and can load up to 5 kg in the end effector. The robot arm is shown in Fig. 11 (right). The software used to control the robot arm has been programmed in C++. Using the libraries of the robot, it establishes a direct connection with the robot to send it commands with the positions necessary to draw the trajectories. Since the software system developed to detect the cognitive processes takes decisions every half second, the control program of the robot arm sends a new command to the robot arm every half second in order to make it take a new position and follow the trajectory that the user wishes, as sketched below.
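The original control program is written in C++ using the robot libraries; the following Matlab sketch only illustrates the half-second update loop, with the classifier output replaced by a random stand-in. The workspace limits, the step sizes and the printed "command" are assumptions.

```matlab
% Half-second command loop of the robot-arm control (sketch).
yLim = [-0.15  0.15];                  % assumed Y range of the end effector (m)
zLim = [ 0.20  0.50];                  % assumed Z range of the end effector (m)
step = 0.005;                          % assumed displacement per decision (m)
pose = [0.0  zLim(2)];                 % current [Y Z]; X and orientation stay constant

for k = 1:60                           % 30 s of control, one decision every 0.5 s
    d = randi([-1 1]);                 % stand-in for the BCI decision: 1 right, 0 rest, -1 left
                                       % (uncertainty is handled like the rest state, d = 0)
    pose(1) = min(max(pose(1) + d*step, yLim(1)), yLim(2));   % lateral movement in Y
    pose(2) = max(pose(2) - step, zLim(1));                   % constant downward movement in Z
    fprintf('t = %4.1f s   Y = %6.3f   Z = %6.3f\n', k*0.5, pose(1), pose(2));
    % Here the C++ control program would send the new cartesian position
    % to the FANUC LR Mate 200iB through the robot libraries.
    pause(0.5);
end
```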
To see the movements that the robot is making, and since the robot is not in the same building as the user, a camera has been installed (Fig. 11, right). To make the movements of the robot arm, cartesian coordinates have been used, moving the end effector of the robot arm in the YZ plane and keeping the X coordinate and the orientation constant. The work range of the robot arm has been limited to a section of the YZ plane for safety reasons; it matches the size of the paper with the targets. The trajectory performed with the robot arm is the same as trajectory 2 of the previous section: the targets on the piece of paper match, and the duration of the test is the same. This allows working with the user interface (Fig. 5) simultaneously and saving the data in the same way as in the rest of the tests. The trajectory visualized in the interface and the trajectory drawn by the robot are identical, but in this case the user uses the camera aimed at the robot arm as visual feedback during the tests. An image of the robot arm is shown in Fig. 11 (right), and a zoom of the piece of paper with the targets is shown on the left. On the piece of paper, an example of the trajectories made by a user passing through the targets can be seen. During the test, the camera zooms in, allowing the user to see the targets and the trajectory that he/she is drawing more clearly.

Four users made tests with the robot arm. A comparison between the results of trajectory 2 made with the robot arm in the last session and the average of trajectory 2 over the three sessions is shown in Fig. 12. An improvement in the results can be seen. This improvement is produced by the training of the user and by the sense of responsibility of the users while they control the robot arm directly. As was mentioned before, due to the variability of the EEG signals over time, there is a poor fit of the data to the initially calculated models at the beginning of session 1. This fact affects the online results, but when the user can think freely to make the trajectories, the results improve significantly, and the user can pass through the targets while controlling the robot arm.

5. Conclusions and future trends

A non-invasive spontaneous BCI using an asynchronous protocol has been developed. This BCI allows us to differentiate between three cognitive processes. The feature extraction algorithm based on the Wavelet Transform (WT) and the LDA-based classifier have been explained. The combination of the four models improves the final decision about the mental task that the user is thinking of at each moment.
Fig. 11. Image of the robot arm through the camera where a paper with the targets that the robot must cross is shown (right), and a zoom of the paper with the targets and some trajectories performed by a user with the robot arm (left).
Fig. 12. Trajectory scores obtained using only the visual interface (without robot) versus the scores obtained using the real robot.
Despite the problem of the poor fit of the data to the initially calculated models, when the user makes the trajectories with the user interface and with the robot arm, the results obtained in differentiating between the three cognitive processes improve.

As future work, an online method to adjust the classifier to the new input values at each moment will be implemented. In addition, new classifiers will be developed to improve the results, and a combination of several classifiers will be considered. Furthermore, the control of the robot arm will be improved by adding new degrees of freedom and including specific tasks that can help people with severe disabilities in their daily life.

Acknowledgements

This research has been supported by grants DPI2008-06875-C03-03 (Ministerio de Ciencia e Innovación) and SAF2008-03694 from the Spanish Government.

References

[1] G. Dornhege, J.R. Millán, T. Hinterberger, D.J. McFarland, K.-R. Müller, Towards Brain–Computer Interfacing, MIT Press, Cambridge, Massachusetts, 2007.
[2] M.A.L. Nicolelis, Actions from thoughts, Nature 409 (2001) 403–407.
[3] B. Obermaier, G.R. Muller, G. Pfurtscheller, Virtual keyboard controlled by spontaneous EEG activity, IEEE Transactions on Neural Systems and Rehabilitation Engineering 11 (2003) 422–426.
[4] J.M. Carmena, M.A. Lebedev, R.E. Crist, J.E. O'Doherty, D.M. Santucci, D.F. Dimitrov, P.G. Patil, C.S. Henriquez, M.A.L. Nicolelis, Learning to control a brain–machine interface for reaching and grasping by primates, PLoS Biology 1 (2003) 193–208.
[5] N. Weiskopf, K. Mathiak, S.W. Bock, F. Scharnowski, R. Veit, W. Grodd, R. Goebel, N. Birbaumer, Principles of a brain–computer interface (BCI) based on real-time functional magnetic resonance imaging (fMRI), IEEE Transactions on Biomedical Engineering 51 (6) (2004) 966–970.
[6] J.K. Chapin, K.A. Moxon, R.S. Markowitz, M.A.L. Nicolelis, Real-time control of a robot arm using simultaneously recorded neurons in the motor cortex, Nature Neuroscience 2 (1999) 664–670.
[7] M.D. Serruya, N.G. Hatsopoulos, L. Paninski, M.R. Fellows, J.P. Donoghue, Instant neural control of a movement signal, Nature 416 (2002) 141–142.
[8] M. Velliste, S. Perel, M.C. Spalding, A.S. Whitford, A.B. Schwartz, Cortical control of a prosthetic arm for self-feeding, Nature 453 (2008) 1098–1101.
[9] J.D. Bayliss, Use of the evoked potential P3 component for control in a virtual environment, IEEE Transactions on Neural Systems and Rehabilitation Engineering 11 (2003) 113–116.
[10] X. Gao, X. Dingfeng, M. Cheng, S. Gao, A BCI-based environmental controller for the motion-disabled, IEEE Transactions on Neural Systems and Rehabilitation Engineering 11 (2003) 137–140.
[11] J.R. Millán, P.W. Ferrez, A. Buttfield, Non invasive brain–machine interfaces — Final Report, IDIAP Research Institute — ESA, 2005.
[12] G. Pfurtscheller, C. Neuper, Motor imagery and direct brain–computer communication, Proceedings of the IEEE 89 (2001) 1123–1134.
[13] J.R. Wolpaw, D.J. McFarland, T.M. Vaughan, Brain–computer interface research at the Wadsworth Center, IEEE Transactions on Rehabilitation Engineering 8 (2000) 222–226.
[14] J.R. Millán, F. Renkens, J. Mouriño, W. Gerstner, Brain-actuated interaction, Artificial Intelligence 159 (2004) 241–259.
[15] I. Iturrate, J.M. Antelis, A. Kübler, J. Minguez, A noninvasive brain-actuated wheelchair based on a P300 neurophysiological protocol and automated navigation, IEEE Transactions on Robotics 25 (3) (2009) 614–627.
[16] S. Inoue, Y. Akiyama, Y. Izumi, S. Nishijima, The development of BCI using alpha waves for controlling the robot arm, IEICE Transactions on Communications 91 (7) (2008) 2125–2132.
[17] J. Decety, M. Lindgren, Sensation of effort and duration of mentally executed actions, Scandinavian Journal of Psychology 32 (1991) 97–104.
[18] American Electroencephalographic Society, American Electroencephalographic Society guidelines for standard electrode position nomenclature, Journal of Clinical Neurophysiology 8 (2) (1991) 200–202.
[19] H.H. Jasper, The ten–twenty electrode system of the International Federation, Electroencephalography and Clinical Neurophysiology 10 (1958) 371–375.
[20] E. Iáñez, M.C. Furió, J.M. Azorín, J.A. Huizzi, E. Fernández, Brain–robot interface for controlling a remote robot arm, in: III International Work-Conference on the Interplay between Natural and Artificial Computation, IWINAC 2009, in: Lecture Notes in Computer Science, vol. 5602, Springer Verlag, Berlin, Heidelberg, 2009, pp. 353–361. ISBN: 3-642-02266-9.
[21] F. Babiloni, F. Cincotti, L. Bianchi, G. Pirri, J.R. Millán, J. Mouriño, S. Salinari, M.G. Marciani, Recognition of imagined hand movements with low resolution surface Laplacian and linear classifiers, Medical Engineering & Physics 23 (2001) 323–328.
[22] D.J. McFarland, L.M. McCane, S.V. David, J.R. Wolpaw, Spatial filter selection for EEG-based communication, Electroencephalography and Clinical Neurophysiology 103 (1997) 386–394.
[23] J. Mouriño, EEG-based Analysis for the Design of Adaptive Brain Interfaces, Ph.D. Thesis, Centre de Recerca en Enginyeria Biomèdica, Universitat Politècnica de Catalunya, Barcelona, Spain, 2003.
[24] T. Demiralp, J. Yordanova, V. Kolev, A. Ademoglu, M. Devrim, V.J. Samar, Time–frequency analysis of single-sweep event-related potentials by means of fast wavelet transform, Brain and Language 66 (1) (1999) 129–145.
[25] E.K. Tang, P.N. Suganthan, X. Yao, A.K. Qin, Linear dimensionality reduction using relevance weighted LDA, Pattern Recognition 38 (4) (2005) 485–493.
[26] X.G. Wang, X.O. Tang, Experimental study on multiple LDA classifier combination for high dimensional data classification, in: Multiple Classifier Systems, Lecture Notes in Computer Science, vol. 3077, 5th International Workshop on Multiple Classifier Systems, Cagliari, Italy, 2004, pp. 344–353.

Andrés Úbeda is a scientific researcher at the Virtual Reality and Robotics Lab at Miguel Hernández University of Elche (Spain). He obtained an M.Sc. in Industrial Engineering from the Miguel Hernández University of Elche in 2009 and is now doing his Ph.D. on Bioengineering. His current research interests are Human–Robot and Human–Computer Communication. He is focused on Multimodal Interfaces, which include eye-tracking, haptics, BCI (non-invasive brain interfaces) and robotics.
Eduardo Iáñez is a scientific researcher at the Virtual Reality and Robotics Lab at Miguel Hernández University of Elche (Spain). He holds an M.Sc. in Telecommunication Engineering from the Miguel Hernández University of Elche in 2007. Now he is doing his Ph.D. on Cerebral Interfaces for Device Control. His current research interests are Human–Robot and Human–Computer Interfaces. He is focused on Brain Computer Interfaces (BCI) (non-invasive brain interfaces) and Multimodal Human–Robot Interfaces (integrating brain, ocular and haptic information).
José María Azorín is Associate Professor in the Industrial Systems Engineering Department at Miguel Hernández University of Elche (Spain), and a researcher at the Virtual Reality and Robotics Lab. He holds an M.Sc. in Computer Science from the University of Alicante (1997, Spain) and a Ph.D. in Telerobotics from the Miguel Hernández University of Elche (2003, Spain). His current research interests are Telerobotics and Human–Robot Interaction, applied to all kinds of real service robotics scenarios, including assistive robotics. Dr. Azorín was a 1996 recipient of the ‘‘Best Thesis in the Industrial Systems Engineering Department’’ award from the Miguel Hernández University of Elche, Spain. Dr. Azorín has been active since 1999 in R&D within several projects on Advanced Robotics. He is author or co-author of a broad range of research publications and is a member of different scientific societies such as IEEE Robotics and Automation and CEA-IFAC (the Spanish Association within the International Federation of Automatic Control).
José Manuel Ferrández was born in Elche, Spain, in 1969. He received the M.Sc. Degree in Computer Science in 1995 and the Ph.D. Degree in 1998, both from the Universidad Politécnica de Madrid, Spain. He is currently Associate Professor at the Department of Electronics, Computer Technology and Projects at the Universidad Politécnica de Cartagena and Head of the Electronic Design and Signal Processing Research Group at the same university. He is also the coordinator of the Spanish Thematic Network RTNAC (rtnac.org) and the Ibero-American network CANS (RCANS.org), both related to Natural and Artificial Computation. He is also the Chairman of the International Conference IWINAC, International Work-Conference on the Interplay between Natural and Artificial Computation. His research interests include bioinspired processing, neuromorphic engineering and low-vision prostheses.

Eduardo Fernández received the M.D. Degree from the University of Alicante, Spain, in 1986 and the Ph.D. Degree in Neurosciences in 1990. He is currently Associate Professor at the University Miguel Hernández, Spain, and Director of the Artificial Vision Laboratory at the Bioengineering Institute of the University Miguel Hernández, Spain. In recent years he has been using histological as well as electrophysiological techniques to understand how mammalian retinal cells and the circuitry within the retina manage and code visual information. He is actively working on the development of a visual neuroprosthesis for the profoundly blind.