A high performance hybrid SSVEP based BCI speller system


Advanced Engineering Informatics 42 (2019) 100994


Full length article

D. Saravanakumar, M. Ramasubba Reddy ⁎


Biomedical Engineering Group, BISP Lab, Department of Applied Mechanics, Indian Institute of Technology Madras, Chennai 600036, India

⁎ Corresponding author. E-mail addresses: [email protected] (D. Saravanakumar), [email protected] (M. Ramasubba Reddy).

https://doi.org/10.1016/j.aei.2019.100994
Received 7 February 2019; Received in revised form 8 July 2019; Accepted 27 September 2019
1474-0346/© 2019 Elsevier Ltd. All rights reserved.

ARTICLE INFO

Keywords:
Steady state visual evoked potential (SSVEP)
Vision based eye-gaze tracker (VET)
Electro-oculogram (EOG)
Brain Computer Interface (BCI)
Visual speller/keyboard

ABSTRACT

The existing EEG based keyboard/speller systems have a tradeoff between target detection time and classification accuracy. This study focuses on increasing the accuracy and the probability of correct target classification in an SSVEP based speller system. We propose two different types of hybrid SSVEP system, combining SSVEP with a vision based eye gaze tracker (VET) and with the electro-oculogram (EOG). Thirty six targets were randomly chosen for this study, and their corresponding visual stimuli were presented at unique frequencies. The visual stimuli were segregated into three groups, and each group was arranged in a different region (left/middle/right) of the keyboard/speller layout to improve the probability of correct target detection. The VET/EOG data were used to identify the region containing the selected target, and this region/group determination reduces the misclassification of SSVEP frequencies. The average spelling accuracies of the SSVEP-VET and SSVEP-EOG systems over all subjects are 91.2% and 91.39%, respectively. A visual feedback stage was then added to the SSVEP-EOG system (SSVEP-EOG-VF) to further improve the target detection rate. In this case, an average classification accuracy of 98.33% was obtained, with an information transfer rate (ITR) of 69.21 bits/min over all subjects; five subjects reached an accuracy of 100% with an ITR of 74.1 bits/min.

1. Introduction

Brain Computer Interface (BCI) is a communication system that decodes brain signals and uses them to interface with familiar digital technologies such as on-screen keyboards/spellers, virtual reality, wheelchair control and robotics. Wolpaw et al. formally define a BCI as a communication system that uses the human EEG signal to communicate with the environment without using the brain's normal peripheral pathways of nerves and muscles [1]. The BCI framework is fundamentally intended for individuals with severe neurological disorders, for example muscular dystrophies, spinal cord injury, amyotrophic lateral sclerosis and multiple sclerosis. Several kinds of electroencephalogram (EEG) components are used in BCI applications, such as the steady state visual evoked potential (SSVEP), P300, slow cortical potentials and motor imagery signals. SSVEP is the most widely used control signal in BCI applications because it offers the highest information transfer rate, high accuracy and minimal training. It is an EEG component originating from the occipital and occipito-parietal regions of the cerebral cortex in response to a visual stimulus flickering at a constant/unique frequency [2]. When visual stimulation is repeated at a frequency greater than 5 Hz, the quasi-sinusoidal transient visual evoked potentials combine to produce a near-sinusoidal SSVEP resonant at the fundamental stimulation frequency (F1) and its harmonics (2F1, 3F1, etc.) [2–4].

A practical SSVEP-BCI system uses several flickering or flashing light sources as input, each flickering at a predetermined rate. Bakardjian et al. demonstrated that the maximum SSVEP response is obtained between 5.6 Hz and 15.3 Hz within the 5–48 Hz range [5]. Zhang [6] and Allison [7] demonstrated that selective attention on a target alone is sufficient to obtain an accurate SSVEP signal. The subject/user focuses on one of the choice sources, and the selection is decoded by analyzing the SSVEP signals over the occipital and parietal regions of the brain. The SSVEP visual stimuli (light sources) can be implemented using LEDs, LCD displays, CRTs, etc. SSVEP based BCIs have limitations on the number of targets: the stimulus design cannot use harmonics of an F1 frequency (2F1, 3F1, etc.), because a single visual stimulus flickering at F1 evokes a strong SSVEP response at F1 and its harmonics (2F1, 3F1, etc.) [4,8,9]. For example, 6 Hz and 12 Hz visual stimuli elicit the same response in the occipital cortex. Furthermore, if an LCD/LED display is used for stimulus presentation, the stimulus frequency should be an integer division of the chosen display/monitor refresh rate. Studies have shown that frequency encoding techniques [10,11], the sampled sinusoidal stimulation method [12,13] and dual frequency SSVEP methods [9] overcome the above mentioned problems, allowing more targets to be realized in real time speller/keyboard applications.




Many types of hybrid speller/keyboard systems have been developed using SSVEP, EOG and P300 based BCIs [8,14–18]. Most of the developed hybrid BCI systems have a relatively low ITR, and there is a tradeoff between ITR and classification accuracy in hybrid SSVEP systems. Classification accuracy and target detection time are the key factors in a BCI based speller system, and the probability of correct SSVEP frequency recognition decreases as the number of targets increases. The performance of a BCI system can be quantified using the target detection rate and the ITR. Wolpaw et al. defined the ITR (bits/min) as the measure of efficiency and speed of a real time BCI system [19,20]. The ITR depends on the number of targets, the probability of correct detection, the length of the time segment and the processing speed of the algorithm. Although larger time segments provide better target classification accuracy, they make the system less practical for real time use because of the slower response. Therefore, a compromise must be found between achievable accuracy and the smallest possible buffer size/time segment for effective online implementation.

To overcome the above mentioned limitations of hybrid SSVEP systems, we propose a hybrid system that uses SSVEP together with vision based eye gaze tracker (VET) data. The dual frequency SSVEP method was adopted for the speller stimulus design. The VET is an eye movement/eye gaze detection system that uses real time images for eye gaze detection. In the proposed system, the VET is used to find the area of the visual keyboard at which the subject is looking, and this region identification is used to localize or group the visual stimuli for SSVEP classification: instead of considering all the visual stimuli, the method considers only the stimulus frequencies from the selected region. This region segregation improves the SSVEP classification accuracy. The VET was later replaced with the electro-oculogram (EOG) for region selection, because the VET requires a fixed head position to determine the user's eye gaze direction. Further, a visual feedback stage was added to the SSVEP-EOG system to improve the target detection rate; four different types of eye movements (wink, single blink, double blink and triple blink) are used in this SSVEP-EOG-VF system. Zhang et al. [21] proposed the extended multivariate synchronization index (EMSI) method for multichannel SSVEP classification, and studies show that the EMSI algorithm gives better classification accuracy than conventional CCA methods [21–23]. Hence, we adopted the EMSI algorithm for both offline and online SSVEP classification.
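The ITR mentioned above follows the standard Wolpaw definition; as a quick reference, the sketch below computes it in Python. The function name and the sample numbers are ours and purely illustrative — they are not values from this study.

```python
import math

def wolpaw_itr(n_targets: int, accuracy: float, selection_time_s: float) -> float:
    """Wolpaw ITR in bits/min for an N-target speller.

    n_targets: number of selectable targets (N)
    accuracy: probability of correct selection (P), 0 < P <= 1
    selection_time_s: time needed for one selection, in seconds
    """
    n, p = n_targets, accuracy
    bits = math.log2(n)
    if 0 < p < 1:
        # penalty terms vanish when P == 1
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * (60.0 / selection_time_s)

# Illustrative values only: 36 targets, 98% accuracy, 4.5 s per selection
print(round(wolpaw_itr(36, 0.98, 4.5), 2))
```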

Fig. 1. Experimental setup.

2. Methods

2.1. Experimental setup

The proposed speller system consists of a single laptop with an extended monitor, as shown in Fig. 1. The designed visual stimuli (a keyboard layout with flickers) were presented on the extended monitor. The subjects were instructed to sit in front of the extended monitor at a distance of 50 cm. To avoid head movements during EEG and VET acquisition, the subjects were asked to place their head on an indigenously developed chin rest. A webcam was placed on top of the extended monitor to capture eye images during the experiment. Fig. 2 illustrates the functional block diagram of the proposed system. An indigenously developed data acquisition system was used for EEG and EOG signal acquisition.

2.2. Visual speller design

The dual frequency SSVEP method was adopted for the visual stimulus design [9]. The visual stimuli were presented on an LED monitor with a refresh rate of 60 Hz. For validation, we randomly chose thirty six targets, flickering at the unique frequencies shown in Table 1. Eleven frequencies (2.14, 2.3, 2.72, 3, 3.333, 3.75, 4.285, 5, 6, 7.5 and 10 Hz) were considered for the dual frequency SSVEP stimulus design; these frequencies are integer divisions of the 60 Hz monitor refresh rate. Table 1 lists the selected frequencies and their dual combinations. All the visual stimuli were arranged and indexed with the corresponding characters (see Fig. 3). The Java based Processing software platform was used for the visual stimulus design. The visual stimuli were divided into three groups, placed in the right, left and middle regions of the keyboard layout, and grouped on the basis of frequency difference. If the subject's eye gaze direction is detected as right, only the right-region stimulus frequencies are considered for SSVEP frequency classification, instead of all 36 selected frequencies. This group selection reduces the misclassification of stimuli with adjacent frequency values.

2.3. Subject details

Ten healthy male subjects (mean age: 26.2 ± 3.46 years) with normal or corrected to normal vision participated in this study. The subjects were recruited from the Indian Institute of Technology Madras (Chennai, India); five were experienced and five were new to such experiments. Before the study, the detailed procedures were explained to all subjects and signed consent was obtained. The experimental procedures involving human subjects were approved by the Institutional Ethics Committee of the Indian Institute of Technology Madras (IITM).

2.4. Data acquisition

In this study, we acquired EEG, VET and EOG data. The EEG data were recorded using sintered Ag/AgCl electrodes placed according to the international 10–20 system (electrode locations: OZ, O1, O2, PO3, PO4, POZ, PO7 and PO8). Four electrodes were placed on the horizontal (A-positive and B-negative) and vertical (C-positive and D-negative) axes of the eyes to measure wink and blink eye movements, as shown in Fig. 2. Electrode E acts as the ground/bias for both EEG and EOG acquisition. The EEG and EOG data were acquired with the indigenously developed data acquisition system at a sampling rate of 250 Hz; the EOG data were later down sampled to 32 Hz for eye movement detection. The acquisition system was built around the 24-bit ADS1299 IC from Texas Instruments [24].
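The paper does not state which resampling routine was used to bring the EOG stream from 250 Hz down to 32 Hz; the sketch below shows one common way to do it with SciPy's polyphase resampler (250 × 16 / 125 = 32), which also applies anti-aliasing filtering. The names and the synthetic example are ours.

```python
import numpy as np
from scipy.signal import resample_poly

FS_RAW = 250   # acquisition rate of the EEG/EOG front end (Hz)
FS_EOG = 32    # rate used for eye-movement detection (Hz)

def downsample_eog(eog_raw: np.ndarray) -> np.ndarray:
    """Resample a 250 Hz EOG channel to 32 Hz (250 * 16 / 125 = 32)."""
    return resample_poly(eog_raw, up=16, down=125)

if __name__ == "__main__":
    # Two seconds of a synthetic, blink-like slow oscillation
    t = np.arange(0, 2.0, 1.0 / FS_RAW)
    synthetic = np.sin(2 * np.pi * 1.5 * t)
    print(downsample_eog(synthetic).shape)   # about 64 samples at 32 Hz
```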

2.5. Vision based eye gaze tracker using webcam

A commercially available webcam was used (30 fps; CMOS image sensor; 5-megapixel resolution).


Fig. 2. Functional block diagram of proposed speller system.

A template matching algorithm was implemented to track the eye gaze direction. Template images first have to be stored for the detection of the eye gaze direction. The subject was asked to look at nine positions on the display (left, left top, left bottom, center, top, bottom, right top, right bottom, right) one by one, and the corresponding images were captured and stored as template images. After the template images of an individual subject were stored, the subject performed an online experiment in which the eye gaze direction was classified. In the online analysis, the eye image captured at run time is compared with the nine stored images using template matching. The input image I (size: 640 × 480) is captured from the webcam, and Td (size: 640 × 480) is the template image for eye gaze direction d. The algorithm computes a cost function Cd between Td and the input image I as the normalized sum of squared differences of pixel intensity values, given by [25]:

C_d(x, y) = \frac{\sum_{i,j} \left[ I(x+i,\, y+j) - T_d(i,j) \right]^2}{\sqrt{\sum_{i,j} I(x+i,\, y+j)^2 \cdot \sum_{i,j} T_d(i,j)^2}}    (1)

The cost function is calculated for all nine stored template images, and the template position with the lowest cost is taken as the best match, i.e. the area at which the subject is gazing. For example, if the eye gaze is detected as left/left top/left bottom, we conclude that the subject is looking at the left region of the keyboard.
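For readers who want to experiment with this step, the following sketch reproduces the nine-template comparison using OpenCV's TM_SQDIFF_NORMED, which implements a normalized squared-difference cost of the same form as Eq. (1). The label names, dictionary layout and helper functions are our own illustrative choices, not the authors' code.

```python
import cv2
import numpy as np

GAZE_LABELS = ["left", "left_top", "left_bottom", "center", "top",
               "bottom", "right_top", "right_bottom", "right"]

def classify_gaze(frame_gray: np.ndarray, templates: dict) -> str:
    """Return the gaze label whose stored template gives the lowest
    normalized squared-difference cost against the current frame.

    frame_gray: current 640x480 grayscale webcam frame
    templates:  dict mapping gaze label -> stored 640x480 grayscale template
    """
    costs = {}
    for label, tpl in templates.items():
        # With an image and template of equal size the result is a single
        # value, i.e. Eq. (1) evaluated at (x, y) = (0, 0).
        res = cv2.matchTemplate(frame_gray, tpl, cv2.TM_SQDIFF_NORMED)
        costs[label] = float(res.min())
    return min(costs, key=costs.get)

def gaze_to_region(label: str) -> str:
    """Map the nine gaze labels onto the three keyboard regions."""
    if label.startswith("left"):
        return "left"
    if label.startswith("right"):
        return "right"
    return "middle"
```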

2.6. EOG signals and features

The vertical electrode pair is used to measure eye activities such as up, down and blink movements, and the horizontal electrode pair is used to measure activities such as right, left and wink movements. In this study, four types of eye movements were used in the SSVEP-EOG system: wink, single blink, double blink and triple blink. Threshold and peak detection methods were used to detect the different eye movements. The acquired EOG data were preprocessed and down sampled to 32 Hz to avoid minor fluctuations. Amplitude, duration, waveform pattern (peak followed by valley, and valley followed by peak) and the number of peaks and valleys were taken as features for blink and wink detection; these features were extracted from the first order differentiated version of the acquired EOG signal, shown in Fig. 4. The subjects were instructed, with cue guided prompts, to perform eye blink and wink tasks in order to extract the features and the threshold value. Five sessions of eye movement data were collected for offline analysis; in each session, every subject performed each eye movement twenty times. The average eye movement classification accuracy over all subjects was 99.7%.
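The blink/wink rules are subject specific and only partially specified above; the sketch below shows one simplified way the peak-count logic on the differentiated EOG could be implemented with SciPy. The threshold handling, names and decision order are assumptions for illustration.

```python
import numpy as np
from scipy.signal import find_peaks

FS = 32  # EOG analysis rate (Hz)

def classify_eye_event(vertical: np.ndarray, horizontal: np.ndarray,
                       threshold: float) -> str:
    """Simplified blink/wink classifier on first-order differentiated EOG.

    vertical, horizontal: EOG segments (already down sampled to 32 Hz)
    threshold: subject-specific amplitude threshold taken from training data
    Returns one of: 'wink', 'single_blink', 'double_blink',
    'triple_blink', 'none'.
    """
    d_vert = np.diff(vertical)
    d_horz = np.diff(horizontal)

    # Blinks appear on the vertical channel as large peak/valley pairs.
    v_peaks, _ = find_peaks(d_vert, height=threshold)
    # Winks appear mainly on the horizontal channel.
    h_peaks, _ = find_peaks(np.abs(d_horz), height=threshold)

    if len(h_peaks) > 0 and len(v_peaks) == 0:
        return "wink"
    if len(v_peaks) == 1:
        return "single_blink"
    if len(v_peaks) == 2:
        return "double_blink"
    if len(v_peaks) >= 3:
        return "triple_blink"
    return "none"
```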

2.7. SSVEP frequency recognition

In this study, we used the extended multivariate synchronization index (EMSI) algorithm for SSVEP frequency recognition.

Table 1
The selected frequencies and their dual combinations.

Region 1 (left)
  Character   Combination (Hz)   Frequency (Hz)
  Q           3 + 3.333          6.333
  W           3.33 + 3.75        7
  E           3.333 + 4.285      7.618
  A           3.333 + 5          8.333
  S           5 + 3.75           8.75
  D           3.333 + 6          9.333
  Z           5 + 5              10
  X           5 + 6              11
  C           6 + 6              12
  2           10 + 2.3           12.3
  3           10 + 3             13
  4           10 + 5             15

Region 2 (middle)
  Character   Combination (Hz)   Frequency (Hz)
  R           3.333 + 3.333      6.667
  T           3 + 4.285          7.285
  Y           5 + 2.72           7.72
  F           3 + 5              8
  G           3 + 6              9
  H           7.5 + 2.3          9.8
  V           7.5 + 3            10.5
  B           7.5 + 3.75         11.25
  N           10 + 2.14          12.14
  5           5 + 7.5            12.5
  6           7.5 + 6            13.5
  7           10 + 3.75          13.75

Region 3 (right)
  Character   Combination (Hz)   Frequency (Hz)
  U           3 + 3              6
  I           3 + 3.75           6.75
  O           3.75 + 3.75        7.5
  J           6 + 2.14           8.14
  K           4.285 + 4.285      8.57
  L           5 + 4.285          9.285
  M           6 + 3.75           9.75
  P           6 + 4.285          10.285
  1           7.5 + 3.333        10.833
  8           7.5 + 4.285        11.78
  Back        10 + 3.333         13.333
  Space       10 + 4.285         14.285
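To make the construction of Table 1 concrete, the short sketch below encodes a few of the pairs and checks the two constraints stated in Section 2.2: every base frequency is an integer division of the 60 Hz refresh rate, and the target sums within a region are unique. It is an illustration only (and covers just a subset of the 36 targets); the actual stimuli in the study were generated with the Processing platform.

```python
REFRESH_HZ = 60

# A small illustrative subset of the dual-frequency pairs from Table 1.
region_pairs = {
    "left":   {"Q": (3.0, 3.333), "Z": (5.0, 5.0), "C": (6.0, 6.0)},
    "middle": {"R": (3.333, 3.333), "F": (3.0, 5.0), "G": (3.0, 6.0)},
    "right":  {"U": (3.0, 3.0), "O": (3.75, 3.75), "J": (6.0, 2.14)},
}

def is_divisor_of_refresh(f: float, tol: float = 0.01) -> bool:
    """True if f is (approximately) 60/k for some positive integer k."""
    k = round(REFRESH_HZ / f)
    return k > 0 and abs(REFRESH_HZ / k - f) < tol

for region, pairs in region_pairs.items():
    # Each target is identified by the sum f1 + f2 of its pair.
    targets = {ch: round(f1 + f2, 3) for ch, (f1, f2) in pairs.items()}
    assert len(set(targets.values())) == len(targets), "duplicate target frequency"
    for ch, (f1, f2) in pairs.items():
        assert is_divisor_of_refresh(f1) and is_divisor_of_refresh(f2)
    print(region, targets)
```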



Fig. 3. Visual keyboard layout.

Zhang et al. proposed EMSI as an extension of the multivariate synchronization index (MSI); it gives better performance than MEC and CCA [21–23]. Let X denote the sine–cosine reference (source) model of dimensionality Nt × 2Nh, where Nh is the number of harmonics considered, and let Y denote the multichannel EEG signal of dimensionality Nt × Ny, where Ny is the number of channels and Nt is the number of time samples. The data sets X and Y are normalized to zero mean and unit variance. The MSI and EMSI algorithms are almost identical; the extended MSI algorithm additionally includes a time delayed version of the EEG data, treated as an additional set of channels appended to the original EEG:

\tilde{Y} = [\, Y \;\; Y_{\tau} \,]    (2)

where Y_τ is the first order time delayed version of Y with a delay of τ samples. Using the augmented data \tilde{Y}, the same MSI procedure is applied to calculate the synchronization index for frequency recognition. The MSI measures the synchronization between two data sets, which may have different dimensionality, using the S-estimator as the classification feature [22].
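A compact NumPy sketch of the S-estimator underlying MSI/EMSI is given below, written from the description above and the cited papers [21,22]. The arrays are laid out as channels × samples (the transpose of the Nt × N convention used in the text), and the delayed copy is approximated with a circular shift, so this should be read as an illustration rather than the authors' implementation.

```python
import numpy as np

def _inv_sqrt(mat: np.ndarray) -> np.ndarray:
    """Inverse matrix square root of a symmetric positive-definite matrix."""
    w, v = np.linalg.eigh(mat)
    return v @ np.diag(1.0 / np.sqrt(np.clip(w, 1e-12, None))) @ v.T

def msi(x: np.ndarray, y: np.ndarray) -> float:
    """S-estimator between the sine-cosine reference x (2*Nh x Nt)
    and the (possibly augmented) EEG y (Ny x Nt)."""
    x = (x - x.mean(axis=1, keepdims=True)) / x.std(axis=1, keepdims=True)
    y = (y - y.mean(axis=1, keepdims=True)) / y.std(axis=1, keepdims=True)
    nt = x.shape[1]
    z = np.vstack([x, y])
    c = z @ z.T / nt                     # joint correlation matrix
    n1 = x.shape[0]
    u = np.zeros_like(c)                 # block-diagonal whitening transform
    u[:n1, :n1] = _inv_sqrt(c[:n1, :n1])
    u[n1:, n1:] = _inv_sqrt(c[n1:, n1:])
    r = u @ c @ u.T
    lam = np.clip(np.linalg.eigvalsh(r), 1e-12, None)
    lam = lam / lam.sum()                # normalized eigenvalues
    p = lam.size
    return 1.0 + np.sum(lam * np.log(lam)) / np.log(p)

def emsi(eeg: np.ndarray, freq: float, fs: float, n_harm: int = 2,
         tau: int = 1) -> float:
    """EMSI: MSI between the reference set for `freq` and the EEG
    augmented with a copy delayed by `tau` samples."""
    ny, nt = eeg.shape
    t = np.arange(nt) / fs
    ref = np.vstack([f(2 * np.pi * (h + 1) * freq * t)
                     for h in range(n_harm) for f in (np.sin, np.cos)])
    y_delayed = np.roll(eeg, tau, axis=1)   # circular shift as a simple delay
    return msi(ref, np.vstack([eeg, y_delayed]))

# Frequency recognition: pick the candidate with the largest index, e.g.
# best = max(candidate_freqs, key=lambda f: emsi(eeg_segment, f, fs=250))
```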

2.8. Experimental procedures

2.8.1. SSVEP response time detection (offline)

In order to find the SSVEP response time for each of the ten subjects, an offline experiment was performed. The SSVEP response to the input visual stimulus varies between subjects, so each subject needs a different time window length for recognition/classification of the elicited SSVEP frequency. Before the experiment started, an initial offset or rest period of 10 s was given to all the subjects to get ready. At the end of the 10 s, a green highlight was shown on the first visual stimulus (see Fig. 5). Once the first visual stimulus was highlighted, the subjects were asked to attend to the highlighted stimulus. The highlight remained visible for 5 s; this duration is called the SSVEP stimulus time window. At the end of the stimulus window the highlight disappeared and a rest window of 1 s was given, during which the subjects were asked to rest. This procedure was repeated for the other visual stimuli as an SSVEP stimulus window followed by a rest window. In total, six sessions of offline experiments were performed on all the subjects, with each session containing 36 trials (36 targets).


Fig. 4. Differentiated version of the acquired EOG signal: (a) single wink, (b) single blink, (c) double blink, (d) triple blink (blue line – horizontal channel, red line – vertical channel; the x-axis represents the number of samples and the y-axis the rate of change of amplitude). (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)


Fig. 5. Offline SSVEP paradigm.

A cue guided offline paradigm was used, highlighting all the stimuli one by one in a sequential manner. EEG data were acquired and stored for all the subjects, and the stored data were indexed with the corresponding highlighted stimulus frequency; this indexing is helpful for the offline SSVEP frequency recognition analysis. Fig. 6 shows the timing diagram of the offline cue guided paradigm.

Fig. 6. Timing diagram of offline SSVEP paradigm.

2.8.2. SSVEP-VET experiment

The subject specific SSVEP time window was calculated from the offline analysis (described in the results and discussion section). Before the free spelling task, three sessions of online cue guided experiments were performed by all the subjects so that they could become accustomed to the proposed system and procedures. An offset time of 10 s was given to all the subjects to get ready for the experiment. Similar to the offline SSVEP paradigm, the online cue guided paradigm consists of a stimulus window followed by a 0.5 s rest window, with each subject assigned the SSVEP time window evaluated from the offline analysis. At the beginning of the stimulus window all the stimuli were highlighted in green. The subjects were instructed to type all 36 characters, one by one and in any order of their choice, within a single session. Once the highlight appeared, the subject started concentrating on the desired visual stimulus. The webcam captured the eye images 0.5 s after the onset of the stimulus window; after the eye gaze was detected, the highlight of the corresponding stimulus group changed to blue. This color change provides additional feedback to the subjects, as shown in Fig. 7. For online SSVEP classification, the EMSI algorithm considers only the stimulus frequencies of the selected group. Region selection from the keyboard layout increases the probability of correct target detection and avoids misclassification between nearby stimulus frequencies. The timing diagram of the online cue guided paradigm is shown in Fig. 8.
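The essential effect of the gaze information is to shrink the candidate set handed to the SSVEP detector from 36 to 12 frequencies. The sketch below (our own structuring) makes this region gating explicit; the per-region frequency lists follow Table 1, and the scoring function is passed in so that any detector, for example the EMSI sketch above or a CCA-based score, can be plugged in.

```python
from typing import Callable, Sequence
import numpy as np

# Region-wise candidate frequencies (Hz), following Table 1.
REGION_FREQS = {
    "left":   (6.333, 7.0, 7.618, 8.333, 8.75, 9.333, 10.0, 11.0, 12.0, 12.3, 13.0, 15.0),
    "middle": (6.667, 7.285, 7.72, 8.0, 9.0, 9.8, 10.5, 11.25, 12.14, 12.5, 13.5, 13.75),
    "right":  (6.0, 6.75, 7.5, 8.14, 8.57, 9.285, 9.75, 10.285, 10.833, 11.78, 13.333, 14.285),
}

def classify_target(eeg_segment: np.ndarray, region: str,
                    score: Callable[[np.ndarray, float], float]) -> float:
    """Pick the best frequency, scoring only the 12 candidates of the
    gazed/selected region instead of all 36 targets (region gating).

    score(eeg_segment, f) is any SSVEP detector returning a larger value
    for a better match.
    """
    candidates: Sequence[float] = REGION_FREQS[region]
    return max(candidates, key=lambda f: score(eeg_segment, f))
```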

2.8.3. SSVEP-EOG experiment

The experimental procedure of the SSVEP-EOG system is the same as above, except that EOG signals were used for region selection instead of VET data. At the beginning of the stimulus window, the subjects were instructed to perform eye movements for region selection: a single blink selects the right region and a wink selects the left region; if no eye movement is performed, the middle region is selected by default. Three sessions of online analysis were performed by all the subjects, with a relaxation period of 5 min between sessions. The timing diagram of the SSVEP-EOG system is shown in Fig. 9.

3. Results and discussion

3.1. Analysis of the vision based eye gaze tracker

Five sessions of preliminary experiments were conducted with all the subjects to determine the detection accuracy of the eye gaze direction. In each session the subjects were asked to gaze at the nine positions (mentioned above) one by one, following the instructions of the cue guided system, and the template matching algorithm was used for eye gaze detection. Table 2 summarizes the results of the online eye gaze classification.


3.2. SSVEP response time analysis/detection

Five sessions of stored EEG data were considered for the SSVEP response time analysis. The acquired EEG data were analyzed at different time windows of 2.5, 3, 3.5 and 4 s, taken from the original 5 s SSVEP time window. The extended multivariate synchronization index (EMSI) algorithm was used for SSVEP frequency recognition, and the classification was performed on the EEG data of all subjects at each time window. After SSVEP classification, a confusion matrix was created for each subject and time window from the actual targets and the targets predicted by the EMSI algorithm, and the SSVEP classification accuracy was calculated from this confusion matrix. The offline EEG data were acquired from 8 channels. To find the optimal number of channels/electrodes for the online study, the acquired EEG data were analyzed with two channel sets (set 1: all 8 channels; set 2: O1, O2, Oz, Pz, P3, P4), and the SSVEP classification accuracy was calculated separately for the 6-channel and 8-channel sets. Fig. 10 shows the offline SSVEP classification accuracy of both sets at the different time windows. A paired T-test (null hypothesis: equal means) was performed between the classification accuracies of the 6-channel and 8-channel sets at each time window length; the results are summarized in Table 3. The p-values lie in the acceptance region, so the null hypothesis of equal means was accepted, and from Table 3 we conclude that there is no statistically significant difference between the mean accuracies of the 6-channel and 8-channel sets.
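The accuracy and paired-test computations described here are standard; the sketch below shows one way to reproduce them with NumPy/SciPy. The per-subject accuracies used in the example are placeholders, not data from this study.

```python
import numpy as np
from scipy.stats import ttest_rel

def accuracy_from_confusion(cm: np.ndarray) -> float:
    """Classification accuracy = trace / total of a confusion matrix."""
    return np.trace(cm) / cm.sum()

# Per-subject offline accuracies for the two montages at one time window
# (placeholder values for illustration only).
acc_6ch = np.array([0.92, 0.95, 0.90, 0.88, 0.93, 0.91, 0.94, 0.96, 0.89, 0.92])
acc_8ch = np.array([0.93, 0.95, 0.91, 0.89, 0.92, 0.92, 0.94, 0.97, 0.90, 0.92])

t_stat, p_value = ttest_rel(acc_6ch, acc_8ch)   # paired t-test across subjects
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```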



Fig. 7. Online SSVEP paradigm.

Fig. 8. Timing diagram of the online cue guided paradigm.


Fig. 9. Timing diagram of online SSVEP-EOG system.


Table 2
Eye gaze classification rate (correct classifications out of 9).

Subject number   Session I   Session II   Session III   Session IV   Session V
S1               7           9            9             9            9
S2               9           9            8             9            9
S3               8           9            9             9            9
S4               9           9            9             9            9
S5               9           9            9             9            9
S6               8           8            9             9            9
S7               7           9            9             9            9
S8               9           9            9             9            9
S9               9           9            9             9            9
S10              8           9            9             9            9

Fig. 10. Offline SSVEP classification accuracy using (a) 6 channel, (b) 8 channel EEG electrodes.

Table 3
Paired T-test results between the classification accuracies of the 6-channel and 8-channel EEG data.

Electrode/Channel           Time window (s):   2.5        3.0        3.5       4.0
6 channel (vs) 8 channel                       p < 0.45   p < 0.45   p < 0.6   p < 0.73

Therefore, the 6-channel electrode set was considered the optimal channel configuration for the online speller study. For each subject, the shortest time window at which the classification accuracy exceeded 90% was taken as the SSVEP time window for the online study, as shown in Fig. 11.
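The window-selection rule can be written in a few lines; the sketch below chooses the shortest analysed window whose offline accuracy exceeds 90%. The fallback to the longest window for a subject who never crosses the threshold is our assumption, not something stated in the paper.

```python
# Analysed window lengths in seconds, from shortest to longest.
WINDOWS_S = (2.5, 3.0, 3.5, 4.0)

def select_window(acc_by_window: dict) -> float:
    """acc_by_window maps window length (s) -> offline accuracy (0-1)."""
    for w in WINDOWS_S:
        if acc_by_window[w] > 0.90:
            return w
    return WINDOWS_S[-1]   # assumed fallback

print(select_window({2.5: 0.84, 3.0: 0.93, 3.5: 0.96, 4.0: 0.97}))  # -> 3.0
```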

3.3. SSVEP-VET system

Three sessions of online cue based experiments were performed by all the subjects to familiarize them with the system before the online free spelling task.


Table 5
Online free spelling task (spelling ability). The entries are the number of corrections, i.e. the number of times each subject made a mistake while typing the target word.

Target words    S1   S2   S3   S4   S5   S6   S7   S8   S9   S10
APPLE           1    0    0    1    0    1    0    0    0    0
MANGO           0    0    0    0    1    0    1    0    1    0
BANANA          1    0    0    1    0    0    0    0    0    0
LEMON           0    1    1    0    1    0    0    0    1    0
LYCHEE          0    0    0    0    0    0    1    0    1    2
PAPAYA          0    0    0    2    0    1    0    0    0    0
PINEAPPLE       1    0    0    1    1    0    2    0    0    1
GAS             1    0    0    0    1    0    0    0    1    0
SYMBOL          0    0    1    1    0    0    1    0    0    0
WEBCAM          0    0    0    1    1    0    0    0    2    1
CYCLE           1    0    0    0    0    0    1    0    0    0
JOB             0    0    0    0    0    0    0    0    1    0
COLLEGE         0    0    0    0    0    2    0    0    0    0
BONE            0    0    0    1    2    0    0    0    1    0
MASS            0    0    0    0    0    0    0    0    0    1

Fig. 11. Subject specific SSVEP stimulation time window.

Table 4
ITR of the online cue guided task (SSVEP-VET and SSVEP-EOG systems).

Subject    Time (s)    ITR, SSVEP-VET (bits/min)    ITR, SSVEP-EOG (bits/min)
S1         3           85                           86.57
S2         3           89.82                        91.51
S3         3           86.57                        85
S4         3.5         70.23                        70.23
S5         3.5         72.86                        72.86
S6         3.5         75.58                        76.99
S7         3.5         71.53                        71.53
S8         3           93.24                        93.24
S9         4           61.45                        60.33
S10        3.5         71.53                        72.86
Average    3.35        77.78                        78.11


The classification accuracy of the online cue based system was calculated and is shown in Fig. 12. The time window needed to select the targets varies between subjects; therefore, a different SSVEP time window was assigned to each subject according to the spelling ability evaluated in the preliminary experiments above. Table 4 lists the information transfer rate of the proposed SSVEP-VET system; the rest window period is excluded from the ITR calculation. For the free spelling task, 15 words were given to all subjects, who were instructed to type the words one by one, with a rest period of 60 s after each word. Table 5 summarizes the typing ability of all the subjects; the numbers indicate how many times a subject made a mistake while typing the word. The SSVEP-VET based system needs a fixed head position for eye gaze detection; a few subjects felt discomfort during the experiments, and the fixed head position leads to fatigue. To overcome this limitation, EOG data were used for region selection instead of VET data.

3.4. SSVEP-EOG system

The single and double blink EOG information was used for region selection. Three sessions of online experiments were performed, and the classification accuracy was calculated and is shown in Fig. 12.

Fig. 12. Classification accuracy of online cue guided task (SSVEP-VET and SSVEP-EOG).


A paired T-test was performed on the classification accuracies of the SSVEP-EOG and SSVEP-VET systems; the result shows no significant difference between their mean accuracies. The ITR of the SSVEP-EOG system is listed in Table 4. The speller system was designed as an assistive device for real time online applications.

To further improve the target detection accuracy of the proposed SSVEP-EOG system, visual feedback was given to all the subjects after the SSVEP stimulation time window for optimal decision making, i.e. identification of the desired target, and the subjects were instructed to perform various eye movements to confirm it. At the end of the SSVEP stimulus window, the EMSI algorithm calculates the synchronization index between the acquired EEG data and the visual stimulus frequencies of the selected region one by one [11]. Based on the synchronization index values, the algorithm ranks all the visual stimulus frequencies of the selected region, with the first rank given to the stimulus frequency with the maximum synchronization index. In the online SSVEP classification, the desired character/target was ranked first about 90% of the time, and occasionally it was ranked 2, 3, 4 or 5. The computed ranks were displayed on the visual stimuli as feedback, as shown in Fig. 13. If the desired visual stimulus was ranked 1, the subject was asked to rest (no eye movement), meaning that the chosen target and the classified frequency are the same. If the desired stimulus was ranked 2, the subject was asked to perform a single blink; similarly, ranks 3, 4 and 5 correspond to a single wink, a double blink and a triple blink, respectively. Based on the rank, the subjects performed the respective eye movement to avoid errors or misclassification; a threshold method was used for eye movement detection. After the eye movement was detected, the corresponding target character was selected and displayed on the window. In this way the subject and the proposed SSVEP-EOG visual feedback system act cooperatively for optimal target selection and decision making.

Three sessions of the cue guided online SSVEP-EOG experiment were performed with visual feedback; the timing diagram of this experiment is shown in Fig. 14. A duration of 1 s was given for the visual feedback (including the rest period). In each session, the subjects were asked to select all 36 characters following the instructions of the online cue guided system. A classification accuracy of 100% was obtained for five subjects, as shown in Table 6, which also lists the information transfer rate of this system. Compared with the SSVEP-VET and SSVEP-EOG systems, the SSVEP-EOG system with visual feedback (SSVEP-EOG-VF) gave a higher classification accuracy (98.33%). The same fifteen words were then given to all the subjects to validate the SSVEP-EOG-VF system in free spelling, with a rest period of 60 s after each word; nine out of the ten subjects spelled all the words correctly.
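The feedback-and-confirm step reduces to a small lookup from the displayed rank of the desired stimulus to the confirming eye movement. The sketch below is our own structuring of that logic; in particular, returning no selection for an unrecognized movement is an assumption.

```python
# Mapping from the displayed rank of the desired target to the confirming
# eye movement, as described for the SSVEP-EOG-VF paradigm.
RANK_TO_MOVEMENT = {
    1: "none",          # top-ranked frequency is already correct
    2: "single_blink",
    3: "single_wink",
    4: "double_blink",
    5: "triple_blink",
}

def resolve_target(ranked_freqs, detected_movement: str):
    """Return the frequency confirmed by the user's eye movement.

    ranked_freqs: candidate frequencies of the selected region, sorted by
                  decreasing synchronization index (rank 1 first)
    detected_movement: output of the EOG detector during the feedback window
    """
    for rank, movement in RANK_TO_MOVEMENT.items():
        if movement == detected_movement:
            return ranked_freqs[rank - 1]
    return None   # unrecognized movement -> no selection this trial (assumed)
```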


Fig. 13. SSVEP-EOG paradigm with visual feedback.

Fig. 14. Timing diagram of the SSVEP-EOG with visual feedback system.

Fig. 15. Subjects' feedback (rating): S-V (SSVEP-VET), S-E (SSVEP-EOG), S-E VF (SSVEP-EOG-VF).

Table 6
Target detection accuracy of the online cue guided SSVEP-EOG-VF system.

Subject    Avg. time per target (s)    Avg. letters detected (out of 36)    Accuracy (%)    ITR (bits/min)
S1         4                           35.33                                98.15           74.13
S2         4                           36                                   100             77.55
S3         4                           36                                   100             77.55
S4         4.5                         34.66                                96.3            63.35
S5         4.5                         36                                   100             68.93
S6         4.5                         36                                   100             68.93
S7         4.5                         35                                   97.22           64.59
S8         4                           36                                   100             77.55
S9         5                           34                                   94.44           54.91
S10        4.5                         35                                   97.22           64.59
Average    4.35                        35.4                                 98.33           69.21

Fig. 16. Classification accuracy comparison between the non-hybrid (dual frequency SSVEP) system and the proposed hybrid systems (1. dual frequency SSVEP-VET, 2. dual frequency SSVEP-EOG, 3. dual frequency SSVEP-EOG-VF).

Table 7
Paired T-test results between the proposed methods.

Methods                         P-value (equal mean)    Significant difference
SSVEP-VET and SSVEP-EOG         0.85                    no
SSVEP-VET and SSVEP-EOG-VF      0.0000001               yes
SSVEP-EOG and SSVEP-EOG-VF      0.0000001               yes

A paired T-test was performed between the classification accuracies of the proposed methods (null hypothesis: equal means); Table 7 describes the statistical relationship between them. The mean accuracies of the SSVEP-VET and SSVEP-EOG systems were almost similar, whereas the target classification accuracy of the SSVEP-EOG system with visual feedback differs significantly from both of the other systems. After performing all the experiments, we asked the subjects to rate the proposed systems from low (1) to high (10) in terms of spelling accuracy, subject comfort and ease of use. All the subjects gave high ratings to the visual feedback system, as shown in Fig. 15, because its target classification accuracy is high and it causes less fatigue; the VET system requires a fixed head position, and not being able to move the head can lead to neck fatigue.

3.5. Performance evaluation against other conventional systems

The probability of correct target detection in an SSVEP based keyboard/speller system decreases when more targets are used: as the number of targets increases, stimuli with nearby frequency values become misclassified. In the proposed design, stimuli with adjacent frequency values were assigned to different groups to avoid such frequency misclassification, and for online SSVEP classification the EMSI algorithm considers only the stimulus frequencies of the selected group when computing the synchronization index between the reference matrix and the input EEG data. The additional VET and EOG data are used to find the region or group to which the selected target belongs. Integrating VET/EOG with the dual frequency SSVEP speller increases the spelling accuracy and the probability of correct target detection. We compared the target classification/detection accuracy of the dual frequency SSVEP speller system with and without VET/EOG integration, as shown in Fig. 16, and performed a paired T-test between the two configurations. Table 8 shows a significant difference between the mean target detection accuracies of the hybrid and non-hybrid systems; from this test we conclude that the hybrid systems have an increased classification rate and probability of correct target detection. Table 9 compares the target detection accuracy and ITR of conventional keyboard/speller systems with the proposed SSVEP-EOG-VF system. In terms of detection accuracy, the proposed SSVEP-EOG system with visual feedback outperforms the conventional speller systems, and its ITR is higher than that of all but one of the listed systems.

4. Conclusion

In this study a high performance hybrid SSVEP based speller system was designed and validated against other conventional keyboard/speller systems. For the experiments, we randomly selected 36 characters, and their corresponding visual stimuli were designed using the dual frequency SSVEP method. A subject specific SSVEP stimulus time window was calculated and assigned to each subject for the online/free spelling task. Frequency misclassification and the reduced probability of correct target detection are the main limitations of SSVEP based speller systems; these were addressed by using VET and EOG data for region selection, i.e. grouping of the visual stimuli. To further improve the target detection rate, visual feedback combined with various eye movements was added to the SSVEP-EOG system. A spelling accuracy of 100% was obtained for 5 subjects out of 10, and the average spelling accuracy of the SSVEP-EOG visual feedback system is 98.33% with an ITR of 69.21 bits/min. The spelling accuracy and ITR of the proposed method were compared with those of conventional speller systems. These results indicate that the speller system can be used in practical applications with few false positives and high accuracy.


Table 8
Paired T-test between the non-hybrid and hybrid systems (null hypothesis: equal means).

Methods                      Test result                      Accept/Reject    Significant difference
SSVEP (vs) SSVEP-VET         F = 164, p = 1.75 × 10^-10       Reject           Yes
SSVEP (vs) SSVEP-EOG         F = 158, p = 2.33 × 10^-10       Reject           Yes
SSVEP (vs) SSVEP-EOG-VF      F = 311, p = 8.37 × 10^-13       Reject           Yes

Acknowledgements

Table 9
Comparison of the proposed SSVEP-EOG-VF system with conventional spelling systems.

Ref.    Method (EEG/EOG/hybrid BCIs)    Number of channels    Number of commands    Accuracy (%)    ITR (bits/min)
[8]     SSVEP-P300                      14                    36                    93              31.8
[26]    SSVEP                           6                     36                    90.46           65.98
[27]    SSVEP                           9                     40                    91.04           267
[14]    SSVEP-P300                      10                    36                    93.85           56.44
[18]    EEG-EOG                         26                    36                    97.6            39.6
[28]    EOG                             1                     40                    94.13           68.69
Ours    Proposed SSVEP-EOG-VF           8                     36                    98.33           69.21


The authors wish to extend their gratitude to the Indian Institute of Technology Madras for providing the necessary research facilities and thank all the subjects for their involvement. References [1] J.R. Wolpaw, N. Birbaumer, D.J. McFarland, G. Pfurtscheller, T.M. Vaughan, Brain computer interfaces for communication and control, Front. Neurosci. 4 (2002) 767–791, https://doi.org/10.3389/conf.fnins.2010.05.00007. [2] A. Jeffreys, Human brain electrophysiology: evoked potentials and evoked magnetic fields in science and medicine: by david regan, elsevier, 1989. $140.00/Dfl. 350.00 (xxiv + 672 pages) ISBN 0 444 01324 5, Trends in Neurosciences. 12 (1989) 413–414. http://doi.org/10.1016/0166-2236(89)90083-0. [3] G.R. Müller-Putz, R. Scherer, G. Pfurtscheller, R. Rupp, EEG-based neuroprosthesis control: a step towards clinical practice, Neurosci. Lett. 382 (2005) 169–174, https://doi.org/10.1016/j.neulet.2005.03.021. [4] T.M. Srihari Mukesh, V. Jaganathan, M.R. Reddy, A novel multiple frequency stimulation method for steady state VEP based brain computer interfaces, Physiol. Meas. 27 (2006) 61–71, https://doi.org/10.1088/0967-3334/27/1/006. [5] H. Bakardjian, T. Tanaka, A. Cichocki, Optimization of SSVEP brain responses with application to eight-command Brain-Computer Interface, Neurosci. Lett. 469 (2010) 34–38, https://doi.org/10.1016/j.neulet.2009.11.039. [6] D. Zhang, A. Maye, X. Gao, B. Hong, A.K. Engel, S. Gao, An independent brain–computer interface using covert non-spatial visual selective attention, J. Neural Eng. 7 (2010) 16010, https://doi.org/10.1088/1741-2560/7/1/016010. [7] B.Z. Allison, J.A. Pineda, ERPs evoked by different matrix sizes: implications for a brain computer interface (bci) system, IEEE Trans. Neural Syst. Rehabil. Eng. 11 (2) (2003) 110–113, https://doi.org/10.1109/TNSRE.2003.814448. [8] M.H. Chang, J.S. Lee, J. Heo, K.S. Park, Eliciting dual-frequency SSVEP using a hybrid SSVEP-P300 BCI, J. Neurosci. Methods 258 (2016) 104–113, https://doi. org/10.1016/j.jneumeth.2015.11.001. [9] H.J. Hwang, D. Hwan Kim, C.H. Han, C.H. Im, A new dual-frequency stimulation method to increase the number of visual stimuli for multi-class SSVEP-based braincomputer interface (BCI), Brain Res. 1515 (2013) 66–77, https://doi.org/10.1016/ j.brainres.2013.03.050. [10] X. Zhao, D. Zhao, X. Wang, X. Hou, A SSVEP stimuli encoding method using trinary frequency-shift keying encoded SSVEP (TFSK-SSVEP), Front. Hum. Neurosci. 11 (2017) 1–9, https://doi.org/10.3389/fnhum.2017.00278. [11] C. Chuan Jia, X. Xiaorong Gao, B. Bo Hong, S. Shangkai Gao, Frequency and Phase Mixed Coding in SSVEP-Based Brain-Computer Interface, IEEE Trans. o Biomed. Eng. 58 (2011) 200–206, https://doi.org/10.1109/TBME.2010.2068571. [12] N.V. Manyakov, N. Chumerin, A. Robben, A. Combaz, M. Van Vliet, M.M. Van Hulle, Sampled sinusoidal stimulation profile and multichannel fuzzy logic classification for monitor-based phase-coded SSVEP brain-computer interfacing, J. Neural Eng. 10 (2013), https://doi.org/10.1088/1741-2560/10/3/036011. [13] X. Chen, Z. Chen, S. Gao, X. Gao, A high-ITR SSVEP-based BCI speller, BrainComputer Interfaces. 1 (2014) 181–191, https://doi.org/10.1080/2326263X.2014. 944469. [14] E. Yin, Z. Zhou, J. Jiang, F. Chen, Y. Liu, D. Hu, A novel hybrid BCI speller based on the incorporation of SSVEP into the P300 paradigm, J. Neural Eng. 10 (2013), https://doi.org/10.1088/1741-2560/10/2/026012. [15] Y. Li, J. Pan, F. Wang, Z. 
Yu, A hybrid BCI system combining P300 and SSVEP and its application to wheelchair control, IEEE Trans. Biomed. Eng. 60 (2013) 3156–3166, https://doi.org/10.1109/TBME.2013.2270283. [16] E. Yin, Z. Zhou, J. Jiang, F. Chen, Y. Liu, D. Hu, A speedy hybrid BCI spelling approach combining P300 and SSVEP, IEEE Trans. Biomed. Eng. 61 (2014) 473–483, https://doi.org/10.1109/TBME.2013.2281976. [17] M. Wang, I. Daly, B.Z. Allison, J. Jin, Y. Zhang, L. Chen, X. Wang, A new hybrid BCI paradigm based on P300 and SSVEP, J. Neurosci. Methods 244 (2015) 16–25, https://doi.org/10.1016/j.jneumeth.2014.06.003. [18] M.H. Lee, J. Williamson, D.O. Won, S. Fazli, S.W. Lee, A high performance spelling system based on EEG-EOG signals with visual feedback, IEEE Trans. Neural Syst. Rehabil. Eng. 4320 (2018) 1–18, https://doi.org/10.1109/TNSRE.2018.2839116. [19] J.R. Wolpaw, N. Birbaumer, W.J. Heetderks, D.J. McFarland, P.P. Hunter, G. Schalk, E. Donchin, L.A. Quatrano, C.J. Robinson, T.M. Vaughan, Brain-computer interface technology: a review of the first international meeting, IEEE Trans. Rehabilit. Eng. 8 (2000) 164–173, https://doi.org/10.1109/TRE.2000.847807. [20] J.R. Wolpaw, H. Ramoser, D.J. McFarland, G. Pfurtscheller, EEG-based



communication: improved accuracy by response verification, IEEE Trans. Rehabilit. Eng. 6 (1998) 326–333, https://doi.org/10.1109/86.712231.
[21] Y. Zhang, D. Guo, D. Yao, P. Xu, The extension of multivariate synchronization index method for SSVEP-based BCI, Neurocomputing 269 (2017) 226–231, https://doi.org/10.1016/j.neucom.2017.03.082.
[22] Y. Zhang, P. Xu, K. Cheng, D. Yao, Multivariate synchronization index for frequency recognition of SSVEP-based brain-computer interface, J. Neurosci. Methods 221 (2014) 32–40, https://doi.org/10.1016/j.jneumeth.2013.07.018.
[23] Y. Zhang, D. Guo, P. Xu, Y. Zhang, D. Yao, Robust frequency recognition for SSVEP-based BCI with temporally local multivariate synchronization index, Cogn. Neurodyn. 10 (2016) 505–511, https://doi.org/10.1007/s11571-016-9398-9.
[24] Texas Instruments, ADS129x Low-Power, 8-Channel, 24-Bit Analog Front-End for Biopotential Measurements, datasheet, 2015.

[25] J.H. Lim, J.H. Lee, H.J. Hwang, D.H. Kim, C.H. Im, Development of a hybrid mental spelling system combining SSVEP-based brain-computer interface and webcam-based eye tracking, Biomed. Signal Process. Control 21 (2015) 99–104, https://doi.org/10.1016/j.bspc.2015.05.012.
[26] D. Saravanakumar, M. Ramasubba Reddy, A novel visual keyboard system for disabled people/individuals using hybrid SSVEP based brain computer interface, in: 2018 International Conference on Cyberworlds (CW), 2018, pp. 264–269, https://doi.org/10.1109/CW.2018.00054.
[27] X. Chen, Y. Wang, M. Nakanishi, X. Gao, T.-P. Jung, S. Gao, High-speed spelling with a noninvasive brain–computer interface, Proc. Natl. Acad. Sci. 112 (2015) E6058–E6067, https://doi.org/10.1073/pnas.1508080112.
[28] S. He, Y. Li, A single-channel EOG-based speller, IEEE Trans. Neural Syst. Rehabil. Eng. 25 (2017) 1978–1987, https://doi.org/10.1109/TNSRE.2017.2716109.
