American Journal of Biomedical Engineering 2012, 2(4): 155-162
DOI: 10.5923/j.ajbe.20120204.02

Non-Invasive BCI for the Decoding of Intended Arm Reaching Movement in Prosthetic Limb Control

Ching-Chang Kuo1, Jessica L. Knight2, Chelsea A. Dressel1, Alan W. L. Chiu1,3,*

1Biomedical Engineering, Louisiana Tech University, Ruston, LA, 71270, United States
2Biological Sciences, Louisiana Tech University, Ruston, LA, 71270, United States
3Applied Biology and Biomedical Engineering, Rose-Hulman Institute of Technology, Terre Haute, IN, 47803, United States

Abstract  Non-invasive electroencephalography (EEG) based brain-computer interfaces (BCIs) can provide an alternative means of communication with, and control over, external assistive devices. In general, EEG is insufficient to obtain detailed information about the many degrees of freedom (DOF) of arm movements. The main objectives of this work are to design a non-invasive BCI and to create a signal decoding strategy that allows people with limited motor control to have more command over potential prosthetic devices. Eight healthy subjects were recruited to perform visually cued reaching tasks. Eye and motion artifacts were identified and removed to ensure that the subjects' visual fixation on the target locations had little or no impact on the final result. We applied a Fisher Linear Discriminant (FLD) classifier to perform single-trial classification of the EEG and decode the intended arm movement in the left, right, and forward directions (before the onset of actual movement). The mean EEG signal amplitude near the PPC region 271-310 ms after visual stimulation was found to be the dominant feature for the best classification results. A signal scaling factor was developed and found to improve the classification accuracy from 60.11% to 93.91% in the binary (left versus right) scenario. This result shows great promise for BCI neuroprosthetic applications, as motor-intention decoding can serve as a prelude to the classification of imagined motor movement to assist in motor rehabilitation, for example in prosthetic limb or wheelchair control.

Keywords  Brain Computer Interface (BCI), Classification, Electroencephalogram (EEG), Movement Intention, Posterior Parietal Cortex (PPC)

1. Introduction

The Brain Computer Interface (BCI) is a frontier research area in neural engineering that has gathered a great deal of attention from scientists and the general public. BCI technology allows communication to occur between the brain and an external machine[1], and its applications range from entertainment to assistive devices[2]. In a typical BCI system, brain activities are recorded and processed by a computer system, which in turn deciphers the mental or physical activities and creates commands to control external devices[3, 4]. One of the goals of BCI and neural engineering research is creating assistive devices for those with limited motor control. A successful BCI system is valuable in motor rehabilitation because it allows subjects to perform physical practice[5]. This type of technology would drastically improve patients' quality of life by allowing these individuals to have better communication and more independent control[6, 7].

* Corresponding author: dr.alan.chiu@gmail.com (Alan W. L. Chiu)
Published online at https://www.eduzhai.net
Copyright © 2012 Scientific & Academic Publishing. All Rights Reserved
The operation of traditional electromyogram (EMG) controlled prosthetics is based on decoding the myoelectric signals of residual muscles[8, 9]. While these devices provide basic control over the prostheses, certain limitations restrict their acceptability. Users with severely limited motor ability require considerable effort to learn how to contract specific muscle groups in order to control the device. Therefore, the need for a more intuitive control strategy based on the user's naturally occurring brain signals is apparent. BCI technology today encompasses invasive electrocorticography (ECoG), implanted electrodes, and non-invasive electroencephalography (EEG)[10]. Current literature suggests that EEG is adequate for extracting detailed information about precise movements of the upper limb[11]. For our study, non-invasive techniques based on EEG surface potentials appear to be a more sensible method for collecting and processing data[12-14] for neuroprosthetic applications with relatively few DOF. Real-time signal classification based on the activation of, and feature extraction from, particular brain regions can allow for the control of assistive devices.

When examining neuronal activities for BCI applications, signal intensity or signal power features have commonly been used for decoding user movement intent. The posterior parietal cortex (PPC) region is responsible for converting visual stimuli into motor movements[15] and is a vital location for decoding intended motor movement[12]. While most other research has focused on discriminating EEG signals between left hand, right hand, toe, and tongue imagined movements[16, 17], our study endeavors to decode and classify EEG signals for the intended movement direction of the same limb, leading to a more realistic control of a single upper-limb prosthetic arm[18]. A single-trial signal classification strategy was developed to evaluate the temporal, spatial, and spectral EEG features during the planning stages of motor movements in the left, right, and forward directions, similar to[19-21]. Ultimately, such a classification algorithm will be part of a two-stage neuroprosthetic control strategy. In the first stage, the intended motor movement directions are decoded using EEG signal features. The second stage is envisioned to be a motor imagery classifier. In this paper, we focus our discussion on the classification of motor intention only.

Since the presentation style of the visual cues does not influence performance[22], we used "realistic" rather than "abstract" visual cues in order to avoid a tedious calibration procedure. Furthermore, a realistic visual environment may enhance learning progress[23]. Ensemble Empirical Mode Decomposition (EEMD), in which the signals are decomposed into intrinsic mode functions (IMFs)[24], was utilized to isolate the frequency information in the training set. We developed and validated the use of scalp EEG data and current density localization for intended movement direction analysis. Subsequently, we evaluated a feature classification strategy suitable for distinguishing the brain activity associated with the intended hand movement.
Potential variations in electrode impedance at different recording locations and at different recording times may drastically impact the amplitude and signal-to-noise ratio (SNR) of the EEG signals. Finally, we proposed and evaluated a scaling factor based on the "signature" EEG signal that follows the presentation of the visual cues. We hypothesize that such a scaling factor can compensate for electrode impedance differences between trials and across locations. Our preliminary results indicated that the inclusion of this scaling process significantly improves the overall single-trial classification accuracy. In a two-class decoding scheme, the accuracy improved from approximately 60% to over 90% with the scaling factor. This work would have a direct impact on the acceptability of BCI neuroprosthetic applications, since the new device would function based on the user's intent, providing a more intuitive control paradigm for simple device control with few DOF.

2. Methods

The experimental procedure for investigating motor intention using the targeted BCI is shown in Figure 1. It involves the design of a visual stimulation system, an EEG data acquisition system, a signal pre-processing unit, an artifact removal algorithm, a feature extraction method, and a signal classifier. The details of each step are provided in the subsequent sections. The method for computing the visual-cue based scaling factor is proposed and described in the feature extraction section.

Figure 1. A flow chart describing the overall experimental procedure for decoding the reaching tasks during the planning stage. Visual stimulation was provided to the subjects while the intended-movement-direction EEG was recorded. The scaling factors applied to the signal amplitude features in the test set were obtained after artifact removal. A 5x5-fold cross validation procedure was performed to determine the overall accuracy.

2.1. Visual Stimulation and EEG Data Acquisition

2.1.1. Participants

Eight able-bodied participants with normal or corrected-to-normal eyesight (6 males and 2 females, age 19-29, all right-handed) were recruited for this study. None of the subjects had prior experience with BCIs or a history of neurological disorders. The protocol was approved by the Institutional Review Board for Human Use (IRB) at Louisiana Tech University. All participants read and signed an informed consent form.

2.1.2. Task and Stimuli

Figure 2. The experimental setup is illustrated. Touch pads (circles) are placed at the base (resting) position and at the targets of the reaching tasks (left, right, and forward) to track whether the subject has performed the tasks correctly.

The experimental setup is shown in Figure 2, where the subjects were seated in front of a computer screen and given visual cues from the computer monitor. Touch pad sensors were placed at the middle and to the sides of the monitor to track the subject responses. In order to obtain useful signals for fast interpretation, the BCI task should be relatively easy to perform and require little effort from the users, to prevent physical or mental fatigue[10]. To that end, a targeted delayed saccade/reach task was used in this study. A minimum of 450 trials was performed by each subject. The recording sessions were broken into blocks (90 trials per block), separated by 5 min breaks. The sequence of each trial is shown in Figure 3. At the beginning of each trial, the subject was asked to relax the forearm and place the palm on the touch pad at the base position, 40 cm away from the screen. Visual cues were provided using the E-Prime 2.0 system (Science Plus Group, Netherlands) to inform the subjects of the proper movement to perform in a dark room. Two types of visual cues were provided. First, the "Effector cue" was displayed 500 ms after the beginning of each trial. It instructed which movement type the user should perform (imaginary movement with eyes closed, reach without eye movement, or saccade to target). The second visual cue, called the "Direction cue", was shown at the center of the screen 1000 ms after the "Effector cue". It informed the user of the appropriate reaching direction (left, right, or forward). The subjects were asked to fixate on the center of the screen until the "Go cue" appeared 700 ms after the "Direction cue". They were then asked to perform the indicated action as quickly as possible after the appearance of the "Go cue". The nine different "Effector - Direction" combinations were evenly distributed and randomly presented over the whole experiment.
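The cue timing and the balanced randomization of the nine "Effector - Direction" combinations can be summarized in a short script. The sketch below is only illustrative: the onset values follow the timings stated above, while the function and variable names are our own assumptions and the actual experiment was implemented in E-Prime 2.0.

```python
import random

# Cue onsets within one trial, in milliseconds (values from the protocol above).
EFFECTOR_CUE_ONSET_MS = 500    # "Effector cue": 500 ms after trial start
DIRECTION_CUE_ONSET_MS = 1500  # "Direction cue": 1000 ms after the Effector cue
GO_CUE_ONSET_MS = 2200         # "Go cue": 700 ms after the Direction cue
PLANNING_WINDOW_MS = (0, 700)  # analysis window, relative to the Direction cue

EFFECTORS = ("imagine", "reach", "saccade")  # hypothetical labels for the three effectors
DIRECTIONS = ("left", "right", "forward")

def build_trial_list(n_trials=450, seed=0):
    """Return a randomized, evenly balanced list of (effector, direction) trials."""
    combos = [(e, d) for e in EFFECTORS for d in DIRECTIONS]  # the nine combinations
    reps, remainder = divmod(n_trials, len(combos))
    trials = combos * reps + combos[:remainder]
    random.Random(seed).shuffle(trials)
    return trials

if __name__ == "__main__":
    trials = build_trial_list()
    print(len(trials), trials[:3])
```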
Figure 3. Time course of one trial is illustrated. The 700 ms delay period between the presentation of the "Direction cue" and the "Go cue" is considered the period of directional movement planning. The EEG data within this time window is used for the analysis.

2.1.3. Apparatus

The EEG evoked response potential (ERP) signals were recorded using a 128-channel HydroCel Geodesic Sensor Net (Electrical Geodesics Inc., Eugene, OR) with the Net-Station 5.3 software. Figure 4 shows the electrode placement as viewed from the top of the head and the regions of interest around the PPC. All signals were anti-aliasing low-pass filtered at 100 Hz and digitized at a sampling rate of 256 Hz.

Figure 4. The channel map for the 128 electrodes is shown with electrode numbers labeled. The illustration is viewed from the top of the subject's head with the front of the head pointing upward. The regions of interest near the PPC are circled.

2.2. Signal Pre-processing and Artifact Removal

The EEG data was digitally filtered between 0.1 and 30 Hz. Since this project focused only on the analysis of motor intention prior to any actual movement, all three effectors were included and combined in the analysis. The data was separated into three groups (left, right, or forward) based on the "Direction cues". Bad channels, resulting from poor skin contact, eye blinks, eye movement, or muscle movement, were detected based on their particular signal characteristics and abnormal amplitude information, and were replaced by the averaged signals from neighboring channels using NetStation built-in functions. Only artifact-free epochs (with amplitude < 50 μV) were used for further analysis. The data was also re-referenced to the average signal across all 128 electrodes. The 100 ms before the onset of each trial was used for baseline correction.
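As a rough illustration of this pre-processing chain (0.1-30 Hz filtering, epoching around the "Direction cue", 100 ms baseline correction, average re-referencing, and rejection of epochs exceeding 50 μV), the following Python/SciPy sketch processes a continuous recording stored as a channels x samples array. The zero-phase Butterworth filter, the event-marker format, and all function names are our own assumptions; the study itself used the NetStation built-in tools.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 256  # sampling rate (Hz)

def bandpass(eeg, low=0.1, high=30.0, fs=FS, order=4):
    """Zero-phase 0.1-30 Hz band-pass filter; eeg has shape (n_channels, n_samples)."""
    sos = butter(order, [low, high], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, eeg, axis=-1)

def epoch_and_clean(eeg, cue_samples, fs=FS, pre_s=0.1, post_s=0.7, reject_uv=50.0):
    """Cut epochs around each Direction-cue onset (given in samples), baseline-correct
    with the 100 ms pre-cue interval, re-reference to the channel average, and drop
    epochs whose absolute amplitude exceeds the rejection threshold (in uV)."""
    pre, post = int(pre_s * fs), int(post_s * fs)
    kept = []
    for cue in cue_samples:
        ep = eeg[:, cue - pre:cue + post].copy()
        ep -= ep[:, :pre].mean(axis=1, keepdims=True)  # baseline correction
        ep -= ep.mean(axis=0, keepdims=True)            # average reference
        if np.max(np.abs(ep)) < reject_uv:              # simple artifact rejection
            kept.append(ep)
    return np.stack(kept) if kept else np.empty((0, eeg.shape[0], pre + post))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    raw = rng.standard_normal((128, FS * 60)) * 5.0   # fake 60 s, 128-channel record (uV)
    cues = np.arange(2 * FS, 58 * FS, 4 * FS)         # fake Direction-cue onsets (samples)
    clean = epoch_and_clean(bandpass(raw), cues)
    print(clean.shape)                                # (n_epochs, 128, n_epoch_samples)
```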
2.3. Offline Source Localization Validation

Source localization was performed offline as a way to validate that the activated brain regions in our recorded data are consistent with the literature. The process described in this subsection is not needed in a real-time implementation of the motor intention decoder. Independent component analysis (ICA)[25] was first performed using the extended Infomax ICA algorithm in the EEGLAB tools[26] to find the maximally temporally independent signals available[27]. Figure 5 illustrates the averaged signals across all recording trials before and after the removal of the eye motion artifact. The DIPFIT 2.0 algorithm was then used to estimate the dipole sources of the remaining independent components (ICs) after spatial filtering[26]. The dipoles were projected onto the boundary element model in EEGLAB and then plotted on the average MNI (Montreal Neurological Institute) brain images[28]. The source locations were then specified using the Talairach coordinate system. Dipole locations from the source localization algorithm were not used in the single-trial classification of arm movement direction since dipole fitting is a time-consuming process.

Figure 5. EEG signals before and after spatial filtering are shown. The signal for each channel is averaged across all trials prior to ICA spatial filtering. An example of an eye artifact is seen in the EEG spatial map around 305 ms. The averaged signal for each channel after ICA spatial filtering is shown in the bottom subplot.

2.4. Ensemble Empirical Mode Decomposition

Ensemble empirical mode decomposition (EEMD) is a data-driven analysis method that separates a signal into a collection of intrinsic mode functions (IMFs). It is a powerful approach for analyzing nonlinear, non-stationary EEG signals since the method is based only on the local characteristic time scale[29-31]. Unlike traditional bandpass filters, EEMD breaks down the signals in an empirical manner, strictly based on the signal characteristics, without specifying any frequency bands[32]. The mode mixing problem that exists in the empirical mode decomposition (EMD) method can be resolved by EEMD, which provides a uniformly distributed reference frame through the addition of white noise[33]. The procedure for EEMD has been described in great detail in[29] and is not repeated here.

2.5. Feature Extraction and Signal Classification

Signal classifiers were created using the Statistical Pattern Recognition Toolbox in Matlab[34] to decode the EEG signal features. A two-class analysis (left versus right) was performed using a Fisher Linear Discriminant (FLD) binary classifier in a 5x5-fold cross validation procedure. Eighty percent (80%) of the data for each direction was randomly chosen as the training set. The remaining 20% of the data was assigned to the testing set. The "signature" signal was acquired in each region of interest (ROI) near the PPC region (see Figure 4) using the training set of each cross-validation run. In this study, the averaged ERP signal within 235 ms after the presentation of the "Direction cue" is considered the "signature" at each ROI. It has been observed that, regardless of the intended reaching direction or the type of effector requested of the subject, the averaged EEG signal within the first 235 ms after the presentation of the "Direction cue" retains a similar signal profile. Each "signature", consisting of a dominant high delta (0-4 Hz) and a low theta (4-8 Hz) component, has been observed to have a similar shape regardless of the intended movement direction.
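A minimal sketch of how such a cue-locked "signature" template, and the per-channel maximum-minus-minimum value used for scaling in the next step, could be computed from the training epochs is shown below. The 235 ms window follows the text; the array layout and function names are assumptions rather than the authors' implementation.

```python
import numpy as np

FS = 256                    # sampling rate (Hz)
SIGNATURE_WINDOW_S = 0.235  # first 235 ms after the "Direction cue"

def signature_template(train_epochs):
    """Average the training epochs (trials x channels x samples, with sample 0
    locked to the Direction cue) to obtain the cue-locked signature per channel."""
    n_sig = int(SIGNATURE_WINDOW_S * FS)
    return train_epochs[:, :, :n_sig].mean(axis=0)   # channels x samples

def scaling_factors(template):
    """Per-channel (max - min) of the signature, used as the scaling term below."""
    return template.max(axis=1) - template.min(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    train = rng.standard_normal((100, 12, 180))       # fake ROI training epochs (uV)
    factors = scaling_factors(signature_template(train))
    print(factors.shape)                              # one factor per channel: (12,)
```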
The local maximum and local minimum of the "signature" signal at each ROI were found and their difference was used as a scaling factor. The signal amplitude at each recording site was scaled with the following equation:

$$v_{i,\mathrm{scaled}}(t) = \frac{v_i(t) - v_{i,\min}}{v_{i,\max} - v_{i,\min}} \qquad (1)$$

where $v_i(t)$ and $v_{i,\mathrm{scaled}}(t)$ denote the ERPs in the test set, at location $i$, before and after scaling. The values $v_{i,\max}$ and $v_{i,\min}$ are the maximum and minimum of the "signature" at the same location, found in the averaged training data set. Since this scaling process only involves rescaling the EEG recording by a fixed, location-specific factor, it is suitable for real-time applications. Figure 6 is a graphical illustration of the scaling of the EEG signals. The impact of scaling was evaluated by investigating the features at different time delays after the "Direction cue"[35].

Once it had been established that this cue-based "signature" can enhance the binary classification accuracy of planned motor movement in two directions, we performed a second analysis. It involved evaluating the effect of the EEMD-based operation on the decoding accuracy. The high-frequency noise in the EEG data was reduced through the elimination of IMF1 and IMF2[36], since the ERP differences between intended movement directions have been reported to be below 12 Hz[37]. The evaluation was performed by comparing the FLD decoder performance on the IMF-removed data set. Since the average EEG signal amplitude in a 40 ms window was the signal feature, removing the IMF1 and IMF2 components did not significantly improve the decoder performance (93.91±6.09% versus 95.44±3.28%, p > 0.4).

Figure 6. Graphical illustration of the scaling factor. The scaling factor computed from the ROI channels in the training set is applied to the test set. The light colored lines indicate the EEG signals from individual trials in the training set from one electrode; the dark bolded line indicates the average signal. The scaling factor is set to be the difference between the maximum and minimum values in the first 235 ms of the averaged signal.
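Putting these pieces together, the sketch below illustrates one fold of the single-trial decoding pipeline: the scaling factors and the mean-amplitude feature window (here 271-310 ms after the "Direction cue") are derived from the training set, applied to the held-out trials, and classified with a linear discriminant. scikit-learn's LinearDiscriminantAnalysis and repeated stratified k-fold splitting stand in for the Statistical Pattern Recognition Toolbox FLD and the 5x5-fold procedure used in the paper, and the division by the signature's max-minus-min follows our reading of Equation (1); none of this is the authors' actual code.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold

FS = 256
SIG_WIN = slice(0, int(0.235 * FS))                 # signature window: 0-235 ms
FEAT_WIN = slice(int(0.271 * FS), int(0.310 * FS))  # feature window: 271-310 ms

def cue_scaled_features(train_x, test_x):
    """Derive the per-channel signature max/min from the training epochs, scale
    both sets as in Eq. (1), and return the mean amplitude in the feature window."""
    template = train_x[:, :, SIG_WIN].mean(axis=0)   # channels x samples
    vmax = template.max(axis=1)[None, :, None]
    vmin = template.min(axis=1)[None, :, None]

    def features(x):
        scaled = (x - vmin) / (vmax - vmin)           # per-channel scaling
        return scaled[:, :, FEAT_WIN].mean(axis=2)    # trials x channels

    return features(train_x), features(test_x)

def cross_validated_accuracy(epochs, labels, n_repeats=5, n_folds=5, seed=0):
    """Repeated stratified k-fold, a stand-in for the paper's 5x5-fold scheme."""
    accs = []
    for rep in range(n_repeats):
        cv = StratifiedKFold(n_folds, shuffle=True, random_state=seed + rep)
        for tr, te in cv.split(epochs, labels):
            f_tr, f_te = cue_scaled_features(epochs[tr], epochs[te])
            clf = LinearDiscriminantAnalysis().fit(f_tr, labels[tr])
            accs.append(clf.score(f_te, labels[te]))
    return float(np.mean(accs)), float(np.std(accs))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    x = rng.standard_normal((200, 12, 180))   # fake left/right ROI epochs
    y = np.repeat([0, 1], 100)                # 0 = left, 1 = right
    print(cross_validated_accuracy(x, y))
```

With real epochs in place of the synthetic arrays, the same routine would return the per-subject mean and standard deviation of the kind reported in Table 1.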
3. Results

Using the Talairach coordinate system, the dominant equivalent dipole source for each intended arm movement direction was observed near the PPC area for all subjects. Figure 7 illustrates the output of the EEGLAB plug-in DIPFIT 2.0 for a particular subject, where the coordinates for the left component [-20, -40, 24], the forward component [0, -33, 40], and the right component [28, -40, 23] were found. This is consistent with the results reported in the literature[12]. The effects of the parietal ICs were then back-projected onto the scalp for each subject after artifact removal (Figure 8).

Figure 7. Source reconstruction for three equivalent dipoles is illustrated. As a validation, the estimated source dipole locations were found to be near the PPC regions, consistent with the reported literature[2], with the residual variance for each dipole estimate found to be < 6%.

Figure 8. The independent component clusters of each subject are shown. The three independent component clusters extracted from ten subjects' ICA decompositions demonstrate activity in the posterior parietal cortex region. The larger heads show the average projection across ten subjects; the smaller scalp maps are from individual subjects.

As a preliminary evaluation of the proposed cue-based scaling strategy, a two-direction (left versus right) classifier was created. The averaged ERP data from the training set at each recording site was found, and the scaling term was calculated as the difference between the maximum and minimum values within 235 ms after the presentation of the visual cue. Once the scaling factors were found, they were applied to the test set. Amplitude features at different time delays were evaluated, and the improvement in classification accuracy after the scaling operation is shown in Figure 9. The highest classification accuracy (on the scaled data) was found 271-310 ms after the visual cues. A statistically significant improvement (p < 0.01) in classification performance was found with scaling (accuracy 93.91±6.09%) compared with no scaling (accuracy 60.11±9.02%). Table 1 summarizes the subject-by-subject results for the single-trial FLD using 5x5-fold cross validation.

Figure 9. EEG amplitude features obtained in the PPC regions can be used to classify the intended direction of the reaching motion. Classification accuracy is highest using amplitude features 271-310 ms after the presentation of the visual cues.

Table 1. Single-trial binary classification of left versus right intended movement using FLD. A statistically significant improvement in accuracy was found after cue-based "signature" scaling (p < 0.01).

Subject      | Without Scaling (Mean ± Stdev) | With Scaling (Mean ± Stdev)
A            | 66.40 ± 8.11%                  | 99.33 ± 0.83%
B            | 59.20 ± 4.05%                  | 96.13 ± 3.87%
C            | 72.80 ± 5.68%                  | 91.60 ± 4.89%
D            | 55.23 ± 5.48%                  | 96.80 ± 2.25%
E            | 68.54 ± 4.91%                  | 95.33 ± 2.50%
F            | 57.67 ± 5.49%                  | 78.71 ± 7.03%
G            | 54.75 ± 5.37%                  | 94.79 ± 2.31%
H            | 46.29 ± 5.54%                  | 98.58 ± 1.43%
Mean ± Stdev | 60.11 ± 9.02%                  | 93.91 ± 6.09%
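The delay scan behind Figure 9 can be outlined by sliding a 40 ms feature window across the post-cue interval and re-running the classifier at each delay. The sketch below assumes the epochs have already been artifact-cleaned and cue-scaled; the helper names and the use of scikit-learn's cross_val_score are our own choices, not the authors' implementation.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

FS = 256
WIN_S = 0.040  # 40 ms feature window

def accuracy_by_delay(epochs, labels, delays_s):
    """Mean 5-fold LDA accuracy of the mean-amplitude feature for a 40 ms window
    starting at each delay after the Direction cue. Epochs are assumed to be
    artifact-free and already cue-scaled (trials x channels x samples)."""
    win = int(WIN_S * FS)
    results = {}
    for delay in delays_s:
        start = int(delay * FS)
        feats = epochs[:, :, start:start + win].mean(axis=2)
        score = cross_val_score(LinearDiscriminantAnalysis(), feats, labels, cv=5)
        results[delay] = score.mean()
    return results

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    x = rng.standard_normal((200, 12, 180))   # fake scaled epochs
    y = np.repeat([0, 1], 100)
    for delay, acc in accuracy_by_delay(x, y, [0.151, 0.191, 0.231, 0.271, 0.311]).items():
        print(f"{int(delay * 1000)} ms: {acc:.2f}")
```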
4. Discussion

BCI technology enables people to interact with external devices in new and intuitive ways. As a prosthetic application, it can help people with limited muscle control (such as those suffering from spinal injury, stroke, or cerebral palsy) regain some of the lost motor functions. Even though there is still debate over the best classification method for BCI, we developed and validated the use of surface EEG to distinguish the brain activity during the planning of intended arm movements. EEG data was recorded from untrained subjects without feedback, and each subject was analyzed independently in this study. Subjects were only instructed to perform the indicated reaching tasks (see Figure 3). In the framework of an upper-limb neuroprosthesis, this paradigm could be directly implemented as part of the control strategy of a prosthetic arm for activities of daily living (ADL).

The spatial, temporal, and spectral features were extracted based on the reported literature. We used the spatial information near the PPC regions as previously reported[15]. The temporal feature pertaining to the mean EEG signal amplitude 271-310 ms after the presentation of the "Direction cue" was found to have the most significant difference between the intended arm reaching directions and the highest classification accuracy. A scaling strategy based on the EEG response to the cue-based stimulus was proposed. The maximum and minimum of the "signature" signal from 0-235 ms after the presentation of the "Direction cue" were used to form a scaling factor for subsequent single-trial analysis. The early synchronization in the delta (0-4 Hz) and low theta (4-8 Hz) bands is related to the "Direction cue", which supports the idea that the early component reflects the processing of the visual intention, whereas the alpha band (9-12 Hz) is associated with visual attention[38]. The "signature" signal in this frequency range can be found at different recording electrodes near the ROIs and the visual cortex during the delayed "Direction cue" period.

Our current study did not attempt to distinguish the three effectors. Recently, there have been many reported studies on the classification of saccade/motor imagery versus motor execution tasks[21, 39-40]. The combination of motor planning and motor imagery for amputee subjects may be a more viable technique for controlling neuroprosthetic devices. The proposed cue-based "signature" scaling factor gave promising results by improving the classification accuracy of intended motor directions. To test this scaling strategy in more realistic situations, it may be extended to a non-visual-cue based (voluntary movement) setup in the future. In such experiments, the subjects will decide the desired reaching destinations without target-specific stimulation. The "signatures" in these situations would be internally triggered, possibly dominated by a slightly different frequency component.

A non-invasive mobile prosthetic platform using wireless dry electrodes and wearable EEG systems would be beneficial in real-world operational environments[41-42]. Before the implementation of a real-time BCI system, hardware platforms and specific software need to be developed. The three main advantages of an EEG recording system with a real-time signal processing platform are low cost, easy customization, and intuitive operation. Future development of dedicated software communication between the EEG recording devices and the signal processing platform must be designed to operate close to real time. Other specifications include a simple training protocol for rehabilitation purposes. More work is needed to understand how changes in attention and intention may impact EEG signals. A future study on decoding angular direction, instead of the current discrete directions, may be necessary. Participants with motor disabilities will be recruited to provide more conclusive results on the advantages of the proposed "signature" scaling and classification algorithms.

5. Conclusions

Although surface EEG signals carry limited information about complex arm movements, we demonstrated that EEG can be used to decode the direction of reaching tasks during the planning stage. Experiments were designed to provide visual cues to guide the users' imagery/arm movements. ICA and EEMD were effective for artifact removal. A cue-based scaling strategy was developed to adjust the EEG signal amplitude near the PPC regions.
Temporal information 271-310 ms after the presentation of the visual cues was found to hold the most discriminatory features. This work has direct application to device control based on the electroencephalographic signals of the user's intent. In addition, motor intention combined with motor imagery paradigms would provide more commands for the control of a BCI. The overall single-trial classification accuracy of 93.91 ± 6.09% makes this paradigm promising for non-invasive BCI design in neuromotor prosthesis or wheelchair applications.

ACKNOWLEDGEMENTS

The project described was supported by Grant Number P20RR016456 from the National Institutes of Health (NIH) National Center for Research Resources (NCRR).

REFERENCES

[1] J.R. Wolpaw, N. Birbaumer, D.J. McFarland, G. Pfurtscheller, T.M. Vaughan, "Brain-computer interfaces for communication and control", Clinical Neurophysiology, vol. 113, pp. 767-791, 2002.
[2] P.S. Hammon, S. Makeig, H. Poizner, E. Todorov, V.R. de Sa, "Predicting Reaching Targets from Human EEG", IEEE Signal Processing Magazine, vol. 69, pp. 69-77, 2008.
[3] A. Nijholt, D. Tan, "Brain-Computer Interfacing for Intelligent Systems", IEEE Intelligent Systems, vol. 23, pp. 72-79, 2008.
[4] S. Makeig, K. Gramann, T.P. Jung, T.J. Sejnowski, H. Poizner, "Linking brain, mind and behavior", International Journal of Psychophysiology, vol. 79, pp. 95-100, 2009.
[5] G. Prasad, P. Herman, D. Coyle, S. McDonough, J. Crosbie, "Applying a brain-computer interface to support motor imagery practice in people with stroke for upper limb recovery: a feasibility study", Journal of Neuroengineering and Rehabilitation, vol. 7, pp. 60, 2010.
[6] A. Bashashati, M. Fatourechi, R.K. Ward, G.E. Birch, "A survey of signal processing algorithms in brain-computer interfaces based on electrical brain signals", Journal of Neural Engineering, vol. 4, pp. 32-57, 2007.
[7] Y.T. Wang, Y. Wang, T.P. Jung, "A cell-phone-based brain-computer interface for communication in daily life", Journal of Neural Engineering, vol. 8, pp. 025018, 2011.
[8] F.C. Sebelius, B.N. Rosen, G.N. Lundborg, "Refined myoelectric control in below-elbow amputees using artificial neural networks and a data glove", The Journal of Hand Surgery, vol. 30, pp. 780-789, 2005.
[9] J.M. Fontana, A.W.L. Chiu, "Control of Prosthetic Device Using Support Vector Machine Signal Classification Technique", American Journal of Biomedical Sciences, vol. 1, pp. 336-343, 2009.
[10] M. van Gerven, J. Farquhar, R. Schaefer, R. Vlek, J. Geuze, A. Nijholt, N. Ramsey, P. Haselager, L. Vuurpijl, S. Gielen, P. Desain, "The brain-computer interface cycle", Journal of Neural Engineering, vol. 6, pp. 041001, 2009.
[11] T.J. Bradberry, R.J. Gentili, J.L. Contreras-Vidal, "Reconstructing three-dimensional hand movements from noninvasive electroencephalographic signals", Journal of Neuroscience, vol. 30, pp. 3432-3437, 2010.
[12] Y. Wang, S. Makeig, "Predicting Intended Movement Direction Using EEG from Human Posterior Parietal Cortex", in D.D. Schmorrow et al. (Eds.): Augmented Cognition, HCII 2009, LNAI 5638, pp. 437-446, 2009.
[13] J.N. Mak, Y. Arbel, J.W. Minett, L.M. McCane, B. Yuksel, D. Ryan, D. Thompson, L. Bianchi, D. Erdogmus, "Optimizing the P300-based brain-computer interface: current status, limitations and future directions", Journal of Neural Engineering, vol. 8, pp. 025003, 2011.
[14] P. Brunner, L. Bianchi, C. Guger, F. Cincotti, G. Schalk, "Current trends in hardware and software for brain-computer interfaces (BCIs)", Journal of Neural Engineering, vol. 8, pp. 025001, 2011.
[15] R. Quian Quiroga, L.H. Snyder, A.P. Batista, H. Cui, R.A. Andersen, "Movement intention is better predicted than attention in the posterior parietal cortex", Journal of Neuroscience, vol. 26, pp. 3615-3620, 2006.
[16] H.H. Ehrsson, S. Geyer, E. Naito, "Imagery of voluntary movement of fingers, toes, and tongue activates corresponding body-part-specific motor representations", Journal of Neurophysiology, vol. 90, pp. 3304-3316, 2003.
[17] M. Naeem, C. Brunner, R. Leeb, B. Graimann, G. Pfurtscheller, "Seperability of four-class motor imagery data using independent components analysis", Journal of Neural Engineering, vol. 3, pp. 208-216, 2006.
[18] C.F. Pasluosta, A.W.L. Chiu, "Slippage Sensory Feedback and Nonlinear Force Control System for a Low-Cost Prosthetic Hand", American Journal of Biomedical Sciences, vol. 1, pp. 295-302, 2009.
[19] S. Waldert, H. Preissl, E. Demandt, C. Braun, N. Birbaumer, A. Aertsen, C. Mehring, "Hand movement direction decoded from MEG and EEG", Journal of Neuroscience, vol. 28, pp. 1000-1008, 2008.
[20] S. Waldert, T. Pistohl, C. Braun, T. Ball, A. Aertsen, C. Mehring, "A review on directional information in neural signals for brain-machine interfaces", Journal of Physiology Paris, vol. 103, pp. 244-254, 2009.
[21] L. Holper, M. Wolf, "Single-trial classification of motor imagery differing in task complexity: a functional near-infrared spectroscopy study", Journal of Neuroengineering and Rehabilitation, vol. 8, pp. 34, 2011.
[22] C. Neuper, R. Scherer, S. Wriessnegger, G. Pfurtscheller, "Motor imagery and action observation: modulation of sensorimotor brain rhythms during mental control of a brain-computer interface", Clinical Neurophysiology, vol. 120, pp. 239-247, 2009.
[23] G. Pfurtscheller, R. Leeb, C. Keinrath, D. Friedman, C. Neuper, C. Guger, M. Slater, "Walking from thought", Brain Research, vol. 1071, pp. 145-152, 2006.
[24] Z.X. Zhou, B.K. Wan, D. Ming, H.Z. Qi, "A novel technique for phase synchrony measurement from the complex motor imaginary potential of combined body and limb action", Journal of Neural Engineering, vol. 7, pp. 046008, 2010.
[25] T.P. Jung, S. Makeig, T-W. Lee, M.J. McKeown, G. Brown, A.J. Bell, T.J. Sejnowski, "Independent component analysis of biomedical signals", Proc 2nd Int Workshop on Independent Component Analysis and Signal Separation, pp. 633-644, 2000.
[26] A. Delorme, S. Makeig, "EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis", Journal of Neuroscience Methods, vol. 134, pp. 9-21, 2004.
[27] C.T. Lin, S.A. Chen, T.T. Chiu, H.Z. Lin, L.W. Ko, "Spatial and temporal EEG dynamics of dual-task driving performance", Journal of Neuroengineering and Rehabilitation, vol. 8, pp. 11, 2011.
[28] R. Oostenveld, T.F. Oostendorp, "Validating the boundary element method for forward and inverse EEG computations in the presence of a hole in the skull", Human Brain Mapping, vol. 17, pp. 179-192, 2002.
[29] Z. Wu, N.E. Huang, "Ensemble empirical mode decomposition: A noise-assisted data analysis method", Advances in Adaptive Data Analysis, vol. 1, pp. 1-41, 2009.
[30] N.E. Huang, Z. Shen, S.R. Long, M.C. Wu, H.H. Liu, "The empirical mode decomposition and the Hilbert spectrum for nonlinear and nonstationary time series analysis", Proceedings of the Royal Society London A, vol. 454, pp. 903-995, 1998.
[31] C.L. Yeh, H.C. Chang, C.H. Wu, P.L. Lee, "Extraction of single-trial cortical beta oscillatory activities in EEG signals using empirical mode decomposition", Biomedical Engineering Online, vol. 9, pp. 25, 2010.
[32] W.Y. Hsu, "EEG-based motor imagery classification using neuro-fuzzy prediction and wavelet fractal features", Journal of Neuroscience Methods, vol. 189, pp. 295-302, 2010.
[33] T.Y. Wu, Y.L. Chung, "Misalignment diagnosis of rotating machinery through vibration analysis via the hybrid EEMD and EMD approach", Smart Materials and Structures, vol. 18, pp. 095004, 2009.
[34] V. Franc, V. Hlavac, "Statistical Pattern Recognition Toolbox for Matlab", Center for Machine Perception, Czech Technical University, 2004.
[35] M. Congedo, F. Lotte, A. Lecuyer, "Classification of movement intention by spatially filtered electromagnetic inverse solutions", Physics in Medicine and Biology, vol. 51, pp. 1971-1989, 2006.
[36] P. Flandrin, G. Rilling, P. Goncalves, "Empirical Mode Decomposition as a Filter Bank", IEEE Signal Processing Letters, vol. 11, pp. 112-114, 2004.
[37] Y. Wang, T.P. Jung, "A collaborative brain-computer interface for improving human performance", PLoS One, vol. 6, pp. e20422, 2011.
[38] M.S. Treder, A. Bahramisharif, N.M. Schmidt, M.A. van Gerven, B. Blankertz, "Brain-computer interfacing using modulations of alpha activity induced by covert shifts of attention", Journal of Neuroengineering and Rehabilitation, vol. 8, pp. 24, 2011.
[39] K.J. Miller, G. Schalk, E.E. Fetz, M. den Nijs, J.G. Ojemann, R.P. Rao, "Cortical activity during motor execution, motor imagery, and imagery-based online feedback", Proceedings of the National Academy of Sciences USA, vol. 107, pp. 4430-4435, 2010.
[40] E. Raffin, J. Mattout, K.T. Reilly, P. Giraux, "Disentangling motor execution from motor imagery with the phantom limb", Brain, vol. 135, pp. 582-595, 2012.
[41] C.T. Lin, L.W. Ko, J.C. Chiou, J.R. Duann, R.S. Huang, S.F. Liang, T.W. Chiu, T.P. Jung, "Noninvasive Neural Prostheses Using Mobile and Wireless EEG", Proceedings of the IEEE, vol. 96, pp. 1167-1183, 2008.
[42] C.T. Lin, L.W. Ko, M.H. Chang, J.R. Duann, J.Y. Chen, T.P. Su, T.P. Jung, "Review of wireless and wearable electroencephalogram systems and brain-computer interfaces--a mini-review", Gerontology, vol. 56, pp. 112-119, 2009.
