Today I will give you a lecture about the brain-computer interface, which is also called the brain-machine interface. Let's go through the outline of my lecture.
  • Brief overview of Brain Computer Interfaces (BCI), motivation and objectives
  • Basic paradigms used for noninvasive EEG BMI: SSVEP, Motor Imagery, and P300 VEP
  • Signal Processing and Machine Learning Methods for BCI
  • Experimental results: demos and computer simulations
  • Classification and recognition of human emotions
  • Brain to Brain Interactions, Hyperscanning
  • Future perspectives: Hybrid BCI, B2B, Neuro-feedback
  • Potential applications for rehabilitation, therapies, training, and entertainment
First, I would like to give you a brief overview of BCI, present some motivation and objectives, and tell you why this technology is so interesting and exciting. Next I will focus on the paradigms used for noninvasive EEG BMI, which records brain waves using electrodes placed on the scalp. I will focus on three fundamental paradigms: SSVEP, Motor Imagery, and P300 VEP. I am also going to give you an overview of the signal processing and machine learning methods used in brain-computer interfaces (BCI). I will illustrate how BCI works with computer simulations and real-life demos. I will also talk about brain-to-brain interaction and hyperscanning. Finally, I will discuss future perspectives on BCI and, most importantly, its real-life applications: rehabilitation, therapy, training, and entertainment.
As you know, we can interact with a computer via keyboard, mouse, or speech. We can also control a computer with body language, gestures, or eye movements. BCI can be considered an extension of human-computer interaction, because BCI controls the computer directly with brain waves.
What is the strict definition of BCI? It is a mechanism that allows a user to interact with the outside world through the measurement of brain waves or correlates of neural activity associated with mental processes or perception. A brain-machine interface is a quite complicated system; it includes three basic parts:
  • a means of measuring neural signals from the brain,
  • a machine learning method that provides an algorithm for decoding the brain signals,
  • a methodology for mapping this decoding onto a behavior or action.
BCI framework

BCI is a communication technology providing a direct connection between the human brain and a computer that bypasses peripheral nerves and muscles. As you see, this is a complex process. First, we must amplify the very weak brain signals; second, we need to store these signals on a computer; and next we need to process the data to extract specific features, select features, and then perform classification.

We have many potential and real-life applications of BCI. One is a spelling machine; others are controlling a wheelchair, controlling robot arms or prostheses, and navigating the cursor on the screen.

What is the main goal of BCI or BMI? The goal is to develop a noninvasive and user-friendly interface capable of controlling multiple independent channels. In other words, we need a multi-command system that connects the computer with the human brain directly, without using voice control or muscles. So the goal of BCI research is to develop systems that are able to decode the neural representation of natural movement planning and execution.
What is the motivation for BCI? To provide a new research paradigm to better understand how the human brain works and processes information, to learn how information from different sensory streams (visual, auditory, and somatosensory) is integrated in the brain, and to use this knowledge to build efficient neuro-engineering devices. In other words, why is BCI such an interesting and exciting field of research? First of all, it is a new paradigm in neuroscience that may help us understand how the human brain works. The next reason is that BCI expands the possibilities of human-computer interfaces for rehabilitation and training. The third reason is the development of control of electronic devices to assist elderly or partially paralyzed people.
So what is the current trend in BCI? The most interesting paradigm is so-called Motor Imagery, in which the subject imagines a movement of a limb; we would like not only to develop a good BCI system but also to understand how humans control movement.
How does a brain-computer interface work? We need to identify or extract specific patterns from the brain. First, we measure the EEG signal. Next we do some preprocessing: removing ongoing brain activity not related to the specific mental task. Next we extract features: specific patterns related to the mental task. Then we perform classification using a machine learning algorithm.
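The four steps above can be sketched in a few lines of code. This is a minimal toy illustration, not a real EEG pipeline: the simulated "EEG" epoch, the band limits, and the threshold rule are all invented for the example.

```python
import math
import random

def band_power(signal, fs, f_lo, f_hi):
    """Power of `signal` in the band [f_lo, f_hi] Hz via a naive DFT."""
    n = len(signal)
    total = 0.0
    for k in range(1, n // 2):
        if f_lo <= k * fs / n <= f_hi:
            re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(signal))
            im = sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(signal))
            total += (re * re + im * im) / n
    return total

# Step 1 (measurement), simulated: a 1-second "EEG" epoch containing
# a 10 Hz alpha-band oscillation plus noise.
fs = 128
random.seed(0)
epoch = [math.sin(2 * math.pi * 10 * t / fs) + 0.3 * random.gauss(0, 1)
         for t in range(fs)]
# Step 2 (preprocessing): remove the mean (DC offset).
mean = sum(epoch) / len(epoch)
epoch = [x - mean for x in epoch]
# Step 3 (feature extraction): alpha-band (8-12 Hz) power.
alpha = band_power(epoch, fs, 8, 12)
# Step 4 (classification): a trivial threshold rule comparing two bands.
label = "alpha present" if alpha > band_power(epoch, fs, 20, 24) else "no alpha"
print(label)
```

In a real system each step is far more elaborate (spatial filtering, artifact rejection, trained classifiers), but the structure of the pipeline is the same.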
A brain-machine interface usually works with neurofeedback.
The subject must adapt to the computer via neurofeedback or biofeedback, and the machine should also adapt to the user. The specific brain activity must be extracted in a fraction of a second. The most popular neurofeedback is visual. Other ways of perceiving our own brain patterns are sonification, audification, and haptic or tactile methods. The accuracy of a system with neurofeedback is higher than that of a system without one, as experiments have shown.
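The "fraction of a second" requirement is usually met by decoding overlapping sliding windows of the incoming signal, so the feedback can be updated several times per second. Here is a minimal sketch; the placeholder decoder and the synthetic stream are invented for the example.

```python
fs = 256                 # sampling rate (Hz)
win = fs // 2            # 0.5 s analysis window
hop = win // 2           # 50% overlap -> a new decision every 0.25 s

def classify(window):
    """Placeholder decoder: here, just a mean-amplitude threshold.
    A real system would run its trained classifier on each window."""
    return "active" if sum(abs(x) for x in window) / len(window) > 0.5 else "rest"

# Synthetic stream: 1 s of "rest" followed by 1 s of "activity".
stream = [0.0] * fs + [1.0] * fs
decisions = [classify(stream[i:i + win])
             for i in range(0, len(stream) - win + 1, hop)]
print(decisions)
```

Each decision can immediately drive the visual (or auditory/tactile) feedback, closing the loop between user and machine.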
How is a BCI built? It is a multi-stage signal processing system. First we perform data acquisition using a multi-channel system; next we perform preprocessing; then feature extraction and feature selection; the next block is pattern recognition and intelligent classification; and the final one is the executive device. This is just a basic model, which can and should be improved by using different methods.
BCI is an interdisciplinary field, drawing on:
  • Neuroscience
  • Physiology/psychology
  • Engineering
  • Mathematics, DSP, ML
  • Computer science
  • Rehabilitation
The signal processing and classification must be:
  • As fast as possible
  • Adjusted for spontaneous variations
  • Avoiding instabilities
  • Robust
In conclusion, the brain-computer interface is a very complex system. This technology is still in development, but within the next 5-10 years it can make a lot of progress, and in the next lecture I will show you what has been achieved so far.
Today we will talk about the affective brain-computer interface. A brain-computer interface (BCI) is a direct communication pathway between a brain and an external device, especially a computer. The affective brain-computer interface involves human emotions: we would like to evoke specific emotions to enhance brain activity. Affective brain-computer interaction aims to enhance the computer's ability to detect, process, and respond to the user's affect or emotions.

I would also like to mention that the current state of the art in brain-computer interfaces, especially affective ones, assumes that we use not only the brain signal but also other signals: for example, electromyography (EMG), i.e. muscle activity; electrooculography (EOG), i.e. eye movements; and electrocardiography (ECG), i.e. heart rate variability.
What do the stimuli in an affective brain-computer interface look like?
This slide shows English actors who try to evoke specific emotions, in this case positive ones.
If we observe such affective faces, our attention to this visual stimulation is higher and, importantly, the brain responds more strongly.
What are the promising approaches and paradigms for BCI? The first is the Steady State Visual Evoked Potential (SSVEP), the next is Motor Imagery, and a very important paradigm is the P300 Visual Event-Related Potential (P300 VERP).
Let's start with SSVEP.
Simulation for wheelchair control
Illustration of the principle
On the screen the subject observes flickering objects, and each of these objects flickers at a different frequency. The brainwave contains these specific frequencies, so we can detect them depending on which object the subject is focusing on. How does this work in reality? I will illustrate it by showing a subject controlling a wheelchair.

So the subject can control the wheelchair by observing four different flickering objects on the screen.
This model can be extended to more directions. For example, in this case we have 8 checkerboards, each of which flickers at a different frequency, giving us eight different directions (left, right, up, down, and so on); we can detect the frequency in the brain using sophisticated signal processing methods.
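One common way to detect which flicker frequency dominates the EEG is to measure the signal power at each candidate frequency, for instance with the Goertzel algorithm. The sketch below is a toy illustration on a noiseless synthetic signal; the set of flicker frequencies is invented for the example.

```python
import math

def goertzel_power(signal, fs, target_freq):
    """Signal power at `target_freq` Hz (Goertzel algorithm)."""
    n = len(signal)
    k = round(n * target_freq / fs)          # nearest DFT bin
    coeff = 2 * math.cos(2 * math.pi * k / n)
    s_prev, s_prev2 = 0.0, 0.0
    for x in signal:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

fs = 256
flicker_freqs = [8.0, 10.0, 12.0, 15.0]      # one frequency per on-screen target
# Simulate 2 s of EEG while the subject attends the 12 Hz target.
signal = [math.sin(2 * math.pi * 12.0 * t / fs) for t in range(2 * fs)]

powers = {f: goertzel_power(signal, fs, f) for f in flicker_freqs}
chosen = max(powers, key=powers.get)
print(chosen)   # -> 12.0
```

Real EEG is far noisier, so practical SSVEP decoders accumulate power over longer windows or use multi-channel methods such as canonical correlation analysis, but the principle is the same: the attended flicker frequency stands out in the spectrum.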

This method works but tires the subject. Detecting the weak signal evoked by these stimuli is very difficult, so we try to use affective BCI. How can we introduce emotions into this paradigm? Instead of checkerboards or flickering lights, we use faces with different emotional expressions as the objects flickering at different frequencies. We recognize the intended direction of movement from the flicker frequency. We implemented this idea for controlling a robot arm: the subject chooses the direction of the robot arm's movement by focusing on different objects. The robot arm can be used for delivering water or coffee. This method works really well.
With Motor Imagery, the subject imagines the movement of a limb and can control, for example, the movement of a car in a game. We use neurofeedback and record the brain activity to understand when we fail and when the method works well. We can also combine two different devices: a wheelchair and a robot hand.
Another paradigm is based on the so-called P300/N170 Event-Related Potential. We detect the peak of the event-related potential, approximately 300 ms after the stimulus, which is why it is called the P300 paradigm.
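The P300 is small relative to the background EEG, so it is usually recovered by averaging many stimulus-locked epochs: the noise cancels out while the time-locked response remains. A toy sketch, where the simulated epochs (bump amplitude, noise level, and latency) are all assumptions:

```python
import math
import random

random.seed(2)
fs = 250                                  # samples per second
n = int(0.6 * fs)                         # 600 ms epochs
latency = int(0.3 * fs)                   # P300 peak ~300 ms post-stimulus

def epoch():
    """One noisy stimulus-locked epoch with a Gaussian bump at ~300 ms."""
    return [math.exp(-((t - latency) ** 2) / (2 * (0.05 * fs) ** 2))
            + random.gauss(0, 1.0) for t in range(n)]

# Single trials are buried in noise; averaging many stimulus-locked
# epochs attenuates the noise and leaves the event-related potential.
trials = [epoch() for _ in range(100)]
avg = [sum(tr[t] for tr in trials) / len(trials) for t in range(n)]

peak_ms = max(range(n), key=lambda t: avg[t]) * 1000 // fs
print(peak_ms)   # peak latency close to 300 ms
```

An online P300 speller cannot average hundreds of trials, which is one reason single-trial detection with machine learning classifiers is an active research topic.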
I would like to state that BMI doesn't work perfectly.
BMI can be used for rehabilitation, especially after a stroke. If a subject correctly imagines a movement of a paralysed limb, a robot helps him make the movement. This way we can make rehabilitation faster. How does it work in real life? When we detect the pattern in the brain signal, the robot helps the paralysed limb perform a specific task. So BCI can be applied not only for entertainment; it can also help people recover.
This is another visual stimulus that can be applied to BCI. We have 8 different commands; we give a sequence of directions in which the person should move, and the person imagines the corresponding hand movement. This is one of the systems.
Another system uses faces. We record the brain activity and analyze a so-called topographic map of the brain activity.

To enhance the stimuli we try different types of objects. How do neutral faces and emotional faces influence the performance of the BCI? Affective BCI, on the basis of event-related potentials, uses faces that have emotional content. If we use emotional faces, we obtain better BCI performance.
Here is another idea: to enhance and improve the performance of BCI, we can use inverted faces. The cognitive process of recognizing inverted faces is much more complex, and the event-related potential is higher. The subject must count how many times he has looked in specific directions, or he must recognize whether the face is familiar to him.
All these methods cause some fatigue, so the subject is usually discouraged after using the system for longer than 20-30 minutes.
We developed another system based on emojis. We show artificial faces with emotions ranging from happy to unhappy; in this case the changes in the visual stimuli are minimal and fatigue is much lower. This is a more user-friendly system, and in this case we have only 6 commands. We compared it with another visual stimulus that uses only changes in light intensity. The stimuli with faces give the best performance.
BCI can potentially be used to control many devices, e.g. a robot arm or a wheelchair. We still need improvements in the design of paradigms and in developing good machine learning algorithms. We usually use imagined movements of the hands and feet and/or analyze brain responses to various visual or auditory stimuli. BCI algorithms are based on machine learning methods for feature extraction and classification. The performance of BCI can be improved by developing efficient neurofeedback. BCI needs to work online, so we need fast machine learning algorithms.
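As a minimal illustration of the classification stage, here is a nearest-centroid classifier, one of the simplest possible decoders for a two-class BCI (e.g. left- vs right-hand motor imagery). The feature vectors below are invented for the example; real features would be, say, band powers over several electrodes.

```python
def centroid(vectors):
    """Column-wise mean of a list of equal-length feature vectors."""
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def classify(x, centroids):
    """Assign x to the class whose mean feature vector is closest."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist2(x, centroids[label]))

# Hypothetical training features (e.g. mu-band power at two electrodes).
train = {
    "left":  [[0.9, 0.2], [1.1, 0.3], [1.0, 0.1]],
    "right": [[0.2, 1.0], [0.3, 0.9], [0.1, 1.1]],
}
centroids = {label: centroid(vs) for label, vs in train.items()}
print(classify([0.95, 0.15], centroids))   # -> left
```

Classifying a new feature vector costs only a couple of distance computations, which is why such simple linear decoders remain attractive for online BCI, where a decision is needed every few hundred milliseconds.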