Module 1: The Inverse Problem & EEG Localization

EEG Localization Models


The inverse problem and EEG source localization. This is a slightly complex and theoretical lecture, so please bear with me. Here we will consider EEG source localization techniques and the forward and inverse problems associated with them. The question is this: we are recording from the scalp, so which brain areas contribute to the EEG? Can we, just from the scalp electrodes, figure out which areas generate the different EEG and ERP components? That is what this lecture is about.

During the last two decades, our computers have become more and more powerful. Earlier neurophysiologists were hamstrung because their computational tools were very weak compared to what we have now. The computational power of an Android smartphone is many orders of magnitude greater than what the early neuroscientists had, and by early neuroscientists I mean the people from the 50s and 60s, who were already using whatever computers were available to them. So we are recording from the top of the head, where all these green dots are electrode positions. Using these, can we work backward and see which brain areas contribute to, or cause, this electrical activity on the scalp? That is EEG source localization.

So the forward problem. In engineering, physics, or applied mathematics, modeling involves predicting the effects or results for a set of known parameters. We know what is happening, we develop equations, we have a mathematical theory, and then we can predict what the data should look like. So, as on the right, you have estimated parameters, which feed into your mathematical model or physical theory, which then yields a prediction of the data. As long as you know all this and your theory is reasonably good, you have a unique solution and you can predict the data. This is forward modeling, or the forward problem.

So you have what is called a head model, which describes the scalp, the bone, the meninges, the cerebrospinal fluid, and the brain; you have the different sensors; and the model also describes the conductivity and geometry of the head. You assume equivalent dipoles at particular locations, and the model predicts how the scalp signals should look. Then with your magnetic or electrical (EEG) sensors you record what the signals actually look like. If there is a discrepancy, you change your model until your prediction fits what is recorded. That is the forward problem.

The inverse problem, on the other hand, is using the EEG data to infer which brain areas caused it. It runs backward: you have the measured data, your EEG, then you have your mathematical model or physical theory, and then you estimate the parameters causing the data. Unlike the forward problem, the inverse problem has no unique solution. In theory, an infinite number of parameter sets might explain the same measurement data. For example, you record between two electrodes and get a difference of 1. The underlying values could be anything: 5 minus 4, 10 minus 9, 8 minus 7, and so on. There are infinite possibilities, and that is why there is no unique solution. So what we do is constrain the solution using our knowledge of neuroanatomy and neurophysiology.
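The non-uniqueness is easy to see in miniature. Here is a small NumPy sketch (the leadfield values are random, purely for illustration): with fewer electrodes than candidate dipoles, two very different source vectors reproduce exactly the same scalp measurement.

```python
import numpy as np

# Toy setup: 2 electrodes, 4 candidate dipoles. The system y = L @ q is
# underdetermined, so many source patterns explain the same measurement.
rng = np.random.default_rng(0)
L = rng.standard_normal((2, 4))          # made-up leadfield (electrodes x dipoles)
y = np.array([1.0, -0.5])                # "measured" scalp potentials

q_min = np.linalg.pinv(L) @ y            # minimum-norm solution
null = np.linalg.svd(L)[2][2:].T         # basis of the null space of L
q_alt = q_min + null @ np.array([3.0, -2.0])  # a very different solution

print(np.allclose(L @ q_min, y))         # True
print(np.allclose(L @ q_alt, y))         # True: same data, different sources
```

Both source vectors fit the data perfectly, which is exactly why anatomical and physiological constraints are needed to pick one.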

For example, suppose we have an auditory evoked potential and two possible solutions, one in the auditory cortex and one in the occipital cortex. Since we know the anatomy and the physiology, we would discard the occipital dipole, the occipital generator, and focus on the auditory generator. So this is hardcore physics and numerical analysis, with help from neuroanatomy and neurophysiology. And the more electrodes you have, the more robust the solution: a 256-electrode dataset will give you a much better generator solution than one with just 4 or 5 electrodes.

Now, assumptions. There are a lot of assumptions we make, about the physics, the conductivity, and the structure, when figuring out the sources and generators of EEG signals in the brain. The first assumption is that the signal measured on the scalp is generated by pyramidal neurons, which are all oriented perpendicular to the cortical surface. You encountered this figure earlier: this is a pyramidal cell. One neuron generates a really small bit of activity, which cannot be picked up at the surface because all the neurons around it have their own activity and overwhelm it, especially if they are not synchronized.

But when a large group of neurons is simultaneously active, it can be modeled as an equivalent dipole, a vector with direction and magnitude, and the electrical activity is big enough to be picked up by an EEG electrode on the scalp. Consider this figure: you have the electrode up here, then the scalp, the bone, the meninges, the cerebrospinal fluid, and then the cortex with its pyramidal cells in layers 2, 3, 5, and 6. If the neurons are irregular, each doing its own thing independently of the others, you get these different electrical records from each neuron, and their sum, the EEG, is random. But if all of them are synchronized and fire at more or less the same time, then you see summed potentials reflecting the activity of all these cells firing synchronously. Because the apical dendrites of these pyramidal neurons are all parallel to each other, the EPSPs in these dendrites sum across space and time, and you get an approximate equivalent dipole that captures the activity of all these neurons.
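A quick way to convince yourself of the synchrony argument is to sum many oscillating sources numerically. In this NumPy sketch (the numbers are arbitrary, just for illustration), 10,000 in-phase "neurons" produce a summed signal roughly a hundred times larger than the same neurons with random phases.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1000)         # 1 s of "recording"
n_neurons = 10_000
f = 10.0                            # Hz, an alpha-band oscillation

# Random phases: individual contributions largely cancel
phases = rng.uniform(0, 2 * np.pi, n_neurons)
async_sum = np.sin(2 * np.pi * f * t[:, None] + phases).sum(axis=1)

# Identical phases: contributions add linearly
sync_sum = n_neurons * np.sin(2 * np.pi * f * t)

ratio = np.abs(async_sum).max() / np.abs(sync_sum).max()
print(ratio)   # on the order of 1/sqrt(n_neurons), i.e. roughly 0.01
```

Desynchronized activity grows only like the square root of the number of cells, while synchronized activity grows linearly, which is why only synchronous populations are visible at the scalp.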

So the assumption is that the neural sources, the dipoles, exist only on the cortical surface and are perpendicular to it. This imposes a constraint on the kinds of solutions you can have for the inverse problem: the solution's search space, as it were, reduces to the cortical surface; you do not have to go inside or outside it. These constraints are then translated into a source model. It is very important to keep in mind that the final solution depends on multiple parameters and on the accuracy of the choices made, because if the initial conditions or parameters are slightly different, the solution can be very different.

Some things to keep in mind: we make a lot of simplifying assumptions. We assume that current in the dendrites flows unidirectionally, but that is not true, because, if you remember, the action potential starts at the initial segment and moves forward (orthodromically) as well as backward, invading the soma, the cell body; these are backpropagating action potentials. Then there is a whole other population of cells we have not even talked about, the glial cells. These significantly outnumber the neurons and modulate neural activity; for example, they actively buffer extracellular potassium, and their resting membrane potential is near the equilibrium potential of potassium. Yet they are usually not included in the source models. Then conductivity: the different coverings of the brain, the scalp, the skull, the meninges, the CSF, all have different conductivities. The head is neither homogeneous nor isotropic.

So we have to put in numbers for each of these conductivity values. One big unknown is the conductivity of the human skull, which we still do not know, because whatever skulls we have are usually postmortem, and that is not the real situation; we cannot assume that a postmortem skull's conductivity is the same as a living person's. We are also not very clear about the conductivity of the tissue between the brain and the scalp surface. Most models assume the skull to be homogeneous, with the same conductivity in all directions, but this is yet to be proved. So if you have an impedance meter and access to a neurosurgeon, please measure the impedance of the skull and of the tissue between the brain and scalp surface in living humans.

Source localization in EEG, solving this inverse problem, has been one of the primary goals of the field. What it means is identifying where in the brain a particular type of activity originates, based on the surface EEG. This is particularly useful for conditions like epilepsy. In epilepsy you get these fast, rhythmic, jerky movements, which imply hypersynchronization of the pyramidal neurons, all firing together in synchrony. It is very useful to localize which part of the brain the epilepsy originates from, because the treatment depends on that: for example, if it originates in the temporal lobe, it is temporal lobe epilepsy, and the treatment is specific. Treatment can vary for different kinds of epilepsy.

Now let us get to the nitty-gritty. How do we construct a realistic head model? To identify the source of an EEG you need three things: the source model, the head model, and the EEG data. There are different compartments which have to be individually modeled: the skin; the bone, compact bone as opposed to spongy bone (compact bone forms the outer and inner surfaces of the skull, with spongy bone in between); the gray matter, with all its neurons and glial cells; the white matter, which is made up mostly of axons; and the cerebrospinal fluid. All of these have to be put in to do your source modeling.

Let us consider the source model first. The source model gives the three-dimensional positions, that is, the x, y, z coordinates, of the dipoles on the cortical surface; this would be over here, over the gray matter. This is also referred to as the source space, and it is assumed that the EEG signals are generated by sources which can be approximated as dipoles, for the reasons mentioned earlier.

Then we come to the head model. We need to describe how the electrical currents from the sources flow through the head and finally end up as scalp EEG. This depends on two factors: the geometry of the head, and the conductivity of the various tissues of the head, the skin, compact bone, spongy bone, and so on. The geometry of the head can be obtained by structural magnetic resonance imaging, not fMRI, just plain anatomical MRI. For the tissue conductivities, there is a huge literature available on values for the various tissues, but as mentioned before, skull conductivity is the biggest unknown: we do not have any clear experimental data for it.
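For concreteness, here is the kind of conductivity table a head model needs. The numbers below are ballpark figures commonly cited in the literature (in S/m), not authoritative constants, and, as stressed above, the skull value in particular is uncertain.

```python
# Illustrative tissue conductivities (S/m) of the kind used in EEG head
# models. These are ballpark literature values, not measured ground truth;
# skull conductivity in particular varies widely across studies.
conductivity = {
    "scalp": 0.33,
    "skull": 0.006,   # reported values span roughly 0.004-0.015 S/m
    "csf":   1.79,
    "brain": 0.33,
}

# The skull is the bottleneck: it conducts ~50x worse than scalp or brain,
# which is what blurs and attenuates the potentials seen at the scalp.
print(conductivity["scalp"] / conductivity["skull"])  # → 55.0
```

With these numbers, currents spread preferentially through the highly conductive CSF and are smeared by the resistive skull, which is why scalp EEG is spatially blurred.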

In this session, we shall conclude the lecture on the inverse problem and EEG source localization techniques. We were talking about models, so let us continue with the head model. We need the MRI image of the subject, which has its own 3D coordinate system, and we need to align the EEG electrodes with the subject's MRI in a common coordinate system to solve these problems. If you look at the picture on the right, the blue dots are the fiducial markers for the MRI: you have the nasion, on the nose; the inion, which you cannot see here, at the back; and two points over the pre-auricular ridges, near the ears. The red dots are the EEG electrodes. We have to register the two together; otherwise, if the EEG electrodes are not aligned with the MRI image, we will not get the right solution. The physiology has to be aligned with the anatomy.

As I mentioned, this procedure is done using fiducial markers, which are anatomical locations. This point over here is the nasion; a similar point at the back, where there is a ridge, is called the inion; and then you have the two pre-auricular ridges. These are the anatomical landmarks for the MRI. The red dots are the EEG electrodes, usually placed according to the 10-20 system. Once we have this, we can model the measured EEG as y = L(r) q + n, where q is the time course of the unknown dipole, r is the dipole location, L is the leadfield matrix capturing how the currents from the dipoles are transformed into EEG recorded at the scalp electrodes, and n is the measurement noise from the baseline recording. Essentially, we have to estimate q given L, y, and n.
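To make y = Lq + n concrete, here is a toy NumPy sketch of a regularized minimum-norm estimate, the family of linear inverse solutions that methods like LORETA build on. The leadfield here is random, standing in for one computed from a real head model, and the dipole index is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
n_elec, n_dip, n_t = 32, 200, 100

L = rng.standard_normal((n_elec, n_dip))   # stand-in leadfield matrix
q_true = np.zeros((n_dip, n_t))
q_true[17] = np.sin(np.linspace(0, 4 * np.pi, n_t))  # one active dipole
noise = 0.1 * rng.standard_normal((n_elec, n_t))     # measurement noise n
y = L @ q_true + noise                               # "recorded" EEG

# Regularized minimum-norm inverse: q_hat = L^T (L L^T + lam*I)^(-1) y
lam = 1.0
q_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_elec), y)

strength = np.linalg.norm(q_hat, axis=1)   # estimated activity per dipole
print(np.argmax(strength))                 # should peak at dipole 17
```

Note that q_hat is smeared across many dipoles even though only one was truly active; that smearing is the low spatial resolution discussed later for LORETA.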

What are the sources of error? If any of the assumptions of the inverse problem are off, or you have not taken them into account, you will have a huge error. The head model also has to be accurate. Usually, neuroscientists use a 4-layered model: the brain, the skull, the cerebrospinal fluid, and the skin. This is a good starting point; 3 layers can also work but is not as good, while 6 layers is good but computationally very intensive: the computer can keep crunching on it for days. The other thing is that we have to recognize the different tissues in the MRI: which exactly is the skull, which is the bone, which is the skin, how thick is the CSF layer, and so on. This is nontrivial; you need to be an expert, and it also takes a long time.

Source modeling software: usually nobody writes it from scratch, because it requires really good numerical analysis and coding. There are open-source academic packages and commercial packages. Among the academic packages we have some experience with at St. John's is EEGLAB, from the University of California, San Diego. It has stood the test of time, it is very well supported, and I advise any newbie in this field to check it out first. There is also newer software which integrates both the MRI and the EEG data. One is Brainstorm, which we have begun playing with at St. John's. Then there is one called Cartool, and there is FieldTrip, a set of packages which runs on MATLAB, as does EEGLAB. Then there is low-resolution electromagnetic tomography (LORETA), which we will consider in a minute. As far as commercial packages are concerned, BESA is the most popular; it is a little expensive but is considered very good in its field. There is also CURRY, from Neuroscan, which has a source modeling module. I am not personally familiar with the rest of the software, but you can check them out.

So where to begin? You have a whole bunch of data: the EEG data, and typically not one or two electrodes but a minimum of 64 channels, or 128, or 256. Then, after doing all the modeling, you have to superimpose this on MRI datasets, which are very big to begin with; EEG datasets are much smaller than MRI data, which runs into gigabytes. And you need a computer that can handle all this: lots of RAM, a fast processor, and lots of hard drive space, because it fills up quickly.

One suggestion is to start with Michel and Brunet's 2019 paper, shown over here, which describes Cartool, open-source software for source imaging. This review is good because it describes in detail the different steps needed to estimate the distribution of the underlying neuronal sources, all the dipoles, from the EEG. It also explains the logic underlying each step and the requirements that need to be fulfilled to perform them. All these steps are implemented in the standalone software Cartool. So if you are interested in source imaging and modeling, please check it out. You do not even have to collect data: there are demo datasets which come with these programs, and many groups all over the world have put real data on the internet, recorded from real subjects, not simulated, which you are free to use and analyze. The only condition is that you acknowledge them in your papers and presentations.

Coming to LORETA: LORETA is low-resolution brain electromagnetic tomography, and it calculates the current distribution throughout the brain volume, not just on the surface. It has advantages: it provides a basic localization solution with easy-to-follow procedures; it is much simpler than Cartool; and it is open-source, so you can download it and use it. It can localize both superficial and deep sources, unlike earlier solutions, because EEG per se only sees down to about 2 centimeters below the scalp; beyond that you cannot really say anything unless you use MEG, in which case you get deep sources but not the superficial ones. One way to have the best of both worlds is to do simultaneous recordings of both EEG and MEG in a given subject and integrate the data. That is difficult, because while EEG is relatively freely available, MEG is not; there are very few sites, and I am not sure there is any place in India which does MEG on a routine basis. The disadvantage of LORETA is that its spatial resolution is low, and that makes source localization difficult.

The original author of LORETA, Professor Pascual-Marqui, came up with a variation called sLORETA. This is based on standardization of the estimated current density for source localization. It is claimed that sLORETA estimates the current sources without any error, that is, that it provides an exact solution with zero localization error. Again, the disadvantage is that it has low resolution, and it fails to localize multiple sources, especially when they overlap. The LORETA group then came up with another variation called exact LORETA, or eLORETA, a method which focuses on deep sources with reduced localization error. It is based on LORETA but uses different numerical procedures. It is supposed to be a reliable localization method with no localization bias, and it is claimed to provide zero-error localization even under non-ideal conditions, that is, in the presence of noise. Several studies have shown that eLORETA seems to provide better results than sLORETA, but your mileage may vary.

Now, implementation. One of our students at IISc, now at IIT Delhi, is studying the mu rhythm in spinal cord injury patients for developing BCI assistive devices. We shall consider the mu rhythm in greater detail in the BCI lectures, but essentially it is a rhythm in the alpha band which appears over the motor areas, centrally on both sides. When you move, or even when you think of moving, it gets disrupted. These are the electrodes where the mu rhythm is most prominent, over the sensorimotor areas, and this is the typical mu rhythm: you see this regular waveform, and it occurs in the alpha band.

It could easily be mistaken for alpha waves, but it is in the wrong area for alpha. Alpha waves are typically most prominent over the occiput; even when they occur in all areas, they are most prominent occipitally. The mu rhythm, on the other hand, is localized to the central and parietal areas. Now, the interesting thing about the mu rhythm is that it is a kind of idling rhythm, like your car sitting in neutral. The moment you move your hand, it gets disrupted; and even if you only think of moving your hand, without actually moving it, it gets disrupted. That is what makes the mu rhythm interesting, and it can possibly be used for rehabilitation in stroke patients: train them to keep imagining movement, and because the brain is plastic, it keeps changing. They say your brain is not the same today as it was yesterday; it changes all the time. So this could possibly help in stroke rehabilitation.
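This disruption, event-related desynchronization (ERD), can be quantified simply as a drop in 8-13 Hz band power at a central electrode. The sketch below uses a synthetic C3-like trace (all numbers invented for illustration) whose mu activity switches off at the moment of imagined movement.

```python
import numpy as np

fs = 250.0                               # sampling rate, Hz
t = np.arange(0, 4, 1 / fs)              # one 4 s trial
rng = np.random.default_rng(3)

# Synthetic "C3" trace: a 10 Hz mu rhythm while resting (t < 2 s) that
# desynchronizes at imagery onset (t = 2 s), plus broadband noise.
sig = np.sin(2 * np.pi * 10 * t) * (t < 2.0) + 0.3 * rng.standard_normal(t.size)

def band_power(x, fs, lo, hi):
    """Total spectral power of x within the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / x.size
    return psd[(freqs >= lo) & (freqs <= hi)].sum()

rest = band_power(sig[t < 2.0], fs, 8, 13)       # mu present
imagery = band_power(sig[t >= 2.0], fs, 8, 13)   # mu suppressed
erd_pct = 100 * (imagery - rest) / rest          # negative = desynchronization
print(erd_pct < 0)                               # → True
```

In a real BCI, this rest-versus-imagery band-power contrast, computed over many trials and electrodes, is what drives the classifier.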

So what did Mansi and her group find? They found that the premotor cortex, the primary motor cortex, the postcentral gyrus, and the posterior parietal cortex were activated compared to other areas, and these areas are considered the neurophysiological substrates of motor imagery. Consider her data. In A, the subject imagines moving the left hand; the left hand is represented contralaterally, on the opposite side, so you see a lot of activity over here. In B, the right hand, you see some activity here but also on the left motor cortex. C is moving the legs, and those are the leg areas; the hand areas are below, the leg areas on top. D is moving the tongue. Each of these specific movements activates specific areas of the motor cortex. She did not collect the data herself; she used a motor imagery database from this website. So please download it and you can run eLORETA yourself. You can either use the LORETA software per se, or use FieldTrip, which runs on MATLAB and has eLORETA implemented in it.

The FieldTrip implementation of eLORETA: FieldTrip is a MATLAB toolbox for source modeling. It contains a set of separate high-level functions, and visualization is via MATLAB; it does not have a GUI per se. If you go to their website, please see the references to the review papers and the teaching material to get you started. By far the best way to get hands-on experience is to go through the tutorials, like with everything else. Also, you should not do it alone, because you might make mistakes, and it is better to get course-corrected right at the beginning; so work with a group, or with other people who have used FieldTrip in the past. That said, it is free, and as long as you have MATLAB, you can run it and get some good results.

Now let us consider a commercial software tool. BESA, the brain electrical source analysis software, is the proprietary package most commonly seen in cognitive neurophysiology labs. It was created by two German scientists, Michael Scherg and Patrick Berg. It performs brain electrical source analysis to estimate the parameters of the intracranial sources of ERPs. It may not provide an exact solution; the errors may vary from very small to on the order of 2 to 3 centimeters, so it may miss sources. But that is part of the inverse problem, not a flaw of the software: the inverse problem is such that you can never be sure you have the exact source of what is recorded on the scalp. In the figure on the right you see the ERPs in the lower panel, and on top you have the computed brain generators.

But given all its disadvantages, BESA still provides a reasonable estimate of the spatial and temporal characteristics of ERP generators, which is better than estimating ERP generators from surface topography alone. It has good math behind it. If you look at the figure on the right, you see the BESA brain generators: there are different spheres, each with a little rod sticking out of it, so each is a vector showing both direction and amplitude. And best of all, you can superimpose these data on fMRI data.

Now, fMRI is different from structural MRI: it looks at the activation of areas in the brain, estimated by the BOLD (blood-oxygen-level-dependent) signal. The assumption is that active areas use more energy, so the blood supply to them increases. Using EEG, fMRI, and MEG together is probably the most powerful way to figure out sources and generators in the brain. This has been just a brief introduction to source localization and modeling. This is neuro-biophysics; there is a lot of physics and numerical analysis involved, so I strongly encourage you to read widely, using the references given here and available on the internet. And the best way is to just get started: if you have MATLAB in your institute or lab, please download and use EEGLAB, ERPLAB, FieldTrip, and Cartool, and you are on your way.