MEG - An introduction

Doctor Richard Coppola explains how magnetoencephalography (MEG) is used to produce exquisite images of brain activity.

The cue at the top tells you which task you are supposed to be doing. This is the 2-back, meaning you have to remember what you saw two trials ago and respond to that. So, now you are seeing a ‘4’, so you remember that, but don’t press. Now you see a ‘4’ again – you’ve got to remember that. And then we’ll see another trial in a second – we’ve slowed it down a little bit just to be able to explain the task. When you see the next stimulus, which is a ‘1’, now you press ‘4’ because that’s what you saw two trials ago. You need to remember that you had ‘4-4-1’ – you can think of it as a sequence or you can think of it as a little buffer in your mind; people use different strategies for this. Now you see a ‘2’, but the stimulus that is two trials back is a ‘4’, so what you would be pressing would be the number ‘4’. By alternating between 0-back, 1-back, and 2-back, they control the memory load that you have to be using to do the task. With 0-back, there’s really no memory because you just press the button that corresponds to what you are seeing on the screen immediately – the only memory involved is in remembering the overall task. With 1-back the memory load goes up because you’ve got to remember the previous trial: you hold onto the number you’re currently seeing so that you’ll be able to press it in a few seconds when the next number comes. With 2-back you’ve got to remember two trials back, and there’s more updating and memory that has to go on. So the memory load is very different; it’s much harder. The key to how our analysis works is that the amount of visual stimulation that your brain is receiving and the amount of motor movement that you’re making, in terms of pressing buttons on the keypad, is the same regardless of whether the memory load is 0-back, 1-back, or 2-back. 
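The n-back rule just described can be sketched as a small function (a hypothetical illustration, not any actual task code; the name `n_back_targets` is made up):

```python
def n_back_targets(stimuli, n):
    """For each trial, the correct press under an n-back rule is the
    stimulus seen n trials earlier (no target for the first n trials)."""
    return [None] * n + stimuli[:-n] if n > 0 else list(stimuli)

# With the '4-4-1-2' sequence from the example, under the 2-back rule the
# correct press on the third trial is 4, and on the fourth trial 4 again.
seq = [4, 4, 1, 2]
print(n_back_targets(seq, 2))  # [None, None, 4, 4]
print(n_back_targets(seq, 0))  # [4, 4, 1, 2] – 0-back: press what you see
```

At 0-back the target is simply the current stimulus, which is why that condition carries essentially no memory load.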
So those things that have to be going on in your brain in order to recognize a ‘4’, in order to press a particular button on the control pad – those brain activities are identical regardless of what the memory load is. The only thing that changes between the conditions of the task is the memory load. So, if we do a contrast between the tasks, the brain activity for the stimulation and for the motor movements is identical, and those things cancel out in our analysis. The difference we’re left with is just those brain areas that are required in order to handle the increased load. That’s the key to this type of task: having what we call an active task and a control task. In this task, the active task is the 1-back or the 2-back. The control case is the 0-back, where memory is not involved. Later, when we see some of the data, I can show you what those contrasts look like in our subject populations. There is a bit of a practice effect. Our subjects are all practiced outside the scanner. We are not usually describing to the subject how to do the task while they’re in the scanner – they’ve received training outside the scanner, sitting at a computer station, where they are trained and run for a long enough sequence that they are used to the task, know what they are doing, and are comfortable. There are probably some long-term training effects, particularly on the 2-back, but not too much. It’s a hard task even if you’ve done it a lot, as I have, or some of our other subjects have – it’s still a challenging memory task. There are clearly some learning effects; we try to use enough practice so that we get past that stage of the learning and are looking at a more constant level of performance. We also have the subjects’ performance on the task – we know whether they are right or wrong, and what percentage correct they’re achieving. 
So we can track that, and that’ll give us an idea of whether there are changes in their performance over time. But for the most part, the subjects have seen this task several times outside the scanner and they’re quite used to it when they come inside. So in most of our experiments, we’re not interested in the learning effect or the practice effect; we’re really interested in the tonic effect of having to do this task. So, we try to get past these learning effects. For the most part, our initial question is what brain areas are active when you load them with this kind of memory. As long as the person is trying to do the task, we can see how that works. This is always a problem when looking at schizophrenic patients, who tend to have a much harder time doing these kinds of tasks. It complicates the analysis when we’re trying to understand whether the lack of performance is due to not marshaling enough brain activity or whether their brain is working in a different way. And those are the kinds of questions we are interested in – about how the brain is working. We like to be able to run in this situation with people sitting up because it’s a much more natural position. Our subjects and our patients find being in this scanner a lot easier than being in the MRI scanner, which is much more claustrophobic – you’re lying on your back inside a cylinder in a very noisy chamber. Right now, you’re actually hearing some noise because we have the door to the chamber open. But normally you wouldn’t even be hearing that, because we’d have the door to the chamber closed – the only thing you’d hear is maybe a little bit of the airflow, and occasionally we have a microphone in here so that we can talk back and forth with the subject to make sure they are comfortable… Usually when subjects are in here, we get them to do a number of different types of tasks. 
Some are more involved than others… This one involves working memory, which is one of our primary interests, particularly when looking at patients with schizophrenia. There’s a fixation point in the center and we ask you to keep your eyes fixed on that spot so that eye movements are not involved in the task. Eye movements add extra brain activity that we’re not interested in for this particular task. It’s pretty easy to recognize the number positions around there – the field of view is such that you can see the whole thing. If you keep your eyes on the center, that reduces the amount of movement. The cue for which task we’re doing is at the top, so the subject remembers that. The numbers move around in a random pattern and the response pad matches that – it matches the shape of the screen so we can see what’s happening. We’re going to get a different task that’s going to come up. There’s a short pause in between the blocks, and each block then has a new cue at the top to tell you what it is. We do alternating blocks – about 25 seconds of each version of this, and they alternate through a number of times, 8 or 10 times each task, so that we’ve got a series of blocks in each condition. We can use the contrast in the brain activity between those blocks as the way we structure our analysis. We like the idea of using alternating blocks rather than one very, very long block of each condition because people’s state changes, even over 10 or 20 seconds, in terms of engagement with the task and their level of alertness. By alternating, we can keep those things constant across the contrast between the conditions. Okay, before we put the screen up we’ve shuttered the projector so there’s no light in here, so it doesn’t shine in your eyes – this just rolls up out of the way. We can leave the response pad here. 
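The alternating block design described above can be sketched as a small scheduling function (a hypothetical sketch for illustration; the function name and tuple layout are assumptions, not the lab's actual software):

```python
def block_schedule(conditions, repeats, block_s=25):
    """Return (condition, start_s, end_s) tuples for an alternating
    block design: each ~25 s block is cued, and the cycle repeats."""
    schedule = []
    t = 0
    for _ in range(repeats):
        for cond in conditions:
            schedule.append((cond, t, t + block_s))
            t += block_s
    return schedule

# Two passes through the three conditions: six 25-second blocks in all.
for cond, start, end in block_schedule(["0-back", "1-back", "2-back"], repeats=2):
    print(f"{cond}: {start}-{end} s")
```

Because the conditions interleave, slow drifts in alertness affect all conditions roughly equally and therefore cancel in the between-condition contrast.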
I need to lower you down – everything is pneumatically driven because we try to keep electrical equipment and anything with metal out of the room, since all of those things would make noise that would interfere with the recording of the signal we’re looking for in your brain, which is very, very tiny. That’s why we need the shielded room – we need to be careful about all these things. So, that gets us down, and I’ll slide you forward a little bit, and then if you’re careful when you get up and don’t bang your head you can get out. The door is quite heavy and consists of two layers – they have to match up here. Because it’s quite heavy, there’s a pneumatic lock that pulls the locks together and seals the door in several ways – so now it’s closed up tight. What we’re trying to record is a very, very tiny magnetic signal that’s generated by the electrical activity of the brain. This signal is thousands to millions of times smaller than the earth’s magnetic field. So we have to use this shielded room, but we also have to use some very special sensor coils. They are little wound coils that sit around this gantry. The subject’s head is in here, and there are 275 coils arranged around the head that pick up these signals. Now, the trick to this is that these coils have to be bathed in liquid helium because they need to operate at superconducting temperatures. The signals in the coils are transduced by something known as a SQUID – a superconducting quantum interference device. This allows us to translate the tiny, tiny magnetic field that we’re recording into a signal that we can then capture and record with our computers, which can further process it into pictures of brain activity. But the coils are here, the SQUIDs themselves are up here, and all of this has to sit inside liquid helium. 
The reason this gantry is so big is that it has a very big reservoir of liquid helium, and then it has a vacuum chamber around it. It’s just like the thermos bottle you would use for storing coffee at home – except that this one has a very, very high vacuum, much more perfect in terms of being able to insulate and keep the liquid helium cold. The skin of it right here, at the top of the subject’s head, is about an inch away from a liquid that is about -269 degrees Celsius – only a few degrees above absolute zero. It’s extremely cold, yet you don’t feel any cold in here at all – the vacuum insulates and contains the liquid helium almost perfectly. Hi, I’m in our magnetically shielded room with our MEG system – MEG stands for magnetoencephalography. We run this room as a clean room – that’s why I’m standing here in my socks – because any small particles of dirt might bring a slight bit of magnetic material into the room, and anything like that makes enough noise to interfere with our recording. So, our subjects come in after we’ve removed all metal from their body – that includes taking off their belt, removing their wallet and money, and key-case, and anything like that. It is not a safety issue as it is in fMRI; for us it’s just a noise issue – we need to remove all those pieces of metal that would create noise that would interfere with our recording. We’re recording the magnetic signal from the brain. The activity of the neurons in the brain is an electrochemical event – action potentials, synaptic transmission, local field potentials – all of which underlie neural activity. These are electrical events, and all of these electrical events create a very tiny magnetic field that we can record from the outside. If the current flow across a few hundred thousand neurons has enough structure, that gives us a big enough signal that we can actually record it outside the brain. 
It takes a certain amount of neural tissue to be active, and it doesn’t have to be active for very long – that’s one of the differences between this kind of recording and, say, fMRI. Here, the electrical activity of the brain changes on a millisecond-by-millisecond basis according to what the person is seeing and feeling and doing and thinking. That’s one of the key reasons we’re interested in this particular technology: it allows us to track activity on a millisecond-by-millisecond basis. Now, our ability to spatially localize where that activity is coming from is not quite as good as some other technologies like fMRI, but, on the other hand, we have very fine temporal resolution. Indeed, we can actually trade those two things off. If we want very accurate spatial resolution, we can do that quite well too – but only under certain circumstances. If somebody is tapping your median nerve, a very focal, very small piece of the brain is active in response to that somatosensory sensation. If we average a few hundred of those sensations, we can actually localize that in the brain to within a couple of millimeters. But then we don’t have quite the same temporal resolution, because it has taken us a few hundred averages to do that. Although we could follow the waveform of that average on a millisecond-by-millisecond basis, over a longer dynamic range of time it would actually take a few tens of seconds to collect the averages that we need. So, it’s always a trade-off between spatial and temporal localization. But the key, again, is that we have the temporal resolution because we are tracking the electrical activity in the brain rather than the blood flow, which changes in response to that activity with a much slower time constant. It takes hundreds of milliseconds for the blood flow to reorganize itself in response to that kind of neural activity change. 
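The averaging logic just described can be illustrated with simulated data (an illustrative sketch, not the lab's pipeline; the signal amplitudes and trial counts are made-up numbers): averaging a few hundred stimulus-locked trials shrinks the background noise by roughly the square root of the trial count, letting a tiny evoked field emerge.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_samples = 300, 200
t = np.linspace(0, 0.2, n_samples)           # a 200 ms epoch
evoked = 1e-14 * np.sin(2 * np.pi * 20 * t)  # tiny simulated evoked field (tesla)
noise = 5e-14 * rng.standard_normal((n_trials, n_samples))
trials = evoked + noise                      # each trial: signal buried in noise

average = trials.mean(axis=0)
# Residual noise in the average shrinks ~1/sqrt(n_trials), so the ratio of
# leftover noise to single-trial noise is well below 1.
print(np.abs(average - evoked).std() / noise.std())
```

This is why the somatosensory response can be localized to within millimeters only after tens of seconds of data collection: the precision comes from the average, not any single trial.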
One of the tasks that we’re particularly interested in when we look at a person’s brain activity is a task that we call the N-back. This is a working memory task in which we ask people to look at symbols or letters or numbers on the screen and to press a keypad in response to what they see. If they respond to what they see immediately, it doesn’t require any memory. If we do something like what we call the 1-back, meaning that they have to respond to what they saw on the previous trial, this adds a memory component – we call it working memory because they have to hold it in memory for a few seconds, wait until the next stimulus comes up on the screen, and then respond to what they saw a few seconds earlier. That’s a memory load. If we make them remember from two trials back – what we call a 2-back – that increases the memory load. So by using a contrast between 0-back, 1-back, and 2-back we can change that memory load and use it as a way of understanding how brain activity changes as the memory load changes. It allows us to isolate those components that are due just to working memory rather than to visual stimulation or motor responses. We actually have 275 separate channels that we’re recording from. This looks a lot like an electroencephalogram in terms of what we would call brain waves. We can see them marching along the screen in real time. The subject is sitting in there at the moment, resting with their eyes closed, and this waveform here – this very regular, oscillatory pattern – is what we call resting alpha rhythm. That’s a relaxed brain rhythm you have when you’re sitting quietly, alert and awake, but not actually doing anything. There’s a regional pattern to this, because the sensors are distributed all over the head, and this changing pattern across the brain is the kind of thing we look at. 
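The alpha rhythm just described sits in a narrow frequency band, roughly 8-12 Hz. A minimal sketch of how one might quantify it from a single channel (illustrative only; the function name `band_power` and the 600 Hz sampling rate are assumptions, not details from the recording system):

```python
import numpy as np

def band_power(signal, fs, lo=8.0, hi=12.0):
    """Fraction of total spectral power in the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= lo) & (freqs <= hi)
    return spectrum[band].sum() / spectrum.sum()

fs = 600                             # Hz, an assumed sampling rate
t = np.arange(0, 2, 1 / fs)          # 2 s of data
alpha = np.sin(2 * np.pi * 10 * t)   # a pure 10 Hz oscillation, alpha-like
print(band_power(alpha, fs) > 0.9)   # True: power concentrated in the alpha band
```

Eyes-open versus eyes-closed recordings would show this fraction dropping and rising, which is the gross change visible in the raw traces.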
These are just the individual channels; what we do with our more advanced processing is take all of these channels together and apply what we call beamforming algorithms to actually locate within the brain where these signals are coming from. This is roughly what the raw data looks like while the subject is sitting in the scanner. We wouldn’t notice any particular changes here due to the task, although we can see some gross changes, like whether they have their eyes open or closed – that changes the alpha pattern – and we see some movements and things like that… We’re looking at a large collection of channels here – we have 275 separate sensors, separate coils that record. Each one of these traces is from one of those separate recording channels. At the moment, I’m only putting up a subset of the 275 – if I put all 275 on the screen, it’s a little too hard to follow. We’re now looking at a slightly different set of channels on the screen – this is a region that doesn’t reflect the alpha. This is a more activated pattern without the alpha rhythm, although now he might be getting a little drowsy and this might be a little burst of a sleepy signal – the subject has just been sitting in there without doing anything. And so, we can actually track and see changes like levels of alertness. And when people sleep, their brainwave pattern changes considerably. When we see eye blinks – when we see large excursions on the screen – those kinds of things we consider artefacts, because we’re not interested in the electrical activity from the muscles moving the eye; that’s not brain activity. So we either have to remove that artefact, or filter it out, or use signal processing methods that allow us to ignore it. But there are several different types – one is the movement of the eye, which gives a big signal from the eye muscle. 
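One common form of the artefact removal described above is simply rejecting data segments whose amplitude swing is too large to be brain signal (a minimal sketch under assumed names and thresholds; blinks and muscle bursts are far larger than neural fields):

```python
import numpy as np

def reject_artifacts(epochs, threshold):
    """epochs: array (n_epochs, n_samples). Keep only epochs whose
    peak-to-peak amplitude stays under the threshold."""
    ptp = epochs.max(axis=1) - epochs.min(axis=1)
    keep = ptp < threshold
    return epochs[keep], keep

clean_epoch = np.array([0.1, -0.1, 0.2, -0.2])
blink_epoch = np.array([0.1, 5.0, -4.0, 0.2])  # huge excursion, e.g. a blink
kept, mask = reject_artifacts(np.stack([clean_epoch, blink_epoch]), threshold=1.0)
print(mask)  # [ True False] – the blink-contaminated epoch is dropped
```

Filtering and more sophisticated signal-processing approaches serve the same goal: making sure what is analyzed is brain activity rather than muscle activity.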
If people clench their teeth, that makes a lot of noise in the temporalis muscles, and that will show up as very high frequency bursts of activity – again, it’s not from the brain, it’s from the muscle, and so it’s not something we’re interested in. Again, we have to either filter that out, remove those channels, or edit around them in some way. So we have to be very careful to make sure that we’re studying brain activity and not artefact activity. Now that we have recorded our data, we have to check a number of things. In this case, we’ve taken the recording of the continuous waveform that you saw on the screen before, and we’ve also recorded event markers that tell us where each of the stimuli occurred as the subject was seeing things on the screen. We also recorded the button presses that the subject made in response to each of these. So we now have a full recording that has all of the event marks in the data, and this is the kind of thing we’re going to process in order to localize where activity is. Now, there are a lot of intervening processing steps where we use our beamforming technique to localize the activity in the brain. This is the final processing step, and it’s a comparison with our data on the left screen – this is the localization of the difference in brain activity between the 2-back and the 0-back task, as we discussed before. We are using the contrast between those conditions to do our filtering. What that gives us is the areas in the brain that are active only because of the working memory task. This is for the 2-back in particular, and what we see are two regions in the frontal areas – this is dorsolateral prefrontal cortex, on both the left and right sides of the brain. We also see areas in the posterior parietal cortex, on the left; we don’t quite see the one on the right here because of the thresholds I’ve used for displaying this. 
What we see on the right is the same subjects doing the same tasks in the MRI scanner. This was a very early experiment we did to demonstrate that we were localizing the activity to the same places where we saw activity in the fMRI. It gave us a good validation that our ability to localize this data was quite good. In this case, this is a network of brain regions involved with working memory and attention, and it is fairly well understood classically. What we call the DLPFC, or dorsolateral prefrontal cortex, comprises executive regions in the brain that control what we consider the frontal lobe functions – executive functions that control how people organize their behavior, particularly if they need to do a complex task like this working memory organization. Attention – particularly spatial attention – and memory are coordinated in posterior parietal regions, and it is the interaction between the frontal and posterior regions that creates the network involved in handling the complexities of this task. So what we’re looking at here is a horizontal slice through the brain. This is the frontal part of the brain, toward the eyes, and this is what we call the posterior; the slice is about mid-level in terms of up and down – just a convenient slice in which to locate activity. The contrast here is what we call grey and white matter: the white matter is the connecting tissue, the axons that distribute signals between regions in the brain. The grey matter, which in this particular type of scan shows up as grey, is the actual cell bodies – this is the cortical mantle around the outside of the brain, and you can see the convolutions of the folds. One of the major applications of our working memory task is a clinical study looking at patients with schizophrenia, their well siblings, and normal controls. 
And we’re interested in whether there are differences in the brain activity of this patient group compared to normals. So this is a final slide that distills all of that data down into a comparison image that gives us a way of looking at and interpreting the differences. Here on the top we’re seeing a final image of a horizontal slice through the brain, as we saw before. Again we see the dorsolateral prefrontal cortex, both left and right, and the posterior parietal cortex, both left and right. The areas of activation shown here are a little bit larger because this is a group study with quite a large number of subjects in it, which broadens the area of significant activation. What we see on the bottom is the same map, thresholded the same way in terms of significant activity. By probands, we are referring to the patients with schizophrenia, and we see that they activate the posterior regions in a similar fashion but have much less activity in the dorsolateral prefrontal cortex areas. Their executive function areas are reduced in this particular type of task. We’re also examining the unaffected siblings – brothers or sisters of our patients with schizophrenia who do not have the disorder. As part of a large sibling study in the Clinical Brain Disorders Branch, we are bringing in these matched pairs and looking at them with a series of tasks, including the n-back. When these siblings do it, they activate the posterior regions similarly, and their degree of activation in the frontal regions seems to fall somewhere in between the normal control group and the patients with schizophrenia. These subjects do not have schizophrenia, but they nonetheless seem to show a brain activity pattern that is somewhat similar. So this is helping us to determine what the genetic underpinnings of the disorder might be. 
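The contrast logic used throughout – subtracting the control condition from the active condition so that shared visual and motor activity cancels – can be shown in a toy example (the numbers below are entirely made up for illustration):

```python
import numpy as np

# Four hypothetical brain regions. Visual and motor activity is common to
# both conditions; memory-related activity appears only in the 2-back.
visual_motor = np.array([5.0, 5.0, 0.0, 0.0])  # shared by 0-back and 2-back
memory_load = np.array([0.0, 0.0, 3.0, 2.0])   # extra activity under 2-back

zero_back = visual_motor                 # control condition
two_back = visual_motor + memory_load    # active condition

contrast = two_back - zero_back
print(contrast)  # [0. 0. 3. 2.] – only the memory-related regions survive
```

This is why matching the stimulation and button presses across conditions matters: anything not matched would survive the subtraction and masquerade as memory-related activity.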
One of our main interests in the application of magnetoencephalography has been the exquisite time resolution that I referred to before. What we really hope to be able to do is examine brain activity in real time while people are doing complex tasks, or even ordinary tasks. We can take that kind of activity flow across the cortex and actually look at how it moves among brain regions while people are doing tasks. In this case, we’ve recorded what we call gamma-band activity during an auditory experiment, and we can actually see how it moves across brain regions while this particular auditory task is going on. Here we’ve slowed down the activity – this represents about one and a half seconds of an auditory task that the subject was doing. We’ve slowed down the movement so that we can look at it and study it for a while. This is from a particular subject and a particular task – we’re not able to do this all the time, and we’re not yet able to fully quantify and understand it. But this is the place we’re moving to with this kind of technology.


  • ID: 2277
  • Source: DNALC.G2C

Related Content

2257. Neuroimaging

A review of neuroimaging-related content on Genes to Cognition Online.

  • ID: 2257
  • Source: G2C

2089. Comparing neuroimaging techniques

Professor Wayne Drevets discusses the advantages of using different neuroimaging techniques, such as MEG and PET, to solve particular research questions.

  • ID: 2089
  • Source: G2C

2266. Neuroimaging - review

Bridging the gap between descriptions of human behaviors and underlying neural events has been a dream of both psychologists and neuroscientists for quite some time.

  • ID: 2266
  • Source: G2C

1122. Schizophrenia - Future Research (2)

Dr. Sukhi Shergill discusses exciting possibilities for future research into schizophrenia.

  • ID: 1122
  • Source: G2C

1163. Neuroimaging and Schizophrenia

Professor Daniel Weinberger describes how neuroimaging techniques are being used to examine the brains of schizophrenic patients.

  • ID: 1163
  • Source: G2C

1152. Positron Emission Tomography (PET)

Professor Trevor Robbins discusses how positron emission tomography (PET) works to provide detailed images of brain structure and chemistry.

  • ID: 1152
  • Source: G2C

1442. Neuroimaging - Research

Neuroimaging facilitates the precise mapping of specific brain structures. It is important to remember, however, that specific behaviors or emotions rarely map to specific brain areas.

  • ID: 1442
  • Source: G2C

1153. Functional Magnetic Resonance Imaging (fMRI)

Professor Trevor Robbins describes functional magnetic resonance imaging (fMRI) technology, which is used to take detailed images of the functioning brain.

  • ID: 1153
  • Source: G2C

2187. Alzheimer's disease - imaging test

Professor Donna Wilcock discusses a new biological technique for diagnosing Alzheimer's disease using PET neuroimaging.

  • ID: 2187
  • Source: G2C

860. Anti-pain Systems

Why do women and men feel pain differently?

  • ID: 860
  • Source: G2C