My question is: why do we use P and N for naming positive and negative EEG components (for instance, N170 or P100), but for MEG data we use M in both cases, i.e., M170 and M100 in this example?
EEG and MEG measure two different but related signals. While EEG records electric activity from the scalp, MEG records changes in the magnetic field induced by electric activity in the brain. Because of this, MEG is largely insensitive to sources oriented radially, but can pick up sources oriented tangentially to the cortex. Moreover, EEG is sensitive to volume currents (i.e., the spread of currents across the brain), while MEG is much less so.
Because of this distinction, while two components can have the same latency - let's say the face-specific N170 and the M170 - the sources generating them may differ. Naming the components differently stresses this important distinction.
For reference, I recommend the following articles and books:
- Baillet, S., Mosher, J. C., & Leahy, R. M. (2001). Electromagnetic brain mapping. Signal Processing Magazine, IEEE, 18(6), 14-30.
- Hansen, P., Kringelbach, M., & Salmelin, R. (2010). MEG: An Introduction to Methods. Oxford University Press, USA.
Apart from the distinction between recording techniques (by talking about the N170, you are referring to an ERP), here is an explanation:
The naming convention refers more to neurophysiology than to waveform (component) direction.
The standard tool for modeling EEG/MEG activity is a dipole. A dipole consists of a pair of negative and positive charges, and it generates both an electric and a magnetic field (see figure). You can see in the electric-field panel that the positive activity is ahead of the positive charge, here the head of the arrow (by contrast, the magnetic field is oriented to the side of the dipole). Therefore, a positivity in an ERP signal (the P300, for example) corresponds to an increase of the dipole activity measured ahead of the positive charge (and, conversely, a negativity for the N400).
Here, we can say the electrical positive P300 component (or waveform) is neurophysiologically a positive electrical activity.
By contrast, the main magnetic activity does not follow the main direction of the dipole but is perpendicular to it. Therefore, we cannot relate an increase in the magnetic field to the positive/negative direction of the dipole.
Here, a magnetic N170 would not be grounded in neurophysiology; the N would only indicate a negative-going measure.
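The dipole argument above can be checked numerically. The sketch below (plain NumPy; physical constants folded into an arbitrary positive factor, function name invented for illustration) evaluates the ideal-dipole potential, which is positive ahead of the positive pole, negative behind it, and zero to the side:

```python
import numpy as np

# Potential of an ideal dipole with moment p at the origin, evaluated at r.
# Constants are folded into an arbitrary positive factor, so only the sign
# and relative magnitude are meaningful here.
def dipole_potential(p, r):
    r = np.asarray(r, dtype=float)
    return np.dot(p, r) / np.linalg.norm(r) ** 3  # V(r) ~ p.r / |r|^3

p = np.array([0.0, 0.0, 1.0])             # dipole pointing along +z

ahead  = dipole_potential(p, [0, 0, 2])   # in front of the positive pole
behind = dipole_potential(p, [0, 0, -2])  # behind the negative pole
side   = dipole_potential(p, [2, 0, 0])   # perpendicular to the dipole axis

print(ahead > 0, behind < 0, abs(side) < 1e-12)  # True True True
```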
Multivariate pattern analysis of MEG and EEG: a comparison of representational structure in time and space
Multivariate pattern analysis of magnetoencephalography (MEG) and electroencephalography (EEG) data can reveal the rapid neural dynamics underlying cognition. However, MEG and EEG have systematic differences in sampling neural activity. This raises the question of the degree to which such measurement differences consistently bias the results of multivariate analysis applied to MEG and EEG activation patterns. To investigate, we conducted a concurrent MEG/EEG study while participants viewed images of everyday objects. We applied multivariate classification analyses to MEG and EEG data, and compared the resulting time courses to each other, and to fMRI data for an independent evaluation in space. We found that both MEG and EEG revealed the millisecond spatio-temporal dynamics of visual processing with largely equivalent results. Beyond yielding convergent results, we found that MEG and EEG also captured partly unique aspects of visual representations. Those unique components emerged earlier in time for MEG than for EEG. Identifying the sources of those unique components with fMRI, we found the locus for both MEG and EEG in high-level visual cortex, and in addition for MEG in early visual cortex. Together, our results show that multivariate analyses of MEG and EEG data offer a convergent and complementary view on neural processing, and motivate the wider adoption of these methods in both MEG and EEG research.
Using this approach, you can read all data from the file into memory, apply filters, and subsequently cut the data into interesting segments.
The following steps are taken to read data, to apply filters and to reference the data (in case of EEG), and optionally to select interesting segments of data around events or triggers or by cutting the continuous data into convenient constant-length segments.
- read the data for the EEG channels using ft_preprocessing, apply a filter and re-reference to linked mastoids
- read the data for the horizontal and vertical EOG channels using ft_preprocessing, and compute the horizontal and vertical bipolar EOG derivations
- combine the EEG and EOG into a single data representation using ft_appenddata
- determine interesting pieces of data based on the trigger events using ft_definetrial
- segment the continuous data into trials using ft_redefinetrial
- segment the continuous data into one-second pieces using ft_redefinetrial
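The last step, cutting continuous data into constant-length segments, can be sketched in plain NumPy. This is only a conceptual illustration, not FieldTrip's implementation; the function name and array shapes are invented:

```python
import numpy as np

def segment_continuous(data, fs, seg_len_s):
    """Cut a (channels x samples) array into constant-length segments.

    Conceptually mirrors cutting continuous data into one-second pieces;
    trailing samples that do not fill a whole segment are dropped.
    """
    n_chan, n_samp = data.shape
    seg_len = int(round(fs * seg_len_s))
    n_seg = n_samp // seg_len
    trimmed = data[:, : n_seg * seg_len]
    # reshape to (n_seg, n_chan, seg_len): one entry per segment
    return trimmed.reshape(n_chan, n_seg, seg_len).transpose(1, 0, 2)

fs = 1000.0                        # hypothetical sampling rate, Hz
data = np.random.randn(4, 3500)    # 4 channels, 3.5 s of fake data
segments = segment_continuous(data, fs, 1.0)
print(segments.shape)              # (3, 4, 1000)
```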
Data were collected between 2012 and 2017 in Rennes (France) during two different experiments. The first dataset consists of naming and spelling the names of visually presented objects (Fig. 1). The second dataset includes resting state, visual/auditory naming and visual working memory tasks (Fig. 2). The same equipment was used in both datasets and recordings were performed in the same place (Rennes University Hospital Center). An HD-EEG system (EGI, Electrical Geodesics, Inc., 256 electrodes) was used to record brain activity with a sampling rate of 1 kHz, and electrode impedances were kept below 50 kΩ. The participants were different in the two studies. They provided their written informed consent to participate and completed inclusion/exclusion criteria questionnaires (summarized in Table 1). The participants were seated in a medical armchair linked to the Faraday structure of the room. The room was lit by natural light attenuated by blinds. Participants' heads were located approximately 1 m in front of the 17″ screen. Images were presented centrally as black drawings on a white background without any size modification (10 cm × 10 cm). This setup corresponds to a viewing angle of 2.86 degrees of maximum eccentricity from the fixation point, so that the entire image falls within the participant's foveal vision. Sounds were delivered through 50 W Logitech speakers without any possibility of audio isolation.
Dataset 1, (a) Participants (N = 23). (b) HD-EEG system used in the experiment. (c) A representative schema of the dataset 1 collection procedure. (d) Experimental design for pictures naming and spelling.
Dataset 2. (a) Participants (N = 20), (b) HD-EEG, (c) Four tasks (visual naming, auditory naming, memory task and resting state). (d) Experimental design for auditory and memory tasks.
Participants. Twenty-three right-handed healthy volunteers participated: 12 females aged 19–40 years (mean age 28 years) and 11 males aged 19–33 years (mean age 23 years). This experiment was approved by an independent ethics committee and authorized by the French institutional review board (IRB): “Comité Consultatif de Protection des Personnes dans la Recherche Biomédicale Ouest V” (CCPPRB-Ouest V). This study was registered under the name “conneXion” with the agreement number 2012-A01227-36. Its promoter was the Rennes University Hospital.
Experimental procedure and design. The experiment begins with the verification of the inclusion/exclusion criteria. The participants read the information notice and the consent form. Those who signed to participate then completed two questionnaires. The first questionnaire collects information related to the inclusion/exclusion criteria and personal information (name, age, sex, address), while the second determines the manual laterality of the participants using the Edinburgh manual laterality measurement scale adapted in French (Edinburgh Handedness Inventory; Oldfield, 1971 23 ). Afterwards, the acquisition procedure is explained to the participant (see Fig. 1c). The experimental paradigm includes two conditions: the first corresponds to the picture naming task and the second to the picture spelling task. The spelling task always follows the naming task, and its instruction was not given before the naming task was completed, to avoid any reminiscence of the words' orthographic structure.
Both the naming and spelling tasks are divided into two runs of 74 stimuli to avoid tiring the participants. Each run contains balanced numbers of animals and objects as well as long and short words. Pictures are presented on a screen using a computer, and the experimental paradigm is run using E-Prime® Psychology Software Tools © 24 . The responses produced by the participants are collected via a Logitech® microphone and analyzed to detect speech onsets using Praat v5.3.13 (University of Amsterdam, 1012VT Amsterdam, The Netherlands) 25 .
Task 1: Picture naming. Participants were asked to name, at a normal pace, 148 pictures displayed on a screen, divided into two runs of about 8 min each. The images were selected from a database of 400 pictures standardized for French 26 . The presented pictures represent two categories: animals (74) and tools (74). Picture name agreement is very high, leading to few mistakes when retrieving the words associated with the images. The word length is controlled such that the paradigm includes 74 short words (37 animals and 37 tools) of 3 to 5 letters and 74 long words (37 animals and 37 tools) of 7 to 10 letters. Other psycholinguistic parameters were controlled to obtain equivalent datasets (name agreement, image agreement, age of acquisition, as well as linguistic parameters such as oral frequency, written frequency, and numbers of letters, phonemes, syllables and morphemes); see Table 2.
All pictures were shown as black drawings on a white background. The order of presentation within a run of 74 stimuli was fully randomized across participants. Naming latencies were determined as the time between picture onset and the beginning of the vocalization recorded by the system. EEG triggers for images that were incorrectly recognized or not recognized at all were discarded, so that they do not appear in the dataset.
Task 2: Picture spelling. In this task, participants were asked to spell the names of the same images used in the picture naming task. The instruction about the spelling task was given after the completion of the naming task to keep the participant naïve about our interest in spelling. This task also always followed the picture naming task to avoid re-activation of the spelling of the images' names. Two sessions of visual spelling were performed for each participant, each lasting about 9 to 10 min. This task is very similar to the previous one, except that it involves the orthography of the words that correspond to the named images. One can easily recognize a drawing and name it without activating the orthography of the word; in spelling, recovering the exact orthographic structure with its sequence of letters is an additional step that brings spelling close to writing. This task includes the same 148 images, selected from the database of 400 pictures standardized for French 25 , that were used for the naming task.
These data have not been extensively analyzed, especially from a psycholinguistic angle. They have also never been compared with the picture naming task.
Participants. Twenty right-handed healthy volunteers (10 females, 10 males, mean age 23 years; see Fig. 2a) participated in this experiment. As for dataset 1, all participants provided written informed consent to participate in this study, which was approved by an independent ethics committee and authorized by the IRB (CCPPRB-Ouest V). The registered study name was “Braingraph” and the study agreement number was 2014-A01461-46. Its promoter was again the Rennes University Hospital.
Experimental procedure and design. The experiment is composed of four tasks: resting state, picture naming, auditory naming and working memory. The first three tasks are distributed in a counterbalanced way within the group. The memory task always occurred at the end of the session because 40 of its 80 displayed images were taken from the previous naming task. All responses for each trial within the tasks are available with the data. Triggers corresponding to false responses or no responses are simply discarded from the dataset.
Resting state EEG. The participants were asked to relax for 10 minutes with their eyes open during the recordings. Participants faced the computer screen, which displayed a fixation cross. They were told not to fixate on the cross but to keep their eyes in its vicinity, so that the resting run could also be used as a control for visual or auditory naming.
Task 1: Picture naming. The naming task of this second dataset contains 40 unrecognizable scrambled objects in addition to 80 meaningful pictures taken from the Alario and Ferrand database 25 (see Fig. 3 for typical examples of the presented images). Scrambled pictures were generated from the Alario and Ferrand database by scrambling the drawings' lines, and participants were instructed to say nothing when viewing them. Pictures were displayed on a screen as black drawings on a white background. Pictures were selected to obtain a high name agreement (avg = 96.86%, min = 86%, max = 100%); see Table 3 for the image parameters of dataset 2.
Typical examples of the presented images. All images can be found in the Alario and Ferrand database 26 .
Dynamic causal modeling for EEG and MEG
We present a review of dynamic causal modeling (DCM) for magneto- and electroencephalography (M/EEG) data. DCM is based on a spatiotemporal model, where the temporal component is formulated in terms of neurobiologically plausible dynamics. Following an intuitive description of the model, we discuss six recent studies, which use DCM to analyze M/EEG and local field potentials. These studies illustrate how DCM can be used to analyze evoked responses (average response in time), induced responses (average response in time-frequency), and steady-state responses (average response in frequency). Bayesian model comparison plays a critical role in these analyses, by allowing one to compare equally plausible models in terms of their model evidence. This approach might be very useful in M/EEG research where correlations among spatial and neuronal model parameter estimates can cause uncertainty about which model best explains the data. Bayesian model comparison resolves these uncertainties in a principled and formal way. We suggest that DCM and Bayesian model comparison provide a useful way to test hypotheses about distributed processing in the brain, using electromagnetic data.
Figure captions (truncated in the source):
- A graphical overview of the generative model for DCM for evoked responses. Left: …
- Mismatch negativity: model specification. The sources comprising the network are connected with forward …
- Mismatch negativity: Bayesian model selection among DCMs for the three models, F, B, …
- Evidence for feedback loops: Bayesian model comparison across subjects. Comparison of the model …
- DCM for induced responses: results for nonlinear coupling between sources in a four-source …
- Steady-state response study using local field potentials and a single-source DCM, results: …
How are the electrodes related to each other?
The EEG records brain waves using equipment called amplifiers and by looking at the information from the electrodes in different combinations. These combinations of electrodes are called 'montages'.
- In bipolar montages, consecutive pairs of electrodes are linked by connecting the electrode input 2 of one channel to input 1 of the subsequent channel, so that adjacent channels have one electrode in common. The bipolar chains of electrodes may be connected going from front to back (longitudinal) or from left to right (transverse).
- Another type of montage is the referential montage. In this type, various electrodes are connected to input 1 of each amplifier and a reference electrode is connected to input 2 of each amplifier. Ideally, inactive electrodes (ones that are uninvolved in the electrical field being studied) are chosen as references.
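The two montage types can be illustrated with a small NumPy sketch; the channel names, reference choice, and data here are all invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
# Fake recording: 4 scalp channels plus a reference channel, 1000 samples.
labels = ["Fp1", "F3", "C3", "P3"]
x = rng.standard_normal((4, 1000))
ref = rng.standard_normal(1000)       # e.g., an ear/mastoid reference channel

# Longitudinal bipolar chain: each derivation = electrode_i - electrode_{i+1},
# so adjacent channels share one electrode.
bipolar = x[:-1] - x[1:]
bipolar_labels = [f"{a}-{b}" for a, b in zip(labels[:-1], labels[1:])]

# Referential montage: every electrode against the common reference.
referential = x - ref

print(bipolar_labels)                     # ['Fp1-F3', 'F3-C3', 'C3-P3']
print(bipolar.shape, referential.shape)   # (3, 1000) (4, 1000)
```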
3. Advanced Examples
Having described the standard MNE-Python workflow for source localization, we will now present some more advanced examples of data processing. Some of these examples provide alternative options for preprocessing and source localization.
3.1. Denoising with Independent Component Analysis (ICA)
In addition to SSP, MNE supports identifying artifacts and latent components using temporal ICA. This method constitutes a latent variable model that estimates statistically independent sources based on distribution criteria such as kurtosis or skewness. When applied to M/EEG data, artifacts can be removed by zeroing out the related independent components before inverse transforming the latent sources back into the measurement space. The ICA algorithm currently supported by MNE-Python is FastICA (Hyvärinen and Oja, 2000) as implemented in Scikit-Learn (Pedregosa et al., 2011). Here, MNE-Python has added a domain-specific set of convenience functions covering visualization, automated component selection, persistence, as well as integration with the MNE-Python object system. ICA in MNE-Python is handled by the ICA class, which allows one to fit an unmixing matrix on either Raw or Epochs data by calling the related decompose_raw or decompose_epochs methods. After a model has been fitted, the resulting source time series can be visualized using trellis plots (Becker et al., 1996) as provided by the plot_sources_raw and plot_sources_epochs methods (illustrated in Figure 6). In addition, topographic plots depicting the spatial sensitivities of the unmixing matrix are provided by the plot_topomap method (also illustrated in Figure 6). Importantly, the find_sources_raw and find_sources_epochs methods allow for identifying sources based on bivariate measures, such as Pearson correlations with an ECG recording, or simply based on univariate measures such as variance or kurtosis. The API, moreover, supports user-defined scoring measures. Identified source components can then be marked in the ICA object's exclude attribute and saved into a FIF file, together with the unmixing matrix and runtime information. This supports a sustainable, demand-driven workflow: neither sources nor cleaned data need to be saved; signals can be reconstructed from the saved ICA structure as required.
For advanced use cases, sources can be exported as regular raw data or epochs objects, and saved into FIF files (sources_as_raw and sources_as_epochs). This allows any MNE-Python analysis to be performed on the ICA time series. A simplified ICA workflow for identifying, visualizing and removing cardiac artifacts is illustrated in Table 2.
Figure 6. Topographic and trellis plots of two automatically identified ICA components. The component on the left corresponds to the EOG artifact, with a topography on the magnetometers showing frontal signals and a waveform typical of an eye blink. The component on the right captures the ECG artifact, with a waveform matching 3 heart beats.
Table 2. From epochs to ICA artifact removal in less than 20 lines of code.
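The key removal step, zeroing out rejected components before inverse transforming back to sensor space, can be sketched with a toy mixing model. This is not MNE-Python code: the true unmixing matrix stands in for one estimated by ICA, which in practice is known only up to permutation and scaling:

```python
import numpy as np

rng = np.random.default_rng(42)
# Toy example: 3 latent sources mixed into 3 "sensors".
n_samples = 1000
S = np.vstack([
    np.sin(np.linspace(0, 40, n_samples)),          # "brain" rhythm
    np.sign(np.sin(np.linspace(0, 6, n_samples))),  # slow "artifact"
    rng.standard_normal(n_samples) * 0.1,           # noise source
])
A = rng.standard_normal((3, 3))                     # mixing matrix
X = A @ S                                           # sensor data

# Suppose an ICA fit recovered the unmixing matrix W = A^-1 exactly.
W = np.linalg.inv(A)
sources = W @ X

# "Zero out" the artifact component, then inverse transform back.
exclude = [1]
keep = [i for i in range(3) if i not in exclude]
X_clean = A[:, keep] @ sources[keep]

# The cleaned data equals the mixture of the remaining sources.
expected = A[:, keep] @ S[keep]
print(np.allclose(X_clean, expected))   # True
```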
3.2. Non-Parametric Cluster-Level Statistics
For traditional cross-subject inferences, MNE-Python offers several parametric and non-parametric statistical methods. Parametric statistics provide valid statistical contrasts insofar as the data under test conform to certain underlying assumptions of Gaussianity. The more general class of non-parametric statistics, which we will focus on here, does not require such assumptions to be satisfied (Nichols and Holmes, 2002; Pantazis et al., 2005).
M/EEG data naturally contain spatial correlations, whether the signals are represented in sensor space or source space, as temporal patterns or time–frequency representations. Moreover, due to filtering and even the characteristics of the signals themselves, there are typically strong temporal correlations as well. Mass univariate methods provide statistical contrasts at each “location” across all dimensions, e.g., at each spatio-temporal point in a cortical temporal pattern, independently. However, due to the highly correlated nature of the data, the resulting Bonferroni or false discovery rate corrections (Benjamini and Hochberg, 1995) are generally overly conservative. Moreover, making inferences over individual spatio-temporal (or other dimensional) points is typically not of principal interest. Instead, studies typically seek to identify contiguous regions within some particular dimensionality, be it spatio-temporal or time–frequency, during which activation is greater in one condition compared to a baseline or another condition. This leads to the use of cluster-based statistics, which seek such contiguous regions of significant activation (Maris and Oostenveld, 2007).
MNE-Python includes a general framework for cluster-based tests to allow for performing arbitrary sets of contrasts along arbitrary dimensions while controlling for multiple comparisons. In practice, this means that the code is designed to work with many forms of data, whether they are stored as SourceEstimate for source-space data, or as Evoked for sensor-space data, or even as custom data formats, as necessary for time–frequency data. It can operate on any NumPy array using the natural (grid) connectivity structure, or a more complex connectivity structure (such as those in a brain source space) with the help of a sparse adjacency matrix. MNE-Python also facilitates the use of methods for variance control, such as the “hat” method (Ridgway et al., 2012). Two common use cases are provided in Figure 7.
Figure 7. Examples of clustering. (A) Time-frequency clustering showing a significant region of activation following an auditory stimulus. (B) A visualization of the significant spatio-temporal activations in a contrast between auditory stimulation and visual stimulation using the sample dataset. The red regions were more active after auditory than after visual stimulation, and vice-versa for blue regions. Image (B) was produced with PySurfer.
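A minimal one-dimensional version of such a cluster-level test can be sketched in plain NumPy: threshold the t-values, sum within contiguous clusters, and build a sign-flip permutation null. This is a simplified sketch of the idea, not MNE-Python's implementation:

```python
import numpy as np

def cluster_perm_test(diff, thresh=2.0, n_perm=500, seed=0):
    """One-sample cluster-mass permutation test on (subjects x times) data.

    t-value per time point, threshold, sum |t| within contiguous
    supra-threshold clusters, and compare the largest observed cluster
    mass against a sign-flip permutation null distribution.
    """
    rng = np.random.default_rng(seed)
    n = diff.shape[0]

    def tvals(d):
        return d.mean(0) / (d.std(0, ddof=1) / np.sqrt(n))

    def max_cluster_mass(t):
        above = np.abs(t) > thresh
        best = cur = 0.0
        for a, tv in zip(above, np.abs(t)):
            cur = cur + tv if a else 0.0   # extend or reset the running cluster
            best = max(best, cur)
        return best

    obs = max_cluster_mass(tvals(diff))
    null = [max_cluster_mass(tvals(diff * rng.choice([-1, 1], size=(n, 1))))
            for _ in range(n_perm)]
    return obs, np.mean(np.array(null) >= obs)

# Fake data: 20 subjects, 100 time points, a real effect from sample 40 to 60.
rng = np.random.default_rng(1)
d = rng.standard_normal((20, 100))
d[:, 40:60] += 1.0
mass, p = cluster_perm_test(d)
print(p < 0.05)   # the injected effect forms a significant cluster
```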
3.3. Decoding—MVPA—Supervised Learning
MNE-Python can easily be used for decoding using Scikit-Learn (Pedregosa et al., 2011). Decoding is often referred to as multivariate pattern analysis (MVPA), or simply supervised learning. Figure 8 presents cross-validation scores in a binary classification task that consists of predicting, at each time point, whether an epoch corresponds to a visual flash in the left hemifield or a left auditory stimulus. The script to reproduce this figure is available in Table 3.
Figure 8. Sensor space decoding. At every time instant, a linear support vector machine (SVM) classifier is used with a cross-validation loop to test if one can distinguish data following a stimulus in the left ear or in the left visual field. One can observe that the two conditions start to be significantly differentiated as early as 50 ms and maximally at 100 ms which corresponds to the peak of the primary auditory response. Such a statistical procedure is a quick and easy way to see in which time window the effect of interest occurs.
Table 3. Sensor space decoding of MEG data.
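The per-time-point decoding scheme can be sketched in plain NumPy. To keep the example dependency-free, a nearest-class-centroid classifier stands in for the linear SVM, and the data are simulated:

```python
import numpy as np

def timepoint_decoding(X, y, n_folds=5, seed=0):
    """Cross-validated decoding accuracy at each time point.

    Sketch of per-time-point MVPA on (trials x channels x times) data,
    using a nearest-class-centroid classifier.
    """
    rng = np.random.default_rng(seed)
    n_trials, _, n_times = X.shape
    order = rng.permutation(n_trials)
    folds = np.array_split(order, n_folds)
    scores = np.zeros(n_times)
    for t in range(n_times):
        accs = []
        for test_idx in folds:
            train_idx = np.setdiff1d(order, test_idx)
            Xtr, Xte = X[train_idx, :, t], X[test_idx, :, t]
            c0 = Xtr[y[train_idx] == 0].mean(axis=0)   # class centroids
            c1 = Xtr[y[train_idx] == 1].mean(axis=0)
            pred = (np.linalg.norm(Xte - c1, axis=1)
                    < np.linalg.norm(Xte - c0, axis=1)).astype(int)
            accs.append((pred == y[test_idx]).mean())
        scores[t] = np.mean(accs)
    return scores

# Fake data: 80 trials, 10 channels, 100 time points; the two conditions
# become separable only from sample 50 onward.
rng = np.random.default_rng(2)
X = rng.standard_normal((80, 10, 100))
y = np.repeat([0, 1], 40)
X[y == 1, :, 50:] += 1.0
scores = timepoint_decoding(X, y)
print(scores[:40].mean(), scores[55:].mean())  # ~chance before, high after
```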
3.4. Functional Connectivity
Functional connectivity estimation aims to estimate the structure and properties of the network describing the dependencies between a number of locations in either sensor- or source-space. To estimate connectivity from M/EEG data, MNE-Python employs single-trial responses, which enables the detection of relationships between time series that are consistent across trials. Source-space connectivity estimation requires the use of an inverse method to obtain a source estimate for each trial. While computationally demanding, estimating connectivity in source-space has the advantage that the connectivity can be more readily related to the underlying anatomy, which is difficult in the sensor space.
The connectivity module in MNE-Python supports a number of bivariate spectral connectivity measures, i.e., connectivity is estimated by analyzing pairs of time series, and the connectivity scores depend on the phase consistency across trials between the time series at a given frequency. Examples of such measures are coherence, imaginary coherence (Nolte et al., 2004), and the phase-locking value (PLV) (Lachaux et al., 1999). The motivation for using imaginary coherence and related methods is that they discard or downweight the contributions of the real part of the cross spectrum and, therefore, zero-lag correlations, which can be largely a result of the spatial spread of the measured signal or source estimate distributions (Schoffelen and Gross, 2009). However, note that even though some methods can suppress the effects of the spatial spread, connectivity estimates should be interpreted with caution: due to the bivariate nature of the supported measures, there can be a large number of apparent connections due to a latent region connecting or driving two regions that both contribute to the measured data. Multivariate connectivity measures, such as partial coherence (Granger and Hatanaka, 1964), can alleviate this problem by analyzing the connectivity between all regions simultaneously (cf. Schelter et al., 2006). We plan to add support for such measures in the future.
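As a concrete illustration, the PLV at a single frequency can be computed across trials in a few lines of NumPy. This is a simplified sketch of the measure, not the MNE-Python routine; the signals and sampling parameters are invented:

```python
import numpy as np

def plv(x, y, fs, freq):
    """Phase-locking value between two signals across trials at one frequency.

    x, y: (n_trials, n_samples) arrays. The phase at `freq` is read from the
    corresponding FFT bin of each trial; the PLV is the magnitude of the mean
    phase difference across trials (1 = perfectly consistent phase lag,
    values near 0 = no consistent phase relation).
    """
    k = int(round(freq * x.shape[1] / fs))   # FFT bin of the target frequency
    phx = np.angle(np.fft.rfft(x, axis=1)[:, k])
    phy = np.angle(np.fft.rfft(y, axis=1)[:, k])
    return np.abs(np.mean(np.exp(1j * (phx - phy))))

rng = np.random.default_rng(3)
fs, n_trials, n_samp = 250.0, 50, 500
t = np.arange(n_samp) / fs
# Trials with a random phase per trial, but a fixed 10 Hz lag between x and y.
phase = rng.uniform(0, 2 * np.pi, size=(n_trials, 1))
x = np.sin(2 * np.pi * 10 * t + phase) + 0.5 * rng.standard_normal((n_trials, n_samp))
y = np.sin(2 * np.pi * 10 * t + phase + 1.0) + 0.5 * rng.standard_normal((n_trials, n_samp))

locked = plv(x, y, fs, 10.0)                               # consistent lag
unlocked = plv(x, rng.standard_normal((n_trials, n_samp)), fs, 10.0)
print(locked > 0.9, unlocked < 0.5)
```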
The connectivity estimation routines in MNE-Python are designed to be flexible yet computationally efficient. When estimating connectivity in sensor-space, an instance of Epochs is used as input to the connectivity estimation routine. For source-space connectivity estimation, a Python list containing SourceEstimate instances is used. Instead of a list, it is also possible to use a Python generator object which produces SourceEstimate instances. This option drastically reduces the memory requirements, as the data is read on-demand from the raw file and projected to source-space during the connectivity computation, therefore requiring only a single SourceEstimate instance to be kept in memory. To use this feature, inverse methods which operate on Epochs, e.g., apply_inverse_epochs, have the option to return a generator object instead of a list. For linear inverse methods, e.g., MNE, dSPM, sLORETA, further computational savings are achieved by storing the inverse kernel and sensor-space data in the SourceEstimate objects, which allows the connectivity estimation routine to exploit the linearity of the operations and apply the time-frequency transforms before projecting the data to source-space.
Due to the large number of time series, connectivity estimation between all pairs of time series in source-space is computationally demanding. To alleviate this problem, the user has the option to specify pairs of signals for which connectivity should be estimated, which makes it possible, for example, to compute the connectivity between a seed location and the rest of the brain. For all-to-all connectivity estimation in source-space, an attractive option is also to reduce the number of time series, and thus the computational demand, by summarizing the source time series within a set of cortical regions. We provide functions to do this automatically for cortical parcellations obtained by FreeSurfer, which employs probabilistic atlases and cortical folding patterns for an automated subject-specific segmentation of the cortex into anatomical regions (Fischl et al., 2004; Desikan et al., 2006; Destrieux et al., 2010). The obtained set of summary time series can then be used as input to the connectivity estimation. The association of time series with cortical regions simplifies the interpretation of results and it makes them directly comparable across subjects since, due to the subject-specific parcellation, each time series corresponds to the same anatomical region in each subject. Code to compute the connectivity between the labels corresponding to the 68 cortical regions in the FreeSurfer “aparc” parcellation is shown in Table 4 and the results are shown in Figure 9.
Table 4. Connectivity estimation between cortical regions in the source space.
Figure 9. Connectivity between brain regions of interest, also called labels, extracted from the automatic FreeSurfer parcellation and visualized using plot_connectivity_circle. The image on the right presents these labels on the inflated cortical surface. The colors are consistent between both figures. The left image was produced with matplotlib and the right image with PySurfer.
3.5. Beamformers
MNE-Python implements two source localization techniques based on beamforming: Linearly-Constrained Minimum Variance (LCMV) in the time domain (Van Veen et al., 1997) and Dynamic Imaging of Coherent Sources (DICS) in the frequency domain (Gross et al., 2001). Beamformers construct adaptive spatial filters for each location in the source space given a data covariance (or cross-spectral density in DICS). This leads to pseudo-images of “source power” that one can store as SourceEstimates.
Figure 4 presents example results of applying the LCMV beamformer to the sample data set for comparison with results achieved using dSPM. The code that was used to generate this example is listed in Table 5.
Table 5. Inverse modeling using the LCMV beamformer.
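The core of the LCMV filter can be sketched for a single source location with a toy leadfield. This is plain NumPy: the unit-gain formulation w = C^-1 L / (L' C^-1 L) with diagonal regularization is a standard textbook form, not MNE-Python's exact implementation, and the leadfield and data are invented:

```python
import numpy as np

def lcmv_filter(L, C, reg=0.05):
    """Unit-gain LCMV spatial filter for one source location.

    L: (n_sensors,) forward field (leadfield) of the source,
    C: (n_sensors, n_sensors) data covariance.
    w = C^-1 L / (L' C^-1 L), so that w' L = 1 (unit gain at the source).
    """
    Creg = C + reg * np.trace(C) / C.shape[0] * np.eye(C.shape[0])
    Cinv_L = np.linalg.solve(Creg, L)
    return Cinv_L / (L @ Cinv_L)

rng = np.random.default_rng(4)
n_sensors, n_samples = 32, 2000
L = rng.standard_normal(n_sensors)          # hypothetical leadfield
s = np.sin(np.linspace(0, 60, n_samples))   # source time course
X = np.outer(L, s) + 0.2 * rng.standard_normal((n_sensors, n_samples))

C = np.cov(X)
w = lcmv_filter(L, C)
s_hat = w @ X          # beamformer output: estimated source time course

# Unit-gain constraint holds, and the source time course is well recovered.
print(np.isclose(w @ L, 1.0), np.corrcoef(s, s_hat)[0, 1] > 0.95)
```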
3.6. Non-Linear Inverse Methods
All the source estimation strategies presented thus far, from MNE to dSPM or beamformers, lead to linear transforms of sensor-space data to obtain source estimates. There are also multiple inverse approaches that yield non-linear source estimation procedures. Such methods have in common that they promote spatially sparse estimates. In other words, source configurations consisting of a small set of dipoles are favored to explain the data. MNE-Python implements three of these approaches, namely mixed-norm estimates (MxNE) (Gramfort et al., 2012), time–frequency mixed-norm estimates (TF-MxNE) (Gramfort et al., 2013b) that regularize the estimates in a time–frequency representation of the source signals, and a sparse Bayesian learning technique named γ-MAP (Wipf and Nagarajan, 2009). Source localization results obtained on the ERF evoked by the left visual stimulus with both TF-MxNE and γ-MAP are presented in Figure 10.
Figure 10. Source localization with non-linear sparse solvers. The left plot shows results from TF-MxNE on raw unfiltered data (possible due to the built-in temporal smoothing), and the right plot shows results from γ-MAP on the same data but filtered below 40 Hz. One can observe the agreement between both methods on the sources in the primary (red) and secondary (yellow) visual cortices delineated by FreeSurfer using an atlas. γ-MAP identifies two additional sources in the right fusiform gyrus along the visual ventral stream. Although such sources would not naturally be expected from such simple visual stimuli, they are weak and peak later in time, which makes them nevertheless plausible.
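The flavor of these sparse solvers can be conveyed with a minimal L1-regularized inverse solved by iterative soft thresholding (ISTA). This is a deliberately simplified single-time-point sketch, not MxNE or γ-MAP, and the gain matrix and source configuration are random toy data:

```python
import numpy as np

def ista_sparse_inverse(G, m, alpha=2.0, n_iter=500):
    """Sparse inverse estimate via ISTA on 0.5*||m - G x||^2 + alpha*||x||_1.

    The L1 penalty drives most candidate sources to exactly zero, favoring
    explanations of the data by a small set of active dipoles.
    """
    lip = np.linalg.norm(G, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(G.shape[1])
    for _ in range(n_iter):
        z = x - G.T @ (G @ x - m) / lip      # gradient step on the data fit
        x = np.sign(z) * np.maximum(np.abs(z) - alpha / lip, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(5)
n_sensors, n_sources = 50, 200
G = rng.standard_normal((n_sensors, n_sources))   # hypothetical gain matrix
x_true = np.zeros(n_sources)
x_true[[10, 120]] = [5.0, -4.0]                   # two active dipoles
m = G @ x_true + 0.1 * rng.standard_normal(n_sensors)

x_hat = ista_sparse_inverse(G, m)
top = np.sort(np.argsort(np.abs(x_hat))[-2:])     # two strongest estimated sources
print(top, np.count_nonzero(x_hat))
```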
In both tasks, atypical responses (i.e., errors) and non-responses were excluded from further analysis (1.3% of the data). Response latencies more than 3 SD above or below the mean, calculated for each participant in each task, were also excluded from further analysis (2% of the data).
On average, participants named pictures more slowly (mean RT = 872 ms, SD = 205 ms) than they read the corresponding words (mean RT = 560 ms, SD = 101 ms). The 312 ms difference was significant [t(16) = 18.799, p < 0.001].
In the stimulus-aligned condition (from 0 to 400 ms after stimulus onset) significant differences in amplitudes (p < 0.05) were observed between word reading and picture naming throughout the whole time-window of processing. These differences were particularly present over posterior electrodes and bilaterally from 100 to 400 ms post-stimulus (Figure 1A).
FIGURE 1. (A) Results of the stimulus-aligned waveform analysis. Values are masked by results of the cluster-based non-parametric analysis: only significant values are plotted. Within the left panel, the upper part corresponds to left hemisphere electrodes, the middle part to midline electrodes, and the lower part to right hemisphere electrodes; within each part, electrodes are ordered from posterior to anterior. Dashed lines outline representative electrodes, whose time courses are plotted separately (picture naming in black, word reading in gray). The topography represents the spatial distribution of the effect over each cluster (black dots outline electrodes within each cluster). (B) Results of the spatio-temporal segmentation on the stimulus-locked ERP Grand-Means of both tasks. Each period of topographic stability is displayed in the color bars with information about its time course. The corresponding topographies are listed on the right (positive values in red, negative values in blue), with the common topographies marked in red. The gray bar on the temporal axis represents the periods of topographic difference between tasks, as revealed by the TANOVA. (C) Boxplots of the distributions of individual onsets of maps A and E and offsets of map A, extracted from the back-fitting procedure for both picture naming (bold lines) and word reading (thin lines). The zero of the time axis represents stimulus presentation.
Results of the TANOVA showed that topographic differences between tasks also stretched across the whole time-window of processing, with the exception of the period between about 75 and 150 ms after stimulus onset (see Figure 1B), corresponding to the temporal signature of the P1 component map.
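TANOVA-style comparisons rest on a topographic dissimilarity measure computed between strength-normalized scalp maps, which is then evaluated against a permutation distribution. A minimal sketch of that dissimilarity (global map dissimilarity, GMD), using a hypothetical 4-electrode map:

```python
import numpy as np

def gfp(v):
    """Global field power: the spatial standard deviation across electrodes."""
    return np.sqrt(np.mean((v - v.mean()) ** 2))

def gmd(u, v):
    """Global map dissimilarity between two scalp maps. Maps are average-
    referenced and GFP-normalized first, so GMD reflects topography, not
    overall strength: 0 for identical maps, 2 for polarity-inverted maps."""
    u = (u - u.mean()) / gfp(u)
    v = (v - v.mean()) / gfp(v)
    return np.sqrt(np.mean((u - v) ** 2))

m = np.array([1.0, 0.5, -0.5, -1.0])   # hypothetical 4-electrode map
d_same = gmd(m, 3 * m)                  # same topography, stronger field: ~0
d_flip = gmd(m, -m)                     # inverted topography: ~2 (the maximum)
```

Because the maps are normalized before comparison, a GMD of zero between two conditions means the same generator configuration could underlie both, differing only in strength, which is exactly the inference the TANOVA exploits.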
The spatio-temporal segmentation of the stimulus-aligned Grand-Means explained 95.81% of the Global Variance, and revealed the presence of a total of six template maps. In Figure 1B, the five template maps starting from the P1 component map onward (map labeled “A”) are shown. In picture naming, the topographic configurations present in the P1 range (map “A”) and later in the 200 ms time window (map “D”) were highly correlated spatially and were therefore labeled with the same template map by the clustering algorithm. When the same template map appears repeatedly in different non-overlapping time windows of the same Grand-Mean, it does not necessarily reflect comparable neuronal activity (e.g., Michel et al., 2009). For this reason, the later map has been relabeled differently in the figure, as it likely reflects a qualitatively different step of information processing following early visual encoding.
The application of the clustering algorithm resulted in a sequence of topographic maps, depicted in Figure 1B for the grand-averages of each task. Results of the spatio-temporal segmentation revealed that in an early time-window (between about 75 and 150 ms after stimulus onset, and thus compatible with visual encoding), the same topographic map (labeled “A”) was present in the grand-averages of both tasks. In the waveform analysis, higher amplitudes were detected in word reading compared with picture naming (Figure 1A). The TANOVA corroborated the results of the spatio-temporal segmentation, revealing that the same topographic maps were predominant across tasks in the considered time-window (75–150 ms). A back-fitting was performed in the time window between 0 and 400 ms from stimulus onset to test for the onsets, offsets and durations of map “A” across participants in both tasks. Results revealed that map “A” had a slightly later onset in picture naming (mean onset: 66 ms after picture onset) with respect to word reading (mean onset: 50 ms after word onset). A Wilcoxon signed-rank test proved the difference to be marginal (z = −1.818, p = 0.07). Map “A” also displayed a later offset in picture naming (mean offset: 155 ms after picture presentation) compared with word reading (mean offset: 132 ms after word presentation). The difference proved to be significant (z = −2.301, p < 0.05). Finally, no differences were found in map duration across tasks (z = −1.086, p = 0.278). Figure 1C illustrates the distributions of the individual onsets and offsets of map “A” in both picture naming and word reading.
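The back-fitting procedure assigns each time point of an individual ERP to the template map it matches best spatially; a map's onset and offset are then the first and last time points carrying its label. A minimal sketch on made-up 3-electrode data (published pipelines additionally handle temporal smoothing and minimum-duration constraints):

```python
import numpy as np

def backfit(erp, templates):
    """Label each time point of an ERP (n_electrodes x n_times) with the index
    of the template map (n_maps x n_electrodes) it correlates with best
    spatially. Polarity is ignored, as is common in microstate-style analyses."""
    t = templates - templates.mean(1, keepdims=True)   # average-reference maps
    t /= np.linalg.norm(t, axis=1, keepdims=True)
    e = erp - erp.mean(0, keepdims=True)               # average-reference data
    e /= np.linalg.norm(e, axis=0, keepdims=True)
    return np.abs(t @ e).argmax(0)     # best-matching template per time point

# toy example: 2 template maps, 6 time points switching between them
maps = np.array([[1.0, -1.0, 0.0], [0.0, 1.0, -1.0]])
erp = np.stack([maps[0], maps[0], maps[1], maps[1], maps[1], maps[0]]).T
labels = backfit(erp, maps)            # -> [0 0 1 1 1 0]
```

From such a label sequence, the per-participant onset of map 0 is the first index labeled 0 and its offset the last, which is what the Wilcoxon tests above compare across tasks.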
The time window following visual encoding (from about 150 ms onward) was characterized by extensive amplitude differences, mainly located over posterior sites. In this time window, substantial topographic cross-task differences were detected. A back-fitting performed on the time window between 160 and 300 ms after stimulus onset revealed that map “D,” characterized by posterior positivity and anterior negativity (Figure 1B), was significantly more present in picture naming compared with word reading (Pearson chi-square computed on map presence across individuals: χ² = 14.43, p < 0.001). In picture naming, map “D” explained 10% of the variance in the considered time-window (160–300 ms). The posterior characterization of amplitude differences as revealed by the waveform analysis seems consistent with the fact that in picture naming, map “D” was predominant in the considered time-window. Conversely, the map named “B” was significantly more present in the word reading task (χ² = 6.10, p < 0.05) and explained only 3% of the variance in the time window between 160 and 300 ms after word onset. The low explained variance can be attributed to the rapidly changing spatial configuration of map “B,” which is likely due to the unstable and transitory nature of the ERP activity in the considered time-window.
The back-fitting revealed that the map named “C” had a negligible presence in individual ERPs. This is probably due to the transitional and unstable nature of this topographic map (see Figure 1B).
Amplitude differences were then sustained in the time window from about 250 ms to the end of the stimulus-locked analysis, corroborated by topographic differences identified by the TANOVA. However, these differences are likely due to the very different time courses of the processing stages specific to each task. In fact, the spatio-temporal segmentation performed on this time window revealed the presence, in both tasks, of the same period of topographic stability (map labeled “E”) characterized by posterior positivity and anterior negativity. This common map displayed noticeably different time courses between tasks. The back-fitting performed on the time window between 100 and 400 ms after stimulus onset revealed that map “E” (explaining 22% of the variance across tasks in the considered time-window) displayed an earlier onset in word reading (mean onset: 187 ms after word presentation) than in picture naming (mean onset: 252 ms after picture presentation). Figure 1C illustrates the distribution of the individual onsets of the common map “E” between tasks. A Wilcoxon signed-rank test proved the cross-task difference in onset to be significant across participants (z = −2.342, p < 0.05).
In the response-aligned condition (from ms to the vocal response onset), significant amplitude differences (p < 0.05) were observed between word reading and picture naming throughout the whole time-window of interest. Differences were observed earlier over anterior electrodes (from to ms) and more posteriorly in the following time-window closer to articulation. Again, effects were bilateral (Figure 2A).
FIGURE 2. (A) Results of the response-aligned waveform analysis. Values are masked by results of the cluster-based non-parametric analysis: only significant values are plotted. Within the left panel, the upper part corresponds to left hemisphere electrodes, the middle part to midline electrodes, and the lower part to right hemisphere electrodes; within each part, electrodes are ordered from posterior to anterior. Dashed lines outline representative electrodes, whose time courses are plotted separately (picture naming in black, word reading in gray). The topography represents the spatial distribution of the effect over each cluster (white dots outline electrodes within each cluster). (B) Results of the spatio-temporal segmentation on the response-locked ERP Grand-Means of both tasks. Each period of topographic stability is displayed in the color bars with information about its time course. The corresponding topographies are listed on the right (positive values in red, negative values in blue), with the common topographies marked in red. The gray bar on the temporal axis represents the periods of topographic difference between tasks, as revealed by the TANOVA. (C) Boxplots of the distributions of individual offsets of map F and durations of map G, extracted from the back-fitting procedure for both picture naming (bold lines) and word reading (thin lines). The zero of the time axis in the boxplot of the offset of map F represents voice onset.
The TANOVA revealed an extended period of topographic difference, stretching across the whole time-window of processing with the exception of the last period starting about 100 ms prior to the onset of articulation.
The spatio-temporal segmentation revealed the presence of three template maps (Figure 2B) – labeled “F,” “G,” and “H” – explaining 94.5% of the Global Variance.
The template map labeled “F” corresponds to the common map (“E”) in the stimulus-aligned condition. These maps were, in fact, spatially correlated above 0.99.
All three maps were common to both tasks, but maps “F” and “G” displayed different time courses. A back-fitting procedure was carried out in the time-window comprised between and ms before response articulation, revealing that in word reading the map labeled “F,” explaining 20% of the variance in the considered time-window, displayed an offset much closer to response articulation (mean map offset: 184 ms before articulation) than in picture naming (mean map offset: 257 ms before articulation). This result proved to be significant across participants (z = −2.580, p = 0.01). A second back-fitting was performed in the time-window comprised between and ms to test for the duration of map “G” across tasks. The results revealed that map “G” had a longer duration in picture naming (mean duration: 243 ms) compared with word reading (mean duration: 113 ms). The result was significant (z = −3.297, p < 0.01).
Figure 2C illustrates the distribution of the offsets of map “F” and the duration of map “G” across participants and for each task. It is worth noting that the mean map offsets and durations calculated across participants might differ from the mean onsets of the same maps in the ERP Grand-Means, because of variability across participants.
While recording EEG, we presented difficult-to-detect visual stimuli that were either brighter or darker than the background at three intensity levels (i.e. contrast-from-background), collected discrimination performance and then asked the participants to rate the clarity of their perception on the PAS scale (Fig. 1). This allowed us to investigate the relationship of the CPP to both the amount of external sensory evidence and the level of subjective clarity of the percept (the internally experienced evidence).
Single trial structure: Following an acoustic alerting tone, a brief visual stimulus was presented always at the same position in the upper right visual field. Stimuli could be either brighter or darker than the background and were presented at 3 different individually adjusted stimulus intensities (low, intermediate and high). After 1000 ms, participants were asked to report the brightness of the stimulus relative to the background (Discrimination task) and then rate the clarity of their perception on the Perceptual Awareness Scale (PAS) (Awareness report).
Behavioral responses to visual stimuli
To manipulate the amount of sensory evidence, we varied stimulus intensity (low, intermediate and high) by selecting 3 different contrast levels (i.e. stimulus levels corresponding to 25%, 50% and 75% detection rates, individually determined prior to the experiment; see Experimental Procedures). Participants were asked to indicate the brightness of the stimulus relative to the background (“lighter” or “darker”, prompted by a first question screen) to assess discrimination accuracy, and then to rate the clarity of their percept (“no experience”, “brief glimpse”, “almost clear” or “clear”, prompted by the second question screen) (Fig. 1).
Figure 2a illustrates the distribution of subjective perceptual awareness ratings across the three intensity levels. As expected, for low intensity stimuli (orange line), participants most often indicated “no experience” (PAS = 0), followed by a “brief glimpse” (PAS = 1), an “almost clear” experience (PAS = 2), and very few “clear” experiences (PAS = 3). For intermediate intensity stimuli (purple line), the percentage of responses was more equally distributed across awareness rating levels. For high intensity stimuli (cyan line), participants least often indicated having “no experience” (PAS = 0), followed by more frequent “brief glimpses” (PAS = 1), “almost clear” (PAS = 2) and “clear” experiences (PAS = 3). Finally, catch trials were rated in 88.1% of trials as PAS = 0 (“no experience”).
Behavioral results. (a) PAS rating variability for each level of external sensory evidence. Error bars represent standard errors. Mean percentage of correct discrimination responses as a function of (b) PAS ratings and (c) stimulus intensity. Error bars represent standard errors and the solid line (50%) chance level.
Sorting trials according to the clarity of subjective experience (i.e. PAS = 0, PAS = 1, PAS = 2, PAS = 3) revealed that as the clarity of the percept increased, accuracy also increased (Fig. 2b), as expected [repeated-measures ANOVA (degrees of freedom corrected using Greenhouse-Geisser estimates of sphericity): F(1.688,16.882) = 113.2, p < 0.01; linear trend: F(1,10) = 1000.7, p < 0.01]. Similarly, sorting trials according to the strength of the presented evidence (i.e. low, intermediate, high stimulus intensity) showed that as objective sensory information increased, accuracy also increased (Fig. 2c) [repeated-measures ANOVA: F(2,20) = 35.5, p < 0.01; linear trend: F(1,10) = 89.9, p < 0.01].
Overall, these results thus show that both factors of interest (visual awareness and stimulus intensity level), i.e. the internally experienced and externally presented sensory evidence, co-vary, but also show considerable trial-by-trial variability.
Next, we examined to what extent the CPP amplitude varies with each measure when the alternative measure is accounted for. In addition, we ran a mediation analysis to test whether the known relationship between stimulus intensity and CPP 1,2,3,16 is mediated by subjective experience.
We first plotted the CPP as a function of external physical evidence (stimulus intensity), not taking into account internally experienced evidence (PAS ratings) (Fig. 3). As expected, the CPP scaled with stimulus intensity. However, note the large variability around the mean per stimulus intensity (Fig. 3, shaded areas). To test whether this variability is explained by the variability in PAS ratings observed for each stimulus intensity level (see Fig. 2a), we tested whether the CPP amplitude varies with subjective awareness ratings when physical stimulus properties were held constant. Conversely, we also tested whether this potential varies with physical stimulus properties when subjective ratings were held constant. That is, we compared ERP amplitudes evoked by different levels of subjective or objective evidence while controlling for the contribution of the alternative variable. To numerically equate the value of the alternative variable across all levels of comparison, we used random trial sub-sampling (see Methods).
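The trial sub-sampling logic, equating the distribution of the nuisance variable across the levels being compared, can be sketched as follows. Function and variable names are illustrative; the study's actual selection procedure is described in its Methods.

```python
import numpy as np

def equate_covariate(cond, covariate, rng=None):
    """Randomly subsample trials so that every condition level contains the
    same number of trials at each level of the nuisance covariate.
    Sketch of the idea only."""
    rng = np.random.default_rng(rng)
    cond, covariate = np.asarray(cond), np.asarray(covariate)
    keep = []
    for level in np.unique(covariate):
        # trial indices per condition at this covariate level
        groups = [np.flatnonzero((cond == c) & (covariate == level))
                  for c in np.unique(cond)]
        n = min(len(g) for g in groups)        # matched trial count
        for g in groups:
            keep.extend(rng.choice(g, size=n, replace=False))
    return np.sort(np.array(keep))

# toy labels: condition = PAS rating (0/1), covariate = stimulus intensity (0/1)
pas = np.array([0, 0, 0, 1, 1, 0, 1, 1, 1, 0])
intensity = np.array([0, 0, 1, 0, 1, 1, 1, 0, 0, 1])
kept = equate_covariate(pas, intensity, rng=0)
# after selection, each intensity level contributes equally to both PAS groups
```

Repeating the draw many times (500 in the study) and averaging the resulting ERPs guards against any single random subset driving the comparison.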
Grand average ERP waveforms over electrode Pz showing the late evoked potentials as a function of stimulus intensity, regardless of perceptual rating. Shaded areas represent standard errors at each time point.
Centro-parietal positivity co-varies with subjective clarity when stimulus contrast is equated across levels of comparisons
We compared CPP amplitude between awareness ratings for which we could equate stimulus intensities with a sufficient number of trials using trial subsampling (see “Statistical Analysis” section for details). PAS = 0, PAS = 1 and PAS = 2 were compared for stimuli presented at low and intermediate stimulus intensities (Fig. 4a), and PAS = 1, PAS = 2 and PAS = 3 for stimuli presented at intermediate and high stimulus intensities (Fig. 4b). The results were identical for both comparisons (cf. Fig. 4a vs. b). A late positive deflection was observed over central electrodes, peaking around 400–600 ms, which scaled with subjective awareness ratings (Fig. 4a,b), increasing in amplitude with stronger subjective experience. For statistical testing, we ran a non-parametric cluster-based permutation test 17,18 (see Methods). This revealed a positive cluster over centro-parietal electrodes between roughly 200 and 800 ms after stimulus onset (highlighted by the dashed rectangle in Fig. 4a,b), independently of the awareness levels compared (see Fig. 4c,d, left maps for the results of the initial, random sub-sample, pcluster < 0.01). We corroborated this result by repeating the analysis in another 500 runs, randomly selecting a different subset of trials on each iteration (and always equating stimulus intensity across awareness ratings), revealing this effect to be highly consistent across sub-samples (see Fig. 4c,d, right maps for the topography of averaged p-values across the total of 500 runs). Additional pairwise post-hoc comparisons between each awareness level, performed through cluster-based permutation t-tests averaged over the significant time window identified in the main analysis, showed that the CPP differed in amplitude across all awareness levels (Fig. 4e,f). Overall, these results show that the CPP is strongly related to subjective clarity of the percept, with higher amplitudes corresponding to higher clarity of perceptual experience.
This effect was consistent across subjects (see Supplementary Fig. 1a for single subject data).
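The cluster-based permutation logic used throughout these analyses can be sketched in one dimension. This is a hedged, minimal illustration in the spirit of Maris & Oostenveld (2007), not the study's pipeline: real analyses cluster jointly over electrodes and time and handle both polarities, and all data below are simulated.

```python
import numpy as np

def max_cluster_mass(tvals, thresh):
    """Mass (summed t) of the largest contiguous suprathreshold run."""
    best = run = 0.0
    for t in tvals:
        run = run + t if t > thresh else 0.0
        best = max(best, run)
    return best

def cluster_perm_test(a, b, thresh=2.0, n_perm=1000, seed=0):
    """Minimal paired cluster-based permutation test on one electrode's time
    course (arrays of shape n_subjects x n_times), positive clusters only."""
    rng = np.random.default_rng(seed)
    d = a - b                                       # paired differences
    tstat = lambda x: x.mean(0) / (x.std(0, ddof=1) / np.sqrt(x.shape[0]))
    obs = max_cluster_mass(tstat(d), thresh)
    null = np.empty(n_perm)
    for i in range(n_perm):                         # random sign flips = label swaps
        signs = rng.choice([-1.0, 1.0], size=d.shape[0])[:, None]
        null[i] = max_cluster_mass(tstat(d * signs), thresh)
    return (null >= obs).mean()                     # cluster-level p-value

# simulated data: 12 subjects, 50 time points, an effect from samples 20-34
data_rng = np.random.default_rng(42)
b = 0.5 * data_rng.standard_normal((12, 50))
a = b + 0.5 * data_rng.standard_normal((12, 50))
a[:, 20:35] += 1.0
p = cluster_perm_test(a, b)
```

Because only the maximum cluster mass per permutation enters the null distribution, the procedure controls the family-wise error rate over all time points without a per-sample correction.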
CPP scales with the strength of subjective evidence. (a,b) Grand average ERP waveforms over electrode Pz, obtained for each visual awareness rating (a: PAS = 0 vs. 1 vs. 2; b: PAS = 1 vs. 2 vs. 3) for trials with equated stimulus intensity levels. Each waveform represents the average ERP over 500 random trial draws; shaded areas represent standard errors at each time point (note that the standard errors are very small and therefore almost invisible). Time windows of significant differences are highlighted by the dashed rectangle. (c,d) Cluster analysis results for awareness-related signals (based on ANOVAs across awareness levels). The maps on the left show the topographic distribution of F-values, with black dots representing significant electrodes (initial draw out of 500 subsamples). The maps on the right show the topography of the p-values averaged over all 500 random draws in the time windows where the most consistent effects were found. (e,f) Pairwise post-hoc comparisons for awareness-related signals in the significant time window (dashed rectangle in a,b). Each map shows the t-value distribution with black dots indicating significant electrodes.
Centro-parietal positivity shows no relation to stimulus contrast when subjective awareness ratings are equated across levels of comparisons
Next, we compared CPP amplitude across stimulus intensity levels for which we could equate the value of awareness ratings with a sufficient number of trials through random trial subsampling. Low versus intermediate stimulus intensity levels were compared after equating PAS values 0, 1 and 2 across stimulus categories (Fig. 5a) and intermediate versus high stimulus intensity levels were contrasted after equating PAS values 1, 2 and 3 across stimulus categories (Fig. 5b). The results reveal very small variations of the CPP with stimulus intensity for both comparisons (see Fig. 5a,b: compare low vs. intermediate intensity waveform in a and intermediate vs. high intensity waveform in b), with the differences being an order of magnitude smaller than those observed for the awareness effects (cf. Fig. 4a,b). Running a non-parametric cluster-based permutation t-test per stimulus intensity comparison did not reveal any significant cluster of electrodes (all pcluster > 0.5 for low vs intermediate comparisons, all pcluster > 0.3 for intermediate vs high comparisons). The absence of any effect was not driven by trial selection as confirmed by repeating the trial selection 500 times (random draws). Hence, the CPP was not modulated by the different stimulus intensities when accounting for the subjective experience ratings. The lack of effect was evident across subjects (see Supplementary Fig. 1b for single subject data).
CPP amplitude and stimulus intensity: ERP amplitudes over electrode Pz for each of the stimulus intensity comparisons (a: low vs. intermediate; b: intermediate vs. high) for which awareness ratings could be equated. No differences were found across stimulus intensities with equated PAS ratings (compare the low vs. intermediate waveforms in (a) and the intermediate vs. high waveforms in (b)). Note that the amplitude differences between a and b are due to the need to include different PAS ratings in the respective averages (PAS 0, 1 and 2 for waveforms in (a); PAS 1, 2 and 3 for waveforms in (b)). Each waveform represents the average ERP over 500 trial selections (random draws); shaded areas represent standard errors at each time point.
Centro-parietal positivity: accuracy and catch trials
Given the relatively stronger co-variation of the CPP with subjectively experienced than with physically presented sensory evidence, we expected an uncoupling of this potential from performance accuracy. This was confirmed in two separate analyses. First, we compared the CPP across PAS ratings on correct response trials only (PAS ratings from 0 to 3 included), and then repeated this analysis on incorrect response trials (only including PAS = 0 and 1 ratings to have a sufficient number of numerically equated trials between the two categories, see Behavioral results).
For correct trials, we confirmed the existence of an awareness rating effect. The higher the visual awareness rating, the higher the amplitude of the CPP (Fig. 6a, left panel). As above, the cluster-based permutation test on ERPs across subjective rating (PAS), performed on the whole epoch (0 to 900 ms), revealed a significant positive centro-parietal electrode cluster (pcluster < 0.01, from 196 to 900 ms, Fig. 6a, left panel, map). The cluster was also present in each pairwise post-hoc comparison between PAS ratings when performed through cluster-based permutation t-tests on the mean amplitude of the significant time window (p values ranging from 0.046 to 0.002).
Late evoked potential scales with awareness regardless of accuracy. (a) ERP waveforms over electrode Pz, obtained for each visual awareness level in correct trials (left panel) and for visual awareness levels PAS = 0 and PAS = 1 in incorrect trials (right panel). The map illustrates the topographical distribution of F- and t-values for the significant time window (highlighted by the dashed rectangle). Black dots represent significant positive electrode clusters. (b) Evoked potentials in catch trials versus PAS = 0 trials (obtained after 500 random draws from low and intermediate stimulus intensity trials) over electrode Pz. Shaded areas around PAS = 0 waveforms represent standard errors at each time point. Map: t-value distributions from the comparison between catch trial data in the window after expected stimulus onset (marked by 0) versus its baseline “pre-stimulus” interval. Black dots represent significant positive electrode clusters.
Importantly, the awareness rating effect was also observed for incorrect trials, suggesting a dissociation of the effect from task accuracy (Fig. 6a, right panel). The cluster-based permutation t-test (performed on the whole epoch, 0 to 900 ms) again revealed a positive cluster over centro-parietal areas (pcluster < 0.01, from 248 to 788 ms, Fig. 6a, right panel, map).
As a variant of testing for a link of the CPP to an ‘internal’ decision quantity, we analysed catch trials to examine whether this late positivity can also occur in the absence of any external sensory evidence. The corresponding cluster-based permutation t-test performed on the whole epoch (0 to 900 ms), with the pre-stimulus period (−300 to 0 ms) as a reference interval, again revealed a positive cluster over centro-parietal areas (pcluster < 0.01, starting at 248 ms until the end of the epoch, Fig. 6b, map). Therefore, even if no veridical sensory information is available, the CPP is still present, and its amplitude matches the amplitude of stimulus-present trials with PAS reports of zero (“no experience”) (Fig. 6b). Please note that the high number of PAS 0 reports in catch trials prevented a comparison across PAS levels.
To further investigate the relationship between CPP amplitude, stimulus strength and subjective experience, we ran a mediation analysis linking the three variables, without any prior trial selection. A mediation analysis allows estimating the extent to which a proposed mediator variable accounts for the relationship between a predictor and an outcome variable 19,20. Here, based on the abundant literature linking the amount of available evidence to the CPP amplitude (e.g., 1,3,12,16), we considered stimulus intensity (from catch trials to high, 4 levels) as the predictor of the EEG amplitude (outcome variable), while subjective experience measured by PAS scale ratings (0–3, 4 levels) was included as the proposed mediator (see Fig. 7a). The choice of the awareness rating as the mediator is also in line with our behavioural results (Fig. 2a), indicating that the awareness rating is predicted by the stimulus strength, thus acknowledging that subjective perception is mostly driven by external evidence. Indeed, in a mediation analysis, variations in levels of the predictor must significantly account for variations in the proposed mediator. Please note that with the mediation analysis we aimed to investigate the strength of the stimulus intensity-CPP amplitude relationship whilst controlling for subjective experience, without implying that the subjective evidence causally influences the CPP.
Mediation hypothesis and results. (a) The mediation model included the independent variable (stimulus intensity) as predictor of the outcome variable (ERP amplitude). Subjective reports were introduced as a proposed mediator and tested as to whether they accounted for the relationship between predictor and outcome. (b) The mediation effect (ab) time course averaged across electrodes within the significant cluster is shown on the left, together with the total effect of stimulus intensity (c) and the direct effect (c′) when controlling for subjective reports. The rectangle represents the significant time window of mediation. On the right, the topography of the mediation effect is shown with the significant positive electrode cluster superimposed (black dots).
Therefore, in our model (Fig. 7a), path a represents the relationship between stimulus intensity and PAS rating, and path b the relationship between PAS rating and EEG amplitude when controlling for stimulus intensity. Path c represents the total stimulus intensity effect (unmediated) on EEG amplitude, and path c′ represents the direct stimulus intensity–EEG amplitude effect when controlling for PAS ratings. The product of the path a and path b coefficients (ab) represents the mediation effect. Testing for a significant mediation effect involves testing whether the predictor-outcome relationship (stimulus intensity – EEG amplitude) is significantly reduced by including the mediator (PAS ratings) in the model (ab = c − c′ > 0). To test this, ab coefficients calculated for each channel and each time point from 0 to 900 ms after stimulus presentation were tested against 0 by means of a cluster-based permutation t-test. We found a significant positive cluster (pcluster = 0.007; also note path a: p < 0.05), spanning from 300 to 800 ms and including several centro-parietal electrodes. Figure 7b shows ab values over time averaged across electrodes within the significant cluster (on the left) and its corresponding topography (on the right). Importantly, the time window and the topography of the mediation effect exactly mirrored the CPP component. This result indicates that the inclusion of the mediator “subjective report” in the model significantly decreased the predictive power of stimulus strength on CPP (compare dashed lines in Fig. 7b). In summary, the mediation analysis confirmed and extended our initial results, showing that when the subjective experience is accounted for, the stimulus strength no longer significantly modulates the CPP amplitude.
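The single-mediator path model described above can be sketched with ordinary least squares; for linear models with intercepts the identity ab = c − c′ holds exactly. The data below are simulated for illustration, not from the study.

```python
import numpy as np

def ols_coefs(y, X):
    """Least-squares coefficients for y ~ X (a column of ones is prepended)."""
    X = np.column_stack([np.ones(len(X)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

def mediation(x, m, y):
    """Single-mediator model: a: x -> m;  b: m -> y controlling for x;
    c: total x -> y;  c': direct x -> y controlling for m.
    The mediation effect is a*b = c - c'."""
    a = ols_coefs(m, x[:, None])[1]
    c = ols_coefs(y, x[:, None])[1]
    b, cprime = ols_coefs(y, np.column_stack([m, x]))[1:]
    return a * b, c, cprime

# toy data in which x (stimulus intensity) drives y (EEG amplitude) mostly via
# the mediator m (subjective rating); coefficients are made up
rng = np.random.default_rng(1)
x = rng.standard_normal(500)
m = 0.8 * x + 0.3 * rng.standard_normal(500)
y = 0.9 * m + 0.1 * x + 0.3 * rng.standard_normal(500)
ab, c, cprime = mediation(x, m, y)     # ab equals c - cprime exactly (OLS identity)
```

In the study this computation is repeated per channel and time point, and the resulting ab values are then submitted to the cluster-based permutation test.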
What is Magnetoencephalography (MEG)?
Magnetoencephalography (MEG) is a non-invasive technique for investigating human brain activity. It allows the measurement of ongoing brain activity on a millisecond-by-millisecond basis, and it shows where in the brain activity is produced.
How does MEG work?
At the cellular level, individual neurons in the brain have electrochemical properties that result in the flow of electrically charged ions through a cell. Electromagnetic fields are generated by the net effect of this slow ionic current flow. While the magnitude of the field associated with an individual neuron is negligible, the effect of multiple neurons (for example, 50,000–100,000) excited together in a specific area generates a measurable magnetic field outside the head. These neuromagnetic signals generated by the brain are extremely small: about a billionth of the strength of the earth's magnetic field. Therefore, MEG scanners require superconducting sensors (SQUIDs, superconducting quantum interference devices). The SQUID sensors are bathed in a large liquid-helium cooling unit at approximately −269 °C. Because the sensors are superconducting at this temperature, the SQUID device can detect and amplify magnetic fields generated by neurons a few centimeters away from the sensors. A magnetically shielded room houses the equipment and mitigates interference.
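A rough back-of-the-envelope calculation conveys how small these fields are. Treating the synchronized population as a single current dipole of moment Q and using the simple scaling B ~ μ0·Q/(4πr²) (geometry factors ignored), with assumed, typical values of Q ≈ 10 nA·m and a sensor distance of ~4 cm:

```python
import math

# All values below are illustrative order-of-magnitude assumptions.
MU0 = 4e-7 * math.pi   # vacuum permeability (T*m/A)
Q = 10e-9              # assumed dipole moment, 10 nA*m (typical cortical source)
r = 0.04               # assumed sensor distance from the source, ~4 cm

B = MU0 * Q / (4 * math.pi * r ** 2)   # crude field estimate in tesla
ratio = B / 5e-5                        # versus Earth's ~50 microtesla field
```

This lands in the hundreds of femtotesla, some eight orders of magnitude below the Earth's field, which is why superconducting sensors and magnetic shielding are both indispensable.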
What are the advantages of MEG?
MEG has advantages over both fMRI and EEG. The technologies complement each other, but only MEG provides timing as well as spatial information about brain activity. fMRI signals reflect brain activity indirectly, by measuring the oxygenation of blood flowing near active neurons, whereas MEG signals are obtained directly from neuronal electrical activity. MEG signals can show absolute neuronal activity, whereas fMRI signals show relative neuronal activity: fMRI analysis must always compare against a reference level of neuronal activity. This means that MEG, unlike fMRI, can even be recorded in sleeping subjects. MEG also does not make any operational noise, unlike fMRI. And while fMRI measurement requires the complete absence of subject movement during recording, MEG measurement does not, so children can move their heads within the MEG helmet.
Finally and most importantly, MEG provides temporal information about brain activation with sub-millisecond precision, whereas fMRI provides poor temporal information. MEG also provides more accurate spatial localization of neural activity than EEG, a complementary method of recording brain activity. The I-LABS MEG Brain Imaging Center system allows co-registration of EEG and MEG.