getting_started/eeg/biosemi.md (+1, -1)
@@ -21,7 +21,7 @@ The [24 bit file format](http://www.biosemi.com/faq/file_format.htm) has the pra
 The [high sampling rate](http://www.biosemi.com/faq/adjust_samplerate.htm) (minimally 2 kHz, i.e. 2000 Hz) has the consequence that files are much larger than with most acquisition systems. For example, compared to a BrainAmp data file sampled at 512 Hz with the same number of electrodes, the Biosemi data file will be approximately 3x as large on disk and 4x as large after having been read into memory. This means that for processing bdf files you will typically want a computer with more than the standard amount of RAM. After reading the data into MATLAB memory, a common procedure is to downsample it to 500 Hz using **[ft_resampledata](/reference/ft_resampledata)**. This will make all subsequent analyses run much faster and will allow doing the analysis with less RAM.
-Most Biosemi [electrode caps](http://www.biosemi.com/headcap.htm) have a unique channel naming scheme. Also the exact number and positions is different from those in other EEG systems. Consequently, when plotting the channel-level data with a topographic arrangement of the channels, or when plotting the topographies (see the [plotting tutorial](/tutorial/plotting#plotting_data_at_the_channel_level) and the [channel layout tutorial](/tutorial/layout)), you will have to use a layout that is specific to your Biosemi electrode cap. FieldTrip includes the following template 2D layout files in the fieldtrip/template/layout directory, but you might want to construct your own layout.
+Most Biosemi [electrode caps](http://www.biosemi.com/headcap.htm) have a unique channel naming scheme, and the exact number and positions of the electrodes differ from those in other EEG systems. Consequently, when plotting the channel-level data with a topographic arrangement of the channels, or when plotting the topographies (see the [plotting tutorial](/tutorial/plotting#plotting_data_at_the_channel_level) and the [channel layout tutorial](/tutorial/layout)), you will have to use a layout that is specific to your Biosemi electrode cap. FieldTrip includes the following template 2D layout files in the `fieldtrip/template/layout` directory, but you might want to construct your own layout.
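
The downsampling step mentioned in the context above can be sketched as follows; this is a minimal example, assuming the bdf file has already been read into a raw data structure `data` with **[ft_preprocessing](/reference/ft_preprocessing)**:

```matlab
% assumes: data = ft_preprocessing(cfg_read), with cfg_read.dataset pointing to a *.bdf file
cfg            = [];
cfg.resamplefs = 500;   % target sampling rate in Hz
cfg.detrend    = 'no';  % do not detrend prior to resampling
data_resampled = ft_resampledata(cfg, data);
```

All subsequent analysis steps can then be run on `data_resampled` instead of the original high-rate data.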

template/sourcemodel.md (+1, -1)
@@ -50,7 +50,7 @@ You will notice that the regularly spaced 3D grids are not that interesting to l
 When doing source reconstruction using minimum norm estimation (MNE, also known as linear estimation) techniques, the assumption is that the sources in the brain are distributed and that only the strength at all possible cortical locations is to be estimated. Since the strength of all dipoles in the cortical mesh is estimated simultaneously, sources should only be placed in regions where generators might be present. MNE therefore usually assumes a source model that consists of grey matter only, which can be modeled as a highly folded cortical sheet.
-A canonical cortical sheet is available in fieldtrip/template/sourcemodel with different numbers of vertices (20484, 8192 and 5124 vertices). These files were taken from the SPM8 release version; they refer to the canonical T1 anatomical MRI and are expressed in MNI coordinates.
+A canonical cortical sheet is available in the `fieldtrip/template/sourcemodel` directory at different resolutions (20484, 8192 and 5124 vertices). These files were taken from the SPM8 release version; they refer to the canonical T1 anatomical MRI and are expressed in MNI coordinates.
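
As a sketch of how such a canonical sheet can be loaded with **[ft_read_headshape](/reference/fileio/ft_read_headshape)** (the filename below is an assumption; check the actual names in `fieldtrip/template/sourcemodel`):

```matlab
% hypothetical filename; the template directory contains meshes at several resolutions
sourcemodel = ft_read_headshape('cortex_5124.surf.gii');
disp(sourcemodel)   % contains .pos (vertex positions, MNI coordinates) and .tri (triangles)
```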

tutorial/connectivity/coherence.md (+2, -2)
@@ -362,7 +362,7 @@ Once we computed this, we can use **[ft_sourceanalysis](/reference/ft_sourceanal
 cfg.unit = 'cm';
 source = ft_sourceanalysis(cfg, freq);
-The resulting source-structure is a volumetric reconstruction which is specified in head-coordinates. In order to be able to visualise the result with respect to the subject's MRI, we have to interpolate the functional data to the anatomical MRI. For this, we need the subject's MRI, which is included in the [SubjectCMC.zip](https://download.fieldtriptoolbox.org/tutorial/SubjectCMC.zip) dataset. After reading the anatomical MRI, we reslice it along the axes of the head coordinate system for improved visualization.
+The resulting source structure is a volumetric reconstruction specified in head coordinates. In order to visualize the result with respect to the subject's MRI, we have to interpolate the functional data onto the anatomical MRI. For this, we need the subject's MRI, which is included in the [SubjectCMC.zip](https://download.fieldtriptoolbox.org/tutorial/SubjectCMC.zip) dataset. After reading the anatomical MRI, we reslice it along the axes of the head coordinate system for improved visualization.
 mri = ft_read_mri('SubjectCMC.mri');
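
The reslicing mentioned above can be sketched as follows (default parameters assumed):

```matlab
mri = ft_read_mri('SubjectCMC.mri');

% reslice along the axes of the head coordinate system for improved visualization
cfg          = [];
mri_resliced = ft_volumereslice(cfg, mri);
```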
@@ -376,7 +376,7 @@ Next, we can proceed with the interpolation.
 cfg.downsample = 2;
 interp = ft_sourceinterpolate(cfg, source, mri);
-There are various ways to visualise the volumetric interpolated data. The most straightforward way is using **[ft_sourceplot](/reference/ft_sourceplot)**.
+There are various ways to visualize the volumetric interpolated data. The most straightforward way is using **[ft_sourceplot](/reference/ft_sourceplot)**.
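
A minimal sketch of such a plotting call; the functional parameter name `'pow'` is an assumption, typical for interpolated beamformer power:

```matlab
cfg              = [];
cfg.method       = 'ortho';   % orthogonal slices through the volume
cfg.funparameter = 'pow';     % assumed name of the interpolated functional parameter
ft_sourceplot(cfg, interp);
```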

tutorial/preproc/artifacts.md (+1, -1)
@@ -60,7 +60,7 @@ If you want to use **[ft_rejectvisual](/reference/ft_rejectvisual)** on continuo
 The **[ft_databrowser](/reference/ft_databrowser)** function works both for continuous and segmented data, and works with the data either still on disk or already read into memory. It allows you to browse through the data and to mark with the mouse segments with an artifact. Contrary to ft_rejectvisual, ft_databrowser does not return the cleaned data and also does not allow you to delete bad channels (though you can switch them off from visualization). Instead it returns in the output `cfg` a list of segments, expressed as begin and end sample relative to the recording. After detecting the segments with the artifacts, you call **[ft_rejectartifact](/reference/ft_rejectartifact)** to remove them from your data (when the data is already in memory) or from your trial definition (when the data is still on disk).
-Noteworthy is that the **[ft_databrowser](/reference/ft_databrowser)** function can also be used to visualise the time course of the ICA components and thus easily allows you to identify the components corresponding to eye blinks, heart beat and line noise. A good ICA unmixing of your data requires that the atypical artifacts (e.g., electrode movement, squid jumps) are removed **prior** to calling **[ft_componentanalysis](/reference/ft_componentanalysis)**. After you have determined what the bad components are, you can call **[ft_rejectcomponent](/reference/ft_rejectcomponent)** to project the data back to the sensor level, excluding the bad components.
+Note that the **[ft_databrowser](/reference/ft_databrowser)** function can also be used to visualize the time course of the ICA components, which makes it easy to identify the components corresponding to eye blinks, heart beat and line noise. A good ICA unmixing of your data requires that the atypical artifacts (e.g., electrode movement, squid jumps) are removed **prior** to calling **[ft_componentanalysis](/reference/ft_componentanalysis)**. After you have determined what the bad components are, you can call **[ft_rejectcomponent](/reference/ft_rejectcomponent)** to project the data back to the sensor level, excluding the bad components.
 More information about manually dealing with artifacts can be found in the [visual artifact rejection](/tutorial/visual_artifact_rejection) tutorial.
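
The browse-then-reject workflow described above can be sketched as follows, assuming the data is already in memory as `data`:

```matlab
% browse the data and mark artifactual segments with the mouse
cfg          = [];
cfg.viewmode = 'vertical';
cfg          = ft_databrowser(cfg, data);   % marked segments are returned in cfg.artfctdef

% remove the marked segments from the data in memory
cfg.artfctdef.reject = 'complete';          % reject complete trials containing an artifact
data_clean   = ft_rejectartifact(cfg, data);
```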

tutorial/preproc/visual_artifact_rejection.md (+2, -2)
@@ -33,7 +33,7 @@ The **[ft_rejectvisual](/reference/ft_rejectvisual)** function works only for se
 The **[ft_databrowser](/reference/ft_databrowser)** function works both for continuous and segmented data and also works with the data either on disk or already read into memory. It allows you to browse through the data and to select with the mouse sections of data that contain an artifact. Those time-segments are marked. Contrary to **[ft_rejectvisual](/reference/ft_rejectvisual)**, the **[ft_databrowser](/reference/ft_databrowser)** function does not return the cleaned data and also does not allow you to delete bad channels (though you can switch them off from visualization). After detecting the time-segments with the artifacts, you should call **[ft_rejectartifact](/reference/ft_rejectartifact)** to remove them from your data (when the data is already in memory) or from your trial definition (when the data is still on disk).
-Noteworthy is that the **[ft_databrowser](/reference/ft_databrowser)** function can also be used to visualise the time course of the ICA components and thus easily allows you to identify the components corresponding to eye blinks, heart beat and line noise. Note that a proper ICA unmixing of your data requires that the atypical artifacts (e.g., electrode movement, squid jumps) are removed **prior** to calling **[ft_componentanalysis](/reference/ft_componentanalysis)**. After you have determined what the bad components are, you can call **[ft_rejectcomponent](/reference/ft_rejectcomponent)** to project the data back to the sensor level, excluding the bad components.
+Note that the **[ft_databrowser](/reference/ft_databrowser)** function can also be used to visualize the time course of the ICA components, which makes it easy to identify the components corresponding to eye blinks, heart beat and line noise. A proper ICA unmixing of your data requires that the atypical artifacts (e.g., electrode movement, squid jumps) are removed **prior** to calling **[ft_componentanalysis](/reference/ft_componentanalysis)**. After you have determined what the bad components are, you can call **[ft_rejectcomponent](/reference/ft_rejectcomponent)** to project the data back to the sensor level, excluding the bad components.
 ## Procedure
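
The ICA-based cleaning described above can be sketched as follows; the component numbers here are of course hypothetical and must be determined by visual inspection:

```matlab
% decompose the data into independent components
cfg        = [];
cfg.method = 'runica';
comp       = ft_componentanalysis(cfg, data);

% inspect the component time courses, e.g., with ft_databrowser, then
% project the data back to the sensor level without the bad components
cfg           = [];
cfg.component = [3 7];   % hypothetical indices of eye-blink and heartbeat components
data_clean    = ft_rejectcomponent(cfg, comp, data);
```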
@@ -136,7 +136,7 @@ This operation could be repeated for each of the metrics, by selecting the metri
 {% include markup/skyblue %}
 The summary mode in **[ft_rejectvisual](/reference/ft_rejectvisual)** has been primarily designed to visually screen for artifacts in channels of a consistent type, i.e., only the axial MEG gradiometers in this example.
-If you have EEG data, the EOG channels have the same physical units and very similar amplitudes and therefore can be visualised simultaneously.
+If you have EEG data, the EOG channels have the same physical units and very similar amplitudes and can therefore be visualized simultaneously.
 If you have data from a 306-channel Neuromag system, you will have both magnetometers and planar gradiometers, which have different physical units and rather different numeric ranges. Combining them in a single visualization is likely to result in a biased selection, with either mainly the magnetometers or mainly the gradiometers being used to find artifacts.

tutorial/source/beamformer.md (+1, -1)
@@ -152,7 +152,7 @@ Prior to doing the spectral decomposition with ft_freqanalysis you have to ensur
 Furthermore, after selecting the channels you want to use in the source reconstruction (excluding the bad channels) and after re-referencing them, you should not make sub-selections of channels anymore or throw out channels, because that would cause the data to no longer be average referenced.
 {% include markup/end %}
-You can now visualise the headmodel together with the sensor positions:
+You can now visualize the headmodel together with the sensor positions:
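
A sketch of such a figure, assuming a volume conduction model `headmodel` and a gradiometer description `grad` (e.g., taken from `data.grad`) are available:

```matlab
figure
ft_plot_headmodel(headmodel, 'facecolor', 'cortex', 'edgecolor', 'none');
alpha 0.5                          % make the head model semi-transparent
ft_plot_sens(grad, 'style', '*b'); % overlay the sensor positions
```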

tutorial/source/beamformingextended.md (+3, -3)
@@ -24,7 +24,7 @@ In the [sensor-level tutorial](/tutorial/sensor_analysis) we found gamma-band os
 The brain is divided into a regular three-dimensional grid and the source strength for each grid point is computed. The method applied in this example is termed Dynamic Imaging of Coherent Sources (DICS) and the estimates are calculated in the frequency domain (Gross et al. 2001). Other beamformer methods rely on source estimates calculated in the time domain, e.g., the Linearly Constrained Minimum Variance (LCMV) and Synthetic Aperture Magnetometry (SAM) methods (van Veen et al., 1997; Robinson and Cheyne, 1997). These methods produce a 3D spatial distribution of the power of the neuronal sources, which is then overlaid on a structural image of the subject's brain. These distributions of source power can then be subjected to statistical analysis. It is always ideal to contrast the activity of interest against some control or baseline activity. Options for this will be discussed below, but it is best to keep this in mind when designing your experiment from the start, rather than struggle to find a suitable control or baseline after data collection.
-When conducting a multiple-subject study, it is essential that averaging over subjects does not violate any statistical assumption. One of these assumptions is that subject's sources are represented in a common space, i.e. an averaged grid point represents the estimate of the same brain region across subjects. One way to get subjects in a common space is by spatially deforming and interpolating the source reconstruction after beamforming. However, we will use (and recommend) an alternative way that does not require interpolation. Prior to source estimation we construct a regular grid in MNI template space and spatially deform this grid to each of the individual subjects (note that you will only have the data from one subject here). The beamformer estimation is done on the direct grid mapped to MNI space, so that the results can be compared over subjects. This procedure is explained in detail [in this example code](/example/sourcemodel_aligned2mni). Creating the MNI template grid only needs to be done once, and the result is provided in the fieldtrip/template directory. We strongly suggest that you have a quick (but thorough) look at the example code page and understand the essence of what is being done there anyway!
+When conducting a multiple-subject study, it is essential that averaging over subjects does not violate any statistical assumption. One of these assumptions is that the subjects' sources are represented in a common space, i.e. an averaged grid point represents the estimate of the same brain region across subjects. One way to get subjects into a common space is by spatially deforming and interpolating the source reconstruction after beamforming. However, we will use (and recommend) an alternative way that does not require interpolation. Prior to source estimation we construct a regular grid in MNI template space and spatially deform this grid to each of the individual subjects (note that you will only have the data from one subject here). The beamformer estimation is done on the deformed grid, which maps directly onto MNI space, so that the results can be compared over subjects. This procedure is explained in detail [in this example code](/example/sourcemodel_aligned2mni). Creating the MNI template grid only needs to be done once, and the result is provided in the `fieldtrip/template` directory. We strongly suggest that you have a quick (but thorough) look at the example code page and understand the essence of what is being done there.
 The tutorial is split into three parts. In the first part of the tutorial, we will explain how to compute the forward and inverse model, which is the fundamental basis for source-level analysis. In the second part, we will localize the sources responsible for the posterior gamma activity upon visual stimulation. In the third part of the tutorial, we will compute coherence to study the oscillatory synchrony between two sources in the brain. This is computed in the frequency domain by normalizing the magnitude of the summed cross-spectral density between two signals by their respective power. For each frequency bin the coherence value is a number between 0 and 1. The coherence values reflect the consistency of the phase difference between the two signals at a given frequency. In the dataset we will analyze, the subject was required to maintain an isometric contraction of a forearm muscle. This session thus covers cortico-muscular coherence at the source level. The same principles, however, apply to cortico-cortical coherence, for which the interested reader can have a look at [another tutorial](/tutorial/connectivityextended).
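
Loading the precomputed MNI template grid could look like this; the filename is an assumption, as several resolutions are shipped in the template directory:

```matlab
% hypothetical filename; check fieldtrip/template/sourcemodel for the available resolutions
load('standard_sourcemodel3d10mm.mat');   % loads a variable called sourcemodel
template_grid = sourcemodel;
```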
@@ -441,7 +441,7 @@ If you input a sourcemodel on which you have **not** already computed the leadfi
 ### Plotting cortico-muscular coherent sources
-In order to be able to visualise the result with respect to the anatomical MRI, we have to do the exact same step as described above, just this time we have to interpolate the coherence parameter rather than the power parameter:
+In order to visualize the result with respect to the anatomical MRI, we have to do the same step as described above, only this time interpolating the coherence parameter rather than the power parameter:
 source_coh_lft.pos = template.sourcemodel.pos;
 source_coh_lft.dim = template.sourcemodel.dim;
@@ -452,7 +452,7 @@ In order to be able to visualise the result with respect to the anatomical MRI,
-Again there are various ways to visualise the volumetric interpolated data. The most straightforward way is using **[ft_sourceplot](/reference/ft_sourceplot)**.
+Again there are various ways to visualize the volumetric interpolated data. The most straightforward way is using **[ft_sourceplot](/reference/ft_sourceplot)**.
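
The interpolation of the coherence parameter can be sketched as follows; the parameter name `'coh'` is an assumption about how the coherence is stored in the source structure:

```matlab
cfg            = [];
cfg.parameter  = 'coh';   % assumed name of the coherence parameter
source_coh_int = ft_sourceinterpolate(cfg, source_coh_lft, mri);
```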

tutorial/source/dipolefitting.md (+2, -2)
@@ -47,7 +47,7 @@ To fit the dipole models to the data, we will perform the following steps:
 - Using **[ft_dipolefitting](/reference/ft_dipolefitting)** we will fit dipole models to the averaged data for each condition and to the difference between the conditions.
 - Throughout this tutorial, we will use the [high-level plotting](/tutorial/plotting) functions to look at the data, and some [lower-level plotting](/development/module/plotting) functions to make detailed visualizations.
-### Read and visualise the anatomical data
+### Read and visualize the anatomical data
 We start with the anatomical MRI data, which comes directly from the scanner in DICOM format. You can download [dicom.zip](https://download.fieldtriptoolbox.org/workshop/natmeg2014/dicom.zip) from our download server. We suggest that you unzip the DICOM files in a separate directory.
@@ -78,7 +78,7 @@ The high-level plotting functions do not offer support for flexible plotting of
 {% include image src="/assets/img/workshop/natmeg2014/dipolefitting/natmeg_dip_geometry1.png" width="500" %}
-It is possible to visualise the anatomical MRI using the **[ft_sourceplot](/reference/ft_sourceplot)** function. Usually we use the function to overlay functional data from a beamformer source reconstruction on the anatomical MRI, but in the absence of the functional data it will simply show the anatomical MRI. Besides showing the MRI, you can also use the function to see how the MRI is aligned with the coordinate system, and how the voxel indices [i j k] map onto geometrical coordinates [x y z].
+It is possible to visualize the anatomical MRI using the **[ft_sourceplot](/reference/ft_sourceplot)** function. Usually we use this function to overlay functional data from a beamformer source reconstruction on the anatomical MRI, but in the absence of functional data it will simply show the anatomical MRI. Besides showing the MRI, you can also use the function to see how the MRI is aligned with the coordinate system, and how the voxel indices [i j k] map onto geometrical coordinates [x y z].
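
Showing just the anatomical MRI with **[ft_sourceplot](/reference/ft_sourceplot)** can be sketched as follows; the filename is hypothetical and stands for any single file of the DICOM series:

```matlab
mri = ft_read_mri('single_file_of_the_dicom_series.IMA');  % hypothetical filename

cfg = [];                 % no functional data specified, so only the anatomy is shown
ft_sourceplot(cfg, mri);
```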