
Commit 8594304

spelling and consistent formatting
1 parent 886e212 commit 8594304

25 files changed, +51 -52 lines changed

getting_started/eeg/biosemi.md

Lines changed: 1 addition & 1 deletion
@@ -21,7 +21,7 @@ The [24 bit file format](http://www.biosemi.com/faq/file_format.htm) has the pra
 The [high sampling rate](http://www.biosemi.com/faq/adjust_samplerate.htm) (minimally 2kHz, i.e. 2000 Hz) has the consequence that files are much larger than with most acquisition systems. e.g., when compared to a BrainAmp data file sampled at 512 Hz with the same number of electrodes, the Biosemi data file will be approximately 3x as large on disk and 4x as large after having read it in memory. That means that for processing bdf files you typically will want to have a computer with more than the standard amount of RAM. After reading the data in MATLAB memory, a common procedure is to downsample it to reduce the sampling rate to 500 Hz using **[ft_resampledata](/reference/ft_resampledata)**. This will make all subsequent analyses run much faster and will facilitate doing the analysis with less RAM.

-Most Biosemi [electrode caps](http://www.biosemi.com/headcap.htm) have a unique channel naming scheme. Also the exact number and positions is different from those in other EEG systems. Consequently, when plotting the channel-level data with a topographic arrangement of the channels, or when plotting the topographies (see the [plotting tutorial](/tutorial/plotting#plotting_data_at_the_channel_level) and the [channel layout tutorial](/tutorial/layout)), you will have to use a layout that is specific to your Biosemi electrode cap. FieldTrip includes the following template 2D layout files in the fieldtrip/template/layout directory, but you might want to construct your own layout.
+Most Biosemi [electrode caps](http://www.biosemi.com/headcap.htm) have a unique channel naming scheme. Also the exact number and positions is different from those in other EEG systems. Consequently, when plotting the channel-level data with a topographic arrangement of the channels, or when plotting the topographies (see the [plotting tutorial](/tutorial/plotting#plotting_data_at_the_channel_level) and the [channel layout tutorial](/tutorial/layout)), you will have to use a layout that is specific to your Biosemi electrode cap. FieldTrip includes the following template 2D layout files in the `fieldtrip/template/layout` directory, but you might want to construct your own layout.

 - biosemi16.lay
 - biosemi32.lay
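The unchanged context above mentions downsampling Biosemi recordings to 500 Hz with **[ft_resampledata](/reference/ft_resampledata)**. A minimal sketch of that step, assuming a raw data structure `data` returned by ft_preprocessing, could look like:

    % downsample to 500 Hz to reduce memory use and speed up later analyses
    % (`data` is assumed to come from ft_preprocessing on a .bdf recording)
    cfg            = [];
    cfg.resamplefs = 500;      % target sampling rate in Hz
    cfg.detrend    = 'no';     % only resample, do not detrend
    data_resampled = ft_resampledata(cfg, data);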

template/sourcemodel.md

Lines changed: 1 addition & 1 deletion
@@ -50,7 +50,7 @@ You will notice that the regularly spaced 3D grids are not that interesting to l
 When doing source reconstruction using minimum norm estimation (MNE, also known as linear estimation) techniques, the assumption is that the sources in the brain are distributed and that only the strength at all possible cortical locations is to be estimated. Since the strength of all dipoles in the cortical mesh is estimated simultaneously, sources should only be placed in regions where generators might be present. MNE therefore usually assumes a source model that consists of grey matter only, which can be modeled as a highly folded cortical sheet.

-A canonical cortical sheet is available in fieldtrip/template/sourcemodel with different numbers of vertices (20484, 8192 and 5124 vertices). These files were taken from the SPM8 release version; they refer to the canonical T1 anatomical MRI and are expressed in MNI coordinates.
+A canonical cortical sheet is available in the `fieldtrip/template/sourcemodel` directory with different numbers of vertices (20484, 8192 and 5124 vertices). These files were taken from the SPM8 release version; they refer to the canonical T1 anatomical MRI and are expressed in MNI coordinates.

 - cortex_20484.surf.gii
 - cortex_8196.surf.gii
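The context above describes the canonical cortical sheets shipped with FieldTrip. As a quick sketch, one of the listed files could be loaded and inspected with ft_read_headshape and ft_plot_mesh; the relative path assumes a standard FieldTrip installation:

    % load a canonical cortical sheet and plot it for visual inspection
    sourcemodel = ft_read_headshape('template/sourcemodel/cortex_8196.surf.gii');
    figure
    ft_plot_mesh(sourcemodel, 'facecolor', [0.8 0.8 0.8], 'edgecolor', 'none');
    camlight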

tutorial/connectivity/coherence.md

Lines changed: 2 additions & 2 deletions
@@ -362,7 +362,7 @@ Once we computed this, we can use **[ft_sourceanalysis](/reference/ft_sourceanal
     cfg.unit = 'cm';
     source = ft_sourceanalysis(cfg, freq);

-The resulting source-structure is a volumetric reconstruction which is specified in head-coordinates. In order to be able to visualise the result with respect to the subject's MRI, we have to interpolate the functional data to the anatomical MRI. For this, we need the subject's MRI, which is included in the [SubjectCMC.zip](https://download.fieldtriptoolbox.org/tutorial/SubjectCMC.zip) dataset. After reading the anatomical MRI, we reslice it along the axes of the head coordinate system for improved visualization.
+The resulting source-structure is a volumetric reconstruction which is specified in head-coordinates. In order to be able to visualize the result with respect to the subject's MRI, we have to interpolate the functional data to the anatomical MRI. For this, we need the subject's MRI, which is included in the [SubjectCMC.zip](https://download.fieldtriptoolbox.org/tutorial/SubjectCMC.zip) dataset. After reading the anatomical MRI, we reslice it along the axes of the head coordinate system for improved visualization.

     mri = ft_read_mri('SubjectCMC.mri');

@@ -376,7 +376,7 @@ Next, we can proceed with the interpolation.

     cfg.downsample = 2;
     interp = ft_sourceinterpolate(cfg, source, mri);

-There are various ways to visualise the volumetric interpolated data. The most straightforward way is using **[ft_sourceplot](/reference/ft_sourceplot)**.
+There are various ways to visualize the volumetric interpolated data. The most straightforward way is using **[ft_sourceplot](/reference/ft_sourceplot)**.

     cfg = [];
     cfg.method = 'ortho';
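Pulling the context fragments above together, a minimal sketch of the interpolate-and-plot step, assuming `source` from ft_sourceanalysis and `mri` from ft_read_mri and using power as the functional parameter (the tutorial itself may interpolate a different parameter), could be:

    % interpolate the functional result onto the anatomical MRI
    cfg              = [];
    cfg.parameter    = 'pow';
    cfg.downsample   = 2;
    interp           = ft_sourceinterpolate(cfg, source, mri);

    % plot orthogonal slices with the functional data as overlay
    cfg              = [];
    cfg.method       = 'ortho';
    cfg.funparameter = 'pow';
    ft_sourceplot(cfg, interp);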

tutorial/intracranial/mouse_eeg.md

Lines changed: 1 addition & 1 deletion
@@ -693,7 +693,7 @@ Since the original anatomical and labeled MRI are expressed in the same coordina
     atlas_realigned.unit = mri_realigned.unit;
     atlas_realigned.coordsys = mri_realigned.coordsys;

-We can visualise the atlas in the same way as the anatomical MRI.
+We can visualize the atlas in the same way as the anatomical MRI.

     cfg = [];
     ft_sourceplot(cfg, atlas);

tutorial/preproc/artifacts.md

Lines changed: 1 addition & 1 deletion
@@ -60,7 +60,7 @@ If you want to use **[ft_rejectvisual](/reference/ft_rejectvisual)** on continuo
 The **[ft_databrowser](/reference/ft_databrowser)** function works both for continuous and segmented data, and works with the data either still on disk or already read into memory. It allows you to browse through the data and to mark with the mouse segments with an artifact. Contrary to ft_rejectvisual, ft_databrowser does not return the cleaned data and also does not allow you to delete bad channels (though you can switch them off from visualization). Instead it returns in the output `cfg` a list of segments, expressed as begin and end sample relative to the recording. After detecting the segments with the artifacts, you call **[ft_rejectartifact](/reference/ft_rejectartifact)** to remove them from your data (when the data is already in memory) or from your trial definition (when the data is still on disk).

-Noteworthy is that the **[ft_databrowser](/reference/ft_databrowser)** function can also be used to visualise the time course of the ICA components and thus easily allows you to identify the components corresponding to eye blinks, heart beat and line noise. A good ICA unmixing of your data requires that the atypical artifacts (e.g., electrode movement, squid jumps) are removed **prior** to calling **[ft_componentanalysis](/reference/ft_componentanalysis)**. After you have determined what the bad components are, you can call **[ft_rejectcomponent](/reference/ft_rejectcomponent)** to project the data back to the sensor level, excluding the bad components.
+Noteworthy is that the **[ft_databrowser](/reference/ft_databrowser)** function can also be used to visualize the time course of the ICA components and thus easily allows you to identify the components corresponding to eye blinks, heart beat and line noise. A good ICA unmixing of your data requires that the atypical artifacts (e.g., electrode movement, squid jumps) are removed **prior** to calling **[ft_componentanalysis](/reference/ft_componentanalysis)**. After you have determined what the bad components are, you can call **[ft_rejectcomponent](/reference/ft_rejectcomponent)** to project the data back to the sensor level, excluding the bad components.

 More information about manually dealing with artifacts is found in the [visual artifact rejection](/tutorial/visual_artifact_rejection) tutorial.
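The context paragraph above describes the ft_databrowser followed by ft_rejectartifact workflow. A minimal sketch, assuming segmented data already in memory as `data` from ft_preprocessing, could be:

    % browse the data and mark artifactual segments with the mouse;
    % the marked segments are returned in the output cfg
    cfg          = [];
    cfg.viewmode = 'vertical';
    cfg          = ft_databrowser(cfg, data);

    % remove the trials that overlap with the marked segments
    cfg.artfctdef.reject = 'complete';   % drop the whole trial
    data_clean   = ft_rejectartifact(cfg, data);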

tutorial/preproc/visual_artifact_rejection.md

Lines changed: 2 additions & 2 deletions
@@ -33,7 +33,7 @@ The **[ft_rejectvisual](/reference/ft_rejectvisual)** function works only for se
 The **[ft_databrowser](/reference/ft_databrowser)** function works both for continuous and segmented data and also works with the data either on disk or already read into memory. It allows you to browse through the data and to select with the mouse sections of data that contain an artifact. Those time-segments are marked. Contrary to **[ft_rejectvisual](/reference/ft_rejectvisual)**, the **[ft_databrowser](/reference/ft_databrowser)** function does not return the cleaned data and also does not allow you to delete bad channels (though you can switch them off from visualization). After detecting the time-segments with the artifacts, you should call **[ft_rejectartifact](/reference/ft_rejectartifact)** to remove them from your data (when the data is already in memory) or from your trial definition (when the data is still on disk).

-Noteworthy is that the **[ft_databrowser](/reference/ft_databrowser)** function can also be used to visualise the time course of the ICA components and thus easily allows you to identify the components corresponding to eye blinks, heart beat and line noise. Note that a proper ICA unmixing of your data requires that the atypical artifacts (e.g., electrode movement, squid jumps) are removed **prior** to calling **[ft_componentanalysis](/reference/ft_componentanalysis)**. After you have determined what the bad components are, you can call **[ft_rejectcomponent](/reference/ft_rejectcomponent)** to project the data back to the sensor level, excluding the bad components.
+Noteworthy is that the **[ft_databrowser](/reference/ft_databrowser)** function can also be used to visualize the time course of the ICA components and thus easily allows you to identify the components corresponding to eye blinks, heart beat and line noise. Note that a proper ICA unmixing of your data requires that the atypical artifacts (e.g., electrode movement, squid jumps) are removed **prior** to calling **[ft_componentanalysis](/reference/ft_componentanalysis)**. After you have determined what the bad components are, you can call **[ft_rejectcomponent](/reference/ft_rejectcomponent)** to project the data back to the sensor level, excluding the bad components.

 ## Procedure

@@ -136,7 +136,7 @@ This operation could be repeated for each of the metrics, by selecting the metri

 {% include markup/skyblue %}
 The summary mode in **[ft_rejectvisual](/reference/ft_rejectvisual)** has been primarily designed to visually screen for artefacts in channels of a consistent type, i.e., only for the axial MEG gradiometers in this example.

-If you have EEG data, the EOG channels have the same physical units and very similar amplitudes and therefore can be visualised simultaneously.
+If you have EEG data, the EOG channels have the same physical units and very similar amplitudes and therefore can be visualized simultaneously.

 If you have data from a 306-channel Neuromag system, you will have both magnetometers and planar gradiometers, which have different physical units and rather different numbers. Combining them in a single visualization is likely to result in a biassed selection, either mainly relying on the magnetometers or the gradiometers being used to find artefacts.
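Since the changed lines above mention browsing ICA components and then projecting them out, here is a minimal sketch of that workflow, assuming cleaned segmented data in `data` and a layout file appropriate for the recording (the layout name and component indices below are only placeholders):

    % decompose the data into independent components
    cfg        = [];
    cfg.method = 'runica';
    comp       = ft_componentanalysis(cfg, data);

    % browse the component time courses and topographies to spot
    % eye blinks, heart beat and line noise components
    cfg          = [];
    cfg.viewmode = 'component';
    cfg.layout   = 'biosemi64.lay';   % placeholder layout, adjust to your cap or MEG system
    ft_databrowser(cfg, comp);

    % project the data back to the sensor level without the bad components
    cfg           = [];
    cfg.component = [1 3];            % example indices of the components to remove
    data_clean    = ft_rejectcomponent(cfg, comp, data);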

tutorial/source/beamformer.md

Lines changed: 1 addition & 1 deletion
@@ -152,7 +152,7 @@ Prior to doing the spectral decomposition with ft_freqanalysis you have to ensur
 Furthermore, after selecting the channels you want to use in the source reconstruction (excluding the bad channels) and after rereferencing them, you should not make sub-selections of channels any more and throw out channels, because that would cause the data not be average referenced any more.
 {% include markup/end %}

-You can now visualise the headmodel together with the sensor positions:
+You can now visualize the headmodel together with the sensor positions:

     figure
     ft_plot_sens(freqPost.grad);
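Only the first two lines of the plotting snippet are visible in this hunk. As a sketch of how the head model is typically added to the same figure, assuming a head model structure called `headmodel` and that ft_plot_headmodel accepts these options in your FieldTrip version:

    % plot the MEG sensors and add a semi-transparent head model
    figure
    ft_plot_sens(freqPost.grad);
    hold on
    ft_plot_headmodel(headmodel, 'facealpha', 0.5, 'edgecolor', 'none');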

tutorial/source/beamformingextended.md

Lines changed: 3 additions & 3 deletions
@@ -24,7 +24,7 @@ In the [sensor-level tutorial](/tutorial/sensor_analysis) we found gamma-band os
 The brain is divided in a regular three dimensional grid and the source strength for each grid point is computed. The method applied in this example is termed Dynamical Imaging of Coherent Sources (DICS) and the estimates are calculated in the frequency domain (Gross et al. 2001). Other beamformer methods rely on source estimates calculated in the time domain, e.g., the Linearly Constrained Minimum Variance (LCMV) and Synthetic Aperture Magnetometry (SAM) methods (van Veen et al., 1997; Robinson and Cheyne, 1997). These methods produce a 3D spatial distribution of the power of the neuronal sources. This distribution is then overlaid on a structural image of the subject's brain. These distributions of source power can then be subjected to statistical analysis. It is always ideal to contrast the activity of interest against some control/baseline activity. Options for this will be discussed below, but it is best to keep this in mind when designing your experiment from the start, rather than struggle to find a suitable control/baseline after data collection.

-When conducting a multiple-subject study, it is essential that averaging over subjects does not violate any statistical assumption. One of these assumptions is that subject's sources are represented in a common space, i.e. an averaged grid point represents the estimate of the same brain region across subjects. One way to get subjects in a common space is by spatially deforming and interpolating the source reconstruction after beamforming. However, we will use (and recommend) an alternative way that does not require interpolation. Prior to source estimation we construct a regular grid in MNI template space and spatially deform this grid to each of the individual subjects (note that you will only have the data from one subject here). The beamformer estimation is done on the direct grid mapped to MNI space, so that the results can be compared over subjects. This procedure is explained in detail [in this example code](/example/sourcemodel_aligned2mni). Creating the MNI template grid only needs to be done once, and the result is provided in the fieldtrip/template directory. We strongly suggest that you have a quick (but thorough) look at the example code page and understand the essence of what is being done there anyway!
+When conducting a multiple-subject study, it is essential that averaging over subjects does not violate any statistical assumption. One of these assumptions is that subject's sources are represented in a common space, i.e. an averaged grid point represents the estimate of the same brain region across subjects. One way to get subjects in a common space is by spatially deforming and interpolating the source reconstruction after beamforming. However, we will use (and recommend) an alternative way that does not require interpolation. Prior to source estimation we construct a regular grid in MNI template space and spatially deform this grid to each of the individual subjects (note that you will only have the data from one subject here). The beamformer estimation is done on the direct grid mapped to MNI space, so that the results can be compared over subjects. This procedure is explained in detail [in this example code](/example/sourcemodel_aligned2mni). Creating the MNI template grid only needs to be done once, and the result is provided in the `fieldtrip/template` directory. We strongly suggest that you have a quick (but thorough) look at the example code page and understand the essence of what is being done there anyway!

 The tutorial is split into three parts. In the first part of the tutorial, we will explain how to compute the forward and inverse model, which is the fundamental basic for source level analysis. In the second part, we will localize the sources responsible for the posterior gamma activity upon visual stimulation. In the third part of the tutorial, we will compute coherence to study the oscillatory synchrony between two sources in the brain. This is computed in the frequency domain by normalizing the magnitude of the summed cross-spectral density between two signals by their respective power. For each frequency bin the coherence value is a number between 0 and 1. The coherence values reflect the consistency of the phase difference between the two signals at a given frequency. In the dataset we will analyze the subject was required to maintain an isometric contraction of a forearm muscle. The example in this session covers thus cortico-muscular coherence on source level. The same principles, however, apply to cortico-cortical coherence, for which the interested reader can have a look at [another tutorial](/tutorial/connectivityextended).

@@ -441,7 +441,7 @@ If you input a sourcemodel on which you have **not** already computed the leadfi

 ### Plotting cortico-muscular coherent sources

-In order to be able to visualise the result with respect to the anatomical MRI, we have to do the exact same step as described above, just this time we have to interpolate the coherence parameter rather than the power parameter:
+In order to be able to visualize the result with respect to the anatomical MRI, we have to do the exact same step as described above, just this time we have to interpolate the coherence parameter rather than the power parameter:

     source_coh_lft.pos = template.sourcemodel.pos;
     source_coh_lft.dim = template.sourcemodel.dim;

@@ -452,7 +452,7 @@ In order to be able to visualise the result with respect to the anatomical MRI,

     cfg.coordsys = 'mni';
     source_coh_int = ft_sourceinterpolate(cfg, source_coh_lft, template_mri);

-Again there are various ways to visualise the volumetric interpolated data. The most straightforward way is using **[ft_sourceplot](/reference/ft_sourceplot)**.
+Again there are various ways to visualize the volumetric interpolated data. The most straightforward way is using **[ft_sourceplot](/reference/ft_sourceplot)**.

     cfg = [];
     cfg.method = 'ortho';
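The first changed paragraph above refers to a template grid in the `fieldtrip/template` directory that is warped to the individual anatomy. A sketch of that step, assuming `ftpath` points to the FieldTrip installation, that the template file name below is the one you want (check `fieldtrip/template/sourcemodel` for the available resolutions), and that a coregistered individual `mri` is already in memory:

    % load a template grid defined in MNI space; the .mat file is assumed
    % to contain a variable called `sourcemodel`
    load(fullfile(ftpath, 'template', 'sourcemodel', 'standard_sourcemodel3d10mm.mat'));
    template_grid = sourcemodel;

    % warp the template grid to the individual anatomy, so that each grid
    % point corresponds to the same MNI location across subjects
    cfg           = [];
    cfg.warpmni   = 'yes';
    cfg.template  = template_grid;
    cfg.nonlinear = 'yes';
    cfg.mri       = mri;
    sourcemodel   = ft_prepare_sourcemodel(cfg);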

tutorial/source/dipolefitting.md

Lines changed: 2 additions & 2 deletions
@@ -47,7 +47,7 @@ To fit the dipole models to the data, we will perform the following steps:
 - Using **[ft_dipolefitting](/reference/ft_dipolefitting)** we will fit dipole models to the averaged data for each condition and to the difference between the conditions.
 - Throughout this tutorial, we will use the [high-level plotting](/tutorial/plotting) functions to look at the data, and some [lower-level plotting](/development/module/plotting) functions to make detailed visualizations.

-### Read and visualise the anatomical data
+### Read and visualize the anatomical data

 We start with the anatomical MRI data, which comes directly from the scanner in DICOM format. You can download the [dicom.zip](https://download.fieldtriptoolbox.org/workshop/natmeg2014/dicom.zip) from our download server. We suggest that you unzip the dicom files in a separate directory.

@@ -78,7 +78,7 @@ The high-level plotting functions do not offer support for flexible plotting of

 {% include image src="/assets/img/workshop/natmeg2014/dipolefitting/natmeg_dip_geometry1.png" width="500" %}

-It is possible to visualise the anatomical MRI using the **[ft_sourceplot](/reference/ft_sourceplot)** function. Usually we use the function to overlay functional data from a beamformer source reconstruction on the anatomical MRI, but in the absence of the functional data it will simply show the anatomical MRI. Besides showing the MRI, you can also use the function to see how the MRI is aligned with the coordinate system, and how the voxel indices [i j k] map onto geometrical coordinates [x y z].
+It is possible to visualize the anatomical MRI using the **[ft_sourceplot](/reference/ft_sourceplot)** function. Usually we use the function to overlay functional data from a beamformer source reconstruction on the anatomical MRI, but in the absence of the functional data it will simply show the anatomical MRI. Besides showing the MRI, you can also use the function to see how the MRI is aligned with the coordinate system, and how the voxel indices [i j k] map onto geometrical coordinates [x y z].

     figure
     cfg = [];
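As a sketch of the step described in the second hunk above (reading the DICOM series and showing the bare anatomy with ft_sourceplot; the DICOM file name is a placeholder for any file of the unzipped series):

    % read the anatomical MRI from one file of the DICOM series
    mri = ft_read_mri('00000001.dcm');   % placeholder file name

    % without functional data, ft_sourceplot simply shows the anatomy, which is
    % useful to check the coordinate system and the voxel-to-world mapping
    cfg = [];
    ft_sourceplot(cfg, mri);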
