# FWHM & RESEL details for SPM and FSL

## Executive Summary

Random Field Theory depends on a measure called the RESEL count to determine corrected P-values. How RESELs are reported in SPM and FSL, however, differs. This post reviews the basics of RESELs and FWHM and details how to interpret the values reported by each software.

## Background

Random Field Theory (RFT) is the method used in both FSL and SPM to make inference on statistic images. Specifically, it finds thresholds that control the chance of false positives while searching the brain for activations or group differences. The magic of RFT, unlike other methods such as Bonferroni, is that it accounts for the spatial smoothness of the data; as a result, a less severe correction is applied to highly smooth data while a more stringent correction is applied to relatively rough data.

The details of how RFT measures smoothness are technical (it is the inverse square root of the determinant of the variance-covariance matrix of the gradient of the component fields), but fortunately Keith Worsley introduced the notion of FWHM and RESELs [1] to make matters simpler.

For RFT, the FWHM is defined as the Full Width at Half Maximum of the Gaussian kernel required to smooth independent, white noise data to have the same “smoothness” as your data. Here, “smoothness” refers to the technical definition involving the “square root of the determinant…”. Crucially, FWHM is not the applied smoothing, e.g. the Gaussian kernel smoothing applied as part of preprocessing. It is the smoothness of the data fed into the GLM, which is a combination of the intrinsic smoothness of the data (affected by things like image reconstruction parameters and physiological noise) and any smoothing applied during preprocessing. As we have 3-D data, FWHM is usually represented as a 3-vector, [FWHMX FWHMY FWHMZ], though be careful to check whether the units are voxels or mm.
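
Since the two units differ only by the voxel dimensions, converting between them is a per-axis multiplication. A minimal sketch (all values here are made up for illustration; in practice the voxel sizes come from the image header, e.g. via `fslval`):

```shell
# Convert FWHM from voxel units to mm, axis by axis.
fwhm_vox="3.1 3.4 2.9"   # FWHM in voxels (illustrative)
vox_mm="2 2 2.2"         # voxel dimensions in mm (illustrative)
echo "$fwhm_vox $vox_mm" | awk '{printf "%.2f %.2f %.2f\n", $1*$4, $2*$5, $3*$6}'
# -> 6.20 6.80 6.38  (FWHM in mm)
```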

RESEL stands for RESolution ELement, and is a virtual voxel with dimensions [FWHMX FWHMY FWHMZ]. The RESEL count is the number of RESELs that fit into your search volume; in math mode:

R = V / ( FWHMX × FWHMY × FWHMZ )

where V is the search volume; note that V can be in voxels or mm³ (or whatever) as long as FWHMX, FWHMY & FWHMZ have the same units.

Note a possible source of confusion: A “RESEL” is a 3D object, a cuboid; the “RESEL count” is a scalar, your search volume in units of RESELs.
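
In code the RESEL count is just the formula above; a quick sketch with illustrative numbers (both quantities in voxel units):

```shell
# RESEL count: R = V / (FWHMX * FWHMY * FWHMZ)
V=200000            # search volume in voxels (illustrative)
fwhm="3.0 3.0 2.5"  # FWHM in voxels (illustrative)
echo "$V $fwhm" | awk '{printf "%.1f\n", $1 / ($2 * $3 * $4)}'
# -> 8888.9 RESELs
```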

## SPM

SPM provides the most complete reporting of FWHM and RESEL-related quantities. In the Results figure, in the footer of the P-value table, SPM lists [FWHMx FWHMY FWHMZ] in both voxel and mm units. It also lists the search volume in RESELs (along with the same in mm3 and voxel units). Finally, it reports the size of one RESEL measured in voxels.

Note, if you check SPM’s arithmetic, you may find that R is not exactly V / ( FWHMX × FWHMY × FWHMZ ). The reason is that a more sophisticated method for measuring the search volume is used that involves counting voxel edges and faces, etc; see `spm/src/spm_resels_vol.c` & [2] for more details.

## FSL

FSL provides no information about RESELs or FWHM in its HTML output. There is some information available, however, buried in the `*.feat` directories.

In each `*.feat` directory there is a `stats` subdirectory. In that directory you’ll find a file called `smoothness` that contains three lines, one each labeled `DLH`, `VOLUME`, `RESELS`.

`DLH` is the unintuitive parameter that describes image roughness; precisely it is

`DLH` = (4 log 2)^(3/2) / ( FWHMX × FWHMY × FWHMZ )

where FWHMX, FWHMY and FWHMZ are in voxel units.

`VOLUME` is the search volume in units of voxels. And, as a source of great possible confusion, `RESELS` is not the RESEL count, but rather the size of one RESEL in voxel units:

`RESELS` = FWHMX × FWHMY × FWHMZ

Sadly, FWHMX, FWHMY and FWHMZ are never individually computed or saved. Note, however, that the geometric mean of the FWHMs is available as

AvgFWHM = `RESELS`^(1/3)

which is some consolation.
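
Putting these pieces together, both AvgFWHM and the RESEL count can be recovered from the `smoothness` file with a little awk. A sketch (the file contents below are fabricated for illustration; in a real analysis the file lives in `*.feat/stats`):

```shell
# Fake smoothness file, mimicking FSL's three-line format.
printf 'DLH 0.205\nVOLUME 200000\nRESELS 22.5\n' > smoothness

# AvgFWHM = RESELS^(1/3); RESEL count = VOLUME / RESELS.
awk '/^VOLUME/ {v=$2} /^RESELS/ {r=$2}
     END {printf "AvgFWHM %.3f vox; RESEL count %.1f\n", r^(1/3), v/r}' smoothness
# -> AvgFWHM 2.823 vox; RESEL count 8888.9
```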

Update: The verbose option to `smoothest`, `-V`, will report the individual FWHMX, FWHMY & FWHMZ. But, again, this isn’t saved as part of FEAT output.

## Who cares?

Any neuroimaging researcher should always check the estimated FWHM of their analysis. The reason is that if, for some reason, the FWHM smoothness is less than 3 voxels in any dimension, the accuracy of the RFT results can be very poor [3]. While the FSL user can’t examine the three individual FWHM values, they can at least compute AvgFWHM and check this.

Another reason is to understand differences in corrected P-values. For example, say you conduct 2 studies each with 20 subjects. You notice that, despite having similar T-values, the two studies have quite different FWE corrected P-values. The difference must be due to different RESEL counts, which in turn can be explained by either a difference in search volume (V) or smoothness (FWHM).

Finally, one more reason to check the RESELs is when you are getting bizarre results, like insanely significant results for modest cluster sizes. For example, if VBM results are poorly masked it can happen that the analysis includes a large boundary of air voxels outside the brain. Not only does this unnecessarily increase your search volume (possibly inflating your multiple testing correction), but the smoothness estimate in air will be dramatically lower than in brain tissue, and thus corrupt the accuracy of the inferences. A bizarre value for RESEL size or FWHM will reveal this problem.

For a light-touch introduction to Random Field Theory, see [3]; for more detail (but less than the mathematical papers) see the relevant chapters in these books [4,5,6].

## References

[1] Worsley, K. J., Evans, A. C., Marrett, S., & Neelin, P. (1992). A three-dimensional statistical analysis for CBF activation studies in human brain. Journal of Cerebral Blood Flow and Metabolism, 12(6), 900–918.

[2] Worsley, K. J., Marrett, S., Neelin, P., Vandal, A. C., Friston, K. J., & Evans, A. C. (1996). A unified statistical approach for determining significant signals in images of cerebral activation. Human brain mapping, 4(1), 58–73.

[3] Nichols, T. E., & Hayasaka, S. (2003). Controlling the familywise error rate in functional neuroimaging: a comparative review. Statistical Methods in Medical Research, 12(5), 419–446.

[4] Poldrack, R. A., Mumford, J. A., & Nichols, T. E. (2011). Handbook of fMRI Data Analysis (p. 450). Cambridge University Press.

[5] Penny, W. D., Friston, K. J., Ashburner, J. T., Kiebel, S. J., & Nichols, T. E. (2006). Statistical Parametric Mapping: The Analysis of Functional Brain Images (p. 656). Academic Press.

[6] Jezzard, P., Matthews, P. M., & Smith, S. M. (2002). Functional MRI: An Introduction to Methods (p. 432). Oxford University Press.

# Standardizing DVARS

Multiple authors have found the spatial standard deviation of the temporal difference data useful for detecting anomalous or artifactual scans in fMRI. Called DVARS by Power et al. (NeuroImage, 59(3), 2142-54), this measure is useful but lacks interpretable units. In particular, the nominal value of DVARS reflects both the temporal standard deviation and the degree of temporal autocorrelation.

I’ve created a simple approach to standardize DVARS, attempting to remove the dependence on temporal standard deviation and autocorrelation. See my notes on Standardizing DVARS and an FSL script DVARS.sh that implements this new version of DVARS.

# FSL Scripts

I have recently uploaded a number of FSL scripts which folks might find useful. See my FSL scripts page for details, but the new ones include

• `fslstats.sh` Allows you to use fslstats with multiple files
• `fsl_fdr.sh` Gives a much nicer interface to FDR results in FSL
• `Dummy.sh` Creates dummy variables, handy for manual/scripted creation of design matrices.

# Flame without 1st level directories – copes only.

Follow-up to Flame without 1st level directories from the NISOx blog (formerly Neuroimaging Statistics Tips & Tools)

My earlier post described how to get the basic FEAT second-level results when you just have a stack of COPE and VARCOPE images. In fact, you don’t even need the VARCOPE images.

If you just have the COPE images, in the flameo call omit `--vc=4dvarcope` (which you don’t have) and replace the `--runmode=flame1` with `--runmode=ols` and you should be all set.
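
Concretely, the flameo call from the earlier post would become something like this (a sketch, assuming the same file names as in that post):

```shell
# COPEs only: no --vc, and OLS instead of FLAME (no first-level variances).
$FSLDIR/bin/flameo --cope=4dcope --mask=mask --ld=stats \
    --dm=design.mat --cs=design.grp --tc=design.con --runmode=ols
```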

# Flame without 1st level directories

FSL’s fMRI modelling tool, FEAT, is a great tool and is easy to use if you use FSL from start to finish, in the recommended fashion. However, sometimes you might want to get FEAT group results when you don’t have all of the individual 1st level FEAT directories. This tip is for the case when you have just the 4D COPE image and the 4D VARCOPE images (all subjects, of course, already in MNI space), and you just want to do a one-sample t-test.

In the code below, I’m assuming that the following files are around:

• `4dcope` 4D COPE image
• `4dvarcope` 4D VARCOPE image
• `design.mat`, `design.con` and `design.grp` for a 1-sample t-test; use Glm_gui if you need help creating these.
• `mask` mask image
• `example_func` an example image for ‘underlay’ in the figures
```
$FSLDIR/bin/flameo --cope=4dcope --vc=4dvarcope --mask=mask --ld=stats --dm=design.mat --cs=design.grp --tc=design.con --runmode=flame1
echo $($FSLDIR/bin/fslnvols 4dcope) - 1 | bc -l > stats/dof
$FSLDIR/bin/smoothest -d $(cat stats/dof) -m mask -r stats/res4d > stats/smoothness
rm -f stats/res4d*
$FSLDIR/bin/fslmaths stats/zstat1 -mas mask thresh_zstat1
awk '/VOLUME/ {print $2}' stats/smoothness > thresh_zstat1.vol
awk '/DLH/ {print $2}' stats/smoothness > thresh_zstat1.dlh
$FSLDIR/bin/cluster -i thresh_zstat1 -c stats/cope1 -t 2.3 -p 0.05 -d $(cat thresh_zstat1.dlh) --volume=$(cat thresh_zstat1.vol) --othresh=thresh_zstat1 -o cluster_mask_zstat1 --connectivity=26 --mm --olmax=lmax_zstat1_tal.txt > cluster_zstat1_std.txt
$FSLDIR/bin/cluster2html . cluster_zstat1 -std
MinMax=$($FSLDIR/bin/fslstats thresh_zstat1 -l 0.0001 -R)
$FSLDIR/bin/overlay 1 0 example_func -a thresh_zstat1 $MinMax rendered_thresh_zstat1
$FSLDIR/bin/slicer rendered_thresh_zstat1 -S 2 750 rendered_thresh_zstat1.png
/bin/cp $FSLDIR/etc/luts/ramp.gif .ramp.gif
```

You don’t get the full, pretty output from FEAT, but you do get a pretty PNG overlay, and the HTML table of significant clusters. Generalizing to more complicated group-level models is simply a matter of changing the `design.mat` and `design.con` files, and adjusting the degrees-of-freedom calculation on the second line.

This is really just a quick hack. I hope it is of some use.

Quick addition: The code above gives you corrected cluster-wise inference. To get corrected voxel-wise inference, replace the line with the `$FSLDIR/bin/cluster` command with these four lines:

```
RESELcount=$(awk '$1~/VOLUME/{v=$2};$1~/RESELS/{r=$2};END{printf("%g",1.0*v/r)}' stats/smoothness)
FWEthresh=$(ptoz 0.05 -g $RESELcount)
$FSLDIR/bin/fslmaths thresh_zstat1 -thr $FWEthresh thresh_zstat1
$FSLDIR/bin/cluster -i thresh_zstat1 -c stats/cope1 -t $FWEthresh --othresh=thresh_zstat1 -o cluster_mask_zstat1 --connectivity=26 --mm --olmax=lmax_zstat1_tal.txt > cluster_zstat1_std.txt
```

Or, if you want to use an uncorrected threshold, use this snippet instead (replacing 2.3 with your favorite threshold).

```
UnCorrThresh=2.3
$FSLDIR/bin/cluster -i thresh_zstat1 -c stats/cope1 -t $UnCorrThresh --othresh=thresh_zstat1 -o cluster_mask_zstat1 --connectivity=26 --mm --olmax=lmax_zstat1_tal.txt > cluster_zstat1_std.txt
```