FWHM & RESEL details for SPM and FSL

Executive Summary

Random Field Theory depends on a measure called the RESEL count to determine corrected P-values. How RESELs are reported in SPM and FSL, however, differs. This post reviews the basics of RESELs and FWHM and details how to interpret the values reported by each software package.

Background

Random Field Theory (RFT) is the method used in both FSL and SPM to make inference on statistic images. Specifically, it finds thresholds that control the chance of false positives while searching the brain for activations or group differences. The magic of RFT, unlike other methods such as Bonferroni, is that it accounts for the spatial smoothness of the data; as a result, a less severe correction is applied to highly smooth data while a more stringent correction is applied to relatively rough data.

The details of how RFT measures smoothness are technical (it is the inverse square root of the determinant of the variance-covariance matrix of the gradient of the component fields), but fortunately Keith Worsley introduced the notion of FWHM and RESELs [1] to make matters simpler.

For RFT, the FWHM is defined as the Full Width at Half Maximum of the Gaussian kernel required to smooth independent, white noise data to have the same “smoothness” as your data. Here, “smoothness” refers to the technical definition involving the “square root of the determinant…”. Crucially, FWHM is not the applied smoothness, e.g. the Gaussian kernel smoothing applied as part of preprocessing. It is the smoothness of the data fed into the GLM, which is a combination of the intrinsic smoothness of the data (affected by things like image reconstruction parameters and physiological noise) and any smoothing applied during preprocessing. As we have 3-D data, FWHM is usually represented as a 3-vector, [FWHMx FWHMy FWHMz], though be careful to check whether the units are in voxels or mm.

RESEL stands for RESolution ELement, and is a virtual voxel with dimensions [FWHMx FWHMy FWHMz]. The RESEL count is the number of RESELs that fit into your search volume; in math mode:

R = V / ( FWHMx × FWHMy × FWHMz )

where V is the search volume; note that V can be in voxels or mm³ (or whatever) as long as FWHMx, FWHMy & FWHMz have the same units.

Note a possible source of confusion: a “RESEL” is a 3D object, a cuboid; the “RESEL count” is a scalar, your search volume in units of RESELs.
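
To make the arithmetic concrete, here is a minimal MATLAB sketch of the RESEL-count calculation (the FWHM and volume values below are made-up illustration numbers, not from any particular dataset):

    % Illustrative values only: FWHM and search volume must be in the same units
    FWHM = [3.1 3.4 2.9];          % [FWHMx FWHMy FWHMz], in voxels
    V    = 60000;                  % search volume, in voxels

    RESEL_size  = prod(FWHM);      % volume of one RESEL, in voxels
    RESEL_count = V / RESEL_size;  % R = V / (FWHMx * FWHMy * FWHMz)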

SPM

SPM provides the most complete reporting of FWHM and RESEL-related quantities. In the Results figure, in the footer of the P-value table, SPM lists [FWHMx FWHMy FWHMz] in both voxel and mm units. It also lists the search volume in RESELs (along with the same in mm³ and voxel units). Finally, it reports the size of one RESEL measured in voxels.

Note, if you check SPM’s arithmetic, you may find that R is not exactly V / ( FWHMx × FWHMy × FWHMz ). The reason is that a more sophisticated method for measuring the search volume is used that involves counting voxel edges and faces, etc.; see spm/src/spm_resels_vol.c & [2] for more details.

FSL

FSL provides no information about RESELs or FWHM in its HTML output. There is some information available, however, buried in the *.feat directories.

In each *.feat directory there is a stats subdirectory. In that directory you’ll find a file called smoothness that contains three lines, one each labeled DLH, VOLUME, RESELS.

DLH is the unintuitive parameter that describes image roughness; precisely it is

DLH = (4 log 2)^(3/2) / ( FWHMx × FWHMy × FWHMz )

where FWHMx, FWHMy and FWHMz are in voxel units.

VOLUME is the search volume in units of voxels. And, as a source of great possible confusion, RESELS is not the RESEL count, but rather, the size of one RESEL in voxel units:

RESELS = FWHMx × FWHMy × FWHMz

Sadly, FWHMx, FWHMy and FWHMz are never individually computed or saved. Note, however, the geometric mean of the FWHMs is available as

AvgFWHM = RESELS^(1/3)

which is some consolation.

Update: The verbose option to smoothest, -V, will report the individual FWHMx, FWHMy, & FWHMz. But, again, this isn’t saved as part of the feat output.
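
For example, here is a minimal MATLAB sketch that reads the smoothness file from a (hypothetical) .feat directory and derives AvgFWHM and the RESEL count; it assumes the simple “LABEL value” line format described above:

    % Hypothetical path; point this at your own .feat directory
    fid = fopen('mystudy.feat/stats/smoothness');
    C   = textscan(fid,'%s %f');            % assumes lines like "DLH 0.123"
    fclose(fid);
    vals = containers.Map(C{1},num2cell(C{2}));

    DLH    = vals('DLH');                   % roughness parameter
    VOLUME = vals('VOLUME');                % search volume, in voxels
    RESELS = vals('RESELS');                % size of one RESEL, in voxels (not the count!)

    AvgFWHM     = RESELS^(1/3);             % geometric mean FWHM, in voxels
    RESEL_count = VOLUME / RESELS;          % the actual RESEL count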

Who cares?

Any neuroimaging researcher should always check the estimated FWHM of their analysis. The reason is that if, for some reason, the FWHM smoothness is less than 3 voxels in any dimension, the accuracy of the RFT results can be very poor [3]. While the FSL user can’t examine the 3 individual FWHM values, they can at least compute AvgFWHM and check this.

Another reason is to understand differences in corrected P-values. For example, say you conduct 2 studies each with 20 subjects. You notice that, despite having similar T-values, the two studies have quite different FWE corrected P-values. The difference must be due to different RESEL counts, which in turn can be explained by either a difference in search volume (V) or smoothness (FWHM).

Finally, one more reason to check the RESELs is when you are getting bizarre results, like insanely significant results for modest cluster sizes. For example, if VBM results are poorly masked it can happen that the analysis includes a large boundary of air voxels outside the brain. Not only is this unnecessarily increasing your search volume (possibly increasing your multiple testing correction), but the smoothness estimate in air will be dramatically lower than in brain tissue, and thus corrupt the accuracy of the inferences. A bizarre value for RESEL size or FWHM will reveal this problem.

Further Reading

For a light-touch introduction to Random Field Theory, see [3]; for more detail (but less than the mathematical papers) see the relevant chapters in these books [4,5,6].

References

[1] Worsley, K. J., Evans, A. C., Marrett, S., & Neelin, P. (1992). A three-dimensional statistical analysis for CBF activation studies in human brain. Journal of Cerebral Blood Flow and Metabolism, 12(6), 900–918. Preprint

[2] Worsley, K. J., Marrett, S., Neelin, P., Vandal, A. C., Friston, K. J., & Evans, A. C. (1996). A unified statistical approach for determining significant signals in images of cerebral activation. Human brain mapping, 4(1), 58–73.

[3] Nichols, T. E., & Hayasaka, S. (2003). Controlling the familywise error rate in functional neuroimaging: a comparative review. Statistical Methods in Medical Research, 12(5), 419–446.

[4] Poldrack, R. A., Mumford, J. A., & Nichols, T. E. (2011). Handbook of fMRI Data Analysis (p. 450). Cambridge University Press.

[5] Penny, W. D., Friston, K. J., Ashburner, J. T., Kiebel, S. J., & Nichols, T. E. (2006). Statistical Parametric Mapping: The Analysis of Functional Brain Images (p. 656). Academic Press. [Previous version available free]

[6] Jezzard, P., Matthews, P. M., & Smith, S. M. (2002). Functional MRI: An Introduction to Methods (p. 432). Oxford University Press.

SPM plot units

What are the units of a plot in SPM?

In SPM, when you click on “Plot” and ask for a “Contrast estimates and 90% CI”, what exactly are the units of the thing plotted? I get this question so many times I wanted to record my answer once and for all.

The short answer is: 2-3 times an approximation to percent BOLD change, if you have a long-block design; otherwise, it depends. Here’s the long answer.

The final units of a plotted contrast of parameter estimates depends on three things:

  1. The scaling of the data.
  2. The scaling of the predictor(s) that are involved in the selected contrast.
  3. The scaling of the selected contrast.

The scaling of the data

For a first level fMRI analysis in SPM the scaling is done “behind the scenes”… there are no options to change the default behavior: The global mean intensity for each image is computed, and the average of those values (the “grand grand mean”) is scaled to equal 100. However, SPM’s spm_global produces a rather dismal estimate of typical intracerebral intensity (note that the author is listed as “Anonymous”; to be fair, it worked well on the tightly-cropped, standard-space PET images that it was originally designed for). The estimate from spm_global is generally too low, and so when the data are divided by it and multiplied by 100, the typical brain intensity is well over 100. In my experience, if you examine a beta_XXXX.img corresponding to a constant in a first-level regression model, gray matter values typically range from 200 to 300 when they should be 100; for first-level analyses done in subject-space (where there are even more air voxels) the bias can be even greater. Let’s call the typical gray matter intensity of your BOLD T2* images B, for later correcting this effect (e.g. B = 250 or 300 or whatever you find).
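
If you want to measure B on your own data, here is a minimal sketch (assuming SPM is on your path, and that the named beta image is the one for the constant/intercept column; check SPM.xX.name to be sure and substitute your own filename):

    % Hypothetical filename: the beta image for the constant regressor
    Vb = spm_vol('beta_0016.img');
    b  = spm_read_vols(Vb);
    b  = b(isfinite(b) & b > 0);    % drop NaN/zero background voxels
    B  = median(b)                  % rough estimate of typical brain intensity; ~100 if scaling were exact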

At the second level, you depend on the contrast estimates passed up from the first level to have a valid interpretation. So all three of these considerations apply to both the first and second level models, but the data scaling is done at the first level (no further scaling should be done at the second level; e.g., in the Batch Editor, second-level models have “Global normalisation -> Overall grand mean scaling” set to “No”).

The scaling of the predictor(s) that are involved in the selected contrast

At the first level, the predictors must be scaled appropriately to give the regression coefficients the same units as the data. For BOLD fMRI, this means the baseline to peak magnitude should be 1.0… i.e. a one unit change in a beta coefficient will result in a one unit change in predicted BOLD effect. In SPM, for loooong blocks/epochs, 20 seconds or longer, the scaling is as expected, with the baseline-to-plateau difference being 1.0. For shorter blocks however, the peak can be higher – as this corresponds to the ‘initial peak’ part of the HRF, as the duration is too short for the flat plateau to be established (this ‘early peak’ is due to symmetry of convolution, as it must exactly mirror the post-stimulus overshoot; there is some physiological basis for this as well, though) – or they can be lower. For events, the HRF is normalized to sum to unity, which doesn’t help you predict the peak.

If you have anything but long blocks, then, you have to examine the baseline-to-peak (or, if you prefer, baseline-to-trough) height, and make a note of it for later scaling (call it H). (The design matrix is in SPM.xX.X, in SPM.mat; plot individual columns to judge the scaling.) For event-related designs, however, it can be tricky to uniquely define the baseline-to-peak height; if you have a slow design, with isolated events, you’ll find a different peak height than if you have a design with some closely-spaced events, as the responses pile up additively. A reasonable unique definition of an event-related response magnitude is the height of an isolated event; again, call this H.
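
A minimal sketch for inspecting a predictor’s height (assuming the SPM.mat of interest is in the current directory and that column 1 is the condition you care about; taking the regressor’s minimum as the baseline is a simplification):

    load SPM                          % loads the SPM structure
    x = SPM.xX.X(:,1);                % first column of the design matrix
    plot(x), xlabel('scan'), ylabel('predictor value')
    H = max(x) - min(x)               % crude baseline-to-peak height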

At the second level, you must also make sure that your regressors preserve the units of the data. For example, if you are manually coding dummy regressors, like gender, as -1/1, realize that a one-unit change in the gender beta coefficient corresponds to a two-unit change in the data. Safer is using a 0/1 coding, though that then re-defines the intercept; better yet is to use -1/2 and 1/2 as dummy values for a dichotomous variable.

For parametric modulation and covariates, note that a regression coefficient’s units are determined by the units of the covariate. For example, if you have a parametric modulation in a reward experiment, where the units are £’s that are at risk in a bet, then your regression coefficient has units (if everything else is working as it should) of “percent BOLD change per £ at risk”. Likewise, if you use an age predictor at the second level and enter age in years (centered or uncentered), the units of the age coefficient are “percent BOLD change per year”.

Finally, beware of trying to compare percent BOLD change between block designs and event-related designs. Due to the nonlinearities of the BOLD effect, a percent BOLD change for an event-related response is really a different beast from a percent BOLD change for a block design. When reporting one type, be sure it is clear what sort of effect you’re reporting, and don’t try averaging or comparing their magnitudes.

The scaling of the selected contrast

The General Linear Model is a wondrous thing, as you’ll get the same T-values and P-values for the contrast [-1 -1 1 1] as you do for [-1/2 -1/2 1/2 1/2]. However, the interpretation of the contrast of parameter estimates (contrast times beta-hats) is different for each. The following rules to scale contrasts will usually work to ensure that a contrast preserves the units of the regression coefficients:

  • Sum of any positive contrast elements is 1.
  • Sum of any negative contrast elements is -1.

If the contrast elements are all positive or all negative, the interpretation is clear, as you are ensuring that the beta-coefficients (each of which should have a unit-change interpretation) will be averaged, giving an (average) unit change interpretation. If there is a mix of positive and negative elements, I see this rule as carefully ensuring there is a negative unit change quantity subtracted from a positive unit change quantity, giving the expected difference of unit change. (This rule should work for covariates and simple 0/1 dummy variables; this FSL example shows a case with -1/0/1 dummy variables where the contrasts violate these rules yet actually give the correct units.)
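
As a quick illustration of the rules above, a minimal sketch that rescales an arbitrary contrast so its positive elements sum to 1 and its negative elements sum to -1:

    c  = [-1 -1 1 1];                 % same T- and P-values, but wrong units for estimates
    cs = c;
    cs(c>0) =  c(c>0) / sum(c(c>0));  % positive part now sums to 1
    cs(c<0) = -c(c<0) / sum(c(c<0));  % negative part now sums to -1
    disp(cs)                          % [-0.5 -0.5 0.5 0.5]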

When it goes right, when it goes wrong

If everything has gone well, then the plotted units are “approximate percent BOLD change”. The “approximate” qualification results from only using a global-brain mean of 100; an “exact” percent BOLD change requires that the voxel in question has a baseline (or grand mean) intensity of exactly 100. We don’t grand-mean-normalise voxel-by-voxel because of edge effects: at the edge of the brain, the image intensity falls off to zero, and dividing by these values will give artifactually large values.

Of the three scalings in play, the first two are basically out of the control of the user. As hinted, first-level fMRI data in SPM are generally mis-scaled, resulting in global brain intensity between 200 and 300, i.e., 2 to 3 times greater than anticipated. Also, if you don’t have long blocks, you’ll need to apply a correction for the non-unity H you measured. The corrections are as follows: dividing the reported results by B/100 (i.e. multiplying by 100/B) will correct the problem of the data being incorrectly scaled; multiplying by H will correct the units of the regression coefficients. After both of these corrections, you should get something closer to approximate percent BOLD change.
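
Putting the two corrections together, a minimal worked sketch (the plotted value, B, and H below are made-up illustration numbers):

    plotted = 1.2;                    % contrast estimate as plotted by SPM (hypothetical)
    B       = 250;                    % typical gray matter intensity you measured above
    H       = 1.3;                    % baseline-to-peak height of your predictor
    pctBOLD = plotted * (100/B) * H   % approximate percent BOLD change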

This all sounds rather depressing, doesn’t it? Well, there is a better way. If you use the Marsbar toolbox it will carefully estimate baseline intensity for the ROI (or single-voxel ROI—make sure it isn’t on the edge of the brain though!) and give exact percent BOLD change units.

Postscript: What about FSL?

FSL has a good estimate of the global mean, because it uses morphological operations to identify intracerebral voxels. It scales grand mean brain intensity to 10,000, however (because it used to use an integer data representation, and 100 would have resulted in too few significant digits). Block-design predictors are scaled to have unit height; the default HRF doesn’t use a post-stimulus undershoot, and so baseline-to-peak height is the same as trough-to-peak height. Event-related predictors are not scaled to have a peak height of unity; however, the recommended tool for extracting percent BOLD change values, featquery, estimates a trough-to-peak height and then appropriately scales the data. However, as Jeanette Mumford has pointed out, this isn’t really the right thing to do when you have closely spaced trials; see her excellent A Guide to Calculating Percent Change with Featquery to see how to deal with this.

Other references

Thanks to Jeanette Mumford & Tor Wager for helpful input and additions to this entry, and Guillaume Flandin for the additional references.

SPM2 Gem 13: Corrected cluster size threshold

This is a script that will tell you the corrected cluster size threshold for a given cluster-defining threshold: corrclusth.m

The usage is pretty self explanatory:

 Find the corrected cluster size threshold for a given alpha
 function [k,Pc] =CorrClusTh(SPM,u,alpha,guess)
 SPM   - SPM data structure
 u     - Cluster defining threshold
         If less than zero, u is taken to be uncorrected P-value
 alpha - FWE-corrected level (defaults to 0.05)
 guess - Set to NaN to use a Newton-Raphson search (default)
         Or provide an explicit list (e.g. 1:1000) of cluster sizes to
         search over.
         If guess is a (non-NaN) scalar nothing happens, except that the
         corrected P-value of guess is printed. 

 Finds the corrected cluster size (spatial extent) threshold for a given
 cluster defining threshold u and FWE-corrected level alpha. 
  

To find the 0.05 (default alpha) corrected cluster size threshold for a 0.01 cluster-defining threshold:

>> load SPM
>> CorrClusTh(SPM,0.01)
  For a cluster-defining threshold of 2.4671 the level 0.05 corrected
  cluster size threshold is 7860 and has size (corrected P-value) 0.0499926
  

Notice that, due to the discreteness of cluster sizes, you cannot get an exact 0.05 threshold.

The function uses an automated search which may sometimes fail. If you specify a 4th argument you can manually specify the cluster sizes to search over:

>> CorrClusTh(SPM,0.01,0.05,6000:7000)

  WARNING: Within the range of cluster sizes searched (6000...7000)
  a corrected P-value <= alpha was not found (smallest P: 0.0819589)

  Try increasing the range or an automatic search.

Ooops… bad range.

Lastly, you can use it as a look up for a specific cluster size threshold. For example, how much over the 0.05 level would a cluster size of 7859 be?

>> CorrClusTh(SPM,0.01,0.05,7859)
  For a cluster-defining threshold of 2.4671 a cluster size threshold of
  7859 has corrected P-value 0.050021

Just a pinch!

SPM2 Gem 12: fMRI Analysis threshold

This is an update of Gem12 for SPM99, originally by Stefan Kiebel.

> Is it possible to instruct spm99 to search all voxels within a given
> mask image rather than all above a fixed or a %mean threshold? 

Yes, with SPM2 it's possible to use several masking options. 

To recap, there are 3 sorts of masks used in SPM2:
   1. an analysis threshold
   2. implicit masking
   3. explicit masking

1: One can set this threshold for each image to -Inf to switch off this
   threshold.
2: If the image allows this, NaN at a voxel position masks this voxel
   from the statistics, otherwise the mask value is zero (and the user
   can choose whether implicit masking should be used at all).
3: Use mask image file(s), where NaN (when image format allows this) or
   a non-positive value masks a voxel.

On top of this, SPM automatically removes any voxels with constant
values over time.

So what you want is an analysis, where one only applies an explicit
mask. 

In SPM2 for PET, you can do this by going for the Full Monty and
choosing -Inf for the implicit mask and no 0-thresholding. Specify one
or more mask images. (You could also define a new model structure,
controlling the way SPM for PET asks questions).

With fMRI data/models, SPM2 is fully capable of doing explicit
masking, but the user interface for fMRI doesn't ask for it. One way
to do this type of masking anyway is to change the SPM.mat file
*after* you specify your model, but *before* clicking 'Estimate'.
Specifically:

   1. Load the SPM.mat file,
             load SPM
      set the SPM.xM.TH values all to -Inf,
             SPM.xM.TH = -Inf*ones(size(SPM.xM.TH));
      and, in case that you have an image format not allowing NaNs,
      set SPM.xM.I to 0
             SPM.xM.I = 0;

   2. If using a mask image, set SPM.xM.VM to a vector of structures,
      where each structure element is the output of spm_vol. For instance:
               SPM.xM.VM = spm_vol('Maskimage');

   3. Finally, save by
               save SPM SPM
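
In other words, once the model is specified (but not yet estimated), the whole recipe above can be run in one go; a minimal consolidated sketch, with a hypothetical mask filename:

    load SPM
    SPM.xM.TH = -Inf*ones(size(SPM.xM.TH));   % switch off the analysis threshold
    SPM.xM.I  = 0;                            % switch off implicit masking
    SPM.xM.VM = spm_vol('explicit_mask.img'); % hypothetical explicit mask image
    save SPM SPM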

SPM2 Gem 10: Creating a mask from XYZ coordinates

This is a “Jan’s Gem” from Jan Gläscher…

Subject: Re: [SPM] creating a mask from voxel coordinates
From: Jan Gläscher <glaescher@UKE.UNI-HAMBURG.DE>
Date: Tue, 15 Mar 2005 12:24:27 +0100
To: SPM@JISCMAIL.AC.UK


Dear Susana,

you can try the following recipe.  It might not be the easiest way, but it
should work.

Suppose your n MNI coordinates are saved in a 3xn matrix called mni.
(This is essential (not the name, but the format of 3xn), otherwise the
steps below won't work).

1. Create an spm_vol handle to the image that you determined the
   coordinates from

   Vin = spm_vol(spm_get(1,'*.img','Select image'));

2. Read the information in the image. (The data are not needed; this is
   done purely for the purpose of setting up a matrix of voxel coordinates.)

   [Y,XYZ] = spm_read_vols(Vin);

3. Setup a matrix of zeros in the dimensions of the input image

   mask = zeros(Vin.dim(1:3));

4. Now loop over all voxels in your mni variable and set the corresponding
   location in the mask matrix to 1:

   for v = 1:size(mni,2)
      mask(find(XYZ(1,:)==mni(1,v)&XYZ(2,:)==mni(2,v)&XYZ(3,:)==mni(3,v))) = 1;
   end

5. Setup an spm_vol output file handle and change the filename

   Vout = Vin;
   Vout.fname = '/path/to/the/directory/mask.img';

6. Finally, write the new mask to the output file.

   spm_write_vol(Vout,mask);


Cheers,
Jan

SPM2 Gem 9: Masking w/ ImCalc

This is an email from Rik Henson which references John. It addresses the fact that ImCalc defaults to writing out images in 2-byte signed integer format, which can be problematic if you’re masking statistic images (which should be written with floating point precision).

Subject: Masking with ImCalc
From: Rik Henson 
Date: Fri, 14 May 2004 08:24:25 +0100 (03:24 EDT)
To: SPM@JISCMAIL.AC.UK

Several people have asked how to mask SPMs across different
analyses. This used to be possible using ImCalc in SPM99 (see Mailbase
archive). In SPM2 however, when ImCalc is called from the GUI, it
adopts various defaults, which include writing images as "uint16"
(rather than "float", which is the normal datatype for T/F imgs).
Thanks to John for pointing this out.

It is for this reason that the procedure of overwriting
spmT/F*imgs using ImCalc does not work from SPM2's GUI.

To override this default however, you can simply call
ImCalc from the command line instead, setting the
datatype to spm_type 'float':

    spm_imcalc_ui([],[],'i1.*(i2>3.09)',{[],[],'float',0})

The empty 1st and 2nd arguments mean that SPM will prompt
you to select the input images and the output filename, as usual.
You will of course need to change the third argument to your
particular ImCalc equation for evaluation. The 4th argument
is a cell array of defaults, which includes the 'float' enforcement.

For more info, type "help spm_imcalc_ui".

Rik

See also SPM2 Gem 8, and SPM99 Gems 3, 16, and 21.

SPM2 Gem 8: ImCalc Script

SPM99 Gem 21 is the same (with minor changes).

Subject: Re: a script to use ImaCal
From: John Ashburner 
Date: Thu, 16 Oct 2003 11:24:17 +0000 (07:24 EDT)
To: SPM@JISCMAIL.AC.UK

> Brain image for each subject to mask out CSF signal was generated by
> using MPR_seg1.img (i1) and MPR_seg2.img (i2) with (i1+i2)>0.5 in
> ImaCal.(called brainmpr.img for each subject)
>
> Then I have more than two-hundred maps, which need to mask out
> CSF. I think I can use ImaCal again with selecting brainmpr.img (i1)
> and FAmap.img (i1), and then calculating (i1.*i2) to generate a new
> image named as bFAmap.img.  Unfortunately, if I use the ImaCal, it
> take so long time to finish all subjects. Could anyone have a script
> to generate  a multiplication imaging with choosing a brain
> image(brainmpr.img, i1) and a map image (FAmap.img, i2) and writing
> an output image (bFAmap.img) from i1.*i2? 

You can do this with a script in Matlab.  Something along the lines of the
following should do it:

P1=spm_get(Inf,'*.IMAGE','Select i1');
P2=spm_get(size(P1,1),'*.IMAGE','Select i2');

for i=1:size(P1,1),
    P = strvcat(P1(i,:),P2(i,:));
    Q = ['brainmpr_' num2str(i) '.img'];
    f = '(i1+i2)>0.5';
    flags = {[],[],[],[]};
    Q = spm_imcalc_ui(P,Q,f,flags);
end;

Note that I have not tested the above script.  I'm sure you can fix it if
it doesn't work.

Best regards,
-John

SPM2 Gem 7: Extracting realignment parameters

The referenced script is here save_parameters.m

Subject: Re: [SPM] realignment parameter
From: John Ashburner 
Date: Fri, 23 Apr 2004 11:25:27 +0000 (07:25 EDT)
To: SPM@JISCMAIL.AC.UK

> I have switched from spm99 to spm2 and am learning new things.
> When I realign images interactively with GUI interface,
> the rp*.txt file is created. However, the parameter file
> is not created - or, I cannot find it -, when I realign
> with a batch script.
> I am sure I left out something, but I don't know it.
> Any suggestion will be very appreciated.

If you call spm_realign with output (left-hand) arguments, then no rp_*.txt
file is saved.  You could create a save_parameters.m file, containing:

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function save_parameters(V)
[pth,nm] = fileparts(V(1).fname);
fname = fullfile(pth,['rp_' nm '.txt']);
n = length(V);
Q = zeros(n,6);
for j=1:n,
        qq     = spm_imatrix(V(j).mat/V(1).mat);
        Q(j,:) = qq(1:6);
end;
save(fname,'Q','-ascii');
return;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

and derive the values from your realigned images:

    P = spm_get(Inf,'*.IMAGE','Select realigned images');
    save_parameters(spm_vol(P))

Best regards,
-John

SPM2 Gem 6: Contrast SE Computation

An email from Will Penny on how a contrast’s standard error is computed.

Subject: Re: Beta SE in SPM2?
From: Will Penny <wpenny@FIL.ION.UCL.AC.UK>
Date: Fri, 27 Feb 2004 11:26:35 +0000
To: SPM@JISCMAIL.AC.UK

Tobias,

> How do I extract beta SE in SPM2?  As far as I can see I can only get
> y, Y, beta, and Bcov...have I missed something?

If you're referring to the Standard Error or the Standard deviation of the
parameter estimates then this isn't stored explicitly.

Its computed on the fly when one makes an inference about a contrast. See the
following lines (153-158) in spm_contrasts:

case 'T'                                  %-Compute SPM{t} image
         %---------------------------------------------------------------
        cB    = spm_get_data(xCon(ic).Vcon,XYZ);
        l     = spm_get_data(VHp,XYZ);
        VcB   = xCon(ic).c'*SPM.xX.Bcov*xCon(ic).c;
        Z     = cB./sqrt(l*VcB);

So, if you did a contrast xCon(ic).c'=[1 0 0 0] to test the null
hypothesis that the first parameter is zero then cB will be equal to
beta1 and sqrt(l*VcB) will be equal to std(beta1) - the standard
deviation of the first parameter estimate.

Note that l is the residual sum of squares at that voxel which is
stored in the image ResMS.img and accessed via the file handle VHp.

In this way SPM doesn't have to store Standard Deviations/covariances
for every voxel and beta - so saving disk space. But it's not that
handy for what you want!

Hope this helps, Will.