SPM plot units

What are the units of a plot in SPM?

In SPM, when you click on “Plot” and ask for a “Contrast estimates and 90% CI”, what exactly are the units of the thing plotted? I get this question so many times I wanted to record my answer once and for all.

The short answer is: 2-3 times an approximation to percent BOLD change, if you have a long-block design; otherwise, it depends. Here’s the long answer.

The final units of a plotted contrast of parameter estimates depend on three things:

  1. The scaling of the data.
  2. The scaling of the predictor(s) that are involved in the selected contrast.
  3. The scaling of the selected contrast.

The scaling of the data

For a first level fMRI analysis in SPM the scaling is done “behind the scenes”… there are no options to change the default behavior: the global mean intensity of each image is computed, and the average of those values (the “grand grand mean”) is scaled to equal 100. However, SPM’s spm_global produces a rather dismal estimate of typical intracerebral intensity (note that the author is listed as “Anonymous”; to be fair, it worked well on the tightly-cropped, standard-space PET images it was originally designed for). The estimate from spm_global is generally too low, so when the data are divided by it and multiplied by 100, the typical brain intensity ends up well over 100. In my experience, if you examine a beta_XXXX.img corresponding to a constant in a first-level regression model, gray matter values typically range from 200 to 300 when they should be 100; for first-level analyses done in subject space (where there are even more air voxels) the bias can be even greater. Call the typical gray matter intensity of your BOLD T2* images B, for correcting this effect later (e.g. B = 250 or 300 or whatever you find).
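
If you want to check this on your own data, a minimal sketch like the following will do. The file name beta_0010.img is just an example (use whichever beta corresponds to a session constant in your model), and the median-based estimate of B is deliberately crude:

   V   = spm_vol('beta_0010.img');        % example: a session-constant beta image
   dat = spm_read_vols(V);
   dat = dat(isfinite(dat) & dat~=0);     % drop masked-out voxels
   B   = median(dat(dat > 0.5*max(dat)))  % crude "typical brain intensity"; ideally ~100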

At the second level, you depend on the contrast estimates passed up from the first level having a valid interpretation. So all three of these considerations apply to both the first and second level models, but the data scaling is done at the first level (no further scaling should be done at the second level; e.g., in the Batch Editor, second-level models have “Global normalisation -> Overall grand mean scaling” set to “No”).

The scaling of the predictor(s) that are involved in the selected contrast

At the first level, the predictors must be scaled appropriately to give the regression coefficients the same units as the data. For BOLD fMRI, this means the baseline-to-peak magnitude should be 1.0… i.e. a one unit change in a beta coefficient results in a one unit change in the predicted BOLD effect. In SPM, for loooong blocks/epochs, 20 seconds or longer, the scaling is as expected, with the baseline-to-plateau difference being 1.0. For shorter blocks, however, the peak can be higher – this corresponds to the ‘initial peak’ part of the HRF, as the duration is too short for the flat plateau to be established (this ‘early peak’ is due to the symmetry of convolution, as it must exactly mirror the post-stimulus undershoot; there is some physiological basis for it as well, though) – or it can be lower. For events, the HRF is normalized to sum to unity, which doesn’t help you predict the peak.

If you have anything but long blocks, then, you have to examine the baseline-to-peak (or if you prefer, baseline-to-trough) height, and make a note of it for later scaling (call it H). (The design matrix is in SPM.xX.X, in SPM.mat; plot individual columns to judge the scaling; see the sketch below.) For event-related designs, however, it can be tricky to uniquely define the baseline-to-peak height; if you have a slow design, with isolated events, you’ll find a different peak height than if you have a design with some closely-spaced events, as the responses pile up additively. A reasonable unique definition of an event-related response magnitude is the height of an isolated event; again, call this H.
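
For example, here is a minimal sketch of reading off H from the design matrix. The column index is just an example (use the predictor(s) entering your contrast), and it assumes the baseline of an unfiltered first-level predictor is near zero:

   load SPM                           % loads the SPM structure from SPM.mat
   x = SPM.xX.X(:,1);                 % one column of the design matrix (example index)
   figure, plot(x), xlabel('scan'), ylabel('predictor value')
   H = max(x)                         % crude H, assuming the baseline is near zero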

At the second level, you must also make sure that your regressors preserve the units of the data. For example, if you are manually coding a dummy regressor, like gender, as -1/1, realize that a unit change in the gender coefficient corresponds to a two unit change in the data. Safer is a 0/1 coding, though that then re-defines the intercept; better yet is to use -1/2 and 1/2 as dummy values for a dichotomous variable (see the toy example below).
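
As a toy illustration of why the coding matters (made-up data, not an SPM call), note how the group-difference coefficient changes with the coding:

   % Toy illustration: with -1/1 coding the group coefficient is half the
   % true group difference; with -1/2 and 1/2 coding it equals the difference.
   grp = [ones(10,1); -ones(10,1)];     % two groups of 10
   y   = randn(20,1) + 2*(grp==1);      % fake data, true group difference = 2
   b1  = [ones(20,1) grp  ] \ y;        % b1(2) is about 1
   b2  = [ones(20,1) grp/2] \ y;        % b2(2) is about 2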

For parametric modulation and covariates, note that a regression coefficient’s units are determined by the units of the covariate. For example, if you have a parametric modulation in a reward experiment, where the units are £s at risk in a bet, then your regression coefficient has units (if everything else is working as it should) of “percent BOLD change per £ at risk”. Likewise, for an age predictor at the second level, if you enter age in years (centered or uncentered), the units of the age coefficient are “percent BOLD change per year”.

Finally, beware of trying to compare percent BOLD change between block designs and event-related designs. Due to the nonlinearities of the BOLD effect, a percent BOLD change for an event-related response is really a different beast from a percent BOLD change for a block design. When reporting one type, be sure it is clear what sort of effect you’re reporting, and don’t try averaging or comparing their magnitudes.

The scaling of the selected contrast

The General Linear Model is a wondrous thing, as you’ll get the same T-values and P-values for the contrast [-1 -1 1 1] as you do for [-1/2 -1/2 1/2 1/2]. However, the interpretation of the contrast of parameter estimates (contrast times beta-hats) is different for each. The following rules to scale contrasts will usually work to ensure that a contrast preserves the units of the regression coefficients:

  • Sum of any positive contrast elements is 1.
  • Sum of any negative contrast elements is -1.

If the contrast elements are all positive or all negative, the interpretation is clear, as you are ensuring that the beta coefficients (each of which should have a unit-change interpretation) are averaged, giving an (average) unit change interpretation. If there is a mix of positive and negative elements, I see this rule as ensuring that one (average) unit-change quantity, from the negatively weighted coefficients, is subtracted from another, from the positively weighted coefficients, giving the expected difference of unit changes. (This rule should work for covariates and simple 0/1 dummy variables; this FSL example shows a case with -1/0/1 dummy variables where the contrasts violate these rules yet still give the correct units.)
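
For example, for four conditions [A1 A2 B1 B2], a quick sketch of an unscaled versus a properly scaled contrast:

   c_bad  = [-1 -1 1 1];      % positives sum to 2, negatives to -2; same T, wrong units
   c_good = [-1 -1 1 1] / 2;  % positives sum to 1, negatives to -1; preserves units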

When it goes right, when it goes wrong

If everything has gone well, then the plotted units are “approximate percent BOLD change”. The “approximate” qualification results from using only a global-brain mean of 100; an “exact” percent BOLD change requires that the voxel in question has a baseline (or grand mean) intensity of exactly 100. We don’t grand-mean-normalise voxel-by-voxel because of edge effects: at the edge of the brain, the image intensity falls off to zero, and dividing by these values gives artifactually large values.

Of the three scalings in play, the first two are basically out of the user’s control. As hinted, first-level fMRI data in SPM are generally mis-scaled, resulting in global brain intensity between 200 and 300, i.e., 2 to 3 times greater than anticipated. Also, if you don’t have long blocks, you’ll need to apply a correction for the non-unity H you measured. The corrections are as follows: dividing the reported results by B/100 (i.e. multiplying by 100/B) corrects for the data being incorrectly scaled; multiplying by H corrects the units of the regression coefficients. After both of these corrections, you should get something closer to approximate percent BOLD change; see the sketch below.
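
Putting the corrections together, a minimal sketch (the B, H and plotted values here are of course just examples; substitute your own):

   B       = 250;                    % typical brain intensity you found
   H       = 1.3;                    % baseline-to-peak height you measured
   plotted = 0.8;                    % contrast estimate read off the SPM plot
   pctBOLD = plotted * (100/B) * H   % approximate percent BOLD change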

This all sounds rather depressing, doesn’t it? Well, there is a better way. If you use the Marsbar toolbox, it will carefully estimate the baseline intensity for your ROI (or single-voxel ROI; make sure it isn’t on the edge of the brain, though!) and give exact percent BOLD change units.

Postscript: What about FSL?

FSL has a good estimate of the global mean, because it uses morphological operations to identify intracerebral voxels. It scales the grand mean brain intensity to 10,000, however (because it used to use an integer data representation, and 100 would have resulted in too few significant digits). Block-design predictors are scaled to have unit height; the default HRF doesn’t use a post-stimulus undershoot, so the baseline-to-peak height is the same as the trough-to-peak height. Event-related predictors are not scaled to have a peak height of unity; however, the recommended tool for extracting percent BOLD change values, featquery, estimates a trough-to-peak height and then appropriately scales the data. However, as Jeanette Mumford has pointed out, this isn’t really the right thing to do when you have closely spaced trials; see her excellent A Guide to Calculating Percent Change with Featquery for how to deal with this.

Other references

Thanks to Jeanette Mumford & Tor Wager for helpful input and additions to this entry, and Guillaume Flandin for the additional references.

SPM99 Gem 23: Tabulating T Statistics

Subject: Fwd: Re: tabulating all statistics
From: John Ashburner <john@FIL.ION.UCL.AC.UK>
Date: Tue, 1 Jul 2003 11:47:07 +0000 (07:47 EDT)
To: SPM@JISCMAIL.AC.UK

> I was wondering if it would be possible to write t values for an
> entire volume into a file? 

Try this:

   fid=fopen('t-values.txt','w');
   P=spm_get(1,'*.img','Select statistic image');
   V=spm_vol(P);

   [x,y,z] = ndgrid(1:V.dim(1),1:V.dim(2),0);
   for i=1:V.dim(3),
     z   = z + 1;
     tmp = spm_sample_vol(V,x,y,z,0);
     msk = find(tmp~=0 & finite(tmp));
     if ~isempty(msk),
       tmp = tmp(msk);
       xyz1=[x(msk)'; y(msk)'; z(msk)'; ones(1,length(msk))];
       xyzt=V.mat(1:3,:)*xyz1;
       for j=1:length(tmp),
         fprintf(fid,'%.4g %.4g %.4g\t%g\n',...
                 xyzt(1,j),xyzt(2,j),xyzt(3,j),tmp(j));
       end;
     end;
   end;
   fclose(fid);

best regards,
-John
                

As noted in a 2 Feb 2004 email, to eliminate all voxels at or below zero (substitute your own threshold for 0 if desired), change

     msk = find(tmp~=0 & finite(tmp));

to

     msk = find(tmp>0 & finite(tmp));


SPM99 Gem 22: spm_orthviews tricks

IMHO, after spatial normalization, John’s key contribution to SPM is spm_orthviews, the function behind the ‘Check Reg’ button which lets you view many volumes simultaneously. Some of the Gems use spm_orthviews (e.g. Gems 1 and 16), but listed here are some generally useful tricks for spm_orthviews.

These tricks are useful anytime a three-view orthogonal slice view is shown, whether from using ‘Check Reg’, ‘Display’, or when overlaying blobs from the ‘Results’ window.

Turn off/on cross-hairs

spm_orthviews('Xhairs','off')

spm_orthviews('Xhairs','on')

Return current x,y,z world space, mm location

spm_orthviews('pos')'

(Note that I transpose the returned value into a row vector, for easier copying and pasting.)

Move to x,y,z world space, mm location

spm_orthviews('reposition',[x y z])
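
For example, with an image already up in ‘Check Reg’ or ‘Display’, a little session might look like this:

spm_orthviews('Xhairs','off')           % hide the cross-hairs
spm_orthviews('reposition',[0 -10 10])  % jump to x,y,z = 0,-10,10 mm
xyz = spm_orthviews('pos')'             % read back the current mm position, as a row
spm_orthviews('Xhairs','on')            % and bring the cross-hairs back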

SPM99 Gem 21: ImCalc Script

As posted, this snippet is for SPM2; I’ve edited it to work with SPM99.

Subject: Re: a script to use ImaCal
From: John Ashburner <john@FIL.ION.UCL.AC.UK>
Date: Thu, 16 Oct 2003 11:24:17 +0000 (07:24 EDT)
To: SPM@JISCMAIL.AC.UK

> Brain image for each subject to mask out CSF signal was generated by
> using MPR_seg1.img (i1) and MPR_seg2.img (i2) with (i1+i2)>0.5 in
> ImaCal.(called brainmpr.img for each subject)
>
> Then I have more than two-hundred maps, which need to mask out
> CSF. I think I can use ImaCal again with selecting brainmpr.img (i1)
> and FAmap.img (i1), and then calculating (i1.*i2) to generate a new
> image named as bFAmap.img.  Unfortunately, if I use the ImaCal, it
> take so long time to finish all subjects. Could anyone have a script
> to generate  a multiplication imaging with choosing a brain
> image(brainmpr.img, i1) and a map image (FAmap.img, i2) and writing
> an output image (bFAmap.img) from i1.*i2? 

You can do this with a script in Matlab.  Something along the lines of the
following should do it:

P1=spm_get(Inf,'*.img','Select i1');
P2=spm_get(size(P1,1),'*.img','Select i2');

for i=1:size(P1,1),
    P = strvcat(P1(i,:),P2(i,:));
    Q = ['brainmpr_' num2str(i) '.img'];
    f = '(i1+i2)>0.5';
    flags = {[],[],[],[]};
    Q = spm_imcalc_ui(P,Q,f,flags);
end;

Note that I have not tested the above script.  I'm sure you can fix it if
it doesn't work.

Best regards,
-John
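
Note that John’s script above produces the brain masks ('(i1+i2)>0.5'); for the second step the questioner asked about (masking each FA map with its brain mask), the same pattern should work, along these untested lines:

P1=spm_get(Inf,'*.img','Select brain masks (i1)');
P2=spm_get(size(P1,1),'*.img','Select FA maps (i2)');

for i=1:size(P1,1),
    P = strvcat(P1(i,:),P2(i,:));
    Q = ['bFAmap_' num2str(i) '.img'];
    f = 'i1.*i2';
    flags = {[],[],[],[]};
    Q = spm_imcalc_ui(P,Q,f,flags);
end;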

SPM99 Gem 20: Creating customized templates and priors

From: John Ashburner 
Subject: Re: Help for constructing Template images
Date: Wed, 18 Dec 2002 16:22:59 +0000
To: SPM@JISCMAIL.AC.UK

> What are the advantages of customized template images
> in VBM analysis?

Customised templates are useful when:

1) The contrast in your MR images is not the same as the
   contrast used to generate the existing templates.  If
   the contrast is different, then the mean squared cost
   function is not optimal.  However, for "optimised" VBM
   this only really applies to the initial affine
   registration that is incorporated into the initial
   segmentation.  Contrast differences are likely to have
   a relatively small effect on the final results.

2) The demographics of your subject population differ
   from those used to generate the existing templates and
   prior probability images.  For example, serious problems
   can occur if your subjects have very large ventricles.
   In these data, there would be CSF in regions where the
   existing priors say CSF should not exist.  This would
   force some of the CSF to be classified as white matter,
   seriously affecting the intensity distribution that
   is used to model white matter.  This then has negative
   consequences for the whole of the segmentation.

> Can any one please explain the detailed steps to
> construct a customized template image (gray and white
> matter images) for VBM analysis?

The following script is one possible way of generating your
own template image.  Note that it takes a while to run, and
does not save any intermediate images that could be useful
for quality control.  Also, if it crashes at any point then
it is difficult to recover the work it has done so far.

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 

make_template.m

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 

You may also wish to do some manual editing of the images
afterwards - especially to remove extra-skull CSF.  When
everything has finished, simply smooth the images by 8mm
and call them templates and prior probability images.
You can modify the default priors for the segmentation step
in order that the customised ones are used.  This can be done
either by changing spm_defaults.m, or by typing the following
in Matlab:

spm_defaults
global defaults
defaults.segment.estimate.priors = ...
     spm_get(3,'*.IMAGE','Select GM,WM & CSF priors');

Note that this will be cleared if you reload the defaults.  This
could be done when you start spm, reset the defaults or if the
optimised VBM script is run, as it calls spm_defaults.m.
Alternatively the optimised VBM script could be modified to
include the above.

Note that I have only tried the script with three images, so
I don't have a good feel for how robust it is likely to be.

>
> Please let me know the number of subjects required to
> construct one?

It's hard to say, but more is best.  The 8mm smoothing means
that you can get away with slightly fewer than otherwise.

Best regards,
-John

Note: The make_template.m script is an updated version of what was originally posted. It is current as of Sep 9, 2003.

SPM99 Gem 19: VBM modulation script

This is the famed script to modulate spatially normalized probability images. For gray matter probability images, modulated images have units of gray matter volume per voxel, instead of gray matter concentration adjusted for differences in local brain size.

In John’s own words, here’s a better explanation.

Note that the script below is SPM99-specific. In SPM2, this is done via spm_write_sn(V,prm,'modulate') (see the spm_write_sn help for more).

Date: Thu, 27 Jul 2000 14:28:39 +0100 (BST)
Subject: Re: matlab question & VBM
From: John Ashburner 
To: spm@mailbase.ac.uk, x.chitnis@iop.kcl.ac.uk


| Following on from John Ashburner's recent reply, is there a matlab function
| that enables you to adjust spatially normalised images in order to preserve
| original tissue volume for VBM?

The function attached to this email will do this.  Type the following bit of
code into Matlab to run it:

	Mats   = spm_get(Inf,'*_sn3d.mat','Select sn3d.mat files');
	Images = spm_get(size(Mats,1),'*.img','Select images to modulate');

	for i=1:size(Mats,1),
	        spm_preserve_quantity(deblank(Mats(i,:)),deblank(Images(i,:)));
	end;

[...]

Best regards,
-John

The attached script is here: spm_preserve_quantity.m

SPM99 Gem 18: -log10 P-values from T images

P-value images are difficult to visualize since “important” values are small and clumped near zero. A -log10 transformation makes for much better visualization while still having interpretability (e.g. a value of 3 corresponds to P=0.001).

This function, T2nltP, will create a -log10 P-value image based on either a contrast number (which must be a T contrast) or a T statistic image and the degrees of freedom.

(See also the equivalent SPM2 function.)

t2nltp.m

function T2nltP(a1,a2)
% Write image of -log10 P-values for a T image
%
% FORMAT T2nltP(c)
% c     Contrast number of a T contrast (assumes cwd is a SPM results dir)
%
% FORMAT T2nltP(Timg,df)
% Timg  Filename of T image
% df    Degrees of freedom
%
%
% As per SPM convention, T images are zero masked, and so zeros will have
% P-value NaN.
%
% @(#)T2nltP.m	1.2 T. Nichols 03/07/15

if nargin==1
  c = a1;
  load xCon
  load SPM xX
  if xCon(c).STAT ~= 'T', error('Not a T contrast'); end
  Tnm = sprintf('spmT_%04d',c);
  df = xX.erdf;
else
  Tnm = a1;
  df  = a2;  
end


Tvol = spm_vol(Tnm);

Pvol        = Tvol;
Pvol.dim(4) = spm_type('float');
Pvol.fname  = strrep(Tvol.fname,'spmT','spm_nltP');
if strcmp(Pvol.fname,Tvol.fname)
  Pvol.fname = fullfile(spm_str_manip(Tvol.fname,'H'), ...
			['nltP' spm_str_manip(Tvol.fname,'t')]);
end


Pvol = spm_create_image(Pvol);

for i=1:Pvol.dim(3),
  img         = spm_slice_vol(Tvol,spm_matrix([0 0 i]),Tvol.dim(1:2),0);
  img(img==0) = NaN;
  tmp         = find(isfinite(img));
  if ~isempty(tmp)
    img(tmp)  = -log10(max(eps,1-spm_Tcdf(img(tmp),df)));
  end
  Pvol        = spm_write_plane(Pvol,img,i);
end;
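
Example usage, assuming you are in a results directory and contrast 3 is a T contrast (the contrast number and degrees of freedom below are just examples):

T2nltP(3)                     % from a contrast number
T2nltP('spmT_0003.img',20)    % from a T image and its degrees of freedom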


SPM99 Gem 17: Origin madness

A source of confusion is where the origin (the [0,0,0] location of an image) is stored. When there is no associated .mat file, the origin is read from the Analyze originator field. If this is zero it is assumed to match the center of the image field of view. If there is a .mat file, then the origin is the first three values of

        M\[0 0 0 1]'
                

where M is the transformation matrix in the .mat file.
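
For example, to read back the origin (in voxel coordinates) of an image using this rule (the file name is just an example):

  M    = spm_get_space('some_image.img');  % transformation from the header/.mat file
  orig = M \ [0 0 0 1]';                   % apply the rule above
  orig = orig(1:3)'                        % origin in voxels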

One limitation is that the origin stored in the Analyze header is a (short) integer, and so cannot represent an origin with fractional values. To set the origin to a specific, fractional value, use this code snippet:

  Orig = [ x y z ]; % Desired origin in units of voxels
  P = spm_get(Inf,'*.img'); % matrix of file names

  for i=1:size(P,1)

    M = spm_get_space(deblank(P(i,:)));
    R = M(1:3,1:3);
    % Set origin
    M(1:3,4) = -R*Orig(:);
    spm_get_space(deblank(P(i,:)),M);
 	
  end


SPM99 Gem 16: Scripting Figures

OK, this isn’t a John email, but rather a tip of my own that uses one of John’s functions. When preparing a manuscript you often want to display a “blobs on brain” image, where a reference image underlies a colored significance image. You can do this within the SPM Results facility, but since you never get a figure right the first time, I prefer to do it on the command line.

The code snippet below scripts the blobs-on-brain figure. You’ll get a large orthogonal viewer in the graphics window, so it’s then easy to print (or grab a screen snapshot) to create your figure.

% Make sure to first clear the graphics window

% Select images
Pbck = spm_get(1,'*.img','Select background image')
Psta = spm_get(1,'*.img','Select statistic image')

% Set the threshold value
Th   = 4;  

% Create a new image where all voxels below Th have value NaN
PstaTh = [spm_str_manip(Psta,'s') 'Th'];
spm_imcalc_ui(Psta,PstaTh,'i1+(0./(i1>=Th))',{[],[],spm_type('float')},Th);

% Display!
spm_orthviews('image',Pbck,[0.05 0.05 0.9 0.9]);
spm_orthviews('addimage',1,PstaTh)

% Possibly, set the crosshairs to your favorite location
spm_orthviews('reposition',[0 -10 10])

                

This assumes that you just want to threshold your image based on a single intensity threshold. To make it totally scripted, replace the spm_get calls with hard assignments.
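
For example, the hard assignments might look like this (the file names are just examples; use your own images):

Pbck = fullfile(spm('Dir'),'canonical','avg152T1.img');  % background image
Psta = 'spmT_0002.img';                                  % statistic image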