# SPM99 Gem 23: Tabulating T Statistics

```Subject: Fwd: Re: tabulating all statistics
From: John Ashburner <john@FIL.ION.UCL.AC.UK>
Date: Tue, 1 Jul 2003 11:47:07 +0000 (07:47 EDT)
To: SPM@JISCMAIL.AC.UK

> I was wondering if it would be possible to write t values for an
> entire volume into a file?

Try this:

fid=fopen('t-values.txt','w');
P=spm_get(1,'*.img','Select statistic image');
V=spm_vol(P);

[x,y,z] = ndgrid(1:V.dim(1),1:V.dim(2),0);
for i=1:V.dim(3),
z   = z + 1;
tmp = spm_sample_vol(V,x,y,z,0);
msk = find(tmp~=0 & finite(tmp));
if ~isempty(msk),
tmp = tmp(msk);
xyz1=[x(msk)'; y(msk)'; z(msk)'; ones(1,length(msk))];
xyzt=V.mat(1:3,:)*xyz1;
for j=1:length(tmp),
fprintf(fid,'%.4g %.4g %.4g\t%g\n',...
xyzt(1,j),xyzt(2,j),xyzt(3,j),tmp(j));
end;
end;
end;
fclose(fid);

best regards,
-John
```

As noted in a 2 Feb 2004 email, to eliminate all voxels at or below a threshold of zero, change

```     msk = find(tmp~=0 & finite(tmp));
```

to

```     msk = find(tmp>0 & finite(tmp));
```
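The script converts voxel indices to world (mm) coordinates by applying the affine stored in `V.mat` to homogeneous voxel coordinates, as in the `xyzt=V.mat(1:3,:)*xyz1` line. A minimal Python/NumPy sketch of that mapping; the matrix values are made up for the example (2 mm isotropic voxels):

```python
import numpy as np

# Illustrative voxel-to-world affine, standing in for V.mat:
# 2 mm isotropic voxels, world origin at voxel (40, 48, 36).
M = np.array([[2.0, 0.0, 0.0, -80.0],
              [0.0, 2.0, 0.0, -96.0],
              [0.0, 0.0, 2.0, -72.0],
              [0.0, 0.0, 0.0,   1.0]])

def voxel_to_mm(M, ijk):
    """Map voxel indices to world (mm) coordinates, as xyzt = M(1:3,:)*xyz1."""
    ijk1 = np.append(np.asarray(ijk, dtype=float), 1.0)  # homogeneous coords
    return M[:3, :] @ ijk1

print(voxel_to_mm(M, [40, 48, 36]))
```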

# SPM99 Gem 22: spm_orthviews tricks

IMHO, after spatial normalization, John’s key contribution to SPM is spm_orthviews, the function behind the ‘Check Reg’ button which lets you view many volumes simultaneously. Some of the Gems use spm_orthviews (e.g. Gems 1 and 16), but listed here are some generally useful tricks for spm_orthviews.

These tricks are useful anytime a three-view orthogonal slice view is shown, whether from using ‘Check Reg’ or ‘Display’, or when overlaying blobs from the ‘Results’ window.

Turn cross-hairs off/on:

```
spm_orthviews('Xhairs','off')
spm_orthviews('Xhairs','on')
```

Return the current x,y,z world-space, mm location:

```
spm_orthviews('pos')'
```

(Note that I transpose the returned value into a row vector, for easier copying and pasting.)

Move to an x,y,z world-space, mm location:

```
spm_orthviews('reposition',[x y z])
```

# SPM99 Gem 21: ImCalc Script

As posted, this snippet is for SPM2; I’ve edited it to work with SPM99.

```Subject: Re: a script to use ImaCal
From: John Ashburner <john@FIL.ION.UCL.AC.UK>
Date: Thu, 16 Oct 2003 11:24:17 +0000 (07:24 EDT)
To: SPM@JISCMAIL.AC.UK

> Brain image for each subject to mask out CSF signal was generated by
> using MPR_seg1.img (i1) and MPR_seg2.img (i2) with (i1+i2)>0.5 in
> ImaCal.(called brainmpr.img for each subject)
>
> Then I have more than two-hundred maps, which need to mask out
> CSF. I think I can use ImaCal again with selecting brainmpr.img (i1)
> and FAmap.img (i1), and then calculating (i1.*i2) to generate a new
> image named as bFAmap.img.  Unfortunately, if I use the ImaCal, it
> take so long time to finish all subjects. Could anyone have a script
> to generate  a multiplication imaging with choosing a brain
> image(brainmpr.img, i1) and a map image (FAmap.img, i2) and writing
> an output image (bFAmap.img) from i1.*i2?

You can do this with a script in Matlab.  Something along the lines of the
following should do it:

P1=spm_get(Inf,'*.img','Select i1');
P2=spm_get(size(P1,1),'*.img','Select i2');

for i=1:size(P1,1),
P = strvcat(P1(i,:),P2(i,:));
Q = ['brainmpr_' num2str(i) '.img'];
f = '(i1+i2)>0.5';
flags = {[],[],[],[]};
Q = spm_imcalc_ui(P,Q,f,flags);
end;

Note that I have not tested the above script.  I'm sure you can fix it if
it doesn't work.

Best regards,
-John
```
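John’s loop applies the two ImCalc expressions programmatically: first `(i1+i2)>0.5` to build a brain mask, then `i1.*i2` to apply it. The same arithmetic in a NumPy sketch with toy arrays (real data would come from the .img files):

```python
import numpy as np

# Toy stand-ins for the segmentation and FA images.
gm = np.array([0.7, 0.2, 0.4])   # i1: grey-matter probability
wm = np.array([0.2, 0.1, 0.3])   # i2: white-matter probability
fa = np.array([0.5, 0.6, 0.7])   # FA map to be masked

mask = (gm + wm) > 0.5           # ImCalc expression '(i1+i2)>0.5'
bfa = mask * fa                  # ImCalc expression 'i1.*i2', mask as i1
print(bfa)
```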

# SPM99 Gem 20: Creating customized templates and priors

```From: John Ashburner
Subject: Re: Help for constructing Template images
Date: Wed, 18 Dec 2002 16:22:59 +0000
To: SPM@JISCMAIL.AC.UK

> What are the advantages of customized template images
> in VBM analysis?

Customised templates are useful when:

1) The contrast in your MR images is not the same as the
contrast used to generate the existing templates.  If
the contrast is different, then the mean squared cost
function is not optimal.  However, for "optimised" VBM
this only really applies to the initial affine
registration that is incorporated into the initial
segmentation.  Contrast differences are likely to have
a relatively small effect on the final results.

2) The demographics of your subject population differ
from those used to generate the existing templates and
prior probability images.  For example, serious problems
can occur if your subjects have very large ventricles.
In these data, there would be CSF in regions where the
existing priors say CSF should not exist.  This would
force some of the CSF to be classified as white matter,
seriously affecting the intensity distribution that
is used to model white matter.  This then has negative
consequences for the whole of the segmentation.

> Can any one please explain the detailed steps to
> construct a customized template image (gray and white
> matter images) for VBM analysis?

The following script is one possible way of generating your
own template image.  Note that it takes a while to run, and
does not save any intermediate images that could be useful
for quality control.  Also, if it crashes at any point then
it is difficult to recover the work it has done so far.

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
```

```* * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

You may also wish to do some manual editing of the images
afterwards - especially to remove extra-skull CSF.  When
everything has finished, simply smooth the images by 8mm
and call them templates and prior probability images.
You can modify the default priors for the segmentation step
in order that the customised ones are used.  This can be done
either by changing spm_defaults.m, or by typing the following
in Matlab:

spm_defaults
global defaults
defaults.segment.estimate.priors = ...
spm_get(3,'*.IMAGE','Select GM,WM & CSF priors');

Note that this will be cleared if you reload the defaults.  This
could be done when you start spm, reset the defaults or if the
optimised VBM script is run, as it calls spm_defaults.m.
Alternatively the optimised VBM script could be modified to
include the above.

Note that I have only tried the script with three images, so
I don't have a good feel for how robust it is likely to be.

>
> Please let me know the number of subjects required to
> construct one?

Its hard to say, but more is best.  The 8mm smoothing means
that you can get away with slightly fewer than otherwise.

Best regards,
-John
```

Note: The make_template script is an updated version of what was originally posted; it is current as of Sep 9, 2003.

# SPM99 Gem 19: VBM modulation script

This is the famed script to modulate spatially normalized probability images. For gray matter probability images, modulated images have units of gray matter volume per voxel, instead of gray matter concentration adjusted for differences in local brain size.
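Modulation multiplies each warped tissue-probability value by the Jacobian determinant of the deformation, so that total tissue volume is preserved rather than inflated or shrunk by the warp. A one-dimensional numeric sketch of the idea (toy numbers, not SPM code):

```python
import numpy as np

# 1-D toy: a tissue profile with total "volume" 2 (unit voxel spacing).
orig = np.array([0.0, 1.0, 1.0, 0.0])

# "Normalisation" stretches it by a factor of 2: the sum doubles.
warped = np.repeat(orig, 2)

# The Jacobian of a uniform 2x stretch is 1/2 everywhere; modulation
# multiplies by it, restoring the original total volume.
modulated = warped * 0.5

print(orig.sum(), warped.sum(), modulated.sum())
```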

In John’s own words, here’s a better explanation.

Note that the script below is SPM99 specific. In SPM2, this is done via `spm_write_sn(V,prm,'modulate')` (see the `spm_write_sn` help for more).

```Date: Thu, 27 Jul 2000 14:28:39 +0100 (BST)
Subject: Re: matlab question & VBM
From: John Ashburner
To: spm@mailbase.ac.uk, x.chitnis@iop.kcl.ac.uk

| Following on from John Ashburner's recent reply, is there a matlab function
| that enables you to adjust spatially normalised images in order to preserve
| original tissue volume for VBM?

The function attached to this email will do this.  Type the following bit of
code into Matlab to run it:

Mats   = spm_get(Inf,'*_sn3d.mat','Select sn3d.mat files');
Images = spm_get(size(Mats,1),'*.img','Select images to modulate');

for i=1:size(Mats,1),
spm_preserve_quantity(deblank(Mats(i,:)),deblank(Images(i,:)));
end;

[...]

Best regards,
-John
```

The attached script is available as spm_preserve_quantity.m.

# SPM99 Gem 18: -log10 P-values from T images

P-value images are difficult to visualize since “important” values are small and clumped near zero. A -log10 transformation makes for much better visualization while retaining interpretability (e.g. a value of 3 corresponds to P=0.001).

This function, T2nltP, will create a -log10 P-value image based on either a contrast number (which must be a T contrast) or a T statistic image and the degrees of freedom.

t2nltp.m

```function T2nltP(a1,a2)
% Write image of -log10 P-values for a T image
%
% FORMAT T2nltP(c)
% c     Contrast number of a T constrast (assumes cwd is a SPM results dir)
%
% FORMAT T2nltP(Timg,df)
% Timg  Filename of T image
% df    Degrees of freedom
%
%
% As per SPM convention, T images are zero masked, and so zeros will have
% P-value NaN.
%
% @(#)T2nltP.m	1.2 T. Nichols 03/07/15

if nargin==1
c = a1;
load xCon.mat                 % defines xCon (SPM99 results directory)
load SPM.mat xX               % defines xX (for effective df)
if xCon(c).STAT ~= 'T', error('Not a T contrast'); end
Tnm = sprintf('spmT_%04d',c);
df = xX.erdf;
else
Tnm = a1;
df  = a2;
end

Tvol = spm_vol(Tnm);

Pvol        = Tvol;
Pvol.dim(4) = spm_type('float');
Pvol.fname  = strrep(Tvol.fname,'spmT','spm_nltP');
if strcmp(Pvol.fname,Tvol.fname)
Pvol.fname = fullfile(spm_str_manip(Tvol.fname,'H'), ...
['nltP' spm_str_manip(Tvol.fname,'t')]);
end

Pvol = spm_create_image(Pvol);

for i=1:Pvol.dim(3),
img         = spm_slice_vol(Tvol,spm_matrix([0 0 i]),Tvol.dim(1:2),0);
img(img==0) = NaN;
tmp         = find(isfinite(img));
if ~isempty(tmp)
img(tmp)  = -log10(max(eps,1-spm_Tcdf(img(tmp),df)));
end
Pvol        = spm_write_plane(Pvol,img,i);
end;
```
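The transformation itself is just -log10 of the upper-tail T P-value, with SPM’s zero mask carried through as NaN. A Python sketch of the same arithmetic using SciPy’s t distribution (assumed available; the input values are toy numbers):

```python
import numpy as np
from scipy.stats import t as tdist

def t_to_neglog10p(tvals, df):
    """-log10 upper-tail P for T values; zeros and non-finite values
    become NaN, mirroring SPM's zero masking."""
    tvals = np.asarray(tvals, dtype=float)
    out = np.full(tvals.shape, np.nan)
    ok = np.isfinite(tvals) & (tvals != 0)
    p = tdist.sf(tvals[ok], df)                    # 1 - CDF, upper tail
    out[ok] = -np.log10(np.maximum(p, np.finfo(float).eps))
    return out

print(t_to_neglog10p(np.array([0.0, 2.0, 3.0]), 10))
```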

# SPM99 Gem 17: Origin madness

A source of confusion is where the origin (the [0,0,0] location of an image) is stored. When there is no associated .mat file, the origin is read from the Analyze originator field. If this is zero, it is assumed to match the center of the image field of view. If there is a .mat file, then the origin is the first three values of

```        M\[0 0 0 1]'
```

where M is the transformation matrix in the .mat file.

One limitation is that the origin stored in the Analyze header is a (short) integer, and so cannot represent an origin with fractional values. To set the origin to a specific, fractional value, use this code snippet:

```  Orig = [ x y z ]; % Desired origin in units of voxels
P = spm_get(Inf,'*.img'); % matrix of file names

for i=1:size(P,1)

M = spm_get_space(deblank(P(i,:)));
R = M(1:3,1:3);
% Set origin
M(1:3,4) = -R*Orig(:);
spm_get_space(deblank(P(i,:)),M);

end
```
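Both relationships can be checked numerically: the origin is recovered as the first three elements of `M\[0 0 0 1]'`, and setting `M(1:3,4) = -R*Orig` places world (0,0,0) at the chosen voxel. A NumPy sketch (the matrix values are illustrative):

```python
import numpy as np

# Illustrative voxel-to-world matrix M (2 mm voxels, made-up translation).
M = np.array([[2.0, 0.0, 0.0,  -90.0],
              [0.0, 2.0, 0.0, -126.0],
              [0.0, 0.0, 2.0,  -72.0],
              [0.0, 0.0, 0.0,    1.0]])

# Origin in voxel coordinates: first three values of M \ [0 0 0 1]'.
origin = np.linalg.solve(M, np.array([0.0, 0.0, 0.0, 1.0]))[:3]
print(origin)

# Conversely, place the origin at a chosen (possibly fractional) voxel:
new_orig = np.array([45.5, 63.0, 36.0])
M2 = M.copy()
M2[:3, 3] = -M[:3, :3] @ new_orig      # the snippet's M(1:3,4) = -R*Orig(:)

# World coordinates of that voxel are now (0, 0, 0).
print(M2 @ np.append(new_orig, 1.0))
```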

# SPM99 Gem 16: Scripting Figures

OK, this isn’t a John email, but rather a tip of my own that uses one of John’s functions. When preparing a manuscript you often want to display a “blobs on brain” image, where a reference image underlies a colored significance image. You can do this within the SPM Results facility, but since you never get a figure right the first time, I prefer to do it on the command line.

The code snippet below scripts the blobs-on-brain figure. You’ll get a large orthogonal viewer in the graphics window, so it’s then easy to print (or grab a screen snapshot) to create your figure.

```% Make sure to first clear the graphics window

% Select images
Pbck = spm_get(1,'*.img','Select background image')
Psta = spm_get(1,'*.img','Select statistic image')

% Set the threshold value
Th   = 4;

% Create a new image where all voxels below Th have value NaN
PstaTh = [spm_str_manip(Psta,'s') 'Th'];
spm_imcalc_ui(Psta,PstaTh,'i1+(0./(i1>=Th))',{[],[],spm_type('float')},Th);

% Display!
spm_orthviews('image',Pbck,[0.05 0.05 0.9 0.9]);

% Possibly, set the crosshairs to your favorite location
spm_orthviews('reposition',[0 -10 10])

```

This assumes that you just want to threshold your image based on a single intensity threshold. To make it totally scripted, replace the spm_get calls with hard assignments.
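The expression `i1+(0./(i1>=Th))` works because 0/0 is NaN while 0/1 is 0, so sub-threshold voxels become NaN and the rest pass through unchanged. The same trick in NumPy (toy values; NumPy warns about 0/0, which is silenced here):

```python
import numpy as np

stat = np.array([1.0, 3.5, 4.0, 5.2])   # toy statistic values
Th = 4.0

with np.errstate(divide='ignore', invalid='ignore'):
    # 0/0 -> NaN below threshold, 0/1 -> 0 at or above it.
    thresholded = stat + 0.0 / (stat >= Th)

print(thresholded)
```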

# SPM99 Gem 15: Computing Cerebral Volume (VBM)

```Subject:      Re: smoothed modulated image
From: John Ashburner <john@FIL.ION.UCL.AC.UK>
Date:         Fri, 12 Apr 2002 13:45:01 +0000
To: SPM@JISCMAIL.AC.UK

> considering a smoothed modulated image, which is the right
> interpretation of the matrix:  "each value of the matrix denotes the
> volume, measured in mm3, of gray matter within each voxel" or "each
> value of the matrix is proportional to the volume, measured in mm3,
> of gray matter within each voxel" or something else?

The contents of a modulated image are a voxel compression map
multiplied by tissue belonging probabilities (which range between zero
and one).

The units in the images are a bit tricky to explain easily (so I would
suggest you say that intensities are proportional).  To find the
volume of tissue in a structure in one of the modulated images, you
sum the voxels for that structure and multiply by the product of the
voxel sizes of the modulated image.

The total volume of grey matter in the original image can be
determined by summing the voxels in the modulated, spatially
normalised image and multiplying by the voxel volume (product of voxel
size).

For example, try the following code for an original image and the same
image after spatial normalisation and modulation. Providing the
bounding box of the normalised image is big enough, then both should
give approximately the same answer:

V = spm_vol(spm_get(1,'*.img'))
tot = 0;
for i=1:V(1).dim(3),
img = spm_slice_vol(V(1),spm_matrix([0 0 i]),V(1).dim(1:2),0);
tot = tot + sum(img(:));
end;
voxvol = det(V(1).mat)/100^3; % volume of a voxel, in litres
tot    = tot; % integral of voxel intensities

tot*voxvol

I hope the above makes sense.

All the best,
-John
```
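The recipe — sum the modulated voxel values and multiply by the voxel volume (the determinant of the affine’s 3x3 part, converted from mm³ to litres by dividing by 100³) — can be sketched in Python with a toy image:

```python
import numpy as np

# Toy modulated grey-matter image: a 4x4x4 block of probability 0.5.
img = np.full((4, 4, 4), 0.5)

# Illustrative affine with 2 mm isotropic voxels (det of 3x3 part = 8 mm^3).
M = np.diag([2.0, 2.0, 2.0, 1.0])

voxvol_litres = abs(np.linalg.det(M[:3, :3])) / 100.0**3   # mm^3 -> litres
total_litres = img.sum() * voxvol_litres
print(total_litres)
```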