Voxel-based Lesion-Symptom Mapping - Céline R. Gillebert
Paul Broca (1861): “Mr. Tan”
• no productive speech
• single repetitive syllable ‘tan’
Broca’s area: speech production
Broca’s aphasia: problems with fluency, articulation, word-finding, repetition, and with the production and comprehension of complex grammatical structures
Lesion-Symptom Mapping = inferring the function of a brain area by observing the behavioural consequences of damage to that area
advantages
• stronger inference: is the brain area necessary for the task? (fMRI, EEG and MEG only ask whether activity in a brain area correlates with the task)
• infer the function of a node in a network of areas (with fMRI it is difficult to understand the differential contribution of areas that are simultaneously activated by the task)
• clinical relevance: predict recovery or select the best protocol for rehabilitation of behavioural deficits
disadvantages
• Lesions do not respect the boundaries of functional areas… and do not cover the whole brain, not even in the largest possible sample of patients.
• Lesions are permanent… although their relation to behavioural function depends on the time since stroke (neuroplasticity).
• Lesions can cause dysfunction of structurally intact areas at a distance.
… lesion-symptom mapping is inherently a “localizationist approach”
http://www.strokecenter.org/
Example: hemispatial neglect. Demeyere et al. (under review). Psychological Assessment.
lesion overlap
• We can overlay the lesions of patients with a deficit on the cancellation task (see the sketch below).
• Example: Karnath et al. (2004). Cerebral Cortex (n=78)
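A minimal sketch of how such an overlap map could be computed once binary lesion masks have been normalized to a common template, using nibabel and numpy; the file names are hypothetical placeholders.

```python
# Minimal sketch: lesion overlap map from binary lesion masks in template space.
# File names are hypothetical; masks are assumed to be 0/1 NIfTI images that
# already share the same template grid.
import glob
import nibabel as nib
import numpy as np

mask_files = sorted(glob.glob("lesions_mni/patient_*_lesion.nii.gz"))

overlap = None
for f in mask_files:
    img = nib.load(f)
    data = img.get_fdata() > 0.5                     # binarize to be safe
    overlap = data.astype(int) if overlap is None else overlap + data

# Voxel value = number of patients with a lesion at that voxel.
out = nib.Nifti1Image(overlap.astype(np.int16), img.affine)
nib.save(out, f"lesion_overlap_n{len(mask_files)}.nii.gz")
```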
lesion subtraction Patients with similar brain damage but without the deficit are critical to identify areas related to the function on top of areas that are commonly damaged! Karnath et al. (2004). Cerebral Cortex.
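A minimal sketch of a subtraction map in the spirit of Karnath et al. (2004): the percentage of no-deficit patients lesioned at each voxel is subtracted from the percentage of deficit patients lesioned there. Array names are hypothetical and the masks are assumed to be in a common template space.

```python
# Minimal sketch: lesion subtraction map from two stacked sets of binary masks.
import numpy as np

def subtraction_map(lesions_deficit, lesions_nodeficit):
    """lesions_deficit:   (n_deficit, x, y, z) binary array
       lesions_nodeficit: (n_nodeficit, x, y, z) binary array"""
    pct_deficit = lesions_deficit.mean(axis=0) * 100       # % lesioned, deficit group
    pct_nodeficit = lesions_nodeficit.mean(axis=0) * 100   # % lesioned, no-deficit group
    # positive values = voxels lesioned more often in patients WITH the deficit
    return pct_deficit - pct_nodeficit
```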
voxel-based lesion-symptom mapping • Statistics to evaluate whether differences in lesion frequency are reliable predictors of behavioural deficits. • Example: Karnath et al. (2004). Cerebral Cortex
How to run a VLSM analysis?
How to run a VLSM analysis? 1. Acquisition of brain scan with visible lesion 2. Delineation of the lesion 3. Normalization of lesion to a common template 4. Statistics across a group of patients
CT versus MR scans (Case RR, Oxford CNC)
CT scans:
• clinical: acute haemorrhage visible
• used when there is a contraindication for MRI
• not ideal for research… but large databases exist
MRI scans:
• no radiation (control data)
• higher spatial resolution
• different images with different contrasts
MR scans: different contrasts (Case RR, Oxford CNC)
T1-weighted scans:
• fast to acquire
• good contrast between WM and GM
• excellent structural detail
T2-weighted scans:
• slower to acquire
• excellent for finding lesions
• FLAIR attenuates CSF
acute or chronic stroke?
• acute stroke: widespread dysfunction
  • structurally intact brain areas are disrupted because they are connected to the lesioned brain areas
  • more clinically relevant
• chronic stroke: the brain is plastic
  • difficult to infer what a brain region used to do
  • more stable: identifies functions that cannot be compensated
How to run a VLSM analysis? 1. Acquisition of brain scan with visible lesion 2. Delineation of the lesion 3. Normalization of lesion to a common template 4. Statistics across a group of patients
lesion delineation
• Manual delineation of the lesion: the “gold standard”
  • requires experience and knowledge of brain anatomy
  • time-consuming, only feasible for relatively small sample sizes (but power of VLSM…)
  • susceptible to operator bias
• Fully/semi-automated delineation
  • replicable
  • suitable for large sample sizes
  • errors are inevitable: “normal” signal varies from individual to individual, and lesions are heterogeneous in signal, also within an individual
Automated lesion delineation • CT scans: Gillebert, C.R., Humphreys, G.W., & Mantini, D. (2014). Automated delineation of stroke lesions using brain CT images. Neuroimage: Clinical, 4:540-548. • MRI scans: Mah, Y.H., Jager, R., Kennard, C., Husain, M., & Nachev, P. (2014). A new method for automated high-dimensional lesion segmentation evaluated in vascular injury and applied to the human occipital lobe. Cortex, 56:51-64.
Manual lesion delineation
Manual delineation of the lesion, slice by slice, using e.g. MRIcron (Case RR, Oxford CNC)
overview 1. Acquisition of brain scan with visible lesion 2. Delineation of the lesion 3. Normalization of lesion to a common template 4. Statistics across a group of patients
normalization • Alignment of brains to ‘template’ image in stereotaxic space, necessary to compare lesions between individuals • Linear and non-linear transformation to minimize difference with template
normalization
• Alignment of brains to a ‘template’ image in stereotaxic space, necessary to compare lesions between individuals
• Linear and non-linear transformations to minimize the difference with the template
• Use an appropriate (age- and modality-matched) template, e.g. templates built from N=152 (25 yrs), n=50 (73 yrs), n=30 (61 yrs) or n=366 (35 yrs) individuals
Rorden et al. (2012). Neuroimage. Winkler et al., FLAIR Templates (MNI152, SPM and FSL), available at http://glahngroup.org
normalization of CT scans: Gillebert et al. (2014) Neuroimage: Clinical
normalization
• ! The lesioned region appears different in the image and the template, and the software will attempt to warp it.
→ Solution: ignore the lesioned brain tissue in the process (see the sketch below)
→ Masked normalization: Brett et al. (2001) Neuroimage
→ Less of a problem with the unified segmentation-normalization approach (Crinion et al. (2007) Neuroimage)
• Clinical Toolbox for SPM
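A conceptual sketch of the idea behind cost-function masking: the similarity measure that drives normalization is evaluated only outside the lesion, so abnormal signal cannot distort the warp. In practice this is handled by SPM / the Clinical Toolbox; the function below is only an illustration with hypothetical inputs.

```python
# Conceptual sketch of cost-function masking (Brett et al., 2001): the cost
# guiding the spatial transformation ignores lesioned voxels. Arrays are
# hypothetical 3D volumes already resampled to the same grid.
import numpy as np

def masked_ssd(patient, template, lesion_mask):
    """Sum-of-squared-differences computed over intact tissue only."""
    valid = ~lesion_mask.astype(bool)          # True where tissue is intact
    diff = patient[valid] - template[valid]
    return np.sum(diff ** 2)
```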
Clinical Toolbox in SPM Rorden et al. (2012). Neuroimage http://www.mccauslandcenter.sc.edu/CRNL/clinical-toolbox
overview 1. Acquisition of brain scan with visible lesion 2. Delineation of the lesion 3. Normalization of lesion to a common template 4. Statistics across a group of patients
visualization of lesion distribution Molenberghs, Gillebert, et al., 2009
Operationalization of behaviour
[Histogram: number of cancelled complete hearts (0-50) on the x-axis, number of patients on the y-axis; cut-off = 42; N=132, n=180]
Demeyere*, Gillebert*, et al. (in preparation)
Operationalization of behaviour
[Same histogram, with the number of cancelled complete hearts labelled as continuous ‘performance’; see the sketch below]
Demeyere*, Gillebert*, et al. (in preparation)
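A minimal sketch of the two operationalizations shown above, dichotomizing the cancellation score at a cut-off versus keeping it continuous; the data array is a hypothetical toy example.

```python
# Minimal sketch: two ways to operationalize the cancellation score.
# `hearts_cancelled` is hypothetical toy data (complete hearts found, 0-50).
import numpy as np

hearts_cancelled = np.array([50, 48, 12, 44, 30, 50, 41])

# Dichotomous: deficit if below a normative cut-off (42 in the slide above).
CUTOFF = 42
deficit = hearts_cancelled < CUTOFF            # boolean deficit / no-deficit labels

# Continuous: keep the raw score as the behavioural variable entered into VLSM.
performance = hearts_cancelled.astype(float)
```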
Parametric or non-parametric statistics
• traditional: t-test for continuous data
  • assumptions: data are normally distributed, the two groups have similar variance, and the data represent interval measurements
• but:
  • the assumptions are difficult to test across the thousands of voxel-wise comparisons
  • the t-test measures differences in the mean between two groups, which is not appropriate for skewed distributions
  • dependent variables are often measured on an ordinal scale
• alternative: Brunner-Munzel rank-order test (see the voxel-wise sketch below)
  • assumption-free, also valid for variables on an ordinal scale
  • approaches a normal distribution if n >= 10
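A minimal voxel-wise sketch of the Brunner-Munzel approach, assuming a hypothetical binary lesion matrix (patients x voxels) and a vector of behavioural scores; scipy (>= 1.2) provides the test itself.

```python
# Minimal sketch: voxel-wise Brunner-Munzel test.
# `lesions` is an (n_patients, n_voxels) binary matrix, `scores` the behavioural
# measure per patient; both are hypothetical inputs.
import numpy as np
from scipy.stats import brunnermunzel

def vlsm_brunner_munzel(lesions, scores, min_n=10):
    n_vox = lesions.shape[1]
    stats = np.full(n_vox, np.nan)
    pvals = np.full(n_vox, np.nan)
    for v in range(n_vox):
        lesioned = scores[lesions[:, v] == 1]
        spared = scores[lesions[:, v] == 0]
        # only test voxels with enough observations in BOTH groups
        if len(lesioned) >= min_n and len(spared) >= min_n:
            stats[v], pvals[v] = brunnermunzel(spared, lesioned)
    return stats, pvals
```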
correction for multiple comparisons
• Bonferroni correction
  • strong protection against false alarms
  • overly conservative when comparisons are not independent
• Permutation thresholding (see the sketch below)
  • randomly relabel and resample the data, computing the maximum observed statistic within the entire brain volume for each permutation
  • well suited to lesion data, where lesions form large contiguous regions and individual voxels are not truly independent
• False discovery rate (FDR)
  • controls the ratio of false alarms to hits
  • sensitive when a signal is present in a substantial portion of the data
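A minimal sketch of permutation (maximum-statistic) thresholding: behavioural scores are shuffled across patients, the voxel-wise map is recomputed, and the distribution of the per-permutation maximum gives a family-wise error threshold. `compute_map` stands in for any function returning one statistic per voxel; all names are hypothetical.

```python
# Minimal sketch: permutation (max-statistic) thresholding for a VLSM map.
import numpy as np

def permutation_threshold(lesions, scores, compute_map, n_perm=1000, alpha=0.05):
    rng = np.random.default_rng(0)
    max_stats = np.empty(n_perm)
    for i in range(n_perm):
        shuffled = rng.permutation(scores)            # break the lesion-behaviour link
        max_stats[i] = np.nanmax(compute_map(lesions, shuffled))
    # critical value controlling the family-wise error rate at `alpha`
    return np.quantile(max_stats, 1 - alpha)

# Example usage with the Brunner-Munzel sketch above (statistic map only):
# crit = permutation_threshold(lesions, scores,
#                              lambda L, s: vlsm_brunner_munzel(L, s)[0])
```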
Some considerations…
• A t-test requires two groups and one continuous variable.
• The VLSM t-test is orthogonal to the t-tests used for fMRI/VBM:
  • fMRI/VBM t-tests: the deficit defines the two groups; voxel intensity provides the continuous variable.
  • VLSM: voxel intensity (lesion/no lesion) defines the two groups; behavioural performance provides the continuous variable.
• Note that VLSM group sizes vary from voxel to voxel.
  • Statistical tests provide optimal power when both groups have the same number of observations (balanced).
  • Therefore, VLSM power fluctuates across voxels.
  • We cannot make inferences about voxels that are rarely damaged or always damaged (this also holds for binomial tests); see the sketch below.
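A minimal sketch of how one might restrict the analysis to voxels with workable group sizes, excluding voxels lesioned in very few patients or in nearly all of them; the threshold and variable names are hypothetical.

```python
# Minimal sketch: mask of voxels that are testable in VLSM, i.e. lesioned in at
# least `min_lesioned` patients and spared in at least `min_lesioned` patients.
# `lesions` is a hypothetical (n_patients, n_voxels) binary matrix.
import numpy as np

def testable_voxels(lesions, min_lesioned=5):
    n_patients = lesions.shape[0]
    n_lesioned = lesions.sum(axis=0)
    return (n_lesioned >= min_lesioned) & (n_lesioned <= n_patients - min_lesioned)
```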
Beyond VLSM…
• Track-wise “hodological” lesion-deficit analysis
  • Thiébaut de Schotten et al. (2012) Cerebral Cortex
  • maps of white matter tracts represent the probability that a given voxel belongs to that tract
  • calculate the size of the overlap (in cubic centimetres) between each patient’s lesion map and each thresholded (50%) pathway map (see the sketch below)
→ Can this continuous measure of pathway disconnection predict behavioural deficits?
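A minimal sketch of the track-wise overlap measure described above: the lesion is intersected with a tract probability map thresholded at 50%, and the overlap volume is expressed in cubic centimetres. File names are hypothetical and both images are assumed to share the same template grid.

```python
# Minimal sketch: lesion-tract overlap volume in cubic centimetres.
import nibabel as nib
import numpy as np

lesion_img = nib.load("patient_lesion_mni.nii.gz")       # hypothetical file names
tract_img = nib.load("tract_probability_map.nii.gz")

lesion = lesion_img.get_fdata() > 0.5
tract = tract_img.get_fdata() >= 0.5                     # threshold at 50% probability

voxel_volume_mm3 = np.prod(lesion_img.header.get_zooms()[:3])
overlap_cc = (lesion & tract).sum() * voxel_volume_mm3 / 1000.0   # mm^3 -> cm^3
```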
Beyond VLSM… Chechlacz, Mantini, Gillebert, & Humphreys (under review). Cortex
Beyond VLSM…
• Voxel-wise Bayesian lesion-deficit analysis
  • Chen et al. (2008) Neuroimage
• Multivariate lesion-symptom mapping (MLSM)
  • Zhang et al. (2014) Human Brain Mapping: modelling the relation of the deficit to the entire lesion map, as opposed to each isolated voxel, using support vector regression (see the sketch below)
  • Mah et al. (2014) Brain: capturing the high-dimensional structure of lesion data using machine-learning techniques
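A minimal sketch in the spirit of multivariate lesion-symptom mapping with support vector regression: the entire lesion map predicts the behavioural score, rather than each voxel being tested in isolation. This uses scikit-learn with hypothetical variable names and is an illustration, not the authors' implementation.

```python
# Minimal sketch: multivariate prediction of behaviour from whole lesion maps.
# `lesions` is a hypothetical (n_patients, n_voxels) binary matrix and `scores`
# the behavioural measure per patient.
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

def mlsm_svr(lesions, scores):
    model = SVR(kernel="linear", C=1.0)
    # cross-validated accuracy of predicting behaviour from the full lesion map
    return cross_val_score(model, lesions, scores, cv=5, scoring="r2")
```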