IMAGE REGISTRATION WITH APPLICATIONS TO
MULTIMODAL SMALL ANIMAL IMAGING

by

Belma Dogdas

A Dissertation Presented to the
FACULTY OF THE GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(ELECTRICAL ENGINEERING)

August 2007

Copyright 2007 Belma Dogdas
Dedication

to my family
Acknowledgements

I would like to thank my advisor Richard Leahy for giving me the opportunity to work with him in his group, and for his full support, guidance and patience throughout my studies. He provided me with deep background knowledge and advice on my research, from developing ideas to presenting outcomes. Working with Dr. Leahy has been a great pleasure and experience for me.

I would also like to thank the other members of my dissertation and qualifying committees, Antonio Ortega, Francis Bonahon, Krishna Nayak and Shri Narayanan, for their valuable input and comments on this work.

I am indebted to the members of our research group for providing a stimulating and fun environment. I am especially grateful to Dimitrios Pantazis, Quanzheng Li, Hua Hui, Abhijit Chaudhari, Sangtae Ahn, Juan Soto, Anand Joshi, Zheng Li, Sangeetha Somayajula and former group members Evren Asma, Bing Bai and Alexei Ossadtchi for their friendship and help throughout these years. Special thanks to Gloria Halfacre, Allan Weber, Regina Morton and Diane Demetras for all their help during my study at USC.

During my PhD research I had the chance to work with great researchers from whom I have learned a lot: thanks to David Shattuck, Felix Darvas, John Mosher, Arion Chatziioannou, David Stout and Gary Christensen for their guidance and help during the projects we have collaborated on.

My friends have been a great source of support and encouragement to me. Thanks to Murti, Salih, Esen, Serdar, Erdem, Sebe, Nazli, Engin, Ozlem, Neslihan, Esen, Berna, Ipek, Aysun, Burcu and Asli for their friendship during my years in Los Angeles.

Finally, I would like to thank my parents Ali and Kezban and my sister Emine for their love and emotional support during my studies. They have been a great source of inspiration at every stage of my life. Last but not least, I would like to thank Canturk for his love, for his constant encouragement and for being there for me all these years.
Table of Contents

Dedication
Acknowledgements
List of Figures
List of Tables
Abstract

Chapter 1: Image Registration with Applications to Multimodal Small Animal Imaging
  1.1 An Overview of Image Registration
    1.1.1 A General Framework for Image Registration
    1.1.2 Feature Selection and Similarity Metrics
    1.1.3 Parameterization of the Deformation Field
    1.1.4 Choice of Regularization Functions
    1.1.5 Classification of Registration Techniques based on Parameterization
    1.1.6 Low Dimensional Methods
      1.1.6.1 Affine transformations based on points, curves, surfaces and intensities
      1.1.6.2 Non-rigid transformations based on points, curves, surfaces and intensities
    1.1.7 High Dimensional Methods
      1.1.7.1 Inverse Consistent Linear Elastic Registration
  1.2 Applications of Selected Registration Methods to Small Animal Imaging
    1.2.1 Rigid registration: AIR vs. RView
    1.2.2 Thin Plate Spline
    1.2.3 Nonrigid registration, low-dimensional vs. high-dimensional: AIR vs. LEREG
    1.2.4 Atlas-based registration to automatically label a template mouse

Chapter 2: Digimouse: A 3D Whole Body Mouse Atlas from CT and Cryosection Data
  2.1 Mouse Preparation and Imaging
  2.2 Processing the Data
    2.2.1 Data alignment and resampling
    2.2.2 Segmentation and Labelling
  2.3 Applications: PET and Optical Simulations using the Mouse Atlas

Chapter 3: 3D Surface Model of a Mouse from Orthogonal Views of Structured Light Images
  3.1 Optical Geometry
  3.2 Camera Calibration using the Direct Linear Transformation (DLT) Method
  3.3 Structured Light System
  3.4 Phase Unwrapping
  3.5 Combining height maps from top and side views
  3.6 Experiments

Chapter 4: Automatic Segmentation of Images with Tumor Pathology
  4.1 Viscous Fluid Registration
  4.2 Level Sets
  4.3 Experiments

Chapter 5: Segmentation of Skull and Scalp in 3D Human MRI Using Mathematical Morphology
  5.1 Related Work
  5.2 Obtaining Anatomical Surface with Morphological Processing
    5.2.1 Mathematical Morphology
    5.2.2 Segmentation of Brain
    5.2.3 Skull and Scalp Thresholds
    5.2.4 Scalp Segmentation
    5.2.5 Segmentation of Skull
      5.2.5.1 Segmentation of Outer Skull
      5.2.5.2 Segmentation of Inner Skull
  5.3 Results
    5.3.1 Automation study
    5.3.2 Co-registered CT/MRI Validation

Chapter 6: Conclusion and Future Work

References
List of Figures

1.1 Different imaging technologies developed for small animal imaging. Shown here are: cryotome, microPET, optical, MRI, microCAT and ultrasound.
1.2 Illustration of nonrigid registration applied to a pair of CT mouse images: (a) coronal section through the target mouse S (b) corresponding section through template mouse T (c) template mouse after deformation to match target mouse (d) overlay of target and template mouse before registration (e) overlay of target and template mouse after deformation (f) rectangular grid in the template space (g) deformed grid using transformation h = x + u(x).
1.3 Illustration of the deformation fields after rigid, affine and nonrigid transformations are applied in 2D.
1.4 Rigid registration of PET to CT data using RView and AIR.
1.5 Joint histogram plots of PET-CT data before and after rigid registration using mutual information.
1.6 Rigid registration of CT to cryosection data using RView and AIR.
1.7 2D thin plate spline registration.
1.8 Illustration of the deformation field after 2D thin plate spline registration.
1.9 Nonrigid registration of CT images of template mouse to Digimouse using AIR and LEREG: (a), (e) CT image of Digimouse (b), (f) rigidly registered CT image of template mouse using RView (c) warped CT image of template mouse using AIR (d) overlay of the warped CT image of template mouse onto CT image of Digimouse (g) warped CT image of template mouse using LEREG (h) overlay of the warped CT image of template mouse onto CT image of Digimouse.
1.10 Evaluation of rigid and nonrigid registrations using AIR and LEREG by overlaying the skeletons: (1a, 2a) overlay of skeletons of Digimouse and template mouse after rigid registration (1b, 2b) overlay of skeletons of Digimouse and template mouse after nonrigid registration using AIR (1c, 2c) overlay of skeletons of Digimouse and template mouse after nonrigid registration using LEREG.
1.11 Automated labeling of template mouse by deforming the atlas using the transformation obtained from registering the CT of the Digimouse to the CT of the template mouse using LEREG.
2.1 The sequence of processing of the mouse for acquiring the PET, CT and cryosection image data.
2.2 Representative slices from the PET, CT and cryosection data (voxel and matrix sizes are given in sagittal, axial and coronal directions respectively).
2.3 Overlay of the CT to the cryostat data before and after nonrigid registration. Nonrigid registration was performed using LEREG.
2.4 Registered PET-CT fusion image of the mouse.
2.5 Surface rendering of atlas structures: whole brain, external cerebrum, cerebellum, olfactory bulbs, striatum, medulla, masseter muscles, eyes, lachrymal glands, heart, lungs, liver, stomach, spleen, pancreas, adrenal glands, kidneys, testes, bladder, skeleton and skin in Digimouse.
2.6 Coronal and transaxial sections through the coregistered PET, CT and cryosection datasets and atlas.
2.7 (a) Tetrahedral mesh representation of the mouse atlas using 3D constrained Delaunay method (b) assignment of optical absorption coefficients to the FEM mesh at 700 nm (c) assignment of optical diffusion coefficients also at 700 nm.
2.8 Scattering and absorption coefficients of organs in the 600-700 nm region.
2.9 Comparison of PET and 3D bioluminescence reconstructions in coronal (top row) and transaxial (bottom row) views: (a) PET (b) bioluminescence (c) fusion of bioluminescence and PET reconstructions (d) fusion of bioluminescence and ground truth tumor image.
3.1 Schematic of the mirror setup system.
3.2 Three orthogonal views of the structured light images of a phantom mouse.
3.3 Optical geometry for camera calibration.
3.4 Optical geometry of the structured light system.
3.5 1D FFT profile of structured light pattern.
3.6 Optical geometry of the structured light system with the mirror setup to acquire side views in addition to the top view.
3.7 An illustration of the reflection of points due to the mirror plane.
3.8 Calibration parameters a, b and c for top and side views.
3.9 Recomputed reference planes for the top view with heights h = 1, 2, 3 and 4 cm using the calibration parameters.
3.10 Height map computation for a triangular prism using the calibration parameters and the structured light pattern.
3.11 Height map computation for a phantom mouse using the calibration parameters and the structured light pattern.
3.12 Different steps in the generation of the 3D surface of a cylinder using structured light pattern with the mirror setup: (a) the structured light pattern on the top and side views of the cylinder (b) wrapped phase map of the top and side views (c) quality maps of the top and side views (d) height maps and midsection height profile for the top and side views.
3.13 Blue: height map for the left side view, red: reflected height map for the left side view, black: left part of the surface points of the height map of the top view, magenta: surface points of the reflected height map for the left side view.
3.14 Comparison of the 3D surface volume of the cylinder generated using (a) only the top view (b) the top and side views.
3.15 Different steps in the generation of the 3D surface of a plastic mouse using structured light pattern with the mirror setup: (a) the structured light pattern on the top and side views of the mouse (b) wrapped phase map of top and side views (c) quality maps of the top and side views (d) height maps and midsection height profile for the top and side views.
3.16 Comparison of surfaces generated using structured light pattern projected from the top view and using the mirror setup.
3.17 Comparison of surfaces generated using structured light pattern projected from the top view and using the mirror setup: green: using only top view, purple: using the mirror setup, (a) left side view (b) overlay of the surfaces (c) right side view.
3.18 Comparison of surfaces generated from CT to the ones from structured light pattern with the mirror setup; gray: 3D surface using structured light pattern with the mirror setup, purple: 3D surface generated from CT data.
4.1 Simulation of a contraction of a complex tumor with v = -1 - 0.2κ.
4.2 Seeding of the atlas image with an earlier stage of the tumor, estimated using level sets with the assumption of a uniform growth model.
4.3 Simulation of a contraction of the tumor in the template image with v = -1 - 0.2κ.
4.4 Viscous fluid registration of the seeded atlas image to the template image with tumor pathology.
4.5 The norm of the force that drives the viscous fluid registration vs. iteration.
4.6 The deformation field and the determinant of the Jacobian of the deformation field at each grid point.
5.1 Block diagram of our algorithm for scalp and skull segmentation using morphological operations. On completion, we apply additional masking operations to ensure that scalp, outer skull, inner skull, and brain masks do not intersect each other.
5.2 Basic structuring elements used in our algorithm: Cube (C_1), 3D Cross (R_1), 3D Octagon (O_2) and their corresponding 2D structuring elements: Square, Cross and Octagon.
5.3 Different stages of scalp segmentation: (a) slice in the initial MRI volume (b) image after applying a threshold, X_{t_scalp} (c) dilated image (d) image with cavities filled (e) eroded image (f) largest foreground component.
5.4 Outer skull segmentation: individual stages of the outer skull segmentation sequence: (a) a slice in the MRI volume (b) thresholding voxels with intensity less than or equal to t_skull to identify the "dark" voxels (represented as white pixels in the image) (c) union of dark voxels and dilated brain mask (d) modified scalp mask (e) intersection of modified scalp mask and X_{u_out} (f) largest connected component X_{lc} (g) closed skull component (h) final outer skull mask after masking by modified scalp mask.
5.5 Inner skull segmentation: stages of inner skull segmentation: (a) slice in the initial MRI volume (b) image after masking by eroded outer skull mask (c) binary image produced by thresholding X_o with t_skull (d) image after union with dilated brain mask (e) opening of X_{u_in} with O_4 (f) inner skull mask produced from the union of X_{open} and the eroded outer skull.
5.6 A more detailed block diagram of our algorithm for scalp and skull segmentation using morphological operations, with the notations used in the equations. On completion, we apply additional masking operations to ensure that scalp, outer skull, inner skull, and brain masks do not intersect each other.
5.7 Renderings of surface tessellations of the head models: (a) scalp model (b) outer skull model (c) inner skull model (d) nested view of the surfaces.
5.8 Segmentation of brain, CSF, skull, and scalp from MRI and its corresponding CT on transaxial, sagittal and coronal slices respectively. First row: original MRI. Second row: original CT data. Third row: segmentation of brain, CSF, skull, and scalp from MRI. Fourth row: segmentation of combined brain/CSF, skull, and scalp from CT.
5.9 Surface tessellations of the scalp, outer skull, inner skull, and brain obtained from 4 MRI datasets.
5.10 Overlay of the MRI and CT skulls on a transaxial MRI cross-section and its corresponding co-registered CT cross-section: (a) overlay of the MRI skull on MRI: white region corresponds to skull (b) overlay of MRI and CT skulls: white voxels are labeled as skull in both CT and MRI segmentations, dark gray voxels are labeled as skull only in the MRI data, light gray voxels are labeled as skull only in the CT data, and black voxels are labeled as background in both (c) overlay of the CT skull on co-registered CT.
List of Tables

1.1 Different types of similarity metrics, transformation parameterizations and regularization functions that can be combined into the registration framework to produce many of the commonly used image registration algorithms.
1.2 RMS registration errors computed from 600 fiducials for rigid registration of PET to CT and CT to cryosection data with RView and AIR.
1.3 Dice coefficients comparing the skeleton volume of the Digimouse with the skeleton volume of the template mouse after rigid and nonrigid registrations (AIR and LEREG).
2.1 Linear attenuation coefficients assigned to different tissues in Digimouse to generate the attenuation map for the PET simulation.
3.1 Dice coefficients comparing the CT surface with the 3D surface generated from structured light using only the top view and the 3D surface generated from structured light patterns using the mirror setup.
5.1 Dice coefficients comparing the CSF and brain volume inside the skull, skull volumes (i.e., the region between inner and outer skull boundaries), the region between outer skull and scalp boundary, and the whole head for 8 co-registered CT and MRI datasets.
5.2 Table of the average Dice coefficients and average set differences for the comparison of 1 mm, 2 mm, and 3 mm misregistered CT skulls with itself and the MRI skull.
Abstract

Biomedical imaging technologies that were originally developed for human medical diagnosis have been adapted to study the anatomy and metabolism of small animals. However, multimodality image registration algorithms, which have been used for analysis of human images, have not been utilized as effectively in the small animal imaging community. As with human studies, there exist many applications where image registration is a useful task in small animal imaging: visualizing and quantifying changes in longitudinal studies, combining complementary information from different modalities so as to employ both anatomical (MR, X-ray CT) and functional (PET, SPECT or optical imaging) information, and identifying and labelling specific anatomical structures using an atlas. In this thesis, I review the image registration methods commonly used in human studies and show examples where they can also be very effective in small animal imaging applications. I also describe a high resolution 3D digital mouse atlas, Digimouse, built from CT, PET and cryosection data for use in the small animal imaging community, in which I used the image registration techniques reviewed earlier. I also show applications of Digimouse to the generation of simulated data for use in the evaluation of PET and optical image reconstruction algorithms, and to the automatic labelling of anatomical structures using X-ray CT images.

In the third chapter of this thesis, I describe a new method for reconstructing a surface model of the mouse from structured light data. This method has applications in 3D optical imaging of bioluminescence and fluorescence, where the surface of the animal is needed to define a mesh for solution of the forward and inverse problems. I develop a method which can derive height maps from different views and integrate the computed height maps into a single estimate of the surface of the animal using the system geometry and an iterative closest point (ICP) algorithm. I validate my method by comparing the surfaces with those obtained from the corresponding X-ray CT surface of the mouse.

With the increasing use of mouse models of human disease, many imaging studies involve lesions. This presents a challenge for automated labelling by registration to the mouse atlas, since the lesions are not present in the atlas. Even in human studies this is a challenging problem, since no correspondence to a tumor exists in an atlas of normal anatomy. I approach this problem by seeding the atlas image with an earlier stage of the tumor, which I estimate using level sets with a uniform growth assumption, and applying the viscous fluid registration algorithm, which can model large scale deformations due to the tumor.

In the final chapter (5) of the thesis, I describe a new method developed for the automated extraction of skull and scalp boundaries from T1-weighted MR images, in this case with applications to human rather than small animal imaging.
Chapter 1

Image Registration with Applications to Multimodal Small Animal Imaging
Animals have been extensively used as models to study the cause, nature and genomic basis of human disease and to explore possible cures through gene therapy and drug development. Mice are the most widely used animals in medical research because of their small size, rapid breeding, and genetic similarities to humans. Traditional model studies require the animals to be sacrificed, prohibiting repeated studies with the same animal. With the increased availability of imaging technologies originally developed for human medical diagnosis, it is now possible to study both anatomy and biological processes in the mouse repeatedly and non-invasively. Dedicated small animal CT, PET, SPECT, MR, ultrasound and optical fluorescence and bioluminescence systems (Figure 1.1) are now all commercially available (Massoud and Gambhir, 2003; Cherry, 2004).
Small animal imaging protocols are often designed and evaluated longitudinally. To visualize or quantify changes over time, these images must be registered either to a baseline (intra-subject registration) or control (inter-subject registration) image collected using the same modality. In other protocols, two or more complementary modalities are employed to reveal both anatomical (MR or CT) and functional (PET, SPECT, or optical imaging) information (Chow et al., 2005; Loening and Gambhir, 2001; Huang et al., 2005). Alignment of these images requires the use of multimodal registration methods. Cryosection and autoradiography are one more pair of imaging modalities that, while terminal, are often used as a reference point or gold standard against which to compare noninvasive images. Again, comparison of in vivo and post mortem images requires the use of multimodal registration procedures. There is therefore a need for image registration methods for single and multiple modalities and for inter- and intra-subject comparisons. Finally, with the development of atlases of the whole mouse (Segars et al., 2004; Dogdas et al., 2007) and brain (MacKenzie-Graham et al., 2004; Rosen et al., 2000), registration techniques can also be used to map images of individual animals to these atlases to facilitate identification and labeling of specific anatomical structures.

Figure 1.1: Different imaging technologies developed for small animal imaging. Shown here are: cryotome, microPET, optical, MRI, microCAT and ultrasound.
In the following, we provide an overview of the different approaches to registration that can be applied to small animal images. While there is a relatively small literature specifically on small animal image registration (Vaquero et al., 2001; Hayakawa et al., 2000; Gefen et al., 2003), the vast array of methods developed for human imaging are generally applicable to this problem (Van den Elsen et al., 1993; Maintz and Viergever, 1998; Fitzpatrick et al., 2000; Hill et al., 2001; Pluim et al., 2003; Audette et al., 2000). These include methods for single and multiple modalities and for inter- and intra-subject alignment. We review these approaches in a common regularization framework and highlight a number of techniques that appear well suited to small animal applications and for which software is readily available.
We define registration as the process of defining a point-wise correspondence between a template image T and a target image S. An image is a function defined on a two or three dimensional domain or coordinate system that maps a point in the coordinate system to an intensity value. The image intensities reflect the specific attributes of the imaging system, for example electron density in CT or radiolabelled tracer density in PET and SPECT. Image intensities can also be vector valued, as is the case for color and multispectral optical images or multiparameter (T1, T2, proton density) MR images. While most of the methods below extend to vector-valued intensities, we will restrict our attention to the scalar case.
The mapping from point x in ω in the coordinate system of T to point y in Ω in the coordinate system of S is defined by a transformation h. The transformation h defines correspondences between the two images such that features in image T are mapped onto corresponding features in S. The transformation h is often described in terms of the displacement field u such that y = h(x) = x + u(x). The displacement field describes this transformation in terms of how far and in which direction a point moves when projected through the transformation. Most image registration methods combine the following three components: (i) selection of the features in the images that are used to guide the transformation; (ii) a parameterization used to restrict the range of transformations in h (e.g. rigid vs. nonrigid mappings); and (iii) a regularizing function chosen to ensure a smooth mapping from T to S when the combination of image features and parameterization of h are insufficient to define a unique stable mapping. As described in the next section, features can be defined in terms of points, curves, surfaces and subvolumes as well as on the entire image. Parameterizations range from low dimensional rigid and affine transformations, through the use of basis functions that allow a restricted range of nonlinear warpings of the coordinate system, to high dimensional transformations such as the discrete lattice in which a separate displacement is allowed for each voxel. The regularizing function usually consists of a smoothness measure, often defined by analogy to continuum mechanics, in which we minimize a bending energy defined on the displacement field u.
We illustrate the deformation field h and displacement field u in Fig. 1.2 in the case of a nonrigid registration of two CT volumes. The template mouse image is mapped through a linear elastic coordinate transformation (Christensen and Johnson, 2001) so that the intensities and shape of the deformed template best match those of the target mouse image. By overlaying the two images before (Fig. 1.2d) and after (Fig. 1.2e) registration, we see that the (white) skeletal structures are brought into reasonably good alignment after registration. The deformed grid was generated by mapping a rectangular grid through the transformation and illustrates how the template mouse coordinate system differs spatially from that of the target mouse. Regions where the grid spacing is compressed correspond to regions of expansion of the template, while regions of expanded grid spacings correspond to regions of contraction. The reason for this seemingly counterintuitive notion of expansion and contraction is that we define the transformations with respect to a Eulerian coordinate system. (A Lagrangian coordinate system transformation maps points from the template to the target as a function of the template coordinate system, while a Eulerian transformation does the same as a function of the target coordinate system.) A Eulerian coordinate system is the natural coordinate system to use for defining image registration transformations since it facilitates the task of interpolation required to compute the deformed image on a regular lattice.

Figure 1.2: Illustration of nonrigid registration applied to a pair of CT mouse images: (a) coronal section through the target mouse S (b) corresponding section through template mouse T (c) template mouse after deformation to match target mouse (d) overlay of target and template mouse before registration (e) overlay of target and template mouse after deformation (f) rectangular grid in the template space (g) deformed grid using transformation h = x + u(x).
Selection of a registration algorithm is guided by the properties of the pair of images to be registered. For example, rigid transformations would be appropriate for aligning MR and PET images of a single animal if it is immobilized in an animal holder during both scans, while higher dimensional nonrigid transformations would be required to coregister two different animals or to register an animal to an atlas. Similarly, the choice of feature depends on the degree of user interaction required and whether the images are single or multimodal. We include a range of examples towards the end of this chapter that illustrate these different applications.
1.1 An Overview of Image Registration

1.1.1 A General Framework for Image Registration
Image registration can be formulated in terms of an optimization problem:

\hat{h} = \arg\min_{h} \; C_{sim}(T,S,h) + \lambda \, C_{reg}(h)    (1.1)

where the transformation \hat{h} minimizes the weighted sum of a similarity function C_{sim}(T,S,h), which defines a distance metric between corresponding features in the pair of images being matched, and a regularizing function C_{reg}(h), which resolves ambiguities among the set of transformations that minimize the similarity function by selecting a smooth transformation. The regularization parameter λ determines the degree of smoothing imposed by the regularizer. Almost all image registration algorithms can be placed in this framework. The transformation can be found nonparametrically by computing the deformation on the same voxel lattice on which the images are defined. However, in many cases the range of transformations is restricted through the use of a set of basis functions to represent h. With this general formulation we now consider each of the three major components that define most registration methods: feature selection and similarity metrics, transformation parameterization, and choice of regularizing function. Table 1.1 lists many of the similarity measures, parameterizations, and regularization operators that have been used to produce many of the commonly used image registration algorithms.
Table 1.1: Different types of similarity metrics, transformation parameterizations and regularization functions that can be combined into the registration framework to produce many of the commonly used image registration algorithms.

Feature/Similarity Metrics (Dimensionality): Landmark (0-D); Contour (1-D); Surface (2-D); Sub-volume (3-D); Intensity Difference (N-D); Cross Correlation (N-D); Intensity Vectors (N-D); Intensity Variance (N-D); Mutual Information (N-D)

Transformation Parameterization (Dimensionality): Rigid (low); Affine (low); Talairach (low); Polynomial (low-medium); B-spline (medium-high); Thin-plate spline (medium); Discrete cosine (medium-high); Fourier series (medium-high); Wavelet (medium-high); Discrete lattice (high)

Regularization (Deformation Type): None; Thin-plate spline (small); Differential operators (small); Prior distributions (small); Elasticity (small-large); Viscous fluid (large); Inverse consistency (small-large)
Image registration algorithms are also distinguished by the numerical optimization procedure used to find the transformation. In a few cases, such as affine and thin plate spline registration with exact matching point constraints, the problem has a closed form solution. In most cases, however, eq. (1.1) must be solved iteratively. The choice of the iterative procedure determines the convergence rate rather than the final solution and is therefore an issue we will not address in detail in this thesis. However, it is important to note that in many cases the cost function in eq. (1.1) is not convex, so that solutions obtained using local search algorithms are locally rather than globally optimal. While in principle global solutions can be found using methods such as simulated annealing, in practice the cost is prohibitive. A more practical approach for avoiding poor local minima is to use a combination of good initialization and multiresolution procedures.
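To make the framework of eq. (1.1) concrete, the following minimal sketch (hypothetical Python/NumPy code, not the software used in this thesis) poses a 2D rigid registration as a cost minimization: the similarity term is the mean squared intensity difference, no regularizer is needed for a rigid map (λ = 0), and a generic local optimizer searches over the three rigid parameters.

```python
# Minimal sketch of the registration framework in eq. (1.1); parameter names are illustrative.
import numpy as np
from scipy.ndimage import map_coordinates
from scipy.optimize import minimize

def warp_rigid(template, params, shape):
    """Resample the template on the target grid through h(x) = R x + t (Eulerian view)."""
    theta, tx, ty = params
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]].astype(float)
    c, s = np.cos(theta), np.sin(theta)
    xs_t = c * xs - s * ys + tx          # template coordinates corresponding
    ys_t = s * xs + c * ys + ty          # to each target voxel
    return map_coordinates(template, [ys_t, xs_t], order=1, mode='nearest')

def cost(params, template, target):
    """C_sim: mean squared intensity difference (eq. 1.2); no regularizer for a rigid h."""
    deformed = warp_rigid(template, params, target.shape)
    return np.mean((deformed - target) ** 2)

def register_rigid(template, target):
    # Local search; good initialization (or a multiresolution schedule) guards against local minima.
    res = minimize(cost, x0=np.zeros(3), args=(template, target), method='Powell')
    return res.x
```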
1.1.2 Feature Selection and Similarity Metrics
The similarity metric C_{sim}(T,S,h) gives a distance measure between the deformed template image T(h(x)) and the target image S, computed on a set of corresponding features in S and T. For a 3D image space, these features can be points (0-D), lines and curves (1-D), surfaces (2-D), subvolumes (3-D) and image intensities (3-D).
Landmarks or point constraints can be attached as fiducial markers prior to imaging (extrinsic) or defined as specific anatomical features (intrinsic). Common extrinsic fiducial objects include stereotactic frames, screw markers, and skin markers. Since extrinsic markers do not contain information regarding the anatomy of the subject, they are best suited for rigid transformations (Fitzpatrick et al., 2000). Intrinsic landmarks are based on the subject's anatomy and can be used to compute either rigid or low dimensional warping transformations. One major disadvantage of the use of intrinsic point constraints is that transformations are restricted to low dimensionality, resulting in a limited ability to match images between different animals. A second disadvantage is that these features are typically user defined, and it is therefore difficult to automate registration based on point matching constraints.
The concept of matching features extends naturally from points to curves, surfaces and subvolumes in the image. While automated feature extraction methods can be used to define curves and surfaces representing the boundaries of anatomical structures, user interaction is still required to define the correspondence between structures in the image pair to be registered. The increased richness of curve and surface features makes them more useful for higher dimensional transformations, although the deformation field u will always be ambiguous because these features provide no information about correspondence of points in the volume other than at the features. This ambiguity is resolved by the choice of parameterization and regularization function as described below. When points, curves and surfaces are used to register two images, the similarity metric C_{sim}(T,S,h) is defined as the mean distance between corresponding features in the two images. In the case of curves and surfaces, distances are computed from the projections of each point on the deformed template's curves/surfaces onto the corresponding curve/surface in the target.
Registration based on matching features generally requires user interaction to define these features and to indicate correspondence between features in the two images. In contrast, registration techniques based on image intensity can be fully automated by finding a transformation that maximizes a similarity measure computed over the entire image. If the images to be registered are from the same modality and appropriately scaled, the mean squared intensity difference can be used as an appropriate metric:

C_{sim}(T,S,h) = \int_{\Omega} \left( T(h(x)) - S(x) \right)^2 dx    (1.2)
Less restrictive is the assumption of a linear relation between the intensities of the two images, for which the correlation coefficient is appropriate:

C_{sim}(T,S,h) = \frac{\int_{\Omega} (T(h(x)) - \bar{T})(S(x) - \bar{S}) \, dx}{\sqrt{\int_{\Omega} (T(h(x)) - \bar{T})^2 \, dx \int_{\Omega} (S(x) - \bar{S})^2 \, dx}}    (1.3)

where \bar{T} and \bar{S} are the means of the deformed template and target images respectively:

\bar{T} = \int_{\Omega} T(h(x)) \, dx \Big/ \int_{\Omega} dx, \qquad \bar{S} = \int_{\Omega} S(x) \, dx \Big/ \int_{\Omega} dx
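As an illustration, once the deformed template T(h(x)) has been resampled onto the target voxel grid, eqs. (1.2) and (1.3) reduce to simple voxel-wise operations. The sketch below (illustrative code, assuming both images are stored as NumPy arrays of the same shape) computes both measures.

```python
import numpy as np

def ssd(deformed_template, target):
    """Mean squared intensity difference, eq. (1.2), evaluated on the voxel grid."""
    return np.mean((deformed_template - target) ** 2)

def correlation_coefficient(deformed_template, target):
    """Correlation coefficient, eq. (1.3): assumes a linear relation between intensities."""
    t = deformed_template - deformed_template.mean()
    s = target - target.mean()
    return np.sum(t * s) / np.sqrt(np.sum(t ** 2) * np.sum(s ** 2))
```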
When the images are from different modalities we do not expect a simple linear mapping between intensities. However, we do expect the intensities to vary in a correlated manner. One approach to exploiting this correlation, without the need to specify the form of the dependence between intensities in the two images, is to use mutual information. Mutual information measures the statistical dependence between the intensities of corresponding voxels in the images, and the maximum of this measure is achieved when the images are registered (Viola, 1995; Maes et al., 1997). Mutual information between the deformed template image T(h(x)) and the target image S is defined as:

C_{sim}(T,S,h) = H(T(h(x))) + H(S) - H(T(h(x)), S)    (1.4)

where H denotes the joint and marginal entropies, defined respectively as:

H(T(h(x)), S) = -\sum_{i,j} p_{T(h(x)),S}(i,j) \log\left(p_{T(h(x)),S}(i,j)\right)
H(S) = -\sum_{j} p_{S}(j) \log\left(p_{S}(j)\right)    (1.5)

with p_{S}(j) and p_{T(h(x)),S}(i,j) representing the empirical marginal and joint probability distributions over the set of discrete intensities i ∈ I and j ∈ J for T(h(x)) and S respectively. These joint densities can be estimated directly from the joint image histograms or using Parzen windowing.
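The histogram-based estimate of eqs. (1.4) and (1.5) is straightforward to compute; a minimal sketch (illustrative code, using a 64-bin joint histogram as an assumption, not a prescribed choice) is shown below.

```python
import numpy as np

def mutual_information(deformed_template, target, bins=64):
    """Estimate eq. (1.4) from the joint intensity histogram (eq. 1.5)."""
    joint, _, _ = np.histogram2d(deformed_template.ravel(), target.ravel(), bins=bins)
    p_ts = joint / joint.sum()                 # joint distribution p_{T(h(x)),S}(i, j)
    p_t = p_ts.sum(axis=1)                     # marginal over template intensities
    p_s = p_ts.sum(axis=0)                     # marginal over target intensities
    nz = p_ts > 0                              # avoid log(0)
    h_joint = -np.sum(p_ts[nz] * np.log(p_ts[nz]))
    h_t = -np.sum(p_t[p_t > 0] * np.log(p_t[p_t > 0]))
    h_s = -np.sum(p_s[p_s > 0] * np.log(p_s[p_s > 0]))
    return h_t + h_s - h_joint
```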
An alternative metric for measuring similarity between multimodal image intensities is the ratio image uniformity (RIU) (Woods et al., 1992). The RIU uses a ratio image derived by dividing the intensity of each voxel in the target image by the intensity of the corresponding voxel in the deformed template image, and the similarity measure is computed as the normalized standard deviation of this ratio image.
Maximizing the similarity of image intensity may not always be sufficient to differentiate between different parts of the anatomy that have similar intensities. Attribute (intensity) vectors address this problem by appending geometric information to the intensity information. An attribute vector captures geometric properties from a local to a global scale, and hence reduces the effects of local minima in the intensity matching constraints by reducing the ambiguity in potential intensity matches. Geometric moment invariants (GMI), principal curvatures, wavelets, Gabor filters and low frequency representations have been used to extract geometric properties of objects and to define the attribute vectors in addition to intensity and edge information (Shen and Davatzikos, 2002). Moreover, multiple similarity functions derived using different features can also be combined in the registration framework to guide a more accurate registration (Johnson and Christensen, 2002; Kybic, 2001).
1.1.3 Parameterization of the Deformation Field
The parameterization of h determines the mathematical form of the transformation between the features of the template and the target images. In medical applications where the shape and size of the subject are preserved, rigid transformations are used to determine the deformation. Rigid transformations are the simplest transformations: only rotation and translation parameters (6 parameters in 3D) need to be determined, and all geometric distances are preserved. A more general transformation is the affine transformation, which also allows shearing and scaling. Affine transformations constrain parallel lines to map to parallel lines and require 12 parameters in 3D. Affine techniques are generally used to overcome scanner-introduced errors in images that cause global scaling and shearing, rather than to describe deformations in structures of the body (Fitzpatrick et al., 2000).
In 3D, rigid and affine transformations can be described using a constant 3×3 matrix A, which includes the rotation, skew and scaling parameters, and a 3×1 vector t that represents the translation (Equation 1.6). In the special case where the transformation is rigid, A becomes an orthogonal matrix.

h(x) = Ax + t    (1.6)
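As a small sketch of eq. (1.6) (illustrative code; the Euler-angle parameterization is one common choice, not the one prescribed here), a 3D rigid transform can be built from three rotation angles and a translation vector and applied to an array of points.

```python
import numpy as np

def rigid_transform(angles, t):
    """Build h(x) = A x + t of eq. (1.6) with A an orthogonal rotation matrix
    (6 parameters in 3D: three rotation angles and three translations)."""
    ax, ay, az = angles
    Rx = np.array([[1, 0, 0], [0, np.cos(ax), -np.sin(ax)], [0, np.sin(ax), np.cos(ax)]])
    Ry = np.array([[np.cos(ay), 0, np.sin(ay)], [0, 1, 0], [-np.sin(ay), 0, np.cos(ay)]])
    Rz = np.array([[np.cos(az), -np.sin(az), 0], [np.sin(az), np.cos(az), 0], [0, 0, 1]])
    A = Rz @ Ry @ Rx
    return A, np.asarray(t, dtype=float)

def apply_transform(A, t, points):
    """Apply h(x) = A x + t to an (N, 3) array of points (rows are points)."""
    return points @ A.T + t
```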
It is sufficient to use rigid transformations to determine the spatial transformation between images in many applications, such as registering multiple imaging modalities of the same subject (intra-subject registration), tracking stereotactic frames connected to patients during surgery, and other transformations limited to rigid body motions. However, accommodating the anatomical variations between subjects in inter-subject applications, and determining tissue deformations due to interventions or changes and tracking organ motion in time in intra-subject applications, require nonrigid transformations to determine point-wise correspondences between images. Nonrigid registration methods can map lines to curves and are no longer represented using simple transformations. Nonrigid registration algorithms generally include, or are applied after, a rigid or affine transformation that gives an initial estimate of the transformation. Figure 1.3 gives an illustration of the deformation fields after rigid, affine and nonrigid transformations are applied in 2D.
Many nonrigid registration algorithms model the displacement field u as a summation of basis functions, and the problem becomes one of determining the coefficients w of these basis functions φ (Equation 1.7). These basis functions can be polynomials (Woods et al., 1998b), Fourier (trigonometric) basis functions (Amit et al., 1991; Christensen and Johnson, 2001), splines (Bookstein, 1989), discrete cosines (Ashburner and Friston, 1999) or wavelets (Amit, 1994).

u(x) = \sum_{i=1}^{n} w_i \, \varphi_i    (1.7)

Figure 1.3: Illustration of the deformation fields after rigid, affine and nonrigid transformations are applied in 2D.
The simplest of these basis functions are the polynomials, which were used in the AIR software to model nonrigid registration (Woods et al., 1998b). Besides their computational cost, the main disadvantage of using polynomials for image registration is their limited ability to recover local anatomical shape variability, since nonzero coefficients produce a global rather than local deformation. Also, spurious oscillations may occur when higher order polynomials are used. These polynomial basis functions can be localized by using piecewise polynomials. Splines are special piecewise polynomials which have finite support and ensure the continuity of the polynomials and the continuity of their derivatives at the knot points.

The term spline originally refers to the use of long flexible strips of metal which could be bent by attaching different weights along their length to model surfaces of ships and planes. A similar idea can be used to model spatial transformations. For example, two separate surfaces whose heights above the plane correspond to the displacement in each direction can be used to represent a 2D deformation. Many spline based registration techniques are based on a set of corresponding points between the images. Splines approximate the displacements at the corresponding feature points and provide a smoothly varying field between them.
A type of spline transformation that has been popular, especially for 2D transformations, is the thin plate spline, which is based on radial basis functions (Bookstein, 1989). The term "thin plate spline" refers to the physical analogy involving the bending of a thin plate. In 2D thin plate splines, the displacement field can be described as u(x,y) = -U(r) = -r^2 \log r^2, where r = \sqrt{x^2 + y^2}. Here U is the fundamental solution of the biharmonic equation \Delta^2 U = 0, the equation for the shape of a thin steel plate lofted as a function u(x,y) above the (x,y) plane. Thin plate splines express the transformation on point constraints with the analogy of minimizing the bending energy function:

C_{reg}(u) = \left(\frac{\partial^2 u}{\partial x^2}\right)^2 + 2\left(\frac{\partial^2 u}{\partial x \partial y}\right)^2 + \left(\frac{\partial^2 u}{\partial y^2}\right)^2    (1.8)
The resulting transformation determined by the thin plate spline for n feature points p in 2D can be expressed as

h(x) = Ax + t + \sum_{i=1}^{n} w_i \, U(|x - p_i|)    (1.9)

where Ax + t determines the affine part of the transformation and the coefficients w_i determine the non-affine part of the transformation.
1.1.4 Choice of Regularization Functions

We have formulated image registration as a minimization of a similarity measure between corresponding points in two images with an appropriate parameterization. The pure minimization of such difference measures typically leads to an ill-posed problem; therefore, regularization approaches should be used. Regularization helps to choose from among the infinitely many possible solutions that may arise due to a limited set of features or instability arising from the large number of parameters to be estimated. It imposes a smoothness constraint and tries to reduce higher order derivative terms. In the Bayesian framework, regularization can be thought of as a prior term which represents a priori knowledge about the expected deformations. For rigid and low dimensional deformations, regularization is typically not required. However, for higher dimensional transformations, it is essential. In this latter case, if we want to penalize small magnitude displacements then linear elastic, thin plate spline or differential operators can be used. On the other hand, viscous fluid models allow larger deformations when needed (Christensen et al., 1996).
In regularization that uses linear elasticity, the images are modeled as an elastic medium subject to two opposing forces: external forces which drive the deformation and internal forces that are generated by the elasticity of the material. For this mechanical medium, the displacement field satisfies the following Navier equation of linear elasticity (Equation 1.10), where λ and μ are the elasticity parameters and F(x) is the external force. The external force F(x) is derived from the similarity measure between the images, which can be taken as the gradient of a similarity cost function such as the intensity correlation. The corresponding PDE can then be solved using finite difference or finite element methods.

\mu \Delta u(x) + (\lambda + \mu) \nabla(\nabla \cdot u(x)) + F(x) = 0    (1.10)
Elastic medium approaches are only valid for small deformations. The viscous fluid approaches overcome this problem and accommodate large distance nonrigid deformations by substituting the displacement field u with the velocity field v in the Navier-Stokes equation:

\mu \Delta v(x) + (\lambda + \mu) \nabla(\nabla \cdot v(x)) + F(x) = 0    (1.11)

Here, \Delta = \nabla^2 = \nabla \cdot \nabla is the Laplacian operator; together with the viscosity coefficient μ, the term μΔv is known as the viscous term of the partial differential equation and is associated with the constant volume viscous flow of the template. The second term, \nabla(\nabla \cdot v), is the gradient of the divergence operator; together with the viscosity coefficients λ and μ, the term (λ+μ)∇(∇·v) allows the expansion and the shrinkage of regions in the template. The displacement field is then computed according to the Eulerian reference frame as:

v(x,t) = \frac{\partial u(x,t)}{\partial t} + \nabla u(x,t) \, v(x,t)    (1.12)

As with linear elasticity, F(x) is the external force that drives the transformation, and it is computed from the similarity measure between the images.
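The following sketch gives the flavor of such a scheme in 2D. It is a deliberate simplification, not the fluid algorithm of Christensen et al.: the intensity-difference force plays the role of F(x), Gaussian smoothing of the force stands in for solving the PDE of eq. (1.11), and the displacement update is a crude explicit step that drops the transport term ∇u·v of eq. (1.12). All names and parameter values are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def fluid_like_register_2d(template, target, n_iter=200, sigma=4.0, step=0.5):
    """Greedy, fluid-flavored registration sketch (simplified stand-in for eqs. 1.11-1.12)."""
    ys, xs = np.mgrid[0:target.shape[0], 0:target.shape[1]].astype(float)
    u = np.zeros((2,) + target.shape)                      # displacement field (uy, ux)
    for _ in range(n_iter):
        deformed = map_coordinates(template, [ys + u[0], xs + u[1]],
                                   order=1, mode='nearest')
        gy, gx = np.gradient(deformed)
        diff = deformed - target
        force = np.stack([-diff * gy, -diff * gx])         # gradient of an SSD similarity term
        v = np.stack([gaussian_filter(force[0], sigma),    # smoothing approximates the PDE solve
                      gaussian_filter(force[1], sigma)])
        u += step * v                                      # explicit update, transport term omitted
    return u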
1.1.5 Classification of Registration Techniques based on Parameterization

In this section, we describe the common registration techniques by classifying them according to the parameterization of the transformation. We can categorize registration methods into two large groups depending on the complexity of the transformation: low-dimensional registration methods and high-dimensional registration methods.
1.1.6 Low Dimensional Methods

1.1.6.1 Affine transformations based on points, curves, surfaces and intensities
Affine registration of corresponding point sets can be formulated as a least squares problem, known in the statistics literature as the Procrustes problem. The problem can be formulated as finding the transformation that minimizes the registration error between the feature points, C_{sim}(T,S,h) = \sum_{i=1}^{n} \| y_i - A x_i - t \|, where y_i are the feature points in the target image and x_i are the feature points in the template image. Evans et al. (1991) registered human PET and MR brain images using this approach, where the point matches were automatically determined using a system of fiducial markers.
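For the unconstrained affine case this least squares problem has a closed-form solution; a short sketch (illustrative code, assuming corresponding points are stored as (n, 3) arrays) is given below. The rigid Procrustes case additionally constrains A to be a rotation, which requires an SVD-based projection rather than this plain least squares fit.

```python
import numpy as np

def affine_fit(x, y):
    """Least-squares affine fit: A (3x3) and t (3,) minimizing sum_i ||y_i - A x_i - t||^2
    for corresponding points x, y of shape (n, 3)."""
    X = np.hstack([x, np.ones((len(x), 1))])        # homogeneous template coordinates
    M, *_ = np.linalg.lstsq(X, y, rcond=None)       # M stacks A^T on top of t
    return M[:3].T, M[3]
```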
Curves and surfaces that are derived from intrinsic features can also be used to assist image registration in both 2D and 3D applications. Care must be taken to accurately define curves at the end points, since the registration is most sensitive to these defined points. Likewise, accurate delineation of surface boundaries is important because the registration can only be as accurate as the surface definition. Guziec and Ayache (1994) determined the rigid registration using curvature and torsion parameters of curves in CT skull images, and Thirion and Gourdon (1996) used crest lines to register CT skull images of the same subject taken in two different positions. In the surface based rigid registration literature, one of the most popular methods has been the head and hat algorithm by Pelizzari et al. (1989). In this method, the surface of the target image is used as the head, and a set of points extracted from surface contours in the template image represents the hat. The hat is then registered to the head by minimizing the distance of points in the hat to the head surface. Another widely used method for surface based affine registration is the iterative closest point (ICP) algorithm, which was introduced by Besl and McKay (1992). In the ICP algorithm, feature points are associated using the nearest neighbor criterion, and then the rigid rotation and translation parameters are estimated by minimizing the mean square cost function defined on the distances between the corresponding features. The features are transformed using the estimated parameters, and these steps are reiterated until the mean square cost function between the distances of the corresponding features is minimized. This approach is quite general for matching a variety of features, such as points to points, points to surfaces, and contours to surfaces, where correspondences are not known. A software package called MVox uses this algorithm for rigid registration of 3D surfaces (Nielsen, 1996). Chui and Rangarajan (2003) generalized the ICP registration approach from rigid to nonrigid and developed a more robust point matching algorithm.
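A minimal rigid ICP sketch, illustrating the two alternating steps just described (nearest-neighbour correspondence followed by a closed-form rigid fit), is shown below. This is illustrative code under the standard ICP formulation, not the MVox implementation; the iteration count and stopping rule are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def rigid_fit(x, y):
    """Closed-form least-squares rigid fit (rotation R, translation t) mapping x onto y."""
    xc, yc = x - x.mean(axis=0), y - y.mean(axis=0)
    U, _, Vt = np.linalg.svd(xc.T @ yc)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflections
    R = Vt.T @ D @ U.T
    return R, y.mean(axis=0) - R @ x.mean(axis=0)

def icp(template_pts, target_pts, n_iter=50):
    """Iterative closest point sketch: correspond by nearest neighbour, fit, repeat."""
    tree = cKDTree(target_pts)
    moved = template_pts.copy()
    for _ in range(n_iter):
        _, idx = tree.query(moved)                 # closest target point for each template point
        R, t = rigid_fit(moved, target_pts[idx])
        moved = moved @ R.T + t
    # recover the overall rigid transform by refitting the original points to their final positions
    return rigid_fit(template_pts, moved)
```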
In the intensity based rigid registration literature, Hajnal et al. (1995) used intensity difference to find the matching between MR images, and Lemieux et al. (1998) used the correlation coefficient to detect changes in brain lesions in MRI scans.
Woods et al. (1992) used the method of minimizing the intensity variance of the ratio image uniformity (RIU) for registration of multiple 3D PET images, and later used this approach to register serial MR images (Woods et al., 1998a). For intermodality registration (MR-PET), a modified version of this method, known as the partitioned intensity uniformity method, was proposed by Woods et al. (1993). This method works on the principle that "all pixels with a particular MR pixel value represent the same tissue type so that values of corresponding PET pixels should also be similar to each other". In this case, the algorithm tries to minimize the standard deviation of the PET voxel values for each MR intensity value. This algorithm can also be thought of as an extension of RIU in which there is a ratio image for each MR intensity bin. Normalized standard deviations are calculated from these images, and the total similarity measure is computed from a weighted sum of all the normalized standard deviations. This algorithm is available as part of the software package Automated Image Registration (AIR) [http://bishopw.loni.ucla.edu/AIR5].
Studholme et al. (1999) implemented mutual information as an intensity measure as part of the software package RView (3D Volume Registration Viewer), which integrates a number of 3D display and fusion routines together with 3D rigid volume registration [http://rview.colin-studholme.net]. He showed that mutual information may also increase with increasing misregistration because of the influence of the size of the overlap of the two images. This can occur when the sum of the marginal entropies increases faster than the joint entropy. To overcome this problem, he suggested a normalized mutual information (NMI) measure which is less sensitive to changes in overlap (Equation 1.13).

NMI(T(h(x)), S) = \frac{H(T(h(x))) + H(S)}{H(T(h(x)), S)}    (1.13)
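NMI uses the same histogram-based entropy estimates as eq. (1.5); an illustrative sketch (same assumptions as the mutual information example above) follows.

```python
import numpy as np

def normalized_mutual_information(deformed_template, target, bins=64):
    """Eq. (1.13): (H(T(h(x))) + H(S)) / H(T(h(x)), S), estimated from the joint histogram."""
    joint, _, _ = np.histogram2d(deformed_template.ravel(), target.ravel(), bins=bins)
    p = joint / joint.sum()
    h_joint = -np.sum(p[p > 0] * np.log(p[p > 0]))
    pt, ps = p.sum(axis=1), p.sum(axis=0)
    h_t = -np.sum(pt[pt > 0] * np.log(pt[pt > 0]))
    h_s = -np.sum(ps[ps > 0] * np.log(ps[ps > 0]))
    return (h_t + h_s) / h_joint
```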
In RView, before the registration process, the images are first resampled to a common cubic resolution using trilinear intensity interpolation. In addition, the images are filtered using a Gaussian kernel to obtain similar spatial resolution for both images. The histogram of the joint probability distribution of the images is estimated using 64 intensity bins for each modality, and the marginal and joint entropies are estimated from this histogram using Equations 1.4 and 1.5 described in the intensity-based registration techniques. A multiresolution hill climbing approach is used for the optimization procedure, in which low resolution images are used to determine an initial estimate of the parameters and then the parameters are refined using the higher resolution images.
1.1.6.2 Non-rigid transformations based on points, curves, surfaces and intensities
In the nonrigid registration literature, points can be used to determine the transformation for the whole image. In these methods it is possible to match all the fiducial points selected from both images. However, the accuracy of the matching of the nonfiducial points depends on the type of the transformation. With polynomials, piecewise polynomials and splines, the problem becomes determining the coefficients of these basis functions. Therefore one needs enough fiducial points to be able to determine these coefficients unambiguously. Many spline based registration techniques use a set of landmark points to determine the transformation. Various spline based registration techniques are available in the literature, including thin plate splines (Bookstein, 1989), elastic body splines (Davis et al., 1997), and multiquadric and shifted-log interpolants (Franke, 1979; Hardy, 1990; Ruprecht and Muller, 1995).
Timsari and Leahy (2001) used thin plate splines to register autoradiographic images of a rat brain. They first created a 3D surface atlas using the 2D cryosection images based on a series of maps generated by Swanson. They then generated a pool of profiles by reslicing the 3D atlas surface into new sections with a wide range of sectioning angles. The matching from the template image to the atlas is done by determining the minimum Procrustes distance between the landmark points selected from the atlas and the template image, while the transformation from the template image to the atlas is computed using thin plate splines.
Structures such as edges, contours and lines can also be used to determine the nonrigid transformation between two images. Declerck et al. (1995) used crest lines to automatically register MR brain images, whereas Subsol et al. (1995) used these lines as stable features of skull anatomy and registered CT images to build a morphometric anatomical atlas. Approaches that make use of sulcal lines to find the 3D warping in human brain data include works by Ge et al. (1995) and Luo and Evans (1995).
In the surface based nonrigid registration literature, Szeliski and Lavallee (1993) used an octree approximation of the distance potential field and an octree-spline deformation field to register surfaces. Thompson and Toga (1996) formulated warping using Chen surfaces (hybrid superquadrics and spherical harmonics) to extract surface models of structures, including sulci. A surface matching algorithm was also developed by Fleute and Lavallee (1998) to build a statistical shape model. In this method, random point sets on the surface are identified first, and then these clouds of points are registered and matched to establish correspondence using a multi-resolution octree-spline approach.
In intensity based nonrigid registration techniques, squared differences (Christensen et al., 1993; Woods et al., 1998b; Ashburner et al., 1997), cross correlation (Bajcsy and Kovacic, 1989; Collins et al., 1995) or mutual information (Kim et al., 1997) can be used as the measure of intensity similarity. Intensity based registration methods try to maximize the intensity similarity measure by tuning the parameters of the deformation field (AIR, SPM).
In the Statistical Parametric Mapping (SPM) software, complex warping fields are expressed in terms of the lowest frequency components of the 3D cosine basis functions with a smoothness constraint, which is implemented using a Bayesian maximum a posteriori (MAP) estimator (Ashburner and Friston, 1999). The choice of these basis functions eliminates complex arithmetic (since it is a real transform), and the transform is separable. In addition, the lowest frequencies provide good energy compaction for smooth functions (Jain, 1989). This software is available from http://www.fil.ion.ucl.ac.uk/spm/.
AIR (Automated Image Registration) represents the deformation fields as sums of polynomials (Woods et al., 1998b). Here, the parameters of the transformation are initially computed for an affine transformation, and the algorithm then refines these parameters to calculate a non-affine transformation in which the position of each voxel in the transformed image is a polynomial expansion of its location in the initial image. For a second order polynomial, there are 10 degrees of freedom for each axis (30 total). In a similar fashion this model can be extended to higher order polynomials such as third (60 DOF), fourth (105 DOF) and fifth order polynomials (168 DOF). However, their ability to recover anatomical shape variability is often limited, since they can only model global shape changes.
1.1.7 High Dimensional Methods

The low-dimensional methods express deformation fields using basis functions that can extend over a large region of, or even the entire, image. Consequently they are not able to control deformation at the scale of individual voxels. In addition, global transformations may not preserve the topology or the connectivity of the transformed images. To guarantee topological integrity of structures in the image, the laws of physical continuum models are used to estimate the deformations. Mechanical models that use linear elastic and viscous fluid analogs allow more flexible transformations by providing as many degrees of freedom as the number of voxels in the image. Although linear elastic methods do not guarantee topology preservation when large deformations are required, viscous fluid methods are able to generate large deformations while preserving topology.
1.1.7.1 Inverse Consistent Linear Elastic Registration

A tool based on elastic models is linear elastic registration (LEREG), which creates transformations from one 3D image volume to another using an inverse consistent linear elastic registration algorithm (Christensen and Johnson, 2001). In this approach, the forward transformation h_(T,S) is estimated from the template image T to the target image S, and the reverse transformation h_(S,T) is estimated from the target image to the template image, by minimizing the cost function:
C_sim(T,S,h) = \sigma \int_\Omega ( |T(h_(T,S)(x)) - S(x)|^2 + |S(h_(S,T)(x)) - T(x)|^2 ) dx
             + \rho \int_\Omega ( \|L u_(T,S)(x)\|^2 + \|L u_(S,T)(x)\|^2 ) dx                                        (1.14)
             + \chi \int_\Omega ( \|u_(T,S)(x) - \tilde{u}_(S,T)(x)\|^2 + \|u_(S,T)(x) - \tilde{u}_(T,S)(x)\|^2 ) dx
The first integral in the cost function defines the cumulative squared error similarity cost between the transformed template image T(h_(T,S)(x)) and the target image S, and between the transformed target image S(h_(S,T)(x)) and the template image T. The second integral is used to regularize the forward and reverse displacement fields u_(T,S) and u_(S,T), respectively, to satisfy the properties of linear elasticity and hence enforce the topological constraints on the transformation. The linear differential operator L corresponds to the linear elastic constraint and is represented as:
Lu(x) = \alpha \Delta u(x) + \beta \nabla(\nabla \cdot u(x)) + \gamma u(x)     (1.15)
where α, β and γ are constants associated with the linear elastic constitutive law. The third integral in the cost function is called the inverse consistency constraint and is minimized when the forward and reverse transformations h_(T,S) and h_(S,T) are inverses of each other. The inverse consistency constraint minimizes the inverse consistency error between the template and the target images. The parameters σ, ρ and χ in the cost function are weighting coefficients for each part of the cost function. Registration results show that the joint estimation of a consistent set of forward and reverse transformations constrained by linear elasticity performs better than registration using either constraint alone (Christensen and Johnson, 2001).
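To make the role of the regularizer concrete, the following is a minimal sketch (not the LEREG implementation) of applying the linear elastic operator L of Equation 1.15 to a 2D displacement field with central finite differences; the grid spacing, the periodic boundary handling and the constants α, β, γ are illustrative assumptions only.

import numpy as np

def linear_elastic_operator(u, alpha=1.0, beta=1.0, gamma=0.0, h=1.0):
    """Apply Lu = alpha*laplacian(u) + beta*grad(div u) + gamma*u to a
    2D displacement field u of shape (2, ny, nx). Periodic boundaries
    are used purely to keep the sketch short."""
    ux, uy = u[0], u[1]

    def lap(f):
        return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f) / h**2

    def d(f, axis):  # central difference along an axis
        return (np.roll(f, -1, axis) - np.roll(f, 1, axis)) / (2.0 * h)

    div = d(ux, 1) + d(uy, 0)          # divergence of u (x runs along axis 1)
    Lux = alpha * lap(ux) + beta * d(div, 1) + gamma * ux
    Luy = alpha * lap(uy) + beta * d(div, 0) + gamma * uy
    return np.stack([Lux, Luy])

# Example: the regularization energy of a random displacement field
u = np.random.randn(2, 64, 64)
energy = np.sum(linear_elastic_operator(u) ** 2)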
1.2 Applications of Selected Registration Methods to Small Animal Imaging

In our experiments, we applied selected registration methods to mouse images and verified that these methods, developed primarily for human subjects, can also be applied to small animal data. To evaluate rigid registration, we used CT, PET and digital imaging of cryosections of a nude mouse that were acquired to create a 3D digital phantom (Digimouse) for use by the small animal imaging community (Stout et al., 2002; Dogdas et al., 2007). In this project, the challenge was to create high quality images in all three imaging modalities, with markers readily apparent for later image registration that would not compromise the image acquisitions. The preparation of the mouse and the imaging process using PET, CT and cryosectioning, along with the creation of the Digimouse, will be explained in the next chapter. In this section, we show some applications of different registration techniques using these data. In particular, we used the AIR and RView registration packages to rigidly register PET to CT and CT to the cryosection volume of a nude mouse. We used thin plate splines for 2D registration, and the AIR and linear elastic registration methods to register CT images of two different mice, to provide nonrigid registration examples. We used version AIR 5.2.5 in our experiments. AIR and LEREG registrations were run on an AMD Athlon dual processor 1.7 GHz machine with 2 GB of RAM, and the thin plate spline and RView registrations were run on a 1 GHz Pentium III processor.
1.2.1 Rigid registration: AIR vs. RView
Functional information obtained from PET studies of small animals can be combined with the anatomical information obtained from CT scanners, and complementary anatomical information can be combined from CT (skeleton, skin) and cryosections (organs) of the mouse. Hence an accurate registration between the modalities is required to get precise knowledge of mouse anatomy.

In our first experiment, we used the PET and the CT data from the Digimouse. After processing the data we had a CT image with a cubical 0.1 mm voxel size and a matrix size of 372x920x292 (throughout the thesis we list matrix sizes as sagittal x axial x coronal), and PET data with 0.4 mm voxels in the planar section and 0.6 mm in the axial direction with a matrix size of 128x127x128. Prior to registration, we adjusted the contrast of the CT and the PET data using the software ImageJ (Rasband, 2003) so that more information regarding the organs could be extracted. We then registered the CT and PET data using AIR and RView. No smoothing was applied to the PET and CT images prior to registration. In AIR we used the standard deviation of ratio images as the cost function and a six parameter rigid transformation model; the algorithm for registering PET to CT images took about 27 minutes. We also used a six parameter rigid transformation model in RView; in this case the algorithm took 50 seconds to converge. In both registration methods, the registered images were obtained using tri-linear interpolation. Overlays of the registered PET image obtained using RView and AIR and the CT image are shown in Figure 1.4.

We also plotted the feature space (joint histogram) of the CT-PET volume in Figure 1.5 to better visualize the registration behavior captured by mutual information. The left histogram shows the diffuse pattern in the joint histogram before rigid registration, while the right histogram shows how the joint histogram became more focused after the images were registered.
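As an illustration of how such a joint intensity histogram, and the mutual information derived from it, can be computed, the following is a minimal sketch assuming two already-resampled volumes of identical shape; the bin count is an arbitrary choice and this is not the internal implementation of RView or AIR.

import numpy as np

def joint_histogram(img_a, img_b, bins=64):
    """2D histogram of intensities for two images sampled on the same grid."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    return hist

def mutual_information(img_a, img_b, bins=64):
    """Mutual information estimated from the joint intensity histogram."""
    p_ab = joint_histogram(img_a, img_b, bins)
    p_ab = p_ab / p_ab.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)   # marginal of image A
    p_b = p_ab.sum(axis=0, keepdims=True)   # marginal of image B
    nz = p_ab > 0                           # avoid log(0)
    return np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz]))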
Figure 1.4: Rigid registration of PET to CT data using RView and AIR. Panels show the CT image, the registered PET image (using RView, top row; using AIR, bottom row), and the overlay of PET onto CT.
For our second experiment, we used the same CT data and registered it to the cryosection volume created from the stack of color cryosection images. After processing, the cryosection data had a matrix size of 380x992x208 and a cubical 0.1 mm voxel size. Initially, we tried to register the CT data to the cryosection data without prior segmentation. With RView, the user needed to provide initial translation parameters in order to obtain a meaningful registration. With these parameters the algorithm converged, without the necessity of editing the volume, in 2 minutes. However, AIR did not converge with the original data; a prior segmentation was needed to register the CT data to the cryosection data. Therefore, we segmented both the CT and the cryosection data so that they only include the mouse structures, and used the segmented skin surface to mask the images in AIR. In this case, the algorithm converged in 32 minutes. Since both CT and cryosection data provide anatomical information about mice, we manually segmented some structures which are visible in both images in order to gain better insight into the registration accuracy. For this test, we chose to segment the heart and the bladder from the optical cryosection slices and overlay these regions on the registered CT data obtained from both RView and AIR. The registration results with the overlay of the segmented heart and bladder are shown in Figure 1.6.

Figure 1.5: Joint histogram plots of the intensities of the PET and CT data before (left) and after (right) rigid registration using mutual information.
Figure 1.6: Rigid registration of CT to cryosection data using RView and AIR. Panels show the cryosection data, the registered CT (using RView, top row; using AIR, bottom row), and the overlay of CT onto the cryosection data.
We then evaluated the registration accuracy by segmenting the external fiducials, which were visible in all modalities, and computing the RMS error, given as:

RMS Error = \sqrt{ \frac{1}{N} \sum_{i=1}^{N} \|p_i - q_i\|^2 }     (1.16)

where p_i and q_i are the positions of the i-th fiducial in the two registered volumes and N is the number of fiducials.
                        RView        AIR
PET to CT               0.5768 mm    0.8885 mm
CT to cryosection       0.7315 mm    0.7642 mm

Table 1.2: RMS registration errors computed from 600 fiducials for rigid registration of PET to CT and CT to cryosection data with RView and AIR.
The RMS errors computed from the 600 fiducials available in the middle 100 slices are listed in Table 1.2. We decided to use the middle 100 slices and discard the rest of the fiducials, since segmentation of the fiducials was harder close to the boundaries of the mouse due to the wooden frame built around the animal. From the results, we see that the RMS errors in PET to CT and CT to cryosection rigid registration using AIR and RView are very small, which shows that both software packages performed equally well. However, a successful CT to cryosection registration with the AIR program was obtained only after prior segmentation of the data, which required user interaction.
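A minimal sketch of this fiducial-based evaluation, assuming the corresponding fiducial centroids have already been extracted from each registered volume as Nx3 arrays in millimeters:

import numpy as np

def rms_fiducial_error(p, q):
    """RMS distance between corresponding fiducial positions.

    p, q : (N, 3) arrays of fiducial coordinates (in mm) in the two
           registered volumes, row i of p corresponding to row i of q.
    """
    diffs = p - q
    return np.sqrt(np.mean(np.sum(diffs ** 2, axis=1)))

# Example with synthetic fiducials perturbed by ~0.5 mm of noise
rng = np.random.default_rng(0)
p = rng.uniform(0, 30, size=(600, 3))
q = p + rng.normal(scale=0.5, size=p.shape)
print(f"RMS error: {rms_fiducial_error(p, q):.4f} mm")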
1.2.2 Thin Plate Spline

To illustrate a 2D nonrigid registration example, we registered a CT slice of the template mouse to the corresponding CT slice of the Digimouse. The size of the CT slice was 186x460 with a uniform pixel size of 0.2 mm. We manually picked 20 landmarks for which we could find correspondences in both the template mouse and the Digimouse. We then implemented the thin plate spline algorithm as described in Bookstein (1989) using Matlab and used this algorithm to register the 2D slices. The original slices with the selected landmarks and the warped template mouse with the Digimouse landmarks are shown in Figure 1.7. We also plotted the deformation field with the Digimouse landmarks and the original grid with the template mouse landmarks to give an idea of the warping occurring in thin plate spline registration (Figure 1.8).
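The thesis implementation was written in Matlab; purely as an illustration, here is a minimal Python sketch of the same idea, using SciPy's RBFInterpolator with a thin-plate-spline kernel to interpolate the landmark correspondences. The helper name tps_warp and the nearest-neighbour resampling are choices made for brevity, not part of the original implementation.

import numpy as np
from scipy.interpolate import RBFInterpolator

def tps_warp(moving, src_pts, dst_pts):
    """Warp a 2D image so that landmarks src_pts map onto dst_pts.

    moving  : 2D array (the template slice).
    src_pts : (K, 2) landmark coordinates (row, col) in the moving image.
    dst_pts : (K, 2) corresponding landmarks in the target (Digimouse) slice.
    A thin-plate-spline interpolant models the inverse mapping
    (target coordinates -> template coordinates) so the output image can
    be filled by sampling the template.
    """
    inverse_map = RBFInterpolator(dst_pts, src_pts, kernel="thin_plate_spline")

    rows, cols = np.mgrid[0:moving.shape[0], 0:moving.shape[1]]
    grid = np.stack([rows.ravel(), cols.ravel()], axis=1).astype(float)
    sample = inverse_map(grid)          # template coords for each target pixel

    # Nearest-neighbour sampling keeps the sketch short; bilinear is smoother.
    r = np.clip(np.round(sample[:, 0]).astype(int), 0, moving.shape[0] - 1)
    c = np.clip(np.round(sample[:, 1]).astype(int), 0, moving.shape[1] - 1)
    return moving[r, c].reshape(moving.shape)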
Figure 1.7: 2D thin plate spline registration. Panels show the Digimouse with landmarks, the template mouse with landmarks, the warped template mouse with the Digimouse landmarks, and the overlay of the warped template mouse onto the Digimouse.
1.2.3 Nonrigid registration, low-dimensional vs. high-dimensional: AIR vs. LEREG
In this experiment, we used the 3D CT of a template mouse and registered it to the 3D CT of the Digimouse. We started by downsampling the CT of the Digimouse by a factor of 2 to obtain an image with 0.2 mm voxel size. We also zero-padded the data so that the image could easily be downsampled by a factor of 8, which is necessary for LEREG's multiresolution framework; the matrix size of the CT Digimouse thus became 192x464x152. The matrix size of the template mouse was 256x410x256 with the same voxel size. Initially, we edited the images so that they only included the mouse structures, and also manually cropped the limbs in the CT images of both mice, since we were more interested in matching the organs than the limbs. Afterwards, we rigidly registered these mice using RView and used the registered template mouse as the initial transformation for the nonrigid registration. With AIR, we started the algorithm from a low order polynomial and successively incremented the order of the polynomial until the algorithm could no longer improve. In our case, the algorithm terminated at the 7th order polynomial and the registration took about 6 hours. With LEREG, we used the initially registered template image and registered it to the CT of the Digimouse using the weighting factors σ = 1, ρ = 0.00125 and χ = 600. With these data sets, the algorithm took about 5 hours to converge. The original CT images, the CT images registered rigidly and nonrigidly (AIR and LEREG), and the overlays of the final registered images with the CT of the Digimouse are shown in Figure 1.9.

Figure 1.8: Illustration of the deformation field after 2D thin plate spline registration, showing the regular grid with the template mouse landmarks and the deformation field with the Digimouse landmarks.
Figure 1.9: Nonrigid registration of CT images of the template mouse to the Digimouse using AIR and LEREG: (a), (e) CT image of the Digimouse; (b), (f) rigidly registered CT image of the template mouse using RView; (c) warped CT image of the template mouse using AIR; (d) overlay of the warped CT image of the template mouse onto the CT image of the Digimouse; (g) warped CT image of the template mouse using LEREG; (h) overlay of the warped CT image of the template mouse onto the CT image of the Digimouse.
In order to evaluate the registration accuracy of the nonrigid registration experiments, we segmented the skeleton from the CT images and measured how well these structures match after registering them using AIR and LEREG (Figure 1.10). For this, we computed the Dice coefficients between the skeleton of the Digimouse and the skeleton of the registered template mouse obtained using the AIR and LEREG software. The Dice coefficient measures the similarity of two sets; it ranges from 0 for sets that are disjoint to 1 for sets that are identical. This metric is a special case of the kappa index that is used for comparing set similarity (Zijdenbos et al., 1994). It is defined as
k(S_1, S_2) = \frac{2 |S_1 \cap S_2|}{|S_1| + |S_2|},     (1.17)

where S_1 and S_2 are the two sets being compared.
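A minimal sketch of this overlap measure for two binary label volumes (e.g., the segmented skeletons), assuming they have already been resampled onto the same grid:

import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice coefficient between two boolean volumes of identical shape."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Example: two partially overlapping spheres
zz, yy, xx = np.mgrid[0:64, 0:64, 0:64]
sphere_a = (zz - 30) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2
sphere_b = (zz - 34) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2
print(f"Dice: {dice_coefficient(sphere_a, sphere_b):.3f}")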
We also computed the Dice coefficient between the skeleton of the Digimouse and the skeleton of the rigidly registered template mouse in order to also assess the relative performance of rigid and nonrigid registration. From Table 1.3, we can conclude that LEREG performed better than AIR in registering the anatomical structures within the animal, since the Dice coefficient computed using the skeleton of the template mouse registered with LEREG is larger than that computed using the skeleton of the template mouse registered with AIR.
                 rigid (RView)   nonrigid (AIR)   nonrigid (LEREG)
k(S_1, S_2)      0.3127          0.6077           0.7411

Table 1.3: Dice coefficients comparing the skeleton volume of the Digimouse with the skeleton volume of the template mouse after rigid and nonrigid registrations (AIR and LEREG).
Figure 1.10: Evaluation of rigid and nonrigid registrations using AIR and LEREG by overlaying the skeletons: (1a, 2a) overlay of the skeletons of the Digimouse and the template mouse after rigid registration; (1b, 2b) after nonrigid registration using AIR; (1c, 2c) after nonrigid registration using LEREG.
1.2.4 Atlas-based registration to automatically label a template mouse

Using the CT and cryosection data, we generated a whole body mouse atlas in which major organs were segmented with the help of a mouse anatomist (Dogdas et al., 2007). The details of how we created this atlas will be presented in the next chapter. We used the Digimouse to automatically label a template mouse by finding a transformation between the CT of the Digimouse and the CT of the template mouse and then applying this transformation to the atlas. We determined this transformation using the LEREG software. The overlay of the deformed atlas on the CT of the template mouse is shown in Figure 1.11, with some major organs and the skeleton of the template mouse CT obtained automatically through this procedure.
Figure 1.11: Automated labeling of the template mouse by deforming the atlas using the transformation obtained from registering the CT of the Digimouse to the CT of the template mouse using LEREG. Labeled organs include the heart, lungs, kidneys and bladder.
Chapter 2
Digimouse: A 3D Whole Body Mouse Atlas from CT and Cryosection Data
The Digimouse atlas was generated from a normal nude male mouse using two imag-
ing modalities, X-ray microCT and color cryosection images. The atlas also includes
coregistered PET data representing the distribution of a mixture of the tracers ¹⁸F⁻ and ¹⁸F-fluorodeoxyglucose (FDG) within the mouse. In this chapter, we describe our
methodology for acquiring and registering the PET, CT and cryosection data. We then
describe our approach to defining and labelling anatomical structures in the atlas.
A number of alternative mouse atlases are currently available, including the MOBY phantom (Segars et al., 2004) created from the Visible Mouse data from the Duke Center for In Vivo Microscopy, the atlas of embryonic mouse development at Caltech (Dhenain et al., 2001), and the Mouse Atlas Project developed at the University of Edinburgh (Burger et al., 2002). The Visible Mouse (Johnson et al., 2002) is a 110 micron resolution magnetic resonance microscopy volume image of a normal 16 week old male C57BL/6 mouse. The organ models in the MOBY phantom were segmented from these data. 3D non-uniform rational B-spline (NURBS) surfaces were then fit to each segmented structure to generate a mathematical phantom. Cardiac and respiratory motion were also modeled using 4D NURBS in the MOBY phantom (Segars et al., 2004). The embryonic mouse atlas is a 3D digital atlas of mouse embryo development from conception through birth, again constructed using magnetic resonance microscopy (Dhenain et al., 2001). The atlas developed at the University of Edinburgh is a spatial database of gene expression patterns in the developing mouse embryo (Burger et al., 2002). There are also a number of atlases that focus on the mouse brain (MacKenzie-Graham et al., 2004; Rosen et al., 2000; Celio et al., 1998; Sonka and Fitzpatrick, 2000) and hypothalamus (Broadwell and Bleier, 1976).
All but one of the atlases listed above are either restricted to a specific organ or are of embryonic rather than mature animals. The MOBY atlas is closest in spirit and content to the Digimouse atlas described here. However, in addition to differences in terms of the species of the mouse, the types of organs defined and the resolution, these atlases differ in the modalities from which they were constructed, i.e. magnetic resonance microscopy versus CT and cryosectioning. The MOBY atlas has the advantage of modeling cardiac and respiratory motion, providing a 4D mouse phantom for use by the small animal imaging community. The Digimouse atlas includes functional information (PET) in addition to the anatomical information (CT and cryosections), which together with the labelled atlas represents a multimodal mouse phantom. With the increasing use of small animal whole body in vivo imaging across multiple modalities, the availability of a whole body multimodal atlas has many potential applications in both simulation and in vivo studies. Potential applications of the mouse atlas include:
1) A pedagogical tool for exploring mouse anatomy across modalities.
2) Automated labelling of anatomical structures by coregistration of in vivo mouse image data either directly to the atlas or to the source CT and cryosection image data from which the atlas was constructed.
3) Computer phantom studies to simulate small animal PET, SPECT, CT or MRI imaging systems.
4) Generation of boundary or finite element meshes for use in modeling of light propagation for 3D bioluminescence (Chaudhari et al., 2005) and fluorescence imaging (Ntziachristos et al., 2002) systems. These can be used either for simulation purposes, or, when combined with automated registration from an individual mouse to the atlas, for solving the forward problem in in vivo tomographic studies.
2.1 Mouse Preparation and Imaging

We injected a 28 g normal nude male mouse with 765 μCi of FDG and 216 μCi of ¹⁸F⁻ ion in solution, a mixture chosen to show uptake in PET images in both soft tissue (heart, brain, kidney, muscle, bladder) and skeletal structures. Uptake and non-specific clearance was allowed for an hour under light anesthesia (Ketamine/Xylazine). During the last 15 minutes of uptake, the bladder was expressed and we sutured the mouse into a rigid framework (Figure 2.1). We designed this framework to hold the mouse in place throughout the imaging protocol, and attached six 0.8 mm diameter hollow teflon tubes filled with ink and ¹⁸F⁻ ion (18 μCi per tube) to serve as fiducial markers for the three imaging modalities. The posture of the mouse in this framework was as close as possible to that of a mouse in a typical PET or CT imaging experiment (Stout et al., 2003). After an hour we sacrificed the mouse using a pentobarbital overdose and immediately froze it in liquid nitrogen to prevent bloating from rapidly developing gas pockets. Careful attention was needed to gently freeze the animal in stages to prevent cracking due to large temperature gradients in the tissues. We used two 8 second immersions in liquid nitrogen to rapidly freeze the mouse. During the imaging process we maintained this frozen state by blowing a cold stream of nitrogen gas over the mouse. The liquid nitrogen was boiled off in a Dewar using a heater connected to a power supply, which allowed the gas flow to be easily controlled. The end of the Dewar pipe was held in place and positioned just outside the active field of view of the PET and CT systems. Testing of this cooling system, using frozen water in a 50 cc test tube with an embedded thermocouple, showed that the temperature could be maintained well below freezing for as long as liquid nitrogen was available, which was in excess of 10 hours for the Dewar size we used.
We acquired the PET images in one bed position in a Concorde P4 microPET scan-
ner (Siemens Preclinical Solutions, Knoxville, TN) following 60 minutes uptake of the
probe and the images were reconstructed using a MAP (maximum a posteriori) recon-
struction algorithm (Qi et al., 1998a). The whole PET imaging process consisted of
a 30 minute emission scan followed by 2x10 minutes of transmission scanning using
singles and a 360 μCi Ge-68 point source. An additional 10 minute singles scan was
acquired without the point source to estimate the emission contamination in the trans-
mission scans. We also corrected the attenuation using the CT data we acquired in the
next step (Chow et al., 2005).
We acquired the CT images of the whole body in 2 bed positions using the Imtek
microCAT system (Siemens Preclinical Solutions, Knoxville, TN) immediately follow-
ing the PET scans. Each position was acquired using 0.5 mm aluminum filtration, 784
steps, 450 ms exposures at 50 kvp, and 250 μA. We reconstructed the two CT images
on an 18-CPU Beowulf PC cluster using the Feldkamp cone beam reconstruction method with a ramp filter to produce a volumetric image with a voxel size of 0.1 mm.
Following PET and CT imaging, we packed the mouse and framework in Carboxyl
Methyl Cellulose (CMC) inside a plastic pipette box and froze it for 60 seconds in
liquid nitrogen. The CMC not only acts as a common stabilizer but also assists the
smooth motion of the cryosection blade. After freezing overnight at -20 C, the plastic
box was stripped off and the block was attached to the cryostat cutting plate using addi-
tional CMC. We froze the plate and the block overnight to ensure a strong bond. The
cryostat was set to cut 50μm thick sections throughout the entire mouse. We acquired
digital images of the block using a Nikon CoolPix 5700 camera and a 1 GB micro-
drive. To avoid changes in focus, the camera was set to manual focus of 30 cm with
a fixed aperture of 11.3. No flash was used since it caused unwanted reflections from
the surface. After each slice was removed the block was cleaned with a single swipe
of a 70 percent EtOH pad to remove any ice or bone chips and create a smooth clear
image surface. The camera was mounted on a stand and operated remotely to avoid
moving the camera between images. Additional testing indicated that no loss of reso-
lution could be determined between using the raw data, uncompressed TIFF images or
minimal JPEG compression; therefore we used the minimal JPEG compression setting.
The main advantage of using JPEG was the ability to fit the entire image volume onto
a single 1 GB microdrive, allowing the camera to remain untouched during the entire
cryosectioning session. An additional advantage was that the imaging time was consid-
erably shortened, since storing the uncompressed images required significantly longer
waits between imaging each slice. The sequence of events for processing of the mouse
for PET, CT and cryosectioning is illustrated in Figure 2.1 and representative image
slices from PET, CT and the cryosection data are shown in Figure 2.2. A preliminary
report on this procedure is given in Stout et al. (2002).
Figure 2.1: The sequence of processing of the mouse for acquiring the PET, CT and cryosection image data. The timeline covers injection of radioactivity, PET imaging and PET transmission, CT scans, sacrifice and freezing, CMC embedding and freezing, and cryosectioning, spread over Days 1-3.
               PET                      X-ray CT                 Cryosection
Voxel size     0.4mm x 0.6mm x 0.4mm    0.1mm x 0.1mm x 0.1mm    38.8µm x 38.8µm x 50µm
Size           8 MB                     900 MB                   600 MB
Matrix size    128 x 127 x 128          512 x 942 x 512          1704 x 2560 x 418

Figure 2.2: Representative slices from the PET, CT and cryosection data (voxel and matrix sizes are given in sagittal, axial and coronal directions respectively).
2.2 Processing the Data

2.2.1 Data alignment and resampling

Prior to constructing the Digimouse atlas we resampled the cryosection images onto an isotropic 0.1 mm grid and then coregistered and resampled the CT and PET data to the same coordinate system. We now describe these steps in detail.

The cryosections were obtained as coronal sections with an in-plane voxel size of 38.8 μm in the sagittal and axial directions and a 50 μm slice thickness in the coronal direction, giving a matrix size of 1704x2560x418. In the process of acquiring the cryosection images, minor shifts occurred since the moving belt of the cryostat did not always reposition the block in exactly the same location with each pass under the cutting knife. This caused relative movement between some sections and hence linear misalignment of these slices. Since these misalignments occurred only in the coronal sectioning plane, they could be corrected slice-by-slice through translation in the axial and sagittal directions. We segmented the fiducial markers in each slice and cross-correlated the images of the fiducial markers between successive images to determine which slices were misaligned. In these slices, the translation was chosen to maximize interslice correlation of the images of the fiducial markers. The final result was verified by visual inspection. We then cropped the cryosection slices to remove background and resampled each slice using bicubic interpolation to a 0.1 mm x 0.1 mm voxel size. Since the slice thickness was 50 μm, we used every second slice to generate isotropic 0.1 mm cubical voxels, for a matrix size of 380x992x208 for the entire mouse.
The CT image reconstructed from the two bed positions had a 0.1 mm isotropic voxel size with a matrix size of 512x942x512. We cropped the volume to 372x920x292 voxels and adjusted the image intensity scale to maximize soft tissue contrast using the software ImageJ (Rasband, 2003). We then registered the CT data to the cryosection data using the RView software to find the rigid transformation (rotation, translation and scaling) between the two image volumes (Studholme et al., 1999).

After rigid registration, the overlay of the cryosection and CT images showed misalignment in the head and forelimbs. We believe this was because the animal was not completely frozen during CT and PET imaging, and subsequent small displacements of the head and forelimbs occurred during packing and freezing of the mouse in CMC. To correct this problem we applied piecewise rigid registration separately for the body, head and forelimbs, followed by a nonrigid refinement using the LEREG algorithm (Christensen, 1999) to ensure a continuous deformation field between the two volumes. Figure 2.3 shows the overlay of the CT data on the cryostat image before and after nonrigid registration.
Figure 2.3: Overlay of the CT onto the cryostat data before and after nonrigid registration. Nonrigid registration was performed using LEREG.
The PET image data were reconstructed on 0.4 mm voxels in the transaxial planes and 0.6 mm in the axial direction, with a matrix size of 128x127x128. These were rigidly registered to the X-ray CT data using RView (Studholme et al., 1999). Figure 2.4 shows the registered PET/CT fusion image of the mouse. The PET data were then transformed into the space of the cryosection images by applying the same nonrigid registration transformation previously used to map the CT to the cryosection data. The PET data were then resampled in this space at the same density as the resampled CT and cryosection data to produce the final set of coregistered data.
2.2.2 Segmentation and Labelling

The anatomy of the mouse was defined using an interactive curve editing tool implemented in Matlab (Mathworks, Natick, MA) specifically for development of this atlas. The graphical user interface allowed us to load the registered 2D cryosection and CT volumes, blend the two images, and view them in three orthogonal planes. Using this tool we were able to segment individual organs from coronal slices. We then verified segmentation accuracy in sagittal and transaxial views.

Figure 2.4: Registered PET-CT fusion image of the mouse.
We began by segmenting the outer skin boundary and skeleton by thresholding the
CT images. The resulting skeleton was then edited using the interactive tool to cor- rect bone boundaries affected by partial volume effects and artifacts due to scatter and
beam hardening. Additional soft tissue structures were then added to the atlas by tracing their boundaries in successive coronal sections of the fused CT and cryosection images.
The boundaries were defined by the user as a set of points which are then automati-
cally interpolated using a cubic B-spline. The boundaries were then refined by dragging
and adding control points to achieve the best fit to the organs in the fused image data.
After segmenting an organ in each slice, the surface of the resulting volume was tes-
selated using the marching cubes algorithm and then smoothed using a mean curvature
flow algorithm as implemented in our BrainSuite software (Shattuck and Leahy, 2002).
Organ volumes were then reconstructed from these smoothed surfaces. The following
organs were defined in this way: whole brain, external cerebrum, cerebellum, olfac-
tory bulbs, striatum, medulla, masseter muscles, eyes, lachrymal glands, heart, lungs,
liver, stomach, spleen, pancreas, adrenal glands, kidneys, testes and bladder. Since the
organs were segmented one at a time, further processing was needed to correct for overlap between organs and/or the skeleton. We corrected these overlaps using a volumetric
hand-editing tool in BrainSuite (Shattuck and Leahy, 2002). Figure 2.5 shows a 3-D
surface rendering of the structures in the final atlas. Coronal and transaxial sections
through the atlas and coregistered data sets are shown in Figure 2.6.
Figure 2.5: Surface rendering of atlas structures: whole brain, external cerebrum, cerebellum, olfactory bulbs, striatum, medulla, masseter muscles, eyes, lachrymal glands, heart, lungs, liver, stomach, spleen, pancreas, adrenal glands, kidneys, testes, bladder, skeleton and skin in the Digimouse.
Since the PET scan is a mixture of FDG (soft tissue) and F⁻ (bone), we took the skeleton from the CT images, dilated it by one voxel, and used this volume to window the registered PET data to separate the skeleton and FDG components of the PET scan. Even though the PET data provided information regarding the location of the bladder and the heart, we did not use it in the segmentation process because of its low resolution. We used the PET data only to incorporate functional information into our atlas.

Figure 2.6: Coronal and transaxial sections through the coregistered PET, CT and cryosection data sets and the atlas, with an overlay of the atlas onto the cryosection data.
The complete set of coregistered data and volumetric labelled atlas are available at
the website http://neuroimage.usc.edu/Digimouse.html together with a mesh represen-
tation of the atlas for use with boundary and finite element methods as described in the
following section.
2.3 Applications: PET and Optical Simulations using the Mouse Atlas

We conclude this chapter by illustrating use of the atlas in simulating small animal bioluminescence and PET imaging studies. We assume a cancer model in which we implant three tumors at distinct locations in the atlas. The spherical tumors each have a radius of 1.8 mm and were placed in the brain, lung and spleen. We assume the tumor cells are labelled with the firefly luciferase (FFL) gene so that they can be localized with a bioluminescence imaging system. We also simulate PET imaging of the uptake of FDG in the same three tumors with realistic nonspecific background activity, as described below.
For the bioluminescence simulation we downsampled the mouse atlas to a matrix size of 95x248x52 with a voxel size of 0.4 mm. We then generated a 3D mesh representation of the atlas using a constrained Delaunay tetrahedralization method (Si, 2005), in which the organ boundaries are more densely tesselated than their interiors to more accurately reflect the surface shape of the different organs. The final mesh, illustrated in Figure 2.7a, had 306,773 elements, 16,164 surface nodes and 42,080 internal nodes. To simulate bioluminescence imaging we solve the diffusion equation using frequency dependent optical attenuation and scatter coefficients (Alexandrakis et al., 2005) assigned to each organ in the atlas (Figure 2.8). We simulate volumetric reconstruction of bioluminescent sources as described in Chaudhari et al. (2005), in which multispectral images of the light emitted from the animal are used. A source grid was defined within the atlas on a regular grid with 1.2 mm voxel spacing, for a total of 9,192 source locations. A finite element method (FEM) solution of the diffusion equation was computed to define the mapping from each source location within the animal to each of the mesh nodes on the skin surface. We used the optical properties reported in Alexandrakis et al. (2005), assigned element-wise for all organs, for 6 wavelength bins uniformly spaced by 20 nm from 600 nm to 700 nm, as illustrated in Figure 2.7b-c. Details of the multispectral forward model can be found in Chaudhari et al. (2005).
Bioluminescence data are acquired in the focal plane of the CCD camera, but for the purposes of this simulation we assume that we directly measure photon density at the skin surface of the animal. We computed the photon density at the surface of the atlas for each of the 6 wavelength bins assuming uniform expression of FFL within each tumor (Wet et al., 1987). Noise (SNR = 200), sampled from a bioluminescence background image taken with the CCD camera mounted on the IVIS 200 imaging system (Xenogen Corporation IVIS 200), was added to simulate realistic noise conditions. The 3D reconstruction of FFL was computed using a positively-constrained regularized least squares method as described in Chaudhari et al. (2005).

Figure 2.7: (a) Tetrahedral mesh representation of the mouse atlas using the 3D constrained Delaunay method; (b) assignment of optical absorption coefficients to the FEM mesh at 700 nm; (c) assignment of optical diffusion coefficients, also at 700 nm.
The PET simulation is based on a model of the F220 microPET (Siemens Preclinical Solutions, Knoxville, TN) small animal scanner. To simulate PET data we used a forward projector based on the factored system approach described in Qi et al. (1998a). This system model includes geometric sensitivity, attenuation and normalization factors together with a blur kernel that takes into account the block structure of the scanner and intercrystal penetration and scatter. Fully 3D sinograms were simulated with a maximum ring difference of 47 and span 3.

Figure 2.8: Scattering and absorption coefficients of organs in the 600-700 nm region.
To generate an attenuation map we used an approach similar to that described in Chow et al. (2005), where we assigned linear attenuation coefficients to each structure in the atlas according to the values listed in Table 2.1. These values were obtained by logarithmic interpolation of the published values of Hubbell and Seltzer (1996) to compute coefficients at 511 keV. Soft tissue attenuation coefficients vary little at this energy level; assigning different attenuation values to each organ would have a larger impact in SPECT studies with lower energy gamma rays.
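As a sketch of how such an attenuation map can be built from a labelled atlas, the following assigns a linear attenuation coefficient (in cm⁻¹) to each voxel by label lookup; the label codes and the subset of values shown are illustrative (taken only in part from Table 2.1) and do not reflect the actual Digimouse label encoding.

import numpy as np

# Illustrative label codes; the actual Digimouse label values may differ.
LABELS = {"air": 0, "background": 1, "skeleton": 2, "lungs": 3, "heart": 4}

# Linear attenuation coefficients at 511 keV (cm^-1), a subset of Table 2.1.
LAC = {
    "air": 1.04e-4,
    "background": 0.0913,   # adipose tissue
    "skeleton": 0.1717,     # cortical bone
    "lungs": 0.0317,
    "heart": 0.1008,        # soft tissue
}

def attenuation_map(atlas):
    """Map an integer-labelled atlas volume to a 511 keV attenuation map."""
    mu = np.zeros(atlas.shape, dtype=np.float32)
    for name, code in LABELS.items():
        mu[atlas == code] = LAC[name]
    return mu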
To simulate FDG uptake in the atlas we assumed uniform activity in the three tumors and added as background a scaled version of the Digimouse activity reconstructed from the PET data acquired during preparation of the atlas. While this does not closely reflect a true FDG study, since the Digimouse PET data used a combination of ¹⁸F⁻ and FDG, it does provide texture to the background similar to that found in in vivo studies. The combined tumor and background activity was forward projected through the system model to simulate sinogram data. Data were scaled after forward projection using a measured normalization scan from the F220 scanner. Pseudo-Poisson noise was then added to achieve the equivalent of an average of 20 counts per line of response. Images were reconstructed using 2 iterations of a 3D OSEM algorithm and 18 iterations of a preconditioned conjugate gradient MAP method (Qi et al., 1998b; Qi and Leahy, 2000; Li et al., 2004). A quadratic prior was used with a smoothing parameter of 0.1.
Figure 2.9 shows a comparison of the PET and bioluminescence reconstructions from data simulated using the Digimouse atlas. The PET images illustrate the ability of microPET scanners to image relatively small tumors. Through use of the phantom we can investigate the behaviour of different reconstruction algorithms or imaging systems for realistic anatomically-based PET tracer distributions. The bioluminescence images show that it is possible to accurately reconstruct FFL sources in 3D using multispectral data, at least in cases where the forward model (i.e. the spatial distribution of optical diffusion and absorption coefficients) is known. Through use of the atlas we are able to investigate the accuracy with which FFL and other bioluminescent and fluorescent sources can be estimated as a function of imaging system parameters and reconstruction algorithm. Furthermore, we can investigate the sensitivity of these results to errors in our estimates of the optical properties of the animal.
Tissue type                                            Type of material   LAC (cm⁻¹)
air                                                    Air                1.04E-04
background                                             Adipose tissue     0.0913
skeleton                                               Bone cortical      0.1717
eye                                                    Eye lens           0.1012
medulla, olfactory bulbs, heart, lachrymal glands,
  bladder wall, stomach, spleen, pancreas, liver,
  kidneys, adrenal glands, skin                        Soft tissue        0.1008
cerebellum, external cerebrum, striatum,
  rest of the brain                                    Brain              0.0994
testes                                                 Testis             0.0993
masseter muscles                                       Muscle             0.0999
lungs                                                  Lung tissue        0.0317
bladder content                                        Water              0.0960
tumors                                                 Soft tissue        0.1008

Table 2.1: Linear attenuation coefficients assigned to different tissues in the Digimouse to generate the attenuation map for the PET simulation.
Figure 2.9: Comparison of PET and 3D bioluminescence reconstructions in coronal (top row) and transaxial (bottom row) views: (a) PET; (b) bioluminescence; (c) fusion of bioluminescence and PET reconstructions; (d) fusion of bioluminescence and the ground truth tumor image.
Chapter 3
3D Surface Model of a Mouse from Orthogonal Views of Structured Light Images
"Structured light" methods are techniques for determining the 3D surface topography of an object (Takeda and Mutoh, 1983; Skydan et al., 2002; Li et al., 2001). In fluorescence and bioluminescence imaging of small animals we need this information not only for registering the optical data to the surface and knowing where the mouse is in the field of view of the instrument, but also for generating the forward model that maps source distributions within the animal into the measurement space. In the structured light system, parallel laser lines are projected onto the mouse to produce structured light images. With knowledge of the relevant camera and projector geometry, height maps are then deduced from the distortions of these lines produced on the surface of the mouse. When it is not possible to measure these parameters directly, they can be estimated, along with a lens distortion model, using calibration objects. One popular method to compute the height maps from structured light images is based on computing the phase of the structured light images using the Fourier transform (Takeda and Mutoh, 1983; Su and Chen, 2001). The final height map is then computed by unwrapping this phase and relating it to the geometry of the system. Abrupt and irregular changes in the surface of a complex object such as a mouse can result in occlusions and shadows, and can produce incorrect height maps. Hence additional structured light images from different views are helpful for obtaining an accurate 3D surface profile. In the literature, there are different techniques to measure the full 360 degree shape, such as the object rotation method of Cheng et al. (1991) and a fixed imaging system with a rotating camera or projector as in Screiber and Notni (2000). We approach the problem by obtaining additional orthogonal views using a mirror setup, which resembles the second method since the imaging system is fixed and the problem reduces to transforming all the local 3D coordinates obtained from different views to a global coordinate system. After obtaining surfaces from each direction, we combine these 3D point clouds to form the resultant 3D surface. Peng et al. (2002) used the ICP algorithm (Besl and McKay, 1992) to register multiple range images of a free-form surface. Li et al. (2001) used cross correlation to patch subregions of surfaces to obtain a large scale three dimensional surface profile. Our approach to the problem is to combine 3D height maps from orthogonal views by finding the calibration and the system transformation between these views and refining the result using the ICP algorithm to obtain the final 3D surface model of the mouse. Ideally, knowing the system calibration and accurate height maps from different views would be sufficient to create the 3D surface model; however, due to uncertainties in the orientation of the mirrors, the measurements and occlusions, a registration technique which uses surface features to match these views, in addition to the knowledge of the transformation of the imaging system, is needed to obtain an accurate surface model.
3.1 Optical Geometry

We acquire structured light images using a commercial instrument for multispectral bioluminescence and fluorescence imaging (Xenogen Corporation IVIS 200). In this system parallel lines are projected onto the surface of the object at an angle and the resultant structured light images are stored with the photographic images. However, this system is a 2D planar imaging system and hence provides acquisition of the structured light and photographic images only from the top view. In order to obtain a more accurate surface model of the mouse, we used the instrumentation setup developed for a 2D imaging system that allows structured light and photographic image acquisition of 3D tomographic data from the full mouse surface (Chaudhari et al., 2005). A schematic of this mirror setup is shown in Figure 3.1. Because of the interference of the structured light reflected from the mirrors, the structured light and photographic images are obtained consecutively, by covering the mirrors while acquiring the top view and by covering the top view while acquiring the side views. An example of structured light images obtained from the top view and the side views for a mouse phantom is shown in Figure 3.2. In order to make our data acquisition robust during imaging, we did not change the focal plane of the camera; therefore the side views are imaged via their mirror reflections and hence viewed from a greater distance than the top view. In order to find the transformation between the side views and the top view, we first defined a coordinate system. We picked Xenogen's alignment grid for the origin of our coordinate frame and defined the imaging platform as the z = 0 plane. We then calibrated the camera using the well known Direct Linear Transformation (DLT) method described in Abdel-Aziz and Karara (1971). After the camera calibration, we computed the height maps for the top and side views. For the side views, the height map corresponds to the height of the reflected view of the object from the reference plane (z = 0). After obtaining the height maps, we reflected the height maps for the side views using the mirror equation and registered each reflected side-view height map to the height map from the top view using the ICP algorithm. The resultant surface of the object was computed by taking the intersection of these height volumes. In the following sections we describe these steps in detail.
Figure 3.1: Schematic of the mirror setup system.
3.2 Camera Calibration using the Direct Linear Transformation (DLT) Method

In order to compute the system calibration and relate the height maps from the side and the top views, we first computed the camera calibration. For this optical system, we can define an object coordinate frame and an image plane as in Figure 3.3. The camera maps an object point B with coordinates (x_b, y_b, z_b) in the object coordinate space to the image point I with coordinates (u_b, v_b) in the image plane, according to the camera projection center C. Since the camera center C and the corresponding image and object points belong to the same line, we can relate the image points to the object points using a linear transformation according to the DLT method (Abdel-Aziz and Karara, 1971).
In order to obtain this linear relation between the image space and the object space, a third axis, called the "principal axis", is added to the image plane, perpendicular to the image plane (Figure 3.3). The intersection of this axis with the image plane is referred to as the "principal point" (P). Accordingly, in the image coordinate frame all points that lie on the image plane are defined to have 0 as their third coordinate, so the image point I becomes (u_b, v_b, 0) and the principal point becomes (u_0, v_0, 0). Assuming the distance of the camera center to the image plane is c_d, the coordinates of the camera center become (u_0, v_0, c_d) in the image coordinate frame. The vector drawn from C to I can then be written as (u_b - u_0, v_b - v_0, -c_d) in the image coordinate frame. On the other hand, the vector drawn from C to B can be written as (x_b - x_c, y_b - y_c, z_b - z_c), assuming the position of the camera center in the object coordinate frame is (x_c, y_c, z_c). The vectors CI and CB are described in the image and object coordinate frames respectively, and they must be expressed in a common reference frame in order to state the colinearity relation. One approach is to transform the vector CB to the image plane reference frame. Then, using the colinearity of C, I and B, we can relate the vectors CB and CI as:

(u_b - u_0, v_b - v_0, -c_d)^T = c A (x_b - x_c, y_b - y_c, z_b - z_c)^T

where A is the transformation matrix from the object coordinate space to the image coordinate space and c is a scaling constant.

Figure 3.2: Three orthogonal views of the structured light images of a phantom mouse.
Figure 3.3: Optical geometry for camera calibration, showing the object point B(x_b, y_b, z_b), the image point I(u_b, v_b, 0), the principal point P(u_0, v_0, 0) and the camera center C(u_0, v_0, c_d).
Including the optical errors from the lens, we can rearrange this equation for x_b, y_b and z_b to obtain:

u_b - \delta u_b = \frac{L_1 x_b + L_2 y_b + L_3 z_b + L_4}{L_9 x_b + L_{10} y_b + L_{11} z_b + 1}
v_b - \delta v_b = \frac{L_5 x_b + L_6 y_b + L_7 z_b + L_8}{L_9 x_b + L_{10} y_b + L_{11} z_b + 1}     (3.1)
where the optical errors are expressed as:

\delta u_b = \xi (L_{12} r^2 + L_{13} r^4 + L_{14} r^6) + L_{15} (r^2 + 2\xi^2) + L_{16} \xi\eta
\delta v_b = \eta (L_{12} r^2 + L_{13} r^4 + L_{14} r^6) + L_{15} \xi\eta + L_{16} (r^2 + 2\eta^2)

with [\xi, \eta] = [u_b - u_0, v_b - v_0] and r^2 = \xi^2 + \eta^2.
L_1 - L_16 are the 16 DLT parameters of the camera that transform the object coordinate frame to the image coordinate frame; they include the parameters of the transformation matrix A, the camera center position (x_c, y_c, z_c) in the object frame, and the principal point and principal distance (u_0, v_0, c_d). Among these parameters, L_12 - L_14 are related to the optical distortion and L_15 - L_16 to the decentering distortion. Equation 3.1 can then be rearranged as:
u_b = [ x_b   y_b   z_b   1   0     0     0     0   -u_b x_b   -u_b y_b   -u_b z_b   \xi r^2 M   \xi r^4 M   \xi r^6 M   (r^2 + 2\xi^2) M   \xi\eta M ] [L_1 ... L_16]^T
v_b = [ 0     0     0     0   x_b   y_b   z_b   1   -v_b x_b   -v_b y_b   -v_b z_b   \eta r^2 M  \eta r^4 M  \eta r^6 M  \xi\eta M          (r^2 + 2\eta^2) M ] [L_1 ... L_16]^T

where M = L_9 x_b + L_{10} y_b + L_{11} z_b + 1.
In order to solve for the DLT parameters, a group of control points whose x, y and z coordinates are already known, along with their corresponding image points, must be used. To solve for 16 parameters, at least 8 points are required. We compute the DLT parameters using an iterative least squares method, since the coefficient matrix requires M, which is a function of the parameters L_9 - L_11, to compute the remaining L_12 - L_16 parameters.
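A minimal sketch of the basic (distortion-free, 11-parameter) DLT solve from known control points and their image coordinates; the full 16-parameter version used here additionally iterates to account for the lens distortion terms, which is omitted for brevity.

import numpy as np

def dlt_11(object_pts, image_pts):
    """Estimate the 11 linear DLT parameters from control points.

    object_pts : (N, 3) known (x, y, z) control point coordinates.
    image_pts  : (N, 2) corresponding (u, v) image coordinates.
    Returns L1..L11 as a length-11 vector (least squares solution).
    """
    rows, rhs = [], []
    for (x, y, z), (u, v) in zip(object_pts, image_pts):
        rows.append([x, y, z, 1, 0, 0, 0, 0, -u * x, -u * y, -u * z])
        rows.append([0, 0, 0, 0, x, y, z, 1, -v * x, -v * y, -v * z])
        rhs.extend([u, v])
    A = np.asarray(rows, dtype=float)
    b = np.asarray(rhs, dtype=float)
    L, *_ = np.linalg.lstsq(A, b, rcond=None)
    return L

def project(L, pts):
    """Project object points to image coordinates with the 11-parameter model."""
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    denom = L[8] * x + L[9] * y + L[10] * z + 1.0
    u = (L[0] * x + L[1] * y + L[2] * z + L[3]) / denom
    v = (L[4] * x + L[5] * y + L[6] * z + L[7]) / denom
    return np.stack([u, v], axis=1)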
3.3 Structured Light System

The geometry of the structured light system is shown in Figure 3.4, where the distance between the structured light projector lens and the camera lens is d, and the camera lens is a distance L_0 above the reference plane R, which is a fictitious plane normal to the optical axis of the camera and serves as the reference from which the object height h(x,y) is measured. B is a point on the surface of the object, B_s is the point where the structured light would hit R if the object were not present, and B_c is the point where the camera observes the structured light ray on R when the object is present. The structured light pattern has its lines normal to the plane of the figure, and its image is projected onto the object surface.
Figure 3.4: Optical geometry of the structured light system.
Using the approach in Takeda and Mutoh (1983), when an object is present on the reference plane, the deformed structured light pattern observed through the camera can be expressed as a summation of Fourier series:

s(x,y) = a(x,y) \sum_{n=-\infty}^{\infty} g_n \exp[2\pi i n f_0 (x + p(x,y))]     (3.2)

where p(x,y) is the amount of distortion of the structured light due to the height of the object (if the structured light source is at infinity, meaning there is no original phase modulation, then p(x,y) is equal to |B_s B_c| in Figure 3.4), and it can be expressed as a phase change \phi(x,y) = 2\pi f_0 p(x,y). Using this phase definition, Equation 3.2 can be rewritten as:

s(x,y) = a(x,y) \sum_{n=-\infty}^{\infty} g_n \exp[i(2\pi n f_0 x + n\phi(x,y))]     (3.3)
When there is no object on the reference plane, the structured light pattern becomes:

s_0(x,y) = a_0(x,y) \sum_{n=-\infty}^{\infty} g_n \exp[i(2\pi n f_0 x + n\phi_0(x,y))]     (3.4)

In Equations 3.3 and 3.4, a(x,y) and a_0(x,y) are the non-uniform distributions of reflectivity on the object and on the reference plane respectively, g_n are the weighting factors of the Fourier series, f_0 is the fundamental carrier frequency of the structured light pattern, and \phi(x,y) and \phi_0(x,y) are the phase modulations resulting from the object height distribution and the original phase modulation.
Takeda and Mutoh (1983) showed that if the spatial variations a(x,y) and a_0(x,y) and the phase modulations \phi(x,y) and \phi_0(x,y) are slow compared to f_0, the 1D FFT of the structured light pattern s(x,y) will have peaks at integer multiples of f_0, at -f_0, and at the origin. Hence, with a suitable filter, the spectrum can be filtered to retain only the fundamental component (Figure 3.5). The inverse FFT can then be applied to the fundamental component, which gives the complex signals \hat{s}(x,y) and \hat{s}_0(x,y) as:

\hat{s}(x,y) = g_1 a(x,y) \exp[i(2\pi f_0 x + \phi(x,y))]
\hat{s}_0(x,y) = g_1 a_0(x,y) \exp[i(2\pi f_0 x + \phi_0(x,y))]     (3.5)
As seen from Figure 3.4, the distortion of the structured light pattern due to the height of the object is related to the phase difference between the structured light pattern on the object and the structured light pattern on the reference plane: \Delta\phi(x,y) = \phi(x,y) - \phi_0(x,y) = 2\pi f_0 |B_s B_c|. This phase difference \Delta\phi(x,y) is computed from Equations 3.5 by

\Delta\phi(x,y) = \phi_w(x,y) = \mathrm{Im}(\log[\hat{s}(x,y) \hat{s}_0^{*}(x,y)])     (3.6)

Figure 3.5: 1D FFT profile of the structured light pattern.
Here, the phase calculation gives principal values ranging from -\pi to \pi and hence has discontinuities with 2\pi phase jumps. By an appropriate phase unwrapping algorithm, a continuous distribution of the phase \phi_u can be computed in order to relate it to the height of the object.
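The Fourier-transform step of this pipeline can be sketched as follows, assuming each row of the structured light image contains fringes with an approximately known carrier frequency f0 (in cycles per pixel) and that a simple rectangular band-pass window around the fundamental peak is adequate; the window half-width is an arbitrary choice.

import numpy as np

def wrapped_phase(fringe_obj, fringe_ref, f0, halfwidth=0.05):
    """Wrapped phase difference between object and reference fringe images.

    fringe_obj, fringe_ref : 2D arrays of structured light intensities,
                             fringes running along axis 1 (columns).
    f0        : approximate carrier frequency in cycles/pixel.
    halfwidth : half-width of the band-pass window around f0 (cycles/pixel).
    """
    n = fringe_obj.shape[1]
    freqs = np.fft.fftfreq(n)
    band = np.abs(freqs - f0) < halfwidth          # keep the +f0 fundamental only

    def analytic(img):
        spec = np.fft.fft(img, axis=1)             # 1D FFT along each row
        spec[:, ~band] = 0.0                       # isolate fundamental component
        return np.fft.ifft(spec, axis=1)           # complex fringe signal s_hat

    s_obj = analytic(fringe_obj)
    s_ref = analytic(fringe_ref)
    # Im(log(s * conj(s0))) is the wrapped phase difference of Equation 3.6.
    return np.angle(s_obj * np.conj(s_ref))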
Assuming the structured light system is perfect and the system parameters (L_0 and d) are known, the height distribution can then be computed using the similarity of the triangle B B_s B_c and the triangle formed by B, the structured light projector lens, and the camera lens, as:

h(x,y) = \frac{L_0 \phi_u(x,y)}{\phi_u(x,y) - 2\pi f_0 d}     (3.7)
Su et al. (2004) showed that if the system parameters are not known, a calibration technique based on ray tracing can be used to find the phase-height relation using the similarity of space triangles as:

\frac{1}{h(x,y)} = a(x,y) + \frac{b(x,y)}{\phi_u(x,y)} + \frac{c(x,y)}{\phi_u(x,y)^2}     (3.8)

where h(x,y) and \phi_u(x,y) are the height and the continuous (unwrapped) phase of a particular pixel, and the coefficients a(x,y), b(x,y) and c(x,y) are determined using a calibration procedure in which planar objects of known heights are placed on the reference plane and their phase distributions are computed using the FFT method and unwrapped using a phase unwrapping algorithm.
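A minimal sketch of this per-pixel phase-height calibration (Equation 3.8), assuming unwrapped phase maps are available for a handful (at least three) of flat calibration planes of known height; the pixel-wise least squares fit and the variable names are illustrative.

import numpy as np

def fit_phase_height(phases, heights):
    """Fit 1/h = a + b/phi + c/phi^2 independently at every pixel.

    phases  : (K, ny, nx) unwrapped phase maps of K calibration planes (K >= 3).
    heights : length-K array of the known plane heights.
    Returns per-pixel coefficient maps a, b, c.
    """
    K, ny, nx = phases.shape
    inv_h = 1.0 / np.asarray(heights, dtype=float)
    a = np.empty((ny, nx)); b = np.empty((ny, nx)); c = np.empty((ny, nx))
    for j in range(ny):
        for i in range(nx):
            phi = phases[:, j, i]
            A = np.stack([np.ones(K), 1.0 / phi, 1.0 / phi**2], axis=1)
            coef, *_ = np.linalg.lstsq(A, inv_h, rcond=None)
            a[j, i], b[j, i], c[j, i] = coef
    return a, b, c

def phase_to_height(phi_u, a, b, c):
    """Convert an unwrapped phase map to a height map using Equation 3.8."""
    return 1.0 / (a + b / phi_u + c / phi_u**2)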
We acquired the structured light images using a commercial instrument available for multispectral bioluminescence and fluorescence imaging (Xenogen Corporation IVIS 200). As described above, we acquired structured light images from three views using the mirror setup described by Chaudhari et al. (2005). To compute the height maps for the side views, we used calibration objects with known heights, as we did when computing the height map for the top view. The phase and the corresponding height maps for the side views are then computed with respect to the reflected reference planes, referred to as R_right and R_left for the right side mirror and left side mirror (Figure 3.6).
3.4 Phase Unwrapping

Since the phase calculation by any inverse trigonometric function provides principal values ranging from -\pi to \pi, in the absence of noise the discontinuities can be corrected easily by adding or subtracting 2\pi at each discontinuity (Equation 3.9). This process is called phase unwrapping. In practice, noise, local shadows, undersampling, fringe discontinuities, irregular surface brightness, and occlusions make phase unwrapping a complicated procedure.

\phi_u(x,y) = \phi_w(x,y) + 2k\pi     (3.9)

Figure 3.6: Optical geometry of the structured light system with the mirror setup used to acquire side views in addition to the top view.
The problem of phase unwrapping arises in a diverse range of inverse problems, including SAR (synthetic aperture radar) profilometry (Curlander and McDonough, 1991), where the phase angle of a pixel is proportional to the terrain height, and in MRI applications such as field mapping, which is often used to remove geometric distortion in EPI (echo-planar imaging) (Munger et al., 2000), chemical shift mapping (Glover and Schneider, 1991) and velocity measurement used in MR angiography (Nayler et al., 1986).

Several methods for phase unwrapping exist in the literature, including Goldstein's branch-cut method (Goldstein et al., 1988), minimum norm methods (Ghigilia and Romero, 1994), Flynn's minimum discontinuity algorithm (Flynn, 1997), and quality-guided algorithms (Lim et al., 1995; Xu and Cumming, 1999; Ghiglia and Pritt, 1998). Goldstein's branch-cut algorithm (Goldstein et al., 1988) relies on residues, which are inconsistencies in the phase data. It identifies these in the wrapped phase data and balances them with branch cuts. A flood fill algorithm is then used to perform path integration around the branch cuts to obtain the final unwrapped phase data. This algorithm is very fast and requires little memory. Its drawback is that it uses the phase data only for finding the residues, which can cause inaccurate results. Flynn's minimum discontinuity algorithm (Flynn, 1997), instead of identifying the residues, finds the discontinuities due to phase wrapping and adds the appropriate multiple of 2\pi to the pixels in continuous regions. The drawback of this algorithm is that its memory and execution times are higher than those of other algorithms. In the minimum norm methods the phase unwrapping problem is formulated as a minimization problem, where the sum of the p-th norm of the differences between the gradients of the solution and those of the measurements is minimized. Special cases of these methods include unweighted and weighted least squares, where the 2nd norm of the error function is minimized. Unweighted least squares produces smooth but corrupted solutions since it unwraps through the residues instead of around them. The quality-guided algorithms are based on the flood fill algorithm, and are guided by a quality map derived from the phase data. The quality map gives a measure of the quality, or goodness, of each phase value in the phase map. The algorithm starts the unwrapping process from a pixel with a high quality phase measure and continues to unwrap pixels beginning with the highest quality pixels and ending with the lowest quality pixels. The quality maps are computed from the phase data using phase derivative variance or correlation coefficient metrics. Comparisons of these methods show that quality-guided algorithms, although requiring more memory than Goldstein's branch-cut algorithm, perform best among the available algorithms when applied to different examples, including SAR interferometry and magnetic resonance wrapped phase data (Ghiglia and Pritt, 1998), as long as a good quality map can be extracted from the phase data.
We use a quality-guided phase unwrapping method for surface profiling (Lim et al., 1995; Xu and Cumming, 1999) which was originally developed for phase unwrapping of SAR interferograms. First we compute a quality map that gives a measure of the reliability of the phase at a particular point in the phase map. We compute this as the phase derivative variance of the phase map, where Δφ^x_{i,j} and Δφ^y_{i,j} are the partial derivatives and Δφ^x_{m,n} and Δφ^y_{m,n} are the averages of these partial derivatives over a window with size l:

QM_{m,n} = [ √( Σ (Δφ^x_{i,j} − Δφ^x_{m,n})² ) + √( Σ (Δφ^y_{i,j} − Δφ^y_{m,n})² ) ] / l²    (3.10)
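As an illustration, a minimal sketch of this quality-map computation is given below, assuming NumPy/SciPy arrays; the function name, the simple border handling, and the final mapping of the derivative variance onto a (0, 1] quality score are our own choices for the sketch rather than the implementation used in this work.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def phase_derivative_variance_quality(wrapped_phase, window=3):
    """Quality map from the phase derivative variance of Equation 3.10 (higher = more reliable)."""
    def wrap(a):
        # Re-wrap phase differences into (-pi, pi]
        return np.angle(np.exp(1j * a))

    dx = wrap(np.diff(wrapped_phase, axis=1, append=wrapped_phase[:, -1:]))
    dy = wrap(np.diff(wrapped_phase, axis=0, append=wrapped_phase[-1:, :]))

    # Local means of the derivatives over the l x l window
    dx_mean = uniform_filter(dx, size=window)
    dy_mean = uniform_filter(dy, size=window)

    # Sums of squared deviations over the window, per Equation 3.10
    sx = uniform_filter((dx - dx_mean) ** 2, size=window) * window ** 2
    sy = uniform_filter((dy - dy_mean) ** 2, size=window) * window ** 2
    variance = (np.sqrt(sx) + np.sqrt(sy)) / window ** 2

    # Assumed normalization: low derivative variance -> quality close to 1
    return 1.0 / (1.0 + variance)
```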
After computing the quality map, we use a region growing method that starts unwrapping from a seed with higher reliability (higher quality) where the phase values are relatively uniform. Then, the phase of each pixel is predicted from the phase of the neighboring pixels and itself using the following procedure:
Suppose there are n_u unwrapped pixels around the pixel under evaluation. A prediction of the correct phase at this pixel is computed based on the average of the unwrapped neighboring pixels:

φ_p = ( Σ_{k=1}^{n_u} φ_k ) / n_u    (3.11)
If the values of the unwrapped pixels around this pixel agree well with each other, the confidence in the phase values of the neighborhood will be high. A measure of the confidence is given by:

d_p = ( Σ_{k=1}^{n_u} |φ_k − φ_p| ) / n_u    (3.12)
If d_p is larger than a certain tolerance t_p, then we do not attempt to unwrap the pixel, the pixel is marked as a bad pixel, and another pixel with high quality is taken into consideration. If this test is passed, we compute the unwrapped phase φ_u for this pixel as:

φ_u = φ_w + 2π nint( (φ_p − φ_w) / 2π )    (3.13)
where φ_w is the wrapped phase and nint corresponds to the nearest integer. Now the deviation of the computed unwrapped phase φ_u from the predicted phase φ_p is evaluated by computing d_u as:

d_u = |φ_u − φ_p|    (3.14)
If d_u is within a certain tolerance t_u, then φ_u is accepted; else the pixel is marked as a bad pixel. We continue unwrapping pixels until all reliable pixels are unwrapped. Afterwards, we search through the phase image and find the pixels that are not unwrapped and place them in a queue with priority given to the ones that have the highest number of unwrapped neighboring pixels. These pixels are then unwrapped using the same procedure described earlier.
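A minimal sketch of the per-pixel test in Equations 3.11-3.14 is given below, assuming NumPy; the quality-ordered seed selection and the queue of remaining pixels are omitted, and the tolerances default to π/2 as in Section 3.6.

```python
import numpy as np

def unwrap_pixel(wrapped, unwrapped, known, i, j, t_p=np.pi / 2, t_u=np.pi / 2):
    """Try to unwrap pixel (i, j) from its already-unwrapped 4-neighbours.
    Returns True on success; False marks the pixel as 'bad' for now."""
    neigh = [(i + di, j + dj) for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
             if 0 <= i + di < wrapped.shape[0] and 0 <= j + dj < wrapped.shape[1]
             and known[i + di, j + dj]]
    if not neigh:
        return False
    phi_k = np.array([unwrapped[p] for p in neigh])
    phi_p = phi_k.mean()                                   # Eq. 3.11: predicted phase
    if np.mean(np.abs(phi_k - phi_p)) > t_p:               # Eq. 3.12: neighbourhood agreement
        return False
    phi_w = wrapped[i, j]
    phi_u = phi_w + 2 * np.pi * np.rint((phi_p - phi_w) / (2 * np.pi))   # Eq. 3.13
    if abs(phi_u - phi_p) > t_u:                           # Eq. 3.14: reject outliers
        return False
    unwrapped[i, j] = phi_u
    known[i, j] = True
    return True
```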
Depending on the selection of the seed point, the whole unwrapped phase map will be accurate up to an offset that is a multiple of 2π. In order to compute the exact phase offset, we use a reference pixel whose height is accurately known:

p_o = nint( [ b(i,j) / (1/h(i,j) − a(i,j)) − φ_u(i,j) ] / 2π )    (3.15)

and the resulting unwrapped phase map becomes φ_u^p = φ_u + 2π p_o.
For the side views this phase offset is computed using the height map of the top view after reflecting it through the mirror equation. Depending on whether we compute the left or the right view, we only reflect the right side or the left side of the top height map.
3.5 Combining height maps from top and side views
After computing the height maps for all views, we used the camera calibration to convert these maps to the coordinate frame defined in Section 3.2. Since we know the geometry of the system and the location of the mirrors, we compute the equation for the mirror plane and use the mirror reflection equation (Equation 3.16, Figure 3.7) to reflect side height maps to their original views in the coordinate frame we defined:

p′(x,y,z) = p(x,y,z) − 2 d_m n⃗    (3.16)
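As an illustration, a minimal sketch of the reflection in Equation 3.16 is given below, assuming NumPy; the function and parameter names are our own, with d_m taken as the signed distance of each point from the mirror plane and n⃗ as the plane normal.

```python
import numpy as np

def reflect_through_mirror(points, plane_point, plane_normal):
    """Reflect 3D points through a mirror plane: p' = p - 2 * d_m * n (Equation 3.16).

    points       : (N, 3) array of points in the object coordinate frame.
    plane_point  : any point lying on the mirror plane.
    plane_normal : normal vector of the mirror plane (need not be unit length).
    """
    n = np.asarray(plane_normal, dtype=float)
    n = n / np.linalg.norm(n)                       # unit normal of the mirror plane
    d_m = (points - plane_point) @ n                # signed distance of each point
    return points - 2.0 * d_m[:, None] * n          # mirrored points
```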
Afterwards, we apply the ICP (Besl and McKay, 1992) algorithm to rigidly register the height maps from the side views to the height map from the top view and obtain the final 3D volume by intersecting the registered height maps.
Figure 3.7: An illustration of the reflection of points due to the mirror plane.
3.6 Experiments
We computed the camera calibration using a calibration grid placed at heights 0cm,
2.2cm and 5.4cm from the reference plane. We picked our calibration grid as large as
possible so that it covers the whole field of view. For every plane we picked 63 points,
which gave a total of 189 points to compute our 16-parameter DLT camera model using the iterative least squares algorithm.
After computing the camera calibration, we calibrated the structured light system using 7 calibration planes including the reference plane, with heights of 1, 2, 3, 4, 5 and 6 cm for the top view, and 9 calibration planes with heights ranging from -1 cm to -8 cm for the side views. We placed our mirror setup such that the distances OR_left and OR_right (Figure 3.6) were 3.6 cm. We used Equation 3.6 to compute the phase difference of each plane with the reference plane and the quality based phase unwrapping algorithm described in Section 3.4 to obtain the true phase map of the calibration planes. For each view, the calibration parameters a, b and c were then computed using Equation 3.8 (Figure 3.8).
Figure 3.8: Calibration parameters a, b and c for the top and side views (columns: top view, left side view, right side view; rows: a, b, c).
We recomputed height maps of the planes with h = 1, 2, 3 and 4 cm for the top view using the calibration parameters and the unwrapped phase maps to see the performance of our calibration (Figure 3.9). We then used test objects, a triangular prism and a plastic mouse, to test the calibration and the quality map based phase unwrapping method for the top view. For the phase unwrapping algorithm we started unwrapping the pixels that have a quality map value greater than 0.9, according to the algorithm explained in Section 3.4. We also picked the average deviation thresholds t_u and t_p as π/2. We used the photographic images acquired during the structured light imaging to mask the objects from the background. The process of computing the height maps from the structured light patterns on the triangular prism and the plastic mouse is shown in Figures 3.10 and 3.11. These results show that we can accurately compute the height maps for the top view.
Figure 3.9: Recomputed reference planes for the top view with heights h = 1, 2, 3 and 4 cm using the calibration parameters.
We then tested our method with the mirror setup by acquiring the structured light patterns for both the top and side views of a test object. In this case, we picked a cylinder as our test object instead of a triangular prism, since addition of the height map information from the side views will have a significant effect on the estimation of the correct shape of the cylinder. This is due to the camera position, which creates occlusions on the lower half of the cylinder in the top view. We acquired top and side views separately; to acquire the structured light pattern for the top view, we covered the mirrors, and to acquire the structured light pattern for the side views, we covered the top view (Figure 3.12a). We then computed the phase (Figure 3.12b) and the quality maps (Figure 3.12c) as described in previous sections. We see that due to its regular shape, the quality maps of the cylinder for all views have high values except around the edges.

Figure 3.10: Height map computation for a triangular prism using the calibration parameters and the structured light pattern: (a) structured light pattern on a triangular prism (b) phase map (c) height map (d) height plots along selected views.
Using the calibration parameters, we then computed the actual height maps for the top
and side views of the cylinder (Figure 3.12d). Then, we transformed these height maps
to the object coordinate frame using the camera calibration parameters. Using the mirror equations for the left and right side mirrors, we reflected the coordinates of the height maps for the side views. Figure 3.13 shows the height map for the left side view (blue), reflected height map for the left side view (red), left part of the surface points of the height map of the top view (black) and surface points of the reflected height map for the left side view (purple).

Figure 3.11: Height map computation for a phantom mouse using the calibration parameters and the structured light pattern: (a) structured light pattern on a plastic mouse (b) phase map (c) height map (d) height plots along selected views.

We then registered the surface points of the reflected height
mapfortheleftsideviewtothesurfacepointsoftheheightmapforthetopviewusing
theICPalgorithmandappliedthesametransformationtotheremaininginnerpointsof
the height map. We applied the same procedure to the right side height map and the
final volume was generated from the intersection of all height maps with a matrix size
of 120x120x50 that corresponds to a voxel size of 1mm
3
. The surface of the resulting
volume was tesselated using the marching cubes algorithm and then smoothed using
a mean curvature flow algorithm as implemented in the BrainSuite software (Shattuck
72
Figure 3.12: Different steps in the generation of the 3D surface of a cylinder using
structured light pattern with the mirror setup: (a) the structured light pattern on the top
and side views of the cylinder (b) wrapped phase map of the top and side views (c)
quality maps of the top and side views (d) height maps and mid section height profile
forthetopandsideviews.
73
and Leahy, 2002). Figure 3.14 shows the comparison of the surface volume of the
cylindergeneratedfromusingonlythestructuredlightpatternofthetopview,andusing
structured light patterns of top and side views. Comparison of these results show that
addition of the height map information of structured light patterns from the side views
has significantly improved the resultant 3D surface profile of the cylinder object. The
side view information helps resolve the ambiguity that is caused due to the occlusions
oftheregionsbecauseoftheshapeofthecylinderandthepositionofthecamera.
Figure 3.13: Blue: height map for the left side view; red: reflected height map for the left side view; black: left part of the surface points of the height map of the top view; magenta: surface points of the reflected height map for the left side view.
We then tested this method on a plastic mouse. The process of computing the height maps from the top and side view structured light patterns on the plastic mouse is shown in Figure 3.15. We see that the quality maps (Figure 3.15c) computed for the top and side views of the plastic mouse can capture the areas which have discontinuities. For the top view, these regions are around the ears and for the side views between the limbs and the body of the animal. At these regions the quality maps have lower values and the region growing based phase unwrapping method unwraps phase around these discontinuities.

Figure 3.14: Comparison of the 3D surface volume of the cylinder generated using (a) only the top view (b) the top and side views.
Figures 3.16 and 3.17 show the resultant surfaces generated using the structured light pattern from only the top view and using the mirror setup. Comparison of these surfaces demonstrates that addition of structured light patterns from the side views helped to generate a smoother and more accurate 3D surface model of the plastic mouse (note the ear shape in Figure 3.17c, which cannot be constructed using only the top view due to occlusion). In order to test the accuracy of our method and to see the effect of the addition of the structured light patterns from the side views in our 3D modeling, we acquired a CT of the plastic mouse and compared our 3D surface profile with the surface we segmented from the CT data. The CT image was reconstructed from two bed positions with 0.2 mm isotropic voxel size and 256x512x256 matrix size. We segmented the surface using thresholding operations and mathematical morphology. We then rigidly registered the skin volume to the volume we generated from structured light patterns with the mirror setup using RView (Studholme et al., 1999). Figure 3.18 shows the overlay of the surface generated from structured light patterns using the mirror setup and the surface from the CT data. In order to assess the performance of our method and the effect of the structured light patterns from the side views, we computed the Dice coefficient (Equation 1.17) between the CT surface and the 3D surface generated from
structured light from the top view, and the Dice coefficient between the CT surface and the 3D surface generated from the structured light patterns using the mirror setup (Table 3.1). Comparing the Dice coefficients, we can conclude that incorporating the height maps obtained from structured light patterns of the side views helped to generate a more accurate 3D surface model of the plastic mouse. However, addition of the side views did not result in a perfect 3D surface model of the plastic mouse (Dice coefficient of 0.8716). One of the reasons for this is that we did not incorporate the bottom view in our model. In addition, errors in camera calibration, structured light system calibration, and registration of the height maps obtained from structured light patterns of the side views with that of the top view have contributed to errors in the generation of our final 3D surface.
CT surface compared with the structured light surface generated:

                 using only top view    using mirror setup
k(S_1, S_2)      0.7585                 0.8716

Table 3.1: Dice coefficients comparing the CT surface with the 3D surface generated from structured light using only the top view and the 3D surface generated from structured light patterns using the mirror setup.
Figure 3.15: Different steps in the generation of the 3D surface of a plastic mouse using the structured light pattern with the mirror setup: (a) the structured light pattern on the top and side views of the mouse (b) wrapped phase map of the top and side views (c) quality maps of the top and side views (d) height maps and mid section height profile for the top and side views.
Figure 3.16: Comparison of surfaces generated using the structured light pattern projected from the top view and using the mirror setup: (a) surface generated using the structured light pattern projected from the top view (b) surface generated using the structured light pattern projected by the mirror setup.

Figure 3.17: Comparison of surfaces generated using the structured light pattern projected from the top view and using the mirror setup; green: using only the top view, purple: using the mirror setup, (a) left side view (b) overlay of the surfaces (c) right side view.
Figure 3.18: Comparison of surfaces generated from CT with the one from the structured light pattern with the mirror setup; gray: 3D surface using the structured light pattern with the mirror setup, purple: 3D surface generated from CT data.
Chapter 4

Automatic Segmentation of Images with Tumor Pathology
One of the main applications of the Digimouse is to automatically segment and label anatomical structures in mouse images as shown in Section 1.2.4, where automatic labeling is performed using nonrigid warping of the CT of the Digimouse to the CT of the template mouse and applying this transformation to the digital atlas. However, this application is limited to mice with normal anatomy, where the mapping of the organs in the atlas to the template mouse is constrained to be one-to-one. With a major focus in many small animal studies on animal models with pathologies and abnormalities in the anatomy of the animal, a challenge will be to register atlases to images containing lesions and other structural abnormalities. In this chapter, I present preliminary results on how such abnormalities can be integrated into atlas matching procedures in such a way that we also model changes in tissue in the vicinity of lesions.
Cuadra et al. (2004) used a tumor growth model to segment pathological brains using a digital brain atlas, where they assumed a radial expansion model of tumor growth. They seeded a synthetic tumor, centered on the centroid of the patient's lesion, into the brain atlas and deformed this seeded atlas to the patient image combining a method derived from optical flow and a model of lesion growth. Kyriacou et al. (1999) first estimated the anatomy prior to the tumor growth using a simulated biomechanical contraction of the tumor region assuming a uniform growth model for the tumor. Then they applied the normal-to-normal atlas registration to this pre-tumor anatomy. Finally the tumor growth model was applied to the resultant registered atlas to produce a segmented brain image with tumor.
Our approach to this problem is to apply a viscous fluid registration method between the atlas and the template images, which allows large deformations. Prior to registration we assumed a uniform growth model for the tumor and estimated its initial stage in the template image using a level set implementation. Then we used this initial tumor model to seed our atlas and applied the viscous fluid based nonrigid registration algorithm to model the nonrigid registration of the atlas image to the template with tumor pathology. The next sections give a brief overview of viscous fluid registration and the level set method. Then, I provide a simulation example where I have implemented both methods to guide our registration of the atlas image to the template image with tumor pathology.
4.1 Viscous Fluid Registration
As mentioned in Section 1.1.4, physical continuum models can be used to model flexible deformations (many degrees of freedom) by embedding the deforming image in a deformable medium which is either an elastic material or a viscous fluid. Among these models, the elastic medium can model only small deformations. The viscous fluid approaches, on the other hand, are more flexible and can accommodate large-distance nonrigid deformations. Here, I will describe the concepts of the viscous fluid registration which was developed by Christensen et al. (1996). I use the same notation as in Chapter 1, where the template image is denoted as T and the target image is denoted as S. u(x) is the deformation field that transforms the template image T to the target image S according to a similarity measure defined between these images.
As described in Section 1.1.4, the partial differential equation for the viscous fluid deformation of the template image in the Eulerian reference frame is formulated as:

μΔv(x) + (λ+μ)∇(∇·v(x)) + F(x) = 0    (4.1)
where Δ = ∇² = ∇·∇ is the Laplacian operator, μΔv is the viscous term in the partial differential equation and is associated with the constant volume viscous flow of the template. The second term ∇(∇·v) is the gradient of the divergence operator and, along with the viscosity coefficients λ and μ, the term (λ+μ)∇(∇·v) allows the expansion and the shrinkage of regions in the template. F(x) is the external force that drives the deformation and it is computed as the derivative of the cost function, which is the similarity measure between the images. For a Gaussian sensor model, the cost function is defined as:
C(T,S,u) = (1/2) ∫_Ω |T(x − u(x,t)) − S(x)|² dx    (4.2)
yielding the body force as its derivative:

F(x, u(x,t)) = −( T(x − u(x,t)) − S(x) ) ∇T|_{x − u(x,t)}    (4.3)
where the relation between the velocity and the displacement field in the Eulerian reference frame is given as follows:

v(x,t) = ∂u(x,t)/∂t + ∇u(x,t) v(x,t)    (4.4)
The term ∇u(x,t)v(x,t) is called the Eulerian derivative and results from the chain rule of differentiation of the velocity field.
The registration problem requires solving the viscous fluid PDE (Equation 4.1) for the velocity field and the Eulerian velocity field equation (Equation 4.4) for the deformation field. Numerically, the viscous fluid PDE is linear and hence can be written as:

Kv = μΔv + (λ+μ)∇(∇·v) = f    (4.5)
It is solved by forming the matrix K, which is known as the stiffness matrix, by approximating the Laplacian and the gradient of the divergence terms by finite differences or finite elements on a discrete lattice (Nielsen, 2003). Using finite differences, in three-dimensional space, the Laplacian of a velocity field v = (v_1(i,j,k), v_2(i,j,k), v_3(i,j,k)) can be approximated using the second order partial derivatives:
∇²v = ∂²v/∂x² + ∂²v/∂y² + ∂²v/∂z²    (4.6)
where the second order partial derivative is approximated by Taylor's expansion using a central difference formula for a uniform mesh of spacing h and grid nodes (i,j,k):
∂²v_1/∂x² = [ v_1(i+1,j,k) − 2v_1(i,j,k) + v_1(i−1,j,k) ] / h² + O(h²)    (4.7)
Adding the similar approximations for ∂²v/∂y² and ∂²v/∂z², the Laplacian of the velocity field is approximated as follows:

∇²v = (1/h²) [ v(i+1,j,k) + v(i−1,j,k) + v(i,j+1,k) + v(i,j−1,k) + v(i,j,k+1) + v(i,j,k−1) − 6v(i,j,k) ] + O(h²)
The gradient of the divergence of the velocity field, after defining the divergence as θ = ∇·v, is approximated as:

∇(∇·v) = ∇θ = ∇( ∂v_1/∂x + ∂v_2/∂y + ∂v_3/∂z )
        = ( ∂²v_1/∂x² + ∂²v_2/∂x∂y + ∂²v_3/∂x∂z ,
            ∂²v_1/∂x∂y + ∂²v_2/∂y² + ∂²v_3/∂y∂z ,
            ∂²v_1/∂x∂z + ∂²v_2/∂y∂z + ∂²v_3/∂z² )
Using Taylor's expansion for the mixed derivative,

∂²v_1/∂x∂y = (1/4h²) [ v_1(i+1,j+1,k) + v_1(i−1,j−1,k) − v_1(i+1,j−1,k) − v_1(i−1,j+1,k) ] + O(h²),

the first component of the gradient of the divergence of the velocity field can be discretized as:
∂θ/∂x = (1/4h²) [ 4v_1(i+1,j,k) + 4v_1(i−1,j,k) − 8v_1(i,j,k) + v_2(i+1,j+1,k) − v_2(i−1,j+1,k) + v_2(i−1,j−1,k) − v_2(i+1,j−1,k) + v_3(i+1,j,k+1) − v_3(i−1,j,k+1) + v_3(i−1,j,k−1) − v_3(i+1,j,k−1) ] + O(h²)
The same discretization scheme is also used to approximate ∂θ/∂y and ∂θ/∂z. With these approximations, the stiffness matrix K becomes large and sparse, and typically successive over-relaxation or other iterative methods are used to solve for the velocity field in the Navier-Stokes equation.
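For concreteness, a minimal sketch of how the discretized operator μΔv + (λ+μ)∇(∇·v) can be applied to a velocity field on a regular grid is given below, assuming NumPy; applying the operator matrix-free like this (rather than explicitly forming K), and the periodic boundary handling via np.roll, are our own simplifications for the illustration.

```python
import numpy as np

def fluid_operator(v, mu=1.0, lam=0.0, h=1.0):
    """Apply K v = mu * Laplacian(v) + (lam + mu) * grad(div(v)) on a 3D grid.

    v : array of shape (3, nx, ny, nz) holding the components (v1, v2, v3).
    """
    def d(f, axis):            # central first difference
        return (np.roll(f, -1, axis) - np.roll(f, 1, axis)) / (2 * h)

    def d2(f, axis):           # central second difference
        return (np.roll(f, -1, axis) - 2 * f + np.roll(f, 1, axis)) / h ** 2

    lap = np.stack([d2(v[c], 0) + d2(v[c], 1) + d2(v[c], 2) for c in range(3)])
    div = d(v[0], 0) + d(v[1], 1) + d(v[2], 2)            # theta = div(v)
    grad_div = np.stack([d(div, 0), d(div, 1), d(div, 2)])
    return mu * lap + (lam + mu) * grad_div
```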
The Eulerian velocity field, on the other hand, is approximated by Euler integration over time using the forward finite difference scheme as:

u(x, t_{n+1}) = u(x, t_n) + (t_{n+1} − t_n) (I − ∇u(x, t_n)) v(x, t_n)    (4.8)

where (I − ∇u(x, t_n)) = ∇h(x, t_n).
The stability of this updating scheme depends on the transformation ∇h. To have a well conditioned homeomorphic (continuous, 1-1 and onto) transformation, the Jacobian J = |∇h| (Equation 4.9) of the transformation h = (h_1, h_2, h_3) has to be positive.
J = |∇h| = det [ ∂h_1/∂x  ∂h_1/∂y  ∂h_1/∂z ;  ∂h_2/∂x  ∂h_2/∂y  ∂h_2/∂z ;  ∂h_3/∂x  ∂h_3/∂y  ∂h_3/∂z ]    (4.9)
In addition, because of the discretization, large curved deformations can render the transformation singular. To avoid this problem, the new template image is resampled using the current deformation field when J < 0.5, as described in (Christensen et al., 1996). Subsequently, the deformation field u is set to zero and the velocity field v is kept constant for the next iteration. The complete algorithm to find the deformation field according to the viscous fluid registration is described as follows:
1. Compute F(x, u(x, t_n)) using Equation 4.3.
2. If F(x, u(x, t_n)) is below some threshold then STOP.
3. Solve Navier's PDE in Equation 4.1 using F(x, u(x, t_n)) and v(x, t_n).
4. Choose a time step t_{n+1} − t_n such that ∇h(x, t_n) v(x, t_n) is smaller than the maximal flow (which we picked as 0.8).
5. Perform the Euler integration using Equation 4.8.
6. If the Jacobian J = |∇h(x, t_n)| is less than 0.5, then resample the template.
7. Set n = n + 1 and go to Step 1.
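A compact sketch of this loop is given below, assuming NumPy/SciPy and the fluid_operator sketch above; the warp helper, the conjugate-gradient solve, the force threshold, the approximation of ∇T|_{x−u} by the gradient of the warped template, and the omission of the regridding in Step 6 are all simplifications of the procedure described in Steps 1-7 rather than the exact implementation.

```python
import numpy as np
from scipy.ndimage import map_coordinates
from scipy.sparse.linalg import LinearOperator, cg

def warp(img, u):
    """Sample T(x - u(x)) with trilinear interpolation; u has shape (3, *img.shape)."""
    grid = np.indices(img.shape, dtype=float)
    return map_coordinates(img, grid - u, order=1, mode='nearest')

def fluid_register(T, S, n_iter=200, max_flow=0.8, mu=1.0, lam=0.0, f_tol=1e-3):
    """Sketch of the viscous fluid registration loop (Steps 1-7, Equations 4.1-4.8)."""
    shape = T.shape
    u = np.zeros((3,) + shape)                        # deformation field
    n_vox = 3 * T.size
    # Matrix-free stiffness operator; the minus sign makes it positive semi-definite for CG
    K = LinearOperator((n_vox, n_vox), dtype=float,
                       matvec=lambda x: -fluid_operator(x.reshape((3,) + shape),
                                                        mu, lam).ravel())
    for _ in range(n_iter):
        Tw = warp(T, u)
        F = -(Tw - S) * np.stack(np.gradient(Tw))     # Step 1: body force, Eq. 4.3
        if np.linalg.norm(F) < f_tol:                 # Step 2: stopping criterion
            break
        v, _ = cg(K, F.ravel(), maxiter=50)           # Step 3: solve Eq. 4.1 for v
        v = v.reshape((3,) + shape)
        du = np.stack(np.gradient(u, axis=(1, 2, 3)))           # du[i, c] = d u_c / d x_i
        incr = v - np.einsum('ic...,i...->c...', du, v)         # (I - grad u) v = grad(h) v
        dt = max_flow / (np.abs(incr).max() + 1e-12)  # Step 4: limit the maximal flow
        u = u + dt * incr                             # Step 5: Euler integration, Eq. 4.8
        # Step 6 (omitted): if det(grad h) < 0.5, resample the template through u and reset u.
    return u
```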
4.2 Level Sets
The level set method is a numerical technique for tracking interfaces and shapes (Sethian, 1999; Malladi et al., 1995). It has been widely used in the areas of image processing (segmentation, image denoising, image enhancement), computer vision (shape detection and recognition), fluid mechanics, combustion and the manufacturing of computer chips. The advantage of the level set method is that numerical computations involving curves and surfaces on a fixed cartesian grid can be performed without parameterization (Eulerian approach). In 2D, this is done by embedding the initial closed interface as the zero level set of a higher dimensional function Θ. When the interface moves in the normal direction to itself with a speed function v, this movement can be described as a Hamilton-Jacobi equation for the level set function (Equation 4.10). This partial differential equation can be solved numerically on a discrete grid by approximating the spatial and temporal partial derivatives using finite differences. At any time t, the desired propagated curve corresponds to the zero level set of the function.
∂Θ/∂t + v|∇Θ| = 0    (4.10)
There are many advantages of using level sets to track interfaces. First of all, the extension of level sets to 3D or higher dimensions is straightforward. Due to its construction, topological changes in the evolving curve are accounted for automatically. Moreover, intrinsic properties, such as the curvature or the normal of the interface, are easily computed from the embedded function. One obvious choice for this level set function is the signed distance function, which gives the distance and the sign to the level set. If the grid point is inside the interface, it is assigned a negative distance; if it is outside, it is assigned a positive distance.
Assume the given closed curve is evolving according to two speeds: first it is expanding or contracting with a constant speed v_o, and second it is moving with a speed proportional to its curvature. Using level sets, this entire motion can be described as:

Θ_t + (v_o − κ)|∇Θ| = 0    (4.11)
where the curvature κ is estimated from the level set function as the divergence of the normal:

n⃗ = ∇Θ/|∇Θ| = (Θ_x, Θ_y) / (Θ_x² + Θ_y²)^{1/2}

κ = ∇·n⃗ = ∇·( ∇Θ/|∇Θ| ) = ( Θ_xx Θ_y² − 2Θ_x Θ_y Θ_xy + Θ_yy Θ_x² ) / (Θ_x² + Θ_y²)^{3/2}    (4.12)
The corresponding PDE for this level set is solved using a first order approximation of Equation 4.10 with the upwind scheme:
Θ^{n+1}_{ij} = Θ^n_{ij} + Δt [ −( max(v_{o,ij}, 0) ∇⁺ + min(v_{o,ij}, 0) ∇⁻ ) + κ^n_{ij} ( (D^{0x}_{ij})² + (D^{0y}_{ij})² )^{1/2} ]    (4.13)
where ∇⁺ and ∇⁻ are defined as:

∇⁺ = [ max(D^{−x}_{ij}, 0)² + min(D^{+x}_{ij}, 0)² + max(D^{−y}_{ij}, 0)² + min(D^{+y}_{ij}, 0)² ]^{1/2}

∇⁻ = [ max(D^{+x}_{ij}, 0)² + min(D^{−x}_{ij}, 0)² + max(D^{+y}_{ij}, 0)² + min(D^{−y}_{ij}, 0)² ]^{1/2}
In order to have a stable evolution of the level set function, we require the Courant-Friedrichs-Lewy (CFL) condition, which requires the front to cross no more than one grid cell each time step. Thus we should pick the time step according to Δt ≤ Δx / max_Ω v.
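As an illustration, a minimal 2D sketch of one upwind update step (Equations 4.11-4.13) is given below, assuming NumPy; the finite-difference stencils, the periodic boundary handling via np.roll, the CFL-limited time step, and the function name are our own choices for the sketch.

```python
import numpy as np

def level_set_step(theta, v0, dx=1.0):
    """One upwind update of theta_t + (v0 - kappa)|grad(theta)| = 0 (Eqs. 4.11-4.13)."""
    # One-sided and central differences
    Dmx = (theta - np.roll(theta, 1, axis=1)) / dx     # D^{-x}
    Dpx = (np.roll(theta, -1, axis=1) - theta) / dx    # D^{+x}
    Dmy = (theta - np.roll(theta, 1, axis=0)) / dx     # D^{-y}
    Dpy = (np.roll(theta, -1, axis=0) - theta) / dx    # D^{+y}
    D0x, D0y = (Dpx + Dmx) / 2.0, (Dpy + Dmy) / 2.0

    # Upwind gradient magnitudes (the nabla+ and nabla- terms)
    grad_plus = np.sqrt(np.maximum(Dmx, 0)**2 + np.minimum(Dpx, 0)**2 +
                        np.maximum(Dmy, 0)**2 + np.minimum(Dpy, 0)**2)
    grad_minus = np.sqrt(np.maximum(Dpx, 0)**2 + np.minimum(Dmx, 0)**2 +
                         np.maximum(Dpy, 0)**2 + np.minimum(Dmy, 0)**2)

    # Curvature from central differences (Equation 4.12)
    Dxx = (np.roll(theta, -1, 1) - 2 * theta + np.roll(theta, 1, 1)) / dx**2
    Dyy = (np.roll(theta, -1, 0) - 2 * theta + np.roll(theta, 1, 0)) / dx**2
    Dxy = (np.roll(np.roll(theta, -1, 1), -1, 0) - np.roll(np.roll(theta, -1, 1), 1, 0)
           - np.roll(np.roll(theta, 1, 1), -1, 0) + np.roll(np.roll(theta, 1, 1), 1, 0)) / (4 * dx**2)
    kappa = (Dxx * D0y**2 - 2 * D0x * D0y * Dxy + Dyy * D0x**2) / (D0x**2 + D0y**2 + 1e-12)**1.5

    # CFL-limited time step and the update of Equation 4.13
    dt = 0.5 * dx / (np.abs(v0) + np.abs(kappa)).max()
    return theta + dt * (-(np.maximum(v0, 0) * grad_plus + np.minimum(v0, 0) * grad_minus)
                         + kappa * np.sqrt(D0x**2 + D0y**2))
```

For the contraction used in Section 4.3, v = −1 − 0.2κ, one would set v0 = −1 and weight the curvature term above by 0.2.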
4.3 Experiments
This section presents simulation experiments where we implemented the viscous fluid algorithm to register an atlas image to a template image with tumor pathology. We started by seeding the atlas with tumor-like intensity. In order to create this seed, we first segmented the tumor manually on the template image and assumed a uniform growth model for the tumor. To estimate its earlier stage, we implemented the level set method described in Section 4.2 and used a v = −1 − 0.2κ speed function to find the seed of the tumor. This seed size can be easily controlled with the level set function, where the number of pixels with a negative distance to the boundary can be easily detected. An example of a more complex tumor image, along with the behavior of the tumor under this speed function, is illustrated in Figure 4.1. Figure 4.2 shows the images of the atlas and the template with tumor pathology in the lungs as well as the segmented tumor and
its earlier stage. Figure 4.3 shows the evolution of this tumor from its final stage to its initial stage using the level set method with the assumption of a uniform tumor growth model. We then implemented the viscous fluid algorithm described in the previous section to model the deformation between the seeded atlas and the template images (Figure 4.4). We used the viscosity parameters μ = 1 and λ = 0. Figure 4.5 shows the norm of the force, which determines the stopping criterion, vs. iteration. It is seen that this metric decreases with iteration as expected.

Figure 4.1: Simulation of a contraction of a complex tumor with v = −1 − 0.2κ.

Figure 4.6 shows the deformation field along with the determinant of the Jacobian matrix defined in Equation 4.9. The Jacobian gives an idea of the deformation; regions with J(u) > 0 correspond to local expansion and regions with J(u) < 0 correspond to local contraction. We see that around the tumor area J(u) > 0, which indicates a local expansion.
Figure 4.2: Seeding of the atlas image with an earlier stage of the tumor, estimated using level sets with the assumption of a uniform growth model.

Figure 4.3: Simulation of a contraction of the tumor in the template image with v = −1 − 0.2κ.

Figure 4.4: Viscous fluid registration of the seeded atlas image to the template image with tumor pathology.

Figure 4.5: The norm of the force that drives the viscous fluid registration vs. iteration.

Figure 4.6: The deformation field and the determinant of the Jacobian of the deformation field at each grid point.
Chapter 5

Segmentation of Skull and Scalp in 3D Human MRI Using Mathematical Morphology
Solutions to magnetoencephalography (MEG) and electroencephalography (EEG)
inverse problems require realistic models of the head for use in accurate computation
of the mapping from neural current sources to scalp potentials and extra-cranial mag-
netic fields (Mosher et al., 1999). Since the conductivity of skull is significantly lower
than that of soft tissue, it is crucial that bone regions be included in the head model
(Hamalainen and Sarvas, 1989; Oostendorp and van Oosterom, 1989). Because of the
existence of closed form solutions for the MEG and EEG forward problem, multilayer
spherical models have traditionally been used to approximate the human head with a set
of nested spheres representing brain, skull, and scalp (Ary et al., 1981; Sarvas, 1987;
Zhang, 1995). Recently, representations of the head as a set of contiguous regions
bounded by surface tessellations of the scalp, outer skull, inner skull, and brain bound-
aries have been used to provide more realistic models (Hamalainen and Sarvas, 1989;
Schlitt et al., 1995). Using boundary element methods in conjunction with these models produces more accurate results than the multilayer spherical model but requires that a volumetric image of the subject's head first be segmented into its component bone and soft tissue regions.
CT and MRI serve different functions in biomedical imaging. CT exhibits better
bone definition than is seen with MRI; however, it does not provide good contrast in
soft tissue. In addition, CT is not a preferred modality for routine anatomical imaging
of structures in the head because of the exposure of the subject to ionizing radiation.
Conversely, MRI does an excellent job of differentiating soft tissues, including cerebral cortex and blood vessels, but it does not provide accurate detail of bone structures, such as the skull, because of the weak magnetic resonance signals produced in bone. Segmentation of skull from MRI thus presents a challenging problem. The development of methods for segmenting skull and scalp from MRI enables the production of patient-specific models of scalp, skull, and brain from a single, high-resolution MRI of the complete head.
In the last chapter of this thesis, we present a new technique for segmentation of skull and scalp in T1-weighted magnetic resonance images (MRI) of the human head. Our method uses mathematical morphological operations to generate realistic models of the skull, scalp, and brain that are suitable for electroencephalography (EEG) and magnetoencephalography (MEG) source modeling. We first segment the brain using our Brain Surface Extractor algorithm; using this, we can ensure that the brain does not intersect our skull segmentation. We next generate a scalp mask using a combination of thresholding and mathematical morphology. We use the scalp mask in our skull segmentation procedure, as it allows us to automatically exclude background voxels with intensities similar to those of skull. We find the inner and outer skull boundaries using thresholding and morphological operations. Finally, we mask the results with the scalp and brain volumes to ensure closed and non-intersecting skull boundaries. Visual evaluation indicated accurate segmentations of the cranium at a gross anatomical level (other than small holes in the zygomatic bone in 8 subjects) in all 44 MRI volumes processed when run using default settings. In a quantitative comparison with co-registered CT images as a gold standard, MRI skull segmentation accuracy, as measured using the Dice coefficient, was found to be similar to that which would be obtained using CT imagery with a registration error of 2 mm to 3 mm.
5.1 Related Work
Though numerous MRI segmentation techniques are described in the literature, there is little research dedicated to the problem of segmenting skull in MRI, as CT has typically been used for this purpose. Classification of skull in MRI has often been a by-product of classification techniques designed to categorize brain tissue. Held et al. (1997) used a Markov random field (MRF) approach to classify MRI data into gray matter, white matter, CSF, scalp-bone, and background. This method does not guarantee continuous bounding contours, and it was not developed for the purpose of segmentation of the skull in MRI. Chu and Takaya (1993) detected the skin-skull and skull-brain boundaries using thresholding and Laplacian of Gaussian (LoG) operations on successive transverse slices. Congorto et al. (1996) and Belardinelli et al. (2003) used neural network based approaches to segment skull and brain, also from successive T1 MRI slices. Another technique was introduced by Heinonen et al. (1997), who used thresholding and region growing to segment bone in MRI volumes. Haque et al. (1998) also used simple thresholding and manual segmentation to detect the boundaries of skull and scalp to create an average head model for forward modeling in MEG. Performing the segmentation on individual slices does not allow the method to exploit the connected 3D structure of the skull. With thresholding and region growing methods, the segmentation of certain bone regions, such as the ocular globes, is difficult because of partial volume effects. Rifai et al. (1999) applied a deformable model to segment skull from MRI. Deformable models can be attracted to incorrect boundaries, resulting in the potential inclusion of skin, muscles, eyes, and inner ear in the segmented skull.
Techniques have also been developed that make use of information from multiple modalities. Studholme et al. (1996) used CT information for segmentation of skull in co-registered MRI of the same subject. Soltanian-Zadeh and Windham (1997) applied a multi-scale approach where they made use of both CT and MRI information. The choice of an approach that requires both CT and MRI, while attractive in providing accurate detail of both skull and soft tissue, is not generally practical since acquisition using both modalities is rarely performed in either volunteer or clinical studies. Wolters et al. (2002) note that the inner skull boundary can be determined more accurately in proton density (PD) than in T1-weighted MRI since the CSF produces a stronger signal than skull in the former case, while the two are barely distinguishable in the latter. Using co-registered PD and T1-weighted images, they produce a segmentation based on adaptive fuzzy c-means clustering and extended region growing (Wolters, 2003). For solving the forward problem in EEG, these results are then extended to include anisotropy in the conductivity of the skull using diffusion tensor imaging to estimate the conductivity tensor (Haueisen et al., 2002). The forward problem is then solved using a finite element method. Akahn et al. (2001) also use PD and T1 images to segment skull, scalp, CSF, eyes, and white and gray matter with a hybrid algorithm that uses snakes, region growing, morphological operations and thresholding. While these approaches are attractive, and the use of PD images will lead to improved segmentation of skull, in the vast majority of brain imaging studies only high resolution T1-weighted images are collected. Consequently, there is a need for a robust approach to segmentation of skull and scalp from T1-weighted images only.
There are a limited number of software packages that offer skull segmentation from T1-weighted MRI. These include the Brain Extraction Tool (BET) produced by Smith et al. (2001), ANATOMIC produced by Heinonen et al. (1998), and BRain Image ANalysis (BRIAN) produced by Kruggel and Lohmann (1996). In BET, the brain is first segmented from the MRI and then the outer skull boundary is found by searching outwards from the brain surface and determining the maximum gradient in intensity combined with an ad hoc thresholding procedure. In ANATOMIC, skull is segmented using thresholding and region growing as described above. Other software packages, such as ANALYZE and CURRY, can also be used to segment skull and scalp in T1 MRI. However, the segmentation procedures are not automatic. These methods require a user-defined sequence of thresholding and application of included morphological tools to obtain the skull and scalp boundaries.
Most previous studies have not provided quantitative validation of methods for segmenting skull in MRI. Instead, many relied upon the analysis of medical doctors to qualitatively approve their results. However, skull is not a very apparent structure in T1-weighted MRI, and even for trained clinical experts the determination of the boundaries of the skull within this modality is difficult.

In this chapter, we describe our approach to this problem, which we base on a sequence of thresholding and morphological operations. We also present a quantitative validation study of our method, which uses co-registered MRI-CT data to evaluate our results.
5.2 Obtaining Anatomical Surfaces with Morphological Processing
Accurate segmentation of skull in MRI is difficult because the magnetic resonance signal from bone is weak and often indistinguishable from air; it is further complicated by the topology of the skull. Certain parts of the skull, such as the ocular globes and portions of the upper region of the skull, are thin compared to the resolution currently available in MRI. This problem is confounded by partial volume effects that tend to reduce the apparent thickness of the skull. In regions in which bone and air are separated by thin layers of tissue (e.g., sinuses, external auditory canals) it is often difficult to differentiate between air and bone voxels (Rifai et al., 1999).
Binary mathematical morphology is an algebraic system based on set theory that provides two basic operations: dilation and erosion. Combinations of these operations enable underlying object shapes to be identified and reconstructed from their noisy distorted forms. A combination of dilation and erosion gives rise to two additional operations, opening and closing, which form the basis for most morphological processing (Nadadur and Haralick, 2000). Morphological approaches have already been widely used in segmentation of magnetic resonance imagery (Bomans et al., 1990; Brummer et al., 1993; Shattuck et al., 2001).
Our method performs skull and scalp segmentation using a sequence of morphological operations. We first segment the brain using our Brain Surface Extractor (Sandor and Leahy, 1997; Shattuck et al., 2001; Shattuck and Leahy, 2002). We exclude the brain from the image volume to obtain an estimate of the intensity distribution of the scalp and the skull regions. From this, we compute a threshold that we use to perform an initial segmentation of the scalp, which we refine with mathematical morphology. When we segment the skull, we use the scalp segmentation to distinguish between dark voxels that may be skull or air, since these areas will have overlapping intensity distributions and may appear connected through the sinuses. We also use the brain mask to ensure that our skull boundary does not intersect the brain boundary. Figure 5.1 provides a complete block diagram of our algorithm for segmenting scalp and skull using morphological operations.
Figure 5.1: Block diagram of our algorithm for scalp and skull segmentation using morphological operations. On completion, we apply additional masking operations to ensure that scalp, outer skull, inner skull, and brain masks do not intersect each other.
5.2.1 Mathematical Morphology
Morphological operators use a binary image and a structuring element as input and combine them using intersection, union, inclusion or complement operators. We use three basic structuring elements: the cube, the 3D cross, and the 3D octagon. These elements are shown in Figure 5.2, along with their corresponding 2D structuring elements. The 3D octagon, which we denote by O_2, is an approximation of a sphere and is defined as
the dilation of the 3D cross by the 3D cube. This is analogous to an octagon shape in 2D, which is produced by the dilation of a 3x3 square by a 3x3 cross.

Figure 5.2: Basic structuring elements used in our algorithm: cube (C_1), 3D cross (R_1), 3D octagon (O_2), and their corresponding 2D structuring elements: square, cross and octagon.
Dilation or erosion by larger structuring elements is performed by applying the operation multiple times with a smaller structuring element. For instance, if we want to apply a dilation operation using C_4, a cubic structuring element of radius 4, to some object S, we can achieve this by applying dilation 4 times using C_1, a cube of radius 1:

S_dilated = S ⊕ C_4 = S ⊕ C_1 ⊕ C_1 ⊕ C_1 ⊕ C_1    (5.1)
We use the following mathematical symbols for the morphological operators in the description of our algorithm: ⊕ for dilation, ⊖ for erosion, ◦ for opening, • for closing, and ⊛ for a special form of closing operation with hole filling, described in detail below.
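To make these structuring-element conventions concrete, a small sketch using SciPy's binary morphology is given below; the octagon-as-cross-dilated-by-cube construction and the decomposition of a radius-4 dilation into four radius-1 dilations follow the definitions above, while the helper names (C1, R1, O2, dilate) are our own and are reused in the later sketches.

```python
import numpy as np
from scipy.ndimage import binary_dilation, generate_binary_structure

C1 = np.ones((3, 3, 3), dtype=bool)                  # cube of radius 1
R1 = generate_binary_structure(3, 1)                 # 3D cross (6-connected neighbourhood)
O2 = binary_dilation(np.pad(R1, 1), structure=C1)    # 3D octagon: cross dilated by cube

def dilate(mask, structure, radius):
    """Dilation by a radius-r element realised as r dilations by the radius-1 element (Eq. 5.1)."""
    for _ in range(radius):
        mask = binary_dilation(mask, structure=structure)
    return mask

# Example: S dilated by C_4, i.e. S_dilated = dilate(S, C1, 4)
```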
5.2.2 Segmentation of Brain

We segment the brain using the Brain Surface Extractor (BSE) software, v.3.0 (Sandor and Leahy, 1997; Shattuck et al., 2001; Shattuck and Leahy, 2002). BSE identifies the brain using anisotropic diffusion filtering, Marr-Hildreth edge detection, and morphological operations. Pre-processing with an anisotropic diffusion filter improves the edge definitions in the MRI by smoothing non-essential gradients in the volume without blurring steep edges. The Marr-Hildreth edge detector identifies important anatomical boundaries such as the boundary between brain and skull. The brain is identified and refined using a sequence of morphological and connected component operations. The output of BSE is a binary mask denoting brain and non-brain.
5.2.3 Skull and Scalp Thresholds

We compute initial segmentations for the skull and scalp using thresholding with values computed from the image after brain extraction. The voxels that are not labeled as brain are typically either low intensity regions (the background, some parts of CSF and skull) or higher intensity regions (fat, skin, muscle and other soft tissues). We compute an empirical skull threshold as the mean of the intensities of the non-zero voxels that are not brain. We define this set as

X_NB = {k : k ∈ V\B, m_k > 0},    (5.2)

where V is a set of spatial indices referencing the entire MRI volume, B is the set of voxels identified by BSE as brain, V\B is the set of voxels in V excluding B, and m_k is the k-th voxel in the MRI volume. The threshold is then
t_skull = (1/|X_NB|) Σ_{j∈X_NB} m_j,    (5.3)

where m_j is the intensity of the j-th voxel in V, the MRI, and X_NB is the set of non-zero and non-brain voxels.
The scalp surface is the interface between the head and the background in the image. Its transition is quite sharp, thus our primary concern is to find an appropriate threshold that removes noise in the background. We compute the scalp threshold as the mean of the non-brain voxels that are at or above t_skull,

t_scalp = (1/|X_NS|) Σ_{j∈X_NS} m_j,    (5.4)

where m_j is the intensity of the j-th voxel in V, and X_NS is defined as

X_NS = {k ∈ X_NB : m_k ≥ t_skull}.    (5.5)
These thresholds give the algorithm an initial estimate of the range of the skull and
scalp intensities. In our implementation of the algorithm, these thresholds may also be
adjusted by the user to improve the segmentation.
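A minimal sketch of the two threshold computations (Equations 5.2-5.5), assuming NumPy arrays for the MRI volume and the BSE brain mask, is given below; the function and variable names mirror the notation above but are our own.

```python
import numpy as np

def skull_scalp_thresholds(mri, brain_mask):
    """Empirical skull and scalp thresholds from the non-brain voxels (Eqs. 5.2-5.5).

    mri        : 3D array of MRI intensities (the volume V).
    brain_mask : 3D boolean array, True for voxels labeled as brain by BSE (the set B).
    """
    non_brain = (~brain_mask) & (mri > 0)      # X_NB: non-zero, non-brain voxels
    t_skull = mri[non_brain].mean()            # Eq. 5.3
    bright = non_brain & (mri >= t_skull)      # X_NS: non-brain voxels at or above t_skull
    t_scalp = mri[bright].mean()               # Eq. 5.4
    return t_skull, t_scalp
```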
5.2.4 Scalp Segmentation

Thresholding the MRI with t_scalp produces a set of foreground voxels described by

X_t_scalp = {k ∈ V : x_k ≥ t_scalp},    (5.6)

where t_scalp is the scalp threshold and x_k is the k-th voxel value (see Fig. 5.3b). After thresholding, the volume will still contain background voxels that result from noise. It will also have cavities in the head that correspond to regions of tissue that have low intensity values such as CSF. These cavities will often be connected to the background through the sinuses or auditory canals. To address these problems, we apply a modified
Figure 5.3: Different stages of scalp segmentation: (a) slice in the initial MRI volume (b) image after applying a threshold, X_t_scalp (c) dilated image (d) image with cavities filled (e) eroded image (f) largest foreground component.
morphological closing operation. A traditional closing operation consists of a dilation followed by an erosion, both of which use the same structuring element. Our closing operation, which we denote by ⊛, performs a hole filling operation between dilation and erosion; this will fill any cavities that are disjoint from the background after dilation. We obtain the scalp volume by applying our modified closing operation to X_t_scalp, the thresholded volume, using a structuring element O_2 (see Fig. 5.3c-e):

X_t_scalp_filled = X_t_scalp ⊛ O_2.    (5.7)
Since the connection of the auditory canals and sinuses to the outside environment is through the ears and nose, application of the dilation operator with an O_2 structuring element closes these structures. A subsequent hole-filling closing operation produces a single volume that contains the entire head, including brain and skull as well as scalp. We then select the largest foreground connected component as the head volume,

S = SCC(X_t_scalp_filled),    (5.8)

where SCC is an operator that selects the largest foreground connected component in the set X_t_scalp_filled. The set S differentiates scalp from background. Figure 5.3 shows views of each stage of the scalp identification process.
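A compact sketch of this scalp stage (Equations 5.6-5.8), assuming SciPy's binary morphology and the O2 element defined earlier, is shown below; binary_fill_holes stands in for the hole-filling step of the modified closing, and the helper name is our own.

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion, binary_fill_holes, label

def segment_scalp(mri, t_scalp, O2):
    """Scalp mask via thresholding, modified closing with hole filling, and largest component."""
    X = mri >= t_scalp                           # Eq. 5.6: foreground voxels
    X = binary_dilation(X, structure=O2)         # dilation part of the modified closing
    X = binary_fill_holes(X)                     # fill cavities disjoint from the background
    X = binary_erosion(X, structure=O2)          # erosion part of the closing (Eq. 5.7)
    labels, n = label(X)                         # connected components
    if n == 0:
        return X
    sizes = np.bincount(labels.ravel())[1:]      # ignore background label 0
    return labels == (1 + np.argmax(sizes))      # Eq. 5.8: largest foreground component
```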
5.2.5 Segmentation of Skull

Finding the skull in T1-weighted MRI is difficult because the skull appears as a set of dark voxels and typically has a thickness of only three or four voxels. To properly segment the skull, we must identify both the outer boundary, which separates it from the scalp, and the inner boundary, which encases the brain.
5.2.5.1 Segmentation of Outer Skull

Our outer skull segmentation procedure first performs a thresholding operation to identify the dark voxels in the image. As described above, we estimate the threshold for skull, t_skull, as the mean intensity value of the non-zero voxels in the MRI volume that are not identified as brain. Application of this threshold to the image produces the set

X_dark = {k ∈ V : x_k ≤ t_skull},    (5.9)
Figure 5.4: Outer skull segmentation: individual stages of the outer skull segmentation sequence: (a) a slice in the MRI volume (b) thresholding voxels with intensity less than or equal to t_skull to identify the "dark" voxels (represented as white pixels in the image) (c) union of dark voxels and dilated brain mask (d) modified scalp mask S_e (e) intersection of modified scalp mask and X_u_out (f) largest connected component X_lc (g) closed skull component (h) final outer skull mask after masking by modified scalp mask.
where once again V is a spatial index set representing the entire MRI volume and x_k is the k-th voxel value in the MRI (see Fig. 5.4b).
The region identified by the thresholding operation will exclude some regions such as CSF that are inside the head but do not belong inside the skull. To capture these regions, we take the union of our thresholded image, X_dark, with a dilated brain mask, B_dm. We dilate the brain mask produced by BSE; we use a cube of radius 2, denoted as C_2 (equivalent to a 5x5x5 cube), as the structuring element. This process, the result of which is shown in Fig. 5.4c, is described as

B_dm = B_m ⊕ C_2    (5.10)
X_u_out = X_dark ∪ B_dm.    (5.11)
Next, we take the intersection of this volume with a modified scalp volume, S_e, to include only the dark voxels that lie within the head. We obtain this modified mask by applying an opening operation with a cubic structuring element of radius 12 (25x25x25), followed by an erosion operation with a cube of radius 2 (5x5x5):

S_e = (S ◦ C_12) ⊖ C_2.    (5.12)
These two operations generate a scalp volume that typically does not include the ears or nose (see Fig. 5.4d). In particular, this volume excludes portions of the ear canals and nasal sinuses that have intensity values that are indistinguishable from skull. By masking the skull with S_e (see Fig. 5.4e), we ensure that these regions do not appear incorrectly as part of the skull and that the skull does not intersect the scalp boundary. This is described by

X_i_out = X_u_out ∩ S_e,    (5.13)

where S_e is the modified scalp, and X_i_out is the initial skull mask.
The largest connected region resulting from this operation will be the closed volume bounded by the outer skull, but X_i_out may contain additional connected components such as the eyes. Thus, we now select the largest connected component in X_i_out to disconnect this volume from background voxels and eye sockets (see Fig. 5.4f):

X_lc = SCC(X_i_out).    (5.14)
We then close the boundary of the outer skull using a closing operation with the structuring element O_4:

X_c = X_lc • O_4.    (5.15)
This closing operation may cause the volume to intersect the scalp, so our final operation is to again mask the volume with the eroded scalp to enforce the physical constraint that the outer skull boundary lies within the scalp volume that we previously computed:

Sk_out = X_c ∩ S_e.    (5.16)

The resulting set Sk_out is a closed volume bounded by the outer skull boundary (see Fig. 5.4h).
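Putting the outer-skull steps (Equations 5.9-5.16) together, a minimal sketch assuming the SciPy helpers and structuring elements introduced above is shown below; building the larger elements C_12 and O_4 through repeated iterations of the radius-1 and radius-2 elements, and the helper names, are our own choices for the sketch.

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion, binary_opening, binary_closing, label

def largest_component(mask):
    labels, n = label(mask)
    if n == 0:
        return mask
    sizes = np.bincount(labels.ravel())[1:]
    return labels == (1 + np.argmax(sizes))

def segment_outer_skull(mri, t_skull, brain_mask, scalp_mask, C1, O2):
    X_dark = mri <= t_skull                                          # Eq. 5.9
    B_dm = binary_dilation(brain_mask, C1, iterations=2)             # Eq. 5.10: dilation by C_2
    X_u = X_dark | B_dm                                              # Eq. 5.11
    S_e = binary_erosion(binary_opening(scalp_mask, C1, iterations=12),
                         C1, iterations=2)                           # Eq. 5.12: (S o C_12) eroded by C_2
    X_i = X_u & S_e                                                  # Eq. 5.13
    X_lc = largest_component(X_i)                                    # Eq. 5.14
    X_c = binary_closing(X_lc, O2, iterations=2)                     # Eq. 5.15: closing with O_4
    return X_c & S_e                                                 # Eq. 5.16: outer skull mask
```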
5.2.5.2 Segmentation of Inner Skull
Segmenting the inner boundary of the skull is more difficult than identifying the scalp
or the outer skull. We must be careful to keep all parts of the brain and the CSF within
the inner skull boundary. In some MRI volumes with low SNR, skull is difficult to
distinguish from CSF, and CSF may therefore be misinterpreted as skull. In order to
Figure 5.5: Inner skull segmentation: stages of inner skull segmentation: (a) slice in the initial MRI volume (b) image after masking by the eroded outer skull mask (c) binary image produced by thresholding X_o with t_skull (d) image after union with the dilated brain mask (e) opening of X_u_in with O_4 (f) inner skull mask produced from the union of X_open and the eroded outer skull.
overcome this problem we use the outer skull and brain masks to restrict the allowed
locations of the inner skull boundary.
First, we mask the MRI volume using a skull mask, Sk_out_e. We create Sk_out_e by eroding the outer skull set computed in the previous section, Sk_out, with a structuring element C_1:

Sk_out_e = Sk_out ⊖ C_1    (5.17)
X_o = V ∩ Sk_out_e.    (5.18)
This ensures that the resultant inner skull will lie inside the outer skull. Then, we apply the same threshold t_skull we used to find the outer skull to the voxels in X_o:

X_bright = {k ∈ X_o : x_k ≥ t_skull}.    (5.19)
The resulting volume should represent the region bounded by the inner skull boundary (see Fig. 5.5c). The thresholding operation produces holes in the volume resulting from CSF voxels that have intensities similar to that of the skull. Therefore, we take the union of X_bright with a dilated brain mask to directly fill these non-skull regions (see Fig. 5.5d). We use the structuring element C_1 to dilate the brain. With this operation we ensure that the mask X_u_in encloses the brain mask:

B_dm = B_m ⊕ C_1    (5.20)
X_u_in = X_bright ∪ B_dm    (5.21)
The mask still appears noisy due to various remaining confounding features, such as bright voxels that correspond to diploic fat within the skull. We remove these regions by applying an opening operation using a structuring element O_4 (see Fig. 5.5e):

X_open = X_u_in ◦ O_4.    (5.22)
While the previous steps will remove most CSF regions from the estimated skull mask, there may be regions where the dilated brain mask alone does not encompass all CSF, which therefore appears as part of the skull. To overcome this problem we impose a physical constraint on skull thickness in our algorithm and ensure that it does not exceed 4 mm. Using the structuring element O_4, we apply an erosion operation to the outer skull volume. This produces the minimum interior skull boundary, Sk_out_o:

Sk_out_o = Sk_out ⊖ O_4.    (5.23)
By computing the union of Sk_out_o with X_open,

Sk_in = X_open ∪ Sk_out_o,    (5.24)

we obtain an inner skull boundary that better matches the anatomy (see Fig. 5.5f). Sk_in contains all voxels that are contained within the skull, but does not include the skull itself. While this operation will lead to underestimation of skull thickness in subjects with skulls thicker than 4 mm in places, it is a practical solution to the problem that CSF and skull are often indistinguishable. This thickness constraint can also be modified by the user.
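A matching sketch of the inner-skull stage (Equations 5.17-5.24), again assuming the SciPy helpers and structuring elements from the earlier sketches, is given below; as before, O_4 is approximated by iterating the radius-2 octagon twice.

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion, binary_opening

def segment_inner_skull(mri, t_skull, brain_mask, outer_skull, C1, O2):
    Sk_out_e = binary_erosion(outer_skull, C1)                  # Eq. 5.17
    X_o = Sk_out_e                                              # Eq. 5.18: restrict V to the eroded outer skull
    X_bright = X_o & (mri >= t_skull)                           # Eq. 5.19
    B_dm = binary_dilation(brain_mask, C1)                      # Eq. 5.20
    X_u_in = X_bright | B_dm                                    # Eq. 5.21
    X_open = binary_opening(X_u_in, O2, iterations=2)           # Eq. 5.22: opening with O_4
    Sk_out_o = binary_erosion(outer_skull, O2, iterations=2)    # Eq. 5.23: minimum interior boundary
    return X_open | Sk_out_o                                    # Eq. 5.24: inner skull mask
```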
Because of partial volume effects and the resolution of the MRI data, the skull can be very thin in some parts of the image. This may cause problems in our segmentations because the resultant surface tessellations of the scalp, outer skull, inner skull, and brain boundaries may intersect. We therefore constrain our segmentation results to ensure that each structure does not intersect with the neighboring surfaces. To avoid affecting the brain segmentation result, we apply this constraint to the inner skull first, then to the outer skull, and finally to the scalp. We dilate the brain mask with structuring element C_1 and take the union of this mask with the inner skull. We repeat the same procedure for the outer skull and scalp. We constrain the outer skull by taking its union with the dilated inner skull; we constrain the scalp by taking its union with the outer skull dilated with structuring element C_1. This correction procedure guarantees that we obtain non-intersecting boundaries. Figure 5.6 gives a summary of our algorithm with the notation used in the equations. Figure 5.7 shows views of surface models generated from this procedure. The final set of masks differentiates brain, the space between brain and skull, skull, scalp, and background.
5.3 Results
We performed two studies of the performance of our scalp and skull segmentation methods, which we implemented using the C++ programming language. In the first study, we assessed the ability of our method to perform automatically on several MRI volumes. In the second study, we measured the accuracy of our methods using co-registered sets of CT and MRI data.
5.3.1 Automation study

We applied our algorithm to 44 T1-weighted MRI volumes obtained from the International Consortium for Brain Mapping (ICBM) Subject Database (Mazziotta et al., 2000). Scalp segmentation was successful for all subjects with no manual intervention. For 36 of the 44 volumes, our algorithm produced closed anatomically correct skull models with no user interaction. For the remaining 8 volumes, our algorithm also generated anatomically correct skull models that contained one or two holes near the eye sockets in the zygomatic bones. Of these, 3 were corrected by manually adjusting the skull threshold. In the other cases, adjusting the threshold to close the holes resulted in errors
elsewhere in the skull. This problem arises because of the very thin bone layer in the
vicinity of the zygomatic bone. However, these holes do not introduce additional openings in the cranium between the brain and scalp and so should have minimal impact on MEG and EEG fields calculated using these surfaces.

Figure 5.6: A more detailed block diagram of our algorithm for scalp and skull segmentation using morphological operations, with the notation used in the equations. On completion, we apply additional masking operations to ensure that scalp, outer skull, inner skull, and brain masks do not intersect each other.
Figure 5.7: Renderings of surface tessellations of the head models: (a) scalp model (b) outer skull model (c) inner skull model (d) nested view of the surfaces.
5.3.2 Co-registered CT/MRI Validation

We examined segmentation results from pairs of CT and T1-weighted MRI data volumes acquired from 8 subjects. The CT-MRI data sets were provided as part of the project "Retrospective Image Registration Evaluation" (West et al., 1997). In this database, the T1-weighted MRI data were obtained using a Magnetization Prepared RApid Gradient Echo (MP-RAGE) sequence. This is a rapid gradient-echo technique in which a preparation pulse (or pulses) is applied before the acquisition sequence to enhance contrast (Mugler and Brookeman, 1990). The dimensions of the MRI volumes were 256 x 256 x 128 with resolution on the order of 1 mm x 1 mm x 1.5 mm. The corresponding CT scans had 3 mm slice thickness with slice dimensions 512 x 512, with resolution on the order of 0.4 mm x 0.4 mm. The number of slices for each CT volume varied between 42 and 49.
We processed the CT and MRI data by registering each CT volume to its corresponding MRI volume. We used the 3D Multi-Modal Image Registration Viewer Software (RView 8.0wBeta) to perform these registrations (Studholme et al., 1999). After alignment, we used thresholding to label the skull in the CT data sets. We then used closing and flood filling procedures to fill the diploic spaces within the skull. We also labeled the scalp and brain in the CT data sets using morphological operations. In this study, we treated the segmentation results that we obtained from CT as a gold standard against which to compare the MRI segmentations. Figure 5.8 shows results of the segmentation of brain, skull and scalp from a T1-MRI volume and its corresponding CT data for transaxial, sagittal and coronal sections. Figure 5.9 shows the results as tessellations of the scalp, outer skull, inner skull, and brain for 4 of the 8 MRI data sets. Surface tessellations were computed using the Marching Cubes algorithm (Lorensen and Cline, 1987) and rendered using our BrainSuite software (Shattuck and Leahy, 2002).

To assess the performance of our algorithm, we computed Dice coefficients between correspondingly labeled regions generated from the CT and MRI data (Equation 1.17).
Skull morphology in the lower portion of the head is extremely complex and performance of our algorithm in these regions is unreliable. However, forward modeling calculations in MEG and EEG are primarily affected by the scalp and the skull boundaries in the regions between the brain surface and the location of the scalp electrodes or external magnetometers. Consequently, we restricted our evaluation to the head volume lying above the plane passing through nasion and inion and perpendicular to the sagittal plane. While small portions of the occipital and temporal lobes may extend below this plane, and hence are excluded from the evaluation, the use of an anatomically defined
Figure 5.8: Segmentation of brain, CSF, skull, and scalp from MRI and its corresponding CT on transaxial, sagittal and coronal slices respectively. First Row: Original MRI. Second Row: Original CT data. Third Row: Segmentation of brain, CSF, skull, and scalp from MRI. Fourth Row: Segmentation of combined brain/CSF, skull, and scalp from CT.
bounding plane makes this analysis objective and repeatable and preferable to one in
which the evaluation region is redefined for each subject.
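As an illustration of how such an evaluation mask can be constructed, the sketch below keeps only the voxels on one side of the plane through the nasion and inion that is perpendicular to the sagittal plane; the fiducial coordinates and the assumption that the first array axis is the left-right direction are placeholders, not values taken from this work.

```python
import numpy as np

def mask_above_fiducial_plane(shape, voxel_size_mm, nasion_mm, inion_mm,
                              lr_axis=(1.0, 0.0, 0.0)):
    """Boolean mask of voxels on the superior side of the nasion-inion plane."""
    nasion = np.asarray(nasion_mm, dtype=float)
    inion = np.asarray(inion_mm, dtype=float)
    # The plane contains the nasion-inion line and is perpendicular to the sagittal
    # plane, so its normal is orthogonal to both of those directions.
    normal = np.cross(inion - nasion, np.asarray(lr_axis, dtype=float))
    normal /= np.linalg.norm(normal)
    coords = np.indices(shape).reshape(3, -1).T * np.asarray(voxel_size_mm)
    signed_distance = (coords - nasion) @ normal
    # The sign convention (which half-space is "above") should be checked against
    # a point known to lie inside the brain.
    return (signed_distance >= 0).reshape(shape)
```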
For each subject, we computed Dice coefficients measuring the similarity in segmentations of four regions that were labeled in the co-registered CT and MRI. These regions
were the brain and CSF volume within the inner skull boundary, the skull (the region
Figure 5.9: Surface tessellations of the scalp, outer skull, inner skull, and brain obtained from 4 MRI data sets.
between the outer and the inner skull boundary), the region between the outer scalp
boundary and the outer skull, and the whole head volume contained within the scalp.
These results are listed in Table 5.1. On average, the segmentations showed agreement
metrics of 0.9436±0.013 for brain/CSF, 0.7504±0.056 for skull, 0.7229±0.061 for the region bounded by scalp and skull, and 0.9670±0.013 for the scalp. Figure 5.10 shows
            Brain/CSF       Skull           Region between          Scalp Volume
                                            Scalp and Outer Skull   (whole head)
Subject 1   0.9452          0.7608          0.6820                  0.9814
Subject 2   0.9472          0.7134          0.7665                  0.9786
Subject 3   0.9540          0.6364          0.6123                  0.9818
Subject 4   0.9189          0.7940          0.7228                  0.9550
Subject 5   0.9451          0.7852          0.7752                  0.9729
Subject 6   0.9315          0.7912          0.7488                  0.9588
Subject 7   0.9517          0.7254          0.7924                  0.9483
Subject 8   0.9553          0.7964          0.6832                  0.9590
Average     0.9436±0.013    0.7504±0.056    0.7229±0.061            0.9670±0.013
Table 5.1: Dice coefficients comparing the CSF and brain volume inside the skull, the skull volume (i.e., the region between inner and outer skull boundaries), the region between outer skull and scalp boundary, and the whole head for 8 co-registered CT and MRI data sets.
an overlay of the MRI and CT skulls on transaxial MRI and CT sections for one of the
data sets, illustrating the agreement of the two segmentations.
The use of CT models for skull and scalp in EEG or MEG source localization would
require registration of the CT data to MEG or EEG sensor positions, which typically
has an accuracy on the order of 2.5mm (de Munck et al., 2001; Lamm et al., 2001).
Consequently, it is worthwhile to relate the error induced by use of the skull segmenta-
tion from MRI with that which would result from registration error even if CT images
were available. We thus compare the Dice coefficients presented in Table 5.1 for skull
segmentation with Dice coefficients computed from mis-registered copies of the CT
skull model and the MRI skull model relative to the original CT skull model. We also
compute a set difference measure,
\[
\operatorname{diff}(S_A, S_B) = \frac{|S_A| - |S_A \cap S_B|}{|S_A|} \qquad (5.25)
\]
Figure 5.10: Overlay of the MRI and CT skulls on a transaxial MRI cross-section and its corresponding co-registered CT cross-section: (a) Overlay of the MRI skull on MRI: white region corresponds to skull. (b) Overlay of MRI and CT skulls: white voxels are labeled as skull in both CT and MRI segmentations, dark gray voxels are labeled as skull only in the MRI data, light gray voxels are labeled as skull only in the CT data, and black voxels are labeled as background in both. (c) Overlay of the CT skull on co-registered CT.
which produces a value of 0 when all members of S_A are in S_B, and a value of 1 when no members of S_B are in S_A.
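A minimal sketch of Equation 5.25, again for two boolean masks on a common voxel grid:

```python
import numpy as np

def set_difference(mask_a, mask_b):
    """diff(S_A, S_B) = (|S_A| - |S_A ∩ S_B|) / |S_A|."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    return (a.sum() - np.logical_and(a, b).sum()) / a.sum()
```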
We shifted the CT skull volume by 1mm, 2mm, and 3mm separately along each of
the three cardinal axes in turn. We then computed the Dice coefficients and set differences between the shifted CT skulls and the original CT skull, and between the shifted CT skulls and the MRI skull, along each axis in turn. The final results for the 1mm, 2mm and 3mm shifts were obtained by averaging the directional Dice coefficients and set differences. Table 5.2 lists the average Dice coefficients and average set differences for the shifted CT skull relative to itself and to the MRI skull.
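A sketch of this misregistration experiment is given below. It assumes the masks have been resampled to 1mm isotropic voxels, so an integer voxel shift corresponds to a shift in millimetres, and it reuses the dice_coefficient sketch given earlier; np.roll wraps around the array edges, which is harmless as long as the skull does not touch the volume border.

```python
import numpy as np

def average_shifted_dice(reference_mask, mask_to_shift, shift_voxels):
    """Average Dice coefficient over shifts along the three cardinal axes."""
    scores = []
    for axis in range(3):
        shifted = np.roll(mask_to_shift, shift_voxels, axis=axis)
        # dice_coefficient is the helper from the earlier sketch.
        scores.append(dice_coefficient(reference_mask, shifted))
    return float(np.mean(scores))

# Example: compare the MRI skull against CT skulls shifted by 1, 2 and 3 voxels.
# results = {s: average_shifted_dice(mri_skull, ct_skull, s) for s in (1, 2, 3)}
```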
The Dice coefficients show that the errors between MRI and CT skull segmentations are similar to the effect of a 2mm to 3mm registration error. This figure is consistent with registration errors reported in the literature (de Munck et al., 2001; Lamm et al., 2001). However, registration error will be present when using either CT or MRI, so a more realistic
S_1 vs S_2        Avg. Dice Coef.    Avg. diff(S_1, S_2)   Avg. diff(S_2, S_1)
MRI vs CT         0.7504±0.0560      0.1658±0.0727         0.3067±0.0888
CT vs CT_1mm      0.9086±0.0281      0.0925±0.0280         0.0903±0.0282
CT vs CT_2mm      0.8206±0.0538      0.1819±0.0540         0.1769±0.0538
CT vs CT_3mm      0.7349±0.0769      0.2679±0.0767         0.2613±0.0767
MRI vs CT_1mm     0.7346±0.0599      0.1876±0.0727         0.3220±0.0863
MRI vs CT_2mm     0.6918±0.0703      0.2337±0.0775         0.3636±0.0865
MRI vs CT_3mm     0.6337±0.0858      0.3002±0.0905         0.4158±0.0955
Table 5.2: Average Dice coefficients and average set differences for the comparison of the 1mm, 2mm, and 3mm misregistered CT skulls with the original CT skull and with the MRI skull.
comparison is between the MRI skull and the CT skull shifted by 2mm to 3mm on the one hand, and the CT skull and the shifted CT skull on the other. The MRI/shifted-CT Dice coefficients average 0.69 for a 2mm shift and 0.63 for a 3mm shift. In comparison, the CT/shifted-CT Dice coefficients average 0.82 for a 2mm shift and 0.73 for a 3mm shift. The set differences reported in Table 5.2 indicate that the lower Dice coefficients for MRI/CT than CT/CT result from differences in the skull thickness in MRI and CT.
The set difference is asymmetric, and Table 5.2 shows that diff(MRI, CT) is less than diff(CT, MRI) for each case. This indicates that the CT skull is, on average, thicker than the MRI skull. This may be due to the absence of significant MRI signal from skull, so that partial volume effects from the scalp tend to reduce the apparent skull thickness. Conversely, partial volume effects in CT, in which the skull has high intensity, may result in an over-estimate of skull thickness. In the data used here this effect may be exacerbated by the 3mm slice thickness in the CT data, which will further tend to result in increased apparent skull thickness.
As discussed above, in some instances one may adjust the skull and scalp thresh-
olds to improve estimates of the skull volumes. The thresholds can be quickly
adjusted, and identification of the scalp and skull surfaces on a 256x256x124 vol-
ume using the method described here requires less than 4 seconds of processing time
on a 3GHz Pentium IV processor. Our algorithm is integrated into our BrainSuite
environment (Shattuck and Leahy, 2002; Dogdas et al., 2005), available online at
http://neuroimage.usc.edu/brainsuite.
Chapter 6
Conclusion and Future Work
In this thesis, we have reviewed the popular registration techniques available in the human studies literature and showed that these methods can effectively be used in small animal imaging applications. In particular, we have studied AIR and RView methodologies and software for 3D rigid intrasubject registration of PET-CT and CT-cryosection data, and AIR and LEREG methods for 3D nonrigid intersubject registration of CT images of mice. We have also implemented the thin plate spline algorithm and tested this method for 2D nonrigid intersubject registration of CT images of mice. Both rigid registration software packages, AIR and RView, performed well in multimodal registration. Initial segmentation of the mouse data was required to obtain a meaningful registration with AIR. For the nonrigid registration methods, we showed that LEREG performed better than AIR in matching the anatomical structures of two mice by comparing the overlap of the skeleton structures after registering with these methods. The thin plate spline method is an elegant method for 2D registration; however, it requires the selection of landmarks to determine the warping.
In the second part of this thesis we have described the construction of a 3D whole
body mouse atlas, the Digimouse, from coregistered CT and cryosection data of a normal nude male mouse. High quality PET, CT and cryosection images were acquired
post mortem from a single mouse placed in a stereotactic frame with fiducial markers
visible in all three modalities. The image data were coregistered to a common coordi-
nate system using the fiducials and resampled to an isotropic 0.1mm voxel size. Using
interactive editing tools, we segmented and labelled whole brain, cerebrum, cerebel-
lum, olfactory bulbs, striatum, medulla, masseter muscles, eyes, lachrymal glands, heart, lungs, liver, stomach, spleen, pancreas, adrenal glands, kidneys, testes, bladder, skeleton
and skin surface. The combination of cryosection and CT images enhanced our ability
to delineate both soft tissue structures and bone. The final atlas consists of the 3D vol-
ume, in which the voxels are labelled to define the anatomical structures listed above,
with coregistered PET, CT and cryosection images.
To illustrate the use of the atlas, we have included simulations of 3D biolumi-
nescence and PET image reconstruction. Optical scatter and absorption values were
assigned to each organ to simulate realistic photon transport within the animal for bioluminescence imaging. Similarly, 511 keV photon attenuation values were assigned to each structure in the atlas to simulate realistic photon attenuation in PET.
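As a simple illustration of how such per-structure values can be applied, the sketch below maps a labelled atlas volume to an attenuation volume through a lookup table; the label indices and attenuation coefficients are placeholder values, not those used for the Digimouse atlas.

```python
import numpy as np

# Hypothetical lookup: linear attenuation coefficient at 511 keV (1/cm) per label.
MU_511KEV = {
    0: 0.000,   # background / air
    1: 0.096,   # soft tissue (approximately water)
    2: 0.170,   # bone (approximate)
}

def attenuation_map(label_volume, lookup=MU_511KEV):
    """Replace each integer label in the atlas volume with its attenuation value."""
    mu = np.zeros(label_volume.shape, dtype=float)
    for label, value in lookup.items():
        mu[label_volume == label] = value
    return mu
```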
We encountered unexpected problems in coregistering the data sets due to relative
motion of the limbs and head after the CT and PET data were acquired. Future atlas
projects can deal with this problem by further reducing the internal temperature of
the mouse before scanning. Nevertheless, computational tools for nonrigid registration
allowed us to align the data after acquisition. Soft tissue structures in the mouse would
have been more clearly delineated if the CT data had been acquired using a contrast
agent such as Fenestra (Advanced Research Technologies, Quebec, Canada), however
this was not available at the time that the data were acquired (Stout et al., 2002). Coreg-
istered high resolution magnetic resonance image data, such as that used in the MOBY
phantom (Segars et al., 2004), would also have added to our ability to delineate struc-
tures in the atlas. Nevertheless, the high resolution cryosection data provides a wealth
of anatomical detail which we believe, in combination with the labelled volume and
coregistered CT and PET data, will make this a useful tool for researchers working in
small animal imaging.
In the third part of the thesis we have described a method to generate a 3D surface
model of a mouse from orthogonal views of structured light images. We needed this
information not only for registering the optical data to the surface, but also for generating the forward model so that we know where to impose boundary conditions. We acquired
structured light images using the Xenogen Corporation IVIS 200 instrument, which
is available for multispectral bioluminescence and fluorescence imaging. In order to
obtain a more accurate surface model of the object, we used the instrumentation set up
developed for a 2D imaging system that allows structured light and photographic image
acquisition of 3D tomographic data of the mouse surface (Chaudhari et al., 2005) and
acquired structured light patterns for the side views in addition to the top view.
We computed the 16 parameter DLT camera model using an iterative least squares
method with 189 points obtained from the calibration grid placed at three different
heights. After the camera calibration, we computed structured light calibration using
reference planes of known heights. We estimated the phase modulation due to planar
objects of known height with respect to the reference plane by 1D-FFT analysis, and
then unwrapped this phase using a quality based unwrapping algorithm to get correla-
tion between height and phase maps. For arbitrary objects, we used these calibration
parameters and the unwrapped phase data to compute the height maps for each view.
After estimating the height maps for all views, we reflected the height maps for the side views to the top view using the mirror reflection equation and registered these height maps with the top view height map using the ICP algorithm. The resultant surface of the object was computed by taking the intersection of these height volumes.
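For reference, the basic linear (11-parameter) DLT estimate can be written as a least squares problem as sketched below; the 16-parameter model used here additionally includes lens-distortion terms and an iterative refinement, which are omitted from this simplified sketch.

```python
import numpy as np

def dlt_calibrate(points_3d, points_2d):
    """Estimate the 11 linear DLT parameters from matched 3D/2D calibration points.

    points_3d : (N, 3) array of calibration-grid coordinates
    points_2d : (N, 2) array of corresponding image coordinates
    """
    X, Y, Z = np.asarray(points_3d, dtype=float).T
    u, v = np.asarray(points_2d, dtype=float).T
    zeros, ones = np.zeros_like(X), np.ones_like(X)
    # u = (L1 X + L2 Y + L3 Z + L4) / (L9 X + L10 Y + L11 Z + 1), and similarly for v.
    rows_u = np.column_stack([X, Y, Z, ones, zeros, zeros, zeros, zeros,
                              -u * X, -u * Y, -u * Z])
    rows_v = np.column_stack([zeros, zeros, zeros, zeros, X, Y, Z, ones,
                              -v * X, -v * Y, -v * Z])
    A = np.vstack([rows_u, rows_v])
    b = np.concatenate([u, v])
    L, *_ = np.linalg.lstsq(A, b, rcond=None)
    return L
```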
We tested our method by computing the 3D surface model of a cylinder object and
a plastic mouse. We also compared the 3D surface of a plastic mouse with the surface
generated from the X-ray CT image of the same plastic mouse. The Dice coefficient between the CT surface and the 3D surface generated from structured light patterns was
0.8716 with the mirror setup and 0.7585 with only the top view. Comparison of these results shows that the addition of the side views improves the generation of an accurate 3D
surface model of the mouse. In future work, we will also include the bottom view with
modification in the mirror setup, which will help us generate an improved 3D surface
model of the mouse.
In the fourth part of this thesis, we have demonstrated the use of the viscous fluid
registration algorithm to register an atlas image to an image with tumor pathology. We
first estimated an initial stage of the tumor in the subject image by manually segmenting it and using level set methods with the assumption of a uniform growth model. Then
we seeded this earlier tumor model into the atlas and performed viscous fluid registra-
tion. These preliminary results indicate that the viscous fluid registration method can
accurately model large deformations caused by tumors. Currently, we have applied our
method only to 2D simulation data. In future work, we plan to apply our algorithm to
real and 3D data.
In the last chapter of this thesis, we have developed a new algorithm for generating
skull and scalp models from T1-weighted MRI. We tested our algorithm on 44 MRI data sets and performed additional quantitative validation using 8 CT/MRI co-registered data sets. In both experiments, we obtained anatomically accurate scalp and skull regions with a relatively small computational cost.
To assess the accuracy of our algorithm, we computed Dice coefficients as a measure
of the similarity of the segmentation results we obtained from co-registered CT and
MRI data sets. We performed these comparisons on the upper portion of the head by
defining a plane that is perpendicular to sagittal planes and passes through the nasion
and the inion. We obtained an average Dice coefficient of 0.7504±0.056 for the skull
segmentation, 0.7229±0.061 for the region between the scalp and the outer skull, and 0.9670±0.013 for the
whole head. Comparison of these results with the effect of shifting the CT skull with
respect to itself and the MRI skull indicate that the differences between the MRI and
CT skulls are equivalent to that which would arise from a registration error on the order of 2mm to 3mm. When the additional effect of 2mm to 3mm misregistration is added to the MRI/CT skull comparison, the Dice coefficients drop to a value equivalent to a registration-only error in excess of 3mm. The set difference measures also reported indicate that these results are due in large part to differences in skull thickness between CT and MRI. It is reasonable to conclude that this is caused in part by under-estimation of skull thickness due to partial volume effects in the MRI. However, partial volume errors in the CT skull itself, as well as the skull thickening effects of the 3mm CT slices and additional registration errors arising from the CT/MRI registration, also contribute to the decrease in the MRI/CT Dice coefficients while not adversely affecting the CT/CT values. Consequently, taking CT as the gold standard, the values reported represent a worst-case performance, and the effect of using an MRI-segmented skull in place of one extracted from CT would be equivalent to a registration error in excess of 3mm. However, with the additional sources of error outlined here, we might expect effective performance with the MRI skull to be better than this. Furthermore, the skull thickness can be easily adjusted by changing the skull and scalp thresholds from the default values used in this study; in this case, more accurate skull volumes may be extracted. Since the primary application we envision for this skull segmentation method is in generating forward models for MEG and EEG, the impact of errors in definition of the skull layer should ultimately be assessed in terms of their impact on source localization accuracy; however, that is beyond the scope of this thesis.
References
Abdel-Aziz, Y. I. and Karara, H. M. (1971). Direct linear transformation from com-
parator coordinates into object space coordinates in close-range photogrammetry. In
Proceedings of the Symposium on Close-Range Photogrammetry , volume 50, pages
1–18.
Akahn, Z., Acar, C. E., and Gencer, N. G. (2001). Development of realistic head models for electromagnetic source imaging of the human brain. 2001 Proceedings of the 23rd Annual EMBS International Conference of the IEEE, 1:899–902.
Alexandrakis, G., Rannou, F. R., and Chatziioannou, A. F. (2005). Tomographic bio-
luminescence imaging by use of a combined optical-pet (opet) system: A computer
simulation feasibility study. Phys. Med. Biol., 50:4225–4241.
Amit,Y.(1994). Anonlinearvariationalproblemforimagematching. SIAMJournalin
Scientific Computing,151:207–224.
Amit, Y., Grenander, U., and Piccioni, M. (1991). Structural image restoration through
deformable templates. Journal of the American Statistical Association, 86 414:376–
387.
Ary, J. P., Klein, S. A., and Fender, D. H. (1981). Location of sources of evoked scalp
potentials: Corrections for skull and scalp thickness. IEEE Trans. Biomed. Eng.,
BME-28:447452.
Ashburner, G. M. and Friston, K. (1999). Nonlinear spatial normalization using basis
functions. Human Brain Mapping,7:254–266.
Ashburner, J., Neelin, P., Collins, D. L., Evans, A. C., and Friston, K. J. (1997). Incorporating prior knowledge into image registration. NeuroImage, 6 4:344–352.
Audette, M. A., Ferrie, F. P., and Peters, T. M. (2000). An algorithmic overview of
surface registration techniques for medical imaging. Medical Image Analysis, pages
201–217.
Bajcsy, R. and Kovacic, S. (1989). Multiresolution elastic matching. Computer Vision,
Graphics and Image Processing,46:1–21.
Belardinelli,P.,Mastacchi,A.,Pizzella,V.,andRomani,G.L.(March2003). Applying
a visual segmentation algorithm to brain structures mr images. Proceedings of the
first International IEEE EMBSConferenceonNeuralEngineering,pages507–510.
Besl, P. J. and McKay, N. D. (1992). A method for registration of 3D shapes. IEEE
Transactions Pattern Anal.Mach.Intell.,11:567–585.
Bomans, M., Hohne, K., Tiene, U., and Riemer, M. (June 1990). 3-D segmentation
of MR images of the head for 3D display. IEEE Transactions on Medical Imaging,
9:177–183.
Bookstein, F. (June 1989). Principal warps: Thin plate splines and the decomposition
of deformations. IEEE Trans. Pattern Anal. Mach. Intell., 11(6):567–585.
Broadwell,R.D.andBleier,R.(1976). Acytoarchitectonicatlasofthemousehypotha-
lamus. J Comp Neurol.,167(3):315–339.
Brummer, M. E., Merseau, R. M., Eisner, R. L., and Lewine, R. J. (1993). Automatic
detection of brain contours in MRI data sets. IEEE Transactions on Medical Imaging,
12:153–166.
Burger, A., Baldock, R., and Yang, Y. (2002). The edinburgh mouse atlas and gene-
expressiondatabase: Aspatio-temporaldatabaseforbiologicalresearch. Proceedings
of the 14th International Conference on Scientific and Statistical Database Manage-
ment (SSDBM02). http://genex.hgu.mrc.ac.uk/.
Celio, M. R., Hof, P. R., Bloom, F. E., and Young, W. G. (1998). A computerized
stereotaxic atlas of the mouse brain. Soc Neurosci Abst, 24:1065–1065.
Chaudhari, A. J., Darvas, F., Bading, J. R., Moats, R. A., Conti, P. S., Smith, D. J.,
Cherry, S. R., and Leahy, R. M. (2005). Hyperspectral and multispectral biolumi-
nescence optical tomography for small animal imaging. Physics in Medicine and
Biology,50:5421–5441.
Cheng, X. X., Su, Y., and Guo, L. R. (1991). Automated measurement for 360deg
profilometry of 3D diffuse objects. Appl. Opt., 30:1274–1278.
Cherry, S. R. (2004). In vivo molecular and genomic imaging: new challenges for
imagingphysics. Physics inMedicineandBiology,49(3):13–48.
Chow, P. L., Rannou, F. R., and Chatziioannou, A. F. (2005). Attenuation correction for small animal PET tomographs. Physics in Medicine and Biology, 50:1837–1850.
Christensen, G. and Johnson, H. (2001). Consistent image registration. IEEE Transac-
tions on Medical Imaging,20(7):568–582.
Christensen, G., Rabbitt, R., and Miller, M. (1996). Deformable templates using large
deformation kinematics. IEEE Transactions on Image Processing, 5(10):1435–1447.
Christensen, G. E. (1999). Consistent linear-elastic transformations for image matching. Information Processing in Medical Imaging, LNCS 1613, Springer-Verlag, pages 224–237.
Christensen,G.E.,Rabitt,R.D.,andMiller,M.I.(1993). Adeformableneuroanatomy
textbook based on viscous fluid mechanics. 27th Ann. Conf. On Inf. Sciences and
Systems,211-216.
Chu, C. and Takaya, K. (1993). 3-Dimensional rendering of MR images. WESCANEX
93. Communications, Computers and Power in the Modern Environment Conference
Proceedings, IEEE,pages165–170.
Chui, H. and Rangarajan, A. (2003). A new point matching algorithm for non-rigid
registration. Computer Vision and Image Understanding, 89:114–141.
Collins, D. L., Holmes, C. J., Peters, T. M., and Evans, A. C. (1995). Automatic 3D
model-based neuroanatomical segmentation. Human Brain Mapping, 3:190–208.
Congorto, S., Penna, S. D., and Erne, S. N. (1996). Tissue segmentation of MRI of the
headbymeansofakohonenmap. 18thAnnualInternationalConferenceoftheIEEE
Engineering in Medicine andBiologySociety,3:1087–1088.
Cuadra, M. B., Pollo, C., Bardera, A., Cuisenaire, O., Villemure, J., and Thiran, J.
(October 2004). Atlas-based segmentation of pathological MR brain images using a
modeloflesiongrowth. IEEETransactionsonMedicalImaging,23(10):1301–1314.
Curlander, J. C. and McDonough, R. N. (1991). Synthetic Aperture Radar: Systems and Signal Processing. Wiley, New York.
Davis, M. H., Khotanzad, A., Flamig, D. P., and Harms, S. E. (1997). A physics based
coordinatetransformationfor3Dimagematching. IEEETrans.onMedicalImaging,
163:317–328.
de Munck, J. C., Verbunt, J. P., Van’t Ent, D., and Van Dijk, B. W. (2001). The use
of an MEG device as 3D digitizer and motion monitoring system. Phys Med Biol.,
46(8):2041–52.
Declerck, J., Subsol, G., Thirion, J.-P., and Ayache, N. (1995). Automatic retrieval of
anatomicalstructuresin3Dmedicalimages. INRIATechnicalReport2485.
Dhenain, M., Ruffins, S. W., and Jacobs, R. E. (2001). Three-dimensional digital
mouse atlas using high-resolution MRI. Developmental Biology, 232(2):458–470.
http://mouseatlas.caltech.edu/.
Dogdas, B., Shattuck, D. W., and Leahy, R. M. (2005). Segmentation of skull and
scalp in 3D human MRI using mathematical morphology. Human Brain Mapping,
26(4):273–285.
Dogdas, B., Stout, D., Chatziioannou, A. F., and Leahy, R. M. (2007). Digimouse: a
3d whole body mouse atlas from CT and cryosection data. Physics in Medicine and
Biology,52(3):577–587.
Evans, A. C., Marrett, S., Torrescorzo, J., Ku, S., and Collins, L. (1991). MRI-PET
correlation in three dimensions using a volume of interest (voi) atlas. Journal of
Cerebral Blood Flow andMetabolism,11:A69–A78.
Fitzpatrick, J., Hill, D., and Maurer, C. (2000). Image registration. In Sonka, M. and
Fitzpatrick, J., editors, Handbook of Medical Imaging, volume 2, chapter 8, pages
447–513.SPIEPress,SanDiego.
Fleute, M. and Lavallee, S. (1998). Building a complete surface model from sparse
datausingstatisticalshapemodels: applicationtocomputerassistedkneesurgery. in:
FirstInternationalConferenceonMedicalImageComputingandComputer-Assisted
Intervention, LNCS 1496, Springer,Berlin,879-887.
Flynn, T. J. (1997). Two-dimensional phase unwrapping with minimum weighted dis-
continuity. Journal of theOpticalSocietyofAmerica,14(10):2692–2701.
Franke,R.(1979). Acriticalcomparisonofsomemethodsforinterpolationofscattered
data. Naval Postgrad. SchoolTech.ReportNPS-53-79-003 .
Ge, Y., Fitzpatrick, J. M., Kessler, R. M., and Jeske-Janicka, M. (1995). Intersubject
brain image registration using both cortical and subcortical landmarks. SPIE Image
Processing 2434,pages81–95.
Gefen, S., Tretiak, O., and Nissanov, J. (Nov 2003). Elastic 3D alignment of rat brain
histologicalimages. IEEE TransactionsonMedicalImaging,2211:1480–1489.
Ghiglia, D. and Romero, L. (1994). Robust two-dimensional weighted and unweighted phase unwrapping that uses fast transforms and iterative methods. Journal of the Optical Society of America A, 11(1):107–117.
Ghiglia, D. C. and Pritt, M. D. (1998). Two-Dimensional Phase Unwrapping, Theory,
Algorithms and Software. JohnWileyandSons,Inc.
Glover, G. and Schneider, E. (1991). Three-point dixon technique for true water/fat
decompositionwithb0inhomogeneitycorrection. Magnetic Resonance in Medicine,
18:371–383.
Goldstein, R., Zebker, H., and Werner, C. (1988). Satellite radar interferometry: Two-
dimensional phase unwrapping. Radio Science, 23(4):713–720.
Guziec, A. and Ayache, N. (1994). Smoothing and matching of 3D space curves. The
International Journal of ComputerVision,12(1):79–104.
Hajnal, J. V., Saeed, N., Soar, E. J., Oatridge, A., Young, I. R., and Bydder, G. M.
(1995). A registration and interpolation procedure for subvoxel matching of serially
acquiredmrimages. JournalofComputerAssistedTomography,19:289–296.
Hamalainen, M. S. and Sarvas, J. (1989). Realistic conductor geometry model of the
human head for interpretation of neuromagnetic data. IEEE Trans. Biomed. Eng.,
36:165171.
Haque, H. A., Musha, T., and Nakajima, M. (1998). Numerical estimation of skull
and cortex boundaries from scalp geometry. Engineering in Medicine and Biology
Society 1998, Proceedings of the 20th Annual International Conference of the IEEE,
4:2171–2174.
Hardy, R. L. (1990). Theory and applications of the multiquadric-biharmonic method:
20 years of discovery 1968-1988. Computers and Math. with Applications, 19:163–
208.
Haueisen, J., Tuch, D. S., Ramon, C., Schmipf, P., Wedeen, V., George, J., and Bel-
liveau, J. (2002). The influence of brain tissue anisotropy on human EEG and MEG.
NeuroImage,15:159–166.
Hayakawa,N.,Uemura,K.,Ishiwata,K.,Shimada,Y.,Ogi,N.,andNagaoka,T.(2000).
A PET-MRI registration technique for PET studies of the rat brain. Nucl. Med. Biol.,
27:121–125.
Heinonen, T., Dastidar, P., Kauppinen, P., Malmivuo, J., and Eskola, H. (1998). Semi-
automatictoolforsegmentationandvolumetricanalysisofmedicalimages. Medical
and Biological EngineeringandComputing,363:291–296.
Heinonen, T., Eskola, H., Dastidar, P., Laarne, P., and Malmivuo, J. (1997). Segmenta-
tion ofT1 MRscans forreconstructionofresistiveheadmodels. Computer Methods
and Programs in Biomedicine,54:173–181.
Held,K.,Kops,E.,Krause,B.,Wells,W.,Kikinis,R.,andMuller-Gartner,H.-W.(Dec.
1997). Markov random field segmentation of brain MR images. IEEE Transactions
on Medical Imaging,16:878–886.
Hill, D. L. G., Batchelor, P. G., Holden, M., and Hawkes, D. J. (2001). Medical image
registration. Physics in MedicineandBiology,46:1–45.
Huang, S. C., Truong, D., Wu, H. M., Chatziioannou, A. F., Shao, W., Wu, A. M., and
Phelps, M. E. (2005). An internet-based kinetic imaging system (kis) for micropet.
Molecular Imaging and Biology,inpress.
Hubbell, J. H. and Seltzer, S. M. (1996). Tables of x-ray mass attenuation
coefficients and mass energy absorption coefficients from 1 kev to 20 mev
for elements z =1 to 92 and 48 additional substances of dosimetric inter- est. http://physics.nist.gov/PhysRefData/XrayMassCoef/cover.html[AccessedAugust
2005],Gaithersburg: NationalInstituteofStandardsandTechnology,1996.
Jain, A. K. (1989). Fundamentals of digital image processing. Prentice-Hall.
Johnson, G. A., Cofer, G. P., Gewalt, S. L., and Hedlund, L. W. (2002). Morphologic
phenotyping with magnetic resonance microscopy: the visible mouse. Radiology,
222(3):789–793.
Johnson, H. and Christensen, G. (2002). Consistent landmark and intensity-based image registration. IEEE Transactions on Medical Imaging, 21(5):450–461.
Kim, B., Boes, J. L., Frey, K. A., and Meyer, C. R. (1997). Mutual information for
automatedunwarpingofratbrainautoradiographs. Neuroimage,51:31–40.
Kruggel, F. and Lohmann, G. (1996). Brian-a toolkit for the analysis of multimodal
braindatasets. InH.U.Lemke,K.Inamura,C.C.Jaffe,M.W.Vannier(eds.),Proceedings
of the Computer Aided Radiology 1996 CAR’96, Paris, Berlin, Springer, pages 323–
328.
Kybic, J. (2001). Elastic Image Registration Using Parametric Deformation Mod-
els. PhD thesis, Swiss Federal Institute of Technology Lausanne, EPFL, Lausanne,
Switzerland.
Kyriacou, S. K., Davatzikos, C., Zinreich, S. J., and Bryan, R. N. (July 1999). Nonlin-
ear elastic registration of brain images with tumor pathology using a biomechanical
model. IEEE TransactionsonMedicalImaging,187:580–592.
Lamm, C., Windischberger, C., Leodolter, U., Moser, E., and Bauer, H. (2001). Co-
registration of EEG and MRI data using matching of spline interpolated and MRI-
segmented reconstructions of the scalp surface. Brain Topography, 14(2):93–100.
Lemieux, L., Wieshmann, U. C., Moran, N. F., Fish, D. R., and Shorvon, S. D. (1998).
The detection and significance of subtle changes in mixed signal brain lesions by
serial mri scan matching and spatial normalization. Med Image Analysis, 2 3:227–
242.
Li, Q., Asma, E., Qi, J., Bading, J. R., and Leahy, R. M. (2004). Accurate estimation of the Fisher information matrix for the PET image reconstruction problem. IEEE Trans.
Medical Imaging,23(9):1057–1064.
Li, W., Su, X., and Liu, Z. (2001). Large-scale three-dimensional object measurement:
A practical coordinate mapping and image data-patching method. Applied Optics,
40(20):3326–3333.
Lim, H., Xu, W., and Huang, X. (1995). Two new practical methods for phase unwrapping. In Proc. IGARSS95, Firenze, Italy, pages 196–198.
Loening, A. M. and Gambhir, S. S. (2001). Amide: A completely free system for
medicalimagingdataanalysis. JournalofNuclearMedicine,42(5):192.
Lorensen, W. E. and Cline, H. E. (1987). Marching cubes: A high resolution 3D surface construction algorithm. ACM Computer Graphics, 21(4):163–169.
Luo, S. and Evans, A. C. (1995). A method to match human sulci in 3D space. IEEE
Eng. in Med. and Biol., 17thAnn.Conf.
MacKenzie-Graham, A., Lee, E. F., Dinov, I., Bota, M., Shattuck, D. W., Ruffins, S.,
Yuan, H., Konstantinidis, F., Pitiot, A., Ding, Y., Hu, G., Jacobs, R. E., and Toga,
A. W. (2004). A multimodal, multidimensional atlas of the c57bl/ 6j mouse brain.
Journal of Anatomy,204:93–102.
Maes, F., Collignon, A., Vandermeulen, D., Marchal, G., and Suetens, P. (1997). Mul-
timodality image registration by maximization of mutual information. IEEE Trans.
Med. Imaging,16(2):187–198.
Maintz, J. B. A. and Viergever, M. A. (1998). A survey of medical image registration.
Medical Image Analysis,21:1–36.
Malladi, R., Sethian, J. A., and Vemuri, B. C. (February 1995). Shape modeling
with front propagation: A level set approach. IEEE Trans. on Pattern Analysis and
Machine Intelligence,17(2):158–175.
Massoud, T. F. and Gambhir, S. S. (2003). Molecular imaging in living subjects: seeing fundamental biological processes in a new light. Genes and Development, 17(5):545–580.
Mazziotta, J., Toga, A., Evans, A., Fox, P., Lancaster, J., Zilles, K., Woods, R., Paus,
T., Simpson, G., B., P., C.J., H., Collins, D., Thompson, P., MacDonald, D., Schor- mann, T., Amunts, K., Palomero-Gallagher, N., Parsons, L., Narr, K., Kabani, N., Le
Goualher, G., Boomsma, D., Cannon, T., Kawashima, R., and Mazoyer, B. (2000).
A probabilistic atlas and reference system for the human brain. Journal of the Royal
Society,356(1412):1293–1322.
Mosher, J. C., Leahy, R., and Lewis, P. (March 1999). EEG and MEG: forward solutions for inverse methods. IEEE Transactions on Biomedical Engineering, 46 3:245–259.
Mugler, J. P. and Brookeman, J. R. (1990). Three-dimensional magnetization-prepared
rapid gradient-echo imaging (3D MP RAGE). Magnetic Resonance in Medicine,
15:152–157.
Munger, P., Crelier, G. R., and Peters., T. (2000). An inverse problem approach to the
correctionofdistortioninEPIimages. IEEE Trans. on Medical Imaging,19(7):681–
689.
Nadadur, D. and Haralick, R. M. (May 2000). Recursive binary dilation and erosion
usingdigitallinestructuringelementsinarbitraryorientations. IEEETransactionson
Image Processing,9:749–759.
Nayler, G., Firmin, D. B., and Longmore, D. (1986). Blood flow imaging by cine
magneticresonance. JournalofComputerAssistedTomography,10:715–722.
Nielsen, L. K. (2003). Elastic Registration of Medical MR Images. PhD thesis, University of Bergen.
Nielsen,M.B.(1996). Mvox:interactive2-4dmedicalimageandgraphicsvisualization
software. In H. U. Lemke, M. W. Vannier, K. Inamura, A. G. Farman (eds): Proc.
Computer Assisted Radiology(CAR’96),Paris,France,June26-29 ,pages335–338.
Ntziachristos, V., Tung, C., Bremer, C., and Weissleder, R. (2002). Fluorescence molecular tomography resolves protease activity in vivo. Nature Medicine, 8:757–761.
Oostendorp, T. F. and van Oosterom, A. (1989). Source parameter estimation in
inhomogeneous volume conductors of arbitrary shape. IEEE Trans. Biomed. Eng.,
36:382391.
Pelizzari,C.,Chen,G.,Sperling,D.,Wiichselbaum,R.,andChen,C.(1989). Accurate
three dimensional registration of CT, PET and/or MRI images of the brain. Journal
of Computer Assisted Tomography,13:20–26.
Peng, X., Liu, C., Zhang, Z., and Gu, P. (2002). Reverse engineering with 3D imaging
andmodeling. InZhang,Y.,Liu,W.,andPollicove,H.,editors,Proc.SPIEVol.4921,
p. 1-10, Optical Manufacturing Technologies, Yimo Zhang; Wenyao Liu; Harvey M.
Pollicove; Eds.,pages1–10.
Pluim, J. P. W., Maintz, J. B. A., and Viergever, M. A. (Aug 2003). Mutual informa-
tion based registration of medical images: a survey. IEEE Transactions on Medical
Imaging,228:986–1004.
Qi, J. and Leahy, R. M. (2000). Resolution and noise properties of map reconstruction
forfully3DPET. IEEE Trans.MedicalImaging,19(5):493–506.
Qi, J., Leahy, R. M., Cherry, S. R., Chatziioannou, A., and Farquhar, T. H. (1998a).
High resolution 3D bayesian image reconstruction using the micropet small-animal
scanner. Physics in MedicineandBiology,43(4).
Qi, J., Leahy, R. M., Hsu, C., Farquhar, T. H., and Cherry, S. R. (1998b). Fully
3D bayesian image reconstruction for the ecat exact hr+. IEEE Trans. Nucl.Sci.,
45(3):1096–1103.
Rasband, W. (2003). ImageJ. http://rsb.info.nih.gov/ij/.
Rifai, H., Bloch, I., Hutchinson, S., Wiart, J., and Garnero, L. (1999). Segmentation
of the skull in MRI volumes using deformable model and taking the partial volume
effectintoaccount. SPIE Proceedings,3661:288–299.
Rosen, G. D., Williams, A. G., Capra, J. A., Connolly, M. T., Cruz, B., Lu, L.,
Airey, D. C., Kulkarni, K., and Williams, R. W. (2000). The mouse brain library
@www.mbl.org. Int MouseGenomeConference14,166.
Ruprecht, D. and Muller, H. (1995). A framework for scattered data interpolation. In: M. Goebel, H. Muller, B. Urban, eds., Visualization in Scientific Computing, Springer-Verlag, Vienna.
Sandor, S. and Leahy, R. (1997). Surface-based labeling of cortical anatomy using a
deformabledatabase. IEEETMI,16:41–54.
Sarvas, J. (1987). Basic mathematical and electromagnetic concepts of the biomagnetic inverse problems. Phys. Med. Biol., 32:11–22.
Schlitt, H. A., Heller, L., Aaron, R., Best, E., and Ranken, D. M. (1995). Evaluation of
boundary element method for the EEG forward problem: Effect of linear interpola-
tion. IEEE Trans. Biomed.Eng.,42:5257.
Screiber, W. and Notni, G. (2000). Theory and arrangements of selfcalibrating whole
bodythreedimensionalmeasurementsystemsusingfringeprojectiontechnique. Opt
Eng.,39:159–169.
Segars,W.P.,Tsui,B.M.W.,Frey,E.C.,Johnson,G.A.,andBerr,S.S.(2004). Devel-
opment of a 4D digital mouse phantom for molecular imaging research. Molecular
Imaging and Biology,6:149–159. http://www.bme.unc.edu/wsegars/phantom.html.
Sethian, J. A. (1999). Level set methods and fast marching methods. Cambridge Uni-
versityPress.
Shattuck, D., Sandor-Leahy, S., Schaper, K., Rottenberg, D. A., and Leahy, R. M.
(2001). Magneticresonanceimagetissueclassificationusingapartialvolumemodel.
NeuroImage,13(5):856–876.
Shattuck, D. W. and Leahy, R. M. (2002). BrainSuite: An automated cortical surface identification tool [invited article]. Medical Image Analysis, 8(2):129–142. http://brainsuite.usc.edu/.
Shen, D. and Davatzikos, C. (2002). Hammer: hierarchical attribute matching mecha-
nismforelasticregistration. IEEETrans.onMedicalImaging,21(11):1421–1439.
Si, H. (version 1.3.4, June 17, 2005). A quality tetrahedral mesh generator and three-
dimensionaldelaunaytriangulator. http://tetgen.berlios.de/.
Skydan,O.A.,Lalor,M.J.,andBurton,D.R.(Oct2002).Techniqueforphasemeasure-
ment and surface reconstruction by use of colored structured light. Applied Optics,
41(29):6104–6117.
Smith, S., Bannister, P., Beckmann, C., Brady, M., Clare, S., Flitney, D., Hansen, P.,
Jenkinson, M., Leibovici, D., Ripley, B., Woolrich, M., and Zhang, Y. (2001). Fsl:
Newtoolsforfunctionalandstructuralbrainimageanalysis. InSeventhInt.Conf.on
Functional Mapping of theHumanBrain.
Soltanian-Zadeh, H. and Windham, J. (Dec. 1997). A multi-resolution approach for
intracranial volume segmentation from brain images. Medical Physics, 24 12:1844–
1853.
Sonka, M. and Fitzpatrick, J. M. (2000). Handbook of Medical Imaging, volume 2,
Medical Image Processing and Analysis, chapter P. M. Thompson and M. S. Mega
andK.L.NarrandE.R.SowellandR.E.BlantonandA.W.Toga,Chapter17,Brain
ImageAnalysisandAtlasConstruction,pages1063–1119. AcademicPress.
Stout,D.B.,Chow,P.L.,Gustilo,A.,Grubwieser,S.,andChatziioannou,A.F.(2003).
Multimodality isolated bed system for mouse imaging experiments. Mol. Imaging
Biol.,5.
Stout,D.B.,Chow,P.L.,Silverman,R.,Leahy,R.,Lewis,X.,Gambhir,S.,andChatzi-
ioannou, A. (2002). Creating a whole body digital mouse atlas with PET, CT and
cryosectionimages. MolecularImagingandBiology,4(4).
Studholme, C., Hill, D. L. G., and Hawkes, D. (1996). Automated 3D registration of
MRandCTimagesofthehead. MedicalImageAnalysis,12:163–175.
Studholme, C., Hill, D. L. G., and Hawkes, D. J. (1999). An overlap invariant entropy measure of 3D medical image alignment. Pattern Recognition, 32(1):71–86.
http://rview.colin-studholme.net/.
Su, X. and Chen, W. (2001). Fourier transform profilometry: a review. Optics and
Lasers in Engineering,35:263–284.
Su, X., Song, W., Cao, Y., and Xiang, L. (2004). Phase-height mapping and coordinate
calibration simultaneously in phase-measuring profilometry. Optical Engineering,
43:708–712.
Subsol, G., Thirion, J.-P., and Ayache, N. (Nov. 1995). A general scheme for automat-
ically building 3D morphometric anatomical atlases: application to a skull atlas. In
Medical Robotics and ComputerAidedSurgery,Baltimore(USA),pages373–382.
Szeliski, R. and Lavallee, S. (1993). Matching 3D anatomical surfaces with non-rigid
deformations using octree-splines. SPIE vol. 2031, Geometric Methods in Computer
Vision II,pages306–315.
Takeda, M. and Mutoh, K. (1983). Fourier transform profilometry for the automatic
measurementof3Dobjectshapes. Appl.Opt.,2224:3977–3982.
Thirion, J. P. and Gourdon, A. (November 1996). The 3D marching lines algorithm and
its application to crest lines extraction. Graphical Models and Image Processing, 58
6:503–509.
Thompson, P. M. and Toga, A. W. (1996). A surface-based technique for warping three-
dimensional image of the brain. IEEE Transactions on Medical Imaging, 15 4:402–
417.
Timsari, B. and Leahy, R. (2001). Computing The Brain: A Guide to Neuroinformatics,
chapter Neuro Slicer: A Tool of Registering 2D Slice Data to 3D Surface Atlases.
AcademicPress.
VandenElsen,P.A.,Maintz,J.B.A.,Pol,E.-J.D.,andViergever,M.A.(1993). Medical
image matching - a review with classification. IEEE Engineering in medicine and
biology,124:26–39.
Vaquero, J. J., Desco, M., Pascau, J., Santos, A., Lee, I., Seidel, J., and Green, M. V.
(Aug 2001). PET, CT and MR image registration of the rat brain and skull. IEEE
Transactions on Nuclear Science,484:1440–1445.
Viola, P. A. (1995). Alignment by Maximization of Mutual Information. PhD disserta-
tion,MIT.
West, J., Fitzpatrick, J. M., Wang, M. Y., and Dawant, B. M., et al. (1997). Comparison
and evaluation of retrospective intermodality image registration techniques. Journal
of Computer Assisted Tomography,21:554–566.
Wet, J. R. D., Wood, K. V., DeLuca, M., Helinski, D. R., and Subramani, S. (1987).
Firefly luciferase gene: structure and expression in mammalian cells. Mol Cell Biol.,
7:725–737.
Wolters, C. (2003). Influence of Tissue Conductivity Inhomogeneity and Anisotropy on
EEG/MEGbasedSourceLocalizationintheHumanBrain. PhDthesis,Max-Planck-
InstituteofCognitiveNeuroscience,Leipzig.
Wolters,C.H.,Kuhn,M.,Anwander,A.,andReitzinger,S.(2002). Aparallelalgebraic
multigrid solver for finite element method based source localization in the human
brain. Comp.Vis.Sci.,5(3):165177.
Woods, R., Cherry, S., and Mazziotta, J. (July/August, 1992). Rapid automated algo-
rithm for aligning and reslicing PET images. Journal of Computer Assisted Tomog-
raphy,164:620–633.
Woods, R., Mazziotta, J., and Cherry, S. (July/August, 1993). MRI-PET registration
with automated algorithm. Journal of Computer Assisted Tomography, 17 4:536–
546.
Woods, R. P., Grafton, S. T., Colin, J., Cherry, S. R., and Mazziotta, J. C. (Jan-
uary/February, 1998a). Automated image registration: I. general methods, and intra-
subject, intramodality validation. Journal of Computer Assisted Tomography, 22
1:139–152.
Woods, R. P., Grafton, S. T., Watson, J. D. G., Sicotte, N. L., and Mazziotta, J. C.
(January/February, 1998b). Automated image registration: II. intersubject validation
oflinearandnonlinearmodels. JournalofComputerAssistedTomography,221:153–
165.
Xu, W. and Cumming, I. (January 1999). A region-growing algorithm for InSAR phase unwrapping. IEEE Transactions on Geoscience and Remote Sensing, 37 1:124–134.
Zhang, Z. (1995). A fast method to compute surface potentials generated by dipoles
withinmultilayeranisotropicspheres. Phys.Med.Biol.,40:335349.
Zijdenbos,A.P.,Dawant,B.M.,Margolin,R.A.,andPalmer,A.(Dec1994). Morpho-
metric analysis of white matter lesions in MR images. IEEE Transactions on Medical
Imaging,13:716–724.
Abstract
Biomedical imaging technologies that were originally developed for human medical diagnosis have been adapted to study the anatomy and metabolism of small animals. However, multimodality image registration algorithms, which have been used for analysis of human images, have not been utilized as effectively in the small animal imaging community. As with the human studies, there exist many applications where image registration is a useful task in small animal imaging applications: visualizing and quantifying changes in longitudinal studies, combining complementary information from different modalities such as to employ both anatomical (MR, X-ray CT) and functional (PET, SPECT or optical imaging) information, and identifying and labelling of specific anatomical structures using an atlas. In this thesis, I review the image registration methods commonly used in human studies and show examples where they can also be very effective in small animal imaging applications. I also describe a high resolution 3D digital mouse atlas, Digimouse, for use in the small animal imaging community, using CT, PET and cryosection data, in which I used image registration techniques reviewed earlier. I also show applications of Digimouse to generation of simulated data for use in the evaluation of PET and optical image reconstruction algorithms, and also to the automatic labelling of anatomical structures using X-ray CT images.