AN ULTRASOUND LOCALIZATION SYSTEM
FOR ENDOVASCULAR AORTIC ANEURYSM REPAIR
by
Jay C. Mung
A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(BIOMEDICAL ENGINEERING)
May 2012
Copyright 2012 Jay C. Mung
To My Parents and Julie
Acknowledgments
I owe a lot of people a great deal of gratitude.
Thank you Jesse Yen, PhD for your guidance and flexibility over the years. You
have taught me how to think and how to hold higher standards. Thank you Fred
Weaver, MD for your absolute enthusiasm and support for this project. Your
collaboration and vision have been crucial. Thank you to the rest of the faculty
and committee who have all provided support and valuable feedback: Krishna
Nayak, PhD; K. Kirk Shung, PhD; Brent Liu, PhD; Tzung Hsiai, MD PhD; and
the late Manbir Singh, PhD.
I've had the privilege and pleasure of working with a superb clinical team. These
people have physically struggled alongside me to make this project happen. Sukgu
Han, MD you were there from the beginning, your perseverance laid the groundwork for all the subsequent pre-clinical studies. Thank you. John Moos, MD your
get-it-done attitude and quick thinking have helped us pull through a number of
close calls. Grace Huang, MD your thorough preparation and grasp of the bigger
picture have helped us execute on the vision behind this project. Linda Kirkman,
DVM and Paul Kirkman, you are an amazing team, and the pre-clinical work could
not have happened without you. Ryan Park, thanks so much to you and your team
for accommodating my crazy imaging requests and really nailing the logistics to
make it happen.
I wouldn't be here without the support of past and current lab mates. Rachel
Bitton, PhD thanks for your guidance and advice on how to survive this crazy
process. Samer Awad, PhD thanks for showing me the ropes and the fun trips.
Chi "JJ" Seo, PhD and Jong-Seob Jeong, PhD thanks for putting up with this "new kid" in lab and helping me through. Dan Fullerton, thanks for helping me
out during your undergrad research stint. Hamid Reza Chabok, PhD thanks for
the good talks and good times, especially at IUS. Fan Zheng, thanks for making
the interconnect on such short notice and entertaining my crazy ideas. Hojong
Choi, thanks for your help with the chips.
Yuling Chen, Man Nguyen, Jun Seob Shin, thanks for putting up with the messes
I've made in the lab, my time squatting on the equipment, helping me rent vans
and move boxes. Having you guys around has made an otherwise solitary endeavor
more pleasant. Stick with it, you will finish soon and in less time than me! If you have any questions from now till whenever, just ask, I will make the time. Extend
the spirit of mentorship to those that come after you.
Other friends that I've made at USC, thank you, you've really rounded out my
experience over the years. Nick Wettels, PhD we've come a long way and I really
admire the trail you're blazing. Joe Munaretto, PhD thanks for being such a good
sport and enduring my attempts to keep up with you. Christian Gutierrez, PhD
thanks for the talks and following through on "big dreams". Farhan Baluch, thanks
for lending me random supplies and making rides on the F DASH more fun. Jan
Pfenninger, I really value our times as neighbors in Trojan Plaza. You helped me
focus during a crazy time here.
Thank you to those who keep the BME department running: Mischalgrace Diasanta, thank you for keeping me on track and always being available. Thank you
Sandra Johns and Karen Johnson, past and current chairs Michael Khoo, PhD
and Norberto Grzywacz, PhD.
A great deal of my PhD education took place outside of the classroom and lab.
Thank you to Francois Vignon, PhD and Ameet Jain, PhD of Philips Research
North America for a wonderful summer internship. Paul Thienphrapa, thanks for
dreaming big with me, we'll make those ideas happen. Thank you to my cohort
of Body Engineering Los Angeles Fellows, V. Tim Nayar and David Herman, it's
been a pleasure learning, teaching and making a difference with you guys. Thank
you to Thomas Cummins, let's keep winning.
To society, thanks for the free pass you've given me as a student. I've had time
to think and learn and fail. This has been the greatest privilege of all. School has
been a safe environment for me to take up resources while trying new and different
ideas, fail without judgment and learn from my mistakes.
The real-world is less forgiving. But I am confident that I now have the best chance
of success out there. In the real-world, there is also a great disconnect between
cutting edge health care technology and the level of care that most people have
access to. My place in society is to create and enable technology that will bridge
this gap. Without health, little else matters.
A healthy life means having love and people to share it with. Mike Lee, you've been
with me through thick and thin over the years. You are the embodiment of loyalty.
Thank you. To my parents, Hsien Pin and Char May Mung, you both sacrificed
and risked everything to provide me with opportunity. You made me who I am.
All of my achievements are yours. To my wife, Julie Kim, your unconditional love
has given me strength to do it all. You mean the world to me.
Table of Contents
Dedication ii
Acknowledgments iii
List of Tables x
List of Figures xi
Abstract xvi
1 Introduction 1
1.1 Clinical Background . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Technology Background . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2 The Ultrasound Localization System 10
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.2 Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.2.1 Hardware System Design . . . . . . . . . . . . . . . . . . . . 11
2.2.2 Localization Overview . . . . . . . . . . . . . . . . . . . . . 15
2.2.3 Validation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.4 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3 In Vivo Studies - Part 1 33
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.2 Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.2.1 Preoperative imaging and Registration . . . . . . . . . . . . 35
3.2.2 Surgery and Data acquisition . . . . . . . . . . . . . . . . . 37
3.3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.3.1 Intraoperative experience . . . . . . . . . . . . . . . . . . . . 40
3.3.2 Post euthanasia evaluation . . . . . . . . . . . . . . . . . . . 42
3.4 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4 In Vitro Studies - Array Redesign and Dynamic Tracking 47
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
4.2 Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
4.2.1 Cradle Array Design . . . . . . . . . . . . . . . . . . . . . . 48
4.2.2 Static Points in a Volume . . . . . . . . . . . . . . . . . . . 50
4.2.3 Controlled Motion . . . . . . . . . . . . . . . . . . . . . . . 51
4.3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
4.3.1 Static Points in a Volume . . . . . . . . . . . . . . . . . . . 53
4.3.2 Controlled Motion . . . . . . . . . . . . . . . . . . . . . . . 56
4.4 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
5 In Vivo Studies - Part 2 62
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
5.2 Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
5.2.1 Retroperitoneal Approach Array . . . . . . . . . . . . . . . 63
5.2.2 Virtual Endoluminal View . . . . . . . . . . . . . . . . . . . 65
5.2.3 Imaging and Pre-surgery . . . . . . . . . . . . . . . . . . . . 65
5.2.4 Interventions: Endovascular Navigation, Stent Deployment . 67
5.2.5 Outcome Measures: Data Acquisition and Offline Evaluation 69
5.3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
5.3.1 Ultrasound vs. Fluoroscopic Tracking . . . . . . . . . . . . . 69
5.3.2 Centerline Analysis . . . . . . . . . . . . . . . . . . . . . . . 72
5.3.3 Stent Placement and Necropsy Measurement . . . . . . . . . 73
5.4 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
5.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
6 Conclusion and Future Work 81
6.1 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
6.2 Future Aims . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
6.2.1 Catheter and Guidewire Transducers . . . . . . . . . . . . . 82
6.2.2 New Methods for Registration . . . . . . . . . . . . . . . . . 83
6.2.3 Optimal Sensor Placement . . . . . . . . . . . . . . . . . . . 86
6.2.4 Sound Speed Error Compensation . . . . . . . . . . . . . . . 87
6.2.5 In Vivo Experiments . . . . . . . . . . . . . . . . . . . . . . 88
Bibliography 90
Appendix A Interconnect 97
A.1 List of Components . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
A.1.1 PCB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
A.1.2 Socket Strips . . . . . . . . . . . . . . . . . . . . . . . . . . 100
A.1.3 Assembly Notes, Finished Product . . . . . . . . . . . . . . 101
Appendix B Matlab Code 102
B.1 initialize_USGPS.m . . . . . . . . . . . . . . . . . . . . . . . . . 104
B.2 myFunctionJay.m . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
B.3 JayKalman.m . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
B.4 make_x_i.m . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
Appendix C Virtual Endoluminal View 118
List of Tables
1.1 Current localizer technology and how an ultrasound tracker might
compare. Adapted from [38]. . . . . . . . . . . . . . . . . . . . . . . 5
2.1 RMSE, error mean and error standard deviation for all experiments. Each row represents one experiment where 100 samples were
recorded each at 31 positions for a total of 3100 data points. . . . . 29
4.1 Summary of data points acquired for static volumetric evaluation. 51
4.2 Summary of the global error statistics for static volumetric study. . 54
5.1 Error magnitude statistics describing difference between ULS and fluoroscopy based tracking for 4 trials. . . . . . . . . . . . . . . . . 72
5.2 Stent deployment to target distances for three stents. Negative
values indicate that the stent landed inferior to the target vessel
and positive values indicate that the stent landed superior, thereby
occluding the target vessel. . . . . . . . . . . . . . . . . . . . . . . 77
List of Figures
1.1 Drawing of normal abdominal aorta compared to aneurysm. . . . . 2
1.2 Endovascular vs. open repair of abdominal aortic aneurysm [48]. . . 3
1.3 Fluoroscopy angiogram of abdominal aortic aneurysm with stent-
graft [26]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2.1 Schematic of the hardware used for the ultrasound localization system. 11
2.2 Photograph of catheter transducer showing cylindrical single element tip. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.3 Pulse echo waveform and frequency spectrum. . . . . . . . . . . . . 13
2.4 Impedance magnitude and phase plots. . . . . . . . . . . . . . . . . 13
2.5 Layout for external transducer sensor array with photograph. . . . . 14
2.6 Localization signal pathway flow chart. . . . . . . . . . . . . . . . 16
2.7 Fictive noisy data with envelope detection and two thresholds based
on baseline noise level . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.8 Concept of intersecting circles demonstrating trilateration location
estimate. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.9 Aperture error and correction method. . . . . . . . . . . . . . . . . 22
2.10 Experimental setup for water tank validation showing a.) water
only; b.) water and pork tissue; c.) water, pork and stent graft. . . 24
2.11 L: Registered localizer outputs against ground truth. R: Error magnitude histogram. Data shown is from water only experiment trial 3, with aperture compensation. . . . . . . . . . . . . . . . . . . . . . 27
2.12 Mean RMSE values across all four trials for each experimental setup. 28
2.13 Still frame from movie (no audio, 1:15 @ 30 fps). Inset shows camera
view looking into the water tank. The sensor array and water surface
are at the top of the frame. Background shows an animation of
catheter position from localizer data, plotted in 3D, at a perspective
similar to the video camera angle. . . . . . . . . . . . . . . . . . . . 30
3.1 Pseudocolor CT volume rendering of the swine model abdomen with
contrast agent. The stent target is below (caudal) the branching of
the renal arteries. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
3.2 L: Photograph of EKG fiducial markers placed on pig abdomen. R:
Pig CT imaging. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
3.3 L: Fiducial locations in CT space [mm]. R: Fiducials plotted with
registered array sensor coordinates. . . . . . . . . . . . . . . . . . . 36
3.4 Pig during surgery with transducer array affixed and ultrasound catheter inside femoral artery . . . . . . . . . . . . . . . . . . . . . 37
3.5 Real time GUI display of RF signals and coordinates during exper-
iment. Greatest signal magnitudes are on channels 1,3 & 4. . . . . . 38
3.6 3D Slicer display showing catheter position in 3D rendering and
orthogonal 2D slices. . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.7 Diagram of target stent placement. . . . . . . . . . . . . . . . . . . 39
3.8 Thick orthographic projection CT slices showing bowel gas (green
arrows) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.9 Photo of bowel gas ballooning from intestines upon opening abdominal cavity. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
3.10 Top L: Photo of stent, in situ, deployed just below the renal arteries.
Top R: Explanted aorta containing stent. Bottom: Explanted aorta
and stent showing renal artery clearance. . . . . . . . . . . . . . . 43
4.1 L: Axial slice pig CT image showing boundaries for acoustic access
between the animal surface and the abdominal aorta. R: Potential
cross-section for sensor array placement. . . . . . . . . . . . . . . . 48
4.2 L: CAD rendering of cradle array. R: Photo of completed cradle
array with sensors in place . . . . . . . . . . . . . . . . . . . . . . . 49
4.3 Volumetric grid of points acquired in Set 1. Arrows indicate sensor
angles and tips indicate sensor positions. . . . . . . . . . . . . . . . 51
4.4 Displacement vs. time trace. Peach, turquoise, and pink show regions of 1, 3, 5 cm/s² respectively. . . . . . . . . . . . . . . . . . . 52
4.5 Flow chart of the signal chain for noise filter comparison analysis. . 53
4.6 L: Quiver plot showing bias vectors. R: Heat map showing standard
deviation of points according to location. Color bar max is 0.8 mm. 54
4.7 Histograms for the X, Y, and Z bias for static volumetric data set 1. 55
4.8 Mean RMS bias and standard deviation vs. distance from volume
center. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
4.9 Coronal image slice from pig CT with bias magnitude indicated by
the heat map overlay. Max colorbar indicates 2.7 mm error. Sensor
locations (below the displayed plane) are indicated by the red circles. 56
4.10 Time-displacement and error traces for various filtering strategies at 14 dB system SNR. . . . . . . . . . . . . . . . . . . . . . . . . . 57
4.11 Mean RMS error of 10 trials as a function of system SNR for different filtering strategies. . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
4.12 Results of Golay filtering with a window size of 105 samples on 16
dB SNR data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
4.13 Optimal Golay filter window size as a function of system SNR for
several acceleration levels. . . . . . . . . . . . . . . . . . . . . . . . 59
5.1 Array with sensors and pig. Light blue ultrasound gel is used to
ensure acoustic contact between the sensor and the pig skin. . . . . 63
5.2 L: Customizable wooden bed with custom immobilizer preparation.
R: Swine repositioned in cradle with transducers. . . . . . . . . . . 64
5.3 L: 3D rendering of aorta segmented from CT angiography with centerline. Endoluminal views from Mid L: iliac bifurcation; Mid R: inferior, left renal artery; R: superior, right renal artery . . . . . . 65
5.4 CT imaging axial slice and volume rendering of pig and cradle. . . . 66
5.5 Operating room with pig, Ultrasound Localization System and fluoroscope. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
5.6 Catheter tip tracking from successive frames of cine fluoroscopy against pigtail catheter marked at 1 cm increments. Note data shown here is from fluoroscopy only. . . . . . . . . . . . . . . . . . . . . . 70
5.7 Top: Temporally and spatially registered fluoroscopy and ultrasound position vs. time data. Bottom: Error magnitude between fluoroscopy and ULS data. . . . . . . . . . . . . . . . . . . . . . . . 71
5.8 3D Aorta modeling and rendering from pig CT showing centerline
and Ultrasound Localization System coordinates. Nipple-like structures are EKG leads attached as fiducial markers. . . . . . . . . . . 74
5.9 Ultrasound Localization System coordinates against aorta centerline. 75
5.10 Distribution of distances between tracked points and aorta centerline. 76
5.11 Results from Pig #3 targeting right superior renal artery placement. L: Abdominal aorta with stent in situ. R: Partially dissected
abdominal aorta showing stent. . . . . . . . . . . . . . . . . . . . . 76
6.1 L: Modified central venous catheter with transducer attached. R:
Schematic of custom guidewire. . . . . . . . . . . . . . . . . . . . . 82
6.2 Layer stack for basic catheter mounted transducer. . . . . . . . . . 83
6.3 Flowchart of transducer placement optimization algorithm. . . . . . 87
6.4 L: Flowchart for sound speed error correction. R: Segmented CT
image weighted by sound speed and beam path iterations. Red:
initial estimate, Green: final. . . . . . . . . . . . . . . . . . . . . . . 88
A.1 Completed interconnect, 4-layer design, attached to VDAS connector. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
A.2 Photo showing a cut PCB card, socket connector, SMA connector
and BNC-SMA cable. . . . . . . . . . . . . . . . . . . . . . . . . . 98
A.3 Four "cards" fit onto one PCB sheet. Two cards are required per
design, and two designs are represented. PCB schematic and scans
shown. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
C.1 Top: 3D rendering of vessel surface model with CT slices overlaid.
Bottom: Endoluminal view renderings at the renal take-offs with R: 90 degree view angle and L: 120 degree view angle . . . . . . . . . . 119
C.2 Start 3D Slicer, Load Data (File >> Add Volume) . . . . . . . . . 120
C.3 Crop data to reduce workload Modules >> Crop Volume. Check
that the input volume is correct, create a new MRMLROINode to
represent the ROI. Use the bounding box to select the appropriate
ROI. Leave other parameters default and Apply. . . . . . . . . . . 120
C.4 Establish Fiducials both to seed the segmentation and guide the
endoscopy view. Modules >> Fiducials. Use the axial view and
click in the center of the vessel of interest. . . . . . . . . . . . . . . 120
C.5 Scroll through slices and click on each vessel center. . . . . . . . . . 121
C.6 Segmentation (Modules>> Simple Region Growing). Semi-automatically
selects contiguous vessel from surrounding tissue. Load seeds from
previous fiducial list. Use the correct input volume and create a new
output volume. Use the default settings and Apply. Scroll through
slices and check for satisfactory results. Make sure there are no holes
or bones in the segmentation. If so, readjust multiplier parameter.
If that doesn't work, redo fiducials and then volume cropping. If
results are good, increase iteration parameters for smoother results. 121
C.7 Surface Model (Modules >> Model Maker). Use the previously
segmented volume to create a surface. This greatly reduces the
amount of data and speeds up rendering for 3D visualization. Make
sure to use the correct input volume and to create a new model.
Apply. Go to Modules >> surfaces, select the correct model and
UNCHECK \backface culling". . . . . . . . . . . . . . . . . . . . . 121
C.8 Orient the vessel model to a view that would correspond to the
forward-looking view from a catheter. . . . . . . . . . . . . . . . . 122
C.9 Virtual Endoscopy (Modules >> Endoscopy). Select the fiducial list that corresponds to the fly-through points. Here is the default-setting view at the renal branches. . . . . . . . . . . . . . . . . . . 122
C.10 Change view settings. Same location as above, with 120 degree view
angle. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
C.11 Enable dual 3D view for both endoluminal and gross views. . . . . . 123
C.12 Enable axial slice view to superimpose slice in both views. . . . . . 123
Abstract
Image guidance is crucial to minimally invasive surgery. This dissertation describes
a novel ultrasound-based localization system to guide catheter based repair of abdominal aortic aneurysm. Chapter 1 provides background on the clinical significance of abdominal aortic aneurysm, the state of the art for interventional
tool tracking and summarizes the technical contributions of this work. Chapter 2
describes the Ultrasound Localization System (ULS) in detail; it covers the real-
time signal pathway from raw ultrasound data to location coordinates. The in
vitro experiments show that the ULS is capable of tracking in water with 1.05
mm accuracy and with a layer of intervening pork tissue with 3.23 mm accuracy.
Chapter 3 describes a pilot in vivo study using the ULS in conjunction with pre-
operative CT imaging data to deploy a stent-graft in a live pig. The results showed
tracking was feasible but pig bowel gas reduced system SNR. Chapter 4 describes
a design revision to the ULS based on the pilot in vivo study and follows with further in vitro validation of the new design, presenting various filtering strategies to
smooth motion in the presence of noise. Chapter 5 describes the second in vivo pig
study, using the revised ULS described in the previous chapter as well as the third pig study, using a yet further refined version of the ULS and an intuitive "virtual endoluminal" view. The results show that the ULS system is accurate to within 1 mm of fluoroscopy and capable of delivering aortic stent-grafts within 1 cm of the intended target. Chapter 6 concludes this dissertation and suggests future work, which is required to address ultrasound-tool miniaturization and image registration.
This dissertation represents one published journal article [38], several conference
proceedings [37, 41, 39] and unpublished work. Chapter 5 is derived from work
in preparation for submission to a clinically oriented journal. The authors are
Jay Mung, Grace Huang, John Moos, Jesse Yen and Fred Weaver. Chapter 4 is
derived from work in preparation for submission to a technically oriented journal.
The authors are Jay Mung and Jesse Yen.
Chapter 1
Introduction
1.1 Clinical Background
Abdominal aortic aneurysm (AAA) represents a disease process where the abdominal aorta weakens and dilates by 50 percent (Fig 1.1). This condition greatly
increases the risk of aortic rupture. Each year in the US over 9,000 people die
from ruptured AAA and over 40,000 undergo elective operations for AAA [17].
Endovascular aneurysm repair (EVAR) is becoming the first-line standard of care
in vascular surgery for patients with suitable indications. Like open repair, the
objective of EVAR is to stabilize the aneurysm with an impermeable fabric graft,
thereby preventing rupture or leakage. The difference is that EVAR delivers a
stent-graft via small incisions in the groin rather than a large incision in the
abdomen (Fig 1.2). This paradigm shift from open repair has been particularly
dramatic for abdominal aortic repairs. Current literature demonstrates that for
Figure 1.1: Drawing of normal abdominal aorta compared to aneurysm.
endovascular aortic repairs, the perioperative morbidity and mortality are consistently lower compared to open surgery given appropriate patient selection [42].
Such promising patient outcomes have encouraged surgeons to push the technical
envelope in the development of branching or fenestrating endografts, peripheral or
visceral stenting, and complex thoracic endovascular aortic repairs [43].
1.2 Technology Background
Imaging technology has been crucial to the growth of endovascular procedures.
Preoperative surgical planning relies on high resolution CT angiography with 3D
Figure 1.2: Endovascular vs. open repair of abdominal aortic aneurysm [48].
reconstruction. Fluoroscopy is the gold-standard modality for navigation during
EVAR, which projects a 2D X-ray image taken in specific angles to demonstrate
appropriate anatomy for intervention (Fig 1.3). Its limitations become apparent
as minimally invasive procedures become more complex. It is difficult to appreciate three dimensional features from a 2D projection image. This leads to repeated fluoroscopy in order to obtain multiple projections, which results in increased operative time and exposure to contrast agent and radiation. Adverse outcomes ranging
from erythema, "flushing, nausea, arm pain, pruritus, vomiting, headache and mild urticaria" to life-threatening reactions due to these factors have been documented [29, 35]. Furthermore, given the two dimensional limitation of fluoroscopy, manipulation of the imaging equipment becomes paramount for precise visualization,
ulation of the imaging equipment becomes paramount for precise visualization,
which is not only cumbersome but also a potential pitfall for operator dependent error. These issues are particularly relevant as endovascular AAA procedures are
performed in increasingly difficult cases [7].
Figure 1.3: Fluoroscopy angiogram of abdominal aortic aneurysm with stent-graft
[26].
There has been interest in developing alternative or adjunctive means to fluoroscopy for endovascular procedures. Several groups have studied intravascular
ultrasound (IVUS) as a means of guiding EVAR. Initial clinical experiences were
later followed by studies in 80 patients showing that IVUS can reduce X-ray exposure by 16 minutes and contrast load by 190 ml [62, 59]. While surgeons were
able to identify the desired stent graft landing zones by evaluating the IVUS 2D
luminal cross-section images, such a technique does not provide global positioning
and might not be adequate for complex procedures. A feasibility study with near
real time MRI guidance demonstrated an average stent deployment accuracy of 7.8
mm in five pigs, largely due to the migration of one stent [8]. Though the results
were promising, the authors acknowledged that a 5-10 second image refresh rate and MRI compatibility stand in the way of clinical adoption.
Table 1.1: Current localizer technology and how an ultrasound tracker might compare. Adapted from [38].
Image guided surgery researchers are investigating other methods for instrument
localization. In particular, devices broadly categorized as surgical localizers can
be used to provide real-time information on the instrument's 3D location and
orientation only. Localizers do not provide imaging information but rather are used
in conjunction with image data to navigate surgical procedures [44]. Localizers can
also provide feedback for surgical tool location during robotic surgery [27]. The
two foremost surgical localization systems commercially available now are based
on either optical or electromagnetic technology. Optical localizers require a clear
line of sight and therefore do not provide a viable solution for tracking catheters.
Electromagnetic tracking systems on the other hand have demonstrated potential
in preclinical abdominal procedures with errors < 5 mm [63]. Further study is
necessary to determine whether these errors are acceptable.
Electromagnetic (EM) systems use a remote transmitter to create electromagnetic fields within the tracking volume. A receiver is mounted on the instrument and senses position based on the strength of the detected electromagnetic fields. However, past findings show that error increases in the presence of ferrous metals, which perturb the electromagnetic fields. A number of issues pertaining to
tracking accuracy in the face of dynamic C-arms and metal interference have been
investigated and remain open issues [24, 21]. EM tracker companies continue to
make improvements, and recent results have been promising enough to encourage continued research. EM technology has seen commercial adoption, marketed as a means
to reduce contrast dose [49]. Nonetheless, no electromagnetic localizer solution is
deemed the standard of care in abdominal procedures [54, 61].
Multimodality image-guided surgery approaches merge separate but complementary imaging modalities or technologies. Many have been interested in using real-time 3D electromagnetic tool tracking sensors in concert with imaging for interventional procedures. This has been the natural extension of "stereotactic" surgery
utilized in brain and sinus surgery. Various swine studies using an EM system in
conjunction with preoperative CT images have demonstrated guidewire navigation
with 5-6 mm accuracy [63, 1, 30]. The greatest challenge with these approaches is image-patient registration.
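Image-patient registration of this kind is commonly solved in closed form from paired fiducial points (e.g. markers located both in CT space and in tracker space) using the SVD method of Arun et al. The sketch below is a generic Python illustration of that standard technique, not code from this dissertation; the point sets and pose are arbitrary:

```python
import numpy as np

def rigid_register(P, Q):
    """Least-squares rigid transform mapping fiducials P onto Q.

    P, Q : (N, 3) arrays of corresponding points in the two spaces.
    Returns rotation R (3x3) and translation t (3,) such that
    Q ~ P @ R.T + t (closed-form SVD solution of Arun et al.).
    """
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the optimal solution
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                        # proper rotation
    t = cq - R @ cp
    return R, t

# Synthetic check: recover a known pose from six fiducials
rng = np.random.default_rng(0)
P = rng.normal(size=(6, 3))
th = 0.5                                      # rotation about z
R0 = np.array([[np.cos(th), -np.sin(th), 0.0],
               [np.sin(th),  np.cos(th), 0.0],
               [0.0, 0.0, 1.0]])
t0 = np.array([1.0, 2.0, 3.0])
Q = P @ R0.T + t0
R, t = rigid_register(P, Q)
```

With noise-free correspondences the recovered pose matches exactly; with noisy fiducials the same formula gives the least-squares best fit, which is why point counts and spread of the fiducials matter for registration accuracy.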
Researchers and clinicians alike have long used ultrasound imaging for locating
interventional devices. Breyer and Cikes document the use of standard pulse-echo
ultrasound imaging to visualize the catheter along with the rest of the anatomy
[5]. This method was inadequate due to specular reflections that gave rise to spurious images, which motivated the use of a catheter-mounted monitoring transducer.
One configuration was the transponder method, where the catheter-mounted transducer transmitted a pulse upon receiving an ultrasound signal from the imaging transducer, indicating catheter location as a strong signal on the ultrasound
image. Vilkomerson and Lyons described in detail a similar arrangement where
the catheter based transducer only received the ultrasound signal from the imaging transducer and then injected the signal into the ultrasound image to indicate
catheter position [58]. Merdes and Wolf extended this method to provide 3D coordinate data using a 2D imaging array by comparing received signals to those from the expected imaging array beam profile and achieved 0.23 ± 0.11 mm resolution at a range of 75 mm and 0.47 ± 0.47 mm resolution at a range of 97 mm in vitro,
showing a range-dependent resolution [33]. Mung, Vignon and Jain implement similar functionality in real-time on a 3D commercial scanner and achieve 0.36 ± 0.16 mm tracking accuracy [40]. An alternative approach to visualizing interventional
device tips has been to vibrate the device and image the tip with Doppler ultrasound. Armstrong et al used the ColorMark device, which vibrated needles at 1-3 kHz for positive needle tip identification under Doppler ultrasound and demonstrated success in 18 of 25 clinical pericardiocentesis procedures [3]. Fronheiser
and Idriss employed a similar approach for locating catheters and demonstrated
success visualizing device tips in real time 3D in vivo [14].
1.3 Contributions
This work describes a prototype ultrasound localizer system much like an electromagnetic localizer. Since our system does not use EM signals, it is ostensibly immune to the interference that can degrade EM tracker performance. The system does not use ultrasound imaging but rather ultrasound signals. The catheter tip is equipped with a single-element ultrasound transducer. This transducer transmits an ultrasound pulse which is received by an array of six to seven external single-element ultrasound receivers that are held in contact with the body. The received signals are threshold-detected to provide time-of-flight measurements, which then provide the location estimate. Time-based localization of gunshots has been documented since around World War I, and ultrasound imaging based methods for catheter localization have been documented since 1984 [47]. However, there has been little published work combining these two concepts. Thus, we have developed a novel ultrasound based 3D catheter localization device for endovascular navigation. This has required the development of custom hardware, patient-specific sensor arrays and signal processing techniques. Furthermore, a virtual endoluminal view is presented as an intuitive method for navigating the vasculature. In this dissertation I demonstrate the in vivo feasibility of using the Ultrasound Localization System in conjunction with preoperative CT images to guide EVAR procedures in a porcine model.
Chapter 2
The Ultrasound Localization
System
2.1 Introduction
This chapter discusses the design and demonstrates the in vitro feasibility of the ultrasound localization system. The initial design goals for this system were set with consideration to endovascular AAA procedures. The abdominal aorta lies along the top of the spine, at a depth of about 10-30 cm below the abdominal surface, depending on patient body habitus. Therefore the target penetration depth for this system is 10-30 cm. One possible cause of failure during endovascular AAA procedures is occlusion of the renal arteries [15]. Renal arteries generally have a radius of approximately 2.5 mm and therefore our target performance is to achieve root mean square error (RMSE) < 2 mm.
2.2 Methods
2.2.1 Hardware System Design
Figure 2.1: Schematic of the hardware used for the ultrasound localization system.
Figure 2.1 shows a schematic of the ultrasound localizer system hardware. The catheter tip includes an ultrasound transducer. A pulser (Panametrics 5072PR) drives this transducer with a 5-10 ns, −360 V spike excitation to transmit an ultrasound pulse. The external transducer sensor array, held in acoustic contact with the patient's abdomen, receives the pulse. The Verasonics Ultrasound Engine (Redmond, WA) ties these pieces of hardware together [57]. It provides the trigger signal to the pulser and amplifies and digitizes the incoming radio frequency (RF) data from the transducer sensor array. This RF data is passed on to the host PC, which processes the data to estimate the catheter's coordinates.
Catheter Transducer
Figure 2.2: Photograph of catheter transducer showing cylindrical single element
tip.
The localizer system uses a custom catheter transducer (Fig 2.2). It is made of a single-element cylindrical tube of PZT-5H which resonates predominantly in the radial direction (Boston Piezo-Optics). It is mechanically and electrically connected to the end of a coaxial cable, which mimics a catheter. The transducer has a resonance frequency of about 3.5 MHz with 19% fractional bandwidth, as evaluated by pulse-echo and electrical impedance tests (Figs 2.3, 2.4). The resonant frequency falls within the range of frequencies associated with abdominal ultrasound imaging, which requires sufficient penetration depth [52].
Figure 2.3: Pulse echo waveform and frequency spectrum.
Figure 2.4: Impedance magnitude and phase plots.
Transducer Sensor Array
Figure 2.5: Layout for external transducer sensor array with photograph.
Sensor count and placement influence localization performance and are an active area of research. At least three non-collinear sensors are required for time-of-flight 3D localization. More sensors deliver increased performance in the face of noisy measurements and can also provide redundancy in the event that a particular sensor measurement is invalid. This work begins with seven sensors arranged in a uniform circular array (Figure 2.5) as described by McKay and Pachter [31]. We have previously characterized the theoretical performance of this array using Dilution of Precision [41]. A custom-made acrylic plate holds the transducers in the shape of the 20 cm diameter array. This diameter was selected to correspond to the approximate dimensions of an adult human abdomen. The sensor transducers are 3.5 MHz pistons (Olympus Panametrics V382) with a 13 mm diameter nominal aperture. The transducer frequency was chosen to match that of the catheter transducer. A custom-made interconnect of RG-58 coaxial cable connects the transducers to the Verasonics Ultrasound Engine.
Verasonics Ultrasound Engine
The Verasonics platform is a programmable PC-based ultrasound imaging research system. It is capable of acquiring 128 channels of data in parallel with 60 MHz sampling. Each channel can be subject to an individual hardware FIR filter. For this work, the Verasonics is not used for its imaging capabilities but rather as a high-speed parallel data acquisition and signal processing device. The localizer system uses seven channels, one for each sensor, sampled at 15 MHz and bandpass filtered to suppress DC components and high-frequency noise.
2.2.2 Localization Overview
Figure 2.6 shows the signal pathway from raw RF data acquisition all the way to location coordinate XYZ output. The timing and data acquisition parameters are set on the Verasonics system, which first digitally filters the RF data in hardware before passing the data to the PC. The PC averages the multiple lines of incoming RF data and performs a Hilbert transform for envelope detection. Next the PC performs time of arrival (TOA) detection, least-squares estimation, aperture error compensation and finally registration. The following subsections describe these steps in greater detail.
Figure 2.6: Localization signal pathway flow chart.
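The averaging and envelope-detection stage above can be sketched as follows. This is an illustrative Python reconstruction, not the system's actual host-PC code; the function names are hypothetical, and the analytic signal is computed with a basic FFT-based Hilbert transform:

```python
import numpy as np

def envelope(rf):
    """Envelope of a real RF trace via the analytic signal
    (FFT-based Hilbert transform along the last axis)."""
    rf = np.asarray(rf, dtype=float)
    n = rf.shape[-1]
    spectrum = np.fft.fft(rf, axis=-1)
    # Build the one-sided spectral weighting that yields the analytic signal.
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(spectrum * h, axis=-1))

def average_and_envelope(lines):
    """Average repeated RF acquisitions from one sensor, then envelope-detect.
    lines: (n_repeats, n_samples) array of RF traces."""
    return envelope(np.mean(lines, axis=0))
```

Averaging before envelope detection suppresses uncorrelated noise by roughly the square root of the number of repeats, which matters for the threshold-based wavefront detection that follows.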
Wavefront Detection for Times of Arrival
In order to provide real-time location estimation, the wavefront time of arrival
(TOA) must be extracted from each line of RF data. TOA detection is an active
area of research. Peak detection and cross-correlation based approaches have been used to find TOA for acoustic source localization [28, 11]. However, these methods can suffer in reverberant and multipath environments.
Multipath artifacts occur when ultrasound energy takes an indirect route from the catheter to the sensor due to reflections. Direct path signals represent the shortest distance between the catheter and the sensor. Therefore, for a given transmit event, multipath signals always arrive at times later than the true signal. In homogeneous attenuating media, multipath energy is subject to greater attenuation due to its longer travel distance. However, a multipath signal artifact can have greater amplitude by impinging upon the sensor at a more favorable angle with respect to the sensor's directivity pattern. An inhomogeneous medium of tissues and organs may also subject a direct path signal to greater attenuation than a multipath signal. In other words, signal amplitude is not a reliable indicator of multipath artifacts. In the presence of multipath artifacts, the most reliable method to find TOA is to detect the leading edge of the signal wavefront [50].
Finding the time at which a signal first crosses a set threshold is one method of leading edge detection. This requires careful consideration of the threshold level. The lower the threshold, the more likely noise can trigger a threshold-crossing event. At the same time, a higher threshold might bias TOA detection towards later times due to signal rise time, and increases the likelihood of missing a wavefront altogether. Some bias can be compensated for with a double-threshold-crossing method, where two threshold-crossing times (e.g. at 6 mV and 10 mV) extrapolate the zero-crossing of the leading edge via parabolic fit [32]. Ultimately, leading edge detection performance is non-linear and more sensitive to SNR than cross-correlation based methods. This work uses a form of the double-threshold-crossing method for leading edge detection. The thresholds are set in the following manner. First, the algorithm finds the RMS amplitude of the initial duration (< 25 µs) of the recorded signal, due to noise, before the ultrasound signal arrives. The algorithm sets two thresholds at 4 and 8 times the noise amplitude, which correspond to SNRs of 12 and 18 dB. Figure 2.7 demonstrates a double threshold with fictive noisy data. A successful wavefront detection event occurs when the signal crosses both thresholds within an empirically determined duration (< 2 µs). The time at which the lower, earlier threshold crossing occurs is recorded as the TOA. This method provides a TOA with the bias of the lower threshold and the noise immunity of the higher threshold.
Figure 2.7: Fictive noisy data with envelope detection and two thresholds based on the baseline noise level.
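The detection rule above can be sketched as follows. This is a hypothetical Python reconstruction, not the system's actual code; the 4x/8x threshold factors and the 25 µs noise window and 2 µs pairing window follow the description in the text:

```python
import numpy as np

def detect_toa(env, fs, noise_window=25e-6, pair_window=2e-6):
    """Double-threshold leading-edge TOA detection.

    env: envelope-detected trace; fs: sampling rate [Hz].
    The baseline RMS noise is measured over the initial quiet window, the
    low/high thresholds are set at 4x and 8x that level, and a wavefront is
    accepted only when both thresholds are crossed within pair_window.
    Returns the TOA in seconds (time of the lower crossing), or None.
    """
    env = np.asarray(env, dtype=float)
    noise_rms = np.sqrt(np.mean(env[: int(noise_window * fs)] ** 2))
    lo, hi = 4.0 * noise_rms, 8.0 * noise_rms
    max_gap = int(pair_window * fs)
    for i in np.flatnonzero(env > lo):
        # Reject isolated low-threshold crossings (noise spikes): the high
        # threshold must also be crossed within the allowed window.
        if np.any(env[i : i + max_gap] > hi):
            return i / fs
    return None
```

Returning the lower-crossing time keeps the smaller bias of the low threshold while the paired high crossing supplies the noise immunity.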
Location Estimation: Closed-form Linear Least-Squares Estimator
This work uses trilateration to estimate the source location. Figure 2.8 illustrates this concept in 2D. Each sensor's coordinates designate the origin of a sphere (the figure shows circles). The TOA at each sensor estimates the distance between the source and that sensor. These distances designate each sphere's radius. The point where all spheres intersect provides the source location estimate. Because this work uses up to 7 sensors with noisy measurements, a unique solution does not exist. The linear least-squares approximation is one method to solve such an over-determined system [6, 53].
The notation in this section is derived from Smith and Abel [51], where boldfaced characters indicate vectors. The linear least-squares estimate for the source location is:

\hat{\mathbf{x}}_s = \left(S^T S\right)^{-1} S^T \mathbf{b} \qquad (2.1)

where
Figure 2.8: Concept of intersecting circles demonstrating trilateration location
estimate.
S = 2 \begin{bmatrix} \mathbf{x}_1^T \\ \mathbf{x}_2^T \\ \vdots \\ \mathbf{x}_N^T \end{bmatrix} \qquad (2.2)
and
\mathbf{b} = \begin{bmatrix} D_1^2 - R_1^2 \\ D_2^2 - R_2^2 \\ \vdots \\ D_N^2 - R_N^2 \end{bmatrix} \qquad (2.3)
\hat{\mathbf{x}}_s is the three-element vector estimate of the source location. \mathbf{x}_i is the location of the ith sensor for the i = 1, \ldots, N sensors. R_i = \|\mathbf{x}_i\| is the distance from each sensor to the origin and D_i is the estimated distance between the source and each sensor, as calculated by

D_i = c \, \mathrm{TOA}_i \qquad (2.4)

where \mathrm{TOA}_i is the estimated time of arrival and c is the speed of sound. S is an N \times 3 matrix containing the sensor locations and \mathbf{b} is an N-element column vector derived from the sensor locations and distance values. Equations (2.1) to (2.3) are derived from the set of algebraic equations that solve for the point of intersection between several spheres [25].
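As a concrete illustration, the Python sketch below solves the same over-determined sphere-intersection problem. It is not necessarily the exact estimator of Eqs. (2.1)-(2.3): in this common variant the unknown \|\mathbf{x}_s\|^2 term of the squared sphere equations is carried as a fourth unknown, and for a planar z = 0 sensor array (as with the circular array of this chapter) the source depth is recovered from that term, taking the solution below the array. The function name is hypothetical; the default sound speed is the water value used later in this chapter:

```python
import numpy as np

def locate(sensors, toas, c=1486.56):
    """Least-squares source location from absolute times of flight.

    sensors: (N, 3) sensor coordinates [m]; toas: (N,) TOAs [s];
    c: speed of sound [m/s].

    Linearizes ||x_s - x_i||^2 = D_i^2 with r2 = ||x_s||^2 as an extra
    unknown:  -2 x_i . x_s + r2 = D_i^2 - R_i^2.
    """
    x = np.asarray(sensors, dtype=float)
    d = c * np.asarray(toas, dtype=float)        # D_i = c * TOA_i  (Eq. 2.4)
    A = np.hstack([-2.0 * x, np.ones((len(x), 1))])
    b = d ** 2 - np.sum(x ** 2, axis=1)          # D_i^2 - R_i^2
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    xs, r2 = sol[:3].copy(), sol[3]
    if np.allclose(x[:, 2], 0.0):
        # A planar z = 0 array leaves z undetermined in the linear step;
        # recover the depth magnitude from r2 and take the source below.
        xs[2] = -np.sqrt(max(r2 - xs[0] ** 2 - xs[1] ** 2, 0.0))
    return xs
```

With exact ranges and a non-degenerate array this recovers the source to numerical precision; with noisy TOAs it returns the linear least-squares fit.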
Aperture Error and Compensation
Most source localization algorithms operate under the assumption that sensors
and sources occupy points in space. This assumption is valid when the sensor and
source apertures are smaller than the center frequency wavelength . This work
uses sensor transducers with diameters of about 30. This large diameter aperture
aects not only directionality but also time of arrival measurements. Due to the
sensor's aperture, the signal emanating from the source arrives rst at the nearest
edge of sensor rather than its center. This will result in TOA values earlier than
expected and cause localization error (Fig 2.9).
Figure 2.9: Aperture error and correction method.
This work uses a two-step technique to compensate for the aperture error. The first step is to perform source localization normally, relying on the point-sensor assumption to provide initial sensor locations. The effective location of each sensor is then shifted in the direction of the initial estimate, by a distance constrained to the dimensions of the sensor aperture. For a circular sensor aperture the shift distance is no greater than the aperture radius. The shifted effective sensor locations then provide a refined source location estimate that accounts for aperture error (Fig 2.9).
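The second, geometric step can be sketched as below. This is an illustrative Python fragment with hypothetical names; the 6.5 mm default is half the 13 mm nominal aperture of the V382 sensors, and the sensor faces are assumed to lie in the x-y plane:

```python
import numpy as np

def shift_effective_sensors(sensors, initial_estimate, a=6.5e-3):
    """Shift each sensor's effective position toward the initial location
    estimate, along the sensor face, by at most the aperture radius a.
    The shifted positions are then fed back into the same least-squares
    estimator to obtain the refined, aperture-compensated estimate.
    """
    shifted = np.array(sensors, dtype=float)
    est = np.asarray(initial_estimate, dtype=float)
    for s in shifted:                   # s is a view: edits land in shifted
        lateral = est - s
        lateral[2] = 0.0                # constrain the shift to the face plane
        dist = np.linalg.norm(lateral)
        if dist > 0.0:
            s += min(a, dist) * lateral / dist
    return shifted
```

Clipping the shift at the aperture radius keeps the effective position on the physical sensor face, matching the "no greater than the aperture radius" constraint above.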
Registration
The final step after the location estimate is registration. Registration aligns the localizer output with the appropriate frame of reference. For this work only non-reflection, rigid-body transformations are allowed, which means only rotation and translation without flipping, skewing or scaling. This work uses the Procrustes Analysis method for point-based registration to find the transformation matrix [22].
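A rigid-body Procrustes fit can be sketched with a standard SVD (Kabsch-style) construction under the no-reflection, no-scale constraint stated above. This is a generic sketch, not the dissertation's exact implementation:

```python
import numpy as np

def procrustes_rigid(A, B):
    """Least-squares rigid-body registration of corresponding point sets.

    A, B: (N, 3) arrays of matched points. Returns (R, t) such that
    R @ a + t maps points of A onto B, with R a proper rotation
    (no reflection, skew or scale).
    """
    A, B = np.asarray(A, dtype=float), np.asarray(B, dtype=float)
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)           # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0.0:          # reject reflections
        Vt[-1] *= -1.0
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t
```

Applied here, A would be the averaged localizer outputs and B the corresponding ground-truth positions (or, in the in vivo work of Chapter 3, the CT-space fiducial coordinates).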
2.2.3 Validation
We used a 20-gallon tank of filtered, degassed water for all validation experiments. Water temperature was monitored throughout the course of the experiments to be 21.4 ± 0.64 °C. A mean sound speed of 1486.56 m/s was selected from published values based on this mean temperature [19].
Figure 2.10: Experimental setup for water tank validation showing a.) water only;
b.) water and pork tissue; c.) water, pork and stent graft.
Figure 2.10 shows the three experimental setups considered for quantitative validation. The first setup was in the water tank only. The second setup included a slab of pork belly, mounted beneath the sensor array to simulate the abdominal wall. The pork belly had skin, fat layers and muscle intact, was 3-5 cm thick and 24 × 24 cm square, and was obtained from a local butcher. The third setup included an additional nitinol-dacron abdominal aortic aneurysm stent graft. The catheter was mounted inside the stent graft, which was mounted on the mechanical stage. The purpose of this setup was to simulate catheter navigation within an AAA stent graft.
All three experimental setups underwent the same experimental protocol. The sensor array was mounted on the tank so that the active elements faced downward, submerged in the water. A manually controlled three-axis, millimeter-precision mechanical stage (Velmex, Bloomfield, NY) was also mounted on the tank. This stage held and moved the catheter transducer in the water along a predetermined, ground truth path. The path was selected to broadly represent imaging depths and fields of view that would be encountered during endovascular AAA procedures. The path started at coordinate point (0, 0, 10) cm and advanced to points (10, 0, 10), (10, 0, 20), and (10, 10, 20) cm in 1 cm increments (Fig 2.11) for a total of 31 locations. The localizer recorded 100 samples of the catheter's location at each increment along the path, for a total of 3100 data points per experimental run. Aperture compensation was performed offline. Experimental runs were repeated 4 times each for the three experimental setups.
Because the localizer sensor array and mechanical stage were not calibrated to each other, the two frames were brought into registration before calculating error. We used Procrustes analysis for point-based registration to calculate the registration transformation matrix by fitting the average of the 100 localizer output samples at each of the 31 positions to the ground truth path. The transformation matrix was applied to each localizer output position and the root mean square error was calculated for each experimental run. The root mean square error (RMSE) is defined by the equation:
\mathrm{RMSE} = \left[\frac{1}{M} \sum_{k=1}^{M} \sum_{a=1}^{3} \left(x_{ka} - \hat{x}_{ka}\right)^2\right]^{1/2} \qquad (2.5)
where x_{ka} is the ground truth coordinate value and \hat{x}_{ka} is the localization system estimate after registration. Subscript k denotes the sample number for a total of M samples; subscript a can take the values 1, 2 or 3 in order to denote the x, y or z axis [34]. The error means and standard deviations are also reported.
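Eq. (2.5) amounts to a short helper; a Python sketch (with a hypothetical function name) is:

```python
import numpy as np

def rmse(truth, est):
    """RMSE per Eq. (2.5): squared errors summed over the three axes,
    averaged over the M samples, then square-rooted.
    truth, est: (M, 3) arrays of registered positions."""
    truth = np.asarray(truth, dtype=float)
    est = np.asarray(est, dtype=float)
    return float(np.sqrt(np.mean(np.sum((truth - est) ** 2, axis=1))))
```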
A fourth experiment was performed for qualitative evaluation. In this experiment,
the localizer tracked the catheter in the water tank only. The experimenter held
the catheter wire by hand, approximately 10 cm from the tip. The experimenter
then submerged the catheter inside the tank and moved the catheter throughout
the field of view of the sensor array. The localizer digitized, plotted and saved
position information at 25 fps for approximately 2 minutes. A digital camera
(Canon Powershot SD 850) recorded video of the catheter movement at 30 fps for
the same duration. We used Matlab software (Mathworks, Natick, MA) to create
an animation from the position plots. We used Final Cut Express video editing
software (Apple Inc, Cupertino, CA) to synchronize the animation and the video,
upsample the animation and create a picture-in-picture effect. The movie is played
back in real time at 30 fps.
2.3 Results
Figure 2.11: L: Registered localizer outputs against ground truth. R: Error magnitude histogram. Data shown is from the water-only experiment, trial 3, with aperture compensation.
Figure 2.11L shows all the registered localizer output points and ground truth locations for one experimental run (water only, trial 3, with aperture compensation). Each ground truth location is associated with a cluster of 100 localizer samples. Figure 2.11R is a histogram of error magnitudes for the same data. The errors for the 3100 data points are normally distributed with mean and standard deviation 0.91 ± 0.37 mm.
Figure 2.12: Mean RMSE values across all four trials for each experimental setup.
Figure 2.12 shows the mean RMSE values across all four trials for each experimental setup. The numerical RMSE values along with error means and standard deviations for all trials are reported in Table 2.1. Mean RMSE error with and without aperture correction is lowest in the water-only case and increases with the presence of pork tissue and the stent graft. Aperture correction significantly decreases mean RMSE in the water tank only (p < 0.01).
Table 2.1: RMSE, error mean and error standard deviation for all experiments.
Each row represents one experiment where 100 samples were recorded each at 31
positions for a total of 3100 data points.
2.4 Discussion
The results show that the ultrasound localizer performance is best in a pure water environment and degrades in the presence of tissue and, additionally, the stent graft. This degradation is likely due to two factors. The first is sound speed. We used a predetermined, constant sound speed based on the temperature of pure water. Tissue sound speed is different and published values are available. However, we felt no published values were applicable to our particular experimental conditions with
Figure 2.13: Still frame from movie (no audio, 1:15 @ 30 fps). Inset shows camera
view looking into the water tank. The sensor array and water surface are at the top
of the frame. Background shows an animation of catheter position from localizer
data, plotted in 3D, at a perspective similar to the video camera angle.
pork belly submerged in pure water at room temperature. Tissue sound speed also varies based on composition, and a constant sound speed assumption also leads to degraded performance. Anderson et al note that sound speed errors on the order of 5% are routinely encountered in clinical practice and describe the effects on ultrasound image quality [2]. Future work on the effect of sound speed deviation on localizer performance is necessary.
The second likely factor for degraded performance is SNR. Tissue has a higher attenuation coefficient than water. Earlier work shows that a nitinol-dacron stent graft placed around a catheter transducer can attenuate peak ultrasound signals by a factor of 2 [4]. With a lower SNR, time of arrival measurements are more susceptible to error from random noise as well as erroneous threshold triggering. Techniques such as Kalman filtering could alleviate errors associated with low SNR.
The results show that aperture correction significantly (p < 0.01) improves localization performance in water only. Aperture correction did not significantly improve localization performance in the presence of the tissue and the stent. It is possible that aperture correction is only effective when the initial location estimate is within a certain level of error. Additional iterations of aperture correction may improve localization performance in the presence of the tissue and stent.
Only the water tank results met our initial design goal of RMSE < 2 mm. The pork tissue and stent results suggest that the localizer will not be sufficiently accurate in a clinical setting for endovascular AAA procedures. This finding motivates further study and refinement of the system. Ultimately the performance of tool tracking in image-guided surgery depends on more than just localizer accuracy. The registration process is a non-trivial component of image guidance. Measures such as target registration error take registration into account and therefore might provide better insight on clinical applicability than the RMSE values presented here [22].
This chapter has outlined the design and implementation of a real-time ultrasound based catheter localizer system. The system was designed towards integration with endovascular AAA procedures. This chapter presented in vitro work. Patient-specific parameters such as sensor array geometry and sensor count were not considered. For example, a 20 cm diameter 2D sensor array might not fit and maintain acoustic contact on all patients. Furthermore, a particular patient may require more or fewer than seven sensors for acceptable tracking performance. Future work will consider sensor array shapes that better conform to patient anatomy and the effect of sensor count on performance.
We have demonstrated a working real-time prototype ultrasound catheter localization system in vitro. The long-term goals for this work are to demonstrate feasibility for endovascular AAA procedures and to show that ultrasound technology may provide another option for surgical localizer systems; that fluoroscopy need not be the exclusive modality for catheter guidance. To achieve these goals, further evaluation is necessary, especially study of the registration between the location estimation coordinates, anatomical images, and physical anatomy.
Chapter 3
In Vivo Studies - Part 1
3.1 Introduction
Any design process is by nature iterative. Medical device design usually includes
in vivo studies in order to spur meaningful iterations. This chapter discusses the
design and performance of the Ultrasound Localization System over a series of in
vivo studies using three swine, in close collaboration with the USC Department of Vascular Surgery. While the initial intention was to perform animal studies with N = 3 in an effort towards statistical significance, each experiment provided new insight for subsequent experiments, and improvements to the system were made accordingly. Therefore, these three studies represent less a sampling than an evolution of the system design and performance.
Pigs have traditionally been used in studies involving the vascular system because
of the similarity between swine and human vasculature. All of our experiments
were performed with female Yorkshire pigs weighing 80-90 lbs., in accordance with
Figure 3.1: Pseudocolor CT volume rendering of the swine model abdomen with contrast agent. The stent target is below (caudal to) the branching of the renal arteries.
institutional USC IACUC protocol #11226. Animals were monitored at all times by a veterinarian and an anesthesia technologist. It bears mentioning that all animals were treated with utmost respect and all efforts were taken to minimize animal discomfort. All studies were sacrificial.
This chapter and the chapter that follows describe the in vivo experiments. This chapter discusses the first pig study, which showed that the Ultrasound Localization System external array needed to be placed underneath the animal in order to obtain good ultrasound signal. The following chapter discusses the next two pig studies with revised array placement and fluoroscopic validation.
3.2 Methods
3.2.1 Preoperative imaging and Registration
The animal was placed under temporary sedation for transport and CT imaging. Prior to imaging, button EKG leads were placed on the animal to serve as registration fiducials. The fiducials were positioned using a template corresponding to the array of 7 abdominal transducers and centered in the vicinity of the abdominal aorta/renal artery takeoff. This position was approximated using the location of the swine's navel and nipples (Fig 3.2L).
A Siemens Biograph 64 PET/CT scanner acquired arterial phase contrast images
and saved them at 1 mm slice thickness (Fig 3.2). Images were reviewed in OsiriX
(Fig 3.1). Fiducial markers within the CT image were located using the Fiducials module in 3D Slicer and manually clicking on the markers. The CT image space
Figure 3.2: L: Photograph of EKG fiducial markers placed on pig abdomen. R: Pig CT imaging.
locations of the fiducial markers are given in (Fig 3.3). These coordinates were registered with the abdominal transducer array elements to provide coarse registration between the localizer and the CT space. Registration was performed with Procrustes analysis [18] and yielded a mean Fiducial Registration Error of 5.78 mm [12].
Sensor X Y Z
1 10.923 227.670 988.5
2 -59.702 221.881 1023.5
3 11.502 233.460 1063.5
4 67.076 225.993 1028.5
5 67.076 213.777 953.5
6 5.134 212.040 913.5
7 -64.333 209.153 958.5
Figure 3.3: L: Fiducial locations in CT space [mm] R: Fiducials plotted with
registered array sensor coordinates.
3.2.2 Surgery and Data acquisition
Following the imaging study, the animal was recovered from sedation and rested
for 12 hours. The next morning, the animal was placed under general anesthesia
with mechanical ventilation. A femoral cutdown was employed to gain access to
the right femoral artery and place a 9 Fr introducer sheath inside the artery to
facilitate introduction of the ultrasound catheter.
Figure 3.4: Pig during surgery with transducer array affixed and ultrasound catheter inside the femoral artery.
The EKG leads were removed and the abdominal array was placed and secured at
the location of the markers (Fig 3.4). Liberal amounts of ultrasound transmission
gel ensured acoustic contact. The ultrasound tracking system was initiated and
the catheter was advanced and pulled back inside the artery for several runs over the next several hours. 3D Slicer software with OpenIGTLink provided a real-time image guidance display [16, 45, 56]. For troubleshooting purposes, frame averaging parameters, filtering parameters and abdominal array placement were varied while tracking was activated. Data was recorded in the form of time-stamped coordinate outputs, intermittent raw RF data and monitor display video screen captures (Figs 3.5, 3.6).
Figure 3.5: Real time GUI display of RF signals and coordinates during experi-
ment. Greatest signal magnitudes are on channels 1,3 & 4.
A final catheter tracking run was performed to place the catheter at the intended stent deployment site, just below the renal arteries (Fig 3.7). The proximal end of the catheter was marked at the lip of the introducer sheath to measure the length of
Figure 3.6: 3D Slicer display showing catheter position in 3D rendering and orthogonal 2D slices.
Figure 3.7: Diagram of target stent placement.
the catheter from the femoral access site to the stent site. This length was used to mark the stent deployment catheter. The ultrasound catheter and introducer sheath were removed and the stent deployment catheter (Gore Excluder AAA Endoprosthesis, W.L. Gore & Associates, Inc.; Flagstaff, AZ) was introduced into the access site. The deployment catheter was advanced to the length measured from the ultrasound catheter and the stent graft was deployed. The animal was then euthanized and the operation was converted to an open surgery to expose the abdominal aorta. The abdominal aorta and stent graft were then explanted to evaluate stent placement location.
3.3 Results
3.3.1 Intraoperative experience
Large pockets of bowel gas were visible in the preoperative CT images (Fig 3.8) and later confirmed by direct surgical visualization. The raw RF data showed that ultrasonic signals with adequate SNR were only consistently available at sensors 1, 3 and 4, which corresponded to the upper right quadrant of the abdominal array. This corresponded to a position largely occupied by the liver. Moving the array towards the head, away from the bowels, increased signal magnitude on the sensors but invalidated the pre-operative registration.
Figure 3.8: Thick orthographic projection CT slices showing bowel gas (green
arrows)
The real time nature of the system provided immediate feedback on system per-
formance. When the catheter was not being actively manipulated, respiratory
motion oscillation was detectable and could be suppressed by imposing a breath-
hold with the mechanical ventilator. Qualitatively, axial tracking up and down the
aorta appeared more accurate than left-right or anterior-posterior tracking, which
appeared erratic around the image. This could have been due to user perception,
as the axial position selected axial slices on which L/R and A/P positions were
displayed.
3.3.2 Post euthanasia evaluation
Opening the abdominal cavity showed large amounts of bowel gas present in the
intestines (Fig 3.9), consistent with the preoperative CT images. Direct surgical
exposure of the abdominal aorta showed that the stent was successfully deployed.
Explantation of the aorta with stent showed that the upper lip of the stent was
1-3 mm below the renal bifurcation (Fig 3.10). Note that the process of cutting
the aorta released tension on the vessel which caused it to relax and shrink from
its in vivo state.
Figure 3.9: Photo of bowel gas ballooning from intestines upon opening abdominal
cavity.
Figure 3.10: Top L: Photo of stent, in situ, deployed just below the renal arteries.
Top R: Explanted aorta containing stent. Bottom: Explanted aorta and stent
showing renal artery clearance.
3.4 Discussion
We performed this pilot study to assess feasibility and troubleshoot the ultrasound localization system in vivo. It is clear that bowel gas is a major impediment to implementing the system in the porcine model, as gas reflects almost the entire ultrasound signal. Humans have less bowel gas than pigs, but nonetheless bowel gas is an important consideration.
Barring any solutions to remove bowel gas, we must consider a different approach to acoustic access of the abdominal aorta. Image data as well as preliminary RF data suggest that bowel gas can be avoided by placing sensors on the flank and back of the animal, transmitting signal through the liver and back muscle. Although the current abdominal array fits the animal well, the transducers must be reconfigured in order to facilitate acoustic access, avoiding both gas and bone. Dilution of precision can provide insight on optimal transducer placement given these constraints [31, 41]. As such, we have built a prototype "cradle array" to hold transducers against the back and flank of the animal for our next study.
Registration between the preoperative image, patient and localizer also remains an unresolved issue. The currently accepted method to ensure correspondence between the preoperative image and the patient in the OR is to position the patient exactly as they were positioned in the imaging scanner. To this end, we will investigate custom patient positioners/immobilizers such as the Alpha Cradle (Smithers Medical Products) [46]. However, this does not address the issue of registering the localizer coordinate frame to the image and patient. Our current method provides coarse registration but is inadequate in that it depends on the accuracy of replacing the transducer array as it is marked in the CT image. We will investigate a more rigorous registration scheme where the ultrasound localization device itself is used to locate a series of fiducials in order to bring the patient and image into registration. At this stage, we will not consider soft-tissue deformation due to respiratory motion, as the abdominal aorta remains relatively static in relation to the spinal column.
Further technological improvements include provisions to address noise within the system. To this end, we will implement a custom hardware interconnect between the transducers and the Verasonics system. We note that the system does become contaminated with RF noise when the electrocautery device is in use, though this is not an issue because the device is used only to gain access to the artery. We will also implement extended Kalman filtering to ensure smooth, realistic position outputs [64].
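The smoothing idea can be illustrated with a linear constant-velocity Kalman filter; the planned filter is an extended Kalman filter built on a kinematic model, and the noise parameters, velocities and 24 Hz rate below are illustrative rather than the system's values:

```python
import numpy as np

def kalman_cv(z, dt, q=1e-3, r=0.25):
    """Constant-velocity Kalman filter for a 1D position track.

    z: noisy position measurements [mm]; dt: sample period [s];
    q: process-noise intensity; r: measurement variance [mm^2].
    All noise parameters here are illustrative."""
    F = np.array([[1.0, dt], [0.0, 1.0]])        # state transition
    H = np.array([[1.0, 0.0]])                   # measure position only
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],    # white-accel process noise
                      [dt**2 / 2, dt]])
    x = np.array([z[0], 0.0])                    # initial state [pos, vel]
    P = np.diag([1.0, 1e3])                      # vague prior on velocity
    out = []
    for zk in z:
        x = F @ x                                # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + r                      # innovation covariance
        K = P @ H.T / S                          # Kalman gain (2x1)
        x = x + (K * (zk - H @ x)).ravel()       # update
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)

t = np.arange(0, 5, 1 / 24)                      # ~24 Hz, as in the ULS
truth = 30.0 * t                                 # 3 cm/s constant velocity
noisy = truth + np.random.default_rng(0).normal(0, 0.5, t.size)
smooth = kalman_cv(noisy, dt=1 / 24)
```

Because the state carries velocity, the filter rejects measurement noise without lagging a steadily advancing catheter, which is the behavior desired of the position output.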
Preliminary in vivo experience with the ultrasound localizer demonstrated real-time but erratic tracking in the aorta. CT images and direct inspection of the abdominal cavity showed bowel gas that likely caused poor signal propagation and was thus responsible for the erratic tracking. Still, tracking was sufficient to navigate the catheter to the target position for stent deployment. The 3D Slicer display was used exclusively to position the delivery catheter. Abdominal aorta explant showed stent deployment within 3 mm of the target location.
Our in vivo pilot study highlighted two major issues that need to be addressed in future studies: acoustic accessibility in the presence of bowel gas, and registration. The long-term goals for this work are to demonstrate feasibility for interventional procedures and to demonstrate that ultrasound technology may be a viable option for surgical localizer systems.
Chapter 4
In Vitro Studies - Array Redesign
and Dynamic Tracking
4.1 Introduction
Our first in vivo study showed that the original ultrasound localization system sensor array could not obtain signals adequate for real-time and accurate tool navigation. This prompted a redesign of the array to both better fit the animal and achieve better acoustic access. This chapter describes the design process for the revised array, hereafter described as the cradle array. In vitro experiments characterizing the cradle array follow. These experiments were performed in a water tank with a motorized 3-axis linear translation stage. The motorized stage enabled us to perform two types of studies. The first was evaluation of Ultrasound Localization System (ULS) accuracy by measuring static points throughout a volume. These results allowed us to predict ULS performance as a function of spatial position. Merging these results with CT imaging data provided anatomical context, which could indicate whether further design iterations would be necessary. The second was evaluation of ULS performance under controlled motion and additive Gaussian noise. Data from these dynamic experiments were used to test different filtering strategies to improve performance when tracking a moving catheter in a noisy in vivo environment.
4.2 Methods
4.2.1 Cradle Array Design
Figure 4.1: L: Axial slice pig CT image showing boundaries for acoustic access
between the animal surface and the abdominal aorta. R: Potential cross-section
for sensor array placement.
The initial array cradle was designed around CT images from the previous swine study described in Chapter 3. Axial image slices showing soft tissue, bone and gas suggested where there would be a direct ultrasound propagation path between the abdominal aorta and the body surface. Figure 4.1L shows these potential propagation paths and indicates that the acoustic windows to the abdominal aorta are only small patches on the left and right posterior surfaces, roughly where the kidneys are located. Figure 4.1R shows the outline of a cradle that would facilitate appropriate sensor placement.
Figure 4.2: L: CAD rendering of cradle array. R: Photo of completed cradle array with sensors in place.
Figure 4.2L shows a 3D rendering of the cradle based on this design. Figure 4.2R is a photograph of the completed array. It was custom made from Lexan acrylic, which has known X-ray radiolucent qualities. The assembly has a footprint of 35 x 20 cm. The panels holding the sensors are slanted at 33°; holes were precision drilled to achieve sensor orientation at 15° from horizontal. Four sensors are centered at the corners of a 15 x 24.4 cm rectangle; two more are placed at the centers of the short edges.
4.2.2 Static Points in a Volume
The in vitro experiments were performed with the cradle array submerged in a 20-gallon tank of room-temperature degassed, filtered water. These experiments also employed a 3-axis linear translation stage (Velmex) that was custom fit with stepper motors to provide computerized displacement control of the submerged catheter (Lin Engineering, Phidgets, Keling). The stage and stepper motors combined have a nominal resolution of 5 µm. A custom Matlab script using Ethernet UDP established asynchronous communication between the ULS and the motorized stage control.
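An asynchronous UDP link of this kind might look like the following sketch. The dissertation's script is in Matlab; the address, port and message format here are invented for illustration:

```python
import socket

# Hypothetical stage address and text protocol (not the real ones).
STAGE_ADDR = ("192.168.0.10", 5005)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", 5006))            # listen for stage status replies
sock.setblocking(False)          # never stall the acquisition loop

def command_move(axis, steps):
    """Fire-and-forget move command; the stage replies when done."""
    sock.sendto(f"MOVE {axis} {steps}".encode(), STAGE_ADDR)

def poll_status():
    """Return the latest stage message, or None if nothing arrived."""
    try:
        data, _ = sock.recvfrom(1024)
        return data.decode()
    except BlockingIOError:
        return None
```

The non-blocking socket is what makes the link asynchronous: the acquisition loop calls `poll_status()` each iteration and keeps acquiring whether or not the stage has answered.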
For the static volume acquisition, the motorized stage translated the submerged catheter in 1 cm increments along a 14 x 14 x 9 cm volumetric grid. The ULS acquired data 10 times at each position. Figure 4.3 shows the points in the acquired volume. The volume acquisition was then repeated for a 14 x 4 x 1.5 cm volume with 0.5 cm grid spacing at a position corresponding to the location of the abdominal aorta relative to the cradle array. Figure 4.3 shows the volumetric grid relative to sensor positions. Table 4.1 summarizes the data acquisition parameters.
Figure 4.3: Volumetric grid of points acquired in Set 1. Arrows indicate sensor
angles and tips indicate sensor positions.
Volume Data Acquisition Parameters

                      Set 1          Set 2
Grid Positions        15 x 15 x 10   29 x 9 x 3
Unique Positions      2,250          783
Samples / Position    10             10
Total Data Points     22,500         7,830

Table 4.1: Summary of data points acquired for static volumetric evaluation.
4.2.3 Controlled Motion
The controlled motion experiments used the same hardware setup as the static experiments. Here, a single motor was commanded to produce controlled linear motion of the catheter along one axis. The maximum displacement spanned 10 cm with a maximum velocity of 3 cm/sec. Kinematic parameters were chosen to cover the motion profile ranges encountered by a catheter during surgery. Figure 4.4 shows the commanded displacement trace. Note that there are 3 "segments", as indicated by the different coloring (peach, turquoise, pink). The motor was programmed for accelerations of 1, 3 and 5 cm/sec² for these segments, respectively.

Figure 4.4: Displacement vs. time trace. Peach, turquoise, and pink show regions of 1, 3, 5 cm/s², respectively.
The commanded displacement was repeated 5 times while the ULS acquired raw ultrasound RF data at approximately 24 Hz. Offline, varying levels of Gaussian noise were added to the RF data, which was then band-pass filtered, envelope detected, threshold detected and processed to yield position traces. This is the same process previously described in Fig 2.6. Baseline system SNR was characterized at 29 dB by finding the mean SNR across all lines of RF data acquired during the experiment. Various levels of Gaussian noise were added to decrease system SNR to as low as 13 dB. Butterworth, extended Kalman and Savitzky-Golay filters were employed to smooth the position output. The 4th order Butterworth filter low-pass cutoff frequency was set at 3 Hz based on the frequency spectrum of the baseline noise-free motion trace. The extended Kalman filter was implemented with a custom script based on a kinematic model. The Savitzky-Golay filter is based on polynomial fitting; as such, 4th order polynomials are sufficient to describe smooth motions [20]. The performance of the different filtering strategies is compared. Figure 4.5 is a flowchart describing the signal chain for this study. The optimal window size for the Savitzky-Golay filter is empirically determined.
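The position-output filtering comparison can be sketched as follows, with a synthetic trace standing in for the experimental data. The 4th-order Butterworth with 3 Hz cutoff and the 4th-order Savitzky-Golay polynomial mirror the settings above; the noise level and window size are illustrative:

```python
import numpy as np
from scipy.signal import butter, filtfilt, savgol_filter

fs = 24.0                                    # ULS position rate [Hz]
t = np.arange(0, 10, 1 / fs)
truth = 50.0 * (1 - np.cos(2 * np.pi * 0.2 * t)) / 2   # smooth 0-50 mm sweep
noisy = truth + np.random.default_rng(1).normal(0, 1.0, t.size)

# 4th-order Butterworth, 3 Hz low-pass cutoff (zero-phase via filtfilt)
b, a = butter(4, 3.0 / (fs / 2), btype="low")
butterworth = filtfilt(b, a, noisy)

# Savitzky-Golay: 4th-order polynomial, window chosen empirically (odd)
golay = savgol_filter(noisy, window_length=21, polyorder=4)

def rms(x):
    return np.sqrt(np.mean((x - truth) ** 2))

print(rms(noisy), rms(butterworth), rms(golay))
```

Both filters pass the slow catheter motion while attenuating the broadband noise, so both should report a lower RMS error than the raw trace; sweeping `window_length` is how the empirical Savitzky-Golay tuning described above would proceed.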
Figure 4.5: Flow chart of the signal chain for the noise filter comparison analysis.
4.3 Results
4.3.1 Static Points in a Volume
The data acquired from the static points throughout the volume were registered to the ground-truth positions with the Procrustes fit algorithm [22]. Local bias error was assessed by calculating the direction and magnitude of error between each acquired data point and its corresponding true value. The errors for set 1 are displayed as vectors in figure 4.6L. Error was also assessed by calculating the 3D variances across the 10 samples acquired for each position. The 3D variance is expressed as a standard deviation magnitude by taking the square root of the sum of the x, y, and z variances. The local standard deviations are displayed as a heat map in figure 4.6R. The global mean bias is 1.7 mm and the global mean standard deviation is 0.26 mm. A summary of the global error statistics is given in table 4.2.

Figure 4.6: L: Quiver plot showing bias vectors. R: Heat map showing standard deviation of points according to location. Color bar max is 0.8 mm.
Global Error [mm]    Bias    Standard Deviation
Min                  0.07    0.04
Max                  3.70    1.13
Mean                 1.70    0.26

Table 4.2: Summary of the global error statistics for the static volumetric study.
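The registration and bias computation can be sketched as follows. A rigid (Kabsch-style) Procrustes fit stands in for the Procrustes fit of [22], and the grid, misalignment and noise values are synthetic:

```python
import numpy as np

def rigid_procrustes(A, B):
    """Least-squares rigid fit (Kabsch): rotation R and translation t
    mapping point set A onto B. A, B: (N, 3) arrays in mm."""
    ca, cb = A.mean(0), B.mean(0)
    U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:                 # guard against reflections
        U[:, -1] *= -1
        R = (U @ Vt).T
    return R, cb - R @ ca

# Synthetic ground-truth grid and noisy, misaligned measurements
rng = np.random.default_rng(2)
truth = np.stack(np.meshgrid(*[np.arange(0, 50, 10.0)] * 3), -1).reshape(-1, 3)
theta = 0.05                                  # small misalignment [rad]
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
meas = truth @ R_true.T + [5.0, -3.0, 1.0] + rng.normal(0, 0.3, truth.shape)

R, t = rigid_procrustes(meas, truth)          # register meas -> truth
aligned = meas @ R.T + t
bias = np.linalg.norm(aligned - truth, axis=1)   # per-point bias magnitude
print(bias.mean(), bias.max())
```

After the fit, the per-point residuals play the role of the local bias vectors plotted in figure 4.6L, and their mean and max correspond to the summary statistics of table 4.2.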
Figure 4.7 shows histograms for the X, Y, and Z bias. These histograms show that there is no significant bias error in any direction, though the Z error distribution is skewed towards positive bias. Figure 4.8 shows the mean RMS errors within 2 cm bins as a function of distance from the volume center, where there is least error. For a working volume with a diameter of 10 cm, the mean bias magnitude would be 1.4 mm and the standard deviation would be 0.2 mm.

Figure 4.7: Histograms for the X, Y, and Z bias for static volumetric data set 1.

Figure 4.8: Mean RMS bias and standard deviation vs. distance from volume center.
The static volumetric studies of ULS performance can be merged with CT data to provide context on positional accuracy. Figure 4.9 shows a coronal image slice from the pig CT. The sensor locations (below the displayed plane) are indicated by the red circles. The bias magnitude is indicated by the heat map overlay, where the max color corresponds to 2.7 mm magnitude bias error. The data show that for the clinically relevant location, the aorta in the vicinity of the renal arteries, the ULS has bias in the range of 0-1 mm.

Figure 4.9: Coronal image slice from pig CT with bias magnitude indicated by the heat map overlay. Max colorbar indicates 2.7 mm error. Sensor locations (below the displayed plane) are indicated by the red circles.
4.3.2 Controlled Motion
Figure 4.10 shows the time-displacement and error traces for various filtering strategies at 14 dB system SNR. The red traces show the experimental data. The blue traces show the noiseless case for comparison. The black traces show the 1D error with an enlarged scale for clarity. High-frequency error spikes, predominantly in the negative displacement direction, are evident in the no-filter case. All filters suppress these error spikes, with the Golay filter suppressing all error to less than 0.5 mm.

Figure 4.10: Time-displacement and error traces for various filtering strategies at 14 dB system SNR.
Figure 4.11 compares the mean RMS error of 10 trials as a function of system SNR for the different filtering strategies. Error bars indicate one standard deviation. These data indicate that when system SNR exceeds 18 dB, no filtering is necessary, as there is no appreciable effect on error. At less than 18 dB system SNR, filtering can suppress errors by up to 30%.

Figure 4.11: Mean RMS error of 10 trials as a function of system SNR for different filtering strategies.
Figure 4.12: Results of Golay filtering with a window size of 105 samples on 16 dB SNR data.

Figure 4.12 shows the results of Golay filtering with a window size of 105 samples on 16 dB SNR data. The vertical orange lines indicate the boundaries between segments where acceleration settings were changed. Error is lowest in the first segment, which has the lowest acceleration. Error increases in the subsequent segments as acceleration is increased. Rapid changes in direction at high acceleration/deceleration rates are analogous to high-frequency oscillations that are suppressed by low-pass filters. These data illustrate the low-pass nature of the Golay filter at longer window lengths, as it fits more data points to the 4th order polynomial.
Figure 4.13: Optimal Golay filter window size as a function of system SNR for several acceleration levels.

Figure 4.13 shows the optimal Golay filter window size as a function of system SNR. There is a positive relationship between system noise and optimal window length. In other words, the noisier the system, the longer the Golay window required. However, this parameter must be balanced against the kinematic properties of the data in question. Motion containing higher accelerations benefits from shorter Golay window lengths.
4.4 Discussion
This chapter discussed the design of a novel cradle array for improved acoustic access to the abdominal aorta. The cradle design was guided by CT images. The cradle array was evaluated with a static volume data acquisition. This allowed for assessment of how error is distributed spatially. These error measures can be combined with volumetric image data such as CT to provide anatomical context. The data show that the mean bias error is 1.7 mm over a 14 x 14 x 9 cm volume. In the targeted working area near the renal arteries, the error can be even less.
The ULS and cradle performance with respect to position output filters was also evaluated through a number of dynamic motion tests. The effects of increasingly noisy RF data manifested as greater positional error. The positional error appeared to have a negative-displacement bias. This biased, non-Gaussian error is likely due to the fact that the threshold detection and position calculation steps are non-linear. Furthermore, there appears to be little pattern to when the high-frequency noise spikes appear. This can possibly be attributed to the fact that we used stepper motors for motion control; stepper motors are unpredictably noisy. Nonetheless, filtering was able to suppress some of these ill effects.

Filtering is only necessary once system SNR falls below 18 dB. Golay filtering
60
performs best amongst all the filtering strategies. It is important to note, however, that the Golay window-size parameter influences filter performance. It should be picked on the basis of system noise as well as expected kinematic parameters.
Further characterization of catheter motion profiles can be used to inform the appropriate Golay filter settings. Our preliminary in vivo experience navigating the catheter in the abdominal aorta shows accelerations of less than 1 cm/sec². These data were acquired during slow, deliberate motion that might not be representative of real surgery. Values reported in the literature show peak catheter accelerations over 1000 cm/sec² with a surgical simulator for cardiac applications [55]. Further characterization of catheter dynamics will be useful for designing optimal motion filtering strategies.
Chapter 5
In Vivo Studies - Part 2
5.1 Introduction
Results from Chapter 3 showed that abdominal placement of the ULS array could not achieve adequate ultrasound signal due to bowel gas. In Chapter 4 we used CT images to redesign the array so that the sensors fit the pig and have clear acoustic access to the abdominal aorta. We also characterized the cradle array in vitro. In this chapter, we test the cradle array in vivo with pig #2. The cradle array helped obtain better ultrasound signal. Another version of the cradle array was built for a subject-specific fit for pig #3. The relative accuracy of the localizer system against fluoroscopy for single-axis travel was found to have a mean error of less than 1 mm. We successfully display a moving catheter in real time along the registered aortic images, providing virtual endoluminal perspectives. Tracking performance was defined by the distance from the tracked coordinates to the aortic centerline, which was found to lie within the radius of the aorta (6 mm) over 98% of the time. Finally, three endografts were deployed by marking the landing zone with our system. Overall error between target and actual graft position had a mean plus standard deviation of 10.1 mm.
Figure 5.1: Array with sensors and pig. Light blue ultrasound gel is used to ensure
acoustic contact between the sensor and the pig skin.
5.2 Methods
5.2.1 Retroperitoneal Approach Array
For pig #2, an initial prototype sensor array cradle was designed to hold the sensors in place underneath the posterior pig abdomen, as described in Chapter 4. Figure 5.1 shows the cradle array fit to the pig. The light blue ultrasound gel is used to ensure acoustic contact between the sensor and the pig skin.
For pig #3, a new cradle was constructed to have similar sensor spacing, but was also designed for a patient-specific fit (Fig 5.2). This cradle incorporated an Alpha Cradle custom immobilizer, which has demonstrated improved patient positioning reproducibility for serial radiosurgery treatments [46]. This cradle was constructed in two steps. First we built an adjustable wooden bed with precision-drilled holes to hold the sensors in position. The wooden bed then provided mechanical support in which the Alpha Cradle mixture was cured and formed to fit the pig. We then cored through the foam to place the sensors back in the precision holes. Sensor positioning was confirmed with calipers and again with CT imaging.
Figure 5.2: L: Customizable wooden bed with custom immobilizer preparation. R: Swine repositioned in cradle with transducers.
5.2.2 Virtual Endoluminal View
We used 3D Slicer software to create a model of the vasculature for virtual endoluminal perspective-based navigation, similar to virtual colonoscopy (Fig 5.3) [16, 45, 60]. We posit that this virtual-reality-based view offers an intuitive, radiation-free navigation method for vascular surgeons. 3D Slicer interactively updates the virtual display according to the catheter position as reported by our 3D real-time ultrasound-based localizer system via the OpenIGTLink protocol [38, 56].
Figure 5.3: L: 3D rendering of the aorta segmented from CT angiography, with centerline. Endoluminal views from Mid L: iliac bifurcation; Mid R: inferior, left renal artery; R: superior, right renal artery.
5.2.3 Imaging and Pre-surgery
Two experiments were performed on separate occasions. For both studies, animals were placed under sedation for transport and CT imaging. EKG leads were placed on the animals to serve as fiducial markers. Pig #3 was subject to the additional step of custom immobilizer fitting. Both pigs were imaged with their respective cradles in place.

A Siemens Biograph 64 PET/CT scanner acquired arterial phase contrast images and saved them at 1 mm slice thickness. The images were reviewed and rendered in OsiriX software for analysis and planning (Fig 5.4). CT image fiducial positions were identified using the fiducials module in 3D Slicer software. These CT coordinates were registered with the nominal sensor coordinates to align the two coordinate systems.
Figure 5.4: CT imaging axial slice and volume rendering of pig and cradle.
After imaging, the animal was returned to the vivarium to recover from sedation. The following morning, the animal was transported to the operating room and placed under general anesthesia with mechanical ventilation. A surgical resident and veterinarian performed standard bilateral femoral artery cut-downs to gain access with 9 Fr and 10 Fr introducer sheaths. For pig #3, a midline mini-laparotomy was needed to gain access at the iliac quadrification due to the inadequately small caliber of the right femoral and iliac arteries. The ultrasound-equipped probe was introduced into the abdominal aorta through either access port. For pig #3, a pigtail catheter was also introduced into the abdominal aorta for measurement. A single-plane fixed fluoroscope with video data capture was available. Figure 5.5 depicts the operating room.
Figure 5.5: Operating room with pig, Ultrasound Localization System and fluoroscope.
5.2.4 Interventions: Endovascular Navigation, Stent
Deployment
Two distinct types of endovascular tasks were performed. The first task was a single-axis fine translation of the catheter against fluoroscopy. The purpose of the fine translation was to simulate precision catheter manipulation in the abdominal aorta. A contrast aortogram was first performed to identify the aortic segment of interest and to position the catheter probe at the edge of the fluoroscopic image. With the ULS and cine fluoroscopy capturing data, the surgeon moved the probe in approximately 5 mm increments along the pigtail catheter, at roughly 3 second intervals, as signaled by a metronome. The surgeon advanced the catheter probe until it reached the opposite edge of the fluoroscope field of view, and the fluoroscope pedal was released. This procedure was repeated four times before moving on to the next intervention.
The second task was to evaluate the overall performance of the navigation system towards a clinically relevant endpoint: the deployment of a stent graft in EVAR. It is important to note that no fluoroscopy was used at this step. Clinically relevant anatomical target positions along the abdominal aorta, caudal to branching vessels such as the renal or superior mesenteric arteries, were identified on the intraoperative 3D CT display. The surgeons advanced the ultrasound probe to the target landing zone positions, then marked the probe at its introduction access site to measure the physical length required to reach the target. When the site identification was complete, the probe was pulled out. The markings were transferred to the stent deployment catheters (Gore Excluder), which were then reintroduced and advanced to the target zone based on the markings. The stent grafts were blindly deployed. A completion angiogram was done for validation. The animal was then sacrificed with an IV injection of sodium pentobarbital. The surgeons then performed necropsy. A midline laparotomy incision was made. Exposure of the aorta was done carefully to minimize movement. Photographs as well as measurements were taken for direct visualization of the stent grafts in situ.
5.2.5 Outcome Measures: Data Acquisition and Offline Evaluation
For the fine-translation task, the relative accuracy performance of the ultrasound tracker was validated against single-plane fluoroscopy. The overall system performance was evaluated in the fine-translation task by comparing the tracked coordinates to the aorta image centerline. Finally, we measured the distance between the deployed stent and the target location. Greater detail regarding experimental methods accompanies the results described below.
5.3 Results
5.3.1 Ultrasound vs. Fluoroscopic Tracking
The objective of the first outcome measure was to compare the in vivo ULS coordinates with coordinates obtained from simultaneous cine fluoroscopy. Note that registration between the patient, tracker and the images was not evaluated by this measure. Our data were analyzed as follows. Cine fluoroscopy video was captured and digitized with a video capture device (Geniatech Igrabber) with 640 x 480 pixel resolution at 29.98 frames per second and saved in MOV format for offline processing. The pixel-to-distance scaling was provided by identifying the pigtail catheter marking locations on the digitized video frames, yielding a mean scale factor of 225 µm/pixel. The location of the catheter tip was identified in each video frame (Fig 5.6) by normalized 2D cross-correlation, with a cropped binary image of the catheter tip serving as the template. The Z component of the US 3D position data was not included for comparison with the 2D fluoroscopy data, as the ultrasound array XY plane was parallel to the fluoroscope imaging plane. Ultrasound tracking data were resampled via 1D spline interpolation to match the frame rate of the fluoroscopy data. The two data sets were temporally and spatially registered to each other by minimizing the residual fit error after Procrustes analyses.

Figure 5.6: Catheter tip tracking from successive frames of cine fluoroscopy against a pigtail catheter marked at 1 cm increments. Note that the data shown here are from fluoroscopy only.
Figure 5.7: Top: Temporally and spatially registered fluoroscopy and ultrasound position vs. time data. Bottom: Error magnitude between fluoroscopy and ULS data.
Figure 5.7 Top shows the registered position vs. time traces for fluoroscopy- and ultrasound-based tracking for a single trial. For clarity, only the axis for the principal direction of motion is displayed. The stepped appearance reflects the fact that the surgeon advanced the catheter in roughly 5 mm steps every 3 seconds. Figure 5.7 Bottom shows the magnitude difference between the ULS- and fluoroscope-derived 2D coordinates. Agreement between the two tracking systems is best between the 15-35 second time markings, which is when the catheter tip is traveling through roughly the center of the ULS array and the fluoroscope field of view. Table 5.1 summarizes the overall statistics for the four precision translation tasks. A total of 139.6 seconds of data were analyzed to yield an overall mean 2D correspondence of 0.85 mm between the ULS and fluoroscopy.
2D Error Magnitudes (mm)

Trial     Mean    Median  Max     Duration (sec)  Distance (cm)
1         0.867   0.604   8.157   35.9            9.1
2         0.757   0.584   4.954   40.9            9.5
3         1.136   1.034   3.778   21.0            5.0
4         0.767   0.536   4.431   41.8            9.3
Overall   0.845   0.626   8.157   139.6

Table 5.1: Error magnitude statistics describing the difference between ULS- and fluoroscopy-based tracking for 4 trials.
5.3.2 Centerline Analysis
The objective of the second outcome measure was to evaluate the performance of the ultrasound-tracked virtual catheter display on the CT image. Screen capture video of the 3D Slicer software showing the catheter position relative to anatomy was saved for playback. Figure 5.8 is a screenshot from 3D Slicer software, showing the volume-rendered CT data, aorta model, centerline and tracked ultrasound coordinates. Kidneys, bones and other anatomical structures are visible in this display. The display shows that the catheter traveled between the superior mesenteric artery and approximately 5 cm caudal to the renal arteries. Offline, the aorta centerline coordinates were exported for comparison to captured ULS coordinates. Figure 5.9 shows the tracked coordinates for each trial plotted with the corresponding portion of the aorta centerline. The tracked paths do not appear straight but exhibit features due to catheter recoil after each translation step. Figure 5.10 shows the distribution of distances between all points and the centerline. The mean distance to the centerline was 3.29 ± 1.24 mm. Over 98% of all points were within 6 mm of the image centerline, which was the radius of the aorta in the region of interest. This suggests that during the procedure, the tracked tool appeared outside of the vessel less than 2% of the time.
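The distance-to-centerline measure reduces to a nearest-segment computation over the centerline polyline; a sketch with a synthetic straight centerline and tracked points (the real centerline is exported from the segmented CT):

```python
import numpy as np

def dist_to_polyline(points, line):
    """Minimum distance from each 3D point to a polyline of S segments."""
    a, b = line[:-1], line[1:]                   # segment endpoints
    ab = b - a                                   # (S, 3)
    ap = points[:, None, :] - a[None, :, :]      # (P, S, 3)
    t = np.einsum("psd,sd->ps", ap, ab) / (ab * ab).sum(-1)
    t = np.clip(t, 0.0, 1.0)                     # clamp to each segment
    closest = a[None] + t[..., None] * ab[None]  # (P, S, 3)
    return np.linalg.norm(points[:, None] - closest, axis=-1).min(1)

# Synthetic straight centerline along z, with tracked points nearby [mm]
line = np.column_stack([np.zeros(50), np.zeros(50), np.linspace(0, 100, 50)])
rng = np.random.default_rng(4)
tracked = np.column_stack([rng.normal(0, 2, 200), rng.normal(0, 2, 200),
                           rng.uniform(0, 100, 200)])
d = dist_to_polyline(tracked, line)
print((d < 6.0).mean())          # fraction within the 6 mm vessel radius
```

Thresholding the distances at the vessel radius, as in the last line, yields the in-vessel fraction reported above.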
5.3.3 Stent Placement and Necropsy Measurement
The objective of the final outcome was to evaluate stent placement performance. After placing the stents, a completion angiography was run and saved for review. Necropsy was performed immediately to reveal stent location. The proximal stent edge was identified visually and by palpation to find its location in situ. A millimetric ruler measured stent-to-target distance; the aorta was then partially dissected to confirm stent location. Results were recorded and photographed (Fig 5.11).

Figure 5.8: 3D aorta modeling and rendering from pig CT showing centerline and Ultrasound Localization System coordinates. Nipple-like structures are EKG leads attached as fiducial markers.
Table 5.2 summarizes the results for the three stent deployments. One stent was deployed in Pig #2 using the inferior right renal artery as a landmark, while two stents were deployed in Pig #3 using the superior mesenteric artery and superior right renal artery as landmarks, respectively. Only Pig #2's stent landed inferior to the target. Both stents in Pig #3 landed superior to the target, occluding the target vessels. The overall error, defined as mean plus standard deviation, was 10.1 mm.

Figure 5.9: Ultrasound Localization System coordinates against the aorta centerline.
Figure 5.10: Distribution of distances between tracked points and aorta centerline.
Figure 5.11: Results from Pig #3 targeting right superior renal artery placement.
L: Abdominal aorta with stent in situ. R: Partially dissected abdominal aorta
showing stent.
5.4 Discussion
The retroperitoneal sensor placement in this work was motivated by our previous experience with poor tracking performance. Pig bowel gas and feces obstructed ultrasound signal propagation between the abdominal aorta and sensors placed on the anterior abdominal surface of the pig. CT images suggested that by placing sensors on the posterior surface instead, ultrasound need only propagate from the abdominal aorta through relatively uniform tissues predominantly composed of kidney, muscle and fat. Most importantly, this approach would avoid any gas or fecal matter from the bowels.

Stent   Target            Distance [mm]
Pig 2   R Renal Art       -5.0
Pig 3   R Sup Renal Art   +7.9
Pig 3   SMA               +6.4
Error (Mean + Std)        10.1 (3.1 + 7.0)

Table 5.2: Stent deployment-to-target distances for three stents. Negative values indicate that the stent landed inferior to the target vessel; positive values indicate that the stent landed superior, thereby occluding the target vessel.
The in vivo data (Fig 5.7, Table 5.1) show that the retroperitoneal approach was successful. Overall mean 2D correspondence between the ULS and fluoroscopy was better than 0.85 mm. This result validates that for fine translational motions within a 10 cm portion of the abdominal aorta, the ULS with retroperitoneal sensor placement is itself as accurate as single-plane fixed fluoroscopy. This measure did not assess the 3D performance of the ULS. Future work with a mobile C-arm or bi-plane fluoroscope would be able to provide 3D comparisons. Furthermore, this work did not account for image warping due to cone-beam fluoroscope emissions. Future work with specialized calibration phantoms could yield increased robustness in the fluoroscopy-derived coordinates.
Regardless of the ULS accuracy, a surgical localization system is only useful after successful registration between all components. Registration is crucial for all multimodality, stereotactic systems. Stereotactic brain and sinus surgery relies on the skull to provide the rigid "frame" to bring the patient into registration with the image. Stereotactic abdominal interventions have been difficult due to the fact that the bowel cavity is mostly soft tissue; there is no rigid frame, and an image is no longer an accurate representation of the patient once he or she moves. One attempt to address this has been to use cone-beam CT, where the patient imaging and surgery take place on the same table [30]. Our approach was to create a custom patient immobilizer to maximize accurate patient repositioning. Though deep within the abdomen, the abdominal aorta remains relatively fixed in relation to the spine. We posited that if we could immobilize and reproduce spine position, the aorta position would follow suit.
One indication of registration performance is to calculate the distance between
recorded positions and the aortic centerline as determined by the preoperative
CT scan (Figs 5.8, 5.9, 5.10). Over 98% of all the captured ULS points were within
6 mm of the centerline, suggesting that the tool appeared outside of the vessel
less than 2% of the time. In this study this measure was sensitive mostly to
registration errors in the anterior/posterior and left/right transverse axes but not
the rostral/caudal axial axis, because the abdominal aorta lay relatively in a straight
line. Investigating centerline tracking along a tortuous or branched vessel could
provide greater insight into patient-image registration. Ultimately, future work
must investigate the system's in vivo Fiducial Registration Error (FRE) and Target
Registration Error (TRE), the latter being the de facto standard metric for image-guided
surgery, in order to establish system performance [12].
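The centerline measure above reduces to a point-to-polyline distance computation. The following Python sketch is an illustrative reimplementation, not the dissertation's analysis code; the function and variable names are invented. It projects each tracked point onto every centerline segment and reports the fraction within the 6 mm bound:

```python
import numpy as np

def dist_to_polyline(points, centerline):
    """Distance from each tracked point to a piecewise-linear centerline.

    points: (N, 3) tracked ULS positions; centerline: (M, 3) ordered
    samples of the vessel centerline. Units are millimeters here.
    """
    a, b = centerline[:-1], centerline[1:]            # segment endpoints, (M-1, 3)
    ab = b - a
    # Parameter of the orthogonal projection of each point onto each segment,
    # clamped to [0, 1] so the foot of the projection stays on the segment.
    t = np.einsum('nmk,mk->nm', points[:, None, :] - a, ab)
    t = np.clip(t / np.einsum('mk,mk->m', ab, ab), 0.0, 1.0)
    feet = a + t[..., None] * ab                      # (N, M-1, 3) closest points
    d = np.linalg.norm(points[:, None, :] - feet, axis=2)
    return d.min(axis=1)                              # nearest segment per point

# Toy check: two points near a straight 100 mm centerline along z.
cl = np.column_stack([np.zeros(11), np.zeros(11), np.linspace(0.0, 100.0, 11)])
pts = np.array([[3.0, 0.0, 50.0], [0.0, 8.0, 20.0]])
d = dist_to_polyline(pts, cl)                         # [3.0, 8.0]
frac_within = float(np.mean(d <= 6.0))                # fraction inside the 6 mm bound
```

For the in vivo data, `centerline` would come from the segmented preoperative CT and `points` from the ULS capture log.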
As an indication of how the overall ULS would perform in EVAR, we placed three
stents using only the ULS, with an overall accuracy of 10.1 mm (Table 5.2). A
major limitation of this component of the study was that stent deployment relied
on a proxy measurement: the surgeon first navigated the ultrasound catheter probe
to the appropriate location, physically marked the proximal end of the catheter at
the site of entry, removed the probe, and then reintroduced the stent deployment
catheter up to the marked length. This technique is especially prone to introducing
errors when the vascular access is not clean, such as when laparotomy is required.
Furthermore, equal lengths of a stiff catheter vs. a compliant wire will not land at
the same point inside the aorta. These errors are unrelated to the ULS but affect
clinical endpoints. Future work will mount the transducers directly onto the tool
itself.
5.5 Conclusion
We have demonstrated the feasibility of a potential alternative or adjunct to
fluoroscopy using an ultrasound-based localizer to guide stent graft deployment in
EVAR. With precision in patient positioning and robust image registration, the
ultrasound-based localizer has accuracy comparable to fluoroscopy. A custom
patient immobilizer incorporating retroperitoneal sensor placement is crucial for
reproducing patient positioning and providing acoustic access to the abdominal
aorta, respectively. Surgeons can use virtual endoluminal and 3D CT views for
stent positioning. As with all image-guided systems, positive clinical outcomes will
depend on robust patient-image registration and clinically relevant TRE measures.
With continuous technical improvement to our system, we plan further in vivo
experiments to fine-tune its clinical applicability, gearing towards the miniaturization
of probe size, the incorporation of the probe into guidewires and stent graft
catheters, and the efficient manufacturing of a supportive cradle with built-in sensor
arrays. We are confident in the refinement of this system prior to clinical trials.
Given the non-operator-dependent accuracy of this system, the wide application of
ultrasound-based localization may have significant impact on the clinical practice
of endovascular intervention.
Chapter 6
Conclusion and Future Work
6.1 Conclusion
Part of the difficulty of catheter manipulation stems from the fact that surgeons
rely on 2D imaging to navigate catheters in a 3D space. This is akin to threading
a needle by only viewing its shadow. One way to address this issue is to use a
surgical tool localizer in conjunction with preoperative 3D CT or MR image data,
much like a GPS. The localizer provides the tool's position and the 3D image data
provides the map. Optical and electromagnetic tool localizers are commercially
available but suffer from line-of-sight issues and performance degradation in the
presence of metal, respectively [44].
In this dissertation I have documented a prototype ultrasound-based device that
provides 3D catheter position data and does not suffer from the issues described
above. The key components are a catheter-tip single-element transducer, an
external sparse transducer array, and supporting electronics. The device operates
in real time (25 Hz) with 1-3 mm accuracy in vitro. Its current limitations include
cost, bulk, and difficulty of use. Further study on registration methods and
preclinical experiments are also necessary. This chapter proposes future work towards
addressing these issues and translating this technology to the clinical realm.
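The core geometric step of such a time-of-flight localizer can be illustrated with standard linearized least-squares trilateration. The Python sketch below is a generic textbook formulation under idealized assumptions (known sound speed, noise-free ranges, four or more non-coplanar sensors), not the prototype's actual algorithm:

```python
import numpy as np

def locate(sensors, ranges):
    """Estimate a 3D source position from time-of-flight ranges.

    Subtracting the first sphere equation |x - s_i|^2 = r_i^2 from the
    others cancels the quadratic term in x, leaving a linear system that
    is solved in a least-squares sense. sensors: (N, 3) external
    transducer positions (N >= 4, not coplanar); ranges: (N,) where
    range = sound speed * time of flight.
    """
    s0 = sensors[0]
    A = 2.0 * (sensors[1:] - s0)
    b = (ranges[0] ** 2 - ranges[1:] ** 2
         + np.sum(sensors[1:] ** 2, axis=1) - np.sum(s0 ** 2))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Toy check: four sensors (coordinates in meters), exact noise-free ranges.
sensors = np.array([[0.0, 0, 0], [0.2, 0, 0], [0, 0.2, 0], [0, 0, 0.2]])
true_pos = np.array([0.05, 0.07, 0.11])
ranges = np.linalg.norm(sensors - true_pos, axis=1)   # c * TOF for each sensor
est = locate(sensors, ranges)                         # recovers true_pos
```

With only three sensors the sphere intersection is ambiguous (two mirror solutions), which is why the sketch assumes four or more.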
6.2 Future Aims
6.2.1 Catheter and Guidewire Transducers
Figure 6.1: L: Modified central venous catheter with transducer attached. R:
Schematic of custom guidewire.
The current localization system uses a dummy catheter made of an RG-179 coaxial
cable with a cylindrical piezoceramic ultrasound transducer at the tip. It has a
10 French profile and is not compatible with guidewires. In order to translate this
technology into the clinical realm, the transducer must be integrated with catheters
that retain their clinical utility. Figure 6.1 L shows a schematic of an interventional
guidewire that includes a coaxial cable for transducer wiring. Figure 6.1 R shows
the tip of a central venous catheter that has been modified to include a transducer.
A catheter equipped as such could be localized with the ULS.
Figure 6.2 shows the layer stack for an ultrasound transducer. The entire stack
will measure 200-500 μm, adding negligible thickness to a preexisting catheter.
The catheter design center frequency for an abdominal application will be between
2-3 MHz and would vary for other applications. Sensitivity can be maximized
(30 dB insertion loss or better) at the expense of bandwidth (20% is sufficient)
if simple wave front detection is the method for measuring time of flight. These
properties will be evaluated with standard pulse-echo and impedance tests.
Figure 6.2: Layer stack for basic catheter mounted transducer.
6.2.2 New Methods for Registration
A major hurdle for tool tracking with image fusion approaches is registration, or the
process of aligning the patient with the image and localizer during the procedure.
Registration difficulties have been the foremost barrier to adoption of image fusion
approaches in abdominal surgery. The abdomen presents a large volume, deep
structures, and soft tissue. Traditional approaches to registration involve locating
landmarks on the patient's skin surface and aligning these landmarks in the image.
For abdominal applications, this has proven to be time consuming and unreliable
and thus has limited clinical adoption of this technology [63, 54]. Recent studies
suggest that we may use the curvature and branching of the vasculature, instead
of fixed external features on the patient, as landmarks for registration [10]. We
hypothesize that the ultrasound localizer system can provide intraoperative 3D
data capture of catheter position which can register the patient, image and localizer
for endovascular abdominal procedures.
Simultaneous Localization and Mapping (SLAM) is a framework popular in
robotics for mapping one's environment and locating one's position on that map
[9]. Minimally invasive surgery techniques are just starting to adopt these methods
[36]. Specifically, a SLAM-like approach has been applied towards fiducial-free
registration of bronchoscope location on CT images as the bronchoscope traverses
the airway branches [10].
We propose using SLAM as a novel method to register catheter position with
vascular CT images. Much like how the bronchoscope travels down airway branches,
the catheter travels up the vascular tree during endovascular procedures. Localizer
data on the catheter as it travels from the entry site to the treatment site would
provide a historical record of its trajectory. A model of the vasculature can be
extracted from the patient's preoperative CT data. One SLAM approach is to fit
the localizer data to the vasculature model, thus providing the registration
necessary to bring the patient, image and localizer into alignment. Consider the cost
function:
\mathrm{Err}\left(^{CT}T_{EMT}\right) = \sum_{p_k \in P,\; s_{min} \le s(p_k) \le s_{max}} \frac{1}{r_k}\, d^2\!\left(^{CT}T_{EMT}\, p_k\right) \qquad (6.1)

where p_k is the kth of N sensor positions, ^{CT}T_{EMT} transforms p_k from tracker to
CT coordinates, s(p_k) gives the surrogate data corresponding to p_k, s_{min} and s_{max}
are set once to 0 and 0.1 and once to 0.9 and 1, respectively, r_k is the average radius
of the branch closest to p_k, and d(x) returns the Euclidean distance between the
transformed point and the vessel model [10]. Equation (6.1) essentially
describes how well the tracked locations, given a particular rigid transformation,
fit the vascular model. With successful SLAM registration the transformation
matrix ^{CT}T_{EMT}
should converge to a stable value. Probabilistic methods such
as Extended Kalman Filtering or Rao-Blackwellised filtering will likely increase
stability and reduce the time to solution convergence [9]. Performance can be
evaluated with computer simulations and benchtop experiments with endovascular
phantoms.
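A simplified evaluation of the cost in Eq. (6.1) can be sketched as follows. This is an illustrative Python reduction with assumptions: the surrogate-signal gating on s(p_k) is omitted, the vessel model is reduced to sampled centerline points carrying local radii, and all names are hypothetical:

```python
import numpy as np

def registration_cost(T, c, points, centerline, radii):
    """Radius-weighted squared distance of transformed tracker points to a
    vessel model: a simplified stand-in for Eq. (6.1), surrogate gating omitted.

    T (3x3 rotation) and c (translation) together play the role of ^CT T_EMT.
    centerline: (M, 3) vessel samples; radii: (M,) local branch radii r_k.
    """
    p_ct = points @ T.T + c                       # map each p_k into CT coordinates
    diff = p_ct[:, None, :] - centerline          # (N, M, 3)
    d2 = np.sum(diff ** 2, axis=2)
    k = d2.argmin(axis=1)                         # nearest vessel sample per point
    return float(np.sum(d2[np.arange(len(points)), k] / radii[k]))

# Toy check: points sampled from the model cost ~0 under the true (identity)
# registration, and strictly more under a 1 cm translation error.
cl = np.column_stack([np.zeros(50), np.zeros(50), np.linspace(0.0, 0.1, 50)])
radii = np.full(50, 0.008)                        # ~8 mm local radius
pts = cl[::10]
cost_true = registration_cost(np.eye(3), np.zeros(3), pts, cl, radii)
cost_off = registration_cost(np.eye(3), np.array([0.01, 0.0, 0.0]), pts, cl, radii)
```

An optimizer (or the probabilistic filters mentioned above) would search over T and c to minimize this cost.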
6.2.3 Optimal Sensor Placement
The ultrasound system uses at least 3 external transducers placed in contact with
the patient's body to locate the catheter. The arrangement and number of
transducers have direct effects on how accuracy varies over the region of interest. This
effect is well studied for GPS satellites and is described by dilution of precision [31].
Previously, we used a radially symmetric array on the basis of optimal dilution of
precision over a uniform volume, sized to fit the abdomen.
Our initial pig experiments emphasized the need for judicious design and
arrangement of the external transducer sensor array beyond only dilution of precision
concerns. For example, effects of bowel gas and other acoustic obstacles diminish
the SNR of ultrasonic signals. Here, we propose methods using preoperative 3D CT
images to optimize the external transducer placement with regards to both acoustic
accessibility [13] as well as dilution of precision for the abdominal aorta. Figure 6.3
outlines the steps of optimizing transducer placement over the acoustically
accessible body surface manifold by minimizing cumulative dilution of precision along
the abdominal aorta volume of interest. We hypothesize that this algorithm will
yield the optimal transducer placement design.
Figure 6.3: Flowchart of transducer placement optimization algorithm.
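As one quantitative ingredient of such an optimization, dilution of precision at a target location can be computed from the sensor geometry alone. The sketch below uses the standard GPS-style formulation with a range-bias column [31]; for a fully synchronized time-of-flight system that column could be dropped. The function and example arrays are illustrative, not the dissertation's design code:

```python
import numpy as np

def gdop(sensors, target):
    """Geometric dilution of precision at `target` for a ranging array.

    Builds the geometry matrix H of unit line-of-sight vectors plus a
    range-bias column (the GPS convention) and returns
    sqrt(trace((H^T H)^-1)); lower values mean better-conditioned geometry.
    """
    u = sensors - target
    u = u / np.linalg.norm(u, axis=1, keepdims=True)
    H = np.hstack([u, np.ones((len(sensors), 1))])
    return float(np.sqrt(np.trace(np.linalg.inv(H.T @ H))))

# Toy check: a well-spread array versus a nearly collinear one.
spread = np.array([[1.0, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0], [0, 0, 1]])
bunched = np.array([[1.0, 0, 0], [1.1, 0.05, 0], [0.9, -0.05, 0],
                    [1.0, 0.1, 0.05], [1.0, 0, 0.1]])
target = np.zeros(3)
dop_spread = gdop(spread, target)                 # sqrt(2.5) for this geometry
dop_bunched = gdop(bunched, target)               # much larger
```

The flowchart's optimization would evaluate a cumulative version of this quantity along the aortic volume of interest, restricted to acoustically accessible surface positions.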
6.2.4 Sound Speed Error Compensation
An initial estimate of the ultrasound beam path can be plotted on the CT image
using an initial estimate of the sound speed. This path can provide an updated
sound speed estimate based on the composition of tissue types through which the
modeled ultrasound beam propagates [23]. This refined estimate is used to re-plot
the beam path and the process iterates until the location estimate converges to a
stable value. Initial simulations with human CT data show that the sound speed
correction technique can recover up to 1 cm of error magnitude (Fig 6.4 R). We
propose to pre-calculate a lookup table (LUT) of sound speeds for all potential
ultrasound beam propagation paths using the preoperative CT data. A block
diagram illustrating the algorithm is shown in figure 6.4 L.
Figure 6.4: L: Flowchart for sound speed error correction. R: Segmented CT image
weighted by sound speed and beam path iterations. Red: initial estimate, Green:
final.
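The iterative loop of figure 6.4 L can be sketched as a fixed-point update on the range estimate. In this illustrative Python sketch, `avg_speed` is a hypothetical stand-in for the CT-derived lookup table, and a straight-line beam path is assumed:

```python
def correct_range(tof, speed_along_path, c0=1540.0, tol=1e-6, max_iter=50):
    """Fixed-point sketch of the sound speed correction loop (figure 6.4 L).

    tof: measured time of flight [s]. speed_along_path(r) returns a
    path-averaged sound speed [m/s] for an assumed straight beam of length
    r, standing in for the CT-derived lookup table.
    """
    r = c0 * tof                          # initial range from the nominal speed
    for _ in range(max_iter):
        c = speed_along_path(r)           # re-read the tissue map for this path
        r_new = c * tof
        if abs(r_new - r) < tol:          # location estimate has converged
            break
        r = r_new
    return r

# Toy tissue map: 2 cm of fat-like tissue (1450 m/s), then 1570 m/s beyond.
def avg_speed(r):
    fat = min(r, 0.02)
    return (fat * 1450.0 + (r - fat) * 1570.0) / r if r > 0 else 1450.0

r = correct_range(100e-6, avg_speed)      # corrected range for a 100 us TOF
```

In the full system this per-sensor correction would be applied to each range before the trilateration step, with the LUT indexed by sensor position and beam direction.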
6.2.5 In Vivo Experiments
Ten pigs will be divided into 3 groups for EVAR surgery experiments to be
performed by clinical collaborators. The control group will consist of 3 pigs in which
only traditional fluoroscopy will guide the procedure. The first experimental group
will also consist of 3 pigs, in which the ultrasound system will exclusively guide the
procedure. The second experimental group will consist of 4 pigs in which a
combination of the ultrasound localization system and fluoroscopy together will guide
the procedure.
Using abdominal aortic branches (such as the celiac artery, superior mesenteric
artery, and renal arteries) as anatomical landmarks, the surgeon will position and
deploy stent grafts to the inferior edges of each aortic branch in the pig. Successful stent
graft deployment is defined as an uneventful deployment without occlusion of the
targeted branch. A completion angiogram will then be taken as verification and the
distance between the inferior edge of the branch and the proximal cuff of the stent
graft will be measured as an endpoint for error calculation. The animal will be
euthanized and the abdominal aorta will be exposed, with care taken to minimize
aortic movement. The distance between each aortic branch and its corresponding
proximal end of the stent graft will then be measured. Each stent graft will be
exposed in situ by longitudinal arteriotomy to confirm its position within the aorta.
When applicable, procedures will be evaluated on the basis of time to task
completion, complication rate, fluoroscopy exposure time and contrast load, and stent
deployment accuracy upon fluoroscopic and necropsy evaluation. Furthermore,
surgeons will be surveyed following each procedure in the areas of: visualization,
ease of use, ergonomics and perceived increase in endovascular surgical skill. For
each area, we will use a scoring system of +3 (significant improvement), +2
(moderate improvement), +1 (slight improvement), 0 (no noticeable effects compared
to traditional fluoroscopy), -1 (slight degradation), -2 (moderate degradation),
and -3 (significant degradation). Clinical success of the project will be defined
by a reduction in procedure or fluoroscopy time or contrast load, comparable stent
placement accuracy, and moderate to significant improvement in at least two of the
survey areas.
Bibliography
[1] Nadine Abi-Jaoudeh, Neil Glossop, Michael Dake, William F. Pritchard,
Alberto Chiesa, Matthew R. Dreher, Thomas Tang, John W. Karanian, and
Bradford J. Wood. Electromagnetic navigation for thoracic aortic stent-graft
deployment: A pilot study in swine. Journal of Vascular and Interventional
Radiology, 21(6):888 { 895, 2010.
[2] M. E. Anderson, M. S. McKeag, and G. E. Trahey. The impact of sound speed
errors on medical ultrasound imaging. The Journal of the Acoustical Society
of America, 107(6):3540{3548, 2000.
[3] Guy Armstrong, Lisa Cardon, David Vilkomerson, David Lipson, James
Wong, L. Leonardo Rodriguez, James D. Thomas, and Brian P. Griffin.
Localization of needle tip with color doppler during pericardiocentesis: In vitro
validation and initial clinical application. Journal of the American Society of
Echocardiography, 14(1):29-37, 2001.
[4] Aaron E. Bond, Fred A. Weaver, Jay Mung, Sukgu Han, Dan Fullerton, and
Jesse Yen. The influence of stents on the performance of an ultrasonic
navigation system for endovascular procedures. Journal of Vascular Surgery,
50(5):1143-1148, 2009.
[5] B. Breyer and I. Cikeš. Ultrasonically marked catheter: a method for positive
echographic catheter position identification. Medical and Biological
Engineering and Computing, 22:268-271, 1984.
[6] J.J. Caffery Jr. A new approach to the geometry of TOA location. In Vehicular
Technology Conference, 2000. IEEE VTS-Fall VTC 2000. 52nd, volume 4,
pages 1943-1949, 2000.
[7] Edward Choke, Graham Munneke, Robert Morgan, Anna-Maria Belli, Ian
Loftus, Robert McFarland, Thomas Loosemore, and Matthew Thompson.
Outcomes of endovascular abdominal aortic aneurysm repair in patients with
hostile neck anatomy. CardioVascular and Interventional Radiology, 29:975{
980, 2006.
[8] Yves-Marie Dion, Hassen Ben El Kadi, Caroline Boudoux, Jim Gourdon,
Nabil Chakfé, Amidou Traoré, and Christian Moisan. Endovascular
procedures under near real time magnetic resonance imaging guidance: An
experimental feasibility study. Journal of Vascular Surgery, 32(5):1006-1014,
2000.
[9] Hugh Durrant-Whyte and Tim Bailey. Simultaneous localisation and mapping
(SLAM): Part I, the essential algorithms. IEEE Robotics and Automation
Magazine, 13(2):99-110, 2006.
[10] Marco Feuerstein, Takamasa Sugiura, Daisuke Deguchi, Tobias Reichl,
Takayuki Kitasaka, and Kensaku Mori. Marker-free registration for electro-
magnetic navigation bronchoscopy under respiratory motion. In Proceedings
of the 5th international conference on Medical imaging and augmented reality,
MIAR'10, pages 237{246, Berlin, Heidelberg, 2010. Springer-Verlag.
[11] J. Fernando Figueroa and John S. Lamancusa. A method for accurate detec-
tion of time of arrival: Analysis and design of an ultrasonic ranging system.
The Journal of the Acoustical Society of America, 91(1):486{494, 1992.
[12] J.M. Fitzpatrick and J.B. West. The distribution of target registration error
in rigid-body point-based registration. Medical Imaging, IEEE Transactions
on, 20(9):917 {927, sept. 2001.
[13] Davide Fontanarosa, Skadi van der Meer, Emma Harris, and Frank Verhaegen.
A ct based correction method for speed of sound aberration for ultrasound
based image guided radiotherapy. Medical Physics, 38(5):2665{2673, 2011.
[14] Matthew P. Fronheiser, Salim F. Idriss, Patrick D. Wolf, and Stephen W.
Smith. Vibrating interventional device detection using real-time 3-D color
Doppler. IEEE Trans. Ultrason. Ferroelectr. Freq. Control, 55(6):1355{1362,
JUN 2008.
[15] T.S. Huber, G.B. Zelenock, and L. M. Messina, editors. Mastery of vascular and
endovascular surgery, chapter Mastery of Endovascular Surgical Treatment
of Abdominal Aortic Aneurysms, pages 140-141. Lippincott Williams &
Wilkins, Philadelphia, 2006.
[16] David T. Gering, Arya Nabavi, Ron Kikinis, Noby Hata, Lauren J. O'Donnell,
W. Eric L. Grimson, Ferenc A. Jolesz, Peter M. Black, and William M.
Wells. An integrated visualization system for surgical planning and guid-
ance using image fusion and an open mr. Journal of Magnetic Resonance
Imaging, 13(6):967{975, 2001.
[17] Richard F. Gillum. Epidemiology of aortic aneurysm in the united states.
Journal of Clinical Epidemiology, 48(11):1289 { 1298, 1995.
[18] Colin Goodall. Procrustes methods in the statistical analysis of shape. Journal
of the Royal Statistical Society. Series B (Methodological), 53(2):pp. 285{339,
1991.
[19] V. A. Del Grosso and C. W. Mader. Speed of sound in pure water. The
Journal of the Acoustical Society of America, 52(5B):1442{1446, 1972.
[20] Y. Guan, K. Yokoi, O. Stasse, and A. Kheddar. On robotic trajectory planning
using polynomial interpolations. In Robotics and Biomimetics (ROBIO). 2005
IEEE International Conference on, pages 111 {116, 0-0 2005.
[21] T. Haidegger, G. Fenyvesi, B. Sirokai, M. Kelemen, M. Nagy, B. Takacs,
L. Kovacs, B. Benyo, and Z. Benyo. Towards unified electromagnetic tracking
system assessment - static errors. In Engineering in Medicine and Biology
Society, EMBC, 2011 Annual International Conference of the IEEE, pages
1905-1908, 2011.
[22] J.V. Hajnal, D.J. Hawkes, and D.L.G. Hill. Medical image registration.
Biomedical engineering series. CRC Press, 2001.
[23] Timothy L. Hall, Christopher R. Hempel, Brian J. Sabb, and William W.
Roberts. Acoustic access to the prostate for extracorporeal ultrasound abla-
tion. Journal of Endourology, 24(11):1875{1881, nov. 2010.
[24] Johann Hummel, Michael Figl, Christian Kollmann, Helmar Bergmann, and
Wolfgang Birkfellner. Evaluation of a miniature electromagnetic position
tracker. Medical Physics, 29(10):2205{2212, 2002.
[25] D. Kalman. An underdetermined linear system for gps. The College Mathe-
matics Journal, 33(5):384{390, 2002.
[26] Thomas B. Kinney, Gerant M. Rivera-Sanfeliz, and Stephen Ferrara. Stent
grafts for abdominal and thoracic aortic disease. Applied Radiology, 34(3):768
{ 775, mar. 2005.
[27] David Kwartowitz, S. Herrell, and Robert Galloway. Toward image-guided
robotic surgery: determining intrinsic accuracy of the da vinci robot. Interna-
tional Journal of Computer Assisted Radiology and Surgery, 1:157{165, 2006.
[28] Carl D Latino, Niels Lervad Andersen, and Frands Voss. Detection of time of
arrival of ultrasonic pulses. Journal of Physics: Conference Series, 52(1):14,
2006.
[29] M. Mahesh. The AAPM/RSNA physics tutorial for residents - Fluoroscopy:
Patient radiation exposure issues. Radiographics, 21(4):1033{1045, JUL-AUG
2001.
[30] Frode Manstad-Hulaas, Geir Arne Tangen, Lucian Gheorghe Gruionu, Petter
Aadahl, and Toril A. N. Hernes. Three-Dimensional Endovascular Navigation
With Electromagnetic Tracking: Ex Vivo and In Vivo Accuracy. J. Endovas-
cular Ther., 18(2):230{240, APR 2011.
[31] J.B. McKay and M. Pachter. Geometry optimization for gps navigation. In
Decision and Control, 1997., Proceedings of the 36th IEEE Conference on,
volume 5, pages 4695 {4699 vol.5, dec 1997.
[32] W.G. McMullen, B.A. Delaughe, and J.S. Bird. A simple rising-edge detec-
tor for time-of-arrival estimation. Instrumentation and Measurement, IEEE
Transactions on, 45(4):823 {827, aug 1996.
[33] C.L. Merdes and P.D. Wolf. Locating a catheter transducer in a three-dimensional
ultrasound imaging field. Biomedical Engineering, IEEE
Transactions on, 48(12):1444-1452, dec 2001.
[34] S.A. Meyer and P.D. Wolf. Application of sonomicrometry and multidimen-
sional scaling to cardiac catheter tracking. Biomedical Engineering, IEEE
Transactions on, 44(11):1061 {1067, nov. 1997.
[35] SK Morcos and HS Thomsen. Adverse reactions to iodinated contrast media.
European Radiology, 11:1267{1275, 2001.
[36] Peter Mountney and Guang-Zhong Yang. Motion compensated slam for image
guided surgery. In Proceedings of the 13th international conference on Medical
image computing and computer-assisted intervention: Part II, MICCAI'10,
pages 496{504, Berlin, Heidelberg, 2010. Springer-Verlag.
[37] Jay Mung, Sukgu Han, Fred Weaver, and Jesse Yen. Time of flight and FMCW
catheter localization. In Ultrasonics Symposium (IUS), 2009 IEEE
International, pages 590-593, sept. 2009.
[38] Jay Mung, Sukgu Han, and Jesse T. Yen. Design and in vitro evaluation
of a real-time catheter localization system using time of flight measurements
from seven 3.5-MHz single element ultrasound transducers towards abdominal
aortic aneurysm procedures. Ultrasonics, 51(6):768-775, 2011.
[39] Jay Mung, John Moos, Fred Weaver, and Jesse Yen. Real-time 3d catheter
localization system using ultrasound: Recent in vivo results towards endovas-
cular abdominal aortic aneurysm repair. In Ultrasonics Symposium (IUS),
2011 IEEE International, Oct. 2011.
[40] Jay Mung, Francois Vignon, and Ameet Jain. A non-disruptive technology for
robust 3d tool tracking for ultrasound-guided interventions. In Proceedings of
the 14th international conference on Medical image computing and computer-
assisted intervention - Volume Part I, MICCAI'11, pages 153{160, Berlin,
Heidelberg, 2011. Springer-Verlag.
[41] Jay Mung and Jesse T. Yen. Fundamental limits and simulations on time
difference of arrival source localization using ultrasound signals. In Ultrasonics
Symposium (IUS), 2010 IEEE, pages 1791-1794, oct. 2010.
[42] Sheela T. Patel and Juan C. Parodi. Endovascular repair of abdominal aortic
aneurysms. In Gilbert R. Upchurch, Enrique Criado, Christopher P. Can-
non, and Annemarie M. Armani, editors, Aortic Aneurysms, Contemporary
Cardiology, pages 121{132. Humana Press, 2009.
[43] Ricardo Paz-Fumagalli, J. Mark McKinney, and Andrew H. Stockland. Imag-
ing Techniques and Protocols for Endovascular Repair of Abdominal Aortic
Aneurysms, pages 19{41. 2008.
[44] Terry M Peters. Image-guidance for surgical procedures. Physics in Medicine
and Biology, 51(14):R505, 2006.
[45] S. Pieper, B. Lorensen, W. Schroeder, and R. Kikinis. The na-mic kit: Itk,
vtk, pipelines, grids and 3d slicer as an open platform for the medical image
computing community. In Biomedical Imaging: Nano to Macro, 2006. 3rd
IEEE International Symposium on, pages 698 {701, april 2006.
[46] Lee E Ponsky, Richard L Crownover, Michael J Rosen, Raymond F Rode-
baugh, Elias A Castilla, Jennifer Brainard, Edward E Cherullo, and Andrew C
Novick. Initial evaluation of cyberknife technology for extracorporeal renal
tissue ablation. Urology, 61(3):498 { 501, 2003.
[47] Sound Ranging. Nature, 104(2611):278 {280, nov. 1919.
[48] Marc L. Schermerhorn, A. James O'Malley, Ami Jhaveri, Philip Cotterill,
Frank Pomposelli, and Bruce E. Landon. Endovascular vs. open repair of
abdominal aortic aneurysms in the medicare population. New England Journal
of Medicine, 358(5):464{474, 2008.
[49] Siemens Healthcare Sector. Less radiation during catheter interven-
tions. http://www.siemens.com/press/pool/de/pressemitteilungen/
2011/imaging_therapy/HIM201108040e.pdf, 2011.
[50] I. Sharp, K. Yu, and Y.J. Guo. Peak and leading edge detection for time-
of-arrival estimation in band-limited positioning systems. Communications,
IET, 3(10):1616 {1627, october 2009.
[51] J. Smith and J. Abel. The spherical interpolation method of source localiza-
tion. Oceanic Engineering, IEEE Journal of, 12(1):246 { 252, jan 1987.
[52] Mike Stocksley. Abdominal Ultrasound. Cambridge University Press, Cam-
bridge, 2001.
[53] Gilbert Strang. Introduction to Linear Algebra. Wellesley-Cambridge Press,
Wellesley, 2009.
[54] J. Tang and K. Cleary. Breakdown of tracking accuracy for electromag-
netically guided abdominal interventions. International Congress Series,
1256:452{459, 2003.
[55] Y. Thakur, D.W. Holdsworth, and M. Drangova. Characterization of catheter
dynamics during percutaneous transluminal catheter procedures. Biomedical
Engineering, IEEE Transactions on, 56(8):2140 {2143, aug. 2009.
[56] Junichi Tokuda, Gregory S. Fischer, Xenophon Papademetris, Ziv Yaniv, Luis
Ibanez, Patrick Cheng, Haiying Liu, Jack Blevins, Jumpei Arata, Alexandra J.
Golby, Tina Kapur, Steve Pieper, Everette C. Burdette, Gabor Fichtinger,
Clare M. Tempany, and Nobuhiko Hata. Openigtlink: an open network pro-
tocol for image-guided therapy environment. The International Journal of
Medical Robotics and Computer Assisted Surgery, 5(4):423{434, 2009.
[57] Verasonics. The verasonics ultrasound engine - a new paradigm for ultra-
sound system architecture. http://www.verasonics.com/pdf/verasonics_
ultrasound_eng.pdf, 2007.
[58] D. Vilkomerson and D. Lyons. A system for ultrasonic beacon-guidance of
catheters and other minimally-invasive medical devices. IEEE Trans. Ultra-
son. Ferroelectr. Freq. Control, 44(2):496{504, MAR 1997.
[59] L.K. von Segesser, B. Marty, P. Ruchat, M. Bogen, and A. Gallino. Routine
use of intravascular ultrasound for endovascular aneurysm repair: Angiogra-
phy is not necessary. European Journal of Vascular and Endovascular Surgery,
23(6):537 { 542, 2002.
[60] Junchen Wang, Takashi Ohya, Hongen Liao, Ichiro Sakuma, Tianmiao Wang,
Iwai Tohnai, and Toshinori Iwai. Intravascular catheter navigation using path
planning and virtual visual feedback for oral cancer treatment. Int. J. Med.
Robot. Comput. Assist. Surg., 7(2):214{224, JUN 2011.
[61] G. Welch and E. Foxlin. Motion tracking: no silver bullet, but a respectable
arsenal. Computer Graphics and Applications, IEEE, 22(6):24 {38, nov.-dec.
2002.
[62] RA White, C Donayre, G Kopchok, I Walot, E Wilson, and C deVirgilio.
Intravascular ultrasound: The ultimate tool for abdominal aortic aneurysm
assessment and endovascular graft delivery. J. Endovasc. Surg., 4(1):45{55,
FEB 1997.
[63] BJ Wood, H Zhang, A Durrani, N Glossop, S Ranjan, D Lindisch, E Levy,
F Banovac, J Borgert, S Krueger, J Kruecker, A Viswanathan, and K Cleary.
Navigation with electromagnetic tracking for interventional radiology proce-
dures: A feasibility study. J. Vasc. Interv. Radiol., 16(4):493{505, APR 2005.
[64] P. Zarchan and H. Musoff. Fundamentals of Kalman filtering: A practical
approach. Progress in astronautics and aeronautics. American Institute of
Aeronautics and Astronautics, 2005.
Appendix A
Interconnect
Figure A.1: Completed interconnect, 4-layer design, attached to VDAS connector.
We employed a custom interconnect to reduce the amount of RF noise introduced
into the ULS. Fan Zheng laid out the PCB to provide interconnect
between the VDAS zero-insertion-force connector and the Olympus/Panametrics
piston transducers. The interconnect is designed to access 8 channels,
specifically pins P2, T2, W2, Z2, P9, T9, W9, Z9. These correspond to VDAS channels
77, 68, 12, 21, 116, 125, 53, 44 respectively.
A.1 List of Components
The interconnect consists of 3 components:
PCB: Printed with Sunstone Circuits
Socket strips: Samtec: SSW-113-06-G-D
SMA connectors: Tyco Electronics: 5-1814832-1
Additionally, we used new cabling: 10 ft RG-58C SMA-BNC Termination,
Amphenol: CO-058SMABNC0-010 (www.cablesondemand.com order: ACDI039248)
Figure A.2: Photo showing a cut PCB card, socket connector, SMA connector and
BNC-SMA cable.
A.1.1 PCB
Figure A.3: Four "cards" fit onto one PCB sheet. Two cards are required per
design, and two designs are represented. PCB schematic and scans shown.
In the interest of good noise-shielding, the interconnect uses a 4 layer board where
the signal traces are buried in the middle layers, and the outer layers provide
ground. Because 4 layer boards are more expensive than 2 layer boards, Fan
included another design that could be implemented with 2 layers, where the signal
traces are on an outer layer and the middle layers are just blank. The reasoning
here was that if the 2 layer design performs the same as the 4 layer design, we
could just order cheaper 2 layer boards in the future.
The board was laid out with Altium Designer 9.4.0. The files were uploaded to
Sunstone Circuits for 3-day "quickturn" turnaround. The smallest order was for 2
pieces of 3 x 3 inch square PCBs.
PCBs can be difficult to cut. I tried the Yen lab small jig-saw and got nowhere.
The Dremel tool with cutting disc was far more effective. In order to cut straight,
I clamped the Dremel to the lab bench, like a mini table-saw. This gave me greater
control to jig and cut the PCB. Note, this makes a huge mess, and the parts can
get dangerously hot. The cut edges of the PCB are razor sharp. Be sure to smooth
the edges and corners with sandpaper.
A.1.2 Socket Strips
The socket strips are a semi-custom solution from Samtec. The part number
represents all the parameters necessary to describe the item (pin shape, material,
pitch, row and column count, etc). Samtec has an amazing free samples system.
I ordered a total of 20 pieces and Samtec shipped them next-day for free. You
should use this service - but be wary not to abuse it, as they can cut us (or all of
USC) off at any time.
Each interconnect card is edge-connected to the socket strip. The card thickness
is about 2/3rds the spacing in between the pin columns. Bend the pins inward in
order to get good contact between the pins and the solder pads. Take care to bend
the pins evenly. Done correctly, the pins should be able to hold the card snugly in
place in the center of the strip before soldering.
A.1.3 Assembly Notes, Finished Product
Solder SMA connectors and SMA socket strips to the PCB. Be careful to place
the SMA connectors on the correct side or else you will not be able to connect
the cables. The SMA connectors are on the same side as the text for the P2-
Z2 channels. The SMA connectors are on the NON-TEXT SIDE for the P9-Z9
channels.
Make good clean solder joints. Note that the center signal channel on the SMA
connector will be easier to solder than the ground pins as the ground planes on the
connector and PCB act as heat sinks. Clipping the through-hole SMA leads does
not appear to make a difference as far as noise, but you should test this yourself.
Appendix B
Matlab Code
The Ultrasound Localization System (ULS) was implemented on the Verasonics
Data Acquisition System (VDAS) running 32-bit Matlab Version 2010a on Mac
OS X 10.6.8. To start the ULS, the user first runs initialize_USGPS.m followed by
VSX in the Matlab command line. VSX prompts the user for the setup file, which
is setup_USGPS.mat.
The following .m files are required to run the ULS:
initialize_USGPS.m: Configures the VDAS and ULS parameters. Saves
VDAS parameters into setup_USGPS.mat and saves ULS parameters as
global variables.
myFunctionJay.m: External function called by VDAS once every sequence.
Contains all code to process RF data to provide location estimates. Also
contains code for saving and plotting data.
JayKalman.m: Function called by myFunctionJay.m for providing the Extended
Kalman Filter estimate.
make_x_i.m: External function to create disc-array coordinates.
B.1 initialize_USGPS.m
Configures the VDAS and ULS parameters. It saves VDAS parameters into
setup_USGPS.mat and saves ULS parameters as global variables.
% Initialize and Setup Ultrasound Localization System
% Jay Mung 2012
%
% Copyright 2001-2010 Verasonics, Inc. All worldwide rights and
% remedies under all intellectual property laws and industrial property
% laws are reserved.
%
% Adapted from SetUpL7_4Flash_4B.m:
clear all; close all; clc;
%% Initialization Parameters
global initParams
% VDAS Setup
initParams.VDAS.fc = 3.75e6;               % [Hz] center frequency
initParams.VDAS.fs = initParams.VDAS.fc*4; % [Hz] Verasonics fs is set by 4*fc
initParams.VDAS.c = 1540;                  % [m/s]
initParams.VDAS.startDepth = 3;            % Verasonics "Wavelengths" * 2*c/fc = m
initParams.VDAS.endDepth = 531;            % " "
initParams.VDAS.acqPerFrame = 2;           % Number of acquisitions per frame to average
initParams.VDAS.framePeriod = 100000;      % Verasonics "noop" units * 200e-9 = sec
initParams.VDAS.acqPeriod = 100000;
initParams.VDAS.TGCuniform = 1023;         % initial flat TGC
initParams.VDAS.tOffset = 50e-9;           % [sec] offset between transmit and receive
% Need OpenIGTLink matlab functions to transfer to 3D Slicer
%initParams.sd = igtlopen('localhost', 18944);
% VDAS Hardware Filter
initParams.VDAS.HWfiltCoeffs = [0,0,0,0,0,1];
% Plug Map
initParams.plugmap = [77,68,12,21,116,125,53,44]; % Mapping from VDAS elements to Channels
%% November 2011 PigCT Registration
SensorCornerFids = [128.725,78.903,1173.292;
    190.903,95.894,708.250;
    109.964,85.593,697.592];
NoSensorCornerFids = [131.545,78.492,1222.9;
    152.528,94.694,734.749;
    149.714,85.293,746.845];
[D,Z,senTnosen] = ...
    procrustes(NoSensorCornerFids,SensorCornerFids,'Scaling',false,'Reflection',false);
senCorReg = SensorCornerFids*senTnosen.T + senTnosen.c;
SensorFids = [115.223,127.942,983.045;
    117.037,128.452,907.151;
    117.286,129.482,831.258;
    72.421,114.903,975.477;
    68.408,115.572,899.573;
    65.895,115.586,825.325];
senReg = SensorFids*senTnosen.T + repmat(senTnosen.c(1,:),6,1);
initParams.Array.N = 6; % Number of sensors
x_i = 0.01*[0,0; 7.5,0; 15,0; 0,18.4; 7.5,18.4; 15,18.4]; % [m] sensor locations
x_i = x_i - repmat(mean(x_i),initParams.Array.N,1); % Center the array
rr = 90;
% Rotate the array
initParams.Array.x_i(:,1:2) = x_i(:,1:2)*[cosd(rr) -sind(rr); sind(rr) cosd(rr)];
initParams.Array.flipZ = false;
fidArray = 1e3*[initParams.Array.x_i zeros(initParams.Array.N,1)];
[D,Z,procParams.REGTRANS] = procrustes(senReg,fidArray,'Scaling',false,'Reflection',false);
%% Small manual registration
procParams.REGTWEAK.AxisAngle = [0 0 0 0];
procParams.REGTWEAK.matrix = vrrotvec2mat(procParams.REGTWEAK.AxisAngle);
procParams.REGTWEAK.T = [0 0 0];
% default filter
procParams.filtCoeffs.taps = 6;
procParams.filtCoeffs.low = 0.1;
procParams.filtCoeffs.high = 4;
% FILTER IS HARD CODED FOR 7.5 = 15/2 MHz Fs
[procParams.filtCoeffs.AA,procParams.filtCoeffs.BB] = ...
    cheby1(procParams.filtCoeffs.taps,1,[procParams.filtCoeffs.low/7.5 ...
    procParams.filtCoeffs.high/7.5],'bandpass'); % software filter
procParams.filtOn = true;
% Default thresholds
procParams.thresholds.baseline = ones(1,initParams.Array.N);
procParams.thresholds.lowFactor = 19;
procParams.thresholds.lowThresh = procParams.thresholds.baseline*procParams.thresholds.lowFactor;
procParams.thresholds.lowHiFactor = 2;
procParams.thresholds.HiThresh = procParams.thresholds.lowThresh * ...
    procParams.thresholds.lowHiFactor;
procParams.thresholds.HiLowReject = 155; % reject if threshold crossings are more than this many samples apart
% Default channel reject/include
procParams.weights.enable = true(1,initParams.Array.N);      % manual enable
procParams.weights.threshHiLow = true(1,initParams.Array.N); % threshold based enable
procParams.weights.validRange = true(1,initParams.Array.N);
procParams.weights.global = true(1,initParams.Array.N);      % AND across all enables
procParams.holdInterp = true; % nan rejection hold interpolation
% KALMAN
procParams.extKal.Rcoeff = 5;
procParams.extKal.PHIs = 1;
%% GUI Elements and Default Controls
global GUIcontrols
GUIcontrols.axes1 = 5;
GUIcontrols.axes1xAxis = 2;
GUIcontrols.axes1DivBits = 13;
GUIcontrols.axes1FFTchannel = 1;
GUIcontrols.resetKal = false;
GUIcontrols.RF = true;
GUIcontrols.ThreeD = true;
GUIcontrols.XYZ = true;
GUIcontrols.Status = true;
GUIcontrols.StatusRadio = 'FrameRate';
GUIcontrols.Logging = true;
GUIcontrols.SaveClick = false;
%GUIcontrols.TxToSlicer = true;
GUIcontrols.SlicerRadio = 'kal';
GUIcontrols.REGTWEAK = false;
%% Diagnostic Data
global diagnostics
diagnostics.noises = nan(100,initParams.Array.N);
diagnostics.traces = nan(200,3);
diagnostics.traces_kal = nan(200,3);
diagnostics.dT = 0.01;
diagnostics.count = 1;
%% Verasonics setup %%%
% Specify Trans structure array.
Trans.name = 'L7-4';
Trans = computeTrans(Trans); % L7-4 transducer is 'known' transducer so we can use computeTrans.
Trans.name = 'Satellite';
Trans.frequency = initParams.VDAS.fc/1e6; % verasonics unit is in MHz
Trans.type = 0;
Trans.id = 0;
Trans.numelements = 128;
Trans.elementWidth = 0.9968;
Trans.spacing = 1;
Trans.maxHighVoltage = 1.61;
% Specify SFormat structure array.
SFormat.transducer = 'Satellite'; % 128 element linear array with 1.0 lambda spacing
SFormat.scanFormat = 'RLIN';      % rectangular linear array scan
SFormat.radius = 0;               % ROC for curved lin. or dist. to virt. apex
SFormat.theta = 0;
SFormat.numRays = 1;              % no. of Rays (1 for Flat Focus)
SFormat.FirstRayLoc = [0,0,0];    % x,y,z
SFormat.rayDelta = 128*Trans.spacing; % spacing in radians (sector) or dist. between rays (wvlnghts)
SFormat.startDepth = initParams.VDAS.startDepth; % Acquisition depth in wavelengths
SFormat.endDepth = initParams.VDAS.endDepth;     % This should preferably be a multiple of 128 samples.
% Specify PData structure array.
PData.sFormat = 1; % use first SFormat structure.
PData.pdeltaX = 1.0;
PData.pdeltaZ = 0.5;
PData.Size(1) = ceil((SFormat.endDepth-SFormat.startDepth)/PData.pdeltaZ); % startDepth, endDepth and pdelta set PData.Size.
PData.Size(2) = ceil((Trans.numelements*Trans.spacing)/PData.pdeltaX);
PData.Size(3) = 1; % single image page
PData.Origin = [-Trans.spacing*(Trans.numelements-1)/2,0,SFormat.startDepth]; % x,y,z of upper lft crnr.
% Specify Media object.
pt1;
Media.function = 'movePoints';
% Specify Resources.
Resource.RcvBuffer(1).datatype = 'int16';
Resource.RcvBuffer(1).rowsPerFrame = 128*ceil((SFormat.endDepth-SFormat.startDepth)*8/128 + 1) ...
    *initParams.VDAS.acqPerFrame;
Resource.RcvBuffer(1).colsPerFrame = 128; % 128 RcvBuffer is 64 cols using syn aper.
Resource.RcvBuffer(1).numFrames = 1;      % 100 frames used for RF cineloop.
Resource.ImageBuffer(1).datatype = 'double';
Resource.ImageBuffer(1).rowsPerFrame = 1024*2; % this is for maximum depth
Resource.ImageBuffer(1).colsPerFrame = PData.Size(2);
Resource.ImageBuffer(1).numFrames = 1;
Resource.DisplayWindow(1).Title = 'Image Display';
Resource.DisplayWindow(1).pdelta = 0.3;
Resource.DisplayWindow(1).Position = [250,150, ... % upper left corner position
    ceil(PData.Size(2)*PData.pdeltaX/Resource.DisplayWindow(1).pdelta), ... % width
    ceil(PData.Size(1)*PData.pdeltaZ/Resource.DisplayWindow(1).pdelta)];    % height
Resource.DisplayWindow(1).ReferencePt = [PData.Origin(1),PData.Origin(3)]; % 2D imaging is in the X,Z plane
Resource.DisplayWindow(1).Colormap = gray(256);
Resource.Parameters.connector = 2;
Resource.Parameters.numTransmit = 128;    % number of transmit channels. must equal 128 or 256 for 4 board system.
Resource.Parameters.numRcvChannels = 128; % number of receive channels.
Resource.Parameters.speedOfSound = initParams.VDAS.c; % speed of sound in m/sec
Resource.Parameters.speedCorrectionFactor = 1.0;
Resource.Parameters.simulateMode = 0;
% Resource.Parameters.simulateMode = 1 forces simulate mode, even if hardware is present.
% Resource.Parameters.simulateMode = 2 stops sequence and processes RcvData continuously.
% Specify Transmit waveform structure. These structures are persistent and we
% only need to specify what changes in subsequent structures.
TW.type = 'parametric';
TW.Parameters = [18,17,1,1]; % A, B, C, D
TW.Parameters = repmat(TW.Parameters, 128, 1); % replicate for 128 xmitters
% Specify TX structure array.
TX.waveform = 1;           % use 1st TW structure.
TX.Origin = [0.0,0.0,0.0]; % flash transmit origin at (0,0,0).
TX.focus = 0;
TX.Steer = [0.0,0.0];      % theta, alpha = 0.
TX.Apod = zeros(1,128);
TX.Delay = computeTXDelays(TX);
% Specify TGC Waveform structure.
TGC.CntrlPts = initParams.VDAS.TGCuniform*ones(1,8); %[400,490,550,610,670,730,790,850];
TGC.rangeMax = SFormat.endDepth;
TGC.Waveform = computeTGCWaveform(TGC);
% Specify Receive structure arrays
% endDepth - add additional acquisition depth to account for some channels
% having longer path lengths.
maxAcqLength = sqrt(SFormat.endDepth^2 + (Trans.numelements*Trans.spacing)^2) - SFormat.startDepth;
wlsPer128 = 128/(4*2); % wavelengths in 128 samples for 4 samplesPerWave
RxApod = zeros(1,Resource.Parameters.numRcvChannels);
RxApod(initParams.plugmap) = 1;
Receive = repmat(struct('Apod', RxApod, ...
    'startDepth', SFormat.startDepth, ...
    'endDepth', SFormat.startDepth + wlsPer128*ceil(maxAcqLength/wlsPer128), ...
    'TGC', 1, ...
    'bufnum', 1, ...
    'framenum', 1, ...
    'acqNum', 1, ...
    'samplesPerWave', 4, ...
    'mode', 0, ...
    'InputFilter', repmat(initParams.VDAS.HWfiltCoeffs,Resource.Parameters.numRcvChannels,1), ...
    'callMediaFunc', 1),1,Resource.RcvBuffer(1).numFrames*initParams.VDAS.acqPerFrame);
% Set event specific Receive attributes.
for i = 1:Resource.RcvBuffer(1).numFrames
    for p = 1:initParams.VDAS.acqPerFrame
        rcv = (i-1)*initParams.VDAS.acqPerFrame + p;
        Receive(rcv).framenum = i;
        Receive(rcv).acqNum = p;
        Receive(rcv).InputFilter(~RxApod,:) = 0;
    end
end
% Specify Recon structure arrays.
Recon = struct('senscutoff', 0.6, ...
    'pdatanum', 1, ...
    'rcvBufFrame', 1, ...
    'ImgBufDest', [1,1], ...
    'RINums', 1);
% Define ReconInfo structures.
ReconInfo = struct('mode', 0, ... % replace amplitude.
    'txnum', 1, ...
    'rcvnum', 1);
% Specify Process structure array.
Process(1).classname = 'Image';
Process(1).method = 'imageDisplay';
Process(1).Parameters = {'imgbufnum',1,... % number of buffer to process.
    'framenum',-1,...   % (-1 => lastFrame)
    'pdatanum',1,...    % number of PData structure to use
    'norm',1,...        % normalization method (1 means fixed)
    'pgain',1.0,...     % pgain is image processing gain
    'persistMethod','simple',...
    'persistLevel',40,...
    'interp',1,...      % method of interpolation (1=4pt interp)
    'compression',0.5,... % X^0.5 normalized to output word size
    'mappingMode','full',...
    'display',1,...     % display image after processing
    'displayWindow',1};
% JAY'S ADDITION FOR PLOTTING
Process(2).classname = 'External';
Process(2).method = 'myFunctionJay';
Process(2).Parameters = {'srcbuffer','receive',... % buffer to process.
    'srcbufnum',1,...
    'dstbuffer','none',...
    'srcframenum',0}; % no output buffer
%%
SeqControl(1).command = 'noop';
SeqControl(1).argument = initParams.VDAS.framePeriod;
SeqControl(2).command = 'jump';
SeqControl(2).argument = 1;
SeqControl(3).command = 'triggerOut';
SeqControl(4).command = 'noop';
SeqControl(4).argument = initParams.VDAS.acqPeriod;
nsc = 5; % nsc is count of SeqControl objects
n = 1;   % n is count of Events
%for w = 1:125
% Acquire all frames defined in RcvBuffer
for i = 1:Resource.RcvBuffer(1).numFrames
    for p = 1:(initParams.VDAS.acqPerFrame - 1)
        rcv = (i-1)*initParams.VDAS.acqPerFrame + p;
        Event(n).info = 'acquisition';
        Event(n).tx = 1;             % use 1st TX structure.
        Event(n).rcv = rcv;          % use ith Rcv structure.
        Event(n).recon = 0;          % no reconstruction.
        Event(n).process = 0;        % no processing
        Event(n).seqControl = [3,4]; % use SeqControl struct defined below.
        n = n+1;
    end
    Event(n).info = 'acquisition';
    Event(n).tx = 1;        % use 1st TX structure.
    Event(n).rcv = rcv + 1; % use ith Rcv structure.
    Event(n).recon = 0;     % no reconstruction.
    Event(n).process = 0;   % no processing
    Event(n).seqControl = [3,nsc]; % use SeqControl struct defined below.
    SeqControl(nsc).command = 'transferToHost';
    nsc = nsc + 1;
    n = n+1;
    Event(n).info = 'Reconstruct';
    Event(n).tx = 0;      % no transmit
    Event(n).rcv = 0;     % no rcv
    Event(n).recon = 1;   % reconstruction
    Event(n).process = 1; % processing
    % if floor(i/5) == i/5 % Exit to Matlab every 5th frame
    if i == 1 % Exit to Matlab every 5th frame
        Event(n).seqControl = nsc;
        SeqControl(nsc).command = 'returnToMatlab';
        SeqControl(nsc).argument = 1;
        nsc = nsc+1;
    else
        Event(n).seqControl = 0;
    end
    n = n+1;
    Event(n).info = 'noop'; % noop between frames for frame rate control
    Event(n).tx = 0;        % no transmit
    Event(n).rcv = 0;       % no rcv
    Event(n).recon = 0;     % no reconstruction
    Event(n).process = 2;   % external processing function
    Event(n).seqControl = 1; % reference for 'noop' command
    n = n+1;
end
Event(n).info = 'Jump back to first event';
Event(n).tx = 0;    % no TX
Event(n).rcv = 0;   % no Rcv
Event(n).recon = 0; % no Recon
Event(n).process = 0;
Event(n).seqControl = 2; % jump command
n = n + 1;
%end
%%
% UI Control Elements
% Sensitivity Cutoff
sensx = 170;
sensy = 420;
UI(1).Control = {'Style','text',... % popupmenu gives list of choices
    'String','Sens. Cutoff',...
    'Position',[sensx+10,sensy,100,20],... % position on UI
    'FontName','Arial','FontWeight','bold','FontSize',12,...
    'BackgroundColor',[0.8,0.8,0.8]};
UI(2).Control = {'Style','slider',... % popupmenu gives list of choices
    'Position',[sensx,sensy-30,120,30],... % position on UI
    'Max',1.0,'Min',0,'Value',Recon(1).senscutoff,...
    'SliderStep',[0.025 0.1],...
    'Callback',{@sensCutoffCallback}};
UI(2).Callback = {'sensCutoffCallback.m',...
    'function sensCutoffCallback(hObject,eventdata)',...
    ' ',...
    'sens = get(hObject,''Value'');',...
    'ReconL = evalin(''base'', ''Recon'');',...
    'for i = 1:size(ReconL,2)',...
    '    ReconL(i).senscutoff = sens;',...
    'end',...
    'assignin(''base'',''Recon'',ReconL);',...
    '% Set Control.Command to reinitialize Recon structure.',...
    'Control = evalin(''base'',''Control'');',...
    'Control.Command = ''update&Run'';',...
    'Control.Parameters = {''Recon''};',...
    'assignin(''base'',''Control'', Control);',...
    '% Set the new cutoff value in the text field.',...
    'h = findobj(''tag'',''sensCutoffValue'');',...
    'set(h,''String'',num2str(sens,''%1.3f''));',...
    'return'};
UI(3).Control = {'Style','edit','String',num2str(Recon(1).senscutoff,'%1.3f'), ... % text
    'Position',[sensx+20,sensy-40,60,22], ... % position on UI
    'tag','sensCutoffValue', ...
    'BackgroundColor',[0.9,0.9,0.9]};
% Range Change
rngx = 20;
rngy = 125;
UI(4).Control = {'Style','text','String','Range',...
    'Position',[rngx+10,rngy,80,20],...
    'FontName','Arial','FontWeight','bold','FontSize',12,...
    'BackgroundColor',[0.8,0.8,0.8]};
UI(5).Control = {'Style','slider',...
    'Position',[rngx,rngy-30,120,30],...
    'Max',320,'Min',64,'Value',SFormat.endDepth,...
    'SliderStep',[0.125,0.250],...
    'Callback',{@rangeChangeCallback}};
UI(5).Callback = {'rangeChangeCallback.m',...
    'function rangeChangeCallback(hObject,eventdata)',...
    ' ',...
    'simMode = evalin(''base'',''Resource.Parameters.simulateMode'');',...
    '% No range change if in simulate mode 2.',...
    'if simMode == 2',...
    '    set(hObject,''Value'',evalin(''base'',''SFormat.endDepth''));',...
    '    return',...
    'end',...
    'range = get(hObject,''Value'');',...
    'assignin(''base'',''range'',range);',...
    'SFormat = evalin(''base'',''SFormat'');',...
    'SFormat.endDepth = range;',...
    'assignin(''base'',''SFormat'',SFormat);',...
    'evalin(''base'',''PData.Size(1) = ceil((SFormat.endDepth-SFormat.startDepth)/PData.pdeltaZ);'');',...
    'evalin(''base'',''[PData.Region,PData.numRegions] = createRegions(PData);'');',...
    'evalin(''base'',''Resource.DisplayWindow(1).Position(4) = ceil(PData.Size(1)*PData.pdeltaZ/Resource.DisplayWindow(1).pdelta);'');',...
    'Receive = evalin(''base'', ''Receive'');',...
    'Trans = evalin(''base'', ''Trans'');',...
    'maxAcqLength = sqrt(range^2 + (Trans.numelements*Trans.spacing)^2) - SFormat.startDepth;',...
    'wlsPer128 = 128/(4*2);',...
    'for i = 1:size(Receive,2)',...
    '    Receive(i).endDepth = SFormat.startDepth + wlsPer128*ceil(maxAcqLength/wlsPer128);',...
    'end',...
    'assignin(''base'',''Receive'',Receive);',...
    '% Update VDAS parameters of Receive objects.',...
    'evalin(''base'',''updateVDAS(''''Receive'''')'');',...
    'evalin(''base'',''TGC.rangeMax = SFormat.endDepth;'');',...
    'evalin(''base'',''TGC.Waveform = computeTGCWaveform(TGC);'');',...
    'Control = evalin(''base'',''Control'');',...
    'Control.Command = ''update&Run'';',...
    'Control.Parameters = {''SFormat'',''PData'',''Receive'',''Recon'',''DisplayWindow'',''ImageBuffer''};',...
    'assignin(''base'',''Control'', Control);',...
    'assignin(''base'', ''action'', ''displayChange'');',...
    'h = findobj(''tag'',''rangeValue'');',...
    'set(h,''String'',num2str(range,''%3.0f''));',...
    'return'};
UI(6).Control = {'Style','edit','String',num2str(SFormat.endDepth,'%3.0f'), ... % text
    'Position',[rngx+20,rngy-40,60,22], ... % position on UI
    'tag','rangeValue', ...
    'BackgroundColor',[0.9,0.9,0.9]};
clear i j n sensx sensy rngx rngy
% Specify factor for converting sequenceRate to frameRate.
frameRateFactor = 5;
% Save all the structures to a .mat file.
save('setup_USGPS');
MyFunctionJayGUI2;
LocationGUI;
B.2 myFunctionJay.m
External function called by VDAS once every sequence. Contains all code to
process RF data to provide location estimates. Also contains code for saving and
plotting data.
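The position estimate at the heart of this function linearizes the range equations d_i^2 = |x_s - x_i|^2 into a weighted least-squares system with unknowns [x, y, R^2], where R^2 = x^2 + y^2 + z^2, and then recovers depth as z = sqrt(R^2 - x^2 - y^2). As a cross-check of that algebra, here is a small standalone Python/NumPy sketch (the sensor layout and source position are made-up values, not the ones used in the ULS):

```python
import numpy as np

def locate(sensors_xy, d, w):
    """Linearized multilateration. Sensors lie in the z = 0 plane at
    sensors_xy (N x 2); d holds measured distances; w holds 0/1 channel
    weights. Unknowns are [x, y, R^2] with R^2 = x^2 + y^2 + z^2."""
    A = np.column_stack([-2.0 * sensors_xy, np.ones(len(d))])
    b = d**2 - np.sum(sensors_xy**2, axis=1)
    W = np.diag(w)
    est = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)  # weighted normal equations
    z = np.sqrt(est[2] - est[0]**2 - est[1]**2)      # recover depth from R^2
    return np.array([est[0], est[1], z])

# Hypothetical 4-sensor layout and a source at (0.01, 0.02, 0.05) m
sensors = np.array([[0.0, 0.0], [0.05, 0.0], [0.0, 0.05], [0.05, 0.05]])
src = np.array([0.01, 0.02, 0.05])
d = np.linalg.norm(np.column_stack([sensors, np.zeros(4)]) - src, axis=1)
print(locate(sensors, d, np.ones(4)))  # recovers src up to floating-point error
```

With exact distances the weighted normal equations reproduce the source position; in the ULS the weights zero out channels that fail the double-threshold test.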
% Jay Mung Ultrasound Localization System 2012
function myFunctionJay(RData)
global initParams procParams GUIdisplays GUIcontrols diagnostics rslts % del rslts
persistent acqLen xx tt dd prevclock coord_rec lastGood_x_s_kal_plot lastGood_x_s_plot
if isempty(lastGood_x_s_plot)
    lastGood_x_s_plot = [0;0;0];
    lastGood_x_s_kal_plot = [0;0;0];
end
%% Crop, reshape, average, (filter), envelope detect data
rslts.RData_crop = RData(:,initParams.plugmap(1:initParams.Array.N));
if isempty(acqLen)
    acqLen = length(rslts.RData_crop)/initParams.VDAS.acqPerFrame;
    xx = 0:acqLen-1;
    tt = xx/initParams.VDAS.fs + 2*initParams.VDAS.startDepth/initParams.VDAS.fc - ...
        initParams.VDAS.tOffset;
    dd = tt*initParams.VDAS.c;
end
RData_crop_reshape = ...
    double(reshape(rslts.RData_crop,acqLen,initParams.VDAS.acqPerFrame,initParams.Array.N));
RData_crop_reshape_avg = squeeze(mean(RData_crop_reshape,2));
RData_crop_reshape_avg_filt = ...
    filter(procParams.filtCoeffs.AA,procParams.filtCoeffs.BB,RData_crop_reshape_avg);
RData_crop_reshape_avg_filt_hilb = abs(hilbert(RData_crop_reshape_avg_filt));
%% Detect Thresholds
threshSkipSamps = 300;
threshmapLow = RData_crop_reshape_avg_filt_hilb(threshSkipSamps:end,:) > ...
    repmat(procParams.thresholds.lowThresh,acqLen - threshSkipSamps + 1,1);
threshmapLow(end,:) = 1; % make last sample true to handle exceptions
[aa,bb] = find(threshmapLow); % find first threshold crossings
tem = [1; (find(bb(2:length(bb)) - bb(1:length(bb)-1)) + 1)];
rslts.threshIndLow = aa(tem) - 1 + threshSkipSamps;
%% Detect High Thresholds
threshmapHigh = RData_crop_reshape_avg_filt_hilb(threshSkipSamps:end,:) > ...
    repmat(procParams.thresholds.HiThresh,acqLen - threshSkipSamps + 1,1);
threshmapHigh(end,:) = 1; % make last sample true to handle exceptions
[aa,bb] = find(threshmapHigh); % find first threshold crossings
tem = [1; (find(bb(2:length(bb)) - bb(1:length(bb)-1)) + 1)];
rslts.threshIndHi = aa(tem) - 1 + threshSkipSamps;
%% Weights. Double Threshold
tempDiff = rslts.threshIndHi - rslts.threshIndLow;
procParams.weights.threshHiLow = (tempDiff >= 0) & (tempDiff < procParams.thresholds.HiLowReject);
procParams.weights.validRange = rslts.threshIndLow < 4125; % some number less than the last sample
procParams.weights.global = procParams.weights.threshHiLow & procParams.weights.validRange;
%% Calculate Position Algebraically
rslts.D = dd(rslts.threshIndLow)';
W = diag(procParams.weights.global);
A = [-2*initParams.Array.x_i ones(initParams.Array.N,1)];
b = rslts.D.^2 - sum(initParams.Array.x_i.^2,2);
xydist_est = A'*W*A \ A'*W*b;
rslts.x_s_est = [xydist_est(1:2); sqrt(xydist_est(3) - sum(xydist_est(1:2).^2))];
%% Calculate Position via External Kalman
rslts.kalout = JayKalman(rslts.D, diagnostics.dT, [initParams.Array.x_i ...
    zeros(initParams.Array.N,1)], rslts.x_s_est, procParams.weights.global);
%% Timing
rslts.timestamp = clock;
if isempty(prevclock)
    prevclock = rslts.timestamp - [0 0 0 0 0 0.1];
end
diagnostics.dT = (rslts.timestamp - prevclock)*[0 0 0 60^2 60 1]';
prevclock = rslts.timestamp;
%% Saving Data
if GUIcontrols.Logging
    coord_rec(diagnostics.count).rslts = rslts;
    coord_rec(diagnostics.count).procParams = procParams;
    coord_rec(diagnostics.count).initParams = initParams;
    if (diagnostics.count == 5000) || GUIcontrols.SaveClick
        flNm2 = sprintf('CoordBuffer %02.0f%02.0f T%02.0f%02.0f S%06.0f.mat',rslts.timestamp(2),...
            rslts.timestamp(3),rslts.timestamp(4),rslts.timestamp(5),1e4*rslts.timestamp(6));
        warndlg(['Saving: ',flNm2,' and clearing buffer']);
        eval(['save ', flNm2, ' coord_rec']);
        diagnostics.count = 1;
        clear coord_rec
        GUIcontrols.SaveClick = false;
    end
    diagnostics.count = diagnostics.count + 1;
end
%% RF Channel related Plots
if GUIcontrols.RF
    %% Plot Time domain data
    if GUIcontrols.axes1 <= 5
        % Choose display X axis according to GUI selection
        switch GUIcontrols.axes1xAxis
            case 1 % samples
                xAx = xx;
            case 2 % time
                xAx = tt*1e6;
            case 3 % distance
                xAx = dd*1e2;
            case 4 % frames
                xAx = xx/acqLen;
        end
        % Display according to GUI selection
        switch GUIcontrols.axes1
            case 1 % show all frames raw data
                RData_display = double(rslts.RData_crop)./2^GUIcontrols.axes1DivBits + ...
                    repmat(1:initParams.Array.N,length(rslts.RData_crop),1);
                xAx = (1:length(rslts.RData_crop))/acqLen;
            case 2 % show first frame raw data
                RData_display = ...
                    (squeeze(RData_crop_reshape(:,1,:))./2^GUIcontrols.axes1DivBits) + ...
                    repmat(1:initParams.Array.N,acqLen,1);
            case 3 % show averaged data
                RData_display = (RData_crop_reshape_avg./2^GUIcontrols.axes1DivBits) + ...
                    repmat(1:initParams.Array.N,acqLen,1);
            case 4 % show averaged, filtered data
                RData_display = (RData_crop_reshape_avg_filt./2^GUIcontrols.axes1DivBits) + ...
                    repmat(1:initParams.Array.N,acqLen,1);
            case 5 % show averaged, filtered, envelope detected data
                RData_display = (RData_crop_reshape_avg_filt_hilb./2^GUIcontrols.axes1DivBits) ...
                    + repmat(1:initParams.Array.N,acqLen,1);
        end
        plot(GUIdisplays.axes1,xAx,RData_display);
        set(GUIdisplays.axes1,'ylim',[0 initParams.Array.N + 1],'xlim',[0 ...
            max(xAx)],'XTickMode','auto');
    end
    %% Plot Frequency domain data
    if GUIcontrols.axes1 > 5 && GUIcontrols.axes1 <= 9 && GUIcontrols.axes1FFTchannel <= ...
            initParams.Array.N
        switch GUIcontrols.axes1
            case 6 % FFT all frames raw
                temp = abs(fft(double(rslts.RData_crop(:,GUIcontrols.axes1FFTchannel))));
                temp = temp/max(temp(:));
                xAx = linspace(0,initParams.VDAS.fs,length(temp));
            case 7 % FFT averaged
                temp = abs(fft(RData_crop_reshape_avg(:,GUIcontrols.axes1FFTchannel)));
                temp = temp/max(temp(:));
                xAx = linspace(0,initParams.VDAS.fs,length(temp));
            case 8 % FFT filtered
                temp = abs(fft(RData_crop_reshape_avg_filt(:,GUIcontrols.axes1FFTchannel)));
                temp = temp/max(temp(:));
                xAx = linspace(0,initParams.VDAS.fs,length(temp));
            case 9 % Filter Response
                temp = abs(freqz(procParams.filtCoeffs.AA,procParams.filtCoeffs.BB));
                xAx = linspace(0,initParams.VDAS.fs/2,length(temp));
        end
        RData_display = 20*log10(temp);
        plot(GUIdisplays.axes1,xAx,RData_display);
        set(GUIdisplays.axes1,'ylim',[-60 0],'xlim',[0 initParams.VDAS.fs/2],'XTickMode','auto');
    end
    %% Plot Noise Histogram data
    if GUIcontrols.axes1 == 10
        diagnostics.noises = [diagnostics.noises(2:end,:); std(double(rslts.RData_crop))];
        bar(GUIdisplays.axes1, [diagnostics.noises(end,:);nanmean(diagnostics.noises)]');
        set(GUIdisplays.axes1,'ylim',[0 10],'xlim',[0 initParams.Array.N+1]);
    end
    %% Plot Low Thresholds
    if GUIcontrols.axes1 == 5
        set(GUIdisplays.axes1,'NextPlot','add');
        threshDisplay = repmat(procParams.thresholds.lowThresh/2^GUIcontrols.axes1DivBits + ...
            (1:initParams.Array.N),length(xAx),1);
        plot(GUIdisplays.axes1,xAx, threshDisplay,'--');
        plot(GUIdisplays.axes1,xAx(rslts.threshIndLow), ...
            diag(threshDisplay(rslts.threshIndLow,:)),'rx');
        set(GUIdisplays.axes1,'NextPlot','replacechildren');
    end
    %% Plot High Thresholds
    if GUIcontrols.axes1 == 5
        set(GUIdisplays.axes1,'NextPlot','add');
        threshDisplay = repmat(procParams.thresholds.HiThresh/2^GUIcontrols.axes1DivBits + ...
            (1:initParams.Array.N),length(xAx),1);
        % plot(GUIdisplays.axes1,xAx, threshDisplay,'--');
        plot(GUIdisplays.axes1,xAx(rslts.threshIndHi), ...
            diag(threshDisplay(rslts.threshIndHi,:)),'kx');
set(GUIdisplays.axes1,'NextPlot','replacechildren');
end
end
%% 3D Position Plots
% condition data, flip z coordinate, reject plotting NaN/imag by holding
% onto last valid value
x_s_plot = rslts.x_s_est;
x_s_kal_plot = rslts.kalout([1 3 5]);
if initParams.Array.flipZ
    x_s_plot(3) = -x_s_plot(3);
    x_s_kal_plot(3) = -x_s_kal_plot(3);
end
if procParams.holdInterp
    if (sum([isreal(x_s_plot); ~isnan(x_s_plot)]) == 4)
        lastGood_x_s_plot = x_s_plot;
    else
        x_s_plot = lastGood_x_s_plot;
    end
    if (sum([isreal(x_s_kal_plot); ~isnan(x_s_kal_plot)]) == 4)
        lastGood_x_s_kal_plot = x_s_kal_plot;
    else
        x_s_kal_plot = lastGood_x_s_kal_plot;
    end
end
%% Plot Positions
if GUIcontrols.ThreeD
    plot3(GUIdisplays.a1,x_s_plot(1),x_s_plot(2),x_s_plot(3),'*', ...
        x_s_kal_plot(1),x_s_kal_plot(2),x_s_kal_plot(3),'r*', ...
        initParams.Array.x_i(:,1),initParams.Array.x_i(:,2),zeros(initParams.Array.N,1),'.')
    plot(GUIdisplays.a2,x_s_plot(1),x_s_plot(2),'*', x_s_kal_plot(1),x_s_kal_plot(2),'r*', ...
        initParams.Array.x_i(:,1),initParams.Array.x_i(:,2),'.')
    plot(GUIdisplays.a3,x_s_plot(1),x_s_plot(3),'*', x_s_kal_plot(1),x_s_kal_plot(3),'r*', ...
        initParams.Array.x_i(:,1),zeros(initParams.Array.N,1),'.')
    plot(GUIdisplays.a4,x_s_plot(2),x_s_plot(3),'*', x_s_kal_plot(2),x_s_kal_plot(3),'r*', ...
        initParams.Array.x_i(:,2),zeros(initParams.Array.N,1),'.')
end
if GUIcontrols.XYZ
    diagnostics.traces = [diagnostics.traces(2:end,:); x_s_plot'];
    diagnostics.traces_kal = [diagnostics.traces_kal(2:end,:); x_s_kal_plot'];
    plot(GUIdisplays.aX,1:200,diagnostics.traces(:,1),1:200,diagnostics.traces_kal(:,1),'r');
    plot(GUIdisplays.aY,1:200,diagnostics.traces(:,2),1:200,diagnostics.traces_kal(:,2),'r');
    plot(GUIdisplays.aZ,1:200,diagnostics.traces(:,3),1:200,diagnostics.traces_kal(:,3),'r');
    set(GUIdisplays.edit3, 'string', num2str(1e3*mean(diagnostics.traces_kal(end-50:end,1))));
    set(GUIdisplays.edit4, 'string', num2str(1e3*mean(diagnostics.traces_kal(end-50:end,2))));
    set(GUIdisplays.edit5, 'string', num2str(1e3*mean(diagnostics.traces_kal(end-50:end,3))));
end
%% Transfer coordinates to Slicer
if ~strcmp(GUIcontrols.SlicerRadio,'off') && (initParams.sd ~= -1)
    switch GUIcontrols.SlicerRadio
        case 'kal'
            x_s_slicer = x_s_kal_plot;
        case 'raw'
            x_s_slicer = x_s_plot;
    end
    T.Type = 'TRANSFORM';
    T.Name = 'MatlabTransform';
    T.Trans = eye(4);
    US_CT = 1e3*[1,0,0;0,1,0;0,0,1]; % rearrange axes and scale to millimeters
    if initParams.Array.flipZ
        US_CT = 1e3*[0,1,0;1,0,0;0,0,1];
    end
    T.Trans(1:3,4) = ((US_CT*x_s_slicer)'*procParams.REGTRANS.T)';
    T.Trans(1:3,4) = T.Trans(1:3,4) + procParams.REGTRANS.c(1,:)';
    T.Trans(1:2,4) = -T.Trans(1:2,4);
    if GUIcontrols.REGTWEAK
        T.Trans(1:3,4) = ((US_CT*x_s_slicer)'*procParams.REGTWEAK.matrix*procParams.REGTRANS.T)';
        T.Trans(1:3,4) = T.Trans(1:3,4) + procParams.REGTWEAK.T' + procParams.REGTRANS.c(1,:)';
        T.Trans(1:2,4) = -T.Trans(1:2,4);
    end
    r = igtlsend(initParams.sd, T);
end
%% Status Text
if GUIcontrols.Status
    switch GUIcontrols.StatusRadio
        case 'FrameRate'
            set(GUIdisplays.editStatus,'string', num2str(1/diagnostics.dT));
        case 'DataLogging'
            if GUIcontrols.Logging
                set(GUIdisplays.editStatus,'string', num2str(diagnostics.count));
            else
                set(GUIdisplays.editStatus,'string', 'NOT SAVING');
            end
    end
end
end
B.3 JayKalman.m
Function called by myFunctionJay.m to provide the Extended Kalman Filter estimate.
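JayKalman.m implements a constant-velocity extended Kalman filter whose measurements are the ranges from the source to each array sensor. To illustrate the same predict/update cycle outside the Verasonics environment, here is a compact Python/NumPy sketch; the sensor layout, noise levels, and state ordering [x, vx, y, vy, z, vz] mirror the Matlab code, but all numeric values are made up:

```python
import numpy as np

def ekf_step(xhat, P, meas, dt, sensors, R, q):
    """One predict/update cycle of a constant-velocity EKF with
    range-only measurements. State is [x, vx, y, vy, z, vz]."""
    # 1. State transition and process noise (white-noise acceleration model)
    Phi = np.eye(6)
    Phi[0, 1] = Phi[2, 3] = Phi[4, 5] = dt
    blk = q * np.array([[dt**3/3, dt**2/2], [dt**2/2, dt]])
    Q = np.kron(np.eye(3), blk)
    # 2-3. Propagate covariance and state
    P = Phi @ P @ Phi.T + Q
    xhat = Phi @ xhat
    # 4. Predicted ranges and linearized measurement matrix M
    pos = xhat[[0, 2, 4]]
    diff = pos - sensors                  # (N,3) vectors: sensor -> estimate
    rng = np.linalg.norm(diff, axis=1)
    yhat = rng
    M = np.zeros((len(sensors), 6))
    M[:, [0, 2, 4]] = diff / rng[:, None]  # d(range)/d(position)
    # 5-7. Kalman gain, state update, Joseph-form covariance update
    K = P @ M.T @ np.linalg.inv(M @ P @ M.T + R)
    xhat = xhat + K @ (meas - yhat)
    I_KM = np.eye(6) - K @ M
    P = I_KM @ P @ I_KM.T + K @ R @ K.T
    return xhat, P
```

Repeated updates with a static target behave like an iterated Gauss-Newton solve, so the position estimate converges to the true source even from a slightly wrong initial state.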
function [xhatOut,residual] = JayKalman(meas, deltat, x_i, init, weights)
%EXTKALMAN Radar Data Processing Tracker Using an Extended Kalman Filter
%   Adapted for Ultrasound Localization System
%   Jay Mung 2012
%
%   This program is executed as a MATLAB function block in the
%   aero_radmod Simulink model. There is a file called "aero_raddat.m" that
%   contains the data needed to run the aero_radmod model. The estimated
%   and actual positions are saved to the workspace and are plotted at
%   the end of the simulation by the program aero_radplot (called from the
%   simulation automatically).
%
%   See the description in the "Extended Kalman Filter" Brochure for
%   the equations.
%   This program was developed by Dr. Richard Gran, September 1, 1996,
%   and modified by Paul Barnard, June 1997.
%   Copyright 1990-2008 The MathWorks, Inc.
%   $Revision: 1.1.6.5.8.1.2.1 $  $Date: 2009/12/31 19:43:47 $
% Initialization
global procParams initParams GUIcontrols
global xhat P R
if GUIcontrols.resetKal
    P = [];
    GUIcontrols.resetKal = false;
end
if isempty(P)
    xhat = [init(1);0;init(2);0;init(3);0];
    P = zeros(6);
    R = procParams.extKal.Rcoeff*eye(initParams.Array.N); %*WEIGHTS;
end
WEIGHTS = diag(weights);
% Radar update time deltat is inherited from model workspace
% 1. Compute Phi, Q, and R
Phi = eye(6);
Phi(1,2) = deltat;
Phi(3,4) = deltat;
Phi(5,6) = deltat;
Q = zeros(6,6);
temp = procParams.extKal.PHIs*[deltat^3/3 deltat^2/2; deltat^2/2 deltat];
Q(1:2,1:2) = temp;
Q(3:4,3:4) = temp;
Q(5:6,5:6) = temp;
% 2. Propagate the covariance matrix:
P = Phi*P*Phi' + Q;
% 3. Propagate the track estimate:
xhat = Phi*xhat;
% 4 a). Compute observation estimates:
xs = xhat([1 3 5]);
[m,n] = size(x_i);
yhat = sqrt(sum((x_i - repmat(xs',m,1)).^2,2));
% 4 b). Compute observation vector y and linearized measurement matrix M
range_vect = -(x_i - repmat(xs',m,1));
range_mag = sqrt(sum(range_vect.^2,2));
hmatd1 = range_vect./repmat(range_mag,1,n);
hmat = zeros(m,n*2);
hmat(:,1:2:n*2) = hmatd1;
M = hmat;
% 4 c). Compute residual (Estimation Error)
residual = meas - yhat;
% 5. Compute Kalman Gain:
W = P*M'/(M*P*M' + R)*WEIGHTS;
% 6. Update estimate
xhat = xhat + W*residual;
% 7. Update Covariance Matrix
P = (eye(6) - W*M)*P*(eye(6) - W*M)' + W*R*W';
xhatOut = xhat;
B.4 make_x_i.m
External function to create disc-array coordinates.
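The geometry this function produces is one sensor at the center of a disc plus N-1 sensors evenly spaced on a circle of radius r. An equivalent Python/NumPy sketch of that layout (a translation of the Matlab below, not production code):

```python
import numpy as np

def make_x_i(n, r):
    """Disc-array coordinates: one sensor at the origin plus n-1 sensors
    evenly spaced on a circle of radius r, all in the z = 0 plane."""
    thetas = np.arange(n) * 2*np.pi / (n - 1)  # first angle is unused (center point)
    radii = np.concatenate(([0.0], np.full(n - 1, r)))
    out = np.zeros((n, 3))
    out[:, 0] = radii * np.cos(thetas)
    out[:, 1] = radii * np.sin(thetas)
    return out
```

For example, make_x_i(6, 0.05) returns the origin plus five points 72 degrees apart on a 5 cm circle.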
function out = make_x_i(N,r)
% Jay Mung Ultrasound Localization System 2012
thetas = (0:(2*pi/(N-1)):((N-1)*2*pi/(N-1)));
[x,y] = pol2cart(thetas,r*[0, ones(1,N-1)]);
x_i = zeros(N,3);
x_i(1:N,1) = x';
x_i(1:N,2) = y';
out = x_i;
end
Appendix C
Virtual Endoluminal View
Endoscopy means "to look from inside". In the medical field, endoscopy provides
views from inside various anatomical structures or organs. An endoscope typically
consists of an optical camera on the end of a long, flexible tube. A common
clinical application is to inspect the inside of the colon for growths or polyps that
might be indicative of cancer; this is called colonoscopy.
It is possible to generate views from the inside of structures without an endoscope
by using 3D volume data instead. We use a CT dataset to create a surface model
of the structure and render the inside view from a virtual camera. Besides being
less invasive, virtual endoscopy also provides the benefit of a limitless number of
arbitrary views. Here we use virtual endoscopy to provide endoluminal views, or
views from inside a blood vessel.
Creating an endoluminal view from a CT angiography dataset is mainly a three-step
process. First, the vessel must be segmented from the CT image; this means
indicating which pixels in the image represent the vessel. The next step is to create
a surface model from the segmented vessel volume, so that the volume is reduced
to only the visible surface; this makes real-time rendering much faster. The last
step is to map a flight path for the virtual camera to travel through the vessel.
These are all active research topics, with modules incorporated into the 3D Slicer
software.
Figure C.1: Top: 3D rendering of vessel surface model with CT slices overlaid.
Bottom: Endoluminal view renderings at the renal take-offs with R: 90 degree
view angle and L: 120 degree view angle.
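The segmentation step relies on region growing: starting from seed pixels placed inside the vessel, neighboring voxels are added to the region as long as their intensities stay within an accepted band. The following is a simplified 2D Python sketch of that idea (not the actual 3D Slicer module; the image and thresholds are toy values):

```python
from collections import deque
import numpy as np

def region_grow(img, seeds, lo, hi):
    """Flood-fill all pixels 4-connected to the seeds whose intensity
    lies in [lo, hi]. Returns a boolean mask of the segmented region."""
    mask = np.zeros(img.shape, dtype=bool)
    q = deque(seeds)
    while q:
        r, c = q.popleft()
        if not (0 <= r < img.shape[0] and 0 <= c < img.shape[1]):
            continue  # outside the image
        if mask[r, c] or not (lo <= img[r, c] <= hi):
            continue  # already visited, or intensity out of band
        mask[r, c] = True
        q.extend([(r+1, c), (r-1, c), (r, c+1), (r, c-1)])
    return mask

# Toy image: a bright "vessel" column in a dark background
img = np.zeros((5, 5)); img[:, 2] = 100
print(region_grow(img, [(0, 2)], 50, 150).sum())  # 5 pixels segmented
```

3D Slicer's Simple Region Growing module works on the full 3D volume and adapts the intensity band from the seed statistics, but the connectivity-plus-threshold principle is the same.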
The following is a step-by-step tutorial on how to produce endoluminal views from
CT images using 3D Slicer software.
Figure C.2: Start 3D Slicer, Load Data (File >> Add Volume)
Figure C.3: Crop the data to reduce workload (Modules >> Crop Volume). Check that the input volume is correct and create a new MRMLROINode to represent the ROI. Use the bounding box to select the appropriate ROI. Leave the other parameters at their defaults and Apply.
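The Crop Volume step simply restricts all subsequent processing to the bounding box, which shrinks the data and speeds up segmentation. In array terms it is just index slicing; the following is a schematic Python sketch, not Slicer's internals (Slicer operates on its own MRML volume nodes):

```python
def crop_volume(vol, roi):
    """Crop a 3D volume (nested lists, indexed [z][y][x]) to an ROI
    given as inclusive (z0, z1, y0, y1, x0, x1) bounds."""
    z0, z1, y0, y1, x0, x1 = roi
    return [[row[x0:x1 + 1] for row in plane[y0:y1 + 1]]
            for plane in vol[z0:z1 + 1]]
```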
Figure C.4: Establish fiducials, both to seed the segmentation and to guide the endoscopy view (Modules >> Fiducials). Use the axial view and click in the center of the vessel of interest.
Figure C.5: Scroll through slices and click on each vessel center.
Figure C.6: Segmentation (Modules >> Simple Region Growing). This semi-automatically separates the contiguous vessel from surrounding tissue. Load the seeds from the previous fiducial list. Use the correct input volume and create a new output volume. Use the default settings and Apply. Scroll through the slices and check for satisfactory results; make sure there are no holes or bones in the segmentation. If there are, readjust the multiplier parameter. If that doesn't work, redo the fiducials and then the volume cropping. If the results are good, increase the iteration parameter for smoother results.
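Region growing expands outward from the seed fiducials, keeping neighboring voxels whose intensity is statistically consistent with the region so far (the multiplier parameter widens or narrows that acceptance window). The core idea can be illustrated with a deliberately simplified 2D flood fill; the fixed intensity window here is an illustrative stand-in for the confidence statistics Slicer actually computes:

```python
from collections import deque

def region_grow(img, seed, lo, hi):
    """Flood-fill segmentation: starting from seed (row, col), collect
    4-connected pixels whose intensity lies within [lo, hi]."""
    rows, cols = len(img), len(img[0])
    seen = {seed}
    queue = deque([seed])
    region = []
    while queue:
        r, c = queue.popleft()
        region.append((r, c))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in seen and lo <= img[nr][nc] <= hi):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return region
```

A hole in the result means the window excluded in-vessel voxels; a bone in the result means the window was wide enough to leak across a boundary, which is why the multiplier is the first parameter to adjust.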
Figure C.7: Surface model (Modules >> Model Maker). Use the previously segmented volume to create a surface. This greatly reduces the amount of data and speeds up rendering for 3D visualization. Make sure to use the correct input volume and to create a new model, then Apply. Go to Modules >> Surfaces, select the correct model, and UNCHECK "backface culling".
Figure C.8: Orient the vessel model to a view that would correspond to the forward-
looking view from a catheter.
Figure C.9: Virtual endoscopy (Modules >> Endoscopy). Select the fiducial list that corresponds to the fly-through points. Here is the default-setting view at the renal branches.
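The Endoscopy module derives the camera path from the fiducial list automatically. The essential geometry, placing the camera at each fiducial and aiming it at the next one, can be sketched as follows; the function and pose representation are illustrative, not Slicer's API:

```python
import math

def fly_through_poses(points):
    """For each fiducial (x, y, z), return (position, unit view direction
    toward the next fiducial); the last pose reuses the previous heading.
    Requires at least two points."""
    poses = []
    for i, p in enumerate(points):
        if i + 1 < len(points):
            q = points[i + 1]
            d = tuple(q[j] - p[j] for j in range(3))
        else:
            d = poses[-1][1]  # keep flying in the last computed direction
        n = math.sqrt(sum(c * c for c in d))
        poses.append((p, tuple(c / n for c in d)))
    return poses
```

Real fly-throughs also interpolate a smooth curve through the fiducials rather than turning sharply at each one, which is why clicking the vessel center on every slice (Figure C.5) yields a steadier ride.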
Figure C.10: Change the view settings. Same location as above, with a 120-degree view angle.
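The view-angle setting behaves like a pinhole camera's field of view: widening it from 90 to 120 degrees shortens the effective focal length, trading some fisheye-like distortion for more of the vessel wall in frame. Under the standard pinhole model, f = w / (2 tan(fov / 2)) for image width w:

```python
import math

def focal_length(fov_deg, image_width):
    """Pinhole-camera focal length (same units as image_width) for a
    given horizontal field of view: f = w / (2 * tan(fov / 2))."""
    return image_width / (2.0 * math.tan(math.radians(fov_deg) / 2.0))
```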
Figure C.11: Enable dual 3D view for both endoluminal and gross views.
Figure C.12: Enable axial slice view to superimpose slice in both views.
Asset Metadata
Creator
Mung, Jay C. (author)
Core Title
An ultrasound localization system for endovascular aortic aneurysm repair
Contributor
Electronically uploaded by the author (provenance)
School
Andrew and Erna Viterbi School of Engineering
Degree
Doctor of Philosophy
Degree Program
Biomedical Engineering
Publication Date
05/01/2012
Defense Date
03/20/2012
Publisher
University of Southern California (original); University of Southern California. Libraries (digital)
Tag
abdominal aortic aneurysm,Biomedical Engineering,image guided interventions,in vivo,OAI-PMH Harvest,stent,time of flight,ultrasound
Language
English
Advisor
Yen, Jesse T. (committee chair), Hsiai, Tzung K. (committee member), Nayak, Krishna S. (committee member), Shung, Kirk Koping (committee member), Weaver, Fred A. (committee member)
Creator Email
jay.mung@gmail.com,jmung@usc.edu
Permanent Link (DOI)
https://doi.org/10.25549/usctheses-c3-18959
Unique identifier
UC11289233
Identifier
usctheses-c3-18959 (legacy record id)
Legacy Identifier
etd-MungJayC-691-1.pdf
Dmrecord
18959
Document Type
Dissertation
Rights
Mung, Jay C.
Type
texts
Source
University of Southern California (contributing entity); University of Southern California Dissertations and Theses (collection)
Access Conditions
The author retains rights to his/her dissertation, thesis or other graduate work according to U.S. copyright law. Electronic access is being provided by the USC Libraries in agreement with the a...
Repository Name
University of Southern California Digital Library
Repository Location
USC Digital Library, University of Southern California, University Park Campus MC 2810, 3434 South Grand Avenue, 2nd Floor, Los Angeles, California 90089-2810, USA
Abstract (if available)
Abstract
Image guidance is crucial to minimally invasive surgery. This dissertation describes a novel ultrasound-based localization system to guide catheter based repair of abdominal aortic aneurysm. Chapter 1 provides background on the clinical significance of abdominal aortic aneurysm, the state of the art for interventional tool tracking and summarizes the technical contributions of this work. Chapter 2 describes the Ultrasound Localization System (ULS) in detail