Contributions to Structural and Functional Retinal Imaging via Fourier Domain Optical Coherence Tomography

by

Jason Michael Tokayer

A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(ELECTRICAL ENGINEERING)

December 2013

Copyright 2013 Jason Michael Tokayer

Acknowledgements

I would like to take this opportunity to express my gratitude and appreciation to a number of people who supported me over the years. I would first like to thank my co-advisors, Dr. Antonio Ortega and Dr. David Huang, for their insight, guidance and patience as I navigated down a winding road towards a Ph.D. Their vastly different mentoring styles equally enriched my graduate school experience. Not only did they help me achieve my Ph.D., they also prepared me for my career beyond graduate school. I would like to thank Dr. B. Keith Jenkins both for serving on my dissertation committee as well as for exploring new avenues of research in biomedical optics with me. I would also like to thank Dr. Bartlett Mel for the insightful and thought-provoking questions that he posed during my dissertation defense.

I am also grateful for a number of my colleagues. Dr. Pankaj Mishra was by my side at the beginning of my graduate school tenure and taught me by example how to adapt to a life of research. Dr. Yali Jia showed me how to transform my research ideas and lab work into significant publications. Also, Dr. Al-Hafeez Dhalla has been there to answer all of my questions no matter how ridiculous they were. He has been especially inspirational with his thoughtfulness and rigorousness. I would also like to thank Garth Tormoen for repeatedly drawing my blood so that I could complete my experimental work. I would also like to thank my family and my wife's family for their support. Finally, and most importantly, I would like to thank my wife Alana.
Her support and encouragement have been the primary driving force behind my push through graduate school. Her tolerance of my occasional bad moods and sleepless nights is a testament to her devotion and love. I am especially grateful for the time she took to read and revise my thesis.

Table of Contents

Acknowledgements
List of Figures
List of Tables
Abstract

Chapter 1: Introduction
  1.1 OCT Background
  1.2 Scope and Contributions

Chapter 2: OCT Theory
  2.1 Fourier Domain OCT
    2.1.1 Axial Ranging
    2.1.2 Transverse Resolution
  2.2 Practical Spectral-Domain OCT Processing
    2.2.1 Coherence Volume
    2.2.2 Resampling and Dispersion Compensation
    2.2.3 Retinal OCT Imaging
    2.2.4 Laboratory SDOCT System
  2.3 Doppler OCT

Chapter 3: Segmentation of OCT Retinal Images
  3.1 Introduction
  3.2 Retinal Layer Segmentation Algorithm
    3.2.1 Introduction
    3.2.2 Procedure
      3.2.2.1 Limit Search Region
      3.2.2.2 Flatten, Smooth and Downsample
      3.2.2.3 Obtain Sparse Approximation
      3.2.2.4 Identify New Boundaries
      3.2.2.5 Refine Boundaries
      3.2.2.6 Limit Search Region or Return
      3.2.2.7 Summary of Parameters
    3.2.3 Summary
  3.3 Synthetic Retinal Image Generation
    3.3.1 Introduction
    3.3.2 Model of OCT Retinal Image
    3.3.3 Procedure
      3.3.3.1 Smooth Approximation
      3.3.3.2 Speckle Noise Generation
      3.3.3.3 Additive Noise Model
      3.3.3.4 Scaling the Smooth Approximation
      3.3.3.5 Signal-To-Noise Ratio
      3.3.3.6 Comparison Between In Vivo and Synthetic OCT Data
    3.3.4 Summary
  3.4 Evaluation of Segmentation Algorithm
    3.4.1 Introduction
    3.4.2 Evaluation with In Vivo Retinal Images
    3.4.3 Parameter Determination using Synthetic Data
      3.4.3.1 Smoothness Threshold
      3.4.3.2 Number of Sparse Bayesian Learning Algorithm Iterations
      3.4.3.3 Merge Penalty
    3.4.4 Segmentation Algorithm Noise Sensitivity and Accuracy
  3.5 Conclusions
Chapter 4: Blood Flow Velocity Measurement with Doppler OCT
  4.1 Introduction
  4.2 Retinal Vessel Simulation
    4.2.1 Introduction
    4.2.2 Retinal Blood Vessel Geometry
    4.2.3 Simulation Procedure
      4.2.3.1 Initialize
      4.2.3.2 OCT Signal Calculation
      4.2.3.3 Move Flow Particles
      4.2.3.4 Replace Flow Particles
    4.2.4 Summary of Simulation Parameters
    4.2.5 Summary
  4.3 Case Study I: Phase-Resolved vs Moving-Scatterer-Sensitive Doppler OCT Algorithm Comparison
    4.3.1 Introduction
    4.3.2 Methods
    4.3.3 Results
      4.3.3.1 Retinal and simulated PR-to-MSS flow ratio
      4.3.3.2 Simulated Flow and Simulated Diameter Accuracy
    4.3.4 Summary
  4.4 Case Study II: Effect of Transverse Step Size on Flow Velocity Measurements in Phase-Resolved Doppler OCT
    4.4.1 Introduction
    4.4.2 Theory and Simulation
    4.4.3 Results and Discussion
      4.4.3.1 Validation Using In Vitro Experiments
      4.4.3.2 Phase Resolved Doppler OCT Falloff with Increasing Transverse Step Size
      4.4.3.3 Increasing the Beam Width
      4.4.3.4 Comparison of Phase Averaging and Complex Vector Averaging
    4.4.4 Summary
  4.5 Conclusions

Chapter 5: Split-Spectrum Amplitude-Decorrelation Angiography (SSADA)
  5.1 Introduction
  5.2 SSADA Theory
    5.2.1 Split-spectrum
    5.2.2 Decorrelation
  5.3 Macular Angiography
  5.4 Velocity Quantification With SSADA
    5.4.1 Materials and Methods
      5.4.1.1 Scan Protocol
      5.4.1.2 Data Processing
    5.4.2 Results
      5.4.2.1 Doppler Angle Dependence
      5.4.2.2 Saturation
      5.4.2.3 Relationship between decorrelation and velocity
    5.4.3 Discussion
      5.4.3.1 Model Parameters
      5.4.3.2 Model Limitations
      5.4.3.3 Comparison with previous work on intensity-based Doppler variance angiography
      5.4.3.4 Clinical SSADA
    5.4.4 Summary
  5.5 Conclusions

Chapter 6: Summary and Possible Extensions

Reference List

List of Figures

1.1 Formation of OCT image
1.2 The human eye
2.1 Fiber optic OCT system schematic
2.2 Raster scan pattern illustration
2.3 Schematic of Michelson interferometer used in OCT
2.4 Sample A-Scan for SD-OCT
2.5 Coherence volume definition
2.6 Resampling and dispersion compensation example
2.7 Cross-sectional imaging of the human retina
2.8 Spectral domain OCT system schematic
2.9 Illustration of the Doppler Effect
3.1 Example of an OCT structural image of the human retina
3.2 Flowchart for segmentation algorithm
3.3 Fast identification of retina and choroid
3.4 Smoothing and downsampling of flattened image
3.5 Sparse approximation of a sample A-line after Sparse Bayesian Learning and backwards elimination
3.6 Sparse approximation of a B-scan
3.7 Definitions and notation for segments and related quantities
3.8 Definitions of layer quantities
3.9 Layer building formulation
3.10 Illustration of matrix structure of the graph
3.11 Node reachability constraint
3.12 Transition allowability constraint
3.13 Sample graph traversal
3.14 Example of layer merging
3.15 Results of layer building
3.16 Extracted and upsampled boundaries
3.17 Sample boundary pruning
3.18 Sample graph cut using algorithm in [26]
3.19 Refined boundaries
3.20 Limited search region between adjacent boundaries
3.21 Flowchart for computing smooth approximation of OCT image
3.22 Sample segmented image used for synthetic image generation
3.23 Sampling of pixels in a layer
3.24 Notation conventions for mapping of layer to rectangular region
3.25 Mapping from image space to rectangular grid
3.26 Example of Delaunay triangulation with filling
3.27 Triangulated smooth image approximation
3.28 Simulated speckle pattern in retina
3.29 Validation of Rayleigh model for OCT signal noise
3.30 Sample phantom image
3.31 Sample phantom image with reduced signal-to-noise ratio
3.32 Visual comparison between in vivo and synthetic OCT B-scans
3.33 Comparison of GCL intensity distributions between in vivo and synthetic OCT data
3.34 Comparison between speckle sizes in in vivo and synthetic OCT data
3.35 Segmentation of retinal images
3.36 Effect of smoothness threshold on segmentation algorithm
3.37 Effect of number of Sparse Bayesian Learning algorithm iterations on segmentation algorithm
3.38 Effect of the merge penalty on segmentation algorithm
3.39 Effect of the signal to noise ratio on segmentation algorithm
3.40 Image regions used to compare distributions and speckle sizes of in vivo and synthetic OCT data
4.1 Example of a phase resolved Doppler OCT image in the human retina
4.2 Vessel geometry for the Doppler OCT simulation
4.3 Intersection of cylindrical vessel with scan plane
4.4 Retinal vessel simulation flowchart
4.5 Initialized simulation
4.6 Notation conventions for flow particle replacement
4.7 Flow particle backtracking and replacement
4.8 Visual comparison between in vivo retinal vessel and simulated vessel
4.9 Vessel size compared with coherence volume size
4.10 Illustration of the reduced sensitivity of the phase-resolved algorithm due to nonzero static tissue bandwidth
4.11 Comparison of PR-to-MSS flow ratios
4.12 Normalized PR-to-MSS flow ratios
4.13 Simulated flow accuracy
4.14 Simulated diameter measurement accuracy
4.15 Simulated Doppler broadening, consequent phase wrapping and mean Doppler phase shift underestimation with increasing transverse step size for two different velocity settings
4.16 Sample Doppler velocity shifts computed using PRDOCT
4.17 Comparison of broadening and subsequent velocity underestimation for simulation and in vitro experiments
4.18 Simulation results for falloff of PRDOCT measurements for various flow speeds
4.19 Simulation results for falloff of PRDOCT measurements for various signal-to-noise ratios
4.20 Simulation results illustrating the reduced falloff in PRDOCT measurements when phase unwrapping is implemented
4.21 Simulation results for falloff of phase unwrapped PRDOCT measurements for various signal-to-noise ratios
4.22 Simulation comparison of falloff for different beam widths
4.23 Simulated phase distributions for PRDOCT techniques
4.24 Simulation results for comparison of falloff for phase averaging and vector averaging for simulated average velocities in a vessel
5.1 Diagrams of the modification of the OCT imaging resolution cell and the split-spectrum method used for this purpose
5.2 In vivo 3D volumetric OCT of the macula processed with the SSADA algorithm
5.3 Sample intensity images from phantom study
5.4 Sample M-mode Doppler frequency shifts from phantom study
5.5 Multi-timescale SSADA M-mode image of blood flow through capillary tube
5.6 Dependence of measured SSADA signal on Doppler angle
5.7 Multi-timescale decorrelation averaged over capillary for various flow speeds
5.8 Decorrelation vs velocity for various time separations (Δt) between A-lines
5.9 Linear fit of slope m versus time separation Δt

List of Tables

3.1 Summary of segment and layer definitions and associated notation
3.2 Parameters used in segmentation algorithm
4.1 Summary of parameters for retinal vessel simulation
4.2 Example lookup table for computing velocity V_rel,0 from the measured average relative velocity V_rel and transverse step size Δx
5.1 Summary of linear fit of decorrelation versus velocity
5.2 Operating range for linear model

Abstract

Optical coherence tomography (OCT) is a depth-resolved cross-sectional imaging modality capable of imaging biological tissue at the micron scale in vivo. OCT has found widespread use in ophthalmology because of the translucent nature of retinal tissue as well as OCT's high resolution and non-contact implementation. OCT is analogous to ultrasound, except that it uses light instead of sound to illuminate a sample.
OCT measures the travel time of light that is backscattered from structures at different depths or axial distances (axial scans) within the sample. Cross-sectional images are generated by scanning the optical beam in the transverse direction and performing successive axial scan measurements.

Structural imaging of the human retina has provided new insights into ophthalmic disease. For example, retinal nerve fiber layer thickness is utilized in glaucoma progression analysis. Thus, segmentation of OCT retinal images, which is complicated by low image contrast and high levels of speckle noise, is crucial for the study and diagnosis of ocular diseases. Measurement of retinal blood flow is also important for the study of many ocular pathologies including glaucoma and diabetic retinopathy. Doppler OCT, which utilizes measured Doppler frequency/phase shifts to infer blood flow speed, has consequently found widespread use in the ophthalmic community. The Doppler effect is only sensitive to movement along the incident beam's axis. In the case of the human macular region, however, many blood vessels are nearly perpendicular to the incident beam, which makes the measured Doppler frequency shifts small and difficult to detect. For this reason, measurement of blood flow in the macula using Doppler OCT is a challenging task.

This thesis encompasses contributions to both structural and functional OCT retinal imaging. One common theme found throughout this thesis is the generation of synthetic or simulated OCT data in order to quantitatively evaluate various image processing algorithms. For each simulation application the synthetic data is first validated against in vivo retinal data or in vitro experimental data and then used to extrapolate the results and draw inferences about algorithm performance.

We first design and implement a novel sparsity-based retinal layer segmentation algorithm that identifies the boundaries between retinal layers in OCT structural images.
In contrast to most state-of-the-art retinal layer segmentation algorithms, our algorithm does not require overly restrictive a priori assumptions about the expected layer structure and can thus possibly be used to detect unexpected structures that arise in pathological cases. One difficulty with quantitative evaluation of segmentation algorithms applied to in vivo retinal images is that the ground truth segmentation is not known. Motivated by this fact, we design a novel method for generating synthetic structural retinal images for which the ground truth segmentation is known. Our technique improves upon an existing synthetic image generation method both by using a nonparametric smooth representation of the average intensity values in the image and by accurately modeling speckle noise. After verifying that synthetic images generated with our method accurately represent real in vivo retinal images, we show that our segmentation algorithm accurately identifies the retinal layer boundaries in the synthetic images and that our algorithm is robust to image noise.

We next examine functional OCT retinal imaging by studying various methods for measuring blood flow velocities in retinal vessels. We develop a two-dimensional simulation of a blood vessel immersed in tissue that mimics functional imaging in a clinical setting more accurately than existing one-dimensional simulations of retinal blood flow. This retinal vessel simulation enables quantitative evaluation of the accuracy of Doppler OCT algorithms by providing the ground truth blood flow velocities that Doppler OCT techniques aim to measure. We illustrate the utility of the simulation with two case studies. First, we evaluate the accuracy of two commonly used Doppler OCT algorithms as vessel diameters and associated flow speeds become increasingly small.
We show that the phase-resolved Doppler OCT algorithm performs best for large blood vessels, while neither algorithm performs well for very small blood vessels and slow blood flow velocities. This is the first time that a quantitative evaluation of the estimation accuracy of the Doppler OCT algorithms has been performed using synthetic data with known ground truth blood flow velocities. In the second case study we examine the effect of transverse step size between adjacent axial scans on the phase-resolved Doppler OCT algorithm. We show that the phase-resolved Doppler OCT algorithm systematically underestimates blood flow velocities with increasing transverse step size and that this effect is more pronounced at higher velocities. We propose two techniques for correcting measured velocities that can be used to improve the accuracy of the phase-resolved Doppler OCT algorithm.

As previously mentioned, Doppler OCT techniques are only sensitive to velocities along the direction of the incident beam (axial velocities). While Doppler OCT is commonly used to measure blood flow near the optic disc, its application to flow estimation in the human macula is limited because many of the blood vessels in this region are nearly perpendicular to the incident beam. In order to overcome this limitation, we introduce a novel method for visualizing blood flow called split-spectrum amplitude-decorrelation angiography (SSADA). In contrast to Doppler phase-based methods, SSADA relies on intensity information only and is sensitive to both transverse (perpendicular) and axial flow velocities. After establishing that SSADA can be used to image flow in the human macula, we perform phantom flow experiments to establish a linear relationship between SSADA measurements and preset blood flow velocities that enables velocity quantification and thus blood flow measurement in the human macula with SSADA.
Chapter 1: Introduction

1.1 OCT Background

Optical coherence tomography (OCT) is a biomedical imaging technique based on low-coherence interferometry capable of producing both structural and functional images of biological tissue with micrometer resolution [1]. Depth gating within an imaged sample is achieved by interfering broadband light of low coherence backscattered from a sample and a reference mirror. When the pathlengths of backscattered light from the reference and sample arms are within a source coherence length, the light will interfere and produce a depth-resolved backscattered intensity profile of the sample. OCT measures the time of flight of light backscattered from within a sample. Cross-sectional OCT images are generated by scanning the optical beam transversally. An illustration of OCT retinal image formation is shown in Figure 1.1.

Depth resolutions of typical OCT imaging systems range from 5-10 μm in air. Ultrahigh-resolution OCT imaging systems can achieve even better resolution. Depth penetration into biological tissue is on the order of a couple of millimeters. Compared to ultrasound imaging, OCT has lower penetration depth but better resolution. Compared to confocal microscopy, OCT has better penetration depth but lower resolution [2].

Figure 1.1: Formation of OCT image. OCT measures the time of flight of backscattered light and can resolve backscattered light from different depths within a sample. Left panel: light backscattered from deeper in the sample takes longer to return. Middle panel: example intensity depth profile from the vertical blue line in the right panel. Right panel: transverse scanning produces a cross-sectional image.

First-generation OCT systems used a mechanically scanned reference mirror to obtain backscattered reflectivity from different depths within a sample. Since the signals were acquired sequentially in time, these systems are known as time-domain systems.
Because of the mechanical nature of scanning as well as the low imaging speeds and correspondingly long imaging times, in vivo measurements suffered from much patient movement. More recently, OCT detection techniques were devised that do not require mechanical scanning of a reference mirror. This resulted in a significant increase in imaging speeds, ten to fifty times faster. In contrast to time-domain OCT, which detects light at different depths sequentially, these newer detection techniques measure all of the depths simultaneously by acquiring backscattered light as a function of frequency and using the Fourier transform relation between it and time/depth. Thus, these techniques are termed Fourier-domain optical coherence tomography (FDOCT). With high imaging speeds, FDOCT now enables in vivo three-dimensional imaging [2].

OCT has found widespread use in ophthalmology because of the translucent nature of retinal tissue as well as its high resolution and non-contact implementation. OCT tomograms enable quantification of morphological and functional characteristics of the human retina, some of which enhance the diagnostic capabilities for various eye diseases. For example, retinal nerve fiber layer thickness is utilized in glaucoma progression analysis. An illustration of the human eye is shown in Figure 1.2.

Figure 1.2: The human eye (from [3]). The focus of this work is the macula with the fovea at its center. The green box highlights the posterior eye, which is the region of interest for the contributions of this work.

The retina is a complex, light-sensitive, layered tissue that lines the inner surface of the eye. Most of this work focuses on the central subregion of the retina, the macula lutea. The most light-sensitive structure in the macula is the fovea, which is approximately 200 μm thick [4]. Nerve fiber layer thickness in the human macula is commonly used as a marker for glaucoma [5].
Also, reduced blood flow in the macula may be related to diabetic retinopathy [6].

1.2 Scope and Contributions

Various studies of both structural and functional human retinal imaging are presented in this thesis. A common theme throughout most of this work is the use of synthetic data to provide quantitative evaluation of various image processing algorithms used for retinal layer segmentation and the study of retinal blood flow. The following list provides a summary of the key contributions presented in this thesis.

- We design and implement a novel sparsity-based retinal segmentation algorithm for OCT structural images. A Sparse Bayesian Learning algorithm is used to identify the discontinuities between adjacent retinal layers. The boundaries between retinal layers are then found using a graph-theoretic approach. The algorithm is able to segment the nine layer boundaries without making overly restrictive assumptions about the retinal anatomy. This is the first segmentation algorithm that does not specify a priori the number of layers to be found. Rather, the algorithm is guided by the data, which makes it more general than many OCT retinal segmentation algorithms, so that the segmentation of pathological cases could possibly be performed with our algorithm.

- We present a technique for generating synthetic OCT structural images. A model of OCT reflectivities is defined assuming locally homogeneous regions of reflectivity. Speckle patterns are implemented via simulation. Additive complex Gaussian noise is generated to complete the model. The intensity distributions of the synthetic data are validated with in vivo retinal images. The spatial correlation of the images, or speckle sizes, are also validated with in vivo retinal images. Our technique accurately models
These synthetic images provide a ground truth segmentation that can be used for quantitative evaluation of various retinal layer segmentation algorithms. We show that our segmentation algorithm accurately identies the retinal layer boundaries in the synthetic images and that our algorithm is robust to image noise. We develop a novel two dimensional simulation of a blood vessel immersed in tissue. Our simulation mimics functional imaging in a clinical setting more accurately than existing one dimensional simulations of retinal blood ow. We subsequently use the simulation to generate synthetic OCT retinal blood ow velocity data that enables quantitative evaluation of Doppler OCT velocity estimation algorithms. We present two case studies that illustrate the utility of the retinal vessel simulation: { We present a quantitative comparison between two Doppler OCT algorithms (phase-resolved versus moving-scatterer-sensitive). After validating the simula- tion with in vivo retinal data, we show that the phase-resolved algorithm out- performs the moving-scatterer-sensitive algorithm for large vessel diameters and blood ow velocities. We also show that neither algorithm provides accurate blood ow estimates for small vessel diameters and slow blood ow velocities, indicating that Doppler OCT has limited utility for ow measurements in the human macula where blood velocities are small. { We study the eect of transverse step size on measured ow velocities in phase- resolved Doppler OCT. We show that the phase-resolved Doppler OCT algorithm systematically underestimates blood ow velocities with increasing transverse step size and that this eect is more pronounced at higher velocities. We propose two techniques for correcting measured velocities that can be used to improve the accuracy of the phase-resolved Doppler OCT algorithm. 
Motivated by the limited application of Doppler OCT algorithms for macular flow estimation, we introduce a novel method for visualizing blood flow called split-spectrum amplitude-decorrelation angiography (SSADA). We illustrate the ability of the SSADA algorithm to visualize the capillary network in the human macula. Then, using flow phantom experiments, we derive a linear model that relates the SSADA measurements to flow velocities. This model enables flow velocity quantification with SSADA and could potentially be applied for macular blood flow quantification.

The organization of this thesis is as follows. In Chapter 2 the theory of optical coherence tomography is discussed. Chapter 3 focuses on the segmentation of OCT structural images. First, we introduce our novel retinal layer segmentation algorithm. We then develop our synthetic image generation technique and test the accuracy and noise robustness of our segmentation algorithm on synthetic OCT structural images for which the ground truth segmentation is known. In Chapter 4 we introduce the retinal vessel simulation and present two case studies that illustrate its utility. In Chapter 5 we discuss the SSADA algorithm. Chapter 6 presents conclusions and future work.

Chapter 2
OCT Theory

Most of the following analysis is based on [2]. A generic OCT system schematic is illustrated in Figure 2.1.

Figure 2.1: Fiber optic OCT system schematic. Blue lines indicate fiber optic paths, red lines represent free-space optical paths and black lines represent electronic signal paths.

Light from a low-coherence source is directed into a fiber optic coupler that splits the light into a reference and a sample arm. The reference arm light travels through the fiber and illuminates a mirror. The light is then backreflected into the fiber and travels back towards the coupler. The sample arm light travels through the fiber and is focused onto a sample upon exiting the fiber.
Backscattered light is collected by the fiber and combined with the reference arm light at the coupler. The combined light is detected by the detector, which passes it into a processing module that computes the depth resolved reflectivity profile of the sample. Depth profiles are commonly referred to as axial scans or A-lines.

The input beam is incident on the sample along the axial direction, which is typically taken to be the Z direction. The plane that is perpendicular to the input beam is referred to as the transverse plane. The lateral scanner in Figure 2.1 can be used to alter the transverse position of the beam in either the X or Y direction. Typically, a two dimensional cross-sectional image or B-scan is acquired by scanning the beam along the X direction and computing A-lines at equally spaced points along the trajectory. A volumetric scan is acquired by incrementing the Y position of the beam after each B-scan acquisition. This raster scan pattern is shown in Figure 2.2.

Figure 2.2: Raster scan pattern illustration. A depth (Z) profile is referred to as an axial scan or A-line. Scanning the input beam along the X (transverse) direction and acquiring A-lines at equally spaced points along the trajectory produces a cross-sectional image or B-scan. In this graphic, a B-scan has N A-lines, each of which contains a signal from M different depths. Incrementing the Y (transverse) position of the beam after each B-scan acquisition produces a volumetric scan.

A few additional scan patterns used in this thesis are briefly mentioned next. First, M-mode imaging refers to repeated A-line acquisitions without scanning the beam transversally, so that multiple A-lines are acquired at the same transverse beam position over time. Acquiring multiple B-scans in sequence is called M-B mode scanning. In some cases, the volumetric data acquired during a raster scan is aggregated over depth, either by averaging or maximal projection, to create a transverse-only en-face image.
Finally, a circular scan protocol refers to the case for which the beam is scanned in a circle in the transverse plane.

2.1 Fourier Domain OCT

2.1.1 Axial Ranging

In the following theoretical discussion about axial ranging with OCT, the transverse beam extent is ignored. Consider the Michelson interferometer in Figure 2.3.

Figure 2.3: Schematic of Michelson interferometer used in OCT. The z = 0 position is located at the beamsplitter.

The illumination beam of the interferometer is a polychromatic plane wave with electric field given by:

E_i = s(k, \omega) \exp(i(kz - \omega t)),   (2.1)

where s(k, \omega) is the amplitude, k is the wavenumber and \omega is the angular frequency. The wavenumber is related to the source wavelength by k = 2\pi/\lambda and the angular frequency is related to the temporal frequency by \omega = 2\pi\nu. The wavenumber and angular frequency are coupled by the index of refraction n(\omega) through:

k(\omega) = n(\omega)\omega / c,   (2.2)

where c is the speed of light. For this theoretical introduction, dispersion is neglected so that k = n\omega/c. Let r_R denote the amplitude reflectivity of a reference mirror placed at a distance z_R from the beamsplitter. The electric field arriving at the beamsplitter after reflecting off of the mirror can be written as:

E_R \propto E_i r_R \exp(i k 2 z_R),   (2.3)

where the factor of 2 results from the round-trip pathlength to and from the mirror. The sample is characterized by a depth dependent reflectivity r_S(z_S). For simplicity, this reflectivity is approximated with a series of N discrete reflectors at different depths. Thus, the sample reflectivity can be written as:

r_S(z_S) = \sum_{n=1}^{N} r_{Sn} \, \delta(z_S - z_{Sn}),   (2.4)

where \delta(x) denotes the discrete Dirac delta function, which equals 1 at x = 0 and is zero everywhere else [7].
The reflected beam arriving at the beamsplitter from the sample is the coherent sum of all of the reflected beams from different depths within the sample and can be written as:

E_S \propto E_i \sum_{n=1}^{N} r_{Sn} \exp(i k 2 z_{Sn}).   (2.5)

The photodetector is a square law detector, which means that it detects the square of the incident electric field, or its intensity. Let R_R = |r_R|^2 denote the power reflectivity of the reference mirror and R_{Sn} = |r_{Sn}|^2 denote the power reflectivity of each reflector. The photocurrent generated by the superimposed sample and reference fields is given by:

I_D(k) \propto \rho \langle |E_R + E_S|^2 \rangle
       \propto \rho \langle |E_R|^2 + |E_S|^2 + 2\Re\{E_R E_S^*\} \rangle
       \propto [S(k)(R_R + R_{S1} + R_{S2} + \cdots)]
             + [S(k) \sum_{n=1}^{N} \sqrt{R_R R_{Sn}} \cos(2k(z_R - z_{Sn}))]
             + [S(k) \sum_{n \neq m}^{N} \sqrt{R_{Sn} R_{Sm}} \cos(2k(z_{Sn} - z_{Sm}))],   (2.6)

where S(k) = \langle |s(k, \omega)|^2 \rangle encodes the power spectral density of the light source, \rho is the responsivity of the detector and \Re denotes the real part operator. In Equation 2.6 the summation consists of three distinct terms: the first term is the DC term and has no associated phase, the second term is the cross-correlation or interference term that represents the interference between the reference and sample beams, and the third term is the autocorrelation term that represents the interference of the sample beam with itself. In this theoretical analysis the spectral density will be approximated by a Gaussian function. Then, the spectral density and its Fourier transform pair can be written as [7]:

S(k) = \mathcal{F}(\gamma(z)) \propto \exp\left(-4\ln 2 \left(\tfrac{k - k_0}{\Delta k}\right)^2\right),
\gamma(z) = \mathcal{F}^{-1}(S(k)) \propto \exp(-z^2 \Delta k^2),   (2.7)

where k_0 = 2\pi/\lambda_0 denotes the source center wavenumber/wavelength, \Delta k denotes the full-width at half-maximum (FWHM) of the source spectral density and the phase delay is dropped. The function \gamma(z) is called the coherence function and is characterized by its FWHM \delta z_0, which defines the round trip coherence length of the light source.
The coherence length can be written as a function of the source spectral bandwidth as:

\delta z_0 = \sqrt{4 \ln 2 / \Delta k^2}.   (2.8)

In Fourier domain OCT, the photocurrent in Equation 2.6 is captured and processed using the Fourier transform to reconstruct an approximation to the sample reflectivity profile \sqrt{R_S(z_S)}. In spectral domain OCT (SDOCT), the combined beam is spectrally dispersed by a spectrometer and captured on a charge coupled device (CCD) camera. In swept source OCT (SSOCT), the spectral components are captured sequentially on a single detector by sweeping the source wavenumber. Making use of the convolution property of the Fourier transform as well as the Fourier transform of a cosine function [7], the inverse Fourier transform of Equation 2.6 can be calculated as:

I_D(z) \propto [\gamma(z)(R_R + R_{S1} + R_{S2} + \cdots)]
       + [\gamma(z) \otimes \sum_{n=1}^{N} \sqrt{R_R R_{Sn}} \, \delta(z \pm 2(z_R - z_{Sn}))]
       + [\gamma(z) \otimes \sum_{n \neq m}^{N} \sqrt{R_{Sn} R_{Sm}} \, \delta(z \pm 2(z_{Sn} - z_{Sm}))],   (2.9)

where the \rho term was dropped and \otimes denotes a convolution. Using the sifting property of the Dirac delta function [7] this can be rewritten as:

I_D(z) \propto [\gamma(z)(R_R + R_{S1} + R_{S2} + \cdots)]
       + [\sum_{n=1}^{N} \sqrt{R_R R_{Sn}} \, \gamma(z \pm 2(z_R - z_{Sn}))]
       + [\sum_{n \neq m}^{N} \sqrt{R_{Sn} R_{Sm}} \, \gamma(z \pm 2(z_{Sn} - z_{Sm}))].   (2.10)

Equation 2.10 is referred to as an OCT A-scan and is the point spread function (PSF) of an OCT system. It represents a single depth profile. A few important results from Equation 2.10 are discussed next. First, the zero position of the A-line appears at the position of the reference reflector z_R. Second, the displacement of each sample reflector from the reference position is doubled due to the round trip distance to each reflector. Third, there are mirror image artifacts due to the fact that the Fourier transform of a real signal is symmetric. Most importantly, each reflector is broadened out to a width equal to the coherence length due to the convolution with the coherence function. The coherence length \delta z_0 is thus defined as the axial resolution of an OCT imaging system.
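To make Equations 2.6 and 2.10 concrete, the following Python sketch builds the cross-correlation term of a spectral interferogram for two discrete reflectors and Fourier transforms it; the dominant peak lands at twice the stronger reflector's offset from the reference position, broadened by the source coherence length. All quantities are in arbitrary units and are illustrative values, not the parameters of any system described in this thesis.

```python
import numpy as np

N = 2048
k = np.linspace(-0.5, 0.5, N)                # wavenumber detuning (arb. units)
S = np.exp(-4 * np.log(2) * (k / 0.1) ** 2)  # Gaussian source spectrum, FWHM 0.1
z_offsets = [80.0, 140.0]                    # z_R - z_Sn for two reflectors
R = [0.05, 0.02]                             # sample power reflectivities
R_ref = 1.0                                  # reference mirror reflectivity

# Cross-correlation term of Eq. 2.6 (DC and autocorrelation terms omitted)
interf = sum(np.sqrt(R_ref * r) * np.cos(2 * k * z)
             for r, z in zip(R, z_offsets))
I_k = S * interf

# Fourier transform back to depth (Eq. 2.10); cos(2kz) = cos(2*pi*(z/pi)*k),
# so a frequency axis in cycles per unit k maps to depth via z = pi * f,
# and each reflector appears at +/- 2*(z_R - z_Sn) (mirror-image artifact)
a_line = np.abs(np.fft.fftshift(np.fft.fft(I_k)))
z_axis = np.fft.fftshift(np.fft.fftfreq(N, d=k[1] - k[0])) * 2 * np.pi
peak_z = abs(z_axis[np.argmax(a_line)])
```

The peaks are broadened by the Fourier transform of the Gaussian envelope S(k), i.e. by the coherence function, which is why a broader source spectrum yields sharper axial peaks.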
By Equation 2.8, a larger source bandwidth leads to a shorter coherence length. Thus, in order to increase the system axial resolution, the source bandwidth should be very large. The fundamental quality that differentiates OCT from other forms of optical microscopy is its ability to resolve in depth in this way. A sample interferogram and resulting A-line with reflectors located at two different depths are illustrated in Figure 2.4.

Typically, the DC term has the largest magnitude and the autocorrelation terms are negligible compared to the interferometric term. In order to recover the interferometric term and the sample reflectivity profile \sqrt{R_S(z_S)}, a background scan is first acquired with the signal from the sample arm blocked, which leaves only the DC term. Then, after unblocking the sample arm and acquiring a spectral interferogram, the background is subtracted and the resulting signal is Fourier transformed. In this way the DC term is removed and the resulting signal represents the sample reflectivity profile \sqrt{R_S(z_S)}.

Figure 2.4: Sample A-scan for SDOCT. Two discrete reflectors are located at two different depths. In the interferogram in wavenumber/frequency space, this can be seen from the two modulating frequencies of the Gaussian source spectrum. In the depth profile, or A-line, the discrete reflectors are broadened by the finite axial resolution \delta z_0.

2.1.2 Transverse Resolution

The transverse resolution of OCT is defined as the focal spot size of the incident beam on the sample. If the source emits a Gaussian beam [8] then the 1/e^2 intensity transverse resolution is given by:

\omega_0 = \frac{4\lambda_0}{\pi} \frac{f}{D},   (2.11)

where f denotes the focal length of the objective lens that focuses light onto the sample and D denotes the diameter of the collimated beam on the lens. The transverse resolution is independent of the axial resolution. This is in contrast to conventional microscopy, where these values are coupled.
The depth of focus, or the range over which the beam size remains approximately equal to the spot size, varies with the square of the transverse resolution. Thus, there is a tradeoff between the focal diameter and the focal zone of the sample beam; the smaller the focal diameter (or the better the transverse resolution), the shorter the focal zone.

2.2 Practical Spectral-Domain OCT Processing

2.2.1 Coherence Volume

In practice, focused light reflected from the sample emanates from within a three dimensional coherence volume, defined by the beam width \omega_0 in the transverse dimensions X and Y and by the coherence length \delta z_0 in the axial dimension Z. An example coherence volume is illustrated in Figure 2.5.

Figure 2.5: Coherence volume definition. Note that the beam width is the transverse extent in both the X and Y (not shown) dimensions.

Neglecting the DC and autocorrelation terms, the interferometric signal as a function of wavenumber emanating from a coherence volume can be written as [9]:

I(x_b, y_b, k) \propto \Re\left\{ \iiint S(k) \, r(x,y,z) \, e^{-4\ln 2 \frac{(x - x_b)^2 + (y - y_b)^2}{\omega_0^2}} \, e^{i2kz} \, dx \, dy \, dz \right\},   (2.12)

where (x_b, y_b) denotes the transverse position of the beam, (x, y, z) denotes the coordinates of a reference frame fixed to the sample, r(x,y,z) is the backscattering coefficient of the sample, \omega_0 is the 1/e^2 diameter of the beam intensity profile and the other terms have been previously defined. Equation 2.12 assumes that the zero path length difference in the interferometer is located at the beginning of the sample, or z = 0. The complex OCT signal as a function of depth Z is obtained after Fourier transformation of Equation 2.12 and is given by:

I(x_b, y_b, Z) \propto \iiint r(x,y,z) \, e^{i2k_0(z - Z)} \, e^{-8\frac{(x - x_b)^2 + (y - y_b)^2}{\omega_0^2}} \, e^{-4\ln 2 \frac{(z - Z)^2}{\delta z_0^2}} \, dx \, dy \, dz,   (2.13)

where \delta z_0 denotes the FWHM axial resolution. The amplitude of I(x_b, y_b, Z) is proportional to a coherent sum of all of the light backscattered from within a coherence volume centered at Z = z.
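As a quick numeric check of Equation 2.11 and the depth-of-focus tradeoff, the sketch below plugs in illustrative values matching the laboratory system described later in this chapter (an 840 nm source, a 20 mm objective and a 1.0 mm collimated beam). The depth-of-focus expression used here is the standard Gaussian-beam confocal parameter and is an assumption on my part, since the text only states the square-law scaling.

```python
import math

lam0 = 840e-9   # center wavelength [m]
f = 20e-3       # objective focal length [m]
D = 1.0e-3      # collimated 1/e^2 beam diameter [m]

# Eq. 2.11: 1/e^2 focal spot diameter of a Gaussian beam (~21 um here)
w0 = (4 * lam0 / math.pi) * (f / D)

# Confocal-parameter depth of focus, b = pi * w0^2 / (2 * lam0);
# it scales with the *square* of the transverse resolution, so halving
# the spot size shortens the focal zone by a factor of four
b = math.pi * w0 ** 2 / (2 * lam0)
```

With these numbers the spot size is roughly 21 um and the focal zone is under a millimeter, which illustrates why tight focusing and a long imaging range are in tension.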
2.2.2 Resampling and Dispersion Compensation

In practice, the reconstruction of an A-line is complicated by the fact that (a) the acquired spectra are unevenly spaced in wavenumber and (b) there is a dispersion mismatch between the reference and sample arms. Both of these effects lead to a broadening of the PSF in the axial dimension [2]. The resampling error leads to a depth dependent broadening, whereas the dispersion error leads to a depth independent broadening.

The spectra in Equation 2.12 are evenly spaced in the array pixels p of the CCD camera. Thus, the acquired wavenumbers are a function of array pixels, or k = k(p). The dispersion effect is manifested in an additional nonlinear phase term in Equation 2.12. The purpose of resampling and dispersion compensation is to recover I(k) evenly spaced in k prior to the Fourier transform computation in Equation 2.13. Most techniques to overcome the resampling problem rely on a precise calibration of the acquired wavelengths in the spectrometer using a diffraction grating equation [10]. For the experiments performed in this work, a different technique is used that does not rely on the precise spectrometer alignment. OCT interferograms are computed using two mirror reflections at different depths z_1 and z_2. In this case, the sample reflectivity is given by r(x, y, z) = \delta(x - x_b, y - y_b, z - z_j) for j = \{1, 2\}. Plugging this into Equation 2.12, substituting k(p) for k and adding the dispersion effect gives:

\tilde{I}(x_b, y_b, k(p)) \propto \Re\left\{ S(k(p)) \, e^{i2k(p)z_j} \, e^{i\theta(k(p))} \right\}.   (2.14)

Only the real part of the signal is measured. However, the complex signal in Equation 2.14 can be recovered using the Hilbert transform [7]. After Hilbert transformation, the phases \phi_j of the resulting signals are computed, unwrapped and subtracted to give:

\Delta\phi(k(p)) = 2k(p)(z_1 - z_2) = 2k(p)\Delta z,   (2.15)

where the fact that the dispersion is independent of depth has been used, so that \theta(k) vanishes.
Using the known physical distance \Delta z, Equation 2.15 can be solved for k(p), which gives the unevenly spaced wavenumbers as a function of CCD array pixel. The total wavenumber range is given by k(p_{max}) - k(p_{min}) = \Delta K. If there are A array pixels on the CCD camera, then the evenly spaced wavenumbers can be found by:

k'_a = k(p_{min}) + (a - 1)\frac{\Delta K}{A - 1},   (2.16)

for a = \{1, 2, \ldots, A\}. The uneven wavenumber sampling issue is resolved by resampling the interferogram in Equation 2.14 to the evenly spaced wavenumbers in Equation 2.16 using spline interpolation. This resampling removes the depth dependent broadening, as illustrated in Figure 2.6.

After the resampling is complete, dispersion compensation is performed. The relation between the phase \theta(k) and the multiple orders of dispersion can be described by the Taylor series expansion:

\theta(k) = \theta(k_0) + \theta'(k_0)(k - k_0) + a_2\theta''(k_0)(k - k_0)^2 + a_3\theta'''(k_0)(k - k_0)^3 + \cdots,   (2.17)

where a_n = (n!)^{-1}. The first two terms describe a constant offset and group velocity, respectively, and are not related to dispersive broadening. The third term represents second order or group-velocity dispersion. Dispersion mismatch in the sample and reference arms is largely compensated by this term, although adjustment of higher order dispersion can be necessary as well, especially when an ultra-broadband source is used. The nonlinear terms in Equation 2.17 lead to dispersive broadening and must be compensated for by multiplying the resampled interferogram by a corrective phase factor \theta_c(k). In order to compute this correction, the Hilbert transform of a single mirror reflection and its unwrapped phase are computed. Then, a linear fit \theta_{LIN} is applied to the unwrapped phase. The residual of this fit, which represents the nonlinear terms in Equation 2.17 that lead to dispersive broadening, is assigned to \theta_c(k). Then, each resampled interferogram is multiplied by e^{-i\theta_c(k)} to compensate for dispersion.
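The calibration and correction steps described above can be sketched in Python as follows. This is an illustrative outline rather than the actual processing code: the function names are invented, a small FFT-based routine stands in for the Hilbert-transform phase recovery, linear interpolation stands in for the spline resampling, and edge effects of the analytic signal are ignored.

```python
import numpy as np

def _analytic(x):
    """Analytic signal of a real array via the FFT (Hilbert-transform
    stand-in): zero out negative frequencies, double positive ones."""
    X = np.fft.fft(x)
    h = np.zeros(x.size)
    h[0] = 1.0
    if x.size % 2 == 0:
        h[x.size // 2] = 1.0
        h[1:x.size // 2] = 2.0
    else:
        h[1:(x.size + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def calibrate_wavenumbers(fringe1, fringe2, delta_z):
    """Eq. 2.15: recover the (unevenly spaced) wavenumber of each camera
    pixel, up to a constant offset, from two mirror interferograms whose
    depths are separated by a known distance delta_z."""
    phi1 = np.unwrap(np.angle(_analytic(fringe1)))
    phi2 = np.unwrap(np.angle(_analytic(fringe2)))
    return (phi1 - phi2) / (2.0 * delta_z)

def resample_and_compensate(fringe, k_of_p, phase_corr):
    """Resample a fringe onto the even k grid of Eq. 2.16 (linear interp
    as a stand-in for the spline), apply the dispersion correction
    exp(-i * theta_c(k)), and Fourier transform to get an A-line."""
    k_even = np.linspace(k_of_p[0], k_of_p[-1], k_of_p.size)
    fringe_even = np.interp(k_even, k_of_p, fringe)
    analytic = _analytic(fringe_even) * np.exp(-1j * phase_corr)
    return np.abs(np.fft.fft(analytic))

def dispersion_phase(mirror_fringe_even):
    """theta_c(k): nonlinear residual after fitting a line to the
    unwrapped phase of a single (already resampled) mirror fringe."""
    phi = np.unwrap(np.angle(_analytic(mirror_fringe_even)))
    x = np.arange(phi.size)
    slope, intercept = np.polyfit(x, phi, 1)
    return phi - (slope * x + intercept)
```

The calibration only needs to be run once per spectrometer alignment; the recovered k(p) grid and \theta_c(k) are then reused for every acquired A-line.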
Dispersion compensation leads to a decrease in the axial width of the point spread function, as shown in Figure 2.6.

Figure 2.6: Resampling and dispersion compensation example. The original point spread functions (PSFs) broaden with depth. After resampling, the depth dependent broadening is removed. With resampling and dispersion compensation, the widths of the PSFs are small, indicating good axial resolution. Note that the signal intensity falls off with depth. A complete treatment of this phenomenon is given in [2].

2.2.3 Retinal OCT Imaging

Retinal images are acquired by using the eye as the sample and focusing the beam onto the retina. Structural images are obtained by taking the magnitude of Equation 2.13. This process is illustrated in Figure 2.7.

Figure 2.7: Cross-sectional imaging of the human retina.

2.2.4 Laboratory SDOCT System

The flow experiments presented in subsequent chapters of this thesis were performed using a laboratory SDOCT system that is illustrated in Figure 2.8. It contains a superluminescent diode source with a center wavelength of 840 nm and a bandwidth of 49 nm. The theoretical axial resolution of the system is 6.4 um in air, while the measured axial resolution is 8.8 um in air. The deviation from the theoretical value is likely due to the non-Gaussian spectrum profile as well as aberrations that reduce the spectral resolution of the spectrometer. A collimator placed in the sample arm produces a 1/e^2 intensity diameter of approximately 1.0 mm, which is focused down to a 20 um spot size using a 20 mm focal length lens. The sample arm probe beam is focused onto a glass capillary tube (Wale Apparatus) with an outer diameter of 330 um and an inner diameter of 200 um. No anti-reflection material was used to coat the glass. The capillary is placed on top of a piece of paper and attached to a ball and socket mounting platform (Thorlabs). A syringe pump (Harvard Apparatus) is used to control the flow of human blood through the tube.
The recombined field is spectrally dispersed by a spectrometer and detected by 1024 pixels on a line-scan camera. The data from the camera are transferred to a personal computer for processing. The time between two sequential A-scan acquisitions is 56 us, which corresponds to a 17 kHz repetition rate.

2.3 Doppler OCT

Doppler OCT is a functional extension of OCT that enables the measurement of flow in biological tissues [11]. Doppler OCT provides both structural and functional information as a function of depth. The high imaging speeds of spectral-domain OCT make it possible to use OCT for in vivo blood flow measurements [12].

Figure 2.8: Spectral domain OCT system schematic used in flow phantom experiments. SLD - superluminescent diode, FC - 2x2 fiber coupler, PC - personal computer, H2O - water cell, Galvos - scanning mirror galvanometers, BS - beamsplitter. This system was originally designed for retinal imaging, so that a water cell is in the reference arm for dispersion matching and a beamsplitter is in the sample arm to allow for slit lamp illumination of the retina. The retinal imaging system was modified by adding a focusing lens in the sample arm.

Doppler Effect

Doppler OCT relies on the Doppler effect, which states that moving particles induce a Doppler frequency shift in scattered light according to [12]:

\Delta f = \frac{1}{2\pi}(\vec{k}_s - \vec{k}_i) \cdot \vec{V},   (2.18)

where \vec{k}_i and \vec{k}_s are the respective wave vectors for the incident and scattered light and \vec{V} is the velocity vector of the moving particles. In OCT, only backscattered light is detected and therefore the direction of \vec{k}_s is the opposite of the direction of \vec{k}_i. Using this relation and the Doppler angle \theta between the incident beam and the velocity vector, Equation 2.18 can be rewritten as:

\Delta f = -2V\cos(\theta)\,n/\lambda_0,   (2.19)

where n is the index of refraction of the medium and \lambda_0 is the center wavelength of the source. Notice that the Doppler angle is needed to calculate the total velocity vector. Figure 2.9 illustrates the Doppler effect.
Figure 2.9: Illustration of the Doppler effect. Moving red blood cells inside a blood vessel induce a Doppler frequency shift in the backscattered light. The Doppler frequency shift is proportional to the axial velocity and is thus insensitive to transverse motion.

Phase-Resolved Algorithm

The most common method for calculating Doppler frequency shifts in moving blood vessels is the phase-resolved algorithm [13-17]. Suppose that a single particle at position (x_j, y_j, z_j) within the sample backscatters the incident beam. If the particle moves a distance (dx, dy, dz) between two sequential A-line acquisitions A_j and A_{j+1}, then the complex OCT amplitudes in Equation 2.13 can be written as:

A_j(Z) \propto a_j(x_j, y_j, z_j) \, e^{i2k_0(z_j - Z)},
A_{j+1}(Z) \propto a_{j+1}(x_{j+1}, y_{j+1}, z_{j+1}) \, e^{i2k_0(z_{j+1} - Z)} \approx a_j(x_j, y_j, z_j) \, e^{i2k_0(z_j + dz - Z)},   (2.20)

provided that the amplitude difference between the two A-lines is small relative to the phase effect. The principle behind the phase-resolved (PR) algorithm is that the phase change between sequential A-line scans at the same depth can be used to calculate the Doppler frequency shift as:

\Delta f = \frac{\angle A_{j+1} - \angle A_j}{2\pi\tau} = \frac{\Delta\phi}{2\pi\tau},   (2.21)

where \tau is the time difference between A-scans. Ideally, the A-scans in Equation 2.21 would be acquired at the same transverse beam position. In practice, however, because of total imaging time and OCT imaging system limitations, the A-lines are acquired with a transverse distance offset \Delta x. The effect of this transverse step size will be studied in Section 4.4. In general, the average phase shift between sets of A-scans is used to improve velocity estimation. Because of phase wrapping, the PR algorithm can uniquely detect frequency shifts up to \Delta f = 1/(2\tau), which is approximately 9 kHz for the laboratory phantom system described previously. Clearly then, a shorter time between A-scans will increase the maximum detectable shift. The minimum detectable shift is determined by the phase noise of the OCT system [16].
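Plugging illustrative numbers into Equations 2.19 and 2.21 makes the wrapping limit concrete. The values below (a scatterer at 10 mm/s, a Doppler angle of 80 degrees, n = 1.38) are assumptions chosen for illustration; only the 840 nm wavelength and 56 us A-scan interval come from the system description above.

```python
import math

n = 1.38                     # refractive index of the medium (assumed)
lam0 = 840e-9                # source center wavelength [m]
V = 10e-3                    # scatterer speed [m/s] (assumed)
theta = math.radians(80.0)   # Doppler angle (assumed)
tau = 56e-6                  # time between A-scans [s]

# Eq. 2.19 (magnitude of the Doppler frequency shift): ~5.7 kHz here
delta_f = 2 * V * math.cos(theta) * n / lam0

# Eq. 2.21 rearranged: phase shift accumulated between sequential A-scans
delta_phi = 2 * math.pi * delta_f * tau

# Unambiguous limit set by phase wrapping: delta_f < 1/(2*tau),
# equivalently |delta_phi| < pi
f_max = 1.0 / (2.0 * tau)
```

For this A-scan interval f_max is about 8.9 kHz, so the example shift is detectable without wrapping; doubling the axial velocity component would push it past the limit.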
The phase averaging implementation of the PR algorithm typically computes the phase differences between adjacent A-lines and then averages them over a window. A more robust averaging technique was suggested by [18] and is now commonly used for PR averaging. In this vector averaging technique, Doppler phase shifts are calculated according to:

\Delta\phi = \arctan\left( \frac{\Im\left(\sum_{m=0}^{M-1}\sum_{n=0}^{N-1} A(m,n) A^*(m, n+1)\right)}{\Re\left(\sum_{m=0}^{M-1}\sum_{n=0}^{N-1} A(m,n) A^*(m, n+1)\right)} \right),   (2.22)

where A(m,n) is the complex reflectivity at the pixel (m,n) and M and N are the averaging windows in the axial and transverse dimensions, respectively.

Flow Calculation

Computation of the total velocity vector \vec{v}(x,y,z) requires both the estimated Doppler frequency shifts as well as the Doppler angle between the incident beam and the blood vessel, as in Equation 2.19. Two adjacent B-scans are typically required to calculate the Doppler angle between the incident light beam and the blood vessel [19]. After reconstructing the total velocity vector \vec{v}(x,y,z), the velocity flux, or volumetric flow rate through a surface, can be computed with a surface integral as:

F = \int_S (\vec{v} \cdot \vec{n}) \, dS,   (2.23)

where \vec{n} is the normal to the surface. In typical OCT B-scans, the XZ or transverse-depth plane is the integration surface. However, to calculate the volumetric flow, the integration should be calculated in a plane that is normal to the blood vessel direction. If \beta is the angle between the integration plane and the blood vessel normal, the volumetric flow rate is given by [19]:

F = \iint \vec{v}(x,z) \cos\beta \, dx \, dz.   (2.24)

For most of the results presented in this thesis, neither the Doppler angle nor the integration plane correction angle is computed. For example, in Section 4.3 the flow ratio between two Doppler OCT algorithms will be studied. In this case, neither angle is required since the effects of these angles cancel out when computing the ratio of estimated flow rates. In Section 4.4, the effect of transverse step size on Doppler OCT flow velocity measurements is examined.
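A minimal implementation of the vector averaging estimator in Equation 2.22 might look as follows. The function name and test signal are illustrative; note also that with the A(m,n) A*(m,n+1) product convention a positive phase increment between A-lines yields a negative estimate, so the sign convention must be matched to the system.

```python
import numpy as np

def phase_resolved_shift(A):
    """Vector-averaged Doppler phase shift (Eq. 2.22) over an M x N block
    of complex OCT amplitudes A[m, n] (m: depth pixel, n: A-line index).
    Summing the complex products before taking the angle weights each
    contribution by its magnitude, which is more robust to noise than
    averaging raw phase differences pixel by pixel."""
    prod = A[:, :-1] * np.conj(A[:, 1:])   # A(m, n) * A*(m, n+1)
    s = prod.sum()
    return np.arctan2(s.imag, s.real)
```

On a synthetic block whose A-lines advance by a uniform phase step, the estimator recovers that step exactly regardless of the (random) per-depth amplitudes.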
Since the volumetric flow rate is proportional to the sum of the Doppler phase shifts inside a vessel, this effect can be studied without explicitly computing the angles.

Chapter 3
Segmentation of OCT Retinal Images

3.1 Introduction

OCT tomograms enable detection of morphological structures of the human retina. An example of an OCT structural image of the retina is shown in Figure 3.1.

Figure 3.1: Example of an OCT structural image of the human retina.

Many of the retinal structures are critical for the diagnosis and study of ocular pathologies. For example, OCT can be used to detect macular holes and retinal detachments [20,21]. Also, one symptom of glaucoma is loss of ganglion cells in the retina, which causes a thinning of the nerve fiber layer. Thus, retinal nerve fiber layer thickness is utilized in glaucoma progression analysis [22,23]. Segmentation of OCT retinal images is thus critical to the study of many ocular diseases. However, segmentation of OCT retinal tomograms is complicated by high levels of speckle noise. Narrowband detection systems, including OCT, radar and ultrasound, are all subject to the presence of speckle [24]. Briefly, speckle can be caused by the interference of waves backscattered by particles within the same coherence volume. The result is a random intensity fluctuation about a mean value. Speckle reduces image fidelity and consequently segmentation algorithm performance. Other factors that degrade the performance of segmentation algorithms include poor and uneven illumination conditions and low contrast boundaries.

In Section 3.2 we introduce a novel retinal layer segmentation algorithm for identifying retinal layer boundaries in OCT tomograms. For the first time, a sparse representation using piecewise constant segments is computed to approximate each column of the image. Then, a graph theory and dynamic programming technique is applied to connect the segments, column by column, into layers that extend across the image.
Using the identified layers, the boundaries between the retinal layers are extracted, resulting in an image segmentation. The primary advantage of our segmentation algorithm is the relaxation of many structural assumptions employed by other algorithms. Specifically, our algorithm does not specify the number of layers for which to look or the order in which to search for the layers. Conversely, state-of-the-art segmentation algorithms search for a specified number of retinal layer boundaries and typically employ an iterative boundary detection scheme during which a single boundary is identified using information about other previously identified boundaries. The information used relies on a priori assumptions about overall layer structure that hold in high fidelity images of healthy retinas but may not extend to poor imaging or pathological conditions. Relaxing some of the structural constraints results in a more general algorithm that could possibly be applicable to pathological images and even to other imaging modalities. Some of the material presented in Section 3.2 was published in [25].

One difficulty in evaluating retinal layer segmentation algorithms is the lack of an objective error metric to assess the accuracy and the noise and parameter sensitivity of the segmentations. The accuracy of most of the algorithms is tested against subjective manual segmentations by one or more experts [26]. In order to overcome this limitation, we develop a synthetic retinal image generation technique in Section 3.3. A simplified image model is derived that is composed of a locally smooth component, a speckle component and an additive noise component. Each component is generated independently and then combined to produce a synthetic image. The synthetic images are validated by comparing the intensity distributions and axial and transverse speckle sizes with in vivo retinal images.
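As a rough illustration of such a speckle model (fully developed speckle scaled by a locally smooth reflectivity, plus additive complex Gaussian detection noise), the following Python sketch generates a synthetic speckled image. The function name, the FFT-based Gaussian blur used to set the speckle correlation size, and all parameter values are assumptions for illustration, not the implementation developed in this thesis.

```python
import numpy as np

def synthetic_speckle_image(reflectivity, axial_fwhm_px=3.0,
                            lateral_fwhm_px=3.0, noise_sigma=0.01, rng=None):
    """Toy speckle generator: draw an i.i.d. circular complex Gaussian
    field scaled by sqrt(mean reflectivity), correlate it with a Gaussian
    kernel (mimicking the coherence volume, which sets the speckle size),
    add complex Gaussian detection noise, and return the magnitude."""
    rng = np.random.default_rng(rng)
    field = (rng.standard_normal(reflectivity.shape)
             + 1j * rng.standard_normal(reflectivity.shape))
    field *= np.sqrt(reflectivity)
    # FFT-domain Gaussian filter; FWHM -> sigma conversion is 2*sqrt(2*ln 2)
    kz = np.fft.fftfreq(field.shape[0])
    kx = np.fft.fftfreq(field.shape[1])
    sz = axial_fwhm_px / 2.355
    sx = lateral_fwhm_px / 2.355
    H = np.exp(-2 * np.pi ** 2 * ((sz * kz)[:, None] ** 2
                                  + (sx * kx)[None, :] ** 2))
    field = np.fft.ifft2(np.fft.fft2(field) * H)
    # additive complex Gaussian noise completes the model
    field += noise_sigma * (rng.standard_normal(field.shape)
                            + 1j * rng.standard_normal(field.shape))
    return np.abs(field)
```

Passing a piecewise constant reflectivity map (one value per synthetic layer) yields a layered image with Rayleigh-like intensity fluctuations whose ground truth boundaries are known by construction.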
Our technique improves upon an existing synthetic image generation method both by using a nonparametric smooth representation of the average intensity values in the image and by accurately modeling speckle noise. The use of a nonparametric representation of the image intensity provides a more flexible data generation method. The synthetic images are generated with known ground truth segmentations and are thus useful for objective quantitative evaluation of retinal layer segmentation algorithms. In order to illustrate its utility, we test our novel segmentation algorithm using synthetic retinal images in Section 3.4. We show that our segmentation algorithm identifies the nine retinal layer boundaries accurately and that our algorithm is robust to image noise.

3.2 Retinal Layer Segmentation Algorithm

3.2.1 Introduction

As stated in Section 3.1, segmentation of the retinal layers in OCT structural images is critical for the study of ocular pathologies. OCT image segmentation methods are capable of locating between 4 and 9 retinal layers [23,27,28]. Early segmentation approaches operated by searching for intensity peaks and valleys in each column of a despeckled retinal image [29,30]. These methods suffer in low contrast cases, for which peaks may not be detectable and so erroneous boundaries can result. In addition, these methods rely on assumptions about relative layer positioning, which may not hold in a pathological case. More recently, graph theory and dynamic programming have been applied to OCT retinal image segmentation. Garvin et al. [27] segmented a 3D composite image into five layers by mapping the segmentation problem to one of finding a minimum cost closed set in a 3D graph. Both Yang et al. [28] and Chiu et al. [26] define a 2D graph on the image grid and locate boundaries by cutting the graph iteratively based on intensity and/or gradient information.
While accurate segmentation results have been reported, these graph methods suffer from overly restrictive assumptions. Specifically, these algorithms rely on iterative boundary detection during which a single boundary is identified using information about other previously identified boundaries. The information used relies on a priori assumptions about overall layer structure that hold in high fidelity images of healthy retinas but may not extend to poor imaging or pathological conditions. Relaxing some of the structural constraints could result in a more generic algorithm applicable to pathological images and even to other imaging modalities.

Additionally, these graph based approaches suffer from complexity scaling. Specifically, the number of nodes in a graph is typically equal to the number of pixels in the image. Thus, graph complexity scales with the image size even though the number of layers does not. As OCT systems increase in speed and scans over the same area increase in definition, the number of pixels in each image increases as well. It might therefore be advantageous to devise a method for transforming the image from the pixel domain to a domain that captures the layer characteristics while maintaining a dimension on the order of the number of layers.

In order to overcome the aforementioned limitations, we develop a novel sparsity-based retinal layer segmentation algorithm. The notion of sparsity is pervasive throughout the biomedical imaging literature [31]. OCT retinal images are naturally sparse in the layer domain, an abstraction of the set of retinal layers, in the sense that there are only a handful of layers in each image. We leverage this natural sparsity by transforming the input image from the pixel space to a much smaller space while still capturing layer properties that are useful for image segmentation, such as intensity and gradient information.
The sparse approximation can be viewed as a preprocessing step so that any subsequent boundary search can be performed in a lower-dimensional space. Other graph search algorithms [26-28] could thus benefit from this complexity reduction.

After computing a sparse representation of an OCT retinal image, we perform an iterative multi-boundary search using graph theory and dynamic programming techniques. The significant contribution of this multi-boundary search is the relaxation of some structural assumptions that are typically enforced in other graph search methods. Most retinal segmentation algorithms rely on the specification of the number of boundaries to look for and some order in which to look for them. Conversely, the implementation proposed in this thesis only assumes that there are relatively few, primarily parallel retinal layers that are separated by smooth boundaries and are distinguished by information such as mean intensity and edge/gradient magnitude. Thus, our algorithm is fairly general and could possibly be used for other types of images, such as ultrasound and synthetic aperture radar.

3.2.2 Procedure

The goal of our novel segmentation algorithm is to identify all of the boundaries between the various retinal layers in an OCT structural image without making overly restrictive assumptions about the expected layer structure. Throughout this section, a retinal image is assumed to have X columns and Z rows. The first row lies at the top of the image and the last row lies at the bottom. A superscript x will be used to denote a particular column. B_j will be used to denote the jth boundary, ordered from the top of the image to the bottom. We assume that all boundaries extend across the entire image, so that B_j = [z_j^1, z_j^2, ..., z_j^X], where z_j^x is the row location of the jth boundary in column x. The algorithm iteratively searches for the boundaries between the retinal layers until no new reliable boundaries are found.
At the beginning of each algorithm iteration, the boundary search region is limited to lie in between each pair of adjacent boundaries B_j and B_{j+1} found in previous algorithm iterations. A sub-image is then created by flattening the image to boundary B_j, which means that pixels in the original image are mapped to a new grid so that B_j lies along the first row of the sub-image. The flattened image is then smoothed and downsampled to reduce noise and the number of sub-image pixels. Then, a sparse approximation to each column in the sub-image is computed using the Sparse Bayesian Learning algorithm. This step relies on the assumption that intensities of adjacent retinal layers are distinct and produces a compact representation of each sub-image column with a set of piecewise constant segments. The first row of each piecewise constant segment is called a breakpoint. A graph-theoretic approach is then used to group the segments in each column into layers that extend across the width of the image. The grouping is based on segment similarity in terms of intensity and edge information as well as smoothness constraints. The boundaries between each of the layers are then extracted and tested for smoothness. If no new boundaries pass the smoothness test, then the algorithm halts and returns all of the identified retinal layer boundaries. Each new boundary that passes the smoothness test is refined using the technique in [26]. Our algorithm then limits the search region again to lie in between each pair of adjacent boundaries and iterates until no new boundaries are identified. Figure 3.2 depicts the flowchart for the segmentation algorithm. The individual modules are described in detail in each of the following subsections.

Figure 3.2: Flowchart for segmentation algorithm.

3.2.2.1 Limit Search Region

In OCT imaging of the human retina, the retina and choroid occupy less than a third of the total imaging depth.
Thus, as a first step, it is advantageous to isolate the retina and choroid and reduce the dimensions of the image. This is done by smoothing the image with a Gaussian filter and thresholding with Otsu's method [32] to create a binary mask of the region of interest (ROI). For each column x, the top-most and bottom-most pixels in the ROI are identified and used to create temporary boundaries B~_1 and B~_2. Then, the maximum width W~_{1,2} over all columns between B~_1 and B~_2 is computed, and B~_2 is redefined as B~_2 = B~_1 + W~_{1,2} - 10. The constant offset is subtracted in order to reduce the influence of the irregularly shaped choroid on the subsequent steps. The reduced search region is then defined as all of the pixels that lie on or in between B~_1 and B~_2. The region is illustrated in Figure 3.3.

3.2.2.2 Flatten, Smooth and Downsample

In the first algorithm iteration, the temporary boundaries B~_1 and B~_2 define the limited search region. In subsequent iterations, the non-temporary boundaries B_j and B_{j+1} will define the limited search region. In this and all subsequent sections, the temporary notation is removed for brevity.

Figure 3.3: Fast identification of retina and choroid. The image is smoothed and thresholded to create a binary mask that identifies the retina and choroid regions.

After the search region is reduced, a sub-image is created by flattening the image to the top boundary B_j. This flattening defines a mapping so that B_j lies along the first row in the flattened sub-image. The image is flattened because the retinal layers are primarily parallel and can be oriented along the transverse dimension by flattening. In order to compute the flattened image, the maximum width W_{j,j+1} across all columns between B_j and B_{j+1} is first computed as W_{j,j+1} = max_x (B_{j+1}^x - B_j^x) + 1.
Then, the mapping from a pixel (x, z) to the flattened image is formally defined by

M(x, z) = z - B_j^x + 1,  for all z in {B_j^x, B_j^x + 1, ..., B_j^x + W_{j,j+1} - 1}.  (3.1)

The next steps in the algorithm aim to reduce the number of pixels in the image without compromising the spatial features of interest. Specifically, the images examined in this work consist of 4000 columns that span approximately 6.9 millimeters, resulting in a physical transverse pixel size of 1.7 microns. The retinal layers are large-scale image features that span across the entire image. In order to keep the computation time low, 20x downsampling is performed, which changes the transverse pixel size to 34.5 microns. This pixel size is small enough to enable retinal layer identification. Prior to downsampling, the flattened images are smoothed with a Gaussian filter with a width of 200 columns in order to (a) reduce speckle noise and (b) reduce the additional downsampling noise. After smoothing the flattened image, downsampling is performed. Flattening, smoothing and downsampling results are shown in Figure 3.4. Note that these techniques are commonly used to improve segmentation algorithm performance [23, 26].

Figure 3.4: Smoothing and downsampling of flattened image. The top and bottom boundaries used to compute the flattened image are shown in red. On the first iteration of the segmentation algorithm, these boundaries lie at the top and bottom rows of the flattened image. In subsequent iterations, the top boundary will lie along the top row of the flattened image, but the bottom boundary need not lie along the bottom row.

3.2.2.3 Obtain Sparse Approximation

Formulation

Motivated by the assumptions that (a) the layers are primarily horizontal after image flattening and (b) the layer intensities are approximately constant, on average, after noise removal, each column is sparsely approximated with a weighted sum of Heaviside step functions [7].
The sparse approximation provides a compact representation of each layer column while preserving the layer-distinguishing properties such as mean and edge information. Formally, a vector of M intensities I^x in the flattened sub-image is observed and a solution to the problem

w^_x = argmin_w ||I^x - Fw||_2^2 + lambda ||w||_0  (3.2)

is sought. In Equation 3.2, F is an M x M normalized basis of step functions, lambda is a tradeoff parameter between goodness of fit and sparsity, and ||w||_0 is the l0 norm, equal to the number of nonzero weights. M is determined by the height of the flattened image. The ith vector in the basis is defined by

f_i(m) = -sqrt((M - i)/(iM)) if m <= i, and f_i(m) = sqrt(i/(M(M - i))) otherwise,  (3.3)

when i > 0, and

f_0(m) = 1/sqrt(M).  (3.4)

Except for the i = 0 basis vector, each basis vector f_i is a function with a single discontinuity. If the following basis selection algorithm incorporates the basis vector f_i by assigning it a nonzero weight, then the sparse approximation is said to have a breakpoint at position i. The goal of the problem in Equation 3.2 is to closely approximate the vector of intensities I^x with as few basis vectors as possible. A basis vector f_i is excluded from the approximation only if the corresponding weight w_i is zero. Note that a positive w_i indicates an increase in intensity at the breakpoint and a negative w_i indicates a decrease in intensity.

Sparse Bayesian Learning

The sparse Bayesian learning (SBL) algorithm is employed to compute the maximum a posteriori estimates of the weights within a Bayesian framework. SBL is used because it has been shown [33] to perform well with dictionaries of high coherence/collinearity, as is the case with a basis of unit step functions.

The SBL algorithm runs on each column of the image sequentially. The fast implementation of SBL used here is described in [34]. Briefly, the implementation begins with all weights set to zero except that of the vector f_i that has the largest inner product with the observed vector I^x.
It is said that vector f_i is in the model and all other vectors are not. Then, at each iteration, the algorithm either (a) adds a basis vector to the model, (b) removes a basis vector from the model or (c) updates the weight of a basis vector that is already in the model. This implementation is unique in the sense that all but one of the weights are set to zero initially, whereas for most other implementations all weights are nonzero initially. This is the key factor behind the speed of the implementation. SBL runs for a specified number of iterations and returns the weight vector. In the first iteration of the segmentation algorithm, the number of SBL iterations is specified by a parameter SBLIter. In subsequent iterations of the segmentation algorithm, the number of SBL iterations scales with the number of rows in the relevant flattened image. For example, suppose that in the first segmentation algorithm iteration the flattened image between temporary boundaries B~_1 and B~_2 has 100 rows. If on a subsequent segmentation algorithm iteration the flattened image between boundaries B_k and B_{k+1} has 50 rows, then SBL runs for SBLIter/2 iterations on this flattened image. The SBL iteration scaling is done this way because it is assumed that fewer breakpoints will be found if the flattened image that represents the limited search region is smaller.

Another parameter required for SBL implementation is an estimate of the image noise variance. Fortunately, a region that consists of noise only is readily identified in OCT retinal images. As previously mentioned, the retina and choroid compose only about one third of an OCT retinal tomogram. The region above the retina can be used to estimate the noise variance.

The SBL algorithm solves Equation 3.2 sequentially by adding and/or removing basis vectors. Basis vectors are removed by setting the corresponding weights to zero.
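For concreteness, the step-function dictionary of Equations 3.3 and 3.4 can be built in a few lines and checked for unit norm and zero mean. This is an illustrative sketch rather than the thesis implementation, and the sign convention (negative above the breakpoint, positive below) is our reconstruction from the statement that a positive weight indicates an intensity increase at the breakpoint:

```python
import math

def step_basis(M):
    """Normalized step-function dictionary of Equations 3.3 and 3.4.

    f_0 is the constant (DC) vector; for i > 0, f_i is a unit-norm,
    zero-mean step that jumps upward at breakpoint i, so a positive
    weight raises the intensity below the breakpoint (our reconstructed
    sign convention)."""
    basis = [[1.0 / math.sqrt(M)] * M]              # f_0
    for i in range(1, M):
        above = -math.sqrt((M - i) / (i * M))       # rows m <= i
        below = math.sqrt(i / (M * (M - i)))        # rows m > i
        basis.append([above if m < i else below for m in range(M)])
    return basis
```

Note that every f_i with i > 0 has unit norm and zero mean by construction, so each is orthogonal to f_0; the step vectors are not mutually orthogonal, which is exactly the high coherence that motivates the use of SBL here.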
Since the flattened image to which SBL is applied contains two previously identified adjacent boundaries B_k and B_{k+1}, it is imperative that the weights corresponding to these boundaries be nonzero. The SBL implementation used in this work allows for specification of a fixed basis F_B, which defines a subset of the basis F that must be included in the model. In order to prevent the two previously identified boundaries from being excluded from the model, they are added to the fixed basis F_B. Note that this does not apply to the temporary boundaries B~_1 and B~_2 used in the first iteration of the segmentation algorithm.

Backwards Elimination

Because of the significant amount of smoothing employed prior to applying the SBL algorithm, many edges stretch across multiple breakpoints. An example is shown in Figure 3.5. In order to reduce the number of breakpoints in the SBL approximation without sacrificing any boundary information, a backwards elimination algorithm is applied to combine close adjacent breakpoints that have the same sign. Assume that there are K breakpoints in a column x. Suppose that in this column there is a sequence of breakpoints located at positions z_i thru z_j such that (a) all of the weights w_i thru w_j have the same sign and (b) z_{k+1} - z_k <= mnBPDst for all k in {i, i+1, ..., j-1}. Denote this sequence by A_ij. If none of the breakpoints in A_ij are in the fixed basis F_B, then A_ij is replaced with a single breakpoint located near the center of mass of the sequence, CM_ij. The procedure for this replacement is as follows:

1. Compute CM_ij = sum_{k=i}^{j} w_k z_k / sum_{k=i}^{j} w_k.

2. Find r = argmin_k |z_k - CM_ij| for k in {i, i+1, ..., j}.

3. Compute the cumulative weights for all breakpoints in the column, c_{k+1} = sum_{m=0}^{k} w_m for k in {0, 1, ..., j-1}. Set c_0 = 0.

4. Update weight w'_r = c_{j+1} - c_i.

5. Remove breakpoints k in {i, i+1, ..., j}, k != r.
This process replaces the sequence of breakpoints with a single breakpoint whose weight is updated to the cumulative weight of the sequence. On the other hand, if any of the breakpoints in A_ij are in the fixed basis F_B previously discussed, then those are retained and the others are removed. This replacement procedure is as follows:

1. Compute the cumulative weights for all breakpoints in the column, c_{k+1} = sum_{m=0}^{k} w_m for k in {0, 1, ..., j-1}. Set c_0 = 0.

2. If one vector is in the fixed basis, then update its weight. If two vectors are in the fixed basis, then update only the second weight: w'_r = c_{j+1} - c_i.

3. Remove all breakpoints not in F_B.

Note that by construction of F_B, at most two breakpoints can be in the fixed basis. If there are two breakpoints in F_B, then the first one must lie at the top of the image. By convention, the weight of this breakpoint is not updated.

An example of the SBL algorithm applied to a single image column and the subsequent backwards elimination step is illustrated in Figure 3.5. Notice that the number of breakpoints is significantly reduced by applying the backwards elimination procedure. The SBL algorithm and backwards elimination procedure are run on each column of the input image. After completing all X columns, a sparse image approximation is obtained. An example of a sparse image approximation is shown in Figure 3.6.

Figure 3.5: Sparse approximation of a sample A-line after Sparse Bayesian Learning and backwards elimination. Top left panel: sample B-scan with image column denoted in red. Top right panel: the black line denotes intensity in the image column, the green line denotes the result after applying SBL, and the red line denotes the result after backwards elimination. Bottom panel: zoomed-in version of the blue dotted box in the top right panel. The effectiveness of the backwards elimination procedure is clearly illustrated by the reduction in the number of breakpoints.
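The first (no fixed-basis) replacement case amounts to: sum the run's weights, and keep the existing breakpoint nearest the weighted centre of mass. A dependency-free sketch is given below; the list-based representation and the name mn_bp_dst (for mnBPDst) are ours:

```python
def backwards_eliminate(breakpoints, weights, mn_bp_dst):
    """Merge runs of nearby, same-sign breakpoints (steps 1-5 above).

    breakpoints: sorted row indices; weights: matching signed weights.
    Each run is replaced by the existing breakpoint nearest its weighted
    centre of mass, carrying the run's summed (cumulative) weight."""
    out_z, out_w = [], []
    i = 0
    while i < len(breakpoints):
        # Grow a run of close, same-sign breakpoints starting at i.
        j = i
        while (j + 1 < len(breakpoints)
               and breakpoints[j + 1] - breakpoints[j] <= mn_bp_dst
               and weights[j + 1] * weights[i] > 0):
            j += 1
        run_z = breakpoints[i:j + 1]
        run_w = weights[i:j + 1]
        # Centre of mass (step 1) and nearest existing breakpoint (step 2).
        cm = sum(w * z for w, z in zip(run_w, run_z)) / sum(run_w)
        r = min(range(len(run_z)), key=lambda k: abs(run_z[k] - cm))
        out_z.append(run_z[r])
        out_w.append(sum(run_w))    # cumulative weight (steps 3-4)
        i = j + 1                   # remaining breakpoints removed (step 5)
    return out_z, out_w
```

The fixed-basis case differs only in which breakpoints are retained; it is omitted here for brevity.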
As previously mentioned, one primary advantage of computing a sparse approximation of an OCT retinal image is the reduction of the dimension of the search space used in subsequent boundary searching. Specifically, the number of rows in the images tested in this thesis is 768. After running SBL, approximately 20 breakpoints are identified in each column. The number of unique intensity values is thus reduced by a factor of about 38 in our compact representation. This compact representation could reduce the computer memory consumption of other graph-based segmentation algorithms.

Figure 3.6: Sparse approximation of a B-scan.

3.2.2.4 Identify New Boundaries

The purpose of this module is to group the intra-column piecewise constant segments obtained by SBL into layers that extend across all of the image columns and then extract the retinal layer boundaries from the groupings. First, some definitions and notation are provided to aid the understanding of the rest of this section. Then, we formulate the boundary identification problem using a greedy approach: we assume that at column x all of the segments in columns 1 thru x-1 have been correctly grouped into layers, and then try to find the best way to extend the layers with the segments in column x. In order to find the best solution, we construct a graph that describes the different ways that the layers can be extended by the segments and apply a shortest path algorithm to find the optimal extension. One advantage of our graph-based formalism over most others is that a single graph cut can segment multiple retinal layers, which means that our graph search has built-in boundary interaction constraints that are typically implemented separately. From the shortest path graph cut, we then compute the layer groupings and subsequently extract the boundaries between the retinal layers. As a final step in identifying new boundaries, a smoothness test is applied to each boundary and those that do not pass are discarded.
Definitions and Notation

The sparse approximation of each column x can be compactly represented by a set of N_S^x segments S_i^x, each associated with a mean intensity value M(S_i^x), position P(S_i^x), width W(S_i^x) and gradient or edge height E(S_i^x). Note that the edge height can be positive or negative and is defined as the height of the discontinuity at a particular position. The superscript x identifies the column in which the segment resides and will be used only when the ambiguity may cause confusion. The position P(S_i) is defined as the location of the top-most pixel in the segment and the edge height E(S_i) is defined as the height of the jump at the first pixel in the segment. These conventions are illustrated in Figure 3.7.

Figure 3.7: Definitions and notation for segments and related quantities. The SBL approximation resulted in 4 segments, S_1 through S_4, with associated mean value M, position P, front edge height E and width W.

A layer is defined as a set of similar segments grouped within and across columns. Within a single column x, a set of adjacent segments S_{i:j}^x can be assigned to a layer L_k^x. A layer L_k^x has an associated mean value, position, width and edge height that are determined by its constituent segments. For example, the width of a layer in a particular column equals the sum of the widths of the segments that reside in that layer and column. Formally, if adjacent segments i thru j, denoted by S_{i:j}^x, are assigned to layer k, then these quantities can be written as:

M(L_k^x) = M(S_{i:j}^x) = sum_{p=i}^{j} M(S_p^x) W(S_p^x) / sum_{p=i}^{j} W(S_p^x),
P(L_k^x) = P(S_{i:j}^x) = P(S_i^x),
W(L_k^x) = W(S_{i:j}^x) = sum_{p=i}^{j} W(S_p^x),
E(L_k^x) = E(S_{i:j}^x) = E(S_i^x).  (3.5)

Again, the superscript x identifies the column in which the segment/layer resides and will be used only when the ambiguity may cause confusion. These conventions are illustrated in Figure 3.8.

Figure 3.8: Definitions of layer quantities.
The 5 segments in blue compose the 3 layers in red. A layer's mean, edge, width and position are determined by the segments assigned to that layer.

A layer extends across multiple columns. Let L^_k^x denote the kth layer grouped across columns 1 thru x-1, or L^_k^x = union_{i=1}^{x-1} L_k^i. When viewed across columns, the mean, position, width and edge height of L^_k^x are defined by a Gaussian-weighted average of all of the corresponding values in the previous columns. Letting Y(L^_k^x) denote any of the quantities M(L^_k^x), P(L^_k^x), W(L^_k^x) or E(L^_k^x), the aggregated mean, position, width and edge height are formally defined by:

Y(L^_k^x) = sum_{i=1}^{x-1} exp( -(i - (x-1))^2 / (2 sigma_G^2) ) Y(L_k^i),  (3.6)

where sigma_G is the Gaussian standard deviation and is set to 8.5 pixels, which corresponds to a full-width at half-maximum of 20 pixels.

Two or more adjacent layers may also be grouped within a column, similar to the way segments are grouped within a column. In this case the layer properties, namely mean, position, width and edge height, are defined similarly to Equation 3.5. Because these quantities will be referenced later on, these definitions are repeated for the aggregated layers below. Analogous equations can be given for the non-aggregated layers by replacing L^ with L.

M(L^_{i:j}^x) = sum_{p=i}^{j} M(L^_p^x) W(L^_p^x) / sum_{p=i}^{j} W(L^_p^x),
P(L^_{i:j}^x) = P(L^_i^x),
W(L^_{i:j}^x) = sum_{p=i}^{j} W(L^_p^x),
E(L^_{i:j}^x) = E(L^_i^x).  (3.7)

A summary of the definitions and notation for segments and layers is provided in Table 3.1.

Formulation

A greedy approach is taken to build up the layers by connecting segments across columns. It is assumed that at column x all segments in the previous columns have been grouped into layers. Then, the goal is to find the best way to extend the N_L^x layers found in columns 1 thru x-1 with the N_S^x segments in column x by assigning each segment to one of the layers.
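Equations 3.5 and 3.6 reduce to a few lines of arithmetic. The sketch below is illustrative rather than the thesis code: segments are dicts with keys 'M', 'P', 'W', 'E', and, since the text describes Equation 3.6 as a Gaussian weighted average, the weights here are normalized to sum to one:

```python
import math

def layer_from_segments(segs):
    """Eq. (3.5): layer properties from its constituent segments.

    The mean is width-weighted; position and edge come from the first
    (top-most) segment; the width is the sum of segment widths."""
    W = sum(s['W'] for s in segs)
    M = sum(s['M'] * s['W'] for s in segs) / W
    return {'M': M, 'P': segs[0]['P'], 'W': W, 'E': segs[0]['E']}

def gaussian_history(values, sigma=8.5):
    """Eq. (3.6): aggregate a per-column property over columns 1..x-1.

    Gaussian weights are centred on the most recent column (i = x-1),
    so recent columns dominate; normalization by the weight sum is our
    reading of 'Gaussian weighted average'."""
    n = len(values)
    w = [math.exp(-((i - (n - 1)) ** 2) / (2.0 * sigma ** 2)) for i in range(n)]
    return sum(wi * v for wi, v in zip(w, values)) / sum(w)
```

With sigma = 8.5 pixels (FWHM of 20 pixels, since FWHM is about 2.355 sigma), roughly the last twenty columns contribute meaningfully to the aggregate.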
Notation    Description
S_i^x       ith segment in column x
S_{i:j}^x   Group of adjacent segments i thru j in column x
N_S^x       Number of segments in column x
L_i^x       ith layer in column x
L_{i:j}^x   Group of adjacent layers i thru j in column x
L^_i^x      Union of ith layer across columns 1 thru x-1
L^_{i:j}^x  Union of group of adjacent layers across columns 1 thru x-1
N_L^x       Number of layers in column x-1
M(.)        Mean intensity of a segment(s)/layer(s)
P(.)        Position of beginning/top of segment(s)/layer(s)
W(.)        Width of segment(s)/layer(s)
E(.)        Front edge height of segment(s)/layer(s)

Table 3.1: Summary of segment and layer definitions and associated notation. Segments are obtained from the sparse approximation of the image and are confined to a single column. A layer is defined as a set of similar segments that are grouped within and across columns. Note that the mean, position, width and edge height of the union of layers across columns are defined using the Gaussian-weighted average in Equation 3.6.

This process is iterated column by column until all segments have been grouped into layers. The problem formulation is illustrated in Figure 3.9.

Figure 3.9: Layer building formulation.

In order to solve this assignment problem, a graph-based approach [35] is used. Graph-based methods require specification of (a) nodes, (b) transitions/edges, (c) transition/edge costs and (d) a graph search algorithm. The node and transition structure can best be understood using the matrix representation shown in Figure 3.10. It is assumed that the segment and layer indices are ordered from the top to the bottom of the image. Then, by construction, the layers are extended sequentially, beginning at layer 1 and ending with layer N_L.

Figure 3.10: Illustration of matrix structure of the graph.

For this graph problem, two types of nodes are defined: (1) beginning node b_ij, which indicates that segment i begins layer j, and (2) end node e_ij, which indicates that segment i ends layer j.
Traversing through node b_ij indicates that the top-most pixel of layer j in the previous columns coincides with the top-most pixel of segment i in the current column x. Similarly, traversing through node e_ij indicates that the bottom-most pixel of layer j in the previous columns coincides with the bottom-most pixel of segment i in the current column x.

Two hard constraints impose restrictions on the reachability of nodes to ensure that the assignment of segments to layers is physically sensible. If either constraint is satisfied, then a node is considered reachable. The first constraint allows a segment to extend a layer only if some part of it lies adjacent to some part of the layer. Formally, segment i can be grouped in layer k if

P(S_i) <= P(L^_k) + W(L^_k) - 1,
P(L^_k) <= P(S_i) + W(S_i) - 1  (3.8)

are both satisfied. Notice that the aggregated layer quantities defined in Equation 3.6 are used to impose this constraint. The second constraint can be broken into two parts: one for a beginning node and one for an end node. Specifically, a beginning node can begin a layer only if the first pixel in the corresponding segment is close to the first pixel in the layer. An end node can end a layer only if the last pixel in the corresponding segment is close to the last pixel in the layer. How close is defined by a parameter called maxJump. Let P_E(S_i) = P(S_i) + W(S_i) - 1 denote the end position of a segment and let P_E(L^_k) = P(L^_k) + W(L^_k) - 1 denote the end position of a layer. Formally, segment i can be grouped in layer k if

|P(S_i) - P(L^_k)| <= maxJump, for a beginning node,
|P_E(S_i) - P_E(L^_k)| <= maxJump, for an end node.  (3.9)

A node is considered reachable if it satisfies either the constraint in Equation 3.8 or the constraint in Equation 3.9. An example of these hard reachability constraints is shown in Figure 3.11. In order to assign segments i thru j to layer k, segment i must begin layer k and segment j must end layer k.
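The two reachability tests translate directly from Equations 3.8 and 3.9. In this illustrative sketch a segment or layer is reduced to its (position, width) pair, and the function names are ours:

```python
def overlaps(seg_pos, seg_w, lay_pos, lay_w):
    """Eq. (3.8): the segment and the layer share at least one row."""
    return (seg_pos <= lay_pos + lay_w - 1) and (lay_pos <= seg_pos + seg_w - 1)

def within_jump(seg_pos, seg_w, lay_pos, lay_w, max_jump, end_node=False):
    """Eq. (3.9): start (beginning node) or end (end node) positions
    differ by at most max_jump rows."""
    if end_node:
        return abs((seg_pos + seg_w - 1) - (lay_pos + lay_w - 1)) <= max_jump
    return abs(seg_pos - lay_pos) <= max_jump

def reachable(seg_pos, seg_w, lay_pos, lay_w, max_jump, end_node=False):
    """A node is reachable if either hard constraint is satisfied."""
    return (overlaps(seg_pos, seg_w, lay_pos, lay_w)
            or within_jump(seg_pos, seg_w, lay_pos, lay_w, max_jump, end_node))
```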
In terms of the node structure, this requires a beginning-to-end transition from node b_ik to node e_jk, denoted by b_ik -> e_jk. Note that in this formulation no segments are skipped, so this particular transition implies that all segments {i, i+1, ..., j} are assigned to layer k. End-to-beginning transitions, which do not have an analogous relation to the segment grouping but do enforce continuous structure, are also utilized. An additional beginning-to-end merge transition is also defined to allow for layer merging. Though introduced here, the significance of layer merging is made clear in the Initialization section below. Physically, a merge transition means that two or more layers that have previously been distinct in columns 1 thru x-1 will now be treated as a single layer. This can be represented by a transition from node b_ik to node e_jl for l > k.

Figure 3.11: Node reachability constraint.

Hard constraints are imposed on the allowability of different transition types. For example, beginning-to-beginning and end-to-end transitions are never allowed. Also, graph traversal always proceeds from left to right in Figure 3.12, since layer k is extended before layer k+l. Additional beginning-to-end and end-to-beginning constraints are defined to ensure a physically sensible solution so that (1) the boundaries between layers do not cross, (2) all segments are assigned to layers and (3) positive edges connect only to positive edges, and similarly for negative edges. Additionally, the maximum number of layers that can be merged in a single merge transition is set to 3 to keep the number of edges in the graph small. These constraints on the allowable transitions can be written formally as:

b_ik -> e_jl allowable only if j >= i and k <= l <= k + 2,
e_ik -> b_jl allowable only if j = i + 1 and l = k + 1 and sign(E(S_j)) = sign(E(L_l)).  (3.10)

Some allowable transitions and transitions that are not allowed are shown in Figure 3.12.

Figure 3.12: Transition allowability constraint.
The '+++' notation denotes an allowable merge transition. The 'xxx' notation denotes a transition that is not allowable because the sign of the edge at the beginning of segment 2 is not equal to the sign of the edge at the beginning of layer 3. Note that this violation cannot be seen in the figure.

Having described the nodes and transitions, transition costs are now defined. An edge from node n_i to node n_j in the graph exists only if node n_j is reachable, as defined by Equations 3.8 and 3.9, and the transition between nodes n_i and n_j is allowable, as defined in Equation 3.10. The cost of a graph transition is a measure of the similarity between a layer or layers and a set of segments. Since a beginning-to-end transition describes a grouping of segments into layers, the associated costs measure the similarity between the means and widths of the segments and layers. The beginning-to-end transition costs for the mean and width are defined by:

C_M(b_ik -> e_jl) = alpha_{M,(j,l)} |M(S_{i:j}) - M(L^_{k:l})|,
C_W(b_ik -> e_jl) = alpha_{W,(j,l)} |W(S_{i:j}) - W(L^_{k:l})|,  (3.11)

for allowable transitions to reachable nodes, and are infinite (i.e., no edge) otherwise. The quantities alpha are normalizing constants for each layer-k-to-layer-l transition and are defined by the inverse of the maximum value over all allowable transitions and segment pairs i and j. By normalizing in this way, the costs always lie in the unit interval. End-to-beginning transitions, on the other hand, describe a transition only to the first segments in the layer, and so the associated cost measures the similarity between the front edge heights of the segments and layers. The end-to-beginning transition cost for the edge height is defined by:

C_E(e_ik -> b_jl) = alpha_{E,(j,l)} |E(S_{i:j}) - E(L^_{k:l})|,  (3.12)

for allowable transitions to reachable nodes, and is infinite (i.e., no edge) otherwise. A merge penalty P_M is also defined as an additional cost for beginning-to-end transitions.
The purpose of this penalty can be understood by examining the merge transition in Figure 3.12. Beginning-to-end transitions with no merging move between adjacent columns in the graph matrix. On the other hand, a merge transition skips two or more columns in that matrix. This means that if a merge occurs, the shortest path across the graph will have fewer nodes than if there is no merge. Since the cost is nonnegative at every edge, this leads to a bias towards merging during the graph traversal. In order to combat this bias, a merge penalty is imposed that scales with the number of layers merged. For example, if a merge between two layers is penalized by P_M, then a merge between three layers is penalized by 2P_M. Recall from Equation 3.10 that up to 3 layers can merge at a time, so 2P_M is the maximum merge penalty imposed. The total costs of beginning-to-end and end-to-beginning transitions are defined by the sum of the respective terms from Equations 3.11 and 3.12. These costs are defined by:

C(b_ik -> e_jl) = C_M(b_ik -> e_jl) + C_W(b_ik -> e_jl) + (l - k) P_M  (3.13)

and

C(e_ik -> b_jl) = C_E(e_ik -> b_jl).  (3.14)

With all nodes and costs defined, Dijkstra's algorithm [35] is employed to find the shortest path between two nodes in the graph. The source node is forced to be node b_11 and the destination node is forced to be node e_{N_S, N_L}. These restrictions mean that the first segment must begin the first layer and the last segment must end the last layer, both of which are natural physical restrictions. Figure 3.13 shows a sample traversal of the graph. The algorithm is run sequentially for each column from the left to the right of the image. Upon completion, all of the segments have been grouped into layers that extend across all of the image columns.

Figure 3.13: Sample graph traversal and corresponding layer extension. In this traversal, no layers are merged and the 5 segments extend the 3 layers.
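The search over the node matrix is ordinary Dijkstra. A generic sketch over an explicit adjacency list is shown below; the node labels mimic the b_ik/e_jl naming, but the graph itself is a toy example, not the thesis's cost structure:

```python
import heapq

def dijkstra(edges, source, target):
    """Shortest path from source to target.

    edges: {node: [(neighbor, nonnegative cost), ...]}.
    Returns (path as a list of nodes, total cost)."""
    dist = {source: 0.0}
    prev = {}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == target:
            break
        if d > dist.get(u, float('inf')):
            continue                     # stale queue entry
        for v, c in edges.get(u, []):
            nd = d + c
            if nd < dist.get(v, float('inf')):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path = [target]
    while path[-1] != source:
        path.append(prev[path[-1]])      # KeyError if target unreachable
    return path[::-1], dist[target]
```

In the thesis's setting the source would be b_11 and the target e_{N_S, N_L}, with edges present only for transitions that are both reachable (Eqs. 3.8-3.9) and allowable (Eq. 3.10), weighted by Equations 3.13 and 3.14.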
Initialization

Using a greedy approach, it is assumed that at each column all of the segments in previous columns have been grouped into layers correctly. Thus, the remaining issue to handle is initialization of the segments in the first column. Rather than trying to find the best initial assignment, each segment is assigned to its own distinct layer in the first column. The merge transition defined above allows some layers to merge as the algorithm progresses across columns. In this way, the onus of initializing the correct layer grouping is mitigated. Note that if layers i and j merge at column x, then all of the layer properties M, P, W and E are combined analogously to Equation 3.5 for each column 1 thru x. An example of merging layers is shown in Figure 3.14.

Figure 3.14: Example of layer merging. When layers merge, the mean, position, width and edge are combined analogously to Equation 3.5.

Layer Building

After the segments in the first column are initialized to lie in their own distinct layers, the graphs are built and traversed on each column 2 thru X sequentially until all of the segments are grouped into layers. Figure 3.15 shows the results of the layer building process.

Figure 3.15: Results of layer building. Left panel: color coded by mean value of a layer in each column. Right panel: labeled image where the identified layers are color coded.

Boundary Extraction and Upsampling

After the layer building process completes across all of the columns, all of the segments are grouped into layers. The boundaries between the layers are then extracted by computing, for each column, the first pixel in each layer. Then, these boundaries are upsampled using nearest neighbor interpolation to compute boundaries that extend across the original (full-width) image. Results of the first iteration are shown in Figure 3.16.

Boundary Pruning

Not all of the newly found boundaries are real boundaries between retinal layers.
Since the layer building module uses locally based cost functions, big-picture inaccuracies might result. For example, a candidate boundary may not be smooth enough to constitute a physically sensible boundary. In order to eliminate such false positives, a smoothness metric is computed for each potential boundary and compared to a threshold, so that any boundary that is not sufficiently smooth is pruned.

Figure 3.16: Extracted and upsampled boundaries. Upsampling is performed using nearest neighbor interpolation.

In order to compute its smoothness, a boundary B_j = [z_j^1, z_j^2, ..., z_j^X] is first low-pass filtered using a second-order Butterworth filter with normalized cutoff frequency ω = 0.005. Then, the mean-squared error (MSE) between the original boundary and its smoothed version is computed. If the MSE exceeds the threshold smThr, the boundary is considered a false positive and is removed from further analysis. Figure 3.17 illustrates an example of boundary pruning.

Figure 3.17: Sample boundary pruning. The red boundaries are retained while the green boundary is pruned because of a lack of smoothness.

3.2.2.5 Refine Boundaries

Due to the high level of smoothing implemented prior to applying SBL, there may be slight inaccuracies in the retained retinal layer boundaries. In order to improve the accuracy of boundary detection, a boundary refinement procedure is employed, using a fast graph-based approach based on the algorithm in [26]. The algorithm is described in detail in the next paragraph. Briefly, the image is represented as a graph of nodes, where each node corresponds to a pixel. Pixels are connected to adjacent pixels with edges, and Dijkstra's algorithm is employed to identify a single shortest-path graph cut that separates the top and bottom of the image. One especially advantageous aspect of the algorithm is that it provides automatic endpoint initialization. The retained boundaries are refined sequentially and independently.
For each boundary that survives the boundary pruning stage, a small region of 3 pixels is identified and a sub-image with the boundary across its center is computed. The boundary extends across the width of the sub-image, and left-to-right is considered the forward direction. Each pixel in this sub-image defines a node in a new graph. As a modification to the implementation in [26], each node is connected to only 4 of its neighbors, at 45°, 0°, −45° and −90°, so that each transition has either a downward or forward component. The cost of each transition is based on a measure of the similarity of the vertical gradients at the two nodes. Formally, the transition cost c_ij for moving from node i to node j is defined by

c_ij = 2 − (∇_y I_i + ∇_y I_j) + c_min,    (3.15)

where ∇_y denotes the vertical gradient operator, I_i denotes the intensity at the pixel in the image that corresponds to the ith node in the graph, and c_min = 10^−5 is the minimum cost for a transition between two nodes. Note that the gradients are normalized to lie in the interval [0, 1], so that the cost function encourages transitions between two pixels that have large and similar vertical gradients. As mentioned previously, the algorithm provides automatic endpoint initialization so that the beginning and end of the cut do not influence the boundary refinement. Specifically, an additional column of nodes is added to each side of the image and connected to adjacent nodes as before. In this case, however, the cost of all allowable transitions to the 4 adjacent nodes is set to the minimum cost c_min, so that the cut is able to traverse these columns with minimal resistance. The graph start node can be assigned as any node in the newly added column to the left of the image, and the end node can be assigned as any node in the newly added column to the right of the image. Once the image is segmented, the two additional columns can be removed, leaving an accurate cut without endpoint initialization error.
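The cost in Equation 3.15 can be sketched numerically. This is a simplified illustration, not the dissertation's implementation: the vertical gradient is computed with NumPy's finite differences and normalized globally, and only the per-edge cost evaluation is shown (the shortest-path traversal itself is omitted).

```python
import numpy as np

c_min = 1e-5  # minimum transition cost, as in Equation 3.15

def normalized_vertical_gradient(image):
    # Vertical (axial) finite-difference gradient, normalized to [0, 1].
    gy = np.gradient(image.astype(float), axis=0)
    return (gy - gy.min()) / (gy.max() - gy.min() + 1e-12)

def edge_cost(gy, i, j):
    # Cost of a transition between nodes i=(row, col) and j=(row, col),
    # following Equation 3.15: low when both gradients are large.
    return 2.0 - (gy[i] + gy[j]) + c_min

# Toy image: a bright lower half creates one strong horizontal edge.
img = np.zeros((6, 5))
img[3:, :] = 10.0
gy = normalized_vertical_gradient(img)
# A transition straddling the edge (rows 2 and 3) is nearly free...
cheap = edge_cost(gy, (2, 1), (3, 2))
# ...while one in a flat region costs close to the maximum of 2.
flat = edge_cost(gy, (0, 0), (1, 1))
```

Because both endpoints of the cheap transition sit on the strong gradient, its cost collapses to roughly c_min, which is exactly what pulls the graph cut onto the layer boundary.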
An example graph traversal is shown in Figure 3.18.

Figure 3.18: Sample graph cut using the algorithm in [26]. By adding the additional columns to the left and right of the image, through which the graph cut can move freely (at a cost c_min), the endpoint initialization is automated.

The boundaries are sequentially refined using the described algorithm. The results of this refinement are shown in Figure 3.19.

Figure 3.19: Refined boundaries. This result is shown in color so that the resulting refinement is clearly demonstrated.

3.2.2.6 Limit Search Region or Return

If no new boundaries are retained after boundary pruning, then the segmentation algorithm stops and returns the boundaries identified during the previous algorithm iterations, if any. If new boundaries are identified, then they are grouped with the previously identified boundaries and sorted from the top to the bottom of the image. Then, for each pair of adjacent boundaries B_j and B_{j+1}, the search region is limited by flattening the image to B_j. Note that in this flattened image, there are some pixels that lie below boundary B_{j+1}. Recall that in each column x, B_j^x and B_{j+1}^x are in the fixed basis and thus discontinuities must be found by SBL at these locations. On the other hand, no discontinuity should be found at any pixel z^x > B_{j+1}^x, since such a pixel does not lie in between the two boundaries B_j and B_{j+1}. So, all of the basis vectors with breakpoints below B_{j+1}^x are removed from the basis. An example of the search region limitation and reduced basis is shown in Figure 3.20.

Figure 3.20: Limited search region between adjacent boundaries. Top left panel: adjacent boundaries identified in red. Top right panel: image flattened to the first boundary, which lies on a horizontal line in the flattened image. The jagged red boundary is the second boundary in the left panel. The green line is an example A-scan.
Bottom left panel: the sample A-line from the top right panel is shown in black and the resulting SBL approximation is shown in red. The fixed basis must be retained by SBL and includes the top and bottom boundaries from the top panels. The region labeled "removed from basis" contains the basis vectors that lie underneath the last fixed basis vectors and are removed prior to the SBL run. Bottom right panel: resulting segmentation after running SBL and new boundary identification. Notice that a boundary (green to red) that was missed in the first algorithm run is found after reducing the search region.

The search region limitation and subsequent rerun of the entire segmentation algorithm is repeated for each pair of adjacent boundaries. This process repeats until no new boundaries are found. All identified boundaries are returned upon algorithm completion.

3.2.2.7 Summary of Parameters

The segmentation algorithm employs a number of parameters, which are summarized in Table 3.2.

Parameter     Value   Description
SBLIter       30      Number of iterations of the SBL algorithm on the first search region limitation
P_M (2P_M)    4       Penalty for merging two (three) layers
smThr         3.5     Smoothness threshold used for boundary pruning
mnBPDst       4       Backwards elimination parameter used to combine close adjacent segments with the same sign edge
maxJump       4       Reachability constraint parameter describing the maximum allowed jump of a boundary in pixels
ω             0.005   Cutoff frequency for the Butterworth filter
dsFact        20      Downsample factor
filtSz        200     Transverse length of the Gaussian filter used to smooth the image

Table 3.2: Parameters used in the segmentation algorithm

3.2.3 Summary

We demonstrated a novel retinal layer segmentation algorithm for OCT retinal images using sparsity constraints. The algorithm relaxes the anatomical assumptions made by most other segmentation algorithms about the number of retinal layers and their relative positions.
In order to provide a framework for objective quantitative evaluation of our algorithm, a method for generating synthetic retinal images with known segmentations is presented next.

3.3 Synthetic Retinal Image Generation

3.3.1 Introduction

Quantitative analysis of structural OCT images has become increasingly important for the study of various ocular pathologies [21, 36]. For example, retinal layer thinning is an indicator of glaucoma [5]. Measurements of retinal layer thickness can be made after the layers are segmented using the magnitude of an OCT image. However, quantitative evaluation of the accuracy of segmentation algorithms usually relies on subjective segmentations made manually [26]. For objective quantitative evaluation, the real, or ground truth, boundaries must be known so that an objective error metric, such as the mean-squared error between the ground truth boundaries and the boundaries identified by a segmentation algorithm, can be defined. The ability to generate synthetic retinal images with known ground truth boundaries would thus be useful for the objective quantitative evaluation of segmentation algorithms. In this section we present a novel method for generating a synthetic OCT retinal image that mimics an in vivo OCT retinal image. The algorithm uses a segmented (manually or automatically) retinal image as input and produces different instantiations of the image at a prespecified noise level. Although the algorithm relies on an input segmented image, the segmentation itself need not be exact in the sense that the identified boundaries precisely equal the retinal boundaries. However, the synthetic images are generated so that their boundaries can be used as the ground truth solution for subsequent segmentation. Our technique improves upon the method presented in [37] in two ways. First, the authors of [37] compute a smooth approximation to the input retinal image using parametrized surface fitting for the vitreous, retina and choroid separately.
The implementation of this technique relies on the ability to capture the retinal structure in parametric form. In cases where the retinal structure is altered, for example due to the presence of cysts, the parametrized functions no longer model the structure accurately. Conversely, we compute a nonparametric smooth approximation to the input image that can capture unexpected structures as well. Thus, our technique is more general and could be applied to generate retinal images in pathological cases. A second limitation of the work in [37] is the generation of spatially correlated speckle noise. Specifically, the authors in [37] generate additive Gaussian noise and then smooth it at different spatial scales in order to create the typical grainy appearance of speckle noise. Their method does not accurately reflect the image formation process and requires ten independent parameters to be tuned to generate speckle. On the other hand, the speckle noise pattern generated with our technique is dictated by the OCT imaging system parameters, as is the case in clinical OCT imaging. Furthermore, we are able to validate the axial and transverse speckle sizes of our speckle noise against clinical OCT images. Thus, our synthetic image generation technique more accurately mimics real OCT retinal images.

3.3.2 Model of OCT Retinal Image

Assuming that a sample consists of discrete point scatterers, the noiseless spectral-domain complex OCT signal emanating from a coherence volume centered at a point (x_0, 0, z_0) within the sample can be written as:

A ∝ Σ_{p ∈ CV_0} r_p exp(i 2k_0 (z_p − z_0)) G_p,    (3.16)

where p is the pth particle index, r_p is its reflectivity and (x_p, y_p, z_p) is its position in the sample, CV_0 denotes the coherence volume centered at (x_0, 0, z_0), k_0 = 2π/λ_0 is the source center wavenumber, and G_p is a Gaussian function defined by:

G_p = exp( −8 [ ((x_p − x_0)² + y_p²)/ω_0² + (z_p − z_0)²/ω_z² ] ),    (3.17)

where ω_0 and ω_z are the 1/e² transverse and axial resolutions, respectively. Assuming a locally homogeneous reflectivity so that r_p = R for all p ∈ CV_0, the local intensity variation is due to speckle only. In this case, Equation 3.16 can be rewritten as:

A ∝ R Σ_{p ∈ CV_0} exp(i 2k_0 (z_p − z_0)) G_p = RS,    (3.18)

where S models the speckle caused by coherent scattering due to the interference of backscattered waves emanating from within the coherence volume. The measured OCT signal magnitude can then be modeled as:

Î = |RS + N|,    (3.19)

where N models incoherent scattering, such as multiple scattering, scattering by out-of-focus light and subwavelength scattering, as well as system noise [24].

3.3.3 Procedure

The following sections describe methods for generating the synthetic R, S and N in Equation 3.19. First, we compute a smooth approximation to the input image using piecewise constant regions. Unlike the work in [37], we use a nonparametric smooth approximation that is capable of representing structures that are difficult to capture parametrically. Furthermore, the smooth approximation framework allows different levels of detail to be used to approximate the different retinal layers in the input image. Because we do not have a complete model of the image formation process for OCT retinal images, we leverage the data in the input segmented image to generate different instantiations of synthetic images. A scattered data interpolation technique is thus employed in order to compute a smooth approximation to the input image. Scattered data interpolation is commonly implemented by first computing a triangulation over the scattered data and then interpolating within each triangle. In our implementation, a random subset of the pixels in each retinal layer is first computed. Then, the pixels in each layer are mapped to a rectangular grid and a triangulation is computed over the random subset of pixels.
The mapping to a rectangular grid, although not required, guarantees that all triangles in the triangulation lie entirely in the rectangular grid, which simplifies the algorithm implementation. Next, the intensities of the pixels in each triangle are replaced with the respective root-mean-square value. This step preserves the root-mean-square intensity of the entire image. The rectangular region is then mapped back to the original image space. This process is repeated for each retinal layer until a smooth approximation to the entire input image is computed. The next step in our synthetic image generation method is the generation of the speckle noise component S. Our novel technique for speckle noise generation utilizes the simulation developed in Section 4.2 and accurately reflects the real speckle noise generation process. Next, the additive noise component N is generated using the fact that this noise process is known to be a zero-mean complex Gaussian process. R is then computed as a scaling factor applied to the smooth image approximation so as to preserve the mean squared value of the input image in the synthetic image. Finally, the components are combined using Equation 3.19 to produce the final synthetic magnitude image.

3.3.3.1 Smooth Approximation

In order to generate the local intensity in a synthetic image, a smooth approximation I_SM to the input segmented image is computed. This smooth approximation is used to model local reflectivity information. In order to maintain the layered structure present in the retina, the approximation is computed for each layer separately. Our smooth approximation also maintains the root-mean-square (RMS) intensity value over all of the retinal layers in the input image. A number of options exist for generating such a smooth approximation. As previously discussed, we employ a nonparametric technique using a piecewise constant representation to generate the smooth approximation for each layer.
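The per-layer sample-triangulate-fill pipeline can be sketched as follows. This is a simplified stand-in, not the dissertation's code: it operates on a layer that has already been mapped to a rectangle, uses SciPy's Delaunay triangulation and its find_simplex point-location routine in place of the three-inequality point-in-triangle test described later, and fills each triangle with the RMS intensity of its member pixels.

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)

def smooth_layer(layer, n_samples=20):
    # layer: 2-D intensity array for one layer, already mapped to a rectangle.
    rows, cols = layer.shape
    # Random subset of pixel locations plus the four corners, as in the text.
    pts = np.column_stack([rng.integers(0, rows, n_samples),
                           rng.integers(0, cols, n_samples)])
    corners = np.array([[0, 0], [0, cols - 1], [rows - 1, 0], [rows - 1, cols - 1]])
    pts = np.unique(np.vstack([pts, corners]), axis=0)
    tri = Delaunay(pts)
    # Locate every pixel in a triangle, then fill each triangle with the RMS
    # of the intensities of its member pixels.
    rr, cc = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    simplex = tri.find_simplex(np.column_stack([rr.ravel(), cc.ravel()]))
    flat = layer.ravel().astype(float)
    out = np.zeros_like(flat)
    for s in range(len(tri.simplices)):
        members = simplex == s
        if members.any():
            out[members] = np.sqrt(np.mean(flat[members] ** 2))
    return out.reshape(rows, cols)

layer = rng.uniform(1.0, 2.0, size=(40, 60))
approx = smooth_layer(layer)
```

Raising n_samples increases the level of detail in the approximation, which mirrors the text's use of many more samples for the rough choroid than for the smoother inner retinal layers.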
The process of computing the smooth piecewise constant approximation of the image is illustrated in Figure 3.21.

Figure 3.21: Flowchart for computing the smooth approximation of an OCT image.

Prior to computing the smooth approximation, the layer labeled as the choroid is split into two regions so that the high-signal choroid can be generated without the low-signal sub-choroid. An example of this is illustrated in Figure 3.22. Since all of the intensity variation in the vitreous and sub-choroid is due to additive noise, the smooth approximation is set to zero in these regions.

Figure 3.22: Sample segmented image used for synthetic image generation. The choroid is split into two regions to isolate the high-signal and low-signal regions. The artificial boundary is indicated in green.

As a first step, the indices of all of the pixels in a retinal layer are identified using the segmentation of the input image. Then, a subset of these points is chosen randomly according to a uniform distribution. The smooth approximation framework allows different levels of detail to be used to approximate the different retinal layers in the input image. For example, the choroid is a rough surface and thus requires a high level of detail to represent accurately. On the other hand, the retinal layers are relatively smooth surfaces and thus do not require as high a level of detail. The level of detail in the smooth approximation for each layer is determined by the number of points in the random subset of layer pixels. For the retinal layers (excluding the choroid), the number of pixels in the random samples ranges from 20 to 120, depending on the number of pixels in the layer. On the other hand, because the choroid requires a high level of detail to approximate realistically, the number of samples in its random subset is approximately 1300. These values were determined empirically and can be changed should a layer require more or less detail. The corners of each layer are added to the random sample.
These corners are defined by the top and bottom pixels in the first column of the layer and the top and bottom pixels in the last column of the layer. A layer with random samples is illustrated in Figure 3.23.

Figure 3.23: Sampling of pixels in a layer.

The next step is to construct and apply a mapping from the pixels in a particular layer in the image space to a rectangular grid. As previously mentioned, the purpose of applying this mapping is to guarantee that all triangles in the subsequent triangulation remain enclosed by the layer boundaries, which simplifies the implementation. Assume that the image has M rows and N columns. A boundary between each pair of adjacent layers contains a single pixel in each image column. Thus, the bth boundary B_b is defined by a set of points {r_{B_b}(c_n)}, n = 1, ..., N, where c_n is the nth column index and r is the particular row of the bth boundary in the nth column. Layer L is enclosed by boundaries B_L above and B_{L+1} below. By convention, boundary B_L is in the Lth layer. Then, the set of all pixels {r_m(c_n)} in a particular layer L is given by:

{ r_m(c_n) : r_{B_L}(c_n) ≤ r_m(c_n) < r_{B_{L+1}}(c_n) }, n = 1, ..., N.    (3.20)

Let r^L_min ≜ min_n r_{B_L}(c_n) be the highest pixel in the layer and r^L_max ≜ max_n r_{B_{L+1}}(c_n) − 1 be the lowest pixel in the layer. These notation conventions are illustrated in Figure 3.24.

Figure 3.24: Notation conventions for the mapping of a layer to a rectangular region. r_{B_L}(c_j) denotes the row of the jth column of the Lth boundary. The boundary has a single row in each column of the image. By convention, the Lth boundary lies in the Lth layer. r^L_min and r^L_max denote the top-most and bottom-most pixels in the Lth layer, respectively.

The mapping maps all of the points in a layer to a rectangular grid defined by [r^L_min, r^L_max] × [c_1, c_N].
Formally, the mapping is defined by

r_m(c_n) → ⌈ (r^L_max − r^L_min) · (r_m(c_n) − r_{B_L}(c_n)) / (r_{B_{L+1}}(c_n) − 1 − r_{B_L}(c_n)) + r^L_min ⌉,    (3.21)

where ⌈x⌉ denotes the ceiling function, defined as the smallest integer greater than or equal to x. An illustration of the mapping is shown in Figure 3.25. Note that after the mapping is applied, the top and bottom boundaries of the layer lie along horizontal lines. Also note that no pixels are mapped to some of the positions in the rectangular grid.

Figure 3.25: Mapping from image space to rectangular grid. The pixels in the top boundary of the layer are colored in red. The pixels in the bottom boundary of the layer are colored in purple. The white pixels in the image space denote pixels that are not in the layer. Note that after the mapping is applied, both the top and bottom boundaries lie along horizontal lines along the top and bottom of the rectangular grid. Note also that since the mapping is not a surjection, some pixels in the rectangular grid are not mapped to by image pixels. These pixels are denoted in the right panel with a black x.

After the mapping is applied to all of the pixels in the region, a Delaunay triangulation [38] is applied to the random samples. A common way to interpolate scattered data is to define a surface over a triangulation [39]. A Delaunay triangulation was chosen here because of its computational speed and, more importantly, the minimum angle result, which states that over all possible triangulations, the Delaunay triangulation maximizes the minimum angle in each triangle [40]. This property discourages long, flat triangles. The left panel in Figure 3.26 illustrates an example of a Delaunay triangulation defined over the samples. As a next step, each triangle is filled with the RMS of the intensities of all of the pixels inside the triangle. In order to identify whether a point is inside a given triangle, a technique based on [41] is used.
Briefly, three inequalities are established, each representing the condition that a point (x, y) lies on the same side of the infinite line along one of the triangle's three sides as the triangle does. If all of the inequalities hold true, then (x, y) must lie inside the triangle. Note that the vertices of the triangles always lie in more than one triangle. In order to preserve the RMS intensity value of the input image in the smooth approximation, and to avoid the additional scaling that would be needed if a vertex were used in the RMS computation for multiple triangles, each vertex is assigned to a single triangle on a first-come, first-served basis. After the pixels in the rectangular grid are filled, they are mapped back to the original image space. The entire process for computing a smooth approximation is repeated for each retinal layer (excluding the vitreous and sub-choroid). A sample smooth image is shown in Figure 3.27.

Figure 3.26: Example of Delaunay triangulation with filling. The triangulation is performed on a rectangular grid of pixels. The circles indicate pixels that are included in the random pixel sampling described previously. The squares indicate pixels that are in the layer but were not included in the sample. Left panel: original pixel intensities. Right panel: modified intensities computed by setting the intensity of each pixel equal to the root-mean-square (RMS) of the intensities of all of the pixels in a particular triangle.

Figure 3.27: Triangulated smooth image approximation.

3.3.3.2 Speckle Noise Generation

The OCT simulation that will be introduced and detailed in Section 4.2 for a vessel in tissue is adapted here for speckle noise generation. As previously mentioned, our speckle noise generation technique more accurately reflects the real OCT retinal image formation process than does the work presented in [37]. A few modifications to the retinal vessel simulation are noted.
Because the OCT signal magnitude, and not the phase information, is used for segmentation, only stationary scatterers were used, by setting the vessel diameter to zero. Also, because the OCT amplitudes from the vitreous and sub-choroid regions are dominated by noise, no speckle patterns were generated in these regions. A sample speckle pattern is shown in Figure 3.28.

Figure 3.28: Simulated speckle pattern in the retina. Left panel: speckle generated over the entire retinal region. Right panel: zoomed-in speckle pattern.

3.3.3.3 Additive Noise Model

A complex Gaussian noise process is used to model incoherent scattering and system noise in retinal OCT images [42]. The generation of this additive noise was performed efficiently in MATLAB. The real and imaginary parts of N are independent and identically distributed Gaussian random variables with zero mean and variance σ². The magnitude of this noise process is known to follow a Rayleigh distribution with parameter σ [42], with mean equal to σ√(π/2) and variance equal to ((4 − π)/2)σ². The second moment of the magnitude of the noise N is given by

E(|N|²) = 2σ².    (3.22)

In the vitreous, no oscillation in the signal intensity is expected and therefore all of its variation is due to noise. Thus, the statistics of the intensity distribution in the vitreous can be used to estimate the parameter σ. Specifically, given the set of V intensity samples in the vitreous {I_v}, the maximum-likelihood estimate of the parameter of the Rayleigh distribution is given by [43]

σ̂ = √( (1/(2V)) Σ_v I_v² ).    (3.23)

This estimate is used to generate the complex Gaussian process. In order to verify that the Rayleigh assumption is valid, the probability density function (pdf) of the intensities in the vitreous is estimated using a kernel smoothing technique [44] and compared to the Rayleigh distribution computed using the parameter σ̂. These two curves are illustrated in the right panel of Figure 3.29.

Figure 3.29: Validation of the Rayleigh model for OCT signal noise.
The intensity distribution in the vitreous (left panel) is estimated and compared to the Rayleigh probability density function (right panel) with parameter estimated by Equation 3.23.

For a quantitative evaluation of the noise model, a modified coefficient of variation of the root-mean-square deviation (CVRMSD) between the two curves in the right panel of Figure 3.29 is computed. Let P(I) denote the estimated pdf of the vitreous intensity I and let R(I) denote the Rayleigh pdf parameterized by σ̂. The modified CVRMSD is defined by:

CVRMSD = [ √( Σ_{v=1}^{V} (P(I_v) − R(I_v))² ) / ( (1/(2V)) Σ_{v=1}^{V} (P(I_v) + R(I_v)) ) ] × 100,    (3.24)

and is equal to 4.05%, which indicates a high degree of similarity between the model and the observed intensities.

3.3.3.4 Scaling the Smooth Approximation

In the previous sections, procedures for computing the smooth approximation, S and N in Equation 3.19 were presented. What remains is the choice of R, which is used to scale the smooth approximation. Taking the expected value of the magnitude squared of the phantom image Î in Equation 3.19 gives

E(|Î|²) = |R|² E(|S|²) + E(|N|²),    (3.25)

where we use the facts that the coherent speckle noise S and the incoherent scattering noise N are uncorrelated, and that E(N) = 0 since N is a zero-mean noise process. Using Equation 3.22, the term E(|N|²) is given by 2σ². The term E(|S|²) is difficult to calculate in closed form and so it is approximated from the data. The smooth approximation of the input image I was generated so that for each triangulated region, mean(I_SM²) = mean(I²). In order to maintain the similarity between Î and I, the value of R is chosen so that the mean squared value in each triangulated region is preserved, i.e. E(|Î|²) = mean(I²).
Thus, the value of R is chosen to satisfy

|R|² = ( mean(I_SM²) − E(|N|²) ) / E(|S|²).    (3.26)

A consequence of this choice is that retinal regions with high intensity will exhibit the speckle associated with S, whereas low-intensity regions will exhibit noise properties similar to N. After computing R by taking the square root of Equation 3.26, the phantom image Î is computed using Equation 3.19. A sample phantom image is illustrated in Figure 3.30.

Figure 3.30: Sample phantom image.

3.3.3.5 Signal-To-Noise Ratio

The signal-to-noise ratio (SNR) can be defined by

SNR ≜ 20 log₁₀( mean(|RS|) / std(|N|) ),    (3.27)

where the mean of |RS| is computed over the nerve fiber layer and the standard deviation of |N| is equal to σ√((4 − π)/2) by construction. The phantom image generation implementation allows the user to change the image SNR. Specifically, a change in SNR by K dB can be written as

SNR′ − SNR = 20 log₁₀( std(|N|) / std(|N′|) ) = 20 log₁₀( σ/σ′ ) = K,    (3.28)

so that the new noise standard deviation is given by

σ′ = σ · 10^(−K/20).    (3.29)

In order to implement this SNR change, the scaling parameter R is first computed using Equation 3.26 with the original noise level σ. Then, the new noise standard deviation computed using Equation 3.29 is used to generate the noise process N′, and Equation 3.19 is applied to compute the phantom image. An example phantom image with the SNR reduced from 15 dB to 2 dB is shown in Figure 3.31.

Figure 3.31: Sample phantom image with reduced signal-to-noise ratio (SNR). The SNR was decreased from 15 dB in the original image to 2 dB in this phantom image.
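Equations 3.23 and 3.29 are straightforward to sketch numerically. The snippet below is illustrative only (the variable names and the synthetic vitreous samples are assumptions): it estimates the Rayleigh parameter from magnitude samples and then rescales the noise standard deviation for a requested SNR change in dB, with a negative K corresponding to an SNR reduction.

```python
import numpy as np

rng = np.random.default_rng(1)

def rayleigh_sigma_mle(intensities):
    # Maximum-likelihood estimate of the Rayleigh parameter (Equation 3.23).
    I = np.asarray(intensities, dtype=float)
    return np.sqrt(np.sum(I ** 2) / (2 * I.size))

def rescale_sigma(sigma, delta_snr_db):
    # New noise standard deviation for an SNR change of K dB (Equation 3.29);
    # delta_snr_db < 0 means the SNR is reduced, so the noise grows.
    return sigma * 10 ** (-delta_snr_db / 20.0)

# Synthetic "vitreous" samples: the magnitude of zero-mean complex Gaussian
# noise with per-component standard deviation sigma is Rayleigh(sigma).
sigma_true = 2.0
n = rng.normal(0, sigma_true, 100_000) + 1j * rng.normal(0, sigma_true, 100_000)
sigma_hat = rayleigh_sigma_mle(np.abs(n))
sigma_low_snr = rescale_sigma(sigma_hat, -13.0)  # e.g. 15 dB down to 2 dB
```

With 10^5 samples the estimate lands very close to the true per-component standard deviation, and the 13 dB SNR reduction inflates it by a factor of 10^(13/20), roughly 4.5.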
Figure 3.32: Visual comparison between in vivo and synthetic OCT B-scans. Left panel: in vivo. Right panel: synthetic.

For a quantitative comparison, the distributions of intensities as well as the estimated speckle sizes are compared. As opposed to the intensity distribution in the vitreous, which is given by a simple Rayleigh density, the pdf of the intensities within the retinal layers is given by a complex expression with a Bessel function term [42]. Thus, instead of comparing the intensity distribution to a known parameterized distribution, as was done for the noise model validation, the pdfs of the ganglion cell complex intensity distributions of the in vivo and synthetic images are compared directly. The distributions are compared in Figure 3.33. For a quantitative comparison of the distributions, the CVRMSD is again computed as in Equation 3.24. In this case, the CVRMSD is equal to 4.62%, indicating a high degree of similarity between the in vivo and synthetic images.

Figure 3.33: Comparison of GCL intensity distributions between in vivo and synthetic OCT data. Left panel: GCL region. Right panel: probability density estimates of the intensity distributions.

For the comparison of the axial and transverse speckle sizes, normalized autocorrelations over a 50 μm by 250 μm window were computed in the ganglion cell layer [45]. Figure 3.34 shows a quantitative comparison. To quantify the speckle sizes, linear interpolation was used to compute the full-width at half-maximum (FWHM) widths in pixels. The computed axial speckle size was 1.98 pixels for the retinal image and 1.70 pixels for the synthetic image. The computed transverse speckle size was 6.30 pixels for the retinal image and 6.25 pixels for the synthetic image. The difference between the retinal and synthetic axial speckle sizes is likely due to dispersive broadening in the clinical measurements that is not accounted for in the synthetic data generation.
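The FWHM-with-linear-interpolation measurement can be sketched in one dimension. This is an illustrative stand-in (white noise smoothed by a boxcar plays the role of speckle; the actual analysis used a 2-D window in the ganglion cell layer):

```python
import numpy as np

def fwhm_of_autocorrelation(signal_1d):
    # Normalized autocorrelation of a zero-mean signal, then FWHM in pixels
    # via linear interpolation of the half-maximum crossing.
    x = signal_1d - signal_1d.mean()
    ac = np.correlate(x, x, mode="full")
    ac = ac[ac.size // 2:] / ac[ac.size // 2]   # one-sided, peak normalized to 1
    below = np.nonzero(ac < 0.5)[0]
    if below.size == 0:
        return float("nan")
    k = below[0]
    # Linearly interpolate the 0.5 crossing between samples k-1 and k.
    frac = (ac[k - 1] - 0.5) / (ac[k - 1] - ac[k])
    return 2.0 * ((k - 1) + frac)               # full width is twice the half width

rng = np.random.default_rng(2)
white = rng.normal(size=4096)
w_white = fwhm_of_autocorrelation(white)                       # near 1 pixel
speckle_like = np.convolve(white, np.ones(5) / 5, mode="same")
w_speckle = fwhm_of_autocorrelation(speckle_like)              # near 5 pixels
```

Smoothing with the length-5 boxcar gives a roughly triangular autocorrelation whose FWHM is close to the kernel length, which is the same mechanism by which the coherence volume sets the axial and transverse speckle sizes.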
Figure 3.34: Comparison between speckle sizes in in vivo and synthetic OCT data.

3.3.4 Summary

We introduced a novel method for generating synthetic OCT retinal B-scan intensities. Our technique improves on the work in [37] by (a) computing a nonparametric smooth approximation to the input image that can represent unexpected changes in retinal layer structure and (b) modeling the speckle noise formation process accurately. Because the ground truth boundaries in the synthetic images are known, this method can be used for the quantitative analysis of image processing algorithms applied to retinal images. We next test our novel segmentation algorithm using both in vivo retinal images and synthetic retinal images generated by our method.

3.4 Evaluation of Segmentation Algorithm

3.4.1 Introduction

In this section, we evaluate the performance of our novel retinal layer segmentation algorithm presented in Section 3.2. First, we show that our segmentation algorithm is able to find the nine retinal layers (including the choroid) using in vivo retinal images obtained in our clinic. The phantom image generation technique presented in Section 3.3 is then used to test the novel segmentation algorithm. By doing so, a quantitative evaluation of the algorithm's parameters, accuracy and noise sensitivity can be made automatically and objectively using the known ground truth boundaries. This objective evaluation is not possible using the in vivo retinal images because their ground truth segmentations are not known. The synthetic retinal images used to test our segmentation algorithm were generated from the in vivo retinal images that were segmented by our segmentation algorithm. The imaging parameter settings used to generate the synthetic data were chosen to mimic the clinical system used for in vivo imaging.
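Because the synthetic images carry known ground-truth boundaries, the objective error metric mentioned in Section 3.3.1 reduces to a one-line computation. The sketch below is hypothetical (the function name and the toy boundaries are invented): it scores an estimated boundary against the ground truth in squared pixels.

```python
import numpy as np

def boundary_mse(estimated, ground_truth):
    # Per-boundary mean-squared error, in squared pixels, between an estimated
    # boundary and the known ground-truth boundary of a synthetic image.
    e = np.asarray(estimated, dtype=float)
    g = np.asarray(ground_truth, dtype=float)
    return float(np.mean((e - g) ** 2))

truth = np.full(100, 40.0)                    # hypothetical flat ground truth
estimate = truth + np.tile([1.0, -1.0], 50)   # alternates one pixel above/below
err = boundary_mse(estimate, truth)
```

This kind of per-boundary score is what makes the parameter and noise-sensitivity sweeps in the following subsections automatic, with no manual grading of segmentations.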
3.4.2 Evaluation with In Vivo Retinal Images

Two normal eyes were scanned using the RT-Vue Fourier-domain OCT system (Optovue Inc., Fremont, CA), which has 5 µm depth resolution and can perform 26,000 axial scans per second. A double circular scan protocol [19] was used with diameters of 2.2 and 1.9 millimeters, respectively. The segmentation algorithm was applied to a single scan at each diameter of each eye. In total, four images were segmented. Figure 3.35 shows two sample segmented images, one from each eye. Both segmentations resulted in the identification of the eight retinal layers and the choroid. For the rest of this discussion, we will group the choroid with the retina and refer to nine retinal layers. The top image in the figure was segmented in two iterations of the algorithm, as the inner segment and outer segment (IS/OS) were merged on the first iteration. On the other hand, the bottom image was segmented in a single algorithm iteration. Note that although the segmentation algorithm does not rely on a prespecification of the number of retinal layer boundaries, it was able to find the nine layers in all four images. These results are in good agreement with other state-of-the-art segmentation algorithms that search for nine layer boundaries (e.g., [26]).

3.4.3 Parameter Determination using Synthetic Data

One of the primary uses of generating synthetic data is to evaluate the parameter choices for various image processing algorithms. Three parameters from the segmentation algorithm presented in Section 3.2 are sequentially examined in order to determine a preferred setting for each of them.

Figure 3.35: Segmentation of retinal images. Boundaries are labeled. NFL: nerve fiber layer, GCL: ganglion cell layer, IPL/OPL: inner/outer plexiform layer, INL/ONL: inner/outer nuclear layer, ELM: external limiting membrane, IS/OS: inner/outer segment, RPE: retinal pigment epithelium, CH: choroid.
A parameter setting is considered valid only if that setting can be used to segment the correct number of boundaries in the synthetic images. For the synthetic image used in these experiments, the number of required boundaries is nine. For parameter determination, the signal-to-noise ratio (SNR) of the generated image was set to the SNR of the in vivo image, which was approximately 15 dB. Note that the experiments presented in this section are not meant to establish the exact optimal parameters. Rather, our intention is to illustrate that the synthetic image generation technique can be used to validate and tune parameter choices guided by intuition about the physical parameter interpretations.

3.4.3.1 Smoothness Threshold

The hard smoothness threshold in the segmentation algorithm presented in Section 3.2 is used to prune boundaries that are not smooth enough. This threshold was implemented as an ad hoc pruning technique to remove false positives from the segmentation after each algorithm iteration. The smoothness threshold is compared to the mean-squared error (MSE) between an identified boundary and a smoothed version of it, and the boundary is pruned if the MSE exceeds the threshold. If this threshold is too low, then some real boundaries will be pruned, resulting in false negatives. On the other hand, if this threshold is too high, then the algorithm will retain false positives that it deems to be real boundaries even though they are not. Thus, we expect that the segmentation algorithm will effectively segment all nine layer boundaries only if the smoothness threshold is within a certain range. Physical intuition can help guide the selection of the smoothness threshold. The MSE is measured in squared pixels and represents the average squared deviation of an identified boundary from its smoothed version. Since the retinal layers are smooth continuous structures, we expect that a boundary between two layers does not deviate from its smoothed version too much.
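A minimal sketch of this pruning rule, in Python with a simple moving-average smoother standing in for the smoother of Section 3.2 (function name, window size and smoother choice are ours, not the dissertation's):

```python
import numpy as np

def prune_rough_boundaries(boundaries, threshold, win=11):
    """Keep only boundaries whose MSE (in squared pixels) against a
    moving-average-smoothed copy of themselves is below `threshold`."""
    kept = []
    kernel = np.ones(win) / win
    for b in boundaries:
        smoothed = np.convolve(b, kernel, mode="same")
        # exclude the convolution edge region when computing the MSE
        h = win // 2
        mse = np.mean((b[h:-h] - smoothed[h:-h]) ** 2)
        if mse <= threshold:
            kept.append(b)
    return kept
```

With a threshold of a few squared pixels, a slowly varying boundary survives while a boundary perturbed by strong noise is pruned.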
On the other hand, we do expect a nonzero MSE due to the presence of speckle and other imaging noise. Intuitively, then, we expect that the real boundaries between retinal layers will have a nonzero MSE that is no larger than approximately 10 squared pixels, which, using the fact that the axial distance between adjacent pixels is 2.5 µm, maps to 8 µm. This gives a rough order-of-magnitude estimate that helps guide parameter selection. In order to verify our physical intuition, we varied the threshold on this MSE from 1.5 to 5.5 in steps of 1. The experiment showed that 1.5 was too low and resulted in missing boundaries. Conversely, 5.5 was found to be too high and resulted in extra boundaries being segmented. When the smoothness constraint was set between 2.5 and 4.5, however, the number of identified boundaries equaled nine. Thus, the valid range for this smoothness threshold is between 2.5 and 4.5, which is in good agreement with our physical intuition. Furthermore, the percent difference over the valid range is 66%, which indicates that our segmentation algorithm is robust to the MSE threshold parameter. An intermediate value of 3.5 is used as the smoothness constraint from here on. Some results of this test are shown in Figure 3.36.

Figure 3.36: Effect of smoothness threshold on the segmentation algorithm. Left panel: Eight boundaries are found when the smoothness threshold is set to 1.5. The outer segment boundary is missing. Middle panel: The correct nine boundaries are found when the smoothness threshold is set to 3.5. The outer segment boundary is illustrated in green. Right panel: An additional false boundary is found when the smoothness threshold is set to 5.5. This false positive is shown in red.

3.4.3.2 Number of Sparse Bayesian Learning Algorithm Iterations

The sparse Bayesian learning (SBL) algorithm is used to compute a sparse approximation for each column in an image by inserting breakpoints at discontinuities between piecewise constant segments.
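As a toy illustration of this breakpoint representation (a first-difference sketch of the idea only; SBL itself, described in Section 3.2, solves a sparse Bayesian regression rather than thresholding differences):

```python
import numpy as np

# A noiseless image column modeled as piecewise constant layer intensities.
column = np.array([2.0] * 5 + [7.0] * 4 + [4.0] * 6)

# In this idealized setting, breakpoints sit exactly where adjacent
# piecewise constant segments meet, i.e. where the first difference is
# nonzero; speckle noise is what makes the real problem hard.
breakpoints = np.nonzero(np.diff(column))[0] + 1
print(breakpoints.tolist())  # -> [5, 9]
```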
As discussed in Section 3.2, the SBL algorithm runs for a pre-specified number of iterations. In general, the number of identified breakpoints increases with the number of SBL iterations up to a point. If the number of iterations is too low, then the segmentation algorithm may not find enough breakpoints, which can result in missing boundaries in the final segmentation. On the other hand, the computational time required to run the SBL algorithm scales with the number of iterations, and so if the number of iterations is too high then SBL will be very slow. Thus, we want to find the minimum number of iterations needed for the algorithm to locate the correct number of boundaries. Again, physical intuition can help guide the selection of the number of iterations for which to run the SBL algorithm. We know that there are at least nine identifiable retinal layer boundaries, which means that the minimum number of SBL iterations required to locate the boundaries is nine. Ideally, a single breakpoint would be identified for each retinal layer boundary. However, due to the presence of speckle noise, some breakpoints correspond to discontinuities in the speckle pattern and not to real retinal layer boundaries. Note that SBL is relatively insensitive to the additive Gaussian noise present in OCT retinal images (shown in Section 3.4.4) and thus the primary source of error is speckle noise. Furthermore, due to the high level of smoothing applied to an image prior to applying SBL, some retinal boundary edges stretch across multiple breakpoints. Thus, we expect to have to run the SBL algorithm for more than nine iterations to locate the breakpoints that correspond to all of the real retinal layer boundaries. In this experiment, we vary the number of SBL iterations in steps of 5. As can be seen in Figure 3.37, 25 SBL iterations are not enough to capture all of the retinal layer boundaries, as the inner nuclear layer boundary is not identified by our segmentation algorithm.
On the other hand, 30 iterations is high enough to locate all nine of the retinal layer boundaries. Increasing the number of iterations up to 80 does not result in a change in the number of identified retinal layer boundaries. Since the minimum number of iterations that is able to find all of the retinal layer boundaries is desired in order to minimize computation time, the number of iterations used from here on is 30. This experiment can also be used to determine a good rule of thumb for initial guesses of the best number of SBL iterations. Since we showed that locating the nine retinal layer boundaries requires at least 30 SBL iterations, we propose that a good initial guess of the number of SBL iterations is three times the number of desired breakpoints. The ideal number of SBL iterations, which should be chosen to balance the ability to find all of the desired breakpoints against the amount of computation time, should be close to this value.

Figure 3.37: Effect of the number of Sparse Bayesian Learning algorithm iterations on the segmentation algorithm. Left panel: When the number of iterations is 25, the inner nuclear layer boundary is missed. Right panel: When the number of iterations is 30 or above, all nine retinal boundaries are found. The inner nuclear layer boundary is illustrated in green.

3.4.3.3 Merge Penalty

After determining the valid ranges for the previous two hard constraints/thresholds, the final segmentation algorithm parameter that is tested is the merge penalty parameter. Recall that the merge penalty is used as a regularization term in the build new layers step of the segmentation algorithm in Section 3.2 in order to remove the bias towards merging layers. If this parameter is too low, then the segmentation algorithm may miss boundaries because the corresponding layers may merge too easily. On the other hand, if this parameter is too high, then layers that should merge may not.
As is the case for the number of initial SBL iterations, it is desirable to minimize the merge penalty because of computation time considerations. Specifically, the size of the graph used in the identify new layers module scales with the number of layers at the current column of the image. As the layers merge, the total number of layers is reduced and thus there are fewer nodes in the graph. If two or more layers are going to merge, it is preferable that they do so early in the algorithm. Unlike the smoothness threshold parameter and the number of SBL iterations, the merge penalty parameter does not have a simple physical interpretation. However, we can still estimate its rough order of magnitude by considering the cost functions in Equation 3.13 and Equation 3.14. Recall from Section 3.2.2.4 that the merging of two layers will cause the shortest path in a graph traversal to visit two fewer nodes compared to the case for which there is no merging of layers. In this case there will be one less beginning-node-to-end-node transition and one less end-node-to-beginning-node transition. Because the maximum beginning-node-to-end-node transition cost without a merge is 2 and the maximum end-node-to-beginning-node transition cost is 1, the reduction in cost for a single merge between two layers would be 3 if no regularization term were applied. Thus, a good initial guess for the merge penalty parameter is 3. In order to verify that our understanding of the merge penalty is correct, we vary the parameter between 3 and 6 in steps of 1. Figure 3.38 shows that fewer than the desired nine boundaries are found when this parameter is set to 3. On the other hand, for values between 4 and 6, all nine boundaries are found. Thus, the preferred parameter setting for the merge penalty is 4.

Figure 3.38: Effect of the merge penalty on the segmentation algorithm. Left panel: When the merge penalty is 3, the outer segment boundary is missed.
Right panel: When the merge penalty is 4 or above, all nine retinal layer boundaries are found. The outer segment boundary is shown in green.

3.4.4 Segmentation Algorithm Noise Sensitivity and Accuracy

Generating synthetic OCT images enables a quantitative evaluation of the segmentation algorithm introduced in Section 3.2. Using the parameter settings found in Section 3.4.3, the segmentation algorithm was run on a total of 30 synthetic images generated from two eyes. A single in vivo retinal image from each eye was used to generate 3 instances of noise-free images. To each of these instances, 5 different levels of additive Gaussian noise were applied, resulting in SNRs ranging from 2 to 22 dB. After running the segmentation algorithm, the number of boundaries identified was calculated. The algorithm was deemed to have failed if fewer than nine boundaries were found. For SNRs above 10 dB, the segmentation algorithm successfully identified all nine boundaries. For SNRs below 7.5 dB, the algorithm tends to miss the IS/OS boundary, which is very thin and has low contrast. These results indicate that the algorithm is robust to noise for SNRs above 10 dB, but breaks down for SNRs below 7.5 dB. Two examples of segmented images are shown in Figure 3.39.

Figure 3.39: Effect of the signal-to-noise ratio (SNR) on the segmentation algorithm. Left panel: All boundaries are correctly identified when SNR = 18.1 dB. Right panel: The outer segment boundary is not identified when SNR = 3.5 dB.

In order to quantitatively evaluate the accuracy of the segmentation algorithm, the identified boundaries were compared to the ground truth boundaries. The average mean-squared error (MSE) was used as a metric to determine the boundary accuracy. Note that only the cases for which all nine boundaries were identified are included in this quantitative evaluation. Figure 3.40 illustrates that the MSE lies between 1.4 and 2.1 squared pixels for all SNRs tested, indicating a high level of accuracy.
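The noise sweep above requires scaling complex additive Gaussian noise to a prescribed SNR. A hedged sketch of one way to do this (our helper; SNR is defined here as mean signal power over total noise power in dB, which may differ from the dissertation's exact definition):

```python
import numpy as np

def add_noise_at_snr(signal, snr_db, rng):
    """Add circular complex Gaussian noise so that
    10*log10(mean|signal|^2 / noise_power) equals snr_db."""
    sig_power = np.mean(np.abs(signal) ** 2)
    noise_power = sig_power / (10.0 ** (snr_db / 10.0))
    # split the noise power evenly between real and imaginary parts
    std = np.sqrt(noise_power / 2.0)
    noise = rng.normal(0, std, signal.shape) + 1j * rng.normal(0, std, signal.shape)
    return signal + noise
```

For a unit-power signal at 10 dB target SNR, the empirical noise power of the added noise should come out close to 0.1.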
One counterintuitive observation is that the MSE actually decreases as the SNR decreases from 22 dB down to 7.5 dB. This suggests that the segmentation algorithm is more sensitive to the speckle noise pattern than to the additive complex Gaussian noise, and that adding Gaussian noise breaks up the speckle pattern in a way that aids the segmentation algorithm. This also suggests that incorporating speckle statistics into the segmentation algorithm could increase its robustness.

Figure 3.40: Image regions used to compare distributions and speckle sizes of in vivo and synthetic OCT data.

3.5 Conclusions

In this chapter we studied the segmentation of OCT retinal images. In Section 3.2, we developed a segmentation algorithm for OCT retinal images. The two primary novelties of our algorithm are (a) the sparse transformation of the input image from the pixel space to a much smaller space while still capturing layer properties that are useful for image segmentation and (b) the relaxation of anatomical assumptions about the number of retinal layers as well as the relative positions of the layers made by most other segmentation algorithms. There are a number of limitations to our novel segmentation algorithm. First, building the layers using the graph-based approach relies on the assumption that the layers extend across the entire image. Blood vessel shadowing [46] adds artificial vertical discontinuities in the retinal layers. If the SBL algorithm misses a breakpoint in even one of the columns with shadowing, the corresponding layer will not be found in the build layers step. This problem could be alleviated by adding a column-look-ahead step, where the build layers step could be run for multiple column steps (e.g., 1, 2 and 3) and the extension with the lowest cost could be chosen. Another limitation is that the framework of the graph-based approach does not allow layers to have zero width in any column.
However, near the fovea, a few of the layers do in fact have zero width. Thus, the algorithm cannot successfully segment images that extend across the fovea centralis without modification. In order to help quantitatively evaluate our segmentation algorithm, we introduced a novel technique for generating synthetic OCT retinal B-scan intensities with known ground truth segmentations in Section 3.3. Our method improves upon an existing synthetic image generation method by (a) computing a nonparametric smooth approximation to the input image that can represent unexpected changes in retinal layer structure and (b) modeling the speckle noise formation process accurately. The smooth approximation and complex additive Gaussian noise models used for synthetic data generation were validated by showing that the intensity distributions in the vitreous and ganglion cell layer are in good agreement with the respective distributions in in vivo images. The synthetic speckle noise pattern was validated by showing that the axial and transverse speckle patterns are similar to the speckle patterns in in vivo retinal images. One limitation of our synthetic image generation method is that the synthetic images closely mirror the in vivo images from which they are generated. One way to generalize this technique would be to allow deviations from the input segmentation boundaries. Other modifications could be made to allow for simulated ocular pathologies such as cysts by adding regions of low intensity at random positions in the retina. In Section 3.4 we evaluated our segmentation algorithm. First, we used in vivo retinal images to show that our algorithm is capable of segmenting all nine retinal layers. Then, we described how our synthetic image generation technique can be combined with physical intuition to guide the choices of the parameters for our segmentation algorithm. This methodology can be applied to any segmentation algorithm to determine good parameter settings.
Finally, we tested the accuracy and noise sensitivity of our segmentation algorithm by using synthetic images generated with our technique. Specifically, we varied the signal-to-noise ratio (SNR) in the synthetic images and computed the mean-squared error (MSE) between the ground truth boundaries and those identified by our segmentation algorithm. We showed that for SNRs between 10 dB and 22 dB, the segmentation algorithm identifies all nine retinal layers, which indicates that our algorithm is fairly robust to noise. Furthermore, the average MSE between the known boundaries and the boundaries identified by our segmentation algorithm is less than 2 squared pixels, which indicates that our algorithm is able to segment the retinal layers with high accuracy.

Chapter 4 Blood Flow Velocity Measurement with Doppler OCT

4.1 Introduction

Accurate knowledge of ocular perfusion is important for the understanding of the pathophysiology of many retinal diseases. For example, both glaucoma and diabetic retinopathy are linked to decreases in retinal blood flow rates [47, 48]. Thus, there is considerable interest in methods for the assessment of retinal blood flow for the diagnosis of eye diseases. Fluorescein angiography, in which dye is injected and its velocity is determined using physical displacement, can be used for qualitative flow analysis [49], but quantitative measurements are not feasible with this modality. Doppler methods, based on the frequency shift imposed on an electromagnetic wave by moving particles, have been applied to retinal circulation analysis. Laser Doppler velocimetry can be used to measure blood flow in absolute units [50] by directly measuring the Doppler shift induced by moving particles. However, since it operates in the frequency domain, temporal (and consequently spatial, via Δx = vΔt) resolution is compromised and the speed distribution across a vessel cannot be calculated [12].
Furthermore, the inability to resolve flow at different depths reduces its utility, especially for in vivo measurements. Doppler OCT is able to compute the blood flow velocity distribution inside a vessel as well as depth-resolved images of retinal blood flow. En face images of vascular networks can also be computed. Doppler OCT methods are sensitive to the axial flow velocity component only. As noted in Section 2.3, blood flow in a retinal vessel can be computed with Doppler OCT only if the Doppler angle, or angle between the incident light beam and the blood vessel, is known. The standard method for computing blood flow velocities in retinal vessels is phase-resolved (PR) Doppler OCT. PR is able to calculate blood flow velocity by evaluating the phase differences between adjacent A-lines [13-17]. Ideally, the two A-lines used to compute the Doppler phase shifts in a blood vessel would be acquired at the same transverse beam position. In practice, however, the beam must be scanned transversally in order to reduce the total amount of time a patient's eye is imaged. This transverse scanning, along with other unwanted sources of noise such as patient movement, induces nonzero Doppler shifts even in stationary background tissue. An example PR Doppler OCT image is shown in Figure 4.1. The PR algorithm has been used to measure total retinal blood flow near the optic nerve region, where the axial component of blood flow velocities is typically high [19]. However, a number of limitations to accurately measuring flow velocities in a blood vessel with PR Doppler OCT have been noted. For example, in the macular region, the axial component of blood flow velocities is typically low due to both near-perpendicular incidence angles and very slow flow in small vessels. The small axial flow component makes vessels in the macula difficult to detect with PR Doppler OCT because the associated Doppler phase/frequency bandwidth mixes with the bandwidth of the surrounding tissue.
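The PR principle just described can be sketched compactly. The helper below is our own illustration (not the dissertation's implementation); it uses the standard phase-resolved relation v = Δφ·λ0/(4π·n·T), where λ0 is the center wavelength, n the refractive index of the medium and T the time between the two A-lines, with Δφ estimated from the averaged lag-one product of adjacent complex A-lines (a Kasai-style estimator). The numeric parameter values are plausible assumptions, not the system's specification:

```python
import numpy as np

def pr_axial_velocity(a1, a2, lam0, n_med, T):
    """Axial velocity from two adjacent complex A-lines.

    dphi is the angle of the averaged lag-one product a2*conj(a1)
    (Kasai autocorrelation estimator); the velocity follows from the
    standard PR relation v = dphi * lam0 / (4 * pi * n_med * T).
    """
    dphi = np.angle(np.sum(a2 * np.conj(a1)))
    return dphi * lam0 / (4.0 * np.pi * n_med * T)

# Synthetic check: pure axial motion at v_true imparts a uniform phase
# shift dphi = 4*pi*n_med*v_true*T/lam0 between the two A-lines.
rng = np.random.default_rng(2)
lam0, n_med, T = 840e-9, 1.38, 1.0 / 26000.0   # assumed values (m, -, s)
v_true = 2e-3                                   # 2 mm/s axial velocity
dphi_true = 4.0 * np.pi * n_med * v_true * T / lam0
a1 = rng.normal(size=64) + 1j * rng.normal(size=64)
a2 = a1 * np.exp(1j * dphi_true)
v_est = pr_axial_velocity(a1, a2, lam0, n_med, T)
print(abs(v_est - v_true) < 1e-9)  # -> True
```

Note that the estimate is unambiguous only while Δφ stays within (−π, π); faster flow wraps the phase, which is one reason high velocities near the optic nerve are easier to detect but harder to quantify without unwrapping.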
Although spatial frequency filtering Doppler OCT methods have been proposed to alleviate this difficulty, a quantitative analysis of the effect of vessel size on Doppler OCT measurements is still lacking. In order to overcome the difficulty in assessing the ability of Doppler OCT techniques to measure blood flow velocities accurately, we introduce a novel simulation of a blood vessel immersed in static tissue in Section 4.2.

Figure 4.1: Example of a phase-resolved (PR) Doppler OCT image in the human retina. Top panel: OCT structural image. Bottom panel: PR Doppler OCT image of the measured Doppler phase shifts. The green and yellow indicate the small (nearly zero) Doppler shifts in the static background tissue that are due to beam motion, patient motion and system noise. The red and blue circular regions indicate retinal blood vessels. The sign of the Doppler phase shift indicates the direction of the flow velocity.

For the first time, a full two-dimensional retinal vessel simulation is implemented that accurately represents the way Doppler OCT is applied in clinical studies. In practice, Doppler OCT techniques are applied to two-dimensional B-scans and flow is computed by averaging the Doppler phase shifts inside a blood vessel. Other Doppler OCT simulation techniques have primarily been one-dimensional in nature, so that effects on Doppler flow measurements cannot be accurately assessed. In contrast, our simulation can be used to objectively and quantitatively evaluate the flow accuracy of various Doppler OCT algorithms by providing ground truth knowledge of the real flow velocities. The utility of our retinal vessel simulation in quantitatively evaluating Doppler OCT algorithms is then shown with two case studies. In Section 4.3, we compare the accuracy of the PR algorithm and a spatial frequency filtering technique called moving-scatterer-sensitive (MSS) Doppler OCT as blood vessels and axial flow components become increasingly small.
For this case study, we first validate the simulation with in vivo Doppler OCT retinal images and then use the simulation to extrapolate the results in a controlled setting. The material presented in Section 4.3 was published in [51]. In Section 4.4, we study the effect of transverse step size on the PR algorithm to show that it produces inaccurate blood flow velocity measurements as the transverse step size between adjacent axial scans increases. For this case study, we first validate the simulation with in vitro flow phantom experiments and again use the simulation to extrapolate the results.

4.2 Retinal Vessel Simulation

4.2.1 Introduction

In this section, we develop a novel simulation of a blood vessel immersed in tissue. The primary motivation for the development of this simulation is the need for objective quantitative accuracy analysis of Doppler OCT algorithms in a controlled environment. The synthetic data provides ground truth knowledge of blood flow velocities that can be used to assess the accuracy of Doppler OCT algorithms. The primary novelty of our technique is that it simulates an entire two-dimensional OCT B-scan of flowing particles that model moving red blood cells surrounded by stationary particles that model static background retinal tissue. Our simulation can be used to assess blood flow measurements in a blood vessel. Other simulations have been developed in order to evaluate Doppler OCT algorithms [18, 52]. However, our simulation is more realistic in terms of mimicking clinical Doppler measurements because (a) B-scans are used to assess velocity measurements and (b) we model both stationary and flowing particles. For example, the authors in [52] developed a one-dimensional simulation to study the effect of transverse step size on phase-resolved (PR) Doppler OCT measurements.
Because their method is one-dimensional, it cannot be used to assess the effect on Doppler flow measurements that are computed by integrating the Doppler phase shifts over a circular cross section of a blood vessel. Furthermore, their simulation did not include flowing particles, so the results are limited to the effects on Doppler phase shifts in background tissue. On the other hand, the authors in [18] developed a simulation to study the effect of signal-to-noise ratio on Doppler velocity measurements in flowing particles. This simulation, however, is zero-dimensional in the sense that the authors simply sampled from a theoretical phase distribution. The lack of geometry in this simulation as well as the exclusion of stationary particles limit its general application to the assessment of clinical Doppler OCT measurements. Each of these simulations was developed for a particular study. On the other hand, our novel simulation can be used for a variety of Doppler OCT studies, as discussed in Section 4.3 and Section 4.4. The organization of this section is as follows. First, the geometry used to model a blood vessel is introduced. Then, we provide an overview of the simulation procedure and describe each of the simulation steps in detail. A table summarizing the simulation parameters is then provided. Finally, a visual comparison between a blood vessel detected in an in vivo image and a simulated blood vessel is shown. The simulation is validated in subsequent sections with two case studies.

4.2.2 Retinal Blood Vessel Geometry

The geometry of a blood vessel is approximated with a cylinder of arbitrary orientation. The blood vessel is then completely described by the vessel radius r, a unit vector n̂ that is normal to the vessel cross section and a point P_0 = (x_0, y_0, z_0) that is centered on its circular cross section, or core. The simulation geometry is illustrated in Figure 4.2.

Figure 4.2: Vessel geometry for the Doppler OCT simulation.
Let P be any point in three-dimensional space and let p̂ denote the unit vector from the point P_0 on the vessel core to the point P. Then, the point P_C on the vessel core that is closest to P can be found using the projection:

P_C(P) = P_0 + ⟨P − P_0, n̂⟩ n̂,    (4.1)

where the angled brackets denote a vector inner product. Note that P_C is a function of the arbitrary point P; the explicit dependence will be dropped from here on. Then, the equation of the cylinder is given by:

‖P − P_C‖₂² = r².    (4.2)

The vessel direction is given in spherical coordinates, with θ_a being the azimuth angle, or rotation from the x-axis in the xy plane, and θ_z being the zenith or polar angle from the z-axis. Setting the beam direction to be the +z axis means that the zenith angle is equal to the Doppler angle θ_D. The vessel normal can then be written as:

n̂ = [n_x, n_y, n_z] = [sin θ_D cos θ_a, sin θ_D sin θ_a, cos θ_D].    (4.3)

A two-dimensional OCT B-scan produces a cross-sectional image in the xz plane. The intersection of the vessel with this plane can be found as follows. Let P = (x, 0, z) denote any point in the xz plane. By convention, the vector describing the vessel core runs through the origin, so that P_0 = (0, 0, 0). Plugging these values and the normal vector n̂ into Equation 4.1, and the result into Equation 4.2, gives the equation for the intersection of the cylinder Cyl(x, z) with the xz plane as

‖P − P_C‖₂² = r² = Ax² + Bxz + Cz²,
A = (1 − n_x²)² + n_x²n_y² + n_x²n_z²,
B = 2[n_x n_z n_y² − n_x n_z(1 − n_x²) − n_x n_z(1 − n_z²)],
C = (1 − n_z²)² + n_x²n_z² + n_y²n_z².    (4.4)

Note that this equation describes an ellipse in the xz plane, as shown in Figure 4.3.

Figure 4.3: Intersection of the cylindrical vessel with the scan plane. This intersection defines the vessel boundary that can be used in subsequent calculations of flow.
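The coefficients in Equation 4.4 can be checked numerically. The sketch below (our code, not the dissertation's) builds the vessel normal from Equation 4.3, forms A, B and C, and confirms that, since n_x² + n_y² + n_z² = 1, they collapse to the compact forms 1 − n_x², −2·n_x·n_z and 1 − n_z² (a simplification we derived; it is not stated in the text):

```python
import numpy as np

def ellipse_coeffs(theta_D, theta_a):
    """Coefficients A, B, C of the vessel/scan-plane intersection ellipse
    A*x^2 + B*x*z + C*z^2 = r^2 (Eq. 4.4), built from the vessel normal
    of Eq. 4.3."""
    nx = np.sin(theta_D) * np.cos(theta_a)
    ny = np.sin(theta_D) * np.sin(theta_a)
    nz = np.cos(theta_D)
    A = (1 - nx**2)**2 + nx**2 * ny**2 + nx**2 * nz**2
    B = 2 * (nx * nz * ny**2 - nx * nz * (1 - nx**2) - nx * nz * (1 - nz**2))
    C = (1 - nz**2)**2 + nx**2 * nz**2 + ny**2 * nz**2
    return A, B, C, (nx, ny, nz)

# Because n is a unit vector, the coefficients simplify to
# A = 1 - nx^2, B = -2*nx*nz, C = 1 - nz^2.
A, B, C, (nx, ny, nz) = ellipse_coeffs(np.deg2rad(80.0), np.deg2rad(25.0))
print(np.isclose(A, 1 - nx**2), np.isclose(B, -2 * nx * nz),
      np.isclose(C, 1 - nz**2))  # -> True True True
```

The compact forms make the geometry transparent: the cross-term B vanishes at perpendicular incidence (θ_D = 90°, n_z = 0), where the intersection reduces to an axis-aligned ellipse.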
As shown in Figure 4.3, the maximum extent z_MAX (in the z direction) of the vessel in the xz plane can be found by setting the partial derivative of the cylinder Cyl(x, z) with respect to x equal to zero. Similarly, the maximum extent x_MAX in the x direction can be found by setting the partial derivative of the cylinder Cyl(x, z) with respect to z equal to zero. Performing this both for z and for x gives:

z_MAX = r / √(C − B²/(4A)),
x_MAX = r / √(A − B²/(4C)).    (4.5)

4.2.3 Simulation Procedure

Our novel retinal vessel simulation generates a synthetic OCT B-scan of a blood vessel immersed in background tissue. However, a complete model of red blood cell flow through a blood vessel immersed in retinal tissue would be very difficult to implement in practice. We thus use a number of simplifying techniques to make the problem tractable, including the cylindrical model of a blood vessel previously discussed. Because the geometry of moving red blood cells is fairly complex, we model them with discrete point scatterers. Similarly, the static retinal tissue, which consists of different types of cells with different geometries, is modeled with discrete point scatterers. To initialize the simulation, we generate random positions according to a uniform distribution for a fixed number of particles within a rectangular scan volume. The number of generated particles is determined by a simulation parameter and is kept fixed for computational simplicity. Each particle is labeled as a stationary or flow particle depending on whether it lies inside the cylindrical blood vessel. Then, the beam position is initialized and the iterative part of the simulation proceeds until an entire complex OCT B-scan is computed. The iterative part of the simulation consists of a number of steps. First, the OCT signal is computed using a discrete version of Equation 2.13 for all of the particles in the current coherence volume. Note that the center of the coherence volume coincides with the center of the input beam.
Then, the beam is moved axially and the OCT signal is recomputed for the new coherence volume. These two steps are repeated until an entire axial scan, or A-line, is acquired, and then the axial beam position is reset to its initial value. After each axial scan, the flow particles are moved according to their respective preset velocities. By convention, the velocity of a particle is computed using a parabolic flow profile, and the speed of a particular particle is determined by its distance to the vessel core. Although not entirely realistic, parabolic flow is typically assumed when using Doppler OCT techniques. Some flow particles leave the rectangular scan volume after they are moved. If this were not compensated by bringing new flow particles into the scan region, then eventually there would be no flow particles remaining. This necessitates flow particle replacement. In practice, red blood cells enter the scan volume at random times and in random positions. A more realistic model would account for this by sampling from a probability distribution that describes the entry of particles into the scan region. However, this would lead to a variable number of flow particles in the scan region and make the particles computationally difficult to track. Because we fix the number of particles in the simulation, a flow particle is replaced upon exit from the scan region with another flow particle. Furthermore, because flow particles near the center of the vessel exit the scan region faster than particles near the vessel walls, allowing the new flow particle to have any speed would result in a clustering of flow particles near the vessel walls. In order to maintain the uniform spatial distribution of the flow particles, the speed of the replaced flow particle is preserved in the new flow particle. After all of the flow particles are replaced, the transverse position of the beam is updated.
The iterative part of the simulation is then repeated until the OCT signal at all axial and transverse beam positions is computed. When the entire B-scan simulation is complete, the complex OCT signal data is saved to a file. Figure 4.4 depicts a flowchart for the simulation procedure. The individual modules are explained in detail in the following subsections.

Figure 4.4: Retinal vessel simulation flowchart.

4.2.3.1 Initialize

The scan region is defined by the scan length Δx and scan depth Δz. The transverse step size between adjacent A-lines is δx and the axial step size between adjacent depth pixels is δz. The total numbers of transverse and axial pixels are then calculated using:

N_x = ⌈Δx/δx⌉ + 1, N_z = ⌈Δz/δz⌉ + 1, (4.6)

where ⌈a⌉ denotes the ceiling function, defined as the smallest integer greater than or equal to a. The full-width at half-maximum (FWHM) transverse resolution ω_0 and the axial resolution ω_z are simulation parameters. Each coherence volume is a 1 × 3 vector V^C = (V^C_x, V^C_y, V^C_z) defined by a width in the x, y and z dimensions. This width is determined by another simulation parameter cVolPw that specifies the 1/e^cVolPw width of the assumed Gaussian beam and Gaussian source spectrum. Then, the coherence volume boundary in the transverse dimension is given by:

V^C_x = ω_0 √(cVolPw / ln 2), (4.7)

and similarly for the axial dimension. Let

X_max = Δx/2 + V^C_x, Y_max = V^C_y, Z_max = Δz/2 + V^C_z, (4.8)

denote the maximum extent of the simulation with respect to the origin. Then, the particles are randomly placed within the three-dimensional scan volume V^S defined by [−X_max, X_max] × [−Y_max, Y_max] × [−Z_max, Z_max]. The density of the particles ρ_p, defined as the number of particles per cubic micron, is a simulation parameter. The total number of particles generated is given by N_p = ⌈ρ_p · V^S_x V^S_y V^S_z⌉. The particles are generated with random positions inside V^S. Each particle is also assigned a random phase between −π and π.
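As a concrete illustration, the initialization steps above can be sketched in Python. This is a simplified sketch, not the simulator's actual code: the vessel is taken as an axis-aligned cylinder along y (ignoring the Doppler and azimuthal angles), and all function and variable names are illustrative.

```python
import numpy as np

def initialize(scan_len, scan_depth, dx, dz, w0, wz, cvolpw, rho_p, d_ves, seed=0):
    """Set up the scan grid and random particles (Section 4.2.3.1).

    Returns the pixel counts (Eq. 4.6), uniformly placed particle positions,
    random phases in (-pi, pi), and flow/stationary labels.
    """
    rng = np.random.default_rng(seed)
    # Eq. 4.6: numbers of transverse and axial pixels.
    n_x = int(np.ceil(scan_len / dx)) + 1
    n_z = int(np.ceil(scan_depth / dz)) + 1
    # Eq. 4.7: coherence-volume bounds from the 1/e^cVolPw width parameter.
    v_cx = w0 * np.sqrt(cvolpw / np.log(2))
    v_cz = wz * np.sqrt(cvolpw / np.log(2))
    # Eq. 4.8: half-extents of the particle volume V_S (y uses the transverse bound).
    x_max = scan_len / 2 + v_cx
    y_max = v_cx
    z_max = scan_depth / 2 + v_cz
    # Particle count from the density rho_p (particles per cubic micron).
    n_p = int(np.ceil(rho_p * (2 * x_max) * (2 * y_max) * (2 * z_max)))
    pos = rng.uniform(-1.0, 1.0, size=(n_p, 3)) * np.array([x_max, y_max, z_max])
    phase = rng.uniform(-np.pi, np.pi, size=n_p)
    # Label flow particles: inside a cylinder of diameter d_ves running along y.
    is_flow = pos[:, 0] ** 2 + pos[:, 2] ** 2 < (d_ves / 2.0) ** 2
    return n_x, n_z, pos, phase, is_flow
```

For example, with a 140 μm × 140 μm scan region, 1 μm transverse and 2.3 μm axial steps, a 20 μm beam and 6.5 μm coherence length, the grid is 141 × 62 pixels.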
The parameters that describe the vessel are its diameter D_VES, Doppler angle θ_D and azimuthal angle θ_a. The particles that lie inside the vessel are deemed flow particles, and the particles that lie outside of the vessel are stationary particles. Whether a particle lies inside or outside of the vessel is determined by Equation 4.1 and Equation 4.2. Flow particles are assigned reflectivities r_FLOW and stationary particles are assigned reflectivities r_STAT. By convention, r_FLOW + r_STAT = 1. After all of the particles are set, the initialization is completed by setting the beam position to P_beam = (−Δx/2, 0, −Δz/2). The initialized simulation is illustrated in Figure 4.5.

Figure 4.5: Initialized simulation. Red dots denote stationary particles. Blue dots denote flow particles that lie inside the vessel. The vessel is shown as a gray cylinder. The transparent box defines the region V^S. The gray box denotes the region of the first A-line.

4.2.3.2 OCT Signal Calculation

The noiseless complex OCT signal emanating from a coherence volume V^C_0 centered at a point (x_0, 0, z_0) within the sample can be calculated using:

A = Σ_{p ∈ V^C_0} r_p exp(i 2 k_0 (z_p − z_0)) G_p, (4.9)

where p is the particle index, r_p is its reflectivity, (x_p, y_p, z_p) is its position in the sample, and G_p is a Gaussian function defined by:

G_p = exp( −4 ln 2 [ ((x_p − x_0)² + y_p²)/ω_0² + (z_p − z_0)²/ω_z² ] ). (4.10)

An A-line is computed by moving z_0 from −Δz/2 to Δz/2 in increments of δz and computing Equation 4.9 at each step. After each A-line is computed, the axial beam position is reset to −Δz/2 and the transverse beam position increments by δx.

4.2.3.3 Move Flow Particles

After each A-line, the flow particles are moved. The velocities inside the vessel are determined by a parabolic flow profile in a vessel cross section. First, the distance d_p from each flow particle to the vessel core is computed using Equation 4.2 with the distance d_p replacing the vessel diameter D_VES.
Then, the velocity of the particle is computed using:

V(d_p) = V_MAX (1 − d_p²/(D_VES/2)²). (4.11)

The maximum velocity V_MAX in the vessel is determined from the vessel diameter and the preset flow rate R according to:

V_MAX = 8R/(π D_VES²). (4.12)

The flow rate R is a simulation parameter. Equation 4.12 follows from two facts: the flow rate through a vessel is equal to the average velocity in the vessel cross section multiplied by the cross-sectional area, and the maximum velocity is equal to two times the average velocity when a parabolic flow profile is assumed. The distance that each particle moves is calculated by multiplying Equation 4.11 by the time between A-lines τ, which is another simulation parameter. This distance is added to the current particle position to complete flow particle movement.

4.2.3.4 Replace Flow Particles

As the beam moves along the transverse dimension, the flow particle positions are sequentially updated. Because the scan volume has finite dimensions bounded by a box V^S, some particles may move out of it. Since the number of particles in the simulation is fixed, particles that move out of the scan region must be replaced by new particles. As discussed at the beginning of Section 4.2.3, a new flow particle replaces an exiting particle with the same speed. Also, the angle of rotation around the vessel core of the new flow particle position is set to the angle of rotation of the particle that it is replacing. Physically, this means that the new particle position is forced to lie on a straight line in the direction opposite to the vessel direction n̂_FLOW from the position of the particle that is being replaced. Marching along a line in the direction opposite to the vessel will be referred to as backtracking. The vessel direction will now be subscripted with FLOW in order to avoid confusion. A simple procedure can be followed in order to implement the replacement protocol.
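Before turning to the details of that replacement protocol, the particle-movement step of Equations 4.11–4.12 can be sketched in Python (a sketch under the stated assumptions; names are illustrative, and units must be kept consistent, e.g. μm and seconds with the μL/min flow rate converted accordingly):

```python
import numpy as np

def parabolic_velocity(d_p, d_ves, flow_rate):
    """Speed at distance d_p from the vessel core (Eq. 4.11).

    V_MAX follows Eq. 4.12: the flow rate equals the average velocity times
    the cross-sectional area, and the peak of a parabolic profile is twice
    the average, so V_MAX = 8 R / (pi * D_VES^2).
    """
    v_max = 8.0 * flow_rate / (np.pi * d_ves ** 2)
    return v_max * (1.0 - d_p ** 2 / (d_ves / 2.0) ** 2)

def move_flow_particles(pos, d_core, n_flow, d_ves, flow_rate, tau):
    """Advance each flow particle by V(d_p) * tau along the unit flow
    direction n_flow, i.e. the per-A-line update of Section 4.2.3.3."""
    speed = parabolic_velocity(np.asarray(d_core), d_ves, flow_rate)
    return np.asarray(pos) + speed[:, None] * tau * np.asarray(n_flow)
```

A particle on the centerline moves at twice the mean speed, while a particle at the wall (d_p = D_VES/2) does not move at all, as Equation 4.11 requires.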
Suppose that the particles are randomly distributed in the region V^S = [−X_max, X_max] × [−Y_max, Y_max] × [−Z_max, Z_max]. This region describes a three-dimensional box that has six faces. Each face lies in a plane that is described by a point in the plane and a unit vector that is normal to the plane. For the following discussion, ~p will be used to denote both a point in three-dimensional space as well as a vector from the origin to that point. The definitions of each face are given in Equation 4.13.

F_1: ~p_{F_1} = [X_max, 0, 0], n̂_1 = [1, 0, 0]
F_2: ~p_{F_2} = [−X_max, 0, 0], n̂_2 = [−1, 0, 0]
F_3: ~p_{F_3} = [0, Y_max, 0], n̂_3 = [0, 1, 0]
F_4: ~p_{F_4} = [0, −Y_max, 0], n̂_4 = [0, −1, 0]
F_5: ~p_{F_5} = [0, 0, Z_max], n̂_5 = [0, 0, 1]
F_6: ~p_{F_6} = [0, 0, −Z_max], n̂_6 = [0, 0, −1] (4.13)

If a flow particle has moved out of the region, then it must have crossed one of the faces F_i. A backtracked vector ~l_BT from the position of the flow particle ~p_FLOW along the direction opposite to the flow direction n̂_FLOW will thus intersect F_i. The position of this intersection will be called the particle exit position ~p_EXIT, because it is the position through which the particle exited the scan region. The backtracked flow vector can be described by:

~l_BT = α n̂_FLOW + ~p_FLOW, (4.14)

where α < 0 for backtracking. ~l_BT will also intersect other faces of the box at different positions. The intersection point that is furthest from the particle position is called the particle entry position ~p_ENTRY, because it is the position through which the particle would have entered the scan region. The conventions are shown in two dimensions in Figure 4.6. The distance between ~p_FLOW and ~p_EXIT will be denoted by |α_EXIT| and the distance between ~p_FLOW and ~p_ENTRY will be denoted by |α_ENTRY|.

Figure 4.6: Notation conventions for flow particle replacement, illustrated in two dimensions for simplicity. The F_j denote the faces of the bounding box V^S of the scan volume.
~p_FLOW is the position of the particle that moved outside of the scan volume and is being replaced. The particle position is backtracked along the vector ~l_BT, which is in the direction opposite to the flow vector n̂_FLOW. ~p_EXIT denotes the position at which the particle crossed one of the faces to exit the scan volume and is a distance |α_EXIT| from ~p_FLOW. ~p_ENTRY denotes the position at which the particle would have entered the scan volume and is a distance |α_ENTRY| from ~p_FLOW.

The particle entry and exit positions are calculated as follows. First, note that the face F_j lies in a plane that is defined by all of the points ~p for which ⟨~p − ~p_{F_j}, n̂_j⟩ = 0, where the angle brackets denote the inner product. Then, the backtracked line intersects this plane only if ⟨~l_BT − ~p_{F_j}, n̂_j⟩ = ⟨α_j n̂_FLOW + ~p_FLOW − ~p_{F_j}, n̂_j⟩ = 0. Distributing n̂_j and solving for α_j gives the distance from the point ~p_FLOW to the plane:

α_j = ⟨~p_{F_j} − ~p_FLOW, n̂_j⟩ / ⟨n̂_FLOW, n̂_j⟩. (4.15)

Note that the application of Equation 4.15 requires that n̂_FLOW and n̂_j not be perpendicular, since the denominator would otherwise vanish. If the two vectors are perpendicular, so that the flow direction lies along one of the faces, the computation is skipped and the jth face is not considered for the entry and exit positions. After the α_j's are computed for each j, Equation 4.14 is applied to compute the intersection points ~i_j of the line ~l_BT with each of the faces. If an intersection point lies on the original three-dimensional box boundaries, then it is a candidate for the entry and exit positions. Once all of the candidates have been identified, the magnitudes of the α's are compared; the closest candidate intersection point is defined as the exit position and the farthest candidate intersection point is defined as the entry position. One additional step is needed to complete the calculation of the position of the new particle. The entry point ~p_ENTRY is one possible choice for the new position.
However, in the case of very fast flow, particles would tend to cluster at the entry face if replaced at ~p_ENTRY. In order to prevent this inhomogeneity, a random offset is added to the entry position. First, the distance that a replaced particle moves in between scans is computed. This distance is the maximum sensible offset and is denoted by d_MAX. Then, a random number r is generated in the interval [0, d_MAX] and used as the distance between ~p_ENTRY and the new particle position. Finally, the new particle position ~p'_FLOW is computed according to:

~p'_FLOW = r n̂_FLOW + ~p_ENTRY. (4.16)

Flow particle backtracking is shown in Figure 4.7.

Figure 4.7: Flow particle backtracking and replacement. The offset from the entry position ~p_ENTRY is determined by a random number between 0 and d_MAX.

4.2.4 Summary of Simulation Parameters

A number of parameters are used in the retinal vessel simulation. They are summarized in Table 4.1.

Parameter | Description
Δx, Δz | Scan length/depth in μm
δx, δz | Transverse/axial step size between scans in μm
ω_0, ω_z | 1/e² beam width/coherence length in μm
cVolPw | 1/e^cVolPw defines the coherence volume bounds
ρ_p | Density of particles in particles/μm³
D_VES | Vessel diameter in μm
θ_D | Doppler angle in degrees
θ_a | Azimuthal angle in degrees
R | Flow rate in microliters per minute
τ | Time between A-lines in s/A-line
r_STAT | Stationary particle reflectivity (r_STAT + r_FLOW = 1)

Table 4.1: Summary of parameters for retinal vessel simulation.

4.2.5 Summary

In this section, we proposed a novel two-dimensional simulation of a retinal vessel immersed in background tissue. Our simulation is more realistic than others previously reported because it is two dimensional and because we model both stationary and flow scatterers. After computing the simulated B-scans, the Doppler phase shifts can be calculated using Equation 2.22.
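The replacement module of Section 4.2.3.4 (Equations 4.13–4.16) can be sketched in Python. This is an illustrative sketch, not the simulator's code: the box half-extents, the offset bound d_MAX and the random generator are passed in explicitly, and the candidate filter simply checks that each plane intersection lies on the box surface.

```python
import numpy as np

def replace_flow_particle(p_flow, n_flow, half_extents, d_max, rng):
    """Re-insert a flow particle that left the box [-X,X] x [-Y,Y] x [-Z,Z].

    The line p_flow + alpha * n_flow is intersected with each face plane
    (Eq. 4.15); intersections lying on the box are candidates, the nearest
    is the exit point and the farthest the entry point. The new position is
    the entry point plus a random offset along n_flow (Eq. 4.16).
    """
    p_flow = np.asarray(p_flow, dtype=float)
    n_flow = np.asarray(n_flow, dtype=float)
    half_extents = np.asarray(half_extents, dtype=float)
    alphas, points = [], []
    for axis in range(3):                 # two parallel faces per axis
        if abs(n_flow[axis]) < 1e-12:     # flow parallel to these faces: skip
            continue
        for sign in (1.0, -1.0):
            alpha = (sign * half_extents[axis] - p_flow[axis]) / n_flow[axis]
            point = p_flow + alpha * n_flow
            if np.all(np.abs(point) <= half_extents + 1e-9):
                alphas.append(alpha)
                points.append(point)
    order = np.argsort(np.abs(np.asarray(alphas)))
    p_exit = points[order[0]]             # face the particle crossed leaving
    p_entry = points[order[-1]]           # face it would have entered by
    r = rng.uniform(0.0, d_max)           # random offset avoids clustering
    return p_entry + r * n_flow, p_exit
```

For a particle that drifted past the +z face, the sketch returns the exit point on that face and a new position just inside the −z face, offset downstream by at most d_MAX.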
Figure 4.8 shows a visual comparison between a real retinal vessel from an in vivo image and a simulated retinal vessel, providing qualitative validation of the synthetic OCT data. The OCT imaging system used to acquire the in vivo Doppler image was described in Section 3.4.2. The parameters of the simulation were chosen to mimic those of the clinical imaging system.

Figure 4.8: Visual comparison between an in vivo retinal vessel and a simulated vessel, both computed using the vector-averaging phase-resolved Doppler OCT algorithm.

The purpose of developing this simulation was to provide ground truth blood flow velocity data in order to assess the accuracy of Doppler OCT algorithms. Two case studies are presented in the following sections. For each case study, we first validate that the simulation produces realistic Doppler OCT results using either in vivo retinal data or in vitro experimental data. We then use the simulation to extrapolate the results in a controlled setting in order to assess the accuracy of Doppler OCT algorithms.

4.3 Case Study I: Phase-Resolved vs Moving-Scatterer-Sensitive Doppler OCT Algorithm Comparison

4.3.1 Introduction

The goal of the first case study, which illustrates the utility of the retinal vessel simulation introduced in Section 4.2, is to examine the effect of vessel diameter on the relative blood flow estimates of two Doppler OCT algorithms. Doppler OCT was developed to measure axial phase/frequency shifts from moving red blood cells. The phase-resolved (PR) Doppler OCT algorithm is the most commonly used Doppler technique for computing blood flow velocities. When the incident beam is focused near the edge of a blood vessel, the coherence or probe volume within the sample contains both moving blood cell scatterers as well as scatterers from the static tissue background. For large blood vessels, the number of scans that contain both a stationary tissue component and a flow component is far less than the number of scans that contain only a flow component.
On the other hand, for blood vessels that are small relative to the size of the coherence volume, scans that contain both a stationary component and a flow component dominate. An illustration of the size of a blood vessel relative to the size of the coherence volume is shown in Figure 4.9.

Figure 4.9: Vessel size compared with coherence volume size.

Because in vivo measurements are acquired while transversely scanning the input beam, the static background tissue will have a nonzero Doppler bandwidth that increases with increased scanning speed [53]. An increase in the static tissue bandwidth will decrease the flow sensitivity of the PR algorithm, because as the bandwidth of the stationary scatterers increases it will begin to overlap with the bandwidth of the flow scatterers. This is especially problematic when the axial Doppler frequency shifts are small. Because the flow rate in retinal blood vessels approximately scales with the cube of the vessel diameter, smaller blood vessels exhibit smaller Doppler frequency shifts [54]. Combined with the fact that scans of small blood vessels contain a significant stationary component, the reduced flow sensitivity of the PR algorithm is more pronounced in small blood vessels. The PR algorithm flow velocity estimate, which approximates the centroid of the combined distribution of stationary and flow scatterers, will be biased towards zero because of the nonzero static tissue bandwidth. Figure 4.10 illustrates the reduced sensitivity of Doppler OCT measurements in the case of nonzero static tissue bandwidth.

Figure 4.10: Illustration of the reduced sensitivity of the phase-resolved algorithm due to nonzero static tissue bandwidth. The PR algorithm flow estimate, which approximates the centroid of the combined distribution, will be biased towards zero because of the nonzero static tissue bandwidth.
Spatial frequency filtering techniques have been proposed to overcome the limitations of the PR algorithm for small Doppler frequency shifts. One example of a computationally efficient spatial frequency filtering technique is the moving-scatterer-sensitive (MSS) algorithm [55]. Although MSS has been shown to help visualize small blood vessels that are masked in the PR algorithm, a quantitative analysis of the accuracy of the MSS algorithm relative to PR is lacking. The purpose of this study is to utilize our novel retinal vessel simulation to compare the effectiveness of the PR and MSS algorithms. As mentioned in Section 4.2, the simulations developed in [52] and [18] cannot be used to quantitatively assess the effect of the overlap between the static tissue Doppler bandwidth and the flowing particle bandwidth. However, a quantitative comparison is necessary to determine which algorithm can measure blood flow velocities more accurately, so that the better one can be used in clinical studies. Thus, our novel retinal vessel simulation helps determine which algorithm produces more accurate blood flow velocities in practice. We next describe the two algorithms that will be compared in this study.

4.3.2 Methods

The vector-averaging phase-resolved (PR) algorithm [18] computes Doppler frequency shifts according to:

Δφ = arctan( ℑ{Σ_{m=0}^{M−1} Σ_{n=0}^{N−1} A(m,n) A*(m,n+1)} / ℜ{Σ_{m=0}^{M−1} Σ_{n=0}^{N−1} A(m,n) A*(m,n+1)} ), (4.17)

as previously stated in Equation 2.22. The moving-scatterer-sensitive (MSS) algorithm [55] was designed to overcome the reduced sensitivity of the PR algorithm in the presence of stationary scatterers. The MSS algorithm first subtracts adjacent A-scans:

D(m,n) = A(m,n+1) − A(m,n). (4.18)

MSS then applies Equation 4.17 with A replaced by D. To demonstrate how this method helps to remove the stationary signal component, we examine the frequency response of Equation 4.18.
Specifically, after taking the Fourier transform over n we can write:

D(m,f) = A(m,f) e^{i2πfT} − A(m,f) = 2i A(m,f) e^{iπfT} sin(πfT), (4.19)

which gives a transfer function |H(m,f)|² = 4 sin²(πfT). This transfer function is that of a high-pass filter. When the Doppler frequencies of the background tissue are within the stop band of this filter, its influence on estimating the Doppler frequency shift of moving scatterers is greatly reduced. In this study we compute the relative blood flow estimates of the PR and MSS algorithms. As shown in Equation 2.24, the absolute volumetric flow rate is computed by integrating the estimated total Doppler velocities over the vessel cross-sectional area. Computation of the total flow velocity requires measurement of the Doppler angle, or the angle between the incident beam and the blood vessel. However, comparing the ratio of two velocity estimation techniques does not require the Doppler angle, because the angle factors in the numerator and denominator of the ratio cancel. Thus, in this study we do not compute the Doppler angle. Rather, the blood flow estimates are simply computed by summing the axial Doppler phase shifts inside a blood vessel. The simulation was described in Section 4.2. The simulation beam width and coherence length were set to 22 μm and 5 μm, respectively. The axial step size between adjacent pixels was set to 2.3 μm. The transverse step size between adjacent A-lines was varied as described in Section 4.3.3. The A-line rate of the simulation was set to 37 kHz. The particle density was set to 0.15 particles/μm³. The vessel diameters were varied from 15 to 180 μm and the flow rates from 0.05 to 15 μL/min. The flow rate for each vessel diameter was set according to the power-curve fit of flow versus diameter given in [54]. The scan length and scan width scaled with the size of each vessel diameter. After the complex OCT B-scans were simulated, the PR and MSS algorithms were applied to compute the Doppler phase shifts.
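Both estimators reduce to a few lines of NumPy. The following is a sketch (variable names are illustrative, not from the dissertation's code) assuming `a` is a complex M × N array of A-lines with depth along the rows and transverse position along the columns:

```python
import numpy as np

def pr_phase_shift(a):
    """Vector-averaging phase-resolved Doppler shift (Eq. 4.17): sum the
    adjacent A-line correlations A(m,n) * conj(A(m,n+1)) over the window,
    then take the angle of the aggregate complex number."""
    corr = np.sum(a[:, :-1] * np.conj(a[:, 1:]))
    return float(np.angle(corr))

def mss_phase_shift(a):
    """Moving-scatterer-sensitive estimate (Eq. 4.18): difference adjacent
    A-lines first (a high-pass filter with |H(f)|^2 = 4 sin^2(pi f T)),
    then apply the same vector-averaged phase estimator."""
    return pr_phase_shift(a[:, 1:] - a[:, :-1])
```

On a noiseless signal whose phase advances uniformly by Δφ per A-line, both estimators recover Δφ exactly; they differ only in how they weight scatterers whose Doppler frequencies fall in the MSS stop band near zero.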
The blood vessels were identified either manually or automatically, as detailed in Section 4.3.3, and the ratio of the summation of the PR Doppler shifts to the summation of the MSS Doppler shifts inside each blood vessel was computed. The simulation results were validated using in vivo retinal scans obtained with the RTVue spectral domain OCT system described in Section 3.4.2. All of the simulation parameters were chosen to mimic this OCT imaging system. For the in vivo Doppler images, double circular scans were taken in both the peripapillary and macular regions. There were 4000 axial scans in each B-scan. For the peripapillary scans the inner and outer circle diameters were 3.4 and 3.75 millimeters, resulting in transverse step sizes of 2.7 and 2.9 microns, respectively. For the macular scans the inner and outer diameters were 1.9 and 2.2 millimeters, resulting in transverse step sizes of 1.5 and 1.7 microns, respectively. Each position was scanned 6 times, resulting in 12 frames for each scan. Both the PR and MSS algorithms were applied to all frames. Each blood vessel was manually segmented as described in Section 4.3.3, and the ratio of the summation of the PR Doppler shifts to the summation of the MSS Doppler shifts inside each blood vessel was computed.

4.3.3 Results

4.3.3.1 Retinal and simulated PR-to-MSS flow ratio

The simulation is first validated using in vivo measurements. The PR-to-MSS flow ratio as a function of vessel diameter as measured by the PR algorithm was calculated after manually segmenting each blood vessel with a bounding rectangle. For this particular study the transverse resolution was set to 22 microns in the simulation in order to match the RTVue specifications. The results of this experiment are shown in Figure 4.11.

Figure 4.11: Left Panel: Simulated flow ratio. Right Panel: Retinal PR-to-MSS flow ratio.

The following curve was used for analysis of Figure 4.11:

f(x) = a(x − b)/(x + c). (4.20)

This fit is used because of its simple interpretation.
First, the parameter a represents the asymptotic value of the ratio as the diameter becomes very large. When this parameter is less than one, it means that even for very large diameters there is a discrepancy between the PR and MSS flow estimates. Second, the parameter b represents the diameter at which the ratio goes to zero. This value is not physically relevant, as the Doppler shifts can only be zero in the absence of noise; in practice this ratio never falls to zero. Finally, the parameters b and c are related to the value at which the ratio decreases significantly from a. Specifically, when x = c + 2b, the ratio falls to a/2. The results of the retinal measurement and simulated measurement fits are in close agreement, as shown in Figure 4.11. The significant difference lies in the parameter c. For the retinal flow ratio, the vessel diameter at which the ratio falls to a/2 is 31.8 microns. For the simulated flow ratio this value is 24.4 microns. We propose that the reason for the large discrepancy in the parameter c is that although the ideal transverse resolution of the RTVue system is 22 microns, the real transverse resolution is likely coarser due to aberration effects. Thus the value at which the flow ratio is significantly reduced is shifted towards a larger vessel diameter when compared to the simulation, which uses a transverse resolution of exactly 22 microns. In order to test this hypothesis, a second simulation was implemented with a variable beam width in the range between 22 and 40 microns. In this experiment, the flow ratio is plotted as a function of the ratio of the vessel diameter to the beam width. In this case, since many more simulations were analyzed, the vessel boundary was detected automatically using a parabolic fit, as described next. In order to compute the vessel boundary, the known vessel center is used.
Parabolas are fit to the centerline Doppler shifts in both the axial and transverse directions using a linear least-squares fit with the constraint that the shifts must be zero both on and outside of the boundary. The fitting function can be written as:

f(x) = { ax² + bx + c, if |x| < B; 0, otherwise, (4.21)

where B defines the vessel boundary. The least-squares fit seeks to minimize the sum squared error between f̂ and the measured Doppler shifts, with B included as a parameter. This is done in both the axial and transverse directions to obtain B_AXIAL and B_TRANSVERSE, and the smaller of the two is taken to be the vessel diameter. Then, to compute the flow ratios, the Doppler shifts in an ellipse with major and minor axes defined by B_AXIAL and B_TRANSVERSE are summed. Any case for which boundary detection is grossly in error is excluded from further analysis. It is important to note that fitting a parabola to the profile tends to produce a larger diameter estimate than does thresholding at the boundaries based on the noise level. Thus some noise pixels are included in the diameter and flow calculations. While this may have a significant effect on the diameter calculation, it has little effect on the flow calculation, since the noisy Doppler shifts tend to sum to zero. After implementing the boundary-finding technique, the PR-to-MSS flow ratio is computed as a function of the ratio of the vessel diameter (as measured by PR) to the beam width. The results are shown in Figure 4.12. From the figure we determine that the PR diameter-to-beam-width ratio for which the flow ratio decreases to one-half the maximum value is 0.93. This value can be used to obtain an estimate for the true beam spot size on the retina. Specifically, we can divide the measured retinal vessel diameters by possible beam spot sizes, find the new curve fit and compute the vessel diameter-to-beam-width ratio for which the flow ratio decreases to one-half of its maximum value.
The beam spot size that produces the result that is closest to 0.96 can be used as an estimate for the true spot size on the retina. Using 0.96 and the retinal flow ratio data calculated above, we estimate that the real beam spot size on the retina is 33.1 microns. It should be noted that using this method we estimate that the beam spot size in the simulation is really 25.4 microns, not the 22 microns to which this value is set. This discrepancy is likely due to the fact that for the original simulation the vessels were manually detected, whereas for the second simulation the automatic boundary detection scheme was used.

Figure 4.12: Simulated PR-to-MSS flow ratios as a function of diameter-to-beam-width ratio.

4.3.3.2 Simulated Flow and Simulated Diameter Accuracy

The simulation is next extended and used to evaluate the accuracy, in terms of flow and vessel diameter estimates, of the PR and MSS algorithms. In these experiments the vessel diameters were computed automatically using the parabolic fitting technique described previously. The simulation enables the evaluation of the flow accuracy of both the PR and MSS algorithms. Figure 4.13 demonstrates that both PR and MSS produce widely varying flow estimates when the vessel diameter is less than the beam width. For vessel diameters between one and three times the beam width, the PR method produces a significant but systematic underestimate of the flow. For vessel diameters above three times the beam width, the PR method produces stable and accurate flow estimates. On the other hand, the MSS method does not produce highly stable flow estimates for any vessel diameter, as evidenced by its high variability for all diameters. Furthermore, even for very large vessels, the MSS method overestimates the flow. Both of these effects are caused by the high-pass nature of the MSS algorithm. Such high-pass filtering has been shown [53] to distort the power spectrum of the Doppler signal.
This is especially evident when the Doppler frequency shifts are low, as is typically the case for small blood vessels.

Figure 4.13: Simulated PR-to-actual and MSS-to-actual flow ratios as a function of diameter-to-beam-width ratio.

The simulated vessel diameter measurements are plotted in Figure 4.14. These relationships are linear for both the PR and MSS methods, with a slight increase in the difference between the two methods as the actual diameter increases. Based on the linear regression equations, the PR method slightly underestimates the vessel diameter whereas the MSS algorithm overestimates it. It is clear that the PR algorithm produces more precise diameter estimates than does the MSS algorithm. It should again be noted that these diameter measurements were made automatically using the parabolic fitting scheme described above. Fitting a parabola to the profile tends to produce a larger diameter estimate than does thresholding at the boundaries based on the noise level. Thus the intercepts would be slightly lower than reported here if thresholding at the boundary were used to determine the vessel diameter.

Figure 4.14: Simulated diameter measurement accuracy.

4.3.4 Summary

In summary, we have shown that the PR-to-MSS flow ratio decreases significantly as the vessel diameter decreases. Similar relationships have been shown in both retinal and simulation measurements. For all vessel diameters examined, the PR algorithm produces more precise flow and diameter estimates than MSS, due to the high-pass nature of the MSS algorithm. The results indicate that although the MSS algorithm can be used to visualize small blood vessels that are masked by the PR estimates, it cannot be used reliably to measure the flow in these vessels. The PR algorithm, however, underestimates the flow for vessel diameters less than three times the beam width. But the simulation predicts that this underestimate is systematic for diameters between one and three times the beam width.
Thus, it seems plausible that a correction factor can be used for the flow estimates in these cases. Further testing and analysis are required to determine whether these observations may help to correct for algorithm bias in in vivo retinal flow estimates.

4.4 Case Study II: Effect of Transverse Step Size on Flow Velocity Measurements in Phase-Resolved Doppler OCT

4.4.1 Introduction

The goal of the second case study, which illustrates the utility of the retinal vessel simulation introduced in Section 4.2, is to study the effect of transverse step size on Doppler OCT measurements. In Section 4.3 we showed that the phase-resolved algorithm is more accurate than the moving-scatterer-sensitive algorithm in terms of estimating retinal blood flow. Thus, we limit this study to the phase-resolved Doppler OCT algorithm (PRDOCT). The precision and accuracy of PRDOCT measurements rely on both a large overlap between adjacent A-scans [52] as well as high signal-to-noise ratios (SNRs) [56]. The overlap between scans is related to the transverse step size. The authors in [52] derived a theoretical expression for the phase difference between adjacent A-lines as a function of transverse step size. Using this model as well as a one-dimensional simulation, they showed that increasing the transverse step size leads to a broadening of the Doppler spectrum and a resulting increase in the measured root-mean-squared phase noise. However, this study focused only on bulk or stationary scatterers and did not analyze the effect on moving particles such as red blood cells (RBCs). The authors in [56] showed that a decrease in the SNR also leads to a broadening of the Doppler spectrum. Because of the cyclic nature of phase measurements and subsequent phase wrapping, the authors showed that this spectrum broadening leads to a decrease in measured Doppler velocities, and that the effect was more pronounced at higher flow speeds [18].
The authors then proposed a modified complex vector averaging technique that mitigated this effect. However, this study did not analyze the effect of transverse step size on PRDOCT measurements. The purpose of this study is to examine the dependence of average velocity measurements inside a vessel on the transverse step size in spectral domain PRDOCT using the vector averaging technique. The primary contributions of this study are (1) the derivation of a mathematical expression for the effect of transverse step size on PRDOCT and (2) the proposal of two techniques to correct for this effect. This is the first time that these correction techniques have been proposed, and they can be used to obtain more accurate blood flow velocity estimates for in vivo clinical studies. We first validate the simulation with in vitro blood flow phantom experiments using small transverse step sizes that are clinically relevant. We then use the simulation to extrapolate the results over a larger range of transverse step sizes and flow rates. We fit parameterized logistic functions to the curves in order to compare the effect at different flow rates. We then present lookup table and phase unwrapping techniques to correct for the effect of increased transverse step size on measured velocity data. We also show that increasing the beam width mitigates the effect without the need to correct the measured velocities.

4.4.2 Theory and Simulation

The simulation was described in Section 4.2. The constant parameters in this simulation include the scan length (140 μm), scan depth (140 μm), full-width 1/e² intensity beam width (20 μm), coherence length in tissue (6.5 μm), scatterer density (0.08 particles/μm³), source wavelength (840 nm) and A-line rate (17 kHz). No additive noise is implemented, unless otherwise stated. Scatterers are randomly placed within the scan volume.
An 80 μm diameter vessel with a parabolic flow profile is placed at the scan center, and all scatterers inside the vessel simulate flowing blood particles, while all scatterers outside of the vessel simulate particles in the surrounding tissue. The Doppler angle, defined as the angular deviation from perpendicularity between the incident beam and the blood vessel, is set to 10°. The transverse step size is a variable parameter. The simulated complex OCT B-scan was used to compute Doppler phase shifts and corresponding axial velocities using Equation 2.22 and Equation 2.21. Because of the cyclic nature of the arctangent function, the maximum detectable axial velocity shift without phase unwrapping is V_max = λ₀/(4nΔt). Throughout this work, the Doppler velocities are examined over the entire vessel area in the xz (transverse-axial) plane. The illustrated distributions and the computed Doppler shifts and velocities are aggregated using all of the pixels inside the vessel. Because the average velocity in a circular cross section of a parabolic flow profile is half that of the maximum velocity [57], the relative average velocity V_rel is defined in terms of the measured velocity V as:

V_rel = V / (V_max / 2) = (8nΔt / λ₀) V. (4.22)

V_rel ranges between zero and one and is proportional to the volumetric flow through a vessel [57]. The effect of transverse step size on V_rel is the primary focus of this case study. The authors in [52] showed that increasing the transverse step size between adjacent A-lines leads to a broadening of the Doppler spectrum. Due to the cyclic nature of phase measurements and consequent phase wrapping, this spectrum broadening introduces a systematic decrease in measured average Doppler phase shifts and velocities in a vessel. To illustrate this effect, a simulation of an OCT B-scan was used to calculate the phase distribution inside a vessel.
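As a concrete illustration of how the vector-averaged Doppler phase shift and the relative velocity of Equation 4.22 are computed, the sketch below assumes a complex-valued B-scan array; note that with V = λ₀Δφ/(4πnΔt), Equation 4.22 reduces to V_rel = Δφ/(π/2). The array name and shapes here are hypothetical, not from the actual implementation.

```python
import numpy as np

def vector_averaged_phase_shift(bscan):
    """Average Doppler phase shift over a region by complex vector averaging:
    sum the products of adjacent A-lines, then take a single argument.
    bscan: complex array of shape (n_alines, n_depth)."""
    corr = bscan[1:, :] * np.conj(bscan[:-1, :])  # adjacent A-line products
    return np.angle(corr.sum())

def relative_velocity(phase_shift):
    """Equation 4.22 expressed in phase terms: V_rel = measured shift / (pi/2)."""
    return phase_shift / (np.pi / 2)

# Toy check: a uniform phase ramp of 0.3 rad per A-line is recovered exactly.
bscan = np.exp(1j * 0.3 * np.arange(100))[:, None] * np.ones((1, 8))
dphi = vector_averaged_phase_shift(bscan)
```

On real data the sum would be restricted to the pixels inside the segmented vessel, as described above.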
Figure 4.15 shows the Doppler broadening and the resultant phase wrapping with increasing transverse step size for two different settings of V_rel. The inclusion of the wrapped pixels in the averaging calculation leads to a systematic decrease in, and subsequent underestimation of, the average Doppler phase shift inside a vessel. This underestimation is more pronounced at high velocities because of the proximity to the phase wrapping limit. Specifically, for V_rel = 0.36, the average Doppler phase shift inside the vessel falls by a factor of 0.84 when the transverse step size is increased from 0.5 μm to 8.5 μm. On the other hand, for V_rel = 0.85, the average Doppler shift falls by a factor of 0.65 over the same range.

4.4.3 Results and Discussion

4.4.3.1 Validation Using In Vitro Experiments

As a first experiment, the simulation is validated using an in vitro flow phantom experiment with a spectral domain OCT system. A detailed description of the experimental setup was given in Section 2.2.4. The parameters of the system, such as source center wavelength, beam width, coherence length and A-line rate, match those used in the simulation.

Figure 4.15: Simulated Doppler broadening, consequent phase wrapping and mean Doppler phase shift underestimation with increasing transverse step size for two different velocity settings. (a,b,c): V_rel set to 0.36. (d,e,f): V_rel set to 0.85. The transverse step sizes are 0.5 μm (a,d), 4.5 μm (b,e) and 8.5 μm (c,f), respectively. The distributions are computed using all of the pixels that are inside of the vessel. Note that the distribution of velocities is significantly wider for a higher velocity even for a small step size (a,d). This is because the range of velocities in a vessel with a higher average velocity is significantly larger than the range for a lower average velocity. The green bins indicate the phase wrapping.
The vertical red lines indicate the average Doppler shifts in the vessel and are located at 0.57 radians, 0.56 radians and 0.48 radians, respectively, for V_rel = 0.36 and at 1.32 radians, 1.23 radians and 0.86 radians, respectively, for V_rel = 0.85. The underestimation caused by increasing the transverse step size has a significantly larger impact for faster velocities.

A syringe pump (Harvard Apparatus) was used to control the flow of human blood through a glass capillary tube (Wale Apparatus) with an inner diameter of 200 μm. A dual-plane scan protocol [12] was used to compute Doppler velocities in the tube. A visual comparison of the simulated and phantom Doppler velocities is shown in Figure 4.16. For the simulated velocity calculation, the known vessel diameter was used to compute the average velocity inside the vessel. For the phantom experiments, the glass capillary was manually segmented with an ellipse, and the average velocity inside of the segmented capillary is reported.

Figure 4.16: Sample Doppler velocity shifts computed using PRDOCT. Left panel illustrates results of a simulated 80 μm diameter vessel in tissue. Right panel illustrates results of an in vitro experiment using a 200 μm diameter tube. V_max denotes the maximum measurable velocity without phase unwrapping and is equal to λ₀/(4nΔt). The capillary tube was manually segmented with an ellipse.

In the first experiment, the transverse step size Δx was varied between 0.9 μm and 4.2 μm in increments of 0.3 μm, which covers the range typically used in in vitro Doppler experiments [18,19]. Two flow speeds were used to study how well the simulation mimics the phantom studies. The simulation flow speeds were chosen so that the average velocity in the simulated vessel was approximately equal to the average velocity inside the glass capillary. These two velocities are V_rel = 0.49 and V_rel = 0.875. Figure 4.17 qualitatively shows that the transverse step size affects the simulated and in vitro PRDOCT measurements in a similar manner.
The Doppler phase shift broadening for both the in vitro phantom experiments and the simulation is illustrated in the top panel of Figure 4.17 (a,b) for the two flow speeds. This broadening is defined as the change in the standard deviation of the Doppler phase shifts inside the capillary/vessel relative to the standard deviation at 0.9 μm. Formally, the broadening is defined by:

Δσ = σ(Δx) − σ(Δx = 0.9 μm). (4.23)

The broadening for a higher velocity is greater than the broadening for a lower velocity because the standard deviation calculation is affected by phase wrapping as previously described. The phase wrapping subsequently leads to a decrease in the average velocity in the vessel, as illustrated in Figure 4.17 (c,d). Qualitatively, both the simulation and phantom experiments show almost no falloff up to transverse step sizes of 4.2 μm for V_rel = 0.49, but both show a falloff for V_rel = 0.875 over the same transverse step size range.

Figure 4.17: Comparison of broadening and subsequent velocity underestimation for simulation and in vitro experiments for two different flow rates. The broadening (a,b) is defined in Equation 4.23 and is the relative broadening over the range of transverse step sizes. The velocity V_rel (c,d) is defined in Equation 4.22. (a,c): flow speed set so that V_rel = 0.49 at Δx = 0 μm. (b,d): flow speed set so that V_rel = 0.875 at Δx = 0 μm. (a,b) illustrate that the broadening effect is similar for the simulation and phantom experiments. (c,d) illustrate that the falloff in average vessel velocity is similar for the simulation and phantom experiments.

For a quantitative comparison of the velocity falloff with transverse step size, the coefficient of variation of the root-mean-square deviation between the simulated measurements V_rel,sim and the phantom measurements V_rel,phtm is computed from Figure 4.17 (c,d). This quantity is defined as:

CVRMSD = sqrt( (1/M) Σ_{m=1}^{M} (V_rel,sim − V_rel,phtm)² ) / ( (1/M) Σ_{m=1}^{M} V_rel,phtm ) × 100, (4.24)

and is a normalized measure of the difference between the two measurements. For the average velocity falloff curves for the two flow speeds studied, the CVRMSD values are 1.53% and 1.47%, thus validating the use of the simulation to study the effect of transverse step size on PRDOCT measurements.

4.4.3.2 Phase-Resolved Doppler OCT Falloff with Increasing Transverse Step Size

As a second experiment, seven different flow rates ranging from V_rel = 0.12 to V_rel = 0.84 are simulated using transverse step sizes ranging from 0.5 μm to 24.5 μm in steps of 2.0 μm. It can be seen in Figure 4.18 that the measured velocities fall off in a sigmoidal fashion. Initially, a curve fit was performed for all velocities using the parametric logistic function defined by:

V_rel = V_rel,0 (1 + r^M) / (1 + r^(M − Δx)), (4.25)

where V_rel,0 is the relative velocity at Δx = 0, r is the falloff rate and M is the step size at which the derivative of V_rel with respect to Δx reaches its maximum. More intuitively, M is the step size at which the maximum falloff in V_rel occurs. Note that V_rel,0 is the real average relative velocity inside the vessel and V_rel is the measured relative velocity as defined by Equation 4.22. It was found that the falloff rate was approximately constant for all of the curve fits, and so this parameter was fixed at r = 0.655 and the curves were fit again. The fitting results are shown in the left panel of Figure 4.18 in black.

The right panel in Figure 4.18 illustrates that the step size at which the falloff rate reaches its maximum value increases with increasing velocity. This indicates that the shapes of the falloff curves are approximately the same but they are shifted based on the value of M. The velocity underestimation in PRDOCT results from the broadening of the Doppler spectrum due to high transverse step sizes and the limited range of possible phase shifts spanning from −π to π.
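The agreement metric of Equation 4.24 is simple to compute; a minimal sketch, with hypothetical curve values standing in for the measurements of Figure 4.17, is:

```python
import numpy as np

def cvrmsd(v_sim, v_phantom):
    """Coefficient of variation of the root-mean-square deviation (Eq. 4.24),
    expressed in percent of the mean phantom measurement."""
    v_sim = np.asarray(v_sim, dtype=float)
    v_phantom = np.asarray(v_phantom, dtype=float)
    rmsd = np.sqrt(np.mean((v_sim - v_phantom) ** 2))
    return 100.0 * rmsd / np.mean(v_phantom)

# Hypothetical falloff curves: identical except for a constant 0.0045 offset,
# which yields a CVRMSD of exactly 1% relative to the 0.45 mean.
v_phtm = np.linspace(0.40, 0.50, 12)
v_sim = v_phtm + 0.0045
```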
The broadening for higher velocities reaches the phase shift limit earlier than for lower velocities. However, once the broadening causes the Doppler spectrum to reach this limit, it affects the PRDOCT measurements similarly regardless of the flow velocity.

Figure 4.18: Simulation results for falloff of PRDOCT measurements for various flow speeds. Left panel: logistic curve fits for various flow speeds as given by Equation 4.25 with r = 0.655. Right panel: variation of the parameter M (Equation 4.26), which is the step size at which the derivative of V_rel with respect to Δx reaches its maximum, with the nominal velocity V_rel,0 at Δx = 0.

A power curve was used to fit the parameter M versus the real velocity V_rel,0 (Figure 4.18, right). The equation for this curve is:

M(V_rel,0) = −3.39 V_rel,0^2.69 + 12.12. (4.26)

The falloff factor for average velocity measurements related to increasing transverse step size can be defined by the rightmost term in Equation 4.25, or (1 + 0.655^M) / (1 + 0.655^(M − Δx)). As illustrated in Figure 4.18, this falloff factor depends on the real velocity V_rel,0. It is possible to bound the falloff for various transverse step sizes based on Equation 4.25 and Equation 4.26. For example, since the maximum real velocity is V_rel,0 = 1.0, the minimum value for M as given by Equation 4.26 is M = 8.73. For Δx ≤ 5 μm, the falloff factor is then bounded below by 0.85. Thus, even for velocities up to the phase wrapping limit, the velocity measured using the vector averaging technique for PRDOCT falls off by at most 15%. Although this bounding procedure is helpful in understanding the extent to which velocity/flow is underestimated, it does not help to correct for it. It is clear from Equation 4.25 that V_rel,0 is the desired velocity. In practice, velocity is measured using scans taken at some transverse step size larger than zero and will exhibit some falloff.
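The worst-case bound quoted above can be checked numerically from Equations 4.25 and 4.26; the sketch below simply evaluates the falloff factor at the stated worst case (V_rel,0 = 1.0, Δx = 5 μm):

```python
def M_of_vrel0(v0):
    """Power-curve fit of Equation 4.26: M as a function of the true velocity."""
    return -3.39 * v0 ** 2.69 + 12.12

def falloff_factor(dx, M, r=0.655):
    """Rightmost term of Equation 4.25: the fraction of the true average
    velocity that survives at transverse step size dx (in microns)."""
    return (1 + r ** M) / (1 + r ** (M - dx))

# Worst case: the fastest measurable flow (V_rel,0 = 1.0) gives the smallest M,
# and even at a 5 micron step the factor stays near 0.85 (at most ~15% loss).
M_min = M_of_vrel0(1.0)
worst = falloff_factor(5.0, M_min)
```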
It is desirable to devise a method that mitigates the falloff effect for increasing transverse step size and that can be applied to measured velocity data to increase the accuracy of flow estimation. Two techniques are discussed next.

Velocity Correction Using a Lookup Table

One aim of this experiment was to derive a correction factor for a particular Δx that could be applied to a measured velocity V_rel to compute the real velocity V_rel,0. This correction factor is precisely the inverse of the falloff factor previously defined and is equal to (1 + 0.655^(M − Δx)) / (1 + 0.655^M). However, due to the complicated nature of the relationship between M and V_rel,0 as given by Equation 4.26, it is not possible to find this solution in closed form. On the other hand, it is possible to compute V_rel,0 using a lookup table and a numerical solver. Specifically, Equation 4.25 and Equation 4.26 can be combined into:

V_rel = V_rel,0 (1 + 0.655^(−3.39 V_rel,0^2.69 + 12.12)) / (1 + 0.655^(−3.39 V_rel,0^2.69 + 12.12 − Δx)). (4.27)

A lookup table can be generated by solving Equation 4.27 for various transverse step sizes and relative average velocity measurements using numerical techniques. An example of one such lookup table is given in Table 4.2. The gradation of this lookup table is limited only by computer memory considerations.

Δx \ V_rel    0.1      0.3      0.5      0.7      0.9
2.6 μm        0.101    0.304    0.507    0.715    0.935
3.3 μm        0.102    0.306    0.511    0.723    0.957
4.0 μm        0.103    0.308    0.516    0.734    0.995

Table 4.2: Example lookup table for computing the velocity V_rel,0 from the measured average relative velocity V_rel and the transverse step size Δx.

Although a lookup table could be used to compute V_rel,0, its application to practical Doppler OCT measurements is limited by the scope of this work. The focus of this work was to study the effect of transverse step size on PRDOCT average velocity measurements. Increasing the transverse step size leads to a broadening of the Doppler spectrum, which decreases average velocity measurements because of phase wrapping.
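The numerical inversion that populates a table like Table 4.2 can be sketched with a standard root finder; here scipy's brentq is used, with the search bracket restricted to the physical range V_rel,0 in (0, 1]:

```python
from scipy.optimize import brentq

def measured_vrel(v0, dx, r=0.655):
    """Forward model of Equation 4.27: the measured V_rel produced by a true
    velocity v0 at transverse step size dx (in microns)."""
    M = -3.39 * v0 ** 2.69 + 12.12
    return v0 * (1 + r ** M) / (1 + r ** (M - dx))

def true_vrel(v_meas, dx):
    """Invert Equation 4.27 numerically to recover the true velocity V_rel,0."""
    return brentq(lambda v0: measured_vrel(v0, dx) - v_meas, 1e-6, 1.0)

# Round trip: a true velocity of 0.7 observed at a 4.0 micron step size.
v_obs = measured_vrel(0.7, 4.0)
recovered = true_vrel(v_obs, 4.0)
```

Tabulating true_vrel over a grid of (Δx, V_rel) pairs reproduces the structure of Table 4.2.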
However, increasing the transverse step size is not the only factor that can lead to Doppler broadening. As the authors in [18] point out, decreasing the signal-to-noise ratio (SNR) also leads to Doppler broadening and subsequent velocity underestimation. In order to examine the effect of SNR on the falloff, a complex additive Gaussian noise term [42] is added to the simulation measurements in Equation 4.9 at each pixel. The SNR is defined by

SNR = 20 log₁₀( (mean(|A|) − mean(|N|)) / std(|N|) ), (4.28)

where A is the amplitude computed in Equation 4.9, N is the complex additive Gaussian noise term, and the mean and standard deviation are computed using the entire 140 μm by 140 μm simulation region. The SNR is measured in decibels (dB). The noise characteristics are measured by implementing a simulation with the complex reflectivity r_p in Equation 4.9 set to zero. Figure 4.19 shows that an SNR above 21 dB is required to maintain the accuracy of the lookup table. For SNR values lower than 14 dB there is a significant velocity underestimation even at a step size of Δx = 0.5 μm. In practice, then, our lookup table correction method can be applied only when the SNR levels are acceptable.

Figure 4.19: Simulation results for falloff of PRDOCT measurements for various signal-to-noise ratios. SNR is measured in decibels (dB). The blue markers have been generated by simulating without noise so that the SNR in this case is equal to infinity.

In addition to SNR, other factors, such as the transverse flow component [58] and phase noise caused by system shot noise and galvanometer jitter [53], can lead to a broadening of the Doppler spectrum and can result in velocity underestimation. A complete analysis of all of these factors and how their combination leads to Doppler broadening and subsequent velocity underestimation is outside the scope of this work.
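Equation 4.28 can be evaluated directly from an amplitude image and a noise-only acquisition; the sketch below uses a hypothetical constant-amplitude signal and circular complex Gaussian noise on a 140 × 140 pixel grid (the signal level is illustrative, not a value from this work):

```python
import numpy as np

rng = np.random.default_rng(0)

def snr_db(amplitude, noise):
    """SNR in decibels following Eq. 4.28: the mean noise floor is subtracted
    from the mean amplitude before dividing by the noise standard deviation."""
    a, n = np.abs(amplitude), np.abs(noise)
    return 20.0 * np.log10((a.mean() - n.mean()) / n.std())

# Hypothetical 140 x 140 region: complex Gaussian noise alone (the r_p = 0
# simulation), then a strong constant signal plus the same noise.
noise = (rng.standard_normal((140, 140)) + 1j * rng.standard_normal((140, 140))) / np.sqrt(2)
signal = 50.0 + noise
snr = snr_db(signal, noise)
```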
One approach to combining these effects, however, is to model each effect as the convolution of the Doppler spectrum with a Gaussian function whose variance equals that effect's broadening. The effects could then be combined by summing the variances of the individual Gaussians to calculate the cumulative broadening.

Phase Unwrapping

Another method that can help correct for velocity underestimation at increasing transverse step sizes is robust phase unwrapping. In this work, a publicly available implementation [59] of a two-dimensional unwrapping algorithm using branch cuts [60] is adapted for phase unwrapping in vessels with a circular cross section. In one-dimensional phase unwrapping, phase discontinuities greater than π are identified and corrected by adding or subtracting 2π so that the discontinuity is removed. It has been shown that the real unwrapped phase can be recovered if there are no real discontinuities greater than π [61]. Applying this technique to two-dimensional unwrapping, however, introduces inconsistencies in that the unwrapping procedure can become path-dependent [62]. In order to combat this difficulty, Goldstein [62] and others introduced techniques for identifying branch cuts in the phase image that isolate regions that can be unwrapped in a path-independent manner. By operating on each region separately, this technique can be used to compute the unique unwrapped phases. The algorithm used here is based on this idea. Figure 4.20 shows the results of two-dimensional phase unwrapping applied to simulated vessels. After unwrapping, there is no longer a bias in the average Doppler phase shifts inside the vessel.

Figure 4.20: Simulation results illustrating the reduced falloff in PRDOCT measurements when phase unwrapping is implemented. Phase unwrapping corrects for the falloff for step sizes less than 4.5 μm. For step sizes above 10 μm, phase unwrapping is too difficult and the result reduces to the no-unwrapping case.

Phase unwrapping can also fail under low SNR conditions.
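The one-dimensional principle described above can be demonstrated in a few lines; numpy's unwrap is used here as a simplified stand-in for the two-dimensional branch-cut algorithm of [60], applied to a hypothetical parabolic phase profile whose peak exceeds π:

```python
import numpy as np

# Toy 1-D phase profile across a vessel: parabolic, peaking at 3.5 rad, so the
# center aliases past +pi and biases the naive average downward.
x = np.linspace(-1.0, 1.0, 201)
true_phase = 3.5 * (1.0 - x ** 2)
wrapped = np.angle(np.exp(1j * true_phase))   # wrap into (-pi, pi]
unwrapped = np.unwrap(wrapped)                # corrects jumps larger than pi
```

After unwrapping, the mean phase across the profile is no longer biased downward by the aliased center pixels.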
Figure 4.21 illustrates the effect of SNR on the measured average velocities after phase unwrapping. For transverse step sizes less than 4.5 μm, the SNR can be reduced to 14 dB without a significant change in the measured velocities. This is in direct contrast to the computed average velocities without phase unwrapping (Figure 4.19), which exhibit a significant decrease for an SNR equal to 14 dB. Thus, the phase unwrapping method for correcting for the falloff with transverse step size is more robust to noise than the lookup table method.

Figure 4.21: Simulation results for falloff of phase-unwrapped PRDOCT measurements for various signal-to-noise ratios. SNR is measured in decibels (dB). The blue markers have been generated by simulating without noise so that the SNR in this case is equal to infinity.

4.4.3.3 Increasing the Beam Width

Both the lookup table and phase unwrapping methods are techniques to correct measured velocities for the falloff effect caused by increasing the distance between adjacent A-lines. One method that could mitigate the velocity underestimation without correcting measured velocities is increasing the beam width. Using a larger beam width increases the amount of overlap between adjacent A-lines relative to a smaller beam width and should thus reduce the falloff in velocity estimation. In order to test this hypothesis, an experiment was performed to compare the falloff of PRDOCT using two different beam widths equal to 20 μm and 34 μm. Figure 4.22 illustrates that the falloff is slower for the larger beam width. The falloff factor at a transverse step size of Δx = 4.5 μm is 0.94 for the 20 μm beam width and 0.98 for the 34 μm beam width. For a quantitative comparison, Equation 4.25 was fit to both curves. The parameter M was found to be 10.64 for the 20 μm beam width and 17.15 for the 34 μm beam width. The ratio of the M parameters is 0.62, approximately equal to the ratio of the beam widths, which is 0.59.
Thus, the effect of increasing the transverse step size on the average velocity measurements in a vessel is approximately proportional to the beam width. Although increasing the beam width can reduce the falloff effect, the tradeoff is a decrease in transverse resolution as well as some loss of signal. This should work well for larger vessels with high velocities but may obscure smaller blood vessels.

Figure 4.22: Simulation comparison of falloff for different beam widths.

4.4.3.4 Comparison of Phase Averaging and Complex Vector Averaging

In a paper previously published by our research group, the authors measured a falloff in flow measurements that was significantly higher than that found here [19]. Specifically, the ratio of the flow measured at a step size of 4.0 μm relative to the measured flow at 0.7 μm was 0.683. This value was then used as a correction factor for all measurements to correct for the falloff in flow. It is clear from these experiments, however, that the falloff depends on the actual Doppler velocities and thus no single correction factor can be used for all flow measurements. Also, as previously discussed, our experiments show a significantly smaller falloff than 0.683 for all flow velocities up to 0.85 × V_max. In order to understand this discrepancy, another experiment was performed to study the difference between phase averaging PRDOCT (used in [19]), in which the phase differences between adjacent A-scans are first computed and then averaged, and complex vector averaging PRDOCT (used here). As the authors in [18] explain, the vector averaging procedure helps mitigate the effect of phase wrapping caused by the broadening of the Doppler spectrum due to either low SNR or, in this case, a high transverse step size. The phase averaging and vector averaging PRDOCT techniques are compared via histograms in Figure 4.23.
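The difference between the two estimators is easy to reproduce; the sketch below draws hypothetical per-pixel phase differences from a broadened distribution centered near the wrapping limit and compares the two averages:

```python
import numpy as np

rng = np.random.default_rng(1)

def phase_average(dphi):
    """Phase averaging: average the wrapped per-pixel phase differences."""
    return np.angle(np.exp(1j * dphi)).mean()

def vector_average(dphi):
    """Complex vector averaging: sum unit phasors first, then take the argument."""
    return np.angle(np.exp(1j * dphi).sum())

# Broadened Doppler spectrum centered at 2.8 rad with a 0.6 rad spread, so a
# sizeable tail wraps past +pi and aliases to negative phase.
dphi = rng.normal(2.8, 0.6, 20000)
```

The phase average is dragged down by the aliased tail, while the vector average stays near the true mean shift.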
It can be seen that even for a small step size of Δx = 0.5 μm, the phase averaging technique suffers from phase wrapping-induced underestimation of the average phase shift, whereas the vector averaging technique does not. This effect is more pronounced at larger step sizes.

Figure 4.23: Simulated phase distributions for PRDOCT techniques. (a,b): phase averaging PRDOCT. (c,d): vector averaging PRDOCT. Transverse step sizes were set to 0.5 μm (a,c) and 6.5 μm (b,d). Green bins indicate wrapped phases. Vertical red lines indicate the average Doppler shifts and are located at 1.28 radians (a), 0.75 radians (b), 1.33 radians (c) and 1.09 radians (d), respectively. The phase averaging technique is highly susceptible to phase wrapping-induced phase shift underestimation.

Figure 4.24 shows the complete falloff curves for phase averaging and vector averaging for a single flow speed. At the high velocity used here, there is a significant difference between the two techniques. At a 4.5 μm step size, the standard averaging technique falls by a factor of approximately 0.75. This value is still slightly larger than that reported in [19]. The reason for this may be that a higher velocity was used, or it may be a result of other phenomena exhibited by blood, such as tumbling, that are not modeled in the simulation. It is clear, however, that vector averaging is significantly more robust at estimating average velocities inside a vessel.

Figure 4.24: Simulation results comparing the falloff of phase averaging and vector averaging for simulated average velocities in a vessel. Phase averaging suffers from a significant falloff even for small transverse step sizes.

4.4.4 Summary

In summary, we have shown that PR Doppler OCT blood flow estimates fall off significantly with increasing transverse step size. After validating our retinal vessel simulation with in vitro flow phantom experiments, we used the simulation to extrapolate the experimental results.
We applied logistic curve fits to describe the falloff in the velocity estimates with increasing transverse step size, which indicated that for the same transverse step size, higher velocities cause a larger falloff in the PRDOCT measurements than lower velocities. This means that it is not possible to correct clinical PR Doppler OCT measurements with a single correction factor. We proposed lookup table and phase unwrapping techniques that can be used in clinical studies to correct for the velocity underestimation. We also showed that increasing the beam width can mitigate the effect of transverse step size on clinical OCT blood flow velocity measurements.

4.5 Conclusions

In this chapter we studied blood flow velocity measurements with Doppler OCT. In Section 4.2, we introduced a simulation of a retinal vessel immersed in background tissue. The synthetic data provide ground truth knowledge of blood flow velocities that can be used to assess the accuracy of Doppler OCT algorithms. The primary novelty of our technique is that it simulates an entire two-dimensional OCT B-scan of flowing particles that model moving red blood cells surrounded by stationary particles that model static background retinal tissue. Our simulation mimics clinical Doppler OCT measurements more closely than other related simulations. However, our simulation does not provide a complete picture of the scattering from red blood cells in a retinal blood vessel. Specifically, we model both static tissue scatterers and moving red blood cells with discrete point scatterers. In reality, the geometries of these scatterers are far more complex than this. Moreover, for computational considerations, we fix the number of flow particles in each simulation. In practice, red blood cells enter the scan region at random times and in random positions. A more realistic model would account for this by sampling from a probability distribution that describes the entry of flow particles into the scan region.
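Such a stochastic entry model could be as simple as Poisson arrivals; the sketch below is one hypothetical way to sample per-frame particle entries (the flux and frame time are illustrative numbers, not values from this work):

```python
import numpy as np

rng = np.random.default_rng(2)

def entries_per_frame(mean_flux_hz, frame_time_s, n_frames):
    """Sample the number of new flow particles entering the scan region in
    each frame, assuming Poisson arrivals at a mean rate set by the flow
    velocity and the vessel cross-sectional area."""
    return rng.poisson(mean_flux_hz * frame_time_s, n_frames)

# Hypothetical numbers: 5000 particles/s crossing the inlet, 1 ms per frame.
counts = entries_per_frame(5000.0, 1e-3, 10000)
```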
However, our implementation replaces each particle that leaves the scan region with a new particle deterministically, both in time and in space.

Although our simplified model of blood flow through a vessel immersed in retinal tissue does not entirely capture reality, it is still useful for studying various Doppler OCT algorithms. We illustrated its effectiveness with two case studies. First, we examined the effect of vessel diameter on the relative blood flow estimates of the phase-resolved (PR) and moving-scatterer-sensitive (MSS) Doppler OCT algorithms in Section 4.3. After validating our simulation with in vivo OCT retinal images, we showed that while PR underestimates flow velocity measurements for very small vessels, MSS wildly overestimates the flow. Furthermore, we showed that for large vessel diameters, the PR algorithm is more accurate for measuring both flow rates and vessel diameters. Thus, although MSS can be used for visualizing flow in small vessels, the PR algorithm is the method of choice for measuring flow in clinical Doppler OCT studies. As a result, our next case study focused only on the PR Doppler OCT algorithm.

A second case study that illustrates the utility of our Doppler OCT simulation was presented in Section 4.4. In this study, we examined the dependence of complex vector averaging PR measurements of average velocity in a vessel on transverse step size. We showed that increasing the transverse step size leads to a broadening of the Doppler spectrum and subsequent phase wrapping. The phase wrapping causes a systematic underestimation of the average velocity in a vessel and is more pronounced at higher velocities. After validating the simulation with an in vitro flow phantom experiment, we derived a parameterized logistic expression describing the velocity underestimation of PR Doppler OCT with increasing transverse step size and showed that the logistic function parameters depend on velocity.
This indicates that for the same transverse step size, higher velocities cause a larger falloff in the PRDOCT measurements, so that correcting for this falloff using a single correction factor for all velocities is not possible. We then presented lookup table and phase unwrapping techniques for correcting the measured velocities. We showed that an SNR above 21 dB is required to maintain the accuracy of the lookup table method, whereas an SNR above only 14 dB is required for the phase unwrapping method. Thus, the phase unwrapping technique appears to be the more precise method due to its relative insensitivity to SNR. Additionally, we showed that increasing the beam width can mitigate the falloff effect for transverse step sizes lower than 4.5 μm without the need to correct measured velocities. All of these proposed techniques can be used to improve the accuracy of clinical Doppler OCT measurements.

Chapter 5

Split-Spectrum Amplitude-Decorrelation Angiography (SSADA)

5.1 Introduction

As discussed in Chapter 4, Doppler OCT is able to calculate absolute blood flow velocity by evaluating the phase differences between adjacent A-scans [13-17]. However, Doppler OCT requires good phase stability for quantifying slow flow velocities [63-65] and suffers from reduced sensitivity for small blood vessels [51,66]. Furthermore, the measured Doppler frequency shift in a blood vessel varies with the cosine of the Doppler angle, or the angle between the incident beam and the blood vessel [19]. Yet many blood vessels in the human retina are nearly perpendicular to the incident beam, so that the Doppler frequency shifts in these vessels are small and difficult to detect. Several techniques, including joint spectral and time domain OCT [56], use of a modified Hilbert transform [67] and smart scanning protocols [68], have been introduced to overcome some of the limitations of Doppler OCT.
However, techniques that do not depend critically on the Doppler angle may be particularly useful for visualizing blood vessels in the human retina. Several angiographic techniques have been introduced to overcome some of the limitations of Doppler OCT algorithms for visualizing retinal blood flow. Some of these techniques are phase-based, while others are amplitude-based. The phase-based techniques include optical coherence angiography [69], optical micro-angiography [70,71], Doppler variance [72,73] and phase variance [74,75]. The amplitude-based techniques include scattering optical coherence angiography [76,77], speckle variance [78,79] and correlation mapping [80]. These methods have been implemented using both spectral-domain and swept-source OCT imaging systems and have been used to image microvascular networks in human eyes in vivo. However, absolute blood flow velocity quantification has yet to be demonstrated with them.

For a typical Fourier domain OCT setup, the axial resolution, determined by the source central wavelength and its spectral bandwidth, is higher than the lateral resolution, which is determined by the laser beam profile in both the X and Y directions. This anisotropic resolution, with higher axial than transverse resolution, results in higher flow sensitivity for axial motion. In the fundus, ocular pulsation related to the heartbeat, driven by the retrobulbar orbital tissue, mainly occurs along the axial direction. The anisotropic resolution cell of retinal OCT imaging is very sensitive to this axial motion noise. On the other hand, retinal and choroidal blood flow vectors are primarily transverse to the OCT beam, along the wider (less sensitive) dimensions of the OCT resolution cell. Therefore, to improve the signal-to-noise ratio (SNR) of flow detection, it is desirable to lower the axial resolution and dampen the axial flow sensitivity. This reduces the axial motion noise without sacrificing the transverse flow signal.
To overcome this limitation, we develop split-spectrum amplitude-decorrelation angiography (SSADA), which improves the signal-to-noise ratio (SNR) of flow detection and mitigates the bulk motion decorrelation effect suffered by other amplitude-based angiographic techniques. The full OCT spectrum is split into several narrower bands. Inter-B-scan decorrelation is computed using the spectral bands separately and then averaged. We applied the SSADA algorithm to in vivo images of the human macula to visualize the capillary network.

After demonstrating the ability of our novel algorithm to detect flow in retinal blood vessels, we extend SSADA to enable blood velocity quantification. The ability to measure blood flow is critical to the diagnosis of many ocular pathologies. In order to use SSADA for velocity quantification we perform in vitro flow phantom experiments. We correlate the SSADA signal at multiple time scales with various preset velocities and derive a linear model relating SSADA measurements to absolute flow velocities using the phantom data. We then discuss the operating range for the linear model as well as its implications for velocity quantification with SSADA in a clinical setting.

The organization of this chapter is as follows. In Section 5.2, we introduce the theory behind the SSADA algorithm. In Section 5.3, we demonstrate macular angiography with the SSADA algorithm. The material presented in Section 5.3 was published in [81]. Then, in Section 5.4, we perform flow phantom experiments that enable flow velocity quantification with SSADA. The material presented in Section 5.4 was published in [82].

5.2 SSADA Theory

Speckle decorrelation has long been used in ultrasound imaging [83,84] to detect scattering from moving particles such as red blood cells. This phenomenon is also clearly exhibited by real-time OCT reflectance images. The scattering pattern of blood flow varies rapidly over time.
This is because the flow stream drives randomly distributed blood cells through the imaging volume (voxel), resulting in decorrelation of the received backscattered signals as a function of scatterer displacement over time. The contrast between the decorrelation of blood flow and that of static tissue may be used to extract flow signals for angiography. Each pixel in a B-scan OCT image is formed from backscattered signals of a 3D volume in space, referred to as a resolution cell (Figure 5.1(A)).

Figure 5.1: Diagrams of the modification of the OCT imaging resolution cell and the split-spectrum method used for this purpose. (A) The resolution cell in the current configuration can be modified into a new resolution cell by using band-pass filtering and split-spectrum methods. (B) Steps showing how the original 2D spectral interferogram I(k) was split into four new spectra with smaller k bandwidth. BW and bw indicate the bandwidths of the full spectrum and the Gaussian filters, respectively. The regions with non-zero values in the data block are indicated by the blue pattern.

The statistical changes in the envelope intensity are related to the motion of scatterers through the OCT resolution cell. As previously mentioned, the axial resolution (Z) is typically higher than the transverse (X, Y) resolution in an OCT imaging system. However, since retinal blood flow is primarily in the transverse direction, whereas bulk motion noise sources, such as pulsation related to the heartbeat, occur primarily in the axial direction [81], it is desirable to dampen the axial flow sensitivity. In order to decrease the sensitivity to the axial motion noise and thus improve the signal-to-noise ratio of flow detection, the SSADA algorithm digitally broadens the axial resolution prior to computing speckle decorrelation.

One straightforward way to achieve this resolution modification is band-pass filtering of the spectral interferogram (shown in Figure 5.1(A)).
Unfortunately, this also sacrifices most of the speckle information in the spectral interferogram and decreases the flow signal. Thus, it is not an effective way to increase the SNR of flow (decorrelation) detection. A better way to decrease axial resolution without losing any speckle information is to split the spectrum into different frequency bands (shown in Figure 5.1(A)) and calculate decorrelation in each band separately. The decorrelation (flow) images from the multiple spectral bands can then be averaged together to make full use of the speckle information in the entire OCT spectrum. The details of the split-spectrum procedure (Figure 5.1(B)) are explained below.

5.2.1 Split-spectrum

The OCT spectral interferogram is given in Equation 2.12 and is slightly modified in Equation 5.1 to fit the current discussion:

I(k) \propto \Re \iint S(k)\, r(x,z) \exp\!\left(-\frac{4\ln 2\, x^2}{\omega_0^2}\right) \exp(i2kz)\, dx\, dz.    (5.1)

The transverse coordinate is denoted by x and the beam is assumed to be at transverse position x_b = 0. Each pixel in an OCT image is formed by a coherent sum of all of the light backscattered from a 3D coherence volume within a sample. The dimensions of the coherence volume are determined by the full-width at half-maximum (FWHM) transverse beam width ω_0 and the FWHM coherence length z_0. In a typical OCT system the axial resolution is higher than the transverse resolution, which leads to higher decorrelation sensitivity for axial motion [85]. In the fundus, blood flow is primarily in the transverse direction, whereas bulk motion noise sources, such as pulsation related to the heartbeat, occur primarily in the axial direction [81].

In order to decrease the sensitivity to this axial motion noise and thus improve the signal-to-noise ratio of flow detection, the SSADA algorithm digitally broadens the axial resolution prior to computing speckle decorrelation. In the SSADA implementation, a desired axial resolution z'_0 is first chosen that is at least as large as the nominal axial resolution z_0.
In order to create an isotropic coherence volume that is equally sensitive to axial and transverse flow, the desired axial resolution is set equal to the transverse resolution, or z'_0 = ω_0. A set of Gaussian filters in wavenumber is then created that, when multiplied with the interferometric signal in Equation 5.1, increases the axial resolution from z_0 to z'_0. The jth filter in the filterbank can be expressed as:

G_j(k) = \exp\!\left(-4\ln 2\, (k - k_{0j})^2 / \Delta k_G^2\right),    (5.2)

where k_{0j} is the center wavenumber and Δk_G is its FWHM bandwidth. In order to determine the necessary Δk_G, Fourier transform properties as well as properties of Gaussian functions are leveraged [7]. The source spectrum is approximated with the equivalent Gaussian spectrum given in Equation 2.7 to simplify the subsequent mathematical analysis. Note that the product of Equation 2.7 (FWHM = Δk) and Equation 5.2 (FWHM = Δk_G) is a product of two Gaussians in wavenumber space and corresponds to a convolution of Gaussians in z-space. The convolution results in another Gaussian in z-space with FWHM equal to \sqrt{(4\ln 2/\Delta k)^2 + (4\ln 2/\Delta k_G)^2} [7]. This quantity is the desired new axial resolution, as it defines the modified FWHM axial resolution resulting from a spectrum split. Note that the first term in the square root is equal to the original axial resolution z_0. Thus, in order to increase the axial resolution from z_0 to z'_0, the FWHM bandwidth should be set to:

\Delta k_G = 4\ln 2 \Big/ \sqrt{z_0'^2 - z_0^2}.    (5.3)

In order to complete the filterbank specification, the distance in wavenumber between adjacent filters is defined as 0.5 Δk_G. The center wavenumber of each filter is then determined with the constraint that all of the respective center wavenumbers k_{0j} fit within the range of acquired wavenumbers between k_min and k_max. Each Gaussian filter is multiplied by the interferogram in Equation 5.1 to create a set of signals I_j(k) = G_j(k) I(k). Performing Fourier transforms produces a set of z-space signals I_j(z) = G̃_j(z) ∗ I(z), where ∗ denotes convolution and G̃_j is the Fourier transform of G_j.
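As a concrete illustration of Equations 5.2 and 5.3, the sketch below builds such a filterbank in Python. The function name and the centered placement of the filter comb are our own choices; the text only requires that the centers fit between k_min and k_max, spaced by half the FWHM bandwidth.

```python
import numpy as np

def gaussian_filterbank(k, z0, z0_new):
    """Sketch of the split-spectrum filterbank (Equations 5.2-5.3).

    k      -- acquired wavenumbers (radians/mm), ascending
    z0     -- nominal FWHM axial resolution (mm)
    z0_new -- desired (broader) FWHM axial resolution (mm)
    """
    # Equation 5.3: FWHM bandwidth that broadens z0 to z0_new.
    dk_G = 4 * np.log(2) / np.sqrt(z0_new**2 - z0**2)
    # Adjacent filters are spaced by half the FWHM bandwidth, and all
    # centers must fit inside the acquired wavenumber range.
    step = 0.5 * dk_G
    n = int((k[-1] - k[0]) // step) + 1
    offset = (k[-1] - k[0] - (n - 1) * step) / 2  # center the comb in the range
    centers = k[0] + offset + step * np.arange(n)
    # Equation 5.2: one Gaussian filter per row.
    filters = np.exp(-4 * np.log(2) * (k[None, :] - centers[:, None])**2 / dk_G**2)
    return filters, centers
```

With the phantom values used later in this chapter (z_0 = 6.5 µm, z'_0 = 11.8 µm, and a spectrometer range assumed here to span roughly 7170 to 7790 radians/mm, consistent with an ~840 nm source), this yields Δk_G ≈ 282 radians/mm and five filters across the sweep.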
The SSADA algorithm operates only on the OCT magnitude, A_j(z) = |I_j(z)|.

5.2.2 Decorrelation

The SSADA algorithm computes decorrelation at a single voxel between OCT magnitudes separated in time. Let A_{j,mΔt}(x,z) denote the jth split-spectrum OCT magnitude at voxel (x,z) (j ∈ {0, 1, …, J−1}) taken at time t = mΔt (m ∈ {0, 1, …, M−1}). The split-spectrum amplitude-decorrelation angiography signal at a voxel (x,z) at time lag Δt is defined by:

D(x,z;\Delta t) = 1 - \frac{1}{M-1}\,\frac{1}{J} \sum_{j=0}^{J-1} \sum_{m=0}^{M-2} \frac{2\, A_{j,m\Delta t}(x,z)\, A_{j,(m+1)\Delta t}(x,z)}{A_{j,m\Delta t}(x,z)^2 + A_{j,(m+1)\Delta t}(x,z)^2}.    (5.4)

The average decorrelation value D(x,z) is bounded below by zero (no decorrelation) and above by one. For the in vivo demonstration presented in the next section, decorrelation is computed between repeated B-scans at the same slow-axis scan position (M-B mode imaging). For the phantom experiments in the subsequent section, decorrelation is computed between repeated A-lines at the same transverse scan position (M-mode imaging).

5.3 Macular Angiography

To demonstrate the performance of the SSADA algorithm, macular imaging was performed on three normal volunteers using the swept-source OCT system described in [81]. The axial resolution of this system is 5 µm, while the transverse resolution is approximately 11 µm. The axial line rate is 100 kHz. In this demonstration, the system captured 200 A-scans to cover 3 mm for each B-scan. For 3D data acquisition, the entire scan volume was evenly divided into 200 steps, with eight repeated B-scans in each step. In doing so, it required 3.2 seconds to complete one 3D volumetric scan. Under this scanning protocol, the SSADA algorithm was applied to the repeated frame sequences at each step. This implementation computes decorrelation using M-B mode imaging, and the time between measurements at the same voxel is Δt = 2 ms. Maximum projection images over the depth of the retina were used to compute en face decorrelation images.
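Equation 5.4 reduces to a few array operations. The following is our own vectorized NumPy sketch, not the original implementation; it averages the pairwise correlation over the J spectral bands and the M−1 adjacent time pairs.

```python
import numpy as np

def ssada_decorrelation(A):
    """Equation 5.4: average decorrelation at each voxel.

    A -- array of shape (J, M, ...) holding J split-spectrum OCT
         magnitudes at M time points for any trailing voxel dimensions.
    Returns D with the trailing shape; D is 0 for perfectly static signal.
    """
    A0, A1 = A[:, :-1], A[:, 1:]          # amplitude pairs one time step apart
    corr = 2 * A0 * A1 / (A0**2 + A1**2)  # normalized correlation per pair
    return 1.0 - corr.mean(axis=(0, 1))   # average over J bands and M-1 pairs
```

For a static scatterer the two amplitudes are equal, the normalized correlation is one, and the decorrelation is zero, matching the stated lower bound.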
The macular region of the fundus is responsible for central vision. Capillary dropout in the macular region due to diabetic retinopathy is a major cause of vision loss [86]. Focal loss of the choriocapillaris is a likely causative factor in the pathogenesis of both dry and wet age-related macular degeneration [87], the leading cause of blindness in industrialized nations [88]. Thus macular angiography is important. The SSADA algorithm was used to demonstrate macular angiography of both the retinal and choroidal circulations in a normal eye (Figure 5.2). The vascular pattern and capillary networks visualized by SSADA were similar to those previously reported using phase-based OCT angiography techniques [89,90]. The flow pixels formed a continuous microcirculatory network in the retina. There was an absence of vascular network in the foveal avascular zone (Figure 5.2(A)) of approximately 600 µm diameter, in agreement with known anatomy [91]. There were some disconnected apparent flow pixels within the foveal avascular zone (Figure 5.2(A)) due to noise. Inspection of Figure 5.2(C) shows these false flow pixels to be decorrelation noise in the high-reflectance layers of the RPE and photoreceptors. The choriocapillaris layer forms a confluent overlapping plexus [92], so it is to be expected that the projection image of the choroidal circulation shows confluent flow (Figure 5.2(B)).

Figure 5.2: In vivo 3D volumetric [3.0 (x) × 3.0 (y) × 2.9 (z) mm] OCT of the macula processed with the SSADA algorithm. The images in the bottom panels have been cropped from 2.9 mm to 1.5 mm axially. (A) En face maximum decorrelation projection angiogram of the retinal circulation. (B) En face maximum decorrelation projection angiogram of the choroidal circulation. Black bar, 500 µm. (C) Horizontal OCT cross section through the foveal center (upper dashed line in A) with merged flow (decorrelation represented in color scale) and structure (reflectance intensity represented in gray scale) information.
(D) Merged horizontal cross section of the inferior macula (lower dashed line in A).

The cross sections (Figure 5.2(C, D)) showed retinal vessels from the NFL to the outer plexiform layer, in agreement with known anatomy [93]. The flow in the inner choroid had higher velocity, based on the decorrelation seen in the color scale. Its volume was also greater than that of the retinal circulation (Figure 5.2(C, D)), again consistent with the known physiology that the choroidal circulation has much higher flow than the retinal circulation [92]. There were signal voids in the outer choroid which may be due to fringe washout from high flow velocity and the shadowing effect of overlying tissue. The cross sections (Figure 5.2(C, D)) also showed a few spots of decorrelation in the RPE layer. These must be artifacts because the RPE is known to be avascular. This is likely due to the projection of decorrelation of flow in a proximal layer (i.e., inner retinal layers) onto distal layers with a strong reflected signal (i.e., RPE). There was also a tendency for vessels to form vertical arrays in the inner retina, which may in some instances be due to the projection artifact as well.

5.4 Velocity Quantification With SSADA

As previously mentioned, several angiographic techniques have been introduced for imaging retinal blood flow. While good qualitative imaging results have been shown for all of these methods, quantitative results that map angiograms to flow velocities are lacking. Decorrelation has been used for quantitative flow measurements in ultrasound [83,84] and thus has the potential to be useful for measuring flow velocities in OCT. Recently, a decorrelation-based method termed intensity-based Doppler variance (IBDV) was introduced and its relationship with velocity was established [94-96]. This method computes decorrelation using only the amplitude signal.
One potential advantage of SSADA is that the algorithm first digitally creates an isotropic coherence volume, or resolution cell, prior to computing decorrelation. This should make the algorithm equally sensitive to axial and transverse motion, so that SSADA may be used to quantify flow independent of Doppler angle. The purpose of the work presented in this section is to determine a relationship between SSADA measurements and flow velocity. We hypothesize that this relationship is linear within a certain range of velocities and use in vitro blood flow phantom experiments to test this hypothesis. Whole blood was used in order to closely mimic in vivo retinal imaging.

The SSADA algorithm computes decorrelation between two OCT amplitudes that are taken at the same scanning position but are separated in time. We first examine the dependence of SSADA on Doppler angle after splitting the source spectrum in wavenumber so that the axial resolution of the OCT imaging system is equal to its transverse resolution. The concept of multi-timescale SSADA is introduced, where the time separation between amplitude measurements is varied, and used to examine the time dependence of decorrelation. We derive an equation that can be used to calculate the absolute flow velocity from a measured decorrelation value and a particular time separation between OCT amplitude measurements. We then define the saturation velocity, above which the decorrelation values saturate and the linear relationship is no longer valid.

5.4.1 Materials and Methods

The system used in the following experiments has been described in Section 2.2. The axial resolution in blood is z_0 = 6.5 µm and the transverse resolution is 11.8 µm, both defined by full-width at half-maximum values.

5.4.1.1 Scan Protocol

Data was collected using both dual-plane [12] and M-mode protocols. The dual-plane protocol was implemented with two parallel B-scans that were separated by 100 µm and repeated eight times.
The B-scans each consisted of 700 A-lines and covered 700 µm in the transverse dimension, resulting in a transverse step size of 1 µm. M-mode scans consisted of 2600 A-lines at the same transverse position inside the capillary. Measurements were taken for five different preset flow rates and five different Doppler angles. The Doppler angle was controlled using a ball and socket mount.

5.4.1.2 Data Processing

The acquisition software was provided by Bioptigen Inc. (Durham, NC), and the custom processing and analysis software was coded in MATLAB. After background subtraction, resampling to k-space, dispersion compensation and Fourier transformation we obtain complex OCT data. Sample OCT intensity images are shown in Figure 5.3.

Figure 5.3: Sample intensity images. The capillary was placed on top of a piece of paper. Left panel: B-scan; right panel: M-mode scan.

The dual-plane B-scans were used to compute the Doppler angle between the probe beam and the glass capillary, while the M-mode scans were used for velocity and decorrelation calculations. The velocities were calculated by first computing Doppler frequency shifts using the phase-resolved Doppler OCT algorithm [18] and then converting to absolute velocity using the measured Doppler angle. The capillary tube was automatically segmented in the M-mode scans using intensity thresholding and morphological operations. Figure 5.4 illustrates that there are significant projection artifacts in the paper underneath the glass capillary, which is likely due to multiple scattering by the moving blood cells. To avoid the influence of these artifacts on the measured Doppler frequency shifts, we chose a subregion in the upper half of the capillary to use for all subsequent calculations.

In order to apply the SSADA algorithm, a Gaussian filterbank is defined as previously described. In this case, the axial resolution is 6.5 µm, while the desired axial resolution

Figure 5.4: Sample M-mode Doppler frequency shifts from the phantom study.
The black box denotes the entire capillary, while the purple box denotes the region of interest used in the rest of this study.

is 11.8 µm. Thus, the FWHM bandwidth of each Gaussian filter in wavenumber space is 285 radians/mm, as determined by Equation 5.3. The distance between the Gaussian filters is one-half of this, or 142.5 radians/mm. The wavenumbers acquired by the spectrometer range from 7170 to 7790 radians/mm, and so five Gaussian filters with centers within this range are used. The filter centers are located at 7190, 7332.5, 7475, 7617.5 and 7760 radians/mm, respectively.

In this set of experiments, M-mode scans were used to compute the decorrelation of a single A-line. The main purpose of the switch to M-mode imaging is that it enables the study of decorrelation at very fine time scales. The minimum time lag available for this M-mode protocol is Δt = τ = 56 µs, where τ is the A-line period. In order to study the time course of decorrelation, multi-timescale SSADA (MSSADA) is used by applying Equation 5.4 for multiple time scales Δt = kτ for k ∈ {1, 2, …, 600}. The other parameters used in applying Equation 5.4 are M = 2000 and J = 5. Note that because M-mode imaging is used there is no dependence on the transverse coordinate. Applying Equation 5.4 for the time scales listed gives an MSSADA image as shown in Figure 5.5.

Figure 5.5: Multi-timescale SSADA M-mode image of blood flow through the capillary tube. The time separation indicates the time between the two A-lines used to compute the decorrelation.

5.4.2 Results

5.4.2.1 Doppler Angle Dependence

Since SSADA is an amplitude-based method and we create an isotropic voxel by splitting the source spectrum, we expect that the measured SSADA signal will exhibit similar sensitivities in the axial and transverse directions. In order to test this claim, we analyze the variation of the decorrelation signal with Doppler angle.
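The multi-timescale procedure amounts to evaluating Equation 5.4 at each lag k. A sketch follows; the function name is ours, and the A-line period τ enters only through the interpretation of the lag index.

```python
import numpy as np

def multi_timescale_ssada(A, lags=range(1, 601)):
    """Multi-timescale SSADA (MSSADA) on an M-mode acquisition.

    A -- array of shape (J, M, Z): J split-spectrum amplitudes for M
         repeated A-lines at one transverse position, Z depth samples.
    Returns an array of shape (len(lags), Z): decorrelation versus lag,
    where lag k corresponds to a time separation of k A-line periods.
    """
    J, M, Z = A.shape
    D = np.empty((len(lags), Z))
    for i, k in enumerate(lags):
        A0, A1 = A[:, :M - k], A[:, k:]        # pairs k A-line periods apart
        corr = 2 * A0 * A1 / (A0**2 + A1**2)
        D[i] = 1.0 - corr.mean(axis=(0, 1))    # Equation 5.4 at lag k
    return D
```

With the parameters given above (M = 2000, J = 5, lags 1 through 600), each row of the output corresponds to one time separation in the MSSADA image of Figure 5.5.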
To avoid both noise and saturation artifacts, we choose time separations Δt for which decorrelation values lie in a central region around 0.145 at zero Doppler angle. Specifically, time separations of Δt = 784, 280 and 168 µs for respective flow speeds of 0.37, 1.29 and 2.00 mm/s were used to plot SSADA measurements versus Doppler angle. Figure 5.6 shows that while the decorrelation signal remains relatively constant for Doppler angles less than 10°, it increases above 10° and then appears to plateau above 20°. This indicates that while SSADA measurements are equally sensitive to axial and transverse flow for Doppler angles less than 10°, the measurements are more sensitive to axial flow for Doppler angles above 10°. The solid line in Figure 5.6 indicates a linear fit of the relationship between decorrelation and Doppler angle. Although there appears to be a jump in the decorrelation values above 10°, the linear fit is useful in determining how sensitive the decorrelation is to changes in Doppler angle.

Figure 5.6: Dependence of the measured SSADA signal on Doppler angle. The dark line indicates a linear fit with Pearson's R = 0.89 (p < 0.005). Spearman's correlation coefficient = 0.87, indicating a monotonic relationship.

The change in decorrelation due to an angular variation can be used as an indicator of this sensitivity. Specifically, we calculate the difference between the decorrelation values at 0° and 30° that is predicted by the linear fit. We then divide that number by the average decorrelation value over that range and multiply by 100 to convert to a percentage. This deviation is calculated to be 12.8% and indicates a small but significant dependence of decorrelation measurements on Doppler angle.

5.4.2.2 Saturation

SSADA effectively quantifies the dissimilarity between a pixel's amplitude at two different time instances. If the interval between the two measurements is long enough, the respective amplitudes will be independent, and the decorrelation signal will be fully saturated.
This defines a state of complete decorrelation, and any increase in the time interval will not alter the SSADA measurement. Thus, only decorrelation values that are below the saturation level are useful for distinguishing between varying flow speeds. By visually inspecting Figure 5.7, we can see that complete decorrelation occurs in approximately 500 µs for a flow speed equal to 2 mm/s. At this speed and time separation the red blood cells (RBCs) are displaced by only 1.0 µm, less than one-tenth of the coherence volume size. This suggests that the decorrelation reaches full saturation well before the RBCs move completely through the coherence volume, which indicates a high sensitivity to speckle variation caused by the RBCs moving through the coherence volume.

Note that the curves in Figure 5.7 asymptotically approach the complete decorrelation value of 0.21. This motivates us to define a threshold over which the decorrelation rate slows down considerably and the curves in Figure 5.7 begin to flatten. We set this decorrelation saturation threshold to 85% of the asymptotic decorrelation value. The resulting threshold is 0.18, and all decorrelation values above this threshold are referred to as saturated.

Figure 5.7: Multi-timescale decorrelation averaged over the capillary for various flow speeds. The asymptotic decorrelation value for our experiments is 0.21.

5.4.2.3 Relationship between decorrelation and velocity

The goal of this work is to be able to relate the velocity of blood flow to a measured decorrelation value. In order to do so, we examine the velocity dependence of the decorrelation signal at various time separations. We expect that, for a fixed time separation, the decorrelation will increase with increasing velocity. However, there should be some minimum velocity beneath which measured decorrelations are indistinguishable from noise, and this velocity should be related to various system noise sources.
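The two numbers quoted in the saturation discussion (the 0.18 threshold and the 1.0 µm RBC displacement) follow from simple arithmetic:

```python
# Saturation threshold: 85% of the asymptotic decorrelation value of 0.21.
threshold = 0.85 * 0.21               # 0.1785, quoted as 0.18

# RBC displacement at 2 mm/s over the ~500 us saturation time, in micrometers.
displacement_um = 2.0 * 500e-6 * 1e3  # 2 mm/s x 500 us = 1.0 um

print(round(threshold, 2), displacement_um)
```

The 1.0 µm displacement is indeed less than one-tenth of the ~11.8 µm transverse extent of the isotropic coherence volume, as stated above.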
Furthermore, there should also be a maximum velocity above which measured decorrelations are saturated. As discussed in the section on saturation, the rate of decorrelation begins to slow before decorrelation values are fully saturated. This means that the relationship between decorrelation and velocity changes as the decorrelation approaches full saturation. Therefore, in order to determine a relationship between decorrelation and velocity that does not change with time, we exclude all decorrelation values above the previously defined decorrelation saturation threshold of 0.18 from further analysis. With the saturated points removed, we fit a linear model to the decorrelation versus velocity relationship for a given time separation. The data and fits are shown in Figure 5.8.

Figure 5.8: Decorrelation vs. velocity for various time separations (Δt) between A-lines. Left panel: the black dotted line indicates the saturation cutoff value of 0.18. Right panel: linear fits of decorrelation vs. velocity after saturated points have been removed.

We summarize the fitting parameters in Table 5.1. From this table we see that the slope changes with time separation. On the other hand, the intercept values remain relatively constant with changing time separations. We thus establish the linear relationship

D_{\Delta t} = m^{(1)}_{\Delta t}\, v + b^{(1)},    (5.5)

where D_{\Delta t} is the measured decorrelation value at a particular time separation Δt, v is the flow velocity, m^{(1)}_{\Delta t} is the slope parameter that is a function of Δt, and b^{(1)} is the intercept parameter. The significance of this intercept parameter, which is equal to the decorrelation when the velocity is zero, is treated in Section 5.4.3.1.

Time separation [ms]   Slope [s/mm]   Intercept [a.u.]   R-squared
0.056                  0.0162         0.0780             0.76 (p<0.001)
0.122                  0.0294         0.0800             0.91 (p<0.001)
0.168                  0.0434         0.0796             0.94 (p<0.001)
0.244                  0.0564         0.0804             0.94 (p<0.001)
0.280                  0.0704         0.0802             0.95 (p<0.001)

Table 5.1: Summary of the linear fit of decorrelation versus velocity.
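The per-Δt fits summarized in Table 5.1 follow the procedure described above. A sketch of such a fit, using a hypothetical helper around np.polyfit, with saturated points excluded first:

```python
import numpy as np

def fit_decorrelation_vs_velocity(v, D, D_sat=0.18):
    """Fit the linear model of Equation 5.5 at one time separation,
    excluding saturated points (D >= D_sat) before fitting.

    v -- flow velocities (mm/s); D -- measured decorrelations.
    Returns (slope, intercept)."""
    v, D = np.asarray(v, float), np.asarray(D, float)
    keep = D < D_sat                         # drop saturated measurements
    slope, intercept = np.polyfit(v[keep], D[keep], 1)
    return slope, intercept
```

As a synthetic check, data generated from the Δt = 0.280 ms row of Table 5.1 (slope 0.0704 s/mm, intercept 0.0802) is recovered exactly, with the point above 0.18 excluded.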
We next examined the dependence of the slopes m^{(1)}_{\Delta t} on the time separation Δt and found it to be linear, as described by the equation m^{(1)}_{\Delta t} = 0.24 Δt + 0.0025 (Figure 5.9). Since the intercept was very small, the equation could be simplified to m^{(1)}_{\Delta t} ≈ m^{(2)} Δt = 0.24 Δt. The effect of this approximation is discussed in Section 5.4.3.2.

Figure 5.9: Linear fit of slope m^{(1)}_{\Delta t} versus time separation Δt.

Plugging this relationship into Equation 5.5 gives the decorrelation as a function of velocity and time separation:

D(v, \Delta t) = m^{(2)} \Delta t\, v + b^{(1)} = 0.24\, \Delta t\, v + 0.08.    (5.6)

In practice, we measure the decorrelation at a particular time separation and wish to find the flow velocity. Thus, we can invert Equation 5.6 to solve for velocity. Substituting m for 1/m^{(2)} and b for b^{(1)} we can write:

v(D, \Delta t) = \frac{m\,(D - b)}{\Delta t} = \frac{4.17\,(D - 0.08)}{\Delta t}.    (5.7)

This model is only valid for a specific range of velocities and time separations, which define an operating range for our model. Using Equation 5.7, we can compute the saturation velocity v_SAT, defined as the velocity at which the decorrelation reaches the saturation cutoff value of 0.18, for various time separations. The linear model in Equation 5.7 does not hold for velocities above v_SAT. Some time separation / saturation velocity pairs are listed in Table 5.2.

Time separation [ms]   Saturation velocity [mm/s]
0.056                  7.4
0.122                  3.7
0.168                  2.5
0.244                  1.8
0.280                  1.5

Table 5.2: Operating range for the linear model.

5.4.3 Discussion

5.4.3.1 Model Parameters

In order to study the parameters of our model, we first rewrite Equation 5.7 as:

v\,\Delta t = \Delta x = m\,(D - b) = 4.17\,(D - 0.08),    (5.8)

where Δx is the distance that the RBCs move between scans separated by time interval Δt and all other terms have been previously defined. The two parameters in this model are the slope m and the decorrelation intercept b. Since decorrelation is dimensionless, the parameter b must be dimensionless as well.
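Equation 5.7 and the saturation velocities of Table 5.2 can be checked numerically. The sketch below uses the fitted m = 4.17 and b = 0.08; the computed values are close to the table entries, with small differences presumably reflecting rounding of the fitted coefficients.

```python
def velocity_from_decorrelation(D, dt_ms, m=4.17, b=0.08):
    """Equation 5.7: flow velocity in mm/s from a decorrelation value D
    measured at time separation dt_ms (milliseconds)."""
    return m * (D - b) / dt_ms

# Saturation velocity (Table 5.2): velocity at which D reaches 0.18.
for dt in (0.056, 0.122, 0.168, 0.244, 0.280):
    v_sat = velocity_from_decorrelation(0.18, dt)
    print(f"dt = {dt:.3f} ms -> v_sat = {v_sat:.1f} mm/s")

# Equation 5.9 rearranged: RBC displacement that saturates the measurement.
dx_sat_um = 4.17 * (0.18 - 0.08)   # = 0.417 um, since v * dt = m * (D - b)
```

Dividing dx_sat_um by the 11.8 µm beam width reproduces the ratio of approximately 0.0353 discussed below.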
Furthermore, we see from Equation 5.8 that when the RBC displacement equals zero, the decorrelation equals 0.08. Thus the parameter b is equal to the decorrelation in non-flow pixels, or the bulk decorrelation. It equals the minimum measurable decorrelation value and can be defined as the decorrelation noise floor. We expect that this parameter will vary inversely with the system signal-to-noise ratio, similar to the way that the phase noise floor does for Doppler OCT [97]. Further experiments are required to verify if and how this parameter relates to the signal-to-noise ratio of a particular OCT imaging system.

The slope parameter m must have spatial units (e.g., µm) in order to ensure consistency. We can understand its significance by examining decorrelation saturation. Specifically, after applying the decorrelation saturation cutoff value (D_SAT = 0.18) to Equation 5.8 and solving for m we have:

m = 4.17 = \Delta x_{SAT} / 0.1,    (5.9)

where Δx_SAT = 0.417 µm indicates the distance the RBCs must move in order to saturate the decorrelation measurement. Δx_SAT is proportional to the saturation velocity v_SAT for a constant Δt and should depend on the particular OCT imaging system. We expect that Δx_SAT, and thus m, should be proportional to a particular spatial scale related to the physical imaging parameters. It has yet to be determined, however, what the relevant spatial scale might be. One hypothesis is that Δx_SAT increases with increasing beam width ω_0 so that the ratio Δx_SAT/ω_0 ≈ 0.0353 is a constant. It may also be the case that Δx_SAT, and consequently v_SAT, scales with wavelength, so that the saturation velocities given in Table 5.2 increase by a factor of 1.24 for a 1050 nm OCT imaging system. Further experiments using different wavelengths and beam widths are required to test these hypotheses.

5.4.3.2 Model Limitations

There are a number of limitations of the linear model in Equation 5.7.
First, in order to establish the linear relationship between decorrelation and velocity we had to exclude saturated points from our analysis. In practice, this establishes an upper limit on a velocity-time separation pairing for quantitative SSADA using our linear model. Furthermore, as we can see in Figure 5.7, for very slow flow speeds (e.g., 0.37 mm/s) the curve looks more s-shaped than linear. We expect that for slower speeds the curve will look even more s-shaped. It seems then that a more accurate model for the decorrelation-to-velocity relationship might be sigmoidal. Such a model would also naturally handle the saturated data points.

Another limitation of our model is the assumption that the parameter b, related to the decorrelation of stationary scatterers, is independent of the time separation Δt. For the time separations examined in our analysis, this parameter was indeed constant. However, we expect that the bulk decorrelation should increase with time and reach a saturation point when the time separation is very large. Furthermore, we treated the intercept of the slope-versus-Δt fit as a small constant and consequently left it out of the derivation of Equation 5.6. This parameter may be related to the decorrelation caused by shot noise, system vibrations or Brownian motion. Accounting for this parameter would cause a slight increase in the calculated saturation velocities. Lastly, we expect that the decorrelation should depend on scatterer size and possibly blood cell concentration. We used blood in our experiments to mimic in vivo measurements as closely as possible. However, we might gain some additional insight by varying scatterer size and hematocrit levels. Additional experiments are needed to test these hypotheses.
5.4.3.3 Comparison with previous work on intensity-based Doppler variance angiography

Liu and colleagues also performed similar flow phantom experiments [95,96] and found similar results for both intensity-based Doppler variance (IBDV) and Doppler phase variance. Note that they used IBDV, which is similar to SSADA but does not apply spectrum splitting and uses a different averaging procedure. Many of the results of the two works are similar, including establishing decorrelation saturation values and a linear range relating the calculated signal to velocity that depends on the time separation between measurements. However, there are a couple of important differences between the results in that work and those shown here. First, there is a significant difference regarding the dependence on Doppler angle. In order to compare the variation with Doppler angle, we first normalize our SSADA measurements by subtracting the background decorrelation of 0.08 found at zero flow. Because the IBDV background values were negligibly small compared to the flow signal in [96], background subtraction was not necessary for IBDV. We compare the variation in the measurements over a Doppler angular range of approximately 18° (the largest angle tested by Liu et al.). Specifically, after background subtraction, the SSADA signal increases from 0.065 at perpendicular incidence to approximately 0.080 at a Doppler angle of 18° for the data presented in Figure 5.6. Thus, our results in this work indicate that the SSADA signal increases by approximately 23% over an angular range of 18°. On the other hand, the IBDV signal in [96] increases from 80 at perpendicular incidence to approximately 150 at 18°, an 87.5% increase over the same angular range. Thus, the Doppler angle dependence of IBDV [96] was significantly higher than the angle dependence of SSADA reported here.
Another important difference between this work and that found in [95] is that in [95] the authors showed a saturation velocity over 100 mm/s for an inter-B-frame time separation of 0.02 ms, whereas our model predicts a saturation velocity of approximately 20 mm/s for that time scale.

We hypothesize that these significant differences are likely caused by the choice of algorithms and the difference in the flowing phantom. Specifically, by creating an isotropic voxel after splitting the source spectrum, SSADA aims to reduce flow directional sensitivity relative to IBDV. This could explain the reduced directional dependence of SSADA compared with IBDV, which did not split the OCT signal spectrum and therefore had much finer axial resolution compared to the transverse spot size. Additionally, an intralipid solution composed of spherical particles of 0.356 µm diameter was used as the flowing phantom in [95,96]. In contrast, our flow phantom used whole blood, where the predominant scattering particles are red blood cells that have an average diameter of 7.2 µm and a disc-like shape [98]. Because Doppler variance decreases with increasing particle size [99], we expect that the saturation velocity decreases as well. Additional effects of blood such as tumbling and high viscosity could cause these observed differences as well.

5.4.3.4 Clinical SSADA

The flow phantom experiments presented in this work have a number of implications for clinical SSADA. We have shown that SSADA measurements have little dependence on Doppler angle for Doppler angles less than 10°. So for retinal imaging, where the OCT beam is nearly perpendicular to retinal vessels, clinical SSADA may be effectively angle independent. The clinical SSADA scans previously published by our research group [81,100,101] used an inter-frame time separation of 2.0 ms, which is on the long side of the time scale investigated in this article.
Referring back to Figure 5.5, we see that for this time scale the saturation velocity is 0.2 mm/s (0.3 mm/s if adjusted for the longer wavelength of 1050 nm). Since human retinal capillary flow speeds have been estimated to be in the range of 0.4-3.0 mm/s [102-104], this suggests that SSADA is well suited for detailed angiography down to the capillary level. However, the decorrelation signal should be saturated even at the capillary level according to our phantom calibration. This does not entirely agree with the clinical retinal angiograms that we have observed, where there is a gradation of flow signals at the smallest retinal vessels as visualized by a false color scale [81,100], and this graduated flow signal increased with visual stimulation [101]. We hypothesize that this gradation could be due to the fact that capillaries are smaller than the diameter of the OCT probe beam. In this case, the static background tissue may lower the overall decorrelation value and thus increase the saturation velocity. Our flow phantom used a capillary tube that is much larger than the OCT beam diameter. This is one aspect in which our phantom setup differs significantly from real human capillaries. In other aspects, such as the beam diameter, the use of whole blood, and the SSADA algorithm, the phantom results should simulate the clinical parameters well. For the purpose of measuring flow velocity, a faster OCT system with a shorter inter-frame time scale would be better suited. Specifically, in order to bring the saturation velocity above 3.0 mm/s and thus enable capillary velocity quantification within the linear range, a time separation of less than 139 μs is needed, according to Figure 5.5. If our M-B mode imaging protocol calls for 200 A-lines per B-scan, then an imaging speed of 1.4 million A-lines per second (1.4 MHz) is needed. Thus, megahertz OCT systems [105,106] may be useful for blood flow velocity quantification within the linear range of SSADA.
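The system-speed requirement above follows from the inverse scaling between saturation velocity and inter-frame time. A minimal sketch, assuming a simple 1/Δt model anchored at the values quoted in the text (the constant and function names are ours, not part of the SSADA implementation):

```python
# Decorrelation saturates once scatterers traverse roughly one beam
# diameter between frames, so the saturation velocity scales as 1/dt.
# Anchor point from the phantom calibration quoted above:
# v_sat = 0.2 mm/s at an inter-frame time of dt = 2.0 ms.
V_SAT_REF_MM_S = 0.2   # saturation velocity at the reference time scale
DT_REF_MS = 2.0        # clinical inter-frame time separation (ms)

def saturation_velocity_mm_s(dt_ms):
    """Predicted saturation velocity (mm/s) for an inter-frame time dt_ms."""
    return V_SAT_REF_MM_S * DT_REF_MS / dt_ms

print(saturation_velocity_mm_s(0.02))  # ~20 mm/s at the 0.02 ms time scale

# The 139 us separation quoted above then sets the minimum A-line rate
# for an M-B protocol with 200 A-lines per B-scan:
a_line_rate_hz = 200 / 139e-6
print(f"{a_line_rate_hz / 1e6:.1f} MHz")  # ~1.4 MHz
```

The same model gives a saturation velocity of roughly 2.9 mm/s at 139 μs, consistent with the stated goal of keeping capillary speeds up to about 3.0 mm/s within the linear range.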
5.4.4 Summary

We performed in vitro flow phantom experiments in order to derive a linear relationship between SSADA measurements and absolute flow velocity. We hypothesized that SSADA measurements are independent of Doppler angle after splitting the spectrum to create an isotropic voxel. Contrary to our hypothesis, an angle dependence was found, but the variation due to Doppler angle was small and much reduced compared to previous work using non-split-spectrum intensity-based angiography [95]. The phantom experiments established that the decorrelation signal is linearly related to velocity over a limited range, and that this range is dependent on the time scale of the SSADA measurement. The saturation velocities were also inversely proportional to the SSADA time scale. Extrapolation to OCT systems used for clinical retinal imaging with SSADA [81,100] indicates that the systems should be detecting flow down to the slowest capillary velocity.

5.5 Conclusions

In this chapter we introduced a novel method for visualizing and quantifying retinal blood flow velocities called split-spectrum amplitude-decorrelation angiography (SSADA). Unlike Doppler OCT techniques, our amplitude-based SSADA algorithm is sensitive to transverse flow and is thus suitable for flow detection in the human macular region, in which the blood vessels are nearly perpendicular to the input probe beam. One advantage that our SSADA algorithm has over other amplitude-based angiography algorithms is that SSADA computes decorrelation at a pixel after damping the axial flow sensitivity, which reduces its susceptibility to pulsation noise that is primarily manifested in the axial direction. In Section 5.2, we described the theory behind the SSADA algorithm and described how the axial resolution could be digitally coarsened by splitting the source power spectrum. Then, in Section 5.3, we demonstrated that SSADA can be used to detect transverse flow in the human macular region.
After demonstrating the ability of our novel algorithm to detect flow in retinal blood vessels, we extended SSADA to enable blood velocity quantification in Section 5.4 using in vitro flow phantom experiments. We first showed that SSADA is insensitive to Doppler angles of less than 10°, which means that SSADA is effectively angle independent in the human macular region. This is important for clinical SSADA implementations because it means that measurement of the Doppler angle is not necessary to compute absolute blood flow velocities. Then, we derived a linear model that relates the SSADA signal to total blood flow velocity measurements. We identified an operating range over which our linear model is valid and established that our clinical SSADA implementation should be detecting flow down to the lower capillary velocities. Thus, SSADA is a novel technique with reduced axial noise sensitivity that can be used to quantify absolute retinal blood flow velocities.

Chapter 6
Summary and Possible Extensions

In this thesis, we presented a number of novel contributions to Fourier domain OCT imaging of the human retina. One common theme throughout this work was the generation of synthetic data to provide a ground truth for quantitative evaluation of various OCT image processing algorithms. Synthetic data is always first validated against either in vivo or in vitro data and then used to extrapolate the results under a variety of conditions. For structural OCT imaging, we first discussed the importance of retinal layer segmentation for diagnosing ocular pathologies. Then, we introduced a novel retinal layer segmentation algorithm that is capable of segmenting the nine retinal layer boundaries. The segmentation algorithm utilized a sparse learning approach to calculate a sparse approximation to an in vivo retinal image. A graph theoretic approach is then taken to segment the retinal boundaries using the sparse approximation.
To enable quantitative evaluation of the segmentation algorithm, we developed a novel technique for generating synthetic OCT intensity images and validated it by comparing to in vivo retinal scans. We then used our technique to provide an objective and quantitative evaluation of our segmentation algorithm, which was shown to be fairly robust to signal-to-noise ratios down to 10 dB. The synthetic OCT intensity image generation technique could be very useful for the evaluation of other image processing algorithms.

For functional OCT imaging, we first discussed the importance of measuring blood flow velocities in the retina for diagnosing ocular pathologies. We then implemented a novel simulation of a retinal vessel immersed in tissue and subsequently used it to generate synthetic Doppler OCT retinal data. We presented the utility of our simulation with two case studies. First, we studied the accuracy of two Doppler OCT algorithms. We showed that the phase-resolved (PR) algorithm is more accurate than the moving-scatterer-sensitive (MSS) algorithm in terms of measuring blood flow and vessel diameter. However, for vessel diameters less than 3 times the beam width, we showed that PR underestimates flow velocity measurements. Second, we used the simulation to analyze the effect of transverse beam step size on the velocities calculated by PR Doppler OCT. We demonstrated that increasing the transverse step size produces a systematic reduction in measured Doppler velocities. We then proposed two techniques, including a table lookup method and careful phase unwrapping, for correcting the measured velocities and discussed the practical limitations of each method. We also presented an amplitude-based flow velocity imaging algorithm called split-spectrum amplitude-decorrelation angiography (SSADA).
In contrast to Doppler based methods, we showed that SSADA is sensitive to flow that is transverse to the input beam, which has important practical implications because most of the blood vessels in the human macula are nearly perpendicular to the OCT illumination beam. We then used in vitro flow phantom experiments in order to derive a linear relationship between SSADA measurements and absolute flow velocity to enable flow quantification with SSADA. The phantom experiments established that the decorrelation signal is linearly related to velocity over a limited range, and that this range is dependent on the time scale of the SSADA measurement. The saturation velocities were also inversely proportional to the SSADA time scale. Extrapolation to OCT systems used for clinical retinal imaging with SSADA indicates that the systems should be detecting flow down to the lowest range of capillary velocities.

Each topic presented in this thesis could be extended in future work. Our novel segmentation algorithm could be extended to relax the constraint that retinal layers must extend across the entire image. Doing so might allow pathological cases to be handled. Also, speckle statistics could be incorporated into the graph cut algorithm to increase robustness. The synthetic image generation method could be extended to allow for the simulation of different types of retinal structures, such as pathologies. Our retinal vessel simulation was shown to be useful for studying Doppler OCT measurements and generating speckle patterns. In order to enable its application to SSADA measurements, the geometry of the scatterers should be made more realistic. For example, the blood cells could be modeled by spheres with diameters equal to the approximate red blood cell diameter instead of using a point scatterer model. Also, a more realistic model could be used for particle entry into the scan region.
Additionally, the vessel simulation method could be used to develop an improved method for estimating blood vessel diameters. The SSADA algorithm could be improved by incorporating image registration to reduce the effect of bulk motion noise on decorrelation measurements. For quantitative SSADA, a more accurate nonlinear model (e.g., sigmoidal) could be examined to extend the operating range and enable absolute flow velocity measurements for fast flow speeds. The model could also be tested on ultra-high speed systems for in vivo velocity quantification.

Reference List

[1] D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto. Optical coherence tomography. Science, 254(5035):1178-1181, 1991.
[2] W. Drexler and J. G. Fujimoto (eds.). Optical Coherence Tomography: Technology and Applications. Springer, Berlin, Germany, 2008.
[3] Rhcastilhos. Schematic diagram of the human eye. https://commons.wikimedia.org/wiki/File:Schematic_diagram_of_the_human_eye_en.svg, 2007. [Retrieved April 5, 2011].
[4] A. Chan, J. S. Duker, T. H. Ko, J. G. Fujimoto, and J. S. Schuman. Normal macular thickness measurements in healthy eyes using Stratus optical coherence tomography. Arch. Ophthalmol., 124(2):193-198, 2006.
[5] J. S. Schuman, M. R. Hee, C. A. Puliafito, C. Wong, T. Pedut-Kloitzman, C. P. Lin, E. Hertzmark, J. A. Izatt, E. A. Swanson, and J. G. Fujimoto. Quantification of nerve fiber layer thickness in normal and glaucomatous eyes using optical coherence tomography. Arch. Ophthalmol. (Chicago), 113:586-596, 1995.
[6] M. R. Hee, C. A. Puliafito, J. S. Duker, E. Reichel, J. G. Coker, J. R. Wilkins, J. S. Schuman, E. A. Swanson, and J. G. Fujimoto. Topography of diabetic macular edema with optical coherence tomography. Ophthalmology (Philadelphia), 105(2):360-370, 1998.
[7] A. D. Poularikas and S. Seely. Signals and Systems. Krieger Publishing Co., New York, NY, 1991.
[8] S. Nemoto.
Determination of waist parameters of a Gaussian beam. Applied Optics, 25(21):3859-3863, 1986.
[9] S. H. Yun, G. J. Tearney, J. F. de Boer, and B. E. Bouma. Motion artifacts in optical coherence tomography with frequency-domain imaging. Optics Express, 12(13):2977-2998, 2004.
[10] T. J. Eom, Y. C. Ahn, C. S. Kim, and Z. Chen. Calibration and characterization protocol for spectral-domain optical coherence tomography using fiber Bragg gratings. Journal of Biomedical Optics Letters, 16(3):030501, 2011.
[11] X. J. Wang, T. E. Milner, and J. S. Nelson. Characterization of fluid flow velocity by optical Doppler tomography. Optics Letters, 20(11):1337-1339, 1995.
[12] Y. Wang, B. Bower, J. A. Izatt, O. Tan, and D. Huang. In vivo total retinal blood flow measurement by Fourier domain Doppler optical coherence tomography. Journal of Biomedical Optics, 12(4):041215, 2007.
[13] Y. Zhao, Z. Chen, C. Saxer, S. Xiang, J. F. de Boer, and J. S. Nelson. Phase-resolved optical coherence tomography and optical Doppler tomography for imaging blood flow in human skin with fast scanning speed and high velocity sensitivity. Optics Letters, 25(2):114-116, 2000.
[14] V. X. D. Yang, M. L. Gordon, A. Mok, Y. Zhao, Z. Chen, R. S. C. Cobbold, B. C. Wilson, and I. A. Vitkin. Improved phase-resolved optical Doppler tomography using the Kasai velocity estimator and histogram segmentation. Optics Communications, 208(4):209-214, 2002.
[15] R. Leitgeb, L. Schmetterer, W. Drexler, A. Fercher, R. Zawadzki, and T. Bajraszewski. Real-time assessment of retinal blood flow with ultrafast acquisition by color Doppler Fourier domain optical coherence tomography. Optics Express, 11(23):3116-3121, 2003.
[16] M. A. Choma, A. K. Ellerbee, S. Yazdanfar, and J. A. Izatt. Doppler flow imaging of cytoplasmic streaming using spectral domain phase microscopy. Journal of Biomedical Optics, 11(2):024014, 2006.
[17] B. Baumann, B. Potsaid, M. F. Kraus, J. J. Liu, D. Huang, J. Hornegger, A. E. Cable, J. S. Duker, and J. G. Fujimoto.
Total retinal blood flow measurement with ultrahigh speed swept source/Fourier domain OCT. Biomedical Optics Express, 2(6):1539-1552, 2011.
[18] A. Szkulmowska, M. Szkulmowski, A. Kowalczyk, and M. Wojtkowski. Phase-resolved Doppler optical coherence tomography - limitations and improvements. Optics Letters, 33(13):1425-1427, 2008.
[19] Y. Wang, B. Bower, J. A. Izatt, O. Tan, and D. Huang. Retinal blood flow measurement by circumpapillary Fourier domain Doppler optical coherence tomography. Journal of Biomedical Optics, 13(6):064003, 2008.
[20] C. A. Puliafito, M. R. Hee, C. P. Lin, J. S. Schuman, J. S. Duker, J. A. Izatt, E. A. Swanson, and J. G. Fujimoto. Imaging of macular diseases with optical coherence tomography. Ophthalmology, 102(2):217-229, 1995.
[21] U. Schmidt-Erfurth, R. A. Leitgeb, S. Michels, B. Povazay, S. Sacu, B. Hermann, C. Ahlers, H. Sattmann, C. Scholda, A. F. Fercher, and W. Drexler. Three-dimensional ultrahigh-resolution optical coherence tomography of macular diseases. Invest. Ophthalmol. Vis. Sci., 46(9):3393-3402, 2005.
[22] M. Mujat, R. Chan, B. Cense, B. H. Park, T. Akkin, C. Joo, T. C. Chen, and J. F. de Boer. Retinal nerve fiber layer thickness map determined from optical coherence tomography images. Optics Express, 13(23):9480-9491, 2005.
[23] A. Mishra, A. Wong, K. Bizheva, and D. Clausi. Intra-retinal layer segmentation in optical coherence tomography images. Optics Express, 17(26):23719-23728, 2009.
[24] J. M. Schmitt, S. H. Xiang, and K. M. Yung. Speckle in optical coherence tomography. Journal of Biomedical Optics, 4(1):95-105, 1999.
[25] J. Tokayer, A. Ortega, and D. Huang. Sparsity-based retinal layer segmentation of optical coherence tomography images. In IEEE International Conference on Image Processing, 2011.
[26] S. J. Chiu, X. T. Li, P. Nicholas, C. A. Toth, J. A. Izatt, and S. Farsiu. Automatic segmentation of seven retinal layers in SDOCT images congruent with expert manual segmentation. Optics Express, 18(18):19413-19428, 2010.
[27] M. K.
Garvin, M. D. Abramoff, R. Kardon, S. R. Russell, X. Wu, and M. Sonka. Intraretinal layer segmentation of macular optical coherence tomography images using optimal 3D graph search. Medical Imaging, IEEE Transactions on, 27(10):1495-1505, 2008.
[28] Q. Yang, C. A. Reisman, Z. Wang, Y. Fukuma, M. Hangai, N. Yoshimura, A. Tomidokoro, M. Araie, A. S. Raza, D. C. Hood, and K. Chan. Automated layer segmentation of macular OCT images using dual-scale gradient information. Optics Express, 18(20):21293-21307, 2010.
[29] H. Ishikawa, D. Stein, G. Wollstein, S. Beaton, J. G. Fujimoto, and J. S. Schuman. Macular segmentation with optical coherence tomography. Investigative Ophthalmology and Visual Science, 46(6):2012-2017, 2005.
[30] D. C. Fernandez, H. M. Salinas, and C. A. Puliafito. Automated detection of retinal layer structures on optical coherence tomography images. Optics Express, 13(25):10200-10216, 2005.
[31] G. Wang, Y. Bresler, and V. Ntziachristos. Guest editorial: Compressive sensing for biomedical imaging. IEEE Transactions on Medical Imaging, 30(5):1013-1016, 2011.
[32] N. Otsu. A threshold selection method from gray-level histograms. IEEE Trans. Sys., Man., Cyber., 9(1):62-66, 1979.
[33] R. Pique-Regi, J. Monso-Varona, A. Ortega, R. C. Seeger, T. J. Triche, and S. Asgharzadeh. Sparse representation and Bayesian detection of genome copy number alterations from microarray data. Bioinformatics, 24(3):309-318, 2008.
[34] M. E. Tipping and A. C. Faul. Fast marginal likelihood maximisation for sparse Bayesian models. In C. M. Bishop and B. J. Frey, editors, Proc. of the Ninth International Workshop on Artificial Intelligence and Statistics, Key West, FL, 2003.
[35] E. W. Dijkstra. A note on two problems in connexion with graphs. Numerische Mathematik, 1(1):269-271, 1959.
[36] T. H. Ko, J. G. Fujimoto, J. S. Schuman, L. A. Paunescu, A. M. Kowalevicz, I. Hartl, W. Drexler, G. Wollstein, H. Ishikawa, and J. S. Duker.
Comparison of ultrahigh- and standard-resolution optical coherence tomography for imaging macular pathology. Ophthalmology, 112:1922-1935, 2005.
[37] P. Serranho, C. Maduro, T. Santos, J. Cunha-Vaz, and R. Bernardes. Synthetic OCT data for image processing performance testing. In IEEE International Conference on Image Processing, 2011.
[38] B. Delaunay. Sur la sphère vide. Bull. Acad. Science USSR VII: Class. Sci. Mat. Nat., pages 793-800, 1934.
[39] R. Franke and G. Nielson. Smooth interpolation of large sets of scattered data. Int. J. Numer. Meths. Eng., 15:1691-1704, 1980.
[40] O. R. Musin. Properties of the Delaunay triangulation. In Proceedings of the Thirteenth Annual Symposium on Computational Geometry, pages 424-427, 1997.
[41] R. Stafford. How to get pixels from a triangle. http://www.mathworks.com/matlabcentral/newsreader/view_thread/170977, 2008. [Retrieved March 23, 2012].
[42] S. M. Motaghian Nezam, C. Joo, G. J. Tearney, and J. F. de Boer. Application of maximum likelihood estimator in nano-scale optical path length measurement using spectral-domain optical coherence phase microscopy. Optics Express, 16(22):17186-17195, 2008.
[43] J. Sijbers, A. A. den Dekker, E. Raman, and D. Van Dyck. Parameter estimation from magnitude MR images. International Journal of Imaging Systems and Technology, 10:109-114, 1999.
[44] B. W. Silverman. Density Estimation for Statistics and Data Analysis. Chapman and Hall, New York, NY, 1986.
[45] G. Lamouche, C. E. Bisaillon, S. Vergnole, and J. P. Monchalin. On the speckle size in optical coherence tomography. In Proc. SPIE 6847, Optical Coherence Tomography and Coherence Domain Optical Methods in Biomedicine XII, 684724M, 2008.
[46] S. Lu, C. Y. Cheung, J. Liu, J. H. Lim, C. K. Leung, and T. Y. Wong. Automated layer segmentation of optical coherence tomography images. IEEE Transactions on Biomedical Engineering, 57(10):2605-2608, 2010.
[47] J. Flammer, S. Orgül, V. P. Costa, N. Orzalesi, G. K. Krieglstein, L. M. Serra, J. P.
Renard, and E. Stefánsson. The impact of ocular blood flow in glaucoma. Progress in Retinal and Eye Research, 21(4):359-393, 2002.
[48] L. Schmetterer and M. Wolzt. Ocular blood flow and associated functional deviations in diabetic retinopathy. Diabetologia, 42(4):387-405, 1999.
[49] R. W. Flower. Extraction of choriocapillaris hemodynamic data from ICG fluorescence angiograms. Investigative Ophthalmology and Visual Science, 34:2720-2729, 1993.
[50] G. T. Feke, D. G. Goger, H. Tagawa, and F. C. Delori. Laser Doppler technique for absolute measurement of blood speed in retinal vessels. IEEE Transactions on Biomedical Engineering, 34(9):673-680, 1987.
[51] J. Tokayer, O. Tan, and D. Huang. Effect of blood vessel diameter on relative blood flow estimates in Doppler optical coherence tomography algorithms. In Proc. SPIE 7889, Optical Coherence Tomography and Coherence Domain Optical Methods in Biomedicine XV, 78892X, 2011.
[52] B. J. Vakoc, G. J. Tearney, and B. E. Bouma. Statistical properties of phase-decorrelation in phase-resolved Doppler optical coherence tomography. IEEE Transactions on Medical Imaging, 28(6):814-821, 2009.
[53] V. J. Srinivasan, S. Sakadzic, I. Gorczynska, S. Ruvinskaya, W. Wu, J. G. Fujimoto, and D. A. Boas. Quantitative cerebral blood flow with optical coherence tomography. Optics Express, 18(3):2477-2494, 2010.
[54] C. E. Riva, J. E. Grunwald, S. H. Sinclair, and B. L. Petrig. Blood velocity and volumetric flow rate in human retinal vessels. Investigative Ophthalmology and Visual Science, 26:1124-1132, 1985.
[55] H. Ren and X. Li. Clutter rejection filters for optical Doppler tomography. Optics Express, 14(13):6103-6112, 2006.
[56] M. Szkulmowski, A. Szkulmowska, T. Bajraszewski, A. Kowalczyk, and M. Wojtkowski. High-penetration swept source Doppler optical coherence angiography by fully numerical phase stabilization. Optics Express, 16(9):6008-6025, 2008.
[57] H. Wehbe, M. Ruggeri, G. Gregori, S. Jiao, C. A. Puliafito, and W. Zhao.
Automatic retinal blood flow calculation using spectral domain optical coherence tomography. Optics Express, 15(23):15193-15206, 2007.
[58] S. G. Proskurin, Y. He, and R. K. Wang. Determination of flow velocity vector based on Doppler shift and spectrum broadening with optical coherence tomography. Optics Letters, 28(14):1227-1229, 2003.
[59] C. Smith. GoldsteinUnwrap2D r1. http://www.mathworks.com/matlabcentral/fileexchange/29497, 2010. [Retrieved July 1, 2013].
[60] D. C. Ghiglia and M. D. Pritt. Two-Dimensional Phase Unwrapping: Theory, Algorithms and Software. Wiley-Interscience, Malabar, FL, 1991.
[61] K. Itoh. Analysis of the phase unwrapping problem. Applied Optics, 21(14):2470, 1982.
[62] R. Goldstein, H. Zebker, and C. Werner. Satellite radar interferometry: Two-dimensional phase unwrapping. Radio Science, 23:713-720, 1988.
[63] H. C. Hendargo, R. P. McNabb, A. H. Dhalla, N. Shepherd, and J. A. Izatt. Doppler velocity detection limits in spectrometer-based versus swept-source optical coherence tomography. Biomedical Optics Express, 2(8):2175-2188, 2011.
[64] R. K. Wang and Z. Ma. Real-time flow imaging by removing texture pattern artifacts in spectral-domain optical Doppler tomography. Optics Letters, 31(20):3001-3003, 2006.
[65] Y. J. Hong, S. Makita, F. Jaillon, M. J. Ju, J. Min, B. H. Lee, M. Itoh, M. Miura, and Y. Yasuno. High-penetration swept source Doppler optical coherence angiography by fully numerical phase stabilization. Optics Express, 20(3):2740-2760, 2012.
[66] H. Ren, T. Sun, D. J. MacDonald, M. J. Cobb, and X. Li. Real-time in vivo blood-flow imaging by moving-scatterer-sensitive spectral-domain optical Doppler tomography. Optics Letters, 31(7):927-929, 2006.
[67] Y. K. Tao, A. M. Davis, and J. A. Izatt. Single-pass volumetric bidirectional blood flow imaging spectral domain optical coherence tomography using a modified Hilbert transform. Optics Express, 16(16):12350-12361, 2008.
[68] I. Grulkowski, I. Gorczynska, M. Szkulmowski, D. Szlag, A.
Szkulmowska, R. A. Leitgeb, A. Kowalczyk, and M. Wojtkowski. Scanning protocols dedicated to smart velocity ranging in spectral OCT. Optics Express, 17(26):23736-23754, 2008.
[69] S. Makita, Y. Hong, M. Yamanari, T. Yatagai, and Y. Yasuno. Optical coherence angiography. Optics Express, 14(17):7821-7840, 2006.
[70] L. An and R. K. Wang. In vivo volumetric imaging of vascular perfusion within human retina and choroids with optical micro-angiography. Optics Express, 16(15):11438-11452, 2008.
[71] R. K. Wang, L. An, P. Francis, and D. J. Wilson. Depth-resolved imaging of capillary networks in retina and choroid using ultrahigh sensitive optical microangiography. Optics Letters, 35(9):1467-1469, 2010.
[72] L. Yu and Z. Chen. Doppler variance imaging for three-dimensional retina and choroid angiography. Journal of Biomedical Optics, 15(1):016029, 2010.
[73] G. Liu, W. Qi, L. Yu, and Z. Chen. Real-time bulk-motion-correction free Doppler variance optical coherence tomography for choroidal capillary vasculature imaging. Optics Express, 19(4):3657-3666, 2011.
[74] J. Fingler, D. Schwartz, C. Yang, and S. E. Fraser. Mobility and transverse flow visualization using phase variance contrast with spectral domain optical coherence tomography. Optics Express, 15(20):12636-12653, 2007.
[75] B. J. Vakoc, R. M. Lanning, J. A. Tyrell, T. P. Padera, L. A. Bartlett, T. Stylianopoulos, L. L. Munn, G. J. Tearney, D. Fukumura, R. K. Jain, and B. E. Bouma. Three-dimensional microscopy of the tumor microenvironment in vivo using optical frequency domain imaging. Nature Medicine, 15(10):1219-1223, 2009.
[76] Y. Yasuno, Y. Hong, S. Makita, M. Yamanari, M. Akiba, M. Miura, and T. Yatagai. In vivo high-contrast imaging of deep posterior eye by 1-μm swept source optical coherence tomography and scattering optical coherence angiography. Optics Express, 15(10):6121-6139, 2007.
[77] Y. Hong, S. Makita, M. Yamanari, M. Miura, S. Kim, T. Yatagai, and Y. Yasuno.
Three-dimensional visualization of choroidal vessels by using standard and ultra-high resolution scattering optical coherence angiography. Optics Express, 15(12):7538-7550, 2007.
[78] A. Mariampillai, B. A. Standish, E. H. Moriyama, M. Khurana, N. R. Munce, M. K. K. Leung, J. Jiang, A. Cable, B. C. Wilson, I. A. Vitkin, and V. X. D. Yang. Speckle variance detection of microvasculature using swept-source optical coherence tomography. Optics Letters, 33(13):1530-1532, 2008.
[79] H. C. Hendargo, R. Estrada, S. J. Chiu, C. Tomasi, S. Farsiu, and J. A. Izatt. Automated non-rigid registration and mosaicing for robust imaging of distinct retinal capillary beds using speckle variance optical coherence tomography. Biomedical Optics Express, 4(6):803-821, 2013.
[80] E. Jonathan, J. Enfield, and M. J. Leahy. Correlation mapping method for generating microcirculation morphology from optical coherence tomography (OCT) intensity images. Journal of Biophotonics, 4(9):583-587, 2011.
[81] Y. Jia, O. Tan, J. Tokayer, B. Potsaid, Y. Wang, J. J. Liu, M. F. Kraus, H. Subhash, J. G. Fujimoto, J. Hornegger, and D. Huang. Split-spectrum amplitude-decorrelation angiography with optical coherence tomography. Optics Express, 20(4):4710-4725, 2012.
[82] J. Tokayer, Y. Jia, A. Dhalla, and D. Huang. Blood flow velocity quantification using split-spectrum amplitude-decorrelation angiography with optical coherence tomography. Biomedical Optics Express, 4(10):1909-1924, 2013.
[83] W. Li, C. T. Lancee, E. I. Cespedes, A. F. W. van der Steen, and N. Bom. Decorrelation properties of intravascular echo signals: Potentials for blood velocity estimation. Journal of the Acoustical Society of America, 71(14):70D-86D, 1993.
[84] W. Li, A. F. W. van der Steen, C. T. Lancee, E. I. Cespedes, and N. Bom. Blood flow imaging and volume flow quantitation with intravascular ultrasound. Ultrasound in Medicine and Biology, 24(2):203-214, 1998.
[85] W. Li, C. T. Lancee, E. I. Cespedes, A. F. W. van der Steen, and N. Bom.
Decorrelation of intravascular ultrasound signals: A computer simulation study. In Proc. IEEE Ultrasonics Symposium, volume 2, pages 1165-1168, 1997.
[86] O. Arend, A. Remky, D. Evans, R. Stüber, and A. Harris. Contrast sensitivity loss is coupled with capillary dropout in patients with diabetes. Investigative Ophthalmology and Visual Science, 38(9):1819-1824, 1997.
[87] J. Zhao, D. A. Frambach, P. P. Lee, M. Lee, and P. F. Lopez. Delayed macular choriocapillary circulation in age-related macular degeneration. Int. Ophthalmol., 19(1):1-12, 1995.
[88] N. M. Bressler. Age-related macular degeneration is the leading cause of blindness. JAMA, 291(15):1900-1901, 2004.
[89] R. K. Wang and L. An. Multifunctional imaging of human retina and choroid with 1050-nm spectral domain optical coherence tomography at 92-kHz line scan rate. J. Biomed. Opt., 16(5):050503, 2011.
[90] D. Y. Kim, J. Fingler, J. S. Werner, D. M. Schwartz, S. E. Fraser, and R. J. Zawadzki. In vivo volumetric imaging of human retinal circulation with phase-variance optical coherence tomography. Biomed. Opt. Express, 2(6):1504-1513, 2011.
[91] L. Laatikainen and J. Larinkari. Capillary-free area of the fovea with advancing age. Invest. Ophthalmol. Vis. Sci., 16(12):1154-1157, 1977.
[92] Roh and Weiter. Retinal and choroidal circulation. Mosby Elsevier, St. Louis, MO, 2008.
[93] R. H. W. Funk. Blood supply of the retina. Ophthalmic Res., 29(5):320-325, 1997.
[94] G. Liu, L. Chou, W. Jia, W. Qi, B. Choi, and Z. Chen. Intensity-based modified Doppler variance algorithm: application to phase instable and phase stable optical coherence tomography systems. Optics Express, 19(12):11429-11440, 2011.
[95] G. Liu, W. Jia, V. Sun, B. Choi, and Z. Chen. High-resolution imaging of microvasculature in human skin in vivo with optical coherence tomography. Optics Express, 20(7):7694-7705, 2012.
[96] G. Liu, A. J. Lin, B. J. Tromberg, and Z. Chen. A comparison of Doppler optical coherence tomography methods. Biomedical Optics Express, 3(10):2669-2680, 2012.
[97] S.
Yazdanfar, C. Yang, M. V. Sarunic, and J. A. Izatt. Frequency estimation precision in Doppler optical coherence tomography using the Cramer-Rao lower bound. Optics Express, 13(2):410-416, 2005.
[98] M. L. Turgeon. Clinical Hematology: Theory and Procedures. Lippincott, Williams and Wilkins, Baltimore, MD, 2005.
[99] C. S. Kim, W. Qi, J. Zhang, Y. J. Kwon, and Z. Chen. Imaging and quantifying Brownian motion of micro- and nanoparticles using phase-resolved Doppler variance optical coherence tomography. Journal of Biomedical Optics, 18(3):030504, 2013.
[100] Y. Jia, J. C. Morrison, J. Tokayer, O. Tan, L. Lombardi, B. Baumann, C. D. Lu, W. Choi, J. G. Fujimoto, and D. Huang. Quantitative OCT angiography of optic nerve head blood flow. Biomedical Optics Express, 3(12):3127-3137, 2012.
[101] E. Wei, Y. Jia, O. Tan, B. Potsaid, J. J. Liu, W. Choi, J. G. Fujimoto, and D. Huang. Parafoveal retinal vascular response to pattern visual stimulation assessed with OCT angiography. PLOS ONE (submitted).
[102] C. E. Riva and B. Petrig. Blue field entoptic phenomenon and blood velocity in the retinal capillaries. Journal of the Optical Society of America, 70(10):1234-1238, 1980.
[103] R. Flower, E. Peiretti, M. Magnani, L. Rossi, S. Serafini, Z. Gryczynski, and I. Gryczynski. Observation of erythrocyte dynamics in the retinal capillaries and choriocapillaris using ICG-loaded erythrocyte ghost cells. Investigative Ophthalmology and Visual Science, 49(12):5510-5516, 2008.
[104] J. Tam, P. Tiruveedhula, and A. Roorda. Characterization of single-file flow through human retinal parafoveal capillaries using an adaptive optics scanning laser ophthalmoscope. Biomedical Optics Express, 2(4):781-793, 2011.
[105] T. Klein, W. Wieser, C. M. Eigenwillig, B. R. Biedermann, and R. Huber. Megahertz OCT for ultrawide-field retinal imaging with a 1050 nm Fourier domain mode-locked laser. Optics Express, 19(4):3044-3062, 2011.
[106] B. Potsaid, V. Jayaraman, J. G. Fujimoto, J. Jiang, P. J. S. Heim, and A.
E. Cable. MEMS tunable VCSEL light source for ultrahigh speed 60 kHz-1 MHz axial scan rate and long range centimeter class OCT imaging. In Proc. SPIE 8213, Optical Coherence Tomography and Coherence Domain Optical Methods in Biomedicine XVI, 82130M, 2012.
Abstract
Optical coherence tomography (OCT) is a depth-resolved cross-sectional imaging modality capable of imaging biological tissue at the micron scale in vivo. OCT has found widespread use in ophthalmology because of the translucent nature of retinal tissue as well as OCT's high resolution and non-contact implementation. OCT is analogous to ultrasound, except that it uses light instead of sound to illuminate a sample. OCT measures travel time of light that is backscattered from structures at different depths or axial distances (axial scans) within the sample. Cross-sectional images are generated by scanning the optical beam in the transverse direction and performing successive axial scan measurements.

Structural imaging of the human retina has provided new insights into ophthalmic disease. For example, retinal nerve fiber layer thickness is utilized in glaucoma progression analysis. Thus, segmentation of OCT retinal images, which is complicated by low image contrast and high levels of speckle noise, is crucial for the study and diagnosis of ocular diseases. Measurement of retinal blood flow is also important for the study of many ocular pathologies including glaucoma and diabetic retinopathy. Doppler OCT, which utilizes measured Doppler frequency/phase shifts to infer blood flow speed, has consequently found widespread use in the ophthalmic community. The Doppler effect is only sensitive to movement along the incident beam's axis. In the case of the human macular region, however, many blood vessels are nearly perpendicular to the incident beam, which makes the measured Doppler frequency shifts small and difficult to detect. For this reason, measurement of blood flow in the macula using Doppler OCT is a challenging task.

This thesis encompasses contributions to both structural and functional OCT retinal imaging.
One common theme throughout this thesis is the generation of synthetic, or simulated, OCT data in order to quantitatively evaluate image processing algorithms. For each simulation application, the synthetic data is first validated against in vivo retinal data or in vitro experimental data and then used to extrapolate the results and draw inferences about algorithm performance.

We first design and implement a novel sparsity-based retinal layer segmentation algorithm that identifies the boundaries between retinal layers in OCT structural images. In contrast to most state-of-the-art retinal layer segmentation algorithms, ours does not require overly restrictive a priori assumptions about the expected layer structure and can thus potentially detect unexpected structures that arise in pathological cases. One difficulty in quantitatively evaluating segmentation algorithms on in vivo retinal images is that the ground truth segmentation is not known. Motivated by this fact, we design a novel method for generating synthetic structural retinal images for which the ground truth segmentation is known. Our technique improves upon an existing synthetic image generation method both by using a nonparametric smooth representation of the average intensity values in the image and by accurately modeling speckle noise. After verifying that synthetic images generated with our method accurately represent real in vivo retinal images, we show that our segmentation algorithm accurately identifies the retinal layer boundaries in the synthetic images and is robust to image noise.

We next examine functional OCT retinal imaging by studying methods for measuring blood flow velocities in retinal vessels. We develop a two-dimensional simulation of a blood vessel immersed in tissue that mimics functional imaging in a clinical setting more accurately than existing one-dimensional simulations of retinal blood flow.
This retinal vessel simulation enables quantitative evaluation of the accuracy of Doppler OCT algorithms by providing the ground truth blood flow velocities that Doppler OCT techniques aim to measure. We illustrate the utility of the simulation with two case studies. First, we evaluate the accuracy of two commonly used Doppler OCT algorithms as vessel diameters and the associated flow speeds become increasingly small. We show that the phase-resolved Doppler OCT algorithm performs best for large blood vessels, while neither algorithm performs well for very small blood vessels and slow blood flow velocities. This is the first quantitative evaluation of the estimation accuracy of Doppler OCT algorithms using synthetic data with known ground truth blood flow velocities. In the second case study, we examine the effect of the transverse step size between adjacent axial scans on the phase-resolved Doppler OCT algorithm. We show that the algorithm systematically underestimates blood flow velocities with increasing transverse step size and that this effect is more pronounced at higher velocities. We propose two techniques for correcting measured velocities that improve the accuracy of the phase-resolved Doppler OCT algorithm.

As previously mentioned, Doppler OCT techniques are sensitive only to velocities along the direction of the incident beam (axial velocities). While Doppler OCT is commonly used to measure blood flow near the optic disc, its application to flow estimation in the human macula is limited because many of the blood vessels in this region are nearly perpendicular to the incident beam. To overcome this limitation, we introduce a novel method for visualizing blood flow called split-spectrum amplitude-decorrelation angiography (SSADA).
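As a sketch of the kind of estimator evaluated in the first case study, the phase-resolved Doppler computation is conventionally implemented as a lag-one complex autocorrelation across adjacent axial scans (a generic textbook form, not code from this thesis; the array shapes and test signal are illustrative):

```python
import numpy as np

def phase_resolved_shift(ascans):
    """Mean inter-scan phase shift per depth pixel from complex axial scans
    of shape (n_scans, n_depth), via the lag-one autocorrelation. Summing
    complex products before taking the angle suppresses per-pixel phase
    noise; the result lies in (-pi, pi] and aliases beyond that range."""
    corr = np.sum(ascans[1:] * np.conj(ascans[:-1]), axis=0)
    return np.angle(corr)

# Illustrative check: a uniform phase ramp of 0.3 rad per scan is recovered.
n_scans, n_depth = 8, 4
ramp = np.exp(1j * 0.3 * np.arange(n_scans))
scans = np.tile(ramp[:, None], (1, n_depth))
shift = phase_resolved_shift(scans)
```

The estimated shift converts to axial velocity through the Doppler relation; any decorrelation between adjacent scans, such as that introduced by a large transverse step, biases the summed correlation and hence the estimate toward zero.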
In contrast to Doppler phase-based methods, SSADA relies on intensity information only and is sensitive to both transverse (perpendicular) and axial flow velocities. After establishing that SSADA can image flow in the human macula, we perform phantom flow experiments that establish a linear relationship between SSADA measurements and preset blood flow velocities, enabling velocity quantification, and thus blood flow measurement, in the human macula with SSADA.
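The amplitude-decorrelation core of SSADA can be illustrated with a toy computation. This is only a sketch: it omits the split-spectrum step, in which the source spectrum is divided into narrower bands and the decorrelation is averaged over bands and frames to reduce noise, and all signals below are synthetic Rayleigh speckle amplitudes rather than data from this work:

```python
import numpy as np

def decorrelation(a1, a2):
    """Decorrelation between two amplitude frames: 0 for an identical
    (static) speckle pattern, increasing toward 1 as the pattern
    decorrelates between frames (moving scatterers, i.e. flow)."""
    num = np.mean(a1 * a2)
    den = 0.5 * (np.mean(a1 * a1) + np.mean(a2 * a2))
    return 1.0 - num / den

rng = np.random.default_rng(0)
static = rng.rayleigh(1.0, 4096)  # fully developed speckle amplitude, no motion
moved = rng.rayleigh(1.0, 4096)   # independent speckle pattern: strong flow
d_static = decorrelation(static, static)
d_flow = decorrelation(static, moved)
```

Because the speckle pattern decorrelates under both transverse and axial displacement between frames, this measure can detect flow in near-perpendicular macular vessels where the Doppler phase shift nearly vanishes.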
Conceptually similar
Transmission tomography for high contrast media based on sparse data
All-optical signal processing toward reconfigurable optical networks
Adaptive event-driven simulation strategies for accurate and high performance retinal simulation
Multiplexing techniques and reconfigurable networking functions for optical communications using orbital angular momentum beams
Block-based image steganalysis: algorithm and performance evaluation
Structure learning for manifolds and multivariate time series
3D vessel mapping techniques for retina and brain as an early imaging biomarker for small vessel diseases
Neuromorphic motion sensing circuits in a silicon retina
Depth inference and visual saliency detection from 2D images
Motion pattern learning and applications to tracking and detection
Syntax-aware natural language processing techniques and their applications
Advanced features and feature selection methods for vibration and audio signal classification
Signal processing methods for brain connectivity
A signal processing approach to robust jet engine fault detection and diagnosis
Light management in nanostructures: nanowire solar cells and optical epitaxial growth
Elements of next-generation wireless video systems: millimeter-wave and device-to-device algorithms
High-frequency ultrasound array-based imaging system for biomedical applications
Sweep-free Brillouin optical time-domain analyzer
Thermal analysis and multiobjective optimization for three dimensional integrated circuits
Modeling and predicting with spatial‐temporal social networks
Asset Metadata
Creator
Tokayer, Jason Michael (author)
Core Title
Contributions to structural and functional retinal imaging via Fourier domain optical coherence tomography
School
Viterbi School of Engineering
Degree
Doctor of Philosophy
Degree Program
Electrical Engineering
Publication Date
10/02/2013
Defense Date
08/23/2013
Publisher
University of Southern California (original), University of Southern California. Libraries (digital)
Tag
biomedical image processing, OAI-PMH Harvest, optical coherence tomography
Format
application/pdf (imt)
Language
English
Contributor
Electronically uploaded by the author (provenance)
Advisor
Huang, David (committee chair), Ortega, Antonio K. (committee chair), Jenkins, Brian Keith (committee member), Mel, Bartlett W. (committee member)
Creator Email
tokayer@usc.edu, TokeMan24@gmail.com
Permanent Link (DOI)
https://doi.org/10.25549/usctheses-c3-334273
Unique identifier
UC11296555
Identifier
etd-TokayerJas-2075.pdf (filename), usctheses-c3-334273 (legacy record id)
Legacy Identifier
etd-TokayerJas-2075.pdf
Dmrecord
334273
Document Type
Dissertation
Format
application/pdf (imt)
Rights
Tokayer, Jason Michael
Type
texts
Source
University of Southern California (contributing entity), University of Southern California Dissertations and Theses (collection)
Access Conditions
The author retains rights to his/her dissertation, thesis or other graduate work according to U.S. copyright law. Electronic access is being provided by the USC Libraries in agreement with the a...
Repository Name
University of Southern California Digital Library
Repository Location
USC Digital Library, University of Southern California, University Park Campus MC 2810, 3434 South Grand Avenue, 2nd Floor, Los Angeles, California 90089-2810, USA