ADVANCED CORONARY CT ANGIOGRAPHY IMAGE PROCESSING TECHNIQUES

by Dongwoo Kang

A Dissertation Presented to the FACULTY OF THE GRADUATE SCHOOL, UNIVERSITY OF SOUTHERN CALIFORNIA, In Partial Fulfillment of the Requirements for the Degree DOCTOR OF PHILOSOPHY (ELECTRICAL ENGINEERING)

August 2013

Copyright 2013 Dongwoo Kang

Dedication

TO MY PARENTS

Acknowledgements

This doctoral thesis could not have been completed without the help of many great people. First of all, I would like to express my sincere gratitude to my advisor, Professor C.-C. Jay Kuo, for his support and guidance throughout my Ph.D. studies. It was my honor and luck to have studied with him. Professor Kuo provided the big picture and deep insight on seemingly impossible problems. I would also like to thank Dr. Damini Dey and Dr. Piotr Slomka for their support and guidance on the research problems. I also thank my committee members, Professors Richard Leahy, K. Kirk Shung, and Krishna Nayak, for their valuable comments during the Ph.D. defense and qualifying examination. I also thank my research group members and friends. In particular, I would like to express my gratitude to the senior Ph.D. students at USC, Taehoon Shin, Dr. Jonghye Woo, Joohyun Cho, Dr. SeongHo Cho, and Dr. Jewon Kang. They were not only good friends, but also good mentors in both research and life. I also thank my SSHS friends, SNU friends, USC friends, and collaborators at Cedars-Sinai Medical Center. Last but most important, my acknowledgment goes to my father Chang-Hyung Kang, my mother Young Soon Park, my brother Dongsoo Kang, and my sister Dr. Chorong Kang. Without their help and encouragement throughout the journey for my doctorate at USC, this dissertation would not have been possible.

Table of Contents

Dedication
Acknowledgements
List of Tables
List of Figures
Abstract
1 Introduction
  1.1 Significance of the Research
  1.2 Review of Previous Work
  1.3 Contributions of the Research
  1.4 Organization of the Thesis
2 Heart Chambers and Whole Heart Segmentation Techniques: A Review
  2.1 Introduction
  2.2 Clinical Background
  2.3 Segmentation Techniques
    2.3.1 Boundary-Driven Techniques
      2.3.1.1 Active contours (or snakes)
      2.3.1.2 Geodesic active contour
    2.3.2 Region-Based Techniques
      2.3.2.1 Mumford-Shah functional
      2.3.2.2 Level-set based technique
      2.3.2.3 Clustering
    2.3.3 Graph-Cuts Techniques
    2.3.4 Model-Fitting Techniques
  2.4 Application to Specific Imaging Modalities
    2.4.1 Ultrasound Imaging
    2.4.2 Nuclear Imaging (SPECT and PET)
    2.4.3 Computed Tomography (CT)
    2.4.4 MRI
    2.4.5 Parameter Correlation between Imaging
  2.5 Validation (Evaluation) of Segmentation Results
  2.6 Conclusion
3 Automated Detection of Nonobstructive and Obstructive Arterial Lesions from Coronary CT Angiography
  3.1 Introduction
  3.2 Background
    3.2.1 Clinical Background
    3.2.2 Technical Background
  3.3 Methods
    3.3.1 Centerline Extraction and Classification of Three Main Arteries
    3.3.2 Vessel Linearization
    3.3.3 Lumen Segmentation and Calcium Volume Measurement
    3.3.4 Detection of Lesions with Stenosis
    3.3.5 CCTA Acquisition and Reconstruction
    3.3.6 Visual Assessment and Reference Standard
    3.3.7 Statistical Analysis
  3.4 Results
  3.5 Discussion
  3.6 Conclusion
4 Image Denoising of Low-radiation Dose Coronary CT Angiography by an Adaptive Block-Matching 3D Algorithm
  4.1 Introduction
  4.2 Background
    4.2.1 Clinical Background
    4.2.2 Technical Background
  4.3 Methods
    4.3.1 Overview of the Block-Matching 3D Algorithm
    4.3.2 Adaptive Block-Matching 3D Scheme
    4.3.3 Data Acquisition and Evaluation Framework
  4.4 Experimental Results and Discussion
  4.5 Conclusion
5 Structured Learning Algorithm for Detection of Coronary Arterial Lesions from Coronary CT Angiography
  5.1 Introduction
  5.2 Background
  5.3 Methods
    5.3.1 First-level base classifier: analytic method
    5.3.2 First-level base classifier: learning-based method
    5.3.3 First-level base classifier: a scheme to balance the number of normal data and lesion data in the learning-based method
    5.3.4 Decision fusion: combination of the analytic method and the learning-based method
    5.3.5 Visual assessment and reference standard
  5.4 Results
  5.5 Discussion
  5.6 Conclusion
6 Conclusion and Future Work
  6.1 Summary of the Research
  6.2 Future Research Directions
Bibliography

List of Tables

3.1 Algorithm performance. Performance characteristics of the proposed algorithm for lesion (≥25% stenosis) detection in N=42 patients (13 completely normal). Of a total of 45 lesions with ≥25% stenosis, 6 were of severe stenosis (≥70%) and 14 were of obstructive stenosis (≥50%). Sensitivity was 93%, specificity was 81%, and accuracy was 83% per segment.
3.2 Reason for additionally detected lesions. Breakdown of detected false-positive lesions.
3.3 Program reproducibility. Results when different readers ran the program.
4.1 Typical organ radiation doses from various radiologic studies in milligrays (mGy) or millisieverts (mSv) [36]. In CT scanners, 1 mSv = 1 mGy.
5.1 Performance of the two first-level base classifiers and the final decision fusion algorithm. Base classifier 1 is the analytic algorithm and base classifier 2 is the SVM-based learning algorithm.
5.2 The proposed algorithm's performance in lesion (≥25% stenosis) detection in 42 patients (13 completely normal). Of a total of 252 coronary artery proximal and mid segments in 42 patients, 45 segments had lesions with ≥25% stenosis. Sensitivity was 93%, specificity was 95%, and balanced accuracy was 94% per segment.
5.3 Performance comparison with previous works: Arnoldi et al., 2010 [13]; Halpern et al., 2011 [123]; Kang et al., 2013 [139]; and the proposed method.

List of Figures

2.1 An example of heart chamber segmentation in 3-D contrast CT volumes with green line delineation for the LV endocardium, magenta for the LV epicardium, cyan for the left atrium (LA), orange for the right ventricle (RV), and blue for the right atrium (RA) [289]. The first row shows a full torso view (the first column) and the closeup view (the right three columns) of three orthogonal cuts from 3-D volume data. The four images in the second row show the tracking results for the heart chambers on a dynamic 3-D sequence with 10 frames. (Reproduced from Y. Zheng et al. with permission of 2008 IEEE.)
2.2 Short axis ultrasound images illustrating the tracking of the endocardial border by the active contour technique [184]. An initial contour evolves to the final contour as indicated by the white dotted line in each image. (Reproduced from I. Mikic et al. with permission of 1998 IEEE.)
2.3 The LV segmentation results for MR images by the level-set function with the visual information and anatomical constraints, where the sequence of images corresponds to the same slice but at different moments of a cardiac cycle [202]. (Reproduced from N. Paragios with permission of 2002 Springer Science and Business Media.)
2.4 LV segmentation examples for contrast cardiac CT images using the graph-cuts technique [136].
The segmentation algorithm used here combines EM-based region segmentation, Dijkstra active contours using graph-cuts, and shape information through a pattern matching strategy. The graph-cuts algorithm is used to cut the edges in the graph to form a closed-boundary contour between two different regions. (Reproduced from M. P. Jolly with permission of 2006 Springer Science and Business Media.)
2.5 Segmentation results obtained by applying the AAM technique to an ultrasound image sequence over one heartbeat period [29]: (a) the initial 1-phase AAM model positioned, (b) the match after 5 AAM iterations, (c) the final match after 20 AAM iterations, and (d) the manual contours for comparison. The first row shows phase image 1, the second row shows phase image 2, and the third row shows phase image 3 from 16 image phases. (Reproduced from J. Bosch et al. with permission of 2002 IEEE.)
2.6 The gated SPECT segmentation in Ref. [112], where the first row shows the original myocardial perfusion SPECT (MPS) and the second row shows the segmented images of the first row.
3.1 Invasive coronary angiography (ICA), which is the current gold standard.
3.2 Cross-sectional intravascular US view showing the external elastic membrane (green) and lumen intima border (yellow) [82].
3.3 CCTA mixed plaque (A) and contrast attenuation (HU) of epicardial fat (EF), non-calcified plaque (NCP), and the lumen region [80].
3.4 SCCT coronary segmentation diagram in axial view [215].
3.5 Flowchart of the proposed method.
3.6 The first row shows an example of extracted centerlines in mid LAD (red) and D1 (black). Each column shows an image at a different angle, where the colored lines indicate axes perpendicular to each other. The second row shows vessel linearization (LAD) of the first row in three orthogonal directions. Both rows show the same location of a lesion (25-49% stenosis by expert visual grading). The third row also shows the linearized vessel (LAD) of a normal CCTA dataset. The red outline shows the segmented lumen using our method. Detected lesion locations are marked by yellow points.
3.7 Example of lumen segmentation and lesion detection in LAD. The range of the proximal LAD lesion (stenosis 25-49%) marked by the expert is shown in purple. Lumen diameters computed from the segmented lumen are shown in blue, and lumen diameters cropped by anatomical knowledge are shown in cyan. The expected normal luminal diameter is derived from the scan by automated piecewise line fitting (shown in red) between branch points, and takes into account the normal tapering present in the dataset. The lesion with ≥25% stenosis detected by the algorithm, concordant with the expert observer, is marked with a black vertical line.
3.8 An example of 3D volume rendering (a), a detected nonobstructive lesion (mixed plaque) in a CCTA image (b), and the corresponding ICA image (c). Arrows in (a), (b), and (c) indicate the location of the same lesion (25-49% stenosis by expert visual grading from CCTA, and 34.0% stenosis by quantitative analysis from ICA).
3.9 The first row shows an example of extracted centerlines in mid LAD (red) and D1 (black). Each column shows an image at a different angle, where the colored lines indicate axes perpendicular to each other.
The second row shows vessel linearization (LAD) of the first row in three orthogonal directions. Both rows show the same location of a lesion (25-49% stenosis by expert visual grading). The third row also shows the linearized vessel (LAD) of a normal CCTA dataset. The red outline shows the segmented lumen using our method. Detected lesion locations are marked by yellow points.
3.10 Example of lumen segmentation and lesion detection in LAD. The range of the proximal LAD lesion (stenosis 25-49%) marked by the expert is shown in purple. Lumen diameters computed from the segmented lumen are shown in blue, and lumen diameters cropped by anatomical knowledge are shown in cyan. The expected normal luminal diameter is derived from the scan by automated piecewise line fitting (shown in red) between branch points, and takes into account the normal tapering present in the dataset. The lesion with ≥25% stenosis detected by the algorithm, concordant with the expert observer, is marked with a black vertical line. The second vertical line, at around x = 60 mm, is the lesion detected additionally by calcium volume measurement.
3.11 An example of 3D volume rendering (a), a nonobstructive lesion (mixed plaque) detected by stenosis calculation (CCTA image) (b), and a nonobstructive lesion (mixed plaque) detected by calcium volume measurement at a branch point (CCTA image) (c). The right arrow in (a) and the arrow in (b), and the left arrow in (a) and the arrow in (c), each indicate the location of the same lesion (25-49% stenosis). This patient did not undergo ICA.
3.12 Detection of a lesion with stenosis. Arrows indicate the location of lesions. Detected lesion with stenosis caused primarily by non-calcified plaque in the mid segment (the first row; 59% stenosis by quantitative analysis and 50-69% stenosis by expert visual grading). The second row shows the corresponding ICA images of the first row (36.2% stenosis by quantitative analysis).
3.13 Detection of a lesion with stenosis. Arrows indicate the location of lesions. Detected lesion with stenosis by mixed plaque in the proximal segment (the first row; 70% stenosis by quantitative analysis and 90-99% stenosis by expert visual grading). The second row shows the corresponding ICA images of the first row (62.4% stenosis by quantitative analysis).
3.14 An example of false positives. The algorithm detected this location as a lesion with stenosis ≥25%, but human expert readers graded it <25% stenosis.
4.1 Short-axis images of the left ventricle at end-diastole and end-systole with 120 kVp and 100 kVp are shown [191].
4.2 20 short-axis slices of the mid left ventricle over the R-R interval (0%-95%), scanned at 120 kVp [191]. The effective radiation dose was 4.9 mSv. Full tube current was applied only at 70% of the R-R interval, and minimal tube current was used in all other parts of the cardiac cycle. Although the radiation dose was greatly decreased, image noise was increased throughout the cardiac cycle except at 70% of the R-R interval.
4.3 Probability distribution from CT projection data [169]. The projection data can be approximated as Gaussian.
4.4 Grouping blocks from noisy natural images corrupted by AWGN, showing a reference block (R) and a few blocks matched to R [74].
4.5 Flowchart of the BM3D denoising algorithm [74].
4.6 Simulation result with the 70% phase. The original 70% phase (left), with additive white Gaussian noise of standard deviation 444.9 HU added (middle), and the denoised 70% phase (right).
4.7 One example of denoising results of low-radiation CT images with LV assessment. The figure shows mid left ventricular short-axis slices: three different mid left ventricular short-axis slices of the original 70% phase (first row), the original 40% phase (second row), and the denoised 40% phase (third row). The contours indicate the boundaries of the myocardium (green) and blood pool (red). The measured myocardial masses were 130 g (70% phase), 112 g (40% phase), and 133 g (denoised 40% phase).
4.8 Myocardial mass measurement from 7 patient datasets for the 70% phase, the 40% phase, and the denoised 40% phase. There was no significant difference between the 70% phase and the denoised 40% phase masses (NS: not significant).
4.9 Image noise (standard deviation of HU) in the ROIs set in the blood pool and myocardium of the LV.
4.10 Comparison to other methods. Original 70% phase (1st row), original 40% phase (2nd row), and the denoised 40% phase by BM3D (3rd row). In each row, the short axis view (left), 4-chamber view (middle), and 3D volume rendering are presented.
5.1 Flowchart of the structured learning algorithm.
5.2 Example of lumen segmentation and lesion detection in a linearized volume in LAD [139]. The range of the proximal LAD lesion (stenosis 25%-49%) marked by the expert is shown as a small box at around x = 27 mm-48 mm. Lumen diameters computed from the segmented lumen are shown, and their lumen diameters cropped by anatomical knowledge are also shown. The expected normal luminal diameter is derived from the scan by automated piecewise line fitting between branch points, and takes into account the normal tapering present in the dataset. The locations of the lesions with ≥25% stenosis detected by the algorithm, concordant with the expert observer, are marked with vertical arrows.
5.3 Flowchart of the learning-based algorithm as a first-level base classifier for coronary arterial lesions from coronary CTA.
5.4 Small volume patches as inputs for feature extraction and SVM classification.
5.5 An example of a linearized volume with the ground truth in a blue box (expert readers' marking) is shown (first row). Overlapping volume patches in lesion areas and non-overlapping volume patches in normal areas (second row) are also shown.
5.6 Flowchart of decision fusion.
5.7 Soft values from the learning-based method by SVR are shown. SVR produces continuous values from the learning-based method as a first-level base classifier, and these SVR values are used as a feature for the final decision fusion.
5.8 In the SVM-based learning algorithm as a first-level base classifier, the improved sensitivity and balanced accuracy from the data balancing scheme between the normal class and the lesion class are shown in (A) and (B). The performance variability for different small volume patch sizes is shown in (C).
5.9 Detection of a lesion with stenosis. Arrows indicate the location of lesions. Detected lesion with stenosis by mixed plaque in the proximal segment (the first row; 70% stenosis by quantitative analysis and 90-99% stenosis by expert visual grading). The second row shows the corresponding ICA images of the first row (62.4% stenosis by quantitative analysis).
5.10 Decision fusion results with SVM with the polynomial kernel of order 2. 252 coronary artery segments are displayed as points in the plot. The segments with lesions are shown in red and the normal segments are shown in blue.
5.11 Decision fusion results with SVM with the polynomial kernel of order 2. 252 coronary artery segments are displayed as points in the plot. The segments with lesions are shown in red and the normal segments are shown in blue.
5.12 SVM results with polynomial kernels of order 1 (top left), order 2 (top right), order 4 (bottom left), and order 5 (bottom right). We chose the kernel function so as not to miss the true lesions (green circle).
5.13 An example of a false positive from previous work [139] that is not detected by the proposed algorithm: expert readers graded it <25% stenosis.

Abstract

Computer-aided analysis of cardiac images obtained by various modalities plays an important role in the early diagnosis and treatment of cardiovascular disease. Numerous computerized methods have been developed to tackle this problem. Recent studies employ sophisticated techniques using available cues from cardiac anatomy such as geometry, visual appearance, and prior knowledge. In particular, visual analysis of three-dimensional (3D) coronary computed tomography angiography (CCTA) remains challenging due to the large number of image slices and the tortuous character of the vessels. In this thesis, we focus on cardiac applications associated with coronary artery disease and cardiac arrhythmias, and study the related computer-aided diagnosis problems from coronary computed tomography angiography (CCTA). First, in Chapter 2, we provide an overview of cardiac segmentation techniques across cardiac image modalities, with the goal of providing useful advice and references. In addition, we describe important clinical applications, imaging modalities, and validation methods used for cardiac segmentation.

In Chapter 3, we propose a robust, automated algorithm for unsupervised computer detection of coronary artery lesions from CCTA. Our knowledge-based algorithm consists of centerline extraction, vessel classification, vessel linearization, lumen segmentation with scan-specific lumen attenuation ranges, and lesion location detection. The presence and location of lesions are identified using a multi-pass algorithm which considers expected or "normal" vessel tapering and luminal stenosis from the segmented vessel. The expected luminal diameter is derived from the scan by automated piecewise least squares line fitting over the proximal and mid segments (67%) of the coronary artery, considering the locations of the small branches attached to the main coronary arteries. We applied this algorithm to 42 CCTA patient datasets, acquired with dual-source CT, where 21 datasets had 45 lesions with stenosis ≥25%. The reference standard was provided by visual and quantitative identification of lesions with any stenosis ≥25% by 3 expert observers using consensus reading.
Our algorithm identified 43 lesions (93%) confirmed by the expert observers. There were 46 additional lesions detected; 23 out of 46 (50%) of these were less-stenosed lesions. When the artery was divided into 15 coronary segments according to standard cardiology reporting guidelines, on a per-segment basis, sensitivity was 93% and specificity was 81%. Our algorithm shows promising results in the detection of obstructive and nonobstructive CCTA lesions.

In Chapter 4, we propose a novel low-radiation dose CCTA denoising algorithm. Our aim in this study was to optimize and validate an adaptive denoising algorithm based on Block-Matching 3D, for reducing image noise and improving left ventricular assessment, in low-radiation dose CCTA. We describe the denoising algorithm and its validation with low-radiation dose coronary CTA datasets from 7 consecutive patients. We validated the algorithm using a novel method, with the myocardial mass from the low-noise cardiac phase as a reference standard, and objective measurement of image noise. After denoising, the myocardial mass was not statistically different by comparison of individual data points with Student's t-test (130.9±31.3 g in the low-noise 70% phase vs. 142.1±48.8 g in the denoised 40% phase, p = 0.23). Image noise improved significantly between the 40% phase and the denoised 40% phase by Student's t-test, both in the blood pool (p < 0.0001) and the myocardium (p < 0.0001). We optimized and validated an adaptive BM3D denoising algorithm for coronary CTA. This new method reduces image noise and has the potential for improving myocardial function assessment from low-dose coronary CTA.

In Chapter 5, we propose a novel machine learning technique to detect coronary arterial lesions with stenosis ≥25% from CCTA. We propose an improved automated algorithm for detection of coronary arterial lesions from coronary CT angiography, by adapting a machine learning algorithm on the same data used in Chapter 3, which was based on [139]. Our structured learning-based algorithm consists of two stages: (1) dividing each coronary artery into small volume patches, and integrating several quantitative geometric and shape features for coronary arterial lesions in each small volume patch with the Support Vector Machine (SVM) algorithm; (2) applying an SVM-based decision fusion algorithm to combine a formula-based analytic method and the learning-based method from stage (1). We applied this algorithm to 42 CCTA patient datasets, acquired with dual-source CT, where 21 datasets had 45 lesions with stenosis ≥25%. The reference standard was provided by visual and quantitative identification of lesions with any stenosis ≥25% by three expert readers using consensus reading. When the artery was divided into 15 coronary segments according to standard cardiology reporting guidelines, on a per-segment basis, the sensitivity was 93% and the specificity was 95% using 10-fold cross-validation. In conclusion, we developed a novel machine learning based algorithm for detection of coronary arterial lesions from CCTA. The proposed structured learning algorithm performed with high sensitivity and high specificity as compared to 3 experienced expert readers.

Chapter 1
Introduction

1.1 Significance of the Research

Cardiovascular disease (CVD) is the major cause of morbidity and mortality in the western world. More than 2200 patients die of CVD each day in the United States alone [220]. CVD includes a variety of disorders of the cardiac muscle and the vascular system.
The common causes of CVD include ischemic heart disease and congestive heart failure [27]. In particular, atherosclerotic heart disease, i.e., coronary artery disease (CAD), is the leading cause of sudden cardiac death (SCD) for both men and women [284] in developed countries. Most of these patients have no prior symptoms of any kind but suffer from heart disease, which may cause a heart attack [81].

Noninvasive cardiac imaging is an invaluable tool for the diagnosis and treatment of patients with known or suspected CVD. Magnetic resonance imaging (MRI), computed tomography (CT), positron emission tomography (PET), single photon emission computed tomography (SPECT), and ultrasound (US) have been used extensively for physiologic understanding and diagnostic purposes in cardiology. These imaging technologies have greatly increased our understanding of normal and diseased anatomy. In the case of ischemic heart disease, the first consequence of the disease is the change in myocardial perfusion assessed by SPECT and PET or by MRI [175]. In particular, the perfusion deficit leads to metabolic changes in myocardial tissues assessed by PET. A myocardial ischemia could further diminish ejection of blood because of the reduced capacity of the heart, as analyzed through the myocardial contractile function using US, PET/SPECT, CT, or MRI. Three-dimensional (3D) coronary computed tomography angiography (CCTA) with the use of 64-slice CT scanners is increasingly employed for non-invasive evaluation of CAD, having shown high accuracy and negative predictive value for detection of coronary artery stenosis in comparison with invasive coronary angiography (ICA) [6,40,126,182,185]. Beyond stenosis, CCTA also permits noninvasive assessment of atherosclerotic plaque and coronary artery remodeling [5,155,210].

Cardiac image segmentation plays a crucial role and allows for a wide range of applications, including quantification of volume, computer-aided diagnosis, localization of pathology, and image-guided interventions. However, manual delineation is tedious, time-consuming, and limited by inter- and intraobserver variability. In addition, many segmentation algorithms are sensitive to the initialization and, therefore, the results are not always reproducible, which is also limited by interalgorithm variability. Furthermore, the amount and quality of imaging data that needs to be routinely acquired for one or more subjects has increased significantly. Therefore, it is crucial to develop automated, precise, and reproducible segmentation methods.

Although computer-aided extraction of the coronary arteries is often employed to aid visual analysis [79,168,179,183,225], clinical assessment of CCTA and lesion detection is currently based on visual analysis using visualization tools, which is time consuming and subject to observer variability [212]. It was reported that acquiring expertise in assessing CCTA stenosis ≥50% takes more than 1 year [212]. Automatic software that detects and identifies coronary artery lesions would reduce the time for coronary analysis and increase the accuracy of CCTA clinical assessment. Detection and quantification of coronary artery lesions are particularly challenging due to limited spatial resolution and coronary artery motion, plaque sizes even smaller than the arteries, and complex and variable coronary artery anatomies.
Electrocardiographic (ECG)-gated helical coronary CT angiography (CCTA) using multidetector computed tomography (MDCT) scanners can generate whole-volume ventricular data in any phase of the cardiac cycle, and has been shown to accurately assess global and regional left ventricular (LV) function [174]. However, the radiation exposure during CT examinations is of great concern [36]. Therefore, ECG-based tube current modulation (applying maximal tube current only to the phases of the cardiac cycle needed for diagnosis) is routinely applied to reduce the radiation dose with helical MDCT. As a result, image noise increases during the phases of the cardiac cycle in which the tube current is minimized, reducing the image quality. This is a limitation in the analysis of LV function by CCTA.

1.2 Review of Previous Work

In this section, we provide a brief review of this general research field; the previously proposed algorithms are reviewed in detail in each chapter.

A variety of segmentation techniques have been proposed over the last few decades. While earlier approaches were often based on heuristics, recent studies employ more sophisticated and principled techniques. However, cardiac image segmentation still remains a challenge due to the highly variable nature of cardiac anatomy, function, and pathology [187]. Furthermore, intensity distributions are heavily influenced by the disease state, imaging protocols, artifacts, or noise. Therefore, many researchers are seeking techniques to deal with such constraints. The research in cardiac image segmentation ranges from fundamental problems of image analysis, including shape modeling and tracking, to more applied topics such as clinical quantification, computer-aided diagnosis, and image-guided interventions. A detailed review of cardiac segmentation methods applied to images from major noninvasive modalities such as US, PET/SPECT, CT, and MRI is provided in Chapter 2. Chapter 2 focuses on the segmentation of the cardiac chambers and whole heart applied to static and gated images (obtained through the cardiac cycle).

Many computer-aided algorithms for detecting and diagnosing various abnormalities have been developed in medical imaging, such as detection and quantification of chronic obstructive pulmonary disease in the lung [240,243,245,258,276], colon cancer [114,149,223,250], and lesions in mammograms [101,192,195,252]. Detection and quantification of coronary artery lesions are particularly challenging due to limited spatial resolution and coronary artery motion, plaque sizes even smaller than the arteries, and complex and variable coronary artery anatomies. Automated lesion detection requires accurate extraction of coronary artery centerlines, classification of normal and abnormal lumen cross-sections, quantification of luminal stenosis, and finally classification of lesions with different degrees of stenosis.

Obtaining a reliable coronary centerline from CCTA is an important process for the visualization of CCTA data in clinical practice, and also serves as a starting point for lumen segmentation and stenosis grade calculation for lesion detection. Many approaches to centerline extraction have been proposed: thinning-based techniques [107,170,171,223], tracking methods [15,272,273], minimal path techniques [61,79,183], and distance transform methods [26,30,52,290].
Several techniques for visualizing CCTA images are used for the diagnosis of CAD in clinical practice: maximum intensity projection (MIP), volume rendering, multi-planar reformatting (MPR), and curved planar reformatting (CPR) [41], which is generated through a centerline of the vessel of interest. Linearization of the coronary artery is another way of visualizing CCTA images, by straightening the vessels using their centerlines; this visualization has the advantages of viewing the entire vessel at a time and providing a better view of plaques and lumen stenosis. Coronary artery segmentation is also an essential task for diagnosis assistance. Region growing methods, active contours, and level-set based methods are popular. Accurate identification of plaques is challenging, especially for non-calcified plaques, due to many factors such as the small size of coronary arteries, reconstruction artifacts caused by irregular heart beats, beam hardening, and partial volume averaging.

To date, only a few studies have attempted automatic detection of lesions [13,85,116,123,143]; only obstructive lesions (with stenosis ≥50%) were detected. To our knowledge, there has been no attempt at automatic detection of nonobstructive lesions (25-49%). However, nonobstructive lesions (25-49%) have been shown to be a clinically significant predictor of future coronary events [152,247]. We have proposed an algorithm for automated detection of both obstructive and non-obstructive lesions [139,140] by analytic methods, which is described in Chapter 3. However, the specificity (81%) was relatively low due to 39 additional detections on a per-segment basis. To obtain higher specificity while maintaining sensitivity as high as in the previous work, we also propose a novel machine learning based technique on the same dataset used in [139], which is described in Chapter 5. To our knowledge, machine learning techniques have not previously been applied to automatic detection of lesions from coronary CTA, even though machine learning algorithms have been used for other kinds of problems. Our proposed structured learning algorithm also differs from conventional machine learning techniques; it is a two-level system in which a decision fusion classifier at the second level makes the final decision based on the base classifiers' decisions. Different from other machine learning techniques, we selected an analytic method as one of the base classifiers.
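The heart of the analytic detection step referenced above is the comparison of the measured lumen diameter along the centerline against an expected "normal" tapering diameter obtained by piecewise least-squares line fitting between branch points. The following Python sketch illustrates that idea only; the function names, the single-pass fit, and the toy data are assumptions for illustration and are not the multi-pass implementation described in Chapter 3.

```python
import numpy as np

def expected_diameter(x, diam, branch_points):
    """Fit a least-squares line to the lumen diameter within each inter-branch
    interval; the fitted value approximates the expected 'normal' tapering
    diameter at every centerline position."""
    expected = np.empty_like(diam)
    bounds = [x[0]] + list(branch_points) + [x[-1]]
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        mask = (x >= lo) & (x <= hi)
        if mask.sum() >= 2:
            slope, intercept = np.polyfit(x[mask], diam[mask], deg=1)
            expected[mask] = slope * x[mask] + intercept
        else:
            expected[mask] = diam[mask]
    return expected

def percent_stenosis(diam, expected):
    """Diameter stenosis (%) relative to the expected normal diameter."""
    return 100.0 * (1.0 - diam / np.maximum(expected, 1e-6))

# toy usage: flag candidate lesion positions with >= 25% diameter stenosis
x = np.linspace(0.0, 80.0, 161)            # position along the centerline (mm)
diam = 3.5 - 0.02 * x                      # normal tapering ...
diam[(x > 25) & (x < 35)] *= 0.6           # ... with a focal narrowing
stenosis = percent_stenosis(diam, expected_diameter(x, diam, branch_points=[40.0]))
print(np.where(stenosis >= 25)[0])
```

In the thesis the fit is made robust to the lesion itself (multi-pass fitting and anatomical cropping); the single fit above is only meant to show how a tapering baseline turns a diameter profile into a stenosis percentage.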
Although MR images provide high resolution and signal-to-noise ratio, various filtering techniques have been applied to MR as post-processing because of its well-known noise characteristics (Rician distributed noise) [67,108,178,197]. Although CT noise modeling is not yet well established, various methods have also been proposed to reduce the noise in low-radiation dose CT images, especially to suppress noise during the image reconstruction process [249,265]. The other option for reducing the noise in low-radiation dose CT is denoising the noisy images in the image domain as a post-processing step. Several image denoising algorithms have been applied to low-dose CT noise reduction, such as the anisotropic diffusion filter [226], a wavelet-based structure-preserving filter [28], and the nonlocal means (NLM) algorithm [144]. Normal-radiation dose CT images have also been used as a priori information [172], and two different CT volumes from high-energy and low-energy scans have been utilized [16] for restoring low-radiation dose CT images.

A state-of-the-art image denoising algorithm, Block-Matching 3D (BM3D) [73,74], was proposed recently and is known to outperform most other image denoising algorithms. The BM3D algorithm is based on an enhanced sparse representation in a 3D transform domain obtained by matching similar 2D blocks. The images filtered through BM3D can show the finest anatomical details shared by matched blocks and preserve the unique features of each image block. In Chapter 4, we describe the denoising algorithm and its validation, with low-radiation dose coronary CTA datasets from consecutive patients, for reducing image noise and improving LV assessment. This work was published in [141].
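As a rough illustration of how such an adaptive scheme can be wired together, the sketch below estimates the noise level from a blood-pool region of interest and feeds it to a BM3D call. It assumes the openly available `bm3d` Python package and its `bm3d(image, sigma_psd)` entry point; the function names, HU normalization range, and parameters here are assumptions for illustration and are not the implementation validated in Chapter 4.

```python
import numpy as np
import bm3d  # assumption: the reference BM3D package (pip install bm3d)

def denoise_phase(slice_hu, blood_pool_mask, hu_range=(-200.0, 800.0)):
    """Adaptive BM3D sketch: estimate the noise level as the standard deviation
    of HU values inside a blood-pool ROI, normalize the slice to [0, 1], and
    run BM3D with the matched sigma."""
    lo, hi = hu_range
    sigma_hu = float(np.std(slice_hu[blood_pool_mask]))      # phase-specific noise estimate
    normalized = np.clip((slice_hu - lo) / (hi - lo), 0.0, 1.0)
    denoised = bm3d.bm3d(normalized, sigma_psd=sigma_hu / (hi - lo))
    return denoised * (hi - lo) + lo                          # back to HU
```

The point of the "adaptive" part is simply that each low-dose cardiac phase gets its own noise estimate rather than a fixed global sigma.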
1.3 Contributions of the Research

The main contributions of the research in this thesis are summarized below.

• Heart Chambers and Whole Heart Segmentation Techniques: A Review. In this review, we aim to provide an overview of cardiac segmentation methods applied to images from major noninvasive modalities such as US, PET/SPECT, CT, and MRI. We focus on the segmentation of the cardiac chambers and whole heart applied to static and gated images (obtained through the cardiac cycle). In addition, we also discuss important clinical applications, characteristics of imaging modalities, and validation methods used for cardiac segmentation. We do not discuss coronary vessel tracking, which is a separate topic addressed in Chapters 3 and 4. We hope that this article can serve as a useful guide to recent developments in this growing field.

• Automated Detection of Nonobstructive and Obstructive Arterial Lesions from Coronary CT Angiography. To date, only a few studies have attempted automatic detection of lesions [13,85,123]. Furthermore, detection was performed only on obstructive lesions with stenosis ≥50%. To our knowledge, there has been no attempt at automatic detection of nonobstructive lesions (25-49%). Importantly, however, nonobstructive lesions (25-49%) have been shown to be a clinically significant predictor of future coronary events [152,247]. Our method provides automated detection of lesions from centerline extraction through to computation of stenosis. The only required user interactions were 3 clicks in the whole process: setting the RCA and LM ostium points for centerline extraction and artery classification, and placing a region of interest in the aorta for obtaining the scan-specific luminal attenuation range. Automatic algorithms have been previously proposed for these steps [280] and these could be combined with our lesion detection technique. The remaining processes were all automatic. We developed a novel automated algorithm for detection and localization of obstructive and nonobstructive arterial lesions from CCTA, which performed with high sensitivity (93%) compared to 3 experienced expert readers.

• Image Denoising of Low-radiation Dose Coronary CT Angiography by an Adaptive Block-Matching 3D Algorithm. A novel image denoising algorithm, Block-Matching 3D (BM3D), has recently been proposed and shown to be superior to previous image denoising algorithms [73,74]. The BM3D algorithm is based on an enhanced sparse representation in a 3D transform domain through similar block matching. The images filtered through BM3D can show the finest anatomical details shared by matched blocks and preserve the unique features of each image block, as well as the edges in the images. Our aim in this study was to optimize and validate an adaptive denoising algorithm based on BM3D, for reducing image noise and improving LV assessment, in low-dose CCTA. We have shown that the noise was reduced to the level obtained with full-dose data. This development may allow robust estimation of cardiac function from low-dose gated CCTA. To our knowledge, this is the first report of the BM3D algorithm adapted to low-dose CT. We have validated the algorithm with image datasets from consecutive patients using a novel method, with the myocardial mass from the high-dose cardiac phase as the reference standard. We showed that the accuracy of myocardial mass measured by the automated segmentation software improved significantly due to denoising.

• Structured Learning Algorithm for Detection of Coronary Arterial Lesions from Coronary CT Angiography. The analytic method described in Chapter 3 was published in [139,140], with promising results for detection of both obstructive and non-obstructive lesions from CCTA. It performed with high sensitivity (93%) compared to 3 experienced expert readers; however, the specificity (81%) was relatively low due to 39 additional detections on a per-segment basis. In order to obtain higher specificity while keeping the good sensitivity, we propose a novel algorithm for coronary arterial lesion detection by adapting a machine learning algorithm on the same data used in [139]. A structured learning algorithm is proposed, which consists of two stages: (1) dividing each coronary artery into small volume patches, and integrating several quantitative geometric and shape features for coronary arterial lesions in each small volume patch with the Support Vector Machine (SVM) algorithm; (2) applying an SVM-based decision fusion algorithm to combine a formula-based analytic method and the learning-based method from stage (1). We applied this algorithm to 42 CCTA patient datasets, acquired with dual-source CT, where 21 datasets had 45 lesions with stenosis ≥25%. The reference standard was provided by visual and quantitative identification of lesions with any stenosis ≥25% by three expert readers using consensus reading. When the artery was divided into 15 coronary segments according to standard cardiology reporting guidelines, on a per-segment basis the sensitivity was 93% and the specificity was 95% using 10-fold cross-validation (a minimal sketch of this evaluation pattern follows below).
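The decision-fusion stage of the structured learning contribution can be pictured with a short scikit-learn sketch: a polynomial-kernel SVM with class balancing, evaluated by 10-fold cross-validation. The synthetic two-feature input below is only a stand-in for the real per-segment features (an analytic stenosis score and a learning-based soft score); it demonstrates the evaluation pattern and does not reproduce the reported results.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Hypothetical inputs: one feature vector per coronary segment, combining an
# analytic stenosis score with a learning-based (patch-level SVR) soft score.
rng = np.random.default_rng(0)
X = rng.normal(size=(252, 2))                        # 252 segments, 2 fused features
y = (X[:, 0] + 0.5 * X[:, 1] > 1.0).astype(int)      # 1 = segment contains a lesion

fusion = SVC(kernel="poly", degree=2, class_weight="balanced")
scores = cross_val_score(fusion, X, y, cv=10, scoring="balanced_accuracy")
print(scores.mean())
```

The `class_weight="balanced"` option mirrors the data-balancing scheme between the normal and lesion classes discussed in Chapter 5.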
1.4 Organization of the Thesis

The rest of this thesis is organized as follows. The background of computer-aided cardiac imaging is covered by a review of heart chamber and whole heart segmentation techniques in Chapter 2. The automated detection of nonobstructive and obstructive arterial lesions from CCTA is discussed in Chapter 3. The image denoising of low-radiation dose CCTA by an adaptive BM3D algorithm is presented in Chapter 4. The structured learning algorithm for detection of coronary arterial lesions from CCTA is described in Chapter 5, and its results are compared to those of Chapter 3. Finally, concluding remarks and future work items are given in Chapter 6.

Chapter 2
Heart Chambers and Whole Heart Segmentation Techniques: A Review

2.1 Introduction

Noninvasive cardiac imaging is an invaluable tool for the diagnosis and treatment of cardiovascular disease (CVD). Magnetic resonance imaging (MRI), computed tomography (CT), positron emission tomography (PET), single photon emission computed tomography (SPECT), and ultrasound (US) have been used extensively for physiologic understanding and diagnostic purposes in cardiology. These imaging technologies have greatly increased our understanding of normal and diseased anatomy. Cardiac image segmentation plays a crucial role and allows for a wide range of applications, including quantification of volume, computer-aided diagnosis, localization of pathology, and image-guided interventions. However, manual delineation is tedious, time-consuming, and limited by inter- and intraobserver variability. In addition, many segmentation algorithms are sensitive to the initialization and therefore the results are not always reproducible, which is also limited by interalgorithm variability. Furthermore, the amount and quality of imaging data that needs to be routinely acquired for one or more subjects has increased significantly. Therefore, it is crucial to develop automated, precise, and reproducible segmentation methods. Figure 2.1 illustrates an example of segmentation of the heart on a CT scan.

Figure 2.1: An example of heart chamber segmentation in 3-D contrast CT volumes with green line delineation for the LV endocardium, magenta for the LV epicardium, cyan for the left atrium (LA), orange for the right ventricle (RV), and blue for the right atrium (RA) [289]. The first row shows a full torso view (the first column) and closeup views (the right three columns) of three orthogonal cuts from the 3-D volume data. The four images in the second row show the tracking results for the heart chambers on a dynamic 3-D sequence with 10 frames (panels at t = 1/10, 2/10, 3/10, and 6/10 of a cardiac cycle). (Reproduced from Y. Zheng et al. with permission of 2008 IEEE.)

A variety of segmentation techniques have been proposed over the last few decades. While earlier approaches were often based on heuristics, recent studies employ more sophisticated and principled techniques. However, cardiac image segmentation still remains a challenge due to the highly variable nature of cardiac anatomy, function, and pathology [187]. Furthermore, intensity distributions are heavily influenced by the disease state, imaging protocols, artifacts, or noise. Therefore, many researchers are seeking techniques to deal with such constraints. The research in cardiac image segmentation ranges from fundamental problems of image analysis, including shape modeling and tracking, to more applied topics such as clinical quantification, computer-aided diagnosis, and image-guided interventions.

In this review, we aim to provide an overview of cardiac segmentation methods applied to images from major noninvasive modalities such as US, PET/SPECT, CT, and MRI. We focus on the segmentation of the cardiac chambers and whole heart applied to static and gated images (obtained through the cardiac cycle). In addition, we also discuss important clinical applications, characteristics of imaging modalities, and validation methods used for cardiac segmentation. We do not discuss coronary vessel tracking, which is a separate topic. We hope that this article can serve as a useful guide to recent developments in this growing field. The review is organized as follows. The clinical background of cardiac image segmentation is discussed in Sec. 2.2. Segmentation methods are described in Sec. 2.3. Cardiac imaging modalities are reviewed in Sec. 2.4. Approaches to validation of the segmentation results are discussed in Sec. 2.5. Concluding remarks are given in Sec. 2.6.

2.2 Clinical Background

CVD is the major cause of morbidity and mortality in the western world.
More than 2200 patients die of CVD each day in the United States alone [220]. CVD involves a variety of disorders of the cardiac muscle and the vascular system. The common causes of CVD include ischemic heart disease and congestive heart failure [27]. Cardiac imaging has played a crucial and complementary role in the diagnosis and treatment of patients with known or suspected CVD. In the case of ischemic heart disease, the first consequence of the disease is the change in myocardial perfusion assessed by SPECT and PET or by MRI [175]. In particular, the perfusion deficit leads to metabolic changes in myocardial tissues assessed by PET. A myocardial ischemia could further diminish ejection of blood because of the reduced capacity of the heart, as analyzed through the myocardial contractile function using US, PET/SPECT, CT, or MRI.

Assessment of left ventricle (LV) contractile function is essential for diagnosis and prognosis of CVD. The LV contractile function is commonly analyzed because the LV pumps oxygenated blood to the entire body [242,270]. Computer-aided or fully automated segmentation of the ventricular myocardium is generally used to standardize analysis and improve the reproducibility of the assessment of contractile cardiac function [111]. In addition, it forms an important preliminary step to provide useful diagnostic information by quantifying clinically important parameters, including end-diastolic volume (EDV), end-systolic volume (ESV), ejection fraction (EF), wall motion and thickening, wall thickness, stroke volume (SV), and transient ischemic dilation (TID) [209]. Furthermore, segmentation of the LV is necessary for the quantification of myocardial perfusion [110], the size of the myocardial infarct [129], or myocardial mass [48]. Accurate determination of these parameters can help with a variety of diagnostic or prognostic applications in cardiology. In addition to LV segmentation, the whole heart, including the right ventricle, atria, aorta, and pulmonary artery [167], is often segmented for 3-D visualization purposes to analyze coronary lesions or other cardiac abnormalities.

The primary application of cardiac segmentation has been the measurement of cardiac function. The most commonly used index of LV contractile function is the EF, which is an index of volume strain (change in volume divided by initial volume) [14,242]. The EF can be derived from EDV and ESV as

EF = \frac{EDV - ESV}{EDV} \times 100\ (\%),   (2.1)

where EF can be measured by gated SPECT/PET, US, MRI, or CT. SV is related to EF and is calculated by subtracting ESV from EDV. SV also correlates with cardiac function and is a determinant of cardiac output.

Assessment of the LV regional wall motion and thickening plays an important role in the assessment of contractile cardiac function at rest, during stress-induced ischemia, and of its viability [18,128,248,261]. Methods to quantify wall motion can rely on detecting endocardial motion by observing image intensity changes, determining the boundary wall of the ventricle, or attempting to track anatomical myocardial landmarks [100]. Wall thickening (WT) is usually measured using centerlines [18,128]; it can be defined in terms of percentage of systolic thickening and calculated per landmark point as

WT(\%) = \frac{w_{es} - w_{ed}}{w_{ed}} \times 100,   (2.2)

where w_{es} and w_{ed} are the myocardial wall thicknesses (the distance from the endocardial to the epicardial contour) at end systole and end diastole, respectively [167].
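Eqs. (2.1) and (2.2) translate directly into code; the small Python helpers below, with hypothetical argument names and example values, simply restate them.

```python
def ejection_fraction(edv_ml, esv_ml):
    """Eq. (2.1): EF (%) from end-diastolic and end-systolic volumes."""
    return (edv_ml - esv_ml) / edv_ml * 100.0

def wall_thickening(w_ed_mm, w_es_mm):
    """Eq. (2.2): systolic wall thickening (%) per landmark point."""
    return (w_es_mm - w_ed_mm) / w_ed_mm * 100.0

print(ejection_fraction(120.0, 50.0))   # ~58.3 %
print(wall_thickening(8.0, 12.0))       # 50.0 %
```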
Moreover, TID 12 of LV is a specific and sensitive parameter for detecting severe coronary artery disease (CAD) [2]. TID is defined as the ratio of volume of blood pool after stress compared with rest. TID has been mostly measured by SPECT [2]. 2.3 Segmentation Techniques In this section, we review several techniques for the segmentation of heart chambers and the whole heart. Cardiac image segmentation techniques can be divided into four main categories: 1. boundary-driven techniques, 2. region-based techniques, 3, graph-cuts techniques, and 4, model fitting techniques, in which multiple techniques are often used together to efficiently address the segmentation problem. We describe the methods in each category, and discuss their advantages and disadvantages. 2.3.1 Boundary-Driven Techniques 2.3.1.1 Active contours (or snakes) Boundary-driven segmentation techniques are based on the concept of evolving contours, deforming from the initial to the final position. One of the most widely used methods is the active contour model, which is also referred to as snakes [142]. The active contour modelallowsacurvedefinedintheimagedomaintoevolveundertheinfluenceofinternal and external forces. The internal force is imposed on the contour in order to control the smoothness while the external force is usually derived from the image itself. An edge detector function is utilized as the external force in the classical active contour model. Mostactivecontourmodelsonlydetectobjectswithedgesdefinedbythegradients. Kass et al. [142] were the first to formulate the classical active contour model using an energy minimization approach. The active contour model seeks the lowest energy of an objective function, where the total energy of the active contour model is defined as E total =E in +E ex , (2.3) 13 where E in denotes the internal energy incorporating prior knowledge such as smoothness or a particular shape and E ex represents the external energy describing how well the curve matches the image data locally. A curve v(s) can be represented as v(s) =[x(s),y(s)], 0≤s≤ 1, (2.4) With such a representation, the internal and external energies can be formulated as E in = ∫ 1 0 E in [v(s)]ds and E ex = ∫ 1 0 E ex [v(s)]ds, (2.5) where E in (v(s)) can be given by E in (v(s)) =α(s) dv ds 2 (Elasticity)+β(s) d 2 v ds 2 2 (stiffness), (2.6) and one example of external energy is E ex [v(s)]=−{|G x [v(s)]| 2 +|G y [v(s)]| 2 } (2.7) A simple edge detector is used in Eq. (7) to formulate the external energy term E ex , where G x and G y denote the gradient images along x and y axes, respectively. An example of the evolving 2-D contours obtained by applying the active contour model to a sequence of US of the LV is shown in Figure 2.2. These contours deform gradually to the exact object boundaries by minimizing the energy of the active contour model. Although the active contour model has been a seminal work, it has some limita- tions. For instance, it is sensitive to the initialization as the contour may get stuck to a local minimum near the initial contour. The curve may pass through the boundary of the field of view of the image when the image has high amounts of noise. In addition, the accuracy of the active contour model depends on the convergence criteria employed in the minimization technique. A few attempts have been made to improve the original model by adopting new types of external field, including gradient vector flow [278] and the balloon model [60]. 14 Selected frames from one cardiac cycle. 
Figure 2.2: Short axis ultrasound images illustrating the tracking of the endocardial border by the active contour technique over selected frames from one cardiac cycle [184]. An initial contour evolves to the final contour as indicated by the white dotted line in each image. (Reproduced from I. Mikic et al. with permission of 1998 IEEE.)

2.3.1.2 Geodesic active contour

The original active contour model can be expressed as the geodesic active contour [47,145,146,282] using a level-set formulation [230]. This method enables an implicit parameterization, allowing automatic changes in the topology. The geodesic active contour is an extended version of the geometric active contour [46], using geometric flow to shrink or expand a curve. It allows stable boundary detection when the image gradients suffer from large variations [47]. The problem of fitting a contour is equivalent to finding geodesics of the minimal distance curves by minimizing the intrinsic energy

E(v) = \int_0^1 g\{|\nabla I[v(p)]|\}\,|v'(p)|\,dp = \int_0^{L(v)} g\{|\nabla I[v(s)]|\}\,ds,   (2.8)

where ds = |v'(p)|\,dp, g is an edge indicator function, e.g., g = \frac{1}{1 + |\nabla \hat{I}|^{p}} (p \ge 1), and L(v) = \int_0^1 \left\|\frac{\partial v}{\partial p}\right\| dp is the curve length functional; \hat{I} is a smoothed version of I. The corresponding geodesic active contour model is given by

\frac{\partial \phi}{\partial t} = g(I)\,|\nabla\phi|\left[\operatorname{div}\!\left(\frac{\nabla\phi}{|\nabla\phi|}\right) + k\right] + \nabla g(I)\cdot\nabla\phi,   (2.9)

where \phi is an implicit representation of the curve v, as explained in the section on level-set methods (Sec. 2.3.2.2), and k is a positive real constant related to the constant curve velocity term c\,g(I)|\nabla\phi|. The term \nabla g(I)\cdot\nabla\phi is adopted to improve the geometric flow and tackle the problem caused by low-contrast edges [47].

The geodesic active contour with the level-set representation has become the basis of many boundary-driven segmentation techniques developed in the last decade [50,206]. Although the geodesic active contour model has been applied to cardiac image segmentation, it has several limitations [50,206]. One example is the sensitivity of the computed gradient value to noise, because the differentiation of gray levels tends to magnify noise.

2.3.2 Region-Based Techniques

In region-based segmentation techniques, regions of interest, including the chambers as distinct from extracardiac structures, are partitioned by a selected global model that provides approximations of the region of interest. In other words, the global information defined within the region of interest is used to differentiate the region of interest from others by globally homogeneous regional properties [49,202]. Hybrid techniques that combine region-based and boundary-based information have also been proposed to enhance the segmentation performance.

2.3.2.1 Mumford-Shah functional

Mumford and Shah [189] proposed a functional utilizing a piecewise smooth model. The functional of the piecewise model is smooth within regions yet may not always be smooth across the boundaries. The Mumford-Shah functional is defined as

E(f,C) = \lambda \iint_R \big(f(x,y) - I(x,y)\big)^2\,dx\,dy + \iint_{R - C} \|\nabla f(x,y)\|^2\,dx\,dy + \mu|C|,   (2.10)

where \lambda and \mu are positive parameters, |C| is the boundary length, R is a domain, and f(\cdot,\cdot) is a piecewise smooth function that approximates I(\cdot,\cdot) and is also the solution image obtained by minimizing Eq. (2.10). The first term is the data term that measures the dissimilarity between the input image and the solution image, the second is a smoothing term applied everywhere except at image discontinuities, and the third smoothes the boundaries C. In this energy functional, the discontinuities of the boundaries are expressed explicitly.

This segmentation model has some drawbacks. It is computationally expensive [11] and is not robust in the presence of strong noise and/or missing information. To circumvent these limitations, a fuzzy algorithm was introduced into Mumford-Shah segmentation using the Bayesian and Maximum A Posteriori (MAP) estimator [39]. Prior knowledge has also been incorporated [37,53,68,257] to overcome the problem of noise and/or missing information that commonly occurs in medical imaging.

2.3.2.2 Level-set based technique

Unlike the parametric representation, the level-set framework represents curves implicitly as the zero level set of a scalar function, as proposed by Osher and Sethian [201]. Following the introduction of the level-set framework, Sethian [230], Osher and Fedkiw [199], and Osher and Paragios [200] built a solid foundation for the level-set representation applied to a variety of problems. The representation for contour evolution in the level-set framework is implicit, parameter-free, and intrinsic. Let \Omega \subset \mathbb{R}^n, where n is 2 or 3, denote the image domain.
It is computationally expensive [11] and is not robust in the presence of strong noise and/or missing information. To cir- cumvent these limitations, a fuzzy algorithm was introduced in the Mumford-Shah seg- mentation using the Bayesian and Maximum A Posteriori (MAP) estimator [39]. Prior knowledge has also been incorporated [37,53,68,257] to overcome the problem of noise and/or missing information that commonly occurs in medical imaging. 2.3.2.2 Level-set based technique Unliketheparametricrepresentation, thelevel-setframeworkrepresentscurvesimplicitly as the zero level set of a scalar function proposed by Osher and Sethian [201]. Following the introduction of the level-set framework, Sethian [230], Osher and Fedkiw [199], and OsherandParagios[200]builtasolidfoundationofthelevel-setrepresentationappliedto avarietyofproblems. Therepresentationforcontourevolutioninthelevel-setframework is implicit, parameter-free, and intrinsic. Let Ω∈R n , wheren is 2 or 3, denote the image 17 domain. AcontourC ∈ Ωcanberepresentedbythezerolevelsetofahigher-dimensional embedding function ϕ(x): Ω→R as given by C ={x∈ Ω|ϕ = 0} interior(C)={x∈ Ω|ϕ> 0}, exterior(c)={x∈ Ω|ϕ< 0}, (2.11) where ϕ(x) is a signed distance function that imposes |∇ϕ| = 1 almost everywhere. The contour evolution equation is then given by dC dt =F⃗ n (2.12) where ⃗ (n) denotes the outward unit vector normal of C and F denotes a speed function. The interface is the zero level of ϕ (i.e., ϕ(C(t),t) for all t). An evolution equation for ϕ then can be derived using ⃗ (n)= ∇ϕ |∇ϕ| as ∂ϕ ∂t =−F|∇ϕ| (2.13) The contour evolution dC dt = F⃗ n corresponds to an evolution of phi given by ∂ϕ ∂t = −F|∇ϕ|. The level-set based segmentation method has been extensively utilized in the imagesegmentationproblemsduetoavarietyofadvantages: itisparameterfree,implicit, can change the topology, and provides a direct way to estimate the geometric properties. In addition, a large amount of effort has been made for its performance improvement [176,205,233,256]. In boundary-driven techniques, the gradient is used as a criterion to stop the curve. However, there are objects whose boundaries cannot be defined, such as smeared bound- aries. Chan and Vese [50] proposed a different model incorporating an implicit energy functional in boundaries C with active contours and the level-set representation by mod- ifying the Mumford- Shah functional, i.e., E(f,C)= N ∑ i=1 λ 2 ∫ ∫ R i [c i (x,y)−I(x,y)] 2 dxdy+µ|C|, (2.14) 18 Selected frames from one cardiac cycle. Figure2.3: TheLVsegmentationresultsforMRimagesbythelevel-setfunctionwiththe visual information and anatomical constraints, where the sequence of images corresponds to the same slice but in different moments of a cardiac cycle [202]. (Reproduced from N. Paragios with permission of 2002 Springer Science and Business Media). whereasetofdisjointregionsR i coverRandf(x,y)=constantc i on(x,y)∈R i . N isthe number of image partitioning. Equation (14) is minimized in c i by setting c i to the mean of I(·,·) in R i . In the case of the two partitioning regions c 1 and c 2 , the Euler-Lagrange derivation of Eq. (14) is demonstrated by ∂ϕ ∂t =δ [ µ div ( ∇ϕ |∇ϕ| ) −|c 1 −I| 2 +|c 2 −I| 2 ] , (2.15) where δ is a Dirac delta function. Therefore, this energy minimization process depends on regional constants c i and the level-set function ϕ. µ ∈R + is a balancing parameter between data fidelity and regularization. 
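A hedged illustration of this region-based model: the sketch below applies the Chan-Vese segmentation available in scikit-image to a noisy synthetic two-region image. The weights mu, lambda1, and lambda2 correspond to the terms of Eq. (2.14); all values here are arbitrary choices for the toy example rather than settings taken from the cited work.

```python
# Minimal Chan-Vese (region-based, level-set) example on a synthetic image.
# Assumes scikit-image is installed; all parameter values are illustrative.
import numpy as np
from skimage.segmentation import chan_vese

rng = np.random.default_rng(0)

# Two-region phantom: a bright "blood pool" rectangle on a darker background,
# corrupted by noise so that gradients alone are unreliable.
img = np.full((128, 128), 0.2)
img[40:90, 30:100] = 0.8
img += rng.normal(scale=0.15, size=img.shape)

# mu weights the boundary-length term |C| of Eq. (2.14);
# lambda1 / lambda2 weight the two regional fidelity terms (constants c1, c2).
seg = chan_vese(img, mu=0.25, lambda1=1.0, lambda2=1.0,
                tol=1e-3, init_level_set="checkerboard")

print(seg.shape, int(seg.sum()))   # binary mask of the brighter region
```

Because the fitting terms compare each pixel with region means rather than with gradients, the segmentation is comparatively robust to the smeared, noisy boundary, which is the motivation given above for moving beyond purely edge-driven stopping criteria.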
19 2.3.2.3 Clustering Clustering algorithms have been used to group image pixels of similar features in the image segmentation problems. The resulting pixel-cluster memberships provide a seg- mentation of the image. Clustering-based segmentation methods are considered to be an old yet robust technique [31,51,58,193]. One of the widely used clustering techniques is the K-means algorithm. This approach uses an objective function that expresses the performance of a representation for k given clusters. If we represent the center of each image cluster by m i and the jth element in cluster i by x j , the objective function can be defined as Φ(clusters,data)= ∑ i ∈ clusters ∑ j ∈ i ′ th cluster (x j −m i ) T (x j −m i ) . (2.16) Although this objective function produces k clusters, it may not guarantee the con- vergence to the global minimum [62]. Another clustering-based segmentation method is the fuzzy c-means algorithm based on the K-means and fuzzy set theory [23,211,219]. The conventional fuzzy c-means method does not fully utilize the spatial information of the image. To cope with this limitation, an approach was developed to incorporate the spatial information into the objective function by indicating the strength of association between each pixel and a particular cluster (i.e., the probability that a pixel belongs to a specific cluster) in order to improve the segmentation results [56]. In addition, the expectation-maximization (EM) algorithm using the Gaussian mix- ture model is one of the well-established clustering-based methods. The iterative al- gorithm uses the posterior probabilities and the maximum likelihood estimates of the means, covariances, and coefficients of the mixture model [45,162]. Furthermore, the EM algorithm can be combined with various models such as the hidden Markov random field model in order to achieve accurate and robust segmentation results [287]. How- ever, clustering-based methods have a few weaknesses. The methods are sensitive to initialization, noise, and inhomogeneities of image intensities [151]. 20 2.3.3 Graph-Cuts Techniques The graph-cuts technique [34,35] was originated from Greig’s maximum a posteriori (MAP) estimation [120] in order to find the maximum flow for binary images. An in- teractive graph-cuts technique can find a globally optimal segmentation of an image. The user selects some pixels called seed points as hard constraints inside the object to be segmented as well as some pixels belonging to the background. The objective func- tion is typically defined by boundary and regional properties of the segments. Therefore the obtained segmentation provides the best balance of boundary and region properties satisfying the constraints [35]. In the graph-cuts theory [35], an image is interpreted as a graph, where all pixels are connected to its neighbors. Graph node set P and edge set Q connect nodes v ∈ P to form a graph G = P,Q. Terminals are two special nodes, known as the source (s) and sink (t), which are the start and end nodes of the flow in the graph, respectively. Also, there are two types of edges: n-links that connect neighboring pixels and t-links that connect pixels in image to terminal nodes. The cost or weight w e is assigned to each edge, e∈Q. The costs of n-links are the penalties for discontinuities between the pixels, and the costs of t-links are the penalties for assigning the corresponding terminal to the pixel. 
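As a concrete, hedged illustration of this construction, the sketch below segments a short 1-D intensity profile by building these t-links and n-links in networkx and solving the minimum cut; the seed-derived means and the smoothness weight are invented values for the toy example and do not come from the cited works.

```python
# Toy binary graph cut on a 1-D intensity profile (Boykov-Jolly style t/n-links).
# Assumes networkx; the seed means and smoothness weight are invented values.
import numpy as np
import networkx as nx

signal = np.array([0.9, 0.8, 0.85, 0.7, 0.3, 0.2, 0.25, 0.1])
mu_obj, mu_bkg = 0.8, 0.2     # intensity models from object/background seeds (assumed)
lam = 0.5                     # n-link smoothness penalty (assumed)

G = nx.DiGraph()
for p, v in enumerate(signal):
    # t-links: the cut pays D_p of the label assigned to p, so the edge to the
    # source carries the background cost and the edge to the sink the object cost.
    G.add_edge("s", p, capacity=(v - mu_bkg) ** 2)   # paid if p ends up background
    G.add_edge(p, "t", capacity=(v - mu_obj) ** 2)   # paid if p ends up object
for p in range(len(signal) - 1):
    # n-links: discontinuity penalty V_pq between neighbouring pixels (both arcs).
    G.add_edge(p, p + 1, capacity=lam)
    G.add_edge(p + 1, p, capacity=lam)

cut_value, (source_side, sink_side) = nx.minimum_cut(G, "s", "t")
labels = [1 if p in source_side else 0 for p in range(len(signal))]
print(cut_value, labels)      # pixels 0-3 labeled object, 4-7 background
```

In this toy cut, the single severed n-link is the boundary cost and the severed t-links sum to the regional cost, which is exactly the decomposition described next.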
Thus, the total cost of the n-links represents the cost of the boundary while the total cost of the t-links indicates the regional properties. A cut x ⊂ Q is a set of edges that separates the graph into regions connected to terminal nodes. The cost of a cut is defined by the sum of the costs of edges that belong to the cut, which is denoted by |X| = ∑ e∈X w e , (2.17) Then optimal segmentation results using the graph-cuts technique amounts to finding the optimal solution for the cost of a cut, i.e., a minimal cost cut. An example of medical image segmentation using the graph-cuts techniques is illustrated in Figure 2.4. Severalmethodstofindanoptimalcostcuthavebeenproposedsuchasminimizingthe maximumcutbetweenthesegments[277]andnormalizingthecostofacut[232]. Boykov and Kolmogorov [33] proposed a max-flow/min-cut algorithm and compared its efficiency 21 Selected frames from one cardiac cycle with ED phase (left) and ES phase (right). Figure 2.4: LV segmentation examples for contrast cardiac CT images using the graph- cuts technique [136]. The segmentation algorithm used here combines the EM-based region segmentation, the Dijkstra active contours using graph-cuts, and the shape infor- mation through a pattern matching strategy. The graph-cuts algorithm is used to cut the edges in the graph to form a closed-boundary contour between two different regions. (Reproduced from M. P. Jolly with permission of 2006 Springer Science and Business Media.) with Goldberg-Tarjan’s pushrelabel [115] and Ford-Fulkerson’s augmenting paths [98]. Based on the cut cost described above, the energy function can be formulated, consisting of the boundary term and the regional term. Let l p be the label for a given pixel p, which can be either an object or the background. Let S be a set of pixels and N be a set of all pairs of neighboring elements. The energy function [32] for graph-cuts can then be given by: E(l)=E smooth (l)+E data (l), (2.18) where E smooth = ∑ p,q ∈ N V pq (l p ,l q ) (2.19) and E data = ∑ p ∈ S D p (l p ), (2.20) where V pq denotes the cost of n-link between two pixels p and q and D p denotes the cost of t-link at pixelp. E smooth isa boundary term that imposes smoothness whereas E data is 22 a region term that measures how well a label fits the data. V pq is the interaction function between neighboring pixels p and q, and D p is a log-likelihood function at pixel p. One limitation of the graph-cuts technique is that it is not fully automated, as it demands the initialization of seed points in the object and the background regions. Figure 2.5: Segmentation results obtained by applying the AAM technique to an ultra- soundimagesequenceoveroneheartbeatperiod[29]: (a)theinitial1-phaseAAMmodel positioned, (b) the match after 5 AMM iterations, (c) the final match after 20 AAM it- erations, and (d) the manual contours for comparison. The first row shows phase images 1, the second row shows phase images 2, and the third row shows phase images 3 from 16 image phases. (Reproduced from J. Bosch et al. with permission of 2002 IEEE). 23 2.3.4 Model-Fitting Techniques The model-fitting segmentation attempts to match a predefined geometric shape to the locations of the extracted image features of an image. A two-step procedure is usually needed in the model-fitting segmentation: (1) generating the shape model from a train- ing set and (2) performing the fitting of the model to a new image. The models contain the information about the shape and its variations. 
The main tasks in the model-fitting are the extraction of the features and generation of the best fitting model from the fea- tures. Given an accurate and appropriate model, the segmentation procedure becomes an optimization problem of finding the best model parameters for a given patient im- age. Human heart anatomy exhibits specific features and therefore the similar shape or intensity information about hearts can be utilized by means of a shape-prior knowledge. Prior knowledge can be used to compensate for common difficulties such as poor image contrast, noise, and missing boundaries. Integrating the prior knowledge using explicit shape representation into segmentation process has been a topic of interest for decades. For instance, global shape information with closed curves represented by Fourier descriptors was proposed where the Gaussian priorwasassumedforFouriercoefficients[77,244]. Theshapemodelwasbuiltbylearning thedistributionofFouriercoefficients. Inaddition, activeshapemodels(ASM)wereused in a variety of segmentation tasks [66,260,286]. In brief, key landmark points on each training image generate a statistical model of shape variation, and a statistical model of intensity is built by warping each example image to match the mean shape. Principal component analysis (PCA) is applied on the key landmark points where the sample dis- tribution is assumed as a Gaussian distribution. Any sample within the distribution can be expressed as a mean shape with a linear combination of eigenvectors [186]. Cootes et al. [64–66] built statistical models by positioning control points across training images and developed the active appearance model (AAM) [63]. An example of image segmen- tation based on the AAM is illustrated in Figure 2.5. The landmark points should be placedinaconsistentwayoveralargedatabaseoftrainingshapesinordertoavoidincor- rect parameterization [260]. Also, if the size of a training set is small, the model cannot capture its variability and is unable to approximate data that are not included in the 24 training set [286]. Furthermore, a statistical model is incorporated in order to describe intersubject shape variabilities. For example, the dimension of the parametric contours was reduced by the use of PCA. By projecting the shape onto the shape parameters and enforcing limits, global shape constraints have been applied to ensure that the current shape remains similar to that in the training set [66]. Wang and Staib [266] extended the work of Cootes et al. [66] using a Bayesian framework to adjust the weights between the statistical prior knowledge and the image information based on image quality and reliability of the training set. The B-splines based curve representation was applied to the classical active contours model [69,71,72]. There have been several attempts to incorporate the prior knowledge of shape in the implicit shape representation. Leventon et al. [161] incorporated the shape-prior information in the level-set framework with a set of previously segmented data using the signed distance function. A shape-prior model was also proposed to restrict the flow of thegeodesicactivecontour, wherethepriorshapewasderivedbyperformingthePCAon a collection of the signed distance function of the training shape. A similar approach was proposed in Ref. [54] with an energy functional, including the information of the image gradientandtheshapeofinterestingeometricactivecontoursusingthedistancefunction torepresenttrainingdistances. 
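The landmark-based point distribution model at the core of ASM can be sketched in a few lines of linear algebra; the example below builds a PCA shape model from synthetic, pre-aligned training contours and generates new shapes as the mean plus a weighted combination of the leading modes. The training data and the choice of two retained modes are assumptions made only for illustration.

```python
# Minimal PCA point-distribution (shape) model, in the spirit of ASM.
# Training shapes are synthetic ellipses standing in for aligned landmarks.
import numpy as np

rng = np.random.default_rng(1)
n_shapes, n_points = 30, 40
theta = np.linspace(0, 2 * np.pi, n_points, endpoint=False)

# Each training shape: an ellipse with randomly varying radii (pre-aligned).
shapes = []
for _ in range(n_shapes):
    a, b = 1.0 + 0.2 * rng.normal(), 0.6 + 0.1 * rng.normal()
    shapes.append(np.concatenate([a * np.cos(theta), b * np.sin(theta)]))
X = np.array(shapes)                      # (n_shapes, 2 * n_points)

x_mean = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - x_mean, full_matrices=False)
eigvals = S ** 2 / (n_shapes - 1)         # variance explained by each mode
P = Vt[:2]                                # keep the first two modes of variation

# Any shape within the model: x = x_mean + P^T b, with |b_i| <= 3 sqrt(lambda_i)
b = np.array([1.5 * np.sqrt(eigvals[0]), -0.5 * np.sqrt(eigvals[1])])
x_new = x_mean + P.T @ b
print(P.shape, x_new.shape)               # (2, 80), (80,)
```

Fitting such a model to a new image then reduces to estimating the pose and the parameters b that best match detected boundary features, with each b_i kept within roughly ±3√λ_i so that the shape remains close to the training distribution [66].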
Anotherobjectivefunctionforsegmentationwasproposed in Ref. [255] by applying the PCA to a collection of signed distance representations of the training data. Rousson and Paragios [221] applied a shape constraint to the implicit representation using the level-set to formulate an energy functional, where an initial segmentation result can be corrected by the level-set shape prior model through PCA. They also considered a stochastic framework in constructing the shape model with two unknown variables: the shape image and the local degrees of shape deformations. In specific applications, 3-D heart modeling was explored in Ref. [100] and the four- chamber heart modeling was proposed in Refs. [289] and [288]. Geometric constraint was also incorporated in the LV segmentation problem. The model-based approach in Ref. [237] has gained a lot of attention as a solution to the image segmentation problem with incomplete image information [70,222]. 25 Several other model-fitting methods have been investigated to date. The atlas-based segmentation was carried out based on the registration, where multiple atlases were reg- istered to a target image by propagation of the atlas image labels with spatially varying decision fusion weight in CT scans [134]. In addition, a deformable surface represented by a simplex mesh in the 3-D space used the time constraints in segmenting the SPECT cardiac image sequence in Ref. 2. Modeling the four-chamber heart was performed for 3- D cardiac CT segmentation [78], where the simplex meshes were used to provide a stable computation of curvature-based internal forces. Heart modeling was accomplished with a statistical shape model [66] and labeling is performed on mesh points that correspond to special anatomical structures such as control points that integrate mesh models [289]. The whole heart segmentation method, including four chambers, myocardium, and great vessels in CT images, was proposed in Ref. [88], where ASM and the generalized Hough transform for automatic model initialization were exploited. 2.4 Application to Specic Imaging Modalities In this section, several modalities for cardiac examinations are reviewed and techniques used for segmentation in each modality are presented. We summarize roles and char- acteristics of each modality with reference to the recent work [19], and describe the segmentation techniques used for each modality. 2.4.1 Ultrasound Imaging US imaging is the most widely used technique in cardiology for evaluation of contrac- tile cardiac function. It has several advantages, including good temporal resolution and relatively low cost. It can be used to assess tissue perfusion by myocardial contrast echocardiography [25]. Additionally, it is well-suited for image-guided interventions due to its recent advances, allowing visualization of instruments as well as cardiac structures throughthebloodpool[164]. However, USimagingsuffersfromlowSNR(signal-to-noise ratio) and speckle noise [196], making the LV segmentation task challenging. Moreover, theacquisitionisusuallyperformedin2-D[196]andthereforedependsontheorientation, 26 leadingto missing boundaries and lowcontrastbetweenregions ofinterest[38]. US imag- ing of the heart involves 2- D, 2-D+t, 3-D, 3-D+t, and Doppler echocardiography, each of which poses different challenges. In this review, we focus primarily on the segmentation of the 3−D and 3−D+t data. Arecentadvanceinthisfieldofcardiacimagingisthreedimensionalechocardiography (3-DE). 
This tool has been used only for research purposes in the past, but due to recent improvements in software algorithms and transducer technology, it is now used in clinical practice[24]. 2-Dand3-Dechocardiographyusedifferenttransducers. 3-DEiswell-suited for LV mass, volumes, and EF [24] because 2-D imaging can potentially provide biased measurements of EF [125]. Numerous segmentation techniques have been proposed for US imaging. 3-D AAM was proposed [173], where its model was learned from the manual segmentation results andtheinformationoftheshapeandimageappearanceofcardiacstructureswasincluded in a single model. The level-set or the active contour segmentation methods were also applied to the US segmentation [12,283,291]. Level-set based method with specialized processing was adopted to extract highly curved volumes while ensuring smoothness of signals [12]. Additionally, an algorithm based on deep neural networks and optimization was employed [44] and a discriminative classifier, random forest, was used to delineate myocardium [159]. For an in-depth review on the segmentation of US images, we refer the reader to Ref. [196]. 2.4.2 Nuclear Imaging (SPECT and PET) Nuclear imaging has been an accepted clinical gold standard for the quantification of relative myocardial perfusion at stress and rest [239]. It is also the mainstream imaging technique to estimate myocardial hypo-perfusion due to coronary stenosis. Gated my- ocardial perfusion SPECT [207] is also widely used for the quantitative assessment of the LV function. LV regional wall motion and thickening by SPECT play an integral part to assess coronary artery disease and determine the extent and severity of functional ab- normalities [113]. Accurate segmentation of LVand quantification of the volume offer an 27 Figure 2.6: The gated SPECT segmentation in Ref. [112], where the first row shows original myocardial perfusion SPECT (MPS) and the second row shows the segmented image of the first row. objective means to determine the risk stratification and therapeutic strategy [228]. How- ever, delineation of the endocardial surface with nuclear imaging is challenging due to relatively low image resolution, extracardiac background activities, partial volume effect, count statistics, and reconstruction parameters [253]. A few techniques have been developed for nuclear imaging segmentation. Germano et al. [112] proposed LV segmentation method for SPECT, which is widely used in nuclear cardiology practice as illustrated in Figure 2.6. In addition, wall motion and thickening were further investigated with the same technique [113]. In brief, an asymmetric Gaus- sian was exploited to fit to each profile in each interval of a gated MPS volume, where a maximal count myocardial surface was determined. Other well-established methods for the quantitative analysis of nuclear myocardial perfusion imaging exist such as the Corri- dor4DM [96], the Emory Cardiac Toolbox [105], the University of Virginia quantification 28 program [267], and the Yale quantification software [165]. These automated software tools allow highly automatic definition of the LV contours and measure perfusion defect size, EF, EDV, and LV mass. In other developments, the level-set technique was employed for the segmentation of cardiac gated SPECT images126 and a geometric active contour-based SPECT segmen- tation technique was proposed [75]. Slomka et al. [238] and Declerck et al. [76] proposed a template-based segmentation method using the registration-based approach. 
Addition- ally, the 4-D (3-D+t) shape prior was adopted in Ref. [150] using implicit shape represen- tation of the left myocardium in SPECT image segmentation. This study extended the shape modeling to the spatiotemporal domain by treating time as the fourth dimension and applied the 4-D PCA. Faber et al. [91] employed an explicit edge detection method to estimate endocardial and epicardial boundaries using the structural information in gated SPECT perfusion images. The 3-D ASM segmentation algorithm was adopted in Refs. [253] and [198] for cardiac perfusion gated SPECT studies and the construction of geometrical shape and appearance models. Reutter et al. [218] used a 3-D edge detec- tion technique for the segmentation of respiratory-gated PET transmission images and Markov random fields were adopted for 3-D segmentation of cardiac PET images [138]. Ingatedcardiacimaging, ashortandcyclicimagesequenceisgenerated, representing a single heartbeat that summarizes data acquired over cardiac cycles [224,269]. Gated SPECT images can provide global and regional parameters of LV function as described in Sec. 2. Once LV is segmented [95], the endocardial and epicardial boundaries are utilized for the quantification of global and regional parameters. The LV cavity volume is determined by the volume of each voxel and number of voxels bound by the LV en- docardium and valve plane [109] Measurements of EF including ES and ED from gated SPECT are validated in many studies, demonstrating good accuracy [93,109]. However, the relatively low resolution of nuclear cardiac images can lead to an underestimation of the LV cavity size, especially when patients have small ventricles, therefore resulting in overestimation of the EF [99,124,190]. Quantitative measurement of wall motion is obtained by displacements of the endocardium from ED to ES [92,281] and WT quantifi- cation is measured by assessing the apparent intensity of the myocardium from ED to ES 29 resultingfromthepartialvolumeeffect[103,231,246]. Despitethelowresolutionofgated MPS, partial volume effect is actually exploited to analyze motion and thickening, since changes in the image intensity are related to the thickening of the myocardium [113]. 2.4.3 Computed Tomography (CT) In cardiac CT, there are two imaging procedures: (1) coronary calcium scoring with noncontrast CT and (2) noninvasive imaging of coronary arteries with contrast-enhanced CT. Typically, noncontrast CT imaging exploits the natural density of tissues. As a result, various densities using different attenuation values such as air, calcium, fat, and soft tissues can be easily distinguished [81]. Noncontrast CT imaging is a low-radiation exposuremethodwithinasinglebreathhold, determiningthepresenceofcoronaryartery calcium [81]. In comparison, contrast-enhanced CT is used for imaging of coronary arter- ies with contrast material such as a bolus or continuous infusion of a high concentration of iodinated contrast material [229]. Furthermore, coronary CT angiography has been shown to be highly effective in detecting coronary stenosis [181]. Especially in the recent rapid advances in CT technology, CT can provide detailed anatomical information of chambers, vessels, coronary arteries, and coronary calcium scoring. Coronary CT angiog- raphycanvisualizenotonlythevessellumenbutalsothevesselwall,allowingnoninvasive assessment of the presence and the size of the noncalcified coronary plaque [82]. 
Addi- tionally, CT imaging provides functional as well as anatomical information, which can be used for quantitative assessment for systolic WTand regional wall motion [153,194]. Various segmentation techniques have been proposed for cardiac CT applications. Funka-Lea et al. [104] proposed a method to segment the entire heart using graph-cuts. Segmenting the entire heart was performed for clearer visualization of coronary vessels on the surface of the heart. They attempted to set up an initialization process to find seed regions automatically using a blowing balloon that measures the maximum heart volume and added an extra constraint with a blob energy term to the original graph- cuts formulation. Extracting the myocardium in 4-D cardiac MR and CT images was proposed in Ref. [136] using the graph-cuts as well as EM-based segmentation. Zheng et al.1 presented a segmentation method based on the marginal space learning by searching 30 fortheoptimalsmoothsurface. Model-basedtechniqueswerealsoadoptedforcardiacCT image segmentation using ASM with PCA [89]. Methods for region growing [83,188] and thresholding[137,279]werealsoemployed. Anentirelydifferenttopicisthesegmentation of cornary arteries from the CT angiography data, which is is well covered by other reviews [148,160]. 2.4.4 MRI Cardiac MRI allows comprehensive cardiac assessment by several types of acquisitions that can be performed during one scanning session [209]. It provides high-resolution visualization of cardiac chamber volumes, functions, and myocardial mass [10]. Car- diac MRI has been established as the research gold standard for these measurements, with more and more clinical impact. Moreover, recently developed delayed enhancement imaging with gadolinium contrast has emerged as a highly sensitive and specific method for detecting myocardial necrosis. This allows improved evaluation of the myocardial infarction [130,147]. Perfusion MRI imaging can also be performed for the diagnosis of ischemic heart disease. However, the perfusion MR imaging depends on a first-pass tech- nique, which limits the conspicuity of perfusion defects [20,135]. The advantages of MRI include exquisite soft-tissue contrast, high spatial resolution, low SNR, ability to char- acterize tissue with a variety of pulse sequences, and no ionizing radiation. Compared to PET or SPECT, the dependence of MR signal on regional hypoperfusion is minimal and does not prevent segmentation tasks. Some of the disadvantages are that cardiac MRI typically employs one breath-hold per slice with 5 to 15 slices per patient study, therefore necessitating multiple breath-holds for each patient dataset. Additionally, the images are of high-resolution in-plane but the resolution between slices is low (typically 8 to 10 mm). Also, multiple breath-hold acquisition can cause errors in spatial alignment and result in artifacts of the 3-D heart image. These misalignments can be corrected by software registration techniques [236]. Recently, full volume 3-D MRI acquisitions have been proposed [209]. Cardiac MR tagging is an important reference technique to measure myocardial func- tion, which allows quantification of local myocardial strain and strain rate [132,285]. 31 Tagged MR produces signals that can be used to track motion. 
Several techniques have been developed, including magnetization, saturation, spatial modulation of magnetiza- tion (SPAMM), delay alternating with nutation for tailored excitation (DANTE), and complementarySPAMM(CSPAMM).Thesetechniquesproduceavisiblepatternofmag- netizationsaturationonthemagnitudereconstructedimagewithoutanypost-processing. However,quantifyingmyocardialmotionrequiresexhaustivepost-processing. Incontrast, more advanced techniques such as Harmonic phase (HARP), displacement encoding with simulated echoes (DENSE), and strain encoding (SENC) [132,133] compute motion di- rectly from the signal and do not directly show tagging pattern. Simple post-processing is required for myocardial motion information. For more details, we refer readers to the recent review of cardiac tagged MRI [132]. Numerous image segmentation techniques have been applied to MRI and are summa- rized below. Petitjean et al. [209] presented a review of segmentation methods in short axis MR images. Paragios [202] used the level-set technique using a geometric flow to segment endo- and epi cardium of the LV. Two evolving contours were employed for the endo- and epicardium and the method combined the visual information with anatomical constraints to segment both regions of interest simultaneously. Paragios et al. [202–204] applied the shape prior knowledge with the level-set representation to achieve robust and accurate results. Moreover, several constraints and prior knowledge have been incorporated in the level-setframeworkforefficientlysegmentingregionsofinterest. Forexample,thevelocity constrainedfrontpropagationmethodwasproposedbyusingthemagnitudeanddirection ofthephasecontrastvelocityastheconstraints[274]. Wooetal.[275]proposedstatistical distance between the shape of endo- and epicardium as a shape constraint using signed distance functions. Tsai et al. [254] proposed a shape-based approach to curve evolution andCiofoloetal.[57]proposedamyocardiumsegmentationschemeforlate-enhancement cardiac MR images by incorporating the shape prior with contour evolution. Zhu et al. [292] applied a dynamic statistical shape model with the Bayesian method. Segmentation techniques using thresholding [117,163], region growing [9,234], and boundary detection [118,216] were applied to MRI data. For instance, a local assessment 32 of boundary detection method was proposed to improve the capture range and accuracy [208]. Segmentation algorithm using optimal binary thresholding method and region growing was presented to delineate 3-D+t cine MR images [59]. In addition, learning frameworks were used to segment 2-D tagged cardiac MR images [213,214]. 2.4.5 Parameter Correlation between Imaging Several attempts have been made to compare and correlate the quantitative parameters obtained by different imaging modalities and different image segmentation approaches. Various reports in the literature indicate that cardiac MRI can provide accurate esti- mates of EF, LV volumes [97,154,166,241,262]. and wall motion/thickening analy- sis [97,154,166]. In addition, gated SPECT has been extensively validated against var- ious two-dimensional imaging techniques, such as echocardiography, [55] but there are only a limited number of studies comparing gated SPECT with other three-dimensional techniques such as cardiac MRI, which is considered the reference standard for assess- ing LV volumes [17,235,259,264]. 
Visual interpretations of wall motion by observers on the two modalities have been compared along with LV volumes [17,235,259,264] but quantitative comparison for assessment of regional wall motion/thickening has not been reported previously. Using echocardiographic sequences, values of LV volumes, EF, and regional endocardial shortening also correlate with MR. Cardiac MRI was used as a reference method for comparison with unenhanced and contrast-enhanced echocardiog- raphy [21,127]. LV mass obtained by contrast enhanced color Doppler echocardiography has shown excellent agreement with those from MRI [22]. The left and right ventricular EDV, ESV, stroke volume, EF, and myocardial mass obtained by dual-source CT also correlated well with those from MRI [251]. 2.5 Validation (Evaluation) of Segmentation Results Automatic cardiac image segmentation results can be evaluated alone or by comparing it with a reference, possibly a different imaging modality, including the manual segmen- tation result or a ground truth. For stand-alone evaluation, one can exploit statistical 33 properties of heart anatomy and/or observe the segmented images. For reference-based evaluation, both quantitative and qualitative comparisons can be performed. Quantita- tive comparison can be done by measuring various metrics such as the fractional energy difference, the Hausdorff distance, the average perpendicular distance, the dice metric, and the mean absolute distance [283] between the segmented structures. The average perpendicular distance measures the distance from the automatically segmented contour to the corresponding manually drawn contour by experts, and averages of all contour points. For LV segmentation, the ED and the ES phases of all slices have been measured. TheEFandtheLVmassarealsoimportantclinicalparameterstoevaluate. Table1sum- marizes previous studies that dealt with cardiac segmentation validation with respect to different imaging modalities, imaging targets, the number of data sets, evaluation results, and comments. 2.6 Conclusion Several advanced segmentation techniques have been proposed in the image processing and computer vision communities for the cardiac image analysis. In this review, we have categorized them into four major classes: 1) the boundary-driven techniques, 2) the region-driven techniques, 3) the graph-cuts techniques, and 4) the model-fitting tech- niques. These techniques have been applied to segmentation of cardiac images acquired by different imaging modalities, providing high automation and accuracy in determining clinicallysignificantparameters. Thesecomputationaltechniquesaidcliniciansinevalua- tionofthecardiacanatomyandfunction, andultimatelyleadtoimprovementsinpatient care. However, cardiac image segmentation continues to remain a challenge due to the complex anatomy of the heart, limited spatial resolution, imaging characteristics, car- diac and respiratory motion, and variable pathology and anatomy. Therefore, improved segmentation techniques with enhanced reliability, reduced computation time, superior accuracy, and full automation will be needed for the future. 34 Chapter 3 Automated Detection of Nonobstructive and Obstructive Arterial Lesions from Coronary CT Angiography 3.1 Introduction Coronary Artery Disease (CAD) is the leading cause of death worldwide for both men and women [284]. 
Three dimensional (3D) coronary computed tomography angiogra- phy (CCTA) with the use of 64-slice CT scanners is increasingly employed for non- invasive evaluation of CAD, having shown high accuracy and negative predictive value for detection of coronary artery stenosis in comparison with invasive coronary angiogra- phy [6,40,126,182,185]. Beyond stenosis, CCTA also permits noninvasive assessment of atherosclerotic plaque, and coronary artery remodeling [5,155,210]. Although computer-aided extraction of the coronary arteries is often employed to aid visual analysis [79,168,179,183,225], currently clinical assessment of CCTA and lesion detection is based on visual analysis using visualization tools, which is time consuming and subject to observer variability [212]. It was reported that acquiring expertise in CCTA stenoisis≥50% takes more than 1 year [212]. Automatic software that detect and identifythecoronaryarterylesionswillreducethetimeforcoronaryanalysisandincrease the accuracy of CCTA clinical assessment. Todate, only a few studies haveattempted automatic detection of lesions [13,85,123]. Furthermoredetectionwasperformedonlyonobstructivelesionswithstenosis≥50%. To ourknowledge,therehasbeennoattemptofautomaticdetectionofnonobstructivelesions (25-49%). Importantly, however, nonobstructive lesions (25-49%) have been shown to be 35 Figure 3.1: Invasive coronary angiography (ICA), which is current gold-standard. of clinically significant predictor of future coronary events [152,247]. Therefore, our aim inthisstudywastodeveloparobustautomatedalgorithmtodetectbothobstructiveand nonobstructive lesions from CCTA and validate it in comparison with expert readers. 3.2 Background 3.2.1 Clinical Background Atherosclerotic cardiovascular disease is blockage of the coronary arteries, which restrict blood flow to the heart muscle by clogging the artery [284]. Among these plaques, the calcified plaques is widely assessed by non-contrast cardiac CT [81] with low-radiation exposure. The non-contrast CT has been increasingly used to identify the presence and severityofcoronaryarterycalcifiedplaquewithoutintravenousinjectionofcontrastagent. For coronary luminal stenosis by both the calcified and non-calcified plaques, the current gold-standard, invasive coronary angiography (ICA) is known to be the most 36 Figure 3.2: Cross-sectional intravascular US view showing external elastic membrane (green) and lumen intima border (yellow) [82]. accuratewhennotcriticallyocclusive[94,263](Figure3.1). Also,intravascularUSimages were widely used for coronary artery wall visualization and plaque volume measurement [106] (Figure 3.2). CCTA with the use of 64-slice CT scanners has recently become an increasingly ef- fective clinical tool for noninvasive assessment of the coronary arteries with potential for both the calcified and non-calcified plaque quantification [7,19] (Figure 3.3). However, the current CCTA coronary plaque analysis is performed manually. 3.2.2 Technical Background Many computer-aided algorithms of detecting and diagnosing various abnormalities have been developed with medical imaging, such as detection and quantification of chronic 37 Figure 3.3: CCTA mixed plaque (A) and contrast attenuation (HU) of epicardial fat (EF), non-calcified plaque (NCP), lumen region [80]. obstructive pulmonary disease in lung [240,243,245,258,276], colon cancer [114,149,223, 250], and lesions in mammograms [101,192,195,252]. 
Detectionandquantificationofcoronaryarterylesionsareparticularlychallengingdue to limited spatial resolution and coronary artery motion, even smaller plaque size than arteries, and complex and variable coronary artery anatomies (Figure 3.4). Automated lesion detection requires accurate extraction of coronary artery centerlines, and classifi- cation of normal and abnormal lumen cross-sections, quantification of luminal stenosis and finally, classification of lesions with the different degree of stenosis. Obtaining a reliable coronary centerline from CCTA is an important process for vi- sualization CCTA data in clinical practice, and also serves as a starting point for lumen segmentation and stenosis grade calculation for lesion detection. Many approaches have been proposed on centerline extraction: thinning based techniques [107,170,171,223], tracking methods [15,272,273], minimal path techniques [61,79,183], and distance trans- form methods [26,30,52,290] have been presented. Several techniques of visualizing CCTA images are used with the object of the diagnosis of CAD in clinical practice: max- imum intensity projection (MIP), volume rendering, multi-planar reformatting (MPR) 38 Figure 3.4: SCCT coronary segmentation diagram in axial view [215]. and curved planar reformatting (CPR) [41], which is generated through a centerline of the vessel of interest. Linearization of coronary artery is another way of visualization of CCTA images by straightening the vessels by use of centerlines, where the visualization has the advantages of viewing the entire vessel at a time and providing better view of plaques and lumen stenosis. CCTA Lumen segmentation is also an essential task for diagnosis assistance and quantification of pathologies from complex datasets. Region growing methods, active contours, and level-set based methods are popular. Accurate identification of plaques is challenging, especially for the non-calcified plaques, due to many factors such as the small size of coronary arteries, reconstruction artifacts caused by irregular heart beats, beam hardening, and partial volume averaging. Detailed review on 3D vessel lumen segmentation techniques are available in [160]. To date, there have only a few proprietary methods attempting or validating auto- matic detection of lesions [13,85,123]; these methods have concentrated on obstructive lesions only. We have previously described a preliminary algorithm for automated de- tection of both obstructive and non-obstructive lesions, validated on 19 patients [140]. 39 In this work, we describe an improved automated algorithm, which was validated on de- tection of lesions in the left anterior descending (LAD), left circumflex (LCX) and right coronary artery (RCA) of 42 consecutive patients (126 arteries, 252 proximal and mid segments). 3.3 Methods Our algorithm can be divided into following four main steps, as described below: 1) cen- terline extraction, 2) vessel linearization, 3) lumen segmentation and 4) lesion location detection. The algorithm requires twouser-defined pointsas input, at the ostium of RCA and the left main (LM) coronary artery, and secondly, the placing of a standard circular regionofinterestintheaortaattheleveloftheLMostium,toobtainscan-specificattenu- ation range for luminal contrast as previously described [80,82]. Note that algorithms for the automatic segmentation of aorta and detection of the origin of coronary arteries have been previously proposed [280] and therefore they were not the focus of this work. 
The algorithmusesasinputa3DCCTAimagedataset(typicallythebestphase)andabinary mask of coronary tree that is initially computed by the commercially available coronary arterysegmentationsoftware(Circulation, SyngoMMWPVersionVE31A,SiemensMed- ical Solutions, Forchheim, Germany). Following these steps, the process lesion detection is automated. The luminal attenuation in CCTA differs with acquisition protocols and between pa- tients scanned with the same protocol. Plaque attenuation thresholds have been shown to vary significantly with intracoronary lumen attenuation [42] and reconstruction pa- rameters choice [7] and are therefore patient and scan-specific. Therefore, scan-specific attenuationthresholdlevelsforlumenandplaque[80,82]areimportantforaccuratelumen segmentation and they were utilized in our algorithm. Our knowledge-based algorithm allows normal tapering of the coronary arteries and identifies and adjusts for arterial branch points. The flowchart of the proposed automated lesion detection algorithm is given in Figure 3.5. 40 Fig. 1. Flowchart of the proposed method. Figure 3.5: Flowchart of the proposed method. 3.3.1 Centerline Extraction and Classication of three Main Arteries In this section, we present an automatic centerline extraction algorithm as our first step, which also classifies the 3 main coronary arteries: LAD, LCX and RCA. Initially, 3D Thinning [156] is applied to the input mask (CCTA dataset bounded by the initial lumen mask), resulting in a collection of N center points xi in the arteries including aorta. Subsequently, these N points are connected using graph theory. An undirected graph G =<V,E > is defined as a set of nodes (V =p 1 ,p 2 , ... ,p N ) and a set of edges (E) that connect adjacent nodes. A nonnegative weight we is assigned on each edge e ∈ E, where w e is defined as the Euclidean distance between two nodes that are connected by an edge e. We deleted the thinning points inside the aorta by detecting the center of aorta p a which is found by using Dijkstras shortest path method [84] applied 41 Fig. 2. The first row shows an example of extracted centerlines in mid LAD (red) and D1(black). Each column shows an image at different angles, where colored Figure3.6: ThefirstrowshowsanexampleofextractedcenterlinesinmidLAD(red)and D1(black). Each column shows an image at different angles, where colored line indicates axis perpendicular to each other. The second row shows vessel linearization (LAD) of the first row in three orthogonal directions. Both rows show the same location of a lesion (25 49% stenosis by expert visual grading). The third row also shows linearized vessel (LAD) of a normal CCTA dataset. Red outline shows segmented lumen using our method. Detected lesion locations are marked by yellow point. between the ostium point of RCA p R and the ostium point of the LM p L resulting in K center points, so that the points in the left and right coronary arteries are not connected to each other. A temporal end-point p temp1 ∈ V of RCA, the farthest point from the ostium p R , is found using Dijkstras shortest path method. Among the points p i ∈ V and / ∈ {points on the path from p R to p temp1 }, another temporal end-point x temp2 ∈ V of RCA is found by searching the maximum cost that is sum of distances from p R and p temp1 using Dijkstras shortest path. 
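This end-point search reduces to single-source shortest paths on a sparse graph of thinned centerline points; the sketch below shows one way to set it up with SciPy. The synthetic point cloud and the radius-based adjacency are stand-ins for the actual thinning output and its voxel connectivity, so this is an illustration of the idea rather than the software's implementation.

```python
# Dijkstra-based end-point search on a cloud of thinned centerline points.
# A synthetic noisy 3-D curve and a radius-based adjacency stand in for the
# actual thinning output and its voxel connectivity.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 300)
pts = np.column_stack([40 * t, 15 * np.sin(4 * t), 5 * t]) \
      + 0.1 * rng.normal(size=(300, 3))          # "thinned" centerline voxels (mm)

# Connect each point to its neighbours within a small radius, weighting edges
# by Euclidean distance, as in the undirected graph G = <V, E> of the text.
tree = cKDTree(pts)
pairs = np.array(sorted(tree.query_pairs(r=0.6)))
w = np.linalg.norm(pts[pairs[:, 0]] - pts[pairs[:, 1]], axis=1)
n = len(pts)
adj = csr_matrix((np.r_[w, w],
                  (np.r_[pairs[:, 0], pairs[:, 1]],
                   np.r_[pairs[:, 1], pairs[:, 0]])), shape=(n, n))

ostium = 0                                 # index of the user-clicked ostium point
dist = dijkstra(adj, indices=ostium)       # shortest-path distances along the graph
dist[~np.isfinite(dist)] = -1.0            # ignore points not connected to the ostium
end_point = int(np.argmax(dist))           # farthest reachable point ~ distal artery end
print(end_point, round(float(dist[end_point]), 2))
```

The second end-point of the text follows the same pattern, this time maximizing the sum of the shortest-path distances from the ostium and from the first end-point.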
The end-points p temp1 and p temp2 are further classified into 42 the end-points of posterior descending artery (PDA)-RCA and posterior lateral branch (PLB)-RCA (p rca1 and p rca2 ) utilizing anatomical knowledge of relative artery positions. Similarly, the end-point of the 1st left coronary artery (either LAD or LCX) is also foundusingDijkstrasshortestpathmethod. TheLADandtheLCXarefurtherclassified using knowledge-based rules of relative artery positions. The end-point of Ramus artery is also identified if it exists. In each of the three main coronary arteries, the end-points of small branches, such as the first diagonal branch (D1) on LAD and the first obtuse marginal branch (OM1) on LCX, are also detected using Dijkstras shortest path method and anatomical knowledge of relative artery position. The first row in Figure 3.6 shows the results of an extracted centerline and a vessel classified as LAD. 3.3.2 Vessel Linearizations The vessels are subsequently converted to a linearized representation for further image processing. At each point p i ∈ V of the extracted centerlines, two basis vectors, which span the cross-sectional plane perpendicular to the centerline, are calculated. The basis vectorsdefinethecross-sectionalplanesperpendiculartothecenterline,whichareusedto map the arteries to linearized image coordinates. These cross-sectional planes (size 8 mm 8 mm, corresponding to 21x21 matrix with a voxel-size of 0.38 mm, the smallest voxel dimension of the coronary CTA volume) are stacked up along the centerlines, resulting in 3D linear volume of coronary arteries. The second row in Figure 3.6 shows an example of the resulting linearized vessels. 3.3.3 Lumen Segmentation and Calcium Volume Measurement Since attenuation ranges for lumen and plaque can depend on the patient and the ac- quisition protocol, we compute these attenuation ranges automatically from the scan using a validated method previously described our group [80,82]. Attenuation thresholds for the lumen, non-calcified and calcified plaque are found from the image histogram of the normal blood pool region-of-interest placed in the aortic root, and are adjusted for proximal-to-distal decrease in contrast [80,82]. Scan-specific attenuation range for the lumen is defined by the upper attenuation threshold for non-calcified plaque and the 43 Example of lumen segmentation and lesion detection in LAD. Range of proximal LAD lesion (stenosis 25~49%) marked by expert is Figure 3.7: Example of lumen segmentation and lesion detection in LAD. Range of proxi- malLADlesion(stenosis2549%)markedbyexpertisshowninpurple. Lumendiameters computed from the segmented lumen are shown in blue, and cropped lumen diameters by anatomical knowledge are shown in cyan. Expected normal luminal diameter is derived from the scan by automated piecewise line fitting (shown in red) between branch points, and takes into account normal tapering present in the dataset. Lesion with 25% stenosis detected by the algorithm, concordant with the expert observer, is marked with a black vertical line. lower attenuation threshold for calcified plaque [80,82]. The lumen is then identified by recursive region-growing in the linearized volume, using the computed attenuation range, which allows exclusion of calcified and non-calcified plaque. An example of the lumen segmentation is shown in red in the third row in Figure 3.6. Additionally, calcium volumes are measured using the attenuation threshold for cal- cified plaque on the five consecutive slices in the linearized volume. 
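The cross-sectional resampling that produces this linearized volume (Section 3.3.2) can be sketched with standard SciPy interpolation; the example below straightens a synthetic tube by sampling 21x21 planes perpendicular to its centerline. The fixed reference vector used to build the in-plane basis is one simple choice, not necessarily the authors' construction, and the phantom replaces real CCTA data.

```python
# Straightened (linearized) vessel: stack cross-sections sampled perpendicular
# to the centerline. Synthetic phantom; the fixed reference vector used to
# build the in-plane basis is one simple choice, not necessarily the authors'.
import numpy as np
from scipy.ndimage import map_coordinates

# Phantom volume (z, y, x) containing a bright curved tube.
vol = np.zeros((80, 64, 64), dtype=np.float32)
zz, yy, xx = np.meshgrid(np.arange(80), np.arange(64), np.arange(64), indexing="ij")
cy, cx = 32 + 6 * np.sin(zz / 12.0), 32 + 4 * np.cos(zz / 15.0)
vol[(yy - cy) ** 2 + (xx - cx) ** 2 < 9] = 350.0          # lumen-like attenuation

# Centerline points (z, y, x); in practice these come from Section 3.3.1.
z = np.arange(80, dtype=float)
cl = np.column_stack([z, 32 + 6 * np.sin(z / 12.0), 32 + 4 * np.cos(z / 15.0)])

half, spacing = 10, 1.0          # 21x21 grid; the text uses 0.38 mm spacing
u = np.arange(-half, half + 1) * spacing
gu, gv = np.meshgrid(u, u, indexing="ij")

tangents = np.gradient(cl, axis=0)
tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)

ref = np.array([0.0, 0.0, 1.0])                  # arbitrary fixed reference direction
planes = []
for p, t_vec in zip(cl, tangents):
    e1 = np.cross(t_vec, ref)
    if np.linalg.norm(e1) < 1e-6:                # tangent (nearly) parallel to ref
        e1 = np.cross(t_vec, np.array([0.0, 1.0, 0.0]))
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(t_vec, e1)                     # second in-plane basis vector
    coords = p[:, None] + e1[:, None] * gu.ravel() + e2[:, None] * gv.ravel()
    planes.append(map_coordinates(vol, coords, order=1).reshape(gu.shape))

linearized = np.stack(planes)                    # (n_centerline_points, 21, 21)
print(linearized.shape)
```

Lumen segmentation (Section 3.3.3) then proceeds slice by slice in this straightened volume, growing a region of connected voxels whose attenuation lies within the scan-specific range derived from the aortic ROI.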
3.3.4 Detection of Lesions with Stenosis From the previous steps, presence and location of lesions are identified by a knowledge- based algorithm, using the lumen segmentation performed with scan-specific lumen at- tenuation range. Lumen diameters for each 2D cross-section are first obtained from the segmentedlumenareas. Expectedornormalluminaldiameterisderivedfromthescanby 44 automated piecewise least squares line fitting (shown in red in Figure 3.7) over the proxi- mal and mid segments (67%) of the coronary artery between branch points detected; this computation allows us to take into account any normal tapering present in the dataset. The lumen diameters at all positions are first cropped, considering expected dimensions ofthecoronaryarteries[3],andthenthelumendiametersafterbranchpointsarecropped again by using the lumen diameters before the branch points. Lesion detection is then performed in multiple passes through the coronary artery, using the following steps: (i)Inthefirstpass, allpossiblelesionsarefoundbyconsideringthedifference, d, from the piecewise fitted line, as below: d =(as+b)−l s (3.1) where s is the distance in mm from the ostium, l s is the luminal diameter at s, and a and b are obtained from least-squares line fitting. In this first pass, branch-points are excluded by cropping above the fitted line, as shown in Figure 3.7. (ii) For each possible lesion, the algorithm searches locally for proximal and distal normal references by considering d (1). (iii) In a second pass, we compute a stenosis estimate S e (%) for each possible lesion, as below: S e = [ 1− l s (as+b) ×100 ] (3.2) If S e is greater than or equal to 25%, the lesion is considered for the next step. (iv) For lesions with S e ≥25%, % stenosis is computed for each cross-section corre- sponding tos between the detected proximal and distal limits, considering both reference limits, as previously published: S t = [ 1− l s l p − sp s d (l p −l d ) ] (3.3) where l s , l p , l d are the luminal diameters for cross-section corresponding to s, proximal and distal references; s p and s d are the linear distances between the proximal reference, and the distal reference, and the cross-section corresponding to s, respectively. 45 An example of 3D volume rendering (a), detected nonobstructive lesion (mixed plaque) in a CCTA image (b) and according ICA image (c). Arrows in (a), (b) Figure3.8: Anexampleof3Dvolumerendering(a),detectednonobstructivelesion(mixed plaque) in a CCTA image (b) and according ICA image (c). Arrows in (a), (b) and (c) indicate the location of the same lesion (25 49% stenosis by expert visual grading from CCTA, and 34.0% stenosis by quantitative analysis from ICA). Lesions with maximum stenosis S t ≥25% are marked, in the region of maximum stenosis as well as the proximal and distal reference segments. At branch point locations, lesions can be missed due to the wider lumen diameter at the branch. Therefore, in addition to the above lesions with stenosis detection algorithm, a search for calcified lesions was performed at each branch point, and calcified lesion was added at that location if detected. The user has the option to manually accept or reject any identified lesions (calcified or non-calcified). 3.3.5 CCTA Acquisition and Reconstruction The results of our proposed algorithm are evaluated on the CCTA data sets of 42 con- secutive patients, acquired at the Cedars-Sinai Medical Center. 
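Before turning to the acquisition details, the stenosis estimate of Eqs. (3.1)-(3.2) can be illustrated on a synthetic diameter profile. The sketch below uses a single least-squares line in place of the piecewise, branch-aware fitting described in Section 3.3.4, and all diameters are invented for illustration.

```python
# Hedged sketch of the stenosis estimate of Eqs. (3.1)-(3.2) on a synthetic
# diameter profile; a single least-squares line stands in for the piecewise,
# branch-aware fitting of Section 3.3.4, and all diameters are invented.
import numpy as np

s = np.arange(0.0, 60.0, 0.5)                   # distance from the ostium (mm)
l_s = 3.6 - 0.02 * s                            # normally tapering lumen diameter (mm)
l_s = l_s - ((s > 20) & (s < 26)) * 1.2 * np.exp(-((s - 23) ** 2) / 4.0)  # focal narrowing

a, b = np.polyfit(s, l_s, 1)                    # expected "normal" diameter a*s + b
d = (a * s + b) - l_s                           # Eq. (3.1): drop below the fitted line
S_e = (1.0 - l_s / (a * s + b)) * 100.0         # Eq. (3.2): stenosis estimate in %

peak = int(np.argmax(S_e))
print(f"largest diameter drop d = {d.max():.2f} mm")
if S_e[peak] >= 25.0:                           # detection threshold used in the text
    print(f"lesion at s = {s[peak]:.1f} mm, estimated stenosis {S_e[peak]:.0f}%")
```

In the full method, candidate lesions passing this 25% threshold are then re-graded per cross-section against locally detected proximal and distal reference diameters, as in Eq. (3.3).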
All CCTA datasets were acquired on the dual-source 64-slice CT scanner (Definition Siemens Medical Solution, 46 Forchheim, Germany) with gantry-rotation time of 330 milliseconds and standard colli- mation of 0.6 mm, and had good-excellent image quality. CCTA scan parameters were 512×512 matrix, voxel size 0.38×0.38×0.3 mm, and typically consisted of 400-500 slices per dataset. Of the 42 patients, 10 patients subsequently underwent invasive coronary angiography (ICA) within 1 month of the CCTA scan (Figure 3.8). 3.3.6 Visual Assessment and Reference Standard All data sets were first visually assessed in a standard and systematic way, by 3 expert readers, using consensus reading to minimize the inter-observer variability; these readers were three experienced imaging cardiologists. Segmental analysis was based on the stan- dard 15-segment American Heart Association [86]. Each segment of the coronary artery tree was graded for the presence and type of plaque or stenosis, as recommended by pub- lished guidelines of the Society of Cardiovascular CT [215], and all coronary lesions with stenosis ≥25% were identified. This was used as the reference standard for algorithm performance in this study. For comparison, a second blinded reader (imaging cardiologist with Level III CT certification, with 1 year of experience with cardiac CT), also inde- pendently identified all coronary lesions with stenosis≥25%. The observer agreement for this reader with the reference standard was 94.8% (kappa 0.84, 95% confidence interval 0.75 to 0.92, p<0.0001). 3.3.7 Statistical Analysis For the automated lesion detection algorithm, 3 operator interactions are needed, to place the ROI on the aorta, and annotating the ostia of the left and right coronary arteries. Forevaluationofthereproducibilityofthese3interactions,asecondindependent expert reader evaluated the algorithm by running the software independently on all the cases. Theagreementofthelesiondetectionresultsderivedfromtheproposedautomated lesion detection software was measured using the kappa statistics, as well as sensitivity, specificityandReceiver-OperatorCharacteristic(ROC)analysis,withAnalyse-itsoftware [1]. A p value <0.05 was considered statistically significant. 47 Fig. 5. The first row shows an example of extracted centerlines in mid LAD (red) and D1 (black). Each column shows an image at different angles, where colored Figure 3.9: The first row shows an example of extracted centerlines in mid LAD (red) and D1 (black). Each column shows an image at different angles, where colored line indicates axis perpendicular to each other. The second row shows vessel linearization (LAD) of the first row in three orthogonal directions. Both rows show the same location ofalesion(2549%stenosisbyexpertvisualgrading). Thethirdrowalsoshowslinearized vessel (LAD) of a normal CCTA dataset. Red outline shows segmented lumen using our method. Detected lesion locations are marked by yellow point. 3.4 Results We tested the algorithm on 42 consecutive patients [26 male]. The mean age was 60±12 years and the mean body weight was 83±10 kg. 21 patients had any coronary lesions with stenosis greater than or equal to 25%. In these patients, 45 lesions with stenosis ≥25% were identified. Eight out of the remaining 21 patients had lesions with stenosis <25% and 13 patients did not have any lesions (no luminal stenosis or plaque). In the 45 lesions with stenosis ≥25%, 20 lesions were obstructive (≥50%). 
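The per-segment agreement statistics used throughout this chapter (sensitivity, specificity, and the kappa statistic of Section 3.3.7) reduce to a 2x2 confusion matrix over segments; the sketch below computes them for simulated algorithm and expert labels with scikit-learn, purely to make the definitions explicit. The labels are made up and are not the study data.

```python
# Per-segment agreement statistics from simulated labels (not the study data).
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

rng = np.random.default_rng(3)
expert = rng.integers(0, 2, size=252)           # 1 = lesion with >= 25% stenosis
algo = expert.copy()
flip = rng.random(252) < 0.15                   # simulate some disagreement
algo[flip] = 1 - algo[flip]

tn, fp, fn, tp = confusion_matrix(expert, algo).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
kappa = cohen_kappa_score(expert, algo)
print(f"sensitivity={sensitivity:.2f}  specificity={specificity:.2f}  kappa={kappa:.2f}")
```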
The proposed algorithm ran successfully on all the proximal and mid coronary artery segments in all patients (Figure 3.9 - Figure 3.11), with an execution time of around 50 seconds for centerline extraction for all coronary arteries and <2 seconds for all subsequent steps on a standard 2.5 GHz personal computer running Windows XP.

Figure 3.10: Example of lumen segmentation and lesion detection in the LAD. The extent of the proximal LAD lesion (stenosis 25-49%) marked by the expert is shown in purple. Lumen diameters computed from the segmented lumen are shown in blue, and the lumen diameters cropped by anatomical knowledge are shown in cyan. The expected normal luminal diameter is derived from the scan by automated piecewise line fitting (shown in red) between branch points, and takes into account normal tapering present in the dataset. The lesion with 25% stenosis detected by the algorithm, concordant with the expert observer, is marked with a black vertical line. The second vertical line, at around x = 60 mm, is the lesion detected additionally by the calcium volume measurement.

In the 45 lesions with stenosis ≥25%, the proposed automated algorithm correctly identified 22/24 lesions in the LAD, 10/10 in the LCX, and 11/11 in the RCA. Figure 3.12 and Figure 3.13 show 2 patient examples from our study. In total, the proposed automated algorithm correctly identified true lesions yielding a sensitivity of 93% (42/45) on a per-segment basis (Table 3.1). Two lesions (25-49% stenosis) were missed by our approach. Also, on a per-segment basis, the proposed algorithm showed 81% specificity, 83% accuracy, 99% negative predictive value, and 48% positive predictive value.

There were 39 false positive detections (Table 3.2) in the 252 coronary artery segments of the 42 patients by the proposed algorithm, resulting in an average of 0.18 per segment; the reasons for these false positives are described in Table 3.2. Twenty-three false positives out of 46 (50%) were lesions with <25% stenosis as assessed by the expert observer (Figure 3.14). When excluding the 23 detections that were lesions with stenosis <25% and considering only the detections that were not related to stenotic atherosclerosis as false positives, the specificity increased to 89% and the average false positive rate was 0.09 per segment. Of the remaining 23 false positive detection marks, 8 were associated with a normal segment with narrowing but no plaque, 13 with undetected small branches, and 2 with unclear image contrast and blurring.

Figure 3.11: An example of 3D volume rendering (a), a detected nonobstructive lesion (mixed plaque) by stenosis calculation (CCTA image) (b), and a detected nonobstructive lesion (mixed plaque) by calcium volume measurement at a branch point (CCTA image) (c). The right arrow in (a) and the arrow in (b), and the left arrow in (a) and the arrow in (c), indicate the locations of the same lesions (25-49% stenosis), respectively. This patient did not undergo ICA.

Table 3.1: Algorithm Performance. Performance characteristics of the proposed algorithm for lesion (≥25% stenosis) detection in N=42 patients (13 completely normal). In a total of 45 lesions with ≥25% stenosis, 6 were of severe stenosis (≥70%) and 14 were of obstructive stenosis (50-69%). Sensitivity was 93%, specificity was 81%, and accuracy was 83% per segment.

Per segment | Lesions found by expert (≥25% stenosis) | True lesions found by algorithm | Sensitivity | Specificity | Accuracy
LAD | 24 | 22 | 92% | 75% | 80%
LCX | 10 | 10 | 100% | 76% | 79%
RCA | 11 | 11 | 100% | 82% | 75%

Table 3.2: Reason for Additionally Detected Lesions. Breakdown of detected false positive lesions.

Reason | LAD | LCX | RCA
Stenosis <25% by expert | 9 | 6 | 8
Normal segment with narrowing (no stenosis or plaque; normal variant) | 3 | 2 | 3
Undetected small vessel branch | 2 | 3 | 8
Unclear image contrast + blurring | 0 | 2 | 0

Additionally, 10 of our 42 patients underwent invasive coronary angiography (ICA), using the Inova digital X-ray system from GE Healthcare, with multiple views of the left and right coronary artery to identify the projection in which the segment appeared most stenotic. Standard cardiac catheterization technique was employed. Acquired images were transferred to an AGFA Heartlab workstation for quantitative coronary catheter angiography (QCA) analysis. The ten cases were interpreted for stenosis by consensus of two experienced readers, and a separate investigator independently performed QCA for all 10 cases. Reference luminal positions proximal and distal to the stenosis were defined by the readers, and the QCA software on the workstation then detected the luminal edges, located the site of maximal stenosis, and quantified the maximal stenosis [217] (Figure 3.12 and Figure 3.13). Based on ICA-based QCA, 10 out of 10 patients had stenosis ≥25%, in agreement with the results of our algorithm on CCTA (Figure 3.12 and Figure 3.13).

Program reproducibility: There was very good agreement between the 2 independent readers running the lesion detection software, with an observed agreement of 94.8% (kappa 0.89, 95% confidence interval 0.83 to 0.95, p<0.0001). This program agreement was comparable to the agreement of the second blinded reader with the reference standard (agreement 94.8%, kappa 0.84, 95% confidence interval 0.75 to 0.92, p<0.0001).

Table 3.3: Program Reproducibility. Results when different readers ran the program.

 | Sensitivity | Specificity | ROC-AUC
Program (reader 1) | 96% | 78% | 0.87
Program (reader 2) | 96% | 80% | 0.87

Table 3.3 shows the sensitivity, specificity and ROC area under the curve (ROC-AUC) for the 2 readers running the program, compared to the reference standard. The ROC-AUC was 0.87 for both readers running the program.

3.5 Discussion

Our method provides automated detection of lesions, from centerline extraction through computation of stenosis. The only required user interactions were 3 clicks in the whole process: setting the RCA and LM ostium points for centerline extraction and artery classification, and placing a region of interest in the aorta for obtaining the scan-specific luminal attenuation range. Automatic algorithms have previously been proposed for these steps [280] and these could be combined with our lesion detection technique. The remaining processes were all automatic.

Our proposed study is in line with the few previous studies attempting automated lesion detection from CCTA. Halpern [123] and Arnoldi [13] published validation papers using commercial software with expert human interpretation, where they detected obstructive lesions only.
Dinesh [85] proposed a method that relied on manual centerlines and artery classification, did not provide a specific stenosis calculation, and was evaluated on a small number of patients (8 patients). Our algorithm was automated with less user interaction and did not need manual selection of the main arteries.

One of the main advances compared to previously published work is that our method can accurately detect both obstructive and nonobstructive lesions (25-49% stenosis), whereas previous studies detected obstructive lesions only (≥50% stenosis); this is of particular clinical value since lesions with nonobstructive stenosis have been shown to contribute to cardiovascular events.

Figure 3.12: Detection of lesions with stenosis. Arrows indicate the locations of lesions. Detected lesions with stenosis caused primarily by non-calcified plaque in the mid segment (first row; 59% stenosis by quantitative analysis and 50-69% stenosis by expert visual grading). The second row shows the corresponding ICA images of the first row (36.2% stenosis by quantitative analysis).

Our algorithm was evaluated over a sizable patient population, 42 patients. The algorithm showed a sensitivity of 96% and a negative predictive value of 99% on a per-segment basis in the three main coronary arteries, which is most desirable. The specificity of 78% is relatively low due to 46 additional detections on a per-segment basis, which could be manually rejected as in clinically used coronary calcium scoring [8,43]. Twenty-three of the 46 additional detections were lesions with stenosis less than 25% (Figure 3.14). If we do not count the detections related to stenotic atherosclerosis (i.e., lesions with <25% stenosis) as false positives, the specificity is 89%, which is also desirable. From a clinical standpoint, it is essential for the computer-aided system to have a high sensitivity. The user can easily discard the false positive results with a one-button click. By identifying all the potential sites, the system would aid the physician by quickly identifying all lesions.

Figure 3.13: Detection of lesions with stenosis. Arrows indicate the locations of lesions. Detected lesions with stenosis caused by mixed plaque in the proximal segment (first row; 70% stenosis by quantitative analysis and 90-99% stenosis by expert visual grading). The second row shows the corresponding ICA images of the first row (62.4% stenosis by quantitative analysis).

The proposed automated lesion detection algorithm showed high software reproducibility (observed agreement 94.8%). The small inter-observer variability was due to the 3 user interactions: setting the 2 ostial points and setting a ROI in the aorta. Those user interactions produced slightly different lumen segmentations and estimated normal luminal reference diameters, resulting in slightly different stenosis calculations. A fully automatic system without any user interaction would produce no inter-observer variability. This software agreement was comparable to the agreement with the reference standard.

There were a few limitations in our study. The computation time for centerline extraction is relatively long, and the algorithm is not fully automated even though only 3 simple user interactions are required. Reduced specificity due to the additional findings also poses a challenge.
Figure 3.14: An example of a false positive. The algorithm detected this location as a lesion with stenosis ≥25%, but the human expert readers graded it as <25% stenosis.

Automatic detection of all small arterial branches and a more accurate expected normal reference diameter calculation should ameliorate this limitation.

Future work: More accurate lumen segmentation, automated detection of all branches, and assessment of the vessel wall in combination with stenosis may improve our lesion detection results and, in particular, decrease the frequency of false-positive lesions.

3.6 Conclusion

In conclusion, we developed a novel automated algorithm for detection and localization of obstructive and nonobstructive arterial lesions from CCTA, which performed with high sensitivity compared to an expert observer.

Chapter 4
Image Denoising of Low-radiation Dose Coronary CT Angiography by an Adaptive Block-Matching 3D Algorithm

4.1 Introduction

Accurate assessment of left ventricular (LV) function and regional wall motion is essential for the diagnosis of cardiovascular disease and is an important predictor of major adverse cardiac events [270]. Electrocardiographic (ECG)-gated helical coronary CT angiography (CTA) using multidetector computed tomography (MDCT) scanners can generate whole-volume ventricular data in any phase of the cardiac cycle, and has been shown to accurately assess global and regional LV function [174]. However, the radiation exposure during CT examinations is of great concern [36]. Therefore, ECG-based tube current modulation (applying maximal tube current only to the phases of the cardiac cycle needed for diagnosis) is routinely applied to reduce the radiation dose with helical MDCT. As a result, image noise is increased during the phases of the cardiac cycle in which the tube current is minimized, reducing image quality. This is a limitation in the analysis of LV function by CTA.

A novel image denoising algorithm, Block-Matching 3D (BM3D), has recently been proposed and has been shown to be superior to previous image denoising algorithms [73,74]. The BM3D algorithm is based on an enhanced sparse representation in the 3D transform domain obtained through matching of similar blocks. The images filtered by BM3D can show the finest anatomical details shared by the matched blocks and preserve the unique features of each image block as well as the edges in the images. To our knowledge, this method has not previously been applied to low-dose CT images.

Our aim in this study was to optimize and validate an adaptive denoising algorithm based on BM3D for reducing image noise and improving LV assessment in low-dose coronary CTA. In this chapter we describe the denoising algorithm and its validation with low-radiation dose coronary CTA datasets from consecutive patients.

4.2 Background

4.2.1 Clinical Background

More than 62 million CT scans are currently performed each year in the United States, driven primarily by the decreased scan time (<1 second). However, the radiation dose to patients who undergo CT is of great concern. CT represents an important source of ionizing radiation arising from medical exposures and is associated with an increased cancer risk [90]. In general, reducing the radiation dose results in increased noise in the images.
Therefore, it is of great importance to minimize radiation exposure during imaging, in accordance with the "as low as reasonably achievable" (ALARA) philosophy. The radiation dose from CT scanning can be described by various measures. The effective dose (in sieverts (Sv) or millisieverts (mSv)) is designed to be proportional to a generic estimate of the overall harm to the patient caused by radiation exposure [180]. It is useful when the dose distribution is not homogeneous, as in CT. The organ dose from CT scanning (1.5∼20 mSv) is much higher than that from plain-film radiography (0.005∼0.1 mSv), as shown in Table 4.1. In particular, the dose from CCTA is estimated at 6.7∼13 mSv [131]. Reducing the radiation dose of CCTA while preserving diagnostic image quality is therefore an important goal of CCTA research.

The dual-source CT scanner (Definition; Siemens Medical Solutions, Forchheim, Germany) provides 3 distinct dose-reducing strategies for CCTA with the use of helical acquisitions [122]: (1) lowering the tube voltage from 120 to 100 kVp when the patient size allows, as with other 64-slice CT scanners, with a radiation dose reduction of 39% to 51% (Figure 4.1); (2) using the shortest possible full tube current (FTC) window with electrocardiographic (ECG)-based tube current modulation; and (3) reducing the tube current outside the FTC window from 20% to 4%. However, these dose-reducing strategies cannot be applied indiscriminately to all patients undergoing CCTA without compromising image quality. These 3 strategies were used to develop and evaluate a clinical algorithm for reducing radiation dose in CCTA in a consecutive group of patients referred for clinically indicated CCTA [122].

Table 4.1: Typical organ radiation doses from various radiologic studies in milligrays (mGy) or millisieverts (mSv) [36]. For CT scanning, 1 mSv = 1 mGy.

Study type | Relevant organ | Relevant organ dose (mGy or mSv)
Dental radiography | Brain | 0.005
Posterior-anterior chest radiography | Lung | 0.01
Lateral chest radiography | Lung | 0.15
Screening mammography | Breast | 3
Adult abdominal CT | Stomach | 10
Barium enema | Colon | 15
Neonatal abdominal CT | Stomach | 20

ECG-based tube current modulation, which applies maximal tube current only to the phase or phases of the cardiac cycle needed for diagnostic imaging of the coronary arteries, is routinely used to reduce the radiation dose with helical MDCT. Conventional tube current modulation reduces the tube current to about 20% of the maximum during parts of the cardiac cycle, resulting in an overall estimated radiation dose saving of 30%-50% (5.2 mSv on average) [191]. With the dual-source CT scanner, maximal ECG-based tube current modulation reduces the tube current to as low as 4% outside a predefined window (Siemens Medical Systems, Forchheim, Germany), further reducing the radiation dose [122]. Image noise is increased during the phases of the cardiac cycle in which the tube current is suppressed (Figure 4.1), particularly with maximal dose modulation. It is not known whether evaluation of regional LV wall motion with such data is reliable. A study was proposed on such data [191], determining whether global and regional LV wall motion and function can be reliably assessed from helical dual-source CCTA using the lowest allowable tube current during dose modulation (Figure 4.2). However, accurate LV functional assessment is hindered by image noise.

Figure 4.1: Short-axis images of the left ventricle at end-diastole and end-systole with 120 kVp and 100 kVp [191].
4.2.2 Technical Background

Noise in images is still a challenging problem in the signal and image processing community. The noise in medical imaging generally originates in the physical processes of image scanning, rather than in the tissue textures. Generally, the overall noise in medical imaging is assumed to be additive, with a zero-mean Gaussian distribution of constant variance σ_N² (AWGN). However, the noise in medical imaging can also be modeled by a nonlinear function of the image intensity, depending on the image acquisition protocol. A better understanding of the noise properties in medical imaging is required for accurate denoising. MR noise is well known to have a Rician distribution [121]. CT image noise originates from noisy projection measurements and includes quantum noise and electronic noise [177]. Although the noise of CT images is found to be approximately Gaussian [157,169] or Poisson distributed [271] (Figure 4.3), further analysis of CT noise modeling is required because of its correlated, nonlinear characteristics and dependence on the acquisition protocol [119].

Figure 4.2: 20 short-axis slices of the mid left ventricle over the R-R interval (0%-95%) scanned at 120 kVp [191]. The effective radiation dose was 4.9 mSv. Full tube current was applied only at 70% of the R-R interval, and minimal tube current was used in all other parts of the cardiac cycle. Although the radiation dose was greatly decreased, image noise was increased throughout the cardiac cycle except at 70% of the R-R interval.

Figure 4.3: Probability distribution from CT projection data [169]. The projection data can be approximated by a Gaussian distribution.

Although MR images provide high resolution and signal-to-noise ratio, various filtering techniques have been applied to MR images as post-processing, exploiting their well-known noise characteristics: Rician distributed noise [67,108,178,197]. Although CT noise modeling is not yet well established, various methods have also been proposed to reduce the noise in low-radiation dose CT images, especially by suppressing noise during image reconstruction [249,265]. The other option for reducing the noise in low-radiation dose CT is denoising the noisy images in the image domain as a post-processing step. Several image denoising algorithms have been applied to low-dose CT noise reduction, such as the anisotropic diffusion filter [226], a wavelet-based structure-preserving filter [28] and the nonlocal means (NLM) algorithm [144]. Normal-radiation dose CT images have also been used as a priori information [172], and two different CT volumes from high-energy and low-energy scans have been utilized [16] for restoring low-radiation dose CT images. A state-of-the-art image denoising algorithm, Block-Matching 3D (BM3D) [73,74], has recently been proposed and is known to outperform most existing image denoising algorithms. The BM3D algorithm is based on an enhanced sparse representation in the 3D transform domain through matching of similar 2D blocks. The images filtered by BM3D can show the finest anatomical details shared by the matched blocks and preserve the unique features of each image block.

4.3 Methods

4.3.1 Overview of the Block-Matching 3D algorithm

The BM3D algorithm was first proposed by Dabov et al. [73,74].
BM3D is an image denoising algorithm based on block matching (Figure 4.4) through a block-similarity measure, and on reducing noise in the 3D transform domain via a sparse representation of the grouped matched blocks (Figure 4.5). Let us define an observed noisy image y : X → R of the form

y(i) = x(i) + n(i),   i ∈ X,   (4.1)

where i is the pixel location in the 2D spatial coordinates of the image domain X, x is the true image, and n is independent and identically distributed (i.i.d.) additive white Gaussian noise (AWGN) with zero mean and standard deviation σ_n.

The inputs for BM3D are the observed image y and the standard deviation σ_n of the noise n. With these inputs, the denoising procedure is performed in two steps: obtaining a basic estimate x̂_basic and a final estimate x̂_final of the true image. The basic estimate is obtained via block matching and hard-thresholding the coefficients of a unitary 3D transform of y. The final estimate is obtained from the basic estimate and uses a Wiener filter instead of hard-thresholding. Because of the similarity between the matched blocks, the unitary 3D transform can obtain a highly sparse representation of the true image while preserving the finest details.

Figure 4.4: Grouping of blocks from noisy natural images corrupted by AWGN, showing a reference block (R) and a few blocks matched to R [74].

With a fixed block size N_B, block matching searches for blocks similar to a reference block Y_R using a block-similarity measure S (Eq. (4.2)), which is calculated using the l2 distance between a block Y_i and the reference block Y_R:

S(Y_R, Y_i) = (1 / N_B²) ||Th_b(T_2D(Y_R)) − Th_b(T_2D(Y_i))||²_2,   (4.2)

where Th_b denotes the thresholding operator with threshold θ_thr2d and T_2D is a 2D linear unitary transform operator such as the discrete cosine transform (DCT) or the discrete Fourier transform (DFT). The matched blocks are stacked together to form a 3D array G_R. Then a unitary 3D transform T_3D is applied to G_R, and denoising is performed in the 3D transform domain by hard-thresholding the transform coefficients. The estimate of the true image blocks X̂_R^basic is obtained by the inverse transform as

X̂_R^basic = T_3D^{-1}( Th_f( T_3D(G_R) ) ),   (4.3)

where Th_f denotes the thresholding operator with threshold θ_thr3d·σ_n and T_3D is a 3D linear unitary transform operator. After this estimation step, the basic estimate of the true image x̂_basic is computed as a weighted average of all local block estimates X̂_R^basic, which resolves the overcomplete representation caused by the overlap between the estimated blocks.

Figure 4.5: Flowchart of the BM3D denoising algorithm [74].

The denoising performance is increased by applying a Wiener-filter-based scheme to the basic estimate x̂_basic, with a procedure similar to Eqs. (4.2) and (4.3) but with the hard-thresholding replaced by Wiener filtering. The following estimate of the true image blocks X̂_R^wiener is produced:

X̂_R^wiener = T_3D^{-1}( C_wiener( T_3D(G_R) ) ),   (4.4)

where C_wiener denotes the empirical Wiener shrinkage coefficients defined from the block estimates of the basic estimate x̂_basic. The final estimate x̂_final is obtained from the block-wise estimates X̂_R^wiener by a weighted average of all local block estimates.
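To make the grouping and hard-thresholding steps above concrete, the following is a minimal sketch, not the reference BM3D implementation of [74]: it uses a plain L2 block distance in place of the pre-thresholded transform-domain distance of Eq. (4.2), an orthonormal FFT in place of the separable DCT/wavelet transform, and it omits the aggregation and Wiener stages. All parameter values are illustrative assumptions.

```python
import numpy as np

def group_similar_blocks(noisy, ref_xy, block=8, search=16, max_blocks=16, tau=2500.0):
    """Collect blocks similar to the reference block (simplified analogue of Eq. (4.2):
    per-pixel L2 distance, no pre-thresholded 2D transform)."""
    H, W = noisy.shape
    ry, rx = ref_xy
    ref = noisy[ry:ry + block, rx:rx + block]
    candidates = []
    for y in range(max(0, ry - search), min(H - block, ry + search) + 1):
        for x in range(max(0, rx - search), min(W - block, rx + search) + 1):
            blk = noisy[y:y + block, x:x + block]
            d = np.sum((blk - ref) ** 2) / block ** 2
            if d <= tau:
                candidates.append((d, y, x))
    candidates.sort(key=lambda t: t[0])
    coords = [(y, x) for _, y, x in candidates[:max_blocks]]
    group = np.stack([noisy[y:y + block, x:x + block] for y, x in coords])
    return group, coords

def hard_threshold_3d(group, sigma, thr3d=2.7):
    """Denoise one group in the 3D transform domain (analogue of Eq. (4.3)),
    using an orthonormal FFT as the unitary 3D transform."""
    coefs = np.fft.fftn(group, norm="ortho")
    coefs[np.abs(coefs) < thr3d * sigma] = 0.0
    return np.real(np.fft.ifftn(coefs, norm="ortho"))

# toy usage: denoise the group of one reference block
rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0, 255, 64), (64, 1))      # smooth synthetic "image"
sigma = 15.0
noisy = clean + rng.normal(0.0, sigma, clean.shape)
group, coords = group_similar_blocks(noisy, ref_xy=(20, 20))
estimates = hard_threshold_3d(group, sigma)            # block-wise basic estimates
print(group.shape, estimates.shape)
```

In the full algorithm these block-wise estimates are returned to their original positions and averaged with weights, which is what produces the basic estimate x̂_basic described above.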
4.3.2 Adaptive Block-Matching 3D Scheme

Based on previous studies estimating the characteristics and noise variance of CT images [119,157], we assumed that the noise in CT images can be modeled by a Gaussian distribution. We estimated the initial noise variance in the low-radiation dose CT images with a previously proposed wavelet-based noise estimation method [87], assuming that the noise is unbiased AWGN. The estimated noise standard deviation σ̂_n is used as the input to BM3D in Eqs. (4.2) and (4.3).

We applied anisotropic edge-enhancing diffusion (EED) [268] with parameters chosen for minimal smoothing in order to enhance the edges of the LV. EED was applied to the basic estimate x̂_basic, and the updated x̂_basic estimate was used in the Wiener filter process of Eq. (4.4).

The standard-radiation dose scans (70% phase) were used as simulation images to find the optimized parameters (Figure 4.6). With the estimated noise standard deviation σ̂_n, we investigated the best parameters for BM3D on the 70% phases, to which AWGN with the estimated noise variance σ̂_n² was added. The parameters investigated, using the peak signal-to-noise ratio (PSNR) calculation of Eq. (4.5), were θ_thr2d and θ_thr3d for the hard-thresholding functions in Eqs. (4.2) and (4.3), the block size N_B, and the depth of the blocks stacked in G_R.

PSNR = 10 log_10 ( MAX(reference volume)² / MSE )   (4.5)

Figure 4.6: Simulation result with the 70% phase. Original 70% phase (left), additive white Gaussian noise added with standard deviation 444.9 HU (middle), and the denoised 70% phase (right).

4.3.3 Data Acquisition and Evaluation Framework

Our study included 7 consecutive patients who underwent coronary CTA for clinical reasons with the dual-source CT scanner. The acquisition protocol has previously been described in detail [122,191]. Briefly, all coronary CTA scans were acquired with the same dual-source CT scanner (Definition; Siemens Medical Solutions, Forchheim, Germany), with a gantry rotation time of 330 msec, an x-ray tube voltage of 120 kVp and maximal tube current only at the mid-diastolic phase (70%), with the tube current reduced to 4% at all other phases of the cardiac cycle. Raw data were reconstructed using 0.6 mm slice thickness, 0.3 mm slice increment, a 250×250 mm² field of view, single-segment reconstruction, and a medium-smooth reconstruction kernel (B26f). Our adaptive BM3D denoising algorithm was applied to the 40% phase, which is one of the noisiest phases due to the reduced tube current.

The final denoised images x̂_final were compared to the mid-diastolic 70% phase of the R-R interval, which was judged to be the most noise-free phase. Image noise was measured for the blood pool and the myocardium by placing a region of interest of about 2.1 cm² in a corresponding uniform region, as previously described for the coronary arteries in [4,122,158]. Myocardial masses obtained from the 40% low-dose noisy frames and the denoised frames were compared to the gold standard, defined as the mass measured from the 70% cardiac phase using commercial software (Vitrea version 4.2, Vital Images, Toshiba). Our assumption for the comparison of myocardial mass is that the mass should not vary significantly between the phases of the cardiac cycle, according to the principle of incompressibility of myocardial tissue.
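Before turning to the results, here is a minimal sketch of the two quantitative ingredients of the adaptive scheme in Section 4.3.2: a wavelet-style noise estimate (the common median-absolute-deviation estimator on the finest diagonal detail band, used here as a stand-in for the wavelet-based method of [87]) and the PSNR of Eq. (4.5). The synthetic data and the specific estimator are assumptions for illustration, not the thesis implementation.

```python
import numpy as np

def estimate_sigma(img):
    """Estimate AWGN standard deviation from the finest diagonal (HH) Haar band
    via the robust median estimator sigma ~= median(|HH|) / 0.6745."""
    a = img[0::2, 0::2]; b = img[1::2, 0::2]
    c = img[0::2, 1::2]; d = img[1::2, 1::2]
    hh = (a - b - c + d) / 2.0          # orthonormal Haar diagonal detail
    return np.median(np.abs(hh)) / 0.6745

def psnr(reference, estimate):
    """PSNR as in Eq. (4.5): 10 * log10(MAX(reference)^2 / MSE)."""
    mse = np.mean((reference.astype(float) - estimate.astype(float)) ** 2)
    return 10.0 * np.log10(reference.max() ** 2 / mse)

# toy check on a synthetic uniform "70% phase" patch with added AWGN
rng = np.random.default_rng(0)
clean = np.full((256, 256), 1000.0)                 # uniform HU region
noisy = clean + rng.normal(0.0, 50.0, clean.shape)
print(f"estimated sigma ~ {estimate_sigma(noisy):.1f} HU (true 50.0)")
print(f"PSNR of noisy vs clean: {psnr(clean, noisy):.1f} dB")
```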
4.4 Experimental Results and Discussion

The proposed algorithm ran successfully on all 7 patient CT volumes on a standard 2.5 GHz personal computer running Windows XP. The restored images from the noisy 40% phases were compared to the original 40% phases and the 70% phases. The denoised images were assessed visually by an expert observer and further evaluated quantitatively.

The performance of the algorithm on the low-radiation dose datasets was evaluated quantitatively. Myocardial and blood pool volumes in the 7 patient datasets were measured from the original 70% phase, the original 40% phase and the denoised 40% phase (Figure 4.7). The original myocardial masses from the 70% phase and the original 40% phase were significantly different (130.9±31.3 g in the 70% phase vs 99.3±28.9 g in the 40% phase, p=0.007). After denoising, the myocardial masses were not statistically different by comparison of individual data points with Student's t-test (130.9±31.3 g in the 70% phase vs 142.1±48.8 g in the denoised 40% phase, p=0.23) (Figure 4.8).

Figure 4.7: One example of denoising results of low-radiation dose CT images with LV assessment. The figure shows three different mid left ventricular short-axis slices from the original 70% phase (first row), the original 40% phase (second row) and the denoised 40% phase (third row). The contours indicate the boundary of the myocardium (green) and the blood pool (red). The measured myocardial masses were 130 g (70% phase), 112 g (40% phase) and 133 g (denoised 40% phase).

Figure 4.8: Myocardial mass measurement from the 7 patient datasets for the 70% phase, the 40% phase and the denoised 40% phase. There was no significant difference between the 70% phase and the denoised 40% phase masses (NS - not significant).

The standard deviations in the blood pool were 31.3±3.8 (70% phase), 119.2±23.9 (40% phase) and 20.4±6.9 (denoised 40% phase), and the standard deviations in the myocardium were 25.8±8.5 (70% phase), 90.8±20.0 (40% phase) and 21.1±6.4 (denoised 40% phase), as shown in Figure 4.9. Noise improved significantly between the 40% phase and the denoised 40% phase by Student's t-test, both in the blood pool (p<0.0001) and in the myocardium (p<0.0001).

Figure 4.9: Image noise (standard deviation of HU) in the ROIs set in the blood pool and the myocardium of the LV.

In this work, a novel denoising technique was applied to low-dose CTA obtained with 4% tube current. We have shown that the noise was reduced to the level obtained with full-dose data. This development may allow robust estimation of cardiac function from low-dose gated CTA (Figure 4.10).

To our knowledge, this is the first report of the BM3D algorithm adapted to low-dose CT. We have validated the algorithm with image datasets from consecutive patients using a novel method, with the myocardial mass from the high-dose cardiac phase as a reference standard. We showed that the accuracy of the myocardial mass measured by the automated segmentation software improved significantly due to denoising.
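The mass comparison above is plausibly a paired comparison across the 7 patients; a minimal sketch using scipy's paired t-test is shown below. The per-patient masses are hypothetical stand-ins, not the values measured in the study.

```python
import numpy as np
from scipy import stats

# hypothetical per-patient myocardial masses in grams (7 patients)
mass_70       = np.array([130.0, 112.0, 160.0,  98.0, 145.0, 121.0, 150.0])
mass_40       = np.array([112.0,  80.0, 120.0,  75.0, 110.0,  95.0, 103.0])
mass_40_denoi = np.array([133.0, 118.0, 170.0,  95.0, 155.0, 128.0, 160.0])

# paired Student's t-tests on individual data points, as in the text
t1, p1 = stats.ttest_rel(mass_70, mass_40)           # expected: significant
t2, p2 = stats.ttest_rel(mass_70, mass_40_denoi)     # expected: not significant
print(f"70% vs 40%:          p = {p1:.4f}")
print(f"70% vs denoised 40%: p = {p2:.4f}")
```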
Further validation will need to be performed to assess the improvement in accuracy for the determination of the ejection fraction, if an appropriate external gold standard is available. However, our method of myocardial mass comparison indirectly evaluated the accuracy of both the epicardial and endocardial contours.

4.5 Conclusion

In conclusion, we have optimized and validated an adaptive BM3D denoising algorithm and applied it to cardiac CTA. This new method reduces image noise and has potential for improving myocardial function assessment from low-dose coronary CTA.

Figure 4.10: Comparison to other methods. Original 70% phase (first row), original 40% phase (second row) and the denoised 40% phase by BM3D (third row). In each row, the short-axis view (left), 4-chamber view (middle) and a 3D volume rendering are presented.

Chapter 5
Structured Learning Algorithm for Detection of Coronary Arterial Lesions from Coronary CT Angiography

5.1 Introduction

Coronary artery disease (CAD) is the leading cause of morbidity and mortality worldwide for both men and women [284]. Three-dimensional (3D) coronary computed tomography angiography (CCTA) with the use of multidetector CT scanners is increasingly employed for non-invasive evaluation of CAD, having shown high accuracy and negative predictive value for the detection of coronary artery stenosis in comparison with invasive coronary angiography [6,40,126,182,185]. Beyond stenosis, CCTA also permits noninvasive assessment of atherosclerotic plaque and coronary artery remodeling [5,155,210].

Current clinical assessment of CCTA and lesion detection is based on visual analysis, which is time consuming and subject to observer variability [212], although computer-aided extraction of the coronary arteries is often employed. It was reported that acquiring expertise in CCTA interpretation may take more than 1 year [212]. Computer software that automatically identifies coronary artery lesions would reduce such observer variability as well as the time needed for the assessment of images. We have previously described an algorithm for automated detection of lesions [139,140] by analytic methods. However, the specificity (81%) was relatively low, reflecting 39 false-positive findings on a per-segment basis.

In this work, we describe an improved automated algorithm, obtained by adapting a machine learning algorithm to the same data used in [139]. A structured learning algorithm is proposed, which consists of two stages: (1) dividing each coronary artery into small volume patches, and integrating several quantitative geometric and shape features of coronary arterial lesions in each small volume patch with a Support Vector Machine (SVM) algorithm; (2) applying an SVM-based decision fusion algorithm to combine a formula-based analytic method and the learning-based method of stage (1). The algorithm was validated for the detection of lesions in the left anterior descending (LAD), left circumflex (LCX) and right coronary artery (RCA) of 42 consecutive patients (126 arteries, 252 proximal and mid segments in total). The aim of our study was to develop a novel machine learning based algorithm to detect both obstructive and nonobstructive lesions from CCTA and to validate it in comparison with 3 experienced expert readers.
5.2 Background

Many efforts have been made in the development of computer-aided detection and diagnosis of various abnormalities in medical imaging, for example for the detection and quantification of chronic obstructive pulmonary disease in the lung [240,243,245,258,276], colon cancer [114,149,223,250], and lesions in mammograms [101,192,195,252]. For coronary CTA, a few studies have attempted automatic detection of lesions [13,85,116,123,143]. Detection and quantification of coronary artery lesions are particularly challenging due to limited spatial resolution and coronary artery motion, plaque sizes even smaller than the arteries, and complex and variable coronary artery anatomy. Automated lesion detection requires accurate extraction of coronary artery centerlines, classification of normal and abnormal lumen cross-sections, quantification of luminal stenosis, and classification of lesions by degree of stenosis.

Previous studies by other investigators in coronary lesion detection [13,85,116,123,143] attempted to detect only obstructive lesions (with stenosis ≥50%). However, nonobstructive lesions (stenosis <50%) have been shown to be a clinically significant predictor of future coronary events [152,247]. We have previously described an algorithm for automated detection of both obstructive and non-obstructive lesions [139,140] by analytic methods. However, the specificity (81%) was relatively low due to 39 additional detections on a per-segment basis in 252 segments. To obtain higher specificity and similar sensitivity compared to the previous work, we propose a novel machine learning based technique built on a combination of analytical and machine-learning classifiers. We test the new approach on the same dataset used in the previous study [139]. To our knowledge, machine learning techniques have not yet been applied to automatic detection of coronary lesions from CCTA, even though machine learning algorithms have been used extensively in other kinds of feature detection problems. Furthermore, our proposed approach differs from previous conventional machine learning techniques; it is designed as a two-level system, in which the decision fusion classifier makes the final decision based on the outputs of the 2 base classifiers. Previous machine learning methods have been proposed in which diverse machine learning classifiers were combined; however, to our knowledge, these techniques did not combine an analytical technique for feature detection with machine-learning techniques.

5.3 Methods

The proposed structured learning algorithm for detection of coronary arterial lesions from coronary CTA consists of two levels. The two first-level base classifiers each produce their own result, indicating the existence of lesions in each coronary arterial segment, and the final decision is made by the second-level classifier. Unlike the classic AdaBoost algorithm, in our structured learning algorithm one of the first-level base classifiers is an analytic method and the other is an SVM-based learning algorithm. Our algorithm, which starts from the linearized volumes of the 3 main arteries [139,140], can be divided into the following two main steps: (1) two first-level base classifiers, an analytic method [139,140] and a learning-based method using SVM; and (2) a final decision made by decision fusion of the two first-level base classifier results. The overall flowchart is shown in Figure 5.1. The vessel linearization technique is described in our previous work [139,140] and is not repeated here. We also used our previous analytic method [139,140], a stenosis formula calculation algorithm, as one of the first-level base classifiers.

Our study consisted of 42 consecutive patients who underwent CCTA for clinical reasons at the Cedars-Sinai Medical Center.
Figure 5.1: Flowchart of the structured learning algorithm.

All CCTA datasets were acquired on the dual-source 64-slice CT scanner (Definition; Siemens Medical Solutions, Forchheim, Germany) with a gantry rotation time of 330 ms and standard collimation of 0.6 mm, and had good to excellent image quality. CCTA scan parameters were a 512×512 matrix and a voxel size of 0.38×0.38×0.3 mm³, and datasets typically consisted of 400-500 slices. Of the 42 patients, 10 patients subsequently underwent invasive coronary angiography (ICA) within 1 month of the CCTA scan.

5.3.1 First-level base classifier: analytic method

Our previous studies proposed an analytic method to detect coronary arterial lesions from coronary CTA [139,140]. The algorithm is based on coronary arterial cross-sectional analysis, and consists of centerline extraction, vessel classification, vessel linearization, lumen segmentation, and lesion location detection. The presence and location of lesions are identified using the stenosis formula in Eq. (5.1) [139,140], which considers the expected or normal vessel tapering and the luminal stenosis of the segmented vessel (Figure 5.2). The expected luminal diameter is derived from the scan by automated piecewise least-squares line fitting over the proximal and mid segments (67%) of the coronary artery, considering the locations of the small branches attached to the main coronary arteries.

Figure 5.2: Example of lumen segmentation and lesion detection in a linearized volume of the LAD [139]. The extent of the proximal LAD lesion (stenosis 25%-49%) marked by the expert is shown as a small box at around x = 27 mm - 48 mm. The lumen diameters computed from the segmented lumen are shown, together with the lumen diameters cropped by anatomical knowledge. The expected normal luminal diameter is derived from the scan by automated piecewise line fitting between branch points, and takes into account normal tapering present in the dataset. The locations of the lesions with ≥25% stenosis detected by the algorithm, concordant with the expert observer, are marked with vertical arrows.

S_s = 1 − l_s / ( l_p − (s_p / s_d)(l_p − l_d) ),   (5.1)

where l_s, l_p and l_d are the luminal diameters of the cross-section corresponding to s and of the proximal and distal references, respectively, and s_p and s_d are the linear distances between the proximal reference and the distal reference and the cross-section corresponding to s, respectively. We do not repeat the details of the algorithm here; see [139,140].

5.3.2 First-level base classifier: learning-based method

As the other first-level base classifier, we propose a new learning-based algorithm using a Support Vector Machine (SVM) for the detection of coronary arterial lesions from coronary CTA. The algorithm starts from the same input as the analytic method [139,140], and it uses the extracted centerlines and linearized volumes [139,140]. The linearized volumes of each coronary artery are used for feature extraction and SVM-based classification. The flowchart of the proposed learning-based algorithm is given in Figure 5.3.

Figure 5.3: Flowchart of the learning-based algorithm used as a first-level base classifier for coronary arterial lesions from coronary CTA.
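Referring back to Eq. (5.1) in Section 5.3.1, a minimal sketch of a per-cross-section stenosis calculation is shown below. It assumes one reading of the equation, namely that the expected normal diameter at cross-section s is linearly interpolated between the proximal and distal reference diameters (with s_d taken as the proximal-to-distal distance); the diameter profile and reference positions are hypothetical, and the automated piecewise least-squares fitting of the reference diameters is not reproduced.

```python
import numpy as np

def stenosis_profile(diam, prox_idx, dist_idx, spacing_mm=0.5):
    """Percent stenosis at every cross-section between two reference cross-sections,
    under the linear-tapering reading of Eq. (5.1)."""
    l_p, l_d = diam[prox_idx], diam[dist_idx]
    s_d = (dist_idx - prox_idx) * spacing_mm        # proximal-to-distal distance (assumed)
    idx = np.arange(prox_idx, dist_idx + 1)
    s_p = (idx - prox_idx) * spacing_mm             # proximal reference to cross-section s
    expected = l_p - (s_p / s_d) * (l_p - l_d)      # expected (tapered) normal diameter
    return 100.0 * (1.0 - diam[idx] / expected)

# hypothetical lumen diameter profile (mm) with a focal narrowing around x = 12-16 mm
x = np.arange(0, 60, 0.5)
diam = 3.5 - 0.02 * x                                # normal tapering
diam[24:33] -= 1.2                                   # simulated lesion
st = stenosis_profile(diam, prox_idx=0, dist_idx=len(diam) - 1)
print(f"max stenosis ~ {st.max():.0f}% at x = {x[st.argmax()]:.1f} mm")
```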
The inputs for the classification are small volume patches, which are obtained from each whole linearized volume (42 patients; 126 linearized volumes for the LAD, LCX and RCA) by dividing it into small linearized volumes (Figure 5.4), taking the lesion length into consideration. The optimal size of the small volume patches was determined by SVM-based classification experiments.

Figure 5.4: Small volume patches used as inputs for feature extraction and SVM classification.

A total of 9 features were extracted from each small volume patch, including geometric features and shape features. The geometric features were the stenosis calculation, the difference between the expected normal lumen diameter and the actual lumen diameter, and the calcium volume at each branch point, from our previous work [139,140] as shown in Figure 5.2. The shape features, computed from the cross-sections of the coronary arteries, are the circularity of the luminal cross-sections and the ratio between the maximum and minimum diameters of the luminal cross-sections. For the shape features, the maximum, minimum and average values within each small volume patch were used.

With these extracted features, a Support Vector Machine (SVM) was used for 2-class classification between normal volume patches and volume patches with lesions. The radial basis function (RBF) kernel was used, which is the most popular and simplest kernel function for SVM [ref], and the algorithm was validated by standard 10-fold cross-validation [ref].

5.3.3 First-level base classifier: a scheme to balance the number of normal data and lesion data in the learning-based method

In our lesion detection problem, the number of volume patches with lesions is much smaller than the number of normal volume patches. SVM finds the classification boundary that minimizes the number of misclassifications, aiming for the best overall accuracy rather than the best sensitivity. However, in our problem sensitivity is the most important measure, i.e., minimizing missed lesions is desirable. Therefore, we propose a scheme for balancing the number of data between the two classes (volume patches with lesions and normal volume patches), as sketched below: (1) increasing the number of volume patches with lesions by an overlapping volume patch scheme, and (2) decreasing the number of normal volume patches by randomly discarding normal volume patches.

We increased the number of lesion data by overlapping volume patches with each other in the regions marked by the expert readers, while a non-overlapping volume scheme is used in normal areas (Figure 5.5). The size of the volume patches is fixed, and the overlap fraction is decided by experiments to maximize the sensitivity. The balance between the two classes can be increased further, up to one-to-one, by decreasing the number of normal data, for which we propose randomly deleting normal volume patches until the ratio between the number of lesion and normal patches is one-to-one. With this data balancing scheme, we can improve the sensitivity considerably with only a small loss of specificity.
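To make the patch-level classifier and the class-balancing scheme concrete, here is a minimal sketch using scikit-learn. The 9-dimensional feature matrix is randomly generated as a stand-in for the geometric and shape features described above, only the random discarding of normal patches is shown (the overlapping-patch oversampling is assumed to have been done when the patches were cut), and the parameter values are illustrative, not those used in the thesis.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)

# stand-ins for the 9 features per linearized volume patch (imbalanced classes)
X_normal = rng.normal(0.0, 1.0, size=(3000, 9))
X_lesion = rng.normal(0.8, 1.0, size=(300, 9))

# class balancing: randomly discard normal patches down to the lesion count
keep = rng.choice(len(X_normal), size=len(X_lesion), replace=False)
X = np.vstack([X_normal[keep], X_lesion])
y = np.hstack([np.zeros(len(keep)), np.ones(len(X_lesion))])

# RBF-kernel SVM validated by standard 10-fold cross-validation
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
sens = cross_val_score(clf, X, y, cv=cv, scoring="recall")            # sensitivity
bacc = cross_val_score(clf, X, y, cv=cv, scoring="balanced_accuracy")
print(f"sensitivity {sens.mean():.2f}, balanced accuracy {bacc.mean():.2f}")
```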
5.3.4 Decision fusion: combination of the analytic method and the learning-based method

We propose a novel decision fusion method, which combines the results of the two first-level base classifiers introduced in the previous sections. Each result from the analytic method and the learning-based method is converted into a per-segment analysis based on the standard 15-segment American Heart Association model, and those results are used as features for the second-level final decision classifier. Therefore, the features for the decision fusion are 2-dimensional vectors, and an SVM is used in the second level for the final decision between normal segments and segments with lesions. The flowchart is shown in Figure 5.6.

Figure 5.5: An example of a linearized volume with the ground truth (expert readers' marking) shown in a blue box (first row). Overlapping volume patches in lesion areas and non-overlapping volume patches in normal areas are also shown (second row).

Instead of using hard values from the first-level classifiers, which are binary classification results (class lesion or class normal), we use soft values for a more refined analysis. We used the final stenosis value as the soft value from the analytic method, and Support Vector Regression (SVR) outputs as the soft values from the learning-based method (Figure 5.7). The SVM-based decision fusion with polynomial kernels is also validated by standard 10-fold cross-validation.

5.3.5 Visual assessment and reference standard

All datasets were first visually assessed in a standard and systematic way by three experienced expert readers, using consensus reading to minimize inter-observer variability; these readers were three experienced imaging cardiologists. Segmental analysis was based on the standard 15-segment American Heart Association model [86]. Each segment of the coronary artery tree was graded for the presence and type of plaque or stenosis, as recommended by published guidelines of the Society of Cardiovascular CT [215], and all coronary lesions with stenosis ≥25% were identified. This was used as the reference standard for algorithm performance in this study. For comparison, a second blinded reader (an imaging cardiologist with Level III CT certification and 1 year of experience with cardiac CT) also independently identified all coronary lesions with stenosis ≥25%. The observer agreement for this reader with the reference standard was 94.8% (kappa 0.84, 95% confidence interval 0.75 to 0.92, p<0.0001).

Figure 5.6: Flowchart of the decision fusion.

Table 5.1: Performance of the two first-level base classifiers and the final decision fusion algorithm. Base classifier 1 is the analytic algorithm and base classifier 2 is the SVM-based learning algorithm.

5.4 Results

Our proposed algorithm was tested on 42 consecutive patients [26 male]. The mean age was 60±12 years and the mean body weight was 83±10 kg. 21 patients had coronary lesions with stenosis greater than or equal to 25%. In these patients, 45 lesions with stenosis ≥25% were identified. Eight of the remaining 21 patients had lesions with stenosis <25% and 13 patients did not have any lesions (no luminal stenosis or plaque). Of the 45 lesions with stenosis ≥25%, 20 lesions were obstructive (≥50%).

Figure 5.7: Soft values from the learning-based method obtained by SVR. SVR produces continuous values from the learning-based method as a first-level base classifier, and these SVR values are used as a feature for the final decision fusion.

The proposed algorithm ran successfully on all proximal and mid coronary artery segments in all patients on a standard 2.5 GHz personal computer running Windows XP, and the algorithm was validated by standard 10-fold cross-validation.
Table 5.2: The proposed algorithm's performance in lesion (≥25% stenosis) detection in 42 patients (13 completely normal). In a total of 252 coronary artery proximal and mid segments in the 42 patients, 45 segments had lesions with ≥25% stenosis. Sensitivity was 93%, specificity was 95%, and balanced accuracy was 94% per segment.

Figure 5.8: For the SVM-based learning algorithm as a first-level base classifier, the improvement in sensitivity and balanced accuracy obtained by the data balancing scheme between the normal class and the lesion class is shown in (A) and (B). The performance variability for different small volume patch sizes is shown in (C).

Figure 5.9: Detection of lesions with stenosis. Arrows indicate the locations of lesions. Detected lesions with stenosis caused by mixed plaque in the proximal segment (first row; 70% stenosis by quantitative analysis and 90%-99% stenosis by expert visual grading). The second row shows the corresponding ICA images of the first row (62.4% stenosis by quantitative analysis).

Figure 5.10: Decision fusion results with an SVM with a polynomial kernel of order 2. The 252 coronary artery segments are displayed as points in the plot. The segments with lesions are shown in red and the normal segments are shown in blue.

In the 45 coronary artery segments having lesions with stenosis ≥25%, the proposed automated algorithm correctly identified 42/45, producing 9 false positive detections in the 252 coronary artery segments of the 42 patients.

Figure 5.8 (A) and (B) show how the scheme for balancing the numbers of the two classes affects the performance of the first-level base classifier, the SVM-based learning algorithm with the radial basis function kernel. As the balance between the two classes approached one-to-one through the normal-volume clearing method, the sensitivity improved considerably together with the balanced accuracy, while the specificity decreased slightly; this is desirable considering the importance of sensitivity in our detection problem. Also, as the number of lesion data increased through the volume overlapping scheme, all of the performance measures, including sensitivity and specificity, improved. Additionally, the size of the small volume patches was also examined (Figure 5.8 (C)). The best-performing size was 18.75 mm, which produces 773 normal volume patches and 773 volume patches with lesions when the data balancing scheme is used. In total, as a first-level base classifier, the SVM-based learning algorithm produced 89% sensitivity and 91% specificity.

When the two first-level base classifiers were combined by the decision fusion algorithm, the sensitivity, specificity and accuracy all improved (Table 5.1). In total, the proposed decision fusion algorithm correctly identified true lesions, yielding a high sensitivity of 93% (42/45) on a per-segment basis. Three lesions (25-49% stenosis) were missed by our approach. Also, on a per-segment basis, the proposed algorithm showed 95% specificity and 94% accuracy (Table 5.2). Figure 5.9 shows an example from our study results. The SVM classification plot of the final decision fusion with the polynomial kernel of order 2 is shown in Figure 5.10. Figure 5.11 shows the ROC curve for the decision fusion algorithm compared to the reference standard. The ROC area under the curve (ROC-AUC) was 0.937. We calculated SVR values at the decision fusion level for the ROC analysis.
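A minimal sketch of the second-level decision fusion described in Section 5.3.4 follows: each segment is represented by a 2-dimensional soft-value feature (the analytic stenosis value and an SVR output from the patch-level classifier), and an SVM with a polynomial kernel of order 2 makes the final per-segment decision. The feature values are randomly generated stand-ins; the upstream SVR training on patch features is not reproduced.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import StratifiedKFold, cross_val_predict

rng = np.random.default_rng(0)

# per-segment soft values (252 segments): column 0 = analytic stenosis estimate,
# column 1 = SVR output of the learning-based classifier -- both hypothetical
n_seg, n_lesion = 252, 45
y = np.zeros(n_seg); y[:n_lesion] = 1
stenosis = np.where(y == 1, rng.normal(45, 15, n_seg), rng.normal(10, 10, n_seg))
svr_soft = np.where(y == 1, rng.normal(0.7, 0.2, n_seg), rng.normal(0.2, 0.2, n_seg))
X = np.column_stack([stenosis, svr_soft])

# second-level fusion: SVM with polynomial kernel of order 2, 10-fold cross-validation
fusion = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=2, C=1.0))
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
pred = cross_val_predict(fusion, X, y, cv=cv)

tp = np.sum((pred == 1) & (y == 1)); fn = np.sum((pred == 0) & (y == 1))
tn = np.sum((pred == 0) & (y == 0)); fp = np.sum((pred == 1) & (y == 0))
print(f"sensitivity {tp/(tp+fn):.2f}, specificity {tn/(tn+fp):.2f}")
```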
Additionally, 10 of our 42 patients underwent invasive coronary angiography (ICA), using the Inova digital X-ray system from GE Healthcare, with multiple views of the left and right coronary artery to identify the projection in which the segment appeared most stenotic. Acquired images were transferred to an AGFA Heartlab workstation for quantitative coronary catheter angiography (QCA) analysis by consensus of two experienced readers. Based on ICA-based QCA, 10 out of 10 patients had stenosis ≥25%, in agreement with the results of our algorithm on CCTA.

Figure 5.12: SVM results with polynomial kernels of order 1 (top left), order 2 (top right), order 4 (bottom left), and order 5 (bottom right). We chose the kernel function so as not to miss the true lesions (green circle).

5.5 Discussion

A novel machine learning technique is proposed for the detection of coronary arterial lesions with stenosis ≥25%, with desirable results. The results of our proposed algorithm demonstrate high agreement with 3 expert readers in consensus in a sizable patient population (42 patients), and the algorithm was validated by standard 10-fold cross-validation. The algorithm showed a sensitivity of 93% and a specificity of 95% on a per-segment basis in the three main coronary arteries, which is most desirable. From a clinical standpoint, it is essential for the computer-aided system to have a high sensitivity, since the false positive results can be discarded with a one-button click. By identifying all potential lesions, the system would aid the physician by quickly identifying all lesions. Furthermore, plaque burden could be measured automatically over all identified lesions [80,82].

Table 5.3: Performance comparison with previous works: Arnoldi et al., 2010 [13], Halpern et al., 2011 [123], Kang et al., 2013 [139] and the proposed method.

Figure 5.13: An example of a false positive produced by a previous work [139] but not detected by the proposed algorithm; the expert readers graded it <25% stenosis.

The proposed structured learning algorithm differs from the AdaBoost algorithm [102,227], which can only use machine learning methods as base classifiers, whereas the proposed structured learning algorithm uses the analytic method [139,140] as one of the first-level base classifiers. In that sense, any other analytic methods or machine learning algorithms can easily be added to our proposed structured learning framework, which may improve our results further. One of the main advantages of structured learning is that the final decision fusion can produce good results even if the performance of each first-level base classifier is poor in sensitivity or specificity. In our results, the analytic method [139] had low specificity (81%) and the SVM-based learning algorithm had slightly lower sensitivity (89%). However, the decision fusion in the second level produced both a desirably high sensitivity (93%) and specificity (95%). We experimented with simple kernel functions (Figure 5.12), so the classifier works well with the simplest kernel functions; we selected the kernel to achieve the best sensitivity rather than the best overall accuracy: the polynomial kernel of order 2.

Our proposed study is in line with the few previous studies attempting automated lesion detection from CCTA. Halpern [123] and Arnoldi [13] published validation papers using commercial software with expert human interpretation, where they detected obstructive lesions only (with ≥50% stenosis).
Dinesh [85] proposed a method that relied on manual centerlines and artery classification, did not provide a specific stenosis calculation, and was evaluated on a small number of patients (8 patients). Recently, Kelm [143] and Goldberg [116] also published automated detection of obstructive (≥50% stenosis) coronary artery lesions from CCTA. For nonobstructive (≥25% stenosis) lesion detection, D. Kang et al. [139,140] recently proposed an analytic algorithm. A performance comparison between our proposed algorithm and previous works is shown in Table 5.3.

Besides the higher sensitivity and specificity compared to previous works [13,85,116,123,139,140,143], to our knowledge our method is the first report of machine learning techniques applied to coronary arterial lesion detection from coronary CTA. This is of particular technical value, introducing a new technique to the lesion detection problem from coronary CTA and showing the potential for further improvement in computerized coronary arterial lesion detection. Also, one of the main advances compared to previously published work is that our method can accurately detect both obstructive and nonobstructive lesions (25-49% stenosis), whereas previous studies, except [139,140], detected obstructive lesions only (≥50% stenosis). This is of particular clinical value since lesions with nonobstructive stenosis have been shown to contribute to cardiovascular events. Nonobstructive lesions (25-49%) are more challenging to detect due to the subtle narrowing of the lumen. Also, compared to [139], we increased the specificity by 14%, which eliminated many false positives (Figure 5.13).

There were a few limitations in our study. Invasive coronary angiography was not performed for all patients. In our study, the reference standard was the clinically utilized visual detection and grading of lesions by three expert readers in consensus; however, stenosis calculation by the expert readers was not available. Our algorithm used only features extracted from the lumen for automated identification of lesions; features from the assessment of the vessel wall in combination with the lumen may improve our lesion detection results, and this needs to be further evaluated for the detection of lesions with stenosis <25%.

5.6 Conclusion

In conclusion, we developed a novel machine learning based algorithm for detection of coronary arterial lesions from coronary CTA. The proposed structured learning algorithm performed with high sensitivity and high specificity compared to 3 experienced expert readers.

Chapter 6
Conclusion and Future Work

6.1 Summary of the Research

Cardiac image analysis for cardiac diseases was studied in this research. In particular, coronary computed tomographic angiography (CCTA) with the use of 64-slice CT scanners has recently become an increasingly effective clinical tool for noninvasive assessment of the coronary arteries. Also, ECG-gated low-radiation dose helical multidetector CT scanners can generate whole-volume ventricular data in any phase of the cardiac cycle. We have presented a novel automatic coronary lesion detection algorithm, which identifies luminal stenosis (≥25%). We also proposed a low-radiation dose CCTA denoising algorithm based on BM3D for the purpose of accurate left ventricle assessment.

In Chapter 2, we reviewed several advanced segmentation techniques that have been proposed in the image processing and computer vision communities for cardiac image analysis:
the boundary-driven techniques, the region-based techniques, the graph-cuts techniques, and the model-fitting techniques. These techniques have been applied to the segmentation of cardiac images acquired with different imaging modalities, providing high automation and accuracy in determining clinically significant parameters. These computational techniques aid clinicians in the evaluation of cardiac anatomy and function, and ultimately lead to improvements in patient care.

In Chapter 3, we developed a novel automated algorithm for detection and localization of obstructive and nonobstructive arterial lesions from CCTA, which performed with high sensitivity compared to an expert observer. Our automated method consists of centerline extraction, vessel classification, vessel linearization, lumen segmentation with scan-specific lumen attenuation ranges, and lesion location detection. The presence and location of lesions are identified using a multi-pass algorithm which considers the expected or "normal" vessel tapering and the luminal stenosis of the segmented vessel. The expected luminal diameter is derived from the scan by automated piecewise least-squares line fitting over the proximal and mid segments (67%) of the coronary artery, considering the locations of the small branches attached to the main coronary arteries. We tested the algorithm on 42 consecutive patients with 45 lesions with stenosis greater than or equal to 25%. The reference standard was provided by visual and quantitative identification of lesions with any stenosis ≥25% by 3 expert observers using consensus reading. Our algorithm identified 43 lesions (96%) confirmed by the expert observers. When the artery was divided into 15 coronary segments according to standard cardiology reporting guidelines, the per-segment sensitivity was 96% and the per-segment specificity was 78%. Our algorithm shows promising results in the detection of obstructive and nonobstructive CCTA lesions.

In Chapter 4, we optimized and validated an adaptive Block-Matching 3D (BM3D) denoising algorithm and applied it to cardiac low-radiation dose CCTA from 7 consecutive patients. BM3D is a recently proposed image denoising algorithm which has been shown to be superior to previous image denoising algorithms. To our knowledge, this is the first report of the BM3D algorithm adapted to low-dose CCTA. We showed that the accuracy of the myocardial mass measured by the automated segmentation software improved significantly due to denoising. We validated the algorithm using a novel method, with the myocardial mass from the low-noise cardiac phase as a reference standard, and with objective measurement of image noise. After denoising, the myocardial masses were not statistically different by comparison of individual data points with Student's t-test (130.9±31.3 g in the low-noise 70% phase vs 142.1±48.8 g in the denoised 40% phase, p=0.23). Image noise improved significantly between the 40% phase and the denoised 40% phase by Student's t-test, both in the blood pool (p<0.0001) and in the myocardium (p<0.0001). This new method reduces image noise and has the potential to improve myocardial function assessment from low-dose coronary CTA.

In Chapter 5, we proposed an improved automated algorithm for detection of coronary arterial lesions from coronary CT angiography, by adapting a machine learning algorithm to the same data used in Chapter 3, which was based on [139].
6.2 Future Research Directions

To make our work more complete, we would like to extend the current results along the following three directions.

• Arterial lesion scoring
We would like to examine a larger patient group and perform arterial lesion scoring using a six-grade system: grade 1 (0%), 2 (1–24%), 3 (25–49%), 4 (50–69%), 5 (70–89%) and 6 (90–99%). Since plaque lies within the arterial wall, the arterial wall should be delineated first for a more accurate scoring system. This is, however, a challenging issue since the arterial wall has an intensity distribution similar to that of its surrounding areas.

• Registration-based low-dose CCTA denoising
Among the 20 phases of cardiac CT images acquired during one cardiac cycle, only the 70% phase has good image quality. We can utilize the 70% phase as shape prior information to guide signal restoration of the low-radiation dose CCTA. Further validation will need to be performed to assess the improvement in accuracy of ejection fraction determination if an appropriate external gold standard (MR) is available.

• Noise Modeling in CCTA
The noise in CCTA is generally assumed to be additive with a zero-mean, constant-variance Gaussian distribution. However, statistical analysis suggests that the noise variance could be better modeled by a nonlinear function of the image intensity that depends on external parameters related to the image acquisition protocol; a minimal sketch of such an intensity-dependent variance fit is given after this list. Various types of noise (e.g., photon, electronic, and quantization noise) will be investigated in order to build an accurate CCTA noise model.
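As an illustration of the last point, the sketch below estimates how noise variance varies with image intensity. It is a simplified, assumption-laden example rather than a proposed method: `repeat_scans` is a hypothetical stack of repeated acquisitions of the same slice, the per-voxel mean serves as the intensity estimate, and a quadratic polynomial is fitted to binned variances to capture the nonlinear intensity dependence.

```python
import numpy as np

def fit_intensity_dependent_variance(repeat_scans, n_bins=32, degree=2):
    """Fit noise variance as a polynomial function of image intensity.

    repeat_scans: array of shape (n_repeats, H, W) containing repeated
                  acquisitions of the same slice (hypothetical input).
    Returns polynomial coefficients such that var ~ poly(intensity).
    """
    mean_img = repeat_scans.mean(axis=0)        # intensity estimate per voxel
    var_img = repeat_scans.var(axis=0, ddof=1)  # noise variance per voxel

    # Bin voxels by intensity and compute the mean variance in each bin.
    edges = np.linspace(mean_img.min(), mean_img.max(), n_bins + 1)
    idx = np.clip(np.digitize(mean_img, edges) - 1, 0, n_bins - 1)
    bin_centers = 0.5 * (edges[:-1] + edges[1:])
    bin_var = np.array([var_img[idx == b].mean() if np.any(idx == b) else np.nan
                        for b in range(n_bins)])
    valid = ~np.isnan(bin_var)

    # Nonlinear (here polynomial) model of variance versus intensity.
    return np.polyfit(bin_centers[valid], bin_var[valid], deg=degree)

# Toy usage: synthetic scans whose noise standard deviation grows with intensity.
rng = np.random.default_rng(1)
truth = rng.uniform(0, 400, size=(64, 64))
scans = truth + rng.normal(scale=2 + 0.05 * truth, size=(20, 64, 64))
print(fit_intensity_dependent_variance(scans))
```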
93 Bibliography [1] “Analyse-it for microsoft excel (version 2.20),” Analyse-it Software, Ltd., http://www.analyse-it.com/, 2009. [2] A. Abidov, J. Bax, S. Hayes, R. Hachamovitch, I. Cohen, J. Gerlach, X. Kang, J. Friedman, G. Germano, and D. Berman, “Transient ischemic dilation ratio of the left ventricle is a significant predictor of future cardiac events in patients with otherwise normal myocardial perfusion spect,” Journal of the American College of Cardiology, vol. 42, no. 10, p. 1818, 2003. [3] S. Achenbach, “Computed tomography coronary angiography,” Journal of the American College of Cardiology, vol. 48, no. 10, pp. 1919–1928, 2006. [4] S.Achenbach,T.Giesler,D.Ropers,S.Ulzheimer,K.Anders,E.Wenkel,K.Pohle, M. Kachelriess, H. Derlien, W. Kalender et al., “Comparison of image quality in contrast-enhanced coronary-artery visualization by electron beam tomography and retrospectively electrocardiogram-gated multislice spiral computed tomography,” Investigative radiology, vol. 38, no. 2, pp. 119–128, 2003. [5] S. Achenbach, F. Moselewski, D. Ropers, M. Ferencik, U. Hoffmann, B. MacNeill, K.Pohle,U.Baum,K.Anders,I.Jangetal.,“Detectionofcalcifiedandnoncalcified coronary atherosclerotic plaque by contrast-enhanced, submillimeter multidetector spiral computed tomography a segment-based comparison with intravascular ultra- sound,” Circulation, vol. 109, no. 1, pp. 14–17, 2004. [6] S. Achenbach, U. Ropers, A. Kuettner, K. Anders, T. Pflederer, S. Komatsu, W. Bautz, W. Daniel, and D. Ropers, “Randomized comparison of 64-slice single- and dual-source computed tomography coronary angiography for the detection of coronaryarterydisease,” JACC: Cardiovascular Imaging,vol.1,no.2,pp.177–186, 2008. [7] S. Achenbach, “Cardiac ct: state of the art for the detection of coronary arterial stenosis,” Journal of cardiovascular computed tomography, vol. 1, no. 1, pp. 3–20, 2007. [8] A.Agatston,W.Janowitz,F.Hildner,N.Zusmer,M.ViamonteJr,andR.Detrano, “Quantification of coronary artery calcium using ultrafast computed tomography,” Journal of the American College of Cardiology, vol. 15, no. 4, pp. 827–832, 1990. [9] M.Alattar, N.Osman, andA.Fahmy, “Myocardialsegmentationusingconstrained multi-seeded region growing,” Image Analysis and Recognition, pp. 89–98, 2010. 94 [10] K. Alfakih, S. Reid, T. Jones, and M. Sivananthan, “Assessment of ventricular function and mass by cardiac magnetic resonance imaging,” European radiology, vol. 14, no. 10, pp. 1813–1822, 2004. [11] C. Alvino and A. Yezzi, “Fast mumford-shah segmentation using image scale space bases,” in Proceedings of SPIE, vol. 6498, 2007, p. 64980F. [12] E. Angelini, S. Homma, G. Pearson, J. Holmes, and A. Laine, “Segmentation of real-time three-dimensional ultrasound for quantification of ventricular function: a clinicalstudyonrightandleftventricles,” Ultrasoundinmedicine&biology,vol.31, no. 9, pp. 1143–1158, 2005. [13] E.Arnoldi, M.Gebregziabher, U.Schoepf, R.Goldenberg, L.Ramos-Duran, P.Zw- erner, K. Nikolaou, M. Reiser, P. Costello, and C. Thilo, “Automated computer- aided stenosis detection at coronary ct angiography: initial experience,” European radiology, vol. 20, no. 5, pp. 1160–1167, 2010. [14] G. Aurigemma, M. Zile, and W. Gaasch, “Contractile behavior of the left ventricle in diastolic heart failure,” Circulation, vol. 113, no. 2, pp. 296–304, 2006. [15] S. Aylward and E. Bullitt, “Initialization, noise, singularities, and scale in height ridge traversal for tubular object centerline extraction,” Medical Imaging, IEEE Transactions on, vol. 21, no. 
2, pp. 61–75, 2002. [16] M. Balda, B. Heismann, and J. Hornegger, “Value-based noise reduction for low- dosedual-energycomputedtomography,”MedicalImageComputingandComputer- Assisted Intervention–MICCAI 2010, pp. 547–554, 2010. [17] C. Bavelaar-Croon, H. Kayser, E. van der Wall, A. de Roos, P. Dibbets-Schneider, E. Pauwels, G. Germano, and D. Atsma, “Left ventricular function: Correlation of quantitative gated spect and mr imaging over a wide range of values1,” Radiology, vol. 217, no. 2, pp. 572–575, 2000. [18] N. Beohar, J. Flaherty, C. Davidson, M. Vidovich, A. Brodsky, D. Lee, E. Wu, E. Bolson, R. Bonow, and F. Sheehan, “Quantitative assessment of regional left ventricular function with cardiac mri: Three-dimensional centersurface method,” Catheterization and Cardiovascular Interventions, vol.69, no.5, pp.721–728, 2007. [19] D. Berman, A. Abidov, R. Hachamovitch, J. Min, P. Slomka, G. Germano, and L. Shaw, “Comparative roles of cardiac ct and nuclear cardiology in assessment of the patient with suspected coronary artery disease.” The Journal of invasive cardiology, vol. 21, no. 7, p. 352, 2009. [20] D. Berman, R. Hachamovitch, L. Shaw, J. Friedman, S. Hayes, L. Thomson, D. Fieno, G. Germano, P. Slomka, N. Wong et al., “Roles of nuclear cardiology, cardiac computed tomography, and cardiac magnetic resonance: assessment of pa- tientswithsuspectedcoronaryarterydisease,” JournalofNuclearMedicine,vol.47, no. 1, pp. 74–82, 2006. 95 [21] J. Bermejo, J. Timperley, R. Odreman, M. Mulet, J. Noble, A. Banning, R. Yotti, E. P´ erez-David, J. Declerck, H. Becher et al., “Objective quantification of global and regional left ventricular systolic function by endocardial tracking of contrast echocardiographic sequences,” International journal of cardiology, vol. 124, no. 1, pp. 47–56, 2008. [22] G. Bezante, X. Chen, G. Molinari, A. Valbusa, L. Deferrari, V. Sebastiani, N. Yokoyama, S. Steinmetz, A. Barsotti, and K. Schwarz, “Left ventricular my- ocardial mass determination by contrast enhanced colour doppler compared with magnetic resonance imaging,” Heart, vol. 91, no. 1, pp. 38–43, 2005. [23] J. Bezdek, L. Hall, L. Clarke et al., “Review of mr image segmentation techniques using pattern recognition,” MEDICAL PHYSICS-LANCASTER PA-, vol. 20, pp. 1033–1033, 1993. [24] A. Bhan, S. Kapetanakis, and M. Monaghan, “Three-dimensional echocardiogra- phy,” Heart, vol. 96, no. 2, pp. 153–163, 2010. [25] S. Bierig, P. Mikolajczak, S. Herrmann, N. Elmore, M. Kern, and A. Labovitz, “Comparisonofmyocardialcontrastechocardiographyderivedmyocardialperfusion reserve with invasive determination of coronary flow reserve,” European Journal of Echocardiography, vol. 10, no. 2, pp. 250–255, 2009. [26] I. Bitter, A. Kaufman, and M. Sato, “Penalized-distance volumetric skeleton algo- rithm,” Visualization and Computer Graphics, IEEE Transactions on, vol.7, no. 3, pp. 195–206, 2001. [27] L.Bonneux,J.Barendregt,K.Meeter,G.Bonsel,andP.VanderMaas,“Estimating clinical morbidity due to ischemic heart disease and congestive heart failure: the future rise of heart failure.” American Journal of Public Health, vol. 84, no. 1, pp. 20–28, 1994. [28] A. Borsdorf, R. Raupach, T. Flohr, and J. Hornegger, “Wavelet based noise reduc- tion in ct-images using correlation analysis,” Medical Imaging, IEEE Transactions on, vol. 27, no. 12, pp. 1685–1703, 2008. [29] J. Bosch, S. Mitchell, B. Lelieveldt, F. Nijland, O. Kamp, M. Sonka, and J. 
Reiber, “Automaticsegmentationofechocardiographicsequencesbyactiveappearancemo- tion models,” Medical Imaging, IEEE Transactions on, vol. 21, no. 11, pp. 1374– 1383, 2002. [30] T. Boskamp, D. Rinck, F. Link, B. K¨ ummerlen, G. Stamm, and P. Mildenberger, “New vessel analysis tool for morphometric quantification and visualization of ves- sels in ct and mr imaging data sets1,” Radiographics, vol. 24, no. 1, pp. 287–297, 2004. [31] A. Boudraa, “Automated detection of the left ventricular region in magnetic res- onance images by fuzzy c-means model,” The International Journal of Cardiac Imaging, vol. 13, no. 4, pp. 347–355, 1997. 96 [32] Y. Boykov and G. Funka-Lea, “Graph cuts and efficient nd image segmentation,” International Journal of Computer Vision, vol. 70, no. 2, pp. 109–131, 2006. [33] Y. Boykov and V. Kolmogorov, “An experimental comparison of min-cut/max- flow algorithms for energy minimization in vision,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 26, no. 9, pp. 1124–1137, 2004. [34] Y. Boykov, O. Veksler, and R. Zabih, “Fast approximate energy minimization via graph cuts,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 23, no. 11, pp. 1222–1239, 2001. [35] Y. Boykov and M. Jolly, “Interactive graph cuts for optimal boundary & region segmentation of objects in nd images,” in Computer Vision, 2001. ICCV 2001. Proceedings. Eighth IEEE International Conference on, vol. 1. IEEE, 2001, pp. 105–112. [36] D. Brenner and E. Hall, “Computed tomography?an increasing source of radiation exposure,” New England Journal of Medicine,vol.357,no.22,pp.2277–2284,2007. [37] X. Bresson, P. Vandergheynst, and J. Thiran, “A variational model for object segmentation using boundary information and shape prior driven by the mumford- shah functional,” International Journal of Computer Vision, vol. 68, no. 2, pp. 145–162, 2006. [38] S.Bridal,J.Correas,A.Saied,andP.Laugier,“Milestonesontheroadtohigherres- olution, quantitative, and functional ultrasonic imaging,” Proceedings of the IEEE, vol. 91, no. 10, pp. 1543–1561, 2003. [39] T. Brox and D. Cremers, “On the statistical interpretation of the piecewise smooth mumford-shah functional,” Scale Space and Variational Methods in Computer Vi- sion, pp. 203–213, 2007. [40] M. Budoff, D. Dowe, J. Jollis, M. Gitter, J. Sutherland, E. Halamert, M. Scherer, R. Bellinger, A. Martin, R. Benton et al., “Diagnostic performance of 64- multidetector row coronary computed tomographic angiography for evaluation of coronary artery stenosis in individuals without known coronary artery disease: re- sults from the prospective multicenter accuracy (assessment by coronary computed tomographicangiographyof individualsundergoing invasivecoronary angiography) trial,”JournaloftheAmericanCollegeofCardiology,vol.52,no.21,pp.1724–1732, 2008. [41] F.Cademartiri,L.LaGrutta,A.Palumbo,P.Malagutti,F.Pugliese,W.Meijboom, T. Baks, N. Mollet, N. Bruining, R. Hamers et al., “Non-invasive visualization of coronary atherosclerosis: state-of-art,” Journal of Cardiovascular Medicine, vol. 8, no. 3, pp. 129–137, 2007. [42] F.Cademartiri, L.LaGrutta, G.Runza, A.Palumbo, E.Maffei, N.Mollet, T.Bar- tolotta,P.Somers,M.Knaapen,S.Verheyeet al.,“Influenceofconvolutionfiltering 97 on coronary plaque attenuation values: observations in an ex vivo model of mul- tislice computed tomography coronary angiography,” European radiology, vol. 17, no. 7, pp. 1842–1849, 2007. [43] T. Callister, B. Cooil, S. Raya, N. Lippolis, D. Russo, and P. 
Raggi, “Coronary artery disease: improved reproducibility of calcium scoring with an electron-beam ct volumetric method.” Radiology, vol. 208, no. 3, pp. 807–814, 1998. [44] G. Carneiro, J. Nascimento, and A. Freitas, “Robust left ventricle segmentation from ultrasound data using deep neural networks and efficient search methods,” in Biomedical Imaging: From Nano to Macro, 2010 IEEE International Symposium on. IEEE, 2010, pp. 1085–1088. [45] C. Carson, S. Belongie, H. Greenspan, and J. Malik, “Blobworld: Image segmenta- tion using expectation-maximization and its application to image querying,” Pat- tern Analysis and Machine Intelligence, IEEE Transactions on, vol. 24, no. 8, pp. 1026–1038, 2002. [46] V. Caselles, “Geometric models for active contours,” in Image Processing, 1995. Proceedings., International Conference on, vol. 3. IEEE, 1995, pp. 9–12. [47] V. Caselles, R. Kimmel, and G. Sapiro, “Geodesic active contours,” International journal of computer vision, vol. 22, no. 1, pp. 61–79, 1997. [48] M.Cerqueira,N.Weissman,V.Dilsizian,A.Jacobs,S.Kaul,W.Laskey,D.Pennell, J. Rumberger, T. Ryan, M. Verani et al., “Standardized myocardial segmentation and nomenclature for tomographic imaging of the heart a statement for healthcare professionals from the cardiac imaging committee of the council on clinical cardi- ology of the american heart association,” Circulation, vol. 105, no. 4, pp. 539–542, 2002. [49] A.Chakraborty,L.Staib, andJ.Duncan, “Deformableboundaryfindinginfluenced by region homogeneity,” in Computer Vision and Pattern Recognition, 1994. Pro- ceedings CVPR’94., 1994 IEEE Computer Society Conference on. IEEE, 1994, pp. 624–627. [50] T. Chan and L. Vese, “Active contours without edges,” Image Processing, IEEE Transactions on, vol. 10, no. 2, pp. 266–277, 2001. [51] C. Chen, J. Luo, and K. Parker, “Image segmentation via adaptive k-mean cluster- ing and knowledge-based morphological operations with biomedical applications,” Image Processing, IEEE Transactions on, vol. 7, no. 12, pp. 1673–1683, 1998. [52] D. Chen, B. Li, Z. Liang, M. Wan, A. Kaufman, and M. Wax, “A tree-branch searching, multiresolution approach to skeletonization for virtual endoscopy,” in Proc. SPIE Med. Imag, vol. 3979. Citeseer, 2000, pp. 726–734. [53] Y. Chen, H. Tagare, S. Thiruvenkadam, F. Huang, D. Wilson, K. Gopinath, R. Briggs, and E. Geiser, “Using prior shapes in geometric active contours in a 98 variational framework,” International Journal of Computer Vision, vol. 50, no. 3, pp. 315–328, 2002. [54] Y. Chen, S. Thiruvenkadam, H. Tagare, F. Huang, D. Wilson, and E. Geiser, “On the incorporation of shape priors into geometric active contours,” in Variational and Level Set Methods in Computer Vision, 2001. Proceedings. IEEE Workshop on. IEEE, 2001, pp. 145–152. [55] T. Chua, H. Kiat, G. Germano, G. Maurer, K. Van Train, J. Friedman, and D.Berman,“Gatedtechnetium-99msestamibiforsimultaneousassessmentofstress myocardialperfusion,postexerciseregionalventricularfunctionandmyocardialvia- bility: correlationwithechocardiographyandrestthallium-201scintigraphy,” Jour- nal of the American College of Cardiology, vol. 23, no. 5, pp. 1107–1114, 1994. [56] K. Chuang, H. Tzeng, S. Chen, J. Wu, and T. Chen, “Fuzzy c-means clustering with spatial information for image segmentation,” Computerized Medical Imaging and Graphics, vol. 30, no. 1, pp. 9–15, 2006. [57] C. Ciofolo, M. Fradkin, B. Mory, G. Hautvast, and M. 
Breeuwer, “Automatic my- ocardium segmentation in late-enhancement mri,” in Biomedical Imaging: From Nano to Macro, 2008. ISBI 2008. 5th IEEE International Symposium on. IEEE, 2008, pp. 225–228. [58] M. Clark, L. Hall, D. Goldgof, L. Clarke, R. Velthuizen, and M. Silbiger, “Mri seg- mentation using fuzzy clustering techniques,” Engineering in Medicine and Biology Magazine, IEEE, vol. 13, no. 5, pp. 730–742, 1994. [59] C. Cocosco, W. Niessen, T. Netsch, E. Vonken, G. Lund, A. Stork, and M. Viergever, “Automatic image-driven segmentation of the ventricles in cardiac cine mri,” Journal of Magnetic Resonance Imaging, vol. 28, no. 2, pp. 366–374, 2008. [60] L.Cohen, “Onactivecontourmodelsandballoons,” CVGIP: Image understanding, vol. 53, no. 2, pp. 211–218, 1991. [61] L. Cohen and R. Kimmel, “Global minimum for active contour models: A minimal path approach,” International Journal of Computer Vision, vol. 24, no. 1, pp. 57– 78, 1997. [62] G. Coleman and H. Andrews, “Image segmentation by clustering,” Proceedings of the IEEE, vol. 67, no. 5, pp. 773–785, 1979. [63] T. Cootes, G. Edwards, and C. Taylor, “Active appearance models,” Pattern Anal- ysis and Machine Intelligence, IEEE Transactions on, vol. 23, no. 6, pp. 681–685, 2001. [64] T. Cootes, A. Hill, C. Taylor, and J. Haslam, “Use of active shape models for locating structures in medical images,” Image and vision computing, vol. 12, no. 6, pp. 355–365, 1994. 99 [65] T. Cootes and C. Taylor, “Active shape models–smart snakes,” in Proc. British Machine Vision Conference, vol. 266275. Citeseer, 1992. [66] T. Cootes, C. Taylor, D. Cooper, J. Graham et al., “Active shape models-their trainingandapplication,” Computer vision and image understanding,vol.61,no.1, pp. 38–59, 1995. [67] P. Coup´ e, P. Yger, S. Prima, P. Hellier, C. Kervrann, and C. Barillot, “An opti- mizedblockwisenonlocalmeansdenoisingfilterfor3-dmagneticresonanceimages,” Medical Imaging, IEEE Transactions on, vol. 27, no. 4, pp. 425–441, 2008. [68] D. Cremers, T. Kohlberger, and C. Schn¨ orr, “Nonlinear shape statistics in mum- ford?shah based segmentation,” Computer Vision?ECCV 2002, pp. 516–518, 2002. [69] D. ”Cremers, T. Kohlberger, and C. Schn¨ orr, “Shape statistics in kernel space for variationalimagesegmentation,” Pattern Recognition,vol.36,no.9,pp.1929–1943, 2003. [70] D.Cremers, S.Osher, andS.Soatto, “Kerneldensityestimationandintrinsicalign- mentforshapepriorsinlevelsetsegmentation,” International Journal of Computer Vision, vol. 69, no. 3, pp. 335–351, 2006. [71] D. Cremers, C. Schnorr, and J. Weickert, “Diffusion-snakes: combining statistical shapeknowledgeandimageinformationinavariationalframework,” in Variational and Level Set Methods in Computer Vision, 2001. Proceedings. IEEE Workshop on. IEEE, 2001, pp. 137–144. [72] D. Cremers, F. Tischh¨ auser, J. Weickert, and C. Schn¨ orr, “Diffusion snakes: In- troducing statistical shape knowledge into the mumford-shah functional,” Interna- tional journal of computer vision, vol. 50, no. 3, pp. 295–313, 2002. [73] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising with block- matching and 3 d filtering,” in Proceedings of SPIE, vol. 6064, 2006, pp. 354–365. [74] K. ”Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3dtransform-domaincollaborativefiltering,” Image Processing, IEEE Transactions on, vol. 16, no. 8, pp. 2080 –2095, 2007. [75] E. Debreuve, M. Barlaud, G. Aubert, I. Laurette, and J. 
Darcourt, “Space-time segmentation using level set active contours applied to myocardial gated spect,” Medical Imaging, IEEE Transactions on, vol. 20, no. 7, pp. 643–659, 2001. [76] J. Declerck, J. Feldmar, M. Goris, and F. Betting, “Automatic registration and alignmentonatemplateofcardiacstressandrestreorientedspectimages,” Medical Imaging, IEEE Transactions on, vol. 16, no. 6, pp. 727–737, 1997. [77] K. Delibasis, P. Undrill, and G. Cameron, “Designing fourier descriptor-based geo- metricmodelsforobjectinterpretationinmedicalimagesusinggeneticalgorithms,” Computer Vision and Image Understanding, vol. 66, no. 3, pp. 286–300, 1997. 100 [78] H. Delingette, “General object reconstruction based on simplex meshes,” Interna- tional Journal of Computer Vision, vol. 32, no. 2, pp. 111–146, 1999. [79] T. Deschamps and L. Cohen, “Fast extraction of minimal paths in 3d images and applications to virtual endoscopy1,” Medical Image Analysis, vol. 5, no. 4, pp. 281– 299, 2001. [80] D.Dey,V.Cheng,P.Slomka,R.Nakazato,A.Ramesh,S.Gurudevan,G.Germano, and D. Berman, “Automated 3-dimensional quantification of noncalcified and cal- cified coronary plaque from coronary ct angiography,” Journal of Cardiovascular Computed Tomography, vol. 3, no. 6, pp. 372–382, 2009. [81] D. Dey, I. Kakadiaris, M. Budoff, M. Naghavi, and D. Berman, “Comprehensive non-contrast ct imaging of the vulnerable patient,” Asymptomatic Atherosclerosis, pp. 375–391, 2010. [82] D. Dey, T. Schepis, M. Marwan, P. Slomka, D. Berman, and S. Achenbach, “Auto- mated three-dimensional quantification of noncalcified coronary plaque from coro- nary ct angiography: comparison with intravascular us,” Radiology, vol. 257, no. 2, pp. 516–522, 2010. [83] D. Dey, Y. Suzuki, S. Suzuki, M. Ohba, P. Slomka, D. Polk, L. Shaw, and D. Berman, “Automated quantitation of pericardiac fat from noncontrast ct,” In- vestigative radiology, vol. 43, no. 2, p. 145, 2008. [84] E.Dijkstra,“Anoteontwoproblemsinconnexionwithgraphs,” Numerische math- ematik, vol. 1, no. 1, pp. 269–271, 1959. [85] M. Dinesh, P. Devarakota, and J. Kumar, “Automatic detection of plaques with severe stenosis in coronary vessels of ct angiography,” in Proceedings of SPIE, vol. 7624, 2010, p. 76242Q. [86] J. Dodge Jr, B. Brown, E. Bolson, and H. Dodge, “Intrathoracic spatial location of specifiedcoronarysegmentsonthenormalhumanheart.applicationsinquantitative arteriography, assessment of regional risk and contraction, and anatomic display,” Circulation, vol. 78, no. 5, pp. 1167–1180, 1988. [87] D. Donoho and J. Johnstone, “Ideal spatial adaptation by wavelet shrinkage,” Biometrika, vol. 81, no. 3, pp. 425–455, 1994. [88] O.Ecabert,J.Peters,H.Schramm,C.Lorenz,J.VonBerg,M.Walker,M.Vembar, M. Olszewski, K. Subramanyan, G. Lavi et al., “Automatic model-based segmen- tation of the heart in ct images,” Medical Imaging, IEEE Transactions on, vol. 27, no. 9, pp. 1189–1201, 2008. [89] O. Ecabert, J. Peters, and J. Weese, “Modeling shape variability for full heart segmentation in cardiac computed-tomography images,” in Proceedings of SPIE, vol. 6144, 2006, p. 61443R. 101 [90] A. Einstein, M. Henzlova, and S. Rajagopalan, “Estimating risk of cancer associ- ated with radiation exposure from 64-slice computed tomography coronary angiog- raphy,” JAMA: the journal of the American Medical Association, vol. 298, no. 3, pp. 317–323, 2007. [91] T. Faber, C. Cooke, R. Folks, J. Vansant, K. Nichols, E. DePuey, R. 
Pettigrew, E.Garciaet al.,“Leftventricularfunctionandperfusionfromgatedspectperfusion images: an integrated method.” Journal of nuclear medicine: official publication, Society of Nuclear Medicine, vol. 40, no. 4, p. 650, 1999. [92] T.Faber, E. Stokely, R.Peshock, and J. Corbett, “A model-basedfour-dimensional left ventricular surface detector,” Medical Imaging, IEEE Transactions on, vol. 10, no. 3, pp. 321–329, 1991. [93] T. Faber, J. Vansant, R. Pettigrew, J. Galt, M. Blais, G. Chatzimavroudis, C. Cooke, R. Folks, S. Waldrop, E. Gurtler-Krawczynska et al., “Evaluation of left ventricular endocardial volumes and ejection fractions computed from gated perfusion spect with magnetic resonance imaging: comparison of two methods,” Journal of Nuclear Cardiology, vol. 8, no. 6, pp. 645–651, 2001. [94] E. Falk and V. Fuster, “Angina pectoris and disease progression,” Circulation, vol. 92, no. 8, pp. 2033–2035, 1995. [95] E. Ficaro and J. Corbett, “Advances in quantitative perfusion spect imaging,” Journal of nuclear Cardiology, vol. 11, no. 1, pp. 62–70, 2004. [96] E.Ficaro,B.Lee,J.Kritzman,andJ.Corbett,“Corridor4dm: themichiganmethod for quantitative nuclear cardiology,” Journal of nuclear cardiology, vol. 14, no. 4, pp. 455–465, 2007. [97] S. Forbat, M. Sakrana, K. Darasz, F. El-Demerdash, and S. Underwood, “Rapid assessment of left ventricular volume by short axis cine mri,” British journal of radiology, vol. 69, no. 819, pp. 221–225, 1996. [98] D. Ford and D. Fulkerson, Flows in networks. Princeton university press, 2010. [99] P. Ford, S. Chatziioannou, W. Moore, and R. Dhekne, “Overestimation of the lvef by quantitative gated spect in simulated left ventricles,” Journal of Nuclear Medicine, vol. 42, no. 3, pp. 454–459, 2001. [100] A. Frangi, W. Niessen, and M. Viergever, “Three-dimensional modeling for func- tional analysis of cardiac images, a review,” Medical Imaging, IEEE Transactions on, vol. 20, no. 1, pp. 2–5, 2001. [101] T. Freer and M. Ulissey, “Screening mammography with computer-aided detection: Prospective study of 12,860 patients in a community breast center1,” Radiology, vol. 220, no. 3, pp. 781–786, 2001. 102 [102] Y. Freund and R. E. Schapire, “A decision-theoretic generalization of on-line learn- ing and an application to boosting,” Journal of computer and system sciences, vol. 55, no. 1, pp. 119–139, 1997. [103] K. Fukuchi, T. Uehara, T. Morozumi, E. Tsujimura, S. Hasegawa, K. Yutani, H. Kusuoka, and T. Nishimura, “Quantification of systolic count increase in technetium-99m-mibi gated myocardial spect,” The Journal of nuclear medicine, vol. 38, no. 7, pp. 1067–1073, 1997. [104] G. Funka-Lea, Y. Boykov, C. Florin, M. Jolly, R. Moreau-Gobard, R. Ramaraj, and D. Rinck, “Automatic heart isolation for ct coronary visualization using graph- cuts,” in Biomedical Imaging: Nano to Macro, 2006. 3rd IEEE International Sym- posium on. IEEE, 2006, pp. 614–617. [105] E. Garcia, T. Faber, C. Cooke, R. Folks, J. Chen, and C. Santana, “The increasing role of quantification in clinical nuclear cardiology: The emory approach,” Journal of nuclear cardiology, vol. 14, no. 4, pp. 420–432, 2007. [106] J. Ge, F. Chirillo, J. Schwedtmann, G. G¨ orge, M. Haude, D. Baumgart, V. Shah, C. Von Birgelen, S. Sack, H. Boudoulas et al., “Screening of ruptured plaques in patients with coronary artery disease by intravascular ultrasound,” Heart, vol. 81, no. 6, pp. 621–627, 1999. [107] Y. Ge, D. Stelts, J. Wang, and D. 
Vining, “Computing the centerline of a colon: a robust and efficient method based on 3d skeletons,” Journal of Computer Assisted Tomography, vol. 23, no. 5, pp. 786–94, 1999. [108] G. Gerig, O. Kubler, R. Kikinis, and F. Jolesz, “Nonlinear anisotropic filtering of mri data,” Medical Imaging, IEEE Transactions on, vol. 11, no. 2, pp. 221–232, 1992. [109] G. Germano, J. Erel, H. Kiat, P. Kavanagh, D. Berman et al., “Quantitative lvef andqualitativeregionalfunctionfromgatedthallium-201perfusionspect,” Journal of Nuclear Medicine, vol. 38, no. 5, pp. 749–753, 1997. [110] G.Germano,P.Kavanagh,P.Slomka,S.VanKriekinge,G.Pollard,andD.Berman, “Quantitation in gated perfusion spect imaging: The cedars-sinai approach,” Jour- nal of nuclear cardiology, vol. 14, no. 4, pp. 433–454, 2007. [111] G. Germano, P. Kavanagh, P. Waechter, J. Areeda, S. Van Kriekinge, T. Sharir, H. Lewin, and D. Berman, “A new algorithm for the quantitation of myocardial perfusion spect. i: technical principles and reproducibility.” Journal of nuclear medicine, vol. 41, no. 4, pp. 712–719, 2000. [112] G. Germano, H. Kiat, P. Kavanagh, M. Moriel, M. Mazzanti, H. Su, K. Train, and D. Berman, “Automatic quantification of ejection fraction from gated myocardial perfusion spect,” Journal of Nuclear Medicine, vol. 36, no. 11, p. 2138, 1995. 103 [113] G.GermanoPhD,M.Erel,M.Lewin,M.Kavanagh,B.Paul,M.Berman,S.Daniel et al., “Automatic quantitation of regional myocardial wall motion and thickening from gated technetium-99m sestamibi myocardial perfusion single-photon emission computed tomography,” Journal of the American College of Cardiology, vol. 30, no. 5, pp. 1360–1367, 1997. [114] S. Gokturk, C. Tomasi, B. Acar, C. Beaulieu, D. Paik, R. Jeffrey Jr, J. Yee, and S.Napel,“Astatistical3-dpatternprocessingmethodforcomputer-aideddetection of polyps in ct colonography,” Medical Imaging, IEEE Transactions on, vol. 20, no. 12, pp. 1251–1260, 2001. [115] A. Goldberg and R. Tarjan, “A new approach to the maximum-flow problem,” Journal of the ACM (JACM), vol. 35, no. 4, pp. 921–940, 1988. [116] R. Goldenberg, D. Eilot, G. Begelman, E. Walach, E. Ben-Ishai, and N. Peled, “Computer-aided simple triage (cast) for coronary ct angiography (ccta),” Interna- tional journal of computer assisted radiology and surgery, vol. 7, no. 6, pp. 819–827, 2012. [117] A.GoshtasbyandD.Turner,“Segmentationofcardiaccinemrimagesforextraction of right and left ventricular chambers,” Medical Imaging, IEEE Transactions on, vol. 14, no. 1, pp. 56–64, 1995. [118] P.Gotardo,K.Boyer,J.Saltz,andS.Raman,“Anewdeformablemodelforbound- ary tracking in cardiac mri and its application to the detection of intra-ventricular dyssynchrony,”inComputer Vision and Pattern Recognition, 2006 IEEE Computer Society Conference on, vol. 1. IEEE, 2006, pp. 736–743. [119] P. Gravel, G. Beaudoin, and J. De Guise, “A method for modeling noise in medical images,” Medical Imaging, IEEE Transactions on, vol. 23, no. 10, pp. 1221–1232, 2004. [120] D.Greig, B.Porteous, andA.Seheult, “Exactmaximumaposterioriestimationfor binary images,” Journal of the Royal Statistical Society. Series B (Methodological), pp. 271–279, 1989. [121] H.GudbjartssonandS.Patz, “Thericiandistributionofnoisymridata,” Magnetic Resonance in Medicine, vol. 34, no. 6, pp. 910–914, 1995. [122] A. Gutstein, D. Dey, V. Cheng, A. Wolak, H. Gransar, Y. Suzuki, J. Friedman, L. Thomson, S. Hayes, R. 
Pimentel et al., “Algorithm for radiation dose reduction with helical dual source coronary computed tomography angiography in clinical practice,” Journal of cardiovascular computed tomography, vol. 2, no. 5, pp. 311– 322, 2008. [123] E. Halpern and D. Halpern, “Diagnosis of coronary stenosis with ct angiography:: Comparison of automated computer diagnosis with expert readings,” Academic Radiology, vol. 18, no. 3, pp. 324–333, 2011. 104 [124] A. Hambye, A. Vervaet, and A. Dobbeleir, “Variability of left ventricular ejection fraction and volumes with quantitative gated spect: influence of algorithm, pixel size and reconstruction parameters in small and normal-sized hearts,” European journal of nuclear medicine and molecular imaging, vol. 31, no. 12, pp. 1606–1613, 2004. [125] J. Hare, C. Jenkins, S. Nakatani, A. Ogawa, and T. Marwick, “Feasibility and clinical utility of 3d echocardiography in routine practice,” in Cardiac Society of Australia and New Zealand 55th Annual Scientific Meeting and the International Society for Heart Research, Australasian Section 2007,vol.6,no.Supp.2. Elsevier, 2011, pp. S41–S41. [126] J. Hausleiter, T. Meyer, M. Hadamitzky, M. Zankl, P. Gerein, K. D¨ orrler, A. Kas- trati, S.Martinoff, andA.Sch¨ omig, “Non-invasivecoronarycomputedtomographic angiography for patients with suspected coronary artery disease: the coronary an- giography by computed tomography with the use of a submillimeter resolution (cactus) trial,” European heart journal, vol. 28, no. 24, pp. 3034–3041, 2007. [127] R.Hoffmann, S.VonBardeleben, F.TenCate, A.Borges, J.Kasprzak, C.Firschke, S. Lafitte, N. Al-Saadi, S. Kuntz-Hehner, M. Engelhardt et al., “Assessment of sys- tolic left ventricular function: a multi-centre comparison of cineventriculography, cardiac magnetic resonance imaging, unenhanced and contrast-enhanced echocar- diography,” European heart journal, vol. 26, no. 6, pp. 607–616, 2005. [128] E. Holman, V. Buller, A. de Roos, R. van der Geest, L. Baur, A. van der Laarse, A. Bruschke, J. Reiber, and E. van der Wall, “Detection and quantification of dys- functional myocardium by magnetic resonance imaging: a new three-dimensional method for quantitative wall-thickening analysis,” Circulation, vol. 95, no. 4, pp. 924–931, 1997. [129] L. Hsu, W. Ingkanisorn, P. Kellman, A. Aletras, and A. Arai, “Quantitative my- ocardial infarction on delayed enhancement mri. part ii: Clinical application of an automated feature analysis and combined thresholding infarct sizing algorithm,” Journal of Magnetic Resonance Imaging, vol. 23, no. 3, pp. 309–314, 2006. [130] P. Hunold, T. Schlosser, F. Vogt, H. Eggebrecht, A. Schmermund, O. Bruder, W. Sch¨ uler, and J. Barkhausen, “Myocardial late enhancement in contrast- enhanced cardiac mri: distinction between infarction scar and non–infarction- relateddisease,”American Journal of Roentgenology,vol.184,no.5,pp.1420–1426, 2005. [131] P. Hunold, F. Vogt, A. Schmermund, J. Debatin, G. Kerkhoff, T. Budde, R. Erbel, K. Ewen, and J. Barkhausen, “Radiation exposure during cardiac ct: Effective doses at multi–detector row ct and electron-beam ct1,” Radiology, vol. 226, no. 1, pp. 145–152, 2003. [132] E. Ibrahim, “Myocardial tagging by cardiovascular magnetic resonance: evolution of techniques–pulse sequences, analysis algorithms, and applications,” Journal of Cardiovascular Magnetic Resonance, vol. 13, no. 1, pp. 1–40, 2011. 105 [133] E. Ibrahim, M. Stuber, A. Fahmy, K. Abd-Elmoniem, T. Sasano, M. 
Abraham, andN.Osman, “Real-timemrimagingofmyocardialregionalfunctionusingstrain- encoding (senc) with tissue through-plane motion tracking,” Journal of Magnetic Resonance Imaging, vol. 26, no. 6, pp. 1461–1470, 2007. [134] I. Isgum, M. Staring, A. Rutten, M. Prokop, M. Viergever, and B. van Ginneken, “Multi-atlas-based segmentation with local decision fusion?application to cardiac and aortic segmentation in ct scans,” Medical Imaging, IEEE Transactions on, vol. 28, no. 7, pp. 1000–1010, 2009. [135] M. Ishida, S. Kato, H. Sakuma et al., “Cardiac mri in ischemic heart disease.” Circulation journal: official journal of the Japanese Circulation Society, vol. 73, no. 9, p. 1577, 2009. [136] M.Jolly,“Automaticsegmentationoftheleftventricleincardiacmrandctimages,” International Journal of Computer Vision, vol. 70, no. 2, pp. 151–163, 2006. [137] K. Juergens, H. Seifarth, F. Range, S. Wienbeck, M. Wenker, W. Heindel, and R. Fischbach, “Automated threshold-based 3d segmentation versus short-axis planimetryforassessmentofgloballeftventricularfunctionwithdual-sourcemdct,” American Journal of Roentgenology, vol. 190, no. 2, pp. 308–314, 2008. [138] A. Juslin and J. Tohka, “Unsupervised segmentation of cardiac pet transmission imagesforautomaticheartvolumeextraction,” in Engineering in Medicine and Bi- ology Society, 2006. EMBS’06. 28th Annual International Conference of the IEEE. IEEE, 2006, pp. 1077–1080. [139] D. Kang, P. Slomka, a. C. V. Nakazato, Arsanjani R., J. Min, D. Li, D. Berman, C.-C. Kuo, and D. Dey, “Automated knowledge-based detection of nonobstructive and obstructive arterial lesions from coronary ct angiography,” Medical Physics, vol. 40, no. 4, pp. 041912–1–10, 2013. [140] D. Kang, P. Slomka, R. Nakazato, V. Cheng, J. Min, D. Li, D. Berman, C.-C. Kuo, and D. Dey, “Automatic detection of significant and subtle arterial lesions from coronary ct angiography,” in Proceedings of SPIE, vol. 8314, 2012, pp. 831435–7. [141] D. Kang, P. Slomka, R. Nakazato, J. Woo, D. Berman, C.-C. Kuo, and D. Dey, “Image denoising of low-radiation dose coronary ct angiography by an adaptive block-matching 3d algorithm,” in Proceedings of SPIE, vol. 8669, 2013, p. 86692G. [142] M. Kass, A. Witkin, and D. Terzopoulos, “Snakes Active contour models,” Inter- national journal of computer vision, vol. 1, no. 4, pp. 321–331, 1988. [143] B.M.Kelm,S.Mittal,Y.Zheng,A.Tsymbal,D.Bernhardt,F.Vega-Higuera,S.K. Zhou,P.Meer,andD.Comaniciu,“Detection,gradingandclassificationofcoronary stenoses in computed tomography angiography,” in Medical Image Computing and Computer-Assisted Intervention–MICCAI 2011. Springer, 2011, pp. 25–32. 106 [144] Z. Kelm, D. Blezek, B. Bartholmai, and B. Erickson, “Optimizing non-local means for denoising low dose ct,” in Biomedical Imaging: From Nano to Macro, 2009. ISBI’09. IEEE International Symposium on. IEEE, 2009, pp. 662–665. [145] S. ”Kichenassamy, A. Kumar, P. Olver, A. Tannenbaum, and A. Yezzi, “Gradient flows and geometric active contour models,” in Computer Vision, 1995. Proceed- ings., Fifth International Conference on. IEEE, 1995, pp. 810–815. [146] S. Kichenassamy, A. Kumar, P. Olver, A. Tannenbaum, and A. Yezzi, “Confor- mal curvature flows: from phase transitions to active vision,” Archive for Rational Mechanics and Analysis, vol. 134, no. 3, pp. 275–301, 1996. [147] R. Kim, D. Fieno, T. Parrish, K. Harris, E. Chen, O. Simonetti, J. Bundy, J. 
Finn, F.Klocke, andR.Judd, “Relationshipofmridelayedcontrastenhancementtoirre- versible injury, infarct age, and contractile function,” Circulation, vol. 100, no. 19, pp. 1992–2002, 1999. [148] C. Kirbas and F. Quek, “A review of vessel extraction techniques and algorithms,” ACM computing surveys, vol. 36, no. 2, pp. 81–121, 2004. [149] G. Kiss, J. Van Cleynenbreugel, M. Thomeer, P. Suetens, and G. Marchal, “Computer-aided diagnosis in virtual colonography via combination of surface nor- mal and sphere fitting methods,” European radiology, vol. 12, no. 1, pp. 77–81, 2002. [150] T.Kohlberger,D.Cremers,M.Rousson,R.Ramaraj,andG.Funka-Lea,“4dshape priors for a level set segmentation of the left myocardium in spect sequences,” Medical Image Computing and Computer-Assisted Intervention–MICCAI 2006, pp. 92–100, 2006. [151] S. Krinidis and V. Chatzis, “A robust fuzzy local information c-means clustering algorithm,” Image Processing, IEEE Transactions on, vol.19, no.5, pp.1328–1337, 2010. [152] T. Kristensen, K. Kofoed, J. K¨ uhl, W. Nielsen, M. Nielsen, and H. Kelbæk, “Prog- nostic implications of nonobstructive coronary plaques in patients with non-st- segment elevation myocardial infarction:: A multidetector computed tomography study,” Journal of the American College of Cardiology, vol. 58, no. 5, pp. 502–509, 2011. [153] T. Kristensen, K. Kofoed, D. Møller, M. Ersbøll, T. K¨ uhl, P. von der Recke, L. Køber, M. Nielsen, and H. Kelbæk, “Quantitative assessment of left ventric- ular systolic wall thickening using multidetector computed tomography,” European journal of radiology, vol. 72, no. 1, pp. 92–97, 2009. [154] H. Lamb, J. Doornbos, E. van der Velde, M. Kruit, J. Reiber, and A. de Roos, “Echo planar mri of the heart on a standard system: validation of measurements of left ventricular function and mass,” Journal of computer assisted tomography, vol. 20, no. 6, p. 942, 1996. 107 [155] A. Leber, A. Becker, A. Knez, F. von Ziegler, M. Sirol, K. Nikolaou, B. Ohnesorge, Z. Fayad, C. Becker, M. Reiser et al., “Accuracy of 64-slice computed tomography to classify and quantify plaque volumes in the proximal coronary system: a com- parative study using intravascular ultrasound,” Journal of the American College of Cardiology, vol. 47, no. 3, pp. 672–677, 2006. [156] T. Lee, R. Kashyap, and C. Chu, “Building skeleton models via 3-d medial sur- face/axis thinning algorithms,” CVGIP: Graphical Model and Image Processing, vol. 56, no. 6, pp. 462–478, 1994. [157] T. Lei and W. Sewchand, “Statistical approach to x-ray ct imaging and its applica- tions in image analysis. i. statistical analysis of x-ray ct imaging,” Medical Imaging, IEEE Transactions on, vol. 11, no. 1, pp. 53–61, 1992. [158] A. Lembcke, T. Wiese, J. Schnorr, S. Wagner, J. Mews, T. Kroencke, C. Enzweiler, B. Hamm, and M. Taupitz, “Image quality of noninvasive coronary angiography using multislice spiral computed tomography and electron-beam computed tomog- raphy: intraindividual comparison in an animal model,” Investigative radiology, vol. 39, no. 6, pp. 357–364, 2004. [159] V.Lempitsky,M.Verhoek,J.Noble,andA.Blake,“Randomforestclassificationfor automaticdelineationofmyocardiuminreal-time3dechocardiography,” Functional Imaging and Modeling of the Heart, pp. 447–456, 2009. [160] D. Lesage, E. Angelini, I. Bloch, and G. Funka-Lea, “A review of 3d vessel lumen segmentationtechniques: Models, features and extraction schemes,” Medical Image Analysis, vol. 13, no. 6, pp. 819–845, 2009. 
[161] M.Leventon,W.Grimson,andO.Faugeras,“Statisticalshapeinfluenceingeodesic active contours,” in Computer Vision and Pattern Recognition, 2000. Proceedings. IEEE Conference on, vol. 1. IEEE, 2000, pp. 316–323. [162] Z. Liang, J. MacFall, and D. Harrington, “Parameter estimation and tissue seg- mentation from multispectral mr images,” Medical Imaging, IEEE Transactions on, vol. 13, no. 3, pp. 441–449, 1994. [163] X. Lin, B. Cowan, and A. Young, “Automated detection of left ventricle in 4d mr images: experience from a large study,” Medical Image Computing and Computer- Assisted Intervention–MICCAI 2006, pp. 728–735, 2006. [164] M. Linguraru, N. Vasilyev, P. del Nido, and R. Howe, “Atrial septal defect track- ing in 3d cardiac ultrasound,” Medical Image Computing and Computer-Assisted Intervention–MICCAI 2006, pp. 596–603, 2006. [165] Y. Liu, “Quantification of nuclear cardiac images: The yale approach,” Journal of nuclear cardiology, vol. 14, no. 4, pp. 483–491, 2007. [166] D.Longmore,S.Underwood,G.Hounsfield,C.Bland,P.Poole-Wilson,D.Denison, R. Klipstein, D. Firmin, M. Watanabe, K. Fox et al., “Dimensional accuracy of 108 magnetic resonance in studies of the heart,” The Lancet, vol. 325, no. 8442, pp. 1360–1362, 1985. [167] C.LorenzandJ.Berg,“Acomprehensiveshapemodeloftheheart,” Medical Image Analysis, vol. 10, no. 4, pp. 657–670, 2006. [168] L. Lorigo, O. Faugeras, W. Grimson, R. Keriven, R. Kikinis, A. Nabavi, and C.Westin,“Curves: Curveevolutionforvesselsegmentation,” Medical Image Anal- ysis, vol. 5, no. 3, pp. 195–206, 2001. [169] H. Lu, X. Li, I. Hsiao, and Z. Liang, “Analytical noise treatment for low-dose ct projection data by penalized weighted least-square smoothing in the kl domain,” in SPIE Medical Imaging, vol. 4682, 2002, pp. 146–152. [170] C. Ma and M. Sonka, “A fully parallel 3d thinning algorithm and its applications,” Computer vision and image understanding, vol. 64, no. 3, pp. 420–433, 1996. [171] C. Ma, S. Wan, and J. Lee, “Three-dimensional topology preserving reduction on the 4-subfields,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 24, no. 12, pp. 1594–1605, 2002. [172] J. Ma, J. Huang, Q. Feng, H. Zhang, H. Lu, Z. Liang, and W. Chen, “Low-dose computedtomographyimagerestorationusingpreviousnormal-dosescan,” Medical Physics, vol. 38, no. 10, pp. 5713–31, 2011. [173] M.Ma,M.VanStralen,J.Reiber,J.Bosch,andB.Lelieveldt,“Modeldrivenquan- tification of left ventricular function from sparse single-beat 3d echocardiography,” Medical Image Analysis, vol. 14, no. 4, pp. 582–593, 2010. [174] A. Mahnken, G. M¨ uhlenbruch, R. G¨ unther, and J. Wildberger, “Cardiac ct: coro- nary arteries and beyond,” European radiology, vol. 17, no. 4, pp. 994–1008, 2007. [175] T.Makela,P.Clarysse,O.Sipila,N.Pauna,Q.Pham,T.Katila,andI.Magnin,“A reviewofcardiacimageregistrationmethods,”MedicalImaging, IEEETransactions on, vol. 21, no. 9, pp. 1011–1021, 2002. [176] R. Malladi, J. Sethian, and B. Vemuri, “Shape modeling with front propagation: A levelsetapproach,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 17, no. 2, pp. 158–175, 1995. [177] A. Manduca, L. Yu, J. Trzasko, N. Khaylova, J. Kofler, C. McCollough, and J. Fletcher, “Projection space denoising with bilateral filtering and ct noise mod- eling for dose reduction in ct,” Medical physics, vol. 36, no. 11, pp. 4911–4920, 2009. [178] J.Manj´ on,J.Carbonell-Caballero,J.Lull,G.Garc´ ıa-Mart´ ı,L.Mart´ ı-Bonmat´ ı,and M. 
Robles, “Mri denoising using non-local means,” Medical image analysis, vol. 12, no. 4, pp. 514–523, 2008. 109 [179] R. Manniesing, M. Schaap, S. Rozie, R. Hameeteman, D. Vukadinovic, A. van der Lugt, and W. Niessen, “Robust cta lumen segmentation of the atherosclerotic carotid artery bifurcation in a large patient population,” Medical Image Analysis, vol. 14, no. 6, pp. 759–769, 2010. [180] C. McCollough and B. Schueler, “Calculation of effective dose,” Medical physics, vol. 27, no. 5, pp. 828–839, 2000. [181] W.Meijboom,M.Meijs,J.Schuijf,M.Cramer,N.Mollet,C.vanMieghem,K.Nie- man, J. van Werkhoven, G. Pundziute, A. Weustink et al., “Diagnostic accuracy of 64-slice computed tomography coronary angiography: a prospective, multicenter, multivendor study,” Journal of the American College of Cardiology, vol. 52, no. 25, pp. 2135–2144, 2008. [182] W. Meijboom, C. van Mieghem, N. Mollet, F. Pugliese, A. Weustink, N. van Pelt, F.Cademartiri,K.Nieman,E.Boersma,P.deJaegere et al.,“64-slicecomputedto- mography coronary angiography in patients with high, intermediate, or low pretest probability of significant coronary artery disease,” Journal of the American College of Cardiology, vol. 50, no. 15, pp. 1469–1475, 2007. [183] C. Metz, M. Schaap, A. Weustink, N. Mollet, T. van Walsum, and W. Niessen, “Coronary centerline extraction from ct coronary angiography images using a min- imum cost path approach,” Medical physics, vol. 36, p. 5568, 2009. [184] I. Mikic, S. Krucinski, and J. Thomas, “Segmentation and tracking in echocar- diographic sequences: Active contours guided by optical flow estimates,” Medical Imaging, IEEE Transactions on, vol. 17, no. 2, pp. 274–284, 1998. [185] J.Miller,C.Rochitte,M.Dewey,A.Arbab-Zadeh,H.Niinuma,I.Gottlieb,N.Paul, M. Clouse, E. Shapiro, J. Hoe et al., “Diagnostic performance of coronary an- giography by 64-row ct,” New England Journal of Medicine, vol. 359, no. 22, pp. 2324–2336, 2008. [186] S. Mitchell, J. Bosch, B. Lelieveldt, R. van der Geest, J. Reiber, and M. Sonka, “3- d active appearance models: segmentation of cardiac mr and ultrasound images,” Medical Imaging, IEEE Transactions on, vol. 21, no. 9, pp. 1167–1178, 2002. [187] J.MontagnatandH.Delingette, “4ddeformablemodelswithtemporalconstraints: application to 4d cardiac image segmentation,” Medical Image Analysis, vol. 9, no. 1, pp. 87–100, 2005. [188] G. M¨ uhlenbruch, M. Das, C. Hohl, J. Wildberger, D. Rinck, T. Flohr, R. Koos, C. Knackstedt, R. G¨ unther, and A. Mahnken, “Global left ventricular function in cardiac ct. evaluation of an automated 3d region-growing segmentation algorithm,” European radiology, vol. 16, no. 5, pp. 1117–1123, 2006. [189] D. Mumford and J. Shah, “Boundary detection by minimizing functionals,” in Proc IEEE Conf Computer Vision and Pattern Recognition, vol. 17. IEEE, 1985, pp. 22–26. 110 [190] K. Nakajima, J. Taki, T. Higuchi, M. Kawano, M. Taniguchi, K. Maruhashi, S. Sakazume, and N. Tonami, “Gated spet quantification of small hearts: mathe- maticalsimulationandclinicalapplication,” European Journal of Nuclear Medicine and Molecular Imaging, vol. 27, no. 9, pp. 1372–1379, 2000. [191] R. Nakazato, B. Tamarappoo, T. Smith, V. Cheng, D. Dey, H. Shmilovich, A. Gut- stein,S.Gurudevan,S.Hayes,L.Thomson et al.,“Assessmentofleftventricularre- gional wall motion and ejection fraction with low-radiation dose helical dual-source ct: comparison to two-dimensional echocardiography,” Journal of Cardiovascular Computed Tomography, vol. 5, no. 3, pp. 149–157, 2011. [192] S. Nawano, K. Murakami, N. 
Moriyama, H. Kobatake, H. Takeo, and K. Shimura, “Computer-aided diagnosis in full digital mammography,” Investigative radiology, vol. 34, no. 4, pp. 310–6, 1999. [193] H. Ng, S. Ong, K. Foong, P. Goh, and W. Nowinski, “Medical image segmentation using k-means clustering and improved watershed algorithm,” in Image Analysis and Interpretation, 2006 IEEE Southwest Symposium on. Ieee, 2006, pp. 61–65. [194] E. Nicol, J. Stirrup, M. Roughton, S. Padley, and M. Rubens, “64-channel cardiac computed tomography: Intraobserver and interobserver variability, part 2: Global and regional ventricular function, mass, and first pass perfusion,” Journal of com- puter assisted tomography, vol. 33, no. 2, p. 169, 2009. [195] R. Nishikawa, “Current status and future directions of computer-aided diagnosis in mammography,” Computerized Medical Imaging and Graphics, vol. 31, no. 4, pp. 224–235, 2007. [196] J. Noble and D. Boukerroui, “Ultrasound image segmentation: A survey,” Medical Imaging, IEEE Transactions on, vol. 25, no. 8, pp. 987–1010, 2006. [197] R. Nowak, “Wavelet-based rician noise removal for magnetic resonance imaging,” Image Processing, IEEE Transactions on, vol. 8, no. 10, pp. 1408–1419, 1999. [198] S.Ordas, S.Aguade, J.Castell, andA.Frangi, “Astatisticalmodel-basedapproach for the automatic quantitative analysis of perfusion gated spect studies,” in Proc. SPIE, vol. 5746. Citeseer, 2005, pp. 560–570. [199] S.OsherandR.Fedkiw, Level set methods and dynamic implicit surfaces. Springer Verlag, 2003, vol. 153. [200] S. Osher and N. Paragios, Geometric level set methods in imaging, vision, and graphics. Springer-Verlag New York Inc, 2003. [201] S. Osher and J. Sethian, “Fronts propagating with curvature-dependent speed: al- gorithmsbasedonhamilton-jacobiformulations,” Journal of computationalphysics, vol. 79, no. 1, pp. 12–49, 1988. 111 [202] N. Paragios, “A variational approach for the segmentation of the left ventricle in cardiac image analysis,” International Journal of Computer Vision, vol. 50, no. 3, pp. 345–362, 2002. [203] N. ””Paragios, “A level set approach for shape-driven segmentation and tracking of the left ventricle,” Medical Imaging, IEEE Transactions on, vol. 22, no. 6, pp. 773–776, 2003. [204] N. ”Paragios, “Variational methods and partial differential equations in cardiac image analysis,” in Biomedical Imaging: Nano to Macro, 2004. IEEE International Symposium on. IEEE, 2004, pp. 17–20. [205] N. Paragios and R. Deriche, “Geodesic active contours and level sets for the detec- tion and tracking of moving objects,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 22, no. 3, pp. 266–280, 2000. [206] N.”ParagiosandR.Deriche,“Geodesicactiveregionsandlevelsetmethodsformo- tion estimation and tracking,” Computer Vision and Image Understanding, vol. 97, no. 3, pp. 259–282, 2005. [207] A.PaulandH.Nabi,“Gatedmyocardialperfusionspect: basicprinciples,technical aspects, and clinical applications,” Journal of nuclear medicine technology, vol. 32, no. 4, pp. 179–187, 2004. [208] J. Peters, O. Ecabert, C. Meyer, R. Kneser, and J. Weese, “Optimizing boundary detection via simulated search with applications to multi-modal heart segmenta- tion,” Medical Image Analysis, vol. 14, no. 1, pp. 70–84, 2010. [209] C.PetitjeanandJ.Dacher,“Areviewofsegmentationmethodsinshortaxiscardiac mr images,” Medical Image Analysis, vol. 15, no. 2, pp. 169–184, 2011. [210] M. Petranovic, A. Soni, H. Bezzera, R. Loureiro, A. Sarwar, C. Raffel, E. 
Abstract
Computer-aided analysis of cardiac images acquired with various modalities plays an important role in the early diagnosis and treatment of cardiovascular disease. Numerous computerized methods have been developed to tackle this problem, and recent studies employ sophisticated techniques that exploit available cues from cardiac anatomy such as geometry, visual appearance, and prior knowledge. In particular, visual analysis of three-dimensional (3D) coronary computed tomography angiography (CCTA) remains challenging due to the large number of image slices and the tortuous character of the vessels. In this thesis, we focus on cardiac applications associated with coronary artery disease and cardiac arrhythmias, and study the related computer-aided diagnosis problems in CCTA. First, in Chapter 2, we provide an overview of cardiac segmentation techniques across cardiac imaging modalities, with the goal of providing useful advice and references. In addition, we describe important clinical applications, imaging modalities, and validation methods used for cardiac segmentation.

In Chapter 3, we propose a robust, automated algorithm for unsupervised computer detection of coronary artery lesions from CCTA. Our knowledge-based algorithm consists of centerline extraction, vessel classification, vessel linearization, lumen segmentation with scan-specific lumen attenuation ranges, and lesion location detection. The presence and location of lesions are identified by a multi-pass algorithm that considers expected or "normal" vessel tapering and luminal stenosis in the segmented vessel. The expected luminal diameter is derived from the scan by automated piecewise least squares line fitting over the proximal and mid segments (67%) of the coronary artery, taking into account the locations of the small branches attached to the main coronary arteries. We applied this algorithm to 42 CCTA patient datasets acquired with dual-source CT, in which 21 datasets contained 45 lesions with stenosis ≥25%. The reference standard was provided by visual and quantitative identification of lesions with any stenosis ≥25% by three expert observers using consensus reading. Our algorithm identified 43 lesions (93%) confirmed by the expert observers. There were 46 additional lesions detected
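The expected-diameter fitting described above lends itself to a compact illustration. The sketch below is a minimal, hypothetical Python illustration, not the dissertation's implementation: the function and parameter names, the 1-D diameter-profile input, and the simple single-pass thresholding are my own assumptions. It fits piecewise least-squares lines to the measured lumen diameter over the proximal and mid portion (67%) of a linearized vessel, breaking the fit at side-branch locations, and flags positions where the measured diameter falls at least 25% below the expected taper.

```python
import numpy as np

def detect_lesions(pos, diameter, branch_pos, fit_fraction=0.67, stenosis_thresh=25.0):
    """Toy lesion detector on a linearized coronary vessel (hypothetical sketch).

    pos             : 1-D array of distances along the centerline (mm)
    diameter        : measured lumen diameter at each position (mm)
    branch_pos      : positions (mm) of small side-branch take-offs; the
                      "normal" taper is fitted piecewise between them
    fit_fraction    : fraction of the vessel (proximal + mid, ~67%) used for fitting
    stenosis_thresh : flag positions whose diameter reduction exceeds this (%)
    """
    pos = np.asarray(pos, dtype=float)
    diameter = np.asarray(diameter, dtype=float)

    # Only the proximal/mid portion of the vessel constrains the fit.
    fit_end = pos[0] + fit_fraction * (pos[-1] - pos[0])

    # Break points of the piecewise least-squares fit: vessel origin,
    # side-branch locations inside the fitted region, end of fitted region.
    inner = np.sort([b for b in branch_pos if pos[0] < b < fit_end])
    breaks = np.concatenate(([pos[0]], inner, [fit_end]))

    expected = np.full_like(diameter, np.nan)
    for i, (a, b) in enumerate(zip(breaks[:-1], breaks[1:])):
        in_seg = (pos >= a) & (pos <= b)
        if in_seg.sum() < 2:
            continue
        slope, intercept = np.polyfit(pos[in_seg], diameter[in_seg], deg=1)
        # The last segment's taper is extrapolated over the remaining vessel.
        upto = pos[-1] if i == len(breaks) - 2 else b
        span = (pos >= a) & (pos <= upto)
        expected[span] = slope * pos[span] + intercept

    # Percent diameter stenosis relative to the expected ("normal") taper.
    stenosis = 100.0 * (expected - diameter) / expected
    lesion_mask = stenosis >= stenosis_thresh
    return expected, stenosis, lesion_mask

# Hypothetical usage on a synthetic 100 mm vessel sampled every 0.5 mm:
# exp_d, sten, mask = detect_lesions(np.arange(0, 100, 0.5), measured_d, branch_pos=[22.0, 47.0])
```

The actual algorithm is multi-pass and operates on a segmented 3D lumen with scan-specific attenuation ranges; this sketch only conveys the expected-diameter fitting and stenosis-threshold step.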
Linked assets
University of Southern California Dissertations and Theses
Conceptually similar
Improving the sensitivity and spatial coverage of cardiac arterial spin labeling for assessment of coronary artery disease
Characterization of lenticulostriate arteries using high-resolution black blood MRI as an early imaging biomarker for vascular cognitive impairment and dementia
Improving sensitivity and spatial coverage of myocardial arterial spin labeling
Machine learning based techniques for biomedical image/video analysis
3D vessel mapping techniques for retina and brain as an early imaging biomarker for small vessel diseases
Improved myocardial arterial spin labeled perfusion imaging
Contributions to structural and functional retinal imaging via Fourier domain optical coherence tomography
Shift-invariant autoregressive reconstruction for MRI
Data-driven image analysis, modeling, synthesis and anomaly localization techniques
Sense and sensibility: statistical techniques for human energy expenditure estimation using kinematic sensors
Asset Metadata
Creator: Kang, Dongwoo (author)
Core Title: Advanced coronary CT angiography image processing techniques
School: Viterbi School of Engineering
Degree: Doctor of Philosophy
Degree Program: Electrical Engineering
Publication Date: 07/02/2013
Defense Date: 05/07/2013
Publisher: University of Southern California (original), University of Southern California. Libraries (digital)
Tag: computer-aided diagnosis, coronary arterial lesion detection, coronary CT angiography, image denoising, image processing, low-radiation dose coronary CT angiography, machine learning, OAI-PMH Harvest
Format: application/pdf (imt)
Language: English
Contributor: Electronically uploaded by the author (provenance)
Advisor: Kuo, C.-C. Jay (committee chair), Leahy, Richard (committee member), Nayak, Krishna (committee member), Shung, K. Kirk (committee member)
Creator Email: kangdong@usc.edu, zidanvs9@gmail.com
Permanent Link (DOI): https://doi.org/10.25549/usctheses-c3-282774
Unique identifier: UC11294002
Identifier: etd-KangDongwo-1727.pdf (filename), usctheses-c3-282774 (legacy record id)
Legacy Identifier: etd-KangDongwo-1727.pdf
Dmrecord: 282774
Document Type: Dissertation
Rights: Kang, Dongwoo
Type: texts
Source: University of Southern California (contributing entity), University of Southern California Dissertations and Theses (collection)
Access Conditions: The author retains rights to his/her dissertation, thesis or other graduate work according to U.S. copyright law. Electronic access is being provided by the USC Libraries in agreement with the a...
Repository Name: University of Southern California Digital Library
Repository Location: USC Digital Library, University of Southern California, University Park Campus MC 2810, 3434 South Grand Avenue, 2nd Floor, Los Angeles, California 90089-2810, USA