MACHINE LEARNING-DRIVEN DEFORMATION PREDICTION AND COMPENSATION FOR ADDITIVE MANUFACTURING

by

Nathan Decker

A Dissertation Presented to the
FACULTY OF THE GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(INDUSTRIAL AND SYSTEMS ENGINEERING)

May 2022

Copyright 2022 Nathan Decker

Nihil est sine ratione. ("Nothing is without a reason.") - Gottfried Wilhelm Leibniz

Acknowledgments

This work could not have happened without the support of those who have walked alongside me. Thank you Mama, Papa, and Lauren for your constant dedication, sacrifice, and love. You've been there for me on the best days and the worst days. You helped me to grow in both knowledge and wisdom while having my back in every venture. Thank you Breigh for your willingness to listen, your care for me, your patience, and your constant support. Thank you Chris for being a sounding board to bounce ideas off of, a voice of reason, an endless supply of help, and an empathic presence in my life. Thank you Zach for your friendship, loyalty, and kindness.

Thank you Dr. Yee for showing me my passion for research, believing in me, advocating for me, and offering wisdom when it was most needed. Thank you Dr. Chen for your dedication to making your students the best physicists they could be. Thank you Brandon, Kristen, Dimitri, Danielle, and Nate for carrying me through my undergraduate classes and encouraging me in the years since. Thank you Anthony, Shannon, and Grace for showing me the redemptive aspects of my graduate school experience.

Thank you Dr. Huang for your guidance throughout these past four years. Thank you for your willingness to give time sacrificially in countless meetings and revisions. Thank you for your support, encouragement, and the space to explore and grow that you offered me. This would not have been possible had you not constantly challenged me to do better.
Thank you Yuanxiang for your willingness to accompany me on trips to CAM to obtain the data for this work, your encouragement, and your patience in explaining concepts I struggled with. Thank you to all my labmates for your kindness and support. Thank you to my dissertation and qualifying committee members Drs. Huang, Chen, Abbas, Zhao, and Wang for your guidance, wisdom, and time. Thank you Dr. Chen for challenging me with hands-on opportunities to learn in ISE 510 and 511. Thank you Dr. Abbas for the eye-opening discussions and new methodologies you exposed us to in ISE 562 and 599. Thank you Shelly, Grace, Roxanna, and Sydney for doing the hard work to make this possible.

Lastly, I want to acknowledge God's faithfulness to me over these past four years. He was present in the times of triumph and success, as well as the moments of failure, when life was hard and research discouraging. Apart from Him, this would not have been possible.

Contents

Acknowledgments
List of Tables
List of Figures
Relevant Resources
Abstract

1 Introduction
1.1 AM Deviation Evaluation
1.1.1 Current State of the Art
1.1.2 Contributions
1.2 AM Deviation Modeling
1.2.1 State of the Art
1.2.2 Contributions
1.3 AM Deviation Compensation
1.3.1 State of the Art
1.3.2 Contributions
1.4 Shape Descriptor Representations
1.5 Dental Additive Manufacturing
1.6 Outline of Dissertation

2 Efficiently Registering Scan Point Clouds for Shape Accuracy Assessment and Modeling
2.1 Introduction
2.2 Types of Registration Errors
2.2.1 Non-uniform Sampling/Scanning
2.2.2 Deviation Minimization Bias Due to Unconstrained Registration
2.2.3 Local Minimum Errors
2.3 Error Reduction Strategy
2.3.1 Initial Positioning of Scan Point Cloud
2.3.2 Scan Point Cloud Segmentation
2.3.3 Reorientation of Scan Point Cloud
2.3.4 Point Cloud Resampling with SPSR to Achieve Uniform Point Density
2.3.5 ICP Implementation
2.3.6 Deviation Calculation
2.4 Validation Experiment
2.5 Demonstration of Registration Errors
2.5.1 Evaluation of Proposed Registration Methodology
2.6 Conclusion

3 Prediction and Compensation via Mesh-Based Feature Vectors
3.1 Feature Extraction for Triangular Mesh-Based Shape Deviation Representation
3.1.1 Position-Related Predictors
3.1.2 Surface Orientation and Curvature Predictors
3.1.3 Material Expansion/Shrinkage Predictor
3.2 Shape Deviation Measurement and Calculation
3.3 Random Forest Model to Predict Shape Deviation With Extracted Features
3.3.1 Random Forest Method
3.3.2 Feature Selection
3.3.3 Measuring Covariate Shift to Determine Feasibility of Prediction
3.3.4 Prescriptive Compensation of Shape Deviation
3.4 Validation Experiment
3.4.1 Test Object Design, Printing, and Measurement
3.4.2 Model Training Results
3.4.3 Model Prediction Results
3.4.4 Compensation Results
3.5 Conclusion

4 Prediction and Compensation via Mesh and Spherical Harmonic-Based Feature Vectors
4.1 Methodology
4.1.1 Spherical Harmonics Shape Descriptor Generation
4.1.2 Deviation Quantification
4.1.3 Deviation Modeling
4.1.4 Shape Compensation
4.2 Results
4.2.1 Dataset Generation and Segmentation
4.2.2 Hyperparameter Optimization and Model Training
4.2.3 Prediction Generation
4.2.4 Compensation Results
4.3 Conclusion

5 Optimizing the Expected Utility of Shape Distortion Compensation Strategies
5.1 Methodology
5.1.1 Constructing a Value Function
5.1.2 Determining a Proper Utility Function
5.1.3 Calculating Expected Utility
5.1.4 Generation of Prior Belief Distributions
5.1.5 Calculating Tolerance Probabilities Given Spatial Autocorrelation
5.1.6 Alternative Compensation Strategy
5.2 Results
5.3 Dental Case Study
5.4 Conclusion

6 Discussion and Future Work
6.1 Discussion
6.2 Future Research
6.2.1 Leveraging Streams of In-Situ Measurement Data
6.2.2 Improving Robustness of Models with Limited Data
6.2.3 Transitioning to Function at Scale
6.2.4 Quality for the Masses
6.3 A System Architecture for a Path Forward
6.3.1 Overview
6.3.2 Client-Side Software
6.3.3 Expert-Side Software
6.3.4 Cloud-Based App for Exchange of Data and Models
6.3.5 Conclusion

Reference List

List of Tables

2.1 Comparison of measured deviations before and after registration - Fig. 2.9.
2.2 Comparison of measured deviations before and after registration - Fig. 2.10.
2.3 Comparison of measured deviations before and after registration - Fig. 2.11.
2.4 Comparison of measured deviations produced by unconstrained ICP and the proposed method - Fig. 2.14.
2.5 Analysis of model fitting results for the fixed effect.
2.6 Analysis of model fitting results for random effects.
3.1 Normalized covariate shift metrics between individual shape datasets.
3.2 MAE values for predictions of deviation values.
3.3 Mean absolute vertex error and RMS vertex error for uncompensated and compensated half-ovoid parts.
4.1 Error for model predictions on test dataset.
4.2 Mean absolute deviation and root mean squared deviation for compensated and uncompensated parts.
5.1 Example parameters for value functions.
5.2 Maximum expected utility.
5.3 Parameters for the dental case study.
5.4 Maximum expected utility for dental case study.

List of Figures

1.1 Diagram of methods for modeling geometric deviations in AM parts.
1.2 Flowchart of proposed methodology.
1.3 Full arch dental model.
2.1 (Top) CAD design and point cloud of part with lateral shrinkage. (Bottom) Alignment of point cloud with inconsistent density after ICP.
2.2 (Top) CAD design and point cloud of part with vertical shrinkage. (Middle) Alignment of point cloud without bottom points after ICP. (Bottom) Alignment of point cloud with bottom points after ICP.
2.3 Illustration of registration results after convergence to an inaccurate local minimum.
2.4 Flowchart of the proposed procedure.
2.5 Table points (red) and shape points (green).
2.6 Alignment of bottom plane of scan point cloud to CAD reference.
2.7 Scan point cloud.
2.8 Triangular mesh of scan point cloud after screened Poisson surface reconstruction.
2.9 Point cloud with uneven density: deformed part deviations before registration (left) and after registration (right).
2.10 Point cloud without bottom: deformed part deviations before registration (left) and after registration (right).
2.11 Complete point cloud: deformed part deviations before registration (left) and after registration (right).
2.12 CAD models of the parts used in the experiment.
2.13 Illustration of the experimental design.
2.14 Deviations using proposed method (left) and deviations using unconstrained ICP (right).
3.1 Normal vector for each face surrounding a vertex with median vector.
3.2 Distance between vertex and central z-axis.
3.3 Single regression tree using random forest.
3.4 Ensemble of trees using random forest.
3.5 Illustration of compensation strategy.
3.6 3D printed objects for test dataset.
3.7 Deviation values across the surface of each shape.
3.8 Histogram of magnitudes of deviation values for the half-ovoid shape.
3.9 Out-of-bag error versus number of trees in ensemble for first model (top) and second model (bottom).
3.10 Significance values of each predictor variable in the first model (top) and the second model (bottom).
3.11 Actual deviation values versus predicted deviation values for withheld testing data (First Model).
3.12 First model: Predicted deviation values versus actual deviation values for out-of-bag data in the training dataset (same shape predictions) (left) and predicted deviation values versus actual deviation values for the testing dataset (new shape predictions) (right).
3.13 Second model: Predicted deviation values versus actual deviation values for out-of-bag data in the training dataset (same shape predictions) (left) and predicted deviation values versus actual deviation values for the testing dataset (new shape predictions) (right).
3.14 Deviation values for uncompensated part (left) versus compensated part (right).
4.1 Flowchart of proposed methodology.
4.2 Close-up view of remeshed surface.
4.3 Voxelization V of mesh M.
4.4 Sampled points given the set values of r for a single value of p_n.
4.5 Visualization of spherical harmonics of degree 0 through 6.
4.6 Printed parts from two different angles.
4.7 Visualization of deviations across all printed shapes.
4.8 Response surface showing the RMSE of predictions generated by the model on the validation dataset as a function of the hyperparameters used to train it.
4.9 Predicted and actual deviations for the test dataset.
4.10 Comparison of deviations across the surface of the uncompensated and compensated parts shown at differing angles.
4.11 Comparison of the frequency of deviations found in the compensated and uncompensated parts.
5.1 Lottery modified by shifting the payout.
5.2 Probability distribution of geometric deviations of compensated vertices.
5.3 Semivariogram of the compensation deviation data (mm).
5.4 Semivariogram and covariogram of compensated deviation data.
5.5 Alternative compensation strategy.
5.6 Expected utility of Value Function 1 as a function of c.
5.7 Expected utility of Value Function 2 as a function of c.
5.8 Expected utility of Value Function 3 as a function of c.
5.9 Sensitivity analysis for the difference in optimal utility values to α and γ for Equations 2 and 3.
5.10 Sensitivity analysis for the optimal value of c to γ for Equation 3.
5.11 Probability distribution of geometric deviations of compensated vertices using spherical harmonics-based model.
5.12 Expected utility as a function of c.
6.1 Diagram of the proposed system.
6.2 Screenshot of a prototype of the client-side software program.
6.3 Screenshots of a prototype of the cloud-based exchange.

Relevant Resources

This dissertation is adapted from the following conference and journal articles:

• Decker, N., and Huang, Q., "Machine Learning of Spherical Harmonic Shape Representations for Geometric Accuracy Improvement in Dental Additive Manufacturing," (Drafting).

• Decker, N., and Huang, Q., 2021, "Optimizing the Expected Utility of Shape Distortion Compensation Strategies for Additive Manufacturing", Procedia Manufacturing 53: 348-358.

• Decker, N., Lyu, M., Wang, Y., and Huang, Q., 2020, "Geometric Accuracy Prediction and Improvement for Additive Manufacturing Using Triangular Mesh Shape Data", ASME Transactions, Journal of Manufacturing Science and Engineering 143(6): 061006-1-12.

• Decker, N., Wang, Y., and Huang, Q., 2020, "Efficiently Registering Scan Point Clouds of 3D Printed Parts for Shape Accuracy Assessment and Modeling", SME Journal of Manufacturing Systems 56: 587-597.

• Decker, N., and Huang, Q., 2020, "Intelligent Accuracy Control Service System for Small-Scale Additive Manufacturing", SME Manufacturing Letters 26: 48-52.

• Decker, N., and Huang, Q., 2019, "Geometric Accuracy Prediction for Additive Manufacturing Through Machine Learning of Triangular Mesh Data", ASME MSEC, Erie, PA.
The following conference and journal articles influenced this work, but are not described in this report:

• Baturynska, I., Semeniuta, O., Decker, N., Martinsen, K., and Huang, Q., "Fast Dimensional Quality Screening of 3D-Printed Parts Using Shape Distributions," (Drafting).

• Henson, C., Decker, N., and Huang, Q., 2021, "A digital twin strategy for major failure detection in fused deposition modeling processes", Procedia Manufacturing 53: 359-367.

Abstract

Recent years have seen tremendous growth in the field of additive manufacturing (AM), driven by new interest in fields including aerospace, medicine, and construction. This growth has brought with it a wide array of novel processes, materials, and software methods meant to enable adoption in an increasingly broad number of functional applications. This means that parts produced by AM are now frequently used in situations where mechanical performance and geometric tolerances are highly relevant considerations. Unfortunately, AM currently faces a wide range of difficulties in the area of accuracy control. This challenge is exacerbated by the fact that the domains AM is most poised to disrupt also happen to be those with the most stringent tolerancing requirements. Aerospace components and medical devices are prime examples of this dynamic. For AM to continue its trajectory of growth, a wide range of tools and methodologies is needed to allow engineers to increase the accuracy of manufactured parts and overcome these critical challenges. This dissertation research seeks to address the challenge of accuracy control with an approach that integrates the measurement of observed geometric errors with methods for predicting future errors and strategies for best compensating for them. The first topic that is addressed is the challenge of accuracy quantification.
This work starts by evaluating common pitfalls in the most popular method in the literature for identifying geometric deviations in a 3D printed part: registration and comparison of 3D scan point clouds using the iterative closest point algorithm. A set of modifications to this methodology is proposed as a means of addressing these issues and establishing a rigorous experimental procedure that produces stable and consistent measurements of geometric deviation - a critical and commonly overlooked prerequisite to modeling research that can be replicated and employed in an industrial setting. The proposed approach is shown to be less susceptible to pitfalls and to produce more stable results than the conventional approach.

The second topic that is addressed in this line of work is building a predictive model using the geometry of a part being printed and knowledge of the underlying printing process. The methodology proposed in this line of research is designed to enable predictions for unseen shapes based on small sets of data from previously manufactured shapes with differing geometry. The ability to learn from the deviations of dissimilar shapes is an important step that is required to achieve the goal of first-time-right prints. This work utilizes a novel set of predictor variables alongside random forest modeling to enable modeling of and predictions for shapes that are more complex than those in previous experimental work in the literature. Compensation for geometric errors predicted using this proposed approach by modifying the shape file is shown in a demonstration experiment to reduce the magnitude of geometric deviations by 44%.

The next line of work in this dissertation expands the proposed modeling approach to incorporate greater amounts of geometric information describing the shape being printed. This seeks to enable modeling for far more complex shapes than were utilized in the previous chapter.
In order to achieve this goal, the approach utilizes a spherical harmonic transform of the surface to efficiently capture geometric information into a feature vector that can be ingested by a machine learning model. Modifications were made to the standard spherical harmonic feature vector conversion method found in the literature to allow it to produce feature vectors describing the neighborhood that surrounds a specific single point on a surface, a necessary prerequisite for the required task. This was accomplished by altering the method's spherical sampling approach. The method was evaluated on a dataset of full arch dental models, and shown to reduce the magnitude of geometric deviations by 42%.

After this, the challenge of optimally compensating for these predicted errors is presented. While a number of approaches for doing this are given in the literature, this line of research proposes a novel method that draws on techniques from decision analysis and multi-attribute utility theory to incorporate manufacturer beliefs and preferences into the process. The result of this is a compensation strategy that optimizes the expected utility of a given part to a manufacturer based on the whole of their available information. This methodology is evaluated computationally on the compensation data from the previous two chapters, and shown to significantly improve a manufacturer's expected utility.

Finally, these topics are discussed in depth, particularly in the context of their integration as an overarching framework. Further, directions for future research in the above areas are outlined. Lastly, preliminary development steps for a software tool that automates the above methodologies are described and the software is illustrated.

Chapter 1
Introduction

This research proposes an end-to-end workflow and series of new methodologies for measuring, predicting, and compensating for shape deviations in additive manufacturing.
The goal of this is to reduce shape deviations in a manner that efficiently learns from past data, is driven by a manufacturer's needs, and offers repeatability in different contexts. This chapter will explore the background of this problem, as well as the motivating factors driving the proposed solution. It will also cover relevant research in the field. The content covered in each of these sections will be grouped into three major research thrusts: evaluation, prediction, and compensation. Evaluation refers to the manner in which geometric shape deviations are quantified and assessed. Prediction covers the process of generating models for predicting shape deviations based on past deviation measurement data. Finally, compensation refers to the process of making adjustments to the printing process so as to mitigate the prevalence of the predicted deviations. While each constitutes a separate area of the literature with its own set of associated methodologies and challenges, this section will argue for the importance of a comprehensive approach that incorporates and makes progress on all three research challenges in a unified framework. A side line of the literature will be explored in the area of shape descriptor representation - a field that seeks to generate numeric vectors describing a given geometry. This offers insight into the work described in Chapters 3 and 4. A second side line of the literature will be explored in the area of dental additive manufacturing, which is used as a test case for the methodologies outlined in Chapters 4 and 5.

In the last decade, additive manufacturing (AM) has transitioned from solely a tool for prototyping to a critical technology for the production of functional parts used in a growing number of fields such as aerospace and medicine [1–6]. Yet, the presence of undesirable geometric shape deviations that may lead to scrap or rework remains a pervasive issue [2,7,8].
This can aggravate existing high costs in AM and hamper further industrial adoption. Geometric complexity of three-dimensional (3D) objects is one major issue, among others, that hinders efforts to achieve consistent shape accuracy across a large variety of products, particularly considering the nature of one-of-a-kind manufacturing or low-volume production in AM. Unlike mass production, learning to produce one product or one family of products in accurate geometries is insufficient for AM.

1.1 AM Deviation Evaluation

The first step in addressing this challenge involves establishing a method for assessment of part accuracy, as this is a critical prerequisite to any work involving modeling or compensation. This is because deviations must be measured in a consistent and repeatable way to construct a dataset capable of producing useful models. It is important to accurately define to what degree and where deviations occur on a given part in a manner that is reproducible. Unfortunately, while existing solutions have largely been accepted across industrial applications for the task of quality inspection, the issue of deviation quantification for predictive modeling remains an open problem.

1.1.1 Current State of the Art

The first challenge faced on this front is that there are many ways to measure the accuracy of a manufactured part. These can range in complexity and cost from measurement of predefined dimensions using calipers to the use of a CT scanner for whole surface measurement [9–12]. To fully assess and control quality of parts produced by AM, it is necessary to know the magnitude and direction of deviation across the entire surface of a part. One method that is growing in popularity is the use of a 3D scanner to generate a digital cloud of points that replicates the object being scanned. These 3D scanners utilize a range of technologies for surface reconstruction, including structured light scanning, laser triangulation, and photogrammetry [13–17].
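As a concrete illustration of whole-surface deviation measurement, the sketch below computes, for each point of a scan point cloud, its distance to the nearest point of a reference cloud sampled from the CAD model, using a k-d tree for fast nearest-neighbor lookup. This is an illustrative sketch assuming NumPy and SciPy, not the tooling used in this work; the function name `surface_deviations` is ours, and it assumes the scan has already been registered to the reference (the registration step is discussed next).

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_deviations(scan_points, reference_points):
    """Unsigned deviation of each scan point: distance to its nearest
    neighbor on the (already registered) reference point cloud."""
    tree = cKDTree(reference_points)
    distances, _ = tree.query(scan_points)  # nearest-neighbor distances
    return distances

# Toy example: the reference is an 11x11 grid on the unit square in the
# z = 0 plane; the "scan" is the same grid shifted 0.1 upward, so every
# point's deviation should be 0.1.
g = np.linspace(0.0, 1.0, 11)
xx, yy = np.meshgrid(g, g)
reference = np.column_stack([xx.ravel(), yy.ravel(), np.zeros(xx.size)])
scan = reference + np.array([0.0, 0.0, 0.1])

d = surface_deviations(scan, reference)
print(round(float(d.max()), 6))  # 0.1
```

In practice the reference side would be points densely sampled from the CAD surface (or point-to-mesh distances would be used directly), and the distances would be signed by the surface normal to distinguish expansion from shrinkage.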
Once a point cloud of the manufactured object is generated, it must be aligned (or registered) against a reference computer-aided design (CAD) model or other 3D model representing the object's ideal shape and size [18]. After this is completed, deviations between the two surfaces can be calculated.

This alignment can be produced using a number of algorithms that have been developed over the past few decades. Several of the most common algorithms are described in detail below, though the list is certainly not exhaustive. A more comprehensive discussion of the topic of registration can be found in Tam et al. [19].

One simple and commonly utilized alignment method is point pairs picking [20]. With this method, a user is first asked to pick several pairs of corresponding points on both the scanned point cloud and the shape that it will be aligned to. These should be located across the surface of the object. Then, the transformation (translation/rotation) of the scan point cloud that minimizes the sum of the distances between these pairs of points is calculated and applied. In the case of rigid registration (where both shapes are identical), this transformation can be perfectly determined with just three point pairs [21]. In situations where deviation between the two shapes is to be measured, such as AM accuracy assessment, more points are needed to achieve a quality alignment. Smith et al. [22], for example, utilized eight landmark points across the surface to align scan point clouds of 3D printed parts produced on an FDM printer with their corresponding CAD file.

Another common method for achieving alignment is the 4PCS algorithm [21]. This algorithm is designed to produce alignment in cases where point sets include outliers and noise. Further, the algorithm can be successfully applied without any prefiltering or initial alignment [21].
It functions by finding coplanar 4-point bases in the first point cloud that also correspond to 4-point bases in the second point cloud. A base is a set of 3 points that have good candidate correspondences between the two point clouds. Using these corresponding bases, the optimal transformation to align the two point clouds is determined [23]. A number of variants of and modifications to this algorithm can be found in the literature [24,25].

Finally, one of the most frequently used tools for alignment is the iterative closest point (ICP) algorithm developed by Besl and McKay [26]. This algorithm is capable of aligning geometric representations including point sets, line segment sets, implicit curves, parametric curves, triangle sets, implicit surfaces, and parametric surfaces [26]. Of the two point clouds to be aligned, the first is called the "data" shape, which is produced using a 3D scanner in this application. The second is the "model" shape, which represents the ideal for shape and position. The ICP algorithm consists of the steps shown below in Algorithm 1. The final alignment of the data shape is then outputted as the resulting transformation.

Algorithm 1 Iterative Closest Point Algorithm
1: while the difference between the current iteration's mean square distance and that of the previous iteration is above a preset threshold do
2:   for each point on the data shape do
3:     determine the closest corresponding point on the model shape
4:   end for
5:   find the transformation (translation/rotation) of the data shape that minimizes the mean of the squares of distances between these point pairs
6:   apply the transformation to the data shape
7:   calculate the mean square distance for the point pairs
8: end while

A large number of works have utilized ICP to align scan point clouds of manufactured parts against their CAD designs for the purpose of deviation calculation. In the AM literature, Klar et al.
[27] measured the dimensional accuracy of 3D printed rectangular prisms of high-consistency nanocellulose. They began by fitting planes to each face of a prism's scan point cloud. The mean distance between each of these planes was then used to generate a new 'ideal' rectangular prism. This new rectangular prism was first aligned to the scan point cloud by eye, and then aligned to the point cloud using the ICP algorithm implemented in CloudCompare, an open-source program for working with point clouds and triangular meshes [28]. Following this alignment, the distances to the new prism shape were calculated for each point on the scan point cloud. Alharbi et al. [29] studied the dimensional accuracy of 3D printed dental restorations produced using stereolithography. Scans of their printed parts were aligned to the STL file that they were printed from, which represents the ideal shape of the printed part. The first alignment step was done by eye, followed by ICP. Finally, the distances between the STL file and printed parts were calculated.

Because the shape of a 3D printed part being evaluated will differ slightly from the design it is being registered against, this is an instance of nonrigid registration. Unfortunately, nonrigid registration tends to be a more difficult challenge than rigid registration [19]. Further, alignments produced by the ICP algorithm can differ as a result of factors such as scan density, completeness of the scan, differing initial alignments, and convergence to differing local minima. In order to meet these challenges, a number of improvements to the ICP algorithm have been proposed.

ICP variants using resampling have been proposed as a means of avoiding convergence to poor alignments. Gelfand et al. [30] proposed modifications to the ICP algorithm that sample points in regions of the aligned shapes whose geometries are considered more 'stable'.
These geometries are complex enough that any translation or rotation produces a change in the algorithm's error metric. Kwok and Tang [31] used stability analysis to improve upon normal space sampling, resulting in a more efficient and robust registration algorithm. Yu et al. [32] proposed a method that resamples and removes noise from points on the point clouds to be aligned, increasing accuracy when the algorithm is applied to 3D face verification.

Chetverikov et al. [33,34] proposed the addition of trimming to the ICP algorithm to allow ICP to accurately converge in the presence of substantial differences between the data and model shapes. In this formulation, Least Trimmed Squares is used as the error metric to be minimized in Step 2 of the ICP algorithm. As a result, the presence of outliers and deviations in the shapes to be aligned has less of an impact on the final alignment, as the largest point-to-point distances are ignored. Dong et al. [35] built on this approach by adding Lie group representations to determine geometric transformations when anisotropic scaling is also desired.

Minguez et al. [36] and Armesto et al. [37] proposed the Metric-Based ICP Technique as a means of improving the algorithm's robustness and precision. This method replaces Euclidean distance with a new distance measure that takes into account both translation and rotation, both of which are relevant to proper alignment.

Kapoutsis et al. [38,39] proposed the Morphological ICP algorithm, which reduces the computational complexity of ICP's closest corresponding point operator. This method starts by building a Voronoi diagram of model points using the morphological Voronoi tessellation method. Closest corresponding points can then be determined using the diagram. This reduces the computational cost of the operation from O(NpNx) to O(Np), where Np and Nx are the number of points in the data shape and model shape, respectively.
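The core loop of Algorithm 1 can be written compactly. The sketch below is a minimal point-to-point illustration using NumPy and SciPy, with the step-5 transformation recovered via singular value decomposition; it is an illustrative sketch under those simplifying assumptions, not the implementation evaluated in this work.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(A, B):
    """Least-squares rigid transform (R, t) mapping point set A onto B via SVD."""
    cA, cB = A.mean(axis=0), B.mean(axis=0)
    H = (A - cA).T @ (B - cB)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cB - R @ cA
    return R, t

def icp(data, model, tol=1e-6, max_iter=50):
    """Align the `data` shape to the `model` shape, following Algorithm 1."""
    tree = cKDTree(model)              # for fast closest-point queries
    src = data.copy()
    prev_mse = np.inf
    for _ in range(max_iter):
        dists, idx = tree.query(src)   # steps 2-4: closest corresponding points
        R, t = best_fit_transform(src, model[idx])  # step 5
        src = src @ R.T + t            # step 6: apply the transformation
        mse = np.mean(dists ** 2)      # step 7: mean square distance
        if abs(prev_mse - mse) < tol:  # step 1: convergence check
            break
        prev_mse = mse
    return src
```

Applied to a scan point cloud that is a small rigid displacement of the model, this loop recovers the alignment in a handful of iterations; the variants surveyed above modify the correspondence step, the error metric, or the search strategy of this same skeleton.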
Finally, because convergence to a global minimum is highly desirable for ensuring that the ICP algorithm does not converge to an unreasonable local minimum, Yang et al. [40] proposed Go-ICP, which uses the branch-and-bound method to search SE(3) space for a transformation that reduces the distance objective function value. ICP is then performed from this initialized position, and the result is set as the upper bound for the next branch-and-bound search. This process is repeated until convergence to a desired accuracy.

1.1.2 Contributions

While each of these proposed methods offers strong gains over the conventional ICP algorithm, their use for applications in AM faces a significant challenge. Namely, the alignment that minimizes deviations between two surfaces isn't necessarily the most reasonable alignment for applications that seek to model and correct deviations of printed parts. Instead, it is desirable that the algorithm for aligning surfaces take into account preexisting knowledge of the additive manufacturing process and produce alignments that make sense in light of this information. As a result, a registration methodology that addresses the challenges posed by ICP while also incorporating manufacturing process knowledge is needed.

The first contribution of this research is to identify some of the challenges posed by ICP when applied to assessing the accuracy of AM-built parts. This discussion is necessary because ICP is one of the most frequently used tools for 3D scan point cloud alignment in the AM literature. It is important to note that while ICP wasn't proposed with manufacturing quality assessment in mind, it has become popular in this context. A number of these potential pitfalls will be discussed in depth in Section 2.2 and quantitatively evaluated using simulated data in Section 2.5.
While other algorithms differ in how alignment is achieved, many of the points that will be covered in this research apply to them to a varying extent.

Second, a systematic approach to registration of point clouds specifically for AM quality assessment will be presented in Section 2.3. This approach is designed to minimize variability and inaccuracies in ICP alignment while keeping deviations from a part's design as close to where they originated as possible. This is done through the use of geometric constraints on the ICP algorithm's alignment based on manufacturing process knowledge. The methodology utilizes the ICP algorithm, but does not fundamentally alter it.

Finally, a quantitative case study will be conducted in order to assess the potential magnitude of deviations in ICP point cloud alignments before and after the application of the proposed registration methodology. The ultimate goal of this is to enable effective modeling in the work that will follow.

1.2 AM Deviation Modeling

Once the method by which geometric deviations are measured is firmly established, it is then possible to move on to the next task in the proposed overarching workflow: modeling. This task seeks to predict geometric errors on the surface of a shape before it is printed using data from past prints.

1.2.1 State of the Art

A growing body of research seeks to address this shape deformation issue through predictive modeling and compensation approaches. As summarized in Fig. 1.1, there are two main categories of predictive modeling approaches reported in the literature for shape deformation control: physics-based approaches utilizing finite element modeling [41–45] and data-driven approaches based on statistical and machine learning [46–52].

Figure 1.1: Diagram of methods for modeling geometric deviations in AM parts.

Physics-based modeling uses first principles to simulate the physical phenomena underlying an AM process.
Results from these simulations can be effective for predicting the thermal and mechanical behavior of parts during a print. For instance, physics-based models have been applied to simulate residual stresses in produced parts, give insight into part distortion, and predict the spatiotemporal temperature of feedstock in a build envelope, among many other uses [41–44]. Challenges faced by physics-based modeling include the computational complexity of simulations and the need to account for a wide variety of physical phenomena that affect a process [45]. Furthermore, these phenomena can often be specific to a single method of AM, i.e., results from a simulation of a selective laser melting machine would not be useful for modeling a machine using material extrusion.

Data-driven approaches for shape deformation control utilize data either from processes or from products to establish process-oriented models or product shape-oriented models. These surrogate models greatly reduce computational costs. Process-oriented models seek to address geometric differences caused by process variables. Empirical and statistical methods have been applied to the investigation and modeling of AM processes [53–56]. Factors such as layer thickness and flow rate are varied to discover optimal settings for quality control. Tong et al. [47,57], for example, utilize polynomial regression models to predict shrinkage in each spatial direction and correct material shrinkage and kinematic errors caused by motion of the extruder by altering computer-aided design (CAD) designs. One downside to process-oriented models is that the product shapes and their impact on shape deformations are often not considered.

Product shape-oriented models seek to account for this by including the geometry of the manufactured part to inform error predictions. This does not mean that the process is ignored; rather, process information is incorporated alongside information regarding part shape.
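As a minimal illustration of a process-oriented model of the kind described above, a low-order polynomial can be fit to the difference between nominal and measured dimensions and then inverted to pre-scale a design. All data values and the polynomial degree below are hypothetical, not results from any cited study:

```python
import numpy as np

# Hypothetical nominal dimensions (mm) and the corresponding measured
# dimensions of printed test features along one axis.
nominal  = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
measured = np.array([ 9.92, 19.85, 29.79, 39.74, 49.70])

# Fit a low-order polynomial mapping nominal size to shrinkage error.
coeffs = np.polyfit(nominal, measured - nominal, deg=1)

def compensate(x):
    """Enlarge a nominal dimension to cancel the predicted shrinkage."""
    predicted_error = np.polyval(coeffs, x)
    return x - predicted_error

# A 25 mm feature would be scaled up slightly before printing.
print(compensate(25.0))
```

Note that such a model depends only on the process variables and dimensions sampled, which is exactly the limitation noted above: the shape of the product itself does not inform the prediction.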
A critical step in shape-oriented modeling is the mathematical representation of shape deformation for freeform 3D objects. Three main representation approaches have been reported in the literature: point cloud representation, parametric representation, and triangular mesh representation.

Point cloud-based approaches have sought to describe geometry using the coordinates of points on a product boundary. Xu et al. [58], for example, presented a framework for establishing the optimal correspondence between points on a deformed shape and a CAD model. A compensation profile based on this correspondence is then developed and applied to prescriptively alter the CAD model. A different point cloud-based approach is presented in [59], which sought to utilize deep learning to enable thermal distortion modeling. A part's thermal history was captured using a thermal camera focused on the build plate of a printer employing laser-based additive manufacturing. This information was then used to train a convolutional neural network, which gave a distortion prediction for each point. The method was demonstrated using a number of 3D printed disks. Another related study focused on the use of transfer learning between models for different AM materials [60]. The model that was employed utilized information regarding a point's position on the printed disk shape to predict geometric distortion. In addition to proper shape registration and correspondence, one challenge for this approach is that models based on point cloud representations of shape deformation can be highly shape dependent, making it hard to translate knowledge from shape to shape. As a result, the datasets in the previous articles are highly homogeneous.

Parametric representation approaches transform the point cloud data to extract deformation patterns or reduce complexity due to shape variety. Huang et al., for example, demonstrated a modeling and compensation method by representing the shape deformation in a polar coordinate system [2].
One advantage of this approach is that it decouples geometric shape complexity from deformation modeling through transformation, making systematic spatial error patterns more apparent and easier to analyze. Huang et al. showed [46,61–63] that this approach was able to reduce errors in stereolithography (SLA) printed cylinders by up to 90% and in SLA-printed 2D freeform shapes by 50% or more. One disadvantage of this method is that it requires a parametric function to be fit to the surface of each shape that is to be modeled. Unfortunately, this can become prohibitively tedious for complex 3D shapes [64].

To account for this problem and expedite model building, this dissertation proposes a shape-oriented modeling approach based on features extracted from triangular mesh shape representations of printed objects. This form of shape representation, illustrated in Figure 3.1, is an ideal candidate because of the ease with which it can describe complex 3D geometries. Furthermore, parts manufactured using AM are almost universally handled as triangular mesh files. The STL file is the most common format for transferring 3D shapes from CAD software or databases to slicing software for a 3D printer. It stores 3D shapes in the form of a simple triangular mesh and has maintained widespread popularity over the past several decades due to its simplicity and wide compatibility across systems. Other, more recent file formats for 3D printing, such as the additive manufacturing file format (AMF) and the 3D manufacturing format (3MF) [65,66], incorporate functionality beyond the storage of a single triangular mesh, such as color and texture, more naturally defined curves, and more. These formats have found support from government and industry and are growing in adoption. Because this modeling method can utilize the same data structure that the part is produced with, its simplicity and accuracy are increased.
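To make the triangular mesh representation concrete, the sketch below parses a toy ASCII STL and derives simple per-facet quantities (facet area and unit normal) of the sort that geometric features can be built from. The parser and feature choices are illustrative only; they are not the feature-extraction procedure proposed in later chapters.

```python
import io
import numpy as np

# A minimal ASCII STL (a single facet) used as stand-in input.
STL_TEXT = """solid example
facet normal 0 0 1
  outer loop
    vertex 0 0 0
    vertex 1 0 0
    vertex 0 1 0
  endloop
endfacet
endsolid example
"""

def read_ascii_stl(stream):
    """Parse facets from an ASCII STL stream into an (n, 3, 3) vertex array."""
    facets, current = [], []
    for line in stream:
        tokens = line.split()
        if tokens and tokens[0] == "vertex":
            current.append([float(v) for v in tokens[1:4]])
        if tokens and tokens[0] == "endfacet":
            facets.append(current)
            current = []
    return np.array(facets)

def facet_features(facets):
    """Per-facet area and unit normal, computed from the vertex coordinates."""
    edge1 = facets[:, 1] - facets[:, 0]
    edge2 = facets[:, 2] - facets[:, 0]
    cross = np.cross(edge1, edge2)
    areas = 0.5 * np.linalg.norm(cross, axis=1)
    normals = cross / np.linalg.norm(cross, axis=1, keepdims=True)
    return areas, normals

facets = read_ascii_stl(io.StringIO(STL_TEXT))
areas, normals = facet_features(facets)
print(areas[0], normals[0])   # area 0.5, normal along +z
```

Because the STL format stores nothing beyond this flat list of triangles, any quantity needed for modeling, from facet orientation to local curvature estimates, must be derived from the vertex coordinates in exactly this fashion.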
Other work has used triangular mesh representations of geometry in seeking to improve print accuracy, often by selecting ideal orientations for printing or by focusing on geometric differences for specific error-prone features. Chowdhury et al. [67] proposed an approach for selecting the optimal build orientation of a part using a model with orientation-based variables relevant to a part's final geometric accuracy. These variables were derived from the part's STL file. Their method combined this model with compensation produced by a neural network trained on finite element analysis data to reduce the overall error of the part [67,68]. Moroni et al. [69] demonstrated a means of identifying cylindrical voids in a part's shape using a triangular mesh. This approach then predicted the dimensional error of the cylindrical voids based on their geometric properties. Moroni et al. [70] also extended this method to selecting the optimal print orientation.

1.2.2 Contributions

The method proposed in this line of research seeks to predict and compensate for geometric deviations across the entire surface of a given part, making it a useful tool for increasing the shape accuracy of an AM machine. It begins by performing feature extraction from triangular mesh representations of manufactured parts. These features are used alongside deviation measurement data for the respective parts to train a random forest machine learning model. This model can then be used to predict errors for future prints. Finally, a new 3D design based on the original design is generated with modifications to the shape meant to compensate for the predicted errors. The new design is then printed, resulting in a part with reduced geometric deviations. This process is illustrated in Fig. 1.2. One key contribution of this approach is that it quickly facilitates modeling of freeform surfaces that would likely be exceedingly difficult to model using parametric function-based approaches.

Figure 1.2: Flowchart of proposed methodology.
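The predict-then-compensate workflow just described can be sketched end to end. In the snippet below, the per-point features, training deviations, and surface normals are synthetic placeholders, and the one-step subtraction of the predicted error stands in for the compensation strategies discussed later; scikit-learn's RandomForestRegressor is used as the random forest implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)

# Placeholder training data: per-point geometric features from past prints
# (e.g., position and surface-normal components) and the measured deviation
# of each point along its normal.
features = rng.random((500, 6))
# Synthetic "deviations" that depend on the features, standing in for
# measured deviations obtained from registered 3D scans.
deviations = 0.2 * features[:, 0] - 0.1 * features[:, 3] \
             + rng.normal(0.0, 0.01, 500)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(features, deviations)

# For a new design: predict each surface point's deviation, then offset the
# point in the opposite direction along its (placeholder) surface normal.
new_points = rng.random((10, 3))
new_normals = np.tile([0.0, 0.0, 1.0], (10, 1))
new_features = np.hstack([new_points, new_normals])
predicted = model.predict(new_features)
compensated_points = new_points - predicted[:, None] * new_normals
```

The compensated points would then be written back into the triangular mesh before slicing and printing, yielding the new design shown in Fig. 1.2.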
In Chapter 3, an experiment to validate this approach using a number of benchmarking objects produced on a fused deposition modeling (FDM) 3D printer will be presented. The experiment used a dataset of four objects and their corresponding geometric deviations to train a machine learning model. This model was then used to make predictions for a new shape that was treated as a testing dataset. The predicted deviations for this shape compared favorably to the actual deviations of the shape when printed, demonstrating the potential of this approach for applications in error prediction. Finally, these predictions and a compensation strategy found in the literature were utilized to generate a compensated CAD file of the shape, which was printed and evaluated. This compensated part was found to have average deviations that were 44% smaller than those of the uncompensated original print.

In Chapter 4, a follow-up experiment is presented which utilizes an additional set of features consisting of spherical harmonic shape transformations derived from the objects being printed. This allowed the method to efficiently capture more information regarding the surface of the part in the form of a feature vector that can be ingested by a machine learning model. Modifications were made to the standard spherical harmonic feature vector conversion method found in the literature to allow it to produce feature vectors specific to a single point on a surface, a necessary prerequisite for the required task. The method was evaluated on a dataset of full-arch dental models and shown to reduce the magnitude of geometric deviations by 42%. The topic of shape representations will be discussed in greater depth later in this chapter.
1.3 AM Deviation Compensation

The previously described line of research relied on a standard strategy for generating compensated parts, which seeks to modify the geometry of a shape so as to exactly cancel out the errors that will be generated during a manufacturing process. This third area of research seeks to understand the implications of this strategy, and proposes a new approach that goes beyond simple cancellation, incorporating information about a manufacturer's prior beliefs and preferences into the compensation process.

1.3.1 State of the Art

Approaches for improving AM accuracy utilizing predictive product design adjustment seek to generate predictions for the geometric inaccuracies of a manufactured part, and then adjust the dimensions of the part before printing so as to compensate for them, producing a part with the intended dimensions. Below, we provide a detailed review of design adjustment methods.

This approach was utilized by Tong et al. [57] to improve the accuracy of parts produced using stereolithography (SLA). The proposed process started with a kinematic model designed to predict the dimensional inaccuracies of the printed part caused by inaccuracies in the motion of a mirror that reflects the printer's laser beam into the resin vat. A test artifact was designed, produced, and measured to determine the coefficients in the kinematic model. With the fitted model, Tong et al. generated predictions for the inaccuracies of a new part that was similar to the test artifact. These predictions were used to modify each vertex in the part's STL file. Each vertex in the triangular mesh was translated in the direction opposite to the predicted translation due to kinematic error. A part produced using the compensated STL file was compared to one produced using the original STL file, and was found to have significantly less volumetric error. Tong et al.
[47] then extended this work for use with a fused deposition modeling (FDM) printer by developing a separate kinematic error model for that machine. They further demonstrated the application of compensation to the part's slice file.

Huang et al. [46,48,61,62,64] proposed a strategy to optimally compensate for a part's predicted deformation based on the analysis that a design incorporating compensation might have a slightly different distortion pattern than the original design. The proposed method addresses this by accounting for the predicted additional deviations caused by adding compensation. Huang et al. employed this compensation strategy along with a parametric function-based predictive modeling approach to generate predictions of geometric errors for 2D freeform shapes [46,48,62] and 3D primitive shapes [46,61–63]. Using this method, dimensional accuracy for 2D freeform shapes was shown to increase by fifty percent or more.

Chowdhury et al. [67,68] used a thermal modeling-based approach to predict thermal deformations generated in parts produced using selective laser sintering (SLS). Predictions of distortion from a thermo-mechanical finite element analysis (FEA) model were used to train a neural network. For the network, an instance was a vertex on the part's STL file. During training, the post-deformation positions of vertices were used as predictor variables, and the pre-deformation positions of those vertices were treated as a response. Once this model was trained, the network was used to predict the proper compensated position of each vertex on a part's designed STL file. This worked by having the neural network predict what starting vertex position would result in the desired vertex position once distortion was added.

McConaha and Anand [71] iterated on this approach, using a sacrificial build instead of a predictive model. Under this strategy, a part is printed and then 3D scanned, with the measured distortions then used instead of predicted distortions.
McConaha and Anand used a neural network compensation approach similar to that of Chowdhury et al. [67,68], but instead used the post-deformation positions of vertices to train a network that would predict the reverse of the vectors describing the transformation between designed and deformed points. This was done so as to mitigate issues due to extrapolation.

Zhang et al. [72] further built on this line of work, and proposed applying the distortion predictions produced using a thermo-mechanical FEA simulation to a non-uniform rational basis spline (NURBS) surface instead of an STL file so as to preserve accuracy.

One unifying theme found in each of the presented works is a desire to most effectively reduce the magnitude of geometric deviations based on the prior belief of the manufacturer as to what these deviations will be. In the literature, this prior belief can be defined by a predictive model or simply by the results of one or several sacrificial parts.

While this is a reasonable and beneficial goal, there are two aspects of these approaches worth considering. First, because all additive manufacturing methods are inherently complex combinations of several physical processes and engineered systems subject to constant variation, no model or sacrificial part will perfectly predict the magnitude of deviations across the surface of a given part. As a result, all predictions come with inherent uncertainty. Further, the effects of compensation itself are subject to natural variations in the printing process. Therefore, knowledge regarding the uncertainty of these outcomes is worth considering when determining when and where to apply compensation. If information regarding how the model performs on previously unseen data is known, it is desirable that this prior probability distribution influence the compensation that is performed. Second, not all improvements or reductions in geometric accuracy are equal in the eyes of a manufacturer.
A manufacturer might be able to employ a tool such as a grinder, or a hybrid manufacturing system [73], to correct for dimensions that are too large, but be unable to correct for dimensions that are too small in post-processing. In this case, inaccurate compensation that produces dimensions that are too small is far more costly than inaccurate compensation producing dimensions that are too large. A manufacturer might also have to meet certain tolerance requirements. In this case, compensation that puts a part within the required tolerances would be far preferable to compensation that leaves or puts the part's dimensions outside of them. Similarly, asymmetric tolerances [74] might be encountered, which could influence the significance of certain compensation errors. Intuitively, an ideal compensation strategy should take these considerations into account.

One possible tool for performing compensation while incorporating information regarding a method's uncertainty and a manufacturer's preferences is Multi-Attribute Utility Theory (MAUT) [75]. Under this approach, a decision maker starts by constructing a model that ascribes a dollar value to a set of conditions. In this case, these conditions might be the overall accuracy of the print, or whether it is within certain tolerances. Then, a von Neumann-Morgenstern utility is calculated from the given value function and the probability distribution of the various outcomes. The optimal compensation strategy is that which maximizes the expected utility.

Several examples of MAUT being applied to manufacturing decisions exist in the literature. One cluster of work has focused on the application of MAUT and Bayesian analysis to subtractive manufacturing. Abbas et al. [76] demonstrated the use of decision analysis in order to optimize profit for a manufacturer performing a milling operation. The decisions considered included which tools to use and which process parameters should be selected.
The cost due to tool wear and the labor to perform the milling operation were both major parts of this study. Hupman et al. [77] built on this approach by evaluating the effectiveness of different incentive structures for achieving optimal value for a manufacturer by properly incentivizing milling machine operators. Schmitz et al. [78] go into greater depth in describing the application of Bayesian analysis for this application, while Zapata-Ramos et al. [79] studied the value of information and experimentation in the context of efforts to optimize profit. Finally, Karandikar et al. [80] utilized Bayesian updating to predict tool life in these systems.

Xu and Huang [81] applied MAUT to analyze setup plans in the field of process planning. Their work provided a case study illustrating how to define the optimality of a setup plan by combining manufacturing error simulation with MAUT. Pergher and Teixeira de Almeida [82] applied MAUT to choose the proper parameters for a production plan under uncertainty. They later developed a multi-attribute utility model for choosing which dispatching rules to use in a job shop environment [83]. Other methods of decision analysis, such as the Analytic Hierarchy Process (AHP) and the weight and rate method, have been applied to AM, specifically for decisions related to which AM method or material to use, or which process settings to employ [84–86].

1.3.2 Contributions

The main contribution of this line of work is to propose a methodology by which the efficacy of a compensation strategy for AM can be evaluated given prior beliefs about the model's performance and a manufacturer's priorities. This allows a manufacturer to evaluate whether a given compensation strategy should be employed, or which should be chosen given multiple options. A further benefit of this approach is that the utility to the manufacturer of producing a given part with or without a compensation strategy is calculated as a dollar value.
This would aid in determining pricing strategies both for the parts themselves in a job shop setting and for software and models that enable compensation. The proposed methodology was evaluated on experimental data from both phases of the previous modeling research effort. This work shows that the conventional compensation strategy frequently fails to maximize a manufacturer's utility, and demonstrates how a simple modification to the strategy can greatly increase the expected utility of a given compensated print.

1.4 Shape Descriptor Representations

One side line of the literature that needs to be briefly examined is the field of shape descriptor representations. A large number of strategies for describing 3D shape information have been proposed in the literature [87]. One particularly relevant subset of these methods is the feature-vector approach, which seeks to describe the shape as a numeric vector – a particularly advantageous format for applications in machine learning [87]. Feature vector approaches can describe shapes using a number of tools, including derived statistics, volume metrics, surface geometry, and rendered images [87]. Two significant constraints are relevant for this application when considering such methods: 1) the method must be able to describe the geometry of the whole shape, and 2) the method must generate feature vectors specific to a particular location on the surface of the part, as the vector needs to be useful for making predictions of deviation in different locations. This could be thought of as describing the overall geometry from the perspective of that location.

One approach that meets these two criteria uses a spherical harmonic transform to describe concentric spherical cuts of a surface concisely in the form of a vector, and is proposed by [88,89]. The method starts by voxelizing the shape of interest. Then, a set of points is uniformly sampled from the surface of a sphere with a given radius.
These points are assigned a value of one if they fall within a voxel that is occupied by the surface of the shape, and zero otherwise. The set is then treated as samples from a spherical function, and a spherical harmonic transform is performed. The set of weights assigned to each included basis function in the transform is then returned as the feature vector. This approach is explained in greater detail in Chapter 4. One application of this approach, proposed by [89], was querying databases of tens of thousands of 3D shapes for those that are similar to a given shape in a computationally efficient manner. Because the spherical cuts can be centered at arbitrary points in the same dimensional space as the shape, the resulting feature vectors will differ based on the center point chosen.

1.5 Dental Additive Manufacturing

One final side line of the literature that needs to be covered is the unique set of demands posed by dental additive manufacturing, which will be used as a case study for evaluating some of the work in the chapters that follow. A key challenge posed by dental applications is a strict need for accuracy, as precise fit within a patient's mouth or accuracy in describing the shape of their teeth is necessary for positive clinical outcomes. Dental/orthodontic case models, for instance, are used to plan out a patient's treatment as well as provide a point of comparison once treatment is completed [90]. Standards given in the literature for the largest clinically acceptable distance between a case model and ground truth have included 0.3 mm [91,92] and 0.5 mm [93–96]. Clear orthodontic aligners and retainers, which have become increasingly popular among patients in recent years, are another significant use case for dental additive manufacturing. Some sources have estimated the largest clinically acceptable distance in this case to be around 0.25 mm [97–99].
Other areas, such as Prosthodontics, impose even more stringent requirements on clinically acceptable distances. Estimates for the largest clinically acceptable distance in fit for implant casts, for example, range from 0.059 mm on the conservative side to 0.150 mm at the highest [100–103]. Applications such as crowns and bridges can require still greater accuracy [104].

Another key challenge compounding the issue of strict accuracy requirements is the complexity of the parts that must be manufactured in the course of dental care. Full-arch dental models, like the one shown in Figure 1.3, for example, feature complex geometry with fine-resolution features that can be both concave and convex and vary widely in scale. This presents a high number of heterogeneous small details that must be produced with high accuracy [103].

One final challenge is the wide variety of printing technologies and machines available for dental applications. Methods of AM that have been explored in the literature include Digital Light Projection (DLP), Stereolithography (SLA), Fused Deposition Modeling (FDM), and Material Jetting [103,105]. These machines can range in cost and quality from high-end industrial-grade machines, often found in a lab, to lower-cost tabletop models, more likely to be found in a dental office. Because of this variety in manufacturing processes, there are significantly varying degrees of accuracy that one can expect from a printed part.

Figure 1.3: Full arch dental model.

A considerable amount of effort in the literature has been applied to assessing the accuracy of dental parts produced using AM. One specific application where this is particularly true is full-arch dental case models such as the one shown in Figure 1.3.
Even though the requirement for clinically acceptable accuracy is more lenient for this application than many others [91,92], a number of studies have identified deficiencies in the accuracy of full-arch models produced using commercially available printers that might render them clinically unacceptable depending on the standard used, while many others reported generally acceptable results [103]. This reflects the tremendous heterogeneity in the quality of machines, materials, and processes available in the marketplace. Camardella, et al., for example, assessed the accuracy of SLA-printed full-arch models, and found upper and lower intermolar distances to exceed clinically acceptable limits [106]. Kim, et al. evaluated the performance of four different AM methods: SLA, DLP, FDM, and Material Jetting [96]. Kim, et al. reported intercanine and intermolar width measurements for models produced using DLP and FDM that exceed the 0.3 mm clinically acceptable limit [96] found elsewhere in the literature [91,92]. Further, colored presentations showing dimensional deviations superimposed onto the CAD reference model for parts produced using SLA showed small regions that exceeded the clinically acceptable limit as well [96]. Rebong, et al. examined full-arch replications of plaster models produced using FDM, SLA, and Material Jetting [107]. Differences between the 3D printed parts and the plaster model were reported for a wide range of predefined dimensions on the teeth [107]. Each of the methods had a mean measured deviation value that exceeded 0.3 mm for at least one of the predefined dimensions on the models [107]. Shin, et al. studied the effect of various infill and structural settings on the resulting accuracy of full-arch models printed using a DLP printer [108].
Colored presentations showing dimensional deviations superimposed onto the CAD reference model showed regions that exceeded the clinically acceptable limit in sets of teeth where there was no cross-arch plate [108]. Lin, et al. evaluated the accuracy of analogs of full-arch models printed using SLA and DLP printers with a focus on potential degradation in accuracy due to storage over extended periods of time [109]. Measurements in the posterior region of the parts produced using SLA showed deviations that exceeded 0.3 mm [109]. One significant trend that was explicitly noted in many of the above studies involving SLA and DLP was the reduction of transversal dimensions in the posterior region of the arch, i.e., inward contraction of the arch [106–110]. The trend is also observable in the colored presentations showing dimensional deviations superimposed onto the CAD reference model given in several studies [96,108,110]. This trend will be further explored in the data found in Chapter 4.

1.6 Outline of Dissertation

The chapters that follow will cover research that has been completed in these areas. Chapter 2 discusses the topic of geometric registration and accuracy quantification. This work starts by evaluating common pitfalls in the most popular method for identifying geometric deviations in a 3D printed part found in the literature: registration and comparison of 3D scan point clouds using the iterative closest point algorithm. A set of modifications to this methodology is proposed as a means of addressing these issues and establishing a rigorous experimental procedure that produces stable and consistent measurements of geometric deviation. The proposed approach is shown to be less susceptible to pitfalls and produce more stable results than the conventional approach. Chapter 3 proposes a predictive model using the generated geometric deviation measurements.
The methodology proposed in this dissertation research is designed to enable predictions for unseen shapes based on small sets of data from previously manufactured shapes with differing geometry. This research utilizes a novel set of predictor variables alongside random forest modeling to enable modeling and predictions for shapes that are more complicated than those in previous experimental work in the literature. A demonstration experiment shows that compensating for the geometric errors predicted by this approach, by modifying the shape file, reduces the magnitude of geometric deviations by 44%. Chapter 4 presents a follow-up experiment that utilizes an additional set of features derived from a spherical harmonic shape representation of the objects being printed. This allows the method to efficiently capture geometric information regarding the global surface of the part in the form of a feature vector that can be ingested by a machine learning model. Modifications were made to the standard spherical harmonic feature vector conversion method found in the literature to allow it to produce feature vectors specific to a single point on a surface, a necessary prerequisite for the required task. The method was evaluated on a dataset of full-arch dental models, and shown to reduce the magnitude of geometric deviations by 42%. Chapter 5 addresses the challenge of optimally compensating for the predicted errors. While a number of approaches for doing this are given in the literature, this line of research proposes a novel method that draws on techniques from decision analysis and multi-attribute utility theory to incorporate manufacturer beliefs and preferences into the process. The result is a compensation strategy that optimizes the expected utility of a given part to a manufacturer based on the whole of their available information.
Lastly, Chapter 6 provides an opportunity for discussion of these topics, particularly with a focus on how they integrate into a holistic framework. Avenues for future research in this field are also discussed. Finally, preliminary development steps for a software tool that enables the automation of the above methodologies are described and the software is illustrated.

Chapter 2
Efficiently Registering Scan Point Clouds for Shape Accuracy Assessment and Modeling

2.1 Introduction

The first contribution of this chapter is to identify some of the challenges posed by ICP when applied to assessing the accuracy of AM-built parts. This discussion is necessary because ICP is one of the most frequently used tools for 3D scan point cloud alignment in the AM literature. It is important to note here that while ICP wasn’t proposed with manufacturing quality assessment in mind, it has become popular in this context. A number of these potential pitfalls will be discussed in depth in Section 2.2 and quantitatively evaluated using simulated data in Section 2.4. Other discussions of these pitfalls are available in the literature; however, this study seeks to uniquely make the issue evident in the context of AM. While other algorithms differ in how alignment is achieved, many of the points that will be covered in this chapter will potentially apply to them to a varying extent. Second, a systematic approach to registration of point clouds specifically for AM quality assessment will be presented in Section 2.3. This approach is designed to minimize variability and inaccuracies from ICP alignment, while keeping deviations from a part’s design as close to where they originated as possible. This is done through the use of geometric constraints on the ICP algorithm’s alignment based on manufacturing process knowledge. The methodology utilizes the ICP algorithm, but does not fundamentally alter it.
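To make the idea of a process-informed constraint concrete, here is a minimal sketch — not the dissertation’s implementation — of a point-to-point ICP loop restricted to the three degrees of freedom used later in Section 2.3.5: translation along x and y plus rotation about z. NumPy is assumed, nearest neighbours are found by brute force, and the demo clouds and numbers are invented.

```python
import numpy as np

def constrained_icp(scan, ref, iters=10):
    """Point-to-point ICP restricted to 3 DOF -- translation along x/y and
    rotation about z -- so an already-aligned build-plate plane stays put."""
    src = scan.copy()
    for _ in range(iters):
        # Brute-force nearest neighbours (fine for small demo clouds).
        d = np.linalg.norm(src[:, None, :] - ref[None, :, :], axis=2)
        tgt = ref[d.argmin(axis=1)]
        # Solve the 2D rigid Procrustes problem in the xy-plane only.
        a, b = src[:, :2], tgt[:, :2]
        ca, cb = a.mean(axis=0), b.mean(axis=0)
        U, _, Vt = np.linalg.svd((a - ca).T @ (b - cb))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:           # keep a proper rotation
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = cb - R @ ca
        src[:, :2] = src[:, :2] @ R.T + t  # z coordinates are never touched
    return src

# Invented demo: a 9x9 grid of points at two heights is the "CAD" cloud;
# the "scan" is the same cloud rotated 1.5 degrees about z and shifted in xy.
g = np.linspace(-1.0, 1.0, 9)
xy = np.array([(x, y) for x in g for y in g])
ref = np.r_[np.c_[xy, np.zeros(len(xy))], np.c_[xy, 0.5 * np.ones(len(xy))]]
th = np.radians(1.5)
Rz = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
scan = ref.copy()
scan[:, :2] = scan[:, :2] @ Rz.T + np.array([0.04, -0.02])

aligned = constrained_icp(scan, ref)
print(np.abs(aligned - ref).max())  # ~0: pose recovered, z never altered
```

Because the update step only ever moves points within the xy-plane, the bottom-plane assumption introduced in Section 2.3.3 cannot be violated by the registration, no matter how the iterations proceed.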
Finally, a quantitative case study will be conducted in order to assess the potential magnitude of deviations in ICP point cloud alignments before and after the application of the proposed registration methodology. This will also be described in Section 2.4.

2.2 Types of Registration Errors

Because of the potential biases introduced by the ICP algorithm, as well as the inherent challenges of non-rigid registration, the deviation measurements produced after registration of scan point clouds are not always indicative of a 3D printed part’s error. This section will examine a number of potential pitfalls. These alignment errors can be due to a number of factors, including selective scanning, error minimization bias, and convergence to a local minimum that is far from the global minimum.

2.2.1 Non-uniform Sampling/Scanning

The first category of errors that will be illustrated here is selective scanning-induced errors. Many factors, such as a surface’s reflectivity or angle with respect to a scanner, can impact the density of a scan point cloud. Further, someone manually scanning a part using a laser scanner, for instance, might conduct multiple passes of a certain region but only one of another region. This would result in a point cloud of varying density. It is also possible that certain surfaces on the shape might be inaccessible to the scanner, meaning that they remain unscanned. One example of this would be a deep recess in a part. Finally, when a part is scanned while resting on a worktop or other surface, the bottom of the part will often remain unscanned, since this region of the scan will be inaccessible to the 3D scanner unless multiple scans are performed. This results in a point cloud of just the top surfaces of a shape. This inconsistency in the density of a scan point cloud can greatly impact the alignment produced by the ICP algorithm.
Because the ICP algorithm seeks to minimize the mean of the squares of distances between the closest point pairs, registration will be biased towards alignments that minimize deviation in regions with greater point cloud density [26]. As a consequence, regions with lower point cloud density will be aligned with greater deviation. This effect is illustrated in 2D in Fig. 2.1. The top diagram shows how a scanned part with uniform lateral shrinkage would intuitively be registered against its CAD design. The bottom diagram roughly illustrates the effect of the differing point cloud density on final alignment.

Figure 2.1: (Top) CAD design and point cloud of part with lateral shrinkage. (Bottom) Alignment of point cloud with inconsistent density after ICP.

2.2.2 Deviation Minimization Bias Due to Unconstrained Registration

A similar issue occurs when a scan only contains points from the top surface of an object. This is often done out of necessity, since it is difficult to position a part in order to allow its entire surface to be scanned. In this instance, the ICP algorithm will work to minimize the deviation found on the top surface of the part, without the bottom surface acting as a constraint on the alignment. Consequently, the top surface will be pulled into near alignment with the reference part, irrespective of where the bottom of the scanned part would naturally be found. This can result in a substantial underestimation of dimensional inaccuracy. This situation is illustrated in Fig. 2.2. The top diagram shows a natural alignment of a part with inadequate height against its CAD design. It should be noted here that what constitutes a natural alignment between a deformed part and its CAD design depends on the assumptions that are made. In this case, it will be assumed that the bottom surface of a printed part is manufactured with perfect accuracy. This assumption will be explained further in Section 2.3.3.
It can be seen that under this alignment, there is substantial geometric deviation between the two shapes.

Figure 2.2: (Top) CAD design and point cloud of part with vertical shrinkage. (Middle) Alignment of point cloud without bottom points after ICP. (Bottom) Alignment of point cloud with bottom points after ICP.

The difference in alignments produced for shapes with and without bottom points is illustrated in the middle and bottom diagrams. Interestingly, even with bottom points, ICP aligns the point cloud in the center of the intended design. These two examples highlight a significant tendency of the ICP algorithm: to spread overall shape deviation. In the context of aligning two similar shapes, this makes sense; however, in the context of finding dimensional deviations in 3D printed parts, this is problematic. The alignment that minimizes the mean of the squares of distances between closest point pairs might not adequately represent a printer’s build errors. If a printed part only deviates from its design in a specific area, for instance, this error can be spread across the whole shape. This can make it appear that there is less error than there actually is, and remove deviations from the region in which they were produced. In the bottom diagram of Figure 2.2, for instance, while the bottom surface of the dome was printed without any deviation, it will show deviation. Conversely, the top surface will show less deviation from its design than is actually present. These pitfalls occur because the ICP algorithm isn’t constrained by the realities of the manufacturing process that was used to produce the part being evaluated. Engineering-informed assumptions derived from how the part was created help a user to determine what alignments make sense in light of prior knowledge. Unconstrained ICP doesn’t benefit from this knowledge.
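The minimization biases of Sections 2.2.1 and 2.2.2 can be reproduced numerically with a deliberately simplified model: a translation-only least-squares alignment with fixed correspondences, where the optimal shift is simply the mean of the point-pair differences, so oversampled regions dominate the result. The 1D geometry and sampling densities below are invented for illustration, and NumPy is assumed; full ICP behaves analogously.

```python
import numpy as np

# Reference segment: points along the x-axis from 0 to 10.
# "Scan": the part shrank laterally, so scanned positions are 0.95 * reference.
ref_left  = np.linspace(0.0, 5.0, 300)   # left half scanned densely
ref_right = np.linspace(5.0, 10.0, 100)  # right half scanned sparsely
ref = np.concatenate([ref_left, ref_right])
scan = 0.95 * ref                        # uniform 5% lateral shrinkage

# Translation-only least squares with fixed correspondences: the optimal
# shift is the mean residual, which is dominated by the dense left half.
shift = np.mean(ref - scan)

# Remaining deviation after alignment, per half.
resid = ref - (scan + shift)
left_err  = np.mean(np.abs(resid[:300]))
right_err = np.mean(np.abs(resid[300:]))

print(f"shift = {shift:.4f}")                          # 0.1875
print(f"mean |deviation|, dense left half:   {left_err:.4f}")
print(f"mean |deviation|, sparse right half: {right_err:.4f}")
```

With uniform sampling, the same computation would center the residual error symmetrically across the part; here the densely scanned half is aligned preferentially and the sparse half retains most of the measured deviation, consistent with the behavior sketched in Fig. 2.1.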
2.2.3 Local Minimum Errors

The ICP algorithm has been proven to always converge monotonically to a local minimum solution as defined by the mean-square distance function [26]. One issue with this is that the most logical alignment of a point cloud may not be the local minimum that the ICP algorithm converges to. In the context of scanned point clouds of 3D printed parts, the globally optimal solution may not be the most logical alignment either, as was discussed in the previous section. An extreme example of this issue is illustrated in Fig. 2.3. Here, the algorithm gets trapped in a local minimum that is clearly not a reasonable alignment of the two point clouds. Subtle versions of this can prove to be more problematic.

Figure 2.3: Illustration of registration results after convergence to an inaccurate local minimum.

2.3 Error Reduction Strategy

In order to address some of these potential pitfalls, a methodology for producing repeatable and reasonable alignments in the context of additive manufacturing quality control using the ICP algorithm is presented here. To address issues due to non-uniform scan density, uniform resampling of the scan point cloud is performed. The method also uses engineering-informed assumptions about the 3D printing process to constrain the ICP algorithm and limit deviation minimization bias. Finally, a strong initial alignment is produced before ICP is utilized, increasing the likelihood that ICP converges to a reasonable minimum. An overview of the method is given by the flowchart in Fig. 2.4.

2.3.1 Initial Positioning of Scan Point Cloud

It is necessary to first align the scanned point cloud by eye as closely with the reference CAD surface as possible. This helps to prevent the more egregious variety of local minimum errors discussed in Section 2.2.3. Methods such as the one presented in [20] might be utilized to simplify this process.

Figure 2.4: Flowchart of the proposed procedure.
2.3.2 Scan Point Cloud Segmentation

It will be assumed that the part was resting on a flat worktop during scanning. The second step in this procedure is separating the scan point cloud into two sets: table points Q_table and shape points Q_shape. Table points are generated when the scanner scans the surface the object is resting on. This is illustrated in Fig. 2.5. This separation can be achieved using an automated algorithm, such as the one presented in [111] and implemented in CloudCompare.

Figure 2.5: Table points (red) and shape points (green).

2.3.3 Reorientation of Scan Point Cloud

Once the scanned point cloud is segmented, it is necessary to perform an initial alignment with respect to the ideal reference shape. This is done using an engineering-informed assumption. For many extrusion, vat-polymerization, selective sintering, and directed energy deposition-based 3D printing processes, material is applied to a roughly perfectly flat build plate to form the bottom of an additively manufactured part. In this case, it becomes reasonable to assume that the bottom of a 3D printed part is produced with perfect accuracy. For some AM processes, this assumption might not be reasonable. One example of this is a part built in the middle of a powder bed fusion build chamber. It should also be noted that in the presence of warping, this assumption would no longer be valid. While this requirement is a strong one, it should be noted that with care, it can be applied to many situations. This is because the aforementioned compatible AM methods make up the vast majority of the AM market share. If this assumption is reasonable, then the print errors on the object can be attributed to the rest of the object’s surface. This implies that the planes representing the bottom of the scanned point cloud and the bottom of the reference CAD shape should be parallel and intersecting. This can be implemented according to the following procedure.
If a plane f(x, y, z) = β₁x + β₂y + β₃z + β₀ = 0 is fit to the table points Q_table, then the vector N_Scan = ∇f = [β₁, β₂, β₃]ᵀ points in the direction normal to the bottom surface of the scanned point cloud. N_Scan should be made equal to N_CAD, which in most cases will be [0, 0, −1]ᵀ. This is illustrated in Fig. 2.6. Once the bottom of the scan point cloud is made parallel to the bottom of the CAD reference shape, it is necessary to remove any distance between the two parallel planes.

Figure 2.6: Alignment of bottom plane of scan point cloud to CAD reference.

In the case that the bottom of the CAD reference shape falls on the x-y plane, β₀ of plane f should be set to zero, resulting in a translation along the z-axis. Once this final alignment is achieved, the table points can be disregarded. Translation and rotation of the scan point cloud can be achieved using affine transformation matrices. One computationally inexpensive algorithm for producing this alignment is presented by Möller and Hughes [112], and can be utilized for this purpose.

2.3.4 Point Cloud Resampling with SPSR to Achieve Uniform Point Density

In the event that the scan point cloud is unevenly dense, sparse, or contains many unreasonable outliers, it is possible to generate a more consistent point cloud using Screened Poisson Surface Reconstruction (SPSR). This algorithm is explained in [113] and implemented in many open source applications. The first step of this process is to generate a mesh from the point cloud using SPSR. Then a large number of points can be randomly sampled from this mesh, resulting in a more uniform scan point cloud. One tradeoff of this approach is the potential smoothing of fine features. It is important to monitor this, and adjust parameters accordingly. SPSR is illustrated in Figs. 2.7 and 2.8.

Figure 2.7: Scan point cloud.
Figure 2.8: Triangular mesh of scan point cloud after screened Poisson surface reconstruction.
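The reorientation step of Section 2.3.3 can be sketched as follows, assuming NumPy. The plane fit uses an SVD of the centered table points, and the rotation uses a standard Rodrigues (axis-angle) construction rather than the Möller–Hughes routine cited above; the tilted table points in the demo are synthetic.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through points; returns unit normal n and offset d
    such that n . p + d = 0 (n plays the role of [b1, b2, b3], d of b0)."""
    centroid = points.mean(axis=0)
    # Normal = right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]
    return n, -n @ centroid

def reorientation_transform(table_points, n_cad=np.array([0.0, 0.0, -1.0])):
    """4x4 affine making the fitted table plane parallel to the CAD bottom
    plane (normal n_cad) and translating it onto z = 0."""
    n_scan, _ = fit_plane(table_points)
    if n_scan @ n_cad < 0:          # resolve the sign ambiguity of the fit
        n_scan = -n_scan
    # Rodrigues (axis-angle) rotation taking n_scan onto n_cad.
    v = np.cross(n_scan, n_cad)
    c = float(n_scan @ n_cad)
    if np.linalg.norm(v) < 1e-12:   # already parallel
        R = np.eye(3)
    else:
        vx = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
        R = np.eye(3) + vx + vx @ vx * (1.0 / (1.0 + c))
    T = np.eye(4)
    T[:3, :3] = R
    # After rotation the table plane is horizontal; shift it to z = 0.
    T[2, 3] = -(table_points @ R.T)[:, 2].mean()
    return T

# Synthetic demo: a table plane tilted 10 degrees about the x-axis, offset in z.
theta = np.radians(10.0)
Rx = np.array([[1, 0, 0],
               [0, np.cos(theta), -np.sin(theta)],
               [0, np.sin(theta),  np.cos(theta)]])
g = np.stack(np.meshgrid(np.linspace(-5, 5, 20), np.linspace(-5, 5, 20)), -1).reshape(-1, 2)
table = np.c_[g, np.zeros(len(g))] @ Rx.T + np.array([0.0, 0.0, 3.0])

T = reorientation_transform(table)
aligned = (np.c_[table, np.ones(len(table))] @ T.T)[:, :3]
print(np.abs(aligned[:, 2]).max())  # ~0: table points now lie on z = 0
```

In practice the same 4×4 matrix T would then be applied to the shape points Q_shape as well, after which the table points can be discarded as described above.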
2.3.5 ICP Implementation

At this point, fine-resolution registration can be obtained using the ICP algorithm. In order to ensure that the assumptions applied in Section 2.3.3 remain in effect, it is necessary to constrain the ICP algorithm to only perform translation along the x and y-axes, and rotation about the z-axis. In this way, the bottom planes of each shape will remain parallel and intersecting. As a result, the ICP algorithm will go from six degrees of freedom to three.

2.3.6 Deviation Calculation

Once this registration is performed, deviations can be calculated as the distances from vertices on the CAD reference mesh to vertices on the scan point cloud. Results should be checked to ensure that they make intuitive sense.

2.4 Validation Experiment

In this section, several of the registration issues discussed in Section 2.2 will be demonstrated quantitatively using simulated data. This will be used to illustrate the potential magnitude of registration errors. Second, real data will be registered using unconstrained ICP and the methodology presented in Section 2.3. The results produced by each of these techniques will be compared and discussed. All experiments were carried out using CloudCompare [28], open source software for manipulating and performing computations on point clouds and meshes, as well as MATLAB.

2.5 Demonstration of Registration Errors

The first set of experiments here seeks to demonstrate quantitatively the effects of the previously discussed errors using simulated data. An STL file of an egg-shaped part (80 mm x 40 mm x 20 mm) is first duplicated in order to create a reference design STL model, as well as a part into which error will be introduced. The second part is then modified in order to introduce a specific error. The initial alignment of these parts is considered ground truth, since their relative positions and orientations were not changed during the introduction of the dimensional errors.
Intuitively, then, inspection of the differences between the two shapes would show precisely the errors that were introduced. After this, a point cloud is generated based on the modified STL. Finally, deviations are determined by measuring the distance from each vertex on the top surface of the design STL file to the point cloud, which simulates the result of a laser scan. Deviations are first calculated using the initial ground truth alignment. Then, registration of the scan point cloud to the mesh is performed using 50,000 sample points, and deviations are measured again. This allows the alignment errors introduced by the registration process to be evaluated.

The first error to be simulated is uneven point cloud density. Dimensional inaccuracy was introduced into the part by shrinking it along the x-axis by a factor of 5%. This results in the dimensional deviations shown on the left side of Fig. 2.9. In the following figures, surfaces that are blue correspond to dimensions that are too small, while red corresponds with dimensions that are too large. The top surface of each part is shown in the figures. The green axis in the bottom left corner of each figure corresponds to the positive y-axis, while the red corresponds to the positive x-axis. The positive z-axis comes out of the page. The right half of the part’s corresponding point cloud was generated using 300,000 points, while the left half was generated using 100,000 points. After applying unconstrained ICP registration, the scanned point cloud is shifted 0.318 mm along the x-axis. Thus, while the average magnitude of deviations remains similar, their location changes. The deviations measured using each alignment as well as the extent of the introduced alignment error are given in Table 2.1.
The affine transformation matrix describing the change in alignment produced by ICP registration is (mm):

T = [  1       0      0.003   0.318
       0       1      0      -0.006
      -0.003   0      1       0.116
       0       0      0       1     ]   (2.1)

Table 2.1: Comparison of measured deviations before and after registration - Fig. 2.9.

                                                          Before Registration   After Registration
Average Magnitude of Deviation for Vertices on STL (mm)         0.380                 0.363
Average Deviation for Vertices on STL (mm)                      0.378                 0.308
RMSE from Registration (mm): 0.226

Figure 2.9: Point cloud with uneven density: deformed part deviations before registration (left) and after registration (right).

The second error to be simulated is improper calibration along the z-axis, meaning that the height of the part is 5% too small. While the bottom layers of the part print with reasonable accuracy, each subsequent layer increases the absolute dimensional inaccuracy of the layers above it. In the first case, illustrated in Fig. 2.10, the scanned point cloud of this part has no bottom. In the second case, illustrated in Fig. 2.11, the scan includes the bottom of the hypothetical measured part. After applying unconstrained ICP registration for the first case, the scanned point cloud is shifted 0.690 mm along the z-axis. For the second case, the scanned point cloud is shifted 0.288 mm along the z-axis.

Table 2.2: Comparison of measured deviations before and after registration - Fig. 2.10.

                                                          Before Registration   After Registration
Average Magnitude of Deviation for Vertices on STL (mm)         0.401                 0.141
Average Deviation for Vertices on STL (mm)                      0.401                -0.036
RMSE from Registration (mm): 0.471

Figure 2.10: Point cloud without bottom: deformed part deviations before registration (left) and after registration (right).

Table 2.3: Comparison of measured deviations before and after registration - Fig. 2.11.
                                                          Before Registration   After Registration
Average Magnitude of Deviation for Vertices on STL (mm)         0.402                 0.240
Average Deviation for Vertices on STL (mm)                      0.401                 0.219
RMSE from Registration (mm): 0.197

It can be seen that in both cases, registration moves the part upwards, reducing the magnitude of the deviations detected across the surface of the part. In the case where the point cloud includes bottom points, the magnitude of this shift is substantially smaller. One noteworthy observation is that in the first case, registration moves the scan point cloud so far upwards that negative deviations are turned into positive deviations around the bottom edges. This magnitude shift can present an especially difficult challenge for efforts to predict and model dimensional accuracy (Tables 2.2 and 2.3).

Figure 2.11: Complete point cloud: deformed part deviations before registration (left) and after registration (right).

2.5.1 Evaluation of Proposed Registration Methodology

A second experiment was then carried out to evaluate the impact of the changes to ICP introduced by the proposed registration methodology. The objective was to determine whether the proposed methodology generates deviation values that differ from those produced by ICP in a statistically significant sense. In this experiment, four different shapes were printed on a fifth-generation MakerBot Replicator FDM 3D printer. These shapes are shown in Fig. 2.12. Each shape was scanned three times using a Romer Arm 73 Series 7325 manufactured by Hexagon Manufacturing Systems with an accuracy of ±80 µm. For each scan, three different users produced alignments using both unconstrained ICP and the proposed method.

Figure 2.12: CAD models of the parts used in the experiment.

Because the parts are printed with randomly generated errors from the manufacturing process, there are no ground truth deviation values to compare registration outcomes against.
The interaction between method and scan is included as different scans of the same object will likely interact differently with the ICP method, and potentially the proposed method. The interaction between method and user is included due to potential impact from the user’s initial alignments on final results. Such an impact is undesirable. Finally, interactions between user and scan are accounted for in the model. Before the proposed model is evaluated, it is helpful to look at one instance of align- ments in depth for a single part and scan. It can be seen in Fig. 14 that unconstrained registration pulls the scan point cloud farther upwards than the proposed registration method. The scanned point cloud is shifted -0.046 mm along the x-axis, -0.019 mm along the y-axis and 0.128 mm along the z-axis. This results in a reduction in average mag- nitude of deviations by more than 10%. Further, sections of the top surface of the part are considered too large as a result of this shift. This conforms well with the prediction 41 Figure 2.13: Illustration of the experimental design. that ICP will underestimate deviations that was presented in Section 2.2.2. A comparison of the measured deviations produced by the two different methods is given in Table 2.4. The affine transformation matrix describing the change in alignment produced by ICP registration is (mm): T = 1 0.002 0.001 −0.046 −0.002 1 0.001 −0.019 −0.001 −0.001 1 0.128 0 0 0 1 (2.3) The linear mixed-effects model was fit using the lme4 package in R via restricted maximum likelihood estimation (REML) [114]. REML is a popular form of maximum 42 Table 2.4: Comparison of measured deviations produced by unconstrained ICP and the proposed method - Fig. 2.14. 
Unconstrained ICP Proposed Method Average Magnitude of Deviation for Vertices on STL (mm) 0.085 0.096 Average Deviation for Vertices on STL (mm) 0.017 0.089 RMSE Difference Between Deviations for Each Method (mm) 0.092 Figure 2.14: Deviations using proposed method (left) and deviations using unconstrained ICP (right). likelihood estimation for fitting linear mixed-effects models [115]. One advantage of this method over maximum likelihood estimation is that it produces unbiased estimates of variance parameters. After the proposed model was fit, terms representing statistically insignificant effects were systematically removed using backwards elimination until only statistically significant effects remained. This resulted in the following simplified model: y =η +τ i +β k + (τγ) il(k) +ϵ ijkl (2.4) which yields the analysis given in Tables 5 and 6. The random effects due to both user and scan were not found to be significant. As a result, the proposed method was not 43 shown to be sensitive to operator changes. This is a necessary condition for a registration methodology, as it must produce repeatable results between operators. Interestingly, while the effect of scan variation itself wasn’t shown to be significant in this model, the interaction between the method used and scans did prove to be significant. This makes intuitive sense in light of Section 2.2.1. Table 2.5: Analysis of model fitting results for the fixed effect. Estimate Std. Error t Value Pr(> |t|) (Intercept) 0.265446 0.127317 2.085 0.128182 Proposed Method 0.037712 0.008713 4.329 0.000362 Table 2.6: Analysis of model fitting results for random effects. Std. Dev. Pr(> Chisq) Part (Intercept) 0.254336 < 2.2e-16 Method:Scan (Intercept) 0.021314 < 2.2e-16 Residual 0.001876 It can be seen in the fixed effect table that there is a statistically significant difference between the RMS of deviations produced by each of the methods. The proposed method tends to generate deviations of a greater magnitude. 
This is consistent with the tendency of unconstrained ICP to minimize deviations, which was demonstrated in Section 2.5. As a result, the experiment provides positive support for the hypothesis that the proposed method is less likely to underestimate geometric deviations.

2.6 Conclusion

In conclusion, several potential challenges for obtaining quality alignments of scan point clouds using ICP registration were discussed. These challenges were then illustrated using simulated data, which allowed for quantification of their impact on the accuracy of deviation measurements. The impact of each of these registration issues was shown to be significant enough to noticeably affect the measured deviations using simulated data. A method to address some of these challenges, based on engineering-informed assumptions, was presented. This method was applied to real scan point cloud data and compared to unconstrained ICP registration via a design of experiments approach. Differences between the magnitudes of deviations produced by the alignments from each method were shown to be significant, while operator effects were not. The consistent and attributable measurements of deviation produced by this method serve to enable the work that will follow.

Chapter 3
Prediction and Compensation via Mesh-Based Feature Vectors

3.1 Feature Extraction for Triangular Mesh-Based Shape Deviation Representation

Modeling complex surfaces that cannot be easily described analytically presents a challenge to many existing modeling methodologies. One way to address this challenge is the use of a finite number of predictor variables that capture certain geometric properties of a surface that are deemed relevant based on prior engineering-informed knowledge. These predictor variables can be computed for an evenly distributed set of points across the surface of an object.
These points will then function as instances in the model for which predictions can be made and to which position modifications can be applied for the purpose of compensation. Here, a set of eight predictor variables x, corresponding to each relevant property under consideration, is constructed using feature extraction from a triangular mesh describing the shape to be printed. For simplicity, the vertices that make up the shape's triangular mesh can be considered the instances in the model. To produce an unbiased model, it is necessary that the triangular mesh be uniformly dense across the surface of the shape and have triangles of consistent size. This can be achieved by remeshing an object's STL file using one of several algorithms [116]. This chapter considers three broad areas of phenomena that have been shown to affect print accuracy. These include position within the print bed, orientation and curvature of a surface, and thermal expansion effects.

3.1.1 Position-Related Predictors

The first area of significance for feature extraction is the physical position of a vertex in a print bed. Several studies have demonstrated that position within a printer's print bed is significantly correlated with the resulting accuracy of printed parts [47,49]. In the context of FDM, this location dependency can be connected to extruder positioning, while for other processes like digital light processing, it can be connected to optical variation [117]. For the n-th vertex p_n, the first three predictor variables (x_{n,1}, x_{n,2}, x_{n,3}) used in this model correspond to the x, y, and z coordinates of each vertex. These predictors seek to capture errors related to the actual position of the printed object within the print bed. For the validation experiment that will follow, objects were positioned in the slicing software so that the position values from the STL file were exactly each vertex's position within the 3D printer's print bed.
One implication of this is that the same object printed in different orientations or locations will have different predictor sets.

3.1.2 Surface Orientation and Curvature Predictors

The next area of significance is the orientation and curvature of a surface. This is used due to the association of properties such as surface slope with common print errors [118]. Furthermore, surface curvature in the x–y plane can influence how the material is deposited. The next four predictor variables are derived from the set of normal vectors corresponding to the g_n triangular faces adjacent to a given vertex n. Each normal vector S_i = (1, \phi_i, \theta_i), i = 1, 2, \ldots, g_n is expressed in spherical notation with radius 1, an elevation angle, and an azimuth angle. The predictor variables are calculated as follows and illustrated in Fig. 3.1, which depicts how these predictor variables would be calculated for a single vertex n (or instance) on the triangular mesh, shown as a black dot on the mesh and in the expanded view to the left.

x_{n,4} = \mathrm{median}(\{\theta_i, i = 1, \ldots, g_n\})    (3.1)

x_{n,5} = \max(\{\theta_i, i = 1, \ldots, g_n\}) - \min(\{\theta_i, i = 1, \ldots, g_n\})    (3.2)

x_{n,6} = \mathrm{median}(\{\phi_i, i = 1, \ldots, g_n\})    (3.3)

x_{n,7} = \max(\{\phi_i, i = 1, \ldots, g_n\}) - \min(\{\phi_i, i = 1, \ldots, g_n\})    (3.4)

The first of these predictor variables is the median value of the azimuth angles in the set. This can be interpreted as the direction that the geometric features are facing, which is a useful term for predicting shape-related print errors. In Fig. 3.1, the vector that represents the median azimuth and elevation angle is shown in red, both on the triangular mesh and in the expanded view. The first predictor variable is shown as θ. The second of these variables is the range in azimuth angles (i.e., max θ_i − min θ_i). This can be interpreted as an indicator of the curvature of the surface.
Changes in curvature can affect how material is deposited from the extruder in material extrusion-based processes, or how energy is concentrated on feedstock in powder bed fusion-based processes, and so on. In Fig. 3.1, the third variable is shown as ϕ. The third of these variables is the median value of the elevation angles in the set. This can be interpreted as the slope of the geometric features, which is of particular interest due to the correlation between slope and common print errors. This variable is also useful for detecting overhangs, which can be difficult to print accurately. Finally, the fourth of these variables is the range in elevation angles (i.e., max ϕ_i − min ϕ_i). This can be interpreted as the degree to which the slope changes over the surface described by the triangular faces, which has relevance to shape-dependent errors.

Figure 3.1: Normal vector for each face surrounding a vertex with median vector.

3.1.3 Material Expansion/Shrinkage Predictor

The final predictor variable proposed here is the distance from each vertex to an axis in the z-direction placed at the center of each shape (which, in the case of the validation experiment, intersects the point (0, 0, 0) on the printer's build platform):

x_{n,8} = \sqrt{x_{n,1}^2 + x_{n,2}^2}    (3.5)

This distance between the z-axis placed at the center of the shape and a single vertex (or instance) n on the triangular mesh is shown in Fig. 3.2. The feature is of significance due to the thermal expansion effects of the printed materials [119]. If an object is formed at a high temperature, as it cools, the printed material's coefficient of linear thermal expansion dictates the degree to which its overall size is reduced. Such temperature changes can lead to warping, residual stresses, and dimensional inaccuracies [120,121]. This is further complicated by the fact that heat can be concentrated at different locations over short periods of time. Objects of larger size expand and contract by a greater absolute distance due to scaling.
Points on the surface that are at a greater distance from what can be considered the center of the object will therefore experience a greater degree of displacement. This necessitates accounting for a proxy of a point's distance from the rough center of expansion. Given an STL file, the set of these predictor variables can be quickly calculated for each vertex. They give a good indication of the relevant geometric factors that can influence the accuracy of a 3D print. The relative efficacy of each of these predictor variables will be briefly evaluated later in this chapter.

Figure 3.2: Distance between vertex and central z-axis.

3.2 Shape Deviation Measurement and Calculation

A procedure for measuring deviations across the surface of a printed object is presented here. It is important that deviation values be calculated at each vertex on an object's triangular mesh. This allows deviations to be used as the response variable corresponding to each set of predictor variables. The procedure begins by producing a dense point cloud of measurements of the surface of a 3D printed object. In the validation experiment described later, each object was scanned using a ROMER Absolute Arm with an attached laser scanner manufactured by Hexagon Manufacturing Intelligence. According to the manufacturer, this scanner has an accuracy of 80 µm. The objects were each scanned with several passes from different angles so as to create scans with between 500,000 and 1.6 million points. In comparison, each design STL file has approximately 50,000 data points. Registration is performed according to the methodology presented in Chapter 2. The shortest distance between each vertex v_n and the scanned mesh is returned in the form of a vector d_n. The magnitude of deviation in the direction normal to the triangular mesh at each vertex is then calculated as y_n = u_n · d_n, where u_n is the vector (x_{n,4}, x_{n,6}, 1) expressed in Cartesian coordinates.
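As a sketch of this projection (NumPy; the deviation vector and angles are hypothetical, and the spherical-to-Cartesian convention, with elevation measured up from the x–y plane, is an assumption of this illustration):

```python
import numpy as np

def unit_normal(azimuth, elevation):
    """Convert (azimuth, elevation, r = 1) to a Cartesian unit vector,
    assuming elevation is measured up from the x-y plane."""
    return np.array([
        np.cos(elevation) * np.cos(azimuth),
        np.cos(elevation) * np.sin(azimuth),
        np.sin(elevation),
    ])

def normal_deviation(d, azimuth, elevation):
    """Signed deviation y_n = u_n . d_n along the surface normal at a vertex."""
    return float(np.dot(unit_normal(azimuth, elevation), d))

# Hypothetical vertex on a flat top surface (normal = +z) whose scanned
# surface sits 0.1 mm above the design surface: deviation is +0.1 mm.
y_n = normal_deviation(np.array([0.0, 0.0, 0.1]), azimuth=0.0, elevation=np.pi / 2)
```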
Signs correspond to whether the deviation represents a dimension that is too large or too small. This results in a set of response values representing deviations that are normal to the surface of the designed triangular mesh. These values are used as the set of response variables y_1 through y_N. For a training dataset containing multiple printed parts, the data \{(x_n, y_n), n = 1, 2, \ldots, N\} is the ensemble of the total N vertices from all of the shapes. Note that each vertex may have a different number of adjacent triangular faces. For the validation experiment conducted in Section 3.4, for example, there are four triangular mesh files that correspond to four different shapes that are all included in the training dataset.

3.3 Random Forest Model to Predict Shape Deviation With Extracted Features

To learn and predict shape deviations, it is necessary to develop a predictive model based on the training data. Because triangular mesh files often contain tens of thousands of vertices, the size of the datasets generated by this method can be cumbersome, posing a computational challenge for machine learning methods. Conversely, because of the small number of example shapes that might be available for model training, the approach must also be flexible and generalize well under covariate shift. One computationally efficient modeling approach that can be utilized in this situation is the random forest method. One way to quantify the computational efficiency of a machine learning algorithm is time complexity, which reflects the number of computations that must be performed to generate a model, and thus time. The random forest algorithm has a worst-case time complexity on the order of O(MK\tilde{N}^2 \log(\tilde{N})), where M is the number of trees in the random forest, K is the number of variables drawn at each node, and \tilde{N} is the number of data points N multiplied by 0.632, since bootstrap samples draw 63.2% of data points on average [122].
As a point of comparison, an algorithm such as Gaussian process regression has a worst-case time complexity on the order of O(N^3) [123]. For the training sets utilized in the proof-of-concept experiments that will follow, this is roughly three orders of magnitude more complex.

3.3.1 Random Forest Method

Researchers have successfully applied machine learning to make accurate predictions in a wide range of applications related to manufacturing. One particularly popular algorithm is random forest modeling, which has been applied to predicting the surface roughness of parts produced with AM, fault diagnosis of bearings, and tool wear, to name just a few use cases [124–126]. The random forest algorithm is a means of supervised ensemble learning originally conceived by Breiman [127]. It utilizes regression or classification trees, a machine learning method that recursively segments a given dataset into increasingly small groups based on predictor variables, allowing it to produce a response value given a new set of predictors [127]. The resulting structure of this segmentation process resembles the roots of a tree and is shown in Fig. 3.3. The random forest algorithm constructs an ensemble, or forest, of these trees, each trained on a subset of the overall dataset. This process is explained in further detail later and is illustrated in Fig. 3.4.

Figure 3.3: Single regression tree using random forest.

Figure 3.4: Ensemble of trees using random forest.

The goal of a regression tree is to generate a set of rules that efficiently segment the given training set using predictor variables in a way that generates accurate predictions of a response variable. This process begins with a single node and randomly chooses a set of predictor variables to be used in dividing the dataset.
Given P total predictor variables, it is generally recommended that the number of predictor variables sampled at each node be set to P/3 in the case of regression and \sqrt{P} in the case of classification [128]. Using this subset of predictor variables, the algorithm seeks to split the data at the node in a manner that minimizes the sum of squared errors for each response label y_i:

SSE = \frac{1}{N_{node}} \sum_{i=1}^{N_{node}} (y_i - \bar{y})^2    (3.6)

where

\bar{y} = \frac{1}{N_{node}} \sum_{i=1}^{N_{node}} y_i    (3.7)

This process is then repeated for each resulting node until a predetermined condition is met. Two common conditions are a predetermined minimum number of data observations at a node and a maximum tree depth. Once the stopping condition is met, each of the terminal nodes is labeled with the average value of the responses for the observations contained by that node. New predictions are generated by using a set of predictor variable values to navigate down the tree until arriving at a terminal node, which corresponds to the predicted response value. The random forest algorithm begins by generating subsets, or "bootstrap samples," from the overall dataset. These bootstrap samples are drawn randomly from the overall dataset with replacement, allowing for some data to be shared between samples [127]. A regression tree is then trained for each bootstrap sample. To make predictions using a generated forest, the predictor variables are used to generate individual predictions from each tree. The average of this set of predictions is then given as the overall output of the ensemble. One benefit of the random forest algorithm for this application is that the addition of irrelevant data (predictor sets highly dissimilar to those for the predicted part) does not strongly affect predictions based on the most relevant data. In this way, the individual trees can naturally accommodate diverse datasets in training without substantial degradation in prediction quality.
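The splitting criterion of Eqs. (3.6)–(3.7), bootstrap sampling, and ensemble averaging described above can be sketched from scratch in a few dozen lines of Python. This is a didactic toy, not the implementation used in the experiments; an optimized library such as scikit-learn would be used in practice.

```python
import random
import statistics

def best_split(X, y, feature_idxs):
    """Choose the (feature, threshold) pair minimizing total SSE (Eq. 3.6)."""
    def sse(vals):
        mean = statistics.fmean(vals)
        return sum((v - mean) ** 2 for v in vals)
    best = None
    for f in feature_idxs:
        for t in sorted({row[f] for row in X}):
            left = [yi for row, yi in zip(X, y) if row[f] <= t]
            right = [yi for row, yi in zip(X, y) if row[f] > t]
            if left and right:
                score = sse(left) + sse(right)
                if best is None or score < best[0]:
                    best = (score, f, t)
    return best

def grow_tree(X, y, k, min_node, rng):
    """Split recursively; a node becomes terminal (mean response) when it
    holds fewer than min_node observations or cannot be split further."""
    if len(y) < min_node or len(set(y)) == 1:
        return statistics.fmean(y)
    split = best_split(X, y, rng.sample(range(len(X[0])), k))  # K features per node
    if split is None:
        return statistics.fmean(y)
    _, f, t = split
    left = [(row, yi) for row, yi in zip(X, y) if row[f] <= t]
    right = [(row, yi) for row, yi in zip(X, y) if row[f] > t]
    return (f, t,
            grow_tree([r for r, _ in left], [yi for _, yi in left], k, min_node, rng),
            grow_tree([r for r, _ in right], [yi for _, yi in right], k, min_node, rng))

def tree_predict(node, row):
    while isinstance(node, tuple):
        f, t, low, high = node
        node = low if row[f] <= t else high
    return node

def grow_forest(X, y, n_trees, k, min_node, seed=0):
    """Train each tree on a bootstrap sample drawn with replacement."""
    rng = random.Random(seed)
    forest = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(y)) for _ in range(len(y))]
        forest.append(grow_tree([X[i] for i in idx], [y[i] for i in idx],
                                k, min_node, rng))
    return forest

def forest_predict(forest, row):
    """Ensemble output: the average of the individual tree predictions."""
    return statistics.fmean(tree_predict(tree, row) for tree in forest)
```

For the eight predictors of Section 3.1, the regression heuristic above would set k ≈ P/3; the validation experiment in Section 3.4 uses 30 trees and a 200-observation minimum node size.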
3.3.2 Feature Selection

To gain an understanding of the relative importance of the predictor variables used in this model, the out-of-bag permuted predictor change in error for each predictor variable was calculated during the validation experiment [127–129]. For each tree in the random forest, the training set data that was not used to train the tree, also referred to as the out-of-bag observations \{(x_n, y_n), n = 1, 2, \ldots, N_{out\text{-}of\text{-}bag}\}, is used to generate a set of predictions of the response variable \{\hat{y}_n, n = 1, 2, \ldots, N_{out\text{-}of\text{-}bag}\}. The mean squared error (MSE) of this set of predictions, i.e., the out-of-bag error, is defined as follows:

MSE_{out\text{-}of\text{-}bag} = \frac{\sum_{n=1}^{N_{out\text{-}of\text{-}bag}} (\hat{y}_n - y_n)^2}{N_{out\text{-}of\text{-}bag}}    (3.8)

Then, for the first predictor variable x_{:,1}, each of its values in the dataset is permuted so as to randomize that predictor variable's input. A new set of predictions is generated using this data, and the MSE of these predictions is calculated. The change in prediction error is defined as the difference between the original and changed MSE values:

\Delta Error_{x_1, tree_1} = MSE_{x_1, tree_1} - MSE_{out\text{-}of\text{-}bag, tree_1}    (3.9)

\Delta Error_{x_1} = \frac{1}{T} \sum_{i=1}^{T} \Delta Error_{x_1, tree_i}    (3.10)

A large value of \Delta Error_{x_1} indicates that this is a significant predictor variable, since randomizing its input causes the predictions of the regression tree to become much worse. This process is repeated for each of the predictor variables in the dataset and for each of the regression trees in the random forest. The change in MSE is calculated for each predictor variable and averaged over all of the trees in the random forest.
Each of these results is divided by the standard deviation of the \Delta Error values for the entire ensemble and is output as the final significance value S(x_n) for each predictor variable:

S(x_n) = \frac{\Delta Error_{x_n}}{\mathrm{std}(\Delta Error_{ensemble})}    (3.11)

3.3.3 Measuring Covariate Shift to Determine Feasibility of Prediction

It is important to note that for this model, as with most models, generating predictions that require large degrees of extrapolation will likely result in poor predictions. Consequently, it is necessary to ensure that the training dataset comes from a distribution that is similar to the shape one wishes to predict. A methodology for determining the similarity of shapes according to the predictor variables generated in Section 3.1 is presented here. The result is a distance metric between any two triangular meshes that can be utilized to estimate whether a training dataset for modeling has adequate similarity to the shape one wishes to predict for. We can denote the distributions of the training and test sets as P and Q, respectively, where P = Q is the ideal case in which predictions can be made with confidence. In practice, however, the test distribution Q will differ arbitrarily from the training distribution P. Such a change is known as covariate shift [130,131]. This is due to the fact that we wish to predict errors for shapes that are different from the shapes that have already been printed. Sugiyama et al. [132,133] note that the Kullback–Leibler divergence between the distributions of two datasets can be interpreted as an estimator of the level of covariate shift between them. An approach based on this is utilized here. The Jensen–Shannon divergence is used instead in order to gain symmetry between distance measurements, and independent distributions for each predictor variable are calculated for the sake of computational cost. To estimate the distributions P and Q for each feature, kernel density estimation [134,135] is applied to obtain the density estimate for features i = 1, ..., 8 as follows:

p_i(x) = \frac{1}{n} \sum_{j=1}^{n} K(x - x_{j,i})    (3.12)
To estimate the distributions P and Q for each feature, kernel density 56 estimation [134,135] is applied to get the density estimation of features i = 1, ..., 8 as follows: p i (x) = 1 n n X j=1 K(x−x j,i ) (3.12) for all x j,i in the dataset for the first shape, and q i (x) = 1 n m X j=1 K(x−x j,i ) (3.13) for all x j,i in the dataset of the second shape, where K(·) is the normal kernel, which is the same for both distributions: K(u) = 1 √ 2π e − u 2 2 (3.14) After determining the probability distributions for both datasets, the covariate shift for each feature i can be quantified using Jensen–Shannon divergence [136]. JS i (P||Q) = 1 2 KL i (P||M) + 1 2 KL i (Q||M) (3.15) where M = P + Q 2 and KL i (P||Q) is the Kullback–Leibler divergence [137] defined as follows: KL i (P||Q) = Z ∞ −∞ p i (x)log( p i (x) q i (x) )dx (3.16) A final divergence metric between two shapes can be given as the sum of the Jensen- Shannon divergences for each predictor variable: DM(Shape 1 ,Shape 2 ) = 8 X i=1 JS i (P||Q) (3.17) 57 3.3.4 Prescriptive Compensation of Shape Deviation Once predictions are made for a part that is to be printed, it becomes necessary to leverage these predictions to improve the part’s eventual quality. This method for compensating for positioning error was utilized in [46,57,58]. The general idea is that if a portion of the object is predicted to be too large or small by a certain amount, the shape of the object can be altered in the opposite direction by a corresponding amount before the object is printed, thus resulting in part with less error. For each vertex on the triangular mesh v n , a new compensated vertex is generated by translating the vertex a distance of−ˆ y n along a vector normal to the surface at that point. This vector can be calculated in spherical coordinates as (x n,4 ,x n,6 , 1). This process is illustrated in Fig. 3.5. It should be noted that this is not an optimal approach, like what is presented by Huang et al. 
[46,48], but is instead a heuristic. One implication of this is that in many situations, the optimal compensation for a part differs from the negative value of the observed deviation. The reason a nonoptimal approach is taken here is the nature of random forest modeling: small changes in the predictor set do not yield large changes, if any, in the response from the prediction function. This is because, according to the regression tree algorithm, all values within a certain region of the n-dimensional predictor space will return the same value. In the validation experiment that will follow, for instance, only 29% of the points on the compensated STL file showed different values of predicted error after compensation (assuming the compensated STL file is then considered the ideal shape). Of those that did, the average change in predicted error was 0.0013 mm, which is well below the resolution of the 3D printer used in this study. Once each vertex is modified, a new compensated STL file is generated for printing.

Figure 3.5: Illustration of compensation strategy.

3.4 Validation Experiment

3.4.1 Test Object Design, Printing, and Measurement

To test the efficacy of the proposed method, a case study utilizing an FDM 3D printer was constructed. The goal of the experiment was to determine whether the geometric accuracy of a previously unseen shape could be improved using accuracy data from several other related shapes via the proposed method. In addition, the predictive accuracy of the model for the unseen shape was also evaluated. The experiment is designed to mirror a situation that might be encountered in an industrial setting: a manufacturer is about to print a new part, but only has accuracy data for a small number of somewhat related shapes.
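The compensation heuristic of Section 3.3.4 amounts to translating each vertex by the negated predicted deviation along its surface normal. A minimal NumPy sketch follows; the vertex, its normal angles, and the predicted deviation are hypothetical, and the spherical-to-Cartesian convention is an assumption of the illustration.

```python
import numpy as np

def compensate(vertices, azimuth, elevation, pred):
    """Translate each vertex by -pred along its unit surface normal,
    where the per-vertex normal is given as (azimuth, elevation, r = 1)."""
    normals = np.column_stack([
        np.cos(elevation) * np.cos(azimuth),
        np.cos(elevation) * np.sin(azimuth),
        np.sin(elevation),
    ])
    return vertices - pred[:, None] * normals

# Hypothetical vertex on a flat top face (normal = +z) predicted to print
# 0.08 mm too large: the compensated vertex is pulled down by 0.08 mm.
v_comp = compensate(np.array([[10.0, 5.0, 20.0]]),
                    azimuth=np.array([0.0]),
                    elevation=np.array([np.pi / 2]),
                    pred=np.array([0.08]))
```

The compensated vertex array would then be written back into a new STL file for printing.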
The previously discussed methodologies for finding the most relevant accuracy data one possesses, leveraging that data to generate predictions, and using those predictions to improve accuracy through compensation are all evaluated. A dataset of 3D printed shapes was generated on an FDM printer, with four shapes being used for model training and one always withheld for model testing. These included a half-ovoid, a half-teardrop, a triangular pyramid, a half-snail shell, and a knob shape. These objects were chosen to represent different geometries, including varying curved and flat faces, various topologies, and edges of differing angles. The edge length of each triangle in the mesh of each object was set to be approximately half a millimeter during remeshing. It is important to note that because this process takes the original parts and moves them to a much higher mesh density, accuracy is preserved. This is because small triangles can express a freeform shape with greater accuracy than large triangles; accuracy for freeform parts would likely not be preserved in the opposite direction. Following this remeshing process, the benchmarking objects were printed on a MakerBot Replicator FDM 3D printer using MakerBot brand polylactic acid filament. Each object was printed with full infill. Care was taken to ensure that the point defined as the origin in the triangular mesh file for each object was printed at the exact center of the print bed. This ensured that the positions of each vertex in the triangular mesh directly corresponded to the positions of the printed objects within the printer's build envelope. The printed test objects are shown in Fig. 3.6.

Figure 3.6: 3D printed objects for test dataset.

The deviation values for each of the 3D printed shapes were calculated according to the procedure in Section 3.2. These deviation values are shown in Fig. 3.7, a heatmap of deviation values across the surface of each shape.
The color at each point indicates the extent of the deviations across the surface. Red points correspond to parts of the shape that are too large, while blue points correspond to parts of the shape that are too small.

Figure 3.7: Deviation values across the surface of each shape.

To better understand the distribution of deviation magnitudes, a histogram showing the frequencies with which various magnitudes of deviation values occur is shown in Fig. 3.8. This histogram is specifically for the half-ovoid shape, which is withheld as the testing dataset for one iteration of the experiment. Values from the bottom surface of this shape are not included, as the deviations there are assumed to be zero based on the assumptions used during registration. It can be seen that most deviations are within 0.3 mm of the desired dimension.

Figure 3.8: Histogram of magnitudes of deviation values for the half-ovoid shape.

3.4.2 Model Training Results

To better understand the efficacy of the method for prediction, two different models were trained. For the first model, the half-teardrop, triangular pyramid, half-snail shell, and knob shape were used as the training data, while the half-ovoid was used as the testing dataset. For the second model, the half-ovoid, triangular pyramid, half-snail shell, and knob shape were used as the training data, while the half-teardrop was used as the testing dataset. For each model, an ensemble of regression trees was trained using the random forest method. The minimum number of observations at each node was set to 200, while the number of trees in the ensemble was set to 30. This ensemble size was chosen because experimental results indicated that further increases in the ensemble size for this dataset yield increasingly small gains in out-of-bag error, as shown in Fig. 3.9.
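The shape-similarity metric of Section 3.3.3 (Eqs. (3.12)–(3.17)), which produces the covariate shift comparisons reported for these shapes, can be sketched numerically with NumPy. The grid, kernel bandwidth, and sample data below are illustrative assumptions; the chapter's equations leave the kernel bandwidth implicit.

```python
import numpy as np

def integrate(f, grid):
    """Trapezoidal integration of sampled values f over the grid."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(grid)))

def kde(samples, grid, h=0.2):
    """Gaussian kernel density estimate (Eq. 3.12) evaluated on a grid.
    The bandwidth h is an assumption; the result is renormalized on the
    truncated grid so it integrates to 1."""
    u = (grid[:, None] - samples[None, :]) / h
    dens = np.exp(-u ** 2 / 2).sum(axis=1) / (np.sqrt(2 * np.pi) * h * len(samples))
    return dens / integrate(dens, grid)

def kl(p, q, grid):
    """Kullback-Leibler divergence (Eq. 3.16), integrated numerically."""
    eps = 1e-12
    integrand = np.where(p > eps, p * np.log((p + eps) / (q + eps)), 0.0)
    return integrate(integrand, grid)

def js(p, q, grid):
    """Jensen-Shannon divergence (Eq. 3.15) with mixture M = (P + Q) / 2."""
    m = (p + q) / 2
    return 0.5 * kl(p, m, grid) + 0.5 * kl(q, m, grid)

def divergence_metric(shape_a, shape_b, grid):
    """Eq. 3.17: sum of per-feature JS divergences between two shapes,
    where each shape is an (n_vertices, n_features) predictor array."""
    return sum(js(kde(shape_a[:, i], grid), kde(shape_b[:, i], grid), grid)
               for i in range(shape_a.shape[1]))
```

As in Table 3.1, two samples of the same predictor distribution yield a small metric, while shifted distributions yield a larger one.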
Thanks to the simplicity of the random forest algorithm, the total training time was less than 30 s, while predictions can be generated at a speed of roughly 110,000 predictions per second.

Figure 3.9: Out-of-bag error versus number of trees in ensemble for first model (top) and second model (bottom).

The relative significance of each predictor variable was calculated for the trained models according to the procedure described in Section 3.3.2. These values are shown in Fig. 3.10. These results suggest that each of the predictor variables contributes to the overall accuracy of the model, though to differing degrees.

Figure 3.10: Significance values of each predictor variable in the first model (top) and the second model (bottom).

Table 3.1 compares the covariate shift metrics between each shape in the dataset. The final values are divided by the maximum covariate shift in the table to produce a normalized set. It can be seen that the half-ovoid dataset withheld for testing in the first model is most similar to the half-teardrop and triangular pyramid shapes. Conversely, the knob shape shows a greater magnitude of covariate shift from most of its peers, indicating that predictions made for this shape would likely be of poorer quality. If one wished to generate predictions for the knob, it would be advisable to train the model on data more representative of its unique shape.

Table 3.1: Normalized covariate shift metrics between individual shape datasets.

                      Half-teardrop   Half-ovoid   Triangular Pyramid   Knob   Half-snail shell
Half-teardrop              0             0.20            0.35           0.78        0.77
Half-ovoid                 0.20          0               0.25           0.91        1
Triangular Pyramid         0.35          0.25            0              0.64        0.74
Knob                       0.78          0.91            0.64           0           0.27
Half-snail shell           0.77          1               0.74           0.27        0

3.4.3 Model Prediction Results

Using the testing shapes that were withheld from the training sets, a new set of predictions was generated for each random forest model. The mean absolute error (MAE) of predictions for out-of-bag data in the training dataset, as well as the MAE of predictions for the withheld shapes, are provided in Table 3.2. The first error quantifies the accuracy of the model when making new predictions for the overall shapes (but not individual data points) that it has already seen in training. The second error quantifies the accuracy of predictions made for a new shape that the model has not seen during training. The predictions for deviation across the surface of the half-ovoid are graphed alongside the actual deviation values for the shape, allowing for comparison. This is illustrated in Fig. 3.11 with the same coloring scheme as shown in Fig. 3.7.

Figure 3.11: Actual deviation values versus predicted deviation values for withheld testing data (First Model).

Table 3.2: MAE values for predictions of deviation values.

                 MAE for out-of-bag data in training     MAE for testing dataset
                 dataset (same shape error) (mm)         (new shape error) (mm)
First Model                  0.0564                              0.0457
Second Model                 0.0513                              0.0708

Plots of predicted deviation values versus actual deviation values for the out-of-bag data used in model training, as well as for the testing shape, are given in Figs. 3.12 and 3.13. For reference, the lines \hat{y} = y + 0.1 mm and \hat{y} = y − 0.1 mm are provided. Predictions that fall outside these bounds might be considered of low quality. It can be seen from these results that this method is capable of producing reasonably accurate predictions for a previously unseen shape from a small training set of just four related shapes.

Figure 3.12: First model: Predicted deviation values versus actual deviation values for out-of-bag data in the training dataset (same shape predictions) (left) and predicted deviation values versus actual deviation values for the testing dataset (new shape predictions) (right).

Figure 3.13: Second model: Predicted deviation values versus actual deviation values for out-of-bag data in the training dataset (same shape predictions) (left) and predicted deviation values versus actual deviation values for the testing dataset (new shape predictions) (right).

The predictions shown in Figs. 3.12 and 3.13 might be useful for an operator of a 3D printer seeking to determine whether a specific 3D printed shape will be within a prespecified tolerance before beginning the print. This procedure might also be of use when determining the best orientation with which to print an object to maximize accuracy. Figure 3.13 also demonstrates that there is room for improving the accuracy of the model. This would likely include expansion of the initial training set and refinement of the initial predictor variables.

3.4.4 Compensation Results

A compensated STL file for the half-ovoid shape was generated according to the procedure given in Section 3.3.4 using the first model's predictions. This STL file was then printed in the same manner and with the same material as the previous objects. Its dimensional accuracy was measured, and deviations are shown alongside the noncompensated half-ovoid in Fig. 3.14. It can be seen that error is substantially reduced using the compensation methodology. The dimensional error is quantified for the compensated and noncompensated half-ovoid as the average vertex error according to the equation below, and given in Table 3.3:

AVE = \frac{1}{N_{vertices}} \sum_{n=1}^{N_{vertices}} |y_n|    (3.18)

Figure 3.14: Deviation values for uncompensated part (left) versus compensated part (right).

It can be seen from the results in Table 3.3 that the application of the presented compensation methodology results in a 44% reduction in average vertex error and a 50% reduction in root-mean-square vertex error for the testing shape.

Table 3.3: Mean absolute vertex error and RMS vertex error for uncompensated and compensated half-ovoid parts.
          Uncompensated half-ovoid (mm)   Compensated half-ovoid (mm)
MAE                  0.0723                          0.0404
RMS                  0.1047                          0.0528

3.5 Conclusion

This study establishes a new data-driven, nonparametric model to predict the shape accuracy of 3D printed products by learning from triangular meshes of a small set of training shapes. The accuracy of a new 3D shape can be quickly predicted through a trained random forest model with little human intervention in specifying models for complicated 3D geometries. With features extracted from triangular meshes, the proposed modeling approach is shown to produce reasonable predictions of shape deviation for a new part based on a limited training set of previous print data. Compensation leveraging these predictions is also shown to be effective, resulting in a 44% reduction in average vertex deviation.

One further interesting insight gained from the presented experiment was that the quality of the data is a necessary condition for reasonable predictions. Table 3.1 indicates that only two of the shapes in the training set had low covariate shift as compared with the testing dataset. This is likely close to the lower bound of training-set similarity required to maintain accurate predictions. Those wishing to utilize this methodology should therefore ensure that their training dataset contains an adequate amount of data similar to the shapes they wish to predict for. Applications for which this is already naturally the case can be found under the concept of "mass customization," where similarly shaped products are produced with small custom differences introduced per customer specifications. These might include the 3D printing of retainers, custom footwear, and medical implants, among many other fields. The methodology for determining shape similarity based on covariate shift of the presented predictor variables might be utilized across other shape deviation modeling methodologies for which these conditions are present, to ensure the sufficiency of training data.
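The vertex-error metrics reported above (Eq. 3.18 and Table 3.3) can be sketched as a minimal computation; the function names and the sample deviation values are illustrative, not the thesis dataset.

```python
import numpy as np

def average_vertex_error(deviations):
    """Mean absolute vertex error (Eq. 3.18): AVE = (1/N) * sum(|y_i|)."""
    deviations = np.asarray(deviations, dtype=float)
    return float(np.mean(np.abs(deviations)))

def rms_vertex_error(deviations):
    """Root-mean-square vertex error over the same deviation values."""
    deviations = np.asarray(deviations, dtype=float)
    return float(np.sqrt(np.mean(deviations ** 2)))

# Hypothetical signed vertex deviations (mm) for one scanned part.
devs = [0.05, -0.10, 0.02, -0.03]
print(average_vertex_error(devs))  # 0.05
print(rms_vertex_error(devs))
```

Note that the RMS error is always at least as large as the MAE, so it weights a few large deviations more heavily.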
Chapter 4

Prediction and Compensation via Mesh and Spherical Harmonic-Based Feature Vectors

One significant limitation of the methodology presented in the previous chapter is the fact that it only utilizes local surface geometry, defined by the triangles connected to a given vertex, to make predictions of dimensional deviation. This can present challenges when the geometry of a part being modeled is highly complex, and can limit the accuracy of predictions. A step forward, then, would be to incorporate information regarding geometry beyond this threshold into the proposed modeling approach. This chapter seeks to achieve this through the use of spherical harmonic-based feature vectors. The proposed approach will then be evaluated in the significantly more complex case of dental additive manufacturing. While the proposed method is evaluated in the context of dental additive manufacturing, it would be ideally suited for other applications requiring complex geometries with high part-to-part resemblance. These include applications in biomedical engineering, aerospace, and customized consumer products.

4.1 Methodology

The overall process proposed in this chapter is shown in Figure 4.1. Its structure is largely similar to that of the method proposed in the previous chapter, with the addition of a set of predictor variables derived from spherical harmonic transformations of the part's surface. These sets of predictor variables are then used in tandem. This section will focus on how the additional predictor variables are derived.

Figure 4.1: Flowchart of proposed methodology.

As in the previous chapter, predictor variables are generated from a mesh M describing the shape, which is stored in an STL file, and defined as a tuple:

\[ M = (P, F) \tag{4.1} \]

where P is the set of unique vertices on the mesh and F is the set of triangular facets. Further, parts will be meshed according to the same conditions as those in Chapter 3.
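A minimal sketch of building the tuple M = (P, F) from an STL-style triangle soup (STL files repeat shared vertices per facet, so unique vertices must be merged). The function name, merge tolerance, and toy triangles are assumptions of this sketch, not the thesis implementation.

```python
import numpy as np

def mesh_from_triangles(triangles, tol=1e-9):
    """Build M = (P, F): unique vertex array P and facet index triples F
    from an (n_tri x 3 x 3) triangle soup as read from an STL file."""
    tris = np.asarray(triangles, dtype=float)
    flat = tris.reshape(-1, 3)
    # Quantize coordinates so nearly identical vertices merge into one entry of P.
    keys = np.round(flat / tol).astype(np.int64)
    uniq, inverse = np.unique(keys, axis=0, return_inverse=True)
    P = uniq * tol
    F = inverse.reshape(-1, 3)
    return P, F

# Two triangles sharing an edge -> 4 unique vertices, 2 facets.
tris = [[[0, 0, 0], [1, 0, 0], [0, 1, 0]],
        [[1, 0, 0], [1, 1, 0], [0, 1, 0]]]
P, F = mesh_from_triangles(tris)
print(len(P), len(F))  # 4 2
```

Shared vertices end up as shared indices in F, which is what allows per-vertex predictor variables to be computed once per unique vertex.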
An example of this is illustrated on one of the full-arch dental models in Figure 4.2.

Figure 4.2: Close-up view of remeshed surface.

4.1.1 Spherical Harmonics Shape Descriptor Generation

The new set of shape descriptors utilized in this process seeks to capture information regarding the geometry that extends significantly beyond the facets adjacent to a given vertex. This process is based on the one proposed by Kazhdan et al. [88] and Funkhouser et al. [89], with several modifications that will be flagged along the way.

The first step in this process is voxelizing the set of points P from mesh M. For a 2R x 2R x 2R grid of voxels V, a given voxel is set equal to 1 if it contains one or more vertices from P, and 0 otherwise:

\[
v_{s,t,u} =
\begin{cases}
1 & \text{if } \exists\, p_n \in P \text{ contained in voxel } (s, t, u) \\
0 & \text{otherwise}
\end{cases}
\]

The new mesh containing the set of compensated vertices (Equation 4.9) is then printed under the same print settings and conditions.

4.2 Results

4.2.1 Dataset Generation and Segmentation

In order to test the efficacy of this methodology, a set of six half-arch dental models [138-141] was printed on a professional-grade dental SLA printer one at a time, with each model placed at the center of the build plate. The printed parts are shown in Figure 4.6. These models were then scanned, allowing for deviations across the surface of the parts to be determined according to the methodology in Section 4.1.2. The resulting deviations are shown in Figure 4.7, with colors applied according to the magnitude of deviation. Red corresponds to an area of the part that was printed too large, while blue corresponds to an area that was printed too small.

Figure 4.6: Printed parts from two different angles.

Figure 4.7: Visualization of deviations across all printed shapes.

The set of parts was then segmented into a training set to be used for model training, a validation set to be used for tuning of modeling hyperparameters, and a test set for overall
evaluation of the proposed methodology's performance. To provide a more challenging and robust evaluation of the performance of the methodology, the data is segmented by shape, as opposed to a randomly selected fraction of the overall data instances. This forces the model to be assessed while making predictions for a shape that it has not seen before, mimicking how it would be used in real-world applications.

4.2.2 Hyperparameter Optimization and Model Training

To determine a set of optimal hyperparameters, a study was performed on the validation dataset described above. Two hyperparameters critical to the methodology were studied: one central to the generation of predictor variables, and the other central to the random forest model being trained. The first hyperparameter under consideration is the minimum number of observations allowed for a terminal node in a regression tree in the random forest model. This determines how deeply the tree will be constructed, with a smaller minimum number corresponding to a deeper tree, and vice versa. Values of this hyperparameter evaluated here ranged from 2 to 20. The second hyperparameter that required optimization was the number of values of r that should be included in the training dataset. Here, values of r were staggered by 10 mm, and included r = 5 mm, 15 mm, 25 mm, and 35 mm. Each increasing value of r represents information regarding geometry that was further and further from the vertex being predicted for. As a result, if more distant information is of diminishing value to the model, this can impact overall model performance. Evaluated values of this hyperparameter ranged from 0 (no spherical harmonic predictor variables included) to 4 (144 predictor variables derived from all 4 values of r). Here, an optimal set of hyperparameters is one that minimizes the root mean square error (RMSE) of a trained model's predictions for the validation dataset.
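The two-way hyperparameter study above can be sketched as a plain grid search. Here `rmse_fn` is a stand-in for the full train-and-score pipeline (random forest training on the predictor variables is omitted), and the toy surrogate function is purely illustrative.

```python
import itertools

def select_hyperparameters(rmse_fn, node_sizes, r_counts):
    """Grid search: evaluate validation RMSE for every combination of
    minimum terminal node size and number of radii r included, and
    return the pair that minimizes it."""
    best = None
    for ns, rc in itertools.product(node_sizes, r_counts):
        score = rmse_fn(ns, rc)
        if best is None or score < best[0]:
            best = (score, ns, rc)
    return best[1], best[2]

# Toy surrogate: pretend RMSE is minimized at node size 8 with 2 radii,
# mimicking the optimum reported for the validation dataset.
toy = lambda ns, rc: (ns - 8) ** 2 * 0.001 + (rc - 2) ** 2 * 0.01 + 0.05
print(select_hyperparameters(toy, range(2, 21), range(0, 5)))  # (8, 2)
```

In practice `rmse_fn` would train a forest with the given minimum node size on features built from the first `rc` radii and return its validation RMSE.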
Figure 4.8 shows the response surface derived from this experiment. The optimal hyperparameters, shown by the red dot, are a minimum terminal node size of 8, with the first two values of r included. It is interesting to note the significant improvement in performance gained by the addition of the spherical harmonic predictor variables, as well as the gradual decline in performance once values of r beyond 5 and 15 mm are included.

Figure 4.8: Response surface showing the RMSE of predictions generated by the model on the validation dataset as a function of the hyperparameters used to train it.

4.2.3 Prediction Generation

A first step in evaluating the performance of the proposed methodology is evaluating the accuracy of its predictions for the test dataset. A model was trained on the training dataset using the hyperparameters identified in Section 4.2.2. Predictions for the test dataset were generated, and are shown alongside the actual deviations in Figure 4.9. Further, the RMSE and mean absolute error (MAE) for the predictions generated by the model are given in Table 4.1. It can be seen in Figure 4.9 and Table 4.1 that the prediction results largely agree with the empirical measurements.

Table 4.1: Error for model predictions on test dataset.

MAE    0.127 mm
RMS    0.094 mm

Figure 4.9: Predicted and actual deviations for the test dataset.

4.2.4 Compensation Results

Compensation according to the method described in Section 4.1.4 was applied to the mesh corresponding to the test dataset using the predictions generated above. The new mesh was printed with the same material, and under the same conditions, as those printed for the original dataset. The deviations on the surface of the new compensated part are shown in Figure 4.10 alongside those of the original uncompensated part. The figure shows a significant qualitative improvement in the overall geometric accuracy of the printed part.
Table 4.2 quantifies the mean absolute deviation, as well as the root mean squared deviation, for the set of vertices found on both the compensated and uncompensated parts. Reductions in root mean squared deviation as well as mean absolute deviation from the compensation methodology were found to exceed 40%, implying a significant benefit from the proposed approach. This effect is shown in greater detail in Figure 4.11, which shows a frequency distribution of the various magnitudes of deviations found on both the compensated and uncompensated part. The taller and more centered the distribution, the more accurate the part. This figure shows a considerable number of measured deviations on the uncompensated part corresponding to regions that were printed too small. There is also a notable number of positive deviations. Compensation was shown to reduce both of these categories, with improvements in the former being significantly more substantial. Vertical lines indicating the threshold beyond which a given deviation might be considered clinically relevant are given for two applications for which full-arch dental models might be used: the fabrication of clear aligners via thermoforming, as well as orthodontic case planning. It can be seen that the number of problematic deviations beyond these thresholds is significantly reduced using compensation.

Table 4.2: Mean absolute deviation and root mean squared deviation for compensated and uncompensated parts.

        Compensated Part   Uncompensated Part   Percent Reduction
MAE        0.085 mm            0.146 mm              41.8%
RMS        0.114 mm            0.198 mm              42.4%

4.3 Conclusion

The above results demonstrate the efficacy of the proposed approach, as the frequency of deviations outside the realm of what is considered clinically acceptable is reduced. It is interesting to note that the greatest improvement was seen in regions of the part where the measured dimensions were too small (negative deviations).
These were significantly reduced, while those that occurred in regions that were too large were only slightly reduced. This is likely due to the substantially greater prevalence of negative deviations in the original part.

Figure 4.10: Comparison of deviations across the surface of the uncompensated and compensated parts shown at differing angles.

Another interesting observation in the above results is that the benefit of including additional values of r becomes negative after a certain distance from a given vertex is reached. This is most likely because the correlation between the geometry of a printed object and the deviation at a point on the object deteriorates as the distance between the region of the geometry and the point increases. Because of this reduced correlation, the inclusion of this information into the machine learning model brings about decreased performance when compared to a model trained on a smaller subset of highly relevant data. It may be of value to see if this relationship holds on other parts, and if so, with which sets of distances r this hyperparameter is optimized.

Figure 4.11: Comparison of the frequency of deviations found in the compensated and uncompensated parts.

Chapter 5

Optimizing the Expected Utility of Shape Distortion Compensation Strategies

With a methodology for predicting and compensating for errors established, a question naturally arises: is the existing approach for compensation found in the literature optimal? Put another way, does compensation that seeks to optimally reduce the magnitude of errors across the part actually serve a manufacturer best? One unifying theme found in the works presented in Section 1.3 is a desire to most effectively reduce the magnitude of geometric deviations based on the prior belief of the manufacturer as to what these deviations will be. In the literature, this prior belief has been defined by a predictive model or simply by the deviations found on one or several sacrificial parts.
While this is a reasonable and beneficial goal, there are two aspects to these approaches worth considering. First, because all additive manufacturing methods are inherently complex combinations of several physical processes and engineered systems subject to constant variation, no model or sacrificial part will perfectly predict the magnitude of deviations across the surface of a given part. As a result, all predictions come with inherent uncertainty. Further, the effects of compensation itself are subject to natural variations in the printing process. Therefore, knowledge regarding the uncertainty of these outcomes is worth considering when determining when and where to apply compensation. If information regarding how the model performs on previously unseen shapes is known, it would be desirable that this prior probability distribution influence the compensation that is performed.

Second, not all improvements or reductions in geometric accuracy are equal in the eyes of a manufacturer. A manufacturer might be able to employ a tool such as a grinder, or a hybrid manufacturing system [73], to correct for dimensions that are too large, but be unable to correct for dimensions that are too small in post-processing. In this case, inaccurate compensation that produces dimensions that are too small is far more costly than inaccurate compensation producing dimensions that are too large. A manufacturer might also have to meet certain tolerance requirements. In this case, compensation that puts a part within the required tolerances would be far preferable to compensation that leaves or puts the part's dimensions outside of them. Similarly, asymmetric tolerances [74] might be encountered, which could influence the significance of certain compensation errors. Intuitively, an ideal compensation strategy should take these considerations into account.

This chapter explores this question, and has the following structure.
First, considerations for constructing a value function describing a manufacturer's preferences are discussed, and example functions are given. Second, a methodology for calculating the expected utility is described. Third, the compensation strategy used in the study is introduced. Finally, results demonstrating the method are given. The proposed strategy is shown to significantly increase the expected utility of a print.

5.1 Methodology

5.1.1 Constructing a Value Function

The first step in the proposed approach is to develop a value function that describes the preferences a manufacturer has for a specific print. These preference attributes could include the overall accuracy of the part, specific tolerances, and more, and seek to account for the unique challenges and constraints brought on by using additive manufacturing. The value function seeks to express the dollar value to a manufacturer of a completed print as a function of these attributes. One situation where these values and costs are particularly well-defined is the case of an AM service provider. These businesses accept print jobs from a wide range of companies for a predefined price and with prenegotiated quality requirements. These parts are then manufactured by the service provider at a specific cost, and then returned to the customer, ideally at a profit. Several different value functions will be discussed below, which represent only a small fraction of the possible functions that could be utilized to express manufacturer preferences.

The first value function might be used in a situation where a manufacturer must meet certain tolerances, has no ability to fix an out-of-tolerance print, and derives no benefit from improving the accuracy of the part within the tolerances:

\[ V_1 = V_{\mathrm{base}} I_{t_{\mathrm{all}}} - C_P \tag{5.1} \]

\(V_{\mathrm{base}}\) is the base value of a successful print of the part.
For instance, if the manufacturer is a 3D printing service provider, this would be the price paid by a customer for the part, assuming it met the tolerance requirements. \(I_{t_{\mathrm{all}}}\) is an indicator variable that is equal to one if the required tolerances have been met, and zero otherwise. Here, tolerance requirements are considered met if each dimension on the part is measured to be within the intended dimension plus \(t_h\) or minus \(t_l\), the upper and lower tolerance bounds. Finally, \(C_P\) is the cost to manufacture the part, including materials, energy, machine maintenance, etc. It can be seen here that the value of the print is the difference between the benefits and the cost, and will be negative if the print fails to meet the tolerance requirements. In this situation, the part will be worth either all or nothing to the manufacturer, depending on whether it meets tolerance requirements. It should be noted that outside of meeting tolerances, increasing or decreasing accuracy doesn't financially impact the manufacturer. This reflects the very common case where tolerances are the only geometric quality metric that must be met by a manufacturer in a contract with a customer.

The second value function might be used in a situation where the manufacturer has no tolerance requirements, but is penalized for errors according to a quadratic loss function:

\[ V_2 = B_{\mathrm{max}} - \alpha \sum_{i=1}^{n} \left( \|x_i - \hat{x}_i\|_2 \right)^2 + V_{\mathrm{base}} - C_P \tag{5.2} \]

\(B_{\mathrm{max}}\) is the maximum additional value over the base value that would be derived from a perfectly accurate part. The second term sums the squares of the geometric deviations over each of the n points that are evaluated, and is then multiplied by the scaling term \(\alpha\) to determine the accuracy penalty. Error is defined as the Euclidean norm between the measured position of the point \(x_i\) and the designed position of the point \(\hat{x}_i\). An absolute value could be used instead of a square of the error terms if that better reflected the manufacturer's preference.
It is desirable that the number of points evaluated across the STL file be made uniformly dense through remeshing, and that the constant \(\alpha\) be set according to n, so as to not bias the calculation. In this instance, the manufacturer no longer has to meet a set of fixed tolerances, but is instead incentivized financially to minimize the overall error, with an exponentially increasing penalty for increasing error magnitudes. This might be the case when a part is being built for prototyping and visualization purposes as opposed to functional end-use. The manufacturer would still value a less accurate part to a lower degree, as a low-quality product would be more likely to leave a customer unsatisfied.

Finally, a third value function might be used in a situation where a manufacturer has tolerance requirements, derives no benefit from improving accuracy within the tolerances, and has the ability to fix an out-of-tolerance dimension if it is larger than the design, albeit at a cost:

\[ V_3 = V_{\mathrm{base}} I_{t_l} + \gamma \sum_{i=1}^{n} \left( t_h - \max\left( t_h, \|x_i - \hat{x}_i\|_2 \right) \right) - C_P \tag{5.3} \]

\(I_{t_l}\) is equal to one if the lower tolerance requirement has been met, and zero otherwise. There is also a cost for physically repairing geometric deviations that are above the upper tolerance \(t_h\), scaled by \(\gamma\). In this situation, the part is worth nothing to the manufacturer if a given dimension violates the lower tolerance bound, as the manufacturer can no longer make that dimension larger once the print is completed. In this way, the manufacturer's incentives are similar to those laid out in the first value function. In this situation, however, a deviation resulting in a dimension that is too large can be fixed using some form of subtractive manufacturing, which could be as simple as a bench grinder or as complex as a CNC machine. Because of the cost of these repairs, the incentive to have no dimension fall below the lower tolerance will have to be weighed against the cost of making some too large.
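The three value functions (Eqs. 5.1 to 5.3) can be sketched directly; parameter names follow the text, but the handling of signed deviations against the asymmetric bounds and all numeric inputs are illustrative assumptions of this sketch.

```python
import numpy as np

def V1(devs, V_base, C_P, t_h, t_l):
    """Eq. 5.1: all-or-nothing value; full V_base only if every signed
    deviation lies within [-t_l, t_h]."""
    devs = np.asarray(devs, dtype=float)
    in_tol = np.all((devs <= t_h) & (devs >= -t_l))
    return V_base * float(in_tol) - C_P

def V2(devs, V_base, C_P, B_max, alpha):
    """Eq. 5.2: quadratic loss on deviation magnitudes, no hard tolerance."""
    devs = np.asarray(devs, dtype=float)
    return B_max - alpha * float(np.sum(devs ** 2)) + V_base - C_P

def V3(devs, V_base, C_P, t_h, t_l, gamma):
    """Eq. 5.3: lower tolerance is hard; oversize deviations beyond t_h
    are repairable at a cost scaled by gamma (the summed term is <= 0)."""
    devs = np.asarray(devs, dtype=float)
    in_lower = np.all(devs >= -t_l)
    repair = gamma * float(np.sum(t_h - np.maximum(t_h, np.abs(devs))))
    return V_base * float(in_lower) + repair - C_P

devs = [0.1, -0.05, 0.3]  # hypothetical signed deviations (mm)
print(V1(devs, 300, 100, 0.225, 0.225))  # -100.0 (0.3 breaks the upper bound)
print(V3(devs, 300, 100, 0.225, 0.225, 30))  # 197.75 (repair cost of 2.25)
```

Under V3 the same part that is worthless under V1 retains most of its value, since the oversize deviation can be machined down.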
5.1.2 Determining a Proper Utility Function

Once a function describing the value of a certain outcome has been established, it is necessary to determine the expected utility over that value function. This is because there is uncertainty as to which outcome will materialize, and decision makers may value different situations differently based on the distribution of risk. In the case of a service provider manufacturing hundreds if not thousands of part orders a day, it might be reasonable to assume that, in the case of a single print with a value in the range of ~$10 to $1,000, they follow the delta property [142]:

\[ \tilde{y}_{\delta} = \tilde{y} + \delta \tag{5.4} \]

Here, \(\tilde{y}\) is the greatest amount of money a decision maker would pay for a deal that pays \(y_1\) dollars with probability p and \(y_2\) dollars with probability 1 - p. Similarly, \(\tilde{y}_{\delta}\) is the greatest amount of money a decision maker would pay for a deal that pays \(y_1 + \delta\) dollars with probability p and \(y_2 + \delta\) dollars with probability 1 - p. This is illustrated in Fig. 5.1.

Figure 5.1: Lottery modified by shifting the payout.

It might also be reasonable to assume that in the range of ~$10 to $1,000 the manufacturer is risk neutral. This would suggest a linear utility function. As a result, we might determine that a manufacturer's utility is equal to their value function [142]. In this situation, the task of calculating expected utility is greatly simplified. It should be noted that these assumptions will not be reasonable for all manufacturers, especially when the potential value of a part increases significantly. In those cases, a more elaborate utility function will be required.

5.1.3 Calculating Expected Utility

With this in place, it is possible to calculate the expected utility of the value functions defined above. The expected utility of the first function becomes

\[ E[U(V_1)] = V_{\mathrm{base}} P(\mathrm{InTol}) - C_P \tag{5.5} \]

where \(P(\mathrm{InTol})\) is the probability that all points on the part are within the required tolerances.
The expected utility over the second value function can be expressed as:

\[ E[U(V_2)] = -\alpha n \int_{-\infty}^{\infty} f_{\mathrm{err}}(x)\, x^2 \, dx + B_{\mathrm{max}} + V_{\mathrm{base}} - C_P \tag{5.6} \]

where \(f_{\mathrm{err}}(x)\) is the probability density function of the prior belief distribution of geometric deviation magnitudes after compensation. Finally, the third expected utility can be expressed as:

\[ E[U(V_3)] = V_{\mathrm{base}} P(\mathrm{InLowerTol}) + \gamma n \left( t_h - \left( t_h F_{\mathrm{err}}(t_h) + \int_{t_h}^{\infty} f_{\mathrm{err}}(x)\, x \, dx \right) \right) - C_P \tag{5.7} \]

where \(P(\mathrm{InLowerTol})\) is the probability that all points on the part are within the required lower tolerances (not too small). In order to determine each of these expected values, it is necessary to determine \(P(\mathrm{InTol})\), \(P(\mathrm{InLowerTol})\), and \(f_{\mathrm{err}}(x)\). Methodologies for determining these probabilities and distributions will be given in the next two sections.

5.1.4 Generation of Prior Belief Distributions

The probability density function of the prior belief distribution of geometric deviation magnitudes after compensation, \(f_{\mathrm{err}}(x)\), is for this example empirically generated from a dataset of vertices from a part compensated according to the method proposed in Chapter 3. This reflects the belief of the manufacturer as to the probability of achieving certain magnitudes of vertex deviations on a part compensated using the predictive model and compensation strategy given in Chapter 3. This greatly simplifies the task of understanding uncertainty, since uncertainty regarding the efficacy of predictions, compensation, and measurement can all be accounted for in one distribution that focuses on the metric of ultimate interest: deviation. This empirically generated distribution is shown in Fig. 5.2. It can be seen here that this distribution is slightly skewed to the left. Because the distribution is generated empirically, this will cause challenges when determining the joint probability distribution of multiple points, as will be seen later.
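When \(f_{\mathrm{err}}(x)\) is only available empirically, the integrals in Eqs. 5.6 and 5.7 can be approximated by Monte Carlo as sample means over draws from the deviation data. The function names and the stand-in samples below are illustrative assumptions, not the thesis dataset.

```python
import numpy as np

def EU_V2_empirical(samples, alpha, n, B_max, V_base, C_P):
    """Monte Carlo estimate of Eq. 5.6: the integral of f_err(x) * x^2
    becomes the sample mean of x^2 over draws from f_err."""
    x = np.asarray(samples, dtype=float)
    return -alpha * n * float(np.mean(x ** 2)) + B_max + V_base - C_P

def EU_V3_repair_term(samples, gamma, n, t_h):
    """Monte Carlo estimate of the repair term in Eq. 5.7:
    gamma * n * E[t_h - max(t_h, x)] over deviation magnitudes (<= 0)."""
    x = np.abs(np.asarray(samples, dtype=float))
    return gamma * n * float(np.mean(t_h - np.maximum(t_h, x)))

rng = np.random.default_rng(0)
draws = rng.normal(0.0, 0.05, size=100_000)  # stand-in for draws from f_err(x)
print(EU_V2_empirical(draws, alpha=3, n=5000, B_max=20, V_base=300, C_P=100))
```

With many draws, both estimates converge to the closed-form integrals against the empirical density.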
In an industrial setting, the use of big data analytics would be an enabling technology in this effort, as it would facilitate the collection of large amounts of data representing the efficacy of compensation on individual machines and varying process parameters. This would allow for the use of prior belief distributions that are conditional on the most relevant information available.

Figure 5.2: Probability distribution of geometric deviations of compensated vertices.

5.1.5 Calculating Tolerance Probabilities Given Spatial Autocorrelation

Next, it is necessary to determine \(P(\mathrm{InTol})\) and \(P(\mathrm{InLowerTol})\) for a new part for which tolerance probabilities are desired, based on the data used to generate \(f_{\mathrm{err}}(x)\). The part will be evaluated at a set of n locations on its surface: L. One challenge faced here is that vertices on the surface of the shape within close proximity of each other will likely exhibit some degree of spatial autocorrelation. This was confirmed for the given dataset using Moran's I test [143]. One way to account for this issue is to only measure points across the surface of the part that are sufficiently separated so as to not be influenced by spatial autocorrelation. A semivariogram of the compensation deviation data is given in Fig. 5.3. It can be seen that once points are roughly 20 mm apart, the effect of spatial autocorrelation becomes negligible. If one wishes to simplify the calculation of these probabilities by assuming independence, all measured points must be greater than this distance apart for this dataset.

However, it is more likely that the points on the surface of the part that will be measured, often using a coordinate measurement machine or 3D scanner, will be significantly closer than the limit for spatial independence due to their large number (thousands).
In this instance, a method for calculating \(P(\mathrm{InTol})\) and \(P(\mathrm{InLowerTol})\) while accounting for dependency between the deviation magnitudes of nearby points should be utilized. One will be illustrated below. In it, a Monte Carlo approach is used to determine the percentage of simulated parts that are within and outside of the manufacturer's predefined tolerance requirements, allowing for the determination of \(P(\mathrm{InTol})\) and \(P(\mathrm{InLowerTol})\). In order to do this, a large number of sets containing simulated deviations at each of the points to be evaluated on the prospective part are generated and then screened against the manufacturer's predefined tolerancing criteria. For a given set of vertices, it is necessary to draw a random sample of points from the deviation distribution \(f_{\mathrm{err}}(x)\). However, since the magnitude of deviations at nearby points is dependent on their neighbors, it is necessary to construct and draw magnitudes from a joint probability distribution that takes this correlation into account. This necessitates the simulation of a joint distribution with empirically defined marginals.

Figure 5.3: Semivariogram of the compensation deviation data (mm).

One preliminary task that must be done beforehand is to determine the degree of expected covariance between points to be evaluated on the new part to be manufactured. First, functions describing the semivariogram and covariogram are fit to the manufacturer's previous compensation deviation data. These functions seek to describe the relationship between the distance between points and the covariance in the magnitude of deviation. In this example, the spherical variogram model will be utilized, where the semivariance SV is a function of distance h given by:

\[
SV(h; r, s, a) =
\begin{cases}
0 & \text{if } h = 0 \\
a + (s - a)\left( \dfrac{3h}{2r} - \dfrac{h^3}{2r^3} \right) & \text{if } 0 < h \le r \\
s & \text{if } h > r
\end{cases}
\tag{5.8}
\]

where a is the nugget of the semivariogram, s is the sill, and r is the range [144].
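The spherical semivariogram (Eq. 5.8) and its complementary covariogram (standard in kriging: covariance equals sill minus semivariance inside the range) can be sketched as vectorized functions; the nugget, sill, and range values below are illustrative, not the fitted values from the thesis data.

```python
import numpy as np

def spherical_semivariogram(h, r, s, a):
    """Spherical semivariogram (Eq. 5.8): nugget a, sill s, range r."""
    h = np.asarray(h, dtype=float)
    sv = a + (s - a) * (3 * h / (2 * r) - h ** 3 / (2 * r ** 3))
    sv = np.where(h > r, s, sv)
    return np.where(h == 0, 0.0, sv)

def spherical_covariogram(h, r, s, a):
    """Matching covariogram: CV = s - SV for 0 < h <= r, s at h = 0,
    and 0 beyond the range."""
    h = np.asarray(h, dtype=float)
    cv = (s - a) * (1 - 3 * h / (2 * r) + h ** 3 / (2 * r ** 3))
    cv = np.where(h > r, 0.0, cv)
    return np.where(h == 0, s, cv)

h = np.array([0.0, 10.0, 25.0])
sv = spherical_semivariogram(h, r=20.0, s=1.0, a=0.1)
cv = spherical_covariogram(h, r=20.0, s=1.0, a=0.1)
print(sv + cv)  # equals the sill s everywhere
```

The check that SV + CV equals the sill is a quick way to confirm the two fitted models are mutually consistent before building the covariance matrix.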
The spherical covariogram model is given as:

\[
CV(h; r, s, a) =
\begin{cases}
s & \text{if } h = 0 \\
(s - a)\left( 1 - \dfrac{3h}{2r} + \dfrac{h^3}{2r^3} \right) & \text{if } 0 < h \le r \\
0 & \text{if } h > r
\end{cases}
\tag{5.9}
\]

These are fit to the compensated deviation data from Chapter 3, and shown in Fig. 5.4. Using the spherical covariogram model, it is possible to determine a covariance matrix \(\Sigma\) describing the covariance between each of the points L on the part to be evaluated, given the distances between them. With this established, simulated sets of deviation measurements for all of the vertices on the part can be generated by drawing samples from a multivariate distribution with marginals based on the probability distribution \(f_{\mathrm{err}}(x)\) shown in Fig. 5.2. This can be a challenging task, since \(f_{\mathrm{err}}(x)\) is an empirical, non-normal distribution. Further, because thousands of points will be evaluated across the surface of the part, the high dimensionality of the data will present an additional hurdle. One useful tool for addressing these challenges is a copula structure, which allows users to describe multivariate joint distributions in terms of univariate marginal distributions and the 'link' between them. In simpler terms, copulas allow for the modeling of dependence between random variables, which is needed for this application.

Figure 5.4: Semivariogram and covariogram of compensated deviation data.

While there are a number of classes of copulas that have been utilized in the literature, one of the more popular copula structures is the Gaussian copula, which is generated from the multivariate normal distribution. Given a correlation matrix \(R \in [-1, 1]^{d \times d}\), the Gaussian copula can be written as:

\[ C^{\mathrm{Gauss}}_{R}(u) = \Phi_R\left( \Phi^{-1}(u_1), \ldots, \Phi^{-1}(u_d) \right) \tag{5.10} \]

where \(\Phi_R\) is the joint cumulative distribution function (CDF) of the multivariate normal distribution with a mean of zero and covariance matrix corresponding to the correlation matrix R, while \(\Phi^{-1}\) is the inverse of the CDF of the standard normal distribution. This structure
This structure waschosenbecauseoftheflexibilitywithwhichitcanbeusedtomodelcomplexsituations like the one encountered in this application. One method for doing this, which was illustrated in [145], will be utilized here. First, K samples x 1 ,x 2 ,...x K of the n-dimensional vector were generated from a multivariate normal distribution with a covariance matrix Σ. Here, K = 10000 and n = 5000. The 98 cumulative probability of each value is determined using the normal cumulative distri- bution function t n,i = Φ k,i (x k,i ) where k = 1,...,10000 and i = 1,...,5000. Finally, the simulated values of deviation at each evaluated point for each simulated part are gener- ated using the inverse of the cumulative distribution function for the distribution shown in Fig. 2.4: y k,i = F −1 k,i (t k,i ). The probabilities P (InTol) and P (InLowerTol) can be determined from the proportion of the generated sets from the multivariate distribution that are entirely within the required tolerances. For the purposes of this work, a part is considered out of tolerance if the deviation at one of its vertices is outside the given constraints, however this same methodology could be applied to other schemes. Once these probabilities are determined, expected utility can be calculated as given in Equa- tions 5.5-5.7. It should be noted that one potential downside to the use of Gaussian copulas is their weak tail dependence, which implies that the probability of clusters of extreme events can be underestimated using this approach [146]. It is important that this be weighed against the definition of a part being out of tolerance that is defined by a manufacturer to ensure that the distribution that is described using the copula structure is well suited for estimation. 5.1.6 Alternative Compensation Strategy In order to demonstrate the usefulness of this methodology, a simple alternative com- pensation strategy is proposed. 
In this strategy, which is illustrated in Figure 5.5, each vertex is translated along a vector normal to the surface a distance equal to the opposite of the predicted deviation plus a constant c, which will be the same for every point on the surface of the part. Because ŷ will vary for each point, the amount of compensation applied to each point will differ as well. This constant c is simply a parameter of the strategy that will be optimized by choosing the value that maximizes expected utility, as calculated using the proposed methodology. The prior belief distribution for the results of the alternative compensation scheme can be approximated by translating the distribution f_err(x) by the value c.

Figure 5.5: Alternative compensation strategy.

5.2 Results

An example scenario is presented below in order to demonstrate the proposed approach. A manufacturer will build a part, but wishes to employ compensation with an expected distribution of remaining deviations represented in Figure 5.2. The expected value of the printed part will be evaluated for varying values of the hyperparameter of the compensation strategy, c, using the three proposed value functions. Parameters for each of the three value functions are chosen in order to reflect a potential situation a manufacturer might face; they are given in Table 5.1. The expected utility of the compensated part for each value function as a function of different values of c is shown in Figures 5.6–5.8. Expected utilities are calculated using the proposed method to account for spatial autocorrelation. The maximum of each function is indicated by a blue circle. It can be seen that in each case, the value of c (mm) that maximizes the expected utility of the compensated part is not zero. The maximum expected utility values using the alternative compensation strategy are compared against the expected utility values from the standard compensation strategy in Table 5.2.
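The per-vertex compensation rule above, with c chosen by maximizing expected utility, can be sketched as follows; the mesh arrays, predicted deviations `y_hat`, and the stand-in utility function are all hypothetical placeholders (the dissertation's actual expected utility follows Equations 5.5–5.7 via the copula simulation):

```python
import numpy as np

def compensate(vertices, normals, y_hat, c):
    """Offset each vertex along its unit normal by -(predicted deviation) + c."""
    return vertices + (-y_hat + c)[:, None] * normals

def choose_c(vertices, normals, y_hat, expected_utility, candidates):
    """Grid search: pick the c in `candidates` maximizing estimated expected utility."""
    utilities = [expected_utility(compensate(vertices, normals, y_hat, c), c)
                 for c in candidates]
    best = int(np.argmax(utilities))
    return candidates[best], utilities[best]

# Toy illustration: 5 vertices, unit +z normals, and a placeholder utility
# that (by construction) peaks near c = 0.015 mm
verts = np.zeros((5, 3))
norms = np.tile([0.0, 0.0, 1.0], (5, 1))
y_hat = np.array([0.02, -0.01, 0.00, 0.03, -0.02])
toy_utility = lambda v, c: -(c - 0.015) ** 2
c_star, u_star = choose_c(verts, norms, y_hat, toy_utility,
                          np.linspace(-0.05, 0.05, 101))
```

In practice the grid search would call the Monte Carlo utility estimator once per candidate c, so the candidate grid should be kept coarse enough for the simulation budget.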
Figure 5.6: Expected utility of Value Function 1 as a function of c.
Figure 5.7: Expected utility of Value Function 2 as a function of c.
Figure 5.8: Expected utility of Value Function 3 as a function of c.

Table 5.1: Example parameters for value functions.
  Parameter | Value
  V_base    | $300
  C_P       | $100
  B_max     | $20
  t_h       | 0.225 mm
  t_l       | 0.225 mm
  α         | 3
  γ         | 30
  n         | 5000

Table 5.2: Maximum expected utility.
  Value Function | Maximum Expected Utility | Standard Expected Utility (at c = 0) | Difference in Utility Values
  1              | $89.93 (at c = 0.014)    | $81.56                               | $8.37
  2              | $179.90 (at c = 0.013)   | $177.50                              | $2.40
  3              | $143.30 (at c = 0.019)   | $85.39                               | $57.91

Value Function 1 simply penalizes prints that are out of the required tolerances, meaning that a value of c that minimizes this likelihood will maximize expected utility. Since the distribution of f_err(x) shown in Fig. 5.2 is skewed slightly to the left, we can conclude that the compensation procedure/model utilized in Chapter 3 has a slight tendency to produce compensated dimensions that are too small. As a result, a value of c that is positive can help to offset this effect, and thus maximize utility. Similarly, Value Function 2 is maximized when the overall sum of squares of deviations is minimized. Therefore, the optimal value of c is also positive to account for the skew in f_err(x). Finally, Value Function 3 seeks to keep all absolute deviations above the lower tolerance bound (i.e., no dimensions that are too small) while also minimizing the sum of deviations above the upper tolerance bound (i.e., dimensions that are too large). When c is less than -0.1 mm, the value function flattens out to −C_P, as the part is guaranteed to fail the lower tolerance test. When c is greater than -0.1 mm, the likelihood of failing the lower tolerance test decreases, increasing the expected utility.
However, as c increases beyond 0.119 mm, the effect of the increasing cost to repair above-tolerance deviations outweighs the effect of the decreasing likelihood of lower tolerance failure, and the expected utility decreases rapidly.

Because the value functions are significantly impacted by the manufacturer's preferences, and therefore the choice of α and γ, a sensitivity analysis of the two coefficients is useful to determine the generalizability of these results to situations with differing preferences. Fig. 5.10 illustrates the optimal value of c for Value Function 3 as a function of γ. Value Function 1 does not utilize α or γ, and is therefore not analyzed here. Since Value Function 2 only penalizes the sum of squares of deviations, and has no other criteria, the optimal value of c is not sensitive to changes in the α coefficient. We can also see that the optimal value of c for Value Function 3 is sensitive to the value of γ. This is because the increasing cost for repairs on dimensions that are too large causes the optimal value of c to decrease to compensate. Figure 5.9 illustrates the difference between the maximum expected utility and standard expected utility as a function of α and γ. We can see that this difference increases linearly with α in the case of Value Function 2, and decreases with increasing γ in the case of Value Function 3.

Figure 5.9: Sensitivity analysis for the difference in optimal utility values to α and γ for Equations 2 and 3.
Figure 5.10: Sensitivity analysis for the optimal value of c to γ for Equation 3.

5.3 Dental Case Study

In order to provide an additional and more concrete example of the above methodology, this section will examine a common situation from the field of dental additive manufacturing. In this case, a dentist is seeking to print a full arch dental model to plan treatment for a patient. The dentist purchases the resin for their printer at a cost of $150/L, and uses 0.05 L of resin on printing a full set of teeth.
Further, the dentist estimates that half an hour of labor will be needed to print and inspect the model, and this labor will cost $20/hour. Printing this model for the purposes of diagnosis and treatment planning will be reimbursed by the patient's insurance (under code D0470) at a rate of $98.66 [147], with the contingency that it must be within clinically acceptable tolerances. This dentist has opted to utilize the 0.5 mm clinically acceptable accuracy threshold identified in [93–96] and to evaluate the part at 500 clinically relevant points along the surface of the teeth being printed. Therefore, a part with measured deviations greater than 0.5 mm in magnitude will need to be scrapped. These details are summarized below in Table 5.3. In this situation, the dentist's value can be described by Value Function 1 (Equation 5.1). Here, we will also say that they are risk neutral and follow the delta property, simplifying analysis.

Table 5.3: Parameters for the dental case study.
  Parameter | Value
  V_base    | $98.66
  C_P       | ($150/L)(0.05 L) + ($20/hr)(0.5 hr) = $17.50
  t_h       | 0.5 mm
  t_l       | 0.5 mm
  n         | 500

The dentist chooses to apply compensation according to the method proposed in Section 5.1.6, using the predictive model developed in Chapter 4. The dentist's prior belief for the efficacy of compensation is defined by the distribution of deviations remaining on the compensated part from Chapter 4, which is shown below in Figure 5.11. Analysis according to the method outlined above is done to determine the expected utility of the print as a function of the value of c used in compensation. These results are shown below in Figure 5.12 and Table 5.4.

Figure 5.11: Probability distribution of geometric deviations of compensated vertices using spherical harmonics-based model.

Table 5.4: Maximum expected utility for dental case study.
  Maximum Expected Utility | Standard Expected Utility (at c = 0) | Difference in Utility Values
  $45.87 (at c = -0.008)   | $45.17                               | $0.70

These results indicate that the optimal utility to the manufacturer occurs at a value of c = -0.008, and not at the standard c = 0. This indicates that additional utility might be derived from the print by altering the compensation strategy used, giving further support to the argument above.

Figure 5.12: Expected utility as a function of c.

5.4 Conclusion

It can be seen from these results that the conventional compensation strategy, which seeks to minimize a part's deviations, does not necessarily optimize the expected utility of a produced part. Further, even with a relatively simple change to the conventional compensation strategy, it is possible to significantly increase the expected utility of a given print. Because each manufacturer will have different incentives and different tolerances for risks of different magnitudes, the value functions, and the utility functions determined over the value functions, will need to be adjusted accordingly. This general methodology can also be used for other applications. It could be useful for a manufacturer to determine whether they should attempt to print a part on a given machine or with a specific predictive model. It could also be used to help a service provider determine whether it should accept a specific job, or how it should price contracts for prints with certain tolerances and requirements.

There are two limitations to the proposed methodology that should be highlighted. First, the calculations of expected utility are only as accurate as the data they are based on. In the absence of adequate data, or when using deviation data that is not representative of the situation the manufacturer will be facing, recommendations based on this methodology will not be useful. Second, constructing a value function describing a manufacturer's preferences can be difficult.
While three different functions that account for relevant outcomes are proposed here, real-world value functions can be highly complex, and difficult to pin down. There is a significant body of work that should be consulted on how best to elicit information for constructing a value function while avoiding biases and pitfalls.

Chapter 6
Discussion and Future Work

This chapter will begin with a brief discussion of the integration between the previous chapters, offering insight into how the individual methodologies were integrated into a cohesive framework. Following this, possible future directions for this field will be outlined. Finally, the development of a set of prototype software tools demonstrating how this methodology might be employed in an industrial setting is described.

6.1 Discussion

This dissertation has proposed a unified framework for measuring deviation, predicting it in future prints, and compensating for it based on generated predictions. One benefit of this holistic approach is that the assumptions and specifications underlying the individual tools presented here can be tailored specifically to benefit subsequent steps. While the steps in the quantification, prediction, and compensation process are presented individually as chapters in this dissertation, the interaction between all three should not be overlooked. Intentionality towards this interplay is a critical component of the success of the overall framework, and was implemented in a number of ways. For instance, by performing registration in a manner that preserves deviations in the location in which they occurred during manufacturing, the quality of the data that is passed to modeling efforts is improved. When registration and quantification are not performed with an eye towards the modeling tasks that will follow, deviations are hidden and noise is introduced into what will eventually become the training data for predictive models.
Similarly, the procedure for predictive modeling needs to be performed with an eye towards how the part will be compensated. A strong example of these considerations is the fact that deviations along the median normal vector to a given vertex are what is measured during quantification and what is predicted by the models demonstrated here, as this constitutes an actionable insight that can be used during compensation.

This is further illustrated through the use of a central instance definition used throughout the research: individual vertices on the triangular mesh of the designed STL file. When registration is performed, the deviations can be calculated with either the designed file or the scanned shape as the reference. While both are commonly used, here, deviations are intentionally measured from the vertices of the designed shape to the scanned shape. This allows each individual deviation measurement to be assigned to a vertex on the design mesh. Similarly, each predictor variable for modeling presented here is calculated with respect to an individual vertex on the design mesh. Together, these decisions allow modeling to be performed, and greatly simplify the overall process. The proposed compensation strategies are centered around movement of individual vertices, allowing each individual prediction to be utilized in shape alteration.

As a result of this care, the results generated at each step in the procedure are compatible with the needs of the step that will follow. While maintaining this compatibility in the presence of rapidly changing technologies, processes, and streams of data represents a challenge, this work ideally represents a path forward.

6.2 Future Research

This section will seek to highlight future areas of focus that might be explored to move the goal of AM accuracy improvement forward. Because the problem is highly interdisciplinary, and the scope is wide, the following discussion should be understood to be a sample of what could follow.
There is significant potential for future research in the area of prediction and modeling, as this work presents deep challenges that will continue to benefit from the future development of algorithms for machine learning as well as methods of data collection.

6.2.1 Leveraging Streams of In-Situ Measurement Data

One promising avenue of future research in this field could focus on the integration of in-situ measurement data from a printer with the predictive product design adjustment paradigm. As manufacturers transition to Industry 4.0, there is a growing desire for advanced cyber-physical systems (CPS) that can collect and leverage data to increase manufacturing efficiency and quality. Sources of in-situ data that have been collected in the literature are diverse, and reflective of the broad array of AM methods available to users. These sources of data include acoustic emissions monitoring, optical monitoring, infrared (IR) sensing for thermography, accelerometer measurements, and more. This line of research would seek to incorporate these data sources into the predictive modeling methodology, allowing predictions of deviation to be constantly updated during the print based on the observed conditions. Ideally, this would allow for more accurate predictions than those produced by a method that generates a set of static predictions before a print. Further, if optical or other measurements taken in-situ allow the dimensions of the object being printed to be known in real time, this would enable real-time evaluation of model performance, allowing it to be refined on-the-fly. An additional benefit of collecting these streams of data, beyond enhanced predictive modeling, is their utility for detecting print failure. A single system that would perform in-situ predictive product design adjustment when deviations are small, and halt the print when deviations are unrecoverably large or likely to damage printer hardware, would be of significant value to manufacturers.
Finally, this data would allow manufacturers to qualify and inspect parts in-situ, providing them with a much broader range of insight than would be possible with the information gathered after the print has completed.

6.2.2 Improving Robustness of Models with Limited Data

Two critical, yet opposite, tasks lie ahead in the core area of predictive product design adjustment. One encompasses the need to more effectively generate predictions in the face of limited data, while the other relates to how predictive efficacy might be improved in the face of large quantities of data. The first of these challenges reflects the need to produce models that can perform well even when the training data available is limited. For instance, a central task in this research is how to leverage small amounts of data to generate predictions for a wide variety of shapes. As modeling approaches become more effective, ideally, less and less training data would need to be collected before effective predictions could be made. Further, predictions might be made for shapes representing greater and greater covariate shift from the original training data. Developing predictive modeling strategies to more powerfully leverage data is a critical task for commercializing this technology. Progress in this direction may be achieved through more effective representations of product shapes or process parameters for the purpose of modeling, or through more effective algorithms for modeling.

6.2.3 Transitioning to Function at Scale

The opposite problem faced in this field of research is what to do when the scale of data available becomes far greater than what can be handled efficiently by existing methodologies. Data might proliferate to bring about these scenarios in a number of ways. One example would be a small number of machines that automatically collect quality data for each print that is conducted, for the purposes of inspection and modeling.
Within a short time, this would easily surpass the amount of data used for the modeling methods demonstrated in the literature. Similarly, data collected ad hoc for quality inspection purposes across a large manufacturing company would constitute a similar situation. A new set of methods and strategies is needed for this situation. This might come in the form of methods for down-selecting only the most relevant data for modeling based on a similarity metric, or strategies for producing large-scale models. The complexity of this task is only worsened by the diversity of process parameters, materials, and machines that might be represented across the entirety of the collected dataset.

6.2.4 Quality for the Masses

An additional topic that needs to be mentioned here is the critical need for exploring how predictive product design adjustment might be carried out in a diverse set of contexts. Unfortunately, the strategy that is ideal for a large multinational corporation with thousands of printers at dozens of sites across the world, all generating data using highly accurate measurement systems, would be different than one geared to an individual user with a single printer and no measurement system beyond a pair of calipers. Future work delineating how a methodology such as the one presented here would function in widely differing contexts would be highly valuable. One potential means of providing accuracy models for individual users might be through pooling and open sharing of manufacturing quality data, allowing users to benefit from future advances in this work even in the absence of comparable resources. As the cost of highly accurate 3D scanning systems decreases and they find their way into the hands of a greater number of users, this path seems more feasible.
6.3 A System Architecture for a Path Forward

6.3.1 Overview

As the above sections have suggested, there are a number of significant barriers that have hampered the translation of this line of research into industry. First, there are reasonable limitations in the domain knowledge possessed by those managing an AM workflow. Producing an effective data-driven model for an AM process using the methods presented in the previous chapters currently requires advanced knowledge in fields such as statistical modeling or machine learning. Second, there is a significant cost in both time and money to generate these modeling tools in-house for a specific AM process. Small organizations with a limited number of AM machines, or for whom AM is not a main focus, might not deem the effort worthwhile. Further, the amount and quality of data generated by a smaller manufacturer might prove to be a challenge for generating the models that they desire. Finally, such a manufacturer might not have access to the measurement equipment or sensors that are needed to train a specific model.

These challenges point to the need for a CPS-for-AM architecture that can centralize this effort to overcome the knowledge gap and resource constraints, allowing small-scale makers and manufacturers to outsource modeling to domain experts and automated processes. Further, with such a system, AM geometric accuracy data could be pooled among users, reducing the cost of generating this data for each new model that is desired, and lowering the barrier to entry for new users. While the benefits of this approach are significant, the need for manufacturers to maintain the privacy of large swaths of data presents a major challenge. One example of this would be a manufacturer that produces proprietary parts, and would be hesitant to send CAD files and geometric accuracy data for those parts to a 3rd party, let alone share that data with its peers.
For this reason, a system that allows for some geometric accuracy data to be shared, while preserving the security of proprietary information and still allowing modeling to be outsourced to a third party, is needed.

Preliminary work in the Huang Lab has sought to develop a prototype distributed system architecture for addressing the modeling needs of manufacturers while also enabling the separation of proprietary data. In this system, there are three main components. First, the 3rd party service maintains a set of software tools for generating statistical or machine learning models based on geometric accuracy data. Second, a manufacturer client runs software that can take models generated by the 3rd party and use them to generate accuracy predictions and compensated CAD files. Third, both the manufacturers and the 3rd party service utilize a cloud-based app in order to transfer manufacturing data and trained models. The client can manufacture non-proprietary objects using an AM process that they desire to generate a model for. The data from these parts can be sent to the 3rd party service, which generates a trained model based on the data and sends it to the client. Then, the client can deploy the models on their local system, which can handle proprietary and otherwise confidential CAD designs. This system is illustrated in Figure 6.1.

Figure 6.1: Diagram of the proposed system.

6.3.2 Client-Side Software

In the proposed system, software that runs locally on a client's computers handles deployment of trained models for predicting geometric deviations and modification of CAD files, as well as collection of the relevant manufacturing data that will be used for modeling. Because of this, all private information is retained by the client. This software also offers the ability to visualize errors and perform analysis to determine whether a part is likely to fall within required tolerances. A prototype of this software that was developed is shown in Figure 6.2.
It allows for the visualization of errors across the surface of the part in 3D, while also providing tools for quantitative analysis of the predicted deviations. It takes in an STL file of a part, as well as a trained model for a given process, and can output a modified STL file of the compensated part.

Figure 6.2: Screenshot of a prototype of the client-side software program.

6.3.3 Expert-Side Software

A broader set of tools is required on the expert side in order to process accuracy measurement data and AM process data, generate predictor variables that can be used to generate a model, and finally train the required model. These tools might vary as a result of the data that is available from a client, as well as their specific needs. For this reason, a one-size-fits-all approach is likely not possible. Further, because of the significant variability in process conditions and measurement data that is produced by and sent from users, expert judgment and domain knowledge will likely be difficult to fully remove from this process.

6.3.4 Cloud-Based App for Exchange of Data and Models

The final component of this system is a cloud-based app that facilitates the exchange of non-proprietary process accuracy measurement data from users to the 3rd party service, and of trained models in return. Screenshots from a prototype of this app are shown in Figure 6.3. This app also allows users to organize their measurement data from each of their printers, printing processes, and products. This enables manufacturers to track AM accuracy across their resources while giving them a complete inventory of the data they have available for modeling.

6.3.5 Conclusion

A system that balances the need for data privacy with outsourced expertise is necessary to facilitate the translation of research into shape deviation modeling and compensation from academia to widespread industrial use. The distributed system architecture proposed here offers a step in this direction.
Such a system might be put into place by a large AM machine manufacturer as a service that would be offered to customers. This would fit with the industry-wide pivot to software as a service (SaaS) and recurring revenue streams to complement large one-time purchases. This would further serve as a step in facilitating the necessary transition of disconnected printers into an Industry 4.0 framework. One significant downside to the proposed approach is that, due to controls on which data is shared, knowledge generated using proprietary parts, which could be the most relevant to a manufacturer's needs, can't be utilized for model training and improvement.

Figure 6.3: Screenshots of a prototype of the cloud-based exchange.

Reference List

[1] N. A. Meisel, M. R. Woods, T. W. Simpson, and C. J. Dickman, "Redesigning a Reaction Control Thruster for Metal-Based Additive Manufacturing: A Case Study in Design for Additive Manufacturing," Journal of Mechanical Design, vol. 139, Oct. 2017.
[2] Y. Huang, M. C. Leu, J. Mazumder, and A. Donmez, "Additive Manufacturing: Current State, Future Potential, Gaps and Needs, and Recommendations," Journal of Manufacturing Science and Engineering, vol. 137, no. 1, p. 014001, 2015.
[3] F. Belfi, F. Iorizzo, C. Galbiati, and F. Lepore, "Space structures with embedded Flat Plate Pulsating Heat Pipe built by Additive Manufacturing technology: development, test and performance analysis," Journal of Heat Transfer, vol. 141, pp. 1–8, Sep. 2018.
[4] Y. Huang and S. R. Schmid, "Additive Manufacturing for Health: State of the Art, Gaps and Needs, and Recommendations," Journal of Manufacturing Science and Engineering, vol. 140, no. 9, p. 094001, 2018.
[5] K. Stephenson, "A Detailed Five-Year Review of Medical Device Additive Manufacturing Research and its Potential for Translation to Clinical Practice," in 8th Frontiers in Biomedical Devices, p. V003T14A014, Aug. 2015.
[6] C. Comotti, D. Regazzoni, C. Rizzi, and A.
Vitali, "Additive Manufacturing to Advance Functional Design: An Application in the Medical Field," Journal of Computing and Information Science in Engineering, vol. 17, no. 3, p. 031006, 2017.
[7] D. Dimitrov, W. Van Wijck, K. Schreve, and N. De Beer, "Investigating the achievable accuracy of three dimensional printing," Rapid Prototyping Journal, vol. 12, no. 1, pp. 42–52, 2006.
[8] A. Lanzotti, D. M. Del Giudice, A. Lepore, G. Staiano, and M. Martorelli, "On the Geometric Accuracy of RepRap Open-Source Three-Dimensional Printer," Journal of Mechanical Design, vol. 137, no. 10, p. 101703, 2015.
[9] W. F. Mitchell, D. C. Lang, T. A. Merdes, E. W. Reutzel, and G. S. Welsh, "Dimensional accuracy of titanium direct metal laser sintered parts," Solid Freeform Fabrication Symposium – An Additive Manufacturing Conference, pp. 2029–2042, 2016.
[10] A. Du Plessis, S. G. Le Roux, and F. Steyn, "X-ray computed tomography of consumer-grade 3D-printed parts," 3D Printing and Additive Manufacturing, vol. 2, no. 4, pp. 191–195, 2015.
[11] N. Decker and A. Yee, "Assessing the use of binary blends of acrylonitrile butadiene styrene and post-consumer high density polyethylene in fused filament fabrication," International Journal of Additive and Subtractive Materials Manufacturing, vol. 1, no. 2, p. 161, 2017.
[12] M. Babu, P. Franciosa, and D. Ceglarek, "Spatio-Temporal Adaptive Sampling for effective coverage measurement planning during quality inspection of free form surfaces using robotic 3D optical scanner," Journal of Manufacturing Systems, vol. 53, pp. 93–108, 2019.
[13] C. Boehnen and P. Flynn, "Accuracy of 3D scanning technologies in a face scanning scenario," Proceedings of International Conference on 3-D Digital Imaging and Modeling, 3DIM, pp. 310–317, 2005.
[14] L. Zhang, B. Curless, and S. Seitz, "Rapid shape acquisition using color structured light and multi-pass dynamic programming," in Proceedings,
First International Symposium on 3D Data Processing Visualization and Transmission, pp. 24–36, IEEE Comput. Soc, 2002.
[15] J. Flugge, K. Wendt, H. Danzebrink, and A. Abou-Zeid, "Optical Methods for Dimensional Metrology in Production Engineering," CIRP Annals – Manufacturing Technology, vol. 51, no. 2, pp. 685–699, 2002.
[16] K. Harding, ed., Handbook of Optical Dimensional Metrology. CRC Press, Apr. 2016.
[17] P. I. Stavroulakis and R. K. Leach, "Invited Review Article: Review of post-process optical form metrology for industrial-grade metal additive manufactured parts," Review of Scientific Instruments, vol. 87, no. 4, 2016.
[18] Y. Li and P. Gu, "Automatic localization and comparison for free-form surface inspection," Journal of Manufacturing Systems, vol. 25, no. 4, pp. 251–268, 2006.
[19] G. K. Tam, Z. Q. Cheng, Y. K. Lai, F. C. Langbein, Y. Liu, D. Marshall, R. R. Martin, X. F. Sun, and P. L. Rosin, "Registration of 3D point clouds and meshes: A survey from rigid to nonrigid," IEEE Transactions on Visualization and Computer Graphics, vol. 19, no. 7, pp. 1199–1217, 2013.
[20] D. Girardeau-Montaut, "Cloud Compare: Align," 2021.
[21] D. Aiger, N. J. Mitra, and D. Cohen-Or, "4-Points Congruent Sets for Robust Pairwise Surface Registration," SIGGRAPH '08: International Conference on Computer Graphics and Interactive Techniques, ACM SIGGRAPH 2008 Papers, Aug. 2008.
[22] B. K. Smith, L. Bian, P. Rao, R. Jafari-Marandi, M. A. Tschopp, and M. Khanzadeh, "Quantifying Geometric Accuracy With Unsupervised Machine Learning: Using Self-Organizing Map on Fused Filament Fabrication Additive Manufacturing Parts," Journal of Manufacturing Science and Engineering, vol. 140, no. 3, p. 031011, 2017.
[23] R. Zhang, H. Li, L. Liu, and M. Wu, "A G-Super4PCS registration method for photogrammetric and TLS data in geology," ISPRS International Journal of Geo-Information, vol. 6, no. 5, 2017.
[24] J. Huang, T. H. Kwok, and C.
Zhou, "V4PCS: Volumetric 4PCS algorithm for global registration," Proceedings of the ASME Design Engineering Technical Conference, vol. 1, pp. 1–10, 2017.
[25] N. Mellado, D. Aiger, and N. J. Mitra, "SUPER 4PCS fast global pointcloud registration via smart indexing," Eurographics Symposium on Geometry Processing, vol. 33, no. 5, pp. 205–215, 2014.
[26] P. J. Besl and N. D. McKay, "A Method for Registration of 3-D Shapes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, pp. 239–256, 1992.
[27] V. Klar, J. Pere, T. Turpeinen, P. Kärki, H. Orelma, and P. Kuosmanen, "Shape fidelity and structure of 3D printed high consistency nanocellulose," Scientific Reports, vol. 9, no. 1, pp. 1–10, 2019.
[28] D. Girardeau-Montaut, "CloudCompare."
[29] N. Alharbi, R. Osman, and D. Wismeijer, "Factors Influencing the Dimensional Accuracy of 3D-Printed Full-Coverage Dental Restorations Using Stereolithography Technology," The International Journal of Prosthodontics, vol. 29, no. 5, pp. 503–510, 2016.
[30] N. Gelfand, L. Ikemoto, S. Rusinkiewicz, and M. Levoy, "Geometrically stable sampling for the ICP algorithm," Proceedings of International Conference on 3-D Digital Imaging and Modeling, 3DIM, pp. 260–267, 2003.
[31] T.-H. Kwok and K. Tang, "Improvements to the Iterative Closest Point Algorithm for Shape Registration in Manufacturing," Journal of Manufacturing Science and Engineering, vol. 138, p. 011014, Jan. 2016.
[32] Y. Yu, F. Da, and Y. Guo, "Sparse ICP with Resampling and Denoising for 3D Face Verification," IEEE Transactions on Information Forensics and Security, vol. 14, no. 7, pp. 1917–1927, 2019.
[33] D. Chetverikov, D. Svirko, D. Stepanov, and P. Krsek, "The trimmed iterative closest point algorithm," Proceedings – International Conference on Pattern Recognition, vol. 16, no. 3, pp. 545–548, 2002.
[34] D. Chetverikov, D. Stepanov, and P.
Krsek, “Robust Euclidean alignment of 3D point sets: The trimmed iterative closest point algorithm,” Image and Vision Computing, vol. 23, no. 3, pp. 299–309, 2005.
[35] J. Dong, Y. Peng, S. Ying, and Z. Hu, “LieTrICP: An improvement of trimmed iterative closest point algorithm,” Neurocomputing, vol. 140, pp. 67–76, 2014.
[36] J. Minguez, L. Montesano, and F. Lamiraux, “Metric-based iterative closest point scan matching for sensor displacement estimation,” IEEE Transactions on Robotics, vol. 22, no. 5, pp. 1047–1054, 2006.
[37] L. Armesto, J. Minguez, and L. Montesano, “A generalization of the metric-based iterative closest point technique for 3D scan matching,” Proceedings - IEEE International Conference on Robotics and Automation, pp. 1367–1372, 2010.
[38] C. Kapoutsis, C. P. Vavoulidis, and I. Pitas, “Morphological iterative closest point algorithm,” Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 1296, no. 11, pp. 416–423, 1997.
[39] C. A. Kapoutsis, C. P. Vavoulidis, and I. Pitas, “Morphological techniques in the iterative closest point algorithm,” IEEE International Conference on Image Processing, vol. 1, pp. 808–812, 1998.
[40] J. Yang, H. Li, D. Campbell, and Y. Jia, “Go-ICP: A Globally Optimal Solution to 3D ICP Point-Set Registration,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 11, pp. 2241–2254, 2016.
[41] A. Hussein, L. Hao, C. Yan, and R. Everson, “Finite element simulation of the temperature and stress fields in single layers built without-support in selective laser melting,” Materials and Design, vol. 52, pp. 638–647, Dec. 2013.
[42] D. Pal, N. Patil, K. Zeng, and B. Stucker, “An Integrated Approach to Additive Manufacturing Simulations Using Physics Based, Coupled Multiscale Process Modeling,” Journal of Manufacturing Science and Engineering, vol. 136, no. 6, pp. 061022-1–061022-16, 2014.
[43] J. C. Steuben, A.
P. Iliopoulos, and J. G. Michopoulos, “On Multiphysics Discrete Element Modeling of Powder-Based Additive Manufacturing Processes,” in Proceedings of the ASME 2016 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, (August 21-24, 2016, Charlotte, North Carolina, USA), p. V01AT02A032, Aug. 2016.
[44] A. Cattenone, S. Morganti, G. Alaimo, and F. Auricchio, “Finite element analysis of Additive Manufacturing based on Fused Deposition Modeling (FDM): distortion prediction and comparison with experimental data,” Journal of Manufacturing Science and Engineering, vol. 141, no. 1, pp. 011010-1–011010-17, 2018.
[45] J. G. Michopoulos, A. P. Iliopoulos, J. C. Steuben, A. J. Birnbaum, and S. G. Lambrakos, “On the multiphysics modeling challenges for metal additive manufacturing processes,” Additive Manufacturing, vol. 22, pp. 784–799, 2018.
[46] Q. Huang, J. Zhang, A. Sabbaghi, and T. Dasgupta, “Optimal offline compensation of shape shrinkage for three-dimensional printing processes,” IIE Transactions (Institute of Industrial Engineers), vol. 47, no. 5, pp. 431–441, 2015.
[47] K. Tong, S. Joshi, and E. A. Lehtihet, “Error compensation for fused deposition modeling (FDM) machine by correcting slice files,” Rapid Prototyping Journal, vol. 14, no. 1, pp. 4–14, 2008.
[48] Q. Huang, “An Analytical Foundation for Optimal Compensation of Three-Dimensional Shape Deformation in Additive Manufacturing,” Journal of Manufacturing Science and Engineering, vol. 138, no. 6, p. 061010, 2016.
[49] A. Wang, S. Song, Q. Huang, and F. Tsung, “In-Plane Shape-Deviation Modeling and Compensation for Fused Deposition Modeling Processes,” IEEE Transactions on Automation Science and Engineering, vol. 14, no. 2, pp. 968–976, 2017.
[50] L. Cheng, A. Wang, and F.
Tsung, “A prediction and compensation scheme for in-plane shape deviation of additive manufacturing with information on process parameters,” IISE Transactions, vol. 50, no. 5, pp. 394–406, 2018.
[51] A. Sabbaghi and Q. Huang, “Model transfer across additive manufacturing processes via mean effect equivalence of lurking variables,” The Annals of Applied Statistics, vol. 12, pp. 2409–2429, Dec. 2018.
[52] R. de Souza Borges Ferreira, A. Sabbaghi, and Q. Huang, “Automated Geometric Shape Deviation Modeling for Additive Manufacturing Systems via Bayesian Neural Networks,” IEEE Transactions on Automation Science and Engineering, vol. PP, pp. 1–15, 2019.
[53] S. L. Campanelli, G. Cardano, R. Giannoccaro, A. D. Ludovico, and E. L. J. Bohez, “Statistical analysis of the stereolithographic process to improve the accuracy,” Computer-Aided Design, vol. 39, no. 1, pp. 80–86, 2007.
[54] J. G. Zhou, D. Herscovici, and C. C. Chen, “Parametric process optimization to improve the accuracy of rapid prototyped stereolithography parts,” International Journal of Machine Tools and Manufacture, vol. 40, no. 3, pp. 363–379, 2000.
[55] M. S. Hossain, D. Espalin, J. Ramos, M. Perez, and R. Wicker, “Improved mechanical properties of fused deposition modeling-manufactured parts through build parameter modifications,” ASME Transactions, Journal of Manufacturing Science and Engineering, vol. 136, no. 6, p. 61002, 2014.
[56] A. Lanzotti, M. Martorelli, and G. Staiano, “Understanding Process Parameter Effects of RepRap Open-Source Three-Dimensional Printers Through a Design of Experiments Approach,” Journal of Manufacturing Science and Engineering, vol. 137, p. 011017, Feb. 2015.
[57] K. Tong, E. A. Lehtihet, and S. Joshi, “Software compensation of rapid prototyping machines,” Precision Engineering, vol. 28, no. 3, pp. 280–292, 2004.
[58] K. Xu, T. H. Kwok, Z. Zhao, and Y.
Chen, “A reverse compensation framework for shape deformation control in additive manufacturing,” Journal of Computing and Information Science in Engineering, vol. 17, no. 2, 2017.
[59] J. Francis and L. Bian, “Deep Learning for Distortion Prediction in Laser-Based Additive Manufacturing using Big Data,” Manufacturing Letters, vol. 20, pp. 10–14, 2019.
[60] J. Francis, A. Sabbaghi, M. Ravi Shankar, M. Ghasri-Khouzani, and L. Bian, “Efficient Distortion Prediction of Additively Manufactured Parts Using Bayesian Model Transfer Between Material Systems,” Journal of Manufacturing Science and Engineering, vol. 142, no. 5, pp. 1–16, 2020.
[61] H. Luan and Q. Huang, “Prescriptive Modeling and Compensation of In-Plane Shape Deformation for 3-D Printed Freeform Products,” IEEE Transactions on Automation Science and Engineering, vol. 14, no. 1, pp. 73–82, 2017.
[62] Q. Huang, H. Nouri, K. Xu, Y. Chen, S. Sosina, and T. Dasgupta, “Statistical Predictive Modeling and Compensation of Geometric Deviations of Three-Dimensional Printed Products,” Journal of Manufacturing Science and Engineering, vol. 136, no. 6, p. 061008, 2014.
[63] Y. Jin, S. Joe Qin, and Q. Huang, “Offline Predictive Control of Out-of-Plane Shape Deformation for Additive Manufacturing,” Journal of Manufacturing Science and Engineering, vol. 138, no. 12, p. 121005, 2016.
[64] Q. Huang, Y. Wang, M. Lyu, and W. Lin, “Shape Deviation Generator (SDG) - A Convolution Framework for Learning and Predicting 3D Printing Shape Accuracy,” IEEE Transactions on Automation Science and Engineering, vol. 17, no. 3, pp. 1486–1500, 2020.
[65] J. D. Hiller and H. Lipson, “STL 2.0: A proposal for a universal multi-material Additive Manufacturing File format,” 20th Annual International Solid Freeform Fabrication Symposium, SFF 2009, pp. 266–278, 2009.
[66] 3MF Consortium, “3MF Consortium.”
[67] S. Chowdhury, K. Mhapsekar, and S.
Anand, “Part Build Orientation Optimization and Neural Network-Based Geometry Compensation for Additive Manufacturing Process,” Journal of Manufacturing Science and Engineering, vol. 140, pp. 031009-1–031009-15, Dec. 2018.
[68] S. Chowdhury and S. Anand, “Artificial Neural Network Based Geometric Compensation for Thermal Deformation in Additive Manufacturing Processes,” in Proceedings of the ASME MSEC, (June 27 - July 1, 2016, Blacksburg, Virginia, USA), MSEC2016-8784, p. V003T08A006, 2016.
[69] G. Moroni, W. P. Syam, and S. Petró, “Towards early estimation of part accuracy in additive manufacturing,” Procedia CIRP, vol. 21, pp. 300–305, 2014.
[70] G. Moroni, W. P. Syam, and S. Petrò, “Functionality-based part orientation for additive manufacturing,” Procedia CIRP, vol. 36, pp. 217–222, 2015.
[71] M. McConaha and S. Anand, “Additive manufacturing distortion compensation based on scan data of built geometry,” Journal of Manufacturing Science and Engineering, Transactions of the ASME, vol. 142, no. 6, pp. 1–14, 2020.
[72] B. Zhang, L. Li, and S. Anand, “Distortion Prediction and NURBS Based Geometry Compensation for Reducing Part Errors in Additive Manufacturing,” Procedia Manufacturing, vol. 48, pp. 706–717, 2020.
[73] G. Manogharan, R. Wysk, O. Harrysson, and R. Aman, “AIMS - A Metal Additive-hybrid Manufacturing System: System Architecture and Attributes,” Procedia Manufacturing, vol. 1, pp. 273–286, 2015.
[74] S. Maghsoodloo and M. H. Li, “Optimal asymmetric tolerance design,” IIE Transactions (Institute of Industrial Engineers), vol. 32, no. 12, pp. 1127–1137, 2000.
[75] J. von Neumann and O. Morgenstern, Theory of Games and Economic Behavior. Princeton, NJ: Princeton University Press, 1944.
[76] A. E. Abbas, L. Yang, R. Zapata, and T. L. Schmitz, “Application of decision analysis to milling profit maximisation: An introduction,” International Journal of Materials and Product Technology, vol. 35, no. 1-2, pp. 64–88, 2009.
[77] A. C. Hupman, A. E.
Abbas, and T. L. Schmitz, “Incentives versus value in manufacturing systems: An application to high-speed milling,” Journal of Manufacturing Systems, vol. 36, pp. 20–26, 2015.
[78] T. L. Schmitz, J. Karandikar, N. Ho Kim, and A. Abbas, “Uncertainty in machining: Workshop summary and contributions,” Journal of Manufacturing Science and Engineering, Transactions of the ASME, vol. 133, no. 5, 2011.
[79] R. E. Zapata-Ramos, T. L. Schmitz, M. Traverso, and A. Abbas, “Value of information and experimentation in milling profit optimisation,” International Journal of Mechatronics and Manufacturing Systems, vol. 2, no. 5-6, pp. 580–599, 2009.
[80] J. M. Karandikar, A. E. Abbas, and T. L. Schmitz, “Tool life prediction using Bayesian updating. Part 1: Milling tool life model using a discrete grid method,” Precision Engineering, vol. 38, no. 1, pp. 18–27, 2014.
[81] N. Xu and S. H. Huang, “Multiple attributes utility analysis in setup plan evaluation,” Journal of Manufacturing Science and Engineering, Transactions of the ASME, vol. 128, no. 1, pp. 220–227, 2006.
[82] I. Pergher and A. T. de Almeida, “A multi-attribute decision model for setting production planning parameters,” Journal of Manufacturing Systems, vol. 42, pp. 224–232, 2017.
[83] I. Pergher and A. T. de Almeida, “A multi-attribute, rank-dependent utility model for selecting dispatching rules,” Journal of Manufacturing Systems, vol. 46, pp. 264–271, 2018.
[84] U. K. uz Zaman, M. Rivette, A. Siadat, and S. M. Mousavi, “Integrated product-process design: Material and manufacturing process selection for additive manufacturing using multi-criteria decision making,” Robotics and Computer-Integrated Manufacturing, vol. 51, pp. 169–180, 2018.
[85] Y. Wang, R. Blache, and X. Xu, “Selection of additive manufacturing processes,” Rapid Prototyping Journal, vol. 23, no. 2, pp. 434–447, 2017.
[86] Y. Zhang and A.
Bernard, “An integrated decision-making model for multi-attributes decision-making (MADM) problems in additive manufacturing process planning,” Rapid Prototyping Journal, vol. 20, no. 5, pp. 377–389, 2014.
[87] B. Bustos, D. A. Keim, D. Saupe, T. Schreck, and D. V. Vranić, “Feature-based similarity search in 3D object databases,” ACM Computing Surveys, vol. 37, no. 4, pp. 345–387, 2005.
[88] M. Kazhdan, T. Funkhouser, and S. Rusinkiewicz, “Rotation Invariant Spherical Harmonic Representation of 3D Shape Descriptors,” in SGP ’03: Proceedings of the 2003 Eurographics/ACM SIGGRAPH Symposium on Geometry Processing, pp. 156–164, 2003.
[89] T. Funkhouser, P. Min, M. Kazhdan, J. Chen, A. Halderman, D. Dobkin, and D. Jacobs, “A search engine for 3D models,” ACM Transactions on Graphics, vol. 22, no. 1, pp. 83–105, 2003.
[90] H. T. Yau, T. J. Yang, and Y. C. Chen, “Tooth model reconstruction based upon data fusion for orthodontic treatment simulation,” Computers in Biology and Medicine, vol. 48, no. 1, pp. 8–16, 2014.
[91] Y. H. Sohmura, H. Satoh, J. Takahashi, and K. Takada, “Complete 3-D reconstruction of dental cast shape using perceptual grouping,” IEEE Transactions on Medical Imaging, vol. 20, no. 10, pp. 1093–1101, 2001.
[92] A. Hazeveld, J. J. Huddleston Slater, and Y. Ren, “Accuracy and reproducibility of dental replica models reconstructed by different rapid prototyping techniques,” American Journal of Orthodontics and Dentofacial Orthopedics, vol. 145, no. 1, pp. 108–115, 2014.
[93] P. Aly and C. Mohsen, “Comparison of the Accuracy of Three-Dimensional Printed Casts, Digital, and Conventional Casts: An in Vitro Study,” European Journal of Dentistry, vol. 14, no. 2, pp. 189–193, 2020.
[94] G. B. Brown, G. F. Currier, O. Kadioglu, and J. P. Kierl, “Accuracy of 3-dimensional printed dental models reconstructed from digital intraoral impressions,” American Journal of Orthodontics and Dentofacial Orthopedics, vol. 154, no. 5, pp. 733–739, 2018.
[95] L. Bohner, M. Hanisch, G.
De Luca Canto, E. Mukai, N. Sesma, and P. Tortamano Neto, “Accuracy of casts fabricated by digital and conventional implant impressions,” Journal of Oral Implantology, vol. 45, no. 2, pp. 94–99, 2019.
[96] S. Y. Kim, Y. S. Shin, H. D. Jung, C. J. Hwang, H. S. Baik, and J. Y. Cha, “Precision and trueness of dental models manufactured with different 3-dimensional printing techniques,” American Journal of Orthodontics and Dentofacial Orthopedics, vol. 153, no. 1, pp. 144–153, 2018.
[97] S. T. Jaber, M. Y. Hajeer, T. Z. Khattab, and L. Mahaini, “Evaluation of the fused deposition modeling and the digital light processing techniques in terms of dimensional accuracy of printing dental models used for the fabrication of clear aligners,” Clinical and Experimental Dental Research, vol. 7, no. 4, pp. 591–600, 2021.
[98] N. L. Koenig, Accuracy of Fit of Direct Printed Aligners Versus Thermoformed Aligners. PhD thesis, Saint Louis University, 2020.
[99] O. A. Naeem, A Comparison of Three-Dimensional Printing Technologies on the Precision, Trueness, and Accuracy of Printed Retainers. PhD thesis, Virginia Commonwealth University, 2020.
[100] P. Papaspyridakos, Y. wei Chen, B. Alshawaf, K. Kang, M. Finkelman, V. Chronopoulos, and H. P. Weber, “Digital workflow: In vitro accuracy of 3D printed casts generated from complete-arch digital implant scans,” Journal of Prosthetic Dentistry, vol. 124, no. 5, pp. 589–593, 2020.
[101] P. Papaspyridakos, G. I. Benic, V. L. Hogsett, G. S. White, K. Lal, and G. O. Gallucci, “Accuracy of implant casts generated with splinted and non-splinted impression techniques for edentulous patients: An optical scanning study,” Clinical Oral Implants Research, vol. 23, no. 6, pp. 676–681, 2012.
[102] P. Papaspyridakos, H. Hirayama, C. J. Chen, C. H. Ho, V. Chronopoulos, and H. P.
Weber, “Full-arch implant fixed prostheses: a comparative study on the effect of connection type and impression technique on accuracy of fit,” Clinical Oral Implants Research, vol. 27, no. 9, pp. 1099–1105, 2016.
[103] Y. Etemad-Shahidi, O. B. Qallandar, J. Evenden, F. Alifui-Segbaya, and K. E. Ahmed, “Accuracy of 3-dimensionally printed full-arch dental models: A systematic review,” Journal of Clinical Medicine, vol. 9, no. 10, pp. 1–18, 2020.
[104] M. D. Scherer, “Digital Dental Model Production with High Accuracy 3D Printing,” Tech. Rep., Formlabs, Somerville, MA, March 2017.
[105] S. Pillai, A. Upadhyay, P. Khayambashi, I. Farooq, H. Sabri, M. Tarar, K. T. Lee, I. Harb, S. Zhou, Y. Wang, and S. D. Tran, “Dental 3D-printing: Transferring art from the laboratories to the clinics,” Polymers, vol. 13, no. 1, pp. 1–25, 2021.
[106] L. T. Camardella, O. V. Vilella, M. M. van Hezel, and K. H. Breuning, “Genauigkeit von stereolitographisch gedruckten digitalen Modellen im Vergleich zu Gipsmodellen” [Accuracy of stereolithographically printed digital models compared with plaster models], Journal of Orofacial Orthopedics, vol. 78, no. 5, pp. 394–402, 2017.
[107] R. E. Rebong, K. T. Stewart, A. Utreja, and A. A. Ghoneima, “Accuracy of three-dimensional dental resin models created by fused deposition modeling, stereolithography, and Polyjet prototype technologies: A comparative study,” Angle Orthodontist, vol. 88, no. 3, pp. 363–369, 2018.
[108] S. H. Shin, J. H. Lim, Y. J. Kang, J. H. Kim, J. S. Shim, and J. E. Kim, “Evaluation of the 3D printing accuracy of a dental model according to its internal structure and cross-arch plate design: An in vitro study,” Materials, vol. 13, no. 23, pp. 1–12, 2020.
[109] L. H. Lin, J. Granatelli, F. Alifui-Segbaya, L. Drake, D. Smith, and K. E. Ahmed, “A proposed in vitro methodology for assessing the accuracy of three-dimensionally printed dental models and the impact of storage on dimensional stability,” Applied Sciences (Switzerland), vol. 11, no. 13, 2021.
[110] B. Alexandru-Victor, G. Cristina, B. Sorana, M.
Marius, D. Diana, and C. Radu-Septimiu, “Three-dimensional accuracy evaluation of two additive manufacturing processes in the production of dental models,” Key Engineering Materials, vol. 752, pp. 119–125, 2017.
[111] W. Zhang, J. Qi, P. Wan, H. Wang, D. Xie, X. Wang, and G. Yan, “An easy-to-use airborne LiDAR data filtering method based on cloth simulation,” Remote Sensing, vol. 8, no. 6, pp. 1–22, 2016.
[112] T. Möller and J. F. Hughes, “Efficiently Building a Matrix to Rotate One Vector to Another,” Journal of Graphics Tools, vol. 4, pp. 1–4, Jan. 1999.
[113] M. Kazhdan and H. Hoppe, “Screened Poisson surface reconstruction,” ACM Transactions on Graphics, vol. 32, no. 3, pp. 1–13, 2013.
[114] D. Bates, M. Mächler, B. Bolker, and S. Walker, “Fitting Linear Mixed-Effects Models Using lme4,” Journal of Statistical Software, vol. 67, no. 1, pp. 201–210, 2015.
[115] M. S. Bartlett, “Properties of sufficiency and statistical tests,” Proceedings of the Royal Society of London. Series A - Mathematical and Physical Sciences, vol. 160, pp. 268–282, May 1937.
[116] E. Marchandise, J. F. Remacle, and C. Geuzaine, “Optimal parametrizations for surface remeshing,” Engineering with Computers, vol. 30, no. 3, pp. 383–402, 2014.
[117] M. Liu, C. Sun, S. Huang, and Z. Zhang, “An accurate projector calibration method based on polynomial distortion representation,” Sensors (Switzerland), vol. 15, no. 10, pp. 26567–26582, 2015.
[118] G. Strano, L. Hao, R. M. Everson, and K. E. Evans, “Surface roughness analysis, modelling and prediction in selective laser melting,” Journal of Materials Processing Technology, vol. 213, no. 4, pp. 589–597, 2013.
[119] A. I. Botean, “Thermal expansion coefficient determination of polylactic acid using digital image correlation,” E3S Web of Conferences, vol. 32, p. 01007, 2018.
[120] J. W. Stansbury and M. J. Idacavage, “3D printing with polymers: Challenges among expanding options and opportunities,” Dental Materials, vol. 32, no.
1, pp. 54–64, 2016.
[121] A. A. D’Amico, A. Debaie, and A. M. Peterson, “Effect of layer thickness on irreversible thermal expansion and interlayer strength in fused deposition modeling,” Rapid Prototyping Journal, vol. 23, no. 5, pp. 943–953, 2017.
[122] G. Louppe, Understanding Random Forests: From Theory to Practice. PhD thesis, University of Liège, 2014.
[123] C. K. Williams and C. E. Rasmussen, “Gaussian Processes for Regression,” Advances in Neural Information Processing Systems, pp. 514–520, Jun. 1996.
[124] D. Wu, C. Jennings, J. Terpenny, R. Gao, and S. Kumara, “Data-Driven Prognostics Using Random Forests: Prediction of Tool Wear,” 2017.
[125] D. Wu, C. Jennings, J. Terpenny, R. X. Gao, and S. Kumara, “A Comparative Study on Machine Learning Algorithms for Smart Manufacturing: Tool Wear Prediction Using Random Forests,” Journal of Manufacturing Science and Engineering, vol. 139, no. 7, p. 071018, 2017.
[126] J. Tian, Y. Ai, M. Zhao, C. Fei, and F. Zhang, “Fault Diagnosis Method for Inter-Shaft Bearings Based on Information Exergy and Random Forest,” in Proceedings of the ASME Turbo Expo, (Oslo, Norway, June 11-15, 2018), GT2018-76101, p. V006T05A017, 2018.
[127] L. Breiman, “Random Forests,” Machine Learning, vol. 45, no. 1, pp. 5–32, 2001.
[128] T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning. Springer Series in Statistics, New York, NY: Springer New York, 2nd ed., 2009.
[129] A. Liaw and M. Wiener, “Classification and regression by randomForest,” 2002.
[130] J. Quiñonero-Candela, A. Schwaighofer, M. Sugiyama, and N. Lawrence, eds., Dataset Shift in Machine Learning. Cambridge, MA: The MIT Press, 2009.
[131] S. Bickel, M. Brückner, and T. Scheffer, “Discriminative learning under covariate shift,” Journal of Machine Learning Research, vol. 10, pp. 2137–2155, 2009.
[132] M. Sugiyama and M. Kawanabe, Machine Learning in Non-Stationary Environments. The MIT Press, Mar. 2012.
[133] M. Sugiyama, M. Krauledat, and K.-R.
Müller, “Covariate Shift Adaptation by Importance Weighted Cross Validation,” Journal of Machine Learning Research, vol. 8, pp. 985–1005, 2007.
[134] M. Rosenblatt, “Remarks on Some Nonparametric Estimates of a Density Function,” 1956.
[135] E. Parzen, “On Estimation of a Probability Density Function and Mode,” 1962.
[136] D. Endres and J. Schindelin, “A new metric for probability distributions,” IEEE Transactions on Information Theory, vol. 49, pp. 1858–1860, Jul. 2003.
[137] S. Kullback and R. A. Leibler, “On Information and Sufficiency,” The Annals of Mathematical Statistics, vol. 22, pp. 79–86, Mar. 1951.
[138] Cults3D, “Demonstrative Models with Verti Articulator B4D,” 2021.
[139] Cults3D, “Dental Model Bridge,” 2021.
[140] Cults3D, “Dental Model Hollow,” 2021.
[141] Cults3D, “Orthodontic Model Hollow,” 2021.
[142] A. Abbas, Foundations of Multiattribute Utility. Cambridge University Press, 1st ed., 2018.
[143] P. Moran, “Notes on Continuous Stochastic Phenomena,” Biometrika, vol. 37, no. 1/2, pp. 17–23, 1950.
[144] T. E. Smith, “Notebook on Spatial Data Analysis.”
[145] T. Wang and J. S. Dyer, “A copulas-based approach to modeling dependence in decision trees,” Operations Research, vol. 60, no. 1, pp. 225–242, 2012.
[146] E. Furman, A. Kuznetsov, J. Su, and R. Zitikis, “Tail dependence of the Gaussian copula revisited,” Insurance: Mathematics and Economics, vol. 69, pp. 97–103, 2016.
[147] Optum, “Dental Fee Schedule,” 2020.
Asset Metadata
Creator
Decker, Nathan
(author)
Core Title
Machine learning-driven deformation prediction and compensation for additive manufacturing
School
Viterbi School of Engineering
Degree
Doctor of Philosophy
Degree Program
Industrial and Systems Engineering
Publication Date
03/24/2022
Defense Date
03/09/2022
Publisher
University of Southern California (original), University of Southern California. Libraries (digital)
Tag
3D printing,additive manufacturing,CAD/CAM/CAE,cyber-physical systems,dimensional accuracy assessment,freeform shape,inspection and quality control,iterative closest point (ICP),modeling and simulation,OAI-PMH Harvest,rapid prototyping and solid freeform fabrication,registration,shape deviation modeling,shape representation
Format
application/pdf (imt)
Language
English
Contributor
Electronically uploaded by the author (provenance)
Advisor
Huang, Qiang (committee chair), Abbas, Ali (committee member), Chen, Yong (committee member)
Creator Email
nathanidecker@gmail.com,ndecker@usc.edu
Permanent Link (DOI)
https://doi.org/10.25549/usctheses-oUC110843360
Unique identifier
UC110843360
Document Type
Dissertation
Rights
Decker, Nathan
Type
texts
Source
20220331-usctheses-batch-918 (batch), University of Southern California (contributing entity), University of Southern California Dissertations and Theses (collection)
Access Conditions
The author retains rights to his/her dissertation, thesis or other graduate work according to U.S. copyright law. Electronic access is being provided by the USC Libraries in agreement with the author, as the original true and official version of the work, but does not grant the reader permission to use the work if the desired use is covered by copyright. It is the author, as rights holder, who must provide use permission if such use is covered by copyright. The original signature page accompanying the original submission of the work to the USC Libraries is retained by the USC Libraries and a copy of it may be obtained by authorized requesters contacting the repository e-mail address given.
Repository Name
University of Southern California Digital Library
Repository Location
USC Digital Library, University of Southern California, University Park Campus MC 2810, 3434 South Grand Avenue, 2nd Floor, Los Angeles, California 90089-2810, USA
Repository Email
cisadmin@lib.usc.edu