A Polynomial Chaos Formalism for Uncertainty Budget Assessment

by

Zhiheng Wang

A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(CIVIL ENGINEERING)

May 2022

Copyright 2022 Zhiheng Wang

Dedication

I dedicate this work to my dad Dr. Zheng Wang and my advisor Dr. Roger Georges Ghanem, who have had the greatest impact on my way of thinking and behaving.

Acknowledgments

I would like to express my sincere gratitude to Professor Roger G. Ghanem for his continuous support, guidance, and comments, for having confidence in me, and, most importantly, for establishing a role model for my life. His immense knowledge and insightful vision have influenced me throughout the course of my PhD. I also greatly thank him for the many more things he did for me of which I may not even have been aware.

I will always be grateful to my former research advisor Professor Jie Li for his support and for introducing me to the field of probabilistic analysis. His insightful suggestions and confidence in me from the beginning have encouraged me to keep looking for the beauty of uncertainty.

My special thanks go to Professor Sami F. Masri for his support and his service on the dissertation committee. He has always provided timely and strong support, and his advice on innovative topics will be extremely useful in my career. My sincere thanks also go to Professor Thomas H. Jordan for serving on the dissertation committee. I am greatly honored and grateful for the insightful discussion and for his advice on my work on seismic hazard analysis. I would like to thank Professor Patrick J. Lynett and Professor Bora Gencturk for serving on the qualifying committee. Special thanks to Professor Amy Rechenmacher for her encouragement. Financial support from the US National Science Foundation and the US Department of Energy is greatly appreciated.
Finally, I would like to express my special thanks to my family: my father Zheng Wang, my mother Huafang Lei, my grandfather Changjun Wang, and my grandmother Shuqin Yao. I feel fortunate and grateful to have been born and raised in this family, whose love, suggestions, and support have inspired me to complete this work and to keep moving forward.

Table of Contents

Dedication
Acknowledgments
List of Tables
List of Figures
Abstract

Chapter 1: Introduction
    1.1 Motivation
    1.2 Review of Relevant Techniques
        1.2.1 Polynomial chaos expansion
        1.2.2 Basis adaptation
        1.2.3 Kernel density estimation
    1.3 Outline

Chapter 2: Stochastic Modeling
    2.1 Introduction
    2.2 Extended Polynomial Chaos Expansion for Mixed Aleatory and Epistemic Uncertainties
    2.3 Quantification of Aleatory and Epistemic Uncertainties
        2.3.1 Representation of PDF: KDE
        2.3.2 Stochastic sensitivity measures
        2.3.3 Methods of quantifying the effect of aleatory and epistemic uncertainties on PDF of QoI
            2.3.3.1 Finite difference approach
            2.3.3.2 Stochastic sensitivity approach
        2.3.4 Stochastic model for PDFs
    2.4 Illustrative Numerical Examples
        2.4.1 Example I: Beam structure
        2.4.2 Example II: Reinforced concrete shear wall
    2.5 Concluding Remarks

Chapter 3: Sensitivity Measures
    3.1 Introduction
    3.2 Modified Extended Polynomial Chaos Expansion
    3.3 Global and Reliability Sensitivity Measures With Respect to Distribution Parameters
        3.3.1 Global sensitivity index function with respect to distribution parameters
        3.3.2 Reliability sensitivity index with respect to distribution parameters
    3.4 Case Studies
        3.4.1 Example I: Ishigami function
        3.4.2 Example II: Beam structure
        3.4.3 Case study III: Reinforced concrete shear wall
    3.5 Concluding Remarks

Chapter 4: Bayesian Model Calibration
    4.1 Introduction
    4.2 PCE with Random Coefficients by EPCE
        4.2.1 Representation of error in EPCE
    4.3 Bayesian Inference
        4.3.1 Standard Bayesian inference
        4.3.2 Bayesian inference for random PCE coefficients
            4.3.2.1 Prior
            4.3.2.2 Likelihood
            4.3.2.3 Posterior
            4.3.2.4 Metropolis-Hastings Algorithm
    4.4 Stochastic Models for PDFs
    4.5 Case Studies
        4.5.1 Example I: Beam structure
        4.5.2 Example II: Reinforced concrete shear wall
    4.6 Concluding Remarks

Chapter 5: Seismic Hazard Forecasting
    5.1 Introduction
    5.2 Standard Probabilistic Seismic Hazard Analysis (PSHA)
        5.2.1 Earthquake source characterizations
            5.2.1.1 Model of magnitude
            5.2.1.2 Model of source-to-site distance
        5.2.2 Ground motion model
        5.2.3 Combined calculation
        5.2.4 Uncertainty assessment
    5.3 EPCE-based Seismic Hazard Analysis
        5.3.1 Deterministic hazard model
        5.3.2 Uncertainty representation
        5.3.3 Uncertainty propagation
        5.3.4 Hazard curve
        5.3.5 Stochastic model for hazard curves
    5.4 Case Study
    5.5 Concluding Remarks

Chapter 6: Stochastic Multiscale Modeling
    6.1 Introduction
    6.2 Modeling and Propagation of Hierarchical Uncertainties and Modeling Errors
        6.2.1 Representation of modeling errors
            6.2.1.1 Statistical error
            6.2.1.2 Model error
        6.2.2 Generalized extended polynomial chaos expansion
    6.3 Methods of Quantifying the Effects of Modeling Errors and Random Parameters
        6.3.1 Stochastic sensitivity measures
        6.3.2 Influence of statistical error
            6.3.2.1 Total variation in PDF of QoI due to statistical error
            6.3.2.2 Influence of statistical error on failure probability
        6.3.3 Influence of model error
            6.3.3.1 Sensitivity of PDF of QoI with respect to model parameters in non-finest scales
            6.3.3.2 Sensitivity of failure probability with respect to model parameters in non-finest scales
        6.3.4 Sensitivity of QoI with respect to parameters in finest observable scale
    6.4 Example: Multi-scale Car Composites Modeling
        6.4.1 Fine-scale physical sub-models
        6.4.2 Coarse-scale physical sub-model
        6.4.3 Probabilistic models
    6.5 Results
    6.6 Concluding Remarks

Chapter 7: Stochastic Optimal Control of Hypersonic Trajectories
    7.1 Introduction
    7.2 Optimal Trajectory Control Problem
        7.2.1 Aerodynamic model
        7.2.2 Equations of motion
        7.2.3 Path constraints
        7.2.4 Optimal control problem
    7.3 Stochastic Optimal Trajectory Control Modeling
        7.3.1 Numerical scheme
            7.3.1.1 Indirect methods
            7.3.1.2 Multi-stage stabilized continuation
        7.3.2 Parametrization of random inputs
    7.4 Results
    7.5 Concluding Remarks

Bibliography

Appendices
    A Sensitivity of QoI to an independent model input

List of Tables

2.1 Statistical parameters of random inputs for Example I
2.2 Statistical parameters of random inputs for Example II
3.1 Distribution parameters of random inputs for Example I: Ishigami test function
3.2 Distribution of statistical parameters for Example I: Ishigami test function
3.3 Reliability sensitivity indices with respect to distribution parameters of inputs, k_{P_i}, for Example I: Ishigami test function
3.4 Distribution parameters of random inputs for Example II: beam structure
3.5 Distribution of statistical parameters for Example II: beam structure
3.6 Reliability sensitivity indices with respect to distribution parameters of inputs, k_{P_i}, for Example II: beam structure
3.7 Distribution parameters of random inputs for Example III: reinforced concrete shear wall
3.8 Distribution of statistical parameters for Example III: reinforced concrete shear wall
3.9 Reliability sensitivity indices with respect to distribution parameters of inputs, k_{P_i}, for Example III: reinforced concrete shear wall
4.1 Statistical parameters of random inputs for Example I: beam structure
4.2 Statistical parameters of random inputs for Example II: reinforced concrete shear wall
5.1 Statistical parameters of C factors
6.1 Fine-scale model inputs K_F and outputs X_F
6.2 Coarse-scale model inputs K_C and output (i.e., QoI) X
6.3 Distribution parameters of fine-scale inputs K_F
6.4 Distribution parameters of inputs directly entering the coarse-scale model
6.5 Reliability sensitivity indices with respect to coarse-scale random inputs K_C
6.6 Sensitivity of QoI Q_A with respect to random inputs K
7.1 Values of constant parameters used in the model
7.2 An example of initial, terminal, and path constraints
7.3 Statistical parameters of random inputs

List of Figures

2.1 Schematic of the physical setup for Example I: Random beam on random supports
2.2 PDF computed by KDE (N = 10^4, 10^5, 10^6) using EPCE for Example I
2.3 The family of PDFs at 1000 realizations of r for Example I
2.4 PDF of failure probability that X_mid ≥ 2.45 cm for Example I
2.5 Change in response PDF between P_o and P_n by the finite difference method for Example I
2.6 Statistical samples (Eq. 2.13) and their expectation (bold red; Eq. 2.15) for the change in response PDF with a 95% confidence interval of r by the stochastic sensitivity approach for Example I
2.7 Schematic of the physical setup for Example II: Reinforced concrete shear wall
2.8 PDF computed by KDE (N = 10^4, 10^5, 10^6) using EPCE for Example II
2.9 The family of PDFs at 1000 realizations of r for Example II
2.10 PDF of failure probability that X ≥ 37.5 kN·m for Example II
2.11 Change in response PDF between P_o and P_n by the finite difference method for Example II
2.12 Statistical samples (Eq. 2.13) and their expectation (dashed bold red; Eq. 2.15) for the change in response PDF with a 95% confidence interval of r by the stochastic sensitivity approach for Example II
3.1 The PDF of QoI in Example I: Ishigami test function
3.2 The new sensitivity index function with respect to distribution parameters of inputs, z_{P_i}(x), for Example I: Ishigami test function
3.3 Schematic of the physical setup for Example I: Random beam on random supports
3.4 The PDF of QoI in Example II: beam structure
3.5 The new sensitivity index function with respect to distribution parameters of inputs, z_{P_i}(x), for Example II: beam structure
3.6 Schematic of the physical setup for Example III: Reinforced concrete shear wall
3.7 The PDF of QoI in Example III: reinforced concrete shear wall
3.8 The new sensitivity index function with respect to distribution parameters of inputs, z_{P_i}(x), for Example III: reinforced concrete shear wall
3.9 The change in response PDF given a 10% perturbation on P_1 in Example III: reinforced concrete shear wall
4.1 Schematic of the physical setup for Example I: Random beam on random supports
4.2 Comparison of statistics of the prior X_t(r) directly from EPCE (left) and the prior Z_t(r) with error term (right) in Example I
4.3 Posterior distribution of Z_t in Example I
4.4 The family of response PDFs computed by posterior PCEs (red) and the family by prior PCEs (blue) in Example I
4.5 The posterior and prior distributions of the probability of failure in Example I
4.6 The posterior and prior distributions of K by standard Bayesian inference (σ_M = 30%) in Example I
4.7 The family of response PDFs computed by posterior PCEs (red); the family by prior PCEs (blue); prior (dashed green) and posterior (dashed yellow) PDFs of QoI by standard Bayesian inference of K (σ_M = 30%) in Example I
4.8 Schematic of the physical setup for Example II: Reinforced concrete shear wall
4.9 Comparison of statistics of the prior X_t(r) directly from EPCE (left) and the prior Z_t(r) with error term (right) in Example II
4.10 Posterior distribution of Z_t in Example II
4.11 The family of response PDFs computed by posterior PCEs (red) and the family by prior PCEs (blue) in Example II
4.12 The posterior and prior distributions of the probability of failure in Example II
5.1 Diagrams for quantifying aleatory and epistemic uncertainties by (a) standard PSHA and (b) EPCE-based procedures
5.2 PDF of PGA by the EPCE-based and standard PSHA approaches
5.3 CDF of PGA by the EPCE-based and standard PSHA approaches
5.4 Hazard curve by the EPCE-based and standard PSHA approaches
5.5 The family of PDFs of PGA by 1000 samples of r in the EPCE-based approach compared with the PDF of PGA by the standard PSHA approach (bold blue)
5.6 The family of hazard curves by 1000 samples of r in the EPCE-based approach compared with the hazard curve by the standard PSHA approach (bold blue)
5.7 PDF of PoE that PGA > 0.07 g by the EPCE-based approach
6.1 The finite element model of the three-point bending test for the composite material
6.2 PDF of absorbed energy Q_A by adapted gEPCE
6.3 The family of PDFs associated with statistical error at 1000 realizations of r_dat
6.4 The family of changes in PDF of QoI associated with statistical error at 1000 realizations of r_dat
6.5 PDF of failure probability associated with statistical error under the failure criteria that (a) Q_A ≤ 2.0 kN·mm; (b) Q_A ≤ 2.5 kN·mm; and (c) Q_A ≤ 3.0 kN·mm, compared with the corresponding mean failure probability (red dots)
6.6 Sensitivity index functions d_{K_C} of the PDF of Q_A with respect to coarse-scale random model inputs K_C: (a) d_{K_C,i}, i = 1, ..., 12; and (b) d_{K_C,i}, i = 2, ..., 12
7.1 Description of flight dynamics
7.2 The path constraints of the avoided circular region projected in the θ-φ space, with various radii centered at the same location
7.3 Trajectories computed from the model with example constraints
7.4 PDF from the adaptive extended PCE for the OTC problem
7.5 The family of PDFs at 1000 realizations of r for the OTC problem
7.6 PDF of failure probability that v(t_f) ≤ 600 m/s for the OTC problem
7.7 Statistical samples for the change in response PDF with a 95% confidence interval of r by the stochastic sensitivity approach for the OTC problem

Abstract

This work focuses on characterizing and managing inference for physical systems under uncertainties and modeling errors.
To this end, contributions made to advance the state of the art in uncertainty quantification (UQ) include: (1) surrogate modeling that enables unified and efficient characterization and propagation of various sources of uncertainties; (2) sensitivity measures that quantitatively assess the impact of information on the full probability density function (PDF) and the probability of failure (PoF)/reliability; (3) Bayesian model calibration that provides physically insightful priors and reduced computational cost; and (4) stochastic multiscale modeling that quantifies hierarchical uncertainties and modeling errors. These approaches constitute a systematic stochastic framework, grounded in the polynomial chaos formalism, for credible design, analysis, and optimization of complex systems in engineering and science. Applications in civil, mechanical, and aerospace engineering and in seismic hazard analysis are investigated based on the proposed framework.

Chapter 1
Introduction

1.1 Motivation

Life-cycle management, failure preparedness, and post-failure recovery are key ingredients for the analysis, design, and planning of sustainable and resilient systems. Given the long prediction horizon implicit in life-cycle planning and the complexity of interactions and mechanisms associated with failure and post-failure assessments, especially in the presence of techno-socio-economic couplings, a useful approach to these problems must provide quantitative estimates of uncertainties and modeling errors. Dealing with uncertainty in model-based inference has received much attention in recent decades (Ghanem and Spanos, 2003; Ghanem, 1999b). Uncertainty quantification (UQ) is the rational process of managing the interplay between data, models, and decisions; it involves collecting data, constructing physics and probabilistic models, estimating their parameters, predicting quantities of interest, and updating the parameters of the models (Ghanem et al., 2017).
Uncertain input parameters are usually modeled as random variables with specified prior models for their probability distributions. Estimating parameter values for these distributions is challenged, both numerically and conceptually, by model errors and measurement errors (Soize, 2017). Insightful prior probabilistic models are important both as key ingredients of Bayesian updating and as a useful format for articulating prior knowledge. Prior models are also crucial from a practical perspective, since in many settings data is not readily available, and prior models that are sufficiently robust to account for current knowledge are the only recourse for probabilistic inference. Emphasizing current knowledge, however, can hamper scientific discovery, and attempts to widen the scope of prior models have been pursued in the guise of epistemic uncertainties (e.g., data and model errors) (Der Kiureghian and Ditlevsen, 2009). Model parameters within each model class, when inferred from data, are typically described as random variables with a probabilistic structure that depends both on the model class and on the data. Sources of uncertainty can be generally classified into aleatory (or inherent) uncertainty and epistemic (or approximation) uncertainty (Der Kiureghian and Ditlevsen, 2009). Aleatory uncertainty typically refers to irreducible variabilities inherent in nature and is traditionally treated using random variables or random fields. Epistemic uncertainty, on the other hand, can usually be reduced to within bounds by acquiring more data and accordingly revising the predictive model. The aleatory or epistemic nature of uncertainty is relative to the choice of predictive model, which includes both the physics and probabilistic models. Once these models have been selected, however, the aleatory/epistemic distinction becomes meaningful, and both types are typically involved in any given problem.
1.2 Review of Relevant Techniques

1.2.1 Polynomial chaos expansion

Let X represent a scalar QoI that can be expressed as a function of a vector k = {k_1, ..., k_d} in R^d representing physical random parameters. The d-dimensional vector k is first expressed as a mapping from a d-dimensional vector ξ = {ξ_1, ..., ξ_d} of uncorrelated standard normal random variables using, for instance, the Rosenblatt transformation. The set ξ is referred to as the "germ" of the PCE. Expressing X as a function of ξ in the form X(ξ) and representing X(ξ) in an orthogonal polynomial expansion with respect to ξ yields the polynomial chaos expansion (PCE) of X relative to ξ,

    X(ξ) = Σ_{|α|≤p} X_α ψ_α(ξ)                                          (1.1)

where {X_α} are called PCE coefficients; p denotes the highest order in the polynomial expansion; α is a d-dimensional multi-index; and {ψ_α} denote normalized multivariate Hermite polynomials that can be expressed in terms of their univariate counterparts as

    ψ_α(ξ) = ∏_{p=1}^{d} ψ_{α_p}(ξ_p) = ∏_{p=1}^{d} h_{α_p}(ξ_p) / √(α_p!),   ξ, α ∈ R^d   (1.2)

where h_{α_p} represents the one-dimensional Hermite polynomial of order α_p. The collection of the multivariate polynomials forms an orthogonal set with respect to the multivariate Gaussian density function, and the Hermite polynomials are orthonormal with respect to the standard normal distribution. In order to emphasize its dependence on the PCE representation, we denote the realization of X synthesized from its PCE using sample ξ^(i) of ξ by

    X^(i) := X(ξ^(i); {X_α}) = Σ_{|α|≤p} X_α ψ_α(ξ^(i)).                  (1.3)

The coefficients X_α can be readily expressed as

    X_α = ∫_{R^d} X(ξ) ψ_α(ξ) ρ(ξ) dξ                                    (1.4)

where ρ(ξ) in the mathematical expectation indicated by the integral is the probability density function of ξ.
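As a minimal numerical sketch of the representation in Eqs. (1.1)-(1.2), the snippet below builds the normalized one-dimensional Hermite basis and checks a known expansion. The test function exp(ξ), whose exact coefficients are X_n = e^{1/2}/√(n!), and the truncation order are illustrative choices, not taken from the dissertation:

```python
import numpy as np
from math import comb, factorial

def hermite(n, x):
    """Probabilists' Hermite polynomial h_n via the recurrence
    h_{k+1}(x) = x*h_k(x) - k*h_{k-1}(x)."""
    h_prev, h_cur = np.ones_like(x), x
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h_cur = h_cur, x * h_cur - k * h_prev
    return h_cur

def psi(n, x):
    """Normalized 1-D basis psi_n = h_n / sqrt(n!) of Eq. (1.2)."""
    return hermite(n, x) / np.sqrt(factorial(n))

# exp(xi) has the exact normalized-Hermite coefficients X_n = e^{1/2}/sqrt(n!)
p = 10
coeffs = [np.exp(0.5) / np.sqrt(factorial(n)) for n in range(p + 1)]
xi = np.array([0.3])
pce_value = sum(c * psi(n, xi)[0] for n, c in enumerate(coeffs))  # Eq. (1.1)
assert abs(pce_value - np.exp(0.3)) < 1e-3

# Coefficient count of Eq. (1.6): N_c = (d+p)!/(d! p!), e.g. d = 3, p = 3
assert comb(3 + 3, 3) == factorial(6) // (factorial(3) * factorial(3)) == 20
```

The same recurrence extends to the multivariate basis of Eq. (1.2) by taking products of the univariate factors over the germ dimensions.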
The PCE coefficients are usually estimated using quadrature approximations to the multidimensional integrals:

    X_α ≈ Σ_{q∈Q} X(ξ_q) ψ_α(ξ_q) w_q,   |α| ≤ p,                        (1.5)

where Q is the set of sparse quadrature points, q is a quadrature node in Q, and w_q is the associated weight. The quadrature level required to achieve a preset accuracy in approximating any X_α increases with the order of the associated polynomial ψ_α. For a given polynomial order p and germ dimension d, the number of PCE coefficients, denoted by N_c, is

    N_c = (d+p)! / (d! p!).                                              (1.6)

1.2.2 Basis adaptation

To pursue further computational efficiency, the number of samples needed to compute the polynomial chaos coefficients can be reduced by basis adaptation. The key idea is to apply a rotation to the random inputs and then build a chaos expansion of the QoI using only the first few rotated random variables, which capture most of the Gaussian probabilistic information of the unrotated ones. Let us define a rotation matrix A: R^{d+1} → R^{d+1},

    θ = A η,   A Aᵀ = I,                                                 (1.7)

where θ is the rotated vector of Gaussian random variables, also in R^{d+1}. Defining X^A(θ) as the representation of the QoI in terms of θ, we have

    X^A(θ) = X(η),                                                       (1.8)

and hence the equivalent PCEs of polynomial order p,

    Σ_{|λ|≤p} X^A_λ ψ_λ(θ) = Σ_{|γ|≤p} X_γ ψ_γ(η).                       (1.9)

The coefficients X^A_λ and X_γ of the rotated and unrotated PCEs can thus be mapped to each other:

    X^A_λ = Σ_{|γ|≤p} X_γ ⟨ψ_γ, ψ^A_λ⟩,   |λ| ≤ p,                       (1.10)

    X_γ = Σ_{|λ|≤p} X^A_λ ⟨ψ^A_λ, ψ_γ⟩,   |γ| ≤ p,                       (1.11)

where ψ^A_λ(η) = ψ_λ(A η) and ⟨·,·⟩ denotes the inner product of the two arguments.
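Equation (1.7) requires an orthogonal matrix A. Given its first row, the remaining rows can be completed by Gram-Schmidt orthogonalization against the standard basis. The sketch below shows only that linear-algebra step; the vector g, standing in for the normalized first-order PCE coefficients, is made up for illustration:

```python
import numpy as np

def rotation_from_first_row(g):
    """Complete an orthogonal A (A @ A.T = I, Eq. 1.7) whose first row is
    g normalized; remaining rows come from Gram-Schmidt applied to the
    standard basis vectors (one dependent direction is dropped)."""
    d = g.size
    rows = [g / np.linalg.norm(g)]
    for e in np.eye(d):
        v = e - sum((r @ e) * r for r in rows)   # remove projections
        if np.linalg.norm(v) > 1e-10:            # skip the dependent vector
            rows.append(v / np.linalg.norm(v))
        if len(rows) == d:
            break
    return np.vstack(rows)

g = np.array([0.8, 0.5, 0.2, 0.1])   # hypothetical first-order coefficients
A = rotation_from_first_row(g)
assert np.allclose(A @ A.T, np.eye(4), atol=1e-10)   # orthogonality
assert np.allclose(A[0], g / np.linalg.norm(g))      # first row preserved
```

With A in hand, the rotated coefficients follow from the mapping of Eq. (1.10); the sketch does not fix how the remaining rows should be chosen, which is where the published procedures differ.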
The key step is the construction of the matrix A, and several approaches have been developed for the standard PCE (Tipireddy and Ghanem, 2014). Among these procedures, the common first step is to build a linear (first-order) chaos expansion with respect to the unrotated random variables and to use the resulting Gaussian components as the first row of the rotation matrix. Gram-Schmidt orthogonalization is then performed to complete the rotation matrix, with the approaches differing in the predetermined vectors used for its remaining rows.

1.2.3 Kernel density estimation

Kernel density estimation (KDE) is a standard approach to representing PDFs from samples (Davis et al., 2011). The PDF of X (the QoI), denoted by f_X(x), and its KDE, denoted by f̂_X(x), are related by

    f_X(x) ≈ f̂_X(x) = (1/(N h)) Σ_{j=1}^{N} K((x − X^(j))/h)             (1.12)

where we recall that

    X^(j) = Σ_{|γ|≤p} X_γ ψ_γ(η^(j)).                                    (1.13)

The Gaussian kernel is used for K, with its bandwidth h determined following Silverman's rule (Silverman, 1986) as

    h = (4σ⁵ / (3N))^{1/5},                                              (1.14)

where σ is the sample standard deviation evaluated from the N samples. It should be noted that the statistical properties of the KDE hinge on the samples being independently drawn from the distribution of X.

1.3 Outline

In Chapter 2, a coherent framework is first introduced to simultaneously model aleatory and epistemic uncertainties and to propagate the influence of both to the predicted response in a computationally efficient manner. Second, we present a procedure for efficiently evaluating the influence of both aleatory and epistemic variables on the PDF of output quantities.

In Chapter 3, functional global sensitivity and reliability sensitivity indices are proposed. These measures provide the sensitivity of the PDF and of the failure probability with respect to information.
In Chapter 4, a procedure is described to assess the predictive accuracy of stochastic models subject to modeling errors (i.e., model inadequacy and data error). These modeling errors are characterized by the parameters of the EPCE surrogate model (i.e., the polynomial chaos coefficients). When observations are obtained, the Bayesian paradigm is applied to formulate and solve the inverse problem.

In Chapter 5, a stochastic framework that coherently quantifies the aleatory and epistemic uncertainties in seismic hazard forecasting is presented, which aims to bring scientific discoveries to bear on improving the hazard model.

In Chapter 6, a stochastic multiscale modeling approach is introduced that quantifies various sources of uncertainties in hierarchy.

In Chapter 7, a stochastic optimal control framework is proposed, with application to hypersonic trajectory planning in the presence of statistical error.

The approaches in Chapters 2, 3, and 4 constitute a systematic probabilistic analysis framework that characterizes and manages inference for complex systems under various sources of uncertainties, supporting robust and efficient model-based prediction, reliability and risk assessment, sensitivity analysis, and model validation. These approaches are then applied to seismic hazard analysis and to applications in civil, mechanical, and aerospace engineering in Chapters 5, 6, and 7, respectively.

Chapter 2
Stochastic Modeling

2.1 Introduction

UQ is the rational process of managing the interplay between data, models, and decisions; it involves collecting data, constructing physics and probabilistic models, estimating their parameters, predicting quantities of interest, and updating the parameters of the models (Ghanem et al., 2017). This chapter presents a UQ approach to assessing the significance of additional information concerning model parameters on the credibility of the probability density function of prediction variables.
Additional information about the parameters is construed as modifying the parameters in the prior probability models (for instance, shape parameters in a Beta distribution, or coefficients in the PC model), while credibility of the PDF is interpreted as the statistical scatter of the PDF, viewed as a statistic. In this chapter, we construct a functional representation of the output PDF in terms of the uncertainty in the PDF of the input parameters. We express this functional form in a polynomial chaos representation, namely the extended polynomial chaos expansion (EPCE), and use directional sensitivities to quantify the effect of perturbing various statistical parameters from their current values. Uncertain input parameters are usually modeled as random variables with specified prior models for their probability distributions. Estimating parameter values for these distributions is challenged, both numerically and conceptually, by model errors and measurement errors Soize (2017). Insightful prior probabilistic models are important both as key ingredients of Bayesian updating and for the purpose of articulating prior knowledge in a useful format. Prior models are also crucial from a practical perspective, since in many settings data is not readily available, and prior models that are sufficiently robust to account for current knowledge are the only recourse for probabilistic inference. Emphasizing current knowledge, however, can hamper scientific discovery, and attempts to widen the scope of prior models have been pursued in the guise of epistemic uncertainty and model error Der Kiureghian and Ditlevsen (2009). Model parameters within each model class, when inferred from data, are typically described as random variables with a probabilistic structure that depends both on the model class and on the data. Sources of uncertainty can generally be classified into aleatory (or data) uncertainty and epistemic (or approximation) uncertainty Der Kiureghian and Ditlevsen (2009).
Aleatory uncertainty typically refers to irreducible variabilities inherent in nature and is traditionally treated using random variables or random fields. Epistemic uncertainty, on the other hand, can usually be reduced, within bounds, by acquiring more data and accordingly revising the predictive model. A variety of approaches focusing on modeling and propagating epistemic uncertainty have been proposed in the literature, exploring revisions both to the physics models and to the uncertainty models for the parameters. For example, Helton et al. (2007) represent epistemic uncertainty using evidence theory implemented within Monte Carlo procedures. Valdebenito et al. (2013) applied intervening variables to quantify epistemic uncertainty based on a first-order Taylor expansion. Jacquelin et al. (2016) treated the uncertain inputs as random and fuzzy variables, with the response also described by a fuzzy approach. In this chapter, we tackle the epistemic uncertainty associated with the choice of probabilistic model for the input parameters. The aleatory or epistemic nature of uncertainty is relative to the choice of predictive model, which includes both the physics and probabilistic models. Once these models have been selected, however, the aleatory/epistemic distinction becomes meaningful, and both types are typically involved in any given problem. A common approach has been to segregate these two types of uncertainty and perform nested iterations, with aleatory analysis in the inner loop and epistemic analysis in the outer loop Hofer et al. (2002). In this manner, the two types of uncertainty can be separated and easily traced. Specifically, each particular instance of the epistemic variables generates a response PDF based only on the aleatory uncertainties.
The family of PDFs thus evaluated at many instances of the epistemic variables can be used to visualize the combined uncertainty in the response and to further interpret the results using various statistical metrics. This paradigm, while conceptually simple, is computationally prohibitive. In this chapter, we preserve the conceptually appealing uncertainty segregation for purposes of visualization and interpretation Abrahamson and Bommer (2005), and enhance its computational efficiency through novel stochastic polynomial chaos representations that provide a uniform treatment of aleatory and epistemic uncertainties. PCE is an uncertainty quantification method that has been widely used in many areas across science and physics Ghanem and Spanos (1990); Sarkar and Ghanem (2002); Crestaux et al. (2009); Shao et al. (2017). In its conventional setting, dealing mostly with aleatory uncertainty, PCE has demonstrated robustness and computational efficiency. As for epistemic uncertainty quantification, PCE has been applied in the context of multi-uncertainty modeling Jakeman et al. (2010); Schöbi and Sudret (2017) and global sensitivity analysis Ehre et al. (2020). In addition, PCE has been integrated as a surrogate model within several frameworks, including interval methods Valdebenito et al. (2013); Eldred et al. (2011), fuzzy set theory Wang et al. (2018), and evidence theory Yin et al. (2018). A key idea in the present chapter is to treat the coefficients in an aleatory PCE as random variables whose uncertainty encodes epistemic uncertainty, which is itself described by random variables independent of the aleatory ones. A PCE is then carried out relative to the epistemic variables. In view of the polynomial structure of PCE, this two-step expansion can be implemented as a single PCE in higher dimension. This approach was already present in developing sampling distributions for PCE representations Das et al.
(2008) and in carrying out associated Bayesian updating Ghanem and Doostan (2006); Arnst et al. (2010); Sargsyan et al. (2019, 2015). In this work, the propagation of epistemic uncertainty to the PDF of the quantity of interest is assessed. Operationally, as new information is acquired, the probabilistic models (or the parameters of these models, referred to herein as "statistical parameters") of the input variables should be updated. As a result, the change in the response PDF resulting from the update of the statistical input parameters, namely the sensitivity of the response PDF with respect to the statistical parameters of the model inputs, is relevant for risk assessment, reliability analysis, and model verification and validation. A simple idea is to treat the problems defined by the statistical parameters of the inputs before and after the update as two separate UQ problems and to solve them independently. This would entail two separate stochastic forward propagations of uncertainty, each requiring significant computational resources, with no clear path for sharing the computational burden between the two tasks. This "sensitivity" relative to epistemic uncertainty is substantially more taxing, numerically, than standard sensitivity formulations Ghanem (1999a); Sudret (2008). The cost is greater still if the sensitivity of the PDF itself is sought, and not merely the sensitivity of the variance-based (or Sobol) and moment-independent indices Borgonovo (2007). Recent works discussing the evaluation of sensitivity measures in multi-uncertainty problems include using the variance decomposition of the logarithm of the conditional failure probability Ehre et al. (2020), combining importance sampling and importance splitting methods with Sobol indices Morio (2011), and applying the Kriging method to compute the conditional expectation of the failure probability Wang et al. (2013). The aforementioned approaches are all based on conditional probabilities. In this chapter, we construct a composite map from statistical parameters to the PDF of the QoI by integrating a PCE of the QoI within a kernel density estimate (KDE) of that QoI's PDF. We demonstrate the value of this map for both propagating epistemic uncertainty and evaluating the sensitivity of the PDF relative to these uncertainties. The objective of this work is thus twofold. First, we provide a coherent framework to simultaneously model aleatory and epistemic uncertainties and to propagate the influence of both uncertainties to the predicted response in a computationally efficient manner. Second, we develop a framework for efficiently evaluating the sensitivity of the probability density functions (PDF) of output quantities relative to both aleatory and epistemic variables. The remainder of the chapter is structured as follows. The previous chapter reviewed the classical PCE approach. Section 2.2 presents the EPCE approach that accounts for simultaneous aleatory and epistemic uncertainties. Section 2.3 describes the framework for quantifying the influence of aleatory and epistemic uncertainties on several response metrics. Section 2.4 applies the framework to an analytical and a numerical illustrative example. Section 2.5 presents the conclusions and some closing comments.

2.2 Extended Polynomial Chaos Expansion for Mixed Aleatory and Epistemic Uncertainties

Clearly, in the classical PCE introduced in the foregoing chapters, the numerical value of the coefficients X_α depends both on the mapping from κ to X and on the mapping from ξ to κ. The former mapping encapsulates the physics models and governing equations, while the latter describes the probabilistic model of the physical parameters κ, in a functional form that explicitly relates them to a set of independent Gaussian random variables ξ. Uncertainty in the probabilistic model of κ is propagated into uncertainty about X through the composite map from ξ to κ and from κ to X.
We introduce the m-dimensional vector P = {P_1, ..., P_m} representing all the statistical parameters of the input random variables κ. These parameters are typically estimated from a finite sample. Different estimation methods yield different probabilistic models for P, with Maximum Likelihood Estimates (MLE) generally yielding an asymptotically (for large sample size) Gaussian distribution whose variance is inversely proportional to the sample size. We represent the random vector P in a polynomial chaos decomposition relative to a new Gaussian germ ρ independent of ξ. Motivated by asymptotic results concerning MLE sampling distributions, we limit this PCE to a first-order expansion, resulting in a Gaussian model for P. We also assume a one-dimensional PCE representation for P, imposing a strict dependence between the different P_i and making them all linear transformations of the same scalar random variable ρ. This statistical dependence between the components of P is justified by the observation that experimental evidence that influences our estimate of any one of the P_i's is likely to also affect our estimates of all other components of P. We thus introduce ρ as a standard normal random variable independent of ξ = {ξ_1, ..., ξ_d}, and express the parameters of the input PDFs, P_i, in the form

\[ P_i = \mu_{P_i} + \sigma_{P_i}\,\rho, \qquad i = 1, \dots, N_P, \tag{2.1} \]

where μ_{P_i} is the mean of P_i and σ_{P_i} its standard deviation. Also, N_P denotes the number of parameters from the set P that are presumed to be uncertain. Thus, X can be represented as a function of a new germ η = {ξ_1, ..., ξ_d, ρ}, and is therefore denoted as X(η). The extended polynomial chaos expansion of X can thus be expressed as

\[ X(\boldsymbol{\eta}) = \sum_{\boldsymbol{\gamma} \in \mathbb{N}^{d+1},\ |\boldsymbol{\gamma}| \le p} X_{\boldsymbol{\gamma}}\, \psi_{\boldsymbol{\gamma}}(\boldsymbol{\eta}), \qquad \boldsymbol{\eta} \in \mathbb{R}^{d+1}, \tag{2.2} \]

where {X_γ} denote the EPCE coefficients. Making use of Eq.
(1.2.1), we can separate the dependence on ξ from the dependence on ρ, resulting in the following useful representation,

\[ X(\boldsymbol{\eta}) = X(\boldsymbol{\xi}, \rho) = \sum_{\boldsymbol{\alpha} \in \mathbb{N}^{d},\ b \in \mathbb{N},\ |\boldsymbol{\alpha}| + b \le p} X_{\boldsymbol{\alpha} b}\, \psi_{\boldsymbol{\alpha}}(\boldsymbol{\xi})\, \psi_b(\rho), \qquad \boldsymbol{\xi} \in \mathbb{R}^d,\ \rho \in \mathbb{R}, \tag{2.3} \]

with the subscript αb being a (d+1)-dimensional multi-index formed as the concatenation of α and b. It is consistent with common views Ghanem and Doostan (2006); Das et al. (2008); Arnst et al. (2010); Helton et al. (2007); Der Kiureghian and Ditlevsen (2009); Jacquelin et al. (2016); Soize (2017) to construe dependence on ξ and ρ as representing, respectively, aleatory and epistemic uncertainties. Eq. (2.2) thus provides a joint and uniform treatment of these two uncertainties in a single representation. It is worth mentioning that the extension of the foregoing to the case where each P_i is decomposed according to its own stochastic dimension ρ_i can be readily accommodated, with some increase in computational cost. This higher parameterization, however, is not necessarily more physical or more accurate. Indeed, all stochastic parameters can be viewed as depending on the same microstructure, with the random parameter ρ identifying the particular microstructure being investigated. For instance, ρ could refer to such microstructure properties as the size of the largest impurity, the magnitude of the largest contrast between elastic moduli, or the theoretical distance between upper and lower bounds on local elastic properties. The linear dependence of P_i on ρ would then necessitate a small sensitivity of the probabilistic parameters to perturbations in the microstructure. This linear dependence could be relaxed by pursuing a higher-order PC expansion in Eq. (2.1). Our present formulation is restricted to capturing the influence of uncertainty in the parameters of the input PDF, and does not account for model error, which is a key component of epistemic uncertainty.
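As a minimal sketch of the extended germ of Eqs. (2.1)-(2.3): the epistemic variable ρ perturbs the statistical parameters, and each sample of η = (ξ_1, ..., ξ_d, ρ) yields one realization of the inputs. The dimensions, the 5% coefficient of variation, and the example parameter means below are illustrative assumptions, not values from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_samples = 4, 5

# Aleatory germ xi (d-dimensional) and epistemic germ rho (scalar), independent
xi = rng.standard_normal((n_samples, d))
rho = rng.standard_normal(n_samples)
eta = np.column_stack([xi, rho])        # extended (d+1)-dimensional germ

# Eq. (2.1): every statistical parameter is a linear transform of the SAME rho;
# here a 5% coefficient of variation about illustrative Beta shape means.
mu_P = np.array([4.0, 5.0])             # e.g. (alpha, beta) of one input's Beta model
sigma_P = 0.05 * mu_P
P = mu_P[None, :] + sigma_P[None, :] * rho[:, None]   # one parameter vector per sample
```

Because all components of P share the single germ ρ, their realizations are fully correlated, which is exactly the strict dependence argued for in the text.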
2.3 Quantification of Aleatory and Epistemic Uncertainties

In subsections 2.3.1 and 2.3.2 we first introduce two key ingredients for constructing the composite map from input parameters to the PDF of the QoI and for evaluating sensitivities across this map. We then demonstrate the application of these tools to quantifying changes in these PDFs.

2.3.1 Representation of PDF: KDE

We will be mainly interested in quantifying the influence of input parameter uncertainties on the probability density function (PDF) of quantities of interest. We rely on Kernel Density Estimates (KDE) to represent these PDFs Davis et al. (2011). The PDF of X (the QoI), denoted by f_X(x), and the KDE of the PDF, denoted by f̂_X(x), can be expressed as

\[ f_X(x) \stackrel{D}{=} \hat{f}_X(x) = \frac{1}{Nh} \sum_{j=1}^{N} K\!\left( \frac{x - X^{(j)}}{h} \right), \tag{2.4} \]

where we recall that

\[ X^{(j)} = \sum_{|\boldsymbol{\gamma}| \le p} X_{\boldsymbol{\gamma}}\, \psi_{\boldsymbol{\gamma}}\!\left( \boldsymbol{\eta}^{(j)} \right). \tag{2.5} \]

The Gaussian kernel is used for K, with its bandwidth h determined following Silverman's rule Silverman (1986) as

\[ h = \left( \frac{4 s^5}{3N} \right)^{1/5}, \tag{2.6} \]

where s is the sample standard deviation evaluated from the N samples. It should be noted that the statistical properties of the KDE hinge on the samples being independently selected from the distribution of X. In our formulation, samples of η ∈ ℝ^{d+1} are independently drawn from a (d+1)-dimensional Gaussian distribution and subsequently pushed through the EPCE to yield samples of X. The sample thus collected does not necessarily, a priori, follow the distribution of X. However, mean-square convergence of the EPCE implies its convergence in distribution. Thus, provided the EPCE of X is converged, samples collected from the EPCE will adhere to the distribution of X.

2.3.2 Stochastic sensitivity measures

When considering the uncertainties of P_i, i = 1, ..., N_P, as described in Eq. (2.1), the EPCE as developed in Eq. (2.2) can be used and integrated with the KDE in Eq. (2.4), to result in

\[ f_X(x) = \frac{1}{Nh} \sum_{j=1}^{N} K\!\left( \frac{x - \sum_{|\boldsymbol{\gamma}| \le p} X_{\boldsymbol{\gamma}}\, \psi_{\boldsymbol{\gamma}}(\boldsymbol{\eta}^{(j)})}{h} \right), \qquad \boldsymbol{\eta}^{(j)} \in \mathbb{R}^{d+1}. \tag{2.7} \]

This expression for f_X involves summation over all d+1 stochastic dimensions (η) and therefore does not express dependence on any of them. In order to retain sensitivity with respect to the parameters P, we make use of Eq. (2.3) and replace Eq. (2.7) by the following equation,

\[ f_X(x; \rho) = \frac{1}{Nh} \sum_{j=1}^{N} K\!\left( \frac{1}{h} \Big( x - \sum_{\boldsymbol{\alpha} \in \mathbb{N}^d,\ b \in \mathbb{N},\ |\boldsymbol{\alpha}|+b \le p} X_{\boldsymbol{\alpha} b}\, \psi_{\boldsymbol{\alpha}}(\boldsymbol{\xi}^{(j)})\, \psi_b(\rho) \Big) \right), \qquad \boldsymbol{\xi}^{(j)} \in \mathbb{R}^d,\ \rho \in \mathbb{R}. \tag{2.8} \]

By taking the directional derivative of f_X(x; ρ) in Eq. (2.8) with respect to P_i, the sensitivity of the PDF to the statistical parameters of the inputs, denoted by f_{X,P_i}(x; ρ), is given by

\[ f_{X,P_i}(x; \rho) = \frac{\partial f_X(x; \rho)}{\partial P_i} = \frac{1}{\sigma_{P_i}}\, \frac{\partial f_X(x; \rho)}{\partial \rho}, \qquad i = 1, \dots, N_P, \tag{2.9} \]

and substituting the KDE formulation in Eq. (2.8) into Eq. (2.9) results in

\[ f_{X,P_i}(x; \rho) = \frac{1}{\sigma_{P_i} N h^2} \sum_{j=1}^{N} \left[ \frac{x - X(\boldsymbol{\eta}^{(j)})}{h}\, K\!\left( \frac{x - X(\boldsymbol{\eta}^{(j)})}{h} \right) \sum_{|\boldsymbol{\alpha}|+b \le p} X_{\boldsymbol{\alpha} b}\, \psi_{\boldsymbol{\alpha}}(\boldsymbol{\xi}^{(j)})\, \frac{\partial \psi_b(\rho)}{\partial \rho} \right], \qquad i = 1, \dots, N_P, \tag{2.10} \]

where we relied on the Gaussian form of the kernel. Eq. (2.10) provides a stochastic representation of the sensitivity of the PDF with respect to the probabilistic parameters of the input random variables. This sensitivity measure has the property that, for each value of ρ, its integral with respect to x is equal to zero.

2.3.3 Methods of quantifying the effect of aleatory and epistemic uncertainties on the PDF of the QoI

In this section, we investigate the influence on the response PDF of uncertainty in the parameters P characterizing the probability model of the input parameters. We explore three different approaches to that end. The first approach is based on a finite difference scheme, the second is based on a sensitivity analysis, and the third provides an explicit expression of the probability measure on the PDF induced by uncertainty in P.

2.3.3.1 Finite difference approach

We consider information about the system contained in two distinct datasets that we label "original" and "new".
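Before turning to the difference scheme, the ρ-derivative underlying Eqs. (2.9)-(2.10), obtained by differentiating the Gaussian kernel, can be checked against a central finite difference. The toy response X(ξ, ρ), the fixed bandwidth, and all names below are illustrative assumptions standing in for the chapter's actual EPCE.

```python
import numpy as np

rng = np.random.default_rng(2)
N, h = 20_000, 0.2                       # sample size and fixed bandwidth for the check
xi = rng.standard_normal(N)

def X(xi, rho):                          # toy EPCE-like response (assumed, not the thesis model)
    return xi + 0.3 * rho + 0.1 * xi * rho

def dX_drho(xi, rho):                    # its exact rho-derivative
    return 0.3 + 0.1 * xi

def f(x, rho):                           # Eq. (2.8)-style Gaussian KDE with rho held free
    u = (x - X(xi, rho)) / h
    return np.exp(-0.5 * u**2).sum() / (N * h * np.sqrt(2.0 * np.pi))

def df_drho(x, rho):                     # analytic derivative, the core of Eq. (2.10)
    u = (x - X(xi, rho)) / h
    k = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)
    return (u * k * dX_drho(xi, rho)).sum() / (N * h**2)

x0, rho0, eps = 0.5, 0.4, 1e-4
numeric = (f(x0, rho0 + eps) - f(x0, rho0 - eps)) / (2.0 * eps)
```

Dividing the analytic derivative by σ_{P_i}, per Eq. (2.9), would then give the sensitivity with respect to the statistical parameter P_i itself.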
As above, we let N_P denote the number of P_i treated as random variables, and we introduce P^o and P^n as the vectors of all P_i, i = 1, ..., N_P, values for the original and new datasets, respectively. We then pursue a PCE representation with each of these two sets of parameters, resulting in two distinct PDFs. Representing each of these two PDFs by a KDE relative to the same sample {ξ^{(i)}}, and taking the difference of the resulting two expressions, yields the following representation for the change in PDF,

\[ \Delta f_X(x) = \frac{1}{Nh} \left[ \sum_{i=1}^{N} K\!\left( \frac{x - X(\boldsymbol{\xi}^{(i)}; \{X^o_{\boldsymbol{\alpha}}\})}{h} \right) - \sum_{i=1}^{N} K\!\left( \frac{x - X(\boldsymbol{\xi}^{(i)}; \{X^n_{\boldsymbol{\alpha}}\})}{h} \right) \right]. \tag{2.11} \]

It should be mentioned that this approach does not take the uncertainties of the statistical parameters into account and does not involve any explicit modeling of P. The approach therefore does not provide the ability to interpolate or extrapolate beyond the two datasets associated with P^o and P^n. It is mentioned here because of its simplicity and in order to provide a comparison with the more advanced approaches described next. It is important to note that a rigorous probabilistic analysis as data is acquired should involve a Bayesian update of the model parameters. The result in this section, using a difference scheme, is meant to provide an assessment that is both easy and efficient to implement, while providing useful insight into the credibility of statistical inferences.

2.3.3.2 Stochastic sensitivity approach

Given our probabilistic model for P_i in accordance with Eq. (2.1), an interval for P_i can be associated with a pre-specified confidence level c_i. We note that this confidence level could equally well have been specified on ρ. Denoting the upper and lower bounds of the associated confidence interval by u_i and l_i, respectively, permits us to express the precision of P_i as

\[ \Delta P_i = u_i - l_i, \qquad i = 1, \dots, N_P, \tag{2.12} \]

which can be used to develop the induced precision on f_X(x). Specifically, using Eq.
(2.10) results in,

\[ \Delta f_X(x; \rho) = \sum_{i=1}^{N_P} f_{X,P_i}(x; \rho)\, \Delta P_i. \tag{2.13} \]

Noting the recurrence relation for the derivative of univariate Hermite polynomials,

\[ h'_n(x) = x\, h_n(x) - h_{n+1}(x), \tag{2.14} \]

and taking the mathematical expectation of the derivative of f_X relative to P_i results in the following expression for the expected value of Δf_X(x; ρ), where ⟨·⟩ denotes the expectation operator,

\[ \langle \Delta f_X(x; \cdot) \rangle = \sum_{i=1}^{N_P} \frac{1}{\sigma_{P_i} N h^2} \sum_{j=1}^{N} \left[ \frac{x - X(\boldsymbol{\eta}^{(j)})}{h}\, K\!\left( \frac{x - X(\boldsymbol{\eta}^{(j)})}{h} \right) \sum_{|\boldsymbol{\alpha}|+b \le p} X_{\boldsymbol{\alpha} b}\, \psi_{\boldsymbol{\alpha}}(\boldsymbol{\xi}^{(j)})\, \langle \rho\, \psi_b(\rho) \rangle \right] \Delta P_i. \tag{2.15} \]

The stochastic sensitivity approach accounts for the epistemic uncertainty that characterizes the statistical parameters P_i. It requires the solution of a (d+1)-dimensional problem, while the difference-based method requires the solution of two independent d-dimensional problems.

2.3.4 Stochastic model for PDFs

A stochastic model for the PDF of X has already been developed in Eqs. (2.7) and (2.8). While these equations were used above to characterize sensitivity measures, in this section they are used to characterize confidence in estimates of the failure probability. The EPCE allows the epistemic variable ρ to be separated from the aleatory variables ξ. The PDF of X can be expressed in a more suggestive form than Eq. (2.8) as

\[ f_X(x; \rho) = \frac{1}{Nh} \sum_{j=1}^{N} K\!\left( \frac{x - \sum_{|\boldsymbol{\gamma}| \le p} X_{\boldsymbol{\gamma}}\, \psi_{\boldsymbol{\gamma}}(\boldsymbol{\xi}^{(j)}, \rho)}{h} \right), \tag{2.16} \]

where ψ_γ(ξ^{(j)}, ρ) indicates that the polynomial ψ_γ is evaluated at a sample whose first d components are specified by ξ^{(j)}, while the last, (d+1)-st, component remains a free variable. It can be seen that Eq. (2.7) is the distribution of the family of PDFs generated by Eq. (2.16), marginalized over ρ. One important application of the foregoing ideas is to the characterization of failure probabilities, themselves, as random variables.
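The idea of treating the failure probability itself as a random variable can be sketched as follows: each epistemic realization ρ^{(j)} yields one failure probability, and a Silverman-bandwidth KDE is then built over those values. The toy response, threshold, and sample sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_xi, n_rho, x_c = 50_000, 200, 2.0      # sample sizes and threshold (illustrative)

xi = rng.standard_normal(n_xi)           # aleatory samples, reused for every rho
rho = rng.standard_normal(n_rho)         # epistemic samples

def response(xi, rho):                   # toy stand-in for the EPCE of X
    return xi * (1.0 + 0.1 * rho) + 0.2 * rho

# One failure probability per epistemic realization (cf. Eq. 2.17)
p_f = np.array([(response(xi, r) >= x_c).mean() for r in rho])

# Silverman bandwidth over the n_rho failure-probability samples (cf. Eq. 2.18)
s_f = np.std(p_f, ddof=1)
h_f = (4.0 * s_f**5 / (3.0 * n_rho)) ** 0.2
```

The scatter of `p_f` across ρ is exactly the epistemic credibility of the failure probability that the chapter's Figs. 2.4 and 2.10 visualize.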
In many applications, the failure probability P_f is defined as the probability of reaching or exceeding a critical threshold and is of great significance. This probability is typically predicated on pre-specified probabilistic models for the input parameters, and thus lends itself to the present analysis. To simplify the presentation, and without loss of generality, we assume a scalar description of the limit state in terms of a critical threshold for the QoI, denoted by X_c. The failure probability, P_f, is then given by the following integral,

\[ P_f(\rho) = \int_{x \ge X_c} f_X(x; \rho)\, dx, \tag{2.17} \]

where we have explicitly expressed the dependence of P_f on the epistemic input uncertainty encoded in ρ. The PDF of P_f computed by KDE is expressed as

\[ f_{P_f}(x) = \frac{1}{N_\rho h_f} \sum_{j=1}^{N_\rho} K\!\left( \frac{x - P_f^{(j)}}{h_f} \right), \tag{2.18} \]

where N_ρ is the number of samples of ρ used in estimating the KDE, and P_f^{(j)}, j = 1, ..., N_ρ, is the j-th realization of the failure probability, evaluated at ρ_j. The Gaussian kernel is used for K, with the bandwidth h_f determined following Silverman's rule as h_f = (4 s_f^5 / (3 N_ρ))^{1/5}, where s_f is the standard deviation estimated from the N_ρ failure probability samples.

2.4 Illustrative Numerical Examples

In this section, two examples are investigated to demonstrate the proposed framework. Example I is a beam structure for which a closed-form expression for the QoI is known. Example II is a reinforced concrete wall for which the hysteresis analysis is performed using finite elements.

2.4.1 Example I: Beam structure

The beam structure is shown in Fig. 2.1.

Figure 2.1: Schematic of the physical setup for Example I: Random beam on random supports.

The model characterizes the mid-span displacement X_mid of the beam, fixed by a linear spring and a rotational spring at each end, with a concentrated load F acting in the middle of the beam. The random inputs include the linear spring, rotational spring, flexural stiffness, and beam span, which are denoted by k_1, k_2, EI, and L, respectively.
The four input variables are mutually independent and follow Beta distributions. The vector of random statistical parameters P consists of the eight random variables α and β. The mean value of P is taken equal to the values estimated from the original dataset, and the coefficient of variation of each entry in P is assumed to be 5%. In order to explore the finite difference approach presented above, we assume that two datasets are made available to the analyst on two different occasions, yielding different parameters for the Beta distributions. These parameters can be estimated from the data using, for instance, a maximum likelihood approach. The distribution parameters are listed in Tab. 2.1, where α_o and β_o are the vectors of the two shape parameters of each input from the original dataset; α_n and β_n are the vectors of the two shape parameters of each input from the new dataset; and q and r are the vectors of the lower and upper bounds of each input, assumed to be the same in the two datasets. It can be shown from elementary mechanics of materials that the mid-span displacement X_mid for this beam is given by

\[ X_{mid} = \frac{F}{16EI} \left[ \frac{L^3}{3} - \frac{8EIk_2L^3 + k_1k_2L^6}{16EI\,(k_2 + k_1L^2)} + \frac{8EIk_1L^2 - k_1k_2L^3}{2k_1k_2 + 2k_1^2L^2} \right]. \tag{2.19} \]

Table 2.1: Statistical parameters of random inputs for Example I

Input variables                     Distribution   α_o   β_o   α_n   β_n   q       r
Linear spring k_1 (N*m)             Beta           4     5     5     6     350     650
Rotational spring k_2 (N*m/rad)     Beta           4     5     5     6     400     600
Flexural stiffness EI (N/m*m)       Beta           4     5     5     6     80      186.67
Beam span L (m)                     Beta           4     5     5     6     0.216   0.264

The PDF of X_mid using KDE based on the EPCE as in Eq. (2.2) is shown in Fig. 2.2. A second-order EPCE was found to be sufficiently converged in the tail of the PDF to carry out the foregoing PDF sensitivity studies. In addition, to avoid the effect of noise from the KDE on the tail of the PDF, the PDFs obtained from N = 10^4, 10^5, and 10^6 are plotted in Fig. 2.2.
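To make the sampling of Example I concrete, the Beta inputs of Table 2.1 can be drawn as below. Since the full closed form of Eq. (2.19) is involved, this sketch uses the simply-supported limit F L^3 / (48 EI) as a hedged stand-in for the response; the load value F and all names are illustrative assumptions, not values from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(4)
N, F = 10_000, 1000.0                    # sample size; the load F is illustrative

def scaled_beta(a, b, lo, hi, size):
    # Beta(a, b) rescaled to the support [lo, hi], as in Table 2.1
    return lo + (hi - lo) * rng.beta(a, b, size)

k1 = scaled_beta(4, 5, 350.0, 650.0, N)       # linear spring
k2 = scaled_beta(4, 5, 400.0, 600.0, N)       # rotational spring
EI = scaled_beta(4, 5, 80.0, 186.67, N)       # flexural stiffness
L  = scaled_beta(4, 5, 0.216, 0.264, N)       # beam span (m)

# Stand-in response: simply-supported mid-span deflection F L^3 / (48 EI).
# The full Eq. (2.19), which also involves k1 and k2, would replace this line.
X_mid = F * L**3 / (48.0 * EI)
```

Because the supports of the Beta inputs are bounded, the response samples are confined to a deterministic interval, consistent with the bounded-tail behavior discussed below for Fig. 2.5.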
It is found that 10^5 samples give an accurate tail of the PDF, and this value is thus used in the remaining computations. Fig. 2.3 shows the family of PDFs computed from Eq. (2.8) at 1000 samples of ρ. As indicated previously, the PDF in Fig. 2.2 is just the distribution over the ensemble of PDFs for which samples are shown in Fig. 2.3. Assuming failure is associated with X_mid > 2.45 cm, the distribution of the probability of failure (P_f) is obtained by evaluating P_f for each sample in Fig. 2.3 and plotting the PDF of the resulting values. The resulting PDF is shown in Fig. 2.4. Although confidence intervals for P_f can easily be synthesized from this PDF, more accurate decision analysis can be developed by relying on the full PDF. The change in response PDF resulting from the change in the statistical parameters from P^o to P^n computed by the finite difference approach is shown in Fig. 2.5. Based on Fig. 2.5, the failure criterion of X_mid = 2.45 cm exhibits one of the largest sensitivities to changes in P_i in the tail region. The sensitivity plot in this figure tapers toward zero for larger values of displacement, consistent with the fact that P_i refers to the shape parameters of the Beta distributions rather than to their range. We thus observe, as expected, a deterministic upper bound on the mid-span displacement as the shape parameters vary over their range. On the other hand, given a 95% confidence level for ρ, the confidence intervals [l_i, u_i] for each P_i, i = 1, ..., N_P, are computed to be [3.6, 4.4] for the shape parameters α and [4.51, 5.49] for the shape parameters β. Fig. 2.6 then shows the result computed by the stochastic sensitivity approach according to Eq. (2.13). The figure shows the

Figure 2.2: PDF computed by KDE (N = 10^4, 10^5, 10^6) using EPCE for Example I.
Figure 2.3: The family of PDFs at 1000 realizations of ρ for Example I.

Figure 2.4: PDF of failure probability that X_mid ≥ 2.45 cm for Example I.

statistical samples of the change in response PDF, as well as the expectation of these samples. The scatter in this sample is a reflection of the epistemic uncertainty about the probabilistic model of the input parameters. The shape of Δf_X using both the difference and the sensitivity methods is as expected, with the largest difference near the mode and a change of sign before tapering off to zero. The net area under the curve is equal to zero. The change in PDF according to the sensitivity approach differs from the change obtained using finite differences, which can be attributed to one or both of two factors. First, the derivative of the PDF with respect to P_i at each of the two datasets is quite different, leading to a discrepancy upon linearizing at either of the two datasets. Second, the values of P_i associated with the new dataset lie outside the confidence intervals just evaluated.

2.4.2 Example II: Reinforced concrete shear wall

It has been suggested that epistemic uncertainty is non-negligible in the seismic analysis of structures Ellingwood and Kinali (2009); Li et al. (2016); Allen and Maute (2005). To illustrate the proposed methodologies in this context, a reinforced concrete shear wall model is taken as an example in this study. The model comes from an experimental study Thomsen IV and Wallace (2004). Fig. 2.7 depicts the geometry, dimensions, and reinforcement of the shear wall. There are two steps in the loading procedure. First, a constant axial load of 378 kN is applied at the top of the wall, followed by a cyclic lateral load achieved by controlling the displacement.
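Such a displacement-controlled cyclic protocol of alternating triangular pulses can be sketched as a piecewise-linear history; the amplitudes and discretization below are illustrative assumptions, not the experiment's actual loading schedule.

```python
import numpy as np

def triangular_drift_history(amplitudes, pts_per_half=50):
    # For each target amplitude a, ramp 0 -> +a -> -a -> 0 with straight segments
    segs = [np.array([0.0])]
    for a in amplitudes:
        up = np.linspace(segs[-1][-1], a, pts_per_half, endpoint=False)
        down = np.linspace(a, -a, 2 * pts_per_half, endpoint=False)
        back = np.linspace(-a, 0.0, pts_per_half, endpoint=False)
        segs += [up, down, back]
    return np.concatenate(segs + [np.array([0.0])])

# Illustrative growing drift amplitudes (e.g. in % drift)
history = triangular_drift_history([0.1, 0.25, 0.5, 1.0])
```

In a displacement-controlled analysis, each entry of `history` would be imposed as the lateral drift at the control node of the finite element model.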
Figure 2.5: Change in response PDF between P^o and P^n by the finite difference method for Example I.

Figure 2.6: Statistical samples (Eq. 2.13) and their expectation (bold red; Eq. 2.15) for the change in response PDF with a 95% confidence interval of ρ by the stochastic sensitivity approach for Example I.

Figure 2.7: Schematic of the physical setup for Example II: Reinforced concrete shear wall.

The applied lateral drift consisted of a train of triangular pulses of alternating signs. Additional details of the setup and its loading are described elsewhere Thomsen IV and Wallace (2004). The purpose of the present analysis is to determine the influence of the statistical parameters of the mechanical properties of concrete and steel on the response PDF of the energy dissipated throughout the structure via hysteresis. Some of the material properties are considered as random variables, including the concrete elastic modulus E_c, the concrete tensile strength f_r, the concrete compressive strength f_c, and the steel yielding strength f_y. For concrete, E_c, f_r, and f_c are of course correlated; for simplicity, they were regarded as fully correlated in this chapter. According to the code ACI 318-19 (2019), the relationships between these parameters are

\[ E_c = 57{,}000 \sqrt{f_c}, \qquad f_r = 7.5\,\lambda \sqrt{f_c}, \tag{2.20} \]

where the units are in psi. To compute these concrete parameters, f_c is first sampled from a Beta distribution, and f_r and E_c are then generated according to Eq. (2.20). The steel strength is also modeled as a Beta random input. The material properties input to the shear wall structure are listed in Tab. 2.2.
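The unit handling around Eq. (2.20) can be sketched as follows: f_c is sampled in Pa from the Beta model of Table 2.2, converted to psi where the ACI relations apply, and the results are converted back to Pa. The normal-weight factor λ = 1.0 and the variable names are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
N, lam = 10_000, 1.0                     # lam: lightweight-concrete factor (normal weight: 1.0)
PSI = 6894.76                            # Pa per psi; Eq. (2.20) is stated in psi

# Compressive strength sampled from the Beta model of Table 2.2 (units: Pa)
f_c = 3.91e7 + (5.29e7 - 3.91e7) * rng.beta(2.0, 2.0, N)

# Eq. (2.20), ACI 318-19: E_c = 57,000 sqrt(f_c), f_r = 7.5 lam sqrt(f_c)  [psi]
f_c_psi = f_c / PSI
E_c = 57_000.0 * np.sqrt(f_c_psi) * PSI  # back to Pa
f_r = 7.5 * lam * np.sqrt(f_c_psi) * PSI
```

Since E_c and f_r are both deterministic functions of the same f_c sample, they come out fully correlated, which is exactly the simplifying assumption stated in the text.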
Table 2.2: Statistical parameters of random inputs for Example II

Material   Input variable                     Type   α_o^SW   β_o^SW   α_n^SW   β_n^SW   q^SW        r^SW
Concrete   Compressive strength f_c (Pa)      Beta   2        2        2.4      2.3      3.91×10^7   5.29×10^7
Steel      Yielding strength f_y (Pa)         Beta   2        2        2.4      2.2      3.40×10^8   4.60×10^8

In this shear wall hysteresis problem, α_o^SW and β_o^SW are the vectors of the two shape parameters of each input from the original dataset; α_n^SW and β_n^SW are the vectors of the two shape parameters of each input from the new dataset; and q^SW and r^SW are the vectors of the lower and upper bounds of each input, assumed to be the same in the two datasets. Thus, in this example, the vector P of random parameters has four components consisting of the shape parameters α^SW and β^SW. The mean value of P is taken equal to the values estimated from the original dataset, and the coefficient of variation of each entry in P is assumed to be 5%. The resulting [l_i, u_i] interval, at the 95% confidence level, for each of the shape parameters is equal to [1.80, 2.20]. We also note, however, that the parameters from the new dataset lie outside this confidence interval. We implement in Abaqus Hibbitt (2001) a model that follows the theoretical development of Feng et al. (2018), which features a multi-dimensional softened plasticity damage model. The steel material follows a Menegotto-Pinto model that includes strain hardening, Bauschinger effects, and tension stiffening, and a multi-layer shell element is used for the shear wall Feng et al. (2018). Additional material properties include the concrete Poisson's ratio, the steel elastic modulus, and the steel hardening ratio, which are deterministic inputs. The response PDF of the energy dissipation using KDE based on the EPCE is shown in Fig. 2.8.
Again, a convergence test is conducted using N = 10^4, 10^5, 10^6 samples to avoid the effect of KDE noise on the tail of the PDF, as shown in Fig. 2.2. N = 10^5 is found to give an accurate tail of the PDF and is thus used in the final calculations. Fig. 2.9 then shows the family of PDFs computed at 1000 samples of ρ. Here again, a second-order EPCE was found to yield a converged PDF. Again, the PDF in Fig. 2.8 is just the distribution of the family of PDFs in Fig. 2.9. Postprocessing these results, the probability, P_f, of exceeding a threshold level of X = 37.5 kN·m is shown in Fig. 2.10. We reiterate that the probability of failure is itself characterized as a random variable, with a computed scatter that reflects its credibility for critical decision making.

The change in response PDF due to the change in the statistical parameters from P_o to P_n, computed by the finite difference approach, is shown in Fig. 2.11. Here again we seek a value in the tail region of energy dissipation that exhibits significant sensitivity to fluctuations in P_i. We thus select a value of 37.5 kN·m at which to evaluate sensitivities and variations in PDFs. On the other hand, given a 95% confidence level for ρ, the confidence intervals [l_i, u_i] for each P_i (i = 1, …, N_P) are computed and ΔP is then obtained. Fig. 2.12 shows the result from the stochastic sensitivity approach according to Eq. 2.13, as well as the expected value of the change in PDF according to Eq. 2.15. The figure shows a statistical ensemble of these samples of the change in PDF that reflects the epistemic uncertainty about the probabilistic parameters of the input variables. The same observations noted for the previous example, while comparing f_X(x) obtained using the two formalisms, apply to the present example. There is a distinct difference in the shape of the scatter between this example and the previous one. In our definition of Δf_X(x), the increments ΔP_i are deterministic.
The scatter in Δf_X is thus due to the scatter in the sensitivities expressed in Eq. 2.10. The two examples clearly exhibit different dependence of the sensitivities on x and ρ, demonstrating the influence of the chaos coefficients of X on these sensitivities. The difference between Figs. 2.6 and 2.12 is mainly due to the contributions of higher-order polynomials in ρ (Eq. 2.8) in Example II, while Example I is largely dominated by the first-order polynomials in ρ. It is also noted that the scatter in the sensitivities does not mirror the scatter in the PDF, and that the sensitivity of the PDF varies considerably along its support.

2.5 Concluding Remarks

A framework is proposed in this chapter to quantify mixed aleatory and epistemic uncertainties and to evaluate their influence on the probability density functions of various quantities of interest. The epistemic uncertainty is modeled as random variables which are integrated with the aleatory variables into the EPCE, thus realizing a single-stage, efficient quantification of both types of uncertainty. The sensitivity of the response PDF with respect to the statistical parameters of the input

Figure 2.8: PDF computed by KDE (N = 10^4, 10^5, 10^6) using EPCE for Example II.

Figure 2.9: The family of PDFs at 1000 realizations of ρ for Example II.

Figure 2.10: PDF of the failure probability that X ≥ 37.5 kN·m for Example II.

Figure 2.11: Change in response PDF between P_o and P_n by the finite difference method for Example II.

Figure 2.12: Statistical samples (Eq.
2.13) and their expectation (dashed bold red; Eq. 2.15) for the change in response PDF with a 95% confidence interval of ρ by the stochastic sensitivity approach for Example II.

variables is expressed through a combination of KDE and EPCE, thus allowing for a straightforward post-processing of the EPCE to determine the sensitivity of the PDF to epistemic variables, with little additional computational effort. Based on the result of the EPCE, several metrics, including the family of PDFs and the distribution of the failure probability, are investigated to provide interpretations of the response from different angles. Based on the significant variation of the sensitivities along the support of the PDF, we deduce that the acquisition of additional observations, aimed at shaping the statistical parameters, should take into account which portion of the support is expressed by the relevant decisions.

If the statistical parameters P are estimated according to MLE arguments, then their asymptotic distribution will be Gaussian. However, in the small-data case, the distribution of P will generally depend on the dataset. As a potential extension of this work, one could replace Eq. 2.1 with a higher-order EPCE to account for a more general form of the density function. In this case, the development in section 4.3.2 would have to be modified, while the developments in sections 4.3.1 and 4.4 would remain valid.

Chapter 3
Sensitivity Measures

3.1 Introduction

Sensitivity analysis (SA) is an assessment of how the uncertainty in the model output can be apportioned to the sources of uncertainty in the model input Saltelli et al. (2004). Generally, SA can be classified into two categories: local sensitivity analysis and global sensitivity analysis. Local sensitivity analysis is usually performed when the analysis around a nominal point in the model input space is of interest.
Global sensitivity analysis takes into account the entire variation range of the parameters and apportions the output uncertainty to the uncertainty of the input parameters, covering their full range. Among the many global sensitivity measures available, the variance-based (i.e., Sobol' index Sobol (2001)) and the density-based (or moment-independent) Borgonovo (2007) sensitivity indices have gained much popularity during the past two decades. In this context, the model inputs involved in global sensitivity analysis are modeled as random variables whose distributions are explicitly known and depend on a few distribution parameters (e.g., the mean and standard deviation of a Gaussian distribution, or the shape parameters of a Beta distribution). The variance-based and density-based indices are both scalar measures and are evaluated from the unconditional and conditional probability of the output.

Practically, engineering applications are frequently associated with uncertainties resulting from scarce information. In such cases, the required probabilistic modeling can be difficult due to the lack of data. When rare events with small probabilities are of interest, as is the case in most reliability and risk analysis problems, the tail of the PDF becomes important Der Kiureghian and Ditlevsen (2009). In such cases, error can arise from the probabilistic models. Der Kiureghian advocated parameterizing the choice of the distribution so that the error in the probabilistic model is represented by the uncertainty in the distribution parameters Kiureghian (1989). The determination of the distribution parameters in the probabilistic model of the input then involves subjective decisions based on "expert opinion", in conjunction with the available data, to produce weak inference of the input characterization.
In this context, sources of uncertainty can be classified into aleatory (or inherent) uncertainty and epistemic (or approximation) uncertainty Der Kiureghian and Ditlevsen (2009). Aleatory uncertainty typically refers to irreducible variabilities inherent in nature and is usually treated using random variables. Epistemic uncertainty, representing lack of knowledge, can be reduced by acquiring more data and accordingly revising the predictive model. A great deal of research focuses on sensitivity analysis for aleatory uncertainty. In fact, the state of knowledge and the type of uncertainty in the model have a significant impact on the characterization of uncertainty in sensitivity analysis Borgonovo and Plischke (2016); Helton et al. (2006b); Guo and Du (2007). Thus, a robust and thorough sensitivity analysis requires investigating both the aleatory and epistemic types of uncertainty that the model captures. Approaches for modeling and propagating epistemic uncertainty in global sensitivity analysis include evidence theory Helton et al. (2006a), fuzzy theory Beer et al. (2013), interval analysis Jakeman et al. (2010), and probability theory Ehre et al. (2020). A common idea has been to segregate these two types of uncertainty and perform nested iterations, with aleatory analysis in the inner loop and epistemic analysis in the outer loop Au (2005); Nannapaneni and Mahadevan (2016); Chabridon et al. (2018). Meynaoui applied Hilbert–Schmidt dependence measures to reduce the computational cost from a double-loop Monte Carlo to a single-loop Monte Carlo and evaluated scalar second-level global sensitivity indices Meynaoui et al. (2019). In this work, we enhance the computational efficiency through novel stochastic polynomial chaos representations that provide a uniform treatment of aleatory and epistemic uncertainties, and propose more informative importance measures.
To rank the importance of the model inputs or their distribution parameters, most works use well-known sensitivity measures, mostly the variance-based Ehre et al. (2020); Morio (2011); Zhang et al. (2020) and moment-independent Luyi et al. (2012); Aven and Nøkland (2010) indices, or propose new scalar measures Wang et al. (2013). Wang performed a single-loop evaluation of Sobol' indices by sampling from a joint auxiliary density with respect to both the random inputs and their uncertain distribution parameters Wang and Jia (2020). These importance measures are based on conditional probability and are scalar indices. It might be interesting for analysts or engineers to ask the following questions: Does the importance ranking of the distribution parameters change at different values of the QoI? Does the degree to which the distribution parameters influence the model output change at different values of the QoI? Does an increase/decrease in a distribution parameter provoke an increase/decrease in the statistics of the model output Borgonovo and Plischke (2016)?

Recently, an extended polynomial chaos expansion (EPCE) for the quantification of simultaneous epistemic and aleatory uncertainties was developed Wang and Ghanem (2021, 2019). An epistemic random variable is used to represent the uncertainty in the distribution parameters. The sensitivities of the response PDF with respect to the distribution parameters of the input are evaluated through an EPCE-based kernel density construction. In this work, to isolate the sensitivity with respect to any one distribution parameter from the others, we modify the EPCE by using multiple epistemic variables, each of which represents a microstructure of a distribution parameter. Consequently, the obtained sensitivity measure is in a functional form with respect to the output, and the three questions above can thus be addressed. Such a measure is more informative than scalar indices.
For instance, it may provide different importance rankings of the distribution parameters at different values of the output. In addition, the direction of change in the output can be inferred from the sign of the sensitivity index function.

Reliability sensitivity is defined as the partial derivative of the failure probability with respect to the distribution parameters of the model input Jensen et al. (2015); Dubourg and Sudret (2014); Lu et al. (2008). It helps to identify the ranking of the parameters in the probabilistic model and to guide reliability-based design and risk assessment. Typically, reliability sensitivity analysis is computationally expensive because it requires a large number of model simulations to evaluate the failure probability for different distribution parameters. Thus, much research has been conducted on efficient algorithms for reliability sensitivity analysis, including adaptive importance sampling Wu (1994), line sampling Lu et al. (2008), Kriging Dubourg and Sudret (2014), and Polynomial-Chaos Kriging Schöbi et al. (2017). In this work, we develop a straightforward stochastic representation of the sensitivity of the failure probability with respect to the distribution parameters through an efficient MEPCE-KDE-based formulation. In particular, once the aforementioned global sensitivity index function is obtained, the reliability sensitivity index can be computed directly, without extra computational cost, by integrating the tail of the GSI function.

The main objective of this work is to provide a new functional global sensitivity index and to efficiently evaluate a reliability sensitivity index. The remainder of the chapter is structured as follows. Previous chapters have presented the conventional PCE and the EPCE approaches. Section 3.2 introduces the MEPCE probabilistic modeling approach, in which epistemic variables are assigned to each distribution parameter.
Section 3.3 presents the derivation of the new global sensitivity index function and the evaluation of the reliability sensitivity index. Section 3.4 demonstrates the two sensitivity measures on three analytical and numerical illustrative examples. Section 3.5 summarizes the conclusions and gives an outlook.

3.2 Modified Extended Polynomial Chaos Expansion

The PCE and EPCE have been introduced in the foregoing chapters. The modification of the aforementioned EPCE to the case where each P_i is decomposed according to its own stochastic dimension ρ_i can be readily accommodated, with some increase in computational cost. It is worth mentioning that this higher parameterization is not necessarily more physical or more accurate, but indicates that each distribution parameter is viewed as dependent on a single microstructure, with the random parameter ρ_i identifying the particular microstructure being investigated.

To formulate the modified extended polynomial chaos expansion, let us introduce the m-dimensional vector ρ = {ρ_1, …, ρ_m} of standard normal random variables that are uncorrelated with ξ = {ξ_1, …, ξ_d} and are in one-to-one correspondence with P = {P_1, …, P_m}. We then assume that each P_i, i = 1, …, m, follows a normal distribution and can be modeled as a random variable that depends on ρ_i. In other words, P_i can be modeled as a normal random variable P_i ~ N(μ_{P_i}, σ²_{P_i}), which results in

    P_i = \mu_{P_i} + \sigma_{P_i}\, \rho_i, \qquad i = 1, \ldots, m,        (3.1)

where μ_{P_i} is the mean of P_i and σ_{P_i} its standard deviation. In this sense, X can be represented as a function of {ξ_1, …, ξ_d} and {ρ_1, …, ρ_m}, which are all independent standard normal random variables, and is denoted by X(η′), where η′ = {ξ_1, …, ξ_d, ρ_1, …, ρ_m} ∈ R^(d+m).
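The parameterization of Eq. 3.1 can be sketched as follows. This is a minimal illustration: the means shown are the Beta shape parameters used later in the shear wall example, and the 5% coefficient of variation matches the assumption used throughout the case studies.

```python
import random

def epistemic_parameters(means, cov=0.05, rng=random):
    """Realize each distribution parameter as P_i = mu_i + sigma_i * rho_i
    (Eq. 3.1), with sigma_i = cov * |mu_i| and rho_i a standard normal germ."""
    rhos = [rng.gauss(0.0, 1.0) for _ in means]
    params = [mu + cov * abs(mu) * r for mu, r in zip(means, rhos)]
    return params, rhos

random.seed(4)
# e.g., four Beta shape parameters, each with mean 2 and 5% coefficient of variation
params, rhos = epistemic_parameters([2.0, 2.0, 2.0, 2.0])
```

Each germ ρ_i perturbs exactly one parameter P_i, which is what later allows parameter-by-parameter sensitivities to be extracted from a single expansion.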
Expressing X as a function of η′ in the form X(η′), and representing X(η′) in an orthogonal polynomial expansion with respect to η′, namely the modified extended PCE, results in

    X(\eta') = \sum_{|\gamma'| \le p} X_{\gamma'} \, \psi_{\gamma'}(\eta'),        (3.2)

where {X_{γ′}} denote the PCE coefficients; p denotes the highest order in the polynomial expansion; γ′ is a (d+m)-dimensional multi-index; and {ψ_{γ′}} are normalized multivariate Hermite polynomials that can be expressed in terms of their univariate counterparts using the following notation,

    \psi_{\gamma'}(\eta') = \prod_{k=1}^{d+m} \psi_{\gamma'_k}(\eta'_k) = \prod_{k=1}^{d+m} \frac{h_{\gamma'_k}(\eta'_k)}{\sqrt{\gamma'_k!}},        (3.3)

where h_{γ′_k} represents the one-dimensional Hermite polynomial of order γ′_k. The collection of multivariate polynomials forms an orthogonal set with respect to the multivariate Gaussian density function.

The dependence on ξ can be separated from the dependence on ρ, which results in the following useful representation,

    X(\eta') = X(\xi, \rho) = \sum_{\substack{\alpha \in \mathbb{N}^d,\ \beta \in \mathbb{N}^m \\ |\alpha|+|\beta| \le p}} X_{\alpha\beta} \, \psi_\alpha(\xi) \, \psi_\beta(\rho), \qquad \xi \in \mathbb{R}^d,\ \rho \in \mathbb{R}^m,        (3.4)

with the subscript αβ being a (d+m) multi-index formed as the concatenation of α and β. Further, to separate a single ρ_i from ξ and all the other elements of ρ, the expansion is written as

    X(\eta') = X(\xi, \rho_{-i}, \rho_i) = \sum_{\substack{\alpha \in \mathbb{N}^d,\ \omega \in \mathbb{N}^{m-1},\ \tau \in \mathbb{N} \\ |\alpha|+|\omega|+\tau \le p}} X_{\alpha\omega\tau} \, \psi_\alpha(\xi) \, \psi_\omega(\rho_{-i}) \, \psi_\tau(\rho_i),        (3.5)

with the subscript αωτ being a (d+m) multi-index formed as the concatenation of α, ω, and τ, where ρ_{-i} denotes all the elements of ρ except ρ_i.
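The basis of Eq. 3.3 and the coefficient count of Eq. 3.7 can be sketched as follows; this is an illustrative helper, not the dissertation's implementation, and the brute-force multi-index enumeration is only sensible for small dimensions and orders.

```python
import math
from itertools import product

def hermite_prob(n, x):
    """Probabilists' Hermite polynomial h_n(x) from the three-term recurrence
    h_{k+1}(x) = x h_k(x) - k h_{k-1}(x), with h_0 = 1 and h_1 = x."""
    if n == 0:
        return 1.0
    h_prev, h = 1.0, x
    for k in range(1, n):
        h_prev, h = h, x * h - k * h_prev
    return h

def psi(gamma, eta):
    """Normalized multivariate Hermite polynomial psi_gamma(eta) (Eq. 3.3)."""
    return math.prod(hermite_prob(g, e) / math.sqrt(math.factorial(g))
                     for g, e in zip(gamma, eta))

def n_coefficients(d, m, p):
    """Number of PCE coefficients (Eq. 3.7): (d + m + p)! / ((d + m)! p!)."""
    return math.comb(d + m + p, p)

def multi_indices(dim, p):
    """All multi-indices gamma' with |gamma'| <= p (small dim and p only)."""
    return [g for g in product(range(p + 1), repeat=dim) if sum(g) <= p]
```

For instance, a germ of total dimension d + m = 4 with order p = 2 has 15 basis polynomials, matching Eq. 3.7.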
The PCE coefficients X_{γ′} are estimated using quadrature rules to evaluate the multidimensional integrals as follows,

    X_{\gamma'} = \sum_{q' \in Q'} X(\eta'^{\,q'}) \, \psi_{\gamma'}(\eta'^{\,q'}) \, w'_{q'}, \qquad |\gamma'| \le p,        (3.6)

where Q′ is the set of sparse quadrature points, q′ is a quadrature node in the set Q′, and w′_{q′} is the associated weight. The quadrature level required to achieve a preset accuracy in approximating any X_{γ′} increases with the order of the associated polynomial ψ_{γ′}. For a given polynomial order p and germ dimension d, the number of these PCE coefficients, denoted by N′_c, is equal to

    N'_c = \frac{(d+m+p)!}{(d+m)! \, p!}.        (3.7)

In this fashion, the dependence on ρ_i can be separated from the dependence on the other germs in η′, which is readily exploited to perform sensitivity analysis with respect to each single epistemic germ.

3.3 Global and Reliability Sensitivity Measures With Respect to Distribution Parameters

The aforementioned MEPCE and the KDE are the two key ingredients used to develop the new sensitivity measures in this work. Subsection 3.3.1 introduces the new global sensitivity function with respect to the distribution parameters and, as a post-processing step, subsection 3.3.2 presents the evaluation of the reliability sensitivity index in an efficient manner. Again, the PDF of X (the QoI), denoted by f_X(x), and the KDE of the PDF, denoted by \hat{f}_X(x), can be expressed as

    f_X(x) \overset{D}{=} \hat{f}_X(x) = \frac{1}{Nh} \sum_{j=1}^{N} K\!\left( \frac{x - X^{(j)}}{h} \right),        (3.8)

where we recall that

    X^{(j)} = \sum_{|\gamma'| \le p} X_{\gamma'} \, \psi_{\gamma'}(\eta'^{(j)}).        (3.9)

The Gaussian kernel is used for K, with its bandwidth h determined following Silverman's rule as

    h = \left( \frac{4\sigma^5}{3N} \right)^{1/5},        (3.10)

where σ is the sample standard deviation evaluated from the N samples. It should be noted that the statistical properties of the KDE hinge on the samples being independently selected from the distribution of X.
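The KDE of Eq. 3.8 with the Silverman bandwidth of Eq. 3.10 can be sketched as follows; as a stand-in for PCE output samples, standard normal draws are used here so the estimate can be checked against a known density.

```python
import math
import random
import statistics

def silverman_bandwidth(samples):
    """Silverman's rule (Eq. 3.10): h = (4 sigma^5 / (3 N))^(1/5)."""
    return (4.0 * statistics.stdev(samples) ** 5 / (3.0 * len(samples))) ** 0.2

def kde(samples, x, h=None):
    """Gaussian-kernel density estimate (Eq. 3.8) evaluated at the point x."""
    if h is None:
        h = silverman_bandwidth(samples)
    n = len(samples)
    return sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in samples) \
        / (n * h * math.sqrt(2.0 * math.pi))

# Stand-in for PCE output samples: standard normal draws, whose density at 0
# is 1/sqrt(2 pi).
random.seed(1)
xs = [random.gauss(0.0, 1.0) for _ in range(5000)]
f0 = kde(xs, 0.0)
```

In the actual workflow the samples `xs` would be produced by pushing Gaussian germ samples through the converged MEPCE of Eq. 3.9.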
In our formulation, samples of η′ ∈ R^(d+m) are independently drawn from a (d+m)-dimensional Gaussian distribution and subsequently pushed through the PCE to yield samples of X. The samples thus collected do not necessarily, a priori, follow the distribution of X. However, mean-square convergence of the PCE implies its convergence in distribution. Thus, provided the PCE of X is converged, samples collected from the PCE will adhere to the distribution of X.

3.3.1 Global sensitivity index function with respect to distribution parameters

As described in Eq. 3.1, the modified extended PCE developed in Eq. 3.2 is integrated with the KDE in Eq. 3.8, resulting in

    f_X(x) = \frac{1}{Nh} \sum_{j=1}^{N} K\!\left( \frac{x - \sum_{|\gamma'| \le p} X_{\gamma'} \psi_{\gamma'}(\eta'^{(j)})}{h} \right), \qquad \eta'^{(j)} \in \mathbb{R}^{d+m}.        (3.11)

This expression for f_X involves summation over all (d+m) stochastic dimensions (η′) and therefore does not express dependence on any of them. In order to retain sensitivity with respect to the parameters P, we make use of equation (3.5) and replace equation (3.11) by the following,

    f_X(x; \rho_i) = \frac{1}{Nh} \sum_{j=1}^{N} K\!\left( \frac{1}{h}\Big( x - \sum_{\substack{\alpha,\, \omega,\, \tau \\ |\alpha|+|\omega|+\tau \le p}} X_{\alpha\omega\tau} \, \psi_\alpha(\xi^{(j)}) \, \psi_\omega(\rho_{-i}^{(j)}) \, \psi_\tau(\rho_i) \Big) \right), \qquad \xi^{(j)} \in \mathbb{R}^d,\ \rho_{-i}^{(j)} \in \mathbb{R}^{m-1},\ \rho_i \in \mathbb{R}.        (3.12)

By taking the directional derivative of f_X(x; ρ_i) in Eq. 3.12 with respect to P_i, the sensitivity of the PDF to the distribution parameters of the inputs, denoted by f_{X,P_i}(x; ρ_i), is given by

    f_{X,P_i}(x; \rho_i) = \frac{\partial f_X(x; \rho_i)}{\partial P_i} = \sigma_{P_i} \, \frac{\partial f_X(x; \rho_i)}{\partial \rho_i}, \qquad i = 1, \ldots, m,        (3.13)

and substituting the KDE formulation of Eq. 3.12 into Eq. 3.13 results in

    f_{X,P_i}(x; \rho_i) = \frac{\sigma_{P_i}}{N h^2} \sum_{j=1}^{N} \left[ \frac{x - X(\xi^{(j)}, \rho_{-i}^{(j)}, \rho_i)}{h} \, K\!\left( \frac{x - X(\xi^{(j)}, \rho_{-i}^{(j)}, \rho_i)}{h} \right) \sum_{\substack{\alpha,\, \omega,\, \tau \\ |\alpha|+|\omega|+\tau \le p}} X_{\alpha\omega\tau} \, \psi_\alpha(\xi^{(j)}) \, \psi_\omega(\rho_{-i}^{(j)}) \, \frac{\partial \psi_\tau(\rho_i)}{\partial \rho_i} \right], \qquad i = 1, \ldots, m,        (3.14)

where we relied on the Gaussian form of the kernel.
Taking the mathematical expectation of the derivative of f_X with respect to P_i results in the following expression for the expected value of f_{X,P_i}(x; ρ_i), denoted by the function ζ_{P_i}(x),

    \zeta_{P_i}(x) \overset{D}{=} \mathbb{E}_{\rho_i}\!\left[ f_{X,P_i}(x; \rho_i) \right].        (3.15)

To derive the analytical expression of ζ_{P_i}(x), we note the recurrence relation for the derivative of the univariate Hermite polynomials,

    h'_n(x) = x \, h_n(x) - h_{n+1}(x),        (3.16)

and taking the expectation of the derivative of f_X with respect to P_i results in the following expression for ζ_{P_i}(x), where ⟨·⟩ denotes the expectation operator,

    \zeta_{P_i}(x) = \frac{\sigma_{P_i}}{N h^2} \sum_{j=1}^{N} \left\langle \frac{x - X(\xi^{(j)}, \rho_{-i}^{(j)}, \rho_i)}{h} \, K\!\left( \frac{x - X(\xi^{(j)}, \rho_{-i}^{(j)}, \rho_i)}{h} \right) \sum_{\substack{\alpha,\, \omega,\, \tau \\ |\alpha|+|\omega|+\tau \le p}} X_{\alpha\omega\tau} \, \psi_\alpha(\xi^{(j)}) \, \psi_\omega(\rho_{-i}^{(j)}) \, \frac{\partial \psi_\tau(\rho_i)}{\partial \rho_i} \right\rangle, \qquad i = 1, \ldots, m.        (3.17)

The sensitivity index function ζ_{P_i}(x) is the functional global sensitivity index developed in this work. The properties of this sensitivity measure include: (a) the sensitivity index curves with respect to different distribution parameters are comparable (all the curves can be plotted on the same axes for a straightforward comparison); (b) the importance ranking of the distribution parameters may change at different values of the QoI; (c) the sign of the sensitivity index function indicates whether the PDF of the QoI is increasing or decreasing with respect to the distribution parameters of the inputs; (d) the net area under a sensitivity index curve is equal to zero. The sensitivity index function provides a straightforward and efficient stochastic representation of the sensitivity of the PDF with respect to the distribution parameters of the input random variables.
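The identity of Eq. 3.16, which underlies the analytical expectation in Eq. 3.17, can be verified numerically. This is a small check under the convention of probabilists' Hermite polynomials (the variant orthogonal with respect to the standard Gaussian density, consistent with Eq. 3.3).

```python
def hermite_prob(n, x):
    """Probabilists' Hermite polynomial via h_{k+1} = x h_k - k h_{k-1}."""
    if n == 0:
        return 1.0
    h_prev, h = 1.0, x
    for k in range(1, n):
        h_prev, h = h, x * h - k * h_prev
    return h

def hermite_derivative(n, x):
    """Eq. 3.16: h_n'(x) = x h_n(x) - h_{n+1}(x)."""
    return x * hermite_prob(n, x) - hermite_prob(n + 1, x)

def central_difference(n, x, eps=1e-6):
    """Finite-difference derivative of h_n for cross-checking Eq. 3.16."""
    return (hermite_prob(n, x + eps) - hermite_prob(n, x - eps)) / (2.0 * eps)
```

Eq. 3.16 is equivalent to the standard relation h_n'(x) = n h_{n-1}(x), which the recurrence h_{n+1} = x h_n − n h_{n-1} makes immediate; both forms can be checked against the finite difference.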
3.3.2 Reliability sensitivity index with respect to distribution parameters

Reliability sensitivity refers to the partial derivative of the failure probability with respect to the distribution parameters of the input. It can help to identify the ranking of the distribution parameters and to guide reliability-based design and risk assessment. In engineering applications, the failure probability is defined as the probability of reaching or exceeding a critical threshold and is of great significance. Let us assume a scalar description of the limit state in terms of a critical threshold for the QoI, denoted by X_c. The failure probability, F, is then given by the following integral,

    F(\rho_i) = \int_{x \ge X_c} f_X(x; \rho_i) \, dx,        (3.18)

where f_X(x; ρ_i) is computed by Eq. 3.12. By taking the directional derivative of F(ρ_i) in Eq. 3.18 with respect to P_i, the sensitivity of the failure probability to the distribution parameters of the inputs, denoted by F_{P_i}(ρ_i), is given by

    F_{P_i}(\rho_i) = \frac{\partial F(\rho_i)}{\partial P_i} = \sigma_{P_i} \, \frac{dF(\rho_i)}{d\rho_i}, \qquad i = 1, \ldots, m,        (3.19)

and substituting the formulation of Eq. 3.18 into Eq. 3.19 results in

    F_{P_i}(\rho_i) = \sigma_{P_i} \, \frac{d}{d\rho_i} \int_{x \ge X_c} f_X(x; \rho_i) \, dx, \qquad i = 1, \ldots, m.        (3.20)

According to Leibniz's rule, Eq. 3.20 can be equivalently computed as

    F_{P_i}(\rho_i) = \sigma_{P_i} \int_{x \ge X_c} \frac{\partial}{\partial \rho_i} f_X(x; \rho_i) \, dx, \qquad i = 1, \ldots, m,        (3.21)

and substituting the formulation of Eq. 3.14 into Eq. 3.21 results in

    F_{P_i}(\rho_i) = \frac{\sigma_{P_i}}{N h^2} \int_{x \ge X_c} \sum_{j=1}^{N} \left[ \frac{x - X(\xi^{(j)}, \rho_{-i}^{(j)}, \rho_i)}{h} \, K\!\left( \frac{x - X(\xi^{(j)}, \rho_{-i}^{(j)}, \rho_i)}{h} \right) \sum_{\substack{\alpha,\, \omega,\, \tau \\ |\alpha|+|\omega|+\tau \le p}} X_{\alpha\omega\tau} \, \psi_\alpha(\xi^{(j)}) \, \psi_\omega(\rho_{-i}^{(j)}) \, \frac{\partial \psi_\tau(\rho_i)}{\partial \rho_i} \right] dx, \qquad i = 1, \ldots, m,        (3.22)

where we relied on the Gaussian form of the kernel. Equation (3.22) provides a stochastic representation of the sensitivity of the failure probability with respect to each probabilistic parameter of the input random variables.
Taking the mathematical expectation of the derivative of F with respect to P_i results in the following expression for the expected value of F_{P_i}(ρ_i), denoted by the scalar K_{P_i},

    K_{P_i} \overset{D}{=} \mathbb{E}_{\rho_i}\!\left[ F_{P_i}(\rho_i) \right].        (3.23)

The analytical expression of K_{P_i}, replacing Eq. 3.23, is as follows, where ⟨·⟩ denotes the expectation operator,

    K_{P_i} = \frac{\sigma_{P_i}}{N h^2} \int_{x \ge X_c} \sum_{j=1}^{N} \left\langle \frac{x - X(\xi^{(j)}, \rho_{-i}^{(j)}, \rho_i)}{h} \, K\!\left( \frac{x - X(\xi^{(j)}, \rho_{-i}^{(j)}, \rho_i)}{h} \right) \sum_{\substack{\alpha,\, \omega,\, \tau \\ |\alpha|+|\omega|+\tau \le p}} X_{\alpha\omega\tau} \, \psi_\alpha(\xi^{(j)}) \, \psi_\omega(\rho_{-i}^{(j)}) \, \frac{\partial \psi_\tau(\rho_i)}{\partial \rho_i} \right\rangle dx, \qquad i = 1, \ldots, m,        (3.24)

where the expectation averages over the epistemic input uncertainty encoded in ρ_i. To normalize the indicators, the weighted K_{P_i} with respect to the distribution parameters is used to represent the reliability sensitivity index, denoted by κ_{P_i},

    \kappa_{P_i} = \frac{|K_{P_i}|}{\sum_{j=1}^{m} |K_{P_j}|}, \qquad i = 1, \ldots, m,        (3.25)

where κ_{P_i} is the scalar reliability sensitivity measure developed in this work. The properties of the reliability sensitivity measure κ_{P_i} include: (a) Σ_{i=1}^m κ_{P_i} = 1; (b) 0 ≤ κ_{P_i} ≤ 1. The reliability sensitivity index provides a straightforward and efficient stochastic representation of the sensitivity of the failure probability with respect to the probabilistic parameters of the input random variables.

Table 3.1: Distribution parameters of random inputs for Example I: Ishigami test function

Input variable | Distribution | l_b | u_b
κ_1            | uniform      | −π  | π
κ_2            | uniform      | −π  | π
κ_3            | uniform      | −π  | π

3.4 Case Studies

In this section, three examples are investigated to demonstrate the proposed sensitivity measures. Example I is the Ishigami test function, a benchmark problem commonly used in sensitivity analysis. Example II is a beam structure for which a closed-form expression for the QoI is known.
Example III is a reinforced concrete shear wall for which the hysteresis analysis is performed using finite elements.

3.4.1 Example I: Ishigami function

The Ishigami function is frequently used to test sensitivity analysis methods because it exhibits strong nonlinearity and nonmonotonicity. The function is expressed as

    X = g(\kappa) = \sin(\kappa_1) + a \sin^2(\kappa_2) + b\, \kappa_3^4 \sin(\kappa_1),        (3.26)

where the κ_i, i = 1, 2, 3, are assumed independent and uniformly distributed between −π and π, and the constants a and b are set to 5 and 0.1, respectively. The vector of random distribution parameters P consists of the six random variables l_b and u_b, the vectors of lower and upper bounds of the uniform distributions. The distribution parameters are listed in Tab. 3.1. The mean value of P is taken equal to the nominal values, and the coefficient of variation of each entry in P is assumed to be 5%. The distributions of P are described in Tab. 3.2.

Table 3.2: Distribution of statistical parameters for Example I: Ishigami test function

P_i | Parameter                  | Distribution | μ_{P_i}  | σ_{P_i} = 5% μ_{P_i}
P_1 | Lower bound of uniform κ_1 | Normal       | −3.14159 | 0.15708
P_2 | Upper bound of uniform κ_1 | Normal       | 3.14159  | 0.15708
P_3 | Lower bound of uniform κ_2 | Normal       | −3.14159 | 0.15708
P_4 | Upper bound of uniform κ_2 | Normal       | 3.14159  | 0.15708
P_5 | Lower bound of uniform κ_3 | Normal       | −3.14159 | 0.15708
P_6 | Upper bound of uniform κ_3 | Normal       | 3.14159  | 0.15708

Fig. 3.1 shows the response PDF of the Ishigami test function using Eq. 3.11. Fig. 3.2 compares the proposed sensitivity index function ζ_{P_i}(x) of Eq. 3.17 with respect to each input distribution parameter for the Ishigami test function. It can be seen that the importance ranking of the distribution parameters varies along the QoI axis. The multiple changes of sign of the sensitivity index functions indicate whether the PDF of the QoI is increasing or decreasing with respect to the distribution parameters of the inputs.
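A minimal Monte Carlo sketch of this example combines the Ishigami function of Eq. 3.26 with the normalization of Eq. 3.25; the sample size and seed are arbitrary choices, and the K-values fed to the normalization below are illustrative placeholders rather than computed results.

```python
import math
import random

A, B = 5.0, 0.1  # constants a and b used in this example

def ishigami(k1, k2, k3, a=A, b=B):
    """Ishigami test function, Eq. 3.26."""
    return math.sin(k1) + a * math.sin(k2) ** 2 + b * k3 ** 4 * math.sin(k1)

def reliability_sensitivity_indices(K):
    """Normalize expected derivatives K_{P_i} into indices kappa_{P_i} (Eq. 3.25)."""
    total = sum(abs(k) for k in K)
    return [abs(k) / total for k in K]

random.seed(2)
u = lambda: random.uniform(-math.pi, math.pi)
xs = [ishigami(u(), u(), u()) for _ in range(20000)]

mean_x = sum(xs) / len(xs)                # analytical mean is a/2 = 2.5
pf = sum(x > 10.0 for x in xs) / len(xs)  # crude estimate of P(X > 10)

# Illustrative (not computed here) signed K-values; Eq. 3.25 rescales their
# magnitudes so the resulting indices sum to one.
kappa = reliability_sensitivity_indices([0.8, -0.5, 0.1, -0.2])
```

Note that the published indices in Tab. 3.3 sum to one, as required by property (a) of Eq. 3.25.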
Besides, the net area under each sensitivity index curve is equal to zero, consistent with property (d) above.

Figure 3.1: The PDF of QoI in Example I: Ishigami test function.

Postprocessing the foregoing results according to Eq. 3.25, and assuming failure is associated with X > 10, the reliability sensitivity indices κ_{P_i} with respect to the distribution parameters of the inputs are shown in Tab. 3.3. The ranking of importance of the distribution parameters contributing to the failure probability is P_1 > P_2 > P_5 > P_6 > P_4 > P_3.

Figure 3.2: The new sensitivity index function with respect to distribution parameters of inputs, ζ_{P_i}(x), for Example I: Ishigami test function.

Table 3.3: Reliability sensitivity indices with respect to distribution parameters of inputs, κ_{P_i}, for Example I: Ishigami test function

Distribution parameter | Reliability sensitivity index κ_{P_i}
P_1 | 0.291
P_2 | 0.232
P_3 | 0.054
P_4 | 0.069
P_5 | 0.184
P_6 | 0.170

Table 3.4: Distribution parameters of random inputs for Example II: beam structure

Input variable                  | Distribution | α | β | θ (fixed) | ρ (fixed)
Linear spring k_1 (N/m)         | Beta         | 4 | 5 | 350       | 650
Rotational spring k_2 (N·m/rad) | Beta         | 4 | 5 | 400       | 600
Flexural stiffness EI (N·m²)    | Beta         | 4 | 5 | 80        | 186.67
Beam span L (m)                 | Beta         | 4 | 5 | 0.216     | 0.264

3.4.2 Example II: Beam structure

The beam structure is plotted in Fig. 3.3. The model characterizes the mid-span displacement X_mid of the beam supported by a linear spring and a rotational spring at each end, with a concentrated load acting at the middle of the beam.

Figure 3.3: Schematic of the physical setup for Example II: Random beam on random supports.

The random inputs include the linear spring, rotational spring, flexural stiffness, and beam span, which are denoted as k_1, k_2, EI, and L, respectively.
It can be shown from elementary mechanics of materials that the mid-span displacement X_mid for this beam is given by

    X_{mid} = \frac{F}{16EI} \left[ \frac{L^3}{3} - \frac{8EIk_2L^3 + k_1k_2L^6}{16EI\,(k_2 + k_1L^2)} + \frac{8EIk_1L^2 - k_1k_2L^3}{2k_1k_2 + 2k_1^2L^2} \right].        (3.27)

The four input variables, k_1, k_2, EI, and L, are mutually independent and follow Beta distributions. The vector of random distribution parameters P consists of the eight random variables α and β, the vectors of the two shape parameters of the Beta distributions, while the lower and upper bounds are fixed. The distribution parameters are listed in Tab. 3.4. The mean value of P is taken equal to the nominal values, and the coefficient of variation of each entry in P is assumed to be 5%. The distributions of P are described in Tab. 3.5.

Table 3.5: Distribution of statistical parameters for Example II: beam structure

P_i | Shape parameter | Distribution | μ_{P_i} | σ_{P_i} = 5% μ_{P_i}
P_1 | α_1 | Normal | 4 | 0.2
P_2 | β_1 | Normal | 5 | 0.25
P_3 | α_2 | Normal | 4 | 0.2
P_4 | β_2 | Normal | 5 | 0.25
P_5 | α_3 | Normal | 4 | 0.2
P_6 | β_3 | Normal | 5 | 0.25
P_7 | α_4 | Normal | 4 | 0.2
P_8 | β_4 | Normal | 5 | 0.25

The PDF of the mid-span displacement of the beam structure is shown in Fig. 3.4, and the comparison of the proposed sensitivity index function ζ_{P_i}(x) for the input distribution parameters is depicted in Fig. 3.5. The importance ranking of the distribution parameters varies along the axis of the QoI. The response PDF is overall most sensitive to P_3 and P_4, although they contribute in opposite directions.

Figure 3.4: The PDF of QoI in Example II: beam structure.

Figure 3.5: The new sensitivity index function with respect to distribution parameters of inputs, ζ_{P_i}(x), for Example II: beam structure.

Assuming failure is associated with X_mid > 2.45 cm, then based on Fig. 3.5 and according to Eq.
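The Beta-distributed inputs that feed the closed-form X_mid can be sampled as sketched below; this is an illustration only, using the shape parameters and bounds of Tab. 3.4 with fixed (non-epistemic) values, and the sample size and seed are arbitrary.

```python
import random

def scaled_beta(alpha, beta, lo, hi, rng=random):
    """Draw from a Beta(alpha, beta) distribution rescaled to [lo, hi]."""
    return lo + (hi - lo) * rng.betavariate(alpha, beta)

# Bounds (theta, rho) from Tab. 3.4; shape parameters alpha = 4, beta = 5
# for all four inputs.
BOUNDS = {"k1": (350.0, 650.0), "k2": (400.0, 600.0),
          "EI": (80.0, 186.67), "L": (0.216, 0.264)}

random.seed(3)
sample = {name: scaled_beta(4.0, 5.0, lo, hi) for name, (lo, hi) in BOUNDS.items()}

# Sanity check on the rescaling: the mean of Beta(4, 5) is 4/9, so
# E[k1] = 350 + 300 * 4/9 on the rescaled interval.
k1_draws = [scaled_beta(4.0, 5.0, 350.0, 650.0) for _ in range(20000)]
mean_k1 = sum(k1_draws) / len(k1_draws)
```

Each such `sample` would be substituted into Eq. 3.27 (together with the applied load F) to generate one realization of the mid-span displacement.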
3.25, the reliability sensitivity indices with respect to the distribution parameters of the inputs, κ_{P_i}, are computed. The result is shown in Tab. 3.6. The ranking of importance of the distribution parameters contributing to the reliability is P_4 > P_3 > P_8 > P_7 > P_6 > P_5 > P_2 > P_1.

Table 3.6: Reliability sensitivity indices with respect to distribution parameters of inputs, κ_{P_i}, for Example II: beam structure

    Distribution parameter    Reliability sensitivity index κ_{P_i}
    P_1                       0.014
    P_2                       0.017
    P_3                       0.210
    P_4                       0.257
    P_5                       0.067
    P_6                       0.082
    P_7                       0.159
    P_8                       0.195

3.4.3 Case study III: Reinforced concrete shear wall

It has been suggested that epistemic uncertainty is non-negligible in structural analysis Ellingwood and Kinali (2009); Feng and Li (2016); Feng et al. (2020); Wang et al. (2015); Allen and Maute (2005). To illustrate the two sensitivity indices proposed in this work, a reinforced concrete shear wall model is taken as an example. This model comes from an experimental study Thomsen IV and Wallace (2004). Fig. 3.6 depicts the geometry, dimensions and reinforcement of the shear wall.

Figure 3.6: Schematic of the physical setup for Example III: Reinforced concrete shear wall.

Table 3.7: Distribution parameters of random inputs for Example III: reinforced concrete shear wall

    Material   Input variable                   Distribution   α_SW   β_SW   Lower bound (fixed)   Upper bound (fixed)
    Concrete   Compressive strength f_c (Pa)    Beta           2      2      3.91×10^7             5.29×10^7
    Steel      Yield strength f_y (Pa)          Beta           2      2      3.40×10^8             4.60×10^8

There are two steps in the loading procedure. First, a constant axial load of 378 kN is applied on the top of the wall, followed by a cyclic lateral load achieved by controlling the displacement. The applied lateral drift consists of a train of triangular pulses of alternating signs. Additional details of the setup and its loading are described elsewhere Thomsen IV and Wallace (2004).
The purpose of the present analysis is to find the sensitivity of the energy dissipated through hysteresis in the structure with respect to the distribution parameters of the mechanical properties of concrete and steel. Some of the material properties are considered as random variables: the concrete elastic modulus E_c, the concrete tensile strength f_r, the concrete compressive strength f_c and the steel yield strength f_y. For concrete, E_c, f_r and f_c are of course correlated; for simplicity, they are regarded as fully correlated in this chapter. According to the code ACI 318-19 (2019), the relationships between these parameters are

\[
E_c = 57{,}000\sqrt{f_c}, \qquad f_r = 7.5\,\lambda\sqrt{f_c}, \tag{3.28}
\]

where the units are in psi. To calculate the concrete parameters, f_c is sampled from a Beta distribution first, and f_r and E_c are then generated according to Eq. 3.28. The steel strength is also modeled as a Beta random input. The material properties input to the shear wall structure are listed in Tab. 3.7.

In this shear wall hysteresis problem, the vector of random distribution parameters P consists of the four random variables in α_SW and β_SW, which are the vectors of the two shape parameters of the Beta distributions, while the lower and upper bounds are fixed. The mean value of P is taken equal to the nominal values, and the coefficient of variation of each entry in P is assumed to be 5%. The distributions of P are described in Tab. 3.8.

Table 3.8: Distribution of statistical parameters for Example III: reinforced concrete shear wall

    P_j    Shape parameter    Distribution    Mean μ_{P_j}    Standard deviation (σ_{P_j} = 5% μ_{P_j})
    P_1    α_SW,1             Normal          2               0.1
    P_2    β_SW,1             Normal          2               0.1
    P_3    α_SW,2             Normal          2               0.1
    P_4    β_SW,2             Normal          2               0.1

We implement in Abaqus Hibbitt (2001) a model that follows the theoretical development in Feng et al. (2018), which features a multi-dimensional softened plasticity damage model.
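The fully correlated treatment reduces sampling to one draw of f_c per realization, with E_c and f_r derived deterministically from Eq. 3.28. A minimal sketch (the function name and the default λ = 1.0 for normal-weight concrete are my assumptions, not from this work):

```python
import numpy as np

def concrete_properties(fc_psi, lam=1.0):
    """ACI 318-19 empirical relations of Eq. (3.28), with units in psi:
    E_c = 57,000 sqrt(f_c) and f_r = 7.5 * lambda * sqrt(f_c).
    `lam` is the lightweight-concrete factor (1.0 assumed here)."""
    fc = np.asarray(fc_psi, dtype=float)
    ec = 57000.0 * np.sqrt(fc)      # elastic modulus, psi
    fr = 7.5 * lam * np.sqrt(fc)    # modulus of rupture (tensile strength), psi
    return ec, fr
```

Because both outputs are deterministic functions of the same f_c sample, the three concrete properties are fully correlated by construction.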
The steel material follows a Menegotto-Pinto model that includes strain hardening, Bauschinger effects and tension stiffening, and a multi-layer shell element is used for the shear wall Feng et al. (2018). Additional material properties include the concrete Poisson's ratio, the steel elastic modulus and the steel hardening ratio, which are deterministic inputs.

The PDF of the energy dissipation of the shear wall is plotted in Fig. 3.7. In Fig. 3.8, it is found that, in the hysteresis analysis of the reinforced concrete shear wall, the sensitivity index functions ζ_{P_i}(x) have zero sensitivity around the value of the QoI where the peak of the response PDF is located. A similar observation for the beam structure can be made in Fig. 3.5. To analyze this more deeply, Fig. 3.9 shows the response PDF of the shear wall problem given a perturbation on P_1. The zero-sensitivity point essentially represents the unchanged point of the response PDF. Thus, the results in Figs. 3.7 and 3.4 indicate that the point of zero change in the PDF is close to its peak in these two applications.

As a postprocessing of Fig. 3.8, assuming failure is associated with X ≥ 37.5 kN·m, according to Eq. 3.25, the reliability sensitivity indices with respect to the distribution parameters of the inputs, κ_{P_i}, are computed. The result is shown in Tab. 3.9. The ranking of importance of the distribution parameters contributing to the reliability is P_2 > P_3 > P_1 > P_4.

Figure 3.7: The PDF of QoI in Example III: reinforced concrete shear wall.

Figure 3.8: The new sensitivity index function with respect to distribution parameters of inputs, ζ_{P_i}(x), for Example III: reinforced concrete shear wall.

Figure 3.9: The change in response PDF given a 10% perturbation on P_1 in Example III: reinforced concrete shear wall.
Table 3.9: Reliability sensitivity indices with respect to distribution parameters of inputs, κ_{P_i}, for Example III: reinforced concrete shear wall

    Distribution parameter    Reliability sensitivity index κ_{P_i}
    P_1                       0.228
    P_2                       0.385
    P_3                       0.306
    P_4                       0.081

3.5 Concluding Remarks

Practical engineering problems frequently have limited data with which to perform the sensitivity analysis and reliability assessment associated with the design and safety of the system. The two sensitivity measures presented in this study are tailored to guide sensitivity analysis that accounts for modeling error in the probability models. The new functional global sensitivity index provides importance rankings at different values of the output, and the sign of the sensitivity index function indicates the direction of change in the output. Moreover, the reliability sensitivity index can be evaluated directly from the sensitivity index function, as a post-processing step without extra computational cost. The whole procedure is very efficient, in that the computational cost lies only in constructing the EPCE. This framework provides a unified paradigm, both informative and efficient, for reliability assessment and global sensitivity analysis.

Chapter 4
Bayesian Model Calibration

4.1 Introduction

Uncertainties exist in many areas of science and engineering, and the manner of dealing with them for purposes of prediction has been of widespread interest Ghanem and Spanos (2003); Soize (2017). All physics models, ranging from experiments to computer codes, are approximations of their target phenomena. The input parameters (e.g., material properties, load characteristics) of a physics model can carry the intrinsic and irreducible randomness of a phenomenon, which is characterized as "aleatory uncertainty" Der Kiureghian and Ditlevsen (2009).
These uncertain input parameters are typically modeled as random variables, called basic random variables because they are typically observable, with specified models for their probability distributions based on the collected data. After propagating through the physics model, the input uncertainties are transformed into uncertainties in the QoI for purposes of prediction and decision-making. On the other hand, observed behavior of the QoI reflects the randomness in the inputs and thus can be used to inversely calibrate the assumed probabilistic models (a.k.a. the prior). This process is called uncertainty quantification (UQ), which aims to manage the interplay between data, models and decisions Ghanem et al. (2017).

There exist two stark realities in these procedures. First, no physics model can precisely represent, but only approximate, the corresponding true phenomenon. Second, estimating statistical parameter values for the probability distributions of the inputs is challenging, both numerically and conceptually, due to incomplete data and measurement errors. The former induces the physics model error (typically "model error" for short) and the latter generates the error associated with imprecise probability models. Both sources of error are reducible if one can acquire more knowledge or obtain a more elaborate physics model; they are categorized as "epistemic uncertainty" and are investigated in this work.

The model error (also known as model inadequacy), which refers to the discrepancy between prediction and reality, can result from the uncertain effect of simplifications and inaccurate model representations of the modeled phenomenon. Thus, model error is inevitably random and is usually estimated through model validation, which is the statistical assessment of the model, including representations of uncertainties, against observations of reality Chaloner and Verdinelli (1995).
It is common to represent the model error with a completely statistical formulation and calibrate it with observations. Kennedy and O'Hagan modeled the model inadequacy with an additive Gaussian error term Kennedy and O'Hagan (2001), which has been widely adopted in many areas and applications. Recently, researchers have focused on developing physically meaningful priors by embedding the model error in the model structure. For example, Soize applied random matrix theory to build the prior distribution of stochastic models of the operators of structural mechanics, tailored to second-order dynamical systems Soize (2013). Morrison introduced a stochastic inadequacy operator and added it in the submodel equation of a chemical system Morrison et al. (2018). Sargsyan embedded model error, represented by polynomial chaos, in constitutive laws and phenomenological parameterizations instead of using an additive error term Sargsyan et al. (2019). Though the likelihood function becomes more complex, the embedded model error can be introduced wherever critical assumptions and approximations are made within the model structure, as a potential alternate phenomenology Sargsyan et al. (2015). In addition, this construction respects physical constraints and preserves the completeness of the model structure.

Surrogate models (also known as metamodels) have gained increasing popularity for model validation and calibration in the past few decades, particularly in complex engineering and scientific applications Marzouk et al. (2007); Willcox et al. (2021); Raissi et al. (2019); Huan and Marzouk (2013); Shao et al. (2017). Such models are trained from the physics models and the probabilistic models associated with the random model inputs, using a comparatively small number of model runs. Once the surrogates are built, they substitute for the physics models in the process of uncertainty propagation as efficient alternatives.
In fact, the task of quantifying the model error in the physics model can be directly transformed into the task of investigating the model error in the surrogate model. PCE is a well-known surrogate model, valued for its robustness and computational efficiency Ghanem and Spanos (1990); Ghanem (1999b); Marzouk and Najm (2009); Sarkar and Ghanem (2002). In PCE, the chaos polynomials are composed of different combinations of the "germs", which are standard Gaussian variables in one-to-one correspondence with the random inputs. The coefficients of the chaos expansion are trained from physics model runs at specific input values that depend on the probabilistic models of the inputs. Clearly, both the model error of the physics model and the error with respect to imprecise probabilities affect only these coefficients of the PCE surrogate model. Such coefficients can be viewed as the parameters of this surrogate model, and a natural idea is thus to embed the model error in these coefficients and calibrate them with observations; this has been studied in a series of works in which uniform priors were assumed for the PCE coefficients Ghanem and Doostan (2006); Ghanem et al. (2008); Arnst et al. (2010). In this article, a more coherent and physically insightful manner of constructing the prior of the PCE coefficients is presented.

A typical manner of dealing with epistemic uncertainty associated with incomplete probability information has been to perform nested iterations, with the aleatory analysis on the inner loop and the epistemic analysis on the outer loop Hofer et al. (2002). In this case, the two types of uncertainty can be separated and easily traced. In particular, each realization of the probabilistic models generates a response PDF based only on the basic random variables. Thus, the ensemble of response PDFs evaluated at a number of realizations of the probabilistic models can be used to visualize the combined uncertainty in the response and further interpret the results using various statistical metrics.
This paradigm, while conceptually simple, is computationally prohibitive. Much research has focused on modeling and propagating the uncertainty model for input parameters with sparse data, including evidence theory Helton et al. (2007); Yin et al. (2018), fuzzy set theory Valdebenito et al. (2013); Wang et al. (2018) and interval analysis Eldred et al. (2011).

In previous publications, the authors presented a novel stochastic polynomial chaos representation, namely the extended polynomial chaos expansion (EPCE), that tackles the epistemic uncertainty with respect to the choice of probabilistic model for input parameters Wang and Ghanem (2021, 2019). It is emphasized that epistemic uncertainty, considered in a broader context in this work, includes both the errors associated with limited input data and the imprecise physics models. But since we argue that the physics model error is embedded in the coefficients of the PCE surrogate model and does not change the model structure, it is still reasonable to keep the name "epistemic random variable", as used in the numerical procedure, to describe the error caused by imprecise probabilities. In EPCE, computational efficiency is enhanced through a uniform treatment of simultaneous aleatory and epistemic random variables, while the conceptually appealing segregation of uncertainties for purposes of visualization and interpretation is preserved. Yet the Gaussian model of the statistical parameters, equivalent to a truncated first-order chaos expansion, could induce error which propagates through the hierarchical models and finally affects the prediction. In this work, we account for the error from modeling the statistical parameters and represent it within the chaos expansion of the QoI. This approach preserves the capability to take sensitivities of the response PDF with respect to the epistemic random variable, and the computational effort does not increase.

The article is structured as follows.
Previous chapters have provided a detailed introduction to the PCE and EPCE surrogate models. In Section 4.2, we introduce the construction of a PCE with random coefficients in which the uncertainty reflects the error associated with incomplete data. In Section 4.3, we describe the procedure of updating, using MCMC, the random PCE coefficients built in Section 4.2, together with a brief overview of the standard Bayesian approach to infer the physical model parameters, implemented with an additive Gaussian model error. Stochastic models of the QoI are then introduced for validation metrics in Section 4.4. Finally, the proposed procedure is illustrated by an analytical and a numerical example in Section 4.5, and we present conclusions and some closing comments in Section 4.6.

4.2 PCE with Random Coefficients by EPCE

The PCE and EPCE have been introduced in the foregoing chapters. We can separate the dependence on ξ from the dependence on ρ, resulting in the following representation of the EPCE,

\[
X(\boldsymbol{\eta}) = X(\boldsymbol{\xi},\rho) = \sum_{\boldsymbol{\alpha}\in\mathbb{N}^d,\,\beta\in\mathbb{N},\; |\boldsymbol{\alpha}|+\beta\le p} X_{\boldsymbol{\alpha}\beta}\,\psi_{\boldsymbol{\alpha}}(\boldsymbol{\xi})\,\psi_{\beta}(\rho), \qquad \boldsymbol{\xi}\in\mathbb{R}^d,\ \rho\in\mathbb{R}, \tag{4.1}
\]

with the subscript αβ being a (d+1)-dimensional multi-index formed as the concatenation of α and β. This results in a PCE depending on ρ, expressed as

\[
X(\boldsymbol{\xi},\rho) = \sum_{|\boldsymbol{\alpha}|\le p} X_{\boldsymbol{\alpha}}(\rho)\,\psi_{\boldsymbol{\alpha}}(\boldsymbol{\xi}), \tag{4.2}
\]

where the PCE coefficients X_α(ρ) are random variables depending on ρ. Clearly, X_α(ρ) ∈ R^{N_pce} and X_γ ∈ R^{N_epce}. In this work, we use the initial data set to construct the EPCE so as to build X_α(ρ). It should be noted that, in the vector X_α(ρ), not all PCE coefficients are random, but only those augmented by terms in ρ. Therefore, the EPCE in Eq.
(4.2) can be separated into the ρ-dependent and ρ-independent parts as

\[
X(\boldsymbol{\xi},\rho) = \sum_{|\boldsymbol{\tau}|\le(p-1)} X_{\boldsymbol{\tau}}(\rho)\,\psi_{\boldsymbol{\tau}}(\boldsymbol{\xi}) + \sum_{|\boldsymbol{\omega}|=p} X_{\boldsymbol{\omega}}\,\psi_{\boldsymbol{\omega}}(\boldsymbol{\xi}), \tag{4.3}
\]

where, given a PCE order p, τ is the multi-index for orders less than p, and ω is the multi-index for order p; X_τ(ρ) are the random PCE coefficients, which all depend on the same random variable ρ, and X_ω are the deterministic PCE coefficients, which are independent of ρ but are also obtained from the EPCE. In fact, only the highest-order PCE coefficients are independent of ρ. Thus, the numbers of elements in X_τ(ρ) and X_ω, denoted by N_r and N_d, respectively, are

\[
N_r = \frac{(d+p-1)!}{d!\,(p-1)!}, \qquad N_d = \frac{(d+p-1)!}{(d-1)!\,p!}. \tag{4.4}
\]

Obviously, N_r + N_d = N_pce.

4.2.1 Representation of error in EPCE

The sources of error in the EPCE could come from the probabilistic models of P and of K, and from the finite order of the EPCE. The latter two are also sources of error in PCE and are not studied in this work. We focus on the error from the model of P. This error could induce additional uncertainty in the random vector X_τ(ρ) and thus needs to be accounted for. We denote the true EPCE of the QoI as X̃(ξ, ρ) and the error of the EPCE as ε_epce, which results in

\[
\tilde{X}(\boldsymbol{\xi},\rho) = X(\boldsymbol{\xi},\rho) + \varepsilon_{epce}. \tag{4.5}
\]

To represent the error ε_epce, our idea is to add an error term ε_ζ(ρ) ∈ R^{N_r−1}, where 1 ≤ |ζ| ≤ (p−1), to each PCE coefficient in X_τ(ρ) except X_0(ρ). In this fashion, the random PCE coefficients with EPCE error, denoted by Z_τ(ρ) ∈ R^{N_r}, are expressed as

\[
Z_{\boldsymbol{\tau}}(\rho) = X_{\boldsymbol{\tau}}(\rho) + \varepsilon_{\boldsymbol{\zeta}}(\rho), \tag{4.6}
\]

where ε_ζ(ρ) are the error terms, which include two levels of uncertainty, from ρ and from the EPCE error. We use a Gaussian error ε_ζ(ρ) ∼ N(0, σ_ζ²(ρ)) where σ_ζ(ρ) = 1% · X_ζ(ρ). Thus, Z_τ(ρ) is a random vector with two levels of uncertainty.
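The coefficient bookkeeping of Eq. (4.4) and the perturbation of Eq. (4.6) are both mechanical and can be sketched in a few lines. The snippet below is a hypothetical illustration (function names and the `rng` seeding are mine, not from this work); for the beam example later (d = 4, p = 2) it reproduces N_r = 5:

```python
import numpy as np
from math import factorial

def pce_counts(d, p):
    """Eq. (4.4): number of coefficients of total order < p (rho-dependent)
    and of order exactly p (deterministic), for d aleatory inputs."""
    n_r = factorial(d + p - 1) // (factorial(d) * factorial(p - 1))
    n_d = factorial(d + p - 1) // (factorial(d - 1) * factorial(p))
    return n_r, n_d, n_r + n_d   # the sum is N_pce

def perturb_coefficients(x_tau, rel_sigma=0.01, rng=None):
    """Eq. (4.6): Z_tau = X_tau + eps with eps ~ N(0, (1% X)^2);
    the order-0 coefficient X_0 is left unperturbed."""
    rng = np.random.default_rng(rng)
    x_tau = np.asarray(x_tau, dtype=float)
    sigma = rel_sigma * np.abs(x_tau)
    sigma[0] = 0.0                # X_0 carries the mean and is kept intact
    return x_tau + sigma * rng.standard_normal(x_tau.shape)
```

Keeping `sigma[0] = 0` enforces the claim below that the EPCE error leaves the mean of the QoI unchanged while adding scatter to the higher-order statistics.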
From the physical perspective, Z_τ(ρ) accounts for the errors from the physics model, the probabilistic model of K and the probabilistic model of P. Substituting Eq. (4.6) into Eq. (4.3) to represent Eq. (4.5) results in

\[
\tilde{X}(\boldsymbol{\xi},\rho) = \sum_{|\boldsymbol{\tau}|\le(p-1)} Z_{\boldsymbol{\tau}}(\rho)\,\psi_{\boldsymbol{\tau}}(\boldsymbol{\xi}) + \sum_{|\boldsymbol{\omega}|=p} X_{\boldsymbol{\omega}}\,\psi_{\boldsymbol{\omega}}(\boldsymbol{\xi}). \tag{4.7}
\]

In this case, we claim that the error in the EPCE does not affect the mean of the QoI, but only the higher-order statistics. The error terms generate scatter around the "mode", which is the relationship obtained by the EPCE. In other words, the mode of the prior comes from the EPCE, and we add an additive Gaussian error which induces a small scatter around this mode. In this manner, the prior constructed is more physically meaningful than in the literature (e.g., a uniform assumption on X_α). When observations are acquired, Eq. (4.7) is the model that we aim to update, in which Z_τ(ρ) ∈ R^{N_r} are the parameters to infer. The ρ-independent part participates in the inverse analysis, but the X_ω remain deterministic during the analysis. To represent the error term explicitly, Eq. (4.7) can be equivalently written as

\[
\tilde{X}(\boldsymbol{\xi},\rho) = \sum_{|\boldsymbol{\tau}|\le(p-1)} X_{\boldsymbol{\tau}}(\rho)\,\psi_{\boldsymbol{\tau}}(\boldsymbol{\xi}) + \sum_{|\boldsymbol{\omega}|=p} X_{\boldsymbol{\omega}}\,\psi_{\boldsymbol{\omega}}(\boldsymbol{\xi}) + \underbrace{\sum_{1\le|\boldsymbol{\zeta}|\le(p-1)} \varepsilon_{\boldsymbol{\zeta}}(\rho)\,\psi_{\boldsymbol{\zeta}}(\boldsymbol{\xi})}_{\text{EPCE error term}}. \tag{4.8}
\]

Though this representation is not used in this work, it is convenient for taking directional derivatives with respect to ρ when investigations related to sensitivities are of interest.

4.3 Bayesian Inference

Let us assume that N_D observations D = (D^(1), D^(2), ..., D^(N_D)) are acquired. The task is to calibrate the prediction model to make more accurate predictions. The "model" can be either the physics model (e.g., a finite element code) or the surrogate model (e.g., a PCE). In this section, we perform Bayesian inference for both cases.

4.3.1 Standard Bayesian inference

The standard Bayesian inference is to update the physical parameters K in the physics model.
In this case, the Bayes’s theorem is written as, f X (K K KjD D D)=C s L s (D D DjK K K) f X (K K K); (4.9) where, f X (K K K) is the prior distribution ofK K K; f X (K K KjD D D) represents the posterior distribution ofK K K; L s (D D DjK K K) is the likelihood function which can be seen as a function ofK K K given observationsD D D; C s is a normalizing constant. To account for the model error, we use an additive Gaussian representation which is expressed as, ˜ X(K K K)=X(K K K)+N(0;s 2 M ); (4.10) where ˜ X(K K K) andX(K K K) denote the values of QoI evaluated atK K K by the true physics and the physics model, respectively. The discrepancy between ˜ X(K K K) andX(K K K) characterizes the model error which is represented by a Gaussian distribution with zero mean and standard deviation ofs M . Thus, the likelihhod function can be represented by, L s (D D DjK K K)= N D Õ m=1 1 s M p 2p exp 0 @ 1 2 D (m) X(K K K) s M ! 2 1 A ; m= 1;;N D ; (4.11) 4.3.2 Bayesian inference for random PCE coefficients The Bayesian inference can be also applied to calibrate the PCE surrogate model in Eq. (4.7) . In this case, the parameters to infer are the random PCE coefficients. The samples of prior are given byZ t t t (r) as constructed in Eq. (4.6). Once the prior is built, we do not considerr any more. Thus, the posterior of these random PCE coefficients denoted by Z t t t jD D D does not depend on r. To unify 62 the notation in the Bayesian analysis, let us denote the random PCE coefficients to be updated by Z t t t 2R N r . 
The Bayes’s theorem applied onZ t t t is represented by, f X (Z t t t jD D D)=CL D D Dj(Z t t t ;X w w w ) f X (Z t t t ); (4.12) where, f X (Z t t t ) is the prior distribution which is a joint distribution ofZ t t t ; f X (Z t t t jD D D) represents the posterior distribution which is also a joint distribution ofZ t t t ; L D D Dj(Z t t t ;X w w w ) is the likelihood function which can be seen as a function ofZ t t t given observations D D D; C is a normalizing constant. 4.3.2.1 Prior The samples of prior can be directly obtained by sampling r and mapping them into samples of Z t t t (r). Then the joint prior distribution f X (Z t t t ) can be estimated by the multivariate kernel density estimation (KDE) using these prior samples, which results in, f X (Z t t t )= 1 N r H H H 1=2 1 N r å j=1 K H H H 1=2 1 (Z t t t Z t t t (r (j) )) (4.13) where K is the kernel function which is a symmetric multivariate density; H H H 1 2R N r R N r is the bandwidth matrix which is symmetrix and positive definite;N r is the number of samples ofZ t t t (r) (orr) used in estimating the multivariate KDE. 4.3.2.2 Likelihood The likelihood function is given by, L(D D DjZ t t t )= N D Õ m=1 f X D (m) j(Z t t t ;X w w w ) ; (4.14) 63 where f X j(Z t t t ;X w w w ) denotes the response PDF estimated by PCE and can be viewed as a function ofZ t t t . Then, f X D (m) j(Z t t t ;X w w w ) is the value of the response PDF evaluated at an observationD (m) . In other words, the likelihood function and response PDF are both stochastic models depending on Z t t t . 4.3.2.3 Posterior According to the Bayes’ thereom in Eq. (4.12), combined with Eqs. (4.13) and (4.14), the posterior is expressed as, f X (Z t t t jD D D)= C N r H H H 1=2 1 N D Õ m=1 f X (D (m) j Z t t t ;X w w w ) N r å j=1 K H H H 1=2 1 Z t t t Z t t t (r (j) ) ; (4.15) 4.3.2.4 Metropolis-Hastings Algorithm We use the Metropolis-Hastings algorithm to sample from the posterior. The pseudocode is shown in Algorithm 1. 
It is worth mentioning that the model evaluations in the associated Markov chains are performed by the PCE surrogate model built in Section 4.2, and the Bayesian procedure thus requires little numerical effort.

After posterior samples are collected, the multivariate KDE is used to estimate the joint posterior PDF of Z_τ|D, which results in

\[
f_X(Z_{\boldsymbol{\tau}}\mid \boldsymbol{D}) = \frac{1}{N_a\, |\boldsymbol{H}_2|^{1/2}} \sum_{j=1}^{N_a} K\!\Big(\boldsymbol{H}_2^{-1/2}\big(Z_{\boldsymbol{\tau}} - Z^{(j)}_{\boldsymbol{\tau}\mid\boldsymbol{D}}\big)\Big), \tag{4.16}
\]

where Z^(j)_τ|D is the j-th posterior sample; K is the kernel function, a symmetric multivariate density; H_2 ∈ R^{N_r × N_r} is the bandwidth matrix, which is symmetric and positive definite; and N_a is the number of accepted samples of the posterior Z_τ|D used in the multivariate KDE.

Algorithm 1: Metropolis-Hastings posterior sampler for PCE coefficients

    function TARGET(Z_τ^(k)):                        # input: a sample of random PCE coefficients
        L(D | Z_τ^(k)) ← ∏_{m=1}^{N_D} f_X(D^(m) | (Z_τ^(k), X_ω))
        π(Z_τ^(k)) ← L(D | Z_τ^(k)) · f_X(Z_τ^(k))
        return π(Z_τ^(k))                            # output: a scalar
    end function

    function PROPOSAL(Z_τ^(k)):                      # proposal distribution
        Z_τ^(k+1)* ← MultivariateNormal(Z_τ^(k), Σ)  # Σ is the covariance matrix
        return Z_τ^(k+1)*
    end function

    Initialization: choose any sample Z_τ^(0) from the prior
    for k ← 1 to N_s do                              # N_s is the number of MCMC steps
        Z_τ^(k+1)* ← PROPOSAL(Z_τ^(k))
        R^(k) ← min{ TARGET(Z_τ^(k+1)*) / TARGET(Z_τ^(k)), 1 }
        u^(k) ← Uniform(0, 1)
        if R^(k) ≥ u^(k) then
            accept: Z_τ^(k+1) ← Z_τ^(k+1)*
        else
            reject: Z_τ^(k+1) ← Z_τ^(k)
        end if
    end for
    return accepted samples                          # collect posterior samples
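A runnable random-walk variant of Algorithm 1 can be written compactly in log space (a sketch, not the dissertation's code; the standard-normal target in the usage line is a placeholder for the actual log prior plus log likelihood over the PCE coefficients):

```python
import numpy as np

def metropolis_hastings(log_target, z0, cov, n_steps, rng=None):
    """Random-walk Metropolis-Hastings in the spirit of Algorithm 1:
    Gaussian proposal N(z, cov); accept with probability min(1, pi'/pi).
    Works with any unnormalized log-target."""
    rng = np.random.default_rng(rng)
    z = np.asarray(z0, dtype=float)
    lp = log_target(z)
    chain = np.empty((n_steps, z.size))
    for k in range(n_steps):
        z_prop = rng.multivariate_normal(z, cov)     # PROPOSAL step
        lp_prop = log_target(z_prop)
        if np.log(rng.uniform()) <= lp_prop - lp:    # accept/reject in log space
            z, lp = z_prop, lp_prop
        chain[k] = z                                 # store current state
    return chain

# Usage sketch on a 2-D standard-normal target:
chain = metropolis_hastings(lambda z: -0.5 * float(z @ z),
                            np.zeros(2), 0.5 * np.eye(2), 2000, rng=1)
```

Storing the current state at every step (rather than only accepted proposals) gives correctly weighted posterior samples; rejected steps simply repeat the previous state.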
4.4 Stochastic Models for PDFs

When we use the posterior PCE coefficients Z_τ|D to build the PCE, the updated PCE is a function of the random vector Z_τ|D, expressed as

\[
\tilde{X}(\boldsymbol{\xi};\, Z_{\boldsymbol{\tau}}\mid\boldsymbol{D}) = \sum_{|\boldsymbol{\tau}|\le(p-1)} (Z_{\boldsymbol{\tau}}\mid\boldsymbol{D})\,\psi_{\boldsymbol{\tau}}(\boldsymbol{\xi}) + \sum_{|\boldsymbol{\omega}|=p} X_{\boldsymbol{\omega}}\,\psi_{\boldsymbol{\omega}}(\boldsymbol{\xi}). \tag{4.17}
\]

A stochastic model for the PDF of X is used to characterize confidence in estimates of failure probability. The PDFs of the QoI computed by PCEs with the prior and posterior random PCE coefficients Z_τ(ρ) and Z_τ|D are random and can be expressed as

\[
f_X\big(x;\, Z_{\boldsymbol{\tau}}(\rho)\big) = \frac{1}{N_\rho h_1} \sum_{j=1}^{N_\rho} K\!\left(\frac{x - \tilde{X}\big(\boldsymbol{\xi}^{(j)};\, Z_{\boldsymbol{\tau}}(\rho)\big)}{h_1}\right), \qquad
f_X\big(x;\, Z_{\boldsymbol{\tau}}\mid\boldsymbol{D}\big) = \frac{1}{N_a h_2} \sum_{j=1}^{N_a} K\!\left(\frac{x - \tilde{X}\big(\boldsymbol{\xi}^{(j)};\, Z_{\boldsymbol{\tau}}\mid\boldsymbol{D}\big)}{h_2}\right), \tag{4.18}
\]

where a Gaussian kernel is used for K, with the bandwidths h_1 and h_2 determined following Silverman's rule Silverman (1986) as h_1 = (4σ_1^5 / 3N_ρ)^{1/5} and h_2 = (4σ_2^5 / 3N_a)^{1/5}, where σ_1 and σ_2 are the standard deviations estimated from the N_ρ and N_a samples of the QoI, respectively. The samples used in Eq. (4.18) are evaluated by the PCE surrogate model before and after the update, respectively, given by

\[
\tilde{X}\big(\boldsymbol{\xi}^{(j)};\, Z_{\boldsymbol{\tau}}(\rho)\big) = \sum_{|\boldsymbol{\tau}|\le(p-1)} Z_{\boldsymbol{\tau}}(\rho)\,\psi_{\boldsymbol{\tau}}(\boldsymbol{\xi}^{(j)}) + \sum_{|\boldsymbol{\omega}|=p} X_{\boldsymbol{\omega}}\,\psi_{\boldsymbol{\omega}}(\boldsymbol{\xi}^{(j)}), \qquad
\tilde{X}\big(\boldsymbol{\xi}^{(j)};\, Z_{\boldsymbol{\tau}}\mid\boldsymbol{D}\big) = \sum_{|\boldsymbol{\tau}|\le(p-1)} (Z_{\boldsymbol{\tau}}\mid\boldsymbol{D})\,\psi_{\boldsymbol{\tau}}(\boldsymbol{\xi}^{(j)}) + \sum_{|\boldsymbol{\omega}|=p} X_{\boldsymbol{\omega}}\,\psi_{\boldsymbol{\omega}}(\boldsymbol{\xi}^{(j)}), \tag{4.19}
\]

where ψ_τ(ξ^(j)) and ψ_ω(ξ^(j)) indicate that the polynomials ψ_τ and ψ_ω are evaluated at the sample ξ^(j).

One important application of the aforementioned ideas is to the characterization of failure probabilities, themselves, as random variables. In many applications, the failure probability P_f is defined as the probability of reaching or exceeding a critical threshold and is of great significance.
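The scalar KDE of Eq. (4.18) with Silverman's bandwidth, and the threshold-exceedance postprocessing it feeds, can be sketched as follows (a hypothetical illustration with my own function names, not code from this work):

```python
import numpy as np

def silverman_bandwidth(samples):
    """Silverman's rule as used for Eq. (4.18): h = (4 s^5 / (3 n))^(1/5)."""
    samples = np.asarray(samples, dtype=float)
    return (4.0 * samples.std(ddof=1) ** 5 / (3.0 * samples.size)) ** 0.2

def kde_pdf(x, samples, h=None):
    """Gaussian-kernel estimate of the response PDF at points x."""
    samples = np.asarray(samples, dtype=float)
    h = silverman_bandwidth(samples) if h is None else h
    x = np.atleast_1d(np.asarray(x, dtype=float))
    u = (x[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (samples.size * h * np.sqrt(2.0 * np.pi))

def failure_probability(qoi_samples, x_c):
    """Monte Carlo counterpart of the threshold integral: P_f = P[X >= X_c]."""
    return float(np.mean(np.asarray(qoi_samples, dtype=float) >= x_c))
```

Evaluating `failure_probability` once per coefficient sample yields the family of P_f values whose own PDF is then estimated with the same kernel machinery.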
This distribution is typically predicated on pre-specified probabilistic models for the input parameters, and thus lends itself to the present analysis. To simplify the presentation, and without loss of generality, we assume a scalar description of the limit state in terms of a critical threshold for the QoI, denoted by X_c. The prior and posterior failure probabilities, denoted by P_f1 and P_f2, respectively, are then given by the following integrals,

\[
P_{f1}\big(Z_{\boldsymbol{\tau}}(\rho)\big) = \int_{x\ge X_c} f_X\big(x;\, Z_{\boldsymbol{\tau}}(\rho)\big)\,dx, \qquad
P_{f2}\big(Z_{\boldsymbol{\tau}}\mid\boldsymbol{D}\big) = \int_{x\ge X_c} f_X\big(x;\, Z_{\boldsymbol{\tau}}\mid\boldsymbol{D}\big)\,dx, \tag{4.20}
\]

where we have explicitly expressed the dependence of P_f1 and P_f2 on the prior and posterior random PCE coefficients Z_τ(ρ) and Z_τ|D, respectively. Then, the PDFs of P_f1 and P_f2 computed by KDE are expressed as

\[
f_{P_{f1}}(x) = \frac{1}{N_\rho h_{f1}} \sum_{j=1}^{N_\rho} K\!\left(\frac{x - P_{f1}\big(Z_{\boldsymbol{\tau}}(\rho^{(j)})\big)}{h_{f1}}\right), \qquad
f_{P_{f2}}(x) = \frac{1}{N_a h_{f2}} \sum_{j=1}^{N_a} K\!\left(\frac{x - P_{f2}\big(Z^{(j)}_{\boldsymbol{\tau}\mid\boldsymbol{D}}\big)}{h_{f2}}\right), \tag{4.21}
\]

where f_{P_f1}(x) and f_{P_f2}(x) are the prior and posterior PDFs of the failure probabilities P_f1 and P_f2. The Gaussian kernel is used for K, with the bandwidths h_f1 and h_f2 determined following Silverman's rule as h_f1 = (4σ_f1^5 / 3N_ρ)^{1/5} and h_f2 = (4σ_f2^5 / 3N_a)^{1/5}, where σ_f1 and σ_f2 are the standard deviations estimated from the N_ρ and N_a failure probability samples, respectively.

4.5 Case Studies

In this section, two examples are investigated to demonstrate the proposed approach. Example I is a beam structure for which a closed-form expression of the QoI is known. Example II is a reinforced concrete shear wall on which a hysteresis analysis is performed using finite elements.

4.5.1 Example I: Beam structure

The beam structure is shown in Fig. 4.1.

Figure 4.1: Schematic of the physical setup for Example I: Random beam on random supports.

The model characterizes the mid-span displacement X_mid
of the beam, which is fixed by a linear spring and a rotational spring at each end, with a concentrated load F acting at the middle of the beam. The random inputs include the linear spring, the rotational spring, the flexural stiffness and the beam span, denoted by k_1, k_2, EI and L, respectively. It can be shown from elementary mechanics of materials that the mid-span displacement X_mid for this beam is given by

\[
X_{mid} = \frac{F}{16EI}\left[\frac{L^3}{3} - \frac{8EIk_2L^3 + k_1k_2L^6}{16EI\,(k_2 + k_1L^2)} + \frac{8EIk_1L^2 - k_1k_2L^3}{2k_1k_2 + 2k_1^2L^2}\right]. \tag{4.22}
\]

The four input variables, k_1, k_2, EI and L, are mutually independent and follow Beta distributions. The vector of random statistical parameters P consists of the eight random variables in α and β, which are the vectors of the two shape parameters of the Beta distributions, while the lower and upper bounds are fixed. The statistical parameters are listed in Tab. 4.1. The mean value of P is taken equal to the nominal values, and the coefficient of variation of each entry in P is assumed to be 5%.

Table 4.1: Statistical parameters of random inputs for Example I: beam structure

    Input variable                    Distribution   α   β   Lower bound q (fixed)   Upper bound r (fixed)
    Linear spring k_1 (N*m)           Beta           4   5   350                     650
    Rotational spring k_2 (N*m/rad)   Beta           4   5   400                     600
    Flexural stiffness EI (N/m*m)     Beta           4   5   80                      186.67
    Beam span L (m)                   Beta           4   5   0.216                   0.264

From the EPCE, the mean of the predicted response PDF is 2.09 cm. We assume 10 artificial observations generated from a uniform distribution with bounds (2.4, 2.6), which gives D = {2.42, 2.43, 2.58, 2.55, 2.41, 2.51, 2.54, 2.50, 2.54, 2.43} cm. A second-order EPCE was found to be sufficiently converged in the response PDF to carry out the foregoing studies. Thus, in the stochastic PCE constructed in Section 4.2, only the 0th-order and 1st-order PCE coefficients are random (N_r = 5). The results of Bayesian inference on the coefficients of the PCE surrogate model are plotted. Fig.
4.2 shows the statistics of the random PCE coefficients built directly by EPCE according to Eq. (4.2), and the statistics of the joint prior of the random PCE coefficients that includes the error terms according to Eq. (4.13). It indicates that the terms representing the error in EPCE generate scatter around the "mode", which is the deterministic relationship obtained by EPCE. Fig. 4.3 depicts the statistics of the joint posterior of the random PCE coefficients, including the samples (lower left), marginal PDFs (diagonal) and joint PDFs (upper right), according to Eq. (4.16). The posterior samples have larger scatter compared to the prior. Fig. 4.4 exhibits the family of predicted PDFs of the QoI using the prior and posterior PCE models in Eq. (4.18). It is found that the posterior PDF family is thinner than the prior family, which indicates that the posterior predictions have smaller scatter than the prior predictions.

Assuming failure is associated with $X_{mid} > 2.45$ cm, the distribution of the probability of failure $P_f$ is obtained by evaluating $P_f$ for each sample in Fig. 4.4 and plotting the PDF of the resulting values, according to Eq. (4.21). The resulting prior and posterior PDFs are shown in Fig. 4.5. The posterior distribution of the failure probability is sharper than the prior prediction and shifts to the right, which indicates that the observations improve the credibility of the prediction and that an increase in the overall failure probability is forecasted.

On the other hand, the results of Bayesian inference on the physical parameters $\boldsymbol{K}$ in the physics model of the beam in Eq. (4.22) are also obtained. Fig. 4.6 shows the prior and posterior distributions of $\boldsymbol{K}$ when the Gaussian noise in the likelihood is $\sigma_M = 30\%$. The prior of $\boldsymbol{K}$ corresponds to $\rho = 0$ in the EPCE, and the posterior of $\boldsymbol{K}$ provides the best $\boldsymbol{P}$ consistent with the observations.
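The KDE construction of Eq. (4.21) with Silverman's bandwidth rule can be sketched as follows; the failure-probability samples below are synthetic placeholders, not results from this chapter:

```python
import math

def silverman_bandwidth(samples):
    """Silverman's rule for a Gaussian kernel: h = (4*s^5 / (3*n))^(1/5)."""
    n = len(samples)
    mean = sum(samples) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in samples) / (n - 1))
    return (4.0 * s ** 5 / (3.0 * n)) ** 0.2

def kde_pdf(x, samples, h):
    """Gaussian-kernel estimate f(x) = 1/(n*h) * sum_j K((x - x_j)/h)."""
    n = len(samples)
    k = sum(math.exp(-0.5 * ((x - xj) / h) ** 2) for xj in samples)
    return k / (n * h * math.sqrt(2.0 * math.pi))

# Synthetic prior failure-probability samples (placeholders)
pf_samples = [0.021, 0.025, 0.030, 0.027, 0.024]
h = silverman_bandwidth(pf_samples)
f_at_mode = kde_pdf(0.025, pf_samples, h)
```

The posterior PDF in Eq. (4.21) is obtained the same way, from the $N_a$ posterior failure-probability samples with their own bandwidth $h_{f_2}$.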
For a clear comparison of the two Bayesian approaches, the prior and posterior predictions of the QoI from updates of $\boldsymbol{K}$ and $\boldsymbol{Z}_{\boldsymbol{\tau}}$ are plotted together in Fig. 4.7. It shows that the update of $\boldsymbol{K}$ induces a shift of the response PDF and a change in shape. For the update of $\boldsymbol{Z}_{\boldsymbol{\tau}}$, no shift in the family of response PDFs is observed, while the PDF ensemble is narrower. This influence is also reflected in the distribution of the failure probability as a shift and change in the shape of its PDF. This is due to the fact that a sample of PCE coefficients generates a full response PDF through the corresponding PCE surrogate model, while a sample of physical parameters composes a physics model that predicts a single value of the QoI. That is to say, the PCE coefficients contain more information (i.e., probabilistic information) than the physical parameters. The inference of PCE coefficients thus allows a dual-level characterization of the uncertainty in the QoI, which is more informative.

Figure 4.2: Comparison of statistics of the prior $X_{\boldsymbol{\tau}}^{(r)}$ directly from EPCE (left) and the prior $Z_{\boldsymbol{\tau}}^{(r)}$ with error term (right) in Example I.

Figure 4.3: Posterior distribution of $\boldsymbol{Z}_{\boldsymbol{\tau}}$ in Example I.

Figure 4.4: The family of response PDFs computed by posterior PCEs (red) and the family by prior PCEs (blue) in Example I.

Figure 4.5: The posterior and prior distributions of the probability of failure in Example I.

4.5.2 Example II: Reinforced concrete shear wall

Many studies have suggested that epistemic uncertainty is non-negligible in the dynamic analysis of structures Ellingwood and Kinali (2009); Wang et al. (2015); Gardoni et al. (2002); Feng and Li (2016); Feng et al. (2020). To demonstrate the proposed methodologies in this context, a reinforced concrete shear wall model is investigated in this study. This model comes from an experimental study Thomsen IV and Wallace (2004). Fig. 4.8 depicts the geometry, dimensions and reinforcement of the shear wall.
There are two steps in the loading procedure. First, a constant axial load of 378 kN is applied at the top of the wall, followed by a cyclic lateral load achieved by controlling the displacement. The applied lateral drift consists of a train of triangular pulses of alternating signs. Additional details of the setup and its loading are described elsewhere Thomsen IV and Wallace (2004). The purpose of the present analysis is to find the influence of the statistical parameters of the mechanical properties of concrete and steel on the response PDF of the energy dissipated throughout the structure via hysteresis.

Some of the material properties are considered as random variables, including the concrete elastic modulus $E_c$, the concrete tensile strength $f_r$, the concrete compressive strength $f_c$ and the steel yielding strength $f_y$.

Figure 4.6: The posterior and prior distributions of $\boldsymbol{K}$ by standard Bayesian inference ($\sigma_M = 30\%$) in Example I.

Figure 4.7: The family of response PDFs computed by posterior PCEs (red); the family by prior PCEs (blue); prior (dashed green) and posterior (dashed yellow) PDFs of the QoI by standard Bayesian inference of $\boldsymbol{K}$ ($\sigma_M = 30\%$) in Example I.

Figure 4.8: Schematic of the physical setup for Example II: Reinforced concrete shear wall.

Table 4.2: Statistical parameters of random inputs for Example II: reinforced concrete shear wall

Material | Input variable                     | Distribution | $\alpha_{SW}$ | $\beta_{SW}$ | $q_{SW}$ (fixed) | $r_{SW}$ (fixed)
Concrete | Compressive strength $f_c$ (in Pa) | Beta         | 2             | 2            | $3.91\times10^7$ | $5.29\times10^7$
Steel    | Yielding strength $f_y$ (in Pa)    | Beta         | 2             | 2            | $3.40\times10^8$ | $4.60\times10^8$

For concrete, $E_c$, $f_r$ and $f_c$ are of course correlated; for simplicity, they are regarded as fully correlated in this chapter. According to the code ACI 318-19 (2019), the relationships between these parameters are,
$$ E_c = 57{,}000\sqrt{f_c}\,, \qquad f_r = 7.5\,\lambda\sqrt{f_c}\,, \tag{4.23} $$
where the units are in psi.
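Eq. (4.23) together with the four-parameter Beta model of Tab. 4.2 can be sketched as follows; the Pa-to-psi conversion factor and $\lambda = 1$ (normal-weight concrete) are assumptions made here for illustration:

```python
import math
import random

PSI_PER_PA = 1.0 / 6894.757  # assumed unit conversion; Eq. (4.23) is stated in psi

def concrete_properties(fc_pa, lam=1.0):
    """ACI 318-19 relations in psi: Ec = 57,000*sqrt(fc), fr = 7.5*lam*sqrt(fc)."""
    fc_psi = fc_pa * PSI_PER_PA
    ec_psi = 57000.0 * math.sqrt(fc_psi)
    fr_psi = 7.5 * lam * math.sqrt(fc_psi)
    return ec_psi / PSI_PER_PA, fr_psi / PSI_PER_PA  # convert back to Pa

def sample_scaled_beta(alpha, beta, lower, upper, rng):
    """Four-parameter Beta: rescale a standard Beta(alpha, beta) draw to [lower, upper]."""
    return lower + (upper - lower) * rng.betavariate(alpha, beta)

rng = random.Random(0)
fc = sample_scaled_beta(2, 2, 3.91e7, 5.29e7, rng)  # f_c model from Tab. 4.2
ec, fr = concrete_properties(fc)
```

Since $E_c$ and $f_r$ are deterministic functions of $f_c$ here, the three concrete properties come out fully correlated, matching the simplification adopted in the text.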
To calculate these concrete parameters, $f_c$ is first sampled from a Beta distribution, and $f_r$ and $E_c$ are then generated according to Eq. (4.23). The steel strength is also modeled as a Beta random input. The material properties input to the shear wall structure are listed in Tab. 4.2, where, in this shear wall hysteresis problem, $\boldsymbol{\alpha}_{SW}$ and $\boldsymbol{\beta}_{SW}$ are the vectors of the two shape parameters of each input from the original dataset; $\boldsymbol{q}_{SW}$ and $\boldsymbol{r}_{SW}$ are the vectors of the lower and upper bounds of each input and are assumed to be the same in any dataset. Thus, in this example, the vector $\boldsymbol{P}$ of random parameters has four components consisting of the shape parameters $\boldsymbol{\alpha}_{SW}$ and $\boldsymbol{\beta}_{SW}$. The mean value of $\boldsymbol{P}$ is taken equal to the values estimated from the original dataset, and the coefficient of variation of each entry in $\boldsymbol{P}$ is assumed to be 5%.

We implement in Abaqus Hibbitt (2001) a model that follows the theoretical development in Feng et al. Feng et al. (2018), which features a multi-dimensional softened plasticity damage model. The steel material follows a Menegotto-Pinto model that includes strain hardening, Bauschinger effects and tension stiffening, and a multi-layer shell element is used for the shear wall Feng et al. (2018). Additional material properties include the concrete Poisson's ratio, the steel elastic modulus and the steel hardening ratio, which are deterministic inputs.

Figure 4.9: Comparison of statistics of the prior $X_{\boldsymbol{\tau}}^{(r)}$ directly from EPCE (left) and the prior $Z_{\boldsymbol{\tau}}^{(r)}$ with error term (right) in Example II.

Figure 4.10: Posterior distribution of $\boldsymbol{Z}_{\boldsymbol{\tau}}$ in Example II.

Figure 4.11: The family of response PDFs computed by posterior PCEs (red) and the family by prior PCEs (blue) in Example II.

Figure 4.12: The posterior and prior distributions of the probability of failure ($\Pr\{x \ge 37.5\text{ kN*m}\}$) in Example II.
A second-order EPCE was found to be sufficiently converged in the response PDF to carry out the foregoing studies. In this case, the 0th-order and 1st-order PCE coefficients are random in the stochastic PCE constructed in Section 4.2, and $N_r = 3$. Since the assessment and comparison of standard Bayesian analysis of physics parameters have been discussed in detail using Example I, we focus on applying the proposed approach in this example.

The statistics of the random PCE coefficients built directly by EPCE according to Eq. (4.2), and the joint prior of these coefficients that includes the EPCE error terms according to Eq. (4.13), are plotted in Fig. 4.9. Again, EPCE provides the deterministic relationships as the "modes" and the error terms generate scatter around these modes. Using Bayesian inference, the statistics of the joint posterior of the random PCE coefficients according to Eq. (4.16) are shown in Fig. 4.10. A clearly larger scatter is found compared with the prior samples. According to Eq. (4.18), the two families of response PDFs using the prior and posterior PCE models are plotted together in Fig. 4.11, and the distribution of the probability of failure ($P_f$) associated with $X > 37.5$ kN*m is shown in Fig. 4.12. It can be clearly seen that the posterior family is thinner than the prior family, which means the posterior predictions have smaller scatter than the prior predictions. The posterior distribution of the failure probability shifts to the right and is sharper than the prior distribution, indicating that the observations result in an increase in the overall failure probability and give higher credibility to the prediction.

4.6 Concluding Remarks

We have presented a Bayesian parameter calibration approach based on a polynomial chaos model with prior informed by EPCE. The epistemic uncertainties associated with model inadequacy and incomplete probabilities are accounted for.
The performance of the proposed methodology has been assessed using two analytical and numerical examples. Both cases indicate smaller scatter in the family of response PDFs, and the distribution of the failure probability shifts and takes a sharper shape, consistent with the observations. The Bayesian inference of the coefficients in the PCE surrogate model allows dual-level characterizations of the uncertainties in the QoI, which enables more informative predictions and decision-making than updating the physical parameters in the physics model. It is worth mentioning that the model evaluations in the associated Markov chains are performed by the surrogate models. Thus, the whole procedure is very efficient, with computational cost incurred only in constructing the EPCE.

In the original version of EPCE, if the statistical parameters $\boldsymbol{P}$ are estimated according to MLE arguments, their asymptotic distribution will be Gaussian. However, in the small-data case, the distribution of $\boldsymbol{P}$ will generally depend on the dataset. One could replace the probabilistic model of $\boldsymbol{P}$ with a higher-order PCE to account for more general forms of the density function. But this approach may not be ideal, because the sensitivities with respect to the epistemic variables would be complicated to compute. In this work, the representation of the error associated with the model of $\boldsymbol{P}$ by a chaos expansion of the QoI in Eq. (4.8) not only accounts for more general forms of the density function, but also preserves the convenience of computing sensitivities as a straightforward post-processing of the EPCE.

Chapter 5
Seismic Hazard Forecasting

5.1 Introduction

Earthquakes are extreme events which are naturally occurring physical phenomena. Seismic hazards can lead to significant seismic risks, which may cause substantial economic and social losses.
It is important to forecast seismic hazards credibly, in order to produce earthquake-resistant design standards for civil protection, mitigation for heritage and existing buildings, and community resilience. Uncertainties in natural systems and limited knowledge of earthquake events imply that the forecasting of natural hazards is challenging and has to be based on stochastic modeling.

C.A. Cornell first proposed the framework of probabilistic seismic hazard analysis (PSHA) Cornell (1968), which is a fundamental tool in the development of building codes that are critical in the design of structures able to withstand seismic impacts. This procedure comprehensively integrates the modules of earthquake occurrence, earthquake source, magnitude-frequency relationship, and attenuation law. After half a century of development, although many advanced techniques have been proposed, the framework generally remains the same. The earthquake magnitude and the source-to-site distance are described as random variables. A key step in the procedure is to use ground motion models (GMMs) to describe the attenuation relationship of some intensity measure (IM) with respect to distance when an earthquake of some magnitude occurs. Typically, a GMM is an empirical model whose structure and parameters are estimated from historic data. The IM is usually assumed to follow a log-normal distribution, and the GMM determines the mean and variance of the log-normal random variable IM. Finally, combining these analyses based on conditional probability, the rate (or probability) of exceeding various ground-motion levels at a site (or a map of sites) given all possible earthquakes is obtained (Baker, 2013).
A probabilistically complete framework for seismic hazard forecasting must fully characterize the epistemic uncertainty, which represents our lack of knowledge about the system, in the model's representation of the aleatory uncertainty, which describes the randomness of the system McGuire et al. (2005); Commission et al. (1997). Although the use of PSHA is dominant in current state-of-the-art practice, widespread confusion remains regarding the proper treatment of uncertainties (Bommer and Abrahamson, 2006; Atik et al., 2010). In most stochastic forecasting frameworks, the two types of uncertainty are generally treated as follows. The GMM is a simplified stochastic model that relates the IM to several seismological parameters of an earthquake. Specifically, the mean and variance of the log-normally assumed IM (i.e., outputs of GMMs) describe the uncertainty in the IM associated with the seismological parameters (i.e., inputs of GMMs). The GMM computes a single hazard curve which characterizes the aleatory uncertainty. The existence of alternative GMMs, and the set of hazard curves computed from these models, reflects the epistemic uncertainty. Each GMM is assigned a weight by hindcast performance of that model and/or through expert judgment Marzocchi and Jordan (2018). The weight represents a measure of the forecasting skill of a model with respect to the others; hence, it contains unavoidable subjectivity Marzocchi and Jordan (2017); Marzocchi et al. (2021).

We present a stochastic framework that quantifies the various sources of uncertainty in seismic hazard forecasting coherently and allows scientific discoveries to be brought in to improve the model. The seismological parameters in a GMM are estimated from data but have physical meanings; for example, some of them describe the faulting mechanism. Therefore, on the one hand, the data error in estimating these parameters needs to be taken into account.
In particular, when faced with the scarcity of strong-motion data, or in regions that are not earthquake-prone, forecasts of seismic impact are extremely challenging (Pisarenko et al., 1996, 2014). On the other hand, the data estimation processes can be replaced if we are able to simulate the faulting mechanism based on geoscience modeling. In such cases, a hierarchical model is constructed in which the unobserved geoscience model affects the corresponding seismological parameters and subsequently, via the GMM, affects the IM. The unobserved physics is a fine-scale model whose inputs are unknown and described by random variables. We use random variables at a coarse scale to describe the inherent uncertainty in the faulting mechanism that is independent of which geoscience model is selected. In this formulation, the standard GMM is extended to reflect subscale physics and additional information such that (1) we can understand the impacts of each of the components and errors on the ground motion prediction, so that scientific discoveries can be brought in to improve our model, and (2) when ground motion data are obtained, the model can be calibrated through Bayesian inference.

It should be noted that the "ontological error" in a model's quantification of aleatory variability and epistemic uncertainty subjected to observations, proposed by Marzocchi and Jordan Marzocchi and Jordan (2014, 2018), is not included in this framework. The reason is that once a good GMM (for example, the one with the largest weight) is selected, we admit that the model is not perfect but is correct/useful. The aim is to build a physically insightful prior model by extending the GMM to include hidden physics and/or data error, and subsequently to improve the model by sensitivity analysis Wang and Ghanem (2021) or Bayesian inference Baker and Gupta (2016); Lyubushin and Parvez (2010).
Compared to most probabilistic hazard forecasting frameworks, the presented framework has the following features. (1) The aleatory uncertainty is propagated from the seismological parameters through the model instead of relying on the log-normal assumption for the IM. The ground motion intensity does not necessarily satisfy the log-normal assumption (Raschke, 2013; Pavlenko, 2015; Zhang and Pan, 2021). In fact, an inappropriate distribution type can lead to significantly different probability estimates Der Kiureghian and Ditlevsen (2009). Especially when the tail of the distribution is of significance, for instance when it is associated with large ground motions in seismic hazard assessment, it is dangerous to impose the log-normal assumption on the intensity measure. Bommer argued that log-normal GMMs result in unrealistically high estimates of ground motion intensities (Bommer and Abrahamson, 2006). (2) The epistemic uncertainty is described by unobserved physics instead of alternative hazard models. The lack of understanding of the observable behavior of earthquake systems is reflected in the inability to reliably predict large earthquakes in seismically active regions on short time scales Jordan et al. (2011). It is important to investigate physics-based methods that incorporate detailed representations of geophysical mechanisms in seismic hazard analysis Graves et al. (2011). We focus on a selected model and aim to bring scientific discoveries into our prediction. This avoids assigning weights subjectively and accounts for the epistemic uncertainty in a scientific and coherent manner.

A series of works by the authors presented novel stochastic polynomial chaos representations that tackle the characterization and propagation of hierarchical uncertainties Ghanem et al. (2017); Wang and Ghanem (2021, 2019).
The computational efficiency is enhanced through a uniform treatment of simultaneous dual-level random variables, while the conceptually appealing segregation of uncertainties for purposes of visualization and interpretation is preserved.

The chapter is organized as follows. We first provide a review of the standard PSHA approach to dealing with epistemic and aleatory uncertainties in Section 5.2. Then, in Section 5.3, we describe the procedure of the proposed framework to perform seismic hazard assessment based on the EPCE framework. Following that, the proposed procedure is illustrated by an example in Section 5.4, and we present the conclusions and some closing comments in Section 5.5.

5.2 Standard Probabilistic Seismic Hazard Analysis (PSHA)

5.2.1 Earthquake source characterizations

Earthquake sources are capable of producing damaging ground motions at a site of interest. Once all possible sources are identified, we can characterize them by specifying the distributions of earthquake magnitude and source-to-site distance associated with earthquakes from these sources.

5.2.1.1 Model of magnitude

Tectonic faults are capable of producing earthquakes of various sizes, referred to as magnitudes Jordan et al. (2011). There are mainly two categories of magnitude representations that are typically considered in seismic hazard analysis: the Gutenberg-Richter recurrence law and the characteristic model. In general, it is agreed by the community that regional catalogs of seismicity are well described by the Gutenberg-Richter model, while the characteristic model might be the option when sites along a specific fault or fault zone are of concern (Wesnousky, 1994). Regardless of the various distinctions between them, the magnitude of an earthquake is represented by a probability distribution denoted by $f_M(m)$.
5.2.1.2 Model of source-to-site distance

To predict ground shaking at a site, it is also necessary to model the distribution of distances from earthquakes to the site of interest. For a given earthquake source, it is generally assumed that earthquakes will occur with equal probability at any location on the fault. The fault types are typically categorized into point, line or area sources, though any arbitrarily complex surface could be considered. Thus, given that locations are uniformly distributed, it is simple to identify the distribution of source-to-site distances, denoted by $f_R(r)$, using only the geometry of the source.

5.2.2 Ground motion model

The ground motion model (GMM) predicts the probability distribution of a ground motion intensity $Y$ as a function of the earthquake magnitude $m$, distance $r$ and many other predictor parameters $\hat{\boldsymbol{C}}$, such as the faulting mechanism, the near-surface site conditions, and the potential presence of directivity effects. It is noted that $\hat{\boldsymbol{C}}$ are deterministic values when used in GMMs. GMMs are typically developed using statistical regression on datasets of observed ground motion intensities. Even after accounting for the uncertainty of magnitude and distance, there is significant scatter in observed ground motion intensities, and thus, to describe this probability distribution, GMMs take the general form,
$$ \ln Y \sim N\!\left(\mu_{\ln Y}(m,r,\hat{\boldsymbol{C}}),\; \sigma^2(m,r,\hat{\boldsymbol{C}})\right), \tag{5.1} $$
where $\ln Y$ is the natural log of the ground motion intensity measure of interest, such as peak ground acceleration (PGA) or spectral acceleration at a given period. It is noted that $\ln Y$ is modeled as a random variable with a Gaussian distribution. The terms $\mu_{\ln Y}(m,r,\hat{\boldsymbol{C}})$ and $\sigma(m,r,\hat{\boldsymbol{C}})$ are the outputs of the GMM; they are the predicted mean and standard deviation, respectively, of the Gaussian variable $\ln Y$.
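Because $\ln Y$ is Gaussian, the exceedance probability for any intensity level follows directly from the GMM outputs $\mu_{\ln Y}$ and $\sigma$ (cf. Eq. 5.2). A minimal sketch; the toy attenuation coefficients below are illustrative placeholders, not from any published GMM:

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def prob_exceed(y, mu_ln_y, sigma):
    """P(Y > y | m, r) = 1 - F_Y(y) when ln Y ~ N(mu_ln_y, sigma^2)."""
    return 1.0 - normal_cdf((math.log(y) - mu_ln_y) / sigma)

def toy_gmm(m, r):
    """Illustrative GMM: mean ln(PGA) grows with magnitude, decays with distance."""
    mu_ln_y = -1.0 + 0.8 * m - 1.2 * math.log(r + 10.0)
    return mu_ln_y, 0.57  # (mu_lnY, sigma)

mu, sigma = toy_gmm(6.5, 20.0)
p = prob_exceed(0.2, mu, sigma)  # chance of PGA above 0.2 g given m=6.5, r=20 km
```

Any real GMM would replace `toy_gmm` with its published functional form and coefficient tables; the exceedance calculation is unchanged.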
Though over decades of development and refinement modern GMMs have become complex, consisting of many terms and tables of coefficients, they always aim at predicting the terms $\mu_{\ln Y}(m,r,\hat{\boldsymbol{C}})$ and $\sigma(m,r,\hat{\boldsymbol{C}})$.

5.2.3 Combined calculation

Once $\mu_{\ln Y}(m,r,\hat{\boldsymbol{C}})$ and $\sigma(m,r,\hat{\boldsymbol{C}})$ are computed, the PDF and CDF of the intensity measure, denoted by $f_Y(y)$ and $F_Y(y)$, respectively, are obtained. Therefore, the probability of exceeding any level of the intensity measure for a given magnitude and distance can be expressed as,
$$ P(Y > y \,|\, m, r) = 1 - F_Y(y). \tag{5.2} $$
Combining the probability distributions of magnitude and distance (i.e., $f_M(m)$ and $f_R(r)$) using the total probability theorem results in,
$$ P(Y > y) = \int_{m_{min}}^{m_{max}} \int_{0}^{r_{max}} P(Y > y \,|\, m, r)\, f_M(m)\, f_R(r)\, dr\, dm. \tag{5.3} $$
Eq. 5.3 is a probability of exceedance given an earthquake, and does not include any information about how often earthquakes occur at the source of interest. A simple modification to Eq. 5.3 is usually made by multiplying by a scaling factor, to compute the rate rather than the probability of $Y > y$ given the occurrence of an earthquake, which gives,
$$ \lambda(Y > y) = \lambda(m > m_{min}) \int_{m_{min}}^{m_{max}} \int_{0}^{r_{max}} P(Y > y \,|\, m, r)\, f_M(m)\, f_R(r)\, dr\, dm, \tag{5.4} $$
where $\lambda(Y > y)$ is the rate of $Y > y$, and $\lambda(m > m_{min})$ is the rate of occurrence of earthquakes greater than $m_{min}$ from the source, usually a predetermined constant. For instance, a rate of $\lambda(m > 6.5) = 0.01$ indicates that an earthquake larger than magnitude 6.5 occurs on average every 100 years. When multiple sources are considered, Eq. 5.4 can be generalized as,
$$ \lambda(Y > y) = \sum_{i=1}^{n_{src}} \lambda(m_i > m_{min}) \int_{m_{min}}^{m_{max}} \int_{0}^{r_{max}} P(Y > y \,|\, m, r)\, f_{M_i}(m)\, f_{R_i}(r)\, dr\, dm, \tag{5.5} $$
where $n_{src}$ is the number of sources considered, and $M_i$ and $R_i$ denote the magnitude and distance distributions for source $i$. A discrete version of Eq.
5.5 for numerical implementation can be expressed as,
$$ \lambda(Y > y) = \sum_{i=1}^{n_{src}} \lambda(m_i > m_{min}) \sum_{j=1}^{n_M} \sum_{k=1}^{n_R} P(Y > y \,|\, m_j, r_k)\, P(M_i = m_j)\, P(R_i = r_k), \tag{5.6} $$
where the ranges of possible $M_i$ and $R_i$ are discretized into $n_M$ and $n_R$ intervals, respectively.

Eqs. 5.3 and 5.5 (or, equivalently, Eq. 5.6) are the equations most commonly referred to in engineering seismic hazard assessments. Which expression to use depends on whether the hazard associated with specific earthquakes, or with a specified time period, is of concern. In general, they integrate all the knowledge regarding rates of earthquake occurrence, the magnitudes and locations of potential earthquakes, and the distribution of ground shaking intensity due to those earthquakes. Each of those inputs can be determined through scientific studies of historic earthquakes and statistical analysis of observed data. As the end result, the probability (or rate) of exceeding intensity measure levels of varying intensity at a site is usually described as a hazard curve, which is very useful for engineering decision-making and for the prediction of rare events (low exceedance rates or large intensities) that are not possible to determine through direct observation. The procedure of PSHA is illustrated in Fig. 5.1a.

5.2.4 Uncertainty assessment

The model error, often termed "epistemic uncertainty" in natural hazard forecasting, is usually characterized by an ensemble of alternative hazard models and treated by the logic tree approach Bommer (2012). Each branch in the logic tree has a hazard model and generates a hazard curve. These branches have their own weights that represent the degree of belief that a given branch has the best model compared to the real physics. To combine the results from these branches, these approaches take a weighted mean of the results from multiple hazard models according to the total probability theorem Abrahamson and Bommer (2005).
That is, the mean rate of exceeding a certain intensity measure is usually considered, which is the sum of the rates of exceedance from each branch multiplied by its weight. The weights are determined by hindcast performance of that model and/or through expert judgment, and thus contain unavoidable subjectivity. We argue that a coherent approach to quantifying this "epistemic uncertainty" should seek a scientific representation with respect to a particular hazard model. That is, the approach should make it possible to bring scientific discoveries to bear in improving the adopted hazard model.

On the other hand, the statistical error due to lack of data in determining the seismological parameters (for example, the error in the maximum magnitude that a seismic source is able to generate) is rarely investigated in the community, mainly due to the tremendous computational burden of considering different values of these parameters. This statistical error is an additional source of epistemic uncertainty, and can be either accounted for directly or reduced by the aforementioned discovery of hidden physics.

The hazard curve estimated from a single GMM as in Eq. 5.1 reflects the inherent variability of seismic systems, usually referred to as "aleatory uncertainty". Typically, the IM is assumed to follow a log-normal distribution whose mean and variance represent the aleatory uncertainty. However, the log-normal assumption can be dangerous when the tail of the distribution is of significance, for example when it is associated with large ground motions in seismic hazard assessment. A rigorous approach to estimating the IM's distribution should quantitatively assess the uncertainty in the IM propagated from the uncertainty of each component.

Figure 5.1: Diagrams for quantifying aleatory and epistemic uncertainties by (a) standard PSHA and (b) EPCE-based procedures.
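Before moving to the EPCE-based formulation, the standard pipeline of Section 5.2 can be sketched end to end: a log-normal exceedance kernel (Eqs. 5.1-5.2) inside the discretized hazard sum of Eq. 5.6. All numbers below (GMM coefficients, magnitude and distance PMFs, occurrence rate) are illustrative placeholders:

```python
import math

def p_exceed(y, m, r):
    """Placeholder log-normal exceedance kernel P(Y > y | m, r)."""
    mu_ln_y = -1.0 + 0.8 * m - 1.2 * math.log(r + 10.0)  # toy attenuation law
    z = (math.log(y) - mu_ln_y) / 0.57
    return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def hazard_rate(y, rate_min, m_bins, p_m, r_bins, p_r):
    """Single-source discrete PSHA sum, Eq. (5.6):
    lambda(Y>y) = lambda(m>m_min) * sum_j sum_k P(Y>y|m_j,r_k) P(M=m_j) P(R=r_k)."""
    total = 0.0
    for m_j, pm_j in zip(m_bins, p_m):
        for r_k, pr_k in zip(r_bins, p_r):
            total += p_exceed(y, m_j, r_k) * pm_j * pr_k
    return rate_min * total

m_bins = [5.0, 5.5, 6.0, 6.5, 7.0]
p_m = [0.40, 0.30, 0.15, 0.10, 0.05]   # P(M = m_j), sums to 1
r_bins = [10.0, 20.0, 40.0]
p_r = [0.25, 0.50, 0.25]               # P(R = r_k), sums to 1
lam = hazard_rate(0.2, 0.05, m_bins, p_m, r_bins, p_r)  # lambda(m > m_min) = 0.05/yr
```

Repeating the calculation over a grid of intensity levels $y$ traces out the hazard curve, and multiple sources add their rates as in Eq. 5.6.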
5.3 EPCE-based Seismic Hazard Analysis

A coherent and rigorous framework is presented in this section to characterize and propagate the various sources of uncertainty in seismic hazard assessment. The key idea is to specify hierarchical uncertainties on the GMM and construct a unified formulation which propagates these distinct uncertainties.

5.3.1 Deterministic hazard model

A GMM in standard PSHA not only represents the physical mechanism of how the ground motion intensity is attenuated with respect to the seismological parameters, but also provides a probabilistic relationship between them based on a log-normal assumption, namely $\mu_{\ln Y}(m,r,\hat{\boldsymbol{C}})$ and $\sigma(m,r,\hat{\boldsymbol{C}})$. To relax the log-normal assumption, a strategy is to build a deterministic relationship as the physical model, which mimics the true physical processes, and to propagate the uncertainties from the seismological parameters to the IM. Thus, we modify the GMM in Eq. 5.1 into a deterministic physical model whose general form is expressed as,
$$ Y = g(m, r, \boldsymbol{C}), \tag{5.7} $$
where $g(\cdot)$ is a deterministic function, $\boldsymbol{C}$ are random predictor parameters, and $Y$ is the IM as the model output. Compared with standard PSHA, we modify the deterministic predictor parameters $\hat{\boldsymbol{C}}$ into random variables $\boldsymbol{C}$ which represent the inherent variabilities, referred to as "$\xi$-level" random variables for distinction from later descriptions. For the ground motion, we discard the log-normal assumption and represent $Y$ as a random variable dependent on the random inputs $(m, r, \boldsymbol{C})$. The physical model $g(\cdot)$ provides the mapping from a sample of $(m, r, \boldsymbol{C})$ to the corresponding sample of $Y$.

5.3.2 Uncertainty representation

The inputs of the physical model, denoted by $\boldsymbol{\kappa} = (m, r, \boldsymbol{C}) \in \mathbb{R}^d$, are modeled as random variables. These random seismological parameters are estimated from data but essentially represent the unobserved physical mechanisms (e.g., faulting mechanism, earthquake magnitude).
If the faulting mechanism can be modeled using geoscience discoveries, the data estimation operations can be eliminated. In such cases, a hierarchical model is constructed in which the unobserved geoscience models influence the corresponding seismological parameters and subsequently impact the IM via the hazard model in Eq. 5.7. The unobserved physics is represented by a subscale model whose inputs are unknown and hence described by "$\rho$-level" random variables. It is noted that there is a hierarchy of uncertainties in which the "$\rho$-level" uncertainty influences the distribution of the seismological parameters, which is described by the "$\xi$-level" uncertainty. An alternative interpretation of the "$\rho$-level" uncertainty is that the seismological parameters are estimated from limited data. For instance, the distribution of magnitude has a parameter, the "maximum magnitude", which is determined by selecting the largest value in the observed dataset. If a larger database were available, it might be found that this largest value increases. Thus, the "$\rho$-level" uncertainty reflects the statistical error in the parameter estimation. Again, the data estimation processes can be substituted by, for example, seismological models of earthquake magnitude. Therefore, the "$\rho$-level" uncertainty is interpreted as "uncertainty of uncertainty", defined as the error in modeling the uncertainty of the seismological parameters.

5.3.3 Uncertainty propagation

We use an EPCE formulation to propagate the hierarchical uncertainties described in Section 5.3.2 to the uncertainty in the IM and subsequently obtain the hazard curve. The $d$-dimensional random vector of seismological parameters $\boldsymbol{\kappa}$ is first expressed as a mapping from a $d$-dimensional vector $\boldsymbol{\xi} = \{\xi_1, \ldots, \xi_d\}$ of uncorrelated standard normal random variables using, for instance, the Rosenblatt transformation. The set $\boldsymbol{\xi}$ is referred to as the "$\xi$-level germ" in the EPCE.
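The hierarchy just described can be illustrated with a small sampling sketch, in which the "maximum magnitude" parameter is itself random ($\rho$-level) and, given it, a magnitude is drawn from a truncated Gutenberg-Richter-type model ($\xi$-level). All numerical values are illustrative placeholders:

```python
import math
import random

def sample_magnitude_two_level(rng, mu_p=8.0, sigma_p=0.2, m_min=5.0, b=1.0):
    """Two-level draw:
    rho-level: maximum magnitude P = mu_p + sigma_p * rho, with rho ~ N(0, 1);
    xi-level:  inverse-CDF draw from a Gutenberg-Richter-type exponential
               model truncated to [m_min, P]."""
    m_max = mu_p + sigma_p * rng.gauss(0.0, 1.0)   # rho-level (parameter) draw
    u = rng.random()                               # xi-level (aleatory) draw
    scale = 1.0 - 10.0 ** (-b * (m_max - m_min))
    return m_min - math.log10(1.0 - u * scale) / b

rng = random.Random(42)
mags = [sample_magnitude_two_level(rng) for _ in range(2000)]
```

Pooling draws over both levels mixes the statistical ($\rho$-level) scatter of the parameter into the aleatory ($\xi$-level) magnitude distribution, which is what the combined germ of the EPCE formalizes.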
We introduce the m-dimensional vector \(\boldsymbol{P} = \{P_1, \ldots, P_m\}\) representing all the distribution parameters of the random seismological parameters \(\boldsymbol{\kappa}\). These parameters are typically estimated from a finite sample, and an insufficient dataset can result in statistical error in the estimates. Different estimation methods yield different probabilistic models for \(\boldsymbol{P}\), with Maximum Likelihood Estimates (MLE) generally yielding an asymptotically (for large sample size) Gaussian distribution with variance inversely proportional to the sample size. The modification of the aforementioned EPCE to the case where each \(P_i\) is decomposed according to its own stochastic dimension \(\rho_i\) can be readily accommodated, with some increase in computational cost. Then each \(P_i,\ i = 1, \ldots, m\), can be modeled as a normal random variable \(P_i \sim N(\mu_{P_i}, \sigma_{P_i}^2)\), which results in

\[ P_i = \mu_{P_i} + \sigma_{P_i}\,\rho_i, \quad i = 1, \ldots, m, \tag{5.8} \]

where \(\mu_{P_i}\) is the mean of \(P_i\) and \(\sigma_{P_i}\) its standard deviation; m denotes the number of parameters from the set \(\boldsymbol{P}\) that are presumed to be uncertain.

Thus, the ground motion intensity \(Y\) can be represented as a function of a combined germ \(\boldsymbol{\eta} \in \mathbb{R}^{d+m}\) composed of \(\{\xi_1, \ldots, \xi_d\}\) and \(\{\rho_1, \ldots, \rho_m\}\), which are all independent standard normal random variables, and is therefore denoted by \(Y(\boldsymbol{\eta})\). Representing \(Y(\boldsymbol{\eta})\) in an orthogonal polynomial expansion with respect to \(\boldsymbol{\eta}\) yields the extended polynomial chaos expansion of \(Y\) relative to \(\boldsymbol{\eta}\),

\[ Y(\boldsymbol{\eta}) = \sum_{\boldsymbol{\gamma} \in \mathbb{N}^{d+m},\, |\boldsymbol{\gamma}| \le p} Y_{\boldsymbol{\gamma}}\, \psi_{\boldsymbol{\gamma}}(\boldsymbol{\eta}), \quad \boldsymbol{\eta} \in \mathbb{R}^{d+m}, \tag{5.9} \]

where \(\{Y_{\boldsymbol{\gamma}}\}\) are called the EPCE coefficients; p denotes the highest order in the polynomial expansion; \(\boldsymbol{\gamma}\) is a (d+m)-dimensional multi-index; and \(\{\psi_{\boldsymbol{\gamma}}\}\) denote normalized multivariate Hermite polynomials that can be expressed in terms of their univariate counterparts as

\[ \psi_{\boldsymbol{\gamma}}(\boldsymbol{\eta}) = \prod_{p=1}^{d+m} \psi_{\gamma_p}(\eta_p) = \prod_{p=1}^{d+m} \frac{h_{\gamma_p}(\eta_p)}{\sqrt{\gamma_p!}}, \quad \boldsymbol{\eta} \in \mathbb{R}^{d+m},\ \boldsymbol{\gamma} \in \mathbb{N}^{d+m}, \tag{5.10} \]

where \(h_{\gamma_p}\) represents the one-dimensional Hermite polynomial of order \(\gamma_p\).
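The normalized univariate factors in Eq. 5.10 are probabilists' Hermite polynomials divided by \(\sqrt{\gamma_p!}\). A minimal evaluation sketch using NumPy's probabilists' Hermite (HermiteE) basis:

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermeval

def psi(gamma, eta):
    """Normalized multivariate Hermite polynomial of Eq. 5.10:
    psi_gamma(eta) = prod_p He_{gamma_p}(eta_p) / sqrt(gamma_p!),
    with He_n the probabilists' Hermite polynomial."""
    val = 1.0
    for g, e in zip(gamma, eta):
        coeffs = np.zeros(g + 1)
        coeffs[g] = 1.0                       # selects He_g in the HermiteE basis
        val *= hermeval(e, coeffs) / math.sqrt(math.factorial(g))
    return val
```

These polynomials are orthonormal with respect to the standard multivariate Gaussian density, which is what makes the coefficient formula in Eq. 5.11 a simple weighted sum.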
The collection of these multivariate polynomials forms an orthogonal set with respect to the multivariate Gaussian density function. The EPCE coefficients \(Y_{\boldsymbol{\gamma}}\) are estimated using quadrature approximations to multidimensional integrals as

\[ Y_{\boldsymbol{\gamma}} = \sum_{q \in \mathcal{Q}} Y(\boldsymbol{\eta}_q)\, \psi_{\boldsymbol{\gamma}}(\boldsymbol{\eta}_q)\, w_q, \quad |\boldsymbol{\gamma}| \le p, \tag{5.11} \]

where \(\mathcal{Q}\) is the set of sparse quadrature points, q is a quadrature node in \(\mathcal{Q}\), and \(w_q\) is the associated weight. The quadrature level required to achieve a preset accuracy in approximating any \(Y_{\boldsymbol{\gamma}}\) increases with the order of the associated polynomial \(\psi_{\boldsymbol{\gamma}}\). For a given polynomial order p and germ dimension d+m, the number of these EPCE coefficients, denoted by \(N_{\mathrm{epce}}\), is

\[ N_{\mathrm{epce}} = \frac{(d+m+p)!}{(d+m)!\, p!}. \tag{5.12} \]

Clearly, the numerical value of \(Y_{\boldsymbol{\gamma}}\) depends both on the mapping from \(\boldsymbol{\kappa}\) to \(Y\) and on the mapping from \(\boldsymbol{\xi}\) to \(\boldsymbol{\kappa}\). The former mapping encapsulates the physical model in Eq. 5.7, while the latter describes the probabilistic model of the random seismological parameters \(\boldsymbol{\kappa}\) in a functional form that explicitly relates them to a set of independent Gaussian random variables \(\boldsymbol{\xi}\). Uncertainty in the probabilistic model of \(\boldsymbol{\kappa}\) is propagated into uncertainty about \(Y\) through the composite of the maps from \(\boldsymbol{\xi}\) to \(\boldsymbol{\kappa}\) and from \(\boldsymbol{\kappa}\) to \(Y\).

5.3.4 Hazard curve

We rely on Kernel Density Estimates (KDE) to represent the PDFs (Davis et al., 2011). The PDF of the intensity measure \(Y\), denoted by \(f_Y(y)\), can be expressed as

\[ f_Y(y) = \frac{1}{Nh} \sum_{j=1}^{N} K\!\left(\frac{y - Y^{(j)}}{h}\right), \tag{5.13} \]

where we recall that

\[ Y^{(j)} = \sum_{|\boldsymbol{\gamma}| \le p} Y_{\boldsymbol{\gamma}}\, \psi_{\boldsymbol{\gamma}}(\boldsymbol{\eta}^{(j)}). \tag{5.14} \]

The Gaussian kernel is used for K, with its bandwidth h determined following Silverman's rule (Silverman, 1986) as

\[ h = \left(\frac{4 s^5}{3N}\right)^{1/5}, \tag{5.15} \]

where s is the sample standard deviation evaluated from the N samples. It should be noted that the statistical properties of the KDE hinge on the samples being independently selected from the distribution of \(Y\).
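Eqs. 5.13 and 5.15 can be sketched directly as a generic Gaussian-kernel KDE; in the full method the samples \(Y^{(j)}\) would be supplied by evaluating the EPCE of Eq. 5.14:

```python
import numpy as np

def silverman_bandwidth(samples):
    """Silverman's rule of thumb, Eq. 5.15: h = (4 s^5 / (3 N))^(1/5)."""
    s = np.std(samples, ddof=1)
    n = len(samples)
    return (4.0 * s**5 / (3.0 * n)) ** 0.2

def kde_pdf(x, samples, h=None):
    """Gaussian-kernel density estimate of Eq. 5.13:
    f(x) = (1/(N h)) sum_j K((x - Y_j) / h)."""
    if h is None:
        h = silverman_bandwidth(samples)
    u = (x - samples[:, None]) / h                  # shape (N, len(x))
    k = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)  # standard Gaussian kernel
    return k.mean(axis=0) / h
```

For samples that are indeed independent draws from the target distribution, the estimated density integrates to one and recovers the underlying PDF as N grows.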
In our formulation, samples of \(\boldsymbol{\eta} \in \mathbb{R}^{d+m}\) are drawn independently from a (d+m)-dimensional standard Gaussian distribution and subsequently pushed through the PCE to yield samples of \(Y\). The samples thus collected do not necessarily, a priori, follow the distribution of \(Y\). However, mean-square convergence of the EPCE implies its convergence in distribution. Thus, provided the PCE of \(Y\) has converged, samples collected from the EPCE will adhere to the distribution of \(Y\).

The cumulative distribution function (CDF) \(F_Y(y)\) of the intensity measure is computed from the PDF \(f_Y(y)\) by

\[ F_Y(y) = P(Y \le y) = \int_{-\infty}^{y} f_Y(u)\, du. \tag{5.16} \]

Then, the hazard curve, which indicates the probability of exceeding any level of the intensity measure, is derived as

\[ P(Y > y) = 1 - F_Y(y). \tag{5.17} \]

It is straightforward to generalize the foregoing approach when the rate of earthquakes and multiple earthquake sources need to be considered, as in Eqs. 5.4 to 5.6, and this is not discussed again in this section. We thus construct a direct mapping to the hazard curve from the various sources of uncertainty in the seismological parameters, rather than relying on the controversial log-normal assumption for the ground motion.

5.3.5 Stochastic model for hazard curves

The EPCE allows the random variables \(\boldsymbol{\rho}\) to be separated from \(\boldsymbol{\xi}\). By sampling \(\boldsymbol{\rho}\), a family of PDFs can be generated from Eq. 5.13 as

\[ f_Y(y; \boldsymbol{\rho}) = \frac{1}{Nh} \sum_{j=1}^{N} K\!\left(\frac{y - \sum_{|\boldsymbol{\gamma}| \le p} Y_{\boldsymbol{\gamma}}\, \psi_{\boldsymbol{\gamma}}(\boldsymbol{\xi}^{(j)}, \boldsymbol{\rho})}{h}\right), \tag{5.18} \]

where \(\psi_{\boldsymbol{\gamma}}(\boldsymbol{\xi}^{(j)}, \boldsymbol{\rho})\) indicates that the polynomial \(\psi_{\boldsymbol{\gamma}}\) is evaluated at a sample whose first d components are specified by \(\boldsymbol{\xi}^{(j)}\), while the last m components remain as free variables. It can be seen that Eq. 5.13 is the distribution of the family of PDFs generated by Eq. 5.18, marginalized over \(\boldsymbol{\rho}\).
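Given samples of \(Y\) (for example, pushed through the EPCE), the exceedance curve of Eq. 5.17 can also be estimated empirically from the sample CDF; this is a simplified alternative to integrating the KDE-based PDF used in the dissertation, shown here only to make the post-processing concrete:

```python
import numpy as np

def poe_curve(ys, samples):
    """Empirical probability of exceedance P(Y > y), Eq. 5.17, evaluated at
    the thresholds `ys` from i.i.d. samples of the intensity measure Y."""
    ordered = np.sort(samples)
    # fraction of samples strictly greater than each threshold
    return 1.0 - np.searchsorted(ordered, ys, side="right") / len(ordered)
```

The empirical curve converges to the KDE-based curve as the number of samples grows, since both estimate the same \(1 - F_Y(y)\).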
Since the probability of exceedance (PoE) is obtained by post-processing the PDF, the corresponding family of PoE curves is

\[ P(Y > y; \boldsymbol{\rho}) = 1 - \int_{-\infty}^{y} f_Y(u; \boldsymbol{\rho})\, du. \tag{5.19} \]

One important application of the foregoing ideas is the characterization of the PoE values themselves as random variables. The distribution of the PoE, denoted by \(P_f\), at a fixed hazard value \(y_c\) is then given by

\[ f_{P_f}(x) = \frac{1}{N_\rho h_f} \sum_{j=1}^{N_\rho} K\!\left(\frac{x - P(Y > y_c; \boldsymbol{\rho}^{(j)})}{h_f}\right), \tag{5.20} \]

where \(N_\rho\) is the number of samples of \(\boldsymbol{\rho}\) used in estimating the KDE, and \(P_f^{(j)},\ j = 1, \ldots, N_\rho\), is the j-th realization of the PoE evaluated at \(\boldsymbol{\rho}^{(j)}\). The Gaussian kernel is used for K, with the bandwidth \(h_f\) determined following Silverman's rule as \(h_f = (4 s_f^5 / 3N_\rho)^{1/5}\), where \(s_f\) is the standard deviation estimated from the \(N_\rho\) samples of the PoE.

The diagram in Fig. 5.1b depicts the procedure of the EPCE-based seismic hazard assessment. It should be mentioned that the "experts' distribution" (Marzocchi and Jordan, 2014) in standard PSHA is also built upon a family of hazard curves and appears to carry the same meaning as the proposed Eq. 5.20. However, the family of hazard curves used to compute the experts' distribution is obtained from a logic tree, and the scatter of these curves reflects the distinct results from various hazard models (i.e., source models, GMMs). In contrast, the scatter in the ensemble of hazard curves in the proposed method, as in Eq. 5.19, reflects the error due to unobserved physics (or lack of data) for a particular hazard model, which has an entirely different meaning from the experts' distribution.

5.4 Case Study

To demonstrate the proposed framework and compare it with the standard PSHA approach, we apply both methods to study a point source under a specific earthquake. In this case, in standard PSHA, the uncertainty of the ground motion is completely described by the standard deviation of the log-normal assumption and controlled by the relevant deterministic seismological parameters.
In contrast, in the proposed approach, the uncertainty of the ground motion is characterized by the associated random seismological parameters. It is not difficult to generalize this case study to multiple sources and earthquakes.

In the studied case, we consider the seismic hazard calculation for a particular site where the source-to-site distance is r = 200 km, subject to an earthquake of magnitude 7.0, such that \(f_R(r)\) and \(f_M(m)\) are not involved. The GMM developed by Campbell (Campbell, 2003) is used in the standard approach and is expressed as

\[ \ln(Y) = c_1 + f_1(m) + f_2(m, r) + f_3(r), \tag{5.21} \]

where

\[ f_1(m) = c_2\, m + c_3\, (8.5 - m)^2, \tag{5.22} \]

\[ f_2(m, r) = c_4 \ln(R) + (c_5 + c_6\, m)\, r, \tag{5.23} \]

\[ R = \sqrt{r^2 + [c_7 \exp(c_8\, m)]^2}, \tag{5.24} \]

\[ f_3(r) = \begin{cases} 0 & \text{if } r \le r_1 \\ c_7\,[\ln(r) - \ln(r_1)] & \text{if } r_1 < r \le r_2 \\ c_7\,[\ln(r) - \ln(r_1)] + c_8\,[\ln(r) - \ln(r_2)] & \text{if } r > r_2. \end{cases} \tag{5.25} \]

Table 5.1: Statistical parameters of the \(\boldsymbol{C}\) factors

  Random input   Nominal value   Distribution   alpha       beta        b_l (fixed)   b_u (fixed)
  c_1            -0.6104         Beta           3           3           -0.67144      -0.54936
  c_2             0.451          Beta           3           3            0.4059        0.4961
  c_3            -0.2090         Beta           3           3           -0.2299       -0.1881
  c_4            -1.158          Beta           3           3           -1.2738       -1.0422
  c_5            -0.00255        Beta           3           3           -0.002805     -0.002295
  c_6             0.000141       Beta           3           3            0.0001269     0.0001551
  c_7             0.299          Beta           3           3            0.2691        0.3289
  c_8             0.503          Beta           3           3            0.4527        0.5533
  r_1             70             Beta           3 (fixed)   3 (fixed)    63.0          77.0
  r_2             130            Beta           3 (fixed)   3 (fixed)   117.0         143.0

Here \(Y\) is the mean of the intensity measure (e.g., PGA), and the relations for the standard deviation of \(Y\) are given by

\[ \sigma_Y = \begin{cases} c_{11} + c_{12}\, m & \text{if } m < M_1 \\ c_{13} & \text{if } m \ge M_1. \end{cases} \tag{5.26} \]

Again, this is a stochastic model in which the uncertainty is characterized by the mean and standard deviation of \(Y\). In Campbell's GMM, the factors \(c_1, \ldots, c_{13}\) are deterministic constants that are estimated from data while retaining physical meaning. The constants \(r_1\), \(r_2\), and \(M_1\) are specified as 70 km, 130 km, and 7.16, respectively.

In the EPCE-based approach, we formulate a deterministic physical model for the seismic hazard using Eqs.
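The deterministic physical model \(g(m, r, \boldsymbol{C})\) of Eqs. 5.21 to 5.25 can be coded directly. The sketch below uses the nominal values of Table 5.1, treating them as fixed constants rather than random variables, purely to exercise the mapping:

```python
import math

# Nominal values of the C factors from Table 5.1 (here held fixed)
C = dict(c1=-0.6104, c2=0.451, c3=-0.2090, c4=-1.158, c5=-0.00255,
         c6=0.000141, c7=0.299, c8=0.503, r1=70.0, r2=130.0)

def ln_pga(m, r, c=C):
    """Deterministic GMM g(m, r, C): ln(Y) = c1 + f1(m) + f2(m, r) + f3(r)."""
    f1 = c["c2"] * m + c["c3"] * (8.5 - m) ** 2                  # Eq. 5.22
    R = math.sqrt(r ** 2 + (c["c7"] * math.exp(c["c8"] * m)) ** 2)  # Eq. 5.24
    f2 = c["c4"] * math.log(R) + (c["c5"] + c["c6"] * m) * r     # Eq. 5.23
    if r <= c["r1"]:                                             # Eq. 5.25
        f3 = 0.0
    elif r <= c["r2"]:
        f3 = c["c7"] * (math.log(r) - math.log(c["r1"]))
    else:
        f3 = (c["c7"] * (math.log(r) - math.log(c["r1"]))
              + c["c8"] * (math.log(r) - math.log(c["r2"])))
    return c["c1"] + f1 + f2 + f3
```

In the EPCE-based approach, the same mapping is evaluated at sampled values of the \(\boldsymbol{C}\) factors instead of the fixed nominal values used here.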
5.21 to 5.25 in Campbell's model. The inputs \(\boldsymbol{\kappa}\) of the physical model are then composed of \(\boldsymbol{C} = (c_1, \ldots, c_8, r_1, r_2) \in \mathbb{R}^{10}\). Following the strategy of specifying hierarchical uncertainties described in Section 5.3.2, the statistical information for \(\boldsymbol{C}\) is provided in Tab. 5.1, wherein the nominal values are determined by \(\hat{\boldsymbol{C}}\). Specifically, since \(c_1, \ldots, c_8\) represent unobserved physics, they are assigned both \(\xi\)-level and \(\rho\)-level uncertainties; \(r_1\) and \(r_2\), which determine the bounds of the distance, are not associated with any hidden physics and thus carry only \(\xi\)-level uncertainty. In this case, the combined germ \(\boldsymbol{\eta}\) is in \(\mathbb{R}^{26}\), composed of \(\boldsymbol{\xi} \in \mathbb{R}^{10}\) and \(\boldsymbol{\rho} \in \mathbb{R}^{16}\).

Figs. 5.2 and 5.3 exhibit the PDF and CDF of the PGA at the site using the EPCE-based approach according to Eqs. 5.13 and 5.17, respectively, compared with the results from the standard PSHA method.

[Figure 5.2: PDF of PGA by the EPCE-based and standard PSHA approaches.]

[Figure 5.3: CDF of PGA by the EPCE-based and standard PSHA approaches.]

[Figure 5.4: Hazard curve by the EPCE-based and standard PSHA approaches.]

[Figure 5.5: The family of PDFs of PGA by 1000 samples of \(\boldsymbol{\rho}\) in the EPCE-based approach, compared with the PDF of PGA by the standard PSHA approach (bold blue).]

[Figure 5.6: The family of hazard curves by 1000 samples of \(\boldsymbol{\rho}\) in the EPCE-based approach, compared with the hazard curve by the standard PSHA approach (bold blue).]

[Figure 5.7: PDF of PoE that PGA > 0.07 g by the EPCE-based approach.]

In Fig.
5.4, the two hazard curves computed by the proposed and classical methods indicate that low levels of intensity are exceeded relatively often, while high intensities are rare. It can be seen that the EPCE-based approach yields a comparatively smaller PoE than the classical PSHA approach. In other words, the classical approach provides a more conservative prediction of the seismic hazard than the EPCE-based method. This result agrees with the findings that PSHA generally provides higher estimates of ground motion (Bommer and Abrahamson, 2006). Then, by sampling \(\boldsymbol{\rho}\) according to Eq. 5.18, the family of PDFs computed from the EPCE is plotted in Fig. 5.5. Based on this PDF family, the family of hazard curves is obtained by Eq. 5.20 and exhibited in Fig. 5.6. The scatter in the hazard curve family indicates that the influence of the unobserved physics and/or lack of data is not negligible. Finally, Fig. 5.7 shows the distribution of the PoE given a threshold of PGA > 0.07 g.

5.5 Concluding Remarks

We specify the various sources of uncertainty (i.e., inherent variability, unobserved physics, data error) associated with seismic hazard in a hierarchy and provide a paradigm for ground shaking predictions. Rather than assigning weights subjectively to integrate multiple hazard models, the epistemic uncertainty is described as unobserved physics and/or a lack of data, represented as uncertainty in a subscale with respect to a particular model, which is then reflected in the probabilistic model of the seismological parameters. In contrast to the controversial log-normal assumption for the ground motion intensity, a direct probabilistic mapping between the hazard curve and the various sources of uncertainty in the seismological parameters is built. The uncertainty in the ground motion depends on both the deterministic physical model that describes the mechanism of ground motion attenuation and the probabilistic model that includes the sources of uncertainty.
This can be exploited as the core idea for a comprehensive and coherent probabilistic characterization of seismic hazard problems, and for establishing needs and priorities.

With this idea in mind, we introduce an EPCE surrogate model that represents the intensity measure in terms of these hierarchical uncertainties in a single level. The estimated hazard curve reflects the effect of all sources of uncertainty. Moreover, by separating the \(\rho\)-level uncertainties, a family of hazard curves and the distribution of the PoE are obtained, which reflect the influence of unobserved physics and/or lack of data. This allows us to bring scientific discoveries to improve the model through sensitivity analysis and/or to calibrate the model with observations through Bayesian inference in future research. The whole procedure is efficient, with the computational cost lying only in constructing the EPCE. The performance of the proposed methodology has been assessed using a seismic hazard example; it indicates a smaller exceedance probability than the standard PSHA method.

Chapter 6

Stochastic Multiscale Modeling

6.1 Introduction

Composite materials have been increasingly employed in engineering and scientific practice due to their superior mechanical and physical behaviors. Fiber-reinforced polymers, woven fiber composites, and ceramic matrix composites, for example, have found wide use in the structural, aerospace, and mechanical industries. Typically, the coarse-scale behavior of composite materials is of interest for engineering design and optimization purposes, while it is governed by the underlying fine-scale mechanisms. Thus, an ever-growing number of multiscale models have been developed to explore material behavior by accounting for fine-scale influence, instead of classical single-scale continuum mechanics (Fish et al., 2021; Aluko et al., 2017; Wang and Sun, 2018; Marfia and Sacco, 2018).
The choice of a multiscale method necessitates a trade-off between higher model fidelity with rising complexity, and reduced precision with increased uncertainty (Fish et al., 2021). That is, a completely deterministic modeling of the material system from ab initio theories (e.g., from quantum mechanics) is computationally prohibitive and not always applicable. Thus, a practical modeling strategy is to start from an intermediate scale (e.g., the crystal scale) that requires empirical input (e.g., slip parameters) (Liu et al., 2021; Papadrakakis and Stefanou, 2014). In such cases, once a multiscale model is selected, it is crucial to investigate the uncertainty associated with the corresponding unobserved physics at finer scales. Typically, the effect of the missing mechanisms is characterized by the uncertainty in the model parameters at the finest observable scale, and described by the probabilistic model of these parameters estimated directly from data or knowledge. Making the analysis more complicated, insufficient data can cause additional error (i.e., statistical error) in the probabilistic model. On the other hand, in any selected computational multiscale model there always exists a discrepancy between the model prediction and reality (i.e., model error) in each individual mechanism. These model errors propagate via upscaling, along with errors in manufacturing processes, and eventually have significant effects on the variation of the material behavior due to the complex and nonlinear nature of composites. Therefore, to establish adequate design margins and achieve design criteria with reliable confidence, it is crucial but challenging to properly assess all these sources of uncertainty in the multiscale hierarchy. Furthermore, the design of composite material systems with desired behaviors and the reduction of the parameter space both entail targeting individual mechanisms across and within scales (Kohler et al., 2004; Aranda et al., 2014).
This poses the additional challenge of percolating the change in each individual sub-system through the hierarchy. Sensitivity analysis is an assessment of how the uncertainty in the model output can be apportioned to the sources of uncertainty in the model input (Saltelli, 2002). The complexity of multiscale modeling makes it challenging, and requires significant computational resources, to understand the sensitivity of the overall material response to individual mechanisms (Fish and Ghouali, 2001).

Uncertainty quantification (UQ) is the rational process of managing the interplay between data, models, and decisions (Ghanem et al., 2017). In the past decade, increasing research has been conducted to integrate multiscale models with UQ methodologies and algorithms in order to provide reliable probabilistic predictions. Monte Carlo (MC) sampling (Spanos and Kontsos, 2008; Hiriyur et al., 2011; DeVita et al., 2005; Fu et al., 2005) is straightforward but computationally expensive due to the "curse of dimensionality" and the nested nature of the uncertainty sources in the multiscale hierarchy. More efficient sampling techniques used in multiscale modeling include the polynomial chaos approach (Mehrez et al., 2018; Ghauch et al., 2019; Clément et al., 2013; Greene et al., 2011; Tootkaboni and Graham-Brady, 2010; Wu and Fish, 2010), the stochastic collocation method (Kouchmeshky and Zabaras, 2010), and multi-response Gaussian processes (Bostanabad et al., 2018). Most prior works focus on the uncertainty propagation of random model parameters through the hierarchy; however, few have rigorously accounted for modeling errors, and there is even less work on sensitivity analysis with respect to individual mechanisms and modeling errors.
We claim that a systematic stochastic multiscale modeling approach must address the following sources of uncertainty associated with a well-defined hierarchical system: (a) parametric uncertainty: the uncertainty inherent in the input variables at the finest observable scale, denoted by \(\boldsymbol{K}\), which can be directly measured; (b) statistical error: the uncertainty associated with the statistical estimation of the probability model of \(\boldsymbol{K}\), which essentially reflects the error in assessing the effects of hidden physical mechanisms and can be reduced by acquiring more data on \(\boldsymbol{K}\); (c) model error: the uncertainty related to the discrepancy between the prediction of a physical sub-model and the reality of the individual mechanism.

In this chapter, we present a framework that rigorously and efficiently quantifies the hierarchical uncertainties and modeling errors from the aforementioned sources (a), (b), and (c), and exploits both their individual and combined effects across scales on the full probability distribution and the associated failure probability of the material behaviors of interest. All types and sources of the nested uncertainties are propagated in a single level using the extended polynomial chaos expansion (EPCE), allowing for explicit specification of each underlying source of uncertainty and a considerable reduction in the number of simulations (Wang and Ghanem, 2021, 2019). A basis adaptation scheme (Tipireddy and Ghanem, 2014) is implemented on the EPCE to cope with the "curse of dimensionality" through a reduction of the stochastic dimension while retaining the response statistics, in order to further enhance computational efficiency. Post-processing the EPCE surrogate model, sensitivity analysis is performed to investigate the impacts of modeling errors on the response PDF. The sensitivities of the response PDF with respect to the distribution parameters of the input variables are evaluated through the EPCE-based kernel density estimation (KDE) construction of Wang and
Ghanem (2021). In this chapter, we extend the EPCE-KDE construction to assess the sensitivities of the response PDF to model parameters in the hierarchy. By integrating the tail of the aforementioned sensitivities, the reliability sensitivity is also computed directly.

The stochastic multiscale framework proposed in this chapter has five key features: (1) coherent: all uncertainties across scales, including parametric uncertainties, statistical errors, and model errors, are characterized and propagated systematically; (2) efficient: the computational cost of the full framework is that of building an EPCE surrogate model; (3) informative: various response metrics are developed to exploit the effects of each source of uncertainty; (4) general: it can be extended to complex systems that exhibit multi-scale, multi-physics, and multi-disciplinary interactions; (5) non-intrusive: it can integrate any existing deterministic multiscale model as a black box.

6.2 Modeling and Propagation of Hierarchical Uncertainties and Modeling Errors

6.2.1 Representation of modeling errors

The modeling errors taken into account in this framework include statistical error and model error. The following subsections introduce these two types of error, respectively.

6.2.1.1 Statistical error

Given a well-defined physical model, the random vector \(\boldsymbol{K}\) defines its finest observable scale. That is, the uncertainty in \(\boldsymbol{K}\) is essentially caused by the unobserved mechanisms at finer scales. Ideally, we can imagine that if the finest-scale physics underlying \(\boldsymbol{K}\) were observed, then \(\boldsymbol{K}\) would no longer be needed and \(X\) could be precisely described by a deterministic mapping from the finest-scale physics. Clearly, this perfect mapping will never be achieved. In practice, we often use experimental data or knowledge to estimate a distribution of \(\boldsymbol{K}\) (i.e., the probabilistic model) at the finest observable scale, to account for the effect of hidden mechanisms in a probabilistic manner.
In many cases, additional uncertainty is caused by the limited and noisy data collected for \(\boldsymbol{K}\). This produces error in the probabilistic model, which is defined as statistical error. The mapping of the vector \(\boldsymbol{K}\) from the vector \(\boldsymbol{\xi}\) depends on the probabilistic model. We denote the distribution parameters of \(\boldsymbol{K}\) by \(\boldsymbol{P}\). In general, the values and dimension of the vector \(\boldsymbol{P}\) depend on the source of the data for \(\boldsymbol{K}\), and a set of values for \(\boldsymbol{P}\) can be estimated from a given dataset.

We introduce the \(N_P\)-dimensional vector \(\boldsymbol{P} = \{P_1, \ldots, P_{N_P}\}\) representing all the distribution parameters of the input random variables \(\boldsymbol{K}\). These parameters are typically estimated from a finite sample. Different estimation methods yield different probabilistic models for \(\boldsymbol{P}\), with Maximum Likelihood Estimates (MLE) generally yielding an asymptotically (for large sample size) Gaussian distribution with variance inversely proportional to the sample size. We represent the random vector \(\boldsymbol{P}\) in a polynomial chaos decomposition relative to a new Gaussian germ \(\rho_{\mathrm{dat}}\) independent of \(\boldsymbol{\xi}\). Thus, \(\rho_{\mathrm{dat}}\) characterizes the statistical error. Motivated by asymptotic results concerning MLE sampling distributions, we limit this PCE to a first-order expansion, resulting in a Gaussian model for \(\boldsymbol{P}\). We also assume a one-dimensional PCE representation for \(\boldsymbol{P}\), imposing a strict dependence between the different \(P_i\) and making them all linear transformations of the same scalar random variable \(\rho_{\mathrm{dat}}\). This statistical dependence between the components of \(\boldsymbol{P}\) is justified by the observation that experimental evidence that influences our estimate of any one of the \(P_i\) is likely to also affect our estimates of all other components of \(\boldsymbol{P}\).
We thus introduce \(\rho_{\mathrm{dat}}\) as a standard normal random variable independent of \(\boldsymbol{\xi} = \{\xi_1, \ldots, \xi_d\}\), and express the parameters of the input PDFs, \(P_i\), in the form

\[ P_i = \mu_{P_i} + \sigma_{P_i}\,\rho_{\mathrm{dat}}, \quad i = 1, \ldots, N_P, \tag{6.1} \]

where \(\mu_{P_i}\) is the mean of \(P_i\) and \(\sigma_{P_i}\) its standard deviation. Also, \(N_P\) denotes the number of parameters from the set \(\boldsymbol{P}\) that are presumed to be uncertain.

6.2.1.2 Model error

The model error (a.k.a. model inadequacy), which refers to the discrepancy between the prediction of the physical model and reality, results from the uncertain effects of simplifications and inaccurate model representations of the modeled phenomenon. From the multiscale point of view, the complete physical model of the multiscale system is composed of hierarchical sub-models \(\mathcal{M}(\cdot) = \{\mathcal{M}_1(\cdot), \ldots, \mathcal{M}_{N_M}(\cdot)\}\) representing all \(N_M\) sub-models in the complete model structure. The error in each of these sub-models affects the overall prediction. In contrast to the random inputs at the finest observable scale \(\boldsymbol{K}\), whose probabilistic models are estimated directly from data, samples of the random inputs at coarser (non-finest) scales, denoted by \(\boldsymbol{K_C} = \{K_{C_1}, \ldots, K_{C_{N_C}}\} \in \mathbb{R}^{N_C}\), are evaluated as outputs of lower-scale physical sub-models. It should be noted that \(N_M\) and \(N_C\) are independent and not necessarily equal.

To account for the model error in evaluating \(\boldsymbol{K_C}\) by \(\mathcal{M}(\cdot)\), we represent the error in the sub-models in terms of a vector \(\boldsymbol{\rho}_{\mathrm{mod}} \in \mathbb{R}^{N_M}\) of standard normal random variables. Assuming \(\mathcal{M}(\cdot)\) is unbiased, the estimate from \(\mathcal{M}(\cdot)\) can be regarded as the mean prediction and the model error described as Gaussian fluctuations around this mean, which results in

\[ \boldsymbol{K_C} = \boldsymbol{\mu}_{K_C} + \boldsymbol{\sigma}_{K_C}\,\boldsymbol{\rho}_{\mathrm{mod}}, \tag{6.2} \]

where \(\boldsymbol{\mu}_{K_C}\) is the vector of model evaluations from \(\mathcal{M}(\cdot)\) and \(\boldsymbol{\sigma}_{K_C}\) its standard deviation.

6.2.2 Generalized extended polynomial chaos expansion

The EPCE has been introduced in the foregoing chapters.
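Eq. 6.2 amounts to sampling each coarse-scale input as a Gaussian fluctuation around the sub-model prediction. In the sketch below, the standard deviation is set by an assumed coefficient of variation; that choice is illustrative and not prescribed by the text:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_with_model_error(mean_pred, cov, n, rng=rng):
    """Gaussian model-error fluctuation around a sub-model prediction,
    Eq. 6.2: K_C = mu_KC + sigma_KC * rho_mod, with sigma_KC set here by an
    assumed coefficient of variation `cov` (illustrative assumption)."""
    sigma = cov * abs(mean_pred)
    return mean_pred + sigma * rng.standard_normal(n)
```

In the full framework the fluctuation variable \(\rho_{\mathrm{mod}}\) is not sampled in isolation; it becomes one coordinate of the combined germ \(\boldsymbol{\eta}\) so that model error propagates through the gEPCE alongside the other uncertainties.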
To include the modeling errors in the probabilistic and physical models described in Section 6.2.1 into the PCE, and to build a new surrogate model that propagates all sources of uncertainties and modeling errors, we introduce the vector \(\boldsymbol{\rho} = (\boldsymbol{\rho}_{\mathrm{mod}}, \rho_{\mathrm{dat}}) \in \mathbb{R}^{N_M+1}\) as the "\(\rho\)-level germs". Subsequently, \(X\) can be represented as a function of \(\{\xi_1, \ldots, \xi_d\}\), \(\rho_{\mathrm{dat}}\), and \(\{\rho_{\mathrm{mod},1}, \ldots, \rho_{\mathrm{mod},N_M}\}\), which are all independent standard normal random variables. Thus, \(X\) is denoted by \(X(\boldsymbol{\eta})\), where \(\boldsymbol{\eta} = \{\xi_1, \ldots, \xi_d, \rho_1, \ldots, \rho_{N_M+1}\} \in \mathbb{R}^{d+N_M+1}\). Expressing \(X\) as a function of \(\boldsymbol{\eta}\) and representing \(X(\boldsymbol{\eta})\) in an orthogonal polynomial expansion with respect to \(\boldsymbol{\eta}\) results in

\[ X(\boldsymbol{\eta}) = \sum_{|\boldsymbol{\gamma}| \le p} X_{\boldsymbol{\gamma}}\, \psi_{\boldsymbol{\gamma}}(\boldsymbol{\eta}), \tag{6.3} \]

where \(\{X_{\boldsymbol{\gamma}}\}\) denote the EPCE coefficients; p denotes the highest order in the polynomial expansion; \(\boldsymbol{\gamma}\) is a \((d+N_M+1)\)-dimensional multi-index; and \(\{\psi_{\boldsymbol{\gamma}}\}\) represent normalized multivariate Hermite polynomials that can be expressed in terms of their univariate counterparts as

\[ \psi_{\boldsymbol{\gamma}}(\boldsymbol{\eta}) = \prod_{p=1}^{d+N_M+1} \psi_{\gamma_p}(\eta_p) = \prod_{p=1}^{d+N_M+1} \frac{h_{\gamma_p}(\eta_p)}{\sqrt{\gamma_p!}}, \tag{6.4} \]

where \(h_{\gamma_p}\) represents the one-dimensional Hermite polynomial of order \(\gamma_p\). The collection of these multivariate polynomials forms an orthogonal set with respect to the multivariate Gaussian density function.

The original version of the EPCE proposed by the authors accounts for statistical error (Wang and Ghanem, 2021), while the formulation in Eq. 6.3 extends it to include model error. Thus, for distinction, we call Eq. 6.3 the generalized EPCE (gEPCE), to emphasize the different modeling errors involved in these two surrogate models.
The dependence on \(\boldsymbol{\xi}\) can be separated from the dependence on \(\boldsymbol{\rho}\), which results in the following useful representation,

\[ X(\boldsymbol{\eta}) = X(\boldsymbol{\xi}, \boldsymbol{\rho}) = \sum_{\substack{\boldsymbol{\alpha} \in \mathbb{N}^{d},\ \boldsymbol{\beta} \in \mathbb{N}^{N_M+1} \\ |\boldsymbol{\alpha}| + |\boldsymbol{\beta}| \le p}} X_{\boldsymbol{\alpha}\boldsymbol{\beta}}\, \psi_{\boldsymbol{\alpha}}(\boldsymbol{\xi})\, \psi_{\boldsymbol{\beta}}(\boldsymbol{\rho}), \quad \boldsymbol{\xi} \in \mathbb{R}^{d},\ \boldsymbol{\rho} \in \mathbb{R}^{N_M+1}, \tag{6.5} \]

with the subscript \(\boldsymbol{\alpha}\boldsymbol{\beta}\) being a \((d+N_M+1)\)-dimensional multi-index formed as the concatenation of \(\boldsymbol{\alpha}\) and \(\boldsymbol{\beta}\). Further, separating a single \(\rho_i\) from \(\boldsymbol{\xi}\) and from all other elements of \(\boldsymbol{\rho}\) gives

\[ X(\boldsymbol{\eta}) = X(\boldsymbol{\xi}, \boldsymbol{\rho}_{-i}, \rho_i) = \sum_{\substack{\boldsymbol{\alpha} \in \mathbb{N}^{d},\ \boldsymbol{\omega} \in \mathbb{N}^{N_M},\ t \in \mathbb{N} \\ |\boldsymbol{\alpha}| + |\boldsymbol{\omega}| + t \le p}} X_{\boldsymbol{\alpha}\boldsymbol{\omega}t}\, \psi_{\boldsymbol{\alpha}}(\boldsymbol{\xi})\, \psi_{\boldsymbol{\omega}}(\boldsymbol{\rho}_{-i})\, \psi_t(\rho_i), \tag{6.6} \]

with the subscript \(\boldsymbol{\alpha}\boldsymbol{\omega}t\) being a \((d+N_M+1)\)-dimensional multi-index formed as the concatenation of \(\boldsymbol{\alpha}\), \(\boldsymbol{\omega}\), and t, where \(\boldsymbol{\rho}_{-i}\) denotes all the elements in \(\boldsymbol{\rho}\) except \(\rho_i\).

The EPCE coefficients \(X_{\boldsymbol{\gamma}}\) are estimated using quadrature rules for the multidimensional integrals as

\[ X_{\boldsymbol{\gamma}} = \sum_{q_\eta \in \mathcal{Q}_\eta} X(\boldsymbol{\eta}_{q_\eta})\, \psi_{\boldsymbol{\gamma}}(\boldsymbol{\eta}_{q_\eta})\, w_{q_\eta}, \quad |\boldsymbol{\gamma}| \le p, \tag{6.7} \]

where \(\mathcal{Q}_\eta\) is the set of sparse quadrature points for the EPCE, \(q_\eta\) is a quadrature node in \(\mathcal{Q}_\eta\), and \(w_{q_\eta}\) is the associated weight. The quadrature level required to achieve a preset accuracy in approximating any \(X_{\boldsymbol{\gamma}}\) increases with the order of the associated polynomial \(\psi_{\boldsymbol{\gamma}}\). For a given polynomial order p and germ dimension \(d+N_M+1\), the number of these EPCE coefficients, denoted by \(N_{\mathrm{ec}}\), is

\[ N_{\mathrm{ec}} = \frac{(d+N_M+1+p)!}{(d+N_M+1)!\, p!}. \tag{6.8} \]

In this fashion, the dependence on \(\rho_i\) can be separated from the dependence on the other germs in \(\boldsymbol{\eta}\), which is readily exploited to perform sensitivity analysis with respect to each individual germ in \(\boldsymbol{\rho}\). To pursue computational efficiency, we use the basis adaptation introduced in the foregoing chapter to reduce the number of simulations required to compute the coefficients of the gEPCE.

6.3 Methods of Quantifying the Effects of Modeling Errors and Random Parameters

We rely on the KDE to build the PDF of the output.
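Eq. 6.8 is the usual total-degree count and equals the binomial coefficient \(\binom{d+N_M+1+p}{p}\); a one-line helper makes explicit how quickly the basis grows with dimension and order, which is what motivates the basis adaptation step:

```python
from math import comb

def n_gepce_coeffs(d, n_m, p):
    """Number of gEPCE coefficients of Eq. 6.8:
    (d + N_M + 1 + p)! / ((d + N_M + 1)! p!) = C(d + N_M + 1 + p, p)."""
    return comb(d + n_m + 1 + p, p)
```

For example, d = 10 finest-scale inputs with N_M = 3 sub-models (values chosen here only for illustration) already require 120 coefficients at order p = 2 and 680 at p = 3.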
The KDE can be used to build the PDF of the QoI using samples generated from the gEPCE. This PDF indicates the combined influence of all uncertainties and modeling errors. To investigate the individual effect of each source of the foregoing uncertainties, the gEPCE and the KDE are the two key ingredients used to derive the sensitivity measures presented in Section 6.3.1. Sections 6.3.2 and 6.3.3 then introduce how we investigate the effects of statistical and model errors, respectively, using the derived sensitivities. Section 6.3.4 describes the approach to studying the sensitivity of the QoI to model parameters at the finest observable scale.

6.3.1 Stochastic sensitivity measures

The gEPCE in Eq. 6.3 is used and integrated with the KDE, resulting in

\[ f_X(x) = \frac{1}{Nh} \sum_{j=1}^{N} K\!\left(\frac{x - \sum_{|\boldsymbol{\gamma}| \le p} X_{\boldsymbol{\gamma}}\, \psi_{\boldsymbol{\gamma}}(\boldsymbol{\eta}^{(j)})}{h}\right), \quad \boldsymbol{\eta}^{(j)} \in \mathbb{R}^{d+N_M+1}. \tag{6.9} \]

This expression for \(f_X\) involves summation over all \((d+N_M+1)\) stochastic dimensions of \(\boldsymbol{\eta}\) and therefore does not express dependence on any of them. In order to retain the sensitivity with respect to the \(\rho\)-level germs \(\boldsymbol{\rho} = (\boldsymbol{\rho}_{\mathrm{mod}}, \rho_{\mathrm{dat}}) \in \mathbb{R}^{N_M+1}\), we make use of Eq. 6.6 and replace Eq. 6.9 by

\[ f_X(x; \rho_i) = \frac{1}{Nh} \sum_{j=1}^{N} K\!\left(\frac{1}{h}\Bigg(x - \sum_{\substack{\boldsymbol{\alpha} \in \mathbb{N}^{d},\ \boldsymbol{\omega} \in \mathbb{N}^{N_M},\ t \in \mathbb{N} \\ |\boldsymbol{\alpha}| + |\boldsymbol{\omega}| + t \le p}} X_{\boldsymbol{\alpha}\boldsymbol{\omega}t}\, \psi_{\boldsymbol{\alpha}}(\boldsymbol{\xi}^{(j)})\, \psi_{\boldsymbol{\omega}}(\boldsymbol{\rho}_{-i}^{(j)})\, \psi_t(\rho_i)\Bigg)\right), \tag{6.10} \]

where \(\boldsymbol{\rho}_{-i}\) refers to all elements in the vector \(\boldsymbol{\rho}\) except \(\rho_i\).

By taking the partial derivative of \(f_X(x; \rho_i)\) in Eq.
6.10 with respect to \(\rho_i \in \boldsymbol{\rho}\), the sensitivity of the PDF to an individual \(\rho\)-level germ, denoted by \(f_{X,\rho_i}(x; \rho_i)\), is given by

\[ f_{X,\rho_i}(x; \rho_i) = \frac{\partial f_X(x; \rho_i)}{\partial \rho_i} = \frac{1}{Nh^2} \sum_{j=1}^{N} \Bigg[ \frac{x - X(\boldsymbol{\xi}^{(j)}, \boldsymbol{\rho}_{-i}^{(j)}, \rho_i)}{h}\, K\!\left(\frac{x - X(\boldsymbol{\xi}^{(j)}, \boldsymbol{\rho}_{-i}^{(j)}, \rho_i)}{h}\right) \sum_{\substack{\boldsymbol{\alpha} \in \mathbb{N}^{d},\ \boldsymbol{\omega} \in \mathbb{N}^{N_M},\ t \in \mathbb{N} \\ |\boldsymbol{\alpha}| + |\boldsymbol{\omega}| + t \le p}} X_{\boldsymbol{\alpha}\boldsymbol{\omega}t}\, \psi_{\boldsymbol{\alpha}}(\boldsymbol{\xi}^{(j)})\, \psi_{\boldsymbol{\omega}}(\boldsymbol{\rho}_{-i}^{(j)})\, \frac{\partial \psi_t(\rho_i)}{\partial \rho_i} \Bigg], \tag{6.11} \]

where the Gaussian form of the kernel has been used. Since the germs \(\boldsymbol{\rho}\) correspond to \(\boldsymbol{K_C}\) or \(\boldsymbol{P}\), the sensitivity of the response PDF with respect to \(\boldsymbol{K_C}\) or \(\boldsymbol{P}\) can then be derived through directional derivatives in the Gaussian Hilbert space.

6.3.2 Influence of statistical error

In this section, we investigate the influence on the response PDF of uncertainty in the parameters \(\boldsymbol{P}\) characterizing the probabilistic model of the random inputs at the finest observable scale. Two different approaches are explored to that end. The first approach is based on a sensitivity analysis, and the second provides an explicit expression of the probability measure on the PDF induced by uncertainty in \(\boldsymbol{P}\).

6.3.2.1 Total variation in the PDF of the QoI due to statistical error

By making use of Eq. 6.10 in terms of \(\rho_{\mathrm{dat}}\) and taking the directional derivative of \(f_X(x; \rho_{\mathrm{dat}})\) with respect to \(P_i\), the sensitivity of the PDF to the distribution parameters of the inputs, denoted by \(f_{X,P_i}\), is given by

\[ f_{X,P_i}(x; \rho_{\mathrm{dat}}) = \frac{\partial f_X(x; \rho_{\mathrm{dat}})}{\partial P_i} = \sigma_{P_i}\, \frac{\partial f_X(x; \rho_{\mathrm{dat}})}{\partial \rho_{\mathrm{dat}}}, \quad i = 1, \ldots, N_P, \tag{6.12} \]

and using the sensitivity in Eq. 6.11 with respect to \(\rho_{\mathrm{dat}}\) in Eq. 6.12 results in

\[ f_{X,P_i}(x; \rho_{\mathrm{dat}}) = \frac{\sigma_{P_i}}{N h^2} \sum_{j=1}^{N} \Bigg[ \frac{x - X(\boldsymbol{\xi}^{(j)}, \boldsymbol{\rho}_{-\mathrm{dat}}^{(j)}, \rho_{\mathrm{dat}})}{h}\, K\!\left(\frac{x - X(\boldsymbol{\xi}^{(j)}, \boldsymbol{\rho}_{-\mathrm{dat}}^{(j)}, \rho_{\mathrm{dat}})}{h}\right) \sum_{\substack{\boldsymbol{\alpha} \in \mathbb{N}^{d},\ \boldsymbol{\omega} \in \mathbb{N}^{N_M},\ t \in \mathbb{N} \\ |\boldsymbol{\alpha}| + |\boldsymbol{\omega}| + t \le p}} X_{\boldsymbol{\alpha}\boldsymbol{\omega}t}\, \psi_{\boldsymbol{\alpha}}(\boldsymbol{\xi}^{(j)})\, \psi_{\boldsymbol{\omega}}(\boldsymbol{\rho}_{-\mathrm{dat}}^{(j)})\, \frac{\partial \psi_t(\rho_{\mathrm{dat}})}{\partial \rho_{\mathrm{dat}}} \Bigg], \quad i = 1, \ldots, N_P, \tag{6.13} \]

where \(\boldsymbol{\rho}_{-\mathrm{dat}}\) refers to all elements in the vector \(\boldsymbol{\rho}\) except \(\rho_{\mathrm{dat}}\), and we have relied on the Gaussian form of the kernel. Eq.
6.13 provides a stochastic representation of the sensitivity of the PDF with respect to the probabilistic parameters of the input random variables. This sensitivity measure has the property that, for each value of $\rho_{\mathrm{dat}}$, its integral over $x$ is equal to zero.

Given our probabilistic model for $P_i$ in accordance with Eq. 6.1, an interval for $P_i$ can be associated with a pre-specified confidence level $c_i$. We note that this confidence level could equally well have been specified on $\rho_{\mathrm{dat}}$. Denoting the upper and lower bounds of the associated confidence interval by $u_i$ and $l_i$, respectively, permits us to express the precision of $P_i$ as

\[
\Delta P_i = u_i - l_i,\qquad i=1,\dots,N_P, \tag{6.14}
\]

which can be used to develop the induced precision on $f_X(x)$. Specifically, using Eq. 6.13 results in

\[
\Delta f_X(x;\rho_{\mathrm{dat}})=\sum_{i=1}^{N_P} f_{X,P_i}(x;\rho_{\mathrm{dat}})\,\Delta P_i. \tag{6.15}
\]

Noting the recurrence relation for the derivative of the univariate Hermite polynomials,

\[
h'_n(x)=x\,h_n(x)-h_{n+1}(x), \tag{6.16}
\]

and taking the mathematical expectation of the derivative of $f_X$ relative to $P_i$ results in the following expression for the expected value of $\Delta f_X(x;\rho_{\mathrm{dat}})$, where $\langle\cdot\rangle$ denotes the expectation operator:

\[
\langle\Delta f_X(x;\cdot)\rangle=\frac{\sigma_{P_i}}{Nh^2}\sum_{j=1}^{N}\left\langle\frac{x-X(\boldsymbol{\xi}^{(j)},\boldsymbol{\rho}^{(j)}_{-\mathrm{dat}},\rho_{\mathrm{dat}})}{h}\,K_h\!\left(\frac{x-X(\boldsymbol{\xi}^{(j)},\boldsymbol{\rho}^{(j)}_{-\mathrm{dat}},\rho_{\mathrm{dat}})}{h}\right)\sum_{|\boldsymbol{\alpha}|+|\boldsymbol{\omega}|+|\tau|\le p}X_{\boldsymbol{\alpha}\boldsymbol{\omega}\tau}\,\psi_{\boldsymbol{\alpha}}(\boldsymbol{\xi}^{(j)})\,\psi_{\boldsymbol{\omega}}(\boldsymbol{\rho}^{(j)}_{-\mathrm{dat}})\,\frac{\partial\psi_{\tau}(\rho_{\mathrm{dat}})}{\partial\rho_{\mathrm{dat}}}\right\rangle,\qquad i=1,\dots,N_P. \tag{6.17}
\]

The stochastic sensitivity approach thus captures the effect of the statistical error characterizing the uncertainty in the distribution parameters $P_i$ on the total variation of the PDF of the QoI.

6.3.2.2 Influence of statistical error on failure probability

A stochastic model for the PDF of $X$ has already been developed in Eqs. 6.9 and 6.10. While these equations were used above to characterize sensitivity measures, in this section they are used to characterize confidence in estimates of failure probability.
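The recurrence in Eq. 6.16 holds for the probabilists' Hermite polynomials and is easy to verify numerically. The sketch below is a hedged illustration (not part of the dissertation's implementation) using NumPy's `hermite_e` module, which implements exactly this polynomial family.

```python
import numpy as np
from numpy.polynomial import hermite_e as He  # probabilists' Hermite polynomials

def he_n(n, x):
    """Evaluate He_n(x)."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    return He.hermeval(x, c)

def he_n_deriv(n, x):
    """Evaluate d/dx He_n(x) via the exact polynomial derivative."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    return He.hermeval(x, He.hermeder(c))

# check the recurrence h'_n(x) = x*h_n(x) - h_{n+1}(x) (Eq. 6.16)
x = np.linspace(-2.0, 2.0, 9)
for n in range(1, 6):
    lhs = he_n_deriv(n, x)
    rhs = x * he_n(n, x) - he_n(n + 1, x)
    assert np.allclose(lhs, rhs)
print("recurrence verified for n = 1..5")
```

This identity is what allows the derivative of the gEPCE with respect to a germ to be written in terms of the same Hermite basis, so that Eqs. 6.13 and 6.17 can be evaluated by post-processing the expansion coefficients.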
The gEPCE allows us to separate the germ $\rho_{\mathrm{dat}}$ from $\boldsymbol{\xi}$. The PDF of $X$ can then be expressed in a more suggestive form than Eq. 6.10 as

\[
f_X(x;\rho_{\mathrm{dat}})=\frac{1}{Nh}\sum_{j=1}^{N}K_h\!\left(\frac{x-\sum_{|\boldsymbol{\gamma}|\le p}X_{\boldsymbol{\gamma}}\,\psi_{\boldsymbol{\gamma}}(\boldsymbol{\xi}^{(j)},\boldsymbol{\rho}^{(j)}_{-\mathrm{dat}},\rho_{\mathrm{dat}})}{h}\right), \tag{6.18}
\]

where $\psi_{\boldsymbol{\gamma}}(\boldsymbol{\xi}^{(j)},\boldsymbol{\rho}^{(j)}_{-\mathrm{dat}},\rho_{\mathrm{dat}})$ indicates that the polynomial $\psi_{\boldsymbol{\gamma}}$ is evaluated at the sample whose first $d$ and next $N_M$ components are specified by $\boldsymbol{\xi}^{(j)}$ and $\boldsymbol{\rho}^{(j)}_{-\mathrm{dat}}$, respectively, while the $\rho_{\mathrm{dat}}$ component remains a free variable. It can be seen that Eq. 6.9 is the distribution of the family of PDFs generated by Eq. 6.18, marginalized over $\rho_{\mathrm{dat}}$.

One important application of the foregoing ideas is the characterization of failure probabilities, themselves, as random variables. In many applications, the failure probability $P_f$ is defined as the probability of reaching or exceeding a critical threshold and is of great significance. This probability is typically predicated on pre-specified probabilistic models for the input parameters and thus lends itself to the present analysis. To simplify the presentation, and without loss of generality, we assume a scalar description of the limit state in terms of a critical threshold for the QoI, denoted by $X_c$. The failure probability $P_f$ is then given by the integral

\[
P_f(\rho_{\mathrm{dat}})=\int_{x\ge X_c} f_X(x;\rho_{\mathrm{dat}})\,dx, \tag{6.19}
\]

where we have explicitly expressed the dependence of $P_f$ on the epistemic input uncertainty encoded in $\rho_{\mathrm{dat}}$. The PDF of $P_f$ computed by KDE is expressed as

\[
f_{P_f}(x)=\frac{1}{N_{\rho_{\mathrm{dat}}}h_f}\sum_{j=1}^{N_{\rho_{\mathrm{dat}}}}K_h\!\left(\frac{x-P_f^{(j)}}{h_f}\right), \tag{6.20}
\]

where $N_{\rho_{\mathrm{dat}}}$ is the number of samples of $\rho_{\mathrm{dat}}$ used in estimating the KDE, and $P_f^{(j)}$, $j=1,\dots,N_{\rho_{\mathrm{dat}}}$, is the $j$-th realization of the failure probability, evaluated at $\rho_{\mathrm{dat}}^{(j)}$. The Gaussian kernel is used for $K_h$, with the bandwidth $h_f$ determined by Silverman's rule as $h_f=(4\sigma_f^5/3N_{\rho_{\mathrm{dat}}})^{1/5}$, where $\sigma_f$ is the standard deviation estimated from the $N_{\rho_{\mathrm{dat}}}$ failure probability samples.
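Eqs. 6.19 and 6.20 can be sketched as follows. The conditional model of the QoI below is a toy Gaussian stand-in (its mean shift with $\rho_{\mathrm{dat}}$, the threshold, and all sample sizes are assumptions for illustration, not the dissertation's surrogate); the KDE step with Silverman's bandwidth follows the text.

```python
import numpy as np

rng = np.random.default_rng(1)
X_c = 2.5  # critical threshold for the QoI (Eq. 6.19)

def trapz(y, x):
    # simple trapezoidal rule (kept local to avoid NumPy-version differences)
    return float(((y[1:] + y[:-1]) * np.diff(x)).sum() / 2.0)

def pf_given_rho(rho_dat, n_inner=2000):
    """Hypothetical conditional QoI: Gaussian with a rho_dat-dependent mean.
    Returns a Monte Carlo estimate of Eq. 6.19 over the remaining germs."""
    x = rng.normal(2.0 + 0.1 * rho_dat, 0.4, n_inner)
    return np.mean(x >= X_c)

# one failure-probability realization per draw of the data-level germ
pf = np.array([pf_given_rho(r) for r in rng.standard_normal(500)])

# Silverman bandwidth for the KDE of P_f (Eq. 6.20)
h_f = (4.0 * pf.std(ddof=1) ** 5 / (3.0 * pf.size)) ** 0.2
grid = np.linspace(pf.min() - 3 * h_f, pf.max() + 3 * h_f, 300)
f_Pf = np.exp(-0.5 * ((grid[:, None] - pf[None, :]) / h_f) ** 2).sum(axis=1) \
       / (pf.size * h_f * np.sqrt(2 * np.pi))
```

The resulting `f_Pf` is the analogue of the curves plotted later in Fig. 6.5: a distribution over failure probabilities induced purely by the statistical error encoded in $\rho_{\mathrm{dat}}$.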
6.3.3 Influence of model error

We explore the impact of the error in each sub-model of the overall physical model on the PDF of the QoI. This impact is reflected in the parameters that are evaluated from a lower scale by these sub-models. In this section we therefore present how to investigate the sensitivity of the response PDF, and of its associated failure probability, with respect to the non-finest-scale parameters.

6.3.3.1 Sensitivity of the PDF of the QoI with respect to model parameters in non-finest scales

Making use of Eq. 6.10 in terms of $\rho_{\mathrm{mod}_i}$ and taking the directional derivative of $f_X(x;\rho_{\mathrm{mod}_i})$ with respect to $K_{C_i}$, the sensitivity of the PDF, denoted by $f_{X,K_{C_i}}(x;\rho_{\mathrm{mod}_i})$, is given by

\[
f_{X,K_{C_i}}(x;\rho_{\mathrm{mod}_i})=\frac{\partial f_X(x;\rho_{\mathrm{mod}_i})}{\partial K_{C_i}}=\sigma_{K_{C_i}}\frac{\partial f_X(x;\rho_{\mathrm{mod}_i})}{\partial\rho_{\mathrm{mod}_i}}, \tag{6.21}
\]

where $K_{C_i}=\mu_{K_{C_i}}+\sigma_{K_{C_i}}\rho_{\mathrm{mod}_i}$, $\sigma_{K_{C_i}}=10\%\,\mu_{K_{C_i}}$, and $\mu_{K_{C_i}}$ is estimated from the corresponding fine-scale sub-model. Substituting the sensitivity in Eq. 6.11 with respect to $\rho_{\mathrm{mod}_i}$ into Eq. 6.21 results in

\[
f_{X,K_{C_i}}(x;\rho_{\mathrm{mod}_i})=\frac{\sigma_{K_{C_i}}}{Nh^2}\sum_{j=1}^{N}\left[\frac{x-X(\boldsymbol{\xi}^{(j)},\boldsymbol{\rho}^{(j)}_{-\mathrm{mod}_i},\rho_{\mathrm{mod}_i})}{h}\,K_h\!\left(\frac{x-X(\boldsymbol{\xi}^{(j)},\boldsymbol{\rho}^{(j)}_{-\mathrm{mod}_i},\rho_{\mathrm{mod}_i})}{h}\right)\sum_{|\boldsymbol{\alpha}|+|\boldsymbol{\omega}|+|\tau|\le p}X_{\boldsymbol{\alpha}\boldsymbol{\omega}\tau}\,\psi_{\boldsymbol{\alpha}}(\boldsymbol{\xi}^{(j)})\,\psi_{\boldsymbol{\omega}}(\boldsymbol{\rho}^{(j)}_{-\mathrm{mod}_i})\,\frac{\partial\psi_{\tau}(\rho_{\mathrm{mod}_i})}{\partial\rho_{\mathrm{mod}_i}}\right],\qquad i=1,\dots,N_M, \tag{6.22}
\]

where $\boldsymbol{\rho}_{-\mathrm{mod}_i}$ refers to all elements of the vector $\boldsymbol{\rho}$ except $\rho_{\mathrm{mod}_i}$, and we relied on the Gaussian form of the kernel. Taking the mathematical expectation of the sensitivity in Eq. 6.22 with respect to $\rho_{\mathrm{mod}_i}$ defines the expected value of $f_{X,K_{C_i}}(x;\rho_{\mathrm{mod}_i})$, denoted by the function $\delta_{K_{C_i}}(x)$:

\[
\delta_{K_{C_i}}(x)\triangleq \mathbb{E}_{\rho_{\mathrm{mod}_i}}\!\left[f_{X,K_{C_i}}(x;\rho_{\mathrm{mod}_i})\right]. \tag{6.23}
\]

To derive the analytical expression of $\delta_{K_{C_i}}(x)$, noting the recurrence relation for the derivative of the univariate Hermite polynomials in Eq. 6.16 and taking the mathematical expectation of the derivative of $f_X$ relative to $K_{C_i}$ results in the following expression to replace Eq.
6.23, where $\langle\cdot\rangle$ denotes the expectation operator:

\[
\delta_{K_{C_i}}(x)=\frac{\sigma_{K_{C_i}}}{Nh^2}\sum_{j=1}^{N}\left\langle\frac{x-X(\boldsymbol{\xi}^{(j)},\boldsymbol{\rho}^{(j)}_{-\mathrm{mod}_i},\rho_{\mathrm{mod}_i})}{h}\,K_h\!\left(\frac{x-X(\boldsymbol{\xi}^{(j)},\boldsymbol{\rho}^{(j)}_{-\mathrm{mod}_i},\rho_{\mathrm{mod}_i})}{h}\right)\sum_{|\boldsymbol{\alpha}|+|\boldsymbol{\omega}|+|\tau|\le p}X_{\boldsymbol{\alpha}\boldsymbol{\omega}\tau}\,\psi_{\boldsymbol{\alpha}}(\boldsymbol{\xi}^{(j)})\,\psi_{\boldsymbol{\omega}}(\boldsymbol{\rho}^{(j)}_{-\mathrm{mod}_i})\,\frac{\partial\psi_{\tau}(\rho_{\mathrm{mod}_i})}{\partial\rho_{\mathrm{mod}_i}}\right\rangle,\qquad i=1,\dots,N_M. \tag{6.24}
\]

The sensitivity index function $\delta_{K_{C_i}}(x)$ is a functional global sensitivity index with the following properties:
(a) the sensitivity index curves with respect to all parameters, across and within scales, are comparable (the curves for all $K_{C_i}$ can be plotted on the same axes for straightforward comparison);
(b) the ranking of importance of the distribution parameters may change at different values of the QoI;
(c) the sign of the sensitivity index function indicates whether the PDF of the QoI increases or decreases with respect to the input variable; and
(d) the net area under each sensitivity index curve is equal to zero.
The sensitivity index function thus provides a straightforward and efficient stochastic representation of the sensitivity of the PDF of the QoI with respect to all random model parameters in non-finest scales, accounting for the model error associated with the sub-models on which they rely.

6.3.3.2 Sensitivity of failure probability with respect to model parameters in non-finest scales

The sensitivity of the failure probability helps to identify the importance ranking of all sub-model parameters in non-finest scales within the full model structure, so as to guide reliability-based design and risk assessment. Similar to Eq. 6.19, the failure probability dependent on $\rho_{\mathrm{mod}_i}$ is given by the integral

\[
F(\rho_{\mathrm{mod}_i})=\int_{x\ge X_c} f_X(x;\rho_{\mathrm{mod}_i})\,dx. \tag{6.25}
\]

By taking the directional derivative of $F(\rho_{\mathrm{mod}_i})$ in Eq.
6.25 with respect to $K_{C_i}$, the sensitivity of the failure probability to the non-finest-scale model parameters, denoted by $F_{K_{C_i}}(\rho_{\mathrm{mod}_i})$, is given by

\[
F_{K_{C_i}}(\rho_{\mathrm{mod}_i})=\frac{\partial F(\rho_{\mathrm{mod}_i})}{\partial K_{C_i}}=\sigma_{K_{C_i}}\frac{dF(\rho_{\mathrm{mod}_i})}{d\rho_{\mathrm{mod}_i}},\qquad i=1,\dots,N_M, \tag{6.26}
\]

and substituting the formulation in Eq. 6.25 into Eq. 6.26 results in

\[
F_{K_{C_i}}(\rho_{\mathrm{mod}_i})=\sigma_{K_{C_i}}\frac{d}{d\rho_{\mathrm{mod}_i}}\int_{x\ge X_c} f_X(x;\rho_{\mathrm{mod}_i})\,dx,\qquad i=1,\dots,N_M. \tag{6.27}
\]

By Leibniz's rule, Eq. 6.27 can be equivalently computed as

\[
F_{K_{C_i}}(\rho_{\mathrm{mod}_i})=\sigma_{K_{C_i}}\int_{x\ge X_c}\frac{\partial}{\partial\rho_{\mathrm{mod}_i}}f_X(x;\rho_{\mathrm{mod}_i})\,dx,\qquad i=1,\dots,N_M. \tag{6.28}
\]

Making use of the formulation in Eq. 6.11 with respect to $\rho_{\mathrm{mod}_i}$ and substituting into Eq. 6.28 results in

\[
F_{K_{C_i}}(\rho_{\mathrm{mod}_i})=\frac{\sigma_{K_{C_i}}}{Nh^2}\int_{x\ge X_c}\sum_{j=1}^{N}\left[\frac{x-X(\boldsymbol{\xi}^{(j)},\boldsymbol{\rho}^{(j)}_{-\mathrm{mod}_i},\rho_{\mathrm{mod}_i})}{h}\,K_h\!\left(\frac{x-X(\boldsymbol{\xi}^{(j)},\boldsymbol{\rho}^{(j)}_{-\mathrm{mod}_i},\rho_{\mathrm{mod}_i})}{h}\right)\sum_{|\boldsymbol{\alpha}|+|\boldsymbol{\omega}|+|\tau|\le p}X_{\boldsymbol{\alpha}\boldsymbol{\omega}\tau}\,\psi_{\boldsymbol{\alpha}}(\boldsymbol{\xi}^{(j)})\,\psi_{\boldsymbol{\omega}}(\boldsymbol{\rho}^{(j)}_{-\mathrm{mod}_i})\,\frac{\partial\psi_{\tau}(\rho_{\mathrm{mod}_i})}{\partial\rho_{\mathrm{mod}_i}}\right]dx,\qquad i=1,\dots,N_M, \tag{6.29}
\]

where we relied on the Gaussian form of the kernel. Eq. 6.29 provides a stochastic representation of the sensitivity of the failure probability with respect to each non-finest-scale model parameter. Taking the mathematical expectation of Eq. 6.29 relative to $K_{C_i}$ defines the expected value of $F_{K_{C_i}}(\rho_{\mathrm{mod}_i})$, denoted by the scalar $S_{K_{C_i}}$:

\[
S_{K_{C_i}}\triangleq \mathbb{E}_{\rho_{\mathrm{mod}_i}}\!\left[F_{K_{C_i}}(\rho_{\mathrm{mod}_i})\right]. \tag{6.30}
\]

The analytical expression replacing Eq. 6.30, where $\langle\cdot\rangle$ denotes the expectation operator, is

\[
S_{K_{C_i}}=\frac{\sigma_{K_{C_i}}}{Nh^2}\int_{x\ge X_c}\sum_{j=1}^{N}\left\langle\frac{x-X(\boldsymbol{\xi}^{(j)},\boldsymbol{\rho}^{(j)}_{-\mathrm{mod}_i},\rho_{\mathrm{mod}_i})}{h}\,K_h\!\left(\frac{x-X(\boldsymbol{\xi}^{(j)},\boldsymbol{\rho}^{(j)}_{-\mathrm{mod}_i},\rho_{\mathrm{mod}_i})}{h}\right)\sum_{|\boldsymbol{\alpha}|+|\boldsymbol{\omega}|+|\tau|\le p}X_{\boldsymbol{\alpha}\boldsymbol{\omega}\tau}\,\psi_{\boldsymbol{\alpha}}(\boldsymbol{\xi}^{(j)})\,\psi_{\boldsymbol{\omega}}(\boldsymbol{\rho}^{(j)}_{-\mathrm{mod}_i})\,\frac{\partial\psi_{\tau}(\rho_{\mathrm{mod}_i})}{\partial\rho_{\mathrm{mod}_i}}\right\rangle dx,\qquad i=1,\dots,N_M, \tag{6.31}
\]

where we have explicitly expressed the dependence of $S_{K_{C_i}}$ on the model parameter encoded in $\rho_{\mathrm{mod}_i}$.
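The interchange of derivative and tail integral in Eqs. 6.27-6.28, and the zero-net-area property of the PDF sensitivity noted in Section 6.3.2, can both be checked numerically on a toy conditional PDF. The Gaussian stand-in below (its mean shift with the germ, and all numbers) is a hypothetical example, not the composite model.

```python
import numpy as np

X_c = 2.5
x = np.linspace(-4.0, 10.0, 4001)
tail_mask = x >= X_c

def trapz(y, xg):
    # simple trapezoidal rule on a fixed grid
    return float(((y[1:] + y[:-1]) * np.diff(xg)).sum() / 2.0)

def f_X(xg, rho):
    """Toy conditional PDF of the QoI: Gaussian whose mean shifts with the
    model-level germ rho (stands in for f_X(x; rho_mod_i))."""
    mu = 2.0 + 0.3 * rho
    return np.exp(-0.5 * ((xg - mu) / 0.5) ** 2) / (0.5 * np.sqrt(2 * np.pi))

rho0, eps = 0.2, 1e-4
# left side of Eq. 6.27: finite-difference derivative of the tail integral
tail = lambda r: trapz(f_X(x[tail_mask], r), x[tail_mask])
lhs = (tail(rho0 + eps) - tail(rho0 - eps)) / (2 * eps)
# right side of Eq. 6.28: tail integral of the PDF sensitivity
dfdr = (f_X(x, rho0 + eps) - f_X(x, rho0 - eps)) / (2 * eps)
rhs = trapz(dfdr[tail_mask], x[tail_mask])
assert abs(lhs - rhs) < 1e-9          # Leibniz interchange
assert abs(trapz(dfdr, x)) < 1e-8     # sensitivity integrates to zero over all x
```

Increasing the germ shifts probability mass into the tail here, so both sides are positive; the full-domain integral of the sensitivity vanishes because the total probability mass is conserved.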
To normalize the indicators, the weighted $S_{K_{C_i}}$ is used to represent the reliability sensitivity index, denoted by $\kappa_{K_{C_i}}$:

\[
\kappa_{K_{C_i}}=\frac{S_{K_{C_i}}}{\sum_{j=1}^{N_M}S_{K_{C_j}}},\qquad i=1,\dots,N_M, \tag{6.32}
\]

where $\kappa_{K_{C_i}}$ is a scalar reliability sensitivity measure. Its properties include:
(a) $\sum_{i=1}^{N_M}\kappa_{K_{C_i}}=1$; and
(b) $0\le\kappa_{K_{C_i}}\le 1$.

6.3.4 Sensitivity of the QoI with respect to parameters in the finest observable scale

The sensitivity of the QoI (i.e., $X$) to the random inputs in the finest observable scale $\boldsymbol{K}$ is represented by the partial derivative $\partial X/\partial K_i$, with $K_i\in\boldsymbol{K}$ treated as an independent random variable. The detailed derivation is given in Appendix A, and the dependent case can be obtained similarly. The sensitivity of $X$ to an independent $K_i$ is expressed as

\[
\frac{\partial X}{\partial K_i}=\sum_{|\boldsymbol{\alpha}|\le p}X_{\boldsymbol{\alpha}}\,\frac{\partial\psi_{\boldsymbol{\alpha}}(\boldsymbol{\xi})}{\partial\xi_i}\,\frac{f_{K_i}(k_i)}{f_{\xi_i}(\xi_i)},\qquad i=1,\dots,d, \tag{6.33}
\]

where $f_{K_i}(k_i)$ and $f_{\xi_i}(\xi_i)$ denote the PDFs of $K_i$ and $\xi_i$, respectively; the two variables are mapped to each other through the inverse CDF.

6.4 Example: Multi-scale Car Composites Modeling

The application of the proposed framework for stochastic multiscale analysis is demonstrated in this section on a car material model developed by the General Motors Company Aitharaju (2020). The full physical model of the car material is composed of two sub-models in a hierarchy: the outputs of the fine-scale sub-model form part of the inputs of the coarse-scale sub-model. Section 6.4.1 describes the fine-scale sub-models that map properties of the fiber and resin/matrix to lamina properties. Section 6.4.2 introduces the coarse-scale sub-model, a finite element model that computes the energy absorbed by the car composite material from the lamina properties under an applied force. The implementation of the foregoing stochastic analysis approach, which accounts for the uncertainties and modeling errors in this car material model, is also described.
Finally, Section 6.4.3 presents the construction of the probabilistic models and the representation of their associated statistical error.

6.4.1 Fine-scale physical sub-models

The mechanical properties of a lamina, which depend on its constituent fiber and resin properties, are evaluated in fine-scale simulations utilizing the "rule of mixtures". The rule of mixtures can, in general, be thought of as a weighted mean that predicts the effective properties of composite materials. Owing to the complexity and strong nonlinearity of the upscaling procedures, the error in the effective properties estimated by the rule of mixtures must be taken into account. Following the gEPCE-based approach presented in this work, we introduce additive Gaussian terms to represent the error in the fine-scale sub-models; the sub-models implemented with error terms are then expressed as follows.

Longitudinal elastic modulus:
\[
E_l=E_{lf}V_f+E_m(1-V_f)+\sigma_{E_l}\rho_{E_l} \tag{6.34}
\]

Transverse elastic modulus:
\[
E_t=\frac{E_{tf}E_m}{E_mV_f+E_{tf}(1-V_f)}+\sigma_{E_t}\rho_{E_t} \tag{6.35}
\]

Lamina mass density:
\[
\rho_c=\rho_fV_f+\rho_m(1-V_f) \tag{6.36}
\]

Major Poisson's ratio:
\[
\nu_{12}=\nu_{12f}V_f+\nu_m(1-V_f)+\sigma_{\nu_{12}}\rho_{\nu_{12}} \tag{6.37}
\]

In-plane shear modulus in the major direction 1-2:
\[
G_{12}=\frac{G_{12f}G_m}{G_mV_f+G_{12f}(1-V_f)}+\sigma_{G_{12}}\rho_{G_{12}} \tag{6.38}
\]

In-plane shear modulus in direction 2-3:
\[
G_{23}=\frac{G_{23f}G_m}{G_mV_f+G_{23f}(1-V_f)}+\sigma_{G_{23}}\rho_{G_{23}} \tag{6.39}
\]

The ultimate strains associated with the strengths of the fiber and matrix/resin (i.e., from their stress-strain curves) are
\[
\varepsilon_{S_f}=\frac{S_f}{E_{lf}}, \tag{6.40}
\]
\[
\varepsilon_{S_m}=\frac{S_m}{E_m}. \tag{6.41}
\]

Longitudinal tensile strength: if $\varepsilon_{S_f}<\varepsilon_{S_m}$, let
\[
V_{f,\min}=\frac{S_m-\varepsilon_{S_f}E_m}{S_m+S_f-\varepsilon_{S_f}E_m};
\]
then
\[
S=\begin{cases} S_m(1-V_f), & \text{if } V_f<V_{f,\min},\\ S_fV_f+\varepsilon_{S_f}E_m(1-V_f), & \text{otherwise.}\end{cases}
\]
If $\varepsilon_{S_f}>\varepsilon_{S_m}$, let
\[
V_{f,\min}=\frac{\varepsilon_{S_m}E_m}{(\varepsilon_{S_f}-\varepsilon_{S_m})E_{lf}+\varepsilon_{S_m}E_m};
\]
then
\[
S=\begin{cases} \varepsilon_{S_m}E_{lf}V_f+S_m(1-V_f), & \text{if } V_f<V_{f,\min},\\ S_fV_f, & \text{otherwise,}\end{cases}
\]
and
\[
DFAIL_T=\varepsilon_S=C_{DF_T}\frac{S}{E_c}+\sigma_{DF_T}\rho_{DF_T}. \tag{6.42}
\]

Longitudinal compressive strength:
\[
S=\begin{cases} 2V_f\sqrt{\dfrac{V_fE_{lf}E_m}{3(1-V_f)}}, & \text{if } V_f<0.4,\\[2mm] \dfrac{G_m}{1-V_f}, & \text{if } V_f\ge 0.4,\end{cases}
\]
\[
DFAIL_C=\varepsilon_{Cl}=C_{DF_C}\frac{S}{E_l}+\sigma_{DF_C}\rho_{DF_C}. \tag{6.43}
\]

Transverse tensile strength:
\[
\varepsilon_{TT}=\varepsilon_{S_m}\left[1+\left(\frac{E_m}{E_{tf}}-1\right)V_f\right],\qquad
DFAIL_M=\varepsilon_{TT}+\sigma_{DF_M}\rho_{DF_M}. \tag{6.44}
\]

Plane shear strength: let the ultimate shear strain and shear stress of the matrix/resin be related as
\[
\gamma_{S_m}=\frac{\tau_{S_m}}{G_m};
\]
the effective ultimate shear strain of the composite can then be approximated as
\[
\gamma_S=\left[1+\left(\frac{G_m}{G_{12f}}-1\right)V_f\right]\gamma_{S_m},\qquad
DFAIL_S=\gamma_S+\sigma_{DF_S}\rho_{DF_S}. \tag{6.45}
\]

Transverse compressive strength:
\[
\varepsilon_{TC}=\varepsilon_{S_{mC}}\left[1+\left(\frac{E_m}{E_{tf}}-1\right)V_f\right]. \tag{6.46}
\]

Additional transverse moduli:
\[
E_c=E_t,\qquad \nu_{13}=\nu_{12}+\sigma_{\nu_{13}}\rho_{\nu_{13}},\qquad \nu_{23}=\nu_{12}\frac{E_t}{E_l}+\sigma_{\nu_{23}}\rho_{\nu_{23}},\qquad G_{13}=G_{12}+\sigma_{G_{13}}\rho_{G_{13}}. \tag{6.47}
\]

Tab. 6.1 describes the 15 input and 14 output random variables of the fine-scale sub-models, denoted by $\boldsymbol{K}_F$ and $\boldsymbol{X}_F$, respectively. In the subsequent upscaling, $\boldsymbol{X}_F$ become input parameters of the coarse-scale sub-model. The germs $\boldsymbol{\rho}_{\mathrm{mod}}\in\mathbb{R}^{12}$ consist of the standard Gaussian variables $\rho_{E_l}$, $\rho_{E_t}$, $\rho_{\nu_{13}}$, $\rho_{\nu_{23}}$, $\rho_{\nu_{12}}$, $\rho_{G_{12}}$, $\rho_{G_{13}}$, $\rho_{G_{23}}$, $\rho_{DF_M}$, $\rho_{DF_S}$, $\rho_{DF_T}$, and $\rho_{DF_C}$, which represent the error caused by the inadequacy of the fine-scale physical sub-models with respect to the 12 outputs $E_l$, $E_t$, $\nu_{13}$, $\nu_{23}$, $\nu_{12}$, $G_{12}$, $G_{13}$, $G_{23}$, $DFAIL_M$, $DFAIL_S$, $DFAIL_T$, and $DFAIL_C$. A standard deviation equal to 10% of the mean sub-model prediction is assigned, that is, $\boldsymbol{\sigma}_{K_C}=10\%\,\boldsymbol{\mu}_{K_C}$.
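A minimal sketch of Eqs. 6.34, 6.35, and 6.38 with their additive Gaussian error terms follows. The numerical inputs are illustrative values in the range of Tab. 6.3, and the 10% standard deviation follows the text; this is not GM's implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

def lamina_moduli(E_lf, E_tf, E_m, G_12f, G_m, V_f, sigma_frac=0.10):
    """Rule-of-mixtures estimates of E_l, E_t, G_12 (Eqs. 6.34, 6.35, 6.38)
    with additive Gaussian model-error terms whose standard deviation is
    sigma_frac times the mean prediction."""
    mu_El = E_lf * V_f + E_m * (1.0 - V_f)                       # Eq. 6.34
    mu_Et = E_tf * E_m / (E_m * V_f + E_tf * (1.0 - V_f))        # Eq. 6.35
    mu_G12 = G_12f * G_m / (G_m * V_f + G_12f * (1.0 - V_f))     # Eq. 6.38
    rho = rng.standard_normal(3)  # germs rho_El, rho_Et, rho_G12
    return (mu_El + sigma_frac * mu_El * rho[0],
            mu_Et + sigma_frac * mu_Et * rho[1],
            mu_G12 + sigma_frac * mu_G12 * rho[2])

# nominal GPa-scale values within the ranges of Tab. 6.3 (illustrative only)
E_l, E_t, G_12 = lamina_moduli(E_lf=240.0, E_tf=20.0, E_m=6.0,
                               G_12f=22.0, G_m=1.4, V_f=0.4)
```

Setting `sigma_frac=0` recovers the deterministic rule-of-mixtures predictions, which is a convenient way to separate the model-error germ from the mean map when debugging.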
With regard to $\rho_c$ and $E_c$: the mass density of a lamina can be accurately characterized by the weighted mean of the densities of its constituents, and the tensile and compressive elastic moduli of the composite material are equal Wu et al. (2018); model error related to these two parameters is therefore not taken into account in this work.

6.4.2 Coarse-scale physical sub-model

The constitutive properties of the car composite material are computed in the coarse-scale physical sub-model. The simulation is conducted using an LS-DYNA Murray et al. (2007) finite element model, shown in Fig. 6.1 and developed by the General Motors Company, of the three-point bending test of an eight-layer rectangular laminate that is 102.2 mm long and 25.45 mm wide. Two rigid cylinders are used as supports for the laminate, while a third cylinder applies the load at the middle of the span. The shell elements of the 8 layers were assigned the ENHANCED COMPOSITE DAMAGE material model, while the cylinders were assigned the RIGID material model. The simulations were set to end when the displacement under the loading cylinder reached 6 mm. The history of that displacement, as well as of the load at that location, was recorded and used for the subsequent computation of the relevant QoIs.

Table 6.1: Fine-scale model inputs K_F and outputs X_F
No. | Fine-scale model random inputs K_F | Fine-scale model outputs X_F
1 | Longitudinal elastic modulus of fiber: E_lf | Lamina density: rho_c
2 | Transverse elastic modulus of fiber: E_tf | Longitudinal elastic modulus of lamina: E_l
3 | Major Poisson's ratio of fiber: nu_12f | Transverse elastic modulus of lamina: E_t
4 | Shear modulus of fiber in direction 1-2: G_12f | Compressive elastic modulus of lamina: E_c
5 | Ultimate strength of fiber: S_f | Major Poisson's ratio of lamina: nu_12
6 | Fiber density: rho_f | Poisson's ratio of lamina in 1-3 direction: nu_13
7 | Ultimate shear stress of matrix: tau_Sm | Poisson's ratio of lamina in 2-3 direction: nu_23
8 | Elastic modulus of resin: E_m | In-plane shear modulus in direction 1-2: G_12
9 | Poisson's ratio of resin: nu_m | In-plane shear modulus in direction 2-3: G_23
10 | Shear modulus of resin: G_m | In-plane shear modulus in direction 1-3: G_13
11 | Strength of resin: S_m | Ultimate transverse tensile strength: DFAIL_M
12 | Resin density: rho_m | Ultimate shear strength: DFAIL_S
13 | Volume percentage of fiber: V_f | Ultimate transverse strength: DFAIL_T
14 | Scaling coefficient for tensile strength: C_DF_T | Ultimate compressive strength: DFAIL_C
15 | Scaling coefficient for compressive strength: C_DF_C |

Figure 6.1: The finite element model of the three-point bending test for the composite material.

The input and output variables of the coarse-scale model are specified in Tab. 6.2. The input random variables $\boldsymbol{K}_C$ come from two sources: (1) the output variables of the fine-scale model, $\boldsymbol{X}_F$; and (2) the geometric parameters of the laminate, namely the angle between two lamina layers $\theta$ and the lamina thickness $\delta$. That is, $\boldsymbol{K}_C=(\boldsymbol{X}_F,\theta,\delta)$.
The bottom 4 layers share the same mechanical properties and differ only in fiber orientation, which takes the values 0, 45, -45, and 90 degrees. The top 4 layers are replicas of the bottom 4 layers and are assigned the same 4 material cards. All layers have varying thickness. Our QoI is the absorbed energy $Q_A$, computed as the area under the load-displacement curve over the 0-4 mm displacement range, which characterizes the composite's performance under deformation; alternative QoIs are technically similar and should be investigated in future research.

Table 6.2: Coarse-scale model inputs K_C and output (i.e., QoI) X
No. | Coarse-scale model random inputs K_C | From X_F? | Coarse-scale output X
1 | Lamina density: rho_c | Yes | Absorbed energy: Q_A
2 | Longitudinal elastic modulus of lamina: E_l | Yes |
3 | Transverse elastic modulus of lamina: E_t | Yes |
4 | Compressive elastic modulus of lamina: E_c | Yes |
5 | Major Poisson's ratio of lamina: nu_12 | Yes |
6 | Poisson's ratio of lamina in 1-3 direction: nu_13 | Yes |
7 | Poisson's ratio of lamina in 2-3 direction: nu_23 | Yes |
8 | In-plane shear modulus in major direction 1-2: G_12 | Yes |
9 | In-plane shear modulus in major direction 2-3: G_23 | Yes |
10 | In-plane shear modulus in major direction 1-3: G_13 | Yes |
11 | Ultimate transverse tensile strength: DFAIL_M | Yes |
12 | Ultimate shear strength: DFAIL_S | Yes |
13 | Ultimate transverse strength: DFAIL_T | Yes |
14 | Ultimate compressive strength: DFAIL_C | Yes |
15 | Angle of lamina: theta | No |
16 | Thickness of lamina: delta | No |

6.4.3 Probabilistic models

The probabilistic models of the parameters in the finest observable scale, estimated from experimental data, are summarized in Tabs. 6.3 and 6.4. A distribution type is assumed for each random variable, and the distribution parameters $\boldsymbol{P}$ are then fitted from the data. In this case, the statistical error due to inadequate or missing data is characterized by the shape parameters of the Beta random variables and the bounds of the uniformly distributed variables.
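The statistical-error model just described can be mimicked in a few lines: the fitted Beta(6, 6) shape parameters on [2.88, 9.0] for the resin modulus $E_m$ (values from Tab. 6.3) are themselves perturbed before each batch of input samples is drawn. The Gaussian perturbation with 10% coefficient of variation is a sketch of the parameterization described in the text, not the dissertation's exact sampler.

```python
import numpy as np

rng = np.random.default_rng(3)

# fitted Beta(6, 6) on [2.88, 9.0] for the resin modulus E_m (Tab. 6.3)
mu_s1 = mu_s2 = 6.0
cov = 0.10          # 10% CoV on each distribution parameter (statistical error)
lo, hi = 2.88, 9.0

def sample_Em(n):
    """Draw one realization of the distribution parameters, then n samples
    of E_m from the resulting scaled Beta distribution."""
    s1 = mu_s1 * (1.0 + cov * rng.standard_normal())
    s2 = mu_s2 * (1.0 + cov * rng.standard_normal())
    # clip shape parameters away from zero so the Beta stays well defined
    return lo + (hi - lo) * rng.beta(max(s1, 0.1), max(s2, 0.1), size=n)

Em = sample_Em(10000)
```

Repeating `sample_Em` for many parameter realizations produces the family of input distributions whose propagated effect is what Figs. 6.3-6.5 later visualize for the QoI.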
The fitted values of these distribution parameters are used as the mean $\boldsymbol{\mu}_P$, and a coefficient of variation (CoV) of 10% is assigned, that is, $\boldsymbol{\sigma}_P=10\%\,\boldsymbol{\mu}_P$.

Table 6.3: Distribution parameters of fine-scale inputs K_F
Fine-scale random inputs | Type | s_1 | s_2 | b_l | b_u
Longitudinal elastic modulus of fiber: E_lf | Beta | 6 | 6 | 192 | 288
Transverse elastic modulus of fiber: E_tf | Beta | 6 | 6 | 9.6 | 30
Major Poisson's ratio of fiber: nu_12f | Beta | 6 | 6 | 0.24 | 0.48
Shear modulus in major direction 1-2: G_12f | Beta | 6 | 6 | 17.6 | 26.4
Ultimate tensile stress of fiber: S_f | Beta | 6 | 6 | 3.92 | 5.88
Density of fiber: rho_f | Beta | 6 | 6 | 1.28e-06 | 1.92e-06
Ultimate shear stress of matrix: tau_M_ult | Beta | 6 | 6 | 0.024 | 0.036
Elastic modulus of matrix: E_m | Beta | 6 | 6 | 2.88 | 9.0
Poisson's ratio of matrix: nu_m | Beta | 6 | 6 | 0.24 | 0.48
Shear modulus of matrix: G_m | Beta | 6 | 6 | 1.12 | 1.68
Ultimate tensile stress of matrix: S_m | Beta | 6 | 6 | 0.0576 | 0.0864
Density of matrix: rho_m | Beta | 6 | 6 | 8.00e-07 | 1.2e-06
Volume fraction: V_f | Uniform | 1 | 1 | 0.3 | 0.5
Scaling factor in DF_T model: C_DF_T | Beta | 6 | 6 | 0.675 | 0.825
Scaling factor in DF_C model: C_DF_C | Beta | 6 | 6 | 0.72 | 0.88

Table 6.4: Distribution parameters of inputs entering the coarse-scale model directly
Lamina random inputs | Type | b_l | b_u
Angle of lamina: theta | Uniform | -5 | 5
Thickness of lamina: delta | Uniform | 0.175 | 0.25

6.5 Results

In the car composite material problem presented here, the physical dimension is $d=17$ and the $\rho$-level dimensions are 12 ($\boldsymbol{\rho}_{\mathrm{mod}}$) and 1 ($\rho_{\mathrm{dat}}$). The total dimension of the problem is therefore 30 (i.e., $\boldsymbol{\eta}\in\mathbb{R}^{30}$). To evaluate the PDF of the QoI $Q_A$ as a function of the 30-dimensional random inputs, the KDE result based on the gEPCE of Eq. 6.9, with basis adaptation, is shown in Fig. 6.2. A four-dimensional adaptation is found to give a converged tail of the PDF and is therefore used in the remaining computations; in other words, we construct a four-dimensional gEPCE and map it back to the full dimension for the rest of the analysis.
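The KDE step just described, applied to surrogate samples of the QoI, can be sketched as below. The sample set is a Gaussian placeholder for the gEPCE output of $Q_A$; the Gaussian kernel and Silverman bandwidth follow the text.

```python
import numpy as np

def trapz(y, x):
    # simple trapezoidal rule
    return float(((y[1:] + y[:-1]) * np.diff(x)).sum() / 2.0)

def kde_pdf(samples, x_grid, h=None):
    """Gaussian kernel density estimate of the QoI PDF (Eq. 6.9)."""
    samples = np.asarray(samples)
    n = samples.size
    if h is None:  # Silverman's rule, as used for Eq. 6.20
        h = (4.0 * samples.std(ddof=1) ** 5 / (3.0 * n)) ** 0.2
    u = (x_grid[:, None] - samples[None, :]) / h
    K = np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)  # Gaussian kernel
    return K.sum(axis=1) / (n * h)

rng = np.random.default_rng(0)
qoi_samples = rng.normal(2.5, 0.5, 5000)  # stand-in for gEPCE samples of Q_A
x = np.linspace(0.0, 5.0, 201)
pdf = kde_pdf(qoi_samples, x)
print(f"KDE mass on grid: {trapz(pdf, x):.3f}")
```

With real gEPCE samples, the same call produces the curve of Fig. 6.2, and restricting the samples to a fixed value of $\rho_{\mathrm{dat}}$ produces one member of the family in Fig. 6.3.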
Figure 6.2: PDF of the absorbed energy Q_A by the adapted gEPCE.

Fig. 6.3 shows the family of PDFs at 1000 samples of $\rho_{\mathrm{dat}}$, obtained according to Eq. 6.10. The variation in these PDFs reflects the influence of the statistical error in the probabilistic models. As indicated previously, the PDF in Fig. 6.2 is simply the distribution over the ensemble of PDFs sampled in Fig. 6.3.

Figure 6.3: The family of PDFs associated with statistical error, at 1000 realizations of rho_dat.

On the other hand, given a 95% confidence interval for $\rho_{\mathrm{dat}}$ and the subsequently computed confidence intervals $[l_i,u_i]$ for each $P_i$, $i=1,\dots,N_P$, Fig. 6.4 shows the total variation in the response PDF computed by the stochastic sensitivity approach according to Eq. 6.15. The scatter in this ensemble is a reflection of the statistical error in the probabilistic model of the input parameters.

To further evaluate the effect of statistical error on the failure probability, assuming failure is associated with $Q_A\ge 2.0$, $2.5$, and $3.0$ kN·mm, respectively, the distribution of the probability of failure $P_f$ is obtained by evaluating $P_f$ for each sample in Fig. 6.3 and plotting the PDFs of the resulting values. The resulting PDFs are shown in Fig. 6.5, in which the average probability of failure computed from the tail of the PDF in Fig. 6.2 is also plotted as a red dot for comparison. Although confidence intervals for $P_f$ can easily be synthesized from this PDF, more accurate decision analysis can be developed by relying on the full PDF.

To assess the impact of model error, Fig. 6.6 shows the sensitivity index functions $\delta_{K_{C_i}}(x)$ with respect to the parameters that are evaluated from sub-models. In particular, Fig. 6.6(a) indicates that $K_{C_1}$, the longitudinal elastic modulus of the lamina $E_l$, has a significantly larger impact than the other parameters. To visualize the influence of the other parameters, Fig. 6.6(b) therefore plots the same result as Fig. 6.6(a) without $K_{C_1}$.
In addition, $E_t$, $G_{12}$, $G_{13}$, and $G_{23}$ are found to form the second echelon of the importance ranking.

Figure 6.4: The family of changes in the PDF of the QoI associated with statistical error, at 1000 realizations of rho_dat.

Postprocessing the foregoing results according to Eq. 6.32, and assuming failure is associated with $Q_A\ge 2.5$ kN·mm, the reliability sensitivity indices $\kappa_{K_{C_i}}$ with respect to the parameters evaluated from sub-models are shown in Tab. 6.5. The importance ranking found for the reliability sensitivities is similar to that of the foregoing PDF sensitivity results. These results indicate that a more accurate physical model should be used to evaluate $E_l$ in place of the rule of mixtures; the physical models for $E_t$, $G_{12}$, $G_{13}$, and $G_{23}$ are also suggested for refinement.

The sensitivities of the QoI to the input variables in the finest observable scale are also crucial for design purposes. Tab. 6.6 shows the weighted mean of the sensitivity of $Q_A$ with respect to $\boldsymbol{K}$. It can be seen that the densities of fiber and resin, $\rho_f$ and $\rho_m$, have the largest impacts on $Q_A$.

Figure 6.5: PDF of the failure probability associated with statistical error under the failure criteria (a) Q_A >= 2.0 kN·mm, (b) Q_A >= 2.5 kN·mm, and (c) Q_A >= 3.0 kN·mm, compared with the corresponding mean failure probability (red dots).

Figure 6.6: Sensitivity index functions delta_KC of the PDF of Q_A with respect to the coarse-scale random model inputs K_C: (a) delta_KCi, i = 1,...,12; and (b) delta_KCi, i = 2,...,12.

Table 6.5: Reliability sensitivity indices with respect to coarse-scale random inputs K_C
Coarse-scale inputs K_C | Reliability sensitivity index
Longitudinal elastic modulus: E_l | 0.988
Transverse elastic modulus: E_t | 2.337e-03
Poisson's ratio in direction 1-3: nu_13 | 6.076e-06
Poisson's ratio in direction 2-3: nu_23 | 4.689e-07
Major Poisson's ratio: nu_12 | 8.434e-05
Shear modulus in major direction 1-2: G_12 | 3.117e-03
Shear modulus in direction 1-3: G_13 | 2.414e-03
Shear modulus in direction 2-3: G_23 | 4.205e-03
Transverse tensile strength: DF_M | 9.119e-07
Plane shear strength: DF_S | 6.190e-07
Longitudinal tensile strength: DF_T | 2.152e-12
Longitudinal compressive strength: DF_C | 1.089e-11

Table 6.6: Sensitivity of the QoI Q_A with respect to random inputs K
Random inputs K | Mean sensitivity of Q_A to K
Longitudinal elastic modulus of fiber: E_lf | 1.097e-04
Transverse elastic modulus of fiber: E_tf | 3.605e-06
Major Poisson's ratio of fiber: nu_12f | 5.139e-06
Shear modulus of fiber in direction 1-2: G_12f | 2.634e-07
Ultimate strength of fiber: S_f | 5.188e-11
Fiber density: rho_f | 2.915e-01
Ultimate shear stress of matrix: tau_Sm | 5.299e-09
Elastic modulus of resin: E_m | 2.703e-05
Poisson's ratio of resin: nu_m | 4.359e-05
Shear modulus of resin: G_m | 1.113e-04
Strength of resin: S_m | 3.996e-09
Resin density: rho_m | 6.475e-01
Volume percentage of fiber: V_f | 9.649e-03
Scaling coefficient for tensile strength: C_DF_T | 4.222e-09
Scaling coefficient for compressive strength: C_DF_C | 3.536e-10
Angle of lamina: theta | 1.286e-05
Thickness of lamina: delta | 5.095e-02

6.6 Concluding Remarks

The present work introduces a coherent and efficient stochastic multiscale framework for characterization, propagation, and performance prediction in complex systems with hierarchical structure, in the presence of uncertainties and modeling errors across scales. A single-stage, efficient treatment of parametric uncertainty, statistical error, and model inadequacy is realized by the EPCE. The statistical error is represented through the distribution parameters of input variables that do not depend on any finer-scale physical sub-model, while model error is described through the model parameters evaluated by finer-scale models. Sensitivity analyses are performed to investigate the influence of statistical and model errors on the PDF of the QoI and the associated failure probability through a combination of KDE and EPCE, allowing a straightforward post-processing of the EPCE to determine the sensitivity of the PDF to epistemic variables with little extra computational effort. In addition, the sensitivity of the QoI with respect to the finest-scale input variables is evaluated using the constructed formulation.

This stochastic multiscale framework is demonstrated on an application to complex multiscale car materials. The proposed approach is not restricted to the stochastic modeling of constitutive material properties; rather, it is general and can be applied to any problem involving complex systems with multiphysics and multiscale interactions.

Chapter 7
Stochastic Optimal Control of Hypersonic Trajectories

7.1 Introduction

Optimal control (OC) of dynamical systems, as a main tool in decision-making, has been addressed with much interest across science and engineering over the past decades Kirk (2004); Lewis et al. (2012). Relevant approaches have been applied to multiscale modeling Dhia et al. (2011), post-hazard community recovery Nozhati et al. (2020), hydraulic seismic isolators Pagano et al. (2013), quasi-Newtonian flows Lee (2011), firefighting Khakzad (2021), and aircraft vibration Tourajizadeh and Zare (2016).
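As background for the PCE methodology referenced throughout, here is a minimal one-germ Hermite PCE computed by Gauss-Hermite projection. This is a generic textbook sketch, not the chapter's OTC implementation; the lognormal test function is an assumption, chosen because its PCE coefficients are known in closed form ($c_n=e^{\sigma^2/2}\sigma^n/n!$).

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

# PCE of g(xi) = exp(sigma*xi), xi ~ N(0,1), by Galerkin projection:
# c_n = E[g(xi) He_n(xi)] / E[He_n(xi)^2], with E[He_n^2] = n!
sigma, p = 0.3, 6
nodes, weights = He.hermegauss(40)  # Gauss-Hermite rule, weight exp(-x^2/2)
g = np.exp(sigma * nodes)

coeffs = []
for n in range(p + 1):
    basis = np.zeros(n + 1)
    basis[n] = 1.0
    psi = He.hermeval(nodes, basis)
    # the quadrature weights integrate against exp(-x^2/2), total mass sqrt(2*pi)
    coeffs.append((weights * g * psi).sum()
                  / (math.sqrt(2.0 * math.pi) * math.factorial(n)))
```

The zeroth coefficient is the mean of the QoI and the higher coefficients carry the variance and higher-order statistics; it is exactly this spectral structure that the gEPCE of the previous chapter, and the stochastic OTC formulation to follow, exploit.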
In the aerospace industry, a trajectory (or flight path) can be defined as the set of time-dependent states of a vehicle, represented by its position and other state variables such as speed and heading. Trajectory generation can be seen as a prediction of the aircraft trajectory, represented by a time-ordered sequence of aircraft states. Aircraft trajectory prediction is the core approach to air traffic management and to military applications Tang et al. (2015); Vian and Moore (1989). To predict an aircraft trajectory, a mathematical model describing the aircraft motion must be defined, namely the equations of motion. The optimal control of trajectory generation for aircraft, namely the optimal trajectory control (OTC) problem, is the focus of this chapter. OTC is a special case of OC that aims to determine the trajectory of an aircraft system while minimizing performance objectives and meeting a set of initial and terminal conditions, path constraints, and equations of motion.

Due to the nonlinearity and complexity of aircraft dynamics, OTC problems are usually solved by numerical methods, and a number of successful mathematical algorithms have been developed over the last decades Schultz and Zagalsky (1972); Enright and Conway (1992); Xu and Basset (2012); Wang and Grant (2017); Saranathan and Grant (2018). These techniques can generally be classified into direct and indirect methods. Direct methods transform the OTC problem into a nonlinear programming problem; such algorithms have been implemented in many commercial software tools Von Stryk and Bulirsch (1992). Indirect methods, in contrast, require substantial computational work, but are adopted here because their rapid convergence is desirable when constructing families of trajectories Kirk (2004). In this chapter, a multi-stage stabilized continuation scheme Grant and Braun (2015); Vedantam et al.
(2020) is used to solve the indirect OTC problem and to generate samples of optimal trajectories.

The trajectory computed by numerical simulation is usually treated as a deterministic estimate of the actual optimal trajectory, with precisely defined inputs, for decision making. In reality, however, the aircraft trajectory can differ substantially from the simulated one because of the uncertainties inevitably embedded in the dynamical models used for trajectory generation. In such cases, it is difficult to make effective decisions, and objectives may even fail to be met, on the basis of deterministic simulation alone. Dealing with uncertainty in model-based trajectory inference is therefore a crucial task that has attracted increasing attention over the last decade Sun and Zheng (2015); Hu et al. (2018); González-Arribas et al. (2018); Huang et al. (2019). From the literature, the sources of uncertainty in the OTC problem that need to be handled can be summarized as follows: (a) Uncertain initial and terminal conditions: all sensors produce measurement error; the measured states (e.g., speed, altitude, latitude, longitude) of an aircraft are corrupted by noise, and the induced measurement errors can impair the validity of the simulation results Merhav (1998). (b) Uncertain model parameters: the deterministic models used to generate trajectories can generally be seen as simplified surrogates of the real physics, with randomness usually embedded in the model coefficients. (c) Environmental uncertainties: modeling the environment is always a difficult task; the aerodynamic environment used in flight simulation, including spatially varying air density, wind gusts, and turbulence, is clearly a non-negligible source of uncertainty Soler et al. (2020); Luders et al. (2016). (d) Uncertain path constraints: the aircraft sometimes needs to bypass obstacles or avoid enemy surveillance, about which knowledge is not readily available.
The randomness associated with the aforementioned uncertainties in the OTC problem has mostly been treated by robust dynamic programming or modeled through probability distributions. In the former approach, uncertain parameters are described by an uncertainty set, and a worst-case uncertainty realization is defined and solved for Li et al. (2014). This, however, is a conservative method that degrades the overall performance of the stochastic system. When the uncertainty is modeled by probability distributions, the Monte Carlo (MC) method has mostly been employed owing to its conceptual and implementational simplicity; the associated computational cost, however, is a great challenge. There are a few preliminary works using PCE to solve the OTC problem under uncertainty, in which the mean and variance are used to represent the response statistics Fisher and Bhattacharya (2011); Casado et al. (2017). Although more efficient than the MC method, high-dimensional OTC problems were found to suffer from a heavy computational burden even when using PCE Prabhakar et al. (2010). Therefore, a coherent (physically and mathematically reasonable), comprehensive (all types of uncertainty included), efficient (reducing computational burden) and informative (providing insightful response statistics beyond first- and second-order moments) stochastic framework integrating uncertainty modeling, propagation and characterization of the response statistics is clearly needed as a step towards a next-generation optimal trajectory control paradigm, and is the main focus of this study. Uncertainty quantification (UQ) is the rational process of managing the interplay between data, models and decisions Ghanem et al. (2017). The uncertainty of input parameters is usually modeled by random variables with specified prior models for their probability distributions.
Estimating the precise values of these distribution parameters is challenging due to lack of knowledge and errors in measurements Soize (2017). Accordingly, sources of uncertainty are generally classified into aleatory (or data) uncertainty and epistemic (or approximation) uncertainty Tsilifis et al. (2017). In the OTC problem, the model inputs in the deterministic trajectory simulation can be seen as nominal values, and the uncertainty of the inputs is aleatory and modeled by probability distributions. For example, Wang et al. assumed the input parameters associated with pilot behavior and weather effects in a high-fidelity trajectory simulation model to be Gaussian random variables and used the Monte Carlo simulation technique Wang et al. (2021). They acknowledged that the choice of distribution type for the random inputs is arbitrary and assumed a standard deviation of 10% of the mean in the Gaussian model. In fact, when rare events with small probabilities are of interest, as is the case in most reliability and risk analysis problems, the tail of the PDF becomes important Der Kiureghian and Ditlevsen (2009). In such cases, an arbitrary choice of distribution type and parameters can be dangerous. Der Kiureghian advocated parameterizing the choice of the distribution so that the epistemic uncertainty in the probabilistic model is represented by the epistemic uncertainty in the distribution parameters Kiureghian (1989). In this chapter, we account for both the aleatory uncertainty inherent in the physical parameters and the epistemic uncertainty in the distribution parameters of the OTC problem, in a robust and efficient manner. PCE is an uncertainty quantification method that has been widely used in many areas across science and engineering, with a number of applications in aerospace engineering Hosder et al. (2010); Yao et al. (2011), and with recognized robustness and efficiency.
Conventionally, PCE has mostly been applied to aleatory uncertainty, for which it has demonstrated robustness and computational efficiency. For epistemic uncertainty quantification, a novel stochastic polynomial chaos representation, namely the extended polynomial chaos expansion (EPCE), that provides a uniform treatment of simultaneous aleatory and epistemic uncertainties was proposed Wang and Ghanem (2021, 2019). In EPCE, computational efficiency is enhanced through a functional representation of the response PDF in terms of the input aleatory and epistemic uncertainties combined in a single stage. Uncertainty segregation for purposes of visualization and interpretation is realized by a hierarchical treatment of the independent aleatory and epistemic random variables. In this chapter, a basis adaptation scheme is built on the EPCE to cope with the “curse of dimensionality” and further enhance computational efficiency. The reduction of dimension is realized by formulating a new polynomial chaos representation in a lower dimension that still preserves the dominant response statistics Tipireddy and Ghanem (2014). Building on this theory, a strategy of assigning dominant sensitivity to the epistemic random variable is developed, tailored to EPCE with accuracy in mind. Thus, augmented with the model reduction technique, the propagation of simultaneous epistemic and aleatory uncertainties to the PDF of the response is assessed by the EPCE, with clear implications for optimal control and design under uncertainty. Another focus of this chapter is to provide comprehensive and useful response metrics. In light of the segregation of epistemic and aleatory uncertainties in EPCE, the coefficients of the aleatory PCE are random variables, so that the response PDF is itself random. The credibility of the PDF can thus be assessed and interpreted through the statistical scatter of the PDF, viewed as a statistical metric.
In the OTC problem, for example, we can investigate the distribution of the probability that the terminal speed of the aircraft is no less than some given value. On the other hand, we use directional sensitivities to quantify the effect of variations in the statistical parameters. A composite map is constructed from the statistical parameters to the response PDF by integrating a PCE of the response within a kernel density estimate (KDE) of that response PDF. The map can propagate the epistemic uncertainty and evaluate the change in the response PDF at any given confidence level. Two objectives are thus pursued in this chapter. First, we provide a coherent framework to model simultaneous aleatory and epistemic uncertainties in the OTC model inputs and to propagate the impact of both uncertainties to the predicted response in a computationally efficient manner. Second, we provide response metrics that lead to more confident and informative OTC prediction and decision-making. The rest of this chapter is organized as follows. Section 2 first reviews the standard PCE approach and then introduces the EPCE approach that accounts for simultaneous aleatory and epistemic uncertainties. Chapter 2 has described the framework for quantifying the influence of aleatory and epistemic uncertainties on the response metrics used in this application. Section 7.2 introduces the optimal trajectory control model studied in this chapter. Section 7.3 presents the numerical method applied to solve the optimal control problem and the construction of the stochastic problem. Section 7.4 shows the results of applying the stochastic framework to the optimal trajectory control problem. Section 7.5 presents conclusions and some closing comments.

7.2 Optimal Trajectory Control Problem

In this section, the formulation of the deterministic optimal trajectory control model is presented.
The mathematical model of the aircraft motion and the associated aerodynamic model are first introduced in subsections 7.2.1 and 7.2.2. The path constraint is then described and, finally, the OTC problem is constructed.

7.2.1 Aerodynamic model

The major aerodynamic parameters in the application to hypersonic trajectories are the air density and the lift and drag forces. The atmosphere is given an exponential model: the air density ρ_air at height h is represented by

ρ_air(h) = ρ_0 exp(−h/H)   (7.1)

where H is a scale height for the Earth's atmosphere set to H = 7500 m, and ρ_0 is the reference atmospheric density at ground level, ρ_0 = 1.2 kg/m³. The lift force L and drag force D are computed as

L(α, h, v) = 0.5 ρ_air(h) v² C_l(α) A_ref
D(α, h, v) = 0.5 ρ_air(h) v² C_d(α) A_ref   (7.2)

where α is the angle of attack, v is the speed of the vehicle, and A_ref is the reference area of the vehicle, computed as the projected area of the shape on the plane normal to the flying direction. For this study, the reference area A_ref is assumed to be a disk with a diameter of 24 inches, which results in A_ref = π(24 × 0.0254/2)² = 0.293 m². The models for the lift coefficient C_l and the drag coefficient C_d are

C_l = 1.5658 α
C_d = 1.6537 α² + 0.0612   (7.3)

The values of the constant parameters in these models are summarized in Tab. 7.1.

7.2.2 Equations of motion

In this work, an unpowered three-dimensional entry trajectory of an air vehicle is studied. In flight dynamics, the angles of rotation in three dimensions about the vehicle's center of gravity, namely pitch, roll and yaw, are the fundamental flight dynamics parameters. They refer to rotations about the respective axes starting from a defined steady-flight equilibrium state, as shown in Fig. 7.1a. To transform from the earth frame to the wind frame, the three Euler angles are the flight path angle γ, the heading angle ψ, and the bank angle σ. These angle parameters are illustrated in Fig. 7.1b.
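The exponential atmosphere and the lift/drag models of Eqs. 7.1-7.3 can be sketched directly; the following is a minimal illustration (the function and constant names are ours, not from the thesis code, and the computed disk area is about 0.292 m², which the text rounds to 0.293 m²):

```python
import math

# Constants from Tab. 7.1 (exponential-atmosphere hypersonic entry model).
RHO_0 = 1.2        # surface air density, kg/m^3
H_SCALE = 7500.0   # atmospheric scale height, m
A_REF = math.pi * (24 * 0.0254 / 2) ** 2   # 24-inch disk, approx. 0.292 m^2

def air_density(h):
    """Eq. 7.1: rho_air(h) = rho_0 * exp(-h / H)."""
    return RHO_0 * math.exp(-h / H_SCALE)

def lift_drag(alpha, h, v):
    """Eqs. 7.2-7.3: lift and drag forces for angle of attack alpha (rad)."""
    q_dyn = 0.5 * air_density(h) * v ** 2          # dynamic pressure
    c_l = 1.5658 * alpha                           # lift coefficient
    c_d = 1.6537 * alpha ** 2 + 0.0612             # drag coefficient
    return q_dyn * c_l * A_REF, q_dyn * c_d * A_REF
```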
The heading angle ψ is the angle between due north and the horizontal component of the velocity vector, which describes the direction in which the aircraft is moving relative to the cardinal directions. The flight path angle γ is the angle between the horizontal and the velocity vector, which describes whether the aircraft is climbing or descending. The bank angle σ represents a rotation of the lift force around the velocity vector, which indicates whether the airplane is turning. In addition, the angle of attack α is the angle between the vehicle's reference line and the oncoming flow. As can be seen from Fig. 7.1b, the angle of attack α is the difference between the pitch angle and the flight path angle γ. The dynamics of an aircraft system is usually modeled by a set of equations of motion (EOM), which can be nonlinear and discontinuous. Six-degree-of-freedom (6DOF) EOMs comprise translational and rotational equations; their state variables are the position vector, velocity, pitch angle, pitch rate, weight and flight path angle.

Figure 7.1: Description of flight dynamics. (a) Three-dimensional vehicle orientation and axes; (b) flight dynamics parameters.

For simplicity, the 6DOF EOMs can be decoupled into three-degree-of-freedom (3DOF) EOMs in which the state variables are the position, velocity, flight path angle and yaw angle of the aircraft.
In this study, a 3DOF model is used and the equations of motion consist of six ordinary differential equations (ODEs) with a set of six state variables describing the aircraft trajectory:

dh/dt = v sin γ
dθ/dt = v cos γ cos ψ / (r cos φ)
dφ/dt = v cos γ sin ψ / r
dv/dt = −D/m − μ sin γ / r²
dγ/dt = L cos σ / (m v) − μ cos γ / (v r²) + (v/r) cos γ
dψ/dt = L sin σ / (m v cos γ) − (v/r) cos γ cos ψ tan φ   (7.4)

where h is the height of the vehicle (m); v is the speed of the vehicle (m/s); γ is the relative flight path angle (rad); ψ is the heading angle (rad); θ is the longitude; φ is the latitude; r_e is the (assumed spherical) Earth radius, equal to 6378 × 10³ m, and r = r_e + h; and m is the mass of the vehicle. Two control variables are employed: the angle of attack α = α(t) and the bank angle σ = σ(t). The six state variables are {h(t), θ(t), φ(t), v(t), γ(t), ψ(t)}. All state and control variables depend on time t. The values of the constant parameters are summarized in Tab. 7.1.

Table 7.1: Values of constant parameters used in the model

Scale height H = 7,500 m
Surface density ρ_0 = 1.2 kg/m³
Vehicle reference area A_ref = 0.293 m²
Earth radius r_e = 6.378 × 10⁶ m
Gravitational parameter μ = 3.986 × 10¹⁴ m³/s²
Vehicle mass m = 340.194 kg

7.2.3 Path constraints

A path constraint, in this work, is defined as a planar area projected on the Earth's surface over which the vehicle is forbidden to fly under some practical circumstances. To implement the path constraints in the optimal trajectory control problem, the no-fly area is described by a circular region in the θ-φ space (the Earth's surface), denoted by A_u. In Fig. 7.2, the interiors of the red, blue and green circles present three examples of A_u centered at the same location (θ, φ) = (200 km, 50 km), with no-fly radii R of 20 km, 50 km and 80 km, respectively.

Figure 7.2: Path constraints given by an avoided circular region projected in the θ-φ space, with various radii centered at the same location.
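The 3DOF dynamics of Eq. 7.4 can be written as a right-hand-side function suitable for any generic ODE integrator. A minimal sketch follows, with the aerodynamic model of Eqs. 7.1-7.3 inlined; the sign conventions are reconstructed from the text above, and all names are ours:

```python
import math

# Constants from Tab. 7.1.
MU = 3.986e14       # Earth gravitational parameter, m^3/s^2
R_E = 6.378e6       # Earth radius, m
MASS = 340.194      # vehicle mass, kg
RHO_0, H_SCALE, A_REF = 1.2, 7500.0, 0.293

def eom_3dof(t, x, alpha, sigma):
    """Right-hand side of Eq. 7.4; x = [h, theta, phi, v, gamma, psi]."""
    h, theta, phi, v, gamma, psi = x
    r = R_E + h
    q_dyn = 0.5 * RHO_0 * math.exp(-h / H_SCALE) * v ** 2
    lift = q_dyn * (1.5658 * alpha) * A_REF
    drag = q_dyn * (1.6537 * alpha ** 2 + 0.0612) * A_REF
    return [
        v * math.sin(gamma),                                        # dh/dt
        v * math.cos(gamma) * math.cos(psi) / (r * math.cos(phi)),  # dtheta/dt
        v * math.cos(gamma) * math.sin(psi) / r,                    # dphi/dt
        -drag / MASS - MU * math.sin(gamma) / r ** 2,               # dv/dt
        lift * math.cos(sigma) / (MASS * v)
        - MU * math.cos(gamma) / (v * r ** 2)
        + v * math.cos(gamma) / r,                                  # dgamma/dt
        lift * math.sin(sigma) / (MASS * v * math.cos(gamma))
        - v * math.cos(gamma) * math.cos(psi) * math.tan(phi) / r,  # dpsi/dt
    ]
```

For a steep entry state (γ = -80°, α = σ = 0), the vehicle descends (dh/dt < 0) while gravity outweighs drag so that dv/dt > 0, consistent with the velocity trajectory in Fig. 7.3b.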
7.2.4 Optimal control problem

The objective of the application is to maximize the final velocity v_tf at the desired end point given by (h_tf, θ_tf, φ_tf). Since maximizing v_tf is equivalent to minimizing −v_tf, the objective function is chosen as J = −v_tf² so as to formulate a standard minimization problem. The model searches for the time-dependent control variables α(t) and σ(t), which results in

min over α(t), σ(t) of J = −v_tf²,   s.t.   Φ = 0   (7.5)

where Φ consists of the set of equations of motion in Eq. 7.4, augmented with the initial conditions

{h, θ, φ, v, γ, ψ}(t = 0) = {h, θ, φ, v, γ, ψ}|_0,   (7.6)

the final constraints

{h, θ, φ}(t = t_f) = {h, θ, φ}|_tf,   (7.7)

and the path constraint that any location (θ(t), φ(t)) satisfies

(θ(t), φ(t)) ∉ A_u.   (7.8)

Therefore, Eqs. 7.5-7.8 constitute the deterministic optimal trajectory control problem, to which a number of solution techniques can be applied. The technique applied in this work is introduced in the next section. An example set of initial and final conditions is listed in Tab. 7.2, and the corresponding trajectories are shown in Fig. 7.3. The numerical scheme used to obtain these trajectories is introduced in the next section.
Figure 7.3: Trajectories computed from the model with the example constraints. (a) Trajectory of height; (b) trajectory of velocity; (c) trajectory of heading angle.

Table 7.2: An example of initial, terminal and path constraints

(a) Initial and terminal constraints
Variable | Initial condition | Terminal condition
Time t | t_0 = 0 | t_f free
Vehicle height h | h_0 = 40,000 m | h_tf = 0
Longitude θ | θ_0 = 0 | θ_tf = 0
Latitude φ | φ_0 = 0 | φ_tf = 0
Vehicle velocity v | v_0 = 2000 m/s | v_tf free
Flight path angle γ | γ_0 = −(90 − 10)π/180 rad | γ_tf free
Heading angle ψ | ψ_0 = 0 | ψ_tf free

(b) Path constraints
Location (θ, φ) | Radius
Path constraint No. 1: (200,000 m, 5,000 m) | 25,000 m

7.3 Stochastic Optimal Trajectory Control Modeling

Since aircraft dynamics is a highly nonlinear and complex mechanical system, optimal trajectory control is generally a computationally difficult problem. We first introduce the numerical scheme used to generate optimal trajectory simulations in subsection 7.3.1 and then construct the stochastic formulation of the OTC problem in subsection 7.3.2.

7.3.1 Numerical scheme

We adopt a multi-stage stabilized continuation scheme tailored to hypersonic trajectory generation applications using indirect optimal control techniques Grant and Braun (2015); Vedantam et al. (2020), and use the general-purpose indirect trajectory optimization Python code named "Beluga" for the trajectory simulations. This numerical technique has the advantages of high accuracy and computational efficiency.
Traditionally, direct methods are often used to perform trajectory optimization since these approaches are relatively easy to implement and many commercial software packages are available, though convergence to an optimal solution is not guaranteed. Recent studies have demonstrated that indirect methods can solve trajectory optimization problems with high accuracy. Additionally, the continuation scheme is well suited to solving the optimal control problem, and automated generation of the necessary conditions of optimality is realized through symbolic manipulation. Stabilized continuation is a particularly powerful boundary value problem solver that offers major advantages, including adaptive step-size selection and guaranteed attenuation of the terminal condition error over the continuation interval. Moreover, the multi-stage approach improves computational efficiency by solving the optimal control problem with a "loose" integration tolerance in the early stages and then sequentially using the resulting solution to seed the subsequent stage of stabilized continuation, which computes a higher-quality solution with tighter tolerances.

7.3.1.1 Indirect methods

When solving hypersonic trajectory optimization problems, indirect methods have the advantages of rapid convergence and satisfaction of the necessary conditions, though the costates they introduce increase the dimensionality and complexity of the problem. In indirect methods, the necessary optimality conditions for the optimal control problem are first derived, and discretization methods are then employed to solve the resulting formulation. When using indirect methods, the optimal control problem is transformed into a two-point boundary value problem (TPBVP). The constraints and objective functional in Eqs. 7.5-7.7 are the basic ingredients from which the indirect optimal control formulation is derived.
To transform this into a TPBVP, the Euler-Lagrange necessary optimality conditions are obtained from the calculus of variations as follows:

H = λᵀ f + μᵀ c   (7.9)

λ̇ = −(∂H/∂x)ᵀ   (7.10)

λ(t_f) = (∂Φ/∂x)ᵀ|_{t_f}   (7.11)

0 = (∂H/∂u)ᵀ   (7.12)

In Eq. 7.9, H represents the Hamiltonian of the dynamical system; λ is the vector of costates, i.e. the vector of Lagrange multipliers corresponding to each state, λ = [λ_h, λ_θ, λ_φ, λ_v, λ_γ, λ_ψ]ᵀ; f is the set of equations of motion; and μ is the vector of Lagrange multipliers corresponding to the additional path constraints c. In Eq. 7.10, x is the vector of states, x = [h, θ, φ, v, γ, ψ]ᵀ, and λ̇ is the rate of change of the costates, λ̇ = −[∂H/∂h, ∂H/∂θ, ∂H/∂φ, ∂H/∂v, ∂H/∂γ, ∂H/∂ψ]ᵀ. The application of Eq. 7.11 at t = t_f and t = 0 yields the additional boundary conditions needed to construct a well-posed TPBVP, where Φ represents the non-integrated part of the objective functional with the initial and terminal state constraints adjoined. In Eq. 7.12, u represents the vector of control variables, u = [α(t), σ(t)]. Thus, the optimal control problem is now represented as a TPBVP. The new state vector consists of the original states combined with the costates, and the new boundary conditions involve both the original states and the costates.

7.3.1.2 Multi-stage stabilized continuation

The optimal control problem is recast into a TPBVP. Among the numerical techniques available for solving BVPs, shooting methods, which solve an initial value problem associated with the BVP for the unknown boundary conditions, are commonly adopted Allgower and Georg (2012).
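The derivation of conditions of the form of Eqs. 7.9-7.12 lends itself to symbolic manipulation, as noted above for Beluga. A minimal illustration on a scalar toy problem (dynamics ẋ = −x + u with running cost u²/2; the problem and all names are ours, not from the thesis):

```python
import sympy as sp

# Toy scalar problem: dynamics x' = -x + u, running cost u**2 / 2.
x, lam, u = sp.symbols('x lambda u')
f = -x + u
H = u ** 2 / 2 + lam * f                 # Hamiltonian (cf. Eq. 7.9, no path constraint)

lam_dot = -sp.diff(H, x)                 # costate dynamics (cf. Eq. 7.10): lam' = lam
u_star = sp.solve(sp.diff(H, u), u)[0]   # stationarity (cf. Eq. 7.12): u = -lam
```

For the full 3DOF problem, the same pattern applied to the six states of Eq. 7.4 generates the six costate ODEs and the stationarity conditions in α and σ automatically.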
Continuation methods are a class of shooting methods whose principle is that a given problem is embedded into a family of problems parameterized by a continuation parameter; the solution is tracked by varying the continuation parameter from an initial problem with a known solution to the original problem Kotamraju and Akella (2000). The mathematical formulation of a continuation scheme is as follows:

F(z(s), s) = 0   (7.13)

Eq. 7.13 presents the TPBVP obtained in section 7.3.1.1, written as an equivalent nonlinear system of equations. Here z(s) is the vector of all unknown variables associated with the TPBVP, including the unconstrained state and costate initial conditions and, in the case of free final-time problems, the time of flight. The function F(z(s), s) represents the boundary conditions, and the continuation parameter s parameterizes the system of nonlinear equations in Eq. 7.13 such that, over the interval s ∈ [s_0, s_f], a known solution is transformed into the desired solution. The function F and the solution vector z depend on the continuation parameter s because the optimal control problem depends on s. In classical implementations of the continuation algorithm, the forward progression of the continuation parameter is typically user-specified. However, along any solution curve (z(s), s), the following differential relation always holds for Eq. 7.13:

(d/ds) F(z(s), s) = 0   (7.14)

and the differential equation for z(s), namely the Davidenko equation, is obtained from Eq. 7.14 as

dz/ds = −F_z⁻¹ F_s   (7.15)

where nonsingularity of the Jacobian matrix F_z is assumed. Therefore, the TPBVP obtained from the optimal control problem can be solved as an initial value problem (IVP) for a finite-dimensional ordinary differential equation, for which a number of established numerical algorithms exist.
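Eq. 7.15 thus turns root tracking into an IVP. A minimal scalar sketch using SciPy (the toy function F(z, s) = z² − s − 1 is ours, chosen so that the exact solution path is z(s) = √(s + 1)):

```python
from scipy.integrate import solve_ivp

# Toy root-tracking problem: F(z, s) = z**2 - s - 1, whose exact
# solution path is z(s) = sqrt(s + 1).
def davidenko(s, z):
    F_z = 2.0 * z[0]      # Jacobian dF/dz
    F_s = -1.0            # partial dF/ds
    return [-F_s / F_z]   # Eq. 7.15: dz/ds = -F_z^{-1} F_s

# Continue from the known root z(0) = 1 to s = 3; z(3) should approach 2.
sol = solve_ivp(davidenko, (0.0, 3.0), [1.0], rtol=1e-10, atol=1e-12)
z_final = sol.y[0, -1]
```

In the OTC application, z collects the unknown initial costates (and time of flight), and F is the vector of boundary-condition residuals rather than a scalar.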
In addition, the differentiation removes the need to pick the step size for the forward evolution of the continuation parameter. Since the error in the solution may accumulate over the continuation interval, stabilization techniques are applied in this work to avoid the accumulation of error in the integration process Vedantam et al. (2020); Ohtsuka and Fujii (1994). Stabilized continuation modifies Eq. 7.14 into

(d/ds) F(z(s), s) = A F(z(s), s) + v_a   (7.16)

where A denotes the gain matrix of the state feedback for the boundary condition error and is selected as a Hurwitz matrix, and v_a represents the additive control input, referred to as the "stabilizing term". Even if a bad initial guess is provided to the continuation scheme, so that F(z, 0) ≠ 0, the error variables F(z(s), s) are guaranteed to be regulated to zero at s = s_f by the open-loop control input

v_a(s) = −Φᵀ(0, s) W⁻¹(0, s_f) F[z(0), 0]   (7.17)

where Φ(0, s) denotes the transition matrix of the closed-loop system under the state feedback and W(0, s) denotes the controllability Gramian matrix expressed as [17 in Ohtsuka]

W(s₁, s₂) = ∫ from s₁ to s₂ of Φ(s₁, s) Φᵀ(s₁, s) ds   (7.18)

and Φ(s₁, s) represents the state transition matrix

Φ(s₁, s) = exp[A(s₁ − s)]   (7.19)

The controllability Gramian W(s₁, s₂) is nonsingular for any pair (s₁, s₂) with s₂ > s₁ because the transition matrix Φ(s₁, s) is nonsingular. Therefore the system of Eq. 7.16 is controllable by the additive input v_a, and it is easily checked that F is driven to zero by v_a given as the open-loop control in Eq. 7.17. The open-loop control is useful when a solution of F(z, 0) = 0 cannot be obtained easily. The multi-stage approach involves solving the optimal control problem with a "loose" integration tolerance, then sequentially using the resulting solution to seed the subsequent stage of stabilized continuation, which computes a higher-quality solution with tighter tolerances.
This way, the higher computational cost of solving with a "strict" integration tolerance is not paid in the early stages of the continuation process.

7.3.2 Parametrization of random inputs

It has been argued above that uncertainty is non-negligible in the optimal control of trajectories. The optimal trajectory control problem involves parametric uncertainties, uncertain initial and terminal conditions, environmental uncertainties and uncertain path constraints, usually described by probabilistic models. However, a lack of knowledge or poor assumptions with respect to these random input parameters can result in epistemic uncertainty associated with their probabilistic models. Therefore, a key idea in the analysis is to simultaneously characterize and propagate the aleatory uncertainty related to the physics and the epistemic uncertainty associated with the statistical parameters of the input distributions. Then, by exploring the statistical metrics of the response, we aim to make predictions and decisions about the terminal vehicle velocity at the desired end point. The Beta distribution can provide arbitrary shapes for the probability density function (PDF) over a bounded domain. In addition, thanks to its bounded support, physically unreasonable numerical simulations, such as a non-negative parameter taking a negative value, cannot occur in sampling. On the other hand, similar to the Gaussian distribution, the Beta distribution has a PDF that tapers towards the edges of its support, which is consistent with experimental observations. Among the variants of the Beta distribution, the four-parameter Beta distribution is commonly used, especially in engineering applications, to allow physical upper and lower bounds for a random variable.
Denoting by a and b the two shape parameters and by q and r the lower and upper bounds of a physical random variable k, respectively, the PDF of the four-parameter Beta distribution of k can be expressed as Gupta and Nadarajah (2004)

f(k | a, b, q, r) = (1/B(a, b)) (k − q)^(a−1) (r − k)^(b−1) / (r − q)^(a+b−1),   a > 0, b > 0, q < k < r   (7.20)

where B(a, b) is the Beta function,

B(a, b) = Γ(a) Γ(b) / Γ(a + b),   (7.21)

and Γ(a) and Γ(b) are Gamma functions,

Γ(a) = ∫ from 0 to ∞ of k^(a−1) e^(−k) dk,   Γ(b) = ∫ from 0 to ∞ of k^(b−1) e^(−k) dk.   (7.22)

In the OTC model, since the measurement of the state variables at t = 0 can be a significant error source in the dynamical system, a statistical description of the initial conditions is adopted. Thanks to advanced measuring techniques, it is reasonable to assign a comparatively small CoV (or range) that completely captures the error in the initial conditions. The bounds of each Beta-distributed random initial condition are set as follows: the nominal value of the initial height h_0 is 40,000 m with a coefficient of variation (CoV) equal to 1.25%; the nominal value of the initial velocity v_0 is 2000 m/s with a CoV equal to 0.25%; the nominal value of the initial relative flight path angle γ_0 (rad) is −(90 − 10)π/180 with a CoV equal to 0.28%; and the nominal value of the initial heading ψ_0 (rad) is 0 with a range of [−0.1, 0.1]. Optimal trajectory generation is sensitive to atmospheric density variations with altitude Vedantam et al. (2020). As introduced in Section 7.2.1, the air density ρ_air is modeled by ρ_air(h) = ρ_0 exp(−h/H). To account for model error, a CoV of 5% is assigned to ρ_air(h). This is equivalent to assigning a CoV of 5% to the surface air density ρ_0, with nominal value 1.2 kg/m³, so that ρ_0 acts as a random input describing the uncertainty in the air density ρ_air(h) through the aerodynamic model.
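The four-parameter Beta density of Eq. 7.20 maps directly onto SciPy's two-parameter beta via its loc/scale arguments; a brief sketch using the h_0 bounds that appear later in Tab. 7.3 (the helper name is ours):

```python
from scipy import stats

# Four-parameter Beta (Eq. 7.20): shapes (a, b) on support [q, r] map to
# scipy's standard beta with loc=q and scale=r-q.
def four_param_beta(a, b, q, r):
    return stats.beta(a, b, loc=q, scale=r - q)

# Example: initial height h0, Beta(3, 3) on [39950, 40050] m.
h0 = four_param_beta(3, 3, 39950.0, 40050.0)
```

For the symmetric case a = b = 3, the mean sits at the midpoint of [q, r] and the density vanishes at both bounds, giving the Gaussian-like but compactly supported shape described above.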
The lift L and drag D forces, as parameters in the dynamical system, are functionals of one of the control variables, α, and are expressed as L(C_l(α), h, v) and D(C_d(α), h, v), respectively, where C_l(α) = 1.5658 α and C_d(α) = 1.6537 α² + 0.0612. The coefficients of α and α² in these submodels, denoted by C_cl and C_cd, therefore take nominal values of 1.5658 and 1.6537 with a CoV of 4%, so as to describe the parametric uncertainties. The path constraint is an area over which the vehicle is not allowed to fly. Its location is usually unequivocal while its radius may not be; thus a radius ranging between 0 and 50,000 m is considered to cover the full scale. All of this information is used to determine the lower and upper bounds, which are fixed values symmetric about the nominal value of each input in the Beta distribution. The two shape parameters of the Beta distribution are taken as 3 and 3 for each random input, giving a Gaussian-like shape but with bounded support. Following the framework of the extended PCE, an epistemic Gaussian random variable ρ is then introduced to account for errors in the shape parameters associated with all the random inputs, with a mean value of 3 and a CoV of 15%. The constructed probabilistic models of the random inputs are summarized in Tab. 7.3.

Table 7.3: Statistical parameters of the random inputs

Random input | Distribution | a_o | b_o | q (fixed) | r (fixed)
Initial height h_0 (m) | Beta | 3 | 3 | 39,950 | 40,050
Initial velocity v_0 (m/s) | Beta | 3 | 3 | 1995 | 2005
Initial relative flight path γ_0 (rad) | Beta | 3 | 3 | −1.40026 | −1.39226
Initial heading ψ_0 (rad) | Beta | 3 | 3 | −0.1 | 0.1
Surface air density ρ_0 (kg/m³) | Beta | 3 | 3 | 1.14 | 1.26
Model parameter of lift coefficient C_cl | Beta | 3 | 3 | 1.50317 | 1.62843
Model parameter of drag coefficient C_cd | Beta | 3 | 3 | 1.58755 | 1.71985
No-fly radius R (m) | Beta | 3 | 3 | 0 | 50,000
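A sketch of how the aleatory inputs of Tab. 7.3 could be sampled jointly, with the common shape parameter left adjustable so that an epistemic perturbation (as in the EPCE setup above) can be applied; the variable names are ours and this is not the thesis sampling code:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Lower/upper bounds (q, r) for each aleatory input, following Tab. 7.3;
# the nominal shape parameters are a = b = 3 for every input.
bounds = {
    "h0": (39950.0, 40050.0),
    "v0": (1995.0, 2005.0),
    "gamma0": (-1.40026, -1.39226),
    "psi0": (-0.1, 0.1),
    "rho0": (1.14, 1.26),
    "C_cl": (1.50317, 1.62843),
    "C_cd": (1.58755, 1.71985),
    "R_nofly": (0.0, 50000.0),
}

def sample_inputs(shape=3.0, n=1):
    """Draw n joint samples of all aleatory inputs; `shape` is the common
    Beta shape parameter, which an epistemic variable may perturb."""
    return {name: stats.beta.rvs(shape, shape, loc=q, scale=r - q,
                                 size=n, random_state=rng)
            for name, (q, r) in bounds.items()}

draws = sample_inputs(shape=3.0, n=1000)
```

Resampling with `shape` drawn from the epistemic Gaussian (mean 3, CoV 15%) yields one aleatory ensemble per epistemic realization, which is the structure exploited in the next section.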
7.4 Results

The presented EPCE-based framework is applied to the stochastic analysis of the optimal trajectory control problem. In this problem, a_o and b_o are the vectors of the two shape parameters of each random input according to expert opinion, and q and r are the vectors of the lower and upper bounds of each random input, assumed fixed. The vector of random parameters P thus includes all 16 shape parameters. The mean value of P is taken as μ_P = (a_o, b_o), and the CoV of each entry of P is assumed to be 15% (σ_P = 15% μ_P). The resulting [l_i, u_i] interval, at the 95% confidence level, for each of the shape parameters is [2.55, 3.45]. In this section, the results are presented and analyzed. The PDF of the QoI, the terminal speed v_tf of the vehicle, obtained with the KDE-based adaptive EPCE, is shown in Fig. 7.4. A second-order EPCE was found to be sufficiently converged in the tail of the PDF to carry out the PDF characterization and sensitivity studies that follow. Fig. 7.5 shows the family of PDFs computed at 10,000 samples of the epistemic variable ρ. Each PDF in Fig. 7.5 represents a prediction under one probabilistic model. It is found that the epistemic uncertainty associated with incomplete knowledge of the inputs of the OTC model has a non-negligible impact on the prediction. As indicated previously, the PDF in Fig. 7.4 is the distribution over the ensemble of PDFs whose samples are shown in Fig. 7.5. Postprocessing the above results, we select a threshold level for the terminal aircraft speed, v_tf ≤ 600 m/s, associated with "failure" of the task; the distribution of the probability of failure P_f is then obtained by evaluating P_f for each sample in Fig. 7.5 and plotting the PDF of the resulting values. It is worth noting that the failure probability is itself characterized as a random variable whose computed scatter reflects its credibility for critical decision making.
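The post-processing just described (a KDE-estimated response PDF per epistemic realization, and the scatter of P_f across realizations) can be sketched as follows. The ensembles here are synthetic stand-ins for actual EPCE output, with purely illustrative numbers:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic stand-in for EPCE output: for each epistemic realization, an
# aleatory ensemble of terminal speeds v_tf (values illustrative only).
n_epi, n_ale = 200, 2000
centers = rng.normal(640.0, 15.0, size=n_epi)            # hypothetical
v_tf = centers[:, None] + rng.normal(0.0, 30.0, size=(n_epi, n_ale))

# One KDE-estimated response PDF per epistemic realization ...
kdes = [stats.gaussian_kde(row) for row in v_tf[:5]]

# ... and the failure probability P_f = P(v_tf <= 600 m/s) per realization,
# so P_f is itself a random variable whose scatter measures credibility.
p_f = (v_tf <= 600.0).mean(axis=1)
```

Plotting a histogram of `p_f` gives the analogue of Fig. 7.6, while plotting each KDE gives the analogue of the family of PDFs in Fig. 7.5.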
The resulting PDF is shown in Fig. 7.6, where the red point represents, for comparison, the failure probability estimated from the PDF in Fig. 7.4. Although confidence intervals for P_f can easily be synthesized from this PDF, more accurate decision analysis can be developed by relying on the full PDF. On the other hand, given a 95% confidence level for ρ, the confidence intervals [l_i, u_i] for each P_i (i = 1, ..., N_P) are computed to be [2.55, 3.45] for the shape parameters a_o and b_o of the Beta distributions, from which ΔP is obtained. Fig. 7.7 then shows the result of the stochastic sensitivity approach. The figure shows a statistical ensemble of these sensitivities, in which the scatter reflects the epistemic uncertainty about the probabilistic parameters of the input variables. The shape of Δf_X obtained by the sensitivity method is as expected, with the largest difference near the mode and a change of sign before tapering off to zero; since Δf_X is a difference of unit-mass PDFs, its net area is equal to zero. In our definition of Δf_X(x), the increments ΔP_i are deterministic, so the scatter in Δf_X is due entirely to the scatter in the sensitivities. Compared with the two applications in our previous study Wang and Ghanem (2021), the application in this work clearly exhibits a different dependence of the sensitivities on x and ρ, demonstrating the influence of the chaos coefficients of X on these sensitivities. It is also noted that the scatter in the sensitivities does not mirror the scatter in the PDF, and that the sensitivity of the PDF varies considerably along its support. In addition, the zero change in the PDF of Fig. 7.4 occurs near its peak, since the family of variations in the PDF estimated by the stochastic sensitivity approach in Fig. 7.7 changes sign around 650 m/s, which corresponds to the peak of the PDF in Fig. 7.4.
Figure 7.4: PDF from the adaptive extended PCE for the OTC problem.

Figure 7.5: The family of PDFs at 1000 realizations of r for the OTC problem.

Figure 7.6: PDF of the failure probability that v_tf <= 600 m/s for the OTC problem.

Figure 7.7: Statistical samples of the change in the response PDF, at the 95% confidence interval of r, obtained by the stochastic sensitivity approach for the OTC problem.

7.5 Concluding Remarks

Optimal control of aircraft trajectories is generally a complex task, and recent advances in the community have largely concentrated on improving the prediction of a single solution to these challenging problems. We are working towards a next-generation optimal trajectory control paradigm, accounting for both aleatory uncertainty and the epistemic uncertainty associated with inaccurate probability models, by employing novel uncertainty quantification techniques. The whole procedure is very efficient, with the computational cost confined to the construction of the EPCE. Compared with preliminary attempts in the literature to deal with the optimal trajectory control problem under uncertainty, this chapter provides an integrated framework for uncertainty modeling, forward propagation, and characterization of response statistics in optimal trajectory control models, in a computationally efficient manner, accounting for both epistemic and aleatory uncertainties. In the case study, the terminal vehicle speed is investigated as the quantity of interest for the optimal control of an unpowered hypersonic aircraft trajectory, and the results demonstrate the insight afforded by the methodology. They show that, beyond the aleatory uncertainty in the physics, the epistemic uncertainty in the statistical parameters of the distributions of the OTC model inputs has a remarkable influence on the response.
As an outlook, from the application perspective, the presented approach can be applied to other objective functionals as a general solver for optimal control problems under multiple uncertainties, with little additional conceptual novelty or mathematical effort. For instance, the framework could be applied to other phases of flight, such as take-off, cruise, or descent, by simply substituting their physics models into the framework. It is further expected that, from the methodology perspective, the proposed approach could be integrated into a Bayesian framework for model validation.

Bibliography

ACI Committee 318, 2019. Building Code Requirements for Structural Concrete (ACI 318-19): An ACI Standard: Commentary on Building Code Requirements for Structural Concrete (ACI 318R-19). Technical Report. American Concrete Institute.

Abrahamson, N.A., Bommer, J.J., 2005. Probability and uncertainty in seismic hazard analysis. Earthquake Spectra 21, 603–607.

Aitharaju, V., 2020. Development and Integration of Predictive Models for Manufacturing and Structural Performance of Carbon Fiber Composites in Automotive Applications. Technical Report. General Motors LLC, Detroit, MI (United States).

Allen, M., Maute, K., 2005. Reliability-based shape optimization of structures undergoing fluid–structure interaction phenomena. Computer Methods in Applied Mechanics and Engineering 194, 3472–3495.

Allgower, E.L., Georg, K., 2012. Numerical continuation methods: an introduction. volume 13. Springer Science & Business Media.

Aluko, O., Gowtham, S., Odegard, G.M., 2017. Multiscale modeling and analysis of graphene nanoplatelet/carbon fiber/epoxy hybrid composite. Composites Part B: Engineering 131, 82–90.

Aranda, S., Berg, D., Dickert, M., Drechsel, M., Ziegmann, G., 2014. Influence of shear on the permeability tensor and compaction behaviour of a non-crimp fabric. Composites Part B: Engineering 65, 158–163.

Arnst, M., Ghanem, R., Soize, C., 2010.
Identification of Bayesian posteriors for coefficients of chaos expansions. Journal of Computational Physics 229, 3134–3154.

Atik, L.A., Abrahamson, N., Bommer, J.J., Scherbaum, F., Cotton, F., Kuehn, N., 2010. The variability of ground-motion prediction models and its components. Seismological Research Letters 81, 794–801.

Au, S., 2005. Reliability-based design sensitivity by efficient simulation. Computers & Structures 83, 1048–1061.

Aven, T., Nøkland, T., 2010. On the use of uncertainty importance measures in reliability and risk analysis. Reliability Engineering & System Safety 95, 127–133.

Baker, J.W., 2013. An introduction to probabilistic seismic hazard analysis. White paper version 2, 79.

Baker, J.W., Gupta, A., 2016. Bayesian treatment of induced seismicity in probabilistic seismic-hazard analysis. Bulletin of the Seismological Society of America 106, 860–870.

Beer, M., Zhang, Y., Quek, S.T., Phoon, K.K., 2013. Reliability analysis with scarce information: Comparing alternative approaches in a geotechnical engineering context. Structural Safety 41, 1–10.

Bommer, J.J., 2012. Challenges of building logic trees for probabilistic seismic hazard analysis. Earthquake Spectra 28, 1723–1735.

Bommer, J.J., Abrahamson, N.A., 2006. Why do modern probabilistic seismic-hazard analyses often lead to increased hazard estimates? Bulletin of the Seismological Society of America 96, 1967–1977.

Borgonovo, E., 2007. A new uncertainty importance measure. Reliability Engineering & System Safety 92, 771–784.

Borgonovo, E., Plischke, E., 2016. Sensitivity analysis: a review of recent advances. European Journal of Operational Research 248, 869–887.

Bostanabad, R., Liang, B., Gao, J., Liu, W.K., Cao, J., Zeng, D., Su, X., Xu, H., Li, Y., Chen, W., 2018. Uncertainty quantification in multiscale simulation of woven fiber composites. Computer Methods in Applied Mechanics and Engineering 338, 506–532.

Campbell, K.W., 2003.
Prediction of strong ground motion using the hybrid empirical method and its use in the development of ground-motion (attenuation) relations in eastern North America. Bulletin of the Seismological Society of America 93, 1012–1033.

Casado, E., La Civita, M., Vilaplana, M., McGookin, E.W., 2017. Quantification of aircraft trajectory prediction uncertainty using polynomial chaos expansions, in: 2017 IEEE/AIAA 36th Digital Avionics Systems Conference (DASC), IEEE. pp. 1–11.

Chabridon, V., Balesdent, M., Bourinet, J.M., Morio, J., Gayton, N., 2018. Reliability-based sensitivity estimators of rare event probability in the presence of distribution parameter uncertainty. Reliability Engineering & System Safety 178, 164–178.

Chaloner, K., Verdinelli, I., 1995. Bayesian experimental design: A review. Statistical Science 10, 273–304.

Clément, A., Soize, C., Yvonnet, J., 2013. Uncertainty quantification in computational stochastic multiscale analysis of nonlinear elastic materials. Computer Methods in Applied Mechanics and Engineering 254, 61–82.

Nuclear Regulatory Commission, 1997. Recommendations for probabilistic seismic hazard analysis: guidance on uncertainty and use of experts. Technical Report. Nuclear Regulatory Commission.

Cornell, C.A., 1968. Engineering seismic risk analysis. Bulletin of the Seismological Society of America 58, 1583–1606.

Crestaux, T., Le Maître, O., Martinez, J.M., 2009. Polynomial chaos expansion for sensitivity analysis. Reliability Engineering & System Safety 94, 1161–1172.

Das, S., Ghanem, R., Spall, J., 2008. Asymptotic sampling distribution for polynomial chaos representation of data: A maximum-entropy and Fisher information approach. SIAM Journal on Scientific Computing 30, 2207–2234.

Davis, R.A., Lii, K.S., Politis, D.N., 2011. Remarks on some nonparametric estimates of a density function, in: Selected Works of Murray Rosenblatt. Springer, pp. 95–100.

Der Kiureghian, A., Ditlevsen, O., 2009. Aleatory or epistemic? Does it matter?
Structural Safety 31, 105–112.

DeVita, J.P., Sander, L.M., Smereka, P., 2005. Multiscale kinetic Monte Carlo algorithm for simulating epitaxial growth. Physical Review B 72, 205421.

Dhia, H.B., Chamoin, L., Oden, J.T., Prudhomme, S., 2011. A new adaptive modeling strategy based on optimal control for atomic-to-continuum coupling simulations. Computer Methods in Applied Mechanics and Engineering 200, 2675–2696.

Dubourg, V., Sudret, B., 2014. Meta-model-based importance sampling for reliability sensitivity analysis. Structural Safety 49, 27–36.

Ehre, M., Papaioannou, I., Straub, D., 2020. A framework for global reliability sensitivity analysis in the presence of multi-uncertainty. Reliability Engineering & System Safety 195, 106726.

Eldred, M.S., Swiler, L.P., Tang, G., 2011. Mixed aleatory-epistemic uncertainty quantification with stochastic expansions and optimization-based interval estimation. Reliability Engineering & System Safety 96, 1092–1113.

Ellingwood, B.R., Kinali, K., 2009. Quantifying and communicating uncertainty in seismic risk assessment. Structural Safety 31, 179–187.

Enright, P.J., Conway, B.A., 1992. Discrete approximations to optimal trajectories using direct transcription and nonlinear programming. Journal of Guidance, Control, and Dynamics 15, 994–1002.

Feng, D., Li, J., 2016. Stochastic nonlinear behavior of reinforced concrete frames. II: Numerical simulation. Journal of Structural Engineering 142, 04015163.

Feng, D.C., Ren, X.D., Li, J., 2018. Cyclic behavior modeling of reinforced concrete shear walls based on softened damage-plasticity model. Engineering Structures 166, 363–375.

Feng, D.C., Xie, S.C., Xu, J., Qian, K., 2020. Robustness quantification of reinforced concrete structures subjected to progressive collapse via the probability density evolution method. Engineering Structures 202, 109877.

Fish, J., Ghouali, A., 2001. Multiscale analytical sensitivity analysis for composite materials.
International Journal for Numerical Methods in Engineering 50, 1501–1520.

Fish, J., Wagner, G.J., Keten, S., 2021. Mesoscopic and multiscale modelling in materials. Nature Materials 20, 774–786.

Fisher, J., Bhattacharya, R., 2011. Optimal trajectory generation with probabilistic system uncertainty using polynomial chaos. Journal of Dynamic Systems, Measurement, and Control 133.

Fu, C.C., Torre, J.D., Willaime, F., Bocquet, J.L., Barbu, A., 2005. Multiscale modelling of defect kinetics in irradiated iron. Nature Materials 4, 68–74.

Gardoni, P., Der Kiureghian, A., Mosalam, K.M., 2002. Probabilistic capacity models and fragility estimates for reinforced concrete columns based on experimental observations. Journal of Engineering Mechanics 128, 1024–1038.

Ghanem, R., 1999a. Higher order sensitivity of heat conduction problems to random data using the spectral stochastic finite element method. ASME Journal of Heat Transfer 121, 290–299.

Ghanem, R., 1999b. Ingredients for a general purpose stochastic finite elements implementation. Computational Methods in Applied Mechanics and Engineering 168, 19–34.

Ghanem, R., Doostan, A., 2006. On the construction and analysis of stochastic predictive models: Characterization and propagation of the errors associated with limited data. Journal of Computational Physics 217, 63–81.

Ghanem, R., Higdon, D., Owhadi, H., 2017. Handbook of uncertainty quantification. volume 6. Springer.

Ghanem, R., Spanos, P.D., 1990. Polynomial chaos in stochastic finite elements. Journal of Applied Mechanics 57, 197–202.

Ghanem, R.G., Doostan, A., Red-Horse, J., 2008. A probabilistic construction of model validation. Computer Methods in Applied Mechanics and Engineering 197, 2585–2595.

Ghanem, R.G., Spanos, P.D., 2003. Stochastic finite elements: a spectral approach. Dover.

Ghauch, Z.G., Aitharaju, V., Rodgers, W.R., Pasupuleti, P., Dereims, A., Ghanem, R.G., 2019.
Integrated stochastic analysis of fiber composites manufacturing using adapted polynomial chaos expansions. Composites Part A: Applied Science and Manufacturing 118, 179–193.

González-Arribas, D., Soler, M., Sanjurjo-Rivo, M., 2018. Robust aircraft trajectory planning under wind uncertainty using optimal control. Journal of Guidance, Control, and Dynamics 41, 673–688.

Grant, M.J., Braun, R.D., 2015. Rapid indirect trajectory optimization for conceptual design of hypersonic missions. Journal of Spacecraft and Rockets 52, 177–182.

Graves, R., Jordan, T.H., Callaghan, S., Deelman, E., Field, E., Juve, G., Kesselman, C., Maechling, P., Mehta, G., Milner, K., et al., 2011. CyberShake: A physics-based seismic hazard model for southern California. Pure and Applied Geophysics 168, 367–381.

Greene, M.S., Liu, Y., Chen, W., Liu, W.K., 2011. Computational uncertainty analysis in multiresolution materials via stochastic constitutive theory. Computer Methods in Applied Mechanics and Engineering 200, 309–325.

Guo, J., Du, X., 2007. Sensitivity analysis with mixture of epistemic and aleatory uncertainties. AIAA Journal 45, 2337–2349.

Gupta, A.K., Nadarajah, S., 2004. Handbook of beta distribution and its applications. CRC Press.

Helton, J., Johnson, J., Oberkampf, W., Storlie, C.B., 2007. A sampling-based computational strategy for the representation of epistemic uncertainty in model predictions with evidence theory. Computer Methods in Applied Mechanics and Engineering 196, 3980–3998.

Helton, J.C., Johnson, J.D., Oberkampf, W., Sallaberry, C.J., 2006a. Sensitivity analysis in conjunction with evidence theory representations of epistemic uncertainty. Reliability Engineering & System Safety 91, 1414–1434.

Helton, J.C., Johnson, J.D., Sallaberry, C.J., Storlie, C.B., 2006b. Survey of sampling-based methods for uncertainty and sensitivity analysis. Reliability Engineering & System Safety 91, 1175–1209.

Hibbitt, Karlsson & Sorensen, 2001. Abaqus/Standard user's manual.
Technical Report. Hibbitt, Karlsson & Sorensen.

Hiriyur, B., Waisman, H., Deodatis, G., 2011. Uncertainty quantification in homogenization of heterogeneous microstructures modeled by XFEM. International Journal for Numerical Methods in Engineering 88, 257–278.

Hofer, E., Kloos, M., Krzykacz-Hausmann, B., Peschke, J., Woltereck, M., 2002. An approximate epistemic uncertainty analysis approach in the presence of epistemic and aleatory uncertainties. Reliability Engineering & System Safety 77, 229–238.

Hosder, S., Walters, R.W., Balch, M., 2010. Point-collocation nonintrusive polynomial chaos method for stochastic computational fluid dynamics. AIAA Journal 48, 2721–2730.

Hu, L., Li, R., Xue, T., Liu, Y., 2018. Neuro-adaptive tracking control of a hypersonic flight vehicle with uncertainties using reinforcement synthesis. Neurocomputing 285, 141–153.

Huan, X., Marzouk, Y.M., 2013. Simulation-based optimal Bayesian experimental design for nonlinear systems. Journal of Computational Physics 232, 288–317.

Huang, Y., Li, H., Du, X., He, X., 2019. Mars entry trajectory robust optimization based on evidence under epistemic uncertainty. Acta Astronautica 163, 225–237.

Jacquelin, E., Friswell, M.I., Adhikari, S., Dessombz, O., Sinou, J.J., 2016. Polynomial chaos expansion with random and fuzzy variables. Mechanical Systems and Signal Processing 75, 41–56.

Jakeman, J., Eldred, M., Xiu, D., 2010. Numerical approach for quantification of epistemic uncertainty. Journal of Computational Physics 229, 4648–4663.

Jensen, H., Mayorga, F., Papadimitriou, C., 2015. Reliability sensitivity analysis of stochastic finite element models. Computer Methods in Applied Mechanics and Engineering 296, 327–351.

Jordan, T., Chen, Y.T., Gasparini, P., Madariaga, R., Main, I., Marzocchi, W., Papadopoulos, G., Yamaoka, K., Zschau, J., et al., 2011. Operational earthquake forecasting: State of knowledge and guidelines for implementation. Annals of Geophysics.
Kennedy, M.C., O'Hagan, A., 2001. Bayesian calibration of computer models. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 63, 425–464.

Khakzad, N., 2021. Optimal firefighting to prevent domino effects: Methodologies based on dynamic influence diagram and mathematical programming. Reliability Engineering & System Safety 212, 107577.

Kirk, D.E., 2004. Optimal control theory: an introduction. Courier Corporation.

Kiureghian, A.D., 1989. Measures of structural safety under imperfect states of knowledge. Journal of Structural Engineering 115, 1119–1140.

Kohler, C., Kizler, P., Schmauder, S., 2004. Atomistic simulation of precipitation hardening in α-iron: influence of precipitate shape and chemical composition. Modelling and Simulation in Materials Science and Engineering 13, 35.

Kotamraju, G.R., Akella, M.R., 2000. Stabilized continuation methods for boundary value problems. Applied Mathematics and Computation 112, 317–332.

Kouchmeshky, B., Zabaras, N., 2010. Microstructure model reduction and uncertainty quantification in multiscale deformation processes. Computational Materials Science 48, 213–227.

Lee, H., 2011. Optimal control for quasi-Newtonian flows with defective boundary conditions. Computer Methods in Applied Mechanics and Engineering 200, 2498–2506.

Lewis, F.L., Vrabie, D., Syrmos, V.L., 2012. Optimal control. John Wiley & Sons.

Li, J., Feng, D., Gao, X., Zhang, Y., 2016. Stochastic nonlinear behavior of reinforced concrete frames. I: Experimental investigation. Journal of Structural Engineering 142, 04015162.

Li, X., Nair, P.B., Zhang, Z., Gao, L., Gao, C., 2014. Aircraft robust trajectory optimization using nonintrusive polynomial chaos. Journal of Aircraft 51, 1592–1603.

Liu, B., Sun, X., Bhattacharya, K., Ortiz, M., 2021. Hierarchical multiscale quantification of material uncertainty. Journal of the Mechanics and Physics of Solids 153, 104492.

Lu, Z., Song, S., Yue, Z., Wang, J., 2008.
Reliability sensitivity method by line sampling. Structural Safety 30, 517–532.

Luders, B., Ellertson, A., How, J.P., Sugel, I., 2016. Wind uncertainty modeling and robust trajectory planning for autonomous parafoils. Journal of Guidance, Control, and Dynamics 39, 1614–1630.

Luyi, L., Zhenzhou, L., Jun, F., Bintuan, W., 2012. Moment-independent importance measure of basic variable and its state dependent parameter solution. Structural Safety 38, 40–47.

Lyubushin, A.A., Parvez, I.A., 2010. Map of seismic hazard of India using Bayesian approach. Natural Hazards 55, 543–556.

Marfia, S., Sacco, E., 2018. Multiscale technique for nonlinear analysis of elastoplastic and viscoplastic composites. Composites Part B: Engineering 136, 241–253.

Marzocchi, W., Jordan, T.H., 2014. Testing for ontological errors in probabilistic forecasting models of natural systems. Proceedings of the National Academy of Sciences 111, 11973–11978.

Marzocchi, W., Jordan, T.H., 2017. A unified probabilistic framework for seismic hazard analysis. Bulletin of the Seismological Society of America 107, 2738–2744.

Marzocchi, W., Jordan, T.H., 2018. Experimental concepts for testing probabilistic earthquake forecasting and seismic hazard models. Geophysical Journal International 215, 780–798.

Marzocchi, W., Selva, J., Jordan, T.H., 2021. A unified probabilistic framework for volcanic hazard and eruption forecasting. Natural Hazards and Earth System Sciences 21, 3509–3517.

Marzouk, Y.M., Najm, H.N., 2009. Dimensionality reduction and polynomial chaos acceleration of Bayesian inference in inverse problems. Journal of Computational Physics 228, 1862–1902.

Marzouk, Y.M., Najm, H.N., Rahn, L.A., 2007. Stochastic spectral methods for efficient Bayesian solution of inverse problems. Journal of Computational Physics 224, 560–586.

McGuire, R.K., Cornell, C.A., Toro, G.R., 2005. The case for using mean seismic hazard. Earthquake Spectra 21, 879–886.
Mehrez, L., Fish, J., Aitharaju, V., Rodgers, W.R., Ghanem, R., 2018. A PCE-based multiscale framework for the characterization of uncertainties in complex systems. Computational Mechanics 61, 219–236.

Merhav, S., 1998. Aerospace sensor systems and applications. Springer Science & Business Media.

Meynaoui, A., Marrel, A., Laurent, B., 2019. New statistical methodology for second level global sensitivity analysis. arXiv preprint arXiv:1902.07030.

Morio, J., 2011. Influence of input PDF parameters of a model on a failure probability estimation. Simulation Modelling Practice and Theory 19, 2244–2255.

Morrison, R.E., Oliver, T.A., Moser, R.D., 2018. Representing model inadequacy: A stochastic operator approach. SIAM/ASA Journal on Uncertainty Quantification 6, 457–496.

Murray, Y.D., et al., 2007. Users manual for LS-DYNA concrete material model 159. Technical Report. United States. Federal Highway Administration. Office of Research . . . .

Nannapaneni, S., Mahadevan, S., 2016. Reliability analysis under epistemic uncertainty. Reliability Engineering & System Safety 155, 9–20.

Nozhati, S., Sarkale, Y., Chong, E.K., Ellingwood, B.R., 2020. Optimal stochastic dynamic scheduling for managing community recovery from natural hazards. Reliability Engineering & System Safety 193, 106627.

Ohtsuka, T., Fujii, H., 1994. Stabilized continuation method for solving optimal control problems. Journal of Guidance, Control, and Dynamics 17, 950–957.

Pagano, S., Russo, R., Strano, S., Terzo, M., 2013. Non-linear modelling and optimal control of a hydraulically actuated seismic isolator test rig. Mechanical Systems and Signal Processing 35, 255–278.

Papadrakakis, M., Stefanou, G., 2014. Multiscale modeling and uncertainty quantification of materials and structures. Springer.

Pavlenko, V., 2015. Effect of alternative distributions of ground motion variability on results of probabilistic seismic hazard analysis. Natural Hazards 78, 1917–1930.
Pisarenko, V., Lyubushin, A., Lysenko, V., Golubeva, T., 1996. Statistical estimation of seismic hazard parameters: maximum possible magnitude and related parameters. Bulletin of the Seismological Society of America 86, 691–700.

Pisarenko, V., Sornette, A., Sornette, D., Rodkin, M., 2014. Characterization of the tail of the distribution of earthquake magnitudes by combining the GEV and GPD descriptions of extreme value theory. Pure and Applied Geophysics 171, 1599–1624.

Prabhakar, A., Fisher, J., Bhattacharya, R., 2010. Polynomial chaos-based analysis of probabilistic uncertainty in hypersonic flight dynamics. Journal of Guidance, Control, and Dynamics 33, 222–234.

Raissi, M., Perdikaris, P., Karniadakis, G.E., 2019. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics 378, 686–707.

Raschke, M., 2013. Statistical modeling of ground motion relations for seismic hazard analysis. Journal of Seismology 17, 1157–1182.

Saltelli, A., 2002. Sensitivity analysis for importance assessment. Risk Analysis 22, 579–590.

Saltelli, A., Tarantola, S., Campolongo, F., Ratto, M., 2004. Sensitivity analysis in practice: a guide to assessing scientific models. volume 1. Wiley Online Library.

Saranathan, H., Grant, M.J., 2018. Relaxed autonomously switched hybrid system approach to indirect multiphase aerospace trajectory optimization. Journal of Spacecraft and Rockets 55, 611–621.

Sargsyan, K., Huan, X., Najm, H., 2019. Embedded model error representation for Bayesian model calibration. International Journal for Uncertainty Quantification 9, 365–394.

Sargsyan, K., Najm, H., Ghanem, R., 2015. On the statistical calibration of physical models. International Journal of Chemical Kinetics 47, 246–276.

Sarkar, A., Ghanem, R., 2002. Mid-frequency structural dynamics with parameter uncertainty.
Computer Methods in Applied Mechanics and Engineering 191, 5499–5513.

Schöbi, R., Sudret, B., 2017. Uncertainty propagation of p-boxes using sparse polynomial chaos expansions. Journal of Computational Physics 339, 307–327.

Schöbi, R., Sudret, B., Marelli, S., 2017. Rare event estimation using polynomial-chaos kriging. ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, Part A: Civil Engineering 3, D4016002.

Schultz, R.L., Zagalsky, N.R., 1972. Aircraft performance optimization. Journal of Aircraft 9, 108–114.

Shao, Q., Younes, A., Fahs, M., Mara, T.A., 2017. Bayesian sparse polynomial chaos expansion for global sensitivity analysis. Computer Methods in Applied Mechanics and Engineering 318, 474–496.

Silverman, B.W., 1986. Density estimation for statistics and data analysis. volume 26. CRC Press.

Sobol, I.M., 2001. Global sensitivity indices for nonlinear mathematical models and their Monte Carlo estimates. Mathematics and Computers in Simulation 55, 271–280.

Soize, C., 2013. Stochastic modeling of uncertainties in computational structural dynamics—recent theoretical advances. Journal of Sound and Vibration 332, 2379–2395.

Soize, C., 2017. Uncertainty quantification. Springer.

Soler, M., González-Arribas, D., Sanjurjo-Rivo, M., García-Heras, J., Sacher, D., Gelhardt, U., Lang, J., Hauf, T., Simarro, J., 2020. Influence of atmospheric uncertainty, convective indicators, and cost-index on the leveled aircraft trajectory optimization problem. Transportation Research Part C: Emerging Technologies 120, 102784.

Spanos, P., Kontsos, A., 2008. A multiscale Monte Carlo finite element method for determining mechanical properties of polymer nanocomposites. Probabilistic Engineering Mechanics 23, 456–470.

Sudret, B., 2008. Global sensitivity analysis using polynomial chaos expansions. Reliability Engineering & System Safety 93, 964–979.

Sun, L., Zheng, Z., 2015.
Nonlinear adaptive trajectory tracking control for a stratospheric airship with parametric uncertainty. Nonlinear Dynamics 82, 1419–1430.

Tang, X., Chen, P., Zhang, Y., 2015. 4D trajectory estimation based on nominal flight profile extraction and airway meteorological forecast revision. Aerospace Science and Technology 45, 387–397.

Thomsen IV, J.H., Wallace, J.W., 2004. Displacement-based design of slender reinforced concrete structural walls—experimental verification. Journal of Structural Engineering 130, 618–630.

Tipireddy, R., Ghanem, R., 2014. Basis adaptation in homogeneous chaos spaces. Journal of Computational Physics 259, 304–317.

Tootkaboni, M., Graham-Brady, L., 2010. A multi-scale spectral stochastic method for homogenization of multi-phase periodic composites with random material properties. International Journal for Numerical Methods in Engineering 83, 59–90.

Tourajizadeh, H., Zare, S., 2016. Robust and optimal control of shimmy vibration in aircraft nose landing gear. Aerospace Science and Technology 50, 1–14.

Tsilifis, P., Ghanem, R., Hajali, P., 2017. Efficient Bayesian experimentation using an expected information gain lower bound. SIAM/ASA Journal on Uncertainty Quantification 5, 30–62.

Valdebenito, M., Labarca, A., Jensen, H., 2013. On the application of intervening variables for stochastic finite element analysis. Computers & Structures 126, 164–176.

Vedantam, M., Akella, M.R., Grant, M.J., 2020. Multi-stage stabilized continuation for indirect optimal control of hypersonic trajectories, in: AIAA Scitech 2020 Forum, p. 0472.

Vian, J.L., Moore, J.R., 1989. Trajectory optimization with risk minimization for military aircraft. Journal of Guidance, Control, and Dynamics 12, 311–317.

Von Stryk, O., Bulirsch, R., 1992. Direct and indirect methods for trajectory optimization. Annals of Operations Research 37, 357–373.

Wang, C., Matthies, H.G., Xu, M., Li, Y., 2018.
Dual interval-and-fuzzy analysis method for temperature prediction with hybrid epistemic uncertainties via polynomial chaos expansion. Computer Methods in Applied Mechanics and Engineering 336, 171–186.

Wang, K., Sun, W., 2018. A multiscale multi-permeability poroplasticity model linked by recursive homogenizations and deep learning. Computer Methods in Applied Mechanics and Engineering 334, 337–380.

Wang, P., Lu, Z., Tang, Z., 2013. An application of the kriging method in global sensitivity analysis with parameter uncertainty. Applied Mathematical Modelling 37, 6543–6555.

Wang, Y., Pang, Y., Chen, O., Iyer, H.N., Dutta, P., Menon, P., Liu, Y., 2021. Uncertainty quantification and reduction in aircraft trajectory prediction using Bayesian-entropy information fusion. Reliability Engineering & System Safety 212, 107650.

Wang, Z., Gao, Z., Wang, Y., Cao, Y., Wang, G., Liu, B., Wang, Z., 2015. A new dynamic testing method for elastic, shear modulus and Poisson's ratio of concrete. Construction and Building Materials 100, 129–135.

Wang, Z., Ghanem, R., 2019. Stochastic sensitivities across scales and physics. EMI 2019.

Wang, Z., Ghanem, R., 2021. An extended polynomial chaos expansion for PDF characterization and variation with aleatory and epistemic uncertainties. Computer Methods in Applied Mechanics and Engineering 382, 113854.

Wang, Z., Grant, M.J., 2017. Constrained trajectory optimization for planetary entry via sequential convex programming. Journal of Guidance, Control, and Dynamics 40, 2603–2615.

Wang, Z., Jia, G., 2020. Augmented sample-based approach for efficient evaluation of risk sensitivity with respect to epistemic uncertainty in distribution parameters. Reliability Engineering & System Safety 197, 106783.

Wesnousky, S.G., 1994. The Gutenberg-Richter or characteristic earthquake distribution, which is it? Bulletin of the Seismological Society of America 84, 1940–1959.

Willcox, K.E., Ghattas, O., Heimbach, P., 2021.
The imperative of physics-based modeling and inverse theory in computational science. Nature Computational Science 1, 166–168.

Wu, W., Fish, J., 2010. Toward a nonintrusive stochastic multiscale design system for composite materials. International Journal for Multiscale Computational Engineering 8.

Wu, W., Wang, Q., Li, W., 2018. Comparison of tensile and compressive properties of carbon/glass interlayer and intralayer hybrid composites. Materials 11, 1105.

Wu, Y.T., 1994. Computational methods for efficient structural reliability and reliability sensitivity analysis. AIAA Journal 32, 1717–1723.

Xu, Y., Basset, G., 2012. Sequential virtual motion camouflage method for nonlinear constrained optimal trajectory control. Automatica 48, 1273–1285.

Yao, W., Chen, X., Luo, W., Van Tooren, M., Guo, J., 2011. Review of uncertainty-based multidisciplinary design optimization methods for aerospace vehicles. Progress in Aerospace Sciences 47, 450–479.

Yin, S., Yu, D., Luo, Z., Xia, B., 2018. An arbitrary polynomial chaos expansion approach for response analysis of acoustic systems with epistemic uncertainty. Computer Methods in Applied Mechanics and Engineering 332, 280–302.

Zhang, J., TerMaath, S., Shields, M.D., 2020. Imprecise global sensitivity analysis using Bayesian multimodel inference and importance sampling. Mechanical Systems and Signal Processing 148, 107162.

Zhang, M., Pan, H., 2021. Application of generalized Pareto distribution for modeling aleatory variability of ground motion. Natural Hazards, 1–19.

Appendices

A Sensitivity of QoI to an independent model input

The sensitivity of the QoI X with respect to an independent random model input K, represented by the partial derivative $\partial X / \partial K$, can be derived using the polynomial chaos expansion as follows. First, decomposing the sensitivity using the chain rule results in

$$\frac{\partial X(\boldsymbol{\xi})}{\partial K} = \frac{\partial X(\boldsymbol{\xi})}{\partial \xi}\,\frac{d\xi}{dK}, \tag{A.1}$$

where $\xi$ is the standard Gaussian variable in $\boldsymbol{\xi}$ that corresponds to K. The first term on the right-hand side of Eq. (A.1) is represented by the PCE, which results in

$$\frac{\partial X(\boldsymbol{\xi})}{\partial \xi} = \sum_{|\boldsymbol{\alpha}| \le p} X_{\boldsymbol{\alpha}}\,\frac{\partial \psi_{\boldsymbol{\alpha}}(\boldsymbol{\xi})}{\partial \xi}. \tag{A.2}$$

Next, to deal with the second term on the right-hand side of Eq. (A.1), the mapping between $\xi$ and K is represented through the inverse cumulative distribution function as $\xi = \Phi^{-1}\big(F_K(k)\big)$, so that $F_K(k) = \Phi(\xi)$, where $\Phi(\cdot)$ denotes the cumulative distribution function of a standard Gaussian variable. Therefore,

$$\frac{d\xi}{dK} = \frac{d\Phi^{-1}\big(F_K(k)\big)}{dF_K(k)}\,\frac{dF_K(k)}{dK} = \frac{d\Phi^{-1}\big(F_K(k)\big)}{dF_K(k)}\,f_K(k), \tag{A.3}$$

where $f_K(k)$ is the PDF of K. Let $y = F_K(k)$; what needs to be derived is $\frac{d\Phi^{-1}(y)}{dy}$, after which $y = F_K(k)$ is substituted into the result. It is obvious that

$$\Phi\big(\Phi^{-1}(y)\big) = y, \tag{A.4}$$

and thus

$$\frac{d\Phi\big(\Phi^{-1}(y)\big)}{dy} = \frac{dy}{dy} = 1. \tag{A.5}$$

Again, by the chain rule,

$$\frac{d\Phi\big(\Phi^{-1}(y)\big)}{dy} = \frac{d\Phi\big(\Phi^{-1}(y)\big)}{d\Phi^{-1}(y)}\,\frac{d\Phi^{-1}(y)}{dy}. \tag{A.6}$$

Combining Eqs. (A.5) and (A.6) results in

$$\frac{d\Phi\big(\Phi^{-1}(y)\big)}{d\Phi^{-1}(y)}\,\frac{d\Phi^{-1}(y)}{dy} = 1, \tag{A.7}$$

and thus

$$\frac{d\Phi^{-1}(y)}{dy} = \left[\frac{d\Phi\big(\Phi^{-1}(y)\big)}{d\Phi^{-1}(y)}\right]^{-1} = \left[\frac{d\Phi\big(\Phi^{-1}(F_K(k))\big)}{d\Phi^{-1}(F_K(k))}\right]^{-1} = \left[\frac{d\Phi(\xi)}{d\xi}\right]^{-1} = \frac{1}{f_{\xi}(\xi)}, \tag{A.8}$$

where $f_{\xi}(\xi)$ is the PDF of $\xi$. Therefore, Eq. (A.3) is written as

$$\frac{d\xi}{dK} = \frac{f_K(k)}{f_{\xi}(\xi)}. \tag{A.9}$$

Finally, the sensitivity $\frac{\partial X}{\partial K}$ is derived as

$$\frac{\partial X}{\partial K} = \sum_{|\boldsymbol{\alpha}| \le p} X_{\boldsymbol{\alpha}}\,\frac{\partial \psi_{\boldsymbol{\alpha}}(\boldsymbol{\xi})}{\partial \xi}\,\frac{f_K(k)}{f_{\xi}(\xi)}. \tag{A.10}$$
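The identity in Eq. (A.9) can be checked numerically for a concrete case. The sketch below uses an illustrative Beta(3, 3) input K (the dissertation's inputs are Beta-distributed, but these shape parameters, grid, and step size are assumptions) and compares the analytical ratio against a central finite difference of the Gaussian map.

```python
import numpy as np
from scipy.stats import beta, norm

# Check d(xi)/dk = f_K(k) / f_xi(xi), with xi = Phi^{-1}(F_K(k)) (Eq. A.9),
# for an illustrative Beta(3, 3) input K.
K = beta(3.0, 3.0)
k = np.linspace(0.1, 0.9, 9)
xi = norm.ppf(K.cdf(k))

analytic = K.pdf(k) / norm.pdf(xi)  # right-hand side of Eq. (A.9)

# Central finite difference of xi(k) as an independent check
h = 1e-6
numeric = (norm.ppf(K.cdf(k + h)) - norm.ppf(K.cdf(k - h))) / (2.0 * h)

assert np.allclose(analytic, numeric, rtol=1e-4)
```

The agreement confirms that the derivative of the isoprobabilistic map never needs to be differenced in practice; the two PDFs in Eq. (A.9) are evaluated directly.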
Abstract
This work focuses on characterizing and managing inference for physical systems subject to uncertainty and modeling error. To this end, contributions that advance the state of the art in uncertainty quantification (UQ) include: (1) surrogate modeling that enables unified and efficient characterization and propagation of multiple sources of uncertainty; (2) sensitivity measures that quantitatively assess the impact of information on the full probability density function (PDF) and on the probability of failure (PoF)/reliability; (3) Bayesian model calibration that provides physically insightful priors at reduced computational cost; (4) stochastic multiscale modeling that quantifies hierarchical uncertainties and modeling errors. Together, these approaches constitute a systematic stochastic framework, grounded in the polynomial chaos formalism, for credible design, analysis, and optimization of complex systems in engineering and science. Applications in civil, mechanical, and aerospace engineering, as well as in seismic hazard analysis, are investigated using the proposed framework.
Asset Metadata
Creator: Wang, Zhiheng (author)
Core Title: A polynomial chaos formalism for uncertainty budget assessment
School: Viterbi School of Engineering
Degree: Doctor of Philosophy
Degree Program: Civil Engineering
Degree Conferral Date: 2022-05
Publication Date: 04/16/2022
Defense Date: 03/01/2022
Publisher: University of Southern California (original); University of Southern California. Libraries (digital)
Tags: Engineering and Scientific Applications; Epistemic and aleatory uncertainties; Extended polynomial chaos expansion; OAI-PMH Harvest; Sensitivity; uncertainty quantification
Format: application/pdf (imt)
Language: English
Contributor: Electronically uploaded by the author (provenance)
Advisor: Ghanem, Roger (committee chair); Jordan, Thomas (committee member); Masri, Sami (committee member)
Creator Email: wangzhiheng921124@163.com; zhihengw@usc.edu
Permanent Link (DOI): https://doi.org/10.25549/usctheses-oUC110964826
Unique identifier: UC110964826
Document Type: Dissertation
Rights: Wang, Zhiheng
Type: texts
Source: 20220416-usctheses-batch-926 (batch); University of Southern California (contributing entity); University of Southern California Dissertations and Theses (collection)
Access Conditions: The author retains rights to his/her dissertation, thesis or other graduate work according to U.S. copyright law. Electronic access is being provided by the USC Libraries in agreement with the author, as the original true and official version of the work, but does not grant the reader permission to use the work if the desired use is covered by copyright. It is the author, as rights holder, who must provide use permission if such use is covered by copyright. The original signature page accompanying the original submission of the work to the USC Libraries is retained by the USC Libraries, and a copy of it may be obtained by authorized requesters contacting the repository e-mail address given.
Repository Name: University of Southern California Digital Library
Repository Location: USC Digital Library, University of Southern California, University Park Campus MC 2810, 3434 South Grand Avenue, 2nd Floor, Los Angeles, California 90089-2810, USA
Repository Email: cisadmin@lib.usc.edu