STOCHASTIC DATA ASSIMILATION WITH APPLICATION TO MULTI-PHASE FLOW AND HEALTH MONITORING PROBLEMS

by George A. Saad

A Dissertation Presented to the FACULTY OF THE GRADUATE SCHOOL, UNIVERSITY OF SOUTHERN CALIFORNIA, in Partial Fulfillment of the Requirements for the Degree DOCTOR OF PHILOSOPHY (CIVIL ENGINEERING)

December 2007

Copyright 2007 George A. Saad

Dedication

To My Beloved Mother

Acknowledgments

I would like to express my sincere gratitude and appreciation to my advisor, Professor Roger Ghanem, for his insightful suggestions, technical guidance, support, and encouragement throughout the course of this study. I would also like to thank my thesis committee members, Professor Sami Masri, Professor Erik Johnson, and Professor Iraj Ershaghi, for their review, discussion, and valuable comments on the thesis. Furthermore, I would like to extend special thanks to my family for their support and encouragement in completing this work. Without their support, this work would not have been possible. The studies reported in this thesis were supported in part by grants from the National Science Foundation (NSF) and the Center for Interactive Smart Oilfield Technologies (CiSoft).

Table of Contents

Dedication
Acknowledgments
List of Tables
List of Figures
Abstract

Chapter 1  Introduction
  1.1 Reservoir Characterization
  1.2 Front Tracking
  1.3 Structural Health Monitoring
  1.4 Summary of Original Contributions
  1.5 Outline

Chapter 2  Uncertainty Representation
  2.1 Karhunen-Loeve Expansion
  2.2 Polynomial Chaos Expansion
  2.3 Polynomial Chaos Representation of Some Functions of Stochastic Processes
    2.3.1 The Lognormal Process
    2.3.2 Product of Two or More Stochastic Processes
    2.3.3 Ratio of Two Stochastic Processes
  2.4 Uncertainty Propagation: The Spectral Stochastic Finite Element Method
  2.5 Stochastic Model Reductions
  2.6 Application to the Embankment Dam Problem
    2.6.1 Implementation Details
    2.6.2 Analysis of Results

Chapter 3  Sequential Data Assimilation for Stochastic Models
  3.1 Overview of Data Assimilation
    3.1.1 Control Theory
    3.1.2 Direct Minimization Methods
    3.1.3 Estimation Theory
  3.2 The Kalman Filter
    3.2.1 Derivation of the Kalman Gain
    3.2.2 Relationship to Recursive Bayesian Estimation
  3.3 The Extended Kalman Filter
  3.4 The Unscented Kalman Filter
  3.5 The Ensemble Kalman Filter
    3.5.1 Practical Implementation of the EnKF
  3.6 The Particle Filter
  3.7 Coupling of Polynomial Chaos with the EnKF
    3.7.1 Representation of Error Statistics
    3.7.2 Analysis Scheme

Chapter 4  Multiphase Flow in Porous Media
  4.1 Flow Equations
  4.2 Features of the Reservoir and the Fluids
    4.2.1 Density and Viscosity
    4.2.2 Porosity
    4.2.3 Saturation and Residual Saturation
    4.2.4 Intrinsic and Relative Permeability
    4.2.5 Capillary Pressure
  4.3 Characterization of Reservoir Simulation Models
    4.3.1 Uncertainty Quantification
    4.3.2 Model Formulation
  4.4 Numerical Applications
    4.4.1 One Dimensional Buckley-Leverett Problem
      4.4.1.1 Case 1: Homogeneous Medium
      4.4.1.2 Case 2: Continuous Intrinsic Permeability and Porosity Profiles
      4.4.1.3 Case 3: Discontinuous Porosity and Intrinsic Permeability Profiles
    4.4.2 Two-Dimensional Water Flood Problem - Scenario 1
    4.4.3 Two-Dimensional Water Flood Problem - Scenario 2
      4.4.3.1 Case 1: Three Dimensions First Order Approximation
      4.4.3.2 Case 2: Coupled Three Dimensions Second Order Approximation
      4.4.3.3 Case 3: Coupled Ten Dimensions Second Order Approximation
  4.5 Control of Fluid Front Dynamics
    4.5.1 Example 1: Known Medium Properties
    4.5.2 Example 2: Stochastic Medium Properties

Chapter 5  Structural Health Monitoring of Highly Nonlinear Systems
  5.1 Introduction
  5.2 Numerical Application
    5.2.1 Non-parametric Representation of the Non-Linearity
    5.2.2 Results
      5.2.2.1 Example 1: Δt = 5 Time Steps
      5.2.2.2 Example 2: Δt = 20 Time Steps

Chapter 6  Sundance
  6.1 Guidelines for Using Sundance
  6.2 SSFEM Using Sundance
    6.2.1 Example: One-dimensional Bar with a Random Stiffness
    6.2.2 Step by Step Explanation
      6.2.2.1 Boilerplate
      6.2.2.2 Getting the Mesh
      6.2.2.3 Defining the Domains of Integration
      6.2.2.4 Defining the Spectral Basis and the Spatial Interpolation Basis
      6.2.2.5 Defining the Model Parameters and Excitation
      6.2.2.6 Defining the Unknown and Test Functions
      6.2.2.7 Writing the Weak Form
      6.2.2.8 Writing the Essential Boundary Conditions
      6.2.2.9 Creating the Linear Problem Object
      6.2.2.10 Specifying the Solver and Solving the Problem
    6.2.3 Complete Code for the Bar Problem

Chapter 7  Conclusion

References

Appendices
  Appendix A  Complete SUNDANCE Reservoir Characterization Codes
    A.1 Two-Dimensional Water Flooding Problem - Scenario 1
    A.2 Two-Dimensional Flow Control Problem
List of Tables

2.1 Probability of failure for different load and material strength combinations
4.1 Uncertainty Representation Using Polynomial Chaos
5.1 Bouc-Wen Model Coefficients

List of Figures

2.1 Contribution of successive scales of fluctuations in the KL expansion
2.2 Schematic of the Embankment Dam
2.3 Proposed Mesh of 2160 elements
2.4 Eigenvalues λn of the Exponential Covariance Kernel
2.5 The Significance of third order chaos expansion on the response
2.6 Coarse Mesh of 60 elements
2.7 The response representation using a 15-dimensional second order expansion obtained via SSFEM and SSFEM with Stochastic Model Reductions
2.8 Eigenvalues of equation 2.47
2.9 The response's representation obtained from 150000 Monte Carlo Simulations as compared to those resulting from a 60-dimensional second order Chaos expansion
3.1 The Kalman Filter Loop
4.1 Typical Relative Permeability Curves
4.2 Determination of the wetting phase
4.3 Typical Capillary Pressure-Saturation Curve
4.4 Case 1: Estimate of medium properties: (a) Mean intrinsic permeability, (b) Polynomial Chaos coefficients of intrinsic permeability, (c) mean porosity, and (d) Polynomial Chaos coefficients of porosity
4.5 Case 2: Estimate of medium properties: (a) Mean intrinsic permeability, (b) Polynomial Chaos coefficients of intrinsic permeability, (c) mean porosity, and (d) Polynomial Chaos coefficients of porosity
4.6 Case 2: The probability density function of the estimated medium properties: (a) intrinsic permeability, (b) porosity
4.7 Case 2: (a) Mean residual water saturation, (b) Polynomial Chaos coefficients of the estimated residual water saturation
4.8 Case 3: Estimate of medium properties: (a) Mean intrinsic permeability, (b) Polynomial Chaos coefficients of intrinsic permeability, (c) mean porosity, and (d) Polynomial Chaos coefficients of porosity
4.9 Case 3: (a) Mean residual water saturation, (b) Polynomial Chaos coefficients of the last estimated residual water saturation
4.10 (a) Mean of estimated intrinsic permeability, (b) true intrinsic permeability, (c) mean of estimated porosity, and (d) true porosity
4.11 Polynomial Chaos coefficients of the estimated intrinsic permeability: (a) ξ1, (b) ξ2, (c) ξ3, and (d) ξ1²−1
4.12 Polynomial Chaos coefficients of the estimated intrinsic porosity: (a) ξ1, (b) ξ2, (c) ξ3, and (d) ξ1²−1
4.13 (a) Mean of the estimated residual water saturation, (b) true residual water saturation
4.14 Example 2 Scenario 2: Problem Description
4.15 Example 2 Scenario 2: The Medium properties of the forward problem
4.16 Case 1: True (left) and Estimated (right) Water Saturation Profiles
4.17 Case 1: Estimated Chaos Coefficients of the exponential representation of the Medium properties
4.18 Case 1: The mean and variance of the Estimated Medium property α = K/φ
4.19 Case 2: True (left) and Estimated (right) Water Saturation Profiles
4.20 Case 2: Estimated Chaos Coefficients of the exponential representation of the Medium properties
4.21 Case 2: The mean and variance of the Estimated Medium property α = K/φ
4.22 Case 3: True (left) and Estimated (right) Water Saturation Profiles
4.23 Case 3: The mean and variance of the Estimated Medium property α = K/φ
4.24 Front control flow chart
4.25 Flow Control Example: Problem Setup
4.26 The coefficient α representing the Medium Properties
4.27 Example 1: The estimated injection rate: (a) Mean, (b) Higher Order Coefficients
4.28 Example 1: Target (left) and Mean Estimated (right) Water Saturation Profiles
4.29 Example 2: The estimated injection rate: (a) Mean, (b) Higher Order Coefficients
4.30 Example 2: Target (left) and Mean Estimated (right) Water Saturation Profiles
4.31 Example 2: estimated coefficients of the medium properties
5.1 Shear Building under analysis
5.2 The El Centro Excitation Applied to the Structure
5.3 (a) Estimate of the first floor displacement (Δt = 5 time steps), (b) Estimate of the first floor velocity (Δt = 5 time steps)
5.4 (a) Estimate of the fourth floor displacement (Δt = 5 time steps), (b) Estimate of the fourth floor velocity (Δt = 5 time steps)
5.5 Estimate of the Mean floor parameters (Δt = 5 time steps): (a) first floor, (b) second floor, (c) third floor, and (d) fourth floor
5.6 Probability density function of estimated floor 1 parameters (Δt = 5 time steps): (a) "a", (b) "b", (c) "c", and (d) "d"
5.7 (a) Estimate of the first floor displacement (Δt = 20 time steps), (b) Estimate of the first floor velocity (Δt = 20 time steps)
5.8 (a) Estimate of the fourth floor displacement (Δt = 20 time steps), (b) Estimate of the fourth floor velocity (Δt = 20 time steps)
5.9 Estimate of the Mean floor parameters (Δt = 20 time steps): (a) first floor, (b) second floor, (c) third floor, and (d) fourth floor
5.10 Probability density function of estimated floor 1 parameters (Δt = 20 time steps): (a) "a", (b) "b", (c) "c", and (d) "d"

Abstract

Model-based predictions are critically dependent on assumptions and hypotheses that are not based on first principles and that cannot necessarily be justified based on known prevalent physics. Constitutive models, for instance, fall under this category. While these predictive tools are typically calibrated using observational data, little is usually done with the scatter in the thus-calibrated model parameters. In this study, this scatter is used to characterize the parameters as stochastic processes, and a procedure is developed to carry out model validation for ascertaining the confidence in the predictions from the model. Most parameters in model-based predictive tools are heterogeneous in nature and have a large range of variability. The study thus aims at improving these predictive tools by using the Polynomial Chaos methodology to capture this heterogeneity and provide a more realistic description of the system's behavior. Consequently, a data assimilation technique based on forecasting the error statistics using the Polynomial Chaos methodology is developed. The proposed method allows the propagation of a stochastic representation of the unknown variables using Polynomial Chaos instead of propagating an ensemble of model states forward in time, as is suggested within the framework of the Ensemble Kalman Filter (EnKF).
This overcomes some of the drawbacks of the EnKF. Using the proposed method, the update preserves all the statistics of the posterior, unlike the EnKF, which maintains only the first two moments. At any instant in time, the probability density function of the model state or parameters can be easily obtained by simulating the Polynomial Chaos basis. Furthermore, it allows representation of non-Gaussian measurement and parameter uncertainties in a simpler, less taxing way, without the necessity of managing a large ensemble. The proposed method is used for realistic nonlinear models, and its efficiency is first demonstrated for reservoir characterization using automatic history matching and then for tracking the fluid front dynamics to maximize the waterflooding sweep efficiency by controlling the injection rates. The developed methodology is also used for system identification of civil structures with strongly nonlinear behavior.

History matching, the act of calibrating a reservoir model to match the observed reservoir behavior, has been extensively studied in recent years. Standard methods for reservoir characterization require adjoint or gradient-based methods to compute the gradient of the objective function and consequently minimize it. The computational cost of such methods increases exponentially as the number of model parameters or observational data increases. Recently, the EnKF was introduced for automatic history matching. The Ensemble Kalman Filter uses a Monte Carlo scheme to achieve satisfactory history matching results at a relatively low computational cost. In this study, the developed data assimilation methodology is used for improving the prediction of reservoir behavior.
To enhance the forecasting ability of a reservoir model, first a better description of the reservoir's geological and petrophysical features using a stochastic approach is adopted, and then the new data assimilation method, based on forecasting the error statistics using the Polynomial Chaos (PC) methodology, is employed. The reservoir model developed in this study is that of multiphase immiscible flow in a randomly heterogeneous porous medium. The model uncertainty is quantified by modeling the intrinsic permeability and porosity of the porous medium as stochastic processes via their PC expansions. The Spectral Stochastic Finite Element Method (SSFEM) is used to solve the multiphase flow equations. The SSFEM is integrated within SUNDANCE 2.0, software developed at Sandia National Laboratories for solving partial differential equations using finite element methods. Thus, SUNDANCE is used for the analysis or prediction step of the reservoir characterization, and an algorithm using the newmat C++ library is developed for updating the model via the new data assimilation methodology.

Using the same underlying physics, the proposed method is coupled with a control loop for the purpose of optimizing the fluid front dynamics in porous-media flow problems through rate control methods. The rate control is carried out using the developed data assimilation technique; the water injection rates are included as part of the state vector and are continuously updated so as to minimize the mismatch between the predicted front and a specified target front.

The second application of the proposed method aims at presenting a robust system identification technique for strongly nonlinear dynamics by combining the filtering methodology with a non-parametric representation of the system's nonlinearity. First, a non-parametric representation of the system nonlinearity is adopted.
The Polynomial Chaos expansion is employed to characterize the uncertainty within the model, and then the proposed data assimilation technique is used to characterize the robust stochastic polynomial representation of the system's nonlinearity. This enables monitoring the system's response and identifying any obscure changes within its behavior. The presented methodology is applied to the structural health monitoring of a four-story shear building subject to a ground excitation. The corresponding results show that the proposed system identification technique accurately detects changes in the system behavior in spite of measurement and modeling noise.

The results obtained from these two applications demonstrate the efficiency of the proposed method for parameter estimation problems. The combination of the Ensemble Kalman Filter with the Polynomial Chaos methodology thus proves to be an efficient sequential data assimilation technique that surpasses standard Kalman Filtering techniques while maintaining a relatively low computational cost.

Chapter 1  Introduction

Data assimilation aims at predicting and estimating true state unknowns by combining the observed information with the underlying dynamical model. It is an interdisciplinary field involving engineering, mathematics, and the physical sciences. Most data assimilation methods can be classified as either control or estimation theory methods. Control theory methods are mainly the gradient-based methods; their computational cost increases exponentially as the number of unknown parameters increases. This study focuses on estimation theory methods, with the Kalman Filter at their heart. It aims at developing a new data assimilation methodology based on combining Kalman Filtering techniques with the Polynomial Chaos methodology.
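Since the Kalman Filter recurs throughout what follows, its basic linear predict/update cycle is sketched below for reference. The two-state constant-velocity model, noise levels, and measurement are illustrative choices, not values from this study.

```python
import numpy as np

def kalman_step(x, P, F, Q, H, R, z):
    """One predict/update cycle of the linear Kalman filter.

    x, P : prior state estimate and covariance
    F, Q : linear model operator and process-noise covariance
    H, R : observation operator and measurement-noise covariance
    z    : the new measurement
    """
    # Forecast: integrate the model forward in time.
    x_f = F @ x
    P_f = F @ P @ F.T + Q
    # Kalman gain: the linear, minimum-variance weighting of the innovation.
    S = H @ P_f @ H.T + R
    K = P_f @ H.T @ np.linalg.inv(S)
    # Analysis: reinitialize the model with the measurement.
    x_a = x_f + K @ (z - H @ x_f)
    P_a = (np.eye(len(x)) - K @ H) @ P_f
    return x_a, P_a

# Toy position/velocity system; only position is observed.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])
Q = 1e-4 * np.eye(2)
H = np.array([[1.0, 0.0]])
R = np.array([[0.05]])
x, P = np.zeros(2), np.eye(2)
x_a, P_a = kalman_step(x, P, F, Q, H, R, z=np.array([1.0]))
```

The analysis covariance `P_a` is never larger than the forecast covariance, which is the sense in which the gain is the minimum-variance weighting.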
The Kalman Filter is a sequential data assimilation methodology that integrates the model forward in time and, whenever measurements are available, uses them to reinitialize the model before the integration continues [Kal60]. It is an optimal state estimation process applied to dynamical systems involving random perturbations. The Kalman Filter provides a linear, unbiased, minimum-variance algorithm to optimally estimate the state of the system from noisy measurements. If the model is nonlinear or a parameter estimation is required, a linearization procedure is usually performed in deriving the filtering equations [CC91]. The Kalman Filter thus obtained is known as the Extended Kalman Filter (EKF). The Ensemble Kalman Filter was later introduced [Eve94] to overcome some of the drawbacks of the EKF. The EnKF considers the effects of the higher order statistics that were neglected by the EKF due to linearization, but preserves only the first two moments when propagating the error statistics. It also reduces the computational cost of the EKF by presenting simpler techniques for propagating the model errors.

This study presents a data assimilation method based on forecasting the error statistics using the Polynomial Chaos methodology. Instead of propagating an ensemble of model states forward in time, as is suggested within the framework of the EnKF, the proposed method allows the propagation of a stochastic representation of the unknown variables using Polynomial Chaos. This overcomes some of the drawbacks of the EnKF. Using the proposed method, the update preserves all the statistics of the posterior, unlike the EnKF, which maintains only the first two moments. Furthermore, at any instant in time, the probability density function of the model state or parameters can be easily obtained by simulating the Polynomial Chaos basis.
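As a concrete illustration of how statistics fall directly out of a PC representation, consider the one-dimensional Hermite chaos of a lognormal random variable, the case treated in Section 2.3.1. The coefficients below use the standard closed form a_k = exp(μ + σ²/2) σ^k / k!; the parameter values and truncation order are arbitrary.

```python
import math

mu, sigma = 0.0, 0.3          # parameters of the underlying Gaussian
P = 8                         # truncation order of the 1-D Hermite PC basis

# Hermite-chaos coefficients of L = exp(mu + sigma*xi), xi ~ N(0,1):
#   a_k = exp(mu + sigma^2/2) * sigma^k / k!
a = [math.exp(mu + sigma**2 / 2) * sigma**k / math.factorial(k)
     for k in range(P + 1)]

# The full distribution is carried by P+1 numbers; the first two moments
# fall out of the coefficients directly, since E[psi_k^2] = k! for the
# probabilists' Hermite polynomials.
pc_mean = a[0]
pc_var = sum(a[k]**2 * math.factorial(k) for k in range(1, P + 1))

# Closed-form lognormal moments for comparison.
exact_mean = math.exp(mu + sigma**2 / 2)
exact_var = (math.exp(sigma**2) - 1.0) * math.exp(2 * mu + sigma**2)
```

No ensemble is stored or propagated: the nine coefficients are the entire representation, and sampling the basis (drawing ξ and evaluating the series) recovers the probability density function whenever it is needed.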
It also allows the representation of non-Gaussian measurement and parameter uncertainties in a simpler, less taxing way, without the necessity of managing a large ensemble. The computational load for relevant accuracy is comparable to that of the EnKF, and the storage size is limited to the number of terms in the PC expansion of the model states. The proposed method can be used for realistic nonlinear models, and its efficiency is demonstrated on reservoir characterization using automatic history matching, on front tracking of flow in porous media, and on system identification of structures with strong nonlinearities.

1.1 Reservoir Characterization

Reservoir simulation is a powerful tool for reservoir characterization and management. It enhances the production forecasting process. The efficiency of a reservoir model relies on its ability to characterize the geological and petrophysical features of the actual field. One of the most commonly used methods for reservoir characterization is automatic history matching. History matching aims at estimating reservoir parameters, such as porosities and permeabilities, so as to minimize the square of the mismatch between observations and computed values.

In short, history matching is a parameter estimation problem aiming at finding the probability density function of the parameters and associated model states based on measurements related to these states and possibly the parameters themselves [Eve05]. Well-established history matching techniques rely on gradient-type methods for minimization of cost functions [Ca74, CDL75, Ma93]. The gradient-based approach was extended for multiscale estimation [GMNU03, Aan05]. Furthermore, the gradual deformation method introduced by Roggero and Hu [RH98] has gained some interest. These traditional history matching techniques suffer from some drawbacks [NBJ06]:

• They are usually performed only after a period of years, on a campaign basis.
• Ad hoc matching techniques are applied, and the process usually involves manual adjustment of model parameters instead of systematic updating.

• Measurement and modeling uncertainties in the state vector are usually not explicitly taken into account.

• Often, the resulting history-matched model violates essential geological constraints.

• Most critically, although the updated model may reproduce the production data perfectly, it has little or no predictive capacity because it may have been overfitted by adjusting a large number of unknown parameters using a much smaller number of measurements.

Various techniques for automated history matching have been developed over the past decade to address these issues. The Ensemble Kalman Filter (EnKF) introduced by Evensen [Eve94, BLE98, Eve03] was recently used for online estimation of reservoir parameters and state variables given the production history data [NMV02, NJAV03, GO05, GR05, LO05, WC05]. Although the EnKF yields satisfactory history matching results, it has some drawbacks. The EnKF copes badly with non-Gaussian probability density functions, since it requires a large ensemble size to accurately characterize the statistics of the corresponding functions [Kiv03]. Thus, this study presents a variation of the EnKF that allows the propagation of a stochastic representation of the variables using the Polynomial Chaos expansion. Consequently, all the statistics of the random variables are preserved. First, it is required to accurately represent the geological features of the reservoir and the fluid flowing within, and consequently uncertainty must be quantified.
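For later comparison with the PC-based update, a minimal perturbed-observation EnKF analysis step in the spirit of [BLE98] can be sketched as follows. The two-parameter toy state, ensemble size, and noise statistics are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_analysis(X, H, R, d):
    """EnKF analysis: update each ensemble member with a perturbed copy of
    the measurement, using the ensemble-estimated forecast covariance."""
    n, N = X.shape
    A = X - X.mean(axis=1, keepdims=True)       # ensemble anomalies
    P_f = A @ A.T / (N - 1)                     # sample forecast covariance
    K = P_f @ H.T @ np.linalg.inv(H @ P_f @ H.T + R)
    # Perturbed observations: one noisy copy of d per member.
    D = d[:, None] + rng.multivariate_normal(np.zeros(len(d)), R, size=N).T
    return X + K @ (D - H @ X)

# Toy two-parameter estimation problem with one observed component.
N = 500
X = rng.normal([1.0, 2.0], [0.5, 0.5], size=(N, 2)).T   # forecast ensemble
H = np.array([[1.0, 0.0]])
R = np.array([[0.01]])
d = np.array([1.4])                                      # the measurement
X_a = enkf_analysis(X, H, R, d)
```

The update pulls the ensemble mean toward the data and shrinks the spread, but only the first two moments enter the gain: whatever non-Gaussian structure the forecast ensemble carried is only crudely represented unless N is very large, which is the drawback discussed above.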
Major sources of uncertainty in the process of history matching and production forecasting include [AML05]:

• Measurement errors
• Uncertain description of geological and fluid parameters
• Model errors and imperfections

The model uncertainty is quantified by modeling the intrinsic permeability and porosity of the porous medium as stochastic processes via their Polynomial Chaos expansions. This yields a system of coupled nonlinear stochastic partial differential equations. The Spectral Stochastic Finite Element Method (SSFEM) proposed by Ghanem and Spanos [GS03] is employed to solve this system at discrete time steps. SSFEM is an extension of the deterministic finite element method (FEM) to stochastic boundary value problems. In the context of SSFEM, the randomness in the problem is regarded as an additional dimension with an associated Hilbert space of L2 functions in which a set of basis functions is identified. This set is known as the Polynomial Chaos and is used to discretize the random dimension [GS03, Wie38]. The SSFEM is integrated within SUNDANCE 2.0 [Lon04], a finite element partial differential equation solver, for maximal computational efficiency. SUNDANCE is a toolkit that allows construction of an entire parallel simulator and its derivatives using a high-level symbolic language. It is sufficient to specify the weak formulation of the partial differential equation and its discretization, along with a finite element mesh, to obtain a solution. The proposed method is then used to filter the stochastic representations obtained by the SSFEM. The proposed history matching technique is used for the characterization of both one-dimensional and two-dimensional heterogeneous reservoirs. Different ways of representing the medium uncertainties are discussed, and the obtained results reveal the efficiency of the proposed scheme.

1.2 Front Tracking

Another important aspect of reservoir simulation is the maximization of some measure of the displacement efficiency.
In cases where the injected fluid composition is fixed, the only available way to control the front dynamics is through controlling the allocation of the injected fluid to the injection wells. Therefore, to maximize the sweep efficiency of the waterflooding process, an algorithm is devised to update the injection flow rates so as to maintain a specified front. The algorithm uses the proposed filtering scheme and aims at minimizing the mismatch between the predicted water saturation and a discretization of the objective front. This is coupled with the history matching technique described earlier, rendering a flow control problem that takes into consideration the effects of parametric, modeling, and measurement uncertainties. This has been an active research area in the past couple of decades [NBJ06, SY00, FR86, FR87, Ash88].

Using the same underlying physics described earlier, the proposed method is coupled with a control loop for the purpose of optimizing the fluid front dynamics in porous-media flow problems through rate control methods. The rate control is carried out using the developed data assimilation technique; the water injection rates are included as part of the state vector and are continuously updated so as to minimize the mismatch between the predicted front and a specified target front. The efficiency of this control approach is demonstrated on a two-dimensional reservoir model with two injectors and multiple producers. The aim is to maintain a uniform flow throughout the simulation.

1.3 Structural Health Monitoring

Numerous structural engineering problems exhibit nonlinear dynamical behavior with uncertain and complex governing laws. With recent technological advances, sensors and other monitoring devices have become more accurate and abundant. Therefore, their use for health monitoring of civil structures has gained popularity.
The major problem that remains is to devise proper mathematical models that can cope with and analyze the massive flow of measurement data. This has been a very active research area over the past decade [GS95, LBL02, ZFYM02, FBL05, FIIN05, GF06].

System identification and damage detection in most real-life structures must be adapted to uncertainties and noise sources that cannot be modeled as Gaussian processes. These uncertainties are typically associated with modeling, parametric, and measurement errors. In cases where these uncertainties are significant, standard identification and damage detection techniques are either unsuitable or inefficient. In this study, a system identification procedure based on coupling robust non-parametric nonlinear models with the Polynomial Chaos methodology in the context of Kalman Filtering techniques is presented. First, a non-parametric representation of the system nonlinearity is adopted. The Polynomial Chaos expansion is employed to characterize the uncertainty within the model, and then the proposed data assimilation technique is used to characterize the robust stochastic polynomial representation of the system's nonlinearity, which enables monitoring the system's response and identifying any obscure changes within its behavior. The presented methodology is applied to the structural health monitoring of a four-story shear building subject to a ground excitation. The corresponding results show that the proposed system identification technique accurately detects changes in the system behavior in spite of measurement and modeling noise.

1.4 Summary of Original Contributions

This section provides a summary of the original contributions presented in this study:

• A stochastic sequential data assimilation technique based on forecasting the error statistics using the Polynomial Chaos methodology is presented.
This enables the calibration of predictive models by characterizing the parameters as stochastic processes.

• The application of the proposed technique to reservoir characterization is an innovative approach that allows approximating the higher order statistics of the production forecast. The use of PC to represent the uncertainty in the reservoir model and the measurements helps convey the actual medium in a more realistic way.

• A novel control approach using the Polynomial Chaos Kalman Filter is presented for optimizing the fluid front dynamics in porous media using rate control.

• The Polynomial Chaos methodology is used to characterize the uncertainty in robust non-parametric representations of structural system nonlinearities. The latter is coupled with the developed data assimilation technique to render a robust structural health monitoring methodology.

The details of all the aforementioned approaches are presented in the following chapters.

1.5 Outline

Chapter two gives a review of the various methods used in this study to represent uncertainties. It presents the details of the Polynomial Chaos expansion and shows the implementation details of the Spectral Stochastic Finite Element Method. Furthermore, chapter two presents a model reduction technique to improve the computational efficiency of the Spectral Stochastic Finite Element Method for solving high dimensional stochastic systems. In chapter three, sequential data assimilation techniques, with a focus on the Kalman Filter and its various nonlinear extensions, are discussed. The implementation details of a novel data assimilation technique based on coupling the Ensemble Kalman Filter with the Polynomial Chaos methodology are given. Chapter four presents the basics of transport and flow in porous media. The governing flow equations are derived, and the various geological and petrophysical features of the reservoir and fluid parameters are discussed.
The efficiency of automatic history matching using the proposed filter is demonstrated. Furthermore, a rate control technique for optimizing the sweep efficiency in the water flooding process is devised. In chapter five, the proposed filter is applied to the system identification of highly nonlinear structures. Non-parametric representations of the structural nonlinearity are discussed, and the setup is applied to the structural health monitoring of a four-story shear building subject to a ground excitation. Chapter six gives a description of SUNDANCE's capabilities. It presents the implementation details of the spectral library within SUNDANCE and gives a detailed example as a guide for users to follow. Finally, some concluding remarks and future research directions are mentioned.

Chapter 2 Uncertainty Representation

Uncertainty in physical data and phenomena can be attributed to two sources. The first is inherently irregular phenomena that cannot be described deterministically; the other is phenomena that are not uncertain in nature but to which uncertainties are attributed due to a lack of available data [GS03]. Therefore, for any numerical model of a physical system to be useful, these uncertainties have to be accurately represented in the model, and proper numerical schemes should be used for the analysis. To have a complete understanding of the approach used in this study, it is important to introduce some mathematical concepts. Let $H$ be the Hilbert space of functions [Ode79] defined over a domain $D$ with values on the real line $\mathbb{R}$. Denote by $(\Omega,\Psi,P)$ a probability space, where $\Omega$ represents the sample space, $\Psi$ a $\sigma$-algebra of measurable subsets of $\Omega$, and $P$ a measure on $\Psi$ with $P(\Omega)=1$. A random variable is defined as a mapping $\Omega\rightarrow\mathbb{R}$. Let $x$ be an element of $D$ and $\theta$ an element of $\Omega$. Then $\Theta$ denotes the space of functions mapping $\Omega$ to the real line.
The inner products over $H$ and $\Theta$ are defined using the Lebesgue measure and the probability measure, respectively. For any two elements $h_i(x)$ and $h_j(x)$ in $H$, the inner product is defined as

$(h_i(x),h_j(x)) = \int_D h_i(x)h_j(x)\,dx.$  (2.1)

Similarly, for any two elements $\alpha(\theta)$ and $\beta(\theta)$ in $\Theta$, the inner product is defined as

$(\alpha(\theta),\beta(\theta)) = \int_\Omega \alpha(\theta)\beta(\theta)\,dP,$  (2.2)

where $dP$ is a probability measure. Elements in the Hilbert spaces described above are said to be orthogonal if their inner product is zero. A random process is then defined as a function in the product space $D\times\Omega$. In what follows, some of the available techniques for addressing problems with stochasticity are reviewed. First, different methods for representing the uncertainty are described, and then the details of the Spectral Stochastic Finite Element Method (SSFEM), as a tool for analyzing stochastic models, are presented. The computational cost of the SSFEM depends on the mesh size, the dimension of the polynomial chaos expansion of the random variables, and the order of the polynomial chaos (PC) expansion of the solution. Specifically, the size of the linear system resulting from the SSFEM increases rapidly as the number of terms in the PC expansion grows. Therefore, a stochastic model reduction technique is presented to help cope with this problem.

2.1 Karhunen-Loeve Expansion

In cases where uncertainties possess a well defined covariance function, Monte Carlo simulation is the most widely used method for representing the randomness. It consists of sampling the functions in a random, collocation-like scheme, and therefore requires a large ensemble of samples to accurately represent the process statistics. The Karhunen-Loeve expansion [Loe77] is a more theoretically appealing way to represent these functions. It is a Fourier-type series representation of random processes, based on the spectral decomposition of the covariance function of the stochastic process being represented.
The expansion takes the following form:

$Y(x,\theta) = \langle Y(x)\rangle + \sum_{i=1}^{\infty}\xi_i(\theta)\sqrt{\lambda_i}\,\phi_i(x)$  (2.3)

where

$\langle\xi_i\rangle = 0, \quad \langle\xi_i\xi_j\rangle = \delta_{ij}, \quad i,j = 1,\dots,\mu, \quad \xi_0 \equiv 1.$  (2.4)

In the above equation, $\theta$ is a random index spanning a domain in the space of random events, the product $\xi_i(\theta)\sqrt{\lambda_i}$ represents the random amplitude associated with the deterministic shape functions $\phi_i(x)$, and $\langle\cdot\rangle$ denotes the mathematical expectation operator. The sets $\{\phi_i\}_{i=1}^{\infty}$ and $\{\lambda_i\}_{i=1}^{\infty}$ are associated with the solution of the covariance kernel integral eigenvalue problem,

$\int_D C(x_1,x_2)\phi_i(x_2)\,dx_2 = \lambda_i\phi_i(x_1)$  (2.5)

where $D$ denotes the spatial extent of the stochastic process, and $C(x_1,x_2)$ is the covariance function. In practice, the Karhunen-Loeve expansion is usually truncated to a finite number of terms,

$Y(x,\theta) = \langle Y(x)\rangle + \sum_{i=1}^{\mu}\xi_i(\theta)\sqrt{\lambda_i}\,\phi_i(x).$  (2.6)

The number of terms depends on the distribution of the random pattern of the stochastic process being modeled. The closer the process is to white noise, the more terms are required, while if a random variable is to be represented, a single term in the expansion is sufficient. This is directly correlated with the eigenvalues resulting from solving the covariance kernel problem. The series is truncated after the first $\mu$ largest generalized eigenvalues of (2.5) such that $\sum_{i=1}^{\mu}\lambda_i/\sum_i\lambda_i$ is sufficiently close to one. The monotonic decay of the eigenvalues is guaranteed by the symmetry of the covariance function, and the rate of the decay is related to the correlation length of the process being expanded. Figure 2.1 shows the monotonic decay of the eigenvalues $\lambda_i$. The Karhunen-Loeve expansion is mean-squared convergent and optimal provided that the process being expanded has a finite variance.
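The discretization and truncation rule above can be sketched numerically. The following is a minimal, self-contained collocation approximation of the eigenvalue problem (2.5) on a 1-D grid; the exponential covariance kernel, the grid, and the 95% energy threshold are illustrative choices, not from the text.

```python
import numpy as np

def kl_expansion(grid, cov, energy=0.95):
    # Collocation approximation of the Fredholm eigenvalue problem (2.5):
    # the quadrature weight dx turns the integral equation into an
    # algebraic symmetric eigenvalue problem.
    dx = grid[1] - grid[0]
    C = cov(grid[:, None], grid[None, :]) * dx
    lam, phi = np.linalg.eigh(C)
    lam, phi = lam[::-1], phi[:, ::-1]           # sort descending
    # Truncate after the first mu modes such that
    # sum_{i<=mu} lambda_i / sum_i lambda_i >= energy
    mu = int(np.searchsorted(np.cumsum(lam) / lam.sum(), energy)) + 1
    return lam[:mu], phi[:, :mu] / np.sqrt(dx)   # L2-normalized eigenfunctions

# Exponential covariance with unit correlation length on [0, 10]
x = np.linspace(0.0, 10.0, 200)
lam, phi = kl_expansion(x, lambda a, b: np.exp(-np.abs(a - b)))

# One realization of the truncated expansion (2.6) for a zero-mean process
xi = np.random.default_rng(0).standard_normal(lam.size)
Y = phi @ (np.sqrt(lam) * xi)
```

The slower the kernel decorrelates, the more modes survive the energy threshold, which is exactly the behavior discussed for the low correlation lengths later in this chapter.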
It is optimal in the sense that the mean square error resulting from a finite representation of the process $Y(x,\theta)$ is minimized.

Figure 2.1: Contribution of successive scales of fluctuations in the KL expansion

The disadvantage of the Karhunen-Loeve expansion is that its use requires prior knowledge of the covariance function. Therefore, more general representations are necessary for representing different processes.

2.2 Polynomial Chaos Expansion

The Polynomial Chaos expansion is more general than the Karhunen-Loeve expansion in the sense that it does not require prior knowledge of the covariance function. It is used to represent the solution of a stochastic partial differential equation, which is usually a function of the problem parameters. The expansion involves an infinite basis set that completely spans the $L_2$ space of random variables, and whose elements are orthogonal polynomials. It can be shown [CM47] that, provided the solution has a finite variance, this functional dependence can be expressed in terms of polynomials in Gaussian random variables, in the form

$u(x;\theta) = a_0(x)\Gamma_0 + \sum_{i_1=1}^{\infty} a_{i_1}(x)\Gamma_1(\xi_{i_1}(\theta)) + \sum_{i_1=1}^{\infty}\sum_{i_2=1}^{\infty} a_{i_1 i_2}(x)\Gamma_2(\xi_{i_1}(\theta),\xi_{i_2}(\theta)) + \dots$  (2.7)

In this equation, $\Gamma_n(\xi_{i_1},\dots,\xi_{i_n})$ denotes the $n$th order polynomial chaos in the variables $(\xi_{i_1},\dots,\xi_{i_n})$. These are generalizations of the multidimensional Hermite polynomials, and $a_{i_1,\dots,i_n}$ are deterministic coefficients in the expansion [Wie38]. Upon introducing a one-to-one mapping to a set of ordered indices $\{\psi_i\}$ and truncating the polynomial chaos expansion after the $P$th term, the above equation can be written as

$u(x;\theta) = \sum_{j=0}^{P} u_j(x)\psi_j(\theta).$  (2.8)

These polynomials are orthogonal with respect to the joint probability measure of $(\xi_{i_1},\dots,\xi_{i_n})$; i.e., their inner product $\langle\psi_j\psi_k\rangle$ is equal to zero for $j \neq k$.
The $j$th order polynomials $\psi_j(\xi)$ can be explicitly evaluated as

$\psi_0(\xi) = 1$  (2.9)

$\psi_1(\xi_{i_1}) = \xi_{i_1}$  (2.10)

$\psi_2(\xi_{i_1},\xi_{i_2}) = \xi_{i_1}\xi_{i_2} - \delta_{i_1 i_2}$  (2.11)

$\psi_3(\xi_{i_1},\xi_{i_2},\xi_{i_3}) = \xi_{i_1}\xi_{i_2}\xi_{i_3} - \xi_{i_1}\delta_{i_2 i_3} - \xi_{i_2}\delta_{i_1 i_3} - \xi_{i_3}\delta_{i_1 i_2}$  (2.12)

where $\delta_{ij}$ is the Kronecker delta. In general, the Polynomial Chaos of order $n$ can be obtained as

$\psi_n(\xi_{i_1},\dots,\xi_{i_n}) = (-1)^n\, e^{\frac{1}{2}\xi^T\xi}\,\frac{\partial^n e^{-\frac{1}{2}\xi^T\xi}}{\partial\xi_{i_1}\cdots\partial\xi_{i_n}}.$  (2.13)

Once the deterministic coefficients $\{u_i\}$ are calculated, a complete probabilistic characterization of the stochastic process is achieved. A truncated Polynomial Chaos can be refined along the random dimension either by adding more random variables to the set $\{\xi_i(\theta)\}$ or by increasing the order of the polynomials in the PC expansion. Note that the total number of terms $P+1$ in a PC expansion of order less than or equal to $p$ in $M$ random dimensions is given by

$P+1 = \frac{(p+M)!}{M!\,p!}.$  (2.14)

2.3 Polynomial Chaos Representation of Some Functions of Stochastic Processes

In many stochastic systems, one is often faced with the obstacle of efficiently representing functions of stochastic processes, be they dependent or independent, Gaussian or non-Gaussian. These functions may be ratios, products, or even polynomials of stochastic processes. As long as these functions belong to $L_2$, they admit their own Polynomial Chaos representations. In this study, algorithms are devised to perform these operations on the PC expansions of the random processes.

2.3.1 The Lognormal Process

Consider the process $l(x,z)$ obtained from the Gaussian field $Y(x,z)$ via exponentiation. This process is a stochastic field having a lognormal marginal probability distribution. It can be represented in terms of multidimensional Hermite polynomials orthogonal with respect to the Gaussian measure [Gha99],

$l(x,z;\theta) = \sum_{i=0}^{P}\psi_i(\theta)\,l_i(x,z).$
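As a small illustration of eq. (2.14) and of the one-dimensional Hermite building blocks behind (2.9)-(2.12), the sketch below counts the chaos terms and checks orthogonality numerically; the sample size and tolerance are illustrative.

```python
import math
import numpy as np

def num_pc_terms(M, p):
    # Total number of terms P+1 of eq. (2.14): (p+M)! / (M! p!)
    return math.factorial(p + M) // (math.factorial(M) * math.factorial(p))

def hermite(n, x):
    # Probabilists' Hermite polynomial He_n(x) via the three-term
    # recurrence He_{k+1} = x He_k - k He_{k-1}
    h_prev, h = np.ones_like(x), np.asarray(x, dtype=float)
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, x * h - k * h_prev
    return h

# 60 Gaussian dimensions at second order -- the setting used for the
# embankment dam problem later in this chapter -- give 1891 terms.
print(num_pc_terms(60, 2))   # -> 1891

# Orthogonality <He_2 He_3> = 0 under the standard Gaussian measure,
# checked crudely by Monte Carlo sampling
xi = np.random.default_rng(1).standard_normal(1_000_000)
print(abs(np.mean(hermite(2, xi) * hermite(3, xi))) < 0.1)
```

The factorial growth of (2.14) is the root of the computational difficulty addressed by the model reduction in Section 2.5.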
(2.15)

The zeroth order term in the polynomial chaos expansion is the mean of the process and is given by

$l_0(x,z) = \exp\!\left[\mu_Y + \frac{\sigma_Y^2}{2}\right]$  (2.16)

where $\mu_Y$ and $\sigma_Y$ denote the mean and standard deviation of the Gaussian random field $Y$, respectively. The higher order terms are

$l_i(x,z) = \frac{\langle l(x,z)\psi_i\rangle}{\langle\psi_i^2\rangle} = \frac{\langle e^{Y(x,z)}\psi_i\rangle}{\langle\psi_i^2\rangle}.$  (2.17)

The denominator can be easily evaluated and tabulated, and the numerator can be expressed as [Gha99]

$\langle l(x,z)\psi_i\rangle = \exp\!\left[Y_0(x,z) + \frac{1}{2}\sum_{j=1}^{N} Y_j^2(x,z)\right]\langle\psi_i(\eta)\rangle,$  (2.18)

where $\langle\psi_i(\eta)\rangle$ represents the average of the polynomial chaos centered around $Y_i$. Consequently, the lognormal process can be represented as

$l(x,z;\theta) = l_0(x,z)\left(1 + \sum_{i=1}^{N}\xi_i(\theta)Y_i(x,z) + \sum_{i=1}^{N}\sum_{j=1}^{N}\frac{\xi_i(\theta)\xi_j(\theta)-\delta_{ij}}{\langle(\xi_i\xi_j-\delta_{ij})^2\rangle}\,Y_i(x,z)Y_j(x,z) + \dots\right).$  (2.19)

2.3.2 Product of Two or More Stochastic Processes

Consider two random processes, $A(x,\theta)$ and $B(x,\theta)$, and their respective Polynomial Chaos approximations:

$\hat{A}(x,\theta) = \sum_{i=0}^{P} A_i(x)\psi_i(\xi),$  (2.20)

$\hat{B}(x,\theta) = \sum_{j=0}^{P} B_j(x)\psi_j(\xi).$  (2.21)

We need to determine the PC expansion of the process $C$ obtained from the product of $A$ and $B$,

$\hat{C}(x,\theta) = \sum_{k=0}^{P} C_k(x)\psi_k(\xi) = \sum_{i=0}^{P} A_i(x)\psi_i(\xi)\sum_{j=0}^{P} B_j(x)\psi_j(\xi).$  (2.22)

The $C_k$ coefficients are obtained by projection on a higher order chaos,

$C_k(x) = \sum_{i=0}^{P}\sum_{j=0}^{P} A_i(x)B_j(x)\,C_{ijk} \quad \forall k\in\{0,\dots,P\}$  (2.23)

where

$C_{ijk} = \frac{\langle\psi_i\psi_j\psi_k\rangle}{\langle\psi_k^2\rangle}.$  (2.24)

This can be easily extended to the product of three processes,

$D_l(x) = \sum_{i=0}^{P}\sum_{j=0}^{P}\sum_{k=0}^{P} A_i(x)B_j(x)C_k(x)\,D_{ijkl} \quad \forall l\in\{0,\dots,P\}$  (2.25)

where

$D_{ijkl} = \frac{\langle\psi_i\psi_j\psi_k\psi_l\rangle}{\langle\psi_l^2\rangle}.$  (2.26)

For computing the PC expansion of a polynomial, each product is computed separately using the above machinery, and then addition and subtraction are carried out. Addition and subtraction are performed by adding/subtracting the corresponding PC coefficients of the variables being added/subtracted.
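The projection (2.23)-(2.24) can be sketched in a one-dimensional Hermite chaos; here the triple products are computed by Gauss-Hermite quadrature rather than from tables, and the example polynomials are illustrative, not from the text.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def triple_products(P):
    # C_ijk = <psi_i psi_j psi_k> / <psi_k^2> of eq. (2.24) for a 1-D
    # probabilists' Hermite chaos; 2P+1 Gauss-Hermite nodes integrate
    # polynomials up to degree 4P+1 >= 3P exactly.
    x, w = hermegauss(2 * P + 1)
    w = w / np.sqrt(2.0 * np.pi)              # normalize to a probability measure
    psi = np.array([hermeval(x, [0.0] * n + [1.0]) for n in range(P + 1)])
    norms = np.array([math.factorial(n) for n in range(P + 1)])  # <psi_k^2> = k!
    return np.einsum('ix,jx,kx,x->ijk', psi, psi, psi, w) / norms[None, None, :]

def pc_product(a, b, C):
    # Chaos coefficients of the product, eq. (2.23)
    return np.einsum('i,j,ijk->k', a, b, C)

C = triple_products(2)
# (1 + xi)^2 = 2 + 2*He_1(xi) + He_2(xi), since xi^2 = He_2(xi) + 1
print(pc_product(np.array([1.0, 1.0, 0.0]), np.array([1.0, 1.0, 0.0]), C))
```

The quadrature construction extends directly to the quadruple products $D_{ijkl}$ of eq. (2.26) by adding one more factor to the `einsum`.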
2.3.3 Ratio of Two Stochastic Processes

Consider the stochastic process given by the ratio of two different stochastic processes,

$C(x;\theta) = \frac{A(x;\theta)}{B(x;\theta)}.$  (2.27)

Using the polynomial chaos expansion to represent the latter equation yields

$\sum_{k=0}^{P} C_k(x)\psi_k(\xi) = \frac{\sum_{i=0}^{P} A_i(x)\psi_i(\xi)}{\sum_{j=0}^{P} B_j(x)\psi_j(\xi)}.$  (2.28)

Cross-multiplying, and requiring the difference to be orthogonal to the approximating space spanned by the Polynomial Chaos $\{\psi_l\}_{l=0}^{P}$, yields

$\sum_{k=0}^{P}\sum_{j=0}^{P} C_k(x)B_j(x)\frac{\langle\psi_j\psi_k\psi_l\rangle}{\langle\psi_l^2\rangle} = A_l(x) \quad \forall l\in\{0,\dots,P\}.$  (2.29)

Expanding the equation for all values of $l$ results in a system of linear algebraic equations which, when solved, gives the deterministic chaos coordinates of $C$. The size of the resulting system is equal to the number of terms used in the chaos representation of the approximation.

2.4 Uncertainty Propagation: The Spectral Stochastic Finite Element Method

The SSFEM proposed by Ghanem and Spanos [GS03] is an extension of the deterministic finite element method (FEM) to stochastic boundary value problems. In the context of SSFEM, the randomness in the problem is regarded as an additional dimension with an associated Hilbert space of $L_2$ functions in which a set of basis functions is identified. This set is known as the Polynomial Chaos and is used to discretize the random dimension [GS03, Wie38]. In the deterministic finite element method, the spatial domain is replaced by a set of nodes that represent the finite element mesh, and the resulting solution is expressed in terms of nodal degrees of freedom $u_i$, $i = 1,\dots,N$, collected in a vector $U$. The strain energy $V^e$ stored in each element $A^e$ is expressed as

$V^e = \frac{1}{2}\int_{A^e}\sigma^T(x,z)\,\epsilon(x,z)\,dA^e$  (2.30)

where $dA^e$ is a differential element in $A^e$, and $\sigma(x,z)$ and $\epsilon(x,z)$ are the stress and strain vectors, respectively.
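The Galerkin system (2.29) can be sketched in the same one-dimensional Hermite setting; the quadrature-built triple products and the example processes below are illustrative, not from the text.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def triple_products(P):
    # Normalized triple products <psi_j psi_k psi_l>/<psi_l^2> for a 1-D
    # probabilists' Hermite chaos, via Gauss-Hermite quadrature
    x, w = hermegauss(2 * P + 1)
    w = w / np.sqrt(2.0 * np.pi)
    psi = np.array([hermeval(x, [0.0] * n + [1.0]) for n in range(P + 1)])
    norms = np.array([math.factorial(n) for n in range(P + 1)])
    return np.einsum('jx,kx,lx,x->jkl', psi, psi, psi, w) / norms[None, None, :]

def pc_ratio(a, b, C):
    # Eq. (2.29): for each l,
    #   sum_{k,j} C_k B_j <psi_j psi_k psi_l>/<psi_l^2> = A_l,
    # a (P+1)x(P+1) linear system in the chaos coordinates C_k
    M = np.einsum('j,jkl->lk', b, C)
    return np.linalg.solve(M, a)

C3 = triple_products(2)
a = np.array([1.3, 1.3, 0.3])    # chaos coordinates of A = (1 + xi)(1 + 0.3 xi)
b = np.array([1.0, 0.3, 0.0])    # chaos coordinates of B = 1 + 0.3 xi
print(pc_ratio(a, b, C3))        # recovers A/B = 1 + xi
```

As the text notes, the system is only $(P+1)\times(P+1)$, so the ratio costs one small dense solve per spatial point.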
For simplicity, assume that the model specified for the analysis is a plane strain, linearly elastic one; the stress may be expressed in terms of the strain as

$\sigma = D^e\,\epsilon$  (2.31)

where $D^e$ is the stochastic plane strain matrix of constitutive relations. The two-dimensional displacement vector $u(\theta)$, representing the longitudinal and transverse displacements within each element, can be expressed in terms of the nodal displacements of the element in the form

$u(\theta) = N^e(r,s)\,U^e(\theta)$  (2.32)

where $N^e(r,s)$ is the local interpolation matrix, $U^e(\theta)$ is the random nodal response vector, and $r$ and $s$ are local coordinates over the element. The total strain energy $V$ is obtained by summing the contributions from all the elements. This procedure gives

$V = \frac{1}{2}\sum_{e=1}^{N_e} U^{eT}\int_0^1\!\!\int_0^1 B^{eT} D^e(x,z;\theta)B^e\,|J^e|\,dr\,ds\;U^e$  (2.33)

where $|J^e|$ denotes the determinant of the Jacobian of the transformation that maps an arbitrary element $(e)$ onto the three-noded triangle with sides equal to one, and $B^e$ is the matrix that describes the dependence of strains on displacements. The polynomial chaos expansion of the matrix of constitutive properties may be substituted in the above equation to transform the latter into

$V = \frac{1}{2}U^T\sum_{k=0}^{P} K_k\psi_k(\theta)\,U.$  (2.34)

Minimizing the total potential energy with respect to $U$, and expanding the response vector $U$ using the Polynomial Chaos, leads to the following equation

$\sum_{i=0}^{P_1} K_i\psi_i\sum_{j=0}^{P_2} U_j\psi_j - F = 0$  (2.35)

where $F$ is the vector of nodal forces, and $U_j$ is the deterministic coordinate vector of the response, evaluated as the solution of the following system of algebraic equations

$\sum_{j=0}^{P_2} K_{jk}U_j = F_k, \quad k = 0,1,2,\dots,P_2$  (2.36)

where

$K_{jk} = \sum_{i=0}^{P_1} C_{ijk}K_i$  (2.37)

$F_k = \langle\psi_k F\rangle$  (2.38)

$C_{ijk} = \langle\psi_i\psi_j\psi_k\rangle.$  (2.39)

The size of the resulting linear system can rapidly increase with the growing number of terms in the polynomial chaos expansion, and with a fine mesh one can foresee significant computational challenges.
This emphasizes the need for establishing a reduction method that computationally simplifies the problem while preserving the accuracy and veracity of the SSFEM.

2.5 Stochastic Model Reductions

The stochastic model reduction for chaos representations method [DGRH] aims at obtaining an alternative to the polynomial chaos basis for representing the response. In order to do so, the method involves solving the problem using the complete order polynomial chaos basis on a coarse mesh. From the resulting solution, the covariance kernel of the response is estimated and used to approximate the dominant Karhunen-Loeve representation of the response,

$u(x,z;\theta) = \langle u(x,z)\rangle + \sum_{i=1}^{\mu}\sqrt{\nu_i}\,\eta_i(\theta)\phi_i(x,z).$  (2.40)

Moreover, noting the differentiability properties of the response $u$, it has been suggested [DGRH] that optimality in a more adapted functional space can be achieved by using a modified version of the Karhunen-Loeve expansion known as the Hilbert-Karhunen-Loeve representation. Accordingly, the sets $\{\nu_i\}$ and $\{\phi_i\}$ are associated with the solution of the following eigenvalue problem

$\big(R(\cdot,x_2,z_2),\phi_i(x_2,z_2)\big)_{E(D)} = \nu_i\phi_i(x_1,z_1) \quad \forall i,$  (2.41)

where $R(x_1,z_1,x_2,z_2) = \langle\,(u(x_1,z_1)-\langle u(x_1,z_1)\rangle)(u(x_2,z_2)-\langle u(x_2,z_2)\rangle)\,\rangle$, and $(\cdot,\cdot)_{E(D)}$ denotes the inner product induced by the energy norm associated with the mean of the process $D(x,z)$. The random variables $\{\eta_i\}$ characterized on the coarse mesh are assumed to retain their joint probabilistic measure as the solution is approximated on the fine mesh. Furthermore, each of the $\eta_i$'s is expressed in a Polynomial Chaos representation that is assumed to be well approximated on the coarse mesh. This method was applied to a benchmark study that was presented at ICOSSAR'05 and whose details are presented in what follows [GSD07].
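As a rough illustration of the size issue motivating the reduction, the block system (2.36)-(2.37) can be assembled explicitly; the matrices below are tiny random placeholders, not the dam problem, and the routine is a sketch rather than a production assembler.

```python
import numpy as np

def assemble_global(K, c):
    # Build the N(P+1) x N(P+1) block matrix of eqs. (2.36)-(2.37):
    # block (k, j) is K_jk = sum_i c_ijk K_i, coupling the chaos
    # coordinate U_j to the test function psi_k
    N = K.shape[1]
    P1 = c.shape[1]
    A = np.zeros((N * P1, N * P1))
    for j in range(P1):
        for k in range(P1):
            A[k * N:(k + 1) * N, j * N:(j + 1) * N] = \
                np.einsum('i,inm->nm', c[:, j, k], K)
    return A

# Toy check: if only the mean stiffness K_0 is present (c_0jk = delta_jk,
# all other c_ijk = 0) the system decouples into P+1 copies of K_0.
N, P1 = 4, 3
rng = np.random.default_rng(2)
K0 = np.eye(N) + 0.1 * rng.standard_normal((N, N))
K = K0[None, :, :]                       # a single chaos term for D(x,z)
c = np.zeros((1, P1, P1)); c[0] = np.eye(P1)
A = assemble_global(K, c)
F = rng.standard_normal(N * P1)
U = np.linalg.solve(A, F)                # chaos coordinates U_0..U_P, stacked
```

With $N = 2294$ and $P+1 = 1891$, as in the dam problem below, this dense construction is plainly infeasible, which is exactly the point of the reduction.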
2.6 Application to the Embankment Dam Problem

The embankment dam problem of the benchmark study is treated using the newly developed Stochastic Model Reduction for Polynomial Chaos Representations method. The elastic and shear moduli of the material, in the present problem, are modeled as two stochastic processes that are explicit functions of the same process possessing a relatively low correlation length. The state of the system can thus be viewed as a function defined on a high-dimensional space associated with the fluctuations of the underlying process. In such a setting, the spectral stochastic finite element method (SSFEM) for the specified spatial discretization is computationally prohibitive. The approach adopted here enables the stochastic characterization of a fine mesh problem based on the high dimensional polynomial chaos solution of a coarse mesh analysis. The problem to be examined is that of an embankment dam with trapezoidal cross-section, made of earth or rock fill, subject to compressive deterministic loading conditions, as shown in Figure 2.2.

Figure 2.2: Schematic of the Embankment Dam

Due to the nature of the material, it is proposed to model the low strain elastic and shear moduli as non-homogeneous lognormal random fields $E(x,z)$ and $G(x,z)$, respectively,

$E(x,z) = m_E(z) + \sigma_E(z)\,\frac{e^{Y(x,z)} - m_Y}{\sigma_Y}$  (2.42)

$G(x,z) = m_G(z) + \sigma_G(z)\,\frac{e^{Y(x,z)} - m_Y}{\sigma_Y},$  (2.43)

where $m_E(z)$ and $m_G(z)$ are the means, $\sigma_E(z)$ and $\sigma_G(z)$ are the standard deviations, and $Y(x,z)$ is a homogeneous, zero mean, unit variance Gaussian random field with autocorrelation function $R_Y(\Delta x,\Delta z)$ given by eq. (2.44), with $m_Y = E[e^Y] = e^{0.5}$ and $\sigma_Y^2 = \mathrm{Var}[e^Y] = e^2 - e$ [ICO04],

$R_Y(\Delta x,\Delta z) = \exp\!\left(-\frac{|\Delta x|}{10} - \frac{|\Delta z|}{3}\right).$  (2.44)

Table 2 of [ICO04] presents the properties of the low strain elastic parameters.
The elastic moduli are represented as dependent stochastic processes possessing a relatively low correlation length, and consequently the problem can be viewed as a function defined on a very high dimensional space. The spatial domain is discretized into triangular, 3-node, plane strain finite elements. Figure 2.3 shows the proposed mesh of 2160 elements.

Figure 2.3: Proposed Mesh of 2160 elements

2.6.1 Implementation Details

First, the Karhunen-Loeve expansion is employed to help reduce the dimensionality of the problem. A Galerkin-type procedure is applied to transform the Fredholm equation to a generalized eigenvalue problem, which is solved for the eigenvalues $\lambda_i$ and eigenvectors $\phi_i$ of the covariance kernel. Figure 2.4 shows the computed eigenvalues of the given covariance kernel. The decay of the eigenvalues is rather slow because of the relatively low correlation length provided; this indicates the necessity of using a large number of terms in the KL expansion.

Figure 2.4: Eigenvalues $\lambda_n$ of the Exponential Covariance Kernel

Knowing the significance of each of the random dimensions in the problem, the total number of terms in the polynomial chaos representation of the response must be decided. For stability, the minimum order of the chaos expansion of the process $D(x,z)$ should be at least twice that of the response $u$ [MK04]. This way, the computational cost of the solution is greatly reduced without sacrificing the method's accuracy. The problem at hand is solved for various combinations of the number of terms in the KL expansion and increasing orders of the PC expansion to test the convergence of the response.
Due to computational difficulties in experimenting with high dimensional solutions, the problem was first solved, based on the proposed fine mesh, using a relatively low number of dimensions in the KL expansion, and the solution was investigated for second, third, and fourth order PC expansions. It turns out, as shown in Figure 2.5, that the significance of adding the third order terms is negligible. Figures 2.5(a) and (b) show the convergence of the nodal variances of the horizontal and vertical displacements for 2 terms in the KL expansion of the solution (M = 2) as the order of the PC expansion (p) is increased. Similar results were obtained for 3 and 4 terms in the KL expansion, and results obtained using the stochastic model reductions for 15 and 20 terms in the KL expansion confirmed that the second order is sufficient to portray the uncertainty in the response. Although the SSFEM converged for a second order polynomial chaos expansion, the results were still vastly remote from the solution obtained through a Monte Carlo simulation of the problem. This implies that more terms in the KL expansion are needed. As noted earlier, as the number of terms in the polynomial chaos expansion increases, the computational cost of SSFEM rises significantly. The size of the resulting algebraic system that is to be solved for the deterministic coefficients of the response is $N(P+1)\times N(P+1)$, where $N$ is the number of nodal degrees of freedom.

Figure 2.5: The significance of the third order chaos expansion on the response: nodal variances of the (a) horizontal and (b) vertical displacements

For the embankment dam problem, the proposed mesh has 2294 degrees of freedom.
In order to attain convergence, it was determined through computational experimentation using the stochastic model reduction method that about 60 terms in the KL expansion are needed. The corresponding total number of terms needed in the second order polynomial chaos expansion is 1891, which leaves us with a linear algebraic system in 4,337,954 unknowns. Solving such a system requires a supercomputer with very high memory capacity. To solve this problem, the stochastic model reduction method is used. A coarse mesh of 60 elements, shown in Figure 2.6, is used to carry out the full SSFEM analysis. The solution obtained from solving the problem on the coarse scale is expressed as

$U(\theta) = \sum_{i=0}^{P} U_i\psi_i(\theta)$  (2.45)

where $U$ is an $N\times 1$ vector containing the horizontal and vertical nodal displacements. Having the above approximation, the covariance kernel of the response is estimated as

$R_{UU} = \left\langle\left(\sum_{i=1}^{P} U_i\psi_i\right)\left(\sum_{j=1}^{P} U_j\psi_j\right)^{\!T}\right\rangle = \sum_{i=1}^{P} U_i U_i^T\langle\psi_i^2\rangle.$  (2.46)

Figure 2.6: Coarse Mesh of 60 elements

Now, the Hilbert-Karhunen-Loeve expansion can be obtained by solving equation (2.41) using a Galerkin projection scheme on the coarse mesh, which leads to a discrete generalized eigenvalue problem whose size is determined by the number of degrees of freedom of the coarse discretization. In discrete form, the eigenvalue problem becomes

$K_0^T R K_0 f = \nu K_0 f$  (2.47)

where $K_0$ is the stiffness matrix associated with the mean of the process $D(x,z)$, obtained from the SSFEM solution on the coarse mesh. It should be noted that $K_0$ is readily available from the current analysis on the coarse mesh. The matrix $R$ is the covariance of the solution, $\nu$ is a diagonal matrix whose elements are the eigenvalues of the generalized eigenvalue problem, and $f$ is a matrix containing the corresponding eigenvectors.
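A minimal numerical sketch of eqs. (2.46)-(2.47), assuming $K_0$ is symmetric positive definite so that the generalized eigenproblem can be reduced to a standard symmetric one through a Cholesky factor; all matrices below are toy placeholders, not the dam problem.

```python
import numpy as np

def reduced_basis(K0, U, psi_sq, energy=0.99):
    # Response covariance from the coarse-mesh chaos coefficients, eq. (2.46):
    # R = sum_{i>=1} U_i U_i^T <psi_i^2>   (column 0 of U holds the mean term)
    R = (U[:, 1:] * psi_sq[1:]) @ U[:, 1:].T
    # Generalized eigenproblem K0^T R K0 f = nu K0 f of eq. (2.47), reduced
    # with f = L^{-T} y where K0 = L L^T
    L = np.linalg.cholesky(K0)
    Linv = np.linalg.inv(L)
    nu, y = np.linalg.eigh(Linv @ (K0.T @ R @ K0) @ Linv.T)
    nu, y = nu[::-1], y[:, ::-1]          # sort descending
    f = Linv.T @ y                        # back-transformed eigenvectors
    mu = int(np.searchsorted(np.cumsum(nu) / nu.sum(), energy)) + 1
    return nu[:mu], f[:, :mu]

# Toy data: 10 "nodes", 6 chaos terms
rng = np.random.default_rng(3)
B = rng.standard_normal((10, 10))
K0 = B @ B.T + 10.0 * np.eye(10)          # symmetric positive definite
U = rng.standard_normal((10, 6))
psi_sq = np.array([1.0, 1.0, 1.0, 2.0, 2.0, 2.0])
nu, f = reduced_basis(K0, U, psi_sq)
```

Since the covariance $R$ has rank at most $P$, only a handful of eigenvalues are nonzero, which is the mechanism by which 1891 chaos terms collapse to 29 bases in the text.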
The Hilbert-Karhunen-Loeve representation is given by

$u(x,z;\theta) = \langle u(x,z)\rangle + \sum_i\sqrt{\nu_i}\,f_i(x,z)\eta_i(\theta).$  (2.48)

The above expansion can be truncated after the first $\mu$ largest generalized eigenvalues of (2.47) such that $\sum_{i=1}^{\mu}\nu_i/\sum_i\nu_i$ is sufficiently close to one. Now, the set of random variables $\{\eta_i\}_{i=1}^{\mu}$ is recast as a linear transformation of the set of the polynomial chaos basis $\{\psi_i\}_{i=1}^{P}$ [DGRH],

$\eta_i(\theta) = \sum_{j=1}^{P}\alpha_{ij}\psi_j(\theta) \quad \forall i$  (2.49)

where

$\alpha_{ij} = \frac{f_i^T K_0 U_j}{\sqrt{\nu_i}}, \quad \forall i,j.$  (2.50)

The fine scale problem involves the finite dimensional space of piecewise continuous polynomials corresponding to the fine spatial discretization shown in Figure 2.3 and the space of random variables spanned by the basis $\{\eta_i\}_{i=1}^{\mu}$ [DGRH]. The application of the SSFEM becomes

$\sum_{i=0}^{P} K_i\psi_i\sum_{j=0}^{\mu} U_j\eta_j - F = 0$  (2.51)

$\sum_{j=0}^{\mu} K_{jk}U_j = F_k, \quad k = 0,1,2,\dots,\mu$  (2.52)

where

$K_{jk} = \sum_{i=0}^{P} d_{ijk}K_i$  (2.53)

$F_k = \langle\eta_k F\rangle$  (2.54)

$d_{ijk} = \langle\psi_i\eta_j\eta_k\rangle$  (2.55)

$d_{ijk} = \left\langle\psi_i\left(\sum_{m=1}^{P}\alpha_{jm}\psi_m\right)\left(\sum_{n=1}^{P}\alpha_{kn}\psi_n\right)\right\rangle = \sum_{m=1}^{P}\sum_{n=1}^{P}\alpha_{jm}\alpha_{kn}\langle\psi_i\psi_m\psi_n\rangle = \sum_{m=1}^{P}\sum_{n=1}^{P}\alpha_{jm}\alpha_{kn}c_{imn} \quad \forall j,k,$  (2.56)

where the coefficients $c_{imn} := \langle\psi_i\psi_m\psi_n\rangle$ are already tabulated [GS03].
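Eqs. (2.50) and (2.56) amount to two small tensor contractions. The sketch below is illustrative, with a sanity check that $d_{ijk}$ collapses to $c_{ijk}$ when the reduced variables coincide with the original chaos basis (i.e. $\alpha$ is the identity).

```python
import numpy as np

def reduced_chaos_coeffs(f, K0, U, nu):
    # alpha_ij = f_i^T K0 U_j / sqrt(nu_i), eq. (2.50);
    # f: (N, mu) eigenvectors, U: (N, P) chaos coefficients, nu: (mu,)
    return (f.T @ K0 @ U) / np.sqrt(nu)[:, None]

def reduced_triple_products(alpha, c):
    # d_ijk = sum_{m,n} alpha_jm alpha_kn c_imn, eq. (2.56)
    return np.einsum('jm,kn,imn->ijk', alpha, alpha, c)

# Sanity check: alpha = identity (no reduction) must give d_ijk = c_ijk
P = 4
rng = np.random.default_rng(4)
c = rng.standard_normal((P, P, P))
d = reduced_triple_products(np.eye(P), c)
print(np.allclose(d, c))   # -> True
```

In practice $\mu \ll P$, so the contracted tensor $d_{ijk}$ is far smaller than $c_{imn}$, which is where the computational savings of the fine-scale system (2.52) come from.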
Figure 2.7: The response representation using a 15 dimensional second order expansion obtained via SSFEM and via SSFEM with Stochastic Model Reductions: nodal (a) mean horizontal, (b) mean vertical, (c) variance horizontal, and (d) variance vertical displacements

The stochastic model reduction is first verified on a reduced version of the problem. The second order solution in 15 stochastic dimensions is obtained using SSFEM alone and using the combination of SSFEM and stochastic model reductions. The results, presented in Figure 2.7, show excellent agreement. The problem is then solved using the stochastic model reduction for chaos expansions for a range of terms in the KL expansion, starting with 15 dimensions, for which comparison is made with SSFEM, and ending with 60 dimensions, which gives results close to those obtained from a Monte Carlo simulation. Using a sixty dimensional polynomial expanded up to second order, 1891 bases are required to capture the uncertainty of the response using SSFEM analysis on the coarse scale problem. Using the coarse mesh solution to obtain the covariance matrix of the response, the eigenvalue problem (2.47) is solved to obtain the number of significant bases for the fine mesh analysis. Figure 2.8 gives the decay of the eigenvalues of (2.47). Note that 29 new bases are sufficient to capture the uncertainty in the response for the fine scale problem. The response obtained using these 29 bases in an SSFEM analysis on the fine scale is presented in the analysis section.
Clearly, the computational costs associated with 29 bases in the random dimensions are dramatically less than those of the full order SSFEM with 1891 bases.

Figure 2.8: Eigenvalues of equation (2.47)

2.6.2 Analysis of Results

To assess the validity of the results obtained by the SSFEM and test its effectiveness and convergence, the same problem is also treated using a Monte Carlo simulation. The Monte Carlo simulation logic involves solving the whole system once for every realization in a statistical sample associated with the random system, and consequently synthesizing a corresponding sample of the random solution [Sch01b]. The elastic and shear moduli are simulated by sampling an $n$-dimensional Gaussian random vector and simulating the field $Y(x,z)$ based on the solution of the integral eigenvalue problem of the covariance kernel given by (2.44),

$Y(x,z;\theta) = Y_0(x,z) + \sum_{i=1}^{N}\sqrt{\lambda_i}\,\phi_i(x,z)\xi_i(\theta).$  (2.57)

For each simulation, the response is obtained by solving the deterministic finite element problem. After 150,000 simulations, the solution appears to have converged in both mean and variance. The results obtained from the 150,000 Monte Carlo simulations are plotted together with those from SSFEM with 60 random dimensions expanded up to second order in Figure 2.9. The responses obtained from the two methods agree closely, yet, using the same machine for the computations, the Monte Carlo simulations take about 24 hours while the SSFEM with stochastic model reductions takes less than 8 hours. The goal behind solving the problem is estimating the probability of failure of the embankment dam for different combinations of the cohesive strength $c$, the friction angle $\phi$, the soil mass density $\gamma$, and the top load $q$.
Failure is defined when the Mohr-Coulomb criterion is satisfied in at least one point in the analysis domain,

$\tau \geq c + \sigma_n\tan\phi$  (2.58)

where $\tau$ is the shear stress and $\sigma_n$ is the normal stress. For plane strain conditions, element stresses in the analysis plane are calculated as

$\sigma = DBU,$  (2.59)

Figure 2.9: The response representation obtained from 150,000 Monte Carlo simulations as compared to that resulting from a 60 dimensional second order chaos expansion: nodal (a) mean horizontal, (b) mean vertical, (c) variance horizontal, and (d) variance vertical displacements

where $\sigma$ is a 3-dimensional vector whose components are $\sigma_x$, $\sigma_y$, and $\tau_{xy}$, respectively. Representing these terms by their Polynomial Chaos decompositions yields

$\sum_{i=0}^{P_2}\sigma_i\eta_i = \sum_{i=0}^{P_2}\sum_{j=0}^{P_1} D_j B U_i\,\eta_i\psi_j.$  (2.60)

Accordingly, the deterministic shapes of the stress vector are calculated from those of the constitutive relations matrix and the response vector via the above relation by projecting on a higher order chaos. Consequently, the stress vector corresponding to each element in the mesh is represented by a polynomial function of sixty Gaussian random variables.
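The sampling of the chaos representation of the stresses can be sketched as follows. This is a simplified check that evaluates the maximum in-plane shear from $(\sigma_x, \sigma_y, \tau_{xy})$ against criterion (2.58); the array layout, function names, and parameters are illustrative assumptions, not the dissertation's implementation.

```python
import numpy as np

def failure_probability(stress, psi_eval, c, phi, M=60, n_samples=100_000, seed=5):
    # Monte Carlo estimate of P_f from the chaos representation of the
    # element stresses. `stress` is (n_elem, 3, P+1), holding the chaos
    # coordinates of (sigma_x, sigma_y, tau_xy); `psi_eval` maps a Gaussian
    # sample xi to the (P+1,) vector of chaos basis values.
    rng = np.random.default_rng(seed)
    failures = 0
    for _ in range(n_samples):
        xi = rng.standard_normal(M)
        s = stress @ psi_eval(xi)                  # (n_elem, 3) realization
        sx, sy, txy = s[:, 0], s[:, 1], s[:, 2]
        # maximum in-plane shear and the accompanying normal stress; failure
        # when the Mohr-Coulomb criterion (2.58) holds in any element
        tau = np.sqrt(((sx - sy) / 2.0) ** 2 + txy ** 2)
        sig_n = (sx + sy) / 2.0
        if np.any(tau >= c + sig_n * np.tan(phi)):
            failures += 1
    return failures / n_samples
```

Each sample costs only a basis evaluation and a few array operations, which is why, as the text notes, sampling the chaos representation is negligible compared to rerunning the finite element model.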
This vector is simulated by sampling the $\xi$'s, and the Mohr-Coulomb failure criterion is checked for each element. Clearly, the computational cost of sampling the basis is negligible compared with that of solving the problem via a Monte Carlo simulation. Failure is declared when the Mohr-Coulomb criterion is satisfied in at least one element of the domain. 100,000 samples of the stress vector are simulated, and the resulting failure probabilities are presented in table 2.1.

Table 2.1: Probability of failure for different load and material strength combinations

                                Case 1        Case 2        Case 3
  Cohesive strength $c$         125 kPa       225 kPa       150 kPa
  Friction angle $\phi$         30 deg        22 deg        40 deg
  Mass density $\gamma$         1800 kg/m3    1800 kg/m3    1800 kg/m3
  Top load $q$                  30 kPa        30 kPa        30 kPa
  $P_f$ (Monte Carlo)           1.70E-03      2.20E-05      3.40E-04
  $P_f$ ($M=60$, $P=2$)         2.0E-03       2.0E-05       6.5E-04

Chapter 3

Sequential Data Assimilation for Stochastic Models

3.1 Overview of Data Assimilation

Data assimilation is a novel and versatile methodology for estimating unknown state variables and parameters. It relies on a set of observational data and on the underlying dynamical principles governing the system under observation. General schemes for data assimilation relate to either estimation theory or control theory, although some approaches, such as direct minimization, stochastic, and hybrid methods, can be used in both frameworks. Most of these schemes are based on least-squares criteria, which have enjoyed great success. The optimality of least squares may degenerate, however, for very noisy and sparse data or in the presence of multi-modal probability density functions. In such situations melding criteria such as the maximum likelihood or minimax criterion, or variations thereof, may be more appropriate [RL00].

3.1.1 Control Theory

Control theory, or variational assimilation, approaches such as the generalized inverse and adjoint methods perform a global time-space adjustment of the model solution to all observations and thus solve a smoothing problem.
The aim of these methods is to minimize a cost function measuring the time-space misfit between the simulated data and the observations, subject to the constraints of the model equations and their parameters. The dynamical model can be treated either as a strong constraint, i.e. one that is to be fulfilled exactly, or as a weak one.

In the case where the dynamical system $F(\theta) = 0$, describing the temporal evolution of the model state variable $\theta$, is a strong constraint, the variational approach is known as the adjoint method. Within the assimilation period, the initial and boundary conditions and the parameter values control the evolution of the model variables. Variables describing these conditions are commonly denoted as control variables, $u$. The approach consists of minimizing a cost function $J$ resulting from the summation of two components. The first weights the uncertainties in the initial conditions, boundary conditions, and parameters with their respective a priori error covariances. The other is the sum over time of all data-model misfits at observation locations, weighted by measurement error covariances [RL00]. This optimization procedure is called the adjoint method because the adjoint model equations offer a sophisticated yet relatively cheap way to calculate the gradient of the cost function. This is done by introducing the Lagrange function

$$L = J + \int_D \int_T \lambda(x,t)\, F(\theta,x,t)\, dt\, dx, \qquad (3.1)$$

where $D$ and $T$ are the spatial and temporal domains, respectively. Partial differentiation with respect to the Lagrange multipliers $\lambda$, also denoted the adjoint variables, returns the model equations,

$$\frac{\partial L}{\partial \lambda} = F(\theta) = 0, \qquad (3.2)$$

while differentiation with respect to $\theta$ yields the adjoint model equations, which describe the temporal evolution of the Lagrange multipliers,

$$\frac{\partial L}{\partial \theta} = \mathrm{Adj}(\lambda) + \frac{\partial J}{\partial \theta} = 0. \qquad (3.3)$$

The third condition entails differentiating with respect to the control variables $u$,

$$\frac{\partial L}{\partial u} = 0. \qquad (3.4)$$

The latter ensures that the optimal choice of the control variables $u$ has been made. Under the above conditions the gradient $\nabla_u J$ is easily calculated as

$$\nabla_u J = \nabla_u L = \frac{dL}{du} \qquad (3.5)$$
$$= \frac{\partial L}{\partial u} + \frac{\partial L}{\partial \lambda}\frac{\partial \lambda}{\partial u} + \frac{\partial L}{\partial \theta}\frac{\partial \theta}{\partial u}. \qquad (3.6)$$

The second and third terms in the above equation vanish since $\partial L/\partial\theta = 0$ and $\partial L/\partial\lambda = 0$, and the calculation of the gradient $\nabla_u J$ thus simplifies to

$$\nabla_u J = \frac{\partial J}{\partial u} + \int_D \int_T \lambda(x,t)\, \frac{\partial F(\theta,x,t)}{\partial u}\, dt\, dx. \qquad (3.7)$$

The generalized inverse method is used when the inverse problem is expanded to fit the constraints of both the data and the dynamics weakly. With this approach the estimate is still defined in a least-squares sense; the cost function to be minimized is the same as that in the adjoint method, but a third term, consisting of the dynamical model uncertainties weighted by a priori model error covariances, is now added. The dynamical model uncertainties thus couple the state evolution with the adjoint evolution, and this coupling renders the iterative solution of the backward and forward equations difficult [RL00].

3.1.2 Direct Minimization Methods

These methods aim at minimizing cost functions similar to those defined for the generalized inverse problem, but without utilizing the Euler-Lagrange equations. Iterative schemes are generally used, and at each iteration a line-search minimization is performed along a descent direction of the cost function. The method of steepest descent is the simplest of these approaches, but it is also the slowest, since it converges only linearly. The conjugate-gradient method, on the other hand, chooses each new search direction to be conjugate, with respect to the local Hessian matrix, to the previous ones; it has a good convergence rate and low storage requirements. Other descent methods are the Newton and quasi-Newton methods. The drawback common to all these approaches is that they are very sensitive to the choice of initial conditions.
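As a minimal illustration of such a local descent iteration, the following sketch runs steepest descent with a backtracking (Armijo-type) line search on a toy quadratic cost; the cost function and all constants are hypothetical, standing in for the data-model misfit functionals discussed above:

```python
import numpy as np

# Toy quadratic cost J(u) = 0.5 u'Au - b'u with minimizer A^{-1} b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])

def J(u):
    return 0.5 * u @ A @ u - b @ u

def gradJ(u):
    return A @ u - b

u = np.array([5.0, -5.0])                  # initial guess
for _ in range(200):
    g = gradJ(u)
    step = 1.0
    # Backtracking line search: shrink the step until the cost decreases enough.
    while J(u - step * g) > J(u) - 1e-4 * step * (g @ g):
        step *= 0.5
    u = u - step * g

print(u)   # approaches the minimizer A^{-1} b = (0.2, 0.4)
```

On a convex cost like this one the iteration converges from any starting point; the sensitivity to the initial guess noted in the text appears only once the cost has multiple local minima.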
This is because they are basically local minimization schemes. When the cost function becomes sufficiently non-linear, non-local methods become more attractive. Among these, the methods of simulated annealing and genetic algorithms are worth noting.

In the simulated annealing method, each point $s$ of the search space is viewed as a state of some physical system, and the function to be minimized is interpreted as the internal energy of the system in that state. The goal is therefore to bring the system from an arbitrary initial state to a state of minimum possible energy. The advantages are the method's origin in theoretical physics, which leads to convergence criteria, and its relative independence of the specifics of the cost function and initial guess, which allows non-local searches. The main disadvantages are the large computational requirements and, in practice, uncertain convergence to the global minimum [KGV83].

Genetic algorithms are based upon searches generated in analogy to the genetic evolution of natural organisms. At each iteration of the search, the genetic scheme keeps a population of approximate solutions. The population is evolved by manipulations of past populations that mimic genetic transformations, such that the likelihood of producing better data-fitted generations increases for new populations. Genetic algorithms allow non-local minimum searches, but convergence to the global minimum is not assured, owing to their limited theoretical base [Sch01a].

3.1.3 Estimation Theory

In estimation theory, statistical approaches are used to estimate the state of a dynamical system by combining all available knowledge pertaining to the system, including the measurements and the modeling theories. Of significant importance in the estimation process are the a priori hypotheses and the melding criterion, since they determine the influence of the dynamics and the data on the state estimate.
One of the most widely used tools in estimation theory is the Kalman Filter, which gives a sequential, unbiased, minimum-error-variance estimate based upon a linear combination of all past measurements and dynamics. Many extensions of the Kalman Filter have been developed over time to tackle the different challenges associated with the problem of sequential data assimilation. The Kalman Filter and its various extensions are the focus of this chapter. In what follows, a review of the most commonly used Kalman filtering techniques is presented, leading to the description of the proposed data assimilation methodology, which combines Kalman filtering techniques with the Polynomial Chaos machinery.

3.2 The Kalman Filter

The Kalman Filter (KF), also known as the Kalman-Bucy Filter, was developed in 1960 by R. E. Kalman [Kal60]. It is an optimal sequential data assimilation method for linear dynamics and measurement processes with Gaussian error statistics. It provides a linear, unbiased, minimum-variance algorithm to optimally estimate the state of the system from noisy measurements. With recent technological advancements, the KF has become useful for very complex real-time applications such as video and laser tracking systems, satellite navigation, radar, ballistic missile trajectory estimation, and fire control.

This section is devoted to developing the Kalman filtering "prediction-correction" algorithm based on the optimality criterion of least-squares unbiased estimation. Consider a linear system with the following state-space description,

$$x_{k+1} = A_k x_k + \Gamma_k \xi_k, \qquad (3.8)$$
$$z_k = H_k x_k + \eta_k, \qquad (3.9)$$

where $A_k$, $\Gamma_k$, and $H_k$ are known constant matrices, and $\{\xi_k\}$ and $\{\eta_k\}$ are respectively system and observation noise sequences with known statistical information.
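As a concrete illustration of this state-space model and of the prediction-correction recursion developed over the rest of this section, the following sketch filters a scalar instance of (3.8)-(3.9). All numerical values are hypothetical, and the gain used is the optimal one derived below in (3.26):

```python
import numpy as np

rng = np.random.default_rng(3)

# Scalar instance of (3.8)-(3.9): x_{k+1} = a x_k + xi_k, z_k = x_k + eta_k.
a, Q, R = 0.95, 0.1, 0.5
n_steps = 400

# Simulate the truth and the noisy measurements.
x_true = np.zeros(n_steps)
z = np.zeros(n_steps)
for k in range(1, n_steps):
    x_true[k] = a * x_true[k - 1] + rng.normal(0.0, np.sqrt(Q))
    z[k] = x_true[k] + rng.normal(0.0, np.sqrt(R))

# Kalman filter prediction-correction loop (H = 1, gain as in Eq. 3.26).
x_hat, P = 0.0, 1.0
err_filt, err_meas = [], []
for k in range(1, n_steps):
    x_hat, P = a * x_hat, a * P * a + Q            # forecast step
    K = P / (P + R)                                 # Kalman gain
    x_hat = x_hat + K * (z[k] - x_hat)              # correction step (3.14)
    P = (1.0 - K) * P
    err_filt.append((x_hat - x_true[k]) ** 2)
    err_meas.append((z[k] - x_true[k]) ** 2)

# The filtered estimate should beat the raw measurements in mean-square error.
print(np.mean(err_filt), np.mean(err_meas))
```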
For the KF, $\{\xi_k\}$ and $\{\eta_k\}$ are assumed to be sequences of zero-mean Gaussian white noise such that $\mathrm{Var}(\xi_k) = Q_k$ and $\mathrm{Var}(\eta_k) = R_k$ are positive definite matrices and $E(\xi_k \eta_l^T) = 0$ for all $k$ and $l$. The initial state $x_0$ is also assumed to be independent of $\xi_k$ and $\eta_k$ for all $k$.

Let $\hat{x}_{k/k-1}$ be the a priori state estimate at step $k$ given knowledge of the process prior to step $k$, and $\hat{x}_{k/k}$ the a posteriori state estimate at step $k$ given the measurement $z_k$. The a priori and a posteriori estimation errors are then defined as

$$e_{k/k-1} = x_k - \hat{x}_{k/k-1}, \qquad (3.10)$$
$$e_{k/k} = x_k - \hat{x}_{k/k}, \qquad (3.11)$$

and their respective error covariances as

$$P_{k/k-1} = E[e_{k/k-1}\, e_{k/k-1}^T], \qquad (3.12)$$
$$P_{k/k} = E[e_{k/k}\, e_{k/k}^T]. \qquad (3.13)$$

The a posteriori estimate $\hat{x}_{k/k}$ is obtained as a combination of the a priori estimate $\hat{x}_{k/k-1}$, the measurement $z_k$, and the measurement prediction $H_k \hat{x}_{k/k-1}$,

$$\hat{x}_{k/k} = \hat{x}_{k/k-1} + K_G (z_k - H_k \hat{x}_{k/k-1}). \qquad (3.14)$$

The difference $(z_k - H_k \hat{x}_{k/k-1})$ is known as the innovation sequence, or residual. The matrix $K_G$ is chosen as the gain, or blending factor, that minimizes the a posteriori error covariance (3.13). The following section presents the details of deriving this matrix.
The goal is to find the specific $K_G$ that minimizes the individual terms along the major diagonal of $P_k$.

3.2.1 Derivation of the Kalman Gain

The error covariance matrix associated with the a posteriori estimate is defined as

$$P_k = E[e_{k/k}\, e_{k/k}^T] = E[(x_k - \hat{x}_{k/k})(x_k - \hat{x}_{k/k})^T]. \qquad (3.15)$$

Substituting equation 3.9 into 3.14 and the result into 3.15, the expression for the a posteriori covariance matrix becomes

$$P_k = E\big[[(x_k - \hat{x}_{k/k-1}) - K_G(H_k x_k + \eta_k - H_k \hat{x}_{k/k-1})][(x_k - \hat{x}_{k/k-1}) - K_G(H_k x_k + \eta_k - H_k \hat{x}_{k/k-1})]^T\big]. \qquad (3.16)$$

Evaluating the expectations and noting that $(x_k - \hat{x}_{k/k-1})$ is the a priori estimation error gives

$$P_k = (I - K_G H_k)\, P_{k/k-1}\, (I - K_G H_k)^T + K_G R_k K_G^T. \qquad (3.17)$$

The latter is a general expression, valid for any $K_G$, optimal or otherwise. This optimization can be done in several ways; the derivation most commonly followed in the literature is the completing-the-square approach [Bro83]. The subscripts are dropped in the remainder of this derivation to avoid unnecessary clutter in the expressions, and the a priori estimate is denoted by a $-$ superscript. Expanding and regrouping the terms in $P_k$ gives

$$P = P^- - (KHP^- + P^- H^T K^T) + K(HP^- H^T + R)K^T, \qquad (3.18)$$

where the second term is linear in $K$ and the third is a quadratic function of $K$. The matrix $(HP^- H^T + R)$ is symmetric positive definite, and can thus be written in the factored form $SS^T$, i.e.

$$SS^T = HP^- H^T + R. \qquad (3.19)$$

The expression for $P$ may now be rewritten as

$$P = P^- - (KHP^- + P^- H^T K^T) + KSS^T K^T. \qquad (3.20)$$

In order to complete the square, $P$ is expressed in the form

$$P = P^- + (KS - A)(KS - A)^T - AA^T, \qquad (3.21)$$

where $A$ is independent of $K$. If 3.21 is expanded and compared term by term with 3.20, the following equality must hold:

$$KSA^T + AS^T K^T = KHP^- + P^- H^T K^T. \qquad (3.22)$$

It is easily seen that 3.22 is satisfied only if $A$ is chosen as

$$A = P^- H^T (S^T)^{-1}. \qquad (3.23)$$

Only the second term in 3.21 involves $K$; it is the product of a matrix with its transpose, which ensures that all its terms along the major diagonal are nonnegative. Since the aim is to minimize the diagonal terms of $P$, the best choice of $K$ is the one that makes this middle term zero. Hence $K$ is chosen so that

$$KS = A. \qquad (3.24)$$

The Kalman gain matrix is thus given by

$$K = AS^{-1} = P^- H^T (S^T)^{-1} S^{-1} = P^- H^T (SS^T)^{-1}. \qquad (3.25)$$
Since $SS^T$ is the factored form of $(HP^- H^T + R)$, the final expression of the optimum $K$ is

$$K = P^- H^T (HP^- H^T + R)^{-1}. \qquad (3.26)$$

Combining all the results above, we arrive at the Kalman filter algorithm [CC91]:

$$P_{0,0} = \mathrm{Var}(x_0), \qquad \hat{x}_{0/0} = E(x_0), \qquad (3.27)$$

and for $k = 1, 2, \ldots$,

$$P_{k,k-1} = A_{k-1} P_{k-1,k-1} A_{k-1}^T + \Gamma_{k-1} Q_{k-1} \Gamma_{k-1}^T,$$
$$K_G = P_{k,k-1} H_k^T (H_k P_{k,k-1} H_k^T + R_k)^{-1},$$
$$P_{k,k} = (I - K_G H_k)\, P_{k,k-1},$$
$$\hat{x}_{k/k-1} = A_{k-1}\, \hat{x}_{k-1/k-1},$$
$$\hat{x}_{k/k} = \hat{x}_{k/k-1} + K_G (z_k - H_k \hat{x}_{k/k-1}).$$

Figure 3.1: The Kalman Filter loop.

3.2.2 Relationship to Recursive Bayesian Estimation

The true state of the dynamical system is considered an unknown Markov process, and the observational data are the outputs of a hidden Markov model. The Markov assumption implies that the true state is conditionally independent of all earlier states given the immediately previous one,

$$p(x_k / x_0, x_1, \ldots, x_{k-1}) = p(x_k / x_{k-1}). \qquad (3.28)$$

Furthermore, the measurement at the $k$th time step depends only on the current state and is conditionally independent of all other states given the current one,

$$p(z_k / x_0, x_1, \ldots, x_k) = p(z_k / x_k). \qquad (3.29)$$

Under these assumptions the probability density function of all the states of the hidden Markov model can be expressed as

$$p(x_0, \ldots, x_k, z_1, \ldots, z_k) = p(x_0)\, \prod_{i=1}^{k} p(z_i / x_i)\, p(x_i / x_{i-1}). \qquad (3.30)$$

The Kalman Filter is used to estimate the state $x$; the associated probability density is that of the current state conditioned on all the measurements available at the current time,

$$p(x_k / z_0, z_1, \ldots, z_{k-1}) = \int p(x_k / x_{k-1})\, p(x_{k-1} / z_0, z_1, \ldots, z_{k-1})\, dx_{k-1}. \qquad (3.31)$$

After the $k$th measurement we have

$$p(x_k / z_0, z_1, \ldots, z_k) = \frac{p(z_k / x_k)\, p(x_k / z_0, \ldots, z_{k-1})}{p(z_k / z_0, \ldots, z_{k-1})}. \qquad (3.32)$$

The denominator is a normalizing term,

$$p(z_k / z_0, z_1, \ldots, z_{k-1}) = \int p(z_k / x_k)\, p(x_k / z_0, z_1, \ldots, z_{k-1})\, dx_k, \qquad (3.33)$$

and the remaining probability distributions are

$$p(x_k / x_{k-1}) = N(A x_{k-1}, Q_k), \qquad (3.34)$$
$$p(z_k / x_k) = N(H x_k, R_k), \qquad (3.35)$$
$$p(x_{k-1} / z_0, \ldots, z_{k-1}) = N(\hat{x}_{k-1}, P_{k-1}), \qquad (3.36)$$

where $N(m, \sigma)$ denotes the normal distribution with mean $m$ and covariance $\sigma$.

3.3 The Extended Kalman Filter

When the model dynamics are nonlinear, a linearization technique is implemented in deriving the filter equations. A real-time linear Taylor approximation approach is considered, through which the nonlinear functions are recursively linearized around the most recent estimate. The resulting filter is known as the Extended Kalman Filter (EKF). Note that model nonlinearity also arises when solving parameter estimation problems, even when the underlying dynamics are those of a simple linear model. A nonlinear model therefore arises either when there exists a nonlinear dynamical model of the form

$$x_{k+1} = f_k(x_k) + B_k(x_k)\, \xi_k, \qquad (3.37)$$
$$z_k = g_k(x_k) + \eta_k, \qquad (3.38)$$

or when a system (even a linear one) is augmented with an equation describing the dynamics of the unknown parameters. The EKF yields statistically acceptable results for most nonlinear dynamical systems, but there is a considerable chance that EKF updates are poorer than the nominal ones, especially when the initial uncertainty and measurement errors are large [Bro83]. After linearizing the nonlinear functions, the system is treated in the same way as in the KF. The EKF algorithm is thus [CC91]:

$$P_{0,0} = \mathrm{Var}(x_0), \qquad \hat{x}_0 = E(x_0), \qquad (3.39)$$

and for $k = 1, 2, \ldots$,

$$P_{k,k-1} = \frac{\partial f_{k-1}}{\partial x_{k-1}}(\hat{x}_{k-1})\; P_{k-1,k-1}\; \left[\frac{\partial f_{k-1}}{\partial x_{k-1}}(\hat{x}_{k-1})\right]^T + B_{k-1}(\hat{x}_{k-1})\, Q_{k-1}\, B_{k-1}^T(\hat{x}_{k-1}),$$
$$\hat{x}_{k/k-1} = f_{k-1}(\hat{x}_{k-1}),$$
$$K_G = P_{k,k-1} \left[\frac{\partial g_k}{\partial x_k}(\hat{x}_{k/k-1})\right]^T \left[\frac{\partial g_k}{\partial x_k}(\hat{x}_{k/k-1})\; P_{k,k-1}\; \left(\frac{\partial g_k}{\partial x_k}(\hat{x}_{k/k-1})\right)^T + R_k\right]^{-1},$$
$$P_{k,k} = \left[I - K_G\, \frac{\partial g_k}{\partial x_k}(\hat{x}_{k/k-1})\right] P_{k,k-1},$$
$$\hat{x}_{k/k} = \hat{x}_{k/k-1} + K_G\left[z_k - g_k(\hat{x}_{k/k-1})\right].$$

3.4 The Unscented Kalman Filter

The Unscented Kalman Filter (UKF) [JU97] is built on the idea that a set of discretely sampled points can be used to characterize the mean and covariance of the unknown model state. It uses a deterministic sampling technique known as the unscented transform to pick a minimal set of sample points, called sigma points, around the mean. These sigma points are then propagated through the nonlinear functions, and the covariance of the estimate is then recovered. Using the unscented transformation, an $n$-dimensional random variable $x$ with mean $\bar{x}$ and covariance $P_{xx}$ is approximated by $2n+1$ weighted points given by

$$\mathcal{X}_0 = \bar{x}, \qquad W_0 = \kappa/(n+\kappa),$$
$$\mathcal{X}_i = \bar{x} + \left(\sqrt{(n+\kappa) P_{xx}}\right)_i, \qquad W_i = 1/[2(n+\kappa)],$$
$$\mathcal{X}_{i+n} = \bar{x} - \left(\sqrt{(n+\kappa) P_{xx}}\right)_i, \qquad W_{i+n} = 1/[2(n+\kappa)], \qquad (3.40)$$

where $\kappa \in \mathbb{R}$, $\left(\sqrt{(n+\kappa) P_{xx}}\right)_i$ is the $i$th row or column of the matrix square root of $(n+\kappa) P_{xx}$, and $W_i$ is the weight associated with the $i$th point. The UKF can thus be summarized in the following algorithm [JU97]:

1. Create the set of sigma points by applying equations 3.40 to the system's initial conditions.

2. Each point in the set is propagated through the process model,
$$\mathcal{X}_i(k+1/k) = f[\mathcal{X}_i(k/k)]. \qquad (3.41)$$

3. The predicted mean is computed as
$$\hat{x}(k+1/k) = \sum_{i=0}^{2n} W_i\, \mathcal{X}_i(k+1/k). \qquad (3.42)$$

4. The corresponding covariance is computed as
$$P(k+1/k) = \sum_{i=0}^{2n} W_i\, [\mathcal{X}_i(k+1/k) - \hat{x}(k+1/k)][\mathcal{X}_i(k+1/k) - \hat{x}(k+1/k)]^T. \qquad (3.43)$$

5. Each of the prediction points is instantiated through the observation model,
$$\mathcal{Z}_i(k+1/k) = h[\mathcal{X}_i(k+1/k), k]. \qquad (3.44)$$

6. The predicted observation is calculated by
$$\hat{z}(k+1/k) = \sum_{i=0}^{2n} W_i\, \mathcal{Z}_i(k+1/k). \qquad (3.45)$$

7.
Given that the observation noise is additive and independent, the innovation covariance is
$$P_{zz}(k+1/k) = R(k+1) + \sum_{i=0}^{2n} W_i\, [\mathcal{Z}_i(k+1/k) - \hat{z}(k+1/k)][\mathcal{Z}_i(k+1/k) - \hat{z}(k+1/k)]^T. \qquad (3.46)$$

8. The cross-correlation matrix is determined by
$$P_{xz}(k+1/k) = \sum_{i=0}^{2n} W_i\, [\mathcal{X}_i(k+1/k) - \hat{x}(k+1/k)][\mathcal{Z}_i(k+1/k) - \hat{z}(k+1/k)]^T. \qquad (3.47)$$

3.5 The Ensemble Kalman Filter

The Ensemble Kalman Filter (EnKF), proposed by Evensen [Eve94] and clarified by Burgers et al. [BLE98], aims at resolving some of the drawbacks of the EKF. The EnKF forecasts the error statistics using Monte Carlo methods, which turns out to be a better alternative to solving the traditional, computationally expensive approximate error covariance equation used in the EKF. It was designed to resolve two major problems related to the use of the EKF with nonlinear dynamics in large state spaces: first, the EKF adopts an approximate closure scheme; second, the EKF entails huge computational requirements associated with the storage and forward integration of the error covariance matrix [Eve03]. Since its development, the EnKF has been used extensively in various research areas such as ocean engineering, weather forecasting, petroleum engineering, hydrology, and system identification.

3.5.1 Practical Implementation of the EnKF

As mentioned earlier, the EnKF propagates an ensemble of state vectors forward in time. The initial ensemble is chosen so that it properly represents the error statistics of the initial guess of the model state. It is therefore usually created by adding perturbations to a best-guess estimate, and the ensemble is then integrated over a time interval covering a few characteristic time scales of the dynamical system. Let $\mathsf{A}$ be the matrix holding the ensemble members $x_i \in \mathbb{R}^n$,

$$\mathsf{A} = (x_1, x_2, \ldots, x_N) \in \mathbb{R}^{n \times N}, \qquad (3.48)$$

where $N$ is the number of ensemble members and $n$ is the size of the model state vector.
The ensemble mean is then given by

$$\bar{\mathsf{A}} = \mathsf{A}\, \mathbf{1}_N, \qquad (3.49)$$

where $\mathbf{1}_N \in \mathbb{R}^{N \times N}$ is the matrix with all its elements equal to $1/N$. The ensemble perturbation matrix is defined as

$$\mathsf{A}' = \mathsf{A} - \bar{\mathsf{A}} = \mathsf{A}(I - \mathbf{1}_N), \qquad (3.50)$$

and the ensemble covariance matrix $P_e \in \mathbb{R}^{n \times n}$ as

$$P_e = \frac{\mathsf{A}' (\mathsf{A}')^T}{N-1}. \qquad (3.51)$$

A new ensemble of observations is generated at each correction step by adding perturbations drawn from a distribution with zero mean and covariance equal to the measurement error covariance matrix [BLE98]. Let $d \in \mathbb{R}^m$, where $m$ is the number of measurements, be the vector of measurements. The ensemble of observations,

$$\mathsf{D} = (d_1, d_2, \ldots, d_N) \in \mathbb{R}^{m \times N}, \qquad (3.52)$$

is obtained by perturbing the measurement vector $d$ as follows,

$$d_j = d + \epsilon_j, \qquad j = 1, \ldots, N. \qquad (3.53)$$

The corresponding measurement error covariance matrix is given by

$$R_e = \frac{\Upsilon \Upsilon^T}{N-1}, \qquad (3.54)$$

where $\Upsilon$ is the ensemble of perturbations,

$$\Upsilon = (\epsilon_1, \epsilon_2, \ldots, \epsilon_N) \in \mathbb{R}^{m \times N}. \qquad (3.55)$$

The analysis, or updating, step can be performed either on each of the model state ensemble members,

$$x_j^{k/k} = x_j^{k/k-1} + G_{ke}\big(d_j - H x_j^{k/k-1}\big), \qquad (3.56)$$

or on the ensemble as a whole,

$$\mathsf{A}^{k/k} = \mathsf{A}^{k/k-1} + G_{ke}\big(\mathsf{D} - H \mathsf{A}^{k/k-1}\big), \qquad (3.57)$$

where $G_{ke}$ is the KF gain matrix,

$$G_{ke} = P_e H^T (H P_e H^T + R_e)^{-1}. \qquad (3.58)$$

3.6 The Particle Filter

Particle filters, also known as Sequential Monte Carlo (SMC) methods, are a set of flexible simulation-based methods for sampling from a sequence of probability distributions, each distribution being known only up to a normalizing constant. These methods are similar to importance sampling methods, and they often serve as replacements for the Ensemble Kalman Filter and the Unscented Kalman Filter, with the advantage that they approach the Bayesian optimal estimate if a sufficient number of samples is used. These filters are a sequential analogue of the Markov Chain Monte Carlo (MCMC) methods.
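Stepping back to the EnKF just described, a single analysis step, equations (3.48)-(3.58), can be sketched as follows. The two-state system, the biased first guess, the ensemble size, and all numerical values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)

n, m, N = 2, 1, 500                         # states, measurements, ensemble size
H = np.array([[1.0, 0.0]])                  # observe the first state only

# Forecast ensemble: a biased first guess plus correlated Gaussian errors.
x_truth = np.array([1.0, 2.0])
L = np.array([[1.0, 0.0], [0.8, 0.6]])      # square root of the forecast covariance
A = (x_truth + 1.0)[:, None] + L @ rng.standard_normal((n, N))

# Ensemble mean, perturbations, and covariance, Eqs. (3.49)-(3.51).
Abar = A.mean(axis=1, keepdims=True)
Ap = A - Abar
Pe = Ap @ Ap.T / (N - 1)

# Perturbed observations and their covariance, Eqs. (3.52)-(3.55).
R_true = 0.01
d = H @ x_truth + rng.normal(0.0, np.sqrt(R_true), size=m)
eps = rng.normal(0.0, np.sqrt(R_true), size=(m, N))
D = d[:, None] + eps
Re = eps @ eps.T / (N - 1)

# Gain and analysis update, Eqs. (3.57)-(3.58).
G = Pe @ H.T @ np.linalg.inv(H @ Pe @ H.T + Re)
Aa = A + G @ (D - H @ A)

err_f = np.linalg.norm(A.mean(axis=1) - x_truth)    # forecast mean error
err_a = np.linalg.norm(Aa.mean(axis=1) - x_truth)   # analysis mean error
print(err_f, err_a)
```

Note how the unobserved second state is corrected through the sample cross-covariance in `Pe`, which is the mechanism the EnKF relies on in large state spaces.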
While MCMC methods model the full posterior distribution $p(x_0, x_1, \ldots, x_k / y_0, y_1, \ldots, y_k)$ of a hidden state or parameter $x$ given a set of observations $y_0, y_1, \ldots, y_k$, the particle filter aims at estimating $x_k$ from the a posteriori distribution $p(x_k / y_0, y_1, \ldots, y_k)$ [DdFG01]. One of the most commonly used particle filtering algorithms is the Sampling Importance Resampling (SIR) algorithm, a sequential version of the importance sampling method that approximates a distribution using a weighted set of particles. Particle filters are beyond the scope of this study; this brief overview is included for completeness.

3.7 Coupling of Polynomial Chaos with the EnKF

The filter used in this study allows the propagation of a stochastic representation of the unknown variables using Polynomial Chaos, which overcomes some of the drawbacks of the EnKF. With the proposed method, the probability density function of the model state or parameters can be obtained at any instant in time simply by simulating the Polynomial Chaos basis. Furthermore, this method allows the representation of non-Gaussian measurement and parameter uncertainties in a simpler, less taxing way, without the need to manage a large ensemble.

3.7.1 Representation of Error Statistics

The Kalman Filter defines the error covariance matrices of the forecast and analyzed estimates in terms of the true state as

$$P^f = \big\langle (x^f - x^t)(x^f - x^t)^T \big\rangle, \qquad (3.59)$$
$$P^a = \big\langle (x^a - x^t)(x^a - x^t)^T \big\rangle, \qquad (3.60)$$

where $\langle \cdot \rangle$ denotes mathematical expectation, $x$ is the model state vector at a particular time, and the superscripts $f$, $a$, and $t$ denote the forecast, analysis, and true state, respectively. In the Polynomial Chaos based Kalman Filter, however, the true state is not known, and the error covariance matrices are therefore defined using the Polynomial Chaos representations of the model state.
In the PCKF, the model state is given by

$$x = \sum_{i=0}^{P} x_i\, \psi_i(\xi), \qquad (3.61)$$

where $P+1$ is the number of terms in the Polynomial Chaos expansion of the state vector and $\{\psi_i\}$ is the set of Hermite polynomials. Consequently, the covariance matrices are defined around the mean, the zeroth-order term of the stochastic representation,

$$P^f \approx \left\langle \Big(\sum_{i=0}^{P} x_i^f \psi_i - x_0^f\Big)\Big(\sum_{i=0}^{P} x_i^f \psi_i - x_0^f\Big)^T \right\rangle
\approx \left\langle \Big(\sum_{i=1}^{P} x_i^f \psi_i\Big)\Big(\sum_{i=1}^{P} x_i^f \psi_i\Big)^T \right\rangle
\approx \sum_{i=1}^{P} x_i^f (x_i^f)^T\, \langle \psi_i^2 \rangle, \qquad (3.62)$$

and similarly,

$$P^a \approx \sum_{i=1}^{P} x_i^a (x_i^a)^T\, \langle \psi_i^2 \rangle. \qquad (3.63)$$

The Polynomial Chaos representation carries all the information available through the complete probability density function, and therefore allows the propagation of all the statistical moments of the unknown parameters and variables. The observations are also treated as random variables, represented via a Polynomial Chaos expansion with mean equal to the first-guess observations. Since the model and measurement errors are assumed independent, the latter is represented as a Markov process.

3.7.2 Analysis Scheme

For computational efficiency, the dimensionality and order of the Polynomial Chaos expansion are homogenized throughout the solution. These parameters are defined initially, based on the uncertainty in the problem at hand, and are assumed constant thereafter. Since the model state and measurement vectors are assumed independent, their Polynomial Chaos representations have a sparse structure. Let $\mathsf{A}$ be the matrix holding the chaos coefficients of the state vector $x \in \mathbb{R}^n$,

$$\mathsf{A} = (x_0, x_1, \ldots, x_P) \in \mathbb{R}^{n \times (P+1)}, \qquad (3.64)$$

where $P+1$ is the total number of terms in the Polynomial Chaos representation of $x$ and $n$ is the size of the model state vector. The mean of $x$ is stored in the first column of $\mathsf{A}$ and is denoted by $x_0$; the state perturbations are given by the higher-order terms stored in the remaining columns.
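The covariance construction (3.62) can be checked numerically by sampling the expansion directly. In this sketch the coefficients, the two Gaussian dimensions, and the second-order Hermite basis {1, xi1, xi2, xi1^2-1, xi1*xi2, xi2^2-1} are hypothetical stand-ins for the problem's actual chaos representation:

```python
import numpy as np

# <psi_i^2> for the 2-D, second-order Hermite basis listed above.
psi_sq = np.array([1.0, 1.0, 1.0, 2.0, 1.0, 2.0])

# Hypothetical chaos coefficients of a 2-component state (rows: state entries).
X = np.array([[1.0, 0.5, 0.2, 0.1, 0.0, 0.05],
              [2.0, 0.1, 0.4, 0.0, 0.2, 0.10]])

# Mean is the zeroth coefficient; the covariance sums the higher-order terms (3.62).
mean = X[:, 0]
P = sum(psi_sq[i] * np.outer(X[:, i], X[:, i]) for i in range(1, X.shape[1]))

# Cross-check against direct Monte Carlo sampling of the same expansion.
rng = np.random.default_rng(5)
xi = rng.standard_normal((2, 200_000))
Psi = np.stack([np.ones_like(xi[0]), xi[0], xi[1],
                xi[0] ** 2 - 1.0, xi[0] * xi[1], xi[1] ** 2 - 1.0])
samples = X @ Psi
print(P)
print(np.cov(samples))   # should agree with P to sampling accuracy
```

This is the sense in which the chaos coefficients replace an ensemble: second (and higher) moments are available in closed form from the coefficients alone.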
Consequently, the state error covariance matrix $P \in \mathbb{R}^{n \times n}$ is defined as

$$P = \sum_{i=1}^{P} x_i x_i^T\, \langle \psi_i^2 \rangle. \qquad (3.65)$$

Given a vector of measurements $d \in \mathbb{R}^m$, with $m$ being the number of measurements, a Polynomial Chaos representation of the measurements is defined as

$$D = \sum_{j=0}^{P} d_j\, \psi_j(\xi), \qquad (3.66)$$

where the mean $d_0$ is given by the actual measurement vector and the higher-order terms represent the measurement uncertainties. The representation $D$ can be stored in the matrix

$$\mathsf{B} = (d_0, d_1, \ldots, d_P) \in \mathbb{R}^{m \times (P+1)}. \qquad (3.67)$$

From Eq. 3.66, we can construct the measurement error covariance matrix

$$R = \sum_{i=1}^{P} d_i d_i^T\, \langle \psi_i^2 \rangle \in \mathbb{R}^{m \times m}. \qquad (3.68)$$

The Kalman Filter forecast step is carried out by employing a stochastic Galerkin scheme, and the assimilation step simply consists of the traditional Kalman Filter correction applied to the Polynomial Chaos expansion of the model state vector,

$$\sum_{i=0}^{P} x_i^a \psi_i = \sum_{i=0}^{P} x_i^f \psi_i + PH^T (HPH^T + R)^{-1} \Big(\sum_{i=0}^{P} d_i \psi_i - H \sum_{i=0}^{P} x_i^f \psi_i\Big). \qquad (3.69)$$

Projecting onto the approximating space spanned by the Polynomial Chaos $\{\psi_i\}_{i=0}^{P}$ yields

$$x_i^a = x_i^f + PH^T (HPH^T + R)^{-1} (d_i - H x_i^f), \qquad i = 0, 1, \ldots, P. \qquad (3.70)$$

In matrix form, the assimilation step is expressed as

$$\mathsf{A}^a = \mathsf{A}^f + PH^T (HPH^T + R)^{-1} (\mathsf{B} - H \mathsf{A}^f). \qquad (3.71)$$

Chapter 4

Multiphase Flow in Porous Media

Flow through porous media is a problem encountered in many realms of science and engineering. Applications include groundwater hydrology, reservoir engineering, soil mechanics, and chemical and biomedical engineering. In this study the focus is on reservoir engineering, but the proposed methods can easily be extended to other disciplines involving multiphase flow in porous media.

Fluid motion in porous media is governed by the same fundamental laws that govern fluid motion in the atmosphere, pipelines, and rivers: the mass, momentum, and energy conservation laws. All the laws considered in this study are valid at the macroscopic level, i.e. for a volume of porous medium that is large with respect to the size of the fluid particles and of the pores, yet small with respect to the size of the field itself [CJ86].

4.1 Flow Equations

In order to model a multiphase flow system, conservation of mass is required for each phase present. The general form of these equations can be expressed as

$$\frac{\partial}{\partial t}(\phi \rho_i S_i) = -\nabla \cdot \big(\rho_i \vec{V}_i\big) + q_i, \qquad i = 1, 2, \ldots \qquad (4.1)$$

where $i$ denotes the fluid phase, $\phi$ is the porosity of the medium, $\rho_i$ is the density of phase $i$, $S_i$ is the phase saturation, $\vec{V}_i$ is the phase velocity, $q_i$ is a source/sink term, and $t$ is time.

Darcy's law is commonly used for calculating the phase flow velocities. It was developed by Darcy in the mid-19th century for single-phase flow only; Muskat [Mus49] showed experimentally that Darcy's law remains valid for each fluid separately when two or more immiscible fluids share the pore space. Darcy's law for the phase velocities is given by

$$\vec{V}_i = -\frac{\vec{K} K_{ri}}{\mu_i} \big(\nabla P_i - \rho_i g \nabla z(x)\big), \qquad (4.2)$$

where $\vec{K}$ is the intrinsic permeability, $K_{ri}$ is the relative permeability of phase $i$, $\mu_i$ is the viscosity of phase $i$, $P_i$ is the pressure of phase $i$, $g$ is the gravitational acceleration, and $z$ is the depth of the fluid. Substituting Eq. 4.2 into Eq. 4.1 yields the general form of the flow equations for all phases,

$$\frac{\partial}{\partial t}(\phi \rho_i S_i) = \nabla \cdot \left[\rho_i \frac{\vec{K} K_{ri}}{\mu_i} \big(\nabla P_i - \rho_i g \nabla z(x)\big)\right] + q_i, \qquad i = 1, 2, \ldots \qquad (4.3)$$

This set of equations signifies that flow in porous media is driven by gravity, pressure gradients, and viscous forces. It also incorporates the effects of porous matrix compressibility, fluid compressibility, capillary pressure, and the spatial variability of permeability and porosity. The nonlinearity arises from the dependence of the phase relative permeabilities and of the capillary pressure on the phase saturations.

4.2 Features of the Reservoir and the Fluids

In this section, the geological features of the reservoir and the properties of the fluids are presented.
In the following, the physical properties and characteristics of all the unknown parameters in eqs. 4.3 are described. 57 4.2.1 Density and Viscosity The density and the viscosity are intrinsic fluid properties. Density is defined as the mass per unit volume, it varies mainly with temperature. In this study, fluid density variation is neglected. The state equation for an ideal liquid with constant compressibility is given by ρ =ρ o exp[c(p−p o )], (4.4) and that for an ideal gas is ρ = W RT p (4.5) whereρ o is the fluid density at some reference pressure,p o ,c is the fluid’s compressibil- ity,W is the molecular weight,R is the gas constant, andT is the absolute temperature. Viscosity is the measure of the resistance to flow caused by the internal friction within the fluid. Viscosity is also a function of temperature; it decreases as temperature increases. The differences in the viscosities of two interacting immiscibe fluids cause instabilities resulting in a phenomena known as viscous fingering. Fingering manifests itself as unexpected macroscopic deformations in the inter facial boundary separating the two fluids. 4.2.2 Porosity Porosity of a medium is the percentage of void space existing between soil particles. It is a function of the size-sorting and shape of the soil grains. Methods for computing the porosity of a medium are well studied and documented [Bea72]. Consider at a point in the porous medium a volume ΔU i much larger than a single pore or grain. For this volume the following ratio is determined, φ i = (ΔU v ) i /ΔU i (4.6) 58 where(ΔU v ) i is the volume of void space withinΔU i . The medium’s volumetric poros- ity,φ, at that point is defined as: φ = lim ΔU i →ΔUo (ΔU v ) i ΔU i (4.7) whereΔU o is the representative elementary volume (REV) of the porous medium at the specified point. 4.2.3 Saturation and Residual Saturation Saturation of a fluid is the ratio of the volume of the void filled with fluid to the total void volume. 
The values of the saturations vary between zero and one, and the saturations of all coexisting fluid phases add up to one, leading to the constraint

Σ_{i=1}^{n} S_i = 1,   (4.8)

where n is the number of phases present in the system. Two critical values of the phase saturations are identified: (1) S_wc, the saturation at which the injected fluid starts to flow, and (2) S_nc, the saturation at which the displaced fluid ceases to flow. S_wc and S_nc are known as the residual saturations.

4.2.4 Intrinsic and Relative Permeability

The intrinsic permeability K is a tensorial quantity depending on the pore-size opening available for fluid flow. Moreover, K depends on the nature of the fluid saturating the porous medium.

The relative permeability signifies the fraction of the pore space available for a phase to flow. Recent studies [BBM87, MG87] have suggested that the relative permeability varies with the saturation in a tensorial manner. For two-phase flow, Brooks and Corey [BC64] derived empirically based expressions for the relative permeabilities of the wetting and non-wetting phases. Figure 4.1 presents typical two-phase relative permeability-saturation relations. These permeability curves are expressed as

K_rw = S_e^{(2+3λ)/λ}   (4.9)

and

K_rn = (1 - S_e)² (1 - S_e^{(2+λ)/λ}),   (4.10)

where K_rw and K_rn are the relative permeabilities of the wetting and non-wetting phases respectively, λ is a model fitting parameter related to the pore-size distribution of the soil material, and S_e is the reduced saturation given by

S_e = (S_w - S_wc) / (S_m - S_wc),   (4.11)

where S_m is the maximum wetting-phase saturation.

In the case of three-phase flow systems, theoretical models have been developed for three-phase relative permeability curves [Sto70, DP89, FM84]. Two common beliefs coexist for three-phase relative permeabilities. The first argues that the effect of heterogeneity on the relative permeability leads to the hysteretic behavior demonstrated by the curves [Dul91, LP87, KP87].
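The Brooks-Corey curves of Eqs. 4.9-4.11 are straightforward to evaluate numerically. The sketch below does so for an assumed pore-size parameter λ = 2 and hypothetical residual and maximum saturations; the values are for illustration only.

```python
def brooks_corey_relperm(S_w, S_wc=0.16, S_m=1.0, lam=2.0):
    """Wetting/non-wetting relative permeabilities, Eqs. 4.9-4.11."""
    S_e = (S_w - S_wc) / (S_m - S_wc)        # reduced saturation, Eq. 4.11
    S_e = min(max(S_e, 0.0), 1.0)            # clamp to the physical range
    K_rw = S_e ** ((2 + 3 * lam) / lam)      # wetting phase, Eq. 4.9
    K_rn = (1 - S_e) ** 2 * (1 - S_e ** ((2 + lam) / lam))  # non-wetting, Eq. 4.10
    return K_rw, K_rn

# At the endpoints the curves behave as expected:
print(brooks_corey_relperm(0.16))  # (0.0, 1.0): only the non-wetting phase flows
print(brooks_corey_relperm(1.0))   # (1.0, 0.0): only the wetting phase flows
```

Between the endpoints the wetting-phase curve rises monotonically with S_w, which mirrors the typical shape of Figure 4.1.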
The second argument suggests that the fluid channel network available for the flow depends mainly on the fluid phase contents and not on the saturation history of the medium, which leads to the conclusion that relative permeability curves are non-hysteretic [LGN89].

Figure 4.1: Typical relative permeability curves.

4.2.5 Capillary Pressure

The capillary pressure is the pressure difference that occurs across the fluid-fluid interfaces of the coexisting phases in the subsurface. It is defined as

p_c = p_nw - p_w,   (4.12)

where p_c is the capillary pressure, p_nw is the pressure of the non-wetting phase, and p_w is the pressure of the wetting phase. In order to determine which of the fluids is the wetting one, one has to look at the meniscus separating the two fluids in a capillary tube: the concavity of the meniscus is oriented toward the non-wetting fluid [CJ86]. Figure 4.2 demonstrates this phenomenon. The capillary pressure increases with a decrease in the contact angle between the phases in the pore space. The smaller the pore radius, the higher the resistance to the non-wetting phase occupying a pore already occupied by the wetting phase. Therefore, the non-wetting phase usually occupies the larger pore spaces, where the capillary resistance is lower, and the wetting phase occupies the smaller ones.

The capillary pressure can be expressed as a function of the wetting-phase saturation only. Empirical relationships describing this functional dependence are known as capillary pressure-saturation curves. Figure 4.3 shows a typical capillary pressure-saturation curve. Brooks and Corey [BC64] developed a closed-form expression that represents these curves for two-phase flow. Brooks and Corey's equation is

p_c = p_d S_e^{-1/λ},   p_c ≥ p_d,   (4.13)

where p_d is the displacement or threshold pressure, which first gives rise to the oil-phase permeability.
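Both capillary pressure models, the Brooks-Corey form of Eq. 4.13 and the Parker et al. form introduced next (Eq. 4.14), can be sketched in a few lines. The parameter values below (p_d, λ, m) are assumptions for illustration, not fitted values from this study.

```python
def pc_brooks_corey(S_e, p_d, lam):
    """Capillary pressure, Eq. 4.13 (valid where p_c >= p_d)."""
    return p_d * S_e ** (-1.0 / lam)

def pc_parker(S_e, P_d, m):
    """Parker et al. model reduced to two-phase flow, Eq. 4.14."""
    return P_d * (S_e ** (-1.0 / m) - 1.0) ** (1.0 - m)

# Both curves rise as the wetting phase drains (S_e -> 0), consistent
# with the drainage branch of Figure 4.3:
print(pc_brooks_corey(0.25, p_d=1.0, lam=2.0))  # 2.0
print(pc_parker(1.0, P_d=1.0, m=0.5))           # 0.0 at full wetting saturation
```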
Later, Parker et al. [PL87] extended Brooks and Corey's model to three-phase flow and derived an expression for the capillary pressure that can be reduced to two-phase flow as well. This model is given by

p_c = P_d (S_e^{-1/m} - 1)^{1-m},   p_c ≥ 0,   (4.14)

where m is a model fitting parameter.

Figure 4.2: Determination of the wetting phase.

Figure 4.3: Typical capillary pressure-saturation curve (drainage and imbibition branches).

4.3 Characterization of Reservoir Simulation Models

Reservoir simulation is a powerful tool for reservoir characterization and management; it enhances the production forecasting process. The efficiency of a reservoir model relies on its ability to characterize the geological and petrophysical features of the actual field. One of the most commonly used methods for reservoir characterization is automatic history matching. History matching aims at estimating reservoir parameters, such as porosities and permeabilities, so as to minimize the squared mismatch between observations and computed values. The heterogeneities of the geological formations and the uncertainties associated with the medium properties render history matching a very complex and challenging problem. In this study a novel history matching methodology based on the Polynomial Chaos Kalman Filter is presented.

This study deals with the two-phase immiscible water-flooding reservoir engineering problem. Initially the porous medium is assumed to be fully saturated with oil, and water is pumped through one well to push the oil out through other wells in the field. The governing flow equations consist of the water continuity equation,

φ ∂S_w/∂t = ∇·[(K K_rw / μ_w)(∇P_w - ρ_w g ∇z(x))] + q_w,   (4.15)

and the oil continuity equation,

φ ∂S_o/∂t = ∇·[(K K_ro / μ_o)(∇P_o - ρ_o g ∇z(x))] + q_o.   (4.16)

These equations are subject to the following constraints:

S_w + S_o = 1,   (4.17)
p_c(S_w) = p_o - p_w.   (4.18)

4.3.1 Uncertainty Quantification

The heterogeneity of the porous medium is dealt with in a probabilistic sense, i.e. by modeling the intrinsic permeability and porosity of the medium as stochastic processes via their Polynomial Chaos expansions. In this study, two scenarios are used for representing the uncertain medium properties. In the first, both the intrinsic permeability and the porosity are represented as stochastic processes using the Polynomial Chaos expansion:

K(x,θ) ≈ K̂(x,θ) = Σ_{i=0}^{M} K_i(x) ψ_i(ξ(θ)),   (4.19)

φ(x,θ) ≈ φ̂(x,θ) = Σ_{j=0}^{M} φ_j(x) ψ_j(ξ(θ)),   (4.20)

where {K_i(x)} and {φ_i(x)} are sets of deterministic functions to be estimated using the proposed sequential data assimilation technique.

The second scenario represents the intrinsic permeability of the porous medium as a stochastic process while modeling the porosity as a random variable. The ratio, α(x,θ), of the intrinsic permeability to the porosity of the medium is then represented as

α(x,θ) = K(x,θ)/φ(θ) ≈ exp(Σ_{i=0}^{P} α_i(x) ψ_i(ξ(θ))),   (4.21)

where {α_i(x)} is a set of deterministic functions to be estimated using the proposed sequential data assimilation technique. Although both scenarios work well, the first requires more monitoring to guarantee that the estimated quantities remain physically valid.

The solution of the resulting system of stochastic partial differential equations is also represented via Polynomial Chaos,

S_w(x,θ) ≈ Ŝ_w(x,θ) = Σ_{k=0}^{P} S_{w_k}(x) ψ_k(ξ),   (4.22)

and

P_w(x,θ) ≈ P̂_w(x,θ) = Σ_{k=0}^{P} P_{w_k}(x) ψ_k(ξ),   (4.23)

where {P_{w_k}} and {S_{w_k}} are the deterministic nodal vectors to be solved for, P denotes the number of terms in the Polynomial Chaos expansion, and {ψ_k(ξ)} is a basis set consisting of orthogonal polynomial chaoses of consecutive orders.
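The positivity-preserving property of the exponential representation in Eq. 4.21 is easy to check by Monte Carlo sampling of the Hermite chaos basis. The sketch below uses a first-order one-dimensional expansion (in which case α is lognormal, so its mean has a closed form); the coefficient values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_alpha(a0, a1, a2, n=200_000):
    """Sample alpha = exp(a0*psi_0 + a1*psi_1 + a2*psi_2) with Hermite
    chaoses psi_0 = 1, psi_1 = xi, psi_2 = xi^2 - 1 in one Gaussian xi."""
    xi = rng.standard_normal(n)
    return np.exp(a0 + a1 * xi + a2 * (xi**2 - 1.0))

# With a2 = 0 the expansion is lognormal, so E[alpha] = exp(a0 + a1^2 / 2).
a0, a1 = 0.5, 0.3
samples = sample_alpha(a0, a1, 0.0)
print(samples.mean(), np.exp(a0 + a1**2 / 2))  # close agreement
print(samples.min() > 0)                       # True: positivity is automatic
```

This is exactly why the second scenario "guarantees the positivity of the estimated medium parameters": the exponential maps any chaos coefficients to strictly positive realizations.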
The relative permeabilities and the capillary pressure are functions of the water saturation, and therefore they are represented using their Polynomial Chaos expansions as well. Fourth-order polynomial approximations of the Brooks-Corey model are adopted, and the relative permeabilities and capillary pressure are approximated as

K_rw ≈ K̂_rw = Σ_{i=0}^{4} (Σ_{j=0}^{P} S_{e_j} ψ_j)^i c_i + η(ξ),   (4.24)

K_ro ≈ K̂_ro = Σ_{i=0}^{4} (Σ_{j=0}^{P} S_{e_j} ψ_j)^i c_i + η(ξ),   (4.25)

and

P_c ≈ P̂_c = Σ_{i=0}^{4} (Σ_{j=0}^{P} S_{e_j} ψ_j)^i c_i + η(ξ),   (4.26)

where η represents the modeling error, and the coefficients c_i are numerically calculated for each curve based on the value of the Brooks-Corey model fitting parameter, λ.

4.3.2 Model Formulation

The error, ε, resulting from truncating the Polynomial Chaos expansions at a finite number of terms is minimized by forcing it to be orthogonal to the solution space spanned by the basis set of orthogonal stochastic processes appearing in the Polynomial Chaos expansion [GS03]. Mathematically, this is expressed as

⟨ε, ψ_m⟩ = 0,   m = 1, 2, ...,   (4.27)

where ⟨·⟩ denotes mathematical expectation. This yields a system of N × (P+1) nonlinear coupled algebraic equations to be solved for the deterministic coefficients of the water saturation and pressure, where N is the number of nodes in the spatially discretized mesh and P+1 is the total number of terms in the Polynomial Chaos expansion [Hat98]. If the first scenario is adopted, the system is represented as

Σ_{i=0}^{P} Σ_{j=0}^{P} φ_i (∂S_{w_j}/∂t) ⟨ψ_i ψ_j ψ_m⟩ = ∇·[Σ_{i=0}^{P} Σ_{j=0}^{P} Σ_{k=0}^{P} (K_i K_{rw_j}/μ_w)(∇P_{w_k}) ⟨ψ_i ψ_j ψ_k ψ_m⟩] + q_w ⟨ψ_m⟩,   ∀m,   (4.28)

Σ_{i=0}^{P} Σ_{j=0}^{P} φ_i (-∂S_{w_j}/∂t) ⟨ψ_i ψ_j ψ_m⟩ = ∇·[Σ_{i=0}^{P} Σ_{j=0}^{P} Σ_{k=0}^{P} (K_i K_{ro_j}/μ_o) ∇(P_{w_k} + P_{c_k}) ⟨ψ_i ψ_j ψ_k ψ_m⟩] + q_o ⟨ψ_m⟩,   ∀m,   (4.29)

where the expectations ⟨ψ_i ψ_j ψ_m⟩ and ⟨ψ_i ψ_j ψ_k ψ_m⟩ are easily calculated and tabulated in the literature [GS03].
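The expectations ⟨ψ_i ψ_j ψ_m⟩ in Eqs. 4.28-4.29 can be verified numerically by Gauss-Hermite quadrature. The sketch below does so for the one-dimensional probabilists' Hermite basis and cross-checks against the known closed form ⟨He_i He_j He_k⟩ = i! j! k! / ((s-i)!(s-j)!(s-k)!) when s = (i+j+k)/2 is a nonnegative integer with s ≥ max(i,j,k), and zero otherwise (a standard identity, stated here from general knowledge rather than from this dissertation).

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def He(n, x):
    """Probabilists' Hermite polynomial He_n evaluated at x."""
    c = np.zeros(n + 1); c[n] = 1.0
    return hermeval(x, c)

def triple_product(i, j, k, n_quad=20):
    """<He_i He_j He_k> for standard normal xi, via quadrature."""
    x, w = hermegauss(n_quad)           # weight exp(-x^2 / 2)
    w = w / math.sqrt(2 * math.pi)      # normalize to the Gaussian measure
    return float(np.sum(w * He(i, x) * He(j, x) * He(k, x)))

def triple_product_exact(i, j, k):
    s, r = divmod(i + j + k, 2)
    if r or s < max(i, j, k):
        return 0.0
    return (math.factorial(i) * math.factorial(j) * math.factorial(k)
            / (math.factorial(s - i) * math.factorial(s - j) * math.factorial(s - k)))

print(round(triple_product(1, 1, 2), 6), triple_product_exact(1, 1, 2))  # 2.0 2.0
```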
On the other hand, when the second scenario is employed, the resulting system is expressed as

Σ_{j=0}^{P} (∂S_{w_j}/∂t) ⟨ψ_j ψ_m⟩ = ∇·[Σ_{j=0}^{P} Σ_{k=0}^{P} (K_{rw_j}/μ_w)(∇P_{w_k}) ⟨exp(Σ_{i=0}^{P} α_i ψ_i) ψ_j ψ_k ψ_m⟩] + q_w ⟨ψ_m⟩,   ∀m,   (4.30)

Σ_{j=0}^{P} (-∂S_{w_j}/∂t) ⟨ψ_j ψ_m⟩ = ∇·[Σ_{j=0}^{P} Σ_{k=0}^{P} (K_{ro_j}/μ_o) ∇(P_{w_k} + P_{c_k}) ⟨exp(Σ_{i=0}^{P} α_i ψ_i) ψ_j ψ_k ψ_m⟩] + q_o ⟨ψ_m⟩,   ∀m.   (4.31)

Evaluating the above expectations is not a trivial task; an algorithm is developed to evaluate them as functions of ⟨ψ_i ψ_j ψ_m⟩ by using the method of change of variables followed by the following Hermite polynomial identity:

2^{n/2} ψ_n((x + y)/√2) = Σ_{k=0}^{n} (n choose k) ψ_k(x) ψ_{n-k}(y).   (4.32)

The spectral stochastic finite element method capabilities are developed within SUNDANCE [Lon04], and the package is used to numerically solve the above stochastic systems. Although the implementation of the second scenario is computationally more expensive, it guarantees the positivity of the estimated medium parameters.

4.4 Numerical Applications

Two synthetic sets of problems are selected to assess the efficiency of the proposed history matching technique. The first explores one-dimensional two-phase water flooding, while the second solves the two-dimensional two-phase model. In both sets, the model state vector consists of all the reservoir variables that are uncertain and need to be specified. These include the phase saturations and pressures as well as the suggested representation of the medium properties. The objective is to statistically estimate the medium properties through measurements of the water saturation at specific locations.

Table 4.1: Uncertainty representation using Polynomial Chaos

  Source of uncertainty    Representation
  Parametric (1)           ξ_1, ξ_1² - 1
  Parametric (2)           exp(ξ_1)
  Modeling                 ξ_2
  Measurement              ξ_3
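The Hermite identity of Eq. 4.32 above can be checked numerically for the first several orders; the evaluation points below are arbitrary.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermeval

def He(n, x):
    """Probabilists' Hermite polynomial He_n evaluated at x."""
    c = np.zeros(n + 1); c[n] = 1.0
    return hermeval(x, c)

def identity_sides(n, x, y):
    """Left and right sides of Eq. 4.32 for Hermite chaoses psi_n = He_n."""
    lhs = 2 ** (n / 2) * He(n, (x + y) / math.sqrt(2))
    rhs = sum(math.comb(n, k) * He(k, x) * He(n - k, y) for k in range(n + 1))
    return lhs, rhs

for n in range(6):
    lhs, rhs = identity_sides(n, 0.7, -1.3)
    assert abs(lhs - rhs) < 1e-9
print("identity holds for n = 0..5")
```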
The modeling and measurement errors are assumed to be independent Gaussian noises, and therefore they are represented using one-dimensional, first-order Polynomial Chaos expansions. Unlike the EnKF, where the model error is represented by an additive noise, in the PCKF the model error is incorporated in the Brooks-Corey model. In order to accommodate the fact that the medium properties may deviate from Gaussianity, the unknown porosity and permeability of the medium are either modeled as dependent one-dimensional, second-order Polynomial Chaos expansions according to scenario one proposed earlier, or their ratio is expressed as an exponential function of a one-dimensional, first-order Polynomial Chaos expansion as explained in the second scenario. Table 4.1 details the uncertainty representation in the numerical model.

4.4.1 One-Dimensional Buckley-Leverett Problem

The first test problem is the incompressible water-flood Buckley-Leverett example. The relative permeabilities within the model are given by the Brooks-Corey model. The reservoir is horizontal, with a length of 1000 ft, a cross-sectional area of 10000 ft², and a constant initial water saturation S_wi = 0.16. Oil is produced at x = 1000 ft at a rate of 426.5 ft³/day, and water is injected at x = 0 at the same rate. Three case studies are developed on this problem. In all three cases, scenario one is adopted for representing the parametric uncertainties.

4.4.1.1 Case 1: Homogeneous Medium

To test the validity of the proposed approach, the same Brooks-Corey model parameter, λ, is assumed for both the forward and inverse analyses. The field is assumed to be homogeneous with an intrinsic permeability of 270 md and a porosity of 0.275, and measurements of the water saturation and pressure are available every 50 ft at every 10th time step. Figure 4.4 gives the statistical properties of the estimated parameters; it represents the variation of these parameters with time.
It is noticed that as more measurements become available, the mean estimate converges toward the true model parameters, and the Polynomial Chaos coefficients decay exponentially, indicating a deterministic estimate. This is expected, since the model used to estimate the reservoir state is identical to the model used for generating the measurements.

Figure 4.4: Case 1: Estimate of medium properties: (a) mean intrinsic permeability, (b) Polynomial Chaos coefficients of intrinsic permeability, (c) mean porosity, and (d) Polynomial Chaos coefficients of porosity.

4.4.1.2 Case 2: Continuous Intrinsic Permeability and Porosity Profiles

In this case continuous profiles for the porosity and intrinsic permeability are assumed. Measurements of the water saturation and pressure are again assumed available every 50 ft at every 10th time step. However, the Brooks-Corey model used to generate the measurements has a fitting parameter λ = 2.2, while the model used in the filtering scheme has λ = 2.0. This results in uncertainties within the estimate associated with modeling errors. Figure 4.5 represents the Polynomial Chaos estimate of the medium properties obtained after 2000 updates. Figure 4.6 shows the probability density functions (pdf's) of the estimated porosity and intrinsic permeability at quarter span. It is clear from the obtained pdf's that the porosity and intrinsic permeability have non-Gaussian properties. In order to validate the estimated parameters, the estimated residual water saturation is plotted against the true model state in Figure 4.7.

4.4.1.3 Case 3: Discontinuous Porosity and Intrinsic Permeability Profiles

The only difference between cases 2 and 3 is that in the latter the porosity and intrinsic permeability have discontinuous profiles. Figure 4.8 presents the estimated medium properties. In Figure 4.9, the estimated residual water saturation profile is plotted along with the true model state at different times during the simulation.
Figure 4.5: Case 2: Estimate of medium properties: (a) mean intrinsic permeability, (b) Polynomial Chaos coefficients of intrinsic permeability, (c) mean porosity, and (d) Polynomial Chaos coefficients of porosity.

Figure 4.6: Case 2: The probability density functions of the estimated medium properties: (a) intrinsic permeability, (b) porosity.

Figure 4.7: Case 2: (a) Mean residual water saturation, (b) Polynomial Chaos coefficients of the estimated residual water saturation.

Figure 4.8: Case 3: Estimate of medium properties: (a) mean intrinsic permeability, (b) Polynomial Chaos coefficients of intrinsic permeability, (c) mean porosity, and (d) Polynomial Chaos coefficients of porosity.

Figure 4.9: Case 3: (a) Mean residual water saturation, (b) Polynomial Chaos coefficients of the last estimated residual water saturation.

4.4.2 Two-Dimensional Water Flood Problem - Scenario 1

Two example problems are studied to explore the two-dimensional water flood example. In these problems, the relative permeabilities are also represented by a polynomial approximation of the Brooks-Corey model. While the first example adopts scenario one to represent the parametric uncertainties, scenario two is used in the second.

The first example consists of a rectangular reservoir with a length of 60 ft, a width of 60 ft, a cross-sectional area of 1 ft², and a constant initial water saturation S_wi = 0.20. Oil is produced at x = 60 ft at a rate of 0.0323 ft³/day, and water is injected at x = 0 at the same rate. Measurements are available at 20 equidistant points within the domain at a frequency of 10 time steps. Figure 4.10 gives the means of the estimated medium properties. Although the estimated and true parameters are slightly different, it can be noticed from Figure 4.13 that the water front is captured accurately.
This can be explained by either the uncertainty prevalent in the estimated parameters, shown in Figures 4.11 and 4.12, or the non-uniqueness of the solution.

Figure 4.10: (a) Mean of estimated intrinsic permeability, (b) true intrinsic permeability, (c) mean of estimated porosity, and (d) true porosity.

4.4.3 Two-Dimensional Water Flood Problem - Scenario 2

The second scenario is used to represent the parametric uncertainty in the second group of problems. This problem consists of a rectangular domain with a length of 100 ft, a width of 60 ft, a cross-sectional area of 1 ft², and a constant initial water saturation S_wi = 0.20. Oil is produced from one well at a rate of 7.94 ft³/day, and water is injected from two different point sources at rates of 3.83 and 4.11 ft³/day, respectively. Figure 4.14 shows the locations of these wells along with the measurement locations. In this example, measurements are also available at a frequency of 10 time steps. Figure 4.15 presents the medium properties of the forward problem used to generate the measurements.

Figure 4.11: Polynomial Chaos coefficients of the estimated intrinsic permeability: (a) ξ_1, (b) ξ_2, (c) ξ_3, and (d) ξ_1² - 1.

In order to assess the importance of the different terms in the Polynomial Chaos expansion used to represent the unknown model parameters, three different approximations for the parametric and modeling uncertainties are assumed.
Figure 4.12: Polynomial Chaos coefficients of the estimated porosity: (a) ξ_1, (b) ξ_2, (c) ξ_3, and (d) ξ_1² - 1.

4.4.3.1 Case 1: Three-Dimensional, First-Order Approximation

For this part, it is assumed that the unknown model parameters are represented as

α = exp(α_0 + α_1 ξ_1),   (4.33)

where α_0 and α_1 are the unknown coefficients to be estimated using the filtering scheme. In this sub-example, the modeling and measurement errors are considered Gaussian, represented by ξ_2 and ξ_3 respectively. It is important to note that with this representation, the parametric uncertainty is independent of the other noise sources.

Figure 4.13: (a) Mean of the estimated residual water saturation, (b) true residual water saturation.

Figure 4.14: Example 2, Scenario 2: Problem description.

Figure 4.15: Example 2, Scenario 2: The medium properties of the forward problem.

Figure 4.16 presents the agreement between the mean estimated and the actual water saturation profiles. Figure 4.17 represents the coefficients of the exponential chaos expansion used to represent the ratio of the permeability to the porosity, and Figure 4.18 shows the mean and variance of the estimated ratio.

4.4.3.2 Case 2: Coupled Three-Dimensional, Second-Order Approximation

For this part, it is assumed that the unknown model parameters are represented as

α = exp(α_0 + α_1 ξ_1 + α_2 ξ_2 + α_3 ξ_3),   (4.34)

where α_0, α_1, α_2, and α_3 are the unknown coefficients to be estimated using the filtering scheme. In this sub-example, the modeling error is considered Gaussian, represented by ξ_3, while the parametric uncertainties are modeled as a second-order one-dimensional expansion in terms of ξ_1 and ξ_1² - 1. Figure 4.19 shows that using the latter approximation, a better agreement between the mean estimated and the actual water saturation profiles is achieved.
Figure 4.20 represents the coefficients of the exponential chaos expansion used to represent the ratio of the permeability to the porosity, and Figure 4.21 shows the mean and variance of the estimated ratio.

Figure 4.16: Case 1: True (left) and estimated (right) water saturation profiles.

Figure 4.17: Case 1: Estimated chaos coefficients of the exponential representation of the medium properties.

Figure 4.18: Case 1: The mean and variance of the estimated medium property α = K/φ.

Figure 4.19: Case 2: True (left) and estimated (right) water saturation profiles.

Figure 4.20: Case 2: Estimated chaos coefficients of the exponential representation of the medium properties.

4.4.3.3 Case 3: Coupled Ten-Dimensional, Second-Order Approximation

In case three, it is assumed that the unknown model parameters are represented as

α = exp(α_0 + Σ_{i=1}^{10} α_i ξ_i),   (4.35)

where {α_i}_{i=1}^{10} are the unknown coefficients to be estimated using the filtering scheme. In this sub-example, the modeling error is considered Gaussian, represented by ξ_10, while the parametric uncertainties are modeled as a second-order one-dimensional expansion in terms of ξ_1 and ξ_1² - 1.

Figure 4.21: Case 2: The mean and variance of the estimated medium property α = K/φ.

Figure 4.22 shows the compatibility between the mean estimated and the actual water saturation profiles, and Figure 4.23 shows the mean and variance of the estimated ratio.

Figure 4.22: Case 3: True (left) and estimated (right) water saturation profiles.

Figure 4.23: Case 3: The mean and variance of the estimated medium property α = K/φ.

4.5 Control of Fluid Front Dynamics

Having characterized the reservoir simulation model, maximization of the displacement efficiency becomes the new target. Such optimization is possible by controlling the injection rates for a fixed arrangement of injection and production points.
In this study a fundamental approach which relies on the Polynomial Chaos Kalman Filter to control the injection rates is provided. The benefits of such an approach are best realized when "smart" wells, having both measurement and control equipment, are used. The approach consists of two filtering loops, as portrayed in Figure 4.24. The measurements are used to update the model parameters, and, based on the most recent updates, another control loop is used to adjust the injection rates. The objective is to minimize the mismatch between the predicted front and a pre-specified target. The methodology is demonstrated in a simple water flooding example using two injectors and multiple producers, each equipped with several individually controllable inflow control valves. The model parameters (permeabilities) and dynamic states (pressures and saturations) are updated using measurements of the water saturation at various boreholes within the medium. The setup of the problem is shown in Figure 4.25. The problem objective is to maintain a uniform flow by adjusting the inflow rates. Two variations of the problem are solved. The first assumes that the medium properties are known a priori, and thus the problem at hand simplifies to a control problem only. The second aims at both estimating the medium properties and controlling the injection rates to maintain a uniform front.

Figure 4.24: Front control flow chart.

Figure 4.25: Flow control example: Problem setup.

4.5.1 Example 1: Known Medium Properties

In Example 1, it is assumed that the medium properties are given. The ratio K/φ is represented as exp(α), where α is shown in Figure 4.26. The problem therefore simplifies to a control problem only, where the injection rates are adjusted every 10 time steps so as to minimize the difference between the predicted front and a pre-specified target function. Two sources of uncertainty are represented in this example.
The first is related to the uncertainty in the injection rate. Although the injection rate is a deterministic quantity in practice, the uncertainty associated with it is modeled as a one-dimensional, second-order Polynomial Chaos expansion. This can be explained by associating the scatter within the estimated rate with that of the predicted front: the aim is to determine a combination of rates that will render the desired front. The second source of uncertainty is associated with the exactness of the target function. If the target has a very small tolerance, the latter uncertainty vanishes; in this problem it is modeled as a Gaussian noise with a relatively small variance. Figure 4.27 shows the evolution of the estimated injection rate with time. Figure 4.28 gives a comparison between the predicted and estimated water saturation profiles at different times during the simulation.

Figure 4.26: The coefficient α representing the medium properties.

Figure 4.27: Example 1: The estimated injection rate: (a) mean, (b) higher-order coefficients.

4.5.2 Example 2: Stochastic Medium Properties

The second example is a combination of the reservoir characterization problem described in the previous section and the control problem presented in Example 1. Here it is assumed that the medium properties are random and not known a priori. Thus the ratio K/φ is modeled as

α = exp(α_0 + α_1 ξ_1 + α_2 ξ_2 + α_3 ξ_3),   (4.36)

where α_0, α_1, α_2, and α_3 are the unknown coefficients to be estimated using the filtering scheme. Furthermore, in this example it is assumed that the characteristic model is not known exactly, and therefore modeling errors are introduced. These are represented as one-dimensional PC expansions, ξ_2.
As in the history matching problems described earlier, the measurement errors are also represented as one-dimensional PC expansions, ξ_3. Finally, the uncertainty associated with the water injection rates is represented as a one-dimensional, second-order PC expansion, ξ_1.

Figure 4.29 shows the evolution of the estimated injection rate with time. Figure 4.30 gives a comparison between the predicted and estimated water saturation profiles at different times during the simulation. Furthermore, Figure 4.31 gives the estimated coefficients α_i of the medium properties.

Figure 4.28: Example 1: Target (left) and mean estimated (right) water saturation profiles.

Figure 4.29: Example 2: The estimated injection rate: (a) mean, (b) higher-order coefficients.

Figure 4.30: Example 2: Target (left) and mean estimated (right) water saturation profiles.

Figure 4.31: Example 2: Estimated coefficients of the medium properties.

Chapter 5

Structural Health Monitoring of Highly Nonlinear Systems

5.1 Introduction

With the recent developments in monitoring technologies, such as high-performance sensors, optical and wireless networks, and the global positioning system, health monitoring of civil structures has become a more accurate, faster, and more cost-efficient process. However, significant challenges associated with modeling the physical complexity of the systems comprising these structures remain. This is mainly due to the fact that these systems exhibit non-linear dynamical behavior with uncertain and complex governing laws. These uncertainties render standard system identification techniques either unsuitable or inefficient.
Therefore, the need arises for robust system identification algorithms that can tackle the aforementioned challenges. This has been a very active research area over the past decade [GS95, LBL02, ZFYM02, FBL05, FIIN05, GF06].

Sequential data assimilation has been widely used for structural health monitoring and system identification problems. Many extensions of the Kalman Filter were developed as adaptations to important classes of these problems. While the Extended Kalman Filter may fail in the presence of strong non-linearities, Monte Carlo based Kalman Filters usually give satisfactory results. The Ensemble Kalman Filter (EnKF) [Eve94] was recently used for damage detection in strongly nonlinear systems [GF06], where it is combined with non-parametric modeling techniques to tackle structural health monitoring for non-linear systems. The EnKF uses a Monte Carlo simulation scheme for characterizing the noise in the system, and therefore allows representing non-Gaussian perturbations. Although this combination gives good results, it requires a relatively accurate representation of the non-linear system dynamics. It also requires a large ensemble size to quantify the non-Gaussian uncertainties in such systems and consequently imposes a high computational cost.

The objective of this study is to propose a new system identification approach based on coupling robust non-parametric non-linear models with the Polynomial Chaos methodology [GS03] in the context of Kalman Filtering techniques. The proposed approach uses a Polynomial Chaos expansion of the nonparametric representation of the system's non-linearity to statistically characterize the system's behavior. A filtering technique that allows the propagation of a stochastic representation of the unknown variables using Polynomial Chaos is used to identify the chaos coefficients of the unknown parameters in the model.
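A minimal sketch of the filtering idea just described, not the dissertation's actual implementation: an uncertain scalar state is carried through its Hermite chaos coefficients, an ensemble is synthesized by sampling the chaos basis, a standard EnKF-style perturbed-observation update is applied, and the updated ensemble is projected back onto the basis by least squares. All dimensions, noise levels, and the observation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def basis(xi):
    """1-D Hermite chaos basis up to second order: [1, xi, xi^2 - 1]."""
    return np.stack([np.ones_like(xi), xi, xi**2 - 1.0], axis=-1)

def pc_enkf_update(coeffs, h, y_obs, r, n_ens=5000):
    """EnKF-style update of the chaos coefficients of a scalar state.

    coeffs: prior chaos coefficients (length 3); h: observation operator;
    y_obs: observed value; r: observation-noise variance.
    """
    xi = rng.standard_normal(n_ens)
    Psi = basis(xi)                         # (n_ens, 3) basis evaluations
    x = Psi @ coeffs                        # prior ensemble of the state
    hx = h(x)
    K = np.cov(x, hx)[0, 1] / (np.var(hx) + r)   # scalar Kalman gain
    # perturbed-observation update of each ensemble member
    x_post = x + K * (y_obs + np.sqrt(r) * rng.standard_normal(n_ens) - hx)
    # project the updated ensemble back onto the chaos basis
    coeffs_post, *_ = np.linalg.lstsq(Psi, x_post, rcond=None)
    return coeffs_post

prior = np.array([0.0, 1.0, 0.0])           # standard normal prior
post = pc_enkf_update(prior, lambda x: x, y_obs=2.0, r=0.25)
print(post[0])  # posterior mean pulled toward the observation
```

In the linear-Gaussian case this reproduces the analytical Kalman update (gain 0.8 here, so a posterior mean near 1.6), while the chaos coefficients retain the ability to encode non-Gaussian posteriors when h is nonlinear.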
The introduced filter is a modification of the EnKF that uses the Polynomial Chaos methodology to represent uncertainties in the system. This allows the representation of non-Gaussian uncertainties in a simpler, less computationally taxing way, without the necessity of managing a large ensemble. It also allows obtaining the probability density function of the model state or parameters at any instant in time by simply simulating the Polynomial Chaos basis.

5.2 Numerical Application

The efficiency of the proposed method is assessed by applying it to the structural health monitoring of the four-story shear building shown in Figure 5.1. The model has a constant stiffness on each floor and a 5% damping ratio in all modes. All structural elements of the frame are assumed to exhibit hysteretic behavior, and it is supposed that a change in the hysteretic loop of the first-floor element occurs at some point. It is of utmost importance to localize that point in time and to track the state of the system throughout and subsequent to it.

Figure 5.1: Shear building under analysis.

A synthetically generated dataset representing measurements of the displacements and velocities at each floor is obtained by representing the hysteretic restoring force by the Bouc-Wen model, which is therefore considered the exact hysteretic behavior of the system. Thus, the equation of motion of the system is given by

$$M\ddot{u}(t) + C\dot{u}(t) + \alpha K_{el}\,u(t) + (1-\alpha) K_{in}\,z(x,t) = -M\tau\ddot{u}_g(t), \qquad (5.1)$$

where $M$, $C$, $K_{el}$, and $K_{in}$ are the mass, damping, elastic, and inelastic stiffness matrices, respectively; $\alpha$ is the ratio of the post-yielding stiffness to the elastic stiffness; $\tau$ is the influence vector; $u$ is the displacement vector; $x$ is the interstory drift vector; and $z$ is the $n$-dimensional evolutionary hysteretic vector whose $i$-th component is given by the Bouc-Wen model as

$$\dot{z}_i = A_i \dot{x}_i - \beta_i |\dot{x}_i|\,|z_i|^{n_i - 1} z_i - \gamma_i \dot{x}_i |z_i|^{n_i}, \qquad i = 1, \ldots, n. \qquad (5.2)$$

Table 5.1 presents the Bouc-Wen model parameters adopted in this application.

Table 5.1: Bouc-Wen Model Coefficients

    BW Coef.   pre-change   post-change
    α          0.15         0.15
    β          0.1          10
    γ          0.1          10
    A          1            1
    n          1            1

The structure is subject to a base motion specified by a time series consistent with the El Centro earthquake shown in Figure 5.2, and a change of the first-floor hysteretic behavior is assumed to take place five seconds after the excitation begins.

Two monitoring scenarios are considered. In both scenarios, observations of the floor displacements and velocities are available at all floors; however, the frequency of the measurements is varied to explore the impact of hardware limitations on the performance of the filter. In the first scenario, it is assumed that measurements are available every 5 time steps, while they are given every 20 time steps in the second scenario. A nonparametric representation of the system nonlinearity is adopted, and the filtering technique is used to characterize that representation in order to capture any ambiguous behavior of the structure examined.

Figure 5.2: The El Centro excitation applied to the structure.

5.2.1 Non-parametric Representation of the Non-Linearity

The proposed filtering methodology is combined with a non-parametric modeling technique to tackle structural health monitoring of non-linear systems in a fashion similar to that used earlier by Ghanem [GF06]; however, instead of adopting a deterministic non-parametric representation of the non-linearity, a stochastic representation via Polynomial Chaos is used.
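For reference, generating the synthetic data described above amounts to integrating the Bouc-Wen rate equation, Eq. 5.2, in time. The following minimal sketch (the struct and function names are illustrative, not from the thesis code) integrates the hysteretic variable of a single element with classical fourth-order Runge-Kutta, driven by a prescribed drift velocity:

```cpp
#include <cassert>
#include <cmath>

// Bouc-Wen rate equation of Eq. 5.2 for a single element:
//   z' = A x' - beta |x'| |z|^{n-1} z - gamma x' |z|^n,
// with parameter names following Table 5.1.  For n = 1 the hysteretic
// variable is bounded by z_ult = A / (beta + gamma).
struct BoucWen {
    double A, beta, gamma, n;
    double rate(double xdot, double z) const {
        return A * xdot
             - beta  * std::fabs(xdot) * std::pow(std::fabs(z), n - 1.0) * z
             - gamma * xdot * std::pow(std::fabs(z), n);
    }
};

// One classical fourth-order Runge-Kutta step for z, the same time
// stepping scheme used later in this chapter.
double rk4Step(const BoucWen& bw, double (*xdot)(double),
               double t, double z, double dt) {
    const double k1 = bw.rate(xdot(t),            z);
    const double k2 = bw.rate(xdot(t + 0.5 * dt), z + 0.5 * dt * k1);
    const double k3 = bw.rate(xdot(t + 0.5 * dt), z + 0.5 * dt * k2);
    const double k4 = bw.rate(xdot(t + dt),       z + dt * k3);
    return z + dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0;
}
```

Stepping this equation alongside Eq. 5.1, with the coefficients of Table 5.1 switched at the five-second mark, produces the synthetic measurement record.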
The basic idea behind the non-parametric identification technique used here is to determine an analytical function $\hat{F}$ that approximates the actual system non-linearities, with the form of $\hat{F}$ including suitable basis functions adapted to the problem at hand [MCC+04]. For general non-linear systems, a suitable choice of basis would be the list of terms in the doubly indexed power series

$$S = \sum_{i=0}^{i_{max}} \sum_{j=0}^{j_{max}} u^i \dot{u}^j, \qquad (5.3)$$

where $u$ and $\dot{u}$ denote the system's displacement and velocity, respectively. Therefore, if $i_{max} = 3$ and $j_{max} = 3$, the basis functions become

$$\text{basis} = \{1, \dot{u}, \dot{u}^2, \dot{u}^3, u, u\dot{u}, u\dot{u}^2, u\dot{u}^3, u^2, u^2\dot{u}, u^2\dot{u}^2, u^2\dot{u}^3, u^3, u^3\dot{u}, u^3\dot{u}^2, u^3\dot{u}^3\}. \qquad (5.4)$$

This notation readily generalizes from the SDOF case presented in Eqs. 5.3 and 5.4, as shown later. In the proposed method, the displacements and velocities are stochastic processes represented by their Polynomial Chaos expansions; thus, the approximating function is also expressed as a stochastic process via a Polynomial Chaos representation. The model adopted within the Kalman Filter is hence given by

$$M\ddot{u}(t) + F(u, \dot{u}) = -M\tau\ddot{u}_g(t), \qquad (5.5)$$

where $F$ is the non-parametric representation of the non-linearity, whose $i$-th floor component is given by

$$\begin{aligned}
F_i \approx \sum_j F^i_j(u,\dot{u})\,\psi_j
&= \sum_j a^i_j \psi_j \Big(\sum_k (u^i_k - u^{i-1}_k)\psi_k\Big)
 + \sum_j a^{i+1}_j \psi_j \Big(\sum_k (u^i_k - u^{i+1}_k)\psi_k\Big) \\
&+ \sum_j b^i_j \psi_j \Big(\sum_k (u^i_k - u^{i-1}_k)\psi_k\Big)^2
 + \sum_j b^{i+1}_j \psi_j \Big(\sum_k (u^i_k - u^{i+1}_k)\psi_k\Big)^2 \\
&+ \sum_j c^i_j \psi_j \Big(\sum_k (\dot{u}^i_k - \dot{u}^{i-1}_k)\psi_k\Big)
 + \sum_j c^{i+1}_j \psi_j \Big(\sum_k (\dot{u}^i_k - \dot{u}^{i+1}_k)\psi_k\Big) \\
&+ \sum_j d^i_j \psi_j \Big(\sum_k (u^i_k - u^{i-1}_k)\psi_k\Big)\Big(\sum_l (\dot{u}^i_l - \dot{u}^{i-1}_l)\psi_l\Big) \\
&+ \sum_j d^{i+1}_j \psi_j \Big(\sum_k (u^i_k - u^{i+1}_k)\psi_k\Big)\Big(\sum_l (\dot{u}^i_l - \dot{u}^{i+1}_l)\psi_l\Big).
\end{aligned} \qquad (5.6)$$

In Eq. 5.6, $\{a_j\}$, $\{b_j\}$, $\{c_j\}$, and $\{d_j\}$ represent the chaos coefficients of the unknown parameters to be identified.
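A small helper makes the doubly indexed basis of Eqs. 5.3 and 5.4 concrete: it evaluates every monomial $u^i \dot{u}^j$ at a given displacement-velocity pair, in the same ordering as Eq. 5.4. The function name is illustrative:

```cpp
#include <cassert>
#include <vector>

// Evaluate the doubly indexed power-series basis of Eq. 5.3,
// {u^i * udot^j : 0 <= i <= imax, 0 <= j <= jmax}, at a given (u, udot),
// with i outer and j inner so the ordering matches Eq. 5.4.
std::vector<double> powerBasis(double u, double udot, int imax, int jmax) {
    std::vector<double> b;
    b.reserve((imax + 1) * (jmax + 1));
    double ui = 1.0;                       // u^i
    for (int i = 0; i <= imax; ++i) {
        double uj = 1.0;                   // udot^j
        for (int j = 0; j <= jmax; ++j) {
            b.push_back(ui * uj);
            uj *= udot;
        }
        ui *= u;
    }
    return b;
}
```

For $i_{max} = j_{max} = 3$ this returns the 16 basis values listed in Eq. 5.4.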
The fourth-order Runge-Kutta method is used for the time stepping, and a stochastic Galerkin scheme is employed to solve the system at each time step.

5.2.2 Results

In this application, it is assumed that observations of displacements and velocities from all floors are available. The noise signals perturbing the model and the measurements are modeled as first-order, one-dimensional, independent Polynomial Chaos expansions having zero mean and an RMS of 0.05 and 0.001, respectively. The parameters' uncertainties, on the other hand, are modeled as second-order, one-dimensional Polynomial Chaos expansions whose coefficients are to be determined in accordance with the available observations. This is done to incorporate the possibility that the unknown parameters may deviate from Gaussianity. Furthermore, it is assumed that the first floor undergoes a change in its hysteretic behavior 5 seconds after the ground excitation begins. The purpose of the application is to detect this behavioral change.

5.2.2.1 Example 1: Δt = 5 Time Steps

In the first example, it is assumed that measurements of the floors' displacements and velocities are available every 5 time steps. Figures 5.3 and 5.4 describe the tracking of the displacement and velocity for the first and fourth floors, respectively. An excellent match between the states estimated using the Polynomial Chaos based Kalman Filter and the true states is observed. Figure 5.5 presents the evolution of the mean of the unknown parameters identified by the proposed filtering technique; error bars representing the scatter in the estimated parameters are also shown. The different jumps in the parameters are associated with the peaks in the corresponding excitation.
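The stochastic Galerkin solve mentioned above repeatedly needs one core operation: re-expressing the product of two chaos expansions (such as the quadratic and cross terms of Eq. 5.6) in the chaos basis, via $c_k = \sum_{i,j} a_i b_j \, E[\psi_i\psi_j\psi_k] / E[\psi_k^2]$. The sketch below (illustrative names, one-dimensional Hermite chaos only) uses the closed-form triple product of probabilists' Hermite polynomials:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Exact factorial for the small orders used here.
static double fact(int n) {
    double f = 1.0;
    for (int m = 2; m <= n; ++m) f *= m;
    return f;
}

// E[He_i He_j He_k] for probabilists' Hermite polynomials: nonzero only
// when i + j + k = 2s is even and s >= max(i, j, k), in which case it
// equals i! j! k! / ((s-i)! (s-j)! (s-k)!).
static double tripleProduct(int i, int j, int k) {
    const int sum = i + j + k;
    if (sum % 2 != 0) return 0.0;
    const int s = sum / 2;
    if (s < i || s < j || s < k) return 0.0;
    return fact(i) * fact(j) * fact(k)
         / (fact(s - i) * fact(s - j) * fact(s - k));
}

// Galerkin product of two one-dimensional Hermite chaos expansions:
// c_k = sum_{i,j} a_i b_j E[psi_i psi_j psi_k] / E[psi_k^2], E[psi_k^2] = k!.
std::vector<double> pcMultiply(const std::vector<double>& a,
                               const std::vector<double>& b, int order) {
    std::vector<double> c(order + 1, 0.0);
    for (int k = 0; k <= order; ++k) {
        for (int i = 0; i < static_cast<int>(a.size()); ++i)
            for (int j = 0; j < static_cast<int>(b.size()); ++j)
                c[k] += a[i] * b[j] * tripleProduct(i, j, k);
        c[k] /= fact(k);
    }
    return c;
}
```

For example, squaring the expansion with coefficients (0, 1), i.e. $X = \xi$, yields (1, 0, 1), the chaos coordinates of $\xi^2 = 1 + He_2(\xi)$.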
Figure 5.3: (a) Estimate of the first floor displacement (Δt = 5 time steps); (b) estimate of the first floor velocity (Δt = 5 time steps).

Figure 5.4: (a) Estimate of the fourth floor displacement (Δt = 5 time steps); (b) estimate of the fourth floor velocity (Δt = 5 time steps).

Further investigation of the parameters indicates that the main changes take place in the first floor following the 5 sec mark. Note that the parameters a and c in floors 1 and 2 undergo the greatest jumps since they are associated with the inter-story drift and velocity, respectively. One of the main advantages of the Polynomial Chaos Kalman Filter is that it provides the scatter around the estimated parameters, represented by the probability density functions corresponding to each of the estimated parameters. Figure 5.6 presents the pdf's of the estimated floor 1 parameters.
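These pdf's are obtained exactly as described earlier: by simulating the Polynomial Chaos basis once the coefficients have been identified. A minimal sketch for a second-order, one-dimensional expansion (the function name and sampling parameters are illustrative assumptions):

```cpp
#include <cassert>
#include <cmath>
#include <random>
#include <vector>

// Sample a second-order, one-dimensional Hermite chaos expansion
//   X = c0 + c1*xi + c2*(xi^2 - 1),   xi ~ N(0,1),
// the representation used for the unknown parameters; c must hold the
// three coefficients (c0, c1, c2).  A histogram of the samples is the
// estimated parameter pdf.
std::vector<double> samplePC(const std::vector<double>& c,
                             int nSamples, unsigned seed) {
    std::mt19937 gen(seed);
    std::normal_distribution<double> N01(0.0, 1.0);
    std::vector<double> samples(nSamples);
    for (int s = 0; s < nSamples; ++s) {
        const double xi = N01(gen);
        samples[s] = c[0] + c[1] * xi + c[2] * (xi * xi - 1.0);  // He_2(xi) = xi^2 - 1
    }
    return samples;
}
```

The sample mean converges to $c_0$ and the sample variance to $c_1^2 + 2c_2^2$, so mean and scatter come at the negligible cost of evaluating a polynomial in Gaussian samples.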
Figure 5.5: Estimate of the mean floor parameters (Δt = 5 time steps): (a) first floor, (b) second floor, (c) third floor, and (d) fourth floor.

Figure 5.6: Probability density function of estimated floor 1 parameters (Δt = 5 time steps): (a) "a", (b) "b", (c) "c", and (d) "d".

5.2.2.2 Example 2: Δt = 20 Time Steps

In the second run, to test the limitations of the monitoring hardware and software, it is assumed that measurements are available every 20 time steps. Figures 5.7 and 5.8 describe the tracking of the displacement and velocity for the first and fourth floors, respectively. While the match between the states estimated using the Polynomial Chaos based Kalman Filter and the true states is not as good as in the previous example, the estimate still gives a good representation of the true state. Figure 5.9 presents the evolution of the mean of the unknown parameters identified by the proposed filtering technique. It is again readily noted that the change in hysteretic behavior is concentrated at the first floor at a time after 5 sec. Figure 5.10 presents the probability density functions associated with the estimated first floor parameters.
Although the mean of the identified parameters is closely comparable to that in Example 1, it is obvious that the associated scatter has increased, indicating a higher coefficient of variation.

Figure 5.7: (a) Estimate of the first floor displacement (Δt = 20 time steps); (b) estimate of the first floor velocity (Δt = 20 time steps).

Figure 5.8: (a) Estimate of the fourth floor displacement (Δt = 20 time steps); (b) estimate of the fourth floor velocity (Δt = 20 time steps).

Figure 5.9: Estimate of the mean floor parameters (Δt = 20 time steps): (a) first floor, (b) second floor, (c) third floor, and (d) fourth floor.

Figure 5.10: Probability density function of estimated floor 1 parameters (Δt = 20 time steps): (a) "a", (b) "b", (c) "c", and (d) "d".

Chapter 6

Sundance

Sundance is a system for rapid development of high-performance parallel finite-element solutions of partial
differential equations [Lon04]. The toolkit is developed mainly by Kevin Long at Sandia National Laboratories. The high-level nature of Sundance relieves the user from worrying about the tedious and error-prone bookkeeping details. It also allows a high degree of flexibility in the formulation, discretization, and solution of the problem. Moreover, Sundance can both assemble and solve problems in parallel; one of its design goals is to make parallel solutions as simple as serial ones.

Sundance is written in the C++ programming language with optional Python wrappers. Therefore, it is necessary to know how to write and compile C++ code in order to use it. Only a fraction of the objects and methods that make up Sundance are ever needed in user code; most are used internally by Sundance.

Solution of partial differential equations is a complicated endeavor with many subtle difficulties, and there is no standard approach that tackles all PDEs. Sundance is a set of high-level objects that allow the user to build a simulation code with minimal effort. Although these objects shield the user from the rather tedious bookkeeping details required in writing a finite-element code, they still require an understanding of how to properly formulate and discretize a given problem.

6.1 Guidelines for Using Sundance

• Finite element methods for solving PDEs are based on the PDE's weak form. Therefore, Sundance requires describing PDEs by employing a high-level symbolic notation for writing weak forms.

• The spatial discretization, or mesh, of the problem can be created internally for simple one- or two-dimensional problems, but for more complex problems it is necessary to use a third-party mesher and import the mesh into Sundance.

• The FEM approximates the PDE solution by a system of linear or nonlinear algebraic equations.
Many factors influence this approximation, namely the choice of weak form, the method of imposing boundary conditions, the basis functions for the unknowns in the problem, and more.

• Sundance has a built-in library linking to the Trilinos solvers. Therefore it suffices to identify the solver required for the problem and specify its parameters, such as the tolerance level, the maximum number of iterations, and the linearization or iteration method for nonlinear problems.

• The solution of a time-dependent problem can be reduced to solving a sequence of linear (or possibly nonlinear) problems by timestepping, or marching. Again, there are many possible marching algorithms [Lon04].

6.2 SSFEM Using Sundance

In order to make use of Sundance in solving stochastic partial differential equations via the SSFEM, certain modifications to the code were made. A spectral library is developed within Sundance to allow the high-level implementation of the SSFEM. It entails defining a spectral basis, whose type, dimensionality, and order are to be specified, along with methods for computing expectations over the space approximated by the specified basis.

6.2.1 Example: One-dimensional Bar with a Random Stiffness

Consider a bar element of length $L$, clamped at both ends and subject to a deterministic static uniform body load $P$. It is assumed that the modulus of elasticity $E$ associated with the bar is a realization of a lognormal random process indexed over the spatial domain occupied by the bar. It is also assumed that the bar has a uniform cross-sectional area $A$. The governing differential equation for this problem is

$$\frac{d}{dx}\left( AE \frac{du}{dx} \right) = P. \qquad (6.1)$$

Although this is a simple example, much of the more complex Sundance syntax can be demonstrated through its solution. The boundary conditions (BCs) for the above problem are

$$u = 0 \quad \text{at } x = 0 \text{ and } x = L.$$

With the equation and BCs in hand, we can write the problem in weak form.
Multiplying by a test function $v$ and integrating by parts (the boundary terms vanish since $v = 0$ at both ends), we obtain

$$\int_{interior} \frac{dv}{dx}\, AE\, \frac{du}{dx}\, dx + \int_{interior} vP\, dx = 0. \qquad (6.2)$$

6.2.2 Step by Step Explanation

In this section, a walk through the code for solving the problem is detailed. At the end, the complete code is listed for reference.

6.2.2.1 Boilerplate

A dull but essential first step is to show the boilerplate C++ common to nearly every Sundance code:

#include "Sundance.hpp"

int main(int argc, char** argv)
{
  try
  {
    Sundance::init(&argc, &argv);

    /* Code Goes Here */
  }
  catch(exception& e)
  {
    Sundance::handleException(e);
  }
  Sundance::finalize();
}

These lines control initialization and result gathering for the problem, initializing and finalizing MPI if MPI is being used, along with other administrative tasks.

6.2.2.2 Getting the Mesh

A Mesh object is used by Sundance to represent a tessellation of the problem domain. There are many ways of getting a mesh, all abstracted with the MeshSource interface. Sundance is designed to work with different underlying mesh implementations, the choice of which is made by specifying a MeshType object. In this example, we use the BasicSimplicialMeshType, which is a lightweight parallel simplicial mesh. For the simple geometry associated with this problem, the discretization is done internally by Sundance as follows:

MeshType meshType = new BasicSimplicialMeshType();
MeshSource mesher = new PartitionedLineMesher(0.0, 1.0, 10, meshType);
Mesh mesh = mesher.getMesh();

In this example, it is assumed that the total bar length is 1.0, discretized into 10 elements.
This object is a predicate used to test whether a cell lies on the boundary domain or not:

CELL_PREDICATE(LeftPointTest, {return fabs(x[0]) < 1e-8;});
CELL_PREDICATE(RightPointTest, {return fabs(x[0]-1.0) < 1e-8;});

This identifies the extremities of the domain, located at x = 0 and x = 1.0. Now the CellFilter object is used to define the separate domains:

CellFilter interior = new MaximalCellFilter();
CellFilter points = new DimensionalCellFilter(0);
CellFilter leftPoint = points.subset(new LeftPointTest());
CellFilter rightPoint = points.subset(new RightPointTest());

6.2.2.4 Defining the Spectral Basis and the Spatial Interpolation Basis

The spectral basis is defined by specifying its type, the number of dimensions, and the order of the expansion used to generate it. For the purpose of implementing the SSFEM, the Hermite polynomial basis is used, and it is generated as follows:

int ndim = 1;
int order = 6;
SpectralBasis sbasis = new HermiteSpectralBasis(ndim, order);

Similarly, a spatial interpolation basis is defined by specifying its type and order. Given these bases, we can now construct both the physical DiscreteSpace object and the random one:

DiscreteSpace discSpace(mesh, new Lagrange(1), vecType);

The latter specifies that the problem uses a first-order Lagrange interpolation on the mesh.

6.2.2.5 Defining the Model Parameters and Excitation

The modulus of elasticity is defined as a lognormal process. Using the standard PCE expansion for a lognormal variable, the coefficients of the one-dimensional, sixth-order expansion are obtained and stored in data files. These files must then be read into Sundance vectors and the associated spectral expressions defined. Sundance has built-in functionality for reading data files into DiscreteFunctions.
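The coefficients stored in those data files follow from the known Hermite-chaos expansion of a lognormal variable [Gha99]: for $\exp(g)$ with $g \sim N(\mu, \sigma^2)$, the $k$-th one-dimensional coefficient is $e^{\mu + \sigma^2/2}\,\sigma^k / k!$. A sketch of their computation (the function name is illustrative; in the thesis the coefficients form nodal fields of the random process):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Hermite-chaos coefficients of a lognormal variable exp(g) with
// g ~ N(mu, sigma^2) [Gha99]:
//   exp(g) = exp(mu + sigma^2/2) * sum_k (sigma^k / k!) He_k(xi),
// so the k-th coefficient is exp(mu + sigma^2/2) * sigma^k / k!.
std::vector<double> lognormalPCE(double mu, double sigma, int order) {
    std::vector<double> c(order + 1);
    const double scale = std::exp(mu + 0.5 * sigma * sigma);
    double term = 1.0;                        // sigma^k / k!
    for (int k = 0; k <= order; ++k) {
        c[k] = scale * term;
        term *= sigma / static_cast<double>(k + 1);
    }
    return c;
}
```

The resulting files E0.dat through E6.dat hold one such coefficient field each and are read back into Sundance as nodal data.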
This is done as follows:

Expr E0 = new DiscreteFunction(discSpace, 0.0, "E0");
Vector<double> vec = DiscreteFunction::discFunc(E0)->getVector();
const RefCountPtr<DOFMapBase>& dofMap = DiscreteFunction::discFunc(E0)->map();
ifstream is("E0.dat");
int nNodes;
is >> nNodes;
TEST_FOR_EXCEPTION(mesh.numCells(0) != nNodes, RuntimeError,
  "number of nodes in data file fieldData.dat is " << nNodes
  << " but number of nodes in mesh is " << mesh.numCells(0));
Array<int> dofs(1);
double fVal;
for (int i=0; i<nNodes; i++)
{
  dofMap->getDOFsForCell(0, i, 0, dofs);
  int dof = dofs[0];
  is >> fVal;
  vec.setElement(dof, fVal);
}
DiscreteFunction::discFunc(E0)->setVector(vec);

In a similar fashion, the remaining PCE coefficients of E are read into Sundance. The excitation force is deterministic, and hence its spectral representation carries its value as the mean of the expansion with zero higher-order terms:

/* array of coefficients for the spectral expression */
Array<Expr> Coeff(sbasis.nterms());
Coeff[0] = 1.0;
Coeff[1] = 0.0;
Coeff[2] = 0.0;
Coeff[3] = 0.0;
Coeff[4] = 0.0;
Coeff[5] = 0.0;
Coeff[6] = 0.0;
Expr F = new SpectralExpr(sbasis, Coeff);
Expr A = 1.0;
Expr E = new SpectralExpr(sbasis, List(E0, E1, E2, E3, E4, E5, E6));

6.2.2.6 Defining the Unknown and Test Functions

Expressions representing the test and unknown functions are defined easily:

Expr u = new UnknownFunction(new Lagrange(1), sbasis, "u");
Expr v = new TestFunction(new Lagrange(1), sbasis, "v");

6.2.2.7 Writing the Weak Form

To write the weak form, we first need to define the quadrature rule for computing the integrals. We'll use second-order Gaussian quadrature.
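As an aside on the reading pattern above: the same read-check-fill sequence recurs once per coefficient file (E0.dat through E6.dat). Stripped of the Sundance DOF-map bookkeeping, the pattern can be factored into a small stream-based helper (illustrative name and plain std::vector storage, not the actual Sundance API):

```cpp
#include <cassert>
#include <istream>
#include <sstream>
#include <stdexcept>
#include <vector>

// Read one nodal coefficient field: first the node count, which must
// match the mesh, then one value per node (the TEST_FOR_EXCEPTION and
// loop pattern repeated for each of E0.dat ... E6.dat above).
std::vector<double> readNodalField(std::istream& is, int expectedNodes) {
    int nNodes = 0;
    is >> nNodes;
    if (nNodes != expectedNodes)
        throw std::runtime_error("node count mismatch in data file");
    std::vector<double> vals(nNodes);
    for (int i = 0; i < nNodes; ++i) is >> vals[i];
    return vals;
}
```

In the Sundance code the values are additionally routed through dofMap->getDOFsForCell before being stored. With the coefficient data and the bases in place, the weak form can now be written.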
The weak form, with a quadrature specification, is written in Sundance as:

QuadratureFamily quad2 = new GaussianQuadrature(2);
Expr eqn = Integral(interior, (dx*v)*(E*(dx*u)), quad2)
         + Integral(interior, v*F, quad2);

6.2.2.8 Writing the Essential Boundary Conditions

The weak form above contains the physics in the body of the domain plus the Neumann BCs on the edges. We still need to apply the Dirichlet boundary conditions at the end points, which we do with an EssentialBC object:

Expr bc = EssentialBC(rightPoint, v*u, quad2)
        + EssentialBC(leftPoint, v*u, quad2);

6.2.2.9 Creating the Linear Problem Object

A LinearProblem object contains everything needed to assemble a discrete approximation to our PDE: a mesh, a weak form, boundary conditions, a specification of test and unknown functions, and a specification of the low-level matrix and vector representation to be used. We will use Epetra as our linear algebra representation, which is specified by selecting the corresponding VectorType subtype:

VectorType<double> vecType = new EpetraVectorType();

Now the linear problem object is defined as:

LinearProblem prob(mesh, eqn, bc, v, u, vecType);

6.2.2.10 Specifying the Solver and Solving the Problem

The BICGSTAB method with ILU preconditioning is used to solve the problem: level-one preconditioning, along with a tolerance of 10^-14 and a maximum of 1000 iterations. The whole setup is configured using the Trilinos ParameterList object to read the specifications from an XML file.
ParameterXMLFileReader reader("bicgstab.xml");
ParameterList solverParams = reader.getParameters();
LinearSolver<double> linSolver =
  LinearSolverBuilder::createSolver(solverParams);

Then the problem is simply solved using the following command:

Expr soln = prob.solve(linSolver);

6.2.3 Complete Code for the Bar Problem

#include "Sundance.hpp"

CELL_PREDICATE(LeftPointTest, {return fabs(x[0]) < 1.0e-10;});
CELL_PREDICATE(RightPointTest, {return fabs(x[0]-1.0) < 1.0e-10;});

int main(int argc, char** argv)
{
  try
  {
    Sundance::init(&argc, &argv);
    VectorType<double> vecType = new EpetraVectorType();

    /* Create a mesh. It will be of type BasicSimplicialMesh, and will
     * be built using a PartitionedLineMesher. */
    MeshType meshType = new BasicSimplicialMeshType();
    MeshSource mesher = new PartitionedLineMesher(0.0, 1.0, 10, meshType);
    Mesh mesh = mesher.getMesh();

    /* Create a cell filter that will identify the maximal cells
     * in the interior of the domain */
    CellFilter interior = new MaximalCellFilter();
    CellFilter points = new DimensionalCellFilter(0);
    CellFilter leftPoint = points.subset(new LeftPointTest());
    CellFilter rightPoint = points.subset(new RightPointTest());

    /* Create the spectral basis */
    int ndim = 1;
    int order = 6;
    SpectralBasis sbasis = new HermiteSpectralBasis(ndim, order);

    /* create an empty (zero-valued) discrete function */
    DiscreteSpace discSpace(mesh, new Lagrange(1), vecType);
    Expr E0 = new DiscreteFunction(discSpace, 0.0, "E0");
    Vector<double> vec = DiscreteFunction::discFunc(E0)->getVector();
    const RefCountPtr<DOFMapBase>& dofMap =
      DiscreteFunction::discFunc(E0)->map();
    ifstream is("E0.dat");
    int nNodes;
    is >> nNodes;
    TEST_FOR_EXCEPTION(mesh.numCells(0) != nNodes, RuntimeError,
      "number of nodes in data file fieldData.dat is " << nNodes
      << " but number of nodes in mesh is " << mesh.numCells(0));
    Array<int> dofs(1);
    double fVal;
    for (int i=0; i<nNodes; i++)
    {
      dofMap->getDOFsForCell(0, i, 0, dofs);
      int dof = dofs[0];
      is >> fVal;
      vec.setElement(dof, fVal);
    }
    DiscreteFunction::discFunc(E0)->setVector(vec);

    /* create an empty (zero-valued) discrete function */
    Expr E1 = new DiscreteFunction(discSpace, 0.0, "E1");
    Vector<double> vec1 = DiscreteFunction::discFunc(E1)->getVector();
    const RefCountPtr<DOFMapBase>& dofMap1 =
      DiscreteFunction::discFunc(E1)->map();
    ifstream is1("E1.dat");
    is1 >> nNodes;
    TEST_FOR_EXCEPTION(mesh.numCells(0) != nNodes, RuntimeError,
      "number of nodes in data file fieldData.dat is " << nNodes
      << " but number of nodes in mesh is " << mesh.numCells(0));
    for (int i=0; i<nNodes; i++)
    {
      dofMap1->getDOFsForCell(0, i, 0, dofs);
      int dof = dofs[0];
      is1 >> fVal;
      vec1.setElement(dof, fVal);
    }
    DiscreteFunction::discFunc(E1)->setVector(vec1);

    /* create an empty (zero-valued) discrete function */
    Expr E2 = new DiscreteFunction(discSpace, 0.0, "E2");
    Vector<double> vec2 = DiscreteFunction::discFunc(E2)->getVector();
    const RefCountPtr<DOFMapBase>& dofMap2 =
      DiscreteFunction::discFunc(E2)->map();
    ifstream is2("E2.dat");
    is2 >> nNodes;
    TEST_FOR_EXCEPTION(mesh.numCells(0) != nNodes, RuntimeError,
      "number of nodes in data file fieldData.dat is " << nNodes
      << " but number of nodes in mesh is " << mesh.numCells(0));
    for (int i=0; i<nNodes; i++)
    {
      dofMap2->getDOFsForCell(0, i, 0, dofs);
      int dof = dofs[0];
      is2 >> fVal;
      vec2.setElement(dof, fVal);
    }
    DiscreteFunction::discFunc(E2)->setVector(vec2);

    /* create an empty (zero-valued) discrete function */
    Expr E3 = new DiscreteFunction(discSpace, 0.0, "E3");
    Vector<double> vec3 = DiscreteFunction::discFunc(E3)->getVector();
    const RefCountPtr<DOFMapBase>& dofMap3 =
      DiscreteFunction::discFunc(E3)->map();
    ifstream is3("E3.dat");
    is3 >> nNodes;
    TEST_FOR_EXCEPTION(mesh.numCells(0) != nNodes, RuntimeError,
      "number of nodes in data file fieldData.dat is " << nNodes
      << " but number of nodes in mesh is " << mesh.numCells(0));
    for (int i=0; i<nNodes; i++)
    {
      dofMap3->getDOFsForCell(0, i, 0, dofs);
      int dof = dofs[0];
      is3 >> fVal;
      vec3.setElement(dof, fVal);
    }
    DiscreteFunction::discFunc(E3)->setVector(vec3);

    /* create an empty (zero-valued) discrete function */
    Expr E4 = new DiscreteFunction(discSpace, 0.0, "E4");
    Vector<double> vec4 = DiscreteFunction::discFunc(E4)->getVector();
    const RefCountPtr<DOFMapBase>& dofMap4 =
      DiscreteFunction::discFunc(E4)->map();
    ifstream is4("E4.dat");
    is4 >> nNodes;
    TEST_FOR_EXCEPTION(mesh.numCells(0) != nNodes, RuntimeError,
      "number of nodes in data file fieldData.dat is " << nNodes
      << " but number of nodes in mesh is " << mesh.numCells(0));
    for (int i=0; i<nNodes; i++)
    {
      dofMap4->getDOFsForCell(0, i, 0, dofs);
      int dof = dofs[0];
      is4 >> fVal;
      vec4.setElement(dof, fVal);
    }
    DiscreteFunction::discFunc(E4)->setVector(vec4);

    /* create an empty (zero-valued) discrete function */
    Expr E5 = new DiscreteFunction(discSpace, 0.0, "E5");
    Vector<double> vec5 = DiscreteFunction::discFunc(E5)->getVector();
    const RefCountPtr<DOFMapBase>& dofMap5 =
      DiscreteFunction::discFunc(E5)->map();
    ifstream is5("E5.dat");
    is5 >> nNodes;
    TEST_FOR_EXCEPTION(mesh.numCells(0) != nNodes, RuntimeError,
      "number of nodes in data file fieldData.dat is " << nNodes
      << " but number of nodes in mesh is " << mesh.numCells(0));
    for (int i=0; i<nNodes; i++)
    {
      dofMap5->getDOFsForCell(0, i, 0, dofs);
      int dof = dofs[0];
      is5 >> fVal;
      vec5.setElement(dof, fVal);
    }
    DiscreteFunction::discFunc(E5)->setVector(vec5);

    /* create an empty (zero-valued) discrete function */
    Expr E6 = new DiscreteFunction(discSpace, 0.0, "E6");
    Vector<double> vec6 = DiscreteFunction::discFunc(E6)->getVector();
    const RefCountPtr<DOFMapBase>& dofMap6 =
      DiscreteFunction::discFunc(E6)->map();
    ifstream is6("E6.dat");
    is6 >> nNodes;
    TEST_FOR_EXCEPTION(mesh.numCells(0) != nNodes, RuntimeError,
      "number of nodes in data file fieldData.dat is " << nNodes
      << " but number of nodes in mesh is " << mesh.numCells(0));
    for (int i=0; i<nNodes; i++)
    {
      dofMap6->getDOFsForCell(0, i, 0, dofs);
      int dof = dofs[0];
      is6 >> fVal;
      vec6.setElement(dof, fVal);
    }
    DiscreteFunction::discFunc(E6)->setVector(vec6);

    /* Create spectral unknown and test functions */
    Expr u = new UnknownFunction(new Lagrange(1), sbasis, "u");
    Expr v = new TestFunction(new Lagrange(1), sbasis, "v");

    /* Create differential operator and coordinate functions */
    Expr x = new CoordExpr(0);
    Expr dx = new Derivative(0);

    /* We need a quadrature rule for doing the integrations */
    QuadratureFamily quad2 = new GaussianQuadrature(2);

    /* array of coefficients for the spectral expression */
    Array<Expr> Coeff(sbasis.nterms());
    Coeff[0] = 1.0;
    Coeff[1] = 0.0;
    Coeff[2] = 0.0;
    Coeff[3] = 0.0;
    Coeff[4] = 0.0;
    Coeff[5] = 0.0;
    Coeff[6] = 0.0;
    Expr F = new SpectralExpr(sbasis, Coeff);
    Expr A = 1.0;
    Expr E = new SpectralExpr(sbasis, List(E0, E1, E2, E3, E4, E5, E6));

    /* Define the weak form */
    Expr eqn = Integral(interior, (dx*v)*(E*(dx*u)), quad2)
             + Integral(interior, v*F, quad2);

    /* Define the Dirichlet BC */
    Expr bc = EssentialBC(rightPoint, v*u, quad2)
            + EssentialBC(leftPoint, v*u, quad2);

    /* We can now set up the linear problem!
    */
    LinearProblem prob(mesh, eqn, bc, v, u, vecType);

    /* Read the parameters for the linear solver from an XML file */
    ParameterXMLFileReader reader("bicgstab.xml");
    ParameterList solverParams = reader.getParameters();
    LinearSolver<double> linSolver =
      LinearSolverBuilder::createSolver(solverParams);

    /* solve the problem */
    Expr soln = prob.solve(linSolver);

    FieldWriter wr = new MatlabWriter("Spectralbar");
    wr.addMesh(mesh);
    wr.addField("u0", new ExprFieldWrapper(soln[0]));
    wr.addField("u1", new ExprFieldWrapper(soln[1]));
    wr.addField("u2", new ExprFieldWrapper(soln[2]));
    wr.addField("u3", new ExprFieldWrapper(soln[3]));
    wr.addField("u4", new ExprFieldWrapper(soln[4]));
    wr.addField("u5", new ExprFieldWrapper(soln[5]));
    wr.write();
  }
  catch(exception& e)
  {
    Sundance::handleException(e);
  }
  Sundance::finalize();
}

Chapter 7

Conclusion

The combination of Polynomial Chaos with the Ensemble Kalman Filter renders an efficient data assimilation methodology that surpasses standard Kalman Filtering techniques while maintaining a relatively low computational cost. Although the proposed method employs traditional Kalman Filter updating schemes, it preserves all the error statistics, and hence allows the computation of the probability density function of the uncertain parameters and variables at all time steps. This is done by simply simulating the PC representations of these parameters, a simulation whose computational cost is negligible.

Applying the proposed method to calibrating reservoir simulation models is an innovative approach that allows approximating the higher-order statistics of the production forecast. The use of PC to represent the uncertainty in the reservoir model and the measurements helps convey the actual medium in a more realistic way. The proposed method is tested on the one-dimensional Buckley-Leverett and the two-dimensional Five Spot two-phase immiscible flow problems using synthesized measurement data.
A novel approach for representing the reservoir medium properties as a rational function is presented, and the obtained results depict the validity and effectiveness of the proposed method. The efficiency of the method is also conveyed through applying it to optimizing the fluid front dynamics in porous media by means of injection rate control. The obtained results show that representing the controls as part of the Kalman Filter state vector is an effective approach for controlling the flow and maintaining a desired target.

The data assimilation technique is also applied to health monitoring of highly non-linear structures. Together with the non-parametric representation of the nonlinearities, the approach constitutes an effective system identification technique that accurately detects changes in the system's behavior. The Polynomial Chaos representation of the non-parametric model for the nonlinearities is a robust, innovative approach that permits damage identification and tracking of the dynamical state beyond the damage point. Using Polynomial Chaos, the uncertainty associated with the assumed non-parametric model is inherently present and thus represents the actual nonlinearity in a more accurate way.

References

[Aan05] S.I. Aanonsen. Efficient history matching using a multiscale technique. SPE Reservoir Simulation Symposium, (SPE 92758), 2005.

[AML05] M. Alvarado, D. McVay, and W. Lee. Quantification of uncertainty by combining forecasting with history matching. Petroleum Science and Technology, 23:445–462, 2005.

[Ash88] H. Asheim. Maximization of water sweep efficiency by controlling injection and production rates. SPE, (18365), 1988.

[BBM87] J. Bear, C. Brestar, and P.C. Menier. Effective and relative permeabilities of anisotropic porous media. Transport in Porous Media, 2:301–316, 1987.

[BC64] R.H. Brooks and T. Corey. Hydraulic properties of porous media. Technical Report 3, Civ. Eng. Dept., Colorado State Univ., Fort Collins, Hydrol.
paper 1964. [Bea72] J. Bear. Dynamics of Fluids in Porous Media. Dover Publications, New York, 1972. [BLE98] G. Burgers, P.J. Van Leeuwen, and G. Evensen. Analysis scheme in the ensemble kalman filter. Monthly Weather Review, 126(6):1719–1724, 1998. [Bro83] R.G. Brown. Introduction to Random Signal Analysis and Kalman Filter- ing. John Wiley and Sons Inc., USA, 1983. [Ca74] W.H. Chen and al. A new algorithm for automatic history matching. SPE Journal, 14(6):593–608, 1974. [CC91] C. Chui and G. Chen. Kalman Filtering with real time applications. Springer-Verlag, USA, 2nd edition, 1991. [CDL75] G. Chavent, M. Dupuy, and P. Lemonnier. History matching by use of optimal control theory. SPE Journal, 15(1):74–86, 1975. 126 [CJ86] G. Chavent and J. Jaffre. Mathematical Models and Finite Elements for Reservoir Simulation. Studies in mathematics and its applications. Elsevier Science Publishers B.V ., 1986. [CM47] R. Cameron and W. Martin. The orthogonal development of nonlinear functionals in series of fourier-hermite functionals. Ann. MAth, 48:385– 392, 1947. [DdFG01] A Doucet, N de Freitas, and N Gordon. Sequential Monte Carlo Methods in Practice,. Springer, 2001. [DGRH] A. Doostan, R. Ghanem, and J. Red-Horse. Stochastic model reductions for chaos represenations. CMAME. submitted. [DP89] M. Delshad and G.A. Pope. Comparison of the three-phase oil relative permeability models. Transport in Porous Media, 4(1):59–83, 1989. [Dul91] F.A.L. Dullien. Porous media: Fluid Transport and Pore Structure. Aca- demic Press, San Diego, California, 1991. [Eve94] G. Evensen. Sequential data assimilation with a nonlinear quasi- geostrophic model using monte carlo methods to forecast error statistics. Journal of Geophysical Research, 99:10143–10162, 1994. [Eve03] G. Evensen. The ensemble kalman filter: Theoretical formulation and prac- tical implementation. Ocean Dynamics, 53:343–367, 2003. [Eve05] G. Evensen. The combined parameter and state estimation problem. 
Com- putational Geosciences, 2005. submitted. [FBL05] G. Franco, R. Betti, and S. Lus. Identification of structural systems using an evolutionary strategy. Journal of Engineering Mechanics, 130:1125–1139, 2005. [FIIN05] T. Furukawa, M. Ito, K. Izawa, and M. Noori. System identification of base isolated building using seismic response data. Journal of Engineering Mechanics, 131:268–275, 2005. [FM84] F.J. Fayers and J.P. Mathews. Evaluation of normalized stone’s methods for estimating three phase relative permeabilities. SPE Journal, 24:224–232, 1984. [FR86] Z. Fathi and W.F. Ramirez. Use of optimal control theory for comput- ing optimal injection policies for enhanced oil recovery. Automatica, 22(1):333, 1986. 127 [FR87] Z. Fathi and W.F. Ramirez. Optimization of an enhanced oil recovery pro- cess with boundary controls: a large scale nonlinear maximization. Auto- matica, 23(3):301, 1987. [GF06] R. Ghanem and G. Ferro. Health monitoring for strongly non-linear sys- tems using the ensemble kalman filter. Structural Control and Health Mon- itoring, 13:245–259, 2006. [Gha99] R. Ghanem. The nonlinear gaussian spectrum of log-normal stochastic processes and variables. Journal of Appl Mech-T ASME, 66(4):964–973, 1999. [GMNU03] A. Grimstad, T. Mannseth, G. Naevdal, and H. Urkeda. Adaptive multi- scale permeabilty estimmation. Computational Geosciences, 7:1–25, 2003. [GO05] Y . Gu and D.S. Oliver. History matching of the punq-s3 reservoir model using the ensemble kalman filter. SPE Journal, 10(2):217–224, 2005. [GR05] G. Gao and A. Reynolds. Quantifying the uncertainty for the punq-s3 prob- lem in a bayesian setting with rml and enkf. SPE reservoir simulation symposium, (SPE 93324), 2005. [GS95] R. Ghanem and M. Shinozuka. Structural systems identification i: Theory. Journal of Engineering Mechanics, 121:255–264, 1995. [GS03] R. Ghanem and P. Spanos. Stochastic Finite Elements: A Spectral Approach. Dover Publications, Inc., revised ed. edition, 2003. 
[GSD07] Roger Ghanem, George Saad, and Alireza Doostan. Efficient solution of stochastic systems: application to the embankment dam problem. Struc- tural Safety, 29:238–251, 2007. [Hat98] S. Hatoum. A Computational Model for the Analysis of Multiphase flow in Random Porous Media. PhD thesis, University of New York at Buffalo, 1998. [ICO04] ICOSSAR05. Benchmark study on reliability estimation in higher dimen- sions of structural systems. 2004. Communication 1. [JU97] Simon J. Julier and Jeffrey K. Uhlmann. A new extension of the kalman filter to nonlinear systems. In AeroSense: The 11th International Sympo- sium on Aerospace/Defense Sensing, Simulation and Controls, Multi Sen- sor Fusion, Tracking and Resource Management II. SPIE, 1997. [Kal60] R.E. Kalman. A new approach to linear filtering and perdiction problems. ASME Journal of Basic Engrg., 82D:35–45, 1960. 128 [KGV83] S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi. Optimization by simulated annealing. Science, 220(4598):671–680, 1983. [Kiv03] G.A. Kivman. Sequential parameter estimation for stochastic systems. Nonlinear Processes in Geophysics, 10:253–259, 2003. [KP87] J.B. Kool and J.C. Parker. Development and evaluation of closed-form expressions for hysteretic soil hydraulic properties. Water Resource Research, 23(1):105–114, 1987. [LBL02] H. Lus, R. Betti, and R. Longman. Obtaining refined-first order predictive models of linear structural systems. Earthquake Engineering and Struc- tural Dynamics, 31:1413–1440, 2002. [LGN89] L. Luckner, M.T. Van Genuchten, and D.R. Nielsen. A consistent set of parametric models for the two phase flow of the immiscible fluids in the subsurface. Water Resources Research, 25(10):2187–2193, 1989. [LO05] N. Liu and D.S. Oliver. Critical evaluation of the ensemble kalman filter on history matching of geologic facies. SPE Reservoir Evaluation and Engineering, 8(6):470–477, 2005. [Loe77] M. Loeve. Probability Theory. Springer, New York, 4th edition, 1977. [Lon04] K. Long. 
Sundance 2.0 tutorial. Technical report, Sandia National Lab, 2004. [LP87] R.J. Lenhard and J.C. Parker. A model for hysteretic constitutive relations governing multiphase flow .2. permeability-saturation relations. Water Resources Research, 23(12):2197–2206, 1987. [Ma93] E.M. Makhlouf and al. A general history matching algorithm for three phase, three dimensional petroleum reservoir. SPE Advanced Technology Series, 1(2):83–91, 1993. [MCC + 04] S.F. Masri, J.P. Caffrey, T.K. Caughey, A.W. Smyth, and A.G. Chassiakos. Identification of the state equation in complex non-linear systems. Inter- national Journal of Non-Linear Mechanics, 39:1111–1127, 2004. [MG87] A. Montoglou and L.W. Gelhar. Effective hydraulic conductivities of tran- sient unsaturated flow in stratified soils. Water Resources Research, 23:57– 67, 1987. [MK04] H. Matthies and A. Keese. Galerkin methods for linear and nonlinear ellip- tic stochastic partial differential equations. Comput. Methods Appl. Mech. Engng., 2004. In Press. 129 [Mus49] M. Muskat. Physical Principles of Oil Production. Mc Graw Hill, New York, 1949. [NBJ06] Geir Naevdal, D. Roald Brouwer, and Jan-Dirk Jansen. Waterflooding using closed loop control. Computational Geosciences, 10:37–60, 2006. [NJA V03] G. Naevdal, L.M. Johnsen, S.I. Aanonsen, and E. Vefring. Reservoir mon- itoring and continuous model updating using the ensemble kalman filter. SPE Annual Technical Conference and Exhibition, (SPE 84372), 2003. [NMV02] G. Naevdal, T. Mannseth, and E. Vefring. Near well reservoir monitoring through ensemble kalman filter. Proceeding of SPE/DOE Improved oil recovery Symposium, (SPE 84372), 2002. [Ode79] J.T. Oden. Applied Functional Analysis. Prentice Hall, New Jersey, 1979. [PL87] J.C. Parker and R.J. Lenhard. A model for hysteretic constitutive rela- tions governing multiphase flow. 1. saturation-pressure relations. Water Resources Research, 23(12):2187–2196, 1987. [RH98] F. Roggero and L. Hu. 
Gradual deformation of continuous geostatical mod- els for history matching. SPE Annual Technical Conference and Exhibition, (SPE 49004), 1998. [RL00] A. Robinson and P. Lermusiaux. Overview of data assimilation. Harvard reports in physical/interdisciplinary ocean science, 2000. [Sch01a] Lothar M Schmitt. Theory of Genetic Algorithms, 259:1–61, 2001. [Sch01b] G. Schueller. Computational stochastic mechanics - recent advances. Com- puters and Structures, 79(22-25):2225–2234, 2001. [Sto70] H.L. Stone. Probability model for estimationg three phase relative perme- ability. Journal of Petroleum Technology, 20:214–218, 1970. [SY00] Bagus Sudaryanto and Yannis Yortsos. Optimization of fluid front dynam- ics in porous media using rate control. 1. equal mobility fluids. Physics of Fluids, 12(7):1656–1670, 2000. [WC05] X. Wen and W. Chen. Real time reservoir model updationg using the ensemble kalman filter. SPE reservoir simulation symposium, (SPE 92991), 2005. [Wie38] N. Wiener. The homogeneous chaos. Am. J. Math., 60:897–936, 1938. 130 [ZFYM02] H. Zhang, G. Foliente, Y . Yang, and F. Ma. Parameter identification of elas- tic structures under dynamic loads. Earthquake Engineering and Structural Dynamics, 31:1113–1130, 2002. 
Appendices

Appendix A

Complete SUNDANCE Reservoir Characterization Codes

A.1 Two-Dimensional Water Flooding Problem - Scenario 1

#include "Sundance.hpp"

int main(int argc, char** argv)
{
  try
  {
    MPISession::init(&argc, &argv);

    /* We will do our linear algebra using Epetra */
    VectorType<double> vecType = new EpetraVectorType();

    MeshType meshType = new BasicSimplicialMeshType();
    MeshSource mesher = new ExodusNetCDFMeshReader("square_60x60.ncdf", meshType);
    Mesh mesh = mesher.getMesh();

    /* Create a cell filter that will identify the maximal cells
       in the interior of the domain */
    CellFilter interior = new MaximalCellFilter();
    CellFilter edges = new DimensionalCellFilter(1);
    CellFilter Source = edges.labeledSubset(1);
    CellFilter Sink = edges.labeledSubset(2);

    /* Create the Spectral Basis */
    int ndim = 3;
    int order = 2;
    int nterm = 5; /* Hermite basis truncated after 5 terms */
    SpectralBasis sbasis = new HermiteSpectralBasis(ndim, order, nterm);

    /* Create unknown and test functions, discretized using second-order
       Lagrange interpolants */
    BasisFamily L1 = new Lagrange(2);
    Expr Se = new UnknownFunction(L1, sbasis, "Se");
    Expr Ns = new TestFunction(L1, sbasis, "Ns");
    Expr Pw = new UnknownFunction(L1, sbasis, "Pw");
    Expr Nn = new TestFunction(L1, sbasis, "Nn");

    /* Create differential operators and coordinate functions */
    Expr x = new CoordExpr(0);
    Expr y = new CoordExpr(1);
    Expr dx = new Derivative(0);
    Expr dy = new Derivative(1);
    Expr grad = List(dx, dy);

    /* We need a quadrature rule for doing the integrations */
    QuadratureFamily quad = new GaussianQuadrature(4);

    DiscreteSpace d1(mesh, L1, vecType);
    DiscreteSpace discSpace(mesh, List(L1, L1), sbasis, vecType);

    /* Expectations of products of the chaos basis polynomials */
    cijk Exp(ndim, order);

    /* Initial guess for the dynamic states */
    Expr u0 = new DiscreteFunction(discSpace, 0.0, "u0");

    /* Initial condition for the saturation */
    Expr Se00 = new DiscreteFunction(d1, 0.0, "Se00");
    Expr Se01 = new DiscreteFunction(d1, 0.0, "Se01");
    Expr Se02 = new DiscreteFunction(d1, 0.0, "Se02");
    Expr Se03 = new DiscreteFunction(d1, 0.0, "Se03");
    Expr Se04 = new DiscreteFunction(d1, 0.0, "Se04");
    Expr Se0 = new SpectralExpr(sbasis, List(Se00, Se01, Se02, Se03, Se04));

    /* Medium porosity */
    Expr P0 = new DiscreteFunction(d1, 0.20, "P0");
    Expr P1 = new DiscreteFunction(d1, 0.0, "P1");
    Expr P2 = new DiscreteFunction(d1, 0.0, "P2");
    Expr P3 = new DiscreteFunction(d1, 0.0, "P3");
    Expr P4 = new DiscreteFunction(d1, 0.0, "P4");

    /* Medium permeability */
    Expr K0 = new DiscreteFunction(d1, 170.0, "K0");
    Expr K1 = new DiscreteFunction(d1, 0.0, "K1");
    Expr K2 = new DiscreteFunction(d1, 0.0, "K2");
    Expr K3 = new DiscreteFunction(d1, 0.0, "K3");
    Expr K4 = new DiscreteFunction(d1, 0.0, "K4");

    Vector<double> K1Vec = DiscreteFunction::discFunc(K1)->getVector();
    Vector<double> P1Vec = DiscreteFunction::discFunc(P1)->getVector();
    Vector<double> K4Vec = DiscreteFunction::discFunc(K4)->getVector();
    Vector<double> P4Vec = DiscreteFunction::discFunc(P4)->getVector();

    int nNodes1 = mesh.numCells(0); /* number of nodes in mesh */
    int nElems = mesh.numCells(1);  /* number of elements */
    int vecSize = nElems + nNodes1;

    /* cross-sectional area */
    double As = 1.0;

    /* initial guess for the scatter of the unknown medium properties */
    double x1, x2, x3, x4;
    int seed = 1000;
    for (int MC = 0; MC < vecSize; MC++)
    {
      seed += 1;
      StochasticLib1 sto(seed);
      if ((MC % 1) == 0)
      {
        x1 = sto.Normal(0, 10);
        x2 = sto.Normal(0, 0.01);
        x3 = sto.Normal(0, 5);
        x4 = sto.Normal(0, 0.005);
        K1Vec.setElement(MC, x1);
        P1Vec.setElement(MC, x2);
        K4Vec.setElement(MC, x3);
        P4Vec.setElement(MC, x4);
      }
    }
    DiscreteFunction::discFunc(K1)->setVector(K1Vec);
    DiscreteFunction::discFunc(P1)->setVector(P1Vec);
    DiscreteFunction::discFunc(K4)->setVector(K4Vec);
    DiscreteFunction::discFunc(P4)->setVector(P4Vec);

    Expr P = new SpectralExpr(sbasis, List(P0, P1, P2, P3, P4));
    Expr K = new SpectralExpr(sbasis, List(K0, K1, K2, K3, K4));

    double MuO = 1.735e06; /* Oil viscosity */
    double MuW = 1.735e05; /* Water viscosity */
    double Pe = 10000.0;
    double INJ = 0.0323;   /* Water injection rate */

    /* Set up the time stepping with timestep = 0.1 */
    double deltaT = 0.10;
    double Sm = 0.35; /* residual saturation */

    /* The different powers of the chaos expansion of the saturation */
    Expr Squad = pow(Se, 4);
    Expr Scube = pow(Se, 3);
    Expr Ssquare = pow(Se, 2);

    /* Computing the relative permeabilities and capillary pressure */
    Expr Krw = 0.864*Squad + 0.201*Scube - 0.065*Ssquare;
    Krw.setcoeff(3, 0.05*Krw.getcoeff(1)); /* modeling error */
    Expr Kro = 1.595*Squad - 4.784*Scube + 5.994*Ssquare - 3.805*Se + 1.0;
    Kro.setcoeff(3, 0.05*Kro.getcoeff(1)); /* modeling error */
    Expr Pc = Pe*(-4.286*Scube + 8.974*Ssquare - 6.993*Se + 3.152);
    Pc.setcoeff(3, 0.05*Pc.getcoeff(1)); /* modeling error */

    /* Weak form of the coupled PDE */
    Expr Weqn = Integral(interior, Ns*(As*(1.0-Sm)*(P*(Se - Se0))), quad)
              + Integral(interior, (dx*Ns)*(As/MuW*(Krw*K*(dx*Pw))*deltaT), quad)
              - Integral(Source, Ns*INJ*deltaT, quad);
    Expr Oeqn = Integral(interior, -Nn*(As*(1.0-Sm)*(P*(Se - Se0))), quad)
              + Integral(interior, (dx*Nn)*(As/MuO*(Kro*K*(dx*(Pw + Pc)))*deltaT), quad);
    Expr eqn = Weqn + Oeqn;

    /* Boundary conditions */
    Expr bc = EssentialBC(leftPoint, Nn*(Pw - 10000), quad);

    /* Setting up the nonlinear problem */
    NonlinearOperator<double> F = new NonlinearProblem(mesh, eqn, bc, List(Ns, Nn), List(Se, Pw), u0, vecType);
    ParameterXMLFileReader reader("nox.xml");
    ParameterList noxParams = reader.getParameters();
    NOXSolver solver(noxParams, F);

    int nN = nNodes1;

    /* array of the global IDs of the measurement locations */
    int GID[22] = {1, 4, 5, 8, 11, 12, 14, 15, 18, 19, 22, 25, 27, 30, 31,
                   32, 34, 37, 40, 41, 42, 47};
    int LID[nN];

    // get the local ID of each node given its GID
    const RefCountPtr<DOFMapBase>& dofMap = DiscreteFunction::discFunc(Se00)->map();
    for (int i = 0; i < nN; i++)
    {
      int ID = mesh.mapGIDToLID(0, i);
      Array<int> dofIndices;
      dofMap->getDOFsForCell(0, ID, 0, dofIndices);
      LID[i] = dofIndices[0];
    }

    /* Open the data file */
    FILE* mFile = fopen("../true/SMeasurement.dat", "r");

    int nSteps = 20001;
    for (int ns = 0; ns < nSteps; ns++)
    {
      solver.solve();

      /* Read the solution into vectors so that it can be transferred to
         the Newmat library for the assimilation step */
      Expr Pww0 = new DiscreteFunction(d1, 0.0, "Pww0");
      Expr Pww1 = new DiscreteFunction(d1, 0.0, "Pww1");
      Expr Pww2 = new DiscreteFunction(d1, 0.0, "Pww2");
      Expr Pww3 = new DiscreteFunction(d1, 0.0, "Pww3");
      Expr Pww4 = new DiscreteFunction(d1, 0.0, "Pww4");

      Vector<double> SolnVec = DiscreteFunction::discFunc(u0)->getVector();
      Vector<double> Se00V = DiscreteFunction::discFunc(Se00)->getVector();
      Vector<double> Se01V = DiscreteFunction::discFunc(Se01)->getVector();
      Vector<double> Se02V = DiscreteFunction::discFunc(Se02)->getVector();
      Vector<double> Se03V = DiscreteFunction::discFunc(Se03)->getVector();
      Vector<double> Se04V = DiscreteFunction::discFunc(Se04)->getVector();
      Vector<double> Pww0V = DiscreteFunction::discFunc(Pww0)->getVector();
      Vector<double> Pww1V = DiscreteFunction::discFunc(Pww1)->getVector();
      Vector<double> Pww2V = DiscreteFunction::discFunc(Pww2)->getVector();
      Vector<double> Pww3V = DiscreteFunction::discFunc(Pww3)->getVector();
      Vector<double> Pww4V = DiscreteFunction::discFunc(Pww4)->getVector();
      Vector<double> K0V = DiscreteFunction::discFunc(K0)->getVector();
      Vector<double> K1V = DiscreteFunction::discFunc(K1)->getVector();
      Vector<double> K2V = DiscreteFunction::discFunc(K2)->getVector();
      Vector<double> K3V = DiscreteFunction::discFunc(K3)->getVector();
      Vector<double> K4V = DiscreteFunction::discFunc(K4)->getVector();
      Vector<double> P0V = DiscreteFunction::discFunc(P0)->getVector();
      Vector<double> P1V = DiscreteFunction::discFunc(P1)->getVector();
      Vector<double> P2V = DiscreteFunction::discFunc(P2)->getVector();
      Vector<double> P3V = DiscreteFunction::discFunc(P3)->getVector();
      Vector<double> P4V = DiscreteFunction::discFunc(P4)->getVector();

      for (int n = 0; n < vecSize; n++)
      {
        Se00V.setElement(n, SolnVec.getElement(10*n));
        Se01V.setElement(n, SolnVec.getElement(10*n+1));
        Se02V.setElement(n, SolnVec.getElement(10*n+2));
        Se03V.setElement(n, SolnVec.getElement(10*n+3));
        Se04V.setElement(n, SolnVec.getElement(10*n+4));
        Pww0V.setElement(n, SolnVec.getElement(10*n+5));
        Pww1V.setElement(n, SolnVec.getElement(10*n+6));
        Pww2V.setElement(n, SolnVec.getElement(10*n+7));
        Pww3V.setElement(n, SolnVec.getElement(10*n+8));
        Pww4V.setElement(n, SolnVec.getElement(10*n+9));
      }

      /* Measurements exist every 10 timesteps */
      if (((ns % 10) == 0) && (ns > 0))
      {
        /* the coefficients of the state vector containing the dynamic states,
           Se and Pw, and the model parameters, K and P */
        ColumnVector A0(4*nN);
        ColumnVector A1(4*nN);
        ColumnVector A2(4*nN);
        ColumnVector A3(4*nN);
        ColumnVector A4(4*nN);

        /* populating the state vectors with the Sundance solution */
        for (int i = 1; i <= 4*nN; i++)
        {
          if (i <= nN)
          {
            A0(i) = 0.64*Se00V.getElement(LID[i-1]) + 0.16;
            A1(i) = 0.64*Se01V.getElement(LID[i-1]);
            A2(i) = 0.64*Se02V.getElement(LID[i-1]);
            A3(i) = 0.64*Se03V.getElement(LID[i-1]);
            A4(i) = 0.64*Se04V.getElement(LID[i-1]);
          }
          else if ((i > nN) && (i <= 2*nN))
          {
            A0(i) = Pww0V.getElement(LID[i-(nN+1)]);
            A1(i) = Pww1V.getElement(LID[i-(nN+1)]);
            A2(i) = Pww2V.getElement(LID[i-(nN+1)]);
            A3(i) = Pww3V.getElement(LID[i-(nN+1)]);
            A4(i) = Pww4V.getElement(LID[i-(nN+1)]);
          }
          else if ((i > 2*nN) && (i <= 3*nN))
          {
            A0(i) = K0V.getElement(LID[i-(2*nN+1)]);
            A1(i) = K1V.getElement(LID[i-(2*nN+1)]);
            A2(i) = K2V.getElement(LID[i-(2*nN+1)]);
            A3(i) = K3V.getElement(LID[i-(2*nN+1)]);
            A4(i) = K4V.getElement(LID[i-(2*nN+1)]);
          }
          else if ((i > 3*nN) && (i <= 4*nN))
          {
            A0(i) = P0V.getElement(LID[i-(3*nN+1)]);
            A1(i) = P1V.getElement(LID[i-(3*nN+1)]);
            A2(i) = P2V.getElement(LID[i-(3*nN+1)]);
            A3(i) = P3V.getElement(LID[i-(3*nN+1)]);
            A4(i) = P4V.getElement(LID[i-(3*nN+1)]);
          }
        }

        // State error covariance matrix
        Matrix P1m(4*nN, 4*nN);
        P1m = 0;
        P1m = A1*A1.t()*Exp.expectation(0,1,1) + A2*A2.t()*Exp.expectation(0,2,2)
            + A3*A3.t()*Exp.expectation(0,3,3) + A4*A4.t()*Exp.expectation(0,4,4);

        // Measurement matrix
        const int mp = 22;
        Matrix Z(mp, nterm);
        Z = 0.0;
        ColumnVector mt(mp);
        for (int i = 1; i <= mp; i++)
        {
          fscanf(mFile, "%le", &mt(i));
        }
        Z.column(1) << mt;

        ColumnVector er1(mp); /* measurement error */
        for (int i = 1; i <= mp; i++)
        {
          if (mt(i) > 0.20)
          {
            er1(i) = 0.001*(mt(i) - 0.20);
          }
          else
          {
            er1(i) = 0.0;
          }
        }
        Z.column(3) << er1;

        // Measurement error covariance matrix
        Matrix Pzz(mp, mp);
        Pzz = 0;
        Pzz = er1*er1.t();

        /* Defining the observation matrix */
        Matrix H(mp, 4*nN);
        H = 0;
        int bct = 1;
        for (int i = 0; i < mp; i++)
        {
          H(bct, GID[i]) = 1.0;
          bct++;
        }

        // Gain matrix
        Matrix KG(4*nN, mp);
        Matrix KG1 = H*P1m*H.t();
        Matrix KG2 = Pzz + KG1;
        Matrix KG3 = KG2.i();
        KG = (P1m*H.t())*KG3;

        // Update
        A0 = A0 + KG*(Z.column(1) - H*A0);
        A1 = A1 - KG*H*A1;
        A2 = KG*Z.column(3);
        A3 = A3 - KG*H*A3;
        A4 = A4 - KG*H*A4;

        // Give the updated coefficients back to Sundance
        for (int i = 1; i <= nN; i++)
        {
          Se00V.setElement(LID[i-1], (A0(i) - 0.20)/0.65);
          Se01V.setElement(LID[i-1], A1(i)/0.65);
          Se02V.setElement(LID[i-1], A2(i)/0.65);
          Se03V.setElement(LID[i-1], A3(i)/0.65);
          Se04V.setElement(LID[i-1], A4(i)/0.65);
          Pww0V.setElement(LID[i-1], A0(i+nN));
          Pww1V.setElement(LID[i-1], A1(i+nN));
          Pww2V.setElement(LID[i-1], A2(i+nN));
          Pww3V.setElement(LID[i-1], A3(i+nN));
          Pww4V.setElement(LID[i-1], A4(i+nN));
          K0V.setElement(LID[i-1], A0(i+2*nN));
          K1V.setElement(LID[i-1], A1(i+2*nN));
          K2V.setElement(LID[i-1], A2(i+2*nN));
          K3V.setElement(LID[i-1], A3(i+2*nN));
          K4V.setElement(LID[i-1], A4(i+2*nN));
          P0V.setElement(LID[i-1], A0(i+3*nN));
          P1V.setElement(LID[i-1], A1(i+3*nN));
          P2V.setElement(LID[i-1], A2(i+3*nN));
          P3V.setElement(LID[i-1], A3(i+3*nN));
          P4V.setElement(LID[i-1], A4(i+3*nN));
        }
        DiscreteFunction::discFunc(Se00)->setVector(Se00V);
        DiscreteFunction::discFunc(Se01)->setVector(Se01V);
        DiscreteFunction::discFunc(Se02)->setVector(Se02V);
        DiscreteFunction::discFunc(Se03)->setVector(Se03V);
        DiscreteFunction::discFunc(Se04)->setVector(Se04V);
        DiscreteFunction::discFunc(Pww0)->setVector(Pww0V);
        DiscreteFunction::discFunc(Pww1)->setVector(Pww1V);
        DiscreteFunction::discFunc(Pww2)->setVector(Pww2V);
        DiscreteFunction::discFunc(Pww3)->setVector(Pww3V);
        DiscreteFunction::discFunc(Pww4)->setVector(Pww4V);
        DiscreteFunction::discFunc(K0)->setVector(K0V);
        DiscreteFunction::discFunc(K1)->setVector(K1V);
        DiscreteFunction::discFunc(K2)->setVector(K2V);
        DiscreteFunction::discFunc(K3)->setVector(K3V);
        DiscreteFunction::discFunc(K4)->setVector(K4V);
        DiscreteFunction::discFunc(P0)->setVector(P0V);
        DiscreteFunction::discFunc(P1)->setVector(P1V);
        DiscreteFunction::discFunc(P2)->setVector(P2V);
        DiscreteFunction::discFunc(P3)->setVector(P3V);
        DiscreteFunction::discFunc(P4)->setVector(P4V);

        Expr Se0 = new SpectralExpr(sbasis, List(Se00, Se01, Se02, Se03, Se04));
        Expr P = new SpectralExpr(sbasis, List(P0, P1, P2, P3, P4));
        Expr K = new SpectralExpr(sbasis, List(K0, K1, K2, K3, K4));

        /* Write out the updated state to VTK files */
        FieldWriter wr3 = new VTKWriter("K" + Teuchos::toString(ns/10));
        wr3.addMesh(mesh);
        wr3.addField("K0", new ExprFieldWrapper(K0));
        wr3.addField("K1", new ExprFieldWrapper(K1));
        wr3.addField("K2", new ExprFieldWrapper(K2));
        wr3.addField("K3", new ExprFieldWrapper(K3));
        wr3.addField("K4", new ExprFieldWrapper(K4));
        wr3.write();

        FieldWriter wr4 = new VTKWriter("P" + Teuchos::toString(ns/10));
        wr4.addMesh(mesh);
        wr4.addField("P0", new ExprFieldWrapper(P0));
        wr4.addField("P1", new ExprFieldWrapper(P1));
        wr4.addField("P2", new ExprFieldWrapper(P2));
        wr4.addField("P3", new ExprFieldWrapper(P3));
        wr4.addField("P4", new ExprFieldWrapper(P4));
        wr4.write();
      }
      else
      {
        DiscreteFunction::discFunc(Se00)->setVector(Se00V);
        DiscreteFunction::discFunc(Se01)->setVector(Se01V);
        DiscreteFunction::discFunc(Se02)->setVector(Se02V);
        DiscreteFunction::discFunc(Se03)->setVector(Se03V);
        DiscreteFunction::discFunc(Se04)->setVector(Se04V);
        Expr Se0 = new SpectralExpr(sbasis, List(Se00, Se01, Se02, Se03, Se04));
      }
    }
    fclose(mFile);
  }
  catch(exception& e)
  {
    Sundance::handleException(e);
  }
  Sundance::finalize();
}

A.2 Two-Dimensional Flow Control Problem

In this example, the unknown medium properties are modeled as an exponential function of a Polynomial Chaos expansion. The standard spectral library implemented in Sundance can therefore not be used to solve the problem. In what follows a workaround is presented: the equations are expanded for each coefficient of the chaos expansion independently, and a larger system is then assembled and solved. This also gives an insight into the implementation details of the spectral library within Sundance.

#include "Sundance.hpp"

/* Function to create a list of unknown functions of arbitrary size */
Expr makeBigUnknownFunctionList(int numFuncs, const BasisFamily& basis)
{
  Array<Expr> big(numFuncs);
  for (unsigned int i = 0; i < big.size(); i++)
  {
    big[i] = new UnknownFunction(basis);
  }
  return new ListExpr(big);
}

/* Function to create a list of test functions of arbitrary size */
Expr makeBigTestFunctionList(int numFuncs, const BasisFamily& basis)
{
  Array<Expr> big(numFuncs);
  for (unsigned int i = 0; i < big.size(); i++)
  {
    big[i] = new TestFunction(basis);
  }
  return new ListExpr(big);
}

/* Function to create a discrete space with an arbitrary number of functions */
DiscreteSpace makeBigDiscreteSpace(const Mesh& mesh, int numFuncs, const BasisFamily& basis, const VectorType<double>& vecType)
{
  BasisArray big(numFuncs);
  for (unsigned int i = 0; i < big.size(); i++)
  {
    big[i] = basis;
  }
  return DiscreteSpace(mesh, big, vecType);
}

/* Function to evaluate the average of three Hermite polynomials of known
   dimension and order, where one is expressed as an exponential function.
   The function takes DiscreteFunction Exprs as input and returns an Expr. */
Expr CijkList(Expr alpha1, Expr alpha2, Expr alpha3, DiscreteSpace d1, int VecSize, cijk Exp, int i, int j, int k)
{
  Vector<double> alpha1V = DiscreteFunction::discFunc(alpha1)->getVector();
  Vector<double> alpha2V = DiscreteFunction::discFunc(alpha2)->getVector();
  Vector<double> alpha3V = DiscreteFunction::discFunc(alpha3)->getVector();
  Expr rtn = new DiscreteFunction(d1, 0.0, "rtn");
  Vector<double> rtnV = DiscreteFunction::discFunc(rtn)->getVector();
  VectorSpace<double> sS = alpha1V.space();
  int lowc = sS.lowestLocallyOwnedIndex();
  int highc = lowc + sS.numLocalElements();
  for (int ix = lowc; ix < highc; ix++)
  {
    rtnV.setElement(ix, Exp.sumexpectation3(alpha1V.getElement(ix),
                                            alpha2V.getElement(ix),
                                            alpha3V.getElement(ix), i, j, k));
  }
  DiscreteFunction::discFunc(rtn)->setVector(rtnV);
  return rtn;
}

int main(int argc, char** argv)
{
  try
  {
    Sundance::init(&argc, &argv);

    /* We will do our linear algebra using Epetra */
    VectorType<double> vecType = new EpetraVectorType();

    MeshType meshType = new BasicSimplicialMeshType();
    MeshSource mesher = new ExodusNetCDFMeshReader("doubleloop1.ncdf", meshType);
    Mesh mesh = mesher.getMesh();

    /* Create a cell filter that will identify the maximal cells
       in the interior of the domain */
    CellFilter interior = new MaximalCellFilter();
    CellFilter edges = new DimensionalCellFilter(1);
    CellFilter Source1 = edges.labeledSubset(1);
    CellFilter Source2 = edges.labeledSubset(2);
    CellFilter Sink = edges.labeledSubset(3);

    const int nf = 10; /* Number of unknown functions */

    /* Create unknown and test functions, discretized using second-order
       Lagrange interpolants */
    BasisFamily basis = new Lagrange(2);
    Expr Unk = makeBigUnknownFunctionList(nf, basis);
    Expr Se0 = Unk[0];
    Expr Se1 = Unk[1];
    Expr Se2 = Unk[2];
    Expr Se3 = Unk[3];
    Expr Se4 = Unk[4];
    Expr Pw0 = Unk[5];
    Expr Pw1 = Unk[6];
    Expr Pw2 = Unk[7];
    Expr Pw3 = Unk[8];
    Expr Pw4 = Unk[9];
    Expr Test = makeBigTestFunctionList(nf, basis);
    Expr Ns0 = Test[0];
    Expr Ns1 = Test[1];
    Expr Ns2 = Test[2];
    Expr Ns3 = Test[3];
    Expr Ns4 = Test[4];
    Expr Nn0 = Test[5];
    Expr Nn1 = Test[6];
    Expr Nn2 = Test[7];
    Expr Nn3 = Test[8];
    Expr Nn4 = Test[9];

    /* Create differential operators and coordinate functions */
    Expr x = new CoordExpr(0);
    Expr y = new CoordExpr(1);
    Expr dx = new Derivative(0);
    Expr dy = new Derivative(1);
    Expr grad = List(dx, dy);

    /* We need a quadrature rule for doing the integrations */
    QuadratureFamily quad2 = new GaussianQuadrature(4);

    DiscreteSpace d1(mesh, basis, vecType);
    DiscreteSpace discSpace = makeBigDiscreteSpace(mesh, nf, basis, vecType);

    /* Initial guess */
    Expr u0 = new DiscreteFunction(discSpace, 0.0, "u0");

    /* Initial condition */
    Expr Seo0 = new DiscreteFunction(d1, 0.0, "Seo0");
    Expr Seo1 = new DiscreteFunction(d1, 0.0, "Seo1");
    Expr Seo2 = new DiscreteFunction(d1, 0.0, "Seo2");
    Expr Seo3 = new DiscreteFunction(d1, 0.0, "Seo3");
    Expr Seo4 = new DiscreteFunction(d1, 0.0, "Seo4");

    /* Cross-sectional area */
    double As = 10000;

    /* The intrinsic permeability of the medium K, and its porosity P */
    int nNodes1 = mesh.numCells(0);
    int nElems = mesh.numCells(1);
    int vecSize = nElems + nNodes1;

    /* representation of alpha = K/P */
    Expr alpha0 = new DiscreteFunction(d1, 0.0, "alpha0");
    Expr alpha1 = new DiscreteFunction(d1, 0.0, "alpha1");
    Expr alpha2 = new DiscreteFunction(d1, 0.0, "alpha2");
    Expr alpha3 = new DiscreteFunction(d1, 0.0, "alpha3");

    /* Dimensions and order of the spectral basis */
    int ndim = 3;
    int order = 2;
    const int nterm = 5;

    /* Randomly generated initial guess for alpha */
    Vector<double> alpha0V = DiscreteFunction::discFunc(alpha0)->getVector();
    Vector<double> alpha1V = DiscreteFunction::discFunc(alpha1)->getVector();
    Vector<double> alpha2V = DiscreteFunction::discFunc(alpha2)->getVector();
    Vector<double> alpha3V = DiscreteFunction::discFunc(alpha3)->getVector();

    int32 seed = 19850; // random seed
    StochasticLib1 sto(seed);
    for (int i = 0; i < vecSize; i++)
    {
      alpha0V.setElement(i, 2.93 + 0.1*sto.Normal(0, 1));
      seed++;
      alpha1V.setElement(i, 0.005*sto.Normal(0, 1));
      seed++;
      alpha2V.setElement(i, 0.005*sto.Normal(0, 1));
      seed++;
      alpha3V.setElement(i, 0.005*sto.Normal(0, 1));
    }
    DiscreteFunction::discFunc(alpha0)->setVector(alpha0V);
    DiscreteFunction::discFunc(alpha1)->setVector(alpha1V);
    DiscreteFunction::discFunc(alpha2)->setVector(alpha2V);
    DiscreteFunction::discFunc(alpha3)->setVector(alpha3V);

    double MuO = 1.735; /* Oil viscosity */
    double MuW = 1.735; /* Water viscosity */
    double Pe = 10000.0;

    /* Initial guess for the water injection rates to be controlled */
    /* Source 1 */
    double Qa0 = 300.00;
    double Qa1 = 10.00;
    double Qa2 = 0.0;
    double Qa3 = 0.0;
    double Qa4 = 5.50;
    /* Source 2 */
    double Qb0 = 300.00;
    double Qb1 = 10.00;
    double Qb2 = 0.0;
    double Qb3 = 0.0;
    double Qb4 = 5.50;

    /* Set up the time stepping with timestep = 0.005 */
    double deltaT = 0.005;
    double Sm = 0.35;

    cijk Exp(ndim, order);

    /* Evaluating the coefficients of the various powers of the unknown saturation */
    Expr Squad0 = (1*1*Se0*Se0*Se0*Se0 + 6*1*Se0*Se0*Se1*Se1 + 6*1*Se0*Se0*Se2*Se2
      + 6*1*Se0*Se0*Se3*Se3 + 6*2*Se0*Se0*Se4*Se4 + 12*2*Se0*Se1*Se1*Se4
      + 4*8*Se0*Se4*Se4*Se4 + 1*3*Se1*Se1*Se1*Se1 + 6*1*Se1*Se1*Se2*Se2
      + 6*1*Se1*Se1*Se3*Se3 + 6*10*Se1*Se1*Se4*Se4 + 1*3*Se2*Se2*Se2*Se2
      + 6*1*Se2*Se2*Se3*Se3 + 6*2*Se2*Se2*Se4*Se4 + 1*3*Se3*Se3*Se3*Se3
      + 6*2*Se3*Se3*Se4*Se4 + 1*60*Se4*Se4*Se4*Se4)/1;
    Expr Squad1 = (4*1*Se0*Se0*Se0*Se1 + 12*2*Se0*Se0*Se1*Se4 + 4*3*Se0*Se1*Se1*Se1
      + 12*1*Se0*Se1*Se2*Se2 + 12*1*Se0*Se1*Se3*Se3 + 12*10*Se0*Se1*Se4*Se4
      + 4*12*Se1*Se1*Se1*Se4 + 12*2*Se1*Se2*Se2*Se4 + 12*2*Se1*Se3*Se3*Se4
      + 4*68*Se1*Se4*Se4*Se4)/1;
    Expr Squad2 = (4*1*Se0*Se0*Se0*Se2 + 12*1*Se0*Se1*Se1*Se2 + 4*3*Se0*Se2*Se2*Se2
      + 12*1*Se0*Se2*Se3*Se3 + 12*2*Se0*Se2*Se4*Se4 + 12*2*Se1*Se1*Se2*Se4
      + 4*8*Se2*Se4*Se4*Se4)/1;
    Expr Squad3 = (4*1*Se0*Se0*Se0*Se3 + 12*1*Se0*Se1*Se1*Se3 + 12*1*Se0*Se2*Se2*Se3
      + 4*3*Se0*Se3*Se3*Se3 + 12*2*Se0*Se3*Se4*Se4 + 12*2*Se1*Se1*Se3*Se4
      + 4*8*Se3*Se4*Se4*Se4)/1;
    Expr Squad4 = (4*2*Se0*Se0*Se0*Se4 + 6*2*Se0*Se0*Se1*Se1 + 6*8*Se0*Se0*Se4*Se4
      + 12*10*Se0*Se1*Se1*Se4 + 12*2*Se0*Se2*Se2*Se4 + 12*2*Se0*Se3*Se3*Se4
      + 4*60*Se0*Se4*Se4*Se4 + 1*12*Se1*Se1*Se1*Se1 + 6*2*Se1*Se1*Se2*Se2
      + 6*2*Se1*Se1*Se3*Se3 + 6*68*Se1*Se1*Se4*Se4 + 6*8*Se2*Se2*Se4*Se4
      + 6*8*Se3*Se3*Se4*Se4 + 1*544*Se4*Se4*Se4*Se4)/2;
    Expr Scube0 = (1*1*Se0*Se0*Se0 + 3*1*Se0*Se1*Se1 + 3*1*Se0*Se2*Se2
      + 3*1*Se0*Se3*Se3 + 3*2*Se0*Se4*Se4 + 3*2*Se1*Se1*Se4 + 1*8*Se4*Se4*Se4)/1;
    Expr Scube1 = (3*1*Se0*Se0*Se1 + 6*2*Se0*Se1*Se4 + 1*3*Se1*Se1*Se1
      + 3*1*Se1*Se2*Se2 + 3*1*Se1*Se3*Se3 + 3*10*Se1*Se4*Se4)/1;
    Expr Scube2 = (3*1*Se0*Se0*Se2 + 3*1*Se1*Se1*Se2 + 1*3*Se2*Se2*Se2
      + 3*1*Se2*Se3*Se3 + 3*2*Se2*Se4*Se4)/1;
    Expr Scube3 = (3*1*Se0*Se0*Se3 + 3*1*Se1*Se1*Se3 + 3*1*Se2*Se2*Se3
      + 1*3*Se3*Se3*Se3 + 3*2*Se3*Se4*Se4)/1;
    Expr Scube4 = (3*2*Se0*Se0*Se4 + 3*2*Se0*Se1*Se1 + 3*8*Se0*Se4*Se4
      + 3*10*Se1*Se1*Se4 + 3*2*Se2*Se2*Se4 + 3*2*Se3*Se3*Se4 + 1*60*Se4*Se4*Se4)/2;
    Expr Ssquare0 = (1*1*Se0*Se0 + 1*1*Se1*Se1 + 1*1*Se2*Se2 + 1*1*Se3*Se3
      + 1*2*Se4*Se4)/1;
    Expr Ssquare1 = (2*1*Se0*Se1 + 2*2*Se1*Se4)/1;
Expr Ssquare2 = (2 * 1 * Se0 * Se2) / 1;
Expr Ssquare3 = (2 * 1 * Se0 * Se3) / 1;
Expr Ssquare4 = (2 * 2 * Se0 * Se4 + 1 * 2 * Se1 * Se1 + 1 * 8 * Se4 * Se4) / 2;

/* Coefficients of water relative permeability */
Expr Krw0 = 0.8723 * Squad0 + 0.1632 * Scube0 - 0.0392 * Ssquare0 + 0.0037 * Se0;
Expr Krw1 = 0.8723 * Squad1 + 0.1632 * Scube1 - 0.0392 * Ssquare1 + 0.0037 * Se1;
Expr Krw2 = 0.8723 * Squad2 + 0.1632 * Scube2 - 0.0392 * Ssquare2 + 0.0037 * Se2 + 0.1 * Krw0;
Expr Krw3 = 0.8723 * Squad3 + 0.1632 * Scube3 - 0.0392 * Ssquare3 + 0.0037 * Se3;
Expr Krw4 = 0.8723 * Squad4 + 0.1632 * Scube4 - 0.0392 * Ssquare4 + 0.0037 * Se4;

/* Coefficients of oil relative permeability */
Expr Kro0 = -2.7777 * Squad0 + 4.9888 * Scube0 - 1.3543 * Ssquare0 - 1.8568 * Se0 + 1.0;
Expr Kro1 = -2.7777 * Squad1 + 4.9888 * Scube1 - 1.3543 * Ssquare1 - 1.8568 * Se1;
Expr Kro2 = -2.7777 * Squad2 + 4.9888 * Scube2 - 1.3543 * Ssquare2 - 1.8568 * Se2 + 0.1 * Kro0;
Expr Kro3 = -2.7777 * Squad3 + 4.9888 * Scube3 - 1.3543 * Ssquare3 - 1.8568 * Se3;
Expr Kro4 = -2.7777 * Squad4 + 4.9888 * Scube4 - 1.3543 * Ssquare4 - 1.8568 * Se4;

/* Coefficients for the capillary pressure */
Expr Pc0 = Pe * (3.9051 * Squad0 - 12.0155 * Scube0 + 14.2754 * Ssquare0 - 8.4337 * Se0 + 3.2688);
Expr Pc1 = Pe * (3.9051 * Squad1 - 12.0155 * Scube1 + 14.2754 * Ssquare1 - 8.4337 * Se1);
Expr Pc2 = Pe * (3.9051 * Squad2 - 12.0155 * Scube2 + 14.2754 * Ssquare2 - 8.4337 * Se2) + 0.1 * Pc0;
Expr Pc3 = Pe * (3.9051 * Squad3 - 12.0155 * Scube3 + 14.2754 * Ssquare3 - 8.4337 * Se3);
Expr Pc4 = Pe * (3.9051 * Squad4 - 12.0155 * Scube4 + 14.2754 * Ssquare4 - 8.4337 * Se4);

/* Weak forms of the PDEs corresponding to the different terms in the chaos expansion */
Expr Weqn0 = Integral(interior, Ns0 * As * (1.0-Sm) * (Se0 - Seo0), quad2)
    + Integral(interior, (grad * Ns0) * As/MuW
        * exp(alpha0 + 0.5 * alpha1 * alpha1 + 0.5 * alpha2 * alpha2 + 0.5 * alpha3 * alpha3)
        * (CijkList(alpha1, alpha2, alpha3, d1, vecSize, Exp, 0, 0,
            0) * Krw0 * (grad * Pw0)
          + CijkList(alpha1, alpha2, alpha3, d1, vecSize, Exp, 1, 1, 0) * Krw1 * (grad * Pw1)
          + CijkList(alpha1, alpha2, alpha3, d1, vecSize, Exp, 2, 2, 0) * Krw2 * (grad * Pw2)
          + CijkList(alpha1, alpha2, alpha3, d1, vecSize, Exp, 3, 3, 0) * Krw3 * (grad * Pw3)
          + CijkList(alpha1, alpha2, alpha3, d1, vecSize, Exp, 4, 4, 0) * Krw4 * (grad * Pw4)) * deltaT, quad2)
    - Integral(Source1, Ns0 * Qa0 * deltaT, quad2)
    - Integral(Source2, Ns0 * Qb0 * deltaT, quad2);

Expr Weqn1 = Integral(interior, Ns1 * As * (1.0-Sm) * (Se1 - Seo1), quad2)
    + Integral(interior, (grad * Ns1) * As/MuW
        * exp(alpha0 + 0.5 * alpha1 * alpha1 + 0.5 * alpha2 * alpha2 + 0.5 * alpha3 * alpha3)
        * (CijkList(alpha1, alpha2, alpha3, d1, vecSize, Exp, 0, 1, 1) * Krw0 * (grad * Pw1)
          + CijkList(alpha1, alpha2, alpha3, d1, vecSize, Exp, 1, 0, 1) * Krw1 * (grad * Pw0)
          + CijkList(alpha1, alpha2, alpha3, d1, vecSize, Exp, 1, 4, 1) * Krw1 * (grad * Pw4)
          + CijkList(alpha1, alpha2, alpha3, d1, vecSize, Exp, 4, 1, 1) * Krw4 * (grad * Pw1)) * deltaT, quad2)
    - Integral(Source1, Ns1 * Qa1 * deltaT, quad2)
    - Integral(Source2, Ns1 * Qb1 * deltaT, quad2);

Expr Weqn2 = Integral(interior, Ns2 * As * (1.0-Sm) * (Se2 - Seo2), quad2)
    + Integral(interior, (grad * Ns2) * As/MuW
        * exp(alpha0 + 0.5 * alpha1 * alpha1 + 0.5 * alpha2 * alpha2 + 0.5 * alpha3 * alpha3)
        * (CijkList(alpha1, alpha2, alpha3, d1, vecSize, Exp, 0, 2, 2) * Krw0 * (grad * Pw2)
          + CijkList(alpha1, alpha2, alpha3, d1, vecSize, Exp, 2, 0, 2) * Krw2 * (grad * Pw0)) * deltaT, quad2)
    - Integral(Source1, Ns2 * Qa2 * deltaT, quad2)
    - Integral(Source2, Ns2 * Qb2 * deltaT, quad2);

Expr Weqn3 = Integral(interior, Ns3 * As * (1.0-Sm) * (Se3 - Seo3), quad2)
    + Integral(interior, (grad * Ns3) * As/MuW
        * exp(alpha0 + 0.5 * alpha1 * alpha1 + 0.5 * alpha2 * alpha2 + 0.5 * alpha3 * alpha3)
        * (CijkList(alpha1, alpha2, alpha3, d1, vecSize, Exp, 0, 3, 3) * Krw0 * (grad * Pw3)
          + CijkList(alpha1, alpha2, alpha3, d1, vecSize, Exp, 3, 0, 3) * Krw3 * (grad * Pw0)) * deltaT, quad2)
    -
      Integral(Source1, Ns3 * Qa3 * deltaT, quad2)
    - Integral(Source2, Ns3 * Qb3 * deltaT, quad2);

Expr Weqn4 = Integral(interior, Ns4 * As * (1.0-Sm) * 2 * (Se4 - Seo4), quad2)
    + Integral(interior, (grad * Ns4) * As/MuW
        * exp(alpha0 + 0.5 * alpha1 * alpha1 + 0.5 * alpha2 * alpha2 + 0.5 * alpha3 * alpha3)
        * (CijkList(alpha1, alpha2, alpha3, d1, vecSize, Exp, 0, 4, 4) * Krw0 * (grad * Pw4)
          + CijkList(alpha1, alpha2, alpha3, d1, vecSize, Exp, 1, 1, 4) * Krw1 * (grad * Pw1)
          + CijkList(alpha1, alpha2, alpha3, d1, vecSize, Exp, 4, 0, 4) * Krw4 * (grad * Pw0)
          + CijkList(alpha1, alpha2, alpha3, d1, vecSize, Exp, 4, 4, 4) * Krw4 * (grad * Pw4)) * deltaT, quad2)
    - Integral(Source1, Ns4 * Qa4 * deltaT, quad2)
    - Integral(Source2, Ns4 * Qb4 * deltaT, quad2);

Expr Oeqn0 = Integral(interior, -Nn0 * As * (1.0-Sm) * (Se0 - Seo0), quad2)
    + Integral(interior, (grad * Nn0) * As/MuO
        * exp(alpha0 + 0.5 * alpha1 * alpha1 + 0.5 * alpha2 * alpha2 + 0.5 * alpha3 * alpha3)
        * (CijkList(alpha1, alpha2, alpha3, d1, vecSize, Exp, 0, 0, 0) * Kro0 * (grad * (Pw0+Pc0))
          + CijkList(alpha1, alpha2, alpha3, d1, vecSize, Exp, 1, 1, 0) * Kro1 * (grad * (Pw1+Pc1))
          + CijkList(alpha1, alpha2, alpha3, d1, vecSize, Exp, 2, 2, 0) * Kro2 * (grad * (Pw2+Pc2))
          + CijkList(alpha1, alpha2, alpha3, d1, vecSize, Exp, 3, 3, 0) * Kro3 * (grad * (Pw3+Pc3))
          + CijkList(alpha1, alpha2, alpha3, d1, vecSize, Exp, 4, 4, 0) * Kro4 * (grad * (Pw4+Pc4))) * deltaT, quad2)
    + Integral(Sink, Nn0 * 0.4 * (Qa0+Qb0) * deltaT, quad2);

Expr Oeqn1 = Integral(interior, -Nn1 * As * (1.0-Sm) * (Se1 - Seo1), quad2)
    + Integral(interior, (grad * Nn1) * As/MuO
        * exp(alpha0 + 0.5 * alpha1 * alpha1 + 0.5 * alpha2 * alpha2 + 0.5 * alpha3 * alpha3)
        * (CijkList(alpha1, alpha2, alpha3, d1, vecSize, Exp, 0, 1, 1) * Kro0 * (grad * (Pw1+Pc1))
          + CijkList(alpha1, alpha2, alpha3, d1, vecSize, Exp, 1, 0, 1) * Kro1 * (grad * (Pw0+Pc0))
          + CijkList(alpha1, alpha2, alpha3, d1, vecSize, Exp, 1, 4, 1) * Kro1 * (grad * (Pw4+Pc4))
          + CijkList(alpha1, alpha2, alpha3, d1, vecSize, Exp, 4, 1, 1) *
            Kro4 * (grad * (Pw1+Pc1))) * deltaT, quad2)
    + Integral(Sink, Nn1 * 0.4 * (Qa1+Qb1) * deltaT, quad2);

Expr Oeqn2 = Integral(interior, -Nn2 * As * (1.0-Sm) * (Se2 - Seo2), quad2)
    + Integral(interior, (grad * Nn2) * As/MuO
        * exp(alpha0 + 0.5 * alpha1 * alpha1 + 0.5 * alpha2 * alpha2 + 0.5 * alpha3 * alpha3)
        * (CijkList(alpha1, alpha2, alpha3, d1, vecSize, Exp, 0, 2, 2) * Kro0 * (grad * (Pw2+Pc2))
          + CijkList(alpha1, alpha2, alpha3, d1, vecSize, Exp, 2, 0, 2) * Kro2 * (grad * (Pw0+Pc0))) * deltaT, quad2)
    + Integral(Sink, Nn2 * 0.4 * (Qa2+Qb2) * deltaT, quad2);

Expr Oeqn3 = Integral(interior, -Nn3 * As * (1.0-Sm) * (Se3 - Seo3), quad2)
    + Integral(interior, (grad * Nn3) * As/MuO
        * exp(alpha0 + 0.5 * alpha1 * alpha1 + 0.5 * alpha2 * alpha2 + 0.5 * alpha3 * alpha3)
        * (CijkList(alpha1, alpha2, alpha3, d1, vecSize, Exp, 0, 3, 3) * Kro0 * (grad * (Pw3+Pc3))
          + CijkList(alpha1, alpha2, alpha3, d1, vecSize, Exp, 3, 0, 3) * Kro3 * (grad * (Pw0+Pc0))) * deltaT, quad2)
    + Integral(Sink, Nn3 * 0.4 * (Qa3+Qb3) * deltaT, quad2);

Expr Oeqn4 = Integral(interior, -Nn4 * As * (1.0-Sm) * (2 * (Se4 - Seo4)), quad2)
    + Integral(interior, (grad * Nn4) * As/MuO
        * exp(alpha0 + 0.5 * alpha1 * alpha1 + 0.5 * alpha2 * alpha2 + 0.5 * alpha3 * alpha3)
        * (CijkList(alpha1, alpha2, alpha3, d1, vecSize, Exp, 0, 4, 4) * Kro0 * (grad * (Pw4+Pc4))
          + CijkList(alpha1, alpha2, alpha3, d1, vecSize, Exp, 1, 1, 4) * Kro1 * (grad * (Pw1+Pc1))
          + CijkList(alpha1, alpha2, alpha3, d1, vecSize, Exp, 4, 0, 4) * Kro4 * (grad * (Pw0+Pc0))
          + CijkList(alpha1, alpha2, alpha3, d1, vecSize, Exp, 4, 4, 4) * Kro4 * (grad * (Pw4+Pc4))) * deltaT, quad2)
    + Integral(Sink, Nn4 * 0.4 * (Qa4+Qb4) * deltaT, quad2);

/* The above set of equations is automatically generated using a C++ algorithm
   that takes the number of dimensions and order as input */
Expr eqn = Weqn0 + Oeqn0 + Weqn1 + Oeqn1 + Weqn2 + Oeqn2 + Weqn3 + Oeqn3 + Weqn4 + Oeqn4;

Expr bc = EssentialBC(Sink, Nn0 * (Pw0-10000.0), quad2)
    + EssentialBC(Sink, Nn1 * (Pw1-0.0), quad2) +
      EssentialBC(Sink, Nn2 * (Pw2-0.0), quad2)
    + EssentialBC(Sink, Nn3 * (Pw3-0.0), quad2)
    + EssentialBC(Sink, Nn4 * (Pw4-0.0), quad2);

NonlinearOperator<double> F = new NonlinearProblem(mesh, eqn, bc, Test, Unk, u0, vecType);

ParameterXMLFileReader reader("nox.xml");
ParameterList noxParams = reader.getParameters();
NOXSolver solver(noxParams, F);
printf("Solving System \n");

const int Nn = nNodes1;
/* Measurement Locations */
int GID1[13] = {5, 11, 19, 23, 33, 61, 69, 86, 87, 111, 117, 126, 131};
int GID[Nn];
/* Target function is treated as a discretized function on the mesh and thus
   considered as measurements available at all nodal locations */
int LID[Nn];
for (int i=1; i<=Nn; i++)
{
  GID[i-1] = i;
}

// get local ID of node given GID
const RefCountPtr<DOFMapBase>& dofMap = DiscreteFunction::discFunc(Seo0)->map();
for (int i=0; i<Nn; i++)
{
  int ID = mesh.mapGIDToLID(0, i);
  Array<int> dofIndices;
  dofMap->getDOFsForCell(0, ID, 0, dofIndices);
  LID[i] = dofIndices[0];
}

FILE * o1File;   /* Output file to record the updated flow at Source 1 */
o1File = fopen("Inflow1.dat", "w");
FILE * o2File;   /* Output file to record the updated flow at Source 2 */
o2File = fopen("Inflow2.dat", "w");
FILE * mFile;    /* File Containing the Discretized Target Function */
mFile = fopen("../truemodel/STarget.dat", "r");
FILE * m1File;   /* File Containing the observation Data */
m1File = fopen("../truemodel/SMM.dat", "r");

int nSteps = 2001;
for (int ns=0; ns < nSteps; ns++)
{
  solver.solve();

  Expr Pww0 = new DiscreteFunction(d1, 0.0, "Pww0");
  Expr Pww1 = new DiscreteFunction(d1, 0.0, "Pww1");
  Expr Pww2 = new DiscreteFunction(d1, 0.0, "Pww2");
  Expr Pww3 = new DiscreteFunction(d1, 0.0, "Pww3");
  Expr Pww4 = new DiscreteFunction(d1, 0.0, "Pww4");

  Vector<double> SolnVec = DiscreteFunction::discFunc(u0)->getVector();
  Vector<double> Seo0V = DiscreteFunction::discFunc(Seo0)->getVector();
  Vector<double> Seo1V = DiscreteFunction::discFunc(Seo1)->getVector();
  Vector<double> Seo2V =
      DiscreteFunction::discFunc(Seo2)->getVector();
  Vector<double> Seo3V = DiscreteFunction::discFunc(Seo3)->getVector();
  Vector<double> Seo4V = DiscreteFunction::discFunc(Seo4)->getVector();
  Vector<double> Pww0V = DiscreteFunction::discFunc(Pww0)->getVector();
  Vector<double> Pww1V = DiscreteFunction::discFunc(Pww1)->getVector();
  Vector<double> Pww2V = DiscreteFunction::discFunc(Pww2)->getVector();
  Vector<double> Pww3V = DiscreteFunction::discFunc(Pww3)->getVector();
  Vector<double> Pww4V = DiscreteFunction::discFunc(Pww4)->getVector();

  for (int n=0; n<vecSize; n++)
  {
    Seo0V.setElement(n, SolnVec.getElement(10*n));
    Seo1V.setElement(n, SolnVec.getElement(10*n+1));
    Seo2V.setElement(n, SolnVec.getElement(10*n+2));
    Seo3V.setElement(n, SolnVec.getElement(10*n+3));
    Seo4V.setElement(n, SolnVec.getElement(10*n+4));
    Pww0V.setElement(n, SolnVec.getElement(10*n+5));
    Pww1V.setElement(n, SolnVec.getElement(10*n+6));
    Pww2V.setElement(n, SolnVec.getElement(10*n+7));
    Pww3V.setElement(n, SolnVec.getElement(10*n+8));
    Pww4V.setElement(n, SolnVec.getElement(10*n+9));
  }

  /* Updating the System parameters and states whenever a measurement is available */
  if ( ((ns % 100) == 0) && (ns > 0) )
  {
    ColumnVector A0(3*Nn);
    ColumnVector A1(3*Nn);
    ColumnVector A2(3*Nn);
    ColumnVector A3(3*Nn);
    ColumnVector A4(3*Nn);
    for (int i=1; i<=3*Nn; i++)
    {
      if (i<=Nn)
      {
        A0(i) = 0.65 * Seo0V.getElement(LID[i-1]) + 0.20;
        A1(i) = 0.65 * Seo1V.getElement(LID[i-1]);
        A2(i) = 0.65 * Seo2V.getElement(LID[i-1]);
        A3(i) = 0.65 * Seo3V.getElement(LID[i-1]);
        A4(i) = 0.65 * Seo4V.getElement(LID[i-1]);
      }
      else if ((i > Nn) && (i<=2*Nn))
      {
        A0(i) = Pww0V.getElement(LID[i-(Nn+1)]);
        A1(i) = Pww1V.getElement(LID[i-(Nn+1)]);
        A2(i) = Pww2V.getElement(LID[i-(Nn+1)]);
        A3(i) = Pww3V.getElement(LID[i-(Nn+1)]);
        A4(i) = Pww4V.getElement(LID[i-(Nn+1)]);
      }
      else if ((i > 2*Nn) && (i<=3*Nn))
      {
        A0(i) = alpha0V.getElement(LID[i-(2*Nn+1)]);
        A1(i) = alpha1V.getElement(LID[i-(2*Nn+1)]);
        A2(i) =
          alpha2V.getElement(LID[i-(2*Nn+1)]);
        A3(i) = alpha3V.getElement(LID[i-(2*Nn+1)]);
        A4(i) = 0.0;
      }
    }

    // State Error Covariance Matrix
    Matrix P1m(3*Nn, 3*Nn);
    P1m = 0;
    P1m = A1 * A1.t() * Exp.expectation(0,1,1) + A2 * A2.t() * Exp.expectation(0,2,2)
        + A3 * A3.t() * Exp.expectation(0,3,3) + A4 * A4.t() * Exp.expectation(0,4,4);

    // Measurement Matrix
    const int mp = 13;
    Matrix Z(mp, nterm);
    Z = 0.0;
    ColumnVector mt(mp);
    for (int i=1; i<=mp; i++)
    {
      fscanf(m1File, "%le", &mt(i));
    }
    Z.column(1) << mt;
    ColumnVector er1(mp);
    for (int i=1; i<=mp; i++)
    {
      if (mt(i) > 0.20)
      {
        er1(i) = 0.001 * mt(i);
      }
      else
      {
        er1(i) = 0.0;
      }
    }
    Z.column(4) << er1;

    // Measurement Error Covariance Matrix
    Matrix Pzz(mp, mp);
    Pzz = 0;
    Pzz = er1 * er1.t();

    /* Defining the observation matrix */
    Matrix H(mp, 3*Nn);
    H = 0;
    int bct = 1;
    for (int i=0; i<mp; i++)
    {
      H(bct, GID1[i]) = 1.0;
      bct++;
    }
    cout << "computing the gain" << endl;

    // Gain Matrix
    Matrix KG(3*Nn, mp);
    Matrix KG1 = H * P1m * H.t();
    Matrix KG2 = Pzz + KG1;
    Matrix KG3 = KG2.i();
    KG = (P1m * H.t()) * KG3;

    A0 = A0 + KG * (Z.column(1) - H * A0);
    A1 = A1 - KG * H * A1;
    A2 = A2 - KG * H * A2;
    A3 = KG * Z.column(4);
    A4 = A4 - KG * H * A4;

    // Give back to Sundance
    for (int i=1; i<=Nn; i++)
    {
      Seo0V.setElement(LID[i-1], (A0(i)-0.20)/0.65);
      Seo1V.setElement(LID[i-1], A1(i)/0.65);
      Seo2V.setElement(LID[i-1], A2(i)/0.65);
      Seo3V.setElement(LID[i-1], A3(i)/0.65);
      Seo4V.setElement(LID[i-1], A4(i)/0.65);
      Pww0V.setElement(LID[i-1], A0(i+Nn));
      Pww1V.setElement(LID[i-1], A1(i+Nn));
      Pww2V.setElement(LID[i-1], A2(i+Nn));
      Pww3V.setElement(LID[i-1], A3(i+Nn));
      Pww4V.setElement(LID[i-1], A4(i+Nn));
      alpha0V.setElement(LID[i-1], A0(i+2*Nn));
      alpha1V.setElement(LID[i-1], A1(i+2*Nn));
      alpha2V.setElement(LID[i-1], A2(i+2*Nn));
      alpha3V.setElement(LID[i-1], A3(i+2*Nn));
    }
    DiscreteFunction::discFunc(Seo0)->setVector(Seo0V);
    DiscreteFunction::discFunc(Seo1)->setVector(Seo1V);
    DiscreteFunction::discFunc(Seo2)->setVector(Seo2V);
    DiscreteFunction::discFunc(Seo3)->setVector(Seo3V);
    DiscreteFunction::discFunc(Seo4)->setVector(Seo4V);
    DiscreteFunction::discFunc(Pww0)->setVector(Pww0V);
    DiscreteFunction::discFunc(Pww1)->setVector(Pww1V);
    DiscreteFunction::discFunc(Pww2)->setVector(Pww2V);
    DiscreteFunction::discFunc(Pww3)->setVector(Pww3V);
    DiscreteFunction::discFunc(Pww4)->setVector(Pww4V);
    DiscreteFunction::discFunc(alpha0)->setVector(alpha0V);
    DiscreteFunction::discFunc(alpha1)->setVector(alpha1V);
    DiscreteFunction::discFunc(alpha2)->setVector(alpha2V);
    DiscreteFunction::discFunc(alpha3)->setVector(alpha3V);

    FieldWriter wr2 = new VTKWriter("alpha" + Teuchos::toString(ns/100));
    wr2.addMesh(mesh);
    wr2.addField("alpha0", new ExprFieldWrapper(alpha0));
    wr2.addField("alpha1", new ExprFieldWrapper(alpha1));
    wr2.addField("alpha2", new ExprFieldWrapper(alpha2));
    wr2.addField("alpha3", new ExprFieldWrapper(alpha3));
    wr2.write();
  }
  /* Update the system states and control every other timestep to minimize the
     mismatch between the predicted state and the target function */
  else if ( ((ns % 2) == 0) && (ns >= 0) )
  {
    ColumnVector AC0(2*Nn + 2);
    ColumnVector AC1(2*Nn + 2);
    ColumnVector AC2(2*Nn + 2);
    ColumnVector AC3(2*Nn + 2);
    ColumnVector AC4(2*Nn + 2);
    for (int i=1; i<=2*Nn; i++)
    {
      if (i<=Nn)
      {
        AC0(i) = 0.65 * Seo0V.getElement(LID[i-1]) + 0.20;
        AC1(i) = 0.65 * Seo1V.getElement(LID[i-1]);
        AC2(i) = 0.65 * Seo2V.getElement(LID[i-1]);
        AC3(i) = 0.65 * Seo3V.getElement(LID[i-1]);
        AC4(i) = 0.65 * Seo4V.getElement(LID[i-1]);
      }
      else if ((i > Nn) && (i<=2*Nn))
      {
        AC0(i) = Pww0V.getElement(LID[i-(Nn+1)]);
        AC1(i) = Pww1V.getElement(LID[i-(Nn+1)]);
        AC2(i) = Pww2V.getElement(LID[i-(Nn+1)]);
        AC3(i) = Pww3V.getElement(LID[i-(Nn+1)]);
        AC4(i) = Pww4V.getElement(LID[i-(Nn+1)]);
      }
    }
    AC0(2*Nn+1) = Qa0;  AC1(2*Nn+1) = Qa1;  AC2(2*Nn+1) = Qa2;  AC3(2*Nn+1) = Qa3;  AC4(2*Nn+1) = Qa4;
    AC0(2*Nn+2) = Qb0;  AC1(2*Nn+2) = Qb1;  AC2(2*Nn+2) = Qb2;  AC3(2*Nn+2) = Qb3;  AC4(2*Nn+2) = Qb4;

    // State Error Covariance Matrix
    Matrix PC1m((2*Nn+2), (2*Nn+2));
    PC1m = 0;
    PC1m = AC1 * AC1.t() * Exp.expectation(0,1,1) + AC2 * AC2.t() * Exp.expectation(0,2,2)
         + AC3 * AC3.t() * Exp.expectation(0,3,3) + AC4 * AC4.t() * Exp.expectation(0,4,4);

    // Measurement Matrix
    const int Cmp = 160;
    Matrix ZC(Cmp, nterm);
    ZC = 0.0;
    ColumnVector mtC(Cmp);
    for (int i=1; i<=Cmp; i++)
    {
      fscanf(mFile, "%le", &mtC(i));
    }
    ZC.column(1) << mtC;
    ColumnVector Cer1(Cmp);
    for (int i=1; i<=Cmp; i++)
    {
      if (mtC(i) > 0.20)
      {
        Cer1(i) = 0.001 * mtC(i);
      }
      else
      {
        Cer1(i) = 0.0;
      }
    }
    ZC.column(4) << Cer1;

    // Measurement Error Covariance Matrix
    Matrix PCzz(Cmp, Cmp);
    PCzz = 0;
    PCzz = Cer1 * Cer1.t();

    /* Defining the observation matrix */
    Matrix HC(Cmp, 2*Nn+2);
    HC = 0;
    int bct = 1;
    for (int i=0; i<Cmp; i++)
    {
      HC(bct, GID[i]) = 1.0;
      bct++;
    }
    cout << "computing the gain" << endl;

    // Gain Matrix
    Matrix KGC(2*Nn+2, Cmp);
    Matrix KG1C = HC * PC1m * HC.t();
    Matrix KG2C = PCzz + KG1C;
    Matrix KG3C = KG2C.i();
    KGC = (PC1m * HC.t()) * KG3C;

    // Update
    AC0 = AC0 + KGC * (ZC.column(1) - HC * AC0);
    AC1 = AC1 - KGC * HC * AC1;
    AC2 = AC2 - KGC * HC * AC2;
    AC3 = KGC * ZC.column(4);
    AC4 = AC4 - KGC * HC * AC4;

    // Give back to Sundance
    for (int i=1; i<=Nn; i++)
    {
      Seo0V.setElement(LID[i-1], (AC0(i)-0.20)/0.65);
      Seo1V.setElement(LID[i-1], AC1(i)/0.65);
      Seo2V.setElement(LID[i-1], AC2(i)/0.65);
      Seo3V.setElement(LID[i-1], AC3(i)/0.65);
      Seo4V.setElement(LID[i-1], AC4(i)/0.65);
      Pww0V.setElement(LID[i-1], AC0(i+Nn));
      Pww1V.setElement(LID[i-1], AC1(i+Nn));
      Pww2V.setElement(LID[i-1], AC2(i+Nn));
      Pww3V.setElement(LID[i-1], AC3(i+Nn));
      Pww4V.setElement(LID[i-1], AC4(i+Nn));
    }
    Qa0 = AC0(2*Nn+1);  Qa1 = AC1(2*Nn+1);  Qa2 = AC2(2*Nn+1);  Qa3 = AC3(2*Nn+1);  Qa4 = AC4(2*Nn+1);
    Qb0 = AC0(2*Nn+2);  Qb1 = AC1(2*Nn+2);  Qb2 = AC2(2*Nn+2);  Qb3 = AC3(2*Nn+2);  Qb4 = AC4(2*Nn+2);

    DiscreteFunction::discFunc(Seo0)->setVector(Seo0V);
    DiscreteFunction::discFunc(Seo1)->setVector(Seo1V);
    DiscreteFunction::discFunc(Seo2)->setVector(Seo2V);
    DiscreteFunction::discFunc(Seo3)->setVector(Seo3V);
    DiscreteFunction::discFunc(Seo4)->setVector(Seo4V);
    DiscreteFunction::discFunc(Pww0)->setVector(Pww0V);
    DiscreteFunction::discFunc(Pww1)->setVector(Pww1V);
    DiscreteFunction::discFunc(Pww2)->setVector(Pww2V);
    DiscreteFunction::discFunc(Pww3)->setVector(Pww3V);
    DiscreteFunction::discFunc(Pww4)->setVector(Pww4V);

    fprintf(o1File, "%d %le %le %le %le %le\n", ns, Qa0, Qa1, Qa2, Qa3, Qa4);
    fprintf(o2File, "%d %le %le %le %le %le\n", ns, Qb0, Qb1, Qb2, Qb3, Qb4);
    fflush(o1File);
    fflush(o2File);
  }
  else
  {
    DiscreteFunction::discFunc(Seo0)->setVector(Seo0V);
    DiscreteFunction::discFunc(Seo1)->setVector(Seo1V);
    DiscreteFunction::discFunc(Seo2)->setVector(Seo2V);
    DiscreteFunction::discFunc(Seo3)->setVector(Seo3V);
    DiscreteFunction::discFunc(Seo4)->setVector(Seo4V);
  }

  if ( ((ns % 1) == 0) && (ns >= 0) )
  {
    FieldWriter wr1 = new VTKWriter("S" + Teuchos::toString(ns/1));
    wr1.addMesh(mesh);
    wr1.addField("S0", new ExprFieldWrapper(Seo0));
    wr1.addField("S1", new ExprFieldWrapper(Seo1));
    wr1.addField("S2", new ExprFieldWrapper(Seo2));
    wr1.addField("S3", new ExprFieldWrapper(Seo3));
    wr1.addField("S4", new ExprFieldWrapper(Seo4));
    wr1.write();
  }
}
fclose(mFile);
fclose(o1File);
fclose(o2File);
}
catch(exception& e)
{
  Sundance::handleException(e);
}
Sundance::finalize();
}
Abstract
Model-based predictions are critically dependent on assumptions and hypotheses that are not based on first principles and that cannot necessarily be justified based on known prevalent physics. Constitutive models, for instance, fall under this category. While these predictive tools are typically calibrated using observational data, little is usually done with the scatter in the thus-calibrated model parameters. In this study, this scatter is used to characterize the parameters as stochastic processes and a procedure is developed to carry out model validation for ascertaining the confidence in the predictions from the model.
Asset Metadata
Creator
Saad, George A.
(author)
Core Title
Stochastic data assimilation with application to multi-phase flow and health monitoring problems
School
Viterbi School of Engineering
Degree
Doctor of Philosophy
Degree Program
Civil Engineering
Publication Date
09/18/2007
Defense Date
08/31/2007
Publisher
University of Southern California (original), University of Southern California. Libraries (digital)
Tag
data assimilation,flow in porous media,Kalman filter,OAI-PMH Harvest,polynomial chaos,Structural health monitoring
Language
English
Advisor
Ghanem, Roger (committee chair), Ershaghi, Iraj (committee member), Johnson, Erik A. (committee member), Masri, Sami F. (committee member)
Creator Email
gsaad@usc.edu
Permanent Link (DOI)
https://doi.org/10.25549/usctheses-m824
Unique identifier
UC1224271
Identifier
etd-Saad-20070918 (filename),usctheses-m40 (legacy collection record id),usctheses-c127-553951 (legacy record id),usctheses-m824 (legacy record id)
Legacy Identifier
etd-Saad-20070918.pdf
Dmrecord
553951
Document Type
Dissertation
Rights
Saad, George A.
Type
texts
Source
University of Southern California (contributing entity), University of Southern California Dissertations and Theses (collection)
Repository Name
Libraries, University of Southern California
Repository Location
Los Angeles, California
Repository Email
cisadmin@lib.usc.edu