INTERACTIVE AND IMMERSIVE COASTAL HYDRODYNAMICS

by

Sasan Tavakkol

A Dissertation Presented to the
FACULTY OF THE GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(CIVIL ENGINEERING)

May 2019

Copyright 2019 Sasan Tavakkol

Acknowledgements

Five years ago, when I was getting that one-way ticket from Tehran to Los Angeles, I could not imagine how wonderful my PhD journey was going to be. I will always treasure the memories and cherish those who cared for me beyond measure. It takes a village to raise a PhD student! But I will do my best in this note to express my gratitude to everyone who helped me along the way, starting with those who paid the bills.

I am grateful for the funding provided by the Viterbi Dean’s PhD Fellowship, which covered the entire first year of my studies and assisted with the rest. This dissertation was partially funded by the Office of Naval Research (ONR), award numbers N00014-13-1-0624 and N00014-17-1-2878. Furthermore, I would like to acknowledge funding from USC’s Integrated Media Systems Center (IMSC). Finally, I acknowledge funding from the National Science Foundation (NSF), Division of Computer and Network Systems (CNS) and Division of Information & Intelligent Systems (IIS), with award numbers 1461963 and 1320149, respectively. Now, let me express in the next few paragraphs my deep gratitude to the people who made my journey pleasant and successful.

First and foremost, I would like to express my sincere gratitude to my advisor, Professor Patrick Lynett. Only in retrospect do I realize the difficulty of managing a curious but vulnerable international student while striking a balance between multiple factors to lead him to success. Patrick, you gave me the freedom I needed in my research but were always available to guide me. I never worried about my next paycheck or reimbursements while I was initiating ideas and pursuing them. I completed an M.Sc.
in Computer Science and even tried business school and entrepreneurship competitions. You let me work remotely from Chicago, New York, and Mountain View to accompany my wife or to do full-time internships at Google and Niantic. I could count more and more examples, but I just can’t thank you enough, so let me leave this paragraph unfinished.

I extend a special gratitude to Professor Cyrus Shahabi and Dr. Seon Ho Kim, who hosted me at IMSC and co-advised Chapter 4 of this dissertation, where we tackle problems in geospatial data management and crowdsourcing during disasters. Cyrus and Seon Ho, I was privileged to work with you; our collaborations directed my career and helped me achieve my goals, for which I am truly grateful.

Indeed, a very special thank you goes to all my teachers, from the very beginning to the end of my formal education. I certainly was lucky to have some of them as my mentors along the way. I would like to thank the members of my qualifying exam committee: Professor J.J. Lee for his enlightening questions and suggestions, Dr. Felipe de Barros for our friendly and valuable discussions, and especially Professor Aiichiro Nakano for encouraging and supporting me to pursue my career goals. Patrick and Cyrus were, of course, also on the committee, and I have already expressed my gratitude to them. My final defense committee members were Professor Costas Synolakis and Dr. Meisam Razaviyayn, in addition to Patrick, who was the chair. Costas, it was such an honor and pleasure to work beside you in the Tsunami Research Center. I will hold dear your lectures, life advice, and humor. Meisam, I am so grateful that you agreed to join my committee and truly appreciate your kindness in saving the day!

USC also deserves my deep gratitude for the opportunities it created by offering a variety of courses and for the unmatched freedom it gave students in choosing their coursework, which for me spanned five programs from business school to computer science.
I believe USC genuinely cares about its graduate students, especially the international ones, who may need some special care. Top-notch research centers, competitions, and, well, fun social events did not go unnoticed. Thank you!

Living abroad would have been impossible if it were not for my friends. I start with Omid Davtalab and Simin A. Karvigh, who welcomed me to the States and helped me make Los Angeles feel a bit more like home. Ali Ghahramani was my single source of information and a dear roommate. I was lucky to befriend Dr. Farrokh Jazizadeh, who quickly became my informal mentor and helped me shape my future. Ali Kazemian, whom I lured to come to USC, was a true companion through good and difficult times, though he is not the best PES/FIFA player. Thank you all! A very special thanks goes to Mehrdad Shokrabadi, who let me crash on his couch every now and then and was like family. I befriended Mehrdad more than two decades ago, in the first grade of primary school. We did our bachelor’s and master’s together at Tehran Polytechnic, and I feel blessed that we did our PhD’s in the same city, though he couldn’t experience a top school like USC and ended up at UCLA.

I had the privilege of having great officemates who soon became close friends. A special thanks goes to Nikos Kalligeris and Aykut Ayca for our countless intellectual discussions, brainstorming sessions, and memorable lunches and nights out. My hearty thanks to Hang Sik Ko, Adam Keen, Hoda El Safty, Luis Montoya, Vassilios Skanavis, Kostis Douligeris, and our newcomer Ezgi Cinar, for being there when I needed them and for making our lab a joyful one. I would also like to thank the administrative staff of the Civil and Environmental Engineering Department, the Computer Science Department, and the Viterbi Graduate Office, who went out of their way to help me. Jennie, Dragana, Tessie, Vangie, Alexis, Erick, Geraldine, Jake, Christine, Lizsl, Jennifer, and Tracy, thank you!
I should also thank and explicitly acknowledge those who contributed directly to parts of this dissertation. I acknowledge significant contributions from Ali Kazemian in designing and implementing the 3D Sand Printer, and from Vassilios Skanavis for the same on the Augmented Reality Sandbox. I thank and acknowledge my mentee interns, Jodi Yip (high school), Karen Urosa (undergraduate), Zhao Zhao (graduate), and Anurag Syal (graduate), who gave me a hand completing my dissertation and, more importantly, taught me mentorship. I acknowledge contributions from my co-author Hien To and help from Aykut Ayca in Chapter 4 of this dissertation.

I kept my family for the end because they are so special to me. Roshanak, my lovely wife, we did it! I will not work after midnight or on the weekends anymore. At least, not until I find my side research hustle … Thank you so much for enduring this journey with me and always supporting me. I also want to thank my parents, Manouchehr and Farideh, as well as my brother and sister, Ehsan and Elham, for their endless support. But I would also like to apologize for selfishly leaving them to pursue my dream, knowing I wouldn’t be coming back home for years. Little did I know that they would be banned from visiting me too. I look forward to a better world, one with more love and fewer borders.

To end this lengthy acknowledgement on a sweet note, and as a small tribute to all the amazing scientists who shaped our world and, well, made this dissertation possible, I present my academic tree in Figure i. The “advisor → student” relationships are mostly derived from Wikipedia. Note that in some cases the relationship between an advisor and a student is not a doctoral one. I am fascinated to see this many world-famous scientists among my academic ancestors and inspired to keep learning and loving science.
Figure i: Academic tree of Sasan Tavakkol

Contents

Chapter 1 - Introduction and Motivation
  1.1 Objectives of Study and Proposed Approach
    1.1.1 Developing a Robust Mathematical Model
    1.1.2 Interactive Simulations and Visualization
    1.1.3 Coupling with Crowdsourced Data
    1.1.4 Immersive Visualization
  1.2 Structure and Arrangement of Study
Chapter 2 - Mathematical Model
  2.1 Governing Equations
  2.2 Numerical Discretization
  2.3 Time Integration
    2.3.1 Uniform Time-stepping
    2.3.2 Adaptive Time-stepping
      2.3.2.1 Third-order Adaptive Adams-Bashforth Equation
      2.3.2.2 Variable-step Second Order Finite Difference Equations
      2.3.2.3 Prediction Equations
  2.4 Boundary Conditions
    2.4.1 Solid Wall
    2.4.2 Sinewave
    2.4.3 Sponge Layer
    2.4.4 Irregular Waves
    2.4.5 Time Series
  2.5 Wave Breaking
  2.6 Friction
  2.7 Solitary Waves
Chapter 3 - Software Development
  3.1 Celeris Zero
    3.1.1 Boussinesq 1D
    3.1.2 Boussinesq 2D
  3.2 Celeris Advent
    3.2.1 Software Documentation
      3.2.1.1 Source Files
      3.2.1.2 Input and Output Files
      3.2.1.3 Implementation
      3.2.1.4 Compilation
      3.2.1.5 Running Celeris Advent
      3.2.1.6 Key Features (Simulation Speed; Vivid Visualization; Interactivity)
    3.2.2 Numerical Validations
      3.2.2.1 Run-up on a Planar Beach
      3.2.2.2 Wave Focusing on a Semicircular Shoal
      3.2.2.3 Steady Flow over a Submerged Obstacle
      3.2.2.4 Solitary Wave Run-up on a Conical Island (Uniform Time-stepping; Adaptive Time-stepping)
    3.2.3 Suggestions for Future Work
  3.3 Celeris Base
    3.3.1 Key Differences
    3.3.2 Software Documentation
      3.3.2.1 Source Files
      3.3.2.2 Implementation (Structure; Solver; Compute Shaders; Rendering Mesh Generation; Shading and Materials; Map Rendering; Mouse Pointer Gauge; Buoy)
      3.3.2.3 Input and Output Files
      3.3.2.4 Compilation
      3.3.2.5 Running Celeris Base
    3.3.3 Numerical Validation
  3.4 Appendix (User Manual of Celeris Advent)
    3.4.1 Introduction
    3.4.2 Input Files
      3.4.2.1 Model Settings
      3.4.2.2 Field Parameters
      3.4.2.3 Initial Conditions
      3.4.2.4 Boundary Conditions
    3.4.3 Output Files
    3.4.4 Graphical User Interface (GUI)
    3.4.5 Running the Software
    3.4.6 Tutorial Case
Chapter 4 - Coupling with Crowdsourced Data
  4.1 Post-disaster Data Crowdsourcing
  4.2 Related Works
  4.3 Methodology
    4.3.1 Modeling Uncertainty of Events
    4.3.2 Modeling Significance of Events
    4.3.3 Problem Definition and Solution
    4.3.4 Reasoning
    4.3.5 Continuous Monitoring
      4.3.5.1 Updates in Time
      4.3.5.2 Local Proximity Enhancement
    4.3.6 Crowdreviewing
    4.3.7 Coupling with Simulation Software
  4.4 Performance Evaluation
    4.4.1 2001 Nisqually Earthquake
      4.4.1.1 Experimental Setup
      4.4.1.2 Data Acquisition and Interpretation System
      4.4.1.3 Realistic Synthetic Data
      4.4.1.4 Experiment Results
      4.4.1.5 Effect of Local Proximity Enhancement
      4.4.1.6 Crowdreviewing
    4.4.2 2014 South Napa Earthquake
      4.4.2.1 Data Collection
      4.4.2.2 Experiment
      4.4.2.3 Crowdreviewing
    4.4.3 Coupling with Celeris
  4.5 Appendix I
  4.6 Appendix II
Chapter 5 - Immersive Visualization
  5.1 Celeris VR
    5.1.1 Implementation
    5.1.2 Running Celeris VR
    5.1.3 Suggestions for Further Research and Development
  5.2 HAZAR
    5.2.1 Methodology and Design
      5.2.1.1 Projection Mapping
      5.2.1.2 Depth Camera
      5.2.1.3 Coding and Mathematical Model
    5.2.2 Sand 3D Printer
      5.2.2.1 Frame and Linear Stage
      5.2.2.2 Motor and Driver
      5.2.2.3 Control Software
      5.2.2.4 Printing Nozzle
      5.2.2.5 Sand Feeding System
      5.2.2.6 Testing and Operation
    5.2.3 Suggestions for Further Research and Development
      5.2.3.1 Sand 3D Printer
      5.2.3.2 Scalability
      5.2.3.3 Generic Release
      5.2.3.4 Celeris Hazar
Chapter 6 - Conclusion
  6.1 Mathematical Model
  6.2 Interactive Simulations and Visualization
  6.3 Coupling with Crowdsourced Data
  6.4 Immersive Visualization
References

List of Figures

Figure 2.1: Damping coefficient inside the sponge layer.
Figure 3.1: Simplified flowchart of Celeris.
Figure 3.2: Sample CML input file.
Figure 3.3: Sample shader code which handles a solid wall boundary condition.
Figure 3.4: Cyclic reduction algorithm and its implementation on the GPU.
Figure 3.5: Visualizations of an experiment with a realistic bathymetry. Celeris can visualize the surfaces in a realistic mode with reflections and refractions (a), with a colormap on the terrain (b), and with a colormap on the water surface (c). Video is available at https://youtu.be/yks7ePXbRyU.
Figure 3.6: A screenshot of Celeris Advent (v1.2.4). The water surface is colormapped according to its elevation and the inundated area on the island is highlighted.
Figure 3.7: Comparison of numerical non-dimensional maximum run-up of solitary waves on a 1:19.85 beach versus non-dimensional wave height with experimental data [36].
Figure 3.8: Breaking solitary wave run-up and rundown on a planar beach at t(g/h)^0.5 = (a) 15, (b) 20, (c) 25, (d) 45. The solid line represents the numerical results and the dots represent the experimental data of Synolakis [39].
Figure 3.9: Wave amplitude harmonics along the centerline for a = 0.0075 m and T = 2 s. Solid lines are numerical results from Celeris; symbols are experimental data from Whalin [40].
Figure 3.10: Water surface elevation for the case with a = 0.0075 m and T = 2 s from the Whalin [40] experiments.
Figure 3.11: Bathymetry of the flume in the submerged obstacle experiment.
Figure 3.12: Vortex shedding behind the submerged obstacle at different times (y axis is horizontal).
Figure 3.13: Experimental data (circles) and numerical simulation (solid line) for the u and v components of velocity at point #1 and point #2 for the submerged obstacle experiment [45]. The subscripts correspond to the point number.
Figure 3.14: Experimental setup of the conical island. The gauge locations are shown by dots and the wave approaches the island from the left.
Figure 3.15: Experimental (– –) and numerical (–) time series for the interaction of a solitary wave with H/d = 0.04 on a conical island, at gauges #6, #9, #16, and #22 (a-d).
Figure 3.16: Experimental (– –) and numerical (–) time series for the interaction of a solitary wave with H/d = 0.09 on a conical island, at gauges #6, #9, #16, and #22 (a-d).
Figure 3.17: Experimental (– –) and numerical (–) time series for the interaction of a solitary wave with H/d = 0.18 on a conical island, at gauges #6, #9, #16, and #22 (a-d).
Figure 3.18: Snapshots of the conical island with H/d = 0.18 near the time of maximum run-up at the front face (a), collision of wrapping waves (b), and maximum run-up on the back face (c).
Figure 3.19: Numerical (solid line) and measured (x) maximum horizontal run-up for H/d = 0.04 (a), 0.09 (b), and 0.18 (c).
Figure 3.20: Time-step change during the conical island experiment with adaptive time integration for different values of ϵ.
Figure 3.21: Celeris Base, running inside Unity3D.
Figure 3.22: Some of the important source files in Celeris Base.
Figure 3.23: YAML implementation of the GameManager object in the Main.unity scene of Celeris Base.
Figure 3.24: Unity3D editor and its different windows.
Figure 3.25: Game window and the Play button in the Unity3D editor.
Figure 3.26: Class UML diagram of Celeris Base solvers.
Figure 3.27: Rendering infrastructure in Celeris Base with disassembled tiles.
Figure 3.28: Textures of grid (a), jet colormapping (b), and terrain colormapping (c), used in Celeris Base renderings.
Figure 3.29: Sample photorealistic rendering in Celeris Base with different sky and grid options.
Figure 3.30: A variant of the Mercator map projection.
Figure 3.31: Geographic map of the coast of Newport, OR, retrieved from the Google Static Maps API by Celeris Base.
Figure 3.32: Simulation of the coast of Newport, OR, using Celeris Base with rendering of the geographic map of the location.
Figure 3.33: Mouse pointer acting as a gauge in Celeris Base.
Figure 3.34: Deploying virtual buoys in Celeris Base.
Figure 3.35: Experimental (– –) and numerical (–) time series for the interaction of a solitary wave with H/d = 0.18 on a conical island, at gauges #6, #9, #16, and #22 (a-d), simulated by Celeris Base.
Figure 3.36: Numerical (solid line) and measured (●) maximum horizontal run-up for H/d = 0.18, simulated by Celeris Base.
Figure 3.37: A sample XML element.
Figure 3.38: An XML element with attributes containing another element.
Figure 3.39: A sample CML input file.
Figure 3.40: A truncated sample Celeris bathymetry file.
Figure 3.41: Experimental setup of the conical island. The gauge locations are shown by dots and the wave approaches the island from the left.
Figure 4.1: Data collection and interpretation framework for a single time-step.
Figure 4.2: Entropy of a binary random variable.
Figure 4.3: Incorporating crowdreviewing in the data collection and interpretation framework.
Figure 4.4: Two-way coupling of tsunami simulation software with the data collection and interpretation framework.
Figure 4.5: Spectral acceleration at a period of 0.3 s for the 2001 Nisqually earthquake.
Figure 4.6: Fragility curves for bridge damage accounting for bridge age [55].
Figure 4.7: Screenshot of the MediaQ website, showing a debris line at Dockweiler State Beach, CA, after the 2016 El Niño.
Figure 4.8: Screenshot of the MediaQ website showing after (a) and before (b) scenes for the 2016 El Niño in Santa Monica, CA.
Figure 4.9: All highway bridges (circles), damaged bridges (red triangles), and DYFI response boxes (squares) in the disaster area of the 2001 Nisqually earthquake.
Figure 4.10: Entropy against the logarithm of daily traffic. Pareto layers are shown in color.
Figure 4.11: Images in the first time-step. Larger markers show bridges with larger entropy.
Figure 4.12: Total uncertainty percentage over time after the disaster, for different values of L and r.
Figure 4.13: Total percentage of damaged bridges discovered at the end of the experiment.
Figure 4.14: Percentage of damaged bridges discovered with (solid line) and without (dashed line) local proximity updates.
Figure 4.15: Damaged bridges discovered without (blue triangles) and with (blue and red triangles) local proximity enhancement for L = 20 and r = 1.00.
Figure 4.16: Total percentage of damaged bridges discovered by using crowdreviewing.
Figure 4.17: Intensity map of the 2014 Napa Earthquake from USGS.
Figure 4.18: Cumulative temporal distribution of tweets vs. DYFI reports.
Figure 4.19: Tweet times versus distance from the epicenter (black dots) and earthquake arrival at seismic stations (red dots).
Figure 4.20: Spatial distribution of tweets reporting damage (red), no damage (green), and uninformative tweets (blue).
Figure 4.21: Cumulative temporal distribution of tweets reporting damage, no damage, and uninformative tweets.
Figure 4.22: Moderate damage fragility curves for masonry buildings [68].
Figure 4.23: Probability of damage to buildings after the 2014 Napa Earthquake.
Figure 4.24: Total information gain over time for two prioritization strategies.
Figure 4.25: Number of damaged buildings discovered over time for two prioritization strategies.
Figure 4.26: Probability of damage for discovered damaged buildings over time.
Figure 4.27: Training on the damage levels for the “w/ training” group of participants.
Figure 4.28: Crowd and expert assessment of the damage level in the building images from the 2014 South Napa earthquake.
Figure 4.29: Crowd and expert assessment of the damage level in the building images from the 2014 South Napa earthquake for groups w/ and w/o training.
Figure 4.30: Oregon coast around Newport.
The solid rectangle shows the simulation area in Celeris and the dashed rectangle shows the disaster area considered for the experiment. ......... 124 Figure 4.31: Topography of the disaster area considered in the experiment. ............................. 124 Figure 4.32: Initial expected flood probability of cells in the disaster area. ............................... 127 Figure 4.33: Inundation of the disaster area for different values of κ. The red dots shows the randomly generated messages in the disaster area. ..................................................................... 128 Figure 4.34: Updating E(Pi) for κ = 0.5 by running scenarios and interpreting messages. The numbered red dots show the message added in each step. ......................................................... 129 Figure 4.35: Updating E(Pi) for κ = 1.0 by running scenarios and interpreting messages. The numbered red dots show the message added in each step. ......................................................... 130 Figure 4.36: Updating E(Pi) for κ = 1.5 by running scenarios and interpreting messages. The numbered red dots show the message added in each step. ......................................................... 131 Figure 4.37: Images from damaged buildings during 2014 South Napa earthquake, shown to participants in a survey to assess the damage level. Images are labeled by their ID’s. .............. 132 Figure 5.1: Oculus Rift VR headset and accessories. ................................................................. 138 Figure 5.2: Checkbox to enable VR in Unity ............................................................................. 139 Figure 5.3: Movement control in Celeris VR. ............................................................................ 140 Figure 5.4: Simulating a coastal area with Celeris VR. .............................................................. 140 Figure 5.5: Virtual flume setup in Celeris VR. 
........................................................................... 141 Figure 5.6: Spatially Augmented Reality on the Sydney Opera House during Vivid Sydney 2013. ..................................................................................................................................................... 142 Figure 5.7: Basic structure of HAZAR. ...................................................................................... 144 Figure 5.8: First (a) and second (b) prototypes of HAZAR project built at USC. ..................... 144 Figure 5.9: HAZAR prototype in action. Dark blue color shows augmented water. ................. 145 xvi Figure 5.10: Preliminary tests of HAZAR sand 3D printer. ....................................................... 147 Figure 5.11: The wooden frame and guide rails of HAZAR sand 3D printer. ........................... 148 Figure 5.12: The motor system of HAZAR sand 3D printer. ..................................................... 148 Figure 5.13: Iteration 1 of the nozzle design (variable deposition hole size). ............................ 149 Figure 5.14: Iteration 2 of the nozzle design (auger) .................................................................. 149 Figure 5.15: Iteration 3 of the nozzle design (lead screw and nut) ............................................. 150 Figure 5.16: Sand feeding system of HAZAR sand 3D printer .................................................. 150 Figure 5.17: Final assembly of HAZAR sand 3D printer. .......................................................... 151 Figure 5.18: Test result of HAZAR sand 3D printer (video available at https://youtu.be/yCcQeWGv8qw?t=2m43s). .............................................................................. 152 Figure 5.19: Presentation of HAZAR at GTC2016. ................................................................... 152 Figure 5.20: Possible use cases of WonderCast in education. 
.................................................... 153 xvii Abstract The aim of this dissertation is to democratize high-order and high-performance coastal wave modeling by developing a user-friendly GPU accelerated software which utilizes state-of-the-art visualization technologies. We call our software, Celeris, the Latin word for “quick”. This software is the first-of-its-kind with unprecedented features such as interactivity and an immersive environment. Celeris is open source and needs minimum preparation to run on a user-level laptop. The software solves the extended Boussinesq equations using a hybrid finite volume–finite difference method and supports moving shoreline boundaries. The simulation and visualization are performed on the GPU, which enables the software to run faster than real-time. Celeris supports simultaneous visualization with both photorealistic and colormapped rendering capabilities along with many other novel features. In this dissertation we elaborate the steps of developing this software such as the governing equations, our robust numerical method, parallelization algorithms, coding implementation, visualizations, validations, and so on. We also develop an entropy-based framework to prioritize data interpretation in disasters and couple it with Celeris to mitigate the most significant problems in disaster situation: collecting data from the disaster area, estimating the situation, and making decision under uncertainty. This framework ranks the crowdsourced data from the disaster area according to the amount of expected information contained in each piece. Then, a simulation software like Celeris is run, tuned by the collected information, to provide a better picture of the disaster scope. The new estimate of the disaster scope is then used to better prioritize the crowdsourced data for interpretation. 
Continuing this loop helps emergency managers use their limited interpretation resources to gain the most information from the disaster area.

Chapter 1 - Introduction and Motivation

Recent catastrophic tsunamis, such as the Tohoku Earthquake and Tsunami in Japan (2011), and disastrous hurricanes, such as Hurricane Sandy (2012), Hurricane Harvey (2017), and Hurricane Michael (2018), have raised global awareness of the urgent need to understand the response of developed coastal regions to tsunamis and wind waves. While these natural disasters are mostly unpreventable, the ensuing catastrophes can be avoided by careful engineering and thorough simulations [1]. Although there are well-known wave solvers (e.g., COULWAVE) to simulate tsunamis and wind waves, there is no solution for faster than real-time wave forecasting in coastal regions. Furthermore, currently available solutions often need to run on clusters of CPUs, which are not accessible to many engineering firms or even some researchers. Finally, the engineering tools available for coastal wave simulations are not equipped with modern features such as interactivity or immersive visualization. Our efforts in this research overcome these shortcomings. We define our main objective as follows:

Democratizing high-order and high-performance coastal wave modeling by leveraging the GPU in a user-friendly software package, and modernizing it with state-of-the-art visualization technologies.

Research with the Boussinesq-type equations has led to transformative changes in coastal engineering simulation and practice over the last few decades (e.g., [2]). These equations are powerful for the study of nearshore dynamics, including both nonlinear and dispersive effects. While Boussinesq-type equations are capable of simulating relatively short waves, they are computationally more expensive than their counterparts, the non-linear shallow water (NLSW) equations.
Boussinesq-type equations have widespread usage in scientific applications and coastal research, but their application in practical engineering and operations is limited. The main reason for this restriction is the high computational cost of Boussinesq-type models, which can amount to hours or days for large domains. The computational effort needed for Boussinesq-type equations hinders real-time simulations, unless parallel processing on dozens to hundreds of CPU cores is employed [3]. Such supercomputing facilities are neither easily accessible nor inexpensive, particularly for the often low-budget coastal and civil engineering projects to which they would apply. In addition to the high cost of accessing such facilities, running console-based Boussinesq-type models and post-processing the simulation results are often sophisticated tasks requiring a level of expertise that is normally only available at research institutes and universities. While Graphics Processing Units (GPUs) are affordable alternatives for accelerating these numerical models, they are not often leveraged for Boussinesq equations, perhaps because these equations do not easily lend themselves to massively parallel numerical schemes, due to their embedded implicit methods and large numerical stencils. A significant effort in the nearshore wave model community toward developing Boussinesq models has occurred in the past decade (e.g., [4]). Assuming that both nonlinearity and frequency dispersion are weak and of the same order of magnitude, Peregrine [5] derived the "standard" Boussinesq equations for variable depth in terms of the depth-averaged velocity and the free surface displacement. Numerical results based on the standard Boussinesq equations, or equivalent formulations, have been shown to give predictions that compare quite well with field data [6] and laboratory data [7].
Because it is required that both frequency dispersion and nonlinear effects are weak and of the same order, the standard Boussinesq equations are not applicable to very shallow water, where nonlinearity becomes more important than frequency dispersion, nor to deep water, where the frequency dispersion is of order one. The standard Boussinesq equations break down when the depth is greater than one-fifth of the equivalent deep-water wavelength. For many engineering applications, where the incident wave energy spectrum consists of many frequency components, a less restrictive depth limitation is desirable. To extend the applications to shorter waves (or deeper water), many modified forms of Boussinesq-type equations have been introduced (e.g., [8]–[10]). It has been demonstrated that the "modified" Boussinesq equations are able to simulate wave propagation from intermediate water depth (water depth to wavelength ratio of about 0.5) to shallow water, including wave-current interaction [11]. Despite the success of the modified Boussinesq equations in intermediate water depth, these equations are still restricted to weak nonlinearity. As waves approach the shore, wave height increases due to shoaling until the waves eventually break. The wave-height to water-depth ratios associated with this physical process violate the weakly nonlinear assumption. This restriction can be readily removed by eliminating the weak nonlinearity assumption (e.g., [12], [13]). Numerical implementations of the highly nonlinear Boussinesq-type equations include FUNWAVE (Fully Nonlinear Boussinesq Wave Model, e.g., [9]) and COULWAVE (Cornell University Long and Intermediate Wave Model, e.g., [14]). These models have been applied to a wide variety of topics, including rip and longshore currents [15], wave runup [16], wave-current interaction [17], and wave generation by underwater landslides [18], among many others.
While high-order models provide a better representation of physical processes, they can suffer from two important drawbacks. First, as the second-order corrections become more complex, the computational requirements, and thus the time to solve the system, can increase greatly. Often, for practical cases, simulations run at a fraction of real time on hundreds of cores. Second, while it can be clearly shown that high-order models agree better with analytical solutions and controlled laboratory experiments, the impact of these corrections becomes blurred upon application to real field cases. The inherent errors and uncertainties when hindcasting or forecasting a field site can overwhelm the high-order corrections. In these cases, there is little, if any, practical justification for an expensive model. Moreover, it can be very beneficial to certain users to have a coarse numerical representation of a site, if it can be provided very quickly. With a rapid wave simulation tool, simultaneous observation of the simulation results in a realistic way can significantly help researchers understand complex coastal processes. Moreover, providing an interactive simulation environment, where the user can alter the water surface and the bathymetry while the model is running, can give an even further boost to the functionality of a Boussinesq model. However, the standard practice in most, if not all, wave simulation models is to run the simulation on a PC or a cluster and write the results to disk at certain checkpoints. The results are later visualized using different tools. To the best of the author's knowledge, there is no Boussinesq-type modeling package that offers concurrent visualization or interactivity. To address these demands in the coastal engineering community, we developed a tremendously fast Boussinesq-type wave solver running on the GPU. We also developed a 3D visualizer and an interactive framework to incorporate the GPU solver.
The result is a software package, called Celeris, which not only runs faster than real time for the large domain sizes needed in engineering projects, but also provides unprecedented features such as concurrent visualization, an interactive interface, and an immersive environment. We introduced the first version of the software, Celeris Advent, in [19] and [20] and validated it against well-known benchmarks. Celeris can encourage engineers to adopt the Boussinesq model in low-budget engineering projects, which would in turn increase the reliability of coastal designs and reduce the cost of the projects and their maintenance. Moreover, the rapid simulation and concurrent visualization of Celeris make it a suitable model and, to the best of our knowledge, the only one capable of faster than real-time wind-wave and tsunami forecasting in coastal regions. This capability can be extremely useful in managing coastal disasters, where seconds matter. Our major objective in this research was developing a simulation and visualization software package for coastal waves based on the extended Boussinesq equations [8], [21] with the features discussed earlier, namely, faster than real-time simulation speed, concurrent realistic visualization, and interactivity. However, we took a step further and proposed a two-way coupling of such a system with the data stream collected from the disaster area. We finally explored the opportunities in immersive visualization of coastal waves to help engineers work in an interactive, collaborative environment, and also to raise global awareness about coastal disasters by providing tools to generate simulations and animations that can engage the public (e.g., 360º videos or Virtual Reality apps).
1.1 Objectives of Study and Proposed Approach

1.1.1 Developing a Robust Mathematical Model

The first step toward our goal of developing interactive and immersive coastal hydrodynamics tools was developing a robust mathematical model that can take in interactive changes without becoming unstable. Furthermore, such a model must be suitable for being solved massively in parallel. We developed a mathematical model and a hybrid finite volume-finite difference discretization for the extended Boussinesq equations such that the system can be solved on the GPU. The utilized discretization scheme provides the model with the required robustness. We also developed several equations and algorithms to force boundary conditions and alter them interactively.

1.1.2 Interactive Simulations and Visualization

We developed a software package capable of faster than real-time simulation of coastal waves and their runup on developed urban areas. We call our open-source software Celeris, the Latin word for "quick". This software is powered by the GPU and can provide faster than real-time simulations on an ordinary user-level laptop. While such a fast simulation is essential for effective disaster management and monitoring, it is not enough. The results of the simulations in an effective system must be easily and concurrently accessible to the end user, which means that post-processing visualization techniques are not acceptable. Furthermore, alterations in the model must be immediate, such that the user can interactively change the simulation parameters and rapidly test different scenarios. Finally, installing and running the software must be easy for coastal engineers, to encourage them to use it. Therefore, in addition to the rapid simulation capabilities, concurrent 3D visualization, interactivity, and ease of use are also among the objectives of this study.
Celeris, to the best of our knowledge, is the first interactive software for the simulation of nonlinear coastal waves. In this software, the user can interact with the water surface and topography using the system's mouse, for instance to add/remove water or raise/drop terrain. A GUI is also provided, by which the user can change the numerical and physical parameters on the fly. For example, a solitary wave can be added to the solution field, or a sine wave can be introduced at a boundary, all while the model is running. The concurrent visualization in Celeris can also revolutionize the standard practice in the coastal engineering community. Simultaneous observation of results in Celeris, with photorealistic or colormapped rendering, can significantly help researchers understand the coastal processes in a specific event. We discuss the development of several releases of Celeris, namely Celeris Zero, Celeris Advent, and Celeris Base. Our software is released as open source, in the hope that researchers will further improve it and add features to it.

1.1.3 Coupling with Crowdsourced Data

We propose a two-way coupling of Celeris with the data stream from the disaster area. In this two-way coupling framework, the numerical simulations first provide us with the scope of the disaster, enabling us to collect the geo-tagged data related to the disaster area and interpret the high-priority pieces. This interpretation provides more information about the real scope of the disaster, which in turn helps us tune the numerical simulation and continue the loop. Such a cyclic system would rapidly increase the situational awareness of the authorities and help them make more informed decisions. While the main direction of this research is studying coastal hazards, we keep the coupling framework as general as possible to extend its use cases to other types of disasters. The data coupling framework needs to be fed with continuous data from the disaster field.
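The cycle described above can be sketched in a few lines. The snippet below is only a schematic of the prioritization idea: geo-tagged messages whose current damage-probability estimate is most uncertain (i.e., has the highest entropy) are interpreted first, and a solver such as Celeris would then be re-run with the interpreted reports. All names here (`binary_entropy`, `prioritize`, `p_damage`) are illustrative and not part of Celeris.

```python
import math

def binary_entropy(p):
    """Shannon entropy (bits) of a binary event, e.g. 'site flooded or not'.
    Highest (1 bit) at p = 0.5, where the current estimate is most uncertain."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def prioritize(messages, budget):
    """Pick the `budget` geo-tagged messages whose locations carry the most
    expected information under the current simulated damage probabilities."""
    ranked = sorted(messages, key=lambda m: binary_entropy(m["p_damage"]), reverse=True)
    return ranked[:budget]

# One cycle of the loop: interpret the top-ranked messages; a solver such as
# Celeris would then be re-run with the interpreted reports to update p_damage.
reports = [
    {"id": "A", "p_damage": 0.95},  # almost surely damaged: little to learn
    {"id": "B", "p_damage": 0.50},  # maximum uncertainty: interpret first
    {"id": "C", "p_damage": 0.10},
]
to_interpret = prioritize(reports, budget=1)
```

Under this ranking, the message at the most uncertain location ("B") is interpreted first, since a confident estimate in either direction gains little from interpretation.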
However, after a disaster, obtaining data from the disaster area becomes cumbersome. The conventional method of dispatching inspection teams for field surveys is time-consuming and inefficient. Therefore, we propose data crowdsourcing as an efficient alternative means of data collection. The data sources can be sensors, CCTV cameras, text and Twitter messages from local people, phone calls to 911, and so on. Coupling data crowdsourcing with numerical models of disasters is a relatively new subject of interest. Two prior research works on applications of such a system in flood management can be found in [22], [23]. While crowdsourcing has been shown to be a cost-effective and time-efficient way to acquire disaster data [24] and then to analyze the collected data [25], the amount of collected data normally surpasses, by orders of magnitude, the maximum that can be analyzed. Therefore, we introduce a framework to prioritize data analysis in order to maximize the amount of information we can possibly extract under limited resources. We study several examples using the proposed framework and discuss its different aspects.

1.1.4 Immersive Visualization

Finally, we explore the opportunities in applying recent visualization techniques to coastal disaster simulation and management. We introduce our system, consisting of hardware and software, to visualize the simulation results of Celeris in an immersive way using virtual reality and spatial augmented reality. We call the virtual reality version of the software Celeris VR, and our augmented reality system the "Natural Hazards Spatial Augmented Reality Basin", abbreviated as HAZAR¹. Immersive visualization in HAZAR takes the next step in interactivity and lets individuals work together in a collaborative environment. The compelling immersive environment provided by our software not only helps engineers and researchers better understand complex coastal processes, but also facilitates education and raising awareness about coastal disasters.

¹ HAZAR is the Turkish word for Khazar, another name for the Caspian Sea. It also means "cautious" in Persian and Arabic.

1.2 Structure and Arrangement of Study

The introduction, motivations, and objectives of the study were briefly explained in this chapter. In Chapter 2, we discuss the development of the mathematical model behind our software. In Chapter 3, we introduce Celeris and discuss the details of the development process. Chapter 4 is dedicated to the data coupling and prioritization framework, and Chapter 5 discusses our development of immersive visualization techniques for coastal engineering research. Finally, we recap our study in the last chapter, Chapter 6.

Chapter 2 - Mathematical Model

The finite volume method (FVM), shock capturing, flux reconstruction, and limiters can make wave solvers more robust; such approaches are now commonly found in NLSW models (e.g., [26]). However, applying these techniques to Boussinesq-type equations is not straightforward [27]. Employing FVM and the associated solution-smoothing schemes significantly improves the robustness of the model. This is of high relevance here, as our goal is to provide an interactive simulation environment, where the user can alter the water surface and the bathymetry while the model is running. This interactive and immersive environment also needs fast, concurrent 3D visualization. We choose a hybrid finite volume-finite difference scheme to solve the governing equations. This hybrid discretization enables the software to benefit from the robustness of FVM, while retaining the high accuracy of the Boussinesq-type model. To achieve high computational speed, we solve the equations on the GPU, providing faster than real-time simulation speed on an average user laptop.

2.1 Governing Equations

The extended Boussinesq equations derived by Madsen and Sørensen [21] are a suitable set for a hybrid finite volume-finite difference scheme [3].
These equations for 2DH flow read as:

$$\mathbf{U}_t + \mathbf{F}(\mathbf{U})_x + \mathbf{G}(\mathbf{U})_y + \mathbf{S}(\mathbf{U}) = 0 \tag{2.1}$$

$$\mathbf{U} = \begin{bmatrix} h \\ P \\ Q \end{bmatrix}, \quad
\mathbf{F}(\mathbf{U}) = \begin{bmatrix} P \\ \dfrac{P^2}{h} + \dfrac{g h^2}{2} \\ \dfrac{PQ}{h} \end{bmatrix}, \quad
\mathbf{G}(\mathbf{U}) = \begin{bmatrix} Q \\ \dfrac{PQ}{h} \\ \dfrac{Q^2}{h} + \dfrac{g h^2}{2} \end{bmatrix}, \quad
\mathbf{S}(\mathbf{U}) = \begin{bmatrix} 0 \\ g h z_x + \psi_1 + f_1 \\ g h z_y + \psi_2 + f_2 \end{bmatrix}$$

where U is the vector of conserved variables, F(U) and G(U) are the advective flux vectors, and S(U) is the source term, which includes the bottom slope, friction, and dispersive terms. h is the total water depth. P and Q are the depth-integrated mass fluxes in the x and y directions, respectively, where the x-y plane forms the horizontal solution field. Subscripts x and y denote spatial differentiation with respect to the corresponding direction, and subscript t denotes temporal differentiation. z is the bottom elevation measured from a fixed datum. f₁ and f₂ are the bottom friction terms and g is the gravitational acceleration. ψ₁ and ψ₂ are the modified dispersive terms, defined as:

$$\psi_1 = -\left(B + \tfrac{1}{3}\right) d^2 \left(P_{xxt} + Q_{xyt}\right) - B g d^3 \left(\eta_{xxx} + \eta_{xyy}\right) - d\, d_x \left(\tfrac{1}{3} P_{xt} + \tfrac{1}{6} Q_{yt} + 2 B g d\, \eta_{xx} + B g d\, \eta_{yy}\right) - d\, d_y \left(\tfrac{1}{6} Q_{xt} + B g d\, \eta_{xy}\right) \tag{2.2}$$

$$\psi_2 = -\left(B + \tfrac{1}{3}\right) d^2 \left(P_{xyt} + Q_{yyt}\right) - B g d^3 \left(\eta_{yyy} + \eta_{xxy}\right) - d\, d_y \left(\tfrac{1}{3} Q_{yt} + \tfrac{1}{6} P_{xt} + 2 B g d\, \eta_{yy} + B g d\, \eta_{xx}\right) - d\, d_x \left(\tfrac{1}{6} P_{yt} + B g d\, \eta_{xy}\right) \tag{2.3}$$

where d is the still water depth and B = 1/15 is the calibration coefficient for the dispersion properties of the equations. The free surface elevation is η = w − wₛ, where w is the water surface elevation and wₛ is the still water surface elevation, both measured from the fixed datum. We use [w, P, Q]ᵀ as the set of unknown variables, in which w = h + z. To avoid introducing unnecessary complication to the equations, we refrain from substituting h with w − z; however, that is how h is calculated in practice. Assuming a bottom elevation constant in time, we have wₜ = hₜ. The extended Boussinesq equations provide sufficiently accurate linear dispersion and shoaling characteristics for values of kd < 3, where k is the wave number.
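The kd < 3 range quoted above can be checked against the well-known linearized dispersion relation of these equations, c²/(gd) = (1 + B(kd)²) / (1 + (B + 1/3)(kd)²) with B = 1/15, compared with the exact Airy relation c²/(gd) = tanh(kd)/(kd). The snippet below is a standalone sanity check of that claim, not part of Celeris:

```python
import math

def celerity_ratio_ms(kd, B=1.0 / 15.0):
    """Phase speed c / sqrt(g d) from the linearized dispersion relation of
    the Madsen-Sorensen extended Boussinesq equations (B = 1/15)."""
    return math.sqrt((1.0 + B * kd**2) / (1.0 + (B + 1.0 / 3.0) * kd**2))

def celerity_ratio_airy(kd):
    """Exact (Airy) linear-theory phase speed c / sqrt(g d)."""
    return math.sqrt(math.tanh(kd) / kd)

# Relative celerity error at the edge of the stated applicability range, kd = 3.
err = abs(celerity_ratio_ms(3.0) - celerity_ratio_airy(3.0)) / celerity_ratio_airy(3.0)
```

At kd = 3 the celerity error is only a few percent, consistent with the stated applicability range, while for kd → 0 both relations approach the shallow-water speed √(gd).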
Note that these equations automatically reduce to the Saint-Venant system of non-linear shallow water equations (NLSW) for d = 0. In locations where the still water surface elevation is not defined, such as on land above sea level, we set d = 0 so that the solver automatically reduces to NLSW.

2.2 Numerical Discretization

Following Wei and Kirby [28], Eq. (2.1) can be rearranged as:

$$w_t = E(P, Q) \tag{2.4}$$

$$U^*_t = F(h, P, Q) + \left[F^*(Q)\right]_t \tag{2.5}$$

$$V^*_t = G(h, P, Q) + \left[G^*(P)\right]_t \tag{2.6}$$

where the newly introduced quantities are defined by

$$U^* = P - \tfrac{1}{3} d\, d_x P_x - \left(B + \tfrac{1}{3}\right) d^2 P_{xx} \tag{2.7}$$

$$V^* = Q - \tfrac{1}{3} d\, d_y Q_y - \left(B + \tfrac{1}{3}\right) d^2 Q_{yy} \tag{2.8}$$

$$E(P, Q) = -\left(P_x + Q_y\right) \tag{2.9}$$

$$F(h, P, Q) = -\left(\frac{P^2}{h} + \frac{g h^2}{2}\right)_x - \left(\frac{PQ}{h}\right)_y - g h z_x - f_1 + B g d^3 \left(\eta_{xxx} + \eta_{xyy}\right) + B g d^2 \left(d_x \left(2\eta_{xx} + \eta_{yy}\right) + d_y \eta_{xy}\right) \tag{2.10}$$

$$G(h, P, Q) = -\left(\frac{Q^2}{h} + \frac{g h^2}{2}\right)_y - \left(\frac{PQ}{h}\right)_x - g h z_y - f_2 + B g d^3 \left(\eta_{yyy} + \eta_{xxy}\right) + B g d^2 \left(d_y \left(2\eta_{yy} + \eta_{xx}\right) + d_x \eta_{xy}\right) \tag{2.11}$$

$$F^*(Q) = \tfrac{1}{6} d\, d_x Q_y + \tfrac{1}{6} d\, d_y Q_x + \left(B + \tfrac{1}{3}\right) d^2 Q_{xy} \tag{2.12}$$

$$G^*(P) = \tfrac{1}{6} d\, d_x P_y + \tfrac{1}{6} d\, d_y P_x + \left(B + \tfrac{1}{3}\right) d^2 P_{xy} \tag{2.13}$$

This rearrangement allows us to rewrite Eq. (2.1) as ODEs in time. The left-hand side terms in Eqs. (2.4)-(2.6) are discretized in time. [F*(Q)]ₜ and [G*(P)]ₜ are evaluated by extrapolation in time, and the rest of the terms on the right-hand side are known at the current time step. Following [3] and [27], we use a hybrid FVM-FDM discretization to solve these equations on uniform Cartesian grids. The spatial domain is discretized by rectangular cells with fixed dimensions Δx and Δy. Each cell plays the role of a control volume for the FVM discretization. Cell centers and their corresponding cell-averaged values are used as the grid points in the FDM. The advective terms, along with the bottom slope term, are discretized using the second-order, well-balanced, positivity-preserving central-upwind scheme introduced by Kurganov and Petrova [26]. This scheme, sometimes known as KP07, is a finite volume method to solve the Saint-Venant system of shallow water equations. The rest of the terms are discretized using central FDM.

KP07 preserves stationary steady states (i.e., the scheme is well-balanced) and guarantees the positivity of the computed fluid depth. It supports a dry state with no need to keep track of the wet-dry front, and it can accommodate discontinuous bottom topography. Moreover, it is particularly suitable for implementation on the GPU [29]. We found this scheme to be a robust and accurate method, even with the single-precision implementation on the GPU. The method is well suited for the interactive and high-performance design of Celeris. Since the details of this scheme can be found in [26], we only describe its layout. The original KP07 for solving the shallow water equations consists of these steps:

1. The unknown variables, [w, P, Q]ᵀ, are linearly reconstructed (evaluated) at cell interfaces, applying a generalized minmod limiter to their derivatives.
2. A simple conservative correction is applied to w to preserve the positivity of h. The flow velocities, u and v, are calculated from

$$u = \frac{\sqrt{2}\, h P}{\sqrt{h^4 + \max(h^4, \epsilon)}}, \qquad v = \frac{\sqrt{2}\, h Q}{\sqrt{h^4 + \max(h^4, \epsilon)}} \tag{2.14}$$

where ϵ is a small predefined tolerance to avoid division by very small values or zero.
3. Fluxes are computed at each cell interface employing the central-upwind scheme.
4. Source terms are evaluated, and the unknown variables are found for the next time step.

To use KP07 as the FVM solver of our scheme, we add the dispersive terms as source terms, discretized by central FDM, in the last step.

2.3 Time Integration

2.3.1 Uniform Time-stepping

Uniform time integration is performed by a third-order Adams-Bashforth scheme as the predictor step, and an optional fourth-order Adams-Moulton scheme as the corrector step.
The predictor step reads as

w_ij^{n+1} = w_ij^n + (Δt/12)(23 E_ij^n - 16 E_ij^{n-1} + 5 E_ij^{n-2})    (2.15)
U*_ij^{n+1} = U*_ij^n + (Δt/12)(23 F_ij^n - 16 F_ij^{n-1} + 5 F_ij^{n-2}) + 2 F*_ij^n - 3 F*_ij^{n-1} + F*_ij^{n-2}    (2.16)
V*_ij^{n+1} = V*_ij^n + (Δt/12)(23 G_ij^n - 16 G_ij^{n-1} + 5 G_ij^{n-2}) + 2 G*_ij^n - 3 G*_ij^{n-1} + G*_ij^{n-2}    (2.17)

where the superscripts denote the step number in time, with n being the last step with known variables. The predictor step is explicit in time, which means that all the variables on the right-hand side of the equations are known. The corrector step is performed by

w_ij^{n+1} = w_ij^n + (Δt/24)(9 E_ij^{n+1} + 19 E_ij^n - 5 E_ij^{n-1} + E_ij^{n-2})    (2.18)
U*_ij^{n+1} = U*_ij^n + (Δt/24)(9 F_ij^{n+1} + 19 F_ij^n - 5 F_ij^{n-1} + F_ij^{n-2}) + F*_ij^p - F*_ij^n    (2.19)
V*_ij^{n+1} = V*_ij^n + (Δt/24)(9 G_ij^{n+1} + 19 G_ij^n - 5 G_ij^{n-1} + G_ij^{n-2}) + G*_ij^p - G*_ij^n    (2.20)

where the superscript p denotes values evaluated from the predicted (or previously corrected) solution. The corrector step is implicit in time. To solve it, the n+1 terms are first calculated by the predictor step (or by the previous corrector iteration); the corrector step is then iterated for a predefined number of times, or until the variables converge. Since the variables at previous time steps are not available in the very first two time steps of the simulation (i.e., n = 1 and n = 2), first-order Euler time integration is used for those two steps. The water surface elevation, w^{n+1}, is found directly by solving Eq. (2.15) or (2.18). However, to calculate the flux terms P^{n+1} and Q^{n+1}, the following set of implicit equations must be solved:

A^x_ij P_{i-1,j} + B^x_ij P_{i,j} + C^x_ij P_{i+1,j} = U*_ij    (2.21)
A^y_ij Q_{i,j-1} + B^y_ij Q_{i,j} + C^y_ij Q_{i,j+1} = V*_ij    (2.22)

where

A^α = d d_α/(6Δα) - (B + 1/3) d^2/Δα^2,  B^α = 1 + 2(B + 1/3) d^2/Δα^2,  C^α = -d d_α/(6Δα) - (B + 1/3) d^2/Δα^2    (2.23)

Eq. (2.21)/Eq. (2.22) results in a tridiagonal system of equations for each row/column of cells in the x/y direction.
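For reference, the coefficients of Eq. (2.23) and a serial tridiagonal solve (the Thomas algorithm) can be sketched as follows. This is illustrative CPU code for checking results, not the GPU implementation; the default dispersion parameter B = 1/15 is an assumed typical value:

```python
def tridiag_coeffs(d, d_x, dx, B=1.0 / 15.0):
    """Coefficients (A, B_, C) of Eq. (2.23) for one cell, x-direction."""
    A = d * d_x / (6 * dx) - (B + 1.0 / 3.0) * d**2 / dx**2
    B_ = 1 + 2 * (B + 1.0 / 3.0) * d**2 / dx**2
    C = -d * d_x / (6 * dx) - (B + 1.0 / 3.0) * d**2 / dx**2
    return A, B_, C

def thomas(a, b, c, r):
    """Serial Thomas solve of a tridiagonal system (a: sub-, b: main,
    c: super-diagonal, r: right-hand side); a[0] and c[-1] are unused."""
    n = len(b)
    cp, rp = [0.0] * n, [0.0] * n
    cp[0], rp[0] = c[0] / b[0], r[0] / b[0]
    for i in range(1, n):               # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        rp[i] = (r[i] - a[i] * rp[i - 1]) / m
    x = [0.0] * n
    x[-1] = rp[-1]
    for i in range(n - 2, -1, -1):      # backward substitution
        x[i] = rp[i] - cp[i] * x[i + 1]
    return x
```

Note that for d = 0 the coefficients collapse to (0, 1, 0), so Eq. (2.21) reduces to P = U*, consistent with the NLSW limit described earlier.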
To solve these systems efficiently on the GPU, we use the cyclic reduction (CR) method, which is described in more detail later.

2.3.2 Adaptive Time-stepping

The need for an adaptive time-stepping formulation arose from our observations of the model's instability in experiments with run-up on steep surfaces. In these cases, the flow velocity grows rapidly and the subsequent increase in the local CFL number leads to instability. We also observed that employing only the predictor step, with a sufficiently small time step, is satisfactory. Therefore, we derive the adaptive formulation only for the third-order Adams-Bashforth time integration, i.e., the predictor step but not the corrector step. In the adaptive mode, the software keeps the maximum local CFL number at a constant value by using a variable time step.

2.3.2.1 Third-order Adaptive Adams-Bashforth Equation

Kirby et al. [30] introduced a high-order adaptive time-stepping solver for Boussinesq-type equations using Runge-Kutta time-stepping, but to the best of our knowledge the adaptive equations for Adams-Bashforth time-stepping are not given in the literature. Therefore, we derive them in a general format and then use them for our own purpose. We aim to solve the following ODE:

X' = f(t, X),  X(t_0) = X_0    (2.24)

where X' denotes the derivative of X with respect to t. Let X_{i+1} denote our target variable at the next time step, t_{i+1}, and X_i the same variable at the current time step, t_i. We can approximate f(t, X) by the second-degree polynomial, p(t), such that:

p(t_{i-s}) = f(t_{i-s}, X_{i-s}),  for s = 0, 1, and 2    (2.25)

Employing the Lagrange formula for polynomial interpolation we have:

p(t) = Σ_{j=i-2}^{i} ( Π_{k=i-2, k≠j}^{i} (t - t_k)/(t_j - t_k) ) X'_j    (2.26)

where X'_j = f(t_j, X_j).
Now we can write:

X_{i+1} = X_i + ∫_{t_i}^{t_{i+1}} p(t) dt    (2.27)

X_{i+1} = X_i + ∫_{t_i}^{t_{i+1}} ( Σ_{j=i-2}^{i} ( Π_{k=i-2, k≠j}^{i} (t - t_k)/(t_j - t_k) ) X'_j ) dt    (2.28)

X_{i+1} = X_i + ( ∫_{t_i}^{t_{i+1}} (t - t_{i-1})/(t_{i-2} - t_{i-1}) × (t - t_i)/(t_{i-2} - t_i) dt ) X'_{i-2} + ( ∫_{t_i}^{t_{i+1}} (t - t_{i-2})/(t_{i-1} - t_{i-2}) × (t - t_i)/(t_{i-1} - t_i) dt ) X'_{i-1} + ( ∫_{t_i}^{t_{i+1}} (t - t_{i-2})/(t_i - t_{i-2}) × (t - t_{i-1})/(t_i - t_{i-1}) dt ) X'_i    (2.29)

Shifting the integration variable from t to t + t_i and introducing Δt_i = t_{i+1} - t_i, Δt_{i-1} = t_i - t_{i-1}, and Δt_{i-2} = t_{i-1} - t_{i-2}, we can write:

X_{i+1} = X_i + ( ∫_0^{Δt_i} (t + Δt_{i-1})/Δt_{i-2} × t/(Δt_{i-1} + Δt_{i-2}) dt ) X'_{i-2} - ( ∫_0^{Δt_i} (t + Δt_{i-1} + Δt_{i-2})/Δt_{i-2} × t/Δt_{i-1} dt ) X'_{i-1} + ( ∫_0^{Δt_i} (t + Δt_{i-1} + Δt_{i-2})/(Δt_{i-1} + Δt_{i-2}) × (t + Δt_{i-1})/Δt_{i-1} dt ) X'_i    (2.30)

Finally, after integration we have:

X_{i+1} = X_i + (Δt_i/6) [ ( Δt_i/Δt_{i-1} × (2Δt_i + 6Δt_{i-1} + 3Δt_{i-2})/(Δt_{i-1} + Δt_{i-2}) + 6 ) X'_i - ( Δt_i/Δt_{i-1} × (2Δt_i + 3Δt_{i-1} + 3Δt_{i-2})/Δt_{i-2} ) X'_{i-1} + ( Δt_i/Δt_{i-2} × (2Δt_i + 3Δt_{i-1})/(Δt_{i-1} + Δt_{i-2}) ) X'_{i-2} ]    (2.31)

Eq. (2.31) is the third-order adaptive Adams-Bashforth time integration equation. As a correctness check, setting Δt_i = Δt_{i-1} = Δt_{i-2} = Δt in this equation gives:

X_{i+1} = X_i + (Δt/12) [23 X'_i - 16 X'_{i-1} + 5 X'_{i-2}]

which is the same third-order Adams-Bashforth equation we use for uniform time-stepping.

2.3.2.2 Variable-step Second Order Finite Difference Equations

In order to solve Eq. (2.5) and Eq. (2.6) using Eq. (2.31), we also need to derive the second-order finite difference discretization for variable time steps. Let us approximate Y by Ŷ, and its derivative Y' = dY/dt by Ŷ'. We use a polynomial approximation for Y such that it satisfies

Ŷ(t_{i-s}) = Y(t_{i-s}),  for s = 0, 1, and 2    (2.32)

Let us define:

Ŷ = c_0 + c_1 (t - t_i) + c_2 (t - t_i)^2    (2.33)

The finite difference approximation of Y'_i then yields:

Y'_i ≈ Ŷ'_i = c_1    (2.34)

To meet the conditions in Eq. (2.32), we must have:

| 1   0                        0                        | | c_0 |   | Y_i     |
| 1   -Δt_{i-1}                (Δt_{i-1})^2             | | c_1 | = | Y_{i-1} |
| 1   -(Δt_{i-1} + Δt_{i-2})   (Δt_{i-1} + Δt_{i-2})^2  | | c_2 |   | Y_{i-2} |    (2.35)

Inverting the matrix of coefficients in (2.35), we can find the value of c_1:

Y'_i ≈ Ŷ'_i = c_1 = (2Δt_{i-1} + Δt_{i-2})/(Δt_{i-1}(Δt_{i-1} + Δt_{i-2})) Y_i - (Δt_{i-1} + Δt_{i-2})/(Δt_{i-1} Δt_{i-2}) Y_{i-1} + Δt_{i-1}/(Δt_{i-2}(Δt_{i-1} + Δt_{i-2})) Y_{i-2}    (2.36)

For the special case where Δt_{i-1} = Δt_{i-2} = Δt,

Y'_i = (3/(2Δt)) Y_i - (2/Δt) Y_{i-1} + (1/(2Δt)) Y_{i-2} = (1/(2Δt)) (3Y_i - 4Y_{i-1} + Y_{i-2})

which is the well-known second-order backward difference. Similarly, to derive Y'_{i-1}, we can use the following polynomial approximation and solve it to find Ŷ'_{i-1}:

Ŷ = c_0 + c_1 (t - t_{i-1}) + c_2 (t - t_{i-1})^2    (2.37)

| 1   Δt_{i-1}    (Δt_{i-1})^2 | | c_0 |   | Y_i     |
| 1   0           0            | | c_1 | = | Y_{i-1} |
| 1   -Δt_{i-2}   (Δt_{i-2})^2 | | c_2 |   | Y_{i-2} |    (2.38)

Y'_{i-1} ≈ Δt_{i-2}/(Δt_{i-1}(Δt_{i-1} + Δt_{i-2})) Y_i + (Δt_{i-1} - Δt_{i-2})/(Δt_{i-1} Δt_{i-2}) Y_{i-1} - Δt_{i-1}/(Δt_{i-2}(Δt_{i-1} + Δt_{i-2})) Y_{i-2}    (2.39)

For the special case where Δt_{i-1} = Δt_{i-2} = Δt,

Y'_{i-1} = (1/(2Δt)) Y_i - (1/(2Δt)) Y_{i-2} = (1/(2Δt)) (Y_i - Y_{i-2})

which is the well-known second-order central difference. To derive Ŷ'_{i-2} we can write:

Ŷ = c_0 + c_1 (t - t_{i-2}) + c_2 (t - t_{i-2})^2    (2.40)

| 1   Δt_{i-1} + Δt_{i-2}   (Δt_{i-1} + Δt_{i-2})^2 | | c_0 |   | Y_i     |
| 1   Δt_{i-2}              (Δt_{i-2})^2            | | c_1 | = | Y_{i-1} |
| 1   0                     0                       | | c_2 |   | Y_{i-2} |    (2.41)

Y'_{i-2} = -Δt_{i-2}/(Δt_{i-1}(Δt_{i-1} + Δt_{i-2})) Y_i + (Δt_{i-1} + Δt_{i-2})/(Δt_{i-1} Δt_{i-2}) Y_{i-1} - (Δt_{i-1} + 2Δt_{i-2})/(Δt_{i-2}(Δt_{i-1} + Δt_{i-2})) Y_{i-2}    (2.42)

For the special case where Δt_{i-1} = Δt_{i-2} = Δt,

Y'_{i-2} = -(1/(2Δt)) Y_i + (2/Δt) Y_{i-1} - (3/(2Δt)) Y_{i-2} = -(1/(2Δt)) (Y_i - 4Y_{i-1} + 3Y_{i-2})

which is the well-known second-order forward difference.

2.3.2.3 Prediction Equations

Now we have the tools to derive the prediction equations for w, U*, and V* with adaptive time-stepping.
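As a quick numerical cross-check of the adaptive building blocks above, the weights in Eq. (2.31) and Eq. (2.36) can be evaluated and compared with their uniform-step limits. The sketch below is illustrative Python, not part of the Celeris source:

```python
def ab3_weights(dt, dt1, dt2):
    """Weights of X'_i, X'_{i-1}, X'_{i-2} in Eq. (2.31), where
    X_{i+1} = X_i + dt/6 * (w0 X'_i + w1 X'_{i-1} + w2 X'_{i-2})
    and dt = Δt_i, dt1 = Δt_{i-1}, dt2 = Δt_{i-2}."""
    w0 = dt / dt1 * (2 * dt + 6 * dt1 + 3 * dt2) / (dt1 + dt2) + 6
    w1 = -(dt / dt1) * (2 * dt + 3 * dt1 + 3 * dt2) / dt2
    w2 = dt / dt2 * (2 * dt + 3 * dt1) / (dt1 + dt2)
    return w0, w1, w2

def backward_diff_weights(dt1, dt2):
    """Weights of Y_i, Y_{i-1}, Y_{i-2} in Eq. (2.36) for Y'_i."""
    c0 = (2 * dt1 + dt2) / (dt1 * (dt1 + dt2))
    c1 = -(dt1 + dt2) / (dt1 * dt2)
    c2 = dt1 / (dt2 * (dt1 + dt2))
    return c0, c1, c2
```

With uniform steps the weights reduce to 23/12, -16/12, 5/12 and 3/(2Δt), -2/Δt, 1/(2Δt), and with nonuniform steps Eq. (2.31) still integrates a quadratic right-hand side exactly, as expected from the interpolation-based derivation.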
Deriving the equation for w is straightforward and yields:

w_ij^{n+1} = w_ij^n + (Δt_n/6) [ ( Δt_n/Δt_{n-1} × (2Δt_n + 6Δt_{n-1} + 3Δt_{n-2})/(Δt_{n-1} + Δt_{n-2}) + 6 ) E_ij^n - ( Δt_n/Δt_{n-1} × (2Δt_n + 3Δt_{n-1} + 3Δt_{n-2})/Δt_{n-2} ) E_ij^{n-1} + ( Δt_n/Δt_{n-2} × (2Δt_n + 3Δt_{n-1})/(Δt_{n-1} + Δt_{n-2}) ) E_ij^{n-2} ]    (2.43)

For U* we have:

U*_ij^{n+1} = U*_ij^n + (Δt_n/6) [ ( Δt_n/Δt_{n-1} × (2Δt_n + 6Δt_{n-1} + 3Δt_{n-2})/(Δt_{n-1} + Δt_{n-2}) + 6 ) (F_ij^n + (F*_ij^n)_t) - ( Δt_n/Δt_{n-1} × (2Δt_n + 3Δt_{n-1} + 3Δt_{n-2})/Δt_{n-2} ) (F_ij^{n-1} + (F*_ij^{n-1})_t) + ( Δt_n/Δt_{n-2} × (2Δt_n + 3Δt_{n-1})/(Δt_{n-1} + Δt_{n-2}) ) (F_ij^{n-2} + (F*_ij^{n-2})_t) ]    (2.44)

To calculate U* we must also derive the equations for F*_t. We use the second-order finite difference equations derived in the previous section:

(F*_ij^n)_t = (2Δt_{n-1} + Δt_{n-2})/(Δt_{n-1}(Δt_{n-1} + Δt_{n-2})) F*_ij^n - (Δt_{n-1} + Δt_{n-2})/(Δt_{n-1} Δt_{n-2}) F*_ij^{n-1} + Δt_{n-1}/(Δt_{n-2}(Δt_{n-1} + Δt_{n-2})) F*_ij^{n-2}    (2.45)

(F*_ij^{n-1})_t = Δt_{n-2}/(Δt_{n-1}(Δt_{n-1} + Δt_{n-2})) F*_ij^n + (Δt_{n-1} - Δt_{n-2})/(Δt_{n-1} Δt_{n-2}) F*_ij^{n-1} - Δt_{n-1}/(Δt_{n-2}(Δt_{n-1} + Δt_{n-2})) F*_ij^{n-2}    (2.46)

(F*_ij^{n-2})_t = -Δt_{n-2}/(Δt_{n-1}(Δt_{n-1} + Δt_{n-2})) F*_ij^n + (Δt_{n-1} + Δt_{n-2})/(Δt_{n-1} Δt_{n-2}) F*_ij^{n-1} - (Δt_{n-1} + 2Δt_{n-2})/(Δt_{n-2}(Δt_{n-1} + Δt_{n-2})) F*_ij^{n-2}    (2.47)

The prediction equation for V* is derived similarly and is not presented here for the sake of brevity.

2.4 Boundary Conditions

Two layers of ghost cells are considered at each boundary and are used to implement the boundary conditions. Five types of boundary condition are implemented in Celeris: fully reflective solid wall, sinewave maker, sponge layer, irregular wavemaker, and time series.

2.4.1 Solid Wall

Solid walls are treated as fully reflective boundaries. To impose this condition, the values in the two cells closest to the boundary are mirrored onto the ghost cells. Mirroring ensures that the following conditions are met:

(P, Q) · n = 0,  ∇w · n = 0    (2.48)

where n is the normal vector to the solid wall.
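The mirroring in Eq. (2.48) can be sketched for a west boundary with two ghost layers. This is illustrative Python (the actual implementation is an HLSL shader, and the array layout and mirror indices here are assumptions for the sketch):

```python
def apply_west_wall(state):
    """Fill the two west ghost columns of state[i][j] = (w, P, Q) by
    mirroring interior cells, so that P·n = 0 and ∂w/∂x = 0 hold at the
    wall. Ghost columns are i = 0, 1; the wall sits between i = 1 and i = 2."""
    for j in range(len(state[0])):
        # ghost index -> mirrored interior index across the wall face
        for g, src in ((1, 2), (0, 3)):
            w, P, Q = state[src][j]
            state[g][j] = (w, -P, Q)  # negate only the wall-normal flux
    return state
```

Copying w unchanged enforces a zero normal gradient of the surface elevation, while negating P makes the normal flux vanish exactly at the wall face; the tangential flux Q is mirrored unchanged.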
2.4.2 Sinewave

To generate sinewaves with a given period (T), amplitude (a), and direction (θ) at the boundary, the values of η, P, and Q are assigned as follows:

η = a sin(ωt - k_x x - k_y y)    (2.49)
P = c cos(θ) η    (2.50)
Q = c sin(θ) η    (2.51)

where

c = ω/k,  ω = 2π/T,  k_x = cos(θ) k,  k_y = sin(θ) k    (2.52)

The wave number, k, is calculated using Eckart's [31] approximate solution of the dispersion relation:

k = (ω^2/g) √(coth(ω^2 d/g))    (2.53)

This implementation does not allow treatment of waves approaching the boundary, and it can be used only if nonlinearity is insignificant.

2.4.3 Sponge Layer

The sponge layer boundary condition in Celeris is implemented as an adjusted version of the method introduced by Tonelli and Petti [27]. They implemented this boundary condition by multiplying the values of η, P, and Q by a damping coefficient, γ(α), defined as

γ(α) = (1/2) (1 + cos(π (L_s - D(α))/L_s))    (2.54)

where α is substituted with x or y for boundaries perpendicular to the x-axis or y-axis, L_s is the width of the sponge layer, and D(α) is the normal distance to the absorbing boundary. The damping is applied only to the cells located inside the sponge layer. Tonelli and Petti [27] did not explain whether they apply this coefficient every time step or at certain time intervals. Figure 2.1 shows the value of γ(α) inside the sponge layer.

Figure 2.1: Damping coefficient inside the sponge layer.

We implemented this method in Celeris Advent v1.0.0, applying the coefficient to the flow parameters every time step, and observed that this implementation has an undesired steep damping effect. The problem occurs because the method damps the waves in a compound fashion. Consider the crest of a solitary wave with wave height H_0. In the first time step that the crest of the wave enters a west-side sponge layer, it is damped by γ(L_s - δ), where δ is the distance of the crest from the beginning of the sponge layer.
This distance is equal to the distance the wave has traveled in one time step, Δt. In the next time step, the already damped crest is damped by an even larger coefficient, γ(L_s - 2δ). This compound effect makes the crest of the wave diminish very quickly. We show that the correct approach is to damp the wave height (and the other flow parameters) such that the locus of the wave crest follows a shape similar to the one shown in Figure 2.1. To achieve this, the flow parameters must be multiplied at each time step by a coefficient, λ(α), such that the compound effect of this new coefficient resembles the shape of γ(α). In other words:

γ(L_s - nδ) = Π_{i=0}^{n} λ(L_s - iδ)    (2.55)

We can write:

γ(L_s - nδ) = λ(L_s - nδ) × γ(L_s - (n-1)δ)    (2.56)

or

λ(L_s - nδ) = γ(L_s - nδ) / γ(L_s - (n-1)δ)    (2.57)

Substituting L_s - nδ with α we have:

λ(α) = γ(α) / γ(α + δ)    (2.58)

As mentioned earlier, δ is the distance the wave crest travels in Δt. Therefore, given the wave celerity, c, we have:

λ(α) = γ(α) / γ(α + cΔt)    (2.59)

Eq. (2.59) shows that the ideal damping coefficient for each wave depends on its celerity; a perfect sponge layer would therefore treat each wave differently, which cannot be easily achieved. As a workaround, we use the shallow water wave celerity (i.e., c = √(gd)) as an approximation for c in Eq. (2.59) and damp the flow parameters by λ(α) at each time step. This method is implemented in Celeris Advent v1.3.4 and later.

2.4.4 Irregular Waves

To generate irregular waves on a boundary, we superpose a series of sinewaves with known periods, amplitudes, directions, and phases given in a text file. The superposition is done by a parallel sum reduction algorithm on the GPU to avoid excessive slowdown. The final values for η, P, and Q are forced on the boundary similarly to the sinewave boundary condition.

2.4.5 Time Series

Celeris also accepts a uniform time series as a boundary condition.
This condition is given to the software as a text file in which a series of (t, η, P, Q) quadruples are listed, each on a separate line. In each time step, the values of η, P, and Q on the boundary are calculated by linear interpolation in time using the input file and then forced on the boundary.

2.5 Wave Breaking

Wave breaking is not treated directly in Celeris. However, our experiments show that the numerical dissipation of the scheme, caused primarily by the minmod limiter, imitates the physical dissipation introduced by wave breaking. Kirby et al. [30] also discuss that in models with shock-capturing schemes, an explicit formulation for breaking wave dissipation might be unnecessary. The MOST model is another example in which numerical dissipation mimics wave breaking [32], [33]. As discussed before, our solver effortlessly reduces to the NLSW equations to continue simulating the run-up on the beach.

2.6 Friction

The friction terms in Eq. (2.1), which are particularly significant in run-up measurements, are given by:

[f_1 ; f_2] = f [P ; Q] √(P^2 + Q^2) / h^2    (2.60)

where f is the friction coefficient. In Celeris, the user can either set the friction coefficient to a constant value or use Manning's equation to derive it locally as:

f = g n^2 / h^{1/3}    (2.61)

where n is the Manning roughness coefficient. To avoid division by very small or zero values of h, the same technique as in Eq. (2.14) is used.

2.7 Solitary Waves

A solitary wave propagates over a horizontal bottom at a constant celerity and without change in its shape. Boussinesq equations permit such a wave of stationary shape provided that the nonlinear and dispersive effects are in balance. Celeris can take a set of solitary waves in its input file, with given wave heights, directions, and crest locations. These waves can also be added later via the GUI while the model is running.
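In code, adding such a wave amounts to evaluating a closed-form sech^2 profile, defined by the equations that follow, at every cell center and incrementing η, P, and Q. A minimal sketch in pure Python (the grid and state layout here are illustrative assumptions, not the Celeris data structures):

```python
import math

def add_solitary_wave(eta, P, Q, x, y, H, d, theta, x0, y0, g=9.81):
    """Superpose a solitary wave of height H on flat still-water depth d
    onto the state arrays eta/P/Q (2-D lists indexed [i][j]); x and y
    hold the cell-center coordinates."""
    k = math.sqrt(3 * abs(H) / (4 * d**3))   # solitary wavenumber
    c = math.sqrt(g * (H + d))               # solitary celerity
    for i in range(len(eta)):
        for j in range(len(eta[0])):
            s = (x[i] - x0) * math.cos(theta) + (y[j] - y0) * math.sin(theta)
            e = H / math.cosh(k * s)**2      # sech^2 free-surface profile
            eta[i][j] += e
            P[i][j] += c * e * math.cos(theta)
            Q[i][j] += c * e * math.sin(theta)
```

At the crest the added elevation equals H and the added flux equals c·H in the propagation direction, consistent with a wave translating at celerity c without change of form.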
We superpose a solitary wave on the solution domain by incrementing η, P, and Q in each cell by the values:

η_s = H_s sech^2(k_s ((x - x_0) cos(θ) + (y - y_0) sin(θ)))    (2.62)
[P_s ; Q_s] = c_s η_s [cos(θ) ; sin(θ)]    (2.63)

where H_s is the solitary wave height, θ is its direction, and (x_0, y_0) is the initial crest location. k_s and c_s are the wavenumber and celerity of the solitary wave, given by:

k_s = √(3|H_s| / (4d^3))    (2.64)
c_s = √(g(H_s + d))    (2.65)

Using the absolute value of H_s in Eq. (2.64) allows insertion of a depression wave (i.e., a single trough) in the software with a negative wave height. However, such a wave is not expected to maintain its shape.

Chapter 3 - Software Development

Development of Celeris consisted of two major steps. First, a robust solution of the mathematical model was developed such that it could be implemented on the GPU. The second step was the implementation of user-friendly software to drive the model. To fulfill the first step, we implemented the derived equations and the mathematical model in MATLAB to validate them and improve them where necessary. We call this MATLAB series of the model Celeris Zero. The purpose of developing Celeris Zero was mostly to validate the underlying mathematical model of the software, not its implementation on the GPU. Development of the mathematical model and its implementation in Celeris Zero started in spring 2014, and the first version was running in early 2015. After we became confident that our model was fit for our goal of developing interactive and immersive coastal simulation software, we started development of the first official series of Celeris, called Celeris Advent, in winter 2015. Our goal in developing Celeris Advent was to provide hassle-free software that can be run on off-the-shelf Windows machines with minimal preparation.
Therefore, we selected Microsoft's Direct3D library and its HLSL shader language to harness the power of the GPU, rather than general-purpose GPU programming libraries such as CUDA, which require some prior knowledge to prepare the system and run only on devices with NVIDIA GPUs. Celeris Advent is implemented mostly in C++ and HLSL, and it is open-source code developed and redistributed under the terms of the GNU General Public License as published by the Free Software Foundation. We submitted the first version of Celeris Advent (i.e., v1.0.0) for peer review to the journal Computer Physics Communications in September 2016, and soon after we released the software (v1.2.0) to the public in December 2016. As of the time of writing this section (October 2018), Celeris Advent has been downloaded over 1500 times by users spanning over 50 countries, and its user manual has been translated into Farsi, Spanish, and Italian by independent users. Members of academia, industry, and government from organizations such as Purdue University, Taylor Engineering Inc., and NOAA have put the software to use. Applications of Celeris Advent extend from research on complicated coastal phenomena to recreational surf forecasts. At USC we use Celeris to provide a five-day forecast of wave conditions at over 10 selected US coasts, available at http://coastal.usc.edu/waves/. Despite the widespread success of Celeris Advent as a simulation tool, we observed that other researchers were not successful in developing the software further as an open-source tool. Furthermore, we realized that implementing new features, such as an immersive environment, in Celeris Advent is time consuming. These issues made us consider revamping the software on a platform that makes further development easier. Celeris has a lot in common with video games: for example, GPU-based physics, attractive visualization, interactivity, and immersive visualization.
Therefore, we redeveloped Celeris from scratch in a game engine called Unity3D. This redevelopment has many advantages. For example, adding new features such as virtual reality (VR) to the software is much easier in Unity3D, as there are thousands of ready-to-use libraries (called "assets") available to download or purchase on the Unity3D Asset Store. We are optimistic that this ease of development will make scientists and researchers embrace the revamped software as a base for further development; hence, we called the new series Celeris Base.

3.1 Celeris Zero

3.1.1 Boussinesq 1D

We implemented a one-dimensional (1D) version of our model in MATLAB, called Celeris Zero, and validated it for simple cases such as a solitary wave running in a domain with periodic boundary conditions. This version runs entirely on the CPU. The main purpose of developing the 1D version was to ensure the correctness of our hybrid FVM-FDM discretization and the robustness of the solver.

3.1.2 Boussinesq 2D

After preliminary tests on the 1D version of Celeris Zero, we extended the model to the second dimension. However, the model was extremely slow on the CPU, so we had no choice but to use the GPU, despite the fact that the MATLAB GPU implementation was not going to be of much help later. We also validated the 2D version of Celeris Zero for simple cases and ensured that our mathematical model and numerical discretization work well. The 2D version of Celeris Zero remains a reference point and a development sandbox for the later series of the software.

3.2 Celeris Advent

Celeris Advent is a very fast Boussinesq-type wave solver running on the GPU. Furthermore, it has a 3D visualizer and an interactive environment to drive the GPU solver. It not only runs faster than real time for the large domain sizes needed in engineering projects, but also provides unprecedented features such as concurrent visualization and an interactive interface.
We introduced the first version of Celeris Advent in [19] and [20] and validated it against well-known benchmarks. Most of the text in this chapter is derived from these two references. Celeris Advent can be downloaded from www.celeria.org, and the source files are available at https://github.com/SasanTV/Celeris/. As of writing this document, the software has been downloaded over 1500 times by engineers and researchers spanning over 50 countries. According to the emails we have received from our users, tens of researchers and students have used Celeris in their research projects and theses. Similarly, many engineering firms and national organizations that used the software in their projects have reached out to us with feature requests or questions. Finally, several faculty members use this interactive software for teaching coastal engineering classes in universities such as Purdue University, University of Wisconsin-Madison, and Middle East Technical University.

3.2.1 Software Documentation

The fast computational speed of Celeris Advent comes from its GPU implementation for solving the governing equations and visualizing the results. We distribute Celeris Advent in its compiled version along with its open-source code under the GNU General Public License as published by the Free Software Foundation. We recommend that users work with the compiled version and recompile the software only if necessary. Note that the shader files are compiled at runtime; therefore, careful changes in those files do not require recompilation of the software. For instance, a partially reflective boundary condition can be introduced by changing the code of the sponge layer boundary condition in "compute.hlsl" without any recompilation.

3.2.1.1 Source files

Celeris Advent is written in C++ and Microsoft's shader language, HLSL, and it is coded on top of an earlier open-source demo project for modeling shallow water flows (Stephen Thompson, personal communication).
Figure 3.1 shows a simplified diagram of the software flow in Celeris Advent. The file named "main.cpp" takes care of the program flow, including reading the input file and calling the right functions in the loop. The bulk of the code is found in "engine.cpp". This file contains all the code that drives the GPU and calls the appropriate shaders for simulation and graphics rendering. It also writes data to disk at an optional user-defined frequency. The simulation shaders are found in "compute.hlsl" and the graphics shaders in "graphics.fx". Finally, the GUI is managed by "gui_manager.cpp".

Figure 3.1: Simplified flowchart of Celeris.

3.2.1.2 Input and output files

The input setup for a specific experiment can be given to Celeris as an XML (EXtensible Markup Language) file. XML files can be easily edited with any standard text editor. They are encoded with a set of labels (tags) in a format that is readable by both human and machine. To distinguish Celeris XML input files from generic XML files, we use CML as their extension. A sample CML file is shown in Figure 3.2. In the input file, the model type can be chosen between Boussinesq and NLSW, and the friction equation can be selected as Manning or quadratic. The field dimensions and grid sizes must also be entered in the input file. The bathymetry (topography) of the domain is given as the relative or absolute path to a formatted ASCII file. The initial condition can likewise be set by entering the path to a formatted ASCII file containing the initial values of w, P, and Q. Moreover, several solitary waves can be placed as initial conditions. A boundary type must be chosen for each boundary among "Solid", "Sponge", and "SineWave". Most of the values given to the software in the input file can later be altered via the GUI. Finally, the user can opt to save the w, P, and Q data periodically to disk, at its associated cost.
To minimize the slowdown, the user can choose to save data only at specific grid points (gauges) and/or over several ranges. The output files are written as formatted ASCII files.

3.2.1.3 Implementation

Shader languages such as HLSL are designed around the idea that GPUs generate pictures [34]. Therefore, to solve a computational problem with shaders, the problem must be reformulated in terms of graphics primitives and the data must be stored in textures. 2D textures are matrix-like data structures which are well suited to our 2D domain. Each cell in a texture, a texel, may hold several floating-point variables describing the traits of the texel. In Celeris, we mostly use float4 texels, which include three single-precision floating-point variables for the texel color, namely "r", "g", and "b", and one for the alpha channel, named "a". We use these variables to store flow parameters. For instance, a 2D texture of size nx × ny is defined to store the latest state of the flow; in each computational cell, w, P, and Q are stored in "r", "g", and "b", while "a" remains unused. Each step of the numerical scheme described earlier is performed by passing several textures, such as the flow state, bathymetry, and gradients, as resources to a shader and getting one output or, in graphics terminology, a render target texture. A sample shader that applies the solid wall boundary condition is shown in Figure 3.3.
<?xml version="1.0"?>
<Experiment>
    <name>Sample Experiment</name>

    <!-- Settings for Model -->
    <model type = "BSNQ">
        <parameters epsilon = 5e-12 correctionStepsNum = 2 timestep = 0.005></parameters>
        <friction type = "Manning" coef = 0.0></friction>
    </model>

    <!-- Settings for Solution field -->
    <fieldDimensions width = 30 length = 30 stillWaterElevation = 0></fieldDimensions>
    <gridSize nx = 601 ny = 601></gridSize>
    <bathymetryFilePath> \resources\bathy.cbf </bathymetryFilePath>

    <!-- Settings for Initial Condition -->
    <hotStartFilePath> N/A </hotStartFilePath>
    <solitaryWave H = 0.05 theta = 0 xc = 5 yc = 15></solitaryWave>
    <solitaryWave H = 0.05 theta = -45 xc = 5 yc = 25></solitaryWave>

    <!-- Settings for Boundaries -->
    <westBoundary type = "SineWave" seaLevel = 0 widthNum = 2>
        <sineWave amplitude = .01 period = 2 theta = 0></sineWave>
    </westBoundary>
    <eastBoundary type = "Sponge" seaLevel = 0 widthNum = 20></eastBoundary>
    <southBoundary type = "Solid" seaLevel = 0 widthNum = 2></southBoundary>
    <northBoundary type = "Solid" seaLevel = 0 widthNum = 2></northBoundary>

    <!-- Settings for Logging Data -->
    <logData doLog = true logStep = 20>
        <logPath>C:\conical_island\</logPath>
        <range filename = "island">
            <bottomLeft x = 228 y = 228></bottomLeft>
            <topRight x = 374 y = 374></topRight>
        </range>
        <gauges filename = "gauges">229,302,249,302,353,302,354,302,301,249</gauges>
    </logData>
</Experiment>

Figure 3.2: Sample CML input file.

float4 WestBoundarySolid(VS_OUTPUT input) : SV_TARGET
{
    const float3 in_state_real = txState.Load(int3(4 - input.tex_idx.x, input.tex_idx.y, 0)).rgb;
    return float4(in_state_real.r, -in_state_real.g, in_state_real.b, 0);
}

Figure 3.3: Sample shader code which handles a solid wall boundary condition.

After performing a user-defined number of computational time steps, the flow state and terrain are passed to the graphics renderer.
Several shaders are applied in order to visualize the results, with options for photorealistic rendering or value color-mapping. The most challenging part of the implementation is solving the tridiagonal matrix systems within the numerical scheme. The classic algorithm for such a system is the Thomas algorithm, consisting of a forward elimination and a backward substitution. However, this algorithm is inherently serial; employing it would generally require copying data from the GPU to main memory, running the serial solver, and copying the results back to the GPU. Such a process would significantly increase the running time of the software and would become the bottleneck for large domains. In Celeris, the tridiagonal systems are solved using the cyclic reduction (CR) algorithm [35]. CR also consists of two phases: forward reduction and backward substitution. In the forward reduction phase, the system is successively reduced to a smaller system with half the number of unknowns, until a system of 2 unknowns is reached, which can be solved trivially. In the backward substitution phase, the other half of the unknowns are found by substituting the previously found values into the equations. This process is illustrated in Figure 3.4.

Figure 3.4: Cyclic reduction algorithm and its implementation on GPU.

3.2.1.4 Compilation

Celeris Advent was initially written and could be compiled in Microsoft Visual C++ 2008 Express Edition; however, we ported the software to Microsoft Visual C++ 2018 and released it under the same license. The solution file, named "Celeris.sln", is included in the redistributions. For successful compilation, the latest DirectX SDK must be correctly installed. Celeris Advent consists of three open-source projects: a wrapper around operating system functions including Direct3D, named "Coerci"; a GUI library named "Guichan"; and the main project, named "Celeris".
The project Celeris uses an open-source XML parser called "TinyXML". Two folders, "shaders" and "graphics", are also included in the redistribution zip file. These folders contain the shader code and graphics textures (colormaps, fonts, etc.), and they must be placed appropriately in the solution folder so that they are found by the code.

3.2.1.5 Running Celeris Advent

As mentioned in the previous sections, Celeris starts from a CML input file; however, the user can change most of the settings from the GUI while the model is running. Celeris is launched by running the file named "Celeris.exe". After launch, the software looks in the file named "setting.init" for the absolute path to the input CML file. If such a path is not provided or the path is invalid, Celeris asks the user to choose the input file from a file browser window. After a successful launch, the numerical experiment begins immediately, and the results are visualized in a 3D environment with a movable camera. Using the GUI, the user can change the numerical and physical parameters of the experiment, such as the grid sizes, friction coefficient, and boundary conditions. Solitary waves can also be superposed on the field with a given location, height, and direction. Experiments can be paused or reset. The GUI of Celeris is briefly explained in a video available at https://youtu.be/pCcnPU7PCrg, and in more detail in its User Manual, which can be found as an appendix to this chapter (see Section 3.4).

3.2.1.6 Key Features

Simulation speed

Celeris Advent leverages the power of the GPU to solve the flow equations massively in parallel, giving the software a great deal of computational power. We have simulated coastal regions as large as 10 km^2 with over 7 million grid points, two times faster than real time.
While this unprecedented simulation speed is a major advantage of Celeris, it was not the main goal of its development; rather, it was a requirement for achieving our main goal of interactivity.

Vivid Visualization

Celeris Advent provides various visualization options. The water surface can be visualized in a photorealistic mode, where the reflection and refraction of rays hitting the water surface are calculated using the Fresnel equations, or by applying a colormap. This colormap can be set to represent η, u, v, etc. Several terrain textures are also available to enhance the visualization, and the user can apply a colormap to the terrain as well. A grid with a custom scale can be laid over the surfaces to improve the illustration of the surface elevation. While the visualization techniques used in Celeris are often interesting and novel, we refrain from discussing the details of the equations we developed or borrowed purely for visualization purposes, for the sake of brevity. Figure 3.5 shows a combination of these different options for the visualization of an experiment with a realistic relief and sinewaves on one of the boundaries. The software can also highlight the inundated areas according to a user-selected threshold (Figure 3.6).

Figure 3.5: Visualizations of an experiment with a realistic bathymetry. Celeris can visualize the surfaces in a realistic mode with reflections and refractions (a), with a colormap on the terrain (b), and with a colormap on the water surface (c). Video is available at https://youtu.be/yks7ePXbRyU.

Figure 3.6: A screenshot of Celeris Advent (v1.2.4).
The water surface is colormapped according to its elevation and the inundated area on the island is highlighted.

Interactivity

A key and unprecedented feature of Celeris Advent is its interactivity. The software not only allows the user to switch between different visualization options, but also to alter the simulation parameters while the model is running. For example, a new boundary condition can be introduced in the middle of an experiment, or the friction coefficient can be changed. Furthermore, Celeris Advent even lets the user alter the bathymetry as the model is running. This interactivity lets researchers and engineers quickly run an experiment under different conditions by altering the parameters on the go.

3.2.2 Numerical Validations

3.2.2.1 Run-up on a planar beach

Solitary wave propagation over a planar beach was experimentally studied by Synolakis [36]. In these experiments the beach slope was 1:19.85, and tens of trials were performed covering a wide range of solitary wave heights. This data set has been used for numerical validation many times by several researchers [16], [37], [38]. We simulate these experiments with a dozen wave heights in the range 0.005 < H/d < 0.5 and compare our numerical maximum vertical run-up to the experimental values. The chosen range of wave heights covers both breaking and non-breaking waves. For simulations with H/d < 0.01 we use Δx/h = 0.0625 and Δt(g/h)^0.5 = 0.0075, and for the rest of the simulations we use Δx/d = 0.25 and Δt(g/d)^0.5 = 0.03. The width of the simulation field is kept constant at W/d = 1, but the length is chosen in the range 100 < L/d < 1000 such that it appropriately accommodates the solitary wave. The beach is located close to the east boundary. The west boundary is a sponge layer and the two other boundaries are solid walls. Following Lynett et al. [16], we generated three sets of experiments with different constant quadratic bottom friction coefficients, f = 0.0, 0.01, and 0.001.
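The solitary waves used throughout these run-up tests follow the standard first-order solitary wave solution, in which the free surface is a sech² profile determined by the wave height H and the still-water depth d. A minimal sketch of that profile is given below; the function name is hypothetical, and Celeris's actual initialization lives in its shader code.

```cpp
#include <cmath>
#include <cassert>

// First-order solitary wave free-surface profile (standard weakly
// nonlinear solution): eta(x) = H * sech^2( sqrt(3H/(4 d^3)) * (x - x0) ).
// H: wave height, d: still-water depth, x0: crest location.
double solitaryEta(double x, double x0, double H, double d) {
    double k = std::sqrt(3.0 * H / (4.0 * d * d * d));
    double s = 1.0 / std::cosh(k * (x - x0));
    return H * s * s;
}
```

The surface elevation equals H at the crest (x = x0) and decays rapidly away from it, which is why the domain length only needs to be large enough to "appropriately accommodate" the wave.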
Figure 3.7 compares the numerical results with the experimental data, where the maximum vertical run-up and the solitary wave height are scaled by the water depth. For non-breaking solitary waves with H/d < 0.01, the bottom friction does not affect the maximum run-up, and the results agree quite well with the experiments. For larger breaking waves, the numerical results for different bottom frictions begin to diverge. Note that Celeris does not employ an explicit wave-breaking model. However, the minmod limiter used in the numerical scheme introduces sufficient numerical dissipation to resemble wave breaking. The achieved results are consistent with the results of Lynett et al. [16].

Figure 3.7: Comparison of the numerical non-dimensional maximum run-up of solitary waves on a 1:19.85 beach versus the non-dimensional wave height, with experimental data [36].

Synolakis [39] also provides snapshots of the water surface elevation using photographs of the waves during the run-up and run-down. One particular set of these snapshots, with H/d = 0.28, has been used by several researchers to evaluate their models. The results for this numerical experiment in Celeris are compared with the experimental data in Figure 3.8. Following [37], we used a friction factor of f = 0.0075 in this simulation. The comparisons indicate the ability of Celeris to accurately predict the run-up and run-down process for a breaking wave.

Figure 3.8: Breaking solitary wave run-up and run-down on a planar beach at t(g/h)^0.5 = (a) 15, (b) 20, (c) 25, (d) 45. The solid line represents the numerical results and the dots represent the experimental data of Synolakis [39].

3.2.2.2 Wave focusing on a semicircular shoal

We extend our validation to 2-D problems by first simulating the experiments of Whalin [40]. He studied the non-linear refraction–diffraction of regular waves propagating over a semicircular shoal in a wave tank which was 25.6 m long and 6.096 m wide.
The water depth in the tank gradually decreased from 0.4572 m to 0.1524 m. The bathymetry can be expressed by

    z = 0                     for 0 ≤ x < 10.67 − G
    z = (x − 10.67 + G)/25    for 10.67 − G ≤ x < 18.29 − G    (3.1)
    z = 0.3048                for 18.29 − G ≤ x

where G(y) = [y(6.096 − y)]^(1/2), 0 ≤ y ≤ 6.096. Harmonic analysis was performed on surface elevation time series along the tank centerline to obtain the amplitudes of the frequency components. The Whalin [40] experiments have become one of the standard benchmarks for Boussinesq wave models, and have been used for model validation by several authors in previous studies [27], [41]–[43]. We study the case with an incoming wave amplitude of a = 0.0075 m and a period of T = 2 s. We simulate this experiment with Celeris in a 35 m × 6.096 m domain, imposing a sinewave boundary condition on the west boundary and a 5 m sponge layer on the east boundary. The north and south boundaries are solid walls. The domain is discretized by 2000 × 65 cells with a time step of 0.001 s. On each cell along the centerline of the tank, the amplitudes of the first, second, and third harmonics are calculated based on FFT analyses and then compared to those of Whalin's experimental data in Figure 3.9. A snapshot of the water elevation is shown in Figure 3.10. The regular sinewaves coming from the boundary focus on the semicircular shoal, and higher harmonics appear due to the non-linear effects. The focusing of the waves can be clearly seen in Figure 3.10. The vertical scale is exaggerated by a factor of 80 in the software.

Figure 3.9: Wave amplitude harmonics along the centerline for a = 0.0075 m and T = 2 s. Solid lines are numerical results from Celeris, symbols are experimental data from Whalin [40].

Figure 3.10: Water surface elevation for the case with a = 0.0075 m and T = 2 s from the Whalin [40] experiments.

3.2.2.3 Steady flow over a submerged obstacle

Lloyd & Stansby [44] examined the wake and vortex shedding behind a sloping obstacle in a shallow water flow.
The conical obstacle remains submerged in this experiment, and the wake is generated on the lee side of the obstacle. With this benchmark we test the software's ability to generate a separation zone and, consequently, the oscillatory vortex street behind the obstacle. We consider the test case SB4_02 in [45]. In this experiment, a steady discharge with a velocity of U = 0.115 m/s, a water depth of h = 0.054 m, and a Reynolds number of Re = 6210 is created in a flume 9.84 m long and 1.52 m wide. The conical obstacle is placed at the center of the channel width, 5 m from the upstream control point. The side slopes of the obstacle are ~8 degrees, its height is 0.049 m, and its diameter at the base is 0.75 m. The top of the obstacle is flattened. Figure 3.11 shows a plot of the experiment bathymetry.

Figure 3.11: Bathymetry of the flume in the submerged obstacle experiment.

The values of the velocity components, u and v, are recorded over time at two points behind the obstacle in both the numerical and laboratory experiments. Point #1 is on the centerline of the channel, 1.02 m behind the center of the obstacle, and point #2 is at the same y value as point #1, but 0.27 m closer to the upper wall. We use a uniform time series boundary condition to let the flow enter the flume. The formation time and characteristics of the wake depend on the Manning's coefficient, n. We used 0 ≤ n ≤ 0.02 with a grid size of 0.015 m × 0.015 m. A street of vortices was observed in all the experiments; however, for the frictionless case (n = 0) the boundaries of the street did not become stable and kept moving. The period of fluctuations in the velocity components varied with the Manning's coefficient, and we observed the best agreement with the experimental data for n = 0.02. The rest of the results in this section are based on the case with n = 0.02. It must be noted that the recommended Manning's coefficient for this experiment in [45] is n = 0.01.
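As a quick consistency check, the quoted Reynolds number follows from Re = Uh/ν. The kinematic viscosity value ν ≈ 1.0 × 10⁻⁶ m²/s (water at roughly 20 °C) is an assumption on our part; [45] may define Re with slightly different reference quantities.

```cpp
#include <cmath>
#include <cassert>

// Depth-based Reynolds number, Re = U*h/nu.
// U: mean velocity [m/s], h: water depth [m],
// nu: kinematic viscosity [m^2/s] (assumed ~1.0e-6 for water).
double reynolds(double U, double h, double nu) {
    return U * h / nu;
}
```

With U = 0.115 m/s and h = 0.054 m this gives Re = 6210, matching the value reported for case SB4_02.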
Using the vertical vorticity visualization tool in Celeris Advent, the vortex shedding can be clearly seen after a rather short spin-up of about 60 s (Figure 3.12). The asymmetry in the flow grows larger until a steady street of vortices forms at t = 100 s. The pattern of the flow remains unchanged after this time.

Figure 3.12: Vortex shedding behind the submerged obstacle at different times (t = 40, 60, 80, 100, and 500 s; the y axis is horizontal).

Figure 3.13 compares the experimental data with our numerical results at point #1 and point #2. The computed u component of velocity at both points underestimates the laboratory measurements, though the period of oscillation is captured correctly. The computed v component shows good agreement with the recorded amplitude and period of oscillations at both points. It should be noted that other numerical Boussinesq-type models also do not provide perfect accuracy for this experiment [46].

Figure 3.13: Experimental data (circles) and numerical simulation (solid line) for the u and v components of velocity at point #1 and point #2 for the submerged obstacle experiment [45]. The subscripts correspond to the point number.

3.2.2.4 Solitary wave run-up on a conical island

Uniform time-stepping

As the final validation test, we reproduce the experiments of Briggs et al. [47] for solitary wave interaction around a conical island, a test case frequently used to validate numerical models [16], [37], [48], [49]. The experimental setup is shown in Figure 3.14. A circular island with a 7.2 m base diameter and a 1:4 side slope was located in a 30 m × 25 m wave tank with 0.32 m depth. Three cases with target relative wave heights of H/d = 0.05, 0.10, and 0.20 were simulated, and the maximum wave run-up on the island and surface elevation time series at several gauges were recorded.
We simulate the conical island experiments in a 30 m × 30 m numerical domain with the conical island in the center and a soliton placed as an initial condition near the west boundary. Sponge layers are imposed on the boundaries parallel to the soliton, and solid walls on the two other boundaries. The domain is discretized by 601 × 601 cells with a constant time step of 0.005 s. Bottom friction is neglected in these simulations. The test cases are performed with relative wave heights of H/d = 0.04, 0.09, and 0.18, which are slightly smaller than the target wave heights but closer to those observed downstream of the wave maker. Reduced wave heights are also used by [16], [37], and [49]. Gauges #6 and #9 are located in front of the island, while gauge #16 is on the side and gauge #22 is behind the island (Figure 3.14). The numerical surface elevations are compared with the experimental results in Figure 3.15 to Figure 3.17. The leading wave height and its shape are predicted very well in all cases. The initial draw-down is also predicted quite well for the two cases with larger wave heights. However, the draw-down is underestimated for the case with the smallest wave height. This deviation is consistent with the numerical results of the previously cited references.

Figure 3.14: Experimental setup of the conical island. The gauge locations are shown by dots and the wave approaches the island from the left.
Figure 3.15: Experimental (– –) and numerical (–) time series for the interaction of a solitary wave with H/d = 0.04 on a conical island, at gauges #6, #9, #16, and #22 (a–d).

Figure 3.16: Experimental (– –) and numerical (–) time series for the interaction of a solitary wave with H/d = 0.09 on a conical island, at gauges #6, #9, #16, and #22 (a–d).

Figure 3.17: Experimental (– –) and numerical (–) time series for the interaction of a solitary wave with H/d = 0.18 on a conical island, at gauges #6, #9, #16, and #22 (a–d).

Snapshots of the experiment for the case with H/d = 0.18 are shown in Figure 3.18. The moment of maximum run-up on the front face of the island is shown in Figure 3.18a. The time when the wrapping waves collide behind the island is captured in Figure 3.18b. In these two figures, the water surface is rendered by a colormap representing the lateral velocity, v, where lighter colors represent positive values and darker colors represent negative values. Figure 3.18c shows the time of the maximum run-up on the back face of the island. The colormap in this figure represents η.

Figure 3.18: Snapshots of the conical island with H/d = 0.18 near the time of maximum run-up at the front face (a), collision of the wrapping waves (b), and maximum run-up on the back face (c). The vertical scale is exaggerated by a factor of 10.

The predictions of the current model are slightly better than the numerical results in [16], [37], and [49]. For instance, the double peak in Figure 3.17c is resolved better in the current model. This might be because of the finer resolutions used in the current study, which were feasible only due to the fast computational speed of Celeris.
For example, Fuhrman and Madsen [49] reported a 3.3 h simulation running time on a single 3.2 GHz Pentium 4 processor with a 234 × 201 computational grid. Celeris Advent completes this test with the same number of cells in less than 15 s, on a modest PC with an NVIDIA Quadro K600 graphics card and a 1.8 GHz Intel Xeon CPU. The experiment takes less than 2 s on a gaming laptop with an NVIDIA GeForce GTX 1060 graphics card. The agreement of the numerical results with the measured values in the case with the highest wave height is the most interesting, as in this case the soliton breaks along the island. The Fuhrman and Madsen [49] model, which does not utilize a breaking model, overpredicts the run-up at gauge #22 by about 25% for this case. However, the predictions of Lynett et al. [16] and Tonelli and Petti [37], which consider wave breaking, are closer to the measurements. Our model also gives a close prediction at this location, which confirms that the minmod flux limiter employed in Celeris does a good job of imitating a breaking model. Finally, the numerical and measured maximum horizontal run-ups are compared in Figure 3.19. The horizontal run-ups are scaled by the initial shoreline radius (2.32 m). The wave direction is from west to east. A threshold of δ = sΔx/3 is chosen for the water depth to determine the maximum run-up. The run-up values for the selected δ were invariant for different grid sizes. The agreement for all cases is very good and comparable to that achieved by Lynett et al. [16], Fuhrman and Madsen [49], and Tonelli and Petti [37]. The run-up on the back face of the island is also captured very well in these simulations. This run-up is generated by the superposition of the waves wrapping around the island.

Figure 3.19: Numerical (solid line) and measured (x) maximum horizontal run-up for H/d = 0.04 (a), 0.09 (b), and 0.18 (c).
Adaptive time-stepping

We developed the adaptive time-stepping feature long after releasing and validating the software, which is why all the benchmarks presented so far were performed with uniform time-stepping. In this section we re-run the conical island benchmark with adaptive time-stepping to validate this feature. The domain is discretized by 301 × 301 cells with an initial time step of 0.005 s and no corrector step, which is equivalent to CFL = 0.11. We only run the experiment for H/d = 0.18. The numerical results for adaptive and uniform time-stepping at the gauges and on the island were identical and are therefore not shown here. However, the adaptive mode was able to run experiments with ϵ values as small as 10^-13, while the simulation becomes unstable with uniform time-stepping for ϵ < 10^-12 in this experiment. Figure 3.20 shows how the time-step size (dt) changes during the experiment with adaptive time integration for different values of ϵ. As the wave reaches the island and run-up occurs, a thin layer of water is created. The value of hu divided by the small value of h results in a large local velocity and therefore an increase in the local CFL number. The software therefore decreases the size of the time step in order to keep the CFL number constant. The thickness of the water layer on the island is somewhat limited by the value of ϵ, which is why the drop in the time-step size gets smaller for larger values of ϵ.

Figure 3.20: Time-step change during the conical island experiment with adaptive time integration for different values of ϵ.

3.2.3 Suggestions for Future Work

Development of software never truly ends, and there is always room for improvement. We released Celeris Advent as open source software and hope that researchers will continue improving it and adding new features.
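Before moving on, the adaptive time-stepping rule described in the previous subsection can be condensed into a few lines. This is only a sketch under stated assumptions (the names are hypothetical, and the production logic runs inside the GPU kernels): the depth is clamped by ϵ before the velocity is formed, and dt is rescaled so that the local CFL number stays at its target.

```cpp
#include <algorithm>
#include <cmath>
#include <cassert>

// Sketch of the adaptive time-step selection: a thin wet layer on the
// island (small h, finite hu) yields a large local velocity hu/h and
// wave speed, which forces a smaller dt for a fixed target CFL number.
double adaptiveDt(double h, double hu, double dx, double g,
                  double epsilon, double targetCFL) {
    double hc = std::max(h, epsilon);              // clamp tiny depths by eps
    double u  = hu / hc;                           // local velocity
    double c  = std::fabs(u) + std::sqrt(g * hc);  // local characteristic speed
    return targetCFL * dx / c;                     // dt keeping CFL constant
}
```

This also shows why the dip in dt shrinks for larger ϵ: a larger clamp bounds hu/h and hence the peak wave speed on the inundated slope.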
In this section we discuss one possible improvement which could make the software run faster. In any kind of algorithmic programming there is a tradeoff between memory (space) and computational time. This tradeoff becomes more important in GPU programming, because I/O between the CPU and GPU is quite expensive. While we developed Celeris Advent bearing the possible tradeoffs in mind, there are still opportunities to make the software run faster. We encourage researchers to explore the code to potentially decrease the computational time by considering techniques and tradeoffs such as memory caching vs. recalculation, GPU memory access patterns and padding, CPU threading, loop unrolling, etc. It should be noted that minimizing the computational time was not our only objective in the development of Celeris, and therefore we did not aim to sacrifice features of the software, such as visualization, interactivity, and the readability of its code, to make it run faster.

3.3 Celeris Base

Celeris Advent was developed in native C++ and HLSL code with little help from external libraries. For example, all 3D visualization techniques, such as camera rendering, camera movements, the skybox, reflections, etc., were implemented from scratch. While such an approach has its own advantages, such as potentially higher performance, it is not without drawbacks. The main issue for us was the difficulty of adding new features, such as immersive visualization, to the software. Furthermore, Celeris Advent was developed with a focus on the core scientific functionality and less attention to recommended software architectural patterns such as modularity, object-oriented programming, or the model–view–controller (MVC) pattern. We observed that other researchers were not successful in developing the software further, or even recompiling it, after a year had passed from the initial release date. These issues led us to revamp the software.
Celeris has a lot in common with video games, for example GPU-based physics, attractive visualization, interactivity, and immersive visualization. Therefore, we redeveloped Celeris from scratch in a game engine called Unity3D. This redevelopment has many advantages. For example, adding new features such as virtual reality (VR) to the software is much easier in Unity3D, as there are hundreds of ready-to-use libraries (called “assets”) available to download or purchase on the Unity3D Asset Store. The main difference between Celeris and a game is the accuracy of the physics engine that we need in Celeris. That is why we did not rely on the physics engine of Unity3D and instead developed our own engine for the wave simulation, based on the exact same equations described in Chapter 2 and used in Celeris Advent. Using a popular development platform such as Unity3D provides developers with large community support, which was not available when developing Celeris Advent. Furthermore, we paid more attention to software design in the new version and employed state-of-the-art engineering practices to develop a modular software package. We are optimistic that these features and the introduced ease of development will make scientists and researchers embrace the software as a base for further developments; hence, we call the new version Celeris Base. Figure 3.21 shows the software inside its development platform, Unity3D.

Figure 3.21: Celeris Base, running inside Unity3D.

3.3.1 Key Differences

The key differences between Celeris Base and Celeris Advent are in their implementations. Celeris Base is developed in Unity3D using C#, while Celeris Advent is developed in C++. The computations in both are done using shaders, but we used a different kind of shader in Celeris Base, called compute shaders. As the naming suggests, these shaders are designed to handle computations, unlike the regular shaders used in Celeris Advent, which are designed to render graphics.
Compute shaders are also written in HLSL; however, there is no need to build a dummy graphics pipeline to run them. They can simply be run on the GPU using a “Dispatch” call, which makes them much easier to use for general-purpose programming. For visualization, we use regular shaders in Celeris Base just like in Celeris Advent; however, Unity3D shaders are written in a variant of HLSL called Cg. We paid more attention to the software architecture of Celeris Base and used recommended techniques in object-oriented programming such as polymorphism, design patterns, dependency injection, etc. This careful engineering makes the software a true base for further developments. For example, we discuss how a VR version of Celeris is developed on top of Celeris Base in the next chapters. Finally, Celeris Base has several new features compared to Celeris Advent. For example, the user can place a gauge at a point to plot the water surface elevation in real time, or record a 360° video of the experiment (e.g., https://youtu.be/tJeGviPzwEs). Celeris Base is also released as open source software under the permissive MIT license. This license puts only very limited restrictions on reuse of the software and therefore leaves more room and motivation for future innovative developments based on Celeris Base. Celeris Base uses proprietary software libraries (such as Surface Waves, by Code Animo) which are excluded from the open source license.

3.3.2 Software Documentation

3.3.2.1 Source Files

A Unity3D project normally consists of several folders and hundreds of files; however, the Assets folder is the one which contains most of the important files. The rest of the folders may contain auto-generated files such as libraries, etc. In the Assets folder of Celeris Base, there are three major folders: Celeris, Code Animo, and UI. The Celeris folder contains most of the work done to build the software.
Figure 3.22 shows the directory paths and files in each subdirectory of this folder. The Scripts folder hosts most of the code written to create the solver and the rest of the software. Most of the files implement a single class which handles one specific action or feature in the software. The main flow of the software is controlled by GameManager.cs. The Boussinesq solver is driven by TL17Driver.cs (TL17 stands for Tavakkol and Lynett (2017) [19]). There is also a driver for an NLSW solver, called KP07Driver.cs (KP07 stands for Kurganov and Petrova (2007) [26]). Both classes are derived from a base class implemented in KP07Base.cs. The base class contains the code to solve the advective terms common to the two solvers. This base class is itself a derived class of Solver.cs, which connects the solver classes to the rendering infrastructure and compute shaders. The Solver class also contains shared features such as logging, and implements virtual functions such as TimeStep() which are overridden by the child classes. Two main compute shaders, TL17.compute and KP07.compute, contain the kernels (GPU code) for the Boussinesq and NLSW solvers, respectively. These two files import (include) a compute shader called base_KP07.compute, which, like the driver classes, contains the common kernels between the two algorithms. Furthermore, each main compute shader includes other compute shaders such as time_integration.compute, tools.compute, etc. The numerous files and classes in Celeris Base, in contrast to Celeris Advent, are an indication of our modular and object-oriented design for the second series of our software. This design not only was very helpful in the development of the product, but is also expected to significantly benefit further developments on top of Celeris Base. For example, a driver class for a new Boussinesq model (say, based on [30]) can be developed and implemented, which can inherit from Solver.cs and leverage the ready-to-use framework.
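The extension point just described can be illustrated with a small sketch. Celeris Base itself is written in C#; the C++ analogue below only shows the shape of the hierarchy, and MyBoussinesqDriver, along with all the method bodies, is hypothetical.

```cpp
#include <string>
#include <cassert>

// C++ analogue of the Celeris Base extension point: an abstract Solver
// with a virtual TimeStep() that concrete model drivers override.
class Solver {
public:
    virtual ~Solver() = default;
    // Overridden by concrete models to advance the fields by one step.
    virtual std::string TimeStep() = 0;
};

// A hypothetical new Boussinesq model driver plugged into the framework.
class MyBoussinesqDriver : public Solver {
public:
    std::string TimeStep() override {
        // ...dispatch this model's compute kernels here...
        return "MyBoussinesqDriver step";
    }
};
```

The framework only ever holds a Solver reference and calls TimeStep(), so a new model slots in without touching the rendering or logging infrastructure.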
The folder GPGPU contains the rendering shaders and non-solver compute shaders. The rendering shaders have a .shader extension and may import files ending in .cginc. Shaders are applied to objects through files called “Materials”, which are in the Materials folder. The entry points in Unity3D projects are called “scenes”. The only scene in Celeris Base is Main.unity, which should be opened from the Unity3D editor to start and run the software. The second important folder under Assets is Code Animo. This folder contains a commercial asset (a Unity3D library) called Surface Waves. This asset is a gaming-grade water simulator which helped us with the visualization of surfaces (water and terrain) as well as constructing the foundation of the software. It is important to note that this commercial library is not (and could not be) released under our open source license. The last major folder in Celeris Base is the UI folder under Assets. This folder, as the naming suggests, contains the code which creates our UI. For the sake of brevity, we refrain from explaining the rest of the source files.

Celeris
│   LICENSE.txt
│
├───CML Examples
│   │
│   ├───Conical Island
│   │       conical_island.cml
│   │       conical_island.cbf
│   │       ...
│   │
│   └─── ...
│
├───Materials
│       TerrainMap.mat
│       WaterParula.mat
│       ...
│
├───Prefabs
│       Buoy.prefab
│       Buoy.prefab.meta
│
├───Scripts
│   │   CmlElement.cs
│   │   GameManager.cs
│   │   InitializeArray.cs
│   │   MapManager.cs
│   │   ReadCBF.cs
│   │   ReadCML.cs
│   │   SimSetting.cs
│   │   Singleton.cs
│   │   ...
│   │
│   ├───GPGPU
│   │   │   ...
│   │   │
│   │   ├───ComputeShaders
│   │   │       CopyWater… .compute
│   │   │
│   │   ├───Rendering
│   │   │       WaveRenderer.cs
│   │   │
│   │   └───Rendering Shaders
│   │           ComplexWater.cginc
│   │           Heightmap… .cginc
│   │           Photoreal… .shader
│   │           WaterColor… .shader
│   │           ...
│   ⋮
│   ├───Mouse
│   │       Buoy.cs
│   │       MouseInteraction.cs
│   │       PointerGauge.cs
│   │       ...
│   │
│   └───Solvers
│       │   Boundaries.compute
│       │   Solver.cs
│       │
│       ├───Boussinesq
│       │       TL17.compute
│       │       TL17Driver.cs
│       │
│       ├───NLSW
│       │       KP07.compute
│       │       KP07Driver.cs
│       │
│       └───Shared
│               base_KP07.compute
│               boundary_… .compute
│               commons.compute
│               globals.compute
│               helpers.compute
│               IrregularWaves.cs
│               KernelLookUp.cs
│               KP07Base.cs
│               textures.compute
│               time_integ… .compute
│               tools.compute
│
├───Scenes
│       Main.unity
│
├───Textures
│       Colormap.bmp
│       Contours.bmp
│       ...
│
└───Third Party
        ...

Figure 3.22: Some of the important source files in Celeris Base.

3.3.2.2 Implementation

Unity3D is a popular game engine which provides the developer with a rendering engine, scripting support, etc., along with a powerful visual editing tool. Every piece of content in Unity, such as an avatar, the camera, a popping sound, the logic, etc., is a GameObject, or at least is driven by one. GameObject is a class, and every GameObject in the game is an instance of it. The behavior of GameObjects is defined and controlled by components. For example, a piece of code must be attached to the camera GameObject to control its movement. Note that components are instances of classes, and therefore have properties or variables. These variables are set to certain values, serialized, and saved, such that each time the GameObject is instantiated, the variables of its components take the pre-defined values. In our camera example, a speed variable may define the speed of the camera movement, and it can be set to a certain value, say 10. This value will be assigned to the variable each time we start the game. The entry point to a Unity project is a scene. A scene is the collection of all the GameObjects, components, and their connections. Unity scenes are written in YAML (rhymes with camel), which is a data serialization language. For example, Figure 3.23 shows the YAML implementation of an object in the Main.unity scene of Celeris Base. The YAML file shows us that the type of the object is GameObject.
The number after the ampersand (11076900) is the unique ID of this object. The name of the object is given by m_Name and, in this case, is GameManager. This object has several components, each shown by its unique ID. It also has some properties with corresponding values. Each component of this object must be defined by a similar YAML entry, and its link to the GameObject is set by the ID of the GameObject (e.g., m_GameObject: {fileID: 11076900}). (YAML stands for “YAML Ain't Markup Language”, which is a recursive acronym; YAML was initially an acronym for “Yet Another Markup Language”!)

--- !u!1 &11076900
GameObject:
  m_ObjectHideFlags: 0
  m_CorrespondingSourceObject: {fileID: 0}
  m_PrefabInternal: {fileID: 0}
  serializedVersion: 6
  m_Component:
  - component: {fileID: 11076903}
  - component: {fileID: 11076902}
  - component: {fileID: 11076904}
  - component: {fileID: 11076901}
  - component: {fileID: 11076905}
  - component: {fileID: 11076907}
  - component: {fileID: 11076906}
  m_Layer: 0
  m_Name: GameManager
  m_TagString: Untagged
  m_Icon: {fileID: 0}
  m_NavMeshLayer: 0
  m_StaticEditorFlags: 0
  m_IsActive: 1

Figure 3.23: YAML implementation of the GameManager object in the Main.unity scene of Celeris Base.

The number of lines in the aggregated YAML file of Celeris Base is over 140,000. Fortunately, we rarely, if ever, need to edit the YAML files of a scene directly. Instead, Unity3D has a visual editor where all these objects and their components are shown. Figure 3.24 shows the editor and its different windows. The hierarchy window shows all the GameObjects in the scene and is a visualized form of the YAML file. Selecting a GameObject shows its components and properties in the inspector window. The values of variables can be changed in the inspector and stored. Also, components can be attached to the game object by simply dragging and dropping them in the inspector window. The project window shows all the files and libraries in the project, like a file explorer.
Finally, the scene is visualized in the scene view, which lets you visually navigate and edit the scene, similar to a CAD program. Note, however, that the scene view is often not complete until the game starts, because the components and attached code usually add more objects or visual effects to the scene. For example, in Celeris, before starting the software there is no simulation or bathymetry to see. The software starts running by clicking the Play button and is shown in the Game window (see Figure 3.25). The game window is what the user will see in the stand-alone version of the software. Unity3D 2018 for Windows comes with Visual Studio as the IDE, and Unity supports C# as its main programming language.

Figure 3.24: Unity3D editor and its different windows.

Figure 3.25: Game window and the Play button in the Unity3D editor.

Structure

With this brief explanation of the Unity game engine, we can now discuss the structure of the Celeris Base scene. There are four major GameObjects in the scene: Main Camera, GameManager, Engine, and Canvas. The Main Camera renders what the user will see in the field. Similar to Celeris Advent, it has a Position and a Rotation tuple for yaw, pitch, and roll. However, unlike Celeris Advent, we did not need to write any code to create this camera. A few components are attached to this object, of which the most important is a script to control the movement of the camera by the user. The GameManager object drives the software flow, quite similar to the main.cpp file in Celeris Advent. This object also has several components, of which the most important is a script called GameManager.cs. The GameManager class uses the singleton design pattern, which means that only one instance of the class can exist. This is useful for GameManager to act as the sole coordinator of actions across the system.
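In C++ terms (Celeris Base implements this in C#), the singleton pattern just described looks roughly like the following sketch; the class body is, of course, a stub.

```cpp
#include <cassert>

// Sketch of the singleton pattern used by GameManager (the C# version
// exposes a static "Instance" property; this is the C++ analogue).
class GameManager {
public:
    // The single shared instance, created lazily on first use.
    static GameManager& Instance() {
        static GameManager instance;
        return instance;
    }
    // Copying is forbidden so no second instance can ever exist.
    GameManager(const GameManager&) = delete;
    GameManager& operator=(const GameManager&) = delete;
private:
    GameManager() = default;  // no public construction
};
```

Every caller of Instance() receives the same object, which is what lets a single GameManager coordinate actions across the whole system.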
Furthermore, the one possible instance of the class is easily accessible from other classes simply by calling the static public property of the class called "Instance". GameManager sets up the entire framework of the software, such as the CML loader, solver, and rendering infrastructure. It also contains the "Update" function, which is called every frame to run the solver for a pre-defined number of steps, update the visualizations, etc.

The Engine GameObject is the rendering engine of Celeris Base. It has several children (sub-objects) that take care of rendering the water surface and the terrain. We will explain this object in more detail in the coming sections.

Canvas is the GameObject that all UI elements must be children of. Canvas always comes with the EventSystem object, which takes care of the messaging system. All UI elements of Celeris Base are (and must be) under Canvas. Several scripts are attached to the UI elements to apply the required logic. For example, a script called "BoundaryPanelScript" is attached to the boundary tab UI element, which connects the GUI to the simulation engine. It takes the values of the boundary parameters from the GUI and updates the simulation parameters accordingly. The author would like to acknowledge the help from our intern, Zhao Zhao (graduate student at USC), in implementing the wireframe of the UI in Unity.

Solver

Celeris Base has two fluid dynamics models, one solving the NLSW equations and one solving the extended Boussinesq equations. The NLSW solver is based on a finite-volume model developed by Kurganov and Petrova (2007) [21] and is therefore called KP07. Our Boussinesq solver, described in detail in Chapter 2, is based on the hybrid finite-volume finite-difference model that we introduced in Tavakkol and Lynett (2017) [19], and we therefore call this solver TL17. Celeris Base has a core Solver class, implemented in the Solver.cs file, which defines the foundation for running a model on the GPU.
For example, RenderTexture's are defined in this class. It also has methods to hook the simulation parameters (coming from a CML file) to the model, as well as data logging functions. However, it does not implement the functions to solve any equations. It is, in fact, a parent class with a virtual TimeStep() method which should be overridden by a child class that implements this function and solves the equations of motion. As explained in Chapter 2, TL17 uses KP07 to solve the advective terms of the extended Boussinesq equations and therefore shares several functions with this model. We refactored the common functions of the two models into a class called KP07Base, which inherits from Solver. Two classes called TL17Driver and KP07Driver are derived from KP07Base to handle their corresponding models. Figure 3.26 shows the UML diagram of the Solver, KP07Base, KP07Driver, and TL17Driver classes. Some major or representative variables and methods are shown in this diagram, as well as the inheritance hierarchy. The Solver class has a reference to a ComputeShader, called Compute, which is set to either the KP07 or the TL17 compute shader by the subclasses. It also has an instance of a ComputeShader called renderComputeShader, which connects the model results to the visualization tools. The simTextureManager hosts all the RenderTexture's in the software and takes care of creating and destroying them as necessary. The KP07Base class defines the functions shared between the KP07 and TL17 models to solve the advective terms. For example, the pass1 and pass2 functions take care of reconstructing the [w, P, Q]^T values and computing fluxes at cell interfaces. It also implements the time integration algorithms as virtual functions. This class defines, implements, and applies all the boundary conditions in the software. Furthermore, it handles the bathymetry as well as initial conditions such as solitary waves. KP07Driver is a relatively thin class derived from KP07Base.
It defines the final step in solving the NLSW equations as a function called pass3, which sums up the fluxes and source terms to calculate the time derivatives of the flow parameters. It also implements the TimeStep function by calling pass1, pass2, pass3, and then a time integration method. TimeStep also calls the boundaryCondition and the base TimeStep functions from KP07Base. TL17Driver is the class which drives the TL17 model and solves the extended Boussinesq equations. It implements a function called pass3 which, similar to KP07Driver, aggregates the results from pass2 with the source and dispersive terms. The class also defines crucial functions such as those to solve the tridiagonal matrices using the cyclic reduction algorithm. TL17Driver overrides the time integration functions to add the necessary changes, but still calls the base time integration functions within. Finally, it implements the TimeStep function by calling pass1, pass2, pass3, time integration, and the tridiagonal solver functions. Note that all the computational functions in the Solver class and its children are drivers that run compute shaders on the GPU; they do not implement any CPU solutions.

Figure 3.26: Class UML diagram of Celeris Base solvers.

Compute Shaders

Celeris Base differs from Celeris Advent in a key way in how it leverages the power of the GPU: we used compute shaders in Celeris Base, in contrast to the pixel shaders we used in Celeris Advent. Compute shaders are very similar to pixel shaders in language and syntax; however, they are designed primarily for computation and therefore fit our purpose better. Unlike using pixel shaders for general-purpose programming, which requires setting up a dummy graphics pipeline, driving compute shaders is simply done by a function called Dispatch. Unity's compute shaders closely match Microsoft's DirectCompute technology.
They also have some similarities with the CUDA programming language, which is perhaps more popular in the coastal engineering research community. We hope that using compute shaders instead of pixel shaders in Celeris Base will remove the barriers for coastal researchers to further develop the mathematical and numerical model of the software. All files with the .compute extension are compute shaders. Most of them are located in the Assets/Celeris/Scripts/Solver/Shared folder. These compute shaders implement the functions shared between the TL17 and KP07 models; the models themselves are implemented in the TL17.compute and KP07.compute files.

Rendering

Mesh Generation

One of the challenges in developing Celeris Base was rendering the wave and terrain surfaces. Unity3D allows mesh generation from heightmaps; however, it only accepts square shapes, i.e., a heightmap with an equal number of grid points in the x and y directions. The generated mesh can be scaled independently in the x and y directions to form a rectangle; however, for domains with a large difference in the number of grid points, the mesh precision in one direction needs to be either sacrificed or overloaded. To overcome this limitation, we adapted a visualization system from a commercial library called Surface Waves by Code Animo. Although this library was also designed to render only square domains, it had many useful tools which helped us design and implement our own surface rendering tools. In Celeris Base, we tile a rectangular grid with smaller square grids of size 128 × 128, and then slightly scale up the squares in one direction to fit the domain. Figure 3.27 shows this infrastructure for a rectangular domain, with the tiles disassembled for better illustration. The number of squares in each direction is calculated by dividing the number of simulation cells in that direction (i.e., nx or ny) by the size of the square tiles (i.e., 128), and then rounding the result to the nearest integer.
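As a concrete check of this rule (including the minimum of two tiles per direction that we impose to avoid low visualization resolution), the tile-count computation amounts to the following sketch; the function name is ours:

```python
TILE_SIZE = 128  # edge of a square rendering tile, in simulation cells

def tile_counts(nx, ny):
    """Tiles per direction: simulation cells divided by the tile size,
    rounded to the nearest integer, but never fewer than two tiles."""
    per_direction = lambda cells: max(2, round(cells / TILE_SIZE))
    return per_direction(nx), per_direction(ny)
```

For a 601 × 301 grid this gives 5 × 2 tiles; the square tiles are then slightly stretched in one direction so that they cover the rectangular domain exactly.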
To avoid low visualization resolution, we use at least two tiles in each direction. This system introduces a new challenge: the number of grid points in the underlying computational model differs from the number in the rendering infrastructure, and therefore there is not a 1:1 match between the mathematical mesh and the visualization mesh in the software. We solve this problem by defining a linear UV mapping between the two meshes and sampling the compute textures for visualization purposes.

Figure 3.27: Rendering infrastructure in Celeris Base with disassembled tiles.

Shading and Materials

Rendering in Unity is done using materials, shaders, and textures. A material is a component attached to a GameObject with a mesh component and defines its rendering properties. A material normally has references to the textures it uses, tiling information, color tints, etc. It also uses a shader, which takes care of the math of calculating the color of each pixel. A shader normally takes one or more textures as its input and uses them to calculate the surface color. Celeris Base uses several materials, shaders, and textures to deliver its vivid visualization. We briefly explain photorealistic and colormap shading in this section and leave rendering the geographic map of the domain to the next section.

Shaders in Celeris Base are used for more than just coloring pixels. We use vertex shaders to displace the mesh points every frame to create the water surface. The vertex shader is where the sampling mentioned in the previous section happens. The vertex shader runs for each vertex of the mesh and samples the water surface texture (populated by compute shaders) to displace the vertex according to the water surface elevation. Photorealistic shading in Celeris Base is done by the photorealistic.shader file, which is attached to the SimpleWater.mat material. This shader adds the effects of light and skybox reflection to the albedo color of a pixel.
It also has a reference to a grid texture (Figure 3.28a). This grid texture is sampled, scaled, color-reversed, and then added to the pixel's color to render the grid visual effect in Celeris Base. Figure 3.29 shows a scene rendered with the photorealistic shader with two different skyboxes and two grid modes.

Figure 3.28: Textures of grid (a), jet colormapping (b), and terrain colormapping (c), used in Celeris Base renderings.

Figure 3.29: Sample photorealistic rendering in Celeris Base with different sky and grid options.

Colormapping is also done using surface shaders. Two colormap shaders exist in Celeris Base: WaterColormap.shader for the water surface and TerrainColormap.shader for the terrain. In colormap shaders, a colormapped texture is given to the shader as an input. For example, Figure 3.28b shows the jet colormap used for shading the water surface according to a flow variable and a linear mapping specified by max and min values. In water colormapping, the shader gets the value of the shading variable from the vertex shader (e.g., water surface elevation), then calculates the UV coordinates from

U = (C − C_min) / (C_max − C_min);  V = 0.5    (3.2)

where C is the value of the shading variable and C_min and C_max are its given minimum and maximum values. Since the given textures are constant in the V direction, any value between 0 and 1 can be used for V. These UV coordinates are then used to sample the color from the colormapped texture. The terrain colormap works similarly, with a slight difference. In the case of the terrain, two distinct linear mapping functions are used for heights above and below sea level. Eq. (3.3) shows the equations used to calculate the U and V values in the terrain colormapping shader. The terrain colormapping shader takes the texture shown in Figure 3.28c as an input.
U = (C − C_min) / C_min for C ≤ 0;  U = C / C_max for C > 0;  V = 0.5    (3.3)

Map Rendering

Celeris Base has a new feature which enables it to automatically download the geographic map of the simulation location and render it on the terrain. To specify the geographic location of the simulation, the latitude and longitude of the center of the field must be given in the CML input file. The map of the location is then downloaded by sending a request to the Google Static Maps API. This API takes several variables that set the visual effects of the map, along with the latitude and longitude of the map center and its zoom level. Celeris must calculate the required zoom level such that the retrieved image covers the entire simulation field, and then crop the image appropriately. In order to explain how we implemented these calculations in Celeris, we first need to explain how web-based maps project the spherical earth surface onto a cylinder.

Perhaps all major web map services (e.g., Google, Bing, OpenStreetMap, etc.) use a mapping technique called Web Mercator (or Google Web Mercator), which is a variant of the Mercator projection. This projection maps locations from a sphere (the earth) to a cylinder. The cylinder is then unwrapped to form the well-known planar map we see on our screens and mobile phones. Figure 3.30 illustrates the Mercator projection in a diagram.

Figure 3.30: A variant of the Mercator map projection 5.

In Celeris Base, a class called GoogleStaticMap handles downloading and other calculations related to the geographic maps. This class is part of a Unity3D asset developed by the author and published on the Unity Asset Store under the MIT License. The asset is available at http://u3d.as/FHW. Let us define Mx and My as the coordinates in the Web Mercator projection such that, for the full earth map, both coordinates span from −1 to 1.
The following equations map longitude and latitude to Mx and My:

M_x = lon / π    (3.4)

M_y = (1/π) ln(tan(π/4 + lat/2))    (3.5)

5 Derivative of a USGS image file, published on Wikimedia Commons in February 2005.

where lon and lat are the longitude and latitude in radians. Since the Mercator projection maps the poles to infinity, the poles cannot be shown on the map, and therefore the map is cut off at 85.051129° north and south. This latitude is where My = ±1.

The Google Maps API receives an integer value called the zoom level. This value is used to calculate the size of the tile which will be cropped from the cylinder and returned to the user. If zoom denotes this value, 2^zoom is the number of tiles in each direction. For example, zoom = 0 corresponds to the entire map, and zoom = 1 divides the full map into 2 × 2 = 4 tiles. Given the lat/lon of a location and the zoom level, the API calculates the tile size and returns an image of that size. In Celeris, we do the reverse calculation to find the right zoom level. The first step in the reverse calculation is finding the lat/lon of the corners of the field, using the haversine formula to convert distances in meters (width and length) to differences in lon and lat. After the required zoom level is calculated, Celeris Base assembles and sends a REST request to the Google Static Maps API. Celeris sends the request for the satellite map. The response is a tile of the map centered at the location specified in the request. This image is given to a shader defined in TerrainMap.shader. The shader receives four parameters (two in each direction) to form a linear UV mapping between the domain and its geographic map. Remember that the geographic map returned by Google is larger than the field. Figure 3.31 shows the geographic map of the coast of Newport, OR, retrieved from the Google Static Maps API by Celeris Base to render the terrain in an example experiment in the area. Figure 3.32 shows this map projected on the terrain in the simulation.
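Equations (3.4)–(3.5) and the zoom-level reasoning above can be condensed into a short sketch. The zoom search below is a simplification of what the GoogleStaticMap class does, and the function names are illustrative:

```python
import math

def web_mercator(lon_deg, lat_deg):
    """Eqs. (3.4)-(3.5): map lon/lat in degrees to (Mx, My), each in [-1, 1]."""
    lon, lat = math.radians(lon_deg), math.radians(lat_deg)
    mx = lon / math.pi
    my = math.log(math.tan(math.pi / 4 + lat / 2)) / math.pi
    return mx, my

def zoom_for_span(mercator_span):
    """Largest integer zoom whose tiles (of span 2 / 2**zoom in Mercator
    units) still cover a field of the given span (assumed > 0)."""
    zoom = 0
    while 2 / 2 ** (zoom + 1) >= mercator_span:
        zoom += 1
    return zoom
```

As a sanity check, web_mercator(180, 85.051129) is approximately (1, 1), reproducing the cut-off latitude where My = 1, and zoom_for_span(2.0) is 0, since a single zoom-0 tile covers the whole map.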
Figure 3.31: Geographic map of the coast of Newport, OR, retrieved from the Google Static Maps API by Celeris Base.

Figure 3.32: Simulation of the coast of Newport, OR, using Celeris Base, with the geographic map of the location rendered.

Mouse Pointer Gauge

The mouse pointer works like a gauge in Celeris Base: by hovering it over any point in the simulation field, the software shows the flow parameters and the position of that point in the field. This feature is implemented in an optimized way such that it does not slow down the software. To find the location of the pointer on the field, the software casts a ray from the pointer in the heading direction of the camera. Then, it calculates the intersection of this ray (a line in 3D space) with a horizontal plane placed at the mean sea level. There is some error associated with this approach, since the water surface elevation might be lower or higher than the mean sea level, and therefore the intersection of the ray with the water surface does not necessarily match its intersection with the mean sea level plane. However, the error is not significant; moreover, since the position of the gauge is also reported using the same technique, the flow parameters always correspond to the reported gauge location, even if that location is not precisely under the mouse pointer. Finding the exact intersection of the ray and the water surface is computationally expensive and, considering the small error margin, not worth it. After finding the position of the mouse pointer on the field, the software reads the flow data from the textures using the GetPixel method provided by Unity3D.

Figure 3.33: Mouse pointer acts as a gauge in Celeris Base.

Buoy

Celeris Base implements an interesting feature where the user can deploy a "buoy" in the field and observe the buoy's readings of the water height in real time. This feature is implemented using the same tools as the mouse pointer gauge feature.
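Both the gauge and the buoy rely on the same ray-plane intersection, which reduces to one division: with camera origin o and ray direction d, the mean-sea-level plane is hit at o + t·d, where t makes the vertical coordinate vanish. A minimal sketch, using z as the vertical axis for simplicity (Unity itself is y-up, and the names here are ours):

```python
def ray_sea_level_intersection(origin, direction):
    """Intersect a ray with the horizontal plane z = 0 (mean sea level).
    Returns the (x, y) hit point, or None when the ray is parallel to
    the plane or points away from it."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    if dz == 0:
        return None  # ray parallel to the sea-level plane
    t = -oz / dz
    if t < 0:
        return None  # plane is behind the camera
    return (ox + t * dx, oy + t * dy)
```

For a camera at height 10 looking down at 45°, e.g. direction (1, 0, −1), the gauge point lands 10 units ahead of the camera in x.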
The values read from the GPU at the location of the buoy using these tools are then plugged into a plotter asset which is shown on the screen (see Figure 3.34). Five buoys can be deployed by clicking on the scene. They can also be removed by clicking on the plotter legend while pressing the Ctrl key.

Figure 3.34: Deploying virtual buoys in Celeris Base.

3.3.2.3 Input and Output Files

Celeris Base uses the same format for input and output files as Celeris Advent. It takes a CML file and a CBF file to set the simulation parameters and the bathymetry of the experiment, and it generates text-based logging files as its output. The only difference is that the values in the CML file must be surrounded by double quotes (i.e., ""). This requirement is imposed by the XML reader which ships with Unity3D as an internal C# library.

3.3.2.4 Compilation

Unity3D is a cross-platform game engine, and therefore it can build projects for several platforms. However, using compute shaders imposes some limitations on building the project for different platforms. We developed Celeris Base on a Windows machine with the intention of running it on Windows machines. Per the Unity 2018 documentation, Unity is now capable of translating compute shaders into different shading languages to support different platforms. We quote the Unity 2018 documentation as follows:

"Platforms where compute shaders work:
• Windows and Windows Store, with a DirectX 11 or DirectX 12 graphics API and Shader Model 5.0 GPU
• macOS and iOS using Metal graphics API
• Android, Linux and Windows platforms with Vulkan API
• Modern OpenGL platforms (OpenGL 4.3 on Linux or Windows; OpenGL ES 3.1 on Android). Note that Mac OS X does not support OpenGL 4.3
• Modern consoles (Sony PS4 and Microsoft Xbox One)"

Although Unity supports compute shaders on several platforms, there is no guarantee that Celeris Base will run on all of these platforms.
At the early stages of developing Celeris Base, we successfully compiled and ran the software on a Linux machine. We expect that our software can still be built for Linux machines, with some fiddling. We never tried compiling Celeris for macOS or mobile devices. In summary, we believe that the software can be built for different platforms, but the process will involve some research and some necessary changes in the software. For example, Windows' DirectX gracefully handles out-of-bounds texture accesses by returning zero, while on some platforms such accesses may crash the GPU. Such cases must be found and removed from Celeris Base to prepare it for compilation on other platforms. To build Celeris for Windows, you should first set the build platform to "PC, Mac & Linux Standalone" from File > Build Settings, then set the target platform to Windows in the same settings window and build the project.

3.3.2.5 Running Celeris Base

Running Celeris Base on Windows is exactly like running Celeris Advent and is done by running the executable file. Celeris Base first looks at the setting.init file, and if a path to a CML file is not provided, it opens a file browser window where the user can select and open the CML file. The GUI of Celeris Base also resembles Celeris Advent, with some minor differences. In order to use the geographic map feature of Celeris Base, a Google Static Maps API key must be provided in the GoogleAPIKey.txt file.

3.3.3 Numerical Validation

Since Celeris Base uses the same mathematical and numerical model as Celeris Advent, the validations of Celeris Advent carry over to Celeris Base as well. However, in order to ensure that we have correctly implemented the solution in Celeris Base, we validate the model against one of the experiments of Briggs et al. [47] for solitary wave interaction with a conical island. The experiment and simulation setup are explained in Section 3.2.2.4. We only present the results for the case with H/d = 0.18.
We used a grid size of 301 × 301 and a timestep of 0.0025 s. Figure 3.35 and Figure 3.36 compare the experimental and numerical results for the time series at the gauges and the maximum horizontal runup on the island, respectively. The figures show that the numerical results given by Celeris Base for this experiment are in good agreement with the experimental results.

Figure 3.35: Experimental (– –) and numerical (–) time series for the interaction of a solitary wave with H/d = 0.18 on a conical island, at gauges #6, #9, #16, and #22 (a–d), simulated by Celeris Base.

Figure 3.36: Numerical (solid line) and measured (●) maximum horizontal run-up for H/d = 0.18 simulated by Celeris Base.

3.4 Appendix (User Manual of Celeris Advent)

This section is dedicated to the User Manual of Celeris Advent as published in [19] and explains how to use version 1.2.2 of the software. It is worth noting that this User Manual has been translated into several languages, such as Spanish, Farsi, and Italian, by independent users, and the translations are available online from third parties.

3.4.1 Introduction

This guide documents the use of the wave simulation and visualization software Celeris. This software solves the extended Boussinesq equations and is accelerated by the GPU. Celeris uses a hybrid finite-volume finite-difference scheme to discretize the governing equations, which are used in their conservative form. The tuple (w, P, Q) describes the flow parameters in a cell, where w is the water surface elevation from a fixed datum, and P and Q are the integrated mass fluxes in the x and y directions, respectively. Numerical and implementation details of the software can be found elsewhere and are out of the scope of this writing. Celeris takes an XML file as the input setup for a specific experiment, but it also has an interactive GUI. It can also write data to ASCII-formatted files. This document elaborates on the file formats and the GUI. Note that Celeris uses the metric system.
3.4.2 Input files

As mentioned earlier, we chose the XML format for the input setup file of experiments, mostly because it is both human and machine readable. Any standard text editor, such as Notepad++, can be used to edit these files. XML files consist of elements. An element generally begins with a start-tag and ends with the matching end-tag; any characters in between are the content of the element. For instance, Figure 3.37 shows an element which describes the name of an experiment.

<name>Conical Island</name>

Figure 3.37: A sample XML element.

An element can contain other elements, and in fact all the elements in an XML file must be located inside the "root" element. In addition to content, elements can also have attributes. For example, Figure 3.38 shows the element <westBoundary>. This element has three attributes, namely type, seaLevel, and widthNum. It also contains another element, <sineWave>. This figure also shows a sample comment. Comments in XML begin with <!-- and end with -->.

<!-- Settings for Boundaries-->
<westBoundary type = "SineWave" seaLevel = 0 widthNum = 2>
    <sineWave amplitude = .01 period = 2 theta = 0></sineWave>
</westBoundary>

Figure 3.38: An XML element with attributes containing another element.

In order to distinguish Celeris input files from generic XML files, we set the extension of our input files to CML. These files, like any XML file, can also be easily edited with a standard text editor. Figure 3.39 shows a complete CML file for a sample experiment. We will walk through this file and explain its elements. The root element is <Experiment>, and everything must be placed inside this element. The name of the project is placed in the <name> element. This string value is shown on the title bar of Celeris.

3.4.2.1 Model Settings

The model settings are incorporated in the element <model>, which has an attribute type and two elements, <parameters> and <friction>, each with several attributes.
The type attribute can be set to "BSNQ" or "NLSW" to switch between the Boussinesq model and the shallow water equations. The attribute epsilon is the value of ϵ in the governing equations, which is used to avoid divisions by zero. The value of this parameter can affect the runup measurements and the stability of the model. Larger values make the code more stable but increase the artifacts in simulating runups. The simulation results in areas with water depth less than ϵ^(1/4) may have anomalies. The attribute correctionStepsNum sets the number of correction steps 6, and the attribute timestep is the size of the timestep (dt) in seconds. The attribute type of the element friction can be switched between "Manning" and "Quadratic". Depending on the value of type, the attribute coef is either Manning's roughness coefficient (n) in SI units or the quadratic friction coefficient (f).

<?xml version="1.0"?>
<Experiment>
    <name>Sample Experiment</name>

    <!-- Settings for Model -->
    <model type = "BSNQ">
        <parameters epsilon = 5e-12 correctionStepsNum = 2 timestep = 0.005></parameters>
        <friction type = "Manning" coef = 0.0></friction>
    </model>

    <!-- Settings for Solution field -->
    <fieldDimensions width = 30 length = 30 stillWaterElevation = 0></fieldDimensions>
    <gridSize nx = 601 ny = 601></gridSize>
    <bathymetryFilePath> \resources\bathy.cbf </bathymetryFilePath>

    <!-- Settings for Initial Condition -->
    <hotStartFilePath> N/A </hotStartFilePath>
    <solitaryWave H = 0.05 theta = 0 xc = 5 yc = 15></solitaryWave>
    <solitaryWave H = 0.05 theta = -45 xc = 5 yc = 25></solitaryWave>

    <!-- Settings for Boundaries-->
    <westBoundary type = "SineWave" seaLevel = 0 widthNum = 2>
        <sineWave amplitude = .01 period = 2 theta = 0></sineWave>
    </westBoundary>
    <eastBoundary type = "Sponge" seaLevel = 0 widthNum = 20></eastBoundary>
    <southBoundary type = "Solid" seaLevel = 0 widthNum = 2></southBoundary>
    <northBoundary type = "Solid" seaLevel = 0 widthNum = 2></northBoundary>

    <!-- Settings for Logging Data-->
6 In the latest version of Celeris Advent, the correction step must be set to zero.

    <logData doLog = true logStep = 20>
        <logPath>C:\conical_island\</logPath>
        <range filename = "island">
            <bottomLeft x = 228 y = 228></bottomLeft>
            <topRight x = 374 y = 374></topRight>
        </range>
        <gauges filename = "gauges">229,302,249,302,353,302,354,302</gauges>
    </logData>
</Experiment>

Figure 3.39: A sample CML input file.

3.4.2.2 Field Parameters

The elements <fieldDimensions>, <gridSize>, and <bathymetryFilePath> set the properties of the solution field. The first element in this list has three attributes, width, length, and stillWaterElevation, which must all be given in meters. Note that in Celeris the width of the field is defined in the x direction, and its length is defined in the y direction. The still water elevation is defined with respect to a fixed datum and is most of the time set to zero. The element <gridSize> has attributes nx and ny, which define the number of cells in the x and y directions, respectively. <bathymetryFilePath> is the relative or absolute path to a Celeris bathymetry file (*.cbf). If a relative path is used, Celeris will attempt to find the file relative to the location of the CML file itself. Celeris bathymetry files are ASCII-formatted text files with the extension "cbf". Figure 3.40 shows a sample cbf file. As can be seen, the file starts with two tags, [nx] and [ny], which determine the cell numbers in the x and y directions, respectively. More descriptive tags will be added to this section in future versions of Celeris. The properties section ends with a series of "=" signs. Afterwards, the bathymetry matrix is given. A short MATLAB script called "write_bathy.m" is provided with Celeris to help create bathymetry files. If the cell numbers in the bathymetry file ([nx] and [ny]) are different from the cell numbers (nx and ny) in the experiment, Celeris will use linear interpolation to generate the bathymetry.
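The bundled helper is a MATLAB script, but the format just described is simple enough to produce from any language. An illustrative Python counterpart, where the function name and the assumption that each row of the matrix runs along x are ours:

```python
def write_cbf(path, bathy):
    """Write a 2-D list of depths as a Celeris bathymetry (.cbf) file:
    [nx]/[ny] tags, a separator line of '=' signs, then the matrix.
    Assumes each row of `bathy` runs along x, with ny rows in total."""
    ny, nx = len(bathy), len(bathy[0])
    with open(path, "w") as f:
        f.write(f"[nx] {nx}\n[ny] {ny}\n")
        f.write("=" * 36 + "\n")
        for row in bathy:
            f.write(" ".join(f"{z:.8f}" for z in row) + "\n")
```

Writing a small 2 × 3 matrix with this sketch reproduces the layout of Figure 3.40: the two tags, the separator, and one line of fixed-point depths per row.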
[nx] 100
[ny] 50
====================================
-0.45000000 -0.40000000 -0.35000000 ...
-0.45000000 -0.40000000 ...
-0.45000000 ...
⁞

Figure 3.40: A truncated sample Celeris bathymetry file.

3.4.2.3 Initial Conditions

Two types of initial conditions can be set in Celeris. First, a Celeris hot-start file (*.chf) can be given to the software as the initial condition by setting the content of the element <hotStartFilePath> to the relative or absolute path of the file. This file must define the values of w, P, and Q over the whole domain and is formatted similarly to Celeris output files. Celeris does not use interpolation for hot-start files, and therefore the given file must have the exact same cell numbers as the experiment in both directions. Second, several solitary waves can be added to the field as initial conditions. The properties of a solitary wave must be set as the attributes of the element <solitaryWave>. These attributes are the wave height in meters (H), the wave direction with respect to the x axis in degrees (theta), and the coordinates of the center of the wave (xc and yc). A CML file can have several solitary wave elements.

3.4.2.4 Boundary Conditions

Celeris uses two layers of ghost cells on each side of the field to implement the boundary conditions. This means that for an experiment with a mesh size of nx × ny, Celeris defines a matrix of size (nx+4) × (ny+4) to store the flow parameters. Cell indices start from 0 in both directions and end at nx+3 or ny+3 in each direction. The west and east boundaries are defined parallel to the y axis and are located at xj = 2 and xj = nx+1, respectively. Similarly, the south and north boundaries are defined parallel to the x axis and are located at yi = 2 and yi = ny+1, respectively. The boundary conditions for the four sides of the field must all be set in the CML file as the attributes of the elements <westBoundary>, <eastBoundary>, <southBoundary>, and <northBoundary>.
The type attribute of the boundary in Celeris Advent can be chosen among "Solid" for reflective solid walls, "Sponge" for sponge layers, and "SineWave" for a sinewave maker. The seaLevel attribute is the elevation of the still water on the boundary, which for most coastal engineering experiments must have the same value as stillWaterElevation. The widthNum attribute defines the number of boundary cells. This value must be set to 2 for "Solid" and "SineWave" boundaries, and to the appropriate number, corresponding to the sponge layer length, for the "Sponge" boundary. If the boundary type is set to "SineWave", the boundary element must contain a <sineWave> element which describes the properties of the target sinewave. The attributes of this element are the amplitude in meters, the period in seconds, and the direction with respect to the x axis in degrees. If the type of the boundary is not "SineWave", this element will be ignored.

3.4.3 Output files

Celeris can save the flow parameters w, P, and Q to ASCII-formatted files. Since writing to disk is generally time-consuming, Celeris has the option to save data only at a given interval and at given locations in the field. The element <logData> is where the output (or, as it is called in Celeris, logging) properties are set, and it contains several other elements. The attribute doLog must be set to true to enable logging and to false to disable it. logStep defines the logging interval as a number of timesteps. The content of the element <logPath> must be set to the relative or absolute path of the folder which is intended to contain the log files. This folder must already exist, as Celeris will not create it. Locations for logging data are defined by cell indices in the field. Two types of locations can be defined. The first kind is a range, which is defined by two opposite corners of a rectangle in the field. Ranges are defined in the element <range>, where the attribute filename is the name of the file for the defined range.
The range element must contain two other elements, <bottomLeft> and <topRight>, each with two attributes x and y. These elements define the corners of the rectangle with the minimum and maximum x and y indices, respectively. It is worth emphasizing that the x and y attributes are cell indices, not distances in meters. The user must also take care to always use the cell numbering described earlier in the boundary conditions section. A CML file can contain several ranges for logging data. For example, if the run-up on two islands is of interest in an experiment, it is recommended to define two ranges, each covering one of the islands, rather than one large range covering both, since, as mentioned before, writing data to a file adversely affects the speed of the simulation.

The data can also be logged at given gauges and saved to a file whose name is given by filename. The gauge coordinates are given as (x, y) tuples inside the element <gauges>, as shown in Figure 3.39. The tuples are comma separated and, similar to the coordinates for ranges, their values are cell indices. For example, in the experiment shown in Figure 3.39, the flow parameters are saved to the file "gauges.txt" in the folder "C:\conical_island", at gauges (229, 302), (249, 302), (353, 302), and (354, 302), every 20 timesteps. A CML file can have only one <gauges> element.

Celeris output files are ASCII-formatted text files with the common extension "txt". The flow parameters on computational cells are logged in these files as one tuple of (j, i, w, P, Q, alpha) per line. j and i are the cell indices in the x and y directions, respectively. w, P, and Q are the flow properties. alpha is written for debugging purposes, is generally equal to zero, and must be ignored by users.
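A log file in this format is straightforward to post-process outside MATLAB as well. The sketch below assumes one whitespace-separated (j, i, w, P, Q, alpha) tuple per line, as described above; the exact delimiter in real Celeris output files may differ, and the function name is hypothetical.

```python
# Sketch of a post-processing helper for a Celeris-style log file.
# Assumed layout: one whitespace-separated (j, i, w, P, Q, alpha)
# tuple per line, as described in the text above.

def parse_log_line(line):
    j, i, w, P, Q, alpha = line.split()
    # alpha is a debugging value and is ignored, per the file format.
    return {"j": int(j), "i": int(i),
            "w": float(w), "P": float(P), "Q": float(Q)}

record = parse_log_line("229 302 0.0125 0.0031 -0.0002 0.0")
```

A full reader would simply apply this function to every line of a range or gauge file and regroup the records by cell index.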
3.4.4 Graphical User Interface (GUI)

Celeris, as the first interactive software for simulation of coastal processes, has an embedded GUI which can be used to change simulation and visualization parameters while the model is running. For example, the user can change the mesh size, friction coefficient, or CFL number using the GUI. Boundary conditions can be changed, and solitary waves can be superposed on the field, as the model is running. Celeris also has several visualization options such as photorealistic and colormapped rendering. The GUI is best understood by running the software and experiencing it. It is also introduced in a short video available at https://youtu.be/pCcnPU7PCrg, and therefore we do not explain the details of the GUI in this guide.

3.4.5 Running the Software

Celeris needs a recent version of DirectX to be properly installed on a Windows machine. This library is now integrated into the OS, and therefore Celeris will run on a recent Windows machine with no preparation. However, depending on the last update of the OS, an update might be needed. The software is run by launching the file "Celeris.exe". If an error regarding a DirectX file is shown, please update your DirectX from here (https://bit.ly/2hB8hsK). The absolute path to the CML input file of an experiment can be given in a file called "setting.init", which is located in the same folder as "Celeris.exe". If a path is not given, or if it is invalid, Celeris will automatically ask the user to locate the CML file in a file browser window. Afterwards the software will immediately start the experiment and will visualize the results as the simulation progresses. More than one instance of Celeris can be run simultaneously; however, it should be noted that all the instances share the computing power of the GPU.

3.4.6 Tutorial Case

In this section we show, step by step, how to simulate the conical island benchmark in Celeris. This benchmark is based on the experiments of Briggs et al.
[47] for solitary wave interaction with a conical island and is frequently used to validate numerical models. To assist with writing bathymetry files and post-processing logged data, a few MATLAB scripts are distributed with Celeris as well. These files are not part of the software, and MATLAB is not required to use Celeris.

The first step in a simulation is to generate the bathymetry of the experiment. Figure 3.41 shows the experimental setup of the conical island benchmark. We simulate this benchmark in a 30 m × 30 m numerical domain. The circular conical island, with a base diameter of 7.2 m and a side slope of 1/4, is centered in the solution field. The process is done in the following steps: Open the "Examples" folder and copy "template.cml" into the "Tutorial Case" folder. This folder also contains a file named "conical_island_ready.cml", which is the ready-to-use input file for this tutorial case. We want to edit "template.cml" to make it look like "conical_island_ready.cml". This folder also has a CBF file which contains the bathymetry. A log folder with a few MATLAB files for post-processing should also be there. Rename the copied "template.cml" file to "conical_island.cml" and open it in a standard text editor such as Notepad++. Follow the next steps as shown in the video file "Celeris Tutorial Case.mp4". In the tutorial case we simulate "Case C" with H/d = 0.18, where H is the solitary wave height and d is the depth. Sponge layers are imposed on the boundaries parallel to the soliton, and solid walls on the two other boundaries. The domain is discretized by 301 × 301 cells with a constant time step of 0.005 s. Bottom friction is neglected.

Figure 3.41: Experimental setup of the conical island. The gauge locations are shown by dots and the wave approaches the island from the left.

The simulation results in an area surrounding the island and at the gauge locations shown in Figure 3.41 are selected to be saved to disk every 20 timesteps.
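For readers without the distributed CBF file, a bathymetry of this shape can also be approximated programmatically. The sketch below builds a 301 × 301 grid for the 30 m × 30 m domain with a cone of base diameter 7.2 m and side slope 1/4 centered in the field, as described above. The still-water depth d = 0.32 m is a placeholder assumption (this excerpt specifies only H/d = 0.18), and the physical island is actually a truncated cone whose crest elevation is not given here, so the cone is left untruncated.

```python
# Minimal sketch of a conical-island bathymetry generator.
# Assumptions (not from the source): the depth d = 0.32 m and the
# untruncated cone; the function name is illustrative.

def conical_island(n=301, size=30.0, d=0.32, base_radius=3.6, slope=0.25):
    """Return an n-by-n grid of bottom elevations (negative = below SWL)."""
    dx = size / (n - 1)
    grid = []
    for iy in range(n):
        row = []
        for ix in range(n):
            x = ix * dx - size / 2.0
            y = iy * dx - size / 2.0
            r = (x * x + y * y) ** 0.5
            # flat bed at -d, rising with slope 1/4 inside the cone base
            row.append(-d + slope * max(0.0, base_radius - r))
        grid.append(row)
    return grid

bathy = conical_island()
```

The resulting rows can then be written out in the layout of Figure 3.40 (an "[nx] ... [ny] ..." header followed by one row of elevations per line) to produce a Celeris-readable bathymetry file.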
After running the simulation, the logged results can be visualized in MATLAB using the provided scripts. The "gauges_read.m" script plots the simulation results against the experimental ones at the four gauges shown in the figure. The "runup_island.m" script plots the maximum horizontal runup around the island for both the experimental and numerical results.

Chapter 4 - Coupling with Crowdsourced Data

Karagiannis and Synolakis [50] studied fifty disaster exercises and identified information gathering from the field, decision-making under uncertainty, and running estimates of the situation as among the most challenging tasks for emergency managers. In this chapter we aim to develop tools to assist emergency managers with these three challenges. We introduce a crowdsourcing framework to facilitate collecting data from the disaster area, and a ranking algorithm to efficiently interpret the data into information under limited resources. This gathered information reduces the uncertainty in the disaster area and therefore helps the decision makers make more informed decisions. Furthermore, we couple our framework with Celeris as an example of using it to run estimates of the situation in a tsunami scenario. The fast computational speed of Celeris, along with its ease of use, makes it an appropriate system for coastal disaster response processes. We aim to use Celeris in a two-way coupling with the data collected from the disaster area, such that the numerical simulations help us prioritize interpretation of the geo-tagged data, and the interpreted data (thus information) help us tune the input parameters to Celeris and get a better understanding of the scope of the disaster. This loop continues to decrease the uncertainty about the disaster and the impacted area. We introduced the foundations of our framework in [51] and [52]. Some major parts of this chapter are derived from these references.
The data coupling framework needs to be fed with continuous data from the disaster field. However, after a disaster, obtaining data from the disaster area becomes cumbersome. The conventional method of dispatching inspection teams for field surveys is time-consuming and inefficient. Therefore, we propose data crowdsourcing as an efficient alternative means of data collection. The data sources can be sensors, CCTV cameras, text and Twitter messages from the local people, phone calls to 911, and so on. Coupling data crowdsourcing with numerical models of disasters is a relatively new subject of interest. Two prior research works on applications of such a system in flood management can be found in [22], [23].

While the main goal of this research is studying coastal hazards, we keep the coupling framework as general as possible to extend its use cases to other types of disasters. Without loss of generality, we consider earthquakes as the running example to introduce our framework. We chose earthquakes because they are among the most well-studied and most common disasters worldwide. There are plenty of data about earthquake intensities and reports of their damages. For example, the intensity map of any recent earthquake can be obtained from the USGS (United States Geological Survey) website as a "ShakeMap". There are also plenty of crowdsourced data about earthquakes, which let us evaluate our framework. Crowdsourcing has been shown to be a cost-effective and time-efficient way to acquire disaster data [24] and then to analyze the collected data [25]; however, the amount of collected data normally surpasses the maximum amount which can be analyzed by orders of magnitude. Therefore, an efficient strategy to prioritize data analysis is necessary. In this chapter we first discuss our prioritization strategy for crowdsourced data, which is based on the entropy concept. We introduced this framework in an emergency management workshop at ACM SIGSPATIAL 2016 [51].
This strategy ranks the data according to their expected value of information and their significance. Data prioritization is an essential part of our two-way coupling scheme and therefore will be discussed in detail.

4.1 Post-disaster Data Crowdsourcing

After a disaster happens, the uncertainty about the disaster area abruptly increases, and the conditions of entities in the disaster area, such as the residents, buildings, bridges, etc., become unknown. The authorities need to quickly obtain information from the disaster area to increase their situational awareness and make informed decisions. The conventional method is to dispatch inspection teams to perform field surveys and collect information from the field. This method is time-consuming and normally difficult to execute right after the disaster [53]. For example, the authors in [54] estimate that inspecting 61 bridges after an earthquake on the Purchase Parkway, Western Kentucky, would exceed 40 hours for a non-trained inspection team and 22 hours for a trained one. Ranf et al. [55] propose a prioritization method for dispatching teams for bridge inspections. They show that by employing their prioritization strategy, 80% of the damaged bridges in the 2001 Nisqually earthquake could be identified by inspecting 14% of all the bridges (481 out of 3,407) in the disaster area. However, even inspecting 481 bridges may take several days or weeks to complete, relying on the limited number of public-sector inspection teams. Japan has a program for post-earthquake quick inspection of buildings in which volunteer architects and civil engineers from the private sector inspect buildings to evaluate the damage level and prevent secondary disasters. The public sector trains inspectors and registers them as volunteers [56].
Machine vision inspection is suggested in [57] for rapid post-earthquake building evaluation; however, the proposed framework still relies on the visual data collected from the field by inspection teams, which significantly limits the speed of the evaluation. We propose a data crowdsourcing and prioritization framework for rapid inspection and decision-making. We model the system by defining transmitters, messages, channels, and receivers. A transmitter is an entity located in the disaster area which produces messages and sends them over a channel. These messages are collected and prioritized on a server. A receiver is another entity which receives the high-priority messages from the server and interprets them. The transmitters can be on-site sensors, CCTV cameras, local people, cell phones, etc. The possible channels are phone lines, a website, a mobile application, etc. Several kinds of messages can be transmitted, such as images, videos, text, and phone calls. Finally, the receivers can be off-site decision-makers, computer vision software, etc. The framework is sketched for one cycle (i.e., one time-step) in Figure 4.1. This cycle is repeated until messages are no longer sent from the disaster area, or for a desired period. The data prioritization parameters are adjusted after each cycle according to the information acquired from interpreted messages, as feedback from the receivers.

Figure 4.1: Data collection and interpretation framework for a single time-step.

A major challenge of crowdsourcing in a disaster situation is that the number of receivers and their interpretation power is often limited; therefore, not all the messages can be interpreted. Thus, a prioritization method is needed to decide which messages should be interpreted first by the receivers, such that the information retrieved from the disaster area is maximized or, equivalently, the uncertainty about the disaster area is minimized.
To quantify the expected value of information contained in a message, we describe the possible events in the disaster area (e.g., whether a particular bridge is damaged or not) by binary random variables. Then, we calculate the entropy of messages according to the events that they may cover. In addition to the entropy, we also consider the significance (i.e., importance) of messages in prioritization, as we may prefer to first interpret messages covering an important object or area (e.g., a high-traffic bridge). We propose a multi-objective optimization problem that jointly maximizes the significance of, and the expected value of the information contained in, the messages. We adopt the well-known Pareto optimization technique to select high-priority messages for interpretation.

To evaluate our framework, we first conduct an extensive set of experiments on the 2001 Nisqually earthquake. The damaged bridges after this earthquake are documented in [58]. We apply our framework using a realistic synthetic data set generated based on crowdsourced data obtained from the United States Geological Survey (USGS), in order to retrieve information from the field and discover damaged bridges. The experimental results show that our framework can identify over 80% of the damaged bridges and reduce the total uncertainty to 30% with 80 expert reviewers after a relatively short time. We also evaluate the feasibility of reviewing crowdsourced data by a crowd of 1,000 non-experts first, termed "crowdreviewing", and then by a smaller set of expert reviewers. With crowdreviewing, the framework identifies over 50% of the damaged bridges by employing only 8 expert reviewers. We also apply our framework to a post-disaster crowdsourcing scenario on Twitter, after the 2014 South Napa earthquake. We compare our entropy-based prioritization with a prioritization strategy based on the local probability of damage.
Furthermore, we investigate the feasibility of crowdreviewing by conducting an experimental survey and comparing the damage assessments done by the crowd with those of experts. Finally, we demonstrate the use case of our framework coupled with Celeris for extracting real-time information from the crowd in a tsunami scenario in Newport, OR. Our two-way coupling helps tune the numerical model for nowcasting and hindcasting, such that the numerical simulation determines the prioritization of messages, and the interpreted messages enhance the numerical simulation by adjusting the numerical parameters.

4.2 Related Works

While disaster-related data can be acquired in many ways, such as from automatic sensor readings, reports from field workers and civilians, etc., crowdsourcing has been shown to be a cost-effective and time-efficient way to acquire disaster data [24] and then to analyze the collected data [25]. Moreover, in contrast to sensor readings, crowdsourcing is not limited to a fixed set of locations and is normally not subject to infrastructural damage. For example, the Ushahidi-Haiti project (i.e., Mission "4636") gathered more than 80,000 text messages from on-site users (on-site rescuers, local people, etc.) after the 2010 Haiti earthquake. Using the collected data, off-site volunteers created a post-disaster map of Haiti revealing undamaged roads, buildings, hospitals, and shelters [59]. In the terms of our proposed framework, in this project, the on-site rescuers and local people were transmitters, text messages and emails were messages, the short message service (SMS) and the Internet were the corresponding channels, and the first responders and off-site users were the receivers. To increase situational awareness, the authors in [60] proposed a framework for efficient data collection and effective analysis of user-generated videos.
They achieved fast and efficient data collection by prioritizing data transmission under limited bandwidth, and effective analysis by evenly distributing the collected data among the analysts. A recent study [52] extends [60] by defining urgency maps and continuous monitoring of the disaster area. The authors in [52] evaluate their framework on the 2014 South Napa earthquake as a realistic case study. The focus in [60] and [52] is on prioritizing visual data, such as images and videos, according to spatial urgency values defined in the disaster area. Unlike [52], [60], which prioritize data based on urgency values, the current study prioritizes the interpretation of disaster-related messages considering their significance and expected value of information. This approach enables us to maximize the retrieved information about significant objects in the field under limited resources, such as a limited number of analysts or limited bandwidth. Moreover, the current study is not limited to images and videos and considers any kind of data, such as text, voice, etc., which are also valuable in understanding the disaster situation [59]. A system that is closely related to our work is GeoQ (geo-q.com/geoq/), a crowdsourcing platform for disaster data developed by the US National Geospatial-Intelligence Agency. GeoQ allows the analysts to collect geographic structured observations across a large area but manages the work in smaller regions. A disaster area is partitioned into small regions (e.g., 1 km² squares), so-called work cells, which are assigned to the analysts. The role of the analysts is to evaluate the data in their allocated work cells (e.g., social data from Twitter and YouTube, imagery data from satellites) to identify any relevant incident associated with the disaster (e.g., a building on fire, a road block) that needs to be marked on GeoQ's crisis map.
Our current work complements GeoQ by proposing a method to prioritize collected data for decision-making, so that the authorities are not overwhelmed by the flood of data.

4.3 Methodology

The major goal of our framework is reducing the uncertainty in the disaster area, because less uncertainty means more situational awareness. We first quantify the uncertainty as follows. We define binary random variables to describe the events (e.g., whether a bridge is damaged or not) in the disaster area and calculate the uncertainty using the entropy of these random variables. For example, we can assign a random variable, Xi, to each bridge in the disaster area, which takes the value of 1 if the bridge is damaged and 0 otherwise. The Bernoulli distribution of such a random variable can be defined according to the spatial intensity of the disaster and pre-processed curves, such as, in this case, fragility curves for bridges⁷. Note that inspecting bridges is the running example in this study; however, our framework can be used for any post-disaster decision-making process, for example about collapsed buildings, road closures, hazardous material leakages, etc. For the sake of simplicity, we assume the random variables defined in the disaster area are all mutually independent. In the following subsections, we first propose a model to quantify the uncertainty, or the amount of information that can be obtained from the disaster events, through analyzing the collected messages (Section 4.3.1). We then present a simple model to quantify the significance of the events (Section 4.3.2).

4.3.1 Modeling Uncertainty of Events

Let us define the random variable Xi such that the probability of the event Xi = 1 is pi, and the probability of the event Xi = 0 is qi = 1 - pi.
The entropy of Xi defines the expected value of information contained in a single trial of it, and it can be calculated by the binary entropy function, Hb, as

H(X_i) = H_b(p_i) = -(p_i \log p_i + q_i \log q_i)   (4.1)

We choose the base of the logarithm to be 2, and consequently the unit of entropy to be bits. A message revealing the value of Xi is expected to contain Hb(pi) bits of information. For example, if the probability of damage of a particular bridge is p = 0.5, the entropy of an image covering this bridge is exactly 1 bit, according to Eq. (4.1). The total uncertainty in the disaster area can be calculated as

H_T = \sum_{i=1}^{N} H_b(p_i)   (4.2)

where N is the total number of binary random variables defined in the disaster area. Now, let us define m_{j,i} to be the probability that message Mj covers the event Xi = 0|1. Since entropy is additive for independent trials, the total entropy of Mj is

H_{M_j} = \sum_{i=1}^{n} m_{j,i} H_b(p_i)   (4.3)

where n is the number of independent binary events covered by Mj. Assuming the receivers' capacity is finite, the total amount of information that all the receivers can interpret in a given time T is defined as

H_R = \sum_{j=1}^{L} H_{M_j}   (4.4)

where L is the maximum number of messages that can be interpreted in the time T. The constraint L can be imposed for several reasons. For instance, the number of receivers and their interpretation power can limit the number of messages which can be interpreted in a given time. The channel capacity can also be the dominant limitation.

⁷ Fragility curves describe the conditional probability that a structure reaches at least a given damage state as a function of the ground motion [80]-[82].

4.3.2 Modeling Significance of Events

The significance (i.e., importance) of information about different events is not necessarily the same. For example, the significance of information about a bridge might be more than the significance of information about a single building.
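The entropy quantities of Eqs. (4.1)-(4.3) are simple to compute. The following is a minimal sketch; the function names are illustrative, not from any distributed codebase.

```python
from math import log2

# Binary entropy, Eq. (4.1), and message entropy, Eq. (4.3).

def binary_entropy(p):
    """H_b(p) in bits; the limit p*log2(p) -> 0 is taken at p = 0, 1."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * log2(p) + (1.0 - p) * log2(1.0 - p))

def message_entropy(coverage, probs):
    """Eq. (4.3): coverage[i] is m_ji, probs[i] is p_i."""
    return sum(m * binary_entropy(p) for m, p in zip(coverage, probs))

one_bit = binary_entropy(0.5)   # a p = 0.5 event carries exactly 1 bit
```

Summing binary_entropy over all events likewise gives the total uncertainty H_T of Eq. (4.2).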
It is also possible for one instance of a random variable (e.g., a bridge with higher traffic) to be more significant than another (e.g., a bridge with lower traffic). To accommodate the significance of information in the framework, we define the significance of Mj as

S_{M_j} = \sum_{i=1}^{n} m_{j,i} S_i   (4.5)

where S_i is the significance of X_i. The value of S_i must be defined in advance for all random variables. For example, in the case that we study in Section 4.4, we use the daily traffic of each bridge, on a logarithmic scale, as its significance. The total expected significance that all the receivers can obtain in the time T is

S_R = \sum_{j=1}^{L} S_{M_j}   (4.6)

4.3.3 Problem Definition and Solution

In this section, we define a multi-objective optimization problem that jointly maximizes the total entropy, Eq. (4.4), and the total significance, Eq. (4.6), that receivers can obtain from the disaster area during the time-step of size T. Note that T directly translates to the maximum number of messages, L. To solve this problem, we suggest using multi-objective maximization techniques, since there might be conflicting objectives. In particular, we use Pareto optimization to iteratively select messages on the Pareto front. Our algorithm is shown in Algorithm 4.1. The Pareto front of the messages pool is identified according to the entropy and significance of the messages (Line 5). The messages on the Pareto front are removed from the pool and added to the set of selected messages, only if the size of this set does not exceed L (Lines 6-8). Otherwise, a sufficient number of messages on the Pareto front are randomly chosen, added to the set of selected messages, and removed from the messages pool, such that the total number of selected messages equals L (Lines 9-14).
Algorithm 4.1: Messages Prioritization
 1: Input: crowdsourced messages: msgs and L
 2: Output: L number of messages: selected
 3: Initialize: selected to an empty list
 4: while (selected.length < L and msgs.length > 0):
 5:     prf = paretoFront(msgs.significance, msgs.entropy)
 6:     if (selected.length + prf.length <= L):
 7:         selected.append(prf)
 8:         msgs.remove(prf)
 9:     else:
10:         while (selected.length < L):
11:             random_index = randomIndex(prf)
12:             selected.append(prf(random_index))
13:             msgs.remove(prf(random_index))
14:             prf.remove(random_index)

4.3.4 Reasoning

The first objective of the problem is maximizing the total entropy retrieved from the disaster area. Considering Eq. (4.1), the entropy is maximum when p = 0.5 (see Figure 4.2). In the example of assigning random variables to bridges after an earthquake, this means that our approach gives a higher priority to bridges with a damage probability around 0.5. This may sound counter-intuitive, since it makes us less willing to interpret messages covering bridges with higher damage probabilities (e.g., p = 0.9). However, it is justifiable according to our objective, which is maximizing situational awareness. If the damage probability of a bridge is close to 1, we are almost certain about the serious condition of that bridge (i.e., we know for sure that the bridge is damaged). Consequently, we are not willing to allocate resources to interpreting messages about that bridge. The reasoning behind the second objective (maximizing total significance) is straightforward. We prefer to interpret the messages that cover more significant events. For example, if the entropy of two messages is the same, we prefer to interpret the message which covers an important highway bridge with high traffic before the one which covers a bridge in a remote area with low daily traffic. Thus far, we have prioritized data for one time-step of size T. We next extend our solution to multiple time-steps in Section 4.3.5.
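Algorithm 4.1 can be sketched in a few lines of code. The following is a minimal, illustrative implementation assuming each message carries precomputed significance ("sig") and entropy ("ent") scores; the field and function names are assumptions, not part of any distributed codebase.

```python
import random

# Sketch of Algorithm 4.1: iteratively take messages on the Pareto
# front of (significance, entropy) until L messages are selected.

def pareto_front(msgs):
    """Messages not strictly dominated in both objectives."""
    return [m for m in msgs
            if not any(o["sig"] >= m["sig"] and o["ent"] >= m["ent"]
                       and (o["sig"] > m["sig"] or o["ent"] > m["ent"])
                       for o in msgs)]

def prioritize(msgs, L):
    pool, selected = list(msgs), []
    while len(selected) < L and pool:
        front = pareto_front(pool)
        if len(selected) + len(front) <= L:
            chosen = front                      # take the whole front
        else:
            chosen = random.sample(front, L - len(selected))
        selected.extend(chosen)
        for m in chosen:
            pool.remove(m)
    return selected

messages = [{"id": 1, "sig": 3.0, "ent": 0.2},
            {"id": 2, "sig": 1.0, "ent": 1.0},
            {"id": 3, "sig": 0.5, "ent": 0.1}]  # dominated by both others
picked = prioritize(messages, 2)
```

Here message 3 is dominated by both of the others, so with L = 2 the first Pareto front (messages 1 and 2) is selected in full and no random tie-breaking is needed.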
Figure 4.2: Entropy of a binary random variable.

4.3.5 Continuous Monitoring

After a disaster, for a persistent increase in situational awareness, we expect to continuously receive and interpret messages from the disaster field for a certain period. To handle this stream of messages, we solve the problem defined in Section 4.3.3 in discrete time-steps of size T. After each time-step, newly received messages are added to the messages pool, and L messages are selected from the pool for interpretation. After interpreting a message, Mj, covering an event, Xi = 0|1, we must update the probability of that event according to the information we may have retrieved. In the ideal case, after interpretation, the value of Xi is revealed and we can set pi to either 1 or 0. However, in the real world, a single message may not reveal the value of Xi due to insufficient coverage of the message, channel noise, and/or imperfect receivers. For example, a single image may not cover the whole bridge, or a non-expert off-site user or a computer vision software may not be able to detect a damage visible in the image. We suggest an updating system considering these facts in the next section.

4.3.5.1 Updates in Time

Let us define a binary random variable, Ij, for the interpretation of the message Mj, such that Ij = 1 is the event where Mj reveals Xi = 1. The other case for interpretation is Ij = 0, which means that the value of Xi is not revealed. We define the conditional probability of interpretation of a message as

r_j = \Pr(I_j = 1 | X_i = 1)   (4.7)

For the sake of simplicity, we assume r_j is constant for all interpretations and substitute it with r. For the case where an interpretation has revealed Xi = 1, we assume

\Pr(X_i = 1 | I_j = 1) = 1   (4.8)

Employing Bayes' theorem gives

\Pr(X_i = 1 | I_j = 0) = \frac{(1 - r) p_i}{1 - r p_i}   (4.9)

Eq. (4.9) calculates the probability of Xi = 1 after an unsuccessful interpretation.
Thus, we can update pi in the next time-step (t + 1) based on its value in the current time-step (t), according to

p_i^{t+1} = \frac{(1 - r) p_i^t}{1 - r p_i^t}   (4.10)

where the superscripts refer to the time-step number. We use Eq. (4.10) to update the probability of events, and consequently the entropy of messages, after unsuccessful interpretations (Ij = 0) in each time-step. After a successful interpretation (Ij = 1) the value of Xi is revealed (Xi = 1) and therefore we can set pi = 1.

4.3.5.2 Local Proximity Enhancement

The probability distributions for the binary variables defined in a disaster area (e.g., fragility curves) are often based on limited observations of prior events and therefore are error-prone. Moreover, the spatial disaster intensity (e.g., ShakeMaps in earthquakes) is generally estimated based on a network of sensors, numerical analysis, etc., and may contain errors as well. Hence, we propose a method to increase the probability, pi, of Xi, if an event Xj = 1 is observed nearby, where Xj is of the same type as Xi (e.g., both are assigned to bridges). Let us assume that the probability pi of Xi = 1 is statistically calculated as the maximum likelihood estimate from ki observations, of which ki,1 observations were Xi = 1. Thus, we have

p_i = \frac{k_{i,1}}{k_i}   (4.11)

If we observe another Xi = 1 in the next trial, we can update the value of pi as

p_i := \frac{k_{i,1} + 1}{k_i + 1} = \frac{p_i k_i + 1}{k_i + 1}   (4.12)

Whether or not to use Eq. (4.12) to update the probability of random variables in each time-step is judgmental and must be studied case by case.

4.3.6 Crowdreviewing

In a crisis, resources are generally very limited, and assigning several experts to review crowdsourced messages might not be feasible. An alternative is to harness the power of the crowd again, this time for reviewing the crowdsourced images (crowdreviewing).
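The two probability updates of Eqs. (4.10) and (4.12) above reduce to one-line functions. The sketch below is illustrative; the function names are assumptions.

```python
# Probability updates for continuous monitoring.

def update_unsuccessful(p, r):
    """Eq. (4.10): Pr(X_i = 1 | I_j = 0) after a message fails to reveal X_i."""
    return (1.0 - r) * p / (1.0 - r * p)

def update_proximity(p, k):
    """Eq. (4.12): fold one extra positive observation into k prior trials."""
    return (p * k + 1.0) / (k + 1.0)

p_next = update_unsuccessful(0.5, 0.5)   # (0.5 * 0.5) / (1 - 0.25) = 1/3
```

Note that update_unsuccessful leaves a certainty p = 1 unchanged, and repeated unsuccessful interpretations drive p, and hence the event's entropy, monotonically toward zero, which is why uninformative messages about the same event gradually lose priority.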
We embed the crowdreviewing step in our previously explained framework (see Figure 4.1) between the prioritization step and the interpretation step (see Figure 4.3). Crowdreviewing helps us filter the data before sending it for expert review. However, we expect the non-expert crowd to make more mistakes in interpreting the messages, and therefore we aggregate the opinions of a few crowd receivers before making a decision.

Figure 4.3: Incorporating crowdreviewing in the data collection and interpretation framework.

4.3.7 Coupling with Simulation Software

We propose a real-time two-way coupling with a fast numerical simulation tool, such as Celeris, as an application of our crowdsourcing framework. In this two-way coupling framework, the numerical simulations first provide us with the scope of the disaster and an initial damage probability. This information enables us to collect the geo-tagged data related to the disaster area and interpret the high-priority ones, providing us with more information about the real scope of the disaster, and therefore helping us tune our numerical simulation and continue the loop. Such a cyclic system rapidly increases the situational awareness of the authorities and helps them make more informed decisions. Coupling data crowdsourcing with numerical models of disasters is a relatively new subject of interest. Two prior research works on applications of such a system in flood management can be found in [16], [17]. Figure 4.4 shows our coupling framework. In Section 4.4.2.3 we show an example of coupling the prioritization framework with Celeris for a tsunami scenario in Newport, OR.

Figure 4.4: Two-way coupling of tsunami simulation software with the data collection and interpretation framework.

4.4 Performance Evaluation

4.4.1 2001 Nisqually earthquake

4.4.1.1 Experimental Setup

We evaluate our proposed framework, shown in Figure 4.1, on the 2001 Nisqually earthquake, which had a moment magnitude of 6.8.
This earthquake damaged 78 bridges, as documented in [58]. Ranf et al. [55] studied these bridges and developed fragility curves accordingly, which showed that the percentage of damaged bridges correlates best with the spectral acceleration at a period of 0.3 s (SA03). (Spectral acceleration (SA) is approximately the maximum acceleration experienced by a building during an earthquake and can be measured at different oscillation frequencies.) We use the spatial SA03 data published by the USGS in the ShakeMap of the 2001 Nisqually earthquake, shown in Figure 4.5, and apply our framework for continuous monitoring introduced in Section 4.3.5.

Figure 4.5: Spectral acceleration at a period of 0.3 s for the 2001 Nisqually earthquake.

We define a binary random variable, Xi, for each highway bridge in the disaster area such that Xi = 1 and Xi = 0 indicate damaged and undamaged bridges, respectively. We queried the Hazus-MH database to collect all 3257 highway bridges in the disaster area along with their daily traffic numbers, and used the fragility curves developed in [55] (see Figure 4.6) to calculate the initial pi values for Xi = 1. The logarithm of the daily traffic of a bridge is used as its significance factor.

Figure 4.6: Fragility curves for bridge damage, accounting for bridge age [55].

4.4.1.2 Data Acquisition and Interpretation System

For this study, we assume that on-site users send images of bridges using our MediaQ mobile application. MediaQ is an online media data management system [61], which can be used to collect visual, audio, and text data from the disaster area. MediaQ also collects the location and direction data (called metadata) for images and videos. This metadata can itself be interpreted as useful information after a disaster. For example, in Figure 4.7 the video trajectory extracted from the metadata shows the debris line after the 2016 El Niño at Dockweiler State Beach, CA.
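The initial pi values above are read from the fragility curves of [55], which are given graphically. Purely as an illustration of how a pi value can be computed from SA03, the sketch below assumes a lognormal fragility curve, a common analytic form; the median and dispersion are placeholders, not values from [55].

```python
import math

def damage_probability(sa03, median=1.0, beta=0.6):
    """P(damage | SA03) under an assumed lognormal fragility curve with
    hypothetical median (in g) and log-standard deviation beta."""
    if sa03 <= 0:
        return 0.0
    z = math.log(sa03 / median) / beta
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))
```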
We also extended the MediaQ system to query Google Street View to provide a "before scene" for visual data collected from the disaster area. For instance, Figure 4.8a shows an image of beach erosion at Santa Monica Beach, CA, after the 2016 El Niño, and Figure 4.8b shows the "before scene" queried from Google Street View. MediaQ can also be used by researchers for rapid data collection in post-disaster field surveys such as [62], [63]. The images sent by MediaQ users from the disaster area are collected on the server side and prioritized for interpretation. Thus, in this case, the on-site users are the transmitters, the images are the messages, MediaQ is the channel, and the server-side analysts are the receivers. We assume that the constraint, L, is imposed by the number of analysts, i.e., our analysts analyze a maximum of L images in time T. Since randomness is involved in the experiments, all results shown in this section are average values from 15 iterations, unless stated otherwise.

4.4.1.3 Realistic Synthetic Data

To evaluate the performance of our proposed framework for the 2001 Nisqually earthquake case, we must generate synthetic crowdsourced data with a realistic spatiotemporal distribution. For this purpose, we use real-world data from the USGS program "Did You Feel It (DYFI)?", which lets people report the level of shaking when they feel an earthquake. These data are accessible on the USGS website at http://earthquake.usgs.gov/data/dyfi/.

Figure 4.7: Screenshot of the MediaQ website, showing a debris line at Dockweiler State Beach, CA, after the 2016 El Niño.

Figure 4.8: Screenshots of the MediaQ website showing the after (a) and before (b) scenes for the 2016 El Niño in Santa Monica, CA.

We use the spatial distribution of these historical reports to simulate and synthesize the spatial distribution of the user-generated messages from the disaster area.
Furthermore, we generate the number of messages in each time-step proportional to the number of DYFI responses in the same period. We use a time-step size of T = 5 min, assuming that it takes 5 minutes for an analyst to view and analyze an image to decide whether the bridge is damaged. We continue retrieving information for 3 hours (36 time-steps). Out of the 78 actually damaged bridges, 71 are covered by the USGS ShakeMap and 62 are covered by our synthetic data. Figure 4.9 shows the map of the disaster area. The circles show all the highway bridges in the area, and their color indicates their daily traffic (i.e., the total annual volume of vehicle traffic divided by 365 days) on a logarithmic scale. Damaged bridges are marked by downward triangles. DYFI response boxes are also shown. Each response box in this figure covers a 10 km x 10 km area and contains a certain number of responses. A total of 7702 DYFI responses are located within the boundaries of the map. In each time-step, we generate a certain number of images in each response box according to the temporal distribution of the DYFI data, and randomly assign each image to a bridge in the same response box. This means that we assume each image covers one, and only one, bridge.

Figure 4.9: All highway bridges (circles), damaged bridges (red triangles), and DYFI response boxes (squares) in the disaster area of the 2001 Nisqually earthquake.

4.4.1.4 Experiment Results

To prioritize images for interpretation, we use Pareto maximization as described in Section 4.3.3. We plot the entropy of the images in the message pool against the logarithm of the daily traffic of the bridges they cover. Figure 4.10 shows this plot in the first time-step. L images (corresponding to L analysts) are selected from the Pareto layers, shown in different colors in this figure. Figure 4.11 shows the locations of the images in the first time-step in the disaster area. Selected images are marked by squares in this figure.
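The Pareto-layer selection just described can be sketched as follows: each image is an (entropy, significance) point, successive Pareto fronts (maximizing both coordinates) are peeled off, and images are taken front by front until L are chosen. This is our own minimal implementation for illustration, not the code used in the experiments.

```python
def pareto_layers(points):
    """Peel successive Pareto fronts from (entropy, significance) tuples,
    where larger is better in both coordinates."""
    remaining = list(points)
    layers = []
    while remaining:
        front = [p for p in remaining
                 if not any(q[0] >= p[0] and q[1] >= p[1] and q != p
                            for q in remaining)]
        layers.append(front)
        remaining = [p for p in remaining if p not in front]
    return layers

def select_top(points, L):
    """Take points layer by layer until L are chosen."""
    chosen = []
    for layer in pareto_layers(points):
        for p in sorted(layer, reverse=True):
            if len(chosen) < L:
                chosen.append(p)
        if len(chosen) >= L:
            break
    return chosen
```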
Marker colors show the logarithm of the daily traffic and marker sizes show the entropy of the bridges. We first evaluate our framework without employing the local proximity updates; the effects of these updates are studied next. To evaluate the performance of our framework, we monitor the total uncertainty in the disaster area as a function of the elapsed time. Figure 4.12 illustrates the results for the total uncertainty, Eq. (4.2), for different numbers of analysts (L) and variable values of r. The y-axis is the sum of the uncertainty of all the bridges in the disaster area and the x-axis shows the time after the earthquake in minutes.

Figure 4.10: Entropy against the logarithm of daily traffic. Pareto layers are shown in color.

Figure 4.11: Images in the first time-step. Larger markers show bridges with larger entropy.

The uncertainty value is normalized by the total uncertainty in the disaster area right after the disaster. A higher number of analysts leads to a steeper decrease in the total uncertainty, as we can interpret more images in a given time. As expected, larger values of r tend to cause a larger decrease in the total uncertainty as well. In the best case, with L = 80 and r = 1, the total uncertainty is reduced to 50% after only 1 hour and to 30% after 3 hours. We also count the number of damaged bridges that our framework discovers after the last time-step (i.e., after 3 hours). This number is normalized by the total number of damaged bridges covered by our synthetic data (62) and reported as a percentage in Figure 4.13. The number of discovered damaged bridges also increases by employing more analysts or by any increase in r. In the best case, with L = 80 and r = 1, over 80% of the damaged bridges are identified by the end of the experiment.

Figure 4.12: Total uncertainty percentage over time after the disaster, for different values of L and r.

Figure 4.13: Total percentage of damaged bridges discovered at the end of the experiment.
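The normalized total uncertainty tracked in Figure 4.12 can be computed as in the sketch below, assuming Eq. (4.2) is the sum of the binary entropies of the bridge variables (in bits), divided by its value right after the disaster; the helper names are ours.

```python
import math

def binary_entropy(p):
    """H(X) in bits for a Bernoulli variable with P(X=1) = p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def normalized_total_uncertainty(current_probs, initial_probs):
    """Sum of bridge entropies, normalized by the initial total."""
    return (sum(binary_entropy(p) for p in current_probs)
            / sum(binary_entropy(p) for p in initial_probs))
```

Interpreting a message drives that bridge's probability to 0 or 1, removing its entropy from the sum.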
4.4.1.5 Effect of Local Proximity Enhancement

In this section, we evaluate the effect of the local proximity updates explained in Section 4.3.5.2. The fragility curves developed in [55] are based on 78 bridges, categorized into several classes and several probabilistic levels. Thus, we may assume that the probability pi for a bridge in these fragility curves is based on roughly 10 observations, and we therefore use ki = 10 in our experiments. Since for all cases where r ≠ 1 we can never discover an undamaged bridge (Xi = 0), we only update pi if a damaged bridge (Xi = 1) is discovered nearby. Each time a damaged bridge is discovered, we update pi for all the bridges within a 10 km x 10 km box centered on the newly discovered damaged bridge. Figure 4.14 compares the percentage of damaged bridges discovered with and without local proximity updates. As shown in this figure, the updates improve the results by up to 30%. Figure 4.15 shows the damaged bridges discovered with and without local proximity updates for a single trial with L = 20 and r = 1.00. We observe that the local proximity updates lead to clusters of damaged bridges being identified.

Figure 4.14: Percentage of damaged bridges discovered with (solid line) and without (dashed line) local proximity updates.

Figure 4.15: Damaged bridges discovered without (blue triangles) and with (blue and red triangles) local proximity enhancement for L = 20 and r = 1.00.

4.4.1.6 Crowdreviewing

We showed that 80 expert receivers (r = 1.00) can identify more than 80% of the damaged bridges from crowdsourced images (Figure 4.13). However, in a crisis situation, resources are generally very limited and assigning 80 experts to review crowdsourced images might not be feasible. An alternative is to harness the power of the crowd again, this time for reviewing the crowdsourced images (crowdreviewing).
To investigate the feasibility of crowdreviewing, we alter our framework such that 100 images are selected in each time-step from the Pareto layers and each image is reviewed by 10 off-site users from the crowd. This means that we need 1000 receivers from the crowd. Since the crowd is typically not well trained to detect bridge damage, we let r take a very small value from the set {0.02, 0.04, 0.08, 0.16}. Moreover, we assume that the crowd may interpret an image as showing a damaged bridge while the bridge is in fact undamaged. We set the probability of this error to 0.01:

P(I_j = 1 | X_i = 0) = 0.01

If any of the 10 reviewers interprets an image as showing a damaged bridge (either correctly or mistakenly), we add that image to a set of images, S, which will be reviewed by a few experts at the end of each time-step. Since the number of experts is limited, in each time-step L images from the set S are reviewed by experts according to our prioritization strategy introduced in Section 4.3.3. If the number of images in S is less than L, all of them are reviewed. At the end of each time-step, the experts decide whether a bridge is indeed damaged. Figure 4.16 shows the percentage of damaged bridges discovered by crowdreviewing for different values of L and r. We observe that more than 50% of the damaged bridges can be identified by only 8 experts if the crowd detects the damaged bridges with a probability of only r = 0.08. To achieve the same best result as the case without crowdreviewing (i.e., 80% of damaged bridges discovered with 80 experts), we need only 16 experts when r = 0.16. These results confirm that crowdreviewing is feasible and can be helpful in crisis situations where a large number of expert analysts may not be available.

Figure 4.16: Total percentage of damaged bridges discovered by using crowdreviewing.
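The forwarding rule above can be sketched as a small simulation (our own function name): each reviewer reports damage with probability r if the bridge is damaged, or with the false-positive probability 0.01 otherwise, and the image is forwarded to the expert set S if any reviewer reports damage.

```python
import random

def forwarded_to_experts(damaged, r, n_reviewers=10, false_pos=0.01,
                         rng=random):
    """True if at least one of the crowd reviewers flags the image
    as showing a damaged bridge."""
    p_report = r if damaged else false_pos
    return any(rng.random() < p_report for _ in range(n_reviewers))
```

With 10 reviewers, even a small per-reviewer detection probability r gives a usable per-image detection probability of 1 - (1 - r)^10.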
4.4.2 2014 South Napa Earthquake

In this section, we study the 2014 South Napa earthquake [64], the largest earthquake in the Bay Area since 1990. This earthquake occurred in the North San Francisco Bay Area on August 24 at 03:20:44 PDT, with a moment magnitude of 6.0. The maximum Modified Mercalli Intensity (MMI) of the earthquake was VIII, which corresponds to a "severe" intensity. MMI indicates the local effects of an earthquake on a scale from I (not felt) to XII (total destruction). For a complete list of MMI levels and associated descriptions, see [65]. MMI values depend on the distance from the earthquake epicenter and the local soil characteristics, and are generally published by the USGS a few minutes after an earthquake. Figure 4.17 shows the MMI map for the 2014 Napa earthquake, taken from the USGS website.

Figure 4.17: Intensity map of the 2014 Napa earthquake from the USGS.

4.4.2.1 Data Collection

We collected geotagged tweets using the Twitter streaming API and a publicly available dataset [66], which consists of the IDs of geotagged tweets within the United States from June 2014 to November 2015. The author would like to acknowledge the efforts of Dr. Hien To, a PhD candidate at USC at the time of this research, in collecting the tweets. We extracted only geo-tagged tweets for August 24, 2014 (the event day). We used the matching-based technique proposed in [67] to filter tweets relevant to this earthquake and found 8131 tweets. Figure 4.18 compares the cumulative temporal distribution of the tweets with that of the DYFI reports. The graph shows that while the tweets have a temporal distribution very similar to the DYFI data, they are acquired significantly faster than the DYFI reports right after the earthquake. Next, we plot the elapsed time of each tweet after the earthquake against its distance from the epicenter in Figure 4.19.
The arrival time of the earthquake at seismic stations is also shown in this figure, which shows that the spatiotemporal distribution of the tweets follows the speed of the seismic wave, as expected. The figure shows that the first tweet ("OMG earthquake!") was posted only 21 seconds after the earthquake occurred.

Figure 4.18: Cumulative temporal distribution of tweets vs. DYFI reports.

Figure 4.19: Tweet time versus distance from the epicenter (black dots) and earthquake arrival at seismic stations (red dots).

We filtered the dataset further and only considered tweets containing the word "damage". This filtering decreases the number of tweets to 305, allowing us to manually read and label them. The last tweet in our set was posted 19 hr after the earthquake. We labeled each tweet as "damage" if it reports damage to a local building, "no damage" if it reports that a local building was undamaged, and "uninformative" if the tweet did not disclose information regarding the status of a local building. We observed several types of uninformative tweets, including tweets reporting damage elsewhere (e.g., from local news agencies). The data are shown in Figure 4.20, where tweets reporting damage are shown in red and those reporting no damage are shown in green. The uninformative tweets are shown in blue.

Figure 4.20: Spatial distribution of tweets reporting damage (red), no damage (green), and uninformative tweets (blue).

The cumulative temporal distribution of the labeled tweets is shown in Figure 4.21. This figure indicates that the "no damage" tweets dominate in the first hour after the earthquake, while there are only a few reports of damage in the first 30 minutes. This shows that people are slow to tweet about damaged buildings, but they report the lack of damage very quickly. The number of "uninformative" tweets increases steadily until the fifth hour after the earthquake, at which point it increases rapidly.
This time coincides with 8–9 am local time, when the local news agencies start reporting the event. At the same time, unaffected people also hear the news and use Twitter to get more information or to emotionally support victims. The stream of informative tweets almost stops after eight hours.

Figure 4.21: Cumulative temporal distribution of tweets reporting damage, no damage, and uninformative tweets.

4.4.2.2 Experiment

We apply our framework to prioritize and interpret our set of 305 tweets from the South Napa earthquake by defining a binary random variable, Xi, for each building in the disaster area such that Xi = 1 and Xi = 0 indicate a damaged and an undamaged building, respectively. We assume that each tweet reports the status of one building. Of course, this is not true of the tweets which we found to be uninformative. However, we assume that the labels of the tweets are invisible to our framework until a tweet is selected and interpreted. We use the ATC-13 masonry building fragility curves [68] (see Figure 4.22) to calculate the initial pi values for Xi = 1. To simplify the process, we only consider the probability of moderate damage. The entropy of each tweet is therefore calculated according to its pi value. To define the significance of each tweet, we would need information about the occupancy of each building. Since we do not have access to this information, we consider the significance of all tweets to be equal. This assumption simplifies the problem to a single-objective maximization problem.

Figure 4.22: Moderate damage fragility curves for masonry buildings [68].

Figure 4.23 shows the probability of damage to buildings in the disaster area after the 2014 Napa earthquake, calculated based on the USGS ShakeMap and the ATC-13 masonry building fragility curves [68]. We assign a probability of damage to each geo-tagged tweet in our filtered set according to this probability map and calculate the corresponding entropy.
Figure 4.23: Probability of damage to buildings after the 2014 Napa earthquake.

We assume that we can interpret only 10% of the total number of tweets in our set, i.e., 30 out of 305 tweets. Distributing this interpretation power uniformly during the 19-hour experiment means that we can acquire and interpret one tweet every 38 minutes. We run two experiments with this setup. First, the framework chooses the most informative tweet (i.e., the tweet with the largest entropy) every 38 minutes. Then we repeat this experiment, but this time the framework chooses the tweet with the largest probability of damage. In each experiment we track two metrics: the total information gain in bits and the total number of damaged buildings discovered over time. Figure 4.24 and Figure 4.25 compare these metrics between the experiments.

Figure 4.24: Total information gain over time for the two prioritization strategies.

Figure 4.25: Number of damaged buildings discovered over time for the two prioritization strategies.

Let us assume that the purpose of crowdsourcing in this case is distributing rescue teams and resources to the people in need. Figure 4.24 and Figure 4.25 show that, as expected, the information gain is larger when the framework prioritizes the tweets according to their entropy, and more damaged buildings are discovered when the framework prioritizes the tweets according to their probability of damage. This may raise the question of why one should choose entropy prioritization only to discover fewer damaged buildings. The benefit of entropy prioritization can be explained by Figure 4.26, which compares the damage probability of the discovered damaged buildings over time for each prioritization method. Almost all the damaged buildings discovered using probability prioritization have damage probabilities higher than 90%. However, more than half of the damaged buildings discovered using entropy prioritization have damage probabilities less than 50%.
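The two strategies just compared differ only in the greedy selection key: entropy favors tweets with pi near 0.5, while damage probability favors pi near 1. A minimal sketch with toy pi values (not the experimental data):

```python
import math

def entropy(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def prioritize(tweets, budget, key):
    """Greedily select `budget` tweets maximizing `key`."""
    return sorted(tweets, key=key, reverse=True)[:budget]
```

With tweets at pi = 0.95, 0.5, and 0.1, entropy prioritization picks pi = 0.5 first (H = 1 bit), while probability prioritization picks pi = 0.95 first, mirroring the behavior seen in the two experiments.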
We argue that decision makers can dispatch rescue teams to areas with high damage probability (say >90%) immediately, without relying on confirming tweets. Perhaps it is better to tolerate a small margin of error (in this case 10%) and dispatch rescue teams blindly than to waste time and interpretation resources confirming damage in an area where we are almost sure that damage has occurred. The entropy prioritization method helps us make informed decisions for areas with medium damage probability and, among them, allocate resources only to those areas which are actually damaged. Note that the area with medium intensity in a disaster is normally much larger than the area with higher intensity, and therefore allocating resources to areas with medium damage intensity requires more careful planning.

Figure 4.26: Probability of damage for discovered damaged buildings over time.

4.4.2.3 Crowdreviewing

In this section we examine the accuracy of crowdreviewing by non-experts on real geo-tagged images from the disaster area of the 2014 South Napa earthquake. These images and their associated data are published by the Earthquake Engineering Research Institute and are available at http://eqclearinghouse.org/map/2014-08-24-south-napa/ [69]. The dataset contains hundreds of geo-tagged images of buildings after the Napa earthquake, taken and annotated by tens of individuals. Each photo also has information about the damage level of the building assigned by the observer. Four damage levels exist: light, moderate, severe, and collapsed. Observers are also categorized as undergraduate student, graduate student, or professional/academic (>5 years of experience). We only considered the images labeled by professionals or academics, so we can claim that the images in our dataset are labeled by experts. We randomly selected 15 images, such that at least three images of each damage level are included in our dataset.
The set of images is shown in Appendix II (see Section 4.6). We conducted a survey in March 2016 over social media, inviting participants to label each image with one of the four damage levels. The survey attracted 107 participants with unknown demographics, resulting in 78 completed surveys. Participants were given the following description: "Thank you for participating in this survey. We expect the survey to take about 5 minutes to complete. Let's assume it is 2014, right after the Napa earthquake in northern California. You have voluntarily participated in an online effort to survey the post-disaster damages to the disaster area. In the next pages, you will see 15 images of damaged buildings. For each image, please evaluate the damage level among "Light", "Moderate", "Severe", and "Collapsed". There is no particular order or a certain number of instances in each case. Please do your best and consider yourself in the real situation after a disastrous earthquake." We randomly divided the participants into two groups. The participants in the first group were immediately taken to the survey to evaluate the damage level in the images. The participants in the second group, however, were initially shown one sample image of each damage level with a short verbal explanation (see Figure 4.27). We call the first group "w/o training" and the second group "w/ training". To aggregate the results from the crowd, we quantified each damage level with a number from 1 to 4 in ascending order, i.e., 1 for light and 4 for collapsed. We aggregated the data by averaging the numerical labels and rounding the result to correspond to one of the damage levels. We also studied different methods of aggregating the results, such as taking the mode of the labels or assigning larger weights to labels from participants who performed better on the first few images.
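The baseline averaging rule, chosen before the survey, can be sketched as follows (our own helper; levels are coded 1-4 and the mean is rounded back to a level):

```python
LEVELS = ["light", "moderate", "severe", "collapsed"]

def aggregate(labels):
    """Average the numeric codes (1-4) of the crowd labels and round
    the result to the nearest damage level."""
    codes = [LEVELS.index(label) + 1 for label in labels]
    mean = sum(codes) / len(codes)
    return LEVELS[round(mean) - 1]
```

Note that Python's round() rounds halves to the nearest even integer, so exact .5 averages would need an explicit tie-breaking rule in practice.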
While these aggregation methods showed improved results, we refrain from discussing them further in this study, because choosing an aggregation method after observing the data tends to be biased; we therefore only present the averaging method, which we had chosen before starting the survey.

Figure 4.27: Training on the damage levels for the "w/ training" group of participants.

Figure 4.28 shows the average damage level as a real number, as well as the rounded average corresponding to one of the damage levels. This figure is rendered for all 78 participants, without classifying them into the "w/ training" and "w/o training" groups. As can be seen, the crowd correctly assessed the damage level in 2/3 of the cases. In the remaining 1/3 of the cases, their assessments were only one level away from those of the experts. Figure 4.29 compares the assessments of the crowd categorized into the two groups of "w/ training" and "w/o training". The group without training correctly assessed only 2/5 of the cases, with errors as large as two levels. The trained group, on the other hand, correctly assessed 2/3 of the cases, with a maximum error of one level. In this study we showed that a crowd with minimal training assessed the damaged buildings correctly in 67% of the cases. The error in the rest of the cases was confined to an acceptable level. More study is required to develop better aggregation techniques, such as assigning a trustworthiness value to each participant based on their past performance and other metrics.

Figure 4.28: Crowd and expert assessment of the damage level in the building images from the 2014 South Napa earthquake.

Figure 4.29: Crowd and expert assessment of the damage level in the building images from the 2014 South Napa earthquake for the groups w/ and w/o training.

4.4.3 Coupling with Celeris

To evaluate and showcase coupling the prioritization framework with disaster simulation software, we set up an experiment in which we couple the framework with Celeris.
This experiment studies a hypothetical tsunami in Newport, OR, caused by a megathrust earthquake in the Cascadia subduction zone. A magnitude 9.0 earthquake scenario in this zone was introduced by the Cascadia Region Earthquake Workgroup (CREW) [70] and is used frequently by researchers to discuss the tsunami risk on the US West Coast. We simulated this scenario, calling it the reference scenario, using the MOST model [71] and derived the wave time series near Newport, OR. We scale the wave heights in the reference scenario time series to generate larger or smaller scenarios, calculating the flow velocities from linear theory. The author acknowledges significant help from Dr. Aykut Ayca, a former student at the USC Tsunami Research Center, in simulating the reference scenario and the nearshore propagation data which were used as the boundary condition for Celeris. Let κ denote the coefficient which scales the reference scenario. We assume that after an earthquake in the Cascadia subduction zone, the scientists believe a tsunami hit Newport, where κ is defined by a normal distribution with a mean value of 1 and a standard deviation of 0.33. Therefore, the probability of κ > κ0 is given by the complementary cumulative normal distribution function. To reduce our problem to a discrete one, and without loss of generality, we assume n possible scenarios. In the current study we chose n = 41 discrete probabilities corresponding to scale values 1/3 ≤ κ ≤ 5/3 with steps of 1/30. Note that the given values and assumptions are only for illustration purposes and should not be used otherwise. Figure 4.30 shows the Oregon coast around Newport. We simulate the tsunami scenarios in a fairly large area to get better results (solid rectangle). However, we apply our framework to a smaller area where more inundation is expected due to the lower elevation (dashed rectangle), called the disaster area. Figure 4.31 shows the topography of the disaster area.

Figure 4.30: Oregon coast around Newport.
The solid rectangle shows the simulation area in Celeris and the dashed rectangle shows the disaster area considered for the experiment.

Figure 4.31: Topography of the disaster area considered in the experiment.

The disaster area is divided into 270×220 cells with sizes of approximately 0.2×0.2 degrees. This size is consistent with the resolution of the bathymetry we have on file for the area. We assign a binary variable Xi to each cell, where Xi = 1 is the event that cell i is flooded and Xi = 0 is the event that cell i is dry. These variables are assumed to be mutually independent for simplicity. We assume that a total of 1000 messages are available from the disaster area, where interpreting each of them reveals whether the containing cell is flooded or dry. In this experiment we are dealing with two kinds of uncertainty. First, the scale of the tsunami, κ, is unknown. Second, since we have not run any simulations, we are uncertain which cells get flooded in each scenario. We define a lower bound probability, Pl,i, and an upper bound probability, Pu,i, for the probability of Xi = 1, denoted Pi. Initially the lower and upper bounds are set to 0 and 1, respectively. We assume that Pi can take any of the n possible values which lie between Pl,i and Pu,i with uniform probability. Therefore, the expected value of Pi is the average of its lower and upper bounds. This means that the initial probability for all the cells is set to 0.5. The cells which fall inside the ocean or the river are considered flooded, with Pl,i and Pu,i set to 1. After we simulate a scenario s with probability Psc,s, we can set the lower bound probability of all the cells flooded by this scenario to Psc,s, because scenarios with a smaller probability are stronger and flood all the cells flooded by smaller tsunamis.
Similarly, we can set the upper bound probability of all the cells not flooded by scenario s to Psc,s. This is how simulating a scenario updates the probabilities. Simulating all the scenarios should lead to Pl,i = Pu,i in the entire domain. As mentioned before, there is uncertainty about the probability of a cell being flooded. Note that this uncertainty is different from the uncertainty of the cell being flooded. The former kind of uncertainty in each cell is defined by

H(P_i) = -\sum_{k=1}^{n} P(p_{i,k}) \log P(p_{i,k}) = -\sum_{k=l}^{u} \frac{1}{(P_{u,i}-P_{l,i})\,n} \log \frac{1}{(P_{u,i}-P_{l,i})\,n} = \log\big((P_{u,i}-P_{l,i})\,n\big)    (4.13)

The latter kind of uncertainty in each cell is defined by

H(X_i) = -\sum_{k=1}^{2} P(x_{i,k}) \log P(x_{i,k}) = -P_i \log P_i - (1-P_i) \log(1-P_i)    (4.14)

Running a scenario in Celeris reduces the difference between Pu,i and Pl,i in those cells which get updated, thereby reducing the uncertainty H(Pi) or, in other words, giving us more information about the disaster area. In our framework, in each step we select and run the scenario which reduces the sum of H(Pi) over all cells more than any other scenario. In Appendix I (see Section 4.5) we show that the binary search algorithm can be explained using this logic. Interpreting a message is also likely to give us some information. If a message reveals cell j to be flooded, all cells i with Pu,j < Pl,i must also be flooded. On the other hand, if a message reveals cell j to be dry, all cells i with Pu,i < Pl,j must also be dry. We can set the lower and upper bounds for the flooded cells to 1, and for the dry cells to 0. This procedure updates the probabilities of the cells after reading a message. Setting the probability of a cell to 0 or 1 reduces its uncertainty, H(Xi), to zero, giving us more information about the disaster area. In our framework, in each step we select the message which we expect to give us the most information about the disaster area.
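The scenario-update rule and the two uncertainty measures, Eqs. (4.13) and (4.14), can be sketched together as below (our own minimal helpers, with entropies in bits; Eq. (4.13) assumes Pi is uniform over the (Pu,i − Pl,i)·n discrete values between its bounds):

```python
import math

def apply_scenario(bounds, flooded_cells, p_sc):
    """Tighten each cell's (P_l, P_u) after simulating scenario s with
    exceedance probability p_sc: flooded cells get P_l raised to p_sc,
    dry cells get P_u lowered to p_sc."""
    for cell, (lo, hi) in bounds.items():
        if cell in flooded_cells:
            bounds[cell] = (max(lo, p_sc), hi)
        else:
            bounds[cell] = (lo, min(hi, p_sc))
    return bounds

def probability_uncertainty(p_lo, p_hi, n=41):
    """Eq. (4.13): H(P_i) = log2((P_u - P_l) * n)."""
    m = (p_hi - p_lo) * n
    return math.log2(m) if m > 1 else 0.0

def flood_uncertainty(p):
    """Eq. (4.14): binary entropy of the flooded/dry outcome X_i."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)
```

For example, apply_scenario({0: (0.0, 1.0), 1: (0.0, 1.0)}, {0}, 0.7) tightens cell 0 to (0.7, 1.0) and cell 1 to (0.0, 0.7), lowering probability_uncertainty in both cells.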
This is done by calculating the expected amount of information the message Mj in cell j reveals about itself and other cells. Let E(Hj) denote this value:

E(H_j) = P_j \sum_{i:\, P_{u,j} < P_{l,i}} H(X_i) + (1-P_j) \sum_{i:\, P_{u,i} < P_{l,j}} H(X_i)    (4.15)

In each step, we calculate E(Hj) for all messages and then interpret the one with the maximum value of E(Hj). In summary, we start with a disaster area where the lower and upper bound probabilities of a cell being flooded are set to 0 and 1, respectively. Then, we select the scenario which gives us the maximum information about the lower and upper bound probabilities in the cells, and update these probability values accordingly. The message whose interpretation is expected to reveal the most information about the disaster area is then selected and interpreted. This message will likely reveal some other cells to be either flooded or dry. It is also possible that interpreting a message does not reveal any information in a step. Therefore, we keep all messages in a list and revisit them in each step. After reading a message we update the probability bounds in each cell. We also update the scenario probabilities according to the message status. For example, if a message reveals that a cell with an upper bound probability of 0.5 is dry, then the disaster scale must be smaller than 1. This loop is repeated until we reduce the uncertainty in the disaster area to zero, i.e., Pu,i = Pl,i ∈ {0, 1} for all cells. We apply our coupling framework to values of κ = 0.5, 1, and 1.5. In each experiment, we assume a tsunami associated with the given value of κ happened, which is unknown to us, and we try to understand the scope of the disaster by running our coupled prioritization framework. Figure 4.32 shows the initial expected probability of the cells getting flooded, which is the same for all three experiments.
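The message-selection step of Eq. (4.15) can be sketched as below: given per-cell probability bounds, E(Hj) weighs the entropy that would be resolved if cell j turns out flooded (all cells with Pl,i > Pu,j flood too) against the entropy resolved if it turns out dry (all cells with Pu,i < Pl,j are dry too). The helper names and the toy bounds in the assertions are ours.

```python
import math

def h(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def expected_gain(j, bounds):
    """Eq. (4.15); bounds maps cell id -> (P_l, P_u), and the expected
    P_i of a cell is taken as the midpoint of its bounds."""
    lo_j, hi_j = bounds[j]
    p_j = (lo_j + hi_j) / 2
    flooded = sum(h((lo + hi) / 2) for lo, hi in bounds.values() if lo > hi_j)
    dry = sum(h((lo + hi) / 2) for lo, hi in bounds.values() if hi < lo_j)
    return p_j * flooded + (1 - p_j) * dry

def best_message(messages, bounds):
    """Pick the message whose interpretation is expected to reveal most."""
    return max(messages, key=lambda j: expected_gain(j, bounds))
```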
As seen in the figure, all the cells have values equal to 0.5, except for those located inside the ocean or the river, which have values of 1. A probability of 0.5 is the least biased value, since we initially do not have any information about the scope of the disaster.

Figure 4.32: Initial expected flood probability of cells in the disaster area.

Figure 4.33 shows the inundation for the scenarios we study in this section. As mentioned, we assume in each case that Figure 4.33 shows the real scope of the disaster in the area, although it is unknown to our framework. The randomly generated messages are also shown in this figure as red dots. For each case, we generated 2000 random messages, removed those falling inside the ocean or the river, and then kept the first 1000 messages.

Figure 4.33: Inundation of the disaster area for different values of κ (κ = 0.5, 1.0, and 1.5). The red dots show the randomly generated messages in the disaster area.

Figure 4.34 to Figure 4.36 show how the expected probability of the cells getting flooded is updated in each step, after running a scenario and then choosing and interpreting a message. The location of the message added in each step is shown in the figures. As mentioned before, all previous messages are revisited in each step, in the hope of getting more information out of them. These experiments show how quickly we can uncover the scope of the disaster by reading only a few messages. For the cases with κ = 0.5 and 1, the flooded area was completely discovered after reading 6 messages. For the case with κ = 1.5, the number was even smaller, at 4. The first scenario selected and run in all experiments is κ = 1. This is because in the beginning we have no information about the disaster area and the least biased scenario is κ = 1. However, after reading the first message, the framework may take different paths in each experiment.
For example, in the case of κ = 0.5, the framework selects and runs κ = 1 as the first scenario. Then the first message reveals that κ must be smaller than 1, and therefore the probability of all the areas not flooded by the κ = 1 scenario is set to zero. In contrast, in the cases with κ = 1 and 1.5, the framework first runs the scenario with κ = 1, but the first message does not give any information, leaving the expected probabilities as set by running the scenario κ = 1. Repeating these steps in each case eventually uncovers the flooded area.

Figure 4.34: Updating E(Pi) for κ = 0.5 by running scenarios and interpreting messages. The numbered red dots show the message added in each step.

Figure 4.35: Updating E(Pi) for κ = 1.0 by running scenarios and interpreting messages. The numbered red dots show the message added in each step.

Figure 4.36: Updating E(Pi) for κ = 1.5 by running scenarios and interpreting messages. The numbered red dots show the message added in each step.

4.5 Appendix I

The images used in the survey discussed in Section 4.4.2.3 are shown in Figure 4.37.

Figure 4.37: Images from damaged buildings during the 2014 South Napa earthquake, shown to participants in a survey to assess the damage level. Images are labeled by their IDs. Images 1, 2, 3, 5, 6, 8, 9, 10, 12, and 15 were taken by L. Whitehurst, Walter P Moore. Images 4 and 13 were taken by Fred Turner, California Seismic Safety Commission. Images 7, 11, and 14 were taken by Glen Granholm, ETC USA.

4.6 Appendix II

In this section, we show how entropy prioritization can justify that binary search is optimal. The purpose of this section is not to provide a mathematically rigorous proof, but an illustration of how entropy minimization is connected to binary search.
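For reference, the algorithm under discussion, in a minimal Python sketch:

```python
def binary_search(a, t):
    """Return an index of t in the sorted array a, or -1 if absent."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2       # probe the midpoint
        if a[mid] == t:
            return mid
        if t < a[mid]:
            hi = mid - 1           # explore the left half
        else:
            lo = mid + 1           # explore the right half
    return -1
```

The argument below explains why probing the midpoint, rather than any other element, is the information-optimal choice.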
Let us assume that we are given a sorted, monotonically increasing array of N integers, called A. We want to check whether a specific number, t, is in this array. Applying binary search, we check the element in the middle and decide whether we should explore the left half or the right half (or whether we are done). We show that checking an element of the array gives us "information" and reduces our uncertainty about the array. Furthermore, we show that choosing the element at the midpoint is expected to give us the maximum amount of information. Initially, we do not have any information about the array except that it is sorted. Therefore, each element of the array has an equal chance of 1/N of being our target value, t. The total uncertainty (entropy) of this system is

H_T = -\sum_{k=1}^{N} \frac{1}{N} \log \frac{1}{N} = \log(N)   (4.16)

Note that N is an arbitrary number in the above equation, so the equation holds for any similar array, such as the left or right half of the array A after we choose one. Assume that N is very large, such that our chance of hitting exactly t is slim and negligible. If we choose the i-th element of the array, Ai, two scenarios are possible: 1. If t < Ai, we must explore the left half. This case gives us the following information: the probability of elements in the right half being our target is zero, while the probability for those in the left half is 1/i. 2. If t > Ai, we must explore the right half. This case gives us the following information: the probability of elements in the left half being our target is zero, while the probability for those in the right half is 1/(N-i). We want to calculate the expected amount of information that we get by checking element Ai, and then find i such that it maximizes this value. To calculate this expected value, we first need to calculate the entropy of each scenario explained above and multiply it by the probability of that scenario.
H_i = \frac{i}{N} \log(i) + \frac{N - i}{N} \log(N - i)   (4.17)

Eq. (4.17) gives the expected amount of entropy remaining in the system after checking element Ai. If we subtract this value from the total uncertainty given by Eq. (4.16), we are left with the amount of information we expect to receive by checking element Ai:

\Delta H_i = H_T - H_i   (4.18)

Now, if we want to get the most out of every trial, we must maximize the amount of information we expect to obtain. Assuming i to be a real number rather than an integer, we can set the derivative to zero to get the optimal i:

\frac{d}{di} \Delta H_i = 0 - \frac{d}{di} H_i = 0, \quad \frac{d}{di} H_i = \frac{\log(i) - \log(N - i)}{N} = 0 \;\Rightarrow\; \log(i) = \log(N - i) \;\Rightarrow\; i = \frac{N}{2}   (4.19)

Eq. (4.19) shows that we must choose the middle element of the array A to maximize the expected information we obtain. Following this logic in each step leads to the binary search algorithm. The same result could be obtained if we aimed to minimize the expected entropy in the system in each step, instead of maximizing the expected information obtained.

Chapter 5 - Immersive Visualization

To this day, architects and engineers often build small-scale physical models of their designs, despite the widespread availability of modeling software with incredible capabilities. Physical models let designers capture a better spatial understanding of their 3D models and experience them in the real world. However, the flexibility of 3D modeling software is a huge advantage over naturally lifeless physical models, and therefore several research efforts with practical applications aim to combine the worlds of physical models and computer software to benefit from both. For example, a Virtual Reality (VR) design application lets users find themselves in a virtual 3D environment while using their body motion to interact with 3D models in this world. Similarly, an Augmented Reality (AR) design application helps designers bring their models to life and view them in the real world.
Despite the existence of several commercial applications of VR and AR in 3D modeling and design, practical applications in numerical scientific simulation and visualization are rare, if they exist at all. These novel visualization techniques can significantly help coastal researchers gain a better understanding of complex processes in a visual format, but to the best of the author's knowledge, there is no immersive visualization system with applications in coastal engineering. A well-designed immersive visualization system would enable individuals to collaborate with each other in a shared visualization environment. Such a collaborative environment is invaluable for activities like harbor design and maintenance, hazard mitigation planning, education, etc. In this chapter we explore opportunities in employing immersive visualization techniques in coastal hazards simulation and management to provide an interactive, collaborative environment with concurrent simulation and visualization.

5.1 Celeris VR

Applications of virtual reality (VR) in science span from research in medical sciences and surgery to molecular physics [72]–[74]. The most common VR technology uses headsets which generate realistic images, audio, and other sensations in accordance with the user's real-time movement, such that they simulate the user's physical presence in a virtual environment. For example, a user can move through a virtual museum and observe different items, or, with the help of controllers, grab objects and interact with the environment. Virtual reality can help coastal researchers and engineers by taking them into a virtual world where they can interact with waves and observe them in an immersive way. The potential of such technology is endless. For instance, imagine entering a harbor on a boat and experiencing the waves first hand in VR before the harbor is even built, or setting up a virtual laboratory with several flumes and wave tanks where students and researchers can collaborate.
Despite the huge potential of VR applications in coastal research, to the best of our knowledge there is no prior work in this area. One reason is that developing VR coastal simulation software requires super-fast interactive models, which did not exist before the development of Celeris Advent. The development of Celeris Advent changed the conventional view of coastal hydrodynamics, and its successor, Celeris Base, created countless opportunities for innovation in coastal research. In this section we discuss Celeris VR, a version of Celeris built on top of Celeris Base which provides users with an immersive environment. We introduced Celeris VR for the first time in [75] and later in [76]. We targeted our software at Oculus Rift headsets, which were among the most advanced and popular VR headsets on the market as of 2018. These headsets ship with two touch controllers and two constellation sensors (Figure 5.1). The sensors can track the headset and controllers with submillimeter precision, positioning the head and hands of the user precisely in the virtual environment. The Oculus Rift API is integrated into the Unity3D engine. In fact, this integration and ease of development was one of our motivations for developing Celeris Base using Unity3D.

Figure 5.1: Oculus Rift VR headset and accessories.

5.1.1 Implementation

We developed Celeris Base using Unity3D bearing in mind our plan for its integration with a VR headset. Unity3D is one of the leading game engines and a pioneer in virtual and augmented reality. VR support in Unity3D 2018 is enabled simply by checking a checkbox in Edit > Project Settings > Player (see Figure 5.2). It is not an exaggeration to assert that integrating VR into Celeris Advent could have taken as much time as developing the entire Celeris Base from scratch plus checking the VR checkbox in Unity3D. Although getting started with VR in Unity3D is simple, some development and integration effort is still needed.
For example, player movement and interaction in the field using the controllers needs some coding, which can be eased by plugging in and modifying ready-to-use libraries and Unity assets. (Photo in Figure 5.1 by Evan Amos, Vanamo Media, public domain, distributed under a CC BY-SA 2.0 license.)

Figure 5.2: Checkbox to enable VR in Unity.

5.1.2 Running Celeris VR

Celeris VR looks very similar to Celeris Base, except that it lets the user mount an Oculus Rift headset, jump into the virtual world, and move around as a flying object. Figure 5.3 shows the controls to move the player in the field. The left controller lets the user move in different directions and the right one lets them rotate. Note that the user can also rotate their view in the field by physically rotating in the room; however, the rotation control helps them adjust their view and direction of movement more easily. Figure 5.4 shows a user (the author) working with Celeris VR, simulating a coastal area. In this scene, two buoys are deployed in the field and their real-time plots are shown on the screen. The immersive environment in Celeris VR provides a unique experience for users. The feeling of presence inside a numerical coastal simulation is unprecedented, and it lets the user observe coastal phenomena as if in person, but with the addition of powerful visualization tools such as buoys, colormapping, etc. Celeris VR is also released as open-source software under the MIT License. The external libraries used in the software carry their own licenses.

Figure 5.3: Movement control in Celeris VR.

Figure 5.4: Simulating a coastal area with Celeris VR.

5.1.3 Suggestions for Further Research and Development

Celeris VR opens the road for countless innovative features, which we encourage researchers to pursue.
For example, adding a ship simulation component to the software could leverage the immersive environment of Celeris VR for training ship crews, while adding features to model wave surfing could open an interesting area in the entertainment field. We also suggest using Celeris VR as a virtual lab for education. For example, a virtual flume can be set up in which students generate different kinds of waves and observe their propagation (Figure 5.5). Such a virtual lab can be very beneficial for training undergraduate and graduate students, especially at universities that do not have an actual lab.

Figure 5.5: Virtual flume setup in Celeris VR.

5.2 HAZAR

We are particularly interested in Spatial Augmented Reality (SAR), a projection mapping technique to cast videos and images on irregularly shaped objects. In contrast to augmented reality (AR), which augments the user's view (e.g., on a smartphone) with images of virtual objects, SAR renders virtual objects directly into the physical environment. Among the several methods which can deliver this image augmentation, we consider using a video projector to augment 2D imagery on an irregularly shaped surface. A popular example of SAR is video projection on buildings, as shown in Figure 5.6.

Figure 5.6: Spatially augmented reality on the Sydney Opera House during Vivid Sydney 2013.

SAR was introduced by Raskar et al. [77] almost two decades ago; however, its applications have become more practical with recent developments in GPU technology. The key advantage of SAR for our application is that the display environment is not dedicated to a single user, and therefore SAR easily allows collaborative applications among a group of users. It should be noted that in some versions of SAR (e.g., rendering 3D objects on 2D planar surfaces) the user must be head-tracked, which would hinder using this technology in a collaborative form.
To remove this restriction, we use the table-top SAR technology introduced in [78], where the physical models in the environment match the structure of the 3D graphics models used by the computer. However, in contrast to [78], we are interested in dynamic projection mapping, where the changes made by the user in the 3D environment are automatically tracked and accounted for by the system. (Photo in Figure 5.6 by Steve Garvie, distributed under a CC BY-SA 2.0 license.)

5.2.1 Methodology and Design

We constructed a platform that provides SAR for coastal engineers to collaboratively interact with a 3D model of a coastal region using their bare hands in a basin full of ordinary sand. The sand surface resembles the topography of a coastal region and is augmented by imagery using a video projector, such that it looks like the top view of that coastal region. This system is under development on top of Celeris Base and is named Celeris Hazar. In this section we discuss the prior work that makes the development of Celeris Hazar possible. The development of Celeris Hazar is led by the author in a separate effort which is not part of this dissertation. In order to assess the different aspects of the Celeris Hazar project, we replicated the Augmented Reality Sandbox developed by the research group of Oliver Kreylos at UC Davis [79]. The code of this system is released as open source and is therefore widely used all over the world. We built a preliminary prototype on top of this open-source project, which we call HAZAR, i.e., without Celeris in the name. However, for reasons which will be discussed later, we abandoned this open-source project and started the development of Celeris Hazar from scratch on top of Celeris Base. HAZAR is powered by the GPU and its primary components are a 100 cm × 75 cm sandbox full of ordinary white sand, a depth camera such as the Xbox Kinect sensor, a video projector, and a PC (see Figure 5.7).
Visitors rearrange the sand; the Kinect camera "sees" the changed elevations, and the PC-connected projector projects varying 2D elevation contours onto the sand based on the Kinect data. Furthermore, "water" is added to the digital projection by coupling a shallow water solver with the system. Our first and second prototypes of HAZAR are shown in Figure 5.8. The author would like to acknowledge Vassilios Skanavis (PhD student at USC) for his help in designing and building the prototypes. Furthermore, the contributions of our two wonderful summer interns, Karen Urosa and Jodi Yip, are also acknowledged. The first prototype is shown in action in Figure 5.9. In this figure the contour lines are augmented on the sand surface, and they react to any changes to this surface in near real-time. The dark blue color is water flow simulated by shallow water equations. Naturally, the water flow follows the topography of the area. A video of this system is available at https://youtu.be/KduOO322WIA. We have also equipped the second prototype with a sand 3D printer which is able to print a topography given in a file. In the following subsections, we will discuss the different components that make up the HAZAR project.

Figure 5.7: Basic structure of HAZAR (figure adapted from http://idav.ucdavis.edu/~okreylos/ResDev/SARndbox/ in October 2016).

Figure 5.8: First (a) and second (b) prototypes of the HAZAR project built at USC.

Figure 5.9: HAZAR prototype in action. The dark blue color shows augmented water.

5.2.1.1 Projection Mapping

The process of projecting digital imagery on an object is called projection mapping. In static mapping (i.e., the objects are stationary and solid), the 3D environment needs to be captured once as 3D models. Then the video or images are rendered as textures on the desired surfaces in 3D visualization software (e.g., Blender), and the software camera view is projected back onto the 3D environment using a video projector.
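The core loop described at the start of this section (a depth frame comes in, contour imagery goes out) can be illustrated with a toy sketch (a hypothetical Python example; the calibration constant and contour interval are made-up values, not HAZAR's actual calibration):

```python
def depth_to_elevation(depth_mm, camera_height_mm=1000.0):
    """Convert a Kinect-style depth frame (distance from the sensor, in
    mm) to elevation above the basin floor; camera_height_mm is the
    sensor's height above the floor (a calibration constant)."""
    return [[camera_height_mm - d for d in row] for row in depth_mm]

def contour_bands(elevation_mm, interval_mm=20.0):
    """Quantize elevations into contour bands; each band index is then
    looked up in a colormap before being projected onto the sand."""
    return [[int(e // interval_mm) for e in row] for row in elevation_mm]

# a tiny synthetic depth frame: a mound of sand sits closer to the sensor
depth = [[950.0, 950.0, 950.0],
         [950.0, 880.0, 950.0],
         [950.0, 950.0, 950.0]]
bands = contour_bands(depth_to_elevation(depth))
```

In the real system this conversion runs on the GPU every frame, and the band image is warped by the projector calibration before display.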
The final render of the view needs adjustments to match the 3D environment. These adjustments are made according to the projector's position relative to the objects, its angular orientation, and its lens specifications. In dynamic mapping, the objects in the 3D scene can be moved. For example, in HAZAR, the user can rearrange the sand to make arbitrary topographies. This dynamic nature requires an automated mapping between the 3D environment in the real world and the 3D models in the software. It also requires an automated process to adjust the final render according to alterations in the 3D environment. The former is delivered by using a depth camera, and the latter is done through transformation matrices inside the software.

5.2.1.2 Depth Camera

HAZAR needs a means to dynamically capture the 3D surface of the sand inside the basin. An appropriate, inexpensive option is the Kinect sensor. This sensor has an infrared (IR) laser projector along with a monochrome CMOS sensor. The IR laser projector projects an IR grid on the environment, and the depth (i.e., distance from the sensor) is calculated from the distorted image of this grid captured by the CMOS sensor. The result is a depth map of the camera view, which is used in HAZAR to track changes on the sand surface.

5.2.1.3 Coding and Mathematical Model

The first software for HAZAR (i.e., the version based on [79]) is coded in C++ and OpenGL's shading language, GLSL. While it successfully demonstrated the concept of using spatially augmented reality in a tabletop sandbox, it has a few shortcomings which encouraged us to develop our own software on a different platform. Firstly, this code is not designed in a modular fashion, and adding our desired functionality to it could take unnecessary extra effort. For instance, adding a layer of crowdsourced messages after a disaster, or even designing a modern user interface, could be laborious in this system.
Secondly, this code uses the shallow water equations to simulate water flow, which are not suitable for simulating wind waves in intermediate water depths. Moreover, the flow simulation features of the code are primitive and do not include essential features such as introducing initial and boundary conditions. Finally, running this software requires some level of expertise in using console-based software, which could limit the target audience. Considering these facts, we are developing our own software, called Celeris Hazar, using Celeris Base.

5.2.2 Sand 3D Printer

To automate the process of making topographies in HAZAR, we designed a sand 3D printer. This project, called 3D Sander, was developed by the author jointly with Ali Kazemian, a PhD student from the Contour Crafting Lab at USC. In this project, a motor-driven sand nozzle is installed on HAZAR which enables us to recreate the topography (bathymetry) of a certain region on the sand surface, given a topographic file (Figure 5.10). Note that such a 3D printing system did not exist prior to this project, and we therefore designed a customized nozzle and feeding system of our own. In this section we briefly explain the design and construction process of the sand printer. The main components are the linear stage, motors, motor driver, nozzle, and sand feeding system.

Figure 5.10: Preliminary tests of the HAZAR sand 3D printer.

5.2.2.1 Frame and Linear Stage

A frame was designed with a size such that the printer exactly fits the sandbox. As such, a 30 × 40 inch wooden frame was built to function as a firm base for the printer. Initially, two low-profile sleeve-bearing guide rails with carriages were installed on the frame (Figure 5.11a). After preliminary tests, it was observed that the guide rail on the short axis undergoes undesirable deformation, and therefore this guide rail was replaced by another mechanism consisting of two rods and linear bearings (Figure 5.11b).
Figure 5.11: The wooden frame and guide rails of the HAZAR sand 3D printer.

5.2.2.2 Motor and Driver

In order to obtain high holding torque and high position-control accuracy, stepper motors are used in our 3D printer's driving system. Three stepper motors are installed in this system. Two of them provide mobility of the nozzle along the width and length of the basin. A third stepper motor is installed on the nozzle to control its opening and the rate of sand deposition. For the motor drivers, we used gShield and Arduino. The former is a shield on top of the Arduino which lets it control a 3-axis CNC machine. The motor system is shown in Figure 5.12.

Figure 5.12: The motor system of the HAZAR sand 3D printer.

5.2.2.3 Control Software

There are several open-source programs to control gShield; among them, we chose Universal Gcode Sender (UGS). This software controls the motion of the nozzle according to a given G-code. UGS is Java-based and thus cross-platform.

5.2.2.4 Printing Nozzle

In order to build a robust sand printer, a customized nozzle needed to be designed and fabricated. In this regard, several nozzles were designed and tested, three of which are briefly explained here. Figure 5.13 presents the first iteration of the nozzle design. It includes a 10 × 20 cm cylinder with a free-rotating lid which is controlled by a stepper motor and shaft. After testing this nozzle, we observed that the weight of the cylinder containing sand deforms the guide rail and its momentum causes vibrations. Also, the rotation of the lid did not provide sufficient control over the rate of sand deposition. As such, a second nozzle was designed and fabricated (Figure 5.14).

Figure 5.13: Iteration 1 of the nozzle design (variable deposition hole size).

The second nozzle design was based on an auger controlled by a stepper motor.
After designing the auger and 3D printing it, we observed that fine sand particles get stuck between the auger blades and the nozzle cylinder wall, hindering its movement.

Figure 5.14: Iteration 2 of the nozzle design (auger).

The final nozzle design employs a lead screw (connected to the motor with a flexible coupling) with a cone attached to the tip of the screw. Moreover, there is a plate with a nut fixed on it, and the motor is connected to the plate with springs. Therefore, when the shaft rotates, the whole motor and lead screw move up or down, which causes the cone to open or close the nozzle. This nozzle design worked well and became our final design.

Figure 5.15: Iteration 3 of the nozzle design (lead screw and nut).

5.2.2.5 Sand Feeding System

The HAZAR sand 3D printer is designed such that a small nozzle is used to print sand while being continuously fed by a sand feeding system. To deliver sand to the nozzle, a wooden structure was built to hold the sand container along with a free-rotating rod (Figure 5.16). This rod is connected to the sand container and guides a tube which carries sand (driven by gravity) from the container to the nozzle.

Figure 5.16: Sand feeding system of the HAZAR sand 3D printer.

5.2.2.6 Testing and Operation

After the different components of the sand printer were designed and proved to be fully functional, they were assembled to make the HAZAR sand 3D printer (Figure 5.17). Also, four limit switches were added to the linear stage to prevent any damage to the nozzle and the stage due to possible errors. A test result of the system is shown in Figure 5.18. We presented our HAZAR prototype at NVIDIA's GPU Technology Conference in April 2016 in San Francisco. The project attracted a lot of attention and was one of the main attractions in the exhibit (Figure 5.19). An informative video on the sand 3D printer is available at https://youtu.be/yCcQeWGv8qw.
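As an illustration of the kind of G-code a sender such as UGS consumes, a simple back-and-forth (boustrophedon) coverage path over the print area could be generated as follows (a hypothetical Python sketch; the feed rate, pass spacing, and move-only commands are made up, and a real print would interleave nozzle open/close commands with these moves):

```python
def serpentine_gcode(width_mm, length_mm, spacing_mm, feed_rate=1000):
    """Generate G-code for a back-and-forth path covering a
    width_mm x length_mm area, with passes offset by spacing_mm."""
    lines = ["G21 ; millimeter units", "G90 ; absolute positioning"]
    y, direction = 0.0, 1
    while y <= length_mm:
        # traverse the full width, alternating direction each pass
        x_end = width_mm if direction > 0 else 0.0
        lines.append(f"G1 X{x_end:.1f} Y{y:.1f} F{feed_rate}")
        y += spacing_mm
        direction = -direction
        if y <= length_mm:
            # step over to the next pass along the basin edge
            lines.append(f"G1 X{x_end:.1f} Y{y:.1f} F{feed_rate}")
    return "\n".join(lines)
```

Such a generated file can then be fed to UGS, which streams the moves to the Arduino/gShield controller one line at a time.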
5.2.3 Suggestions for Further Research and Development

5.2.3.1 Sand 3D Printer

While our sand 3D printer is mechanically functional, it is not yet ready to be coupled with HAZAR. We suggest further developing the software which connects this printer to the HAZAR software. A closed feedback loop between the sand 3D printer and the Kinect sensor can control the printing process. Numerous tests are needed to calibrate the printing speed and deposition rate. We also suggest developing algorithms to reduce the printing time by choosing an optimal or near-optimal printing path. Finally, we suggest enabling HAZAR to provide visual guides that help the user rapidly and manually shape the base of the topography, such that the sand 3D printer is only used to fine-tune this base topography.

Figure 5.17: Final assembly of the HAZAR sand 3D printer.

Figure 5.18: Test result of the HAZAR sand 3D printer (video available at https://youtu.be/yCcQeWGv8qw?t=2m43s).

Figure 5.19: Presentation of HAZAR at GTC 2016.

5.2.3.2 Scalability

We suggest a scalable design for HAZAR such that several HAZAR basins can be coupled together to simulate a larger coastal area, so that waves and flow can pass between basins. Scaling up the size of HAZAR does introduce technical complexities; however, the added benefit of allowing tens of visitors to interact with the system simultaneously to perform collaborative tasks is huge.

5.2.3.3 Generic Release

In addition to the applications of HAZAR in disaster management and simulation of coastal processes, one can imagine several other use cases in education, gaming, and other scientific simulations. Therefore, we suggest continuing the unfinished efforts of the author in developing a generic version of HAZAR, called WonderCast, such that other developers and researchers can build on top of it for their own purposes. Figure 5.20 shows a few possible developments on top of WonderCast.
Figure 5.20a shows an educational game on coastal hazards such as tsunamis and hurricanes, which could be developed to promote awareness of coastal hazards among children and young adults. Figure 5.20b shows a virtual farm that could be used in schools, aligned with the "Let's Move!" campaign, to promote healthy eating among students. Finally, Figure 5.20c shows a use case of WonderCast in a science center such as the Natural History Museum, where visitors can discover virtual fossils using real tools or their bare hands.

Figure 5.20: Possible use cases of WonderCast in education.

5.2.3.4 Celeris Hazar

We strongly suggest continuing the efforts of the author in developing Celeris Hazar, a system which connects Celeris Base to the HAZAR project. Such software will introduce a new dimension to coastal modeling and become another breakthrough in the field. Our efforts in developing Celeris Hazar are released under the MIT License as open-source software.

Chapter 6 - Conclusion

The current study consists of four major parts, which were discussed in separate chapters. We first studied the development of a robust mathematical model for solving the extended Boussinesq equations on the GPU. We then discussed the development of Celeris, a software package for faster-than-real-time simulation of long waves and wind waves, with features such as concurrent visualization and interactivity. Next, we proposed a framework to couple crowdsourced data with Celeris, with a special focus on data prioritization. Finally, we studied opportunities in employing immersive visualization techniques to provide a collaborative environment for coastal engineers and a compelling package for educational purposes. While future work on each project was discussed in the corresponding chapter, we briefly review all four topics here.
6.1 Mathematical Model

We discussed the development of our mathematical model: the discretization of the extended Boussinesq equations by a hybrid finite-volume/finite-difference scheme. Furthermore, we explained the algorithms we adapted to solve the system massively in parallel on the GPU. The developed mathematical model proved to be a robust system which allowed us to develop interactive software around it. Further research can be done in this area to add new components such as sediment transport and buoyancy.

6.2 Interactive Simulations and Visualization

In Chapter 2, Celeris, an open-source software for coastal wave simulation and visualization, was introduced. The structure of the software was sketched and its components were elaborated. Celeris was validated for breaking and non-breaking waves as well as for currents by comparing its results with several standard benchmarks. The main feature of the software, in addition to its fast computational speed, is its interactivity: the user can change the physical and numerical parameters of an experiment via a GUI while the model is running. Numerous visualization options, including photorealistic rendering, are provided. A compiled version of Celeris Advent is distributed along with its source code under the terms of the GNU General Public License. Celeris Advent harnesses the GPU by using Direct3D libraries, and it can run on any recent Windows machine with minimal preparation. We also introduced Celeris Base, a revamped version of Celeris Advent developed on a cross-platform game engine, Unity3D. Celeris Base is expected to smooth the way for other developers to easily adapt the solver into their own software or develop it further.

6.3 Coupling with Crowdsourced Data

We proposed a two-way coupling of Celeris with crowdsourced data from the disaster area. The geo-tagged data about the disaster area were prioritized according to intensity estimates from Celeris.
Then, the high-priority data were interpreted, and the obtained information was used to enhance the numerical results. We kept the coupling framework as general as possible so that its use extends to other types of disasters. We introduced a crowdsourcing framework and a prioritization strategy for collecting and interpreting data from a disaster area. We model the disaster area as a pool of binary random variables, where each binary random variable represents a subject of interest (e.g., a bridge or a building). Prioritization is done according to both the expected value of information contained in a message (its entropy) and the significance of the message. The entropy of a message is computed from the entropy of the events it may cover. This approach enables us to rapidly reduce the total uncertainty in the disaster area by interpreting important messages, which are expected to contain the most information. We evaluated our framework for inspecting bridges after the 2001 Nisqually earthquake and for harnessing tweets to assess building damage after the 2014 Napa earthquake. The results demonstrated the capability of the framework. We also studied the feasibility of reviewing the crowdsourced data by a crowd, in cases with both synthetic and real data, and showed that crowd-reviewing can be an effective means in a disaster situation. Finally, we coupled our framework with Celeris and studied a tsunami scenario in Newport, OR.

6.4 Immersive Visualization

We introduced Celeris VR, a version of Celeris which provides users with an immersive environment. In this software, the user can interact with the model as if they were inside the simulation environment. We also constructed a platform that provides a collaborative immersive environment for coastal engineers using spatial augmented reality. This system consists of a closed loop between a depth camera, a PC, and a video projector, all installed above a basin full of ordinary sand.
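The closed loop driving this system can be sketched as below. The depth camera is stubbed with synthetic data, and all names are illustrative rather than HAZAR's actual code; in the real system a frame comes from the sensor, and steps 3-4 run the wave solver and drive the projector.

```python
import numpy as np

def capture_depth():
    """Stand-in for one depth-camera frame of the sand surface (meters).
    Hypothetical: a real system would read frames from the sensor's SDK."""
    rng = np.random.default_rng(0)
    return 0.5 + 0.05 * rng.random((48, 64))

def update_bathymetry(bathy, frame, alpha=0.5):
    """Blend the newly sensed sand surface into the model bathymetry.
    The exponential blend suppresses frame-to-frame sensor noise."""
    return (1.0 - alpha) * bathy + alpha * frame

def sandbox_loop(n_frames=10):
    """One pass of the sense -> simulate -> project loop."""
    bathy = capture_depth()
    for _ in range(n_frames):
        frame = capture_depth()                  # 1. sense the rearranged sand
        bathy = update_bathymetry(bathy, frame)  # 2. update model bathymetry
        # 3. advance the wave solver on the new bathymetry (a Celeris-style
        #    step would run here; omitted in this sketch)
        # 4. render the free surface and send it to the projector (omitted)
    return bathy
```

Because the loop re-reads the surface every frame, any rearrangement of the sand propagates into the simulation within one cycle.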
The sand surface resembles the topography of a coastal region, and it is augmented with imagery from a video projector such that it looks like a top view of that region. Any rearrangement of the sand surface is captured and accounted for in the model. We call this project HAZAR. We suggested further development of a software package called Celeris Hazar, which connects the HAZAR project to Celeris. Furthermore, we developed a first-of-its-kind 3D sand printer which can potentially build topographies in the sandbox.
Abstract
The aim of this dissertation is to democratize high-order, high-performance coastal wave modeling by developing user-friendly, GPU-accelerated software that utilizes state-of-the-art visualization technologies. We call our software Celeris, the Latin word for "quick". Celeris is the first of its kind, with features such as interactivity and an immersive environment. It is open source and requires minimal preparation to run on a user-level laptop. The software solves the extended Boussinesq equations using a hybrid finite volume–finite difference method and supports moving shoreline boundaries. Both simulation and visualization are performed on the GPU, which enables the software to run faster than real time. Celeris supports concurrent visualization with both photorealistic and colormapped rendering, along with many other novel features. In this dissertation, we elaborate the steps of developing this software: the governing equations, our robust numerical method, parallelization algorithms, coding implementation, visualization, and validation.

We also develop an entropy-based framework to prioritize data interpretation in disasters and couple it with Celeris to mitigate the most significant problems in a disaster situation: collecting data from the disaster area, estimating the situation, and making decisions under uncertainty. This framework ranks the crowdsourced data from the disaster area according to the amount of expected information contained in each piece. Then, simulation software such as Celeris is run, tuned by the collected information, to provide a better picture of the disaster scope. The new estimate of the disaster scope is then used to better prioritize the crowdsourced data for interpretation. Continuing this loop helps emergency managers use their limited interpretation resources to extract the most information from the disaster area.