IMMERSIVE COMPUTING FOR COASTAL ENGINEERING
by
Zili Zhou
A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(CIVIL ENGINEERING)
August 2022
Copyright 2022 Zili Zhou
Acknowledgments
It has been a great four years of my life. The most common word to describe PhD life is stressful, but I think it should be … constantly stressful and, of course, exceptionally REWARDING. Not only have I gained a lot of new knowledge and done the research I am interested in, but most importantly I have met the best advisor and colleagues. They have become lifelong friends. Every week, I look forward to the Friday group meeting, no matter whether it is in person or online. I love the small chat before the official meeting and have enjoyed every minute of colleagues sharing their life stories. It has been my weekly happy hour. I feel so lucky to feel a sense of belonging to this group when I am a Pacific Ocean away from home. Especially during quarantine, my advisor and colleagues at the USC Tsunami Research Center became my family. Without them, I wouldn’t have gone this far and
completed my PhD program. I am particularly grateful to my advisor Professor Patrick Lynett for
teaching me, guiding me, enduring my mistakes, and encouraging me to accept challenges. I can
still feel the excitement of receiving the offer letter from Pat four years ago. And after four years, I have gained many more exciting moments, like testing codes successfully, passing quals, or even moments as small as hearing fun stories of Coastal Engineering fieldwork in our group meetings.
I want to thank all my family and friends in China. Their online video or voice chat has
accompanied me, a night owl, for hundreds of late nights. Particularly, the daily silly chat with my
twin sister guarantees that I never feel alone. Even though the quarantine separated my grandparents and me forever, I’m sure they would be proud of me getting my PhD degree. I also
want to thank all my friends in the United States. They have not only visited me in Los Angeles and explored the city with me, but also kindly invited me to the cities where they live, like Atlanta,
San Francisco, and Madison. I am honored to have them as my friends and hope our friendships
will last a lifetime.
At last, I want to thank my world’s cutest cat, Truman. In September 2019, I adopted him
when he was three. After that, I found the panacea for all difficulties: Truman’s snuggling with
me. He travels with me all over the country, helps me gain confidence to work on my projects, and
sometimes even assists me in coding the apps (i.e., he lies on the keyboard and presses the
comment symbols like “//”). Truman is the best thing to happen to me since my sister’s dog Eike.
Thank you, Truman, for making PhD life a lot easier.
Table of Contents
Acknowledgments........................................................................................................................... ii
List of Tables ................................................................................................................................ vii
List of Figures .............................................................................................................................. viii
Abstract ......................................................................................................................................... xii
Chapter 1 Introduction ............................................................................................................... 1
1.1 Motivation ............................................................................................................... 1
1.2 Justification ............................................................................................................. 2
1.3 Objectives of Study ................................................................................................. 2
1.3.1 Models of Coastal Processes ......................................................................... 2
1.3.2 Augmented Reality (AR) Mobile App for Coastal Engineering Education .. 5
1.3.3 Combination of Coastal Laboratory Experiments and Numerical
Simulations based on Augmented Reality (AR) ....................................................... 9
1.3.4 Web-based Virtual Reality (VR) Visualization of Global Tsunamis .......... 11
1.4 Dissertation Contributions .................................................................................... 12
Chapter 2 AR-tsunami iOS/iPadOS App: Interactive Augmented Reality Tsunami
Simulation System for Coastal Hazard Education ........................................................................ 13
2.1 Introduction ........................................................................................................... 13
2.2 Related Work ........................................................................................................ 15
2.3 Mathematical solvers in AR-tsunami.................................................................... 18
2.3.1 Procedural Wave Model .............................................................................. 18
2.3.2 Shallow Water Equation Model................................................................... 22
2.3.3 Boussinesq-type Equation Model ................................................................ 26
2.3.4 Boundary Conditions ................................................................................... 28
2.4 Method of Building AR-tsunami App .................................................................. 35
2.4.1 iOS/iPadOS Augmented Reality Framework ARKit .................................. 35
2.4.2 Wave Simulation and Visualization ............................................................ 37
2.4.3 AR-tsunami Architecture ............................................................................. 37
2.5 AR-tsunami App Demos ....................................................................................... 42
2.5.1 Demo of Placing a Virtual Tsunami on Real-world Surfaces ..................... 42
2.5.2 Demo of Immersing a Person into a Virtual Tsunami ................................. 43
2.5.3 Demo of Displaying a 3D Tsunami Inundating a Beach ............................. 47
Chapter 3 AR-sandbox iOS/iPadOS App: Interactive Augmented Reality Sandbox with
Wave Simulation for Coastal Engineering ................................................................................... 49
3.1 Introduction ........................................................................................................... 49
3.2 Related Work ........................................................................................................ 51
3.3 Numerical Simulation for Wave Run-up in AR-sandbox ..................................... 52
3.4 Method of Building AR-sandbox app ................................................................... 56
3.4.1 LiDAR Scanner on New Apple iPad Pro .................................................... 56
3.4.2 Wave Simulation Method ............................................................................ 57
3.4.3 AR-sandbox Architecture ............................................................................ 57
3.5 AR-sandbox App Demos ...................................................................................... 61
3.5.1 Demo of Basic Features in AR-sandbox ..................................................... 61
3.5.2 Demo of Enhanced AR-sandbox with Instant Wave Simulations Add-on . 63
3.5.3 Demo of Adding Digital Objects ................................................................. 65
Chapter 4 WebVR-tsunami Web App: Web-based Convenient Tool for 3D Global Tsunami
Visualization and Coastal Hazard Education ................................................................................ 70
4.1 Introduction ........................................................................................................... 70
4.2 Related Work ........................................................................................................ 73
4.3 Transoceanic Tsunami Numerical Simulation ...................................................... 75
4.3.1 Modeling for the Transoceanic Propagation of Tsunamis ........................... 75
4.3.2 Open-source Wave Solver COULWAVE ................................................... 77
4.4 Core Technologies to Build WebVR-tsunami ...................................................... 78
4.4.1 Create Simulation Video: Map Projection and Video Format Selection ..... 78
4.4.2 Build a 3D WebVR Earth with WebXR Framework A-Frame ................... 79
4.4.3 Add HTML DOM Manipulation with JavaScript library jQuery ................ 80
4.4.4 Fast Deliver Content Online Using Cloud Storage Service and High-speed
Global Content Delivery Network Service Using Amazon Web Services (AWS) . 80
4.5 Showcase: 2022 Hunga Tonga Tsunami............................................................... 82
4.5.1 2022 Hunga Tonga Volcano Eruption Induced Tsunami ............................ 82
4.5.2 Visualization on Computer and Mobile Browsers....................................... 84
4.5.3 Visualization on VR headsets ...................................................................... 87
Chapter 5 Future Work ............................................................................................................ 89
5.1 AR-tsunami Future Work: Suggestion for Enhancing AR-tsunami with Deep
Learning Models ............................................................................................................. 89
5.2 AR-sandbox Future Work: Enabling Multiuser Experience in AR-sandbox ....... 92
5.3 WebVR-tsunami Future Work: Web-based Augmented Reality (AR)
Visualization for global tsunamis ................................................................................... 94
5.4 Future Work of a Software Bundle for Coastal Engineering ................................ 97
Chapter 6 Conclusion ............................................................................................................... 99
Bibliography ............................................................................................................................... 102
List of Tables
Table 3-1: Three types of 3D breakwaters (https://3dwarehouse.sketchup.com/) and their
markers .............................................................................................................................. 66
List of Figures
Figure 1-1: The demonstration of Celeris Base (2020). (a) Sasan Tavakkol, the co-developer of
the Celeris Base VR software (Tavakkol and Lynett 2020), is showing how to use
Celeris Base with VR equipment. (b) one of the virtual views in Celeris Base. ................ 6
Figure 1-2: Apple Inc. uses augmented reality (AR) technology to immersively show the
features of the new Apple Watch Series 7 on an iPhone or an iPad for promoting this
new product in 2021 (https://www.apple.com/apple-watch-series-7). (a) the table in
the real world without any AR effect. (b) the virtual Apple Watch Series 7 is put on
the table. (c) customers can change the position, the rotation, and the size of the virtual
Apple Watch. (d) customers can virtually “wear” the watch.............................................. 7
Figure 1-3: App icons of AR-tsunami and AR-sandbox on an iPad Pro. (a) AR-tsunami and AR-sandbox apps on iPad Pro 12.9-inch (4th generation) (shown in the red rectangle). (b) the app icon of AR-tsunami. (c) the app icon of AR-sandbox. ................................................ 9
Figure 2-1: Comparison of non-AR’s and AR’s efficiency in attracting attention and providing emotional impact. The photo resources are collected from The Weather Channel online. The Weather Channel reports the storm surge by traditional maps in (a) and by AR in (c); The Weather Channel reports the tornado by traditional maps in (b) and by AR in (d). ......................................................................................................... 13
Figure 2-2: the diagram of the total water depth h, the bottom elevation z, and the instantaneous wave height η measured from still water depth. ....................................... 23
Figure 2-3: Motion of ideal fluid on the solid surface .................................................................. 29
Figure 2-4: Motion of viscous fluid on the solid surface .............................................................. 30
Figure 2-5: Fully reflective solid wall boundary with two layers of ghost cells (yellow dots).
The domain is gridded for numerical discretization, and the dots are located at the
center of the grid for deploying the central-upwind scheme. Blue dots represent the
inner water domain cells; two layers of green dots represent the water domain cells
adjacent to the boundary; two layers of yellow dots represent the ghost cells, whose
value will mirror the value of respective green dots; black dots are solid wall cells. ...... 33
Figure 2-6: Display the tsunami video behind the cover story on iPhone using image tracking
technology in ARKit. (a) the cover page of the San Francisco Chronicle on Monday,
December 27, 2004, reporting the huge 2004 tsunami in Indonesia; (b) four
screenshots in serial time showing the AR effect of image tracking – sub-photo b.1
shows in the beginning camera is shooting the newspaper cover page with the original
cover photo; sub-photo b.2 shows the app has detected the cover photo and covered it
with the tsunami video of 2004 Indonesia tsunami; sub-photo b.3 shows the video is
virtually played in the app; sub-photo b.4 shows moving the camera closer to the
newspaper can zoom in the video correspondingly. ......................................................... 36
Figure 2-7: Architecture of AR-tsunami ....................................................................................... 41
Figure 2-8: AR-tsunami demo on a pier. (a) the pier in the real world without AR effect; (b)
by using AR-tsunami, the pier is covered by small waves. .............................................. 42
Figure 2-9: AR-tsunami demo on the driveway. (a) the driveway in the real world without AR
effect; (b) by using AR-tsunami, the driveway and the house are flooded by waves. ...... 43
Figure 2-10: AR-tsunami demo on a lake. (a) the lake in the real world without AR effect; (b)
AR-tsunami has detected the lake surface and placed the still water on it, and virtually
submerges the hand with people occlusion effect; (c) AR-tsunami displays the tsunami
waves................................................................................................................................. 44
Figure 2-11: AR-tsunami demo on a sloping lawn. (a) the sloping lawn in the real world
without AR effect; (b) by using AR-tsunami, the author is walking in the virtual still
water which is sloping along with the lawn; (c) by using AR-tsunami, the author can
look down and put the feet out of “water”; (d, e, f) by using AR-tsunami, the author is
“submerged” by the big waves. ........................................................................................ 46
Figure 2-12: AR-tsunami demo of wave propagations on a complicated bathymetry. (a)
screenshots of different moments of virtual wave propagation on the Beach of Crete in
the AR-sandbox app; (b) screenshots of different angles and moments of virtual wave
propagation on the Ventura Harbor in the AR-tsunami app. The people occlusion AR
effect can be shown having been added in the AR-tsunami app. ..................................... 48
Figure 3-1: The setup of the Project AR-sandbox. To view the depth map on a sandbox, users
only need a real sandbox, a regular projector, and an iPad with the AR-sandbox app
installed. ............................................................................................................................ 50
Figure 3-2: Diagram of tsunami run-up ........................................................................................ 54
Figure 3-3: Architecture of AR-sandbox ...................................................................................... 60
Figure 3-4: AR-sandbox demo of dropping a paper cube to show the depth map is changed
instantly with human interaction. (a-c) the serial photos of dropping a paper cube onto
the sand (the projector is turned off to better see the sand and cubes); (d-f) the serial
screenshots of the AR-sandbox app on an iPad Pro. (a) and (d), (b) and (e), (c) and (f)
are taken in the same moment. The depth map is updated 60 frames every second......... 62
Figure 3-5: Enhanced AR-sandbox demo of generating the topography of an island from a
hand and the instant wave simulation add-on. (a) run the enhanced AR-sandbox on an
iPad Pro 12.9-inch (4th generation) and aim the rear camera at the right hand. (b) after the
LiDAR scanner function scanned the right hand, an island is created with the contour
of the right hand and the background. The sinewave simulation on the right boundary
is instantly added on. The waves propagate towards the island. (c) the waves run up
the island and inundate parts of the shore which are referred to and generated from the
right little finger. ............................................................................................................... 63
Figure 3-6: The demo of UC Davis’ sandbox (S. Reed et al. 2016). (a) a scoop of real sands is
added to the virtual lake area; (b) after a second, the depth map has been changed
accordingly and shows a new island in the virtual lake. It is the augmented reality
sandbox done by UC Davis W.M.Keck Center for Active Visualization in the Earth
Sciences (KeckCAVES), UC Davis Tahoe Environmental Research Center, Lawrence
Hall of Science, and ECHO Lake Aquarium and Science Center (S. Reed et al. 2016). . 64
Figure 3-7: 3D object House and its marker. The 3D virtual building is downloaded from
online 3D objects libraries: https://3dwarehouse.sketchup.com/. (a) the marker of the
3D building. It is the photo of the author’s cat. (b) the marker is being tracked and the
3D building has been placed on the marker. (c) a different angle and different distance
to see the 3D building. ...................................................................................................... 66
Figure 3-8: Put "virtual" buildings on the sandbox via labels. The 3D models are downloaded
from https://3dwarehouse.sketchup.com/. (a-1, b-1, c-1) markers of tetrapod, core-loc,
and tetrahedron on the sandbox without AR effect. (a-2, b-2, c-2) respectively place
the breakwater tetrapod, core-loc, tetrahedron, and the beach house on their marker.
(a-3, b-3, c-3) the 3D objects viewed on a colorful depth map generated by AR-sandbox. ..................................................................................................................... 69
Figure 4-1: Comparison between traditional VR applications and WebVR applications. ........... 71
Figure 4-2: Comparison between traditional visualization of global tsunamis and Project
WebVR-tsunami. .............................................................................................................. 72
Figure 4-3: Architecture of WebVR-tsunami ............................................................................... 82
Figure 4-4: Screenshot of https://zilizhou.com/lynett in the Safari browser Version 15.4 on
macOS ............................................................................................................................... 84
Figure 4-5: Sequential screenshots of https://zilizhou.com/lynett in the Chrome browser
Version 99.0.4844.59 on iPhone 11 Pro Max. (a) Hunga Tonga Tsunami simulation
visualization of 0.8 hours since the eruption. (b) Hunga Tonga Tsunami simulation
visualization at 3.1 hours since the eruption. (c) Hunga Tonga Tsunami simulation
visualization at 3.1 hours since eruption after the northward rotation by screen touch.
(d) Hunga Tonga Tsunami simulation visualization at 3.1 hours since eruption after
zooming in by screen touch. (e) Hunga Tonga Tsunami simulation visualization at 3.1
hours since eruption after zooming out by screen touch. .................................................. 86
Figure 4-6: WebVR 3D visualization of 2022 Hunga Tonga Tsunami viewed by HTC VIVE.
(a) The VR view. White hands represent the two controllers; (b) The reality view of
the author using HTC VIVE to see the WebVR Hunga Tonga Tsunami in
https://zilizhou.com/become_archimedes. ........................................................................ 88
Figure 5-1: Demonstration of multiuser experience in AR-sandbox. ........................................... 94
Figure 5-2: Screenshots of the visualization of a 3D globe with the 2022 Hunga Tonga
Tsunami shockwave (marked as a red line) based on WebAR. (a) aim the rear camera
at the Hiro marker. (b) the animation of the 2022 Hunga Tonga Tsunami shockwave
on a 3D earth is placed on the Hiro marker. ..................................................................... 96
Figure 5-3: Immersive Computing for Coastal Engineering (ICCE) software bundle ................... 98
Abstract
Augmented reality (AR) is a technology that integrates 3D virtual objects into the physical world
in real-time, while virtual reality (VR) is a technology that immerses users in an interactive 3D
virtual environment. The fast development of augmented reality (AR) and virtual reality (VR)
technologies has reshaped how people interact with the physical world. This dissertation will
outline the deliverables from two unique AR and one Web-based VR coastal engineering projects
and will motivate the next stage in the development of the augmented reality package for coastal
students, engineers, and planners. Three projects demonstrate the completed aspects of this effort
– 1) Project AR-tsunami for promulgating education about coastal hazards, 2) Project AR-sandbox
for combining laboratory experiments and numerical simulations in coastal engineering, and 3) Project
WebVR-tsunami for providing a convenient tool for 3D tsunami visualization and education.
Project AR-tsunami and Project AR-sandbox produce two user-friendly and GPU-accelerated
iOS/iPadOS apps – AR-tsunami and AR-sandbox.
Combining the features of plane detection and people occlusion in ARKit with the
Boussinesq-type wave solver in Celeris, AR-tsunami can automatically render a tsunami on the
ground and provide an immersive experience of the impact of tsunamis for users. The goal of this
experience is to elicit an emotional response in users and influence future planning decisions, and
ultimately push a more proactive approach to tsunami preparedness. AR-sandbox utilizes the
LiDAR Scanner on Apple’s new generation of iPad Pro to gather a “point cloud” sampling of
arbitrary surfaces and generate a high-resolution digital elevation model (DEM) as the bathymetric
map for the hydrodynamic simulation. The wave simulation and visualization start instantly after
the DEM is transferred to wave solvers, and the resulting simulation is projected on the sandbox
through a projector. With AR-sandbox, coastal engineers can view the virtual waves interacting
with real-world sand. AR-sandbox combines laboratory experiments and numerical simulations
and provides better practicability and maneuverability. Project WebVR-tsunami produces an
online web-based VR tool using the numerical simulation of the 2022 Hunga Tonga Tsunami by
COULWAVE as a showcase (https://www.zilizhou.com/lynett). These are the first apps of their
kind to bring an interactive, immersive, and convenient experience to coastal hazard stakeholders
and coastal engineers.
The penultimate goal is to develop the software bundle ICCE (Immersive Computing for
Coastal Engineering). In ICCE, 1) AR-sandbox gathers the depth data from user input in the small
laboratory sandbox, 2) all the gathered data are stored in a cloud-based AWS file system, 3) AR-
tsunami uses the depth data collected from AR-sandbox for tsunami simulations, and 4) rendering
of simulation output via WebVR-tsunami provides an online entrance to an immersive experience
of coastal hazards. The goal of this software package is to provide a user-accessible interactive
and immersive AR and VR experience that can educate and inform stakeholders on coastal
processes and hazards. Additionally, this software suite will provide a testbed for developing new
coastal protection and disaster preparedness solutions that can be utilized by coastal engineers,
scientists, and planners.
Chapter 1 Introduction
1.1 Motivation
Recent destructive tsunamis and hurricanes, such as the Hunga Tonga Tsunami (2022), the
Indonesia Tsunami (2018), Japan Tsunami (2011), Chile Tsunami (2010), Hurricane Dorian
(2019), Hurricane Michael (2018), Hurricane Maria (2017), and Hurricane Ida (2021) have caused
significant damage to the coastlines of affected areas. Mitigating the damage associated with these events has become a global concern. Therefore, researchers and scientists have developed efficient
local warning systems and computational simulations, such as the NOAA Tsunami Program
(https://tsunami.gov/), COULWAVE (P Lynett et al. 2002), GeoClaw (Berger et al. 2011), and
Celeris (Tavakkol and Lynett 2017), to study the hazards, predict the arrival time and reduce
damage.
Currently, most models run on high-end computers and require previous experience and
engineering knowledge. The first goal of this dissertation is to make coastal wave simulations easy
to access and understand for everyone, not just for experts. The second goal is to use cutting-edge
technologies to provide stable, immersive, and interactive experiences, such as Augmented Reality
(AR) and Virtual Reality (VR). The theme of this dissertation can be defined as follows:
Developing mobile apps for high-order and high-performance coastal wave modeling
which can provide immersive experiences via AR or VR, real-time interaction using
GPU, and convenient accessibility via mobile apps or the web.
1.2 Justification
This research aims to 1) efficiently educate the public about the damage of coastal hazards like
tsunamis, and 2) combine laboratory experiments with instant computational simulations for flexibly performing hands-on engineering tasks. Applying AR and WebVR in coastal engineering applications is a novel approach. The achievements of this dissertation are two AR mobile apps and a WebVR application for interactive coastal wave visualization.
1.3 Objectives of Study
1.3.1 Models of Coastal Processes
Well-developed and common models for coastal processes include COMCOT (Cornell Multi-
grid Coupled Tsunami Model) (P.L.-F. Liu, Woo, and Cho 1998), COULWAVE (Cornell
University Long and Intermediate Wave Modeling Package) (P Lynett et al. 2002), Celeris
(Tavakkol and Lynett 2017), GEOCLAW (Berger et al. 2011), TUNAMI-N1 and -N2 (Shuto
et al. 2006), and FUNWAVE (Fully Nonlinear Wave Model) (Kirby et al. 1998). Solving
coastal engineering problems efficiently requires different simplified equations derived from the
full Navier-Stokes equations, driven by various physical assumptions and initial and boundary
conditions. In this dissertation, the focus will be on innovatively visualizing tsunami waves striking
the coast or propagating across the ocean. Most coastal wave models are based on the following
three incompressible fluid equations.
(1) Navier-Stokes Equations
The fluid dynamics of an incompressible fluid are governed by the Navier-Stokes equations
\[
\rho\left(\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u}\right) = -\nabla p + \mu\nabla^{2}\mathbf{u} \tag{1-1}
\]
\[
\nabla\cdot\mathbf{u} = 0 \tag{1-2}
\]
where \(\mathbf{u}\) is the velocity vector, \(p\) the pressure, \(\nabla\) the vector gradient, \(\nabla^{2}\) the Laplacian operator, \(\rho\) the fluid density, and \(\mu\) the dynamic fluid viscosity.
(2) Boussinesq-type Equations
With the Boussinesq approximation (Boussinesq 1897), the Navier-Stokes equations can be reduced to solvable depth-integrated flow equations in two horizontal dimensions for an incompressible fluid. The resulting equations are Boussinesq-type equations. The following is the Boussinesq-type model that is used in this research (Madsen and Sørensen 1992):
\[
\frac{\partial h}{\partial t} + \frac{\partial P}{\partial x} + \frac{\partial Q}{\partial y} = 0 \tag{1-3}
\]
\[
\frac{\partial P}{\partial t} + \frac{\partial}{\partial x}\left(\frac{P^{2}}{h} + \frac{gh^{2}}{2}\right) + \frac{\partial}{\partial y}\left(\frac{PQ}{h}\right) + gh\frac{\partial z}{\partial x} + \psi_{1} + f_{1} = 0 \tag{1-4}
\]
\[
\frac{\partial Q}{\partial t} + \frac{\partial}{\partial x}\left(\frac{PQ}{h}\right) + \frac{\partial}{\partial y}\left(\frac{Q^{2}}{h} + \frac{gh^{2}}{2}\right) + gh\frac{\partial z}{\partial y} + \psi_{2} + f_{2} = 0 \tag{1-5}
\]
where \(h\) is the total water depth, \(P\) and \(Q\) respectively the depth-integrated mass fluxes in the \(x\) and \(y\) directions, \(g\) the gravitational acceleration coefficient, \(z\) the bottom elevation measured from the still water depth, \(f_{1}\) and \(f_{2}\) respectively the bottom friction terms in the two directions, and \(\psi_{1}\) and \(\psi_{2}\) the dispersive Boussinesq terms defined by
\[
\psi_{1} = -\left(B + \tfrac{1}{3}\right)d^{2}\left(P_{xxt} + Q_{xyt}\right) - Bgd^{3}\left(\eta_{xxx} + \eta_{xyy}\right) - dd_{x}\left(\tfrac{1}{3}P_{xt} + \tfrac{1}{6}Q_{yt} + 2Bgd\,\eta_{xx} + Bgd\,\eta_{yy}\right) - dd_{y}\left(\tfrac{1}{6}Q_{xt} + Bgd\,\eta_{xy}\right) \tag{1-6}
\]
\[
\psi_{2} = -\left(B + \tfrac{1}{3}\right)d^{2}\left(Q_{yyt} + P_{xyt}\right) - Bgd^{3}\left(\eta_{yyy} + \eta_{xxy}\right) - dd_{y}\left(\tfrac{1}{3}Q_{yt} + \tfrac{1}{6}P_{xt} + 2Bgd\,\eta_{yy} + Bgd\,\eta_{xx}\right) - dd_{x}\left(\tfrac{1}{6}P_{yt} + Bgd\,\eta_{xy}\right) \tag{1-7}
\]
where \(d\) is the still water depth, \(B = 1/15\) is the calibration coefficient for the dispersion properties of the equations, and \(\eta\) is the free surface elevation. Boussinesq-type equations can be used to describe strongly nonlinear complex phenomena, such as waves with crests, irregular waves, waves in the surf zone, and waves on a current (Fredsoe and Deigaard 1992, 107).
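As a brief, hedged note on these coefficients (a standard property of Madsen–Sørensen-type equations, stated here for context rather than taken from the equations above): linearizing Equations (1-3)–(1-7) over a flat bottom gives the approximate phase speed
\[
\frac{c^{2}}{gd} \approx \frac{1 + B(kd)^{2}}{1 + \left(B + \tfrac{1}{3}\right)(kd)^{2}},
\]
which, with \(B = 1/15\), closely tracks the exact linear dispersion relation \(c^{2}/(gd) = \tanh(kd)/(kd)\) into intermediate water depths, whereas dropping \(\psi_{1}\) and \(\psi_{2}\) (the shallow-water limit discussed next) leaves only the non-dispersive speed \(c = \sqrt{gd}\).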
(3) Nonlinear Shallow Water Equations
Without the high-order dispersive terms, i.e., with \(\psi_{1} = 0\) and \(\psi_{2} = 0\), the Boussinesq-type equations reduce to the Nonlinear Shallow Water (NLSW) Equations. NLSW equations are widely used in tsunami warning systems and simulations in large-scale areas:
\[
\frac{\partial h}{\partial t} + \frac{\partial P}{\partial x} + \frac{\partial Q}{\partial y} = 0 \tag{1-8}
\]
\[
\frac{\partial P}{\partial t} + \frac{\partial}{\partial x}\left(\frac{P^{2}}{h} + \frac{gh^{2}}{2}\right) + \frac{\partial}{\partial y}\left(\frac{PQ}{h}\right) + gh\frac{\partial z}{\partial x} + f_{1} = 0 \tag{1-9}
\]
\[
\frac{\partial Q}{\partial t} + \frac{\partial}{\partial x}\left(\frac{PQ}{h}\right) + \frac{\partial}{\partial y}\left(\frac{Q^{2}}{h} + \frac{gh^{2}}{2}\right) + gh\frac{\partial z}{\partial y} + f_{2} = 0 \tag{1-10}
\]
where \(h\) is the total water depth, \(P\) and \(Q\) respectively the depth-integrated mass fluxes in the \(x\) and \(y\) directions, \(g\) the gravitational acceleration coefficient, \(z\) the bottom elevation measured from the still water depth, and \(f_{1}\) and \(f_{2}\) respectively the bottom friction terms in the two directions.
1.3.2 Augmented Reality (AR) Mobile App for Coastal Engineering Education
We aim to use visualization to promote the understanding of waves impacting the coastline,
and Augmented Reality is the tool we have chosen. Augmented reality (AR) technology integrates
3D virtual objects into the real world in real time (Azuma 1997). An advantage of AR is the ability
to function on portable mobile devices like an iPad or iPhone, which many people use in daily life.
A similar term “virtual reality” (VR) is defined as an interactive 3D virtual environment generated
by computers in which people are immersed (Burdea and Coiffet 2003). VR has much stricter requirements for equipment and space, as illustrated by the VR software shown in Figure 1-1. People can experience “mixed reality” with AR apps on their smartphones anytime and anywhere, whereas to use VR software, users need a powerful computer, a VR headset to display the virtual environment, two controllers to make choices or movements in the virtual world, usually two trackers to enable body tracking, and sufficient space to allow body movement. Thus, compared to VR, AR is much more convenient and is more likely to be widely used.
Figure 1-1: The demonstration of Celeris Base (2020). (a) Sasan Tavakkol, the co-developer of the Celeris Base VR software (Tavakkol and Lynett 2020), is showing how to use Celeris Base with VR equipment. (b) one of the virtual views in Celeris Base.
Due to its convenience and interaction features, AR is commonly used in gaming and
entertainment, such as in the bestselling AR mobile game Pokémon GO. Since 2020, the IT
industries have applied AR in digital advertising. For example, Apple uses AR technology to
promote the new 2021 Apple Watch Series 7. Customers can scroll down the product webpage
and open the AR app page in Apple’s web browser Safari on an iPhone or iPad. Customers will be
asked to turn on and move the back camera first, so the AR app can detect a table or ground surface
and put the virtual Apple Watch Series 7 on the real table/ground (Figure 1-2-b). Customers can
move, rotate, and change the size of the virtual Apple Watch to see its detail (Figure 1-2-c). What’s
more, since the People Occlusion and Motion Capture features have been added, users can virtually wear the new Apple Watch Series 7 (Figure 1-2-d). With this AR experience, customers can easily become acquainted with the 3D appearance of the Apple Watch Series 7 and can see more details
by changing the size of the virtual watch.
Figure 1-2: Apple Inc. uses augmented reality (AR) technology to immersively show the features
of the new Apple Watch Series 7 on an iPhone or an iPad for promoting this new product in
2021 (https://www.apple.com/apple-watch-series-7). (a) the table in the real world without any
AR effect. (b) the virtual Apple Watch Series 7 is put on the table. (c) customers can change the
position, the rotation, and the size of the virtual Apple Watch. (d) customers can virtually “wear”
the watch.
There are many augmented reality SDKs. The two most powerful and widely used
frameworks are Apple’s ARKit and Google’s ARCore. ARKit is integrated into iOS/iPadOS systems and supports Apple’s powerful native APIs like the 3D graphics frameworks SceneKit and Metal. Also, ARKit is
compatible with third-party libraries such as Unity IDE. These advantages make it suitable for
creating apps with a high demand for memory and rendering effects like video games. ARCore
works with camera-installed devices operating on Android 7.0 or later and can also support limited
iOS versions to use its AR effects Cloud Anchors and Augmented Faces. Compared to ARKit, it
does not support key AR features like Motion Capture, Simultaneous Front and Back Camera,
Multiple Face Tracking, People Occlusion, and machine learning algorithms for surface detection
(Amin and Govilkar 2015). There are other relatively light augmented reality SDKs. Simple CV
is an open-source augmented reality SDK whose advantage is compatibility with C++, Java, and
Python. The basic versions of Vuforia SDK and Kudan SDK are free, and Kudan has an object
recognition feature and so can be useful for commercial payment verification. In this study, ARKit
is chosen as the augmented reality framework for AR apps. Understanding the capabilities of AR
technology and knowing the convenience of portable devices, engineers or scientists can develop
mobile apps using AR technology as the visualization method of a computational fluid dynamics
simulation. In this study, the purpose of developing the AR apps is for coastal hazard education
and coastal engineering.
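As a minimal illustration of the two ARKit capabilities that matter most here, plane detection and people occlusion, the following Swift sketch shows how such an AR session could be configured. It uses ARKit with a RealityKit ARView purely for brevity; it is an assumed, simplified setup for illustration only, not the actual AR-tsunami source code or its Unity/Metal rendering pipeline.

```swift
import UIKit
import ARKit
import RealityKit

final class ARTsunamiViewController: UIViewController {
    let arView = ARView(frame: .zero)

    override func viewDidLoad() {
        super.viewDidLoad()
        arView.frame = view.bounds
        view.addSubview(arView)

        // Track the device in world space and look for horizontal surfaces
        // (beach, driveway, floor) on which virtual water could be placed.
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal]

        // Enable people occlusion so real people in the camera feed can
        // appear in front of (or "submerged" in) the rendered waves.
        if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
            configuration.frameSemantics.insert(.personSegmentationWithDepth)
        }

        arView.session.run(configuration)
    }
}
```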
Blending virtual waves with the real ground using AR technology enables users to directly perceive the impact of waves and other implicit information about fluids. Based on this, the AR-tsunami mobile app has been developed for coastal hazard education (Figure 1-3-b). By using the AR-tsunami app on their iPhone/iPad, users can see coastal hazards like a tsunami “flooding” the beach they stand on and find themselves “submerged” in the waves. Also, the uniqueness of AR technology can be used to perform hands-on engineering. Detailed information about AR-tsunami
will be introduced in Chapter 2.
Figure 1-3: App icons of AR-tsunami and AR-sandbox on an iPad Pro. (a) AR-tsunami and AR-
sandbox apps on iPad Pro 12.9-inch (4th generation) (shown in the red rectangle). (b) the app icon of AR-
tsunami. (c) the app icon of AR-sandbox.
1.3.3 Combination of Coastal Laboratory Experiments and Numerical Simulations based
on Augmented Reality (AR)
Coastal Engineering research is generally studied by employing laboratory experiments or
numerical simulations. Controlled laboratory experiments can provide precise observation data
and help investigate unknown characteristics, while numerical simulation results can reveal the subtleties of the physical process by representing the underlying physical properties, such as the different kinds of wave dispersion and amplitude introduced in section 1.3.1. Laboratory experiments in
Coastal Engineering are often conducted to study strong non-linear phenomena (Mazzanti and De
Blasio 2011; Sansón and Van Heijst 2000; Wolfe and Cenedese 2006; Park et al. 2017; Carrillo et
al. 2001) or coastal scenarios with a dynamic bathymetry (Türker, Yagci, and Kabdasli 2019;
Dupeyrat et al. 2011; Sánchez-Arcilla et al. 2011; Irish et al. 2014). The observations of many
laboratory experiments, such as the experiments on river inflow and tsunami run-up, are supported or justified by follow-up or previously validated numerical simulations. As an example, for
studying alongshore transport mechanisms, Horner-Devine et al. (2006) simulated a coastal river
inflow by setting up an annular water tank on a rotating table and using a diffuser attached to the
tank to introduce water into the tank. The velocity and buoyancy fields of water were measured,
and the measurements derived the relationship between the inflow Rossby number and the rate of
accumulation, which was confirmed by previous numerical model data created by Fong and Geyer
(2002). Many studies on the impact of tsunami inundation have compared the results of laboratory
experiments and numerical simulation. Laboratory experiments have the advantage of mimicking
high-resolution inundation in complex onshore environments, such as coastal forests, irregular
topography, or complex infrastructure. Meanwhile, with the same setup, numerical simulations are more economical, faster, easier to reproduce, and more convenient to improve. Combining these two approaches
can benefit coastal engineers.
In this dissertation, the AR-sandbox mobile iOS/iPadOS app has been developed for
combining laboratory experiments and numerical simulation via Augmented Reality (Figure 1-3-
c). By using the AR-sandbox app on their iPhone/iPad, users can see the projected depth color map
and fluids change in real-time, while they have interactions with the real-world sandbox, like
shoveling some sand to a different position. The governing equations of wave simulations in both
AR-tsunami and AR-sandbox are Boussinesq-type equations or Nonlinear Shallow Water
Equations introduced in section 1.3.1. Details on AR-sandbox will be discussed in Chapter 3.
1.3.4 Web-based Virtual Reality (VR) Visualization of Global Tsunamis
Virtual reality (VR) differs from augmented reality (AR): AR “augments” reality by providing interactions between virtual objects and the real world, whereas VR delivers a completely virtual world, which offers an immersive experience. VR is generally created and installed in the form of a software program on a PC. With the improvement of both software and hardware, Web-based Virtual Reality (WebVR) has emerged as a promising new technology.
characteristics of online visualization and cloud storage enable WebVR applications to be easier
to develop and faster to share. This makes WebVR more appealing for scientific visualization and
flexible education. In this dissertation, a development method for an interactive 3D WebVR
visualization of a global tsunami event is proposed. We aim to provide a state-of-the-art, efficient,
and immersive way to educate the public about natural hazard events.
1.4 Dissertation Contributions
This dissertation made the following unique contributions to the area of coastal hazard
communication and coastal engineering:
1) Developed two novel Augmented Reality applications for effective human interaction with coastal hazards.
2) Developed one novel Web-based Virtual Reality application for global tsunami
visualization.
3) Demonstrated a method to process data and perform complex calculations at high speed
on mobile devices.
4) Described a novel method using augmented reality to allow engineers to conduct hands-
on coastal engineering.
Chapter 2 AR-tsunami iOS/iPadOS App:
Interactive Augmented Reality Tsunami
Simulation System for Coastal Hazard Education
2.1 Introduction
Traditionally, maps are the main tool to present extreme natural events. Nowadays, with extended
reality technology like Augmented Reality (AR), the characteristics of those events can be emphasized and conveyed more vividly (Figure 2-1). Demonstration with AR gives a stronger impression by allowing users
to witness the hazard itself, and people can better capture the features of each hazard and connect
it to their lives.
Figure 2-1: Comparison of non-AR’s and AR’s efficiency in attracting attention and providing
emotional impact. The photo resources are collected from The Weather Channel online. The Weather Channel reports the storm surge by traditional maps in (a) and by AR in (c); The Weather Channel reports the tornado by traditional maps in (b) and by AR in (d).
Augmented reality can allow people to visualize complex spatial relations and capture
abstract concepts (Arvanitis et al. 2009), to experience phenomena that are not experienced by the
users such as a tsunami event (Zhou and Lynett 2020), and to interact with virtual objects (Ayoub
and Pulijala 2019). With these educational benefits, AR has emerged as a key technology for
education (Wu et al. 2013; Nincarean et al. 2013). Inspired by the effectiveness of these
applications of augmented reality systems in education, we created the augmented reality system
for coastal hazard education – AR-tsunami. The goal of Project AR-tsunami is to perform coastal
hazard education by visualizing tsunami waves and blending the virtual waves into reality. The
aim is to rapidly improve situational awareness of the severity of coastal hazards.
The main contributions of the Project AR-tsunami are:
1) AR-tsunami is the first and only mobile app using AR technology to visualize tsunami
waves.
2) AR-tsunami detects the horizontal surface like the beach or the driveway and places the
tsunami waves on it so that the virtual waves blend into the real world.
3) AR-tsunami recognizes people and performs people occlusion when visualizing the
virtual waves so that users can have an immersive experience.
4) A new GPU-based nonlinear wave solver has been tested and runs properly in the Metal Shading Language (MSL) on iOS/iPadOS devices, based on the wave solvers written in a different shading language, HLSL, in the open-source software Celeris Base and the
proprietary software library Surface Wave.
5) A way to access the input data online is proposed. The input data of the wave computations are stored on an Amazon Web Services (AWS) file system and are accessed online through the WebRequest and WebResponse classes in the .NET framework.
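Purely as a hedged illustration of this data-access step (the app itself uses the .NET WebRequest/WebResponse classes inside its engine, as stated above), the Swift sketch below fetches a remotely hosted input grid with URLSession; the bucket URL and file name are hypothetical placeholders, not the project's actual resources.

```swift
import Foundation

// Hypothetical location of an input bathymetry grid stored on a cloud file
// system (placeholder URL; the real input files and paths are not shown here).
let inputURL = URL(string: "https://example-bucket.s3.amazonaws.com/bathymetry/sample_grid.dat")!

let task = URLSession.shared.dataTask(with: inputURL) { data, response, error in
    if let error = error {
        print("Failed to download input data: \(error)")
        return
    }
    guard let data = data else { return }
    // Hand the raw bytes to the wave solver's input loader (not shown).
    print("Downloaded \(data.count) bytes of bathymetry input.")
}
task.resume()
```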
2.2 Related Work
For tsunami or coastal wave simulations, many popular computational models have been widely
used in tsunami warning systems, coastal hazard mitigation programs, and scientific research.
GeoClaw is an open-source software written in Fortran 95 for simulating depth-averaged flows by
Shallow Water Equations (SWE) with adaptive refinement (Berger et al. 2011). COULWAVE
solves various depth-integrated wave models including Nonlinear Shallow Water Equations
(NSWE) and several Boussinesq-type models (P Lynett et al. 2002), and it is widely applied for
simulating landslide or nearshore tsunami propagation, evolution, and inundation. TUNAMI-N2
(Goto et al. 1997) and its parallel version (Oishi, Imamura, and Sugawara 2015), and MOST (V.V.
Titov and Gonzalez 1997) with its developed version (V. Titov, Kânoğlu, and Synolakis 2016) are
all commonly applied for tsunami warning and analysis. SPHysics uses Smoothed Particle
Hydrodynamics (SPH) to simulate violent phenomena like wave breaking, sloshing, and wave
impacts on a structure (Gomez-Gesteira et al. 2012; Gómez-Gesteira et al. 2012).
For issuing hazard warnings, the GIS program at the National Center for Atmospheric
Research (NCAR) has been directing the bulk of the effort (Moloney and Unger 2014; Fanfarillo,
Del Vento, and Nichols 2018; B. Morrow and Lazo 2015; Lazo et al. 2014; B.H. Morrow et al.
2015; Schumacher et al. 2010; Deser et al. 2012; Bostrom et al. 2016). For example, NCAR studied
the conversations about hazards on the social media stream Twitter, analyzed the level of people’s
risk perceptions, concluded that social media messages have the potential to perform effective
communication, and discussed the computational modeling of a coupled natural-human system
(Morss et al. 2017). The Center for Complex Hydrosystems Research at the University of Alabama
has proposed a multi-hazard indexing approach for an improved and effective communication of
hurricane hazards (Song et al. 2020).
For AR applications used for training and communication, the AR concept has been used
in the virtual training of oral and maxillofacial surgery (Ayoub and Pulijala 2019), in art design education for developing high school art students’ independent thinking and creativity (Bower et al. 2014), and in math and science classes to facilitate students’ learning of geometry
(Chang, Morreale, and Medicherla 2010; Kaufmann and Schmalstieg 2002). Also, augmented
reality (AR) or virtual reality (VR) has been applied to facilitate hazard communication and
education. These studies focus mainly on construction safety hazard recognition (Tixier and Albert
2013; R Eiris Pereira et al. 2019; Li et al. 2018; Le et al. 2015). For instance, learners put a marker
on their body and other learners can use smart devices to scan the marker and virtually put the
Personal Protective Equipment (PPE) on their body so that learners can acquire safety knowledge
about PPE (Le et al. 2015). Most studies build a 3D virtual reality scene for learners to experience
the construction environments. For environmental hazard education, augmented reality technology
has been applied to visualize the Antarctic Ice Sheet using the Microsoft HoloLens (Boghosian et
al. 2019). HoloFlood is an augmented framework for flood preparedness and management by
simulating flood scenarios (Demir et al. 2018). It is mainly designed for use with Microsoft
HoloLens and mobile devices with augmented reality capabilities. Apart from visualizing the flood
hazard in a real environment, HoloFlood can also estimate the flood damage and which populations
will be affected, and these features can help authorities mitigate risks, such as selecting evacuation
routes or allocating resources. With these recent state-of-the-art AR applications, AR has been proven to be an efficient way to conduct hazard risk communication and engineering training (Lee
2012; Ricardo Eiris Pereira, Gheisari, and Esmaeili 2018; Albert et al. 2014; Zhu et al. 2018; Pedro,
Le, and Park 2016).
To apply the concept of AR, researchers should keep in mind that AR technology is highly
memory-consuming and needs a powerful hardware platform to perform. Nowadays, few portable
devices can handle AR computing locally. The Apple-designed graphics processing unit (GPU) in
iOS, iPadOS, and tvOS devices can optimize performance and power efficiency. Apple-designed
GPU can keep the calculations in fast local memory, so the new iPhone and iPad are ideal portable
devices for AR apps. People can also use external and portable GPUs to provide the required computing power.
NVIDIA’s Jetson is built with an NVIDIA Pascal-family GPU and is the world’s leading
embedded AI computing device. It includes small form-factor, high-performance computers called
Jetson modules. Jetson is used to speed up development, and it can support a wide range of
standard hardware interfaces which makes it easy to integrate into different products. In this study,
the new iPad Pro and iPhone Pro with Apple-designed GPU are the hardware platform to run AR
apps.
2.3 Mathematical solvers in AR-tsunami
2.3.1 Procedural Wave Model
Procedural waves in ocean wave animation are a type of standardized, simplified, and
closed-form representations of water waves, which concentrate on animating rich details of the
ocean surface without high computational expense (Jeschke et al. 2020; Hinsinger, Neyret, and
Cani 2002). Procedural wave models are commonly used in Computer Graphics industries for
modern open sea visual effects practice.
Wave physical properties and characteristic shapes of water are used in procedural wave
models to describe surface point displacements. In fluid mechanics, a typical way to generalize
surface point displacements is to consider that each particle on the free surface travels in a circle
around its rest position. The model in the XZ plane can be described as:
\[
x = x_{0} - A\sin\left(\omega t - kx_{0}\right) \tag{2-1}
\]
\[
z = z_{0} - A\cos\left(\omega t - kx_{0}\right) \tag{2-2}
\]
where \((x_{0}, z_{0})\) is the constant position of the particle at rest, \(A\) the wave amplitude referred to the location at rest, \(t\) the time, \(z\) the Z axis pointing up, \(\omega\) the angular velocity (frequency), and \(k\) the wavevector. A summation of these surface point displacements shows a pattern of the well-known
Gerstner swell, where the trajectories of particles of water present a stationary circle or ellipse. A
common type of procedural wave models (Fournier and Reeves 1986; Sébastien Thon and
Ghazanfarpour 2002; Sebastien Thon, Dischler, and Ghazanfarpour 2000; Ts'o and Barsky 1987;
Jeschke et al. 2020; Hinsinger, Neyret, and Cani 2002) sums these circular motions described in Equations (2-1)–(2-2) and solves the phases of the wave trains on the ocean surface. The model can be summarized as:
\[
x = x_{0} + \sum_{i=1}^{N} A_{i}\sin\left(\omega_{i}t - k_{i}x_{0} + \varphi_{i}\right) \tag{2-3}
\]
\[
z = z_{0} + \sum_{i=1}^{N} A_{i}\cos\left(\omega_{i}t - k_{i}x_{0} + \varphi_{i}\right) \tag{2-4}
\]
where \(\varphi_{i}\) is the initial phase offset, and \(N\) is the total number of wave frequencies. This type of procedural wave model considers the wave surface as a summation of different independent waves, and it simulates a train of trochoids.
Instead of producing a sum of different water surface wavelets, another common type of
procedural models uses the Fast Fourier Transform (FFT) to decompose a height field with the
same spectrum as the water surface. The height field of a collection of real cosine waves can be
built as:
\[
h(x, z, t) = \sum_{i,j} A_{i,j}\cos\left(\mathbf{k}_{i,j}\cdot(x, z) - \omega_{k}t + \varphi_{i,j}\right) \tag{2-5}
\]
where \(h\) is the height field, \((x, z)\) the coordinates in the XZ plane, \(t\) the time, \(A_{i,j}\) the real-valued constant amplitude, \(\mathbf{k}\) the wave vector in the direction of the wave's motion, \(\omega_{k}\) the angular velocity (frequency), whose relationship with the wavenumber \(k\) is \(\omega_{k} = \sqrt{gk}\), and \(\varphi_{i,j}\) the constant phase shift direction. Equation (2-5) can be used to describe any point at any time, but the numerical simulation of non-linear waves requires much computational time. An FFT-type procedural wave model transforms it into a summation form that rapidly computes the wave height everywhere in one computation. Equation (2-5) after the Fast Fourier Transform is (Bridson 2015):
\[
h_{p,q}(t) = \sum_{i=-\frac{n}{2}+1}^{\frac{n}{2}-1}\;\sum_{j=-\frac{n}{2}+1}^{\frac{n}{2}-1}\left(\tfrac{1}{2}A_{i,j}\,e^{-\mathrm{i}\omega_{k_{i,j}}t} + \tfrac{1}{2}\bar{A}_{-i,-j}\,e^{\mathrm{i}\omega_{k_{i,j}}t}\right)e^{\mathrm{i}\,2\pi(ip + jq)/n} \tag{2-6}
\]
where \(p\) and \(q\) are the integer indices in the grid space of \(n \times n\) points.
These FFT-type procedural wave models simulate different waves simultaneously, which
saves computational time compared to the Gerstner-type procedural wave models. Another
advantage is that the solution can be reused repeatedly, which enables the resulting surface region to expand freely (Hinsinger, Neyret, and Cani 2002). But the preliminary results of FFT-type procedural wave models are sine waves, which hardly match the water particle movement described in Equations (2-1)–(2-2). Other procedural wave models have been developed to overcome
the limitations of Gerstner-type or FFT-type models, such as bounding procedural wave motions
in a wave cage (Jeschke et al. 2020).
Different from Computational Fluid Dynamics (CFD) (Blazek 2015; Tu, Yeoh, and Liu
2018; Chung 2002; Anderson and Wendt 1995) and Smoothed Particle Hydrodynamics (SPH)
models (Monaghan 1992, 2005; Gingold and Monaghan 1977; M. Liu and Liu 2010), procedural
wave models only focus on surface motion and eliminate other computations which are necessary
to simulate certain types of realistic wave phenomena, such as strongly nonlinear wave run-ups.
Procedural waves should be used as a wave model to achieve computational flexibility rather than fidelity to fluid phenomenology (Tessendorf 2001).
The FFT-type and Gerstner-type procedural wave models in the ocean renderer Crest
(https://github.com/wave-harmonic/crest) have been adapted in AR-tsunami to simulate wave
propagation across open sea. The basic numerical scheme employed by Crest to generate
procedural waves is the wave spectrum sampling based on the FFT model proposed by Fréchot
(2006). After dividing the finite range of wave frequencies \([\omega_{\min}, \omega_{\max}]\) into \(M\) samples, the relationship between the amplitude \(A_{m}\) of each wave and the directional spectrum \(S\) is:
\[
A_{m}^{2} = 2\,S\!\left(\omega_{m}, \theta_{m}\right)\mathrm{d}\omega\,\mathrm{d}\theta \tag{2-7}
\]
where \(\theta\) is the constant phase shift direction. For an FFT-based model, sampling the wave spectrum in the Cartesian wavenumber domain is more appealing. The directional spectrum with respect to frequency, \(S(\omega, \theta)\), can be converted to the directional spectrum with respect to wavenumber, \(S(\mathbf{k})\), by the equation below:
\[
S(\mathbf{k}) = \frac{1}{2k}\sqrt{\frac{g}{k}}\;S\!\left(\sqrt{gk}, \theta\right) \tag{2-8}
\]
where \(\mathbf{k}\) is the wave vector, \(k\) the wavenumber, and \(g\) the gravitational acceleration. In Crest, the basic simulation
method is the wave spectrum FFT method on a meshed field, and a Gerstner wave shape
component can be added to the wave if a strong swell (or nonlinear) simulation is needed. We can
change the wave spectrum and the wavelength of different waves in the class OceanRenderer in
the procedural model of AR-tsunami to control the resulting shape of the water surface. We can
also add the shape asset OceanRenderer to generate Gerstner waves.
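As a hedged sketch of how Equation (2-7) can be used in practice, the Swift snippet below converts samples of an arbitrary, user-supplied directional spectrum into per-component amplitudes; the spectrum function and sampling resolution are illustrative assumptions, not the spectrum actually shipped with Crest or AR-tsunami.

```swift
import Foundation

/// Amplitude of one sampled wave component from a directional spectrum,
/// following Equation (2-7): A_m^2 = 2 * S(omega_m, theta_m) * dOmega * dTheta.
func componentAmplitude(spectrum: (Double, Double) -> Double,
                        omega: Double, theta: Double,
                        dOmega: Double, dTheta: Double) -> Double {
    return (2.0 * spectrum(omega, theta) * dOmega * dTheta).squareRoot()
}

// Illustrative (made-up) spectrum: energy concentrated near 0.8 rad/s,
// spread over direction; a real implementation would use an ocean spectrum.
let toySpectrum: (Double, Double) -> Double = { omega, theta in
    let peak = exp(-pow(omega - 0.8, 2) / 0.1)
    let spread = cos(theta) > 0 ? cos(theta) : 0
    return peak * spread
}

let dOmega = 0.05, dTheta = Double.pi / 16
let a = componentAmplitude(spectrum: toySpectrum, omega: 0.8, theta: 0.0,
                           dOmega: dOmega, dTheta: dTheta)
print("sampled component amplitude: \(a) m")
```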
2.3.2 Shallow Water Equation Model
Unlike procedural waves which linearly sum different wave trains and focus on expressing
wave surface displacements, a Shallow Water Equation (SWE) model simulates the water body
in an Eulerian framework. SWE models are derived from the basic Navier-Stokes equations. The
basic Navier-Stokes equations in the Cartesian coordinate systems are:
\[
\nabla\cdot\mathbf{u} = 0 \tag{2-9}
\]
\[
\rho\left(\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u}\right) = -\nabla p + \mu\nabla^{2}\mathbf{u} \tag{2-10}
\]
where \(\mathbf{u}\) is the velocity vector, \(p\) the pressure, \(\nabla\) the vector gradient, \(\nabla^{2}\) the Laplacian operator, \(\rho\) the fluid density, and \(\mu\) the dynamic fluid viscosity. SWE models can be divided into two types
NLSW equations are widely used in tsunami warning systems and simulations in large-scale areas.
The NLSW solver in the wave model Celeris (Tavakkol and Lynett 2017) is adapted in AR-
tsunami for tsunami simulations. Celeris’s NLSW model in conservative form is:
\[
\frac{\partial h}{\partial t} + \frac{\partial P}{\partial x} + \frac{\partial Q}{\partial y} = 0 \tag{2-11}
\]
\[
\frac{\partial P}{\partial t} + \frac{\partial}{\partial x}\left(\frac{P^{2}}{h} + \frac{gh^{2}}{2}\right) + \frac{\partial}{\partial y}\left(\frac{PQ}{h}\right) + gh\frac{\partial z}{\partial x} + f_{1} = 0 \tag{2-12}
\]
\[
\frac{\partial Q}{\partial t} + \frac{\partial}{\partial x}\left(\frac{PQ}{h}\right) + \frac{\partial}{\partial y}\left(\frac{Q^{2}}{h} + \frac{gh^{2}}{2}\right) + gh\frac{\partial z}{\partial y} + f_{2} = 0 \tag{2-13}
\]
where \(h\) is the total water depth, \(P\) and \(Q\) respectively the depth-integrated mass fluxes in the \(x\) and \(y\) directions, \(g\) the gravitational acceleration coefficient, \(z\) the bottom elevation measured from the still water depth, and \(f_{1}\) and \(f_{2}\) respectively the bottom friction terms in the two directions. We show the diagram of the total water depth \(h\) and the bottom elevation \(z\) in Figure 2-2.
Figure 2-2: the diagram of the total water depth \(h\), the bottom elevation \(z\), and the instantaneous wave height \(\eta\) measured from the still water depth.
The numerical discretization for the NLSW model in Celeris follows the second-order
central-upwind finite-volume scheme introduced by Kurganov and Petrova (2007). It is a finite
volume method (FVM) that is often used to solve the Saint-Venant system of Shallow Water
Equation models without any energy terms taken into account. This scheme is well-balanced and can ensure the depth values stay positive. In this finite-volume scheme for the 2D case, Equations (2-11)–(2-13) are solved computationally as follows:
( ) ( ) ( )
t
xy
+ + = U F U G U S U
(2-14)
( ) ( ) ( )
22
1
22 2
0
, , ,
2
2
x
y
P
Q
h
P gh PQ
P ghz f
hh
Q ghz f
PQ
Q gh
h
h
= = + = = +
+
+
U F U G U S U
(2-15)
where ( ) ,,
N
x y t U is the unknown conservative vector, ( ) FU and ( ) GU are the advective
flux vectors, and ( ) SU is the source term including bottom slop term 𝑔 ℎ𝑧 and friction term f .
How to derive a second-order well-balanced positivity-preserving 2D central-upwind scheme for Equations (2-14) – (2-15) is discussed in detail by Kurganov and Petrova (2007) and Kurganov, Prugger, and Wu (2017), whose work is based on the 2D Godunov-type central approach (Arminjon, Viallon, and Madrane 1998). To perform the central-upwind finite-volume scheme, the domain is gridded into the computational cells C_{j,k} := [x_{j-1/2}, x_{j+1/2}] × [y_{k-1/2}, y_{k+1/2}] of size |C_{j,k}| = Δx Δy, centered at (x_j, y_k) = ((x_{j-1/2} + x_{j+1/2})/2, (y_{k-1/2} + y_{k+1/2})/2), and the cell-averaged solution of the unknown conservative vector can be calculated by:

\frac{d\,\overline{\mathbf{U}}_{j,k}}{dt} = -\frac{\mathbf{F}_{j+1/2,k} - \mathbf{F}_{j-1/2,k}}{\Delta x} - \frac{\mathbf{G}_{j,k+1/2} - \mathbf{G}_{j,k-1/2}}{\Delta y} + \overline{\mathbf{S}}_{j,k}    (2-16)
where \overline{\mathbf{S}}_{j,k} is the cell-averaged source term:

\overline{\mathbf{S}}_{j,k} = \frac{1}{\Delta x\, \Delta y} \iint_{C_{j,k}} \mathbf{S}(\mathbf{U})\, dx\, dy    (2-17)
To guarantee positivity preserving of the total water depth h, a conservative correction is applied to the x-direction velocity u and the y-direction velocity v:

u = \frac{\sqrt{2}\, h\, (hu)}{\sqrt{h^4 + \max(h^4, \varepsilon)}} = \frac{\sqrt{2}\, h\, P}{\sqrt{h^4 + \max(h^4, \varepsilon)}}    (2-18)

v = \frac{\sqrt{2}\, h\, (hv)}{\sqrt{h^4 + \max(h^4, \varepsilon)}} = \frac{\sqrt{2}\, h\, Q}{\sqrt{h^4 + \max(h^4, \varepsilon)}}    (2-19)

where ε is a constant tolerance that keeps the denominators from becoming too small or zero.
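The following minimal Swift sketch restates the desingularized velocity recovery of Equations (2-18) and (2-19); it is illustrative only and not the Celeris or AR-tsunami shader code, and the tolerance value is an assumed placeholder.

import Foundation

/// Recover u and v from the fluxes P = hu and Q = hv while keeping the
/// division well defined as h -> 0 (Kurganov and Petrova 2007).
func desingularizedVelocities(h: Double, P: Double, Q: Double,
                              epsilon: Double = 1e-6) -> (u: Double, v: Double) {
    let h4 = h * h * h * h
    let denominator = (h4 + max(h4, epsilon)).squareRoot()
    let u = 2.0.squareRoot() * h * P / denominator   // Eq. (2-18)
    let v = 2.0.squareRoot() * h * Q / denominator   // Eq. (2-19)
    return (u, v)
}

// In a wet cell the correction is negligible; in a nearly dry cell it
// damps the velocity instead of dividing by a vanishing depth.
print(desingularizedVelocities(h: 1.0,  P: 0.5,  Q: 0.0))
print(desingularizedVelocities(h: 1e-4, P: 1e-6, Q: 0.0))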
The time integration for the NLSW model implemented in Celeris is divided into a predictor step and a corrector step. The predictor step uses a third-order Adams-Bashforth scheme (Durran 1991), which is explicit in time, and the following corrector step uses an optional fourth-order Adams-Moulton scheme (Moulton 1926), which is implicit in time. This common uniform time-stepping method has been shown to be efficient for initial value problems (J.C. Butcher 2016; Psihoyios and Simos 2005). Another type of time integration method, adaptive time-stepping, is also applied in Celeris and is introduced in detail in Tavakkol (2018). The time integration used in AR-tsunami is performed by the uniform third-order Adams-Bashforth scheme.
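The explicit third-order Adams-Bashforth predictor can be summarized by the following Swift sketch, shown in scalar form for illustration rather than as the Metal compute shader used in AR-tsunami; the right-hand-side function and the crude start-up of the history values are assumptions of this example.

import Foundation

/// One explicit third-order Adams-Bashforth step:
/// U^{n+1} = U^n + (dt/12) * (23 f^n - 16 f^{n-1} + 5 f^{n-2}).
func adamsBashforth3(u: Double, f0: Double, f1: Double, f2: Double, dt: Double) -> Double {
    return u + dt / 12.0 * (23.0 * f0 - 16.0 * f1 + 5.0 * f2)
}

// Toy test on du/dt = -u, whose exact solution is exp(-t).
var u = 1.0
var history = [-1.0, -1.0]            // f at steps n-1 and n-2, bootstrapped crudely
let dt = 0.01
for _ in 0..<100 {
    let f = -u
    u = adamsBashforth3(u: u, f0: f, f1: history[0], f2: history[1], dt: dt)
    history = [f, history[0]]
}
print("numerical:", u, " exact:", exp(-1.0))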
2.3.3 Boussinesq-type Equation Model
The Boussinesq-type wave solver in the wave model Celeris Base (Tavakkol and Lynett 2020) has been adapted in AR-tsunami. With the Boussinesq approximation (Boussinesq 1897), the Navier-Stokes equations can be reduced to solvable depth-integrated flow equations in two horizontal dimensions for an incompressible fluid. The resulting equations are Boussinesq-type equations. A Boussinesq-type model can be described as (Madsen and Sørensen 1992):

\frac{\partial h}{\partial t} + \frac{\partial P}{\partial x} + \frac{\partial Q}{\partial y} = 0    (2-20)

\frac{\partial P}{\partial t} + \frac{\partial}{\partial x}\left(\frac{P^2}{h} + \frac{gh^2}{2}\right) + \frac{\partial}{\partial y}\left(\frac{PQ}{h}\right) + \psi_1 + gh\frac{\partial z}{\partial x} + f_1 = 0    (2-21)

\frac{\partial Q}{\partial t} + \frac{\partial}{\partial x}\left(\frac{PQ}{h}\right) + \frac{\partial}{\partial y}\left(\frac{Q^2}{h} + \frac{gh^2}{2}\right) + \psi_2 + gh\frac{\partial z}{\partial y} + f_2 = 0    (2-22)

where h is the total water depth, P and Q respectively the depth-integrated mass fluxes in the x and y directions, g the gravitational acceleration coefficient, z the bottom elevation measured from the still water depth, f_1 and f_2 respectively the bottom friction terms in the two directions, and ψ_1 and ψ_2 the dispersive Boussinesq terms. Compared with the NLSW model (Equations (2-11) – (2-13)), the Boussinesq-type equation model in Celeris Base includes the extra dispersive Boussinesq terms as another source, and they are defined by:
\psi_1 = -\left(B + \frac{1}{3}\right) d^2 \left(P_{xxt} + Q_{xyt}\right) - Bgd^3\left(\eta_{xxx} + \eta_{xyy}\right) - d\,d_x\left(\frac{1}{3}P_{xt} + \frac{1}{6}Q_{yt} + 2Bgd\,\eta_{xx} + Bgd\,\eta_{yy}\right) - d\,d_y\left(\frac{1}{6}Q_{xt} + Bgd\,\eta_{xy}\right)    (2-23)

\psi_2 = -\left(B + \frac{1}{3}\right) d^2 \left(Q_{yyt} + P_{xyt}\right) - Bgd^3\left(\eta_{yyy} + \eta_{xxy}\right) - d\,d_y\left(\frac{1}{3}Q_{yt} + \frac{1}{6}P_{xt} + 2Bgd\,\eta_{yy} + Bgd\,\eta_{xx}\right) - d\,d_x\left(\frac{1}{6}P_{yt} + Bgd\,\eta_{xy}\right)    (2-24)

where d is the still water depth, B = 1/15 is the calibration coefficient for the dispersion properties of the equations, and η is the free surface elevation. The Boussinesq terms ψ_1 and ψ_2 are the main feature distinguishing short waves from long waves. Boussinesq-type equations can be used to describe strongly nonlinear complex phenomena, such as waves with crests, irregular waves, waves in the surf zone, and waves on a current (Fredsoe and Deigaard 1992, 107).
The numerical scheme for the Boussinesq-type model (Equations (2-20) – (2-24)) in Celeris is a hybrid FVM-FDM discretization on a uniform Cartesian grid. The advective flux terms are discretized using the same finite-volume well-balanced positivity-preserving central-upwind scheme as for the NLSW model introduced in section 2.3.2, while the newly added dispersive Boussinesq terms (ψ_1, ψ_2) are discretized by the second-order central finite difference method (FDM) and added into the computation before advancing to the next time step. The domain is gridded by the cells C_{j,k} := [x_{j-1/2}, x_{j+1/2}] × [y_{k-1/2}, y_{k+1/2}], and the FDM treatment of the source term can be described as:

\mathbf{S}(\mathbf{U}) = \begin{pmatrix} 0 \\ -\left(\psi_1 + gh\dfrac{\partial z}{\partial x} + f_1\right) \\ -\left(\psi_2 + gh\dfrac{\partial z}{\partial y} + f_2\right) \end{pmatrix}    (2-25)

\overline{\mathbf{S}}_{j,k} = \frac{1}{4}\left(\mathbf{S}_{j-1/2,k} + \mathbf{S}_{j+1/2,k} + \mathbf{S}_{j,k-1/2} + \mathbf{S}_{j,k+1/2}\right) + \delta    (2-26)

where δ is a disturbance corrector.

The time integration used in AR-tsunami for the Boussinesq-type model is performed by the uniform third-order Adams-Bashforth scheme.
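The dispersive terms are assembled from spatial derivatives such as η_xx and η_xy evaluated with second-order central differences. The short Swift sketch below shows those two stencils on a uniform grid; it is illustrative only and does not reproduce the full Celeris discretization.

import Foundation

/// Second-order central difference approximations on a uniform grid with
/// spacing dx, dy; eta is indexed as eta[j][k] ~ eta(x_j, y_k).
func etaXX(_ eta: [[Double]], _ j: Int, _ k: Int, _ dx: Double) -> Double {
    (eta[j + 1][k] - 2.0 * eta[j][k] + eta[j - 1][k]) / (dx * dx)
}

func etaXY(_ eta: [[Double]], _ j: Int, _ k: Int, _ dx: Double, _ dy: Double) -> Double {
    (eta[j + 1][k + 1] - eta[j + 1][k - 1] - eta[j - 1][k + 1] + eta[j - 1][k - 1])
        / (4.0 * dx * dy)
}

// Quick check on eta = x * y, whose exact cross-derivative is 1 and eta_xx is 0.
let n = 5, dx = 0.1, dy = 0.1
let eta = (0..<n).map { j in (0..<n).map { k in Double(j) * dx * Double(k) * dy } }
print(etaXX(eta, 2, 2, dx), etaXY(eta, 2, 2, dx, dy))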
2.3.4 Boundary Conditions
There are two main categories of boundary conditions: the solid-fluid interface boundary condition and the fluid-fluid interface boundary condition. For the bottom boundary condition at the solid-fluid interface, if the solid is impermeable, the boundary condition requires that the fluid never penetrate the solid. Mathematically, this implies that the relative velocity component of the fluid perpendicular to the solid surface must be zero:

v_n = 0    (2-27)

where the fluid is ideal with a zero viscosity (shown in Figure 2-3).
Figure 2-3: Motion of ideal fluid on the solid surface
For a viscous fluid, there are molecular attraction forces between the viscous fluid and the solid surface. These forces drag the flow layer close to the solid surface until it is completely stationary. Therefore, for a viscous fluid (viscosity coefficient μ ≠ 0), not only must the normal velocity component on the solid surface be zero, but its tangential velocity must also be zero (shown in Figure 2-4):

\mathbf{v} = 0    (2-28)
Figure 2-4: Motion of viscous fluid on the solid surface
No matter how small the viscosity is, all physical fluids must be subject to the no-slip boundary condition (Lauga, Brenner, and Stone 2005; Richardson 1973). According to the impermeability and zero-velocity conditions in Equations (2-27) and (2-28), the bottom boundary condition on the seabed surface in a 3D CFD or SPH tsunami model is:

\frac{\partial \phi}{\partial n} = \nabla\phi \cdot \mathbf{n} = \left(\frac{\partial \phi}{\partial x}\mathbf{i} + \frac{\partial \phi}{\partial y}\mathbf{j} + \frac{\partial \phi}{\partial z}\mathbf{k}\right) \cdot \left(\frac{\partial h}{\partial x}\mathbf{i} + \frac{\partial h}{\partial y}\mathbf{j} + \mathbf{k}\right) = \frac{\partial \phi}{\partial x}\frac{\partial h}{\partial x} + \frac{\partial \phi}{\partial y}\frac{\partial h}{\partial y} + \frac{\partial \phi}{\partial z} = 0    (2-29)

where φ is the velocity potential, n represents the unit external normal vector of the water bottom, h is the total water depth, i, j, and k are the unit vectors in the x, y, and z directions, and since it is the bottom boundary condition, z = -h.
In ocean modeling (Shchepetkin and McWilliams 2005; Haidvogel et al. 2008; P Lynett et al. 2002; Berger et al. 2011; Tavakkol and Lynett 2020), the sea surface boundary is a fluid-fluid interface boundary involving the water body and the air above. If the fluid (water or air) is moving and the interface is unsteady, then for a viscous fluid (i.e. a real fluid) the no-slip boundary condition (Equation (2-28)) must be applied to each fluid at the interface; if one of the fluids is an ideal fluid, there is no no-slip boundary condition for the inviscid fluid; if both fluids are ideal, there is no resistance force on the interface.

Commonly used ocean wave models are COMCOT (Cornell Multi-grid Coupled Tsunami Model) (P.L.-F. Liu, Woo, and Cho 1998), COULWAVE (Cornell University Long and Intermediate Wave Modeling Package) (P Lynett et al. 2002), Celeris (Tavakkol and Lynett 2017), GEOCLAW (Berger et al. 2011), TUNAMI-N1 and -N2 (Shuto et al. 2006), and FUNWAVE (Fully Nonlinear Wave Model) (Kirby et al. 1998). They all apply the free surface boundary condition for the upper boundary of ocean waves. The free surface refers to the interface between water and air. The pressure on the free surface is equal to the pressure of the external fluid at the interface, and fluid particles on the free surface always remain on the free surface. This indicates that the normal velocity of the fluid particles on the free surface is the same as that of the free surface itself, and it can be expressed mathematically as:

\frac{\partial \phi}{\partial n} = \nabla\phi \cdot \mathbf{n} = \mathbf{v} \cdot \mathbf{n}    (2-30)

where φ is the velocity potential of the fluid particle, v is the velocity of the free surface, and n represents the unit external normal vector of the free surface.
For the wave solvers in AR-tsunami, the water is also incompressible and the water density ρ is constant, i.e. ∂ρ/∂t = 0. Therefore, the continuity equation (Equations (2-9), (2-11), and (2-20)) in a 2D case is simplified to:

\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} = 0    (2-31)

Since the water is ideal, i.e. the viscosity coefficient μ = 0, the boundary condition on the sea surface is:

\frac{\partial \phi}{\partial z} = \frac{\partial \eta}{\partial t} + \frac{\partial \phi}{\partial x}\frac{\partial \eta}{\partial x} + \frac{\partial \phi}{\partial y}\frac{\partial \eta}{\partial y}    (2-32)

where η is the instantaneous wave height measured from the still water depth.
In AR-tsunami, the procedural wave models are adapted from Crest and the CFD wave solvers are adapted from Celeris. In the procedural wave models, the simulation at the boundaries aims to approximate physical wave behaviors instead of simulating the fluid phenomenon. Neither the FFT-type nor the Gerstner-type procedural wave models take boundary conditions into the computation. For common FFT-type models, the computed sine wave trains can be used as a tile that is duplicated and expanded to a larger domain, which avoids simulating the boundary. For common Gerstner-type procedural wave models, the wavelength is shortened near the beach or a steep terrain, but the models are not boundary-aware. The CFD wave solvers in Celeris provide five different lateral boundary conditions: sinewave maker, irregular wavemaker, sponge layer, fully reflective solid wall, and time series. The solid wall boundary condition is similar to the sea bottom condition (Equation (2-27)), where the depth-integrated mass fluxes P and Q equal zero in the direction perpendicular to the solid wall. To achieve this boundary condition, the two layers of ghost cells mirror the values of the two adjacent cells next to the boundary (see Figure 2-5 and the sketch after it).
Figure 2-5: Fully reflective solid wall boundary with two layers of ghost cells (yellow dots). The
domain is gridded for numerical discretization, and the dots are located at the center of the grid
for deploying the central-upwind scheme. Blue dots represent the inner water domain cells; two
layers of green dots represent the water domain cells adjacent to the boundary; two layers of
yellow dots represent the ghost cells, whose value will mirror the value of respective green dots;
black dots are solid wall cells.
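A minimal Swift sketch of the fully reflective wall treatment described above is given here for exposition; it is not the Celeris boundary shader. Negating the wall-normal flux in the mirrored ghost cells is one common way to make that flux vanish at the wall.

import Foundation

struct Cell { var h: Double; var P: Double; var Q: Double }

/// Fill two ghost layers at the left (west) wall of a 1D row of cells.
/// The ghost cells mirror the two interior cells next to the boundary; the
/// wall-normal flux P is negated so it averages to zero on the wall itself.
func fillWestWallGhosts(row: inout [Cell]) {
    // row[0], row[1] are ghost cells; row[2], row[3] the adjacent interior cells.
    row[1] = Cell(h: row[2].h, P: -row[2].P, Q: row[2].Q)
    row[0] = Cell(h: row[3].h, P: -row[3].P, Q: row[3].Q)
}

var row = [Cell](repeating: Cell(h: 0, P: 0, Q: 0), count: 6)
row[2] = Cell(h: 1.0, P: 0.4, Q: 0.1)
row[3] = Cell(h: 1.1, P: 0.5, Q: 0.1)
fillWestWallGhosts(row: &row)
print(row[0], row[1])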
In AR-tsunami, the wave is generated by the sinewave maker boundary condition. In Celeris, with a given amplitude a, period T, and direction θ, the sine waves on the boundary can be generated by assigning the following values to the free surface elevation η and the depth-integrated mass fluxes P and Q:

\eta(x, y, t) = a \sin\left(k_x x + k_y y - \omega t\right)    (2-33)

P = \eta\, c \cos\theta    (2-34)

Q = \eta\, c \sin\theta    (2-35)

k_x = k\cos\theta, \quad k_y = k\sin\theta, \quad \omega = \frac{2\pi}{T}, \quad c = \frac{\omega}{k}    (2-36)

where k is the given wave number.
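The Swift sketch below evaluates Equations (2-33) – (2-36) for one boundary point, following the reconstruction written above; it is an illustrative re-statement of the sinewave maker rather than the Celeris boundary shader, and the numerical inputs are placeholders.

import Foundation

/// Free surface elevation and fluxes prescribed by a sinewave maker at a
/// boundary point (x, y) and time t, following Eqs. (2-33)-(2-36).
func sineWaveMaker(x: Double, y: Double, t: Double,
                   amplitude a: Double, period T: Double, direction theta: Double,
                   waveNumber k: Double) -> (eta: Double, P: Double, Q: Double) {
    let kx = k * cos(theta)
    let ky = k * sin(theta)
    let omega = 2.0 * Double.pi / T
    let c = omega / k                              // phase speed
    let eta = a * sin(kx * x + ky * y - omega * t)
    return (eta, eta * c * cos(theta), eta * c * sin(theta))
}

// Placeholder values: a 0.5 m, 10 s wave entering at 30 degrees.
let k = 0.063                                      // assumed wavenumber [1/m]
print(sineWaveMaker(x: 0, y: 12.5, t: 3.0,
                    amplitude: 0.5, period: 10.0, direction: Double.pi / 6, waveNumber: k))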
2.4 Method of Building AR-tsunami App
2.4.1 iOS/iPadOS Augmented Reality Framework ARKit
In AR-tsunami, Apple’s Augmented Reality framework ARKit is used for the augmented reality
features. Accessed from the programming language Swift, ARKit provides technologies such as 1) device motion tracking to better blend virtual objects depending on the camera position, 2) scene understanding to detect real-world planes through machine learning, 3) image tracking to read a photo and use it as an anchor for virtual objects, and 4) people occlusion to make the AR experience more immersive. Also, by using Swift to code iOS/iPadOS apps, developers can access other powerful Apple frameworks and products, such as the new user interface toolkit SwiftUI, the powerful front/back cameras, and Apple's A13 Bionic chip with a graphics processor.
Developers can create AR apps with these technologies using the front or rear camera of an iPhone or an iPad. Figure 2-6 shows an image tracking app: the app detects the newspaper cover photo and plays a tsunami video in its place, behind the cover story. This image tracking technology is also used in the Project AR-sandbox for placing virtual 3D breakwaters and houses on their markers, which is discussed in section 3.5.3.
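As a hedged illustration of the image tracking workflow described above, the Swift fragment below configures an ARKit session to track reference images from an asset catalog. The resource group name "NewsCovers" and the surrounding session handling are assumptions made for this example; they are not taken from the AR-tsunami source.

import ARKit

final class ImageTrackingController: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        // Reference images are loaded from an asset catalog group; the group
        // name "NewsCovers" is a hypothetical placeholder.
        guard let referenceImages = ARReferenceImage.referenceImages(
                inGroupNamed: "NewsCovers", bundle: nil) else { return }

        let configuration = ARImageTrackingConfiguration()
        configuration.trackingImages = referenceImages
        configuration.maximumNumberOfTrackedImages = 1

        session.delegate = self
        session.run(configuration)
    }

    // Called when ARKit recognizes the cover photo; a video plane could be
    // anchored to the image anchor's transform here.
    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for anchor in anchors where anchor is ARImageAnchor {
            print("tracked image:", (anchor as! ARImageAnchor).referenceImage.name ?? "unnamed")
        }
    }
}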
Figure 2-6: Display the tsunami video behind the cover story on iPhone using image tracking
technology in ARKit. (a) the cover page of the San Francisco Chronicle on Monday, December
27, 2004, reporting the huge 2004 tsunami in Indonesia; (b) four screenshots in serial time
showing the AR effect of image tracking – sub-photo b.1 shows that in the beginning the camera is shooting the newspaper cover page with the original cover photo; sub-photo b.2 shows that the app has detected the cover photo and covered it with a video of the 2004 Indonesia tsunami; sub-photo b.3 shows the video being virtually played in the app; sub-photo b.4 shows that moving the camera closer to the newspaper zooms in the video correspondingly.
2.4.2 Wave Simulation and Visualization
The procedural wave model in Crest is used in AR-tsunami to simulate a simple open-water view and waves. The NLSW-type solver and the Boussinesq-type wave solver in Celeris Base (Tavakkol and Lynett 2020) are adapted into AR-tsunami to simulate and visualize nonlinear waves. Celeris is GPU-accelerated open-source software that simulates Boussinesq-type waves on the Windows operating system. Celeris Base is built on Unity in C#. To make the solver run on mobile devices like an iPad or iPhone, the compute shaders are transformed and compiled into Swift and the Metal Shading Language (MSL) so the solver can be built on the iOS/iPadOS mobile operating systems. The solver on iOS/iPadOS is also GPU-accelerated, which ensures an iPhone or iPad is capable of running the simulation on the device.
2.4.3 AR-tsunami Architecture
To transfer wave solvers and shading shaders in Celeris Base to AR-tsunami requires the following
steps:
1) Extract the simulation engine of Celeris Base:
a. SimulationFlow.cs – the SimulationFlow Framework in the Surface Wave
Software Code Animo which is utilized to control the simulation pipeline of
Celeris
b. ReadCBF.cs, ReadCML.cs, and SimSetting.cs – the scripts in charge of reading
bathymetry data, wave resources data, and initial/boundary conditions
c. TL17.compute and TL17Driver.cs – the solver of Boussinesq Model
d. KP07.compute and KP07Driver.cs – the solver of Nonlinear Shallow Water
(NLSW) Model
e. boundary_shaders.compute, time_integration.compute, and boundary.compute –
the compute shaders to deal with boundary conditions and time integration
2) Extract the renderers and rendering materials of Celeris Base:
a. TerrainRenderer.cs and WaveRenderer.cs– the rendering tool first developed by
software Code Animo and further developed by Celeris
b. SimpleTerrain.shader – the shader for shading the terrain surface
c. FjordWater.shader – the shader for shading the wave surface
3) Store input data on the Amazon Web Service file system.
4) Change the method of getting input data. In ReadCBF.cs and ReadCML.cs, the original approach of accessing input data from a local directory on the Windows system is replaced by accessing it online through the WebRequest and WebResponse classes in the .NET framework.
5) Create a new wave simulation project in Unity 3D using the extracted simulation solvers
and renderers.
6) In Building Settings of the Unity project, select the current scene and switch the platform
from PC to iOS.
7) Install ARKit API Packages in the Unity project to ensure the new wave simulation can
run on iOS. The needed packages and their versions are AR Foundation 4.1.7, ARCore
XR Plugin 4.1.7, and ARKit XR Plugin 4.1.7.
8) Remove the GameObject Main Camera in the Unity project. Add two GameObjects AR
Session and AR Session Origin with a GameObject AR Camera as its child. In the
GameManager, add the AR Camera as the camera.
9) Change the hierarchy of GameObjects. Move the wave simulation and texture renders
into AR Session Origin as its children. This ensures the simulation is running in an AR
environment.
10) Add Component AR Plane Manager in the GameObjects GameManager, Wave
Renderer, and Terrain Renderer, and select Horizontal as the Detection Mode.
11) Code a C# script AutoPlacementOfObjectsInPlane.cs. Set the first detected horizontal
plane as the position of the virtual waves. This ensures the horizontal plane is
automatically detected and the resulting virtual wave will be displayed on the first
detected horizontal plane. When the user is moving the camera, the virtual wave plane
will still be fixed in the real world.
12) There are different Pose Drivers in the ARKit XR Plugin of Unity to determine the
position relation between the user (camera) and the virtual objects. The Tracked Pose
Driver is used in AR-tsunami and added as the third component in the GameObject AR
Camera. Tracked Pose Driver can track the user’s motion and map the virtual object into
the coordinates of the real world according to the user’s 3D position.
13) In the Project Setting of the Unity project, add Metal Editor Support and Metal API
Validation in Player. Add descriptive sentences for Camera Usage Description and
Microphone Usage Description in Player.
14) Create an iOS build of the Unity project. The iOS build is an Xcode project written in
Programming Language Objective-C.
15) Open the iOS build in Xcode and sign in with your team account. Under TARGETS, set the iOS Deployment Target to version 15.0 and check the iPhone and iPad boxes in General Settings. The new version of iOS can also be automatically converted to iPadOS by Xcode when building iOS apps on an iPad.
16) Enable People Occlusion by adding the personSegmentationWithDepth frame semantic to the configuration's frame semantics (a hedged Swift example follows this list).
17) Connect an iPhone 11 or newer, or an iPad (3rd generation) or newer, to the Mac and use Xcode to build the project on the mobile device.
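As a hedged illustration of step 16, the short Swift sketch below adds the personSegmentationWithDepth frame semantic to a world tracking configuration with a capability check; it is a generic ARKit example, not the AR-tsunami project code.

import ARKit

func makeWorldTrackingConfiguration() -> ARWorldTrackingConfiguration {
    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = [.horizontal]   // detect the ground or floor plane

    // People occlusion: only add the frame semantic on devices that support it.
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
        configuration.frameSemantics.insert(.personSegmentationWithDepth)
    }
    return configuration
}

// Typically run from the view controller that owns the AR session:
// arView.session.run(makeWorldTrackingConfiguration())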
Figure 2-7 shows the architecture of the AR-tsunami iOS/iPadOS app. In brief, the workflow is –
1) Open the AR-tsunami app and give it permission to use the camera.
2) AR-tsunami automatically detects horizontal surfaces, such as the beach you are standing
on or your driveway.
3) AR-tsunami sends a web request to the server for the pre-stored input data of wave
simulation.
4) AR-tsunami calculates and simulates the wave equations on the local device (iPhone or
iPad).
5) AR-tsunami renders the waves and displays the People Occlusion feature, so you have an
immersive experience of tsunami hazards.
Figure 2-7: Architecture of AR-tsunami
2.5 AR-tsunami App Demos
2.5.1 Demo of Placing a Virtual Tsunami on Real-world Surfaces
Figure 2-8 shows a demo that was recorded on a pier with boats moored on still water. This demo helps coastal hazard learners understand that, if a coastal hazard happens, the water can be high enough to flood the pier. Using AR-tsunami near the coast can be useful for educating coastal residents and warning them about how destructive coastal hazards can be.
Figure 2-8: AR-tsunami demo on a pier. (a) the pier in the real world without AR effect; (b) by
using AR-tsunami, the pier is covered by small waves.
In Figure 2-9, the demo was shot on a driveway in a residential area. AR-tsunami detects the surface of the driveway and puts virtual tsunami waves on it. Learners can see how devastating waves can inundate the houses, which can prompt coastal hazard learners to pay more attention to the hazards that can potentially happen in or near their residential area.
Figure 2-9: AR-tsunami demo on the driveway. (a) the driveway in the real world without AR
effect; (b) by using AR-tsunami, the driveway and the house are flooded by waves.
2.5.2 Demo of Immersing a Person into a Virtual Tsunami
In Figure 2-10, a demo was shot using the AR-tsunami app on an iPhone 11 near Lake Monona in Madison, Wisconsin. The demo shows that, with the AR-tsunami app, the lake surface is covered by a virtual still surface or by tsunami waves, which blends the virtual fluid into reality. The demo also shows the People Occlusion feature in Apple's ARKit, which uses machine learning. This means AR-tsunami detects the 3D position of the author's hand relative to the virtual water surface, which is placed above the real lake surface. The initial conditions and boundary conditions of the waves have been stored in AR-tsunami, and the wave simulation is computed locally on the iPhone 11. This demo shows that with AR-tsunami, people can directly see what reality would look like if a tsunami or flood occurred, providing an immersive experience of how dramatic the hazard would be.
Figure 2-10: AR-tsunami demo on a lake. (a) the lake in the real world without AR effect; (b)
AR-tsunami has detected the lake surface and placed the still water on it, and virtually submerges
the hand with people occlusion effect; (c) AR-tsunami displays the tsunami waves.
In Figure 2-11, a demo was shot on a sloping lawn with the author as an example of a coastal hazard learner. AR-tsunami scans the real world, identifies the sloping surface, and places a water surface on a horizontal plane detected from the sloping lawn, which resembles the real situation of a flow flooding a sloping lawn. AR-tsunami recognizes the human body and body parts like feet, and it performs the people occlusion effect. Coastal hazard learners can experience walking in the still water and being submerged by the flood or tsunami waves. The visual effect can leave learners with a long-lasting memory of what coastal hazards will be like.
Figure 2-11: AR-tsunami demo on a sloping lawn. (a) the sloping lawn in the real world without
AR effect; (b) by using AR-tsunami, the author is walking in the virtual still water which is
sloping along with the lawn; (c) by using AR-tsunami, the author can look down and put the feet
out of “water”; (d, e, f) by using AR-tsunami, the author is “submerged” by the big waves.
2.5.3 Demo of Displaying a 3D Tsunami Inundating a Beach
In Figure 2-12, it is shown that wave simulations on a complicated bathymetry, such as a sandy coastal topography, can run on an iPad or iPhone. The waves in the demos of sections 2.5.1 and 2.5.2 are generated by an FFT-type procedural wave model, which demonstrates how AR-tsunami merges virtual tsunamis into reality. In this demo, the waves are simulated by the Boussinesq-type model in Celeris. The bathymetry data is pre-stored in the Amazon Web Service (AWS) file system. In Figure 2-12(a), a small part of the Beach of Crete is the domain, and the initial wave source is generated by the sinewave maker in Celeris on the north boundary with a direction towards the south boundary. The waves are computed in real time on an iPhone 11 Pro Max. In Figure 2-12(b), a larger domain is computed, and the resolution is higher while AR-tsunami still runs smoothly in real time. The area is Ventura Harbor, and a constant sine wave source is added on the west boundary. Since the virtual wave simulation is anchored to a fixed coordinate of the real world, users can move the camera and observe other parts of the domain. The People Occlusion feature is added for a better education experience.
Figure 2-12: AR-tsunami demo of wave propagations on a complicated bathymetry. (a)
screenshots of different moments of virtual wave propagation on the Beach of Crete in the AR-
sandbox app; (b) screenshots of different angles and moments of virtual wave propagation on the
Ventura Harbor in the AR-tsunami app; the people occlusion AR effect added in AR-tsunami is also shown.
Chapter 3 AR-sandbox iOS/iPadOS App:
Interactive Augmented Reality Sandbox with
Wave Simulation for Coastal Engineering
3.1 Introduction
Prior to the development of Project AR-sandbox, an augmented reality sandbox was constructed.
This existing sandbox used the Linux-based augmented reality software (S. Reed et al. 2016)
developed by the group at UC Davis W.M.Keck Center for Active Visualization in the Earth
Sciences (KeckCAVES). Their project uses a real sandbox, a powerful computer installed with a
simulation and visualization software, a data projector, and a Microsoft Kinect 3D camera to
exhibit the effect of the virtual topography and water map on the sand
(https://web.cs.ucdavis.edu/~okreylos/ResDev/SARndbox/). The purpose of their augmented
reality sandbox software is for advancing earth science (S.E. Reed et al. 2014). In this study, the
Project AR-sandbox means to create a more efficient and convenient AR sandbox software for
coastal engineering. Since 2020, Apple has integrated the LiDAR Scanner into the new generation of devices, the iPhone 12 Pro and the iPad Pro (4th generation). With the LiDAR Scanner, a brand-new interactive sandbox app can be built for coastal engineering on the iPad Pro. The setup of the Project AR-sandbox in this study is shown in Figure 3-1.
Figure 3-1: The setup of the Project AR-sandbox. To view the depth map on a sandbox, users
only need a real sandbox, a regular projector, and an iPad with the AR-sandbox app installed.
Compared to the UC Davis sandbox, the AR-sandbox app in this study runs the visualization on a GPU, providing faster computing and nearly instantaneous interaction. With the GPU, a more accurate high-order model can be used to simulate waves. Also, both the LiDAR scanning and the wave simulation computation are performed on the 4th-generation iPad Pro, while UC Davis's sandbox uses an external Kinect camera and a computer; this makes AR-sandbox much more convenient to set up. What's more, with the ARKit framework in the iOS/iPadOS system, AR effects such as people occlusion or image tracking can be developed to provide a more immersive experience.
The goal of the Project AR-sandbox is to perform interactive coastal engineering by visualizing the changes in the coast after interaction with the sandbox. The aim is to help engineers and researchers perform cursory coastal engineering design and predict coastal changes or evolutions conveniently and efficiently. The main contributions of the Project AR-sandbox are:
1) AR-sandbox is the first and only mobile app applying Apple's new LiDAR scanner and AR technology to the sandbox for coastal engineering.
2) An instant colorful depth map is generated by AR-sandbox and is projected by the projector onto the real sandbox, with the depth map matching the sand topography exactly.
3) AR-sandbox performs real-time depth sensing, instant wave simulation, and sand-surface rendering on the GPU.
3.2 Related Work
Interactive sandbox experiments can be designed to do various engineering tasks and studies, such
as studying the deformation of a granular-materials-covered ground when it contacts with rigid
objects (Onoue and Nishita 2003), predicting structure evolution (Crook et al. 2006), and studying
the effectiveness of hydraulic tomography (S. Liu, Yeh, and Gardiner 2002; X. Liu et al. 2007).
The Tangible Media Group in the MIT Media Lab developed a tangible interface called SandScape (https://tangible.media.mit.edu/project/sandscape/), in which users can view different landscape simulations of a sand terrain model projected on the surface of a sandbox, so that users can better understand a model presentation by manipulating both the physical form and the computational simulation (Ishii et al. 2004). With state-of-the-art augmented reality technology becoming popular, there is potential to develop more innovative applications with sandbox experiments.
UC Davis W.M.Keck Center for Active Visualization and the UC Davis Department of Geology lead an open-source AR sandbox project for geological process education (S.E. Reed et al. 2014; S. Reed et al. 2016). They use the Shallow Water Equations (SWE) as the governing equations for the water flow simulation, and the implementation follows the method introduced by A. Kurganov and G. Petrova (2007), using a second-order Runge-Kutta computation with the central-upwind scheme. They use C++ for the computation and GLSL shaders to render the results. Their computational method, the data connections between three devices (a computer, a data projector, and a Microsoft Kinect 3D camera), and the depth frame filtering process for noise reduction are all time-consuming. This makes the resulting visualization non-instantaneous, with a delay of about one second.
3.3 Numerical Simulation for Wave Run-up in AR-sandbox
Tsunamis are generated by intense seabed displacement or the movement of a large volume of water, such as a marine earthquake, a landslide, or a submarine volcano eruption. Unlike common wind waves or ripples, which only move up and down in the surface layer of the water body, tsunami waves are gravity waves that set the entire water body in motion. A tsunami carries tremendous energy. The energy E can be expressed in terms of the free surface elevation η:

E = \frac{1}{2}\rho g \iint \eta^2(x, y)\, dx\, dy    (3-1)

where ρ is the sea water density, g the gravity, and η(x, y) the free surface elevation measured
from the still water depth. The wavelength of tsunami waves can be up to 450 km, much larger than the water depth. Therefore, tsunami waves belong to the class of shallow water waves, whose wavelength is more than about twenty times the water depth. The wave speed v and the wavelength λ of shallow water waves are:

v = \sqrt{gh}    (3-2)

\lambda = vT    (3-3)

where h is the total water depth and T is the wave period.
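For example, in a typical open-ocean depth of h = 4000 m, Equation (3-2) gives v = √(9.81 × 4000) ≈ 198 m/s (roughly 713 km/h), and a 15-minute-period wave then has a wavelength of about 178 km by Equation (3-3); over a 20 m deep shelf the speed drops to about 14 m/s.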
According to Equations (3-2) and (3-3), when tsunami waves propagate in the deep sea (i.e. the value of h is large), the wave speed is fast and the wavelength is long. Due to conservation of energy, the wave height is then small, which makes it hard to detect a tsunami in the deep sea. But once the tsunami waves arrive offshore, due to the sharp reduction of water depth and the sudden accumulation of energy, the wave height increases rapidly and the tsunami waves run up onshore as a rapidly rising debris-laden flow, which can cause huge damage to the coast (as shown in Figure 3-2). The phenomenon of a tsunami shoving a large volume of water onto the shore above the still water level is called tsunami run-up, and the run-up measurement is the quantification of the maximum elevation onshore reached by a tsunami.
Figure 3-2: Diagram of tsunami run-up
Run-up height and distance are crucial factors to define the effect of a tsunami. Wave run-
up has been widely studied through laboratory experiments (Jensen, Pedersen, and Wood 2003;
Saville 1955; Saville Jr 1956; Gedik, İrtem, and Kabdasli 2005; Irish et al. 2014; Irtem et al. 2009),
the fieldwork of tsunami or hurricane events (McAdoo, Richardson, and Borrero 2007; Voulgaris
and Murayama 2014; Esteban et al. 2021; Okal et al. 2003; Bondevik et al. 1997), and numerical
simulations (Patrick Lynett and Liu 2005; Nakamura, Mizutani, and Fujima 2010; Marchuk and
Anisimov 2001; Weiss, Wünnemann, and Bahlburg 2006; Cho 1995). These studies have found that the characteristics of the tsunami run-up phenomenon are: 1) the bottom friction terms and the nonlinear convective inertia force become important as the waves run up onshore; 2) since the shoreline boundary moves dynamically during run-up and run-down, a moving boundary condition should be taken into consideration for a better approximation (Patrick Lynett and Liu 2005); 3) the approaching wave of a tsunami typically does not break at the shoreline, so models for nonbreaking long waves with Boussinesq terms are adequate to simulate tsunami run-up (Abdalazeez, Didenkulova, and Dutykh 2019). The 1D standard depth-averaged bottom friction term is (Rouse 1946):

f = \frac{k_f}{h}\, u\,|u|\, \frac{\partial x}{\partial a}    (3-4)

where f is the bottom friction, k_f is the bottom friction coefficient, h is the total water depth, u is the horizontal velocity, and a is the Lagrangian coordinate representing the initial location of the fluid; its relationship with the Cartesian coordinate x is:

\frac{\partial}{\partial x} = \left(\frac{\partial x}{\partial a}\right)^{-1} \frac{\partial}{\partial a}    (3-5)
In AR-sandbox, the numerical model is for simulating the wave run-up phenomenon. Boussinesq equations are the governing equations used to simulate the tsunami run-up:

\frac{\partial h}{\partial t} + \frac{\partial P}{\partial x} + \frac{\partial Q}{\partial y} = 0    (3-6)

\frac{\partial P}{\partial t} + \frac{\partial}{\partial x}\left(\frac{P^2}{h} + \frac{gh^2}{2}\right) + \frac{\partial}{\partial y}\left(\frac{PQ}{h}\right) + \psi_1 + gh\frac{\partial z}{\partial x} + f_1 = 0    (3-7)

\frac{\partial Q}{\partial t} + \frac{\partial}{\partial x}\left(\frac{PQ}{h}\right) + \frac{\partial}{\partial y}\left(\frac{Q^2}{h} + \frac{gh^2}{2}\right) + \psi_2 + gh\frac{\partial z}{\partial y} + f_2 = 0    (3-8)

where h is the total water depth, P and Q respectively the depth-integrated mass fluxes in the x and y directions, g the gravitational acceleration coefficient, z the bottom elevation measured from the still water depth, f_1 and f_2 respectively the bottom friction terms in the two directions, and ψ_1 and ψ_2 the dispersive Boussinesq terms:
\psi_1 = -\left(B + \frac{1}{3}\right) d^2 \left(P_{xxt} + Q_{xyt}\right) - Bgd^3\left(\eta_{xxx} + \eta_{xyy}\right) - d\,d_x\left(\frac{1}{3}P_{xt} + \frac{1}{6}Q_{yt} + 2Bgd\,\eta_{xx} + Bgd\,\eta_{yy}\right) - d\,d_y\left(\frac{1}{6}Q_{xt} + Bgd\,\eta_{xy}\right)    (3-9)

\psi_2 = -\left(B + \frac{1}{3}\right) d^2 \left(Q_{yyt} + P_{xyt}\right) - Bgd^3\left(\eta_{yyy} + \eta_{xxy}\right) - d\,d_y\left(\frac{1}{3}Q_{yt} + \frac{1}{6}P_{xt} + 2Bgd\,\eta_{yy} + Bgd\,\eta_{xx}\right) - d\,d_x\left(\frac{1}{6}P_{yt} + Bgd\,\eta_{xy}\right)    (3-10)
The friction terms f_1 and f_2 are important factors affecting run-up measurements, and they can be calculated via Manning's equation:

f_1 = \frac{g n^2\, P \sqrt{P^2 + Q^2}}{h^{1/3}\, \max\left(h^2, \varepsilon\right)}    (3-11)

f_2 = \frac{g n^2\, Q \sqrt{P^2 + Q^2}}{h^{1/3}\, \max\left(h^2, \varepsilon\right)}    (3-12)

where n is the Manning's coefficient, and ε is a constant tolerance that keeps the denominators from becoming too small or zero.
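A minimal Swift sketch of the Manning friction terms, following Equations (3-11) – (3-12) as reconstructed above, is given here; it is illustrative only, and the input values and tolerance are assumed placeholders.

import Foundation

/// Manning-type bottom friction terms for the flux-form momentum equations,
/// following Eqs. (3-11)-(3-12) as written above. The tolerance keeps the
/// denominator bounded away from zero in nearly dry cells.
func manningFriction(h: Double, P: Double, Q: Double,
                     manningN n: Double, g: Double = 9.81,
                     epsilon: Double = 1e-6) -> (f1: Double, f2: Double) {
    let speedFlux = (P * P + Q * Q).squareRoot()
    let denominator = pow(h, 1.0 / 3.0) * max(h * h, epsilon)
    let f1 = g * n * n * P * speedFlux / denominator
    let f2 = g * n * n * Q * speedFlux / denominator
    return (f1, f2)
}

// Placeholder values: 0.5 m of water moving at about 1 m/s over a sandy bed.
print(manningFriction(h: 0.5, P: 0.5, Q: 0.0, manningN: 0.025))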
3.4 Method of Building AR-sandbox app
3.4.1 LiDAR Scanner on New Apple iPad Pro
LiDAR, short for light detection and ranging, is an emerging tool for generating high-resolution 3D surface models (Reutebuch, Andersen, and McGaughey 2005). Both are remote sensing technologies, but RADAR, short for radio detection and ranging, uses wavelengths between 3 mm and 30 cm, while LiDAR works near the micrometer range with a typical wavelength of 900 nm (Intrieri et al. 1993). This means LiDAR generates high-resolution 3D data with an accuracy of about 2 mm. Apart from the advantage in accuracy, a LiDAR scanner traces a laser beam in space and calculates the travel time of the light reflected from each sample point, which gives LiDAR an advantage in speed (Kreylos, Bawden, and Kellogg 2008). The LiDAR Scanner on the 4th-generation iPad Pro can measure distances of up to 5 meters. These advantages and features make the iPad Pro a good choice for the Project AR-sandbox.
3.4.2 Wave Simulation Method
The numerical simulation and rendering follow the Boussinesq equation solver and shading in Celeris Base (Tavakkol and Lynett 2020). Celeris Base is open-source virtual reality software for wave simulation built in Unity 3D. The compute shaders for the wave simulation and the shading shaders for rendering are coded in HLSL. A Unity-to-iOS API is used to transfer the compute shaders and shading shaders to Apple's Metal Shading Language (MSL) so that AR-sandbox can be further developed with Apple's augmented reality framework ARKit. The wave simulation is GPU-accelerated by Metal and can run smoothly on mobile devices like an iPad or iPhone.
3.4.3 AR-sandbox Architecture
To enable AR-sandbox to render depth maps and simulate waves entails:
1) Create an empty AR project in Xcode in Programming Language Swift.
2) Create a class for receiving depth-related AR data from the LiDAR sensor called
ARReceiver. Apple Framework Combine should be imported to customize the handling
of asynchronous events.
3) Create a class for converting AR data to a Metal texture called ARProvider. Apple
Framework Accelerate and MetalPerformanceShaders should be imported to make
conversion run on the GPU.
4) Create a SwiftUI view for viewing the depth map using Metal Texture.
5) Create a function to control the configurations and update the view, such as the frame
size, the color pixel format, and the frame frequency.
6) Create a Metal shader to display a 2D texture, shade a 2D plane, convert a depth value to RGB, and shade a texture with depth values. The depth value captured by LiDAR can be obtained through the instance property depthMap, which gives the estimated distance in meters (a hedged Swift sketch of reading this buffer follows this list). To send depth data to the GPU, the depthMap pixel format should be one of the supported MTLPixelFormat values.
7) In AR-sandbox, the contour interval is 0.01m. The color from blue to red to grey means
the distance is from far to close.
8) Create a class for controlling the pipeline of receiving, providing, shading, and displaying the depth map, called CoordinatorDepth, which conforms to the MTKCoordinator protocol. The pipeline sets the pixel format according to the platform (iPad Pro, 4th generation), runs the vertex shader, which receives the depth data as point clouds, runs the fragment shader, which generates and shades the texture, and sends the result to a SwiftUI view. Now the instantly shaded depth map can be displayed on an iPad with a LiDAR scanner.
9) The wave solver from Celeris is in Objective-C when converted to the iOS/iPadOS platform, while the functions and classes that use the LiDAR depth data to display an instant depth map are written in Swift. To simultaneously render the depth map and run the wave simulation, an Objective-C header file referring to all the classes written in Swift is needed, or vice versa. The approach is to import the Swift code into the Objective-C project, which makes Xcode automatically generate a *-Bridging-Header. In the bridging header, developers can import the target's public headers that are exposed to Swift. To use Swift classes in an Objective-C script, developers should use the Objective-C Generated Interface Header, whose name can be found under the Swift Compiler settings in Build Settings.
10) A SwiftUI view with the Structure ZStack can be used to display wave simulation and
depth map at the same time.
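As a hedged illustration of how the LiDAR depth map mentioned in step 6 can be read, the Swift sketch below pulls one value from ARFrame.sceneDepth on the CPU; it is a generic ARKit example, whereas AR-sandbox converts the same buffer into a Metal texture and shades it on the GPU.

import ARKit

/// Read the LiDAR depth map (estimated distance in meters) from an ARFrame.
/// This CPU-side sketch is for illustration; the app instead converts the
/// buffer to a Metal texture for GPU shading.
func centerDepth(of frame: ARFrame) -> Float? {
    guard let depthMap = frame.sceneDepth?.depthMap else { return nil }

    CVPixelBufferLockBaseAddress(depthMap, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(depthMap, .readOnly) }

    let width = CVPixelBufferGetWidth(depthMap)
    let height = CVPixelBufferGetHeight(depthMap)
    let rowBytes = CVPixelBufferGetBytesPerRow(depthMap)
    guard let base = CVPixelBufferGetBaseAddress(depthMap) else { return nil }

    // The scene depth buffer holds one 32-bit float per pixel, in meters.
    let rowPointer = base.advanced(by: (height / 2) * rowBytes)
    let row = rowPointer.assumingMemoryBound(to: Float32.self)
    return row[width / 2]
}

// To receive these frames, the session configuration must request scene depth:
//   let configuration = ARWorldTrackingConfiguration()
//   configuration.frameSemantics.insert(.sceneDepth)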
Figure 3-3 shows the architecture of the AR-sandbox iPadOS app. In brief, the workflow
is
1) Connect the iPad pro with a projector, turn on the projector, open the AR-sandbox app
and give it permission to use the camera.
2) AR-sandbox automatically scans the terrain of the real sandbox and generates the 3D
depth model.
3) AR-sandbox generates the depth model and sends it to the server.
4) AR-sandbox calculates and simulates the wave equations based on the depth model of the
real sandbox.
5) AR-sandbox renders the waves and projects the AR effects on the real sandbox through a
projector.
Figure 3-3: Architecture of AR-sandbox
3.5 AR-sandbox App Demos
3.5.1 Demo of Basic Features in AR-sandbox
Currently, we have achieved the following:
(1) Displaying the interactive and real-time depth map on the sandbox (Figure 3-4).
(2) Generating an AR video with the waves on a complicated bathymetry detected by LiDAR.
(3) Combining (1) and (2) in one app on an iPad, which shows that an iPad Pro has enough memory to run AR-sandbox.
In Figure 3-4, it is shown that when engineers interact with the sand, AR-sandbox can
render the depth map in real time. The three photos in the top row are serial photos taken without
any AR effect, and the three photos in the bottom row are serial screenshots taken when using the
AR-sandbox on an iPad Pro. The photos and the screenshots below are taken at the same time
accordingly. The blue-to-red-to-gray color in the depth map means the elevation of the sand
topography is from low to high accordingly, which also means that the distance of the sand surface
to the LiDAR scanner in the rear camera of the iPad Pro is from far to close. Blue can mean the
area is under water, yellow to red can mean the ground, and gray can mean the infrastructure on
the ground mimicking buildings near the coast. The 2 mm accuracy of the LiDAR camera is enough to mimic coastal processes in a sandbox, and the response is nearly instantaneous, with 60 updates per second. Users can turn on the projector and project Figure 3-4 (d) – (f) onto the sand covering the same topography, so that they can see the colorful depth map change seamlessly on the real sand.
Figure 3-4: AR-sandbox demo of dropping a paper cube to show the depth map is changed
instantly with human interaction. (a-c) the serial photos of dropping a paper cube onto the sand
(the projector is turned off to better see the sand and cubes); (d-f) the serial screenshots of the
AR-sandbox app on an iPad Pro. (a) and (d), (b) and (e), (c) and (f) are taken in the same
moment. The depth map is updated 60 frames every second.
3.5.2 Demo of Enhanced AR-sandbox with Instant Wave Simulations Add-on
Figure 3-5: Enhanced AR-sandbox demo of generating the topography of an island from a hand, with the instant wave simulation add-on. (a) Run the enhanced AR-sandbox on a 4th-generation iPad Pro and aim the rear camera at the right hand. (b) After the LiDAR scanner has scanned the right hand, an island is created with the contour of the right hand and the background, and the sinewave simulation on the right boundary is instantly added; the waves propagate towards the island. (c) The waves run up the island and inundate parts of the shore that are generated from the right little finger.
The enhanced AR-sandbox can scan the surroundings, collect the depth data, and send it to the Boussinesq-type wave solver as the initial bathymetry input (shown in Figure 3-5). In Figure
3-5(a), the right hand is kept as still as possible. LiDAR depth data is updated 60 times per second.
After a second, an algorithm will generate a grid for the next wave simulation and calculate the
mean depth value of each depth point cloud. Then the gridded depth data is sent to the Boussinesq-
type wave solver. The value of still water depth is predefined. In Figure 3-5, the still water depth
is set to be beneath the island of “right hand”. Once the LiDAR depth data is transported to the
wave solver, the wave simulation instantly starts. In this demo, the sinewave maker is used to
generate waves on the right boundary towards the left boundary, which will inundate parts of the
island (shown in Figure 3-5(c)).
With the enhanced AR-sandbox, the wave simulation can be added onto the depth map of the sandbox. The wave animation, which is projected on the sandbox through a projector, can clearly display the wave motion on the sand. People occlusion is added so that when coastal engineers
interact with the sand, the virtual scene will not project on the hands, making it easier to conduct
engineering experiments. An instant depth map change and the simultaneous response of waves
to the depth change will be included in AR-sandbox, so the virtual fluid changes instantly while
the sand is changed by human interactions. Similar work can be seen in Figure 3-6.
Figure 3-6: The demo of UC Davis’ sandbox (S. Reed et al. 2016). (a) a scoop of real sands is
added to the virtual lake area; (b) after a second, the depth map has been changed accordingly
and shows a new island in the virtual lake. It is the augmented reality sandbox done by UC Davis
W.M.Keck Center for Active Visualization in the Earth Sciences (KeckCAVES), UC Davis
Tahoe Environmental Research Center, Lawrence Hall of Science, and ECHO Lake Aquarium
and Science Center (S. Reed et al. 2016).
Since depth map generation and wave simulation are both memory-consuming, the key to achieving this work is to decide the order and time interval of generating a depth map with the LiDAR scanner and running the wave simulation based on this depth map. The pipeline scheduling is planned here (a schematic sketch follows this list):
1) Scan and render the depth map every 0.2 seconds (in Figure 3-4 the frequency is 0.1 seconds).
2) Transfer the 3D data of the depth map to the wave solver every 0.4 seconds.
3) Make sure the wave simulation and rendering on the GPU take less than 0.4 seconds.
4) Add or update the wave AR scene on top of the AR scene of the depth map every 0.4 seconds.
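A schematic Swift sketch of the planned scheduling is given below; the intervals follow the list above, the two work closures are placeholders, and in the actual app the calls would be driven by the AR session's frame callbacks and GPU command completion rather than by this toy driver.

import Foundation

/// Schematic scheduler for the planned pipeline: the depth map is refreshed
/// every 0.2 s, and the gridded depth is handed to the wave solver every 0.4 s.
final class SandboxPipeline {
    private var lastDepthUpdate = Date.distantPast
    private var lastSolverUpdate = Date.distantPast

    func tick(now: Date = Date(),
              renderDepthMap: () -> Void,
              pushDepthToSolver: () -> Void) {
        if now.timeIntervalSince(lastDepthUpdate) >= 0.2 {
            renderDepthMap()
            lastDepthUpdate = now
        }
        if now.timeIntervalSince(lastSolverUpdate) >= 0.4 {
            pushDepthToSolver()          // must finish (simulate + render) in < 0.4 s
            lastSolverUpdate = now
        }
    }
}

// Example: called once per camera frame (60 Hz) from the AR session delegate.
let pipeline = SandboxPipeline()
pipeline.tick(renderDepthMap: { print("depth map refreshed") },
              pushDepthToSolver: { print("depth sent to wave solver") })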
3.5.3 Demo of Adding Digital Objects
Figure 2-6 has shown how to use the image tracking feature in ARKit to anchor virtual objects, such as videos, in a certain position. This technology can also be used to anchor virtual 3D buildings on the sandbox. By moving the tracked image, users can "move" the virtual buildings, and then users can view the different resulting wave simulations and estimate their safety level.
Currently, AR-sandbox can detect different markers and place 3D objects (such as breakwaters, buildings, etc.) in the same position as their markers. Figure 3-7 and Table 3-1 show the 3D virtual model of a beach house and three famous breakwaters (downloaded from the 3D model library https://3dwarehouse.sketchup.com/) and their markers. The newest version of ARKit can only place a given 3D object on one unique marker once, so in Figure 3-7, three markers are used to track the 3D object three times.
Figure 3-7: 3D object House and its marker. The 3D virtual building is downloaded from online
3D objects libraries: https://3dwarehouse.sketchup.com/. (a) the marker of the 3D building. It is
the photo of the author’s cat. (b) the marker is being tracked and the 3D building has been placed
on the marker. (c) a different angle and different distance to see the 3D building.
Table 3-1: Three types of 3D breakwaters (https://3dwarehouse.sketchup.com/) and their markers. For each breakwater type (Tetrapod, Core-loc, and Tetrahedron), the table shows a single unit, a cluster, its paper marker, and the 3D model being tracked on its marker.
In future work, the surface elevation for the wave simulation will be

elevation of wave model = topography of sandbox + topography of virtual objects
With the materials shown in Table 3-1, AR-sandbox can help coastal engineers assess the efficiency of different breakwaters in different positions and study wave impacts on coastal infrastructure. A demonstration of how to conduct this work is shown in Figure 3-8. Figure 3-8(a) – (c) respectively show the image tracking of the three different breakwaters – tetrapod, core-loc, and tetrahedron – on the sandbox. The markers are made of paper, so their depth is negligible when AR-sandbox generates the topography model of the sandbox with the LiDAR scanner. Hence, to assess and compare the efficiency of different breakwaters, the topography of the sandbox stays the same (as shown in Figure 3-8(a-1), (b-1), and (c-1)) and the colorful depth map stays the same (as shown in Figure 3-8(a-3), (b-3), and (c-3)). When importing the depth model into the wave simulation, AR-sandbox will sum the topography of the sandbox and the topography of the virtual objects.
Figure 3-8: Placing "virtual" buildings on the sandbox via markers. The 3D models are downloaded from https://3dwarehouse.sketchup.com/. (a-1, b-1, c-1) markers of the tetrapod, core-loc, and tetrahedron on the sandbox without the AR effect; (a-2, b-2, c-2) the breakwaters tetrapod, core-loc, and tetrahedron and the beach house placed on their respective markers; (a-3, b-3, c-3) the 3D objects viewed on the colorful depth map generated by AR-sandbox.
Chapter 4 WebVR-tsunami Web App: Web-
based Convenient Tool for 3D Global Tsunami
Visualization and Coastal Hazard Education
4.1 Introduction
With the improvement of both software and hardware, Web-based Virtual Reality (WebVR) as a
revolutionary technology has become more appealing for scientific visualization and flexible
education. WebVR’s characteristic combination of online visualization and virtual reality not only
provides a way to quickly spread knowledge but also enables an immersive and deeply memorable
experience. Additionally, the three-dimensional visualization can provide more information versus
traditional two-dimensional graphics or video presentations.
Different from the traditional VR technology, which requires VR software downloading
and only allows VR headsets to access virtual content, the cutting-edge WebVR technology does
not need to download a resource-intensive VR software and allows any mobile device, PC, or a
VR headset like Meta Quest or HTC VIVE to experience the virtual reality via a simple click on a
web link (as shown in Figure 4-1). The transition to web-based VR makes it convenient to try the web app and encourages more users to take a peek at the metaverse. This development makes WebVR a promising way to present and promote the understanding of global natural hazards.
Global natural hazards can widely affect people’s lives, so a realistic online visualization of the
hazards can promote efficient education or even autonomous learning activities.
Figure 4-1: Comparison between traditional VR applications and WebVR applications.
Over the last decade, tsunami events have gained increased global attention after the 2022
Hunga Tonga Tsunami, 2018 Indonesia tsunami, 2015 Chile tsunami, and 2011 Tohoku tsunami
in Japan. Most of these tsunamis propagated across entire oceans and their effects were felt worldwide. This leads to increasing discussions about the behavior of tsunamis both among
academics and in the general public. It also results in further collaborations across different fields
around the world. It also presses upon us the need to find better and faster means of communicating
pertinent information about tsunamis and educating the public about the dangers posed by these
natural events. Sevre, Yuen, and Liu (2008) proposed that interactive visualization with a web
portal helps us to understand complex tsunami wave behavior. The interactive 3D visualization of
a global tsunami event directly shown on a web browser can encourage comprehension of its
features. This can be justified directly through the comparison between traditional 2D visualization
of a global tsunami and the WebVR visualization used in Project WebVR-tsunami (as shown in Figure 4-2). Traditionally, to visualize a global tsunami, a 2D animated circle of the Earth is used to show the transoceanic tsunami. As in the figure on the left in Figure 4-2, with a certain projection method, researchers use a circle presenting one side of the Earth, and the animation inside the 2-dimensional circle represents the shock wave generated by the 2022 Hunga Tonga volcano eruption. But with WebVR technology (the figure on the right in Figure 4-2), the global tsunami is projected onto an interactive 3D animated Earth sphere. In this 3D virtual space, users can see any part of the Earth and can track exactly how the tsunami propagates on the sphere. Figure 4-2 shows that the WebVR visualization is more powerful than the traditional visualization.
Figure 4-2: Comparison between traditional visualization of global tsunamis and Project
WebVR-tsunami.
Now we aim to provide a state-of-the-art, efficient, and immersive way to educate the
public about natural hazard events. Since in most cases these global events are also a “Big Data”
problem, it is important to render the simulation results in a suitable way so that the visualization is presented in real time and the interaction responds smoothly. Therefore, in the framework
we have proposed, the open-source wave model COULWAVE is used to simulate a global tsunami
event, the WebVR framework Three.js and WebVR API A-Frame are applied to visualize the 3D
tsunami simulation in a web browser, and functions in JavaScript library jQuery are added to
enable interactions with the virtual tsunami. Finally, the cloud storage and content delivery
services in Amazon Web Service (AWS) are used to ensure the 3D visualization results can be
quickly accessed online anywhere via a desktop browser, mobile browser, or VR headset.
4.2 Related Work
The visualization of physical models such as tsunami simulations is critical for scientific
information delivery. Efficient visualizations can display the vital characteristics of the simulated
physical phenomenon. A 2D wave simulation video or graph with a legend showing the wave
height or propagation time is the traditional and common method for tsunami visualizations. It is
often used in tsunami warnings, such as graphs of the 2014 Chile tsunami travel times and
maximum wave amplitudes made by the National Oceanic and Atmospheric Administration
(NOAA) (https://ntwc.ncep.noaa.gov/previous.events/?p=09-24-13). The 2D visualization is
efficient for direct and fast tsunami warnings. But to better understand the propagation of a tsunami,
3D animations are needed. The software platform for 3D and 4D data visualization Amira is used
for portraying the wave heights of tsunami waves with lighting and generating simulation movies
to emphasize the tsunami generation stage (Sevre, Yuen, and Liu 2008). The videos showing the
3D simulations created by Amira make the simulation more realistic. Tavakkol and Lynett (2020) applied the Unity3D game engine as the visualization platform for the Boussinesq-type wave
solvers and created an open-source software Celeris Base based on VR. The capability of offering
an interactive and immersive experience makes Celeris Base able to give more flexibility and even
facilitate more understanding of the event. However, for a large-scale or transoceanic tsunami,
being projected on a globe should be more accurate and realistic. Global or spheric visualization
is common in visualizing the ocean and atmospheric circulation (Martin and Quirchmayr 2019;
Adcroft et al. 2004) or global climate models (Lindsey 2018; Lindsey and Dahlman 2020).
With technologies advancing in the field of virtual reality, WebVR has evolved from
displaying data with simple 3D objects, such as a 3D bar chart (P.W. Butcher and Ritsos 2017) or
a fitness landscape (Dolson and Ofria 2018), to performing virtual object manipulations (Sun et al.
2020) and presenting online applications, such as online fire evacuation training (Yan et al. 2020).
WebVR has been studied for promoting and visualizing open health data (Andrews et al. 2015;
Hadjar et al. 2018). The multidimensional health data are graphically displayed in 3D and stored
in the memory of a web browser. These projects show that WebVR can provide a more intelligible
way not only to explore complex data, but to lead to better decision making. What’s more, with
the advantage of remotely processing and sharing data via the Internet, visualizing the patients’
diagnosis data based on WebVR can be convenient for both doctors and patients through
telemedicine (Xu et al. 2019). Zhao et al. (2019) have proposed a case study showing a 3D
hydrodynamic flow field simulation based on WebVR. The data is calculated by the hydrodynamic
model HydroInfo, which runs on the server, and the results are visualized and presented via the WebVR framework Three.js (Danchilla 2012) and the WebVR API A-Frame (Neelakantam and
Pant 2017).
4.3 Transoceanic Tsunami Numerical Simulation
4.3.1 Modeling for the Transoceanic Propagation of Tsunamis
According to the distance from the source within which a tsunami has a destructive impact, tsunamis can be classified as local tsunamis (< 100 km), regional tsunamis (< 1000 km), and transoceanic tsunamis (> 1000 km) (Levin and Nosov 2009). Transoceanic tsunamis are rare compared to local or regional tsunamis, but they are much more disastrous. They are caused by giant earthquakes or volcanic eruptions. For instance, the 2004 Indonesia Tsunami was triggered by the 2004 Indian Ocean Earthquake with a magnitude of M_w 9.2, and the 2022 Hunga Tonga Tsunami, which was caused by the mega eruption of the submarine volcano Hunga Tonga-Hunga Ha'apai, propagated across the Pacific Ocean and reached the Atlantic Ocean. The scale of the
mega submarine earthquakes or volcanoes is from 10 𝑘𝑚 to more than 100 𝑘𝑚 , which leads to
the resulting wavelength of a tsunami with a similar size. Since normal ocean depth is 3 − 5 𝑘𝑚 ,
the shallow water wave theory could be applied to transoceanic tsunami modeling. The travel time
of a tsunami across the ocean can be quickly estimated based on Huygens’ Principle. To capture
the tsunami amplitudes and propagation, shallow water equations are commonly used to simulate
the tsunami. The initial condition for the Shallow Water Model can be acquired by the Okada
seafloor displacement model (Okada 1985) or the elastic half-space fault model by Mansinha and Smylie (1971). Because of the multiple reflections on the continental shelf and on the small basin
nearshore, a transoceanic tsunami usually arrives at bays as multiple waves, and the nearshore tsunami currents are nonlinear. Also, different from computing the wave rays of local and regional tsunamis, for transoceanic tsunamis the Earth's sphericity should be considered and a spherical (polar) coordinate system should be adopted. Satake (1988) applied the ray tracing of seismic surface waves to transoceanic tsunami propagation in spherical coordinates to examine the
bathymetric effect. The ray tracing equations for transoceanic tsunamis are:
\frac{d\theta}{dt} = \frac{1}{nR}\cos\zeta    (4-1)

\frac{d\varphi}{dt} = \frac{1}{nR}\frac{\sin\zeta}{\sin\theta}    (4-2)

\frac{d\zeta}{dt} = \frac{\sin\zeta}{n^2 R}\frac{\partial n}{\partial\theta} - \frac{\cos\zeta}{n^2 R \sin\theta}\frac{\partial n}{\partial\varphi} - \frac{\sin\zeta\,\cot\theta}{nR}    (4-3)

where θ is the ray colatitude, φ is the ray longitude, ζ is the ray direction measured counterclockwise from south, t is time, n is the inverse velocity of long waves (i.e. n = (gH)^{-1/2}), and R is the Earth's radius.
Shallow Water Equations (SWE) are a suitable physical model to describe transoceanic
tsunamis. SWE in the Cartesian coordinate system is shown in Equations (1-3) – (1-5). The
transformation from the Cartesian coordinate system to the spherical coordinate system of Shallow
Water Equations is (Castro, Ortega, and Parés 2017):
\frac{\partial h}{\partial t} + \frac{1}{R\cos\varphi}\left[\frac{\partial (h u_\lambda)}{\partial \lambda} + \frac{\partial (h u_\varphi \cos\varphi)}{\partial \varphi}\right] = 0    (4-4)

\frac{\partial u_\lambda}{\partial t} + \frac{u_\lambda}{R\cos\varphi}\frac{\partial u_\lambda}{\partial \lambda} + \frac{u_\varphi}{R}\frac{\partial u_\lambda}{\partial \varphi} - \frac{\tan\varphi}{R}\, u_\lambda u_\varphi = -\frac{g}{R\cos\varphi}\frac{\partial (h+z)}{\partial \lambda}    (4-5)

\frac{\partial u_\varphi}{\partial t} + \frac{u_\lambda}{R\cos\varphi}\frac{\partial u_\varphi}{\partial \lambda} + \frac{u_\varphi}{R}\frac{\partial u_\varphi}{\partial \varphi} + \frac{\tan\varphi}{R}\, u_\lambda^2 = -\frac{g}{R}\frac{\partial (h+z)}{\partial \varphi}    (4-6)

where h is the total water depth, z is the bottom elevation, (λ, φ) are the longitude and latitude, (u_λ, u_φ) are the depth-averaged velocities in the longitude and latitude directions, and g is the gravity.
4.3.2 Open-source Wave Solver COULWAVE
COULWAVE is adopted in the WebVR-tsunami showcase to simulate the 2022 Hunga Tonga Tsunami, a
transoceanic tsunami. COULWAVE is a high-order depth-integrated Boussinesq-type wave model
(P Lynett et al. 2002). With sufficient source data and accurate boundary conditions, the
COULWAVE solver can create a precise profile of tsunami wave heights and velocities. Because of
the nonlinear dispersive terms in the COULWAVE model, the results remain accurate as the tsunami
approaches the shore (i.e., when the wavelength becomes short) and when the bathymetry near the
source is complex. Meanwhile, COULWAVE supports parallel computing, which saves runtime for a
large-domain, data-intensive problem such as transoceanic tsunami modeling.
4.4 Core Technologies to Build WebVR-tsunami
4.4.1 Create Simulation Video: Map Projection and Video Format Selection
For worldwide visualization on a globe, choosing an ideal projection method to create the
simulation videos is the first step. Since the animated simulation will be wrapped on a 3D sphere
using Three.js, cylindrical projections should be used to create the simulation videos. Common
cylindrical projections are the Gall Stereographic Projection (gstereo), Equidistant Cylindrical
Projection (eqdcylin), Gall Isographic Projection (giso), Miller Cylindrical Projection (miller),
and Wetch Cylindrical Projection (wetch). In our study, the key purpose is to display the
worldwide propagation of the Hunga Tonga Tsunami, and the Pacific is our region of interest. We
select the gstereo projection method to project the tsunami simulation results onto a worldwide map.

With a proper projection method, the simulation is ready to be mapped onto a web-based 3D
earth globe. But to make the 3D animated visualization universally accessible, the video codec
should be carefully chosen to ensure the video is compatible with most browsers on different
operating systems. The MPEG-4/H.264 video format is commonly used and supported by the most
popular browsers, such as Chrome on Windows/Android/macOS/iOS and Safari on macOS/iOS; the
HEVC/H.265 format is more efficient than H.264 in encoding performance, but Chrome does not
support it; the Ogg/Theora format is enabled by default in Chrome, but Safari does not support it.
The open-source media player VLC can be used to inspect the codec of a video, and Movavi Video
Converter Premium can be used to resize or compress the video and change the video codec. After
testing, we chose the MPEG-4/H.264 format wrapped in a QuickTime MOV container for the simulation
video, and set the video resolution to 2880x1920, the buffer dimensions to 2880x1920, and the
frame rate to 30 fps.
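As a quick browser-side check of this compatibility (an illustrative snippet, not part of the
WebVR-tsunami code base), the standard HTMLMediaElement canPlayType API can probe codec support
before the video source is chosen; the MIME/codec strings below are common examples:

    // Illustrative sketch: probe codec support in the current browser.
    // canPlayType returns "probably", "maybe", or "" for each codec string.
    const probe = document.createElement('video');
    console.log('H.264 :', probe.canPlayType('video/mp4; codecs="avc1.42E01E"'));
    console.log('HEVC  :', probe.canPlayType('video/mp4; codecs="hvc1"'));
    console.log('Theora:', probe.canPlayType('video/ogg; codecs="theora"'));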
4.4.2 Build a 3D WebVR Earth with WebXR Framework A-Frame
Three.js is a JavaScript library for creating 3D computer graphics in a web browser. Using
three.js as its underlying library and built on top of HTML and the WebVR API, A-Frame is a
framework developed by the Mozilla VR team since 2015 for easily developing virtual reality
environments and applications in a web browser (Neelakantam and Pant 2017). In our web-based 3D
simulation, the sphere primitive in A-Frame is used to create a 3D earth. The material source of
the sphere is the processed simulation video introduced in Section 4.4.1. The camera component in
A-Frame reflects the mouse movement of desktop/laptop users, the screen-touch actions of mobile
users, and the headset movement of VR users. To make the interaction with the 3D earth more
flexible, the camera component in A-Frame can be modified according to the perspective from which
users want to view the scene. It is natural to observe the earth from an orbit around the Earth's
center, so we add an external library called aframe-orbit-controls, which lets desktop/laptop/mobile
users view every part of the 3D earth through a rotation-like action. The viewport in A-Frame
enables common interactions such as zooming in and out through mouse scroll or a two-finger scroll
on a trackpad. For VR users, the VR button provides a portal to enter VR mode. In VR mode, the
headset movement replaces the orbit control by mouse or screen touch in non-VR mode, which
provides a more immersive experience.
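A minimal A-Frame sketch of this setup is shown below. The asset file name, entity ids, and
orbit-controls parameter values are illustrative assumptions, not the exact markup of WebVR-tsunami:

    <!-- Illustrative sketch: a video-textured sphere as the 3D earth, with orbit-style camera control -->
    <script src="https://aframe.io/releases/1.3.0/aframe.min.js"></script>
    <script src="https://unpkg.com/aframe-orbit-controls/dist/aframe-orbit-controls.min.js"></script>
    <a-scene>
      <a-assets>
        <!-- Pre-rendered tsunami simulation video (gstereo projection, H.264) -->
        <video id="tsunami" src="tonga_simulation.mov" playsinline></video>
      </a-assets>
      <!-- Sphere primitive textured with the simulation video -->
      <a-sphere id="earth" src="#tsunami" radius="2" position="0 1.6 -4"></a-sphere>
      <!-- Camera orbiting the globe; distance limits keep the earth in view while zooming -->
      <a-entity camera look-controls="enabled: false"
                orbit-controls="target: 0 1.6 -4; minDistance: 3; maxDistance: 20"></a-entity>
    </a-scene>

In this sketch the orbit-controls component replaces the default free-look rotation with a
rotate-around-the-globe interaction and also handles mouse-scroll and two-finger zooming.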
4.4.3 Add HTML DOM Manipulation with JavaScript library jQuery
After the 3D WebVR earth is mapped with the animated simulation, the HTML video autoplay
attribute can start the simulation as soon as users enter the webpage. To show the time since the
eruption in the Hunga Tonga Tsunami simulation, another video displaying a clock is added to the
same webpage as the 3D earth but outside the WebVR scene. As long as the two videos have the same
duration and frame rate and the network response is fast, assigning the autoplay attribute to both
videos makes them play synchronously once the webpage is opened. But to provide more flexibility,
HTML Document Object Model (DOM) manipulation such as addEventListener is added through the video
events in the JavaScript library jQuery, so that the animated simulation and the clock video can
be played, paused, and reset instantly and simultaneously. We also add a time slider to allow
users to reach the desired information more conveniently.
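A minimal jQuery sketch of this wiring is shown below; the element ids (#sim-video, #clock-video,
#play, #pause, #reset, #slider) are hypothetical placeholders for the ids actually used on the page:

    // Illustrative sketch with hypothetical element ids; the deployed page uses its own ids and layout.
    $(function () {
      const sim   = $('#sim-video')[0];   // simulation video texturing the A-Frame sphere
      const clock = $('#clock-video')[0]; // clock video outside the WebVR scene

      // Play, pause, and reset both videos together so they stay synchronized.
      $('#play').on('click',  () => { sim.play();  clock.play();  });
      $('#pause').on('click', () => { sim.pause(); clock.pause(); });
      $('#reset').on('click', () => {
        sim.pause(); clock.pause();
        sim.currentTime = 0; clock.currentTime = 0;
      });

      // Time slider (0-100): jump both videos to the selected fraction of the duration.
      $('#slider').on('input', function () {
        const t = (this.value / 100) * sim.duration;
        sim.currentTime = t;
        clock.currentTime = t;
      });
    });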
4.4.4 Deliver Content Online Quickly Using Cloud Storage and a High-speed Global Content Delivery
Network on Amazon Web Services (AWS)
To publish VR content using the browser's WebVR API, the website must be served over
SSL/HTTPS, meaning the WebVR contents only work on secured websites. Online web hosting sites like
Glitch (https://glitch.com) or Neocities (https://neocities.org) serve over SSL/HTTPS, so through
them, VR projects based on A-Frame can be created and deployed from within the browser for free.
But self-hosting a website or hosting through Amazon Web Services (AWS) is also popular for
developers who want to publish WebVR content on their personal websites.

Using AWS to manage personally registered domains and deploy WebVR content is convenient,
because AWS provides SSL/HTTPS certificates, a cloud storage service (S3), and a high-speed global
content delivery network (CloudFront). The web-based 3D earth globe with the animated simulation
and HTML DOM manipulation in Section 4.4.3 involves several online libraries (A-Frame and jQuery),
HTML, and large videos (~150 MB). The related files are stored in the AWS S3 service under the
author's domain zilizhou.com, and we deploy the AWS CloudFront service to deliver the 3D earth
globe worldwide with high speed and low latency.
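For illustration, in the A-Frame scene the video asset can simply reference its HTTPS URL served
through the CDN; the URL below is a placeholder, and the crossorigin attribute is only needed if
the asset is served from a different origin than the page so the browser allows it as a WebGL texture:

    <a-assets>
      <!-- Placeholder URL; the real asset path on zilizhou.com may differ -->
      <video id="tsunami" src="https://zilizhou.com/assets/tonga_simulation.mov"
             crossorigin="anonymous" playsinline></video>
    </a-assets>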
In the order of Sections 4.4.1 to 4.4.4, the architecture of WebVR-tsunami is described in
Figure 4-3. As introduced in Section 4.4.1, a tsunami simulation must first be computed by a fluid
model. In our case, we used the high-order depth-integrated Boussinesq-type model COULWAVE to
simulate the tsunami. Because COULWAVE has been shown to be accurate both when a tsunami travels in
the deep sea and when it approaches the shore, it is a suitable model for understanding a
transoceanic tsunami and its impact on the coast. Then the WebVR framework A-Frame is used to
generate a 3D web-based VR earth and to map the tsunami video onto the earth globe, as introduced
in Section 4.4.2. With the JavaScript library jQuery, user controls such as play/pause/reset
buttons are added to the web page (Section 4.4.3). Finally, all the files are stored on Amazon Web
Services, and with the fast content delivery service in AWS, users can access the pre-registered
website and experience WebVR-tsunami anywhere in the world (Section 4.4.4). With a click of a
link, users can immersively learn about the tsunami on a VR headset, a mobile device, or a computer.
Figure 4-3: Architecture of WebVR-tsunami
4.5 Showcase: 2022 Hunga Tonga Tsunami
4.5.1 2022 Hunga Tonga Volcano Eruption Induced Tsunami
On 15 January 2022, at 17:14:45 local time, the submarine volcano Hunga Tonga-Hunga Ha'apai
erupted and generated a tsunami that traveled across the Pacific Ocean and even reached the
Atlantic Ocean, triggering tsunami alerts around the world (V. Titov et al. 2022). The information
on this volcano is visualized using WebVR at https://zilizhou.com/tonga. Different from scientists'
expectations for tsunamis generated by volcanic eruptions, the Hunga Tonga Tsunami had uniformly
small leading waves that arrived earlier than expected (Carvajal et al. 2022; Yuen et al. 2022).
The massive sequential explosions, the fast-moving atmospheric pressure waves, and the unique
close-to-trench bathymetry are all speculated to be factors in the generation mechanism of the
unusually fast tsunami waves caused by the Hunga Tonga underwater volcanic eruption (Patrick
Lynett 2022; Adam 2022; Yuen et al. 2022).

Due to the unprecedented nature of this event, the Hunga Tonga Tsunami is widely studied by
researchers from different fields, such as volcanologists, lightning experts, infrasound experts,
and seismologists. Displaying the Hunga Tonga Tsunami simulation through WebVR is thus an
effective way to deliver scientific information and enable fast communication. In this study, the
Hunga Tonga Tsunami simulation uses COULWAVE, an open-source wave solver (P Lynett et al. 2002),
and the WebVR result is shown on the website https://zilizhou.com/lynett.
4.5.2 Visualization on Computer and Mobile Browsers
Figure 4-4: Screenshot of https://zilizhou.com/lynett in the Safari browser Version 15.4 on
macOS
The visualization example of the WebVR 3D Hunga Tonga Tsunami is shown in Figure 4-4. In the
center, the 3D globe with the Hunga Tonga Tsunami simulation is placed in a 360-degree virtual
space. With a mouse-dragging movement, the globe can be rotated flexibly, so users can see any
part of the Earth according to their interests. In the upper left corner, there is a time clock
showing the time relative to the Hunga Tonga-Hunga Ha'apai volcano eruption. There are two time
references: the exact date and time in UTC format, and the hours since the eruption. The date and
time next to the clock synchronize with the clock and with the tsunami simulation shown on the
Earth globe. Beneath the time clock, there are three buttons and a time slider. After opening the
website, users can hit the "play" button to start viewing the simulation, and the time clock, the
simulation, and the date/time readout will play at the same time; to pause the simulation and
inspect the tsunami details, users can hit the "pause" button; to go back to the beginning, they
can hit the "reset" button. To make the control more flexible and accessible, there is a time
slider that lets users jump directly to the time they are interested in. By dragging the slider,
the time clock and the simulation display and pause at the corresponding time. As shown in Figure
4-4, the slider is dragged to 4.3 hours since the eruption, and the request for the time change is
sent to the event listeners of the time clock and the tsunami simulation. Accordingly, the time
clock and the tsunami simulation pause at the same moment: 4.3 hours since the eruption, i.e., 15
January 2022 at 8:33 UTC. In the lower right corner is the VR button, which helps VR headset users
enter VR mode. More discussion of VR mode is in Section 4.5.3.
Figure 4-5: Sequential screenshots of https://zilizhou.com/lynett in the Chrome browser Version
99.0.4844.59 on iPhone 11 Pro Max. (a) Hunga Tonga Tsunami simulation visualization of 0.8
hours since the eruption. (b) Hunga Tonga Tsunami simulation visualization at 3.1 hours since
the eruption. (c) Hunga Tonga Tsunami simulation visualization at 3.1 hours since eruption after
the northward rotation by screen touch. (d) Hunga Tonga Tsunami simulation visualization at 3.1
hours since eruption after zooming in by screen touch. (e) Hunga Tonga Tsunami simulation
visualization at 3.1 hours since eruption after zooming out by screen touch.
When using a browser on a mobile device, the content is similar, but the sizes of the
WebVR content (the 3D globe) and the non-WebVR contents (the time clock and the buttons) change
according to the screen size of the mobile device. Figure 4-5 shows the visualization in the
Chrome browser on an iPhone 11 Pro Max. Figure 4-5(a) shows the contents after hitting the "play"
button; Figure 4-5(b) shows the animated Tonga simulation and the time clock at 3.1 hours since
the eruption; the change of the virtual earth orientation between Figure 4-5(b) and Figure 4-5(c)
shows the screen-touch operation of northward rotation; the changes of the virtual earth size
between Figure 4-5(c), Figure 4-5(d), and Figure 4-5(e) show the screen-touch operations of
zooming in and out.
The close-to-reality 3D WebVR content and easy-to-understand HTML controls in
https://zilizhou.com/lynett suggest many advantages of using WebVR to display the Hunga Tonga
Tsunami simulation. It is fast and easy to access, since users need neither powerful hardware nor
any content to download. More importantly, unlike traditional video or photo visualization
methods that offer only limited angles of the resulting scene, VR visualization makes it possible
to pick and view any angle from anywhere. In addition, putting high-resolution visualization
results online makes it easier for collaborating researchers to share their simulation results by
simply sending a web link.
4.5.3 Visualization on VR headsets
The WebVR 3D visualization of the 2022 Hunga Tonga Tsunami can also be viewed through a
VR headset, such as the Oculus Quest or HTC VIVE (shown in Figure 4-6). After setting up a VR
headset and hitting the lower right "VR" button in Figure 4-4, the scene enters VR mode. Since
only the background space and the 3D earth with the Hunga Tonga Tsunami simulation are in the
3D VR scene, while the clock, the buttons, the slider, and the logo are plain HTML elements, users
in VR mode can experience an immersive tour of the Hunga Tonga Tsunami from space. With
sufficient room, users can walk around the "earth" and observe the Hunga Tonga Tsunami
propagation as a satellite would.

To improve the interaction with the WebVR earth when using VR equipment, the A-Frame
open-source component Super Hands (https://github.com/wmurphyrd/aframe-super-hands-
component) is added. The new WebVR Hunga Tonga Tsunami simulation website with the extra
Super Hands add-on is https://zilizhou.com/become_archimedes. Viewing this website with the
HTC VIVE VR equipment is shown in Figure 4-6.
Figure 4-6: WebVR 3D visualization of 2022 Hunga Tonga Tsunami viewed by HTC VIVE. (a)
The VR view. White hands represent the two controllers; (b) The reality view of the author using
HTC VIVE to see the WebVR Hunga Tonga Tsunami in
https://zilizhou.com/become_archimedes.
Figure 4-6(a) shows what the user is viewing on the WebVR Hunga Tonga Tsunami
simulation website with an HTC VIVE headset. The left and right white hands represent the left
and right controllers. Pulling the trigger on a controller makes the corresponding white hand in
the VR scene perform a grab action, and if the user is close enough, pulling the trigger can grab
and move the earth. As shown in Figure 4-6(b), the author is pulling the trigger on the right
controller, and correspondingly the white right hand appears to be grabbing the 3D Earth in
Figure 4-6(a). This feature allows users who do not have enough room to walk around to still view
closer or different parts of the earth.
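A minimal A-Frame sketch of this interaction pattern is shown below; the entity ids and options
are illustrative assumptions, and depending on the component version a collider component (e.g.,
sphere-collider from aframe-extras) may also be required on the hand entities:

    <!-- Illustrative sketch: controller-based grabbing with aframe-super-hands-component.
         Entity ids and options are assumptions, not the deployed markup. -->
    <a-scene>
      <!-- The earth is marked grabbable so a controller trigger can pick it up and move it -->
      <a-sphere id="earth" src="#tsunami" radius="2" position="0 1.6 -4" grabbable></a-sphere>

      <!-- One entity per hand: tracked controller model plus the super-hands gesture mapping -->
      <a-entity hand-controls="hand: left"  super-hands></a-entity>
      <a-entity hand-controls="hand: right" super-hands></a-entity>
    </a-scene>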
Chapter 5 Future Work
5.1 AR-tsunami Future Work: Suggestion for Enhancing AR-
tsunami with Deep Learning Models
Boussinesq-type equations and Nonlinear Shallow Water (NLSW) equations can resolve and
simulate complex flows, but they do not fully capture the turbulent behavior. In turbulent situations,
the 3-D pressure and velocity fields behave in a chaotic way and random temporal disruptions
appear (Wilcox 1998). To properly describe turbulence, engineers need 3D simulations to resolve
Navier-Stokes (N-S) equations with high Reynolds numbers. The numerical simulation of a 3D
Navier-Stokes model usually needs to be performed on a supercomputer with enough memory to
hold the billions of grid points. For example, Saito and Furumura (2009) used incompressible 3D
Navier-Stokes equations as the governing equations, and the 3D tsunami velocities and heights
were calculated by the finite-difference method (FDM) using parallel computing procedures on
supercomputers. Solving the N-S equations remains a mathematical challenge, and simulating
turbulence is difficult. At the same time, turbulence is a common phenomenon in coastal areas,
for example when fast flow hits coastal infrastructure.
In this study, we want to capture the physical effects of turbulent processes like eddies or
shock waves and add virtual turbulence to AR apps, which can possibly be achieved through
physics-aware deep learning (Zanna and Bolton 2021). Unlike traditional physics-driven
approaches to resolving complex phenomena, physics-aware deep learning is a data-driven
algorithm that uses neural networks to learn a pattern capturing the physical
phenomenon. When using physics-aware deep learning to model turbulence, three questions should be
considered: what input data to use, how to learn, and what to learn. The most conventional and
straightforward answer to these three questions is to use observation data as input, simple
neural networks as the learning architecture, and the unknown coefficients as the learning
target. This is also called parameter
learning. Schneider et al. (2017) built a blueprint for models that can quickly learn from
observational climate data and targeted high-resolution simulations, and learn the unknown
parameters of the Lorenz 96 model in a perfect-model setting. The disadvantage of this approach
for fluid models is that the learned parameters are only correct in ideal situations. An alternative
way for parameter learning problems is to modify the architecture of NNs (Zanna and Bolton 2021).
In the work of Zanna and Bolton (2020), the map of resolved velocity components is used as input
data of CNNs, and the output or the unknown parameters are the subgrid eddy momentum forcing
in x and y directions. A physics-aware layer is constructed with fixed parameters as the final
convolutional layer of the CNNs. The final layer computes the spatial derivatives of the previous
layer, which represent the eddy tensor elements. This approach is better for parameter learning
and more physically robust because, rather than being restricted to specific learned coefficients,
it directly integrates physical principles while learning the parameters. Another approach to
constructing physics-aware neural
networks is to embed physics constraints into the loss function in the training process. Beucler et
al. (2021) and Raissi, Yazdani, and Karniadakis (2020) both use conservation laws as the loss
function and use the neural networks to learn and predict fluid motion.
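As an illustrative sketch (the notation here is assumed, not taken from those papers), such a
physics-constrained loss for a shallow-water surrogate can combine a data-misfit term with a
penalty on the continuity-equation residual:

L = (1/N) Σ ‖û − u‖² + λ ‖ ∂ĥ/∂t + ∇·(ĥ û) ‖²

where û and ĥ are the predicted velocity and depth fields, u is the reference solution from the
wave solver, and λ weights the mass-conservation penalty.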
In this research, a rolling CNN-LSTM neural network architecture has been built to
forecast the propagation of simple waves (e.g., bowl waves). During optimization, conservation
and symmetry laws are used in the loss function. The input data are simulation results
(e.g., wave velocities and heights) from a shallow water wave solver, and the output is the wave
velocities and heights several steps into the future. However, the accumulated error grows over
time, and the results show that the data are not sufficient, so the current model cannot properly
forecast even the near future. In future work, we will explore the reason and propose a more
advanced neural network architecture for learning the high-order unknown parameters in turbulent
flows.
A promising idea is to use neural networks to predict the terms that the Boussinesq model is
missing compared to the 3D Reynolds-averaged Navier-Stokes equations. The 3D Reynolds-
averaged Navier-Stokes equations are:
∂ρ/∂t + ∂(ρ u_i)/∂x_i = 0                                                                    (5-1)

∂(ρ u_i)/∂t + ∂(ρ u_i u_j)/∂x_j = −∂p/∂x_i
        + ∂/∂x_j [ μ ( ∂u_i/∂x_j + ∂u_j/∂x_i − (2/3) δ_ij ∂u_k/∂x_k ) − ρ u'_i u'_j ]        (5-2)

where u_i and p are the Reynolds-averaged velocity and pressure, ρ is the density, μ is the
dynamic viscosity, δ_ij is the Kronecker delta, and −ρ u'_i u'_j are the Reynolds stresses.
This 3D Reynolds-averaged Navier-Stokes model can properly simulate fully 3D flows, such as a
landslide. Compared with this model, the Boussinesq equations mainly lack the Reynolds stress
term ∂(−ρ u'_i u'_j)/∂x_j. Also, evaluating the Reynolds stress model costs most of the
simulation time. The plan is to train a 3D-effect corrector add-on for the Boussinesq model. The
training data will come from simulations of the 3D Reynolds-averaged Navier-Stokes equations. The
loss function will be the equation relating the Reynolds stresses to the mean velocities:
−u'_i u'_j = ν_t ( ∂u_i/∂x_j + ∂u_j/∂x_i ) − (2/3) ( k + ν_t ∂u_k/∂x_k ) δ_ij               (5-3)

where ν_t is the turbulent (eddy) viscosity and k is the turbulence kinetic energy. The output of
the neural network will be the Reynolds stress model. When simulating a landslide in AR-tsunami
or AR-sandbox, at every time step the Boussinesq model will add the Reynolds stress model as a
correction:

wave simulation model = Boussinesq model + Reynolds stress model                            (5-4)
5.2 AR-sandbox Future Work: Enabling Multiuser Experience in
AR-sandbox
ARKit version 5 supports a multiuser AR experience, which enables the same AR contents and
their actions to be displayed on different devices. The framework MultipeerConnectivity supports
peer-to-peer connectivity through infrastructure Wi-Fi networks, peer-to-peer Wi-Fi, and
Bluetooth in iOS. The class MultipeerSession provides the features of connecting to peer devices
and sharing AR contents. Besides connecting the AR app on different devices, transporting the
app data to the same app on other devices is another important step to enable everyone to view the
same virtual content. The class ARSession.CollaborationData holds the information that a
user has gathered about the real-world environment. With this framework and these classes,
different users can view the same AR contents from their own perspectives. For instance, users A
and B stand across a table and both run the same AR app with the multiuser experience. When
user A puts a 3D virtual object on the real-world table with its front side facing user A, user B
will immediately see the back side of the 3D virtual object at the same real-world position. To
make this multiuser experience work, the steps for the users are:
1. In the same surrounding, users A and B turn on the Wi-Fi or Bluetooth on their devices.
2. Run the AR app on user A’s device. The content of a typical AR app is to scan the local
environment and place a virtual 3D object on a real-world surface. User A can tap the
screen and put a virtual object on the floor.
3. Run the AR app on user B's device. Due to the peer-to-peer connectivity, a message
will pop up and indicate that A and B have automatically joined a shared session.
4. User B's device shows the same virtual 3D object at the same real-world position on
the floor, but from B's perspective.
5. User B can also add another virtual object, and user A can view it simultaneously too.
A future scenario for the multiuser experience in AR-sandbox is shown in Figure 5-1. There
are three iPad devices A, B, and C and a sandbox. In the beginning, the depth map of the sandbox
scanned by device C is shared and shown on all the devices. If user A taps the screen of device A
and places a virtual breakwater in the domain, all the devices respond identically and show the
breakwater. If user B moves the breakwater on device B, again all the devices respond in the same
way. As Figure 5-1 illustrates, the multiuser experience allows real-time collaboration and makes
teamwork easy.
Figure 5-1: Demonstration of multiuser experience in AR-sandbox.
Introducing the multiuser experience to AR-sandbox is useful, especially when AR
contents like the virtual breakwaters in Section 3.6.1 are involved. The multiuser experience lets
the results of virtual objects and tsunami waves interacting with the real-world sandbox be shown
immediately on participants' devices. Participants can also control AR-sandbox, for example by
changing the direction of the tsunami waves, and others will see what the changes look like from
different angles. This is helpful for collaboration: everyone's modifications are collected and
shown on everyone's device in real time.
5.3 WebVR-tsunami Future Work: Web-based Augmented
Reality (AR) Visualization for Global Tsunamis
Like web-based VR, web-based AR has also been developed. WebAR technology is based on
Three.js, a cross-browser JavaScript library and interface for creating and displaying 3D computer
graphics using WebGL, and AR.js, a JavaScript library for web-based Augmented Reality.
Currently, WebAR can activate and show 3D virtual contents based on markers or locations.
Marker-based AR displays 3D virtual contents on a Hiro, Kanji, or letter marker. Location-based AR
displays AR contents at specified places of interest based on GPS. We have developed a simple demo
visualizing the shockwave of the 2022 Hunga Tonga Tsunami using marker-based WebAR, as shown in
Figure 5-2. First, users open the WebAR app through a web link. A message pops up asking for
permission to use the rear camera, and after permission is granted, users can aim the camera at
the Hiro marker (Figure 5-2(a)). The AR content automatically shows up on the Hiro marker once the
camera catches the full image of the marker (Figure 5-2(b)).
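A minimal marker-based AR.js/A-Frame sketch of this kind of demo is shown below; the script URLs
and the globe entity are illustrative assumptions rather than the exact source of the demo:

    <!-- Illustrative sketch: marker-based WebAR with AR.js and A-Frame.
         Asset references are placeholders; the actual demo uses its own files. -->
    <script src="https://aframe.io/releases/1.3.0/aframe.min.js"></script>
    <script src="https://raw.githack.com/AR-js-org/AR.js/master/aframe/build/aframe-ar.js"></script>
    <a-scene embedded arjs="sourceType: webcam;">
      <!-- The shockwave globe appears only while the Hiro marker is visible to the rear camera -->
      <a-marker preset="hiro">
        <a-sphere src="#shockwave-texture" radius="0.5" position="0 0.5 0"></a-sphere>
      </a-marker>
      <a-entity camera></a-entity>
    </a-scene>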
The main advantage is that users can learn about the shockwave and have an immersive experience
in any browser without installing anything on the device. But for users who do not have the Hiro
marker at hand, the AR content is not accessible. Furthermore, the control of and interaction
with marker-based AR content are very limited. To view the other side of the 3D Earth globe,
users have to rotate the device, since the position of the 3D globe is fixed on the marker.
Compared to the well-developed ARKit with its many features, web-based AR technology is still
immature. But in the future, with more development, it will be useful and much more convenient to
build web-based apps that provide immersive experiences similar to AR-sandbox and AR-tsunami.
Figure 5-2: Screenshots of the visualization of a 3D globe with the 2022 Hunga Tonga Tsunami
shockwave (marked as a red line) based on WebAR. (a) aim the rear camera at the Hiro marker.
(b) the animation of the 2022 Hunga Tonga Tsunami shockwave on a 3D earth is placed on the
Hiro marker.
5.4 Future Work of a Software Bundle for Coastal Engineering
Our eventual objective is to create an immersive visualization software package for coastal
engineering: Immersive Computing for Coastal Engineering (ICCE). The package will connect
AR-tsunami, AR-sandbox, and WebVR-tsunami, together with a physics-aware deep learning model:
(1) AR-tsunami is the coastal wave visualization AR iOS/iPadOS app for coastal education. (2) The
physics-aware deep learning model uses neural networks to improve the accuracy or efficiency of
the wave solver; based on its results, AR-tsunami can simulate strongly nonlinear coastal flows
that current wave solvers cannot easily resolve. (3) AR-sandbox is the interactive sandbox AR
iOS/iPadOS app for coastal engineering. (4) WebVR-tsunami is a convenient web-based tool for 3D
global tsunami visualization and coastal hazard education. Their relations can be described as
follows (shown in Figure 5-3):
(a) The physics-aware deep learning model takes the simulation results of AR-sandbox as the
training dataset and provides the learned high-order terms back to the AR-tsunami and AR-
sandbox apps, so they can perform high-order simulations more efficiently.
(b) AR-sandbox transfers user data such as digital elevation models (DEMs) to the deep
learning model. Through the Amazon Web Services (AWS) file system and federated
learning technology, the transfer can be done synchronously and instantly while the AR-
sandbox and the deep learning model are running.
(c) AR-tsunami shares the same method for building compute shaders with AR-sandbox.
(d) WebVR-tsunami can utilize the shared cloud data on AWS to provide web-based VR experiences.
ICCE shows another advantage of using mobile devices as the hardware platform for high-
performance augmented reality apps: a mobile app is much easier to use than software that runs
only on powerful computers or only under particular physical constraints such as VR headsets,
clusters, or lidar sensors.
Figure 5-3: Immersive Computing for Coastal Engineering (ICCE) software bundle
Chapter 6 Conclusion
In chapter 1, we introduced the motivation and justification of this dissertation. We demonstrated
the objectives of the study are: 1) models of coastal processes, 2) Augmented Reality (AR) mobile
app for Coastal Engineering education, 3) combination of coastal laboratory experiments and
numerical simulations, 4) web-based Virtual Reality (VR) visualization of global tsunamis.
In chapter 2, we introduced AR-tsunami. The AR-tsunami app utilizes Apple’s augmented
reality platform ARKit to simulate the experience of being a coastal resident witnessing tsunami
waves crash onshore and bulldoze through a community, all while standing on the beach. This
virtual tsunami is simulated by a tsunami modeling system following the methods described in the
coastal wave modeling software Celeris. The simulation and visualization are performed on the
GPU through the Apple framework Metal on an iPhone/iPad, while the input of bathymetry data
and boundary data is transferred online through the AWS file system. AR-tsunami supports plane
detection and people occlusion, so it can automatically render the tsunami on the ground and
provide an immersive experience for the users. The goal of this experience is to elicit an emotional
response in users and influence future planning decisions, and ultimately push a more proactive
approach to tsunami preparedness.
In chapter 3, we discussed AR-sandbox. In the AR-sandbox app, users can visualize a
hydrodynamic simulation in real time using a small laboratory sandbox as the bathymetric
conditions. The AR-sandbox app utilizes the LiDAR Scanner on Apple’s new generation of iPad
Pro in order to gather a “point cloud” sampling of arbitrary surfaces and generate a high-resolution
digital elevation model (DEM) as the bathymetric map for the hydrodynamic simulation. By
projecting the digital effect from an iPad Pro onto a sandbox, users can have a real-time, immersive,
and interactive testbed to explore coastal processes and practice coastal engineering. For example,
users can insert protective coastal infrastructure like breakwaters in order to see their
hydrodynamic and morphologic effects in real-time. The AR-sandbox app also supports people
occlusion with simulation and visualization performed on the GPU.
In chapter 4, we propose a framework for the WebVR 3D visualization and web-based
application for global simulations called WebVR-tsunami. WebVR-tsunami uses WebVR for
visualization, AWS for cloud storage, and jQuery for HTML manipulations. A showcase of the
2022 Hunga Tonga Tsunami in web browsers is presented on the website
https://zilizhou.com/lynett. Users can visit this website and have an immersive VR experience of
the Hunga Tonga Tsunami via the browsers on a VR headset, a desktop, or a mobile device. The
result is appealing compared to traditional visualizations of global tsunamis or traditional VR,
and it shows promise for fast scientific communication and education. Supported by most desktop
and mobile browsers on the market as well as by VR headsets, this framework also proves to be
highly portable and accessible.
In Chapter 5, we propose potential future work for the three projects AR-tsunami, AR-
sandbox, and WebVR-tsunami, and also plan to build an immersive software bundle for Coastal
Engineering. The future work will advance the objectives of the study: (1) the plan to improve
accuracy via deep learning for AR-tsunami shows that we intend to build a practical and effective
AR mobile app for Coastal Engineering that correctly approximates coastal processes; (2) the plan
to enable a multiuser experience in AR-sandbox aims at a better combination of hands-on lab work,
digital interaction, and numerical simulation; (3) the plan to achieve WebAR-tsunami enhances
WebVR-tsunami so that users have more options for immersive experiences and for learning
scientific information; (4) the software bundle ICCE will connect the three projects and provide
a testbed for developing new coastal protection solutions.
Bibliography
Abdalazeez, Ahmed A, Ira Didenkulova, and Denys Dutykh. 2019. "Nonlinear deformation and
run-up of single tsunami waves of positive polarity: numerical simulations and analytical
predictions." Natural Hazards and Earth System Sciences 19 (12): 2905-2913.
Adam, David. 2022. "Tonga volcano eruption created puzzling ripples in Earth's atmosphere."
Nature.
Adcroft, Alistair, Jean-Michel Campin, Chris Hill, and John Marshall. 2004. "Implementation of
an atmosphere–ocean general circulation model on the expanded spherical cube."
Monthly Weather Review 132 (12): 2845-2863.
Albert, Alex, Matthew R Hallowell, Brian Kleiner, Ao Chen, and Mani Golparvar-Fard. 2014.
"Enhancing construction hazard recognition with high-fidelity augmented virtuality."
Journal of Construction Engineering and Management 140 (7): 04014024.
Amin, Dhiraj, and Sharvari Govilkar. 2015. "Comparative study of augmented reality SDKs."
International Journal on Computational Science & Applications 5 (1): 11-26.
Anderson, John David, and J Wendt. 1995. Computational fluid dynamics. Vol. 206. Springer.
Andrews, Keith, Thomas Traunmüller, Thomas Wolkinger, Robert Gutounig, and Julian
Ausserhofer. 2015. "Building an open data visualisation web app using a data server: the
Styrian diversity visualisation project." Proceedings of the 15th International Conference
on Knowledge Technologies and Data-driven Business.
Arminjon, P, M-C Viallon, and A Madrane. 1998. "A finite volume extension of the Lax-
Friedrichs and Nessyahu-Tadmor schemes for conservation laws on unstructured grids."
International Journal of Computational Fluid Dynamics 9 (1): 1-22.
Arvanitis, Theodoros N, Argeroula Petrou, James F Knight, Stavros Savas, Sofoklis Sotiriou,
Michael Gargalakos, and Elpida Gialouri. 2009. "Human factors and qualitative
pedagogical evaluation of a mobile augmented reality system for science education used
by learners with physical disabilities." Personal and ubiquitous computing 13 (3): 243-
250.
Ayoub, Ashraf, and Yeshwanth Pulijala. 2019. "The application of virtual reality and augmented
reality in Oral & Maxillofacial Surgery." BMC Oral Health 19 (1): 1-8.
Azuma, Ronald T. 1997. "A survey of augmented reality." Presence: teleoperators & virtual
environments 6 (4): 355-385.
Berger, Marsha J, David L George, Randall J LeVeque, and Kyle T Mandli. 2011. "The
GeoClaw software for depth-averaged flows with adaptive refinement." Advances in
Water Resources 34 (9): 1195-1206.
Beucler, Tom, Michael Pritchard, Stephan Rasp, Jordan Ott, Pierre Baldi, and Pierre Gentine.
2021. "Enforcing analytic constraints in neural networks emulating physical systems."
Physical Review Letters 126 (9): 098302.
Blazek, Jiri. 2015. Computational fluid dynamics: principles and applications. Butterworth-
Heinemann.
Boghosian, Alexandra L, Martin J Pratt, Maya K Becker, S Isabel Cordero, Tejendra Dhakal,
Jonathan Kingslake, Caitlin D Locke, Kirsty J Tinto, and Robin E Bell. 2019. "Inside the
ice shelf: using augmented reality to visualise 3D lidar and radar data of Antarctica." The
Photogrammetric Record 34 (168): 346-364.
Bondevik, Stein, John Inge Svendsen, Geir Johnsen, JAN Mangerud, and Peter Emil Kaland.
1997. "The Storegga tsunami along the Norwegian coast, its age and run up." Boreas 26
(1): 29-53.
Bostrom, Ann, Rebecca E Morss, Jeffrey K Lazo, Julie L Demuth, Heather Lazrus, and Rebecca
Hudson. 2016. "A mental models study of hurricane forecast and warning production,
communication, and decision-making." Weather, Climate, and Society 8 (2): 111-129.
Boussinesq, Joseph. 1897. Théorie de l'écoulement tourbillonnant et tumultueux des liquides
dans les lits rectilignes a grande section. Vol. 1. Gauthier-Villars.
Bower, Matt, Cathie Howe, Nerida McCredie, Austin Robinson, and David Grover. 2014.
"Augmented Reality in education–cases, places and potentials." Educational Media
International 51 (1): 1-15.
Bridson, Robert. 2015. Fluid simulation for computer graphics. AK Peters/CRC Press.
Burdea, Grigore, and Philippe Coiffet. 2003. Virtual reality technology. MIT Press.
Butcher, John Charles. 2016. Numerical methods for ordinary differential equations. John Wiley
& Sons.
Butcher, Peter WS, and Panagiotis D Ritsos. 2017. "Building immersive data visualizations for
the web." 2017 international conference on cyberworlds (CW).
Carrillo, JA, MA Sanchez, A Platonov, and JM Redondo. 2001. "Coastal and interfacial mixing.
Laboratory experiments and satellite observations." Physics and Chemistry of the Earth,
Part B: Hydrology, Oceans and Atmosphere 26 (4): 305-311.
Carvajal, Matías, Ignacio Sepúlveda, Alejandra Gubler, and René Garreaud. 2022. "Worldwide
Signature of the 2022 Tonga Volcanic Tsunami." Geophysical Research Letters:
e2022GL098153.
Castro, Manuel J, Sergio Ortega, and Carlos Parés. 2017. "Well-balanced methods for the
shallow water equations in spherical coordinates." Computers & Fluids 157: 196-207.
Chang, George, Patricia Morreale, and Padmavathi Medicherla. 2010. "Applications of
augmented reality systems in education." Society for Information Technology & Teacher
Education International Conference.
Cho, Yong-Sik. 1995. "Numerical simulations of tsunami propagation and run-up." Cornell
University.
Chung, TJ. 2002. Computational fluid dynamics. Cambridge university press.
Crook, AJL, SM Willson, JG Yu, and DRJ Owen. 2006. "Predictive modelling of structure
evolution in sandbox experiments." Journal of Structural Geology 28 (5): 729-744.
Danchilla, Brian. 2012. "Three. js framework." In Beginning WebGL for HTML5, 173-203.
Springer.
Demir, Ibrahim, Enes Yildirim, Yusuf Sermet, and Muhammed Ali Sit. 2018. "FLOODSS: Iowa
flood information system as a generalized flood cyberinfrastructure." International
journal of river basin management 16 (3): 393-400.
Deser, Clara, Reto Knutti, Susan Solomon, and Adam S Phillips. 2012. "Communication of the
role of natural variability in future North American climate." Nature Climate Change 2
(11): 775-779.
Dolson, Emily, and Charles Ofria. 2018. "Visualizing the tape of life: exploring evolutionary
history with virtual reality." Proceedings of the Genetic and Evolutionary Computation
Conference Companion.
Dupeyrat, L, F Costard, R Randriamazaoro, E Gailhardis, E Gautier, and A Fedorov. 2011.
"Effects of ice content on the thermal erosion of permafrost: implications for coastal and
fluvial erosion." Permafrost and Periglacial Processes 22 (2): 179-187.
Durran, Dale R. 1991. "The third-order Adams-Bashforth method: An attractive alternative to
leapfrog time differencing." Monthly weather review 119 (3): 702-720.
Esteban, Miguel, Tomoyuki Takabatake, Hendra Achiari, Takahito Mikami, Ryota Nakamura,
Mustarakh Gelfi, Satriyo Panalaran, Yuta Nishida, Naoto Inagaki, and Christopher
Chadwick. 2021. "Field Survey of Flank Collapse and Run-up Heights due to 2018 Anak
Krakatau Tsunami." Journal of Coastal and Hydraulic Structures 1: 1-1.
Fanfarillo, Alessandro, Davide Del Vento, and Patrick Nichols. 2018. "Optimizing
Communication and Synchronization in CAF Applications." In Parallel Computing is
Everywhere, 191-200. IOS Press.
Fong, Derek A, and W Rockwell Geyer. 2002. "The alongshore transport of freshwater in a
surface-trapped river plume." Journal of Physical Oceanography 32 (3): 957-972.
Fournier, Alain, and William T Reeves. 1986. "A simple model of ocean waves." Proceedings of
the 13th annual conference on Computer graphics and interactive techniques.
Fréchot, Jocelyn. 2006. "Realistic simulation of ocean surface using wave spectra." Proceedings
of the first international conference on computer graphics theory and applications
(GRAPP 2006).
Fredsoe, Jorgen, and Rolf Deigaard. 1992. Mechanics of coastal sediment transport. Vol. 3.
World scientific publishing company.
Gedik, Nuray, Emel İrtem, and S Kabdasli. 2005. "Laboratory investigation on tsunami run-up."
Ocean Engineering 32 (5-6): 513-528.
Gingold, Robert A, and Joseph J Monaghan. 1977. "Smoothed particle hydrodynamics: theory
and application to non-spherical stars." Monthly notices of the royal astronomical society
181 (3): 375-389.
Gómez-Gesteira, Moncho, Alejandro JC Crespo, Benedict D Rogers, Robert A Dalrymple, José
M Dominguez, and Anxo Barreiro. 2012. "SPHysics–development of a free-surface fluid
solver–Part 2: Efficiency and test cases." Computers & Geosciences 48: 300-307.
Gomez-Gesteira, Moncho, Benedict D Rogers, Alejandro JC Crespo, Robert A Dalrymple,
Muthukumar Narayanaswamy, and José M Dominguez. 2012. "SPHysics–development
of a free-surface fluid solver–Part 1: Theory and formulations." Computers &
Geosciences 48: 289-299.
Goto, C, Y Ogawa, N Shuto, and F Imamura. 1997. "IUGG/IOC time project." IOC Manuals and
Guides (35).
Hadjar, Hayet, Abdelkrim Meziane, Rachid Gherbi, Insaf Setitra, and Noureddine Aouaa. 2018.
"WebVR based interactive visualization of open health data." Proceedings of the 2nd
International Conference on Web Studies.
Haidvogel, Dale B, Hernan Arango, W Paul Budgell, Bruce D Cornuelle, Enrique Curchitser,
Emanuele Di Lorenzo, Katja Fennel, W Rockwell Geyer, Albert J Hermann, and Lyon
Lanerolle. 2008. "Ocean forecasting in terrain-following coordinates: Formulation and
skill assessment of the Regional Ocean Modeling System." Journal of computational
physics 227 (7): 3595-3624.
Hinsinger, Damien, Fabrice Neyret, and Marie-Paule Cani. 2002. "Interactive animation of ocean
waves." Proceedings of the 2002 ACM SIGGRAPH/Eurographics symposium on
Computer animation.
Horner-Devine, Alexander R, Derek A Fong, Stephen G Monismith, and Tony Maxworthy.
2006. "Laboratory experiments simulating a coastal river inflow." Journal of Fluid
Mechanics 555: 203-232.
Intrieri, Janet M, Graeme L Stephens, Wynn L Eberhard, and Taneil Uttal. 1993. "A method for
determining cirrus cloud particle sizes using lidar and radar backscatter technique."
Journal of Applied Meteorology and Climatology 32 (6): 1074-1082.
Irish, Jennifer L, Robert Weiss, Yongqian Yang, Youn Kyung Song, Amir Zainali, and Roberto
Marivela-Colmenarejo. 2014. "Laboratory experiments of tsunami run-up and withdrawal
in patchy coastal forest on a steep beach." Natural hazards 74 (3): 1933-1949.
Irtem, Emel, Nuray Gedik, M Sedat Kabdasli, and Nilay E Yasa. 2009. "Coastal forest effects on
tsunami run-up heights." Ocean Engineering 36 (3-4): 313-320.
Ishii, Hiroshi, Carlo Ratti, Ben Piper, Yao Wang, Assaf Biderman, and Eran Ben-Joseph. 2004.
"Bringing clay and sand into digital design—continuous tangible user interfaces." BT
technology journal 22 (4): 287-299.
Jensen, Atle, Geir K Pedersen, and Deborah J Wood. 2003. "An experimental study of wave run-
up at a steep beach." Journal of Fluid Mechanics 486: 161-188.
Jeschke, Stefan, Christian Hafner, Nuttapong Chentanez, Miles Macklin, Matthias Müller-Fischer,
and Christopher Wojtan. 2020. "Making Procedural Water Waves Boundary-aware." Computer
Graphics Forum.
Kaufmann, Hannes, and Dieter Schmalstieg. 2002. "Mathematics and geometry education with
collaborative augmented reality." ACM SIGGRAPH 2002 conference abstracts and
applications.
Kirby, James T, Ge Wei, Qin Chen, Andrew B Kennedy, and Robert A Dalrymple. 1998.
"FUNWAVE 1.0: fully nonlinear Boussinesq wave model-Documentation and user's
manual." research report NO. CACR-98-06.
Kreylos, Oliver, Gerald W Bawden, and Louise H Kellogg. 2008. "Immersive visualization and
analysis of LiDAR data." International Symposium on Visual Computing.
Kurganov, Alexander, and Guergana Petrova. 2007. "A second-order well-balanced positivity
preserving central-upwind scheme for the Saint-Venant system." Communications in
Mathematical Sciences 5 (1): 133-160.
Kurganov, Alexander, Martina Prugger, and Tong Wu. 2017. "Second-order fully discrete
central-upwind scheme for two-dimensional hyperbolic systems of conservation laws."
SIAM Journal on Scientific Computing 39 (3): A947-A965.
Lauga, Eric, Michael P Brenner, and Howard A Stone. 2005. "Microfluidics: the no-slip
boundary condition." arXiv preprint cond-mat/0501557.
Lazo, Jeffrey K, Ann Bostrom, Rebecca Morss, Julie Demuth, and Heather Lazrus. 2014.
"Communicating hurricane warnings: Factors affecting protective behavior." Conference
on Risk, Perceptions, and Response, Harvard University, Massachusetts, USA.
Le, Quang Tuan, AKEEM Pedro, Chung Rok Lim, Hee Taek Park, Chan Sik Park, and Hong Ki
Kim. 2015. "A framework for using mobile based virtual reality and augmented reality
for experiential construction safety education." International Journal of Engineering
Education 31 (3): 713-725.
Lee, Kangdon. 2012. "Augmented reality in education and training." TechTrends 56 (2): 13-21.
Levin, Boris W, and Mikhail Nosov. 2009. Physics of tsunamis. Vol. 327. Springer.
Li, Xiao, Wen Yi, Hung-Lin Chi, Xiangyu Wang, and Albert PC Chan. 2018. "A critical review
of virtual and augmented reality (VR/AR) applications in construction safety."
Automation in Construction 86: 150-162.
Lindsey, Rebecca. 2018. "Climate change: global sea level." Climate. gov.
Lindsey, Rebecca, and LuAnn Dahlman. 2020. "Climate change: Global temperature." Climate.
gov 16.
Liu, MB, and GR Liu. 2010. "Smoothed particle hydrodynamics (SPH): an overview and recent
developments." Archives of computational methods in engineering 17 (1): 25-76.
Liu, Philip L-F, Seung-Buhm Woo, and Yong-Sik Cho. 1998. "Computer programs for tsunami
propagation and inundation." Cornell University 25.
Liu, Shuyun, T-C Jim Yeh, and Ryan Gardiner. 2002. "Effectiveness of hydraulic tomography:
Sandbox experiments." Water Resources Research 38 (4): 5-1-5-9.
Liu, X, WA Illman, AJ Craig, J Zhu, and T-CJ Yeh. 2007. "Laboratory sandbox validation of
transient hydraulic tomography." Water Resources Research 43 (5).
Lynett, P, PLF Liu, KI Sitanggang, and DH Kim. 2002. "Modeling wave generation, evolution,
and interaction with depth-integrated, dispersive wave equations: COULWAVE code
manual." Cornell University Long and Intermediate Wave Modeling Package.
Lynett, Patrick. 2022. "The Tsunamis Generated by the Hunga Tonga-Hunga Ha'apai Volcano on
January 15, 2022."
Lynett, Patrick, and Philip L-F Liu. 2005. "A numerical study of the run-up generated by three-
dimensional landslides." Journal of Geophysical Research: Oceans 110 (C3).
Madsen, Per A, and Ole R Sørensen. 1992. "A new form of the Boussinesq equations with
improved linear dispersion characteristics. Part 2. A slowly-varying bathymetry." Coastal
engineering 18 (3-4): 183-204.
Mansinha, LA, and DE Smylie. 1971. "The displacement fields of inclined faults." Bulletin of the
Seismological Society of America 61 (5): 1433-1440.
Marchuk, Andrei G, and Alexandr A Anisimov. 2001. "A method for numerical modeling of
tsunami run-up on the coast of an arbitrary profile." Proceedings of International Tsunami
Symposium (ITS).
Martin, Calin Iulian, and Ronald Quirchmayr. 2019. "Explicit and exact solutions concerning the
Antarctic Circumpolar Current with variable density in spherical coordinates." Journal of
Mathematical Physics 60 (10): 101505.
Mazzanti, Paolo, and Fabio Vittorio De Blasio. 2011. "The dynamics of coastal landslides:
insights from laboratory experiments and theoretical analyses." Bulletin of Engineering
Geology and the Environment 70 (3): 411-422.
McAdoo, Brian G, N Richardson, and J Borrero. 2007. "Inundation distances and run-up
measurements from ASTER, QuickBird and SRTM data, Aceh coast, Indonesia."
International Journal of Remote Sensing 28 (13-14): 2961-2975.
Moloney, Kevin, and Marijke Unger. 2014. "Transmedia storytelling in science communication:
one subject, multiple media, unlimited stories." In New Trends in Earth-Science
Outreach and Engagement, 109-120. Springer.
Monaghan, Joe J. 1992. "Smoothed particle hydrodynamics." Annual review of astronomy and
astrophysics 30 (1): 543-574.
Monaghan, Joe J. 2005. "Smoothed particle hydrodynamics." Reports on progress in physics 68
(8): 1703.
Morrow, Betty H, Jeffrey K Lazo, Jamie Rhome, and Jesse Feyen. 2015. "Improving storm surge
risk communication: Stakeholder perspectives." Bulletin of the American Meteorological
Society 96 (1): 35-48.
Morrow, BH, and JK Lazo. 2015. "Effective tropical cyclone forecast and warning
communication: Recent social science contributions." Tropical Cyclone Research and
Review 4 (1): 38-48.
Morss, Rebecca E, Julie L Demuth, Heather Lazrus, Leysia Palen, C Michael Barton,
Christopher A Davis, Chris Snyder, Olga V Wilhelmi, Kenneth M Anderson, and David
A Ahijevych. 2017. "Hazardous weather prediction and communication in the modern
information environment." Bulletin of the American Meteorological Society 98 (12):
2653-2674.
Moulton, Forest Ray. 1926. New methods in exterior ballistics. University of Chicago Press.
Nakamura, Tomoaki, Norimi Mizutani, and Koji Fujima. 2010. "Three-dimensional numerical
analysis on deformation of run-up tsunami and tsunami force acting on square
structures." Proc. 32nd Int. Conf. on Coastal Eng.
Neelakantam, Srushtika, and Tanay Pant. 2017. "Introduction to a-frame." In Learning Web-
based Virtual Reality, 17-38. Springer.
Nincarean, Danakorn, Mohamad Bilal Alia, Noor Dayana Abdul Halim, and Mohd Hishamuddin
Abdul Rahman. 2013. "Mobile Augmented Reality: the potential for education."
Procedia-social and behavioral sciences 103: 657-664.
Oishi, Yusuke, Fumihiko Imamura, and Daisuke Sugawara. 2015. "Near-field tsunami
inundation forecast using the parallel TUNAMI-N2 model: Application to the 2011
Tohoku-Oki earthquake combined with source inversions." Geophysical Research
Letters 42 (4): 1083-1091.
Okada, Yoshimitsu. 1985. "Surface deformation due to shear and tensile faults in a half-space."
Bulletin of the seismological society of America 75 (4): 1135-1154.
Okal, Emile A, George Plafker, Costas E Synolakis, and José C Borrero. 2003. "Near-field
survey of the 1946 Aleutian tsunami on Unimak and Sanak Islands." Bulletin of the
Seismological Society of America 93 (3): 1226-1234.
Onoue, Koichi, and Tomoyuki Nishita. 2003. "Virtual sandbox." 11th Pacific Conference
onComputer Graphics and Applications, 2003. Proceedings.
Park, Hyoungsu, Tori Tomiczek, Daniel T Cox, John W van de Lindt, and Pedro Lomonaco.
2017. "Experimental modeling of horizontal and vertical wave forces on an elevated
coastal structure." Coastal Engineering 128: 58-74.
Pedro, Akeem, Quang Tuan Le, and Chan Sik Park. 2016. "Framework for integrating safety into
construction methods education through interactive virtual reality." Journal of
professional issues in engineering education and practice 142 (2): 04015011.
Pereira, R Eiris, HF Moore, M Gheisari, and B Esmaeili. 2019. "Development and usability
testing of a panoramic augmented reality environment for fall hazard safety training." In
Advances in informatics and computing in civil and construction engineering, 271-279.
Springer.
Pereira, Ricardo Eiris, Masoud Gheisari, and Behzad Esmaeili. 2018. "Using panoramic
augmented reality to develop a virtual safety training environment." Construction
Research Congress 2018.
Psihoyios, G, and TE Simos. 2005. "A fourth algebraic order trigonometrically fitted predictor–
corrector scheme for IVPs with oscillating solutions." Journal of Computational and
Applied Mathematics 175 (1): 137-147.
Raissi, Maziar, Alireza Yazdani, and George Em Karniadakis. 2020. "Hidden fluid mechanics:
Learning velocity and pressure fields from flow visualizations." Science 367 (6481):
1026-1030.
Reed, S, S Hsi, O Kreylos, M Yikilmaz, LH Kellogg, SG Schladow, H Segale, and L Chan.
2016. "Augmented reality turns a sandbox into a geoscience lesson." Eos 97: 18-22.
Reed, Sarah E, Oliver Kreylos, Sherry Hsi, Louise H Kellogg, Geoffrey Schladow, M Burak
Yikilmaz, Heather Segale, Julie Silverman, Steve Yalowitz, and Elissa Sato. 2014.
"Shaping watersheds exhibit: An interactive, augmented reality sandbox for advancing
earth science education." AGU Fall Meeting Abstracts.
Reutebuch, Stephen E, Hans-Erik Andersen, and Robert J McGaughey. 2005. "Light detection
and ranging (LIDAR): an emerging tool for multiple resource inventory." Journal of
forestry 103 (6): 286-292.
Richardson, S. 1973. "On the no-slip boundary condition." Journal of Fluid Mechanics 59 (4):
707-719.
Rouse, Hunter. 1946. "Elementary fluid mechanics." New York.
Saito, Tatsuhiko, and Takashi Furumura. 2009. "Three-dimensional tsunami generation
simulation due to sea-bottom deformation and its interpretation based on the linear
theory." Geophysical Journal International 178 (2): 877-888.
Sánchez-Arcilla, Agustín, Iván Cáceres, Leo van Rijn, and Joachim Grüne. 2011. "Revisiting
mobile bed tests for beach profile dynamics." Coastal Engineering 58 (7): 583-593.
Sansón, L Zavala, and GJF Van Heijst. 2000. "Interaction of barotropic vortices with coastal
topography: Laboratory experiments and numerical simulations." Journal of physical
oceanography 30 (9): 2141-2162.
Satake, Kenji. 1988. "Effects of bathymetry on tsunami propagation: Application of ray tracing
to tsunamis." Pure and Applied Geophysics 126 (1): 27-36.
Saville Jr, Thorndike. 1956. "Wave run-up on shore structures." Journal of the Waterways and
Harbors Division 82 (2): 925-1-925-14.
Saville, Thorndike. 1955. Laboratory data on wave run-up and overtopping on shore structures.
Vol. 64: US Beach Erosion Board.
Schneider, Tapio, Shiwei Lan, Andrew Stuart, and Joao Teixeira. 2017. "Earth system modeling
2.0: A blueprint for models that learn from observations and targeted high-resolution
simulations." Geophysical Research Letters 44 (24): 12,396-12,417.
Schumacher, Russ S, Daniel T Lindsey, Andrea B Schumacher, Jeff Braun, Steven D Miller, and
Julie L Demuth. 2010. "Multidisciplinary analysis of an unusual tornado: Meteorology,
climatology, and the communication and interpretation of warnings." Weather and
forecasting 25 (5): 1412-1429.
Sevre, Erik OD, Dave A Yuen, and Yingchun Liu. 2008. "Visualization of tsunami waves with
amira package." Visual Geosciences 13 (1): 85-96.
Shchepetkin, Alexander F, and James C McWilliams. 2005. "The regional oceanic modeling
system (ROMS): a split-explicit, free-surface, topography-following-coordinate oceanic
model." Ocean modelling 9 (4): 347-404.
Shuto, Nobuo, F Imamura, AC Yalciner, and G Ozyurt. 2006. "TUNAMI N2: Tsunami modeling
manual." Iwate Prefectural University.
Song, Jae Yeol, Atieh Alipour, Hamed R Moftakhari, and Hamid Moradkhani. 2020. "Toward a
more effective hurricane hazard communication." Environmental Research Letters 15
(6): 064012.
Sun, Wenxin, Mengjie Huang, Rui Yang, Jingjing Zhang, Liu Wang, Ji Han, and Yong Yue.
2020. "Workload, Presence and Task Performance of Virtual Object Manipulation on
WebVR." 2020 IEEE International Conference on Artificial Intelligence and Virtual
Reality (AIVR).
Tavakkol, Sasan. 2018. "Interactive and Immersive Coastal Hydrodynamics." University of
Southern California.
Tavakkol, Sasan, and Patrick Lynett. 2017. "Celeris: A GPU-accelerated open source software
with a Boussinesq-type wave solver for real-time interactive simulation and
visualization." Computer Physics Communications 217: 117-127.
Tavakkol, Sasan, and Patrick Lynett. 2020. "Celeris Base: An interactive and immersive
Boussinesq-type nearshore wave simulation software." Computer Physics
Communications 248: 106966.
Tessendorf, Jerry. 2001. "Simulating ocean water." Simulating nature: realistic and interactive
techniques. SIGGRAPH 1 (2): 5.
Thon, Sebastien, J-M Dischler, and Djamchid Ghazanfarpour. 2000. "Ocean waves synthesis
using a spectrum-based turbulence function." Proceedings Computer Graphics
International 2000.
Thon, Sébastien, and Djamchid Ghazanfarpour. 2002. "Ocean waves synthesis and animation
using real world information." Computers & Graphics 26 (1): 99-108.
Titov, Vasily, MF Cronin, R Dziak, Y Wei, D Arcas, and C Moore. 2022. Understanding a
unique tsunami event caused by the Tonga volcano eruption.
Titov, Vasily, Utku Kânoğlu, and Costas Synolakis. 2016. "Development of MOST for real-time
tsunami forecasting." Journal of Waterway, Port, Coastal, and Ocean Engineering 142
(6): 03116004.
Titov, Vasily V, and Frank I Gonzalez. 1997. "Implementation and testing of the method of
splitting tsunami (MOST) model."
Tixier, Antoine Jean-Pierre, and Alex Albert. 2013. "Teaching construction hazard recognition
through high fidelity augmented reality." 2013 ASEE Annual Conference & Exposition.
Ts'o, Pauline Y, and Brian A Barsky. 1987. "Modeling and rendering waves: wave-tracing using
beta-splines and reflective and refractive texture mapping." ACM Transactions on
Graphics (TOG) 6 (3): 191-214.
Tu, Jiyuan, Guan Heng Yeoh, and Chaoqun Liu. 2018. Computational fluid dynamics: a
practical approach. Butterworth-Heinemann.
Türker, Umut, Oral Yagci, and M Sedat Kabdasli. 2019. "Impact of nearshore vegetation on
coastal dune erosion: assessment through laboratory experiments." Environmental Earth
Sciences 78 (19): 1-14.
Voulgaris, G, and Y Murayama. 2014. "Tsunami vulnerability assessment in the southern Boso
Peninsula, Japan." International Journal of Disaster Risk Reduction 10: 190-200.
Weiss, Robert, Kai Wünnemann, and Heinrich Bahlburg. 2006. "Numerical modelling of
generation, propagation and run-up of tsunamis caused by oceanic impacts: model
strategy and technical solutions." Geophysical Journal International 167 (1): 77-88.
Wilcox, David C. 1998. Turbulence modeling for CFD. Vol. 2. DCW Industries, La Canada, CA.
Wolfe, Christopher L, and Claudia Cenedese. 2006. "Laboratory experiments on eddy generation
by a buoyant coastal current flowing over variable bathymetry." Journal of Physical
Oceanography 36 (3): 395-411.
Wu, Hsin-Kai, Silvia Wen-Yu Lee, Hsin-Yi Chang, and Jyh-Chong Liang. 2013. "Current status,
opportunities and challenges of augmented reality in education." Computers & Education
62: 41-49.
Xu, Gaowei, Yisha Lan, Wen Zhou, Chenxi Huang, Weibin Li, Wei Zhang, Guokai Zhang, EYK
Ng, Yongqiang Cheng, and Yonghong Peng. 2019. "An IoT-based framework of WebVR
visualization for medical big data in connected health." IEEE Access 7: 173866-173874.
Yan, Fengting, Yonghao Hu, Jinyuan Jia, Zihao Ai, Kai Tang, Zhicai Shi, and Xiang Liu. 2020.
"Interactive WebVR visualization for online fire evacuation training." Multimedia Tools
and Applications 79 (41): 31541-31565.
Yuen, David A, Melissa A Scruggs, Frank J Spera, Yingcai Zheng, Hao Hu, Stephen R McNutt,
Glenn Thompson, Kyle Mandli, Barry R Keller, and Songqiao Shawn Wei. 2022. "Under
the surface: Pressure-induced planetary-scale waves, volcanic lightning, and gaseous
clouds caused by the submarine eruption of Hunga Tonga-Hunga Ha'apai volcano."
Earthquake Research Advances: 100134.
Zanna, Laure, and Thomas Bolton. 2020. "Data-driven equation discovery of ocean mesoscale
closures." Geophysical Research Letters 47 (17): e2020GL088376.
Zanna, Laure, and Thomas Bolton. 2021. "Deep Learning of Unresolved Turbulent Ocean
Processes in Climate Models." Deep learning for the Earth Sciences: With Applications
and R, Second Edition: 298-306.
Zhao, Shilin, Sheng Jin, Congfang Ai, and Nan Zhang. 2019. "Visual analysis of three-dimensional
flow field based on WebVR." Journal of Hydroinformatics 21 (5): 671-686.
Zhou, Zili, and Patrick Lynett. 2020. "Interactive Augmented Reality Simulation System for
Coastal Hazard Education." Coastal Engineering Proceedings (36v).
Zhu, Bolin, Mi Feng, Hannah Lowe, Jeffrey Kesselman, Lane Harrison, and Robert E Dempski.
2018. "Increasing enthusiasm and enhancing learning for biochemistry-laboratory safety
with an augmented-reality program." Journal of Chemical Education 95 (10): 1747-1754.
Abstract
Augmented reality (AR) is a technology that integrates 3D virtual objects into the physical world in real time, while virtual reality (VR) immerses users in an interactive 3D virtual environment. The rapid development of AR and VR technologies has reshaped how people interact with the physical world. This dissertation outlines the deliverables from two AR projects and one web-based VR project in coastal engineering, and motivates the next stage in the development of the augmented reality package for coastal students, engineers, and planners. Three projects demonstrate the completed aspects of this effort: 1) Project AR-tsunami, which promulgates education about coastal hazards; 2) Project AR-sandbox, which combines laboratory experiments and numerical simulations in coastal engineering; and 3) Project WebVR-tsunami, which provides a convenient tool for 3D tsunami visualization and education. Project AR-tsunami and Project AR-sandbox produce two user-friendly, GPU-accelerated iOS/iPadOS apps: AR-tsunami and AR-sandbox.
Combining ARKit's plane detection and people occlusion features with the Boussinesq-type wave solver in Celeris, AR-tsunami automatically renders a tsunami on the ground and gives users an immersive experience of a tsunami's impact. The goal of this experience is to elicit an emotional response in users, influence future planning decisions, and ultimately promote a more proactive approach to tsunami preparedness. AR-sandbox utilizes the LiDAR Scanner on Apple's new generation of iPad Pro to gather a point-cloud sampling of arbitrary surfaces and generate a high-resolution digital elevation model (DEM) that serves as the bathymetric map for the hydrodynamic simulation. The wave simulation and visualization start instantly after the DEM is transferred to the wave solver, and the resulting simulation is projected onto the sandbox through a projector. With AR-sandbox, coastal engineers can view virtual waves interacting with real-world sand; by combining laboratory experiments and numerical simulations, the tool offers improved practicality and maneuverability. Project WebVR-tsunami produces an online, web-based VR tool that uses the COULWAVE numerical simulation of the 2022 Hunga Tonga tsunami as a showcase (https://www.zilizhou.com/lynett). These are the first apps of their kind to bring an interactive, immersive, and convenient experience to coastal hazard stakeholders and coastal engineers.
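To make the ARKit setup described above concrete, the minimal sketch below shows how a world-tracking session for an AR-tsunami-style app could enable horizontal plane detection and people occlusion, then anchor the simulation to the first detected ground plane. The ARKit calls shown are standard API; the commented-out solver hand-off is a hypothetical placeholder for the Celeris-based wave solver and is not the actual AR-tsunami implementation.

```swift
import ARKit
import RealityKit
import UIKit

/// Minimal AR session setup sketch for an AR-tsunami-style app.
/// Assumes iOS/iPadOS with ARKit 3+; the commented-out solver call stands in
/// for the GPU-accelerated Boussinesq-type wave solver, which is not shown here.
final class TsunamiARViewController: UIViewController, ARSessionDelegate {

    private let arView = ARView(frame: .zero)

    override func viewDidLoad() {
        super.viewDidLoad()
        arView.frame = view.bounds
        view.addSubview(arView)
        arView.session.delegate = self

        let config = ARWorldTrackingConfiguration()
        // Detect horizontal planes so the virtual tsunami can be anchored to the ground.
        config.planeDetection = [.horizontal]
        // Enable people occlusion (segmentation with depth) on supported devices,
        // so people standing in the scene appear in front of the virtual water.
        if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
            config.frameSemantics.insert(.personSegmentationWithDepth)
        }
        arView.session.run(config)
    }

    // Anchor the wave simulation on the first horizontal plane ARKit reports.
    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for case let plane as ARPlaneAnchor in anchors where plane.alignment == .horizontal {
            // solver.attach(to: plane, in: arView)  // hypothetical hand-off to the wave solver
            print("Ground plane detected at \(plane.center)")
        }
    }
}
```

On LiDAR-equipped devices the same configuration can additionally request scene depth, which is the kind of per-frame depth data AR-sandbox samples to build its elevation model.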
The penultimate goal is to develop the software bundle ICCE (Immersive Computing for Coastal Engineering). In ICCE, 1) AR-sandbox gathers depth data from user input in the small laboratory sandbox, 2) the gathered data are stored in a cloud-based AWS file system, 3) AR-tsunami uses the depth data collected by AR-sandbox for tsunami simulations, and 4) WebVR-tsunami renders the simulation output, providing an online entry point to an immersive experience of coastal hazards. The goal of this software package is to provide a user-accessible, interactive, and immersive AR and VR experience that can educate and inform stakeholders on coastal processes and hazards. Additionally, this software suite will provide a testbed for developing new coastal protection and disaster-preparedness solutions that can be utilized by coastal engineers, scientists, and planners.
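As an illustration of the ICCE hand-off between the apps, the sketch below serializes a gridded DEM (of the kind AR-sandbox could derive from LiDAR depth data) and pushes it to cloud storage with a plain HTTP PUT, for example to a presigned URL. The `DEMGrid` type, the ASCII-grid-style format, and the placeholder URL are assumptions made for illustration, not the actual ICCE file interface or AWS configuration.

```swift
import Foundation

/// Minimal sketch of the ICCE data hand-off: a gridded DEM captured by
/// AR-sandbox is serialized and uploaded to cloud storage, from which
/// AR-tsunami and WebVR-tsunami could read it.
struct DEMGrid {
    let nx: Int               // number of columns
    let ny: Int               // number of rows
    let dx: Double            // grid spacing in meters
    let elevations: [Double]  // row-major elevations, ny * nx values

    /// Serialize as a small plain-text header followed by elevation rows.
    func serialized() -> Data {
        var text = "ncols \(nx)\nnrows \(ny)\ncellsize \(dx)\n"
        for row in 0..<ny {
            let slice = elevations[(row * nx)..<((row + 1) * nx)]
            text += slice.map { String(format: "%.4f", $0) }.joined(separator: " ") + "\n"
        }
        return Data(text.utf8)
    }
}

/// Upload the DEM with an HTTP PUT (e.g. to a presigned S3 URL).
/// Authentication and error recovery are out of scope for this sketch.
func uploadDEM(_ dem: DEMGrid, to url: URL) async throws {
    var request = URLRequest(url: url)
    request.httpMethod = "PUT"
    request.setValue("text/plain", forHTTPHeaderField: "Content-Type")
    let (_, response) = try await URLSession.shared.upload(for: request, from: dem.serialized())
    guard let http = response as? HTTPURLResponse, (200..<300).contains(http.statusCode) else {
        throw URLError(.badServerResponse)
    }
}
```

A companion download step on the AR-tsunami or WebVR-tsunami side would reverse the process, fetching the same file and parsing the header before initializing the simulation grid.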