Modeling and Simulation Testbed
for Unmanned Systems
By
Edwin Ordoukhanian
A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfilment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(ASTRONAUTICAL ENGINEERING)
May 2022
Copyright 2022 Edwin Ordoukhanian
This Dissertation is dedicated to the memory of my Father.
Acknowledgment
Over two decades ago, I was a kid with big dreams who went to school almost halfway around the world. Today, it is hard to believe that the same kid is now a grown man getting his doctorate from a prestigious university in the United States. It has indeed been a journey! As I write this section, I can in no way do justice in thanking all the people who helped me along the way; however, I'll do my best!
I have had the privilege of having caring parents. My parents, Vahik and Ida, did everything
they could to allow me to get a good education. They stood by my side and supported me in every
aspect of life. My father passed away two years ago, but I am sure he is watching me from heaven
and is proud of what I have accomplished. My lovely sister Evlin and my parents stood by me
through thick and thin. Their unconditional love, support and encouragement kept me going. I am
grateful to them.
I am thankful to my doctoral advisor, Prof. Azad Madni. He is truly one of a kind. Working
with him has been one of the best decisions of my life. He mentored me to grow professionally
and personally, and he advised me on my academic research. It was an honor to be his student! I
was fortunate to have two of the best at USC on my dissertation committee, Professors James
Moore, and Daniel Erwin. Their advice helped me shape this research and complete my
dissertation. I am grateful to them.
In this journey, I was lucky to meet an amazing girl; little did I know I would fall madly in love with Almara. She came into my life and showed me what true love means. She helped me grow even more, professionally and personally. She encouraged me when I was overwhelmed and under pressure. When my father passed away, Almara and her family were a great moral support for me and my family. Ever since I met Almara's family, they have treated me like their own son. I am grateful to have them and Almara in my life.
I am humbled by the love I received from my friends over the years. I would like to thank
Marilee Wheaton, Rosalind Lewis, Parisa Pouya, and Shatad Purohit. I would like to thank Dr.
Michael Sievers for our productive discussions. I would like to extend my gratitude to Dell Cuason,
Linda Ly, and Marlyn Lat for their administrative support throughout my doctoral studies at USC.
Finally, I would like to acknowledge the support of Mr. Ken Cureton, Dr. Jairus Hihn, and Dr.
Robert Minnichelli during my doctoral studies. My sincere thanks to all of them.
Table of Contents
Acknowledgment _____________________________________________________________ iii
List of Tables _______________________________________________________________ vii
List of Figures ________________________________________________________________ ix
List of Equations ______________________________________________________________ xi
Abbreviations _______________________________________________________________ xii
Abstract ____________________________________________________________________ xiii
Chapter 1 Introduction _________________________________________________________ 1
1.1 Reader Orientation ___________________________________________________ 3
1.2 Organization of the Dissertation _________________________________________ 5
Chapter 2 Motivation, Research Problem and Objectives ______________________________ 7
2.1 Need for Experimentation Testbeds ______________________________________ 7
2.2 Research Problem Formulation__________________________________________ 9
2.3 Research Objectives _________________________________________________ 10
2.4 Selected Application Domain __________________________________________ 11
Chapter 3 Literature Review ____________________________________________________ 14
3.1 Multi-UAV Systems _________________________________________________ 14
3.1.1 Multi-UAV Systems as SoS____________________________________ 16
3.2 Understanding Adaptation in Multi-UAV Systems _________________________ 20
3.2.1 Typology of Disruptions ______________________________________ 20
3.2.2 Applicable Adaptation Techniques ______________________________ 21
3.3 Multi-UAV Systems Research Areas ____________________________________ 23
3.4 Overview of Simulation Techniques ____________________________________ 24
3.5 Current Simulation Testbeds for Multi-UAV Systems _______________________ 28
3.6 Gaps in Current Approaches ___________________________________________ 32
Chapter 4 Testbed Approach ___________________________________________________ 36
4.1 Methodology Overview and Key Elements _______________________________ 36
4.2 Testbed Definition and Requirements ___________________________________ 37
4.3 Use Case vs Scenario ________________________________________________ 38
4.4 Ontology-Based Approaches __________________________________________ 41
4.5 Ontology Development Tasks__________________________________________ 42
4.6 Developing Ontology for UAV Domain __________________________________ 43
4.6.1 Class Hierarchy _____________________________________________ 43
4.6.2 Classes, Relationships, and Overall Ontology ______________________ 49
4.6.3 Ontology Reasoning__________________________________________ 52
4.7 Testbed Architecture _________________________________________________ 53
4.8 Technologies Used in Testbed Development ______________________________ 54
4.8.1 Protégé ____________________________________________________ 55
4.8.2 Anylogic® _________________________________________________ 55
4.8.3 Java™ Programming Language _________________________________ 55
4.9 Testbed Implementation ______________________________________________ 56
4.10 Ontology Import ___________________________________________________ 57
4.11 Use Cases Import __________________________________________________ 58
4.12 Actor Database and Actor Library _____________________________________ 59
Chapter 5 Experimentation Results ______________________________________________ 62
5.1 Use Case I: Single UAV Operation _____________________________________ 62
5.2 Use Case II: Multi-UAV Operation _____________________________________ 68
5.3 Use Case III: Adaptive Multi-UAV Operation _____________________________ 75
5.4 Use Case IV: Autonomous Vehicle Operation _____________________________ 80
5.5 Use Case V: Multiple AV Operation ____________________________________ 85
5.6 Use Case VI: Air and Ground Vehicle Operation __________________________ 89
Chapter 6 Summary and Implication of Research ___________________________________ 94
References __________________________________________________________________ 97
Appendix A Quadcopter Mathematical Modeling __________________________________ 107
Appendix B Autonomous Vehicle Mathematical Modeling __________________________ 112
Appendix C System-of-Systems Classification ____________________________________ 115
Appendix D Use Case Template ________________________________________________ 117
List of Tables
Table 1: Multi-UAV System Adaptation Levels .............................................................. 22
Table 2: Summary of Simulation Techniques .................................................................. 27
Table 3: Summary of Multi-UAV Simulation Techniques............................................... 29
Table 4: Gap Analysis Summary ...................................................................................... 34
Table 5: Non-Functional Requirements for Testbed ........................................................ 38
Table 6: Important Terms in Ontology ............................................................................. 43
Table 7: Classes and Class Hierarchy ............................................................................... 45
Table 8: Ontology Relationships ...................................................................................... 50
Table 9: Hermit Reasoner Report ..................................................................................... 52
Table 10: Use Case I Description ..................................................................................... 63
Table 11: Dashboard Elements ......................................................................................... 65
Table 12: Use Case I Actor Database ............................................................................... 66
Table 13: Dashboard Elements During Runtime .............................................................. 67
Table 14: Operational Environment Log File Comparison .............................................. 67
Table 15: Simulation Environment Log File Comparison ................................................ 68
Table 16: Use Case II Description .................................................................................... 69
Table 17: Use Case II Actor Database .............................................................................. 72
Table 18: Operational Environment Log File Comparison ............................................. 74
Table 19: Simulation Environment Log File Comparison ............................................... 74
Table 20: Use case III Description ................................................................................... 75
Table 21: Use Case III Actor Database............................................................................. 79
Table 22: Use Case IV Description .................................................................................. 80
Table 23: Operational Environment Log file Comparison ............................................... 83
Table 24: Simulation Environment Log File Comparison ................................................ 83
Table 25: Use Case Log File Comparison ........................................................................ 84
Table 26: Use Case IV Actor Database ............................................................................ 84
Table 27: Use Case V Description .................................................................................... 85
Table 28: Use Case V Actor Database .............................................................................. 88
Table 29: Use Case VI Description .................................................................................. 89
Table 30: Operational Environment Log file Comparison ............................................... 92
Table 31: Simulation Environment Log File Comparison ................................................ 92
Table 32: Use Case Log File Comparison ........................................................................ 92
Table 33: Use Case VI Actor Database ............................................................. 93
Table 34: System-of-Systems Categories .......................................................... 116
List of Figures
Figure 1: Adaptive Multi-UAV System Conceptual Framework ..................................... 23
Figure 2: Ontology Development Tasks ........................................................................... 42
Figure 3: Class Hierarchy ................................................................................................. 44
Figure 4: Testbed Ontology for UAV Domain ................................................................. 51
Figure 5: High-Level Testbed Architecture ..................................................................... 53
Figure 6: Implementation Architecture ............................................................................. 56
Figure 7: Process for Converting and Importing Ontology .............................................. 58
Figure 8: Use Case I dashboard view ............................................................................... 64
Figure 9: Use Case I runtime view ................................................................................... 66
Figure 10: Use Case I Agent View ................................................................................... 68
Figure 11: Use Case II Dashboard View .......................................................................... 71
Figure 12: Use Case II Runtime View ............................................................... 71
Figure 13: Use Case II Actor View for Actor 0 (a) and Actor 2 (b) ................................. 73
Figure 14: Use Case III Dashboard View ......................................................................... 77
Figure 15: Use Case III RunTime View – Before Disruption .......................................... 77
Figure 16: Use case III Run-Time View - After Disruption ............................................. 78
Figure 17: Use Case IV Dashboard View ......................................................................... 82
Figure 18: Use Case IV RunTime View ........................................................................... 82
Figure 19: Use Case V Dashboard View .......................................................................... 86
Figure 20: Use Case V Run-Time View ........................................................................... 87
Figure 21: Use Case VI Dashboard View ......................................................................... 91
Figure 22: Use Case VI Run-Time View ......................................................... 91
Figure 23: Quadcopter Orientation ................................................................................. 108
Figure 24: Quadcopter Architecture ............................................................................... 111
Figure 25: Forces Acting on Autonomous Vehicle ......................................................... 113
Figure 26: (a) Coordinate System and (b) Vehicle Steering Angle ................................ 113
List of Equations
Equation 1: Nonlinear Translational Motion .................................................................. 108
Equation 2: Transformation Matrix ................................................................................ 108
Equation 3: Nonlinear Rotational Motion ....................................................................... 109
Equation 4: Quadcopter State Space Representation ...................................................... 109
Equation 5: Autonomous Vehicle State Space Representation ...................................... 113
Abbreviations
UAV Unmanned Aerial Vehicle
AV Autonomous Vehicle
SoS System-of-Systems
ABM Agent-Based Modeling
ABS Agent-Based Simulation
DEM Discrete Event Modeling
DES Discrete Event Simulation
SDM Systems Dynamics Modeling
SDS Systems Dynamics Simulation
COTS Commercial Off-The-Shelf
HIL Hardware-In-The-Loop
DoD Department of Defense
UC Use Case
Abstract
With the growing complexity of systems and the need for their rapid development, there is a growing need for inexpensive, flexible testbeds. Testbeds are platforms that can take various forms, such as virtual (simulation only), hardware only, or hybrid (simulation-hardware). It is not effective to deploy systems without studying their behavior under various conditions and use cases. This is where testbeds come in. However, testbeds often focus on a single application without any consideration for flexibility and extendibility. Testbeds built for specific purposes therefore have limited reusability: they invariably become single-point solutions or concept demonstrators that do not scale and are consequently unsuitable for research. A testbed developed for research needs adequate flexibility so components can be added or removed as necessary.
This dissertation focuses on a generalized methodology for developing a testbed for unmanned systems. Such systems can take various forms, such as unmanned aerial vehicles (UAVs) or autonomous vehicles (AVs). They operate in dynamic and uncertain environments where demonstrating adaptive behavior has become a key requirement. Systems with adaptation capabilities can change their operational routine in response to disrupting events. A testbed with sufficient flexibility and extendibility can contribute significantly to the development of such systems. Ad hoc and point-solution testbeds are not suitable for systems that require extendibility and scalability.
My research establishes the feasibility of a general-purpose testbed for unmanned systems
management and control. The testbed, based on a formal ontology, enables modeling, analysis,
testing, and verification of new technologies and integration schemes.
The application domain chosen for this dissertation is multi-UAV systems. However, my research has also demonstrated the generality of the ontology by extending it to other domains, such as autonomous vehicles. To ensure that the testbed implementation follows the ontology, a mechanism is developed to check the implementation against the ontology. A key feature of the testbed developed in this dissertation is that it allows importing use cases and actors (agents) from external sources. This contributes to its flexibility and demonstrates that the testbed is not a point solution.
CHAPTER 1
INTRODUCTION
Complex systems typically operate in dynamic and uncertain environments [1]. They invariably perform complex tasks in parallel [1]. These systems undergo rigorous testing in a controlled environment to ensure they meet an acceptable level of performance. For such systems, an experimentation testbed can be most beneficial for conducting analyses and experiments and collecting data [2]. A testbed is essentially a platform where a system's behavior is studied under various configurations [3]. This can be done at different stages and for different reasons, for instance, when systems are still under development or when multiple teams need to study the actual performance of the system. To this end, testbeds need requisite fidelity to be useful for such activities.
The term "testbed" can refer to many setups in the literature. For example, specific types of robots [4] or devices used for studying different algorithms and/or techniques can be called testbeds. However, a testbed is more than just a device or a specific component. Generally, an engineering testbed is a platform where system behavior can be modeled, analyzed, and studied [4-8]. Testbeds can take various forms, such as virtual (simulation only), physical (hardware only), and hybrid (hardware-simulation). Each type of testbed has its challenges. For instance, in a simulation-only testbed, the main challenge is to ensure that the models used in the simulation have sufficient fidelity and are not trivial. The main challenge in creating a hardware-simulation testbed is to integrate physical and non-physical components seamlessly, especially for real-time systems. Physical components can range from simple rotors or servos to sophisticated onboard computers and processors [3, 9, 10]. Similarly, non-physical components can range from simple code running on the hardware to complicated simulation environments [3, 9, 10].
In dynamic environments, various external and internal events can happen [11]. Some events may negatively impact the system's normal operation, while others may require a system to extend its capacity or capability [12]. In such environments, systems need to adapt their behavior to handle disrupting events [12]. In the systems engineering literature, the ability of a system to handle disrupting events has been attributed to quality attributes such as flexibility and adaptability.
In this dissertation, unmanned aerial vehicle (UAV) missions are the primary focus. UAV missions encompass a wide range of applications, which is why this domain was chosen. The methodology developed in this dissertation is general enough that it can be easily adapted to other application domains such as autonomous vehicles.
UAV missions have gained considerable interest in recent years in areas such as search and
rescue, reconnaissance, or surveillance [13, 14]. In this context, UAVs typically operate in a
dynamic environment using onboard and external sensors for navigation, object detection, and
avoidance. With advances in swarm technology, multiple UAVs are deployed to assist humans in
such missions (e.g. mapping an area using multiple UAVs) [14]. Operating multiple vehicles (e.g.
UAVs) simultaneously requires flexible allocation of requirements to multiple vehicles, to reduce
operational complexity for individual vehicles, and increase overall mission coverage [15].
Component systems collect information from multiple sources (using onboard sensors), share that information with other members, and execute actions in a coordinated fashion in multiple locations
[16, 17]. This capability brings more time efficiency into mission execution as vehicles perform
assigned or negotiated tasks in parallel to fulfill mission objectives. The latter is important for
time-critical missions such as medicine delivery [16].
Operational missions that employ multi-UAV systems are typically carried out in dynamic, open environments subject to disrupting events [18]. Over the years, many researchers have
explored simulation techniques [19-28] to study multi-UAV systems behavior. In the meantime,
many researchers [10, 29-34] have developed application-specific testbeds and performed
experimentation with developed algorithms. While these are useful, many are point solutions.
Therefore, there is a need for a flexible testbed that can enable the exploration of UAV system
behavior under various configurations.
1.1 Reader Orientation
My research is concerned with taking a systematic approach toward developing testbeds for unmanned systems, specifically the development of a flexible simulation testbed to model and evaluate the behavior of unmanned air and ground systems. As the extensive literature review has shown, testbeds thus far have been built following unique, non-generalizable processes. As a result, they are ad hoc, point solutions. Ad hoc approaches to developing testbeds tend to be costly in the long run since they lack requisite scalability. To remedy this problem, I have created a generalizable testbed strategy by defining the key functionalities and properties of a testbed. Then, by defining an explicit ontology to reason about these properties, a flexible, scalable, and robust testbed is created. In this dissertation, I show that an ontology-enabled approach to developing such testbeds can significantly reduce cost by eliminating unnecessary rework and by ensuring that requisite flexibility and scalability are considered during the architecting process. A testbed developed from an explicit ontology will have the requisite formal architecture to support testing of systems under various conditions and use cases.
This dissertation establishes the feasibility of a hardware-software testbed for complex
systems modeling and analysis. It also establishes the generalizability of the testbed by showing
its use on both unmanned aerial vehicles and autonomous vehicles. A key feature of the testbed developed in this dissertation is that it allows importing the ontology, use cases, and use case actor (agent) behavior from external sources. This contributes to its flexibility and demonstrates that the testbed is not a point solution.
The results of this research should be of use to researchers in industry, academia, and
government research centers who work on unmanned systems. An Unmanned Aerial Vehicle (UAV) is an example of an unmanned system from the aerospace domain. Similarly, an Autonomous Vehicle (AV) is an example from the automotive domain. Since the UAV industry is growing at a fast rate, a flexible and extendible testbed will help researchers and practitioners rapidly set up simulation testbeds to evaluate the performance of models and algorithms. The autonomous vehicle domain can also benefit from the results of this research because UAVs and AVs have much in common.
1.2 Organization of the Dissertation
This dissertation is organized as follows:
Chapter two presents a formulation of the research problem and an in-depth discussion of the research objectives. The objective of this dissertation is to develop a flexible testbed to model, simulate, and evaluate the behavior of unmanned systems. The main problem this dissertation addresses is the lack of a formal methodology for developing testbeds.
Chapter three surveys the existing literature to identify gaps. It discusses the status of methods for developing testbeds for the target application domain, examines the current literature on testbeds for UAV systems, and further studies aspects such as adaptation. The literature review is performed to answer questions such as: Do current methods for creating testbeds have requisite flexibility? Do they follow a formal methodology, or are they ad hoc?
Chapter four starts with a key question: what are the non-functional requirements for a testbed for unmanned systems? This chapter dives into the methodology and presents the enhancements needed for creating flexible testbeds. It presents the ontology for such testbeds, with an in-depth discussion of the core elements and their interactions, and reports analysis results that demonstrate the completeness of the ontology. Chapter four also discusses the implementation of the testbed developed in this dissertation.
Chapter five discusses a total of six use cases developed for this dissertation. These use cases are imported into the testbed from external Excel files. Along with the use cases, actor behavior is also imported from an actor repository. These use cases demonstrate the flexibility and extendibility of the testbed. To show the generality of the ontology developed in chapter four, use cases from the autonomous vehicle domain are included. Chapter six presents a summary of the dissertation and the implications of the research.
CHAPTER 2
MOTIVATION, RESEARCH PROBLEM AND OBJECTIVES
2.1 Need for Experimentation Testbeds
As the operational environment of a system becomes dynamic and complex, the system needs to perform complex tasks. Therefore, the need for more sophisticated systems grows [1]. Such systems require an environment that facilitates modeling, simulation, and verification. An integrated testbed provides a platform where various experiments can be conducted in the early stages of system development. On such a platform, experiments can be executed, various sets of data can be collected, and a comprehensive analysis of system behavior can be performed. In addition, such platforms allow multiple teams to explore multiple design alternatives during the system development process. A well-defined and properly designed testbed can be highly effective during system development.
In a testbed, experiments can be conducted to test-drive various control and decision-making algorithms, as well as to monitor the performance of specific sub-systems or components. On such a platform, multiple subsystems can be integrated and the system tested as a whole. A hybrid configuration can also be created in which some aspects of the system are simulated and some are physical hardware. Such a configuration is known as hardware-in-the-loop (HIL) simulation.
Techniques such as hardware-in-the-loop simulation bring the complexity of the plant (or system) into the simulation and consequently increase testing fidelity [35-37]. This technique has been widely adopted in industries such as automotive, aerospace, and space. Such hybrid environments can be used to test systems-of-systems: a single physical system or sub-system can be connected to a virtual environment in which many instances of the system exist in simulation [38].
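The hybrid loop just described can be sketched in Java (the language used in this testbed's implementation). The sketch below is illustrative only: `HardwareVehicle` is a hypothetical stub standing in for a physical link, and all class names are assumptions rather than the dissertation's actual code. It shows one "physical" system stepped on the same simulation clock as several simulated instances.

```java
import java.util.ArrayList;
import java.util.List;

// Common interface so physical and simulated systems advance together.
interface Vehicle {
    void step(double dtSeconds);   // advance the vehicle's state by dt
    double altitude();             // observable state used by the testbed
}

// Pure-software model of a climbing UAV.
class SimulatedVehicle implements Vehicle {
    private double altitude = 0.0;
    private final double climbRate;            // meters per second
    SimulatedVehicle(double climbRate) { this.climbRate = climbRate; }
    public void step(double dt) { altitude += climbRate * dt; }
    public double altitude() { return altitude; }
}

// Stub for a physical vehicle; a real implementation would exchange
// state over a serial or network link instead of computing it locally.
class HardwareVehicle implements Vehicle {
    private double lastReading = 0.0;
    public void step(double dt) { lastReading += 1.0 * dt; /* poll hardware */ }
    public double altitude() { return lastReading; }
}

public class HilDemo {
    public static void main(String[] args) {
        List<Vehicle> fleet = new ArrayList<>();
        fleet.add(new HardwareVehicle());           // single physical system
        for (int i = 0; i < 3; i++) fleet.add(new SimulatedVehicle(2.0));

        double dt = 0.1;                            // 100 ms time step
        for (int tick = 0; tick < 50; tick++) {     // 5 simulated seconds
            for (Vehicle v : fleet) v.step(dt);     // shared clock for all members
        }
        // After 5 s the stub reads ~5 m and each simulated UAV ~10 m.
        if (Math.abs(fleet.get(1).altitude() - 10.0) > 1e-6)
            throw new AssertionError("unexpected simulated altitude");
        System.out.printf("hardware=%.1f m, simulated=%.1f m%n",
                fleet.get(0).altitude(), fleet.get(1).altitude());
    }
}
```

The essential design point is the shared `Vehicle` interface: the loop does not care which members are hardware and which are simulation, which is exactly what lets a single physical sub-system join a fleet of simulated instances.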
In a testbed environment, various kinds of models can be utilized [6]. These models can represent different aspects of the system or its sub-systems. Similarly, various types of simulation can be used in a testbed environment. A detailed review of simulation techniques and their benefits is provided in the next chapter.
In a complex operational environment, many complicating events can occur [11]. Therefore, systems that operate in such environments and perform critical tasks need to adapt their behavior in response to such events [11]. Adaptive systems change their behavior in response to external or internal incidents while keeping an acceptable level of performance [11, 39]. They have gained increasing interest in recent years and are deployed to carry out missions of all sorts. For instance, an unmanned aerial vehicle (UAV) may be deployed to carry out tasks such as searching, monitoring, or medicine delivery [14, 40]. While operating in a complex dynamic environment, the UAV system adapts its behavior to handle disrupting events in parallel with performing its main objectives (tasks).
There are multiple ways to deal with changing conditions and events; however, which alternative is appropriate for a given disruption in the current context needs additional exploration [18]. A testbed must have sufficient flexibility to enable exploration of various adaptation mechanisms given the operational context. A well-engineered system requires in-depth analysis and experimentation at design time. Point-solution and ad hoc testbed approaches do not address this need simply because they lack requisite flexibility, extendibility, and fidelity [35]. What is needed is an inexpensive, instrumented testbed with sufficient flexibility to address a class of problems defined as unmanned air and ground vehicles.
2.2 Research Problem Formulation
Researchers in academia and industry have been utilizing testbeds to explore system behavior under various conditions. As the complexity of systems grows, the modeling requirements for developing and implementing such systems become commensurately complicated. Many researchers employ various control algorithms on vehicles to ensure they perform within an acceptable range while operating under changing conditions. Their approaches are valuable for ruling out inconsistencies and further refining the algorithms that enhance system behavior. However, most current approaches tend to be tailored toward a specific instance of the system with one application in mind. A specific instance of a testbed is driven by a specific need and lacks a list of formal requirements or an underlying architecture.
The problem that no one seems willing to address is the identification of properties for flexible and extendible testbeds. Questions that need to be addressed include:
• What are the underlying concepts and properties of an inexpensive, flexible, and extendible testbed?
• How are these properties and concepts related to each other?
• Which relationships are more important than others, given the application of the testbed?
• How does a testbed incorporate future extensions without losing its current capabilities?
Requirements for testbeds need to be explicitly defined. These requirements should drive
testbed development. Existing testbeds tend to be focused on specific domains, which limits
their general utility. The consequences of domain-specific and proprietary testbeds are substantial
development expense, time, and manpower for each new test environment instantiation. The
effort put into creating high-fidelity testbeds often becomes a significant part of an engineering
effort. Thus, there is a need for a domain-agnostic, general-purpose testbed platform: a
platform that is integrated, inexpensive, flexible, and extendible, and has a minimal learning
curve.
2.3 Research Objectives
The objectives of my research are:
Objective I: Investigate current approaches toward developing testbeds for unmanned
aerial systems. This objective includes critically reviewing current approaches and identifying
gaps. The goal of this objective is to understand the status of the application domain, its prospects,
and how this research can be useful to it.
Objective II: Investigate whether a set of common properties exists for unmanned-system
testbeds. If such properties exist, determine what they are and how they are related.
Objective III: If a set of common properties exists, develop an architecture for creating flexible
testbeds. This objective includes exploring the ontology and reasoning about what is needed and what
is not.
Objective IV: Instantiate and implement a testbed for the application domain and perform
analysis to demonstrate that the testbed meets the underlying properties developed under Objective
II. This objective includes researching and developing the mechanisms required to check the
implementation against the testbed properties and to import or export required data, models, and
system use cases.
2.4 Selected Application Domain
Unmanned Aerial Vehicles (UAVs) have gained interest in recent years. They are being
utilized to carry out complex missions. To increase the time efficiency of a mission, multiple UAVs
are deployed to carry out sets of tasks in parallel to satisfy mission requirements. Designing and
building a single UAV that meets all mission requirements becomes harder as mission complexity
grows. Thus, utilizing UAVs that are already built and operating them simultaneously helps
achieve considerable cost reduction.
A group of Unmanned Aerial Vehicles can be viewed from many perspectives. A UAV
is essentially a complicated (or complex) agent that performs a set of tasks to fulfill the overall
mission. UAVs have a wide range of applications such as:
• Aerial mapping for surveying, remote sensing of hidden topography and underground oil
and mineral deposits [41, 42]
• Aerial photography for journalism and film [43, 44]
• Communication and efficient data sharing [45]
• Express shipping and delivery (famously championed by Amazon) [46]
• Gathering information or supplying essentials for disaster management [47]
• Thermal sensor drones for search and rescue operations [48]
• Geographic mapping of inaccessible terrain and locations [49]
• Building safety inspections [50]
• Monitoring traffic and responding to accidents and disasters [51]
• Storm tracking and forecasting hurricanes and tornadoes [52]
• Agricultural uses to cut costs and increase efficient yields, such as precision crop
monitoring, fertilizing crop fields on an automated basis, planting seeds [53]
• Law enforcement and border control surveillance, to supplement or replace less efficient
methods of border protection [54]
It is estimated that the impact of commercial UAVs alone could be an $82 billion and over
100,000-job boost to the U.S. economy by 2025 [55]. As UAVs operate in an open environment,
disruptions can take a variety of forms such as systemic (within a UAV or within the UAV network),
environmental (e.g., jamming, loss of communication, loss of a sensor, or loss of observability due
to extreme weather), or human-triggered (e.g., an operator sending a wrong command or a third
party hacking into the system) [56]. The current state of the art in multi-UAV mission
execution is primarily limited to pre-loaded mission plans with limited flexibility to handle
changing conditions [56]. A flexible testbed for exploring multi-UAV system behavior ensures that
researchers and practitioners have a properly designed platform on which to test drive various
algorithms. A further examination of the application domain and current approaches in this domain
is discussed in subsequent chapters.
Given their wide range of applications, multi-UAV systems are a suitable application domain
for this research. If a flexible testbed is created for this application domain, the UAV industry stands
to benefit from it in multiple ways, such as lower development time through extensive modeling,
simulation, and what-if exploration under various configurations.
CHAPTER 3
LITERATURE REVIEW
3.1 Multi-UAV Systems
Unmanned Aerial Vehicles (UAVs) are deployed as a group for a variety of application
domains such as military reconnaissance and surveillance, search and rescue, science data
collection, agriculture, payload delivery, or as flying ad-hoc networks (FANET) to support
wireless communications [16, 57-62].
Single-system operation can experience a significant drop in performance when disrupting
events happen [15, 63]. Furthermore, a single system can perform only a limited set of
missions, while a multi-UAV system can perform a wide range of missions through reconfiguration.
Operating multiple systems simultaneously has many advantages over single-system operation:
uninterrupted mission coverage in case of disruptions, lower development cost
due to allocation of functionalities across multiple systems, and lower personnel training costs due to
semi-autonomous operation of the vehicles [16].
A multi-UAV system, in essence, is a network of UAVs (agents or actors) in which
managing interactions and dependencies is important for successful mission execution [56].
Operating multiple UAVs simultaneously enables flexible allocation of mission requirements to
multiple vehicles, which reduces operational complexity while increasing overall mission coverage
[16]. Component systems collect information from multiple sources (using onboard sensors), share
that information with other members, and execute actions in a coordinated fashion in multiple
locations [64]. This capability brings more time efficiency to mission execution, as vehicles
perform assigned or negotiated tasks in parallel to fulfill the mission objective [63].
Multi-UAV systems can be homogeneous or heterogeneous [65]. In a homogeneous multi-
UAV system, the UAVs share similar physical and functional characteristics [56]. On the contrary, in
a heterogeneous system, the UAVs can have different physical shapes or perform different
functions [64]. This heterogeneity can be leveraged to derive adaptive responses to disruptions.
At the same time, it can also limit the system's ability to handle disrupting events, since required
functionalities may not exist in any of the constituent systems. In addition, multi-UAV systems
demonstrate higher availability since each vehicle has a degree of fault tolerance and reliability [56,
63]. These systems also enable flexible communication protocols and allow adaptable functional
allocations, which contribute to the system's overall adaptability.
Because these systems typically operate in an open environment, they are susceptible to multiple
disruptions, and it is desirable that they maintain an acceptable level of performance [56, 63, 66].
Designing adaptable unmanned systems, particularly multi-UAV systems, capable of
successfully conducting missions requires an in-depth understanding of the system's capabilities and
requirements, along with extensive modeling and testing in an integrated testbed environment
[67].
Performance is a complex, multi-dimensional metric. Exemplar metrics that contribute to
monitoring multi-UAV system performance are flow rate, response time, and recovery time [68,
69]. Multi-UAV operation relies on accurate and timely data flow, and successful operation of the
multi-UAV SoS heavily depends on effective, reliable, and secure communication between UAVs.
The multi-UAV SoS's situational awareness depends on data flow, as the data acquired by each
vehicle's sensors is shared with neighboring vehicles. UAVs must be interoperable, and shared
semantics are key to successfully and effectively transmitting data among UAVs [68, 69].
Response time is the round-trip time between sending a command to the multi-UAV system
and receiving a response. Factors such as the task-distribution algorithm, individual systems'
capabilities, and communication bandwidth and protocols have an impact on response time [68, 69].
Recovery time is the time between detecting a failure and restoring operation. Recovery
requirements typically vary with the mission and operational context [68, 69].
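As a concrete illustration, these two timing metrics can be computed from timestamped records. The sketch below is hypothetical (the record fields and function names are not drawn from the cited works) and assumes all times are reported in seconds:

```python
from dataclasses import dataclass

@dataclass
class CommandRecord:
    """One command issued to the multi-UAV system (times in seconds)."""
    sent_at: float        # when the command was sent to the SoS
    responded_at: float   # when the system's response was received

def response_time(rec: CommandRecord) -> float:
    """Round-trip time between sending a command and receiving a response."""
    return rec.responded_at - rec.sent_at

def recovery_time(failure_detected_at: float, operation_restored_at: float) -> float:
    """Time between detecting a failure and restoring operation."""
    return operation_restored_at - failure_detected_at

# Example: a command sent at t = 10.0 s and answered at t = 10.75 s
rec = CommandRecord(sent_at=10.0, responded_at=10.75)
print(response_time(rec))         # 0.75
print(recovery_time(42.0, 47.5))  # 5.5
```

In a real testbed these records would be produced by instrumentation on the vehicles and the ground station, and the metrics would be aggregated over many commands and failures rather than single events.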
3.1.1 Multi-UAV Systems as SoS
Multi-UAV systems can be viewed as a system-of-systems (SoS) [17]. This perspective
explicates the interactions and dependencies among the UAVs (agents) and facilitates
understanding of disruption propagation throughout the SoS [15, 56]. When multiple systems
come together under a system-of-systems umbrella, they offer additional functionalities that do not
reside in any single system; multi-UAV systems are no exception [15, 56, 63]. Thus, they can
exhibit unique capabilities. Additionally, there is no need to build new systems from scratch when
an overall functionality can be achieved by integrating existing systems, which has a great
impact on reducing cost [15, 63, 67]. One key aspect that must be considered carefully
is the interoperability and integration of constituent systems to ensure successful operation [67].
Maier [70] identifies five key characteristics of system-of-systems. These characteristics
are Operational Independence, Managerial Independence, Evolutionary Development, Emergent
Behavior, and Geographic Distribution. A short description of each is provided next. For a detailed
discussion, the reader is referred to [15, 56, 63, 67].
Operational Independence of the Elements: If the system-of-systems is disassembled into
its component systems, the independent systems still provide useful functionality on their own
[70-72].
Managerial Independence of the Elements: The component systems can and do operate
independently [73]. The component systems are integrated but maintain operational existence
independent of the system-of-systems [70-72].
Evolutionary Development: The system-of-systems is not fully formed. Its development
and existence are evolutionary with functions and purposes added, removed, and modified with
experience and need [70-72].
Emergent Behavior: The system performs functions that cannot be accomplished by any
component system. Such behavior is considered an emergent property of the entire system-of-
systems and cannot be attributed to any single system [73]. The main objective of the entire system-
of-systems is satisfied by this behavior [70-72].
Geographic Distribution: With advances in communication technologies, systems can
share information easily over long distances. This property simply means that components
exchange information rather than mass or energy [70-72].
A multi-UAV system satisfies the requirements for an SoS defined by Maier [70]. Each UAV
has operational independence as it performs its assigned function(s) while also participating in the
SoS. High-level planning, plan decomposition, dynamic task allocation, and conflict resolution are
the key functions that play major roles for successful operations. As such, proper coordination and
cooperation are essential [17, 74]. Coordination requires allocating sufficient temporal and spatial
resources [16]. Cooperation of vehicles requires integration of sensing, planning and control within
a decision framework.
Vehicles in a multi-UAV system may have different governance while participating in the
SoS. This can have an impact on the interaction and communication protocols among them.
Furthermore, a multi-UAV system can evolve, with functions and purposes added, removed, and
modified [56, 63]. A multi-UAV system also exhibits emergent behavior, as the overall functionality
of the system does not reside within any single UAV. Finally, UAVs are geographically
distributed since they primarily exchange information [70].
A multi-UAV system can be characterized as a collaborative, acknowledged, or directed
SoS [17, 71, 72]. The details of these configurations can be found in Appendix C. The type of
SoS depends on the operational context. For instance, for a science data collection mission where
security is not crucial, a collaborative SoS can be employed. In this case, each UAV interacts
voluntarily to fulfill an agreed-upon central purpose. Collaboration among UAVs provides
significant operational advantages through improved situational awareness for the entire SoS [71,
72].
A multi-UAV system comprises interoperable, standalone systems. Systems are integrated
to satisfy mission requirements, with each system performing a specific task. In a multi-
UAV SoS, single vehicles are not responsible for the entire operation [56, 63]. Instead, the entire
SoS is responsible for carrying out the mission.
Multi-UAV integration becomes increasingly more challenging as the complexity of the
SoS increases [71, 72]. Once component UAV systems are developed and fielded, they are
incrementally integrated and tested, and ultimately, deployed in the operational environment [71,
72]. Integrating such systems requires resolving technology compatibility issues and semantic
differences.
A multi-UAV SoS can have a leader and subordinates, or every UAV can be commanded
by the ground station [16]. A single-leader configuration has a single point of failure. At the same
time, having a leader makes communication easier with the ground and within the SoS. In contrast,
if all UAVs are in communication with the ground station and with each other, data sharing and data
allocation become quite challenging.
Multi-UAV system operation has both business and safety ramifications [16, 75]. No
longer is there a need to develop sophisticated, custom systems to address specific
requirements. Instead, requirements can be allocated to multiple vehicles, which simplifies and
reduces the complexity of the solution [64]. However, events such as environmental obstacles and
system failures can disrupt the system's operation. Thus, adaptive operation of the system
is desirable [15].
3.2 Understanding Adaptation in Multi-UAV Systems
Adaptation of a system is understood to mean the system's ability to handle changing conditions
without significant loss of function and performance [76-78]. In the context of engineered systems,
this could be understood as the system's ability to handle disruptions that are beyond its
performance envelope. Adaptation is about what an engineered system does. Thus, a system's
adaptation is the result of a closed-loop process that includes sensing, planning, and acting
(executing) [15, 79].
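The closed-loop sense-plan-act process can be illustrated with a minimal sketch. The functions below are hypothetical stubs rather than an actual flight stack; they merely show how the three stages feed one another in a loop:

```python
def sense(environment):
    """Gather observations (stub: the environment is already a dict of readings)."""
    return dict(environment)

def plan(observations, goal):
    """Pick an action that reduces the gap between the observed state and the goal."""
    gap = goal - observations["position"]
    if abs(gap) < 0.1:
        return 0.0                      # close enough: hold position
    return max(-1.0, min(1.0, gap))     # bounded corrective action

def act(environment, action):
    """Apply the action (stub: move the vehicle by the commanded amount)."""
    environment["position"] += action
    return environment

# One closed-loop episode: drive the position from 0 toward a goal of 3
env = {"position": 0.0}
for _ in range(10):                     # sensing -> planning -> acting, repeated
    obs = sense(env)
    action = plan(obs, goal=3.0)
    env = act(env, action)
print(env["position"])                  # 3.0
```

In a real system, sensing would involve noisy onboard sensors, planning would involve the adaptation logic discussed below, and acting would command actuators; the loop structure, however, is the same.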
Flexibility is a system's ability to cope with expected change or disruption [2, 3, 7, 9]. A
system can handle a particular set of disruptions (known disruptions) through a pre-defined set of
reactions (i.e., a pre-defined pattern), while it adapts its behavior when managing unexpected
disruptions.
To deal with disruptions, a set of alternatives (mechanisms) may be employed by the system.
Exemplar techniques are physical redundancy, functional redundancy, and function re-allocation
[11]. However, it is often unknown which technique or approach should be employed, and
when and where it should be used, as the system tries to deal with disruptions [11]. A flexible testbed
can help with testing multiple algorithms in realistic situations to evaluate the performance of such
approaches in various contexts.
3.2.1 Typology of Disruptions
There are three categories of disruptions: external, systemic, and human-triggered [11].
External disruptions are largely related to environmental obstacles and incidents. For
instance, loss of communication or a degraded data transfer rate due to the multi-UAV system passing
through a flock of birds can be considered an external disruption.
A systemic disruption happens when an internal component's functionality, capability, or
capacity causes performance degradation. For example, an internal failure in a UAV's flight control
system can cause the system to initiate a safe landing and land immediately. In such cases, a pre-
planned protocol can be activated to enable adaptive behavior [11].
Human-triggered disruptions are associated with human operators inside or outside
of the system boundary [11]. Even though the human operator's role can be limited to commanding
the overall objective and monitoring activities, this does not make the multi-UAV system immune
to human-triggered disruptions. Multi-UAV SoS are highly adaptable, complex systems
that can adapt at a much faster rate than humans. Thus, human operators can
inadvertently cause a disruption within the system [11].
Disruptions can also be predictable or random. Predictable disruptions are known in
terms of time of occurrence, location of occurrence, and triggering event. Random disruptions
occur unexpectedly [76, 80]. To determine the best possible mechanism for handling a
disruption, the system's operational context must be taken into account [76, 80].
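These two dimensions of the typology can be captured in a small data model. The sketch below is illustrative only; the category names follow the text, while the helper function and its policy are assumptions:

```python
from enum import Enum, auto

class Origin(Enum):
    EXTERNAL = auto()         # environmental obstacles and incidents
    SYSTEMIC = auto()         # internal component degradation or failure
    HUMAN_TRIGGERED = auto()  # operator error or malicious interference

class Timing(Enum):
    PREDICTABLE = auto()      # known time, location, and triggering event
    RANDOM = auto()           # occurs unexpectedly

# A disruption is characterized along both dimensions
bird_strike = (Origin.EXTERNAL, Timing.RANDOM)
scheduled_gps_outage = (Origin.EXTERNAL, Timing.PREDICTABLE)
flight_controller_fault = (Origin.SYSTEMIC, Timing.RANDOM)

def needs_preplanned_protocol(disruption):
    """Predictable disruptions can be covered by pre-planned protocols;
    random ones require adaptive, context-dependent handling."""
    _, timing = disruption
    return timing is Timing.PREDICTABLE

print(needs_preplanned_protocol(scheduled_gps_outage))  # True
print(needs_preplanned_protocol(bird_strike))           # False
```

A testbed that tags injected disruptions along these two axes can then measure how well a given adaptation mechanism covers each cell of the typology.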
3.2.2 Applicable Adaptation Techniques
Madni and Jackson [11] have identified several techniques and heuristics for handling
disrupting events. Some of these heuristics can be applied during system design to ensure adaptive
behavior during system operation. Heuristics relevant to UAV systems are Human as a Backup,
Pre-planned Protocols, Physical Redundancy, Functional Redundancy, Function Re-allocation,
Circumvention, and Safe State. These heuristics and the relevant techniques are described
in detail in [11].
Collision avoidance is a key characteristic of an adaptive multi-UAV SoS [16, 75, 81]. The
SoS, as well as individual UAVs, can detect and avoid obstacles. The presence of some obstacles
may not be known until they are in very close proximity [16]. This puts extra emphasis on the re-
configuration and maneuverability of the entire system-of-systems. A vehicle's maneuverability
depends on its shape, size, and onboard flight control system capabilities [82].
Figure 1 presents a conceptual framework for a multi-UAV system. A multi-UAV SoS has
an objective (mission goal) that it needs to fulfill even in the face of disruptions. The mission
objective can be broken down into three categories: the SoS level, the system (i.e., UAV) level, and the
constraints imposed by the operational environment [15, 63]. While satisfying mission objectives,
a multi-UAV system employs various adaptation logic to handle disruptive events. Adaptation in a
multi-UAV system can occur at many levels. At each level, various techniques and algorithms can
be employed to enable adaptive behavior. Table 1 shows the multiple levels of adaptation in a multi-
UAV system.
TABLE 1: MULTI-UAV SYSTEM ADAPTATION LEVELS
• Level 4: Mission Planning and Decision Algorithm
• Level 3: Route Planning and Collision Avoidance
• Level 2: Guidance
• Level 1: Navigation
• Level 0: Control
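One way to read Table 1 is as an ordered dispatch: a disruption is routed to the lowest level able to handle it, escalating upward as severity grows. The sketch below is a hypothetical illustration of that idea; the severity scale and the routing rule are assumptions, not part of the cited framework:

```python
# Adaptation levels from Table 1, ordered from lowest (0) to highest (4).
ADAPTATION_LEVELS = {
    0: "control",
    1: "navigation",
    2: "guidance",
    3: "route planning and collision avoidance",
    4: "mission planning and decision algorithm",
}

def route_disruption(severity: int) -> str:
    """Map a disruption severity (0-4, hypothetical scale) to the adaptation
    level that handles it; severities outside the scale are clamped."""
    level = min(max(severity, 0), 4)
    return ADAPTATION_LEVELS[level]

print(route_disruption(0))  # a small attitude perturbation stays at "control"
print(route_disruption(3))  # an obstacle field escalates to route planning
```

A flexible testbed would let the experimenter swap the algorithm behind each level independently, which is precisely the kind of what-if exploration motivated earlier.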
FIGURE 1: ADAPTIVE MULTI-UAV SYSTEM CONCEPTUAL FRAMEWORK
3.3 Multi-UAV Systems Research Areas
Multi-UAV systems have gained interest in many domains due to their unique capabilities
[64]. Researchers have made significant advances in various aspects of these systems. The research
areas can be categorized into five major groups, as discussed in [56]:
Observation and Monitoring: this area is concerned with researching and developing
mechanisms to ensure that appropriate data is collected, for instance, techniques that ensure vehicles
are instrumented with the proper sensors. These sensors can be used to monitor an
individual vehicle's and the multi-UAV system's performance.
Detection and Identification: this area of research investigates methods that can be used to
analyze and understand the information collected from vehicles, for example, using
Bayesian Belief Networks (BBNs) to calculate a belief about the system's current state [83]. This
information can later be used to detect disruptions and their impact on the system.
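As an illustration of the belief-update idea behind BBN-based detection, the single-node sketch below applies Bayes' rule to one binary "disrupted vs. nominal" state. A real BBN would span many variables; the probabilities here are invented for illustration:

```python
def bayes_update(prior: float, likelihood_if_disrupted: float,
                 likelihood_if_nominal: float) -> float:
    """Posterior belief that the system is disrupted, given one sensor cue.

    prior: P(disrupted) before the observation
    likelihood_if_disrupted: P(cue | disrupted)
    likelihood_if_nominal:   P(cue | nominal)
    """
    evidence = (likelihood_if_disrupted * prior
                + likelihood_if_nominal * (1.0 - prior))
    return likelihood_if_disrupted * prior / evidence

# Start nearly certain the system is nominal, then observe two anomalous cues.
belief = 0.05
for _ in range(2):
    belief = bayes_update(belief, likelihood_if_disrupted=0.9,
                          likelihood_if_nominal=0.2)
print(round(belief, 3))  # ~0.516: two cues push the belief past 50%
```

Once such a belief crosses a threshold, the planning layer can be notified that a disruption is likely in progress.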
Planning and Decision Making: this research area is concerned with high-level decision
making, planning, and coordination among the vehicles. Questions such as what is the best course of
action given the system's current state are among the main questions this area addresses
[84, 85].
Communication and Networking: this research area focuses on the communication and
networking challenges to ensure uninterrupted data flow among networks, or the best topology to
ensure smooth data exchange [58].
Execution and Control: this area of research is concerned with the low-level control algorithms
each vehicle needs to execute high-level commands properly, for example, guidance, navigation, and
control at the single-vehicle level [86].
The research areas discussed above can benefit from an integrated hardware-software
testbed. A testbed that is flexible and extendible can enable an environment for such research
thrusts.
3.4 Overview of Simulation Techniques
Simulations are often used to predict system performance or to explore design alternatives
[87]. In a simulated environment, system design alternatives can be compared and the outcomes of
a design assessed. Similarly, a portion of a system's concept of operations (CONOPS) can
be simulated in a virtual environment where the behavior and performance of the overall system can
be quantitatively studied [87].
Although there is great benefit in simulating a system design alternative or system behavior
in a virtual setting, it is not cheap to do so. The cost of simulation can be far less than that of a physical
prototype; however, a simulation with sufficient fidelity can be time consuming and potentially
expensive depending on the application domain.
As system and operational complexities grow, simulation alone is not enough to verify and
validate system behavior [88]. Often, replicating a real-world setting in a simulated
environment can be extremely difficult and potentially impossible. Hence, many researchers have
been looking into alternatives that create a hybrid environment where some aspects of the simulation
are tethered to physical hardware. However, without a careful thought process that considers all
aspects, such tethering can be dangerous and prone to errors. To avoid this problem, an
ontology-driven approach is needed to guide the integration of hardware with software, as well
as the integration of various simulation environments and techniques.
This section provides a brief overview of widely used simulation techniques and
draws out their pros and cons. Agent-based modeling and simulation has gained interest in recent
years. Many real-world problems that have interacting pieces can be modeled and simulated using
this technique [89]. Essentially, each interacting piece or component is an agent that receives
information from the environment and acts upon that information. The agent-based simulation
technique has been utilized to demonstrate various system and system-of-systems capabilities.
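A minimal agent-based sketch makes the idea concrete: each agent holds local state, senses its environment, and applies a simple rule each tick. The agent class and convergence rule below are illustrative assumptions, not any published swarming algorithm:

```python
import random

class UAVAgent:
    """Minimal agent: senses a shared target and moves toward it each tick."""
    def __init__(self, position: float):
        self.position = position

    def step(self, target: float):
        # Each agent acts only on local information: its position and the target.
        direction = 1.0 if target > self.position else -1.0
        self.position += direction * min(1.0, abs(target - self.position))

random.seed(0)
agents = [UAVAgent(random.uniform(-10, 10)) for _ in range(5)]
for _ in range(20):              # simulation loop: every agent acts each tick
    for agent in agents:
        agent.step(target=0.0)

print(all(abs(a.position) < 1e-6 for a in agents))  # True: the swarm converged
```

The global behavior (a converged swarm) emerges from purely local rules, which is exactly the property that makes agent-based simulation attractive for multi-UAV studies.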
In the discrete-event simulation technique, the behavior of the system is modeled through time
using a state representation in which state variables change instantaneously at specific, countable time
points. Continuous operation can be broken down into time steps where actions take place at
specific time points [88]. In this type of simulation, events happen based on triggering
actions or conditions. This technique can be mixed with the agent-based simulation technique, where
agent behavior is modeled using state machines and observed at countable time points.
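The event-queue mechanics can be sketched in a few lines: events are scheduled with timestamps, and the simulation state changes only at those countable points, processed in time order. The event names and the fleet-counting state below are illustrative:

```python
import heapq

def run_des(events):
    """Minimal discrete-event loop: process (time, name) events in time order;
    state changes instantaneously at each countable time point."""
    heapq.heapify(events)
    log, in_flight = [], 0
    while events:
        time, name = heapq.heappop(events)
        if name == "takeoff":
            in_flight += 1
        elif name == "landing":
            in_flight -= 1
        log.append((time, name, in_flight))
    return log

# Events may be scheduled out of order; the queue replays them chronologically.
trace = run_des([(5.0, "landing"), (0.0, "takeoff"), (1.0, "takeoff")])
for entry in trace:
    print(entry)
```

Because nothing happens between events, long idle periods cost no computation, which is what makes the technique efficient for queuing-style problems.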
System Dynamics is a computer-aided simulation technique for policy design and analysis
[90]. However, since it allows researchers to study and understand the interactions among
components, this simulation technique has been adopted for engineering systems. Many
researchers use it to study system-of-systems interactions, especially where there are
human components. Table 2 summarizes the advantages and disadvantages of each simulation
technique.
Depending on the application domain and the nature of the problem, simulation techniques can
be mixed and matched [88]. A hybrid simulation can be beneficial for complex problems such as
multi-UAV system operation. For instance, a search and rescue mission has many interacting
components. Each of the simulation techniques discussed above can help with understanding a
portion of the concept of operations, but a hybrid simulation can help researchers gain better
insight into the system's operation and performance.
Hybrid models and simulations are more effective in studying real-world problems and
help provide better solutions [88]. While creating hybrid models and simulations can be time
consuming, it yields better results. With advances in simulation tools, creating such hybrid
models will become easier. However, as noted earlier, creating such hybrid models cannot be
done without proper planning and understanding of the problem domain. An ontology-driven
approach is promising for creating hybrid simulation environments where all the interacting
components are considered properly.
TABLE 2: SUMMARY OF SIMULATION TECHNIQUES
• Agent-Based Modeling and Simulation (ABM/ABS)
o Pros
▪ Addresses agent's heterogeneity
▪ Allows collective study of inter-dependencies
▪ Does not restrict type of the rules that can be implemented
▪ Includes some notion of time inherently
▪ Explicit representation of agents and their environment
o Cons
▪ Hard to balance: models tend to be either trivial or too complex
▪ Hard to select number of parameters and features of the agents
▪ As complexity of the model grows, it can be hard to simulate the model
▪ Simulation/Model can exponentially grow if not properly scoped
• Discrete-Event Modeling and Simulation (DEM/DES)
o Pros
▪ Suitable for queuing activities (e.g., set of objectives to reach over time)
▪ Allows studying of a system in a short period of time
▪ Allows looking into more details than system dynamic models
▪ Allows inclusion of randomness in the simulation
o Cons
▪ Need multiple runs to gain full understanding of the system due to its
stochastic nature
▪ Longer simulation scenarios may need more resources (e.g., memory)
since discretization may take longer
▪ Information may get lost between each time stamp if they are not handled
correctly
• System Dynamics Modeling and Simulation (SDM/SDS)
o Pros
▪ Emphasis on dynamics of the whole system to understand its behavior
▪ Helps with understanding non-linearity
▪ Helps to understand the big picture and supports system's thinking
o Cons
▪ The system must be divided into sub-components, and determining a
useful set of objects and relations is difficult
▪ The approach is limited when the whole is truly more than its parts
▪ Behavior of individual systems/sub-components is approximated which
neglects noise and randomness especially in complex systems
3.5 Current Simulation Testbeds for Multi-UAV Systems
Over the years, researchers have tried to develop various control techniques for
adaptive multi-UAV systems. Many of these approaches adopt techniques from
domains outside engineering to develop sophisticated multi-UAV systems. While these techniques
perform incredibly well in simulated environments, their applicability in the real world remains
questionable. Many of them either neglect key aspects, such as the dynamics of the
environment and the vehicle, or focus on very specific, specialized aspects of the system,
ignoring the bigger picture and the fact that UAVs are required to perform a variety of missions.
Researchers have been using behavior-rule techniques [19, 23, 24, 26] to demonstrate
control strategies for adaptive behavior of multi-UAV systems. The specific application domains
include search missions [19, 24] and formation flight [26]. While these applications are useful,
they lack the requisite fidelity for any real-world application.
Gradient vectors [20] and graph theory [22] have been successfully applied to application
domains such as formation flight [20] and specific configurations such as leader-follower [22].
While these are helpful for many real-world applications, since they have not yet been tested in
higher-fidelity settings, the transition of these techniques from the simulation environment to the
physical world is questionable.
Mathematical patterns [27, 28] also hold strong promise for adaptive multi-UAV
control. These techniques have been successfully applied to areas such as formation control and
robotics [28], but they still need rigorous testing in hybrid environments to validate their
applicability in real-world scenarios.
Artificial pheromones are another set of techniques useful for multi-UAV systems.
These techniques use concepts such as potential fields (borrowed from magnetism) for UAV
formation flight. While applied mostly to search and rescue [25] and search-and-destroy missions
[21] in simulated environments, they appear promising for real-world scenarios, pending
rigorous testing.
TABLE 3: SUMMARY OF MULTI-UAV SIMULATION TECHNIQUES
Reference Technique Application
[19] Behavior-rules Multi-UAV Search Mission in 2D, focused on communication
[23] Behavior-rules No specific target area, utilizing genetic algorithm
[26] Behavior-rules Formation flight, using self-organizing approach
[24] Behavior-rules Search Missions, Applied to Robots, can be extended to UAVs
[20] Gradient Vector Formation Flight
[22] Graph Theory No Specific target area, primarily studying leader-follower
[27] Math. Patterns No specific target area
[28] Math. Patterns Formation, applied to robots but can be extended to UAVs
[25] Artificial
Pheromones
Use potential field for UAV formation; Mostly used for search
applications
[21] Artificial
Pheromones
Search and Destroy Missions
Table 3 summarizes some of the current techniques that have been successfully demonstrated in
simulated environments and are promising approaches for the real world. However, these
approaches have not been tested in real-world applications.
Several researchers have been working on advancing hardware-in-the-loop testbeds for
multi-UAV systems. Many of these approaches solve a piece of the problem. For instance, Purta et
al [91] introduce a Dynamic Data-Driven Application System in which real data is shared between
hardware (the actual vehicle) and the simulation: orders are sent to the environment, and sensory
data is received from it. It consists of the following modules:
• A Command module, where all the main swarm control algorithms are stored and operated
• A GUI module, which allows users to switch between environments and check the status of vehicles
• An Environment module, which can be simulated or physical
• A Middleware module, a service-oriented architecture that enables communication between modules
In this setting, the MASON multi-agent simulation library is used as the backbone of many
swarming techniques. MASON does most of the heavy lifting while also being able to connect to
another simulation environment to acquire data, or to actual drones to acquire real-world
data.
O’Neil et al [32] developed a simulator based on a game engine to simulate the real world.
Their approach is primarily applied to wireless communication, with a specific focus on human
interaction. They describe TATUS, a computing simulator. TATUS is novel in that it removes
the need for experimenters to develop game-level code while retaining a large degree of flexibility
in the scenarios that researchers can readily develop.
D'Andrea and Babish [30] developed a hybrid environment specifically designed for the RoboFlag game. Some aspects, such as the communication link, are simulated; the robots, however, are real. The testbed has five main components: vehicles, global sensor information, centralized control, human interfaces, and the communications network. In their approach the communications network subsystem is completely simulated, at a relatively high level; for example, errors in transmission are captured by the transmission latency [30].
Montufar et al. [31] developed a multi-UAV testbed primarily focused on moving objects. Their testbed gives the experimenter two environments to choose from: simulation and/or the real world. Schmittle et al. [34] developed an open-source, cloud-based simulation testbed for UAVs. Their testbed utilizes commercially available tools and cloud servers to reduce setup time. It is easy to use and specifically addresses scalability and usability. Palacios et al. [33] developed a heterogeneous testbed based on a specific drone model and demonstrated its application in two different scenarios. It lets users select among four different environments to test their control algorithms.
Afanasov et al. [29] developed a testbed called FlyZone. In their methodology, tags are used for vehicle localization, and an onboard computer is responsible for issuing high-level commands. They decoupled the testbed from the real application. Their approach considers only static obstacles and wind disturbances as disruptions. Their testbed also utilizes a vehicle dynamics model to determine the influence of wind on the UAV's behavior.
Michael et al. [9] identified robustness and reliability among the main requirements for hardware-software testbeds. Scalability and the capability to estimate system states are among the other key requirements for an integrated hardware-software testbed [9]. Michael et al. [9] successfully demonstrated the application of a formation control algorithm on ground vehicles using a specific ground vehicle platform called the Scarab.
3.6 Gaps in Current Approaches
Many current hardware-software approaches for multi-UAV systems consider only a specific type of vehicle, and often an even more specific model such as the Parrot AR Drone 2.0 [31, 91]. While this is valuable, it does not demonstrate the applicability of these approaches to other vehicles with sophisticated sensor suites. After all, the Parrot AR Drone is one specific quadcopter, and research findings cannot be generalized unless rigorous testing is performed with different types of quadcopters.
The term environment is often misused. Some researchers [91] understand environment to mean either simulation (software) or hardware, but the term can also refer to the operational environment in which the system performs its mission. Some approaches to hardware-software testbed development [91] do not focus on a specific mission and address only a portion of the overall operation. While this divide-and-conquer technique is valuable for addressing the bigger problem, the link between the smaller portion and the bigger problem is often not identified, which in turn creates misunderstandings.
Some testbeds focus only on the simulation aspect, without any interface to the hardware or physical aspects of the system [31, 32]. These approaches include almost no discussion of the physical aspect of the system. In such simulation testbeds the vehicle is often treated as a point mass that performs some actions. Neglecting the dynamical aspects of the system decreases simulation fidelity, which in turn leads to insufficient analysis of the outcome.
In some cases [32], human interaction has been the focus of the testbed, with no onboard sensor systems. Some approaches [30] place the vehicle's high-level control on a workstation rather than on the robot itself. While this is a good approach for rapidly testing an algorithm, it lacks the flexibility needed for real-world application. Such approaches put extra emphasis on the communication between the workstation and the vehicle to ensure that commands are sent and sensory data are received on time.
Current approaches to testbed development lack systems engineering processes; key steps such as requirements development are explicitly missing. While some researchers have identified a set of requirements, those requirements are simple and apply only to the particular testbed instance they developed. Furthermore, the requirements are usually neither validated nor verified [31, 33, 34].
Current approaches let researchers change their control algorithms, but they do not explicitly discuss the adaptive behavior of the systems. In particular, what is missing is how the operational environment changes in the laboratory or facility where the vehicle is being tested. In many current approaches, for instance [33, 34], there is no explicit ontology that defines key concepts and relationships. Many approaches jump directly into an instance of their problem without proper consideration of the overall requirements and underlying ontology.
Many current testbeds offer embedded system models. Such tight integration of system behavior models with the rest of the testbed closes the door to future extensions and limits the testbed's reusability. In such circumstances the testbed becomes a rigid platform that cannot accept outside models. Furthermore, in such testbeds the scenario and the agents participating in it become tightly coupled to the testbed. This configuration makes it impossible to try out new scenarios with different configurations; the only remaining option is to change most of the testbed to make it useful for another application. Such changes can come at great cost.
Table 4 summarizes current gaps.
TABLE 4: GAP ANALYSIS SUMMARY

Reference | Simulation/Hardware/Hybrid | Extendible/Scalable | … | Formal Approach?
(Purta, Dobski et al. 2013) | Semi-hybrid | None | Partially | No
(O'Neill, Klepal et al. 2005) | Simulation | Limited | Partially | No
(D'Andrea and Murray 2003) | Semi-hybrid | None | Partially | No
(Montufar, Munoz et al. 2014) | Hardware | None | Partially | No
(Schmittle, Lukina et al. 2018) | Simulation | None | Partially | No
(Palacios, Quesada et al. 2017) | Hybrid | None | Partially | No
(Afanasov, Djordjevic et al. 2019) | Hardware | None | Partially | No
In summary, current methods tend to jump into a specific testbed instance without exploring other alternatives [92], primarily because requirements are not identified or are ill-defined. For instance, many researchers select a specific model of UAV or vehicle [9] without justifying why that specific model was chosen.
Currently, there is no explicit discussion of a formal ontology for creating hardware-software testbeds. Furthermore, the current literature on hardware-software testbeds fails to address the adaptive behavior of multi-UAV systems. Most current methods do not consider realistic scenarios, and they either ignore or oversimplify the operational context.
Many hardware-software testbed development efforts have focused on simple scenarios, such as formation flight with no complicating factors. Often, safety constraints are neglected or merely implied.
CHAPTER 4
TESTBED APPROACH
4.1 Methodology Overview and Key Elements
To fill the gaps identified in the previous chapter, the testbed developed in this work is driven by system requirements and system use cases. To ensure flexibility and extendibility, system use cases and the actors' behaviors and properties live outside the testbed. The testbed has an underlying architecture that guides the implementation and ensures the testbed is flexible enough for future extensions. This architecture is developed by following a formal ontology; to this end, a mechanism is developed to check the implementation against the ontology at run time.
To keep development costs down and ensure affordability, proven commercial tools, off-the-shelf components, and open-source components play an important role in developing the testbed.
Finally, it is demonstrated that the developed testbed has sufficient flexibility for modeling and evaluating complex engineered systems, specifically unmanned aerial vehicles.
The methodology follows a systems engineering approach to developing the modeling and simulation testbed. First the testbed requirements are clarified, and then a testbed ontology is developed to scope the problem and ensure there are no gaps in the representation. This ontology facilitates integration of the various testbed elements and ensures the flexibility of the overall testbed. A flexible, low-cost modeling and simulation testbed is prototyped by following a formal ontology, with two specific goals in mind:
• to demonstrate that the testbed is not a point solution, by importing various use
cases and actor behaviors
• to enable more robust evaluation of the system, by examining various scenarios
and use cases
4.2 Testbed Definition and Requirements
A testbed is a platform for conducting modeling, analysis, testing, and verification of new technologies (i.e., for exploring system behavior). Testbeds come in various modes: virtual/simulation only, physical/hardware only, or hybrid (virtual + physical). Table 5 shows non-functional requirements for a testbed, derived from the literature review.
TABLE 5: NON-FUNCTIONAL REQUIREMENTS FOR TESTBED
• Ease of use:
o Intuitive
o Minimal learning curve
• Process-Driven
o Enable systematic modeling and analysis of system properties
o Integration of hardware, software, and simulation environment
• Flexible
o Capable of adding new components, capabilities (features), models and scenarios
o Supporting one or more systems
o Allow users (e.g., researchers) to define various events of interest
• Low cost
o Minimum to no need for custom equipment or software to run
o Use of commercial-off-the-shelf (COTS) equipment
• Maintainable
o Easily perform upgrade to hardware and/or software
4.3 Use Case vs Scenario
A Use Case (UC) is a statement of how a user (actor) interacts with the system [93-95].
A system can be defined by multiple use cases. Use case relationships can be:
a) Extension, a behavior that is used rarely and often represents off-nominal
conditions [93-95];
b) Inclusion, a repeated behavior that can be separated out for easy access
[93-95];
c) Generalization, where a use case inherits general behavior from a higher-level
parent use case [93-95].
A use case has, at minimum, the following elements: goal, pre-conditions, post-conditions, basic flow of events, and alternate flows of events [93-95]. A use case generally has an actor, a human agent or a system that performs a set of tasks [93-95]. The actor that initiates the use case is considered primary, and any other actors who participate in completing it are considered secondary [93-95]. As part of its pre-conditions, each use case has a set of triggering events that initiate it [93-95]. Such an event could be an external business event or a system event that causes the use case to begin, or it could be the first step in the normal flow of events [93-95].
Pre-conditions are a list of activities (tasks) that must take place, or conditions that must be true, before the use case can start. Each pre-condition can be related to a specific component, which can be an actor from the list of actors or the scene, which defines the operational context (also known as the operational environment) [93-95].
Post-conditions are essentially requirements that the system must satisfy at the conclusion of the use case [93-95]. Post-conditions should include both minimal guarantees (i.e., what must happen even if the actor's goal is not achieved) and success guarantees (i.e., what happens when the actor's goal is achieved) [93-95].
The normal (basic) flow of events provides a detailed description of the user actions and system responses that take place during execution of the use case under normal, expected conditions [93-95]. This sequence of events ultimately leads to accomplishing the goal stated in the use case name and description [93-95]. A key distinction between a use case and a scenario is that a scenario is a specific instance of a use case [96]. Scenarios typically include more detail than the general use case [96]; one use case can have many scenarios, such as variations in some parameters or specific configurations [93-96].
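The use-case elements and the use-case/scenario distinction described above can be captured as simple records. The sketch below is illustrative only: the field names, the sample mission, and the parameter names are assumptions for demonstration, not the dissertation's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class UseCase:
    # Minimum use-case elements named in the text.
    goal: str
    pre_conditions: list    # conditions that must hold before the use case starts
    post_conditions: list   # minimal and success guarantees at conclusion
    basic_flow: list        # normal, expected sequence of events
    alternate_flows: dict = field(default_factory=dict)  # trigger condition -> branch steps

@dataclass
class Scenario:
    # A scenario is a specific instance of a use case with concrete parameters.
    use_case: UseCase
    parameters: dict

# Hypothetical example: a surveillance use case and one scenario instance of it.
patrol = UseCase(
    goal="Survey target area with a quadcopter",
    pre_conditions=["vehicle armed", "GPS lock acquired"],
    post_conditions=["vehicle landed safely"],
    basic_flow=["take off", "fly waypoints", "return to base", "land"],
    alternate_flows={"low battery": ["abort survey", "return to base", "land"]},
)
windy_day = Scenario(use_case=patrol, parameters={"wind_speed_mps": 9.0})
```

One use case can spawn many such `Scenario` records, each varying only the parameter values.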
The alternative flows of events document legitimate branches from the main flow that handle special conditions such as disruptions (also known as extensions) [93-95]. For each alternative flow, the condition that must be true for the extension to execute must be identified [93-95].
As part of any use case, various detailed information is documented to ensure all aspects of the use case are captured by its author. These are listed below; a use case template is shown in Appendix D.
• A list of exceptions describing any anticipated error conditions that could occur
during execution of the use case, and how the system is to respond to those
conditions.
• A list of any other use cases that are included ("called") by this use case.
• The frequency with which the use case will be used. This information is primarily
useful for system designers (e.g., 50 times per hour, 200 per day, once a week,
once a year, or on demand).
• A list of any additional requirements, such as nonfunctional requirements,
performance requirements, or other quality attributes.
• A list of any assumptions made in the analysis that led to accepting this use case
and writing its description.
4.4 Ontology-Based Approaches
Interest in ontologies has surged in recent years as systems have become increasingly complex. Ontologies have been used in a variety of applications, such as requirements analysis [97], metamodeling [98], enterprise integration [99], interoperability [100], model-driven engineering [101], ontology merging [102], formal concept analysis [103], architecture development [104], and systems engineering [105-108].
An ontology is "an explicit specification of a conceptualization" [109]. In some references it is described as a "formal, explicit specification of a shared conceptualization" [110]. Ontologies are expressed using a specific language [110].
A conceptualization is a set of objects, concepts, and other entities that are assumed to exist in some area of interest, together with the relationships that hold among them [101, 111]. It is an abstract view of the world, represented for a specific purpose [111]. A conceptualization should not change when the world changes. Once we commit to a certain conceptualization, we must admit only those domain models that are intended according to it [101, 111].
A metamodel defines general structure, constraints, and symbols [111, 112]. A metamodel by itself has no practical value [112]: it does not assign semantics to the symbols and rules; the ontology does [101, 111]. An ontology represents concepts and relationships formally using the structure provided by the metamodel [101, 111]. Ontologies that have a metamodel are more formal and have properties such as scalability, reuse, and interoperability [112]. Formal languages allow semantic reasoning and analysis, which is the main advantage of using ontologies.
4.5 Ontology Development Tasks
Developing an ontology is an iterative process [105, 112]. A few important tasks must be followed to create an ontology that is consistent and relevant to the problem. These start with determining the domain and scope of the ontology [113]; this is where the boundary and main use of the ontology are determined. The next step is to look for similar ontologies and reuse existing ones as much as possible [102, 112, 113]. To define the ontology correctly, the important terms in it should be enumerated to ensure all aspects of the domain are covered [102, 112]. After enumerating the important terms, the main classes and class hierarchies can be defined. Finally, the properties of each class, and the values each property can take, are defined [102, 112]. In essence this is a layered process in which each step builds on the previous one. Figure 2 shows this process.
FIGURE 2: ONTOLOGY DEVELOPMENT TASKS
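The layered process in Figure 2 can be expressed as an ordered checklist in which each task unlocks the next. This is a minimal sketch; the task wording paraphrases the steps in the text, and the helper function is an illustration, not part of the testbed.

```python
# Ordered ontology-development tasks, each building on the previous one.
ONTOLOGY_TASKS = [
    "determine domain and scope",
    "consider reusing existing ontologies",
    "enumerate important terms",
    "define classes and class hierarchy",
    "define class properties and their values",
]

def next_task(completed):
    """Return the first task not yet completed, respecting the layering.

    Returns None when one full pass is done; because ontology development
    is iterative, the process may then start over with refinements.
    """
    for task in ONTOLOGY_TASKS:
        if task not in completed:
            return task
    return None
```

For example, after finishing only the first task, `next_task(["determine domain and scope"])` points the modeler to ontology reuse, mirroring the ordering argued for above.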
4.6 Developing Ontology for UAV Domain
The purpose of this ontology is to guide development of a flexible, low-cost testbed for exploring unmanned aerial vehicle operations. The key questions this ontology answers concern the key elements of the domain and their relationships, the scope of a flexible hardware-software testbed, and how UAV systems handle disruptions in a given operational context. To this end, the important terms in the ontology are defined in Table 6.
TABLE 6: IMPORTANT TERMS IN ONTOLOGY
• Testbed: An experimentation and exploration platform that facilitates system development
• System of Interest: A system under test that can change its behavior in response to various
disruptions
• Disruption: An event that negatively or positively impacts system performance
• Adaptation Logic: An elaborate algorithm or set of heuristics that navigates the system
through a disruption
• Use Case/Scenario: An outline/narrative and sequence of events with pre-conditions
and post-conditions
• Simulation Environment: An environment run by a simulation engine, where all events
and actions are performed in a computer without any ties to actual, physical hardware
• Hardware: The physical components that interact with the physical environment
• Software: The algorithms and code that run on hardware or a computer
• Middleware: A mechanism enabling communication and data sharing among the various
components of the testbed
• Human Agent: The person performing system or domain modeling, or operating the system
4.6.1 Class Hierarchy
Following the important terminology defined in the previous section, a set of classes is defined next. These classes were created based on the literature review and a series of revisions made during implementation of the testbed, which will be discussed later in this chapter. The classes were created in the Protégé domain modeling software. Each class has a set of data properties and a set of values each property can take. Furthermore, each class is connected to other classes by relationships, which are defined in the next section. Table 7 lists the main classes and their properties. Figure 3 shows the class hierarchies from the Protégé domain modeling software.
FIGURE 3: CLASS HIERARCHY
TABLE 7: CLASSES AND CLASS HIERARCHY

Parent Class | Class Name | Sub-Classes | Class Properties (Property: Values) | Given/Inherited
Thing | Adaptation Mechanism | Algorithm; Heuristics; If-Then-Rules | Execution-Time: Date Time; Mechanism-Implementation: Java, Python | Given
Adaptation Mechanism | Algorithm | -- | -- | Inherited
Adaptation Mechanism | Heuristics | -- | -- | Inherited
Adaptation Mechanism | If-Then-Rules | -- | -- | Inherited
Thing | Configuration Management | -- | Config. Mng. Implementation: Java, Python, Simulink | Given
Thing | Dashboard | -- | Dashboard-Implementation: Java, Python | Given
Thing | Event | Disruption; Normal | Number-of-Rep.: Integer; Repeated-Event: Boolean; Time-of-Occurrence: Double | Given
Event | Disruption | External; Human-Triggered; Internal | -- | Inherited
Disruption | External | -- | -- | Inherited
Disruption | Human-Triggered | -- | -- | Inherited
Disruption | Internal | -- | -- | Inherited
Event | Normal | -- | -- | Inherited
Thing | Hardware | -- | Accuracy: Double; Hardware-Type: Com. Unit, Computer, Flight Computer, Sensor, Servo, Vehicle; Mobility: Mobile, Stationary; Processing Time: Double | Given
Thing | Hardware-Software Testbed | -- | -- | --
Thing | Human Agent | Designer; Modeler; Operator | Skill-Level: String | Given
Human Agent | Designer | Domain Modeler; System Modeler | -- | Inherited
Designer | Domain Modeler | -- | -- | Inherited
Designer | System Modeler | -- | -- | Inherited
Human Agent | Operator | -- | -- | Inherited
Thing | Middleware | -- | Data Transfer Rate: Double; Data Type: Boolean, Double, Integer, String; Middleware Type: Socket Communication | Given
Thing | Model | -- | Model Implementation: Java, Python, Simulink; Model Representation: System, System-of-System; Model Type: Deterministic, Probabilistic, State Machine | Given
Thing | Operational Environment | Physical Environment; Virtual Environment | Environment Type: Indoor, Outdoor; Geo. Coordinates: Double; Terrain: Desert, Jungle, Mountain, Sea; Weather: Cloudy, Rainy, Snowy, Sunny | Given
Operational Environment | Physical Environment | -- | -- | Inherited
Operational Environment | Virtual Environment | -- | -- | Inherited
Thing | Requirements | Mission Requirements; System Requirements | Description: String | Given
Requirements | Mission Requirements | -- | -- | Inherited
Requirements | System Requirements | -- | -- | Inherited
Thing | Software | Scenario Authoring Tool; Simulation Environment | Architecture: Open-Source; Software Type: Authoring, Code, Game-Engine, Modeling, Simulation-Engine; User-Interface: Drag-Drop, Script | Given
Software | Scenario Authoring Tool | -- | -- | Inherited
Software | Simulation Environment | Engineering Tool; Game-Engine Based | -- | Inherited
Simulation Environment | Engineering Tool | -- | -- | Inherited
Simulation Environment | Game-Engine Based | -- | -- | Inherited
Thing | System of Interest | Single System; System-of-Systems | System Category: Multi, Single; System Type: Air, Ground | Given
System of Interest | Single System | -- | -- | Inherited
System of Interest | System-of-Systems | -- | -- | Inherited
Thing | Use Case | -- | Alternative Flow: String; Flow-of-Events: String; Goal: String; Mission-Type: Area-Surveillance, Monitoring, Payload Delivery, Search and Rescue; Post-Condition: String; Pre-Condition: String; Use Case Category: Commercial, Military | Given
4.6.2 Classes, Relationships, and Overall Ontology
Table 8 shows each class defined in the previous section and its relationship with other classes or sub-classes in the ontology. Figure 4 shows the full ontology developed for the integrated testbed. While the ontology focuses on unmanned aerial vehicles, each class and its properties are carefully chosen to be sufficiently general; this generality is demonstrated in chapter five by extending the testbed to the autonomous vehicle domain.
Figure 4 captures the entire ontology in one place. It includes information about the classes, their relationships, the properties of each class, and the possible values of each property. The details of the classes, their properties, and their values are shown in Table 7.
TABLE 8: ONTOLOGY RELATIONSHIPS
Domain Relationship Range
Modeler Authors Use-Case
Modeler Creates Model
System-of-Interest Employs Adaptation-Mechanism
Middleware Facilitates Hardware-Software-Testbed
Use-Case Includes Event
Hardware Interacts-With Physical-Environment
Use-Case Is-Defined-In Operational-Environment
System-of-Interest Operates-In Operational-Environment
System-of-Interest Plays-Role-In Use-Case
Dashboard Provides-Access-To Hardware-Software-Testbed
Dashboard Provides-Exploration-To Designer
Dashboard Provides-Modeling-To Modeler
System-of-Interest Satisfies Requirements
Hardware-Software-Testbed Utilizes-Lab-Hardware Hardware
Hardware-Software-Testbed Utilizes-Lab-Software Software
Simulation-Environment Utilizes-Models Model
Adaptation Mechanism Utilizes-Operator Operator
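The domain-relationship-range rows of Table 8 can be treated as a schema against which instance-level assertions are checked, which is essentially what the run-time ontology checker described later must do. A minimal sketch (class and relationship names follow Table 8; the checking function itself is an illustrative assumption):

```python
# Subset of the (domain, relationship, range) triples from Table 8.
SCHEMA = {
    ("Modeler", "Authors", "Use-Case"),
    ("Modeler", "Creates", "Model"),
    ("System-of-Interest", "Employs", "Adaptation-Mechanism"),
    ("Middleware", "Facilitates", "Hardware-Software-Testbed"),
    ("Use-Case", "Includes", "Event"),
    ("Hardware", "Interacts-With", "Physical-Environment"),
    ("System-of-Interest", "Operates-In", "Operational-Environment"),
}

def assertion_is_valid(subject_class, relationship, object_class):
    """An assertion is valid only if its subject class, relationship, and
    object class match a (domain, relationship, range) row of the schema."""
    return (subject_class, relationship, object_class) in SCHEMA
```

For instance, `assertion_is_valid("Modeler", "Authors", "Use-Case")` passes, while a modeler "authoring" a model would be rejected because Table 8 pairs Modeler with Model only through Creates.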
FIGURE 4: TESTBED ONTOLOGY FOR UAV DOMAIN
4.6.3 Ontology Reasoning
Protégé offers many ontology reasoning packages to domain modelers. Ontology reasoning allows domain modelers to ensure that ontology elements are consistent and that there are no mismatches between concepts and their relationships. For this research the HermiT ontology reasoner is used. HermiT determines whether the ontology is consistent and identifies subsumption relationships between classes.
HermiT analyzes the ontology by examining the class hierarchy, object property hierarchy, data property hierarchy, class assertions, object property assertions, data property assertions, and same-individual assertions. Table 9 shows the report generated by the HermiT reasoner after it has analyzed the ontology. In case of any inconsistency, the report shows warnings in red.
TABLE 9: HERMIT REASONER REPORT
----------------- Running Reasoner -------------------
Pre-computing inferences:
- class hierarchy
- object property hierarchy
- data property hierarchy
- class assertions
- object property assertions
- same individuals
Ontologies processed in 423 ms by HermiT
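HermiT itself is a full OWL reasoner (a Java tool), so its checks are far richer than any short sketch. As a rough illustration of one kind of class-hierarchy problem such a reasoner flags, the hypothetical sketch below detects a cycle in a subclass hierarchy, since no class should be a strict subclass of itself:

```python
def hierarchy_has_cycle(subclass_of):
    """subclass_of maps class name -> parent class name (None at the root).

    Returns True if following parent links from some class revisits it,
    i.e., the hierarchy is not a proper tree/DAG toward the root.
    """
    for start in subclass_of:
        seen = set()
        node = start
        while node is not None:
            if node in seen:
                return True
            seen.add(node)
            node = subclass_of.get(node)
    return False

# Fragment of the dissertation's hierarchy: everything descends from Thing.
consistent = {"Thing": None, "Event": "Thing",
              "Disruption": "Event", "External": "Disruption"}
# A deliberately broken hierarchy: A and B claim to subclass each other.
inconsistent = {"A": "B", "B": "A"}
```

A real reasoner additionally checks property domains/ranges, assertions, and logical satisfiability; this sketch covers only the hierarchy-walk idea.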
4.7 Testbed Architecture
Figure 5 shows the high-level architecture of the testbed. This architecture is developed by following the ontology defined in the previous section; each component is carefully chosen to address one or more ontology classes. The dashboard is the door into the testbed and its components: it lets users such as system modelers, experimenters, and domain modelers access the various components of the testbed, perform modeling, collect data, and carry out various analyses. The dashboard also enables importing various components into the testbed, such as use cases and the domain model (ontology). The database keeps information about the actors used in the simulation. It also contains the model libraries used in the simulation environment. The model library uses a dynamic class mechanism that enables importing system behavior files; this capability is discussed in detail in the following sections. The middleware connects the dashboard to the other components, such as hardware, the system/domain model, the virtual world/simulation environment, and the physical world.
FIGURE 5: HIGH-LEVEL TESTBED ARCHITECTURE
The hardware component refers to any component that an experiment will use in a laboratory environment. The system modeling and domain modeling components are software and tools that help modelers develop the various kinds of models used in the simulation environment. The virtual component refers to the software or tools used to generate the simulated environment; these can be game engines or engineering tools. Physical-world components are those that exist in the physical environment, which corresponds to the operational environment. For instance, if vehicles operate in an indoor laboratory, the environment in which they operate can be viewed as the physical environment (also known as the operational environment). The arrow connecting the physical environment to the hardware emphasizes that the hardware gets information from the physical environment directly through its sensors.
The arrows and boxes in green show the currently implemented and working links and layers. This architecture results in a layered and modular testbed. The benefit of such an architecture is that it is flexible: each layer or module can be changed, updated, or extended without impacting the overall architecture.
4.8 Technologies Used in Testbed Development
To satisfy the non-functional requirements set out in previous sections, the implementation strategy is to use low-cost, stable, open-source components. The goal is to select tools and software backed by a programming language that is freely available and well documented. To this end, the components used for the testbed in this dissertation are either open source, come with a free license, or are backed by a popular programming language. These components are discussed next.
4.8.1 Protégé
Protégé is an open-source, free ontology development and visualization software package written in the Java programming language. It is lightweight and can run on any computer with basic capabilities. In the testbed developed for this dissertation, it serves as the domain modeling software component. Later sections discuss how the ontology relates to the rest of the testbed.
4.8.2 Anylogic®
AnyLogic is a commercially available tool with a free personal-use license. It supports different modeling techniques such as agent-based modeling, discrete event simulation, and system dynamics, and it enables a hybrid modeling and simulation environment, which is a key advantage for modeling complex systems. The main components of the software are written in Java, so Java programs developed outside the software can easily be imported or interfaced with it. Given the flexibility and popularity of Java, libraries exist for connecting the software to programs written in other languages such as Python. The software is well documented and easy to use. In the testbed developed in this dissertation, it serves as the testbed dashboard, simulation engine, and data collection/analysis tool.
4.8.3 Java™ Programming Language
Java is a popular programming language with rich supporting libraries. In the testbed developed in this dissertation, it serves as the primary language for creating the model libraries (especially the dynamics models). It is flexible and well documented for interfacing with other programming languages (e.g., Python).
4.9 Testbed Implementation
As mentioned earlier, the testbed has a layered architecture, and the same principle is followed in its implementation. Figure 6 shows the layered implementation architecture of the testbed.
FIGURE 6: IMPLEMENTATION ARCHITECTURE
On the top layer, the dashboard is the interface between the testbed and the user. There are also elements that interface with different file types, such as text and Excel files. The testbed is implemented in AnyLogic, a commercial software package based on Java, so its capabilities are easy to extend through the software's Java interface. Python is today the main language used for controlling hardware. Together, the Java-Python and Java-AnyLogic interfaces give the testbed the flexibility to be extended to hardware and to create a hardware-in-the-loop capability.
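One low-cost way to realize such a cross-language link is a plain socket carrying line-delimited JSON messages. The sketch below shows only the Python side on a loopback socket pair; the message format and the `set_waypoint` command are assumptions for illustration, since in the actual testbed one endpoint would live inside AnyLogic/Java.

```python
import json
import socket

def send_command(sock, command, payload):
    """Serialize a command as one JSON line, as a simple middleware might."""
    message = json.dumps({"cmd": command, "data": payload}) + "\n"
    sock.sendall(message.encode("utf-8"))

def recv_command(sock_file):
    """Read one JSON line back off the socket's file wrapper."""
    return json.loads(sock_file.readline())

# Loopback demonstration: both ends in one process via a socket pair.
a, b = socket.socketpair()
send_command(a, "set_waypoint", {"x": 3.0, "y": 4.0})
msg = recv_command(b.makefile("r"))
a.close()
b.close()
```

Line-delimited JSON keeps both endpoints trivially parseable in Java and Python alike, which is why it is a common choice for this kind of bridge.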
The top layer (dashboard) is the gate for importing information into the testbed and selecting the testbed mode. It also presents data and information to the user. AnyLogic supports multi-view capabilities, so information can be shown to the user from different perspectives.
The middle layer runs the simulation; it is the main simulation engine, where the action happens at run time. This layer directs all the agents in the third layer as to which actions to perform, initializes their parameters, collects their data, and sends the information to the dashboard. In this layer, the actors defined in the database are pulled into the simulation environment. Before running the simulation, the software ensures that a model for each actor exists in the actor library. An algorithm also checks the imported use case against the actor database to ensure that all the actor properties defined in the use case match those defined in the database. If the properties do not match, the simulation is terminated.
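The pre-run check just described (every actor in the use case must match the actor database, or the run is aborted) can be sketched as follows. The record layout and property names here are illustrative assumptions, not the testbed's actual data model.

```python
def validate_actors(use_case_actors, actor_db):
    """Compare use-case actors against the actor database.

    Returns a list of mismatch messages; an empty list means the
    simulation may start, a non-empty list means it is terminated.
    """
    errors = []
    for actor in use_case_actors:
        record = actor_db.get(actor["id"])
        if record is None:
            errors.append(f"actor {actor['id']} not found in database")
            continue
        for prop, value in actor["properties"].items():
            if record.get(prop) != value:
                errors.append(f"actor {actor['id']}: property {prop!r} mismatch")
    return errors

# Hypothetical database with one registered actor.
db = {"uav-1": {"type": "quadcopter", "mobility": "Mobile"}}
ok = validate_actors([{"id": "uav-1", "properties": {"type": "quadcopter"}}], db)
bad = validate_actors([{"id": "uav-2", "properties": {}}], db)
```

Returning all mismatches at once, rather than stopping at the first, gives the experimenter a complete picture of why a run was refused.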
The middle layer is also connected to the actor library. This library is a relative path to the Java source code and the actor .jar file folder. It has a main superclass called "Actor," which includes headers for the functions that actors execute. The details of this library are discussed in a subsequent section.
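The testbed's actor library is written in Java; purely as an illustration of the same pattern, the sketch below shows an `Actor` superclass that declares the required functions, plus a dynamic loader in the spirit of the dynamic class mechanism mentioned earlier. The `Quadcopter` subclass and its methods are hypothetical examples.

```python
import importlib

class Actor:
    """Superclass declaring the functions every actor must implement,
    analogous to the 'Actor' superclass in the testbed's model library."""

    def initialize(self, params):
        raise NotImplementedError

    def step(self, dt):
        raise NotImplementedError

def load_actor(module_name, class_name):
    """Dynamically import an actor class by name, so new behavior files
    can be dropped in without modifying the testbed itself."""
    cls = getattr(importlib.import_module(module_name), class_name)
    if not issubclass(cls, Actor):
        raise TypeError(f"{class_name} does not extend Actor")
    return cls

class Quadcopter(Actor):
    """Hypothetical behavior file: a trivial actor implementation."""

    def initialize(self, params):
        self.pos = params.get("pos", (0.0, 0.0))

    def step(self, dt):
        return self.pos  # a real model would integrate dynamics here
```

Because the testbed only ever talks to the `Actor` interface, swapping in a new vehicle model is a matter of supplying another conforming class.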
The middle layer is also responsible for running the ontology checker algorithm, which ensures that each component and its properties match the ontology. The ontology is developed in an ontology editor called Protégé.
4.10 Ontology Import
The ontology developed in Protégé goes through several steps to be converted into a format
suitable for import into the testbed. To make it readable to both the user and the testbed,
the ontology classes and their properties are converted into text files. Each file contains information
regarding the parent class, the properties and their values, the relationships to other classes, as well as
a list of all possible instances of the class.
Protégé does not directly support text files. However, it can export the ontology
to HTML files through OWLDoc. Custom code converts these HTML
files into text files, which are then imported into the testbed. Figure 7 shows how the
ontology developed in Protégé is converted and imported into the testbed.
FIGURE 7: PROCESS FOR CONVERTING AND IMPORTING ONTOLOGY
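The HTML-to-text conversion step can be sketched as follows. The dissertation's custom converter is not shown here; this is a minimal stand-in assuming a simple tag-stripping rule, and the directory layout is hypothetical:

```java
import java.io.IOException;
import java.nio.file.*;

// Hypothetical converter: strips markup from OWLDoc-exported class pages and
// writes plain-text files the testbed can parse. The tag-stripping rule and
// file naming are illustrative assumptions, not the dissertation's code.
public class OwlDocToText {

    // Remove markup and collapse whitespace, keeping only the visible text.
    static String stripHtml(String html) {
        return html
                .replaceAll("(?s)<(script|style)[^>]*>.*?</\\1>", " ") // drop scripts/styles
                .replaceAll("<[^>]+>", " ")   // drop remaining tags
                .replaceAll("&nbsp;", " ")
                .replaceAll("\\s+", " ")
                .trim();
    }

    // Convert every .html file in inDir to a .txt file in outDir.
    static void convertAll(Path inDir, Path outDir) throws IOException {
        Files.createDirectories(outDir);
        try (DirectoryStream<Path> pages = Files.newDirectoryStream(inDir, "*.html")) {
            for (Path page : pages) {
                String text = stripHtml(Files.readString(page));
                String name = page.getFileName().toString().replace(".html", ".txt");
                Files.writeString(outDir.resolve(name), text);
            }
        }
    }

    public static void main(String[] args) {
        System.out.println(stripHtml("<h1>Operational_Environment</h1><p>Terrain_ = Jungle,</p>"));
    }
}
```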
4.11 Use Cases Import
The use cases for the system-of-interest are written in Excel files. Each Excel file represents
a specific use case or scenario for the system-of-interest. An algorithm reads all the information
from the use case file, such as Use Case ID, Use Case Name, Created By, Date Created, Use Case
Category, Last Updated By, Last Revision Date, List of Actors (Type and Priority), Description
of Use Case, Triggering Events, Pre-Conditions, values, and related components, Post-Conditions,
values, and related components (aka requirements), Normal Flow of Events, Alternative Flow of
Events, Exceptions, Including Use Cases, Special Requirements, Frequency of Use, Assumptions,
and Notes and Issues.
AnyLogic has an Excel file interface, which makes the integration easier. Each Excel file
is read by the algorithm in the dashboard, and the information in each category is saved into
a local variable for later use in the testbed.
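The field-extraction step can be illustrated as follows. The actual testbed reads the workbook through AnyLogic's Excel interface; this stand-in assumes the rows have already been split into label/value cells, and the field labels follow the use-case template above:

```java
import java.util.*;

// Illustrative sketch: map each labeled field of a use-case sheet (e.g.,
// "Use Case ID" -> "1") into local variables for later use in the testbed.
// The parsing rule (label in column 0, value in column 1) is an assumption.
public class UseCaseReader {

    static Map<String, String> readFields(List<String[]> rows) {
        Map<String, String> fields = new LinkedHashMap<>();
        for (String[] row : rows) {
            if (row.length >= 2 && !row[0].isBlank()) {
                // Normalize the label by dropping the trailing colon.
                fields.put(row[0].replace(":", "").trim(), row[1].trim());
            }
        }
        return fields;
    }

    public static void main(String[] args) {
        List<String[]> rows = List.of(
                new String[] {"Use Case ID:", "1"},
                new String[] {"Use Case Name:", "Area-Surveillance"},
                new String[] {"Created By:", "Edwin Ordoukhanian"});
        System.out.println(readFields(rows).get("Use Case Name"));
    }
}
```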
After importing the use cases into the testbed, an algorithm first creates a list of all actors in
the use case. These actors are then compared against the actors and properties defined in the actor
database in a step-by-step approach. In the first step, the Actor ID is verified. If the Actor ID matches,
the algorithm then checks the Actor Type. If the Actor Type is correct, it moves to the next variable set. For
example, the operational context has four variables: Weather, DayTime, Terrain, and Area. The
algorithm checks every variable that is defined for the actor.
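A sketch of this step-by-step verification follows. The record fields, map layout, and method names are illustrative assumptions; the checking order (Actor ID, then Actor Type, then each variable) follows the description above:

```java
import java.util.*;

// Sketch of the step-by-step verification of use-case actors against the
// actor database: first Actor ID, then Actor Type, then each variable.
public class ActorVerifier {

    record ActorEntry(String id, String type, Map<String, String> variables) {}

    // Returns null if the use-case actor matches its database entry,
    // otherwise a message describing the first mismatch found.
    static String verify(ActorEntry useCase, Map<String, ActorEntry> database) {
        ActorEntry db = database.get(useCase.id());
        if (db == null) return "Unknown Actor ID: " + useCase.id();
        if (!db.type().equals(useCase.type()))
            return "Actor Type mismatch for " + useCase.id();
        for (Map.Entry<String, String> v : useCase.variables().entrySet()) {
            String expected = db.variables().get(v.getKey());
            if (!v.getValue().equals(expected))
                return "Variable " + v.getKey() + " mismatch for " + useCase.id();
        }
        return null; // all checks passed; the simulation may proceed
    }

    public static void main(String[] args) {
        Map<String, ActorEntry> db = new HashMap<>();
        db.put("Actor 1", new ActorEntry("Actor 1", "OperationalContext",
                Map.of("Weather", "Sunny", "Terrain", "Jungle",
                       "DayTime", "Morning", "Area", "50x50")));
        ActorEntry fromUseCase = new ActorEntry("Actor 1", "OperationalContext",
                Map.of("Weather", "Sunny", "Terrain", "Desert",
                       "DayTime", "Morning", "Area", "50x50"));
        System.out.println(verify(fromUseCase, db)); // reports the Terrain mismatch
    }
}
```

In the testbed, a non-null result at this stage terminates the simulation, as described above.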
4.12 Actor Database and Actor Library
To make the testbed as flexible as possible, actors and their behaviors are defined outside
of the testbed. This gives experimenters and modelers an opportunity to share the actor database
and associated models. Such a capability creates a platform where various experimenters and
modelers can contribute to enriching the testbed with actors exhibiting different behaviors.
In this case, a database is created to define each actor's type, initial conditions, and health status.
In the case of a UAV, its route is also defined in the database by a set of waypoints. Since the
operational environment is treated like an actor in the AnyLogic simulation environment, this
database also allows experimenters to define the properties of the operational environment.
Such a capability also enables adding new actors to the simulation environment. As shown
in the next chapter, a few of the use cases extend the testbed to other domains, such as the
autonomous vehicle domain, and defining actor (agent) properties and behavior outside the testbed
makes such extension smooth.
To ensure compatibility and ease of use across platforms, the database defined in this
case is a Microsoft Excel sheet, with each row defining an actor and its properties. This database
is connected to the AnyLogic environment through AnyLogic's database API, and actors are created
upon initializing the simulation environment.
To enable importing various actor models into the testbed, a library
of actors has been created using Java's dynamic class mechanism. The library has a superclass called
"Actor." This abstract superclass only defines the main function headers for the actors. These functions
are:
• OnStartUp()
• OnStep()
• UpdateData()
• getInitialLocation()
These functions are shared among all actors imported into the testbed. The detail
of each function varies per actor, and it is the modeler's responsibility to define the body of these
functions. However, since they are defined outside the testbed, no modification to the
main testbed is required.
The OnStartUp function sets up the actor at the beginning of the simulation. Once
called, it loads the actor with the information necessary to start the simulation. This
function is executed only at the beginning of the simulation.
The OnStep function defines the behavior of the system at every simulation time step, i.e., every
time the function is called and executed. Each actor has a different behavior, and these
behaviors are defined in the OnStep function.
The UpdateData function is responsible for updating the actor's variables in case these
parameters change during system operation. This function essentially facilitates adaptive
behavior: whenever the system needs to update its originally loaded information, it should use this
function to update its parameters.
The getInitialLocation function returns the initial location of the actor at
the beginning of the simulation. This covers the case where the location of the actor is defined in the
external .jar file and the information must be accessed at the beginning of the simulation.
As can be concluded from the description and structure of these functions, they are not
tailored toward a specific type of actor. They are general-purpose functions, generic by nature and
common to all actor types. This capability enables importing various actor
behaviors without modifying the source code of the testbed.
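Based on the four function headers described above, a minimal sketch of the abstract superclass and one concrete actor might look as follows. The parameter and return types are assumptions, since the text only names the functions, and the example behavior is purely illustrative:

```java
import java.util.Map;

// Minimal sketch of the abstract "Actor" superclass; types are assumptions.
abstract class Actor {
    public abstract void OnStartUp();                 // load startup information
    public abstract void OnStep();                    // behavior at each time step
    public abstract void UpdateData(Map<String, String> params); // adaptive updates
    public abstract double[] getInitialLocation();    // initial X, Y, Z
}

// Example concrete actor defined outside the testbed (e.g., shipped in a .jar).
class SimpleUav extends Actor {
    private double[] location = {0.0, 0.0, 0.0};
    private String healthStatus = "Unknown";

    @Override public void OnStartUp() { healthStatus = "Good"; }

    @Override public void OnStep() {
        location[2] += 1.0; // illustrative behavior: climb one unit per step
    }

    @Override public void UpdateData(Map<String, String> params) {
        if (params.containsKey("HealthStatus"))
            healthStatus = params.get("HealthStatus");
    }

    @Override public double[] getInitialLocation() {
        return new double[] {0.0, 0.0, 15.0};
    }
}

public class ActorDemo {
    public static void main(String[] args) {
        Actor uav = new SimpleUav();
        uav.OnStartUp();
        uav.OnStep();
        System.out.println(uav.getInitialLocation()[2]); // prints 15.0
    }
}
```

Because the testbed only calls these four headers, any subclass of Actor can be dropped in without touching the testbed's source code.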
CHAPTER 5
EXPERIMENTATION RESULTS
In this chapter, various use cases are developed to test the capabilities of the testbed as well
as the ontology developed in the previous chapter. The use cases have increasing
complexity to ensure that various components of the testbed are engaged. Use cases are first selected
from the UAV domain. Thereafter, to show the extendibility of the ontology, use cases are defined
from the autonomous vehicle (AV) domain. To show that the testbed and the ontology are both
extendible and flexible, a use case is developed at the intersection of the UAV and AV domains.
5.1 Use Case I: Single UAV Operation
The goal of this use case is to demonstrate testbed capabilities and to provide a baseline.
This ensures that all components work together seamlessly. Therefore, this use case is
simple yet complex enough to show the testbed's main capabilities. As mentioned earlier, use
cases live outside the testbed; that is the main distinction between the testbed developed in this
dissertation and other testbeds described in the literature. Table 10 shows the information
imported into the testbed regarding this use case.
TABLE 10: USE CASE I DESCRIPTION
Use Case ID: 1
Use Case Name: Area-Surveillance Use-Case Category Military
Created By: Edwin Ordoukhanian Last Updated By: Edwin O.
Date Created: August 25 2021 Last Revision Date: August 25 2021
Actor(s): Actor ID Actor Type Actor Priority
Actor 0 UAV Primary
Actor 1 OperationalContext Primary
Description: Goal is to deploy a single UAV to search and map an area more than X
percent and within T seconds
Pre-Conditions: Cond. ID Cond. Variable Condition Value Related
Component
0 Health_Status Good Actor 0
1 Weather Sunny Actor 1
2 Terrain Desert Actor 1
3 DayTime Morning Actor 1
4 Area 50x50 Actor 1
PostConditions: Cond. ID Cond. Variable Condition Value Related
Component
0 Covered Area >95 Actor 0
1 Mission Time <200 Actor 0
Normal Flow: Step Number Actor Action
0 Actor 0 LoadWaypoints;
1 Actor 0 FollowWaypoints;
2 Actor 0 TakePictures;
3 Actor 0 CalculatesProgress;
The information provided in Table 10 is inserted into the Excel file designed for this
testbed. This Excel file is then imported into the testbed. The ontology developed in Protégé is also
converted into text files and imported into the testbed. Figure 8 is a screenshot of the testbed
dashboard showing that the components were successfully imported.
FIGURE 8: USE CASE I DASHBOARD VIEW
Table 11 describes the elements of the dashboard.
TABLE 11: DASHBOARD ELEMENTS
1. Use case path selection and excel file search
2. Use case selection from a drop-down list, loaded from the path
3. Loaded use case information
4. Testbed mode selection buttons
5. List of loaded actors from the use case
o Each actor’s model type can be selected from the drop-down menu in front of the actor
6. Path to ontology text files
7. Ontology checker enabling button
8. List of loaded classes from the ontology
9. An audit trail box showing what has been selected
10. An edit box to insert new path directory into testbed log files
The actor information is also entered into the actor database. Before running the dashboard,
the actor .jar files are imported. The .jar file for the UAV contains nonlinear models of a quadcopter;
a summary of the model is described in Appendix A. The .jar files and database content are
used by the testbed to create the required actors for each use case. Table 12 shows the database entry for
use case I.
After running the simulation, the user can see 2D and 3D views of the use
case, the mission requirements, and the status of each requirement. Satisfied requirements are
shown in green, and requirements not yet satisfied are shown in red. On the right side
of the screen, the user can also see the set of ontology classes. If the properties defined in the testbed
match the properties defined in the ontology, that ontology class is shown in green. If there
are any mismatches, a set of warnings appears on the screen. These warnings guide the
user in understanding which classes do not follow the developed ontology. Figure 9 shows a
screenshot after running use case I. Table 13 shows the elements of the dashboard during runtime.
TABLE 12: USE CASE I ACTOR DATABASE
Actor_ID | Actor_Type | initialConditions | DesiredT | ID
Actor 0 | UAV | 0,0,0,0,0,0,0,0,0,0,0,0 | 20,40,60,80,100,120,140 | 0
Actor 1 | OperationalContext | 0, | 0, | 999

Actor_ID | Xt | Yt | Zt | initialTime | CameraAvailable | HealthStatus
Actor 0 | 0,10,40,40,10,10,40 | 0,10,10,30,30,40,40 | 15,15,15,15,15,15,15 | 0 | Y | Good
Actor 1 | 0, | 0, | 0, | 0 | N | N

Actor_ID | Weather | Terrain | DayTime | Area
Actor 0 | - | - | - | -
Actor 1 | Sunny | Jungle-Road | Morning | 50x50
FIGURE 9: USE CASE I RUNTIME VIEW
TABLE 13: DASHBOARD ELEMENTS DURING RUNTIME
1. 2D (top) and 3D (bottom) views of the scenario
2. Mission or Actor specific requirements loaded from the use case file
3. Measured variables during runtime for specified requirements
4. Loaded classes from ontology files
5. Warning received from ontology checker algorithm (if enabled)
6. Simulation Clock
7. Console of dashboard showing information regarding simulation and
models
8. Buttons to start, pause, and stop simulation
9. Buttons to speed up or slow down the simulation
Table 14 shows a snapshot of the operational environment log file and compares it to the
information provided for the operational environment class in the ontology. The information
mismatch is indicated with an arrow. Table 15 contains the same information for the warning
related to the simulation environment.
TABLE 14: OPERATIONAL ENVIRONMENT LOG FILE COMPARISON
Testbed Operational Environment Log File:
Is A: Operational_Environment
Environment Type: Outdoor
Coordinates: double
Terrain: Jungle
Weather: Sunny
➔ DayTime: Morning
Operational Environment Class Information:
Parent Class Name= Thing_
Environment-Type_ = Indoor,Outdoor,
Geographic-Coordinates_ = double,
Terrain_ = Dessert,Jungle,Mountain,Sea,
Weather_ = Cloudy,Rainy,Snowy,Sunny,
TABLE 15: SIMULATION ENVIRONMENT LOG FILE COMPARISON
Simulation Environment Log File:
Is A: Simulation_Environment
➔ Architecture: Proprietary
Software Type: Simulation Engine
User-Interface: Drag-Drop
Simulation Environment Class Information:
Parent Class Name= Thing_
Architecture_ = Open-Source,
Software-Type_ = Authoring,Code,Game-Engine,Modeling,Simulation-Engine,
User-Interface_ = Drag-Drop,Script,
FIGURE 10: USE CASE I AGENT VIEW
Figure 10 shows the agent view for this use case. The agent view shows the X, Y, Z coordinates
of the actor as they change over time in a time plot. The agent view also shows how the actor's
variables and their values change during the simulation.
5.2 Use Case II: Multi-UAV Operation
The goal of this use case is to demonstrate the extendibility of the testbed and ontology.
This ensures that both the testbed components and the ontology are flexible enough to handle a
system-of-interest composed of two separate systems working together to accomplish the
mission requirements. Table 16 shows the information imported into the testbed regarding
this use case.
TABLE 16: USE CASE II DESCRIPTION
Use Case ID: 2
Use Case Name: Area-Surveillance Use-Case Category Military
Created By: Edwin Ordoukhanian Last Updated By: Edwin O
Date Created: August 25 2021 Last Revision Date: September 30 2021
Actor(s): Actor ID Actor Type Actor Priority
Actor 0 UAV Primary
Actor 1 OperationalContext Primary
Actor 2 UAV Primary
Description: Goal is to deploy a single UAV to search and map an area more than X
percent and within T seconds
PreConditions: Cond. ID Cond. Variable Condition Value Related
Component
0 Health_Status Good Actor 0
1 Weather Sunny Actor 1
2 Terrain Jungle Actor 1
3 DayTime Morning Actor 1
4 Area 50x50 Actor 1
5 Health_Status Good Actor 2
PostConditions: Cond. ID Cond. Variable Condition Value Related
Component
0 Covered Area >90 Actor 0
1 Mission Time <250 Actor 0
Normal Flow: Step Number Actor Action
0 Actor 0 LoadWaypoints;
1 Actor 0 FollowWaypoints;
2 Actor 0 TakePictures;
3 Actor 0 CalculatesProgress;
4 Actor 2 FollowWaypoints;
5 Actor 2 TakePictures;
6 Actor 2 LoadWaypoints;
7 Actor 2 CalculatesProgress;
Alternative Flows: Branching Step Number | Condition Variable | Condition Value | Actor | Action
2a | Time | >40 | Actor 0 | Lands
Figure 11 shows a screenshot of the testbed dashboard while importing the use case and other
components. The elements of the dashboard are discussed in Table 11. Figure 12 shows the runtime
view of the dashboard; the elements of this view are discussed in Table 13. Figure 13 shows the actor
view, which shows the time plot of the X, Y, Z coordinates and the set of variables for each
actor.
FIGURE 11: USE CASE II DASHBOARD VIEW
FIGURE 12: USE CASE II RUNTIME VIEW
The actor information is also entered into the actor database. Before running the dashboard,
the actor .jar files are imported into the testbed. The .jar files and database content are used by the
testbed to create the required actors for each use case. Table 17 shows the database entry for use case
II.
TABLE 17: USE CASE II ACTOR DATABASE
Actor_ID | Actor_Type | initialConditions | DesiredT | ID
Actor 0 | UAV | 0,0,0,0,0,0,0,0,0,0,0,0 | 20,40,60,80,100,120,140 | 0
Actor 1 | OperationalContext | 0, | 0, | 999
Actor 2 | UAV | 0,0,0,0,0,0,0,0,0,0,0,0 | 20,40,60,80,100,120,140 | 1

Actor_ID | Xt | Yt | Zt | initialTime | CameraAvailable | HealthStatus
Actor 0 | 0,10,40,40,10,10,40 | 0,10,10,30,30,40,40 | 15,15,15,15,15,15,15 | 0 | Y | Good
Actor 1 | 0, | 0, | 0, | 0 | N | N
Actor 2 | 0,10,40,40,10,10,40 | 0,10,10,30,30,40,40 | 10,10,10,10,10,10,10 | 0 | Y | Good

Actor_ID | Weather | Terrain | DayTime | Area
Actor 0 | - | - | - | -
Actor 1 | Sunny | Jungle | Morning | 50x50
Actor 2 | - | - | - | -
(A)
(B)
FIGURE 13: USE CASE II ACTOR VIEW FOR ACTOR 0 (A) AND ACTOR 2 (B)
Table 18 shows a snapshot of the operational environment log file and compares it to the
information provided for the operational environment class in the ontology. The information
mismatch is indicated with an arrow. Table 19 contains the same information for the warning
related to the simulation environment.
TABLE 18: OPERATIONAL ENVIRONMENT LOG FILE COMPARISON
Testbed Operational Environment Log File:
Is A: Operational_Environment
Environment Type: Outdoor
Coordinates: double
Terrain: Jungle
Weather: Sunny
➔ DayTime: Morning
Operational Environment Class Information:
Parent Class Name= Thing_
Environment-Type_ = Indoor,Outdoor,
Geographic-Coordinates_ = double,
Terrain_ = Dessert,Jungle,Mountain,Sea,
Weather_ = Cloudy,Rainy,Snowy,Sunny,
TABLE 19: SIMULATION ENVIRONMENT LOG FILE COMPARISON
Simulation Environment Log File:
Is A: Simulation_Environment
➔ Architecture: Proprietary
Software Type: Simulation Engine
User-Interface: Drag-Drop
Simulation Environment Class Information:
Parent Class Name= Thing_
Architecture_ = Open-Source,
Software-Type_ = Authoring,Code,Game-Engine,Modeling,Simulation-Engine,
User-Interface_ = Drag-Drop,Script,
5.3 Use Case III: Adaptive Multi-UAV Operation
The goal of this use case is to demonstrate that the testbed and ontology can be further extended to
include adaptive behavior. In this use case, the system-of-interest employs an adaptation
logic in the form of an if-then rule. This demonstrates that a) the system-of-interest can employ pre-
planned protocols, which are a type of adaptation mechanism; b) the system-of-interest can utilize
resources, if instructed; and c) the testbed and ontology can both support such behavior. Table 20
shows the information imported into the testbed regarding this use case.
TABLE 20: USE CASE III DESCRIPTION
Use Case ID: 3
Use Case Name: Area-Surveillance Use-Case Category Military
Created By: Edwin Ordoukhanian Last Updated By: Edwin O
Date Created: September 30 2021 Last Revision Date: September 30 2021
Actor(s): Actor ID Actor Type Actor Priority
Actor 0 UAV Primary
Actor 1: Operational Context Primary
Actor 2: ReserveUAV Primary
Description: Goal is to deploy a single UAV to search and map an area more than X
percent and within T seconds
PreConditions: Cond. ID Cond. Variable Condition Value Related
Component
1 Health_Status Good Actor 0
2 Weather Sunny Actor 1
3 Terrain Jungle Actor 1
4 DayTime Morning Actor 1
5 Area 50x50 Actor 1
6 UAV_Health Good Actor 2
PostConditions: Cond. ID Cond. Variable Condition Value Related Component
0 Covered Area >95 Actor 0
1 Mission Time <200 Actor 0
Normal Flow: Step Number Actor Action
0 Actor 0 LoadWaypoints;
1 Actor 0 FollowWaypoints;
2 Actor 0 TakePictures;
3 Actor 0 CalculatesProgress;
Alternative Flows: Branching Step Number | Condition Variable | Condition Value | Actor | Action
2a | Time | >40 | Actor 0 | Lands
2b | Time | >40 | Actor 0 | LaunchReserve
Figure 14 shows the dashboard view. The elements of the import view are described in
Table 11. Figure 15 and Figure 16 show the simulation results before and after the disruption, respectively.
As in previous cases, classes shown in green are those that match the ontology. Warnings are
shown for classes whose properties do not match; Table 18 and Table 19 show these warnings.
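The pre-planned if-then rule of this use case can be sketched as follows. The condition and action names follow the alternative flows in Table 20, while the method signature and string encoding of actions are assumptions made for illustration:

```java
// Sketch of the pre-planned if-then adaptation rule in Use Case III: when the
// disruption condition holds (Time > 40), the primary UAV lands and the
// reserve UAV is launched.
public class AdaptationRule {

    // Evaluate the rule at a simulation step; returns the actions triggered.
    static String[] evaluate(double simulationTime, boolean reserveAvailable) {
        if (simulationTime > 40 && reserveAvailable) {
            return new String[] { "Actor 0:Lands", "Actor 2:LaunchReserve" };
        }
        return new String[0]; // no disruption: continue the normal flow
    }

    public static void main(String[] args) {
        System.out.println(String.join(", ", evaluate(45.0, true)));
    }
}
```

In the testbed, such a rule would be evaluated inside the OnStep function of the relevant actor, with UpdateData propagating the resulting parameter changes.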
FIGURE 14: USE CASE III DASHBOARD VIEW
FIGURE 15: USE CASE III RUNTIME VIEW – BEFORE DISRUPTION
FIGURE 16: USE CASE III RUN-TIME VIEW - AFTER DISRUPTION
The actor information is also entered into the actor database. Before running the dashboard,
the actor .jar files are imported. The .jar files and database content are used by the testbed to create
the required actors for each use case. Table 21 shows the database entry for use case III.
TABLE 21: USE CASE III ACTOR DATABASE
Actor_ID | Actor_Type | initialConditions | DesiredT | ID
Actor 0 | UAV | 0,0,0,0,0,0,0,0,0,0,0,0 | 20,40,60,80,100,120,140 | 0
Actor 1 | OperationalContext | 0, | 0, | 999
Actor 2 | ReserveUAV | 0,0,0,0,0,0,0,0,0,0,0,0 | 20,40,60,80,100,120,140 | 1

Actor_ID | Xt | Yt | Zt | initialTime | CameraAvailable | HealthStatus
Actor 0 | 0,10,40,40,10,10,40 | 0,10,10,30,30,40,40 | 15,15,15,15,15,15,15 | 0 | Y | Good
Actor 1 | 0, | 0, | 0, | 0 | N | N
Actor 2 | 0, | 0, | 10, | 0 | Y | Good

Actor_ID | Weather | Terrain | DayTime | Area
Actor 0 | - | - | - | -
Actor 1 | Sunny | Jungle | Morning | 50x50
Actor 2 | - | - | - | -
5.4 Use Case IV: Autonomous Vehicle Operation
The goal of this use case is to test the extendibility of the implementation as well as the
ontology to another domain. For this purpose, the autonomous vehicle domain is selected, since it is
currently a trending topic, with various private and public companies putting effort into developing
fully autonomous vehicles or advanced driver assistance modules. Table 22 shows a simple use
case for an autonomous vehicle driving on a road.
TABLE 22: USE CASE IV DESCRIPTION
Use Case ID: 4
Use Case Name: Driving Use-Case Category Commercial
Created By: Edwin O. Last Updated By: Edwin O.
Date Created: October 2 2021 Last Revision Date: October 2 2021
Actor(s): Actor ID Actor Type Actor Priority
Actor 0 AV primary
Actor 1 Operational Context primary
Description: This use case is for an autonomous vehicle driving on a road in jungle
environment under normal condition.
PreConditions: Cond. ID Cond. Variable Condition Value Related
Component
0 Terrain Jungle-Road Actor 1
1 Weather Sunny Actor 1
2 DayTime Morning Actor 1
3 Speed >=10 Actor 0
PostConditions: Cond. ID Cond. Variable Condition Value Related
Component
0 Distance >=50 Actor 0
Normal Flow: Step Number Actor Action
0 Actor 0 DrivingStraight
Such an extension does not require extensive changes to the testbed, since the
testbed was originally architected for flexibility. Information about the AV actor, such as its initial
conditions and initial start time, is added to the actor database. The testbed automatically creates the
agent based on this actor information. The actions that the AV actor should perform during
initialization and at each simulation step are imported by simply adding the .jar file of the AV actor
to the testbed. A summary of the AV model is presented in Appendix B. Under these conditions,
after running the testbed, these actions are read from the Java code and executed during the simulation.
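The jar-based extension mechanism can be sketched with Java's URLClassLoader. The jar path and class name below are hypothetical, and in the testbed they would come from the actor database rather than being hard-coded:

```java
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Path;

// Sketch of the dynamic-class mechanism: load a concrete actor class from an
// external .jar without recompiling the testbed.
public class ActorLoader {

    // Instantiate a named class from a jar file, reflectively. Note: closing
    // the loader here is fine for this sketch, but a long-lived testbed would
    // keep it open while the loaded actor is in use.
    static Object loadActor(Path jarFile, String className) throws Exception {
        URL[] urls = { jarFile.toUri().toURL() };
        try (URLClassLoader loader = new URLClassLoader(urls,
                ActorLoader.class.getClassLoader())) {
            Class<?> cls = loader.loadClass(className);
            return cls.getDeclaredConstructor().newInstance();
        }
    }

    public static void main(String[] args) {
        try {
            // Hypothetical usage: the returned object would be cast to the
            // Actor superclass so the testbed can call OnStartUp(), OnStep(), etc.
            Object actor = loadActor(Path.of("models/av-actor.jar"), "AvActor");
            System.out.println("Loaded " + actor.getClass().getName());
        } catch (Exception e) {
            System.out.println("Actor jar not available: " + e);
        }
    }
}
```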
Figure 17 shows the dashboard view of this use case. The elements of the dashboard are like those
of previous cases. Figure 18 shows the run-time view of this use case. The elements of this view
are like those of previous use cases (see Table 11).
FIGURE 17: USE CASE IV DASHBOARD VIEW
FIGURE 18: USE CASE IV RUNTIME VIEW
The ontology checker algorithm captures the information regarding each component and
compares it against the previously developed ontology. In this case, there is a new
warning regarding the use case. This can be explained by the fact that the "Driving" category
does not exist among the use cases defined for UAV systems. Hence, the ontology checker throws a
warning to emphasize this mismatch. Table 23, Table 24, and Table 25 show comparisons of the log
files for the warnings issued by the ontology checker algorithm. The mismatches are indicated with an
arrow.
TABLE 23: OPERATIONAL ENVIRONMENT LOG FILE COMPARISON
Testbed Operational Environment Log File:
Is A: Operational_Environment
Environment Type: Outdoor
Coordinates: double
➔ Terrain: Jungle-Road
Weather: Sunny
➔ DayTime: Morning
Operational Environment Class Information:
Parent Class Name= Thing_
Environment-Type_ = Indoor,Outdoor,
Geographic-Coordinates_ = double,
Terrain_ = Dessert,Jungle,Mountain,Sea,
Weather_ = Cloudy,Rainy,Snowy,Sunny,
TABLE 24: SIMULATION ENVIRONMENT LOG FILE COMPARISON
Simulation Environment Log File:
Is A: Simulation_Environment
➔ Architecture: Proprietary
Software Type: Simulation Engine
User-Interface: Drag-Drop
Simulation Environment Class Information:
Parent Class Name= Thing_
Architecture_ = Open-Source,
Software-Type_ = Authoring,Code,Game-Engine,Modeling,Simulation-Engine,
User-Interface_ = Drag-Drop,Script,
TABLE 25: USE CASE LOG FILE COMPARISON
Testbed Use Case Log File:
Is A: Use_Case
Alternative-Flow: string
Flow-of-Events: string
Goal: string
➔ Mission Type: Driving
Post-Condition: string
Pre-Condition: string
Category: Commercial
Use Case Class Information:
Parent Class Name= Thing_
Alternative-Flow_ = string,
Flow-of-Events_ = string,
Goal_ = string,
Mission-Type_ = Area-Surveilance,Monitoring,Payload-Delivery,Search-And-Rescue,
Post-Condition_ = string,
Pre-Condition_ = string,
Use-Case-Category_ = Commercial,Military,
The actor information is also entered into the actor database. Before running the dashboard,
the actor .jar files are imported. The .jar files and database content are used by the testbed to create
the required actors for each use case. Table 26 shows the database entry for use case IV.
TABLE 26: USE CASE IV ACTOR DATABASE
Actor_ID | Actor_Type | initialConditions | DesiredT | ID
Actor 0 | AV | 0,0,0,10,5,32.5,0 | 0, | 0
Actor 1 | OperationalContext | 0, | 0, | 999

Actor_ID | Xt | Yt | Zt | initialTime | CameraAvailable | HealthStatus
Actor 0 | 0, | 0, | 0, | 0 | Y | Good
Actor 1 | 0, | 0, | 0, | 0 | N | N

Actor_ID | Weather | Terrain | DayTime | Area
Actor 0 | - | - | - | -
Actor 1 | Sunny | Jungle-Road | Morning | 50x50
5.5 Use Case V: Multiple AV Operation
The goal of this use case is to show that the implementation and ontology are extendible to another
domain and sufficiently flexible to work not only for one vehicle but for more than one. A
secondary goal is to show that the system-of-interest can have multiple requirements and that the
testbed is flexible enough to run multiple agents, each with different requirements. Table 27 shows
the details of this use case.
TABLE 27: USE CASE V DESCRIPTION
Use Case ID: 5
Use Case Name: Driving Use-Case Category Commercial
Created By: Edwin O. Last Updated By: Edwin O.
Date Created: October 2 2021 Last Revision Date: October 7 2021
Actor(s): Actor ID Actor Type Actor Priority
Actor 0 AV primary
Actor 1 OperationalContext primary
Actor 2 AV primary
PreConditions: Cond. ID Cond. Variable Condition Value Related
Component
0 Terrain Jungle-Road Actor 1
1 Weather Sunny Actor 1
2 DayTime Morning Actor 1
3 Speed >=10 Actor 0
4 Speed >=10 Actor 2
PostConditions: Cond. ID Cond. Variable Condition Value Related
Component
0 Distance >=50 Actor 0
1 Distance >=25 Actor 2
Normal Flow: Step Number Actor Action
0 Actor 0 DrivingStraight
1 Actor 2 DrivingStraight
As in previous use cases, the actor information is inserted into the actor database. This
information is then pulled by the testbed to create the actors and the operational context. Figure 19
shows the testbed dashboard view for this use case, and Figure 20 shows its run-time view. The
elements of these views are like previous cases.
FIGURE 19: USE CASE V DASHBOARD VIEW
FIGURE 20: USE CASE V RUN-TIME VIEW
The actor information is also entered into the actor database. Before running the dashboard,
the actor .jar files are imported. The .jar files and database content are used by the testbed to create
the required actors for each use case. Table 28 shows the database entry for use case V.
TABLE 28: USE CASE V ACTOR DATABASE
Actor_ID | Actor_Type | initialConditions | DesiredT | ID
Actor 0 | AV | 0,0,0,10,5,32.5,0 | 0, | 0
Actor 1 | OperationalContext | 0, | 0, | 999
Actor 2 | AV | 0,0,0,10,5,32.5,0 | 0, | 1

Actor_ID | Xt | Yt | Zt | initialTime | CameraAvailable | HealthStatus
Actor 0 | 0, | 0, | 0, | 0 | Y | Good
Actor 1 | 0, | 0, | 0, | 0 | N | N
Actor 2 | 0, | 0, | 10, | 0 | Y | Good

Actor_ID | Weather | Terrain | DayTime | Area
Actor 0 | - | - | - | -
Actor 1 | Sunny | Jungle-Road | Morning | 50x50
Actor 2 | - | - | - | -
5.6 Use Case VI: Air and Ground Vehicle Operation
The goal of this use case is to demonstrate that the testbed is capable of handling different
types of agents at the same time. As such, the use case is designed to utilize both air and ground
vehicles. This showcases the flexibility of the testbed in covering various types of use cases.
Table 29 shows the use case details.
TABLE 29: USE CASE VI DESCRIPTION
Use Case ID: 6
Use Case Name: Traffic Monitoring Use-Case Category Commercial
Created By: Edwin O. Last Updated By: Edwin O.
Date Created: October 2 2021 Last Revision Date: October 7 2021
Actor(s): Actor ID Actor Type Actor Priority
Actor 0 UAV primary
Actor 1 Operational Context primary
Actor 2 AV Primary
Description: This use case employs two different types of systems: an autonomous
vehicle and an unmanned aerial vehicle. The UAV is monitoring the traffic
while the AV is driving at a constant speed.
PreConditions: Cond. ID Cond. Variable Condition Value Related
Component
0 Terrain Jungle-Road Actor 1
1 Weather Sunny Actor 1
2 DayTime Morning Actor 1
3 Speed >=10 Actor 2
4 Health_Status Good Actor 0
5 Area 50x50 Actor 1
6 Health_Status Good Actor 2
PostConditions: Cond. ID Cond. Variable Condition Value Related
Component
0 Mission Time <200 Actor 0
1 Covered Area >95 Actor 0
2 Distance >=50 Actor 2
Normal Flow: Step Number Actor Action
0 Actor 0 LoadWaypoints;
1 Actor 0 FollowWaypoints;
2 Actor 0 TakePictures;
3 Actor 0 CalculatesProgress;
4 Actor 2 DrivingStraight
Figure 21 shows the dashboard view of this use case, and Figure 22 shows its run-time view.
The elements of these views are discussed in previous cases. Table 30, Table 31, and Table 32
show comparison log files for the warnings issued by the ontology checker algorithm. The
mismatches are indicated with an arrow.
FIGURE 21: USE CASE VI DASHBOARD VIEW
FIGURE 22: USE CASE VI RUN-TIME VIEW
TABLE 30: OPERATIONAL ENVIRONMENT LOG FILE COMPARISON
Testbed Operational Environment Log File:
Is A: Operational_Environment
Environment Type: Outdoor
Coordinates: double
➔ Terrain: Jungle-Road
Weather: Sunny
➔ DayTime: Morning
Operational Environment Class Information:
Parent Class Name= Thing_
Environment-Type_ = Indoor,Outdoor,
Geographic-Coordinates_ = double,
Terrain_ = Dessert,Jungle,Mountain,Sea,
Weather_ = Cloudy,Rainy,Snowy,Sunny,
TABLE 31: SIMULATION ENVIRONMENT LOG FILE COMPARISON
Simulation Environment Log File:
Is A: Simulation_Environment
➔ Architecture: Proprietary
Software Type: Simulation Engine
User-Interface: Drag-Drop
Simulation Environment Class Information:
Parent Class Name= Thing_
Architecture_ = Open-Source,
Software-Type_ = Authoring,Code,Game-Engine,Modeling,Simulation-Engine,
User-Interface_ = Drag-Drop,Script,
TABLE 32: USE CASE LOG FILE COMPARISON
Testbed Use Case Log File:
Is A: Use_Case
Alternative-Flow: string
Flow-of-Events: string
Goal: string
➔ Mission Type: Traffic Monitoring
Post-Condition: string
Pre-Condition: string
Category: Commercial
Use Case Class Information:
Parent Class Name= Thing_
Alternative-Flow_ = string,
Flow-of-Events_ = string,
Goal_ = string,
Mission-Type_ = Area-Surveilance,Monitoring,Payload-Delivery,Search-And-Rescue,
Post-Condition_ = string,
Pre-Condition_ = string,
Use-Case-Category_ = Commercial,Military,
Table 33 shows the actor database entry for this specific use case. In this case, the actor database
contains three different types of actors. For each actor, the required information is entered into
the respective columns.
TABLE 33: USE CASE VI ACTOR DATABASE
Actor_ID | Actor_Type | initialConditions | DesiredT | ID
Actor 0 | UAV | 0,0,0,0,0,0,0,0,0,0,0,0 | 20,40,60,80,100,120,140 | 0
Actor 1 | OperationalContext | 0, | 0, | 999
Actor 2 | AV | 0,0,0,0,0,0,0,0 | 0, | 1

Actor_ID | Xt | Yt | Zt | initialTime | CameraAvailable | HealthStatus
Actor 0 | 0,10,40,40,10,10,40 | 0,10,10,30,30,40,40 | 15,15,15,15,15,15,15 | 0 | Y | Good
Actor 1 | 0, | 0, | 0, | 0 | N | N
Actor 2 | 0, | 0, | 0, | 0 | N | Good

Actor_ID | Weather | Terrain | DayTime | Area
Actor 0 | - | - | - | -
Actor 1 | Sunny | Jungle-Road | Morning | 50x50
Actor 2 | - | - | - | -
CHAPTER 6
SUMMARY AND IMPLICATION OF RESEARCH
The process of developing and evaluating unmanned systems today tends to be specific
to the system being developed (i.e., it is not generally applicable to a class of unmanned systems).
Consequently, their evaluation tends to get delayed, because a unique test environment must be
developed for the system use cases to be tested. The creation of an instrumented testbed with
sufficient flexibility to address a class of problems can be expected to have a significant payoff
when it comes to testing. My research was concerned with the development of a flexible simulation
testbed to model and evaluate the behavior of unmanned systems (air and ground). My research
hypothesis was that a flexible testbed with facilities for modeling, analysis, testing, and verification
will reduce the time needed to evaluate new technologies and new development and testing approaches.
In this dissertation, I investigated the development of a testbed ontology comprising key
concepts and relationships, by reviewing the literature, identifying key concepts, and
investigating their relationships. This ontology went through multiple revisions, and the final version
was presented in detail in chapter four.
After developing the ontology and implementing the testbed, a key component of the research
approach was to ensure that the implementation in fact follows the ontology. Therefore, a
mechanism was developed to check the implementation against the ontology in real time.
To demonstrate the flexibility of the testbed, a database and library of actors was designed.
The database contains all the necessary information about the actors and is used to create actors in
the simulation environment. The model library is a set of .jar files that can be read in during
simulation; it exists to increase the fidelity of the simulation. This capability ensures that the
testbed can be reused for various use cases.
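The actor-database idea can be sketched in a few lines. The schema, column names, and parsing below are hypothetical illustrations (the actual testbed stores actors as described in Table 33); the point is simply that actor rows live outside the simulation and are read in to instantiate actors.

```python
import sqlite3

# Hypothetical minimal schema mirroring a few Table 33 columns.
SCHEMA = """CREATE TABLE actors (
    actor_id TEXT PRIMARY KEY,
    actor_type TEXT,
    initial_conditions TEXT,
    camera_available TEXT,
    health_status TEXT
)"""

def load_actors(conn):
    """Read every row and turn it into a plain dict the simulator can use."""
    rows = conn.execute("SELECT * FROM actors").fetchall()
    actors = []
    for actor_id, actor_type, init, camera, health in rows:
        actors.append({
            "id": actor_id,
            "type": actor_type,
            # initial conditions are stored as a comma-separated string
            "initial_conditions": [float(v) for v in init.split(",")],
            "camera_available": camera == "Y",
            "health_status": health,
        })
    return actors

conn = sqlite3.connect(":memory:")
conn.execute(SCHEMA)
conn.execute("INSERT INTO actors VALUES ('Actor 0', 'UAV', '0,0,0', 'Y', 'Good')")
print(load_actors(conn))
```

Because actor creation is driven entirely by the rows returned from the database, adding a new actor type requires no change to the simulation code itself.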
It is important to note that use cases exist outside of the testbed to ensure that the testbed is not a
point solution (or a single simulation). Since use cases are defined externally, a mechanism was
developed to import them into the testbed. Since actor information is defined in a database,
a second mechanism was developed to ensure that use case actors and their properties match
the properties defined in the database. This mechanism terminates the simulation if there is a
mismatch between use case actor properties and the properties defined in the database.
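The match-then-terminate behavior can be sketched as follows. The dict-based actor representation and function names are assumptions for illustration, not the testbed's actual API; the logic shown (reject the use case on any property mismatch before the simulation starts) is the behavior described above.

```python
def validate_use_case(use_case_actors, db_actors):
    """Return a list of mismatches between use-case actors and the database.

    An empty list means the use case is consistent with the database.
    Each actor is a plain dict keyed by property name (hypothetical layout).
    """
    errors = []
    db_by_id = {a["id"]: a for a in db_actors}
    for actor in use_case_actors:
        db_actor = db_by_id.get(actor["id"])
        if db_actor is None:
            errors.append(f"{actor['id']}: not in database")
            continue
        for prop, value in actor.items():
            if db_actor.get(prop) != value:
                errors.append(f"{actor['id']}: property '{prop}' mismatch")
    return errors

def run_simulation(use_case_actors, db_actors):
    errors = validate_use_case(use_case_actors, db_actors)
    if errors:
        # mirrors the testbed behavior: terminate on any mismatch
        raise SystemExit("use case rejected: " + "; ".join(errors))
    return "simulation started"

db = [{"id": "Actor 0", "type": "UAV"}]
print(run_simulation([{"id": "Actor 0", "type": "UAV"}], db))
```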
All these capabilities were then integrated to create the overall testbed capability.
Various use cases were defined and imported into the testbed to demonstrate its feasibility and
flexibility. To demonstrate the flexibility and extensibility of the testbed and the ontology defined
for the UAV systems testbed, the implementation was extended to another domain. For this purpose,
the domain of autonomous vehicles was chosen. Autonomous vehicles (AVs) are now a trending
subject for many public and private companies. These vehicles share similar characteristics with
unmanned aerial systems but also have considerable differences.
The testbed offers a convenient means to reduce complexity when experimenting with
different scenarios, actors, and agents. My research has demonstrated that a general ontology for
a testbed for evaluating unmanned systems can be defined. Incorporating an actor library in the
testbed reduces modeling effort. The ability to read in third-party scenario scripts and actor
behaviors reduces development effort. My research also demonstrated that there is no need to
rebuild the testbed for every new application if the testbed is extendible. Using existing tools and
software packages makes setting up the testbed easy, with a minimal learning curve. As a result,
a platform can be developed to demonstrate evolving capabilities of the testbed.
In summary, the approach taken in this research is driven by system use cases. To assure
testbed flexibility, system use cases and actors are defined external to the testbed and are imported
into the testbed using appropriate mechanisms. The development of the testbed is guided by a
formal ontology that scopes the problem domain. The ontology developed for the UAV domain is
extended to the AV domain to demonstrate its generalizability. The testbed developed in this
dissertation is also equipped with an ontology reasoner, which ensures that the developed ontology is
consistent. While the simulation testbed is running, its components are checked against the ontology to
ensure that the ontology is being followed in the implementation. Proven commercial tools and open-
source components are used to ensure that the developed testbed has maximum compatibility with various
tools and platforms. Other key features of the testbed include visualization of the ontology, class
and hierarchy definitions, ontology reasoning, a run-time ontology checker, a use case checker, an
ontology converter (OWL->HTML->Text), and a multi-lens view of the simulation (dashboard view,
agent view).
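The run-time ontology check can be illustrated with a deliberately small sketch. Here the ontology is reduced to a mapping from class names to allowed property names (the real testbed reasons over an OWL ontology; the class and property names below are hypothetical), and each live component is tested against it during the simulation loop.

```python
# Hypothetical, heavily simplified ontology: class -> allowed property names.
ONTOLOGY = {
    "UAV": {"position", "velocity", "camera_available", "health_status"},
    "OperationalContext": {"weather", "terrain", "day_time", "area"},
}

def check_component(component):
    """Return True iff the component's type exists in the ontology and
    every property it carries is permitted for that type."""
    allowed = ONTOLOGY.get(component["type"])
    if allowed is None:
        return False
    return set(component["properties"]) <= allowed

uav = {"type": "UAV", "properties": ["position", "health_status"]}
rogue = {"type": "UAV", "properties": ["warp_drive"]}
print(check_component(uav), check_component(rogue))
```

In the real testbed this check runs while the simulation executes, so an implementation that drifts away from the ontology is caught immediately rather than at review time.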
References
[1] A. M. Madni, Transdisciplinary systems engineering: exploiting convergence in a
hyper-connected world: Springer, 2017.
[2] A. M. Madni, "MBSE Testbed for Rapid, Cost-Effective Prototyping and
Evaluation of System Modeling Approaches," Applied Sciences, vol. 11, p. 2321, 2021.
[3] A. M. Madni, C. C. Madni, and S. D. Lucero, "Leveraging digital twin technology
in model-based systems engineering," Systems, vol. 7, p. 7, 2019.
[4] C. Chevallereau, G. Abba, Y. Aoustin, F. Plestan, E. Westervelt, C. C. De Wit, et
al., "Rabbit: A testbed for advanced control theory," 2003.
[5] V. Nejkovic, N. Petrovic, N. Milosevic, and M. Tosic, "The SCOR ontologies
framework for robotics testbed," in 2018 26th Telecommunications Forum (TELFOR), 2018, pp.
1-4.
[6] C. J. Budnik, S. Eckl, and M. Gario, "Testbed for Model-based Verification of
Cyber-physical Production Systems," in ARCH@ CPSWeek, 2017, pp. 92-99.
[7] A. M. Madni, C. Madni, S. Purohit, and A. Madni, "Digital Twin Technology-
Enabled Research Testbed for Game-Based Learning and Assessment in Theoretical Issues of
Using Simulations and Games in Educational Assessment," in Theoretical Issues of Using
Simulations and Games in Educational Assessment H. F. O'Neil, E. L. Baker, R. S. Perez, and S.
E. Watson, Eds., ed New York: Taylor and Francis Group, 2021.
[8] A. M. Madni, M. Sievers, S. Purohit, and C. C. Madni, "Toward a MBSE Research
Testbed: Prototype Implementation and Lessons Learned," in 2020 IEEE International Conference
on Systems, Man, and Cybernetics (SMC), 2020, pp. 2939-2945.
[9] N. Michael, J. Fink, and V. Kumar, "Experimental testbed for large multirobot
teams," IEEE robotics & automation magazine, vol. 15, pp. 53-61, 2008.
[10] N. Michael, D. Mellinger, Q. Lindsey, and V. Kumar, "The grasp multiple micro-
uav testbed," IEEE Robotics & Automation Magazine, vol. 17, pp. 56-65, 2010.
[11] A. M. Madni and S. Jackson, "Towards a conceptual framework for resilience
engineering," IEEE Systems Journal, vol. 3, pp. 181-191, 2009.
[12] A. M. Madni, D. Erwin, and M. Sievers, "Constructing Models for Systems
Resilience: Challenges, Concepts, and Formal Methods," Systems, vol. 8, p. 3, 2020.
[13] R. Bogue, "Search and rescue and disaster relief robots: has their time finally
come?," Industrial Robot: An International Journal, 2016.
[14] M. Erdelj and E. Natalizio, "UAV-assisted disaster management: Applications and
open issues," in 2016 international conference on computing, networking and communications
(ICNC), 2016, pp. 1-5.
[15] E. Ordoukhanian and A. M. Madni, "Resilient multi-UAV operation: key concepts
and challenges," in 54th AIAA Aerospace Sciences Meeting, 2016, p. 0475.
[16] P. Almeida, M. G. Gonçalves, and J. B. Sousa, "Multi-UAV platform for
integration in mixed-initiative coordinated missions," IFAC Proceedings Volumes, vol. 39, pp. 70-
75, 2006.
[17] E. Ordoukhanian and A. M. Madni, "Introducing resilience into multi-UAV
system-of-systems network," in Disciplinary Convergence in Systems Engineering Research, ed:
Springer, 2018, pp. 27-40.
[18] A. M. Madni, M. W. Sievers, J. Humann, E. Ordoukhanian, J. D’Ambrosio, and P.
Sundaram, "Model-based approach for engineering resilient system-of-systems: Application to
autonomous vehicle networks," in Disciplinary Convergence in Systems Engineering Research,
ed: Springer, 2018, pp. 365-380.
[19] R. L. Lidowski, B. E. Mullins, and R. O. Baldwin, "A novel communications
protocol using geographic routing for swarming uavs performing a search mission," in 2009 IEEE
International Conference on Pervasive Computing and Communications, 2009, pp. 1-7.
[20] L. Barnes, M. Fields, and K. Valavanis, "Unmanned ground vehicle swarm
formation control using potential fields," in 2007 Mediterranean Conference on Control &
Automation, 2007, pp. 1-8.
[21] P. Gaudiano, E. Bonabeau, and B. Shargel, "Evolving behaviors for a swarm of
unmanned air vehicles," in Proceedings 2005 IEEE Swarm Intelligence Symposium, 2005. SIS
2005., 2005, pp. 317-324.
[22] B. Liu, T. Chu, L. Wang, and G. Xie, "Controllability of a leader–follower dynamic
network with switching topology," IEEE Transactions on Automatic Control, vol. 53, pp. 1009-
1013, 2008.
[23] K. M. Milam, "Evolution of control programs for a swarm of autonomous
unmanned aerial vehicles," AIR FORCE INST OF TECH WRIGHT-PATTERSON AFB OH
SCHOOL OF ENGINEERING AND …, 2004.
[24] D. J. Pack and B. E. Mullins, "Toward finding an universal search algorithm for
swarm robots," in Proceedings 2003 IEEE/RSJ International Conference on Intelligent Robots and
Systems (IROS 2003)(Cat. No. 03CH37453), 2003, pp. 1945-1950.
[25] H. V. Parunak, M. Purcell, and R. O'Connell, "Digital pheromones for autonomous
coordination of swarming UAV's," in 1st UAV Conference, 2002, p. 3446.
[26] I. C. Price, "Evolving self-organized behavior for homogeneous and heterogeneous
UAV or UCAV swarms," AIR FORCE RESEARCH LAB WRIGHT-PATTERSON AFB OH, 2006.
[27] P. J. Vincent and I. Rubin, "A swarm-assisted integrated communication and
sensing network," in Battlespace Digitization and Network-Centric Systems IV, 2004, pp. 48-60.
[28] W. Xu and X. Chen, "Artificial moment method for swarm robot formation
control," Science in China Series F: Information Sciences, vol. 51, pp. 1521-1531, 2008.
[29] M. Afanasov, A. Djordjevic, F. Lui, and L. Mottola, "FlyZone: A Testbed for
Experimenting with Aerial Drone Applications," in Proceedings of the 17th Annual International
Conference on Mobile Systems, Applications, and Services, 2019, pp. 67-78.
[30] R. D'Andrea and R. M. Murray, "The roboflag competition," in Proceedings of the
2003 American Control Conference, 2003., 2003, pp. 650-655.
[31] D. Montufar, F. Munoz, E. Espinoza, O. Garcia, and S. Salazar, "Multi-UAV
testbed for aerial manipulation applications," in 2014 International Conference on Unmanned
Aircraft Systems (ICUAS), 2014, pp. 830-835.
[32] E. O'Neill, M. Klepal, D. Lewis, T. O'Donnell, D. O'Sullivan, and D. Pesch, "A
Testbed for Evaluating Human Interaction with Ubiquitous Computing Environments," in
TridentCom, 2005, pp. 60-69.
[33] F. M. Palacios, E. S. E. Quesada, G. Sanahuja, S. Salazar, O. G. Salazar, and L. R.
G. Carrillo, "Test bed for applications of heterogeneous unmanned vehicles," International Journal
of Advanced Robotic Systems, vol. 14, p. 1729881416687111, 2017.
[34] M. Schmittle, A. Lukina, L. Vacek, J. Das, C. P. Buskirk, S. Rees, et al.,
"OpenUAV: a UAV testbed for the CPS and robotics community," in 2018 ACM/IEEE 9th
International Conference on Cyber-Physical Systems (ICCPS), 2018, pp. 130-139.
[35] J. J. Streilein, "Test and Evaluation of highly complex systems," ARMY TEST
AND EVALUATION COMMAND ALEXANDRIA VA, 2009.
[36] D. Bullock, B. Johnson, R. B. Wells, M. Kyte, and Z. Li, "Hardware-in-the-loop
simulation," Transportation Research Part C: Emerging Technologies, vol. 12, pp. 73-89, 2004.
[37] J. S. Keränen and T. Räty, "Model-based testing of embedded systems in hardware
in the loop environment," IET Software, vol. 6, pp. 364-376, 2012.
[38] N. B. Ali, K. Petersen, and M. V. Mäntylä, "Testing highly complex system of
systems: an industrial case study," in Proceedings of the 2012 ACM-IEEE International
Symposium on Empirical Software Engineering and Measurement, 2012, pp. 211-220.
[39] M. Haghnevis and R. G. Askin, "A modeling framework for engineered complex
adaptive systems," IEEE Systems Journal, vol. 6, pp. 520-530, 2012.
[40] S. Lee, D. Har, and D. Kum, "Drone-assisted disaster management: Finding victims
via infrared camera and lidar sensor fusion," in 2016 3rd Asia-Pacific World Congress on
Computer Science and Engineering (APWC on CSE), 2016, pp. 84-89.
[41] F. Nex and F. Remondino, "UAV for 3D mapping applications: a review," Applied
geomatics, vol. 6, pp. 1-15, 2014.
[42] J. Barnard, "Use of unmanned air vehicles in oil, gas and mineral exploration
activities," in AUVSI Unmanned Syst. North America Conf., Denver, CO, USA, 2010.
[43] A. Gynnild, "The Robot Eye Witness: Extending visual journalism through drone
surveillance," Digital journalism, vol. 2, pp. 334-343, 2014.
[44] I. Mademlis, V. Mygdalis, N. Nikolaidis, and I. Pitas, "Challenges in autonomous
UAV cinematography: An overview," in 2018 IEEE international conference on multimedia and
expo (ICME), 2018, pp. 1-6.
[45] I. Bekmezci, O. K. Sahingoz, and Ş. Temel, "Flying ad-hoc networks (FANETs):
A survey," Ad Hoc Networks, vol. 11, pp. 1254-1270, 2013.
[46] S. Jung and H. Kim, "Analysis of amazon prime air uav delivery service," Journal
of Knowledge Information Technology and Systems, vol. 12, pp. 253-266, 2017.
[47] P. Meier, "UAVs and Humanitarian," Drones And Aerial Observation: New
Technologies For Property Rights, Human Rights, And Global Development A Primer, p. 57,
2015.
[48] P. Doherty and P. Rudol, "A UAV search and rescue scenario with human body
detection and geolocalization," in Australasian Joint Conference on Artificial Intelligence, 2007,
pp. 1-13.
[49] J. Everaerts, "The use of unmanned aerial vehicles (UAVs) for remote sensing and
mapping," The International Archives of the Photogrammetry, Remote Sensing and Spatial
Information Sciences, vol. 37, pp. 1187-1192, 2008.
[50] S.-s. Choi and E.-k. Kim, "Building crack inspection using small UAV," in 2015
17th International Conference on Advanced Communication Technology (ICACT), 2015, pp. 235-
238.
[51] K. Kanistras, G. Martins, M. J. Rutherford, and K. P. Valavanis, "A survey of
unmanned aerial vehicles (UAVs) for traffic monitoring," in 2013 International Conference on
Unmanned Aircraft Systems (ICUAS), 2013, pp. 221-234.
[52] E. Darack, "UAVs: The new frontier for weather research and prediction,"
Weatherwise, vol. 65, pp. 20-27, 2012.
[53] T. Adão, J. Hruška, L. Pádua, J. Bessa, E. Peres, R. Morais, et al., "Hyperspectral
imaging: A review on UAV-based sensors, data processing and applications for agriculture and
forestry," Remote Sensing, vol. 9, p. 1110, 2017.
[54] A. Merwaday and I. Guvenc, "UAV assisted heterogeneous networks for public
safety communications," in 2015 IEEE wireless communications and networking conference
workshops (WCNCW), 2015, pp. 329-334.
[55] D. Jenkins and B. Vasigh, The economic impact of unmanned aircraft systems
integration in the United States: Association for Unmanned Vehicle Systems International
(AUVSI), 2013.
[56] E. Ordoukhanian and A. M. Madni, "Model-Based Approach to Engineering
Resilience in Multi-UAV Systems," Systems, vol. 7, p. 11, 2019.
[57] C. Ju and H. I. Son, "Multiple UAV systems for agricultural applications: control,
implementation, and evaluation," Electronics, vol. 7, p. 162, 2018.
[58] A. Guillen-Perez and M.-D. Cano, "Flying ad hoc networks: A new domain for
network communications," Sensors, vol. 18, p. 3571, 2018.
[59] S. Waharte and N. Trigoni, "Supporting search and rescue operations with UAVs,"
in 2010 International Conference on Emerging Security Technologies, 2010, pp. 142-147.
[60] J. Humann and E. Spero, "Modeling and simulation of multi-UAV, multi-operator
surveillance systems," in 2018 Annual IEEE International Systems Conference (SysCon), 2018,
pp. 1-8.
[61] E. Kuiper and S. Nadjm-Tehrani, "Mobility models for UAV group reconnaissance
applications," in 2006 International Conference on Wireless and Mobile Communications
(ICWMC'06), 2006, pp. 33-33.
[62] H. V. D. Parunak, S. Brueckner, and J. Odell, "Swarming coordination of multiple
UAV's for collaborative sensing," in 2nd AIAA" Unmanned Unlimited" Conf. and Workshop &
Exhibit, 2003, p. 6525.
[63] E. Ordoukhanian and A. M. Madni, "Toward development of resilient multi-UAV
system-of-systems," in AIAA SPACE 2016, ed, 2016, p. 5414.
[64] K. P. Valavanis and G. J. Vachtsevanos, Handbook of unmanned aerial vehicles
vol. 2077: Springer, 2015.
[65] A. Kopeikin, A. Clare, O. Toupet, J. How, and M. Cummings, "Flight testing a
heterogeneous multi-UAV system with human supervision," in AIAA Guidance, Navigation, and
Control Conference, 2012, p. 4825.
[66] A. M. Madni, M. W. Sievers, J. Humann, E. Ordoukhanian, B. Boehm, and S.
Lucero, "Formal methods in resilient systems design: application to multi-UAV system-of-systems
control," in Disciplinary convergence in systems engineering research, ed: Springer, 2018, pp.
407-418.
[67] E. Ordoukhanian and A. M. Madni, "System Trade-offs in Multi-UAV Networks,"
in AIAA SPACE 2015 Conference and Exposition, 2015, p. 4542.
[68] R. Bamberger, D. Scheidt, C. Hawthorne, O. Farrag, and M. White, "Wireless
network communications architecture for swarms of small uavs," in AIAA 3rd" Unmanned
Unlimited" Technical Conference, Workshop and Exhibit, 2004, p. 6594.
[69] B. T. Clough, "UAV swarming? So what are those swarms, what are the
implications, and how do we handle them?," AIR FORCE RESEARCH LAB WRIGHT-
PATTERSON AFB OH AIR VEHICLES DIRECTORATE, 2002.
[70] M. W. Maier, "Architecting principles for systems‐of‐systems," Systems
Engineering: The Journal of the International Council on Systems Engineering, vol. 1, pp. 267-
284, 1998.
[71] A. M. Madni and M. Sievers, "System of systems integration: Key considerations
and challenges," Systems Engineering, vol. 17, pp. 330-347, 2014.
[72] A. M. Madni and M. Sievers, "Systems integration: Key perspectives, experiences,
and challenges," Systems Engineering, vol. 17, pp. 37-51, 2014.
[73] M. W. Maier, "Architecting Principles for Systems-of-Systems," INCOSE
International Symposium, vol. 6, pp. 565-573, 1996/07 1996.
[74] E. Yanmaz, S. Yahyanejad, B. Rinner, H. Hellwagner, and C. Bettstetter, "Drone
networks: Communications, coordination, and sensing," Ad Hoc Networks, vol. 68, pp. 1-15,
2018.
[75] Y. Wei, M. B. Blake, and G. R. Madey, "An operation-time simulation framework
for UAV swarm configuration and mission planning," Procedia Computer Science, vol. 18, pp.
1949-1958, 2013.
[76] A. M. Madni and M. Sievers, "A Flexible Contract-Based Design Framework for
Evaluating System Resilience Approaches and Mechanisms," in IIE Annual Conference.
Proceedings, 2015, p. 2982.
[77] C. S. Holling and L. H. Gunderson, "Resilience and adaptive cycles," In: Panarchy:
Understanding Transformations in Human and Natural Systems, 25-62, 2002.
[78] E. Hollnagel, D. D. Woods, and N. Leveson, Resilience engineering: Concepts and
precepts: Ashgate Publishing, Ltd., 2006.
[79] J. R. Boyd, "The essence of winning and losing," Unpublished lecture notes, vol.
12, pp. 123-125, 1996.
[80] M. Sievers and A. M. Madni, "A flexible contracts approach to system resiliency,"
in 2014 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2014, pp. 1002-
1007.
[81] I. Maza, K. Kondak, M. Bernard, and A. Ollero, "Multi-UAV cooperation and
control for load transportation and deployment," in Selected papers from the 2nd International
Symposium on UAVs, Reno, Nevada, USA June 8–10, 2009, 2009, pp. 417-449.
[82] N. Ayanian, "Coordination of multirobot teams and groups in constrained
environments: Models, abstractions, and control policies," University of Pennsylvania, 2011.
[83] M. Correia, P. Veríssimo, and N. F. Neves, "The design of a COTS real-time
distributed security kernel," in European Dependable Computing Conference, 2002, pp. 234-252.
[84] C. G. Rieger, D. I. Gertman, and M. A. McQueen, "Resilient control systems: Next
generation design research," in 2009 2nd Conference on Human System Interactions, 2009, pp.
632-636.
[85] N. Michael and V. Kumar, "Planning and control of ensembles of robots with non-
holonomic constraints," The International Journal of Robotics Research, vol. 28, pp. 962-975,
2009.
[86] K. Nonami, M. Kartidjo, K. Yoon, and A. Budiyono, "Autonomous control systems
and vehicles," Intelligent Systems, Control and Automation: Science and Engineering, vol. 65,
2013.
[87] J. R. Swisher, P. D. Hyden, S. H. Jacobson, and L. W. Schruben, "A survey of
simulation optimization techniques and procedures," in 2000 Winter Simulation Conference
Proceedings (Cat. No. 00CH37165), 2000, pp. 119-128.
[88] J. J. Nutaro, Building software for simulation: theory and algorithms, with
applications in C++: John Wiley & Sons, 2011.
[89] R. J. Allan, Survey of agent based modelling and simulation tools: Science &
Technology Facilities Council New York, 2010.
[90] J. Sterman, "System dynamics at sixty: the path forward," ed: Wiley Online
Library, 2018.
[91] R. Purta, M. Dobski, A. Jaworski, and G. Madey, "A testbed for investigating the
UAV swarm command and control problem using DDDAS," Procedia Computer Science, vol. 18,
pp. 2018-2027, 2013.
[92] E. Ordoukhanian and A. M. Madni, "Ontology-Enabled Hardware-Software
Testbed For Engineering Adaptive Systems " presented at the Conference On Systems Engineering
Research Redondo Beach, CA, 2020.
[93] F. Armour and G. Miller, Advanced use case modeling: software systems: Pearson
Education, 2000.
[94] K. Bittner and I. Spence, Use case modeling: Addison-Wesley Professional, 2003.
[95] D. Rosenberg and K. Scott, Use case driven object modeling with UML: Springer,
1999.
[96] A. M. Madni, "Expanding stakeholder participation in upfront system engineering
through storytelling in virtual worlds," Systems Engineering, vol. 18, pp. 16-27, 2015.
[97] H. Kaiya and M. Saeki, "Ontology based requirements analysis: lightweight
semantic processing approach," in Fifth international conference on quality software (QSIC'05),
2005, pp. 223-230.
[98] Y. Wand and R. Y. Wang, "Anchoring data quality dimensions in ontological
foundations," Communications of the ACM, vol. 39, pp. 86-95, 1996.
[99] M. S. Fox and M. Grüninger, "Ontologies for Enterprise Integration," in CoopIS,
1994, pp. 82-89.
[100] R. Grønmo and J. Oldevik, "An empirical study of the UML model transformation
tool (UMT)," Proc. First Interoperability of Enterprise Software and Applications, Geneva,
Switzerland, p. 16, 2005.
[101] N. Guarino and C. Welty, "A formal ontology of properties," in International
Conference on Knowledge Engineering and Knowledge Management, 2000, pp. 97-112.
[102] N. F. Noy, R. W. Fergerson, and M. A. Musen, "The knowledge model of Protege-
2000: Combining interoperability and flexibility," in International Conference on Knowledge
Engineering and Knowledge Management, 2000, pp. 17-32.
[103] G. Stumme, "Ontology merging with formal concept analysis," in Dagstuhl
Seminar proceedings, 2005.
[104] M.-N. Terrasse, M. Savonnet, G. Becker, and E. Leclercq, "A UML-based
metamodeling architecture with example frameworks," in WISME 2002, WORKSHOP ON
SOFTWARE MODEL ENGINEERING, 2002.
[105] A. M. Madni, W. Lin, and C. C. Madni, "IDEONTM: An extensible ontology for
designing, integrating, and managing collaborative distributed enterprises," Systems Engineering,
vol. 4, pp. 35-48, 2001.
[106] A. M. Madni, C. C. Madni, and J. Salasin, "5.4. 1 ProACT™: Process‐aware Zero
Latency System for Distributed, Collaborative Enterprises," in INCOSE international symposium,
2002, pp. 783-790.
[107] I. Mayk and A. M. Madni, "The role of ontology in system-of-systems acquisition,"
INTELLIGENT SYSTEMS TECHNOLOGY SANTA MONICA CA, 2006.
[108] L. van Ruijven, "Ontology for systems engineering as a base for MBSE," in
INCOSE International Symposium, 2015, pp. 250-265.
[109] T. Gruber, "A translation approach to portable ontologies," Knowledge
Acquisition, vol. 5, pp. 199-229, 1993.
[110] R. Studer, V. R. Benjamins, and D. Fensel, "Knowledge engineering: principles
and methods," Data & knowledge engineering, vol. 25, pp. 161-197, 1998.
[111] N. Guarino, D. Oberle, and S. Staab, "What is an ontology?," in Handbook on
ontologies, ed: Springer, 2009, pp. 1-17.
[112] A. M. Madni, "Minimum Viable Model to Demonstrate Value Proposition of
Ontologies for Model-Based Systems Engineering " presented at the Conference on Systems
Engineering Research (CSER), Redondo Beach, CA, 2020.
[113] N. F. Noy and D. L. McGuinness, "Ontology development 101: A guide to creating
your first ontology," ed: Stanford knowledge systems laboratory technical report KSL-01-05
and …, 2001.
[114] S. Bouabdallah and R. Siegwart, "Backstepping and sliding-mode techniques
applied to an indoor micro quadrotor," in Proceedings of the 2005 IEEE international conference
on robotics and automation, 2005, pp. 2247-2252.
[115] E. C. Suicmez, "Trajectory Tracking of a Quadrotor Unmanned Aerial Vehicle
(UAV) via Attitude And Position Control," Middle East Technical University, 2014.
[116] A. M. Madni, M. Sievers, A. Madni, E. Ordoukhanian, and P. Pouya, "Extending
formal modeling for resilient systems design," INSIGHT, vol. 21, pp. 34-41, 2018.
[117] E. Freund and R. Mayr, "Nonlinear path control in automated vehicle guidance,"
IEEE transactions on robotics and automation, vol. 13, pp. 49-60, 1997.
[118] J. D’Ambrosio, A. Adiththan, E. Ordoukhanian, P. Peranandam, S. Ramesh, A. M.
Madni, et al., "An MBSE approach for development of resilient automated automotive systems,"
Systems, vol. 7, p. 1, 2019.
[119] Office of the Deputy Under Secretary of Defense for Acquisition and Technology,
Systems and Software Engineering, Systems Engineering Guide for System-of-Systems, Version
1.0. Washington, DC: ODUSD(A&T)SSE, 2008.
[120] W. K. Vaneman and R. D. Jaskot, "A criteria-based framework for establishing
system of systems governance," in Systems Conference (SysCon), 2013 IEEE International, 2013,
pp. 491-496.
[121] A. M. Madni and M. Sievers, "Systems Integration: Key Perspectives, Experiences,
and Challenges," Systems Engineering, vol. 17, pp. 37-51, 2013/05/29 2013.
[122] A. M. Madni and M. Sievers, "System of Systems Integration: Key Considerations
and Challenges," Systems Engineering, vol. 17, pp. 330-347, 2013/07/01 2013.
APPENDIX A
QUADCOPTER MATHEMATICAL MODELING
The dynamics modeling and control of the quadcopter presented here is entirely based on the
approaches described in [114] and [115]. For a detailed explanation, the reader is encouraged to
review the original papers. This section only summarizes the equations (taken directly from the
sources) to give the reader easier access to the essence of the models used in this dissertation. There are
multiple approaches to modeling and control of quadcopters in the literature, but the approach
discussed in these references is a good starting point.
In general, a quadcopter has at least two frames: a body frame, which is attached to the
center of mass, and an earth frame. Figure 23 shows these frames, which is taken from [116]. The
forces acting on the quadcopter are the forces produced by each rotor (F1, F2, F3, F4) along the positive
z axis of the body frame and the vehicle's weight (mg) acting in the negative z direction. Each rotor
produces a torque (moment), denoted Ti for each rotor. Rotors 1 and 3 rotate clockwise,
while rotors 2 and 4 rotate counterclockwise.
FIGURE 23: QUADCOPTER ORIENTATION
The nonlinear equation for translational motion takes the form of:

\begin{bmatrix} \ddot{x} \\ \ddot{y} \\ \ddot{z} \end{bmatrix}
= -\begin{bmatrix} 0 \\ 0 \\ -g \end{bmatrix}
+ L_{EB} \begin{bmatrix} 0 \\ 0 \\ U_1/m \end{bmatrix}
- \frac{k_t}{m} \begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{z} \end{bmatrix}

EQUATION 1: NONLINEAR TRANSLATIONAL MOTION
Here [\ddot{x}, \ddot{y}, \ddot{z}] is the acceleration of the vehicle in the earth frame; by
integrating twice, the vehicle's position can be calculated. In the previous equation, L_{EB} is the
transformation matrix from the body frame to the earth frame. For simplicity, the trigonometric sine
and cosine functions are abbreviated s and c, respectively.
L_{EB} = \begin{bmatrix}
c\theta c\psi & s\theta s\phi c\psi - c\phi s\psi & c\phi s\theta c\psi + s\phi s\psi \\
c\theta s\psi & s\phi s\theta s\psi + c\phi c\psi & c\phi s\theta s\psi - s\phi c\psi \\
-s\theta & s\phi c\theta & c\phi c\theta
\end{bmatrix}

EQUATION 2: TRANSFORMATION MATRIX
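Equation 2 can be transcribed directly into code as a quick sanity check; the only assumption here is the ZYX (yaw-pitch-roll) Euler convention implied by the matrix entries.

```python
import math

def L_EB(phi, theta, psi):
    """Body-to-earth transformation matrix from Equation 2.
    phi = roll, theta = pitch, psi = yaw, all in radians."""
    s, c = math.sin, math.cos
    return [
        [c(theta)*c(psi), s(theta)*s(phi)*c(psi) - c(phi)*s(psi), c(phi)*s(theta)*c(psi) + s(phi)*s(psi)],
        [c(theta)*s(psi), s(phi)*s(theta)*s(psi) + c(phi)*c(psi), c(phi)*s(theta)*s(psi) - s(phi)*c(psi)],
        [-s(theta),       s(phi)*c(theta),                        c(phi)*c(theta)],
    ]

# sanity check: with zero attitude, the body and earth frames coincide
print(L_EB(0.0, 0.0, 0.0))
```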
In the matrix above, 𝜓 is the yaw angle, 𝜃 is the pitch angle, and 𝜙 is the roll angle in the earth frame. The
nonlinear equation of motion for rotational motion takes the form of:
\begin{bmatrix} \dot{p} \\ \dot{q} \\ \dot{r} \end{bmatrix}
= \begin{bmatrix} (I_y - I_z)qr/I_x \\ (I_z - I_x)pr/I_y \\ (I_x - I_y)pq/I_z \end{bmatrix}
+ \begin{bmatrix} U_2 L/I_x \\ U_3 L/I_y \\ U_4 L/I_z \end{bmatrix}

EQUATION 3: NONLINEAR ROTATIONAL MOTION
where p, q, and r are the roll, pitch, and yaw rates in the body frame. In this equation, U2,
U3, and U4 are the control inputs; Ix, Iy, and Iz are the moments of inertia about the x, y, and z axes;
and L is the distance between the center of mass and the rotors. Since the quadcopter is symmetric,
L is the same constant for all rotors. The relationship between p, q, r and 𝜙, 𝜃, 𝜓 is given by:
\begin{bmatrix} \dot{\phi} \\ \dot{\theta} \\ \dot{\psi} \end{bmatrix}
= \begin{bmatrix}
1 & s\phi\, t\theta & c\phi\, t\theta \\
0 & c\phi & -s\phi \\
0 & s\phi/c\theta & c\phi/c\theta
\end{bmatrix}
\begin{bmatrix} p \\ q \\ r \end{bmatrix}
where, for simplicity, the trigonometric sine, cosine, and tangent functions are abbreviated s, c, and
t, respectively. To design a controller, the state space representation takes the following form:
EQUATION 4: QUADCOPTER STATE SPACE REPRESENTATION
where U_1 = F_1 + F_2 + F_3 + F_4 is the total force required to move the vehicle,
U_2 = F_4 - F_2 is the required difference between forces to generate roll, U_3 = F_3 - F_1 is the
required difference between forces to generate pitch, and U_4 = T_2 + T_4 - T_1 - T_3 is the required
torque to generate yaw. In these equations, F is the force generated by each rotor and T is the
moment (torque) generated by each rotor.
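These four control-input definitions amount to a simple mixing of the rotor forces and torques, which can be written directly as:

```python
def control_inputs(F, T):
    """Map the four rotor forces F = [F1..F4] and torques T = [T1..T4]
    to the control inputs U1..U4 defined above."""
    F1, F2, F3, F4 = F
    T1, T2, T3, T4 = T
    U1 = F1 + F2 + F3 + F4   # total thrust
    U2 = F4 - F2             # force difference generating roll
    U3 = F3 - F1             # force difference generating pitch
    U4 = T2 + T4 - T1 - T3   # torque generating yaw (CW rotors 1,3 vs CCW rotors 2,4)
    return U1, U2, U3, U4

# equal forces and torques -> pure thrust, no attitude moments
print(control_inputs([2.0, 2.0, 2.0, 2.0], [0.1, 0.1, 0.1, 0.1]))
```

The controller works the other way around: it computes U1..U4 and then inverts this mixing to obtain the individual rotor commands.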
To control the quadcopter on a specified trajectory, a backstepping controller has been used.
In essence, backstepping control chooses a Lyapunov function (which is positive definite
and has a negative semi-definite first-order time derivative) so that the error term
converges to zero [114]. The effects of aerodynamic drag on translational motion and the acceleration of
the desired trajectory are added into the control law to get fairly accurate results. A simpler flight
approach is obtained by controlling the yaw angle (and heading) relative to the direction of motion
[114]. The backstepping control method proceeds in three steps:
1) Attitude control: formulating control inputs U2, U3, and U4 to control attitude
2) Position control: formulating control input U1 and functions ux and uy related to the
orientation of U1
3) Calculating the desired Euler angles (desired orientation) to be used to find control inputs
U2, U3, and U4.
The overall quadcopter architecture is shown in Figure 24. The waypoint selection and path
generation blocks are simple algorithms that I developed separately to work with the rest
of the model.
FIGURE 24: QUADCOPTER ARCHITECTURE
APPENDIX B
AUTONOMOUS VEHICLE MATHEMATICAL MODELING
The model presented in this section follows the steps defined in [117]; similar steps
have been taken in [118]. This section only summarizes the models and equations. For more details,
the reader is encouraged to access the original papers.
A simple bicycle model for vehicle dynamics is sufficient to provide enough fidelity.
This model helps determine the behavior of the system (i.e., the expected outcome). For physics
modeling, the forces acting on the vehicle and the coordinate system are shown in Figure 25 and Figure
26, respectively. These figures are taken directly from [118] and, with minor modification, from
[117].
FIGURE 25: FORCES ACTING ON AUTONOMOUS VEHICLE
(a) (b)
FIGURE 26: (A) COORDINATE SYSTEM AND (B) VEHICLE STEERING ANGLE
The following six nonlinear equations give the physical state variables, assuming the angles β
and δ are small.
\dot{\beta} = \psi' + \frac{1}{mv}\left([D - T]\beta - \Gamma_f - \Gamma_r\right)

\dot{\psi} = \psi'

\dot{\psi'} = \frac{1}{\Theta}\left(\Gamma_f l_f - \Gamma_r l_r\right)

\dot{v} = \frac{1}{m}(T - D)

\dot{x} = v \cos(\psi - \beta)

\dot{y} = v \sin(\psi - \beta)

EQUATION 5: AUTONOMOUS VEHICLE STATE SPACE REPRESENTATION
where:
β — slip angle: angle between the car's velocity vector and body angle
ψ — body angle: angle of the car's centerline in the global coordinate frame
ψ' — angular velocity: derivative of the body angle
v — velocity: velocity in the direction of the centerline
x — X position: X-position in the global coordinate frame
y — Y position: Y-position in the global coordinate frame
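A minimal sketch of simulating these equations is shown below. The symbol names follow the appendix (Gf, Gr stand for the lateral tire forces Γf, Γr; Theta for the yaw inertia Θ); the forward-Euler integrator, parameter values, and function signature are assumptions for illustration, not part of the source model.

```python
import math

def bicycle_step(state, inputs, params, dt):
    """One forward-Euler step of the small-angle bicycle model (Equation 5).

    state  = (beta, psi, psi_rate, v, x, y)
    inputs = (T, D, Gf, Gr): thrust, drag, front/rear lateral tire forces
    params = (m, Theta, lf, lr): mass, yaw inertia, CG-to-axle distances
    """
    beta, psi, psi_rate, v, x, y = state
    T, D, Gf, Gr = inputs
    m, Theta, lf, lr = params
    beta_dot = psi_rate + ((D - T) * beta - Gf - Gr) / (m * v)
    psi_rate_dot = (Gf * lf - Gr * lr) / Theta
    v_dot = (T - D) / m
    x_dot = v * math.cos(psi - beta)
    y_dot = v * math.sin(psi - beta)
    return (beta + beta_dot * dt, psi + psi_rate * dt,
            psi_rate + psi_rate_dot * dt, v + v_dot * dt,
            x + x_dot * dt, y + y_dot * dt)

# straight-line cruise: no lateral forces, thrust balances drag
s = bicycle_step((0.0, 0.0, 0.0, 10.0, 0.0, 0.0),
                 (100.0, 100.0, 0.0, 0.0), (1500.0, 2500.0, 1.2, 1.4), 0.1)
print(s)  # x advances by v*dt = 1.0 m; all other states stay unchanged
```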
APPENDIX C
SYSTEM-OF-SYSTEMS CLASSIFICATION
Based on the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics
(OUSD AT&L) Systems Engineering Guide for SoS [119], a system-of-systems can be categorized
as Virtual, Collaborative, Acknowledged, or Directed. Table 34 describes each of these categories.
Vaneman and Jaskot [120] compare different types of SoS based on autonomy, belonging,
connectivity, diversity, and emergence. Autonomy is the ability of component systems to make
independent choices. Belonging determines whether a component system operates as a member of a group
or individually [120]. Connectivity is the ability of a component system to link with other systems,
taking into account interoperability and integration [120]. Diversity is the degree of
heterogeneity or homogeneity, i.e., having distinct or unlike elements or qualities in a SoS.
Emergence is the appearance of new properties during development, evaluation, and operations
[120-122]. More details about SoS properties can be found in [120].
TABLE 34 SYSTEM OF SYSTEMS CATEGORIES
■ Virtual SoS
➢ no central management authority and no agreed-upon central purpose
➢ systems can enter/exit dynamically based on mission requirements
■ Collaborative SoS
➢ component systems interact voluntarily to fulfill agreed upon central purposes
➢ central players collectively decide how to provide/deny service
■ Acknowledged SoS
➢ has recognized objectives, a designated manager, and resources
➢ constituent systems retain independent ownership, objectives, funding,
development, and sustainment
■ Directed SoS
➢ built and managed to fulfill specific purposes
➢ normal operation of component systems is subordinate to the central purpose
APPENDIX D
USE CASE TEMPLATE
The template shown in this appendix is similar to templates identified in the literature for
describing system use cases. The elements of this template are the same as those found in standard
systems engineering and software engineering textbooks. For the purposes of the research performed
in this dissertation, this template was implemented in Microsoft Excel.
Use Case ID | Use Case Name | Use-Case Category
Created By | Last Updated By
Date Created | Last Revision Date
Actor(s): Actor ID | Actor Type | Actor Priority (one row per actor)
Description
Trigger(s): Trigger ID | Trigger Type (one row per trigger)
Pre-Conditions: Cond. ID | Cond. Variable | Condition Value | Related Component
Post-Conditions: Cond. ID | Cond. Variable | Condition Value | Related Component
Normal Flow: Step Number | Actor Action
Alternative Flows: Step Branching Number | Condition Variable | Condition Value | Actor Action
Exceptions: Error ID | Error Condition | Related Actor | Actor Performing Action | Action
Includes: Use Case # (one entry per included use case)
Frequency of Use
Special Requirements: ID | Parameter | Value
Assumptions: ID | Description (one row per assumption)
Notes and Issues
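The same template can be mirrored as a data structure, which is useful when use cases are imported into the testbed programmatically. The sketch below is illustrative only; the class and field names are hypothetical and are not taken from the dissertation's actual Excel implementation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Actor:
    actor_id: str
    actor_type: str   # e.g., a human operator or a UAV agent
    priority: str

@dataclass
class Condition:
    cond_id: str
    variable: str
    value: str
    related_component: str

@dataclass
class Step:
    number: int
    actor: str
    action: str

@dataclass
class UseCase:
    """One record of the use-case template (illustrative field names)."""
    use_case_id: str
    name: str
    category: str
    description: str = ""
    actors: List[Actor] = field(default_factory=list)
    triggers: List[str] = field(default_factory=list)
    pre_conditions: List[Condition] = field(default_factory=list)
    post_conditions: List[Condition] = field(default_factory=list)
    normal_flow: List[Step] = field(default_factory=list)
    includes: List[str] = field(default_factory=list)  # IDs of included use cases
```

A spreadsheet row then maps one-to-one onto a UseCase instance, with the repeated column groups (actors, triggers, conditions, steps) becoming list entries.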
Abstract
With the growing complexity of systems and their need for rapid development, there is a growing need for inexpensive, flexible testbeds. Testbeds are platforms that can take various forms: virtual (simulation only), hardware only, or hybrid (simulation-hardware). It is not effective to deploy systems without studying their behavior under various conditions and use cases; this is where testbeds come in. However, testbeds often focus on a single application without any consideration for flexibility and extendibility. Testbeds built for specific purposes therefore have limited reusability because they invariably become single-point solutions or concept demonstrators: they do not scale and are thus not suitable for research. A testbed developed for research needs adequate flexibility so components can be added or removed as necessary.
This dissertation focuses on developing a generalized methodology for building a testbed for unmanned systems. These systems can take various forms, such as unmanned aerial vehicles (UAVs) or autonomous vehicles (AVs). They operate in dynamic and uncertain environments, where demonstrating adaptive behavior has become a key requirement. Systems with adaptation capabilities can change their operational routine in response to disruptive events. A testbed with sufficient flexibility and extendibility can contribute significantly to the development of such systems. Ad hoc and point-solution testbeds are not suitable for systems that require extendibility and scalability.
My research establishes the feasibility of a general-purpose testbed for unmanned systems management and control. The testbed, based on a formal ontology, enables modeling, analysis, testing, and verification of new technologies and integration schemes.
The application domain chosen for this dissertation is multi-UAV systems. However, my research has also demonstrated that the ontology is general by extending it to other domains, such as autonomous vehicles. To ensure that the testbed implementation follows the ontology, a mechanism is developed to check the implementation against the ontology. A key feature of the testbed developed in this dissertation is that it allows importing use cases and actors (agents) from external sources. This contributes to its flexibility and demonstrates that the testbed is not a point solution.