Collecting and Managing VGI Infrastructure Assessments in Support of Stability Operations
by
James Andrew Potter
A Thesis Presented to the
Faculty of the USC Graduate School
University of Southern California
In Partial Fulfillment of the
Requirements for the Degree
Master of Science
(Geographic Information Science and Technology)
December 2018
Copyright © 2018 by James Potter
In memory of my father, Gary Potter (DUSTOFF 15) August 19, 1947 – October 04, 2017
Table of Contents
List of Figures
List of Tables
Acknowledgements
List of Abbreviations
Abstract
Chapter 1 Introduction
1.1 Motivation
1.1.1. Advancing the Practical Applications of VGI
1.2 Understanding the Operational Environment
1.3 Infrastructure Assessments in Stability Operations
1.3.1. Post Assessment Infrastructure Surveys
1.4 Research Objectives
1.5 Thesis Structure
Chapter 2 Related Work
2.1 Volunteered Geographic Information (VGI)
2.1.1. Spatial Data Quality Standards for VGI
2.1.2. Benefits and Shortcomings of Crowdsourced Data
2.1.3. Positional Accuracy with VGI
2.1.4. Public Sector Use of VGI
2.1.5. Use of VGI in Stability Operations and Improving Local Governance
2.1.6. Military Geospatial Data Management
2.2 Existing Infrastructure Assessment Applications
2.3 Using Technology in Unsecured Areas
Chapter 3 Requirements and Design Methodology
3.1 Survey Workflow
3.2 Overview of Study Area used for Proof of Concept
3.2.1. Three Rivers
3.2.2. Woodlake
3.3 Survey components
3.4 System requirements
3.4.1. User Requirements
3.4.2. Administrator requirements
3.5 GIS Software Required
3.5.1. Software Choice
3.5.2. Platform Choice
3.6 Web Map Application Creation
3.6.1. Structuring Geodatabase for submissions
3.6.2. Coordinate System for collected data
3.6.3. Symbology of Infrastructure Features in ArcGIS Pro
3.6.4. Infrastructure Feature Attributes
3.6.5. Other features included in the geodatabase
3.7 Infrastructure Assessment Application Development
3.7.1. Survey 123 Workflow
3.7.2. Building the survey
3.7.3. Creating a Web Map for the Assessment Feature Layer
3.7.4. Publishing a Hosted Feature Layer from ArcGIS Pro
3.7.5. Creating a COP for infrastructure needs on operations dashboard
Chapter 4 Results
4.1 Assessment Collection
4.1.1. Survey feedback
4.2 Assessment Results
4.3 Analysis of VGI Infrastructure Submissions
4.3.1. Positional Analysis
4.3.2. Thematic Analysis
4.3.3. Control Data Analysis
4.3.4. Analyzing Survey Results in Survey 123
4.4 Infrastructure Assessment Maps
4.5 Infrastructure Assessment Dashboard
Chapter 5 Conclusions and Future Work
5.1 Summary of Application
5.2 Project Hurdles
5.2.1. Security of participants and the enterprise system
5.2.2. Use in areas of degraded network coverage
5.2.3. Bundling of submissions and limitations of collection using Survey123
5.3 Scalability
5.4 Future Work
References
Appendix A: Coding of Infrastructure Metrics
Appendix B: Infrastructure Survey Form (Sewage)
List of Figures
Figure 1: Village assessment in Kandahar Province Afghanistan
Figure 2: SWEAT-MSO definition and sewage scoring criteria
Figure 3: Overlap between infrastructure assessments and surveys
Figure 4: Ushahidi VGI submissions for Syria chemical attacks
Figure 5: Accuracy comparison of survey grade and recreational GPS devices
Figure 6: Army Geospatial Enterprise data flow
Figure 7: Crowdsourced village assessment workflow
Figure 8: District stability framework process
Figure 9: Afghanistan aid and infrastructure websites
Figure 10: Overview of study area in Tulare County CA
Figure 11: Symbology selected for SWEAT-MSO in ArcGIS Pro
Figure 12: Attribute table of infrastructure features
Figure 13: Workflow for building and publishing the infrastructure survey
Figure 14: Screenshots of SWEAT-MSO infrastructure assessment application
Figure 15: Published AO Tulare map in ArcGIS Online
Figure 16: Operations view menu in ArcGIS Online
Figure 17: AO Tulare COP and dashboard widgets
Figure 18: Infrastructure category and damage rating submission counts by type
Figure 19: Pie chart of infrastructure type submissions in Three Rivers and Woodlake
Figure 20: Infrastructure maps of Woodlake and Three Rivers
Figure 21: Online COP dashboard for AO Tulare
List of Tables
Table 1: The Operational Variables
Table 2: Stability and DSCA tasks
Table 3: Elements of spatial data quality
Table 4: Benefits and challenges of digital aid surveys
Table 5: Infrastructure category feature descriptions
Table 6: Comparison of VGI submissions for test features
Table 7: Comparison of VGI submission attributes for thematic accuracy
Acknowledgements
Thank you to my wife Aimee, my family, friends, coworkers, and others who have had to put up
with me during the long process of writing this thesis. I am also grateful to the USC faculty who
have guided me along the way.
List of Abbreviations
AI Area of Interest
AO Area of Operations
AGE Army Geospatial Enterprise
CAT Civil Affairs Team
COP Common Operating Picture
DSCA Defense Support to Civil Authorities
DOD Department of Defense
FEMA Federal Emergency Management Agency
GIS Geographic Information Systems
GISci Geographic Information Science
MRRD Ministry of Rural Rehabilitation and Development
NCDCDS National Committee for Digital Cartographic Data Standards
NGA National Geospatial Intelligence Agency
NGO Non-Governmental Organization
OSM OpenStreetMap
PMESII-PT Operational Variables
SDI Spatial Data Infrastructure
UNDP United Nations Development Program
USACE United States Army Corps of Engineers
UKAID United Kingdom Department for International Development
USAID United States Agency for International Development
VGI Volunteered Geographic Information
Abstract
Humanitarian assistance, disaster response and stability/peacekeeping operations are
an important part of current national defense strategy and represent an opportunity for the United
States to project goodwill around the world. This thesis explores using Volunteered Geographic
Information (VGI) to streamline the process through which aid and assistance are routed to those
most in need. The acronym SWEAT-MSO (Sewage, Water, Electricity, Academics, Trash,
Medical, Safety, Other) describes the metrics the United States Military uses for evaluating
infrastructure health in support of foreign stability operations. SWEAT-MSO features are coded
as green, amber, red or black based on the severity of damage and their ability to function.
Assessments are currently completed by deployed service members but by incorporating VGI,
this burden can be shifted to those who live and work within an affected area. Under the
proposed framework, volunteers use a browser-based infrastructure assessment app to capture
metrics and store them within a spatial database for analysis by Civil Affairs (CA) teams. VGI
assessments are displayed in real time within a common operating picture that spreads awareness
of infrastructure issues throughout the area of interest. This thesis demonstrates the VGI
infrastructure assessment concept by creating a custom app to collect assessments and a common
operating picture dashboard to display the results of the assessments. Unskilled volunteers
collected test assessments in two rural communities and the results were analyzed for spatial and
thematic accuracy. The successful collection of targeted infrastructure metrics and the user
reviews of the assessment app and the operations dashboard indicate that this method can be
expected to produce results in a forward deployed environment.
Chapter 1 Introduction
In the opening decades of the 21st century, many nations, particularly the United States, have
been forced to engage in prolonged humanitarian assistance and stability campaigns. Recent
campaigns in Iraq and Afghanistan stand out, but there are dozens of other examples that receive
only a small fraction of the attention those campaigns draw. While global
instability is on the rise, resources dedicated to building global stability are stretched thin.
While there are many factors increasing the difficulty of stability missions, there are also
reasons for optimism. The globe is more connected than ever before. Mobile devices are prolific
even in otherwise austere locations. The proliferation of technology presents an opportunity to
harness spatial data to change the paradigm of stability operations.
Democratization of GIScience is changing the world (Goodchild 2007; Goodchild and
Glennon 2010). To an extent this process is happening organically. Collaborative mapping
services such as Ushahidi have sprung up in areas affected by violence and political strife. Non-
Governmental Organizations (NGOs) also use crowdsourced data to effectively address
humanitarian issues. Governments, as usual, are relatively slow to fully embrace the benefits of
this emerging technology and have fallen behind the private sector (Hammon and Hippner 2012).
This thesis creates a template for a web application that units deployed in support of global
stability operations can use to collect spatial data relating to community infrastructure needs.
The research combines elements of Geographic Information Science (GISci) in emergency
management and governance, VGI quality, Spatial Data Infrastructure, and Joint Doctrine on
conducting infrastructure assessments.
1.1 Motivation
Motivation for this project takes several forms. The primary interest is to make collecting
infrastructure assessments and performing other stability tasks safer and more efficient. As a
former army officer with experience in stability operations the author has firsthand knowledge of
the challenges inherent to these types of endeavors (Figure 1). The author has also witnessed the
positive impact that efficiently run infrastructure projects can have on the lives of the people
living in impoverished areas. Poor governance and systemic poverty are major contributors to
global instability yet the resources to address such challenges are scarce and must be utilized
effectively. Protracted “nation building” type operations that seek to address the foundations of
instability such as those seen in the Iraq and Afghanistan wars are no longer feasible given
current resources and political constraints. This project harnesses VGI and local talent to capture
needed information safely and efficiently to address infrastructure deficiencies.
Figure 1: The author on a village assessment in Kandahar Province Afghanistan in 2011
1.1.1. Advancing the Practical Applications of VGI
This research builds on a wide body of research incorporating VGI. Using VGI in the
conduct of humanitarian assistance and disaster relief, or to populate spatial databases, is nothing new,
but this application puts a new spin on the practice. Once fully implemented, VGI as a method
for completing infrastructure assessments will provide a viable alternative to paper assessments
and will allow concurrent collection of assessments across an entire area of interest. The
challenges and opportunities of VGI infrastructure assessments explored in this research are
applicable to other endeavors that seek to assess physical infrastructure in denied areas. While
this method may not be suitable for all situations, the broad acceptance of VGI applications for
infrastructure assessments can pave the way for other public-sector VGI projects.
1.2 Understanding the Operational Environment
Under the Unified Land Operations concept there are four types of decisive action tasks
that can be assigned to Army units (other specialized missions nest within these tasks). These
tasks are Offense, Defense, Stability and Defense Support to Civil Authorities (DSCA). In each
of these tasks understanding the operational environment is of the utmost importance. The
operational environment encompasses aspects of the military and nonmilitary environment that
differ from one area to another and affect operations (Department of the Army 2012). The
operational variables, seen in Table 1 and remembered using the acronym PMESII-PT, are used
to capture vital facets of the environment that planners and commanders need in order to understand their
AO (Department of the Army 2017). This thesis focuses primarily on applications within the
Stability and DSCA tasks because they have the greatest applicability outside the military
context and within the GISci community at large.
Table 1: The Operational Variables (Department of the Army 2017)
Stability operations seek to create conditions where the inhabitants of an area see their
situation as “legitimate, acceptable and predictable” (Department of the Army 2012). DSCA
encompasses the process through which state and local authorities can request the use of federal
military resources in support of civilian activities. Typical requests cover disaster relief, assisting
with a special event such as the Super Bowl or addressing law enforcement challenges such as
riots or insurrections that overwhelm the resources of local agencies.
Table 2 describes the respective tasks and their associated purpose inherent to DSCA or
stability operations. In general, DSCA occurs domestically and faces strict constitutional limits
on what activities can and cannot be conducted. Stability Operations pursue similar ends as
DSCA but occur on foreign soil under specific conditions arranged with a host government under
the umbrella of a Status of Forces Agreement.
Table 2: Stability and DSCA Tasks (Department of the Army 2012)
1.3 Infrastructure Assessments in Stability Operations
Infrastructure assessments help planners allocate resources and determine priorities of
effort. These assessments help planners understand the operational environment by answering
infrastructure information requirements as outlined in the operational variables. During a tour of
duty in southern Afghanistan in 2010-2011, the author completed hundreds of such assessments
of local communities. The purpose was to determine each community’s baseline needs.
Following the initial assessment, communities deemed to be most in need of essential services
would be visited by professional engineer teams for a more robust infrastructure survey. After a
potentially lengthy process, contracts for infrastructure development were drafted and work
could begin. Building host nation capacity and fostering confidence in local governance meant
certifying measures of performance and effectiveness. With that goal in mind infrastructure
assessments continued for the life of a project and beyond.
Mounting dedicated patrols to conduct assessments in isolated villages is logistically
taxing and potentially dangerous to both the supporting service members and the local
inhabitants. These challenges meant that the critical work of rebuilding communities and
restoring services was often relegated to a lower priority than other tasks. The limited numbers of
units available dictated that even in the best of times progress on the assessments was slow. In
practice, many units attempted to delegate assessments to local community leaders. Since these
surveys were paper-based and still had to be sorted and verified this technique only marginally
reduced the workload on beleaguered project officers.
Metrics of infrastructure health in military operations are contained in the Engineer
Reconnaissance Manual ATP 4-34.8. Metrics are evaluated by the basic criteria of green, amber,
red, or black (Figure 2). In general, green denotes 100% functionality of the feature in question,
amber 50-99% functionality, red less than 50% functionality, and black a destroyed or
non-existent service. Complete scoring criteria are shown in Appendix A. The example given in
Figure 2 is specific to the sewage category.
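The general rule above can be expressed compactly. The following is a minimal sketch, not part of the published doctrine or of the thesis application, assuming only the percentage thresholds stated here; the category-specific details remain in Appendix A.

```python
def sweat_mso_rating(functionality_pct: float, destroyed_or_absent: bool = False) -> str:
    """Map a feature's functionality to a SWEAT-MSO color code.

    Thresholds follow the general rule stated above (green = 100%,
    amber = 50-99%, red = <50%, black = destroyed or non-existent);
    the complete criteria in Appendix A are category specific.
    """
    if destroyed_or_absent:
        return "black"
    if functionality_pct >= 100:
        return "green"
    if functionality_pct >= 50:
        return "amber"
    return "red"
```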
It is important to note that the health of a village’s infrastructure depends on more than a
value assigned to its structure. Assessments are generally completed during on-site inspections of
a village combined with the testimony of local stakeholders. For instance, an assessment team
would seek the advice of local teachers to determine whether schools are open, staffed and
supplied before assigning a score to the academic category. This face-to-face component helps
the assessor make an accurate assessment of the structures that factors in the social components
of infrastructure health.
Figure 2: SWEAT-MSO definition and sewage scoring criteria
(Department of the Army 2016)
1.3.1. Post Assessment Infrastructure Surveys
An infrastructure survey is completed following the initial infrastructure assessment. The
primary difference between the two is the degree of technical information and the amount of
expertise required to complete them (Department of the Army 2016). By nature, the
infrastructure assessment is a quick, tactical analysis conducted by frontline troops in what may
be an unsecured environment (Figure 3). The survey is usually completed by dedicated engineer
teams with associated support from medical, veterinary, civil affairs, communications, and other
specialties as appropriate. The survey also requires an area to be secure while the infrastructure
assessment does not. The first page of the infrastructure survey for the sewage element is shown
in Appendix B. Infrastructure surveys encompass elements of the assessment but the survey itself
is not a component of this research. Infrastructure surveys are discussed here to give the reader a
holistic view of the process.
Figure 3: Overlap of infrastructure assessments and surveys (Department of the Army 2017)
1.4 Research Objectives
The primary objective of this project was to build an application to facilitate volunteer
collection of infrastructure assessments in accordance with published doctrine. Sewage, Water,
Electricity, Academic, Trash, Medical, Safety and Other infrastructure are the target features for
capture. This project includes a detailed workflow to demonstrate the feasibility of using VGI
submitted via commonly available mobile devices and their embedded GNSS receivers to
capture pertinent features and store them effectively within the Army enterprise geodatabase. Once
VGI assessments are submitted they will populate a customizable dashboard that enhances
situational understanding of the AO and facilitates timely and accurate infrastructure decisions.
The application can be tailored to the unique needs of geographically and culturally
disparate audiences. Different areas have different requirements and the assessment application
was designed to maximize flexibility. The workflow was designed to accommodate individuals
who are digital natives but who otherwise may have extremely limited formal education.
Methods of ensuring data quality and completeness within the database were explored.
1.5 Thesis Structure
The remainder of this thesis is organized into four chapters. Chapter 2 reviews related
work in the fields of VGI, spatial data quality, data management, stability operations and the
ways these fields intersect. Chapter 3 explores the design and use of the application and
operations dashboard. Results are found in Chapter 4. Conclusions and recommendations for
future study are found in Chapter 5.
Chapter 2 Related Work
Crowdsourced data is ubiquitous in the modern world. Average users may only be vaguely aware
of its prevalence but the news feeds on social media, mobile mapping services and even popular
mobile games all rely on user-generated information. OpenStreetMap (OSM) is a common
platform that allows users to submit spatial information that, in aggregate, can expose very
meaningful trends. Crisis maps on platforms like OSM have sprung up to tell the story of many
thousands of individuals caught in the midst of tragedies both large and small (Ahn, Hervé, and
Zinsz 2017). One such mapping platform, Ushahidi, has seen tremendous grassroots success in
telling the stories of people mired in disaster that would otherwise go unheard. An example from
the Ushahidi Syria Tracker capturing instances of chemical weapons use is seen in Figure 4
(Okolloh 2009; Ushahidi 2018). Such maps are powerful because they harness an impulse to do
something and enable any user equipped with a mobile device to generate content that tells an
individual story in the context of a larger event (Bittner, Michel, and Turk 2016).
This chapter reviews research relevant to the field of VGI in general while focusing on
research most applicable to applying VGI in support of stability operations. While a great deal of
research was uncovered on crowdsourced spatial data in disaster areas and in emergency
management, much of it dealt with data mining of social media feeds or verifying accuracy and
other highly specific topics. However, there were some authoritative works on using VGI to
maintain accountability of disaster aid which closely resembles the goal of this project.
Figure 4: Ushahidi VGI submissions for Syria chemical attacks (Ushahidi 2018)
2.1 Volunteered Geographic Information (VGI)
VGI is considered a subset of the broader internet phenomenon of user generated content.
Michael Goodchild coined the term to describe the actions of geographic amateurs who “create,
assemble and disseminate” spatial information online (Goodchild 2007). VGI is an extension of
Web 2.0, which describes the democratization of the web and the shift of power to the
ordinary user, who can now generate content. There are valid concerns with VGI. There is a
noted inconsistency in quality regarding spatial and attribute accuracy in crowdsourced data.
Some researchers have suggested ethical boundaries regarding collecting and using VGI. Ethical
concerns are especially stark when the data collected has the potential to violate privacy or basic
human rights (Fleming, Sedano, and Carlin 2018).
Despite these issues, VGI has opened new ways for the public to access and produce spatial
data. Various works describe VGI as powerful because it is cheap, easy, flexible and “accurate
enough” to be useful in a wide range of activities (Johnson and Sieber 2013).
2.1.1. Spatial Data Quality Standards for VGI
Spatial data quality is an ongoing concern in GIScience that will likely never be fully
resolved. As in other sciences, introducing an error into a process causes cascading errors that can
drastically affect the output. Since VGI is collected by volunteers (volunteers are not necessarily
unskilled) there are concerns about user error and the variations in the positional accuracy of
different devices. The National Committee for Digital Cartographic Data Standards (NCDCDS)
introduced six requirements for digital cartographic data in 1988 (Morrison 2013). There are
multiple formulations regarding spatial data quality, but most researchers have retained the
original six NCDCDS elements and added additional elements to reflect current technology.
Van Oort (2006) looked at a variety of methods for determining spatial data quality.
Methods used by the NCDCDS, International Standardization Organization (ISO), International
Cartographic Association (ICA) and others were found to have overlapping criteria. The
combined elements of spatial data quality from these sources are seen in Table 3.
Van Oort also formulated a three-part fitness-for-use methodology for spatial data that considers: (1)
the information needed for the intended application; (2) the constraints of using spatial data such
as legal or financial liabilities or costs; and (3) if spatial data quality contains risks, whether the
risks are acceptable.
Table 3: Elements of spatial data quality (Van Oort 2006; Morrison 2013)
2.1.2. Benefits and Shortcomings of Crowdsourced Data
VGI has the benefit of being able to produce content where there is none or where it is
so scarce and incomplete that existing content is of little or no value to the consumer. A volunteer with local
knowledge but lacking formal training can sometimes create products superior to that which a
“mapping expert in a distant government agency” might produce (Goodchild 2012). The
downside is that VGI carries no guarantee of quality. Goodchild goes on to suggest three
mutually supporting methods of approaching VGI quality. These methods help determine
whether VGI features are: (1) valid contributions; (2) something useful but containing errors; or
(3) something that should be rejected. The three methods were the crowdsourcing approach, the
social approach, and the geographic approach.
2.1.2.1. Crowdsourcing Approach
Goodchild (2012) identified three distinct meanings to the term crowdsourcing. The first
seeks a solution to a problem by referring it to a population, without regard to qualifications, and
expecting that the “crowd” will come up with an adequate solution. This technique is widely
used in the age of the web but is not strictly dependent on technology.
The second and third meanings have less focus on solving problems and more emphasis
on harnessing the wisdom of a crowd to correct errors or corroborate the statements of an
individual. The second interpretation of crowdsourcing was explained by using a metaphor of
citizens reporting wildfires. While one report from a trustworthy source can be compelling,
multiple corroborating reports from different vantage points paint a much more complete picture
of an event. By this interpretation each new report lends additional credence to spatial clusters of
similar reports.
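As an illustration of how this kind of corroboration might be operationalized, the sketch below (illustrative only, not drawn from the thesis) counts how many prior reports of the same category fall within a chosen radius of a new submission; a larger count suggests the crowd is converging on the same feature.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS84 points, in meters."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def corroboration_count(report, prior_reports, radius_m=100):
    """Count earlier reports of the same category within radius_m of a new report.

    A higher count lends additional credence to the spatial cluster the
    new report falls into.
    """
    return sum(
        1
        for p in prior_reports
        if p["category"] == report["category"]
        and haversine_m(report["lat"], report["lon"], p["lat"], p["lon"]) <= radius_m
    )
```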
The third interpretation of crowdsourcing Goodchild (2012) highlights is the tendency of
a crowd to converge on the truth. The premise that large numbers of volunteers will generate a
self-policing ecosystem of information is what allows platforms like Wikipedia to thrive. If an
individual commits an error then, in most cases, other users can be expected to notice and correct
the error. An exception occurs with more obscure data. In a geographic context a prominent
feature will receive more attention than an isolated one. People are also more likely to assist in
the resolution of an issue they are interested in. Given that the intended audience for this project
resides in more isolated areas with fewer services, a key driver for success will be creating
enough interest to motivate the participation of residents.
2.1.2.2. Hierarchy Approach
The next approach to VGI quality is the social or hierarchy approach. This approach
depends on the participation of trusted individuals. Research has shown that there is a
distribution in the frequency of participation among users of a crowdsourcing platform. A few
individuals make many contributions while many individuals make only single or a small
number of contributions. This approach introduces a metric for reliability and trust for users who
make frequent contributions that are factual and correct. Members higher on the “social ladder”
may obtain certain edit or discretionary permissions that are not afforded to less trusted users.
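A minimal sketch of such a reputation metric is shown below; the function names and thresholds are assumptions made for illustration rather than part of any particular platform.

```python
def contributor_trust(accepted: int, rejected: int, min_history: int = 5) -> float:
    """Share of a contributor's reviewed submissions that were accepted.

    Contributors with too little history default to a score of 0 until
    they establish a track record.
    """
    total = accepted + rejected
    if total < min_history:
        return 0.0
    return accepted / total

def can_edit_others(accepted: int, rejected: int, threshold: float = 0.9) -> bool:
    """Grant elevated edit permissions only to consistently reliable contributors."""
    return contributor_trust(accepted, rejected) >= threshold
```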
This approach will be less applicable in the context of infrastructure assessments since
the gate keepers who manage the data and maintain the data infrastructure will be within a
government hierarchy and not one that exists only in the context of the VGI platform. The
proposed infrastructure application will also be primarily marketed to stakeholders within the
components of the infrastructure assessment. The trust placed in the status of these individuals
will negate the need for a network of privileged contributors.
2.1.2.3. Geographic Approach
The third approach is the geographic approach. This method largely tracks Waldo
Tobler’s (1970) First Law of Geography, which states that “everything is related to everything else,
but near things are more related than distant things.” This method requires an element of human
verification to contrast what facts are known about a location and what are reported via VGI.
Science or other conventions define what may or may not occur at a given location. An overly
simplistic but effective example would be that a submission for a coffee shop located in a lake is
likely to have been placed there by error. Goodchild (2012) wrote that the research for this
approach was promising but the lack of automation in the process limited its potential.
Within the context of stability operations, the geographic approach will likely be very
useful. Units engaged in stability operations are intimately familiar with the human and
environmental geography of their operating areas. Villages in a given region tend to follow a
template dictated by the human and environmental geography of the area. Using a combination
of remote sensing and human intelligence, the data corroborators in the Civil Affairs Teams will
be able to determine the general usefulness of a crowdsourced data point.
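A simple plausibility check of this kind could be automated along the following lines; the sketch uses the third-party Shapely library and hypothetical GeoJSON layers (known water bodies and the area of operations) purely as an illustration.

```python
from shapely.geometry import Point, shape

def plausibility_flags(submission, water_bodies_geojson, aoi_geojson):
    """Flag submissions that violate simple geographic expectations.

    water_bodies_geojson and aoi_geojson are GeoJSON geometries for known
    lakes and for the area of operations; the layer names are illustrative.
    """
    pt = Point(submission["lon"], submission["lat"])
    flags = []
    if not shape(aoi_geojson).contains(pt):
        flags.append("outside area of operations")
    if shape(water_bodies_geojson).contains(pt):
        flags.append("located in a water body")
    return flags
```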
2.1.3. Positional Accuracy with VGI
Positional accuracy in VGI is a concern since the GPS devices contained in the average
smartphone are not of survey or professional grade accuracy (Heipke 2010). The spatial
relationship between features is often of critical importance in scientific research. The 6-15 m
horizontal accuracy of a typical smartphone is often good enough for navigation or recreational
purposes but leaves something to be desired when collecting field data (Schaefer and Woodyer
2015). While seeking to quantify the accuracy of volunteered data, Haklay (2010) compared
OSM data to the Ordnance Survey of Great Britain. The study showed an average commonality
of 80% between the OSM and Ordnance Survey overlays, with percentages of overlap varying
from 60% to 89%. Haklay found that the positional accuracy of a dataset can be no better than the
error of the GNSS receiver or the imagery from which its points are extracted. If aerial imagery has
a 5 m resolution, error cannot be less than 5 m. If a GNSS receiver has a self-location error
between 5-10 m, then the error cannot be less than 10 m.
Another interesting study on positional accuracy in VGI compared relative positional
accuracy of popular mobile devices, recreation grade GNSS receivers and survey grade
equipment. The purpose was to study the effects of device choice on location reporting. In one
experiment, a Trimble 5600TS with 5 cm accuracy was compared with a Garmin eTrex receiver
in measuring a distinct historical site in the UK. The mean difference between points taken by
the two devices was 1.15 m (Figure 5). The author noted that factors such as weather, time,
vegetation, slope and the number of satellites being tracked can all adversely impact receiver
accuracy (Schaefer and Woodyer 2015).
Figure 5: Accuracy comparison of survey grade and recreational GNSS devices (Haklay 2010)
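A paired-point comparison like the one described above can be computed directly once both sets of fixes are expressed in a projected, meter-based coordinate system; the coordinates below are hypothetical and serve only to illustrate the calculation.

```python
import math

def mean_offset_m(reference_xy, test_xy):
    """Mean horizontal offset between paired points from two receivers,
    assuming projected coordinates in meters (e.g., UTM)."""
    offsets = [
        math.hypot(rx - tx, ry - ty)
        for (rx, ry), (tx, ty) in zip(reference_xy, test_xy)
    ]
    return sum(offsets) / len(offsets)

# Hypothetical example: survey-grade vs. recreational fixes of the same corners
survey = [(345210.15, 3920407.88), (345233.40, 3920415.02)]
handheld = [(345211.05, 3920408.60), (345232.30, 3920416.10)]
print(round(mean_offset_m(survey, handheld), 2), "m")
```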
2.1.4. Public Sector Use of VGI
Geographic information production is traditionally the domain of elite and professional
organizations within government, academia and certain corporate entities who specialize in such
products. In the contemporary environment that paradigm is shifting. There are many recent
examples of VGI supplementing or even replacing professionally made products and GIS
professionals are taking notice (Goodchild 2012). The National Geospatial Intelligence Agency
(NGA) is the premier agency for collecting spatial intelligence and producing the products that
guide policy decisions. In late 2015, the NGA established a working group titled the VGI
TRIAD, which explores three supporting legs of crowdsourced spatial data: Active VGI, Passive
VGI, and Community Sourcing. The active component queries the public
about specific geographic features, the passive component feeds data to the public for further
VGI “enrichment” and the community sourcing component establishes a forum for analysts to
collaborate on datasets in real time (Mortenson 2016).
On a local level, the appeal of vast pools of data harvested by engaged citizens at little
cost to cash strapped governments is contributing to a gradual acceptance of VGI in the public
sphere. Challenges remain and there are limits to the extent public entities will expose
themselves to liability by using VGI. One study found municipal governments in the UK,
Australia, Western Europe, and the USA were often initially eager to embrace VGI, but
enthusiasm waned as projects neared implementation. VGI as an abstract concept is appealing
but in practice bureaucratic hurdles can bog down projects and sap enthusiasm (Johnson and
Sieber 2013). Projects of high fiscal or public safety consequence are unlikely to rely solely on
the use of crowdsourced data. Those will continue to be largely handled by professionals.
Creative uses of VGI on behalf of fledgling governments in crisis may face less bureaucratic
resistance than if similar methodologies were employed by more developed countries.
2.1.5. Use of VGI in Stability Operations and Improving Local Governance
A service member assigned to stability tasks in a conflict zone can expect a host nation
government to suffer from a lack of legitimacy in the eyes of its citizens. As a junior officer
engaged in these operations, the author frequently encountered residents who had complete faith
in coalition forces to accomplish tasks but who would refuse to engage with agents of their own
government. Those citizens believed, not without cause, that their government was corrupt and
impotent and that engaging with it brought mortal peril and no possibility of positive results.
Unfortunately, the volunteer nature of VGI lends itself to abuse in the context of stability
operations. Bad actors will inevitably co-opt the process to siphon resources towards their own
causes, subvert the lawful government, lead friendly forces into ambushes and engage in
countless other nefarious schemes. Abhorrent behavior can and will occur on any platform that
encourages public participation. Geographic platforms may be particularly vulnerable to bad
online behavior because online contributions will lead to responses in the real world (Elwood,
Goodchild, and Sui 2013). Careful vetting of claims submitted via the infrastructure survey is
essential. Follow up infrastructure surveys of the initial assessments must be corroborated with
up-to-date threat assessments and risk analysis and executed with caution.
VGI has the potential to engage citizens and presents local governments with a way of
proving competency and thus initiating a cycle of participation. Once initiated, this cycle is
mutually beneficial. The government saves money and citizens have an outlet to leverage their
talents and knowledge to support decision making within their government. According to
Johnson and Sieber (2013), two-way participation of this type changes the dynamic of citizens as
sensors into a more effective relationship of citizens as partners. A web-based VGI application
enables citizens to participate anonymously while contributing to their government and
community without the overt appearance of picking sides.
2.1.6. Military Geospatial Data Management
Brigade Combat Teams and all Civil Affairs Battalions have dedicated Geospatial
Engineer Teams (Department of the Army 2017). Geospatial engineers are responsible for
managing the enterprise geospatial database within their organization. They are also responsible
for generating and analyzing the geospatial data captured within their organization. The Army
Geospatial Enterprise (AGE) shown in Figure 6 is a distributed geodatabase that supports an
Army-Wide Common Operating Picture. A joint database supporting the entire DOD and partner
agencies is administered by the NGA and draws from all service component databases.
Figure 6: Army Geospatial Enterprise data flow (Department of the Army 2017)
2.2 Existing Infrastructure Assessment Applications
The Instrument Set, Reconnaissance and Surveying (ENFIRE) Kit is currently issued to
engineer teams to perform engineer tasks which includes infrastructure assessments. The kit
includes a Toughbook computer, a laser range finder, a survey grade GNSS receiver and several
other components. The software suite includes Esri ArcGIS Desktop as part of the Distributed
Common Ground System - Army. The ENFIRE Kit allows geospatial information to be captured
and transmitted to an enterprise geodatabase in real time (Northrop Grumman 2018). Unlike
host-nation application users, the administrators will be required to maintain a secure
communications network. The ENFIRE kit allows spatial information to be cached within the
standalone Toughbook computer and uploaded to the AGE when attached to a secure network.
The Federal Emergency Management Agency (FEMA), in cooperation with the
American Red Cross, developed a damage assessment methodology to conduct assessments in the
wake of disasters. This methodology defines structures as affected, minor, major or destroyed
based on the severity of the damage. Like infrastructure assessments these damage assessments
are usually carried out by dedicated teams of collectors. The assessments are aggregated and
affected communities are assigned a score that helps determine their eligibility and need for aid
(Federal Emergency Management Administration 2016). VGI is increasingly being utilized to
assist collection teams in this effort. One novel approach proposed using spatial video to rapidly
survey large swaths of terrain. Spatial video can be collected by a variety of means, including
airborne sensors or drones, but it can also be rapidly collected by an individual driving through
an affected area. After collection, geotagged still frame photos are analyzed to assign damage
criteria to affected structures (Lue, Wilson, and Curtis 2014).
2.3 Using Technology in Unsecured Areas
Secure Access in Volatile Environments (SAVE) is a research foundation funded by
UKAID. The group studies humanitarian efforts in conflict zones and attempts to quantify results.
SAVE recommends using a variety of methods to verify that development and humanitarian aid
is reaching the intended recipients. Verification methods include phone and in-person interviews,
radio broadcasts, internet surveys and remote sensing. The study noted Afghanistan and South
Sudan as locations where some residents are afraid to carry mobile devices for fear of intimidation by
armed criminal groups. Syria, Turkey, Somalia, and Iraq reported much more freedom to openly
use technology in aid reporting (Schepard 2016). With these examples in mind, the
appropriateness of any single method of assessment collection should be considered against the
reality on the ground in a particular area of operations. Benefits and challenges of using digital
devices to aid data collection in volatile areas are shown in Table 4.
Table 4: Benefits and challenges of digital aid surveys (Schepard 2016)
Chapter 3 Requirements and Design Methodology
This chapter presents the requirements and design methodology for producing the infrastructure
assessment application, web map and operations dashboard.
3.1 Survey Workflow
This research produced a simple and intuitive workflow that supports the collection of
infrastructure metrics within a given area of operations (Figure 7).
Figure 7: Crowdsourced village assessment workflow
Infrastructure metrics support recovery in affected areas by enabling the District Stability
Framework Process shown in Figure 8. Building confidence in the population while reducing
accelerants of conflict and providing for basic needs and services helps produce lasting stability
(USACE 2016). Situational awareness is enhanced by knowing the location and severity of
damaged infrastructure. Analysis is conducted on the spatial data harvested to identify trends.
Design is based on feedback from local stakeholders identified in the assessments. During the
implementation phase, project benchmarks are independently monitored to ensure all measures
of performance and measures of effectiveness are being met.
Figure 8: District stability framework process (Department of the Army 2012)
Infrastructure assessments will be reviewed by units engaged in stability operations but
the assessments themselves will be spearheaded by host governments with the assistance of
NGO’s and development organizations such as USAID and the UNDP. Military units in
partnership with local government officials, police and community leaders will advertise the
surveys in centers of gravity such as markets and places of worship. The surveys will also be
prominently placed within aid and development websites. The Afghanistan Ministry of Rural
Rehabilitation and Development (MRRD) and the USAID-sponsored Afghanistan
Infrastructure Rehabilitation Program (AIRP) are examples of such web pages. The Farsi and
English versions of their websites and the MRRD Facebook page are shown in Figure 9.
Figure 9: Afghanistan aid and infrastructure websites hosted by USAID and MRRD (2018)
Submitted infrastructure data will be reviewed and vetted by development staff.
Depending on the stage of the operation, this task may fall to uniformed service members within
the AO encompassing the feature or it may fall to another entity within an aid agency or the host
government. The review corroborates the data with available intelligence and deconflicts
responses with other ongoing projects or operations. Full infrastructure scoring criteria are found
in Appendix A. If all infrastructure reports across all categories are green, then the priority for
that village will be low. If multiple categories in another village are ranked as amber, red, or
black then that village will be a higher priority for engineer teams to visit for a formal
infrastructure survey. The VGI infrastructure database is iterative. Entries will continue to be
accepted, reviewed, and updated for the lifecycle of a project. All records and procedures are to
be maintained and documented for ultimate transition to host nation government control.
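One illustrative way to encode the prioritization logic described above is to weight each non-green category by its severity and rank villages by the resulting score; the weights and village names below are assumptions for demonstration, not values taken from the thesis.

```python
RATING_ORDER = {"green": 0, "amber": 1, "red": 2, "black": 3}

def village_priority(assessments):
    """Score a village's need from its latest category ratings.

    `assessments` maps SWEAT-MSO category -> rating string. A village whose
    categories are all green scores 0 (low priority); amber, red, and black
    categories raise the score, weighted by severity.
    """
    return sum(RATING_ORDER.get(rating, 0) for rating in assessments.values())

def rank_villages(village_assessments):
    """Return village names ordered from highest to lowest need."""
    return sorted(
        village_assessments,
        key=lambda v: village_priority(village_assessments[v]),
        reverse=True,
    )

# Illustrative input
villages = {
    "Village A": {"sewage": "green", "water": "green", "electricity": "green"},
    "Village B": {"sewage": "red", "water": "amber", "electricity": "black"},
}
print(rank_villages(villages))  # ['Village B', 'Village A']
```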
3.2 Overview of Study Area used for Proof of Concept
While this research focuses on the applicability of VGI in conflict or disaster
areas, testing occurred within a pair of communities in the Sierra Nevada Foothills of Tulare
County. This area was chosen because the author was raised there and is intimately familiar with
the terrain and culture of the region. In certain respects, the region mirrors areas where stability
operations are needed. Many of the residents are first generation immigrants and residents of the
area suffer from poverty and crumbling infrastructure. Conditions of severe drought and serious
flooding are common, and damage from wildfires or floods is an annual worry. The communities
are isolated and do not have contiguous borders with other communities which simplifies the
task of sorting assessments.
Tulare County is located south of Fresno and runs from the San Joaquin Valley into the
foothills of the Sierra Nevada Mountains. The county is home to approximately 400,000 people
as of the 2010 Census. Agribusiness is the largest economic driver in Tulare County (US Census
Bureau 2010) with that sector dominated by dairy, other cattle products, and varieties of citrus
fruit. The World Agriculture Exposition, colloquially known as “The Farm Show,” is the world’s
largest exposition of its kind and occurs every February in the city of Tulare. Temperatures
in the region range from ~20° F in the winter to well over 100° F in the summer. The
foothills depend heavily on various rivers flowing from the Sierra Nevada Mountains for
agriculture, power generation and tourism. For the remainder of this thesis the region
encompassing the general area of the test communities is described as AO Tulare (Figure 10).
Figure 10: Overview of study area in Tulare County CA
3.2.1. Three Rivers
Three Rivers is the easternmost and more rural of the two study communities. The name
is derived from the town’s location at a junction of the North, Middle and South Forks of the
Kaweah River. The Kaweah feeds the man-made Lake Kaweah, which serves as the
westernmost border of the town. The eastern edge terminates where Sequoia National Park
and Sequoia National Forest begin. The 2010 census places the population of Three Rivers at
2,182. The town’s history includes ranching and mining as well as more recent arrivals drawn to
a thriving “new-age” movement. Public infrastructure is limited in Three Rivers. There is a K-8
public school but the town lacks educational facilities beyond 8th grade. Power generation comes
from two hydro-electric plants and a solar farm. Most homes and neighborhoods utilize
groundwater wells for drinking water. Medical and emergency services are provided by a small
general practice doctor’s office and a county fire station which also houses an ambulance. There
is a single sheriff’s deputy in the town but no permanent police station. The town’s population has
shrunk during recent decades as the population ages and the tourism economy consolidates.
3.2.2. Woodlake
Woodlake is a small community in the San Joaquin Valley. The town has a semi-arid
climate and is primarily a ranching and agriculture-based community. The population hovers
near 7,600 residents living within a 2.5 square mile area surrounded by citrus groves that abut the
Sierra Nevada Foothills to the east (City of Woodlake 2018). The town has a permanent police
force, a high school, and some larger stores, so many residents of Three Rivers travel there to shop
or to send their teenage children to high school. According to the 2010 census residents
identifying as Hispanic or Latino represented 87.7% of the town’s population. The relative poverty of the
population contributes to a general lack of services and infrastructure. About 36% of families
residing within Woodlake have an income placing them below the federal poverty threshold.
3.3 Survey components
The components of an infrastructure assessment as defined in this research are contained
in the acronym SWEAT-MSO (Sewage, Water, Electricity, Academics, Trash, Medical, Safety,
Other). The feature descriptions are seen in Table 5. These metrics fulfill the infrastructure
requirement of the operational variables. Because they are the most tangible aspect of
infrastructure these metrics focus primarily on structures. According to published engineer
doctrine regarding infrastructure assessments, location must be established for each pertinent
structure. Efforts must also be made to determine the value to the community from the
perspective of the population (USACE 2016).
Table 5: Infrastructure category feature descriptions
Attribute Name    Type    Data Source    Notes
Sewage            Point   VGI            Center point of waste water treatment facility
Water             Point   VGI            Center point of primary water source
Electricity       Point   VGI            Center point of primary power source
Academic          Point   VGI            Center point of academic facilities
Trash             Point   VGI            Entrance point of collection area
Medical           Point   VGI            Entrance point of medical facilities
Safety/Security   Point   VGI            Center point of police/military facilities
Other Concerns    Point   VGI            Center point of noteworthy feature
Doctrine does not specify which geographic features should be recorded, nor does it
explicitly define the deficiencies to record. The assessment
relies on the best judgement of those providing the data to determine where deficiencies lie.
For qualitative decisions a simple binary assessment of ‘works’ or ‘does not work’ is
sufficient to record in the assessment so that deficiencies can be noted. For quantitative decisions
the Black – Green scoring criteria outlined in Appendix A will be applied. While completing an
assessment the following questions should be asked for each category:
Sewage: What is the status of the local sewage system? Where is sewage treated? Do
health and environmental risks exist?
Water: Where are the potable water sources? Are they adequate? Has testing occurred?
Electricity: What is the status of power generation facilities? Status of transmission
infrastructure? What critical facilities are without power? Where are fuel supplies?
Academic: Where are the schools in need of rebuilding or repair? What is the school age
population (K-8)? Do post-primary education facilities exist?
Trash: Is there a system for removing waste? Does waste accumulation create health or
environmental risks? Where is the trash disposed of?
Medical: Where are the available medical facilities? Are there emergency services? Are
there veterinary facilities? What medical specialties are in the area? Are there facilities
for vulnerable populations (women, children, elderly)?
Public Safety: Are there police and fire services available? Where are the facilities?
Other: Annotate the locations of any special hazards or concerns held by community
leaders.
- Transportation Networks: Airport Location? Downed bridges/roadway obstructions?
- Fuel Distribution: Location of fuel supply? Sufficient for all residents and businesses?
- Housing: Structurally sound with utilities access? Log location of destroyed homes.
- Explosive/Hazmat: Annotate all locations.
- Connectivity: Television/Radio/Newspaper access? Is there internet connectivity? Is
cellular telephone service available?
- Worship: Where are religious facilities? Are the needs of all faiths adequately met?
The information listed above is only intended as a rough guide for determining the
humanitarian and civic assistance needed in a given location. If any site or facility has
circumstances not directly addressed, it should be logged as a data point in the assessment and
then described as accurately as possible.
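As a rough illustration of how the binary works/does-not-work judgement and the black-to-green scale from Appendix A might be encoded for storage and filtering, consider the sketch below. The integer ordering and the helper functions are assumptions made for this example; the actual criteria are defined in Appendix A.

```python
# Illustrative encoding of the infrastructure rating scheme.
# The green/amber/red/black ordering follows the thesis narrative;
# the integer ranks are an assumption used only for sorting and filtering.

RATING_RANK = {"green": 0, "amber": 1, "red": 2, "black": 3}

def is_deficient(color_rating):
    """Flag anything rated worse than green as a deficiency to investigate."""
    return RATING_RANK[color_rating.lower()] > RATING_RANK["green"]

def binary_status(works):
    """Record the simple qualitative judgement described above."""
    return "works" if works else "does not work"

print(binary_status(False), is_deficient("amber"))  # does not work True
```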
3.4 System requirements
Users of this system fall into two distinct categories. The training and equipment needed
will be slightly different depending on whether the user is a collector or an administrator.
3.4.1. User Requirements
The residents of the host nation living in areas needing assessment will be the primary
collectors. The application is accessible on any web browser using commonly available
computers, tablets, or mobile devices. People residing in impoverished areas of the world may
not have IT Infrastructure enabling regular internet access, but cellular service is widespread.
Afghanistan had the worst internet connection speeds in the world as of mid-2016, but even users in that country have 3G or better coverage 79.6% of the time (Open Signal 2017).
The application does not require any specialized training or knowledge to operate. The
application is designed for use by people with minimal formal education but who have the
knowledge and skills to manipulate basic information technology. The user interface is simple
and intuitive and includes only basic functionality. Language and minor variations to format will
be made based on the area of operations. Functions that do not facilitate completing the
infrastructure assessment are not included in the app. Due to security concerns, basic users will not be able to see the results of their own submissions or other users’ submissions. While this stipulation may seem extreme, many communities in conflict areas do not want their homes on the map. By not advertising the locations of infrastructure projects, bad actors will not be able to use this data to target the projects or the people they benefit.
3.4.2. Administrator requirements
The administrators of the geodatabase and web map are those service members and
government officials who have responsibility for compiling and acting upon infrastructure
assessments. The lowest echelon responsible for viewing and editing submissions would be a Company Intelligence Support Team (COIST), with responsibility progressing through planning cells up to a Theater Plans (G3) or Engagement (G9) officer. Administrators within these organizations will require access to the ArcGIS Online and ArcGIS Pro software suites. They will also require access to the Survey123 app and the Esri Operations Dashboard and Web AppBuilder applications. The apps mentioned are included within the ArcGIS Online
subscription. The unit Geospatial Engineering Team is the point of contact for all software and
technical needs to support the infrastructure application.
3.5 GIS Software Required
This section outlines the choice of platform and software for the users to collect data and
for the administrators to archive and analyze the collected data.
3.5.1. Software Choice
Administrators will view collected data within the Esri Desktop and ArcGIS Online
suites of programs. This decision was made due to the use of Esri products within military
geospatial teams and on common soldier equipment such as the ENFIRE Kit. The web map layer
displays a map of the area of interest with individual features delineating infrastructure contained
in a geodatabase. ArcGIS Online is viewable on mobile devices and has a robust suite of widgets
designed for use by military personnel, so its selection brings many benefits to the user beyond
viewing infrastructure assessments.
Users gather VGI on Esri Survey123. Survey123 allows users to submit data directly into
the geodatabase hosted on ArcGIS Online. The permissions are set so that users of the app
cannot modify, update, or delete any data hosted on the server. This software was chosen because of its ease of use and because it does not require a user account to submit data. Future
versions of this project may use a custom app for collecting infrastructure surveys. The initial
map extent will display an area of interest set by the administrator.
A common operating picture displaying all infrastructure assessments within the AO is
displayed and managed through the Esri Operations Dashboard app. Operations Dashboard for
ArcGIS is a web app that provides visualization and analytics for a real-time operational view.
The dashboard shows user selected views and queries of layers published to ArcGIS Online.
3.5.2. Platform Choice
This application allows submissions by anyone with access to the website. The use of a
web-based application means the only limitation is access to an internet connection. While the
browser-based survey does not require any installed software, ArcGIS Online is also accessible through mobile applications. This feature allows data administrators to analyze submissions in the field.
3.6 Web Map Application Creation
This section outlines the creation of the infrastructure assessment on Survey123 and the
steps required to prepare the data for storage in the geodatabase.
3.6.1. Structuring Geodatabase for submissions
To display the collected data an enterprise geodatabase was created in ArcGIS Pro, with SQL Server as the underlying database. The feature classes within the geodatabase are the elements
of SWEAT-MSO contained in the infrastructure assessment. The geodatabase was then
registered, packaged, and uploaded to ArcGIS Online. As Survey123 infrastructure assessments
are submitted, the data is automatically populated in ArcGIS Pro and ArcGIS Online.
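The geodatabase itself was built interactively in ArcGIS Pro against SQL Server, but the same SWEAT-MSO point schema can be scripted. The sketch below uses a file geodatabase as a simplified stand-in for the enterprise database; the workspace path and feature class names are assumptions for illustration, and arcpy requires an ArcGIS Pro installation.

```python
# Sketch: scripting a SWEAT-MSO point schema with arcpy.
# A file geodatabase stands in for the enterprise (SQL Server) geodatabase
# described above. Paths and names are hypothetical.
import arcpy

workspace_folder = r"C:\data"        # assumed local folder
gdb_name = "sweat_mso.gdb"           # assumed geodatabase name
sr = arcpy.SpatialReference(3857)    # WGS 1984 Web Mercator (auxiliary sphere)

arcpy.management.CreateFileGDB(workspace_folder, gdb_name)
gdb = f"{workspace_folder}\\{gdb_name}"

categories = ["Sewage", "Water", "Electricity", "Academic",
              "Trash", "Medical", "Safety_Security", "Other"]

for name in categories:
    # One point feature class per SWEAT-MSO element.
    arcpy.management.CreateFeatureclass(gdb, name, geometry_type="POINT",
                                        spatial_reference=sr)
```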
3.6.2. Coordinate System for collected data
All data is collected in the Shapefile format (.shp). All shapefiles are points and represent
components of the assessment. The web map and shapefiles are in the WGS 1984 Web Mercator
Auxiliary Sphere projected coordinate system.
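Because the web map and collected points are stored in WGS 1984 Web Mercator (Auxiliary Sphere), latitude/longitude values captured with a handheld GPS must be projected before distances can be compared in map units. The short sketch below applies the standard spherical Web Mercator equations; the sample coordinate is hypothetical.

```python
# Convert WGS 1984 latitude/longitude (degrees) to Web Mercator
# (auxiliary sphere, WKID 3857) easting/northing in metres.
import math

R = 6378137.0  # WGS 1984 semi-major axis used by the spherical Web Mercator projection

def to_web_mercator(lon_deg, lat_deg):
    x = math.radians(lon_deg) * R
    y = math.log(math.tan(math.pi / 4 + math.radians(lat_deg) / 2)) * R
    return x, y

# Hypothetical point near Woodlake, CA
print(to_web_mercator(-119.098, 36.413))
```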
3.6.3. Symbology of Infrastructure Features in ArcGIS Pro
Symbology describes the use of symbols to represent spatial features on a map.
A feature submitted on Survey123 defaults to a single symbol for all feature classes. This
arrangement is not ideal for conveying an understanding of infrastructure problems. The
symbology of the layer will not generally be seen by the application user. To be consistent with
other features in the Army Enterprise Geodatabase, symbology was selected to make all graphics
consistent with approved joint military symbology (Defense Information Systems Agency 2014).
The selected symbology for SWEAT-MSO is seen in Figure 11.
Figure 11: Symbology selected for SWEAT-MSO in ArcGIS Pro
3.6.4. Infrastructure Feature Attributes
Each infrastructure point includes attributes to help describe the feature. Attributes include the village name, assessor name, assessor email, date/time the survey was collected, infrastructure category, infrastructure type, color rating of the assessed feature, importance of the assessed feature to the community, additional information, and two attachment (photo) fields. The fields not filled in by the user include an OBJECTID field that captures a unique identifier. The OBJECTID field is important because it helps maintain data integrity and allows different tables to be joined for more complex spatial analysis.
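The attribute schema described above can also be applied in a scripted way. The field names and lengths below are assumptions chosen for this illustration (the published survey defines its own names); the pattern simply adds one text or date field per attribute to a hypothetical feature class.

```python
# Sketch: adding the assessment attributes to a point feature class with arcpy.
# Field names and lengths are illustrative, not the published schema.
import arcpy

fc = r"C:\data\sweat_mso.gdb\Water"   # hypothetical feature class from the earlier sketch

text_fields = [
    ("village_name", 100), ("assessor_name", 100), ("assessor_email", 100),
    ("infra_category", 50), ("infra_type", 50), ("color_rating", 10),
    ("importance", 10), ("additional_info", 255),
]
for name, length in text_fields:
    arcpy.management.AddField(fc, name, "TEXT", field_length=length)

# Date/time the survey was collected; photo attachments are enabled separately.
arcpy.management.AddField(fc, "collected_on", "DATE")
arcpy.management.EnableAttachments(fc)
```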
Within the infrastructure type field there is a drop-down selection with 32 options that
give more specificity to the type of feature being targeted. These options include infrastructure
features such as libraries and TV stations. While SWEAT-MSO categories are the primary
collection goal, users are encouraged to include additional pictures, notes and information on
infrastructure type to ensure entries are verifiable and correct. Including these fields helps paint a
more detailed picture about community needs and deficiencies.
The attribute fields shown in Figure 12 were configured so that each field alias had a uniquely identifiable name. When the desktop map was published to ArcGIS Online, all attribute fields were transferred to the published data, ensuring continuity between the two platforms.
Figure 12: Attribute table of infrastructure features
3.6.5. Other features included in the geodatabase
The layer USA_Census_Populated_Places was added from the Esri Living Atlas for
populated areas throughout the USA as defined by the 2010 Census. This layer was clipped to
only include boundaries and attributes for the communities of Woodlake and Three Rivers. The
clipped feature contains the attributes of FID, ObjectID, State, State FIPS, Place Type, Place
FIPS, HousingUnits, Total_Pop, Pop_SqMi, Area_SqMi and shape. These are similar statistics to
what would be needed for an infrastructure assessment. Layers for major roads and rivers within
town boundaries were explored but not included in the final product due to the cluttering they
caused on the map.
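Extracting the two study communities from the national populated-places layer can be reproduced with a short script. The sketch below uses an attribute selection rather than a geometric clip, and the paths, layer name, and field name are assumptions for illustration.

```python
# Sketch: isolating Woodlake and Three Rivers from the national populated-places layer.
# Input/output paths and the NAME field are hypothetical.
import arcpy

places = r"C:\data\USA_Census_Populated_Places.shp"
gdb = r"C:\data\sweat_mso.gdb"

# Select just the two communities of interest, then copy them out as the AO boundary layer.
arcpy.management.MakeFeatureLayer(places, "ao_places",
                                  "NAME IN ('Woodlake', 'Three Rivers')")
arcpy.management.CopyFeatures("ao_places", f"{gdb}\\AO_Tulare_places")
```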
3.7 Infrastructure Assessment Application Development
This section describes the construction of the assessment survey on Survey123 and
publishing the survey to ArcGIS Online.
3.7.1. Survey 123 Workflow
Survey123 for ArcGIS is a simple and intuitive data gathering application that allows
surveys to be quickly created and disseminated. While there are other options for collecting field data, such as Collector for ArcGIS or a custom-built app, Survey123 was selected because it requires no proprietary software or specialized knowledge to operate. The workflow for creating and publishing a Survey123 assessment is contained in Figure 13.
Figure 13: Workflow of building and publishing Infrastructure Survey
3.7.2. Building the survey
Building a survey or viewing results in Survey123 requires an ArcGIS Online user account. After logging in and selecting Create New Survey, a blank survey form appears. Survey123 allows questions to be framed using a variety of templates, including multiple choice, dropdown menus, image, and geopoint. The creator of the survey determines the number of questions. The completed survey is published as a hosted feature layer, and each question corresponds to an attribute in the feature. Figure 14 shows a
screenshot of the final assessment format as seen in a web browser.
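The survey in this project was authored in the Survey123 web designer, but the same questions can also be defined as an XLSForm and published through Survey123 Connect. The sketch below writes a minimal, hypothetical "survey" sheet as CSV to show the structure; the question names, list names, and wording are assumptions, not the published form.

```python
# Sketch: the assessment questions expressed as XLSForm-style rows (type, name, label).
# This illustrates the alternative Survey123 Connect authoring route,
# not the web-designer workflow used in this thesis.
import csv

survey_rows = [
    ("text",                      "village_name",   "Village name"),
    ("select_one infra_category", "infra_category", "Infrastructure category (SWEAT-MSO)"),
    ("select_one infra_type",     "infra_type",     "Infrastructure type"),
    ("select_one color_rating",   "color_rating",   "Damage rating (green/amber/red/black)"),
    ("geopoint",                  "location",       "Feature location"),
    ("image",                     "photo_1",        "Photo of the feature"),
    ("text",                      "notes",          "Additional information"),
]

with open("survey_sheet.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["type", "name", "label"])
    writer.writerows(survey_rows)
```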
Figure 14: Screenshots of SWEAT-MSO infrastructure assessment application
Figure 14 (cont.)
3.7.3. Creating a Web Map for the Assessment Feature Layer
Once the survey was published to ArcGIS Online as a hosted feature layer it was ready to
receive submissions. The next step was to publish a web map from Survey123 to capture the
symbology and layer order of the submitted assessments. As discussed in the symbology section
the infrastructure assessment feature layer defaults to a single point for every feature. If
symbology from ArcGIS Pro is published and the assessment layer is added it overrides the
symbology and defaults to the single point. This problem is resolved by publishing a web map
for the layer from Survey123 and then adding layers published by ArcGIS Pro.
3.7.4. Publishing a Hosted Feature Layer from ArcGIS Pro
A feature service is used to publish the hosted feature layer which contains multiple
feature classes, tables and relationships that are stored on ArcGIS Online and are viewable as a
web map. ArcGIS Pro was used to create the desired map layer (.mxd) and geodatabase and then
published to ArcGIS Online as a hosted feature layer. ArcGIS Pro and ArcGIS Online have their
individual strengths and weaknesses and both were carefully leveraged in this project. ArcGIS
Pro is better for data analysis and management while ArcGIS Online allows broader
dissemination and collaboration. Because the purpose of publishing the map is to create shared
understanding of the infrastructure assessment progress within a given AO the editing
functionality of the feature layer was disabled. If needed for future work the permissions can be
changed on the item details page of ArcGIS Online. The published AO Tulare webmap is shown
in Figure 15.
Figure 15: Published AO Tulare map in ArcGIS Online
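Editing on the published feature layer was disabled through the item details page, but the same change can be scripted with the ArcGIS API for Python. The item ID, credentials, and capability string below are placeholders; this is a sketch of the approach rather than the exact step performed in this project.

```python
# Sketch: restricting a hosted feature layer to query-only access
# using the ArcGIS API for Python. Item ID and credentials are placeholders.
from arcgis.gis import GIS
from arcgis.features import FeatureLayerCollection

gis = GIS("https://www.arcgis.com", "username", "password")   # placeholder credentials
item = gis.content.get("<hosted-feature-layer-item-id>")      # placeholder item ID

flc = FeatureLayerCollection.fromitem(item)
# Remove create/update/delete so viewers of the COP cannot edit submissions.
flc.manager.update_definition({"capabilities": "Query"})
```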
Several edits had to be made to enhance the functionality of the webmap and ensure its
compatibility with the operations dashboard. The desired map extents for Tulare AO, Woodlake
and Three Rivers were saved as bookmarks. Pop-ups were enabled and configured to label each feature with its community, infrastructure type, and status. Hover text was also enabled to show the same label information when the mouse pauses over a feature.
3.7.5. Creating a COP for infrastructure needs on operations dashboard
The common operating picture view was created on the Windows application for
Operations Dashboard. The customizable dashboards can be viewed either on the Windows app
or through a web browser. Viewed in the Windows application there is some increased functionality, including an option to view multiple maps simultaneously and support for multiple screens. An operations view with multiple screens can be published to ArcGIS Online but cannot be viewed via a browser. A user accessing the operations view from the content tab of ArcGIS Online (Figure 16) has the option to open the dashboard in either configuration, although opening it in the Windows app requires the application to be downloaded first. The browser option has a simpler interface but has the advantage of being viewable from mobile devices or any computer that does not have the Windows app installed.
Figure 16: Operations view menu in ArcGIS Online
The common operating picture has a series of widgets designed to display the most
pertinent components of the progress of infrastructure assessments in a manner that conveys
rapid understanding to the viewer. Within the map window there are options for base map
selection, bookmarks, map contents, filters and selecting features.
Several queries were established to filter information based on likely user requirements.
The default filter is all features. This setting displays VGI submissions of all types and locations.
To display infrastructure in a specific village there is a query where the “Village Name” field
contains the value of “Three Rivers” or “Woodlake”. Queries filtering submissions based on the
extent of damage to the feature are included in the filters for damaged infrastructure. Damage filters look for a color code that is not “green” AND a village name placing the feature within one of the previously named communities. There is also a query for “submissions last 24 hours”, which displays submissions whose submission time falls within 24 hours of the current time. If this filter is used in conjunction with the damaged infrastructure filters, a very narrowly tailored view of current damaged infrastructure can be achieved in a given area.
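The dashboard filters were configured interactively, but equivalent logic can be expressed as attribute queries against the hosted feature layer. The field names and date handling below are assumptions used for illustration; the actual filters were built in the Operations Dashboard interface.

```python
# Sketch: where clauses equivalent to the dashboard filters described above.
# Field names (village_name, color_rating, date_submitted) are hypothetical.
from datetime import datetime, timedelta

def village_filter(village):
    return f"village_name = '{village}'"

def damaged_filter(village):
    # "Damaged" means any rating other than green within the named community.
    return f"color_rating <> 'green' AND village_name = '{village}'"

def last_24_hours_filter(now=None):
    now = now or datetime.utcnow()
    cutoff = (now - timedelta(hours=24)).strftime("%Y-%m-%d %H:%M:%S")
    return f"date_submitted >= TIMESTAMP '{cutoff}'"

# Combining filters narrows the view to recent damage in one community.
query = damaged_filter("Three Rivers") + " AND " + last_24_hours_filter()
print(query)
```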
The widgets shown in Figure 17 are linked to the map window and will display
information based on the filter selected. The dashboard notes window contains administrative
information essential for understanding the map. Included in the notes are the specific area of
concern, the last edit for the map and the responsible unit. The feature count gives the total
number of features active on the map extent. The bar graph breaks down the active features by
infrastructure category type. The legend displays all symbology for the active layer.
Figure 17: AO Tulare COP and dashboard widgets
Chapter 4 Results
This chapter outlines the results achieved and shows the final maps and dashboard views of the
Three Rivers and Woodlake areas. Section 4.1 describes how volunteers were selected, which
areas they were assigned and what information and training they received in advance of
collecting surveys. User feedback for the application and the web map is included in this section. Section 4.2 presents the results of the volunteer surveys, and an analysis of these results is provided in Section 4.3. Section 4.4 reviews the infrastructure assessment dashboard.
4.1 Assessment Collection
Testing was initiated as soon as a workable framework for the application, web map and
dashboard was completed. To collect the infrastructure assessments for Woodlake and Three
Rivers the author distributed the survey link to four volunteers residing in the local area. Therese
and Katy were assigned to Three Rivers while Marie and Joe collected data in Woodlake. The
author described the scenario to the volunteers and informed them that the survey is designed for
use in remote environments and that they should only utilize the cellular network rather than
wireless internet. The volunteers were made to understand that the data they were collecting was
being used to construct an infrastructure assessment of their community.
Volunteers were given no specific directions on how to complete assessments other than
the instructions contained in the survey. Each participant was sent a link to their mobile phones
via text message and email. Volunteer test data was collected on the 28th and 29th of April 2018.
Other test data can be found in the attribute tables from tests conducted by the author.
4.1.1. Survey feedback
Users generally agreed that the survey was intuitive and functional. One consideration
mentioned is the impact of small screens on menu selection. While users reported no issues
reading the survey instructions or menu items, the drop-down menus within the survey contain
many options and the user must scroll through the menu to select the appropriate attribute. There
were several cases of users selecting the wrong menu item in the survey due to the lack of fine
motor control with the touch screens on smaller mobile devices.
Of interest was how well the infrastructure survey app would perform in areas with poor
cellular service. The majority of Woodlake has 3-5 bars of LTE service. Three Rivers has
relatively poor coverage with most of the town only receiving 3G coverage and many parts of the
town have no coverage at all. Testing demonstrated that surveys would submit with only 2 bars
of 3G service, but problems arose when coverage was below that threshold. Since the survey can
be used on any web browser, one volunteer described taking a picture and a note of the location
and returning to their desktop to complete the survey.
The geotag function defaults to an awkward map extent when viewed on a mobile device. The app also defaults to a topographic basemap. While the pin is centered directly on the user’s device location,
sometimes the desired location for a feature is not exactly where the user is standing. It takes
some manipulation of the device to manually place the pin over the desired location. In one case
a user did not know how to change the base map when attempting to place a feature and the
location was not easily associated with the default topo map.
In the cases mentioned above the author was able to determine the intended location and feature type by cross-referencing the different attribute fields. If the user was close to the feature and included a picture, the author was generally able to infer the intended location.
4.2 Assessment Results
Volunteers collected 30 features in AO Tulare. A total of 16 were reported in Woodlake
while 14 were reported within Three Rivers. Of these, 26 infrastructure features were attributed to the green damage category, two were amber, and one each was classified as black and red. The breakdown for all SWEAT-MSO categories by number of entries and damage category is seen in
Figure 18.
Figure 18: Infrastructure category and damage rating submission counts by type
Of the 30 infrastructure features submitted, two were reported as within the city limits of a community but were actually located on unincorporated county land. This discrepancy is of little consequence because, in real-world stability operations, community boundaries will likely be poorly defined. For practical purposes, if a user submits an infrastructure feature it implies that the feature impacts the health and welfare of that community and should therefore be counted.
4.3 Analysis of VGI Infrastructure Submissions
Submitted assessments were reviewed to ensure validity and quality. Data was assessed
by positional and thematic analysis and by comparison to control data. Upon review of the data
there were 3 cases of user attribution error. One “OTHER” category was misattributed as an
“ACADEMIC” feature. There were also two mislabeled infrastructure types. In these cases, a
church was labeled as a cell tower and a school was labeled as a reservoir.
4.3.1. Positional Analysis
To gain an understanding of the positional accuracy of assessments two test features were
selected. Volunteers were instructed to submit an assessment for those points. The results of
these comparisons are summarized in Table 6.
Table 6: Comparison of VGI submissions for test features
The two test features were selected for their prominence and distinguishability. The
control point coordinates were taken using a Garmin GPSmap 60CSX. Volunteers were given no
prompting on which attributes they should select or any other information other than the standard
instructions encountered when submitting the survey. VGI submissions returned a modest degree
of accuracy when compared to the control data. Manipulating the “pin drop” geotag in
Survey123 requires a degree of fine motor skills but an astute collector can place the point with
some precision. If the user is careless in the geotag placement then a high degree of error can be
expected. The assessment app instructions specify that the point should be placed directly over
the center point of the feature being collected.
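The positional comparison in Table 6 amounts to measuring the ground distance between each VGI geotag and the Garmin control coordinate. A simple haversine calculation, sketched below with hypothetical coordinates, is sufficient at these distances.

```python
# Sketch: ground distance (metres) between a VGI geotag and a GPS control point.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS 84 lat/lon points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical control point and volunteer submission near Three Rivers, CA
control = (36.438, -118.904)
submission = (36.4382, -118.9045)
print(round(haversine_m(*control, *submission), 1), "metres of positional error")
```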
4.3.2. Thematic Analysis
There was some variation in the themes of the non-spatial data contained in the VGI assessments. Variations stemmed from different interpretations of what a feature was and from an incomplete menu of options. When producing the content in the Survey123 app the author
deliberately used drop down menus or radio buttons to describe as many options as possible. The
purpose of standardizing options was to reduce thematic variance between different users. Even
with the benefit of defined options some submissions did not fit into a suggested category. In
these cases, users selected the “other” type and described the feature in the notes text box. The
assessment questions were designed such that the questions mutually reinforce each other. The
village name question can be corroborated with the placement of the geotag. The infrastructure
category and infrastructure type questions help narrow down the precise nature of the feature, but the essential function of both questions is to place a feature within the framework of SWEAT-MSO. The photo requirement in each submission offers another tool to cross-check the accuracy
of the infrastructure type and category questions. Even if one or both type and category questions
misattribute the feature, the picture and the context provided by the remaining questions should
allow the administrator to correctly identify the feature. As a final resort the volunteer name and
email can be leveraged to seek additional clarity on the feature submission.
4.3.3. Control Data Analysis
Test feature attributes are summarized in Table 7. The attribute table for infrastructure submissions was exported to Excel, and submissions for the test cases were compiled as a table.
Volunteers received no coaching for these submissions other than being instructed via text to
collect these specific infrastructure points. All submissions included the full names and emails of
the participants. Village name submissions included two counts of the town name and two that
contained the names of nearby businesses. The village name is relatively subjective and
misattributed submissions would not be detrimental in isolated locales. All submissions correctly
attributed features to the water category. Most submissions correctly placed the feature type as
“other” and put information in the notes that it was some type of water regulation device. One
individual left the note information blank, but the author was able to determine intent by viewing
the picture and by knowing the category. The importance of the assessed variables ranged
between medium and low. Damage criteria was uniformly assessed as green.
Table 7: Comparison of VGI submission attributes for thematic accuracy
4.3.4. Analyzing Survey Results in Survey123
Survey123 offers analytical tools to evaluate survey submissions. Within the analyze tab
of the SWEAT-MSO Survey there are options to view the breakdown of each assessment into
categories based on the answers given to the assessment questions. The numbers of assessments
for each individual participant and for the assessed communities are included in the summary. A
time filter can be used to view assessments submitted within a certain window of time.
Infrastructure types and categories can be viewed by column, bar, or pie chart or via graduated
symbols on a map. Submissions within the additional information tab can be viewed in a table
format that gives a count for each time a word is used. The table can also be viewed as a word
cloud. The variety of analysis tools available for immediate use provides a degree of understanding of the infrastructure needs within assessed communities that far exceeds what one can glean from viewing the raw data, and these tools will be put to good use by CA teams. Figure 19 shows a pie chart of infrastructure type submissions; the “other” category was the most used selection.
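The summaries produced by the Survey123 analyze tab can also be reproduced offline from an exported table, which is useful if an administrator only has the spreadsheet export described in Section 4.3.3. The file name and column header below are assumptions.

```python
# Sketch: tallying infrastructure type submissions from an exported CSV.
# File name and column header ("infra_type") are hypothetical.
import csv
from collections import Counter

counts = Counter()
with open("sweat_mso_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        counts[row["infra_type"]] += 1

for infra_type, n in counts.most_common():
    print(f"{infra_type}: {n}")
```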
Figure 19: Pie chart displaying infrastructure type submissions in Three Rivers and Woodlake
4.4 Infrastructure Assessment Maps
The maps reproduced in Figure 20 show completed infrastructure assessments for the
communities of Three Rivers and Woodlake within the Tulare area of operations. These maps
include all marginal map information and can be used to graphically communicate the state of
infrastructure needs within the assessed communities.
Figure 20: Infrastructure Maps of Woodlake and Three Rivers, CA
4.5 Infrastructure Assessment Dashboard
The completed dashboard shown in Figure 21 displays all infrastructure submissions for
the communities of Woodlake and Three Rivers. The dashboard map allows the user to change
base maps, turn features on or off, activate filters or select bookmarked map extents. Data cannot
be edited from the dashboard view. The active widgets show all pertinent information needed to rapidly convey understanding of the area’s infrastructure needs.
Figure 21: Online COP dashboard for AO Tulare
Chapter 5 Conclusions and Future Work
This chapter discusses the conclusions drawn from developing a web application and an
operations dashboard to collect and display infrastructure assessments in support of stability
operations. The chapter also discusses challenges encountered while building this application and
dashboard and details recommendations for improvement. The chapter concludes with a
discussion of the long-term management of the infrastructure assessment application/dashboard
and how this project could be scaled up or down to fit a diverse range of operating environments.
5.1 Summary of Application
This project produced a custom SWEAT-MSO survey within the Survey123 app for
collecting VGI infrastructure assessments and then built a webmap template to display those
assessments. The map was uploaded as a hosted feature layer to ArcGIS Online and imbedded
within the Operations Dashboard application. Widgets within the dashboard allow critical
information pertaining to infrastructure health to be rapidly processed and disseminated across an
organization. All research goals for this project were met. Submissions received on the VGI assessment app and displayed on the dashboard supplement or replace the traditional method of service members or aid workers traveling to unsecured sites to collect this data in person.
5.2 Project Hurdles
The use of commercially available off-the-shelf software to facilitate this project resulted
in challenges as well as opportunities. The benefits include ease of use, interoperability with multiple platforms, the ability to integrate into the existing Army and DOD enterprise geodatabase, and the need for little additional training before staff and administrators can bring the tools into full operational
use. While the result was a usable product that portends well for the future of VGI use within
stability operations, there are challenges to overcome in future versions of this project, as
documented below.
5.2.1. Security of participants and the enterprise system
The nature of stability operations, particularly the fact that they typically take place in contested areas with unstable governments, means that security will always be a concern. VGI
depends on open and free exchanges of data. Such a dynamic is at odds with the traditional
military emphasis on operational and information security. Encouraging mass participation in an
endeavor that supports U.S. policy goals introduces the risk that those who have a personal stake
in the failure of those policies may attempt to harm or subvert the system.
To protect the integrity of the data and the safety of the volunteers, participants had only
limited access to collected data. They would know only about the submissions they personally
collected. Participants can also elect to submit assessments anonymously. In many areas of the
globe, cultural, religious, or political influences cast suspicion on the use of technology. As
discussed in Section 2.3, some insurgent and criminal groups will violently attack individuals
found to be using or in possession of a mobile device. Versions of the application optimized for use in areas where such security concerns exist should be tied to local reconstruction and aid organizations and forgo any mention of central government or foreign military entities.
5.2.2. Use in areas of degraded network coverage
While infrastructure assessments can be submitted remotely or from a desktop computer,
if used in that manner the assessment will not benefit from the GPS receivers found in most
mobile devices. Placing the geotag manually can be done accurately but leaves more room for
user error than if a submission is automatically tagged to the location of that device. A subset of user reviews of the assessment application covered its use in areas with poor network coverage. While browser-enabled assessments performed surprisingly well in a mountainous area with weak coverage, the work was tediously slow and the application was prone to crashing with intermittent 3G coverage. If the Survey123 application is used, multiple
surveys can be collected and archived for submission when coverage is ideal. The reason this
option was not used is that downloading the application requires access to an ArcGIS Online
account. The browser app in comparison faces the drawback of poor functionality in some areas
but anyone with a browser can use it.
5.2.3. Bundling of submissions and limitations of collection using Survey123
The inability of Survey123 to simultaneously submit multiple geotagged points within a
single submission or feature layer is a major drawback. This limitation was recognized at the outset of this project, but the ability of Survey123 to facilitate collection from anyone, with no permissions or authorization needed, outweighed this drawback. Many infrastructure features
contain distinct components that work in tandem with other components to provide a service but
they cannot be reduced to a single spatial feature. The current infrastructure assessment
somewhat clumsily overcomes this problem by counting on participants to submit multiple
assessments to capture multiple features. Analysis on the back end can then collate these features
into systems by using submitted feature descriptions and spatial analysis.
In practice many users may not submit multiple assessments for various features in a
single category. Users are likely to pick a prominent feature for submissions and neglect less
obvious but nonetheless important infrastructure. There is a high likelihood of clustering near
prominent features and administrators will have to use various means including remote sensing
and ground truth surveys to determine which feature submissions are most useful. This tendency
towards clustering is displayed in the test of spatial accuracy shown in Table 6. Duplicate
submissions of the same infrastructure feature were displayed in multiple locations. By
comparing attribute information and attached pictures, a CA team can determine the true state of
a feature with an acceptable degree of accuracy. The discussion of VGI quality in Section 2.1 noted the tendency of crowdsourced data to converge on the truth. The crowdsourcing and hierarchical approaches suggested by Goodchild (2012) will both be utilized here to ensure submission quality.
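Collating duplicate or clustered submissions into a single feature can be started with a simple distance-threshold grouping before an analyst reviews the attributes and photos. The threshold, coordinates, and record structure below are assumptions for illustration; in practice a CA team would tune and verify the grouping manually.

```python
# Sketch: grouping submissions that fall within a distance threshold of each
# other so duplicates of the same feature can be reviewed together.
import math

def dist_m(a, b):
    """Approximate ground distance in metres between two (lat, lon) tuples."""
    r = 6371000.0
    dphi = math.radians(b[0] - a[0])
    dlmb = math.radians(b[1] - a[1])
    x = dlmb * math.cos(math.radians((a[0] + b[0]) / 2))
    return r * math.hypot(dphi, x)

def group_submissions(points, threshold_m=50.0):
    """Greedy grouping: each point joins the first group within the threshold."""
    groups = []
    for p in points:
        for g in groups:
            if dist_m(p, g[0]) <= threshold_m:
                g.append(p)
                break
        else:
            groups.append([p])
    return groups

# Hypothetical duplicate submissions of the same water feature plus one outlier.
pts = [(36.4380, -118.9040), (36.4381, -118.9042), (36.4500, -118.9200)]
print([len(g) for g in group_submissions(pts)])  # [2, 1]
```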
5.3 Scalability
The methodology of this assessment can be scaled to cover large geographic areas and
support large numbers of users and submissions. The local enterprise geodatabase supporting the
collection of the surveys will define the geographic boundaries of an AO and create further
subdivisions as appropriate. Local data managers within the Civil Affairs Team or the unit GEOINT Cell will manage the collected data and arrange the dashboard to suit their organization’s infrastructure requirements.
Although some techniques, such as spatial video, have been used to rapidly assess large, densely packed areas (Lue, Wilson, and Curtis 2014), collecting infrastructure assessments via the workflow established in this thesis becomes potentially less reliable when adapting the technique to dense urban cores or possible future use within megacities. Operating
in relatively poor and isolated communities with underdeveloped infrastructure allows a relative
novice to isolate a problem and submit an assessment. The assessment is then triaged and
processed by a human operator who provides a measure of quality control. As technological
sophistication increases so will the support systems needed to keep it going. This project
operated under the assumption that stability operations in more advanced urban locations would
operate with some organic host nation public works support and that supporting units will not
have to build their assessments from scratch.
5.4 Future Work
Future use of this project depends on transitioning from an abstract academic construct to something that works in the sometimes-messy world of real-life stability operations. The design methodology of this project placed a premium on flexibility and user customization. Infrastructure project managers working on public utilities in Colombia will have different requirements than a Civil Affairs Team working on restoring village sanitation services in Iraq.
The application and dashboard are designed so that each could tailor the specific questions to
their needs within the constraints of SWEAT-MSO and local language and cultural preferences.
Some broad changes would improve the framework for collecting VGI in remote areas while still
maximizing flexibility.
The next iteration of this project should include a custom app. The Survey123 application
worked well but the limitations of the survey in the browser mode make a custom application
desirable. Assessment applications would be distributed free-of-charge via aid and reconstruction
websites focusing on the area. A browser mode would still be useful, but the application could be
downloaded onto a mobile device and used where mobile service was degraded. Infrastructure
assessments hosted on the application would be cached during times of degraded connectivity
and synced with the spatial database when the device is connected.
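One way the proposed caching behaviour could work is an on-device queue that stores completed assessments locally and flushes them whenever connectivity returns. The sketch below is purely conceptual: the file name, record structure, and the send() stub are assumptions, and a real implementation would call the Survey123 or feature service APIs.

```python
# Conceptual sketch of an offline submission queue for the proposed custom app.
# The queue file, record fields, and send() stub are hypothetical.
import json
import os

QUEUE_FILE = "pending_assessments.json"

def load_pending():
    if not os.path.exists(QUEUE_FILE):
        return []
    with open(QUEUE_FILE) as f:
        return json.load(f)

def enqueue(assessment):
    """Append a completed assessment to the local cache."""
    pending = load_pending()
    pending.append(assessment)
    with open(QUEUE_FILE, "w") as f:
        json.dump(pending, f)

def sync(send, is_connected):
    """Flush queued assessments through send() when connectivity returns."""
    if not is_connected():
        return 0
    pending = load_pending()
    for record in pending:
        send(record)              # e.g. POST to the hosted feature service
    with open(QUEUE_FILE, "w") as f:
        json.dump([], f)
    return len(pending)

# Example: queue one assessment, then sync with stubbed connectivity and sender.
enqueue({"category": "Water", "rating": "amber", "lat": 36.44, "lon": -118.90})
print(sync(send=print, is_connected=lambda: True), "assessment(s) synced")
```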
The improved infrastructure app would include some increased functionality to bundle
multiple features into one submission. In addition to point features the assessment would allow
users to submit lines and polygons. An option to link multiple points and features would also be
included. For instance, if a participant is submitting an assessment for a school there would be an option to tie multiple buildings and points into one submission. The health of a category is more than the damage criteria of a single part of that system; a school may still be able to function even if a single building is moderately damaged.
SWEAT-MSO App version 2.0 will feature an attribute field for users to comment on the
completeness of their submitted data. Individual infrastructure features are important but they are
not standalone objects. A point may be spatially and thematically accurate but still not convey its
importance within the larger system of features. To capture the metric of completeness within a
technologically enabled trust network the future app will include a “verified by” tab. Local
stakeholders will be known in the area of operations and they can be leveraged as a tool to verify
the completeness and accuracy of a submission. This trust network will enhance security and
accuracy since submissions can be checked against a list of known actors.
A publicly viewable web map was intentionally not promoted in this project. The security
risks of advertising where reconstruction aid is being directed outweigh the benefits of informing
the public. That said, a map featuring completed projects that advertises that the projects featured
the input of volunteer generated assessments could be featured on government and aid websites.
This feature could help drive participation and build confidence in the local authorities while
minimizing security risks.
References
Ahn, Eunsu, Camille Hervé, and Laury Zinsz. 2017. "Crowdsourcing for Quality of Life: The
Case for Collaborative Crisis Mapping." Encyclopedia of Emerging Industries: 122-
129. https://hal.archives-ouvertes.fr/hal-01593574.
Bittner, Christian, Boris Michel, and Cate Turk. 2016. "Turning the Spotlight on the Crowd:
Examining the Participatory Ethics and Practices of Crisis Mapping." ACME: An
International Journal for Critical Geographies 15 (1): 207-229. http://www.acme-
journal.org/index.php/acme/article/view/1238.
City of Woodlake. "City of Woodlake.", accessed May 23,
2018, http://www.cityofwoodlake.com/.
Department of Defense. 2014. Joint Military Symbology. Washington DC: Defense Information
Systems Agency.
Department of the Army. 2012. ADP 3-07 Stability Operations. Washington DC: Army
Publishing Directorate.
———. 2017. ATP 3-34.80 Geospatial Engineering. Washington DC: Army Publishing
Directorate.
———. 2016. ATP 3-34.81 Engineer Reconnaissance. Washington DC: Army Publishing
Directorate.
Elwood, Sarah, Michael Goodchild, and Daniel Sui. 2013. "Prospects for VGI Research and the
Emerging Fourth Paradigm." In Crowdsourcing Geographic Knowledge, 361-375: Springer,
Dordrecht.
Federal Emergency Management Agency. 2016. FEMA Damage Assessment Operations
Manual. Washington DC: Department of Homeland Security.
Fleming, Steven, Elisabeth Sedano, and Margaret Carlin. 2018. "The Ethics of Volunteered
Geographic Information for GEOINT Use." Trajectory
Magazine. http://trajectorymagazine.com/ethics-volunteered-geographic-information-
geoint-use/.
Goodchild, Michael. 2007. "Citizens as Sensors: The World of Volunteered
Geography." GeoJournal 69 (4): 211-221. doi:10.1007/s10708-007-9111-
y. http://www.jstor.org/stable/41148191.
——. 2012. "Assuring the Quality of Volunteered Geographic Information." Spatial Statistics 1:
110-120. doi:10.1016/j.spasta.2012.03.002. https://www-sciencedirect-
com.libproxy1.usc.edu/science/article/pii/S2211675312000097.
Goodchild, Michael and Alan Glennon. 2010. "Crowdsourcing Geographic Information for
Disaster Response: A Research Frontier." International Journal of Digital Earth 3 (3): 231-
241.doi:10.1080/17538941003759255. http://www.tandfonline.com/doi/abs/10.1080/17538
941003759255.
Haklay, Mordechai. 2010. "How Good is Volunteered Geographical Information? A
Comparative Study of OpenStreetMap and Ordnance Survey Datasets." Environment and
Planning B: Planning and Design 37 (4): 682-703.
doi:10.1068/b35097. http://journals.sagepub.com/doi/full/10.1068/b35097.
Hammon, Larissa and Hajo Hippner. 2012. "The Role of Crowdsourcing for Better Governance
in International Development." Business & Information Systems Engineering 4 (3): 163-166.
doi:10.1007/s12599-012-0215-7. https://search.proquest.com/docview/1024715874.
Heipke, Christian. 2010. "Crowdsourcing Geospatial Data." ISPRS Journal of Photogrammetry
and Remote Sensing 65 (6): 550-557. doi:10.1016/j.isprsjprs.2010.06.005. https://www-
sciencedirect-com.libproxy1.usc.edu/science/article/pii/S0924271610000602.
Johnson, Peter and Renee Sieber. 2013. "Situating the Adoption of VGI by Government."
In Crowdsourcing Geographic Knowledge, edited by Daniel Sui, Sarah Elwood and
Michael Goodyear. 2012th ed., 65-81. Dordrecht: Springer Netherlands. doi:10.1007/978-
94-007-4587-2_5. https://link-springer-com.libproxy1.usc.edu/chapter/10.1007/978-94-007-
4587-2_5.
Lue, Evan, John P. Wilson, and Andrew Curtis. 2014. "Conducting Disaster Damage
Assessments with Spatial Video, Experts, and Citizens." Applied Geography Complete (52):
46-54.
doi:10.1016/j.apgeog.2014.04.014. https://www.infona.pl//resource/bwmeta1.element.elsevi
er-98b1223c-b05b-3235-828e-f2ddaf71af34.
Morrison, Joel. 2013. "Elements of Spatial Data Quality." In Elements of Spatial Data Quality,
edited by Stephen Guptill and Joel Morrison: International Cartographic Association.
Mortenson, Will. 2016. "The "in Crowd": NGA Adopts the Crowdsourcing Model." Pathfinder
Magazine 14 (2). https://www.nga.mil/MediaRoom/Pathfinder/Pages/Archive.aspx.
MRRD. "Ministry of Rural Rehabilitation and Development.", last modified June
25, http://mrrd.gov.af/fa.
Northrop Grumman. "Instrument Set, Reconnaissance and Surveying (ENFIRE)." Northrop
Grumman., accessed May 7,
2018, http://www.northropgrumman.com/Capabilities/ENFIRE/Pages/default.aspx.
Okolloh, Ory. 2009. "Ushahidi, Or 'Testimony': Web 2.0 Tools for Crowdsourcing Crisis
Information." In Change at Hand: Web 2.0 for Development, edited by Holly Ashley, 65-68:
IIED.
Open Signal. "The State of LTE." opensignal.com., last modified November, accessed 26 April,
2018, https://opensignal.com/reports/2017/11/state-of-lte.
Pease, Patricia. 2017. "The Influence of Training on Position and Attribute Accuracy in Volunteered Geographic Information."
Schaefer, Martin and Tara Woodyer. 2015. "Assessing Absolute and Relative Accuracy of
Recreation‐grade and Mobile Phone GNSS Devices: A Method for Informing Device
Choice." Area 47 (2): 185-196.
doi:10.1111/area.12172. https://onlinelibrary.wiley.com/doi/abs/10.1111/area.12172.
Schepard, Andrew. 2016. "Eyes and Ears on the Ground: Monitoring Aid in Insecure
Environments." Global Public Policy Institute 54 (4): 543-546.
doi:10.1111/fcre.12240. http://onlinelibrary.wiley.com/doi/10.1111/fcre.12240/abstract.
Tobler, Waldo. 1970. "A Computer Movie Simulating Urban Growth in the Detroit
Region." Economic Geography 46: 234-240.
doi:10.2307/143141. https://ci.nii.ac.jp/naid/30020890215/.
US Census Bureau. "2010 US Census.", accessed May 6,
2018, https://www.census.gov/2010census/popmap/ipmtext.php?fl=06:0678638.
USACE. 2016. TR-16-3: Human Infrastructure System Assessment for Military Operations.
Champaign, IL: U.S. Army Corps of Engineers.
USAID. "U.S. Agency for International Development.", accessed June 25,
2018, https://www.usaid.gov/afghanistan/agriculture.
Ushahidi. "Syria Tracker.", accessed May 6, 2018, https://www.ushahidi.com/case-studies/syria-
tracker.
Van Oort, Pepijn. 2006. Spatial Data Quality: From Description to Application. 1st ed.
Rotterdam: Optima.
Appendix A: Coding of Infrastructure Metrics
Infrastructure quality metrics (Department of the Army 2016)
Appendix B: Infrastructure Survey Form (Sewage)
Partial infrastructure survey form (USACE 2016)
Abstract
Humanitarian assistance, disaster response and stability/peacekeeping operations are an important part of current national defense strategy and represent an opportunity for the United States to project goodwill around the world. This thesis explores using Volunteered Geographic Information (VGI) to streamline the process through which aid and assistance is routed to those most in need. The acronym SWEAT-MSO (Sewage, Water, Electricity, Academics, Trash, Medical, Safety, Other) describes the metrics the United States Military uses for evaluating infrastructure health in support of foreign stability operations. SWEAT-MSO features are coded as green, amber, red or black based on the severity of damage and their ability to function. Assessments are currently completed by deployed service members but by incorporating VGI, this burden can be shifted to those who live and work within an affected area. Under the proposed framework, volunteers use a browser-based infrastructure assessment app to capture metrics and store them within a spatial database for analysis by Civil Affairs (CA) teams. VGI assessments are displayed in real time within a common operating picture that spreads awareness of infrastructure issues throughout the area of interest. This thesis demonstrates the VGI infrastructure assessment concept by creating a custom app to collect assessments and a common operating picture dashboard to display the results of the assessments. Unskilled volunteers collected test assessments in two rural communities and the results were analyzed for spatial and thematic accuracy. The successful collection of targeted infrastructure metrics and the user reviews of the assessment app and the operations dashboard indicate that this method can be expected to produce results in a forward deployed environment.