The Role of Precision in Spatial Narratives: Using a Modified Discourse Quality Index to Measure the
Quality of Deliberative Spatial Data
by
Christopher Thomas Marder
A Thesis Presented to the
Faculty of the USC Graduate School
University of Southern California
In Partial Fulfillment of the
Requirements for the Degree
Master of Science
(Geographic Information Science and Technology)
May 2019
Copyright © 2019 by Christopher Thomas Marder
To the Milazzos
Table of Contents
List of Figures
List of Tables
Acknowledgements
List of Abbreviations
Abstract
Chapter 1 Introduction
1.1. Landscape Values
1.2. PPGIS
1.3. SoftGIS
1.4. Deliberation Quality and Spatial Narrative Influence
1.5. Discourse Ethics
1.6. Motivation
1.6.1. Community Influence on Public Policy through Spatial Thinking
1.6.2. Community Influence on Public Policy through Technology
1.6.3. Spatial Science Advancement
1.7. Project Goals
1.7.1. Research Questions
1.7.2. Methods Overview
1.7.3. Analysis Assumptions
1.8. Case Study Area – Chugach National Forest
Chapter 2 Related Work
2.1. The “Influence” of Precision in Spatial Thinking
2.2. Defining PPGIS
2.3. SoftGIS Defined and Its Usage
2.4. Landscape Values
2.5. Discourse Ethics
2.6. Spatial Science Approaches Related to Project Goal
2.7. Gaps in Previous Research
2.8. This Project’s Contribution to Spatial Science
Chapter 3 Methodology
3.1. Research Design
3.1.1. Research Workflow, Tools, and Task Overviews
3.2. Data Description
3.2.1. Exploration of Quality and Fitness for Use
3.2.2. Sampled Population Data Selection
3.3. Research Task Details
3.3.1. Detecting Spatial Precision in Content Analysis
3.3.2. Content Analysis Approach
3.3.3. Index Reliability and Validity Calculations
3.3.4. Calculating and Comparing Discourse Quality Indices
Chapter 4 Results and Discussion
4.1. Measured Discourse Quality Values
4.1.1. Value based on comment parameters
4.1.2. Value based on spatial precision
4.1.3. Value based on case study area geographic features
4.1.4. Statistical significance of discourse quality values
4.2. Statistics for the Index’s Measurement Reliability and Validity
4.2.1. Rater Reliability
4.2.2. Index Item Correlation
4.2.3. Summary of the Statistics Results
4.3. Discussion of Measured Discourse Quality
4.3.1. Determining overall discourse quality change from spatial precision
4.3.2. Detecting the degree of change to discourse with spatial precision
Chapter 5 Conclusion
5.1. Answering Research Questions
5.2. Areas for Project Improvement
5.3. Future Directions
References
Appendix A Original DQI Item Overview Table
Appendix B Table of Discourse Item Weights and Quality Value, per comment
List of Figures
Figure 1.1. Case study area
Figure 3.1. Research workflow and analysis tasks
Figure 3.2. Example of public comment and its metadata
Figure 3.3. Public comment webpage “reading room” site for the CNF
Figure 3.4. Generalized overview of process for coding narratives using the modified DQI
Figure 4.1. Histogram for the dataset’s distribution of discourse quality values
Figure 4.2. Example arguments of low, medium, and high discourse quality
Figure 4.3. Geovisualization of discourse quality and frequencies
Figure 4.4. Example of an argument using “illustration”
Figure 4.5. Example of an argument using “implicit group justification”
Figure 4.6. Example of an argument using “explicit group justification”
Figure 4.7. Example of an argument showing “high precision spatial narrative”
Figure 4.8. Examples of arguments having low and medium spatial precision
Figure 4.9. Examples of arguments having low discourse quality but high spatial precision
Figure 5.1. Distribution of the index’s ordinal weights, per nominal item
List of Tables
Table 3.1. Metadata for nonspatial qualitative data
Table 3.2. Spatial precision item’s ordinal weights for discourse quality index
Table 3.3. Modified discourse quality index’s six nominal items and their ordinal weights
Table 4.1. Deliberative discourse quality, for sampled population
Table 4.2. Deliberative discourse quality, per comment type
Table 4.3. Deliberative discourse quality, per comment origination
Table 4.4. Deliberative discourse quality, per comment type and spatial precision
Table 4.5. Deliberative discourse quality, per comment origination and spatial precision
Table 4.6. Deliberative discourse quality, per top-ten precise locations
Table 4.7. Mean rater reliability and Cohen’s kappa scores, per index item
Table 4.8. Polychoric correlation scores
Table 4.9. Factor analysis scores, per factor loading, per index item
Table 4.10. Deliberative discourse quality, per spatial precision
Table 4.11. Comparison of means, with and without spatial precision item
Acknowledgements
To my family, friends, and colleagues whom I told I could not have a social life because of
my studies, thank you for your patience. To the recruitment team and academic support staff for
the Spatial Sciences Institute, thank you for the chance to participate in the M.S. GIST program,
and for the ongoing support so I could stay focused. Thank you to the faculty who required that I
put forth only my best for the past two years: Dr. Katsuhiko Oda, Dr. Su Jin Lee, Dr. Laura
Loyola, Dr. An-Min Wu, Dr. Jennifer Swift, and Dr. Andrew Marx. Without their instruction and support, I would not have been inspired to push the boundaries of what is possible to study in the spatial sciences.
This thesis represents my desire to combine the social sciences with the spatial sciences. To that end, thank you, Dr. Vanessa Osborne, for our discussions on how my writing could confidently reflect the ideas I wanted to convey. Thank you too, Dr. Elisabeth Sedano and Dr. Robert
Vos, for humbling me by showing what scientific rigor is truly about and how to achieve it. And
without a doubt, this thesis reflects the constant support, attention to detail, and excitement for a
topic that only a thesis advisor could have with her student. To Dr. Jennifer Bernstein, I can hardly believe the research journey we shared, and I am forever grateful. Finally, to all the stewards of the Chugach National Forest in Alaska, U.S.A., the deliberation you engaged in, however challenging, is vital for democracy. I am honored to have your impassioned speeches driving this research.
List of Abbreviations
CAQDAS Computer assisted qualitative data analysis software
CNF Chugach National Forest
DQI Discourse quality index
DTM Deliberative Transformative Moments
GCS Geographic coordinate system(s)
GIS Geographic information system(s)
GIScience Geographic information science
NCGIA National Center for Geographic Information and Analysis
PDF Portable Document Format
PGIS Participatory geographic information system(s)
PPGIS Public participation geographic information system(s)
PWS Prince William Sound
softGIS Soft geographic information system(s)
SPSS Statistical Package for the Social Sciences
USFS United States Forest Service
VGI Volunteered geographic information
WSA Wilderness Study Area
Abstract
The importance of spatial precision in geographic information science is not limited to
quantitative data. As spatial data can also exist in qualitative form, this project showed how
modifying a discourse quality index from the field of discourse ethics helped to better understand
whether mentioning specific spatial locations changes the quality of spatial narratives. The
discourse quality index was modified by incorporating an item into the index that detected the
presence and magnitude of a spatial precision construct. The spatial narratives analyzed with this
modified index were public comments submitted during a public policy revision process, for a
national forest plan revision at the Chugach National Forest in Alaska, U.S.A. One hundred fifty-
one public comments submitted during this policy process were analyzed.
Analysis showed that when discourse quality values were classed by their magnitude of spatial precision, discourse quality differed between comments with no spatial precision and those considered to have spatial precision. The results suggest, preliminarily, that
employing spatial precision in narratives changes discourse quality during deliberative activities.
This project demonstrated how spatial precision can be applied to qualitative datasets. Further, the way
in which people use spatial precision to communicate during a policy revision process can
impact how spatial narratives are understood and valued. Most importantly, this project showed
that incorporating spatial thinking into discourse shapes the way people communicate their landscape values, and that spatial thinking is indeed an influential communicative tool. The results leave room to explore the degree to which incorporating precise spatial thinking into policy arguments could empower individuals and/or political groups. Suggestions for
further research are provided.
Chapter 1 Introduction
Spatial precision in geographic information science (GIScience) is not limited to quantitative
spatial data. Spatial precision can be applied to qualitative datasets. Furthermore, spatial
precision can be detected via narrative, and how people use spatial precision to communicate can
impact how spatial narratives are understood and valued. Thus, this project suggested that researching spatial precision in narratives can help GIScience understand how people communicate spatial thinking when describing their relationships with landscapes.
To research spatial precision in people’s narratives, one might think it would not be possible to collect narrative data using GIScience’s typical tools, such as geographic information systems (GIS). On the contrary, with GIS’s use in many aspects of people’s daily lives (e.g.,
driving navigation, searching for shopping, or sharing one’s social activities) qualitative,
narrative data is being easily collected with GIS (Bolstad 2016; Nummi 2018). Beyond personal
use, GIS has also been integrated into government activities as a public policy tool to collect and
analyze narratives on how a policy can affect people’s activities at the locations people are living
their lives (Fu and Sun 2011; Kahila-Tani et al. 2015; Ramasubramanian 2010).
This use of GIS for both personal benefit and government initiatives (e.g., having a
web-based “311” system for citizens to report deteriorating streets needing repair, or with police
departments analyzing crime occurrences by location to prioritize patrolling) has encouraged
people to share narratives that emphasize the social-emotional connections individuals have for
places, sharing essentially what are called their landscape values (Brown and Weber 2012;
Nummi 2018). Landscape valuation can provide a type of data in which spatial precision in narratives can be examined relative to the geographic area framing qualitative data collection. Unlike quantitative data, where precision depends on ratio/interval scales, spatial narratives have precision based on language that demarcates a feature within a data collection area. This means
landscape values can contain components akin to a geographic coordinate system (GCS) that
allow a quantitative, locational data point to be formed from qualitative narratives.
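To illustrate this concept, the following minimal Python sketch shows how a narrative naming a known geographic feature could be resolved into a quantitative, locational data point through a gazetteer lookup. The gazetteer, feature names, and coordinates below are hypothetical assumptions for illustration only, not data from this project.

# A minimal sketch: resolving a spatial narrative to a coordinate pair using a
# hand-built gazetteer. All feature names and coordinates are hypothetical.
GAZETTEER = {
    "turnagain arm": (60.95, -149.40),
    "portage glacier": (60.75, -148.84),
}

def narrative_to_point(narrative):
    """Return (feature, (lat, lon)) if the narrative names a known feature."""
    text = narrative.lower()
    for feature, coords in GAZETTEER.items():
        if feature in text:
            return feature, coords
    return None  # no named feature detected; the valuation remains nonspatial

print(narrative_to_point("We fish near Turnagain Arm every summer."))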
Landscape value data are also an interesting type of data for public policy crafting
because the valuations garner different types of expectations from the data’s producers, namely
the public, as to how their data should be used. For instance, people who share a similar landscape valuation suggest that their valuation will help build a sense of community and encourage
people to express similar valuations at social events or grassroots demonstrations on social issues
(Elwood 2006; Plantin 2014). Yet, when landscape values are collected by the government for
policy planning, this generates more explicit expectations from the public as to how their
landscape values should be considered in policy planning. The “public,” defined in the broadest
sense as being those who are not part of the government doing the policy planning (Brown and
Donovan 2013), often stipulate that their landscape values are the most authentic representations of how a landscape is experienced through use, and therefore should significantly influence
what a policy should accomplish on a social issue because the valuations come from such an
authentic source (Kahila-Tani et al. 2015).
In addition to the increasing use of GIS for policy planning, governing bodies are more
broadly starting to recognize that the public wants to play an active role in shaping policy, as
policies will ultimately affect the places people are connected to emotionally (Brown and
Donovan 2013; Engen et al. 2018; Plantin 2014). More importantly, when the public participates
in policy shaping activities, governing bodies recognize that the public expects to see places
managed using their perspectives, rather than an approach in which governments assume a priori
how the public wants places managed based on proxy, demographic measures (Engen et al.
2018; Ramasubramanian 2010). Governing bodies have also realized that policy development
methods that incorporate landscape valuation are limited, since policy methods have traditionally favored an “information only,” top-down approach to public participation activities
(Birkland 2005). Driven by the increasing use of GIS by governments for collecting landscape
values for crafting policy, a domain called Public Participation GIS (PPGIS) emerged to make
sense of how people attribute emotions to places, and at what precision, including how
governments could use GIS to gather and apply emotional-spatial data (Ramasubramanian 2010).
1.1. Landscape Values
Landscape values can be defined as the psychosocial emotions that people attribute to
landscapes given that places can incite emotional experiences (Brown, Raymond, and Corcoran
2015; Brown and Reed 2000). As such, landscape values can motivate a person to invest time
and resources in policy processes because these emotional connections to places allow a person
to feel that they have a personal stake in how changes to a policy could affect changes to a
landscape. Thus, if the public feels their landscape values are threatened because of policy
change, and that those changes would cause their landscape values to be “harmed”, the public
would be motivated to navigate the often-complex public participation process to ensure their
landscape values are prioritized in a policy.
Acknowledging that landscape values can motivate people to participate in policy
processes is important because the public often finds the policy process cumbersome and
frustrating (Birkland 2005; Ramasubramanian 2010). Even government “citizen guides”
outlining public participation processes, meant to empower people by openly giving information
on how to participate in these processes (e.g., US Department of Agriculture 2016), implicitly admit that collaborating with the government during a policy process is not straightforward. For
people then to willingly engage in cumbersome processes, an underlying motivator must be
acknowledged because qualitative research must have a context (i.e. a motivator) to ground
research results and interpretations, even in circumstances where there is a high level of trust
from the public that a government will make “appropriate” policy decisions (Engen et al. 2018).
Landscape valuation then is that underlying motivator for why the public chooses to get involved
in policymaking.
1.2. PPGIS
PPGIS is an example of how GIS can evolve to meet the spatial thinking needs of
society. When the first GIS was built in the 1960s, early systems stored digitized geographic data that could be queried and visualized (Goodchild 1992). It was not until the late 1990s that the GIS community of researchers and users saw how GIS, along with other technologies,
could empower the broadest form of the “public” for representation in public policy by providing
the means to collect and disseminate place-based narrative experiences through bottom-up grassroots
activism using technology to bypass established information authority sources (Plantin 2014).
Along with the government’s increasing use of GIS to collect and disseminate similar data,
academia developed the domain of PPGIS. PPGIS studies the way in which GIS can not only shape how government policies should be created as data becomes more democratized, but also how governments could use GIS to improve the public participation process.
Research into PPGIS’s successes and failures has created a canon of methodologies,
theories, and application scenarios (Brown and Kyttä 2018; Plantin 2014; Sui, Elwood and
Goodchild 2013; Ramasubramanian 2010). PPGIS is inherently multidisciplinary, borrowing
paradigms from social and natural sciences to explain the socio-environmental interactions
people have with places (Brown and Kyttä 2014). This emphasis on socio-environmental
interactions, however, means that a great deal of PPGIS research has focused specifically on how
land-use policy affects people’s interaction with places such as nature preserves and other public
lands (Brown and Donovan 2013; Brown and Weber 2012; Brown, Weber, and de Bie 2014;
Brown and Reed 2000; Engen et al. 2018).
Yet this PPGIS research also looked at how affective valuations of a landscape could challenge not-yet-implemented policies through the public process. This is best seen
with PPGIS “predicting” public challenges through conflict indexing, an analysis which
compares competing public values or the collective public values against a policy’s outlined
impacts to an area. Such analyses show what changes to landscapes the public is willing or
unwilling to accept (Ernoul et al. 2018). Indeed, these types of initial analyses using GIS showed how PPGIS expanded our understanding of how precisely articulated landscape values can influence policies, and thus why this project’s work on precise spatial narratives should be thought of as PPGIS research.
1.3. SoftGIS
Another GIS domain that focuses on researching both government-driven and grassroots-
inspired participatory engagement through mapping is soft geographic information systems
(softGIS). For most researchers, softGIS is not that different from PPGIS, since both activities
are typically driven by governments seeking landscape value data from the public (Rantanen and
Kahila 2009). The difference between the two is in the type of data generated. Where PPGIS
uses landscape value typologies so people can “place” digital point-pins on a map, softGIS
collects qualitative landscape valuations, in the form of spatial narratives, based on a digital map-
pin. The content of these narratives is usually thematically related to a single, open-ended prompt
or a series of open-ended questions, both of which ask how a softGIS participant feels about a
place. The digital map-pin that a participant places before giving their narratives is usually
placed against a “clean” basemap (clean relative to other types of spatial data such as
demographic information overlaid through census block geographies) (Kahila and Kyttä 2009).
For softGIS researchers, this qualitative data provides a narrative explanation as to why a
person chose the landscape value(s) they articulated. With these narratives, researchers can use a
content analysis schema to “code” narratives for the presence of certain communicative items,
then analyze the quantity of codes detected for trends in communicative styles and strategies
(Saldaña 2013; Steenbergen et al. 2003). With this methodology, researchers have stated that
qualitative data can better show a person’s landscape valuation thought process, in terms of what
experiences or geographic features help contribute to a person’s valuation (Cerveny, Biedenweg,
and Mclain 2017).
Despite the benefit of having data that is more revealing and descriptive of the landscape valuation process, these qualitative data could be limited in suggesting that one person’s
valuation thought process is representative of a sampled population. Essentially, many
researchers have stated that narrative, qualitative data are better suited for only determining a
phenomenon’s attributes that could be present in a sampled population, not for directly
measuring the presence of a phenomenon’s attributes in a sampled population (Montello and
Sutton 2013; Moore 2004). Yet given the increasing use of softGIS to collect more and more qualitative spatial data, understanding what type of information can or should be extracted, and how that information should be extracted, from spatial narratives generated by softGIS and similar data collection activities frames this project.
1.4. Deliberation Quality and the Influence of Spatial Narratives
Though softGIS research has continued to encourage the collection of qualitative
narratives and helped with analysis process methodologies for government policy planning
(Kahila and Kyttä 2009), a significant question remains. To what degree can qualitative spatial
narratives influence the deliberation on shaping policy?
Answering this question is contingent on how deliberation is measured. Deliberative acts
are the products of abstract, psychological processes (Hammersley 2011). Such processes are
inherently difficult to capture since they can only be measured by proxy measures (Jaramillo and
Steiner 2014). One such proxy measure explored by this project included quantifying the quality
of discourse during deliberation. Exploring the components of spatial narratives using discourse
quality provides an opportunity to measure the deliberation dimension of softGIS data using a
vetted methodology that quantifies the components of spatial narratives through content analysis
(Saldaña 2013).
By using a content analysis methodology to quantify deliberative discourse, the concept
of spatial precision in landscape valuations can be integrated into this quantitative coding schema used on qualitative data. This integration is possible because content analysis in general has methods for adding an item to the existing deliberative discourse coding construct and validating how spatial precision in narratives could be measured, so the quantified results are statistically reliable and
valid (Montello and Sutton 2013; Saldaña 2013; Urdan 2017). Merging the spatial precision concept with the coding schema for measuring deliberative discourse quality formulated the project’s main research objective: to test the assumption that spatial narratives mentioning more precise locations change the quality of policy deliberation more than narratives lacking spatial precision.
On a basic level, how specifically does one measure a phenomenon like deliberative
discourse in qualitative spatial data? This is something that has been asked through various
lenses but not explored deeply (Kar et al. 2016). However, this lack of exploration makes sense
within the spatial sciences, given that studying qualitative data is often understood as outside the
realm of the field (see Bolstad 2016). Yet, given that the public expects softGIS processes to
influence policy changes (Engen et al. 2018), understanding how the information generated from
qualitative spatial data may influence policy merits examination. This project then explored
whether measuring change in deliberative discourse quality could be achievable using a content
analysis method from the field of discourse ethics.
1.5. Discourse Ethics
Discourse ethics looks at communication during an act of deliberation. Generally
speaking, deliberation is an act of communication between parties on a divisive issue with the
goal of finding a solution to the issue that satisfies both parties (Maia et al. 2017; Steenbergen et
al. 2003). The elements of how discourse ethics studies the deliberative process were of interest to this project. Akin to a public comment submitted to change a policy decision, the
deliberative process frames the act of “public participation” as a person persuading another to
change the other’s opinion on an issue. Using discourse ethics as a means to quantify spatial
narratives in deliberative terms is appropriate because qualitative data that comes from activities
such as softGIS are data from a deliberative activity venue, even if the deliberation is not
happening face-to-face.
Applying discourse ethics to qualitative spatial data analysis was done using a discourse
quality index (DQI). This measures the discourse quality of the qualitative data while
incorporating the spatial precision dimension. This project selected the DQI developed by
Steenbergen et al. (2003) based on comparing the DQI to other discourse quality methods
available and critiques related to the DQI’s discourse context limitations. To reflect the spatial
nature of this project, the DQI was modified to reflect an assumption that simply stating how a
place has meaning does not make for quality discourse. Instead, by describing a landscape value
in precise spatial terms, i.e. naming a known geographic feature, point, or area, along with an
accompanying narrative, deliberative discourse quality is changed (Brown, Raymond, and
Corcoran 2015; Kahila-Tani et al. 2015; Zolkafli, Brown, and Liu 2017). The change in
discourse quality subsequently should exert influence on the policy process.
Quantifying discourse is challenging due to the subjective and complex nature of
communication and language (Hammersley 2011). The DQI though allows a qualitative dataset
reviewer (or coder) to identify communicative actions taken by a person during a speech act to
reach an agreement on how to address an issue through the deliberation of parties (in this case,
between a public and its government). The discernable communication actions categorize the
speech elements that produce a “better” argument (Bächtiger et al. 2010). By examining these
communicative elements within a speech act, a DQI quantifies the process of formulating and
presenting an argument at the scale of the individual. Quantifying qualitative spatial data at this
scale is ideal since the landscape value data from softGIS activities are generated per person and
the DQI measures discourse quality per party making a speech act.
Once discourse quality values are determined per speech act, the DQI schema does allow
for each speech act’s quality value to be aggregated so the overall discourse quality for the
sampled population can be calculated. This calculation creates a baseline value to determine
what, if any, communicative items or speech act parameters (e.g., gender, ethnicity, education,
etc.) could influence discourse quality. Additional quality indices were created and segmented
into communicative item categories, which helped show how certain communicative elements,
such as referencing spatial precision, can affect discourse quality. This type of qualitative
analysis makes the DQI superior to measures that examine how power dynamics in deliberative
processes play out over time in active, face-to-face discourse settings (e.g., the Deliberative
Transformative Moments measure), which for this project is inapplicable (Jaramillo and Steiner
2014; Maia et al. 2017).
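As a rough illustration of this aggregation and segmentation, the minimal Python sketch below computes a quality value per coded comment from ordinal item weights, a population-level baseline, and indices segmented by one parameter. The item names, weights, and comments are assumptions for illustration only, not the modified DQI’s actual coding schema.

# A minimal sketch of aggregating per-speech-act discourse quality values,
# assuming each coded comment maps index items to ordinal weights.
# Items, weights, and comments are illustrative only.
from statistics import mean

coded_comments = [
    {"justification": 2, "respect": 1, "spatial_precision": 0},
    {"justification": 3, "respect": 2, "spatial_precision": 2},
    {"justification": 1, "respect": 1, "spatial_precision": 1},
]

def quality(comment):
    """Discourse quality of one speech act: the sum of its item weights."""
    return sum(comment.values())

# Baseline: overall discourse quality for the sampled population
baseline = mean(quality(c) for c in coded_comments)

# Segmented indices, here grouped by each comment's spatial precision weight
segments = {}
for c in coded_comments:
    segments.setdefault(c["spatial_precision"], []).append(quality(c))
segmented = {level: mean(values) for level, values in segments.items()}

print(baseline, segmented)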
The DQI has been critiqued for how accurately it measures all types of natural speech. According to Bächtiger et al. (2010), the DQI represents the most ideal form and function that a speech act can achieve. However, this ideal form is a phenomenon that has not been documented.
Because of this, the DQI is essentially a measure of how close a speech act is to achieving
idealized discourse. Critics argue that comparing natural discourse to an idealized form
characterizes natural speech unrealistically, such as the expectation that people can achieve the
highest forms of discourse quality even if they do not understand the elements used to
accomplish it. Furthermore, the DQI does not quantify the emotional components of natural
speech (dramatic inflections, storytelling), and the degree to which they have a positive influence
on a person’s discourse.
If the DQI then measures how natural speech compares to the idealized form of “proper” speech, why use it to compare the quality of natural speech to a theoretical level of discourse quality? The communicative items that make up the DQI still best capture the most
common discourse items that would most likely be used in deliberation, so the added spatial
dimension is analyzed against expected and vetted items from discourse ethics research. The
DQI is thus the best instrument to capture discourse change in qualitative spatial data.
1.6. Motivation
The motivation for this project was twofold. The first was to find community-based
solutions that use spatial thinking in adequately capturing public policy perspectives (FRS and
Brookings Institution 2008; Zuk et al. 2015). The second was to conduct softGIS research so the
public and government bodies can share responsibility for communicating more effectively.
Existing research suggests the public is frustrated with current public participation processes,
which leave many feeling helpless in dealing with their government (Dorling 2010). Identifying these
motivators also reveals the sources of personal bias that this project’s author could inadvertently
impart during the analysis and discussion of results. By being transparent as to the motivation,
readers can determine for themselves whether such motivations created biases that compromised
the project’s objectivity.
1.6.1. Community Influence on Public Policy through Spatial Thinking
The use of spatial thinking for understanding the public’s perspectives on policy is in
demand because traditional policy processes are overly cumbersome (Brown and Donovan 2013;
Kahila-Tani et al. 2015; Kar et al. 2016). For instance, where in the past a high-level policy expert’s opinion on locating new local residential developments would have been viewed favorably, such perspectives are now increasingly distrusted. For many, expert perspectives have failed to capture local experiences, creating policies that were either ineffective or harmful
(Ramasubramanian 2010).
To address the eroding confidence in expert perspectives, the ways in which governments try to help the public participate directly in policy creation have been well researched
(Birkland 2005; Brown and Donovan 2013; Engen et al. 2018; Kahila and Kyttä 2009; Kahila-
Tani et al. 2015; Lopes-Aparicio et al. 2017; Nummi 2018; Ramasubramanian 2010; Rittel and
Webber 1973; Warner and Molotch 1995; Zolkafli, Brown, and Liu 2017). Yet citizens are not
confident that their interactions with government will be reflected in new or revised policy
(Engen et al. 2018; Kahila-Tani et al. 2015). This influences how the public perceives whether a
public policy represents the people it claims to represent (Dorling 2010). As such, recent
research has shown how community-based solutions improve collaboration and trust between the
public and the government (Brown and Kyttä 2018; Kahila and Kyttä 2009; Kar et al. 2016).
Understanding how the public values landscapes can also contribute to this community-
based literature, as landscape valuations are similar to social values except that they contain spatial attributes (Rantanen and Kahila 2009). Previous research has even found the potential for
the generation of political power when these valuations are collected through softGIS (Elwood
2006). Given this potential, it is worth investigating how landscape values can empower
communities, as a shared valuation influences how a community advocates for a valuation to be
represented in a policy (Kyttä et al. 2013).
1.6.2. Community Influence on Public Policy through Technology
Research has shown that the public’s ability to influence policy is dependent on the
public’s access to the government’s information used to create policy in the first place. This research showed how top-down government approaches using modern technologies, e.g., governments saving costs by publishing content on websites, help the public gain access to the information
and data used to formulate public policies (Fu and Sun 2011; Ramasubramanian 2010). Yet
while the public has access to more information at a reduced cost, unequal access to the technologies needed to review government material (i.e., broadband Internet) persists (Anderson 2017). This
suggests that top-down approaches using technology are not as empowering as originally
thought. Indeed, access to and control of data collection and distribution makes political power
concentrated in the hands of those who have the technologies to create such knowledge (Elwood
and Leszczynski 2013; Mitchell and Elwood 2012). Since landscape value narratives are thought to influence policy, and since the narratives are being generated through community-based technologies such as softGIS, qualitative spatial data warrants research given its promise in reducing the concentration of knowledge production and collection in governments.
1.6.3. Spatial Science Advancement
This research project contributes a new perspective within spatial science by applying
non-spatial paradigms to spatial problems. Though PPGIS and/or softGIS researchers have
developed methods for qualitative spatial data collection and analysis, neither domain has
incorporated other disciplines to address the systemic spatial problem of how qualitative spatial data influences policy and society. This lack of incorporation is unfortunate since
understanding a person’s landscape values requires understanding constructs on the valuation
process from paradigms beyond spatial science (Brown, Raymond, and Corcoran 2015; Brown
and Reed 2000). Through borrowing and modifying constructs from other disciplines, this
project demonstrated how the spatial sciences could learn how spatial relationships are conceived based on how other disciplines view, and talk about, objects in space.
1.7. Project Goals
This project sought to achieve two overarching goals which help it contribute new
perspectives for softGIS research. First, to explore the way in which qualitative spatial data can
be analyzed for its spatial dimensions just as other types of high-quality, quantitative spatial data are. Second, to show that a DQI with a spatial component can quantify landscape values in qualitative spatial narratives to ascertain the influence these data have on crafting policies.
Historically, qualitative data, despite its richness and ability to document subtleties in language usage, has typically been used in exploratory study (Montello and Sutton 2013). This has been the case because qualitative data is thought to represent a phenomenon only as it occurred exactly
in the time and space it was collected, a condition for data that the social sciences describe as
having an ethnographic present (Moore 2004). Yet with increasing collection, this project
assumed that qualitative data has “quality” to it since these data reflect on people’s experiences
and interactions at granular scales (Rantanen and Kahila 2009), akin to how data from the most precise GPS devices for measuring one’s location on Earth are considered to have equally high quality (Bolstad 2016; O'Sullivan and Unwin 2010).
The second goal of this project was to use a modified DQI for measuring whether spatial
narratives change discourse quality during a public policy creation process, in the context of a
particular case study. This goal reflected the project’s desire to successfully apply a paradigm
from a non-spatial science discipline. Success in this case means that the project’s results were
reproducible to the same extent that the original DQI’s results had achieved in its validation
study. To achieve this goal however, reproducing the methods and results of the DQI required a
research context. This project’s context explored whether the quality of deliberative discourse
changed when a precise spatial component was mentioned in spatial narratives during a policy
revision process at a particular case study area.
1.7.1. Research Questions
There were four research questions (RQ.[n]) to show how the theoretical underpinnings of measuring spatial precision in narratives were investigated. The research questions also
provided an outline as to what methodological steps were taken to answer the questions. One
should note though, given this application of a non-spatial paradigm to research qualitative
spatial data, these questions and subsequent answers should not be generalized to the sampled
population, nor an entire human population in general. This is the case since one of the project’s
goals was determining how well the DQI would work with qualitative spatial data. As such, it
would not be appropriate for this project to extrapolate on trends observed in a sampled
population if it was unknown how likely the measurement methodology was to accurately detect
the trends being looked for (Montello and Sutton 2013). That said, the research questions can
also provide the framework from which future research could start its explorations. The order
of these questions was reflective of the deductive methodological approach that was employed:
RQ.1: Can spatial narratives be quantified from qualitative spatial data?
RQ.2: Can the discourse quality index (DQI) measure spatial precision in public
comment spatial narratives?
RQ.3: Does locational precision of spatial narratives change the quality of deliberative
discourse for this case study’s policy revision process?
RQ.4: How do precise spatial narratives change the quality of discourse in this case
study’s policy deliberation?
1.7.2. Methods Overview
This project analyzed a case wherein landscape values were submitted as individual
comments by a public for use in a policy revision process. Considering that qualitative data collected from softGIS and PPGIS are for policy crafting (Brown and Kyttä 2018; Kahila and
Kyttä 2009), the qualitative comments found from this project’s case study area served as a
proxy data source for the type of qualitative data that would have been generated from a similar
type of softGIS activity. Using a proxy data source was necessary to ensure that the qualitative
spatial data subjected to content analysis methods had a realistic deliberative discourse context.
This was needed to ensure that the analysis’ results were due to the application of the measuring instrument, and not due to bias in the data that could have resulted from fabricating fictitious
comments for study. The case study area came from the Chugach National Forest (CNF) in
Alaska, U.S.A., where public comments were collected as part of a process to revise policies
dictating the CNF’s land management activities. Analyzing the qualitative landscape values in
the CNF’s comments was accomplished by working through a sequence of five analysis tasks.
The first analysis task was devising how spatial precision would be detected during content analysis and determining how the spatial concept could interact with the other items in the DQI, so that spatial precision’s contribution to changes in discourse quality could be researched. The second analysis task was preparing and importing the data into a
computer assisted qualitative data analysis software (CAQDAS) application for the content analysis
coding. Once imported, the third task was coding the data based on the modified DQI that
included the spatial dimension. For the fourth task, the coded data was checked for rater
reliability and index item correlation to show that the DQI was reliably applied across three
coding sessions to the data as intended, and that the DQI’s measure of the deliberative discourse
construct was validated. Finally, the fifth task calculated the discourse quality indices.
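To illustrate the fourth task’s reliability check, the short Python sketch below computes Cohen’s kappa for one index item as coded in two of the sessions. The kappa function follows the standard two-rater formula; the session data are hypothetical, not this project’s coded results.

# A minimal sketch of a rater reliability check, assuming two coding sessions
# assigned ordinal weights to the same set of comments. Data are hypothetical.
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Two-rater agreement on nominal/ordinal codes, corrected for chance."""
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

session_1 = [0, 1, 2, 1, 0, 2, 2, 1, 0, 1]
session_2 = [0, 1, 2, 1, 1, 2, 2, 1, 0, 1]
print(round(cohens_kappa(session_1, session_2), 3))  # 0.846 for these data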
This sequence of tasks was based on the project’s research questions. The tasks framed
what type of data each task was meant to produce and how the information from each task helped in achieving the project’s goals. Before detailing the tasks, an overview defining the
“public” that was expected to participate in the case study area’s policy process is provided,
followed by an overview as to how the concept of spatial precision was defined.
1.7.3. Analysis Assumptions
For this project, two assumptions were necessary with respect to defining concepts of the
“public” and what spatial precision in qualitative data means. These assumptions were necessary
to help calibrate the methodology, as to what the project should be measuring from the dataset.
1.7.3.1. Defining the “public”
The concept of the public is considered ambiguous (Brown and Donovan 2013; Pant
2014; Perkins 2010; Ramasubramanian 2010). Therefore, in the context of government policy processes, the “public” as defined by this project is simply those not working for the government
initiating the policy crafting process. In this view, the public includes individual citizens,
business groups, single-issue coalitions, and lobbying firms (Brown and Donovan 2013). This
project’s definition was appropriate as the focus was on measuring the discourse quality of
spatial narratives rather than analyzing who was providing the narrative.
This definition though does overly generalize those typically involved in policy
processes. Essentially, this generalization means that those who do not participate in this public comment submission process are assumed not to participate in, or care about, any policy
revision process. This definition also assumes that local government agencies do not participate
in the policy process via comment submission, which is not always the case. More importantly,
this definition assumes the political lobbying influence that different persons, groups, or
organizations may already have with the USFS are all equal, which is usually not the case
(Birkland 2005). However, these limitations appeared to have minimal effect on the analysis since the majority of comments were from individuals.
1.7.3.2. Defining spatial precision for qualitative data
Typically, spatially precise quantitative data is measured using a variety of interval or
ratio-based scales. With these measures, users of a dataset would know, to a degree of certainty, the location of a vector’s geometry or a raster’s coverage as oriented to a GCS or some other map coordinate system (Bolstad 2016; O'Sullivan and Unwin 2010). Yet with qualitative data, assuming the narrative does not state explicit coordinates as part of a speech, interval or ratio-based scales cannot be used to establish the degree of certainty for a stated location. Instead,
a definition is needed to show how a person’s speech on landscape values could be considered
conceptually to have a similar type of spatial precision to that of quantitative spatial data.
Previous research into identifying the spatial precision of qualitative data is lacking
(Brown 2012; Brown and Kyttä 2014). For this project, qualitative spatial precision was defined
as speech containing language elements which describe the location of a geographic feature to
the point that another person, familiar with an area surrounding said feature, could identify the
same feature with a reasonable level of accuracy. This definition essentially assumed that spatial
precision is dependent on first, identifying a spatial context which surrounds the features in
question. Then second, this definition assumed that the highest form of spatial precision in a
speech is based on how a described feature’s geographic extent is relative to the spatial context.
More details on ‘spatial context’ are provided in subsection 3.3.1.
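The ordinal logic implied by this definition can be illustrated with the minimal Python sketch below, which assigns a higher weight when a speech names a feature within the spatial context than when it references only the context as a whole. The three-level scale and the name lists are simplifications assumed for illustration, not the modified index’s actual spatial precision item.

# A minimal sketch of ordinal spatial precision weighting, assuming a simple
# three-level scale. The name lists are simplified stand-ins, not the
# project's actual coding rules.
CONTEXT_NAMES = {"chugach national forest", "the forest", "cnf"}
FEATURE_NAMES = {"kenai", "prince william sound", "copper river"}

def spatial_precision_weight(speech):
    """0 = no location; 1 = whole spatial context; 2 = feature within it."""
    text = speech.lower()
    if any(name in text for name in FEATURE_NAMES):
        return 2
    if any(name in text for name in CONTEXT_NAMES):
        return 1
    return 0

print(spatial_precision_weight("My family hikes the Kenai trails each fall."))  # 2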
This definition of spatial precision for qualitative data involved limitations. Unlike
precision measured with interval/ratio scales, from which standardized measurement systems are
made, precision in narratives must be relative to a study area in question (as outlined in
subsection 3.3.1). Such relativity could allow a valuation in one context to be more precise while in another to be less precise. This comparative condition for spatial precision means that quantifying speeches can only be done using spatial contexts that are in and of themselves relatively
stable (i.e. have undergone limited boundary changes). For the project, this definition was
appropriate to quantify the spatial precision of landscape valuations since the qualitative
dataset’s speeches were focused on valuations for features within a single, consistent spatial
context against which the comments’ precision was judged.
1.8. Case Study Area – Chugach National Forest
This project applied the DQI to qualitative data from a real-world policy process. The location of this case study was the Chugach National Forest (CNF) in the State of Alaska, United States of America. This location was ideal for
studying landscape values since the area has a landscape well-known and traversed by its
surrounding communities. With such an area under frequent use, there was no shortage of
landscape valuations to be measured for the project’s goals. Additionally, with frequent use came
a more intimate spatial knowledge of the CNF’s geographic features and boundaries. This meant
that the comments contained references to more precise landmarks, a key feature needed from
the qualitative data for the project’s goals.
The CNF was also an ideal case study area since the landscape was an actively “managed”
landscape under the administrative authority of the US federal government. Thus, because the
landscape was an actively managed one, it would not only logically undergo a policy discussion, possibly using landscape values, to outline how it should be managed, but the government is also
obligated to have a policy dictating how the area is managed for a diverse set of uses (Brown and
Donovan 2013; US Forest Service 2014). Given these obligations, subsequent public comments
on policy were indeed focused on being deliberative to the issue of crafting a policy.
The administrative oversight of the CNF is conducted by the United States Forest Service
(USFS). The CNF is the second largest forest in the USFS system at 5.4 million acres. Roughly
96% of the forest landscape is managed to “…allow natural ecological processes to occur with
very limited human influence,” and the remaining 4% targeted with “active management” as
those areas have frequent human activity (US Forest Service 2014, 4). Nearby urban settlements
range in population size from approximately 9,600 (e.g. remote villages without primary road
access) to approximately 300,000 (e.g., the City of Anchorage) (US Forest Service 2014, 7). As
public land, the CNF is legally expected to generate different opportunities for diverse uses (1-2).
The CNF then has opportunities for “tourism and recreation,” fisheries, “wood products,” and
mineral extraction that does not include oil or natural gas (7).
The policy plan of the CNF is divided into three geographic regions which dictate the day-to-day land management activities (US Forest Service 2014, 4). These regions are not the same as ‘ranger districts’ but are based on community naming traditions. These traditions name regions in homage to the Western-European discovery and exploration of the area rather than the place-names given by the indigenous population living in the area prior to Western-European contact. Since
these regions were referred to by local communities consistently in public comments and are
used to orient visitors to the CNF, the project assessed the precision of the spatial components
mentioned in the data based on these three recognized regions.
As shown in Figure 1.1, the geographic regions are the Copper River Delta (‘Copper
River’ or ‘Copper River Basin’), Prince William Sound (‘PWS’), and the Kenai Peninsula
(‘Kenai’). The Copper River area covers 31% of the CNF and the management priority is
primarily focused on conserving habitats for fisheries and wildlife. The PWS covers 48% of the
CNF and is mostly water and scattered forested islands. In the western portions of PWS, there is
the Nellie Juan-College Fiord Wilderness Study Area (‘WSA’, ‘Nellie Juan Fiord’, or ‘Nellie
Juan’), which is a landscape under observation for possible US Congressional designation as a formal wilderness. The Kenai covers 21% of the CNF and, due to its proximity
to the City of Anchorage, sees the most frequent human usage (US Forest Service 2014, 4).
Figure 1.1. Case study area. Notice the geographic extents of the roadways, trails, the Alaska Railroad railway, and some of the urbanized (i.e., built-up population settlement) areas frequently mentioned in the public comments, and their proximity to the CNF as a whole. Source: US Forest Service 2018a.
Chapter 2 Related Work
A literature review assisted in finding commonalities between theories and methods used across different disciplinary frameworks. Additionally, a literature review helped determine
knowledge gaps, and how said gaps inspired this project. The literature review presented in this
Chapter starts by exploring the concept and influence of precision in spatial thinking, and how
that thinking is articulated via language. The review also explored how deliberative spatial
discourse has been understood through PPGIS and softGIS. From there, landscape values and
discourse ethics are discussed. Finally, spatial science approaches are evaluated to determine
where this project fits within the broader literature.
2.1. The “Influence” of Precision in Spatial Thinking
A key construct tested in this project was the assumption that landscape values, when
expressed in spatially precise terms, can change the quality of deliberative discourse more so
than if landscape values were spatially generalized (i.e., not precise). As an example of this
assumption, if asked what the value of a forest is, a person who describes an
entire forest as having recreational value is making a valuation that is not spatially precise
because, relatively speaking, the valuation is being applied to the entire area in question.
However, when another person describes the forest as having recreational value because of a trail
within it, that valuation is said to be spatially precise because, again relatively speaking,
the valuation is being applied to a geographic feature within the area in question. In the first
instance the valuation is spoken broadly, whereas the second instance ties a personal experience
to a specific location.
Intuitively, one may think landscape valuations do indeed change discourse quality when
precise locations are included. This intuition may stem from the mindset that when a person says
a whole forest has recreational value, the person expressing the value could not possibly justify
their emotional experiences at small scales. However, if a person mentions a specific location
within the forest, it is more likely their valuation would be appreciated (i.e., their discourse has
high quality), given that the specificity of the valuation would match the scale at which people
are likely to emotionally experience a landscape.
Previous research on the role of precision in a person’s spatial thinking, and how that
precision is articulated in relation to discourse, is limited (Kar et al. 2016; Brown and Kyttä
2014; Brown and Kyttä 2018). In one instance, landscape values were examined to determine the
autocorrelation between a valuation and the geographic features that the valuation was
intended to reflect, as mentioned in qualitative reports submitted via PPGIS
activities (Brown 2012; Brown and Weber 2012; Brown, Weber, and de Bie 2014). In another
instance, a study looked at how qualitative narratives of places give valuations authenticity based on
that valuation’s expression between specific audiences (Elwood 2006). For the spatial sciences in
general, determining the accuracy and precision of data is an ongoing concern (e.g., O'Sullivan
and Unwin 2010). Yet the precision of qualitative data, in terms of building the case
that a landscape value is accurate and precise with respect to the phenomenon or feature being
mapped, has not yet been explored.
Despite this lack of research, the value of precise
spatial thinking is seen in the literature. For instance, Nummi (2018) demonstrated that a city’s
historic value is better appreciated by the public, and in turn better encourages a city to have
historic preservation efforts, when those values are expressed precisely (i.e. this building here, or
these two to three blocks of homes here). Wolf, Brown, and Wohlfart (2017) showed that
recreational values within a nature preserve are discussed at the scale of user trail segments,
demonstrating how individuals interact with the landscape and develop their valuations. Mitchell
and Elwood (2012) showed that the precise location of historical events (e.g. a protest over racial
injustices in the city’s town square) influence perceptions as to how demographic groups should
be treated. Overall, the literature seems to support the notion that precisely stating landscape
values generates influence just as high-quality discourse generates similar influence during
deliberative activities.
2.2. Defining PPGIS
Despite being recognized as a formal domain within GIScience since the late 1990s,
PPGIS is still defined rather loosely in the literature. A key debate is whether PPGIS is a tool to study
the spatial extent of policies (Ramasubramanian 2010), a science studying concepts and methods
for participatory mapping projects (Plantin 2014), or a meta-study as to how certain GIS
technologies yield certain kinds of participation results (McHugh, Roche, and Bédard 2009).
Prior to the 1990s, the term PPGIS was not used since these types of GIS use cases were limited
in number. However, once U.S. federal government agencies began to use spatial data and local
perspectives to justify grant funding, the debate increased as to how GIS for government public
participation should be defined and researched (Ramasubramanian 2010). Even after the
National Center for Geographic Information and Analysis (NCGIA) recommended the term
PPGIS as a distinct subdiscipline within the geographic sciences, the definition remains fluid
(Brown and Kyttä 2014; Kar et al. 2016). Regardless of a formal definition, it is largely agreed
that PPGIS essentially emphasizes the use of spatial thinking, leveraging geospatial
technologies so that the public has access to the policy planning process.
Some state that other types of public participation GIS cases, such as PGIS, VGI,
and softGIS, fall under the PPGIS domain (e.g., Brown and Kyttä 2014; Sui, Elwood and
Goodchild 2013). Others argue that there are differences between those types of public
participation GIS cases and PPGIS. While these divisions between cases seem trivial, it is
important to clarify what each type of public participation case does, in terms of the geospatial
technologies used and how the public uses those technologies. This is important because each
case has different goals for both the data collected and how the public is engaged. Ultimately,
how a landscape value collection strategy is classified can indicate how that public engagement
is perceived and how likely the collected data are to influence a policy process.
Furthermore, this case study area’s engagement strategy also helped determine what type
of spatial science research this project’s analysis was suited for. By looking at how the above
public participation GIS cases gather landscape valuations, and how those valuations are used,
this project framed why understanding spatial precision in narratives is important. With PGIS,
for instance, the geospatial “technology” used for soliciting spatial data is usually paper maps,
where the expected outcome is to foster solidarity on a local issue for grassroots political activism
(Plantin 2014). On the other hand, VGI uses web-based geospatial technology to collect almost
real-time spatial data, essentially making the public act as “citizen sensors”, and in some cases
using data as a means of solving specific problems (Bolstad 2016). Somewhere in between,
softGIS contains elements of both PGIS and VGI, where web-based technologies are used to
foster solidarity on a local issue. For this project, then, the submission of public
comments as part of a policy revision process is categorized as a softGIS activity. This is
primarily because the comments are collected as part of a web-based activity to engage the
public but are not part of a real-time collection effort (Kar et al. 2016).
2.3. SoftGIS Defined and Its Usage
SoftGIS can be understood broadly as both a lens to frame human geography research
(Kahila and Kyttä 2009; Vich, Marquet, and Miralles-Guasch 2018) and a methodology for
collecting data (Rantanen and Kahila 2009). Though there are other interpretations, softGIS
activities at their core consist of gathering qualitative spatial data to study socio-environmental
interactions between people and landscapes (Kyttä et al. 2013). Given that more spatial
science research, including this project, is being performed with qualitative data, softGIS
warrants further discussion of the type of data it works with and how analysis
of those data contributes to spatial scientific inquiry (Brown and Kyttä 2018). In
understanding what comprises softGIS data and analysis, this project worked to match its
methodologies and interpretative perspectives so that they were similar to those used in other
types of softGIS research.
SoftGIS is viewed as fitting with the current trend of using GIS for data collection by lay-
people with little to no formal training in geographic data collection (Sui, Elwood, and
Goodchild 2013). While typically not considered softGIS, both traditional participatory mapping
(PGIS) projects (Plantin 2014) and research using volunteered geographic information (VGI) on
landscape values (Nummi 2018) are characterized as falling within the softGIS framing as data
collection efforts done by non-geographic experts. For this project, understanding that softGIS
activities fit into a larger trend of using technologies to leverage political influence is important
for framing why the public would choose to use softGIS technology and methods to begin with.
Though PGIS and VGI are similar activities, softGIS is distinct in two ways. First, the
maps used by softGIS are typically void of layered spatial data, i.e. boundaries or roads with
attribute data. Instead, softGIS typically orients through a base map, e.g. with aerial imagery or a
topographic map, as a means of basic orientation for data contributors. SoftGIS participants then
add their landscape valuations based on an overarching data collection question to a “clean”
map. For example, Nummi (2018) asked participants to mark and describe if certain buildings
within an urban area as shown on aerial imagery had meaning (values) to them. By contrast,
PGIS activities are usually driven by consolidating local knowledge on known features and/or
continuous-field phenomena (Ernoul et al. 2018; Plantin 2014).
Second, softGIS typically captures transactional interactions, or affordances, that people
believe the environment “gives” to them. Affordances arise based on the psychological attributes
a person ascribes to a landscape during their interaction with the landscape. For example, one
could ascribe to a patch of forest an economic landscape valuation based on how a logger felling
the trees there generates a sense of economic independence (Brown and Reed 2000). These
transactional interactions are the result of wide ranging and complicated emotional processes
through which people attach meaning to an inanimate landscape. Yet interestingly, much like
GIS activities that measure temperature or rainfall to show a
snapshot of a phenomenon at a location, affordances collected through PPGIS, PGIS,
and VGI activities are usually interpreted deterministically (i.e., a landscape
value exists only as recorded or it does not exist at all). These types of GIS activities
therefore cannot capture the range of feelings people have for places, whereas softGIS can
because it seeks qualitative data, usually in the form of narrative experiences (Brown 2012;
Brown and Kyttä 2014; Kahila and Kyttä 2009). This observation does not disparage research
using PPGIS methods, but rather illuminates differences between quantitative and qualitative
landscape valuations collected with GIS.
SoftGIS’s expansion has been bolstered by other technological developments such as
Web 2.0 infrastructure (Fu and Sun 2011). Arguably, one reason softGIS activities
are increasing is that, being web-based, they are more accessible to the public (Kar et al.
2016). Through advances in computer processing power, more user-friendly interfaces, and
lower costs of broadband Internet, readily available web GIS programs (e.g. Google Maps, ride-
share apps, and social media) have shown people how they use spatial thinking in their daily
experiences (McHugh, Roche, and Bédard 2009; Mitchell and Elwood 2012). In turn, the
increased web GIS use has “trained” people to more readily identify how their place-based
experiences (as they identify through the use of web GIS programs) could change with a change
in policy (Brown and Weber 2012). Thus, as more people continue to volunteer their experiences
through web GIS activities and generate spatial data (e.g., landscape valuation, rankings of
importance, and narratives), the use of softGIS will be furthered (Fu and Sun 2011).
expansion of softGIS demonstrates that the resulting qualitative spatial data will not only become
more plentifully available, but that it has the chance to prove its value to researchers and for
policy planning (Plantin 2014; Sui, Elwood, and Goodchild 2013).
As softGIS has been defined and its use explored here, softGIS also appears to be a
methodology for researching qualitative spatial data that is appropriate for measuring change in
discourse quality based on the spatial precision detected in the discourse. Using a different
approach (such as those explained above) would be too deterministic in interpreting landscape
valuations qualitatively. The transactional interactions people have with an environment need
interpretative constructs to be translated into usable data for spatial analysis. SoftGIS’s focus on
qualitative data makes room for other interpretative approaches, such as discourse ethics, to help
understand narrative spatial data in the context of how landscape values may influence
deliberative discourse.
2.4. Landscape Values
Human geographers understand people’s place-based psychological experiences through
the lens of landscape values. Landscape values are typologies that categorize people’s thought
processes when formulating how a landscape satisfies their sociocultural and emotional needs
(Brown and Reed 2012). Though practical to use a structure that categorizes emotional
experiences, this type of quantifying does limit the ability to understand the landscape valuation
determination process. This happens because typologies can over-generalize how abstract
thought processes, such as emotional place attachment, operate in a natural environment
(Montello and Sutton 2013). Nevertheless, generalized typologies are suited for generalizing the
geographic extents of these emotional experiences across landscapes (Brown and Reed 2000).
Despite the generalizations, landscape values research has shown how valuations remain
consistent across time and space. Even if valuations change locations for a person (e.g., from
a forest to an open prairie), the valuations do not change if the conditions prompting the valuation
remain the same across different landscapes (e.g., trails in the forest and open prairie give both
areas a recreational value, despite ecological differences between landscapes) (Brown and Weber
2012). Research has also shown typologies correlate between a value’s ontological
conceptualization and how a person would categorize a value based on their conceptualization.
The values essentially can describe how people perceive locations without choosing from a set of
categorizations (Brown and Reed 2000). From the spatial sciences perspective, this consistency
with landscape valuations, in how people think about and apply them to environments, gives
qualitative data credibility in accurately capturing people’s location-based emotional
experiences.
As stated before, landscape values are based on a person’s psychological experiences.
The term ‘psychological experiences’ generalizes the experience of a person when social and
cultural identities influence one’s affective response to an environment. From there a person
assigns value to a place, giving the place a measure of value (e.g., valuation). The term
‘valuation’ generalizes a person’s assessment that a place is worthy due to its personal emotional
benefit (Brown, Raymond, and Corcoran 2015; Cerveny, Biedenweg, and Mclain 2017).
This generalized definition of landscape values is preferred since the quantitative
epistemology within this definition connects the qualitative nature of our landscape experiences
to a method for the quantification of landscape values. Using a narrower definition would mean
that the quantification methods used on the landscape values would not be reproducible beyond
an individual study site. For the analysis done in this project, such a method would be
of limited use for spatial science research, which would be counterproductive to the research
objectives outlined. This definition also gives planners a reason why policy changes can
affect an individual’s well-being if a policy alters their sense of place (Mitchell and Elwood
2012).
The landscape value literature is typically grouped within one of three research
objectives. These are (1) research on defining value typologies (Brown and Reed 2012), (2)
determining the spatial distribution of values across a landscape (Brown and Weber 2012;
Brown, Weber, and de Bie 2014; Ernoul et al. 2018), and (3) measuring how accurately the
values are indicative of a population’s “true” valuation of a landscape (Brown and Reed 2000;
Cerveny, Biedenweg, and Mclain 2017). Though research on landscape value typologies has
matured, more recent research has explored the use of qualitative data to create new typologies,
even if it is only used in a single study (Ernoul et al. 2018).
When determining the spatial distribution of landscape values, researchers typically use
point-pattern analysis since values gathered from softGIS or PPGIS activities are usually placed
as point features (Brown and Weber 2012), though in some cases they are polygons (Brown and
Pullar 2012; Brown, Raymond, and Corcoran 2015). Understanding the spatial distribution of
values can help suggest the management policy that would be best for an area, e.g. whether a
coastal area needs better animal management if it is predominately valued for its wildlife
viewing (Ernoul et al. 2018). Determining the spatial distribution of landscape values helps
determine whether values were placed by random chance or as a deliberate act (Engen et al.
2018). Identifying whether the spatial distribution of values exhibits significant patterning can
also determine if valuations were the result of social susceptibility bias during the collection of
values (i.e., people were placing values because of the values placed before them) (Brown and Reed 2000).
Finally, though challenging to quantify, research seeks to determine if values placed on
maps are similar to the same values expressed in non-spatial ways. Researchers hope here that
valuation is consistent across the mediums of expression – speaking, writing, art – to confirm
that softGIS or PPGIS approaches elicit principles that are deeply held normatively (Cerveny,
Biedenweg, and Mclain 2017).
Value typology research looks at whether different values are employed to describe the
same landscape. For example, in the context of urban green spaces, some state these spaces are
important for their natural value while others state these spaces harbor cultural value (Pietrzyk-
Kaszyńska, Czepkiewicz, and Kronenberg 2017). In another instance, users of multi-use trails in
a nature preserve indicated that while recreation is appreciated generally, some forms of
recreational activity are preferred over others, e.g., having recreational value based on the
landscape’s “ability” to improve an individual’s health and wellness, versus having recreational
value because the landscape has trail access to showcase the preservation of an area’s
biodiversity (Wolf, Brown, and Wohlfart 2017). Along similar lines, in another study the value
of urban density was seen as either an economic opportunity or a contribution to community
cohesion, despite people identifying both under the same valuation (Kyttä et al. 2013). Similar
research furthermore validates that landscapes mean different things to different people, though
many can agree to one overarching value typology (Wolf, Brown, and Wohlfart 2017). Research
also seeks to understand whether values reflect a person’s behaviors, or if a person is providing
attributes to an object they value, i.e. to describe what that object “gives” to them (Brown and
Reed 2000).
Though less prevalent in the literature, landscape values research has explored whether
values influence a policy-maker’s perception of place. Since landscape values are often collected
in the context of a policy process, a common research question is whether these values are
influential toward the policy being created (Elwood and Leszczynski 2013). Yet the answer to
this question remains vague. More importantly, past research has not offered a method to
measure the influence of qualitative data on policy processes (Brown and Kyttä 2018), which is
perplexing given softGIS’s goal to help people explain their value connection to places. The lack
of research is indeed an additional source of motivation for finding a methodology to measure
how valuations help influence policy.
2.5. Discourse Ethics
The field of discourse ethics looks at deliberation, a communication style that uses
concept framing methods and articulation techniques to change an opinion held by another party.
Deliberation is not the same as a debate: a debate is a formal setting with rituals and
rules for presenting arguments and counterarguments, where speeches are judged for the merit
and presentation of a position. Deliberation is different due to its content and context. The
subject-matter (content) is a pressing issue that warrants communicative effort, where a
compromise as to how to solve the issue is needed.
The context of deliberation is that a conversation between parties (people, not parties in
the political sense) becomes deliberative discourse because an issue requires a change in position
on a subject by one or both parties in order to execute an actionable item related to problem solving
(Bächtiger et al. 2010). Under this definition, discourse ethics offers an appropriate methodology
that could measure the quality of qualitative softGIS data, as a means of quantifying the
landscape values provided in the case study area’s public comments. Thus, using discourse ethics
methods here assumes that softGIS is essentially a digital venue for deliberation on an issue that
affects landscape values held by the public. Furthermore, discourse ethics offers the perspective
that deliberative actions are said to be influential, in terms of how a solution to an issue is agreed
on, when deliberative discourse is thought of as having high quality (Brown and Donovan 2013;
Engen et al. 2018; Wolf, Brown, and Wohlfart 2017).
Measuring deliberative discourse quality was important for the project since it helped
explain how discourse changed deliberation. Discussing quality in deliberation is not a means of
ranking discourse. Discourse that is not high quality does not necessarily mean the discourse is
not capable of swaying opinions. Discourse ethics emphasizes that changes to deliberation
outcomes should not be the result of aggression, in the sense of using threatening, bullyish
discourse. Rather, discourse ethics observes that high quality discourse should be the influential
factor as to how parties in deliberation can agree on an issue (Steenbergen et al. 2003).
Elements of quality discourse vary, but in general they encapsulate four tenets
(Steenbergen et al. 2003). The first looks at the justification a person has on a position, e.g. citing
sources for evidence or referring to trends that are observable by all parties. The second looks at
how counterarguments are respected among participants, e.g. exercising empathy or sympathy
(though this is not to acquiesce to the other argument on those grounds only). A third focuses on
recognizing when an argument being discussed is for the “common good” (25), e.g. a perspective
that offers solutions beyond the individual presenting the argument. Though ‘common good’ is a
broad term, in general it should be understood as the recognition that solutions to issues will
benefit beyond self-interest. Finally, quality discourse will “yield to force of the better argument”
(Maia et al. 2017, 8), e.g. when discourse quality as a whole convinces another that the reasoning
behind the solution being presented is considered universally trusted and accepted.
These discourse elements are a suitable frame to measure change in deliberative
discourse among qualitative spatial data. Used alone, applying these concepts to narrative data
would be limited to analyzing a single act of discourse. To operationalize these concepts for
quantitative exploration, Steenbergen et al. (2003) placed these discourse ethics
components into an index, known as the discourse quality index (DQI). In the DQI, the discourse
is weighted so that a single speech act can be quantified to measure the quality of discourse. The
index measures discourse quality by coding discourse, as text or transcripts, for the presence of
certain deliberative discourse items in a speech act. The DQI is meant to generate a
statistically meaningful discourse quality measure. This means that the items in the index
truly measure discourse quality (given the index’s item unidimensionality) and that the
results are meaningful in the context of assessing deliberation (i.e., the quality
values measured are unlikely to appear by chance). As explained below, the DQI is this
project’s approach to qualitative spatial data analysis.
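To make the index’s additive logic concrete, a minimal Python sketch follows. The item names and ordinal weights are illustrative stand-ins for the modified index detailed in Chapter 3 (only a subset of items is shown), not the coding instrument itself:

```python
# Minimal sketch of DQI-style scoring: a speech act is coded as a set of
# nominal items, each carrying an ordinal weight, and the index value is
# the sum of those weights. Item names and weights are illustrative; a
# subset of the modified index's items is shown for brevity.

# One coded public comment (speech act): item -> ordinal weight detected.
coded_comment = {
    "level_of_justification": 2,    # qualified justification
    "content_of_justification": 1,  # neutral
    "respect_for_groups": 1,        # implicit respect
    "constructive_politics": 0,     # positional
    "spatial_precision": 2,         # feature within study area
}

def dqi_score(coded_items: dict) -> int:
    """Return the discourse quality value as the sum of ordinal weights."""
    return sum(coded_items.values())

print(dqi_score(coded_comment))  # -> 6
```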
Other methodologies for analyzing discourse ethics are not limited to indices. Some
methods focus on reviewing how discourse can measure the proclivity for completing certain
behaviors. Other methods measure how “well” (not quality, but rather content) discourse
justifies an ideological position. Overall, the issues in measuring discourse quality are not
quantitative, i.e. how accurate or relevant the measures are. Rather, issues in measuring discourse
relate to determining when a measure is appropriate to use, based on the type of discourse being
analyzed and the context for which the discourse is taking place (Hammersley 2011; Hepburn
and Potter 2011; Wodak 2011).
For example, another type of index, the Deliberative Transformative Moments (DTM)
index (Jaramillo and Steiner 2014), found during this review, showed potential for use here but was
ultimately rejected. The DTM focuses on how discourse acts trend throughout a deliberative
process. An example is a group discussion between a police force and urban residents, where the
DTM showed how the power bestowed by the government on a police agency can influence
interactions within the public (Maia et al. 2017). As such, for this project the DQI was more
suitable for understanding speech acts that do not occur in such dynamic environments.
2.6. Spatial Science Approaches Related to Project Goal
There is a dearth of spatial science research with respect to the previously discussed
disciplinary paradigms and their applications. This is especially true with respect to qualitative
spatial data influence intersecting with discourse ethics methodologies. Among the PPGIS and
softGIS literature, research has focused on either the execution of GIS technology for public
participation projects or anticipating how the survey results may affect policy outcomes (rather
than using spatial analysis to validate the results) (Brown and Kyttä 2014; Ramasubramanian
2010).
That said, spatial analysis has been integrated into landscape value research.
Primarily, especially for PPGIS projects, this has focused on calculating indices of conflict to
gauge where locations with differing landscape values are contentious with respect to land-use
policies (Brown and Donovan 2013; Brown, Weber, and de Bie 2014). One case study
investigated the spatial autocorrelation of landscape value point clustering to determine if the
placement of values was relevant to the placement location (i.e., values were placed where
people had access or experiences, not by random chance) (Brown and Weber 2012). Though
calculating conflict indices is typical for landscape value research, followed by an occasional
spatial autocorrelation analysis of landscape values, there were some unique cases that solidified
how measuring change in discourse quality using qualitative spatial data is possible in spatial
science research.
Cerveny, Biedenweg, and Mclain (2017) used qualitative data analysis of narratives
submitted through a softGIS activity to show that the same landscape value can vary in meaning
when it is expressed in words versus being placed as a typological value point on a map. This is
interesting finding since it shows that the way in which landscape value data is collected can
influence the value’s intended meaning, despite the fact that both forms of data collection collect
the same emotions but in different formats. Thus, while a group may say they value a place for
its recreation opportunities, the expression of that value may potentially create conflicts among
users, even when everyone generally agrees to the same value but submitted that value through
different means.
This case demonstrates at a minimum why a measure of discourse quality would be
helpful when discussing policy issues. For example, when a position is taken on a land-use issue,
the landscape value referenced is assumed by the individual to be universally understood by
government and other individuals. But relying on the notion that the value categorization is
universally understood may reduce the argument’s ability to sway a
counterargument unless there is more context to clarify its expression (Cerveny, Biedenweg, and
Mclain 2017). In such cases, methods such as the DQI can clarify how qualitative data values
should be understood based on the valuation for its contribution to quality deliberative discourse.
In another study, Brown and Reed (2000) used discriminant factor analysis to predict
how values cluster when policy strategies that will be implemented in an area are already known
by respondents. An example is when valuations of wilderness cluster in the same area on a
basemap because that area has already been proposed as having a wilderness designation. When
it is not known ahead of time that an area is planned for wilderness designation, valuations of
wilderness protection on a basemap may not be as clustered, perhaps reflecting affective
responses to landscape more accurately, since people would not be tempted to place valuations
where governments “expect” them to be placed. Discriminant factor analysis is more
confirmatory as compared to collecting data to ascertain affective responses to a particular
location. Truly, if one already knows what type of values will be associated with a location, one
may not consider counterarguments. Values could also be related to their object of value, not
their inherent value. For example, knowing that an area is going to be a wilderness, a person may
value this object (wilderness) because of A, B, and C, versus showing how this area is valuable
because of its “wild nature”.
Discriminant factor analysis was inappropriate for this softGIS-related project since it
analyzes the way in which landscape values cluster based on the information presented ahead of
the mapping activity. The project’s analysis cannot measure changes in discourse quality if the
deliberative arguments being made are already known ahead of time, making the detection of
discourse change a moot point. Altogether, the DQI appeared as the most appropriate choice.
2.7. Gaps in Previous Research
Previous research has not employed a DQI to measure discourse change in qualitative
spatial data. The reason for this gap is uncertain, though there are a few possibilities. Measuring
discourse quality may not be thought of as an inherently spatial problem; therefore, the spatial
science research community may not feel it is appropriate to quantify qualitative data from
softGIS and PPGIS activities. This may be rooted in the fact that deliberative discourse is a by-
product of the collection of other spatial and non-spatial data. Regardless of the cause, this
project seeks to address this gap by applying a novel methodology to a spatial science problem.
2.8. This Project’s Contribution to Spatial Science
This project shows how methods from discourse ethics can measure changes in
deliberative discourse among qualitative spatial data. This is a necessary exploration given that
landscape valuations can be a meaningful way for governments to understand what places mean
to people, and how policy changes could elicit feelings of acceptance or rejection of a policy.
More importantly, by quantifying spatial narratives, this project offers a means for communities
to understand the type of spatial narrative it might take in the future to sway policy makers, so
policies respect the public’s landscape values. However, focusing on a sampled population’s
qualitative spatial narratives for the CNF limits the generalization of this project’s conclusions
to other populations or geographic regions. Nevertheless, from this project one should be
inspired to at least replicate these methods and further spatial science research using
qualitative spatial data.
Chapter 3 Methodology
The Methodology Chapter overviews the qualitative dataset and the analysis tasks used to achieve
the project’s goals. The ‘Research Design’ section outlines the specific content analysis methods,
showing the steps to process, analyze, and calculate the statistical results needed to interpret the
findings against the literature and research questions. The ‘Data Description’ section outlines the
dataset’s fitness for use in answering the research questions and for content analysis.
3.1. Research Design
The sequence of research tasks outlined in this Chapter demonstrates how the project was
able to answer its research questions. The sequence also shows how the data was processed at
each research task step to create a specific data product. The workflow presented shows what
type of data or information product was created at each step and if those products were used in
subsequent tasks. In this section, the specific tools utilized for each task are discussed.
3.1.1. Research Workflow, Tools, and Tasks Overview
Figure 3.1 shows the research workflow, outlining how the specific analysis tasks (T.[n])
fit within the overall methodological underpinnings using broad-step groupings. The
broad steps guided when each specific analysis task was performed, with a certain data product
or information in hand from another step. The specific analysis tasks also show how tasks were
dependent on another task or broad step.
Figure 3.1. Research workflow and analysis tasks.
The bulk of the project focused on using content analysis with the qualitative data during
the coding task (T.3 in Figure 3.1). This task relied on specialized qualitative content analysis
software, ATLAS.ti. For the rater reliability and index item correlation calculations
(T.4 in Figure 3.1), the item correlation was performed with open-source R software, while SPSS
was used to calculate rater reliability. The results of those calculations, along with the
construction of the discourse quality indices (T.5 in Figure 3.1), were recorded using a Microsoft
Excel spreadsheet. Each task outlined below explains how these tools were used.
Though not given a research task, the first broad step (labeled as ‘Stage One’ in Figure
3.1) was orienting the project’s goals and research questions so the analysis results could be
contextualized within the canon of previous research. This step resulted in the perspectives
presented in the ‘Introduction’ and ‘Related Work’ Chapters. Such content framed why this
project matters and what the results meant for the project’s goals and spatial science. This step
also entailed locating data that helped fulfill the project’s goals. A detailed explanation of the
data selected, and the process for its selection, is in section 3.2.
The first research task (T.1 in Figure 3.1) focused on determining how spatial precision
could be detected and measured using a content analysis approach. This required anticipating
how landscape values would be expressed spatially in narratives using deliberative
communicative elements. After having identified the language to detect spatiality in narratives,
the spatial dimension construct needed to be incorporated into the DQI. Incorporating the spatial
precision item required that spatiality be identified using ordinal rankings to signify the
magnitude at which a comment’s landscape value was spatially precise. Since the original DQI
research did not include information on how the current nominal items’ ordinal weights were
devised, a three-tiered logic was used to devise a ranking schema as to how spatial precision
would be detected in qualitative data for this project. The explanation of how this ordinal
ranking was devised is detailed in subsection 3.3.1.
The second task (T.2 in Figure 3.1) prepared the qualitative data for use with the content
analysis software. The qualitative data needed preparation since the data was in the form of
public comments written on PDF documents, as shown in Figure 3.2. Each public comment was
submitted either by an individual person, with no formal statement on their connection to an
organization, or by an individual on behalf of an organization. Sometimes, if either the person or
organization addressed specific items from the CNF’s revised plan, comments were submitted
with additional documentation (e.g. comments written on organizational letterheads). All the
comment documents were inspected using Windows Explorer, so only comments with text that
related to landscape values were included. Each comment document was housed in one master
comments folder. Once this process was complete, there was one digital folder containing 151
PDF documents of qualitative comments.
Figure 3.2. Example of public comment and its metadata regarding the CNF policy revision.
Source: USFS 2018b.
The third task (T.3 in Figure 3.1) coded the qualitative comments using the DQI, while
extracting each comment’s spatial features and landscape values. The PDF documents prepared in
the second task were uploaded into ATLAS.ti, a type of content analysis software. Once the
documents were uploaded, the DQI’s coding schema was programmed. ATLAS.ti then generated
two project files with the documents and coding schema. Two project files were needed so that
the two first-cycle coding sessions required to generate a rater reliability score would not
influence the second coding session with codes from the first. A third, second-cycle coding
session was also performed on measures in the coding schema where analytical memos showed
significant concerns with consistently applying the schema during the first-cycle sessions (see
Saldaña 2013 for description of coding cycles and analytical memo documentation examples).
For content analysis research it is not ideal to have only one coder (Steenbergen et al.
2003); however, this project’s resource constraints did not allow for a second coder. Thus, the
primary researcher coded and recoded the comment document groups with a minimum of one
week’s time between the first and second sessions. The time span between coding sessions
allowed the primary researcher to reevaluate whether the index items applied during
the first cycle of coding reflected the intended meaning those items were meant to encode.
Having a second recoding allowed for refining the application of the DQI’s items, based on the
first-cycle coding and closer scrutiny of the DQI’s outlined application strategies that had been
missed as the primary researcher became acclimated to using the DQI in this project’s context
(Saldaña 2013).
The third recoding session looked at the application of two items in the DQI, the ‘Level
of Justification’ and the ‘Content of Justification’. These items were singled out because rater
reliability statistics confirmed what the analytical memos from the first-cycle sessions
suspected: inconsistent identification of magnitudes for these two nominal constructs. Such
low consistency showed that the rater did not have a universalistic concept of the nominal
constructs by which to consistently detect and code the presence of each item in a speech act.
This could have led other statistical analysis results to falsely suggest that the DQI
items had been applied to the comments as intended by the constructs. The third recoding
therefore reevaluated the analytical memos against the original DQI research construct parameters
to see if concepts from the original research were being applied to the project’s dataset in a way
that left little reasonable doubt as to the magnitude detected.
Each comment had a minimum of six codes, five from the original DQI and the one
spatial item. With the six codes, the entire public comment dataset had discourse quality values
calculated three times. The first calculation emphasized discourse quality when precise spatial
narratives were present. The second emphasized when the spatial narratives were less precise,
while the third showed how a lack of precision in the spatial narratives affected discourse
quality. The coded qualitative comments based on the modified DQI items were then used in the
fourth task.
The fourth task (T.4 in Figure 3.1) involved calculating the DQI item correlation and the
rater reliability scores. These calculations validated two research concerns: first, that the DQI
codes were applied consistently across the public comments, and second, that the spatial item
incorporated into the DQI met the statistical unidimensionality requirement, showing the spatial
item was indeed measuring just the spatial component in the comments. The results from the two
datasets of coded qualitative comments were entered into an Excel spreadsheet in a
crosstabulation format. This spreadsheet was then saved to a format usable in other statistical
analysis software, e.g., SPSS and R.
Calculating the rater reliability scores sought to follow the methods outlined in
Steenbergen et al. (2003, 37-9), using SPSS to perform these calculations. Of the four reliability
statistics the original research used, this project used the ratio of coding agreement (RCA) and
Cohen’s kappa (κ). Spearman’s rho (ρ) correlation and standardized alpha (α) were also run,
though not included in the results. The parameters surrounding the use of these statistics are
explained below in subtopic 3.3.3.1. The item correlation calculation used Steenbergen et al.’s
(2003, 39-41) method of polychoric correlation coefficients. These coefficients helped determine
whether the construct of spatial precision could be incorporated into the DQI to measure
discourse quality, i.e., whether spatial precision had unidimensionality with the other
items of the index. This is explained more below in subtopic 3.3.3.2. This calculation was
executed in R software. The results of this task were not used in the remaining tasks but
validated how reliably the DQI coding had been applied to these comments.
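As a concrete illustration of these reliability checks, the short Python sketch below is a hypothetical stand-in for the SPSS workflow, with made-up codes from two coding sessions; it computes the RCA and Cohen’s kappa. The polychoric correlations, as noted, were computed in R and are not sketched here.

```python
# Sketch of the rater reliability calculations, assuming the two coding
# sessions are stored as parallel arrays (one code per coded comment item).
# The codes below are hypothetical; this stands in for the SPSS workflow.
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical codes from the two first-cycle coding sessions.
session_one = np.array([2, 1, 0, 2, 3, 1, 2, 0])
session_two = np.array([2, 1, 1, 2, 3, 1, 2, 0])

# Ratio of coding agreement (RCA): share of items coded identically.
rca = np.mean(session_one == session_two)

# Cohen's kappa: agreement corrected for chance.
kappa = cohen_kappa_score(session_one, session_two)

print(f"RCA = {rca:.2f}, Cohen's kappa = {kappa:.2f}")
```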
The last task (T.5 in Figure 3.1) constituted the project goal, namely constructing the DQI
to ascertain how the quality of discourse among public comments changed when precise spatial
landscape values were used. For this, the coded comments from the third task (T.3) were used.
Employing the recommended method in Steenbergen et al. (2003, 41), DQI values were calculated
by summing the weights of all items found in a comment and repeating that per-comment
calculation across the whole comment dataset. Microsoft Excel was used for this task and for
generating descriptive statistics on the indices. The descriptive statistics were used to compare
the changes in discourse quality among the different weights indicating the level of spatial
precision within the public comment dataset. A paired-samples t test was also run to show whether
the difference between discourse quality values with and without the spatial item in the modified
DQI was statistically significant, i.e., whether the presence of spatial precision in narratives
significantly influenced overall discourse quality. The results from this task are further discussed
in the ‘Discussion’ Chapter.
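For clarity, the per-comment summation and the significance test can be sketched as follows. This is a minimal Python illustration of the additive index and the paired-samples t test described above, standing in for the Excel workflow; the coded weights shown are hypothetical.

```python
# Sketch of task T.5: per-comment DQI values are computed with and without
# the spatial precision item, then compared with a paired-samples t test.
# The coded weights below are hypothetical.
import numpy as np
from scipy.stats import ttest_rel

# Each row: ordinal weights for one comment's non-spatial DQI items.
nonspatial_items = np.array([
    [2, 1, 1, 0],
    [3, 2, 2, 1],
    [1, 0, 1, 0],
    [2, 2, 1, 2],
])
# Spatial precision weight (0, 1, or 2) for each of the same comments.
spatial_item = np.array([2, 1, 0, 2])

dqi_without_spatial = nonspatial_items.sum(axis=1)
dqi_with_spatial = dqi_without_spatial + spatial_item

# Descriptive statistics, then the paired-samples t test.
print("mean without spatial item:", dqi_without_spatial.mean())
print("mean with spatial item:", dqi_with_spatial.mean())
t_stat, p_value = ttest_rel(dqi_with_spatial, dqi_without_spatial)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```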
3.2. Data Description
The qualitative data came from public comments submitted on proposed changes to a
forest land management policy for the Chugach National Forest (CNF) in the State of Alaska,
U.S.A. These public comments represented the discourse collected for a deliberative process.
Acquiring the qualitative data was accomplished by downloading the public comments available
for viewing through the CNF’s free-access online reading room.
3.2.1. Exploration of Quality and Fitness for Use
The dataset would be considered thematically relevant if the narrative and metadata
content had elements that could help in answering the four research questions. The dataset’s
completeness was based on whether its narratives would explain why the submitted landscape
values should be considered as part of the CNF policy revision. To determine the overall quality
of the data for this project’s approach to content analysis, a dozen individual comment cases
from the dataset were randomly selected for direct viewing. This viewing entailed interpreting
the content of the metadata and narratives to anticipate the extent of the landscape values that
could be expected in a larger dataset. From this initial review, the dataset was judged for its
thematic relevance to the research questions. The dataset was also assessed as to its completeness
in attributes and geographic coverage of the case study area. Completeness also evaluated
whether the data’s geographic content contained references to landscapes relevant to the CNF.
The metadata for this qualitative dataset is presented in Table 3.1.
Table 3.1. Metadata for nonspatial qualitative data

Dataset: Public Comments of Qualitative Landscape Valuation
Dataset Source: Chugach National Forest (CNF), USFS
Format: PDF documents
Date of Compilation: Collected from Dec. 18, 2015 to Feb. 19, 2016
Population Sample: 1,501

Source: Data from US Forest Service 2018b.
The fitness for use exploration showed the dataset was both relevant and complete, and
therefore appropriate to the four research questions. The comments contained valuations which
were intended by the public to influence the USFS’s development of a land management policy.
The narratives’ content ranged from being precise in landscape valuation (geographically and
logically) to fuzzy, i.e. broad statements as to preserving the environment in general. The
metadata for each comment also consistently and clearly indicated when the comment was
collected, from where (i.e., within the state of Alaska or beyond), and whether the comment was
submitted by an individual or on behalf of an organization. From this review, the project was
assured that the dataset was appropriate for this type of qualitative spatial data research.
3.2.2. Sampled Population Data Selection
A sampled population for the case study area’s qualitative dataset was used to measure
changes in discourse quality. The population for the comment dataset consisted of those who
claim to be users of the CNF, where the users are considered in the broadest sense to be
the “public” as defined by this project. This population, based on their claim of use, was thus
most likely to have landscape values for the CNF, which subsequently should motivate these
users to want to influence the policies that manage the CNF landscape.
The sampling frame consists of 1,501 public comments submitted to the CNF that were
later published online. Fifteen comment groups of 50 comments per group, plus a 16th comment
group containing one comment, were chosen through a random process. This sampling strategy
was based on the objective to analyze approximately 10% of the comments available. A 10%
sampling size for qualitative data analysis was acceptable when compared to other types of
qualitative analysis research, including PPGIS research with public participation rates usually
between 10% and 15% (Brown and Kyttä 2014), and the original research developing the DQI,
in which 56 cases from several hundred deliberative speeches were eligible for analysis
(Steenbergen et al. 2003).
To download the data, the organizing functionality of the data’s online webpage viewer
was used, where this project set the webpage to select 50 comments within its display. These
selected comments on the page were then downloaded. This process was done arbitrarily starting
with the first webpage display, as shown in Figure 3.3, then working through every odd webpage
number, i.e. “1, 3, 5, 7,” and so on until the final webpage that displayed all the comments
available was reached.
Figure 3.3. Public comment webpage “reading room” site for the CNF policy revision process.
Source: US Forest Service 2018b.
Within each downloaded comment group, 10 comments from 10 different members of
the public were chosen for content analysis using a random number generator. This strategy was
possible since each comment was given a document number by the USFS for their document
tracking. This project used those same numbers to select which comments were analyzed. The
random number generator was supplied by Microsoft Excel. There were nine comment selection
rounds, where the random number generator was run per comment group. For all but two rounds,
comments were selected using random numbers. In instances where the number generation
resulted in low selection (i.e., one comment per round, or no selections after two or three rounds),
comments were selected directly. The direct selection was based on reducing some of the uneven
distribution of the comment group’s tracking numbers at that point in the selection process (e.g.,
multiple comments with document numbers in the 1200s but none in the 1400s). Thus, either the
researcher directly chose a comment based on its document number compared to the distribution
of document numbers for the selected comments, or the number generator was rerun until there
was a matching comment number. This sampling method selected 151 comments for
coding, representing approximately 10% of the sampling frame, as intended.
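As a rough illustration of the random selection step, the sketch below draws 10 distinct comments from a hypothetical 50-comment group, with Python’s random module standing in for the Excel random number generator and made-up document tracking numbers:

```python
# Sketch of selecting 10 comments from a downloaded group of 50, keyed by
# USFS document tracking numbers. Python's random module stands in for the
# Excel random number generator; the document numbers are hypothetical.
import random

random.seed(42)  # fixed seed so the illustrative draw is repeatable

# Hypothetical USFS document tracking numbers for one 50-comment group.
group_doc_numbers = list(range(1200, 1250))

# Randomly select 10 distinct comments from the group for content analysis.
selected = sorted(random.sample(group_doc_numbers, k=10))
print(selected)
```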
3.3. Research Task Details
The tasks presented in the workflow in Figure 3.1 are outlined in further detail here.
Each task’s dependence on other tasks is discussed. How the specialized tools were used, and the
data or information outputs these tools generated, are explained as well.
3.3.1. Detecting Spatial Precision in Content Analysis
The detection of spatial precision using content analysis emerged from the
literature review as plausible. This detection is plausible based on an individual’s ability to
exercise spatial thinking by describing how objects are oriented in a space (e.g., describing how
to arrange furniture in a room, or verbally giving someone driving directions) (Cerveny,
Biedenweg, and Mclain 2017; NRC 2006). This recognition formed the basis on which the
project developed a construct to detect the spatial precision of a person’s landscape
values. After all, if people already use spatial thinking to orient themselves and others as objects
within a space, then that thinking should be applicable to describing how a person’s landscape
values connect with the landscape.
Identifying whether spatial precision could be extracted from narratives for this project
required devising a means of quantifying a person’s spatial precision. The quantification allowed
for the use of statistics-based analysis methods to discern whether an individual’s communicative
items, as identified in the existing DQI, share any statistically significant patterning when
the spatial precision of a landscape value is present. In other words, quantifying spatial precision
allows us to determine whether spatial precision correlates with other elements used in deliberative
discourse, and if there is correlation, how the correlations between spatial precision and other
items affect deliberative discourse quality.
Developing a methodology to quantify spatial precision using the modified DQI required
looking at how landscape values could potentially be articulated using language. As such,
the project first looked to existing research on quantifying spatial precision in narratives, for
which, as this project has already presented, such quantification schemas are essentially
nonexistent. Ergo, a custom detection schema was devised, similar to those
used with the original DQI. In the original DQI, each nominal item had an ordinal ranking
schema to “measure” the magnitude at which that nominal item was present in a speech act. Those
rankings were devised from previous research in discourse ethics (Jaramillo and Steiner 2014;
Maia et al. 2017; Steenbergen et al. 2003). Such research suggested that deliberative items are
not used by people in binary terms. Rather, deliberative items are used along a continuum of
intensities, and the demarcation between one intensity and another can be seen in both
explicit and implicit language (Steenbergen et al. 2003). For detecting spatial precision, the
ordinal ranking schema was devised to reflect the notion that spatial precision is not a binary
concept.
Comparison is necessary for quantifying spatial precision, even though comparison can be
subjectively applied. When using language to locate an object in space, spatial thinking research
alludes to how the choice of words to describe an object appears dependent on the context of the
subject in a speech act (Brown and Weber 2012; Cerveny, Biedenweg, and Mclain 2017; Elwood
2006; Engen et al. 2018; Kyttä et al. 2013; Mitchell and Elwood 2012; Nummi 2018; Plantin
2014). For example, should one give directions to a lake, the person giving the
directions may first ask where the other person is starting from. Upon understanding how the
location of the lake relates to the location of the second person, the person giving the
directions is able to orient the second person to the cardinal directions they would need to
arrive at the lake. Thus, as presented in this case, the spatial context is an area that surrounds
both the traveler and their destination. Without context, any cardinal or orienteering-based
directions would be meaningless, since references to how one should orient oneself to
move toward a destination would not be grounded in the space in which the references were
intended. As such, a landscape valuation cannot be deemed spatially precise unless it
has a spatial context, a broader area against which to compare it to show how much more certain
the location of the landscape value is. Indeed, since the use of language to describe spatial
phenomena arguably creates a nominal measurement scale, having a comparative component
for this nominal construct of spatial precision helps quantify spatial narratives, since the
comparison allows the precision intensity to be ranked ordinally against a “baseline” measure.
With these concepts, the ordinal schema to quantify spatial precision followed a three-
tiered logic along a continuum: precision is ranked at magnitudes of zero, one, and two. The
inclusion of a zero ranking is necessary for alignment with the other items of the original
DQI, where a zero ranking indicates that an item has insignificant presence in a speech act.
These magnitudes correlate to the concepts of spatial precision outlined in Table 3.2. As with the
original DQI’s items, a higher ordinal ranking indicates a stronger magnitude of presence of
that item. Thus, overall, the less precisely landscape valuations are mentioned, the lower their
magnitude in the speech act itself. How the nominal construct correlates overall with the other
items in the DQI is presented in the ‘Results and Discussion’ Chapter.
Table 3.2. Spatial precision item’s ordinal weights for discourse quality index

Spatial Construct | Weight | Meaning of Weight
Beyond Study Area | 0 | No explicit mention of the case study area in general or of a feature within the case study area
Study Area Only | 1 | Explicit mention of the case study area in general only, without a mention of a feature within the case study area
Feature Within Study Area | 2 | Explicit mention of at least one feature within the case study area

Source: Marder 2018.
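The three-tiered logic in Table 3.2 can also be expressed as a simple classification rule. In the Python sketch below, the place-name lists are illustrative assumptions (a few CNF region and feature names drawn from Chapter 1), not the coding vocabulary actually applied during content analysis:

```python
# Sketch of the three-tiered spatial precision ranking in Table 3.2.
# The place-name lists are illustrative assumptions, not the full coding
# vocabulary applied during content analysis in ATLAS.ti.

# Names for the study area as a whole (weight 1 if mentioned alone).
STUDY_AREA_TERMS = ["chugach national forest", "cnf"]
# Features/regions within the study area (weight 2 if any is mentioned).
FEATURE_TERMS = ["kenai", "prince william sound", "pws",
                 "copper river", "nellie juan"]

def spatial_precision_weight(comment: str) -> int:
    """Return 0, 1, or 2 per the ordinal schema in Table 3.2."""
    text = comment.lower()
    if any(term in text for term in FEATURE_TERMS):
        return 2  # explicit mention of a feature within the study area
    if any(term in text for term in STUDY_AREA_TERMS):
        return 1  # study area mentioned in general only
    return 0      # no explicit mention of the study area or its features

print(spatial_precision_weight("Protect the trails on the Kenai."))      # 2
print(spatial_precision_weight("The Chugach National Forest matters."))  # 1
print(spatial_precision_weight("Please preserve wild places."))          # 0
```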
3.3.2. Content Analysis Approach
A discourse quality index (DQI) provides a means of analyzing qualitative comments. The non-modified DQI contained seven alphanumeric codes. Each code defines the communicative item that should be present in a speech act. The notion of a “speech act” encompasses the delivery of an idea from an individual to another using the communicative items (Steenbergen et al. 2003, 27). These communicative items are accordingly coded to show the magnitude at which an item is present in a speech act. Once coded, the numeric magnitude weights are used to calculate the quality of discourse. As speech acts consisted of a person’s submitted comments, the DQI was applied to each comment. While not occurring in real time, a public comment was considered here to be part of the deliberative process and hence eligible for the DQI to be applied. A detailed overview of the original index’s items and their ordinal weights is provided in ‘Appendix A.’
The ‘modified DQI’ for this project is shown in Table 3.3. The DQI was modified to eliminate items that were anticipated to have no variation in magnitude during content analysis. A lack of variation in an item’s magnitude was interpreted by the original DQI creators as a sign that the item would not contribute to the discourse quality of a speech act. Essentially, if an item was detected without variation, then it was assumed that the item had a magnitude which would neither contribute to nor degrade discourse quality. This assumption holds because a lack of variation means the communicative item is normative for the discourse context, and therefore not influential on discourse quality (Steenbergen et al. 2003).
Table 3.3. Modified discourse quality index’s six nominal items and their ordinal weights

Index Item: Level of Justification
  None (weight 0): Speaker states what should/not be done without reasoning as to why
  Inferior (weight 1): Speaker states what should/not be done, but the reasoning as to why has no linkage, or the reasoning is based on illustrations
  Qualified (weight 2): Speaker states what should/not be done and at least one reasoning as to why has linkage
  Sophisticated (weight 3): Speaker states what should/not be done and at least two reasonings as to why have linkage

Index Item: Content of Justification
  For group interests (weight 0): Speaker states argument to benefit one or more group interests
  Neutral (weight 1): Speaker does not state argument to benefit a group interest nor the ‘common good’
  For common good, utility (weight 2a*): Speaker states argument to benefit the ‘greatest good for the greatest number’ (utilitarian terms)
  For common good, difference (weight 2b*): Speaker states argument to benefit the ‘least advantaged in society’ (difference principle)

Index Item: Respect (for groups)
  None (weight 0): Speaker mentions only negative statements about groups participating in or benefiting from deliberation
  Implicit (weight 1): Speaker mentions neither negative statements about groups participating in or benefiting from deliberation nor explicit positive statements
  Explicit (weight 2): Speaker mentions at least one positive statement about groups participating in or benefiting from deliberation, regardless of whether there are negative statements as well

Index Item: Constructive Politics
  Positional (weight 0): Speaker offers no opportunities for reconciliation or consensus building on an issue being deliberated
  Alternative (weight 1): Speaker offers reconciliation or consensus building, but the offer is for another issue not related to the one currently in deliberation
  Mediating (weight 2): Speaker offers reconciliation or consensus building on an issue being deliberated

Index Item: Respect for Counterarguments
  Ignored (weight 0): Speaker flatly ignores counterarguments
  Degraded (weight 1): Speaker acknowledges counterarguments, but also explicitly degrades them with a negative statement about the reasoning or the speaker presenting the counterargument
  Neutral (weight 2): Speaker acknowledges counterarguments but does not explicitly apply a negative or positive value to them
  Valued (weight 3): Speaker acknowledges counterarguments and explicitly states they have positive value

Index Item: Spatial Precision
  Beyond Study Area (weight 0): No explicit mention of case study area in general or a feature within the case study area
  Study Area Only (weight 1): Explicit mention of the case study area in general only, without a mention of a feature within the case study area
  Feature Within Study Area (weight 2): Explicit mention of at least one feature within the case study area

Source: Steenbergen et al. 2003; Marder 2018.
* Ordinal weights (2a) and (2b) share the same ranking weight, as the creators of the original DQI deemed these reasonings to have the same impact on discourse, yet they should be delineated due to the different reasonings applied.
The ‘Participation’ and ‘Respect (for the demands of others)’ items were eliminated since the dataset’s context assumed these items would be detected at constant magnitudes. For instance, ‘Participation’ measures whether a speaker during deliberation was able to participate or was interrupted. For this dataset, participation would be a constant, since the project interpreted the submission of a public comment as a willing indication to participate. The ‘Respect (for the demands of others)’ item falls along similar lines: since public comments are a mechanism for demanding the attention of the USFS requesting the narratives, the project assumed that a submitted comment was a demand for the narrative to be respected.
Upon eliminating two of the original seven items, a spatial precision item was added to the DQI to make the modified, 6-item DQI. The spatial precision item quantified the magnitude to which spatially precise landscape values were present in a comment. Spatial precision was determined based on detecting the mention of at least one geographic feature within the case study area boundary. For instance, a comment that mentioned only a broad but similar landscape, e.g. “Save the forests!”, would not be considered to have any contributing weight toward the comment’s discourse quality, due to the lack of spatial precision relative to the case study area. A comment that referenced the study area but did not mention specific features or points, e.g. “Save the CNF!”, would be considered to contribute weight toward the comment’s discourse quality. Comments that mentioned specific features or points would be considered to contribute the greatest weight toward the comment’s discourse quality, e.g. “Save Knight Island that’s part of the CNF!”
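The coding itself was performed manually during content analysis, but the ordinal decision rule above can be expressed programmatically. The following is a minimal Python sketch of that rule, assuming a hypothetical study-area keyword list and a hypothetical gazetteer of features within the case study area; both lists and the function name are illustrative and were not part of the project’s actual workflow.

# Minimal sketch of the 'Spatial Precision' decision rule from Table 3.2.
# The keyword lists below are hypothetical placeholders; the thesis
# coded comments manually during content analysis.
STUDY_AREA_TERMS = ["cnf", "chugach national forest"]
FEATURE_GAZETTEER = ["knight island", "turnagain pass", "columbia glacier"]

def spatial_precision_weight(comment: str) -> int:
    """Return the ordinal weight (0, 1, or 2) per Table 3.2."""
    text = comment.lower()
    # Weight 2: at least one feature within the case study area is named
    if any(feature in text for feature in FEATURE_GAZETTEER):
        return 2
    # Weight 1: the case study area is named in general terms only
    if any(term in text for term in STUDY_AREA_TERMS):
        return 1
    # Weight 0: no explicit mention of the study area or a feature in it
    return 0

print(spatial_precision_weight("Save the forests!"))  # prints 0
print(spatial_precision_weight("Save the CNF!"))  # prints 1
print(spatial_precision_weight("Save Knight Island that's part of the CNF!"))  # prints 2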
An important point is that the differences between the weights for all the items were not known; there was no way to know by how much one item’s quality is better than another’s. For example, if the DQI for one speech act is “7” and for another speech act is “15”, then the speech act with the higher DQI would be characterized as having higher quality discourse than the one with the lower DQI. The degree to which the speech act with a value of “15” is better than the one with a “7”, however, is not eight; rather, the difference can only be expressed in greater-than or less-than terms. This was acceptable for the project since the research goal was determining how spatial precision in landscape valuations changes discourse quality against the overall dataset, rather than determining how the DQIs compared between individual comments.
A generalized example as to how the index items were applied during the content analysis is provided in Figure 3.4. Using a fictitious public comment, the narrative is evaluated to determine the magnitude at which each nominal item construct exists. The magnitudes at which each nominal item can exist were set by the original DQI creators.
Figure 3.4. Generalized overview of the process for coding narratives using the modified DQI. Each nominal index item is listed with its coded ordinal ranking weight and the reasoning for the weight coded.

Example comment: “For the sake of our children, we must protect the CNF, so they can have a place to understand and appreciate nature as I had before in the same place!”

Level of Justification (weight of “1”): The comment gives a reason for the policy revision, “For...children...protect the CNF...”, but the reason to protect the forest for children is not clearly linked. The overall reasoning seems more illustrative.

Content of Justification (weight of “1”): The comment is not explicit in saying the forest should be protected because children are a more important group interest than another, so the content is considered neutral.

Respect (for groups) (weight of “1”): The comment shows no negative or positive statements about the groups participating in this deliberation (comment submissions for policy change), so the respect is considered neutral.

Respect for Counterarguments (weight of “0”): The comment does not acknowledge counterarguments made in the policy for certain actions being proposed (e.g. opening up wilderness access to generate jobs in timber harvesting).

Constructive Politics (weight of “0”): The comment does not offer reasoning or an opportunity to mediate on the policies proposed (e.g. if allowing more timber harvesting, then reduce snow machine access to reduce environmental impacts).

Spatial Precision (weight of “1”): The comment states only to “...protect the CNF...”, not a specific feature within the CNF. Nor is the comment so overly broad that a coder cannot tell it is in reference to the study area and not elsewhere (e.g. “protect our national forests!”).
To clarify this process, an example can be seen in identifying the magnitude at which the ‘Level of Justification’ is being exercised in a narrative. The original DQI creators assumed that a coder reviewing the narrative will use their preconceived notions as to how a person communicates rationale and logic and presents evidence to substantiate a thought on what should be done on an issue in deliberation. Though the original DQI does set thresholds defining which magnitude to assign given the presence of specific communicative cues, initially identifying the constructs in the narratives is up to the coder and based on the coder’s understanding as to which communicative cues signal that the index’s nominal item is being exercised. Thus, the “reasoning” being identified is based on the coder’s acceptance of the universally understood construct of reasoning. Similar logic for the other index items is presented in Figure 3.4.
3.3.3. Index Reliability and Validity Calculations
This stage involved calculating the DQI item correlations and the rater reliability scores. These calculations addressed two research concerns. First, that the DQI codes were applied consistently across the public comments. Consistency means that the item constructs were detected, and their magnitudes weighted, as one would reasonably expect to detect and weight those constructs in discourse. Second, that the spatial item incorporated into the DQI shows statistically significant unidimensionality with the index’s other items. This is required not only to show that the spatial item was indeed measuring spatial precision in narratives, but also that spatial precision as a construct is a deliberative, communicative element that can be measured in discourse just like the other items in the DQI.
3.3.3.1. Rater Reliability Calculations
Calculating the rater reliability scores sought to follow the methods outlined in Steenbergen et al. (2003, 37-9). In their original research, four reliability statistics were calculated: the ratio of coding agreement (RCA), Cohen’s κ ‘kappa’, Spearman’s ρ ‘rho’ correlation, and standardized α ‘alpha’. These rater reliability statistics quantify how consistently the index’s items were applied during content analysis. Consistency in content analysis means that the item constructs’ magnitudes were applied in a manner that matches expectations as to how the item constructs should be detected in deliberative discourse. Thus, these reliability statistics speak to how well those expectations would be matched over a theoretically infinite number of applications of the index (Steenbergen et al. 2003; Urdan 2017).
These calculations helped determine an important aspect of the index based on its item definitions. When there is “strong” rater reliability, the index’s item constructs are being understood consistently between different raters, or, when there is only one rater, across more than one coding session. Such reliability means that the index’s constructs are calibrated appropriately to be detected in the discourse dataset (that the definitions used to detect an item in narratives are neither too broad, nor too confined for use in limited research contexts). Rater reliability scores then helped in understanding how the DQI’s items were being applied to the public comments to detect the magnitudes at which the items are present in narratives. If the level of reliability calculated is not “strong,” this could indicate that the rater has an inaccurate understanding of the items, and subsequently applied those items inconsistently during content analysis (Steenbergen et al. 2003; Urdan 2017).
Strong rater reliability is typically considered to be ≥ 0.70 (Urdan 2017) and was calculated for this project per nominal item and per ordinal magnitude within each nominal item. These calculations require at least two content analysis sessions in which the DQI was applied to “code” the public comments using the ordinal magnitude weights. The ratio of coding agreement (RCA) is determined by comparing the number of codes that match between coding sessions to the total cases available to code, producing a decimal-percentage showing the consistent coding results relative to the total cases available. Along with RCA, Cohen’s κ ‘kappa’ looked at the likelihood that the codes were not applied by random chance. The results are a decimal-percentage, where values ≥ 0.70 (Urdan 2017) are considered to show that the ordinal magnitudes were not likely to have been applied randomly. In other words, when the current coding results are compared to a randomized dataset, the results appear more deliberate in their patterning, and therefore in their application, than if the codes had been randomly assigned across all cases in the dataset.
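As a minimal sketch of these two statistics, the Python fragment below computes the RCA and Cohen’s kappa for one index item across two coding sessions; the code vectors are fabricated for illustration, and scikit-learn’s cohen_kappa_score stands in for whatever software the original researchers used.

import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical ordinal codes for one index item from two coding sessions
session_1 = np.array([0, 1, 2, 2, 1, 0, 2, 1, 1, 2])
session_2 = np.array([0, 1, 2, 1, 1, 0, 2, 1, 2, 2])

# Ratio of coding agreement: matching codes over total cases coded
rca = np.mean(session_1 == session_2)

# Cohen's kappa: observed agreement corrected for chance agreement
kappa = cohen_kappa_score(session_1, session_2)

print(f"RCA = {rca:.3f}, kappa = {kappa:.3f}")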
Spearman’s ρ ‘rho’ correlation and standardized α ‘alpha’ are focused on the ordinal dimensions of the DQI. Similarly to the RCA and ‘kappa’ calculations, these results quantify how consistently the distribution of ordinal magnitudes was detected between at least two coding sessions. Yet unlike RCA and ‘kappa’, ‘rho’ and ‘alpha’ generate correlation coefficients to show how the magnitudes correlate in the dataset between coding sessions. As with the decimal-percentages, a correlation of ≥ 0.70 (Urdan 2017) indicates that the ordinal weight distributions were consistent between coding sessions. This means that the ordinal magnitudes were being detected at consistent rates, once again showing that the item constructs were being understood and detected consistently. The project included these statistics to maintain methodological consistency with the original research; however, their use was limited, as explained in the ‘Results and Discussion’ Chapter.
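A comparable sketch for these two ordinal statistics might look as follows; the session vectors are again fabricated, and the standardized alpha is computed from the standard formula based on the mean inter-session correlation (here with k = 2 coding sessions treated as the “raters”), which may differ in detail from the routine used in the original research.

import numpy as np
from scipy.stats import spearmanr

# Hypothetical ordinal codes for one index item from two coding sessions
session_1 = np.array([0, 1, 2, 2, 1, 0, 2, 1, 1, 2])
session_2 = np.array([0, 1, 2, 1, 1, 0, 2, 1, 2, 2])

# Spearman's rho: rank correlation of the two sessions' ordinal codes
rho, p_value = spearmanr(session_1, session_2)

# Standardized alpha from the mean inter-session correlation r_bar,
# with k = 2 raters (coding sessions): alpha = k*r / (1 + (k-1)*r)
k, r_bar = 2, rho
alpha = (k * r_bar) / (1 + (k - 1) * r_bar)

print(f"rho = {rho:.3f} (p = {p_value:.3f}), standardized alpha = {alpha:.3f}")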
3.3.3.2. Index Item Correlation Calculations
Calculating the significance of unidimensionality is necessary for item correlation analysis when working with an index for content analysis. The concept of unidimensionality concerns determining the correlation between a psychometric instrument (e.g., an index administered to an individual as part of a psychological assessment, or the application of an index in content analysis) and the overarching construct that the instrument is intended to measure (Steenbergen 2000; Steenbergen et al. 2003; Ziegler and Hagemann 2015). For example, say one was developing an index to measure the magnitude of one’s depression using two items that measured sleep quality and sugar intake. Calculating the unidimensionality of this index’s items would show how those two items correlate to the concept of depression. If the items show correlation to the overarching concept being measured, then the index’s items are said to have unidimensionality, meaning the individual items in the instrument are appropriate proxy measures for the overarching concept being quantified.
This unidimensionality calculation determines the correlation using a variety of similarity coefficients, where correlation values between -1.0 and 1.0 are returned. The coefficients are usually presented in a table to show how each item in an instrument correlates to the other items. These results are interpreted against the theoretical underpinnings that were used to justify why the instrument item should have been included to measure an overarching construct. If the correlations are considered “strong,” then the item could be said to have limited latency. This means that the item construct itself does not contain idiosyncrasies which could generate wide-ranging errors as that item measures its construct (Steenbergen 2000; Ziegler and Hagemann 2015). For example, if the item construct of ‘Respect’ had used a magnitude scale of “1”, “2”, and “3” to quantify respectful language in a speech act, a strong correlation result should also indicate that there is limited variance in magnitudes (that a speech act with a magnitude of “1” should reasonably be detected at that magnitude if the speech act was quantified using the same scale an infinite number of times) (Ziegler and Hagemann 2015). When an item has limited latency, the item itself could be said to be a valid construct for measuring an overarching construct.
Strong correlations (≥ 0.60) across all items in a results table therefore indicate that the instrument’s items have “internal consistency,” meaning the correlations are strong due to having strong associations with the overarching construct being measured (Steenbergen 2000). Strong correlations in the results also show that items have an “external consistency,” meaning that even if the scales used to quantify each item differ between items, the items should not correlate with each other individually, but only through their shared correlation with the overarching construct being measured. Seeing these consistencies in the calculation results reaffirms that the instrument is valid for measuring the overarching concept in question, even if it were applied an infinite number of times with other population samples (Urdan 2017).
Though internal and external consistency are necessary for the index’s items to bolster surety, item correlations should also not be expected to be perfect (r = ±1.0) between items. In other words, while it is expected that the correlation calculations will show that the same item has perfect correlation with itself (as it should, since the inputs for that correlation calculation are the same), there should not be perfect correlation between two different items (e.g., between ‘Spatial Precision’ and ‘Constructive Politics’). If correlations were perfect between two seemingly unrelated item constructs, this would indicate that the unrelated items could be measuring the same construct, or that the items are being interpreted by the rater as being the same thing (i.e., that the communicative elements used to describe spatial precision are one and the same as those used to describe constructive politics). As such, during the index item correlation validity calculations, a factor analysis was run to ensure that the index items had correlations that were statistically distinguishable from each other, to show that items with strong correlations were not strong due to possibly measuring the same construct (Bhattacherjee 2012; Montello and Sutton 2013). This is known as calculating the convergent validity of the index’s items, and the results are discussed briefly in the following Chapter.
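As a rough sketch of such a convergent validity check, one could fit a single-factor model to the per-comment item scores and inspect the loadings; the score matrix below is fabricated at random purely to make the fragment runnable, and scikit-learn’s FactorAnalysis is only a stand-in for whichever factor analysis routine a given project employs.

import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical per-comment ordinal scores for the six modified-DQI items
# (randomly generated here; real item scores would come from the coding)
rng = np.random.default_rng(0)
scores = rng.integers(0, 4, size=(151, 6)).astype(float)

# Fit a one-factor model: if the items share unidimensionality, every
# item should load meaningfully on the single latent factor, while no
# two different items should be effectively interchangeable.
fa = FactorAnalysis(n_components=1, random_state=0)
fa.fit(scores)

items = ["Justification", "Content", "Respect", "Counterarguments",
         "Politics", "Spatial"]
for item, loading in zip(items, fa.components_[0]):
    print(f"{item:>16}: loading = {loading:+.3f}")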
For the project, the unidimensionality item correlation calculation used Steenbergen et al.’s (2003, 39-41) method of polychoric correlation coefficients. This calculation used the discourse quality values, determined per index item, as presented in the table in ‘Appendix B.’ Polychoric correlation coefficients were used to ensure that this project’s methods closely replicated those in the original research. This way, the potential for results to be interpreted as inconsistent with the original research due to methodological error could be reduced. The polychoric correlation coefficient was considered appropriate given the index’s ordinal-scaled variables. When working with ordinal variables, the categorization can attenuate bivariate correlations and violate the assumption of a bivariate normal distribution; the polychoric correlation coefficient provides a generalized estimate that accounts for this attenuation and violation of bivariate normality (Rigdon 2010). Indeed, given the potential for natural speech acts to exhibit a non-normal distribution of ordinal magnitudes from the index, this calculation is still appropriate even in this project’s context.
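Because polychoric correlation is absent from most general-purpose statistics libraries, the sketch below reconstructs a basic two-step maximum-likelihood estimate directly with SciPy: thresholds are derived from the marginal proportions of each ordinal variable, and the latent correlation is then found by maximizing the bivariate-normal likelihood of the observed contingency table. This is an illustrative reconstruction of the technique under those standard assumptions, not the exact routine used in the original research, and the example vectors are fabricated.

import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.optimize import minimize_scalar

def polychoric(x, y):
    """Two-step ML polychoric correlation for two ordinal code vectors."""
    xs, ys = np.unique(x), np.unique(y)
    # Contingency table of observed code pairs
    table = np.array([[np.sum((x == a) & (y == b)) for b in ys] for a in xs])
    n = table.sum()
    # Step 1: thresholds from marginal cumulative proportions, with
    # +/-10 standing in for +/-infinity on the latent normal scale
    a = norm.ppf(np.cumsum(table.sum(axis=1))[:-1] / n)
    b = norm.ppf(np.cumsum(table.sum(axis=0))[:-1] / n)
    a = np.concatenate([[-10.0], a, [10.0]])
    b = np.concatenate([[-10.0], b, [10.0]])
    def neg_loglik(rho):
        cov = [[1.0, rho], [rho, 1.0]]
        cdf = lambda u, v: multivariate_normal.cdf([u, v], mean=[0, 0], cov=cov)
        ll = 0.0
        for i in range(len(a) - 1):
            for j in range(len(b) - 1):
                # Probability mass of the latent bivariate normal in cell (i, j)
                p = (cdf(a[i + 1], b[j + 1]) - cdf(a[i], b[j + 1])
                     - cdf(a[i + 1], b[j]) + cdf(a[i], b[j]))
                ll += table[i, j] * np.log(max(p, 1e-12))
        return -ll
    # Step 2: maximize the likelihood over rho in (-1, 1)
    return minimize_scalar(neg_loglik, bounds=(-0.999, 0.999), method="bounded").x

# Example with two hypothetical ordinal DQI item columns
x = np.array([0, 1, 2, 2, 1, 0, 2, 1, 1, 2, 0, 2, 1, 2, 0])
y = np.array([0, 1, 2, 1, 1, 0, 2, 2, 1, 2, 0, 2, 0, 2, 1])
print(f"polychoric rho = {polychoric(x, y):.3f}")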
This task of item correlation calculation may seem to take this spatial science project beyond its methodological realm. Yet the task is necessary since it quantifies how a spatial item could be integrated as part of a content analysis methodology. Furthermore, quantification provides the degree of statistical confidence with which the concept of spatial precision can be considered just as valid a communicative element as any of the other elements used in deliberative discourse. The unidimensionality calculation essentially provides the quantification needed to show how one’s discourse quality could be influenced by using varying magnitudes of spatial precision, based on how strongly spatial precision correlates with the other items in the DQI.
3.3.4. Calculating and Comparing Discourse Quality Indices
Calculating discourse quality assessed how the quality of discourse in public comments changed when precise spatial landscape values were used. For this project, calculating discourse quality meant examining the values for changes based on the magnitude of spatial precision detected in the narratives, using the ordinal weights for the ‘Spatial Precision’ item.
Discourse quality is determined by adding the ordinal magnitude weights detected, as shown in Equation 3.1. Note that the order is not significant, but it is presented as such since this was the order in which the items were introduced in the original research:
(Justification) + (Content) + (Respect) + (Counterarguments) + (Politics) + (Spatial) (3.1)
Substituting the magnitude weights presented in Figure 3.4 for the above variables results in the following calculation:
1 + 1 + 1 + 0 + 0 + 1 = 4
This discourse quality value for the comment is interpreted as “low” discourse quality. However, it is more accurate to interpret discourse quality comparatively against other coded comments (e.g. less than or greater than the discourse mean of the whole dataset or of a subgroup of comments).
The DQI codes calculated per comment were downloaded into Microsoft Excel to create a discourse quality table of values, where each row was a comment and each column was the calculated discourse quality per nominal item, with an additional column showing the total discourse quality value for that comment case. Discourse quality values were further aggregated across all the comments for a dataset total. With this dataset total, descriptive statistics were calculated, including the median, standard deviation, and the minimum and maximum discourse quality values. These statistics were also calculated as discourse values were reclassed based on certain comment metadata attributes, to look at how those attributes could affect changes in discourse quality, e.g. whether discourse quality sits at certain values more often in comments that appear to contain original thoughts versus form letter content. Calculating descriptive statistics for the entire dataset and the reclassed datasets provided a means to describe in greater-than or less-than terms how discourse values changed under certain parameters, since with ordinal rankings the degree to which change occurred cannot be determined.
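As a sketch of this tabulation step, the fragment below sums the six item weights into a per-comment discourse quality value and computes subgroup descriptive statistics with pandas rather than Excel; the column names and the metadata attribute are hypothetical stand-ins for the actual spreadsheet layout.

import pandas as pd

# Hypothetical coded comments: one row per comment, one column per item
df = pd.DataFrame({
    "justification":    [1, 2, 0, 3, 1],
    "content":          [1, 1, 0, 2, 1],
    "respect":          [1, 1, 1, 2, 1],
    "counterarguments": [0, 2, 0, 3, 0],
    "politics":         [0, 1, 0, 2, 0],
    "spatial":          [1, 2, 0, 2, 1],
    "comment_type":     ["original", "original", "original",
                         "form_letter", "form_letter"],
})

item_cols = ["justification", "content", "respect",
             "counterarguments", "politics", "spatial"]
df["dqi"] = df[item_cols].sum(axis=1)  # Equation 3.1, per comment

# Descriptive statistics for the whole dataset and per metadata class
print(df["dqi"].agg(["mean", "median", "std", "min", "max"]))
print(df.groupby("comment_type")["dqi"].agg(
    ["mean", "median", "std", "min", "max"]))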
Basic comparisons of discourse quality values reclassed under different metadata
parameters helped to understand how those parameters could influence the magnitudes of spatial
precision as well. These comparisons provided a quick means to determine if more calculations
were needed to account for which comment parameters exerted more influence over a spatial
precision value than others. Should certain metadata parameters, or all of them, have appeared to show potential to influence discourse quality, the project would have proceeded by using other
statistical t-test comparisons (see Urdan 2017) to isolate which parameters had the highest
correlations. From there, the focus would have been on determining how the magnitude of spatial
precision affected overall discourse quality under these parameter influences.
After evaluating how discourse quality values could be influenced by a comment’s own characteristics, discourse quality values for the dataset were compared with and without the spatial precision item present. This was necessary to ascertain whether the difference in discourse quality values with the spatial item present was significant, which was determined by calculating whether the difference in the mean discourse value for the dataset with and without the ‘Spatial Precision’ item was statistically significant.
Significance in this context means that the difference observed in the means was not the result of random chance, but of a “treatment” condition that could account for the changes in values before and after the treatment (Albright 2018; Kent State University Libraries 2018; PSU 2018). The calculation used for determining significance was the Paired Samples t test run in SPSS. With this calculation, the difference of the mean discourse quality values from the pre- and post-treatments is divided by the “standard error of the difference between the means” (Urdan 2017, 100). This produces a t statistic and a probability coefficient, both used to determine the likelihood that the difference in the discourse quality means is not due to random chance. With this result, the comparison between the discourse quality values with spatial precision detected and the discourse values where spatial precision was not measured can be interpreted in terms of how significantly spatial precision in landscape valuation narratives influences discourse quality.
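The original analysis ran the Paired Samples t test in SPSS; an equivalent sketch with SciPy is shown below, where the two fabricated vectors stand in for the per-comment discourse quality values with and without the ‘Spatial Precision’ item included in the sum.

import numpy as np
from scipy.stats import ttest_rel

# Hypothetical per-comment discourse quality, paired by comment:
# once with the 'Spatial Precision' weight included, once without
dqi_with_spatial = np.array([4, 6, 5, 7, 6, 5, 8, 6, 5, 7])
spatial_weights = np.array([1, 2, 1, 2, 2, 0, 2, 1, 1, 2])
dqi_without_spatial = dqi_with_spatial - spatial_weights

# Paired samples t test: mean difference over its standard error
t_stat, p_value = ttest_rel(dqi_with_spatial, dqi_without_spatial)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")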
Looking at the means between the paired samples was supplemented with an exploration of the standard deviations (shown as “SD” in the tables of the ‘Results and Discussion’ Chapter) between the spatial precision classes. This direct comparison of standard deviations showed how narrow discourse quality ranges could affect the magnitude of discourse quality change from one level of spatial precision to the next. Looking at the standard deviations between dataset classes is often preferred since the calculation considers both the dataset’s mean and variance, showing essentially the potential range that a measured value could exhibit even if the memberships per class differ (Urdan 2017). This means that when comparing discourse quality changes between classes, the standard deviations can help explain how consistently added spatial precision will change discourse quality.
Chapter 4 Results and Discussion
The results presented here focus on the discourse quality values calculated at the scale of the entire public comment dataset. By calculating discourse quality for the whole dataset, individual comment quality values can be reclassed into different types of subgroupings based on comment parameters. These subgroupings allow for comparisons between parameters, which showed whether discourse quality changed under certain comment conditions. Focusing the results at the scale of the dataset was necessary because measures that use ordinal scales need a secondary value to compare against a first, so that changes in values are appropriate to the scale of measure. In this case, having a discourse value for the whole dataset to compare against values reclassed into subgroupings showed how discourse quality values in subgroups changed, in either greater-than or less-than terms, relative to the dataset as a whole. Subsequently, determining how spatial precision in landscape valuation narratives changed discourse quality was also focused at the scale of the entire comment dataset.
What these results do not address is how precise spatial narratives figure into discourse ethics. Based on the literature review, the project’s analysis was conducted on the
assumption that human speech contains language elements needed to describe how objects
occupy or interact with a space (Bolstad 2016; Elwood 2006; NRC 2006). Thus, the results from
the content analysis reflected on how the modified DQI quantified qualitative data to create a
discourse quality measure. As these results are also being used to show if the modified DQI can
be used across different study areas with other types of deliberation, the discussion context
regarding these calculations should be limited to the sampled public comments and should not be
extrapolated to generalize landscape valuations for the wider population.
This chapter explores the analysis results in three parts. The first part looks at the
calculated indices for the dataset. The second overviews how the statistical reliability and
correlation calculations validate the index as a measuring instrument. The third part reflects on
what these results mean in the context of the research questions, namely whether the modified DQI was able to show that spatial precision changed discourse quality, and to what extent.
4.1. Measured Discourse Quality Values
With the scale of the content analysis focused on an individual’s speech act, discourse
quality indices were calculated for each of the 151 sampled public comments. For each
comment, the DQI’s six nominal items’ magnitude values were added together. This summed
value was the measured discourse quality for that individual public comment. The range of
discourse quality across the entire sample of public comments was from one to 13.
To understand how discourse quality was influenced by precision in spatial narratives,
the individual comments from the dataset were regrouped into different comment parameter
classes. With each class, new mean discourse quality values were calculated to show the
discourse quality for a class of comment parameter. Classes included where public comments
originated, the type of public comment submitted, and the level of precision used to spatially
locate landscape valuations mentioned in a comment. Table 4.1 presents the quality value
measured for the entire dataset. Figure 4.1 shows the distribution of discourse quality values in a
histogram.
Table 4.1. Deliberative discourse quality, for sampled population
Index Mean Median SD Min Max
6-component DQI (N = 151) 5.497 5.000 1.655 1.000 13.000
Source: Results calculated with data from USFS 2018b.
Figure 4.1. Histogram for the dataset’s distribution of discourse quality values. Source: Data
from USFS 2018b.
These quality values for the dataset showed a normal distribution. This distribution was expected, given the assumption that a public participating in deliberative discourse would be unlikely to exercise communicative elements resulting in discourse quality greater than or equal to that of a seasoned deliberator most of the time (Elwood 2006; Elwood and Leszczynski 2013; Jaramillo and Steiner 2014). In turn, a normal distribution also makes sense since it is unlikely that a public would consistently exhibit less-than-desirable discourse quality when communicating their landscape values, given the assumption that a public wants to clearly articulate its landscape values so that those values are understood and appreciated (Jaramillo and Steiner 2014; Mitchell and Elwood 2012).
Overall for the sample population, the deliberative discourse quality exhibited would be considered, on its own, to be moderate. The designation of moderate discourse quality is subjectively based on comparing the mean discourse quality measured at 5.497 to the maximum measure of 14 possible with this modified DQI, which would indicate that the speech act analyzed exhibited the greatest form of deliberative discourse ability that could be measured. Despite the moderate quality, impassioned contributors were still able to generally articulate clearly what policy for the case study area should or should not be implemented. A sampling of these arguments, with differing discourse quality, is presented in Figure 4.2.
It is time for the nations and people of Earth to set aside some significant environmental
reserves where people may visit but they may not remove, alter, or denigrate the
environment. Such reserves provide a biological plant and animal reservoir against
species loss and a safe place for nature to evolve without the rapacious human destruction
witnessed around the globe.
Comment A: Discourse Quality = 1; Type = original; From = unknown
I am writing today in an effort to highlight the importance of motorized access within the
National Forrest service, specifically Chugach.
Motorized (Snowmobile) access here in Alaska provides many of us the opportunity
to visit otherwise untouchable places of our beautiful State. The exploration not only
provides enjoyment for many of us, but it also helps to raise awareness and create
appreciation for our resources and parks.
Disregarding the stigmas some use to associate snowmobiles and the environment...
Much of the rideable or skiable terrain would not be accessible without the use of
snowmobiles. The Park Service Professionals go through great lengths to ensure riding
areas aren't opened up prematurely and the terrain is as best protected as possible. To add
to that, the level of education about "best practices" and safety being passed through
organizations such as Chugach National Forrest Avalance [sic] Information Center
(CNFAIC) and social media helps create a culture where we police ourselves. Not every
person does the right thing all the time on either side of the highway, but I can assure you
most of us go out of our way to educate and clean up after one another.
I hope you consider this any other opinions deeply and recognize the importance
motorized access plays in both education, safety, and simply enjoyment within the NF.
Comment B: Discourse Quality = 5; Type = original; From = individual
My name is H**** ****** and I am writing in support of expanding opportunities for
Cross Country skiing in the Chugach National Park, in particular Turnagain Pass! A few
weekends ago Girdwood Nordic volunteered to groom at small loop near the Center
Ridge parking lot and it was magical! As we all know, our winters are getting warmer
and warmer which means if we want to ski, we need to be looking at venues at higher
elevations. We have many beautiful trails systems in Alaska but unfortunately, many of
them are at sea level making them inoperable and unusable for Alaskans!
I know that I'm biased but Cross Country skiing is a sport that spans ages, gender, and
economic class. The ability to XC in the winter makes people happy! On a personal note,
it's the reason I moved here! When the loop was groomed at Turnagain Pass it was
awesome to see all kinds of people with smiles ear to ear, enjoying the grooming and the
opportunity to recreate in a low-impact way in a new place.
There is plenty of "wild land" and back country skiing in Alaska. I think we
(Alaskans!) would really benefit from having an at-elevation option to Cross Country ski
South of Anchorage. …
Comment C: Discourse Quality = 9; Type = original; From = individual
Figure 4.2. Example arguments that were measured as having low (Comment A), medium (Comment B), and high (Comment C) discourse quality. Source: Data from USFS 2018b.
4.1.1. Value Based on Comment Parameters
Beyond looking only at the modified DQI’s discourse items for influencing discourse quality, changes in values seemed to be influenced by certain comment parameters. For instance, quality appeared to change between comment types (i.e., comments thought to be composed solely of original thoughts, made from pieces of a form letter, or consisting of a form letter unedited by the submitter). Comments classed as ‘Appears Original’ were measured as having less discourse quality than comments containing form letter content, whether edited by a submitter or not. However, original content comments showed the greatest range in discourse quality. This observation seems plausible given that original thoughts would likely reflect a range of communicative styles, since these comments were not constrained by the contents of a form letter (Elwood 2006).
Meanwhile, comments containing pieces of, or left unedited from, form letters were measured as having overall greater discourse quality than comments solely with original content. The discourse quality range, though, was limited (i.e., anywhere between values of four and nine, whereas the original content value range was between one and 13). This limited range of values seems plausible since form letter comments would be constrained to show similar content across multiple submitters; as such, to maintain coding consistency, comments with form letters were coded as similarly as feasible, regardless of their submitter. These reclassed discourse quality results are presented in Table 4.2. It is important to note that statistical correlation calculations between comment parameters and a comment’s discourse quality value were not performed, since the project was focused on how spatial precision changes elements of discourse quality, not on how parameters such as these affect discourse quality. Furthermore, changes in discourse quality between the classes were also measured as minimal, which signified that the comment type parameter appeared to have minimal if any influence on discourse quality, so correlation calculations were deemed unnecessary.
Table 4.2. Deliberative discourse quality, for population sample, per comment type

Comment Type (a)            N    Mean    Median   SD      Min     Max
Appeared Original (b)       63   5.238   5.000    2.241   1.000   13.000
Form Letter, Edited (b)     30   5.833   6.000    1.147   4.000   9.000
Form Letter, Unedited (b)   58   5.621   6.000    0.671   5.000   7.000

Source: Results calculated with data from USFS 2018b.
(a) From 6-component DQI.
(b) Determined content was form letter based on comment likeness being repeated within sampled population.
Another comment parameter examined for influence on quality values was whether comments originated from submitters living in the state of Alaska (AK), where the CNF study area is located, versus those living outside the state. Here, the largest range in quality values was measured in comments submitted by those claiming to live in the state. The overall discourse quality was also greater for comments originating from within the state (at 5.579); however, the difference in quality between in-state and out-of-state comments appeared minimal (with the out-of-state value at 5.478). These results are presented in Table 4.3.
Table 4.3. Deliberative discourse quality, for population sample, per comment origination

Comment Origination (a)   N    Mean    Median   SD      Min     Max
Within AK (b)             57   5.579   5.000    2.034   1.000   13.000
Beyond AK (b)(c)          92   5.478   6.000    1.279   1.000   9.000
Unknown (d)               2    4.500   5.000    0.707   4.000   5.000

Source: Results calculated with data from USFS 2018b.
(a) From 6-component DQI.
(b) Determined from comment metadata or explicit statement on where submission was from.
(c) Including international submissions (N = 1).
(d) Determined if comment metadata contained no information or no explicit statement in comment.
These results indicate that a public comment’s parameters appeared to have minimal influence on overall discourse quality. This was an important finding since it helped show that the measured discourse quality values do not contain a significant amount of data noise, i.e. that the discourse quality changes detected were not due to stronger influences coming from factors unrelated to the discourse quality items used by the public. Thus, when looking at how spatial precision changed discourse quality, the project observed in this context that quality value changes were not likely attributable to a comment’s parameters.
4.1.2. Value Based on Spatial Precision
Precise spatial narratives were detected throughout all comment types and from all points of origination. With spatial precision detected throughout all comment parameters, this raised the question as to whether comment parameters would also affect values per magnitude of spatial precision used in the modified DQI. If so, this could introduce additional data noise. For example, if comment origination were shown to affect how often the greatest magnitude of spatial precision was detected, then discourse quality value changes would be based not just on the presence of spatial precision but also on comment origination. This investigation then helped to show how comment parameters influenced the magnitude of spatial precision detected in the public comments.
The discourse quality values per spatial precision magnitude, per comment type, are presented in Table 4.4. These results show how spatial precision was detected predominantly at the greatest magnitude across all comment types. The results also show that discourse quality change was predominantly measured in comments considered to have original content, while comments containing form letter content had the greatest spatial precision magnitude detected. These results suggest that comment type could correlate with the amount of spatial precision present in certain comment types.
Table 4.4. Deliberative discourse quality, for population sample, per comment type and spatial precision

Comment Type (b)        Spatial Precision (a)       N      Mean    Median   SD      Min     Max
Appeared Original       Beyond Study Area           14     3.143   3.000    1.610   1.000   5.000
                        Study Area Only             8      4.875   5.000    1.458   3.000   7.000
                        Feature Within Study Area   41     6.000   6.000    2.098   3.000   13.000
Form Letter, Edited     Beyond Study Area           --(c)  --      --       --      --      --
                        Study Area Only             --     --      --       --      --      --
                        Feature Within Study Area   30     5.833   6.000    1.147   4.000   9.000
Form Letter, Unedited   Beyond Study Area           --     --      --       --      --      --
                        Study Area Only             --     --      --       --      --      --
                        Feature Within Study Area   58     5.621   6.000    0.671   5.000   7.000

Source: Results calculated with data from USFS 2018b.
(a) From 6-component DQI.
(b) Determined content was form letter based on comment likeness being repeated within population sample.
(c) No comments detected at this spatial precision.
Yet this type of correlation would be unlikely, because the design of form letters would almost always guarantee a correlation between the greatest spatial precision magnitude and comment types using form letters. Thus, the correlation between spatial precision and comment type was a false-positive correlation, since the comment types using form letters are biased in the amount of variation in spatial precision magnitudes that could be detected (essentially, spatial precision is all but certain for comments using form letters, assuming the forms were concocted to have that magnitude of spatial precision). This bias, however, showed no effect on discourse quality overall, for instance, when form letter types were removed and discourse quality was recalculated using the remaining classed comments. Taken together, though the analysis may lack variation in spatial precision magnitudes between comment types, the influence of type on spatial precision magnitudes appeared minimal, enough to state that the spatial precision a person used in their deliberative discourse was not a condition of comment type but a deliberate, communicative choice.
This finding for spatial precision and comment type held for comment origination as well, as presented in Table 4.5. Precise spatial narratives from either origination made up greater than half of the comments submitted for this class. Though with more variation in spatial precision magnitudes, the results showed the distributions between originations to be quite similar. The results then suggest that no matter where a comment came from, the likelihood of it exhibiting the greatest spatial precision was approximately the same. This meant comment origination did not appear to exert enough data noise to suggest that where a public comment came from would likely dictate the spatial precision to be detected. For the analysis, this finding solidified that spatial precision was not based on where a comment came from.
Table 4.5. Deliberative discourse quality, for population sample, per comment origination and spatial precision

Comment Origination (b)(d)   Spatial Precision (a)       N     Mean    Median   SD      Min     Max
Within AK                    Beyond Study Area           4     3.750   4.000    2.217   1.000   6.000
                             Study Area Only             6     4.667   4.500    1.633   3.000   7.000
                             Feature Within Study Area   47    5.851   5.000    1.989   3.000   13.000
Beyond AK (c)                Beyond Study Area           9     2.778   3.000    1.481   1.000   5.000
                             Study Area Only             2     5.500   5.500    0.707   5.000   6.000
                             Feature Within Study Area   81    5.778   6.000    0.851   4.000   9.000

Source: Results calculated with data from USFS 2018b.
(a) From 6-component DQI.
(b) Determined from comment metadata or explicit statement on where submission was from.
(c) Including international submissions (N = 1).
(d) Not including comments with Unknown origination (N = 2).
4.1.3. Value Based on Case Study Area Geographic Features
The above results suggest that spatial precision in narratives was not only something the public could confidently articulate, but also that their landscape valuations were scale-dependent. This meant the public was likely to associate their landscape values with precise spatial locations during deliberative discourse activities. The mentioned locations themselves, however, relative to the CNF, did not suggest significant patterning upon geovisualization. Whether this lack of patterning was significant was not explored by the project, since the focus was on the use of narrative spatial precision in discourse, not the location of the geographic features themselves. The results for the top ten mentioned locations are in Table 4.6. The geovisualization of the geographic distribution of quality indices, along with the number of times a location was mentioned, is mapped in Figure 4.3.
Table 4.6. Deliberative discourse quality, per top-ten precise locations mentioned, with counts of comment type and origination

                                 Statistics (a)                                  Type Count (N) (b)(c)(d)           Origination Count (N) (e)(f)(g)(h)
Location                         N     Mean    Median   SD      Quality Range    Original  FL-Edited  FL-Unedited   AK    (AK)
Wilderness Study Area            108   5.750   6.000    1.333   3.000-13.000     22        28         58            28    79
Knight Island                    59    6.220   6.000    1.314   4.000-13.000     8         21         30            12    46
Lake Nellie Juan                 55    6.127   6.000    0.818   4.000-9.000      4         21         30            8     46
Glacier Island                   55    6.291   6.000    1.197   4.000-13.000     4         21         30            8     46
Columbia Glacier                 54    6.278   6.000    1.250   4.000-13.000     5         20         29            9     44
Port Wells                       50    6.140   6.000    0.857   4.000-9.000      4         21         25            4     45
mainland Knight Island passage   48    6.167   6.000    0.859   4.000-9.000      2         21         25            2     45
Esther Island                    48    6.167   6.000    0.859   4.000-9.000      2         21         25            2     45
Perry Island                     48    6.167   6.000    0.859   4.000-9.000      2         21         25            2     45
Culross Island                   46    6.261   6.000    0.743   4.000-9.000      0         21         25            0     45

Source: Results calculated with data from USFS 2018b.
(a) From 6-component DQI.
(b) Determined content was form letter based on comment likeness being repeated within population sample.
(c) FL-Edited = form letters whose content was revised prior to submission.
(d) FL-Unedited = form letters whose content was not revised prior to submission.
(e) Determined from comment metadata or explicit statement on where submission was from.
(f) Not including comments with Unknown origination (N = 2).
(g) Including international submissions (N = 1).
(h) (AK) = comments from out-of-state.
Figure 4.3. Geovisualization of discourse quality and frequencies of the top ten precise spatial
locations mentioned in the public comment dataset. Source: Results calculated using data from
USFS 2018b.
4.1.4. Statistical Significance of Discourse Quality Values
All of the above calculated indices must be interpreted in the context of the statistical calculations used to ensure confident rater reliability and construct validation by means of correlation coefficients. These calculations establish that the DQI was applied as intended, and furthermore that the dataset was appropriate for the modified DQI to be applied. These calculations essentially showed whether the discourse quality values measured with the modified DQI were measuring the construct of discourse quality. Thus, since the calculations shown below indicate that the index’s items, including the added spatial item, were appropriate for measuring discourse quality in the public comments, the calculated discourse quality values should confidently portray how precise spatial locations affect discourse quality (Steenbergen et al. 2003).
4.2. Statistics for the Index’s Measurement Reliability and Validity
To maintain methodological consistency, the modified DQI used the same index measurement reliability and validity calculations used with the original DQI. The first calculation quantified how consistently the modified DQI’s items were applied to the qualitative data across the three coding sessions performed, generating what are known as rater reliability statistics. Calculating rater reliability statistics is meant to show that the modified DQI’s item constructs were applied to a speech act’s content in a deliberate and thoughtful manner. Though all of the index’s nominal items were always considered detected in a speech act, this statistic was important for verifying that the items’ magnitudes were not being detected at random across the three content analysis sessions. Consistency in this sense refers to how accurately the rater was coding the public comments based on the construct being detected: across coding sessions, the rater was consistently detecting an item such as ‘Respect (for groups)’ and reasonably applying a magnitude to measure the intensity of that item.
The second calculation showed the index’s item correlations. Item correlations confirm that each item measured the discourse element construct it was meant to. This means that an index item should not be measuring another item’s construct (based on how similar two measures’ distributions are in the dataset). In turn, to show the modified DQI was measuring the concept of discourse quality, this calculation was also meant to show that the index’s items shared unidimensionality, or that the items share enough correlation to show that, combined in the index, they were measuring discourse quality. Thus, calculating item correlation also verified whether the added ‘Spatial Precision’ item, and the construct of spatial precision in general, could be considered a deliberative discourse item that can be detected and quantified in discourse. Essentially, this second calculation was for validating that the discourse quality values measured could be considered accurate measurements of the dataset’s discourse quality.
4.2.1. Rater Reliability
The four columns of Table 4.7 show two rater reliability statistics. The first is the mean rater reliability score, discussed in the ‘Methodology’ Chapter as the ratio of coding agreement, or RCA. The second is the Cohen’s κ ‘kappa’ score in the latter two columns, showing the measure of probability that the codes were not applied randomly during the coding sessions. These statistics were aggregated as means for each of the modified DQI’s nominal items across three coding sessions. They are mean results because both the reliability and kappa scores were originally calculated for each item weight contained per item. Reliability and kappa scores were calculated for all comments in the dataset (n = 151). The first columns for RCA and kappa were the results based on the rate of agreement between the two first-cycle content analysis coding sessions. The second columns for RCA and kappa show the rate of agreement between the second, first-cycle session and a third, second-cycle recoding session. This third recoding, however, was focused only on rectifying inconsistencies in coding with the ‘Level of Justification’ and ‘Content of Justification’ items. A third recoding session was needed to solidify a process for detecting and identifying a magnitude for these two index items, so that the process would be as consistent as it was with the other index items.
Table 4.7. Mean rater reliability and Cohen’s kappa scores, per index item

                               RCA, Between       RCA, Between       κ, Between         κ, Between
Index Item                     1st & 2nd Coding   2nd & 3rd Coding   1st & 2nd Coding   2nd & 3rd Coding
Level of Justification         0.798              0.814              0.460              0.499
Content of Justification       0.744              0.872              0.351              0.605
Respect (for groups)           0.943              0.943              0.549              0.549
Respect for Counterarguments   0.955              0.955              0.454              0.454
Constructive Politics          0.940              0.940              0.866              0.866
Spatial Precision              0.982              0.982              0.886              0.886

Source: Results calculated with data from USFS 2018b.
The RCA scores show that the rater’s applied magnitudes agreed (or matched) across the three coding sessions at least 70% of the time. The kappa scores indicate that the probability of the rater having not applied the codes to the comments at random across the coding sessions was, on average, at least 50%. In consultation with the reliability scores found in the original DQI research, the item constructs’ magnitudes here could be interpreted as having been identified and weighted consistently across the coding sessions. The results then suggest the item constructs were built on universally understood connotations as to how those items could be consistently detected in deliberative contexts. Should these scores have been lower, that could suggest the DQI’s item constructs were either too broad for identifying specific instances of a construct in content, or so narrow that a construct could only be detected in very limited instances in content (Urdan 2017).
Despite the scores indicating a high level of confidence in reliably and consistently using the modified DQI, these results show areas where rater agreement did not meet the most ideal conditions for even higher rater reliability scores. As a result, this instigated a third recoding session for only a few of the index’s items, as described before.
For instance, rater agreement for the ‘Level of Justification’ and ‘Content of Justification’ items was consistently lower than for the other items. This finding was most likely due to how these constructs’ demarcations were drawn between what was considered an “illustrative” justification versus a “complete inference.” Illustrative justifications, for example, broadly defined the involved parameters of an issue and had loosely linked cause-and-effect statements, e.g., “If you don’t protect the CNF, more trash will show up in our streams!”. A complete inference, on the other hand, defined the involved parameters precisely and left little doubt as to the cause-and-effect linkages, e.g., “If you placed restrictions on the size of camping groups at Rock Campground, lesser-sized groups are more likely to pick up trash after themselves, which in turn means that trash is less likely to end up in our streams due to storm runoff, wind, and animals picking it up and carrying it away.”
While these were clearly defined, both justifications were difficult to code consistently
since the original DQI’s parameters stated both implicit and explicit justifications were
acceptable to detect and code (Steenbergen et al. 2003, 27-30). This coding parameter made it
difficult to decipher whether illustrative justification content could pass for an implicit justification, or whether an illustration was essentially a failed explicit justification. An example of a comment using
illustration that could be interpreted either as implicit or explicit is shown in Figure 4.4.
…The Forest Service should not implement any plant [sic] that would allow residential
timber harvests, expanded motorized uses and manipulation of habitats, mining, and/or
helicopter-assisted skiing and hiking. Wilderness is a finite and ever-appreciating
resource in today's world, and the special qualities of Alaska's wild lands are something it
should hold in trust for future generations to enjoy, as I and many others have. The most
crucial elements that elevate it above other places in "the lower 48" are the very lack of
timber cutting, mining, helicopters and other motorized vehicles that this proposed plan
threatens to introduce to the Nellie Juan College Fiord WSA. …
Comment: Discourse Quality = 2; Type = form letter, edited; From = individual
Figure 4.4. Example of an argument using “illustration.” Source: Data from USFS 2018b.
Lower rater reliability scores here also reflect the difficulty of consistently applying item
weights when deciphering whether a comment referenced policy actions for a group's
self-interest or for the greater common good. Here too, the original DQI coding parameters
stated that both implicit and explicit references were eligible for coding. Speech acts under this
parameter could not be clearly delineated as to whether a policy should focus solely on group
interests, or whether focusing policy on group interests would benefit the common good. This is
illustrated in Figure 4.5 and Figure 4.6. Yet despite the lower agreement range for these two
items, the results still showed that the item constructs were appropriate for application to these
spatial narratives, even if the items were detected in either implicit or explicit forms.
Thank you for the work you are doing. I would like to see expanded snow grooming for
cross country skiing in "front country" areas. For many people who do not have the
ability or interest to travel into back country areas during the winter this would provide
an opertunity [sic] to engage with the forest during these months. This would be an
activity that would have minimum or no dexter acne [sic] with other user groups. I would
also like to see continued efforts to give people a chance to utilize the forest for low
impact none motorized recreation in general. The forest around Turnigan [sic] pass may
become one of the last places to ski in south central Alaska in the future and long-term
planning should take the potential for concentrated non-motorized winter recreation into
account.
Comment: Discourse Quality = 5; Type = original; From = individual
Figure 4.5. Example of an argument using “implicit group justification.” Source: Data from
USFS 2018b.
To Whomever ... my wife and I experienced our first AK visit this past summer, along w/
other lower-48 friends on a repeat visit. We sailed w/ Capt. Dean Rand (Discovery
Voyage) and witnessed the awe and beauty of Alaska's pristine wilderness.
It was disheartening to see HOW MUCH human traffic (and climate) is washing
away the awesomeness of Alaska's natural beauty.
We very much want to return to repeat the breathtaking experience created by the
Discovery Voyage crew ... but news of developing more tracts of wilderness ... or
eliminating appropriate barriers that would allow unchecked development ... is
disheartening. …
Comment: Discourse Quality = 3; Type = original; From = individual
Figure 4.6. Example of an argument using “explicit group justification.” All ‘…’ except at the
end of the comment are in the original. Source: Data from USFS 2018b.
The rater reliability statistics shown above focused on the index's nominal dimensions,
i.e., the reliability of coding between sessions per nominal item and per each nominal item's
ordinal magnitudes. For the project, judging that the modified DQI was applied consistently
using only the nominal calculations was adequate, since these calculations already provide
evidence, at the detailed scale of each ordinal magnitude, that the modified DQI was applied
consistently.
Such a focus on declaring reliability through nominal items was a departure from the
original DQI research, where reliability for the index's ordinal dimensions was also calculated.
For ordinal scales of measurement, reliability is measured by finding the correlation of
distributions between coding sessions using Spearman's 'rho' and Cronbach's 'alpha'. However,
as stated in the 'Methodology' Chapter, these statistics were used in a limited capacity for this
project. Their use was limited because knowing the correlation of ordinal magnitudes between
coding sessions does not enhance the findings already achieved with the RCA and 'kappa'
calculations. As such, the ordinal dimension calculations that were performed confirmed what
had already been established: since the distribution of nominal magnitudes between coding
sessions was consistent, the modified DQI was considered to have been applied to these public
comments as the index was intended for qualitative deliberative data.
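For reference, these limited ordinal-dimension checks can be sketched as follows. This is an illustrative outline with invented magnitudes, not the project's actual computation; Spearman's rho comes from scipy, while Cronbach's alpha, which has no standard scipy function, is computed from its definition.

import numpy as np
from scipy.stats import spearmanr

def cronbach_alpha(sessions):
    # sessions: 2-D array, rows = coding sessions, columns = comments.
    sessions = np.asarray(sessions, dtype=float)
    k = sessions.shape[0]
    item_var = sessions.var(axis=1, ddof=1).sum()
    total_var = sessions.sum(axis=0).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

session_1 = [2, 1, 0, 2, 2, 1, 0, 2, 1, 2]   # hypothetical magnitudes
session_2 = [2, 1, 0, 2, 1, 1, 0, 2, 1, 2]
rho, p = spearmanr(session_1, session_2)      # rank correlation between sessions
print(rho, p)
print(cronbach_alpha([session_1, session_2])) # internal consistency across sessions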
4.2.2. Index Item Correlation
To maintain methodological consistency with the original DQI, the index's item
correlation was calculated as part of validating that the modified DQI was not only measuring
discourse quality, but that the index's items, including the 'Spatial Precision' item, were
constructs relevant for quantifying qualitative spatial data. This calculation was essentially a
deeper dive into the index's unidimensionality, using the polychoric correlation coefficient to
quantify how each of the six items in the index correlates with the others. Table 4.8 presents the
coefficients calculated after the third recoding session, with correlation coefficients at or above
±0.500 italicized to highlight moderate correlations (Steenbergen et al. 2003; Urdan 2017). The
polychoric coefficient was also preferred because the interest is in correlation interactions across
the multiple ordinal items, not just between pairs of them as the Pearson calculation performs
(Urdan 2017).
Table 4.8. Polychoric correlation scores, for 6-component DQI
Nominal Item                          Correlated Item    Correlation Coefficient    p-value
Level of Justification (L) L 1.000 --
C -0.035 0.027
R -0.360 0.000
CA 0.432 0.000
P -0.089 0.002
S -0.096 0.006
Content of Justification (C) L -0.035 0.027
C 1.000 --
R 0.546 0.491
CA -0.328 0.009
P -0.096 0.000
S -0.477 0.025
Respect (for groups) (R) L -0.360 0.000
C 0.546 0.491
R 1.000 --
CA -0.631 0.002
P 0.042 0.001
S 0.012 0.009
Respect for counterarguments (CA) L 0.432 0.000
C -0.328 0.009
R -0.631 0.002
CA 1.000 --
P 0.232 0.352
S 0.585 0.192
Constructive Politics (P) L -0.089 0.002
C -0.096 0.000
R 0.042 0.001
CA 0.232 0.352
P 1.000 --
S 0.585 0.996
Spatial Precision (S) L -0.096 0.006
C -0.477 0.025
R 0.012 0.009
CA 0.585 0.192
P 0.585 0.996
S 1.000 --
Source: Results calculated with data from USFS 2018b.
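Because the polychoric coefficient assumes each ordinal item discretizes a latent normal variable, it can be estimated directly from that definition. The sketch below is a rough, self-contained illustration (SciPy ships no polychoric function): thresholds are derived from each item's marginal proportions, and a grid search finds the latent correlation that maximizes the likelihood of the observed contingency table. The two code lists are hypothetical magnitudes, not the CNF data.

import numpy as np
from scipy.stats import norm, multivariate_normal

def cutpoints(codes):
    # Latent-normal thresholds implied by an item's marginal proportions;
    # infinite tails are clipped to +/-8 to keep the CDF calls finite.
    _, counts = np.unique(codes, return_counts=True)
    cum = np.cumsum(counts)[:-1] / counts.sum()
    return np.concatenate(([-8.0], norm.ppf(cum), [8.0]))

def log_likelihood(rho, table, cuts_a, cuts_b):
    # Likelihood of the contingency table under latent correlation rho.
    mvn = multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]])
    ll = 0.0
    for i in range(table.shape[0]):
        for j in range(table.shape[1]):
            # Rectangle probability by inclusion-exclusion on the bivariate CDF.
            p = (mvn.cdf([cuts_a[i + 1], cuts_b[j + 1]])
                 - mvn.cdf([cuts_a[i], cuts_b[j + 1]])
                 - mvn.cdf([cuts_a[i + 1], cuts_b[j]])
                 + mvn.cdf([cuts_a[i], cuts_b[j]]))
            ll += table[i, j] * np.log(max(p, 1e-12))
    return ll

def polychoric(codes_a, codes_b):
    vals_a, inv_a = np.unique(codes_a, return_inverse=True)
    vals_b, inv_b = np.unique(codes_b, return_inverse=True)
    table = np.zeros((len(vals_a), len(vals_b)))
    np.add.at(table, (inv_a, inv_b), 1)      # observed cross-tabulation
    cuts_a, cuts_b = cutpoints(codes_a), cutpoints(codes_b)
    grid = np.linspace(-0.95, 0.95, 191)
    lls = [log_likelihood(r, table, cuts_a, cuts_b) for r in grid]
    return grid[int(np.argmax(lls))]

# Hypothetical magnitudes for two index items across twelve comments.
item_s = [2, 2, 1, 0, 2, 1, 2, 0, 1, 2, 2, 1]
item_p = [2, 1, 1, 0, 2, 1, 2, 1, 0, 2, 2, 1]
print(polychoric(item_s, item_p))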
These results show that some correlations were moderate, though they also lacked
statistical significance. Moderate correlations between the index's other items and the 'Spatial
Precision' item were detected for only a select few items, not all of them. The lack of even
moderate correlations across all of the index's items suggested the modified DQI lacked
unidimensionality. Lacking unidimensionality does not mean that the modified DQI was useless
as a quantification measurement for qualitative data. Rather, the modified DQI remained an
appropriate method in that it contained grounded, communicative elements shown to be those
most needed for quality discourse (Jaramillo and Steiner 2014; Maia et al. 2017). Without the
original DQI, there would have been no context for how speech acts influence the deliberative
discourse in this qualitative dataset.
However, the results also suggest the modified DQI, as assembled in this index, cannot
directly measure the construct of discourse quality from deliberative spatial data. That is, while
the index's items were relevant for measuring discourse quality, the lack of unidimensionality
showed that discourse quality in deliberative spatial data cannot be measured with the modified
DQI as constituted. This finding does not discredit the DQI as a valid measure for discourse
ethics; rather, the DQI requires further modification to generate more significant correlations
and to validate that the results' interpretations could be extrapolated beyond this sampled
population.
And yet, the lack of unidimensionality may actually make sense for this dataset. Speech
acts from the public are less likely to use all the elements of "formal" discourse than those
generated in formal, ritualized deliberative settings, where structured speech is more likely to be
expected (Jaramillo and Steiner 2014; Maia et al. 2017; Wodak 2011). Under this premise, it is
logical that this dataset lacks unidimensionality, since the modified DQI's items did not generate
normal distributions in the coding results.
Indeed, not having normal distributions of ordinal magnitudes across items is a major
factor affecting correlation coefficients, which operate on the assumption that any two or more
data variables are normally distributed when compared against each other (Urdan 2017). Beyond
ritualized deliberative settings, then, it seems implausible that these DQI items would be
normally distributed. Even within this sample population, people are known to adapt their
vernacular to meet their communication needs, depending on whom they are communicating
with and for what purpose (Elwood 2006; Kyttä et al. 2013; Mitchell and Elwood 2012; Moore
2004). With such frequent language shifting, one is less likely to detect the DQI's items in
normal distributions. Moreover, as has been found in other content analysis research, the public
is not generally acclimated to using formal deliberative elements frequently, a point for which
the DQI has been criticized (Jaramillo and Steiner 2014; Maia et al. 2017). For the project, then,
applying the modified DQI as-is may not actually be appropriate for analyzing speech acts direct
from the public, even when the public is engaging in deliberative discourse.
The lack of unidimensionality with this modified DQI prompted a new question
regarding the 'Spatial Precision' item that was developed. The original DQI contains the items
thought necessary to measure discourse quality; in its current combination, applied to this
project's dataset, the modified DQI generates a less-than-valid measure of that construct. Does
the 'Spatial Precision' item, then, share some form of dimensionality that could be used, perhaps
in another index form, to measure discourse quality? This was an important question to ask
given that the lack of unidimensionality could be associated with the 'Spatial Precision' item,
not with the original DQI being intrinsically inappropriate in and of itself. Thus, while the
polychoric correlation coefficients indicate which items would likely group together to measure
a concept similar or close to discourse quality, another index validity calculation was performed
to verify whether the 'Spatial Precision' item was likely to correlate at all with at least some of
the index's other items.
To answer this dimensionality question, a factor analysis was used to show how the
index's items might share any correlating dimensional grouping. Using a significance cutoff of
±0.30 to isolate which index items loaded along certain factor dimensions (Urdan 2017), this
calculation showed that some index items, both including and excluding the 'Spatial Precision'
item, do load in groups of approximately two to three items per factor. These results, presented
in Table 4.9, are based on the discourse quality values from the second coding session.
Table 4.9. Factor analysis scores (a), per factor loading, per index item
                                        Factor Component
Index Item                            1         2         3
Level of Justification             -0.731      --       0.433
Content of Justification            0.752      --        --
Respect (for groups)               -- (b)     0.785      --
Respect for Counterarguments         --        --       0.937
Constructive Politics               0.762     0.333      --
Spatial Precision                    --       0.799      --
Source: Results calculated with data from USFS 2018b.
(a) Using oblique rotation.
(b) Factors not shown when less than ±0.30.
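As an illustration of this step, the sketch below runs an exploratory factor analysis with an oblique (oblimin) rotation using the third-party factor_analyzer package (pip install factor-analyzer), then suppresses loadings under ±0.30 as the table does. The DataFrame of coded magnitudes is invented for demonstration; the thesis used the second session's coded values.

import pandas as pd
from factor_analyzer import FactorAnalyzer

# Hypothetical coded magnitudes per comment for the six index items.
coded = pd.DataFrame({
    "level_justification":   [2, 1, 0, 2, 2, 1, 0, 2, 1, 2, 0, 1],
    "content_justification": [1, 2, 0, 2, 1, 0, 1, 2, 1, 0, 0, 2],
    "respect_groups":        [1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0],
    "respect_counterargs":   [2, 1, 0, 1, 1, 2, 0, 2, 0, 2, 1, 1],
    "constructive_politics": [0, 1, 0, 1, 1, 0, 1, 1, 1, 2, 0, 1],
    "spatial_precision":     [2, 0, 1, 2, 2, 1, 0, 2, 1, 2, 1, 0],
})

fa = FactorAnalyzer(n_factors=3, rotation="oblimin")   # oblique rotation
fa.fit(coded)
loadings = pd.DataFrame(fa.loadings_, index=coded.columns,
                        columns=["factor_1", "factor_2", "factor_3"])
print(loadings.where(loadings.abs() >= 0.30))          # suppress weak loadings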
For instance, the factor analysis showed the 'Spatial Precision' item loaded with more
confident correlation alongside the 'Constructive Politics' and 'Respect' items. In turn, the
spatial item appeared to have weaker correlation, in positive and negative directions
respectively, with the 'Content of Justification' and 'Respect for Counterarguments' items. No
significant correlation was detected between the 'Level of Justification' item and the spatial
item. Taken together, these were encouraging findings. They suggested not only that the public's
deliberative speech acts used at least some of the original DQI's items, but also that, since the
'Spatial Precision' item did correlate with other items, the spatial item was not the sole source of
the lack of unidimensionality for the entire modified DQI. This was an expected result given the
polychoric coefficients.
For the communicative element of spatial precision in deliberative discourse, the item
correlation results suggest an intriguing finding: some sort of interaction appears to occur
between certain types of discourse elements. Yet the evidence is not strong enough to state
which elements interact, whether the interaction was positive or negative for changes in
discourse quality, or whether the interactions between the spatial precision of narratives and the
discourse elements used in them were the result of spatial precision enhancing or detracting
from discourse quality.
4.2.3. Summary of the Statistics Results
The index reliability and validity results suggest that the potential for these items'
interactions should not be disregarded. Having precise spatial landscape values in our discourse
changes the quality of that discourse, though the intensity or direction of that change remains
uncertain. The modified DQI was appropriate for contextualizing and quantifying these public
comments; however, it was limited in showing how significantly these communicative elements
correlate. Nevertheless, though the index was more multidimensional than anticipated, these
results showed that the generated discourse quality values can still provide a way forward for
understanding whether spatial narratives change deliberative discourse quality. Since
correlations between items, albeit moderate ones, were still detected, the index retained validity
in its ability to capture the construct of discourse quality.
4.3. Discussion of Measured Discourse Quality
In some respects, the discourse quality values measured from the public comments, and
the subsequent statistical reliability and validation calculations for the modified DQI, roughly
explain on their own how the index quantified deliberative landscape valuations. However, the
project focused not just on showing the index's usability with qualitative spatial data, but on
showing how deliberative discourse quality changed with spatial precision integrated into these
public comment narratives. A subsection is thus needed to connect the results presented above to
the project's research objectives.
This subsection interprets the results in two ways to determine whether discourse quality
changed with precise spatial locations detected in the public comments. The first is binary,
concluding whether the overall discourse quality for the dataset changed or not; this was
achieved by comparing the dataset's discourse values with the spatial precision item against the
same dataset without the spatial item. The second concerns the degree to which precise spatial
narratives changed discourse quality values, i.e., how the quality value shifted among the three
magnitudes of the spatial item's weights. These two views reflect the dimensions of the index,
incorporating its nominal aspect (whether spatial precision existed or not) and its ordinal aspect
(the magnitude at which spatial precision was detected).
4.3.1. Determining Overall Discourse Quality Change from Spatial Precision
Overall, public comments in which the greatest magnitude of spatial precision was
detected were more likely to have greater discourse quality than comments with less than the
greatest spatial precision for their landscape valuations, or with no spatial precision detected at
all. This finding, though, applies only to the sampled population, given that the statistical
reliability and validity calculations showed little probability that the discourse quality trends in
the sample could be extrapolated to a wider population.
This finding for the sampled population is based on the direct comparison of discourse
quality values when classed by spatial precision magnitude: the class of discourse quality values
with the greatest spatial precision was compared against the classes with less. As shown in
Table 4.10, discourse quality for the class with the greatest spatial precision was greater than for
the other spatial precision classes.
Table 4.10. Deliberative discourse quality, for population sample, per spatial precision
Spatial Precision N Mean Median SD Min Max
Beyond Study Area 14 3.143 3.000 1.610 1.000 5.000
Study Area Only 8 4.875 5.000 1.458 3.000 7.000
Feature Within Study Area 129 5.791 6.000 1.379 3.000 13.000
Source: Results calculated with data from USFS 2018b.
However, this observation must be supplemented with a comparison of means test and
with observations from comparing the classes' standard deviations. This was needed because
each ordinal weight class for the spatial precision item had a different membership count: the
more members associated with a spatial weight, the more likely that weight class would have a
near-normal distribution of its values compared to classes with fewer members. While a lack of
a normal distribution is not in itself bad, inconsistent distributions between the spatial weights
make it impractical to compare quality change directly. For instance, since the spatial precision
magnitudes for weights "0" and "1" were skewed in one direction, this may be so not because of
how the item was coded, but because there were not enough members to produce a normal
distribution. Thus, to show more accurately how discourse quality changes with added spatial
precision, a comparison of means test was performed to show how statistically significant the
changes to discourse quality were with and without the spatial item present. A direct comparison
of the standard deviations between the spatial precision classes also contextualizes how precise
landscape valuations helped maintain a consistent level of discourse quality, further indicating
that spatial precision appeared to help comments achieve greater discourse quality than without
it.
4.3.1.1. Comparison of means test
The comparison of means test used the Paired Samples t Test to compare how the spatial
precision item contributed to the overall discourse quality value for the dataset. The Paired
Samples t Test, as outlined in the 'Methodology' Chapter, indicates whether the spatial precision
weight significantly contributed to discourse quality for the dataset overall. Significance was
determined by calculating the probability that the spatial precision weight added to discourse
quality beyond what the dataset, without the spatial precision weight, would generate if
rearranged in random combinations to produce similar discourse quality values. The results of
the Paired Samples t Test are shown in Table 4.11.
Table 4.11. Comparison of means, for total index, with and without spatial precision item
Paired Samples (a)             Correlation (b)   Mean (b)   SD      95% CI (c)     t        df
w/ Spatial – w/o Spatial            0.926          1.762    0.608   1.664-1.859   35.614    150
Source: Results calculated with data from USFS 2018b.
(a) n = 151, per sample.
(b) p < 0.001.
(c) CI = Confidence Interval.
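A hedged sketch of this test using scipy, with randomly generated stand-in data since the coded CNF values are not reproduced here, would look as follows: each comment's total quality is paired with and without the spatial weight, and the paired correlation, mean difference, confidence interval, and t statistic parallel the columns of Table 4.11.

import numpy as np
from scipy.stats import ttest_rel, pearsonr, t as t_dist

rng = np.random.default_rng(0)
without_spatial = rng.integers(1, 12, size=151).astype(float)  # quality sums
spatial_weight = rng.integers(0, 3, size=151)                  # ordinal 0-2 weight
with_spatial = without_spatial + spatial_weight

t_stat, p_value = ttest_rel(with_spatial, without_spatial)
r, _ = pearsonr(with_spatial, without_spatial)   # paired-samples correlation
diff = with_spatial - without_spatial
ci = t_dist.interval(0.95, len(diff) - 1, loc=diff.mean(),
                     scale=diff.std(ddof=1) / np.sqrt(len(diff)))
print(f"r = {r:.3f}, mean diff = {diff.mean():.3f}, SD = {diff.std(ddof=1):.3f}")
print(f"95% CI = {ci}, t({len(diff) - 1}) = {t_stat:.3f}, p = {p_value:.3g}")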
The Paired Samples t Test yielded encouraging results on two indicators, as outlined by
Kent State University Libraries (2018) and Urdan (2017). First, with a strong, positive
correlation (r = 0.926, p < 0.001) between the paired samples, the comparison of means result
was interpreted as showing that the spatial precision item was the predominant treatment in this
test, and thus the factor assumed to most influence the change in discourse quality between the
paired samples. In other words, the strong correlation between sets meant that this test had
limited data noise from the index's other items that could also influence how discourse quality
changes with or without the spatial item. Second, the mean difference between the paired
samples was at a 95% confidence interval with a statistically significant p-value (p < 0.001).
This suggested that under randomized conditions, the dataset without the spatial item was not
likely to produce a discourse quality similar to that obtained with the spatial item integrated.
Taken together, the comparison of means test showed that, on average, the total discourse
quality value was 1.762 greater than the discourse quality value for the dataset without the
'Spatial Precision' item. Along with showing that the difference between the tested datasets was
statistically significant, this indicates that spatial precision in these public comments appeared to
add to discourse quality overall. More importantly for the project, this comparison helped show
that the discourse quality of landscape valuations could change when that discourse uses spatial
precision in its narratives. Whether that change in discourse quality is ultimately for the better
(in terms of influencing a public policy process), this finding cannot determine. What the finding
does confirm is that using a spatial precision construct during the landscape valuation process
appeared to alter how one communicates a landscape valuation.
The comparison of means test, however, explained only one aspect of how spatial
precision changed discourse quality. Changes in quality values should also be explored by
comparing the standard deviations between the spatial precision classes. This helps show that
when the discourse quality values per class fall within tight ranges, quality changes between
classes can be appreciated for how consistently those values influence discourse quality.
4.3.1.2. Comparing spatial precision class standard deviations
Despite the unequal membership counts among the spatial weight rankings, looking at
each weight class's standard deviation can show how quality changes were occurring based on
the level of spatial precision coded. The standard deviation essentially shows how consistent the
variance was between the dataset's mean quality and the quality value calculated per comment.
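The per-class figures in Table 4.10 amount to a grouped summary, which can be sketched as follows with pandas; the ten rows here are invented placeholders for the 151 coded comments.

import pandas as pd

# Hypothetical (spatial weight, total quality) records, one per comment.
comments = pd.DataFrame({
    "spatial_weight": [0, 0, 1, 1, 2, 2, 2, 2, 2, 2],
    "quality":        [2, 4, 4, 6, 5, 6, 6, 7, 5, 6],
})
summary = comments.groupby("spatial_weight")["quality"].agg(
    ["count", "mean", "median", "std", "min", "max"])
print(summary)   # one row per spatial precision class, as in Table 4.10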
As shown in Table 4.10 above, the most precise spatial weight class had a standard
deviation of 1.38, meaning that comments with spatially precise locations produced discourse
quality within a consistently limited range of variance. Such a limited standard deviation
suggests spatial precision in comments keeps discourse quality at consistent levels, and for this
case study area, at levels greater than those with less spatial precision. In other words, where
there is spatial precision, deliberative discourse was likely to be greater in quality, since spatial
precision appears to keep discourse quality within a consistent range. This type of "high"
discourse quality, with the greatest amount of spatial precision detected, is illustrated in Figure
4.7.
…The rules regarding motorized access in the Chugach have previously been pretty fair.
One of the major issues that is unfair is Skookum/Placer River Drainage closing to
motorized users in April. Due to the changing climate that area is rarely open as it is.
When there is sufficient snow to protect the underlying vegetation it is unfair that the area
is closed down to motorized users starting in April. I have been lucky enough to fly the
area and drive past it numerous times after it closes and noticed that while the entire
motorized community is closed out, very few non-motorized users are out enjoying the
area. I'm sure dozens will come forth and say this isn't true, but regardless, it isn't fair.
Motorized and non-motorized users need to share the back country when the snow is
sufficient to protect the land. This area in question doesn't have a "corridor" that people
use a trail. It is a wide-open valley with a million different ways in and out - therefore
trail conflict is non-existent. Please consider opening this area beyond the current
regulation so all Alaskans can access the back country - whether motorized or non-
motorized. …
Comment: Discourse Quality = 8; Type = original; From = individual
Figure 4.7. Example of an argument showing “high precision spatial narrative” (with ordinal
weight “2” for the spatial item). Source: Data from USFS 2018b.
Meanwhile, comments whose spatial precision was less than the highest-ranking weight
showed comparatively larger standard deviations (1.46 with some spatial precision and 1.61
with none). These deviations suggest comments with less spatial precision produced wider
variances of discourse quality; that is, such comments are more likely to have wider ranges of
discourse quality and, in this case, less quality than comments with greater spatial precision
detected. Comments showing progressively less quality with progressively less spatial precision
are illustrated in Figure 4.8.
I would like to offer my position that motorized access to the Chugach areas not be
further restricted. Most areas within the Chugach are only accessible in the winter
through the use of snowmobiles, and I have seen no evidence of damage to the ecosystem
through their use. I continue to be an advocate of responsible use of these machines,
respect to other users, and know that everyone else I ride with does the same. To restrict
or eliminate this form of access is to essentially lock up these lands from access by
citizens of this state. There is simply no other way to get into these areas, in a timely
fashion, without dedicating days or weeks to snowshoeing or hiking back in. This is
simply not feasible or practical for the majority of Alaskans.
To eliminate motorized access would do nothing to protect the Chugach, as there is
no additional protection needed, in my opinion. …
Comment A: Discourse Quality = 7; Type = original; From = individual
Please leave the wilderness alone. Allowing development and activity are
counterproductive to the goals of protection - which should be of paramount importance
to your existence. …
Comment B: Discourse Quality = 5; Type = original; From = unknown
Figure 4.8. Examples of arguments showing an “accurate spatial narrative” (as with ‘Comment
A,’ with ordinal weight “1” for the spatial item), and “no spatial precision narrative” (as with
‘Comment B,’ with ordinal weight “0” for the spatial item). Source: Data from USFS 2018b.
These examples show that when a public comment exhibited the greatest form of spatial
precision, that comment was more likely to have less discourse quality variation than comments
with less than the greatest form of spatial precision. In other words, spatially precise comments
were not outlying occurrences that existed only a fraction of the time within the sampled
population. The standard deviations for this type of comment exhibited little variation within the
dataset, indicating that comments with spatial precision tend to hold their quality throughout a
sampled population, regardless of other parameters (e.g., where the comment originated or
whether it was an original thought). Comments that fall within a wider standard deviation
arguably cannot maintain a consistent discourse quality, meaning they were less likely to be
spatially precise in their landscape valuations. Altogether, when a comment exhibited less than
the greatest spatial precision, the comment had a higher potential to show no significant
discourse quality change.
4.3.2. Detecting the Degree of Change to Discourse with Spatial Precision
While detecting discourse quality changes overall for the entire public comment dataset
was achievable, detecting the magnitudes by which discourse quality changes from one level to
another was not possible. The modified DQI's ordinal dimensions cannot logically be
interpreted as values falling along an interval scale, where the amount of change from one value
to another is known because all values on an interval scale are standardized with units to
measure changes between values (e.g., to go from one inch to one-and-a-half inches, one must
have traveled half an inch). Ordinal rankings inherently lack the properties to detect changes
along a standardized measuring scale, unlike interval scales, which measure physical changes to
the state of an object, such as temperature or distance (O'Sullivan and Unwin 2010).
As such, looking at the magnitude of change from one discourse level to the next using
rankings does not make sense. While ranking comments on their summed ordinal values does
show which comments appear to have greater quality than others, the project's objective was to
detect the degree of discourse change when there was spatial precision in landscape valuations.
Since the difference between ordinal rankings of "1" and "2" is not "1" but rather a condition of
categorical identification, the project cannot interpret changes to discourse quality by a specific
interval magnitude given the presence of a certain spatial precision. Rather, changes to discourse
quality must be looked at broadly (i.e., as the discourse quality of an entire sampled population),
so as to show how certain ordinal categories correlate with other categories (O'Sullivan and
Unwin 2010). Such a comparison can show how the presence of one magnitude relates to the
presence of another, not how much one item affects another. For example, Figure 4.9 shows that
even with the greatest spatial precision detected, the presence of that magnitude cannot be said
to exert a degree of influence over a comment's discourse quality, but only to correlate with it.
I am concerned with currently human powered areas of the NF being zoned motorized. I
do agree that there should be an equal amount of space available for every user group, but
the areas that are more accessible for human powered travel should remain closed to
motorized traffic. The motorized users can access terrain further from the roads, where
the human powered personnel have a greater difficulty getting to, and it makes sense to
use specific corridors to allow them access to those areas. In an effort to keep certain
areas more peaceful, which many human powered users seek, the current boundary
(East/West of Seward Hwy) in Turnagain Pass does a fairly good job keeping the East
side of the pass quieter and safer for the people who choose to recreate there. Thank you
for reading this very short concern.
Comment A: Discourse Quality = 8; Type = original; From = individual
The original Wilderness Study Area should remain a Wilderness. It is a unique
opportunity to set aside a prestine [sic] area for future generations. Think how much good
came from other wilderness areas which in the long run is much more beneficial to the
general population than opening it up to development etc. which would only be to a
few.…
Comment B: Discourse Quality = 6; Type = original; From = individual
The 1.9 million eligible acres of the WSA and surrounding roadless lands eligible for
wilderness designation as Wilderness. Do not abandon protection for the nearly 600,000
acres you propose to eliminate from the WSA.
Comment C: Discourse Quality = 4; Type = original; From = unknown
Figure 4.9. Examples of arguments with progressively lower discourse quality, despite all having
the greatest spatial precision (with ordinal weight “2” for the spatial item). Source: Data from
USFS 2018b.
Unfortunately, the original DQI research was not clear as to whether the DQI item
weights should be considered along a scale where the difference between each magnitude is
known, or whether the magnitudes are only to be ordered with the difference between each order
unknown. Though the use of Spearman's 'rho' in the original DQI research suggests the DQI's
weights should be interpreted as ordinal ranks, almost any construct involving the measurement
of quality could lend its weighting schema to interpretation along interval scales as well. For
example, the "quality" of a phenomenon could be measured not only in terms of best to worst
(i.e., ranking), but by the state of its existence (e.g., the quality of a bank account based on its
daily financial balance, or the quality of one's health given their internal temperature, both
instances measured on an interval scale) (O'Sullivan and Unwin 2010).
However, discourse quality, it seems, was conceptualized in the original DQI research to
be measured using ordinal (ranking) scales. This is perhaps because discourse can occur through
various mediums (e.g., writing, graphic visualization, video media), and the ordinal scale is
utilitarian enough to capture quality by ranking it, rather than requiring an interval measure
devised to fit every medium in which discourse can exist. Thus, the ordinal dimension of the
modified DQI limits the analysis to identifying that discourse quality changed, not the degree of
change between levels.
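Under this ordinal reading, the appropriate summary of the spatial weight's association with total quality is a rank correlation rather than an interval effect size. A minimal sketch, again with invented values standing in for the coded comments:

from scipy.stats import spearmanr

spatial_weight = [0, 0, 1, 1, 2, 2, 2, 2, 2, 2]   # ordinal item weights
quality        = [2, 4, 4, 6, 5, 6, 6, 7, 5, 6]   # summed index values
rho, p = spearmanr(spatial_weight, quality)
print(f"Spearman rho = {rho:.3f}, p = {p:.3f}")   # direction of association, not degree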
Chapter 5 Conclusion
The preceding statistical results and discussion provided a thorough overview of the sampled
population's overall discourse quality for their landscape valuations of the study area. However,
the interpretation of the trends by which spatial precision shaped discourse quality applies only
to the sampled population, because the statistical calculations yielded probability values that
could not meet the thresholds needed to state with surety that the observed patterns of
interaction between discourse items would likely be observed beyond the sample (Steenbergen
et al. 2003; Urdan 2017). Nevertheless, the project's analysis still yielded insights worth further
investigation, both for understanding how spatial precision in our speech could influence our
discourse and for how this project's methods could be adapted for other qualitative spatial
science research. This Chapter reviews whether the above results and discussion answered the
project's research questions and hypothesis. It also discusses the areas in which this project
could improve and where future qualitative spatial data research could be directed.
5.1. Answering Research Questions
Research questions set expectations for the type of results that can be achieved with the
methods devised, and they help determine whether those methods were appropriate for the data
at hand. Addressing a research question differs from answering a hypothesis: the hypothesis is
the overarching research objective a project wants to answer with data, whereas research
questions constitute the concrete analysis steps (objectives) needed to address the hypothesis
(Montello and Sutton 2013; O'Sullivan and Unwin 2010). For the project, the research questions
validate that the analytical processes applied to the data answered the questions needed to get at
the hypothesis. Revisiting these questions also helps identify which parts of the methodology, or
of the dataset, should be reassessed in future research.
The first of the four research questions asked whether location precision in speech
changes the quality of deliberative discourse. Given that the analysis showed discourse did
appear to change along its nominal dimensions when the greatest magnitude of spatial precision
was detected, the project considered this question partially answered. It is partial because the
degree of discourse change cannot be known given the DQI's inherently ordinal measurement,
which shows only in binary terms that discourse quality overall changed with spatial precision.
This limits the understanding of how spatial precision in speech influences other discourse
elements. Nevertheless, with the methodology devised for the DQI to include spatial precision
classes, having this research question partially answered illuminates how locations documented
qualitatively can be useful within spatial science research.
The second research question asked whether the DQI could measure spatial precision in
spatial narratives. This was partially answered given how the spatial precision item construct
integrated with the modified DQI and was then applied during the coding sessions. On the
whole, the factor analysis and strong rater reliability calculations showed that the added item for
the construct of spatial precision could be detected and coded in qualitative spatial data
consistently and objectively. However, the item correlation calculations also showed that the
overall index for the case study area lacked unidimensionality across all the items. Factor
analysis, though, showed that this lack of unidimensionality meant instead that some discourse
items were likely to correlate moderately together in groups of two or three. This grouping of
discourse items meant that the public comments from the CNF exhibited a more
multidimensional formation of deliberative discourse than the original DQI was designed to
measure. Therefore, while spatial precision could be detected using the original DQI's content
analysis methods, the project cannot say with surety that spatial precision could be consistently
measured from all types of narratives beyond the CNF, given that the spatial construct correlated
only moderately with the original DQI's construct items. This leaves doubt as to whether the
spatial precision detected was a phenomenon particular to the case study area.
The third research question asked whether spatial narratives can be quantified from
qualitative spatial data. Assuming that qualitative data contains narratives amenable to
quantitative analysis, the project considers this answered. Of course, pulling narratives from
qualitative data requires an appropriate instrument to decide which elements within a dataset
should be the focus of analysis. Since the modified DQI instrument helped decide which
deliberative discourse elements to focus on in this project's analysis, qualitative data can be seen
to contain narrative elements that are quantifiable.
Finally, the fourth research question asked how precise spatial narratives changed the
quality of deliberative discourse. In reference to the factor loading analysis, the project
stipulated in Boolean terms which discourse components change with precise spatial narratives.
Beyond this nominal finding, though, the nuance of how much spatial precision influences other
items could not be answered, due to the ordinal nature of the index's magnitude measurement.
5.2. Areas for Project Improvement
There are two areas of this project that would benefit from improved methods or
reframing. Indeed, these two areas should be reevaluated before attempting to replicate the
above methods with the same dataset or undertaking similar research with another qualitative
spatial dataset. Both areas concern the structure of the index instrument itself.
First, given the potential for speech acts beyond formal deliberative settings (e.g., a
parliament) to use discourse items inconsistently, as illustrated in Figure 5.1 by the skewed
distributions of the index items' magnitudes detected in the analysis, the index may have limited
application to other types of deliberative discourse. The original DQI research used a
deliberative discourse dataset from a parliamentary debate. Arguably, a formal deliberative
setting requires more ritualized deliberative communication elements (see Elwood and
Leszczynski 2013; Maia et al. 2017) than would be known to a public submitting comments for
a policy revision. A formalized discourse setting may have allowed the index's items in the
original research to be detected in normal distributions across all the DQI's items; in turn, when
all of an index's items have normal distributions, the index's unidimensionality appears
reinforced (Rigdon 2010; Urdan 2017). If the index's validity relies on unidimensionality for
measuring the overarching construct of deliberative discourse, then the index may fall apart
when deliberation is researched beyond formalized discourse settings. Thus, while the elements
of the index remain valid and well grounded, use of the DQI, original or as modified in this
project, outside of more formalized discourse settings should proceed with caution.
Figure 5.1. Distribution of the index’s ordinal weights, per nominal item. Source: Results
calculated with data from USFS 2018b.
Second, the parameter from the original DQI that some items could be detected and
coded in both implicit and explicit contexts introduced significant doubt during the content
analysis process. Indeed, human communication strategies contain a massive range of expressive
abilities, both through direct language and subtle non-verbal cues (Jaramillo and Steiner 2014;
Maia et al. 2017; Moore 2004). As such, an index designed to quantify human speech through
transcripts or written words (i.e., public comments) should recognize that capturing both implicit
and explicit forms of communicative elements is practically impossible and risks false positives
about a communicative item's presence in a speech act. In other words, allowing both implicit
and explicit communicative forms can make it difficult to isolate the influence one discourse
item may have on another in the index. Any index quantifying human language should
accommodate that range of abilities in speech by limiting itself to either implicit or explicit
forms of language.
This struggle with coding implicit and explicit forms of language was especially apparent
with the 'Level of Justification' and 'Content of Justification' items. Under an assumption of
mutual exclusivity, explicit illustrative justifications, or mentions of group interest, should be
weighted as having less quality than justifications with explicit reasoning or mentions of the
common good, as outlined in the DQI's coding parameters. And yet, implicit illustrative
justifications and group interest could suggest that such reasoning is just as valuable as its more
explicit versions. The allowance of either implicit or explicit reasoning thus seems to contradict
which type of reasoning is more valuable to discourse quality. To use the DQI, original or
otherwise, on another qualitative spatial dataset, one may wish to reduce ontological uncertainty
during content analysis by grounding which forms of implicit or explicit discourse should be
quantified.
5.3. Future Directions
This project has created a bridge between spatial science and other disciplines. This
bridge will hopefully spur further research by setting a precedent for the spatial sciences to
create new constructs and models of spatial relationships as conceived by other disciplines,
including those without spatial analysis tools. Essentially, this project hopefully inspires spatial
scientists to explore new ways of conceptualizing and analyzing spatial relationships.
Looking forward, there are three areas in which researchers could build upon these
findings. First, a before-and-after comparison of public policies that undergo a revision process
with public comments could validate the dimensions by which spatially precise narratives may
influence a policy. Unfortunately, such comparisons prove challenging, as some policy revision
processes take almost a decade, by which time the nature of influence over a revised policy may
well be forgotten or minimized (Brown and Donovan 2013; Kahila-Tani et al. 2015). Second, the
analysis included form letters from the public, which were assumed to represent their
perspective during deliberative discourse; the implications for discourse quality of submitting a
form letter versus original thoughts were therefore not explored in depth. Investigating whether
discourse quality should be considered reduced when people use form letters instead of
composing original comments could help us understand how "templated" forms of discourse
influence policy changes.
Finally, a larger question within softGIS research is the extent to which participatory
mapping activities empower communities when they leverage spatial narratives into knowledge
politics (see Elwood and Leszczynski 2013; Perkins 2010). This question matters for softGIS
because political capital, by means of "claiming" geographic knowledge over an area, is a
promise implied to a public who contribute their landscape valuations through softGIS
activities: that their deliberative spatial discourse will influence policy changes (Kahila and
Kyttä 2009; Kar et al. 2016). Yet the degree to which the public is empowered by softGIS
remains unanswered.
This analysis provides a cornerstone for ongoing investigation and discussion of what
spatial precision does for deliberative discourse. The research here offers glimpses into the type
of information that qualitative spatial data can deliver, information that had until now been
largely unexplored. Further research is needed to build on the project's findings. And despite
having areas for improvement, this project uncovered possibilities for how spatial thinking
affects our discourse during policy processes.
As governments strive to craft policies reflective of the people affected by them, and as
people continue to find better ways to communicate landscape values to their governments, this
analysis bore at least moderate evidence to suggest that what people choose to include in their
arguments for policy changes matters. More importantly, this project showed that including
spatial thinking in our discourse shapes the way people communicate their landscape values, and
that spatial thinking is indeed an influential communicative tool.
References
Albright, Elizabeth A. 2018. “Inference: Comparison of Means.” Nicholas School of the
Environment, Duke University. Accessed December 16, 2018.
https://sites.nicholas.duke.edu/statsreview/means/.
Anderson, Monica. 2017. “Digital divide persists even as lower-income Americans make gains
in tech adoption.” Pew Research Center, March 22. Accessed July 22, 2018.
http://www.pewresearch.org/fact-tank/2017/03/22/digital-divide-persists-even-as-lower-
income-americans-make-gains-in-tech-adoption.
Arnstein, Sherry R. 1969. “A ladder of citizen participation.” Journal of the American Institute of
Planners 35, no. 4 (July): 216–24. doi: 10.1080/01944366908977225.
Bächtiger, André, Simon Niemeyer, Michael Neblo, Marco R. Steenbergen, and Jürg Steiner.
2010. "Disentangling diversity in deliberative democracy: Competing theories, their blind
spots and complementarities." Journal of Political Philosophy 18, no. 1 (March): 32-63.
doi: 10.1111/j.1467-9760.2009.00342.x.
Bhattacherjee, Anol. 2012. Social Science Research: Principles, Methods, and Practices. 2nd ed.
University of South Florida, Tampa: Global Text Project. url:
https://scholarcommons.usf.edu/oa_textbooks/3/.
Birkland, Thomas A. 2005. An Introduction to the Policy Process. New York: M.E. Sharpe.
Bolstad, Paul. 2016. GIS Fundamentals: A First Text on Geographic Information Systems.
Minnesota: Eider Press.
Brown, Greg. 2012. "An Empirical Evaluation of the Spatial Accuracy of Public Participation
GIS (PPGIS) Data." Applied Geography 34 (May): 289-94. doi:
10.1016/j.apgeog.2011.12.004.
Brown, Greg, Christopher M. Raymond, and Jonathan Corcoran. 2015. “Mapping and measuring
place attachment.” Applied Geography 57 (February): 42-53. doi:
10.1016/j.apgeog.2014.12.011.
Brown, Greg G., and David V. Pullar. 2012. "An Evaluation of the Use of Points versus
Polygons in Public Participation Geographic Information Systems Using Quasi-
experimental Design and Monte Carlo Simulation." International Journal of
Geographical Information Science 26, no. 2 (February): 231-46. doi:
10.1080/13658816.2011.585139.
Brown, Gregory Gordon, and Pat Reed. 2012. "Social Landscape Metrics: Measures for
Understanding Place Values from Public Participation Geographic Information Systems
(PPGIS)." Landscape Research 37, no. 1 (February): 73-90. doi:
10.1080/01426397.2011.591487.
Brown, Gregory, and Delene Weber. 2012. "Measuring Change in Place Values Using Public
Participation GIS (PPGIS)." Applied Geography 34 (May): 316-324. doi:
10.1016/j.apgeog.2011.12.007.
Brown, Greg, Delene Weber, and Kelly de Bie. 2014. "Assessing the Value of Public Lands
Using Public Participation GIS (PPGIS) and Social Landscape Metrics." Applied
Geography 53 (September): 77-89. doi: 10.1016/j.apgeog.2014.06.006.
Brown, Greg, and Marketta Kyttä. 2014. "Key Issues and Research Priorities for Public
Participation GIS (PPGIS): A Synthesis Based on Empirical Research." Applied
Geography 46 (January): 122-36. doi: 10.1016/j.apgeog.2013.11.004.
——. 2018. "Key issues and priorities in participatory mapping: Toward integration or increased
specialization?" Applied Geography 95 (April): 1-8. doi: 10.1016/j.apgeog.2018.04.002.
Brown, Gregory, and Patrick Reed. 2000. “Validation of a Forest Values Typology for Use in
National Forest Planning.” Forest Science 46, no. 2 (May): 240–7. doi:
10.1093/forestscience/46.2.240.
Brown, Gregory, and Shannon Donovan. 2013. "Escaping the National Forest Planning
Quagmire: Using Public Participation GIS to Assess Acceptable National Forest Use."
Journal of Forestry 111, no. 2 (March): 115-25. doi: 10.5849/jof.12-087.
Cerveny, Lee, Kelly Biedenweg, and Rebecca Mclain. 2017. "Mapping Meaningful Places on
Washington’s Olympic Peninsula: Toward a Deeper Understanding of Landscape
Values." Environmental Management 60, no. 4 (June): 643-64. doi: 10.1007/s00267-017-
0900-x.
Dorling, Daniel. 2010. Injustice: Why Social Inequality Persists. Portland, OR: Policy Press.
Downs, Rodger M., 1997. “The geographic eye: Seeing through GIS?” Transactions in GIS 2,
no.2 (November): 111-121. doi: 10.1111/j.1467-9671.1997.tb00019.x.
Elwood, Sarah. 2006. "Beyond Cooptation or Resistance: Urban Spatial Politics, Community
Organizations, and GIS-Based Spatial Narratives." Annals of the Association of American
Geographers 96, no. 2 (June): 323-41. doi: 10.1111/j.1467-8306.2006.00480.x.
Elwood, Sarah, and Agnieszka Leszczynski. 2013. "New Spatial Media, New Knowledge
Politics." Transactions of the Institute of British Geographers 38, no. 4 (August): 544-59.
doi: 10.1111/j.1475-5661.2012.00543.x.
Engen, Sigrid, Claire Runge, Greg Brown, Per Fauchald, Lennart Nilsen, and Vera Hausner.
2018. “Assessing local acceptance of protected area management using public
participation GIS (PPGIS).” Journal for Nature Conservation 43 (June): 27-34. doi:
10.1016/j.jnc.2017.12.002.
Ernoul, Lisa, Angela Wardell-Johnson, Loïc Willm, Arnaud Béchet, Olivier Boutron, Raphaël
Mathevet, Stephan Arnassant, and Alain Sandoz. 2018. "Participatory Mapping:
Exploring Landscape Values Associated with an Iconic Species." Applied Geography 95
(June): 71-8. doi: 10.1016/j.apgeog.2018.04.013.
Fu, Pinde, and Jiulin Sun. 2011. Web GIS: Principles and Applications. Redlands, CA: ESRI
Press.
FRS (Federal Reserve System) and Brookings Institution. 2008. The enduring challenge of
concentrated poverty in America: Case studies from communities across the U.S. ISBN
978-0-615-25428-9. Federal Reserve System and Brookings Institution Metropolitan
Policy Program.
Gaventa, John. 2006. “Finding the Spaces for Change: A Power Analysis.” IDS Bulletin 37, no. 6
(November): 23-33. doi: 10.1111/j.1759-5436.2006.tb00320.x.
Goodchild, Michael F. 1992. “Geographical information science.” International Journal of
Geographical Information Systems 6, no. 1 (January): 31-45. doi:
10.1080/02693799208901893.
Hammersley, Martyn. 2011. “Conversation Analysis and Discourse Analysis: Self-Sufficient
Paradigms?” In Questioning Qualitative Inquiry, 101-127. London: SAGE Publications.
doi: 10.4135/9780857024565.
Hepburn, Alexa, and Jonathan Potter. 2011. “Discourse Analytic Practice.” In Qualitative
Research Practice, 168-185. London: SAGE Publications. doi: 10.4135/9781848608191.
Jaramillo, Maria, and Jürg Steiner. 2014. "Deliberative Transformative Moments: A New
Concept as Amendment to the Discourse Quality Index." Journal of Public Deliberation
10, no. 2: 1-22. URL: http://www.publicdeliberation.net/jpd/vol10/iss2/art8.
Kahila, Maarit, and Marketta Kyttä. 2009. “SoftGIS as a Bridge-Builder in Collaborative Urban
Planning.” In Planning Support Systems Best Practice and New Methods, edited by Stan
Geertman and John Stillwell, 389-411. New York: Springer.
Kahila-Tani, Maarit, Anna Broberg, Marketta Kyttä, and Taylor Tyger. 2015. "Let the Citizens
Map—Public Participation GIS as a Planning Support System in the Helsinki Master Plan
Process." Planning Practice & Research 31, no. 2 (December): 195-214. doi:
10.1080/02697459.2015.1104203.
Kar, Bandana, Renee Sieber, Muki Haklay, and Rina Ghose. 2016. "Public Participation GIS and
Participatory GIS in the Era of GeoWeb." Cartographic Journal 53, no. 4 (November):
296-9. doi: 10.1080/00087041.2016.1256963.
Kent State University Libraries. 2018. “SPSS Tutorials: Paired Samples t Test.” Kent State
University, November 7. Accessed December 16, 2018.
https://libguides.library.kent.edu/SPSS/PairedSamplestTest.
Kyttä, Marketta, Anna Broberg, Tuija Tzoulas, and Kristoffer Snabb. 2013. "Towards
Contextually Sensitive Urban Densification: Location-based SoftGIS Knowledge
Revealing Perceived Residential Environmental Quality." Landscape and Urban
Planning 113 (May): 30-46. doi: 10.1016/j.landurbplan.2013.01.008.
Lopes-Aparicio, Susana, Matthias Vogt, Philipp Schneider, Maarit Kahila-Tani, and Anna
Broberg. 2017. "Public Participation GIS for Improving Wood Burning Emissions from
Residential Heating and Urban Environmental Management." Journal of Environmental
Management 191, no. C (April): 179-88. doi: 10.1016/j.jenvman.2017.01.018.
Maia, Rousiley C. M., Danila Cal, Janine K. R. Bargas, Vanessa V. Oliveira, Patrícia G. C.
Rossini, and Rafael C. Sampaio. 2017 "Authority and Deliberative Moments: Assessing
Equality and Inequality in Deeply Divided Groups," Journal of Public Deliberation 13,
no. 2 (November). URL: https://www.publicdeliberation.net/jpd/vol13/iss2/art7.
McHugh, Rosemarie, Stéphane Roche, and Yvan Bédard. 2009. "Towards a SOLAP-based
Public Participation GIS." Journal of Environmental Management 90, no. 6 (May): 2041-
054. doi: 10.1016/j.jenvman.2008.01.020.
Mitchell, Katharyne, and Sarah Elwood. 2012. "From Redlining to Benevolent Societies: The
Emancipatory Power of Spatial Thinking." Theory and Research in Social Education 40,
no. 2 (April): 134-63. doi: 10.1080/00933104.2012.674867.
Montello, Daniel. R., and Paul C. Sutton. 2013. An introduction to scientific research methods in
geography and environmental studies. 2nd ed. Los Angeles, CA: Sage.
Moore, Jerry D. 2004. Visions of culture: An introduction to anthropological theories and
theorists. 2nd ed. Walnut Creek, CA: AltaMira Press.
NRC (National Research Council). 2006. Learning to Think Spatially: GIS as a Support System
in the K-12 Curriculum. Washington, DC: The National Academies Press. doi:
10.17226/11019.
Nummi, Pilvi. 2018. "Crowdsourcing Local Knowledge with PPGIS and Social Media for Urban
Planning to Reveal Intangible Cultural Heritage." Urban Planning 3, no. 1 (March): 100-
115. doi: 10.17645/up.v3i1.1266.
O'Sullivan, David, and David J. Unwin. 2010. Geographic Information Analysis. 2nd ed.
Hoboken, NJ: John Wiley & Sons.
Perkins, Douglas D. 2010. "Empowerment." In Political and Civic Leadership: A Reference
Handbook, edited by Richard A. Couto, 207-18. Thousand Oaks, CA: SAGE
Publications.
Pietrzyk-Kaszyńska, Agata, Michał Czepkiewicz, and Jakub Kronenberg. 2017. “Eliciting non-
monetary values of formal and informal urban green spaces using public participation
GIS.” Landscape and Urban Planning 160 (January): 85-95. doi:
10.1016/j.landurbplan.2016.12.012.
Plantin, Jean-Christophe. 2014. Participatory Mapping: New Data, New Cartography. Focus Series, edited by Anne Ruas. Hoboken, NJ: John Wiley & Sons.
PSU (Pennsylvania State University). 2018. “Comparing Two Population Means: Paired Data.”
Eberly College of Science, Applied Statistics. Accessed December 16, 2018.
https://onlinecourses.science.psu.edu/stat500/node/51/.
Ramasubramanian, Laxmi. 2010. Geographic Information Science and Public Participation. Advances in Geographic Information Science, edited by Shivanand Balram and Suzana Dragicevic. New York: Springer.
Rantanen, Heli, and Maarit Kahila. 2009. "The SoftGIS Approach to Local Knowledge." Journal of Environmental Management 90, no. 6 (May): 1981-90. doi: 10.1016/j.jenvman.2007.08.025.
Rigdon, Edward E. 2010. "Polychoric Correlation Coefficient." In Encyclopedia of Research Design, edited by Neil J. Salkind, 1046-8. Thousand Oaks, CA: SAGE Publications.
Rittel, Horst W. J., and Melvin M. Webber. 1973. “Dilemmas in a General Theory of Planning.”
Policy Sciences 4 (June): 155-69. doi: 10.1007/bf01405730.
Saldaña, Johnny. 2013. The Coding Manual for Qualitative Researchers. 2nd ed. Thousand Oaks, CA: SAGE Publications.
Steenbergen, Marco R. 2000. “Item Similarity in Scale Analysis.” Political Analysis 8, no. 3
(March): 261-83. doi: 10.1093/oxfordjournals.pan.a029816.
Steenbergen, Marco R., André Bächtiger, Markus Spörndli, and Jürg Steiner. 2003. "Measuring
Political Deliberation: A Discourse Quality Index." Comparative European Politics 1, no.
1 (March): 21-48. doi: 10.1057/palgrave.cep.6110002.
Sui, Daniel, Sarah Elwood, and Michael Goodchild, eds. 2013. Crowdsourcing Geographic
Knowledge: Volunteered Geographic Information (VGI) in Theory and Practice. New
York: Springer.
Trochim, William M.K. 2006. “Sampling.” In Web Center for Social Research Methods.
Accessed July 22, 2018. http://www.socialresearchmethods.net/kb/sampling.php.
Urdan, Timothy C. 2017. Statistics in Plain English. 4th ed. New York: Routledge.
US Department of Agriculture. Forest Service. 2016. A Citizens’ Guide to National Forest
Planning, by the Federal Advisory Committee on Implementation of the 2012 Land
Management Planning Rule. Washington, DC.
US Forest Service. 2014. “2014 Assessment Report: Chapter 1 Assessment Overview.” Chugach
National Forest. Accessed July 22, 2018.
https://www.fs.usda.gov/Internet/FSE_DOCUMENTS/stelprd3822692.pdf.
——. 2018a. “Chugach National Forest Geographic Information System.” Theme Index.
Accessed November 4, 2018.
https://data.fs.usda.gov/geodata/rastergateway/alaska/chugach/.
——. 2018b. “Revision of the Land Management Plan for the Chugach National Forest.”
Reading room. Accessed July 7, 2018. https://cara.ecosystem-
management.org/Public/ReadingRoom?project=40816.
Vich, Guillem, Oriol Marquet, and Carme Miralles-Guasch. 2018. "The Scales of the Metropolis:
Exploring Cognitive Maps Using a Qualitative Approach Based on SoftGIS Software."
Geoforum 88 (January): 49-56. doi: 10.1016/j.geoforum.2017.11.009.
Warner, Kee, and Harvey Molotch. 1995. "Power to Build: How Development Persists despite
Local Controls." Urban Affairs Quarterly 30, no. 3 (January): 378-406. doi:
10.1177/107808749503000304.
Wodak, Ruth. 2011. “Critical Discourse Analysis.” In Qualitative Research Practice, edited by
Clive Seale, Giampietro Gobo, Jaber F. Gubrium, and David Silverman, 186-201.
London: SAGE Publications.
Wolf, Isabelle D., Greg Brown, and Teresa Wohlfart. 2017. "Applying Public Participation GIS
(PPGIS) to Inform and Manage Visitor Conflict along Multi-use Trails." Journal of
Sustainable Tourism 26, no. 3 (August): 470-95. doi: 10.1080/09669582.2017.1360315.
Wright, Dawn J., Michael F. Goodchild, and James D. Proctor. 1997. "Demystifying the Persistent Ambiguity of GIS as 'Tool' versus 'Science.'" Annals of the Association of American Geographers 87, no. 2 (June): 346-62. doi: 10.1111/0004-5608.872057.
Ziegler, Matthias, and Dirk Hagemann. 2015. “Testing the Unidimensionality of Items.”
European Journal of Psychological Assessment 31, no. 4 (October): 231-7. doi:
10.1027/1015-5759/a000309.
Zolkafli, Amirulikhsan, Greg Brown, and Yan Liu. 2017. “An Evaluation of the Capacity-
building Effects of Participatory GIS (PGIS) for Public Participation in Land Use
Planning.” Planning Practice & Research 32, no. 4 (May): 385-401. doi:
10.1080/02697459.2017.1329470.
Zuk, Miriam, Ariel H. Bierbaum, Karen Chapple, Karolina Gorska, Anastasia Loukaitou-Sideris,
Paul Ong, and Trevor Thomas. 2015. Gentrification, displacement and the role of public
investment: A literature review. Working Paper 2015-05. San Francisco: Federal Reserve
Bank of San Francisco.
Appendix A Original DQI Item Overview Table
Table A.1. Original discourse quality index’s seven nominal items and their ordinal weights
Each index item is listed with its nominal weight constructs, their ordinal weights, and the meaning of each weight.

Participation (a)
  0 - Interruption of speaker: During the delivery of a speech act, a speaker is interrupted before completing an argument.
  1 - No interruption of speaker: During the delivery of a speech act, a speaker is not interrupted and is allowed to complete an argument.

Level of Justification
  0 - None: Speaker states what should or should not be done without reasoning as to why.
  1 - Inferior: Speaker states what should or should not be done, but the reasoning has no linkage, or the reasoning is based on illustrations.
  2 - Qualified: Speaker states what should or should not be done, and at least one reasoning has linkage.
  3 - Sophisticated: Speaker states what should or should not be done, and at least two reasonings have linkage.

Content of Justification
  0 - For group interests: Speaker states an argument to benefit one or more group interests.
  1 - Neutral: Speaker states an argument neither to benefit a group interest nor for the 'common good'.
  2a (b) - For common good, utility: Speaker states an argument to benefit the 'greatest good for the greatest number' (utilitarian terms).
  2b (b) - For common good, difference: Speaker states an argument to benefit the 'least advantaged in society' (difference principle).

Respect (for groups)
  0 - None: Speaker mentions only negative statements about groups participating in or benefiting from deliberation.
  1 - Implicit: Speaker mentions no negative statements about groups participating in or benefiting from deliberation, but no explicit positive statements either.
  2 - Explicit: Speaker mentions at least one positive statement about groups participating in or benefiting from deliberation, regardless of whether there are negative statements as well.

Respect (for demands of others) (c)
  0 - None: Speaker explicitly states no respect for the demand to bring an issue up for deliberation.
  1 - Implicit: Speaker does not explicitly state respect or disrespect for the demand to bring an issue up for deliberation.
  2 - Explicit: Speaker makes at least one explicit statement of respect for the demand to bring an issue up for deliberation, regardless of whether there are negative statements as well.

Respect for Counterarguments
  0 - Ignored: Speaker flatly ignores counterarguments.
  1 - Degraded: Speaker acknowledges counterarguments, but explicitly degrades them with a negative statement about the reasoning or about the other speaker presenting the counterargument.
  2 - Neutral: Speaker acknowledges counterarguments, but does not explicitly apply a negative or positive value to them.
  3 - Valued: Speaker acknowledges counterarguments and explicitly states that they have positive value.

Constructive Politics
  0 - Positional: Speaker offers no opportunities for reconciliation or consensus building on the issue being deliberated.
  1 - Alternative: Speaker offers reconciliation or consensus building, but the offer concerns another issue, not the one currently in deliberation.
  2 - Mediating: Speaker offers reconciliation or consensus building on the issue being deliberated.

Source: Steenbergen et al. 2003

(a) Item not included in the modified DQI. The project assumed that if one submitted a comment, the speaker was participating without interruption, which would have produced a constant (1.000). For statistical reliability calculations, constants would have been removed prior to calculation.
(b) Ordinal weights (2a) and (2b) carry the same ranking weight; the creators of the DQI deemed these reasonings to have the same impact on discourse, yet delineated them separately because different reasonings are applied.
(c) Item not included in the modified DQI. The project assumed that since a submitted comment was focused on an issue, the demand for the issue to be deliberated was already established, which would have produced a constant (1.000). For statistical reliability calculations, constants would have been removed prior to calculation.
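To make the scoring concrete, the following is a minimal Python sketch, not the author's code, that scores a single public comment against the six items retained in the modified DQI, including the Spatial Precision item this project added. Item names and ordinal ranges follow Table A.1 and Appendix B; the aggregation of item weights into one Discourse Quality Value is assumed here to be a simple sum, which matches most rows in Appendix B (the thesis body defines the authoritative rule).

# Minimal sketch of modified-DQI scoring (assumed sum aggregation).
# Valid ordinal ranges per retained item; Spatial Precision's 0-2 range
# is inferred from the values observed in Appendix B.
ITEM_RANGES = {
    "content_justification": range(0, 3),      # 0-2
    "respect_counterarguments": range(0, 4),   # 0-3
    "level_of_justification": range(0, 4),     # 0-3
    "constructive_politics": range(0, 3),      # 0-2
    "respect_for_groups": range(0, 3),         # 0-2
    "spatial_precision": range(0, 3),          # 0-2
}

def discourse_quality_value(weights: dict[str, int]) -> int:
    """Validate each item weight against its ordinal range, then sum them."""
    for item, valid in ITEM_RANGES.items():
        if weights.get(item) not in valid:
            raise ValueError(f"{item} must be one of {list(valid)}")
    return sum(weights[item] for item in ITEM_RANGES)

# Example: comment D 90 in Appendix B carries weights 1, 0, 2, 2, 1, 2,
# and its tabulated Discourse Quality Value is 8.
print(discourse_quality_value({
    "content_justification": 1,
    "respect_counterarguments": 0,
    "level_of_justification": 2,
    "constructive_politics": 2,
    "respect_for_groups": 1,
    "spatial_precision": 2,
}))  # prints 8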
Appendix B Table of Discourse Item Weights and Quality Value, per Comment
Columns, in order: Public Comment Document #; Content Justification; Respect for Counterarguments; Level of Justification; Constructive Politics; Respect for Groups; Spatial Precision; Discourse Quality Value.
D 89 0 0 1 0 1 2 4
D 90 1 0 2 2 1 2 8
D 91 1 0 0 0 1 2 4
D 92 2 0 1 0 1 2 6
D 93 1 0 0 0 1 2 4
D 94 1 0 2 0 1 2 6
D 95 2 0 3 0 2 2 9
D 96 1 0 0 0 1 2 4
D 97 0 0 2 0 1 2 5
D 98 1 0 0 0 1 2 4
D 99 1 3 3 2 1 2 11
D 100 0 0 1 0 0 0 3
D 101 2 0 2 0 1 2 7
D 102 0 0 2 0 1 2 5
D 103 0 0 2 0 1 2 5
D 104 1 0 3 0 1 2 7
D 105 0 2 3 0 1 2 8
D 106 1 0 1 0 1 1 3
D 107 0 0 1 0 1 1 3
D 108 1 0 1 0 0 2 4
D 109 0 0 1 0 1 2 4
D 110 0 0 2 0 1 2 5
D 111 1 0 3 2 1 2 9
D 112 1 0 1 0 1 2 5
D 113 0 0 2 0 1 2 5
D 114 1 3 3 2 2 2 13
D 115 2 0 1 0 1 2 6
D 116 1 0 0 2 1 2 6
D 117 0 0 1 0 1 2 4
D 118 0 1 1 0 1 2 5
D 119 0 0 2 0 1 2 5
D 120 0 0 2 0 1 2 5
D 121 0 0 2 0 1 2 5
D 122 0 0 2 0 1 2 5
D 123 0 0 2 0 1 2 5
D 124 0 0 2 0 1 2 5
D 125 0 0 2 0 1 2 5
D 126 0 0 2 0 1 2 5
D 127 0 0 2 0 1 2 5
D 128 0 0 2 0 1 2 5
D 129 0 0 2 0 1 2 5
D 130 0 0 2 0 1 2 5
D 131 0 0 2 0 1 2 5
D 132 0 0 2 0 1 2 5
D 133 0 0 2 0 1 2 5
D 134 0 0 2 0 1 2 5
D 135 0 0 2 0 1 2 5
D 136 0 0 2 0 1 2 5
D 137 0 0 2 0 1 2 5
D 138 0 0 2 0 1 2 5
D 139 0 0 2 0 1 2 5
D 140 0 0 2 0 1 2 5
D 141 0 0 2 0 1 2 5
D 142 1 0 0 2 1 2 6
D 143 1 0 3 0 1 2 6
D 144 1 0 1 2 1 2 7
D 145 1 0 1 2 1 2 7
D 146 0 0 0 2 1 2 5
D 147 1 1 0 0 1 0 3
D 148 1 0 0 2 1 2 6
D 149 0 0 3 0 1 2 6
D 150 1 0 1 0 0 2 4
D 151 1 0 0 0 1 2 5
D 152 0 0 1 0 0 0 1
D 153 1 0 1 0 1 2 5
D 154 1 0 0 2 1 2 6
D 155 0 0 2 0 0 2 4
D 156 0 1 3 0 0 1 5
D 157 1 1 2 0 0 2 6
D 158 1 0 1 0 1 2 5
D 159 1 0 0 0 1 2 4
D 160 1 0 1 0 1 2 5
D 161 1 0 0 2 1 2 6
D 162 1 0 1 0 1 2 5
D 163 1 0 0 2 1 2 6
D 164 1 0 0 2 1 2 6
D 165 1 0 1 0 1 2 5
D 166 1 0 1 2 1 2 7
D 167 1 0 0 2 1 2 6
D 168 1 0 1 0 1 2 5
D 169 1 0 0 2 1 2 6
D 170 1 0 0 2 1 2 6
D 171 1 0 0 0 1 2 4
D 172 1 2 1 0 1 0 5
D 173 1 0 2 0 1 2 6
D 174 1 0 0 2 1 2 6
D 175 1 0 1 0 1 2 5
D 176 2 0 1 0 1 2 6
D 177 1 0 0 2 1 2 6
D 178 1 0 0 2 1 2 6
D 179 1 0 0 2 1 2 6
D 180 1 0 0 2 1 2 6
D 181 1 0 0 2 1 2 6
D 182 0 0 1 2 1 2 6
D 183 1 0 0 2 1 2 6
D 184 1 0 1 2 1 2 7
D 185 1 0 1 0 1 0 3
D 186 1 0 0 2 1 2 6
D 187 2 1 2 0 0 0 5
D 188 1 0 1 0 1 2 5
D 189 1 0 0 2 1 2 6
D 190 1 0 0 2 1 2 6
D 191 1 0 0 2 1 2 6
D 192 1 0 0 2 1 2 6
D 193 1 0 1 2 1 2 7
D 194 1 0 1 2 1 2 7
D 195 1 0 1 2 1 2 7
D 196 1 0 0 2 1 2 6
D 197 0 0 0 0 1 0 1
D 198 1 0 1 2 1 2 7
D 199 1 0 0 0 1 0 2
D 200 1 0 1 2 1 2 7
D 201 1 0 0 2 1 2 6
D 202 1 0 0 2 1 2 6
D 203 0 1 0 0 0 0 1
D 204 0 0 1 2 1 2 6
D 205 1 0 0 2 1 2 6
D 206 1 0 0 2 1 2 6
D 207 1 0 0 0 1 2 4
D 208 2 1 1 0 0 2 6
D 209 2 1 1 0 1 0 5
D 210 1 0 1 0 1 0 3
D 211 0 0 2 0 1 1 4
D 212 1 0 1 2 1 2 7
D 213 1 0 0 2 1 2 6
D 214 1 0 0 2 1 2 6
D 215 1 0 0 2 1 2 6
D 216 1 0 0 0 1 0 2
D 217 1 0 0 2 1 2 6
D 218 1 0 1 0 1 2 5
D 219 2 0 2 0 1 0 5
D 220 0 0 2 0 1 2 5
D 221 0 3 2 0 1 2 8
D 222 2 0 1 2 0 0 5
D 223 2 0 1 0 1 2 6
D 224 0 2 3 0 1 1 7
D 225 2 1 2 0 0 1 6
D 226 2 0 2 2 0 2 8
D 227 0 0 3 0 1 1 5
D 228 0 0 3 0 1 2 6
D 229 1 0 2 0 1 2 6
D 230 1 0 2 0 1 2 6
D 231 1 2 3 0 1 2 9
D 232 1 0 2 0 1 2 6
D 233 1 0 2 0 1 2 6
D 234 1 0 2 0 1 2 6
D 235 0 1 3 0 0 2 6
D 236 0 3 3 0 2 2 10
D 237 1 0 1 0 1 2 5
D 238 0 0 0 0 1 2 3
D 239 0 2 2 0 1 1 6
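As a usage note, the comparison described in the abstract, discourse quality classed by magnitude of spatial precision, can be reproduced from this table by splitting the quality values into comments with no spatial precision (weight 0) and comments with some spatial precision (weight 1 or 2) and comparing the groups. The Python sketch below is an illustration only, not the author's analysis code; the rows are a small subset transcribed from Appendix B, and the thesis reports its statistics from the full dataset using SPSS-style procedures (see the Kent State and PSU references).

# Minimal sketch: compare mean Discourse Quality Value across spatial
# precision classes, using a hand-transcribed subset of Appendix B.
from statistics import mean

# (document id, spatial precision weight, discourse quality value)
rows = [
    ("D 89", 2, 4), ("D 90", 2, 8), ("D 91", 2, 4), ("D 92", 2, 6),
    ("D 100", 0, 3), ("D 106", 1, 3), ("D 147", 0, 3), ("D 152", 0, 1),
]

no_precision = [dqv for _, sp, dqv in rows if sp == 0]    # weight 0
with_precision = [dqv for _, sp, dqv in rows if sp > 0]   # weight 1 or 2

print(f"mean DQV, no spatial precision:   {mean(no_precision):.2f}")
print(f"mean DQV, with spatial precision: {mean(with_precision):.2f}")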
Abstract
The importance of spatial precision in geographic information science is not limited to quantitative data. Because spatial data can also exist in qualitative form, this project showed how modifying a discourse quality index from the field of discourse ethics helped to determine whether mentioning specific spatial locations changes the quality of spatial narratives. The discourse quality index was modified by incorporating an item that detected the presence and magnitude of a spatial precision construct. The spatial narratives analyzed with this modified index were public comments submitted during a public policy revision process: the national forest plan revision at the Chugach National Forest in Alaska, U.S.A. One hundred fifty-one public comments submitted during this policy process were analyzed. Analysis showed that when discourse quality values were classed by their magnitude of spatial precision, discourse quality differed between comments with no spatial precision and those considered to have spatial precision. The results suggest, preliminarily, that employing spatial precision in narratives changes discourse quality during deliberative activities. This project demonstrated how spatial precision can be applied to qualitative datasets. Further, the way in which people use spatial precision to communicate during a policy revision process can affect how spatial narratives are understood and valued. Most importantly, this project showed that including spatial thinking in our discourse shapes the way people communicate their landscape values, and that spatial thinking is indeed an influential communicative tool. The results leave room to explore the degree to which incorporating precise spatial thinking into policy arguments could empower individuals and political groups. Suggestions for further research are provided.
Linked assets
University of Southern California Dissertations and Theses
Conceptually similar
Implementing spatial thinking with Web GIS in the non-profit sector: a case study of ArcGIS Online in the Pacific Symphony
Exploring the pernicious effects of redlining and discriminatory policies on an American city: a spatio-temporal case study of New York City
Generating trail conditions using user contributed data through a web application
A spatial analysis of veteran healthcare accessibility
Assessing the reliability of the 1760 British geographical survey of the St. Lawrence River Valley
Finding the green in greenspace: an examination of geospatial measures of greenspace for use in exposure studies
Soil lead contamination from the Exide battery smelter: the role of spatial scale in cleanup efforts
Measuring seasonal variation in food access: a case study of Everett, Washington
Precision agriculture and GIS: evaluating the use of yield maps combined with LiDAR data
The geographic connotations of reincarceration: a spatial analysis of recidivism in Washington State
Using pattern oriented modeling to design and validate spatial models: a case study in agent-based modeling
Demonstrating GIS spatial analysis techniques in a prehistoric mortuary analysis: a case study in the Napa Valley, California
Walking to the Longhouse: a deep map of the Central New York Military Tract & its indigenous history
Exploring land use changes in the city of Irvine's master plan
A spatial narrative of alternative fueled vehicles in California: a GIS story map
A spatial analysis of beef production and its environmental and health impacts in Texas
An accessibility analysis of the homeless populations' potential access to healthcare facilities in the Los Angeles Continuum of Care
Water quality in the Los Angeles River: a remote sensing based analysis
Assessing the use of normalized difference chlorophyll index to estimate chlorophyll-A concentrations using Landsat 5 TM and Landsat 8 OLI imagery in the Salton Sea, California
Questioning the cause of calamity: using remotely sensed data to assess successive fire events
Asset Metadata
Creator
Marder, Christopher Thomas (author)
Core Title
The role of precision in spatial narratives: using a modified discourse quality index to measure the quality of deliberative spatial data
School
College of Letters, Arts and Sciences
Degree
Master of Science
Degree Program
Geographic Information Science and Technology
Publication Date
01/31/2019
Defense Date
11/19/2018
Publisher
University of Southern California (original), University of Southern California Libraries (digital)
Tag
content analysis,deliberation,discourse quality,empowerment,geographic information systems,index,OAI-PMH Harvest,participatory mapping,precision,Public Policy,qualitative data,softGIS,spatial thinking
Format
application/pdf (imt)
Language
English
Contributor
Electronically uploaded by the author (provenance)
Advisor
Bernstein, Jennifer (committee chair), Sedano, Elisabeth (committee member), Vos, Robert (committee member)
Creator Email
christopher.marder@gmail.com,cmarder@usc.edu
Permanent Link (DOI)
https://doi.org/10.25549/usctheses-c89-117033
Unique identifier
UC11676794
Identifier
etd-MarderChri-7034.pdf (filename),usctheses-c89-117033 (legacy record id)
Legacy Identifier
etd-MarderChri-7034.pdf
Dmrecord
117033
Document Type
Thesis
Rights
Marder, Christopher Thomas
Type
texts
Source
University of Southern California (contributing entity), University of Southern California Dissertations and Theses (collection)
Access Conditions
The author retains rights to his/her dissertation, thesis or other graduate work according to U.S. copyright law. Electronic access is being provided by the USC Libraries in agreement with the a...
Repository Name
University of Southern California Digital Library
Repository Location
USC Digital Library, University of Southern California, University Park Campus MC 2810, 3434 South Grand Avenue, 2nd Floor, Los Angeles, California 90089-2810, USA