ONSHORE WIND POWER SYSTEMS (ONSWPS): A GIS-BASED
TOOL FOR PRELIMINARY SITE-SUITABILITY ANALYSIS
by
Jeffry D. Harrison
A Thesis Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
MASTER OF SCIENCE
(GEOGRAPHIC INFORMATION SCIENCE AND TECHNOLOGY)
August 2012
Copyright 2012 Jeffry D. Harrison
ACKNOWLEDGEMENTS
I would like to thank my friends and family who showed encouragement and support
throughout this difficult process, especially my Mother and my Father, long separated but
unified in their goal to give me every opportunity to succeed, who listened through many
long, frustrated conversations, and who imparted what wisdom they could in order to help
me understand the truth, despite how much I did not want to hear it. I would like to thank
Jessica Sanborn for her kindness, her positive attitude, and her willingness to tackle
abstract math problems that she had no context to understand, and without which I could
not have made it past Chapter 3. I would like to thank my committee, Chair Dr. John P.
Wilson, Dr. Jennifer N. Swift, and in particular Dr. Jordan T. Hastings for sharing his passion
for quality writing and forcing me to focus on it during the formulation of my project.
TABLE OF CONTENTS
ACKNOWLEDGEMENTS .............................................................................................................................................ii
LIST OF TABLES ............................................................................................................................................................ v
LIST OF FIGURES........................................................................................................................................................ vii
ABBREVIATIONS AND ACRONYMS .....................................................................................................................ix
ABSTRACT .......................................................................................................................................................................xi
CHAPTER 1: INTRODUCTION .................................................................................................................................1
1.1 Status of Wind Energy Development....................................................................................................1
1.2 Research Statement .....................................................................................................................................2
1.3 Motivation .......................................................................................................................................................3
CHAPTER TWO: BACKGROUND.............................................................................................................................8
2.1 Why wind energy? ........................................................................................................................................8
2.2 Basics of wind energy .................................................................................................................................9
2.3 Onshore vs. Offshore Wind Energy Development ......................................................................... 10
2.4 Renewable Energy Source (RES) siting ............................................................................................ 11
2.5 Decision Support Systems (DSS) and Geographic Information Systems (GIS) ................ 12
2.6 Multi-Criteria Analysis (MCA) .............................................................................................................. 14
2.7 Critical Factors in Wind Energy Siting ............................................................................................. 17
2.8 Sensitivity Analysis (SA) ......................................................................................................................... 18
2.9 Visualization ................................................................................................................................................ 19
2.10 Study Area .................................................................................................................................................. 20
CHAPTER THREE: METHODS AND MATERIALS ........................................................................................ 22
3.1 Project workflow........................................................................................................................................ 22
3.2 MCA Methodology Comparison ........................................................................................................... 23
Study A ................................................................................................................................................................... 23
Study B ................................................................................................................................................................... 25
Study C ................................................................................................................................................................... 27
Study D .................................................................................................................................................................. 29
3.3 MCA Comparison Discussion ................................................................................................................. 31
3.4 Proposed Framework .............................................................................................................................. 32
3.4.1 Stage 1 Evaluation: Exclusion of Non-feasible Areas ............................................................. 33
3.4.2 Stage 2 Evaluation: Geographical Suitability Assessment ................................................... 34
3.4.3 Stage 3 Evaluation: Suitability Assessment ................................................................................ 45
3.4.4 The Analytic Hierarchy Process (AHP) ......................................................................................... 47
3.5 SA Methodology .......................................................................................................................................... 53
3.6 Technology ................................................................................................................................................... 55
3.7 Data Processing.......................................................................................................................................... 55
CHAPTER FOUR: RESULTS AND DISCUSSION ............................................................................................. 63
4.1 Stage 1 Evaluation Results .................................................................................................................... 63
4.2 Stage 2 Evaluation Results .................................................................................................................... 67
4.3 Stage 3 Evaluation Results .................................................................................................................... 75
4.4 Suitability Assessment Discussion ...................................................................................................... 84
4.5 SA Results ...................................................................................................................................................... 85
4.6 SA Discussion ............................................................................................................................................... 97
CHAPTER FIVE: CONCLUSIONS .......................................................................................................................... 99
5.1 General Conclusions ................................................................................................................................. 99
5.2 Technical Conclusions .......................................................................................................................... 101
5.3 Future Work.............................................................................................................................................. 106
REFERENCES ............................................................................................................................................................ 109
APPENDIX ................................................................................................................................................................... 117
LIST OF TABLES
Table 1: Summary of relevant input criteria for four wind farm siting studies. ............................................ 32
Table 2: Stage 1 Evaluation criteria and constraints.................................................................................................... 34
Table 3: Stage 2 Evaluation criteria and constraints.................................................................................................... 36
Table 4: Wind power classes based on mean annual wind density and mean annual wind speed
at 50 m height, based on Rayleigh speed distribution of equivalent mean wind power
density. Data from NREL. ........................................................................................................................................ 37
Table 5: Suitability Index for the Wind Power Class (WPC) Layer ....................................................................... 38
Table 6: Suitability Index for the Proximity to Electrical Grid Layer................................................................... 39
Table 7: Suitability index for the Proximity to Urban Area/City Layer .............................................................. 42
Table 8: Suitability Index for the Proximity to Roads Layer .................................................................................... 42
Table 9: Suitability Index for the Land Cover Class Layer ......................................................................................... 44
Table 10: Suitability Index for the Slope Layer ............................................................................................................... 45
Table 11: Output criteria weights for Stage 3 Evaluation.......................................................................................... 46
Table 12: The fundamental scale, adapted from Saaty (1990). .............................................................................. 49
Table 13: Pairwise matrix for ONSWPS site selection criteria showing fundamental scale
values, reciprocal values, nth root values (n=6), priority vectors (relative weights),
the principal eigenvalue (λ max), the consistency index (CI) value, and the consistency
ratio (CR) value, based on Saaty (1990). ........................................................................................................ 50
Table 14: Random Index (RI) values, adapted from Saaty & Vargas (2001). .................................................. 52
Table 15: Acreage statistics based on Stages 1 and 2 of the analysis. ................................................................. 68
Table 16: Results of Moran’s I for both Stage 2 evaluation layers. ....................................................................... 72
Table 17: Suitable cell value statistics under two weighting schemes. .............................................................. 88
Table 18: Acreage statistics for optimal polygons under two different weighting schemes. ................. 89
Table 19: SA values for the WPC and GRID criteria using the OAT method..................................................... 91
Table 20: Cell distribution statistics for all six dynamic criteria under the OAT weighting
scheme (±20%). ........................................................................................................................................................... 97
Table 21: Criteria weights for the WPC layer OAT analysis .................................................................................. 117
Table 22: Criteria weights for the GRID layer OAT analysis.................................................................................. 117
Table 23: Criteria weights for the URBCITY layer OAT analysis. ........................................................................ 117
Table 24: Criteria weights for the ROAD layer OAT analysis. ............................................................................... 118
Table 25: Criteria weights for the LANDCOV layer OAT analysis. ...................................................................... 118
Table 26: Criteria weights for the SLOPE layer OAT analysis. ............................................................................. 118
Table 27: Default GAP Status Code assigned by designation type, from USGS (2011). ........................... 119
LIST OF FIGURES
Figure 1: Map showing thesis study area and the locations of existing onshore wind farms
(black ‘X’ symbols in map to the right). ........................................................................................................... 20
Figure 2: Schematic showing project workflow. ........................................................................................................... 22
Figure 3: Schematic of the criteria weight selection process using AHP, adapted from Chen, Yu,
& Khan (2010). ............................................................................................................................................................. 47
Figure 4: Schematic of the hierarchical structure of the ONSWPS Stage 3 Evaluation criteria,
based on Saaty (1990).............................................................................................................................................. 48
Figure 5: Histogram showing the number of cells by suitability score remaining after
“extraction by mask” in Stage 3, based on the AHP-derived criteria weights. ............................ 61
Figure 6: Sample spatial distribution of the suitable cells layer depicted in Figure 13. ............................ 61
Figure 7: Stage 1 constraint map showing areas excluded from the analysis (unsuitable cells),
the locations of existing wind farms, and areas of high WPC. ............................................................. 64
Figure 8: Map showing areas with high WPC and U.S. National Forest Land.................................................. 65
Figure 9: Map showing forested land cover, locations of existing wind farms, and areas with
high WPC. ........................................................................................................................................................................ 66
Figure 10: Stage 2 suitability map showing the results of the weighted overlay operation using
the AHP-derived criteria weights (10 = most suitable; 0 = unsuitable)......................................... 68
Figure 11: Stage 2 suitability map showing only areas with the highest suitability scores. ................... 69
Figure 12: Results of Moran’s I for the Stage 2 suitable areas (> 0) layer......................................................... 71
Figure 13: Map details showing the results of the Getis-Ord G-statistic test for Stage 2. In the
lower map, “hot spots” (clusters of high values) are shown in dark red. ...................................... 74
Figure 14: Venn Diagram representing the three stages of ONSWPS evaluation. ........................................ 75
Figure 15: Map of study area showing overlay results of Stage 1 and Stage 2 analysis. .......................... 76
Figure 16: Maps showing the difference between all Stage 3 suitable polygons (suitability score
> 0) and those polygons greater than 5,000 acres. ................................................................................... 77
Figure 17: Maps showing the comparison between the optimal polygons and the results of the
Stage 3 hot spot analysis (Getis-Ord G-statistic). ....................................................................................... 78
Figure 18: Maps showing the comparison between areas with high WPC and suitable area hot
spots................................................................................................................................................................................... 80
Figure 19: Maps showing the location of the optimal polygons in relation to cities, power lines,
existing ONSWPS, and the Stage 2 suitable areas mask. ........................................................................ 82
Figure 20: Maps showing locations of Tribal Lands relative to optimal sites and areas with high
WPC. ................................................................................................................................................................................... 83
Figure 21: Visualized results of the first SA weighting scenario, showing suitable areas based
on two different weighting schemes................................................................................................................. 86
Figure 22: Histograms showing the differences in suitable cell distribution between two
different weighting schemes. ................................................................................................................................ 87
Figure 23: Line graph comparing suitable cell distribution between two weighting schemes
and associated correlation coefficient. ............................................................................................................ 87
Figure 24: Map detail showing the differences in optimal polygons A, B, and C between two
weighting schemes (AHP and Equal Weight). .............................................................................................. 90
Figure 25: Maps showing the difference in suitable cell values for the WPC criterion under the
OAT method (± 20%). ............................................................................................................................................... 92
Figure 26: Histograms showing the differences in suitable cell distributions for the WPC
criterion at ± 20% of the baseline values. ...................................................................................................... 93
Figure 27: Line graphs showing the OAT ±20% cell distributions for the six dynamic criteria. .......... 94
Figure 28: Maps showing the differences in suitable cell distribution for the GRID criterion
under the OAT method (±20%). ......................................................................................................................... 95
Figure 29: Histograms showing the differences in suitable cell distributions for the GRID
criterion at ±20% of the baseline values. ....................................................................................................... 96
Figure 30: Histograms comparing the WPC layer and the GRID layer suitable cell distributions
under the OAT weighting scheme. ..................................................................................................................... 96
Figure 31: Schematic of the Stage 1 Model built using ArcGIS ModelBuilder. ............................................. 121
Figure 32: Schematic of the Stage 2 Model built using ArcGIS ModelBuilder. ............................................. 122
Figure 33: Schematic of the Stage 3 Model built using ArcGIS ModelBuilder. ............................................. 123
ABBREVIATIONS AND ACRONYMS
AHP- Analytical Hierarchy Process
AWEA- American Wind Energy Association
CO2- Carbon Dioxide
dB- Decibel (measure of sound pressure)
DOE- U.S. Department of Energy
DSS- Decision Support System(s)
EIA- U.S. Energy Information Administration
GHG- Greenhouse Gas(es)
GIS- Geographical Information System(s)
GW- Gigawatt (1 GW = 1,000 MW = 1,000,000,000 or 10^9 watts)
kW- Kilowatt (1 kW = 1,000 watts)
LCC- Land Class Code(s) (based on NLCD classifications)
m- Meter (the International System’s unit of length)
m/s- Meters per second (the International System’s unit for speed)
MC- Multi-Criteria
MCA- Multi-Criteria Analysis (also referred to as MC Analysis)
MCDM- Multi-Criteria Decision-Making
MCE- Multi-Criteria Evaluation
mph- Miles per hour (measure of speed)
MW- Megawatt (1 MW = 1,000 kW = 1,000,000 or 10^6 watts)
NAD_83- North American Datum of 1983 (geographic coordinate system)
NLCD- National Land Cover Database
NREL- National Renewable Energy Laboratory
NOx- Nitrogen Oxides
ONSWPS- On-Shore Wind Power System(s)
PTC- Production Tax Credit(s)
RES- Renewable Energy Source(s)
ROI- Return On Investment (economic measure)
RPS- Renewable Portfolio Standard(s)
SDSS- Spatial Decision Support System(s)
SMCA- Spatial Multi-Criteria Assessment(s)
SO2- Sulfur Dioxide
STEM- Science, Technology, Engineering, and Math (education standard)
USGS- U.S. Geological Survey
W- Watt (the International System’s unit for power)
WECS- Wind Energy Conversion System
WPC- Wind Power Class(es)
WPS- Wind Power System(s)
WRA- Wind Resource Assessment
ABSTRACT
Wind energy was the fastest growing form of renewable energy in the world during the last
decade and forecasts predict that this trend will continue. In the U.S., Renewable Portfolio
Standards (RPS) and federal tax incentives drive this trend from a policy perspective, but
despite its potential to reduce CO2 emissions and dependence on foreign fuel for electricity
generation, wind energy development remains a contentious issue and siting of wind
power systems remains problematic. This thesis presents a GIS-based tool for preliminary
site suitability analysis for Onshore Wind Power Systems (ONSWPS) that can be used to
address these issues from a planning perspective. This tool incorporates Multi-Criteria
Analysis (MCA) and the Analytical Hierarchy Process (AHP) along with various forms of
spatial and sensitivity analysis to provide quick visual access to ONSWPS site selection
information through a series of suitability maps.
CHAPTER 1: INTRODUCTION
1.1 Status of Wind Energy Development
Wind energy was the fastest growing form of renewable energy in the U.S. and the world
during the last decade (Bohn & Lant, 2009; DeCarolis & Keith, 2006; Hoogwijk, de Vries, &
Turkenburg, 2004; Rosenburg, 2008a; U.S. Department of Energy, 2008), and forecasts
predict that this trend will continue for the next decade and beyond (U.S. Energy
Information Administration, 2011). In the U.S., the majority of states have Renewable
Portfolio Standards (RPS) in place that mandate a percentage of electricity generation from
renewable energy sources (RES) (U.S. Energy Information Administration, 2011; Van
Haaren & Fthenakis, 2011). In addition, several federal tax incentives drive the wind
energy industry from a policy perspective (American Wind Energy Association [AWEA],
2008; Bohn & Lant, 2009; Rosenburg, 2008a). In fact, the growth of the wind energy
industry is highly dependent on the existence of federal Production Tax Credits (PTC),
which makes the cost of generating electricity from wind energy competitive with other
forms of electricity generation (Bohn & Lant, 2009).
Despite its potential to reduce CO2 emissions, conserve water and fuel, and reduce the
country’s dependence on foreign fuel for electricity generation, wind energy development
remains a contentious social, economic, and environmental issue (DeCarolis & Keith, 2006;
Denny & O'Malley, 2006; Kuvlevsky Jr., et al., 2007; Rosenburg, 2008b; Sutton & Tomich,
2005). The proper siting of wind power systems remains inherently problematic because
geographical limitations, public opposition, wildlife conservation, electricity grid
integration, and fuel market fluctuations all pose challenges for planners and developers.
Deciding which criteria to include in the site selection process, and how much priority to
assign each criterion, is the subject of considerable research and public debate, but all agree
that proper site evaluations and accurate resource assessments can save time, money, and
resources and can help to mitigate causes of costly delays (Cavallaro & Ciraolo, 2005; Chen,
Yu, & Khan, 2010; Dominguez & Amador, 2007; Hansen, 2005; Jankowski, 2009; Loring,
2007; Simao, Densham, & Haklay, 2009).
1.2 Research Statement
This thesis presents a GIS-based application for evaluating potential site suitability of
Onshore Wind Power Systems (ONSWPS) in order to provide quick visual access to this
information for politicians, developers, researchers, students, and the public. This
application will be useful for preliminary site selection of utility-scale and large distributed
wind power systems, and will be suitable for regional (approx. 1:3,000,000) and larger
scale site suitability analysis based on a set of physical, economic, and environmental
criteria, including topography, wind power capacity, land use, and proximity to
infrastructure. This application can be integrated into Spatial Decision Support Systems
(SDSS) as part of a Multi-Criteria Analysis (MCA) approach to ONSWPS siting, thus making
it a valuable planning tool. Finally, as a demonstration of spatial problem solving, it can also
serve as a teaching, learning, and decision-making tool through an interactive web-based
interface and suitability maps.
The working hypothesis is that combining GIS spatial analysis and visualization capabilities
with MCA is an effective approach for “solving” complex spatial problems like wind power
system siting, which must balance numerous geographic, technical, environmental,
economic, and social variables. The rationale is that this research can help ensure the best
use of this form of renewable energy by making information more accessible to interested
parties and by facilitating discussion on the aesthetic, environmental, and economic issues
surrounding wind power development. The Middle Columbia River Basin, covering
portions of Washington and Oregon States, was chosen as the pilot study region (Figure 1).
1.3 Motivation
The responsible production and use of energy is something that ties us all together as
citizens of the world. Recent concern over the adverse effects of global climate change has
spurred many nations to pursue alternative sources of energy (United Nations, 1997) and
has set in motion numerous policies to integrate RES into existing national energy mixes at
higher levels (Rosenburg, 2008a; U.S. Energy Information Administration, 2011). The
creation of an economically viable renewable energy infrastructure is a monumental issue
facing this and future generations, and contributions to research on this issue are of great
value to decision makers, to society, and to me personally.
The primary motivation for my research is to develop a GIS-based tool that serves multiple
practical purposes as well as integrates and expands on the work done by others in the RES
siting field, with a particular focus on wind energy. The foundation of this project involved
compiling, reviewing, and organizing the necessary data into a spatial database that
supports site suitability analysis, as well as model development, sensitivity analysis, and
the production of a series of site suitability maps.
Information dissemination is regarded as a critical factor in public acceptance of wind
energy development, and this in turn has tremendous impacts on the successful
implementation of wind energy projects (Berry, Higgs, Fry, & Langford, 2011; Bohn & Lant,
2009; Jobert, Laborgne, & Mimler, 2007; Loring, 2007; Malczewski, 2004; Rosenburg,
2008a; Simao, Densham, & Haklay, 2009; Sutton & Tomich, 2005; Van der Horst & Toke,
2010). By making this information more readily available to decision makers and the
public, I hope to stimulate and enhance discussions on the subject of wind energy
development, and by creating a tool that assesses many of the criteria involved in wind
energy project siting I intend to provide a practical context for those discussions.
Wind energy is a rapidly growing industry in much of the U.S. and in the Pacific Northwest
in particular. Washington State (where I live) has gone from having zero installed wind
capacity in 2000 to ranking 6th nationwide in installed wind capacity, with 2,356 MW as of
June 30, 2011. Oregon, which is 7th nationally with 2,305 MW, has experienced nearly
identical growth in that same period (U.S. Department of Energy, 2011). This trend is
predicted to continue due to volatile fuel prices and socioeconomic pressure to move away
from fossil fuel-based energy sources. Other incentives, such as the passage of Washington
State Initiative 937 in 2006, which mandates that large utilities (those serving more than 25,000 people) obtain 15% of their energy from renewable resources by 2020, and generous federal, state, and regional subsidies for
renewable energy projects, have also added to the momentum of this trend (North Carolina
State University, 2011; U.S. Department of Energy, 2008). As the number of suitable sites is
reduced through development, greater value will be placed on efficient methods to locate
potential wind energy development sites (Kuvlevsky Jr., et al., 2007; Marinoni, 2004).
One important outcome of this thesis will be the ability to integrate this tool into Decision
Support Systems (DSS), or more specifically, Spatial Decision Support Systems (SDSS). SDSS
are used to address complex, multi-faceted spatial problems, such as land use planning and
renewable energy siting, which require informed judgments rather than calculable
solutions. Since the inception of computer-aided GIS, one of its primary uses has been land
use planning; in fact, the evolution of GIS has largely been a response to the needs and
techniques of land use planners and developers (Malczewski, 2004). The research and
framework presented here will draw on well-documented land use planning theory and
research using GIS, and although it will rely on SDSS theory to inform some elements of its
design, the primary focus will be on the GIS portion of this combination that can serve as a
part of a SDSS for wind energy system siting.
Since RES siting is inherently multi-faceted, an approach capable of evaluating several
criteria simultaneously must be used. GIS have the ability to assimilate, analyze, and
visualize multiple spatial data sets that pertain to the different factors used for site
selection, but GIS are limited in their ability to assign values to these factors. MCA has been
shown to be an effective approach to assigning values to different criteria, and it is
compatible with the functionality of GIS (Baban & Parry, 2001; Cavallaro & Ciraolo, 2005;
Chen, Yu, & Khan, 2010; Conley, Bloomfield, St. George, Simek, & Langdon, 2010; Griffiths &
Dushenko, 2011; Hansen, 2005; Janke, 2010; Jankowski, 1995; Lee, Chen, & Kang, 2009;
Malczewski, 2004).
In fact, it is nearly impossible to find an RES siting study that does not use some form of
MCA in combination with GIS. However, a comprehensive review of these methods is lacking
and I have not come across any examples of studies attempting to implement an existing
methodology in another region. Additionally, the criteria evaluated in each study vary
widely, so it is difficult to compare one methodology to another when the baseline datasets
(i.e. input values) are different.
This thesis will examine and compare four of the MCA-GIS methods found in the literature
before presenting a new framework, followed by some Sensitivity Analyses (SA).
Comparing this methodology and model to those found in similar studies will provide
insight into the reliability and effectiveness of these models for locating potential sites.
Undertaking sensitivity analysis will provide some evaluation of the uncertainty involved
in the MCA, which may help decision makers understand which criteria are more sensitive
to subjective input values.
Another important outcome of this thesis will be the production and publication of multi-
layered suitability maps using GIS. Such maps can be an effective means of assessing the
suitability of potential sites for wind energy development because they can be a cost-
effective and visually powerful information source (Griffiths & Dushenko, 2011; Hansen,
2005; Ramirez-Rosado, et al., 2008; Sidlar & Rinner, 2006; Simao, Densham, & Haklay,
2009). These maps can be displayed on the web to provide free, quick access for those
interested in ONSWPS siting, and increasing access to this type of information has been
shown to enhance public participation in the siting process (Berry, Higgs, Fry, & Langford,
2011; Sidlar & Rinner, 2006; Simao, Densham, & Haklay, 2009). It is beyond the scope of
this thesis to explore the effectiveness of information dissemination on public
participation, but based on the substantial body of research on this subject in the literature
(Berry, Higgs, Fry, & Langford, 2011; Jankowski & Nyerges, 2003; Jankowski, 2009; Jobert,
Laborgne, & Mimler, 2007; Sidlar & Rinner, 2006; Sieber, 2006; Simao, Densham, & Haklay,
2009; Van der Horst & Toke, 2010), I believe it is reasonable to work from the assumption
that increasing the availability of information will benefit public participation in the
process.
CHAPTER TWO: BACKGROUND
2.1 Why wind energy?
Onshore wind power has tremendous potential as a competitively-priced alternative to
fossil fuel-based sources of electricity generation (Conley, Bloomfield, St. George, Simek, &
Langdon, 2010; Elliott, Wendell, & Gower, 1991; U.S. Department of Energy, 2008; U.S.
Energy Information Administration, 2011), and it is the fastest growing form of renewable
energy in the U.S. since 2000 (Bohn & Lant, 2009; Hoogwijk, de Vries, & Turkenburg, 2004;
Rosenburg, 2008a). Although it currently comprises less than 1% of the energy
supply (Rosenburg, 2008a), researchers estimate that wind energy could be the source of
20% of the U.S. electricity supply (American Wind Energy Association [AWEA], 2008; U.S.
Department of Energy, 2008).
In addition, wind energy development has potential environmental, economic, and energy
security benefits over fossil fuel-based sources, including the potential reduction of CO2
and other greenhouse gases (GHG), the reduction of air pollutants (SO2, NOx, etc.) and other
toxins, water conservation, domestic job creation, landowner revenue generation and rural
tax revenue, and perhaps most importantly, reduced reliance on foreign sources of fuel for
electricity generation (American Wind Energy Association [AWEA], 2008; DeCarolis &
Keith, 2006; Denny & O'Malley, 2006; Rosenburg, 2008a).
2.2 Basics of wind energy
Wind energy is a form of solar energy and, like solar, wind is an intermittent, or variable
output, source of energy (Ibrahim, Ghandour, Dimitrova, & Perron, 2011; Rosenburg,
2008a). Wind turbines, typically of a horizontal-axis configuration (see Dabiri 2011 for a discussion of horizontal- and vertical-axis configurations), capture the kinetic
energy in the wind with propeller blades and convert it to other forms of useable energy
(American Wind Energy Association [AWEA], 2004). The current trend is to convert this
energy into electricity which can be used to supplement or replace the electricity
traditionally created from fossil fuels (Denholm, Kulcinski, & Holloway, 2005; Ibrahim, Ghandour, Dimitrova, & Perron, 2011; Rosenburg, 2008a), though some argue that wind energy is a better candidate for hydrogen production to be used in fuel cells (Granovskii, Dincer, & Rosen, 2007). Because of this conversion
process, all wind energy systems technically should be called wind energy conversion
systems (WECS) (Billinton & Gao, 2008), but this thesis will use the nomenclature wind
power systems (WPS) - and specifically onshore wind power systems (ONSWPS) - in order to
avoid confusion between the entire power system and the on-site energy conversion part
of the system.
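For readers who want a quantitative anchor for the wind-resource discussion that follows, the power available in the wind per unit of rotor-swept area grows with the cube of the wind speed (P/A = 0.5 ρ v^3, where ρ is the air density and v the wind speed), which is why the wind power classes discussed later are so decisive for siting. The short Python sketch below is illustrative only and is not part of the thesis toolset.

```python
# Illustrative only: power density of the wind (not part of the thesis toolset).
# P/A = 0.5 * rho * v**3, with rho = air density (kg/m^3) and v = wind speed (m/s).

def wind_power_density(v_mps, rho=1.225):
    """Return the wind power density in W/m^2 at wind speed v_mps (m/s)."""
    return 0.5 * rho * v_mps ** 3

if __name__ == "__main__":
    for v in (5.0, 6.5, 8.0):
        print(f"{v:4.1f} m/s -> {wind_power_density(v):7.1f} W/m^2")
```

Doubling the wind speed thus yields roughly eight times the available power, before accounting for turbine efficiency limits.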
Further, a wind power system can consist of one single turbine or hundreds of turbines,
ranging from small distributed systems to large distributed systems to utility-scale
systems, and the term wind farm is often used interchangeably in the literature. However,
generally speaking, wind farms do not include small distributed systems, such as a single
home-owner or a rural school, because the energy is only used on site and is not connected
to the grid. Because of the wind resource dataset used in this thesis (at 50 m above the
ground), the appropriate focus will be on large distributed systems and utility-scale
systems, and the term wind farm will sometimes be used to describe these systems,
particularly when discussing other studies that use the term.
An understanding of the complete power system is necessary for thorough site suitability
analysis, including the energy conversion and storage systems, turbine type and
arrangement, power transmission, grid integration, load balancing, and the wind resource
itself. While these factors must be addressed at some point in the site selection process,
they primarily affect the final cost of the system or other economic measurements such as
return on investment (ROI). Detailed assessments are expensive and these expenses are
only appropriately incurred by developers in the later stages of a project. This thesis
focuses on preliminary site selection using a GIS-based tool, and as such will make many
informed assumptions about these economic factors based on the literature and use proxy
values where appropriate.
2.3 Onshore vs. Offshore Wind Energy Development
There is a notable dichotomy between onshore and offshore wind energy development in
terms of project costs, environmental impacts, public opposition, infrastructure
development, and siting constraints that essentially makes them two different forms of
renewable energy. The spatial datasets required for onshore wind energy assessments will
not suffice for offshore and vice versa, and the economic assessments of each are limited to
their respective forms. For example, according to the U.S. Energy Information
Administration (2011), the national average levelized cost of onshore wind energy is
approximately 39% of the cost of offshore wind energy, and this disparity impacts the
economic arguments for wind energy development significantly. Due to the various
discrepancies between onshore and offshore wind energy development and the different
datasets that would be required to model the two, this thesis will be limited to onshore
wind energy analysis, which currently is cost-competitive with other forms of electricity generation when the federal Production Tax Credit is in place (Bohn & Lant, 2009).
2.4 Renewable Energy Source (RES) siting
RES availability is always a matter of geography, and the first step in the siting process
must always be an assessment of the availability of a resource at a given location
(Dominguez & Amador, 2007; Malczewski, 2004; Voivontas, Assimacopoulos, Mourelatos,
& Corominas, 1998). For wind energy, this consists of assessing and measuring wind
characteristics like speed, power, density, prevailing direction, daily and seasonal variation,
long-term consistency (climate cycles), turbulence and wake, temperature, and uncertainty
of the wind at various heights, typically called hub heights (the central point along the blade axes where the hub of the turbine generator is located), above the Earth's surface (American Wind Energy
Association [AWEA], 2004; Dabiri, 2011; Ozerdem, Ozer and Tosun, 2006; Prasad, Bansal
and Sauturaga, 2009). This type of analysis is called a Wind Resource Assessment (WRA)
and it is critical to any wind energy project (Prasad, Bansal, & Sauturaga, 2009).
However, different scales and applications of wind energy development require WRA at
different hub heights (Elliott, Wendell, & Gower, 1991). For example, small distributed
systems typically do not need to know how the wind behaves 80 meters above the ground
because their turbines will not be that tall, whereas large utility-scale wind operations
would be acutely interested in that information. It is not within the scope of this thesis to
make detailed WRA or to critique the methods used to make WRA; it is an extremely
technical, time- and resource-intensive process and, fortunately, much work has already
been done for this type of application.
Organizations around the world have dedicated substantial resources to measuring the
wind at different hub heights so that planners, developers, and the public have access to
this information. The National Renewable Energy Laboratory (NREL) is one such
organization in the U.S., and they have compiled a number of useful datasets for utility-
scale or large distributed wind energy development (Janke, 2010). This thesis utilizes the
NREL High Resolution Wind Resource at 50 m dataset for the Pacific Northwest Region of
the U.S., which can be obtained at http://www.nrel.gov/gis/data_wind.html. This dataset
provides an adequate level of detail for regional analysis of annual average wind power
based on wind power classes (WPC) at a height that is useful for utility-scale and large
distributed systems, and it is an essential starting point for wind energy site selection
processes in the U.S.
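To make the role of the NREL dataset concrete, the sketch below shows how a wind power class raster might be reclassified into suitability scores of the kind listed later in Table 5. It is a minimal illustration: the class-to-score mapping is a hypothetical placeholder, not the actual index used in this thesis.

```python
# Hypothetical example: reclassify NREL wind power class (WPC) values (1-7)
# into suitability scores. The mapping below is illustrative only and is not
# the actual index defined in Table 5 of this thesis.
import numpy as np

WPC_TO_SCORE = {1: 0, 2: 0, 3: 4, 4: 6, 5: 8, 6: 10, 7: 10}  # placeholder mapping

def reclassify_wpc(wpc_raster):
    """Map each cell's wind power class to a suitability score (0-10)."""
    scores = np.zeros_like(wpc_raster, dtype=np.int8)
    for wpc, score in WPC_TO_SCORE.items():
        scores[wpc_raster == wpc] = score
    return scores

if __name__ == "__main__":
    demo = np.array([[1, 3, 4],
                     [5, 6, 7]])
    print(reclassify_wpc(demo))
```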
2.5 Decision Support Systems (DSS) and Geographic Information Systems (GIS)
DSS are often combined with GIS to address problems that are inherently spatial or have a
geographic component, yielding Spatial Decision Support Systems (SDSS) (Marinoni, 2004).
GIS alone do not constitute SDSS because a GIS just handles the data; it does not provide a
systematic approach to making complex, subjective decisions. Conversely, SDSS do not
have all of the tools required to unlock the value in complicated spatial data, and so
combining the two is necessary when seeking solutions to multi-faceted spatial problems
(Jankowski & Nyerges, 2003; Malczewski, 2004; Simao, Densham, & Haklay, 2009).
The research and tool presented in this thesis can be used as part of SDSS for locating
potential, suitable, and optimal sites for wind energy development, the expansion of which
is a governmental and societal goal (American Wind Energy Association [AWEA], 2008;
Bohn & Lant, 2009; Hoogwijk, de Vries, & Turkenburg, 2004; Rosenburg, 2008a; U.S.
Department of Energy, 2008; United Nations, 1997; Van Haaren & Fthenakis, 2011). DSS
are a common method for land-use planning and project management activities that
require the consideration and analysis of multiple, often diverse or unquantifiable,
variables (Baban & Parry, 2000; Cavallaro & Ciraolo, 2005; Jankowski & Nyerges, 2003;
Malczewski, 2004; Ramirez-Rosado et al., 2008; Simao, Densham, & Haklay, 2009). Decision
makers employ DSS when making complex decisions that involve many stakeholders, often
with conflicting priorities and agendas, and the result is nearly always a compromise rather
than a unanimous decision (Cavallaro & Ciraolo, 2005).
GIS offer a level of functionality that is difficult to achieve with other software packages;
they have powerful analytic capabilities, exceptional spatial data management, storage, and
retrieval functionality, and an array of visualization tools that make them an invaluable tool
for site suitability analysis (Malczewski, 2004; Marinoni, 2004). Modern GIS have the
advantage of using computers, but the spatial analysis techniques used in land-use
planning and renewable energy siting are not new (Malczewski, 2004; Rosenburg, 2008b).
Hand-drawn maps using overlay techniques for land use planning purposes date to the late
19th and early 20th centuries (Malczewski, 2004). As technology has evolved, the use of GIS
has spread to nearly all sectors of society, and although there are concerns over the equity
offered by this highly technical software (Sieber, 2006), the body of scientific research
supports the notion that GIS is an effective way to approach site suitability problems
(Baban & Parry, 2000; Cavallaro & Ciraolo, 2005; Dominguez & Amador, 2007; Griffiths &
Dushenko, 2011; Hansen, 2005; Janke, 2010; Malczewski, 2004; Rosenburg, 2008b; Sidlar
& Rinner, 2006; Simao, Densham, & Haklay, 2009; Tegou, Polatidis, & Haralambopoulos,
2010; Van Haaren & Fthenakis, 2011).
2.6 Multi-Criteria Analysis (MCA)
Multi-Criteria Analysis (MCA) is a method for evaluating the relative importance of
multiple variables as input criteria for making complex decisions (Chen, Yu, & Khan, 2010;
Hansen, 2005; Marinoni, 2004; Van Haaren & Fthenakis, 2011). MCA is by nature a complex
process, the essential concept being that a number of relevant criteria must be identified
and assessed in terms of value, or weight, with respect to the influence the criteria have on
the final decision. In spatial analysis, this is often accomplished by creating a suitability
map that is composed of several layers, each layer representing one of the criteria. The
criteria are given a weighted suitability score, and these scores are represented as different
classes or categories, which are then symbolized on the map layer showing the suitable
areas for that criterion. The layers are then overlaid on the map to yield a final site suitability
map, from which the user can then identify optimal areas and continue with a more
detailed investigation of those sites.
This method is noteworthy for its situational-adaptive properties and ability to assess a
wide range of tangible and intangible variables based on an assigned weighting scheme
rather than as hardened values. Variations of MCA pervade the literature under several names, including MCE (Multi-Criteria Evaluation), MCDM (Multi-Criteria Decision-Making), MCDSS (Multi-Criteria Decision Support Systems), SMCDM (Spatial Multi-Criteria Decision-Making), and SMCA (Spatial Multi-Criteria Analysis), but they all rely
on some sort of weighting scheme and they all share the common goal of providing a
framework to assess many disparate types of criteria (Baban & Parry, 2000; Berry, Higgs,
Fry, & Langford, 2011; Cavallaro & Ciraolo, 2005; Chen, Yu, & Khan, 2010; Conley,
Bloomfield, St. George, Simek, & Langdon, 2010; Griffiths & Dushenko, 2011; Hansen, 2005;
Janke, 2010; Malczewski, 2004; Simao, Densham, & Haklay, 2009; Tegou, Polatidis, &
Haralambopoulos, 2010; Van Haaren & Fthenakis, 2011). In the case of wind energy siting,
these include avian mortality, land use/land cover/land ownership, wildlife habitat, wind
speed/wind power/wind density estimates, energy storage and energy grid requirements,
visual and auditory disturbances, topography, geology, radar interference, public
participation, and cost-revenue analysis. This thesis will not include all of these criteria
because high quality data is either not available or is too location-specific for regional
analysis, and the intent is to create a tool that can be used for preliminary site selection
based largely on geographical criteria.
Most of the relevant criteria for preliminary analysis can be addressed using just a few data
layers because multiple constraints can often be satisfied by running different spatial
analysis operations on the same GIS layer. For example, noise, visual disturbance, and
safety (from parts malfunctions or ice throws) are all criteria that can be analyzed from
buffering a ‘major cities/urban areas’ layer, and this same layer also embodies an economic
argument, as an urban area represents a demand for electricity. There are nonetheless
some criteria that require multiple data layers, such as the critical habitat criterion, which
assimilates information from several departments and organizations (i.e. U.S. Department
of Fish and Wildlife, Bureau of Land Management, U.S. Department of Ecology, Nature
Conservancy, etc.), and so will have several input layers.
Because criteria weights are based on the perceived importance of the selected criteria to the different actors (and the selection process itself is most likely biased), little consensus exists on how to derive MCA criteria weights. One party may argue that protecting avian habitat should hold more weight than protecting rural homeowners from “shadow flicker” (the moving or flickering shadows cast by rotating turbine blades, often
rapidly) or turbine noise, while another may consider avian mortality to be a negligible
issue and cite the evidence that cars kill more birds than turbines each year. Another may
consider turbines a blight on the landscape that will reduce tourist dollars while another
may consider turbines a valuable source of income (wind power developers often lease
agricultural land from rural landowners), while others may even consider them a tourist
attraction. The bottom line is that with so many actors involved and so many agendas to
reconcile, wind energy development is always a compromise.
One of the most common methods for deriving criteria weights is the Analytical Hierarchy Process (AHP). Originally described by Saaty (1977), this rule-based method is among those most commonly used by decision-makers and planners for evaluating multi-criteria decisions (Pohekar & Ramachandran, 2004), and it provides a calculable consistency factor (in the form of a ratio) that gives decision-makers a
considerably higher level of confidence in the criteria weighting process (Boroushaki &
Malczewski, 2008; Chen, Yu, & Khan, 2010; Saaty, 1977).
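For readers unfamiliar with AHP mechanics, the sketch below derives relative weights from a pairwise comparison matrix via its principal eigenvector and computes the consistency index and consistency ratio. The 3-by-3 matrix is a made-up example, not the six-criterion matrix presented later in Table 13, and the random index values used here are the commonly cited ones, which may differ slightly from the Saaty & Vargas (2001) values in Table 14.

```python
# Illustrative AHP sketch (not the thesis's actual matrix or weights).
# Weights come from the principal eigenvector of the pairwise matrix;
# consistency is checked via CI = (lambda_max - n) / (n - 1) and CR = CI / RI.
import numpy as np

# Commonly cited random index (RI) values for matrix sizes 1-10.
RANDOM_INDEX = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
                6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def ahp_weights(pairwise):
    """Return (weights, consistency_ratio) for a reciprocal pairwise matrix."""
    n = pairwise.shape[0]
    eigvals, eigvecs = np.linalg.eig(pairwise)
    k = np.argmax(eigvals.real)              # index of the principal eigenvalue
    weights = np.abs(eigvecs[:, k].real)
    weights /= weights.sum()                 # normalize weights to sum to 1
    lambda_max = eigvals[k].real
    ci = (lambda_max - n) / (n - 1)
    cr = ci / RANDOM_INDEX[n]
    return weights, cr

if __name__ == "__main__":
    # Hypothetical 3-criterion comparison using values from the 1-9 scale.
    m = np.array([[1.0, 3.0, 5.0],
                  [1/3., 1.0, 3.0],
                  [1/5., 1/3., 1.0]])
    w, cr = ahp_weights(m)
    print("weights:", w.round(3), "CR:", round(cr, 3))
```

With this example matrix the weights come out to roughly 0.64, 0.26, and 0.10, with a consistency ratio of about 0.03, well under the 0.10 threshold usually considered acceptable.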
2.7 Critical Factors in Wind Energy Siting
Besides the wind resource itself, there are a number of environmental and economic
criteria that limit the suitable areas for wind energy development. After a thorough review
of the literature, six dynamic criteria have been identified in this analysis as critical: wind
power class (WPC), distance to the electricity grid, distance to cities/urban areas, distance
to roads, land cover class, and slope. Other criteria identified in this analysis as important
to wind energy development include critical wildlife/vegetation habitat, areas near
airports, military installations, National Parks, Forests, Recreation Areas, and Monuments,
state and local parks or recreation areas, wetlands, tribal lands, and areas with karst or
unstable soil conditions.
These criteria are classified numerous ways in the literature, but this thesis will focus on
separating them into two basic categories: simple and dynamic. Simple criteria are
evaluated in Stage 1 of this analysis, the dynamic criteria are evaluated in Stage 2, and the
two are combined in Stage 3. The sensitivity analysis (SA) will evaluate only the dynamic
(or weighted) criteria.
In this thesis, the defining property of critical criteria is that they are dynamic. These
criteria are dynamic in the sense that their impact on the suitability of a particular site
changes in relation to the other criteria depending on the perceived importance (or weight)
of the criteria. They are also more difficult to quantify because they deal with a range of
values. As such, these criteria must be evaluated differently than the simple criteria, which
can be evaluated with a simple Boolean-type exclusionary process based on geographical
constraints. The simple criteria represent areas that are generally not suitable for
development under any circumstances, and these areas are excluded from the analysis in
Stage 1. Many of these criteria relate to ecological constraints and habitat preservation,
especially for birds and bats, which are the species most adversely affected by
large wind turbines (Barrios & Rodriguez, 2004; Kuvlevsky Jr., et al., 2007; Madders &
Whitfield, 2006; Sutton & Tomich, 2005).
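The Boolean-type exclusion applied to the simple criteria can be illustrated with a short masking operation. The constraint layers named below are hypothetical placeholders, not the actual Stage 1 inputs listed in Table 2.

```python
# Illustrative Stage 1-style exclusion (placeholder layers, not Table 2).
# Each constraint is a Boolean raster where True marks an excluded cell;
# the combined mask is True wherever any constraint applies.
import numpy as np

def exclusion_mask(*constraints):
    """Return a Boolean array that is True where any constraint excludes a cell."""
    mask = np.zeros_like(constraints[0], dtype=bool)
    for constraint in constraints:
        mask |= constraint
    return mask

if __name__ == "__main__":
    protected_area = np.array([[True, False], [False, False]])
    open_water = np.array([[False, False], [True, False]])
    mask = exclusion_mask(protected_area, open_water)
    print(mask)            # True cells are removed from further analysis
    suitability = np.full((2, 2), 7.0)
    suitability[mask] = 0  # excluded cells receive a suitability score of 0
    print(suitability)
```

In the thesis workflow, the cells removed in this way are what the Stage 1 constraint map (Figure 7) depicts.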
2.8 Sensitivity Analysis (SA)
SA is a beneficial measure to include in MCA approaches because it provides insight into
the sensitivity of the outputs (i.e. the suitable areas for development) to errors, inaccurate
assumptions, or perturbations in the input values (i.e. the criteria values and/or criteria
weights). SA aids in assessing the precision and limitations of the model (Chen, Yu, & Khan,
2010). Because the criteria values are based on the perceptions of various stakeholders and
decision makers, they are often subjective or conditional. The criteria weights may also be
subjective or conditional, even if using a systematic approach such as AHP to derive criteria
weights (Marinoni, 2004; Saaty, 2004), or they may represent a range of values
(Boroushaki & Malczewski, 2008; Chen, Yu, & Khan, 2010; Karapetrovic & Rosenbloom,
1999). SA can therefore help identify where the greatest uncertainty exists, whether in
criteria values or criteria weights, and can identify which criteria need to be evaluated
more carefully.
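As a concrete illustration of one common way to carry out a one-at-a-time (OAT) weight perturbation of the kind reported later in the sensitivity analysis, the sketch below changes a single criterion weight by plus or minus 20% and rescales the remaining weights proportionally so the set still sums to one. The weight values are placeholders, not the AHP-derived weights used in this thesis, and the rescaling rule shown is one convention that may differ from the exact procedure described in Chapter 3.

```python
# Illustrative OAT weight perturbation (placeholder weights; the
# proportional rescaling rule is one common convention and may differ
# from the exact procedure used in Chapter 3 of this thesis).
def perturb_weight(weights, target, pct):
    """Change `target`'s weight by `pct` (e.g. +0.20) and rescale the other
    weights proportionally so that the full set still sums to 1."""
    new = dict(weights)
    new[target] = weights[target] * (1.0 + pct)
    others_old = sum(v for k, v in weights.items() if k != target)
    others_new = 1.0 - new[target]
    for k in new:
        if k != target:
            new[k] = weights[k] * others_new / others_old
    return new

if __name__ == "__main__":
    # Placeholder baseline weights for the six dynamic criteria.
    base = {"WPC": 0.40, "GRID": 0.20, "URBCITY": 0.15,
            "ROAD": 0.10, "LANDCOV": 0.10, "SLOPE": 0.05}
    for pct in (-0.20, 0.20):
        w = perturb_weight(base, "WPC", pct)
        print(pct, {k: round(v, 3) for k, v in w.items()},
              "sum =", round(sum(w.values()), 3))
```

Re-running the suitability overlay with each perturbed weight set and comparing the resulting cell distributions is, in broad terms, what the OAT results reported in Chapter 4 show.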
2.9 Visualization
One of the greatest advantages of GIS is visualization. Most researchers using GIS-based
approaches to RES siting use maps to visually analyze locations and display results.
Although recent work has explored presenting 3D visualizations via virtual reality
technology (Bishop & Stock, 2010; Stock, et al., 2008) that show what a site would look like
after development (i.e. with wind turbines, new roads, power lines, etc.) as a way to
evaluate public acceptance of new projects, or even through the use of video games
(Bishop, 2011), maps remain the predominant visualization medium. Different types of
maps and mapping applications have been used: dynamic maps, web-based maps, static
maps, argumentation maps, and suitability maps (Conley, Bloomfield, St. George, Simek, &
Langdon, 2010; Elliott, Wendell, & Gower, 1991; Rodman & Meentemeyer, 2006; Sidlar &
Rinner, 2006; Simao, Densham, & Haklay, 2009). Maps are an extremely effective vehicle for communicating geographic information and will be central to this thesis's goal of increased information dissemination.
2.10 Study Area
The geographic study area for this thesis is limited by the number of geographic data
layers, the size of the datasets, and the computational time required to perform the
analysis. Under these constraints, a 270 by 270-mile area (72,900 sq. miles or
approximately 45.8 million acres) was chosen to facilitate regional analysis at a scale of
1:3,000,000 and larger (NAD_83 Geographic Coordinate System, Oregon Statewide Lambert
Conformal Conic projection). The chosen area encompasses the Middle Columbia River
Basin, which comprises the southern portion of Washington State and northern portion of
Oregon, east of the Cascade Mountain Range (Figure 1). The approximate study area range
is 123° W to 117° W and 44° N to 48° N.
Figure 1: Map showing thesis study area and the locations of existing onshore wind farms
(black ‘X’ symbols in map to the right).
This area was selected for analysis for two reasons: 1) All of the existing (and planned)
onshore wind farms in Washington and Oregon are located in this region, indicating that
this is a viable area to apply this tool to, rather than randomly selecting an area where
there may be no suitable sites. If this tool shows promise in this study area, examination of
other areas would then be justified; and 2) The inclusion of an “existing wind farms” GIS
layer allows us to evaluate the models and the criteria weights in a pragmatic, rather than
scientific, way. Since wind farm siting is not, as of yet, a purely scientific endeavor, having
so many unquantifiable variables and containing enormous uncertainty in regards to the
social and economic variables, a simple comparison with where wind farms actually exist
may provide additional insight into the effectiveness and limits of the models. If a study
area was chosen where no wind farms existed, this type of “ground truth” evaluation would
be impossible.
CHAPTER THREE: METHODS AND MATERIALS
3.1 Project workflow
MCA-GIS site suitability projects often have similar workflows, and this thesis follows a
basic approach, beginning with a detailed literature and methods review and the careful selection of the criteria to be analyzed (Figure 2). This first step is critical to the outcome of the project. A number of routine GIS processes and operations are then performed to
generate the dataset, followed by the spatial analysis portion of the project, including
sensitivity analysis, and finally the output maps are created and published.
Figure 2: Schematic showing project workflow. The workflow proceeds from determining project scope (literature and methods review, criteria selection), through data collection and processing (acquiring and examining datasets, converting to a common format), database organization (selecting GIS data, determining constraints, deriving criteria weights with the AHP), the Stage 1 Evaluation (excluding non-feasible areas to create a constraint map), the Stage 2 Evaluation (suitability assessment to create a suitability map), and the Stage 3 Evaluation (combining the Stage 1 and Stage 2 maps to create optimal areas maps), to the final evaluation (final analysis and sensitivity analysis).
3.2 MCA Methodology Comparison
This thesis reviews four MCA/GIS-based studies for wind farm site suitability prior to
presenting its own framework. These studies were selected based on the similarities of
their geographic study areas in terms of the level of infrastructure development,
development costs and standards, social attitudes, the geographic features, and the policies
and political objectives. It is difficult, for example, to compare a site selection study
performed in a developing country having an unstable political structure with a study
performed in New York State because the perceived values of the various economic and
environmental constraints may vary widely, and, of course, policies at the federal, regional,
state, and/or local level can have a tremendous impact on development potential. Specific
policies aside, these four studies compare relatively well and all contribute something
valuable to the framework presented in this thesis.
Study A
The first approach was applied in the U.K. by Baban and Parry (2001) using the IDRISI GIS.
Fourteen constraints were evaluated (see Table 1 at end of section), but there were some
notable exclusions: airports, military facilities, unstable soil conditions, national parks and forests, and any specific mention of critical avian habitat or migratory zones. The authors
selected the constraints based on results from questionnaires sent out to local council
bodies and private wind companies, who referred to guidance documents about wind farm
siting, and so there is no argument against the relevance of these criteria for that region.
However, based on the literature it seems that the excluded constraints are also critical to
consider, and the lack of a detailed description about the ecological constraints data is
somewhat unsettling.
Baban and Parry applied a three-part approach to their analysis. The first stage was to
exclude unsuitable sites (cell scores of 10 = total constraint; 0 = no constraint), and this
was followed by a comparative analysis of two weighting schemes. The first scheme
assumed that all criteria weights were equal, while the second assigned weights based on
their perceived importance. The authors grouped the factors into four classes of
importance prior to entering the layers into pairwise comparisons, from which the relative
importance of each layer was compared to the others, ultimately yielding a pairwise matrix
of all layers. From this matrix, the principal eigenvector could be computed to determine a
best-fit set of weights for the criteria.
However, it is unclear what process the authors used to determine which factors fell into
which classes, and thus it is difficult to interpret the results in a practical way, and this lack
of measurable consistency in the selection process limits the effectiveness of the
methodology. For example, Grade-1 factors included slope, roads, and urban centers, while
“ecological sites” and water bodies (which were the authors’ proxies for critical wildlife
habitat) were listed as Grade-2 factors, and there was no mention of distance to the
electrical grid or the wind resource itself in this section (these were listed as constraints
earlier in the article however). While the pairwise matrix method may be sound, the input
values seem to have been arbitrarily selected, and, if questioned by planners or
conservationists, the authors of this study may have a difficult time defending their results.
The most important conclusion from this study was that using a variable weighting scheme
(derived from calculating the principal eigenvector of the pairwise matrix) provided a
more effective method for identifying more suitable land area than simply assuming equal
weights for all layers. This makes sense because giving equal weight to a secondary or
tertiary criterion will most likely either exclude more land area or lower the suitability
scores of more land area than it should. It could also reduce the suitability scores for the
most important criteria, further reducing the area of land scored as most suitable. This is a
basic form of sensitivity analysis (SA), but there remains a high degree of ambiguity due to
the synergistic qualities of complex, multi-criteria datasets.
Study B
The second approach was used in the Greater San Francisco Bay Area of California by
Rodman and Meentemeyer (2006). The study area is heavily populated and has severe
geographic constraints, which present a considerable challenge for wind development. The
authors employed a four-part analytic framework, first calculating suitability scores for a physical model, an environmental model, and a human impact model, and then running a series of combined models across all three, averaging the scores. Much like the first study, if any
location had a suitability score of ‘0’ (Unsuitable = 0; Excellent = 4) in any of the single
models, then that location would receive a score of ‘0’ in the combined model as well, no
matter what the score was in another model. The models were run for both small-scale
(>4.5 m/s), grid-connected turbines and large-scale (>7 m/s) turbines.
Rodman and Meentemeyer (2006) did a better job explaining the rationale behind their
weighting scheme, although we see a fascinating example of conflicting perceptions of
importance between this study and Baban and Parry (2001). In their environmental model,
Rodman and Meentemeyer (2006) assigned the highest weight to land use/vegetation,
and within that category cropland and pasture scored the highest for suitability because
farmers and ranchers can earn extra income by leasing their land to wind developers and it
does not significantly disrupt farming activities or disturb undeveloped land. Conversely,
Baban and Parry (2001) included a specific constraint against taking up Grade-1 or Grade-2
agricultural land, thus demonstrating the problems inherent in assigning weights based on
subjective perceptions. This is also an example of the complexity added through economic
arguments, which are extremely context-dependent and therefore difficult to assess and
model at the preliminary stage.
The advantage of this approach is that it allows for some basic SA among the three models.
The physical model provides the land area where development could feasibly occur, while
the environmental and human impact models reduce that land area through a set of
constraints that can quantifiably indicate which criteria have the most impact on the
suitable land area. However, as with the first study, there are some notable exclusions:
proximity to the electricity grid and visual disturbance (proximity to urban areas and
recreation areas), to which the authors admit, but also proximity to roads, water bodies,
military installations, airports, tribal lands, critical avian habitat, and unstable soil
conditions (karst).
Although the authors used a relatively sparse criteria set, their results show that their
models accurately located three land areas where wind development had already occurred
or had been planned. However, they admit that public opposition was present at two of the
three sites – one site in particular where public opposition prevented development
altogether – and they suggest that their models could benefit from more detailed datasets
and the inclusion of a public acceptance factor.
Study C
A third approach, by Van Haaren and Fthenakis (2011), was conducted for New York State
using ArcGIS 9.3.1 and it employed a three-stage framework: Stage 1 entailed the exclusion
of non-feasible sites, Stage 2 consisted of an economic evaluation, and Stage 3 was a bird
impact evaluation. This study is the most involved in terms of evaluating the economic
arguments and constraints, and is atypical among wind farm site suitability studies because
it looks at an entire (relatively large) state rather than a small geographic study area. The
authors use the term spatial multi-criteria assessment (SMCA) to describe their approach.
The authors went into great detail to describe their rationale behind the selected criteria,
and they drew on a more comprehensive dataset than the first two studies. In addition to
the expanded dataset and economic evaluation, other unique facets of this study are the
inclusion of geologically unstable areas (specifically karst, which produces porous ground, sinkholes, and caves; for a detailed discussion of karst geology, see Waltham & Fookes, 2003), the exclusion of Important Bird Areas (IBAs), land clearance costs, and a measure of cost optimization between building new substations and upgrading or expanding existing facilities. After the exclusion of non-feasible sites (Stage 1),
the authors ranked the remaining areas by net present value (NPV) based on the cost for
adding feeder lines, the cost for building new roads, and the cost of land clearing.
While the addition of an economic evaluation is helpful, it is problematic because the costs of the technology, the behavior of the wind resource, and the costs of producing wind energy are not constant, and because wind energy development is largely policy-driven at the
regional, state, or local level (Boccard, 2009; Bohn & Lant, 2009; Ibrahim, Ghandour,
Dimitrova, & Perron, 2011). This is not to say that calculating the NPV of selected areas is a
meaningless exercise; it is certainly a valuable measure to developers and planners and so
must be considered at some stage in the planning process. The problem is the inherent
limitation of calculating the NPV at such a scale (an entire state) based on one turbine type
and its associated nameplate value (the maximum output rating of a turbine). Studies have
shown that realized output values are often significantly less than nameplate capacity estimates (Boccard, 2009), and this must be taken into consideration in any detailed economic evaluation. In the authors' defense, they do admit the limitations of this type of assessment
and suggest that the user of the tool can change these input values to suit the situation,
which is an advantage of this model.
Another important feature of this study was the inclusion of criteria specifically focused on
avian habitat. Bird and bat mortality from turbines and habitat disturbance or destruction
are among the most controversial issues surrounding wind energy development (Barrios &
Rodriguez, 2004; Conley, Bloomfield, St. George, Simek, & Langdon, 2010; Kuvlevsky Jr., et
al., 2007; Sutton & Tomich, 2005), but this was the only study reviewed here that included
this constraint.
The results of this analysis were compared to the locations of existing wind farms in NYS
and the tool accurately predicted feasible sites for each existing wind farm, although they
were not always located in the most suitable areas. One important conclusion from this
study is that the MCA-GIS method is effective in identifying suitable areas for development.
However, the study was weakened by the absence of any robust sensitivity analysis,
particularly regarding the economic criteria.
Study D
The final study reviewed here, by Tegou et al. (2010) for the Island of Lesvos, Greece, takes
the MCA-GIS methodology a step further by including a systematic approach to selecting
criteria weights using the AHP. The AHP allows the user to assign criteria weights based on
relative importance (pairwise comparisons) to the overall goal of the decision hierarchy,
rather than based on perceived importance. The result of the pairwise comparisons for all
criteria is a pairwise matrix from which the principal eigenvector can be calculated.
Although Baban and Parry (2001) employed the pairwise comparison portion of this
method, they did not mention the use of any systematic method (such as the AHP) to
classify their criteria into grades of importance, and so were unable to evaluate the
consistency of their judgments.
Consistency is crucial to multi-criteria decision making because of the complexity of the
criteria weighting process and the likelihood of bias (either intentional or unintentional) on
the part of the different decision-makers (Chen, Yu, & Khan, 2010). An improved
consistency statistic does not necessarily mean that the judgments will lead to the best
answer in regards to the “real world” objective, but it does mean that the judgments are
significantly different from random (Saaty, 1977). Tegou et al. (2010) included two
measures of consistency in their approach: a consistency index (CI) and a consistency ratio
(CR). The CI can be measured by the formula (Saaty, 1977):

CI = (λ max − n) / (n − 1)    (1)

where λ max is the largest eigenvalue of the pairwise comparison matrix and n is the number of criteria. If there are no inconsistencies in the pairwise comparisons, then λ max = n. The CR measures the coherence of the pairwise comparisons relative to random judgments, written as:

CR = CI / RI    (2)
where RI is the mean CI of a set of randomly generated comparison values (Saaty, 1977),
and generally a CR value greater than 10% indicates significant inconsistency and suggests
that the user reevaluate their judgments of relative importance regarding the criteria
(Tegou, Polatidis, & Haralambopoulos, 2010).
Another important aspect of this methodology is the inclusion of sensitivity analysis. The
authors used a technique similar to that of Baban and Parry (2001), but included four
weighting scenarios instead of two. The first assumed that all criteria have equal weights,
the second scenario set the “visual impact” criterion to zero, the third scenario set the
environmental criteria to zero, and the fourth set the economic criteria to zero. The results
show that the land area considered most suitable (scores of 0.9-1.0) was still relatively
small in all cases, but the important conclusion by Tegou et al. (2010) was that “each
selected criterion is influential in the evaluation of the study region.”
Their conclusion seems like a gross generalization, but it tells us two valuable things: 1) the
inclusion or exclusion of any relevant criterion is critical to the analysis, and so the set of
evaluated input criteria may impact the analysis just as much, if not more, than the analysis
method itself, and; 2) the relationship amongst the criteria is complex and dynamic, so the
measurement of consistency in the criteria weighting assignment is crucial.
3.3 MCA Comparison Discussion
This review has tried to present the selected studies in a way that illuminates the
advantages and shortfalls of each as well as shows a progression of methodologies. This
thesis draws on the conclusions from the studies reviewed here in formulating its
framework, and so much of the theory behind the analysis used in this framework is built
on the ideas seen in these four studies, namely: 1) the use of pairwise comparisons and a
pairwise matrix and the calculation of the principal eigenvalue; 2) the use of the AHP to
assign criteria weights; 3) the use of a GIS grid format (raster) and the weighted overlay
tool; 4) the use of a three-part analysis approach, beginning with the exclusion of infeasible
sites; and 5) the use of a more comprehensive dataset based on a combination of the layers
used in these four studies. A summary table is provided below for comparative purposes.
Table 1: Summary of relevant input criteria for four wind farm siting studies (Baban & Parry; Rodman & Meentemeyer; Van Haaren & Fthenakis; Tegou et al.); an 'x' marks criteria included in a study.
Criteria/Constraint
1 Proximity to Roads: x x x
2 Proximity to Urban Areas/Cities: x x x x
3 Proximity to Electrical Grid: x x x
4 Proximity to Water Bodies: x x x
5 Proximity to Forested Land: x x x
6 Proximity to Historic Sites: x x
7 National Parks, Forests, and Monuments: x (National Trust Property only), x (public parks only), x
8 Military Installations: x
9 Airports: x x
10 Tribal Land: x
11 Wind Speed/Wind Power Class: x x x x
12 Slope: x x x x
13 Aspect (Orientation): x
14 Critical Avian Habitat: x
15 Critical Habitat/Conservation Areas: x ("ecological sites"), x (endangered species present? Y/N), x, x
16 Soil Type (Karst): x
17 Land Use Type: x x x
18 Wetlands: x x
19 Electricity Demand: x x x
3.4 Proposed Framework
As illustrated in Table 1, the studies reviewed here draw on disparate sets of input criteria
and constraints, and this is generally the case with similar wind farm siting studies in the
literature. This makes it difficult to directly compare and contrast analysis methods, but the
framework presented here has the advantage of learning from these other studies and
identifying gaps and shortcomings in terms of relevant input data. Therefore, one novel
contribution of this framework is the compilation of a more complete set of relevant
criteria as input values. Great analytic approaches may fall short of their full potential if the
datasets are missing vital criteria, and the results may suffer, even for preliminary analysis.
One thing that is clear from reviewing the literature on the subject is that in order to
answer the question “Where is it feasible to locate a wind farm?” it is often beneficial to
first answer the question “Where is it not feasible to locate a wind farm?” All four of the
studies reviewed here began their analysis with this step, and this thesis has adopted that
approach as well.
3.4.1 Stage 1 Evaluation: Exclusion of Non-feasible Areas
Stage 1 of this framework is to exclude unsuitable sites based on rudimentary physical,
administrative, and geographical constraints. Areas including and within specified
distances of National Parks, National Forests, National Monuments, state and local parks,
wetlands, water bodies, military installations, populated places, airports, and areas
considered critical habitat for wildlife or vegetation were excluded outright, as were areas
with karst (i.e. caves, sinkholes, aquifer feeds, etc.) geology (Table 2). Areas that did not
meet the constraints were excluded through a Boolean ‘AND’ (Yes = 1/No = 0) classification
process. All layers were converted to a common cell size of 400 m by 400 m for the analysis
because 500 m was the smallest buffer size, and 400 m is a scalable increment of most
other distance thresholds.
Table 2: Stage 1 Evaluation criteria and constraints.
Factors Criteria Constraint for exclusion
Economic, Safety Populated Place Within 800 m (≈ 1/2 mile)
Environmental Wetland Within 800 m (≈ 1/2 mile)
Environmental Water Body Within 800 m (≈ 1/2 mile)
Environmental Critical Habitat (IBA¹, USFW, GAP²) Within 1,600 m (≈ 1 mile)
Physical, Engineering Karst Geology Less than 100 m depth (≈ 328 ft)
Administrative, Public Use National Park, Forest, or Monument Within 1,600 m (≈ 1 mile)
Administrative, Public Use State or Local Park Within 800 m (≈ 1/2 mile)
Infrastructure, Safety Airport Within 1,600 m (≈ 1 mile)
Infrastructure, Safety Military Installations Within 1,600 m (≈ 1 mile)
¹ Important Bird Areas as designated by the Bureau of Land Management (BLM)
² See Appendix for list of GAP Status Codes
3.4.2 Stage 2 Evaluation: Geographical Suitability Assessment
Stage 2 identifies those areas deemed “suitable” for large-scale wind energy development
through the assignment of suitability scores. These scores are calculated based on assigned
grading values (GV) given to the range of suitable criteria values. Grading values were
derived by dividing the maximum score value (GV max = 1.0) by the number of relevant
criteria (n) and then subtracting this value from each successive grading value, starting
from the highest ranking range of criteria values (GV = 1.0) to the constraint threshold (GV
= 0.0). Criteria value ranges that were deemed unsuitable (GV = 0.0) were sometimes a
function of distance where d = 0.0, but at other times represented a predetermined unsuitable class range (e.g. wind power class and land cover class) or, in the case of slope, unsuitable percentage ranges. Suitability indexes were derived for each of the Stage 2
criteria, shown in Tables 5 through 10 (below).
All layers were then reclassified to a common scale of 1 to 10 by intervals of 1, called scale
values for the weighted overlay operation, with 10 being the highest suitability score, 1
being the lowest, and 0 being restricted (unsuitable) values. Some layers were distance
ranges, some were classes, some were percentages, and most layers did not have exactly 10
value ranges or classes, so it was necessary to reclassify the input criteria value ranges in
order to overlay them. The ArcGIS Weighted Overlay tool requires integers for the scale
values, which were calculated by multiplying the grading values by 10 and rounding to the
nearest integer. These scale values were used as the suitability scores.
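The grading-and-rescaling rule described above can be expressed compactly. The following Python sketch is illustrative only (the function name is hypothetical and the thesis performed this step in ArcGIS); under the stated rule it reproduces, for example, the index in Table 5 (n = 4) and the grid-proximity index in Table 6 (n = 9).

```python
def grading_index(n):
    """Grading and scale values for a criterion with n suitable classes.
    The grading value (GV) starts at 1.0 for the best class and drops by
    1/n per class; the scale value is GV * 10 rounded to the nearest
    integer (ties rounded up, matching the published tables)."""
    index = []
    for i in range(n):
        gv = 1.0 - i / n
        scale = int(gv * 10 + 0.5)          # round half-up (e.g. 2.5 -> 3)
        index.append((round(gv, 2), scale))
    return index

print(grading_index(4))  # [(1.0, 10), (0.75, 8), (0.5, 5), (0.25, 3)]  (Table 5)
print(grading_index(9))  # (1.0, 10), (0.89, 9), ..., (0.11, 1)         (Table 6)
```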
Criteria that have a geographic dependence on the proximity to specific features, such as
roads, power lines, and cities, will have suitability scores that diminish further from the
feature until they reach a distance threshold where the score is zero (economically not
favorable). However, criteria that deal with sources of public opposition, such as noise,
visual impact, habitat conservation, and safety, would theoretically demonstrate a “distance
decay” relationship where the resistance to development diminishes as the distance away
from the feature increases (Van der Horst & Toke, 2010), and therefore suitability scores
would also increase as a function of distance (d). Since this analysis deals primarily with
physical and geographical constraints, minimum distance thresholds (buffers), rather than
grading values, were used to identify unsuitable areas for Stage 1 (simple) criteria that
showed a distance decay relationship, while grading values were used to identify
unsuitable areas for Stage 2 (dynamic) criteria.
Roads and populated places (i.e. urban areas, cities, and towns) are unique examples
because ideal locations would be located within a specified distance of the feature and
would show an inverse distance decay relationship, but there are also visual, auditory, and
safety concerns that require a buffer and show a distance decay relationship. In this case,
two different thresholds are used: a minimum distance threshold and a maximum distance threshold (where d = 0.0). The Stage 2 criteria and model constraints are shown in Table 3.
Table 3: Stage 2 Evaluation criteria and constraints.
Factors Criteria Constraint
Physical, Wind Resource Wind Power Class (WPC) Must be ≥ 4
Physical, Engineering Slope (percent rise) Must be less than 20%
Environmental, Economic Land Cover Class (LCC) NLCD Classes 11, 12, 21-24, 90, 95 excluded
Infrastructure, Economic Distance to Grid Must be within 8 km (≈ 5 miles)
Infrastructure, Economic Distance to Road > 500 m (≈ 1/4 mile); < 8,000 m (≈ 5 miles)
Infrastructure, Economic Distance to City > 1,600 m (≈ 1 mile); < 16,000 m (≈ 10 miles)
While Stage 1 criteria primarily relied on a simple Boolean classification of buffered
features (i.e. excluded or not), the Stage 2 criteria are a subset of the entire set of ONSWPS
site selection criteria that have a fluctuating geographical dependence on some aspect of
the input features. These criteria could also have been included in Stage 1 because each has a threshold for exclusion, but each also has a graduated range of suitable values, defined by spatially dependent relationships with the features used to represent it, that determines its level of suitability.
To eliminate redundant computational processes and save time, it was easier to evaluate
these constraints through grading values using the Reclassify tool in ArcGIS. For distance-
dependent criteria, the Euclidean Distance tool was used to calculate distance ranges prior
to reclassification. A suitability index was then created for each of the Stage 2 Criteria that
graded the input values or value ranges on a scale of 0.0 (not suitable) to 1.0 (optimal).
Wind Power
The wind resource is the most important geographically-dependent criterion, and this
dataset is organized into classes based on mean annual wind power density and mean annual
wind speed at delineated heights above the Earth’s surface (Table 4). Heights of 50-80 m
are typical for utility-scale or large distributed systems (American Wind Energy
Association [AWEA], 2008). Wind power classes (WPC) are based on the work of NREL,
AWS Truepower, and the U.S. Dept. of Energy’s Wind Powering America Program (U.S.
Department of Energy, 2011).
Table 4: Wind power classes based on mean annual wind power density and mean annual wind speed at 50 m height, assuming a Rayleigh speed distribution of equivalent mean wind power density. Data from NREL.
Wind Power Class Wind Power Density (W/m²) Wind Speed (m/s)
1 0-200 0.0 - 5.6
2 200-300 5.6 - 6.4
3 300-400 6.4 - 7.0
4 400-500 7.0 - 7.5
5 500-600 7.5 - 8.0
6 600-800 8.0 - 8.8
7 800-2000 8.8 - 11.9
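For context, the relationship between the two columns of Table 4 can be approximated from the Rayleigh assumption: for a Rayleigh-distributed wind speed, the mean cube of the speed is (6/π) times the cube of the mean speed. The sketch below (assuming a standard sea-level air density of about 1.225 kg/m³, a value not stated in the table) roughly reproduces the tabulated class boundaries.

```python
import math

def rayleigh_power_density(mean_speed_ms, air_density=1.225):
    """Approximate mean wind power density (W/m^2) for a site whose wind
    speed follows a Rayleigh distribution with the given mean speed (m/s):
    WPD = 0.5 * rho * E[v^3], with E[v^3] = (6/pi) * v_mean^3."""
    return 0.5 * air_density * (6.0 / math.pi) * mean_speed_ms ** 3

print(round(rayleigh_power_density(7.0)))  # ~401 W/m^2 (WPC 3/4 boundary)
print(round(rayleigh_power_density(8.0)))  # ~599 W/m^2 (WPC 5/6 boundary)
```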
There are various ways to assess the wind resource and different scales at which to
aggregate the data. For regional analysis, the 200 m resolution data compiled by NREL was
sufficient. A mean annual wind speed of 7 m/s is commonly considered the minimum range
for utility-scale wind energy production (Rodman & Meentemeyer, 2006), which
corresponds to WPC 4, although turbine technology is improving and approaching the point where WPC 3 areas may become viable for utility-scale systems. Table 5 illustrates the grading values used in this analysis.
Table 5: Suitability Index for the Wind Power Class (WPC) Layer
n WPC Grading Value Scale Value
1 7 1.00 10
2 6 0.75 8
3 5 0.50 5
4 4 0.25 3
n/a 3 0.00 Restricted
n/a 2 0.00 Restricted
n/a 1 0.00 Restricted
n/a -999 (no data) 0.00 Restricted
Electrical Grid
The proximity to the electrical grid is the most important distance-dependent criterion due
to both the cost of constructing and integrating new transmission lines, substations, and
other facilities (Ibrahim, Ghandour, Dimitrova, & Perron, 2011; Van Haaren & Fthenakis,
2011), and the costs associated with energy loss over long transmission distances, which
can devalue wind energy production to the point where it is not competitive with other
forms of energy (Bohn & Lant, 2009; Ibrahim, Ghandour, Dimitrova, & Perron, 2011;
Rosenburg, 2008b). Since wind energy is an intermittent energy source, it requires special
energy handling, storage, and transmission facilities to handle the energy fluctuations,
including energy overloads to the systems during very high winds (Ibrahim, Ghandour,
Dimitrova, & Perron, 2011). Proximity to the existing energy infrastructure is beneficial to
offset these costs as much as possible (Van Haaren & Fthenakis, 2011). Table 6 illustrates
the grading values used in this analysis.
Table 6: Suitability Index for the Proximity to Electrical Grid Layer
n Distance from grid (m) Grading Value Scale Value
1 0-500 1.00 10
2 501-1000 0.89 9
3 1001-2000 0.78 8
4 2001-3000 0.67 7
5 3001-4000 0.56 6
6 4001-5000 0.44 4
7 5001-6000 0.33 3
8 6001-7000 0.22 2
9 7001-8000 0.11 1
10 > 8,000 0.00 Restricted
Cities, urban areas, and populated places
The criterion representing urban areas, cities, and populated places, which consisted of two different layers ("urban areas/cities" and "populated places") in Stage 1, was reduced to just one layer for Stage 2 analysis. The populated places layer, which included all cities,
towns, and census designated places in the United States (down to a population of 10 in 4
housing units in Warm River, ID), was only used in Stage 1 because appropriate buffers
needed to be set, but small cities and towns typically do not have the necessary
infrastructure or energy demand to facilitate large-scale wind energy development.
Urbanized areas, on the other hand, represent both of these, but also require larger buffers
to accommodate urban sprawl and a potentially larger constituency of opposition. These
constraints are somewhat at odds; the “not-in-my-back-yard” (NIMBY) notion of public
opposition (from things like visual or auditory concerns) would seem to promote a
distance decay relationship (Van der Horst & Toke, 2010), while the electricity demand and
infrastructure argument would seem to promote an inverse distance decay relationship.
This thesis argues for a more pragmatic approach based on satisfying a thorough set of
physical and geographical constraints, and so addresses the former through an adequate
buffer and then proceeds to assign grading values based on the idea that the economics of
the proximity to urban areas is more important than hypothetical public opposition.
Noise is a more quantifiable issue than visual disturbance, and regulations do exist in several countries. Van Haaren and Fthenakis (2011) cite a Canadian report that summarizes
regulatory limits for noise in the range of 40-55 dB, while an Australian EPA report sets the
limit at 35 dB (Environmental Protection Authority, 2003). Noise, or sound pressure, levels
are a function of turbine height, wind speed, and distance. Van Haaren and Fthenakis (2011)
developed an equation for calculating noise levels at increasing distances based on a
common turbine height of 78 m (taken from the Vestas V80 model with a sound power
level of 100 dB), and estimated that the noise level at 500 m distance from the turbine is
approximately 35 dB. This is the basis for the 500 m buffer used in this framework, which
should also suffice as a buffer for safety and visual disturbance.
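The following Python sketch illustrates the general shape of such a distance-based noise estimate. It is not the authors' exact equation: it assumes simple hemispherical spreading from a point source plus a nominal atmospheric absorption of roughly 0.005 dB/m, values chosen here only to show why a 500 m setback brings a 100 dB sound power level down to approximately 35 dB.

```python
import math

def sound_level_db(source_power_db=100.0, distance_m=500.0,
                   absorption_db_per_m=0.005):
    """Rough sound pressure level (dB) at a given distance from a turbine
    treated as a point source: hemispherical spreading loss of
    10*log10(2*pi*d^2) plus a linear atmospheric absorption term."""
    spreading_loss = 10.0 * math.log10(2.0 * math.pi * distance_m ** 2)
    return source_power_db - spreading_loss - absorption_db_per_m * distance_m

print(round(sound_level_db(100.0, 500.0), 1))  # ~35.5 dB at 500 m
```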
Visual “pollution” is similarly a function of tower height and distance, but there is no
agreed-upon threshold at which a person's ability to see a wind turbine becomes a nuisance in
terms of aesthetic preference. Evidence suggests that the proliferation of information about
wind energy and a region’s prior experience with wind energy development tend to
increase public acceptance; more informed and experienced communities tend to view
wind turbines positively (Jobert, Laborgne, & Mimler, 2007; Van Haaren & Fthenakis,
2011). However, there are some nuisance issues that can be largely mitigated through
distance buffers, such as shadow flicker and reflective glare (Rosenburg, 2008b).
Safety is another fairly quantifiable issue and relates to precautions surrounding parts
malfunctions, such as a broken blade, or ice throws (when thawing ice chunks are flung
from a turbine blade). Because broken blades are extremely rare with modern turbines, safe distances have only been projected from small-scale simulations; the maximum distance a fragment of a broken blade is estimated to travel from an 80 m tall turbine is about 350 m (Van Haaren & Fthenakis, 2011). Ice throws are slightly more
common, but have also been documented to be around 350 m, well within the 500 m
minimum threshold for roads and populated places.
Roads
The proximity to transportation infrastructure is another important distance-dependent
consideration due to the costs of constructing and maintaining new roads, which must be
substantial enough to allow for the transport of extremely large turbine parts (for example,
the blades on the Vestas model V80 are 180 ft. long). Van Haaren and Fthenakis (2011)
estimate the cost of building new access roads to be $82,000 per kilometer, not counting
the costs of land clearing, permitting, or maintenance. This clearly puts emphasis on
locating sites as nearby as possible to existing roads. However, as in the case of populated
places, there are aesthetic and safety concerns that require an adequate buffer. Tables 7 and 8 illustrate the grading values and buffers used for the urban area/city and road criteria, respectively.
Table 7: Suitability index for the Proximity to Urban Area/City Layer
n Distance from city (m) Grading Value Scale Value
1 0-1600 0.00 Restricted
2 1601-3000 1.00 10
3 3001-4000 0.93 9
4 4001-5000 0.86 9
5 5001-6000 0.79 8
6 6001-7000 0.71 7
7 7001-8000 0.64 6
8 8001-9000 0.57 6
9 9001-10000 0.50 5
10 10001-11000 0.43 4
11 11001-12000 0.36 4
12 12001-13000 0.29 3
13 13001-14000 0.21 2
14 14001-15000 0.14 1
15 15001-16000 0.07 1
16 > 16,000 0.00 Restricted
Table 8: Suitability Index for the Proximity to Roads Layer
n Distance from road (m) Grading Value Scale Value
1 0-500 0.00 Restricted
2 501-1000 1.00 10
3 1001-2000 0.88 9
4 2001-3000 0.75 8
5 3001-4000 0.63 6
6 4001-5000 0.50 5
7 5001-6000 0.38 4
8 6001-7000 0.25 3
9 7001-8000 0.13 1
10 > 8,000 0.00 Restricted
Land Cover
Land cover is an unquestionably difficult criterion to assess because of the difficulty of accurately defining and mapping different land cover types, and this problem is compounded by the existence of more than one classification system. This thesis utilizes the National
Land Cover Database (NLCD) dataset for the United States, which is based on the Anderson
Level II Classification System (Anderson, Hardy, Roach, & Witmer, 1976), which provides a
level of detail more than sufficient for regional analysis. Selecting the particular land use
classes that are most suitable for wind energy development proved a more difficult task, as
there is a lack of consensus in the literature.
This framework promotes the approach that previously disturbed (developed) land is
preferable to undisturbed land. Among developed land classes, those that can support wind
energy development without compromising their value, such as lands dedicated to low-
maintenance crops or grazing, are preferable to other types of agricultural land where the
placement of turbines may interfere with production. For undisturbed land, there seems to
be agreement in the literature that areas predominantly covered by shorter vegetation
species, such as grasses and shrubs, are preferable to taller vegetation cover, like forests
(Janke, 2010; Malczewski, 2004; Rodman & Meentemeyer, 2006), presumably based on
land clearing costs and the notion that the taller the vegetation type, the more it reduces
the wind speed in that area. Barren land is theoretically more preferable based on this
logic, but barren land is often barren due to the presence of rocky soils or exposed rock,
conditions not necessarily conducive to the construction of massive towers. However, if
engineering allows for it, barren land is preferable among undisturbed land classes.
Table 9 presents the grading values used in this analysis based on these assumptions,
limited to the predefined classes from the NLCD.
Table 9: Suitability Index for the Land Cover Class Layer
n NLCD Class Code Land Cover Description Grading Value Scale Value
1 11 Open Water 0.0 Restricted
2 12 Perennial Ice and Snow 0.0 Restricted
3 21 Developed, Open Space 0.0 Restricted
4 22 Developed, Low Intensity 0.0 Restricted
5 23 Developed, Medium Intensity 0.0 Restricted
6 24 Developed, High Intensity 0.0 Restricted
7 31 Barren Land 0.7 7
8 41 Deciduous Forest 0.2 2
9 42 Evergreen Forest 0.2 2
10 43 Mixed Forest 0.2 2
11 52 Shrub/Scrub 0.6 6
12 71 Herbaceous 0.8 8
13 81 Hay/Pasture 1.0 10
14 82 Cultivated Crops 0.9 9
15 90 Woody Wetlands 0.0 Restricted
16 95 Emergent Herbaceous Wetlands 0.0 Restricted
Slope
Suitable slope for wind energy development is also difficult to determine based on the
literature. Recommendations range from a maximum of 10% to 30%, but a reasonable
compromise can be made at 20% as a maximum threshold for engineering and
construction purposes. This unfortunately eliminates many areas with high WPC, which
tend to be located on or around ridges and mountains, but these areas would likely be unsuitable based on other constraints as well, such as distance from roads or cities.
Rodman and Meentemeyer (2006) gave preference to ridge crests and areas of higher
elevation, but they were dealing with a densely populated study area. Most studies
consider slopes over 10% to be unsuitable based on responses from planning agencies or
private developers (Baban & Parry, 2001; Van Haaren & Fthenakis, 2011). For this analysis,
suitability scores decreased as slope increased until the 20% threshold (Table 10).
Table 10: Suitability Index for the Slope Layer
n Slope (as % rise) Grading Value Scale Value
1 0 - 2.5 1.00 10
2 2.6 - 5.0 0.88 9
3 5.1 - 7.5 0.75 8
4 7.6 - 10.0 0.63 6
5 10.1 - 12.5 0.50 5
6 12.6 - 15.0 0.38 4
7 15.1 - 17.5 0.25 3
8 17.6 - 20.0 0.13 1
9 20.1 - 35.0 0.00 Restricted
10 > 35.0 0.00 Restricted
3.4.3 Stage 3 Evaluation: Suitability Assessment
Stage 3 of the analysis identifies those sites that are optimal for wind energy development
based on a combination of ideal circumstances (i.e. those cells that have high suitability
scores, larger than 5,000 acres, etc.). For this analysis, the economic viability of developing
certain land areas is assessed through the weighted overlay function in ArcGIS, which will
yield suitability scores for each cell in the grid. Suitability scores should theoretically reflect
the most economically viable sites based on the notion that ideal physical conditions will
yield the highest return on investment through the maximization of the wind resource, the
minimization of development costs for electricity transport and infrastructure, and the
minimization of factors that would instigate public opposition. Table 11 presents these
criteria and their associated relative weights.
Table 11: Output criteria weights for Stage 3 Evaluation.
Criteria Code Description Criteria Weight Overlay Weight
WPC Wind Power Class 0.303 30
GRID Proximity to Electrical Grid 0.303 30
URBCITY Proximity to Urban Areas, Cities and Populated Places 0.169 17
ROAD Proximity to Transportation Routes 0.096 10
LANDCOV Land Cover Class 0.096 10
SLOPE Slope (as percentage rise) 0.033 3
sum 1.000 100
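To make the overlay arithmetic concrete, the following NumPy sketch (not the ArcGIS Weighted Overlay tool itself; the small input grids are invented for illustration) combines reclassified scale values with the Table 11 influence weights and zeroes out any cell that is restricted in any input layer, mirroring the restricted-cell behavior described for the Stage 2 indexes.

```python
import numpy as np

# Percent-influence weights from Table 11, expressed as fractions.
weights = {"WPC": 0.30, "GRID": 0.30, "URBCITY": 0.17,
           "ROAD": 0.10, "LANDCOV": 0.10, "SLOPE": 0.03}

# Toy 2 x 2 grids of scale values (1-10); 0 marks a restricted cell.
grids = {
    "WPC":     np.array([[10, 8], [3, 0]]),
    "GRID":    np.array([[9, 7], [4, 6]]),
    "URBCITY": np.array([[8, 8], [5, 5]]),
    "ROAD":    np.array([[9, 6], [3, 3]]),
    "LANDCOV": np.array([[10, 6], [2, 8]]),
    "SLOPE":   np.array([[9, 8], [5, 1]]),
}

weighted_sum = sum(w * grids[name] for name, w in weights.items())
restricted = np.any(np.stack(list(grids.values())) == 0, axis=0)
suitability = np.where(restricted, 0, np.rint(weighted_sum)).astype(int)
print(suitability)  # [[9 7] [4 0]] -- the restricted cell stays 0
```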
Tribal lands constitute a unique criterion because of legal and logistical constraints on
development, and opposition from some tribes is very strong (The Confederated Umatilla
Journal, 2012). Tribal land, which is federally owned, cannot be bought, sold, or leased by
conventional means and any development on those lands must be arranged as a “special
lease” through the federal government (Gamboa, 2011). Legislation is currently circulating
that would change the way this is handled, but for the purposes of this analysis, it is
generally considered economically unjustified to pursue sites on tribal lands. There are
cases though where wind energy development is occurring or has occurred on tribal lands,
and so it may be worth investigating a site located on tribal land if it has a high suitability
score. This framework includes tribal lands as an additional Stage 3 constraint.
3.4.4 The Analytic Hierarchy Process (AHP)
The AHP is a heuristic algorithm that follows a hierarchical structure for multi-criteria
decision making and it provides mathematical measures of consistency. For site suitability
analysis it is critical that the assigned weights are logically consistent and mathematically
defensible, so the AHP is used to derive the input criteria weights that will be applied to the
weighted overlay technique. Figure 3 illustrates the process of determining input criteria
weights for suitability analysis using AHP.
The AHP requires that the problem be
diagrammed as a hierarchical structure, typically
with the overall objective at the top, the criteria
that impact it at the next level, the attributes of
those criteria at the next level, and alternatives at
the bottom (Boroushaki & Malczewski, 2008;
Saaty, 1990). The hierarchical structure can be
more complex (or less complex), but there is a
logical threshold at which humans have trouble
simultaneously evaluating options.
Figure 3: Schematic of the criteria weight selection process using AHP, adapted from Chen, Yu, & Khan (2010).
Based on George Miller's (1956) work on the limits of human information processing, the number of criteria that humans are able to simultaneously consider is seven plus or minus two, and so generally the second level of the hierarchy should not contain more than nine criteria; otherwise the structure should be reconfigured, because the inherent error increases dramatically past this threshold (Saaty, 1977; 1990). For this analysis, seven criteria were
selected for Stage 3 Evaluation, as shown in Figure 4. However, the last criterion, Tribal
Land, was not included in the AHP matrix as a weighted variable; rather, it was evaluated
separately as a final Stage 3 constraint.
Figure 4: Schematic of the hierarchical structure of the ONSWPS Stage 3 Evaluation criteria,
based on Saaty (1990).
Once the hierarchical structure has been established, the pairwise matrix can be
constructed. In AHP, this process consists of ranking the relative importance of each criterion against the others in terms of its impact on achieving the overall goal (the top level
of the hierarchy). The fundamental scale proposed by Saaty (1977) is used to rank the
relationships amongst the criteria by importance (Table 12), and from these pairwise
comparisons the pairwise matrix is created (Table 13). In MCA, it is often impossible to
assign absolute values of importance to the diverse, often intangible, criteria. For example,
how does one quantify the value (importance) of visual aesthetics, critical avian habitat,
and proximity to the electrical grid as applied to ONSWPS site selection?
Some sort of relative scale must be used to establish a hierarchy of priority, i.e. Action A is
more important than Action B. In terms of data types, this is a matter of ordinal data versus interval data; the former has the advantage of flexibility in handling diverse criteria, but lacks an inherent zero and therefore cannot tell us how much more important
one thing is over another (even in relative terms). The fundamental scale enables the
conversion of ordinal data into ratio data by using an absolute scale with an inherent zero,
i.e. Action A is this much more important than Action B. Therefore, it is possible to not only
quantify the relationships amongst diverse criteria, but also to evaluate the consistency of
these judgments and revise them if necessary in the pairwise matrix.
Table 12: The fundamental scale, adapted from Saaty (1990).
Intensity of importance (absolute scale): Definition and explanation
1 Equal importance: Two activities contribute equally to the objective
3 Moderate importance: Experience and judgment slightly favor one activity over another
5 Strong importance: Experience and judgment strongly favor one activity over another
7 Very strong importance: An activity is strongly favored and its dominance is demonstrated in practice
9 Extreme importance: Evidence favoring one activity over another is of the highest possible order of affirmation
2, 4, 6, 8 Intermediate values: When compromise is needed
Reciprocals If one activity i has one of the above activities assigned to it when compared with
activity j, then j has the reciprocal value when compared with i (i.e. 5 = 1/5 or .200)
The pairwise matrix is an n x n grid that requires input values to be assigned by the user
based on research, experience, and/or expert opinion for each pairwise comparison. In
AHP, these values are selected from Saaty’s (1977) fundamental scale and assigned by row,
meaning that the row representing each criterion is compared to each column in terms of
importance. This also means that the column representing each criterion will hold the
reciprocal value of the fundamental scale value assigned to the row.
At the end of each row the nth root value is calculated by multiplying all of the criteria
values together and taking the nth root, in this case n = 6, so each product is raised to the
1/6 power. This produces a normalized value for each row, and the nth root values are
summed together to provide a denominator for the priority vector calculation. The priority
vectors are calculated by dividing each row’s nth root value by the summed nth root value.
The priority vectors are the output criteria weights for each row (each criterion), now
normalized against the matrix so that the sum of the priority vectors is equal to 1.0.
Table 13: Pairwise matrix for ONSWPS site selection criteria showing fundamental scale
values, reciprocal values, nth root values (n=6), priority vectors (relative weights), the
principal eigenvalue (λ max), the consistency index (CI) value, and the consistency ratio (CR)
value, based on Saaty (1990).
n Criterion WPC SLOPE LANDUSE GRID ROAD URBCIT nth Root Priority Vector
1 WPC 1 9 3 1 3 2 2.33482 0.303
2 SLOPE 0.111 1 0.333 0.111 0.333 0.200 0.25491 0.033
3 LANDUSE 0.333 3 1 0.333 1 0.500 0.74184 0.096
4 GRID 1 9 3 1 3 2 2.33482 0.303
5 ROAD 0.333 3 1 0.333 1 0.500 0.74184 0.096
6 URBCIT 0.500 5 2 0.500 2 1 1.30766 0.169
λ max = 6.01255 CI = 0.00251 CR = 0.00202
Once the pairwise matrix was created and the formulas were input (using Microsoft Excel),
it was possible to adjust the criteria input weights until the lowest set of CI, CR, and λ max
values were found. Noting that a matrix is consistent if and only if: λ max = n (Saaty, 1977),
the goal was to adjust the input values until λ max was as close to 6.00000 (n = 6) as possible, in this case 6.01255. The CI, which measures the deviation between λ max and n in order to assess inconsistencies in the pairwise comparisons, was calculated from Equation 1. In this case, the calculation was (6.01255 – 6)/(6 – 1) = 0.00251, and from this the CR can be
calculated using Equation 2.
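The same quantities can be reproduced outside of the spreadsheet. The following NumPy sketch (illustrative, not the Excel workbook used in the thesis) computes the nth-root priority vector, the principal eigenvalue, and the CI and CR for the matrix in Table 13; because the table stores rounded reciprocals (e.g. 0.111 for 1/9), the computed values differ slightly from the reported ones.

```python
import numpy as np

# Pairwise matrix from Table 13 (row/column order: WPC, SLOPE, LANDUSE,
# GRID, ROAD, URBCIT), using the rounded reciprocals printed in the table.
A = np.array([
    [1.000, 9.0, 3.000, 1.000, 3.000, 2.000],
    [0.111, 1.0, 0.333, 0.111, 0.333, 0.200],
    [0.333, 3.0, 1.000, 0.333, 1.000, 0.500],
    [1.000, 9.0, 3.000, 1.000, 3.000, 2.000],
    [0.333, 3.0, 1.000, 0.333, 1.000, 0.500],
    [0.500, 5.0, 2.000, 0.500, 2.000, 1.000],
])
n = A.shape[0]

nth_root = A.prod(axis=1) ** (1.0 / n)      # geometric mean of each row
weights = nth_root / nth_root.sum()         # priority vector (sums to 1.0)

lam_max = float(np.mean((A @ weights) / weights))  # principal eigenvalue
CI = (lam_max - n) / (n - 1)                # Equation 1
CR = CI / 1.24                              # Equation 2, RI = 1.24 for n = 6

print(np.round(weights, 3))  # approx. [0.303 0.033 0.096 0.303 0.096 0.169]
print(round(lam_max, 3), round(CI, 5), round(CR, 5))  # close to Table 13
```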
As discussed in Section 3.2, a number of important calculations come from the pairwise
matrix that evaluate consistency, which in this context refers to consistency of judgment, but
it has a specific meaning in the mathematical structure of AHP. Consistency is measured for
two primary reasons and therefore has two imperative functions:
1) To evaluate the consistency of the user-assigned input criteria values in regards
to the dominance of one row (one criterion or action) over another in terms of the
order of magnitude of importance; these judgments must be consistent to preserve
the order or rank of the criteria in the pairwise comparisons (Saaty, 1990). For
example, if Criteria A has a 2:1 importance over Criteria B, and Criteria B has a 2:1
importance over Criteria C, and Criteria C has a 2:1 importance over Criteria A (A >
B > C > A), then these judgments are logically inconsistent (or invalid) and also
mathematically inconsistent. A row can demonstrate dominance over another either
directly (i.e. A > B) or indirectly (i.e. A > B because A > C and C > B) and these ranks
must be preserved throughout the matrix. This order of dominance can be
demonstrated in as many steps as the number of criteria (n), and this is why the nth
root is calculated for each row.
2) To control error in judgment by promoting the homogeneity of input values, i.e.
keeping the judgments within an order of magnitude between criteria; this is why
the fundamental scale is only 1-9. A larger difference in input values has potentially
larger error, while smaller differences in input values are less affected by
perturbations. The number of criteria (n, or λ max in a consistent matrix) also plays an
important role in measuring the inherent error in judgment because the larger n
gets, especially beyond the 7±2 threshold, the larger the potential error becomes,
and the smaller n is the more stable it is to random perturbations. The difference
between λ max and n is therefore an important measure of consistency (Saaty, 1977;
Saaty, 1990; Saaty & Vargas, 2001).
While there are several measures of consistency in the AHP, the CR is the most indicative of
whether the judgments are acceptably consistent, and it is imperative that these values are
significantly different than those that would be derived from random input values. The
Random Index (RI) values used in this analysis come from a lookup table (shown in Table
14) based on the work of Saaty and Vargas (2001), who ran thousands of iterations to
derive the index.
Table 14: Random Index (RI) values, adapted from Saaty & Vargas (2001).
n RI value
1 0.00
2 0.00
3 0.58
4 0.90
5 1.12
6 1.24
7 1.32
8 1.41
9 1.45
The RI values are directly related to the number of criteria (n) in the analysis; for this application, with six weighted criteria, the corresponding RI value is 1.24. This value becomes the
denominator in Equation 2, and a matrix is generally considered consistent if CR < 10%
(Boroushaki & Malczewski, 2008; Chen, Yu, & Khan, 2010; Saaty, 1990; Tegou, Polatidis, &
Haralambopoulos, 2010). For this analysis the CR value was calculated as (0.00251/1.24) = 0.00202, or approximately 0.2%, which is well below the 10% threshold.
3.5 SA Methodology
Several SA approaches are found throughout the literature, and the most common have to
do with changing the input values, changing the relative importance of the criteria (i.e.
Saaty’s fundamental scale values), or changing the criteria weights. There are also different
weighting schemes that can be used within each of these approaches, either by substituting
random values or by changing the values by a defined interval or percentage, or by giving
all criteria the same weight or zero weight when compared to all others.
This thesis is interested primarily in documenting the effects of perturbations in the input
criteria weights. A combination of approaches was used in this analysis, both drawing from
the studies reviewed earlier and incorporating another approach from the literature. Baban
and Parry (2001), Rodman and Meentemeyer (2006), and Tegou et al. (2010) all applied
the equal weighting scheme as part of their SA, and it is a logical baseline for comparative
purposes. This thesis applies the equal weighting scheme to the criteria weights as the first
phase of the SA.
This thesis also applies an SA method known as One-At-a-Time, or OAT, that allows the
user to alter single input values by a certain percentage interval and then measure the
impact of that change relative to the other criteria, which must be adjusted accordingly so
that the criteria weights still sum to 1.0. The isolation of variables eliminates ambiguity and
improves the comparability of the results (Chen, Yu, & Khan, 2010). A percentage interval
of ± 20% was chosen as the percent change (pc) used in this analysis, which was applied to
each of the six Stage 3 criteria individually and the changes were measured in acres of
suitable land (Table 15, next chapter). The formula used to calculate the weight (W) of the
main criterion under consideration (c m) is:

W(c m, pc) = W(c m, 0) + W(c m, 0) × pc    (3)

where W(c m, 0) is the original input weight of c m and W(c m, pc) is the weight of that criterion at a given pc (in this case, ± 20%). The formula for calculating the adjusted weights of the other criteria is:

W(c i, pc) = W(c i, 0) × (1 − W(c m, pc)) / (1 − W(c m, 0))    (4)

where c i is the ith criterion and W(c i, 0) is the original (AHP-derived) input weight of the ith criterion (Chen, Yu, & Khan, 2010). With OAT, the user can choose to run the SA at any
number of percentage increments. This thesis chose to examine the data at 5% increments
to the ± 20% threshold, plus the base run (the original AHP-derived criteria weights),
yielding nine total runs.
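A minimal Python sketch of this OAT adjustment is shown below; the function and dictionary names are illustrative (the thesis carried out these calculations in its spreadsheet/GIS workflow), but the arithmetic follows Equations 3 and 4 and keeps the perturbed weights summing to 1.0.

```python
def oat_weights(base_weights, main_criterion, pc):
    """Perturb the main criterion's weight by the fraction pc (e.g. +0.20)
    per Equation 3, then rescale the remaining weights per Equation 4 so
    that the full set still sums to 1.0."""
    w0_main = base_weights[main_criterion]
    w_main = w0_main + w0_main * pc                        # Equation 3
    return {name: w_main if name == main_criterion
            else w0 * (1 - w_main) / (1 - w0_main)         # Equation 4
            for name, w0 in base_weights.items()}

# AHP-derived base weights from Table 11.
base = {"WPC": 0.303, "GRID": 0.303, "URBCITY": 0.169,
        "ROAD": 0.096, "LANDCOV": 0.096, "SLOPE": 0.033}

run = oat_weights(base, "WPC", 0.20)   # one of the nine runs (+20% on WPC)
print(round(run["WPC"], 4), round(sum(run.values()), 4))  # 0.3636 1.0
```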
3.6 Technology
This analysis was conducted using ArcGIS 10.0 Desktop software including Spatial Analyst
and 3D Analyst extensions. Models were constructed using ArcGIS ModelBuilder through
the ArcMap/ArcINFO interface. Maps were created using ArcMap and exported in JPEG
format at 200 dpi. The Microsoft Office 2010 Suite (Word, Powerpoint, and Excel) was
employed to create the main document and all associated graphs, tables, and figures. The
hardware used was the Windows 7 (Service Pack 1) operating system running on a 64-bit
HP 2000 Notebook PC laptop with an AMD E-350 processor, 3 GB RAM.
3.7 Data Processing
The first step was to collect the necessary datasets and convert them into usable forms
using ArcGIS 10.0 geoprocessing tools. For this type of analysis, working with a grid system
was the most effective means of calculating values for particular locations. This way each
grid unit (or cell) in the study area would have an integer value and these values could be
altered based on the weights assigned to them, yielding a suitability score for each cell.
However, most datasets are available as vector-type data and so several of the datasets had
to be converted to grid-based (raster) data prior to analysis.
A final grid resolution of 798 m (NAD_1983_Oregon_Statewide_Lambert projection) was
chosen for this analysis because it was the largest cell size amongst the datasets, found in
the Digital Elevation Model (DEM). All other datasets were converted to this cell size
because the accuracy of the spatial data can be no greater than the coarsest resolution
found in the datasets. Although a finer resolution could be obtained by resampling the
DEM, the chosen cell size was suitable for regional analysis, and this coarser cell size
reduced computer processing time. To further reduce the computation time of processing
and analysis, all layers were first clipped to a common regional extent, slightly larger than
the study area (for most layers, the boundaries of Washington, Oregon, and Idaho were
used for cartographic purposes).
The Land Cover layer required extensive manual processing because it was only available
in smaller extents than the study area due to the large size of the files. Two raster files were
required to cover the study area, each over 10 GB as individual layer packages, and they
were added to the geodatabase through the Create New Raster tool. The two rasters were
loaded into the new raster file using ArcCatalog and then clipped to the regional extent of
the other layers. The original rasters were converted from WGS84 to NAD83 by adding
them into the geodatabase through the Load Data function. The resolution, which was
nearly two orders of magnitude smaller than the cell size used in this analysis, was
adjusted by using the Export Data function and manually specifying the new cell size. This
new raster was then added to the geodatabase and then re-symbolized in ArcMap based on
the National Land Cover Dataset (NLCD) classification scheme.
Once all the datasets had been converted to a common format (i.e. same coordinate system,
same cell size, proper extent, etc.), they were added to a geodatabase in ArcGIS. The
geodatabase includes feature datasets for Administrative, Infrastructure, and
Environmental themes, and includes the raster datasets. Feature topology was not enforced
because it was not critical for this analysis at this scale.
The analysis was carried out in three stages using ArcGIS 10.0 ModelBuilder. For Stage 1,
the Euclidean distance was calculated from each feature to a maximum distance of 2,000 m
(slightly further than the largest buffer), producing new rasters as outputs. The new rasters
were manually converted to a binary scale (Yes = 1/No = 0) using the Reclassify tool. Areas
within the buffer thresholds were assigned values of zero, while the remaining areas were
given a value of one. These outputs were then combined into a single layer through the
Mosaic-to-New Raster tool with a minimum mosaic operator, and then the relevant areas
(buffered features) were selected using the Con tool, which uses conditional statements to
select only the desired cells (those with values of zero). Areas with values of ‘1’ were
discarded in Stage 1 for illustrative purposes, but were reused in Stage 3 as a mask raster.
A Euclidean Distance/Reclassify tool combination was used instead of a Buffer/Polygon-to-
Raster tool combination strictly to save processing time. The Buffer tool only works with
feature classes (vector layers), and the large size of the datasets used in this analysis often
required several hours to calculate the polygon geometry, and then the new polygon layers
would need to be converted back to rasters for the overlay operation, a process that also
took hours for each operation. The Euclidean Distance tool accepts either feature classes or
rasters as inputs and produces a raster as an output, thus effectively doing the same thing
as buffering in a fraction of the time. The Reclassify tool could be used to set the buffer
thresholds, and the Con tool could be used to select the cells that correspond to the
buffered areas. A diagram of the Stage 1 Model is shown in the Appendix.
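For readers who prefer a scripted version, the following arcpy sketch mirrors the Stage 1 sequence described above (Euclidean distance, binary reclassification, minimum mosaic, and conditional selection). The geodatabase path, the feature-class names, and the buffer distances in the dictionary are placeholders; the actual exclusion features and thresholds are those defined in the Chapter 3 criteria tables.

```python
# Sketch of the Stage 1 workflow in arcpy (ArcGIS 10.x, Spatial Analyst).
# Geodatabase path, feature-class names, and buffer distances are placeholders.
import arcpy
from arcpy.sa import EucDistance, Reclassify, RemapRange, Con

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\ONSWPS\ONSWPS.gdb"
arcpy.env.overwriteOutput = True

exclusions = {"Airports": 1000, "Wetlands": 500, "UrbanAreas": 1500}  # meters

binary_names = []
for fc, buffer_m in exclusions.items():
    dist = EucDistance(fc, maximum_distance=2000, cell_size=400)  # 2,000 m cap
    # 0 = inside the buffer threshold (excluded), 1 = outside it
    binary = Reclassify(dist, "VALUE",
                        RemapRange([[0, buffer_m, 0], [buffer_m, 2000, 1]]))
    binary.save("bin_" + fc)
    binary_names.append("bin_" + fc)

# Combine the binary rasters, keeping the minimum value where they overlap
arcpy.MosaicToNewRaster_management(binary_names, arcpy.env.workspace,
                                   "stage1_combined", "", "8_BIT_UNSIGNED",
                                   400, 1, "MINIMUM")

# Keep only excluded cells (value 0); non-excluded cells become NoData
excluded = Con(arcpy.Raster("stage1_combined"), 0, where_clause="VALUE = 0")
excluded.save("stage1_excluded")
```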
Stage 2 required significantly less processing time than Stage 1, in part because there were fewer input data layers, but also because all of the Stage 1 layers were in vector format and many of them consisted of extremely large numbers of features, which take longer for ArcMap both to process and to render. Stage 2, with only three vector layers
and three raster layers, was much more efficient. A similar approach to Stage 1 was used
with the Euclidean Distance tool being used for the distance-dependent criteria (again using
the 2,000 m threshold), and then all layers were reclassified to a scale compatible with the
Weighted Overlay tool in ArcGIS (i.e. 1 through 10 by intervals of 1) and then given a scale
value (see Section 3.4.2) in the Weighted Overlay Table. The AHP-derived criteria weights
were then entered into the Weighted Overlay Table prior to running the tool. The output
raster from Stage 2 was a suitable areas layer based on the AHP-derived criteria weights.
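Conceptually, the Weighted Overlay tool computes a weighted average of the reclassified (1 through 10) criterion scores in each cell and rounds the result to an integer. The short numpy sketch below illustrates that calculation using the baseline AHP-derived weights reported later in Table 19; the score arrays are randomly generated stand-ins for the reclassified criterion rasters, and excluded cells are ignored here because they are handled by the Stage 1 mask.

```python
# Conceptual stand-in for the ArcGIS Weighted Overlay tool: a weighted average
# of the reclassified 1-10 criterion scores, rounded to an integer. The score
# arrays are random placeholders for the actual reclassified rasters.
import numpy as np

weights = {                      # AHP-derived baseline weights (see Table 19)
    "WPC": 0.303, "GRID": 0.303, "URBCITY": 0.169,
    "ROAD": 0.096, "LANDCOV": 0.096, "SLOPE": 0.033,
}

rng = np.random.default_rng(42)
scores = {name: rng.integers(1, 11, size=(5, 5)) for name in weights}

overlay = sum(w * scores[name] for name, w in weights.items())
overlay = np.rint(overlay).astype(int)        # graded suitability surface
print(overlay)
```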
The Stage 3 Model began with creating a mask layer that represented all non-excluded
areas from Stage 1. The mask layer could be used to extract (select) the suitable cells from
the Stage 2 Model outputs, as well as the alternatively weighted layers produced in the SA,
that were not located in a buffered zone from the Stage 1 Model. This approach was
effective because the suitable cells retained their original suitability scores and one layer
could be used repeatedly on as many layers as necessary, yielding consistent geographic
boundaries for site selection.
The second phase of the Stage 3 Model consisted of identifying optimal sites by using the
mask raster to eliminate all cells that corresponded with Stage 1 excluded areas from the
Stage 2 AHP-derived suitable areas layer. The output raster from this operation was then
converted to polygons so that the geographic area could be calculated, and then polygons
larger than 5,000 acres were selected as optimal sites. This part of the model was rerun
with the input rasters from the SA for comparative purposes.
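A minimal arcpy sketch of this phase of the Stage 3 Model is shown below: the Stage 1 mask extracts the non-excluded Stage 2 cells, the result is converted to polygons, and polygons larger than 5,000 acres are selected. The raster, mask, and output names are hypothetical placeholders.

```python
# Sketch of the second phase of the Stage 3 Model (hypothetical dataset names):
# mask out Stage 1 exclusions, convert to polygons, keep sites > 5,000 acres.
import arcpy
from arcpy.sa import ExtractByMask

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\ONSWPS\ONSWPS.gdb"
arcpy.env.overwriteOutput = True

suitable = ExtractByMask("stage2_ahp_overlay", "stage1_mask")
suitable.save("stage3_suitable")

arcpy.RasterToPolygon_conversion("stage3_suitable", "stage3_polys",
                                 "NO_SIMPLIFY", "VALUE")
arcpy.AddField_management("stage3_polys", "ACRES", "DOUBLE")
arcpy.CalculateField_management("stage3_polys", "ACRES",
                                "!shape.area@acres!", "PYTHON")
arcpy.Select_analysis("stage3_polys", "stage3_sites_5000ac", "ACRES > 5000")
```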
The 5,000 acre threshold was chosen to accommodate utility-scale wind farms, which vary
tremendously in size, but it cannot be overlooked that large wind farms require large
continuous tracts of land. Although the space between the turbine towers can be used for
other activities (Rodman & Meentemeyer, 2006; Rosenburg, 2008b) and the ‘footprint’ of
the towers is relatively small (2-5%) compared to the total wind farm area (Kuvlevsky Jr.,
et al., 2007), the turbines must be located as close together as possible to achieve maximum
energy transmission and storage efficiency without compromising the ability of the turbine
blades to “capture” the wind directly. The turbine array (positioning) is therefore
extremely important in order to avoid potential losses due to interference from the wake
created by other turbines, and sufficient space is required (Dabiri, 2011).
Spacing estimates in the literature range from 5 to 15 rotor diameters between towers, and estimates of overall wind farm land use range from 0.25 acres/tower (National Renewable Energy Laboratory [NREL], 2012) to 2-3 W/square meter (Dabiri, 2011) to 40-200 acres/MW (Denholm, Hand, Jackson, & Ong,
2009). For this analysis, a theoretical 50 MW wind farm is used as the baseline, utilizing a
safe average of 100 acres per MW.
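As a rough cross-check of the 100 acres/MW figure, the power-density estimate cited above can be converted to the same units; a small worked example (assuming the standard conversion of 4,046.86 square meters per acre):

```python
# Converting the cited 2-3 W per square meter (Dabiri, 2011) into acres per MW
# and checking the 50 MW baseline farm against the 5,000-acre threshold.
ACRE_M2 = 4046.86                              # square meters per acre

for w_per_m2 in (2.0, 3.0):
    acres_per_mw = 1e6 / w_per_m2 / ACRE_M2    # 1 MW = 1e6 W
    print(f"{w_per_m2} W/m^2 -> {acres_per_mw:.0f} acres/MW")
# roughly 124 and 82 acres/MW, bracketing the 100 acres/MW average used here;
# 50 MW * 100 acres/MW = 5,000 acres.
```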
At this stage of the analysis, some tradeoffs become apparent due to the requirement for
large tracts of land. If one wishes to examine only the areas with the highest suitability scores (i.e. 9 or 10), doing so will most likely eliminate any large polygons from the analysis, since there were relatively few cells with those scores (Figure 5) and those cells were widely dispersed (Figure 6). However, if one is willing to examine the entire range
of remaining cells, then not only are there more large polygons to choose from, but
adjacent polygons can be combined to identify larger suitable sites.
Since the requirement for large tracts of land is also a suitability constraint and any cells
remaining at this point in the analysis have already satisfied several critical constraints
(only cells with suitability scores ≥ 5 remain after Stage 3), this framework treats all
remaining cells as a single category: suitable. Also, as will be discussed further in Chapter
Four, the geographic location of the suitable cells is nearly identical under all weighting
schemes, but the values of the individual cells change within those locations, suggesting
that the value of an individual cell is not as important as the fact that a suitable cell exists at
that particular geographic location.
Figure 5: Histogram showing the number of cells by suitability score remaining after
“extraction by mask” in Stage 3, based on the AHP-derived criteria weights.
Figure 6: Sample spatial distribution of the suitable cells layer depicted in Figure 13.
For the final phase of the Stage 3 Model, neighboring polygons (within a distance of 1600
m) were joined using the Aggregate Polygons tool, and polygons larger than 5,000 acres
were selected from that layer. All polygons smaller than 5,000 acres were discarded for this
analysis, despite their suitability scores, and the study area was examined for the locations
of the remaining polygons to select a detailed regional extent to analyze for the SA.
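A scripted sketch of this final phase might look like the following; it assumes hypothetical layer names and uses the Aggregate Polygons tool's minimum-area parameter to combine the 1,600 m aggregation and the 5,000-acre selection in one call (the workflow described above performs the selection as a separate step).

```python
# Sketch of the final Stage 3 phase (hypothetical layer names): aggregate
# neighboring suitable polygons within 1,600 m and keep sites over 5,000 acres.
import arcpy

arcpy.env.workspace = r"C:\ONSWPS\ONSWPS.gdb"
arcpy.env.overwriteOutput = True

arcpy.AggregatePolygons_cartography("stage3_polys", "optimal_sites",
                                    "1600 Meters", "5000 Acres")
```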
In summary, Stage 3 identifies optimal sites as those that:
Are not located in the excluded areas identified in Stage 1.
Have suitability scores greater than ‘0’ based on the Stage 2 constraints.
Are located on sites greater than 5,000 acres.
CHAPTER FOUR: RESULTS AND DISCUSSION
One of the primary outcomes of this thesis is the creation of a series of maps to provide
quick access to site suitability information. Static maps were created for each evaluation
stage, with each stage's maps containing different types of information. For Stage 1, the maps show the areas excluded from the analysis; for Stage 2, the maps show the areas considered suitable for wind energy development based on graded suitability scores (high=10; low=0); and the Stage 3 maps show the optimal areas for development based on an overlay of the Stage 1 and Stage 2 constraint maps and a minimum size constraint (5,000 acres). Additionally, a
series of maps and figures were included to help visualize the results of the sensitivity
analysis (SA). The locations of existing wind farms were shown in several of the maps as a
means of assessing the models in “real-world” terms.
4.1 Stage 1 Evaluation Results
The objective of Stage 1 was to remove unsuitable areas from the analysis based on simple
criteria and their associated buffers. Figure 7 shows the results of the Stage 1 Model, and it
is evident from the map that the majority of the study area is considered unsuitable for
wind energy development at this stage. It is also intriguing that most areas with high WPC
are located within the excluded areas, as are many of the existing wind farms. This is
largely due to the presence of the Cascade Mountain Range (running North/South on the
left-hand side of the map) which has very high WPC, but has unsuitably steep slopes and is
primarily U.S. National Forest land or National Parks.
Figure 7: Stage 1 constraint map showing areas excluded from the analysis (unsuitable
cells), the locations of existing wind farms, and areas of high WPC.
Significant portions of Washington and Oregon are designated as National Forests,
therefore excluding vast tracts of land that have suitable-to-high WPC values, and most of
the land that has the highest WPC. Unfortunately for the wind energy industry, National
Forests are federally-owned land that is principally off-limits to development, but there is
evidence to suggest that they harbor abundant wind resources. Figure 8 shows how much
excluded land from Stage 1 is due solely to the existence of National Forests. This is
currently a controversial topic throughout the country (Adkins, 2009; Streater, 2012) and
we can expect to hear more about it in the next few years as states try to reach ambitious
Renewable Portfolio Standards (Bohn & Lant, 2009; Rosenburg, 2008a; Van Haaren &
Fthenakis, 2011).
Figure 8: Map showing areas with high WPC and U.S. National Forest Land.
In the literature it is often cited that forested land is less desirable than open land because
stands of trees tend to reduce wind speeds and therefore reduce wind power potential
(Baban & Parry, 2001; Hansen, 2005; Janke, 2010; Ramirez-Rosado, et al., 2008; Rodman &
Meentemeyer, 2006; Tegou, Polatidis, & Haralambopoulos, 2010; Van Haaren & Fthenakis,
2011). However, looking at Figure 9 (below), it is clear that this assumption may not always hold for large-scale wind power, for which the wind resource is typically measured at hub heights of 50-80 meters, well above most forest stands.
Figure 9: Map showing forested land cover, locations of existing wind farms, and areas with
high WPC.
In fact, most of the areas with the highest WPC in the study area are located within land
cover classes defined as evergreen, deciduous, or mixed forest. The argument then
becomes one of the cost of clearing forested land compared to other types of land cover.
Van Haaren and Fthenakis (2011) estimate the cost of clearing forested land for a 50
MW wind farm to be approximately $3,000 per acre, compared to $40-60 per acre for
grassland, shrubs, barren land, and cropland. For that reason, forested land was removed
as a Stage 1 constraint and was assigned a suitability score of 2 in the Stage 2 analysis,
therefore representing an economic constraint rather than a physical constraint.
4.2 Stage 2 Evaluation Results
In Stage 2, the goal was to locate suitable areas for wind energy development based on a set
of six dynamic criteria constraints. Suitability indexes provided the basis for this
assessment, and the criteria weights were derived through AHP. The ArcGIS weighted
overlay tool was used to identify the feasible sites within the study area, and suitability
maps were produced from the Stage 2 Model (Figures 10 and 11) showing the graded
values of the suitable areas.
It is striking how much area is considered unsuitable based on the results of the Stage 2
Model. Perhaps more intriguing is comparing the preliminary results of the two different
approaches (Table 15). Stage 1, a subtractive approach based on nine input criteria,
reduced the amount of area under consideration (the study area) from approximately 45.8
million acres to 9.6 million acres of suitable land area, a reduction of approximately 79%.
By evaluating only the areas that met critical suitability requirements, the Stage 2 Model
reduced the amount of suitable area by over 99% using just six input criteria and their
associated suitability scores.
Table 15: Acreage statistics based on Stages 1 and 2 of the analysis.

Layer                        Acres         Percent Reduction   Percent of Study Area
Study area                   45,806,472    0                   100
Stage 1 Excluded Area        36,206,408    21.0                79.0
Stage 1 Remaining Area       9,600,064     79.0                21.0
Stage 2 Suitable Area > 0    185,009       99.6                0.4
Stage 2 Suitable Area > 7    127,360       99.7                0.3
Figure 10: Stage 2 suitability map showing the results of the weighted overlay operation
using the AHP-derived criteria weights (10 = most suitable; 0 = unsuitable).
Figure 11: Stage 2 suitability map showing only areas with the highest suitability scores.
The differences are visually subtle at this scale, but there is a difference of 57,650 acres
between the number of acres that are considered highly suitable (suitability score ≥ 8) and
all suitable areas (suitability score > 0). In a table this difference may look significant (and
it is considering how many wind turbines could fit on 57,650 acres), but the advantage of
GIS visualization is that we can see where those additional acres are located on a map and
draw different conclusions.
One conclusion from comparing the two maps is that the geographic distribution of the
suitable cells and the highly suitable cells is nearly identical, meaning that one won’t find
highly suitable cells in areas where no other suitable cells exist. This makes sense because
most of the graded values were distance-dependent, but it also supports the notion of
spatial autocorrelation, the degree to which features that are near one another in space have similar values rather than being randomly distributed.
The first law of geography, often called Tobler’s Law, states that “Everything is related to
everything else, but near things are more related than distant things” (O'Sullivan & Unwin,
2010). In this case, we would expect suitable cells to be located near one another, and
highly suitable cells to be located near other suitable cells, because they share similar
geographic qualities. In spatial statistics, Moran's I is often used to measure spatial autocorrelation. The results of the Moran's I test for the layer representing suitable areas (suitability score > 0) are shown in Figure 12 as a graphic report, automatically generated by ArcGIS.
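The same test can be run from a script. The sketch below calls the Spatial Statistics implementation of Global Moran's I with hypothetical layer and field names and illustrative tool settings; the conceptualization, distance method, and standardization options actually used for Figure 12 are not specified in the text.

```python
# Scripted Global Moran's I on the suitable-areas polygons (hypothetical layer
# and field names; tool options shown are illustrative). The tool reports the
# index, z-score, and p-value, and can write the graphical HTML report shown
# in Figure 12.
import arcpy

arcpy.SpatialAutocorrelation_stats("stage2_suitable_polys",  # score > 0 layer
                                   "GRIDCODE",               # suitability score
                                   "GENERATE_REPORT",
                                   "INVERSE_DISTANCE",
                                   "EUCLIDEAN_DISTANCE",
                                   "NONE")
```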
The results confirm that these areas are not randomly distributed, which is as expected,
and they indicate that the suitable areas are indeed very strongly clustered. We see a
similar result when examining the areas with suitability scores ≥ 8, although not nearly as
strongly clustered (Table 16). This can likely be explained by the smaller number (n) of
features (polygons) under consideration, and by the fact that the 235 features that made up
the difference between the two layers were highly clustered neighboring features with
distance-dependent values, thus leaving larger distances between polygons with higher
values. In both cases, the results of the Moran’s I test show a statistically significant
clustering of features.
Figure 12: Results of Moran’s I for the Stage 2 suitable areas (> 0) layer.
Table 16: Results of Moran's I for both Stage 2 evaluation layers.

Layer                        Moran's Index   Z-score*     p-value    n
Stage 2 Suitable Area ≥ 3    0.248671        18.400073    0.000000   540
Stage 2 Suitable Area ≥ 7    0.095718        4.502179     0.000007   305

*For Moran's I, the Z-score indicates how many standard deviations the observed index lies from the index expected under complete spatial randomness.
Since there is confirmation of a geographical pattern (a strong clustering of features), it
may also be useful to investigate whether there is clustering of particular values. Moran’s I
only measures the distribution of similar feature values, but it does not measure whether
there is clustering of high or low values across the entire study area. Clustering of high
values, called “hot spots” in spatial analysis (and conversely “cold spots” for clusters of low
values), can provide some additional insight into the suitability of a particular region
(based on the selected input criteria) and perhaps explain why many existing wind farms
are located in areas not considered highly suitable by technical standards.
For this type of analysis, the Getis-Ord G-statistic is often used, which measures the
distribution of high or low values in a given area based on distance thresholds set by the
user, and this is compared with an expected G-statistic calculated by the GIS (a random
distribution). The Z-score is then calculated to test the significance of the result (i.e.
whether or not it is significantly different from random). Again, we can expect that there is
clustering of high values because the input feature class is based on the modeling of the
input criteria in Stage 2, which selected only those cells with high suitability values. In this
case, one might expect the hot spot map to look very similar to the one in Figure 11 in
terms of the spatial distribution of cell values.
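A scripted version of the hot spot analysis would look roughly like the sketch below. The layer name, the suitability field, and the 1,600 m distance band are assumptions for illustration; the tool writes a Gi* z-score for each feature, which drives the hot/cold symbology shown in Figure 13.

```python
# Scripted Getis-Ord Gi* hot spot analysis (hypothetical names and settings).
# Each output feature receives a Gi* z-score: large positive values mark
# clusters of high suitability ("hot spots"), large negative values mark
# clusters of low suitability ("cold spots").
import arcpy

arcpy.HotSpots_stats("stage2_suitable_polys", "GRIDCODE",
                     "stage2_hotspots",
                     "FIXED_DISTANCE_BAND",
                     "EUCLIDEAN_DISTANCE",
                     "NONE",
                     "1600")     # assumed distance band in meters
```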
However, the results of the hot spot analysis indicate that there are clusters of high values
that differ from the general geographical distribution of highly suitable cells from the Stage
2 weighted overlay. The maps in Figure 13 display the results for a sub-region of the study
area where large clusters of suitable cells were found in the Stage 2 analysis, and there is a
significant visual difference in the clustering of high values between the two maps. For
example, the cell clusters to the east and southeast in the upper map appear to have about
the same number of highly suitable cells as do some of the other clusters in the northwest
and middle parts of the map, yet they received significantly lower Z-scores in the Getis-Ord
G-statistic hot spot analysis.
The hot spots indicate areas that are especially good candidates for wind farm sites,
represented in the lower map in Figure 13 by dark red areas. In the maps presented here,
the darker the red, the more significant the clustering of high values, while the light yellow
areas represent no significant clustering of high or low values, and the darker the blue, the
more significant the clustering of low values. However, the cells identified as hot spots are
still subject to further geographical analysis during the Stage 3 Evaluation, as many of the
cells may be located within areas identified as unsuitable for development during the Stage
1 analysis, and/or they may not be in areas that meet the final Stage 3 constraint requiring
continuous land areas greater than 5,000 acres.
Figure 13: Map details showing the results of the Getis-Ord G-statistic test for Stage 2. In the
lower map, “hot spots” (clusters of high values) are shown in dark red.
4.3 Stage 3 Evaluation Results
The Stage 3 Evaluation consisted of overlaying the Stage 1 constraint map with the Stage 2
suitability map to yield an optimal areas map. The results of this overlay operation (Figure
15) show that a large percentage of suitable cells from the Stage 2 AHP-derived weighted
overlay are located in areas identified as non-suitable in Stage 1, demonstrating why it is
valuable to approach the problem from different perspectives. Relying only on one
approach or the other limits the effectiveness of the models and reduces the amount of
information contained in the maps. For example, if User A and User B were both tasked
with finding good sites for a wind farm and each used a different approach, then they might
very well come to completely different conclusions. This framework works conceptually
like a Venn Diagram (Figure 14), which makes sense because it is built on Boolean logic.
Figure 14: Venn Diagram representing the three stages of ONSWPS evaluation.
Figure 15: Map of study area showing overlay results of Stage 1 and Stage 2 analysis.
After the maps were overlaid, the next step in the Stage 3 Evaluation required identifying
land units larger than 5,000 acres. The suitable raster cells were converted to polygon
geometry to calculate the area, and adjacent polygons were aggregated to maximize
suitable areas. A distance threshold of 1600 m (≈ 1 mile) was used for the aggregation
operation, and a new layer was created from the selection of polygons > 5,000 acres. Only
four polygons met this criterion and they were included in the new layer based on the AHP-
derived weighted overlay.
Figure 16: Maps showing the difference between all Stage 3 suitable polygons (suitability
score > 0) and those polygons greater than 5,000 acres.
Figure 17: Maps showing the comparison between the optimal polygons and the results of
the Stage 3 hot spot analysis (Getis-Ord G-statistic).
The four large polygons shown in the lower map in Figure 16 are the best fit for wind farm
sites based on the Stage 3 Model, but running the Getis-Ord G-statistic test again can
elucidate any visual correlation between these sites and areas with clusters of high values.
The hot spot analysis was run again with the new set of optimal polygons, and the results
are shown in Figure 17. Based on the results of this test, only the polygon in the northwest
portion of the map (Polygon 7) correlates with clusters of high values, and so it would be a
top candidate for further investigation into wind energy development potential in the area.
One additional map is provided in Figure 18 (below) to compare the results of the hot spot
analysis with areas of high WPC and the locations of existing wind farms. This is a practical
way to verify the results of the hot spot analysis and the Stage 3 Model visually, and it
confirms that the optimal sites are located in highly suitable areas for wind energy
development. Furthermore, the map shows that Polygon 7 is an excellent candidate based
on the relative proportion of high WPC area to existing wind farms already located in that
area, suggesting that this region has an abundance of untapped wind resource potential
and room for growth.
Of course, the cautious optimist may question why there are no existing wind farms in
this seemingly bountiful area for wind energy, and this question rightfully deserves further
investigation. One thing to consider is that Polygon 7 barely exceeded the 5,000 acre
threshold, the smallest of the four (see Table 18, section 4.5), and so one could conclude
that there is simply not enough suitable area to warrant massive investment in that area
because there is little potential for expansion to nearby areas. Figure 19 supports this consideration by showing the mask used in this analysis.
Figure 18: Maps showing the comparison between areas with high WPC and suitable area hot spots.
The mask (yellow areas)
represents all areas that were not excluded due to Stage 1 constraints, and it is clear that
there is not much room to expand in this area, especially compared to the other optimal
polygons.
Also included in Figure 19 is a map showing the locations of cities and power lines, the next
two most important criteria behind WPC, which highlights a certain disadvantage for Polygon 7: its distance from existing power lines. Compared to the other optimal
polygons, which appear to have power lines running directly through them, Polygon 7
would require substantial investment to connect to the electrical grid. It is also more
remote in terms of serving large populations. These factors do not preclude Polygon 7 from
consideration as an optimal site; they simply demonstrate the complex nature of the
spatial-MCA approach and why more detailed site-specific analysis is necessary before
selecting a final site.
It is also interesting to note the location of the existing ONSWPS in relation to the suitable
areas mask developed in this framework (upper map, Figure 19). In general, the existing
wind farms are located within areas deemed suitable based on Stages 1 and 2 constraints,
and there are several that are located within or very near the Stage 3 optimal areas. At the
very least, the spatial distribution of existing ONSWPS closely mimics the spatial
distribution of the suitable areas, and this affords a level of confidence in the framework
and validates the model to some degree in real-world terms.
Figure 19: Maps showing the location of the optimal polygons in relation to cities, power
lines, existing ONSWPS, and the Stage 2 suitable areas mask.
One additional note about the patterns observable in the Stage 2 mask (upper map, Figure
19) is the presence of what appears to be an artificially propagated series of “dots” or
“holes” in the Washington portion. In fact, these holes are from the buffered wetland layer,
and to some extent are artificial in the sense that the data collection methods and definition
of what constitutes a wetland are manmade constructs, and as such will differ amongst
departments and jurisdictions, as the difference between Washington and Oregon in the above figure illustrates.
Figure 20: Maps showing locations of Tribal Lands relative to optimal sites and areas with
high WPC.
The final Stage 3 criterion concerning tribal lands was also assessed at this point. However, none of the optimal sites identified within the study area during the Stage 3 Evaluation were located on tribal lands (Figure 20), so this criterion required no further analysis.
4.4 Suitability Assessment Discussion
As evident in Figure 19, the overwhelming majority of existing wind farms are located in
areas identified by the Stage 1 and 2 models as suitable, and several wind farms exist in
areas identified as optimal, which lends credence to the effectiveness of these models.
However, there are a few wind farms that were not located within these areas, which was a
rather unexpected result, and may be explained by the conservative nature of the
constraints used in Stage 1, or perhaps these wind farms were located in areas of high WPC
despite not meeting other constraints. Upon re-examination of Figure 18 (upper map), it
can be seen that the latter case explains some of these anomalies, but another situation is
also observable, which is that some of the wind farms are not even located in areas with
high WPC. This leads to some questions about the accuracy of the WPC dataset and
emphasizes the fact that potential sites must undergo thorough wind resource assessments
(WRA) before pursuing development.
These cases aside, it is evident in Figure 19 that several wind farms were located within or
very near optimal sites identified in the models, but also that the majority of the existing
wind farms are located outside of the optimal sites. Again, this may have something to do
with the inaccuracy of the WPC dataset, as WPC was the strongest selective criterion in the
models, but it also suggests that wind energy siting is always a compromise and that there
are limits to the predictive capacity of the models in terms of real-world results. Overall
though, the sites identified as optimal by the Stage 3 Model corresponded well with the
locations of existing wind farms.
Based on the results of the Stage 3 Evaluation, four optimal sites were identified, and of
those four only one showed a significant clustering of high values. However, after further
investigation, this site (Polygon 7, hereafter called Site D) showed some limitations to
development when compared to the other three, specifically in its potential for expansion
into other areas. Economies of scale are vitally important in making wind energy cost competitive with other sources of electricity, and Site D's limited potential for expansion, combined with its remoteness, led to the decision to focus on Sites A-C for the Sensitivity Analysis.
Since Sites A-C are also closer in proximity to one another, they can be presented in a
larger-scale map. This affords the reader the ability to observe small changes in the outputs
that would otherwise be impossible at the regional extent of the entire study area.
4.5 SA Results
Two different weighting schemes were implemented to measure the sensitivity of the input
criteria weights. In the first scenario, equal weight was given to each of the six dynamic
criteria and substituted into the Stage 2 Model weighted overlay tool. In the second
scenario, the criteria were altered by 5% increments up to a threshold of ± 20% using the
OAT method. The AHP-derived criteria weights were used as the baseline dataset for both
scenarios, and the results are shown in map details of a study area sub-region for
illustrative purposes. Figure 21 provides an example of the geographic distribution of
suitable cells under the first scenario.
Figure 21: Visualized results of the first SA weighting scenario, showing suitable areas based
on two different weighting schemes.
The map on the left shows the results of the AHP-derived input criteria weights, while the
map on the right shows the output under the equally weighted scheme. The location of
suitable cells is virtually identical in both maps, but the values of many cells are slightly
different. A graphic illustration of the cell distribution provides a more quantifiable
example of the differences (Figures 22 and 23).
Figure 22: Histograms showing the differences in suitable cell distribution between two
different weighting schemes.
Figure 23: Line graph comparing suitable cell distribution (cell count by suitability score, 3 through 10) between the two weighting schemes, AHP (default) and Equal Weight; correlation coefficient 0.9897.
The distributions and cell counts are similar under both weighting schemes, and although
there is an additional cell value (‘3’) included in the equally weighted results, there are only
four cells with that suitability score (Table 17). The largest difference between the two is
the number of cells scored at a value of ‘8’ (355 for AHP compared to 291 for Equal
Weight), which may impact the amount of suitable acreage if one chooses to select only
suitability scores ≥ 8. However, if one draws from the entire set of suitable cells, the
number of total cells selected in both weighting schemes is exactly the same. In general, the
AHP-based distribution is skewed slightly toward the higher values, while the Equal
Weight-based cell counts represent a more classic distribution.
Table 17: Suitable cell value statistics under two weighting schemes.

Cell Value   Cell Count: AHP (default)   Cell Count: Equal Wt.   Standard Deviation   Avg. Deviation
3            0                           4                       2.828                2.00
4            23                          37                      9.899                7.00
5            133                         152                     13.435               9.50
6            291                         307                     11.314               8.00
7            468                         463                     3.536                2.50
8            355                         291                     45.255               32.00
9            104                         118                     9.899                7.00
10           2                           4                       1.414                1.00
SUM          1376                        1376                    MEAN                 8.63

Correlation Coefficient: 0.98974
Average Deviation: 8.57
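The summary statistics in Table 17 can be reproduced directly from the two cell-count columns; a short numpy check (counts copied from the table):

```python
# Reproducing the Table 17 summary statistics from the two cell-count columns
# (counts copied from the table; scores 3 through 10).
import numpy as np

ahp   = np.array([0, 23, 133, 291, 468, 355, 104, 2])
equal = np.array([4, 37, 152, 307, 463, 291, 118, 4])

std_per_score = np.std([ahp, equal], axis=0, ddof=1)   # e.g. 2.828 for score 3
avg_dev       = np.abs(ahp - equal) / 2.0              # e.g. 2.00 for score 3
corr          = np.corrcoef(ahp, equal)[0, 1]          # ~0.9897, as reported

print(ahp.sum(), equal.sum())          # 1376 and 1376
print(std_per_score.round(3))
print(avg_dev)
print(round(corr, 5))
```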
To find out if the difference in cell distribution had any effect on the location of optimal
sites, the suitable cells were converted to polygons and then aggregated within a distance
of 1,600 m (≈ 1 mile), replicating the parameters of the Stage 3 Model. Table 18 shows the
difference in acreage among the four optimal polygons (larger than 5,000 acres) between
the AHP-derived and the equal weighting schemes.
Table 18: Acreage statistics for optimal polygons under two different weighting schemes.

        AHP Optimal Polygons          Equal Weight Optimal Polygons
Site    Polygon ID    Acres           Polygon ID    Acres
A       30            11266.5         29            11319.2
B       43            10408.3         43            10289.6
C       37            7209.1          51            9324.4
D*      7             5625.1          7             5502.1
SUM                   34,509.0                      36,435.3

*Not shown in figures this section.
The differences in total acreage are slight, and at the regional scale they
are visually indistinguishable. Figure 24, which shows a small sub-region of the study area,
reveals that the differences in Polygon ID are simply due to the numbering process used in
aggregation, not that different polygons were selected in other locations. The shapes of the
polygons are slightly different, but they are essentially the same sites. Neither method
resulted in a consistent increase or decrease in acreage across all four sites, providing little
insight into the overall impact of the input criteria weights on the output areas under the
first scenario. The overall difference in acreage however suggests that the equal weighting
scheme is generally less selective in terms of high cell values.
These results are also heavily influenced by the aggregation process and the 1,600 m
aggregation distance. Trials were done at 400 m (one cell width) and 800 m (two cell
widths), both times resulting in only one polygon larger than 5,000 acres. This seemed
peculiar given the highly clustered suitable polygons, and so it was decided to use the
larger aggregation distance of 1,600 m to try to maximize the number of optimal polygons
without compromising the data. For preliminary analysis, this level of aggregation was
considered acceptable for identifying potential development sites.
Figure 24: Map detail showing the differences in optimal polygons A, B, and C between two
weighting schemes (AHP and Equal Weight).
Under the OAT scenario, the input criteria values were altered by 5% increments to a
threshold of ±20% to simulate small perturbations or errors, in order to evaluate the
sensitivity of the individual criteria. Table 19 shows an example of the adjusted OAT
criteria values used in the weighted overlays (the complete set of OAT criteria weight
tables is displayed in the Appendix). WPC and the proximity to the electrical grid (GRID)
were key variables to investigate because of their large influence on the results and
because they are different types of constraints; WPC suitability is based on geographic
correlation, while GRID is a distance-dependent criterion. Note that the main changing
criteria weights (C_m) are identical for both WPC and GRID, as are the adjusted weights for the other criteria (C_i), because their AHP-derived criteria weights are identical.
Table 19: SA values for the WPC and GRID criteria using the OAT method.

WPC as the main changing criterion (C_m); all other criteria adjusted (C_i):
  %      WPC     GRID    URBCITY   ROAD    LANDCOV   SLOPE    SUM
 0.20    0.364   0.277   0.154     0.088   0.088     0.030    1.000
 0.15    0.348   0.283   0.158     0.090   0.090     0.031    1.000
 0.10    0.333   0.290   0.162     0.092   0.092     0.032    1.000
 0.05    0.318   0.296   0.165     0.094   0.094     0.032    1.000
 0.00    0.303   0.303   0.169     0.096   0.096     0.033    1.000
-0.05    0.288   0.310   0.173     0.098   0.098     0.034    1.000
-0.10    0.273   0.316   0.176     0.100   0.100     0.034    1.000
-0.15    0.258   0.323   0.180     0.102   0.102     0.035    1.000
-0.20    0.242   0.329   0.184     0.104   0.104     0.036    1.000

GRID as the main changing criterion (C_m); all other criteria adjusted (C_i):
  %      GRID    URBCITY   ROAD    LANDCOV   SLOPE    WPC      SUM
 0.20    0.364   0.154     0.088   0.088     0.030    0.277    1.000
 0.15    0.348   0.158     0.090   0.090     0.031    0.283    1.000
 0.10    0.333   0.162     0.092   0.092     0.032    0.290    1.000
 0.05    0.318   0.165     0.094   0.094     0.032    0.296    1.000
 0.00    0.303   0.169     0.096   0.096     0.033    0.303    1.000
-0.05    0.288   0.173     0.098   0.098     0.034    0.310    1.000
-0.10    0.273   0.176     0.100   0.100     0.034    0.316    1.000
-0.15    0.258   0.180     0.102   0.102     0.035    0.323    1.000
-0.20    0.242   0.184     0.104   0.104     0.036    0.329    1.000
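The adjusted weights in Table 19 are consistent with perturbing the main changing criterion (C_m) by the stated percentage and rescaling the remaining criteria (C_i) proportionally so the weights still sum to 1. A small Python sketch of that rescaling, which reproduces the +20% WPC row, is shown below (the function name is illustrative):

```python
# Rescaling used for the OAT tables: perturb the main changing criterion (C_m)
# by pct and scale the remaining criteria (C_i) so the weights still sum to 1.
# Baseline weights are the 0.00 row of Table 19.
baseline = {"WPC": 0.303, "GRID": 0.303, "URBCITY": 0.169,
            "ROAD": 0.096, "LANDCOV": 0.096, "SLOPE": 0.033}

def oat_weights(weights, main, pct):
    w_main = weights[main] * (1.0 + pct)
    scale = (1.0 - w_main) / (1.0 - weights[main])
    return {name: (w_main if name == main else w * scale)
            for name, w in weights.items()}

print(oat_weights(baseline, "WPC", 0.20))
# -> WPC 0.364, GRID 0.277, URBCITY 0.154, ROAD 0.088, LANDCOV 0.088, SLOPE 0.030
```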
Figure 25: Maps showing the difference in suitable cell values for the WPC criterion under
the OAT method (± 20%).
The visualized results of the ±20% criteria weight changes for the WPC criterion are shown
in Figure 25. Again, it is difficult to comprehend the quantitative differences in suitable cell
distributions from the maps, but the locations are unmistakably similar. And again the
histograms (Figure 26) provide a better quantitative assessment of the differences, clearly
exhibiting the skewed distribution toward higher suitability scores for the WPC +20%
perturbation.
Figure 26: Histograms showing the differences in suitable cell distributions for the WPC
criterion at ± 20% of the baseline values.
When combined with the AHP vs. Equal Weight distribution results, the WPC ±20% results
begin to show a pattern involving the influence of WPC on the distribution of cell values.
The AHP scheme, which gave WPC a 30% weight, as compared to the Equal Weight scheme
which gave WPC an approximately 17% weight, has a similarly skewed distribution to that
of the WPC +20% scheme, which used a 36% weight compared to 24% for the WPC -20%
scheme. This suggests that the output cell values, particularly at the high end, are relatively
sensitive to the input weight assigned to WPC. The correlation appears to be that the higher
the input weight for the WPC criterion, the larger the number of cells with high suitability
scores, specifically cells with values ≥ 8.
Figure 27: Line graphs showing the OAT ±20% cell distributions for the six dynamic criteria.
An interesting pattern is observable in Figure 27, with a steady increase in the correlation
coefficients as the perceived importance of each variable decreases. The relationship would be simple to state if the AHP-derived input criteria weights descended steadily in value: the greater the
input weight, the more sensitive the output is to small perturbations. However, the WPC
and GRID criteria had the same AHP-derived input weights (30%), as did the ROAD and
LANDCOV criteria (10%), so the relationship is not quite that straightforward. Looking at
the GRID criterion may help to explain some of the variation in the output cell distributions.
Figure 28: Maps showing the differences in suitable cell distribution for the GRID criterion
under the OAT method (±20%).
Predictably, the locations of the suitable cells are the same (Figure 28), but this time the
distributions are more similar as evident in the histograms in Figure 29. However, it is still
difficult to visually assess the differences between the impacts of the +20% perturbations
on the cell distributions for the two most influential criteria, WPC and GRID, which both
had input criteria weights of 30%. A side-by-side comparison of the distributions (Figure
30) between the two layers provides a better perspective of the differences.
Figure 29: Histograms showing the differences in suitable cell distributions for the GRID
criterion at ±20% of the baseline values.
Figure 30: Histograms comparing the WPC layer and the GRID layer suitable cell
distributions under the OAT weighting scheme.
Although the two layers had identical input criteria weights of 30% under the AHP scheme,
and their subsequent OAT criteria weights were identical, they exhibit distinctly different
distributions, and the difference in their average standard deviations is indicative of a
larger pattern among all six dynamic criteria. Table 20 provides a statistical perspective of
the results, and a closer examination of the average standard deviations reveals this
pattern more clearly.
Table 20: Cell distribution statistics for all six dynamic criteria under the OAT weighting
scheme (±20%).
4.6 SA Discussion
Comparing the equal weighting scheme to the baseline (AHP-derived) weighting scheme
provided little insight into the sensitivity of the criteria weights to changes in input values.
The equal weighting scheme, which altered several of the input criteria weights
considerably, exhibited a nearly classic (bell-shaped) distribution of suitable cells, while the AHP-
derived weighting scheme showed a slight inclination towards higher suitable cell values.
This was likely due to the larger influence of the WPC criterion
under the AHP weighting scheme. In both cases, the distributions were similar enough that
no significant difference in the selection of optimal areas resulted.
Under the OAT scenario there were some interesting patterns observable, with the
influence of the WPC layer being the predominant factor in the differences in cell
distributions. Despite having identical input criteria weights, the WPC layer and the GRID
layer had substantially different distributions under the ±20% variations. The WPC
criterion showed a larger average standard deviation and a smaller correlation coefficient
when isolated compared to the GRID criterion, but the defining indicator was that an
increase in the selection of higher cell values was seen under both scenarios whenever
there was an increase in the WPC criterion weight (Figure 30). However, this was not the
case when the situation was reversed: the GRID criterion actually showed a greater tendency toward higher cell values when the GRID criterion weight was -20% (which meant that the WPC criterion weight was correspondingly increased).
While it is natural for the criteria with large influences to be the most sensitive to
perturbations, this example comparing the WPC and GRID criteria, especially when
combined with the results of the AHP vs. the equal-weighting scheme results, indicates that
the WPC layer is the most sensitive to small perturbations in input criteria weights, and
therefore has the most impact on the distribution of suitable cells.
CHAPTER FIVE: CONCLUSIONS
5.1 General Conclusions
The framework developed in this thesis successfully identified areas suitable for wind
energy development based on a thorough set of criteria and three stages of evaluation,
resulting in the selection of four optimal sites. The GIS models developed within this
framework proved to be effective at handling the various types of data necessary for the
analysis, and they can be adapted to other situations or study areas. The maps created here
contain an abundance of information about the suitability of particular areas for wind
energy development, and the AHP-MCA methodology employed in this framework is
robust, quantifiable, and defensible.
The importance of criteria selection and constraint determination in site suitability studies
cannot be emphasized enough; these processes are arguably more important than the
methodology itself. The more comprehensive the set of criteria constraints used in the
preliminary analysis, the more likely the project will be to avoid costly setbacks and
unnecessary resource allocation during the site search process. While detailed economic
analysis is a necessary part of the site search and is included in some preliminary site
suitability studies, this thesis advocates an approach that postpones this type of analysis
until a set of physically feasible sites has first been identified.
It is the opinion of this author that too many studies, papers, and reports are overly liberal
with their assessment of developable area for wind energy. One of the primary causes of
this is an incomplete set of criteria. Most studies assess site suitability by volume (how many acres can be developed) rather than by the quality of the developable areas. Perhaps there
are incentives in place for “finding” more acreage to develop, but does it really help
planners and developers if they have to sift through unfit sites that could have been
eliminated during preliminary analysis? Even those who unabashedly support wind energy
acknowledge the limits on productive land area, as wind energy requires massive
continuous tracts of land and special atmospheric conditions. This thesis has attempted to
be conservative with its assessments of suitable areas for wind energy development by
being more selective, including more criteria, and excluding more area in order to identify
the best sites, rather than simply the most sites.
Studies that use liberal constraints or limited sets of criteria, and therefore identify a
perhaps disproportionate amount of suitable land area, inherently find that all the existing
wind energy developments are located within the areas that they have identified as
suitable, and they may use that as evidence that their approach is effective. It is like saying
that all areas on the surface of the Earth that meet the criterion of being a water body are
suitable for sailing a boat. Several studies throughout the literature did not even include
the proximity to the electrical transmission grid as a criterion, which is clearly an
ineffective approach. This thesis hopes to provide support for the notion that less is more,
in terms of quantity vs. quality, and that taking the necessary precautions and evaluating
more thoroughly the relevant criteria and constraints at the preliminary stage is a
beneficial approach to the site selection process.
5.2 Technical Conclusions
Chen et al. (2010) suggest that there is a lack of SA in spatial MCA approaches in the
literature, and they assert that, where SA is conducted on the criteria input weights as
opposed to the input values, weight sensitivity should be visualized geographically when
possible. In other words, there is a lack of appropriate spatial analysis in the arena of GIS-
MCA site suitability approaches. In fact, of the four studies reviewed in this thesis, only one
study even presented a visualization of the SA results (in Tegou et al., 2010). This ratio
seems to be consistent, if not generous, throughout the literature, and it highlights an area
where the visualization capabilities of GIS can be exemplified to great effect.
This thesis has attempted to address this criticism in two ways: first through a cartographic
presentation of the results, which provides a substantial amount of information through the
visual medium that is unique to maps, and second through the use of distribution graphs
and tables that provide a quantitative, while still visual, view of the results, creating a
bridge between numbers/values and their respective locations in space.
This combination of approaches demonstrates the versatility and effectiveness of GIS
software packages to evaluate such complex decision-making tasks, and it provides a
robust set of results on which to base those decisions. Spatial SA has proved to be a
powerful tool for identifying patterns and establishing a considerable level of confidence in
the results. It has also provided a means of assessing the capabilities and limits of the
models developed here.
The accurate estimation of criteria weights is imperative for spatial analysis overlay
approaches. The results from this analysis show that the perceived importance (input
weight) of the criteria has a substantial impact on the suitable cell distributions of a
selected area, and the effects of small perturbations (±20% of the baseline values) increase
as the criteria weight increases (Figure 27). In most cases, these effects were relatively
small, but this pattern suggests that if a criterion is assigned a very large input weight then
the effects of small perturbations would have a significant impact on the cell distribution.
This highlights the benefits of using the AHP to derive input criteria weights. The AHP
provides a means of ranking the importance of diverse criteria on a common scale (i.e. the
ability to compare “apples and oranges”) and therefore delivers a more accurate
approximation of their influence on the final outcome. The results of this analysis showed
that the AHP-based weighting scheme was more selective in terms of identifying suitable
land area (Table 18) and it selected areas with higher suitability scores (Table 17; Figures
22 and 23) when compared to the outputs under the equal weighting scheme.
However, the results are not overwhelmingly significant in favor of the AHP outputs, and
one may question whether it is worth going through all the trouble to use the AHP when
the equal weighting scheme produced visually similar results. One reason for this similarity
is the Stage 1 excluded areas and the subsequent mask applied to the Stage 2 outputs, which left very little land area to examine (Table 18); this is why the cell counts were identical. It is
likely that the two methods would yield remarkably different cell counts if the remaining
land area were not limited by the excluded areas mask.
Another factor to consider is that the AHP-derived input criteria weights were relatively
similar in this case, ranging from 30% to 3%, while the Equal Weights scheme assigned an
approximately 17% weight to all six criteria. When altered by ±20%, many of the criteria
were within a few percentage points of one another (Figures A-F, Appendix), with many
near 17%, so this study may not be a great example of the effectiveness of AHP over a
uniform weighting scheme. Nevertheless, the AHP did show improved results and one
could expect it to be considerably more accurate if applied to a situation where the input
criteria weights were more widely varied, for example ranging from 60% to 3%.
In addition, the AHP is mathematically defensible, and if the results were being measured
purely in mathematical terms, rather than spatial distributions, one could calculate a clear-
cut best option or set of best options. This could also be possible with these results if one
were to calculate the overall suitability scores for each of the optimal sites, and this is one
area where this methodology could be expanded. A more detailed assessment of the
differences between the AHP outputs and the equally weighted outputs would improve the
confidence in these results as well, such as applying the ±20% OAT approach to the equally
weighted criteria and conducting the Stage 3 analysis and the SA on the entire study area
without being limited by the excluded areas mask.
Another important conclusion from this project is that the data conversion and
organization process is critical to the success of the analysis. Acquiring the necessary
datasets is obviously important as well, but taking the time to convert the datasets into
common formats saves many headaches later in the analysis. Also, with the inclusion of a
large set of input criteria, the number of data layers in the GIS can get out of hand quite
easily, so organization is also key. This thesis employed an ArcGIS file geodatabase to
organize the data, which handles much of the data management automatically and enforces
some basic consistency among the datasets used in the analysis.
One of the primary features that the geodatabase manages is the cell size of raster datasets.
This framework used two different cell sizes during the project: 798 m during the data
conversion process and 400 m during the construction of the models and subsequent
analysis. It is unclear exactly what type of impact this may have had on the analysis results,
but in terms of balancing time-savings with the accuracy of the results, the chosen cell sizes
seemed acceptable for preliminary regional analysis. However, a more consistent approach
may improve the accuracy of the results.
Another way that this framework might be improved is through the use of fuzzy sets for
defining categorical membership. While Boolean logic is simple to use and to understand, it
is this simplicity that is invariably problematic for complex multi-criteria analysis where
many shades of gray exist. Studies in recent years have explored the use of fuzzy measures
for wind farm siting using MCA-GIS approaches and found this approach has many benefits
over Boolean overlay, weighted summation, or weighted linear combination (WLC)
approaches (Boroushaki & Malczewski, 2008; Hansen, 2005; Jiang & Eastman, 2000). Of
course, adding this type of complexity to an already complex process may be a more
academic endeavor than many planners and developers wish to engage in, but it certainly
has the potential to improve the accuracy of the results and, most importantly, provide a
stronger level of confidence in the decisions.
AHP is an excellent means of ensuring consistency in the decision-making process, but it
has its limits, too. There are many sources of criticism throughout the literature describing
the shortcomings of AHP, most of which concern the advanced mathematics and theories
involved, but one of the limitations relates to the use of fuzzy set theory. Boroushaki and
Malczewski (2008) point out that the AHP is limited in its linguistic ability for describing
quantities (i.e. “few,” “many,” “one,” etc.), and they propose that a combination of AHP with
ordered weight averaging (OWA) methods, which include linguistic quantifiers, could
expand the range of decision strategies available using fuzzy logic.
Another limitation of the AHP is the possibility of a paradoxical situation where the
decision-maker has created a pairwise matrix to the best of their ability, but still fails the
consistency test (Karapetrovic & Rosenbloom, 1999). This could easily happen when there
are large numbers of decision makers with widely varying levels of background knowledge
and expertise, as in RES site selection processes. Karapetrovic and Rosenbloom (1999)
suggest adding a quality-control approach to the AHP consistency check, and although it
would not apply directly to this study, it may be a beneficial element to add to the
methodology when trying to adapt it to other areas or situations.
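For context, the consistency check being discussed reduces to computing the consistency ratio CR = CI / RI, where CI = (λmax − n)/(n − 1) and RI is Saaty's random index; a pairwise matrix is conventionally accepted when CR is below roughly 0.10. The sketch below illustrates the calculation on a purely hypothetical 3 × 3 matrix, not the matrix used in this thesis.

```python
# Illustration of the AHP consistency check: derive weights from a pairwise
# comparison matrix via the principal eigenvector and compute the consistency
# ratio (CR). The matrix below is purely hypothetical, not the one used here.
import numpy as np

A = np.array([[1.0,   3.0, 5.0],
              [1/3.0, 1.0, 2.0],
              [1/5.0, 0.5, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                    # principal (Perron) eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                       # normalized criteria weights

n = A.shape[0]
lam_max = eigvals.real[k]
CI = (lam_max - n) / (n - 1)                   # consistency index
RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]   # Saaty's random index values
CR = CI / RI
print(weights.round(3), round(CR, 3), "OK" if CR < 0.10 else "revisit judgments")
```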
As the methodology exists now, I believe it could be applied in other areas reasonably well,
particularly those with similar socio-economic status and political goals. There are aspects
to RES siting that are inherently specific to local and/or regional legislation surrounding
development, and traditional supply/demand models do not consistently apply when
massive government subsidies are present, so it is highly improbable that any universal
model could ever be developed that would effectively apply in every situation. These types
of pressures largely relate to the criteria addressed in Stage 1, and could be adjusted with
minor effort. However, the strength of this framework is that the Stage 2 dynamic criteria,
which are primarily physical (or geographical), and thus generally avoid legislative
trappings, are widely adaptable to any region for preliminary analysis.
5.3 Future Work
One natural area to expand the work presented in this thesis is to evaluate the economic
costs, benefits, and risks associated with developing the selected sites. Several approaches
are evident at varying levels of detail in the literature, and despite their limited ability to
accurately portray development costs, these approaches can provide valuable information
and another means of evaluating potential sites. One approach that has particular appeal
for this type of analysis is presented by Lee et al. (2009). It combines AHP with benefits,
opportunities, costs, and risks (BOCR), and may fit well within the scope of preliminary
analysis without getting too site-specific.
A second extension of this research, which is of personal interest, would be to integrate
GIS-based learning modules into the education system in order to investigate the
hypothesis that spatial thinking, through the application of spatial science theory and GIS
technology, improves overall student performance, particularly in math and science. A recent
emphasis on improving math and science performance in the U.S. has led to the
implementation of increasingly popular “place-based” education initiatives and new
national Science, Technology, Engineering, and Math (STEM) Education standards (Kuenzi,
Matthews, & Mangan, 2006; The White House, 2011).
The education system is an arena that stands to benefit from an infusion of spatial thinking
and spatial problem solving technology, especially if it can be found to improve overall
student performance. Using the example of ONSWPS site selection, I would eventually like
to implement this tool into such learning modules as an example of spatial problem solving
using GIS. The motivation to implement the tool developed in this thesis into course
modules has informed the design of this project and its outcomes to some degree, and I
believe this would be a tremendous opportunity to expand and strengthen the role of GIS in
students’ lives and promote spatial literacy in the public arena.
Another way in which this research could have a greater impact on the public is through
the production of interactive, web-based maps. Web-based mapping applications have
been developed for wind energy siting and show serious potential for improving
information dissemination and increasing public participation (Berry, Higgs, Fry, &
Langford, 2011; Bishop & Stock, 2010; Jankowski, 2009; Simao, Densham, & Haklay, 2009).
At the very least, the publication of these suitability maps online, such as through the
ArcGIS Online interface, could provide a heightened level of access to this information for
planners, decision makers, politicians, students, and the general public. Improving access to
this type of information will be increasingly important as suitable land area is reduced
through urban sprawl and competing wind energy development, and as renewable energy
sources are mandated to become a larger percentage of the energy mix in the near future
due to renewable portfolio standards and pressures to move away from fossil fuel-based
sources of electricity generation. If the United States is going to achieve its goal of 20%
wind energy by the year 2030 (U.S. Department of Energy, 2008), tools like this will play a
substantial role in effectively finding optimal sites for wind energy development, and the
information presented here and in similar studies will be invaluable for educating people
about the complex issues involved in finding the best sites.
REFERENCES
Adkins, D. (2009). Wind Farm Permitting in National Forests. Jackson Kelly PLLC. Retrieved
June 17, 2012, from http://eem.jacksonkelly.com/2009/08/wind-farm-permitting-
in-national-forests.html
American Wind Energy Association [AWEA]. (2004). Electrical Guide to Utility Scale Wind
Turbines. Washington, D.C.: American Wind Energy Association. Retrieved
December 8, 2011, from
http://www.awea.org/documents/issues/upload/AWEA_Electrical_Guide_to_Wind_
Turbines.pdf
American Wind Energy Association [AWEA]. (2008). Wind Energy for a New Era.
Washington, D.C.: American Wind Energy Association.
Anderson, J., Hardy, E., Roach, J., & Witmer, R. (1976). A land use and land cover
classification system for use with remote sensor data (Geological Survey Professional
Paper 964). U.S. Department of the Interior. Washington, D.C.: United States
Government Printing Office. Retrieved June 17, 2012, from
http://www.ncrs.fs.fed.us/4153/deltawest/landcover/AndersonDoc.pdf
Baban, S., & Parry, T. (2001). Developing and applying a GIS-assisted approach for locating
wind farms in the UK. Renewable Energy, 24(1), 59-71. doi:10.1016/S0960-
1481(00)00169-5
Barrios, L., & Rodriguez, A. (2004). Behavioural and environmental correlates of soaring-
bird mortality at on-shore wind turbines. Journal of Applied Ecology, 41(1), 72-81.
Retrieved from http://www.jstor.org/stable/3505881
Berry, R., Higgs, G., Fry, R., & Langford, M. (2011). Web-based GIS approaches to enhance
public participation in wind farm planning. Transactions in GIS, 15(2), 147-172.
doi:10.1111/j.1467-9671.2011.01240.x
Billinton, R., & Gao, Y. (2008). Multistate wind energy conversion system models for
adequacy assessment of generating systems incorporating wind energy. IEEE
Transactions on Energy Conversion, 23(1), 163-170. doi:10.1109/TEC.2006.882415
Bishop, I. D. (2011). Landscape planning is not a game: Should it be? Landscape and Urban
Planning, 100, 390-392. doi:10.1016/j.landurbplan.2011.01.003
Bishop, I. D., & Stock, C. (2010). Using collaborative virtual environments to plan wind farm
installations. Renewable Energy, 35(10), 2348-2355.
doi:10.1016/j.renene.2010.04.003
Boccard, N. (2009). Capacity factor of wind power realized values vs. estimates. Energy
Policy, 37, 2679-2688. doi:10.1016/j.enpol.2009.02.046
Bohn, C., & Lant, C. (2009). Welcoming the wind? Determinants of wind power
development among U.S. states. The Professional Geographer, 61(1), 87-100.
doi:10.1080/00330120802580271
Boroushaki, S., & Malczewski, J. (2008). Implementing an extension of the analytical
hierarchy process using ordered weighted averaging operators with fuzzy
quantifiers in ArcGIS. Computers & Geosciences, 34, 399-410.
doi:10.1016/j.cageo.2007.04.003
Cavallaro, F., & Ciraolo, L. (2005). A multicriteria approach to evaluate wind energy plants
on an Italian island. Energy Policy, 33, 235-244. doi:10.1016/S0301-
4215(03)00228-3
Chen, Y., Yu, J., & Khan, S. (2010). Spatial sensitivity analysis of multi-criteria weights in
GIS-based land suitability evaluation. Environmental Modeling & Software, 25, 1582-
1591. doi:10.1016/j.envsoft.2010.06.001
Conley, J., Bloomfield, B., St. George, D., Simek, E., & Langdon, J. (2010). An ecological risk
assessment of wind energy development in Eastern Washington. Seattle: The Nature
Conservancy.
Dabiri, J. O. (2011). Potential order-of-magnitude enhancement of wind farm power density
via counter-rotating vertical-axis wind turbine arrays. Journal of Renewable and
Sustainable Energy, 3(043104), 1-12. doi:10.1063/1.3608170
DeCarolis, J. F., & Keith, D. W. (2006). The economics of large-scale wind power in a carbon
constrained world. Energy Policy, 34, 395-410. doi:10.1016/j.enpol.2004.06.007
Denholm, P., Hand, M., Jackson, M., & Ong, S. (2009). Land-use requirements of modern wind
power plants in the United States. Golden, CO: National Renewable Energy
Laboratory. doi:10.2172/964608
Denholm, P., Kulcinski, G., & Holloway, T. (2005). Emissions and energy efficiency
assessment of baseload wind energy systems. Environmental Science & Technology,
39(6), 1903-1911. doi:10.1021/es049946p
Denny, E., & O'Malley, M. (2006). Wind generation, power system operation, and emissions
reduction. IEEE Transactions on Power Systems, 21(1), 341-347.
doi:10.1109/TPWRS.2005.857845
Dominguez, J., & Amador, J. (2007). Geographical information systems applied in the field of
renewable energy sources. Computers & Industrial Engineering, 52, 322-326.
doi:10.1016/j.cie.2006.12.008
Elliott, D., Wendell, L., & Gower, G. (1991). An assessment of the available windy land area
and wind energy potential in the contiguous United States. Battelle Memorial
Institute, U.S. Department of Energy. Richland, WA: Pacific Northwest Laboratory.
doi:10.2172/5252760
Environmental Protection Authority. (2003). Environmental Noise Guidelines: Wind Farms.
Adelaide: Environmental Protection Authority. Retrieved from www.epa.sa.gov.au
Gamboa, S. (2011, November 28). Administration unveils new rules for tribal lands.
Washington, D.C.: Associated Press. Retrieved June 17, 2012, from
http://www.boston.com/news/nation/articles/2011/11/28/administration_unveil
s_new_rules_for_tribal_lands/
Granovskii, M., Dincer, I., & Rosen, M. (2007). Greenhouse gas emissions reduction by use of
wind and solar energies for hydrogen and electricity production: Economic factors.
International Journal of Hydrogen Energy, 32, 927-931.
doi:10.1016/j.ijhydene.2006.09.029
Griffiths, J. C., & Dushenko, W. T. (2011). Effectiveness of GIS suitability mapping in
predicting ecological impacts of proposed wind farm development on Aristazabal
Island, BC. Environment, Development and Sustainability, 13, 957-991.
doi:10.1007/s10668-011-9300-1
Hansen, H. (2005). GIS-based multi-criteria analysis of wind farm development. In H.
Hauska, & H. Tveite (Ed.), ScanGIS 2005: Proceedings of the 10th Scandinavian
Research Conference on Geographical Information Science (pp. 75-87). Department of
Planning and Environment.
Hoogwijk, M., de Vries, B., & Turkenburg, W. (2004). Assessment of the global and regional
geographical, technical, and economic potential of onshore wind energy. Energy
Economics, 26, 889-919. doi:10.1016/j.eneco.2004.04.016
Ibrahim, H., Ghandour, M., Dimitrova, M. I., & Perron, J. (2011). Integration of wind energy
into electricity systems: Technical challenges and actual solutions. Energy Procedia,
6, 815-824. doi:10.1016/j.egypro.2011.05.092
Janke, J. R. (2010). Multicriteria GIS modeling of wind and solar farms in Colorado.
Renewable Energy, 35, 2228-2234. doi:10.1016/j.renene.2010.03.014
Jankowski, P., & Nyerges, T. (2003). Toward a framework on geographical information-
supported decision making. Journal of the Urban and Regional Information Systems
Association (URISA), 15(1), 9. Retrieved from
http://www.urisa.org/Journal/protect/APANo1/jankowski.pdf
Jankowski, P. (1995). Integrating geographical information systems and multiple criteria
decision-making methods. International Journal of Geographic Information, 9(3),
251-273. doi:10.1080/02693799508902036
Jankowski, P. (2009). Towards participatory geographic information systems for
community-based environmental decision making. Journal of Environmental
Management, 90, 1966-1971. doi:10.1016/j.jenvman.2007.08.028
Jiang, H., & Eastman, J. (2000). Application of fuzzy measures in multi-criteria evaluation in
GIS. International Journal of Geographic Information Science, 14(2), 173-184.
doi:10.1080/136588100240903
Jobert, A., Laborgne, P., & Mimler, S. (2007). Local acceptance of wind energy: Factors of
success identified in French and German case studies. Energy Policy, 35, 2751-2760.
doi:10.1016/j.enpol.2006.12.005
Karapetrovic, S., & Rosenbloom, E. (1999). A quality control approach to consistency
paradoxes in AHP. European Journal of Operational Research, 119(3), 704-718.
doi:10.1016/S0377-2217(98)00334-8
Kuenzi, J. J., Matthews, C. M., & Mangan, B. F. (2006). Science, Technology, Engineering, and
Mathematics (STEM) Education issues and legislative options. Washington, D.C.:
Congressional Research Service, The Library of Congress.
Kunz, T., Arnett, E., Cooper, B., Erickson, W., Larkin, R., Mabee, T., . . . Szewczak, J. (2007).
Assessing impacts of wind-energy development on nocturnally active birds and
bats: A guidance document. Journal of Wildlife Management, 71(8), 2449-2486.
doi:10.2193/2007-270
Kuvlesky Jr., W., Brennan, L., Morrison, M., Boydston, K., Ballard, B., & Bryant, F. (2007).
Wind energy development and wildlife conservation: Challenges and opportunities.
Journal of Wildlife Management, 71(8), 2487-2498. doi:10.2193/2007-248
Lee, A., Chen, H., & Kang, H. (2009). Multi-criteria decision making on strategic selection of
wind farms. Renewable Energy, 34(1), 120-126. doi:10.1016/j.renene.2008.04.013
Loring, J. M. (2007). Wind energy planning in England, Wales, and Denmark: Factors
influencing project success. Energy Policy, 35, 2648-2660.
doi:10.1016/j.enpol.2006.10.008
Madders, M., & Whitfield, D. P. (2006, March 27). Upland raptors and the assessment of
wind farm impacts. Ibis, 148(Supplement s1), 43-56.
Malczewski, J. (2004). GIS-based land-use suitability analysis: a critical overview. Progress
in Planning, 62, 3-65. doi:10.1016/j.progress.2003.09.002
Marinoni, O. (2004). Implementation of the analytical hierarchy process with VBA in
ArcGIS. Computer & Geosciences, 30, 637-646. doi:10.1016/j.cageo.2004.03.010
Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our
capacity for processing information. The Psychological Review, 63(2), 81-97.
doi:10.1037/h0043158
National Renewable Energy Laboratory [NREL]. (2012, January 12). Wind Farm Area
Calculator. Retrieved from NREL Energy Analysis Website:
http://www.nrel.gov/analysis/power_databook/calc_wind.php
North Carolina State University. (2011). State Incentives, Washington State. Retrieved from
Database of State Incentives for Renewables and Efficiency [DSIRE]:
http://www.dsireusa.org/incentives/incentive.cfm?Incentive_Code=WA15R&re=1
&ee=1
O'Sullivan, D., & Unwin, D. (2010). Geographical Information Analysis. Hoboken, NJ: John
Wiley & Sons.
Ozerdem, B., Ozer, S., & Tosun, M. (2006). Feasibility study of wind farms: A case study for
Izmir, Turkey. Journal of Wind Engineering and Industrial Aerodynamics, 94, 725-
743. doi:10.1016/j.jweia.2006.02.004
Pohekar, S., & Ramachandran, M. (2004). Application of multi-criteria decision making to
sustainable energy planning: A review. Renewable and Sustainable Energy Reviews,
8, 365-381. doi:10.1016/j.rser.2003.12.007
Prasad, R., Bansal, R., & Sauturaga, M. (2009). Some of the design and methodology
considerations in wind resource assessment. IET Renewable Power Generation, 3(1),
53-64. doi:10.1049/iet-rpg:20080030
Ramirez-Rosado, I., Garcia-Garrido, E., Fernandez-Jimenez, L., Zorzano-Santamaria, P.,
Monteiro, C., & Miranda, V. (2008). Promotion of new wind farms based on a
decision support system. Renewable Energy, 33, 558-566.
doi:10.1016/j.renene.2007.03.028
Rodman, L. C., & Meentemeyer, R. K. (2006). A geographic analysis of wind turbine
placement in Northern California. Energy Policy, 34, 2137-2149.
doi:10.1016/j.enpol.2005.03.004
Rosenburg, R. H. (2008a). Diversifying America's energy future: The future of renewable
wind power. Virginia Environmental Law Journal, 26(3), 505-544.
Rosenburg, R. H. (2008b). Making renewable energy a reality- Finding ways to site wind
power facilities. William & Mary Environmental Law and Policy, 32(3), 635-684.
Retrieved December 9, 2011, from
http://scholarship.law.wm.edu/wmelpr/vol32/iss3/3
Saaty, T. L. (1977). A scaling method for priorities in hierarchical structures. Journal of
Mathematical Psychology, 15(3), 234-281. doi:10.1016/0022-2496(77)90033-5
Saaty, T. L. (1990). How to make a decision: The analytic hierarchy process. European
Journal of Operational Research, 48(1), 9-26.
Saaty, T. L. (2004). Decision making- the Analytic Hierarchy and Network Processes
(AHP/ANP). Journal of Systems Science and Systems Engineering, 13(1), 1-35.
doi:10.1007/s11518-006-0151-5
Saaty, T. L., & Vargas, L. G. (2001). Models, methods, concepts & applications of the analytic
hierarchy process. Boston, MA: Kluwer Academic Publishers.
Shamshad, A., Bawadi, M., & Wan Hussin, W. (2003). Location of suitable sites for wind
farms using tools of spatial information. International Symposium and Exhibition on
Geoinformation 2003, (pp. 201-209).
Sidlar, C., & Rinner, C. (2006, August 15). Analyzing the usability of an argumentation map
as a participatory spatial decision support tool. URISA Journal, 1-9. Retrieved
November 14, 2011, from http://www.urisa.org/Sidlar
Sieber, R. (2006). Public participation geographic information systems: A literature review
and framework. Annals of the Association of American Geographers, 96(3), 491-507.
Retrieved from http://dx.doi.org/10.1111/j.1467-8306.2006.00702.x
Simao, A., Densham, P., & Haklay, M. (2009). Web-based GIS for collaborative planning and
public participation: An application to the strategic planning of wind farm sites.
Journal of Environmental Management, 90, 2027-2040.
doi:10.1016/j.jenvman.2007.08.032
Stock, C., Bishop, I., O'Connor, A., Chen, T., Pettit, C., & Aurambout, J.-P. (2008). SIEVE:
Collaborative decision-making in an immersive online environment. Cartography
and Geographic Information Science, 35(2), 133-144. Retrieved December 16, 2011
Streater, S. (2012). Enviros challenge first large-scale wind farm in national forest.
Governors' Wind Energy Coalition. Retrieved June 17, 2012, from
http://governorswindenergycoalition.org/?p=1491
Sutton, V., & Tomich, N. (2005). Harnessing wind is not (by nature) environmentally
friendly. Pace Environmental Law Review, 22, 91-121. Retrieved December 4, 2011,
from http://digitalcommons.pace.edu/envlaw/498
Tegou, L., Polatidis, H., & Haralambopoulos, D. (2010). Environmental management
framework for wind farm siting: Methodology and case study. Journal of
Environmental Management, 91(11), 2134-2147.
The Confederated Umatilla Journal. (2012). Tribes say no to large wind farms. East
Oregonian. Retrieved June 17, 2012, from
http://www.eastoregonian.com/free/tribes-say-no-to-large-wind-
farms/article_577212ba-51bc-11e1-8df4-001871e3ce6c.html
The White House. (2011). STEM Education Coalition Fact Sheet. Retrieved October 31, 2011,
from STEM Education Coalition: http://www.stemedcoalition.org/wp-
content/uploads/2011/01/SOTU-Factsheet-STEM.pdf
U.S. Department of Energy. (2008). 20% Wind Energy by 2030: Increasing wind energy's
contribution to U.S. electrical supply. Washington, D.C.: United States Department of
Energy.
U.S. Department of Energy. (2011). Installed Wind Capacity. Retrieved from Wind Powering
America: http://www.windpoweringamerica.gov/wind_installed_capacity.asp
U.S. Energy Information Administration. (2011). Annual Energy Outlook 2011 (with
projections to 2035). Washington, D.C.: United States Department of Energy.
Retrieved December 4, 2011, from www.eia.gov/forecasts/aeo/
U.S. Geological Survey [USGS]. (2011). Protected Areas Database of the United States (PAD-
US): Standards and methods manual for state data stewards. U.S. Geological Survey
Gap Analysis Program (GAP). Retrieved June 26, 2011, from
http://gapanalysis.usgs.gov/padus/data/
United Nations. (1997). Kyoto Protocol to The United Nations Framework Convention on
climate change. Kyoto, Japan: United Nations. Retrieved November 14, 2011, from
http://unfccc.int/resource/docs/convkp/kpeng.html
Van der Horst, D., & Toke, D. (2010). Exploring the landscape of wind farm developments:
Local area characteristics and planning process outcomes in rural England. Land Use
Policy, 27, 214-221.
Van Haaren, R., & Fthenakis, V. (2011). GIS-based wind farm site selection using spatial
multi-criteria analysis (SMCA): Evaluating the case for New York State. Renewable
and Sustainable Energy Reviews, 15, 3332-3340. doi:10.1016/j.rser.2011.04.010
Voivontas, D., Assimacopoulos, D., Mourelatos, A., & Corominas, J. (1998). Evaluation of
renewable energy potential using a GIS decision support system. Renewable Energy,
13(3), 333-344. doi:10.1016/S0960-1481(98)00006-8
Waltham, A., & Fookes, P. (2003, May). Engineering classification of karst ground
conditions. Quarterly Journal of Engineering Geology and Hydrogeology, 36(2), 101-
118. Retrieved February 13, 2012, from
http://www.ingentaconnect.com/content/geol/qjeg
APPENDIX
Table 21: Criteria weights for the WPC layer OAT analysis.
% change in Cm   WPC (Cm)   GRID   URBCITY   ROAD   LANDCOV   SLOPE   SUM
0.20 0.364 0.277 0.154 0.088 0.088 0.030 1.000
0.15 0.348 0.283 0.158 0.090 0.090 0.031 1.000
0.10 0.333 0.290 0.162 0.092 0.092 0.032 1.000
0.05 0.318 0.296 0.165 0.094 0.094 0.032 1.000
0.00 0.303 0.303 0.169 0.096 0.096 0.033 1.000
-0.05 0.288 0.310 0.173 0.098 0.098 0.034 1.000
-0.10 0.273 0.316 0.176 0.100 0.100 0.034 1.000
-0.15 0.258 0.323 0.180 0.102 0.102 0.035 1.000
-0.20 0.242 0.329 0.184 0.104 0.104 0.036 1.000
Table 22: Criteria weights for the GRID layer OAT analysis.
% change in Cm   GRID (Cm)   URBCITY   ROAD   LANDCOV   SLOPE   WPC   SUM
0.20 0.364 0.154 0.088 0.088 0.030 0.277 1.000
0.15 0.348 0.158 0.090 0.090 0.031 0.283 1.000
0.10 0.333 0.162 0.092 0.092 0.032 0.290 1.000
0.05 0.318 0.165 0.094 0.094 0.032 0.296 1.000
0.00 0.303 0.169 0.096 0.096 0.033 0.303 1.000
-0.05 0.288 0.173 0.098 0.098 0.034 0.310 1.000
-0.10 0.273 0.176 0.100 0.100 0.034 0.316 1.000
-0.15 0.258 0.180 0.102 0.102 0.035 0.323 1.000
-0.20 0.242 0.184 0.104 0.104 0.036 0.329 1.000
Table 23: Criteria weights for the URBCITY layer OAT analysis.
% change in Cm   URBCITY (Cm)   ROAD   LANDCOV   SLOPE   WPC   GRID   SUM
0.20 0.203 0.092 0.092 0.032 0.291 0.291 1.000
0.15 0.194 0.093 0.093 0.032 0.294 0.294 1.000
0.10 0.186 0.094 0.094 0.032 0.297 0.297 1.000
0.05 0.177 0.095 0.095 0.033 0.300 0.300 1.000
0.00 0.169 0.096 0.096 0.033 0.303 0.303 1.000
-0.05 0.161 0.097 0.097 0.033 0.306 0.306 1.000
-0.10 0.152 0.098 0.098 0.034 0.309 0.309 1.000
-0.15 0.144 0.099 0.099 0.034 0.312 0.312 1.000
-0.20 0.135 0.100 0.100 0.034 0.315 0.315 1.000
Table 24: Criteria weights for the ROAD layer OAT analysis.
% change in Cm   ROAD (Cm)   LANDCOV   SLOPE   WPC   GRID   URBCITY   SUM
0.20 0.115 0.094 0.032 0.297 0.297 0.165 1.000
0.15 0.110 0.094 0.032 0.298 0.298 0.166 1.000
0.10 0.106 0.095 0.033 0.300 0.300 0.167 1.000
0.05 0.101 0.095 0.033 0.301 0.301 0.168 1.000
0.00 0.096 0.096 0.033 0.303 0.303 0.169 1.000
-0.05 0.091 0.097 0.033 0.305 0.305 0.170 1.000
-0.10 0.086 0.097 0.033 0.306 0.306 0.171 1.000
-0.15 0.082 0.098 0.034 0.308 0.308 0.172 1.000
-0.20 0.077 0.098 0.034 0.309 0.309 0.173 1.000
Table 25: Criteria weights for the LANDCOV layer OAT analysis.
% change in Cm   LANDCOV (Cm)   SLOPE   WPC   GRID   URBCITY   ROAD   SUM
0.20 0.115 0.032 0.297 0.297 0.165 0.094 1.000
0.15 0.110 0.032 0.298 0.298 0.166 0.094 1.000
0.10 0.106 0.033 0.300 0.300 0.167 0.095 1.000
0.05 0.101 0.033 0.301 0.301 0.168 0.095 1.000
0.00 0.096 0.033 0.303 0.303 0.169 0.096 1.000
-0.05 0.091 0.033 0.305 0.305 0.170 0.097 1.000
-0.10 0.086 0.033 0.306 0.306 0.171 0.097 1.000
-0.15 0.082 0.034 0.308 0.308 0.172 0.098 1.000
-0.20 0.077 0.034 0.309 0.309 0.173 0.098 1.000
Table 26: Criteria weights for the SLOPE layer OAT analysis.
% change in Cm   SLOPE (Cm)   WPC   GRID   URBCITY   ROAD   LANDCOV   SUM
0.20 0.040 0.301 0.301 0.168 0.095 0.095 1.000
0.15 0.038 0.301 0.301 0.168 0.096 0.096 1.000
0.10 0.036 0.302 0.302 0.168 0.096 0.096 1.000
0.05 0.035 0.302 0.302 0.169 0.096 0.096 1.000
0.00 0.033 0.303 0.303 0.169 0.096 0.096 1.000
-0.05 0.031 0.304 0.304 0.169 0.096 0.096 1.000
-0.10 0.030 0.304 0.304 0.170 0.096 0.096 1.000
-0.15 0.028 0.305 0.305 0.170 0.096 0.096 1.000
-0.20 0.026 0.305 0.305 0.170 0.097 0.097 1.000
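The weight sets in Tables 21 through 26 are consistent with a simple one-at-a-time (OAT) rule: the weight of the criterion under examination (Cm) is perturbed in 5% increments up to ±20%, and the remaining weights (Ci) are rescaled proportionally so that each set still sums to 1.000. The short Python sketch below, offered as a minimal illustration rather than the code used to build the tables, reproduces Table 21 from the base weights in the 0.00 rows; the function name and output formatting are my own.

# Reproduce the OAT sensitivity-analysis weight sets (e.g., Table 21) from the
# base AHP weights. Base weights are taken from the 0.00 rows of Tables 21-26;
# the function name and formatting are illustrative only.

BASE_WEIGHTS = {
    "WPC": 0.303, "GRID": 0.303, "URBCITY": 0.169,
    "ROAD": 0.096, "LANDCOV": 0.096, "SLOPE": 0.033,
}

def oat_weights(main, pct_change, base=BASE_WEIGHTS):
    """Perturb the main criterion's weight (Cm) by pct_change (e.g., +0.20 for
    +20%) and rescale the remaining weights (Ci) proportionally so the full
    set of weights still sums to 1."""
    w_main = base[main] * (1.0 + pct_change)
    rest_total = sum(w for name, w in base.items() if name != main)
    scale = (1.0 - w_main) / rest_total
    return {name: (w_main if name == main else w * scale)
            for name, w in base.items()}

if __name__ == "__main__":
    # Rebuild Table 21 (WPC is the criterion under examination, Cm).
    for pct in (0.20, 0.15, 0.10, 0.05, 0.00, -0.05, -0.10, -0.15, -0.20):
        weights = oat_weights("WPC", pct)
        cells = "  ".join(f"{name}={w:.3f}" for name, w in weights.items())
        print(f"{pct:+.2f}  {cells}  SUM={sum(weights.values()):.3f}")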
Table 27: Default GAP Status Code assigned by designation type, from USGS (2011).
Domain Code Domain Description Default GAP Status Code
National Designations
100 National Park 2
101 National Forest-National Grassland 3
102 National Trail 4
103 National Wildlife Refuge 2
104 National Natural Landmark 2
105 National Landscape Conservation System - Non Wilderness 3
106 National Landscape Conservation System - Wilderness 2
107 Native American Land 4
Other Designations
109 Protective Management Area - Feature 3
110 Protective Management Area - Land, Lake or River 3
111 Habitat or Species Management Area 2
112 Recreation Management Area 3
113 Resource Management Area 3
114 Wild and Scenic River 2
115 Research and Educational Land 2
116 Marine Protected Area 3
117 Wilderness Area 2
118 Area of Critical Environmental Concern 3
119 Research Natural Area 2
120 Historic / Cultural Area 3
121 Mitigation Land 3
122 Military Land 4
123 Watershed Protection Area 3
124 Access Area 4
125 Special Designation Area 3
126 Other Designation 4
127 Not Designated 4
State Designations
300 State Park 3
301 State Forest 3
302 State Trust Lands 3
303 State Other 4
Local Government Designations
500 Local Conservation Area 2
501 Local Recreation Area 3
502 Local Forest 3
503 Local Other 4
Private Designations
700 Private Conservation Land 2
701 Agricultural Protection Land 3
702 Conservation Program Land 2
703 Forest Stewardship Land 3
Table 27 (Continued): GAP Status Code Definitions, from USGS (2011).
Status 1: An area having permanent protection from conversion of natural land cover and
a mandated management plan in operation to maintain a natural state within which
disturbance events (of natural type, frequency, intensity, and legacy) are allowed to
proceed without interference or are mimicked through management.
Status 2: An area having permanent protection from conversion of natural land cover and
a mandated management plan in operation to maintain a primarily natural state, but which
may receive uses or management practices that degrade the quality of existing natural
communities, including suppression of natural disturbance.
Status 3: An area having permanent protection from conversion of natural land cover for
the majority of the area, but subject to extractive uses of either a broad, low-intensity type
(e.g., logging, OHV recreation) or localized intense type (e.g., mining). It also confers
protection to federally listed endangered and threatened species throughout the area.
Status 4: There are no known public or private institutional mandates or legally
recognized easements or deed restrictions held by the managing entity to prevent
conversion of natural habitat types to anthropogenic habitat types. The area generally
allows conversion to unnatural land cover throughout or management intent is unknown.
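For readers who wish to wire the PAD-US data into a screening model of their own, the short Python fragment below shows one hypothetical way to collapse these GAP status codes into coarse development-constraint classes. The particular exclude/review/screen-in split is an illustrative assumption and should not be read as the exact rule applied in the ONSWPS models.

# Hypothetical screening step: collapse PAD-US GAP status codes (USGS 2011)
# into coarse development-constraint classes. The split below is an
# illustrative assumption, not necessarily the rule used in ONSWPS.

GAP_CONSTRAINT = {
    1: "exclude",    # permanent protection, natural disturbance regime maintained
    2: "exclude",    # permanent protection, primarily natural state
    3: "review",     # protected overall but open to extractive or intensive uses
    4: "screen-in",  # no known mandate preventing conversion
}

def constraint_for(gap_status):
    """Return the constraint class for a GAP status code; unknown codes are
    treated conservatively as 'review'."""
    return GAP_CONSTRAINT.get(gap_status, "review")

# Example: Native American Land (designation 107) defaults to GAP status 4 in
# Table 27, so it would pass this hypothetical screen.
print(constraint_for(4))  # -> screen-in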
Figure 31: Schematic of the Stage 1 Model built using ArcGIS ModelBuilder.
Figure 32: Schematic of the Stage 2 Model built using ArcGIS ModelBuilder.
Figure 33: Schematic of the Stage 3 Model built using ArcGIS ModelBuilder.
ABSTRACT
Wind energy was the fastest growing form of renewable energy in the world during the last decade and forecasts predict that this trend will continue. In the U.S., Renewable Portfolio Standards (RPS) and federal tax incentives drive this trend from a policy perspective, but despite its potential to reduce CO2 emissions and dependence on foreign fuel for electricity generation, wind energy development remains a contentious issue and siting of wind power systems remains problematic. This thesis presents a GIS-based tool for preliminary site suitability analysis for Onshore Wind Power Systems (ONSWPS) that can be used to address these issues from a planning perspective. This tool incorporates Multi-Criteria Analysis (MCA) and the Analytical Hierarchy Process (AHP) along with various forms of spatial and sensitivity analysis to provide quick visual access to ONSWPS site selection information through a series of suitability maps.