“Does Organizational Culture Play a Role in Aviation Safety?”
A Qualitative Case Study Analysis
by
Stephen Robertson
A Dissertation Presented to the
FACULTY OF THE USC PRICE SCHOOL OF PUBLIC POLICY
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF POLICY, PLANNING AND DEVELOPMENT
August 2020
Copyright 2020 Stephen Robertson
Dedication
I would like to dedicate this dissertation to my wife and three children: Lynette, Megan,
Kelsey and Ian. They were the main reason I went back to school in 2009 in order to complete
my undergraduate degree after 26 years of taking one class at a time. They are the reason that I
do everything. The juggling of my career and this doctoral program has required that I miss
many soccer games, gymnastic meets and other family get-togethers and for this I apologize to
my wonderful family. I know that these personal sacrifices were necessary and I have no regrets.
As my academic career comes to a close, I realize that it only means a new chapter will
be opening. I have been blessed with fantastic classmates and inspired by the best professors on
this planet. Nothing worthwhile in life comes without sacrifice. A solid education from a world-
class university is no different. I look forward to watching all three of my children complete their
academic journeys as well and I just want to thank them for allowing me to have the best and
most important job in the world; being their father.
All the best,
-Dad
Acknowledgements
I would like to acknowledge and express my sincere appreciation to Dr. Deborah Natoli, my committee chair and mentor over this past eight-year academic journey. Her passionate dedication to her students and to this program is awe-inspiring. She brings such energy into the classroom with her strong leadership skills, and her ability to connect with each and every student is evidence of her mastery of teaching. She brings out all of the positives of the D.P.P.D. program, and without her it would not be the same.
I would also like to thank and acknowledge my other two committee members, Dr. Peter
Robertson and Director Tom Anthony for their willingness to participate and share their vast
knowledge and experience with me. Dr. Robertson shared his extensive research in the area of
culture and organizations and Director Anthony brought all of his expertise in Aviation Safety as
the Director of the USC Aviation Safety & Security Program at Viterbi School of Engineering.
Without their dedication and commitment to my research and dissertation, I would not have been
able to complete them.
My heartfelt thanks to all three of you.
-Stephen
Table of Contents
Dedication …………………………………………………………………………………………ii
Acknowledgements……………………………………………………………………………… iii
List of Tables……………………………………………………………………………………. vii
Abstract……………………………………………………………………………………….… viii
Chapter One: Introduction………………………………………………………………………... 1
Research Problem………………………………………………………………… 3
Research Questions…………………………………………………………….…. 7
Research Purpose and Contribution to Practice…………………………….…….. 7
Research Methodology….………………………………………………………... 8
Case Study-Asiana Flight #214.………………………………………………….. 9
Case Study-Space Shuttle Challenger ……………………………………………10
Case Study-New Mexico State Police.………………………………………….. 11
Theoretical Orientation…………………………………………………….……. 12
Summary………………………………………………………………………… 12
Chapter Two: Literature Review………………………………………………………………... 13
Introduction….…………………………………………………………….…….. 13
Organizational Culture…………………………………………………….…….. 13
The Wright and Selfridge Flight………………………………………………… 18
Aviation Accident Statistics…………………………………………….……….. 20
Sources of Power…….………………………………………………………….. 21
Power Distance Theory……………………………………………….…………. 23
Human Error in Aviation…………………………………………….………….. 25
United Airlines – Unstabilized Approaches………………………………………29
Human Factors Analysis and Classification System (HFACS)…………….…… 32
High Reliability Organizations………………………………………………….. 36
Broken Windows Theory……………………………………………………….. 38
Associated Research Projects…………………………………………………… 39
Summary………………………………………………………………………… 43
Chapter Three: Methodology…………………………………………………………………… 44
Introduction ………………………………………………………………………44
Research Questions……………………………………………………………... 44
Methodology Selected…………………………………………………………... 44
Case Study Methodology……………………………………………………….. 45
Researcher………………………………………………………………………. 46
Document Analysis……………………………………………………………… 46
Case Study Document Review and Data Collection …………………………… 48
Description of Data Analysis and Coding……………………………………… 49
Summary………………………………………………………………………… 51
Chapter Four: Data Review……………………………………………………………………... 52
Introduction……………………………………………………………………… 52
Asiana Airlines flight #214……………………………………………………… 52
Space Shuttle Challenger………………………………………………………... 60
New Mexico State Police……………………………………………………….. 64
NMSP Aviation Unit Policy Analysis…………………………………...69
NMSP Airborne Law Enforcement Association Standards…………….. 70
NMSP Organizational Analysis………………………………………… 72
NMSP Conclusion and Findings for Case Study #3……………………. 73
NMSP NTSB Recommendations……………………………………….. 76
Summary………………………………………………………………………… 76
Chapter Five: Data Analysis…………………………………………………………………….. 78
Introduction……………………………………………………………………… 78
Analysis of Data……………………………………………………………….… 80
Overview of Causes………………………………………………………….….. 82
Overview of Findings…………………………………………………………… 83
Overview of Recommendations……………………………………………….… 84
Summary………………………………………………………………………… 85
Chapter Six: Discussion, Implications and Conclusion……………………………………….… 86
Introduction……………………………………………………………………… 86
Discussion…………………………………………………………………….…. 86
Implications.………………………………………………………………….…..89
Conclusion………………………………………………………………………. 90
Bibliography…………………………………………………………………………………….. 92
Appendices……………………………………………………………………………………... 100
A. Coded Data Analysis – Supporting Documents…………………………….100
B. New Mexico State Police SOPs …………………………………………….118
C. Contribution to Practice PowerPoint Curriculum ………………………….127
List of Tables
Table 1: NTSB Aviation Accident Statistics ……………………………………………………..4
Table 2: Reason’s Swiss Cheese model …………………………………………………………..6
Table 3: Boeing Accident Chart …………………………………………………………………20
Table 4: Hofstede Power Distance Index ………………………………………………………..24
Table 5: Helmreich’s Error Types Chart ………………………………………………………...26
Table 6: Helmreich’s Error Percentage Chart ……………………………………………………27
Table 7: Sources of Threats Chart ……………………………………………………………….28
Table 8: United Airlines Unstabilized Approach Chart …………………………………………31
Table 9: HFACS Causal Categories Chart ………………………………………………………33
Table 10: HFACS Organizational Influences Chart …………………………………………….35
Table 11: Stake’s Checklist ……………………………………………………………………..50
Table 12: Helicopter Radar Data ………………………………………………………………...68
Table 13: Coded Data Analysis – Causes ………………………………………………………100
Table 14: Coded Data Analysis – Causes/Sub-Themes ………………………………………..101
Table 15: Coded Data Analysis – Findings …………………………………………………….102
Table 16: Coded Data Analysis – Findings/Sub-Themes ………………………………………103
Table 17: Coded Data Analysis – Recommendations ………………………………………….104
Table 18: Coded Data Analysis – Recommendations/Sub-Themes ……………………………105
Abstract
One hundred and twelve years ago, the first recorded fatality from an airplane accident within
the United States occurred and involved the Wright brothers. In the many years since, no new ways to crash airplanes have been invented. With this information and
experience, why do we continue to have airplane accidents?
The purpose of this study is to answer the following two questions: what causes the high percentage of human error causation in aviation accidents, and does organizational culture play a role in this causation? These two questions form the foundation for this study and direct its
research.
A document analysis was conducted on three case studies analyzing three different aviation
accidents. These accident investigations represent an airliner accident, a spacecraft accident and
a law enforcement helicopter accident. While these three case studies are representative of three
very different types of aviation operations, they were analyzed to see if organizational culture
played a role in their causation.
The methodology used in this study was a qualitative multiple case study based on a
constructivist approach. According to Yin (2003), this type of methodology is appropriate when
answering “how” and “why” questions.
Data from all three case studies was coded and analyzed, and based on emerging themes and
patterns, a constant comparative method analysis was conducted according to Glaser and Strauss
(1967). After the initial coding of four themes was analyzed, the coded data was re-analyzed
based on thirteen sub-themes that had emerged. This data was used to answer the initial research
questions.
The contribution to practice from this research is a teaching curriculum to be used by the
aviation industry to highlight the importance of organizational culture and leadership. This
curriculum will be made available to the University of Southern California’s Aviation Safety &
Security Program to be included in their Safety Management System for Manager’s course.
Chapter One: Introduction
My first introduction to aviation safety came at a very early age during the Vietnam War. I
was a small child and my father was learning to fly airplanes near our home in a small town in
Southern Oregon. Many aspiring pilots in that era watched newsreel footage of F4 Phantom
fighter jets landing on United States Navy aircraft carriers engaged in the war. These new pilots
wanted to pretend they were doing the same thing.
At an uncontrolled airport in Ashland, Oregon, the normative behavior and accepted flying culture was to try to land as close to the beginning of the runway as possible, just like the F4 Phantom
fighter jets did. On a sunny, summer day, my brother and I accompanied my father to the airport
to watch him fly with another pilot in that pilot’s airplane. Anytime a pilot could fly with another
pilot to build experience, they did so. As a five-year-old, I already knew that I wanted to be a pilot when I grew up, and being able to watch my father fly was a great experience.
My brother and I were watching from outside the small terminal building. My father and his
friend took off and remained in the pattern to practice takeoffs and landings. Like most of the airplanes I had watched land, my father’s airplane seemed very low as it glided in for a landing, but I already knew they were seeing just how close to the beginning of the runway they could touch down.
There is a line of runway approach lights that are positioned just in front of the beginning of
most runways so that at night pilots know where the runway begins. As my father’s airplane was
about to land, I could tell that something was wrong. They were going to hit the runway approach lights, but there was nothing that either my brother or I could do. It was a very helpless feeling
even at my very young age.
My father’s airplane struck the line of runway approach lights with its landing gear, which cartwheeled the airplane and caused it to tumble across the runway. All I could see was smoke, dust, dirt and airplane parts flying in all directions. Fortunately, there was no fire. My brother and
I jumped the small wooden fence separating the observation area from the runway. I can still
remember a man yelling at us that we should not go out there, but we did not listen.
My father survived that crash landing, but it left a significant impact on me, and what I learned was that airplanes and flying can be very dangerous. I have relived that moment many times over the following years, and it has caused me to wonder about the culture at the Ashland Airport and how that culture affected how pilots made decisions. This is what has driven my research into organizational culture and whether it plays a role in aviation safety.
[Photographs: F4 Phantom; Ashland Airport]
Research Problem
Although my father’s crash occurred almost 50 years ago, these types of incidents keep
occurring today. In fact, since the Wright Brothers flew the first powered airplane on December
17, 1903 (Wright, 1913), there have been thousands and thousands of flights that have ended in
failure rather than success. These accidents or incidents have been investigated and their causal
factors have been determined. In these ensuing years, we have yet to invent a new way in which
to crash an airplane. Yet with this powerful knowledge, we keep crashing airplanes for the same
reasons. Are there other factors influencing these crashes and incidents that we as an industry
should be looking at more closely?
According to data recently released by the National Transportation Safety Board, during
2018 there were 1,275 General Aviation (private airplanes) accidents in the United States
causing 381 total fatalities. During this same time period, there were 27 US air carrier (airlines in
scheduled flights) accidents which caused 1 fatality (NTSB, 2000).
Table 1: National Transportation Safety Board (2020)
The United States federal agency responsible for the investigation of airplane accidents is the National Transportation Safety Board (NTSB), first established in 1967 as a stand-alone federal agency. It is charged with investigating and determining the cause of airplane accidents both within the United States and around the world and has investigated over 132,000 of these crashes (National Transportation Safety Board, 2019).
According to Ellingstad & Mayer (1993), the National Transportation Safety Board (NTSB) is an accident investigation agency responsible for producing complex investigative analyses of individual transportation accidents. These investigations are based on both field and laboratory expert reports and ensure extensive oversight and deliberation from the board members who provide the management of the system. Their official reports include a probable
cause based on the findings and many times include recommendations to the parties involved to
help improve safety.
Quite often, the causal factors applied in these investigations point to something the flight
crew did or did not do while operating the aircraft, or to the failure of some type of mechanical component on the aircraft. According to the Federal Aviation Administration (FAA), approximately 80% of
all airplane crashes are the result of pilot error (FAA, 2010). The other 20% are the result of
mechanical issues.
There is a theory that is used quite often in the aviation safety community that was developed
by Professor James Reason from the University of Manchester and is called the “Swiss Cheese
Model” (Reason, 2000). Very rarely does an aviation accident or incident occur as the result of
only one causal factor. Usually several causal factors are identified, and if only one of them had been removed, the accident or incident most likely would not have occurred. Professor Reason illustrates this with the holes in slices of Swiss cheese lining up to allow an arrow to pass through the block of cheese. Each slice of cheese represents one layer of safety mitigation, and each hole a weakness in that layer; any single layer that holds can stop the arrow from passing through and ultimately stop the accident or incident from occurring.
An example would be a flight crew not getting enough rest between flights, finding a maintenance issue that causes them to leave the gate late, a crew that has not flown together before, flying to an airport that neither pilot has flown to recently, dealing with unforecast weather, having to fly an instrument approach that neither pilot has flown recently, and, while on short final to the runway, the pilots being momentarily blinded by a laser strike. Each one of these issues is a hole in the Swiss cheese, and if any one were removed, the airplane most likely would not crash; but when all of the holes line up, the arrow passes straight through each of these latent and active failures and the accident or incident occurs. In this model, the arrow represents the hazardous act.
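To make the layered-defense idea concrete, the following minimal Python sketch restates the example above. The layer names come from that example; the holes_align helper and the overall encoding are this author's illustration, not part of Professor Reason's model.

# Each mitigation layer either holds (True) or has failed (False, i.e., a hole).
# The hazard reaches an accident only when every layer has a hole (the holes line up).
layers = {
    "adequate crew rest": False,
    "on-time departure (no maintenance delay)": False,
    "crew has flown together before": False,
    "recent experience at the destination airport": False,
    "weather as forecast": False,
    "recent practice on this instrument approach": False,
    "no laser strike on short final": False,
}

def holes_align(defenses):
    """Return True when every defensive layer has failed (all holes line up)."""
    return not any(defenses.values())

if holes_align(layers):
    print("All defenses breached: the hazardous act becomes an accident or incident.")

# Restoring any single defense (removing one hole) breaks the chain.
layers["adequate crew rest"] = True
assert not holes_align(layers)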
Professor Reason wrote that there are four levels of human failure and that they are (Reason,
1990):
Level 1 Organizational influences
Level 2 Unsafe supervision
Level 3 Preconditions of unsafe acts
Level 4 The unsafe acts of operators
Table 2 Reason (1990)
The research for this project has focused on how organizational influences (Level 1 in Reason's table above) and culture may have a role in the safety of the operation, and it may also include items from Levels 2-4 of Professor Reason's table above. Ellingstad & Mayer (1993) believed that while Reason's research and theory on the Swiss cheese model and human error changed how investigators looked at accidents and accident causation, his research did not supply a path forward for putting his theory into practice.
Research Questions
The goal of this research is to find whether there is a link between aviation accidents and organizational culture and why we as an industry continue to make the same mistakes. The research and data from the Federal Aviation Administration (FAA) show that 80% of all airplane/aircraft accidents are caused by human error (FAA, 2010). With this empirical data taken into account, the research questions for this dissertation are:
1. What causes this level of human error in 80% of aviation accidents?
2. Is organizational culture a causational variable, and if so, to what extent?
Research Purpose and Contribution to Practice
The purpose of this research is to investigate the relationship between organizational culture
and aviation safety. Once better understood, this information can help create an impactful
training curriculum through new models and methods in order to lower the accident rates among
those involved. This new curriculum would be added into current training being conducted
within the aviation industry worldwide and would not only apply to aviation units, but to those
that manage them from the highest level of the organization. A PowerPoint presentation of this
proposed curriculum is included in the Appendix of this manuscript.
Research Methodology
Data collected and analyzed come from a formalized, in-depth multiple case study document analysis of three different aircraft accidents where organizational culture may have played a role in their causation. The accident data was categorized and coded, and emerging themes and
patterns were documented using a constant comparative analysis method based on Glaser &
Strauss (1967). More detail on the methodology used for this research project is included in
chapter three.
Selected Case Studies
1. Asiana flight #214
On July 6, 2013, an Asiana Airlines Boeing 777 descended below the visual glidepath while landing at San Francisco International Airport and crashed into the seawall at the beginning of the runway, causing the airplane to cartwheel and catch fire. The NTSB ultimately determined that the majority of causation for the accident was a result of the pilots’ actions and inactions; however, it also discussed the influence that organizational culture played in the causation (National Transportation Safety Board, 2014).
2. Space Shuttle Challenger
On January 28, 1986, the Space Shuttle Challenger exploded 73 seconds after liftoff, killing all seven astronauts aboard. A Presidential Commission was formed to conduct an investigation, which determined the cause of the accident to be NASA’s decision to launch at a temperature colder than allowed. NASA’s organizational culture and fatigue among the management personnel involved in the launch decision were found to have contributed to this accident (Rogers Commission, 1986).
3. New Mexico State Police
On June 9, 2009, a New Mexico State Police helicopter crashed shortly after takeoff from
a mountain after rescuing an injured hiker. This crash killed the pilot and the injured hiker. The
National Transportation Safety Board determined the cause of the accident to be pilot error and
contributing to the accident was an organizational culture that prioritized mission execution over
aviation safety (National Transportation Safety Board, 2011).
Theoretical Orientation
The philosophical assumptions that underpin the methods and shaped the research can be
described as a qualitative multiple case study analysis that formalized the data based on a constant
comparative analysis. The areas of the research focused on high-reliability organizations,
definitions of power, power distance cultures within an organization, human error, the Human
Factors Analysis and Classification System (HFACS) and the Broken Windows Theory. In
addition, three associated, but separate research projects were reviewed to help ground this
researcher’s premise.
Summary
Chapter one included a brief introduction to this research project and manuscript. This
introduction included the research problem and need for the project, an explanation of the Swiss
Cheese model by Professor Reason, the two research questions that will be answered by this
project, the research purpose and contribution to practice, a brief summary of the methodology
used and the theoretical orientation of the research. Chapter two will include the review of the
professional literature for this project.
Chapter Two: Literature Review
Introduction
Chapter two includes a thorough review of the professional literature regarding organizational
culture and the human error causation of aviation accidents. The chapter begins with an
explanation of organizational culture and then reviews the topics of aviation accident statistics,
sources of power and Power Distance, human error in aviation, United Airlines’ review of
stabilized approaches, Human Factors Analysis and Classification System (HFACS), and the
Broken Windows theory. The chapter finishes with a review of three associated research projects
from other researchers.
Organizational Culture
To begin a dissertation that focuses on organizational culture and its possible influence
over aviation culture, one must first have a strong understanding of the definition of
organizational culture.
Effective efforts to achieve safety must recognize the importance of culture. Organisations
must have a full understanding of cultural influences on their operations if safety efforts
are to succeed. The basic premise of this discussion is that it is essential to build on the
strengths of national culture and to enhance professional and organizational cultures to
establish a robust safety culture. Culture surrounds us and influences the values, beliefs
and behaviors that we share with other members of groups. Culture serves to bind us
together as members of groups and to provide clues and cues as to how to behave in normal
and novel situations. When thinking of culture, what comes to mind first is national culture,
the attributes that differentiate between natives of one culture and those of another. For
pilots, however, there are three cultures operating to shape actions and attitudes. The first,
of course, is national culture. But there is also a strong professional culture that is
associated with being a member of the pilot profession. Finally, organizations have their
own cultures that are closest to the daily activities of their members. While national cultures
are highly resistant to change because they surround an individual from birth, professional
and organizational cultures may be modified if there are strong incentives (Helmreich,
1998, p. 1).
This definition of the three different types of culture influencing the flight deck helps to explain the complexities that flight crews and the organizations they represent face on a daily basis. This, coupled with the ease of international travel and the possibility of intermeshing and sometimes conflicting cultures, could influence the decision-making process of those responsible for the safe passage of the traveling public.
Organizational culture is best described by Edgar Schein, whom many consider to be the leading researcher of organizational culture:
A pattern of basic assumptions—invented, discovered, or developed by a certain group as
it learns to cope with its problems of external adaptation and internal integration—that has
worked well enough to be considered valid and, therefore, to be taught to new members as
the correct way to perceive, think, and feel in relation to those problems. (Schein, 1992, p.
12).
When you walk into a Starbucks, you expect a certain high level of service to be presented to
you. The same high level of service is expected when you walk into Nordstrom Department
Store. When you go to return a purchase to a Costco, you expect it to be done quickly and with
no questions. Put simply, this is how these businesses do business. It is what makes them
different from the other establishments peddling the same products and services. Culture is the
same. It is what makes one different from another. It is what makes one airline flying the same
equipment within the same governmental requirements and restrictions safer than the other.
Organizational culture is “deeply rooted in history, collectively held and sufficiently complex to
resist any attempts at direct manipulation” (Wiegmann, Zhang, von Thaden, Sharma, & Mitchell,
2002, p. 4).
In the early 1990s, then-NTSB Board Member John Lauber was one of the first to focus
on how organizational factors can influence aviation safety (Meshkati, 1997; National
Transportation Safety Board, 1992). Lauber argued that the cause of a commuter airliner
in-flight break-up due to faulty maintenance should be “the failure of Continental Express
management to establish a corporate culture which encouraged and enforced adherence to
approved maintenance and quality assurance procedures” (Meshkati, 1997, p. 6; National
Transportation Safety Board, 1992, p. 54).
Continental Express Flight 2574 involved the in-flight structural breakup of an Embraer 120
twin engine turboprop airliner over Texas on September 11, 1991, killing both pilots, one flight
attendant and 11 passengers. The airplane had just come out of maintenance to replace the
leading edge deice boots on the horizontal stabilizer. This repair was conducted over several
days and the work was conducted by several different maintenance technicians. Even though, per their maintenance procedures, a quality assurance inspection was required after this type of repair, an inspection was not completed. Had this quality assurance inspection been accomplished, it likely would have revealed that the 47 screws used to attach the leading edge of the horizontal stabilizer had not been reinstalled (National Transportation Safety Board, 1992).
The National Transportation Safety Board determines that the probable cause of this
accident was the failure of Continental Express maintenance and inspection personnel to
adhere to proper maintenance and quality assurance procedures for the airplane’s
horizontal stabilizer deice boots that led to the sudden in-flight loss of the partially secured
left horizontal stabilizer leading edge and the immediate severe nose-down pitchover and
breakup of the airplane. Contributing to the cause of the accident was the failure of the
Continental Express management to ensure compliance with the approved maintenance
procedures, and the failure of FAA surveillance to detect and verify compliance with
approved procedures (National Transportation Safety Board, 1992, p. 54).
Attached to the official report written by the National Transportation Safety Board
documenting the Continental Express accident is a dissenting opinion written by National
Transportation Safety Board board member John Lauber and dated July 21, 1992. A dissenting
opinion written by a sitting board member is not something that is taken lightly and in many senses is monumental. Of the five sitting board members, Mr. Lauber was the only one with this opinion. It begins with Mr. Lauber writing that he is “perplexed by the majority decision that the actions of Continental Express senior management were not causal in this accident” (NTSB, 1992). His surprise was based on the fact that the board found Continental Express management to be only a contributory factor rather than a causal factor. He goes on to say that the many lapses and failures committed by a multitude of Continental Express employees show a systemic problem with the organization rather than a simple error by a few, and he concludes his letter with what he believes the probable cause should have read:
The National Transportation Safety Board determines that the probable causes of this
accident were (1) the failure of Continental Express management to establish a corporate
culture which encouraged and enforced adherence to approved maintenance and quality
assurance procedures, and (2) the consequent string of failures by Continental Express
maintenance and inspection personnel to follow approved procedures for the replacement
of the horizontal stabilizer deice boots. Contributing to the accident was the inadequate
surveillance by the FAA of the Continental Express maintenance and quality assurance
programs (National Transportation Safety Board, 1992, p. 54).
Organizational culture has been identified as a causal factor in aviation accidents (National
Transportation Safety Board, 1992). The National Transportation Safety Board has singled out
management for allowing a culture of non-compliance with procedures that can lead to fatal
error. Recent investigations concentrate on organizational attributes that serve as precursors to
disaster (Helmreich, 2000).
Pidgeon & O’Leary (1994) discuss some of the historical components of organizational
culture being recognized as possible causation in aircraft crashes. They argue for the need to broaden the perspective of human factors and its relationship to causal factors. They opine that the first discussions of organizational culture as a contributing factor came about after the 1986 Chernobyl disaster in the former Soviet Union.
According to Reason (1998), at the time of the Chernobyl disaster, there were two competing
cultures that played corresponding parts in causing the meltdown, one representing the Soviet
nuclear power generation program in general and the other representing the organizational
culture specifically within the Chernobyl staff. The Soviet nuclear power group had never
released any information on any accidents, incidents or lapses in safety protocol that had
occurred during the use of nuclear power over the preceding 35 years and thus had convinced the
public that nuclear power was the safest and cleanest power generation process available.
The Chernobyl operators had a culture of their own. They had recently received an award for
being able to produce the most electricity and their status amongst their community was similar
to that given to cosmonauts. This, combined with their can-do attitude, meant they were never afraid of what they were working with on a daily basis, and this deadly combination of misplaced arrogance and ignorance became apparent during the course of events that night, according to Reason (1998). A leading Soviet nuclear engineer by the name of Medvedev was asked why the testing that ultimately started the disaster was not halted, and he responded:
It was almost as if they had conspired not to intervene. Why? The fact is that there was a
conspiracy of silence. Mishaps had never been publicized; and as nobody knew about them,
nobody could learn from them. For 35 years people did not notify each other about
accidents at nuclear power stations, and nobody applied the experience of such accidents
to their work. It was as if no accidents had taken place at all: everything was safe and
reliable (Medvedev, 1991, p. 39).
Organizations can function within a national culture or can extend across national
boundaries. Organizational norms can either be in harmony or at odds with national culture.
An organization’s culture reflects its attitudes and policies regarding punishment of those
who commit errors, the openness of communications between management and flight crew,
and the level of trust between individuals and senior management. Organizational culture
also influences norms regarding adherence to regulations and procedures. Using
perceptions of management’s safety concerns as a benchmark, large and highly significant
differences in organizational culture are seen, even in an industry that is highly regulated.
For example, in one airline, 84% agreed with the item ‘Management never compromises
safety for profit,’ while in another, only 12% endorsed the item (Helmreich, 2000, p. 5).
If there has been a single thread running through these various arguments, it concerns the
need for an organization to inculcate and then sustain a healthy but intelligent respect for
the hazards that threaten its operations. This is not easy to achieve. Several powerful factors
act to push safety into the background of an organization’s collective awareness,
particularly if it possesses many elaborate barriers and safeguards. But it is just these
defenses-in-depth that render such systems especially vulnerable to adverse cultural
influences. Organizations are also prey to external forces that either make them forget to
be afraid, or even worse, avoid fear altogether… If we understand what comprises an
informed culture, we can socially engineer its development. Achieving a safe culture does
not have to be akin to a religious conversion—as it is sometimes represented. There is
nothing mystical about it. It can be acquired through the day-to-day application of practical
down to earth measures. Nor is safety culture a single entity. It is made up of a number of
interacting elements, or ways of doing, thinking and managing, that have enhanced
resistance to operational dangers as their natural by-product (Reason, 1998, p. 305).
The Wright and Selfridge Flight
On September 17th….at 4:46 pm, the aeroplane was taken from the shed, moved to the
upper end of the field and set on the starting track. Mr. Wright and Lieutenant Selfridge
took their places in the machine, and it started at 5:14, circling the field to the left as usual.
It had been in the air four minutes and 18 seconds, had circled the field 4 ½ times and had
just crossed the aeroplane shed at the lower end of the field when I heard a report then saw
a section of the propeller blade flutter to the ground. I judged the machine at the time was
at a height of about 150 feet. It appeared to glide down for perhaps 75 feet, advancing in
the meantime about 200 feet. At this point it seemed to me to stop, turn so as to head up
the field towards the hospital, rock like a ship in rough water, then drop straight to the
ground the remaining 75 feet….
The pieces of propeller blade [were] picked up a point 200 feet west of where the airplane
struck. It was 2 ½ feet long, was a part of the right propeller, and from the marks on it had
apparently come in contact with the upper guy wire running to the rear rudder. … [The
propeller] struck [the guy wire] hard enough to pull it out of its socket and at the same time
to break the propeller. The rear rudder then fell to the side and the air striking this from
beneath, as the machine started to glide down, gave an upward tendency to the rear of the
machine, which increased until the equilibrium was entirely lost. Then the aeroplane
pitched forward and fell straight down, the left wings striking before the right. It landed on
the front end of the skids, and they, as well as the front rudder was crushed.
Lieutenant Selfridge…died at 8:10 that evening of a fracture of the skull over the eye,
which was undoubtedly caused by his head striking one of the wooden supports or possibly
one of the wires. …Mr. Wright was found to have two or three ribs broken, a cut over the
eye, also on the lip and the left thigh broken between the hip and the knee.” (1st Lieutenant Frank P. Lahm, 1908, Wiegmann & Shappell, 2003)
This was taken from the report of the first aviation crash to cause a fatality, in 1908. What was supposed to be a simple orientation flight for a young 1st Lieutenant by one of the Wright brothers made history in a negative fashion and started a reporting process that continues to this day.
What lessons were learned from this early accident? The first lesson was that this new and
exciting transportation method had certain aspects to it that were very dangerous. The second
lesson was the importance of quality control within the manufacturing process. As with most
lessons learned throughout the history of aviation, these lessons were written in the blood of
those who had perished in earlier accidents.
Aviation Accident Statistics
According to Boeing Aircraft (MEDA/Boeing, 2007), in the early part of the 20th century,
80% of all aircraft crashes were the result of mechanical issues or breakdowns and the remaining
20% of crashes were the direct result of human factors and/or errors. Currently, these statistics
have reversed themselves and approximately 80% of all aircraft crashes are the result of human
error and 20% are the result of mechanical failures or other factors.
Table 3
Source-MEDA/Boeing Aircraft, 2007
According to Shappell & Wiegmann (1996), over the past 40 years, the number of aviation
accidents caused solely by mechanical failure has dropped significantly and quite quickly. At the
same time, the percentage of aviation accidents caused in part by human error has declined also,
but at a much slower rate. What this says is that the efforts to reduce mechanical error have been
more successful than those to reduce human error and if we want to continue to reduce this rate
of human error, we will need to focus more clearly on what is causing these human errors.
Additional research conducted by Shappell and Wiegmann (2004) discussed the established
fact that between 70% and 80% of all aviation accidents are attributed at least in part to human factors of some type. They said that advances in safety and technology together have dramatically reduced the number of aircraft crashes; however, the downward trend in these numbers has been slowing. As a result, they believe that because human factors are associated with human error, all of these human-factor-related crashes are preventable, and they question whether the current accident rates can trend any lower.
Sources of Power
To fully understand the impact that power distance may have on organizational culture
within the aviation industry, one must first understand the definition and the context of the
different types of power and how they relate to this research topic. According to Pfeffer (1993), power is defined as having the potential ability to influence others’ behavior and to sometimes get people to do something that they would not normally do. What needs to be
understood is this reference to “potential power.” Power does not need to be exercised to be real.
The fact that a person or groups of people are influenced is enough to establish this power.
French and Raven (1959) established that there are five different sources of power, broken into two categories: formal and informal. The three sources of power under the formal category are (1) legitimate, (2) reward, and (3) coercive. The two sources of power under the informal category are (4) expert and (5) referent.
Legitimate power is associated with authority and position. An example of this would be the
power associated with a person’s job position or their rank in a military or quasi-military
position. Their power comes from the authority granted to them. Another example would be that
of a judge presiding over their courtroom, the captain of an airliner or a professor in the
classroom.
Reward power comes from the ability to provide rewards to others. An example would be a
manager giving out a monetary bonus to an employee for exceeding their expectations. This type
of power is usually associated with the private sector and is not commonly used in the public
arenas. However, other intangible rewards are often used in their place.
Coercive power is derived from having the ability to punish. It is always hoped that this type
of power would only be used as a last measure. This type of power can be used for the wrong
reasons very easily and there are often repercussions that can follow. It is a necessary evil though
within the business world.
Expert power is associated with the perception of the leading person’s competence. A field
training officer in law enforcement has expert power as does a check pilot in the airlines. An
“expert” witness in a criminal or civil trial is only required to have more knowledge than that of
an average citizen.
Power is both a function of structure and of personal attributes. The first three sources of
power that were referenced (legitimate, reward and coercive) were considered formal sources of
power. As such, these would align themselves with structure. The fourth and fifth sources of
power that were referenced by French and Raven (referent and expert) were considered informal
sources of power and would align themselves with personal attributes or personal characteristics.
Referent power comes from one person respecting another such as what happens with celebrities.
Expert power is described as when others listen to you based on your skills and knowledge.
The purpose of including power within this research is so that the reader fully understands the
importance of power and how it can affect the decision making process within the confines of the
flightdeck and beyond. The accidents included as case studies for this project are examples of
how power can influence this process. And while it is important to understand and respect these
different types of power within the context of aviation, it is equally important to know when to
question it. The next section gives greater detail of this concept.
Power Distance Theory
Of all the research on dimensions of culture, perhaps the most referenced is the research of
Hofstede (1980, 2001). Based on an analysis of questionnaires obtained from more than
100,000 respondents in more than 50 countries, Hofstede identified five major dimensions
on which cultures differ; power distance, uncertainty avoidance, individualism-
collectivism, masculinity-femininity, and long-term-short-term orientation. Hofstede’s
work has been the benchmark for much of the research on world cultures. (Northouse,
2013, p. 387)
The one dimension of power that may provide a link to organizational culture within the
aviation community is Geert Hofstede’s Power-Distance. What the researcher wants to
accomplish with this research is to draw a clear line connecting the theory of Power-Distance
with real life examples or case studies that show where this dimension, when overlooked,
ignored, or minimized may cause real issues within the aviation industry.
So just what is power-distance? “The extent to which people accept unequal distribution of
power. In higher power-distance cultures, there is a wider gap between the powerful and the
powerless” (Hofstede and Hofstede, 2005, p. 46).
This dimension refers to the degree to which members of a group expect and agree that
power should be shared unequally. Power distance is concerned with the way cultures are
stratified, thus creating levels between people based on power, prestige, status, wealth, and
material possessions (Northouse, 2013, p. 388).
The measuring stick for Hofstede’s research on the culture dimension of power distance is his
Power Distance Index (PDI). This is a scale from 0-120 that measures the distance between the
powerful and the powerless. A lower number on the scale would indicate a closer degree of
contact between the powerful and the powerless and a higher number would indicate a larger
distance between the two. For example, the lowest number on the scale belongs to Austria at 11.
The highest number on the scale belongs to Malaysia at 104. The United States is ranked at 40,
South Korea is ranked at 60, and China is ranked at 80 (Hofstede, 2001). What this research shows is that as the power distance index number rises, we should look more closely at the impact this could have on aviation safety.
Table 4
You can imagine the effect that Hofstede’s findings had on people in the aviation industry.
What was their great battle over mitigated speech and teamwork all about, after all? It was
an attempt to reduce power distance in the cockpit. Hofstede’s question about power
distance—‘How frequently, in your experience, does the following problem occur:
employees being afraid to express disagreement with their managers?’—was the very
question aviation experts were asking first officers in their dealings with captains. And
Hofstede’s work suggested something that had not occurred to anyone in the aviation
world: that the task of convincing first officers to assert themselves was going to depend
an awful lot on their culture’s power distance rating (Gladwell, 2008, p. 206).
What Gladwell was referring to was the importance of all communication that occurs on the
flight deck. The reason that this theory is so important with regard to aviation safety is that in cultures displaying a high power distance index, co-pilots are less likely to speak up when a higher-ranking member of their culture, such as the captain, makes a mistake.
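As a small illustration of how the index can be read, the following Python sketch simply restates the Hofstede (2001) figures quoted above and orders the example countries from lowest to highest power distance; the dictionary and ranking logic are this author's illustration, not Hofstede's instrument.

# Power Distance Index (PDI) values cited above (Hofstede, 2001).
# Higher values indicate a larger gap between the powerful and the powerless.
pdi = {
    "Austria": 11,       # lowest value cited
    "United States": 40,
    "South Korea": 60,
    "China": 80,
    "Malaysia": 104,     # highest value cited
}

# Rank from lowest to highest power distance.
for country, score in sorted(pdi.items(), key=lambda item: item[1]):
    print(f"{country:14s} PDI = {score:3d}")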
Human Error in Aviation
Research conducted by Helmreich (2000) defined flight crew error as something the crew did
or did not do that caused them to deviate from what they were supposed to be doing according to
their organization. Helmreich opined there were five different types of error:
▪ Intentional non-compliance errors which include conscious violations of SOP’s
▪ Procedural errors where intentions were correct but errors were made
▪ Communication errors where incorrect information is relayed
▪ Proficiency errors where there is a lack of technical competence
▪ Operational decision errors where a crew uses discretion not specified in SOP’s that
unnecessarily increases risk. Examples would be low level steep turns, over-reliance on
automation or continuing into adverse weather beyond an aircraft’s capability.
Table 5
Helmreich, (2000)
According to Helmreich (2000, p. 782), there are three possible responses to these errors:
▪ Trap—the error is caught and corrected by the crew
▪ Exacerbate—the crew’s response to the error leads to a negative situation
▪ Fail to respond—the crew fails to or refuses to respond to correct the error
Helmreich describes the three possible outcomes of the error:
▪ Inconsequential—the error had no effect on the safe outcome of the flight
▪ Undesired aircraft state—the error created a higher state of risk which would
include hard landings, low fuel, landing at the wrong airport and incorrect
navigation or use of the flight management system (FMS)
▪ Additional error—the initial error and response creates a new error
Helmreich describes the three resolutions to the undesired aircraft state:
▪ Recovery—the risk was eliminated by the crews’ response
▪ Additional error—the crews’ response created a new error and response
▪ Crew-based incident or accident
Table 6
Helmreich, (2000)
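The error types, crew responses, and outcomes summarized above lend themselves to a simple classification structure. The Python sketch below is a hypothetical encoding by this author, not an instrument used by Helmreich or in LOSA; it only captures the categories as enumerations so that an observed event could be tagged consistently.

from enum import Enum

# Categories taken from Helmreich (2000) as summarized above.
class ErrorType(Enum):
    INTENTIONAL_NONCOMPLIANCE = "conscious violation of SOPs"
    PROCEDURAL = "correct intention, incorrect execution"
    COMMUNICATION = "incorrect information relayed"
    PROFICIENCY = "lack of technical competence"
    OPERATIONAL_DECISION = "discretionary action that unnecessarily increases risk"

class CrewResponse(Enum):
    TRAP = "error caught and corrected by the crew"
    EXACERBATE = "crew response leads to a negative situation"
    FAIL_TO_RESPOND = "crew fails or refuses to correct the error"

class Outcome(Enum):
    INCONSEQUENTIAL = "no effect on the safe outcome of the flight"
    UNDESIRED_AIRCRAFT_STATE = "higher state of risk (e.g., hard landing, low fuel)"
    ADDITIONAL_ERROR = "initial error and response create a new error"

# Example: tagging one hypothetical observation.
observation = (ErrorType.PROCEDURAL, CrewResponse.TRAP, Outcome.INCONSEQUENTIAL)
print(", ".join(item.name for item in observation))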
Helmreich’s research while at the University of Texas developed a methodology to collect
and analyze data during normal airline flight operations. These observations are called Line
Operational Safety Audits (LOSA) and are still being conducted by airlines worldwide today.
These audits use trained professional observers who ride along in the cockpit with a flight crew
during a scheduled revenue flight and watch what is being done. This audit is completely a non-
jeopardy event meaning a failure would not be recorded on their official FAA pilot record and
the flight crew’s identity is not even included in the report. The purpose is to gather data to
document situational factors and crew behavior. The graph above shows the outcomes from the
initial LOSA observations and the five main categories of errors based on the empirical data. The
graph shows that although errors caused by non-compliance have a far lower chance of becoming a consequential event or crash, they occur much more frequently than proficiency errors, which have the highest chance of causing a consequential event.
The findings from the initial audits of over 3,500 flights conducted by the University of
Texas found the following;
Sources of threat and types of error observed during line operations safety audit
Terrain (mountains, buildings)—58% of flights
Adverse weather—28% of flights
Aircraft malfunctions—15% of flights
Unusual air traffic commands—11% of flights
External errors (air traffic control, maintenance, cabin, dispatch, and ground crew)—8% of
flights
Operational pressures—8% of flights
Source: University of Texas, 2000
Table 7
What was surprising to learn from research conducted by Helmreich (2000) was that, on average, there were two threats and two errors observed on each of these LOSA-observed airline flights. Understanding in what phase of flight these errors occur and for what reasons allows the different operators, and ultimately the entire industry, to respond with error management and mitigation procedures. These procedures many times involve
additional training for the flight crews.
Understanding how threat and error and their management interact to determine outcomes
is critical to safety efforts. To this end, a model has been developed that facilitates analyses
both of causes of mishaps and of the effectiveness of avoidance and mitigation strategies.
A model should capture the treatment context, including the types of errors, and classify
the processes of managing threat and error. Application of the model shows that there is
seldom a single cause, but instead a concatenation of contributing factors. The greatest
value of analyses using the model is in uncovering latent threats that can induce error.
(Helmreich, 2000, p. 783)
United Airlines Unstabilized Approaches
One of the challenges facing airlines worldwide is the pilot shortage and how that affects their
ability to hire and train competent flight crews. An example of this is what United Airlines is
seeing. According to research conducted by Rosenkrans (2015), United Airlines employs over
11,000 pilots and will be hiring over 1,000 pilots each year to keep up with the mandatory
retirement age of 65. In addition to this daunting task, they also have a need to ensure that the
new pilots and tenured pilots all comply with the company’s SOPs (standard operating
procedures), company policies and work within the confines of the FARs (Federal Aviation
Regulations). United Airlines realized that the human error issues that they were confronting
were very similar to those unearthed by the investigation following the Space Shuttle Challenger
explosion in that management had made poor decisions.
“One of the academic analyses argues that although everyone involved was accustomed to
mission-completion pressure as a factor on decisions regarding a Challenger launch, the
fact that 24 previous launches had been successful with known leaks in seals (called o-
rings) between rocket stages may have been the most important human factor, Sharber said.
Today the term normalization of deviance-the gradual process by which the unacceptable
becomes acceptable in the absence of adverse consequences-can be applied legitimately to
the human factors risks in airline operations as to the Challenger accident. ‘The shortcut
slowly but surely over time becomes the norm,’ in other words, Sharber said” (Rosenkrans,
2015, p. 2).
United Airlines participates in several different data analysis and predictive analytical
programs within all of their flight operations as do all of the United States flag carriers. The
question becomes whether they have the organizational culture and fortitude to pay attention to empirical data.
In FOQA (flight operational quality assurance programs), the airplane gives us objective
information. From ASAP (aviation safety action programs), we get the human story from
the pilots themselves. We have the LOSA (line operations safety audit) study, [so] now we
get information from objective outside observers. So if SOPs and [other] procedures are
based on all that valid information, why then would crews not comply (Rosenkrans, 2015,
p. 2)?
All airlines have policies regarding flying stabilized approaches. What this means is that by a
certain altitude above the ground during an approach to landing, normally between 500’
AGL (above ground level) and 1,500’ AGL, the airplane must be fully configured with the
landing gear down and locked, airspeed normally +10 knots or - 5 knots (10 knots above or 5
knots below approach speed), and the glideslope within one vertical dot (deviation from the
required glideslope). If the airplane is not stabilized by the required height above the ground, the
pilots are required to perform a go-around and not land. One of the areas that United Airlines
looked closely at was the number of times the pilots decided to land from an unstabilized
approach and intentionally violate this policy and SOP.
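A stabilized-approach policy of the kind described above can be expressed as a simple set of checks. The Python sketch below uses only the figures quoted in the text (gear down and locked, airspeed within +10/-5 knots of approach speed, glideslope within one dot by the stabilization height); the function name and its inputs are this author's generic illustration, not United Airlines' actual logic.

def is_stabilized(gear_down_and_locked, airspeed_kt, approach_speed_kt,
                  glideslope_deviation_dots):
    """Generic stabilized-approach check based on the criteria described in the text.

    By the required height above the ground: gear down and locked, airspeed
    within +10/-5 knots of approach speed, and glideslope within one dot.
    """
    speed_ok = (approach_speed_kt - 5) <= airspeed_kt <= (approach_speed_kt + 10)
    glideslope_ok = abs(glideslope_deviation_dots) <= 1.0
    return gear_down_and_locked and speed_ok and glideslope_ok

# Example: 12 knots fast with the glideslope held -- not stabilized, so the
# policy would require a go-around rather than a landing.
print(is_stabilized(True, airspeed_kt=149, approach_speed_kt=137,
                    glideslope_deviation_dots=0.4))  # False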
The data that United Airlines found for stabilized approaches matched the industry averages: 96% to 97% of all airline approaches were stabilized, which left 3-4% as unstabilized. What was disconcerting was that of the 3-4% of unstabilized approaches, landings were made 97-98.5% of the time in violation of the policy, and go-arounds were made only 1.5-3% of the time (Rosenkrans, 2015).
Table 8
Source-Rosenkrans, 2015
To give context to the data found by United Airlines listed above, the researcher has applied
their findings to an average sized airline that operates 1,300 flights per day to highlight the
breadth of this increased level of risk to their operation. This would yield 474,500 flights per
year which equates to 474,500 landings per year. Using their conservative numbers of 98% of
the landings being stabilized and 2% being unstabilized equates to 9,490 (474,500 X 2%)
unstabilized approaches per year or 26 (9,490 divided by 365 days) per day. All 26 of these
approaches should have ended in a go-around per their policy and SOPs, however, 25 out of 26
or 97% ended with a landing in direct violation of their policy and SOPs.
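The arithmetic behind this example can be reproduced directly; the short Python sketch below restates the calculation using the same assumed figures (a hypothetical airline flying 1,300 flights per day, a conservative 2% unstabilized-approach rate, and roughly 97% of those continued to a landing).

# Reproducing the worked example above for a hypothetical average-sized airline.
flights_per_day = 1_300
flights_per_year = flights_per_day * 365                     # 474,500 flights/landings

unstabilized_rate = 0.02                                      # conservative 2%
unstabilized_per_year = flights_per_year * unstabilized_rate  # 9,490
unstabilized_per_day = unstabilized_per_year / 365            # ~26

landed_anyway_rate = 0.97                                     # ~97% continue to land
landed_anyway_per_day = unstabilized_per_day * landed_anyway_rate  # ~25 of 26

print(f"Flights per year: {flights_per_year:,}")
print(f"Unstabilized approaches per year: {unstabilized_per_year:,.0f} (~{unstabilized_per_day:.0f} per day)")
print(f"~{landed_anyway_per_day:.0f} of those land each day in violation of policy")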
So what did United Airlines do to mitigate this known risk? They changed their policy for
stabilized approaches. The new policy stated that by 1,500’ AGL the pilots will have the landing
gear down and their speed under 180 knots. By 1,000’ AGL, their airspeed will be between +15
knots to -5 knots (15 knots above or 5 knots below the approach speed) which gives them a 20
knot buffer. If the approach is not stabilized by 1,000’ AGL, the approach may continue, but a
callout (communicating the deviation) by the non-flying pilot is required followed by an
immediate corrective action by the Pilot Flying. If the approach is not fully stabilized by 500’
AGL, the crew is required to do a go-around. This change resulted in a 22% decrease in
unstabilized approaches for United Airlines. “The conclusion that the airline reached is that the
status quo is unsustainable. ‘NASA [The National Aeronautics and Space Administration] got
away with [launches of Challenger] 24 times. If you’re the average airline…your exposure [to
having an accident]is over 50 times per day, which begs the next question. How long can we get
away with this as an industry?” (Rosenkrans, 2015, p. 3)
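The revised policy described above can be read as a staged decision rule: a configuration gate by 1,500 ft AGL, a correction gate at 1,000 ft AGL, and a hard go-around gate at 500 ft AGL. The Python sketch below encodes that reading using the numbers quoted from Rosenkrans (2015); the function and its return values are illustrative only, not United Airlines' operational software.

def approach_action(height_agl_ft, gear_down, airspeed_kt, approach_speed_kt):
    """Illustrative decision rule for the revised stabilized-approach policy described above."""
    within_band = (approach_speed_kt - 5) <= airspeed_kt <= (approach_speed_kt + 15)
    stabilized = gear_down and within_band
    if height_agl_ft > 1_000:
        # By 1,500 ft AGL: gear down and airspeed under 180 knots.
        return "continue" if (gear_down and airspeed_kt < 180) else "configure: gear down, slow below 180 kt"
    if height_agl_ft > 500:
        # By 1,000 ft AGL: within +15/-5 knots; otherwise call out and correct.
        return "continue" if stabilized else "callout deviation and correct"
    # By 500 ft AGL: must be fully stabilized, otherwise go around.
    return "continue" if stabilized else "go around"

# Example: still 20 knots fast passing 450 ft AGL -> go around.
print(approach_action(450, gear_down=True, airspeed_kt=157, approach_speed_kt=137))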
Human Factors Analysis and Classification System (HFACS)
As a direct result of an increase in the number of incidents and crashes occurring within
the US Navy and Marine Corps during the 1990s, a group of naval behavioral scientists led by Dr. Wiegmann and Dr. Shappell developed the Human Factors Analysis and Classification System. Their work was based upon the research and findings of Dr. Reason and
his Swiss cheese model (Reason, 1990). The HFACS research found that the Navy and Marine
Corps had the highest accident rate within the US military. The primary cause of the majority of
these accidents was violations made by the flight crews.
“The view that traditional accident investigation is little more than an effort to assign blame
may be true for some inquiries (e.g., legal proceedings, insurance claims), but most safety
investigators would argue that their goal is to simply prevent the accident from happening
again. It was with the latter view in mind that HFACS was developed” (Shappell &
Wiegmann, 2001). Drawing upon Reason’s (1990) concept of latent and active failures,
HFACS describes human error at each of four levels: (a) organizational influences, (b)
unsafe supervision (i.e., middle management), (c) preconditions for unsafe acts, and (d) the
unsafe acts of operators (e.g., aircrew, maintainers, air traffic controllers) (Shappell et al.,
2007, p. 228).
Table 9. Description of HFACS Causal Categories
Organizational Influences
Organizational climate: Prevailing atmosphere/vision within the organization, including such
things as policies, command structure, and culture.
Operational process: Formal process by which the vision of an organization is carried out
including operations, procedures, and oversight, among others.
Resource management: How human, monetary, and equipment resources necessary to carry out
the vision are managed.
Unsafe Supervision
Inadequate supervision: Oversight and management of personnel and resources, including
training, professional guidance, and operational leadership, among other aspects.
Planned inappropriate operations: Management and assignment of work, including aspects of
risk management, crew pairing, operational tempo, etc.
Failed to correct known problems: Those instances in which deficiencies among individuals,
equipment, training, or other related safety areas are “known” to the supervisor yet are allowed
to continue uncorrected.
Supervisory violations: The willful disregard for existing rules, regulations, instructions, or
standard operating procedures by managers during the course of their duties.
Preconditions for Unsafe Acts
Environmental factors
Technological environment: This category encompasses a variety of issues, including
the design of equipment and controls, display/interface characteristics, checklist layouts,
task factors, and automation.
Physical environment: Included are both the operational setting (e.g., weather, altitude,
terrain) and the ambient environment (e.g., heat, vibration, lighting, toxins).
Condition of the operator
Adverse mental states: Acute psychological and/or mental conditions that negatively
affect performance, such as mental fatigue, pernicious attitudes, and misplaced
motivation.
Adverse physiological states: Acute medical and/or physiological conditions that
preclude safe operations, such as illness, intoxication, and the myriad pharmacological
and medical abnormalities known to affect performance.
Physical/mental limitations: Permanent physical/mental disabilities that may adversely
impact performance, such as poor vision, lack of physical strength, mental aptitude,
general knowledge, and a variety of other chronic mental illnesses.
Personnel factors
Crew resource management: Includes a variety of communication, coordination, and
teamwork issues that impact performance.
Personal readiness: Off-duty activities required to perform optimally on the job, such as
adhering to crew rest requirements, alcohol restrictions, and other off-duty mandates.
Unsafe Acts
Errors
Decision errors: These “thinking” errors represent conscious, goal-intended behavior
that proceeds as designed, yet the plan proves inadequate or inappropriate for the
situation. These errors typically manifest as poorly executed procedures, improper
choices, or simply the misinterpretation and/or misuse of relevant information.
Skill-based errors: Highly practiced behavior that occurs with little or no conscious
thought. These “doing” errors frequently appear as breakdowns in visual scan patterns,
inadvertent activation/deactivation of switches, forgotten intentions, and omitted items in
checklists. Even the manner or technique with which one performs a task is included.
Perceptual errors: These errors arise when sensory input is degraded, as is often the
case when flying at night, in poor weather, or in otherwise visually impoverished
environments. Faced with acting on imperfect or incomplete information, aircrew run the
risk of misjudging distances, altitude, and descent rates, as well as of responding
incorrectly to a variety of visual/vestibular illusions.
Violations
Routine violations: Often referred to as “bending the rules,” this type of violation tends
to be habitual by nature and is often enabled by a system of supervision and management
that tolerates such departures from the rules.
Exceptional violations: Isolated departures from authority, neither typical of the
individual nor condoned by management.
Note: Adapted from Shappell et al. (2007, p. 229).
What Shappell et al. (2007) were describing in Table 9 above was a listing of aviation accident
causal factors divided into four categories, i.e., Organizational Influences, Unsafe Supervision,
Preconditions for Unsafe Acts, and Unsafe Acts. These four categories were further divided into
sub-categories for more specific identification. For example, if an accident was caused by the
organization’s culture, it would be assigned to the Organizational Influences category and to the
Organizational Climate sub-category.
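As a simple illustration of how a taxonomy like this can be applied, the sketch below encodes the HFACS levels and sub-categories from Table 9 as a small data structure and tags a hypothetical finding with a category and sub-category, mirroring the culture example above; the finding text and the helper function are invented for illustration.

```python
# HFACS levels and sub-categories adapted from Table 9 (Shappell et al., 2007).
HFACS = {
    "Organizational Influences": [
        "Organizational climate", "Operational process", "Resource management"],
    "Unsafe Supervision": [
        "Inadequate supervision", "Planned inappropriate operations",
        "Failed to correct known problems", "Supervisory violations"],
    "Preconditions for Unsafe Acts": [
        "Technological environment", "Physical environment",
        "Adverse mental states", "Adverse physiological states",
        "Physical/mental limitations", "Crew resource management",
        "Personal readiness"],
    "Unsafe Acts": [
        "Decision errors", "Skill-based errors", "Perceptual errors",
        "Routine violations", "Exceptional violations"],
}

def classify(finding, category, subcategory):
    """Attach an HFACS category/sub-category label to an investigative finding."""
    assert subcategory in HFACS[category], "sub-category must belong to the category"
    return {"finding": finding, "category": category, "subcategory": subcategory}

# Hypothetical example mirroring the text: a culture-related finding is tagged
# under Organizational Influences / Organizational climate.
example = classify("Company culture discouraged go-around calls by junior pilots",
                   "Organizational Influences", "Organizational climate")
print(example)
```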
Table 10. Source: Shappell et al. (2007)
High Reliability Organizations
What makes an organization within aviation a high-reliability organization (HRO)? According
to O’Neil (2013), during the 1980s, a group of researchers from the University of California,
Berkeley, set out to find what made organizations such as Federal Aviation Administration air
traffic control, US Naval aircraft carriers, and nuclear power plants able to achieve a much lower
rate of error amongst their employees and deliver reliably on their missions. According to the
FAA (2009), its Air Traffic Control services employ 11 million workers who are responsible for
the safe movement of 48 million civilian and military aircraft transporting 745 million
passengers safely to their destinations per year. According to O’Neil (2013), their ability to
safely move 48 million aircraft through the National Airspace System on a yearly basis makes
them a highly reliable and error-free business model.
O’Neil’s (2013) research found that the FAA’s success as an HRO was directly attributable to
the acceptance, within its organizational culture, of strict policies and procedures. These
standard operating procedures (SOPs) defined the expected or normative behaviors for its 11
million employees.
High-reliability organizations (HROs) have emerged across a number of highly technical,
and increasingly automated industries (e.g., aviation, medicine, nuclear power, and oil field
services). HROs incorporate complex systems with a large number of employees working
in dynamic, and potentially dangerous environments. Effectively managing contingencies
in HROs, to simultaneously promote safe and efficient behaviors is a daunting task. Crew
Resource Management (CRM) has emerged in HROs as a highly effective approach to
training and sustaining essential skills within work teams operating across a large
workforce. CRM provides a competency framework that enables adherence to standard
work instructions while, at the same time, encourages adaptive variance in responding to
effectively manage current environmental circumstances that depart from normal routines
(Alavosius, 2017, p. 142).
What started out as the aviation industry developing a solution to a horrible safety record
during the 1970s and 1980s became a model for these other industries. The basis for this model
was Crew Resource Management (CRM).
The introduction of CRM into aviation facilitated a dynamic culture shift among the crew
working in this industry. Historically, the hierarchies within aviation were the driving force
behind the organizational culture. A captain was seen as the ultimate authority figure of a
flight crew. A captain’s word was law, and should never be questioned or challenged. In
such hierarchical structures, human error can still occur, as the captain is not exempt from
making poor decisions. Therefore, it is crucial that effective, clear, two-way
communication exists between the captain and any subordinates to take advantage of all
available resources (their knowledge, observations of current conditions, interpretations of
data, etc.) that crew members have to offer. Throughout airline CRM training, crew
members are taught to speak up before an incident occurs, rather than to place blame after
an incident occurs. For instance, if a copilot notices some deviance from standard operating
procedures (SOPs), it is their obligation to speak up and it becomes the captain’s obligation
to take the co-pilot’s observation into account (Alavosius, 2017, p. 148).
CRM has been accepted at different rates by different airlines operating from different
countries. This challenge of acceptance is sometimes complicated by national cultural
expectations, which are discussed further and highlighted in the power-distance research
mentioned earlier.
Research conducted by Klein et al. (1995) concluded that HROs differ from other
organizations because of their unique cultures and the normative behaviors amongst their
employees. They also found that employees’ role perceptions and attitudes differed with regard
to their perceived acceptance and fit within the organization, and that employee satisfaction
correlated with their willingness to stay. This aligns with other research reviewed for this
dissertation.
Broken Windows Theory
One area of research that has not been connected to or associated with aviation and
organizational culture is the Broken Windows Theory. This academic theory was proposed in
1982 by James Q. Wilson and George Kelling. The premise was that neighborhood incivility,
when ignored, could lead to more serious crimes. The metaphor of the broken window was that
of an abandoned warehouse with only one broken window. When that one broken window is
ignored and not repaired, additional windows begin to be broken, which ultimately reduces the
value of the warehouse and of the area or neighborhood surrounding it. A simple way to describe
this theory is that when small problems are ignored, they become big problems.
They drew upon a field experiment conducted by Philip Zimbardo (1973) that
demonstrated the process through which seemingly abandoned automobiles, left without
license plates and with hoods up, were vandalized in the Bronx, New York, and Palo Alto,
California. The abandoned car in the high-crime area of the Bronx was vandalized within
10 minutes, and was completely stripped within 24 hours. The abandoned car in upscale
Palo Alto remained untouched for a full week until Zimbardo himself damaged the car
with a sledgehammer. Within a few hours, the abandoned car was vandalized, stripped
and turned over. To Wilson and Kelling (1982:32), the Zimbardo experiment suggested
that vandalism and more serious crimes can occur anywhere once the sense of mutual
regard and the obligations of civility are lowered by actions that seem to signal that “no
one cares” (Welsh et al., 2015, p. 1).
In the 1990s, William Bratton became the police commissioner in New York City. He applied
this theory and turned it into policy, and in doing so he was able to drop the rate of serious
felonies by 40%. He did this by having officers focus enforcement actions on the smaller
misdemeanor crimes normally associated with quality-of-life issues. These included disorderly
conduct, public drunkenness, panhandling, vandalism, petty theft, prostitution, and civil actions
on properties deemed to be a threat to the public. This focus on quality-of-life issues increased
awareness among the residents of the different neighborhoods and helped them take ownership
and be part of the solution as a result of this new civic engagement.
While this theory and the subsequent policing policies were not without controversy, my own
personal experience of utilizing this approach during my 30-year career in law enforcement
produced results similar to those Commissioner Bratton found in New York City. When you
focus your efforts on the small problems or issues, there is a better chance that the small issues
will not become big problems. If this works with quality-of-life issues and overall crime, how
will it work within an organization? If an airline focuses its efforts on the smaller day-to-day
issues it faces, will this manifest itself in a better safety record? If a pilot knows what types of
behaviors will not be tolerated, will this affect his or her decisions on the flight deck? I believe
that this is what separates how one organization does business from another, or in other words,
what makes one organizational culture different from another.
Associated Research Projects
The Safety Culture Indicator Scale Measurement System
A search of completed research in the area of organizational culture and aviation was
conducted using available data in order to help ground the researcher’s premise. A research study
was conducted by von Thaden and Gibbons (2008), sponsored by the FAA, titled “The Safety
Culture Indicator Scale Measurement System.” Its purpose was to provide a framework for
developing a benchmarking tool that was scientifically based and psychometrically rigorous in
order to identify and analyze indicators of an organization’s safety culture (von Thaden &
Gibbons, 2008).
According to von Thaden and Gibbons (2008), human error continues to be cited in
approximately 80% of commercial aviation accidents, and a high number of these accidents are
said to have been caused by errors by the flight crews. This, however, represents only a
superficial cause, and the examination of systemic organizational issues must be advanced
(Reason, 1990; Helmreich & Merritt, 1998; Wiegmann & Shappell, 2003; von Thaden et al.,
2006).
The goal of their study was to develop and validate a measure of the safety culture within
scheduled airline operations. “The outcome of this research provided a well-vetted measurement
system that has provided consistent, meaningful information about the culture of safety within an
organization, from the initial baseline evaluation to re-evaluation of any changes or progress in
the culture, taking consideration of its working groups, as well as standardized information for
the comparison of safety cultures across the aviation industry as a benchmarking tool” (von
Thaden & Gibbons, 2008, p. 1). Their scale measurement tool utilizes a survey instrument using
a 7-point Likert-type response scale along with an option for respondents to provide comment.
This provides quantitative data to document an organization’s commitment to its safety culture
and qualitative evidence of where the organization is doing well or needs to improve. The survey
has both an operations version and a maintenance version.
Safety Culture in Military Aviation
A second research study reviewed was titled “Characterising influences on safety culture in
military aviation: a methodologically grounded approach” which was conducted in 2015 by
Bennett, Hellier & Weyman. According to Bennett, et al (2015), their research focused on
addressing issues by using a qualitative approach to investigating military aviation employees in
order to develop a quantitative measurement tool.
A thematic analysis was conducted using 12 focus groups (N=89) of military pilots and
maintenance personnel in a semi-structured format. The purpose was to evaluate the influences
employees have on organizational safety culture. The influences were categorized as follows:
(1) policy and procedures, (2) pressure, (3) management ownership of safety, (4) individual
responsibility and risk perception, (5) communication, and (6) organizational commitment.
These six highlighted areas formed the theoretical underpinnings for their data collection. The
participants provided grounded data through focus groups and interviews that were audio-taped
and transcribed verbatim, and the analysis was both theoretically and empirically driven. The
method of constant comparative analysis (Glaser and Strauss, 1967) was used to identify the
relationships within this grounded data. The research provided insights that focused on the need
for contextualization of safety culture and the need for further research to identify and develop a
quantitative tool for measuring organizational safety culture (Bennett, et al., 2015).
Safety Culture in Aviation Maintenance
The third and final associated research study reviewed for this project was titled “An
Empirical Study on Safety Culture in Aviation Maintenance Organization,” conducted by
Chun-Yong Kim and Byung-Heum Song (2016). In this study, sponsored by the Federal
Aviation Administration, Kim and Song analyzed the sub-par elements of the organizational
safety culture within the largest airline maintenance providers in Korea in order to find ways of
improving their performance.
According to data presented by Kim and Song (2016) from the Korean Ministry of Land and
Transportation in 2014, during the previous three years there had been 141 aviation
accidents/incidents in Korea. The majority of these occurrences were caused by emergency
warning light malfunctions; nevertheless, they required a rejected takeoff or a return to base
(RTB), essentially forcing the airplane to make an immediate landing. These incidents
unnecessarily increased the risk to the flights by increasing the workload on, and the opportunity
for errors by, the flight crews.
A total of 300 questionnaires were distributed, which resulted in 236 responses to the 64
questions asked. The questions centered on the five different types of safety culture being
examined: (1) informed culture, (2) reporting culture, (3) just culture, (4) learning culture, and
(5) flexibility culture. Responses were given on a 5-point Likert scale, and the data were
analyzed using parametric tests (analysis of variance [ANOVA], t-tests, and Pearson correlation),
a frequency analysis, and reliability testing (Kim and Song, 2016).
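For readers unfamiliar with the statistical procedures named above, the following sketch shows how tests of that kind might be run on hypothetical 5-point Likert responses using Python’s NumPy and SciPy libraries, together with a hand-computed Cronbach’s alpha for reliability; the data are randomly generated and the analysis is only a schematic of the approach Kim and Song (2016) describe, not their code or data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical 5-point Likert responses (rows = respondents, columns = items)
# for three work groups, standing in for the survey data described in the text.
group_a = rng.integers(1, 6, size=(80, 10))
group_b = rng.integers(2, 6, size=(80, 10))
group_c = rng.integers(1, 5, size=(76, 10))

# Parametric comparisons of mean scale scores across groups.
means = [g.mean(axis=1) for g in (group_a, group_b, group_c)]
f_stat, p_anova = stats.f_oneway(*means)                  # one-way ANOVA
t_stat, p_ttest = stats.ttest_ind(means[0], means[1])     # pairwise t-test
r, p_corr = stats.pearsonr(group_a[:, 0], group_a[:, 1])  # correlation of two items

# Reliability: Cronbach's alpha for one group's 10-item scale.
item_vars = group_a.var(axis=0, ddof=1).sum()
total_var = group_a.sum(axis=1).var(ddof=1)
k = group_a.shape[1]
alpha = (k / (k - 1)) * (1 - item_vars / total_var)

print(f"ANOVA p={p_anova:.3f}, t-test p={p_ttest:.3f}, r={r:.2f}, alpha={alpha:.2f}")
```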
According to Kim and Song (2016), the conclusions of this study were that in order for an
organization to have a healthy safety culture, it must first develop open communication between
all levels of the organization, it must focus on becoming a learning culture, and that culture must
be positive and flexible. The culture must not focus on punishment as a response to an error that
was not deliberate or reckless. It needs to be understood that safety-related errors may occur, and
a more effective way for the organization to respond is with a rewards program that honors
error-free work.
Summary
Chapter two included a review of the professional literature surrounding organizational
culture and the human error involved in aviation accidents. This literature will be utilized to help
answer both research questions for this project. Chapter three will be a review of the
methodology used to support the research in this project.
Chapter Three: Methodology
Introduction
The purpose of this chapter is to introduce the research methodology for this qualitative case
study analysis regarding the possible influences that organizational culture may play with regards
to aviation safety. This approach incorporates a deep dive document analysis based on the
official investigative reports from the three accident investigations completed by the National
Transportation Safety Board and the Rogers Presidential Commission on the Space Shuttle
Challenger Accident.
The choice of a qualitative case study analysis and a constructivist approach for this study is
detailed within this chapter. The chosen research questions, methodology, selected case studies,
procedures, analysis method, and validity concerns are also addressed. Both Stake (1995) and
Yin (2003) base their approach to case study on this same constructivist paradigm.
Research Questions
This study was conducted in order to answer the following research questions:
RQ1: What causes the high level of human error in aviation accidents?
RQ2: Is organizational culture a causational variable, and if so, to what extent?
Methodology Selected
According to Yin (2003), a qualitative case study analysis is appropriate when the purpose of
the research is to answer “how” and “why” questions, when the researcher cannot manipulate the
behavior of those studied, and when the researcher wants to understand the contextual conditions
of the cases because they are believed to be relevant to the study.
A case study approach can also offer additional insights into what gaps might exist in an
organization’s safety policies and procedures, and how adjusting those policies and their
associated training programs might help to mitigate safety concerns. The purpose of this study
was to determine whether organizational culture plays a role in aviation safety, which can best
be ascertained by carefully examining multiple accident case studies; as a result, this qualitative
approach was determined to be the most appropriate choice.
A constant comparative method was used to analyze the data collected from the case studies
in order to better understand the relationships within the data. According to Glaser and Strauss
(1967), this process starts with identifying a phenomenon or event, such as an aircraft accident.
The next step involves identifying a few local concepts or details of the identified event.
Emerging patterns and themes are then identified and, based on the initial understanding of the
event, data are collected and analyzed, which lays the foundation for the grounded theory.
Case Study Methodology
This qualitative study was conducted using a multiple case study methodology. Creswell
(2003) defined a case study as one in which the “researcher explores in depth a program, an
event, an activity, a process, or one or more individuals” (Creswell, 2003, p. 15). Creswell
(1998) earlier believed
that the format of the case study should include the problem being investigated, the context of
those involved, the issues surrounding the problem and the lessons learned. Yin (2014) argues
that, when “the process has been given careful attention, the potential result is the production of a
high-quality case study” (Yin, 2004, p. 199). Stake (1995) defined a case study as “both the
process of learning about the case and the product of our learning” (p. 237).
According to Merriam (1998), case study methodology is constructivist in nature, since she
believes:
the key philosophical assumption upon which all types of qualitative research are based is
the view that reality is constructed by individuals interacting with their social worlds…The
researcher brings a construction of reality to the research situations, which interacts with
other people’s constructions or interpretations of the phenomenon being studied. The final
product of this type of study is yet another interpretation by the researcher of others’ views
filtered through his or her own (Merriam, 1998, p. 22).
The Researcher
The researcher worked 30 years for a governmental agency with the majority of time spent
within their aviation division: as a line pilot, supervisor, chief pilot and ultimately as the aviation
unit program manager. He also worked for 10 years as a public safety aviation unit program
inspector for agencies applying for national accreditation, and he is currently a pilot for a
U.S.-flagged international airline. He is dual-rated as a pilot in multi-engine transport category jet
aircraft and helicopters.
The researcher has been involved in thousands of investigations throughout his extensive
career, and has been trained in the skills necessary to carry out the designed study. No participant
in this study has a direct relationship with the researcher that could represent a conflict of
interest, such as an employer-employee relationship, contracting/subcontracting, or any other
personal relationships that could bring bias to the research study. The researcher acknowledges
that as a result of his professional experience within the aviation industry, he brings with him a
certain amount of bias towards the importance of aviation safety and organizational culture.
Document Analysis
The foundation for this research lies within the confines of the three case studies that have
been selected. The accidents are similar in that they reference organizational culture as a factor
and yet very different in nature and bring about varying concerns from a human factors and
organizational culture standpoint. The review of the professional literature included the
following areas:
▪ Organizational Culture
▪ Aircraft Crash Statistics
▪ Power Distance
▪ Sources of Power
▪ Power Distance Theory
▪ Human Error in Aviation
▪ Swiss Cheese Model
▪ Human Factors Analysis and Classification System (HFACS)
▪ High Reliability Organizations
▪ Crew Resource Management
▪ Broken Windows Theory
Case Study Document Review and Data Collection
The three case studies that were chosen for this research after consulting with the dissertation
committee were:
1. Asiana Airlines flight #214. This crash involved a Boeing 777 that hit the seawall during
an approach to landing at San Francisco International Airport. There were several cultural
influences that played a role in this crash that were highlighted in the investigation. This
case study will focus on those cultural influences and also look into the multiple other
findings from the National Transportation Safety Board official documents covered in
chapters 4 and 5. These documents include thousands of pages of interviews, data and
other factual components. Organizational culture was an official causal factor in this
accident (National Transportation Safety Board, 2014).
2. Space Shuttle Challenger. This case study investigates the explosion of the space shuttle,
which was caused in part by the decision to launch at a temperature below that at which
the O-rings were known to seal reliably, allowing an O-ring to leak and fuel the explosion.
There were many organizational culture and human factors issues that also played an
integral part in this incident. A Presidential commission was formed to investigate this
tragic event, and its official investigative findings and report are highlighted in this case
study. Organizational culture was a factor in this accident (Rogers Commission, 1986).
3. New Mexico State Police fatal helicopter crash. This case involved a search and rescue
mission performed by the New Mexico State Police that ultimately led to the crash of the
helicopter and the deaths of the pilot and the person being rescued. There were numerous
organizational culture and human factors issues that played a part in the causation. The
primary investigative document used is the official crash investigation report prepared by
the National Transportation Safety Board (National Transportation Safety Board, 2011).
Description of Data Analysis and Coding
As a direct result of the depth of the official investigative reports, the researcher was able to
dissect each accident from a human factors and organizational culture standpoint. Thousands of
pages of official documents were reviewed and analyzed as part of this research. While there are
individual causal findings for these three accidents, they were also analyzed to see if there were
any emerging patterns or themes.
The analysis process was guided by methods and research conducted by Robert Yin (2014),
Robert Stake (1995) and Sharan Merriam (1998). All three of these researchers suggested using
document analysis as a main focus for gathering data. The case studies were reviewed, data were
gathered, categories were established and coded and then emerging themes and patterns were
documented using the method of constant comparative analysis (Glaser and Strauss, 1967). This
method of analysis provided the comparisons necessary to link the data with the resulting theory.
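As a schematic of the constant comparative step of looking for codes that recur across cases, the sketch below tallies hypothetical codes assigned to the three case studies and lists those appearing in more than one case; the codes shown are invented for illustration and do not reproduce the actual coding scheme or findings of this study.

```python
from collections import Counter

# Hypothetical codes assigned to each case during document review (illustrative only).
coded_data = {
    "Asiana 214":      ["organizational culture", "power distance", "automation reliance",
                        "unstabilized approach", "fatigue"],
    "Challenger":      ["organizational culture", "schedule pressure",
                        "normalization of deviance", "failed to correct known problems"],
    "NMSP helicopter": ["organizational culture", "single-pilot operations",
                        "schedule pressure", "weather decision making"],
}

# Constant comparison step: which codes recur across cases, suggesting emerging themes?
code_counts = Counter(code for codes in coded_data.values() for code in codes)
shared_themes = [code for code, n in code_counts.items() if n > 1]

print(code_counts)
print("Candidate cross-case themes:", shared_themes)
```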
The following table was used to guide the research and the writing of this manuscript.
Stake’s (1995) Checklist for assessing the quality of a case study report
1. Is the report easy to read?
2. Does it fit together, each sentence contributing to the whole?
3. Does the report have a conceptual structure (i.e., themes or issues)?
4. Are its issues developed in a serious and scholarly way?
5. Is the case adequately defined?
6. Is there a sense of story to the presentation?
7. Is the reader provided some vicarious experience?
8. Have quotations been used effectively?
9. Are headings, figures, artifacts, appendices, indexes effectively used?
10. Was it edited well, then again with a last minute polish?
11. Has the writer made sound assertions, neither over- nor under-interpreting?
12. Has adequate attention been paid to various contexts?
13. Were sufficient raw data presented?
14. Were data sources well chosen and in sufficient number?
15. Do observations and interpretations appear to have been triangulated?
16. Is the role and point of view of the researcher nicely apparent?
17. Is the nature of the intended audience apparent?
18. Is empathy shown for all sides?
19. Are personal intentions examined?
20. Does it appear individuals were put at risk?
Table 11
Summary
The goal of this chapter was to outline the research method chosen to answer the research
questions posed. A review of the method used, the case studies chosen, and the data that were
collected and analyzed described how the study was organized and presented. A case study
methodology using grounded constructivist theory was used to develop theory on the role that
organizational culture plays within aviation safety and on why human error plays such a large
role in the causation of these aviation accidents. The goal of chapter four is to provide details of
the cases being analyzed and then to show how the data identified emerging themes and patterns
within these cases.
Chapter Four: Data Review
Introduction
Chapter four includes a deep dive document analysis of the three selected case studies
involved in this research project. Case study #1 involved the Asiana #214 accident, case study #2
involved the Space Shuttle Challenger accident and case study #3 involved the New Mexico
State Police helicopter accident. The documents reviewed for the Asiana and New Mexico State
Police accidents were compiled by the National Transportation Safety Board, and the documents
reviewed for the Space Shuttle Challenger accident were compiled by the Rogers Commission.
Case Study #1
Asiana Airlines flight #214
According to the National Transportation Safety Board (2014), on July 6, 2013, just before
noon, a Boeing 777-200ER airliner being operated as Asiana Airlines flight 214 was on approach
to land at the San Francisco International Airport. The sun was shining, there were no clouds in
the sky and the winds were light. Onboard were 291 passengers, 12 flight attendants and a crew
of 3. At the time, the glideslope component of the Instrument Landing System was out of
service, but this should not have been a factor as the pilots had been cleared for what is called a
“visual approach” due to the clear weather. What this meant was the pilots would be responsible
for “hand flying” the airplane and would not be relying on the airplane’s autopilot and
instruments that would let them know if they were at the correct 3 degree glidepath for altitude.
The flight crew were vectored (given directions), accepted the visual approach and
intercepted the final approach path 14 miles from the runway. The flight crew told the air traffic
controller that they had the airport in sight when they were 17 miles from the airport. From this
location, the flight crew should have simply flown a straight line while descending to the
runway. At first, the airplane was slightly high above the glidepath, and the crew had to increase
the rate of descent. During this time, the Pilot Flying (PF) inadvertently switched modes on the
autothrottles, which caused them to go to an idle position. The flight crew did not recognize that
their airspeed was decaying and allowed the airplane to descend too low. This airliner is
normally flown by a crew of two who split their roles between actually flying the aircraft (Pilot
Flying, PF) and monitoring (Pilot Monitoring, PM).
As the airplane reached 500 ft above airport elevation, the point at which Asiana’s
procedures dictated that the approach must be stabilized, the precision approach path
indicator (PAPI) would have shown the flight crew that the airplane was slightly above the
desired glidepath. Also, the airspeed, which had been decreasing rapidly, had just reached
the proper approach speed of 137 knots. However, the thrust levers were still at idle, and
the descent rate was about 1,200 ft per minute, well above the descent rate of about 700
fpm needed to maintain the desired glidepath; these were two indications that the approach
was not stabilized. Based on these two indications, the flight crew should have determined
that the approach was unstabilized and initiated a go-around, but they did not do so. As the
approach continued, it became increasingly unstabilized as the airplane descended below
the desired glidepath; the PAPI displayed three and then four red lights, indicating the
continuing descent below the glidepath. The decreasing trend in airspeed continued, and
about 200 ft, the flight crew became aware of the low airspeed and low path conditions but
did not initiate a go-around until the airplane was below 100 ft, at which point the airplane
did not have the performance capability to accomplish a go-around. The flight crew’s
insufficient monitoring of airspeed indications during the approach resulted from
expectancy, increased workload, fatigue, and automation reliance (National Transportation
Safety Board, 2014, p. xi).
The airplane struck the seawall just short of the beginning of the runway, causing it to
cartwheel and burst into flames. Miraculously, the number of deaths was limited to three. Forty
passengers, eight flight attendants, and one crewmember sustained injuries. The airplane was
evacuated within 90 seconds, which is a testament to the skills and actions of the flight
attendants. The references above to stabilized and unstabilized approaches refer to whether the
aircraft is being flown within the airspeed, altitude, and configuration standards set by the
airline. The reference to expectancy refers to the pilots expecting the autothrottles to work
properly.
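To see where the NTSB’s figure of roughly 700 feet per minute comes from, the short calculation below works out the glidepath geometry, assuming a standard 3-degree glidepath and a groundspeed roughly equal to the 137-knot approach speed in the light winds reported; the numbers are approximate and intended only to give context to the quoted descent rates.

```python
import math

glidepath_deg = 3.0
groundspeed_kt = 137          # approximately the approach speed; winds were light
feet_per_nm = 6076.12

ground_track_fpm = groundspeed_kt * feet_per_nm / 60        # ~13,874 ft of track per minute
required_descent_fpm = ground_track_fpm * math.tan(math.radians(glidepath_deg))

print(round(required_descent_fpm))  # ~727 fpm, close to the ~700 fpm the NTSB cites;
                                    # the observed 1,200 fpm was well in excess of this
```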
The question in many people’s minds was how a professional flight crew could fly one of the
world’s most technologically advanced airliners into the ground during daylight hours with more
than ten miles of visibility and light winds. This question would be answered by the National
Transportation Safety Board (NTSB) approximately one year later, and that answer forms the
basis of fact for this case study.
Asiana Flight 214 was operated by three pilots in the cockpit. The Captain had just been
promoted to training captain, and this was his first flight with a training pilot under his
supervision; he was designated as the Pilot Monitoring (PM) because he was not actually flying
the airplane. The trainee co-pilot was the Pilot Flying (PF) and had just 33 hours of experience in
the Boeing 777. The observer/relief pilot flying in the jump seat was qualified as a co-pilot on the
777.
There is a belief that modern airliners have become so automated that pilots are not “hand
flying” the airplanes enough to develop the skills necessary to handle real-life emergencies, such
as when these complicated systems malfunction or the operator does not understand them well
enough to program them correctly. It was common practice to have the automation fly the
airplane all the way down to 1,000’ above the ground before the pilot would disconnect the
autopilot and hand fly the remainder of the approach to landing. This brief period of manual
flying is not enough, even when repeated many times.
The Asiana 777 chief pilot stated in an interview that the airline recommended using as
much automation as possible. He agreed with the statement that Asiana pilots obtained
most of their manual flying practice during approaches below 1,000 ft AGL (above ground
level or height above the ground). He stated it was permissible for Asiana pilots to
disengage the autopilot above 1,000 ft, but turning the autopilot off at 8 nm from the airport
and at 2,800 ft, for example, would not be recommended (National Transportation Safety
Board, 2014, p. 62).
The National Transportation Safety Board (NTSB) determined that the probable cause of
this accident was the flight crew’s mismanagement of the airplane’s descent during the
visual approach, the PF’s unintended deactivation of automatic airspeed control, the flight
crew’s inadequate monitoring of airspeed, and the flight crew’s delayed execution of a go-
around after they became aware that the airplane was below acceptable glidepath and
airspeed tolerances. Contributing to the accident were (1) the complexities of the
autothrottle and autopilot flight director systems that were inadequately described in
Boeing’s documentation and Asiana’s pilot training, which increased the likelihood of
mode error; (2) the flight crew’s nonstandard communication and coordination regarding
the use of the autothrottle and autopilot flight director systems; (3) the PF’s inadequate
training on the planning and execution of visual approaches; (4) the PM/instructor pilot’s
inadequate supervision of the PF; and (5) flight crew fatigue, which likely degraded their
performance (National Transportation Safety Board, 2014, p. 129).
All airlines have policies stating that if an approach is not stabilized by a certain point, the
crew is required to perform a “go-around,” in which they abort the landing, power up, and try
another approach. Asiana’s policy is that the approach must be stabilized by 500’ above the
airport’s field elevation, which is approximately 500’ above the ground (NTSB, 2014). Had the
flight crew abided by this simple policy, this accident would not have occurred. The reasoning
behind the decision not to conduct a go-around is what is complex in this scenario. Did
organizational culture or international culture influence this decision? Why would the Pilot
Flying, who was also the junior pilot in training, not have simply told the Captain that he was
“going around”?
As part of the investigation conducted by the National Transportation Safety Board, all three
pilots on the flight deck were interviewed. The Pilot Flying (PF) said he was stressed out about
having to fly a visual approach into San Francisco because he would not be able to rely on the
instrument landing system. He knew a few days ahead of time that the ILS (Instrument Landing
System) at the San Francisco International Airport was out of service and the weather was
forecasted to be clear so he would surely be flying a visual approach. He did not share this with
the Captain. The Pilot Flying had also tried to continue an unstabilized approach in clear weather
while hand flying the airplane on the flight just prior to the accident flight with another Captain.
The Literature Review chapter of this dissertation discusses a concept called power distance,
formulated by Geert Hofstede, a social psychologist who worked for IBM in the late 1960s. He
said that power distance refers to “the extent to which less powerful members of organizations
and institutions (like the family) accept and expect that power is distributed unequally”
(Hofstede & Bond, 1988, p. 10). This cultural value could be one reason for the occasional
reluctance of a junior flight crew member to call for a go-around or even to speak up about a
possible risk to the flight. There is empirical evidence confirming that certain international
cultures have greater issues with power distance than others. The Pilot Flying (PF) said in his
interview with the National Transportation Safety Board that the Captain is the only one who
can call for a “go-around”. The Captain said in his interview that anyone can call for a “go-
around.” All three pilots of Asiana 214 were Korean and the Captain was a former military pilot.
The co-pilot had been hired with no flight experience (ab initio) and had gained all of his
experience with the airline. It was difficult for the co-pilot to speak up or question anything the
Captain would say or do. This included calling for a “go-around” when something did not seem
right. Had he done just that on that fateful day, none of this would have happened. Instead, he
was afraid to go around and continued flying the airplane all the way into the ground.
The PF’s uncertainty about whether he had the authority to make a go-around decision
could have stemmed from confusion due to the inconsistent written policy in this area. In
either case, the PF’s deference to authority likely played some role in the fact that he did
not initiate a go-around. The PM, on the other hand, said he believed that the PF was
responsible for initiating a go-around. Thus, the PF and PM had differing understandings
about whose role it was to initiate a go-around in the event that one was required (National
Transportation Safety Board, 2014, p. 92).
The Airplane Safety Engineering Department of [Boeing] conducted an exhaustive
analysis of hull loss accidents for Western-built commercial jet transports over 60,000 lb
(27,216 kg) worldwide, for the period 1959 through 1992. According to this work, there
was a “strong correlation” between accident rate and two cultural indices: Individualism
and Power Distance (BCAG, 1993, p. 58). This fact led the BCAG to recommend that
“this is an area that needs further analysis. The industry must continue its efforts to
identify the cultural impacts on aviation safety” (BCAG, 1993, p. 61).
Because of the National Transportation Safety Board findings, many point to cultural factors
as a principal issue that led to this tragedy. Even Captain Lee, the trainee pilot, indirectly said
that Korean culture may have played a part. When asked if he, as the Pilot Flying, had the
authority to initiate a go-around, Lee said that even though the go-around is an important
maneuver, the instructor pilot is the only one who has the authority to call for it, and he said he
believed that was because of their culture (Nakaso & Carey, 2013). The fact that the Pilot
Flying, who was supposedly the Pilot in Command of the aircraft, did not feel that he had the
authority to conduct a go-around even though he felt one was warranted is a real problem. One
of the very first lessons taught to a new pilot is that if everything does not seem correct on an
approach, they are to go around (FAA, 2018). The fact that a pilot operating within the world of
commercial aviation, for what is considered a world-class airline, did not do so is disconcerting.
This dynamic is discussed in a chapter of the 2008 bestseller Outliers by author Malcolm Gladwell:
Gladwell pointed out the poor safety record of Korean Air—the Asian country's largest
carrier—in the 1980s and 1990s, including several fatal crashes… he said Korean Air's
problem at the time was not old planes or poor crew training. "What they were struggling
with was a cultural legacy, that Korean culture is hierarchical," he said. "You are obliged
to be deferential toward your elders and superiors in a way that would be unimaginable in
the U.S." he added. That's dangerous when it comes to modern airplanes, said Gladwell,
because such sophisticated machines are designed to be piloted by a crew that works
together as a team of equals, remaining unafraid to point out mistakes or disagree with a
captain (Howard, 2013, p. 2).
An example of this is illustrated in the conclusion of this manuscript. It highlights a mission
commanded by Col. John Nance, USAF, ret.
The researcher wanted to know whether Asiana 214’s unstabilized approach could happen to
other, similar airliners, so flight tracking data were researched for similar aircraft types (Boeing
777) being flown into San Francisco International Airport in similar weather conditions (clear
weather). On July 23, 2013, just two weeks after the crash of Asiana 214, an EVA Airways
Boeing 777-300 flying from Taiwan to San Francisco was cleared for a visual approach to land
on the same runway as Asiana 214. The air traffic controller in the tower noticed that the airliner
was at 600’ when it was 3.8 miles from the runway, which was 600’ below where it should have
been at that position. The controller ordered the pilots to “climb immediately, altitude alert” and
gave them the altimeter setting of 29.97. This flight crew made the right decision at that point
and did a go-around (Flightaware, 2017). EVA 28 was at the same altitude that Asiana 214 was
at approximately the same position, and it was being flown on what is said within the industry to
be the simplest of all approaches: one done in visual conditions. What this event tells us is that
what happened to Asiana 214 could happen to other airlines.
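The controller’s concern in the EVA 28 event can be checked with the same glidepath geometry: on a nominal 3-degree path, an airliner 3.8 nautical miles from the runway should be roughly 1,200 feet above it, which is consistent with the account that an altitude of about 600 feet was some 600 feet low. The calculation below is approximate and ignores threshold-crossing height and field elevation.

```python
import math

# Approximate height an airliner should have on a nominal 3-degree glidepath
# at 3.8 nautical miles from the runway (illustrative figures from the text).
distance_nm = 3.8
glidepath_deg = 3.0
feet_per_nm = 6076.12

expected_height_ft = distance_nm * feet_per_nm * math.tan(math.radians(glidepath_deg))
observed_height_ft = 600

print(round(expected_height_ft))                        # ~1,210 ft on a 3-degree path
print(round(expected_height_ft - observed_height_ft))   # ~600 ft below the path, as reported
```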
Case Study #2
Space Shuttle Challenger
The first document researched for this case study was the Rogers Commission Report
(Rogers Commission, 1986). This report was written by a Presidential Commission after the
disaster and ultimately presented to President Ronald Reagan in 1986. The focus of the
investigation was on the failure of the O-rings that sealed the joints between the segments of the
solid rocket boosters. NASA launched the shuttle in sub-freezing temperatures for the first time
with the knowledge that these O-rings would not perform well at such low temperatures. NASA
management pushed to launch even after being warned by engineers that the launch should be
postponed. This push was a direct result of the pressure placed on the organization to succeed at
any cost and the fact that NASA was scheduling 24 launches per year (The Rogers Commission
Report on the Space Shuttle Challenger Crash, 1986).
While the Rogers Commission Report certainly looked at the organizational culture of NASA
and all of the contractors involved in the Shuttle Program, the researcher wanted to drill down
further on how a highly structured organization such as NASA could have a flawed culture that
could lead to such a failure.
Research was conducted into a term frequently associated with the shuttle disaster: the
“normalization of deviance.” This theory was developed by sociologist Diane Vaughan of
Columbia University. When asked to describe the normalization of deviance during an interview
in May 2008, she said:
Social normalization of deviance means that people within the organization become so
much accustomed to a deviant behavior that they don’t consider it as deviant, despite the
fact that they far exceed their own rules for the elementary safety. But it is a complex
process with some kind of organizational acceptance. The people outside see the situation
as deviant whereas the people inside get accustomed to it and do not. The more they do it,
the more they get accustomed. For instance, in the Challenger case there were design flaws
in the famous o-rings, although they considered that by design the o-rings would not be
damaged. In fact it happened that they suffered some recurrent damage. The first time the
o-rings were damaged the engineers found a solution and decided the space transportation
system to be flying with acceptable risk. The second time damage occurred; they thought
the trouble came from something else. Because in their mind they believed they fixed the
newest trouble, they again defined it as an acceptable risk and just kept monitoring the
problem. And as they recurrently observed the problem with no consequence they got to
the point that flying with the flaw was normal and acceptable. Of course, after the accident,
they were shocked and horrified as they saw what they had done (Vaughan, 2008, p. 74).
According to Albright (2017), what occurred with the Challenger launch decision was a
willingness to launch at colder and colder temperatures. The shuttle program was designed
around launch temperatures from 31°F to 99°F, even though the manufacturer of the O-rings
stated that a minimum launch temperature of 53°F was necessary to ensure that the O-rings did
not become too brittle and allow blow-by of the gasses. There was also concern about the
O-rings becoming cold-soaked overnight, further compromising their ability to seal properly. It
was determined that this concern had not been received by NASA top management.
On the morning of the launch of the Space Shuttle Challenger, the temperature was well
below 53°F. NASA pressured the manufacturer’s management team to allow the launch;
however, the manufacturer’s engineers refused. NASA made the decision to launch anyway
(Albright, 2017).
The normalization of deviance theory explains how an organization responsible for putting
humans into outer space and returning them safely can nevertheless be flawed. What this tells
me is that if this can happen to an organization as large and complex as NASA, it can happen to
any organization, which is why this theory is so important to this research into aviation safety.
In Julianne Mahler’s book, “Organizational Learning at NASA: The Challenger and
Columbia Accidents” (Mahler, 2009), she takes a close look at how an organization can be the
cause of an accident or failure. She describes how similar the Challenger accident is to the
accident of the Space Shuttle Columbia 17 years later. In both of these incidents, there was a
tremendous amount of pressure on NASA to launch even though it knew there was an elevated
risk from a known weakness. She describes NASA as not being a “learning organization”
because of this.
Mahler goes on to say, “…many of the factors contributing to the organizational failures
behind the Challenger accident were equally important in shaping the organizational outcome
that was the Columbia disaster. Our object is to understand how this could have happened”
(Mahler, 2009, p. 7).
In his book, “Managing the Risks of Organizational Accidents,” James Reason describes
Diane Vaughan’s research and published work regarding the Challenger accident as one of “the
most detailed and compelling analyses yet made of an organizational accident” (Reason, 1997).
He said that the Presidential Commission focused on the individual decision failures of middle
managers at NASA whose ultimate decisions authorized the deadly launch of the Challenger.
According to Vaughan, these managers were made to look evil for what they had done.
Case Study #3
New Mexico State Police
The final investigation that has been included in this research is the accident involving a New
Mexico State Police helicopter in 2009.
On June 9, 2009, about 2135 mountain daylight time, an Agusta S.p.A. A-109E helicopter,
N606SP, impacted terrain following visual flight rules (visual flight conditions without
need for instruments) flight into instrument meteorological conditions near Santa Fe, New
Mexico. The commercial pilot and one passenger were fatally injured; a highway patrol
officer who was acting as a spotter during the accident flight was seriously injured. The
entire aircraft was substantially damaged. The helicopter was registered to the New Mexico
Department of Public Safety and operated by the New Mexico State Police (NMSP) on a
public search and rescue mission under the provisions of 14 Code of Federal Regulations
Part 91 without a flight plan. The helicopter departed its home base at Santa Fe Municipal
Airport, Santa Fe, New Mexico, about 1850 in visual meteorological conditions;
instrument meteorological conditions prevailed when the helicopter departed the remote
landing site about 2132 (NTSB, 2011, p VII).
According to the final investigative report released by the National Transportation Safety
Board (2011), the sergeant and chief pilot of the New Mexico State Police responded to a call of
a lost hiker in the mountains 20 miles northeast of Santa Fe, New Mexico. The female hiker, who
was Japanese and had difficulty communicating in English, had become separated from her
boyfriend while hiking, become lost, and called 911 on her cell phone at approximately 4:45 pm.
She advised that she was not dressed for cold weather and that she was getting cold.
At approximately 5:15 pm a search and rescue effort was launched by the New Mexico State
Police who had jurisdiction for that area. It was determined that there were no roads that rescuers
could drive on and that a helicopter would be needed. An incident base was set up for the search
at a local ski resort that was determined to be approximately 4 miles from the location of the
hiker. At approximately 5:55 pm dispatch called the sergeant/chief pilot and advised him of the
situation and asked if he would be able to respond to the call. At first he said that it was too
windy up on the mountain, but he might be able to go up later that night if the winds calmed
down. A short while later, the sergeant/chief pilot called dispatch back and said the weather had
improved and he would be able to go up and take a look (National Transportation Safety Board,
2011).
According to the National Transportation Safety Board (2011), the sergeant/chief pilot was also
assigned as the press information officer for the agency and, as a result, was on call 24 hours a
day for those call-outs in addition to being on call 24 hours a day for the aviation unit. On this
particular day, the sergeant/chief pilot had already worked his full eight-hour shift and had
already flown three missions. He contacted the only other full-time helicopter pilot to see if he
could take this mission, but he was unavailable, so the sergeant/chief pilot took the mission. The
on-duty dispatcher who was coordinating this rescue was the wife of the sergeant/chief pilot.
The New Mexico State Police fly their missions with only one pilot and use a ground
officer/trooper as a spotter in the helicopter. The spotter is not specifically trained for this
assignment and is used only one mission at a time. When the trooper arrived at the hangar, the
sergeant/chief pilot was pre-flighting the helicopter for their mission. The trooper was told by the
sergeant/chief pilot that he would not need his police safety gear and could leave it in the hangar.
The trooper removed his uniform shirt, bullet-proof vest, and gun belt and left them there. The
sergeant/chief pilot helped the trooper secure his safety belts, and they departed the Santa Fe
Airport at 6:50 pm and made the approximately 10-minute flight up to the 11,400’ MSL (above
mean sea level) height of the mountain to start the search. At this point, clouds were starting to
form and winds were increasing (National Transportation Safety Board, 2011).
According to the National Transportation Safety Board (2011), the crew searched for
approximately one hour (almost 8 pm) before the hiker said to dispatch on her cell phone that the
helicopter had just flown past her. They narrowed their search and ultimately found her. They
found a ridgeline to land on that was approximately 1/2 mile uphill from her location at 11,600’
MSL and touched down at approximately 8:15 pm as the sun was setting and it was starting to
snow. The trooper asked dispatch if the hiker could walk to them and she first said she could.
After a short while it was determined that she would not be walking to them and that they would
have to go and get her. At approximately 8:30 pm the sergeant/chief pilot called dispatch, talked
with his wife, and told her that he would hike down to the lost hiker and bring her back. Since it
was starting to snow, he said that he needed to get moving quickly; otherwise, there would be a
need for two search and rescue operations. At approximately 9:00 pm the sergeant/chief pilot located
the missing hiker and realized that she could not walk due to an injury. He decided to carry her ½
mile uphill back to the helicopter due to the quickly deteriorating weather. There was no
moonlight, no starlight and it had begun snowing. During this time the spotter had called
dispatch concerned about the length of time the sergeant/chief pilot had been gone and said that
it was windy and there was a large cloudbank over them. At approximately 9:25 pm the spotter called dispatch and said that the sergeant/chief pilot had returned to the helicopter carrying the missing hiker and was clearly out of breath. At 9:27 pm the spotter called dispatch again and said they were about to take off. The sound of the helicopter engines starting could be heard in
the background.
According to the National Transportation Safety Board (2011), the helicopter they were
operating was an Agusta A109E. This is an all-weather, twin-engine helicopter capable of
operating in visual flight conditions and in instrument flight conditions. The sergeant/chief pilot
was instrument rated in airplanes but was not instrument rated in helicopters. Helicopter pilots flying for the NMSP were not required to be instrument rated; however, the airplane pilots were. The agency's reasoning was that you cannot rescue what you cannot see, meaning that most rescues were conducted in visual flight conditions.
Immediately upon lift off from the mountain, the Agusta helicopter carrying the
sergeant/chief pilot, spotter and rescued hiker entered clouds, snow and fog. The first radar return for the helicopter was at 9:32:48 and the last was at 9:35:25, approximately 2 ½ minutes of flight that, based on the radar return data, appears to have involved very erratic maneuvering. The spotter heard the sergeant/chief pilot curse and felt the helicopter pitch up violently, and he said the ride then became very wild. At 9:34:10 the sergeant/chief pilot radioed dispatch, where his wife was still on duty, and asked if she could hear him. She said that she could and he replied that they had struck the mountain and were going down. She asked him if they were alright and he replied “negative” (National Transportation Safety Board, 2011). Dispatch recorded rapid breathing for the next 39 seconds and then it stopped.
According to the National Transportation Safety Board (2011), the sergeant/chief pilot and
the rescued hiker were thrown from the wreckage. The spotter was seriously injured, but was
able to crawl to the hiker and found her deceased. He could hear the sergeant/chief pilot but
could not locate him in the dark and snowy conditions.
Table 12: Radar track map of the accident flight (Source: NTSB, 2011)
Note: Helicopter radar data are indicated by red dots connected by white lines. The map also shows the hiker's approximate location (A); the helicopter's likely landing zone, confirmed by the spotter (B); the main wreckage location (C); SAF (D); and the approximate location of the SAR IB (E). A compass showing north is located in the upper left corner.
The spotter spent the night in the wreckage of the helicopter waiting to be found. In the morning, he began trying to hike and crawl down the mountain, which was very difficult because he had a broken ankle and a broken back. He was located by ground rescuers at 11:50 am and eventually airlifted off the mountain at approximately 1:10 pm. Rescuers found the body of the sergeant/chief pilot a short distance from the wreckage (National Transportation Safety Board,
2011).
According to the National Transportation Safety Board (2011), earlier in the sergeant/chief pilot's career, prior to his promotion to sergeant and chief pilot, his then-sergeant was requested to fly a search and rescue mission in deteriorating weather that was very similar in nature to this rescue. That sergeant refused the mission because of the poor weather and, as a result, was demoted and removed from the air unit. This left a bitter taste and may have affected the accident pilot's decision-making process in future rescue missions.
The National Transportation Safety Board determines that the probable cause of this
accident was the pilot's decision to take off from a remote, mountainous landing site in
dark (moonless) night, windy, instrument meteorological conditions. Contributing to the
accident were an organizational culture that prioritized mission execution over
aviation safety and the pilot's fatigue, self-induced pressure to conduct the flight, and
situational stress. Also contributing to the accident were deficiencies in the NMSP aviation
section's safety-related policies, including lack of a requirement for a risk assessment at
any point during the mission; inadequate pilot staffing; lack of an effective fatigue
management program for pilots; and inadequate procedures and equipment to ensure
effective communication between airborne and ground personnel during search and rescue
missions (National Transportation Safety Board, 2011, p. 65).
NMSP Aviation Unit Policy Analysis
According to the National Transportation Safety Board (2011), at the time of the accident, the New Mexico State Police Aviation Unit's operational policies and procedures were covered in a nine-page document that also included several non-aviation-related policies (the written policy is included as an Appendix). The Unit did not have the following types of policies and procedures in place that are considered industry norms or standards:
▪ Aviation unit Safety Management System
▪ Published safety goals and objectives
▪ Designated unit safety officer
▪ Designated unit safety committee
▪ Structured prelaunch and midlaunch risk assessment protocols
▪ Formal system for communicating safety-related information other than oral communication
As a result of these lapses in policy and procedures, the unit operated in a sub-standard condition. As an example, there was no way of tracking the resolution of a safety-related issue after it had been reported. Likewise, it was not possible to verify that a preflight risk analysis was being performed on all missions because one was not required. There was no way to verify that current weather conditions had been checked at the airport or at the intended destination of the flight, or that the airspace they would be flying in had any temporary flight restrictions. The Unit defaulted to the minimum FAA requirement that the Pilot in Command of any flight is responsible for obtaining all relevant weather information for the intended flight. This simply left too many layers of risk unmanaged.
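To make concrete what a structured prelaunch risk assessment protocol can look like, the following is a minimal sketch only; the risk factors, point values, decision thresholds, and the assess_mission helper are hypothetical assumptions chosen for illustration and are not drawn from NMSP policy or the NTSB report.

# Hypothetical prelaunch flight risk assessment (FRAT-style) sketch in Python.
# All factors, point values, and thresholds are illustrative assumptions,
# not NMSP policy or NTSB guidance.

RISK_FACTORS = {
    "night_operation": 3,
    "mountainous_terrain": 3,
    "deteriorating_weather_forecast": 4,
    "pilot_duty_time_over_eight_hours": 4,
    "no_second_pilot_available": 2,
    "pilot_not_instrument_rated_in_type": 3,
    "survival_gear_not_on_board": 2,
}

def assess_mission(conditions):
    """Sum the points for each risk factor present and map the total to a
    launch-decision band."""
    score = sum(points for factor, points in RISK_FACTORS.items()
                if conditions.get(factor, False))
    if score <= 5:
        decision = "launch"
    elif score <= 10:
        decision = "launch only with supervisor approval"
    else:
        decision = "do not launch; escalate to alternate resources"
    return score, decision

# Example: conditions resembling the accident mission described above.
print(assess_mission({
    "night_operation": True,
    "mountainous_terrain": True,
    "deteriorating_weather_forecast": True,
    "pilot_duty_time_over_eight_hours": True,
}))  # (14, 'do not launch; escalate to alternate resources')

Even a simple scoring sheet of this kind gives a pilot, dispatcher or supervisor an auditable record that the risks of a mission were identified and weighed before launch, which is precisely what the NMSP policies did not require.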
Airborne Law Enforcement Association Standards
The Airborne Law Enforcement Association (ALEA) was founded in 1968 as a non-profit
association comprised of local, state, and other public aircraft operators engaged in law
enforcement activities. The association’s mission is to “support, promote and advance the safe
and effective utilization of aircraft by governmental agencies in support of public safety missions
through training, networking, advocacy and educational programs.” (National Transportation Safety Board, 2011, p. 37). At the time of the accident, ALEA records indicated about 3,500 member agencies in the organization. Although ALEA records showed that none of the NMSP section pilots were members of the ALEA at the time of the accident (nor was the NMSP a member organization), the current aviation section pilots are all members.
The ALEA makes multiple organizational and safety-related training courses, events and activities available to its members. It enables networking amongst aviation units worldwide so that units of all sizes can share what they have learned. The association holds an annual convention at which its educational classes are presented, covering topics such as starting an aviation unit, Safety Management Systems, the Aviation Unit Manager's Course, Unit Safety Officer and more. It also provides an accreditation service for aviation units, which includes an on-site review and inspection of the aircraft and facilities and a thorough review of the unit's policies and procedures against ALEA's established standards.
Organizational Analysis
According to the National Transportation Safety Board (2011), if the NMSP had had a preflight risk analysis program in place for its pilots, the sergeant/chief pilot would have looked at the deteriorating weather in the mountains and, based on that and other factors, may not have launched on the mission. The Board also stated that upper management of an organization has control over the personnel and resources that expose the flight crews to risk. “The safety management approach places a responsibility on senior management to develop a formal safety policy, establish safety objectives, develop standards of safety performance, and take the lead in fostering an organizational safety culture… Research has shown that this type of management involvement plays a key role in the success of organizational safety programs” (p. 55).
The National Transportation Safety Board (2011) believed that the NMSP as an organization did not have a commitment to safety with regard to its aviation program. The Department of Public Safety (DPS) cabinet secretary was the highest level of management in charge of the unit. While he was the person who ultimately had the most influence over the unit, he did not take responsibility for its safety performance and did not ensure that an actual safety program was in place. Although he felt he was effectively managing the program, in 2006, when a former sergeant/chief pilot refused to send two inexperienced pilots on a challenging rescue mission because of their lack of technical competence, as referenced earlier, the cabinet secretary removed that sergeant/chief pilot from the unit. The cabinet secretary would demand explanations when a pilot did not accept a mission and would complain if the New Mexico National Guard took a mission that one of his pilots had rejected. “The NTSB believes that this pattern of behavior sent a message to NMSP pilots that the highest-ranking official in the DPS prioritized mission completion over flight safety and that he was closely monitoring their decisions” (National Transportation Safety Board, 2011, p. 55).
Conclusions and Findings of Case Study #3
The National Transportation Safety Board (2011, p. 63) concluded its investigation and report
with the following findings:
1. “The investigation determined that the accident helicopter was properly certificated and
maintained in accordance with New Mexico State Police policies and the manufacturer’s
recommended maintenance program. There was no evidence of any preimpact structural,
engine, or system failures.
2. The investigation found no evidence that the pilot had any preexisting medical or
toxicological condition that adversely affected his performance during the accident flight.
3. Post accident examination of the helicopter’s seats and restraint systems revealed no
evidence of preimpact inadequacies. The pilot and the hiker were ejected from the helicopter
when their seats and restraint systems were subjected to forces beyond those for which they
were certificated during the helicopter’s roll down the steep, rocky mountainside.
4. Neither the airborne nor the ground search and rescue (SAR) personnel could have reached
the pilot before he died of exposure given the adverse weather conditions, which precluded a
prompt airborne SAR response and hindered the ground SAR teams’ progress; the darkness
and the rugged terrain in which the ground SAR teams were responding; the distance they
had to travel; and the seriousness of the pilot’s injuries.
5. When the pilot made the decision to launch, the weather and lighting conditions, even at
higher elevations, did not preclude the mission; however, after accepting a search and rescue
mission involving flight at high altitudes over mountainous terrain, with darkness
approaching and with a deteriorating weather forecast, the pilot should have taken steps to
mitigate the potential risks involved, for example, by bringing cold-weather survival gear and
ensuring that night vision goggles were on board and readily available for the mission.
6. The pilot exhibited poor decision-making when he chose to take off from a relatively secure
landing site at night and attempt VFR (visual flight rules) flight in adverse weather
conditions.
7. The pilot decided to take off from the remote landing site, despite mounting evidence
indicating that the deteriorating weather made an immediate return to Santa Fe inadvisable,
because his fatigue, self-induced pressure to complete the mission, and situational stress
distracted him from identifying and evaluating alternative courses of action.
8. Although there was no evidence of any direct New Mexico State Police or Department of
Public Safety management pressure on the pilot during the accident mission, there was
evidence of management actions that emphasized accepting all missions, without adequate
regard for conditions, which was not consistent with a safety-focused organizational safety
culture, as emphasized in current safety management system guidance.
9. If operators of public aircraft implemented structured, task-specific risk assessment
management programs, their pilots would be more likely to thoroughly identify, and make
efforts to mitigate, the potential risks associated with a mission.
10. An effective pilot flight and duty time program would address not only maximum flight and
duty times but would also contain requirements for minimum contiguous (uninterrupted)
ensured rest periods to reduce pilot fatigue; the New Mexico State Police aviation section’s
flight and duty time policies did not ensure minimum contiguous rest periods for its pilots.
11. At the time of the accident, the New Mexico State Police aviation section staffing level was
insufficient to allow helicopter operations 24 hours a day, 7 days a week without creating an
unacceptable risk of pilot fatigue.
12. New Mexico State Police (NMSP) personnel did not regularly follow the search and rescue
(SAR) plan, and NMSP pilots, including the accident pilot, did not routinely communicate
directly with the SAR commanders during SAR efforts, which reduced the safety and
effectiveness of SAR missions.
13. Because the accident pilot did not have a helicopter instrument rating, experience in
helicopter instrument operations, or training specific to inadvertent helicopter instrument
meteorological condition encounters, he was not prepared to react appropriately to the loss of
visual references that he encountered shortly after takeoff.
14. The 406-megahertz (MHz) emergency locator transmitter (ELT) signals received from the
accident helicopter’s 406-MHz ELT were primarily responsible for focusing searchers on
areas near the accident site and for eventually locating both the survivor and the helicopter
wreckage.
15. Although it is unlikely that the use of flight-tracking systems would have resulted in a
different outcome in this case, the use of such systems, which provide real-time information
regarding an agency’s assets, could shorten search times for downed public aircraft and their
occupants.” (National Transportation Safety Board, 2011, p 63)
NTSB Recommendations
As a result of the investigation, the NTSB made a number of recommendations, not only to the Governor of New Mexico but also to the Airborne Law Enforcement Association, the National Association of State Aviation Officials, and the International Association of Chiefs of Police.
The following were directly relevant to the New Mexico State Police.
▪ “Require the New Mexico Department of Public Safety to bring its aviation section
policies and operations into conformance with industry standards, such as those
established by the Airborne Law Enforcement Association.
▪ Require the New Mexico Department of Public Safety to develop and implement a
comprehensive fatigue management program for the New Mexico State Police (NMSP)
aviation section pilots that, at a minimum, requires NMSP to provide its pilots with
protected rest periods and defines pilot rest and ensures adequate pilot staffing levels and
aircraft hours of availability consistent with the pilot rest periods.
▪ Encourage your members to conduct an independent review and evaluation of their
policies and procedures and make changes as needed to align those policies and
procedures with safety standards, procedures and guidelines, such as those outlined in
Airborne Law Enforcement Association guidance” (NTSB, 2011, p 66).
Summary
Chapter four included a document analysis stemming from the official investigative reports
documenting the accidents of all three case studies involved in this research project. This also
included some additional review of the professional literature specific to certain case studies. The
following chapter will include a complete analysis of the data for this project.
Chapter Five: Data Analysis
Introduction
Chapter five includes an analysis of the data collected from the three case studies involved in
this research project. The data was coded and emerging themes and patterns were discovered.
The data was analyzed a second time and placed into thirteen sub-themes to further locate
similarities.
This research project analyzed three separate aviation accidents representing three
distinctively different types of operations: one from a commercial airline (FAR Part 121), one
from the National Aeronautics and Space Administration representing a spacecraft, and the final
accident from a state law enforcement agency under FAR Part 91 regulatory oversight from the
Federal Aviation Administration (FAA). These operationally distinct types of aviation organizations (airline, spacecraft, and FAR Part 91) were chosen specifically to examine three different sizes of organizations, of varying complexity, involved in accidents that may have been influenced by organizational culture.
Thousands of pages of official accident reports were reviewed as part of this project. The
data analysis focused on the causes, findings and recommendations provided by the National Transportation Safety Board (NTSB, 2014, 2011) for the Asiana #214 accident and the New Mexico State Police accident, respectively, and on the cause, findings and recommendations of the Rogers Presidential Commission's report on the Space Shuttle Challenger (1986). All three of these investigative reports were completed using multiple sources of information including cockpit voice recorders, aircraft data recorders, witness interviews,
expert interviews, video recordings, post-mortem investigations and separate reports from
multiple field experts.
The formats of the official reports generated by the National Transportation Safety Board for the Asiana #214 accident, Case Study #1, and the New Mexico State Police helicopter accident, Case Study #3, and of the final report issued by the Rogers Commission for the Space Shuttle Challenger accident, Case Study #2, are structurally the same. All three investigations include a
probable cause (single cause for the Space Shuttle and multiple causes for the Asiana and NMSP
accidents) and base these official causes on their findings from the investigations. The reporting
agencies also give recommendations to the organizations involved to assist them in improving
their operational safety.
The data analysis conducted for this project included the initial coding of the causes, findings
and recommendations that were then placed into four initial categories. The emerging patterns
and themes were translated, and sub-themes and sub-categories were created based on a Constant
Comparative Analysis (Glaser & Strauss, 1967). The causes, findings and recommendations
were further analyzed and assigned to one of thirteen sub-theme categories. All thirteen sub-theme categories are considered parts of the organizational culture, and the individual rankings show how they compare against one another.
Asiana #214: 5 Probable Causes, 30 Findings, 27 Recommendations
Challenger: 1 Probable Cause, 4 Findings, 12 Recommendations
NMSP: 10 Probable Causes, 15 Findings, 15 Recommendations
The National Transportation Safety Board's final report for the Asiana #214 accident listed 30 findings that led to 5 probable causes and gave 27 recommendations going forward. The Rogers Presidential Commission official report for the Space Shuttle Challenger accident listed 4 findings that led to 1 probable cause and gave 12 recommendations. The National Transportation Safety Board's final report on the New Mexico State Police helicopter accident listed 15 findings that led to 10 probable causes and gave 15 recommendations. An individual finding, cause, or recommendation can be represented by more than one category. As an example, Asiana 214's 5 causes represented 9 initial category codings: AC-1 (Asiana 214 Cause #1), the complexities of the autothrottle and autopilot flight director system that were inadequately described in Boeing's manuals and Asiana's pilot training, was attributed to both X3 (organization and culture) and X4 (manufacturer).
Analysis of Data
Each of the causes, findings and recommendations of all three accidents was coded individually, and each coded line was assigned to one of four different categories. After the initial individual coding, a Constant Comparative Analysis was conducted collectively to identify emerging themes and patterns:
X1: Flight crew
X2: Weather
X3: Organization and Culture
X4: Manufacturer
After the initial analysis into the above four categories, the same data was assigned to one of thirteen sub-theme categories to further understand the level of influence that organizational culture plays (a minimal tallying sketch follows the sub-theme list below). These 13 sub-theme categories represent the outcome of the Constant Comparative Analysis based on the emerging themes and patterns that were found:
Y1: Organizational policies & procedures
(example-no policy for survival gear)
Y2: Organizational training
(example-training was contrary to industry standard)
Y3: Organizational pressure
(example-company pressure to complete mission)
Y4: Organizational internal communication
(example-company lacked safety boards)
Y5: Supervisory influence
(example-lack of supervision)
Y6: Management influence
(example-management without proper training)
Y7: Flight crew internal pressure to complete task/mission
(example-peer pressure)
Y8: Flight crew inadequate training
(example-crew not properly trained)
Y9: Flight crew perceived consequences for action/inaction
(example-internal pressure)
Y10: Flight crew individual responsibility and risk perception
(example-values)
Y11: Flight deck communication
(example-copilot afraid to speak up)
Y12: Flight crew pride/ego
(crew’s internal pressure to produce results)
Y13: Flight crew error
(simple human error)
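As a minimal illustration of how such coded assignments can be tallied into the category counts and percentages reported in the overviews that follow, consider the sketch below. The Asiana 214 cause codings are taken from Appendix A (Chart 13); the dictionary layout, the tally helper, and the simple nearest-integer rounding are illustrative assumptions and may differ by a point from the rounding used in the manuscript's tables.

from collections import Counter

# Illustrative tally of coded causes into the four X categories.
# The Asiana 214 cause codings follow Appendix A (Chart 13); the data
# layout and helper name are assumptions made for illustration.
coded_causes = {
    "AC1": ["X3", "X4"],
    "AC2": ["X1"],
    "AC3": ["X1", "X3"],
    "AC4": ["X1", "X3"],
    "AC5": ["X1", "X3"],
}

def tally(coded):
    """Count every category assignment and express each count as a
    percentage of all assignments (one cause can carry several codes)."""
    counts = Counter(code for codes in coded.values() for code in codes)
    total = sum(counts.values())
    return {code: (n, round(100 * n / total)) for code, n in counts.items()}

print(tally(coded_causes))
# {'X3': (4, 44), 'X4': (1, 11), 'X1': (4, 44)}

The same tally, applied to the causes, findings and recommendations of all three cases and then to the thirteen Y sub-themes, yields the distributions summarized below.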
Overview of Causes
The 16 individual causes from the three accidents produced 23 category codings, which are presented from the highest percentage represented to the lowest. It is important to note that each cause could represent more than one factor; Asiana 214's 5 causes, for example, represented 9 different initial category codings. This is the reason the 16 initial causes are represented 23 times.
X3 / Organization 52%
X1 / Flight crew 35%
X4 / Manufacturer 9%
X2 / Weather 4%
There were 48 emerging individual sub-theme causes representing the initial 16 causes that
were categorized and coded into 13 sub-theme categories. These 13 sub-theme categories
represent the outcome of the Constant Comparative Analysis based on the emerging themes and
patterns. The top six were:
Y1 / Organizational policies & procedures 20%
Y3 / Organizational pressure 15%
Y4 / Organizational communication 11%
Y7 / Flight crew internal pressure 10%
Y2 / Organizational training 8%
Y6 / Management 8%
Causes (chart data): X1 Flight crew: 8 (35%); X2 Weather: 1 (4%); X3 Organization: 12 (52%); X4 Manufacturer: 2 (9%).
Organizational Culture Causes (chart data): Y1: 9 (20%); Y2: 4 (8%); Y3: 7 (15%); Y4: 5 (11%); Y5: 2 (4%); Y6: 4 (8%); Y7: 5 (10%); Y8: 4 (8%); Y9: 3 (6%); Y10: 1 (2%); Y11: 1 (2%); Y12: 1 (2%); Y13: 2 (4%).
Overview of Findings
There were 51 individual findings that were categorized and coded from highest to lowest:
X3 / Organization 45%
X1 / Flight crew 34%
X4 / Manufacturer 21%
X2 / Weather 0%
There were 130 emerging individual sub-theme Findings that were categorized and coded
into 13 sub-theme categories. These 13 sub-theme categories represent the outcome of the
Constant Comparative Analysis based on the emerging themes and patterns. The top six were:
Y1 / Organizational policies & procedures 22%
Y13 / Flight crew error 13%
Y3 / Organizational pressure 10%
Y8 / Flight crew inadequate training 10%
Y11 / Flight deck communication 8%
Y2 / Organizational training 8%
Findings (chart data): X1 Flight crew: 17 (34%); X2 Weather: 0 (0%); X3 Organization: 23 (45%); X4 Manufacturer: 11 (21%).
Organizational Culture Findings (chart data): Y1: 29 (22%); Y2: 10 (8%); Y3: 12 (10%); Y4: 5 (4%); Y5: 3 (2%); Y6: 10 (8%); Y7: 8 (6%); Y8: 13 (10%); Y9: 5 (4%); Y10: 3 (2%); Y11: 11 (8%); Y12: 4 (3%); Y13: 17 (13%).
Overview of Recommendations
There were 66 individual recommendations that were categorized and coded from highest to lowest:
X3 / Organization 64%
X4 / Manufacturer 33%
X1 / Flight crew 3%
X2 / Weather 0%
There were 114 emerging individual sub-theme recommendations that were categorized and
coded into 13 sub-theme categories. These 13 sub-theme categories represent the outcome of the
Constant Comparative Analysis based on the emerging themes and patterns. The top four
categories were:
Y1 / Organizational policies & procedures 46%
Y2 / Organizational training 30%
Y6 / Management 23%
Y4 / Organizational communication 1%
Recommendations (chart data): X1 Flight crew: 0 (3%); X2 Weather: 0 (0%); X3 Organization: 42 (64%); X4 Manufacturer: 22 (33%).
Organizational Culture Recommendations (chart data): Y1: 53 (46%); Y2: 35 (30%); Y3: 0 (0%); Y4: 1 (1%); Y5: 0 (0%); Y6: 25 (23%); Y7: 0 (0%); Y8: 0 (0%); Y9: 0 (0%); Y10: 0 (0%); Y11: 0 (0%); Y12: 0 (0%); Y13: 0 (0%).
Summary
Chapter five included an analysis of the data used from the three case studies. This included
an overview of the findings, causes and recommendations. The associated coded data analysis
and supporting documents are included in the Appendix of this manuscript. The following
chapter will include the discussion, implications, and conclusions of this project.
Chapter Six: Discussion, Implications & Conclusion
Introduction
Chapter six includes the discussion, implications and conclusion of this research project. The
original research questions are discussed and answered. The contribution to practice as a result of
this project is identified. The chapter concludes with an example of a strong organizational
culture being demonstrated on the flight deck.
Discussion
This multiple case study analysis was conducted in order to answer the following two research questions:
RQ1: What causes the high level of human error in aviation accidents?
RQ2: Is organizational culture a causational variable, and if so, to what extent?
This research project started with a review of the professional literature. The answer to the first question, what is causing the high level of human error, is most clearly explained by the research conducted by Shappell and Wiegmann (2001). They classified human error into four distinct categories:
1. Organizational influences
2. Unsafe supervision (i.e., middle management)
3. Preconditions for unsafe acts (i.e., a loose culture)
4. The unsafe acts of operators (e.g., aircrew, maintainers, air traffic controllers)
(Shappell, et al., 2007)
Helmreich (2000) described five different types or levels of human error:
1. Intentional non-compliance errors which include conscious violations of SOPs
2. Procedural errors where intentions were correct but errors were made
3. Communication errors where incorrect information is relayed
4. Proficiency errors where there is a lack of technical competence
5. Operational decision errors
These two lists tell us where human errors could be originating. Helmreich's (2000) research on LOSA audits (Line Operations Safety Audits) determined that although the most frequently occurring category (intentional non-compliance) caused the least death or major damage, the least frequent categories (proficiency and decision errors) caused the most damage and deaths. What this tells us is that we need to be careful where we spend our error-reduction efforts in aviation. Something within organizational cultures where flight crews regularly do not follow their SOPs, or even the FARs, is telling the flight crews that these violations are acceptable behavior; they just have not been involved in an accident yet. The organizational culture either drives adherence to the SOPs and FARs or allows flight crews to bend the rules.
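As a purely illustrative sketch of the frequency-versus-consequence comparison just described, the toy Python example below tags invented error observations with Helmreich's five error types and a made-up severity scale; none of these records are LOSA data.

from collections import defaultdict

# Illustrative only: invented error observations tagged with Helmreich's
# (2000) five error types and a made-up severity scale
# (0 = inconsequential, 3 = accident). These are not LOSA data.
observations = [
    ("intentional non-compliance", 0), ("intentional non-compliance", 0),
    ("intentional non-compliance", 1), ("procedural", 1),
    ("communication", 1), ("proficiency", 3), ("operational decision", 3),
]

frequency = defaultdict(int)
worst_outcome = defaultdict(int)
for error_type, severity in observations:
    frequency[error_type] += 1
    worst_outcome[error_type] = max(worst_outcome[error_type], severity)

for error_type in frequency:
    print(f"{error_type}: count={frequency[error_type]}, "
          f"worst outcome={worst_outcome[error_type]}")
# In this toy data set the most frequent type (intentional non-compliance)
# has the mildest outcomes, while the rarer proficiency and decision errors
# carry the worst ones, the same pattern Helmreich (2000) reports from LOSA audits.

The point of aggregating frequency and consequence separately is that an organization watching only how often a rule is bent will miss the rarer, far more damaging proficiency and decision errors that its culture is quietly tolerating.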
The research conducted for this project showed there were 23 individual cause codings shared between the three case study accidents. The Asiana 214 case had 9 codings, of which 4 (44%) were the result of flight crew errors, 4 (44%) were the result of organizational issues, and 1 (12%) was the result of the manufacturer. The Space Shuttle Challenger had 1 (100%) cause, and that was due to errors by the manufacturer and NASA management. The New Mexico State Police case had 13 codings, of which 4 (30%) were the result of flight crew error, 1 (8%) was the result of weather, and 8 (62%) were the result of organizational issues. Looking at these three cases collectively, they attribute 8 (35%) causes to flight crews, 1 (4%) cause to weather, 12 (52%) causes to organizational issues, and 2 (9%) causes to the manufacturer. Based on the data from this project, organizational issues cause more accidents than flight crew errors do. In many cases, the latent conditions are just lying in wait for an opportunity to strike.
Latent conditions are to technological organizations what resident pathogens are to the
human body. Like pathogens, latent conditions – such as poor design, gaps in supervision,
undetected manufacturing defects or maintenance failures, unworkable procedures, clumsy
automation, shortfalls in training, less than adequate tools and equipment—may be present
for many years before they combine with local circumstances and active failures to
penetrate the system’s many layers of defenses (Reason, 1997, p. 10).
The answer to the second research question, whether organizational culture is a causational variable in these aviation accidents, is quite clear. All aspects of the research conducted for this project, and the research conducted by others outside of this project, align organizational culture with organizational safety. The two go hand in hand. This seems quite simple, but it is not. The researcher would argue that it is the smallest and sometimes simplest
actions or inactions that make the difference. Something is telling these flight crews that it is
okay to continue an unstabilized approach. Something is telling these flight crews that it is okay
to taxi the airplane faster than what is allowed in their SOPs. Something is telling these flight
crews that it is okay to fly too close to a thunderstorm or to push the limits for a crosswind
landing.
In the research conducted on the Broken Windows theory for this project, it was demonstrated that when communities ignored small quality-of-life issues, those issues had a propensity to grow into larger criminal problems that not only affected residents' quality of life but caused many of them to become victims of violent crimes. The same idea applies to organizational culture. When we ignore the simple things, they can eventually become bigger problems. Dr. Reason's earlier quotation on latent conditions accurately describes the latent conditions that were present in all three of the case studies used for this project. The organizations failed the flight crews and passengers. The industry has focused on the specific events more so than on the latent conditions.
The natural follow-up question then becomes: how can we refocus our efforts on changing organizational culture? Volumes of research have been conducted and many theories established on organizational change and organizational development. The contribution to practice that the researcher wants to focus on is creating a meaningful curriculum centered on leadership and organizational culture and on how those two elements can improve an organization's culture and safety, which is detailed in the following section.
Implications
The University of Southern California is home to the world-renowned Aviation Safety and Security Program, which is operated through the Viterbi School of Engineering. This program was established in 1952 as the first aviation safety program at a major research university. It offers 20 courses to over 1,000 students per year from all over the world and has provided this important training to over 21,000 aviation professionals thus far. These students come from all major U.S. airlines and manufacturers, many international airlines, and U.S. government agencies including the FAA and the NTSB, among many others.
The USC Aviation Safety and Security Program offers a 2-day course on Safety Management Systems for Managers four times per year. This course is an introduction to Safety Management Systems geared toward managers of aviation organizations. The full Safety Management Systems course is 2 weeks in length. Safety culture and climate are included in the current course outline, which focuses on safety culture within the Safety Management System. Based on the findings from this research project, the researcher proposes that a portion of this course focus on organizational culture and leadership. This would allow those at the management levels of aviation organizations to learn the importance of culture within their own operations, and it could be accomplished in approximately one hour of classroom instruction time. This simple addition would add yet another effective layer of safety that has been under-appreciated for some time and is truly needed. A sample curriculum is included in the Appendix.
Conclusion
In conclusion, the following is an example of what a healthy operational culture looks like on the flight deck. While researching similarities between aviation and medicine, the researcher found a short video of a retired U.S. Air Force colonel lecturing to the medical field about lessons learned on the flight deck regarding power distance and organizational culture.
Colonel John Nance, USAF, ret., spoke about preparing for a flight from Seattle to
Anchorage. He was the aircraft commander, his co-pilot was another colonel, and they had both flown together since the Vietnam War. That day they would be flying one of the largest transport-category jets the Air Force flew, with a crew of 30 onboard. Colonel
Nance gathered the crew together for his standard pre-flight briefing that he conducted every
time he flew. Colonel Nance told everyone that he expected them to speak up if they saw
anything that concerned them. There was a brand new airman who was 18 years old and this was
going to be his first flight. Colonel Nance told him directly that he expected the same from him.
As they were taxiing for takeoff, they received their departure clearance, which instructed them which direction to fly and restricted them to specific altitudes. This was to ensure separation from other aircraft. Colonel Nance and his co-pilot both believed that they were cleared to climb
to 15,000 feet. After takeoff and during their climb to 15,000 feet, both pilots heard a soft, meek
voice over the intercom counting out their altitude as they passed them. When they were about to
pass 14,000 feet, the same meek voice said “we are only cleared to 14,000 feet.” Both colonels
looked at each other and Colonel Nance pushed the controls forward to immediately stop their
climb. Just after stopping at 14,000 feet, a Boeing 747 with over 300 people onboard went
directly over their heads at 15,000 feet inbound to Seattle. The meek voice was that of the 18-year-old airman, who had just saved the lives of countless people. Both colonels had thought they
were cleared to 15,000 feet, but they had been cleared to 14,000 feet because of the 747. Later
that day they looked at radar tracks that showed they would have collided with the 747 had they
not stopped their climb (Nance, 2014).
This powerful example shows us the importance of taking power distance and culture seriously.
The same outcomes are possible when organizational culture is made a priority and practiced on a
consistent basis.
Bibliography
Alavosius, M., Ramona, H., Anbro, S., Burleigh, K., & Hebein, C. (2017). Leadership and crew
resource management in high-reliability organizations: a competency framework for
measuring behaviors. Journal of Organizational Behavior Management. 1-29.
10.1080/01608061.2017.1325825
Albright, J. (2017, January). Normalization of deviance: SOP’s are not a suggestion. Business
and Commercial Aviation. Pg. 40-43.
Bennett, A., Hellier, E., & Weyman, A. (2015). Characterising (sp) influences on safety culture
in military aviation: a methodologically grounded approach. Institute of Naval Medicine.
University of Plymouth. University of Bath, UK.
Boeing Commercial Aircraft Group, (BCAG) (1993). Crew Factor Accidents: Regional
Perspective, Proceedings of the 22nd Technical Conference of the International Air
Transport Association (IATA) on Human Factors in Aviation. Montreal, Canada (4-8
October 1993), Montreal, Canada: IATA, 45-61.
Creswell, J.W. (1998). Qualitative inquiry and research design: Choosing among five traditions.
Thousand Oaks, CA: Sage
Creswell, J. W. (2003), Research design: Qualitative, quantitative and mixed methods
approaches (2nd ed.). Thousand Oaks, CA: SAGE Publications.
Ellingstad, V. & Mayer, D. (1993). Assessment of human error from transportation accident
statistics; FAA workshop on flight crew human factors.
Federal Aviation Administration (FAA). 2010 Administrator’s Fact Book.
http://www.faa.gov/about/office_org/headquarters_offices/aba/admin_factbook/
[accessed August 14, 2011].
Federal Aviation Administration (FAA). 2018 Private Pilot Airplane, Airman Certification
Standards, June 2018. Flight Standards Service, Washington, DC.
Flightaware. (2017). Flight Track log, EVA28, 23-Jul-2013, TPE/RCTP/KSFO). Retrieved from
www.flightaware.com on 04/18/2017
French, J., Raven, B. (1959). The bases of social power. University of Michigan, Institute for
Social Research.
Gladwell, M. (2008). Outliers: The story of success. New York: Little,
Brown.
Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative
research. New York: Aldine De Gruyter.
Helmreich, R.L. (2000). Culture and Error in Space. Implications from analog environments.
Aviation, Space, and Environmental Medicine, 71(9-11), 133-139.
Helmreich, R.L. (2000). On error management: lessons from aviation. BMJ (Clinical research
ed.), 320(7237), 781-785.
Helmreich, R. L. & Merritt, A. (1998). Culture at work in aviation and medicine: National, organizational, and professional influences. Brookfield, VT: Ashgate.
Hofstede, G. & Bond, M. H. (1988). The Confucius connection: from cultural roots to economic
growth. Organizational Dynamics, 16, 4-21.
Hofstede, G. & Hofstede, G. J. (2005). Cultures and organizations: Software of the mind (Rev.
2nd ed.). New York: McGraw Hill.
Hofstede, G. (1980). Culture’s consequences: International differences in work-related
values. Beverly Hills, CA: Sage.
Hofstede, G., (2001). Culture’s consequences: comparing values, behaviors, institutions, and
organizations across nations (2nd ed.). Thousand Oaks, CA: Sage.
Howard, B. (2013). Could Malcolm Gladwell's theory of cockpit culture apply to the Asiana crash?
National Geographic. Published July 11, 2013. Retrieved from
www.nationalgeographic.com/news/2013/7/130709-asiana-flight-214-crash
Kelling, G., Wilson, J., (1982, March). Broken windows: The police and neighborhood safety.
The Atlantic Monthly. 211:29-38.
Kim, C. & Song, B. (2016). An empirical study on safety culture in aviation maintenance
organizations.
Klein, R., Bigley, G., & Roberts, K. (1995). Organizational culture in high reliability
organizations: an extension. Human Relations, 48(7), 771–793.
https://doi.org/10.1177/001872679504800703
Mahler, J. & Casamayou, M. (2009). Organizational learning at NASA: the Challenger and Columbia accidents. Washington: Georgetown University Press. Retrieved April 24,
2015, from Project MUSE database.
MEDA Investigative Process (2007). Boeing Aero Magazine, 2nd quarter, 2007.
Medvedev, G. (1991). The truth about Chernobyl. New York: Basic Books.
Merriam, S. B. (1998), Qualitative research and case study applications in education. San
Francisco, CA: Jossey-Bass.
Meshkati, N. (1997, April). Human performance, organizational factors and safety culture.
Paper presented at the NTSB National Summit on transportation safety, Washington,
D.C.
Nakaso, D., & Carey, P. (2013). Mercury News.
Nance, J. (2014). Near Miss Story Lecture. May 5, 2014. Downloaded
www.youtube.com/watch?v=hW7LGxCLauo
National Transportation Safety Board. (2014) Descent below visual glidepath and impact with
seawall, Asiana Airlines Flight 214, Boeing 777-200ER, HL 7742, San Francisco, California, July
6, 2013. Washington, DC.
Northouse, P. (2013). Leadership: theory and practice. Los Angeles: Sage Publications.
Northouse, P. (2010). Leadership: theory and practice (5th ed.). Thousand Oaks, CA: SAGE Publications.
NTSB. (1992). National Transportation Safety Board final crash report on Continental Express
Flight 2574. In-flight structural breakup. NTSB. AAR-92/04
NTSB. (2000). National Transportation Safety Board final crash report on Korean Airlines
Flight #801, CFIT. NTSB.
NTSB. (2011). National Transportation Safety Board final crash report on New Mexico State
Police Helicopter. NTSB.
NTSB (2019). Aircraft crash statistics. Downloaded 2019
NTSB. (2020). National Transportation Safety Board aviation accident statistics, 1999-2018
Preliminary Aviation Statistics. NTSB.
O’Neil, P., Kriz, K. (2013). Do high-reliability systems have lower error rates? Evidence from
commercial aviation accidents. Public Administration Review. Vol. 73, Iss. 4, pp. 601-
612. The American Society for Public Administration.
Pfeffer, J. (1993). Managing with power: politics and influence in organizations. Boston, MA:
Harvard Business Review Press.
Pidgeon, N., O’Leary, M. (1994). Organizational safety culture: implications for aviation
practice. Aviation Psychology in Practice. Avebury Technical Press. Aldershot.
Reason, J. (1990). Human error. Cambridge: Cambridge University Press.
Reason, J., (1997). Managing the risks of organizational accidents. Burlington, VT: Ashgate.
Reason, J. (1998). Achieving a safe culture: theory and practice. Work & Stress; VOL 12, N
Reason, J. (2000). Human error: models and management. BMJ : British Medical Journal,
320(7237), 768–770.
Rogers Commission (1986). Report on Space Shuttle Challenger. Gpo.gov
Rosenkrans, W. (2015). AeroSafety World, Flight Safety Foundation, Normalization of
Deviance, June, 2015, retrieved from https://flightsafety.org/asw-article/normalization-
of-deviance/
Schein, E. (1992), Organizational culture and leadership. San Francisco, CA. Jossey-Bass.
Shappell, S., Detwiler, C., Holcomb, K., Hackworth, C., Boquet, A., & Wiegmann, D. A. (2007).
Human error and commercial aviation accidents: an analysis using the human factors
analysis and classification system. Human Factors, 49(2), 227–242.
https://doi.org/10.1518/001872007X312469
Shappell, S., & Wiegmann, D. (1996). U.S. naval aviation mishaps 1977-1992: differences between
single- and dual-piloted aircraft. Aviation, Space and Environmental Medicine, 67, 65-9.
Stake, R. E. (1995). The art of case study research. London: Sage Publications Ltd.
Vaughan, D. (2008). Interview: Diane Vaughan. Consulting Newsline, May 2008. Interviewed
by Bertrand Villeret.
Von Thaden, T.L., Gibbons, A. (2008). The Safety Culture Indicator Scale Measurement System.
University of Illinois Human Factors Division Technical Report HFD-06-09. Prepared
for the Federal Aviation Administration, contract DTFA 01-G-015.
Von Thaden, T.L., Li, Y., Jiang L., and Dong L. (2006). Validating the commercial aviation
safety survey in the Chinese context. University of Illinois Human Factors Division
Technical.
Wiegmann, Zhang, von Thaden, Sharma, & Mitchell (2002). A synthesis of safety culture and
safety climate research. University of Illinois Human Factors Division Technical Report
ARL-02-3/FAA-02-2. Prepared for the Federal Aviation Administration. Contract DTFA
01-G-015.
Welsh, B., Braga, A., & Bruinsma, G. (2015). Reimagining broken windows: from theory to policy. Journal of Research in Crime and Delinquency, 52(4), 447-463. https://doi.org/10.1177/0022427815581399
Wiegmann, D., Shappell, S. (2001). A human error analysis of commercial aviation accidents
using the human factors analysis and classification system (HFACS). University of
Illinois Human Factors Division. Prepared for the Federal Aviation Administration.
Contract 99-G-006.
Wiegmann, D.A. and Shappell S.A. (2003). A human error approach to aviation accident
analysis: The human factors analysis and classification system. Burlington, VT: Ashgate.
Wright, O. (1913). American Aviation Journal, Flying and the Aero Club of America. December,
1913 issue.
Yin, R. K. (1994), Discovering the future of the case study method in evaluation research. Eval
Pract. 1994; 15:283-290
Yin, R. K. (2014). Case study research design and methods (5th ed.). Thousand Oaks, CA: Sage Publications.
Yin, R. K. (2003). Case study research: design and methods (3rd ed.). Thousand Oaks, CA: Sage Publications.
Appendix A
Coded Data Analysis – Causes / Chart 13
Categories & Themes
X1 = Flight Crew
X2 = Weather
X3 = Organization
X4 = Manufacturer
CAUSES (11)
Asiana 214 (A): AC1: X3, X4; AC2: X1; AC3: X1, X3; AC4: X1, X3; AC5: X1, X3
Challenger (B): BC1: X4
NMSP (C): CC1: X1, X2; CC2: X3; CC3: X1, X3; CC4: X1; CC5: X1, X3; CC6: X3; CC7: X3; CC8: X3; CC9: X3; CC10: X3
Chart 13 data (Causes): X1: 8 (35%); X2: 1 (4%); X3: 12 (52%); X4: 2 (9%).
Coded Data Analysis – Causes / Sub-Themes / Chart 14
Sub-Themes
Y1 = Org policies and procedures
Y2 = Org training
Y3 = Org pressure
Y4 = Org communications
Y5 = Supervisory
Y6 = Management
Y7 = Flight crew internal pressure
Y8 = Flight crew inadequate training
Y9 = Flight crew perceived
consequences
Y10 =Flight crew individual
responsibility and risk perception
Y11 =Flight deck communication
Y12 =Flight crew pride/ego
Y13 =Flight crew error
CAUSES (11)
Asiana 214 (A): AC1: Y1, Y2, Y8; AC2: Y4, Y11; AC3: Y2, Y7, Y8, Y13; AC4: Y5, Y8; AC5: Y1
Challenger (B): BC1: Y1, Y3, Y4, Y6
NMSP (C): CC1: Y1, Y2, Y3, Y6, Y7, Y8, Y9, Y10, Y12, Y13; CC2: Y1, Y2, Y3, Y4, Y5, Y6; CC3: Y3, Y7, Y9; CC4: Y7, Y9; CC5: Y7; CC6: Y1; CC7: Y1, Y3, Y4; CC8: Y1, Y3, Y6; CC9: Y1, Y3; CC10: Y1, Y4
Chart 14 data (Organizational Culture Causes): Y1: 9 (20%); Y2: 4 (8%); Y3: 7 (15%); Y4: 5 (11%); Y5: 2 (4%); Y6: 4 (8%); Y7: 5 (10%); Y8: 4 (8%); Y9: 3 (6%); Y10: 1 (2%); Y11: 1 (2%); Y12: 1 (2%); Y13: 2 (4%).
Coded Data Analysis – Findings / Chart 15
Categories & Themes
X1 = Flight Crew
X2 = Weather
X3 = Organization
X4 = Manufacturer
FINDINGS (49)
Asiana 214 (A): AF1: N/A; AF2: X1; AF3: X1; AF4: X1; AF5: X1; AF6: X1, X3; AF7: X1; AF8: X1, X3; AF9: X1; AF10: X1, X3, X4; AF11: X4; AF12: X4; AF13: X1, X3; AF14: X1, X3; AF15: X1, X3; AF16: X4; AF17: X1; AF18: N/A; AF19: X4; AF20: N/A; AF21: N/A; AF22: X4; AF23: X3; AF24: X3; AF25: X3; AF26: X3; AF27: X3; AF28: X3; AF29: X3; AF30: X3
Challenger (B): BF1: X4; BF2: X4; BF3: X4; BF4: X4
NMSP (C): CF1: N/A; CF2: N/A; CF3: X4; CF4: N/A; CF5: X1, X3; CF6: X1; CF7: X1; CF8: X3; CF9: X3; CF10: X3; CF11: X3; CF12: X3; CF13: X1, X3; CF14: X3; CF15: X3
Chart 15 data (Findings): X1: 17 (34%); X2: 0 (0%); X3: 23 (45%); X4: 11 (21%).
Coded Data Analysis – Findings / Sub-Themes / Chart 16
Sub-Themes
Y1 = Org policies and procedures
Y2 = Org training
Y3 = Org pressure
Y4 = Org communications
Y5 = Supervisory
Y6 = Management
Y7 = Flight crew internal
pressure
Y8 = Flight crew inadequate
training
Y9 = Flight crew perceived
consequences
Y10 =Flight crew individual
responsibility and risk
perception
Y11 =Flight deck communication
Y12 =Flight crew pride/ego
Y13 =Flight crew error
FINDINGS (49)
Asiana 214 (A): AF1: N/A; AF2: Y2, Y9, Y11, Y13; AF3: Y1, Y7, Y8, Y9, Y11, Y12, Y13; AF4: Y1, Y2, Y5, Y7, Y8, Y9, Y11, Y12, Y13; AF5: Y1, Y2, Y5, Y7, Y8, Y11, Y13; AF6: Y1, Y3, Y7, Y11, Y13; AF7: Y4, Y11, Y13; AF8: Y1, Y3, Y7, Y8, Y9, Y11, Y13; AF9: Y8, Y11, Y13; AF10: Y2, Y6, Y8, Y11, Y13; AF11: N/A; AF12: N/A; AF13: Y1, Y5, Y8, Y13; AF14: Y1, Y4, Y11, Y13; AF15: Y1, Y2, Y3, Y8, Y13; AF16: N/A; AF17: Y11, Y13; AF18: N/A; AF19: N/A; AF20: N/A; AF21: N/A; AF22: N/A; AF23: Y1; AF24: Y1; AF25: Y1; AF26: Y1; AF27: Y1; AF28: Y1; AF29: Y1; AF30: Y1
Challenger (B): BF1: Y1, Y3, Y6; BF2: Y1, Y2, Y3, Y6; BF3: Y1, Y3, Y4, Y6; BF4: Y1, Y3, Y4, Y6
NMSP (C): CF1: N/A; CF2: N/A; CF3: N/A; CF4: N/A; CF5: Y1, Y2, Y3, Y6, Y7, Y8, Y10, Y13; CF6: Y1, Y2, Y3, Y7, Y8, Y10, Y12, Y13; CF7: Y3, Y6, Y7, Y8, Y9, Y10, Y12, Y13; CF8: Y3, Y6; CF9: Y1, Y6, Y8; CF10: Y1, Y6; CF11: Y1, Y3, Y6; CF12: Y1, Y2, Y4, Y6; CF13: Y1, Y2, Y8, Y13; CF14: Y1; CF15: Y1
Chart 16 data (Organizational Culture Findings): Y1: 29 (22%); Y2: 10 (8%); Y3: 12 (10%); Y4: 5 (4%); Y5: 3 (2%); Y6: 10 (8%); Y7: 8 (6%); Y8: 13 (10%); Y9: 5 (4%); Y10: 3 (2%); Y11: 11 (8%); Y12: 4 (3%); Y13: 17 (13%).
Coded Data Analysis – Recommendations / Chart 17
Categories & Themes
X1 = Flight Crew
X2 = Weather
X3 = Organization
X4 = Manufacturer
RECOMMENDATIONS (54)
Asiana 214 (A): AR1: X3; AR2: X3, X4; AR3: X4; AR4: X3, X4; AR5: X3; AR6: X4; AR7: X4; AR8: X4; AR9: X4; AR10: X4; AR11: X3; AR12: X3; AR13: X3; AR14: X3; AR15: X3; AR16: X1, X3; AR17: X1, X3; AR18: X4; AR19: X3; AR20: X4; AR21: X4; AR22: X3; AR23: X3; AR24: X3; AR25: X3; AR26: X3; AR27: X3
Challenger (B): BR1: X4; BR2: X4; BR3: X3, X4; BR4: X3, X4; BR5: X3, X4; BR6: X3, X4; BR7: X3, X4; BR8: X3; BR9: X3, X4; BR10: X4; BR11: X3, X4; BR12: X3, X4
NMSP (C): CR1: X3; CR2: X3; CR3: X3; CR4: X3; CR5: X3; CR6: X3; CR7: X3; CR8: X3; CR9: X3; CR10: X3; CR11: X3; CR12: X3; CR13: X3; CR14: X3; CR15: X3
Chart 17 data (Recommendations): X1: 0 (3%); X2: 0 (0%); X3: 42 (64%); X4: 22 (33%).
Coded Data Analysis – Recommendations / Sub-Themes / Chart 18
Sub-Themes
Y1 = Org policies and procedures
Y2 = Org training
Y3 = Org pressure
Y4 = Org communications
Y5 = Supervisory
Y6 = Management
Y7 = Flight crew internal
pressure
Y8 = Flight crew inadequate
training
Y9 = Flight crew perceived
consequences
Y10 =Flight crew individual
responsibility and risk
perception
Y11 =Flight deck communication
Y12 =Flight crew pride/ego
Y13 =Flight crew error
RECOMMENDATIONS (54)
Asiana 214 (A): AR1 through AR27: Y1, Y2 (each)
Challenger (B): BR1: Y1, Y2, Y6; BR2: Y1, Y2, Y6; BR3: Y1, Y6; BR4: Y1, Y6; BR5: Y1, Y6; BR6: Y1, Y6; BR7: Y1, Y6; BR8: Y4, Y6; BR9: Y1, Y2, Y6; BR10: Y1, Y6; BR11: Y1, Y6; BR12: Y1, Y6
NMSP (C): CR1: Y1; CR2: Y1, Y6; CR3: Y1, Y4; CR4: Y1, Y6; CR5: Y1, Y2, Y6; CR6: Y1, Y2, Y6; CR7: Y1, Y2, Y6; CR8: Y1, Y6; CR9: Y1, Y2, Y6; CR10: Y1, Y6; CR11: Y1, Y6; CR12: Y1, Y6; CR13: Y1, Y2, Y6; CR14: Y1, Y6; CR15: Y1, Y6
Chart 18 data (Organizational Culture Recommendations): Y1: 53 (46%); Y2: 35 (30%); Y3: 0 (0%); Y4: 1 (1%); Y5: 0 (0%); Y6: 25 (23%); Y7: 0 (0%); Y8: 0 (0%); Y9: 0 (0%); Y10: 0 (0%); Y11: 0 (0%); Y12: 0 (0%); Y13: 0 (0%).
Coded Data that was Analyzed
Asiana 214 (Adapted from NTSB, 2014)
Cause
AC-1. Complexities of the autothrottle and autopilot flight director that were inadequately
described in Boeing's manuals and Asiana's pilot training.
AC-2. Flight crew’s nonstandard communication and coordination regarding use of autothrottles
and autopilot systems.
AC-3. The Pilot Flying’s (PF) inadequate training on the planning and execution of visual
approaches.
AC-4. The Pilot Monitoring (PM)/Instructor Pilot’s inadequate supervision of PF.
AC-5. Flight Crew fatigue which likely degraded their performance.
Findings
AF-1. The following were not factors in the accident: flight crew certification and qualification;
flight crew behavioral or medical conditions or the use of alcohol or drugs; airplane certification
and maintenance; preimpact structural, engine, or system failures; or the air traffic controllers’
handling of the flight.
AF-2. Although the instrument landing system glideslope was out of service, the lack of a
glideslope should not have precluded the pilots’ successful completion of a visual approach.
AF-3. The flight crew mismanaged the airplane’s vertical profile during the initial approach,
which resulted in the airplane being well above the desired glidepath when it reached the 5
nautical mile point, and this increased the difficulty of achieving a stabilized approach.
AF-4. The flight crew’s mismanagement of the airplane’s vertical profile during the initial
approach led to a period of increased workload that reduced the pilot monitoring’s awareness of
the pilot flying’s actions around the time of the unintended deactivation of automatic airspeed
control.
AF-5. At about 200 ft, one or more flight crewmembers became aware of the low airspeed and low
path conditions, but the flight crew did not initiate a go-around until the airplane was below 100
ft, at which point the airplane did not have the performance capability to accomplish a go-
around.
AF-6. The flight crew was experiencing fatigue, which likely degraded their performance during
the approach.
AF-7. Nonstandard communication and coordination between the pilot flying and the pilot
monitoring when making selections on the mode control panel to control the autopilot flight
director system (AFDS) and autothrottle (A/T) likely resulted, at least in part, from role
confusion and subsequently degraded their awareness of the AFDS and A/T modes.
AF-8. Insufficient flight crew monitoring of airspeed indications during the approach likely
resulted from expectancy, increased workload, fatigue, and automation reliance.
AF-9. The delayed initiation of a go-around by the pilot flying and the pilot monitoring after they
became aware of the airplane’s low path and airspeed likely resulted from a combination of
surprise, nonstandard communication, and role confusion.
AF-10. As a result of complexities in the 777 automatic flight control system and inadequacies in
related training and documentation, the pilot flying had an inaccurate understanding of how the
autopilot flight director system and autothrottle interacted to control airspeed, which led to his
inadvertent deactivation of automatic airspeed control.
AF-11. If the autothrottle automatic engagement function (“wakeup”), or a system with similar
functionality, had been available during the final approach, it would likely have activated and
increased power about 20 seconds before impact, which may have prevented the accident.
AF-12. A review of the design of the 777 automatic flight control system, with special attention
given to the issues identified in this accident investigation and the issues identified by the
Federal Aviation Administration and European Aviation Safety Agency during the 787
certification program, could yield insights about how to improve the intuitiveness of the 777 and
787 flight crew interfaces as well as those incorporated into future designs.
AF-13. If the pilot monitoring had supervised a trainee pilot in operational service during his
instructor training, he would likely have been better prepared to promptly intervene when needed
to ensure effective management of the airplane’s flightpath.
AF-14. If Asiana Airlines had not allowed an informal practice of keeping the pilot monitoring’s
(PM) flight director (F/D) on during a visual approach, the PM would likely have switched off
both F/Ds, which would have corrected the unintended deactivation of automatic airspeed
control.
AF-15. By encouraging flight crews to manually fly the airplane before the last 1,000 ft of the
approach, Asiana Airlines would improve its pilots’ abilities to cope with maneuvering changes
commonly experienced at major airports and would allow them to be more proficient in
establishing stabilized approaches under demanding conditions; in this accident, the pilot flying
may have better used pitch trim, recognized that the airspeed was decaying, and taken the
appropriate corrective action of adding power.
AF-16. A context-dependent low energy alert would help pilots successfully recover from
unexpected low energy situations like the situation encountered by the accident pilots.
AF-17. The flight attendants acted appropriately when they initiated an emergency evacuation
upon determining there was a fire outside door 2R. Further, the delay of about 90 seconds in
initiating an evacuation was likely due partly to the pilot monitoring’s command not to begin an
immediate evacuation, as well as disorientation and confusion.
AF-18. Passengers 41B and 41E were unrestrained for landing and ejected through the ruptured
tail of the airplane at different times during the impact sequence. It is likely that these passengers
would have remained in the cabin and survived if they had been wearing their seatbelts.
AF-19. Passenger 42A was likely restrained for landing, and the severity of her injuries was likely
due to being struck by door 4L when it separated during the airplane’s final impact.
AF-20. The dynamics of the impact sequence in this accident were such that occupants were thrown
forward and experienced a significant lateral force to the left, which resulted in serious passenger
injuries that included numerous left-sided rib fractures and one left-sided head injury.
AF-21. The reasons for the high number of serious injuries to the high thoracic spine in this accident
are poorly understood.
AF-22. The release and inflation of the 1R and 2R slide/rafts inside the airplane cabin was a result of
the catastrophic nature of the crash, which produced loads far exceeding design certification limits.
AF-23. Clearer guidance is needed to resolve the concern among airport fire departments and
individual firefighters that the potential risk of injuring airplane occupants while piercing aircraft
structure with a skin-penetrating nozzle outweighs the potential benefit of an early and aggressive
interior attack using this tool.
AF-24. Medical buses were not effectively integrated into San Francisco International Airport’s
monthly preparation drills, which played a part in their lack of use in the initial response to the
accident and delayed the arrival of backboards to treat seriously injured passengers.
AF-25. Guidance on task prioritization for responding aircraft rescue and firefighting personnel that
addresses the presence of seriously injured or deceased persons in the immediate vicinity of an
accident airplane is needed to minimize the risk of these persons being struck or rolled over by
vehicles during emergency response operations.
AF-26. The overall triage process in this mass casualty incident was effective with the exception of
the failure of responders to verify their visual assessments of the condition of passenger 41E.
AF-27. The San Francisco Fire Department’s aircraft rescue and firefighting staffing level was
instrumental in the department’s ability to conduct a successful interior fire attack and successfully
rescue five passengers who were unable to self-evacuate amid rapidly deteriorating cabin conditions.
AF-28. Although no additional injuries or loss of life were attributed to the fire attack
supervisor’s lack of aircraft rescue and firefighting (ARFF) knowledge and training, the
decisions and assumptions he made demonstrate the potential strategic and tactical challenges
associated with having non-ARFF-trained personnel in positions of command at an airplane
accident.
AF-29. Although some of the communications difficulties encountered during the emergency
response, including the lack of radio interoperability, have been remedied, others, such as the
breakdown in communications between the airport and city dispatch centers, should be addressed.
AF-30. The Alert 3 sections of San Francisco International Airport’s 2008 and 2012 emergency
procedures manuals were not sufficiently robust to anticipate and prevent the problems that occurred
in the accident response.
Recommendations
To the Federal Aviation Administration:
AR-1. Require Boeing to develop enhanced 777 training that will improve flight crew
understanding of autothrottle modes and automatic activation system logic through improved
documentation, courseware, and instructor training. (A-14-37)
AR-2. Once the enhanced Boeing 777 training has been developed, as requested in Safety
Recommendation A-14-37, require operators and training providers to provide this training to
777 pilots. (A-14-38)
AR-3. Require Boeing to revise its 777 Flight Crew Training Manual stall protection
demonstration to include an explanation and demonstration of the circumstances in which the
autothrottle does not provide low speed protection. (A-14-39)
AR-4. Once the revision to the Boeing 777 Flight Crew Training Manual has been completed, as
requested in Safety Recommendation A-14-39, require operators and training providers to
incorporate the revised stall protection demonstration in their training. (A-14-40)
AR-5. Convene an expert panel (including members with expertise in human factors, training,
and flight operations) to evaluate methods for training flight crews to understand the
functionality of automated systems for flightpath management, identify the most effective
training methods, and revise training guidance for operators in this area. (A-14-41)
AR-6. Convene a special certification design review of how the Boeing 777 automatic flight
control system controls airspeed and use the results of that evaluation to develop guidance that
will help manufacturers improve the intuitiveness of existing and future interfaces between flight
crews and autoflight systems. (A-14-42)
AR-7. Task a panel of human factors, aviation operations, and aircraft design specialists, such as
the Avionics Systems Harmonization Working Group, to develop design requirements for
context-dependent low energy alerting systems for airplanes engaged in commercial operations.
(A-14-43)
AR-8. Conduct research that examines the injury potential to occupants in accidents with
significant lateral forces, and if the research deems it necessary, implement regulations to
mitigate the hazards identified. (A-14-44)
AR-9. Conduct research to identify the mechanism that produces high thoracic spinal injuries in
commercial aviation accidents, and if the research deems it necessary, implement regulations to
mitigate the hazards identified. (A-14-45)
AR-10. Analyze, in conjunction with slide/raft manufacturers, the information obtained in this
accident investigation and evaluate the adequacy of slide and slide/raft certification standards
and test methods specified in Federal Aviation Administration regulations and guidance
materials. If appropriate, modify certification standards and test methods for future slide and
slide/raft design based on the results of this evaluation. (A-14-46)
AR-11. Work with the Aircraft Rescue and Firefighting Working Group and equipment
manufacturers to develop and distribute more specific policies and guidance about when, how,
and where to use the high-reach extendable turret’s unique capabilities. (A-14-47)
AR-12. Once the minimum staffing level has been developed by the Aircraft Rescue and
Firefighting (ARFF) Working Group, as requested in Safety Recommendation A-14-60, amend
14 Code of Federal Regulations 139.319(j) to require a minimum ARFF staffing level that would
allow exterior firefighting and rapid entry into an airplane to perform interior firefighting and
rescue of passengers and crewmembers. (A-14-48)
AR-13. Work with the Aircraft Rescue and Firefighting (ARFF) Working Group to develop and
distribute policy guidance and training materials to ensure that all airport and mutual aid
firefighting officers placed in command at the scene of an aircraft accident have at least a
minimum level of ARFF training. (A-14-49)
AR-14. Issue a CertAlert to all Part 139 airports to distribute the information contained in the
Federal Aviation Administration’s (FAA) legal interpretation of 14 Code of Federal Regulations
139.319 that requires all personnel assigned to aircraft rescue and firefighting duties to meet the
initial and recurrent training and live-fire drill requirements and clarify how the FAA will
enforce this regulation. (A-14-50)
AR-15. Conduct a special inspection of San Francisco International Airport’s emergency
procedures manual and work closely with the airport to ensure that the airport meets its
obligations under Part 139.325. (A-14-51)
To Asiana Airlines:
AR-16. Reinforce, through your pilot training programs, flight crew adherence to standard
operating procedures involving making inputs to the operation of autoflight system controls on
the Boeing 777 mode control panel and the performance of related callouts. (A-14-52)
AR-17. Revise your flight instructor operating experience (OE) qualification criteria to ensure
that all instructor candidates are supervised and observed by a more experienced instructor
during OE or line training until the new instructor demonstrates proficiency in the instructor role.
(A-14-53)
AR-18. Issue guidance in the Boeing 777 Pilot Operating Manual that after disconnecting the
autopilot on a visual approach, if flight director guidance is not being followed, both flight
director switches should be turned off. (A-14-54)
AR-19. Modify your automation policy to provide for more manual flight, both in training and in
line operations, to improve pilot proficiency. (A-14-55)
To Boeing:
AR-20. Using the guidance developed by the low energy alerting system panel created in
accordance with Safety Recommendation A-14-43, develop and evaluate a modification to
Boeing wide-body automatic flight control systems to help ensure that the aircraft energy state
remains at or above the minimum desired energy condition during any portion of the flight. (A-
14-57)
AR-21. Revise the Boeing 777 Flight Crew Operating Manual to include a specific statement
that when the autopilot is off and both flight director switches are turned off, the autothrottle
mode goes to speed (SPD) mode and maintains the mode control panel-selected speed. (A-14-56)
To the Aircraft Rescue and Firefighting Working Group:
AR-22. Work with the Federal Aviation Administration and equipment manufacturers to develop
and distribute more specific policies and guidance about when, how, and where to use the high-
reach extendable turret’s unique capabilities. (A-14-58)
AR-23. Work with medical and medicolegal professional organizations to develop and distribute
guidance on task prioritization for responding aircraft rescue and firefighting (ARFF) personnel
that includes recommended best practices to avoid striking or rolling over seriously injured or
deceased persons with ARFF vehicles in a mass casualty situation. (A-14-59)
AR-24. Develop a minimum aircraft rescue and firefighting staffing level that would allow
exterior firefighting and rapid entry into an airplane to perform interior firefighting and rescue of
passengers and crewmembers. (A-14-60)
AR-25. Develop and distribute, in conjunction with the Federal Aviation Administration,
guidance and training materials to ensure that all airport and mutual aid firefighting officers
placed in command at the scene of an aircraft accident have at least a minimum level of aircraft
rescue and firefighting training. (A-14-61)
To the City and County of San Francisco:
AR-26. Routinely integrate the use of all San Francisco Fire Department medical and firefighting
vehicles in future disaster drills and preparatory exercises. (A-14-62)
AR-27. Implement solutions to the communications deficiencies identified in ICF International’s
after-action report as soon as practicable. (A-14-63)
Challenger (Adapted from Rogers, 1986)
Cause
BC-1. Failure of the pressure seal (O-ring) in the field joint of the right Solid Rocket Motor. The
failure was due to a faulty design unacceptably sensitive to a number of factors. These factors
were the effects of temperature, physical dimensions, the character of materials, the effects of
reusability, processing, and the reaction of the joint to dynamic loading.
Findings
BF-1. There was a serious flaw in the decision-making process leading up to the launch of flight
51-L. A well-structured and managed system emphasizing safety would have flagged the rising
doubts about the Solid Rocket Booster joint seal.
BF-2. The waiving of launch constraints appears to have been at the expense of flight safety.
There was no system which made it imperative that launch constraints be considered by all levels
of management.
BF-3. The Commission is troubled by what appears to be a propensity of management at
Marshall to contain potentially serious problems and to attempt to resolve them internally rather
than communicate them forward.
BF-4. The Commission concluded that the Thiokol Management reversed its position and
recommended the launch of 51-L, at the urging of Marshall and contrary to the views of its
engineers in order to accommodate a major customer.
Recommendations
BR-1. The faulty Solid Rocket Motor joint and seal must be changed.
BR-2. The Administrator of NASA should request the National Research Council to form an
independent Solid Rocket Motor design oversight committee to implement the Commission’s
design recommendations and oversee the design effort.
BR-3. The Shuttle Program Structure should be reviewed, including a redefinition of the Program
Manager’s responsibility, to give that position requisite authority for all ongoing STS operations.
BR-4. NASA should encourage the transition of qualified astronauts into agency management
positions.
BR-5. NASA should establish an STS Safety Advisory Panel reporting to the STS Program
Manager.
BR-6. NASA and the primary Shuttle contractors should review all Criticality 1, 1R, 2, and 2R
items and hazard analyses.
BR-7. NASA should establish an Office of Safety, Reliability and Quality Assurance reporting
directly to the NASA Administrator.
BR-8. Communications must be improved; NASA should eliminate the tendency toward
management isolation at Marshall and ensure that safety-related information reaches all levels of
Shuttle program management.
BR-9. NASA must take actions to improve landing safety.
BR-10. NASA should make all efforts to provide a crew escape system for use during controlled
gliding flight.
BR-11. The nation’s reliance on the Shuttle as its principal space launch capability created a
relentless pressure on NASA to increase the flight rate. NASA must establish a flight rate that is
consistent with its resources.
BR-12. Installation, test, and maintenance procedures must be especially rigorous for Space
Shuttle items designated Criticality 1.
New Mexico State Police (Adapted from NTSB, 2011)
Cause
CC-1. Pilot’s decision to take off from a remote, mountainous landing site in dark (moonless)
night, windy, instrument meteorological conditions.
CC-2. An organizational culture that prioritized mission execution over aviation safety.
CC-3. Pilot’s fatigue.
CC-4. Pilot’s self-induced pressure to conduct the flight.
CC-5. Pilot’s situational stress.
CC-6. Deficiencies in the New Mexico State Police aviation section’s safety-related policies.
CC-7. Lack of a requirement for a risk assessment at any point during the mission.
CC-8. Inadequate pilot staffing.
CC-9. Lack of an effective fatigue management program for pilots.
CC-10. Inadequate procedures and equipment to ensure effective communication between
airborne and ground personnel during search and rescue missions.
Findings
CF-1. The investigation determined that the accident helicopter was properly certificated and
maintained in accordance with New Mexico State Police policies and the manufacturer’s
recommended maintenance program. There was no evidence of any preimpact structural, engine,
or system failures.
CF-2. The investigation found no evidence that the pilot had any preexisting medical or
toxicological condition that adversely affected his performance during the accident flight.
CF-3. Postaccident examination of the helicopter’s seats and restraint systems revealed no
evidence of preimpact inadequacies. The pilot and the hiker were ejected from the helicopter
when their seats and restraint systems were subjected to forces beyond those for which they were
certificated during the helicopter’s roll down the steep, rocky mountainside.
CF-4. Neither the airborne nor the ground search and rescue (SAR) personnel could have
reached the pilot before he died of exposure given the adverse weather conditions, which
precluded a prompt airborne SAR response and hindered the ground SAR teams’ progress; the
darkness and the rugged terrain in which the ground SAR teams were responding; the distance
they had to travel; and the seriousness of the pilot’s injuries.
CF-5. When the pilot made the decision to launch, the weather and lighting conditions, even at
higher elevations, did not preclude the mission; however, after accepting a search and rescue
mission involving flight at high altitudes over mountainous terrain, with darkness approaching
and with a deteriorating weather forecast, the pilot should have taken steps to mitigate the
potential risks involved, for example, by bringing cold-weather survival gear and ensuring that
night vision goggles were on board and readily available for the mission.
CF-6. The pilot exhibited poor decision-making when he chose to take off from a relatively
secure landing site at night and attempt visual flight rules flight in adverse weather conditions.
CF-7. The pilot decided to take off from the remote landing site, despite mounting evidence
indicating that the deteriorating weather made an immediate return to Santa Fe inadvisable,
because his fatigue, self-induced pressure to complete the mission, and situational stress
distracted him from identifying and evaluating alternative courses of action.
CF-8. Although there was no evidence of any direct New Mexico State Police or Department of
Public Safety management pressure on the pilot during the accident mission, there was evidence
of management actions that emphasized accepting all missions, without adequate regard for
conditions, which was not consistent with a safety-focused organizational culture, as
emphasized in current safety management system guidance.
CF-9. If operators of public aircraft implemented structured, task-specific risk assessment
management programs, their pilots would be more likely to thoroughly identify, and make efforts
to mitigate, the potential risks associated with a mission.
CF-10. An effective pilot flight and duty time program would address not only maximum flight
and duty times but would also contain requirements for minimum contiguous ensured rest
periods to reduce pilot fatigue; the New Mexico State Police aviation section’s flight and duty
time policies did not ensure minimum contiguous rest periods for its pilots.
CF-11. At the time of the accident, the New Mexico State Police aviation section staffing level
was insufficient to allow helicopter operations 24 hours a day, 7 days a week without creating an
unacceptable risk of pilot fatigue.
CF-12. New Mexico State Police (NMSP) personnel did not regularly follow the search and
rescue (SAR) plan, and NMSP pilots, including the accident pilot, did not routinely communicate
directly with the SAR commanders during SAR efforts, which reduced the safety and
effectiveness of SAR missions.
CF-13. Because the accident pilot did not have a helicopter instrument rating, experience in
helicopter instrument operations, or training specific to inadvertent helicopter instrument
meteorological condition encounters, he was not prepared to react appropriately to the loss of
visual references that he encountered shortly after takeoff.
CF-14. The signals received from the accident helicopter’s 406-megahertz (MHz) emergency
locator transmitter (ELT) were primarily responsible for focusing searchers on areas near the
accident site and for eventually locating both the survivor and the helicopter wreckage.
CF-15. Although it is unlikely that the use of flight-tracking systems would have resulted in a
different outcome in this case, the use of such systems, which provide real-time information
regarding an agency’s assets, could shorten search times for downed public aircraft and their
occupants” (NTSB, 2011, p. 63).
Recommendations
The following recommendations were made to the Governor of New Mexico:
CR-1. Require the New Mexico Department of Public Safety to bring its aviation section policies
and operations into conformance with industry standards, such as those established by the
Airborne Law Enforcement Association. (A-11-53)
CR-2. Require the New Mexico Department of Public Safety to develop and implement a
comprehensive fatigue management program for the New Mexico State Police (NMSP) aviation
section pilots that, at a minimum, requires NMSP to provide its pilots with protected rest periods
and defines pilot rest (in a manner consistent with 14 Code of Federal Regulations 91.1057) and
ensures adequate pilot staffing levels and aircraft hours of availability consistent with the pilot
rest requirements. (A-11-54)
CR-3. Revise or reinforce New Mexico State Police (NMSP) search and rescue (SAR) policies to
ensure direct communication between NMSP aviation units and SAR ground teams and field
personnel during a SAR mission. (A-11-55)
The following recommendations were made to the Airborne Law Enforcement Association:
CR-4. Revise your standards to define pilot rest and ensure that pilots receive protected rest
periods that are sufficient to minimize the likelihood of pilot fatigue during aviation operations.
(A-11-56)
CR-5. Revise your accreditation standards to require that all pilots receive training in methods
for safely exiting inadvertently encountered instrument meteorological conditions for all
aircraft categories in which they operate. (A-11-57)
CR-6. Encourage your members to install 406-megahertz emergency locator transmitters on all
of their aircraft. (A-11-58)
CR-7. Encourage your members to install flight-tracking equipment on all public aircraft that
would allow for near-continuous flight tracking during missions. (A-11-59)
The following recommendations were made to the National Association of State Aviation Officials:
CR-8. Encourage your members to conduct an independent review and evaluation of their
policies and procedures and make changes as needed to align those policies and procedures with
safety standards, procedures, and guidelines, such as those outlined in Airborne Law
Enforcement Association guidance. (A-11-60)
CR-9. Encourage your members to develop and implement risk assessment and management
procedures specific to their operations. (A-11-61)
CR-10. Encourage your members to install 406-megahertz emergency locator transmitters on all
of their aircraft. (A-11-62)
CR-11. Encourage your members to install flight-tracking equipment on all public aircraft that
would allow for near-continuous flight tracking during missions. (A-11-63)
The following recommendations were made to the International Association of Chiefs of Police:
CR-12. Encourage your members to conduct an independent review and evaluation of their
policies and procedures and make changes as needed to align those policies and procedures with
safety standards, procedures, and guidelines, such as those outlined in Airborne Law
Enforcement Association guidance. (A-11-64)
CR-13. Encourage your members to develop and implement risk assessment and management
procedures specific to their operations. (A-11-65)
CR-14. Encourage your members to install 406-megahertz emergency locator transmitters on all
of their aircraft. (A-11-66)
CR-15. Encourage your members to install flight-tracking equipment on all public aircraft that
would allow for near-continuous flight tracking during missions. (A-11-67)
Appendix B
Appendix C
Contribution to Practice PowerPoint Curriculum Presentation