ADJUSTING THE ALGORITHM:
HOW EXPERTS INTERVENE IN ALGORITHMIC HIRING TOOLS
by
Ignacio Fernandez Cruz
A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
COMMUNICATION
August 2021
Copyright 2021 Ignacio Fernandez Cruz
DEDICATION
Para mi familia.
Mom, without your sacrifices, I could never have soared.
ACKNOWLEDGMENTS
This work is a direct reflection of the unwavering support from my family, friends,
mentors, and colleagues on the pathway toward my doctorate. You have nurtured the grit
and resilience within me and helped me continue to foster my love of learning.
An enormous debt of gratitude is owed to my dissertation committee for their
supreme mentorship, expertise, and encouragement throughout the years. We built this
project together; thank you for believing in me each step of the way. I would like to give a
special thank you to my co-chair, Janet Fulk, for being in my corner since Day One. I met
you when I was an undergraduate and since then, you have been instrumental in
developing me into the scholar I am today, and the scholar I aspire to be in the future.
Thank you for always listening to my ideas and encouraging me to always push boundaries
in my thinking. I also would like to extend my thanks to my co-chair, Andrea B.
Hollingshead. You created space for me to feel heard and seen, giving me the confidence to
explore my ideas with keen conviction. My co-chairs have been a tremendous source of
support throughout the years with their dedication and acumen toward cultivating my
scholarship in communication.
I must thank my committee members, Patricia Riley and Ann Majchrzak, for opening
a channel of advice and support that fostered my scholarly growth in writing this
dissertation. Our Zooms helped me develop my ideas, taught me not to fear thinking outside the box,
and sustained the intellectual passion in this project. I stand alongside a powerhouse
committee of brilliant communication and management scholars, all of whom are women,
who had to trail blaze their way through the field. With your guidance, I now join the less
than eight percent of Latinx doctorates in the United States.
I have had the privilege of working with and learning from a world class group of
colleagues at the Annenberg School for Communication and Journalism. I especially thank
Peter Monge, Margaret McLaughlin, Dmitri Williams, and Sheila Murphy for your teaching
and camaraderie over the years. I am sincerely grateful for the administrative support from
Anne Marie Campian and Sarah Holterman; you both are a wealth of knowledge and are the
glue of this school. Thank you to Meredith Drake Reitan and Kate Tegmeyer from the USC
Graduate School for always giving me a seat at the table and supporting my research. I
would be remiss not to thank my undergraduate McNair Scholar’s mentor, Keri K.
Stephens, at the University of Texas at Austin. Your mentorship and friendship are
invaluable— words cannot express how grateful and honored I am to have followed in your
footsteps in the field.
I would like to acknowledge the National Science Foundation’s Graduate Research
Fellowship Program, which funded part of this dissertation. I would also like to thank the
USC Annenberg School of Communication for supporting this work. Versions of this
dissertation have been improved from feedback at the Oxford Internet Institute’s Summer
Doctoral Programme and the Vrije Universiteit Amsterdam’s KIN Center for Digital
Innovation Summer School. In producing this work, I thank my participants, a group of
recruiters who lent their time and voices to me.
My journey at the University of Southern California has been deeply impacted by my
friends near and far. To my hermanita, LaToya Council, you are my shoulder to lean on and
I cannot quite put words together to express how grateful I am to have you in my life. Your
brilliance pushes me to be a better scholar and friend every day. You have been my rock
throughout these years—you are family. My fellow Annenberg besties, Jillian Kwong and
Azeb Madebo, have been instrumental in helping me across the finish line. I will never
forget our adventures near and far, or the way that you both show up for me when I need
you. Karina Santellano, thank you for always being a Facetime call away and teaching me
what true resilience is. During the pandemic, I formed a special bond with my sociology
gals, Mary Ippolito and Blanca Ramirez; thank you for inviting me into your Zoom writing
spaces; your company played a tremendous role in this dissertation. Thank you to Nathian
Shae Rodriguez for taking the plunge and opening doors for queer Latinx academics and
keeping them open; your friendship is invaluable, amigx. Jorge Luis Galán, thank you for
making my time in Los Angeles special— you take me out of the Ivory Tower and allow me
to let my hair down. I have also had the privilege of forming lifelong friendships throughout the
years; thank you all for your support in producing this dissertation: Sonia Jawaid Shaikh,
Sarah Clayton, Cynthia Rodriguez, Mehr Mumtaz, Cortney Sanders, Jude Paul Dizon,
JooWha Hong, Eduardo Gonzalez, Yuehan Grace Wang, Hyun Tae Calvin Kim, Lauren
Levitt, Yiqi Li, Emily Sidnam-Mauch, Yusi Aveva Xu, Antonio Marquez, Anthony Vasquez,
LooLoo Amante, TauLau Tupua, Marc Roaquin, and Sabrina Wilkinson.
At the outset of writing this dissertation, I met a special man who has since stood by
my side day and night, Ryan Andrew Rodriguez. You have grown into being my partner, my
personal administrative assistant, my food courier, my barista who is always shy of getting
a pink slip, and my future. BooBoo, thank you for showing me what true love is. Our
adventure is just beginning.
Lastly, and most notably, I thank my family. From afar, I embraced your endless
supply of love, empowerment, strength, and hope. I do this work for you—for us. David, Ali,
and Dad, graduate school provided little time for us to be together but know that these
sacrifices made are not in vain. We did this together. My nieces, Alissa and Alina, I love you
girls with all my heart and hope I have done my duty as your uncle to instill a spirit of
perseverance and wonder in each of you.
Mom, you have kept the flame burning inside of me. You have given up so much so
that I could flourish, and I am honored to be your son. Esto es por tí. Me diste alas para
volar, y lo logramos. Gracias, mamá.
TABLE OF CONTENTS
Dedication
Acknowledgments
List of Tables
List of Figures
Abstract
Chapter One: Introduction
    How Experts Use Algorithms Throughout the Hiring Life Cycle
    AI Hiring Tools in Action
    Blending Expertise and Technology Use
    The Relationship Between Expertise and Algorithms
Chapter Two: The Role of Expertise in AI-Powered Recruitment
    The History of Modern-Day Recruitment
    The Process of Hiring
    The Relevance of Studying Expertise in AI-Powered Work
    How is Expertise Defined?
    AI Hiring Tools and Human Recruiters Engaging in Trading Zones
Chapter Three: Algorithms in Practice at Work
    A Focus on the Social Implications of Algorithms
    Algorithmic Transparency
    Folk Theories
    Algorithms in Practice: Aversion or Appreciation of Algorithms
    Algorithmic Aversion
    Expertise and Algorithmic Use
    The Rise of Algoactivism
Chapter Four: Methodology
    The Research Context: Recruitment Practices in a Global Pandemic
    A Focus on High-Volume Recruitment
    Types of AI-Powered Tools for Hiring Needs
    Sampling
    Pilot Study
    Selecting Research Participants
    Snowball Sampling and Later Theoretical Sampling
    Sampling Criterion
    Procedure for Interviews
    The Interview
    Data Analysis
    Data Pre-Processing for Analysis
    Stages of Data Analysis
    Stage 1: Open and Focused Coding
    Stage 2: Axial Coding
    Stage 3: Theoretical Coding
Chapter Five: Rival or Ally? Expertise is Negotiated in Algorithmic Powered Decisions
    Identifying Common Ground Among Different Forms of AI Hiring Tools
    Making Sense of the Blackbox
    Algorithms Aid or Rival Work Expertise
    AI as Ally
    AI as a Rival
Chapter Six: Human Expertise Configures the Algorithmic Playing Field
    The Sequence of Evaluation in Hiring
    Sourcing Accelerates Legitimized Screening Processes
    Using Expertise to Source for Explicit Criteria
Chapter Seven: Embeddedness of Expertise and AI Use in Organizational Ecosystems
    Algorithms in Practice: A Focus on the Organizational Context
    Negotiating Expertise in the Recruiter-Algorithm Trading Zone
    Working With AI is an Alignment Process
    Deepening the Human-AI Trading Zone
    Professional Boundary Identities in the Human-AI Trading Zone
    Treating Expertise as Multilevel in the Organizational Ecosystem
    Opportunities for Theory
    Opportunities for Practice
    Limitations
    Future Work
    Concluding Thoughts
References
Appendices
    Appendix A: Pilot Study Interview Guide
    Appendix B: Primary Interview Guide
    Appendix C: Recruitment Information Sheet
LIST OF TABLES
Table 1: List of Folk Theories about Social Media
Table 2: List of Common AI Hiring Tools
Table 3: Example of Initial Coding
Table 4: Participant Inventory
Table 5: Conceptual Matrix: AI as Ally or Rival
LIST OF FIGURES
Figure 1: Screenshot of Applicant’s Video Assessment
Figure 2: Hiring as Funnel Process
Figure 3: Recruiter-Algorithm Trading Zone
Figure 4: Demo of Ideal, an AI Hiring Tool
Figure 5: Demo of Ideal with Metrics from Chatbot
Figure 6: HireVue’s Testimonial Case Study
Figure 7: Demo of LinkedIn Recruiter
Figure 8: Standard Filters on LinkedIn Recruiter
ABSTRACT
This dissertation is a culmination of work that argues for a fresh examination into
the world of smart technology and evolving hiring practices. I examine how recruitment
experts account for algorithmic systems in their work when making decisions about
potential job candidates. I draw from literature on organizational expertise and developing
theories of human-AI interaction to explore the ways that experts evaluate the meaning
and value of implementing AI-powered technologies into their work. By employing in-
depth interviews (n=42) with recruitment experts, I empirically uncover the mechanisms
associated with how experts accept or reject the recommendations from algorithmic hiring
tools.
I provide findings that explain how recruitment experts generally perceive AI hiring
tools when they engage in decision-making strategies for screening job candidates. Human
experts are found to perceive the use of AI-powered technologies either as complementary
(as an ally) or adversarial (as a rival) depending on organizational factors such as the level of
employee oversight, their trust in the technology’s output, and the degree to which using
AI technologies is prioritized in their organization. Next, I provide findings that
demonstrate how human experts advantageously invoke their expertise at the onset of
human-technology decision making processes. More specifically, recruiters in this study
use their expertise during initial job screening processes before incorporating the outputs of
AI technologies into their decisions—this act inadvertently circumvents the influence that the
expertise of an AI system could exert on decision-making processes.
Moreover, these findings elaborate on how human experts engage in a trading zone
with algorithms to negotiate the terms of interaction and legitimacy of AI-powered hiring
tools against their own expertise. In turn, I argue that the product of this
sociotechnical trading zone is the conceptualization of multilevel expertise among human
experts and the AI technologies they use for work processes. I provide an explanation of
my work’s findings in relation to our current understanding of research around artificial
intelligence use at work. Finally, I offer theoretical and practical considerations for future
research and practice.
Keywords: expertise, multilevel expertise, algorithms, trading zones, recruiting
CHAPTER ONE: INTRODUCTION
How Experts Use Algorithms throughout the Hiring Life Cycle
The world of recruiting is undergoing dramatic changes in the ways that recruiters
leverage algorithms to perform fundamental talent acquisition processes in their work.
Organizations are replacing human labor with algorithms in many parts of the recruitment
process when attracting and selecting talent. Because the recruitment process consumes a
significant amount of time and cost for organizations, the integration of algorithmic hiring
tools is pervasive within a recruiter’s job. Algorithmic hiring tools are AI-powered tools that
are integrated as software extensions into an organization’s applicant tracking system, or
an organization’s central hub for managing their job applicant pool. The algorithmic hiring
tools within applicant tracking systems are used by recruitment professionals throughout
the hiring cycle (e.g., aiding in processes related to sourcing and screening applicants,
interviewing, and the successful placing of job candidates). At a functional level, algorithmic
hiring tools are a series of automated tools built into software to automate
work processes once performed by a human. These systems are composed of singular
algorithms, or step-by-step computational procedures for solving a problem or
accomplishing some end. In the world of recruiting, these emergent technologies entice
recruitment professionals by serving as a tool that will automate segments of their work.
Recruiters are integrating algorithmic hiring tools to do tasks that were once
completed manually, such as scouring through applicant databases and creating lists of
candidates that they should reach out to, sifting through resumes and automatically
weeding out job applicants that do not fit job qualifications, and scheduling interviews
between candidates and hiring managers (LinkedIn, 2018). A key characteristic of an
algorithmic hiring system is its reliance on “supervised learning,” which means that an
algorithm models a relation between job applicant variables and outcomes in a designated
training dataset, and then applies the model to predict outcomes for future job applicants.
In other words, algorithmic hiring tools run on a series of algorithms that use historical
data about successful internal hiring practices to uncover predictive relationships within
an organization’s job applicant pool (Bogen & Rieke, 2018; Li, Raymond, & Bergman, 2020).
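To make this pattern concrete, the following minimal Python sketch fits a model on a toy set of historical applicants and then scores a new one. The feature names, numbers, and choice of logistic regression are illustrative assumptions for exposition, not any vendor’s actual model.

    # A minimal sketch of the supervised learning pattern described above:
    # fit a model on historical applicant data (variables plus hiring
    # outcomes), then apply it to score a future applicant. All names and
    # numbers here are hypothetical.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Historical training data: one row per past applicant.
    # Assumed columns: years of experience, skill-match score, assessment score.
    X_train = np.array([
        [5.0, 0.9, 82],
        [1.0, 0.4, 55],
        [3.0, 0.7, 74],
        [7.0, 0.8, 90],
        [0.5, 0.2, 48],
    ])
    # Outcome labels from past hiring decisions: 1 = hired, 0 = not hired.
    y_train = np.array([1, 0, 1, 1, 0])

    # Model the relation between applicant variables and outcomes.
    model = LogisticRegression().fit(X_train, y_train)

    # Apply the trained model to predict the outcome for a future applicant.
    new_applicant = np.array([[4.0, 0.6, 70]])
    print(f"Predicted hire probability: {model.predict_proba(new_applicant)[0, 1]:.2f}")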
In 2018, a report by LinkedIn examined how 9,000 recruiters and
hiring managers from 39 countries perceived the key benefits of using algorithmic hiring
tools at work. Overwhelmingly, recruitment professionals think that artificial intelligence in
hiring is a bold disrupter in their field. These recruiters, who possess hiring expertise, think
the use of these technologies will help them save time (67%), remove human bias (43%),
deliver quality candidates for the job (31%), and save their organization money (30%). There
has been a general upward trend in adoption and use of these emergent technologies
across a variety of recruitment industries. For instance, by the end of the third quarter of
2019, there was more than $4 billion in investment by venture capitalists in recruitment
technology firms (Gale, 2020). Many of these investments went to established recruitment
platforms like Jobvite and SmartRecruiters to acquire smaller recruiting platforms for their
portfolios and to integrate algorithmic hiring tools into their current systems. For
recruitment platforms, the value proposition to invest in algorithmic hiring tools is to allow
clients with hefty recruitment needs to eliminate human intervention until the point of
an interview with a hiring manager (Bogen & Rieke, 2018; Li, Raymond, & Bergman, 2020).
The current market share of algorithmic hiring tools is fragmented across a variety
of different industries with numerous companies that produce their own proprietary form
of AI. These companies pitch and sell their own AI as a product that is often integrated into
a client's talent acquisition software enterprise. A common characteristic among the
clientele of algorithmic hiring tools is organizations that engage in high-volume
recruiting, or the practice of hiring large numbers of candidates in a short period of time.
More specifically, high-volume recruiters aim to identify high-quality candidates and assess
their qualifications for a job position. The use of algorithmic hiring tools is often agnostic
to industry type, with popular AI hiring software companies like HireVue and Jobvite
providing services to Unilever, Kraft Heinz, Dow Jones, LinkedIn, and Hulu, among others.
Their use spans manual and knowledge work, and their implementation extends to
organizations worldwide (Bogen & Rieke, 2018).
AI Hiring Tools in Action
Take, for example, a case study of Unilever—a British multinational consumer goods
company—that uses artificial intelligence throughout its recruitment lifecycle (e.g., from
sourcing to hiring). Prior to using such a system in its hiring process, Unilever averaged 4-6
months to narrow its pool of 250,000 college-graduate applications to a couple hundred
hires. By introducing HireVue, a U.S.-based algorithmic hiring tool, the company was able to
shorten its hiring timeline to two weeks. The partnership between the corporation and the
AI hiring tool company implemented a variety of algorithms.
First, applicants submitted their LinkedIn profile, which was then parsed through an
algorithm to rank top candidates. Then, those identified candidates completed a gamified
assessment for company fit by playing a series of behavioral games. The candidates who
achieved a set score were then invited to a video interview where HireVue’s algorithm used
the recordings to assess personality and behavioral traits from facial expressions, eye
movements, body movements, details of clothing, and nuances of voice. In this stage
of the hiring process, applicants who met the criteria established by Unilever and HireVue
were invited to a virtual onsite interview where candidates interacted with others from the
company, and finally, hiring managers made final hiring decisions. In general, the criteria
used to program the proprietary AI technology were set by both HireVue and Unilever. In an
interview with The Guardian discussing strategic hiring practices, Unilever explained how
the use of their AI hiring tools saves hundreds of thousands of dollars by automating
recruitment processes and estimated that they saved over 100,000 hours of human
recruitment time by deploying AI software to aid their talent acquisition (Booth, 2019).
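As a rough illustration of this staged design, here is a minimal Python sketch of such a funnel, in which each stage scores the remaining candidates and only those above a cutoff advance, so human interviewers ultimately see a small fraction of the original pool. The stage names, random placeholder scores, and cutoff values are hypothetical stand-ins, not HireVue’s or Unilever’s actual criteria.

    # A minimal sketch of a staged hiring funnel like the one described above.
    # Each stage filters the pool so that humans only interview the small
    # fraction of applicants who clear every cutoff. Scores are random
    # placeholders for the proprietary models at each stage.
    import random

    random.seed(7)

    def profile_rank_score(candidate):
        # Stand-in for the algorithm that parses and ranks LinkedIn profiles.
        return random.random()

    def game_assessment_score(candidate):
        # Stand-in for the gamified behavioral assessment.
        return random.random()

    def video_interview_score(candidate):
        # Stand-in for the video-interview scoring model.
        return random.random()

    # (scoring function, cutoff) pairs applied in order; cutoffs are arbitrary.
    stages = [
        (profile_rank_score, 0.50),
        (game_assessment_score, 0.60),
        (video_interview_score, 0.70),
    ]

    pool = [{"id": i} for i in range(250_000)]  # initial applicant pool
    for scorer, cutoff in stages:
        pool = [c for c in pool if scorer(c) >= cutoff]
        print(f"{scorer.__name__}: {len(pool)} candidates remain")
    # The surviving candidates move on to human-led onsite interviews.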
Figure 1: Screenshot of Applicant’s Video Assessment
Note: This image was originally published in the article “We Tried the AI Software
Companies like Goldman Sachs and Unilever Use to Analyze Job Applicants” by Richard
Feloni in Business Insider. The image is a screenshot of an applicant’s video assessment
results on HireVue, an algorithmic management system that uses video recordings to score
candidates for a job.
Although the practice of using algorithms is becoming ever more ubiquitous in the hiring
process, there is no industry standard for designing and deploying algorithmic hiring
systems in recruitment (Engler, 2021; Forman & Glasser, 2021). Typically, organizations that opt
to use AI in their recruiting contract with third-party vendors that sell AI-powered
algorithms (i.e., tools for hiring), which perform a variety of functions such as
administering cognitive assessments, parsing through applicant documents and ranking
them, and conducting and analyzing video interviews. Organizations purchase these
systems, collaborate with vendors to train their recruiters with these AI-powered tools,
and allow recruiters to integrate these new tools within their recruitment efforts. The most
common characteristic among algorithmic hiring tools is that the technology uses data
collected from job candidates to predict how well they will perform in a job.
The parameters of an algorithmic hiring tool’s prediction models are usually
determined through a partnership between a technology vendor and a company that
purchases and uses the technology. The nature of these partnerships highlights the
heterogeneity of practices across algorithmic hiring tools: organizations contract with AI
software developers and co-design specific functionalities of the hiring software. As a
result, the lack of industry standards and federal regulation¹ for algorithmic hiring tools
results in unique adoption and use practices by organizations and their workers (Forman &
Glasser, 2021).

¹ To date, there is no specific federal legislation governing the use of hiring technologies.
Algorithmic hiring tool developers are mandated to adhere to federal legislation related to
discrimination against protected classes such as race and gender. However, laws related to the
specific use and impact of AI in hiring are found only at the state and local government level.
Blending Expertise and Technology Use
Integrating new forms of technology in the workplace is not a new phenomenon,
and the ways that algorithmic hiring technologies are introduced as tools are no different.
However, the ways employees appropriate these tools and assess them against their own
expertise is different and worthy of a closer look. Communication and technology scholars
have long questioned the ways that organizations and their workers modify their work
experiences, practices, and structures when adopting new technologies (Barley, 1986; Fulk,
1993; Leonardi, 2012; Orlikowski & Scott, 2008).
In regard to algorithms and other AI-powered systems, these sophisticated tools are
augmenting (and at times, replacing) the work of humans in many industries (Faraj, Pachidi,
& Sayegh, 2018; Huysman, 2020). In radiology, innovations in deep learning algorithms are
able to detect and classify different forms of skin cancer with the same precision as, or at times
even better than, a doctor (Hosny, Parmar, Quackenbush, Schwartz, & Aerts, 2018). Amazon
Inc.’s Amazon Go—a chain of cashier-less grocery stores that launched in 2018—uses
computer vision, sensor fusion, and deep learning to remove the role of a cashier by
allowing customers to shop and check out using their phones (Polacco & Backes, 2018). In
journalism, algorithms are substituting for the work of journalists by sourcing material and
generating written articles for publication (Diakopoulos, 2020). For these skilled
professions, where a particular portion of a worker’s expertise is deemed suitable to be
replaced by an algorithm, the use of AI-powered technologies has a developmental and
sustaining impact on an individual’s sense of expertise at work (Faraj et al., 2018). The field
of recruitment is an interesting research site for examining how experts negotiate the use
of algorithms in work processes.
Recruitment experts navigate a heterogeneous landscape of work that is becoming
riddled with algorithms. These algorithmic hiring tools and their AI-powered systems
provide a space where experts (i.e., recruiters) can saliently compare their own expertise
against computed and trained recommendations from AI-powered tools at work. One focal
point of this process that is less understood, both anecdotally and empirically, concerns the moments
when a recruitment professional uses their expertise to override, intervene in, or adjust the
output of the algorithms that are designed to aid their work efforts.
Recruiters in an organization come to rely on their own expertise when making
sense of how to advantageously integrate algorithms into their own work (van den Broek,
Sergeeva, & Huysman, 2019). Most knowledge work that requires the use of AI-powered
technology for work processes is carried out by domain experts: non-technical
experts who are unfamiliar with the technical operations and calculations (referring to how
an AI system uses data to make a decision or provide an output) of the algorithms used on
the job (Barley, Treem & Leonardi, 2020). To complete their work, experts turn to
developing practical theories about algorithmic processes to rationalize their use in work
processes (Cameron, 2020). Practically speaking, when experts (i.e., recruiters) interpret
the results after using an algorithmic hiring tool, they tap into their own domain expertise
to evaluate the efficacy of the AI tool’s output. This interpretation of the AI-powered
output is later included as a factor that is measured against their own expertise judgments
during decision-making processes.
Past research suggests that individuals interact with an algorithm’s output by
engaging in two stark ways: appreciating the output or rejecting the output (Dietvorst,
Simmons, & Massey, 2015; Logg, Minson, & Moore, 2019). However, lay theories of
algorithmic use in hiring suggest that recruitment professionals who use algorithmic
hiring tools may also undertake steps to modify or adjust specific input data in order to
regenerate a favorable output from the algorithmic tool (Bogen & Rieke, 2018). When experts
intentionally intervene in, manipulate, or adjust the computed output of a work technology
in order to align a result with their expertise-held beliefs, this behavior has consequential
effects on the veracity and value of a technology to an organization (Bogen & Rieke, 2018;
Engler, 2021). Adding to this, adjusting an algorithm idiosyncratically may have perverse
effects for organizations in upholding ethical, unbiased AI-technology practices (Raghavan,
Barocas, Kleinberg, & Levy, 2020; Raghavan & Barocas, 2019). While there are benefits in
documenting the prevalence of how domain experts purposefully intervene in processes
related to algorithmic hiring tools, it naturally raises the question: Why are experts
misappropriating AI-powered tools?
The Relationship Between Expertise and Algorithms
Developing theories in AI-human interaction provide evidence that individuals make
task-oriented decisions with algorithms in two ways: algorithm aversion and algorithm
appreciation (Dietvorst et al., 2015; Logg et al., 2019). The term algorithmic aversion refers
to a process where individuals disagree or reject the output from an algorithm. Dietvorst et
al. (2015) details how this phenomenon is amplified when an algorithm errs or makes a
mistake. On the other hand, a series of experiments in Logg et al. (2019) finds that lay
individuals working on a task will adhere to advice from an algorithm rather than a human;
they term this phenomenon of accepting AI-driven recommendations algorithm
appreciation. Complicating these findings, when individuals possess a form of domain
expertise, their reliance on an algorithm’s recommendation wanes. These findings point
to the need for organization and technology scholarship to “compare a theory of expertise
with theory of machine” (Logg et al., 2019, p. 100).
Over 30 years ago, in his book Artificial Experts, Collins (1990) argued that an
algorithm’s output is only true if it is recognized as true and legitimate by others.
Recognizing the legitimacy of a technology's output is a social process. While an
algorithm’s output is objectively derived through trained, complex calculations, its
interpretation by others is a subjective, social, and human process. An algorithm’s output is
rendered true when it adheres to the social and cultural context where it is
implemented. This dissertation echoes Collins’ (1990) decades-old assertions about
an algorithm’s significance being relegated to the norms of its sociotechnical environment and
offers a fresh examination into the role of expertise in motivating and influencing the
recognition, legitimization, and use of algorithms at work.
More specifically, this dissertation explores the ways that experts evaluate the
meaning and value of implementing AI-powered technologies into their work. A guiding
research question of this project is: “How do experts who work with an algorithmic expert
system account for a technology that is central to their practice?” I examine this question
in the context of the recruitment and talent acquisition industry through interviews with
recruiters across multiple work sectors. In particular, the scope of this project
focuses on the technology practices of recruitment professionals engaged in work that
involves using AI-powered algorithmic hiring tools when screening job applicants.
Research on expertise and AI-human interaction points to the way that a recruiter’s
expertise plays a critical role in how the output of an algorithm is accepted, rejected, or
modified to an extent when using algorithmic hiring tools to support decision-making
processes.
This dissertation explores the antecedent factors that explain how experts become
motivated to misappropriate AI-powered tools by asking the following research questions:
What factors influence an expert to intervene in the recommended outcome of an
algorithmic hiring tool? More specifically: When do experts accept or not accept the
recommendations from an algorithm? The aim of this project is to empirically examine
how recruitment professionals negotiate the use and legitimacy of AI-powered hiring tools
against their own expertise. Findings from this work document how an expert’s
perception of expertise can help explain how individuals use technology in a way that
impacts real-world decisions around hiring practices. Secondly, this project extends our
theoretical understanding about the precedence and scope of expertise in influencing
human and technology (i.e., AI-powered technologies) decision-making.
The dissertation is structured as follows. Chapter 2 introduces the nature of
knowledge work within recruitment and talent acquisition industries and the roles that
recruiters play within the hiring lifecycle. This chapter introduces the types of expertise
that recruiters use when screening job applicants and differentiates between human
expertise and AI-powered technology in decision-making processes. Chapter 3 introduces
relevant theories of AI-human interaction; this section provides an overview of recent
research describing how and why humans reject, accept, or modify outputs provided by
algorithms. The research questions of this dissertation are proposed at the end of this
chapter. Accompanying the previous chapter on theory, Chapter 4 provides a detailed
account of methodological considerations and data collection strategies. Drawing from
qualitative interviews (n= 42) with recruiters across a mixed set of industries, this chapter
describes the scope of the interviews that were conducted, the considerations for
prioritizing how informants disclosed sensitive work-related processes, and the
procedures taken for analyzing the interview data. Chapters 5 and 6 provide an extended
discussion of the findings of this project that situate the use of AI-powered technologies as
being entangled in a system among experts who are end-users, technology developers, and
organizational stakeholders. Chapter 7 provides a discussion about the findings in relation
to current theories and practices of how AI-powered technology, expertise, and the role of
the organization interact to influence how recruitment professionals make decisions that
are central to their work. It is helpful to conceptualize the process by which experts
perceive, use, and at times misappropriate AI-technology as part of a dynamic multilevel
sociotechnical ecosystem in an organization. Finally, the chapter provides recommendations for future
scholarship across theoretical intersections and offers considerations for applied use of AI-
powered hiring tools in organizations.
It should be noted that this project does not advocate for or against the use of AI-
powered tools in hiring. The goal of this work is to empirically and theoretically ground the
experiences of experts within organizations who are mandated to use these emerging
forms of technology in addition to their own expertise. My analyses and findings provide a
deeper, clearer explanation of how the use of evolving technologies defies the design goals of
objectivity set by their developers. In fact, human experts interact with AI-powered tools
across social and technical boundaries to shift prescribed organizational outcomes to
reach human-AI centered decisions.
CHAPTER TWO: THE ROLE OF EXPERTISE IN AI-POWERED RECRUITMENT
To form a comprehensive understanding of how AI-powered hiring tools are
embedded into recruitment decisions, this chapter details how the history of recruitment
practices has influenced the development of emerging technology across the various
stages of the hiring lifecycle. The following sections present a brief history of modern
recruitment practices that elaborate on the types of expertise that recruiters possess and
use when sourcing, screening, and selecting talent. Within this discussion of expertise, a
term called trading zones is introduced to demonstrate the theoretical relationship
between recruiters (i.e., human experts) and AI-powered technologies when they interact
during talent screening decisions.
The History of Modern-Day Recruitment
In the United States, the modern-day industry of recruitment traces its infancy back
to World War II. At this time, the prevalence of recruitment agencies soared as the need to
fill labor market vacancies increased when soldiers were called to war (American Staffing
Association, 2016). After WWII, these recruitment agencies sustained the recruitment
industry and established their permanence by helping soldiers find work upon their return
home and expanding their services to a variety of different labor industries. Up until the
1980s, job recruiters acted primarily in headhunting roles by recruiting job applicants
through print job ads in newspaper classifieds, posting on physical job bulletin boards, and
storing applicant data in physical rolodexes, paper resumes, and business cards. This
practice evolved digitally into the 1990s when the use of computing introduced online job
boards, the use of resume databases, and the storage of job applicant data through
applicant tracking systems.
Applicant tracking systems are proprietary software that are often purchased for
human resource departments in an organization and allow recruiters to access job
applicant data to actively pursue and recruit candidates to fill job requisitions or open
positions at a company. Reflecting current technological innovations, recent changes in the
practice of recruitment involve greater use of mobile devices, development of more
sophisticated job boards like Monster.com and Indeed.com, the use of social networking
sites for hiring like LinkedIn.com, and the nurturing of internal talent pipelines through
extensive digital data collection (Lee Cruz, De Souza, & Boukani, 2016). With the rampant
adoption of technology in recruitment, recruiters leverage the use of emerging
technologies to aid across the hiring lifecycle: a process that begins with sourcing
applicants and ends with extending an offer (Bogen & Rieke, 2018).
The Process of Hiring
Hiring is rarely a single decision that is made by a single individual. Rather, hiring is
conceptualized as a process that involves narrowing down decisions about applicants
through different levels of decision-making. Within the realm of high-volume recruitment²,
or the management of large numbers of job applicants for a position, it is useful to
conceptualize this process as a type of decision funnel that results in rejecting or offering a
job to an applicant (Bogen & Rieke, 2018; Lee Cruz et al., 2016). In high-volume recruiting,
recruiters often make use of an applicant tracking system to manage job applicant data and
aid in making decisions about applicants. The different stages in the process of hiring for
high-volume recruitment positions include sourcing, screening, interviewing, and selecting
job applicants. Furthermore, AI-powered hiring tools are often integrated into these
different hiring stages across the applicant tracking systems that recruitment professionals
use.

² High-volume recruitment can be broadly defined as hiring for a large number of positions in a
given time frame. In a 2018 report by Jobvite, a typical job advertisement attracts about 59
applications, while high-volume recruitment positions average about 250 applications per position.
The initial process of hiring begins with finding and attracting potential candidates
to apply for an open position. This stage is referred to as sourcing and involves recruiters
working with hiring management to develop and post a job advertisement, actively
reaching out to potential candidates on recruitment social networking sites like LinkedIn,
and curating a pool of applicants, or what is often referred to as a hiring pipeline. Following
the sourcing of candidates, recruiters often begin to screen or assess candidates against
the criteria for a job at the organization. Screening involves analyzing the experience, skills,
and other relevant characteristics of an applicant against the requirements of a job opening. Given
the sheer number of applications per job position, predictive technologies such as AI-
powered hiring tools are readily used during this stage by recruitment professionals. These
tools automate elements of decision-making processes such as ranking and matching
relevant information about candidates for a job, scoring the qualifications of an applicant
based on their job application materials, and parsing through job applicant data in an
attempt to narrow down the best candidates for an interview (Bogen & Rieke, 2018;
LinkedIn, 2018).
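As a concrete illustration of the ranking-and-matching operations described above, the following minimal Python sketch scores a toy applicant pool against a job’s required skills and keeps a shortlist. The skill-overlap scoring rule is an assumed simplification for exposition, not any vendor’s algorithm.

    # A minimal sketch of the ranking-and-matching step described above:
    # score each applicant against a job's required skills and sort the pool
    # so the closest matches surface first. The scoring rule (fraction of
    # required skills present) is an illustrative assumption.
    from dataclasses import dataclass

    @dataclass
    class Applicant:
        name: str
        skills: set

    def match_score(applicant, required_skills):
        # Fraction of the job's required skills found in the application.
        if not required_skills:
            return 0.0
        return len(applicant.skills & required_skills) / len(required_skills)

    required = {"sql", "python", "communication"}
    pool = [
        Applicant("Applicant A", {"sql", "python"}),
        Applicant("Applicant B", {"communication"}),
        Applicant("Applicant C", {"sql", "python", "communication", "excel"}),
    ]

    # Rank the pool and keep a shortlist of top candidates for interviews.
    ranked = sorted(pool, key=lambda a: match_score(a, required), reverse=True)
    for applicant in ranked[:2]:
        print(applicant.name, round(match_score(applicant, required), 2))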
The result of screening job applicants is the creation of a shortlist of candidates who
are invited for an interview at an organization. The interview process for a position is
typically conducted in different stages through phone, video conferencing, or in person
interviews with a hiring manager and other relevant employees at an organization.
Although the focal point of this project does not examine how AI-powered hiring tools are
used in job interviews, there are a growing number of algorithmic hiring tools that partially,
or at times, completely automate this process for recruitment professionals. The interview
serves as a time where recruitment professionals can strategically assess the quality and fit
of an individual applicant for the position they have applied for. Following the interview
process, hiring reaches its final stage, where candidates are either
selected and extended an offer of employment, or they are rejected for the position.
Typically, the hiring process varies by the amount of time, resources, and availability of
both recruitment professionals and the applicants involved in the process (Mercer, 2021). In
each stage of the hiring process, recruitment professionals often leverage their tools, skills,
and expertise to facilitate their performance.
Figure 2: Hiring as Funnel Process
Note: The left section of the graphic lists the four stages of the hiring funnel:
sourcing, screening, interviewing, and selecting a candidate for a position. The right side of
the graphic lists the job tasks that recruiters typically engage with during each stage of the
hiring funnel. This graphical figure was created by the author.
The Relevance of Studying Expertise in AI-Powered Work
While the hiring funnel encompasses four stages of work that are central to the task
responsibilities of a recruiter, this study examines the task responsibilities
associated with the second stage of hiring: the process of screening job applicants for their
fit and qualifications for a role. Although the use of algorithmic hiring tools is prevalent in
this process, recruiters also tap into their own relevant expertise to do their work
effectively. To gain a working understanding of how individuals apply their
expertise to their line of work, the following section details theories of expertise
in knowledge work relevant to recruitment.
In their theorizing about the implementation of algorithms in knowledge work
processes, Faraj, Pachidi, and Sayegh (2018) warn scholars and practitioners that algorithms
will have consequences for the conceptualization of professional expertise when a
technology’s “co-existence with humans is likely to lead to clashes over what type of
expertise is valued and whose expertise has primacy” (p. 65). That is, as new forms of AI-
powered tools are designed and used with similar levels of expertise comparable to a
human employee, organizations are faced with a challenge of understanding the processes
and potential consequences of outsourcing (or augmenting) work from humans to
technologies for tasks. While any potential work challenges related to expertise varies
across organizations, employees, nonetheless, may experience a reconfiguration of their
relationship to their professional responsibilities and ways of knowing due to the utilization
of AI-powered technologies (Faraj et al., 2018; Waardenburg, Sergeeva, & Huysman, 2018).
How is Expertise Defined?
The design and use of algorithmic hiring tools in recruitment require the
involvement of an organization (referring to both relevant management and workers) and a
third-party developer of a technology. These two parties engage in a collaborative
design process. The purpose of the
collaboration is to design a technology that fulfills a need for the organization. In the
context of designing an AI-hiring tool, an organization will need to identify work tasks or
processes pertaining to a recruiter’s job responsibilities that can be integrated into a
workflow for potential automation (Whittaker, Crawford, & Dobbe, 2018).
While the identification of specific types of knowledge and skill embodied within a task can
be viewed as a form of professional expertise (Faraj et al., 2018; Sergeeva et al., 2018), this
study conceptualizes expertise at a multidimensional level: expertise can be the
embodiment and acknowledgment of knowledge at an individual, group, or organizational
level across complex relationships involving actors that include humans and technologies
(e.g., Fulk, 2016). The embodiment of expertise refers to the possession of expert
knowledge within a given domain, while the acknowledgment of expertise refers to the
recognition that an entity has that expertise. While this project treats expertise as a
multidimensional phenomenon, it is important to consider how expertise has traditionally
been attributed at the individual level.
Communication scholarship has often conceptualized expertise as a property
inherent within individuals (e.g., Ackerman, Pipek, & Wulf, 2003; Treem & Leonardi, 2016).
Expertise is conventionally thought of as the mastery of knowledge within a domain and
“involves the wisdom or competence in what a person can do given mastery of source
materials” (Fulk, 2016, p. 273). Within organizations, a person’s expertise is realized through
a collective process whereby individuals are identified and recognized as experts by others.
In turn, these identified experts must also acknowledge their ability to provide relevant
expertise to a given task or process (McDonald & Ackerman, 1998).
McDonald and Ackerman (1998) explain how expertise is a dynamic individual trait
manifests in relation to organizational processes.
The term expertise assumes the embodiment of knowledge and skills
within an individual. Our definition distinguishes expertise, which is a range,
from expert. An individual may have different levels of expertise about
different topics. Expertise can be topical or procedural and is arranged and
valued within social and institutional settings (p. 315).
The context of a situation and a person’s relationship to their environment shape
the effective use of that person’s expertise. For example, a software engineer at a startup
company may consider themselves an expert at full stack developing (i.e., the ability to
develop both client and server software), but if their coworkers do not perceive them to
have this form of expertise, they are unlikely to assume responsibilities or be designated to
work on tasks related to their specialized form of expertise. In short, if an expert is not able
to have their expertise assessed by others, then their expertise cannot be recognized.
Collins and Evans (2007) highlight this practice-centered study of expertise by explaining
how, over the past half-century, scholars have moved away from conceptualizing expertise
as a calculated amount of skill that an individual possesses and toward viewing it as a trait that a
person enacts through practice and with experience.
A practice-centric view of expertise consists of two forms: contributory and
interactional expertise (Collins & Evans, 2007). Contributory expertise refers to an
individual’s ability to competently perform an activity, whereas interactional expertise is a
person’s ability to communicate about specific knowledge. Adding to this, interactional
experts do not have domain expertise within an environment, but they are well-equipped
to talk about that specific domain expertise. Collins and Evans (2007) contend that
interactional experts are most useful in interdisciplinary, collaborative work settings that
require the expertise of individuals from different knowledge domains. They offer an
example of an interactional expert in the context of a scientific laboratory setting: an
interactional expert could be a lab mate who is able to communicate with others about
experiments, data analyses, and other types of practices related to the manipulation of
laboratory objects. However, this particular lab mate is unable to satisfactorily perform
these manipulations themselves because they lack the tacit and professional competence
to do so.
Interactional expertise is often exemplified as the ability to attain a
practical understanding of a skill rather than having the ability to perform the skill. This
practice-centered form of expertise is helpful in interdisciplinary work where tasks are
coordinated across different specializations, because it allows people to cross disparate
knowledge boundaries toward a collective goal (Collins, 2011). Both contributory and
interactional expertise treat knowledge as either tacit, which refers to knowledge that is
acquired by experience, or as explicit knowledge, which refers to knowledge that is
specific, learned, and explanatory. For example, in the context of learning to ride a bicycle,
explicit knowledge would refer to the instructions a person follows from a user manual:
the steps that refer to a person adjusting the bicycle’s seat, putting their foot on a pedal,
and thrusting in a forward motion. In contrast, tacit knowledge refers to the intuition a
bicyclist follows when they mount a bike and just pedal with rhythm. Tacit knowledge is
hard to codify, document, or explain for another person to follow.
Tacit knowledge plays a central role in organizational knowledge production, which
can be one way that emergent forms of expertise are enacted (Nonaka, 1994; Nonaka & von
Krogh, 2009; Nonaka, von Krogh, & Voelpel, 2006). Organizations rely on the conversion
between tacit and explicit knowledge to effectively innovate across individual and group
levels. In other words, when a work task is accomplished through high-quality tacit
knowledge practice, over time, these practices become embedded across individuals and
groups in the organization. In these situations, tacit knowledge becomes explicit
knowledge that is situated and contextualized for an organization’s advantageous use
(Nonaka & von Krogh, 2009; Nonaka et al., 2006). Nonaka and von Krogh (2009) add that
organizational knowledge operates along a continuum of explicit and tacit knowledge. They
argue that not all tacit knowledge undergoes a complete conversion to explicit knowledge;
rather, some forms of tacit knowledge remain as “action, procedures, routines,
commitments, ideals, and emotion” (Nonaka & von Krogh, 2009, p. 636).
Although this view of expertise (e.g., contributory expertise and interactional
expertise) conceptualizes knowledge as tacit and explicit, other scholars extend this
thinking by demonstrating how experts also perform their expertise in light of
workplace dynamics such as knowledge domain, social relationships, and task complexity
(Barbour, Sommer, & Gill, 2016; Goddiksen, 2014). For instance, Barbour et al. (2016) expand
on Collins and Evans’ (2007) tacit vs. explicit knowledge binary and argue for a more
heterogeneous understanding of expertise performances. They theorize four new kinds of
expertise performance: (1) technical, which describes the properties of work or a task; (2) arcane, which describes information germane to the organizational policies related to the work being performed; (3) interpersonal, which pertains to the relational history or relationships involved in the conduct of work; and (4) embodied, which describes the physical requirements needed to conduct work.
Take, for example, work in a healthcare clinic. Technical expertise would be a physician's trained, qualified knowledge of the anatomy of the human body, whereas arcane expertise focuses on mastery of the regulations related to the work. For example, a physician, after visiting with a patient, is aware of the workplace practices that dictate how to protect the patient's privacy. Interpersonal expertise accounts for the ability of a physician to understand the relational history she has with her patients and staff. This relational history aids her in performing expertise because she is aware of the historical context (with others or within the institution she works for). Lastly, embodied
expertise highlights how expertise includes the physical nature of work, and not just the
cognitive processes of a worker. For instance, “a physician has a gut feeling about a
patient’s overall health before consulting a specific diagnostic tool” (Barbour et al., 2016,
p. 70). While not exhaustive, these approaches highlight the dynamic way that expertise is conceptualized given the nature of work—yet each contextualizes expertise at the individual level. Alternatively, Fulk (2016) theorizes expertise as a multilevel phenomenon, whereby expertise can be shared from the individual level to the collective level.
Relying on ecology theory, multilevel expertise focuses on the relationship between
an individual and a collective. Following analogies to biological systems, multilevel
expertise emerges from the processes that individuals carry out, which then form part of
a larger social system. This form of multilevel theorizing examines the relationship
between two levels of an entity (e.g., individual and group) in two processes: compositional
and compilational (Kozlowski & Klein, 2000). In terms of organizations, compositional processes describe phenomena that do not change as they move up to a higher level in the relationship. For instance, organizational climate is representative of a compositional process since the relations between workers need not be hierarchical to determine the overall outcome (Fulk, 2016). In contrast, compilational processes acknowledge the slight differences between individuals at the individual level, which collectively make up a congruent level of expertise. Accordingly, multilevel expertise represents three relationships
within ecological theory: symbiotic, parasitical, and communalistic.
Fulk (2016) offers an example of organizational collective expertise through Contractor, Monge, and Leonardi's (2011) study of an automobile company's crashworthiness
engineering unit. In this work unit, members were engineers with different levels of
expertise. An individual’s expertise was relational in the sense that expertise was attributed
to individuals when others went to them for advice. In their study, they conceptualized
expertise within a multiplex network of both human actors and advice-seeking
technologies used by engineers. In sum, the types of advice-seeking processes represent
cooperative network relationships between actors (both experts and technologies) over
time. As discussed, the study of expertise comprises relationships within complex social
systems across varying levels of analyses and entities (i.e., from an individual trait to a
collective trait and from a human to a technology; Barbour et al., 2016; Collins & Evans,
2007; Fulk, 2016; Goddiksen, 2014; Treem & Leonardi, 2017).
A cornerstone of communicative work processes is the coordination of expertise to facilitate effective collaboration (Barley, Treem, & Leonardi, 2020; Majchrzak, Jarvenpaa, & Hollingshead, 2007; Yan, Hollingshead, Alexander, Cruz, & Shaikh, 2021). By applying a multilevel expertise approach (Contractor et al., 2007; Fulk, 2016) to the
collaborative work between recruiters and the AI-powered hiring tools they use within the
hiring lifecycle, we can treat both entities (i.e., human recruiter and AI-hiring tool) as
actors within a collaborative decision-making environment. If we isolate the process of screening talent, recruiters conduct this task by drawing on their own individual expertise. On
the other hand, algorithmic hiring tools, although not fully autonomous actors, are
programmed with a certain type of domain expertise that provides outputs for a task that
human recruiters receive, interpret, and contend with during applicant screening
decisions. In fact, recent research suggests that workers do not recognize the expertise of
an algorithmic system as similar to their own held expertise (Kellogg, Valentine, & Christin,
2020). One way to think about the intersection of two kinds of expertise in times of
collaboration is through a theoretical framework in science and technology studies (STS)
that is referred to as a trading zone.
AI Hiring Tools and Human Recruiters Engaging in Trading Zones
Trading zones offer a helpful conceptualization for understanding how different types of multilevel experts can effectively collaborate across knowledge domains (Ananny, 2013; Collins, Evans, & Gorman, 2007; Galison, 1997; Lewis & Usher, 2014). Drawing from anthropology, Galison (1997) introduced the term trading zone as a metaphor to describe
how different groups from different ideological paradigms can find common ground and
collaborate with each other. In his original theorization of this term, Galison details that
collaboration between dissimilar groups of people creates mutual linguistic systems akin to pidgins or creoles, which are simplified, hybrid language systems, that can "[to] one degree or another, facilitate local communication between communities of what
would otherwise be mutually incompatible languages while preserving the separateness of
the parent languages” (p. 47). The canonical example Galison (1997) uses to elucidate this
framework is his study of how physicists and engineers, who belong to culturally and technically dissimilar groups, co-developed radar and high-energy physics particle detectors
in MIT’s Rad Lab during WW2. The physicists and engineers, both constituted as domain
experts with distinctive identities, traditions, and claims of knowledge, were able to
collaborate across social and technical boundaries to effectively organize. We can apply
this analogy to humans and their AI technology counterparts. For instance, during a trading
zone interaction, dissimilar groups (e.g., human experts and AI expert systems) communicate through a shared language that allows the groups to collaborate, converge in their thinking, and produce. Management and organization
scholars conceptualize trading zones as an ongoing form of organizing between
experts that is based on the situated activities of each party (Kellogg, Orlikowski, & Yates,
2006).
However, the emergence of a trading zone between two groups of dissimilar
experts does not assume that each group agrees and forms a consensus around ideas that
are being exchanged; rather, trading zones offer a space to allow for “enough mutual
understanding to emerge to allow interdisciplinary productivity or ‘trade’ to occur”
(Galison, 1997; Lewis & Usher, 2014, p. 385) and where the interaction is "always in the making… across cross-boundary coordination that is contingent, emergent, and dynamic" (Kellogg et al., 2006, p. 39). Case-study research on expertise collaboration finds
that individuals who possess dissimilar forms of expertise and knowledge begin to trade
around jointly created boundary systems, with interactional expertise as one possible
result from their interaction (Collins, Evans, & Gorman, 2007; Gorman, 2002, 2010). Galison (1997) argues that the "work that goes into creating, contesting, and sustaining local coordination [is at] the core of how local knowledge becomes widely accepted" (p. 47). Most
recently, trading zones have been studied in the context of computational journalism to
answer questions related to the processes by which journalists work together with
computational engineers to co-create new forms of news innovation (Ananny, 2013; Lewis
& Usher, 2014). These cases highlight the process by which non-technical experts are
situated in an environment where communicative strategies are enacted with technical
experts to support a common goal. Galison (1997) provides more clarification:
"Two groups can agree on rules of exchange even if they ascribe
utterly different significance to the objects being exchanged; they may even
disagree on the meaning of the exchange process itself. Nonetheless, the
trading partners can hammer out a local coordination, despite vast global
differences. In an even more sophisticated way, cultures in interaction
frequently establish contact languages, systems of discourse that can vary
from the most function-specific jargons, through semi-specific pidgins, to
full-fledged creoles rich enough to support activities as complex as poetry
and metalinguistic reflection" (p. 783).
Most studies of trading zones report findings that are situated and analyzed at the
institutional level, and scholars such as Lewis and Usher (2014) highlight how a deeper theoretical understanding of trading zones could benefit from individual-level analyses. By placing an emphasis on individual-level interactions in the collaboration between two actors, we can begin to acquire a clearer picture of relational processes within trading
zones.
The trading zone concept has gained currency across various research domains, including cultural history, education, communication and media, and management studies (Kellogg, Orlikowski, & Yates, 2006; Lewis & Usher, 2014, 2016; Zeiss & Groenewegen,
2009). In management and technology studies, trading zones have been found to be the
product of “ongoing, technology-based coordination practices” to accomplish dynamic
forms of decision making (Kellogg et al., 2006, p. 42). The negotiations over expertise in a trading zone can lead to contradictions with experts' attempts to "hold onto their local knowledge, social identities, and perceived interests as they work across boundaries" (p.
42). While there is no one type of interaction that can occur within a trading zone between
any two entities, the trade that occurs between experts is enacted through the exchange of meaning via actions and their corresponding interpretation (Galison, 1997; Kellogg et
al., 2006). At an organizational level, the trade within a trading zone can refer to the cross-
boundary coordination of actions or behaviors between domain experts (e.g., Kellogg et al.,
2006).
The current study examines how recruitment professionals use their expertise to collaborate with the embodied (i.e., programmed) expertise of an algorithmic hiring tool. This type of collaboration offers a unique opportunity to understand how experts with different forms of expertise (a human expert and an algorithmic system) can achieve effective collaboration across knowledge boundaries and levels of expertise. For this
reason, in asking this study’s central research question-- how do experts who work with an
algorithmic expert system account for a technology that is central to their practice--we
can conceptualize trading zones to represent the multilevel interaction between human
experts and algorithmic hiring tools (e.g., Ananny, 2013; Lewis & Usher, 2014).
Research in the area of human-AI interaction at work has yet to offer a
conceptualization of how human experts negotiate their expertise when collaborating with
AI-powered systems (Kellogg et al., 2020). However, in order to fully understand any
relationship based on expertise collaboration, we must have a clear understanding of the
boundary conditions between any two actors within a trading zone (Collins et al., 2007;
Galison, 1997; Gorman, 2010). In other words, we should form an understanding about the
forms of interaction and perceptions of relationships that human experts develop when
using algorithmic expert systems. To identify any possible types of boundary conditions
that may currently exist between human experts (e.g., recruiters) and AI-powered experts
(e.g., an algorithmic hiring tool system), we can place focus on the current relationships
that humans have with algorithmic systems as part of expertise trading zones. The
following chapter builds towards this conceptualization of a recruiter-AI trading zone
through a discussion about current research of how humans interact with algorithms along
a spectrum of appreciating and accepting them, or by rejecting them.
CHAPTER THREE: ALGORITHMS IN PRACTICE AT WORK
The most overused metaphor is that there’s a war for talent right now, but it’s
actually not a war, it’s a race. And the people that are the fastest-selecting and reaching out
to candidates are the people that win and enjoy a competitive advantage.
-HireVue CEO, Kevin Parker [3]
In today’s race to attract and acquire the best talent for organizations, human
resource departments are turning to algorithmic hiring tools to help optimize their
workflows to get work done quicker and smarter. Specifically, recruiters utilize these algorithms to inform part of their decision-making processes when they screen
job candidates. This chapter helps to contextualize how recruitment professionals use
algorithms in practice. First, this chapter will review recent, relevant research frameworks
of algorithmic folk theories to demonstrate how individuals at large make sense of
algorithms when making decisions. Within this area of research, two dominant approaches
have emerged from the literature that demonstrate how individuals respond to algorithmic
suggestions when they make decisions—algorithmic aversion and appreciation. Drawing on the limited research about how algorithmic aversion and appreciation apply in the context of work, this chapter concludes with a discussion of how a person's expertise can be a
factor that is leveraged when people use algorithms to make decisions and organize.
A Focus on the Social Implications of Algorithms
In recent years, the use of algorithms at work (more broadly, artificial intelligence)
has garnered a substantial amount of attention from scholars, practitioners, and journalists.
[3] HireVue is a leading human resources technology company that specializes in AI-screening products. Parker was quoted in a 2018 CNBC article, "Get ready, this year your next job interview may be with an A.I. robot," by Tonya Riley.
A particular focus in this trend places an emphasis on algorithms as a type of predictive technology, one that is designed and implemented to estimate the likelihood of a future event. [4]
Algorithms can be conceptualized as automated decision-making tools, decision
support systems, recommender systems, algorithmic systems, or simply as “artificial
intelligence"—these terms are used synonymously, and their particular uses
often depend on how technology systems are framed to the end user (Araujo, Helberger,
Kruikemeier, & de Vreese, 2020). The understanding of what an algorithm is, or its
potential to impact a situation, is varied.
Computer scientists often refer to algorithms as a set of machine-readable
instructions that perform a specific task (Bucher, 2017). Meanwhile, social scientists,
specifically communication scholars, are often less concerned with the technical
operations and definitions of algorithms. Communication scholars pay greater attention to
the relevance of algorithms (e.g., Gillespie, 2016), how to hold them accountable (e.g.,
Diakopoulos, 2020), how to reduce bias and increase fairness (Raghavan & Barocas, 2019),
and the degree to which the use of algorithms in a given context impacts current social
phenomena like decision-making (Dietvorst et al., 2018; Faraj et al., 2018; Huysman, 2020;
West, 2018). However, average users of algorithmic systems are often less aware of the implications that these technologies have for their actions (Kellogg et al., 2020).
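To make the computer-science sense of the term concrete, consider a minimal sketch (an illustrative example written for this discussion, not code from any tool studied in this dissertation) of a short, deterministic procedure that performs one narrow task:

    def top_scores(scores, threshold, k):
        # Step 1: keep only the scores that meet the threshold.
        qualifying = [s for s in scores if s >= threshold]
        # Step 2: order the remaining scores from highest to lowest.
        qualifying.sort(reverse=True)
        # Step 3: return the first k results.
        return qualifying[:k]

    # top_scores([72, 95, 88, 61], threshold=70, k=2) returns [95, 88]

Every step here is explicit and machine-readable; the social-scientific concerns reviewed in this chapter begin where such instructions meet human interpretation and use.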
Outside of academic spaces, to the average user of technology, an algorithm is
commonly viewed as a type of "trick," or trick of the trade, that is used as a heuristic to help simplify a process that would be laborious to conduct manually (MacCormick, 2012). The idea of an algorithm as a trick is also described by Gillespie (2016), who contends that when individuals use a technology encompassed by algorithm(s), they experience a sort of magic trick. The end user rarely has any idea how the trick (i.e., the process by which an algorithm arrives at a suggestion) is performed—even the designer of an algorithm (e.g., a software engineer who designs an algorithm for a company's task) may not know how the algorithm computed its results (Burrell, 2016; Crain, 2018). Practitioners and scholars have used the term black box to describe this gap between the technological and social understanding of algorithms (Faraj et al., 2018; Pasquale, 2016). In studies of AI at work, this lack of transparency is often referred to as an explainability affordance, or the inability of an AI system to describe its processes and decision outputs to a user (Robert et al., 2020, p. 27).
[4] For the purpose of this discussion, algorithms are conceptualized in the form of technical code that executes specific procedures in a task. Algorithms have also been studied in embodied forms through social robots, chat bots, and in other machine-type performative ways (Guzman & Lewis, 2019), but this study examines these technologies in the form of visual outputs that are contextually embedded in software and interpreted by human users.
Algorithmic Transparency
In organizations, the lack of transparency (i.e., the black box) in algorithms is
heightened by factors influencing the process of design and deployment of technologies
(Burrell, 2016). First, algorithms are designed, distributed, and maintained by third-party
developers that typically sell their software to organizations. The code that is embedded
into algorithms is considered proprietary knowledge, or intellectual property, that limits
how much those outside a developer’s organization can understand about the technology.
While the practice of keeping code private provides a competitive advantage for
technology developers, a lack of transparency leaves users little choice but to accept the results of an algorithmic technology at face value, an approach described as "deploy and comply" (Crawford & Calo, 2016).
For example, when a company's recruiters use the hiring platform LinkedIn Recruiter to source job candidates for an open position, LinkedIn hides how its algorithms rank, filter, and propose the candidates that recruiters view. Recruiters who use this platform are offered surface-level choices to engage with the platform through Boolean searches, advanced searches, and filtering by relevant skills; however, they are neither privy to, nor have access to, the specifics behind the technology's outputs (LinkedIn Talent Solutions, 2021).
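This division between what recruiters can and cannot see can be sketched in code. In the hypothetical example below (the field names and scoring weights are invented for illustration and do not reflect LinkedIn's actual systems), the filtering function corresponds to the surface-level choices recruiters control, while the scoring function stands in for the proprietary ranking logic they never see:

    def passes_filters(profile, required_skills, location):
        # Transparent layer: the Boolean criteria a recruiter explicitly sets.
        has_skills = all(s in profile["skills"] for s in required_skills)
        return has_skills and profile["location"] == location

    def hidden_relevance_score(profile):
        # Opaque layer: a stand-in for a vendor's undisclosed ranking model.
        return 0.7 * len(profile["skills"]) + 0.3 * profile["years_experience"]

    def candidates_shown(profiles, required_skills, location):
        # Recruiters see only the ordered list this function returns.
        visible = [p for p in profiles if passes_filters(p, required_skills, location)]
        return sorted(visible, key=hidden_relevance_score, reverse=True)

Even in this toy version, a recruiter who adjusts the filters has no way to inspect, or contest, how the hidden scoring function ordered the results.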
Second, the backbone of an algorithmic tool is the source code it is powered by—reading that code (and its technical language) requires a specialized skill that remains inaccessible to the majority of people who engage with the technology (Burrell, 2016; Liao & Tyson, 2021). Crawford and Calo (2016) describe how the scale and scope of designing algorithmic systems become more complex over time as systems are updated with new information. Because of the speed and magnitude of growth in these technology systems, organizations often resort to a "deploy and comply" approach to using algorithms.
In other words, employees who rely on technologies that are composed of algorithmic
systems blindly use these tools without knowing exactly what they do. More surprisingly,
at the fast rate of development of these tools, sometimes the results from algorithmic
systems are inscrutable to the creators of the technologies themselves (Burrell, 2016; Crain,
2018). The opacity of the process behind an algorithm’s output limits a user’s understanding
and interpretation of its results; oftentimes end-users are left using heuristic strategies or
developing folk theories in an attempt to make sense of how these technologies generate
their outputs (Liao & Tyson, 2021).
Folk Theories
Folk theories of technology offer a perspective for a user of a particular technology
to reason about their expectations, interactions, and practices regarding a technology they
use (Norman, 1988). [5] Folk theories help to "bridge gaps between mental conceptions of
phenomena and objectively-demonstrated principles of science” (Dorton & Thirey, 2017, p.
6). In practice, folk theories can serve as high-level frames that technology users create to
understand the operation of a technological system (DeVito, Gergle, & Birnholtz, 2017).
Technology users often rely on these frames to form expectations about how a technology
should perform, and at times, folk theory generation can potentially explain why a user
may negatively or positively react to using a technology (DeVito et al., 2017).
In their application to algorithms, research on folk theories has found them to be explanative because they: (1) offer surface understandings or beliefs about how algorithmic systems work, (2) describe explanatory or causal relationships pertaining to the operation of an algorithm, and (3) provide an account of lived experiences with algorithmic use (DeVito et al., 2018; French & Hancock, 2017). In the early theoretical work
on algorithms and folk theories, Eslami and colleagues (2016) mapped several types of folk theories that individuals used to explain why they were shown certain content on social media feeds. The following table shows a few examples of folk theories that individuals created in response to algorithms to explain their interactions with algorithmic outputs.
[5] Following established lines of research about folk theories of technology (e.g., Dorton & Thirey, 2017; Eslami et al., 2016; French & Hancock, 2017; Norman, 1988), this dissertation examines the ways technology users rely on folk theories as guides to explain their perceptions about the operations of a particular technology. This conceptualization differs from a related research area on folklore narratives and the impact of storytelling strategies within organizations (Boje, 1991). Instead, the study of folk theories of technology places a focus on user perceptions of cultural and symbolic representations within digital technology use.
Table 1: List of Folk Theories about Social Media
Theory of Loud and Quiet Friends: The frequency of posts from a sender influences how likely their posts are to appear on my feed.
Narcissus Theory: The more similar a poster is to me, the more likely their content is to appear.
Format Theory: Certain types of stories will appear more often (e.g., videos over text).
Personal Engagement Theory: The more interactions I have with someone, the more likely their content is to appear on my feed.
Fresh Blood Theory: Posts from new/recent followers are more likely to be shown on my feed.
Randomness Theory: There is no pattern to the algorithm.
Global Popularity Theory: The more likes something gets, the more likely it is to appear on my feed.
Eye of Providence Theory: The app knows a lot about me and curates each post based on what it thinks my preferences are.
Source: This list is adapted from Eslami et al. (2016) and Liao & Tyson (2021).
In addition to speculative ideas from users, folk theories also emerge from
organizational changes made to an existing algorithm. For instance, if a company changes
the algorithm for its news feed from a time-based model (e.g., showing posts in chronological order) to a curation-based model (e.g., showing posts based on user preferences), these types of system-wide changes are a contributing factor to
algorithmic-based folk theories (DeVito et al., 2018; French & Hancock, 2017). Along with
identifying the types of folk theories that people create and hold about algorithmic
processes when they use general forms of technologies, folk theories have also been
studied as a way to explain people’s attitudes and behaviors when using AI-powered
technologies (Eslami et al., 2016; French & Hancock, 2017; Liao & Tyson, 2021).
As AI-powered technologies continue to infiltrate most aspects of technological life, our common disposition toward algorithms moves from general exposure toward deep entanglement with our technology practices (Ananny & Crawford, 2018; Liao & Tyson, 2021). Folk theories have been found to facilitate processes
by which people will try to test their predictions and assumptions about how technologies
work, or perhaps even alter their behavior when using a particular technology (e.g., Eslami
et al., 2016).
An example of how individuals may alter behavior as a response to an algorithmic
folk theory is highlighted in a recent study by Liao and Tyson (2021). In this study, the
authors interviewed undergraduates about their experiences using a commercial algorithm [6] that claimed to be able to predict their personality from public online data
about themselves. Liao and Tyson (2021) found that people will form assumptions about
how the algorithm generated an output—and in some cases, they explain how they actively
take steps to influence the algorithm's output. For instance, participants could assume that the algorithm computed their personality score using their public Twitter data rather than their private Facebook data; they formed this folk theory about privacy because they had actively restricted privacy settings on Facebook but not Twitter, and thus could infer where their data were sourced from.
[6] The algorithm used in this study was Crystal Knows, a commercial algorithm that is designed to use public, online data to predict a person's personality. This algorithm is sold to other companies as a plug-in extension that links with social media profiles like LinkedIn. In this particular case, a person can navigate to an individual's LinkedIn profile, execute the plug-in command, and the algorithm computes a personality profile about the person whose profile is being viewed.
In addition to explaining how users rationalize their behavioral responses to AI-powered technology, folk theories have been found to influence individual user
perceptions and behaviors across a variety of different factors, such as the perceived
degree of algorithmic fairness (Lee, 2018), a user’s likelihood to engage in impression
management tactics on social media (DeVito et al., 2018), and the degree of control a user
feels they have over their privacy by limiting or hiding the amount of information
algorithms can collect about them (e.g., enabling particular privacy or viewing settings on
public social media accounts; Lomborg & Kapsch, 2020).
Much of the literature around folk theories and algorithms relies on exploring past
experiences that users have with AI-powered technologies; folk theories of AI report
findings that describe the processes by which an algorithm generated an output as
perceived by the user (e.g., Eslami et al., 2016; Liao & Tyson, 2021; Lomborg & Kapsch,
2020). However, there is less research that investigates moments when individuals have
direct interaction with algorithmic technologies and how folk theories are applied as a
behavioral response (Brayne & Christin, 2020; Christin, 2017, 2020). In other words, there is
a difference between the perception a person has about how an algorithm functions,
compared to the study of ‘algorithms in practice’ (Christin, 2017).
Algorithms in Practice: Aversion or Appreciation of Algorithms
What is of particular interest to organizational scholars and those interested in the
nature of technological innovation at work is how the use of algorithms enables a sense of
automated judgment, or “the replacement -- or at least the augmentation -- of human
discretion by mechanical procedures" (Brayne & Christin, 2020, p. 1). Within AI-human interaction literature, folk theories run parallel to theoretical frameworks of algorithmic use in decision-making by deepening our understanding of how technology users make
sense of algorithms (Liao & Tyson, 2021). Emerging research examining direct AI-supported
decision-making upholds a fundamental element about arriving at judgments: decision-
making is situational—using AI to aid decision-making processes is no different. There are a
variety of contextual factors that influence how users create folk theories about algorithms
and how they choose to either use them or reject their recommendations (Bogen & Rieke,
2018; Burton et al., 2020; Raghavan & Barocas, 2019).
In psychology, researchers have begun to place attention on algorithms in practice
by designing studies that explore how workers interpret and respond to algorithms when
making decisions. Logg et al. (2019) find that lay individuals adhere to advice from an
algorithm rather than a human when making a decision about common knowledge;
they term this phenomenon algorithm appreciation. Within the context of work,
Raveendhran and Fast’s (2021) lab study of workplace productivity tracking found that
employees were more likely to accept and agree with outputs about their performance if it
was generated by an algorithm rather than a human co-worker. They argue that working
with the recommendations of an algorithm provides less social evaluation concerns
compared to working with a human, regardless of task and objectives. Moreover,
algorithms that are promoted to increase general workplace productivity and efficiency are
met with acceptance by individuals (Uhde et al., 2020). For instance, in a field experiment
on nursing staff scheduling at a clinic, Uhde and colleagues (2020) found that nurses saw
the possibility of having an algorithm automate general tasks like shift scheduling by
availability as fair and helpful. To date, lay users of technology (and their developers) generally agree that, when compared to human decisions, algorithmic decision systems are accurate, consistent in their results, objective, and faster (Logg et al., 2019; Tambe et al., 2019).
Algorithmic Aversion
Yet, despite the ubiquity of algorithmic tools, another stream of AI-interaction research demonstrates that people may tend to avoid using algorithms to aid their decision-making process—this behavior is referred to as algorithmic aversion (Burton et al., 2020; Dietvorst et al., 2018). Algorithmic aversion is defined as "the reluctance of human decision makers to use superior but imperfect algorithms" (Burton et al., 2020, p. 220). In its initial conceptualization, the term refers to a process whereby humans prefer to rely on the advice of human forecasters over that of algorithms. Dietvorst, Simmons, and Massey (2015) find that this behavior is a
situational response that arises when individuals see an algorithm err or make a mistake. However, in
a series of experiments, they conclude that, overall, individuals have more confidence in human decision makers than in an algorithm's suggestions, even when they see the algorithm outperform a human.
In a recent review of 68 years (1950-2018) of research on algorithmic aversion across various contexts involving automated support systems, Burton et al. (2020) propose five causes of why people generally are disinclined to use algorithms: (1) people feel as if they lack control over the decision-making process of an
output, (2) people have different expectations about what an algorithm’s output may be, (3)
there is a lack of incentivization for adhering to the algorithm’s result, (4) people feel
algorithms do not make rational decisions based on context, and (5) people often feel decisions are made with intuition, a property that algorithms are not programmed to have. Algorithmic aversion is amplified when these technologies assist decision-
making processes about subjective evaluations (Lee, 2018) or when used to inform
questions related to moral and ethical considerations (Jago, 2019); this aversion is a result
of technologies lacking a complete human mind, thus restricting the rationalizations that a human is capable of (Burton et al., 2020; Bigman & Gray, 2018). Based on findings
from a series of experiments that tested the degree to which individuals would prefer
autonomous agents to make high-stakes and morally relevant decisions (e.g., legal, medical,
military, driving, etc.), Bigman and Gray (2018) find that people are generally averse to having technology predict and influence these types of decisions:
“Machines are becoming ubiquitous in modern society, with algorithms
making decisions about navigation (Google Maps), advertising (Amazon), and
even dating (OK Cupid). Although people are often indifferent about the
relentless creep of artificial intelligence, they appear to be less accepting of
machines making moral decisions. When human life and death hang in the
balance, it seems that we want another human—with a fully human mind—to
make the call” (p. 32).
As argued, decisions supported in conjunction with algorithmic tools are useful depending on the context in which they are employed (Burton et al., 2020).
In the current state of research surrounding AI-interaction and decision-making,
Burton et al. (2020) join Logg et al. (2019) in noting that when and why humans use or reject AI-powered technologies is a rich and murky area with context-dependent findings. While these two streams of research diverge in their main findings (i.e., aversion vs. appreciation of algorithms), one factor overlaps both: the level of expertise of a human decision maker engaged with an algorithm influences how likely they are to accept the algorithm's recommendation.
Expertise and Algorithmic Use
While research in the area of human-AI interaction at work (and beyond) has yet to offer a clear conceptualization of how an individual's expertise impacts their relationship with algorithms (Kellogg et al., 2020), some research is beginning
to shed light on this dynamic. In their initial writings, Logg et al. (2019) advocate for organizational scholars to "compare a theory of expertise with a theory of machine" (p. 100). This research call was prompted by a series of experiments which found that an individual's level of expertise negatively influenced the likelihood of their accepting advice from an algorithm. Langer and colleagues (2020) echo
findings toward aversion in their laboratory studies of individuals using automated support
systems (i.e., an algorithm-powered tool) before, after, or during a decision-making task.
They found that participants reported feeling more satisfied and confident with their
decisions only after they had completed a task on their own and used the algorithmic
system to verify their work. Another study suggests that when given the choice between following the advice of a human with domain expertise in a topic and that of an algorithm trained with domain-relevant data, individuals were more likely to seek and trust the opinion of the human adviser over the forecasted prediction of the algorithm
(Dietvorst et al., 2015).
Wang, Molina, and Sundar (2020) offer an alternative view of how AI-enabled
technologies may play an important role in decision-making processes. In their study, they
found that recommendations from both an AI system and human expert can influence a
decision judgment; however, participants perceived less identification with the AI system
compared to a human expert. In other words, humans can rely on algorithms to make
certain decisions, but they view algorithms as cold, static, and transient entities. Wang et
al. (2020) suspect that since the task in their experiment was low-stakes (e.g., agreeing with
the classification of a photo), participants may have relied on machine heuristics and
positively evaluated the algorithm’s performance as expert for a low-stakes and low-cost
decision. Consequently, Wang et al. (2020) provide a theoretical and practical example of how AI systems are perceived not only as mere tools but also as experts that can achieve similar levels of influence as human experts (Guzman & Lewis, 2019; Wang et al.,
2020).
The Rise of Algoactivism
Kellogg and colleagues (2020) argue that when organizations implement algorithmic expert systems (e.g., the practice of employees using algorithmic technologies in lieu of human labor), these work policies inherently allow organizations to exert a degree of control over their employees. They propose that while algorithms are deployed as technologies that enhance employee performance, algorithms can also restrict and nudge the behaviors of the employees who use them, monitor and record their actions, and, at times, discipline workers by enabling visible reward systems. These mechanisms manifest during an employee's interaction with algorithms, and they may amplify the hold of control that organizations can exert over their employees. Kellogg et al. (2020) advance a concept called "algoactivism" to describe tactics
that employees use when resisting the recommendations from algorithms through
practical action.
For example, Valentine and Hinds (2019) find that fashion buyers who are required to use algorithmic recommendation systems in their jobs would ignore the recommendations from the algorithm. The fashion buyers often rejected the automated recommendation because they perceived the advice to be inconsistent with their own experience on the job (Valentine & Hinds, 2019, as cited in Kellogg et al., 2020), or their own
expertise (e.g., Bailey & Leonardi, 2015). Similarly, Christin (2017) found that web journalists
and legal professionals would ignore risk scores and data analytics and actively not
consider them as part of their corpus of data when making 'data-driven decisions.' For
example, journalists relied on their own sources and sense of expertise rather than logging
into an analytics program that forecasted trends to cover. In legal settings, during criminal
court proceedings, workers at one county courthouse hid AI-generated recidivism risk
scores of defendants in files that were reviewed by pretrial parties and probation officers.
Christin (2017) noted that algorithmically computed risk scores were placed at the end of hundreds of pages' worth of paperwork and, unlike other relevant components of a defendant's file, were usually not annotated. These two examples highlight the
relevant challenges that experts face when tasked to use and trust outputs from
algorithmic systems and tools.
One way to explain moments of algoactivism—or the behavioral responses of
experts attenuating the effect of algorithms in their decisions and overall work (Christin,
2017; Kellogg et al., 2020)—is to understand the folk theories that experts form about the
nature of the algorithmic systems that they are tasked to use for work (Christin, 2017; Liao
& Tyson, 2021). In her study of how journalists and legal professionals resisted using
algorithms, Christin (2017) speculates that experts "have distinct algorithmic imaginaries, in the sense that they interpret and make sense of algorithms in different ways" (p. 10).
Through persistent use of algorithms in their daily work, experts come to form opinions
and rationalize their experiences with using algorithmic tools. Essentially, they create folk
theories about algorithmic technologies just like lay people, but they are able to rationalize
their theories with specialized tacit and explicit knowledge about their job tasks—expert
folk theories can be a way to make sense of past experiences and can be used as cognitive
tools to justify future use of the technology (Christin, 2017; Kellogg et al., 2020; Liao &
Tyson, 2021).
Following this current discussion of how individuals may invoke their expertise in
influencing their use of algorithmic systems, we can return to a theoretical concept called
trading zones that was introduced in Chapter 2. As a refresher, a trading zone refers to the
collaborative interaction that occurs when dissimilar experts work together (Galison, 1997).
In relation to the current context of this dissertation, trading zones help to demonstrate
the theoretical relationship between recruiters (i.e., human experts) and AI-powered
technologies when they interact during talent screening decisions. This dissertation places
an emphasis on one type of AI process within recruitment: screening. Reflecting how screening is conducted in high-volume recruitment, recruiters screen job candidates with the assistance of algorithmic hiring tools (Bogen & Rieke, 2018).
In a recruiting setting, both a human recruiter and an algorithmic hiring tool
possess expertise that is applied within the algorithm-recruiter trading zone. For instance,
both the algorithmic hiring tool and the human recruiter have technical domain expertise—the recruiter's expertise consists of abstract knowledge about the leadership skills that applicants can bring to bear on the job, while the algorithmic hiring tool can use its
technical expertise to compute a leadership score using applicant data. Given Kellogg et al.'s (2020) research about the processes of algorithmic aversion, and Christin's (2017) and Liao and Tyson's (2021) work on how individuals generate folk theories as a way to rationalize and justify their responses to algorithmic systems, I theorize that, in order to understand the negotiations over expertise within an algorithm-recruiter trading zone, the degree to which tacit [7] (rather than explicit) knowledge negotiations support a recruiter's algorithmic folk theory will be imperative to determining how human recruiters accept, reject, or modify their use of algorithms.
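The asymmetry between these two forms of expertise can be illustrated with a deliberately simple sketch of how such a leadership score might be computed (the feature names and weights are hypothetical and are not drawn from any vendor's actual model):

    def leadership_score(applicant):
        # Explicit knowledge only: fixed features and fixed weights.
        weights = {
            "teams_led": 0.5,          # count from application data
            "years_managing": 0.3,     # parsed from employment history
            "assessment_score": 0.2,   # normalized trait-assessment result
        }
        return sum(weight * applicant[feature] for feature, weight in weights.items())

Whatever tacit sense a recruiter has of leadership, a gut feeling formed through years of screening, has no slot in such a computation, which is precisely why the tacit/explicit distinction matters for the negotiations theorized here.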
Recruiters are expected to make screening decisions that take into account a candidate's proficiency related to future performance on the job (for example, the specific skills they can bring from previous employment to their future jobs) and are expected to rely on algorithmic hiring tools to filter, rank, and generate qualified outputs (Oberst et al., 2020). However, recruiters tend not to fully accept the recommendations
from the algorithms they are mandated to use (Oberst et al., 2020), and at times, may use
strategies to modify or intervene in the overall output from a system (e.g., Christin, 2017;
Kellogg et al., 2020). The disagreements and overriding modifications between algorithms and human experts could result from a human-algorithm trading zone, where negotiations over types of expertise lead human recruiters to appropriate algorithms by accepting, rejecting, or modifying outputs.
[7] For the purposes of this discussion, tacit knowledge refers to knowledge that is acquired by experience, whereas explicit knowledge refers to knowledge that is specific, learned, and explanatory. In the context of algorithmic programming, research suggests that programming tacit knowledge into an algorithm is a current challenge faced by developers, since this form of knowledge is based on context (such as workplace culture, policies, norms, affective states, etc.; Barley et al., 2020; Christin, 2017; Kellogg et al., 2020).
Overall, the current state of literature on AI-interaction and expertise reveals many
unexplored and challenging questions about the ways that human experts engage and
interact with algorithmic systems at work. Christin (2017) highlights the importance for
scholarship in this area to “study the practices, uses, and implementations surrounding
algorithmic technologies. Algorithms may be described as ‘revolutionary’, but this kind of
discourse is as much prescriptive (algorithms should do all of these things) as it is
descriptive… we need to pay close attention to the actual rather than aspirational practices
connected to algorithms" (pp. 11-12). One helpful way to study these practices is to conceptualize expertise multidimensionally as an actionable, cumulative source of knowledge that expert actors draw from when making decisions. These actors (i.e., human recruiters and algorithmic systems) engage in a trading zone (see Figure 3 below), whereby human experts leverage strategies from their
understanding of how an algorithmic expert system performs. It is important to note that
these interactions occur within the boundaries and context of an organization that
mandates a partnership between human and algorithmic experts. To date, there is very
little understanding about what types of negotiations occur within human expert-
algorithmic expert trading zones (Ananny, 2013; Christin, 2017; Lewis & Usher, 2014). While
folk theories of algorithms help explain why individuals choose to accept or reject AI
recommendations, the theoretical conceptualization about how experts at work engage
with algorithms, form their own context-specific folk theories, and respond by accepting,
rejecting, or modifying algorithm expert outputs exist within a murky and scattered
research landscape. The following research questions guide this dissertation and strongly
consider these existing theoretical gaps within AI-interaction at work.
Human-algorithm trading zone. How do recruiters who work with an algorithmic
expert system use a technology that is central to their practice?
Factors of influence. What factors influence a recruiter to intervene in the
recommended outcome of an algorithmic hiring tool?
Motivations for action. When do recruiters accept or not accept the
recommendations from an algorithm?
Figure 3: Recruiter-Algorithm Trading Zone
Note: Graphic created by the author.
This dissertation takes on several pressing, challenging questions related to algorithms in practice among the experts who use them for work. I focus on human-AI interactions
by exploring these questions in the field of high-volume recruitment and talent acquisition.
This is a field where the use of algorithms (more specifically, algorithmic hiring tools) is
fundamental to decision-making processes within recruiting departments in organizations.
I employ qualitative methods that are well suited to purposefully exploring emerging phenomena in their naturalistic settings and to building theory in these burgeoning areas. In the
following chapter, I introduce the research setting and data collection methods that were
employed in this dissertation.
CHAPTER FOUR: METHODOLOGY
This chapter details the methodological approach used to examine the meaning of
expertise and AI technologies within hiring decisions. First, I provide context to explain the
rationale behind data collection and analytic decisions in light of the COVID-19 pandemic. Next, the chapter outlines the sampling strategies employed, along with a discussion of the creation of the interview guide. Finally, the chapter concludes with details about how the data were pre-processed and analyzed.
The Research Context: Recruitment Practices in a Global Pandemic
To provide a clear understanding about the research methodology employed in this
dissertation, it is necessary to note that the study’s conceptualization, methods, and
analyses were conducted during the COVID-19 pandemic. To start, conducting research
during a pandemic highlighted the need to employ a research design that accommodated
shifts in remote work practices of talent acquisition recruiters. To that end, the data
collection process included sampling and performing interviews with research participants
exclusively online through video and audio-conferencing platforms. Moreover, the COVID-
19 pandemic has resulted in the largest economic disruption since the Great Depression
(Campello, Kankanhalli, & Pradeep, 2020). During the onset of the pandemic, U.S.
organizations faced a unique challenge—a systematic pause in the human capital supply for
work.
Emerging data from labor reports demonstrate how firms reduced the number of job postings offered at their companies, an action that reflects the widespread hiring freezes and layoffs that occurred throughout 2020. While firms tightened the reins on
hiring, metrics about recruiting demonstrated how job insecurity and uncertainty were
among the top concerns for job applicants who were actively searching for work (Klahre &
Klahre, 2020). The increase in furloughs and layoffs across organizations and industries was speculated to have negatively impacted the number of applications received per job
posting and the rate of completion per application throughout the latter half of 2020
(Klahre & Klahre, 2020). These effects could be seen publicly when large, multinational
companies made news headlines regarding their responses to employment within a
pandemic. For example, Coca-Cola cut 2,200 jobs company-wide, LinkedIn
reduced over six percent of its global workforce, General Motors furloughed about 6,500
salaried workers, and Walt Disney World Resort -- the largest single-site employer in the
U.S.-- furloughed roughly 70,000 of its staff across all departments. In July 2020, over 3.7
million jobs were reported to have been terminated in the United States (Wiener-Bronner,
2020). However, over the past year and into 2021, efforts to regain jobs have steadily increased
(Cox, 2021).
The COVID-19 pandemic also upended many work practices that recruiters traditionally employed. While most sourcing and screening processes were performed using digital
technologies even before the pandemic, virtually all high-volume recruitment processes
went digital and were conducted remotely by April 2020. The methodological approach to studying the interactions between recruitment professionals and the algorithmic hiring tools they use accommodated this shift to remote work. As a researcher attempting to gain insight into how recruiters use technologies that are fundamental to their jobs, I employed sampling methods and data collection techniques that best compensated for the lack of physical co-presence during interviews or
observations. Virtual ethnographic methods are far from new (Christin, 2020). In the absence of in-person meetings, conducting interviews via a virtual medium was the most viable
alternative to face-to-face interviews given the need to overcome social distancing
restrictions. Although this approach cannot adequately observe the differences between
what people say they do and what they actually do at work, in-depth semi-structured
interviews can shed light on an important social phenomenon—the meaning of AI
technologies among recruiters within their organizations. I examined this phenomenon by
sampling 42 recruitment professionals across a variety of industries.
A Focus on High-Volume Recruitment
This dissertation focuses on one specific type of recruiting: high-volume recruiting.
While there is no industry-standard threshold for what is considered high-volume recruiting, it is largely defined as hiring for a large number of positions within a specific time frame. High-volume recruiting stands in contrast to another prominent type of recruiting called selective recruiting, often referred to as targeted hiring or head-hunting (Rynes-Weller et al., 2013). In selective recruiting, recruiters actively seek and attract talent with a specific skill set or relevant characteristics for a job. Meanwhile, in high-volume recruiting, recruiters cast a wider net by building and maintaining a pipeline of job applicants for a position. In turn, applicants
from this pipeline are vetted and screened using a suite of hiring technologies for fit and
future potential at an organization. A recent figure from Jobvite, a leading recruiting software company based in the U.S., defined high-volume recruitment as averaging about 250 applications per requisition, or open job position. However, the number of requisitions across recruitment departments typically varies depending on the
size of the organization that is hiring.
Startups and small businesses often have about 10-20 job vacancies per year, compared to larger organizations (e.g., those employing more than 500 people), which can average between 80 and 1,000 vacancies (Digital HR Tech, 2020). At the upper end of
recruiting efforts are large and multinational organizations that often recruit for over 1,000
vacancies in any given calendar year. Most high-volume recruiting efforts make use of
robust job applicant pools, or pipelines that are sourced through online searches, social
media (e.g., LinkedIn), internal organization referrals, or through college campus outreach
(Digital HR Tech, 2020; LinkedIn, 2018).
Types of AI-Powered Tools for Hiring Needs
As previously discussed in Chapter 2, high-volume recruiters model their work
through a hiring funnel, which includes processes related to sourcing and attracting
applicants, screening their job applications, interviewing them, and selecting candidates for
an open position. Technology forms part of a recruiter's toolkit, with AI-powered hiring
tools often being integrated into these different hiring stages. A report published in
December 2020 by the Center for Democracy and Technology estimated that over 76% of
organizations with more than 100 employees use an AI-powered assessment (e.g., personality or skills assessments) when making hiring decisions. While algorithmic hiring tools can be used throughout the hiring funnel, this dissertation focuses on
algorithmic hiring tools in employee screening practices. The current market share of
algorithmic hiring tools is fragmented across different industries and vendors who sell
their tools for an organization’s recruitment needs. Often, vendors of algorithmic hiring
tools market and sell their own AI-powered products as proprietary tools that can be
integrated into a client’s application tracking systems (the software that houses job
52
applicant data). The most common algorithmic hiring tools used for screening candidates
can be narrowed down to document scoring tools, facial and voice recognition, games, or
trait assessments (Center for Democracy and Technology, 2020). The following table
provides more details about each of the most used tools in job candidate screening and
assessment.
Table 2: List of Common AI Hiring Tools
Resume or document scoring: These tools are designed to scan documents (often resumes or applicant-provided data) submitted by candidates and to screen, match, or rank the information in an application, or to screen a document for particular trends, terms, or patterns in applicant data. Common applicant characteristics screened from resumes are education, skills, relevant experience, work eligibility, location, number of jobs, and employment history.
Facial and voice expression: These tools analyze data from recorded interviews to process an applicant's digital facial expressions and emotion, word choices, vocal tone, and vocal quality.
Games: These tools analyze data from games played by applicants that often assess skills through simulations of tasks (e.g., measuring an applicant's level of coding skill via a software coding task). Games can also take the form of simulations that measure an applicant's judgment through verbal, logical, or numerical reasoning.
Trait assessments: Like games, trait assessments are tools that take the form of surveys or tests that applicants complete to measure individual-level traits such as personality or ability. Other times, these tools are non-intrusive and use applicant data (e.g., application materials or publicly accessible information about candidates, such as social media profiles) to measure an applicant's traits.
Note: This list was compiled using examples from the Center for Democracy and Technology's "Algorithm-Driven Hiring Tools Report," published in 2020.
High-volume recruiters use algorithmic hiring tools to quickly identify quality candidates and assess their qualifications for a job position. The use of algorithmic hiring tools varies across industry type and across both physical manual work and knowledge work, and its implementation spans organizations worldwide (Bogen & Rieke, 2018). AI hiring software developers and organizations partner to identify tools that match the relevant skills and qualifications that recruiters use to identify talent. For example, a recruiter trying to fill a software engineering position may rely on scores from a coding game assessment, whereas a recruiter staffing a paralegal position may place a higher emphasis on resume scores for GPA, the type of education an applicant attained, or previous job experience.
Most often, recruiters interact with the front-end user interface of an algorithmic hiring tool. An example of this interaction is with Ideal, a recruitment automation software that performs as a virtual AI assistant, aiding recruiters in screening applicant files, chatting with applicants as a chatbot, and administering trait assessments. Figures 4 and 5 are screenshots of a demo interface of Ideal that recruiters view. Figure 4 depicts the file of an applicant named Nikki Holding, where an algorithmic hiring tool has designated their file as an "A" and lists relevant application traits. To the right of the figure is an option where a recruiter can alter the designated score given to the file.⁸ In Figure 5, the applicant file for Nikki reflects their performance on an assessment administered by Ideal, as well as a rating based on the applicant's communication with Ideal's chatbot.
While these figures reflect the interfaces designed by the AI software company (in this case, Ideal), it is common practice for the metrics of these algorithmic hiring tools to be loaded into the hiring organization's proprietary applicant tracking system. Depending on the hiring organization's recruitment processes, recruiters can access, use, and work with the proprietary algorithmic tools within their own suite of in-house technologies at work.
In conducting background research for this project, I reached out to the company
Ideal to participate in an AI product demonstration. During the demonstration, I had an
opportunity to engage with a member of their product development team. They specified
that most AI-powered technologies function best when used for high-volume recruiting
efforts, in which companies receive a minimum of 300 or more applications a month for a
specific role. This requirement for high-volume recruiting is common throughout the
product landscape for AI hiring tools. The efficacy of algorithmic hiring tools is measured
by the tool’s ability to meet the needs of recruiters who fill large quantities of requisitions,
or job openings (HR Research, 2021). To ensure this level of efficacy, developers work with
organizations to update AI-hiring algorithms with recent, relevant sets of training data.
⁸ The complete version of Ideal allows recruiters to conduct more sophisticated forms of intervention than changing a score from one letter grade to another; recruiters can input relevant data that alters an applicant's file within the system.
This knowledge about algorithmic design helped inform the sampling criteria for this study.
Figure 4: Demo of Ideal, AI Hiring Tool.
Note: Screenshot of Ideal interface from Ideal.com product demonstration.
Figure 5: Demo of Ideal with metrics from chatbot
Note: Screenshot of Ideal interface from Ideal.com product demonstration.
Two characteristics, high-volume recruiting and the use of AI technologies, formed the initial criteria for sampling participants for this dissertation. The following section will discuss the sampling frame and the data collection
strategies used to recruit study participants. Next, I will provide an overview of the scope,
content, and procedure relevant to the interviews conducted with recruitment
professionals. Finally, this chapter will end by discussing procedures for analyzing and
interpreting the collected data.
Sampling
The data for this dissertation project were collected exclusively online through in-depth, semi-structured interviews conducted over the video conferencing platform Zoom or by phone. Primary data collection for this study lasted approximately four months, with virtual interviews beginning in January 2021 and concluding in May 2021. Data collection for the pilot study began in October 2020 and concluded in December 2020.
Pilot Study
To gain a deeper understanding of how experts in recruiting conduct their work while using algorithmic hiring tools, a pilot study was conducted. The purpose of the pilot study was to understand the lay of the land in recruiting and to aid in developing a cohesive, representative interview guide that captured, as comprehensively as possible, a recruiter's uses and practices of technology at work within the boundaries of a cross-sectional interview. In this pilot study, I conducted
informal interviews with 14 recruiters working in high-volume recruiting whom I identified on LinkedIn (currently one of the most comprehensive professional networking sites available). I identified these recruiters by conducting a search for "high-volume recruiters" and composing a list of users whom I messaged on the site. In my message
to these recruiters, I requested a video meeting with them to discuss how they used
technologies in their daily work.
Each unstructured interview lasted between 25 minutes and one hour. The pilot interviews were guided by open-ended questions related to job
responsibilities, types of technologies used (both AI and non-AI), and moments of
agreement or disagreement with the output of AI technologies. Sample questions included,
“Tell me about your role in your organization?” “Can you walk me through the steps you
take when you fill a job opening?” and “What kinds of technologies do you use to find and
select talent?” Through these interviews, I was able to formulate the basis of an interview
guide used for primary data collection. Later in this chapter, I will provide details on the
mechanics of the interview guide that was used for data collection. Interviews enabled me
to gain a better understanding of the actions, behaviors, and general practices of recruiters
in relation to AI-powered technologies. The pilot study helped me construct an interview
guide that established the boundaries of technology use in recruiting. For example, I discovered that an applicant tracking system stores data from AI-powered resume screeners, and that AI-powered software is often designed to be integrated into an applicant tracking system for maximum functionality. This finding allowed me to
incorporate questions and probes into the final interview guide about the role of an
applicant tracking system and its affordances that potentially impact how recruiters use
algorithms for screening. The insights from this pilot study enabled me to develop a
baseline understanding of how recruiters leverage technologies within their daily work, and to identify and prioritize sampling from a specific group: high-volume recruitment professionals.
Selecting Research Participants
A primary focus in recruiting participants was placed on identifying experts in recruitment who possessed hands-on experience with algorithmic hiring tools within their daily repertoire of work tasks.
designated to interact with job applicants in the initial phases of sourcing and screening,
then selecting top applicants for a job and aiding them along the candidacy process toward
the interview and selection stages (Lee Cruz et al., 2016). To identify these specific types of
recruiters I approached sampling in three ways: (1) through the clientele list provided by
algorithmic hiring developers on their websites, (2) by conducting searches for recruitment
professionals with specific recruiter job titles on LinkedIn, and (3) locating recruiters in groups on social networking sites like Facebook and LinkedIn.
In the first sampling strategy, I identified a list of common algorithmic hiring tool
software companies that are used in screening processes. This list was composed of 11
organizations that were derived from industry reports about AI-powered tools for
recruiting. For example, I considered AI-hiring software companies like HireVue,
Pymetrics, PredictiveHire, and Ideal that were included in the latest report from the
Brookings Institution (Engler, 2020). I also consulted the Washington, D.C.-based think tank Upturn, which provided insight into additional companies like Koru and Predictim. The
complete list of companies I identified to have developed and sold AI-software for
screening purposes included: HireVue, Pymetrics, Koru, Predictim, Ideal, Crystal Knows,
Jobvite, WePow, Visume, and Modern Hire. While this list of companies is not exhaustive,
the appearance of these companies in think tank reports on AI hiring, on trade websites for recruitment, and in competitor searches corroborated that this list represented the top AI recruiting software companies at the time of study recruitment. Furthermore, this process made me aware of the heterogeneity of the field of AI-powered hiring software.
Next, after identifying these ‘mainstream’ developers of AI-powered hiring tools, I
visited each of their company websites and searched for case studies, client success
stories, client testimonials, or listings of clientele that used their software. Figure 6 below illustrates a common practice of software companies creating web content to highlight the success of their clients' use of their products. When available, I made note of the different client companies that each algorithmic hiring tool vendor listed on its website.
Figure 6: HireVue's testimonial case study.
From this point, I used LinkedIn Premium to search for recruiters from the list of companies that used an AI-hiring tool. The Premium feature in LinkedIn is a paid-subscription version of the social networking site that allowed me to reach out to users I was not directly connected to. Each search included the name of the company that was indicated to have used an AI-hiring tool, as well as keywords that included "recruiter", "recruitment manager", "talent acquisition", or "recruiting manager" and any company-specific variations that LinkedIn suggested as a correction to the job title. The search was filtered for users who were located and worked within the U.S.
and currently worked at the company associated with the search. This sampling strategy
yielded a total of five interviews.
Another strategy, used to ensure that I was not sampling only from companies listed on algorithmic hiring tool vendors' websites, was to sample for high-volume recruiters. As previously discussed, the process of high-volume recruitment usually involves the use of algorithmic hiring tools. Using this logic as a proxy to
identify recruiters who use AI-hiring tools in their recruiting efforts, I conducted searches
for recruiters by using LinkedIn Premium. I used keywords like “recruiter”, “high-volume
recruiter”, “talent acquisition recruiter”, “senior high-volume recruiter”, “staffing manager”,
“recruitment manager.” I filtered for users that were located and worked for a company
within the U.S. There were no restrictions placed on company type or industry. This
sampling strategy yielded a total of eight interviews, with more than 30 recruiters turning
down or ignoring an invitation for an interview.
The last strategy used to identify recruiters for an interview was to place open-call ads in social media groups on Facebook and LinkedIn dedicated to informal networking among recruiters. In these groups, I would post a comment or create a post
on the group’s homepage. Within these types of postings, I introduced myself as a
researcher interested in understanding how recruiters used new, emerging forms of
technologies to screen talent. In this post, I would provide context about the study and
request that a user from the group send me a message if they were interested in
participating in an interview. Before scheduling an interview with any participant from
these groups, I ensured eligibility criteria by verifying that they currently held a job in
recruiting and worked for a large company (i.e., a company that employed over 500
employees) that was based within the United States. This was the least fruitful sampling technique (n = 3) because it required recruiters who were interested in participating in an interview to actively reach out to me by sending a message.
Snowball Sampling and Later Theoretical Sampling
In addition to sampling participants through cold calling strategies (e.g., reaching
participants through social media sites or direct emails), I also employed a snowball
sampling technique by asking participants for referrals to other recruiters who did similar
work. This type of sampling proved to be the most helpful since most recruiters work independently while belonging to a team. This approach resulted in 26 interviews through referrals from colleagues who held similar positions in different recruitment departments.
In addition to the above, I adhered to theoretical sampling procedures (Charmaz, 2006; Glaser, 2001; Glaser & Strauss, 1967). Theoretical sampling forms part of a larger
methodological approach to qualitative research called constructivist grounded theory. In
brief, a constructivist approach “places priority on the phenomena of study and sees both
data and analysis as created from shared experiences and relationships with participants
and other sources of data" (Charmaz, 2006, p. 130). Using the constant comparative approach (i.e., the process of moving between data and analysis for emerging theoretical construction) allowed for theoretical sampling, since I was reflecting on tentative categories or relationships that emerged during the initial phase of data collection. In turn, I identified gaps in understanding about specific phenomena and sampled specifically for participants who could provide more insight into a specific process or behavior that needed further clarification.⁹ A total of 42 interviews were
conducted: 34 using the video conferencing platform Zoom and the remaining eight by phone. Follow-up interviews were conducted with three participants to provide more information about organization-specific use of technology.
Sampling Criteria
In an attempt to establish a standard set of characteristics among participants working in high-volume recruiting industries, all participants were screened for several eligibility criteria: (a) participants had to be at least 18 years of age, (b) participants had to be employed at their current company for a minimum of six months, (c) participants' official job title had to include some type of recruitment or comparable talent acquisition role, and (d) participants self-reported that they engaged in talent screening processes as part of their job responsibilities. A primary reason for creating these standard eligibility criteria was to compensate for the heterogeneity of work industries represented in the sample. Once all participants were screened for eligibility, an
interview was scheduled and conducted. A total of four participants were ineligible to participate in the study because they did not engage in high-volume recruiting at the time of the interview. Table 4 provides the complete Participant Inventory for this study.

⁹ An instance of theoretical sampling (Charmaz, 2006; Glaser & Strauss, 1967) emerged when my analytic memos began to highlight gaps in understanding about the processual role that sourcing candidates played in screening efforts. In other words, a pattern about screening workflow emerged when recruiters disclosed that they often screen a potential candidate for an open position on social media sites well before the candidate submits an application. Because of this pattern's frequency in my memos, as well as its pertinence to the process of screening and using algorithmic technologies, this gap encouraged me to recruit three additional participants who could speak to the degree to which sourcing impacted the evaluation and assessment of candidates.
Procedure for Interviews
After establishing initial contact with participants, each participant was invited and
scheduled for an interview on Zoom or by phone. Prior to each interview, I emailed
participants a copy of the study’s participation information sheet and informed consent
documents which were approved by the University of Southern California’s Institutional
Review Board (see Appendix for a copy of each of these forms). Interviews lasted between 25 and 90 minutes, with the average interview lasting approximately 45 minutes.
The Interview
The insights generated from the initial pilot interviews helped to build a structure for the interview guide employed in this study. The interview guide comprises four sections: (1) Warm-up questions to gain background information about the
participant’s organization and their role (including tasks, responsibilities, affiliation to
teams, etc.), (2) Questions related to participant’s perception of technology use in
screening, with a focus on algorithmic tools, (3) Questions related to specific actions taken
when using algorithms, and (4) Individual-level demographic questions about each
participant. Following standard in-depth, semi-structured interviewing techniques
(Charmaz, 2006; Miles, Huberman, & Saldana, 2019), I probed for specific examples of
memorable interactions and use of technologies or critical incidents that exemplified
intervention into technology use.
During the first stage of the interview, warm-up questions were asked to gauge each participant's recruiting environment, including the type of organization and industry they recruit in. Furthermore, this section provided probes on their responsibilities in
evaluating candidates, given their role in the organization. Sample questions included, “Tell
me about your role in X company,” and “What are your primary responsibilities when it
comes to evaluating job candidates?”
Next, participants were asked about their experiences and perceptions of using
technologies when screening job candidates. This section helped uncover the specific types of technologies recruiters used at work, as well as how social and organizational factors may influence technology use. Sample questions included,
“How do others in your workplace generally use these tools?” “Would you say you use it
differently or the same?” “How were you trained to use this tool?” In this section of the
interview guide, participants were probed about the extent to which they used specific AI-
powered tools in their work. Sample questions included, “To what extent do the
technologies you use for your job rely on any form of AI, automation, or algorithmic
calculations?” “How accurate is this tool? How do you measure its accuracy?” and “Do you
typically trust the results from the tool? Why or why not?”
The third section of the interview focused on understanding how recruiters utilized AI. I placed an emphasis on experiences in which recruiters agreed or disagreed with, and accepted or rejected, the recommendations from AI tools when they evaluated job candidate files.
Sample questions included, “...Can you describe a time when you did not agree with the
ranking, or score provided by X tool? How did you react to this situation? Probe: Did you
take any action to adjust the tool’s output?” “Can you comment on the reasons why you
think this decision was the best course of action to take?” and “If you could change
anything about the way your company uses AI to evaluate job candidates, what would it
be?”
The final section of the interview collected demographic information about
participants. This information included the verification of a participant’s organization and
industry type, job title, length of employment at the organization, gender, ethnicity, age,
level of obtained education, and location within the United States. After collecting this
information, the interview came to an end and participants were debriefed about the
study’s goals. Participants were provided with contact information for the author in case
they had any questions in the future. All participants were given pseudonyms to maintain
confidentiality. A list of participants is provided in Table 4.
Data Analysis
Data Pre-Processing for Analysis
A total of 40 interviews were audio recorded; the remaining two were documented with handwritten notes, per the respondents' request. Each recorded interview was transcribed verbatim using Rev.com, an online paid transcription service. Each transcript was downloaded and imported into NVivo, a qualitative data analysis software program. After processing each transcript within NVivo, I listened to the audio recording of
the interview while reading the transcript for accuracy and correcting syntax. The average
transcript length was about 12 pages, and the interviews for this dissertation totaled 402
pages. After pre-processing the transcripts, each interview transcript was analyzed.
Stages of Data Analysis
I analyzed data using a qualitative data analysis approach (Miles, Huberman, &
Saldana, 2019), while placing emphasis on coding techniques derived from a constructivist
grounded theory approach (Charmaz, 2006; Glaser & Strauss, 1967; Glaser, 2001) with the
in-depth, semi-structured interviews as my primary source of data. Adhering to the
qualitative analysis genre of grounded theory is appropriate when attempting to develop or
extend theory around a phenomenon that is inadequately explained by current theoretical
work, but offers a well-defined research problem (Charmaz, 2006). Moreover, “situating
grounded theories in their social, historical, local, and interactional contexts strengthens
them” (p. 180). Utilizing the constant comparative technique of data analysis and
interpretation (Charmaz, 2006), the data from interviews were analyzed in three stages:
open and focused coding, axial coding, and finally theoretical coding. Throughout the three
stages of coding, frequent annotation and analytic memos were used to iterate between data and analysis, a technique that allowed for elaboration on emerging concepts and categories, provided details on categories and their properties, and aided in constructing an account of how the categories related to each other. By carefully placing attention on coding, researchers can use data to weave pieces of nascent theory together by placing an emphasis on generating theoretical statements that contextually describe actions (Charmaz, 2006).
Stage 1: Open and Focused Coding
As Charmaz (2006) puts it, the process of coding “generates the bones of your
analysis” (p.46). During the initial stage of coding, I often referred to my interview notes as
I read and listened to each interview during the preliminary stages of analysis. I engaged in
segment-by-segment coding, where I designated a code to every completed response or
cluster of sentences in the interview transcripts spoken by a participant. The initial codes were intentionally broad to capture the wide range of possible interactions that recruiters had with AI technologies at work. The goal of this round of initial coding was to be as open and non-prescriptive as possible. To the extent possible, I coded each segment to flexibly describe the speaker's intended meaning. I used in vivo coding, or coding with specific terms spoken by participants, to contextualize technical terms and processes that were fundamental to recruiting. For instance, by using field-specific jargon such as high-volume recruiting, sourcing from the applicant tracking system, creating the pipeline, and filling the requisition, I captured fundamental processes in recruiting that would risk losing their implicit meanings if coded otherwise.
An example of initial, open coding is featured in Table 3. During an interview, a participant named Andres described the components that make up an algorithmic score in his work system:
Table 3: Example of Initial Coding

Direct quote from Andres: "But I would, I would say that, in all likelihood, the two factors that contribute to that initial score, the two strongest factors are probably GPA and leadership score."

Initial codes: Likelihood of two factors making up AI score; GPA and leadership of applicant are used to form AI score; GPA and leadership are numerically calculated; Unsure about exact calculation or weight of score
The purpose of utilizing this form of initial coding is to treat the interpretation of data as openly as possible. These initial codes formed categories that demonstrated their relation to processes within the data. In stage two of coding, these fragmented categories of codes were purposefully interpreted in relation to larger categories to give coherence to the analysis.
Stage 2: Axial Coding
Charmaz (2006) describes the second stage of coding as focused coding, or what
Strauss and Corbin (1998) originally offer as axial coding. Axial coding is a process by which
data reduction occurs by compiling initial codes into larger meaningful concepts. After
performing an initial stage of coding of a series of interviews, I created analytic memos that
offered interpretations about relationships between concepts. I adhered to Strauss and
Corbin’s (1998) and Charmaz’s (2006) guidelines to link categories of codes together by
identifying conditions (i.e., circumstances that form structure between phenomena),
actions or interactions (i.e., a participant’s reaction to specific events, processes, or issues),
and consequences (i.e., the outcomes of interactions or actions that were performed).
During this process, I began to gain clarity about the specifics and properties of a
category, and in turn, how these categories formed part of patterns within the data. For
example, a larger category related to “moments that recruiters reject AI scores” emerged
after collapsing or reducing a series of initial codes that described reasons, motivations,
and actions when recruitment professionals actively rejected the output of an algorithm.
This period of axial coding allowed me to sort among initial codes, synthesize their meaning, and reassemble the data into more coherent categories. Lastly, during this coding stage, I conducted follow-up interviews with three interviewees to provide clarity about topics, like moments of algorithmic use and acceptance, that were not expanded upon in the initial interviews.
Stage 3: Theoretical Coding
Finally, the codes generated from axial coding were subjected to theoretical coding, which enabled me to move between data and emerging theoretical reconstruction. Specifically, the process of theoretical coding allows the data to come full circle, as the codes "weave the fractured story back together" (Glaser, 1978, as cited in Charmaz, 2006, p. 63). In this stage, the categories generated from the extractive process of axial coding allowed me to specify relationships between categories.
To do this, I re-engaged with the categories generated from axial coding and reviewed the analytic memos and annotations created during initial coding. I attempted to formulate contextualized, theoretical logics about categories in my data. I returned to the research questions of the study: 1) "How do recruiters account for algorithms when they are so central to their work?", 2) "What factors lead these recruitment experts to adjust algorithmic recommendations?" and a question about motivation, 3) "When do recruiters accept or reject the advice provided by an AI tool?" In performing this
analysis, I considered each category as an independent element working within a larger
system of phenomena. It became clear that the way experts used AI technologies was
bounded within social and cultural contexts embedded within an organization. The metaphor, introduced in Chapter 1, of algorithmic use embedded within an ecosystem of organization-and-technology practice was a helpful guide in theorizing about the degree to which the behaviors, perceptions, and motivations of experts were realized and performed.
I moved between analyzing data, creating flow charts and models that explained the
mechanisms for behaviors, and writing analytic memos during this process. I also refined
my categories and theoretical logics. For example, I mapped how organizational and personal contexts impacted the motivations of experts, how those motivations related to their behaviors, and how behaviors led to specific outcomes related to technology use. The process of outlining these conditions allowed several propositions to emerge that model algorithmic acceptance and rejection among experts, as well as the factors that motivate these behaviors. The following chapter provides an account of the
findings generated by these analyses.
Table 4: Participant Inventory

Name | Position (Method of Recruitment) | Industry | AI Type | Location | Tenure | Education | Gender | Race | Age
Will | External Recruitment Manager (Referral) | Education, Non-Profit | Document screening; Trait assessments | Brooklyn, NY | 2 years | Bachelors | Man | White | 30
Jose | Recruitment Manager (Testimonial) | Creative, Private | Document screening; Trait assessments | Austin, TX | 3 years | Bachelors | Man | Latinx | 26
Ahmed | Recruitment Manager (LinkedIn) | Real Estate, Private | Document screening | Los Angeles, CA | 2 years | Bachelors | Man | Asian | 29
Andres | Recruitment Manager (Referral) | Charity, Non-Profit | Document screening; Trait assessments | Brooklyn, NY | 2 years | Masters | Man | Latinx | 31
Anne | Recruiter (Referral) | Legal, Private | Document screening | Los Angeles, CA | 5 years | Bachelors | Woman | White | 28
Ashley | Recruiter (Referral) | Sales, Private | Document screening; Games | Los Angeles, CA | 9 years | Bachelors | Woman | White | 38
Jen | Recruiter (LinkedIn) | E-Marketing, Private | Document screening; Trait assessments | Bay Area, CA | 13 years | Bachelors | Woman | Asian | 40
Johnny | Managing Director of Recruiting (Referral) | Education, Non-Profit | Document screening; Trait assessments | Austin, TX | 4 years | Masters | Man | Latinx | 29
Leslie | Recruiter (Referral) | Education, Non-Profit | Document screening; Trait assessments | Hoboken, NJ | 3 years | Masters | Woman | Black | 34
Mel | Division Director of Recruitment (Referral) | Legal, Private | Document screening | Los Angeles, CA | 4 years | Bachelors | Woman | White | 28
Rebekah | Business Solutions Manager (Referral) | Staffing, Private | Document screening; Games | Los Angeles, CA | 7 years | Bachelors | Woman | Latinx | N/A
Sandra | Senior Talent Manager (Social Media Group) | E-Marketing, Private | Document screening; Facial and voice expressions | Los Angeles, CA | 16 years | Masters | Woman | Asian | N/A
Siggy | Search Associate (Referral) | Staffing, Private | Document screening; Games | Manhattan, NY | 5 years | Bachelors | Man | White | 29
Victoria | Recruiting Coordinator (LinkedIn) | Legal, Private | Document screening | Denver, CO | 1 year | JD | Woman | Asian | 36
Javier | Recruiter Manager (Referral) | Technology, Private | Document screening; Games | Los Angeles, CA | 3 years | Bachelors | Man | Latinx | 28
Jessie | Talent Acquisition Manager (Testimonial) | Food & Beverage, Private | Document screening; Trait assessments | Atlanta, GA | 6 years | Bachelors | Woman | White | 31
Maria | Recruiter (Social Media Group) | Technology, Private | Document screening; Games; Facial and voice expressions | Austin, TX | 3 years | Bachelors | Woman | Latinx | 26
Matt | Program Recruiter (Testimonial) | Technology, Private | Document screening; Facial and voice expressions | Austin, TX | 6 years | Bachelors | Man | White | 30
Alex | Talent Acquisition Associate (Referral) | Technology, Private | Document screening; Games | Denver, CO | 3 years | Bachelors | Woman | White | 29
Jill | Talent Acquisition Recruiting Partner (Referral) | Food & Beverage, Private | Document screening; Trait assessments | Atlanta, GA | 7 years | Bachelors | Woman | Black | 27
Miguel | Recruiter (Referral) | Technology, Private | Document screening; Games | Denver, CO | 2 years | Bachelors | Man | Latinx | 29
Teddy | Applications Recruiter Lead (Referral) | Technology, Private | Facial and voice expressions; Games | Los Angeles, CA | 8 years | Bachelors | Man | Asian | 42
Roxie | Technical Recruiter (Testimonial) | Technology, Private | Facial and voice expressions; Games | Seattle, WA | 2 years | Bachelors | Woman | White | 24
Derek | University Recruiter (Referral) | Technology, Private | Document screening; Trait assessments | Los Angeles, CA | 3 years | Bachelors | Man | Black | 28
Camille | Recruiter (Referral) | E-Marketing, Private | Document screening; Facial and voice expressions | Los Angeles, CA | 2 years | Bachelors | Woman | White | 24
Gloria | Senior Recruiter (LinkedIn) | Chemicals, Private | Document screening | San Antonio, TX | 3 years | Bachelors | Woman | Latinx | 37
Marc | Recruiter (Referral) | Health Care, Private | Document screening | Houston, TX | 6 years | Bachelors | Man | Latinx | 31
Max | Recruiter (Referral) | Technology, Private | Document screening; Facial and voice expressions; Games | Bay Area, CA | 4 years | Bachelors | Man | White | 37
Mike | Talent Acquisition Manager (Social Media Group) | Airlines, Private | Document screening; Trait assessments | Atlanta, GA | 12 years | Bachelors | Man | Black | 11
Monica | Senior Recruiter (Referral) | Financial, Private | Document screening; Trait assessments | Chicago, IL | 15 years | Masters | Woman | Black | 46
Olivia | Talent Acquisition Recruiter (LinkedIn) | Media, Private | Document screening | Los Angeles, CA | 4 years | Bachelors | Woman | White | 28
Raul | Recruiter (Referral) | Media, Private | Document screening | Los Angeles, CA | <1 year | Bachelors | Man | Latinx | 25
Roger | Executive Recruiter (Referral) | Financial, Private | Document screening; Games | Manhattan, NY | 7 years | Bachelors | Man | White | 38
Valeria | Recruiter (LinkedIn) | Education, Non-Profit | Document screening; Trait assessments | Manhattan, NY | 3 years | Masters | Woman | Latinx | 31
Tao | Recruiter (Referral) | Technology, Private | Document screening; Games; Facial and voice expressions | Seattle, WA | 9 years | Bachelors | Man | Asian | 33
Jude | Senior Recruiter (Referral) | Technology, Private | Document screening; Games; Facial and voice expressions | Seattle, WA | 7 years | Masters | Man | White | 47
Kerry | Full-Cycle Recruiter (Referral) | Technology, Private | Document screening; Games | Los Angeles, CA | 5 years | Bachelors | Woman | Black | 40
Jeff | Technical Recruiter (Referral) | Technology, Private | Document screening; Games | Los Angeles, CA | 8 years | Masters | Man | White | 35
Tan | Recruiter (LinkedIn) | Technology, Private | Document screening; Facial and voice expressions | Los Angeles, CA | 3 years | Bachelors | Man | Asian | 29
Miles | Senior Talent Manager (Testimonial) | Consumer Goods, Private | Document screening; Trait assessments | Newark, NJ | 9 years | Bachelors | Man | White | 32
Yvonne | Talent Recruiter (Referral) | Consumer Goods, Private | Document screening; Trait assessments | Newark, NJ | 3 years | Bachelors | Woman | White | 28
Sarah | Human Resources Recruiter (LinkedIn) | Sales, Private | Document screening; Trait assessments | Los Angeles, CA | 4 years | Bachelors | Woman | White | 29

Note: N/A indicates that a participant did not consent to providing this information. The column "AI Type" refers to the types of algorithmic hiring tools used by recruiters in this study and comprises four tool types: (1) document screening, (2) assessment of facial and voice expressions, (3) games, and (4) trait assessments. A description of each AI hiring tool type can be found in Table 2.
CHAPTER FIVE: RIVAL OR ALLY? HOW EXPERTISE IS NEGOTIATED IN ALGORITHMICALLY
POWERED DECISIONS
“What a seamless end-to-end experience would mean, is that, in theory, my
recruiters do not need to ever reach out to a candidate besides getting them [job
candidate] an offer letter. It’s proven to be a tool that has the potential to help us out
tremendously. Obviously, we are not even close there, but that’s what we’re working
towards."
-Sandra, Senior Talent Manager
Sandra comments on the prospect of using AI tools for recruiting.
• • •
"I will be transparent. I know you're recording this, but this thing [AI-hiring tool]
doesn’t know people. I do most of my work on the phone or I’m in meetings with these
people [job candidates]. I know what I’m looking for and if the score helps, then good. If it
doesn’t, I usually can make it help.”
-Camille, Recruiter
Camille comments on how she uses an AI tool at work.
This chapter uses data from 42 recruitment experts across 26 different organizations to address what factors influence experts to intervene in the recommendations of AI-powered hiring tools. Moreover, this chapter provides more depth into the process by explaining what factors motivate the moments when experts accept or reject the recommendations from an algorithm. I present the empirical findings in three sections. First, I provide findings that explain how experts generally perceive AI-hiring tools when they engage in screening decision-making strategies. Depending on factors such as knowledge about the data source, visibility or monitoring of work processes, and relational dynamics outside of technology use, human experts perceive the use of AI-powered technologies as either complementary (an ally) or adversarial (a rival). Next, I provide findings that detail how an expert's perception of AI-powered tools is associated with the ways they evaluate and use the recommendations from these technologies: by accepting, rejecting, or, at times, engaging in a faux negotiation to generate a desired outcome.
Identifying Common Ground Among Different Forms of AI Hiring Tools
Job candidate screening is the process of reviewing job application materials submitted by applicants. It is a process in which recruiters use applicant job materials to find and select the closest matches between applicant and job description, with special attention placed on qualifications, previous work experience, skills, and fit within an organization. Given the demands of processing a sheer volume of applications, high-volume recruitment departments integrate different forms of algorithmic hiring tools to assess and determine whether an applicant should continue in the hiring selection process. Across the different
work industries and organizations used in the analysis, algorithmic tools shared a common
characteristic: they assess a candidate’s application materials and generate an output
reflecting fundamental characteristics that a successful job hire should possess. In other
words, each algorithmic hiring tool used by recruiters had a specific purpose. Some
algorithmic hiring tools were designed to calculate scores for specific traits, while other AI
tools were holistic in nature and provided a general recommendation for a recruiter to
interpret.
While algorithmic hiring tools can produce concrete predictions in the form of rankings or count frequencies (e.g., finding an applicant's GPA or counting years of experience), AI tools often produce opaque predictions for more holistic qualities. For example, the algorithmic hiring tool used at a Brooklyn-based education non-profit organization used data from an applicant's resume to calculate a leadership score that ranged from zero (no leadership experience) to five (highest leadership experience). At this organization, recruiters were trained to identify applicants who aligned with the organization's mission. Leadership was a key character trait associated with success on the job. Therefore, when screening large numbers of applications, the algorithmic hiring tool was trained to identify leaders in the screening phase. In contrast, an algorithmic software system at a private food and beverage company based in Atlanta provided recruiters with a calculated score of an applicant's fit within the organization. This score ranged from 0 (no fit with the organization) to 100 (extreme fit with the organization). In this particular case, the AI system produced a score and provided a recommendation about the candidate holistically; it did not isolate specific characteristics or traits in its output. Recruiters in this study used algorithmic hiring tools that generated outputs in the form of raw numerical scores (e.g., a number out of 100), ordinal scores (e.g., low performing, high performing), dichotomous recommendations (e.g., yes/no or thumbs up/thumbs down), or classifications (e.g., an AI hiring tool that predicted an applicant's personality type).
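These four output formats can be pictured with a small illustrative sketch. The weights, cutoffs, and labels below are hypothetical and are not drawn from any system described by participants; the sketch only shows how a single underlying score can surface as a raw number, an ordinal label, a dichotomous recommendation, or a classification.

```python
# Illustrative sketch only: how one underlying score can surface in the four
# output formats recruiters described. All weights, cutoffs, and labels are
# hypothetical; no vendor's actual scoring logic is known or implied.

def fit_score(features: dict) -> float:
    """Raw numerical output: a toy 'fit' score on a 0-100 scale."""
    weights = {"experience": 0.4, "skills": 0.4, "assessment": 0.2}  # assumed
    return 100 * sum(weights[k] * features.get(k, 0.0) for k in weights)

def as_ordinal(score: float) -> str:
    """Ordinal output (e.g., 'low performing' vs. 'high performing')."""
    return "high performing" if score >= 70 else "low performing"  # assumed cutoff

def as_dichotomous(score: float) -> str:
    """Dichotomous output (e.g., yes/no, thumbs up/thumbs down)."""
    return "yes" if score >= 50 else "no"  # assumed cutoff

def as_classification(features: dict) -> str:
    """Classification output (e.g., a predicted personality type; labels assumed)."""
    return "collaborative" if features.get("assessment", 0.0) >= 0.5 else "independent"

applicant = {"experience": 0.8, "skills": 0.6, "assessment": 0.9}
s = fit_score(applicant)
print(s, as_ordinal(s), as_dichotomous(s), as_classification(applicant))
# -> 74.0 high performing yes collaborative
```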
Making Sense of the Blackbox
From an outsider’s perspective, the outputs from an organization’s algorithmic
hiring tools may appear to evaluate against applicant criteria in a strict (e.g., a score for a
particular trait) or loose (e.g., a holistic applicant score) form; however, recruiters are able
to understand why algorithmic hiring tools are embedded into the screening process.
Recruiters share a common experience with using algorithmic hiring tools during the
screening stages of recruitment because they are “trained to use them,” “expected to use
them,” or “it is a part of the file.” When introduced to algorithmic systems, recruiters are
often trained on how to interpret what an AI-output means. In turn, recruiters make sense
of how these algorithms potentially arrive at their decisions and can impact their work.
Johnny, a Managing Director of Recruitment at an education non-profit in Austin,
explains how he and his team are not privy to the exact way his organization’s algorithmic
hiring software calculates a leadership score for applicants, but through extensive use of
the system, they have come to form a quasi-understanding of how it operates. He explains:
Nine times out of ten, if you’re an Ivy League student with a decent
GPA, the leadership score that Salesforce generates doesn’t matter. You’re
[the applicant] automatically invited to a final interview. But if you’re more
like an UT-Austin student and you have a lower GPA, but a significant
leadership score, you’ll land somewhere in the middle and make it to a
screener interview. We don’t know how things are weighted or what goes
into the calculation. I know it’s messed up, but we have sort of figured out
how it works.
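Johnny's account amounts to a folk theory of the tool: a working decision rule inferred from repeated use rather than from knowledge of the actual calculation. A speculative sketch of that folk theory might look like the following; it is not the organization's real, opaque logic, and the school tiers, cutoffs, and default outcome are my assumptions.

```python
# A speculative reconstruction of Johnny's folk theory of his organization's
# screening tool. The real weighting was opaque to his team; the school tiers,
# GPA cutoff, leadership threshold, and default outcome here are assumptions.

def folk_theory_outcome(school_tier: str, gpa: float, leadership_score: int) -> str:
    if school_tier == "ivy_league" and gpa >= 3.0:  # "a decent GPA" (assumed cutoff)
        return "final interview"                     # leadership score "doesn't matter"
    if leadership_score >= 4:                        # "significant" on the 0-5 scale (assumed)
        return "screener interview"                  # lands "somewhere in the middle"
    return "further review"                          # assumed default; not described by Johnny

print(folk_theory_outcome("ivy_league", 3.4, 1))    # -> final interview
print(folk_theory_outcome("state_school", 2.9, 5))  # -> screener interview
```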
Experiences like Johnny and his team’s are shared by the majority of
recruiters interviewed. Algorithmic hiring tools can be developed with recruiting
needs in mind (e.g., screening for a leadership score in a non-profit environment),
but recruiters who use these systems are almost never aware of how the calculations are made. These general perceptions of how AI-powered tools perform are often described as part of the blackbox of technology design (Faraj et al., 2018; Pasquale, 2016). In practice, recruiters find themselves in
situations where they lack sufficient information about the technical components of
the tools that they are required to use in their work processes. Participants are
often “unsure about the calculations” (Teddy, Applications Recruiting Lead at a
Technology firm in Los Angeles). However, recruiters adapt to working with unclear
technological outputs that provide suggestions that may influence decisions. Exposure to and use of these tools allow recruiters to decipher patterns in how the technology reacts to data. In other words, phrases like "it usually will rank that way" and "I can just tell that it's going to give it that score" reflect a process of algorithmic folk theory generation. In making sense of the opacity behind an AI's output, recruiters form an understanding and interpretation of the system without firsthand knowledge of its workings (Liao & Tyson, 2021; DeVito et al., 2018; French & Hancock, 2017).
While the inclusion of AI-powered tools is common practice (and oftentimes required) in high-volume recruitment screening, through the use of these technologies recruitment professionals come to perceive AI-powered tools in one of three ways: as an ally, as a rival, or with indifference. Organization-level
factors influence how recruiters form perceptions about AI technology. More specifically,
the degree to which an AI output is weighted in decisions, the degree of organizational
oversight of AI-use, and the extent to which recruiters trust the recommendations of an
AI-powered system are associated with degrees of acceptance or rejection by experts.
Algorithms Aid or Rival Work Expertise
In this section, I present the empirical findings in two parts to address what factors
motivate recruitment experts to intervene in recommendations from algorithmic hiring
tools. Specifically, this section details two emergent themes of how experts perceive AI-
powered technology: as an ally and as a rival. The decision to categorize and present recruiters in two opposing perspectives reflects an emergent distinction in how recruiters described the relationship between their expertise and the performance of an algorithmic
hiring tool at work during screening processes. In each section, I elaborate on findings that
demonstrate how perceptions of AI are associated with behavioral responses to these
technologies. A conceptual matrix of this set of findings can be found in Display 1 located at
the end of this chapter.
AI as Ally
One group of recruiters (n = 14) described their experiences with using an algorithmic hiring tool as positive, or complementary to their work. These recruiters often expressed that the output of algorithms "would match what they would be looking for in a candidate" (Siggy, Search Associate, Staffing in Manhattan); simply put, algorithmic outputs reflected what a recruiter would have concluded on their own. Recruiters who described algorithms as matching their own expertise used systems that screen for specific applicant characteristics like previous work experience, desired pay range, and location. These characteristics are explicit to job positions and fixed, with little latitude for variance in final selection processes. In other words, when candidates are screened against these criteria, they are either accepted or rejected by the algorithm based on the classification of these fixed variables. In high-volume recruitment efforts, when recruiters are tasked with reviewing large numbers of applications within a short period of time, algorithms are seen as useful tools that help recruiters sift through large amounts of data quickly. Derek, a University Recruiter in Technology based in Los Angeles, comments on how algorithms save him time by ensuring each applicant is screened:
We can’t just get through all the applications with a fine-tooth comb.
On average, if I’m reading resumes, I’m spending like a few seconds on each
one. At the very least, the system can ensure that each application is
reviewed. The criteria is set and it helps parse through those who may not be
a good fit.
Derek’s description of the use of algorithms at work highlights the nature of
reviewing applications in a fast-paced, high-volume, environment of recruiting. The
performance of algorithmic systems in allyship relationships centers on recruiters
establishing a convergence between their own set of expectations around a process
and the performance of an algorithmic system. These moments of matching, or
alignment of expertise between a recruiter and the output of an algorithmic hiring
tool are associated with three conditions: high trust in the algorithmic screening
process, low oversight of work, and in moments of screening obscurity, low
prioritization of algorithmic scores.
High trust in the system. As previously described, the process of screening
candidates varies by organization and position type. In practice, candidates are initially
reviewed by algorithmic systems, their applications are screened, and scores are produced
using specific criteria, then these scores are reviewed at a later time by a human recruiter
(if at all). During this process, recruiters who perceived algorithmic systems as supportive
to their job often expressed a high level of trust in the results produced by the AI system.
Trust in the system was described as a process by which algorithmic systems reliably identify and present explicit knowledge. For example, Yvonne, a Talent Recruiter in
Consumer Goods based in Newark, explains how her use of LinkedIn Talent Insights, a
software that provides analytics based on an applicant’s LinkedIn profile, helps to locate
knowledge quickly and accurately. When screening candidates, she is able to sift through files that have been screened for a range of years of previous work experience and for specific skill sets.
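The kind of explicit, well-bounded filtering Yvonne describes can be pictured as a simple conjunction of fixed criteria. The sketch below is illustrative only; it does not represent LinkedIn Talent Insights or any vendor's interface, and the field names and thresholds are assumptions.

```python
# Illustrative only: a conjunction of fixed, explicit criteria of the kind
# recruiters described delegating to automation. Field names and thresholds
# are hypothetical; this is not LinkedIn Talent Insights or any vendor's API.

from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    years_experience: int
    skills: set[str]

def passes_filter(a: Applicant, min_years: int, required_skills: set[str]) -> bool:
    # Explicit knowledge only: every criterion is observable in the profile.
    return a.years_experience >= min_years and required_skills <= a.skills

pool = [
    Applicant("P1", 6, {"sql", "tableau"}),
    Applicant("P2", 2, {"sql"}),
]
shortlist = [a.name for a in pool if passes_filter(a, 4, {"sql"})]
print(shortlist)  # ['P1']
```

Because the same criteria yield the same shortlist every time, such filters behave predictably, which is one reason repeated exposure can build trust in them.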
Recruiters often build trust with their AI systems over time, through repeated successful exposure to the technology. These interactions are facilitated by setting specific
filtering criteria that the algorithm is able to reproduce in a consistent manner. Therefore,
recruiters begin to form a trusting relationship with the outputs of the technology because
the boundary conditions for a decision search space have been established (Tambe et al.,
2019). In recruiting environments where algorithmic processes are less opaque and
recruiters can specify well-defined, job-related competencies, recruiters often delegate
these laborious tasks to automation. In turn, recruiters often come to accept algorithmic
outputs as the norm and standard of screening in this process. This finding is not
surprising given the nature of high-volume recruiting where recruiters are responsible for
processing applications quickly, and technologies that perform consistently for a specific
process are later seamlessly integrated into a workflow.
High oversight of work. Recruiters who viewed algorithmic systems in a positive,
helpful way described belonging to work environments where their screening processes
were reviewed by others, usually by teammates or a senior recruiting member. In some
recruiting spaces, recruiters often work in teams to cover applicants from a certain
geographical area or position type.
For example, recruitment professionals at an education non-profit organization
have recruitment efforts segmented by university clusters based on geography. In
conversation with Valeria, a recruiter in education based in Manhattan, she explains that
she works on a coastal recruiting team of eight recruiters who review applications from 25 college campuses in the Northeast. Specifically, Valeria is responsible for managing applicants from four college campuses within the greater New York City area. She describes how the applications she reviews are also reviewed by her peers and a regional managing director during bi-weekly check-in meetings. She explains, "In our meetings, we
go around and talk about our numbers… how many people were rejected and why and how
many people are going to be interviewed. I check the Cornell list and Connor [her
teammate] checks my NYU list. We just usually verify that everything looks good.” Valeria,
like all the recruiters who are classified as viewing AI as complementary, describes being
part of workplace configurations where the use of an algorithm is viewed as an established
tool used in the screening process. The output from algorithms, which then forms part of a
candidate’s review file, is recognized as established expertise that recruiters (who arguably
share the same kind of explicit expertise) accept as true.
While a high degree of oversight from other team members or managers has previously been conceptualized as a form of surveillance in relation to technology use (Kellogg et al., 2020), the way that recruiters reporting high degrees of alignment with AI perceive the use of AI technology is positive, rather than coercive.
airline talent acquisition based in Atlanta, describes the benefit of technological oversight
as it affords him the opportunity to “make sure to cover all the bases.” Moreover, Mike
describes that in airline recruiting, there are technical skills that applicants who are to be
invited for an interview must possess. Therefore, the use of an AI-tool enables recruiters to
set particular parameters for screening and then have their decisions verified for accuracy.
Outputs from AI are not prioritized highly. Recruitment departments set specific standards and metrics when screening and evaluating candidates. Moreover, when filling job requisitions, recruiters often evaluate a candidate's application against a series of criteria for a specific position. While algorithms can offer recommendations or generate an output that provides a rationale for specific decisions, the degree to which algorithmic recommendations are prioritized in final screening decisions varies.
who viewed AI as complementary to their recruitment skills show how the emphasis placed
on algorithmic scores may not be an indication of how much they are prioritized or used.
In other words, recruiters can be mandated to use an AI system for their work; however,
recruiters are not required to demonstrate they have used the AI hiring tool.
Furthermore, recruiters view algorithms as accurate and often trust their results; however, they treat their work holistically, with algorithmic outputs as one aspect of the decision-making process. Roxie, a technical recruiter in Seattle, explains: "Typically, I stay
pretty close to what the score generates. It typically aligns to what I ultimately feel the
decision would be. But at the end of the day, we use the algorithm to identify the level of
skill of an applicant. But skills are not the only thing we look at when we make these
decisions.”
When recruiters place low priority on an automated tool, they often do so when the
screening process for a candidate entails other stages, which can include a preliminary
interview conducted by the recruiter, consideration of referrals (internal or external), and
other types of factors like location of applicant, culture fit, and pay expectations. While a
premium is placed on the score generated by an algorithm, recruiters place equal, if not
more, emphasis on other factors related to the job search.
Human expertise is reified in AI. Recruiters who place a high degree of trust in
algorithmic hiring tools are found to attribute accuracy to these systems and frequently
use these AI tools to engage in a process of algorithmic appreciation (Logg et al., 2019).
During moments of algorithmic appreciation, recruiters frame algorithms as “checks for
their work” (Siggy). Algorithmic systems are treated as assistants that verify a recruiter’s
sense of expertise. Recruiters frame the outputs of automation as tools that aid in
efficiency and save time when processing high volumes of applications. Algorithms perform
expertise by using explicit knowledge to screen candidates, thereby generating outputs for
human recruiters to review and approve when deciding to offer an interview to a job
candidate. Perceptions of technological appreciation are established over time by
continuous usage and reaffirmed by positive performance. A few recruiters (n = 5) within this subset of participants described a sense of ambivalence about using algorithmic systems for screening candidates. The continued, predictable use of algorithms was associated with recruiters treating these technologies as passive tools.
AI as a Rival
The majority of recruiters (n = 29) who participated in this study described their use and perceptions of algorithmic hiring systems at work as being in conflict with their own expertise. Because organizational policies and demands require the use of these tools, recruitment professionals often engage in strategies to reject the recommendations from screening algorithms. In addition to strategies of algorithmic aversion, recruitment professionals who
perceived algorithmic hiring tools as rivals actively intervened in changing the input and
interpretation of algorithmic systems. The following sections describe how low trust in an
AI-system, low oversight of work, and low prioritization of AI-tools are associated with
challenges to expertise.
Low trust in the system. Recruiters with low trust in the AI-powered tools used for screening often described the tools as deficient in representing tacit knowledge in their outputs. While algorithmic systems can produce outputs (i.e., scores, rankings, or classifications) that are derived from explicit knowledge based on applicant data (i.e., resumes, assessments, and recorded interviews), recruiters who viewed the use of AI as a divergent practice centered their criticism on its inability to holistically assess applicants for a position. Will, an External Recruitment Manager for a non-profit in Brooklyn, provides a vivid example of a screening algorithm that calculates a leadership score for applicants¹⁰ based on parsing resume materials for keywords related to a leadership position (e.g.,
based on parsing resume materials for keywords related to a leadership position (e.g.,
elected official for an organization, volunteer experience, organizational affiliations, etc.).
From what I know, our selection model is heavily reliant on leadership
experience, academic achievement, and experience in low-income
communities. A leadership score is what is calculated from their resumes.
But this thing [algorithm] needs to assess leadership beyond what a role is. It
needs to have a more human-centered way of viewing leadership. It can’t
calibrate how organized you are. It can’t calibrate how charismatic you are.
These people aren’t just scores. You read their application and talk to them
and you just have a good feeling about them [applicant].
In this situation, Will acknowledges that while leadership is an important element in
predicting a future employee’s success on the job, the way that an algorithmic system is
trained to conceptualize and operationalize is through an understanding of leadership that
is in opposition to his knowledge of leadership. Sarah, who is a Human Resources Recruiter
in Sales based in Los Angeles, also uses a technology to measure an applicant’s leadership
potential based on data provided from an applicant’s resume and a video personality
assessment designed by a third-party developer. Sarah explains how algorithmic scores
lose their meaning when they are decontextualized. In an anecdote she shared, Sarah described a situation where she had previously established rapport with an applicant whom the algorithm classified as possessing low leadership potential. Sarah explains: “This
woman was a single mom with a kid and put herself through school, while working two
jobs. Yes, she only had one internship, but to say she doesn’t have leadership skills is
outright wrong. She even talked about being a single mom in her video!” Sarah’s experience highlights how algorithmic systems will often accurately classify data based on a narrow definition of expertise (e.g., an elected position in an organization is classified as high leadership); however, AI systems are unable to account for context-specific situations that rely on tacit knowledge, which may raise concerns around recruiting ethics. In the
case of Sarah’s applicant, this lack of expertise within the algorithmic system caused her to
lose trust in the system’s veracity and effectiveness.
In addition to a mismatch between explicit knowledge represented by an
algorithmic system and the tacit knowledge possessed by human recruiters, a lack of
transparency about how an AI-system generates an output is also associated with a lack of
trust in a system’s output. Recruiters often questioned the accuracy of their AI-powered
systems by “taking results with a grain of salt” (Olivia, Talent Acquisition in Media, Los
Angeles) or rejecting outputs by explaining “how can I trust a score if I don’t know how it
was even calculated” (Raul, Recruiter in Media, Los Angeles).
Low oversight of work. The lack of organizational oversight was found to be a
justification for recruiters who disagreed with algorithmic recommendations to ignore or
not include scores in a candidate’s screening file. As previously mentioned, recruitment professionals often work across distributed teams with other recruiters in lateral positions. However, the degree to which recruiters felt comfortable resisting the
inclusion of algorithmic scores in the recruitment process was related to the lack of
surveillance or requirements to report algorithmic scores in candidate files. In contexts of
low oversight, recruiters screening candidates frequently expressed that they ignored sections of a candidate’s file that were automatically generated by the algorithmic system. Instead, they resorted to using their own expertise to screen an applicant. Christin (2017)
describes this process as “foot dragging,” or actively resisting or bypassing algorithmic
outputs as part of work procedures. In practice, the act of foot dragging often occurs in two ways: (1) recruiters actively ignore algorithmic outputs and conduct the applicant screening process by reviewing candidates using other tools like keyword searches for skills or attributes (e.g., previous work experience, GPA, etc.), review of previous email or phone rapport, and internal recommendations; or (2) recruiters ignore the automated recommendation and provide written or verbal feedback to others involved in subsequent hiring processes as justification for not adhering to the output from the automated screening tool.
Recruiters engage in this active form of resistance, or plainly reject an algorithm’s
recommendations, as a combined result of a lack of trust in an algorithm’s efficacy, as well
as a lack of managerial surveillance to ensure proper use of AI-powered hiring tools.
Recruiters who are not held responsible for a lack of adherence to an algorithmic system’s
recommendations by others in their recruiting team often freely engage in foot dragging
strategies to avoid using the system. The lack of surveillance, or monitoring of employees’ behaviors, gives recruiters license to exercise aversion toward technologies with which they may not agree.
Outputs from AI are not highly prioritized. Although recruiters’ technology use may not be directly overseen, those who challenge the efficacy of an algorithmic system are frequently required to make use of these automated systems. Within this group
(i.e., recruiters who oppose algorithmic recommendations), there is an acknowledgement
that the AI-powered score is a standard of the screening criteria used to evaluate
candidates. In most high-volume recruitment environments, each person who submits an application is assigned a score—derived from the materials they submit, including resumes, LinkedIn and social media profiles, written responses to application prompts, or general application data. Jill, a Talent Acquisition Recruiter for a food and beverage company in Atlanta, explains that the use and implementation of an AI-powered tool in her line of work is futile. She says:
I figured out that just because we’re told to look at the predictive
success score when inviting candidates to an interview doesn’t mean we
actually do that. They [recruitment managers] can say it matters and it's
important, and I do look at the scores, but it doesn’t drive my final decision.
That predictive success score is there just for show. I’ve been doing this for
years. There are other things that matter. The other thing that people need
to really understand is that the recruiter actually has a whole lot of weight.
We’re like a gatekeeper, we get you in and out of that job.
Recruitment professionals find themselves caught between exercising their own
work expertise and complying with the responsibilities and requirements of their job, even
at the expense of working with technologies that do not produce accurate and acceptable
results. Recruiters like Jill highlight that while their recruiting jobs require them to use AI-powered technologies, employees using these tools realize algorithmic hiring tools are implemented as a means for organizations to keep up with the latest innovations at work. Experiences like Jill’s exemplify current AI adoption trends across organizations, in which organizational leaders explain the need for their companies to adopt AI but do not know how AI can improve their work processes (Ransbotham et al., 2020). The lack of trust in AI systems, coupled with low surveillance yet a high need to evidence the use of these technologies, provides a space for recruiters to engage in behaviors that exert control over the output and interpretation of AI-powered tools within screening processes.
Employees intervene in AI-powered processes. Participants who reported experiencing challenges in utilizing algorithmic hiring tools during screening processes were more likely to express low confidence in the AI tool and also worked for organizations with low employee oversight that nevertheless placed a high premium on using AI-powered tools. These contentious encounters with algorithmic recommendations can be considered a form of algorithmic aversion (e.g., Burton et al., 2020). In the practice of
screening within recruitment processes, algorithmic aversion involves the misalignment of
expectations in a situation where the expertise performance of an algorithmic system is
perceived as untrustworthy, decontextualized, and unfitting by a human expert who is
embedded in the same work domain as the technology. In addition to passive rejection
strategies like foot dragging, algorithmic aversion corresponds with two active strategies
that human experts in this study often employed in situations of high misalignment of
expertise with an AI-technology: exerting control over the technology and creating the
illusion of technology acceptance through faux negotiations.
Exerting control over the system. A set of strategies that recruiters employ in
moments of algorithmic aversion, or rejection, has to do with what Christin (2017) describes
as “gaming,” or the manipulation of organizational rules to undermine the utility of an
algorithm. While gaming strategies have typically focused on how individuals manipulate
the work context where algorithms are used (e.g., in journalism, reporters manipulate
analytics by persuading editors to run their stories at high traffic hours [Christin, 2017]),
recruiters in this study described actions where they deliberately manipulate the data
which triggers new outputs from an algorithm. This process usually starts when recruiters
disagree with the output from an algorithm. In terms of screening, two prominent
moments emerge as opportunities for algorithmic gaming: reviewing applicants who score near a certain threshold, and reviewing applications from applicants with whom recruiters have established previous rapport.
Participants who engaged in gaming strategies often reported that they pay attention to applications that receive scores slightly below the threshold for acceptance. Scores that are near a threshold signal to recruiters that the applicant deserves a second evaluation against their own expertise. Ashley, a recruiter in Sales based in Los Angeles, reports that algorithms are trained narrowly and reject candidates even when they possess a strong fit with the position. She recounts her experience of
algorithmic gaming as a reaction to the narrow constraints of the tool’s screening process:
There are often times where candidates are rejected based on the
score and I have overturned it to accept them. So, I think an accepted score
is a 2.7, so there are people who are like a 2.5 or 2.6, I will do that for those
kinds of applicants. It is those people on the cusp. I’ll go back into the file to
see what the algorithm missed. Sometimes they don't have enough
internship experience, but they do a lot of volunteering. I would be remiss
not to change their score.
In this situation, Ashley describes how a lack of internship experience may have
influenced why the algorithm poorly screened an application. At this moment, Ashley is
able to invoke her recruiting expertise and alter the score of the applicant based on the
other information that is present in the applicant’s data file.
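Ashley’s account implies a simple, checkable decision rule. The sketch below is a hypothetical illustration of that rule, not any vendor’s actual system: the 2.7 acceptance threshold and the 2.5–2.6 cusp come from her description, while the function and constant names are assumptions introduced here.

# Hypothetical sketch of the cusp-review rule Ashley describes; the 2.7
# threshold and 2.5 review floor are from her account, all names are assumed.
ACCEPT_THRESHOLD = 2.7
REVIEW_FLOOR = 2.5  # cusp scores at or above this, but below the threshold, get a human look

def route_application(score):
    """Return where an application goes after algorithmic scoring."""
    if score >= ACCEPT_THRESHOLD:
        return "accept"        # algorithm and recruiter typically agree here
    if score >= REVIEW_FLOOR:
        return "human_review"  # e.g., a 2.5 or 2.6: the recruiter re-reads the file
    return "reject"

print(route_application(2.5))  # -> "human_review", like Ashley's cusp applicants

In practice, of course, the recruiter’s re-review draws on tacit knowledge (volunteering in lieu of internships, in Ashley’s example) that no such rule encodes.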
Recruiters are the direct intermediary between an applicant and the organization
during the hiring process. Part of a recruiter’s job is to engage with prospective applicants
and provide guidance through the applying, screening, and selection phases. Recruiters
report that they communicate with applicants through a variety of channels, including
phone, email, and video conferencing. In fact, many popular applicant tracking systems
(i.e., the databases that house all relevant applicant information) track all email
correspondence between applicants and recruiters.
Recruiters often establish rapport with applicants either informally through
phone calls or in formal channels like email. An alternative algorithmic gaming
strategy of modifying applicant scores is associated with the degree to which
recruiters have established rapport with a candidate. Monica, who is a Senior
Recruiter at a Financial consulting firm based in Chicago, exemplifies a situation
where her expertise, or what she describes as her “vibe check,” supersedes the
advice from a screening algorithm when she actively alters scores on behalf of
applicants.
I think that sometimes what is missing is context. Let’s talk about
leadership, for example. So maybe somebody doesn’t have a strong
leadership score because they grew up in a low income area and they were,
you know, working a job instead. And they were really successful at that, and
really successful at school. But maybe they don’t have all these extra
curriculars like their peers do, but they might be a really high fit. So they
might score low because we cannot quantify their experience. And the
reason I know this experience is because I speak with these applicants. They
are people. So anyone who speaks with me, and passes my vibe check, gets
an interview. It doesn’t matter their score. I can always add a miscellaneous
metric and bump that score up.
The information exchanged in interactions that recruiters have with applicants
enables recruiters to add new forms of tacit knowledge, or arguably, data points that are
factored into gaming strategies used when overriding an AI-generated score. Other recruiters shared how rapport-building experiences allow applicants to share “contextual information” (Monica) or “soft data” (Marc, recruiter in Healthcare based in Houston) that cannot be factored into computed scores.
Contextual information is data generated through applicants’ disclosures in conversations with recruiters. “Knowing what is best for the company” (Anne, a
recruiter in Legal based in Los Angeles) or “helping the hiring manager get what they really
need” (Alex, a Talent Acquisition Associate in Technology based in Denver) are common
phrases that recruiters use when defending their choice to reject and alter automated
scores to align with their own sense of expertise. In this process, recruiters gather
information about the applicant, assess the information against their own criteria for
success, and use their judgment and subjectivities when deciding how to override the AI-
system.
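Monica’s “miscellaneous metric” suggests one concrete form this override can take. The sketch below is purely illustrative: it assumes a composite score stored as named components that a recruiter can append to, an internal detail no participant described at this level; all values and field names are invented.

# Illustrative sketch of a recruiter-side override via a discretionary
# score component; the averaging scheme and all names are assumptions.
def composite_score(components):
    """Average the named score components into one screening score."""
    return sum(components.values()) / len(components)

applicant = {"leadership": 2.1, "experience": 2.4, "education": 3.0}
print(composite_score(applicant))         # -> 2.5, below a 2.7 cutoff

# After a phone call (the "vibe check"), the recruiter adds a discretionary metric.
applicant["miscellaneous"] = 3.8
print(composite_score(applicant) >= 2.7)  # -> True; the bump clears the cutoff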
Faux negotiations. All participants in the study reported they engaged in high-
volume recruiting efforts and used an automated hiring tool to screen candidates. After the screening process, qualifying job candidates are invited to interview at the
organization. This process varies across different organization types and recruitment
strategies. In some recruiting systems, all candidates that qualify are scheduled for an
interview with a hiring manager. An alternative approach to interview screening requires
that recruiters compile a slate of candidates that are “interview ready” (Gloria, Senior
Recruiter in Chemicals based in San Antonio) or “passed first review” (Leslie, Recruiter for
an education non-profit in Hoboken) and the slate of candidates are presented to a hiring
manager. In other words, when recruiters present a slate of candidates to a hiring
manager, they are providing a short list of candidates they have selected as qualified and
fitting for a position—the candidates are usually invited to participate in an interview with a
hiring manager or committee.
Recruiters often use slates as a loophole to bypass applicant rejections from an
algorithmic system. It should be noted that not all recruiters who use slates disclosed
engaging in this process, but a significant portion (n=8) of them described situations where
slates were used as a way to persuade or nudge hiring managers to select AI-rejected
candidates for a position. A common tactic that is used to elevate the standing of a rejected
candidate involves placing rejected candidates among other candidates that are blatantly
not a fit for the position among other candidates on a slate. The following describes a
situation where a recruiter uses a slate of rejected candidates to highlight one particular
candidate that they believe would be best suited for a position:
You don’t want to overwhelm a client with so many options [job
candidates to select from] that it almost feels like they’re having to do the
recruiting themselves. I’m in charge of taking those 200 applications and
getting them down to five or six interviews. So I already know what the
manager is looking for and I go out and find it. And if they don’t make it past
the cut [the algorithmic screener], but I know they are a good fit, I’ll pull
them out and put them next to a couple really bad files…. It’s about making
the job easier for everyone involved. It’s my job to find that needle in the
haystack for these managers. (Gloria, Senior Recruiter in Chemicals based in San Antonio)
Candidate slates present an opportunity for recruiters to bypass the expertise from
an AI-system and force their own expertise as the primary factor in screening decisions.
This behavior represents what is considered to be a type of faux negotiation about
expertise. Recruiters like Gloria acknowledge that their preferred candidates did not meet organizational-level screening criteria when evaluated by an AI-system. However, her actions, when presented to a hiring manager, reflect a staged screening process that places a once-rejected candidate in an advantageous position. An alternative perspective to
understand the situation of gaming the system is that recruiters acknowledge the
misalignment of expertise between them and the AI system, and gaming offers a way to
correct and coordinate expertise.
Recruiting is personal. Kellogg et al. (2020) describe a concept called algoactivism, which generally refers to the active resistance employees mount against algorithms. Recruiters who game the AI system engage in this form of algoactivism; however, there is a lack of empirical evidence that suggests employees engage in algoactivism out of personal
convictions. Of the 42 recruiters interviewed, ten recruiters openly discussed the
implications of race and gender within their line of work. Each of these recruiters
identified as a person of color (Black, Latinx, or Asian in this instance) and recruited in both
non-profit and private industries. This subset of recruiters described the need to place
diversity of race and gender as a priority in recruiting. Oftentimes, they admitted to placing candidates of color in advantageous positions by overriding algorithmic scores or placing them in candidate slates where hiring managers would find racial attributes complementary to diversity, equity, and inclusion (DEI) efforts. Andres, a recruitment manager in
a charity non-profit based in New York, describes a situation where he prioritized a
personal attribute he identified as part of the screening process.
I can’t imagine anyone saying this is a good strategy or that we are
encouraged to do this. I know that some folks inflate “level of experience” for
the sake of advocating for their people, because that’s the only way to do it
through Salesforce. We can’t control your GPA, we can’t control what
university you went to, but we can control experience. So, sometimes that’s
leveraged, and I admit, it's not the best way to do it. But when you become
emotionally invested in an applicant, you do it. You help out your own. I’m
not saying it’s the right thing to do, but I know that it’s not just happened to
me. Other people do it, too.
Like Andres, recruiters often use rapport building to connect with applicants and form a connection that outweighs the data applicants submit with their applications. In his case, he adjusts an applicant’s score for their level of experience since it is a malleable part of an application. This subset of recruiters demonstrates how personal ideologies
emerge as a form of tacit knowledge that can be invoked as an expertise mechanism to
override a trained organizational technology.
AI is a manageable challenge to expertise. Within high-volume recruitment
screening, several organizational-level factors are associated with the degree to which
experts experience algorithmic aversion. Experts who perceive an AI-system to have low efficacy often use their tacit knowledge to supersede the output of algorithms that are highly prioritized—experts are poised to act in an advantageous manner and exert control, when necessary, over an automated tool. Lastly, institutional knowledge is a form of expertise that is contextualized within data (i.e., applicant data) and is difficult to program and evaluate in AI-powered systems. Recruiters are experts in their domain. They assume direct responsibility for their recruiting actions and their effects on the hiring lifecycle.
In short, recruiters often view technology as an ancillary tool to aid their work. However, as the majority of recruitment professionals interviewed in this study report, “they know their job best” (Gloria). They are the gatekeepers of their expertise, not the technology.
Table 5: Conceptual Matrix: AI as Ally or Rival

AI as an Ally
Action with AI system: Accept recommendation
Tactics: (1) frame recommendations as aids, or an assistant; (2) use AI recommendations as a check for work; (3) perceive AI results as agnostic
Overall perception of AI: alignment of expertise between a recruiter and the output of an algorithmic hiring tool (related codes: complementary, matches what they would do, helpful for screening)
Level of Trust: High (related codes: consistency of results, becomes the norm, a part of the job, seamless to use)
Level of Oversight: High (related codes: managers review work, team check-ins, tag-team recruiting, help to cover bases)
Level of AI Priority: Low (related codes: multiple layers to applicant, holistic view of applicant, indifferent about use, AI part of screening)

AI as a Rival
Action with AI system: Reject recommendation
Tactics: (1) exert control over the AI system through gaming strategies: (a) reviewing applicants on the threshold, or cusp, and (b) reviewing applications with established rapport; (2) engage in faux negotiation: (a) highlight preferred applicants in a candidate slate and (b) prioritize diverse candidates
Overall perception of AI: misalignment of expertise between a recruiter and the output of an algorithmic hiring tool (related codes: negative, hard to understand, challenging to justify)
Level of Trust: Low (related codes: divergent to tacit knowledge, people are not scores, gut feelings, rapport building shows context, gaps in knowledge about AI)
Level of Oversight: Low (related codes: lonesome team member, managers trust their recruiters, no checks, gatekeepers)
Level of AI Priority: High (related codes: all files have scores, minimum criteria for screening, use of automatic rejections, selection models designed for AI)
CHAPTER SIX: HUMAN EXPERTISE CONFIGURES THE ALGORITHMIC PLAYING FIELD
In this chapter, I present a set of empirical findings that explain how human
expertise is advantageously invoked at the onset of human-technology decision-making
which inadvertently circumvents the impact of algorithmic processes. First, I describe how
recruiters often will utilize their expertise during applicant sourcing, an initial hiring stage
whereby a recruiter invokes different forms of knowledge to find, screen, and attract
specific candidates who are later screened by an algorithmic system. I focus on the
meanings and actions of recruiters when they draw upon their expertise and influence the
nature of input data (e.g., the applicants they source for a position). I term these actions a form of anticipated expert screening, and I identify instances when this behavior is reinforced by recruiters.
The Sequence of Evaluation in Hiring
In organizations, an algorithm is often understood as a software tool that operates on a set of programmed procedures to transform input data into a desired output (e.g., a ranking, score, or classification; Kellogg et al., 2020). At a general level, the
participants interviewed in this study share this conceptualization about the algorithms
they use at work. Up to this point, this dissertation has explored the ways that recruiters
use algorithms during the second stage of the hiring process, or what is commonly
referred to as screening. In practice, screening algorithms use input data from applicants
to generate output data (e.g., algorithmic scores) which reflect a standard set of criteria
designated by the organization. However, recruiters often disclose that due to the lofty
demands of high-volume recruiting, they often place emphasis on external searches and
102
outreach strategies to identify potential successful job applicants before an application is
submitted. This process is referred to as sourcing, or the act of outreach to create and
maintain a database of likely applicants for a position. The majority (n=37) of the recruiters
interviewed in this dissertation disclosed that they engaged in sourcing processes. The
remaining recruiters who did not engage in sourcing candidates often described that their
recruiting departments had designated employees whose primary job responsibilities were
to source candidates for open positions.
Recruiters source candidates in a variety of different ways. They often rely on
finding candidates through internal company referrals, external recommendations from social networks, in-person or virtual job fairs and trade events, or from job applicants who had previously applied to a position at the organization. In addition to these
strategies, the most common way to source candidates emerged from using LinkedIn, or
more specifically, LinkedIn Recruiter. This platform acts as a kind of applicant tracking
system (i.e., a database that can store data on potential candidates) where recruiters can
use the website’s functionality to find prospective job candidates that are looking for work.
Below is a figure of LinkedIn Recruiter that displays the user interface seen by a recruiter.
Figure 7: Demo of LinkedIn Recruiter
Figure 8: Standard filters on LinkedIn Recruiter
Note: These images were provided by LinkedIn Recruiter as part of their demo.
Anne, a recruiter in Legal based in Los Angeles, directed me to this image during an
interview.
Generally, recruiters perform keyword searches for various relevant factors
pertaining to prospective job candidates. Searches often filter for applicant characteristics
like required skills for a job, current type of employment, education level, current location,
and expected salary. Recruiters utilize filters as a type of preliminary screening process.
For instance, applicants that list a specific expected pay range that lies outside of the range
designated for a position are often eliminated from further consideration and these
applicants are not placed into the hiring pipeline. (Within sourcing, a hiring pipeline refers to the database of job candidates who meet the baseline criteria for a job position; baseline criteria include job requirements such as permission to work in the U.S., a college degree, or certain types of required skills.) Filtering tactics are often used to “not waste anyone’s time when you bring them in for an interview” (referring to recruiters, applicants, or hiring managers; Ashley, recruiter in Sales, Los Angeles) or to “not include
someone that is clearly under or overqualified for a job or someone who just doesn’t seem
like a good fit from the onset” (Anne). In general, sourcing candidates allows for the
cultivation of an applicant pipeline that eases work demands by reducing the time to fill a
position and by configuring a pipeline of credible candidates for further screening
processes.
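The preliminary screening logic recruiters describe amounts to a handful of predicate checks over prospect records. The sketch below is a schematic reconstruction, not LinkedIn Recruiter’s or any applicant tracking system’s actual interface or API; the candidate fields, cutoff values, and names are assumptions introduced for illustration.

# Schematic sketch of filter-based preliminary screening during sourcing;
# candidate records, fields, and cutoffs are invented for illustration.
candidates = [
    {"name": "A", "skills": {"python", "sql"}, "expected_salary": 95000, "miles_from_office": 12},
    {"name": "B", "skills": {"sql"}, "expected_salary": 160000, "miles_from_office": 5},
]

def passes_filters(c, required_skills, salary_max, max_miles):
    """True if a prospect survives the recruiter's preliminary filters."""
    return (required_skills <= c["skills"]            # has every required skill
            and c["expected_salary"] <= salary_max    # pay expectation within range
            and c["miles_from_office"] <= max_miles)  # close enough to the job

pipeline = [c for c in candidates if passes_filters(c, {"python", "sql"}, 120000, 30)]
print([c["name"] for c in pipeline])  # -> ['A']; B falls out on salary and skills

Note that prospects removed at this point never reach the algorithmic screener at all, which is the sense in which such filtering precedes, and configures, the formal screening process.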
Sourcing Accelerates Legitimized Screening Processes
Reducing the time to fill a position is a common benefit from filtering candidates
during the sourcing process. Javier, a Recruitment Manager for a Technology company in
Los Angeles, describes how sourcing eases the demands from recruiting a large number of
candidates in a short period of time. The following quote describes a situation where Javier
was filling positions for a summer internship program where he had to fill 15 vacancies
across three departments within a month.
You know, sometimes more is not better. If you cast a wide net, you
run into the problem of having to weed through so many irrelevant people
[referring to applicants] and that takes time, even with all these technologies.
But if you work backwards from the start, that saves you so much time!
[pause] That starts with finding the people who you want to apply and get
them to submit their stuff [their application]. If you go into the game with an
idea of what you want, it makes the rest of the process much more easier—
and faster. No one is trying to do more work than they need to.
Sourcing provides an opportunity for recruiters to exert control over the screening
process from an initial stage. Technology, largely in the form of sourcing tools, is leveraged
to identify promising candidates and encourage or “cultivate” (Andres, recruitment manager in Education based in Brooklyn) them to apply for positions. Roxie, a technical recruiter for a technology firm in Seattle, describes the benefit of sourcing as a “win-win for everyone involved”: this form of preliminary screening refines an applicant pool for the relevant skills, experiences, and qualifications that hiring managers rely on when interviewing and selecting potential hires, and it reduces the time to hire for applicants who are under consideration and for the recruiters who manage their applicant files.
I refer to these sourcing strategies as moments of anticipated expert screening as
they represent instances when recruiters exert different forms of expertise to screen
candidates prior to an application process. In addition, applicants are screened in this
preliminary stage before they are evaluated against a set of normative organizational
criteria with AI-powered hiring tools. Recruitment professionals in this study reported
engaging in anticipated expert screening in two ways: Recruiters source to match
prospective applicants to the demands of the hiring organization by identifying applicants
that align with (a) explicit and (b) tacit criteria about the organization.
Using Expertise to Source For Explicit Criteria
Recruiters are tasked to fill requisitions (i.e., open positions) that often require an
applicant to possess minimum qualifications for a position. Skills and professional
credentials are signals recognized by experts in order to qualify an applicant for a role. In
practice, recruiters are provided explicit skills and qualifications from hiring managers at
the onset of filling a requisition. Using filters built into their applicant tracking systems, or
other sourcing software like LinkedIn Recruiter, recruiters input these skills and
credentials as filters to identify potential applicants. Victoria, a recruiting coordinator in
Legal based in Denver, provides more detail about this process:
Although we get so many applications per position, sometimes I feel
like I’m a headhunter. Thankfully, I like to think I’ve gotten pretty good at
finding what I need. Right now, I’m working on filling three reqs
(requisitions), but the hiring manager wants people who are bilingual in
Spanish, live within 20 minutes of the law firm in Downtown, and do personal
injury (a type of law specialty). Can you imagine how long that would take if I
had to look up every single applicant in the system? I just plug all that into
Salesforce, and voila, I reach out to that pipeline and can start on getting
those reqs filled.
In this instance, Victoria describes how she primarily begins filling requisitions by
using sourcing functions before she posts jobs publicly. Within her applicant tracking
system, Victoria is able to harvest a list of relevant potential applicants and conduct
outreach efforts to encourage that specific group of people to apply for open positions.
Like Victoria, other recruiters engage in this overt process of sourcing candidates by
searching for education qualities, specific technical skills, distance from a location, and
employment history. These candidates are integrated into a specialized applicant pipeline,
where their application progress is monitored from beginning to end. (While there is much to be said about the work recruiters engage in during cultivation, the hiring stages after an applicant has applied to a job but has yet to be selected for a position, this study’s scope focuses on experiences during the applicant sourcing stage.)
Sourcing to meet application quotas. Recruiters often receive direct requests from their managers to fulfill certain demographic quotas in their recruitment efforts. More
specifically, it is not uncommon for recruiters to be expected to ensure racial and gender
diversity among applicants that apply to the requisitions they manage. However, since U.S.
employment laws forbid employers from discriminating against protected classes such as
gender and race when hiring, recruiters often engage in search tactics to identify potential
applicants that fit specific racial or gender characteristics. Jessie, a Talent Acquisition
Manager in Food and Beverage in Atlanta describes the degree to which hiring managers at
her workplace require minimum quotas for diversity in their hiring process.
So the AI we use will only reject them [applicants] if they have a GPA
lower than 2.5. In fact, if someone applies and has lower than a 2.5, they
won’t be put into Salesforce and they won’t count for goals. On top of that,
we also have to meet an internal diversity benchmark. So we have to source a
specific number of Latino, African American and Asian American folks. And
then we also have a specific breakdown of goals based on race and ethnicity.
It is so specific. But that’s what the numbers need to be, so I need to make
sure my numbers apply [referring to management requesting demographic
quotas].
Footnote: To date, there is no U.S. federal legislation that accounts for algorithmic practices related to employment processes. In the U.S., organizations are mandated to comply with Title VII of the Civil Rights Act of 1964, which forms the basis to prohibit employment discrimination with respect to protected classes (i.e., race, gender, national origin, etc.). In addition to federal law, organizations also adhere to state-level legislation on employment. However, algorithmic technologies, which are tools that facilitate the process of hiring, are not considered in current widespread legislation. For a comprehensive discussion about algorithms and employment legislation, see reports by Raghavan et al. (2020) and Raghavan & Barocas (2019).
Jessie later goes on to describe how she, like other recruiters, often segments the general applicant pool by crassly identifying racial or gender features of candidates from
their LinkedIn profiles or other available applicant data like resumes. Other recruiters
reported similar forms of sourcing strategies related to meeting demographic quotas.
Monica, a senior recruiter in finance based in Chicago, shared how she identifies diverse
candidates by performing Boolean searches (e.g., a type of search where keywords and
operators [AND, OR, NOT] are used to produce refined results) with gender or racial
descriptives as proxies for a particular race or gender designation. She details that she
typically conducts searches for “Black Management Association” (to identify black business
graduates) or “Women in Business” (to identify college graduates that were women who
majored in business). Other recruiters reported how diversity quotas extend beyond gender or race: searches for first-generation college student status and religious diversity were also reported as criteria used to source different applicant demographics.
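Monica’s queries can be made concrete with a small evaluator. The sketch below is a simplified stand-in for the Boolean search syntax recruiters use in tools like LinkedIn Recruiter; the profile strings and the matching function are assumptions for illustration, and real platforms implement far richer query languages.

# Simplified sketch of the Boolean (AND/OR/NOT) keyword searches recruiters
# described, run over free-text profile fields; profiles here are invented.
profiles = [
    "MBA, Black Management Association, consulting internship",
    "BBA, Women in Business chapter president, sales experience",
    "BS Computer Science, chess club",
]

def matches(profile, any_of, none_of=()):
    """True if the profile contains at least one any_of phrase (OR)
    and no none_of phrase (NOT)."""
    text = profile.lower()
    return (any(p.lower() in text for p in any_of)
            and not any(p.lower() in text for p in none_of))

# A proxy search like Monica's: "Black Management Association" OR "Women in Business"
hits = [p for p in profiles
        if matches(p, ["Black Management Association", "Women in Business"])]
print(len(hits))  # -> 2 of the 3 invented profiles match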
Navigating Recruiting Within the Current U.S. Racial Movement. While this
dissertation’s main findings place an emphasis on how recruiters utilize their expertise
when engaging with AI technology at work, this analysis would be remiss not to make
mention of how the current U.S. social climate surrounding race and gender has had implications for the ways that organizations perform human talent acquisition initiatives.
Across the stages of data coding, I made note of how recruitment professionals who work
in non-profit organizations were more open and expressive when discussing diversity and
inclusion in recruitment. Non-profit professionals often point out that they engage in
“mission-aligned” recruiting (Will, External Recruitment Manager in Education, Brooklyn),
which refers to how their organizations prioritize diversity in recruiting, as well as in their
overall mission. In contrast, recruiters who work in private industries often explained the
challenges in meeting diversity and inclusion expectations. Mel, a Division Director of
Recruitment for a Legal Staffing Firm in Los Angeles, comments on the misalignment
between the expectations and practices of diversity and inclusion initiatives within
recruiting. She adds:
Sometimes we have some managers say they want a diverse hire, and
we navigate that carefully based on the company and what they say they
want, and what they actually want. It’s funny that managers say, ‘Oh, we want
a diverse hire,’ meaning they want someone that doesn’t have white skin. But
when it comes to diversity of thought, diversity of background, and in all the
other senses of the word besides what you look like, they are not as thrilled.
They want someone who doesn’t look like them, but has been to the same
school as them, or has the same life experiences as them, and that’s not
diverse. So people get caught up, especially nowadays, in diverse hiring— but
they’re not looking beyond skin tone or gender.
When fulfilling requisitions with specific criteria, recruiters often find themselves in
situations where their understanding of an explicit trait is different than that of the
manager who decides on the final hiring decision. Recruiters take note of invisible requests, or requests from managers that are often unclear or reference opaque characteristics of applicants, which require experts to engage with tacit understandings of their workplace norms, policies, and expectations in order to successfully execute their job search. These
forms of invisible requests can be considered types of tacit knowledge that are delegated
by hiring managers for human experts to work on during a preliminary stage of sourcing.
Job candidates who later submit an application are then subjected to additional forms of
explicit screening through AI technologies (e.g., AI hiring tools that parse resumes,
calculate scores, or classify traits), as well as others in the hiring process. The following
section elaborates on this process of invisible requests, or, put plainly, on the moments when recruiters have to read between the lines of their managers’ explicit and implicit expectations while sourcing, screening, and preparing candidates for the selection process.
Using expertise to source for tacit criteria. Like Mel, other recruiters often express
how their work assignments require them to source candidates for requisitions that call for
applicants to meet certain opaque criteria from hiring managers. Recruiters often decipher
invisible screening requests by applying their tacit understandings about an organization.
Code words, or descriptive phrases that signal specific applicant attributes, are employed by managers when requesting candidates with particular types of characteristics. For
instance, some recruiters who fill requisitions for international positions are told to “find
people that would feel comfortable there” (Roger, Executive Recruiter in Finance based in
Manhattan). In this example, Roger, who recruits for a wealth management firm, describes a moment when, while he was filling requisitions based in Asia Pacific (specifically Taiwan, South Korea, and Thailand), a hiring manager requested that he prioritize sourcing applicants of Asian descent before interviews for these positions.
On the other hand, recruiters also use their expertise to match tacit understandings
of a position on behalf of applicants. Matching an applicant with the “fit” of an organization
is one primary strategy that recruiters employ during sourcing. Anne, a recruiter in Legal
based in Los Angeles, shares how sourcing often requires her to use her judgement to
attract prospective applicants that can fit the culture of an organization she is staffing for.
I have to match the hard skills, right, but I also have to match the
culture fit. If I'm placing someone at Tik Tok (a video sharing social media
company) versus I’m placing someone at a law firm, one of whom, the name
partners, worked for Donald Trump. The people I’m going to send on
interviews to those places will not be the same. I have to be cautious about
that and LinkedIn Recruiter can help screen for the best candidate, but when
my best candidate has a “I support of Black Lives Matter” banner on their
LinkedIn profile, I’m not sending them on an interview with a right-winged
law firm.
In this situation, Anne highlights that while sourcing and screening technologies can
often be beneficial for screening on explicit criteria (e.g., “hard skills”), her tacit knowledge
about an organization often supersedes the expertise that is practiced by the AI-
technologies she uses. These forms of tacit knowledge are actualized through signals from
applicants in their application materials and profiles afforded by the features in social
networking sites (i.e., LinkedIn profiles, e.g., Leonardi et al., 2021). Recruiters often form
assumptions about applicants based on location (e.g., using mapping functions to map
applicant zip codes to determine if an applicant lives too far from a job), professional or
personal interests (e.g., LinkedIn offers applicants the option to list personal and
professional interests that are often reviewed by recruiters when sourcing candidates), and
types of experiences (e.g., previous job types, previous employers, or type of education). For example, Teddy, an applications recruiter lead in Technology based in Los Angeles, described how he uses a map function in his applicant tracking system, Salesforce, to map applicants’ zip codes; if an applicant lived too far from the office, he considered that a negative quality when sourcing prospective applicants.
Sourcing supports organizational needs in light of formal screening processes. The
nature of recruiting is often conceptualized as a hiring funnel (Bogen & Rieke, 2018;
LinkedIn, 2018). The initial stage within the hiring funnel involves attracting qualified talent and ensuring that they apply for a position—often referred to as sourcing.
This chapter detailed sourcing strategies that recruiters engage in when cultivating a
robust pipeline of candidates. However, these strategies demonstrate how the practice of
human expert screening precedes any form of algorithmic screening. This is of interest to
note as organizations often legitimize their screening processes as objective and
accredited through the purchase and implementation of AI hiring systems. In practice,
algorithmic screening scores of applicants can arguably be conceptualized as a configured
reflection of initial screenings produced by humans during the sourcing process.
As a whole, these empirical findings reveal that the sequence of hiring stages is often arbitrary and dependent on the organization, its needs for hiring, and its recruiters’
expertise. In the following chapter, I discuss how these findings relate to the ways
recruitment experts account for their technology use, in particular AI-powered
technologies, and the overall impact these practices have on our current theoretical and
practical conceptualizations of artificial intelligent technology at work.
CHAPTER SEVEN: THE EMBEDDEDNESS OF EXPERTISE AND AI USE IN
ORGANIZATIONAL ECOSYSTEMS
This chapter facilitates a discussion of this dissertation’s empirical findings about
the nature of expertise within algorithmic-aided decision-making. In the following
sections, I provide implications for theory regarding how multilevel expertise of both
humans and technologies is a product of a negotiated sociotechnical trading zone within
situated organizational contexts. I provide an explanation of my work’s findings in relation
to our current understanding of research around artificial intelligence use at work. Finally,
I offer theoretical and practical considerations for future research and practice.
• • •
When we think about AI development, we need to think of how to design our future
with AI, not the future of AI.
-Francesca Rossi, IBM Research, USA, President Elect of Association for the
Advancement of Artificial Intelligence, quoted from her lecture at the University of
Southern California’s Symposium on Equity and AI in April 2021.
Algorithms in Practice: A Focus on the Organizational Context
Hiring is rarely a single decision. In high-volume recruiting, the selection or
rejection of an applicant for a position is the culmination of a series of decisions made by
humans, but largely informed by and mediated through algorithmic hiring tools. This study
finds that AI-powered hiring tools are useful predictive technologies that can complement
the expertise of recruitment professionals during sourcing and screening stages of the
hiring lifecycle. However, the empirical findings of this dissertation also accentuate the
complexity of relationships between recruiters and the AI-hiring systems their
organizations provide to aid in decision-making needs. This relationship hinges on how the
expertise of an AI system is negotiated and accounted for by human experts who are
embedded within organizational social structures of norms, policies, and ideologies.
Theoretically, these current findings can be used to re-conceptualize sociotechnical
relationships between experts and AI systems as being part of a human-AI trading zone of
expertise (e.g., Ananny, 2013; Galison, 1997; Lewis & Usher, 2014). Trading zones suggest the
importance of context-specific factors inside organizations that permeate across the
multilevel performance of expertise in exchanges between humans and technologies. The
above quote from Francesca Rossi, a global leader in the study of AI, echoes a careful, yet
powerful distinction about the use of AI: If organizations are poised to use AI systems
meaningfully, they should be mindful of the contextual factors within their workplaces that
can impact the effective adoption of AI technologies at work. In the following section, I
offer insight based on my findings to explain how institutional-specific norms may
influence the multidimensional relationship of expertise between AI systems and human
experts through negotiations within a trading zone.
Algorithms do not exist in a social vacuum— the organizational contexts in which
sociotechnical systems are adopted matter for how human-machine interactions emerge,
are sustained, and change the nature of work (e.g., Barley, 1986; DeSanctis & Poole, 1994;
Fulk, 1993; Leonardi, 2012; Orlikowski, 2000). The findings in this study demonstrate how
the use and reception of algorithmic systems occur within dense institutional and personal
social structures. The role of situated organizational structures becomes salient during a human-AI trading zone when experts attempt to coordinate and align their expertise with
that of an AI expert system’s recommendation. In this dissertation, I set out to investigate
how recruitment experts account for the use of AI-powered technologies that are central
to their decision-making processes. Through this research, it became apparent that
recruitment experts often acknowledged the technical opacity or lack of understanding
about the technologies they used. Recruiters turned to creating working theories of
explainability, or what sociotechnical theorists refer to as folk theories, in attempts to
make sense of their working relationships with their AI-powered tools (Eslami et al., 2016;
Liao & Tyson, 2021). Folk theories about AI may have motivated how experts developed
complementary or incompatible associations with the algorithmic tools they use at work.
An initial set of findings, showcased in Chapter 5, demonstrates how organizational factors like the degree to which decisions are monitored, the level of prioritization of AI use, and the efficacy of technological outputs may also have played an additive role in how experts formed meanings of technology use.
Negotiating Expertise in the Recruiter-Algorithm Trading Zone
Working with AI is an Alignment Process in Action
It can be helpful to situate these findings within our previous conceptualization of the recruiter-algorithm trading zone (e.g., Ananny, 2013; Galison, 1997;
Lewis & Usher, 2014). In recruiting, within this trading zone, both recruiters and AI-
powered technologies engage in negotiations, or trade over recognized forms of expertise.
These negotiations are enacted within the context of the organizational structure that
espouses the AI technology. The goal of a trading zone between human and AI-powered
experts is to achieve coordinated alignment for decision making processes. In other words,
human experts should achieve a match between their expertise and that of the AI hiring
system. However, in my first set of findings, I find that two opposing groups of recruitment
experts emerged from the human-AI trading zone with different types of perceptions
toward their AI hiring tools. Recruiters either viewed their relationship with an algorithmic
hiring tool as an ally, or antagonistically, as a rival. Inside the recruiter-AI trading zone,
achieving high levels of alignment between human and AI experts is contingent on three
organizational factors: the level of trust in AI-generated output, the degree to which
employee tasks are monitored, and the amount of prioritization of AI technology for tasks.
Trading Zone Alignment. Across both types of AI-human relationship outcomes
(i.e., allyship or rivalry), two organizational-level factors are associated with the formation
of these perceptions: employee oversight and the emphasis of AI-tools in the decision-
making processes. Organizations that adopt tight control, or oversight over their
employees, often impose policies of bureaucratic surveillance and tracking that have
restrictive consequences for the actions of employees. Recruiters working in organizations
with high levels of oversight engage in behaviors that are reflective of the expectations
they are placed under. Leonardi & Treem (2020) propose the term behavioral visibility to
depict similar moments of managerial control and covert monitoring through technology
use. The constant sense of surveillance often leads employees to police and over-monitor
their behavior, in turn, creating situations where they fully comply with technology use
(Bailey et al., 2019; Kellogg et al., 2020).
Kellogg and colleagues (2020) term this behavior algorithmic recording, or the process by which employees are subjected to technical control because of bureaucratic expectations and comply with strict adherence to policies of technology use. In practice,
recruiters who formed a complementary relationship with algorithmic tools often reported
having to “put it in the file” (Ahmed, Recruiter in Real Estate, Los Angeles) for upper
management to merely demonstrate that the technology was being used in practice.
Moreover, the degree to which algorithmic tools were prioritized or emphasized in
decisions added to the complexity of how recruiters form their relationships with these
technologies.
Recruiters in this study who experienced a sense of allyship with AI-hiring tools often reported that while their workplaces required the use of AI for reporting purposes, the actual weight that these technologies had on decision-making processes was low.
When recruiters use and follow the recommendations from algorithms, they are seen to be
engaging in work processes that align with the expectations of management and the
organization at large. In turn, the use of an algorithmic system, although required, can be
seen as an action purely done as a formality. In her book, Metrics at Work: Journalism and
the Contested Meaning of Algorithms, Christin (2020) offers an explanation about these
actions when she argues that the use and interpretation of algorithms in workplaces
“should be understood as contested symbolic objects” (p. 153). In a recruiting context, I suspect that algorithmic use becomes part of a symbolic performance of expertise that is dictated by the meso- and macro-level features of the larger institution that recruiters belong to. Adding to this, for this subset of faithful technology users, the use of AI technologies is mandated but not prioritized; thus, when they engage with the output from an algorithm, their expertise is rarely challenged or called into question.
Findings from this study suggest that experts form an alignment of expertise
between their own knowledge and that of the technology. Algorithmic outputs are interpreted as matches or checks on their work. Recruiters who accept recommendations from AI often view the tool as aiding efficiency and saving time when processing high volumes of data. Theoretically, among allied users of AI technology, less
negotiation of knowledge is exchanged within a trading zone between the recruiter and
algorithm. Moreover, algorithms that produce outputs that provide clarification or
verification about explicit knowledge often result in recruiters accepting those
recommendations. For example, when algorithms produce outputs that are verifiable, such as an applicant’s GPA, type of university, or number of work experiences, this form of explicit knowledge is syntactically (e.g., using the same words) and semantically (e.g., having the same meaning) verified between humans and AI.
Through time, these practices could become second nature to recruiters and a
sense of trust may be formed between the AI and human experts if future interactions are
successful (i.e., accuracy of AI is maintained). Thus, this type of relationship can be
considered a sustained form of algorithmic appreciation (Logg et al., 2019). Logg and
colleagues (2019) advance a term called theory of machine which “requires people to
consider the internal processes of another agent” (p. 100). This may translate in practice
when recruiters internalize and enact folk theories of algorithmic use based on prolonged,
sustained interactions with AI. One possible reason for this form of alignment between
human and AI expertise is that recruiters may not rely much on their own tacit knowledge
when completing parts of their job. In turn, the AI expert system generates outputs that are
explicit and concrete, with little room for misinterpretation between a human expert and
an algorithmic system.
Trading Zone Misalignment. In contrast to experts who use algorithms to
complement work processes, this study placed an emphasis on understanding the
mechanisms associated with experiences of algorithmic aversion (e.g., Burton et al., 2020;
Dietvorst et al., 2015, 2018). Experts who rejected outputs from an algorithm reported perceiving a lack of accuracy or trust in the AI system’s results. Adding to this, algorithmic aversion was associated with experts working in a complex organizational system where AI hiring tools were expected and prioritized for use, but with little oversight over their implementation. In turn, experts who experience forms of algorithmic aversion tend to
engage in strategies that exert control over the AI system by manipulating or gaming the
output it produces. On the other hand, experts may also take advantage of the lack of
control within their organizations and engage in subversive practices such as faux
negotiations to align their expertise with that of the AI system.
The actions of this set of recruiters who view AI as a rival can be explained by
understanding the type and capabilities of the AI systems these experts use to inform their
work. In other words, through sustained use of AI technologies at work, human experts
may have developed negative folk theories about the efficacy of the hiring tools they use.
After all, the current widespread use of artificial intelligence is in a stage referred to as
narrow AI (also known as weak AI). Narrow AI refers to the powerful yet limited
capabilities that AI technology currently performs (Davenport et al., 2017). Algorithmic
hiring tools often reflect this form of narrow AI as they are developed to simulate human
cognition through codified, repetitive, information-processing tasks (Ridhawi, Otoum,
Aloqaily, & Boukerche, 2021; Shestakofsky, 2017). In other words, these forms of
technologies can represent human intelligence, but lack the sentient or conscious ability to
rationalize decisions like a human. Current AI lacks the tacit knowledge capabilities that
recruiters need to perform fundamental screening and sourcing processes. In practice, if
an AI technology underperforms in a high-volume recruitment situation where recruiters
need to process hundreds of applications in a short period of time, recruiters may choose
to disagree with and manipulate the AI technology to align with their own expertise.
In the context of screening job candidates, recruiters often disagreed with the AI’s
predictions about subjective human qualities like leadership. For example, AI systems are
usually trained to identify a series of keywords, assign each occurrence of the word a
weight, assess the word frequencies that occur in an applicant file, and then calculate a
score for a particular attribute. However, when recruiters engage in this same process,
they can widen their decision search space, or the degree to which they gather evidence
from multiple knowledge sources to inform their decisions. In widening their decision
space, recruiters in this study arrived at completely different assessments of a candidate
compared to an AI expert system. This phenomenon can be explained by how recruiters
rationalize a decision about an applicant by drawing on different knowledge domains
(e.g., resumes, personal anecdotes, previous interactions with an applicant).
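As a concrete illustration of the keyword-weighting routine described above, consider the following minimal sketch in Python. The keywords, weights, and scoring scheme are hypothetical illustrations of this general approach, not the internals of any specific hiring tool used by participants:

```python
import re
from collections import Counter

# Hypothetical weights a vendor might assign to keywords for a "leadership" attribute.
LEADERSHIP_WEIGHTS = {"led": 2.0, "managed": 1.5, "mentored": 1.0, "directed": 1.5}

def score_attribute(application_text, weights):
    """Score one attribute by summing weighted keyword frequencies in an applicant file."""
    tokens = re.findall(r"[a-z]+", application_text.lower())
    counts = Counter(tokens)
    return sum(counts[word] * weight for word, weight in weights.items())

resume = "Led a team of five engineers; managed budgets and mentored interns."
print(score_attribute(resume, LEADERSHIP_WEIGHTS))  # 4.5 for this example
```

A routine like this can only count what appears on the page; it has no access to the anecdotes, prior interactions, or other knowledge domains that recruiters draw on when they widen their decision search space.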
The inability for an AI system to widen its decision search space amplifies the
misalignment between a human expert and AI expert system. Given that the nature of a
human-AI trading zone begins unidirectionally (i.e., the human expert starts the trade with
the AI system), during moments of misalignment, human experts actively disagree with the
AI to assuage the incompatibility between the two forms of expertise. In turn, I suspect
that acts of disagreement often lead recruiters to exert control over the AI system, thus
combining both types of expertise to generatively arrive at a type of alignment between
human and technology. While AI can mimic human judgment in its interpretation of data
and generation of outputs, judgments about human qualities are often the “most vexing to
automate as those demand flexibility, judgment, and common sense—skills that we
understand only tacitly” (Autor, 2015, p. 11).
Deepening our Knowledge of the Human-AI Trading Zone
When screening for job applicants, recruiters evaluate the output of an AI hiring tool
against their own set of expertise. Multidimensionally, algorithms possess (and their
outputs represent) a set of expertise that human experts acknowledge when evaluating an
automated output. If we return to the recruiter-algorithm trading zone to describe this
process, human recruiters are responsible for initiating the trade within the human-AI
trading zone system. Human recruiters are in an advantageous position to discern
knowledge presented by the AI system by creating, contesting, or maintaining the
coordination of knowledge. However, this process is largely influenced by the
organizational structures that enable it to exist in the first place (Christin, 2020; Kellogg et
al., 2020). As previously mentioned, human experts often reach consensus with algorithms
when explicit knowledge is negotiated, thus allowing for a match in technical
expertise and the arrival at a successful trading agreement. However, when more
subjective forms of knowledge or knowledge related to soft skills are negotiated, human
experts are more likely to fail to reach consensus with an algorithm’s interpretation and
engage in strategies to impose their expertise over the AI system. These subversive
strategies raise critical questions about the nature of AI use among knowledge experts.
More specifically, they point to a lack of transparency among workers, managers, and
developers of AI technologies when systems are easily manipulated and different forms of
expertise can be imposed over others, sometimes even without notice. Furthermore, these
behaviors are couched within a rhetoric of treating AI as a cold, objective arbiter of
decisions (Kellogg et al., 2020; Tambe et al., 2019), one that cannot be trained to
cognitively rationalize a decision.
Exploring Professional Boundary Identities in the Human-AI Trading Zone
As seen in both screening and sourcing processes, applicants who would otherwise be
denied the opportunity to advance in the hiring process are often supported by recruiters
despite being rejected by an AI system. This support is primarily derived from
recruiters engaging their tacit knowledge within a multilevel expertise trading zone. The
data from this study indicates that most recruitment experts have delineated expectations
about what an AI expert system can offer to a trading zone. Conceptually, when tasked to
arrive at a decision in conjunction with the results from an AI system, human experts hold
prescriptive expectations about how artificial intelligence should be used in
decision-making.
In hiring, the outputs generated by algorithms are used by human experts to verify
and uphold explicit knowledge that is presented in a concrete, syntactic form. For
example, recruiters are unlikely to disagree with predictive metrics when they directly
parse for specific information within a resume, such as an applicant’s GPA, employment
eligibility, or types of qualifications. In contrast, human experts may perceive an AI system
as inferior on matters that can be interpreted through different semantic meanings. For
instance, when an algorithm predicts an applicant’s average fit with an organization, a
human expert’s understanding of what “fit” means may semantically (e.g., the meaning of a
concept) or pragmatically (e.g., how a concept is put into practice) be misaligned with the
machine’s perception of fit.
In this study, during high levels of expertise misalignment (within screening and
sourcing) between recruiters and their AI counterparts, recruiters detailed how they
engaged in strategic actions such as data manipulation, ‘gaming the system,’ or
misrepresenting input data as attempts to alter the output generated by an algorithm—
ultimately, these actions were performed to coerce the AI system to align with human
expertise. These actions fall in line with what previous management literature details about
the challenges that experts face when negotiating how to adopt and implement new forms
of technology that are central to their expertise (Barrett et al., 2012; Nelson & Irwin, 2014;
Sergeeva, Faraj, & Huysman, 2020). More specifically, when new technologies are
introduced in organizations, research suggests that experts often engage in resistance to
these new forms of technologies as a response to protect their role boundaries (Barrett et
al., 2012; Nelson & Irwin, 2014). A person’s expertise, which is built over an extended period
of time, is a quality that is intertwined with one’s personal and professional identity
(Barrett et al., 2012). Often, when a new technology is introduced to assist in a fundamental
aspect of a person’s expertise, this change can lead to internal threats to an expert’s sense
of role identity. This form of technological resistance may manifest for recruiters who
perceive AI as a rival. Moreover, the heightened lack of transparency and low trust in
efficacy, alongside the pressure from management to integrate these systems with their
sense of expertise, may also play a factor in exacerbating this form of resistance.
At the same time, recent research finds that new technology can positively impact
expertise by being introduced cooperatively (Sergeeva et al., 2020). For example, Sergeeva
and colleagues (2020) studied the implementation of the da Vinci robot (i.e., a robotic
assistant used in surgery) among surgeons and nursing staff in operating rooms. They
found that surgeons, who held specialized forms of expertise, were more likely to
relinquish parts of their explicit expertise to nurses. The research team attributes this
cooperative transfer of expertise to the surgeon’s ability to take advantage of new, novel
forms of expertise that arise from using the robot. Therefore, experts who are able to
advantageously adopt and adapt particular affordances of a technology are more likely to
form cooperative relationships with it and with those they collaborate with.
This rationale can help provide an explanation as to how a subset of experts in this
current study reported perceptions of allyship with AI technologies. Moreover, in addition
to the organizational constraints placed on experts who viewed AI as an ally, this group of
recruiters reported using algorithmic outputs to verify their work. In other words,
recruiters managed to engage in cooperative relationships with AI technologies by
accepting recommendations about decisions that were based primarily on explicit information.
The findings from this dissertation complement both sides of the relationship
between expertise and technology use at work. Based on my findings, the degree to which
organizations monitor and prioritize AI technologies is strongly related to whether
expertise transformations are cooperative or noncooperative. Integrating how cooperation
emerges among dissimilar experts (e.g., both
human experts and AI experts) may have an impact on how human experts view their
overall expertise and role in an organization. These moments of cooperation can also be a
starting point to uncover the specific types of negotiations that occur within a recruiter-
algorithm trading zone.
In turn, this study urges research to go beyond the recognition of an algorithm’s
material form and instead offer empirical and analytical approaches to understanding its
engagement in the social, cultural, and contextual environment where it is implemented.
Only by considering new, evolving forms of technology as part of a multilevel performance
of expertise in an organization can we begin to actualize a space that allows “enough
mutual understanding to emerge to allow interdisciplinary productivity or ‘trade’ to occur”
between humans and AI (Lewis & Usher, 2014, p. 385).
Treating Expertise as Multilevel in the Organizational Ecosystem
By taking a multilevel perspective of expertise (Fulk, 2016), we can begin to ground
the conceptualization of how human experts collaborate with artificially intelligent
technologies across different kinds of knowledge domains. The findings from this
dissertation offer a demonstration of how the processes co-created by human and AI
technologies can be considered a distinct form of expertise performance that is
constructed through the interactions inside a trading zone between human experts and
their algorithmic systems. This form of expertise performance, or what can be called
algorithmic expertise, is practiced through negotiations made by human experts about AI
when they evaluate, interpret, and respond to the output from an AI tool— a technology
that is designed to achieve similar forms of expertise to that of its human counterparts
(Tambe et al., 2019). This form of expertise emerges through practice and is sustained
through the actors’ mutual dependency. In other words, humans rely on algorithms
to successfully complete their work and the vitality of algorithms is reliant on humans
using them and updating their training data and models (Faraj et al., 2018; Kellogg et al.,
2020). Humans and algorithms possess inherent individual-level expertise that applies to a
wider collective practice of expertise through their interactions. In practice, the expertise
embedded in a recruiter and in an algorithmic system is reflected in the relationships
among these actors through their collaboration. Consequently, these relationships are also
reflected in the larger organizational systems where they are enacted, recognized, and
performed (Fulk, 2016). By applying this conceptual basis to recruitment experts in this
study, collective and multilevel expertise at the organizational level may have implications
for how human experts structure their work and can help explain how experts use
technologies when arriving at screening and sourcing decisions.
At large, this performance of expertise is practiced within the boundaries of an
organization and its demands, norms, and policies over its workers. Adding to this,
technological innovation like AI is subjected to institutional pressures faced by
organizations to meet certain industry standards of practice or as a way to gain and sustain
legitimacy in the market (DiMaggio & Powell, 1983; Meyer & Rowan, 1977). In turn,
employee-technology relationships at an individual level can be impacted by these
institutional influences. By taking an ecological, multilevel approach to organizations, and
more specifically to the practice of expertise, we can begin to understand the ways in which
the multilevel expertise of both humans and technologies has an impact across all levels of
an organization through a type of dynamic, knowledge-based ecosystem.
Opportunities for Theory
The aim of this dissertation was to explore how experts organize and account for
artificially intelligent systems that are perceived to enact similar levels of expertise alongside
their human counterparts. Using trading zones as a metaphor to understand a form of
algorithmic expertise, or the performance of collective expertise among humans and AI
systems, is a helpful way to gain insight into the multilevel processes of AI-human
collaboration. However, more specific theoretical and empirical exploration into this area
of research is critical to unpack the mechanisms involved in creating these forms of
collective expertise.
For instance, in her conceptualization of multilevel expertise, Fulk (2016) outlines
how collective expertise is facilitated across a spectrum of compositional (i.e., actors with
similar forms of expertise in a multilevel system) or compilational practices (i.e., actors
working interdependently in a multilevel system) among different actors working together.
These practices can lead experts in a collective system to develop different forms of
working relationships coordinated by their expertise. By applying these forms of
relationships to notions of algorithmic expertise, future research will be able to
theoretically understand how different models of multilevel expertise are enacted and
performed through human-AI interactions.
Opportunities for Practice
While not explicitly studied, different aspects of multilevel expertise can play a role
in how expertise is negotiated when new technologies are introduced into a workplace
(Sergeeva et al., 2020). For example, when surgery robots were introduced in operating
rooms in a hospital, surgeons who formed cooperative relationships with the new
technologies were more likely to adjust and adopt new, innovative forms of expertise.
Speculatively speaking, if applied to the current context of hiring, strategically
introducing new forms of technology in a cooperative or complementary manner may help
recruitment professionals adopt new forms of expertise and have an easier time
relinquishing previously held domain expertise to an AI system.
By employing a multilevel expertise perspective to the study of AI-human
interaction, we can continue to nuance the relationship between human experts and the AI
expert systems they use for work. Scholarship that places a focus on use can begin to
unpack how new forms of expertise may emerge from AI-human interaction over time.
Research on technological innovation and changing role relations finds that
after closure of technology use (i.e., the point at which a technology is stabilized in an
environment) is reached, expertise often stabilizes over time (Bailey & Leonardi, 2015;
Barley, 1986; Sergeeva et al., 2020). Given the influx of development of AI-systems across
different forms of work, our understanding of the theoretical and practical applications of
AI technologies at work is currently malleable. Moreover, as new forms of expertise are
configured from interactions with AI technologies, little is known about how the
organizational or institutional memory of these processes will be preserved.
Practically speaking, if an AI technology augments or replaces human expertise, new
organizational members who are onboarded may not have institutional knowledge of how
AI-mediated work processes occur in the absence of technology. These types of questions
highlight how the current landscape of technological innovation is ripe for researchers and
practitioners who are interested in designing efficient and ethical artificially intelligent
systems for tomorrow’s workplace (Kane, Young, Majchrzak, & Ransbotham, 2021).
Limitations
These findings can offer naturalistic generalizations about how algorithms are used
within expert work processes in a specific type of recruiting field. By venturing into the
“contested terrain” (Kellogg et al., 2020) of algorithms and their use within organizations,
this dissertation offers an opportunity for researchers to explore questions of expertise
and AI use. However, limitations of this dissertation include the type of data procured and
analyzed. While participant interviews can offer nuanced insights into how experts
constructed the meanings and perceptions of algorithmic use at work, employing more in-
depth naturalistic methods like ethnographic observation could more adequately describe
the differences between what participants say they do and what they actually practice.
Adding to this, triangulating the qualitative insights with robust survey or experimental
methods can paint a clearer picture of the degree to which specific organizational and
contextual mechanisms influence relationships of algorithmic appreciation or aversion in
this study of recruiters. Using quantitative approaches can extend the some of the useful
insights that outline initial relationships and patterns between human experts and their AI
tools. Adding to this, the current study was exploratory and the sample of 42 recruitment
professionals, as well as the AI technologies they used, was not a representative sample of
vast talent acquisition industry.
Another limitation of this study is related to the degree of exposure that experts had
with their AI tools. The cross-sectional nature of the interviews employed captured
moments of technology use among a heterogeneous sample of respondents who worked
across a variety of different recruiting industries. It is possible that differences in
technology training or general domain expertise may influence the actions that recruiters
employ with their technologies. Future research could track how technological innovation
and expertise negotiations are made within one or multiple organizational sites at the
onset of technology adoption. By studying the introduction of technology in an
organization, researchers can account for nuances related to institutional training, norms,
or policies that may influence future technology use, and thus the mechanisms that impact
multilevel expertise development.
Future Work
This dissertation contributes to the widespread scholarly discussion of how
algorithmic technologies are embedded in practice within organizations. This work enters
the conversation by considering the practice of expertise in knowledge work across
different forms of human-AI interaction. The application of trading zones to human-AI
relationships raises fascinating questions for future research. Throughout this dissertation,
I have discussed the potential for situated, organizational variables to influence expertise
coordination between human and AI actors. Yet these initial associations should be
examined further to determine which factors are most influential in achieving an
aligned trading zone.
At a broad level, the nature of human-AI interaction at work hinges on the
partnership between AI developers and the organizational stakeholders that they work
with. Decisions about the capabilities and affordances of a technology are often given high
consideration before it is deployed into a workplace. Future directions for research in
human-AI interaction at work should
pose questions to investigate the role of AI technology design for workplace technologies.
For example, research questions should aim to understand the motivations and factors that
influence the creation of data structures and modeling for technology use, as well as how
an AI technology is evaluated upon deployment. Creating this type of knowledge
transparency (i.e., the degree to which a user of technology understands how it functions;
Bailey & Leonardi, 2015) provides the ability for scholarship to go beyond the black box of
‘plug and play’ technology at work.1
Concluding Thoughts
We have had it coming. As scholars of technology and organizations have repeatedly
shown, organizations adopt technologies for a variety of purposes and their employees
integrate them in various ways (e.g., Bailey & Leonardi, 2015; Barley, 1986; Fulk, 1993;
Orlikowski, 2000). In general, we live in an era of data— a moment where an
unprecedented amount of information is used to train algorithmic technologies in the
hopes of achieving human-cognitive capabilities. The developers that build these
technologies and the organizations that adopt them often describe these sophisticated
tools as objective in nature. However, it is critical for researchers to nuance these glorified
typecasts of AI technologies. Algorithms are powerful tools with immense potential for
organizational development. They are technically trained, complex, and innovative
technologies that can be used in advantageous ways by employees. However, at the same
time, these technologies are precarious— their use and interpretations are subjective,
value-laden, social, and part of a human process when we integrate them into work. For
this reason, new technologies should be treated as part of an ecological network of humans
and tools embedded within occupations. We are nowhere near the end of AI design or
development; however, it is our responsibility as scholars to consider multilevel and
multifaceted practices of technologies like algorithms when they are built into the
framework of the social worlds that we work and live in.

1. Plug and play technologies refer to a style of artificially intelligent technologies that rely on
data modeled after machine learning. Plug and play AI often refers to commercial types of
technology, like the algorithmic hiring tools that recruiters in this study reported having used.
These technologies are self-manageable and self-configurable and are designed to ease user
accessibility (Ridhawi et al., 2021).
REFERENCES
Ackerman, M., Pipek, V., & Wulf, V. (Eds.). (2003). Sharing expertise: Beyond knowledge
management. Cambridge, MA: MIT Press.
American Staffing Association. (2016). American Staffing Association History. ASA 50th
Anniversary. https://americanstaffing.net/asa50/
Ananny, M. (2013). Press-Public Collaboration as Infrastructure: Tracing News
Organizations and Programming Publics in Application Programming Interfaces.
American Behavioral Scientist, 57(5), 623–642.
https://doi.org/10.1177/0002764212469363
Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency
ideal and its application to algorithmic accountability. New Media & Society, 20(3),
973–989. https://doi.org/10.1177/1461444816676645
Anonymous. (2019). Towards Effective Workforce Management: Hiring Algorithms, Big
Data-driven Accountability Systems, and Organizational Performance.
Psychosociological Issues in Human Resource Management, 7(2), 19–24.
http://dx.doi.org.libproxy2.usc.edu/10.22381/PIHRM7120193
Araujo, T., Helberger, N., Kruikemeier, S., & de Vreese, C. H. (2020). In AI we trust?
Perceptions about automated decision-making by artificial intelligence. AI &
SOCIETY, 35(3), 611–623. https://doi.org/10.1007/s00146-019-00931-w
Autor, D. H. (2015). Why Are There Still So Many Jobs? The History and Future of Workplace
Automation. Journal of Economic Perspectives, 29(3), 3–30.
https://doi.org/10.1257/jep.29.3.3
Bailey, D. E., & Leonardi, P. M. (2015). Technology Choices: Why Occupations Differ in Their
Embrace of New Technology. Cambridge, MA: MIT Press.
Bailey, D., Silbley, I., & Teasley, S. (2019). Emerging audit cultures: Data, analytics, and rising
quantification in professors’ work. Academy of Management Annual Meeting, Boston.
Barbour, J. B., Sommer, P. A., & Gill, R. (2016). Technical, arcane, interpersonal, and
embodied expertise. In P. M. Leonardi & J. W. Treem (Eds.), Communication,
expertise, and organization (pp. 44–59). Oxford University Press.
Barley, S. R. (1986). Technology as an Occasion for Structuring: Evidence from Observations
of CT Scanners and the Social Order of Radiology Departments. Administrative
Science Quarterly, 31(1), 78–108. https://doi.org/10.2307/2392767
Barley, W. C., Treem, J. W., & Leonardi, P. M. (2020). Experts at Coordination: Examining
the Performance, Production, and Value of Process Expertise. Journal of
Communication, 70(1), 60–89. https://doi.org/10.1093/joc/jqz041
Barrett, M., Oborn, E., Orlikowski, W. J., & Yates, J. (2012). Reconfiguring Boundary
Relations: Robotic Innovations in Pharmacy Work. Organization Science, 23(5),
1448–1466. https://doi.org/10.1287/orsc.1100.0639
Bigman, Y. E., & Gray, K. (2018). People are averse to machines making moral decisions.
Cognition, 181, 21–34. https://doi.org/10.1016/j.cognition.2018.08.003
Bogen, M., & Rieke, A. (2018). Help Wanted: An Examination of Hiring Algorithms, Equity,
and Bias. Upturn.
Boje, D.M. (1991). The Storytelling Organization: A Study of Story Performance in an Office-
Supply Firm. Administrative Science Quarterly, 36(1), 106-126.
Booth, R. (2019, October 25). Unilever saves on recruiters by using AI to assess job
interviews. The Guardian.
http://www.theguardian.com/technology/2019/oct/25/unilever-saves-on-
recruiters-by-using-ai-to-assess-job-interviews
Brayne, S., & Christin, A. (2020). Technologies of Crime Prediction: The Reception of
Algorithms in Policing and Criminal Courts. Social Problems, spaa004.
https://doi.org/10.1093/socpro/spaa004
Bucher, T. (2017). The algorithmic imaginary: Exploring the ordinary effects of Facebook
algorithms. Information, Communication & Society, 20(1), 30–44.
https://doi.org/10.1080/1369118X.2016.1154086
Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine learning
algorithms. Big Data & Society, 3(1), 2053951715622512.
https://doi.org/10.1177/2053951715622512
Burton, J. W., Stein, M.-K., & Jensen, T. B. (2020). A systematic review of algorithm aversion
in augmented decision-making. Journal of Behavioral Decision-making, 33(2), 220–
239. https://doi.org/10.1002/bdm.2155
Cameron, L. (2020). The Rise of Algorithmic Work: Implications for Organizational Control
and Worker Autonomy [Dissertation, University of Michigan].
https://deepblue.lib.umich.edu/handle/2027.42/155277
Campello, M., Kankanhalli, G., & Pradeep, M. (2020). Corporate hiring under COVID-19:
Labor market concentration, downskilling, and income inequality. National Bureau
of Economic Research.
https://www.nber.org/system/files/working_papers/w27208/w27208.pdf
Charmaz, K. (2014). Constructing grounded theory. Sage.
Christin, A. (2017). Algorithms in practice: Comparing web journalism and criminal justice.
Big Data & Society, 4(2), 2053951717718855.
https://doi.org/10.1177/2053951717718855
Christin, A. (2020). Algorithmic ethnography, during and after COVID-19. Communication
and the Public, 5(4), 108-111.
Christin, A. (2020). Metrics at Work: Journalism and the Contested Meaning of Algorithms.
Princeton University Press.
Christin, A. (2020). What Data Can Do: A Typology of Mechanisms. International Journal of
Communication, 14(0), 20. https://ijoc.org/index.php/ijoc/article/view/12220
Collins, H., & Evans, R. (2007). Rethinking expertise. University of Chicago Press.
Collins, H., Evans, R., & Gorman, M. E. (2007). Trading zones and interactional expertise.
Studies in History and Philosophy of Science Part A, 38(4), 657–666.
https://doi.org/10.1016/j.shpsa.2007.09.003
Contractor, N., Monge, P., & Leonardi, P. M. (2011). Network Theory | Multidimensional
Networks and the Dynamics of Sociomateriality: Bringing Technology Inside the
Network. International Journal of Communication, 5(0), 39.
Cox, J. (2021, February 5). Even with unprecedented gains, the jobs market is still struggling
to get back to normal. CNBC. https://www.cnbc.com/2021/02/05/even-with-
unprecedented-gains-the-jobs-market-is-still-struggling-to-get-back-to-
normal.html
Crain, M. (2018). The limits of transparency: Data brokers and commodification. New Media
& Society, 20(1), 88–104. https://doi.org/10.1177/1461444816657096
Crawford, K., & Calo, R. (2016). There is a blind spot in AI research. Nature News, 538(7625),
311. https://doi.org/10.1038/538311a
Davenport, T., Loucks, J., & Schatsky, D. (2017). Bullish on the business value of cognitive:
Leaders in cognitive and AI weigh in on what’s working and what’s next. Deloitte
Development. https://www2.deloitte.com/us/en/pages/deloitte-
analytics/articles/cognitive-technology-adoption-survey.html
DeSanctis, G., & Poole, M. S. (1994). Capturing the complexity in advanced technology use:
Adaptive structuration theory. Organization Science, 5(2), 121–147.
http://pubsonline.informs.org.libproxy1.usc.edu/doi/abs/10.1287/orsc.5.2.121
DeVito, M. A., Birnholtz, J., Hancock, J. T., French, M., & Liu, S. (2018). How people form folk
theories of social media feeds and what it means for how we study self-
presentation. Proceedings of the 2018 CHI Conference on Human Factors in
Computing Systems, 1–12.
DeVito, M.A., Gergle, D., & Birnholtz, J. (2017). “Algorithms Ruin Everything”: #RIPTwitter,
Folk Theories, and Resistance to Algorithmic Change in Social Media. In the
Proceedings of CHI, HCI and Collective Action, Denver, 3163-3174.
Diakopoulos, N. (2020). Computational News Discovery: Towards Design Considerations
for Editorial Orientation Algorithms in Journalism. Digital Journalism, 8(7), 945–967.
https://doi.org/10.1080/21670811.2020.1736946
Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People erroneously
avoid algorithms after seeing them err. Journal of Experimental Psychology:
General, 144(1), 114–126. https://doi.org/10.1037/xge0000033
Dietvorst, B. J., Simmons, J. P., & Massey, C. (2018). Overcoming Algorithm Aversion: People
Will Use Imperfect Algorithms If They Can (Even Slightly) Modify Them.
Management Science, 64(3), 1155–1170. https://doi.org/10.1287/mnsc.2016.2643
Digital HR Tech. (2020, June 9). The Recruitment Team: Size, Configuration, RPO. AIHR
Digital. https://www.digitalhrtech.com/recruitment-team/
DiMaggio, P. J., & Powell, W. W. (1983). The Iron Cage Revisited: Institutional Isomorphism
and Collective Rationality in Organizational Fields. American Sociological Review,
48(2), 147–160. https://doi.org/10.2307/2095101
Dorton, S., & Thirey, M. (2017). Effective Variety? For Whom (Or What)? A Folk Theory on
Interface Complexity and Situation Awareness. In the Proceedings of IEEE
Conference on Cognitive and Computational Aspects of Situation Managements
(CogSIMA).
van den Broek, E., Sergeeva, A., & Huysman, M. (2019). Hiring Algorithms: An
Ethnography of Fairness in Practice. Fortieth International Conference on
Information Systems, 1–9. https://core.ac.uk/download/pdf/301385085.pdf
Engler, A. (2020, March 12). Auditing employment algorithms for discrimination. Brookings
Institute. https://www.brookings.edu/research/auditing-employment-
algorithms-for-discrimination/
Eslami, M., Karahalios, K., Sandvig, C., Vaccaro, K., Rickman, A., Hamilton, K., & Kirlik, A.
(2016). First I “like” it, then I hide it: Folk Theories of Social Feeds. Proceedings of the
2016 CHI Conference on Human Factors in Computing Systems, 2371–2382.
https://doi.org/10.1145/2858036.2858494
Faraj, S., Pachidi, S., & Sayegh, K. (2018). Working and organizing in the age of the learning
algorithm. Information and Organization, 28(1), 62–70.
https://doi.org/10.1016/j.infoandorg.2018.02.005
Forman, A., & Glasser, N. (2021). Hiring by Algorithm: Legal Issues Presented by the Use of
Artificial Intelligence in Sourcing and Selection. JD Supra.
https://www.jdsupra.com/legalnews/hiring-by-algorithm-legal-issues-4602652/
French, M., & Hancock, J. T. (2017). What’s the folk theory?
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2910571
Fulk, J. (1993). Social Construction of Communication Technology. Academy of
Management Journal, 36(5), 921–950. https://doi.org/10.2307/256641
Fulk, J. (2016). Conceptualizing multilevel expertise. In J. W. Treem & P. M. Leonardi (Eds.),
Communication, expertise, and organization (pp. 251–270). Oxford University Press.
Gale, S. F. (2020). The future of recruiting technology. Workforce.com.
https://www.workforce.com/news/sector-report-future-of-recruiting-technology
Galison, P. (1997). Image and logic: A material culture of microphysics. University of Chicago
Press.
Gillespie, T. (2016). Algorithm. In B. Peters (Ed.), Digital Keywords (pp. 1–10). Princeton
University Press.
Mercer. (2021). Global talent trends 2021. Retrieved April 21, 2021, from
https://www.mercer.com/content/dam/mercer/attachments/private/global-
talent-trends/2021/gl-2021-gtt-global-eng-mercer.pdf
Glaser, B. G. (2001). The Grounded Theory Perspective: Conceptualization Contrasted with
Description. Sociology Press.
Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for
qualitative research. Aldine.
Goddiksen, M. (2014). Clarifying interactional and contributory expertise. Studies in History
and Philosophy of Science Part A, 47, 111–117.
https://doi.org/10.1016/j.shpsa.2014.06.001
Gorman, M.E. (2010). Trading Zones and Interactional Expertise: Creating New Kinds of
Collaboration. Cambridge, MA: MIT Press.
Gorman, M. E. (2002). Levels of Expertise and Trading Zones: A Framework for
Multidisciplinary Collaboration. Social Studies of Science, 32(5–6), 933–938.
https://doi.org/10.1177/030631270203200511
Guzman, A. L., & Lewis, S. C. (2019). Artificial intelligence and communication: A Human–
Machine Communication research agenda. New Media & Society, 1461444819858691.
https://doi.org/10.1177/1461444819858691
Hosny, A., Parmar, C., Quackenbush, J., Schwartz, L. H., & Aerts, H. J. W. L. (2018). Artificial
intelligence in radiology. Nature Reviews Cancer, 18(8), 500–510.
https://doi.org/10.1038/s41568-018-0016-5
HR Research. (2021). The State of High-volume Recruiting and Assessment Report. HR.com
Huysman, M. (2020). Tales from the Field: How Smart Algorithms are Changing Work.
Reshaping Work Conference, Amsterdam, Netherlands.
Kane, G., Young, A., Majchrzak, A., & Ransbotham, S. (2021). Avoiding an Oppressive Future
of Machine Learning: A Design Theory for Emancipatory Assistants. MIS Quarterly,
Forthcoming.
Kellogg, K. C., Valentine, M. A., & Christin, A. (2020). Algorithms at Work: The New
Contested Terrain of Control. Academy of Management Annals, 14(1), 366–410.
https://doi.org/10.5465/annals.2018.0174
Kellogg, K.C., Orlikowski, W.J., & Yates, J. (2006). Life in the Trading Zone: Structuring
Coordination Across Boundaries in Postbureaucratic Organizations. Organization
Science, 17(1), 22-44.
Klahre, B. A. (2020, August 28). Recruiting Goes Remote. SHRM.
https://www.shrm.org/resourcesandtools/hr-topics/talent-
acquisition/pages/recruiting-goes-remote.aspx
Kozlowski, S. W. J., & Klein, K. J. (2000). A multilevel approach to theory and research in
organizations: Contextual, temporal, and emergent processes. In Multilevel theory,
research, and methods in organizations: Foundations, extensions, and new
directions (pp. 3–90). Jossey-Bass.
Lee Cruz, E., De Souza, A., & Boukani, S. (2016). The Savvy Recruiter’s Career Guide.
LinkedIn Talent Solutions.
https://business.linkedin.com/content/dam/business/talent-
solutions/global/en_us/c/pdfs/recruiter-career-guide-updated.pdf
Lee, M. K. (2018). Understanding perception of algorithmic decisions: Fairness, trust, and
emotion in response to algorithmic management. Big Data & Society, 5(1),
2053951718756684. https://doi.org/10.1177/2053951718756684
Leonardi, P. M. (2012). Car Crashes Without Cars: Lessons about Simulation Technology
and Organizational Change from Automotive Design. MIT Press.
Leonardi, P. M., & Treem, J. W. (2020). Behavioral Visibility: A new paradigm for
organization studies in the age of digitization, digitalization, and datafication.
Organization Studies, 41(12), 1601–1625. https://doi.org/10.1177/0170840620970728
Leonardi, P. M., Barley, W. C., & Woo, D. (2021). Why should I trust your model? How to
successfully enroll digital models for innovation. Innovation, 1–19.
https://doi.org/10.1080/14479338.2021.1873787
Lewis, S. C., & Usher, N. (2014). Code, Collaboration, And The Future Of Journalism: A case
study of the Hacks/Hackers global network. Digital Journalism, 2(3), 383–393.
https://doi.org/10.1080/21670811.2014.895504
Lewis, S.C., & Usher, N. (2016). Trading zones, boundary objects, and the pursuit of news
innovation: A case study of journalists and programmers. CONVERGENCE: The
International Journal of Research Into New Media and Technologies, 22(5), 543-560.
Li, D., Raymond, L. R., & Bergman, P. (2020). Hiring as Exploration. National Bureau of
Economic Research, 61.
Liao, T., & Tyson, O. (2021). “Crystal Is Creepy, but Cool”: Mapping Folk Theories and
Responses to Automated Personality Recognition Algorithms. Social Media +
Society, 7(2), 20563051211010170. https://doi.org/10.1177/20563051211010170
LinkedIn. (2018). Global Recruiting Trends: The 4 Ideas Changing How You Hire.
https://app.box.com/s/y5i7635s15rx3yl78hj5jlnnh02asa75/file/263787765561
Logg, J. M., Minson, J. A., & Moore, D. A. (2019). Algorithm appreciation: People prefer
algorithmic to human judgment. Organizational Behavior and Human Decision
Processes, 151, 90–103. https://doi.org/10.1016/j.obhdp.2018.12.005
Lomborg, S., & Kapsch, P. H. (2020). Decoding algorithms. Media, Culture & Society, 42(5),
745–761. https://doi.org/10.1177/0163443719855301
MacCormick, J. (2012). Stochastic Algorithms for Visual Tracking: Probabilistic modelling
and stochastic algorithms for visual localisation and tracking. Springer Science &
Business Media.
Majchrzak, A., Jarvenpaa, S. L., & Hollingshead, A. B. (2007). Coordinating Expertise Among
Emergent Groups Responding to Disasters. Organization Science, 18(1), 147–161.
https://doi.org/10.1287/orsc.1060.0228
McDonald, D. W., & Ackerman, M. S. (1998). Just Talk to Me: A Field Study of Expertise
Location. Proceedings of the 1998 ACM Conference on Computer Supported
Cooperative Work (CSCW), 315–324.
Meyer, J. W., & Rowan, B. (1977). Institutionalized Organizations: Formal Structure as Myth
and Ceremony. American Journal of Sociology, 83(2), 340–363.
https://doi.org/10.1086/226550
Miles, M., Huberman, A. M., & Saldaña, J. (2019). Qualitative Data Analysis: A Methods
Sourcebook. SAGE Publications.
Nelson, A., & Irwin, J. (2014). “Defining what we do—all over again”: Occupational
identity, technological change, and the librarian/Internet-search relationship. The
Academy of Management Journal, 57(3), 892–928. https://www.jstor.org/stable/43589287
Nonaka, I. (1994). A Dynamic Theory of Organizational Knowledge Creation. Organization
Science, 5(1), 13-37.
Nonaka, I. & von Krogh, G. (2009). Tacit Knowledge and Knowledge Conversion:
Controversy and Advancement in Organizational Knowledge Creation Theory.
Organization Science, 20(3), 635-652.
Nonaka, I., von Krogh, G., & Voelpel, S. (2006). Organizational Knowledge Creation Theory:
Evolutionary Paths and Future Advances. Organization Studies, 27(8), 1179-1208.
Norman, D. A. (1988). The psychology of everyday things. Basic Books.
Oberst, U., De Quintana, M., Del Cerro, S., & Chamarro, A. (2020). Recruiters prefer expert
recommendations over digital hiring algorithms: A choice-based conjoint study in a
pre-employment screening scenario. Management Research Review, 44(4), 625–641.
https://doi.org/10.1108/MRR-06-2020-0356
Orlikowski, W. J. (2000). Using Technology and Constituting Structures: A Practice Lens for
Studying Technology in Organizations. Organization Science, 11(4), 404–428.
https://doi.org/10.1287/orsc.11.4.404.14600
Orlikowski, W. J., & Scott, S. V. (2008). Sociomateriality: Challenging the separation of
technology, work, and organization. The Academy of Management Annals, 2(1), 433–
474.
Pasquale, F. (2016). The Black Box Society: The Secret Algorithms that Control Money and
Information. Harvard University Press.
Polacco, A., & Backes, K. (2018). The Amazon Go Concept: Implications, Applications, and
Sustainability. Journal of Business and Management, 24(1), 79–92.
https://doi.org/10.6347/JBM.201803_24(1).0004
Raghavan, M., & Barocas, S. (2019, December 6). Challenges for mitigating bias in
algorithmic hiring. Brookings. https://www.brookings.edu/research/challenges-
for-mitigating-bias-in-algorithmic-hiring/
Raghavan, M., Barocas, S., Kleinberg, J., & Levy, K. (2020). Mitigating bias in algorithmic
hiring: Evaluating claims and practices. Proceedings of the 2020 Conference on
Fairness, Accountability, and Transparency, 469–481.
https://doi.org/10.1145/3351095.3372828
Raveendhran, R., & Fast, N. J. (2021). Humans judge, algorithms nudge: The psychology of
behavior tracking acceptance. Organizational Behavior and Human Decision
Processes, 164, 11–26. https://doi.org/10.1016/j.obhdp.2021.01.001
Recruiter features | LinkedIn Talent Solutions. (n.d.). Retrieved May 5, 2021, from
https://business.linkedin.com/talent-solutions/recruiter/recruiter-features
Ridhawi, I.A., Otoum, S., Aloqaily, M., & Boukerche, A. (2021). Generalizing AI: Challenges
and Opportunities for Plug and Play AI Solutions. In IEEE Network, 35(1), 372-379.
Robert, L. P., Pierce, C., Marquis, L., Kim, S., & Alahmad, R. (2020). Designing fair AI for
managing employees in organizations: A review, critique, and design agenda.
Human–Computer Interaction, 35(5–6), 545–575.
https://doi.org/10.1080/07370024.2020.1735391
Rynes-Weller, S. L., Reeves, C. J., & Darnold, T. C. (2013). The History of Recruitment
Research (D. M. Cable & K. Y. T. Yu, Eds.). Oxford University Press.
https://doi.org/10.1093/oxfordhb/9780199756094.013.020
Sergeeva, A. V., Faraj, S., & Huysman, M. (2020). Losing Touch: An Embodiment Perspective
on Coordination in Robotic Surgery. Organization Science, 31(5), 1248–1271.
https://doi.org/10.1287/orsc.2019.1343
Shestakofsky, B. (2017). Working Algorithms: Software Automation and the Future of Work.
Work and Occupations, 44(4), 376–423. https://doi.org/10.1177/0730888417726119
Strauss, A., & Corbin, J. M. (1998). Grounded Theory in Practice. Sage Publications.
Tambe, P., Cappelli, P., & Yakubovich, V. (2019). Artificial Intelligence in Human Resources
Management: Challenges and a Path Forward. California Management Review, 61(4),
15–42. https://doi.org/10.1177/0008125619867910
Toff, B., & Kleis Nielsen, R. (2018). “I Just Googled It”: Folk Theories of Distributed Discovery.
Journal of Communication, 68(3), 636–657.
Treem, J. W., & Leonardi, P. M. (2017). Recognizing Expertise: Factors Promoting Congruity
Between Individuals’ Perceptions of Their Own Expertise and the Perceptions of
Their Coworkers. Communication Research, 44(2), 198–224.
Treem, J. W., & Leonardi, P. M. (Eds.). (2016). Expertise, communication, and organizing.
Oxford University Press.
Uhde, A., Schlicker, N., Wallach, D. P., & Hassenzahl, M. (2020). Fairness and Decision-
making in Collaborative Shift Scheduling Systems. Proceedings of the 2020 CHI
Conference on Human Factors in Computing Systems, 1–13.
https://doi.org/10.1145/3313831.3376656
Waardenburg, L., Sergeeva, A., & Huysman, M. (2018). Hotspots and Blind Spots. In Living
with monsters? Social implications of algorithmic phenomena, hybrid agency, and
the performativity of technology. (pp. 96–109). Springer International Publishing.
Wang, J., Molina, M. D., & Sundar, S. S. (2020). When expert recommendation contradicts
peer opinion: Relative social influence of valence, group identity and artificial
intelligence. Computers in Human Behavior, 107, 106278.
https://doi.org/10.1016/j.chb.2020.106278
West, D. M. (2018). The future of work: AI, robots, and automation. The Brookings
Institution.
Whittaker, M., Crawford, K., & Dobbe, R. (2018). AI Now Report 2018. AI Now.
https://ainowinstitute.org/reports.html
Wiener-Bronner, D. (2020). Coca-Cola is cutting 2,200 jobs. CNN.
https://www.cnn.com/2020/12/17/business/coca-cola-job-cuts/index.html
Yan, B., Hollingshead, A. B., Alexander, K. S., Cruz, I., & Shaikh, S. J. (2021). Communication
in Transactive Memory Systems: A Review and Multidimensional Network
Perspective. Small Group Research, 52(1), 3–32.
https://doi.org/10.1177/1046496420967764
Zeiss, R. & Groenewegen, P. (2009). Engaging Boundary Objects in OMS and STS? Exploring
the Subtleties of Layered Engagement. Organization, 16(1), 81-100.
APPENDICES
Appendix A: Pilot Study Interview Guide
1. To start, can you tell me a bit about the organization you work for?
a. Probe: Tell me about your role in your organization.
2. Walk me through a day-in-the-life of you at work.
a. Probe: What would I see if I looked over your shoulder for a day?
b. Probe: Do you work as part of a team, or individually?
3. Can you walk me through the steps you take when you fill a job opening?
4. What kinds of technologies do you use to find and select talent?
5. Do you use any type of AI technologies when you recruit talent?
a. Probe: Do you rely on any predictive analytics or algorithms?
b. Probe: How reliant are you on these technologies when doing your job?
6. How often would you say that you agree or disagree with the AI technology?
a. Probe: What about the technology’s output leads you to agree or disagree
with it?
7. How do you typically react when you agree or disagree with the technology?
a. Probe: How do you know to take these actions in this situation?
Demographics
Demographic information will be collected and recorded in a separate file.
1. Pseudonym of interviewee
2. Pseudonym of organization
3. Job title
4. Length of Employment
5. Gender
6. Age
7. Education (less than high school diploma, high school diploma/GED, Bachelor’s,
Master’s, Doctorate, Other)
Appendix B: Primary Interview Guide
Practices of Algorithmic Adjustment Among Experts Who Use Algorithms in Recruitment
Questionnaire Tool
Interview Outline
Script
Interview guide on use of algorithms and adjustments to algorithms
Demographics
Script
[Researcher to meet participants via phone call or video call]
Greeting and Prompt
Hello, my name is Ignacio Cruz, and I am a Ph.D. candidate from the University of
Southern California. I am interviewing Human Resources and Recruiting professionals to
understand how they utilize different kinds of smart technologies in their day-to-day
work. More specifically, I am conducting a study to understand the ways that recruitment
professionals incorporate work technologies to source, screen, and hire the best talent for
their organizations. I am particularly interested in learning about how your expertise and
experience in your industry informs the ways you use software and other kinds of
technologies to assess applicants, as well as help you make decisions to select the top
talent for your company.
We are here today because you previously indicated interest in being part of this
study. Are you still interested in participating in the study?
● YES – If yes, continue with the informed consent sheet for exempt research. Once
participants have been given time to read it and ask questions, begin the interview.
● NO – If no, thank them for their time and end the interview.
● DON’T KNOW – If they don’t know, thank them for their time and end the interview.
In-Depth Semi-Structured Interview Protocol (60 – 90 Minutes)
Warm-Up Questions
This section will gain background information from the company that the participants
work for.
1. Can you tell me a little bit about the company that you work for?
▪ Follow-up: Can you describe the mission of your company?
2. What is your official job title?
▪ Follow-up: How long have you been working for this company?
3. Tell me about your role in X company.
▪ Follow-up: What are your primary responsibilities when it comes to
evaluating job candidates?
4. Can you tell me about the steps involved in evaluating job candidates?
▪ Follow-up: Can you describe how you work on this process?
Questions related to use of algorithms
This section will gain information about the specific types of technologies that the
participants typically use for their work. In uncovering the specific types of technologies,
this section will uncover how participants make use of the affordances of these
technologies, as well as understand how contextual, social, and organizational factors may
influence the usage of technology.
5. Now that we are on the topic of talent acquisition, I want you to take a moment and
think about the different types of technologies that you use to do this work. What
are those technologies that you use?
▪ Follow-up 1: Can you give an example of how you last used these
technologies?
▪ Follow-up 2: How is it typically used in your company? How does your
experience with this technology compare to how others at work use it?
6. How would you rate your level of skill with these technologies?
7. As you may know, algorithms are a form of artificial intelligence (AI) that are built
into software to help recruiters hire the best applicants. To what extent do the
technologies you use for your job rely on any form of algorithms, or algorithmic
calculations?
▪ Follow-up 1: If applicable, how is your data set trained?
▪ Follow-up 2: If applicable, where does the data come from?
8. How do others in your workplace generally use these tools (or a specific tool)?
▪ Follow-up 1: Do you typically reach out to others for help when using these
technologies?
▪ Probe 1: Would you say you use it differently or similarly?
▪ Probe 2: What kind of discussions do you have with colleagues about
this tool?
▪ Follow-up 2: How were you trained to use this tool?
▪ Probe: Are you required to use this tool in a certain way?
9. Which of these tools is most central to your job?
▪ Follow-up: How do you interact with these tools?
10. How accurate is this tool?
▪ Follow-up: What metrics do you look at to measure this accuracy?
11. Do you typically trust the results from this tool? Why or why not?
12. What unique insights, if any, has this tool offered you when doing your job?
Questions related to use of algorithms
We are nearing the end of this interview. I just have a few more questions about how you
use this tool for your job.
13. Still thinking about this tool that you use for X, can you describe a time when you
did not agree with the advice given to you from this tool?
▪ Follow-up 1: What percentage of the time do you agree with the tool’s
recommendations?
14. Keeping in mind this time when you did not agree with the tool, how did you react
to this situation?
▪ Follow-up 1: Did you take any action to adjust the tool’s output?
▪ Follow-up 2: How did you know to take this action?
▪ Follow-up 3: Do you find yourself typically having to adjust the tool’s output?
15. Can you comment on the reasons why you think this decision(s) was the best course
of action to take in this situation?
16. How has the use of this tool impacted the way you do your job?
▪ Follow-up 1: How has the use of this tool impacted the way that others do
their work at this company?
17. If you could change anything about the way your company evaluates job candidates,
what would it be?
18. We are at the end of the interview. Before we wrap up, is there anything else you’d
like to share about how you evaluate job candidates?
▪ Follow-up 1: Are there questions I should have asked and didn’t?
Demographics
Collect and record demographic information in a separate file.
1. Pseudonym of interviewee
2. Pseudonym of company
3. Job title
4. Length of employment at company in YEAR/ MONTH format
5. Employment status (full time or part-time)
6. Gender (male, female, other: specify)
7. Ethnicity
8. Birth year (age)
9. Education (less than high school diploma, high school diploma/GED, Bachelor’s,
Master’s, Doctorate, Other)
10. Location
Close and Debrief
Thank you for participating in this interview. Your time and responses are extremely
appreciated. The purpose of this study is to understand how recruitment professionals
utilize different forms of artificially intelligent technologies in their work. The findings of this
study will help to explain the strategies that professionals employ when they work with an
algorithm and can provide recommendations for scholarship and practitioners to design
and implement these technologies at work. Upon completion of this study, you can elect to
receive a summary of findings. You may choose to withdraw your data at any time. Please
contact Ignacio Cruz at ignacioc@usc.edu for any further questions or comments related to
your participation in this study. Thank you for your time.
Appendix C: Recruitment Information Sheet
Understanding the Work of Recruitment Professionals
Background
Working in talent acquisition requires recruitment professionals to leverage their
time and resources to work and communicate with a variety of people to accomplish their
goals. Recruiters represent an industry that is vital to the social fabric of work and hiring.
Recruiters possess collective knowledge and expertise about their roles when they conduct
work to source, screen, and hire the best talent. This diverse, yet unique, group of
professionals is understudied in how they may use communication tools for collaboration
and task accomplishment. This research seeks to understand the role of communication
and technology in shaping how recruiters accomplish their work and achieve growth.
Research Objectives
This study in no way aims to evaluate the responsibilities or projects that
recruitment professionals are currently working on. Rather, the goal is to understand the
role of communication and technology used by a recruiter and their collaborators. Upon
the request of recruitment professionals, findings that detail recommendations about
effective use of communication and technology tools can be shared. This study will explore
the following questions:
(1) What are the communicative practices that encompass the role of professionals
in talent acquisition?
(2) What role, if any, does technology play in the work of professionals in talent
acquisition?
Research Design & Methods
To investigate the questions above, I am collecting data through:
(1) Virtual interviews lasting approximately 30 minutes, conducted via Zoom or phone.
Confidentiality
All data collected will remain secure and confidential. All names of people, places,
organizations, departments, etc. will be anonymized in transcripts and records. This
research project will adhere to strict guidelines for anonymity and protection of participant
rights as upheld by the human subjects approval board at the University of
Southern California. Under no circumstances will I report any specific information that
divulges your or your company’s intellectual property. The primary concern of the research
is to explore communication behaviors in the recruitment industry.