IDENTITY, TRUST, AND CREDIBILITY ONLINE: EVALUATING CONTRADICTORY USER-GENERATED INFORMATION VIA THE WARRANTING PRINCIPLE

by William Scott Sanders

A Dissertation Presented to the FACULTY OF THE USC GRADUATE SCHOOL, UNIVERSITY OF SOUTHERN CALIFORNIA, in Partial Fulfillment of the Requirements for the Degree DOCTOR OF PHILOSOPHY (COMMUNICATION)

August 2012

Copyright 2012 William Scott Sanders

Dedication

This dissertation is dedicated to Gina and Jenn, my grandmother and wife.

Acknowledgements

I would like to thank Dr. Margaret McLaughlin for advising and guiding me through my dissertation; Dr. Andrea Hollingshead, Dr. Lian Jian, and Dr. Dennis McLeod for graciously serving on my dissertation committee and providing me with invaluable feedback on my work; Dr. Patricia Amason for guiding me through my first research experience and continuing to provide me with support and advice as I progress in my career; Dr. Howard Sypher and Dr. Sorin Matei for nurturing me through my master's degree; Robbie Ratan for statistical support and for reminding me to let go, loosen up, and move on when I felt stuck; Andrew Shrock for lending his time and intellect to critiquing the first drafts of my scale; Rebecca Sanders, my mother, for helping refine and polish my writing in both this dissertation and the countless other papers of my academic career; and, last but not least, the Annenberg School of Communication for its fellowship support of this dissertation and for providing me innumerable opportunities and a stellar education.

Table of Contents

Dedication
Acknowledgements
List of Tables
List of Figures
Abstract
Chapter One: Literature Review
Chapter Two: Development of the Warranting Value Measures
Chapter Three: Credibility Heuristics and Astroturfing
Chapter Four: Warranting and Signaling Theory
Chapter Five: Conclusion
References
Appendices
    Appendix A: Example Stimulus from Chapter Three
    Appendix B: Example Stimulus from Chapter Four
    Appendix C: Research Instruments

List of Tables

Table 1: Descriptive Statistics for the Social Warranting Value Measure
Table 2: Social Warranting Value Measure – Rotated Factor Matrix
Table 3: Descriptive Statistics for the Self Warranting Value Measure
Table 4: Self Warranting Value Measure – Rotated Factor Matrix
Table 5: Descriptive Statistics for System Warranting Value Measure
Table 6: System Warranting Value Measure – Rotated Factor Matrix
Table 7: Correlations Between the WVMs and Attributional Confidence
Table 8: Social WVM Factor Correlations
Table 9: Self WVM Factor Correlations
Table 10: System WVM Factor Correlations
Table 11: Comparison of Social Warranting Value Measure CFA Models
Table 12: Descriptive Statistics for the Social Warranting Value Measure
Table 13: Comparison of Self Warranting Value Measure CFA Models
Table 14: Descriptive Statistics for Self Warranting Value Measure
Table 15: Comparison of System Warranting Value Measure CFA Models
Table 16: Descriptive Statistics for System Warranting Value Measure
Table 17: Descriptive Statistics for the Social Warranting Value Measure
Table 18: Social Warranting Value Measure – Rotated Factor Matrix
Table 19: Descriptive Statistics for Self Warranting Value Measure
Table 20: Self Warranting Value Measure – Rotated Factor Matrix
Table 21: Descriptive Statistics for System Warranting Value Measure
Table 22: System Warranting Value Measure – Rotated Factor Matrix

List of Figures

Figure 1: CFA of Social Warranting Value Measure
Figure 2: CFA of Self Warranting Value Measure
Figure 3: CFA of System Warranting Value Measure

Abstract

Despite a renewed interest in credibility research over the past decade focusing on the evaluation of online information, very little research has been conducted concerning how people evaluate the credibility of user-generated
content. This is a notable gap in the literature considering the growing importance of social media to e-commerce and everyday life. In this dissertation I use theories of online impression formation to examine how people use heuristic cues to evaluate credibility. Specifically, this dissertation examines the warranting principle, which holds that linking a physical self to a given self-presentation online increases the perceived accuracy of the self-presentation (Stone, 1995; Walther & Parks, 2002). First, after carefully defining warrant, a set of scales to measure the warranting value of information from different online sources is developed and validated. Next, I conduct an experiment to test the proposition that warrant, which is bolstered by an individual's inability to manipulate self-referential information, mediates the relationship between social confirmation heuristics and credibility evaluations online. Following this, I test a set of opposing hypotheses generated by the warranting principle and signaling theory, a superficially similar but conflicting theoretical framework. Overall, results suggest that the warranting value measures perform consistently across studies and that social information, in keeping with the warranting principle, exerts the strongest effect on attributions of credibility and trust. The theoretical and practical implications of the results are discussed.

Chapter One: Literature Review

Although the past decade brought a resurgence of credibility research focused on how people evaluate information online, an overwhelming majority of the studies concerned the evaluation of static content on web pages.
Many researchers chose to focus on how users evaluate online news stories (Cassidy, 2007; Greer, 2003; Kiousis, 2001), science information, and health information (Dutta-Bergman, 2004; Eastin, 2001; Eysenbach & Kohler, 2002; Morahan-Martin, 2004), neglecting to study how interpersonal communication via social media and user-generated content is evaluated (see Banning & Sweetser, 2007; Johnson & Kaye, 2004; Metzger, Flanagin, & Medders, 2010 for notable exceptions). However, the increasing relevance of social media highlights the need to understand how people evaluate user-generated content. Internet users get information on everything from hobbies to healthcare via Internet forums. Online commerce thrives upon reviews and recommendations generated by other customers. Indeed, companies such as Yelp and Amazon are built on the backbone of consumer-created content. Such content is treated as a financial asset and protected in user agreements by specifically prohibiting its collection and replication (e.g., "Your User Agreement", 2011).

Numerous Internet researchers have noted that online environments can be relatively devoid of information regarding the identity and characteristics of other users (e.g., Donath, 1999; Stone, 1995; Walther, 1996; Walther & Parks, 2002). Indeed, one of the fundamental problems facing scholars is to explain how participants in online communities use relatively limited information to form impressions of and relationships with others. Although undoubtedly many users endeavor to portray themselves accurately, anonymity and the lack of disconfirming information mean that online identity is fluid and that people are freed from many of the self-presentational constraints inherent in face-to-face communication. Unscrupulous individuals and organizations can, and do, exploit this fact for financial or personal gain.
Numerous examples can be found in the popular press and academic literature of marketers misrepresenting themselves in online communities or hiding conflicts of interest. Examples include Belkin's paying consumers for positive Amazon reviews (Meyer, 2009), Sony marketers' creating a fake blog, ostensibly written by teenage video game enthusiasts, in an attempt to sell portable video game units (Nudd, 2006), and bloggers' failing to report their sponsorship by Walmart while on a purportedly independent cross-country road trip (Barbaro, 2006). In addition, some individuals may misrepresent their experiences and qualifications in order to gain status within an online community (Donath, 1999). Therefore, there is a practical need to understand the theoretical basis of how users evaluate the credibility of user-generated content.

My purpose in this study is to examine how people assess the credibility of user-generated content and make trust decisions online. I apply two burgeoning communication theories to the assessment of credibility online and conduct a set of empirically driven studies to examine how these theories may complement or contradict one another. First, I define the related constructs of credibility and trust and discuss their theoretical distinctiveness. Next, I briefly introduce two emerging theories, credibility heuristics and warrant, which may explain attributions of credibility and the extension of trust online. Following this, I detail the creation of the warranting value measures, which may prove useful to researchers interested in studying how users weigh and evaluate different sources of information when making attributions. Finally, a pair of empirical studies is conducted with the intent of testing the theories' internal assumptions and exploring how users resolve contradictory information regarding credibility.
Credibility and Trust

Credibility research experienced a renaissance over the past decade as researchers sought to apply traditional conceptualizations of credibility to online environments and to develop theories that encompass new modes of communication. Traditionally, distinctions have been made between source, message, and media credibility, reflecting the various foundations from which credibility can arise (Metzger, Flanagin, Eyal, & Lemus, 2003). Source credibility, the form most widely studied by communication scholars, is considered a multi-dimensional construct composed of receivers' perceptions of a source's expertise and trustworthiness (Hovland, Janis, & Kelly, 1953) and, according to some, goodwill (McCroskey & Teven, 1999). Message credibility is derived from the language used, the message's content, and its organization (Metzger et al., 2003). Finally, media credibility reflects receivers' evaluations of credibility based upon the channel through which they receive the message. It should be noted, however, that these distinctions can be conflated, and some scholars hold that receivers do not make meaningful distinctions between a message's source and the channel through which it is received (Chaffee, 1982).

Several researchers have noted that the terms trust and credibility have been used inconsistently throughout the academic literature (Tseng & Fogg, 1999; Rieh & Danielson, 2006). Despite being closely related, trust and credibility are distinct constructs that are often confused in academic discussions. Part of this confusion may lie in the terminology used to describe the dimensions of source credibility. Traditionally, source credibility has been considered to be composed of two dimensions, expertise and trustworthiness (O'Keefe, 2002). Expertise refers to the knowledge, skills, and abilities of a source. Trustworthiness refers to the benevolence of the source and, by extension, the unbiased nature of the information.
Therefore, the trustworthiness dimension reflects the degree to which the source is perceived as honest. Indeed, O'Keefe (2002) notes that what is typically labeled trustworthiness is sometimes referred to as personal integrity or character. Credibility and trust can be usefully distinguished from one another when credibility is defined as the believability of information and trust is defined as "the willingness of a party to be vulnerable to the actions of another party based on the expectation that the other will perform a particular action important to the trustor, irrespective of the ability to monitor or control the other party" (Mayer, Davis, & Schoorman, 1995). Because trust involves risk taking and its conceptualization explicitly includes a behavioral component, it is distinguished from credibility, which lacks a behavioral element.

Trust is important because it represents the behavioral consequences of communication and credibility evaluations. If online community members lack credibility and people do not believe what other users say, risk taking cannot occur unless there are powerful institutional controls to prevent undesirable outcomes. However, given the lack of information gatekeepers and the pseudonymity that typifies Internet interaction, institutional controls are often weak if they exist at all. Therefore, online community participants must evaluate the credibility of user-generated content before choosing whether to act on it.

The Heuristic Evaluation of Credibility

New media technologies create a number of difficulties for users attempting to evaluate credibility. Metzger et al. (2003) note that online environments lack the gatekeepers present in traditional media, such as newspapers or television news, who ensure the accuracy and quality of information.
Therefore, many of the early studies of online credibility focused on how users evaluate the credibility of news and health information found on static web pages as compared to other forms of media. Another issue is the difficulty of identifying the source of information. Information can be published online anonymously or pseudonymously, so users may not be in a position to evaluate characteristics of a source. Source credibility is further compromised by collaborative documents that may not have a single author, such as Wikipedia, and by the obscured origins of certain types of messages (e.g., forwarded emails). Some researchers have gone so far as to suggest abandoning conventional understandings of what constitutes a source. Sundar (2008) proposes that increasing customizability and the ability to filter content make users themselves the source of content. Although intriguing, this conceptualization of source is ultimately solipsistic and provides little, if any, guidance to message producers.

One solution intended to address these problems is a dual-process model of credibility evaluation. Borrowing from persuasion models describing how information is processed, such as the elaboration likelihood model (Petty & Cacioppo, 1986) and the heuristic-systematic model (Chaiken, Liberman, & Eagly, 1989), a dual-process model of credibility evaluation holds that users often apply heuristics, in other words simple decision-making rules, when evaluating credibility (see Sundar, 2007; Metzger, 2007; Metzger et al., 2010). Heuristics may be exceptionally important when evaluating information because they can be applied automatically in addition to serving as a conscious decision-making strategy. If an individual is both highly motivated and able, he or she may choose to engage in systematic processing by closely scrutinizing message content, thus decreasing the impact of heuristics.
However, even then users may choose to apply heuristics consciously as tools in the evaluation process. Three conditions must be met for a heuristic to be applied (Sundar, 2007). First, a cue, or trigger, must be cognitively available and recognized in the content. Second, a specific heuristic must be easily brought to mind due to either frequent or recent use. Finally, the heuristic must be relevant to the issue, thus providing meaningful guidance in the evaluation process. In short, absent high ability and motivation, heuristics drive evaluations.

User-generated or shared content is subject to heuristic evaluation. Metzger et al. (2010), using qualitative interviews, found that two primary heuristics, social confirmation and expectancy within contexts, guided users' perceptions of credibility online. First, social confirmation, conceptually similar to social proof, holds that the opinions and actions of others can either bolster or negate the credibility of a source. For example, Sundar and Nass (2001) found that users liked news stories shared by other users more than those selected by professional news editors and believed them to be of higher quality. Sundar (2007) proposed that this provided evidence of a social consensus effect online. Second, expectancy within contexts holds that a violation of contextual norms will result in a negative evaluation of credibility. For example, an online restaurant review written by a chef might be perceived as very credible because he or she has a high level of expertise with which to judge the food and service. However, if the review were written about a restaurant at which the chef worked, this might violate consumers' expectations that reviews are unbiased and written by typical consumers, leading to negative evaluations of credibility. In short, in the absence of information about the source, users may rely on social consensus and their own expectations to evaluate the credibility of information.
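The dual-process logic described above, systematic scrutiny when motivation and ability are both high and heuristic processing gated by Sundar's (2007) three trigger conditions otherwise, can be sketched as a minimal decision procedure. This is an illustrative sketch only; the class, function, and cue names below are my assumptions and are not part of any published model:

```python
from dataclasses import dataclass

@dataclass
class Heuristic:
    """A simple decision rule, following Sundar's (2007) three conditions."""
    name: str
    trigger_cue: str   # condition 1: a cue must be present and recognized in the content
    accessible: bool   # condition 2: easily brought to mind (frequent or recent use)
    relevant: bool     # condition 3: provides meaningful guidance for the judgment

def applicable(h: Heuristic, cues: set) -> bool:
    """A heuristic fires only when all three conditions are met."""
    return h.trigger_cue in cues and h.accessible and h.relevant

def evaluate_credibility(cues, heuristics, motivated, able):
    """Dual-process sketch: systematic processing when both motivation and
    ability are high; otherwise rely on whichever heuristics fire."""
    if motivated and able:
        return "systematic"  # close scrutiny of message content
    fired = [h.name for h in heuristics if applicable(h, cues)]
    return f"heuristic: {fired}" if fired else "no basis for evaluation"

social_proof = Heuristic("social confirmation", "positive reviews", True, True)
print(evaluate_credibility({"positive reviews"}, [social_proof], False, True))
# heuristic: ['social confirmation']
```

The point of the sketch is only the gating structure: a low-motivation reader falls through to whichever heuristics are triggered, accessible, and relevant, while a motivated and able reader bypasses them.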
A heuristic approach to online credibility assessment directly addresses the paradox of how users can evaluate a source's credibility when very little is known about the source's identity. Users do not require large amounts of information about a source in order to apply heuristics. Instead, scant information is sufficient to determine credibility if it activates a relevant heuristic. Unfortunately, the dearth of research on credibility evaluations of user-generated content means that it is not clear how users apply heuristics or when they engage in systematic information processing. It is clear, however, that social media users often interact in complicated social environments and that many different, potentially conflicting cues are available to them. Currently, it is not known how they resolve these conflicts, but theories of attribution and self-presentation may provide a useful starting place for the evaluation of source credibility.

Warrant

Although the heuristic approach to credibility assessment can be applied with little information about a source, users' ability to discern a source's identity and attributes is an imperative condition for other emerging theories of online attribution. During the Internet's nascent phase, one observation that fascinated researchers was that lean, text-based environments allowed great freedom to diverge from a "real world" identity. Many aspects of a self-presentation, such as race and gender, that are easily confirmed or disproved offline are obscured in online contexts. This stands in stark contrast to face-to-face contexts, where form and identity are often treated synonymously (Stone, 1995). Indeed, it is the strength of this assumption that allows grifters, undercover policemen, and spies to operate effectively.
The fact that online contexts obscure attributes of a source raises the question of how individuals can evaluate credibility and develop trust in an environment where claims cannot be substantiated. The concept of warrant, defined as "the capacity to draw a reliable connection between a presented persona online and a corporeally anchored person in the physical world," may provide a partial answer (Walther, Van Der Heide, Hamel, & Schulman, 2009, p. 232). Linking online self-presentations to offline identities bolsters warrant because it makes the online identities more concrete and less malleable. Information that provides warrant may do so to a greater or lesser degree, and warrant, rather than being a discrete property, falls along a continuum representing different degrees of confidence (Walther & Parks, 2002). Furthermore, warrant is not a property of communication technologies; rather, it is based on the receiver's evaluation of the evidence connecting online presentations to an offline identity. Walther and Parks (2002) noted that users must evaluate the extent to which warrant is present when communication technologies allow people the freedom to diverge from their offline identities, thereby creating ambiguity in regard to identity. Therefore, warrant is a perceptual variable, and different individuals may view the same situation as having more or less warrant.

Sources of warranting information. Information that provides warrant can be derived from a number of different sources. First, having access to another's social network supports identity claims because it allows users to verify information with others and to observe whether self-presentational claims go unchallenged (Walther & Parks, 2002). While access to an individual's social network may reduce uncertainty (Parks & Adelman, 1983), Donath and boyd (2004) note that the usefulness of social structure in determining warrant depends upon a number of qualifications.
Individuals within the target's social network must be real people, as opposed to fake profiles, who know the person well enough to detect deception. Additionally, these individuals must be willing to speak out or impose sanctions if deception is detected. Therefore, while access to a social network contributes to the development of warrant, it is contingent upon users' perceptions of their target's social environment.

Second, self-generated information and direct communication between parties contribute to the development of warrant. Parks and Archey-Ladas (n.d.) found that creators of personal homepages often provided verifiable elements, such as real names and pictures, which linked online identities to real-world people. Similarly, direct communication between online parties is strongly associated with uncertainty reduction (Sanders, 2008; Antheunis, Valkenburg, & Peter, 2010). Walther and Parks (2002) note that face-to-face relationships become fully warranted when individuals interact over time across multiple contexts. The perceived stability across interactions results in strong certainty about an individual's identity. Likewise, recurrent online interactions may lend perceived stability to an identity even in the absence of a link to an offline counterpart. However, self-generated statements should be given relatively little weight in determining warrant compared to other sources of information because users can easily fabricate messages that create a desired impression.

Finally, communication systems may contribute to warrant by requiring users to provide information that can be corroborated. For example, historically, to gain access to university or corporate networks on Facebook, a user was required to have an email address issued by the relevant entity (boyd & Ellison, 2007). Even requiring the use of real names may reinforce warrant because it allows users to search for information about one another.
Utz (2010) noted that system-generated information, such as the number of friends to whom an individual is connected on a social network, can also contribute to warrant. System-generated information should have a high warranting value for those who attend to it because it is often beyond users' ability to manipulate easily.

The warranting principle. When evaluating an identity claim, users must weigh and consider the extent to which information from multiple sources contributes to warrant. This is not necessarily straightforward. While information from different sources may complement or corroborate one another, it can also contradict, requiring the receiver to decide which source to privilege. The warranting principle holds that when individuals encounter contradictory information they will privilege the source with the higher warranting value. The warranting value of information is the receiver's judgment of the extent to which warrant exists in a given situation. Therefore, it is a subjective weight representing the degree to which warrant contributes to the receiver's confidence in the identity and attributes of a focal individual. Walther and Parks (2002) propose that information's warranting value is determined by the "receiver's perception about the extent to which the content of that information is immune to manipulation by the person to whom it refers" (p. 552). For example, system information and information from others, which are relatively difficult for an individual to manipulate, have a higher warranting value than self-generated information, where a user could easily create a flattering self-presentation. In short, warrant serves as a framework for understanding how people evaluate and weigh information from different types of online sources.

Although warrant is a potentially useful construct, there are a few problems with the current conceptualization that limit its applications.
Warrant, at its core, is the ability to verify information so that inaccurate self-presentations are constrained. While establishing a link between an online persona and an offline identity may bolster warrant, it is not a necessary condition for warrant. For example, interpersonal relationships can and do form between people who never meet, and interaction over time is likely sufficient to establish many identity claims. Additionally, certain qualities, such as expertise, or certain skills may be directly observable and thus need no corroboration. Finally, an individual's language or practices can serve as shibboleths that mark a person as a member of a particular group. Not only is an online-offline link unnecessary to establish warrant, it is also not sufficient. Although offline self-presentations typically contain a high degree of warrant, not all offline relationships are fully warranted. For instance, a person searching for a romantic relationship may spend a significant amount of time with another person before realizing that he or she is married.

The second critique of warrant is that it may apply to relatively few situations. Parks (2011) describes three conditions that must be satisfied for warrant to impact receiver attributions. Specifically, users must actively engage in self-presentation, a third-party source must provide corroborating or conflicting information, and the receiver must be able to compare the user's claim and the third-party information in a practical manner. Although these criteria superficially appear to match the affordances provided by social network sites, Parks (2011) found that they could be applied to only a relatively small percentage of MySpace pages. Therefore, warrant may be internally valid but lack ecological validity.

This dissertation offers a revised definition of warrant to address the conceptual problems detailed above.
Rather than focusing on ties between online and offline identities, it is posited that any information that constrains self-presentation and can be verified via other sources enhances warrant. Removing the requirement that warranting information link online and offline identities honors its original intent of explaining how people evaluate information during the attribution process while greatly increasing its applicability.

The concept of warrant may have important implications for how people evaluate credibility and make trust decisions online. First, warrant suggests that a "source" can be identified in the traditional sense (i.e., a distinct individual or organization) and thus invites the application of conventional conceptualizations of source credibility to social media. Indeed, extremely high levels of warrant result in an identity that, rather than being anonymous and fluid, is known and concrete. Second, warrant provides guidance as to how people might evaluate and weigh contradictory information in the attribution process. Specifically, the warranting principle suggests that third-party information is weighed more heavily than self-generated information, indicating that social consensus heuristics may be extremely powerful in online environments. Third, warrant suggests a mechanism by which some technological affordances may impact credibility evaluations. Specifically, technological features that are relatively easily manipulated by a target individual are perceived as less informative. Finally, theories of online attribution must consider the unique attributes of the medium that distinguish it from face-to-face interaction while remaining flexible enough to provide broad applicability. The warranting principle accomplishes this because it is not tied to a particular modality but relies on how people perceive information in mediated social environments.
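The resolution rule at the heart of the warranting principle, that contradictory claims are settled in favor of the source with the higher warranting value, can be illustrated with a toy sketch. The numeric weights below are invented for illustration only; the principle itself specifies an ordering of sources (system and social information above self-generated claims), not numeric values:

```python
# Toy illustration of the warranting principle: when sources disagree,
# the claim backed by the highest warranting value wins. The numeric
# weights are invented for this sketch; the principle only orders sources.
WARRANTING_VALUE = {
    "system": 0.9,  # hard for the target to manipulate (e.g., friend counts)
    "social": 0.7,  # other-generated comments, subject to social sanction
    "self":   0.3,  # easily fabricated self-presentation
}

def resolve(claims: dict) -> str:
    """claims maps a source type ('system', 'social', 'self') to the
    impression it supports; return the impression backed by the
    source with the highest warranting value."""
    best_source = max(claims, key=lambda s: WARRANTING_VALUE[s])
    return claims[best_source]

# A profile owner claims to be outgoing, but friends' comments contradict it;
# per the warranting principle, the social information prevails.
print(resolve({"self": "outgoing", "social": "reserved"}))  # reserved
```

Because warranting value is a receiver perception rather than a property of the technology, an empirical version of this table would differ across individuals, which is precisely what the self-report measures developed in Chapter Two are meant to capture.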
The goal of this dissertation is to determine how people evaluate the credibility of user-generated content in online forums and social network sites. The first study develops a set of measures for warranting value designed to tap into receivers' perceptions of the degree to which different sources contribute to warrant. These measures are an important precursor to the following studies because they allow warrant to be tested as a potential mediating variable. The second study examines how users interpret and resolve contradictory heuristic cues for credibility. Additionally, it asks whether the warranting principle can explain how heuristics are applied and weighted when making attributions of credibility. Finally, warrant is evaluated against a set of conflicting hypotheses generated by an alternative theoretical framework, signaling theory, to test whether there are special cases in which self-generated information may be privileged over social or system-generated information.

Chapter Two: Development of the Warranting Value Measures

Although the warranting principle was introduced over fifteen years ago (Stone, 1995), until recently there has been little effort to understand its role in attribution. Furthermore, the research that has been conducted treats warrant as an independent variable operationalized by manipulating the presence and valence of system and social information. While these studies are suggestive, they do not provide direct evidence for a warranting effect. Warrant was initially conceptualized as the value of information "derived from the receiver's perception about the extent to which the content of that information is immune from manipulation by the person to whom it refers" (Walther & Parks, 2002). This indicates that warrant, as a perceptual variable, functions not as an attribute of information but as a mediating variable rooted in the receiver's perceptions.
Previous studies demonstrating that social information is important when forming attributions were not designed to show the mediating role that warrant is proposed to play or to demonstrate that it is the warranting mechanism, the inability of the target to manipulate information, that is driving results. Therefore, self-report measures of the warranting value of information allow for clarification of warrant’s role in the attribution process.

The measures developed here, tentatively titled the Warranting Value Measures (WVM), are designed to gauge the warranting value of the different sources of warranting information: social, self, and system. Furthermore, it is posited that the mechanisms which contribute to perceived warrant are consistent across the various sources of warranting information. Specifically, information enhances warrant when it constrains self-presentational claims, is difficult to manipulate or change, and provides meaningful information about the target. When self-presentational claims are constrained, it is difficult for an individual to make false or misleading claims about his or her identity or attributes. For example, a person might be reluctant to proclaim his or her athletic prowess on Facebook if he or she believed friends would quickly counteract such claims with numerous examples of failures and fouls. When information is difficult to tamper with or change, it should be seen as more reliable and contribute to perceived warrant. High ratings on eBay, for example, may be trusted because they are believed to be beyond the ability of a user to manipulate. Finally, information increases warrant when it is salient to the interpersonal judgment a person is attempting to make. For instance, the number of friends a person has on a social network may be useful in determining how extraverted the profile owner is but is not necessarily informative concerning his or her credibility as a source of information.
In short, the proposed warranting value measures should allow researchers to determine which sources of information contribute to warrant and why that information adds to warrant.

Methods

Participants

Two hundred seventy-one participants were recruited from Amazon’s crowdsourcing marketplace, Mechanical Turk, and received a small monetary compensation for their participation. Geographic filtering based on mailing address was used to ensure that participants were located within the United States. Participants’ ages ranged from 18 to 66, with the mean age being 32.64 years (SD = 10.92). Women comprised 65% of the sample, with the remaining 35% reporting their gender as male. Finally, the vast majority of participants indicated that they had a Facebook account (89%), suggesting that they were familiar with the site and its affordances.

Procedure

This study followed DeVellis’s (2003) process for scale creation and analysis. First, a large pool of items intended to tap into the proposed factors (self-presentational constraints, inability to manipulate self-referential information, and uncertainty reduction from information) was written by the researcher for each scale. All items were measured on a 7-point Likert scale anchored by Strongly Disagree and Strongly Agree. Next, these items, along with a working definition of warrant, were provided to other scholars for review. Reviewers were asked to rate the relevance of each item to the construct and to evaluate items for clarity. They also were invited to provide additional notes and commentary on scale items. This process bolstered the completeness of the items’ content range representing the construct of interest and, thus, content validity. Potential scale items were administered to participants following item development. Participants were recruited via a message posted on Amazon Mechanical Turk which included a link to the study.
Following an information page enumerating their rights as research participants and providing a general description of the study, participants were asked to examine carefully a mock Facebook page. After this, they were presented with the proposed items for the WVMs (i.e., social, self, and system). Finally, participants were asked to answer Clatterbuck’s CL7 measure of attributional confidence (Clatterbuck, 1979) and Short, Williams, and Christie’s (1976) measure of social presence to assess convergent and discriminant validity, respectively. Attributional confidence is a logical choice for assessing convergent validity because a high degree of warrant should constrain an individual’s self-presentation, thus increasing certainty in attributions. In contrast, information offering a low level of warrant should not bolster attributional confidence. Thus, one would anticipate a moderate to strong positive correlation between attributional confidence and the warranting value of information. In turn, social presence is ideal for assessing discriminant validity because, rather than focusing on the conveyance of information concerned with identity, it focuses on the ability of a channel to transmit emotional warmth. Social presence should exhibit a low correlation with the warranting value of information because emotional cues, while perhaps slightly uncertainty reducing, convey little information about identity.

Results

The purpose of the work described in this study was to create a set of scales that measured the warrant produced by different sources of information online. Each scale started with a large pool of initial items that was reduced using three criteria. First, factor loadings were used to eliminate items that failed to load, or that failed to load strongly, on the expected factors. Second, items were evaluated for content and wording to eliminate any excessive redundancy that might have existed in the larger initial pool of items.
Finally, Cronbach’s alpha was used to ensure that the set of items displayed adequate reliabilities at both the factor and composite variable level. This was an iterative and exploratory process that resulted in three factors for each scale with four items per factor.

Social warrant. Social warrant is the degree of certainty a person has in another’s self-presentation based on information from others in the individual’s social network. Items of the Social Warranting Value Measure (Social WVM) were subjected to principal-axis factor analysis with varimax rotation. The scree plot and eigenvalues suggested a three-factor solution with all items loading uniquely onto their expected factor. Overall factor reliabilities, as assessed by Cronbach’s alpha, were good and can be found in Table 1 along with additional descriptive statistics.

Table 1
Descriptive Statistics for the Social Warranting Value Measure

Scale/Factor    α     M     SD    N
Social WVM     .70   4.18   .70  263
SPC            .81   4.50  1.06  271
IM             .79   4.00  1.23  269
UR             .83   4.04  1.16  265

The three factors were labeled as follows: a) self-presentational constraints (Social SPC), b) inability to manipulate (Social IM), and c) uncertainty reduction from information (Social UR). The Social SPC factor is concerned with the degree to which other community members could prevent an individual from misrepresenting him or herself (e.g., “Nobody would point it out if a person told a lie about him or herself on Facebook.”). The Social IM factor represents the extent to which an individual can manipulate self-referential information originally posted by other community members (e.g., “A person can easily delete other people’s posts on Facebook that reflect poorly on him or her.”). Finally, items like “I can predict how a person will behave based on what other users post about them on Facebook” typify the Social UR factor. This factor represents the degree to which participants find the information informative.
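As a concrete illustration of the reliability criterion used during item reduction, the following is a minimal sketch, using hypothetical data rather than this study's responses, of how Cronbach's alpha is computed for a single four-item factor:

```python
# Minimal sketch of Cronbach's alpha (hypothetical data; illustrative only).
# alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(total scores))

def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(responses):
    """responses: list of rows, one per respondent, one column per item."""
    k = len(responses[0])                     # number of items in the factor
    items = list(zip(*responses))             # transpose to item columns
    item_var_sum = sum(variance(list(col)) for col in items)
    totals = [sum(row) for row in responses]  # each respondent's summed score
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Hypothetical responses: 5 participants rating a 4-item factor on a 7-point scale.
responses = [
    [5, 6, 5, 6],
    [3, 3, 4, 3],
    [6, 7, 6, 7],
    [2, 2, 3, 2],
    [4, 5, 4, 5],
]
print(round(cronbach_alpha(responses), 2))
```

Because the illustrative items here are strongly intercorrelated, the resulting alpha is very high; with real Likert data, values nearer the .70 to .88 range reported for the WVM factors are more typical.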
The items for the social warranting value measure can be found in Table 2 along with the items’ factor loadings.

Table 2
Social Warranting Value Measure - Rotated Factor Matrix
(factor loadings shown as SPC / IM / UR)

Nobody would point it out if a person told a lie about him or herself on Facebook.  .691 / .064 / -.031
Other users on Facebook would point out if a person was misrepresenting him or herself.  .762 / -.076 / .056
People who know the truth will speak up if a person misrepresents him or herself on Facebook.  .723 / .018 / .130
If a Facebook member found out a person was lying about him or herself, they would let other people know.  .716 / -.021 / .160
A person can easily delete other people’s posts on Facebook that reflect poorly on him or her.  -.011 / .637 / -.044
People on Facebook often remove other people’s posts that are uncomplimentary.  -.063 / .661 / -.002
People frequently edit other people’s posts to present themselves in a more flattering light.  .015 / .712 / .020
It is easy to change other people’s posts on Facebook if it sends the wrong message.  .060 / .734 / .037
I know what a person will be like based on the pictures other people share of him or her on Facebook.  .015 / .007 / .773
Other people’s posts on Facebook make me confident that a person is accurately representing him or herself.  .116 / .058 / .725
I can predict how a person will behave based on what other users post about them on Facebook.  .080 / -.002 / .774
The pictures other people share of a person on Facebook reveal a lot about that person’s attitudes.  .088 / -.053 / .680

Self warrant. Self warrant is the level of confidence a person has in his or her attributions about an individual derived from information or statements generated by that individual. Principal-axis factor analysis was used to examine the items making up the Self Warranting Value Measure (Self WVM). Three factors were found with eigenvalues greater than one, and all items loaded uniquely onto one of the three factors.
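The eigenvalues-greater-than-one (Kaiser) retention rule used alongside the scree plot can be sketched as follows; the data below are simulated for illustration and are not the dissertation's responses:

```python
# Illustrative sketch of the Kaiser (eigenvalues > 1) factor-retention rule;
# simulated data, not the dissertation's survey responses.
import numpy as np

def kaiser_factor_count(data):
    """Count eigenvalues of the item correlation matrix that exceed 1."""
    corr = np.corrcoef(data, rowvar=False)    # items are columns
    eigenvalues = np.linalg.eigvalsh(corr)
    return int(np.sum(eigenvalues > 1.0))

# Simulate 200 respondents answering six items driven by two latent factors.
rng = np.random.default_rng(0)
f1 = rng.normal(size=(200, 1))
f2 = rng.normal(size=(200, 1))
data = np.hstack([
    f1 + 0.3 * rng.normal(size=(200, 3)),     # three items loading on factor 1
    f2 + 0.3 * rng.normal(size=(200, 3)),     # three items loading on factor 2
])
print(kaiser_factor_count(data))              # → 2 for this two-factor structure
```

The rule is only a retention heuristic; as in this study, it is best read together with the scree plot and the interpretability of the rotated loadings.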
The items also produced good reliabilities as assessed by Cronbach’s alpha. Table 3 summarizes the descriptive statistics and reliabilities for each factor.

Table 3
Descriptive Statistics for the Self Warranting Value Measure

Scale/Factor   α     M     SD    N
Self WVM      .75   2.96   .64  264
SPC           .74   2.29   .86  269
IM            .88   2.99  1.13  268
UR            .82   3.87  1.10  268

The factor structure mirrored that of the Social WVM with three factors representing a) self-presentational constraints (Self SPC), b) inability to manipulate (Self IM), and c) uncertainty reduction from information (Self UR). The Self SPC factor, which was designed to gauge the extent to which an individual is able to misrepresent him or herself, is represented by items like, “Nothing stops a person from saying things about him or herself that are untrue.” The Self IM factor was designed to measure the degree to which an individual can select what information he or she shares with others in order to create a desired impression. Thus, it differs slightly from the IM factors in the Social WVM and System WVM in that it does not involve hiding or changing information, but is consistent with them in the sense that it concerns how an individual might control access to information during the impression development process. For example, an item from the IM factor in the Self WVM reads, “People only share things that convey how they want others to see them.” Finally, the Self UR factor represents the participant’s perception that self-generated information is informative. A typical item reads, “I know what this person will be like based on the pictures he or she shares on Facebook.” The Self WVM items can be found in Table 4 along with their factor loadings.

Table 4
Self Warranting Value Measure - Rotated Factor Matrix
(factor loadings shown as SPC / IM / UR)

It is easy for a user to pretend to be someone they are not when using Facebook.  .757 / .131 / .077
You can never be sure that people on Facebook are who they claim to be.  .777 / -.042 / .091
Nothing stops a person from saying things on Facebook about him or herself that are untrue.  .615 / .122 / .095
Everything people share about themselves on Facebook is true.  .410 / -.001 / .175
Users normally only post flattering information about themselves on Facebook.  .024 / .770 / .068
People on Facebook usually only share things that make themselves look good.  -.037 / .884 / -.042
People only share things that convey how they want others to see them.  .090 / .789 / -.068
People only post things on Facebook that depict themselves as they would like to be seen.  .152 / .807 / -.048
I know what a person will be like based on the pictures he or she shares on Facebook.  .164 / -.008 / .813
A person’s posts on Facebook make me confident that person is accurately representing him or herself.  .351 / -.034 / .705
I can predict how a person will behave based on the posts he or she makes on Facebook.  .024 / .007 / .758
The pictures a person shares on Facebook reveal a lot about his or her attitudes.  .082 / -.041 / .602

System warrant. System warrant is the level of confidence a person has in his or her interpersonal judgments of an individual based on information generated by a computer system or website. Items for the System Warranting Value Measure (System WVM) were evaluated using principal-axis factor analysis with varimax rotation. Once again, a three-factor solution was found based on the scree plot and eigenvalues (λ’s > 1) with all items loading uniquely. The Cronbach’s alphas for each factor were good, and the reliabilities, means, and standard deviations can be found in Table 5.

Table 5
Descriptive Statistics for the System Warranting Value Measure

Scale/Factor    α     M     SD    N
System WVM     .81   3.38   .81  263
SPC            .80   3.53  1.25  267
IM             .84   3.51  1.17  268
UR             .80   3.30  1.13  270

The same underlying factor structure as in previous scales was found for system warrant.
Specifically, the three factors are a) self-presentational constraints (System SPC), b) inability to manipulate (System IM), and c) uncertainty reduction from information (System UR). System SPC was typified by items such as, “It is difficult for a person to misrepresent him or herself when there is a history of his or her posts.” These items represent the extent to which participants believed system information prevents an individual from misrepresenting him or herself. The System IM factor was represented by items like “A person can hide information that is automatically posted by Facebook if it presents the wrong image,” and measures the perception that information generated by a computer or website is easily manipulated. Finally, the System UR factor included items such as, “The number of friends a person has on Facebook makes me confident that person is accurately representing him or herself.” This factor was designed to assess how informative participants found system-generated information. The items for this factor along with their factor loadings can be found in Table 6.

Table 6
System Warranting Value Measure - Rotated Factor Matrix
(factor loadings shown as SPC / IM / UR)

The total number of friends displayed on Facebook prevents a person from making inflated claims about him or herself.  .613 / .097 / .291
It is difficult to exaggerate about yourself on Facebook due to the total number of photos.  .609 / .041 / .346
The time stamps on posts make deceit difficult on Facebook.  .693 / .084 / .190
It is difficult for a person to misrepresent him or herself when there is a history of his or her posts.  .646 / .095 / .381
A person can hide information that is automatically posted by Facebook if it presents the wrong image.  .159 / .801 / -.060
If a user wants, he or she can easily hide information automatically generated by Facebook.  .168 / .806 / -.073
It is difficult to edit information automatically posted to a profile by the Facebook website.  .046 / .663 / .069
Information that is automatically posted by Facebook is easy to change if it sends the wrong message.  -.085 / .763 / .047
I know what a person will be like based on the record of his or her past activity on Facebook.  .307 / .019 / .592
The number of friends a person has on Facebook makes me confident that person is accurately representing him or herself.  .332 / -.036 / .647
The number of photos a person has on Facebook makes me confident that person is accurately representing him or herself.  .236 / -.023 / .768
The history of a person’s activity on Facebook reveals a lot about his or her skills and abilities.  .194 / .014 / .615

Convergent and discriminant validity. Pearson correlations were conducted between the various warranting value measures and Clatterbuck’s measure of attributional confidence (Clatterbuck, 1979) to help establish convergent validity. First, the different warranting value measures were all moderately intercorrelated, suggesting that while related, they are not measuring the same construct. Second, each of the warranting value measures was moderately correlated with Clatterbuck’s attributional confidence scale, which measures an individual’s confidence in his or her evaluation of others. This was expected and supports convergent validity for these scales. Table 7 summarizes the intercorrelations between the measures and attributional confidence and contains the means and standard deviations of the scales.

Table 7
Correlations Between the WVMs and Attributional Confidence

Measure                       1        2        3       4
1. Social WVM                 1        -        -       -
2. Self WVM                  .429**    1        -       -
3. System WVM                .474**   .496**    1       -
4. Attributional Confidence  .386**   .489**   .514**   1
M                            4.18     2.96     3.37    3.54
SD                            .70      .64      .81    1.10
** p < .01

Discriminant validity was evaluated by comparing the average variance extracted (AVE) for each pair of factors to the squared correlation between factors (Fornell & Larcker, 1981).
If the AVE for both factors was larger than the squared correlation between factors, the factors were held to be distinct. The standardized loadings for the calculation of the AVE were derived from structural equation models created using LISREL 8.8. A model was created for each scale with the proposed three factors and social presence. The AVE values for the Social WVM were .52, .46, .55, and .44 for the Social SPC, Social IM, and Social UR factors and social presence, respectively. The Self WVM AVE values were .44 (Self SPC), .73 (Self IM), .55 (Self UR), and .46 (social presence). Finally, the System SPC, System IM, System UR, and social presence AVE values for the System WVM were .51, .57, .51, and .46. It should be noted that all AVE values exceeded the squared correlations for their respective scales. Thus, there is some evidence for the measures’ discriminant validity. Specifically, the factors of each scale appear to be distinct from one another and from social presence, a distantly related construct. Tables 8, 9, and 10 contain the correlation values for the Social WVM, Self WVM, and System WVM, respectively.

Table 8
Social WVM Factor Correlations

                     1      2      3     4
1. SPC               1      -      -     -
2. IM               .009    1      -     -
3. UR               .119   .044    1     -
4. Social Presence  .081   .034   .239   1
N = 270

Table 9
Self WVM Factor Correlations

                     1      2      3     4
1. SPC               1      -      -     -
2. IM               .121    1      -     -
3. UR               .274  -.040    1     -
4. Social Presence  .061   .071   .200   1
N = 269

Table 10
System WVM Factor Correlations

                     1      2      3     4
1. SPC               1      -      -     -
2. IM               .135    1      -     -
3. UR               .577   .000    1     -
4. Social Presence  .175   .040   .217   1
N = 270

Discussion

The growing importance of user-generated content to online life and commerce highlights the need to develop instruments that allow researchers to study how people evaluate the contributions of others they will likely never meet. The warranting principle may provide a partial answer by describing how information is weighted during the attribution process and stipulating what types of information are most informative.
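The discriminant-validity rule applied earlier (Fornell & Larcker, 1981) reduces to a simple numeric comparison. The sketch below uses hypothetical standardized loadings, not the LISREL estimates from this study; only the .577 factor correlation echoes a value reported above:

```python
# Illustrative sketch of the Fornell-Larcker criterion; the standardized
# loadings below are hypothetical, not the LISREL estimates from this study.

def ave(std_loadings):
    """Average variance extracted: mean of squared standardized loadings."""
    return sum(l * l for l in std_loadings) / len(std_loadings)

def fornell_larcker(loadings_a, loadings_b, factor_corr):
    """Two factors are held distinct if both AVEs exceed r squared."""
    r2 = factor_corr ** 2
    return ave(loadings_a) > r2 and ave(loadings_b) > r2

# Hypothetical four-item factors; .577 echoes the System SPC-UR correlation.
spc = [0.70, 0.75, 0.72, 0.71]
ur = [0.77, 0.73, 0.77, 0.68]
print(fornell_larcker(spc, ur, 0.577))  # → True: both AVEs exceed .577**2 ≈ .333
```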
The results from this study provide preliminary evidence for the reliability and validity of the different warranting value measures. All scales displayed internal consistency, and the reliabilities of their respective factors all achieved acceptable values. The scales were only moderately intercorrelated, suggesting that they measure distinct but related constructs. Scores were also found to be moderately correlated with Clatterbuck’s measure of attributional confidence, providing some evidence for convergent validity. Finally, an examination of the AVE values and squared correlations provides some evidence for discriminant validity between factors and related constructs.

The warranting value measures make a number of contributions to the utility of warrant as a theoretical construct. Currently, all extant studies of warrant are experimental in nature and thus are limited to claims of internal validity. The warranting value measures address this limitation by allowing warrant to be examined in real-world contexts through inclusion in cross-sectional surveys. Additionally, warrant has traditionally been studied as a discrete, independent variable which has been manipulated in experiments. A continuous measure of the perceived warranting value of information is not only more in line with its theoretical conceptualization but also allows for warrant to be studied as a mediating variable or covariate. Indeed, the warranting value measures provide greater methodological flexibility and create the opportunity to include warrant in SEM and regression models. Finally, a measure of warranting value allows researchers for the first time to consider warrant as a dependent variable. Although warrant is not typically thought of as an outcome, it is similar to constructs such as credibility in that it derives its importance from its effect on the attributions and the subsequent behavior of individuals.
A warranting value measure means that studies that test how design choices, policies, and technological affordances affect perceptions of warrant are now possible.

Warrant was initially conceived as a monolithic construct, and some may question the need to develop multiple scales to measure different sources of warrant. However, there are several reasons why this is preferable. First, when treated as separate constructs, researchers gain the ability to evaluate discrepancies between different sources of warranting information. For example, it is not difficult to imagine a scenario in which self-generated claims contradict social or system information. Second, measuring the different sources of warrant allows for the creation of statistical models that demonstrate how the various sources of warrant are weighed with respect to one another. Currently, it is unclear whether the relationship is additive or whether certain types of warranting information suppress the effect of others in the attribution process. Third, creating separate warranting value measures allows for a fine-grained examination of exactly why information provides warrant. Multiple scales allow us not only to identify the type of information but also the mechanism. For example, Facebook wall posts may greatly increase warrant because they are social information that imposes self-presentational constraints.

This study asked participants to respond to items intended to gauge their perceived warrant on Facebook, a popular online social network. A common difficulty in developing measures intended for online research is that different contexts provide distinct affordances and modalities that make an instrument developed in one context hard to apply directly to another. The scales proposed here are no different. The items may be most useful to researchers as stems which can be modified or tailored to reflect the research environment.
However, given that most online interactions take the form of pictures and text, and this seems unlikely to change in the near future, the researcher believes that, typically, only minor revisions will be necessary. Future research should determine whether the scales behave consistently in other contexts and whether minor alterations affect their internal consistency and performance.

Chapter Three: Credibility Heuristics and Astroturfing

The heuristic approach to credibility evaluation online is a particularly parsimonious explanation of how people assess the credibility of user-generated content. Metzger et al. (2010) proposed two general classes of heuristics for the evaluation of credibility: social confirmation heuristics and expectancies within context heuristics. Social confirmation heuristics hold that the actions and beliefs of others attest to the credibility and trustworthiness of a source. In contrast, expectancies within context heuristics are based on the violation of an individual’s expectations about the characteristics of a credible or trustworthy source. However, these heuristics are not mutually exclusive and, as of yet, little is known about how people evaluate contradictory heuristic cues. The warranting principle suggests that in these instances social confirmation cues should be privileged over expectancy violation cues produced by the focal individual.

The practice of astroturfing, in which organizations disguise corporate-sponsored campaigns as grassroots communication with the intent of creating a false sense of public consensus, is particularly intriguing because it illustrates how credibility heuristics can work in tandem or conflict. Astroturfing is based on the logic that when messages are identified as originating from brands or marketers, a persuasive intent heuristic is activated.
The persuasive intent heuristic, an example of the expectancies within context heuristics, is generally thought by both practitioners and researchers to result in a negative evaluation of credibility and trust. Astroturfing is also intimately tied to social confirmation heuristics because when marketers pose as ordinary community members they are attempting to manipulate public opinion and give an impression of widespread support.

It should be noted that credibility assessment heuristics likely have broad applicability to the evaluation of online user-generated content, including health information, consumer reviews, and even online dating profiles. Furthermore, deception, although a characteristic of astroturfing, is not a necessary factor for the application of these credibility heuristics. Rather, astroturfing is merely a specific context in which it is easy to understand how credibility assessment heuristics may complement or conflict with one another.

Community members might become aware of a marketer’s association with a brand in two different circumstances. Marketers might choose to self-disclose their association with the brand publicly. Alternatively, marketers might attempt to hide their relationship with the brand only to have the relationship detected nevertheless by community members. If the assumptions upon which astroturfing is based are sound, both of these scenarios should result in a decreased evaluation of credibility and trust. However, there is some evidence which suggests that this may not be the case. Carl (2008) found that when marketers revealed that they were acting on behalf of a marketing firm, they were rated as more trustworthy and as having more goodwill than when they did not initially disclose and an association was discovered later. While compelling, this study was not experimental and could not demonstrate causality.
Furthermore, it lacked a control group, in which the marketer neither self-disclosed nor was caught, to anchor the pattern of the findings. As a result, it is unclear whether marketers who failed to disclose their associations were censured, whether honest marketers were rewarded with higher perceptions of credibility, or whether both honest and dishonest marketers were rated more poorly but to differing degrees. Therefore, based on the persuasive intent heuristic and the findings reported by Carl (2008), it is hypothesized:

H1: Participants will perceive marketers who disclose their association with a brand to be more credible than marketers who attempt to conceal their association with a brand.

H2: Participants will report higher levels of trust for marketers who disclose their associations with the brand than for marketers who attempt to conceal it.

Although expectancy violations are anticipated to predict credibility and trust, they have to be evaluated in the presence of other potentially contradictory information. Specifically, users may be faced with a situation where a social confirmation heuristic is in conflict with an expectancies within context heuristic. For example, online forum users may encounter a vendor or brand representative with a high reputation score, a positively valenced social confirmation heuristic, who is recommending the purchase of his or her company’s own products, a negatively valenced expectancy violation. As of yet, little is known about how users resolve these conflicts. Warrant suggests that social confirmation heuristics should be privileged over expectancies within context heuristics, such as persuasive intent, which are self-generated by the source. In turn, Metzger et al. (2010) suggest that expectancies within context heuristics may overpower social confirmation heuristics.
Indeed, Sanders and Hollingshead (2010) found that whether brand agents chose to identify themselves influenced how credible an online community was judged to be, but did not detect an effect for social consensus. Unfortunately, while they collected data on the perceived credibility of the focal brand and online community, they neglected to measure perceptions of the brand agent. Thus, while we know something about the wider consequences of astroturfing for the brand and community, it is still unknown how users evaluate the actual source of information in these situations. Considering what is known, the following hypotheses are proposed:

H3: There will be a main effect for social confirmation such that when other community members concur with the marketer’s statements, the marketer will be perceived as more credible than when community members disagree.

H4: Social confirmation and persuasive intent will interact such that self-identified marketers with whom community members agree will enjoy the highest levels of credibility.

Rather than attempting to generate public support for a brand or product, an inverted strategy to traditional astroturfing might consist of marketers undermining a competitor’s brand with negative feedback or false comments. However, despite its inverted nature, this tactic should produce identical results in regard to how community members evaluate the marketer, the source of the information. Specifically, a marketer who is detected running down a competitor would violate the expectancies within context heuristic by cuing perceived persuasive intent. Likewise, community participants can be expected to respond similarly to social consensus and evaluate the marketer in turn. What remains unclear is the effect these statements may have on the focal brand’s perceived product quality and brand trust.
Sanders and Hollingshead (2010) found that when traditional astroturfing conducted by a brand’s own marketers was detected within an online community, brand trust and perceived product quality suffered. However, when competitors engage in this behavior, there are at least three potential outcomes. First, online community participants may view these comments as irrelevant and uninformative concerning the focal brand. Alternatively, the effort that rival marketers make to disparage a brand may raise its estimation in the eyes of a consumer. The fact that a competitor views a brand as a threat may signal high quality to some consumers. Finally, regardless of the source or the validity of the statements, the brand may suffer deleterious effects from competitors’ comments. Therefore, the following research question is proposed:

RQ1: How does astroturfing by a brand competitor affect brand trust and consumer purchase intent?

It is important to note that the root of credibility evaluations differs for social confirmation heuristics and expectancies within context heuristics. Social confirmation heuristics function because people look to others to help them make judgments about their social world. Therefore, information about a source’s reputation or consensus regarding the veracity of his or her claims can bolster source credibility. In contrast, expectancies within context heuristics function by violating assumptions about what constitutes a credible source. Expectancy violations consisting of poor spelling or obvious factual errors reflect upon a source’s competence and expertise but do not necessarily compromise perceptions of the source’s integrity. In turn, a source may be expert, but his or her trustworthiness (i.e., personal integrity) may be compromised by perceived persuasive intent.
Due to differences in how social confirmation and expectancies within context heuristics affect credibility, this paper proposes that warrant will mediate only the relationship between social confirmation heuristics and credibility and trust.

H5a: Social warrant will mediate the relationship between social confirmation heuristics and credibility.

H5b: Social warrant will mediate the relationship between social confirmation heuristics and trust.

Method

Participants

Two hundred ninety participants were recruited from the Amazon Mechanical Turk crowdsourcing marketplace. Although this service has a global reach, locale filtering based on Amazon account mailing addresses was used to ensure that all participants were in the United States. The sample was predominantly female (64.2%), with men making up 34.5% of the sample. The mean age of participants was 34.21 years (SD = 11.01). Ages ranged from 18 to 71, indicating a much more diverse and representative sample in terms of age than can be achieved in a traditional university setting. Participants also indicated a moderate level of familiarity with the TripAdvisor website when asked whether it was an important resource for making travel plans (M = 3.87, SD = 1.71) or whether it was among the first places they turned for travel advice (M = 3.42, SD = 1.64). Overall, the age and gender distributions of this sample appear to be representative of the larger population of U.S.-based Mechanical Turk users (Paolacci, Chandler, & Ipeirotis, 2010).

Design and Procedure

This study used a 2 (Brand Competitor vs. Brand Representative) x 2 (Social Confirmation) x 2 (Persuasive Intent) design. Mock forum pages of a well-known travel website, TripAdvisor.com, were created using GIMP, an open-source image editor. The pages featured a conversation in which a user elicited recommendations for accommodations during an upcoming trip.
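The full-factorial crossing behind this 2 x 2 x 2 design (plus the control cell) can be enumerated programmatically. The sketch below is illustrative only; the factor labels are assumptions introduced for the example, not labels used in the study.

```python
from itertools import product

# Hypothetical labels for the three two-level factors described above.
factors = {
    "source": ["brand_representative", "brand_competitor"],
    "social_confirmation": ["community_agrees", "community_disagrees"],
    "persuasive_intent": ["self_identified", "identified_by_others"],
}

# Crossing the factors yields the eight experimental cells; the control
# condition (no brand association shown) is appended separately.
conditions = [dict(zip(factors, combo)) for combo in product(*factors.values())]
conditions.append({"source": "brand_representative",
                   "social_confirmation": None,
                   "persuasive_intent": None})  # control: no association shown

print(len(conditions))  # 9 conditions: 8 experimental + 1 control
```

Random assignment of each participant to one of these nine cells, as described below, is then a single draw from this list.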
Each condition featured a brand representative, either from the focal brand or the competition, recommending or disparaging, respectively, a specific hotel room. The statements made by the focal brand representative were identical to those made by the brand competitor, with the exception of bipolar adjectives describing the hotel. For example, the rooms were described as "comfortable" versus "uncomfortable" and "quiet" instead of "noisy" by the brand representative and brand competitor, respectively, in different versions of the stimulus. Persuasive intent was manipulated either by having the brand representative or competitor self-identify his association with the hotel chain within his signature or by having this information revealed by other community members' posts. When a brand representative or competitor chose to acknowledge a brand association, he signed his name along with his position following the comment. For example, the signature for the brand representative read, "Thomas Bollen – Manager of the Cannonboro Inn." When brand associations were identified by a community member, that member posted a statement identifying the representative's association and a link to corroborating evidence on an external website. The failure to self-identify an association with the brand should, in context, be a strong expectancy violation indicative of persuasive intent. There was no information associating the brand representative with the brand in the control condition.

Social confirmation was manipulated by using matched statements with bipolar adjectives in which community members either praised or disparaged the hotel. For instance, one statement read, "The staff was (exceptional/horrible) and the rooms were (spotless/filthy)." All of the community members' statements in a given condition were either positively or negatively valenced.
Social confirmation was coded as whether the community agreed or disagreed with the brand representative's (always positive) or competitor's (always negative) statements. This served to disentangle social confirmation from the valence of the community members' comments. An example of the stimuli can be found in Appendix A.

Participants were recruited via a message posted on Amazon Mechanical Turk which included a link to the study. Following an information page enumerating their rights as research participants and providing a general description of the study, one of nine conditions (eight experimental and one control) was randomly presented to each participant by computer. Thus, participants viewed the stimuli in a context similar to how the stimuli would be naturally encountered. After examining the web page carefully, participants completed an instrument containing the three measures of perceived warrant, credibility, and trust, in addition to demographic items concerning their use of social media.

Measures

Credibility. Credibility was gauged using a three-factor semantic differential scale developed by McCroskey and Teven (1999). Each factor was composed of six items, with higher scores representing higher credibility (M = 3.44, SD = 1.29). Only two factors were detected when an exploratory factor analysis using oblimin rotation was conducted to corroborate the expected factor structure. An examination of the pattern matrix suggests that the trustworthiness factor (honesty) loaded cleanly with the goodwill factor (benevolence). It is unclear why this would be the case. Despite this, the scale proved to be reliable (α = .83).

Trust. A four-item modified version of the Willingness to Risk scale (Mayer & Davis, 1999; Ferrin et al., 2007) was used to assess participants' interpersonal trust of the online marketer in the study.
Higher scores are representative of greater trust, and an exploratory factor analysis using varimax rotation confirmed a unidimensional factor structure. A composite variable (M = 5.15, SD = 1.25) was created from the mean of the four items and was found to be reliable (α = .83).

Brand trust. Brand trust (Delgado-Ballester et al., 2003) was measured using eight items designed to tap into participants' perceptions of brand reliability and goodwill. Despite the proposed two-factor structure, an exploratory factor analysis conducted using varimax rotation found a single factor that could account for 84% of the variance. This may be a result of the brand in question being fictional and participants thus having no real experience upon which to distinguish the brand's reliability from its intentions. Given that the items were found to be reliable (α = .97), a composite variable was created from the average of the eight items (M = 3.50, SD = 1.53), with higher scores equating to higher brand trust.

Purchase intent. Purchase intent was measured using three items written for this study. For example, one item read, "I would definitely book a room at the Cannonboro Inn for a trip to Charleston, South Carolina." The three items displayed excellent reliability (α = .94).

Experience with website. Participant experience with the review website was measured as a possible covariate which may affect perceived trust or credibility. This was evaluated using two items (α = .89). For example, one item read, "TripAdvisor is the first place that I go for recommendations about where to stay when I travel." Both items were measured on a 7-point Likert scale anchored by Strongly Disagree and Strongly Agree.

Social warrant. Social warrant was measured using a modified version of the Social Warranting Value Measure developed in the first study.
Items were modified to refer to TripAdvisor.com rather than Facebook, and the word "posts" was substituted for "photos" in relevant items since photo sharing is not common in bulletin board forums. The Cronbach's alpha was very good at .81.

Self warrant. Self warrant was measured using a modified version of the self warranting value measure developed in the first study. As with social warranting value, scale items were modified to refer to TripAdvisor.com, and the word "posts" was substituted for "photos". The Cronbach's alpha was .80.

System warrant. An adapted version of the system warranting value measure developed in the first study was used to measure system warrant. This scale was tailored similarly to the previous warranting value measures used in this study: references to Facebook were replaced with TripAdvisor.com, and the word "photos" was replaced with "posts". The scale achieved a Cronbach's alpha of .81.

Results

Data Screening

Data were carefully screened and cleaned prior to analysis. Several reading check questions were included in the instrument to ensure that participants were answering the items in good faith rather than trying to game the system for the incentive. For example, one reading check item read, "Mark the response furthest to the right to demonstrate you are reading these questions carefully." Any participant who failed a reading check item was eliminated from the data set. Likewise, any indication that two participants' responses were not independent of one another resulted in both being excluded from the analysis. SPSS's missing values analysis was used to test the assumption that the data were missing completely at random. Little's MCAR test proved to be significant (χ²(4666) = 4863.60, p = .02), indicating that the data were not missing completely at random.
However, no variable was missing in more than 2% of cases, suggesting that missing data were negligible; moreover, SPSS by default does not compute the t-tests used to distinguish MAR from MNAR unless a variable has more than 5% missing data. Thus, analysis proceeded under the assumption that the data set had minimal missing data which were missing at random.

Confirmatory Factor Analysis of the Warranting Value Measures

A three-factor structure for each WVM was expected based on the exploratory factor analyses in Study One and because the instruments were originally designed to reflect a three-factor conceptualization of warrant. However, a confirmatory factor analysis was conducted for each scale to determine whether a three-factor structure best represented the data compared to alternative models. All models were tested using correlation matrices in LISREL 8.8. No cross-loadings were allowed, so all items loaded uniquely onto a single factor in each model tested.

A number of different indices were used to assess model fit. First, chi-square was used to evaluate fit. It has been noted that chi-square's significance is sensitive to sample size and that it should be interpreted with respect to its degrees of freedom. The chi-square is considered to be suitably small if the ratio of chi-square to degrees of freedom is less than five (Wheaton, Muthen, Alwin, & Summers, 1977). Second, the root mean square error of approximation (RMSEA) was evaluated using the criteria suggested by MacCallum, Browne, and Sugawara (1996). Specifically, values less than .10 and .05 are respectively considered representative of adequate and good fit. Third, the standardized root mean square residual (SRMR) represents the standardized difference between observed and predicted correlations. Therefore, fit improves as values approach zero, and values less than .08 are considered good (Hu & Bentler, 1999).
Finally, the non-normed fit index (NNFI) is a ratio based on the difference between the chi-square to degrees of freedom ratios in the null model and the hypothesized model. Values higher than .90 are considered to represent adequate fit (Bollen, 1989).

Social warranting value measure. Social warrant is the degree of certainty a person has in another's self-presentation based on information from others in an individual's social network. Table 11 displays the fit indices for five tested models of the Social WVM. The hypothesized model showed a good fit to the data, χ²(51, N = 278) = 113.67, p < .01, RMSEA = .06, SRMR = .05, NNFI = .96, with all indices providing acceptable values. None of the alternative models provided acceptable values for any index of fit. Thus, the hypothesized model appears to provide the best fit for the data. All factors and the composite scale were found to be reliable. Table 12 summarizes the Cronbach's alpha, mean, and standard deviation for each factor and the scale as a whole.

Table 11
Comparison of Social Warranting Value Measure CFA Models

Model                  χ²        df   χ²/df   RMSEA   SRMR   NNFI
Hypothesized Model     113.67    51    2.23    .07     .05    .96
One-Factor           1,015.70    54   18.80    .28     .21    .45
Two-Factor (SPC+IM)    682.69    53   12.88    .23     .18    .69
Two-Factor (SPC+UR)    447.04    53    8.43    .18     .13    .77
Two-Factor (IM+UR)     526.32    53    9.93    .20     .18    .72

Table 12
Descriptive Statistics for Social Warranting Value Measure

Scale/Factors     α     M     SD
Social Warrant   .81   4.85   .71
SPC              .87   5.34  1.03
IM               .89   4.96  1.13
UR               .83   4.25  1.04

Figure 1. CFA of Social Warranting Value Measure, χ²(51, N = 278) = 113.67, p < .01, RMSEA = .06, SRMR = .05, NNFI = .96.
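The rule-of-thumb cutoffs described above can be collected into a simple screening helper. The following minimal Python sketch encodes the thresholds stipulated in this chapter (they are conventions from the cited literature, not universal standards):

```python
def fit_acceptable(chi2, df, rmsea, srmr, nnfi):
    """Check a CFA solution against the cutoffs used in this chapter:
    chi-square/df < 5, RMSEA < .10, SRMR < .08, NNFI > .90."""
    return {
        "chi2/df": chi2 / df < 5,
        "RMSEA": rmsea < .10,
        "SRMR": srmr < .08,
        "NNFI": nnfi > .90,
    }

# Hypothesized Social WVM model from Table 11 passes every criterion
print(all(fit_acceptable(113.67, 51, .07, .05, .96).values()))   # True
# The one-factor alternative fails every criterion
print(any(fit_acceptable(1015.70, 54, .28, .21, .45).values()))  # False
```

Applied to the remaining rows of Tables 11, 13, and 15, the same helper reproduces the pattern reported in the text: only the hypothesized three-factor models clear all four cutoffs.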
The indices of fit for the hypothesized measurement model for the Self WVM generally indicated an acceptable fit, χ²(51, N = 275) = 159.94, p < .01, RMSEA = .09, SRMR = .06, NNFI = .92. Although the RMSEA of .09 exceeded the .05 criterion for good fit, it fell below the .10 value stipulated as adequate in this study. None of the alternative models indicated acceptable fit on any index, and they do not appear to be viable alternatives (see Table 13 for a comparison of models). Finally, the Self WVM displayed good reliabilities at both the factor and composite variable level. Table 14 contains the Cronbach's alphas, means, and standard deviations of both the composite scale and the individual factors.

Table 13
Comparison of Self Warranting Value Measure CFA Models

Model                  χ²       df   χ²/df   RMSEA   SRMR   NNFI
Hypothesized Model     159.94   51    3.14    .09     .06    .92
One-Factor             871.48   54   16.14    .26     .18    .46
Two-Factor (SPC+IM)    586.53   53   11.07    .22     .14    .64
Two-Factor (SPC+UR)    444.31   53    8.38    .18     .13    .74
Two-Factor (IM+UR)     517.86   53    9.77    .20     .17    .69

Table 14
Descriptive Statistics for Self Warranting Value Measure

Scale/Factors   α     M     SD
Self Warrant   .80   3.05   .70
SPC            .80   2.29   .88
IM             .86   3.11  1.11
UR             .80   3.77  1.08

Figure 2. CFA of Self Warranting Value Measure, χ²(51, N = 275) = 159.94, p < .01, RMSEA = .09, SRMR = .06, NNFI = .92.

System warranting value measure. System warrant is the level of confidence a person has in his or her interpersonal judgments about an individual based on information generated by a computer system or website. The hypothesized model for the System WVM displayed a good fit to the data, χ²(51, N = 276) = 115.87, p < .01, RMSEA = .06, SRMR = .05, NNFI = .96. One two-factor model (SPC+UR) approached acceptable values but did not reach them. The modification indices suggest that some items in the self-presentational constraints factor would load onto the uncertainty reduction factor if allowed to.
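The composite reliabilities reported alongside these scales (e.g., in Tables 12 and 14) are Cronbach's alphas, which can be computed directly from raw item responses. A minimal pure-Python sketch, with illustrative data:

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha from a list of item-response lists.
    Each inner list holds one item's responses across all respondents."""
    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]  # each respondent's scale total
    sum_item_var = sum(variance(item) for item in items)
    return (k / (k - 1)) * (1 - sum_item_var / variance(totals))

# Two perfectly parallel items yield the maximum alpha of 1.0
print(cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4]]))  # 1.0
```

Alpha rises as items covary more strongly relative to their individual variances, which is why it is reported both per factor and for each composite scale above.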
However, given the lack of theoretical justification and the generally good fit of the hypothesized model to the data, no changes were made (see Table 15 for a comparison of models). All factors and the composite variable were also found to be reliable. Table 16 provides a summary of descriptive statistics and Cronbach's alphas for the scale and its factors.

Table 15
Comparison of System Warranting Value Measure CFA Models

Model                  χ²       df   χ²/df   RMSEA   SRMR   NNFI
Hypothesized Model     115.87   51    2.27    .06     .05    .96
One-Factor             556.14   54   10.30    .20     .15    .68
Two-Factor (SPC+IM)    392.68   53    7.41    .16     .13    .78
Two-Factor (SPC+UR)    271.53   53    5.12    .13     .08    .86
Two-Factor (IM+UR)     796.41   53   15.03    .24     .23    .51

Table 16
Descriptive Statistics for System Warranting Value Measure

Scale/Factors     α     M     SD
System Warrant   .81   4.26   .79
SPC              .79   4.00  1.17
IM               .77   4.62   .97
UR               .84   4.17  1.16

Figure 3. CFA of System Warranting Value Measure, χ²(51, N = 276) = 115.87, p < .01, RMSEA = .06, SRMR = .05, NNFI = .96.

Hypothesis Testing

Hypothesis One proposed that participants would perceive forthcoming brand representatives, who disclosed their relationship with the brand, as more credible than brand representatives who attempted to conceal their associations. A 2 x 2 x 2 ANOVA was used to test this hypothesis and found that self-identified brand representatives (M = 3.48, SD = 1.37) were indeed viewed as more credible than brand representatives who attempted to hide their associations (M = 3.09, SD = 1.14), F(1, 248) = 7.17, p < .01, η² = .03. Both means for the experimental conditions were lower than that of the control group (M = 4.49, SD = .80). Hypothesis One was supported.

Hypothesis Two held that participants would have greater trust in brand representatives who disclosed their brand associations than in brand representatives who attempted to hide them.
A 2 x 2 x 2 ANOVA detected a main effect, F(1, 249) = 12.91, p < .001, η² = .05, demonstrating that brand representatives who disclosed their relationship with the brand (M = 2.99, SD = 1.33) were more trusted than those who did not (M = 2.49, SD = 1.11). Again, the means for the experimental groups were lower than that of the control group (M = 3.65, SD = .88). Therefore, Hypothesis Two was also supported.

Hypothesis Three held that when other community members concurred with the brand representative's statements, the representative would be accorded higher levels of credibility than when community members disagreed. Again, this was tested with a 2 x 2 x 2 ANOVA, and a significant main effect was detected, F(1, 249) = 46.79, p < .001, η² = .16. The brand representative was seen as more credible when other community members made statements similar to the brand representative's concerning the product or brand (M = 3.79, SD = 1.28) than when they made contradictory statements (M = 2.76, SD = 1.04). Hypothesis Three was supported.

Hypothesis Four proposed that social confirmation and persuasive intent would interact so that self-identified brand representatives with whom the community agreed would enjoy the highest levels of credibility. Although this effect closely approached significance, it ultimately was not statistically significant, F(1, 249) = 3.65, p = .057. Thus, Hypothesis Four was not supported.

Hypothesis 5a held that social warrant would mediate the relationship between social confirmation (i.e., whether the community agreed or disagreed with the marketer) and credibility. Likewise, Hypothesis 5b posited that social warrant would mediate the relationship between social confirmation and trust. However, social confirmation was not a significant predictor of the proposed mediator, social warrant (β = -.141, p > .05), despite the fact that social warrant was a significant predictor of both credibility (β = -.270, p < .05) and trust (β = .282, p < .05).
Therefore, Hypotheses 5a and 5b were not supported.

Research Question One asked, "How does astroturfing by a brand competitor affect brand trust and consumers' purchase intent?" This question was evaluated using a 2 x 2 x 2 ANOVA. A main effect was detected for both brand trust, F(1, 249) = 10.73, p < .01, η² = .04, and purchase intent, F(1, 249) = 6.308, p < .05, η² = .03. It appears that participants have greater brand trust when competitors (M = 3.67, SD = 1.54) are caught engaging in stealth marketing than when the brand's own marketers are (M = 3.32, SD = 1.64). Likewise, participants indicate greater purchase intent when a competitor engages in stealth marketing (M = 3.76, SD = 1.79) than when the brand's own representatives participate in it (M = 3.51, SD = 1.86).

The effects of astroturfing differed for brand competitors and brand representatives in one other important way. Superficially, the behaviors associated with astroturfing appear to have no negative consequences for the brand. The representative's choice to disclose or not disclose an association with the brand or brand competitor was not a significant predictor of either brand trust, F(1, 249) = .069, p > .05, or purchase intent, F(1, 249) = .016, p > .05. Likewise, whether a brand representative's statements contradicted the prevailing community opinion was not statistically significant for brand trust, F(1, 249) = .157, p > .05, or purchase intent, F(1, 249) = .031, p > .05. However, additional exploratory analyses revealed a significant indirect effect of agreement between the brand representative and the community, through credibility, on both brand trust, β = .27, p < .01, and purchase intent, β = .27, p < .05. Furthermore, this relationship strengthens when only cases with the brand's own agent are examined and is attenuated to the point of non-significance when only the competitor's cases are considered.
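Indirect effects of this kind are commonly estimated by multiplying the two constituent regression slopes (predictor-to-mediator and mediator-to-outcome) and bootstrapping a confidence interval around the product. A simplified pure-Python sketch, with illustrative variable names; note that the b-path here omits the covariate adjustment a full mediation model would include:

```python
import random

def slope(x, y):
    """OLS slope of y regressed on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

def bootstrap_indirect(x, m, y, reps=2000, seed=1):
    """Percentile bootstrap of the indirect effect a*b, where
    a = slope of mediator on predictor, b = slope of outcome on mediator."""
    rng = random.Random(seed)
    n = len(x)
    estimate = slope(x, m) * slope(m, y)
    boots = []
    for _ in range(reps):
        idx = [rng.randrange(n) for _ in range(n)]
        xs = [x[i] for i in idx]
        ms = [m[i] for i in idx]
        ys = [y[i] for i in idx]
        boots.append(slope(xs, ms) * slope(ms, ys))
    boots.sort()
    return estimate, (boots[int(0.025 * reps)], boots[int(0.975 * reps)])
```

With perfectly linear toy data (mediator = 2x, outcome = 3m) the estimate is exactly 6 and the interval collapses around it; with real data, an interval excluding zero indicates a significant indirect effect.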
Similarly, the indirect effects of identification through credibility on brand trust, β = -.09, p = .055, and purchase intent, β = .09, p = .07, closely approached, but did not meet, the criterion for statistical significance. In short, there appears to be a causal chain whereby contradicting the community reduces the credibility of the brand agent, which, in turn, erodes brand trust and purchase intent.

Discussion

This study examined the warranting value measures in the context of a controlled experiment which sought to understand how users apply heuristics to evaluate credibility and trust in online contexts. The results make a number of contributions. First, the confirmatory factor analysis (CFA) of the warranting value measures provides evidence for the WVMs' ability to generalize to other online contexts and bolsters claims of the factors' discriminant validity. Next, the relative importance of different heuristics in the credibility attribution process is explored. Third, this study provides theoretical clarification concerning how astroturfing affects the credibility of, and trust in, marketers. Finally, the practical implications of the findings are discussed in the context of online marketing and brand-to-consumer communication.

The confirmatory factor analysis conducted for each WVM has three important implications. First, unlike exploratory factor analysis, which allows all items to load on all factors, CFA specifies a particular factor structure, thus fixing many factor loadings to zero. This allows CFA to produce a unique mathematical solution. As a result, a well-fitting model in CFA requires that the data fit the proposed factor structure more closely, and items that cross-load onto multiple factors decrease model fit. Thus, CFA provides a somewhat more rigorous replication of the factor structure from Study One than exploratory factor analysis.
In general, the proposed factor structure developed in Study One appears to provide an appropriate description of the current data. Although some goodness-of-fit indices were less than ideal, their values were within the range of the stipulated evaluation criteria and those suggested by others in the literature. Second, discriminant validity among factors is bolstered by comparisons to alternative models with different factor structures. None of the tested models with alternative factor structures demonstrated acceptable goodness-of-fit, suggesting that the three-factor structure of the proposed scales was needed to describe the data. Additionally, this suggests that the three factors are indeed distinct. Finally, the CFA demonstrates that the small changes necessary to use the warranting value measures in different research contexts do not undermine their construct validity. Social network sites, such as Facebook, provide a different, and somewhat richer, set of technological affordances than online forums, and the warranting value measures were tailored to reflect these differences. Their consistent performance between studies implies that the warranting value measures are able to generalize to other contexts.

This study found ample evidence that participants used heuristics to evaluate the credibility of online community participants. Earlier work by Metzger et al. (2010) proposed that heuristics for the evaluation of credibility online could be classified as either expectancy violation or social confirmation heuristics. The current study detected a main effect for persuasive intent, an expectancy violation heuristic. Specifically, marketers who chose to reveal their association with the brand were viewed as more credible than those who chose to conceal it. Additionally, a main effect was detected for social consensus, a social confirmation heuristic.
Therefore, marketers who agreed with the established community opinion were viewed as more credible than those who contradicted it. However, no interaction effect between the two heuristics was found for credibility. This suggests that the heuristics each exert an independent effect on credibility evaluation. While Metzger et al. (2010) speculated that expectancy violations might overpower social confirmation heuristics, the explained variance of each heuristic in this study suggests that social confirmation exerts a much more powerful effect. One explanation for the relative weakness of persuasive intent in this study is that when marketers are identified, regardless of whether they self-identify or are discovered by other community members, their participation in the discussion constitutes an expectancy violation. Indeed, both conditions with identified marketers were rated as less credible than the control group, where no association with the brand was made. Alternatively, social confirmation cues may be viewed as particularly informative because they are typically outside of an individual's ability to change or manipulate (Walther & Parks, 2002; Donath, 2007). Indeed, the results reported here comport with the warranting principle. Practically, the strong social confirmation effect suggests that when the prevailing community opinion is negative, marketers might be best served by staying above the fray and resisting the urge to intervene. Engaging and contradicting the community can damage their credibility and may interfere with their effectiveness in representing the brand at a later date.

The hypothesis that social warrant would mediate the relationship between social confirmation and credibility was not supported. It is important to note that while the relationship between social confirmation and social warrant was not statistically significant, social warrant did have a significant relationship with credibility. This provides some evidence, albeit weak, for predictive validity.
There are two possibilities that might account for the lack of a significant relationship between social confirmation and social warrant. First, the warranting value measures are designed to reflect participants' perceptions of the social and technological context. Online forums have existed for a long time, and participants likely have established preconceptions about what is possible in terms of self-presentation in this medium. It is possible that perceptions of warrant are relatively stable in any given online environment and are not easily manipulated in an experimental study. Second, social confirmation was operationalized as whether other community members agreed or disagreed with the brand agent. Although this information has implications for the participant's attributions about the brand agent, it may not have adequately addressed self-presentation and, by extension, warrant.

Another goal of this study was to provide theoretical clarification of how stealth marketing influences perceptions of credibility and willingness to trust. While Carl (2008) found that marketers who disclosed their associations with a brand were viewed as more credible, the exact relationship between credibility and stealth marketing in that study was not discernible because his study did not include a control condition. The current study helps resolve that ambiguity. First, the marketer in the control condition, in which no link between the marketer and the brand was made, was rated as the most credible. Next, when brand representatives revealed a relationship with the brand, they were seen as more credible than representatives who unsuccessfully attempted to hide the same association. However, they were seen as less credible than the representative in the control condition. Finally, when marketers attempted to hide an association with the brand which was later detected by the community, they were perceived as the least credible.
Interpersonal trust in the marketers followed an identical pattern to credibility. This ordering of means suggests that forthcoming marketers were not being rewarded for honesty. To the contrary, the marketer's perceived credibility and participants' trust were lowered, albeit to a lesser extent than for deceptive marketers. These findings confirm the basic logic undergirding stealth marketing and demonstrate why it has been a viable practice. Known marketers are perceived to have persuasive intent and, therefore, are viewed as less credible sources of information. Marketers operating online, where identity is often difficult to ascertain, may enjoy enhanced credibility when they are able to pass as typical consumers. However, there is a caveat; once marketers are detected engaging in deceptive practices, they pay a heavier price than their honest colleagues in terms of credibility and trust. Furthermore, a minority of cases may garner popular media attention and evolve into public relations quagmires (Sprague & Wells, 2010) with the potential to affect perceptions of the brand negatively. Still, stealth marketing may be a viable tactic when a brand has a low profile among consumers, such as with a new brand or small business, since there may be more to gain than to lose by making consumers aware of the product or service. Likewise, fly-by-night brands unconcerned with long-term reputation management might benefit from stealth marketing. Ethically, stealth marketing will likely always be suspect. Even so, marketers may feel justified engaging in stealth marketing if they believe they are only relaying accurate information about their products in a manner that will not be automatically dismissed as biased by consumers. However, the significant drop in trust when marketers fail to identify their associations strongly suggests that consumers view the practice as deceptive.
Ultimately, a brand may not care that its representatives are not viewed as credible or trustworthy if astroturfing, when detected, does not damage brand trust or purchase intent. This study detected no direct effect on brand trust or purchase intent for perceived persuasive intent, that is, whether the brand representative chose to identify or hide his association with the brand. Furthermore, whether the marketer agreed with or contradicted the prevailing community opinion had no direct effect on brand trust or purchase intent. Based on these results, it is tempting to infer that astroturfing does not truly harm the focal brand. However, there is a significant indirect effect of social confirmation, through credibility, on both purchase intent and brand trust. This indirect effect clearly demonstrates that the behavior of individual brand representatives does matter to consumers' perceptions of the brand and their future decisions to purchase the brand. Mothers often tell their children that their behavior is a representation of the family. The same is true of marketers; what they do affects how the brand is seen. It should be noted that while social confirmation displayed several significant indirect effects, identification merely approached significance. This suggests that failing to identify one's associations to consumers was far less important than contradicting the community and actively trying to persuade them.

In short, the study reported here provides additional evidence supporting the validity of the WVMs and illuminates how different heuristics may be weighted in the attribution of credibility. Specifically, social confirmation heuristics appear to exert a greater effect on both the attribution of credibility and the extension of trust in online forums. Finally, the study explores how credibility heuristics can be applied to understand stealth marketing practices.
Chapter Four: Warranting and Signaling Theory

The core assumption of the warranting principle is that self-generated information is accorded less weight than social or system-generated information because of the ease with which it can be manipulated to create a desired impression. However, signaling theory, an alternative theoretical framework, suggests that certain types of self-generated information are sufficient to establish credibility even in the presence of conflicting social information. This study tests both theories. First, the criteria that must be met to demonstrate a warranting effect are described. Next, signaling theory is introduced, and how and why it might contradict the warranting principle is examined in detail. Finally, the results of an empirical experiment testing a set of contradictory hypotheses generated by the warranting principle and signaling theory are reported. Researchers must show three things in order to demonstrate a warranting effect for credibility. First, warranting information, whether social, self, or system generated, must display a statistically significant relationship with evaluations of credibility. Second, users should weigh information provided by other people or computer systems more heavily than self-generated statements. This is because the warranting principle stipulates that the difficulty of manipulating self-referential information should determine the weight users ascribe to different sources of information in the attribution process. Finally, receivers' perceptions of the warranting value of information should mediate the relationship between the independent variables and the outcome variables. Since warrant is a receiver-based construct, it is impossible to determine whether warrant is actually the mechanism by which the independent variable exerts an effect on dependent variables without first gauging participants' perceptions of the warranting value of the information. Although Walther et al.
(2008) reported a warranting effect for credibility, Utz (2010) notes that only other-generated information was manipulated. Therefore, that study actually demonstrated only that social information has a statistically significant relationship to attributions of credibility, and did not establish a warranting effect. Although subsequent studies have manipulated both self- and other-generated information and demonstrated a warranting effect for other variables such as extraversion, physical attractiveness, and social attractiveness (Walther et al., 2009; Utz, 2010), to date no study purporting to demonstrate a warranting effect for credibility has shown that users weigh information from other people or computer systems more heavily than self-generated statements. Therefore, it is hypothesized that:

H1a: Social information reporting a source's expertise will result in higher evaluations of credibility than social information indicating a source's lack of expertise.

H1b: Social information reporting a source's expertise will result in higher evaluations of trust than social information indicating a source's lack of expertise.

H2a: The perceived warranting value of social information will mediate the relationship between social information and credibility.

H2b: The perceived warranting value of social information will mediate the relationship between social information and trust in the source.

While both the warranting principle and signaling theory provide an explanation of how people use social information to make attributions about others online, the theories differ in an important way. As previously noted, the warranting principle holds that information gleaned from others is more valuable than self-generated information because it is perceived as being relatively resistant to manipulation.
In contrast, signaling theory holds that there are types of self-generated information that can actually be more informative than social information when making attributions. Signaling theory holds that there are two basic types of signals used to convey information: assessment signals and conventional signals (Donath, 2007). First, assessment signals are those that must actually be possessed in order to display them. There are two types of assessment signals: index signals and handicap signals. Index signals are those that are directly related to their display. For example, a good reputation rating on eBay is directly related to being a good seller because in order to achieve a good rating you must perform as such (Shami, Erhlich, Gay, & Hancock, 2009). A handicap signal involves the waste of a resource, thereby indicating that an individual has an excess and can afford to do so. Therefore, benefit accrues only to those individuals who actually possess a quality and can, thus, cheaply display it. For example, users of online social network sites often post pictures of trips taken to exotic locations. While these pictures may be of limited interest to most, they may serve a larger purpose as handicap signals of wealth and leisure since only individuals who have both would be able to take such trips and share the resulting pictures. Second, conventional signals are arbitrary and are not intrinsically linked to the underlying quality or message that they represent. Therefore, by their nature they are less reliable than assessment signals. However, this does not make them useless. When two parties' interests coincide, conventional signals are advantageous because they allow information to be conveyed much more cheaply than with handicap signals. Furthermore, even when interests do not coincide, conventional signals can be highly reliable (i.e., accurate) if social sanctions increase the cost of displaying a signal past the utility in doing so.
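The reliability condition just described, that a conventional signal stays honest when sanctions raise the expected cost of faking past its payoff, can be sketched as a simple expected-value comparison. All quantities below are hypothetical illustrations, not empirical estimates:

```python
# Expected-value sketch of when faking a conventional signal pays off.
# Benefit, detection probability, and sanction are all hypothetical numbers.

def faking_pays(benefit, detection_prob, sanction):
    """A deceptive display is rational only if its payoff exceeds
    the expected sanction for being caught."""
    expected_cost = detection_prob * sanction
    return benefit > expected_cost

# A heavily sanctioned claim (e.g., falsely asserting a protected credential):
# modest payoff, severe and plausibly enforced penalty.
print(faking_pays(benefit=10, detection_prob=0.3, sanction=1000))   # False

# A cheap, rarely policed claim: faking becomes tempting.
print(faking_pays(benefit=10, detection_prob=0.01, sanction=100))   # True
```

The point of the sketch is that the signal itself costs nothing to produce; its reliability comes entirely from the expected social cost of deviating from the shared system of meaning.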
Indeed, the cost of a conventional signal is not inherent in its production, but is incurred by deviating from established systems of meaning. Human language is a good example of a system of conventional signals because the meaning of words is arbitrary and utterances typically involve little intrinsic cost. However, communication is reliable, and languages can evolve and change, because costs are imposed upon statements rather than words or phonemes (Lachmann et al., 2001). For example, an individual online may simply claim to work as a law enforcement officer. Although anyone could theoretically display this signal, the cost imposed by society in the form of fines and imprisonment should the deception be detected and reported makes this a highly reliable signal. In such a case, the liar is not being punished for the words he or she chose but for the meaning of his or her statement. The warranting principle and signaling theory, although superficially similar, produce diametrically opposing hypotheses. Warrant suggests that social information heavily influences interpersonal evaluations and that self-generated information is not to be trusted. In contrast, signaling theory suggests that self-generated information will be privileged in the attribution process when it takes the form of an index or handicap signal that is incapable of being faked. Thus, performing at a high level and demonstrating competence would override contradictory social information reporting incompetence. This is because although truly competent or skilled individuals can, for any number of reasons, produce poor performances, incompetent individuals can never demonstrate a competent or skilled performance. This suggests the following hypotheses based on signaling theory:

H3a: Self-generated information that demonstrates competence will result in higher evaluations of credibility than self-generated information that does not demonstrate competence.
H3b: Self-generated information that demonstrates competence will result in higher evaluations of trust than self-generated information that does not demonstrate competence.

H3c: Self-generated information that demonstrates competence will interact with social information reporting the source's expertise to produce higher evaluations of credibility than in any of the other conditions.

Given the contradictory nature of these theories, the following research question is asked:

RQ1: Does the pattern of results support the predictions of the warranting principle, or are they more closely aligned with signaling theory?

Signaling theory was initially developed in the context of biology to explain how animals communicate certain traits about themselves to others, both within and between species. However, human culture allows us to move beyond the biological limitations of the body when producing signals, creating the potential to fake what previously would have been considered assessment signals. For example, economists, who quickly adopted signaling theory, are often concerned with costly, or handicap, signals, which an individual cannot display unless he or she possesses the underlying trait in excess. An example of this from marketing is a warranty or guarantee. There is a presumption that only high-quality manufacturers and vendors will choose to display this signal since the cost of honoring warranties will be excessively great if the product is not truly high quality (Boulding & Kirmani, 1993). Despite this logic, it is easy to imagine scenarios in which these signals are used deceptively. For example, a manufacturer may provide a warranty, a signal of high quality, but subsequently discover a flaw in their product that could result in increased warranty claims and potential lawsuits. They may choose to continue to display this signal, despite actual low quality, and pay the resulting warranty claims.
This is because the benefit of increasing immediate sales and avoiding the costs of a product recall and potential lawsuits exceeds the cost of continuing to display the signal. The point is that signaling theory, as applied to humans, should never be considered absolute. Rather, it should be considered indicative of heuristic rules that people may use to evaluate information. It should also be noted that the ability to "fake" index signals, at least with humans, might have a longitudinal component. Specifically, it may be possible, and indeed advantageous in some scenarios, to falsify what would be considered an index signal for a short period of time. However, long-term displays of these signals would ultimately prove too costly to be beneficial or be impossible to sustain. For example, trappings in the form of an expensive sports car can be a costly signal of wealth. Despite this, anyone with a modest amount of money can rent one for a short period of time. However, the signal is reliable because it cannot be sustained over time by less affluent individuals. This has important implications for online communication. It suggests that in short-term interactions, what are traditionally considered to be index signals may not initially be interpreted as such. However, long-term participation within a community may lend weight to these signals because they would have to be maintained. This suggests the following hypotheses:

H4a: Length of participation within an online community will be positively associated with perceptions of credibility.

H4b: Length of participation within an online community will be positively associated with perceptions of trust.

H5a: Length of participation within an online community will interact with displays of competence to produce higher evaluations of credibility.

H5b: Length of participation within an online community will interact with displays of competence to produce higher evaluations of trust.
Methods

Participants

One hundred fifty-seven participants from the United States were recruited from the Amazon Mechanical Turk marketplace. The average age of participants was 34.87 (SD=12.58), ranging from 18 to 66. Overall, the sample skewed younger, with over 73% of participants under the age of 40. The sample was roughly split along gender lines, with women (56%) holding a slight majority over men (41%). Again, the gender and age demographics of this sample do not seem to diverge from the known demographics of the United States-based Mechanical Turk population (Paolacci et al., 2010). Almost all of the participants (94%) indicated that they were not familiar with the website, talkstats.com, used as a context for this study, and only a single individual (.6%) reported that he or she could have solved the original poster's problem prior to being presented with information within the study.

Design and Procedure

This study used a 2 (Competent Performance) × 2 (Social Information) × 2 (Length of Participation) design. The open-source photo editing software GIMP was used to create mock forum pages in which online community participants attempted to help another user solve a software error produced by a statistics program, LISREL. Thus, the manipulation required participants to have expertise in both statistics and the software to directly evaluate potential solutions. This was done intentionally to force participants to rely on other forms of information, such as social information or displays of competence, when making credibility assessments rather than closely evaluating proposed solutions. Another reason a statistical and software problem was chosen is that solutions in both domains are objective in that they tend to either function or fail. All conditions featured an original poster requesting help from the community to solve a statistics problem and another user responding with a proposed solution.
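The eight experimental cells of the 2 × 2 × 2 design described above can be enumerated mechanically by crossing the three factors; a minimal sketch (the level labels are mine, and the single control condition sits outside the crossing):

```python
# Enumerate the eight cells of a 2 x 2 x 2 factorial design.
# Factor and level labels are illustrative, not the study's exact wording.
from itertools import product

factors = {
    "performance": ["competent", "not competent"],
    "social_info": ["endorsed", "disparaged"],
    "participation": ["long", "short"],
}

cells = list(product(*factors.values()))
for cell in cells:
    print(dict(zip(factors, cell)))

print(len(cells))  # 8 experimental cells (the control condition is not crossed)
```

Each participant saw exactly one cell (or the control), which is what licenses the between-subjects 2 × 2 × 2 ANOVA reported in the Results.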
Competent performance was manipulated via the reported effectiveness of the support provider's solution. In the expert performance conditions, the original poster, who had requested assistance, responded indicating that the proposed solution worked as described. In contrast, the original poster in the non-expert condition reported that the solution had not been effective. Social information was manipulated using matched statements with bipolar adjectives. Community members in the positive condition endorsed the support provider as an expert, while those in the negative condition made statements indicative of the support provider's incompetence. For instance, one response read, "The problems that John has IDed (typically/only occasionally) cause that sort of error message. I'm (convinced/not convinced) that this is going to fix your error." Length of participation within the community was manipulated by altering the date that the support provider joined the forum and the number of posts he had made. The support provider in the long participation condition had joined the community four years previously and had 1,118 posts. In contrast, the support provider in the short participation condition had joined the community six months prior and had only 118 posts. An example of the stimuli can be found in Appendix B. Participants were recruited via a message posted on Amazon Mechanical Turk which included a link to the study. Following an information page enumerating their rights as research participants and providing a general description of the study, participants were randomly presented with one of nine conditions (eight experimental and one control). Participants were asked to complete an instrument containing measures of credibility and trust concerning the initial support provider after closely examining the web page.

Measures

Social warrant. Social warrant was measured using a modified version of the Social Warranting Value Measure developed in Chapter Two.
Items were modified to refer to talkstats.com rather than Facebook, and the word "posts" was substituted for "photos" in relevant items since talkstats.com does not offer photo sharing. The Cronbach's alphas, means, and standard deviations for each factor and the composite variable can be found in Table 17. Principal axis factor analysis with varimax rotation revealed the expected three factor structure (see Table 18).

Table 17
Descriptive Statistics for Social Warranting Value Measure

Scale/Factor      α     M     SD
Social Warrant   .79   4.92   .68
Social SPC       .84   5.36   .97
Social IM        .88   4.90  1.14
Social UR        .81   4.50   .96

Table 18
Social Warranting Value Measure - Rotated Factor Matrix

             Factor 1  Factor 2  Factor 3
Soc. IM 2      .844      .104     -.079
Soc. IM 3      .811      .243     -.012
Soc. IM 4      .793      .172      .065
Soc. IM 1      .777      .136     -.046
Soc. SPC 1     .211      .816     -.015
Soc. SPC 2     .073      .811      .007
Soc. SPC 3     .131      .730      .253
Soc. SPC 4     .215      .561      .105
Soc. UR 1     -.001      .087      .808
Soc. UR 3      .021      .048      .792
Soc. UR 2      .027      .144      .752
Soc. UR 4     -.081      .018      .539

Self warrant. Self warrant was measured using a modified version of the self warranting value measure developed in Study One. As with the social warranting value measure, scale items were modified to refer to talkstats.com and the word "posts" was substituted for "photos". An exploratory factor analysis found the expected three factor structure and all factors proved to be reliable. Table 19 contains the Cronbach's alpha, mean, and standard deviation for each factor and the composite variable, while Table 20 provides the rotated factor matrix.
Table 19
Descriptive Statistics for Self Warranting Value Measure

Scale/Factor    α     M     SD
Self Warrant   .83   3.19   .69
Self SPC       .81   2.52   .91
Self IM        .89   3.07   .93
Self UR        .83   3.99  1.04

Table 20
Self Warranting Value Measure - Rotated Factor Matrix

            Factor 1  Factor 2  Factor 3
Self IM 2     .826      .074      .073
Self IM 3     .808      .214      .036
Self IM 1     .746      .091      .013
Self IM 4     .735      .166      .077
Self SPC 2    .181      .870      .151
Self SPC 3    .077      .841      .086
Self SPC 1    .314      .565      .224
Self SPC 4    .080      .538      .143
Self UR 3     .049      .152      .903
Self UR 1     .080      .254      .759
Self UR 4    -.029      .000      .602
Self UR 2     .123      .208      .596

System warrant. An adapted version of the system warranting value measure developed in Study One was used to measure system warrant. This scale was tailored similarly to the previous warranting value measures used in this study. Specifically, references to Facebook were replaced with talkstats.com and the word "photos" in items was replaced with "posts". The scale and factor reliabilities were adequate, and Cronbach's alphas, means, and standard deviations can be found in Table 21. However, a principal axis factor analysis with varimax rotation revealed a three factor structure with one item cross-loading on two factors (see Table 22 for the rotated factor matrix).

Table 21
Descriptive Statistics for System Warranting Value Measure

Scale/Factor      α     M     SD
System Warrant   .77   4.63   .70
System SPC       .80   4.46  1.12
System IM        .76   4.77   .96
System UR        .77   4.65   .98

Table 22
System Warranting Value Measure - Rotated Factor Matrix

            Factor 1  Factor 2  Factor 3
Sys. SPC 2    .795      .009      .246
Sys. SPC 4    .668      .143      .299
Sys. SPC 3    .611     -.095      .188
Sys. SPC 1    .585      .012      .279
Sys. IM 1    -.100      .853     -.120
Sys. IM 2     .045      .847     -.018
Sys. IM 4    -.071      .574      .036
Sys. IM 3     .116      .399      .045
Sys. UR 2     .201      .030      .667
Sys. UR 4     .250      .012      .627
Sys. UR 1     .212     -.091      .615
Sys. UR 3     .448      .080      .610

Credibility. Credibility was measured with McCroskey and Teven's (1999) measure of source credibility.
Principal axis factoring with oblimin rotation was used to examine the scale's structure. Although three factors were detected, a number of items intended to gauge goodwill loaded along with items intended to measure trustworthiness. The composite scale had a Cronbach's alpha of .94, and the scale's factors, competence, trustworthiness, and goodwill, had Cronbach's alphas of .92, .91, and .89 respectively.

Trust. Trust was measured using a modified version of Mayer and Davis's (1999) Willingness to Risk measure (M=3.43, SD=1.02), which gauges the extent to which a participant would allow themselves to be vulnerable to the target. Overall, this scale was found to be reliable (α=.83) and loaded onto a single factor when analyzed using principal axis factor analysis with varimax rotation.

Results

Data Screening

All data were screened prior to analysis. First, participants who failed any reading check items (e.g., "Please skip this question to show that you are reading carefully.") were excluded from the analysis. Cases whose data indicated they were not independent of one another were also excluded from the analysis. Second, the scenario for this study evaluated online community support for an individual attempting to solve an issue with the LISREL software used for structural equation modeling. Participants (n=9) who indicated familiarity with either LISREL or structural equation modeling were excluded from the analysis because, to evaluate the impact of social information on trust and credibility decisions, participants should lack the knowledge or ability to critically analyze the statements presented. Third, participants who failed the manipulation check (e.g., "Was the original poster's problem solved?") were excluded from the analysis.
Finally, Little's MCAR test was conducted to test the assumption that the data were missing completely at random; the test was not significant (χ²(1941) = 2,016.87, p = .11), indicating that data were, indeed, missing completely at random.

Hypothesis Testing

Hypothesis 1a proposed that information from the community attesting to a source's expertise would result in higher evaluations of credibility than information from the community indicating a lack of expertise. This was evaluated using a 2 × 2 × 2 ANOVA. Results indicated that when the community attested to the source's expertise (M=5.55, SD=.78) the source was perceived as significantly more credible than when the community did not (M=4.15, SD=.86), F(1, 136) = 30.66, p < .001, η² = .18. Therefore, Hypothesis 1a was supported. Hypothesis 1b held that information from the community reporting a source's expertise would result in higher evaluations of trust than community statements indicating a lack of expertise. As was the case with credibility, when the community provided assurances of the source's expertise (M=5.07, SD=.83) participants indicated a greater willingness to trust the source than when the community did not (M=4.15, SD=.86), F(1, 136) = 33.71, p < .001, η² = .20. Therefore, Hypothesis 1b was supported. Hypothesis 2a posited that social warrant would mediate the relationship between pooled social information and credibility. In turn, Hypothesis 2b held that social warrant would mediate the relationship between social information and trust. Despite a significant relationship between social warrant and trust (β = -.261, p < .05), social information was not a significant predictor of social warrant (β = .038, p > .05). Therefore, neither Hypothesis 2a nor 2b was supported. Hypothesis 3a stated that when a source demonstrated competence, they would receive higher evaluations of credibility than when they failed to demonstrate competence.
The data showed that when an individual was able to solve the original poster's problem (M=5.41, SD=.84), participants judged them as significantly more credible than when the same individual could not (M=4.88, SD=.91), F(1, 136) = 5.30, p < .05, η² = .04. Therefore, Hypothesis 3a was supported. Hypothesis 3b held that participants would trust an individual who could solve the original poster's problem more than one who could not. This was the case; when an individual solved the original poster's problem (M=4.87, SD=.95) they were trusted more than when they were not able to solve it (M=4.18, SD=.99), F(1, 136) = 9.52, p < .01. Thus, Hypothesis 3b was supported. Hypothesis 3c postulated that a demonstration of competence would interact with social information regarding the source's expertise to produce higher evaluations of credibility. However, the 2 × 2 × 2 ANOVA failed to detect the anticipated interaction, F(1, 136) = .57, p > .05. Therefore, Hypothesis 3c was not supported. Hypothesis 4a proposed that the length of participation in an online community would be positively associated with perceptions of credibility. Upon examination, no significant difference in credibility was found between longtime participants (M=5.04, SD=.98) and relative newcomers (M=5.24, SD=.84), F(1, 136) = 1.34, p > .05. Therefore, Hypothesis 4a was not supported. Hypothesis 4b suggested that longtime participants would be more trusted than relative newcomers. Although a significant relationship was detected, F(1, 136) = 3.94, p < .05, η² = .03, the pattern of the means implies that newer community members (M=4.67, SD=.95) were actually more trusted than long-term community participants (M=4.38, SD=1.08). Therefore, based on the pattern of means, Hypothesis 4b was also not supported. Hypothesis 5a stated that the length of participation in an online community would interact with displays of competence to produce higher evaluations of credibility.
Similarly, Hypothesis 5b held that the length of participation in online communities would interact with displays of competence to yield higher levels of trust. However, no interaction effect for length of participation and displays of expertise was detected for either credibility, F(1, 136) = .12, p > .05, or trust, F(1, 136) = .62, p > .05. Therefore, Hypotheses 5a and 5b were not supported. Exploratory analyses using a 2 × 2 × 2 ANOVA were conducted to determine the relationship of social information and displays of competence to the various factors which make up credibility. Social information was found to be significantly related to all three factors: competence, F(1, 136) = 51.51, p < .001, η² = .28, trustworthiness, F(1, 136) = 11.06, p < .01, η² = .08, and goodwill, F(1, 136) = 12.34, p < .01, η² = .08. In turn, the display of competence was significantly related to the competence factor, F(1, 136) = 6.27, p < .05, η² = .04, but not to trustworthiness, F(1, 136) = 2.33, p > .05, or goodwill, F(1, 136) = 3.25, p > .05.

Discussion

Study Three had several goals. First, this study established a set of three criteria for determining whether a warranting effect was present and tested these criteria via an empirical experiment. Second, this study sought to evaluate the warranting principle against a competing set of hypotheses generated by an alternative theoretical framework, signaling theory. Finally, this study sought to understand how system information might influence the evaluation of credibility and the willingness to extend trust. Specifically, it was hypothesized that information from a longtime community member would be given more weight by participants because of the inherent difficulty of misrepresenting one's abilities or attributes over an extended period of time. This study proposed using three criteria to demonstrate a warranting effect for attributions of credibility. First, social information should have a positive relationship with credibility and trust.
As in Study Two, this study found that social information regarding the source had a strong effect on perceptions of credibility. Specifically, when other community members confirmed the information provider's course of action, he was rated as more credible than when they cautioned against it. Likewise, participants were more likely to trust a source who had the community's support than one who did not. Another criterion for a warranting effect was that the warranting value measure should mediate the relationship between social information and credibility. As in Study Two, the warranting value measures failed to demonstrate a significant relationship with the independent variable. Finally, social information should be weighted far more heavily in the attribution process than self-generated information, such as competent performances. This final criterion is particularly intriguing because it places the warranting principle in conflict with a similar, but distinct, alternative theoretical framework, signaling theory. Another goal of this study was to evaluate whether the warranting principle or signaling theory best described the pattern of results. Signaling theory holds that certain types of signals are intrinsically linked to their display, so that an individual must actually have an attribute to display it. Competent performance, operationalized as the target individual being able to solve the original poster's problem, was hypothesized as a sufficient indicator of credibility. Therefore, if an individual was able to solve the original poster's problem, it should be sufficient to establish their competence and, by extension, their credibility. In contrast, the warranting principle holds that social information should always be privileged in the attribution process because it is more difficult to fake or manipulate.
The study's results indicated that when an individual was able to solve the original poster's problem they were rated as both more credible and more trusted than when they were not. In short, as predicted by signaling theory, competent displays contributed to the assessment of credibility. Since competent performance and social information did not interact, each instead exerting an independent effect on perceptions of trust and credibility, we must look at their explained variance to discern their relative importance. A closer examination of the results reveals that social information is the proverbial 800-pound gorilla in the room, explaining four to five times the variance of competent performance for both credibility and trust. While this would seem to indicate clear support for the warranting principle, some researchers have suggested that social information can be viewed as an honest signal because it would be difficult for the target to manipulate (Donath, 2007). However, this interpretation produces numerous theoretical problems given the common understanding of signals. A signal is a "behavior or phenotype produced by one individual (the signaler) that serves to influence the behavior of a second individual (the signal receiver) by transmitting information" (Lachmann et al., 2001). Although informative in making attributions, because social information exists outside of a two-party sender-receiver system, it is difficult to claim that the sender is "signaling" a good reputation through the communication of others. Alternatively, social information might be considered a signal if we assume that there are more than two parties in an interaction. However, the application of signaling theory to multi-party interactions, such as those found in online communities, is highly atypical.
Normally when signaling theory is used to explain scenarios with multiple parties (e.g., mate selection in the biological sciences), individuals are still communicating about their own attributes and not those of others. In short, given the difficulty of considering social information to be a signal, the pattern of results seems to most strongly support warranting theory. One explanation for the strong effect of social information, as opposed to index signals such as displayed competence, is that it is simply more informative regarding credibility and trust. Credibility is a multi-dimensional construct comprising competence, trustworthiness, and goodwill. While a competent display should directly affect perceptions of an online community member's competence, it provides little information about that individual's honesty or benevolence. Likewise, trust, defined as a willingness to accept vulnerability, is sustained by the perceived goodwill of other parties in addition to perceptions of competence. An exploratory examination of the credibility factors found that while social information was significantly related to all three factors of credibility, displays of competence were only significantly related to perceptions of competence. However, it should be noted that even for the competence factor, social information explained a much larger amount of variance than the competence of the performance. This suggests not only that social information is more revealing about the attributes of an individual, but also that competent displays may not be all that informative. Indeed, displaying competence may be sufficient to establish one's expertise and skills, but failing to do so does not necessarily signify incompetence. For example, even though Babe Ruth was able to indicate the part of the stands into which he would hit a home run, he also struck out from time to time. Failure to perform in a specific instance did not mean that he was an incompetent batter.
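The variance-explained comparisons above rest on η², the proportion of total variance attributable to an effect (SS_between / SS_total). A minimal one-way sketch on fabricated two-group scores (all values hypothetical, chosen only to illustrate the computation):

```python
# Eta squared for a one-way, two-group comparison:
# eta^2 = SS_between / SS_total. Scores below are fabricated.

def eta_squared(group_a, group_b):
    allv = group_a + group_b
    grand = sum(allv) / len(allv)
    ss_total = sum((v - grand) ** 2 for v in allv)
    ss_between = sum(
        len(g) * ((sum(g) / len(g)) - grand) ** 2 for g in (group_a, group_b)
    )
    return ss_between / ss_total

endorsed = [5.5, 5.8, 5.2, 5.6]     # hypothetical credibility, community endorses
disparaged = [4.1, 4.4, 3.9, 4.2]   # hypothetical credibility, community disparages

print(round(eta_squared(endorsed, disparaged), 2))
```

Comparing the η² of two effects in the same model, as done above for social information versus competent performance, is simply comparing their shares of the total sum of squares.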
As previously noted, a statistics and software problem was chosen because it required two domains of expertise (i.e., software and mathematics) to centrally process information. The goal was to force participants down a peripheral processing route, which, judging by the manipulation checks, worked. However, it is possible that this manipulation favored the warranting principle over signaling theory. Specifically, individuals who are competent in a domain may attribute more weight to the competent performances of others because they have the experience and knowledge to judge those performances accurately. Future research should consider whether competent individuals interpret social information differently than individuals who are not competent in a domain. Finally, this study sought to determine whether system information concerning an information source's length of participation within an online community would influence perceptions of credibility or trust. It was hypothesized that longer periods of participation would be positively related to credibility and trust because it would be costly and difficult to consistently perform competently unless an individual were truly competent. However, this study found no relationship between the length of time a person participated in the community and judgments of credibility. Nor did it find an interaction effect with competent performance, suggesting that length of participation does not moderate the relationship between displays of competence and credibility. The length of participation in the online community was significantly related to trust but, counter to expectations, long-term participants were trusted less than new participants. One explanation for the length of participation's general lack of effect is that users may not typically attend to this information when they are not highly motivated to evaluate credibility.
On the web page used for the current study, social information was, perhaps, more evident because multiple users posted and the posts were in a more prominent position on the page. This might account for its strong effects on attributions. In contrast, information about the length of participation was featured only below the avatar icon of the target individual and, thus, was not highly visible. One avenue for future research would be to gauge the effect of different sources of information when participants are highly motivated to make a correct judgment. In general, the overall pattern of results was interpreted as being more in line with the warranting principle than with signaling theory. Social information was found to be more important to attributions of credibility than displays of competence. This result directly conflicts with signaling theory. However, the warranting value measures failed to be significantly related to the independent variables, raising validity concerns. Future studies should consider the relationship between the warranting value measures and these variables more carefully and should focus on revising and refining them as a set of instruments.

Chapter Five: Conclusion

My purpose in this dissertation was to examine whether the warranting principle could explain attributions of credibility and the extension of trust in online communities. In particular, I examined how the warranting principle complemented and contradicted both the heuristic approach to credibility evaluation and signaling theory, an alternative theoretical framework. Furthermore, I developed a set of instruments that should have theoretical utility for exploring the dynamics of online communities and practical value in guiding their design. This chapter briefly summarizes this dissertation's primary contributions, acknowledges its limitations, and concludes with a discussion of potential future research.
Summary of Contributions

This dissertation proposed a slightly modified definition of warrant. Specifically, any information that constrains potential self-presentations and is informative about the identity or attributes of an individual is considered to contribute to warrant. This definition acknowledges the past decade's technological and cultural shifts, which enable users to develop confidence in their attributions regardless of whether an online self-presentation is tied to an offline identity. When the warranting principle was initially proposed in 1995, online environments provided limited cues to identity and few opportunities to engage in firsthand observation. Today, people are often socially embedded in networks of their peers, family members, and friends. Additionally, websites commonly include reputation features and provide histories of users' past behaviors that can be followed like breadcrumbs until a coherent picture of who they are emerges. Finally, the Internet is offering people more opportunities to tell and demonstrate what kinds of people they are, whether through taste performances (Liu, 2007), providing expertise (Shami et al., 2009), or displaying skills and abilities. A revised conceptualization of warrant allows researchers to explore how people make sense of these new sources of information while not requiring online information to be linked with offline identities. The warranting value measures developed in this study should prove a useful tool for researchers because, at its core, the warranting principle is concerned with how people evaluate and weigh information from different types of sources. The scales and their subordinate factors have proven reliable in multiple contexts, including social network sites and online forums. Furthermore, the factors and a related construct, social presence, have demonstrated discriminant validity.
However, one weakness is that the scales did not always perform as expected when they were employed in experiments. Despite this, they typically demonstrated a relationship with the dependent variables, credibility and trust. Future research should be conducted to clarify these findings and to establish significant relationships with other types of attributions such as extraversion, physical attractiveness, and intelligence. This dissertation also explored whether the warranting principle could explain how heuristic rules are applied in the evaluation of credibility. The study reported in Chapter Three found that social confirmation cues exerted a greater effect on the attribution of credibility than did persuasive intent, an expectancy violation cue. This result conflicts with previous researchers' assumptions about the relative impact of social confirmation and expectancy violation cues and generally comports with the warranting principle. In particular, social information should be accorded greater weight in the attribution process than other types of information because it is generally immune to manipulation. Finally, the warranting principle was tested against an alternative theoretical framework, signaling theory. Signaling theory holds that some types of information, index signals, are sufficient to establish that the source has a particular attribute because the ability to produce the information is intrinsically linked to that attribute. This dissertation found that social confirmation information exerted a much greater effect on attributions of credibility than did competent performance, an index signal. Therefore, the results of the study appear to favor the warranting principle. Overall, these results suggest that the warranting principle may have broad applicability and may serve to clarify the inner dynamics and boundary conditions of emerging communication theory.
Limitations

There were a number of limitations to the studies in this dissertation. Foremost is that all of the participants were solicited from Amazon's crowdsourcing marketplace, Mechanical Turk. While this is a limitation, it is also an advantage. It is a limitation because, in contrast to the control one can exert in a laboratory setting, it is impossible to ensure that participants are answering items in good faith and following procedures that result in good data. In fact, when screening the data, the researcher detected a small percentage of participants who attempted to cheat in order to receive the small participation incentive. Also, participants tended to answer questions quickly and may not have been as attentive or conscientious as laboratory participants. However, the advantage of recruiting participants from Mechanical Turk is that they more closely represent the general population of the United States than do university subject pools (see Paolacci et al., 2010, for a full discussion). Even so, future studies should replicate the current findings with additional populations. Another limitation is that the visibility of different types of information on the web pages used in the study may have shaped results. While the stimuli used in this dissertation are ecologically valid, they may favor social information over self- or system-generated information. Thus, it is possible that the findings reported here and in the literature do not accurately represent the psychological processes of online community members but, rather, reflect the design and architecture of the supporting technology. For example, Facebook's current design shows wall postings by default, so users must click a separate tab for self-generated profile information. Therefore, future research should take care to design stimuli that do not necessarily favor one type of information.
Although such stimuli might differ significantly from the typical technologies used to support online communities, they would serve to theoretically disentangle the effects of design choices from the attribution process. Finally, alternative operationalizations of the independent variables might obtain different results. First, the experiment concerning astroturfing in Chapter Three employed a fictional small business rather than a large, well-known brand. Although this was done intentionally to control for preconceptions participants might have about a famous brand, uncertainty and lack of previous experience with the brand might have led participants to rely heavily on social information when making attributions. It is possible that persuasive intent could play a much stronger role in the evaluation of familiar people and corporations. Future research should carefully consider this possibility. Second, competent performance, as presented in the study found in Chapter Four, was manipulated by having the original poster report to the community whether a proffered solution had solved their problem. Although it is the norm in support forums and knowledge-sharing communities to mark issues as resolved in this manner, if participants were able to verify for themselves the effectiveness of the solution, it might alter their perceptions of the support provider's competence. One can also argue that this operationalization is problematic because having a community member mark an issue as resolved may be interpreted as social information even though it makes no evaluative judgment of the support provider or their solution. Future studies should be designed so that participants can test solutions for themselves.

Concluding Remarks

During the past decade the Internet population has exploded as more and more people have come online.
Consequently, the emphasis of much communication research has shifted away from studying the formation of online relationships between strangers toward studying how Internet technologies support primarily face-to-face relationships and local communities. However, there is still a need to understand how Internet users evaluate the comments and contributions of strangers. The Internet population has grown in tandem with technologies that allow even the most non-technical of users to create content, converse, and share information and opinions online. Most of these technologies lack the formal gatekeeping structures that have traditionally ensured the quality of information. Given the preponderance of information, the lack of gatekeepers, and the fact that most users will never meet face-to-face, there is a growing need to understand how users decide whom to believe and whom to trust. The implications of such research may extend to almost every facet of online life, from e-commerce to cooperation in massively multiplayer online games. The warranting principle is one explanation that provides a set of structuring rules regarding how individuals evaluate information online. Although perhaps not robust enough to be considered a theory outright, it can complement existing theories of communication by suggesting potential hypotheses that clarify those theories' internal processes and boundaries. The results of this dissertation demonstrate the utility of warrant and imply a broad potential for application to multiple contexts.

References

Antheunis, M. L., Valkenburg, P. M., & Peter, J. (2009). Getting acquainted through social network sites: Testing a model of online uncertainty reduction and social attraction. Computers in Human Behavior, 26, 100-109.
Banning, S. A., & Sweetser, K. D. (2007). How much do they think it affects them and whom do they believe? Comparing the third-person effect and credibility of blogs and traditional media. Communication Quarterly, 55, 451-466.
Barbaro, M. (2006). Wal-Mart enlists bloggers in P.R. campaign. The New York Times. Retrieved March 19, 2011, from http://www.nytimes.com/2006/03/07/technology/07blog.html
Baron, R. M., & Kenny, D. A. (1986). The moderator-mediator variable distinction in social psychological research: Conceptual, strategic and statistical considerations. Journal of Personality and Social Psychology, 51, 1173-1182.
Biocca, F. (1997). The cyborg's dilemma: Progressive embodiment in virtual environments. Journal of Computer-Mediated Communication, 3(2).
Biocca, F., Harms, C., & Burgoon, J. K. (2003). Toward a more robust theory and measure of social presence: Review and suggested criteria. Presence: Teleoperators & Virtual Environments, 12(5), 456-480.
Bollen, K. A. (1989). Structural equations with latent variables. New York: Wiley.
Boulding, W., & Kirmani, A. (1993). A consumer-side experimental examination of signaling theory: Do consumers perceive warranties as signals of quality? Journal of Consumer Research, 20, 111-123.
boyd, d. m., & Ellison, N. B. (2007). Social network sites: Definition, history, and scholarship. Journal of Computer-Mediated Communication, 13(1), 210-230. doi:10.1111/j.1083-6101.2007.00393.x
Carl, W. J. (2008). The role of disclosure in organized word-of-mouth marketing programs. Journal of Marketing Communications, 14(3), 225-241. doi:10.1080/13527260701833839
Cassidy, W. P. (2007). Online news credibility: An examination of the perceptions of newspaper journalists. Journal of Computer-Mediated Communication, 12(2), 478-498.
Chaffee, S. H. (1982). Mass media and interpersonal channels: Competitive, convergent, or complementary? In G. Gumpert & R. Cathcart (Eds.), Inter/media: Interpersonal communication in a media world (pp. 57-77). New York: Oxford University Press.
Chaiken, S., Liberman, A., & Eagly, A. H. (1989). Heuristic and systematic processing within and beyond the persuasion context. In J. S. Uleman & J. A. Bargh (Eds.), Unintended thought (pp. 212-252). New York: Guilford Press.
Clatterbuck, G. W. (1979). Attributional confidence and uncertainty in initial interaction. Human Communication Research, 5(2), 147-157. doi:10.1111/j.1468-2958.1979.tb00630.x
Colquitt, J. A., Scott, B. A., & LePine, J. A. (2007). Trust, trustworthiness, and trust propensity: A meta-analytic test of their unique relationships with risk taking and job performance. Journal of Applied Psychology, 92(4), 909-927.
Delgado-Ballester, E., Munuera-Aleman, J. L., & Yague-Guillen, M. J. (2003). Development and validation of a brand trust scale. International Journal of Market Research, 45(1), 35-53.
DeVellis, R. F. (2003). Scale development: Theory and applications (2nd ed.). Thousand Oaks, CA: Sage Publications.
Donath, J. (1999). Identity and deception in the virtual community. In M. A. Smith & P. Kollock (Eds.), Communities in cyberspace (pp. 29-59). London: Routledge.
Donath, J. (2007). Signals in social supernets. Journal of Computer-Mediated Communication, 13(1), 231-251.
Donath, J., & boyd, d. m. (2004). Public displays of connection. BT Technology Journal, 22(4), 71-82.
Dutta-Bergman, M. (2004). The impact of completeness and web use motivation on the credibility of e-health information. Journal of Communication, 54, 253-269.
Eastin, M. (2001). Credibility assessments of online health information: The effects of source expertise and knowledge of content. Journal of Computer-Mediated Communication, 6(4).
Eysenbach, G., & Kohler, C. (2002). How do consumers search for and appraise health information on the world wide web? Qualitative study using focus groups, usability tests, and in-depth interviews. British Medical Journal, 324, 573-577.
Ferrin, D. L., Kim, P. H., Cooper, C. D., & Dirks, K. T. (2007). Silence speaks volumes: The effectiveness of reticence in comparison to apology and denial for responding to integrity- and competence-based trust violations. Journal of Applied Psychology, 92(4), 893-908. doi:10.1037/0021-9010.92.4.893
Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18(1), 39-50.
Greer, J. D. (2003). Evaluating the credibility of online information: A test of source and advertising influence. Mass Communication and Society, 6(1), 11-28.
Hovland, C. I., Janis, I. L., & Kelley, H. H. (1953). Communication and persuasion. New Haven, CT: Yale University Press.
Hu, L., & Bentler, P. M. (1998). Fit indices in covariance structure modeling: Sensitivity to underparameterized model misspecification. Psychological Methods, 3, 424-453.
Johnson, T. J., & Kaye, B. K. (2004). Wag the blog: How reliance on traditional media and the Internet influence credibility perceptions of weblogs among blog users. Journalism and Mass Communication Quarterly, 81, 622-642.
Kent, R. J., & Allen, C. T. (1994). Competitive interference effects in consumer memory for advertising: The role of brand familiarity. Journal of Marketing, 58, 97-105.
Kiousis, S. (2001). Public trust or mistrust? Perceptions of media credibility in the information age. Mass Communication and Society, 4(1), 381-403.
Lachmann, M., Szamado, S., & Bergstrom, C. T. (2001). Cost and conflict in animal signals and human language. Proceedings of the National Academy of Sciences of the United States of America, 98(23), 13189-13194.
Lampe, C., Ellison, N., & Steinfield, C. (2006). A Face(book) in the crowd: Social searching vs. social browsing. In Proceedings of the 2006 20th Anniversary Conference on Computer Supported Cooperative Work. Banff, Alberta, Canada.
Liu, H. (2007). Social network profiles as taste performances. Journal of Computer-Mediated Communication, 13(1), 252-275.
MacCallum, R. C., Browne, M. W., & Sugawara, H. M. (1996). Power analysis and determination of sample size for covariance structure modeling. Psychological Methods, 1(2), 130-149.
Mayer, R. C., & Davis, J. H. (1999). The effect of the performance appraisal system on trust for management: A field quasi-experiment. Journal of Applied Psychology, 84, 123-136.
Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. The Academy of Management Review, 20(3), 709-734.
McCroskey, J. C., & Teven, J. J. (1999). Goodwill: A reexamination of the construct and its measurement. Communication Monographs, 66, 90-103.
Metzger, M. J. (2007). Making sense of credibility on the web: Models for evaluating online information and recommendations for future research. Journal of the American Society for Information Science and Technology, 58(13), 2078-2091.
Metzger, M. J., Flanagin, A. J., Eyal, K., Lemus, D. R., & McCann, R. (2003). Credibility in the 21st century: Integrating perspectives on source, message, and media credibility in the contemporary media environment. In P. Kalbfleisch (Ed.), Communication yearbook (pp. 293-335). Mahwah, NJ: Lawrence Erlbaum.
Metzger, M. J., Flanagin, A. J., & Medders, R. B. (2010). Social and heuristic approaches to credibility evaluation online. Journal of Communication, 60(3), 413-439. doi:10.1111/j.1460-2466.2010.01488.x
Meyer, D. (2009). Fake reviews prompt Belkin apology. Retrieved March 19, 2011, from http://news.cnet.com/8301-1001_3-10145399-92.html
Morahan-Martin, J. M. (2004). How internet users find, evaluate and use online health information: A cross-cultural review. CyberPsychology & Behavior, 7(5), 497-510.
Nudd, T. (2006). Sony gets ripped for a bogus PSP blog. Retrieved March 19, 2011, from http://adweek.blogs.com/adfreak/2006/12/sony_gets_rippe.html
O'Keefe, D. J. (2002). Persuasion: Theory & research (2nd ed.). Thousand Oaks, CA: Sage Publications.
Paolacci, G., Chandler, J., & Ipeirotis, P. G. (2010). Running experiments on Amazon Mechanical Turk. Judgment and Decision Making, 5(5), XX-XX.
Parks, M. R. (2011). Boundary conditions for the application of three theories of computer-mediated communication to MySpace. Journal of Communication, 61(4), 557-574. doi:10.1111/j.1460-2466.2011.01569.x
Parks, M. R., & Adelman, M. B. (1983). Communication networks and the development of romantic relationships: An expansion of uncertainty reduction theory. Communication Research, 10, 55-79.
Parks, M., & Archey-Ladas, T. (2003). Communicating self through personal homepages: Is identity more than screen deep? Paper presented at the International Communication Association conference.
Pavlou, P. A., & Dimoka, A. (2006). The nature and role of feedback text comments in online marketplaces: Implications for trust building, price premiums, and seller differentiation. Information Systems Research, 17(4), 392-414.
Petty, R. E., & Cacioppo, J. T. (1986). The elaboration likelihood model of persuasion. In L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 19, pp. 123-205). New York: Academic Press.
Resnick, P., Zeckhauser, R., Swanson, J., & Lockwood, K. (2006). The value of reputation on eBay: A controlled experiment. Experimental Economics, 9(2), 79-101.
Rieh, S. Y., & Danielson, D. R. (2007). Credibility: A multidisciplinary framework. In B. Cronin (Ed.), Annual review of information science and technology (pp. 307-364). Medford, NJ: Lawrence Erlbaum.
Riegelsberger, J., Sasse, M. A., & McCarthy, J. D. (2007). Trust in mediated interaction. In A. Joinson, Y. A. McKenna, T. Postmes, & U. Reips (Eds.), Oxford handbook of internet psychology (pp. 53-69). Oxford, UK: Oxford University Press.
Rotter, J. B. (1971). Generalized expectancies for interpersonal trust. American Psychologist, 26(5), 443-452.
Sanders, W. S. (2008). Uncertainty reduction and information seeking strategies on Facebook. Paper presented at the NCA 94th Annual Convention, November 22, San Diego, CA.
Sanders, W. S., & Hollingshead, A. B. (2010). Brand authenticity in online consumer communities. Paper presented at the Annual Conference of the International Communication Association, Singapore.
Shami, N. S., Ehrlich, K., Gay, G., & Hancock, J. (2009). Making sense of strangers' expertise from signals in digital artifacts. Paper presented at CHI 2009.
Short, J., Williams, E., & Christie, B. (1976). The social psychology of telecommunications. London: John Wiley & Sons.
Shrock, A. (2010). Are you what you Tweet? Warranting trustworthiness on Twitter. Paper presented at the Association for Education in Journalism & Mass Communication, Denver, CO.
Sprague, R., & Wells, M. E. (2010). Regulating online buzz marketing: Untangling a web of deceit. American Business Law Journal, 47(3), 415-454. doi:10.1111/j.1744-1714.2010.01100.x
Stone, A. R. (1995). The war of desire and technology at the close of the mechanical age. Cambridge, MA: MIT Press.
Sundar, S. S. (2007). The MAIN model: A heuristic approach to understanding technology effects on credibility. In M. J. Metzger & A. J. Flanagin (Eds.), Digital media, youth, and credibility (pp. 73-100). Cambridge, MA: The MIT Press. doi:10.1162/dmal.9780262562324.073
Sundar, S. S. (2008). Self as source: Agency and customization in interactive media. In E. A. Konijn, S. Utz, M. Tanis, & S. B. Barnes (Eds.), Mediated interpersonal communication (pp. 58-74). New York: Routledge.
Sundar, S. S., & Nass, C. (2001). Conceptualizing sources in online news. Journal of Communication, 51(1), 52-72.
Tseng, S., & Fogg, B. J. (1999). Credibility and computing technology. Communications of the ACM, 42(5), 39-44.
Utz, S. (2010). Show me your friends and I will tell you what type of person you are: How one's profile, number of friends, and type of friends influence impression formation on social network sites. Journal of Computer-Mediated Communication, 15(2), 314-335. doi:10.1111/j.1083-6101.2010.01522.x
Walther, J. B. (1996). Computer-mediated communication: Impersonal, interpersonal, and hyperpersonal interaction. Communication Research, 23(1), 3-43. doi:10.1177/009365096023001001
Walther, J. B., & Parks, M. R. (2002). Cues filtered out, cues filtered in: Computer-mediated communication and relationships. In Handbook of interpersonal communication (3rd ed., pp. 529-563).
Walther, J. B., Van Der Heide, B., Hamel, L. M., & Shulman, H. C. (2009). Self-generated versus other-generated statements and impressions in computer-mediated communication: A test of warranting theory using Facebook. Communication Research, 36(2), 229-253. doi:10.1177/0093650208330251
Walther, J. B., Van Der Heide, B., Kim, S.-Y., Westerman, D., & Tong, S. T. (2008). The role of friends' appearance and behavior on evaluations of individuals on Facebook: Are we known by the company we keep? Human Communication Research, 34(1), 28-49. doi:10.1111/j.1468-2958.2007.00312.x
Wheaton, B., Muthen, B., Alwin, D. R., & Summers, A. F. (1977). Assessing reliability and stability in panel models. In D. R. Heise (Ed.), Sociological methodology 1977 (pp. 84-136).
Your user agreement. (2010). Retrieved March 19, 2011, from http://pages.ebay.com/help/policies/user-agreement.html

Appendix A: Example Stimulus from Chapter Three

Appendix B: Example Stimulus from Chapter Four

Appendix C: Research Instruments

This appendix contains the scales that were used in Chapter Three and Chapter Four, with the exception of the WVMs. In the following items, the "Cannonbridge Inn" is the name of a fictional hotel that was reviewed by the online community in Chapter Three.

Credibility (McCroskey & Teven, 1999)
Items represent the anchors on a 7-point semantic differential scale.

Competence factor.
1. Intelligent / Unintelligent
2. Untrained / Trained
3. Inexpert / Expert
4. Informed / Uninformed
5. Incompetent / Competent
6. Bright / Stupid

Goodwill factor.
1. Cares about me / Doesn't care about me
2. Has my interests at heart / Doesn't have my interests at heart
3. Self-centered / Not self-centered
4. Concerned with me / Unconcerned with me
5. Insensitive / Sensitive
6. Not understanding / Understanding

Trustworthiness factor.
1. Honest / Dishonest
2. Untrustworthy / Trustworthy
3. Honorable / Dishonorable
4. Moral / Immoral
5. Unethical / Ethical
6. Phoney / Genuine

Willingness to Risk (adapted from Mayer & Davis, 1999)
Anchors were Strongly Disagree and Strongly Agree.
1. If I had my way, I wouldn't let <target individual> have any influence over issues that are important to me.
2. I would keep an eye on <target individual>.
3. I would give <target individual> a task or problem that was critical to me, even if I could not monitor his actions.
4. I trust <target individual>.

Purchase Intent
Anchors were Strongly Disagree and Strongly Agree.
1. I would definitely book a room at the Cannonbridge Inn for a trip to Charleston, South Carolina.
2. The Cannonbridge Inn offers a good value.
3. The Cannonbridge Inn offers high quality rooms.

Brand Trust (Delgado-Ballester et al., 2003)
Anchors were Strongly Disagree and Strongly Agree.
1. The Cannonbridge Inn is a brand that meets my expectations.
2. I feel confidence in the Cannonbridge Inn brand.
3. The Cannonbridge Inn is a brand name that never disappoints me.
4. The Cannonbridge Inn brand name guarantees satisfaction.
5. The Cannonbridge Inn would be honest and sincere in addressing my concerns.
6. I could rely on the Cannonbridge Inn to solve any problem.
7. The Cannonbridge Inn would make any effort to satisfy me.
8. The Cannonbridge Inn would compensate me in some way for a problem with the room.