User generated content and credibility evaluation of online health information: A meta analytic study




Telematics and Informatics xxx (2016) xxx–xxx


Telematics and Informatics journal homepage: www.elsevier.com/locate/tele

Tao (Jennifer) Ma a, David Atkin b

a Department of Mass Communication, Winona State University, P.O. Box 5838, 175 West Mark Street, Winona, MN 55987, United States
b Department of Communication, University of Connecticut, Storrs, CT 06269-1259, United States

Article info

Article history: Received 6 June 2016; Received in revised form 9 September 2016; Accepted 9 September 2016; Available online xxxx

Keywords: User-generated content; Online health information; Source credibility; Perceived information credibility

Abstract

The present study provides a meta-analysis of perceived credibility concerns for user-generated online health information. Past work yields inconsistent findings regarding whether highly credible versus less credible sources relate to the perceived credibility of online health information. A collection of empirical studies was synthesized to reach an explanation of the conflicting findings. Analysis of 22 effect sizes with 1346 participants indicated that source credibility had no significant overall influence on perceived information credibility (r = 0.03, n.s.). However, the variance across the studies suggests that the platform where the information was posted might be a contingent factor. Specifically, when user-generated health information was posted on a common website, highly credible sources were significantly related to high perceived information credibility.

© 2016 Elsevier Ltd. All rights reserved.

1. Introduction

The rapid diffusion of the Internet has significantly changed the health communication landscape (e.g., Chou et al., 2013). Patients have traditionally visited their doctors for health information and advice (Biggs et al., 2013). With healthcare predicted to subsume 20% of America's GDP by 2020, the Internet is emerging as a leading source for health information (e.g., Dobransky and Hargittai, 2012; Lin and Associates, 2015). Millions of consumers go online daily for health information (Freeman and Spyridakis, 2004; Ruppel and Rains, 2012; Sillence et al., 2006) about various health issues (Fox and Duggan, 2013; Pew, 2012). People may search for online health information for different reasons, such as to be better informed, to talk to a doctor, to seek support, or to find reassurance (Sillence et al., 2006; Walther and Boyd, 2002).

However, concerns persist about the reliability of online health information (e.g., Lagoe and Atkin, 2015). Some researchers challenge the notion that the Internet is a reliable source for health information (e.g., Eysenbach, 2008), because users may not know where to find trusted health information online. People may use search engines to browse for health-related information rather than go directly to trusted health portals (e.g., WebMD), but search engines may lead them to any health-related website (Metzger et al., 2010). A longitudinal study found an increase in unregulated individual sites containing health-related information, with patients using this information independently to guide their understanding of personal health concerns (Sillence et al., 2007). Other research reveals that the quality of online health-related information varies across websites, suggesting that health-oriented online information can be harmful if inaccurate (e.g., Adams, 2010b).
A key factor used by consumers to judge the quality of online health information is source credibility (Bates et al., 2006; Lin et al., 2015), defined in the persuasion literature as ‘‘judgments made by a perceiver concerning the

E-mail addresses: [email protected] (T.J. Ma), [email protected] (D. Atkin) http://dx.doi.org/10.1016/j.tele.2016.09.009 0736-5853/Ó 2016 Elsevier Ltd. All rights reserved.

Please cite this article in press as: Ma, T.J., Atkin, D. User generated content and credibility evaluation of online health information: A meta analytic study. Telemat. Informat. (2016), http://dx.doi.org/10.1016/j.tele.2016.09.009



believability of the communicator” (O'Keefe, 2002, p. 181). Individuals working in isolation may assess information credibility systematically, although they routinely apply shortcuts, using cognitive heuristics (e.g., reputation, endorsement) to evaluate the credibility of online information (Metzger et al., 2010). With the development of Web 2.0 technology, the credibility concerns governing online health information are renewed by the prevalence of end-user-generated content (Adams, 2010b). Such health-related information can be defined as that which appears in verbal (e.g., web blog) and nonverbal (e.g., video) forms, created either individually (e.g., personal comments) or collectively (e.g., review ratings). This end-user-generated content is not peer-reviewed and carries no source citations, so it is often difficult to verify the sources or the credibility of the content. Unreliable health information (e.g., anti-vaccine or crash-diet information) can be presented (Kata, 2012) and shared (Villiard and Moreno, 2012) by end users. Many scholars worry that end-user-generated content may lead to potentially misleading or frankly dangerous advice (Biggs et al., 2013). Indeed, some research found that user-generated content encouraged participation and therefore could be applied to attract more people to online health communication, but the ability of a lay person to assess the credibility of health information provided by other end users was a concern (Chou et al., 2013). Researchers disagree about whether user-generated content introduces more opportunities or challenges as a health information source (e.g., Freeman and Chapman, 2008).

2. Literature review

So far there is neither a systematic review nor meta-analytic evidence focusing on user-generated content in the context of online health information.
But a cursory review of individual studies points to some variation in findings regarding perceived online health information credibility. For example, some studies suggest that the quality of end-user-generated online health information is unreliable and not useful (Biggs et al., 2013), whereas others identify such content as high in perceived quality (Carroll et al., 2013) and thus patronized more by lay persons (Lo et al., 2010). In terms of the processing of user-generated online health information, some studies found that source attributes (e.g., homophily) drove the evaluation of user-generated online information (Wang et al., 2008), whereas other studies found that source attributes (e.g., expertise and homophily) did not have a significant effect on consumers' evaluation of online health information quality (Bates et al., 2006).

Several key questions remain. What factors influence people's evaluation of user-generated online health information? Do users find health information generated by lay persons to be more reliable than that attributed to experts? How does Internet technology influence people's utilization of user-generated content across health-oriented topics? As more researchers seek to incorporate user-generated content into the strategy of health promotions and health campaigns (Chou et al., 2013; Lehto and Oinas-Kukkonen, 2011), research addressing these questions is likely to identify implications for theories about Internet-based communication in general and health marketing in particular. To investigate these questions and to identify an explanation for the inconsistent empirical findings, this paper reviews the existing empirical literature on user-generated online health information, presenting a meta-analysis of the studies addressing the impact of source credibility on the perceived credibility of user-generated online health information.
The present meta-analysis aims to synthesize the results across studies addressing credibility concerns for user-generated online health information and to help determine meaningful boundary conditions of the relationship.

2.1. User-generated content and Web 2.0

Unlike Web 1.0 – which is defined by one-way communication from the creator of the website (e.g., static health portals) to the user – Web 2.0 enables two-way and multi-way communication and is defined by both interaction and user-generated content (Betsch et al., 2012). The user-generated content allowed by Web 2.0 therefore includes user-created and -uploaded new content, comments on existing content, and content shared with other users (Betsch et al., 2012; Kata, 2012). User-generated content is usually presented through Web 2.0 tools such as personal websites, discussion boards, web blogs, podcasts, and social media networks (such as YouTube, Facebook, Twitter, and Wikipedia).1 End users may explicitly or implicitly convey health information through these tools. For example, users may share links on various topics, including health-related topics or practices, or set up a blog explicitly for a particular health issue (Adams, 2010a). Because of the open nature of Web 2.0 applications, both end users and institutions can create and disseminate content about particular health issues. Research usually treats these modalities as different original sources of online health information. End users can include experts, lay persons, institutions, public health agents, industry corporations, and activists (Betsch et al., 2012). When people go online for health information, both source factors and media factors may influence their perception of online health information. Some tools may provide informational resources for users to evaluate the credibility of this content, such as tags or tag clouds (O'Grady et al., 2012).
1 Each of these platforms provides different applications for the end user, and therefore user-generated content may vary across the tools (Tenhaven et al., 2013). For instance, discussion boards provide a virtual space to archive user comments around specific topics, and the discussion is usually not in real time. A weblog, or blog, is a virtual place for end users to post a diary to the public. Social media are web services that allow users to log on, connect with others, create and share content with those who have similar interests, and comment on others' online content. A wiki is an application that allows collective work on web page content through unrestricted editing by individuals from their browsers.

Most of the tools do




not offer any evaluation services, so information users have to rely on different mechanisms to evaluate the credibility of the sources and messages as well as the quality of the application. A similar concept, eWOM, has been proposed by researchers in the economic behavior domain (Cheung and Thadani, 2012). eWOM is described as an evolving version of word of mouth (WOM) and refers to any statement made by potential, actual, or former customers about a product or a company via the Internet. A recent review provided a comprehensive list of factors studied in connection with eWOM (Cheung, 2010). User-generated content in the online health domain is unique because the level of risk perception associated with health issues is usually quite high, and the sources involved in the credibility evaluation are more complex (Bernstam et al., 2005; Trumbo and McComas, 2003). Therefore, the evaluation of user-generated online health information may be more involved and in-depth than that for a typical commercial product.

3. Theoretical framework

3.1. Applying persuasion research to online credibility

A commonly researched source in persuasion research is the message communicator. Work on source factors has focused on communicator credibility, which has two dimensions: expertise and trustworthiness (O'Keefe, 2002; Petty and Brinol, 2010). Relatively high versus low source credibility can influence persuasion, depending on the degree of issue relevance to receivers and their identification with the communicator. Under high personal relevance, source credibility may not influence many persuasion outcomes. Immediate identification of a credible source accounts for a difference in source credibility effects on persuasion. A low-credibility source will be more influential than a high-credibility source in communicating messages toward which respondents hold a favorable predisposition (e.g., counseling against an annual X-ray) (e.g., Dean et al., 1971).
Other factors, such as source liking, can influence the perceived credibility and impact of a message. O'Keefe's (2002) work, for instance, supports the general principle that likable sources are more persuasive than disliked sources. In particular, source liking may interact with source credibility: a highly credible (but disliked) source is more effective in persuasion than a less credible (but liked) source. The effect of liking also weakens when the topic is relevant to the receiver. Furthermore, source similarity shows an inconsistent effect on persuasion. Attitudinal similarity may indirectly influence persuasion through liking, but liking may either positively or negatively affect persuasion; similarity does not necessarily enhance perceptions of credibility. Moreover, physically attractive sources may influence persuasion indirectly through perceptions of credibility (Petty and Cacioppo, 2001).

In the context of online health communication, sources have been defined differently, owing to the complications created by web technology. Hu and Sundar (2010), for instance, offer a categorization that includes visible sources (those that are seen as delivering information), technological sources (media that are psychologically perceived as the source), and receiver sources (e.g., a user who posts on Facebook). With user-generated content, both experts and lay persons can be visible content sources, while Web 2.0 tools or platforms can be technological sources. This study is designed to test the persuasive effect of the original content source and the technological platform source for user-generated content. A stream of research addresses factors that influence the credibility of online health information supported by Web 1.0 technology, which is represented by one-way communication tools such as websites (Bernstam et al., 2005; Freeman and Spyridakis, 2004; Moturu et al., 2008; Neumark et al., 2012; Wathen and Burkell, 2002).
In particular, research suggests that the quality of different websites varies across criteria – like source, message, medium, and receiver – which can make searching for quality online health information a problem (D'Auria, 2010). In the contexts of mass communication and interpersonal communication, credibility is influenced by factors like source expertise, trustworthiness, attractiveness, likability, and similarity; receivers evaluate these source cues mainly through heuristic processing, which is a function of issue relevance, motivation, prior knowledge, and affect (Chen and Chaiken, 1999; Petty et al., 2004). In the Internet or electronic media context, because of the multiplicity of sources, scholars acknowledge the greater responsibility of end users for quality assurance and quality checks (Wathen and Burkell, 2002). The challenge with online health information resides largely in the potential for distribution of inaccurate medical information from unqualified sources (e.g., Lagoe and Atkin, 2015). Many scholars propose web quality assessment criteria to examine trust factors associated with websites that offer health information (D'Auria, 2010; Joshi et al., 2011; Sillence et al., 2006). Among the factors proposed are authority (e.g., author qualification, sponsor, advisory board committee), design (e.g., ease of navigation, internal search capability, interactivity) and content (e.g., accuracy, completeness, currency, cross-referencing, evidence base). Preliminary work in this area involved focus group studies, which found that design appeal predicted rejection or mistrust of web-based health advice, whereas reliable information and personalized content predicted selection and trust of online health advice (Sillence et al., 2006).
3.2. Credibility evaluations of user-generated online health information

Since user-generated content offers the promise of personalized online health information, challenges to credibility assessment have been acknowledged by some scholars (Moturu et al., 2008). Some challenges stem from the complexity and volume of information, which are due to the dynamic nature of user-generated content (e.g., constant evolution



of online reviews and wiki content). The trustworthiness of the sources and the quality of the information have to be evaluated by information receivers in a piece-by-piece manner (Moturu et al., 2008). Some researchers suggest that technology platforms (e.g., websites, blogs, and discussion boards) may influence perceived credibility because the platforms play the role of gatekeepers for the quality of online information (Sundar, 2008).

Thus far, the literature has yet to reveal a consistent pattern of relationships between source credibility and the perceived credibility of user-generated online health information. Some evidence suggests that the framing of sources as experts or lay persons makes no difference to perceived message credibility (Bates et al., 2006; Hu and Sundar, 2010). By contrast, other evidence demonstrates that low-credibility sources generate higher perceived credibility and content access than high-credibility sources (Lo et al., 2010; Wang et al., 2008), findings contrary to those of still other studies (Winter et al., 2010). It is not yet clear whether the technology platforms are more influential with lay persons as the content creator than with experts as the content creator. Through a meta-analytic approach, this study aims to examine the effect of content creators and technological platforms on the perceived credibility of user-generated online health information. More formally, in order to address the countervailing findings uncovered in past work, we pose the following research question:

Research Question 1: What is the relationship between source credibility and perceived credibility of user-generated online health information?

Thus far, research has yet to uncover evidence that the perceived credibility of information is contingent on the platform (e.g., websites v. blogs). However, the nature of health messages will influence the two-way interaction between source and platform (Hu and Sundar, 2010).
Since the dearth of past work provides little basis for conceptualization in this area, we inquire about main as well as moderating influences involving platform type and the perceived credibility of online health information; more formally:

Research Question 2: How does platform type (e.g., blog v. bulletin board) influence the perceived credibility of user-generated online health information?

In addition, since past work (e.g., Eastin, 2006) suggests that study findings can be influenced by study design (e.g., survey v. experimental approaches), we pose the following research question:

Research Question 3: How does study design influence the perceived credibility of user-generated online health information?

4. Methods

4.1. Article selection

The present study is based on a meta-analysis of the scholarly literature addressing online health content. Published manuscripts were retrieved through (1) electronic reference databases such as PubMed, PsycINFO, and Google Scholar and (2) the reference sections of relevant reviews and published articles. Studies were included if they (a) examined online health information as a domain, (b) assessed end-user-generated content on a health issue, (c) compared lay persons and experts as content creators, and/or (d) applied quantitative research methods to investigate perceived information credibility. Studies that fulfilled these criteria, available as of mid-2014, were included. Key words used in searches of the electronic reference databases were ‘‘user generated content” and ‘‘online health information.” The original searches yielded over two hundred articles. After applying the collection criteria, fifteen quantitative studies were selected (see Table 1). These studies reflect research from the fields of public health, psychology, and communication, utilizing empirical research on user-generated content for various health issues. (Note that only quantitative studies could be considered as candidates for inclusion.)
For example, the health issues include lung cancer (Bates et al., 2006), children's and teenagers' health (Winter et al., 2010), cigarette and hookah use (Carroll et al., 2013), epilepsy (Lo et al., 2010), seizures (Wong et al., 2013), rhinosinusitis (Biggs et al., 2013), diabetes (O'Grady et al., 2012), organ donation (Tian, 2010), the HPV vaccine (Briones et al., 2012), HIV (Eastin, 2006), milk and sunscreen (Hu and Sundar, 2010), and health care (Kadry et al., 2011). However, as is often the case in meta-analytic work involving the media, only a limited number of studies (four) employed consistent measures that could render the statistics required for effect size estimation, i.e., the mean credibility scores for different content sources; this represents a critical mass of work that can facilitate analysis in specialized media use domains (e.g., Abelman et al., 2007). Using the final screening criteria, the data set included tests sampling 1347 participants across our criterion studies, which were published in four academic journals between 2004 and 2010 (Bates et al., 2006; Eastin, 2006; Hu and Sundar, 2010; Winter et al., 2010).

4.2. Study coding

Eligible studies were coded on the following dimensions: year, platform (website, blog, bulletin board, and personal homepage) (Hu and Sundar, 2010; Sundar and Nass, 2001), study design (experiment and survey), and sample size. The goal of this study is to compare mean-difference effect sizes across platforms rather than simply combine them. Some studies report more than one technological platform; these conditions were all based on the same user-generated content and can therefore be considered platform subgroups. According to Borenstein et al. (2009), each subgroup should be treated as a separate study in the coding process. Such analyses have been termed complex data structures in meta-analysis research (e.g., Borenstein et al., 2009).
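The synthesis procedure described in this section — converting each study's group comparison into a correlation effect size and pooling the effect sizes under a random-effects model — can be sketched in Python. This is a generic implementation of the standard formulas in Borenstein et al. (2009), not the authors' actual analysis code, and the numeric inputs in the usage example are hypothetical placeholders rather than the study's data.

```python
import math

def d_to_r(d, n1, n2):
    # Convert a standardized mean difference (Cohen's d, computed from the
    # mean credibility scores of two source conditions) into a correlation
    # effect size r, using the conversion factor a (Borenstein et al., 2009).
    a = (n1 + n2) ** 2 / (n1 * n2)
    return d / math.sqrt(d ** 2 + a)

def fisher_z(r):
    # Fisher's r-to-z transformation; requires |r| < 1.
    return 0.5 * math.log((1 + r) / (1 - r))

def z_to_r(z):
    # Inverse transformation (tanh is the exact inverse of atanh).
    return math.tanh(z)

def random_effects(effects):
    # effects: list of (r, n) pairs, one per study or platform subgroup.
    # Pools via Fisher z with DerSimonian-Laird between-study variance.
    # Returns (pooled r, Cochran's Q, tau^2, standard error of pooled z).
    zs = [fisher_z(r) for r, _n in effects]
    vs = [1.0 / (n - 3) for _r, n in effects]  # variance of Fisher z; n > 3
    w = [1.0 / v for v in vs]
    z_fixed = sum(wi * zi for wi, zi in zip(w, zs)) / sum(w)
    # Cochran's Q quantifies heterogeneity across studies, which is what
    # motivates the subgroup (platform) comparison described in the text.
    Q = sum(wi * (zi - z_fixed) ** 2 for wi, zi in zip(w, zs))
    df = len(effects) - 1
    C = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (Q - df) / C)           # DerSimonian-Laird estimate
    w_star = [1.0 / (v + tau2) for v in vs]  # random-effects weights
    z_re = sum(wi * zi for wi, zi in zip(w_star, zs)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return z_to_r(z_re), Q, tau2, se

if __name__ == "__main__":
    # Hypothetical (r, N) pairs for illustration only -- NOT the study's data.
    effects = [(0.10, 125), (-0.05, 519), (0.08, 553), (0.15, 60)]
    r_pooled, Q, tau2, se = random_effects(effects)
    print(f"pooled r = {r_pooled:.3f}, Q = {Q:.2f}, tau^2 = {tau2:.4f}")
```

A nonsignificant pooled r alongside a large Q is the pattern the abstract describes: no overall source-credibility effect, but heterogeneity that invites a moderator (platform) analysis, with each platform subgroup entered as a separate study.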

Author year

Design

Health issues

Bender (2011)

Content analysis

Biggs et al. (2013)

Briones et al. (2012)

Theory

UGC Tools

Concept

Measure

Analysis techniques

Sample

Sample size

Breast cancer support groups

Support group Facebook

Age and location of support group creators

Number of Wall post discussion post, number of size of the group members

Chi-square

Facebook

620

Content analysis

Rhinosinusitis

YouTube video

Quality of video

Uploaded source, ability to inform a lay person: from useful to completed misleading

Chi-square

YouTube video

100

Content analysis

HPV vaccine

YouTube video viewers response

Quality of video

Video sources news source, consumer, hospital, advocacy groups, non-profit, for-profit, gov agency, etc, tone, viewer responses; HBM factors

ANOVA

YouTube video

172

Health believe model

Demographics

Key finds Facebook groups have become a popular tool for the purpose of awareness-raising, fundraising, and support-seeking. Number of Wall posts is grater especially for support-seeking than number of discussion posts. Almost all creators provided personal information. None of the support group creators appeared to be health care professionals or associated with a health care organization YouTube appears to be an unreliable resource for accurate and up to date medical information related to rhinosinusitis. Useful videos were liked less than missing leading videos No difference of sources on number of likes, number of views, tones; no difference of sources on HBM factors. Majority video were negative, negative videos were liked more. Mixed information related to factors indicated by HBM model

T.J. Ma, D. Atkin / Telematics and Informatics xxx (2016) xxx–xxx

(continued on next page) 5

Please cite this article in press as: Ma, T.J., Atkin, D. User generated content and credibility evaluation of online health information: A meta analytic study. Telemat. Informat. (2016), http://dx.doi.org/10.1016/j.tele.2016.09.009

Table 1 Review summary of studies on user-generated online health information.

6

Author year

Design

Health issues

Carroll et al. (2013)

Content analysis

Kadry et al. (2011)

Theory

UGC Tools

Concept

Measure

Analysis techniques

Sample

Sample size

Cigarette and Hookah

YouTube video

Quality of video, positive or negative associated with smoking

Positive vs negative portrayal of smoking, harmfulness claim, smoking associations. Intercoder reliability provided

Chi-square

YouTube videos: 66cigarette, 61hookah

127

Content analysis

Health care

Doctor rating

Quality of website: availability of board certification.

Patient rating, website quality rating

Correlation

Doctor rating

4999

Lo et al. (2010)

Content analysis

Epilepsy and seizure

YouTube video comments

Amateur vs Professional postings

Coded emotion derogatory, neutral, sympathetic and information seeking, providing in the comments toward the health issue

Chi-square

YouTube video

10

Tian (2010)

Content analysis

Organ donation

YouTube content and comments

Non-profit and student YouTube posting

Media frame overall tone secondary frame towards people

ANOVA

YouTube video

355

Demographics

Key finds Cigarette videos have high quality, are more likely to acknowledge harmful consequences and contain antismoking messages than hookah videos. Misconception of hookah portrayal and consequences by YouTube sources. Video quality does not influence liking Not all of the top 10 websites have board-certification. Various rating scales with multiple items. A single overall rating to evaluate physicians may be sufficient to assess a patient’s opinion of the physician Amateur-posted videos were accessed more than professional posted video. Fewer info seeking than info providing, info providing has incorrect info; seen as more sympathetic, albeit some were derogatory. There is persistent stigma and misinformation among YouTube users Overall comments are positive, not many people indicated behavior intention. No comment differences of the sources.

T.J. Ma, D. Atkin / Telematics and Informatics xxx (2016) xxx–xxx

Please cite this article in press as: Ma, T.J., Atkin, D. User generated content and credibility evaluation of online health information: A meta analytic study. Telemat. Informat. (2016), http://dx.doi.org/10.1016/j.tele.2016.09.009

Table 1 (continued)

Author year

Design

Health issues

Wong et al. (2013)

Content analysis

Epilepsy and seizure

Wang et al. (2008)

On-line experiment

Nausea from chemotherapy

Winter et al. (2010)⁄

On-line experiment

Eastin (2006)⁄

Bates et al. (2006)⁄

Theory

UGC Tools

Concept

Measure

Analysis techniques

Sample

Sample size

Demographics

YouTube content

Accuracy of information

Accuracy of information, overall attitude

YouTube video

100

Homophily and credibility as underlining persuasion factors

Online support group

Credibility: communicator’s legitimacy and authority. Homophily: those who have similar health problems or experience

Manipulation: a trusted website cancer.gov versus a discussion board posting. Credibility of online health information 16 items, bipolar, a = 0.93, homophily 8 items bipolar = 0.87, BIlikelihood to act on advice

Two tailed paired t-test and chisquare test Causal modeling

Adults

97

60% females. Age M = 50.59

Children and adolescents health

Dual process model, selective exposure, social comparison

Web blog

Blog author’s credibility: trustworthy, experienced, altruistic

Manipulations of self-reported expertise, community rating, age of authors. 16 combinations. DV = information selection, rating of information, perceived source credibility

ANOVA

Parents, GE

60

50% female. Age M = 36.93 sd = 6.54

On-line experiment

HIV

Persuasion

Personal webpage

Source credibility: expertise x subject matter knowledge

(Table 1, continued: measures, analysis, sample, and key-findings cells for rows whose author columns appear on the preceding page)

Measures: Manipulation: 2 (information) × 3 (source). IV: manipulated source credibility (HIV expert, AIDS victim, high school freshman). DV: perceived credibility of the message (accuracy, believability, factualness; a = 0.89). Analysis: ANOVA. Sample: 125 students (44% female, age M = 20).

Design: Online experiment. Health issue: Lung cancer. Platform: Generic webpage, credibility manipulated. Measures: Manipulation: source (trusted website vs. generic webpage); perceived trustworthiness, truthfulness, readability, and completeness of the information. Analysis: Independent t-test. Sample: 519 people in a high-traffic city area (54.7% female, age M = 28.25).

Key findings (continued): YouTube content was more sympathetic and accurate than other mass media. Homophily grounded credibility perceptions of the information; homophily led to both credibility and utility assessments of the message when the source was credible, and when the source was an end user there was a path from homophily to credibility to utility to likelihood to act. Self-reported expertise had a strong influence on the perception of credibility and on selected reading; others' ratings were influential only for authors with low self-reported expertise, and older authors were perceived as more credible. A higher-credibility source was perceived as more credible; a known topic was perceived as more credible; source credibility made a difference for an unknown topic; there was no interaction between source and content type on perceived message credibility. There was no difference between high- and low-credibility sources on consumers' perception of the quality of online health information.

T.J. Ma, D. Atkin / Telematics and Informatics xxx (2016) xxx–xxx

Please cite this article in press as: Ma, T.J., Atkin, D. User generated content and credibility evaluation of online health information: A meta analytic study. Telemat. Informat. (2016), http://dx.doi.org/10.1016/j.tele.2016.09.009

Table 1 (continued)

Hu and Sundar (2010)*. Design: Online experiment. Health issues: Milk/sunscreen. Theory: Typology of sources for the Internet. UGC tools: Website, home page, bulletin board, blogs. Concepts: Source: expertise and homophily; message: believable, accurate. Measures: Manipulation: 2 (message) × 2 (original source) × 5 (selecting source); perceived message credibility (2 items, accuracy and believability, 10-point Likert-type scale); perceived source expertise and homophily; perceived information completeness; behavioral intention (BI); perceived gatekeeping. Analysis: ANCOVA. Sample: 553 undergraduate students (70% female, age M = 20.8, SD = 1.72). Key findings: No source expertise or homophily effects on perceived message credibility and BI; no two-way interaction of the two types of sources; a three-way interaction of message (milk), original source, and selecting source, such that the effect of selecting source and original source on credibility depended on the health issue.

O'Grady et al. (2012). Design: Survey. Health issue: Diabetes. Theory: Credibility assessment for persuasiveness of online messages; six structured credibility terms. UGC tools: Discussion forums, user-generated tags. Concept: Credibility is represented by the structured tags. Measure: Ability to use the structured credibility terms. Analysis: Descriptive. Sample: 22 diabetes patients, CA (50% female, mean age around 50). Key findings: Most participants used both structured and their own tags, but most people searched for information using an unstructured tag; credibility was not a strong factor in using tags or tag clouds; credibility assessments were determined by corroboration, personal relevance, and context.

Thackeray et al. (2013). Design: Survey. Health issues: Health activities; health care providers. Theory: Ranking through social network sites. Data: 2010 Health Tracking Survey. Concept: Source credibility: online rankings or reviews. Measures: Demographics; uses of online rankings or reviews for health (3 items, a = 0.69). Analysis: Correlation. Sample: 1745 adults (56.16% female). Key findings: People are using social media to seek health information; people who are younger, female, living in an urban location, and having a chronic disease are more likely to use online rankings and reviews via SNS.


Applying this approach in the current study, each platform subgroup was treated separately when calculating effect sizes. The inter-coder reliability test reflected coding done by two coders, including the first author. Krippendorff's alpha (KALPHA; Hayes and Krippendorff, 2007; Hallgren, 2012) was 0.84, indicating high inter-coder reliability (Hallgren, 2012). This complex data structure yielded 22 effect sizes, which are treated as 22 studies in the meta-analysis (Borenstein et al., 2009, p. 218). Of the 22 effect sizes, two were associated with websites as the technical platform for the user-generated content, three with blogs, two with bulletin boards, and 15 with homepages. Twelve of the 15 homepage effect sizes came from a survey that used three pairs of source comparisons across four different credibility measures (Bates et al., 2006). About half of the effect sizes were from studies with a factorial experimental design; the other half involved a survey design.

4.3. Meta-analysis procedures

The data were analyzed with Borenstein and colleagues' (2009) meta-analytic procedures. Effect size estimates were calculated from the information provided in the reports, including the means, standard deviations, and sample sizes of the high- versus low-credibility source groups on perceived content credibility. Effect sizes (r) were converted from d statistics, calculated as the mean difference between the high- and low-credibility source groups divided by the pooled standard deviation (Cohen, 1988). Thus, a positive effect size means that the online health information provided by high-credibility sources was perceived as more credible, whereas a negative value means that the information from low-credibility sources was perceived as more credible.
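The d-to-r conversion just described can be sketched as follows. This is an illustrative Python sketch rather than the authors' code, and the cell statistics in the usage lines are invented for demonstration only.

```python
import math

def cohens_d(mean_hi, mean_lo, sd_hi, sd_lo, n_hi, n_lo):
    """Standardized mean difference between the high- and low-credibility
    source groups, divided by the pooled standard deviation (Cohen, 1988)."""
    pooled_sd = math.sqrt(((n_hi - 1) * sd_hi ** 2 + (n_lo - 1) * sd_lo ** 2)
                          / (n_hi + n_lo - 2))
    return (mean_hi - mean_lo) / pooled_sd

def d_to_r(d, n_hi, n_lo):
    """Convert d to the correlation effect size r; the correction term `a`
    accounts for unequal group sizes and reduces to 4 when groups are equal."""
    a = (n_hi + n_lo) ** 2 / (n_hi * n_lo)
    return d / math.sqrt(d ** 2 + a)

# Invented cell statistics for one hypothetical study:
d = cohens_d(5.4, 5.0, 1.2, 1.3, 60, 60)
r = d_to_r(d, 60, 60)  # positive r: the high-credibility source was rated higher
```

A positive r here corresponds to the coding rule in the text: information from the high-credibility source was perceived as more credible.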
To standardize the effect sizes, each r was subjected to a Fisher's Z transformation, 95% confidence intervals were calculated, and individual effect sizes were weighted by the inverse of their random-effects variance (Lipsey and Wilson, 2001). Random-effects assumptions were adopted because the factors that could influence the effect sizes (e.g., the credibility criteria used for sources and online information) differed across the studies. In other words, the studies had enough in common, but there was generally no reason to assume that they were identical. A funnel plot was used to examine whether there was publication bias in the data. Funnel plots are particularly useful for detecting potential bias due to the under-representation of studies with small participant samples (Lipsey and Wilson, 2001). If a collection of effect sizes is unbiased and drawn from a single population, there should be greater variability among effect sizes based on small samples than among those based on large samples, and the scatter plot should therefore take the shape of a funnel. Subjective assessment of the plot (shown in Fig. 1) supported the presence of a funnel shape, tending toward symmetry, with a non-zero true effect size; it featured a large, empty section where estimates from studies with small sample sizes and small effect sizes would otherwise be located. Following the guidelines for subjective estimation of funnel plots (Albarracín et al., 2005), the examination suggested that the sample of studies exhibited a small but acceptable degree of bias. The homogeneity statistic, Q, was computed to determine whether each set of mean differences shared a common effect size (Borenstein et al., 2009). To further assess heterogeneity, the I2 index was calculated to quantify inconsistency in the set of effect sizes (Borenstein et al., 2009; Higgins and Thompson, 2002). The homogeneity test was conducted in SPSS 20 with the macro provided by Wilson (2002); the heterogeneity test was performed with STATA 10.
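The pooling, homogeneity, and heterogeneity computations described above can be sketched in Python. This is a generic illustration, not the Wilson or STATA macros themselves, and it assumes the DerSimonian-Laird estimator for the between-study variance (the text specifies only random-effects weighting).

```python
import math

def fisher_z(r):
    """Fisher's Z transformation of a correlation; its variance is 1/(n-3)."""
    return 0.5 * math.log((1 + r) / (1 - r))

def z_to_r(z):
    """Inverse of Fisher's transformation."""
    return math.tanh(z)

def random_effects_pool(rs, ns):
    """Pool correlations via Fisher's z with DerSimonian-Laird random-effects
    weights; returns (pooled r, 95% CI, Q, I^2), sketching the procedure in
    Borenstein et al. (2009) and Lipsey and Wilson (2001)."""
    zs = [fisher_z(r) for r in rs]
    w_fixed = [n - 3 for n in ns]            # inverse variance weights
    z_fixed = sum(w * z for w, z in zip(w_fixed, zs)) / sum(w_fixed)
    q = sum(w * (z - z_fixed) ** 2 for w, z in zip(w_fixed, zs))
    df = len(rs) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    c = sum(w_fixed) - sum(w ** 2 for w in w_fixed) / sum(w_fixed)
    tau2 = max(0.0, (q - df) / c)            # between-study variance
    w_re = [1 / (1 / w + tau2) for w in w_fixed]
    z_re = sum(w * z for w, z in zip(w_re, zs)) / sum(w_re)
    se = 1 / math.sqrt(sum(w_re))
    ci = (z_to_r(z_re - 1.96 * se), z_to_r(z_re + 1.96 * se))
    return z_to_r(z_re), ci, q, i2
```

For a perfectly homogeneous set of effect sizes, Q is essentially zero and I2 is 0%; fed the 22 study-level r values and sample sizes, this kind of routine produces the summary statistics reported in the Results section.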

Fig. 1. Funnel plot.


5. Results

5.1. Source credibility influences on perceived online health information

Forest plots were used to present the results of the meta-analysis visually. Per Research Question 1, the forest plot of the 22 effect sizes shows a positive summary effect of source credibility on the credibility evaluation of user-generated online health information. The overall effect is 0.03 (95% CI = -0.04, 0.10), meaning that high-credibility sources were associated with higher perceived credibility of the online health information, but the effect is not significant. If the Fisher's Z is transformed back to r, the value is slightly less than 0.03. The individual effect sizes tend to line up, although those with a small weighting vary more widely than those with a large weighting. The relatively constrained span of criterion articles, reflecting analyses conducted during the first decade of the millennium, precludes a definitive trend analysis of changing effect sizes over time. Still, judging from this relatively restricted range of time points, recently completed studies emerge as dominant contributors to the summary effect. The effect sizes for the meta-analysis are presented in Table 1. The homogeneity test using Wilson's macro MetaES showed that the total variability in effect sizes was slightly greater than would be expected from within-study error alone (Q = 32.78, df = 21, p = 0.049). In addition, the heterogeneity index I2 was 35.9, meaning that 35.9% of the variation in effect sizes was attributable to heterogeneity (Higgins et al., 2003). This suggests that although the positive relationship between source credibility and content credibility was not significant, certain moderating conditions may explain the variation in the results (Higgins et al., 2003) (see Table 2).

5.2. Moderator effects

Given that the homogeneity test showed a significant Q index (Q = 32.78, df = 21, p = 0.049) and the heterogeneity index I2 was moderate (I2 = 35.9%, CI = 0%, 63.83%; Higgins et al., 2003), further subgroup analysis was in order. Moreover, the inconsistencies in prior research suggested that sublevel covariates might influence the relationship between source and online health information credibility. STATA 10 and SPSS 20 (with Wilson's macro MetaF) were used to test efficacy and heterogeneity in the subgroups, including technology platforms and study designs.

Platform. User-generated content research has yet to reach a consensus about which type of source has greater persuasive power for online health information. Thus far, research provides no evidence that the perceived credibility of information attributed to sources is contingent on platform; however, the nature of the health message can influence the two-way interaction between source and platform (Hu and Sundar, 2010). Research Question 2's query about platform influences was assessed with subgroup analysis. Results uncovered significant variance between platform groups (Q = 9.30, df = 3, p = 0.03; I2 = 78.51%) in the information credibility evaluations of high- versus low-credibility sources. Using the MetaF macro, the results showed a significant effect when the platform was a website (mean effect size = 0.45, k = 2, p < 0.01), meaning that a high-credibility source led to higher perceived credibility of information when the platform was a website. High-credibility sources were also favored on homepages, but the effect is not significant (mean es = 0.02, k = 15, p = 0.49). Although the results also suggested that low-credibility sources were favored on blogs and bulletin boards, those effects were not significant (mean es blog = -0.04, k = 3, p = 0.70; mean es bulletin board = -0.03, k = 2, p = 0.82). The results are also demonstrated in the subgroup forest plot analysis (see Fig. 3).

Study design. Per Research Question 3's inquiry about the influence of study design on the perceived credibility of user-generated online health information, analysis with the MetaF macro did not demonstrate a significant effect between groups (Q = 0.01, df = 1, p = 0.92; I2 = 0%). But study results reveal a significant effect within groups (Q = 32.77, df = 20, p = 0.04; I2 = 38.97%). The subgroup forest plots showed that the largest variances were observed in studies using an experimental design (see Figs. 2 and 3). Indeed, studies using experimental designs provided conflicting and interesting
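The moderator tests reported above rest on the standard partition of heterogeneity into between- and within-subgroup components. A minimal fixed-effect sketch of that partition (not the MetaF macro itself; the subgroup labels follow the paper but the study values below are invented) is:

```python
import math

def fisher_z(r):
    return 0.5 * math.log((1 + r) / (1 - r))

def q_statistic(zs, ws):
    """Weighted sum of squared deviations from the weighted mean."""
    zbar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    return sum(w * (z - zbar) ** 2 for w, z in zip(ws, zs))

def moderator_test(groups):
    """Analog-of-ANOVA moderator test (Lipsey and Wilson, 2001):
    Q_total splits into Q_within (summed within subgroups) and
    Q_between = Q_total - Q_within, with df = number of groups - 1.
    `groups` maps a subgroup label to a list of (r, n) pairs."""
    all_z, all_w = [], []
    q_within = 0.0
    for studies in groups.values():
        zs = [fisher_z(r) for r, n in studies]
        ws = [n - 3 for r, n in studies]
        q_within += q_statistic(zs, ws)
        all_z += zs
        all_w += ws
    q_total = q_statistic(all_z, all_w)
    return q_total - q_within, len(groups) - 1   # Q_between and its df

# Hypothetical platform subgroups (values invented for illustration):
q_between, df = moderator_test({
    "website": [(0.40, 30), (0.50, 25)],
    "blog": [(-0.05, 40), (0.05, 45), (0.00, 50)],
})
```

A significant Q_between (evaluated against a chi-square distribution with its df) indicates that the subgroup means differ more than sampling error would predict, which is the logic behind the platform comparison above.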

Table 2
Efficacy of source credibility on perceived credibility of information: main effect and moderator effects (mean difference d converted to r).

                                k    N      Fisher's z   95% CI           Q          I2        Sign. test
High vs. low credible source    22   1346   0.03         (-0.04, 0.10)    32.78*     35.90%    z = 0.84
Platforms (total)                                                         32.79*     35.90%    z = 0.84
  1. Website                    2    55     0.45         (0.17, 0.73)     0.28       0.00%     z = 3.13**
  2. Blog                       3    115    -0.05        (-0.30, 0.19)    3.03       34.00%    z = 0.44
  3. Bulletin board             2    56     -0.03        (-0.57, 0.51)    3.74       73.00%    z = 0.12
  4. Homepage                   15   1120   0.02         (-0.05, 0.09)    16.43      14.80%    z = 0.60
Study design (total)                                                      32.79*     35.90%    z = 0.84
  1. Experiment                 10   302    0.03         (-0.19, 0.26)    32.08***   71.90%    z = 0.29
  0. Survey                     12   1044   0.03         (-0.04, 0.10)    0.70       0.00%     z = 0.92

* p < 0.05. ** p < 0.01. *** p < 0.001.


Study                                    ES (95% CI)             % weight
Hu & Sundar (2010)                       0.37 (-0.03, 0.77)      2.56
Hu & Sundar (2010)                       0.52 (0.13, 0.92)       2.60
Hu & Sundar (2010)                       0.03 (-0.36, 0.42)      2.60
Hu & Sundar (2010)                       -0.34 (-0.74, 0.05)     2.60
Hu & Sundar (2010)                       -0.31 (-0.70, 0.09)     2.60
Hu & Sundar (2010)                       0.24 (-0.15, 0.64)      2.60
Hu & Sundar (2010)                       -0.70 (-1.09, -0.31)    2.60
Hu & Sundar (2010)                       0.35 (-0.05, 0.74)      2.60
Winter et al. (2010)                     0.07 (-0.19, 0.33)      4.80
Eastin (2006)                            0.11 (-0.36, 0.57)      1.99
Bates et al. (2006)                      0.03 (-0.18, 0.25)      6.04
Bates et al. (2006)                      0.05 (-0.16, 0.27)      6.04
Bates et al. (2006)                      0.03 (-0.18, 0.24)      6.04
Bates et al. (2006)                      0.02 (-0.19, 0.23)      6.04
Bates et al. (2006)                      0.00 (-0.21, 0.21)      6.04
Bates et al. (2006)                      0.03 (-0.18, 0.24)      6.04
Bates et al. (2006)                      0.00 (-0.21, 0.22)      6.04
Bates et al. (2006)                      0.02 (-0.19, 0.23)      6.04
Bates et al. (2006)                      0.00 (-0.21, 0.22)      6.04
Bates et al. (2006)                      0.04 (-0.17, 0.26)      6.04
Bates et al. (2006)                      0.10 (-0.11, 0.31)      6.04
Bates et al. (2006)                      0.01 (-0.20, 0.22)      6.04
Overall (I-squared = 35.9%, p = 0.049)   0.03 (-0.04, 0.10)      100.00

NOTE: Weights are from random effects analysis.

Fig. 2. Forest plot analysis of heterogeneity. Note: 1 = Website, 2 = Blog, 3 = Bulletin Board, 4 = Personal Homepage.

evidence. One study found that a higher-credibility source was perceived as more credible, with no interaction between source and content type on perceived message credibility (Eastin, 2006). Another study found no difference between high- and low-credibility sources in terms of consumer perceptions of the quality of online health information (Bates et al., 2006). Interestingly, one study found that self-reported expertise by a content author had a strong influence on reading time, which implies perceived content credibility (Winter et al., 2010).

Subgroup / study                         ES (95% CI)             % weight
1 (Website)
  Hu & Sundar (2010)                     0.37 (-0.03, 0.77)      2.56
  Hu & Sundar (2010)                     0.52 (0.13, 0.92)       2.60
  Subtotal (I-squared = 0.0%, p = 0.600) 0.45 (0.17, 0.73)       5.16
2 (Blog)
  Hu & Sundar (2010)                     0.03 (-0.36, 0.42)      2.60
  Hu & Sundar (2010)                     -0.34 (-0.74, 0.05)     2.60
  Winter et al. (2010)                   0.07 (-0.19, 0.33)      4.80
  Subtotal (I-squared = 34.0%, p = 0.220) -0.05 (-0.30, 0.19)    10.01
3 (Bulletin board)
  Hu & Sundar (2010)                     -0.31 (-0.70, 0.09)     2.60
  Hu & Sundar (2010)                     0.24 (-0.15, 0.64)      2.60
  Subtotal (I-squared = 73.3%, p = 0.053) -0.03 (-0.57, 0.51)    5.20
4 (Personal homepage)
  Hu & Sundar (2010)                     -0.70 (-1.09, -0.31)    2.60
  Hu & Sundar (2010)                     0.35 (-0.05, 0.74)      2.60
  Eastin (2006)                          0.11 (-0.36, 0.57)      1.99
  Bates et al. (2006)                    0.03 (-0.18, 0.25)      6.04
  Bates et al. (2006)                    0.05 (-0.16, 0.27)      6.04
  Bates et al. (2006)                    0.03 (-0.18, 0.24)      6.04
  Bates et al. (2006)                    0.02 (-0.19, 0.23)      6.04
  Bates et al. (2006)                    0.00 (-0.21, 0.21)      6.04
  Bates et al. (2006)                    0.03 (-0.18, 0.24)      6.04
  Bates et al. (2006)                    0.00 (-0.21, 0.22)      6.04
  Bates et al. (2006)                    0.02 (-0.19, 0.23)      6.04
  Bates et al. (2006)                    0.00 (-0.21, 0.22)      6.04
  Bates et al. (2006)                    0.04 (-0.17, 0.26)      6.04
  Bates et al. (2006)                    0.10 (-0.11, 0.31)      6.04
  Bates et al. (2006)                    0.01 (-0.20, 0.22)      6.04
  Subtotal (I-squared = 14.8%, p = 0.288) 0.02 (-0.05, 0.09)     79.62
Overall (I-squared = 35.9%, p = 0.049)   0.03 (-0.04, 0.10)      100.00

NOTE: Weights are from random effects analysis.

Fig. 3. Forest plot analysis of subgroup heterogeneity. Note: 1 = Website, 2 = Blog, 3 = Bulletin Board, 4 = Personal Homepage.

6. Discussion

The purpose of this study was to synthesize findings from past research on the relationship between source and content credibility in the context of user-generated online health information. The meta-analytic findings indicate that the original sources of online health information generally did not differ in their impact on perceived information credibility. However, the direction of the effect suggests that perceived credibility of online health information may be higher when the information is created by experts rather than by laypersons, though the trend was not statistically significant. Furthermore, the results suggest that the source and content credibility relationship varied when the information was delivered on different technology platforms. When the platform was a general Internet website, health information created by experts was perceived as high in credibility. When the platforms were blogs and discussion boards, health information created by laypersons had the potential to be perceived as high in credibility.

Findings from this study can be used to explain conflicting findings from past work (e.g., Lin et al., 2015), as they suggest that users may hold normative expectations when searching for health information on different Internet platforms. Users may expect to find credible sources when searching for information from common websites; in other words, users will seek expert-generated content on the common website platform. By contrast, users may expect to find layperson-generated content when searching for information on blog and discussion board platforms, where user expectations will likely be defined by homophily. Indeed, one study found that the relationship between source credibility and content credibility was mediated by source homophily; that is, when the source has low credibility, homophily influences credibility perceptions of the information, which in turn influence attitudes toward the information (Wang et al., 2008). The present meta-analytic findings may therefore shed light on the different mechanisms underlying the persuasive effects of different sources.

These meta-analytic results also point to potential methodological reasons for the inconsistent source and content credibility findings in prior experimental studies. A detailed review of the studies suggests several concerns, beginning with the inconsistency of the measures used for perceived information credibility. For example, one study measured perceived information credibility with three items (accuracy, believability, and factualness; a = 0.89; Eastin, 2006), whereas another study measured it as the average of two items (accuracy and believability; r = 0.74; Hu and Sundar, 2010). A further concern is that sources were manipulated differently across the factorial experiments. For example, one study differentiated sources ranging from doctors to laypersons (Hu and Sundar, 2010), while another considered sources ranging from institutions to individual persons (Winter et al., 2010). Lastly, the methodological concerns include sample sizes and sampling procedures, which may limit the analytical techniques available in these studies.


Consideration of these concerns underscores the need to extend research addressing user-generated online health information. Such investigations can encompass more health issues, more technological platforms, and more factors that may influence information processing. For example, prior studies usually focus on a specific health issue, but it is possible that health issues themselves influence the evaluation of source credibility. Eastin's (2006) exploration of online health information found that sources identified as "highly credible" were deemed significantly more credible than their low-credibility counterparts. By contrast, when milk was the criterion topic, there were no source expertise and homophily effects on perceived message credibility and behavioral intentions (Hu and Sundar, 2010). This suggests that the health issue implies a degree of elaboration of the information, such that HIV may cause more involvement with the issue, motivating receivers to use both systematic and heuristic processes to evaluate the message. Furthermore, as research involving nationally representative data revealed, demographic differences can help explain differential uses of online health information (Thackeray et al., 2013). Some of the conflicting results in the literature can thus be explained by the fact that constituent studies employ respondent pools that vary widely in their demographic make-up (e.g., gender composition).

Later work could extend this research with longitudinal and multilevel research designs (Raudenbush et al., 2002). The latter design might profitably include multiple messages or source types, with participants nested within messages or within source types. Multilevel research can be used to discern between- and within-message (or source) variances by controlling for cross-level variances. Longitudinal studies, in turn, can assess the strength of source or message effects over time. For instance, two epilepsy content analyses (from 2010 to 2013) showed conflicting results; one possible explanation is that the facts surrounding this health issue evolved over those years. As the research base addressing online health information continues to grow, researchers can profitably track these influences across a wider domain of time points and sub-topics (e.g., severe health concerns).

6.1. Limitations

The present findings should be viewed in light of study limitations. Of primary importance, the meta-analysis reports the relationship between the mean difference and the outcome variable, so causality cannot be attributed to any of the results reported herein. Second, many effect sizes were based on data collected through cross-sectional research designs and provided by common sources, so the multiple effect sizes are not independent. Third, given the restricted number of studies screened for this meta-analysis, the number of effect sizes in each subgroup is often small, which may limit the power of the analysis. Other limitations include the lack of coding for the health issues explored in the studies as well as for the information credibility measures. These limitations indicate promising directions for future research. For example, by conducting an ongoing search of empirical studies and extending the search criteria to unpublished papers, more studies and effect sizes can be included; a future meta-analysis with more studies and effect sizes will help inform our understanding of this topic. Moreover, a wider range of variables can be included in the analysis, such as behavioral intention or behavior, so that more direct persuasive effects of sources on health-related outcomes can be explained.

7. Conclusion

On balance, the Internet allows open, anonymous, and democratic access to online health information provided by different sources, but online health information from different sources, such as health agencies, industry organizations, anonymous institutions, and consumers, may influence the evaluation of online health information. The present meta-analysis suggests that, when people access user-generated health information, source credibility may or may not influence user evaluations of information credibility, depending on the platform on which the information is posted. Study findings also suggest that evaluation of user-generated online health information requires integrated consideration of health issues, information sources, and technology platforms. Since empirical research on user-generated content for online health information is still in its infancy, the present meta-analysis can shed light on the user-generated-content literature and help to resolve the debate caused by inconsistencies in the literature base. The dynamism of user-generated online health information, like the digital environment from which it emerges, calls for further research on different sources in order to help health practitioners better understand these novel interventions.

Conflict of interest

None of the authors have a conflict of interest with this research.

References

Abelman, R., Lin, C.A., Atkin, D., 2007. A meta-analysis of television's impact on special populations. In: Preiss, R., Gayle, B., Burrell, N., Allen, M., Bryant, J. (Eds.), Mass Media Theories and Processes: Advances through Meta-Analysis. LEA, Mahwah, NJ, pp. 111–127.


Adams, S.A., 2010a. Blog-based applications and health information: two case studies that illustrate important questions for Consumer Health Informatics (CHI) research. Int. J. Med. Informatics 79 (6), e89–e96. http://dx.doi.org/10.1016/j.ijmedinf.2008.06.009.
Adams, S.A., 2010b. Revisiting the online health information reliability debate in the wake of "web 2.0": an inter-disciplinary literature and website review. Int. J. Med. Informatics 79 (6), 391–400. http://dx.doi.org/10.1016/j.ijmedinf.2010.01.006.
Albarracín, D., Gillette, J.C., Earl, A.N., Glasman, L.R., Durantini, M.R., Ho, M., 2005. A test of major assumptions about behavior change: a comprehensive look at the effects of passive and active HIV-prevention interventions since the beginning of the epidemic. Psychol. Bull. 131, 856–897.
Bates, B.R., Romina, S., Ahmed, R., Hopson, D., 2006. The effect of source credibility on consumers' perceptions of the quality of health information on the Internet. Inform. Health Soc. Care 31 (1), 45–52.
Bender, J., 2011. Seeking support on facebook: a content analysis of breast cancer groups. J. Med. Internet Res. 13 (1). http://dx.doi.org/10.2196/jmir.1560.
Bernstam, E.V., Sagaram, S., Walji, M., Johnson, C.W., Meric-Bernstam, F., 2005. Usability of quality measures for online health information: can commonly used technical quality criteria be reliably assessed? Int. J. Med. Informatics 74 (7–8), 675–683. http://dx.doi.org/10.1016/j.ijmedinf.2005.02.002.
Betsch, C., Brewer, N.T., Brocard, P., Davies, P., Gaissmaier, W., Haase, N., Stryk, M., 2012. Opportunities and challenges of Web 2.0 for vaccination decisions. Vaccine 30 (25), 3727–3733. http://dx.doi.org/10.1016/j.vaccine.2012.02.025.
Biggs, T.C., Bird, J.H., Harries, P.G., Salib, R.J., 2013. YouTube as a source of information on rhinosinusitis: the good, the bad and the ugly. J. Laryngol. Otol. 127 (8), 749–754. http://dx.doi.org/10.1017/S0022215113001473.
Borenstein, M., Hedges, L.V., Higgins, J.P.T., Rothstein, H.R., 2009. Introduction to Meta-Analysis. Wiley, New York.
Briones, R., Nan, X., Madden, K., Waks, L., 2012. When vaccines go viral: an analysis of HPV vaccine coverage on YouTube. Health Commun. 27 (5), 478–485. http://dx.doi.org/10.1080/10410236.2011.610258.
Carroll, M.V., Shensa, A., Primack, B.A., 2013. A comparison of cigarette- and hookah-related videos on YouTube. Tobacco Control 22 (5), 319–323. http://dx.doi.org/10.1136/tobaccocontrol-2011-050253.
Chen, S., Chaiken, S., 1999. The heuristic-systematic model in its broader context. In: Chaiken, S., Trope, Y. (Eds.), Dual Process Theories in Social Psychology. Guilford Press, New York, pp. 73–96.
Cheung, C.M.K., 2010. The effectiveness of electronic word-of-mouth communication: a literature analysis, pp. 329–345.
Cheung, C.M.K., Thadani, D.R., 2012. The impact of electronic word-of-mouth communication: a literature analysis and integrative model. Decis. Support Syst. 54 (1), 461–470. http://dx.doi.org/10.1016/j.dss.2012.06.008.
Chou, W.S., Prestin, A., Lyons, C., Wen, K., 2013. Web 2.0 for health promotion: reviewing the current evidence. Am. J. Public Health 103 (1), e9–e18. http://dx.doi.org/10.2105/AJPH.2012.301071.
Cohen, J., 1988. Statistical Power Analysis for the Behavioral Sciences. Lawrence Erlbaum Associates, New Jersey. ISBN 0-8058-0283-5.
D'Auria, J.P., 2010. In search of quality health information. J. Pediatr. Health Care 24 (2), 137–140. http://dx.doi.org/10.1016/j.pedhc.2009.11.001.
Dean, R., Austin, J., Watts, W., 1971. Forewarning effects in persuasion: field and classroom experiments. J. Pers. Soc. Psychol. 18, 210–221.
Dobransky, K., Hargittai, E., 2012. Inquiring minds acquiring wellness: uses of online and offline sources for health information. Health Commun. 27 (4), 331–343.
Eastin, M.S., 2006. Credibility assessments of online health information: the effects of source expertise and knowledge of content. J. Comput.-Mediat. Commun. 6 (4). http://dx.doi.org/10.1111/j.1083-6101.2001.tb00126.x.
Eysenbach, G., 2008. Credibility of health information and digital media: new perspectives and implications for youth, pp. 123–154. doi: 10.1162/dmal.9780262562324.123.
Fox, S., Duggan, M., 2013. Health online 2013.
Freeman, B., Chapman, S., 2008. Gone viral? Heard the buzz? A guide for public health practitioners and researchers on how Web 2.0 can subvert advertising restrictions and spread health information. J. Epidemiol. Community Health 62 (9), 778–782. http://dx.doi.org/10.1136/jech.2008.073759.
Freeman, K.S., Spyridakis, J.H., 2004. An examination of factors that affect the credibility of online health information. Tech. Commun. 51 (2), 239–263.
Hallgren, K.A., 2012. Computing inter-rater reliability for observational data: an overview and tutorial. Tutorials Quant. Methods Psychol. 8 (1), 23–34.
Hayes, A.F., Krippendorff, K., 2007. Answering the call for a standard reliability measure for coding data. Commun. Methods Measures 1, 77–89.
Higgins, J.P.T., Thompson, S.G., Deeks, J.J., Altman, D.G., 2003. Measuring inconsistency in meta-analyses. BMJ 327, 557–560.
Hu, Y., Sundar, S.S., 2010. Effects of online health sources on credibility and behavioral intentions. Commun. Res. 37 (1), 105–132. http://dx.doi.org/10.1177/0093650209351512.
Joshi, M.P.H.A., Bhangoo, R.S., Kumar, K., 2011. Quality of nutrition related information on the Internet for osteoporosis patients: a critical review. Technol. Health Care 19 (6), 391–400. http://dx.doi.org/10.3233/THC-2011-0643.
Kadry, B., Chu, L.F., Kadry, B., Gammas, D., Macario, A., 2011. Analysis of 4999 online physician ratings indicates that most patients give physicians a favorable rating. J. Med. Internet Res. 13 (4).
Kata, A., 2012. Anti-vaccine activists, Web 2.0, and the postmodern paradigm: an overview of tactics and tropes used online by the anti-vaccination movement. Vaccine 30 (25), 3778–3789. http://dx.doi.org/10.1016/j.vaccine.2011.11.112.
Lagoe, C., Atkin, D., 2015. Searching for sickness online: the new world of cyberchondriacs. Comput. Hum. Behav., 484–491.
Lehto, T., Oinas-Kukkonen, H., 2011. Persuasive features in web-based alcohol and smoking interventions: a systematic review of the literature. J. Med. Internet Res. 13 (3), e46. http://dx.doi.org/10.2196/jmir.1559.
Lin, C.A., Atkin, D., Vidican, S., 2015. Ethnicity, the digital divide and uses of the Internet for health information. Comput. Hum. Behav. 51, 216–223.
Lipsey, M.W., Wilson, D.B., 2001. Practical Meta-Analysis. Sage, Thousand Oaks, CA.
Lo, A.S., Esser, M.J., Gordon, K.E., 2010. YouTube: a gauge of public perception and awareness surrounding epilepsy. Epilepsy Behav. 17 (4), 541–545. http://dx.doi.org/10.1016/j.yebeh.2010.02.004.
Metzger, M.J., Flanagin, A.J., Medders, R.B., 2010. Social and heuristic approaches to credibility evaluation online. J. Commun. 60 (3), 413–439. http://dx.doi.org/10.1111/j.1460-2466.2010.01488.x.
Moturu, S.T., Liu, H., Johnson, W.G., 2008. Trust evaluation in health information on the World Wide Web. In: Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 1525–1528. doi: 10.1109/IEMBS.2008.4649459.
Neumark, Y., Flum, L., Lopez-Quintero, C., Shtarkshall, R., 2012. Quality of online health information about oral contraceptives from Hebrew-language websites. Israel J. Health Policy Res. 1, 38. http://dx.doi.org/10.1186/2045-4015-1-38.
O'Grady, L., Wathen, C.N., Charnaw-Burger, J., Betel, L., Shachak, A., Luke, R., Jadad, A.R., 2012. The use of tags and tag clouds to discern credible content in online health message forums. Int. J. Med. Informatics 81 (1), 36–44. http://dx.doi.org/10.1016/j.ijmedinf.2011.10.001.
O'Keefe, D., 2002. Persuasion: Theory and Research. Sage Publications, Inc.
Petty, R.E., Brinol, P., 2010. Attitude change. In: Advanced Social Psychology: The State of the Science, pp. 217–259.
Petty, R.E., Cacioppo, J.T., 2001. Source factors and the elaboration likelihood model of persuasion. Adv. Consum. Res., 668–673.
Petty, R.E., Rucker, D.D., Bizer, G.Y., Cacioppo, J.T., 2004. The elaboration likelihood model of persuasion. In: Seiter, J.S., Gass, R.H. (Eds.), Perspectives on Persuasion, Social Influence, and Compliance Gaining. Pearson Education, Inc., pp. 65–89.
Pew, 2012. Mobile health 2012.

Please cite this article in press as: Ma, T.J., Atkin, D. User generated content and credibility evaluation of online health information: A meta analytic study. Telemat. Informat. (2016), http://dx.doi.org/10.1016/j.tele.2016.09.009


Raudenbush, S.W., Bryk, A.S., Congdon, R.T., 2002. Hierarchical Linear Modeling. Sage, Thousand Oaks, CA.
Ruppel, E.K., Rains, S.A., 2012. Information sources and the health-information seeking process: an application and extension of channel complementarity theory. Commun. Monogr. 79, 385–405.
Sillence, E., Briggs, P., Harris, P., Fishwick, L., 2006. A framework for understanding trust factors in web-based health advice. Int. J. Hum.-Comput. Stud. 64 (8), 697–713. http://dx.doi.org/10.1016/j.ijhcs.2006.02.007.
Sillence, E., Briggs, P., Harris, P., Fishwick, L., 2007. Going online for health advice: changes in usage and trust practices over the last five years. Interact. Comput. 19 (3), 397–406. http://dx.doi.org/10.1016/j.intcom.2006.10.002.
Sundar, S.S., 2008. Self as source: agency and customization in interactive media. In: Konijn, E., Utz, S., Tanis, M., Barnes, S. (Eds.), Mediated Interpersonal Communication. Routledge, New York, pp. 217–233.
Sundar, S.S., Nass, C., 2001. Conceptualizing sources in online news. J. Commun. 51, 52–72.
Tenhaven, C., Tipold, A., Fischer, M.R., Ehlers, J.P., 2013. Is there a ‘‘net generation” in veterinary medicine? A comparative study on the use of the Internet and Web 2.0 by students and the veterinary profession. GMS Z. Med. Ausbild. 30 (1), Doc7. http://dx.doi.org/10.3205/zma000850.
Thackeray, R., Crookston, B., West, J., 2013. Correlates of health-related social media use among adults. J. Med. Internet Res. 15 (1). http://dx.doi.org/10.2196/jmir.2297.
Tian, Y., 2010. Organ donation on Web 2.0: content and audience analysis of organ donation videos on YouTube. Health Commun. 25 (3), 238–246. http://dx.doi.org/10.1080/10410231003698911.
Trumbo, C.W., McComas, K.A., 2003. The function of credibility in information processing for risk perception. Risk Anal. 23 (2), 343–353.
Villiard, H., Moreno, M.A., 2012. Fitness on Facebook: advertisements generated in response to profile content. Cyberpsychol. Behav. Soc. Netw. 15 (10), 564–568. http://dx.doi.org/10.1089/cyber.2011.0642.
Walther, J., Boyd, S., 2002. Attraction to computer-mediated social support. In: Lin, C., Atkin, D. (Eds.), Communication Technology and Society: Audience Adoption and Uses. LEA, Cresskill, NJ.
Wang, Z., Walther, J.B., Pingree, S., Hawkins, R.P., 2008. Health information, credibility, homophily, and influence via the Internet: web sites versus discussion groups. Health Commun. 23 (4), 358–368. http://dx.doi.org/10.1080/10410230802229738.
Wathen, C.N., Burkell, J., 2002. Believe it or not: factors influencing credibility on the Web. J. Am. Soc. Inf. Sci. Technol. 53 (2), 134–144. http://dx.doi.org/10.1002/asi.10016.
Wilson, D.B., 2002. Meta-analysis macros for SAS, SPSS, and Stata (January 15, 2002 version). Retrieved May 5, 2010.
Winter, S., Krämer, N.C., Appel, J., Schielke, K., 2010. Information selection in the blogosphere: the effect of expertise, community rating, and age. In: Ohlsson, S., Catrambone, R. (Eds.), Proceedings of the 32nd Annual Conference of the Cognitive Science Society. Cognitive Science Society, Austin, TX, pp. 802–807.
Wong, V.S.S., Stevenson, M., Selwa, L., 2013. The presentation of seizures and epilepsy in YouTube videos. Epilepsy Behav. 27 (1), 247–250. http://dx.doi.org/10.1016/j.yebeh.2013.01.017.

Dr. Tao (Jennifer) Ma (Ph.D., University of Connecticut) is an Assistant Professor in the Department of Mass Communication at Winona State University. Her research interests span integrated marketing communication, health communication, new media technology, and persuasion processes shaped by consumer heuristics and social influence. She has done National Institutes of Health grant-supported work on the effectiveness of communication, including advertising and communication campaigns. Her studies apply advanced quantitative research methods, including longitudinal analysis, multilevel modeling, structural equation modeling, and meta-analysis, to investigate the influence of mediated and strategic communication on social change at the individual, institutional, and societal levels.

Dr. David Atkin (Ph.D., Michigan State University) is Professor in the Department of Communication at the University of Connecticut. His research interests include communication policy as well as uses and effects of new media. He has done grant-supported work on the adoption, use and regulation of new media.
