
ANALYZING DYNAMIC REVIEW MANIPULATION AND ITS IMPACT ON MOVIE BOX OFFICE REVENUE

Haoxiang Ma
Department of Statistics and Finance, School of Management, University of Science and Technology of China, Hefei, China, 230026

Jong Min Kim (Corresponding Author)
University of Science and Technology of China, 96 JinZhai Road, Baohe District, Hefei, Anhui 230026, China
Email: [email protected]

Eunkyung Lee
Graduate School of Techno Design, Kookmin University, Seoul, Korea, 02707

Last revised: March 10, 2019

_____________________________________________________________________________________

ABSTRACT

We examine dynamic review manipulation behavior for movies and its impact on box office revenue. To do so, we investigated review distribution patterns for two review types, viewer and netizen reviews, which correspond to verified and unverified review policies, over a movie's life cycle. Because the motivation for promotional reviews varies over the product life cycle, we propose that review distributions differ across review types. We found that the impact of promotional reviews on review distributions was greater in the early stages of a movie's opening, but the impact disappeared two weeks after the movie's release. We also show that such impacts occurred because unverified reviews are more vulnerable to the influence of review manipulation. When consumers encountered a higher valence and higher volume of unverified reviews, they perceived these as a sign of review manipulation and exhibited psychological reactance.

Keywords: Box office revenue; causal inference; dynamic review behavior; movie life-cycles; netizen reviews; promotional reviews; reactance theory; review credibility; review distributions; review manipulation; unverified reviews; verified reviews; viewer reviews

_____________________________________________________________________________________


1. INTRODUCTION

The Internet, with its ubiquity and accessibility, has been transforming how individuals generate, communicate, and share information. Consumers share their experiences through the Internet in the form of online word-of-mouth (OWOM). This information influences the purchase decisions and intentions of other consumers (Park et al., 2007). It is therefore becoming increasingly important for firms to understand how OWOM is generated and spreads, because it may influence firms' overall sales (Chevalier and Mayzlin, 2006). Understanding the characteristics and mechanisms of OWOM, and finding ways to strategically incorporate it into firms' strategies, is imperative for firms to compete more successfully and boost sales (Luca and Zervas, 2016; Malbon, 2013; Mayzlin et al., 2014). Given the importance of OWOM in influencing consumers and firms, it is receiving attention from practitioners and academic researchers alike.

In this research, we investigate factors that influence OWOM generation and their subsequent impact on firm performance, focusing on the effect of review manipulation. Examining the impact of review manipulation is imperative for understanding the role of OWOM in business because firms are often motivated to involve themselves in driving OWOM to their advantage. For example, approximately 16% of Yelp's restaurant reviews are perceived to be dubious by its users (Luca and Zervas, 2016). Online magazines like Money (Tuttle, 2014) and posts by Internet law specialists (e.g., the Internet Law Centre of Cohen Davis Solicitors, which specializes in online privacy concerns and has published a series of articles on privacy issues regarding online reviews; www.internetlawcentre.co.uk/illegal-online-reviews-legal-advice) also show that employees are often involved in review manipulation (e.g., promotional reviews or fake reviews) in the hotel industry, where OWOM is used to gain competitive advantage. Thus, understanding how such review manipulation intentions and behaviors influence OWOM and firm performance is essential, since it enables firms to build strategies more effectively and predict the potential outcomes of their strategic decisions.

The focus of this research is to examine the effects of review manipulation on OWOM generation and firm performance, and to investigate how such effects change over time, which has received limited attention in the prior literature on OWOM. We study the movie industry: unlike hotel or restaurant services, entertainment products such as movies and DVDs usually have a short life cycle. Because the life cycle of a movie is condensed into a relatively short period of time, we can examine the full trend in the dynamic patterns of review manipulation over time. For the empirical analysis, we used OWOM data from a South Korean movie review website. In particular, this website runs a review policy with two types of reviews, netizen-type vs. viewer-type (hereafter, shortened

to netizen and viewer reviews for clarity), depending on whether users must buy tickets and watch the movies through the website in order to post reviews. This distinction drives the difference in motivation for review manipulation, and it allows us to examine how the impacts on OWOM generation and firm performance, represented by box office sales for each movie, unfold over time.

This study contributes to the literature on OWOM in two ways. First, while previous works have mainly focused on the cross-sectional effects of review manipulation, we take a dynamic approach to see how the effect changes over time. Second, we extend the literature by taking a more comprehensive view and examining the consequences of OWOM for reviewing and firm performance.

2. THEORETICAL BACKGROUND AND HYPOTHESES

2.1 Theoretical Background

2.1.1. Previous Literature on OWOM and Firm Performance

OWOM is often defined as any comments or statements about a product or service shared via the Internet by actual, former, or potential customers (Hennig-Thurau et al., 2004). It has become an essential part of consumer decision-making, because the Internet plays a key role in each step of the purchase decision and is a precursor of firm performance. The importance of understanding the characteristics of OWOM and its impacts on consumers and firms is therefore increasing.

Prior literature has shown that OWOM is closely related to firm performance, including sales. The volume and valence of OWOM have been considered the key dimensions that explain its effect on firm performance (Baum and Spann, 2014; Chevalier and Mayzlin, 2006; Godes and Mayzlin, 2004; Neelamegham and Chintagunta, 1999). Volume refers to the total number of WOM interactions generated online (Duan et al., 2008b; Liu, 2006). When the amount of conversation generated online about a firm's products or services increases, the firm's sales should also increase, because more people are exposed to and informed about them, resulting in a greater level of awareness (Amblee and Bui, 2011; Godes and Mayzlin, 2004). Other prior work has shown that the volume of information generated and exchanged among consumers is a factor in determining how effectively an innovation will spread among consumers (Neelamegham and Chintagunta, 1999; Zufryden, 1996). The impact of OWOM volume is likely to persist over time, since customer reviews remain available online unless they are deleted or the website becomes inaccessible (Duan et al., 2008a).

The second dimension of OWOM is valence, which captures the negative or positive nature of messages generated online. Valence influences sales through the persuasive effect, shaping how positive or negative individuals' evaluations are for a particular product or service (Duan et al., 2008b; Liu, 2006). Previous empirical findings on the impact of OWOM valence on consumer behavior and sales have been mixed. Many prior studies have reported that positive review ratings are likely to result in


greater sales (Chevalier and Mayzlin, 2006; Sun, 2012; Zhang and Dellarocas, 2006). However, there are also studies showing that negative reviews may have positive effects on sales (Doh and Hwang, 2009). In addition, consumer decisions have been found to be more strongly influenced by negative than positive OWOM (Chevalier and Mayzlin, 2006), providing supporting evidence for its impact. The latter relationship is quite evident in the movie industry: negative WOM is believed to be less influenced by promotional activities and thus more reliable (Basuroy et al., 2003; Chevalier and Mayzlin, 2006; Dellarocas, 2006; Yoon et al., 2017). Building on these research streams, we focus on volume and valence as the dimensions for understanding OWOM characteristics and their influence on firm sales performance.

We also assume that those likely to benefit from assisting or engaging in review manipulation will be motivated to intentionally increase the valence of reviews by posting positive OWOM. The presence of such motivation will often be accompanied by an increase in review volume. This assumption is meaningful for the movie context because movies, in general, have relatively short life cycles and a shorter period of time in which manipulation is incentivized.[2] Those who are closely related to the success of a movie are more likely to concentrate their manipulation efforts on boosting movie performance (Eliashberg, Elberse, and Leenders, 2006; Legoux et al., 2016). Thus, we focus on the movie industry to test the influence of review manipulation on OWOM.

[2] According to an article in Cine21, one of the most popular movie magazines in South Korea, this assumption also holds in the South Korean film industry. Many marketing companies claim that film distributors tend to hire them to manipulate online customer reviews. However, such manipulation focuses on promoting the distributors' own films by posting positive reviews, not on posting negative reviews to discourage the consumption of other movies. This reality is described in the following article in Korean: www.cine21.com/news/view/?mag_id=90596.

2.1.2. Online Review Policies and Review Manipulation

For OWOM to benefit both consumers and firms, the reviews generated on review sites need to be credible (Reimer and Benkenstein, 2016) and of large enough volume (Dhar and Chang, 2009; Mayzlin et al., 2014). Information credibility and the volume of OWOM are closely related: as the number of reviews increases, consumers are more likely to perceive the information contained in them as trustworthy, because it is more likely to reflect the true product quality (Zhu and Zhang, 2010). Consumers, as a result, are more likely to trust information that a greater number of people agree upon. However, in the actual market environment, review credibility and the number of reviews often exhibit a trade-off (Mayzlin et al., 2014; Park and Kim, 2008). Online review sites can maximize the number of reviews shared on their sites if they choose a more lenient review policy and allow anyone to post reviews. At the same time, they cannot prevent fraudulent or manipulated reviews from being shared, so there is a risk of diminished review credibility. In contrast, if sites allow only verified users to post reviews, to ensure the quality and credibility of their reviews, the number of reviews that can be generated may be limited. So increasing the volume of reviews may not always be

accompanied by enhanced credibility. Given this trade-off, online review sites take one of two approaches, verified reviews or unverified reviews, as their review policy, depending on their business model, market size, industry, business objectives, or management's vision. Online review sites including Expedia.com, Booking.com, and many other online travel agencies follow the verified review approach, through which they aim to maintain a higher level of review credibility even though the number of reviews they can generate may be limited. In contrast, review sites such as TripAdvisor.com and Yelp.com follow the unverified review approach: they are more open and lenient in terms of who can post reviews. This policy makes it easier to maximize the number of reviews, but at the same time the risk of review manipulation is greater.

Depending on which of these approaches a firm pursues for its review policy, review posters can be motivated to post fake reviews and subsequently influence the distribution of reviews. For example, Mayzlin et al. (2014) showed that the distribution patterns of reviews posted on Expedia.com and TripAdvisor.com differ from each other because the sites follow opposite review policy approaches. Hotels with a stronger incentive to post promotional reviews due to high competition tend to have both more negative and more positive reviews on TripAdvisor than on Expedia, because there is greater room for posting reviews that are advantageous for a particular hotel on TripAdvisor. Such findings are limited in that they focus on cross-sectional differences in review distributions caused by manipulation. The level of motivation to disingenuously boost sales is likely to vary over time, so it is necessary to examine how the online customer review distribution is influenced by the dynamic effect of manipulation. The movie industry is a good setting in which to examine the dynamic influence of review manipulation because it has a short, condensed product life cycle, so we can adopt a more comprehensive perspective in understanding the downstream consequences. The potential benefits of review manipulation are mostly limited to the period near the opening day of a movie, when most sales occur, within the first couple of weeks after release (Eliashberg et al., 2006; Legoux et al., 2016).[3]

[3] According to the box office revenue data of the South Korean market used in this study, the revenue of movies declines rapidly after the first or second week after release.

2.2 Hypothesis Development

2.2.1 Influence of Review Manipulation Motivation on Review Distributions

Previous literature has investigated the influence of review credibility or reliability on OWOM, and has mainly focused on reviews from unverified sites, because the motivation to post fake or fraudulent reviews varies to a greater degree there than on verified sites. According to Luca and Zervas (2016), a significant proportion of movie reviews on unverified review sites is likely to contain promotional reviews posted by anyone, from studio staff and producers to incentivized review posters, who may benefit from posting

positive reviews. In the movie industry, the box office results from the opening week are considered a critical indicator for determining how much distributional support should be committed in the future (Krider et al., 2005), and how much cumulative success can be expected (Elberse and Eliashberg, 2003; Legoux et al., 2016). Consequently, the motivation to put forth promotional reviews in the early stage of a movie's screening should be greater, particularly on unverified review sites, to gain a sufficient level of momentum to drive future success.

Building upon such findings, we focus on the impact of review manipulation on the distribution of the reviews that are generated. We posit that the review distribution should differ depending on the type of review policy used, because of the difference in the level of motivation for posting manipulated reviews. To test this, we used a unique dataset of movies from Naver.com, the leader of the portal market in South Korea. In early January 2014, Naver started a review policy in which it collects two types of reviews for movies: netizen and viewer reviews. Netizen reviews are posted by anyone with a Naver account who wants to share opinions about a movie (including those who watched the movie but did not buy their tickets through Naver), without having to purchase movie tickets through Naver or watch the movie. Viewer reviews are posted by those who have purchased tickets for a movie via Naver and watched the movie before posting a review. In other words, netizen reviews are closely related to the unverified site approach, whereas viewer reviews are associated with the verified site review policy approach. Therefore, netizen reviews ought to be more vulnerable to promotional reviews than viewer reviews.

Based on this definition of the review types, we further predict that the differences in the review distributions will be significant in the opening week. This is because netizen reviews are more likely to include promotional elements, and promotional efforts should be concentrated in the early stage of a movie's life cycle. So, if heavy promotional activities are present in the early stage of a movie's opening, the ratio of positive reviews among netizen reviews will be greater than among viewer reviews. However, the difference in the ratio should lessen as the growth of box office sales decelerates and the motivation to post promotional reviews decreases over time (De Vany and Walls, 2002; Duan et al., 2008b). We offer:

Hypothesis 1 (The Viewer-Netizen Review Ratio Diminution Hypothesis). There is a difference in the ratio of positive reviews between viewer and netizen-authored reviews, and the difference will diminish over time.

(See Table 1 for all the hypotheses we will present.)

INSERT TABLE 1 ABOUT HERE

As we have discussed, viewer reviews strictly allow only actual movie viewers, that is, verified viewers, to post reviews. In contrast, netizen reviews allow anyone, including those who have watched movies but

also those with the incentive to post promotional reviews (including movie studios, movie-related staff members, or incentivized reviewers), and even those who have not watched the movie, to post. Especially for netizen reviews, it is possible that reviewers post for personal reasons without actually watching movies. For example, consumers may try to sabotage a movie by sharing extremely negative reviews to punish an actor's or a director's moral transgressions, or simply because other users have posted reviews. Thus, promotional reviews may not be the only factor that drives differences in the review distributions between viewer and netizen reviews: non-viewers can also have a meaningful influence on OWOM, even without purchasing and watching a movie, because the preferences of review posters who did not watch a movie may differ from those of posters who have watched it. Such differences will be reflected in their review postings. In related research, Anderson and Simester (2014) showed that there is a difference in preferences between review posters with and without confirmed purchases.

Thus, we posit that if the reviews posted by non-viewers truly differ from those of viewers, the differences in the distributions between the viewer and netizen reviews should remain significant even after the incentive to post promotional reviews has faded with time. However, if there are no differences in preferences between viewers and non-viewers, then the differences in review distributions in the early stage of a movie's release can be attributed to the effect of promotional reviews. Based on the findings of Anderson and Simester (2014), we predict that there will be discernible differences in the review distributions between viewer and netizen reviews, even when the influence of promotional activities has diminished. This brings us to our second hypothesis:

Hypothesis 2 (The Elimination of Promotional Review Effects Hypothesis). There will be a discernible difference in the distributions for viewer and netizen reviews when the effect of promotional reviews is eliminated.

Thus, we predict two possible causes for the differences in the review distributions between viewer and netizen reviews (Anderson and Simester, 2014; Luca and Zervas, 2016; Mayzlin et al., 2014). While review manipulation behavior, including promotional reviews, should be the primary cause of such differences, the differing preferences of viewers and non-viewers who post may also influence them. To the extent that the differences are driven by review manipulation, they will further influence box office revenue by affecting how consumers evaluate and choose movies using the information contained in OWOM. We next develop hypotheses to examine how these differences influence the downstream consequence for firm performance: box office revenue.


2.2.2. Influence of Valence Difference on Box Office Revenue

In the OWOM context, a firm's financial performance will be influenced by how consumers perceive and utilize the information contained in online reviews and reflect it in their decisions. Movies, the context we consider, are experience goods that are difficult for consumers to accurately judge in terms of quality prior to consumption (Koh et al., 2010; Sawhney and Eliashberg, 1996). Because there is much uncertainty involved in predicting product quality, prospective customers will be inclined to integrate information from various sources to form opinions about the expected utility of product quality, and to make decisions based upon such judgments (Anderson and Shanteau, 1970). When information from various sources is inconsistent, or newly acquired information conflicts with the information they already have, individuals will be inclined to process such inconsistencies before making a decision.

Online reviews generated by other users are a good source of information because they describe the experiences other consumers have had. Prospective consumers will assess their expected utility for a movie by considering reviews that other users have generated based on their own experience, and will be more likely to watch a movie if they think that their expected utility from watching will be greater than the related cost, based on their assessment of online customer reviews (Lee et al., 2015; Li, 2018). Ultimately, such OWOM effects will determine movie box office sales.

Considering that the characteristics of reviews differ by review type, however, the reviews that consumers take into consideration when making decisions will not always be consistent in content. In particular, the online review information that consumers encounter about an identical product can differ depending on whether the review is a viewer or netizen type. Because the individuals who are allowed to post reviews differ across review types, their reviews are likely to contain a different tone and evaluation of the quality of each movie. We often observe situations in which an identical movie receives different ratings depending on the review type; for example, an average 3.5 rating from viewer reviews (higher credibility) but a 4.5 rating from netizen reviews (lower credibility). Thus, we must ask: Do such differences in ratings for an identical movie influence consumer choice? Do higher ratings from netizen reviews have a positive impact on a movie's box office revenue? If individuals perceive such differences across review types as a result of promotional activities, will the effect on box office revenue still be salient?

We predict that the impact of the differences across review types on box office revenue will depend on the degree to which individuals take into consideration information from a less credible source (netizen reviews) in their decision-making. According to Hu et al. (2012), review manipulation has a negative impact on the perceived credibility of reviews. If consumers are inclined to underweight information with lower credibility (netizen reviews) and focus on information from a highly credible

source, then the expected average review rating in the example above would be lower than the overall mean (4.0, the average of 3.5 and 4.5), and the impact of the higher netizen review ratings on box office revenue would be limited. However, if prospective customers take into consideration information from all review types regardless of information credibility, then the expected average review rating will be closer to the overall mean of 4.0. Hence, we predict that if individuals consider all information, including low-credibility information, then the difference in review ratings will have a positive effect on box office revenue, given that a higher review rating usually leads to more sales (Chevalier and Mayzlin, 2006; Sun, 2012; Zhang and Dellarocas, 2006). Considering that TripAdvisor and Yelp have review policies that allow anyone to post reviews on their sites, and that they are two of the most influential review and search sites, it is reasonable to infer that customers do not necessarily ignore information from a less credible source despite the possibility of review fraud. Based on this inference, we predict that individuals will consider information from less credible sources rather than treating it as a cue indicating possible review fraud. In other words, we propose that prospective customers will incorporate information about product quality from netizen reviews when viewer and netizen reviews deliver different information. Thus, we assert:

Hypothesis 3 (The Review Rating Differences Across Review Types and Box Office Revenues Hypothesis). If a movie has a higher average rating for netizen reviews than for viewer reviews, then the difference in rating has a positive effect on box office revenue.
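To make the weighting logic above concrete, the toy calculation below (our own illustration, not part of the paper's analysis) shows the expected rating a prospective customer would form under different credibility weights; the weight w on viewer reviews is a hypothetical parameter.

```python
# Hypothetical illustration of credibility weighting, not the paper's model.
# A prospective customer forms an expected rating as a weighted mean of the
# viewer-review average (higher credibility) and the netizen-review average
# (lower credibility).
def expected_rating(viewer_avg: float, netizen_avg: float, w: float) -> float:
    """w is the weight placed on the more credible viewer reviews."""
    return w * viewer_avg + (1.0 - w) * netizen_avg

print(expected_rating(3.5, 4.5, w=1.0))  # 3.5: netizen information fully discounted
print(expected_rating(3.5, 4.5, w=0.5))  # 4.0: all information weighted equally
```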

However, review rating information alone may not be sufficient to reduce the uncertainty regarding movie quality. Even when review rating scores indicate high quality, consumers may perceive such information as less diagnostic if the number of reviews is small (Zhu and Zhang, 2010). This is because consumers consider the volume of information an important cue for determining how trustworthy and informative the review information is (Cohen and Golden, 1972). Previous research also provides evidence that the number of reviews acts as a proxy for product popularity and reflects how reliable the customer reviews are (Zhu and Zhang, 2010). Thus, we predict that consumers are likely to regard information supported by more posts as more trustworthy. In the Review Rating Differences Across Review Types and Box Office Revenues Hypothesis (H3), we stated that the difference in review ratings across review types should positively influence box office revenue when the netizen review ratings are higher. Building on this, we further predict that a sufficient number of reviews should complement the lower credibility of netizen reviews. We further hypothesize:

Hypothesis 4a (The Higher Netizen Review Rating Effect on Box Office Receipts Hypothesis). A greater number of netizen reviews than viewer reviews increases the positive effect of the difference in ratings on box office revenue when the rating for netizen reviews is higher than that for viewer reviews.

However, when an information source has less credibility, the combined effect of the valence and the


volume may sometimes lead to a result that is different from what is predicted in the Higher Netizen Review Rating Effect on Box Office Receipts Hypothesis (H4a). In general, consumers are aware that movie studios sometimes attempt to boost sales by posting promotional positive reviews. However, it is often difficult for consumers to immediately detect whether promotional reviews are present. To judge the presence of promotional reviews, consumers use not only the review rating scores but also the number of reviews as important cues. If they see reviews with a lower level of credibility for a particular movie that have a high rating (persuasive effect via valence), and also a large number of reviews indicating positive scores (awareness effect via volume), then they are likely to recognize that there is a substantial presence of promotional reviews for the movie. When they recognize a promotional review, consumers are likely to exhibit psychological reactance and resist the marketing efforts (Brehm, 1966). According to reactance theory, when prospective customers feel that their free choice is threatened by promotional reviews, they make an effort to prove that it has not been compromised (Pennebaker and Sanders, 1976). Building upon this theory, the interaction between the valence and volume of generated reviews can cause a boomerang effect on box office revenue if prospective customers recognize the presence of manipulation and decide to lash out (Burgoon, Alvaro et al., 2002). So we posit that the interaction between higher ratings and a greater number of reviews from a less credible source will be perceived as a signal of manipulation and will further lead to a negative impact on box office revenue. If the combined influence of the volume and valence of reviews holds, then a prediction that differs from Hypothesis H4a can be established:

Hypothesis 4b (The Box Office Revenue Decrease Hypothesis). A greater number of netizen reviews than viewer reviews, along with the higher rating for a movie, decreases box office revenue for the movie.

3. DATA DESCRIPTION

3.1 Data Collection

For the empirical analysis, we combined box office data with online customer reviews for movies. We gathered daily OWOM data from the Naver Movies website (movie.naver.com) from January 1, 2014 to December 31, 2016. Naver provides a multitude of services, from basic features such as e-mail and blogs to news and entertainment, including movies and music.[5] We collected the following data on a daily basis for each customer review: the review poster's ID, review rating score, time of review posting, content of each review, and whether the review was posted by a netizen or viewer poster.

[5] Naver.com is still the dominant search engine in South Korea in 2019. It began categorizing customer reviews for movies into the viewer and netizen categories in early January 2014.

We also collected box office revenue data from the website of the Korean Film Council (KOFIC), which provides daily box office ticket sales data for all movies released in South Korea. In particular, we gathered daily sales data for the 60 movies that ranked in the top 20 of annual box office revenue from 2014 to 2016. However, we screened out two of these movies because they were released in late December 2013, before Naver started to categorize customer reviews into viewer and netizen reviews; their customer reviews were not applicable to our investigation of the differences in review distributions and their impact on box office revenue. As a result, we were left with customer review data for 58 movies as the basis for empirically testing our hypotheses. For the box office revenue data, we collected information on the number of screens allocated, share of screens, box office revenue, daily number of viewers, cumulative box office revenue, cumulative number of viewers, ranking within the market, and other basic movie characteristics such as the director, leading actors, and genre.

3.2 Data Overview

Table 2 provides an overview of the final data set used for analysis: 29 of the 58 movies (50%) were domestic projects, and 31 movies (53%) were distributed by one of the four major South Korean film distributors: CJ Entertainment, Show Box, Next, and Lotte.

INSERT TABLE 2 ABOUT HERE

Table 3 describes the variables used in the empirical analysis. There are two types of reviews: viewer reviews and netizen reviews. The daily rating for each movie is calculated by taking the arithmetic mean of all rating scores that the movie received each day. The daily number of reviews is calculated by summing the number of reviews that the movie received each day.

INSERT TABLE 3 ABOUT HERE

Table 4 shows the summary statistics of the customer reviews from Naver Movies for the period of four weeks after the release of each movie. The total number of netizen reviews was greater than the number of viewer reviews (504,602 > 369,066). This is unsurprising considering the difference in review policy between verified and unverified reviews. However, the average rating of the netizen reviews was lower than that of the viewer reviews (4.13 < 4.33). Such differences between the review types are consistent with the observations of Mayzlin et al. (2014). The percentage of negative reviews, those with a rating less than or equal to 2.0, was greater for the netizen reviews than for the viewer reviews.

INSERT TABLE 4 ABOUT HERE

In Table 5, we compare the review postings of viewer and netizen reviews for the period of four weeks after the release of each movie. The average number of overall daily review posts was 537.97 and the average daily review rating was 4.20. When we examined each review type separately, the average daily review rating for viewer reviews was higher than that for netizen reviews (4.30 > 4.05), while the


daily number of netizen reviews was greater than that of viewer reviews (310.71 > 227.25). These results show that the review distributions of viewer and netizen reviews differ from each other, meaning that it is important to consider the two types of reviews separately when investigating the impact of customer reviews on box office revenue.

INSERT TABLE 5 ABOUT HERE
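As a rough sketch of how such daily summary statistics can be computed, the snippet below aggregates a toy review table into the daily rating, daily volume, and positive/negative review ratios used throughout the paper. The column names and the positive-review cutoff of 4.0 are our own assumptions; the authors do not publish their processing code, and only the negative cutoff (ratings less than or equal to 2.0) is stated in the text.

```python
import pandas as pd

# Toy data standing in for the scraped Naver Movies reviews; all column
# names here are hypothetical, not the authors' own.
reviews = pd.DataFrame({
    "movie_id":    [1, 1, 1, 1],
    "date":        ["2014-01-10"] * 4,
    "review_type": ["viewer", "viewer", "netizen", "netizen"],
    "rating":      [5, 4, 5, 1],
})

daily = (
    reviews.groupby(["movie_id", "date", "review_type"])
    .agg(
        daily_rating=("rating", "mean"),    # arithmetic mean of the day's ratings
        daily_volume=("rating", "size"),    # number of reviews posted that day
        positive_ratio=("rating", lambda r: (r >= 4.0).mean()),  # assumed cutoff
        negative_ratio=("rating", lambda r: (r <= 2.0).mean()),  # cutoff from the text
    )
    .reset_index()
)
print(daily)
```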

4. EMPIRICAL MODEL DEVELOPMENT

4.1 Review Manipulation Behavior

The specification of our model to examine review fraud behavior was developed under the following assumption: if review fraud behavior is incentivized by a potential financial gain, then the motivation for generating fraudulent reviews will be stronger immediately after a movie's release (Duan et al., 2008a; Elberse and Anand, 2007). Thus, if there is review manipulation, then the distribution of reviews should exhibit a sharp discontinuity soon after the release. To capture the discontinuity, we employ regression models with specifications based on Garthwaite (2014), who examined the economic impact of celebrity endorsements on book sales. The author used a regression model representing the path of the influence of endorsements on book sales over time, reflected in the model by including dummy variables for the time after the celebrity endorsement has been offered. If the estimated coefficients for these dummy variables are positive, then celebrity endorsements have a positive effect on book sales, following Hendricks and Sorensen (2009). Following these prior studies, we estimate the review manipulation effect by examining the path of review posting over time using the following equation:

Review_posting_it = α + Σ_s β_s · I{s_days_after_release} + γ · I{Weekend} + μ_i + ε_it    (1)

where γ controls for the potential differences in review posting behavior between weekdays and weekends, μ_i is a movie-specific fixed effect, and ε_it is a stochastic error term. The β_s coefficients capture changes in the review postings for day s after an event. In this equation, we consider various dimensions of OWOM as dependent variables and regard a movie's opening day as the event. The indicator I{s_days_after_release} equals 1 if the current day is s days after the opening day of movie i, and 0 otherwise. If review manipulation is closely related to the opening-day event, then there should be noticeable changes in the flow of review posts immediately after the release, leading to statistically significant estimates of the β_s coefficients.
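As an illustration of how a specification like Equation (1) can be estimated, the sketch below builds a toy movie-day panel and fits the model with statsmodels, using C(days_after) for the I{s_days_after_release} dummies and C(movie_id) for the movie fixed effects. All names and the synthetic data are our own assumptions; the authors do not report their estimation code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Toy daily panel standing in for the Naver data (hypothetical columns).
n_movies, n_days = 5, 28
df = pd.DataFrame({
    "movie_id":   np.repeat(np.arange(n_movies), n_days),
    "days_after": np.tile(np.arange(n_days), n_movies),
})
df["weekend"] = (df["days_after"] % 7).isin([5, 6]).astype(int)
df["review_posting"] = rng.normal(size=len(df))  # placeholder OWOM measure

# Equation (1): C(days_after) creates the I{s_days_after_release} dummies
# (the beta_s path, relative to the omitted baseline day) and C(movie_id)
# absorbs the movie-specific fixed effects mu_i.
result = smf.ols("review_posting ~ C(days_after) + weekend + C(movie_id)",
                 data=df).fit()
print(result.params.filter(like="days_after"))
```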

4.2 Review Manipulation Behavior After Movie Release

As we noted, viewer reviews only include reviews by movie-goers who purchased their tickets via Naver Movies and watched the movies they purchased. Netizen reviews can be provided by anyone who


wants to post a review. The cost of posting promotional reviews is expected to be significantly lower for netizen than for viewer reviews. So, if review manipulation behavior is indeed a key factor influencing OWOM, then it should explain the differences in the review distributions across review types.

It is possible, though, that review manipulation through promotional reviews is not the only factor driving the differences in the review distributions across review types. The difference in preferences between viewers and non-viewers who post reviews may also influence the review distributions when the incentives to post promotional reviews fade and become less salient. The incentive to post promotional reviews is concentrated in the period when a movie is still screening. Once the screening ends, the incentive for posting promotional reviews should rapidly diminish. Thus, we compare the differences in the review distributions around the time when a movie's screening is finished. When this occurs, there no longer are incentives to boost a movie's ticket sales by posting promotional reviews, although differences in the review distributions driven by earlier manipulation still remain and could boost later sales through DVDs or TV viewership. Meanwhile, movie-goers who buy tickets on Naver and watch the movie are still allowed to post their opinions even after the movie is no longer in theaters. This provides us with the opportunity to indirectly examine whether the differences in the review distributions between the viewer and netizen reviews were caused by review manipulation. If there is a difference in the distributions after screening ends, then it is less likely that the differences in the early stage of a movie's release are solely due to review manipulation. If such differences are statistically insignificant, we can argue that the differences in the review distributions after a movie release are driven by review manipulation. This result would lead us to the conclusion that there is actually no difference in movie preferences between actual movie viewers and those who have not watched the movie yet.

We test the Viewer-Netizen Review Ratio Diminution Hypothesis (H1) by estimating the coefficients of a model over the 21 days after a movie's release, using customer review data for 28 days from the release:

Difference_in_review_posting_it = α + Σ_{s=0}^{20} β_s · I{s_days_after_release} + γ · I{Weekend} + μ_i + ε_it    (2)

where Difference_in_review_posting_it = (Daily_N_Rating_it − Daily_V_Rating_it) or (Positive_Review_Ratio_it^N − Positive_Review_Ratio_it^V), with N = netizen review and V = viewer review. By using the differences in postings across the review types as our dependent variables, we eliminate the influence of movie-specific fixed effects on review posting, such as movie production or advertising costs. If there is review manipulation immediately after a movie release, then the estimated coefficients should be significant and positive (β_s > 0).

We found that the differences in the positive review ratios were statistically significant up to two weeks after the release. The estimated constant for (Positive_Review_Ratio_it^N − Positive_Review_Ratio_it^V) is .09 (p < .1). This means that the positive review ratio of netizen


reviews was higher than that of viewer reviews over the 28 days after a movie release. The coefficients of the lagged terms from 1 to 13 days after the release were also positive (p < .01). However, the magnitude of the differences across review types decreased with time. These findings show that the difference in the positive review ratio across review types increases immediately in the first week after the movie release but then decreases in the following week, supporting our Viewer-Netizen Review Ratio Diminution Hypothesis (H1). However, such a pattern between the viewer and netizen reviews did not persist for the daily average ratings. The estimated constant was positive but not significant (constant = .16, p = .35). Also, no coefficients for the lagged terms were significant.[6] This shows that review manipulation is likely to take place during the first two weeks after a movie release, but the magnitude of its influence is not sufficient to increase the average ratings.

[6] To save space, we do not report results on the negative review ratios, (Negative_Review_Ratio_it^N − Negative_Review_Ratio_it^V). The estimated constant was positive and significant, and for the first week the coefficients of the lagged terms were significant and negative. This explains why the constant for the difference in the daily average ratings was insignificant. Also, early moviegoers were difficult to satisfy: the coefficients of the lagged terms were negative and significant. Thus, the difference in the negative review ratios was lower than the constant over the 28 days from movie release. And the constant for the difference in the negative review ratios was positive. This could be evidence of hostile review manipulation by competing studios.

Separately examining and comparing the shift in review distributions for the viewer and netizen reviews provides further support for our analysis. The differences in the review distributions across review types could have been caused by the difference in preferences between viewers and non-viewers who posted. However, we assume that such differences in preferences are not related to the review types. We predict that the periods of jumps in the daily ratings and positive review ratios would not differ across review types if we examine the review distributions of each type separately. To investigate the impact of such differences in preferences on the review distributions over time after the release, we analyzed the same model for the period when a movie's screening was ending.

Tables 7 and 8 show the empirical results for the viewer and netizen reviews, respectively. In Table 7, we find that the coefficients of Positive_Review_Ratio^V for viewer reviews from the opening day to six days after the release were positive and that their magnitude decreased rapidly with time. We found that the coefficient of Daily_Viewer_Type_Rating for the opening day (β = .39) was significant, while the coefficients of the lagged terms were not. However, the coefficients of Positive_Review_Ratio^N for netizen reviews remained significant longer (see Table 8). The path of the distribution for netizen reviews over time was significant and positive for more than two weeks, but the positive review ratio gradually faded two weeks following the release. Such results provide support for the impact of review manipulation, since the positive review ratio for netizens was sustained for a longer period of time than for

viewer reviews. Hence, our analysis shows that review manipulation is actively practiced in the early stage of a movie's life cycle, but its impact disappears two weeks after the release.

However, the differences in review distributions during the early period of a movie's life cycle may be partially driven by differences between the review posters who watched the movie and those who did not (Anderson and Simester, 2014; Mayzlin et al., 2014). By exploring the differences in the review distributions across the review types when the incentives for manipulation have faded, we can examine whether the differences in preferences influence the review rating distributions. To do this, we considered the review distributions around the time when the movie screening ended. Since the incentive to promote movies by manipulation disappears after a movie's screening ends, we used reviews for a four-week period.[7] We tested the Elimination of Promotional Review Effects Hypothesis (H2) by estimating the coefficients over a time period of 15 days, from seven days before to seven days after the final day of screening, using online review data for 29 days before and after the final day of screening:

Difference_in_review_posting_it = α + Σ_{s=−7}^{7} δ_s · I{s_days_movie_end} + γ · I{Weekend} + μ_i + ε_it    (3)

[7] Naver.com has a policy of allowing viewer reviewers to post their reviews for up to two weeks after watching the movies. Because of this, we chose four weeks, including two weeks after the end of screening.
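Equations (2) and (3) share the same event-study structure, differing only in the event day (release vs. final screening day) and the window. Below is a hedged sketch of how Equation (2) could be estimated on a toy panel; renaming the event variable to days around the end of screening and restricting the window to −7..+7 gives Equation (3). All column names and the synthetic data are our own assumptions, not the authors' code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Toy movie-day panel; diff_pos_ratio stands in for
# Positive_Review_Ratio(netizen) - Positive_Review_Ratio(viewer).
n_movies, n_days = 5, 28
panel = pd.DataFrame({
    "movie_id":  np.repeat(np.arange(n_movies), n_days),
    "event_day": np.tile(np.arange(n_days), n_movies),  # days after release
})
panel["weekend"] = (panel["event_day"] % 7).isin([5, 6]).astype(int)
panel["diff_pos_ratio"] = rng.normal(size=len(panel))

# Equation (2): dummies for s = 0..20 days after release, a weekend control,
# and movie fixed effects; differencing the two review types has already
# removed movie-level effects on the outcome itself.
eq2 = smf.ols("diff_pos_ratio ~ C(event_day) + weekend + C(movie_id)",
              data=panel[panel["event_day"] <= 20]).fit()
print(eq2.params.filter(like="event_day").head())
```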

Significant differences in the review distributions across review types would indicate that review manipulation is not the only factor driving the differences in review distributions during the early stage of a movie's release. If the differences in the distributions are not significant, then the posters did not differ from each other, regardless of whether they had watched the movie or not. This result would confirm that the differences in the distributions are significantly driven by manipulation.

As shown in Table 9, the estimated constants for the differences in both the review ratings (Daily_Netizen_Type_Rating_it − Daily_Viewer_Type_Rating_it) and the positive review ratio (Positive_Review_Ratio_it^N − Positive_Review_Ratio_it^V) were not significant. Most of the coefficients of the lagged and leading terms, from seven days before the end of screening until seven days after, were insignificant, with no extreme fluctuations in the differences in review postings. These results imply that time does not correlate with the differences in review distributions across review types around the time when the incentives to engage in manipulation have faded. Thus, the Elimination of Promotional Review Effects Hypothesis (H2) was not supported by the data in our context. The empirical results further indicate that the differences in the distributions immediately after the movie release were mainly driven by review manipulation behavior.

Prospective customers will account for differences in the review distributions when they make purchase decisions for movies. We next develop a model to find the conditions under which customers

tend to discount the information about the differences in the review distributions.

4.3 Effect of Review Distributions on Box Office Revenue

Since we are also interested in the influence of the differences in review distributions between the viewer and netizen reviews on box office revenue, we developed a system of equations using review and box office data for 28 days after release (Duan et al., 2008a, 2008b). OWOM and box office revenue are interdependent, which may result in endogeneity in the estimation. We incorporated this interdependence into the data-generating process by assuming that prospective customers are inclined to place greater weight on information from viewer reviews and that information from netizen reviews is mainly used as a supplement due to its lower credibility. The system of equations consists of the movie's revenue (Box Office Revenue Equation) and the volume of viewer reviews (Review Posting Equation). Since the interaction between the volume of reviews and box office revenue is not limited to the concurrent term (Duan et al., 2008a; Elberse and Eliashberg, 2003), we included in the Box Office Revenue Equation a lagged term for the volume of viewer reviews as well as the concurrent term, while adding a lagged box office revenue term in the Review Posting Equation. Because we are interested in the impacts of the differences in the review distributions between the viewer and netizen reviews on box office revenue, we added lagged explanatory variables for both the difference in ratings and the difference in volume between the two types to the Box Office Revenue Equation, allowing us to examine the impacts of these differences on box office revenue (Bellemare et al., 2017). Finally, we added movie-level fixed effects to control for time-invariant factors, such as budget, genre, and director, which influence both review posting and movie revenue. The system of equations is:

Box Office Revenue Equation:

log(Daily_Sales)_it = α + β1 log(Daily_Viewer_Reviews)_it + β2 log(Daily_Viewer_Reviews)_it-1 + β3 (Cumulative_Viewer_Rating)_it + β4 (Daily_Viewer_Rating)_it + β5 (Daily_Viewer_Rating)_it-1 + β6 (Netizen_Rating_High)_it-1 + β7 (Netizen_Volume_High)_it-1 + β8 (Netizen_Interaction)_it-1 + γ1 log(Screen)_it + γ2 log(Age)_it + γ3 (Weekend)_t + η_i + ε_it    (4)

Review Posting Equation:

log(Daily_Viewer_Reviews)_it = τ + δ1 log(Daily_Sales)_it + δ2 log(Daily_Sales)_it-1 + δ3 log(Cumulative_Viewer_Rating)_it + δ4 (Daily_Viewer_Rating)_it + ζ1 log(Age)_it + ζ2 (Weekend)_t + θ_i + ρ_it    (5)
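The variable definitions follow below. As a sketch of how such a system can be estimated jointly, the snippet uses IV3SLS from the linearmodels package on synthetic data, treating log sales and log viewer-review volume as mutually endogenous and using the lagged terms as instruments. This is our own simplified illustration (movie fixed effects and several regressors are omitted for brevity), not the authors' code; every column name is assumed.

```python
import numpy as np
import pandas as pd
from linearmodels.system import IV3SLS

rng = np.random.default_rng(2)
n = 200

# Synthetic stand-in for the movie-day panel; all column names are our own.
df = pd.DataFrame({
    "const": 1.0,
    "log_sales": rng.normal(size=n),
    "log_viewer_reviews": rng.normal(size=n),
    "log_viewer_reviews_lag": rng.normal(size=n),
    "log_sales_lag": rng.normal(size=n),
    "netizen_rating_high_lag": rng.integers(0, 2, size=n).astype(float),
    "netizen_volume_high_lag": rng.integers(0, 2, size=n).astype(float),
    "log_screens": rng.normal(size=n),
    "log_age": rng.normal(size=n),
    "weekend": rng.integers(0, 2, size=n).astype(float),
    "daily_viewer_rating": rng.normal(size=n),
})
df["netizen_interaction_lag"] = (df["netizen_rating_high_lag"]
                                 * df["netizen_volume_high_lag"])

# Two-equation system in the spirit of Equations (4) and (5): log_sales and
# log_viewer_reviews are jointly determined, so each enters the other
# equation as an endogenous regressor, instrumented by its lagged value.
equations = {
    "revenue": {
        "dependent": df["log_sales"],
        "exog": df[["const", "netizen_rating_high_lag", "netizen_volume_high_lag",
                    "netizen_interaction_lag", "log_screens", "log_age", "weekend"]],
        "endog": df[["log_viewer_reviews"]],
        "instruments": df[["log_viewer_reviews_lag"]],
    },
    "posting": {
        "dependent": df["log_viewer_reviews"],
        "exog": df[["const", "daily_viewer_rating", "log_age", "weekend"]],
        "endog": df[["log_sales"]],
        "instruments": df[["log_sales_lag"]],
    },
}
print(IV3SLS(equations).fit())
```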

Equation (4) considers the daily patterns of box office revenue and Equation (5) shows the volume of the viewer reviews. Daily_Sales_it is the daily revenue for movie i at time t. Daily_Viewer_Reviews_it is the


number of viewer reviews posted for movie i on day t. Daily_Viewer_Reviews_it-1 denotes the lagged term of the number of viewer reviews for movie i at time t−1. In Equation (5), we similarly incorporate both a current and a lagged term for Daily_Sales_it. Also, Cumulative_Viewer_Rating_it denotes the cumulative average review rating of the viewer reviews after a movie has been released; this captures the overall evaluations of the movie-goers through Naver Movies. Further, Daily_Viewer_Rating_it represents the average daily review rating from the viewer reviews for movie i at time t. We further added the differences in ratings and in volume between the viewer and netizen reviews, and the lagged variable of Daily_Viewer_Rating, to causally identify the impacts of the differences on box office revenue. Netizen_Rating_High is a dummy variable equal to 1 when Daily_Netizen_Type_Rating is greater than Daily_Viewer_Type_Rating, and 0 otherwise. Netizen_Volume_High is a dummy variable equal to 1 when log(Daily_Netizen_Type_Reviews) is greater than log(Daily_Viewer_Type_Reviews), and 0 otherwise. Netizen_Interaction is the interaction of the two variables Netizen_Rating_High and Netizen_Volume_High.

In the Box Office Revenue Equation, our interest centers on two terms, Netizen_Rating_High and Netizen_Interaction. When the rating from the source with the lower level of review credibility is higher and delivers a more favorable opinion of a movie, customers can either ignore the favorable rating or increase their expectations by incorporating the difference in ratings. According to previous research (Anderson and Shanteau, 1970), customers are likely to integrate all available information they receive and then reach a final decision. The coefficient of Netizen_Rating_High is thus expected to be positive. Regarding the interaction term, Netizen_Interaction, if prospective customers perceive information backed by more posts as more trustworthy, then the positive impact of the higher rating in the netizen reviews will be amplified, leading to a positive coefficient on the interaction term. However, if customers consider higher ratings from the netizen reviews, together with the higher volume from that type, as a signal of review manipulation, this may not lead to a salient positive effect on box office revenue; they could even take a reverse action by choosing not to watch the movie in order to punish it, which would result in a negative effect on revenue. If such a boomerang effect occurs, the coefficient of the interaction term will be negative.

Three control variables are also included in Model 1, based on previous research (Basuroy et al., 2003; Duan et al., 2008a; Elberse and Eliashberg, 2003): log(Screen)_it, log(Age)_it, and (Weekend)_t. Note that log(Screen)_it denotes the number of screens for movie i at time t, and log(Age)_it is the number of days from the opening day. Also, (Weekend)_t denotes whether or not day t is a weekend. The two variables log(Age)_it and (Weekend)_t are also incorporated in Model 2. We control for movie-level heterogeneity by including movie dummy variables (η_i and θ_i) in both models. Finally, ε_it and ρ_it are stochastic errors. We estimate the system of equations using three-stage least-


squares estimation (3SLS). We also provide results for each equation separately from ordinary least squares (OLS) estimation to compare the empirical results. Both sets of results are displayed in Tables 10 and 11.

INSERT TABLES 10 and 11 ABOUT HERE

For the Box Office Revenue Equation, we confirmed that the awareness effect on revenue is strong. The awareness effect appears as increasing product awareness and is reflected by the volume of customer reviews (Duan et al., 2008b). Both the coefficients of log(Daily_Viewer_Type_Reviews)_it and log(Daily_Viewer_Type_Reviews)_it-1 were significant and positive (.53 and .17, respectively): an increase in volume results in an increase in revenue. However, we did not find evidence of a persuasive effect, which influences customer evaluations of product quality (Duan et al., 2008b). This is reflected in the measures for ratings: (Cumulative_Viewer_Rating)_it, (Daily_Viewer_Rating)_it, and (Daily_Viewer_Rating)_it-1. The estimated coefficients were all positive but not significant. This shows that the valence impact was weaker than the volume effect in driving box office revenue.

The coefficient of (Netizen_Rating_High)_it-1, in contrast, was significant and positive (β_Netizen_Rating_High = .08). The higher rating of the netizen versus viewer reviews had a positive effect on box office revenue. This finding suggests that individuals did not completely ignore information from a less credible source. Prospective customers were likely to incorporate favorable information from netizen reviews when the netizen reviews delivered more positive information regarding product quality. This supports the Review Rating Differences Across Review Types and Box Office Revenues Hypothesis (H3): when the rating for a movie from netizen reviews is higher than that from viewer reviews, the difference in ratings has a positive impact on box office revenue. Higher awareness or movie popularity, reflected in the difference in volume between the netizen and viewer reviews, also showed a positive impact on revenue (β_Netizen_Volume_High = .16). A 10% increase in the volume difference between the types led to a 1.6% increase in box office revenue the following day.

The estimated coefficient of the interaction between Netizen_Rating_High and Netizen_Volume_High turned out to be negative (β_Netizen_Interaction = −.17). Prospective customers seemed to regard the combination of a higher rating and a higher number of reviews from netizen reviewers as a signal of fraud. From reactance theory, prospective customers are known to be inclined to act against the intentions of movie studios or movie-related staff members (Brehm, 1966; Pennebaker and Sanders, 1976). Even though the studios intend to boost sales of their movies by posting promotional reviews, this still may result in a decrease in box office revenue, which supports the Box Office Revenue Decrease Hypothesis (H4b). In our OLS estimation, the coefficient for this interaction was negative but insignificant; however, after controlling for endogeneity in the 3SLS estimation, it became significant. This result shows why we needed to consider OWOM and box office revenue simultaneously. If we considered the OWOM and box
For the box office revenue equation, we confirmed that the awareness effect on revenue is strong. The awareness effect captures growing product awareness and is reflected in the volume of customer reviews (Duan et al., 2008b). The coefficients of log(Daily_Viewer_Reviews)it and log(Daily_Viewer_Reviews)it-1 were both significant and positive (.53 and .17, respectively): an increase in review volume results in an increase in revenue. However, we did not find evidence of a persuasive effect, which influences customers' evaluations of product quality (Duan et al., 2008b) and is reflected in the rating measures (Cumulative_Viewer_Rating)it, (Daily_Viewer_Rating)it, and (Daily_Viewer_Rating)it-1; their estimated coefficients were all positive but not significant. This shows that the valence impact was weaker than the volume effect in driving box office revenue.
The coefficient of (Netizen_Rating_High)it-1, in contrast, was significant and positive (βNetizen_Rating_High = .08): the higher rating of the netizen versus viewer reviews had a positive effect on box office revenue. This finding suggests that individuals did not completely ignore information from a less credible source. Prospective customers were likely to incorporate favorable information from netizen reviews when those reviews delivered more positive information about product quality. This supports the Review Rating Differences Across Review Types and Box Office Revenues Hypothesis (H3): when the rating for a movie from netizen reviews is higher than that from viewer reviews, the difference in ratings has a positive impact on box office revenue. Higher awareness or movie popularity, reflected in the difference in volume between the netizen and viewer reviews, also showed a positive impact on revenue (βNetizen_Volume_High = .16); a 10% increase in the volume difference between the types led to a 1.6% increase in box office revenue on the following day.
The estimated coefficient of the interaction between Netizen_Rating_High and Netizen_Volume_High was negative (βNetizen_Interaction = -.17). Prospective customers seemed to regard the combination of a higher rating and a higher number of reviews from netizen reviewers as a signal of fraud. According to reactance theory, prospective customers are inclined to act against the perceived intentions of movie studios or movie-related staff (Brehm, 1966; Pennebaker and Sanders, 1976). Even though studios post promotional reviews intending to boost sales, doing so may thus result in a decrease in box office revenue, which supports the Box Office Revenue Decrease Hypothesis (H4b). In our OLS estimation, the coefficient for this interaction was negative but insignificant; after controlling for endogeneity in the 3SLS estimation, it became significant. This result shows why we needed to consider OWOM and box office revenue simultaneously: estimating the two equations separately with OLS leaves endogeneity in the estimates. The OLS results were nonetheless broadly consistent with the 3SLS results, and the coefficient signs were mostly the same.
For the posting equation (3SLS), the coefficient of log(Daily_Sales)it-1 was significant and positive (.53), suggesting that the number of reviews was influenced by revenue and confirming the interdependence between OWOM and revenue; log(Daily_Sales)it, however, was insignificant. The coefficient of (Daily_Viewer_Rating)it was positive (β = .20), while that of (Cumulative_Viewer_Rating)it was strongly negative (β = -2.27). This shows that the cumulative and daily review ratings can differ considerably in their impacts on the intention to post reviews, a finding consistent with Duan et al. (2008b). These contrasting roles of the cumulative and daily ratings in review posting have not been thoroughly examined in the prior literature. The results imply that review posters are inclined to differentiate themselves from others by posting negative reviews when the average review rating increases (Moe and Trusov, 2011), supporting the idea that the cumulative and daily ratings relate differently to positive and negative posting. Examining this fully, however, requires a different model specification; as it is outside the scope of this study, we leave it for future research.
To assess the robustness of the interaction effect between a higher valence and a higher volume of netizen reviews on box office revenue, we examined the difference in the positive review ratio between the netizen and viewer reviews. Prospective customers are unlikely to immediately detect which reviews were intentionally posted and thus manipulated. Movie-goers tend to perceive positive reviews as less reliable because such reviews are likely to be biased by promotional activities (Basuroy et al., 2003; Dellarocas, 2006; Mayzlin et al., 2014). Therefore, customers are more likely to assume that posted reviews are unusual, and likely to have been posted by the movie studios, when they observe a high frequency of positive (5-star) reviews (Mayzlin et al., 2014). We estimated these impacts by replacing the two interaction-related terms in Model 1 with (Netizen_Ratio_High)it-1 and (Netizen_RInteraction)it-1. In our model, (Netizen_Ratio_High)it-1 is a dummy variable that equals 1 when the daily ratio of positive reviews is greater for netizen reviews than for viewer reviews, and 0 otherwise; (Netizen_RInteraction)it-1 denotes the interaction between (Netizen_Ratio_High)it-1 and (Netizen_Volume_High)it-1. (See Table 11 for the results.)
INSERT TABLE 11 ABOUT HERE
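For concreteness, a minimal sketch of how these positive-ratio variables could be computed from review-level data follows; the `reviews` DataFrame and its columns are hypothetical.

```python
# Minimal sketch: daily positive-review ratios per review type, and the
# Netizen_Ratio_High dummy. All data below are hypothetical.
import pandas as pd

reviews = pd.DataFrame({
    "movie":  ["A"] * 6,
    "day":    [1, 1, 1, 1, 1, 1],
    "type":   ["netizen", "netizen", "netizen", "viewer", "viewer", "viewer"],
    "rating": [5.0, 5.0, 3.0, 5.0, 4.0, 2.5],
})

# Positive_Review: 1 for a 5-star review (following Mayzlin et al., 2014).
reviews["positive"] = (reviews["rating"] == 5.0).astype(int)

# Daily positive ratio per movie, day, and review type.
ratio = (
    reviews.groupby(["movie", "day", "type"])["positive"]
           .mean()
           .unstack("type")
)

# Netizen_Ratio_High: 1 if the netizen positive ratio exceeds the viewer
# ratio (here 2/3 > 1/3, so the dummy is 1).
ratio["netizen_ratio_high"] = (ratio["netizen"] > ratio["viewer"]).astype(int)
```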
The estimation results for this new specification (shown in Table 11) are consistent with the results in Table 10. The coefficients of (Netizen_Ratio_High)it-1 are positive and significant (.09 and .12 for the OLS and 3SLS estimations, respectively), indicating that individuals are more likely to choose to watch a movie when the information from a less credible source (netizens) delivers a more positive voice than a more credible source (viewers). However, the coefficients of (Netizen_RInteraction)it-1 were significant and negative (-.12 and -.13 for the OLS and 3SLS estimations, respectively), meaning that the combination had a negative effect on box office revenue. These estimates suggest that prospective customers inferred that a higher positive review ratio combined with a higher number of reviews from the netizens was a signal of review manipulation, dampening their intention to watch the movie and resulting in decreased box office revenue. Considering the difference in the positive ratio instead of the review rating between the viewers and netizens thus yields a consistent result, further reinforcing the Box Office Revenue Decrease Hypothesis (H4b). Overall, our analysis shows that when reviews with a higher valence (or positive ratio) and a greater volume are observed simultaneously, individuals are likely to infer the presence of review manipulation, discount the favorable message of these reviews, and ultimately watch the movie less, leading to a decrease in box office revenue.
5. CONCLUSION AND LIMITATIONS

In this study, we examined the dynamic effect of review fraud on OWOM and its subsequent impact on firm performance, specifically box office revenue in the movie industry. We investigated how the distributions of reviews differ depending on the review policy type (the viewer and netizen types), as well as how the resulting differences in review ratings and positive review ratios influence box office revenue. We find that for roughly two weeks after a movie's release, the motivation for review fraud has a significant impact on the generated reviews, and this influence gradually fades as time passes. Notably, the difference in the positive review ratio decreased steadily during the two weeks following the opening day. Moreover, once the incentive to post promotional reviews had completely diminished (after the end of screening), we did not find any differences in the review distributions across review types, suggesting there are no inherent differences in preferences between the two groups of reviewers. From our analysis of the Naver Movies data set, we can therefore conclude that promotional reviews are the main driver of the differences in review distributions in the early stage of a movie's release.
We further tested the impact of the differences in review distributions between the viewer and netizen types on box office revenue, to explore whether customers respond to signals of review manipulation. From the 3SLS results for the simultaneous equation model, we confirmed the awareness or popularity effect, reflected in the volume of reviews, but could not confirm the persuasive effect, reflected in the valence of reviews. In addition, a higher valence and a greater volume of netizen reviews each had a positive impact on box office revenue. However, when we examined the two factors combined, we found that individuals used them jointly to make inferences about the potential presence of review manipulation.
Subsequently, if they perceived review fraud, they were likely not to choose to watch such movies, punishing the fraud and decreasing box office revenue as a result. The results of the hypothesis tests are summarized in Table 12.
INSERT TABLE 12 ABOUT HERE
This research contributes to the literature on promotional review fraud and review policies. Prior literature has shown that if perceived review credibility in the face of promotional reviews is a critical factor for customers to trust reviews, customers are less likely to trust reviews on unverified sites that allow anyone to post. This could generate a serious credibility issue when reviews on unverified sites deliver information that is inconsistent with reviews on verified sites for an identical product or service. However, through our findings that a higher rating in netizen reviews had a positive impact on box office revenue while a higher rating combined with a greater volume of netizen reviews had a negative impact, we showed that review policy may not always be the critical factor for review credibility. Further, we found that customers were likely to recognize signals of review manipulation in unusual flows of reviews, such as the combination of a higher valence and a greater volume of reviews from less credible information sources. These findings suggest that reviews from unverified sites are still valuable and are likely to be taken into account by prospective customers to reduce product uncertainty. Consequently, the managers of such sites must keep in mind that a significant portion of the generated reviews could have been posted by promotional review posters; if these reviews look strange or unusual to customers, allowing unverified reviews may lead to substantial credibility damage and a decrease in firm performance. In addition, while the previous literature mainly focused on the cross-sectional effects of review manipulation, we took a dynamic approach to examine how the effects of OWOM change over time.
We also provide important managerial implications for practitioners, in particular the managers of online review sites. Our findings suggest that adopting a two-track review policy, similar to Naver's current policy of providing both netizen and viewer reviews, is a feasible option to consider, because there is a trade-off between the volume and the credibility of the generated reviews. Since each approach has strengths and weaknesses, adopting both and finding a good balance so that they complement each other will be more effective. However, as our findings show, a backfire effect could occur if the difference between netizen and viewer reviews grows overly large, as this would arouse suspicion among customers about the presence of promotional reviews. Thus, it is imperative that the managers of review sites keep track of both review types simultaneously, so that they can predict customer reactions and respond accordingly.
Although this study provides a new perspective for understanding dynamic review fraud behavior and its impact on box office revenue, it has limitations that need to be addressed. First, the contents of reviews may differ depending on the review type, that is, whether it is a viewer or a netizen review; this study does not compare those differences in review contents. Because differences in content could also have a significant impact on box office revenue, future research should examine how the contents of viewer and netizen reviews differ, as well as how such differences influence box office revenue. Second, we focused on reviews generated after a movie's release, not on the period before release. This was mainly because viewer reviews cannot be generated before release: a poster must watch the movie to post a viewer review. In other words, pre-release reviews cannot be separated into viewer and netizen reviews the way post-release reviews can, making it difficult to fully investigate the impact of manipulation through pre-release reviews. Future research should therefore use additional data sets that include reviews posted before movies are released. Another limitation concerns the impacts of review ratings. According to our results, cumulative and daily ratings seem to play different roles in determining review posting behavior, which we infer to be closely related to positive and negative review posting. However, we did not examine these impacts in depth, as they are not directly within the scope of this study; a different and deeper model specification is needed to delve further into this relationship. We leave this for future research.
REFERENCES
Amblee, N., Bui, T. 2011. Harnessing the influence of social proof in online shopping: The effect of electronic word of mouth on sales of digital microproducts. International Journal of Electronic Commerce, 16(2), 91-114.
Anderson, E., Simester, D. 2014. Reviews without a purchase: Low ratings, loyal customers, and deception. Journal of Marketing Research, 51(3), 249-269.
Anderson, N., Shanteau, J. 1970. Information integration in risky decision making. Journal of Experimental Psychology, 84(3), 441-451.
Basuroy, S., Chatterjee, S., Ravid, S. A. 2003. How critical are critical reviews? The box office effects of film critics, star power, and budgets. Journal of Marketing, 67(4), 103-117.
Baum, D., Spann, M. 2014. The interplay between online consumer reviews and recommender systems: An experimental analysis. International Journal of Electronic Commerce, 19(1), 129-162.
Bellemare, M. F., Masaki, T., Pepinsky, T. B. 2017. Lagged explanatory variables and the estimation of causal effect. Journal of Politics, 79(3), 949-963.
Brehm, J. W. 1966. A Theory of Psychological Reactance. Academic Press, Oxford, England.
Burgoon, M., Alvaro, E., Grandpre, J., Voulodakis, M. 2002. Revisiting the theory of psychological reactance. The Persuasion Handbook, 213-232.
Chevalier, J. A., Mayzlin, D. 2006. The effect of word of mouth on sales: Online book reviews. Journal of Marketing Research, 43(3), 345-354.
Cohen, J. B., Golden, E. 1972. Informational social influence and product evaluation. Journal of Applied Psychology, 56(1), 54.
De Vany, A., Walls, W. D. 2002. Does Hollywood make too many R-rated movies? Risk, stochastic dominance, and the illusion of expectation. Journal of Business, 75(3), 425-451.
Dellarocas, C. 2006. Strategic manipulation of internet opinion forums: Implications for consumers and firms. Management Science, 52(10), 1577-1593.

Dhar, V., Chang, E. A. 2009. Does chatter matter? The impact of user-generated content on music sales. Journal of Interactive Marketing, 23(4), 300-307.
Doh, S.-J., Hwang, J.-S. 2009. How consumers evaluate eWOM (electronic word-of-mouth) messages. CyberPsychology & Behavior, 12(2), 193-197.
Duan, W., Gu, B., Whinston, A. B. 2008a. Do online reviews matter? An empirical investigation of panel data. Decision Support Systems, 45(4), 1007-1016.
Duan, W., Gu, B., Whinston, A. B. 2008b. The dynamics of online word-of-mouth and product sales: An empirical investigation of the movie industry. Journal of Retailing, 84(2), 233-242.
Elberse, A., Anand, B. 2007. The effectiveness of pre-release advertising for motion pictures: An empirical investigation using a simulated market. Information Economics and Policy, 19(3-4), 319-343.
Elberse, A., Eliashberg, J. 2003. Demand and supply dynamics for sequentially released products in international markets: The case of motion pictures. Marketing Science, 22(3), 329-354.
Eliashberg, J., Elberse, A., Leenders, M. A. 2006. The motion picture industry: Critical issues in practice, current research, and new research directions. Marketing Science, 25(6), 638-661.
Garthwaite, C. L. 2014. Demand spillovers, combative advertising, and celebrity endorsements. American Economic Journal: Applied Economics, 6(2), 76-104.
Godes, D., Mayzlin, D. 2004. Using online conversations to study word-of-mouth communication. Marketing Science, 23(4), 545-560.
Hendricks, K., Sorensen, A. 2009. Information and the skewness of music sales. Journal of Political Economy, 117(2), 324-369.
Hennig-Thurau, T., Gwinner, K. P., Walsh, G., Gremler, D. D. 2004. Electronic word-of-mouth via consumer-opinion platforms: What motivates consumers to articulate themselves on the internet? Journal of Interactive Marketing, 18(1), 38-52.
Hu, N., Bose, I., Koh, N. S., Liu, L. 2012. Manipulation of online reviews: An analysis of ratings, readability, and sentiments. Decision Support Systems, 52(3), 674-684.
Koh, N. S., Hu, N., Clemons, E. K. 2010. Do online reviews reflect a product's true perceived quality? An investigation of online movie reviews across cultures. Electronic Commerce Research and Applications, 9(5), 374-385.
Krider, R. E., Li, T., Liu, Y., Weinberg, C. B. 2005. The lead-lag puzzle of demand and distribution: A graphical method applied to movies. Marketing Science, 24(4), 635-645.
Lee, Y.-J., Hosanagar, K., Tan, Y. 2015. Do I follow my friends or the crowd? Information cascades in online movie ratings. Management Science, 61(9), 2241-2258.
Legoux, R., Larocque, D., Laporte, S., Belmati, S., Boquet, T. 2016. The effect of critical reviews on exhibitors' decisions: Do reviews affect the survival of a movie on screen? International Journal of Research in Marketing, 33(2), 357-374.
Li, X. 2018. Impact of average rating on social media endorsement: The moderating role of rating dispersion and discount threshold. Information Systems Research, 29(3), 739-754.
Liu, Y. 2006. Word of mouth for movies: Its dynamics and impact on box office revenue. Journal of Marketing, 70(3), 74-89.
Luca, M., Zervas, G. 2016. Fake it till you make it: Reputation, competition, and Yelp review fraud. Management Science, 62(12), 3412-3427.
Malbon, J. 2013. Taking fake online consumer reviews seriously. Journal of Consumer Policy, 36(2), 139-157.
Mayzlin, D., Dover, Y., Chevalier, J. 2014. Promotional reviews: An empirical investigation of online review manipulation. American Economic Review, 104(8), 2421-2455.
Moe, W. W., Trusov, M. 2011. The value of social dynamics in online product ratings forums. Journal of Marketing Research, 48(3), 444-456.
Neelamegham, R., Chintagunta, P. 1999. A Bayesian model to forecast new product performance in domestic and international markets. Marketing Science, 18(2), 115-136.
Park, D. H., Kim, S. 2008. The effects of consumer knowledge on message processing of electronic word-of-mouth via online consumer reviews. Journal of Electronic Commerce Research, 7(4), 399-410.
Park, D. H., Lee, J., Han, I. 2007. The effect of on-line consumer reviews on consumer purchasing intention: The moderating role of involvement. International Journal of Electronic Commerce, 11(4), 125-148.

Pennebaker, J. W., Sanders, D. Y. 1976. American graffiti: Effects of authority and reactance arousal. Personality and Social Psychology Bulletin, 2(3), 264-267.
Reimer, T., Benkenstein, M. 2016. Altruistic eWOM marketing: More than an alternative to monetary incentives. Journal of Retailing and Consumer Services, 31, 323-333.
Sawhney, M. S., Eliashberg, J. 1996. A parsimonious model for forecasting gross box-office revenues of motion pictures. Marketing Science, 15(2), 113-131.
Sun, M. 2012. How does the variance of product ratings matter? Management Science, 58(4), 696-707.
Tuttle, B. 2014. Five outrageous ways people try to game online reviews. Money, August 5.
Yoon, Y., Polpanumas, C., Park, Y. J. 2017. The impact of word of mouth via Twitter on moviegoers' decisions and film revenues: Revisiting prospect theory. How WOM about movies drives loss-aversion and reference-dependence behaviors. Journal of Advertising Research, 57(2), 144-158.
Zhang, X., Dellarocas, C. 2006. The lord of the ratings: Is a movie's fate influenced by reviews? In Proceedings of the 2006 International Conference on Information Systems, Association for Information Systems, Atlanta, GA, 117.
Zhu, F., Zhang, X. M. 2010. Impact of online consumer reviews on sales: The moderating role of product and consumer characteristics. Journal of Marketing, 74(2), 133-148.
Zufryden, F. S. 1996. Linking advertising to box office performance of new film releases: A marketing planning model. Journal of Advertising Research, 36(4), 29-41.

Table 1. Summary of Hypotheses

H1. Viewer-Netizen Review Ratio Diminution Hypothesis. There is a difference in the ratio of positive reviews between viewer- and netizen-authored reviews, and the difference will diminish over time.
H2. Elimination of Promotional Review Effects Hypothesis. There will be a discernible difference in the distributions for viewer and netizen reviews when the effect of promotional reviews is eliminated.
H3. Review Rating Differences Across Review Types and Box Office Revenues Hypothesis. If a movie has a higher average rating for netizen reviews than for viewer reviews, then the difference in rating has a positive effect on box office revenue.
H4a. Higher Netizen Review Rating Effect on Box Office Receipts Hypothesis. A greater number of netizen reviews than viewer reviews increases the positive effect of the difference in ratings on box office revenue when the rating for netizen reviews is higher than for viewer reviews.
H4b. Box Office Revenue Decrease Hypothesis. A greater number of netizen reviews than viewer reviews, along with a higher rating for the movie, decreases box office revenue for the movie.

Table 2. Summary Statistics for Each Movie

Variable             N    Mean         Median       Maximum       Minimum
Total Revenue(a)     58   44,561,582   37,038,625   135,748,398   16,912,706
Total Movie-goers    58   5,601,295    4,702,143    17,613,682    2,108,273
Total Screenings     58   104,379      95,114       199,231       39,651

Variable                            N    Percent
# Domestic Movies                   29   50.00% (29/58)
# Shown by Major Distributors(b)    31   53.45% (31/58)

Notes. (a) Billion Won. (b) Four Korean major distributors: CJ Entertainment, Show Box, Next, and Lotte. These four major distributors were responsible for up to a 53.1% share of all movie-going customers in 2014 (Yonhapnews 2017, available in Korean at: www.yonhapnews.co.kr/bulletin/2017/01/21/0200000000AKR20170121056600005.HTML?input=1195m).

Table 3. Description of Key Variables (i = movie and t = day)

Screenit: Daily screening number.
Daily_Salesit: Daily box office revenue.
Daily_Viewer_Reviewsit: Daily # of viewer reviews posted after movie release.
Daily_Netizen_Reviewsit: Daily # of netizen reviews posted after movie release.
Daily_Viewer_Ratingit: Daily average rating of viewer reviews after movie release.
Cumulative_Viewer_Ratingit: Cumulative average review rating of viewer reviews after movie release.
Daily_Netizen_Ratingit: Daily average rating of netizen reviews after movie release.
Cum_Netizen_Ratingsit: Cumulative average review rating of netizen reviews after movie release.
Positive_Review: Dummy for each review; 1 = 5 stars, 0 otherwise (Mayzlin et al. 2014).
Positive_Review_Ratioit: # daily positive reviews divided by total # daily reviews.
Positive_Review_RatioitV: # daily viewer positive reviews divided by total # daily viewer reviews.
Positive_Review_RatioitN: # daily netizen positive reviews divided by total # daily netizen reviews.
Negative_Review: Dummy for each review; 1 = 1 or 2 stars, 0 otherwise.
Negative_Review_Ratioit: # daily negative reviews divided by total # daily reviews.
Negative_Review_RatioitV: # daily viewer negative reviews divided by total # daily viewer reviews.
Negative_Review_RatioitN: # daily netizen negative reviews divided by total # daily netizen reviews.
Netizen_Rating_Highit: Dummy; 1 if Daily_Netizen_Rating > Daily_Viewer_Rating, 0 otherwise.
Netizen_Volume_Highit: Dummy; 1 if Daily_Netizen_Reviews > Daily_Viewer_Reviews, 0 otherwise.
Netizen_Interactionit: Dummy; Netizen_Rating_High * Netizen_Volume_High.
Netizen_Ratio_Highit: Dummy; 1 if daily positive ratio of netizen reviews > viewer reviews, 0 otherwise.
Netizen_RInteractionit: Dummy; Netizen_Ratio_High * Netizen_Volume_High.
Ageit: Number of days after movie release.
Weekendit: Dummy; 1 = Friday, Saturday, or Sunday, 0 otherwise.

Table 4. Summary Statistics of Customer Reviews for the Two Review Types

Variable          # Obs.    Mean   ≤ 2.0            2.0 < & ≤ 3.0    3.0 < & ≤ 4.0     4.0 < & ≤ 5.0      Min   Max
Viewer Reviews    369,066   4.33   9,892 (2.68%)    22,761 (6.17%)   103,275 (27.98%)  233,141 (63.17%)   0.5   5.0
Netizen Reviews   504,602   4.13   66,434 (13.17%)  29,541 (5.85%)   55,165 (10.93%)   353,762 (70.05%)   0.5   5.0

Note: We conducted a two-sided t-test to compare the average ratings of viewer and netizen reviews (Diff = Mean(Netizen Reviews) - Mean(Viewer Reviews)). The difference is -.20 (4.13 - 4.33), with a t-value of -75.06, significant at the 99% level.
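As an aside on method, the two-sided t-test reported in the note above can be reproduced along the following lines; the arrays are toy data, and since the note does not state whether pooled or Welch variances were used, the Welch variant is shown as one plausible choice.

```python
# Hedged sketch of the two-sided t-test in the note above; toy data only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
viewer_ratings = rng.choice([3.0, 4.0, 5.0], size=1000, p=[0.1, 0.3, 0.6])
netizen_ratings = rng.choice([2.0, 4.0, 5.0], size=1000, p=[0.2, 0.2, 0.6])

t_stat, p_value = stats.ttest_ind(netizen_ratings, viewer_ratings, equal_var=False)
print(f"diff = {netizen_ratings.mean() - viewer_ratings.mean():.2f}, "
      f"t = {t_stat:.2f}, p = {p_value:.4g}")
```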

Table 5. Comparison of Review Posting After the Release of 58 Movies (First 4 Weeks After Release)

Variable                         N       Mean     Maximum   Minimum
After-Release(a) Daily Posting   1,624   537.97   5,119     26
After-Release(b) Daily Rating    1,624   4.20     4.96      1.52
Daily_Viewer_Reviews             1,624   227.25   2,731     6
Daily_Viewer_Rating              1,624   4.30     4.96      3.12
Daily_Netizen_Reviews            1,624   310.71   3,916     3
Daily_Netizen_Rating             1,624   4.05     5.00      1.14

Notes. (a) Includes both review types after the movie release: viewer and netizen. (b) We conducted a two-sided t-test to compare the averages of Daily_Viewer_Reviews and Daily_Netizen_Reviews (Diff = Mean(Daily_Viewer_Reviews) - Mean(Daily_Netizen_Reviews)). The difference was -83.45 (227.25 - 310.71), with a t-value of -6.84, significant at the 99% level. We also conducted a two-sided t-test to compare the averages of Daily_Viewer_Rating and Daily_Netizen_Rating (Diff = Mean(Daily_Viewer_Rating) - Mean(Daily_Netizen_Rating)); the difference was 0.25 (4.30 - 4.05), with a t-value of 17.44, significant at the 99% level.

Table 6. Empirical Results of Review Fraud Behavior After Movie Release

                        Coefficient (SE)
Variable                (Daily_Netizen_Ratingit -    (Positive_Review_RatioitN -
                         Daily_Viewer_Ratingit)       Positive_Review_RatioitV)
Opening Day             .19(.19)                     .06(.01)***
1 Day After Release     .38(.20)                     .08(.01)***
2 Days After Release    -.14(.24)                    .08(.01)***
3 Days After Release    .24(.23)                     .08(.01)***
4 Days After Release    .03(.25)                     .06(.01)***
5 Days After Release    -.00(.23)                    .06(.01)***
6 Days After Release    -.03(.23)                    .06(.01)***
7 Days After Release    .26(.21)                     .06(.01)***
8 Days After Release    .27(.21)                     .04(.01)***
9 Days After Release    .04(.21)                     .02(.01)***
10 Days After Release   -.15(.23)                    .03(.01)**
11 Days After Release   -.09(.25)                    .03(.01)***
12 Days After Release   .19(.26)                     .04(.01)***
13 Days After Release   .13(.20)                     .04(.01)***
14 Days After Release   -.20(.23)                    .02(.01)
15 Days After Release   -.03(.23)                    .02(.01)
16 Days After Release   .16(.24)                     -.00(.01)
17 Days After Release   .12(.23)                     .03(.01)***
18 Days After Release   -.37(.26)                    .01(.01)
19 Days After Release   .02(.22)                     .00(.01)
20 Days After Release   -.30(.23)                    .01(.01)
Weekend Dummy           Yes                          Yes
Movie-Level Dummy       Yes                          Yes
Constant                .16(.35)                     .09(.01)***
R2                      6.27%                        42.78%

Note: (Daily_Netizen_Ratingit - Daily_Viewer_Ratingit) and (Positive_Review_RatioitN - Positive_Review_RatioitV) are the dependent variables (N = netizen, V = viewer). ***p < .01, **p < .05.

Table 7. Empirical Results of Viewer Reviews After Movie Release

                        Coefficient (SE)
Variable                Daily_Viewer_Ratingit    Positive_Review_RatioitV
Opening Day             .39(.08)***              .10(.01)***
1 Day After Release     .10(.12)                 .06(.01)***
2 Days After Release    .15(.10)                 .04(.01)***
3 Days After Release    -.08(.13)                .02(.01)***
4 Days After Release    .01(.11)                 .02(.01)***
5 Days After Release    .02(.13)                 .02(.01)***
6 Days After Release    .08(.11)                 .02(.01)***
7 Days After Release    -.00(.12)                .01(.01)
8 Days After Release    .05(.10)                 .02(.01)**
9 Days After Release    .12(.11)                 .02(.01)**
10 Days After Release   .07(.12)                 .00(.01)
11 Days After Release   .13(.10)                 .01(.00)
12 Days After Release   -.05(.13)                -.00(.00)
13 Days After Release   .14(.09)                 -.00(.01)
14 Days After Release   .07(.09)                 .01(.01)
15 Days After Release   .05(.09)                 .00(.00)
16 Days After Release   -.01(.11)                .00(.00)
17 Days After Release   .00(.13)                 -.00(.00)
18 Days After Release   .09(.12)                 -.00(.00)
19 Days After Release   .16(.11)                 .00(.00)
20 Days After Release   .06(.09)                 -.00(.00)
Weekend Dummy           Yes                      Yes
Movie-Level Dummy       Yes                      Yes
Constant                3.97(.17)***             .19(.01)***
R2                      11.11%                   85.02%

Note: Daily_Viewer_Ratingit and Positive_Review_RatioitV are the dependent variables. ***p < .01, **p < .05.

Table 8. Empirical Results of Netizen Reviews After Movie Release

                        Coefficient (SE)
Variable                Daily_Netizen_Ratingit   Positive_Review_RatioitN
Opening Day             .59(.16)***              .16(.01)***
1 Day After Release     .48(.17)***              .14(.01)***
2 Days After Release    .01(.21)                 .12(.01)***
3 Days After Release    .16(.19)                 .10(.01)***
4 Days After Release    .05(.21)                 .08(.01)***
5 Days After Release    .01(.19)                 .08(.01)***
6 Days After Release    .05(.20)                 .08(.01)***
7 Days After Release    .26(.19)                 .07(.01)***
8 Days After Release    .32(.17)                 .05(.01)***
9 Days After Release    .17(.19)                 .04(.01)***
10 Days After Release   -.08(.18)                .04(.01)**
11 Days After Release   .04(.20)                 .04(.01)***
12 Days After Release   .14(.21)                 .03(.01)***
13 Days After Release   .28(.18)                 .03(.01)***
14 Days After Release   -.12(.22)                .03(.01)***
15 Days After Release   .01(.20)                 .03(.01)***
16 Days After Release   .15(.22)                 .00(.01)
17 Days After Release   .12(.19)                 .02(.01)**
18 Days After Release   -.28(.22)                .01(.01)
19 Days After Release   .19(.18)                 .01(.01)
20 Days After Release   -.24(.20)                .00(.01)
Weekend Dummy           Yes                      Yes
Movie-Level Dummy       Yes                      Yes
Constant                4.13(.25)***             .28(.01)***
R2                      12.22%                   82.29%

Note: Daily_Netizen_Ratingit and Positive_Review_RatioitN are the dependent variables. ***p < .01, **p < .05.

Table 9. Empirical Results of Review Manipulation Behavior After the End of Screening

                               Coefficient (SE)
Variable                       (Daily_Netizen_Ratingit -    (Positive_Review_RatioitN -
                                Daily_Viewer_Ratingit)       Positive_Review_RatioitV)
7 Days Before End of Release   -.29(.23)                    -.03(.03)
6 Days Before End of Release   .01(.23)                     -.01(.04)
5 Days Before End of Release   -.22(.25)                    -.03(.04)
4 Days Before End of Release   .20(.21)                     -.04(.04)
3 Days Before End of Release   .06(.24)                     -.02(.03)
2 Days Before End of Release   -.23(.21)                    -.06(.04)
1 Day Before End of Release    .18(.22)                     -.07(.03)**
Final Day of Release           .17(.25)                     -.00(.04)
1 Day After End of Release     .16(.24)                     -.03(.04)
2 Days After End of Release    .02(.23)                     .00(.04)
3 Days After End of Release    -.16(.28)                    -.11(.03)***
4 Days After End of Release    -.05(.24)                    -.04(.04)
5 Days After End of Release    .18(.17)                     -.05(.04)
6 Days After End of Release    .31(.23)                     -.05(.04)
7 Days After End of Release    .03(.20)                     -.06(.04)
Weekend Dummy                  Yes                          Yes
Movie-Level Dummy              Yes                          Yes
Constant                       -.43(.37)                    .03(.05)
R2                             9.40%                        17.94%

Note: (Daily_Netizen_Ratingit - Daily_Viewer_Ratingit) and (Positive_Review_RatioitN - Positive_Review_RatioitV) are the dependent variables. ***p < .01, **p < .05.

Table 10. Review Rating: OLS and 3SLS Empirical Results

Eq. (1): Box office revenue equation; dependent variable is log(Daily_Sales)it
Variable                          OLS              3SLS
log(Daily_Viewer_Reviews)it       .83(.02)***      .53(.12)***
log(Daily_Viewer_Reviews)it-1     -.16(.01)***     .17(.07)***
(Cumulative_Viewer_Rating)it      .89(.31)***      .70(.55)
(Daily_Viewer_Rating)it           .07(.08)         .15(.13)
(Daily_Viewer_Rating)it-1         -.01(.08)        .10(.08)
(Netizen_Rating_High)it-1         .00(.03)         .08(.03)***
(Netizen_Volume_High)it-1         .06(.02)***      .16(.02)***
(Netizen_Interaction)it-1         -.03(.03)        -.17(.04)***
log(Screen)it                     1.05(.02)***     1.01(.04)***
log(Age)it                        -.16(.01)***     -.17(.02)***
(Weekend)t                        .40(.01)***      .56(.05)***
Movie-Level Dummy                 Yes              Yes
Constant                          6.34(1.22)***    6.25(2.32)***
R2                                96.31%           95.30%

Eq. (2): Posting equation; dependent variable is log(Daily_Viewer_Reviews)it
Variable                          OLS              3SLS
log(Daily_Sales)it                .39(.02)***      -.03(.03)
log(Daily_Sales)it-1              .21(.01)***      .53(.02)***
(Cumulative_Viewer_Rating)it      -1.95(.47)***    -2.27(.40)***
(Daily_Viewer_Rating)it           .13(.10)         .20(.10)**
log(Age)it                        .12(.02)***      -.00(.02)
(Weekend)t                        .03(.02)         .33(.02)***
Movie-Level Dummy                 Yes              Yes
Constant                          .71(2.24)        4.49(1.78)***
R2                                89.63%           85.44%

***p < .01, **p < .05.

Table 11. Positive Review Ratio: OLS and 3SLS Empirical Results

Eq. (1): Box office revenue equation; dependent variable is log(Daily_Sales)it
Variable                          OLS              3SLS
log(Daily_Viewer_Reviews)it       .83(.01)***      .46(.12)***
log(Daily_Viewer_Reviews)it-1     -.17(.01)***     .21(.07)***
(Cumulative_Viewer_Rating)it      .80(.31)***      .44(.56)
(Daily_Viewer_Rating)it           .06(.07)         .14(.13)
(Daily_Viewer_Rating)it-1         .01(.07)         .17(.08)**
(Netizen_Ratio_High)it-1          .09(.22)***      .12(.03)***
(Netizen_Volume_High)it-1         .17(.05)***      .26(.06)***
(Netizen_RInteraction)it-1        -.12(.05)**      -.13(.06)**
log(Screen)it                     1.05(.01)***     1.02(.04)***
log(Age)it                        -.16(.01)***     -.16(.03)***
(Weekend)t                        .40(.01)***      .58(.05)***
Movie-Level Dummy                 Yes              Yes
Constant                          6.64(1.25)***    7.13(2.34)***
R2                                96.34%           95.09%

Eq. (2): Posting equation; dependent variable is log(Daily_Viewer_Reviews)it
Variable                          OLS              3SLS
log(Daily_Sales)it                .39(.01)***      -.04(.03)
log(Daily_Sales)it-1              .21(.01)***      .53(.02)***
(Cumulative_Viewer_Rating)it      -1.95(.34)***    -2.27(.40)***
(Daily_Viewer_Rating)it           .13(.09)         .20(.10)**
log(Age)it                        .12(.01)***      -.00(.02)
(Weekend)t                        .03(.02)         .33(.02)***
Movie-Level Dummy                 Yes              Yes
Constant                          .71(1.52)        4.52(1.78)***
R2                                89.63%           85.37%

***p < .01, ** p < .05.

Table 12. Summary of Hypothesis Test Results

H1. Viewer-Netizen Review Ratio Diminution Hypothesis. There is a difference in the ratio of positive reviews between viewer- and netizen-authored reviews, and the difference will diminish over time. Result: Supported.
H2. Elimination of Promotional Review Effects Hypothesis. There will be a discernible difference in the distributions for viewer and netizen reviews when the effect of promotional reviews is eliminated. Result: Not Supported.
H3. Review Rating Differences Across Review Types and Box Office Revenues Hypothesis. If a movie has a higher average rating for netizen reviews than for viewer reviews, then the difference in rating has a positive effect on box office revenue. Result: Supported.
H4a. Higher Netizen Review Rating Effect on Box Office Receipts Hypothesis. A greater number of netizen reviews than viewer reviews increases the positive effect of the difference in ratings on box office revenue when the rating for netizen reviews is higher than for viewer reviews. Result: Not Supported.
H4b. Box Office Revenue Decrease Hypothesis. A greater number of netizen reviews than viewer reviews, along with a higher rating for the movie, decreases box office revenue for the movie. Result: Supported.

Highlights

• Review manipulation influences the review distribution and sales.
• The impact of manipulation on review distribution and sales fades over time.
• The volume and valence of reviews influence consumer perceptions of review manipulation.