1 © 2016, American Marketing Association Journal of Marketing Research PrePrint, Unedited All rights reserved. Cannot be reprinted without the express permission of the American Marketing Association.
The Value of Marketing Crowdsourced New Products as Such: Evidence from Two Randomized Field Experiments
Hidehiko Nishikawa, Martin Schreier, Christoph Fuchs, and Susumu Ogawa
Hidehiko Nishikawa ([email protected]), Professor, Hosei University, Faculty of Business Administration, 2-17-1 Fujimi, Chiyoda-ku, Tokyo, 102-8160, Japan
Martin Schreier ([email protected]), Professor of Marketing, Department of Marketing, WU Vienna University of Economics and Business, Welthandelsplatz 1, 1020 Vienna, Austria
Christoph Fuchs ([email protected]), Professor of Marketing, TUM School of Management, Technical University of Munich, Arcisstrasse 21, 80333 Munich, Germany, and Rotterdam School of Management, Erasmus University, Burgemeester Oudlaan 50, 3062 PA, Rotterdam, The Netherlands
Susumu Ogawa ([email protected]), Professor of Marketing, School of Business Administration, Kobe University, 2-1 Rokkodai-cho, Nada-ku, Kobe, Hyogo, 657-8501, Japan
Acknowledgements: We are thankful to Ryohin Keikaku Co., Ltd., in particular to Michiko Suzuki, Takahiro Otomo, Tsunemi Kawana, Hajime Ikeuchi, Ikuko Yonezawa, Makoto Nakagawa, and Naohiro Osaka for their support in realizing this project, to Ajay Kohli for his feedback on how to best position this research, and to the entire JMR review team for their very helpful comments and suggestions. This work was supported by JSPS KAKENHI Grant Numbers JP24530535, JP16K03951, and JP15H03393 and by the Erasmus Research Institute of Management (ERIM).
The Value of Marketing Crowdsourced New Products as Such: Evidence from Two Randomized Field Experiments
Abstract
In order to complement their in-house, designer-driven efforts, companies are increasingly experimenting with crowdsourcing initiatives in which they invite their user communities to generate new product ideas. While innovation scholars have started to analyze the objective promise of crowdsourcing, the research presented here is unique in pointing out that merely marketing the source of design to customers might bring about an incremental increase in product sales. The findings from two randomized field experiments reveal that labeling crowdsourced new products as such, that is, marketing the product as “customer-ideated” at the point of purchase versus not mentioning the specific source of design, increased the product’s actual market performance by up to 20 percent. Two controlled follow-up studies reveal that the effect observed in two distinct consumer goods domains (food, electronics) can be attributed to a quality inference: consumers perceive “customer-ideated” products to be based on ideas that address their needs more effectively, and the corresponding design mode is considered superior in generating promising new products.
Keywords: crowdsourcing, idea generation, user involvement, new products, new product development, customer-centric innovation
In order to achieve success in innovation, firms are now increasingly searching for new product ideas outside their own boundaries (Franke et al. 2014; Jeppesen and Lakhani 2010; Lilien et al. 2002; von Hippel 2005). One such method to tap into the a priori unknown is crowdsourcing, that is, outsourcing idea generation to a potentially large and unknown population – “the crowd” – in the form of an open call (Afuah and Tucci 2012; Howe 2006; Jeppesen and Frederiksen 2006; Terwiesch and Ulrich 2009). In practice, crowdsourcing systems are often implemented online and aim to “gather ideas for new products and services from a large, dispersed ‘crowd’ of nonexperts (e.g., consumers)” (Bayus 2013, p. 226). As exemplified by consumer goods brands such as Dell, Lego, Starbucks, or Threadless, this typically takes the form of one-time or ongoing idea contests positioned around and within the firm’s user communities (Bayus 2013; Poetz and Schreier 2012; Stephen et al. 2015). The best of the resulting user-generated ideas are subsequently selected and taken up by the firm and converted into marketable new products. 1 Depending on the goal and framing of the idea contest, the spectrum of outcomes might span from truly new products to simply improved variants of the firm’s existing product offerings. While innovation scholars have started to analyze the objective promise of crowdsourcing as well as some contextual boundary conditions under which crowdsourcing appears to be more versus less beneficial to firms (Afuah and Tucci 2012; Bayus 2013; Boudreau et al. 2011; Girotra et al. 2010; Nishikawa et al. 2013; Ogawa and Piller 2006; Poetz and Schreier 2012; Stephen et al. 2015; von Hippel 2005), the primary goal of the research presented here is different. We look at how consumers perceive crowdsourced new products and, in particular, how the inferences
1. Strictly speaking, we are referring to a new product that originates from and is thus based on a crowdsourced product idea. For the sake of simplicity and brevity, we use the shorter term “crowdsourced new product” below to refer to a new product based on a crowdsourced, user-generated product idea which has been taken up and subsequently converted into a marketable new product by the contest-hosting firm.
they make impact their choices. Specifically, we aim to analyze the mere “face value” of marketing crowdsourced new products as such, that is, we ask whether actively marketing the source of design to customers might have an incremental effect on the product’s actual market performance. In other words, will consumer preferences for one and the same product be any different if that product is prominently labeled as “customer-ideated” at the point of purchase (POP) compared to a purchasing context where the source of the product’s design is not revealed? From an innovation management perspective, this research question is novel and interesting because the merit of a certain design mode is typically judged exclusively on the basis of its promise to generate objectively superior new products. In other areas of marketing, however, we have learned that communicating a firm’s activities to consumers might incrementally affect bottom-line figures. For example, organic food becomes tastier if we know it is organic (Johansson et al. 1999), handmade products are particularly attractive to us if labeled as such (Fuchs et al. 2015), and a German engine, Italian pasta, or French wine will be perceived to be of higher quality if its country of origin is revealed to consumers (Bilkey and Nes 1982). Recent lab-based consumer research has pointed to the possibility that a similar inferential process also exists in favor of crowdsourcing and user-driven design (Dahl et al. 2015; Fuchs and Schreier 2011; Schreier et al. 2012). In the present research, we thus argue that merely marketing crowdsourced new products as such might incrementally affect the product’s actual market performance. The empirical work presented below provides strong support for this idea. In particular, we present two randomized field experiments conducted in cooperation with the Japanese consumer goods brand Muji: one in their consumer electronics division (Study 1) and one in their food division (Study 2). 
In both studies, we were able to manipulate the POP display featuring
the crowdsourced new product upon its market introduction. Most critically, participating Muji stores were randomly assigned to conditions in which the source of the focal product’s design was or was not revealed on its POP display. We find that the presence of the crowdsourcing label (“ideated by customers”) increased the product’s actual market performance by up to 20 percent. Two controlled follow-up studies (Studies 3 and 4) suggest that this effect is largely due to a quality inference in which consumers perceive “customer-ideated” new products to be based on ideas that address their needs more effectively and in which the corresponding design mode is considered superior in generating promising new products, at least in the consumer goods domains studied here (a discussion of the findings’ limitations and generalizability is provided in the general discussion section). The results obtained in this research thus highlight that crowdsourcing might not only constitute a promising route to better new products, it might also help marketers set their products apart from the competition by actively communicating the source of design to customers.
The Potential Promise of Crowdsourcing
In this section, we review the emerging literature on crowdsourcing with a view to motivating our primary research question. We start by discussing its “objective” promise, that is, how crowdsourcing might allow firms to derive innovative new products, particularly in low-tech consumer domains (i.e., the context of the present inquiry). This discussion is important because it constitutes the phenomenological grounding of the main argument put forth here: as we reason in more detail below, consumers might value “customer-ideated” new products as such – over and above the product’s factual, objective properties, the mere presence of the crowdsourcing label at the POP might incrementally boost the product’s actual market performance.
The objective argument: Better new products
The main argument as to why firms engage in and hope to benefit from crowdsourcing is that the crowd might simply suggest promising new product ideas. As indicated above, the present analysis is centered on low-tech consumer goods fields where individual crowdsourcing participants are typically consumers or end users (e.g., Bayus 2013; Poetz and Schreier 2012; Stephen et al. 2015). In such crowdsourcing contests, new product ideas are submitted to an online platform in the form of verbal descriptions or rough concept drawings; based on some further community involvement or not, the contest-hosting firm finally selects the best ideas to be converted into marketable products. The key insight is that some, but of course not all, of the crowdsourced new product ideas might be commercially attractive – potentially even more so than the best ideas generated by a firm’s in-house designers. At first sight, this conjecture runs counter to the intuition of the traditional marketer, who typically believes that consumers or users do not possess the expertise (and/or motivation) to come up with truly meaningful ideas for the broader mass of a firm’s potential customer base (Moreau and Herd 2010; Schulze and Hoegl 2008; Ulrich 2007). On the other hand, research on the sources of innovation has repeatedly shown that users often innovate for themselves and that many of their innovations are also appealing to other consumers (Franke et al. 2006; Jeppesen and Frederiksen 2006; Lilien et al. 2002; von Hippel 2005). One historical case study on snowboarding, windsurfing, and skateboarding, for example, demonstrated that most of the first-of-type innovations and major improvements across those industries’ life cycles were originally invented by leading-edge users and not by firms (Shah 2002). 
More generally, research has revealed that a surprisingly large share of the general population has already engaged in innovation for their own personal use. For example, six percent of all UK consumers (i.e., three million individuals) have been identified as “user innovators” in the domain of
household products (von Hippel et al. 2012). Similar user-innovation statistics have been reported for the US and Japan (von Hippel et al. 2011). Against this backdrop, it seems plausible to predict that if firms manage to tap into this creative potential through crowdsourcing, at least a few promising new product ideas might result, particularly in low-tech consumer domains. Here, the underlying problem for which ideas are sought is relatively simple to understand and thus accessible to many external individuals, including the firm’s customers and the members of their user communities. In addition to a necessary match between design task complexity and user expertise, a further critical requirement for crowdsourcing to work in practice is the mere size and composition of the crowd activated. For example, Girotra et al. (2010, p. 593) draw on extreme value theory from statistics to model the idea generation process as one of repeated sampling from an underlying quality distribution; their finding is that “the expected quality of the best ideas is driven by the number of ideas generated” (in addition to the average quality and variance of the underlying quality distribution). Thus, the larger and more diverse the participant population, the higher the likelihood that a few exceptional ideas might be identified (Gross 1972; Terwiesch and Ulrich 2009). As an example, consider the T-shirt platform Threadless, which can draw on a pool of 800,000 users from around the globe who regularly submit 150 to 200 new and printable designs on a daily basis (Schreier et al. 2012). Compared to a necessarily far smaller and less diverse team of in-house designers, the crowd, in aggregate, should be more effective in mining the potential solution space to identify the few winning ideas (von Hippel 2005). 2 In support of this reasoning,
2. A similar argument has recently been put forth by a LEGO executive to support the promise of crowdsourcing: “90% of our customers just want to consume. Perhaps 10% want to make their own stuff. 1% have the skills to make something which is good enough for others to want to buy it. Perhaps 1% is high, let us say 0.1 or even 0.01%, but with a customer base of 3.2 million, that is still more than 3,000 people! At the moment we have 150 designers at LEGO” (cited in Jensen et al. 2014, p. 75).
a recent case study conducted in the context of baby products indeed revealed that the best crowdsourced ideas to solve a consumer problem faced by a given company significantly outperformed those generated by the firm’s designers in terms of novelty and customer benefit (Poetz and Schreier 2012). As indicated above, the firm still needs to convert the best crowdsourced ideas into marketable new products, which might prove to be a non-trivial task (e.g., with potential problems ranging from selecting the “right” ideas to translating a given raw idea into a reliable and aesthetically appealing product; Poetz and Schreier 2012; Nishikawa et al. 2013; Kornish and Ulrich 2014; Toubia and Flores 2007). However, the better the idea, the better the final product, ceteris paribus; based on an empirical study of the crowdsourcing platform Quirky, for example, Kornish and Ulrich (2014, p. 14) find that the commercial importance of the quality of the raw idea is huge, “with ideas one standard deviation better translating to an approximately 50% increase in sales rate.” Consistent with this line of argument, Nishikawa et al. (2013) recently presented a case study on a brand’s furniture division, revealing that the firm’s crowdsourced new products actually outperformed their designer-ideated counterparts in terms of actual sales and market survival rate. Thus, under certain conditions, crowdsourcing might indeed help firms to develop better new products.
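The “numbers argument” above can be made concrete with a short simulation in the spirit of the repeated-sampling model of Girotra et al. (2010): if idea quality is drawn from some underlying distribution, the expected quality of the best idea rises steadily with the number of ideas generated. The standard normal distribution and the sample sizes below are hypothetical choices made purely for illustration, not estimates from any crowdsourcing dataset.

```python
# Illustrative simulation of the "numbers argument" (Girotra et al. 2010):
# idea generation modeled as repeated sampling from an underlying quality
# distribution; the expected quality of the best idea grows with the
# number of ideas drawn. Distribution and sample sizes are hypothetical.
import random
import statistics

def expected_best_quality(n_ideas, n_trials=2000, seed=42):
    """Monte Carlo estimate of the expected quality of the best idea
    among n_ideas independent draws from N(0, 1)."""
    rng = random.Random(seed)
    best_per_trial = [
        max(rng.gauss(0, 1) for _ in range(n_ideas)) for _ in range(n_trials)
    ]
    return statistics.mean(best_per_trial)

# Expected best-idea quality rises with the number of ideas sampled.
for n in (10, 100, 1000):
    print(n, round(expected_best_quality(n), 2))
```

With these illustrative settings, the estimate climbs from roughly 1.5 standard deviations above the mean for 10 ideas to above 3 for 1,000 ideas, mirroring the extreme value logic cited above.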
The psychological argument: “Customer-ideated” as a cue that sells
In addition to the design mode’s “objective” promise, and more central to the present study’s intended contribution, we put forth a second argument as to why crowdsourcing might be beneficial to firms. Specifically, we argue that marketing crowdsourced new products as such, that is, actively marketing the source of design to customers, may incrementally increase the product’s actual market performance. The conceptual basis underlying this idea is consumer
inference literature, which broadly maintains that consumers construct spontaneous if-then linkages between information and conclusions (Kardes et al. 2004). For example, if we have to choose between two restaurants and our factual knowledge about them is limited, we might simply believe the more crowded one is better, and thus go with the flow (Banerjee 1992). Similarly, kidney transplant candidates are reportedly more inclined to refuse an organ offer in the case of earlier refusals in the queue due to negative quality inferences (Zhang 2010). In addition, consumers typically perceive products to be of higher quality in cases where their prices are high (price-quality inference; e.g., Rao and Monroe 1989), organic food becomes tastier if we know it is organic (Johansson et al. 1999), handmade products are particularly attractive to us if labeled as such (Fuchs et al. 2015), and German engines, Italian pasta, or French wine will be perceived to exhibit higher quality if the country of origin is revealed to consumers (Bilkey and Nes 1982). In short, and in more general terms, the focal “signal” received by consumers might change their beliefs about the product to be evaluated. Critical to the present inquiry, recent lab-based consumer research has pointed to the possibility that a similar inferential process also exists in favor of crowdsourcing and user-driven design (Dahl et al. 2015; Fuchs and Schreier 2011; Schreier et al. 2012). A “customer-ideated” new product might signal quality; consumers might believe that a product based on a customer-generated idea will simply address their needs better, thus discounting or reshaping their own opinions and perceptions about the product in question and its attributes. The reasons reported to underlie such a quality inference mirror the arguments put forth in favor of the “objective” promise of crowdsourcing. 
In particular, consumers have been found to readily construct a simple “user argument,” namely that “users know better what other users need.” This is because participants in crowdsourcing contests are believed to “belong to the same population as the [observing]
consumer and thus share characteristics inherent to group membership” (Schreier et al. 2012, p. 21). In-house designers, in contrast, are not (perceived as) consumers and therefore might not “get it right.” Not unlike pioneering scholars in the field (cf. von Hippel 2005), consumers thus believe that other users have “rich insight into unresolved consumption problems, which might provide them with multiple starting points to generate novel and useful ideas” (Schreier et al. 2012, p. 21). In a similar vein, lay consumers also associate crowdsourcing with a “numbers argument” similar to the one developed by Girotra et al. (2010): crowdsourcing might yield many more ideas compared to an in-house, designer-driven ideation track, and the more ideas are generated, the higher the likelihood of generating a few exceptional ones. Labeling a new product as “customer-ideated” might thus trigger a positive quality inference: consumers might equate “customer-ideated” with better products that address their needs more effectively (a discussion of the effect’s limitations and generalizability is provided in the general discussion section). Despite its conceptual appeal, and as with any lab-based experimental research, it remains unclear whether – and if so, to what extent – such an inference would also translate into behaviorally observable effects in the real world. At the POP, firms usually cannot accompany their products with detailed descriptions such as the design source manipulations used in the extant literature. Even if the use of such stimuli were possible, it remains unknown whether consumers would actually process and use the related information in their decision-making. In contrast to lab-based experiments, consumers in the “real world” typically have a far longer shopping list, which prevents them from engaging in elaborate processing for each and every item sought. 
For example, extant research highlights that US consumers only spend an average of 15 percent of their shopping time on actual decision-making; technically, this implies that more than two thirds of consumers spend less than 15 seconds in front of any given shelf (Flint et al. 2014). Finally, compared to merely hypothetical, lab-based experiments, it remains unclear
whether actual consumers behave similarly in the real world when making decisions in a natural shopping environment with real economic (and other) consequences. 3 This is where the present research can make a strong contribution. In sum, we predict that actively marketing a crowdsourced new product as “customer-ideated” at the POP may incrementally increase that product’s actual market performance. The main rationale to motivate this effect, we argue, is a quality inference in which consumers perceive “customer-ideated” products as “the better products.” The critical assumption for this pattern of effects to unfold in the real world, of course, is that customers are aware of the source of design when making the purchase decision.
Empirical Context and Overview of Studies
Our central design source cue hypothesis was tested in the course of two randomized field experiments (Studies 1 and 2), supplemented by more controlled, process-oriented follow-up studies (Studies 3 and 4). The studies were conducted in cooperation with the Japanese company Ryohin Keikaku Co., Ltd., which owns and manages the international brand Muji. Under this retail brand, the manufacturer markets a wide array of consumer products such as food, fashion, household goods, stationery, electronics, furniture, and cosmetics. In aggregate, Muji products are sold in 26 countries worldwide and generate about three billion dollars in sales per year. Over the last few years, and in addition to their in-house innovation efforts, Muji has successfully
3. Despite potentially high levels of internal validity, the (lack of) ecological validity (realism) is a common problem in lab-based, experimental research (Harrison and List 2004; Wells 1993); without ecologically valid field data, it remains inherently inconclusive whether a certain theoretical effect also applies in the real world and is thus relevant for managers (e.g., Bertrand et al. 2010; Harrison and List 2004; Sheth 1972). As an example, consider a recent study published in the Quarterly Journal of Economics (Bertrand et al. 2010) which tested ten well-known “psychological” treatments that have been shown to impact consumer demand in the lab. A randomized field experiment revealed that the odds of seeing such effects in the real world are not substantially better than chance; in fact, they are less than 50 percent (i.e., only four out of ten treatments had significant main effects in the sample studied).
launched a number of crowdsourced new products in different categories (Nishikawa et al. 2013). In these recent instances, however, the brand has not broadly communicated the specific source of design to its customers, meaning that Muji customers could not tell at the POP whether a certain product was based on a crowdsourced or designer-generated new product idea. Crowdsourcing at Muji typically involves broadcasting the focal topic or theme of innovation to its user community online, and anyone who registers on the Muji website can participate and submit an idea. Although the community can often vote, comment and provide feedback on the submitted user ideas, the firm’s internal experts ultimately select the best ideas to be realized. The firm’s selection criteria are identical to those applied to internal ideation exercises (e.g., there must be a good fit with the brand vision, and a minimum threshold of expected sales and gross margins must be met before a project moves to full production). The winning ideas are subsequently adopted by a Muji project team, which translates the raw idea into a marketable new product; this post-idea stage typically requires substantial resources and firm-internal efforts, not unlike the more classic, designer-driven ideation track (and the incentive structure for Muji staff is identical in both tracks). Therefore, the only central difference between a crowdsourced versus designer-ideated new product at Muji is the original source of the idea. In this research, we study the brand’s two most recently launched crowdsourcing projects – one in their consumer electronics and one in their food division. Our field studies involved a significant number of Muji stores across Japan, and the stores were randomly assigned to one of several experimental conditions; in particular, we were able to manipulate whether or not the source of the product’s design was revealed on the POP display featuring the new product in the stores. 
We can thus test whether actively marketing the product as “customer-ideated” incrementally affects the product’s actual market performance. The two field experiments
presented below are supplemented by a series of more controlled follow-up studies to address the field studies’ limitations and to shed more light on the process underlying the effect.
Study 1
Objectives and Methods
The primary objective of Study 1 is to test whether labeling crowdsourced new products as such, that is, marketing the product as “customer-ideated” at the POP versus not mentioning the specific source of the design, affects the product’s actual market performance. As indicated above, the context of Study 1 is consumer electronics. In particular, the brand launched a crowdsourcing competition to identify a new application for its Tag Tool, an accessory that consists of a silicone tag and a tool application that can be attached to it. The Tag Tool is designed to be used on the go and can be attached to a handbag, backpack, belt, etc. Past applications, ideated internally, included a pedometer, a thermometer, an alarm clock, and a compass. The present crowdsourcing initiative brought about the security buzzer, an application intended to provide its users with elevated levels of security on the go. The easy-to-use application consists of two buttons: pressing the first one activates a loud alarm noise in the case of an emergency, while pressing the second one deactivates the noise (retail price: 1,500 yen). During the product’s market introduction, we were able to implement a randomized field experiment involving a total of 46 Muji stores across Japan, and we tracked unit sales data for the security buzzer in these stores over a period of 67 days. Throughout the entire observation period, the new product was accompanied by a POP display describing the product’s newness, functionality, etc. (see Figure 1 for an example of the in-store display). Most importantly, we were able to manipulate whether or not the source of the product design was additionally revealed on the POP display (see Figure 2). We randomly assigned stores to one of two conditions. In the
first group of stores (the “baseline” condition; n = 23), we did not mention the source of the product’s design, while the POP display in the second group of stores (the “crowdsourced cue present” condition; n = 23) informed customers that the product was based on an “idea developed by Muji customers.” 4
INSERT FIGURES 1 and 2 ABOUT HERE
A comparison of the respective unit sales data between these two conditions (baseline vs. crowdsourced cue present) thus allows us to test our central prediction, namely that the mere presence of the design source cue will have a positive, incremental effect on the product’s actual market performance. Note that in an ideal setting, we would have involved a far larger number of stores to enhance power. In other words, we consider the sample size to be too small to allow meaningful statistical testing at the individual store level, particularly given the noisy environment in the field. However, it is critical that we can look at the (cell size-adjusted) aggregate outcome with adequate levels of confidence. In particular, we can test whether more security buzzers were sold in stores where the POP display revealed the source of the product design. In order to account for any general sales differences across the two pools of stores, we also collected the stores’ total (other) units sold during the observation period; thus, we can complement the absolute aggregate difference test with a proportion test by setting the security buzzer sales in relation to total sales. We also collected a series of control variables to test for significant differences between the stores in the two conditions. Variable selection was guided by Muji management. Our
4. We had to remove three stores from the baseline condition (giving us a final cell size of n = 20) because two stores were temporarily closed due to renovation and a third store mistakenly implemented a faulty set of POP displays.
findings suggest that the random assignment of stores to conditions was effective: the two pools of stores are comparable along all controls used. First, the stores did not differ in terms of general store size in square meters (Mgrand = 1,273 m², t(41) = .15, p = .88). Second, we found no significant differences (ps > .19) between the conditions in terms of the relative presence of store type (i.e., shop in shopping center, n = 19; shop in department store, n = 5; specialty store, n = 17; stand-alone shop on city street, n = 2), location (i.e., in a station building, n = 8), or geographical area (as segmented by Muji, i.e., Tokyo Center, n = 11; Tokyo East, n = 2; Tokyo West, n = 4; Kanagawa, n = 3; Saitama, n = 4; North Japan, n = 4; Chukyo, n = 5; Kinki West, n = 5; Kinki East, n = 4; Kyushu, n = 7). 5
Results
We first find that, in aggregate, more security buzzers were sold at stores where the POP display revealed the source of the product design versus not (units soldcue present = 330, units soldbaseline = 245). Because we ended up with three fewer stores in the baseline condition (n = 20), we adjusted the sales data in this condition for statistical testing (245/20 × 23 ≈ 282). The difference between conditions remains sizable and significant: we find that 17 percent more products were sold where the source of the design was revealed at the POP (units soldcue present = 330, units soldbaseline = 282, χ² = 3.77, p = .05). 6 We complemented this aggregate difference test with a proportion test which set the security buzzer sales in relation to the stores’ total (other) units sold. This test explicitly accounts for the general sales level across the two pools of stores. The
5. The only exception was Kinki East, where the “crowdsourced cue present” condition was overrepresented (p = .05); however, store size did not differ between stores in this vs. the other areas (p = .49).
6. The same result is obtained if we instead adjust the sales data in the “crowdsourced cue present” condition (330/23 × 20 ≈ 287): 17 percent more products were sold where the source of the design was revealed at the POP (units soldcue present = 287, units soldbaseline = 245, χ² = 3.32, p = .07).
findings again support our primary prediction: the design source cue manipulation significantly and positively affected the market performance of the crowdsourced new product (Pcue present = 6.39e-5, Pbaseline = 5.00e-5, z = 2.92, p < .01). As indicated above, we consider the sample size to be too small to allow meaningful statistical testing at the individual store level. However, we can at least visually zoom in on the store-level sales data to examine whether the aggregate effect found is likely to be driven by a few “outlier” stores. 7 As shown in Figure 3, this appears not to be the case. In particular, Figure 3 visualizes the two distributions in a pairwise manner: we pair the two best-performing stores in each condition, followed by the two second-best stores, the two third-best stores, and so on. The striking visual result is that the baseline “wins” over the crowdsourced cue present condition in only one pair of stores (pair #5). In all other store pairs, the design source cue manipulation brought about a positive sales increment. A significance test following this matched sampling approach confirms the visual result (Mcue present = 16.2, Mcue not present = 12.25; tpaired sample(19) = 6.35, p < .001), even though the respective pairs of stores do not appear to differ along any of the control variables captured (ps > .05). Finally, we applied the same logic to the (adjusted) daily sales data, and the results provide a similarly clear picture: the pool of stores in the baseline condition “wins” over the “crowdsourced cue present” stores on only 9 out of 67 days. In all other day pairs, the design source cue manipulation brought about a positive sales increment (see Figure 4). A significance test following this matched sampling approach again confirms this visual result (Mcue present = 4.93 vs. Mcue not present = 4.21, tpaired sample(66) = 8.74, p < .001).
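For transparency, the two aggregate tests reported above can be approximately reproduced from the summary statistics alone. The sketch below is illustrative only: the chi-square statistic uses the reported unit sales, while the store-total denominators for the proportion test are not reported in the text and are back-computed here from the reported proportions, so small rounding discrepancies relative to the published figures are to be expected.

```python
# Approximate replication of Study 1's aggregate tests from summary
# statistics. The two-proportion denominators below are NOT reported in
# the paper; they are back-computed from the reported proportions and
# serve illustration only.
import math

def chi2_equal_split(a, b):
    """Goodness-of-fit chi-square of two counts against a 50/50 split (1 df)."""
    expected = (a + b) / 2
    chi2 = (a - expected) ** 2 / expected + (b - expected) ** 2 / expected
    p = math.erfc(math.sqrt(chi2 / 2))  # survival function of chi-square, 1 df
    return chi2, p

def two_proportion_z(x1, n1, x2, n2):
    """Pooled two-proportion z-test statistic."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

chi2, p = chi2_equal_split(330, 282)  # reported: chi-square = 3.77, p = .05
z = two_proportion_z(330, 5_164_319, 245, 4_900_000)  # reported: z = 2.92
print(round(chi2, 2), round(p, 3), round(z, 2))
```

Both functions recover the reported statistics up to rounding (chi-square of about 3.76 versus the reported 3.77; z of about 2.92), which suggests the published figures are internally consistent.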
7 The descriptive statistics are as follows: Mcue present = 14.35 versus Mbaseline = 12.25; Mediancue present = 12.00 versus Medianbaseline = 8.50.
INSERT FIGURES 3 AND 4 ABOUT HERE
Discussion
Study 1 provides preliminary evidence from the field that labeling crowdsourced new products as “customer-ideated” (versus not mentioning the specific source of design at the POP) positively affects the product’s actual market performance. In particular, we find that aggregate unit sales of the security buzzer were 17 percent higher in the pool of stores where the design source was revealed to consumers. However, this study has two key limitations, which we aim to address in Study 2. First, the relatively small sample size and the single-product context somewhat limit the findings’ weight and generalizability. Second, an alternative explanation could exist for the effect obtained. In particular, one could argue that the findings from Study 1 might be due to a mere information account; the POP displays in the stores assigned to the treatment (versus the baseline) condition simply contained more specific information about the focal product (“customer-ideated”). Thus, the effect found might not arise from the unique, user-driven process account explained in the theory section, but from the simple fact that more specific information about the source of the product design was revealed to the consumer. If this is indeed the case, then mentioning “designer-ideated” (in the case of designer-ideated new products) should also positively affect the product’s market performance. We assess this possibility in Study 2.
Study 2
Objectives and Methods
The primary objectives of Study 2 are twofold. First, we aim to replicate the findings from Study 1 in a different product category using a different experimental field setting. Second,
we aim to address the “more specific information – more sales” account as an alternative explanation. As mentioned above, the context of Study 2 is Muji’s food division. In particular, the brand decided to introduce a new snack product on the market, a “crack pretzel” with a new flavor. In contrast to Study 1, we were also able to assess the objective promise of crowdsourcing here because the brand pursued two independent idea generation processes in parallel: on one track, ideas were generated by the firm’s professionals; on a second track, the brand launched a crowdsourcing competition. The process finally yielded two new products: the crowdsourced new product was a soybean-flavored crack pretzel and the designer-ideated one a jalapeño-flavored crack pretzel. Both ideas were new to the firm (i.e., Muji had never offered soybean-flavored or jalapeño-flavored snack products before). We were able to implement a randomized field experiment involving a total of 194 Muji stores across Japan, and we tracked unit sales data for both products in these stores over a period of 16 days. Because both products were introduced into the market simultaneously, sold at the same price (105 yen), and displayed on the same shelf in each of the stores, we had the unique opportunity to test which of the two products actually sold better in a highly controlled field setting. Relative unit sales thus serve as our dependent variable (i.e., unit sales of the crowdsourced new product divided by unit sales of the designer-ideated new product per store). Although we were able to involve a substantially larger number of stores in this study compared to Study 1, we still consider the critical cell size (which doubled from n > 20 to n > 40) to be less than ideal. However, because we can look at relative unit sales data for the two rival products, we believe our focal hypothesis test offers enough power to detect a theoretical effect at the individual store level as well.
Throughout the entire observation period, both products were accompanied by a POP display describing the product’s newness, taste, etc. (original size of the POP displays: 128mm x 91mm; see Figures A1 through A5 in Web Appendix A). Most importantly, we were able to manipulate whether or not the source of each design was also revealed on the POP display. We randomly assigned stores to four experimental conditions. In condition 1, the “baseline” condition (n = 48), we did not mention the source of design for either of the two products. A comparison of the products’ relative sales across stores in this condition thus allows a clean test to show which of the two products performed better in “objective” terms. In condition 2, the “crowdsourced cue present” condition (n = 49), the POP display featuring the crowdsourced new product informed customers that the product was based on an “idea developed by Muji customers.” The POP display featuring the designer-ideated new product was identical to the baseline display and did not reveal the source of design. As in Study 1, a comparison of the respective sales between these two conditions (baseline vs. crowdsourced cue present) thus allows us to test our central prediction, namely that the mere presence of the design source cue has a positive, incremental effect on the relative market performance of the crowdsourced new product. In condition 3, the “designer-ideated cue present” condition (n = 48), the POP display featuring the designer-ideated new product informed customers that the product was based on an “idea developed by Muji designers.” The POP display featuring the crowdsourced new product was identical to the baseline and did not reveal the source of design. A comparison of sales between the first and the third condition (baseline vs. designer-ideated cue) thus allows us to assess whether an explicit design source cue will similarly affect sales of the designer-ideated new product.
If this is indeed the case (that is, if the relative market performance of the designer-ideated new product increases similarly when the product’s design source cue is present on the
POP display), one could argue that the common mechanism underlying both contrasts is a simple “more specific information – more sales” account. Put differently, this would imply that mentioning the specific source of the design adds similar value for both types of products. However, if the relative market performance is more strongly affected when the design source cue of the crowdsourced new product is revealed (“idea developed by Muji customers”), we would have evidence suggesting a unique user-driven process account, as explained in the theory section. Finally, in condition 4, the “both cues present” condition (n = 49), the source of both product designs was featured on their POP displays. The original prediction was that compared to the baseline, the relative sales of the crowdsourced new product should see an increase similar to that observed in condition 2. Feedback provided by store managers, sales personnel and customers, however, indicated that the specific combination of the two POP displays implemented in this condition was problematic, if not experimentally ill-defined. As depicted in Figure A5 in Web Appendix A, both POP displays had the specific design source cue printed in a small font (~12.5-point Gothic), embedded in an identical graphical element (a silhouette of a little man) of the same color (dark red, the classic Muji brand color). As a consequence, we learned, the likelihood was very high that even motivated customers read the respective design source cue details in only one of the two POP displays carefully and then made the quick, implicit inference that the same information was also provided on the second, similar-looking POP display. Thus, it is hard to interpret the findings with confidence. We therefore postpone the presentation of these findings to the study discussion section (and address this issue again in Study 3). As in Study 1, we were able to collect a series of store-level control variables suggested by Muji’s management. 
First, we captured general store size in square meters (M = 730.60 m2).
Second, Muji categorizes its stores into five groups according to the amount of store space dedicated to food. We account for this difference using four dummies, with the most common category as the baseline (Category D: n = 90; dummies: Category A, n = 3; Category B, n = 20; Category C, n = 64; Category E, n = 17). Third, we captured the store type using three dummy variables, with the most common type as the baseline (specialty store, n = 84; dummies: shop in department store, n = 19; shop in shopping center, n = 83; stand-alone shop on city street, n = 18). Fourth, we captured whether the store is located in a station building (n = 49). Finally, the stores’ geographic area was captured using seven dummy variables, with the largest as the baseline (Kanagawa, n = 27; dummies: Tokyo Center, n = 22; Tokyo East, n = 23; Tokyo West, n = 23; Saitama, n = 24; North Japan, n = 22; Chukyo, n = 26; Kyushu, n = 27). A comparison of stores along these variables broadly confirms that the randomization procedures were effective. In other words, stores did not differ systematically across conditions in terms of the controls captured. 8 Because we aim to test our central prediction not only in the aggregate but also at the individual store level in Study 2, we will complement the univariate hypothesis test with a multivariate analysis (accounting for the controls).
Results
The objective promise of crowdsourcing. We first explored which of the two products, in aggregate, sold better on the market. Across all four conditions, we find that the crowdsourced new product performed substantially and significantly better (units sold = 8,507) than the
8 Three significant differences emerged: First, compared to condition 1 (baseline), more stores from the Tokyo East region were assigned to condition 2 (crowdsourced cue present) (χ2(1) = 4.19, p = .04). Second, compared to condition 1, more stores from the Kanagawa region were allocated to condition 3 (designer cue present) (χ2(1) = 5.79, p = .02). Third, compared to condition 1, condition 4 (both cues present) was used in a slightly different set of stores with regard to store space dedicated to food (i.e., store size dummy C is underrepresented; χ2(1) = 4.98, p = .03).
designer-ideated new product (units sold = 5,553, χ2 = 620.63, z = 35.23, p < .001). In order to assess the “objective” promise of crowdsourcing, however, we have to focus on condition 1 (baseline), as it was the only condition in which the POP displays did not reveal the source of either product’s design. Our findings reveal that the crowdsourced new product outperformed the designer-ideated one by a wide margin (units soldcrowdsourced new product = 2,086, units solddesigner-ideated new product = 1,425, χ2 = 124.44, p < .001). Expressed in terms of our focal dependent variable
(relative unit sales), the findings indicate that for every designer-ideated new product sold, an average of 1.46 units of the crowdsourced new product were sold. We thus find that within the context studied, consumers prefer the crowdsourced new product over its designer-ideated counterpart. Testing the design source cue hypothesis. Next, we compared relative sales in condition 1 (baseline) and condition 2 (crowdsourced cue present). In support of our focal hypothesis, we find that the crowdsourced new product performed significantly better when its POP display revealed the source of its design (crowdsourced cue present: units soldcrowdsourced new product = 2,246, units solddesigner-ideated new product = 1,356; baseline: units soldcrowdsourced new product = 2,086, units solddesigner-ideated new product = 1,425; χ2 = 6.46, p = .01). These findings are mirrored in the analysis at the store level: the presence of the design source cue has a positive and significant effect on the market performance of the crowdsourced new product (Mcue present = 1.86, Mbaseline = 1.55, t(190) = 2.51, p = .01). Specifically, we found that the relative market performance of the crowdsourced new product increased by an average of 20 percent where it was marketed as “ideated by Muji customers.” 9
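These aggregate tests can be reconstructed from the reported unit-sales counts. The sketch below assumes a Pearson chi-square without continuity correction — on the 2 × 2 condition-by-product table for the focal contrast, and on the two baseline counts against a 50/50 split for the within-condition contrast — which reproduces both reported statistics.

```python
def pearson_chi2_2x2(a, b, c, d):
    """Pearson chi-square for the 2x2 table [[a, b], [c, d]], no correction."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Rows: condition (cue present, baseline);
# columns: product (crowdsourced, designer-ideated).
chi2_cue_vs_baseline = pearson_chi2_2x2(2246, 1356, 2086, 1425)

# Baseline condition only: crowdsourced vs. designer-ideated unit sales
# tested against equal expected counts.
chi2_baseline = (2086 - 1425) ** 2 / (2086 + 1425)

# Focal dependent variable in the baseline condition.
relative_sales_baseline = 2086 / 1425   # ~1.46
```

The reconstruction yields χ2 ≈ 6.46 for the cue-present vs. baseline contrast, χ2 ≈ 124.44 within the baseline, and a relative-sales ratio of about 1.46, matching the text.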
9 We also performed the same visualization exercise as in Study 1. As shown in Figures A6 and A7 in Web Appendix A, the effects found in Study 2 are similar in strength to those reported in Study 1.
We complemented the univariate hypothesis test with a multivariate ordinary least squares (OLS) regression analysis which controlled for the following extraneous variables at the store level: (1) general store size (in square meters; mean-centered), (2) amount of store space dedicated to snack food (four dummies), (3) store type (three dummies), (4) store location (one dummy), and (5) geographical area (seven dummies). The results provide converging evidence for the findings reported above (see Table A1 in Web Appendix A): the design source cue manipulation of the crowdsourced new product is significantly and positively related to sales (b = .45, SE = .32, p = .001; estimated means [GLM]: Mcrowdsourced cue present = 1.93, Mbaseline = 1.48). We also find that several control variables are significant predictors of our dependent variable; for example, the crowdsourced new product’s performance was generally worse in larger Muji stores, in stores located in station buildings and in the Tokyo Center area, while performance was better in Muji stores with more space dedicated to food, in stores located in shopping centers or in the Kyushu area. While these main effects are of limited interest conceptually, it is worth noting that they did not significantly interact with our focal treatment. This is important because it indicates that the treatment effect is not contingent on the objective performance of the crowdsourced new product (if this had been the case, for example, we would have seen a significantly weaker/stronger treatment effect in the Tokyo Center/Kyushu area). Alternative explanation: “more specific information – more sales.” In order to assess whether a similar design source cue effect is visible in the case of the designer-ideated new product, we next compared relative sales between condition 1 (baseline) and condition 3 (designer-ideated cue present).
In the aggregate, we find no evidence for this account (designer-ideated cue present: units soldcrowdsourced new product = 2,048, units solddesigner-ideated new product = 1,288; baseline: units soldcrowdsourced new product = 2,086, units solddesigner-ideated new product = 1,425; χ2 = 2.80, p = .09). These findings are also reflected in the analysis at the store level (Mdesigner cue present = 1.71,
Mbaseline = 1.55, t(190) = 1.32, p = .19). One striking (albeit statistically insignificant) result is that the effect even ran in the “wrong” direction: if the design source cue manipulation generally did boost sales due to a “more specific information – more sales” account, then the relative sales of the crowdsourced new product should decrease when the designer-ideated new product was sold as such (“idea developed by Muji designers”). As we do not observe this pattern, we conclude that this alternative account is unlikely to explain the design source cue effect found for the crowdsourced new product. Similarly, an OLS regression accounting for the various extraneous variables captured at the store level reveals that the design source cue manipulation of the designer-ideated new product is not significantly related to sales (b = .10, SE = .12, p = .38; estimated means [GLM]: Mdesigner cue present = 1.68, Mbaseline = 1.58; see Table A2 in Web Appendix A). 10
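The store-level OLS adjustment used throughout these analyses can be illustrated in closed form. The sketch below simulates hypothetical store data (one cue dummy plus one mean-centered control) with a built-in cue effect of .45, the size of the paper’s estimate; it illustrates the estimator, not a reanalysis of Muji’s (proprietary) records.

```python
import random
from statistics import mean

def ols_two_predictors(y, x1, x2):
    """Closed-form OLS slopes for y = b0 + b1*x1 + b2*x2 + e."""
    x1bar, x2bar, ybar = mean(x1), mean(x2), mean(y)
    s11 = sum((a - x1bar) ** 2 for a in x1)
    s22 = sum((a - x2bar) ** 2 for a in x2)
    s12 = sum((a - x1bar) * (b - x2bar) for a, b in zip(x1, x2))
    s1y = sum((a - x1bar) * (b - ybar) for a, b in zip(x1, y))
    s2y = sum((a - x2bar) * (b - ybar) for a, b in zip(x2, y))
    det = s11 * s22 - s12 ** 2
    return (s22 * s1y - s12 * s2y) / det, (s11 * s2y - s12 * s1y) / det

random.seed(1)
n = 100
cue = [i % 2 for i in range(n)]                  # design source cue present (0/1)
size = [random.gauss(0, 100) for _ in range(n)]  # store size, deviation from mean

# Hypothetical data-generating process for relative unit sales.
rel_sales = [1.5 + 0.45 * c - 0.001 * s + random.gauss(0, 0.2)
             for c, s in zip(cue, size)]

b_cue, b_size = ols_two_predictors(rel_sales, cue, size)
```

Partialing out the control, the estimator recovers the simulated cue effect up to sampling noise; the paper’s model includes the full set of dummy controls in the same way.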
Discussion
Study 2 yields three important findings. First, we detected a strong consumer preference for the crowdsourced new product over its designer-ideated counterpart, which provides empirical support for the objective promise of crowdsourcing discussed in the literature. Where the POP displays featuring the two products did not reveal their respective design sources, the stores sold an average of 1.55 units of the crowdsourced new product for every designer-ideated unit sold. Second, we successfully replicated the findings obtained in Study 1 and thus again found support for our primary prediction. In particular, we found that the relative market performance of the crowdsourced new product increased by 20 percent when it was actively
10 We also performed the two tests (crowdsourced cue present vs. baseline and designer-ideated cue present vs. baseline) in a simultaneous OLS regression analysis. As shown in Table A3 in Web Appendix A, the findings mirror the ones reported in the main body of the manuscript.
marketed as “ideated by Muji customers” compared to when the source of the product design was not revealed at the POP. Third, and critically, the respective design source cue manipulation did not significantly affect sales of the designer-ideated new product. Thus, a “more specific information – more sales” account is unlikely to underlie the design source cue effect found for the crowdsourced new product. Recall that originally we could also look at condition 4 (both cues present) with the prediction that, compared to the baseline, the relative sales of the crowdsourced new product would see an increase similar to the one in condition 2. This was not the case, however, even though the direction of the effect is as predicted (multivariate: b = .07, SE = .12, p = .53; estimated means [GLM]: Mboth cues present = 1.59, Mbaseline = 1.52). As indicated above, we argue that a plausible and qualitatively supported reason for this non-finding can be found in the fact that the design source cue manipulations in this condition were simply not received by consumers as intended. This is somewhat speculative, however, and alternative interpretations are possible. In order to provide more conclusive evidence on this issue, we designed a large-scale follow-up study (Study 3) in which we made the differences between POP displays more salient (see next section for details). If our reasoning is correct, we should thus observe that the positive effect of our design source cue manipulation of the crowdsourced new product is similarly strong and thus independent of the corresponding manipulation of the designer-ideated new product.
Study 3
Objectives and Methods
The first aim of Study 3 is to clarify whether the positive effect of our design source cue manipulation of the crowdsourced new product is independent of the corresponding manipulation of the designer-ideated new product. Second, instead of ruling out alternative explanations, we
set out to “rule in” our suggested account, that is, we aim to shed more light on the process underlying the effect. In particular, we set out to test whether the effect is indeed due to the postulated quality inference. In this study, a professional market research agency recruited 3,296 Japanese consumers to participate in an online consumer survey (Mage = 42 years, 41% females). At the outset, participants were told that “In this survey, we are interested in your preferences for crack pretzels recently marketed by Muji.” Participants were then randomly assigned to one of the four conditions employed in Study 2. While both POP displays were of the same color in the field study (dark red; see Figures A2 through A5 in Web Appendix A), the present study utilized two different colors (green and blue) in order to visually highlight the differences between the two products featured on the POP displays (see Figures B1 through B8 in Web Appendix B). The specific assignment of POP colors to each product was counterbalanced among participants. Apart from this color change, the POP displays were identical to the ones used in Study 2. The dependent variable was a dichotomous preference measure: “If you wanted to buy a snack product, which of the two Muji crack pretzels would you prefer to buy?” Participants revealed their preferences by clicking on a check-box depicted below the image of the POP display. Revealed preferences were coded as 0 or 1 where participants preferred the designer-ideated or crowdsourced new product, respectively. On the next page, participants were invited to briefly elaborate on the reasons for their choice using an open-ended text-box (“Please elaborate in your own words on why you have chosen the product you indicated above”). 
Two trained research assistants who were blind to our hypotheses coded the participants’ statements along the following variables on a three-point scale (-1 = variable mentioned in relation to / in favor of designer-ideated new product, 0 = variable not mentioned, 1 = variable mentioned in relation to / in favor of crowdsourced new product). First,
we captured our main process variable, that is, whether participants indicated that they had chosen their product because they felt the product and/or its underlying design mode was better, of higher quality, etc. Second, we captured whether participants mentioned the mere newness of crowdsourcing as an alternative explanation underlying their potential preference for the crowdsourced new product. Third, and more generally, we captured whether participants indicated that certain elements of the POP displays (color, font, graphics, logo, etc.) affected their choices (e.g., “[Because the] color is good,” #3336220; “Because there was an illustration,” #8565174; “Because I could read the letters clearly,” #4110121). We will use the latter two
variables as rival mediators in the mediation analyses reported below. Interrater agreement was satisfactorily high for all three measures (quality inference: κ = .81, r = .90; newness: κ = .66, r = .67; POP features: κ = .82, r = .82). The questionnaire concluded with a final page containing a series of control measures. In particular, we captured several demographic variables, including the participants’ age, gender, education, occupation, and income as well as their consumption experience with Muji food and snacks, and their perceived similarity with Muji customers (see Table B1 in Web Appendix B).
Results
Across all four conditions, we first find that consumers prefer the crowdsourced new product over its designer-ideated counterpart (choice sharecrowdsourced new product = 57%, z = 10.94, p < .001). We obtained the same basic finding when we zoomed in on condition 1 (baseline), in which the POP displays did not reveal the source of design for either of the two products (choice sharecrowdsourced new product = 55%, z = 2.93, p < .01). The findings thus replicate those obtained in the field (Study 2).
More critical to the primary objectives of the present study, we next ran a hierarchical logistic regression model with consumers’ revealed preferences as the dependent variable and the design source cue manipulations (crowdsourced cue present versus not; designer-ideated cue present versus not) as well as the respective two-way interaction (added in a second step) as independent variables. Recall that our central prediction in this follow-up study is that the positive effect of our design source cue manipulation of the crowdsourced new product should be similarly strong and thus independent of the corresponding manipulation of the designer-ideated new product. Our findings are affirmative. First, we find that consumers’ preferences are positively and significantly affected by our design source cue manipulation of the crowdsourced new product (b = .34, p < .001). The product’s choice share is higher when the design source cue was present on the POP display (choice sharecue present = 61%, choice sharecue not present = 53%). Second, we find that the respective two-way interaction is insignificant (b = .05, p = .71). This latter finding is important because it supports the idea that the positive effect of our design source cue manipulation of the crowdsourced new product is statistically independent of the corresponding manipulation of the designer-ideated new product. A multivariate analysis replicates this pattern of effects (see Table B2 in Web Appendix B). After having controlled for differences in respondents’ demographics (age, gender, education, and income), consumption experience with Muji food and snacks, and their perceived similarity with Muji customers, we find that the main effect of our design source cue manipulation of the crowdsourced new product remains significant (b = .35, SE = .08, p < .001). The two-way interaction again proves insignificant (b = .06, SE = .16, p = .72). 
Post-hoc analyses further revealed that featuring the crowdsourced new product as user-ideated (“idea developed by Muji customers”) has a similarly strong effect on consumer preferences in both cases: when the source of the designer-ideated new product’s design (1) is revealed (b = .38, SE = .11, p < .001) or (2) is
not revealed (b = .32, SE = .11, p < .01). We also explored whether the control variables interacted with the design source cue manipulation, but we found no robust effects (ps > .05). Our focal effect thus appears to be independent of the respondents’ demographics (age, gender, education, and income), consumption experience with Muji food and snacks, and their perceived similarity with Muji customers. We then analyzed participants’ open-ended statements to shed more light on the effect’s underlying process. First, consumers frequently indicated that the crowdsourced new product was perceived to be the better product and/or that crowdsourcing (where the source of design was revealed to consumers) is seen as the better design mode (e.g., “For the taste, I want to buy the soybean flour,” #8859701; “I feel that the product which comes from customers is more delicious than the product created by firm developers,” #8624754; “What is created by the voice of the customer is good,” #8638122). Of course, we also heard the opposite, that is, that the designer-ideated new product was perceived to be the better product (e.g., “I like the spicy jalapeño more than the sweet soybean flour,” #3510214). On average, however, we find that the crowdsourced new product performed significantly better on the focal process variable (Mgrand = .05; one-sample t-test against 0: t = 3.45, p = .001). Most critically, and in support of our theorizing, we further find that this effect is significantly more pronounced when the design source cue of the crowdsourced new product was present versus not (Mcue present = .10, Mcue not present = -.01; F(1, 3292) = 16.50, p < .001). The respective two-way interaction proved insignificant (F(1, 3292) = .77, p = .38), which suggests that the quality inference triggered by the presence of the design source cue of the crowdsourced new product is independent of the corresponding manipulation of the designer-ideated new product. 11
11 We were also curious to learn whether participants systematically mentioned any of the reasons for the quality inference put forth in our theory section. Most participants, however, only mentioned the first-layer reason for their
To formally test for mediation, we ran a mediation model using bootstrapping procedures where the design source cue was specified as the independent variable, the quality inference measure as the mediator, and preference as the dependent variable. Our findings support mediation: the indirect effect of our design source cue manipulation of the crowdsourced new product on preference through the quality inference measure is significant (CI95%: .50, 3.87). The same conclusions emerge from a model that additionally includes the two rival mediators, i.e., the perceived newness of crowdsourcing and reasons pertaining to elements of the POP displays. In this model, the indirect effect through our central mediator remains significant (CI95%: .60, 2.46; perceived newness: CI95%: -.15, .31; POP display: CI95%: .14, .40; see Figure 5 for details).
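The percentile-bootstrap logic behind this mediation test can be sketched as follows. For simplicity, the sketch treats the outcome as continuous (a linear analogue of the model above, which used a dichotomous preference), and the data are simulated with an arbitrary built-in effect, not the survey responses.

```python
import random
from statistics import mean

def slope(y, x):
    """OLS slope of y on x."""
    xbar, ybar = mean(x), mean(y)
    return sum((a - xbar) * (b - ybar) for a, b in zip(x, y)) / sum(
        (a - xbar) ** 2 for a in x)

def partial_slope(y, m, x):
    """OLS slope of y on m, controlling for x (two-predictor regression)."""
    mbar, xbar, ybar = mean(m), mean(x), mean(y)
    smm = sum((a - mbar) ** 2 for a in m)
    sxx = sum((a - xbar) ** 2 for a in x)
    smx = sum((a - mbar) * (b - xbar) for a, b in zip(m, x))
    smy = sum((a - mbar) * (b - ybar) for a, b in zip(m, y))
    sxy = sum((a - xbar) * (b - ybar) for a, b in zip(x, y))
    return (sxx * smy - smx * sxy) / (smm * sxx - smx ** 2)

random.seed(7)
n = 200
x = [i % 2 for i in range(n)]                    # design source cue (0/1)
m = [0.8 * xi + random.gauss(0, 1) for xi in x]  # quality inference (mediator)
y = [0.6 * mi + random.gauss(0, 1) for mi in m]  # preference (continuous proxy)

# Percentile bootstrap of the indirect effect a*b.
reps, boots = 1000, []
for _ in range(reps):
    idx = [random.randrange(n) for _ in range(n)]
    xs = [x[i] for i in idx]
    ms = [m[i] for i in idx]
    ys = [y[i] for i in idx]
    a = slope(ms, xs)               # a-path: mediator on cue
    b = partial_slope(ys, ms, xs)   # b-path: outcome on mediator, cue controlled
    boots.append(a * b)
boots.sort()
ci_lo, ci_hi = boots[int(0.025 * reps)], boots[int(0.975 * reps) - 1]
```

A confidence interval that excludes zero indicates a significant indirect effect, the criterion applied in the analyses above.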
INSERT FIGURE 5 ABOUT HERE
Discussion
Study 3 extends the prior studies in several major ways. First, we find that the positive effect of our design source cue manipulation of the crowdsourced new product is independent of the corresponding manipulation of the designer-ideated new product. The findings thus further strengthen the conclusion arising from Study 2: a simple “more specific information – more sales” account is unlikely to explain the design source cue effect found for the crowdsourced new product. In addition to ruling out further alternative explanations (mere newness of crowdsourcing), we were also able to provide initial evidence in support of our central process account. In particular, we find that consumers’ preferences for crowdsourced new products, if
choice (i.e., the crowdsourced product is better). Yet, we note that several participants quite clearly pointed to the user argument, stating that users simply know better what other customers need (e.g., “Because it is a product made from the voice of the customer, I thought that it was [better] matched with the preferences of the buyer side…,” #8876560; “I can trust the voice of the customer more [than the idea of the designer],” #68393).
recognizable as such, are indeed driven by a quality inference: consumers perceive “customer-ideated” products to address their needs more effectively, and the corresponding design mode is considered superior in generating promising new product ideas. We build on this latter finding in the next study and attempt to replicate it using a more classic measured mediation approach.
Study 4
Objectives and Methods
In Study 4, we aim to measure the quality inference using classic rating scales. We thus aim to test whether labeling a product as “customer-ideated” makes the consumer see the product in a different light and if so, whether such a quality inference mediates consumer preferences for crowdsourced new products. The study involved 179 students recruited from a large European university who participated in a series of unrelated studies. In the focal study, they were introduced to the idea of Muji’s Tag Tool (brand name blinded) and subsequently exposed to two concrete applications: the security buzzer and, as a choice alternative, the pedometer (a Tag Tool application that counts the number of steps the user takes; see Web Appendix C for details). Because the study was conducted in the lab outside Japan (where participants could be effectively deceived as well as debriefed), we could flip the assignment of design source cues to products between experimental conditions. In other words, participants were randomly assigned to one of two conditions: while in the first condition they received the correct information regarding the source of the design (security buzzer: ideated by users; pedometer: ideated by the brand’s designers), participants in a second condition learned the opposite (security buzzer: ideated by the brand’s designers; pedometer: ideated by users). The dependent variable, which was identical to the one used in Study 3, was a dichotomous preference measure: “If you wanted to buy one of the two
applications now, which one would you prefer?” (0 = pedometer, 1 = security buzzer). A comparison of conditions along this measure thus allows for an effective test of the design source cue hypothesis (which predicts that participants should more frequently choose the security buzzer/pedometer when it is framed as user-ideated). We also captured the participants’ perceptions of the applications’ quality (mediator) using three items (alpha = .70): (1) “The application is based on a great idea,” (2) “The idea underlying the application fits the user’s needs well,” and (3) “The application is very useful to consumers.” In order to avoid order effects, we counterbalanced the sequence in which the dependent and mediator variable were presented. As in Study 3, we also tried to capture participants’ perceptions of the mere newness of the design modes as an alternative explanation. The items were (alpha = .70): “The product development process was surprising,” “The product development process was somewhat unexpected,” “The way the application was created is completely new to me,” and “The application is based on a novel and innovative idea.” Both measures were captured on a seven-point scale (1 = more true of the pedometer, 7 = more true of the security buzzer). We subjected the items of both measures to an exploratory factor analysis (varimax rotation). The findings support discriminant validity: two factors were extracted, and all items loaded on their intended factors (loadings > .5).
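The reported scale reliabilities (alpha = .70) follow Cronbach’s formula. A quick sketch with invented seven-point ratings (not the study’s responses):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha: items is a list of per-item response lists,
    one entry per item, aligned across the same respondents."""
    k = len(items)
    totals = [sum(vals) for vals in zip(*items)]
    sum_item_var = sum(pvariance(v) for v in items)
    return k / (k - 1) * (1 - sum_item_var / pvariance(totals))

# Hypothetical ratings from six respondents on three quality items.
item1 = [6, 5, 4, 6, 3, 5]
item2 = [5, 5, 4, 6, 2, 5]
item3 = [6, 4, 3, 5, 3, 4]

alpha = cronbach_alpha([item1, item2, item3])
```

Items that move together across respondents push alpha up; values of .70, as reported above, are conventionally considered acceptable for multi-item scales.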
Results

In support of our design source cue hypothesis, we find that participants more frequently chose the security buzzer over the pedometer when it was framed as user- versus designer-ideated (choice share of the buzzer: crowdsourced = 59% vs. designer-ideated = 47%; χ² = 2.92, p < .05, one-tailed). More central to the present study, we further find that labeling a product as “customer-ideated” significantly affects quality perceptions: on average, participants perceived
the idea behind the security buzzer to be of higher quality when it was labeled as user- versus designer-ideated (M buzzer, crowdsourced = 4.50 vs. M buzzer, designer-ideated = 3.95; t(177) = 3.54, p = .001).¹² To formally test for mediation, we ran a mediation model in which the design source cue was specified as the independent variable, the quality inference measure as the mediator, and preference as the dependent variable. Our findings support mediation: the indirect effect of the design source cue manipulation on preferences through the quality inference measure is significant (CI95%: .34, 1.44). The same conclusions are supported by a model that additionally includes the perceived newness of the respective design modes as a second mediator: the indirect effect through the quality inference remains significant (CI95%: .33, 1.46; perceived newness: CI95%: -.06, .11; see Figure C1 in Web Appendix C for details).
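The logic of the bootstrapped indirect-effect test can be sketched as follows. This is our own simplified illustration, not the authors’ code: it uses a linear-probability shortcut for the binary preference outcome (instead of a logistic link), a percentile bootstrap for the confidence interval, and simulated data standing in for the study’s variables.

```python
import random
from statistics import mean

def indirect_effect(xs, ms, ys):
    """a*b indirect effect of binary treatment x via mediator m (linear-probability sketch)."""
    m1 = [m for x, m in zip(xs, ms) if x == 1]
    m0 = [m for x, m in zip(xs, ms) if x == 0]
    a = mean(m1) - mean(m0)                      # a-path: effect of x on m
    mu = {1: mean(m1), 0: mean(m0)}
    y1 = [y for x, y in zip(xs, ys) if x == 1]
    y0 = [y for x, y in zip(xs, ys) if x == 0]
    nu = {1: mean(y1), 0: mean(y0)}
    mr = [m - mu[x] for x, m in zip(xs, ms)]     # residualize m and y on x
    yr = [y - nu[x] for x, y in zip(xs, ys)]
    b = sum(p * q for p, q in zip(mr, yr)) / sum(p * p for p in mr)  # b-path slope
    return a * b

random.seed(1)
n = 400
xs = [i % 2 for i in range(n)]                                  # design source cue: 0/1
ms = [3 + x + random.gauss(0, 1) for x in xs]                   # perceived quality
ys = [1 if m + random.gauss(0, 0.5) > 3.5 else 0 for m in ms]   # choice (binary)

point = indirect_effect(xs, ms, ys)
boots = []
for _ in range(1000):
    idx = [random.randrange(n) for _ in range(n)]
    boots.append(indirect_effect([xs[i] for i in idx],
                                 [ms[i] for i in idx],
                                 [ys[i] for i in idx]))
boots.sort()
ci = (boots[24], boots[974])   # 95% percentile CI
print(point, ci)               # a CI excluding zero indicates mediation
```

As in the reported models, the test of mediation is whether the bootstrap confidence interval for the indirect effect excludes zero, not whether the direct effect vanishes.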
Discussion

Using a classic measured mediation approach, Study 4 extends the previous studies by providing further evidence for the postulated quality inference underlying consumers’ preferences for crowdsourced new products: on average, consumers simply consider a product labeled as “customer-ideated” to be better because they believe it is based on an idea that is more useful to consumers and more effectively addresses their needs. As in Study 3, this mediation pattern holds after controlling for differences in the perceived newness of the respective design modes.
General Discussion

Summary, contributions, and practical implications
¹² Participants across conditions did not significantly differ in their perceptions of the design modes’ mere newness (M buzzer, crowdsourced = 4.41 vs. M buzzer, designer-ideated = 4.37; t(177) = .35, p = .73).
Crowdsourcing is increasingly being used by firms to tap into the creative potential of their user communities to fuel their new product development pipelines. While most practitioners and academics have so far regarded crowdsourcing mainly as an “innovation tool” to identify promising ideas for new products, the research presented here is unique in pointing out that merely marketing the source of the design to customers might also have incremental effects on the firm’s bottom line. Indeed, the findings from two randomized field experiments (Studies 1 and 2) reveal that labeling crowdsourced new products as such, that is, marketing the product as “customer-ideated” at the point of purchase (POP) versus not mentioning the specific source of design, produced an increase of up to 20 percent in the product’s market performance. Customers are thus observed to have a preference for crowdsourced, “customer-ideated” new products at the POP. This finding not only significantly extends our conceptual understanding of the potential real-life consequences of user involvement strategies; it also bears important and actionable implications for managers. In short, crowdsourcing (or, more generally, customer-centric innovation) might not only constitute a promising route to better new products; it might also help marketers set their products apart from the competition. As demonstrated by our field experiments, this idea seems rather easy for firms to implement: simply labeling the crowdsourced new product as “ideated by customers” on a POP display was sufficient to visibly shift consumer preferences. Two controlled follow-up studies (Studies 3 and 4) further revealed that the effect observed in two distinct consumer goods domains (food and electronics) is largely due to a quality inference in which consumers perceive “customer-ideated” new products to address their needs more effectively, and the corresponding design mode is seen as superior in generating promising new product ideas.
One unique aspect of our second field experiment (Study 2) was that it also allowed us to effectively compare the objective quality, measured as relative market performance,
of the crowdsourced versus the designer-ideated new product. Interestingly, our findings reveal that the crowdsourced new product performed significantly better in this regard, even when the source of the product’s design was not revealed at the POP: for each unit of the designer-ideated new product sold, the stores sold an average of 1.55 units of the crowdsourced new product. To the best of our knowledge, this is the first market performance test that directly contrasts a crowdsourced with a designer-ideated new product in a highly controlled, ceteris paribus setting. Because both products were in the same category and of the same type, and because both were introduced into the market at the same time, at the same price, and on the same shelf in each store, this field test is characterized by an unusually high level of internal validity. The size of the effect makes this finding highly relevant for firms; it is also thought-provoking with regard to mainstream thinking in the marketing field, as users outperformed a firm’s professional experts in terms of new product sales.
Limitations and avenues for future research

It is important to note, of course, that we do not maintain that crowdsourcing is a “magic bullet” that can remedy any innovation problem within a firm’s boundaries. Instead, we see the finding that crowdsourcing led to an objectively better new product (Study 2) as an important “it’s possible” finding that adds a unique piece of evidence to the emerging literature on crowdsourcing and that might stimulate future research. The extant literature has already developed a set of assumptions, or conceptual boundary conditions, that can inform managers about the conditions under which crowdsourcing is more or less likely to benefit firms. These assumptions center on the problem domain in which new product ideas are sought (crowdsourcing will work better for relatively simple design tasks), the relevant expertise and motivation of the participating crowd (the higher, the better), the specific contest setting (e.g.,
incentives, clustering and exposure to others’ ideas, one-shot versus ongoing contests), and the size and diversity of the population that can be activated (the bigger and the more diverse, the better) (Afuah and Tucci 2012; Bayus 2013; Boudreau et al. 2011; Girotra et al. 2010; Nishikawa et al. 2013; Poetz and Schreier 2012; Stephen et al. 2016; von Hippel 2005). If these assumptions are met, the crowd, in aggregate, can indeed be more effective in mining the potential solution space and thus in identifying a few truly exceptional ideas. While we have demonstrated in a one-to-one comparison that a crowdsourced new product might even outperform a designer-ideated one, future research should broaden the empirical scope to more fully inform managers about the types of tasks, industries, or crowds for which we might (or might not) expect similarly positive results. We note, however, that addressing these boundary conditions seems challenging from an empirical perspective because firms are understandably skeptical about implementing crowdsourcing in situations where researchers predict no effects (or even negative effects). As a result, different empirical approaches are needed to shed more light on the conditions in which crowdsourcing might backfire. A further challenge noteworthy for practitioners and future research is the entire “black box” that follows the idea generation stage; the problem areas range from selecting the “right” ideas to converting the selected raw ideas into a marketable new product. As indicated by Girotra et al. (2010, p.
591), for example, “the success of idea generation in innovation usually depends on the quality of the best opportunity identified,” but “the focus of the existing literature is entirely on the creation process and ignores the selection processes that groups use to pick the most promising ideas for further exploration.” While there might be reasons to preserve the right to have the “best” ideas ultimately selected by firm-internal experts, some extant research suggests that user communities could also effectively perform this task on behalf of the
firm (Fuchs et al. 2010; Kornish and Ulrich 2014; Toubia and Flores 2007; Terwiesch and Ulrich 2009). In addition, our study is subject to some limitations regarding the “subjective” effect observed. While the primary objective of our field experiments (Studies 1 and 2) was to test whether labeling crowdsourced new products as “customer-ideated” incrementally affects the underlying product’s market performance, we have also explicated and, in two controlled follow-up studies (Studies 3 and 4), empirically tested a specific process account: consumers perceive “customer-ideated” products to be based on ideas that address their needs more effectively. The current studies are consistent with our “crowdsourcing signals quality” account and at the same time inconsistent with a series of rival explanations. Future research could build on these findings and dig deeper to uncover “the process underlying the process.” Specifically, our studies were not designed to empirically answer the question of why crowdsourced new products are associated with higher quality. An answer to this question, however, would be both theoretically and practically interesting. For example, it would guide firms as to the specific additional information they should place next to the “customer-ideated” cue to further boost product attractiveness. Would it be better to stress the mere number and/or diversity of the individuals who participated in the crowdsourcing contest, for example, or would it be more effective to cite the classic user argument by highlighting that the focal product was ideated by users with deep insights into their (and thus our) lives as consumers? Similarly, how much detail should be revealed about the winning user-designer(s), and how would that resonate with different observing consumer segments?
It appears plausible, for example, that the focal quality inference is stronger when the ideating customer belongs to “my” reference group (because the perceived fit between his/her preferences and mine should be higher).
Such future research efforts would also help identify the limits and boundary conditions of the effect documented in the present research. Naturally, we do not maintain that the identified quality inference will always be observed when a crowdsourced new product’s design source is revealed to the consumer. Along these lines, we would find it particularly interesting to explore the conditions under which the “objective” and “subjective” effects of crowdsourcing carry the same sign versus opposite signs. While we might expect two positives (i.e., objectively better new products might emerge and consumers might also perceive the outcome to be better) in low-tech consumer domains like the ones studied here, two negatives might emerge in high-tech domains such as pharmaceuticals. The most interesting situations would be those in which opposite signs emerge, such as when the crowd has merit in objective terms but the firm’s customers simply do not “see” its potential (e.g., because they have the “wrong” ideating users in mind), or, alternatively, when brand designers simply have “unbeatable” equity in the eyes of their customers, as is the case in the fashion industry. While fashion brands might benefit from crowdsourcing in the sense that promising ideas might emerge, luxury brands in particular should probably not actively market the outcome as “customer-ideated” (Fuchs et al. 2013; Moreau and Herd 2010). More subtly, one might argue that in addition to a quality inference, labeling a product as “customer-ideated” might also activate a “persuasion” type of account.
That is, customers might prefer crowdsourced new products not because the product per se is perceived any differently, but because they want to support the idea of crowdsourcing; for example, seeing a “customer-ideated” new product might activate the observing customer’s social “user identity,” which might make people feel good when purchasing a product that was ideated by like-minded others – by “people like you,” and thus almost “by you” (Dahl et al. 2015). While we acknowledge that this process might in principle help to additionally explain the effect identified in this research, we
did not find strong evidence for it in the words of the consumers who participated in Study 2 (open-ended elaborations). At the same time, we would find it useful if future research explored the conditions under which both accounts work in parallel, or, alternatively, when one is more important than the other. For example, what happens if ideating customers are explicitly described at the POP as mainstream customers “like you and me” as opposed to “lead users”? And what role do the product category and the observing consumer play in this regard? Finally, it might be interesting to assess the value of crowdsourcing as a quality signal relative to other quality cues that can be used at the POP, such as consumer ratings or sales popularity cues. For example, what is the inferential and downstream difference between a product that is marketed as “customer-ideated” versus one that is sold as a “5-star-rated” product or merely as a “top seller” (bought by many other customers)?
Conclusion

In conclusion, this research is the first to demonstrate that labeling crowdsourced new products as such, that is, marketing the product as “customer-ideated” at the POP, can shift consumer preferences in the real world and thus incrementally increase the product’s actual market performance. As a result, crowdsourcing might not only constitute a promising route to better new products; it might also help marketers set their products apart from the competition by actively communicating the source of design to customers. As consumers, we conclude, we might soon see (and then draw the respective inferences from) the label “ideated by customers” printed prominently on product packaging alongside other common labels such as “organic,” “handmade,” or “made in Germany.”
References

Afuah, Allan and Christopher L. Tucci (2012), “Crowdsourcing as a Solution to Distant Search,” Academy of Management Review, 37(3), 355-375.

Banerjee, Abhijit V. (1992), “A Simple Model of Herd Behavior,” The Quarterly Journal of Economics, 107(3), 797-817.

Bayus, Barry L. (2013), “Crowdsourcing New Product Ideas over Time: An Analysis of the Dell IdeaStorm Community,” Management Science, 59(1), 226-244.

Bertrand, Marianne, Dean Karlan, Sendhil Mullainathan, Eldar Shafir, and Jonathan Zinman (2010), “What’s Advertising Content Worth? Evidence from a Consumer Credit Marketing Field Experiment,” Quarterly Journal of Economics, 125(1), 263-306.

Bilkey, Warren J. and Erik Nes (1982), “Country-of-Origin Effects on Product Evaluations,” Journal of International Business Studies, 13(1), 89-100.

Boudreau, Kevin J., Nicola Lacetera, and Karim R. Lakhani (2011), “Incentives and Problem Uncertainty in Innovation Contests: An Empirical Analysis,” Management Science, 57(5), 843-863.

Dahl, Darren W., Christoph Fuchs, and Martin Schreier (2015), “Why and When Consumers Prefer Products of User-Driven Firms: A Social Identification Account,” Management Science, 61(8), 1978-1988.

Franke, Nikolaus, Marion K. Poetz, and Martin Schreier (2013), “Integrating Problem Solvers from Analogous Markets in New Product Ideation,” Management Science, 60(4), 1063-1081.
Flint, Daniel J., Chris Hoyt, and Nancy Swift (2014), Shopper Marketing: Profiting from the Place Where Suppliers, Brand Manufacturers, and Retailers Connect. Pearson Education.

Fuchs, Christoph and Martin Schreier (2011), “Customer Empowerment in New Product Development,” Journal of Product Innovation Management, 28(January), 17-32.

——, Martin Schreier, and Stijn M. J. van Osselaer (2015), “The Handmade Effect: What’s Love Got to Do with It?,” Journal of Marketing, 79(2), 98-110.

——, Emanuela Prandelli, and Martin Schreier (2010), “The Psychological Effects of Empowerment Strategies on Consumers’ Product Demand,” Journal of Marketing, 74(January), 65-79.

——, ——, ——, and Darren W. Dahl (2013), “All That Is Users Might Not Be Gold: How Labeling Products as User Designed Backfires in the Context of Luxury Fashion Brands,” Journal of Marketing, 77(September), 75-91.

Girotra, Karan, Christian Terwiesch, and Karl T. Ulrich (2010), “Idea Generation and the Quality of the Best Idea,” Management Science, 56(4), 591-605.

Gross, Irwin (1972), “Creative Aspects of Advertising,” Sloan Management Review, 14(1), 83-109.

Harrison, Glenn W. and John A. List (2004), “Field Experiments,” Journal of Economic Literature, 42(December), 1009-1055.

Howe, Jeff (2006), “The Rise of Crowdsourcing,” Wired Magazine, 14(6), 1-4.

Jensen, Morten B., Christoph Hienerth, and Christopher Lettl (2014), “Forecasting the Commercial Attractiveness of User-Generated Designs Using Online Data: An Empirical
Study within the LEGO User Community,” Journal of Product Innovation Management, 31(December), 75-93.

Jeppesen, Lars Bo and Lars Frederiksen (2006), “Why Do Users Contribute to Firm-Hosted User Communities? The Case of Computer-Controlled Music Instruments,” Organization Science, 17(1), 45-63.

—— and Karim R. Lakhani (2010), “Marginality and Problem-Solving Effectiveness in Broadcast Search,” Organization Science, 21(5), 1016-1033.

Johansson, L., Å. Haglund, L. Berglund, P. Lea, and E. Risvik (1999), “Preference for Tomatoes, Affected by Sensory Attributes and Information about Growth Conditions,” Food Quality and Preference, 10(4), 289-298.

Kardes, Frank R., Steven S. Posavac, and Maria L. Cronley (2004), “Consumer Inference: A Review of Processes, Bases, and Judgment Contexts,” Journal of Consumer Psychology, 14(3), 230-256.

Kornish, Laura J. and Karl T. Ulrich (2014), “The Importance of the Raw Idea in Innovation: Testing the Sow’s Ear Hypothesis,” Journal of Marketing Research, 51(1), 14-26.

Levitt, Steven D. and John A. List (2009), “Field Experiments in Economics: The Past, the Present, and the Future,” European Economic Review, 53(1), 1-18.

Lilien, Gary L., Pamela D. Morrison, Kathleen Searls, Mary Sonnack, and Eric von Hippel (2002), “Performance Assessment of the Lead User Idea-Generation Process for New Product Development,” Management Science, 48(8), 1042-1059.
Moreau, C. Page and Kelly B. Herd (2010), “To Each His Own? How Comparisons with Others Influence Consumers’ Evaluations of Their Self-Designed Products,” Journal of Consumer Research, 36(February), 806-819.

Nishikawa, Hidehiko, Martin Schreier, and Susumu Ogawa (2013), “User-Generated versus Designer-Generated Products: A Performance Assessment at Muji,” International Journal of Research in Marketing, 30(2), 160-167.

Ogawa, Susumu and Frank T. Piller (2006), “Reducing the Risks of New Product Development,” MIT Sloan Management Review, 47(2), 65-71.

Poetz, Marion K. and Martin Schreier (2012), “The Value of Crowdsourcing: Can Users Really Compete with Professionals in Generating New Product Ideas?” Journal of Product Innovation Management, 29(2), 245-256.

Rao, Akshay R. and Kent B. Monroe (1989), “The Effect of Price, Brand Name, and Store Name on Buyers’ Perceptions of Product Quality: An Integrative Review,” Journal of Marketing Research, 26(August), 351-357.

Schreier, Martin, Christoph Fuchs, and Darren W. Dahl (2012), “The Innovation Effect of User Design: Exploring Consumers’ Innovation Perceptions of Firms Selling Products Designed by Users,” Journal of Marketing, 76(4), 18-32.

Schulze, Anja and Martin Hoegl (2008), “Organizational Knowledge Creation and the Generation of New Product Ideas: A Behavioral Approach,” Research Policy, 37(10), 1742-1750.

Shah, Sonali K. (2006), “Motivation, Governance and the Viability of Hybrid Forms in Open Source Software Development,” Management Science, 52(7), 1000-1014.
Sheth, Jagdish N. (1972), “The Future of Buyer Behavior Theory,” in Proceedings of the Third Annual Conference of the Association for Consumer Research. College Park, MD: Association for Consumer Research.

Stephen, Andrew T., Peter P. Zubcsek, and Jacob Goldenberg (2016), “Lower Connectivity Is Better: The Effects of Network Structure on Redundancy of Ideas and Customer Innovativeness in Interdependent Ideation Tasks,” Journal of Marketing Research, 53(2), 263-279.

Terwiesch, Christian and Karl T. Ulrich (2009), Innovation Tournaments: Creating and Selecting Exceptional Opportunities. Boston: Harvard Business School Publishing.

Toubia, Olivier and Laurent Florès (2007), “Adaptive Idea Screening Using Consumers,” Marketing Science, 26(3), 342-360.

Ulrich, Karl T. (2007), Design: Creation of Artifacts in Society. Pontifica Press.

von Hippel, Eric (2005), Democratizing Innovation. Cambridge, MA: MIT Press.

——, Jeroen P. J. de Jong, and Stephen Flowers (2012), “Comparing Business and Household Sector Innovation in Consumer Products: Findings from a Representative Study in the United Kingdom,” Management Science, 58(9), 1669-1681.

——, Susumu Ogawa, and Jeroen P. J. de Jong (2011), “The Age of the Consumer-Innovator,” MIT Sloan Management Review, 53(1), 27-35.

Wells, William D. (1993), “Discovery-Oriented Consumer Research,” Journal of Consumer Research, 19(March), 489-504.

Zhang, Juanjuan and Peng Liu (2012), “Rational Herding in Microloan Markets,” Management Science, 58(5), 892-912.
FIGURE 1: Example of a Store Set-up (Study 1)
FIGURE 2: POP Displays (Study 1)

Panel 1: Crowdsourced new product (Tag Tool Security Buzzer), design source cue not present.
Panel 2: Crowdsourced new product (Tag Tool Security Buzzer), design source cue present (manipulation of the design source cue: “Idea by Muji Customers”).

* POP display size as used in the stores: 128mm x 91mm.
FIGURE 3: Store-level Sales (Study 1)

[Bar chart: unit sales (0–60) per store across the 23 stores (ordered by unit sales), comparing the “Crowdsourced cue” and “No cue” conditions.]
FIGURE 4: Daily Sales (Study 1)

[Chart: unit sales* (0–14) per day across 67 days (ordered by unit sales), comparing the “Crowdsourced cue” and “No cue” conditions.]

* Aggregate unit sales across stores (adjusted for the number of stores per condition).
FIGURE 5: Mediation Model (Study 3)

Design source cue manipulation of the crowdsourced new product (POP display): present vs. not → Perceived quality of the crowdsourced new product → Preference for the crowdsourced new product
c-path: b = .34, SE = .07, p < .001; c’-path: b = .20, SE = .15, p = .18

Notes: Path values represent non-standardized regression coefficients (b), standard errors (SE), and p-values (p). The c-path (c’-path) represents the direct effect without (with) the three mediators (perceived quality of the crowdsourced new product, perceived newness of crowdsourcing, and reasons pertaining to elements of the POP displays).
The Value of Marketing Crowdsourced New Products as Such: Evidence from Two Randomized Field Experiments
Web Appendix
Web Appendix A: Supplemental Materials for Study 2
FIGURE A1: Example of a Store Set-up (Study 2)
FIGURE A2: POP Displays Used in Condition 1 (“Baseline”)* (Study 2)

Crowdsourced new product (soybean-flavored crack pretzel): design source cue not present.
Designer-ideated new product (jalapeño-flavored crack pretzel): design source cue not present.

* POP display size as used in the stores: 128mm x 91mm.
FIGURE A3: POP Displays Used in Condition 2 (“Crowdsourced Cue Present”)* (Study 2)

Crowdsourced new product (soybean-flavored crack pretzel): design source cue present (manipulation of the design source cue: “Idea by Muji Customers”).
Designer-ideated new product (jalapeño-flavored crack pretzel): design source cue not present.

* POP display size as used in the stores: 128mm x 91mm.
FIGURE A4: POP Displays Used in Condition 3 (“Designer-ideated Cue Present”)* (Study 2)

Crowdsourced new product (soybean-flavored crack pretzel): design source cue not present.
Designer-ideated new product (jalapeño-flavored crack pretzel): design source cue present (manipulation of the design source cue: “Idea by Muji designers” (development staff)).

* POP display size as used in the stores: 128mm x 91mm.
FIGURE A5: POP Displays Used in Condition 4 (“Both Cues Present”)* (Study 2)

Crowdsourced new product (soybean-flavored crack pretzel): design source cue present (manipulation of the design source cue: “Idea by Muji Customers”).
Designer-ideated new product (jalapeño-flavored crack pretzel): design source cue present (manipulation of the design source cue: “Idea by Muji designers” (development staff)).

* POP display size as used in the stores: 128mm x 91mm.
FIGURE A6: Store-level Sales Analysis (Study 2) (Condition 1 vs. Condition 2)

[Bar chart: relative sales* (0–6) per store across 49 stores (ordered by relative sales), comparing the “Crowdsourced cue” and “No cue” conditions.]

* Relative unit sales (crowdsourced / designer-ideated) per store.
FIGURE A7: Daily Sales Analysis (Study 2) (Condition 1 vs. Condition 2)

[Chart: relative sales* (0–3) per day across 16 days (ordered by bestselling days), comparing the “Crowdsourced cue” and “No cue” conditions.]

* Relative unit sales (crowdsourced / designer-ideated) aggregated across stores per condition.
Table A1: Multivariate Test of the Design Source Cue Hypothesis (Study 2)

OLS regression analysis¹                              b         SE     Beta
Constant                                              1.44***   .32
Focal independent variable:
  Design source cue manipulation:
  baseline vs. crowdsourced cue present²              .45***    .13    .33
Control variables:
  Store size (mean centered)                          -.00**    .00    -.21
  Store space snack food: Dummy A                     .25       .47    .05
  Store space snack food: Dummy B                     -.33      .25    -.14
  Store space snack food: Dummy C                     .15       .14    -.11
  Store space snack food: Dummy E                     .70**     .27    .29
  Store type: in department store                     .02       .21    .01
  Store type: in shopping center                      .32*      .16    .23
  Store type: stand-alone shop on city street         .07       .45    .01
  Station building dummy                              -.27*     .16    -.17
  Area: Tokyo Center                                  -.70**    .29    -.32
  Area: Tokyo East                                    -.18      .25    -.09
  Area: Tokyo West                                    -.18      .25    -.09
  Area: Saitama                                       .04       .27    .02
  Area: North Japan                                   -.29      .24    -.15
  Area: Chukyo                                        -.25      .27    -.11
  Area: Kyushu                                        .42*      .23    .23
R-squared                                             .40
F-value (df)                                          3.04 (17)

¹ Dependent variable: relative unit sales per store.
² Where baseline = 0 and crowdsourced cue present = 1.
* p < .10, ** p < .05, *** p < .01; n = 96.
Table A2: Multivariate Test of the “More Specific Information – More Sales” Account (Study 2)

OLS regression analysis¹                              b         SE     Beta
Constant                                              1.71***   .20
Focal independent variable:
  Design source cue manipulation:
  baseline vs. designer-ideated cue present²          .10       .12    .10
Control variables:
  Store size (mean centered)                          .00       .00    .01
  Store space snack food: Dummy A
  Store space snack food: Dummy B                     .13       .19    .08
  Store space snack food: Dummy C                     .07       .12    .07
  Store space snack food: Dummy E                     .39       .24    .21
  Store type: in department store                     -.06      .21    -.03
  Store type: in shopping center                      .14       .13    .13
  Store type: stand-alone shop on city street         .51*      .27    .23
  Station building dummy                              -.17      .15    -.13
  Area: Tokyo Center                                  -.70**    .28    -.32
  Area: Tokyo East                                    -.39*     .21    -.22
  Area: Tokyo West                                    -.34*     .20    -.22
  Area: Saitama                                       -.29      .22    -.17
  Area: North Japan                                   -.41**    .21    -.26
  Area: Chukyo                                        -.29      .20    -.20
  Area: Kyushu                                        -.01      .19    -.01
R-squared                                             .27
F-value (df)                                          1.84 (16)

¹ Dependent variable: relative unit sales per store.
² Where baseline = 0 and designer-ideated cue present = 1.
* p < .10, ** p < .05, *** p < .01; n = 95.
Table A3: Simultaneous OLS Regression Based on the Three Focal Conditions (Study 2)

OLS regression analysis¹                              b         SE     Beta
Constant                                              1.61***   .18
Focal independent variable:
  Design source cue manipulation:
  Crowdsourced cue present²                           .40***    .13    .30
  Designer-ideated cue present³                       .10       .13    .08
Control variables:
  Store size (mean centered)                          .00       .00    -.05
  Store space snack food: Dummy A                     -.06      .45    -.01
  Store space snack food: Dummy B                     -.03      .19    -.01
  Store space snack food: Dummy C                     .07       .11    .06
  Store space snack food: Dummy E                     .38*      .23    .17
  Store type: in department store                     -.02      .18    -.01
  Store type: in shopping center                      .22*      .13    .18
  Store type: stand-alone shop on city street         .55**     .26    .19
  Station building dummy                              -.14      .13    -.10
  Area: Tokyo Center                                  -.72***   .24    -.33
  Area: Tokyo East                                    -.27      .18    -.15
  Area: Tokyo West                                    -.35*     .20    -.18
  Area: Saitama                                       -.17      .20    -.08
  Area: North Japan                                   -.42**    .19    -.22
  Area: Chukyo                                        -.22      .19    -.12
  Area: Kyushu                                        .20       .18    .12
R-squared                                             .26
F-value (df)                                          2.74 (18)

¹ Dependent variable: relative unit sales per store.
² Where baseline = 0 and crowdsourced cue present = 1.
³ Where baseline = 0 and designer-ideated cue present = 1.
* p < .10, ** p < .05, *** p < .01; n = 144.
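The focal predictors in Tables A1–A3 are dummy-coded (baseline = 0, cue present = 1). A useful way to read such coefficients: with dummy predictors and no further covariates, the OLS estimates reduce to condition means and mean differences from the baseline. The following self-contained sketch (our own illustration with simulated store-level relative sales, not the study’s data) verifies this by solving the normal equations directly:

```python
import random
from statistics import mean

def ols_two_dummies(y, d1, d2):
    """Solve (X'X) beta = X'y for the design [1, d1, d2] via Gaussian elimination."""
    n = len(y)
    X = [[1.0, float(a), float(b)] for a, b in zip(d1, d2)]
    A = [[sum(X[i][r] * X[i][c] for i in range(n)) for c in range(3)] for r in range(3)]
    v = [sum(X[i][r] * y[i] for i in range(n)) for r in range(3)]
    for col in range(3):                      # forward elimination with partial pivoting
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * b for a, b in zip(A[r], A[col])]
            v[r] -= f * v[col]
    beta = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):                       # back substitution
        beta[r] = (v[r] - sum(A[r][c] * beta[c] for c in range(r + 1, 3))) / A[r][r]
    return beta

random.seed(7)
# Simulated relative sales (crowdsourced / designer-ideated) per store, three conditions
base  = [random.gauss(1.6, 0.5) for _ in range(48)]   # baseline
crowd = [random.gauss(2.0, 0.5) for _ in range(48)]   # crowdsourced cue present
desig = [random.gauss(1.7, 0.5) for _ in range(48)]   # designer-ideated cue present
y  = base + crowd + desig
d1 = [0] * 48 + [1] * 48 + [0] * 48   # crowdsourced-cue dummy
d2 = [0] * 48 + [0] * 48 + [1] * 48   # designer-cue dummy
b0, b1, b2 = ols_two_dummies(y, d1, d2)
# Each dummy coefficient equals the condition-mean difference from the baseline
print(round(abs(b1 - (mean(crowd) - mean(base))), 10))  # → 0.0
```

With the store controls added, as in Tables A1–A3, the coefficients become covariate-adjusted mean differences rather than raw ones, but the interpretation of the dummies is unchanged.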
Web Appendix B: Supplemental Materials for Study 3
FIGURE B1: POP Displays Used in Condition 1 (“Baseline”) (Study 3) (Color Combination A)
Crowdsourced new product (soybean-flavored crack pretzel) Design source cue not present
Designer-ideated new product (jalapeño-flavored crack pretzel) Design source cue not present
FIGURE B2: POP Displays Used in Condition 1 (“Baseline”) (Study 3) (Color Combination B)
Crowdsourced new product (soybean-flavored crack pretzel) Design source cue not present
Designer-ideated new product (jalapeño-flavored crack pretzel) Design source cue not present
FIGURE B3: POP Displays Used in Condition 2 (“Crowdsourced Cue Present”)* (Study 3) (Color Combination A)
Crowdsourced new product (soybean-flavored crack pretzel) Design source cue present
Manipulation of the design source cue: “Idea by Muji Customers”
Designer-ideated new product (jalapeño-flavored crack pretzel) Design source cue not present
FIGURE B4: POP Displays Used in Condition 2 (“Crowdsourced Cue Present”)* (Study 3) (Color Combination B)
Crowdsourced new product (soybean-flavored crack pretzel) Design source cue present
Manipulation of the design source cue: “Idea by Muji Customers”
Designer-ideated new product (jalapeño-flavored crack pretzel) Design source cue not present
FIGURE B5: POP Displays Used in Condition 3 (“Designer-ideated Cue Present”)* (Study 3) (Color Combination A)
Crowdsourced new product (soybean-flavored crack pretzel) Design source cue not present
Designer-ideated new product (jalapeño-flavored crack pretzel) Design source cue present
Manipulation of the design source cue: “Idea by Muji designers” (development staff)
FIGURE B6: POP Displays Used in Condition 3 (“Designer-ideated Cue Present”)* (Study 3) (Color Combination B)
Crowdsourced new product (soybean-flavored crack pretzel) Design source cue not present
Designer-ideated new product (jalapeño-flavored crack pretzel) Design source cue present
Manipulation of the design source cue: “Idea by Muji designers” (development staff)
FIGURE B7: POP Displays Used in Condition 4 (“Both Cues Present”)* (Study 3) (Color Combination A)
Crowdsourced new product (soybean-flavored crack pretzel) Design source cue present
Manipulation of the design source cue: “Idea by Muji Customers”
Designer-ideated new product (jalapeño-flavored crack pretzel) Design source cue present
Manipulation of the design source cue: “Idea by Muji designers” (development staff)
FIGURE B8: POP Displays Used in Condition 4 (“Both Cues Present”)* (Study 3) (Color Combination B)
Crowdsourced new product (soybean-flavored crack pretzel) Design source cue present
Manipulation of the design source cue: “Idea by Muji Customers”
Designer-ideated new product (jalapeño-flavored crack pretzel) Design source cue present
Manipulation of the design source cue: “Idea by Muji designers” (development staff)
77
TABLE B1: Control Measures (Study 3)
Variable
Measure(s)
Gender
1 = male, 2 = female
Age
in years
Education (dummy)*
0 = higher education not present, 1 = higher education present (university)
Personal Income
1 = Less than 2 million yen 2 = Less than 4 million yen, 2 million yen or more 3 = Less than 6 million yen, 4 million yen or more 4 = Less than 8 million yen, 6 million yen or more 5 = Less than 10 million yen, 8 million yen or more 6 = Less than 12 million yen, 10 million yen or more 7 = Less than 15 million yen, 12 million yen or more 8 = Less than 20 million yen, 15 million yen or more 9 = 20 million yen or more 10 (missing value) = I don't understand it
Consumption experience with Muji food and snacks Perceived similarity with Muji customers
How often do you buy Muji food and snacks? (1 = never and 5 = very often) How ‘close’ and ‘similar’ do you feel to Muji customers? (I feel [not] similar to / I feel [not] close to / I can [cannot] identify with / there are [no] similarities between me and Muji customers; 1 = low, 5 = high similarity; alpha = .87)
*Note that a more fine-grained scale was used to determine education level by the market research company; for expository reasons, we transformed the measure into a binary (dummy) variable.
78
TABLE B2: Results of a Hierarchical Multivariate Logistic Regression Analysis (Study 3)
Model 11
Constant Independent variables Design source cue manipulation: -Crowdsourced cue present2 -Designer-ideated cue present3
Model 21
b
SE
Exp(B)
b
SE
Exp(B)
-.36
.30
.70
-.35
.30
.71
.35*** -.14*
.08 .08
1.42 .87
.32*** -.17
.11 .11
1.38 .85
.06
.16
1.06
-.02 .30*** .00 -.21** .10* .12***
.04 .10 .00 .09 .06
.98 1.35 1.00 .81 1.10
.04 3,560.58 87.61 (9)
.89
Interaction Design source cue manipulation of the crowdsourced x designerideated new product Control variables Purchase frequency Gender Age Education (dummy) Similarity Personal income
-.02 .30*** .00 -.21** .10* .12***
Log likelihood χ2 (df)
.04 .10 .00 .09 .06 .04 3,560.70 87.48 (8)
.98 1.36 1.00 .81 1.10 .89
Dependent variable: Consumer preferences (0 = designer-ideated, 1 = crowdsourced new product).
1
2
Where baseline = 0 and crowdsourced cue present = 1.
3
Where baseline = 0 and designer-ideated cue present = 1.
* p < .10, ** p < .05, *** p < .01; n = 2,666
79
Web Appendix C: Supplemental Materials for Study 4
Original Stimuli This is a new study. In this study we are interested in your preferences for a new tech gadget of a consumer goods brand (brand name blinded). The brand’s products are sold in more than 500 stores in 22 countries including the US. The focal product is called tag tool (see the pictures below).
80
[Page break]
The tag tool consists of a silicone tag and a set of tool applications that can be attached to it (see the picture below).
81
[Page break]
82
[Condition 1: Security Buzzer crowdsourced]
In the following, we are interested in your preferences regarding two concrete applications.
[Measures as reported in the main body of the manuscript]
83
[Condition 2: Security Buzzer designer-ideated]
In the following, we are interested in your preferences regarding two concrete applications.
[Measures as reported in the main body of the manuscript]
84
FIGURE C1: Mediation Model (Study 4)
Perceived product quality
Design source cue manipulation of the crowdsourced new product
c’ path b = -.19, SE = .38, p = .68 c path b = .51, SE = .30, p < .05, 1-tail
Product Preference
Notes: Path values represent non-standardized regression coefficients (b), standard errors (SE), and p-values (p). The c-path (c’-path) represents the direct effect without (with) the two mediators (perceived product quality and perceived newness of crowdsourcing).
The Value of Marketing Crowdsourced New Products as Such: Evidence from Two Randomized Field Experiments
Web Appendix
Web Appendix A: Supplemental Materials for Study 2
FIGURE A1: Example of a Store Set-up (Study 2)
FIGURE A2: POP Displays Used in Condition 1 (“Baseline”)* (Study 2)
Crowdsourced new product (soybean-flavored crack pretzel): design source cue not present
Designer-ideated new product (jalapeño-flavored crack pretzel): design source cue not present
* POP display size as used in the stores: 128mm x 91mm.
FIGURE A3: POP Displays Used in Condition 2 (“Crowdsourced Cue Present”)* (Study 2)
Crowdsourced new product (soybean-flavored crack pretzel): design source cue present
 Manipulation of the design source cue: “Idea by Muji Customers”
Designer-ideated new product (jalapeño-flavored crack pretzel): design source cue not present
* POP display size as used in the stores: 128mm x 91mm.
FIGURE A4: POP Displays Used in Condition 3 (“Designer-ideated Cue Present”)* (Study 2)
Crowdsourced new product (soybean-flavored crack pretzel): design source cue not present
Designer-ideated new product (jalapeño-flavored crack pretzel): design source cue present
 Manipulation of the design source cue: “Idea by Muji designers” (development staff)
* POP display size as used in the stores: 128mm x 91mm.
FIGURE A5: POP Displays Used in Condition 4 (“Both Cues Present”)* (Study 2)
Crowdsourced new product (soybean-flavored crack pretzel): design source cue present
 Manipulation of the design source cue: “Idea by Muji Customers”
Designer-ideated new product (jalapeño-flavored crack pretzel): design source cue present
 Manipulation of the design source cue: “Idea by Muji designers” (development staff)
* POP display size as used in the stores: 128mm x 91mm.
FIGURE A6: Store-level Sales Analysis (Study 2) (Condition 1 vs. Condition 2)
[Bar chart: relative sales* per store for stores 1 through 49, ordered by relative sales; series: crowdsourced cue vs. no cue; y-axis: Relative Sales* (0 to 6); x-axis: Stores (ordered by relative sales).]
* Relative unit sales (crowdsourced / designer-ideated) per store.
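The relative-sales measure plotted here is a simple per-store ratio of unit sales. A minimal sketch of the computation (the unit-sales figures below are hypothetical illustrations, not the study's data):

```python
def relative_sales(crowdsourced_units, designer_units):
    """Relative unit sales per store: crowdsourced / designer-ideated."""
    return crowdsourced_units / designer_units

# Hypothetical store: 30 crowdsourced vs. 20 designer-ideated units sold
print(relative_sales(30, 20))  # 1.5
```

A value above 1 thus indicates that the crowdsourced product outsold the designer-ideated product in that store.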
FIGURE A7: Daily Sales Analysis (Study 2) (Condition 1 vs. Condition 2)
[Chart: relative sales* for days 1 through 16, ordered by bestselling days; series: crowdsourced cue vs. no cue; y-axis: Relative Sales* (0 to 3); x-axis: Days (ordered by bestselling days).]
* Relative unit sales (crowdsourced / designer-ideated) aggregated across stores per condition.
Table A1: Multivariate Test of the Design Source Cue Hypothesis (Study 2)

OLS regression analysis¹

                                                   b        SE    Beta
Constant                                          1.44***   .32
Focal independent variable:
 Design source cue manipulation:
  Baseline vs. crowdsourced cue present²           .45***   .13    .33
Control variables:
 Store size (mean centered)                       -.00**    .00   -.21
 Store space snack food: Dummy A                   .25      .47    .05
 Store space snack food: Dummy B                  -.33      .25   -.14
 Store space snack food: Dummy C                   .15      .14   -.11
 Store space snack food: Dummy E                   .70**    .27    .29
 Store type: in department store                   .02      .21    .01
 Store type: in shopping center                    .32*     .16    .23
 Store type: stand-alone shop on city street       .07      .45    .01
 Station building dummy                           -.27*     .16   -.17
 Area: Tokyo Center                               -.70**    .29   -.32
 Area: Tokyo East                                 -.18      .25   -.09
 Area: Tokyo West                                 -.18      .25   -.09
 Area: Saitama                                     .04      .27    .02
 Area: North Japan                                -.29      .24   -.15
 Area: Chukyo                                     -.25      .27   -.11
 Area: Kyushu                                      .42*     .23    .23
R-Squared                                          .40
F-value (df)                                      3.04 (17)

¹ Dependent variable: Relative unit sales per store.
² Where baseline = 0 and crowdsourced cue present = 1.
* p < .10, ** p < .05, *** p < .01; n = 96
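The analysis in Table A1 regresses relative unit sales per store on the cue dummy plus store-level controls. As an illustrative, stdlib-only sketch of the estimation idea (hypothetical numbers, not the study's data), note that the OLS slope on a 0/1 cue dummy is simply the difference in mean relative sales between conditions:

```python
def ols_slope_intercept(x, y):
    """Least-squares slope and intercept for a single predictor."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sxy / sxx
    return slope, my - slope * mx

# Four hypothetical stores: two baseline (cue = 0), two with the
# crowdsourced cue present (cue = 1)
cue = [0, 0, 1, 1]
rel_sales = [1.2, 1.6, 1.8, 2.0]
slope, intercept = ols_slope_intercept(cue, rel_sales)
print(round(slope, 2), round(intercept, 2))  # 0.5 1.4
```

With a single dummy predictor, the slope equals the cue-present mean minus the baseline mean and the intercept equals the baseline mean; the full table adds the store-level controls to this same least-squares machinery.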
Table A2: Multivariate Test of the “More Specific Information – More Sales” Account (Study 2)

OLS regression analysis¹

                                                   b        SE    Beta
Constant                                          1.71***   .20
Focal independent variable:
 Design source cue manipulation:
  Baseline vs. designer-ideated cue present²       .10      .12    .10
Control variables:
 Store size (mean centered)                        .00      .00    .01
 Store space snack food: Dummy A                   .13      .19    .08
 Store space snack food: Dummy B                   .07      .12    .07
 Store space snack food: Dummy C                   .39      .24    .21
 Store space snack food: Dummy E                  -.06      .21   -.03
 Store type: in department store                   .14      .13    .13
 Store type: in shopping center                    .51*     .27    .23
 Store type: stand-alone shop on city street      -.17      .15   -.13
 Station building dummy                           -.70**    .28   -.32
 Area: Tokyo Center                               -.39*     .21   -.22
 Area: Tokyo East                                 -.34*     .20   -.22
 Area: Tokyo West                                 -.29      .22   -.17
 Area: Saitama                                    -.41**    .21   -.26
 Area: North Japan                                -.29      .20   -.20
 Area: Chukyo                                     -.01      .19   -.01
 Area: Kyushu
R-Squared                                          .27
F-value (df)                                      1.84 (16)

¹ Dependent variable: Relative unit sales per store.
² Where baseline = 0 and designer-ideated cue present = 1.
* p < .10, ** p < .05, *** p < .01; n = 95
Table A3: Simultaneous OLS Regression based on the Three Focal Conditions (Study 2)

OLS regression analysis¹

                                                   b        SE    Beta
Constant                                          1.61***   .18
Focal independent variables:
 Design source cue manipulation:
  Crowdsourced cue present²                        .40***   .13    .30
  Designer-ideated cue present³                    .10      .13    .08
Control variables:
 Store size (mean centered)                        .00      .00   -.05
 Store space snack food: Dummy A                  -.06      .45   -.01
 Store space snack food: Dummy B                  -.03      .19   -.01
 Store space snack food: Dummy C                   .07      .11    .06
 Store space snack food: Dummy E                   .38*     .23    .17
 Store type: in department store                  -.02      .18   -.01
 Store type: in shopping center                    .22*     .13    .18
 Store type: stand-alone shop on city street       .55**    .26    .19
 Station building dummy                           -.14      .13   -.10
 Area: Tokyo Center                               -.72***   .24   -.33
 Area: Tokyo East                                 -.27      .18   -.15
 Area: Tokyo West                                 -.35*     .20   -.18
 Area: Saitama                                    -.17      .20   -.08
 Area: North Japan                                -.42**    .19   -.22
 Area: Chukyo                                     -.22      .19   -.12
 Area: Kyushu                                      .20      .18    .12
R-Squared                                          .26
F-value (df)                                      2.74 (18)

¹ Dependent variable: Relative unit sales per store.
² Where baseline = 0 and crowdsourced cue present = 1.
³ Where baseline = 0 and designer-ideated cue present = 1.
* p < .10, ** p < .05, *** p < .01; n = 144
Web Appendix B: Supplemental Materials for Study 3
FIGURE B1: POP Displays Used in Condition 1 (“Baseline”) (Study 3) (Color Combination A)
Crowdsourced new product (soybean-flavored crack pretzel): design source cue not present
Designer-ideated new product (jalapeño-flavored crack pretzel): design source cue not present
FIGURE B2: POP Displays Used in Condition 1 (“Baseline”) (Study 3) (Color Combination B)
Crowdsourced new product (soybean-flavored crack pretzel): design source cue not present
Designer-ideated new product (jalapeño-flavored crack pretzel): design source cue not present
FIGURE B3: POP Displays Used in Condition 2 (“Crowdsourced Cue Present”)* (Study 3) (Color Combination A)
Crowdsourced new product (soybean-flavored crack pretzel): design source cue present
Designer-ideated new product (jalapeño-flavored crack pretzel): design source cue not present
* Manipulation of the design source cue: “Idea by Muji Customers”
FIGURE B4: POP Displays Used in Condition 2 (“Crowdsourced Cue Present”)* (Study 3) (Color Combination B)
Crowdsourced new product (soybean-flavored crack pretzel): design source cue present
Designer-ideated new product (jalapeño-flavored crack pretzel): design source cue not present
* Manipulation of the design source cue: “Idea by Muji Customers”
FIGURE B5: POP Displays Used in Condition 3 (“Designer-ideated Cue Present”)* (Study 3) (Color Combination A)
Crowdsourced new product (soybean-flavored crack pretzel): design source cue not present
Designer-ideated new product (jalapeño-flavored crack pretzel): design source cue present
* Manipulation of the design source cue: “Idea by Muji designers” (development staff)
FIGURE B6: POP Displays Used in Condition 3 (“Designer-ideated Cue Present”)* (Study 3) (Color Combination B)
Crowdsourced new product (soybean-flavored crack pretzel): design source cue not present
Designer-ideated new product (jalapeño-flavored crack pretzel): design source cue present
* Manipulation of the design source cue: “Idea by Muji designers” (development staff)
FIGURE B7: POP Displays Used in Condition 4 (“Both Cues Present”)* (Study 3) (Color Combination A)
Crowdsourced new product (soybean-flavored crack pretzel): design source cue present
Designer-ideated new product (jalapeño-flavored crack pretzel): design source cue present
* Manipulation of the design source cues: “Idea by Muji Customers” (crowdsourced product) and “Idea by Muji designers” (development staff; designer-ideated product)
FIGURE B8: POP Displays Used in Condition 4 (“Both Cues Present”)* (Study 3) (Color Combination B)
Crowdsourced new product (soybean-flavored crack pretzel): design source cue present
Designer-ideated new product (jalapeño-flavored crack pretzel): design source cue present
* Manipulation of the design source cues: “Idea by Muji Customers” (crowdsourced product) and “Idea by Muji designers” (development staff; designer-ideated product)
TABLE B1: Control Measures (Study 3)

Variable: Measure(s)

Gender: 1 = male, 2 = female

Age: in years

Education (dummy)*: 0 = higher education not present, 1 = higher education present (university)

Personal income:
 1 = less than 2 million yen
 2 = 2 million yen or more, less than 4 million yen
 3 = 4 million yen or more, less than 6 million yen
 4 = 6 million yen or more, less than 8 million yen
 5 = 8 million yen or more, less than 10 million yen
 6 = 10 million yen or more, less than 12 million yen
 7 = 12 million yen or more, less than 15 million yen
 8 = 15 million yen or more, less than 20 million yen
 9 = 20 million yen or more
 10 = “I don’t understand it” (treated as missing value)

Consumption experience with Muji food and snacks: How often do you buy Muji food and snacks? (1 = never, 5 = very often)

Perceived similarity with Muji customers: How “close” and “similar” do you feel to Muji customers? (I feel [not] similar to / I feel [not] close to / I can [cannot] identify with / there are [no] similarities between me and Muji customers; 1 = low, 5 = high similarity; alpha = .87)

* Note that a more fine-grained scale was used by the market research company to determine education level; for expository reasons, we transformed the measure into a binary (dummy) variable.
TABLE B2: Results of a Hierarchical Multivariate Logistic Regression Analysis (Study 3)

                                             Model 1¹                 Model 2¹
                                        b      SE    Exp(B)      b      SE    Exp(B)
Constant                               -.36    .30    .70       -.35    .30    .71
Independent variables
 Design source cue manipulation:
  Crowdsourced cue present²             .35*** .08   1.42        .32*** .11   1.38
  Designer-ideated cue present³        -.14*   .08    .87       -.17    .11    .85
Interaction
 Design source cue manipulation:
  crowdsourced x designer-ideated                                .06    .16   1.06
Control variables
 Purchase frequency                    -.02    .04    .98       -.02    .04    .98
 Gender                                 .30*** .10   1.36        .30*** .10   1.35
 Age                                    .00    .00   1.00        .00    .00   1.00
 Education (dummy)                     -.21**  .09    .81       -.21**  .09    .81
 Similarity                             .10*   .06   1.10        .10*   .06   1.10
 Personal income                        .12*** .04    .89        .12*** .04    .89
Log likelihood                        3,560.70                3,560.58
χ2 (df)                                 87.48 (8)                87.61 (9)

¹ Dependent variable: Consumer preferences (0 = designer-ideated, 1 = crowdsourced new product).
² Where baseline = 0 and crowdsourced cue present = 1.
³ Where baseline = 0 and designer-ideated cue present = 1.
* p < .10, ** p < .05, *** p < .01; n = 2,666
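The Exp(B) column reports the odds ratio implied by each logit coefficient, Exp(B) = e^b. A quick check against the focal design source cue coefficients from Model 1:

```python
import math

def odds_ratio(b):
    """Odds ratio implied by a logistic-regression coefficient: Exp(B) = e^b."""
    return math.exp(b)

# Focal coefficients from Model 1 of Table B2
print(round(odds_ratio(0.35), 2))   # 1.42 (crowdsourced cue present)
print(round(odds_ratio(-0.14), 2))  # 0.87 (designer-ideated cue present)
```

In other words, the crowdsourced cue multiplies the odds of preferring the crowdsourced product by about 1.42, while the designer-ideated cue multiplies them by about .87.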
Web Appendix C: Supplemental Materials for Study 4
Original Stimuli

This is a new study. In this study, we are interested in your preferences for a new tech gadget from a consumer goods brand (brand name blinded). The brand’s products are sold in more than 500 stores in 22 countries, including the US. The focal product is called the tag tool (see the pictures below).
[Page break]
The tag tool consists of a silicone tag and a set of tool applications that can be attached to it (see the picture below).
[Page break]
[Condition 1: Security Buzzer crowdsourced]
In the following, we are interested in your preferences regarding two concrete applications.
[Measures as reported in the main body of the manuscript]
[Condition 2: Security Buzzer designer-ideated]
In the following, we are interested in your preferences regarding two concrete applications.
[Measures as reported in the main body of the manuscript]
FIGURE C1: Mediation Model (Study 4)

[Path diagram: the design source cue manipulation of the crowdsourced new product predicts product preference, with perceived product quality as the depicted mediator. c path: b = .51, SE = .30, p < .05, 1-tail; c’ path: b = -.19, SE = .38, p = .68.]

Notes: Path values represent non-standardized regression coefficients (b), standard errors (SE), and p-values (p). The c-path (c’-path) represents the direct effect without (with) the two mediators (perceived product quality and perceived newness of crowdsourcing).
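Under the difference-method reading of such a model, the effect transmitted through the mediators is the total effect (c path) minus the direct effect (c’ path). Using the coefficients reported in Figure C1 as a back-of-the-envelope check (not part of the original analysis):

```python
# Coefficients reported in Figure C1
c_total = 0.51    # c path: total effect of the cue manipulation on preference
c_direct = -0.19  # c' path: direct effect with the two mediators included

# Effect carried by the mediators (difference method)
indirect = c_total - c_direct
print(round(indirect, 2))  # 0.7
```

The sign flip from c to c’ is consistent with the mediators absorbing the cue’s positive effect on product preference.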