Agent-based simulation of policy induced diffusion of smart meters

Martin Rixen, Jürgen Weigand
Department of Microeconomics and Industrial Organization, WHU - Otto Beisheim School of Management, 56179 Vallendar, Germany

Technological Forecasting & Social Change 85 (2014) 153–167

Article history: Received 25 January 2012; received in revised form 10 February 2013; accepted 16 August 2013; available online 20 September 2013

JEL classification: O33, O38, C63, D43

Keywords: Induced diffusion; Innovation adoption; Agent-based modeling; Smart Metering; Competitive dynamics

Abstract

How can policy makers influence the path of innovation diffusion effectively and efficiently? We tackle this question in an agent-based model that integrates demand and supply for Smart Meters. Consumers adopt due to awareness and attainment of price thresholds. Suppliers act strategically under Cournot competition. We add different policies to the simulation and analyze effects on speed and level of Smart Meter adoption. The tested interventions are market liberalization, information policies, and monetary grants. From our results we conclude that "one size does not fit all": the best-suited intervention depends on the regulator's objective. Information policies speed up adoption, but are ineffective in monopolies and if the timing is late. Monetary grants boost speed and level, but policy costs as well. Market structure is critical: interventions in closed markets primarily favor the monopolist, while intensifying competition raises effectiveness and efficiency. Regulators may combine policies to gain synergies and utilize strategic decision making of suppliers.

© 2013 Elsevier Inc. All rights reserved.

1. Introduction

This paper aims to evaluate policy options for inducing innovation adoption by simulation analysis. The diffusion and adoption of innovations have been an important field of research for decades, focusing on product and process innovations as major sources of creative destruction [1]. A wide variety of theoretical models and conceptual frameworks for analysis have been developed to examine the drivers of diffusion and explain adoption [2]. These frameworks typically build on empirical observations to explain autonomous adoption processes. In the last couple of years, however, a new research area has been established dealing with induced diffusion of innovations. Regulators actively intervene in the

diffusion process in order to boost adoption speed and adoption level [3]. The rise of "green" technologies has triggered the interest of researchers in induced innovation because climate protection initiatives and CO2 abatement goals require fast and widespread diffusion of innovations, such as the organic fuel E10, photovoltaics, electric vehicles, and combined heat and power. "Green" innovations are typically disruptive and may create competitive advantages for economies [4,5]. Designing effective and efficient incentives remains difficult. Ineffective or inefficient policies may cause non-adoption and/or uncontrollable costs. For instance, the German regulator Bundesnetzagentur (BNA) failed to induce Smart Metering diffusion through market liberalization because of insufficient competition. By way of contrast, BNA's feed-in tariff approach pushed photovoltaic solutions too effectively and raised actual policy intervention costs far above the expected costs. In this paper we examine for the German market environment the effectiveness and efficiency of regulatory interventions to induce the diffusion of Smart Meters, under
the core assumption that market demand and supply evolve endogenously. Our research question is: “How effective and how efficient are regulatory interventions to induce the diffusion of Smart Meters in Germany, if demand and supply evolve endogenously?” We measure effectiveness based on adoption speed and adoption level. The metric for efficiency is policy costs. We understand “regulatory intervention” in our research context as having four dimensions:

− Type (market liberalization, information policy, monetary grant, or some combination thereof),
− Targeting (geographical focus or adopter subgroup),
− Timing (start time and duration),
− Scale (small vs. large magnitude).

We apply an empirically validated market demand–supply model which integrates widely accepted diffusion models (namely Epidemic and Probit) and microeconomic theory. Agent-based modeling enables us to combine consumer adoption drivers in the form of network effects and adopter heterogeneity with strategic decision making of suppliers. By employing industrial organization theory to model supply-side competition, we go beyond the existing literature on innovation diffusion and address the endogeneity of market evolution and competitive dynamics [6]. Scenario and sensitivity analyses provide insights on different policy options as well as on targeting, timing, and scaling. Our discussion of the options is results-driven, focusing on their effectiveness (speed and level) and efficiency (the ratio of speed or level to policy costs). Our findings should support regulators in selecting the best-suited policy framework in specific market situations. A variety of further scenarios are offered to help isolate specific adoption drivers, confirm their impact, and guide future research on new and promising policy options. Our predictions may support policy makers in the design of effective and efficient policies that use competitive dynamics, boost adoption speed and level, and combine interventions to generate synergetic gains.

The paper is structured as follows. We begin with a literature review on adoption drivers which provides the basis for our methodology and policy design. Section 3 introduces the agent-based model. Section 4 presents simulation results and ten propositions on effective and efficient policies. In Section 5 we discuss the results. Section 6 offers conclusions and indicates future research directions.

2. Literature review

We first provide a brief, selective review of the literature on innovation diffusion before we continue with an overview of the literature on agent-based modeling to derive the critical requirements for our simulation model. Innovation diffusion has been an interdisciplinary field of research for several decades. In 1969, Bass published a seminal paper on the adoption of innovations [7]. His model was based on generalizations of empirical diffusion data for consumer durables, such as fridges, TVs, tumble driers, and air conditioners. The General Bass Model describes cumulative adoption as an "S"-curve (see Fig. 1). This model was developed further in later works [8–10].

[Fig. 1. Diffusion drivers, "S"-curve and adopter categories [3,13]. The figure contrasts the Epidemic model (epidemic spread of information through external and internal communication drives the "S"-shaped curve of adopters over time) with the Probit model (price reductions over time combined with normally distributed price thresholds drive the "S"-curve) and shows the adopter categories: Innovators, Early Adopters, Early Majority, Late Majority, and Laggards.]

Two indicators measure performance in Bass's model: speed and level of adoption. In the present paper, we utilize both indicators to evaluate the effectiveness of policy options to induce diffusion compared to a status quo without any intervention. Induced diffusion tackles the questions of how regulatory interventions accelerate the adoption process (speed) and how they increase the long-term penetration rate (level) [3]. Davies and Diaz-Rainey describe this pattern in a "green" technology context [11]. Speed is critical for consumer durables with short product lifecycles, a high risk of imitation, and/or large development effort. Therefore, quickly attaining a critical mass of adopters is crucial: a sufficient number of individuals needs to adopt the new product to induce self-sustaining continued adoption [12]. High-tech companies in particular rely on a fast spread of their products. Level describes the innovation's penetration rate (also referred to as saturation). Some products gain huge shares in the target market, for example Microsoft Windows, while others, for example Linux, attract specific subgroups and reach only small levels.

In addition to effectiveness, we introduce efficiency measures based on policy costs as an extension to the literature. Efficiency is the critical decision variable with regard to cost–benefit assessments. Any intervention typically involves a trade-off between effective speed or level inducements and the related policy costs. Regulators may optimize efficiency by combining different kinds of policies to gain synergies [3].

Target groups and their adoption timing are primarily determined by the heterogeneity of adopters. Rogers established five adopter categories that describe and explain the impacts of heterogeneity on adoption decisions [13]. These five categories are distinguished by adoption timing (see Fig. 1). Adopters in each category differ in several characteristics, for instance in their use of communication channels, readiness to assume risk, and social affiliation [12].

Our objective is to extend current diffusion research by simulating policy options. We configure the policy design in this study by the fundamental factors which influence speed and level of adoption. Depending on the direction of influence, these factors may also contribute to the formation of barriers to adoption (see Geroski [2] for a review). We conclude that state-of-the-art models as well as effective and efficient policies need to embody Epidemic and Probit effects:

− Epidemic: Information transmission through network effects is vital in Epidemic models. Learning drives adoption. The "S"-shape results from contagion effects in interactions. Awareness rises exponentially when word-of-mouth of Innovators and Early Adopters triggers the awareness of residual buyers [12]. The General Bass Model is an Epidemic model. Bass explains the occurrence of the "S"-shape through a shift from external influence (mass media) to internal influence (word-of-mouth) [13]. We set up scenarios with informational policies that induce external interactions.
− Probit: Probit models stress cost–benefit thresholds, referred to as Probit thresholds in this context. Utility drives adoption. Attainment of positive cost–benefit ratios creates demand. The "S"-shape is explained through a normal distribution of Probit thresholds in combination with price reductions over time [2]. For instance, costly phone tariffs prevent cutting-edge cell phones from diffusing; adoption kicks off when cheaper tariffs are launched that match the willingness-to-pay of the mainstream. Frequent sources of price reductions are learning curves and economies of scale. Search costs, switching costs, and opportunity costs may influence perceived utility. Economic risk and technical complexity create barriers that postpone adoption or even cause resistance of potential adopters [14]. We provide scenarios with monetary incentive policies in the form of purchase bonuses for consumers.

Supplier competition is another influencing factor in the innovation adoption process. Intensifying competition induces adoption, leading to more rapid diffusion and a higher level of diffusion [15]. Compared to the frequently cited Epidemic and Probit models, competition is rarely addressed in the literature as an influencing factor. We do not see competition as a third driver, because the influencing root causes are cost and price reductions since suppliers compete in quantities or price [16].
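As a toy illustration of the Probit driver described above (this is not part of the model itself, and the numerical values are chosen purely for illustration): with normally distributed price thresholds and a price that falls over time, cumulative adoption traces an "S"-curve.

```python
import numpy as np

rng = np.random.default_rng(1)
thresholds = rng.normal(1000.0, 250.0, 10_000)   # normally distributed willingness-to-pay (illustrative values)

prices = np.linspace(1600.0, 400.0, 30)          # price falls over 30 periods, e.g. via learning curves
adoption_share = [(thresholds >= p).mean() for p in prices]

# adoption_share rises slowly, then steeply around the mean threshold, then flattens: an "S"-curve
print([round(float(x), 2) for x in adoption_share[::5]])
```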

Competition models and Probit models are very similar. However, competition is an important building block for determining prices and quantities in diffusion models. Our paper extends current research by including all three forces in a single model: Epidemic, Probit, and competition. We use the widely accepted Cournot oligopoly competition theory to model entry, supplier conduct, and price. Industrial organization theory emphasizes market structure—in particular, the number and size distribution of suppliers and buyers—as a critical determinant of the behavior of market participants and of market outcomes. A shift from monopoly to a non-cooperative oligopoly will increase total supply to the market, so that industry sales go up [17]. Based on this reasoning, we incorporate market liberalization as one policy option.

Examples of models that factor in Epidemic and Probit effects exist. Cantono and Silverberg describe a diffusion model in the context of eco-innovations [18]. Rixen and Weigand model diffusion upon consumer purchase-decision processes [19]. Nevertheless, recent models lack competitive supply-side dynamics, although findings showcase the importance of an endogenous relationship between supply and demand: Epidemic consumer awareness increases the sales potential, firm entries become profitable, and rivalry ends up in Probit effects leading to innovation adoption [20]. Endogeneity arises when adoptions trigger the awareness of other potential adopters. These endogenous links between competitive dynamics and market evolution are not well understood yet and provide opportunities for research [6,21]. We explicitly tackle these links in our model by including supply-side dynamics.

Timing and order of entry drive firm profitability in growth markets [22,23]. Consideration of first-mover advantages and follower strategies is crucial to understand competitive dynamics [24,25]. Pioneers spark off innovation diffusion and have a long-term impact on adoption speed and adoption level. Followers enjoy free-rider benefits but need to handle fierce competition [26]. Late entrants face decreasing market potential due to saturation. Competitive dynamics and its driving forces on innovation diffusion are not limited to market entry: market saturation and market exit decisions correlate. In the innovation adoption context, market potential explicitly decreases when the "S"-curve crosses the point of inflection. In line with the Industry Lifecycle, firms leave the market and shakeout begins [25,27]. We analyze these under-researched endogenous links in our simulation.

In methodical terms, the integration of network effects [28] as well as adopter heterogeneity [12,29] are critical model requirements. Providing an advantage over differential equation models, agent-based models are able to handle the complexity of both requirements [30,31]. These capabilities are seen as critical factors by innovation diffusion research and make agent-based modeling a promising avenue for developing new diffusion theory [32–34]. Garcia confirms these capabilities in her study on network types and their impact on diffusion processes [35]. Similar advantages over other methodologies are confirmed by Bohlmann, Calantone, and Meng, who showcase the flexibility to simulate varying heterogeneous networks [36]. Schwarz and Ernst use Sinus Milieus to map the heterogeneity of adopters and apply it to eco-innovation diffusion [37]. Recent contributions to the literature give examples of combining both requirements in explanatory simulations [18,19].

Over the past 20 years agent-based modeling has spread widely in numerous scientific areas due to progress in computer hardware and software [38]. An agent-based model is a conglomerate of decision-making entities and behavioral rules, simulated in a shared environment [39]. Autonomous agents inspect their current state time step by time step and act according to predefined behavioral rules. Analyses of input/output histories and transitions within the model give insights about micro behavior and macro system results [40]. Simulations contribute to scientific discussions via the analysis of emergent and immergent effects [41]:

− Micro on macro: how do shifts in individual behavior affect the overall system?
− Macro on micro: if environmental conditions change, in which way does individual agent behavior evolve?

Handling numerous input variables to reconstruct complex systems and complex behavior is a key capability of agent-based models [42,43]. This advantage opens a wide range of applications in several disciplines including Economics, Social Sciences, and Biology. As Epstein summarizes, different reasons for agent-based modeling exist: prediction is the most important; others include explanation and education [44]. Many models simulate "Homo oeconomicus" attitudes; Epstein's and Axtell's Sugarscape model is an example in this category [45]. Stock market crashes, the spread of epidemics, traffic jam incidences, and panic escape behavior are further applications. However, the use of agent-based models is not limited to human agents, as examples in supply chain management, climate change, and search engine algorithms prove.

Dealing with complexity is a key strength of the agent-based paradigm, but it is also a major source of errors and prone to critique [46–48]. An increasing number of variables, rules, and conditions—often described as over-parameterization—renders replicability and validity difficult. Replicability is the ability to re-produce and re-execute models in different frameworks [31]. It is a fundamental requirement for generating reliable simulation results. Simplicity avoids over-parameterization: modeling approaches based on simple rules and few configuration parameters are superior in terms of interpretation and replication. They do not necessarily produce simple results; in contrast, simple predictors frequently result in complex behavior if they are executed in a multi-agent environment. Axelrod's KISS principle ("keep it simple, stupid") best describes this methodology, which is widely accepted by the scientific community [46]. "The complexity of agent-based modeling should be in the simulated results, not in the assumptions of the model." [46: 5]. Validity is a two-dimensional requirement: internal validity focuses on the model's correctness in terms of simulation (programming) code; external validity targets the matching of the simulation configuration with real-world observations [49]. In our model, validity derives from the integration of empirical data and the "grounding" in solid models and theories, like the use of Cournot competition and the General Bass Model. External validity is crucial to avoid crude input–output simulations and to build meaningful models for policy analyses.

3. The model

3.1. Objective and assumptions

Our model's objective is to simulate innovation adoption within a multi-agent market environment that allows the evaluation of policies.¹ Consumer demand is determined by the learning status (awareness of the product) and individual price thresholds (willingness-to-pay) of consumers. Changes in demand drive market entry and exit decisions as well as quantity and price. Diffusion proceeds if market interactions distribute awareness (Epidemic effect) and rivalry reduces the market price (Probit effect). Adoption drives supply-side rivalry and, vice versa, rivalry determines pricing and therefore adoption. We use Cournot quantity competition to model competitive dynamics [50]. The basic Cournot model rests on the following assumptions: Firms…

− behave non-cooperatively by maximizing individual profits,
− share identical cost structures (variable and fixed costs),
− produce a homogeneous product (no differentiation),
− select their output quantity simultaneously,
− have market power (that is, an individual firm's output decision affects market price).

¹ The NetLogo code and screenshots are available at http://www.openabm.org/model/2609. A video documentation "Policy induced diffusion of innovations" is available online at http://www.youtube.com/watch?v=9jNTl7TloLM.

3.2. Structure and procedures

Our simulation procedure progresses in twelve steps. The first and last steps bracket a loop that contains the repeated calculations per time step. These calculations can be classified into three blocks: demand, supply, and adoption. One loop repetition represents one simulation period. The loop is performed iteratively until an exit condition applies. Fig. 2 plots the end-to-end process flowchart in Unified Modeling Language (UML).

The simulation starts with initial configurations that populate the "World" with a set of N = 10,000 consumer agents I = {1, 2, …, N} and 20 supplier agents. Global constants and scenario constants are calibrated. Global constants are input parameters that remain unchanged across experiments, while the three scenario constants are input parameters that determine our eight predefined scenarios. Table 1 presents all constants and variables. Unlike constants, variables are periodically calculated output parameters, for example the market price p.

Consumer agents change their status: at the beginning, all agents are unaware (subset Iunaware). They turn aware if they receive a mass media or word-of-mouth interaction (subset Iaware). Innovation gives initial impulses while imitation sparks off widespread diffusion. The General Bass Model is applied to empirically validate these communication processes. In line with common coefficient estimates from empirical data, we calibrated a coefficient of innovation (also referred to as external influence or advertising effect) of 0.03 and a coefficient of imitation (also referred to as internal influence or word-of-mouth effect) of 0.38 [10].

[Fig. 2. UML flowchart of model procedures: initial configuration; (1) demand-side calculations (mass media interactions, word-of-mouth interactions, demand function); (2) supply-side calculations (individual quantities, market prices, individual profits, number of profitable suppliers); (3) adoption calculations (select adopters, define non-adopters); the loop repeats until the exit threshold is reached, after which the KPIs are calculated.]

The coefficient of imitation is indirectly calibrated through local dispersion, measured by the individual number of neighbors n(i) in the agent's "Small World". It represents the environment of surrounding agents to interact with. If an agent receives an interaction and turns aware, it decides in the very same period whether to adopt (subset Iadopters) or not (subset Inonadopters). The decision is driven by the agent's individual price sensitivity z(i) and the actual market price p. We assume that net income and price sensitivity correlate and use net income distributions from census microdata to empirically validate z(i), in line with general assumptions of Probit theory and adopter characteristics [2,12].

The variable costs and periodic fixed costs are additional costs that originate mainly from the more expensive Smart Meter technology compared to conventional Ferraris Meters. We calculate both costs as well as the market price p over a period of twelve years, which corresponds to the official gauging period for Smart Meters in Germany. We assume additional Meter costs of about 30€: 120€ for a Smart Meter (including automatic Meter reading) compared to 90€ for a Ferraris Meter (30€ hardware costs plus 5€ per manual reading per year). More important are higher fixed costs that originate for example from Gateways, Head-Ends, data repositories and storage, server hardware, and web frontends. These components are shared across all consumers, but are necessary investments independent of the actual number of consumers. As a result, with respect to market price p, households need to pay a premium of about 5–10€ per month for Smart Metering tariffs. This monthly premium accumulates to about 700–1400€ over a duration of twelve years (we measure this premium as the market price p later in the results section).
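For orientation, the cumulative adoption path implied by the calibrated coefficients of innovation (0.03) and imitation (0.38) can be sketched with the closed-form solution of the Bass model. This is only an illustration of the calibration target; the simulation itself spreads awareness through discrete agent interactions rather than this aggregate equation.

```python
import numpy as np

p_coef, q_coef = 0.03, 0.38          # calibrated coefficients of innovation and imitation
t = np.arange(0, 34)                 # years

# Closed-form cumulative adoption share of the standard Bass model
F = (1 - np.exp(-(p_coef + q_coef) * t)) / (1 + (q_coef / p_coef) * np.exp(-(p_coef + q_coef) * t))

for year in (5, 10, 15, 20):
    print(year, round(float(F[year]), 2))   # roughly traces the "S"-curve shape over time
```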

3.2.1. Demand-side calculations

The adoption loop begins on the consumer side with a simulation of external (mass media) and internal (word-of-mouth) interactions. Spread of awareness is the core driver in Epidemic theory. It assumes diffusion to be sparked off by mass media interactions, followed by an epidemic rise in word-of-mouth causing the typical "S"-shaped curve (see Fig. 1). External and internal interactions trigger the awareness of consumers in our model. Periodically, D consumers are randomly selected and added to subset Iaware:

Iaware(t, D) = Iunaware{1, 2, …, D}.   (1)

In case of an active information policy, M additional consumers turn aware:

Iaware(t, M) = Iunaware{1, 2, …, M}.   (2)

Word-of-mouth interactions trigger awareness through imitation. Adopters of former periods (subset Iadopters) interact periodically with one unaware consumer within their "Small World". These consumers turn aware and are also added to subset Iaware:

Iaware(t, n) = Iunaware{1, 2, …, Iadopters} with n(i) of Iadopters > 0.   (3)

Altogether, these three subsets define the periodic subset of aware consumers:

Iaware(t) = Iaware(t, D) + Iaware(t, M) + Iaware(t, n).   (4)
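A minimal sketch of this periodic awareness update, assuming simple Python set/dict structures for the agent population (the function name and data layout are ours, not taken from the model code):

```python
import random

def awareness_step(unaware, adopters, neighbors, D, M=0):
    """One period of awareness spread following Eqs. (1)-(4):
    D random mass-media interactions (Eq. 1), M additional policy-driven interactions (Eq. 2),
    and one word-of-mouth interaction per previous adopter within its "Small World" (Eq. 3)."""
    pool = list(unaware)
    random.shuffle(pool)
    newly_aware = set(pool[:D + M])                  # Eqs. (1) and (2): external influence
    for i in adopters:                               # Eq. (3): imitation via neighbors
        candidates = [j for j in neighbors.get(i, []) if j in unaware and j not in newly_aware]
        if candidates:
            newly_aware.add(random.choice(candidates))
    return newly_aware                               # Eq. (4): union of the three subsets
```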

Iaware allows us to calculate the underlying demand function f with f = a * p + b, where the parameters a < 0 and b > 0 reflect consumer preferences: first, all aware agents are sorted by z(i) in descending order; second, a linear regression is performed to define the slope a and the constant b. The y-value is z(i), and the x-value is the order position. The following example illustrates the calculation for one period. We assume six aware consumers {z(1), z(2), z(3), z(4), z(5), z(6)} with z(Iaware) = {100, 80, 60, 40, 20, 0}. Running a linear regression provides the slope parameter a = −20 and the constant b = 120. Hence our demand function is given by f = −20 * p + 120.
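The worked example can be reproduced with an ordinary least-squares fit; this sketch assumes that order positions start at 1, which is what reproduces b = 120:

```python
import numpy as np

z_aware = [100, 80, 60, 40, 20, 0]              # price thresholds of the aware consumers, sorted descending
positions = np.arange(1, len(z_aware) + 1)      # order position used as the x-value

a, b = np.polyfit(positions, z_aware, deg=1)    # least-squares line z ≈ a * position + b
print(a, b)                                     # -> approximately -20.0 and 120.0, i.e. f = -20 * p + 120
```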

Table 1
Parameter overview.

Parameter  Type                   Description                                     Value        Comment
N          Global input const.    Population (no. of consumers)                   10000        Empirical validation via General Bass Model
Nexit      Global input const.    Exit condition (no. of consumers)               1000         Simulation ends if 90% of consumers are aware
Cvar       Global input const.    Variable costs                                  30€          Additional costs per Smart Meter compared to conventional Meter
Cfix       Global input const.    Periodic fixed costs                            1000€        Supplier's costs to offer a Smart Meter tariff
D          Global input const.    Periodic mass media interactions                225          Empirical validation via General Bass Model
z(i)       Global input const.    Consumer i's individual price sensitivity       μ = 1307€    According to real-world income distribution from census microdata
n(i)       Global input const.    Consumer i's no. of neighbors                   μ = 5.5      Empirical validation via General Bass Model
L          Scenario input const.  Market liberalization                           1 or 0       L = 0 if monopolistic; L = 1 if liberalized
M          Scenario input const.  Additional artificial mass media interactions   0 or 225     M = 0 is default; M = 225 includes info. policy
G          Scenario input const.  Consumer's purchase bonus                       0€ or 200€   G = 0€ is default; G = 200€ includes grant
f          Output variable        Demand function with f = a*p + b                –            –
a          Output variable        Slope-factor of demand function                 –            –
b          Output variable        Constant-factor of demand function              –            –
q          Output variable        Output quantity of each supplier                –            –
p          Output variable        Market price                                    –            –
w          Output variable        Supplier's individual profit                    –            –
s          Output variable        No. of suppliers                                –            –

3.2.2. Supply-side calculations

The following steps simulate strategic decision making of suppliers with Cournot linear algebra [50]. Suppliers s compete in quantities. An essential behavioral assumption is the "Cournot conjecture": each supplier aims to maximize its profit w based on the expectation that its own output choice will not affect the output choices of its rivals. Put differently, suppliers take their rivals' quantities as given. In the Cournot market equilibrium a firm's optimal quantity q is given by:

q(s) = (b − Cvar) / (−a(s + 1)).   (5)

The profit-maximizing output choice depends on the number of suppliers s as well as the exogenous demand factors a and b, and the cost parameter Cvar determined by the underlying production technology. More competition, as measured by a larger number of suppliers, will lower individual output. In other words, with increasing competition individual market shares will decrease. The market price p is determined where demand equals total supply Q (which is the sum of the individual supplies, Q = q * s):

p(q, s) = a * (q * s) + b.   (6)

Knowledge of q and p across all possible supplier situations s allows us to calculate individual profits w:

w(p, q, s) = ((p(q, s) − Cvar) * q(s)) − Cfix.   (7)

From this point, we know how much profit w each supplier would generate in case s suppliers compete in the market. Strategic decision making implies that competitors enter the market to earn profits. Negative profits (w < 0) cause firms to leave.

One exception is important: in "M"-scenarios with non-liberalized markets, one monopolist serves the market (s = 1) and entries are not allowed. In "C"-scenarios with competition, as many profitable suppliers as possible enter the market: s is increased step by step until another entry would lead to negative profits. We pick up the above example with six aware consumers and assume a monopolistic market with Cvar = 0€ and Cfix = 30€. The monopolist maximizes its profit at one half of the market quantity (q = 3), which results in a market price of p = 60€:

w(1) = ((60 − 0) * 3) − 30 = 150.   (8)

Market liberalization would cause two additional competitors to step in. Market price shifts to p = 30€:

w(2) = ((40 − 0) * 2) − 30 = 50 and w(3) = ((30 − 0) * 1.5) − 30 = 15.   (9)

Any additional entrant would generate losses: w(4) = −1.2. Thus, strategic decision making prevents this rival from entering, and s is set to s = 3. As the example shows, the competitive impact is radical: the evolution from monopoly to duopoly shrinks a firm's profit from 150 to 50, ending up at 15 in the final instance (s = 3). The same procedure is applied to calculate exit decisions. Decreasing market potential reduces s and shakeout begins. One assumption concerns the order of entry: according to Klepper's survival patterns, earlier entrants are assumed to stay longer in the market than later entrants [25]. The first mover never leaves the market, because we do not allow s to fall below s = 1; innovation diffusion would be impossible if all suppliers exited and nobody sold the product. This assumption may result in losses at the beginning and/or the end of the simulation, when awareness and adoption are too low to cover fixed costs.
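A compact sketch of Eqs. (5)–(7) together with the entry rule, reproducing the worked example above (the function and variable names are ours):

```python
def cournot(a, b, c_var, c_fix, s):
    """Symmetric Cournot outcome for s suppliers and inverse demand p = a*Q + b with a < 0."""
    q = (b - c_var) / (-a * (s + 1))   # Eq. (5): profit-maximizing quantity per supplier
    p = a * (q * s) + b                # Eq. (6): market price at total supply Q = q*s
    w = (p - c_var) * q - c_fix        # Eq. (7): individual profit per supplier
    return q, p, w

# Worked example: demand f = -20*p + 120, Cvar = 0€, Cfix = 30€
a, b, c_var, c_fix = -20.0, 120.0, 0.0, 30.0

s = 1                                          # monopoly: q = 3, p = 60, w = 150
while cournot(a, b, c_var, c_fix, s + 1)[2] >= 0:
    s += 1                                     # enter as long as the marginal entrant is not loss-making
print(s, cournot(a, b, c_var, c_fix, s))       # -> 3 suppliers with (1.5, 30.0, 15.0); w(4) would be -1.2
```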

3.2.3. Adoption calculations

After the final decision on the number of suppliers s, the next step randomly selects the total market quantity of q * s adopters from Iaware whose willingness-to-pay is greater than or equal to the market price p:

Iadopters = {1, 2, …, q * s} with z(i) ≥ p and Iadopters ⊆ Iaware.   (10)

Residual aware agents who were not selected are added to Inonadopters. Non-adopters are not eligible for further adoptions:

Inonadopters = Iaware − Iadopters.   (11)

We configured this restriction of adoption eligibility in order to implement a precise exit condition based on awareness. This precision is critical for the calculation of key performance indicators (KPIs) that allow us to evaluate and compare the outcome of each repetition of each scenario. In contrast to other simulations—which typically set a maximum simulation runtime as the exit condition (see [19] as an example)—our use of awareness as the exit trigger enables us to measure the spread of awareness, the speed of diffusion, and the level of diffusion separately. For example, information policies always induce the spread of awareness, but this does not mean that they always induce adoption.

The periodic loop is repeated until the exit condition Nexit is reached, that is, until the number of residual unaware consumers falls below 1000: Iunaware < Nexit. The simulation ends with the calculation of KPIs. All KPIs have in common that they are calculated at the final time step, when the exit condition becomes active. Table 2 explains our KPIs and how we use them to measure experiment outcomes. KPIs are mandatory to evaluate experiment results that include regulatory interventions. In line with our research question, we evaluate policies in two directions: effectiveness and efficiency. Effectiveness is measured via saturation effects (LEVEL) and acceleration (SPEED) [3]. An effective policy causes consumers to adopt earlier and/or induces consumers who would not adopt without the intervention to adopt. Efficiency measures the trade-off between policy effectiveness and associated costs. For instance, policies may boost adoption speed and level markedly, but generate offsetting costs (PCOST). We measure efficiency as the quotient of PCOST and LEVEL as well as the quotient of PCOST and SPEED. The associated KPIs for efficiency are PCLVL and PCSPD.
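A minimal sketch of Eqs. (10) and (11), again with our own function name and data layout:

```python
import random

def adoption_step(aware, z, p, q, s):
    """Randomly pick q*s adopters among aware consumers with z(i) >= p (Eq. 10);
    all remaining aware consumers become permanent non-adopters (Eq. 11)."""
    eligible = [i for i in aware if z[i] >= p]
    random.shuffle(eligible)
    adopters = set(eligible[:int(q * s)])
    non_adopters = set(aware) - adopters
    return adopters, non_adopters
```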

3.3. Policy design, scenarios, and simulation execution

Our research question asks how lawmakers may induce the diffusion of Smart Meters effectively and efficiently. Regulators face different decision variables within the process of policy design; an example of such a decision variable is the kind of policy they issue. We guide our analysis by four decision dimensions: type, targeting, timing, and scale of intervention. Type defines the kind of policy: market liberalization, information policies to educate consumers, monetary grants released as purchase bonuses, or combinations thereof. We create eight predefined scenarios to measure the type dimension:

− M: Baseline scenario in the monopoly
− MI: Informational intervention in the monopoly
− MG: Purchase bonus in the monopoly
− MIG: Combination of both informational stimulus and monetary grant in the monopoly
− C: Baseline scenario for the competitive market (liberalization in t = 0)
− CI: Informational intervention in the competitive market
− CG: Purchase bonus in the competitive market
− CIG: Combination of both informational stimulus and monetary grant in the competitive market

Liberalization opens the market for competition and evolves scenario M into scenario C. We simplified the calculation of liberalization and did not associate specific liberalization costs with it. This simplification increases the ability to perform cross-scenario comparisons. Information policies are modeled as additional artificial mass media interactions M that increase awareness. Monetary grants G are modeled as an increase of the consumer's price sensitivity z(i). For instance, if agent i with z(i) = 1000€ receives a purchase bonus G = 200€, i will adopt at any market price p ≤ 1200€. We derive the costs for the information policy from the market price p and related advertising costs: the average market price across all competitive scenarios and across all simulated periods is 828€. We assume a 30% advertising share of the total purchase price, which leads to about 250€ in purchase-related advertising costs for information policies. We set M = 225 to duplicate the amount of mass media awareness interactions compared to the baseline scenarios. A simple duplication is the best-suited method to enable comparisons and interpretations relative to the baseline. The monetary grant was calibrated at 200€. The size of the grant was derived from the successful "Cash-for-Clunker" program in Germany's automotive sector, which subsidized about 25% of the costs for a new car. We mapped this 25%-value to the average market price of 828€ in our Smart Meter simulation and calibrated the monetary grant at 200€. Monetary grants are accumulated only for adopters who received the grant (see the PCOST formula in Table 2).

Table 2
Key performance indicators.

KPI     Description                                                 Calculation                        Interpretation
LEVEL   Cumulative innovation diffusion after the final time step   Iadopters / N                      How many consumers adopted?
SPEED   No. of time steps until exit condition is reached           Max (t)                            How fast was the adoption procedure?
FIRST   No. of initial competitors ("First Movers") in t = 0        s (t = 0)                          How attractive was the market initially?
MAXSU   Maximum no. of suppliers across all time steps              Max (s)                            How intensive was rivalry at the peak?
PRICE   Average adoption price across all adopters                  ∑ p / Iadopters                    How intensive was rivalry over the lifecycle?
CPLUS   Consumer surplus of adopters                                ∑ (z(i) − p)                       How did consumers profit from the policies?
SPLUS   Supplier surplus (measured as cumulative profits)           ∑ ((p − Cvar) * q) − Cfix          How did suppliers profit from the policies?
PCOST   Cumulative policy costs (scenario specific)                 ∑ (Iadopters(t) * G) + (M * 250)   How much did the implemented policies cost?
PCLVL   Ratio to measure the efficiency of the increase in LEVEL    PCOST / Δ LEVEL                    How efficient was cum. diffusion induced?
PCSPD   Ratio to measure the efficiency of the increase in SPEED    − (PCOST / Δ SPEED)                How efficient was speed of adoption induced?
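To make the efficiency KPIs concrete, the following sketch computes PCLVL and PCSPD for an intervention relative to its baseline, using the C and CI figures reported later in Table 3 as an illustration (the function and variable names are ours; deltas are taken against the corresponding baseline scenario):

```python
def efficiency_kpis(pcost, level, speed, level_base, speed_base):
    """PCLVL = PCOST / ΔLEVEL and PCSPD = -(PCOST / ΔSPEED), as defined in Table 2."""
    pclvl = pcost / (level - level_base)       # cost per additional percentage point of diffusion
    pcspd = -(pcost / (speed - speed_base))    # cost per year of acceleration (ΔSPEED is negative)
    return pclvl, pcspd

# Scenario CI vs. baseline C (values in T€, %, and years, taken from Table 3)
print(efficiency_kpis(pcost=762, level=55, speed=13, level_base=50, speed_base=21))
# -> (152.4, 95.25), matching the rounded PCLVL = 152 and PCSPD = 95 reported for CI
```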

The other three dimensions (targeting, timing, and scale) were analyzed across the eight predefined scenarios. We performed scenario analyses for targeting and sensitivity analyses to calculate the impact of timing and scale. Sensitivity analyses are a well-suited data generation method that utilizes the advantage of (agent-based) simulations: numerous repetitions with varying parameter settings can be performed quickly [3]. Timing includes the point in time as well as the duration of the intervention. Scale describes the magnitude of M and G: the number of periodically educated consumers within info policies (M) and the size of purchase bonuses within monetary interventions (G). Targeting tackles the decision of which target group should be in scope for the implemented policy. Anybody? A regional subgroup? Consumers with low income, little spending power, and a low probability to adopt? We created five specific scenarios that tested the impact of targeting. The rationale behind n− and n+ targeting is to address consumers with a low probability to receive an awareness interaction (n−) or to address consumers who may act as opinion leaders with many interactions (n+), spreading the word and multiplying awareness:

− (z−) Targeting: Only consumers with a willingness-to-pay z(i) below the average of μ = 1307€ are affected by the intervention (details about the averages of z(i) and n(i) are given in Table 1)
− (z+) Targeting: Only consumers with a willingness-to-pay z(i) above the average of μ = 1307€ are affected
− (n−) Targeting: Only consumers with a number of neighbors n(i) below the average of μ = 5.5 are affected
− (n+) Targeting: Only consumers with a number of neighbors n(i) above the average of μ = 5.5 are affected
− (area) Targeting: Only consumers in a specific geographical region are affected

We executed the model in NetLogo, a widely used agent-based simulation tool. It is very professional in its appearance, documentation, and usability and is therefore popular among educational users [51]. Since random dispersion and random selection of consumers cause contingency effects, we perform numerous iterations to factor out these contingency effects. Values presented in the results section are arithmetic means across 1000 repetitions per experiment.

4. Results

4.1. First dimension: type of intervention

Different intervention types provide regulators with tools to induce diffusion. Our arsenal covers market liberalization, information policies, and monetary grants across eight predefined scenarios (see Table 3). M and C constitute baselines for monopolistic and competitive market structures.² Comparisons between both structures measure liberalization impacts. Fig. 3 and Table 3 present results across the eight predefined scenarios. Fig. 3 focuses on the visualization of periodic diffusion and its influencers, awareness and competition. Table 3 shows end results based on our KPIs.

Diffusion evolves when mass media kicks off the initial awareness of Innovators and Early Adopters, who adopt and cause further spread of awareness through word-of-mouth interactions (network effects). The demand curve shifts to the right, incentivizing more suppliers to enter in the case of liberalized markets. The monopoly evolves into an oligopoly. Synergies occur: intensifying rivalry reduces the market price p, which increases adoption ratios and in turn awareness. Shakeout begins when the "S"-shape proceeds to the point of inflection and demand saturation shifts the demand curve back to the left. These processes showcase Soberman's and Gatignon's ideas on supply–demand endogeneity [6]. But they occur only in liberalized markets: a monopolist has no incentive to reduce the price below the profit-maximizing monopoly price. Although all eight predefined scenarios show the typical "S"-shaped diffusion, the KPIs confirm that they differ considerably in effectiveness and efficiency (see the differences in LEVEL, SPEED, PCLVL, and PCSPD in Table 3). The effectiveness in terms of LEVEL and SPEED ranges from totally ineffective to highly effective interventions. An example of ineffectiveness is the LEVEL-inducement in scenario MI compared to M; vice versa, the same comparison is a good example of an effective SPEED-inducement. Table 3 shows similar results with regard to efficiency. For example, both information policies and monetary grants are effective methods to speed up diffusion in liberalized markets. But as the PCSPD results show, an information stimulus is much more efficient: each year of acceleration costs about 95 T€ compared to 245 T€ with monetary grants. These discrepancies between effectiveness and efficiency stress that policy makers need to decide about their primary inducement objective: SPEED or LEVEL.

² We refer to a non-liberalized market as monopoly or monopolistic and describe liberalized markets as competition or competitive. However, flexible entry and exit of suppliers may cause liberalized markets to constitute monopolistic market structures if s = 1.

4.1.1. Market liberalization

Missing rivalry is a burden for consumers: it prevents price reductions and therefore holds back adoption. Vice versa, diffusion proceeds faster and attains higher levels in liberalized markets (see Fig. 3 and Table 3): M reaches a LEVEL of 36% after 33 years, while C ends up at 50% with a SPEED of 21 years. We did not measure efficiency, because we did not associate any cost with liberalization. But the economic benefits in terms of CPLUS (consumer surplus) and SPLUS (supplier surplus) are considerably higher under competition: C results in a 49% higher consumer surplus compared to M and cuts the monopolist's surplus by −65%. Overall welfare (CPLUS + SPLUS) exceeds the monopoly level by 9%. In sum, liberalized markets outperform monopolies in both effectiveness and efficiency. Policy makers should consider market liberalization as a preliminary requirement in advance of any other inducement.

Table 3
KPI results across the eight predefined scenarios.

Scenario  LEVEL in %  SPEED in years  PCOST in T€  FIRST in firms  MAXSU in firms  PRICE in €  CPLUS in T€  SPLUS in T€  PCLVL in T€   PCSPD in T€
M         36          33              0            1.0             1.0             1194        3195         1697         –             –
MI        36          19              1139         1.0             1.0             1218        3210         2252         ineffective   81
MG        40          29              797          1.0             1.0             1304        3564         2635         199           199
MIG       40          17              1825         1.0             1.0             1331        3581         3088         456           114
C         50          21              0            1.0             3.6             947         4763         593          –             –
CI        55          13              762          2.1             4.8             825         5357         466          152           95
CG        61          16              1225         1.8             4.8             871         6285         547          111           245
CIG       65          11              1977         3.0             6.0             766         6937         348          132           198

4.1.2. Information policy

The second intervention type forces suppliers to educate consumers. Scenarios MI and CI map this educational activity through a duplication of the periodic awareness interactions. Surprising results occurred. Info policies are partly ineffective in monopolies, but highly effective in competition: LEVEL remains unchanged in closed markets, whereas C predicts an increase from 50 to 55%. SPEED increases notably in both market structures, indicating that info policies generally accelerate the diffusion of innovations. While effectiveness is limited in non-liberalized markets, the info policy is an effective and efficient tool in competition. The diffusion level increases if info policies induce competition: awareness shifts the demand curve, attracts more firms to enter, and amplifies rivalry in terms of quantity and price setting. Quicker and higher adoption are good examples of how policy makers may utilize supplier conduct to induce diffusion, as described by Robertson, Gatignon, and Vettas [15,16,20]. The impact is measurable through FIRST and MAXSU. The PCSPD results for MI and CI underline the efficiency boost of such educational stimuli. In addition, consumer surplus rises by 67% and overall economic welfare by 7%.

4.1.3. Monetary grants

Our 200€ bonus releases its effectiveness in both market structures. SPEED and LEVEL outperform the baselines.

But the mechanisms differ: in an open market, bonuses expand the market potential and intensify competition in favor of the consumer; awareness and adoption interact synergistically. In closed markets, the monopolist skims the higher sales potential and increases the market price by half of the consumer's purchase bonus. The monopolist's surplus rises. In terms of the working principle, purchase bonuses increase the adoption probability at a given market price, because more aware consumers will accept the price. Real-world examples exist especially among eco-innovations, where purchase bonuses were released as an economic recovery instrument during the financial crisis, for example as "Cash-for-Clunker" programs in the automotive sector. In sum, a monetary intervention is an effective policy but primarily favors the monopolist in a non-liberalized environment. In both market structures, the efficiency of monetary grants is limited to LEVEL-inducements: the costs for one year of SPEED acceleration (PCSPD) are at least twice as high compared to information policies (for example 245 T€ in CG compared to 95 T€ in CI). Regulators should release monetary grants only if the objective is to reach a higher saturation.

4.1.4. Combined interventions

The primary driver of combinations is to gain synergies between inducements. Results indicate that these synergies occur only in liberalized markets.

[Fig. 3. Cumulative diffusion (in %), number of aware consumers, and number of competing suppliers (Industry Lifecycle) over time in years, across the eight predefined scenarios: Monopoly (M), M + Info (MI), M + Grant (MG), M + both (MIG), Competition (C), C + Info (CI), C + Grant (CG), C + both (CIG).]

Synergies between rising awareness and intensifying competition induce SPEED and LEVEL more strongly than in any other predefined scenario. Rivalry is engaged earlier, with a higher maximum number of competitors entering the market. The price slumps quickly. Fierce competition cuts supplier profits while consumer profits rise. Overall welfare is highest in this scenario. On the contrary, no synergies occur in MIG: SPEED increases as in MI, whereas LEVEL and PRICE match the MG results. The monopolist gains huge surpluses. The duplication of policy costs indicates that a combined intervention is disadvantageous from a benefit–cost perspective.

4.2. Second dimension: targeting of intervention

Policy makers may use targeting to tackle the heterogeneity of adopters and limit interventions to specific subgroups. This "cherry picking" can reduce policy costs because fewer recipients receive grants or education. Our simulation results measure both positive and negative impacts: there is limited potential to increase effectiveness and efficiency, but there is much potential to reduce both with poor targeting. We identified only one advantageous targeting option per policy. Therefore, regulators should apply targeting carefully. Table 4 summarizes results for the five targeting scenarios compared to their baselines without targeting (CI and CG). We measured virtually no difference between market structures and therefore chose the CI and CG results to be displayed in the table.

4.2.1. Information policy

z+ is the only advantageous option with info policies. Targeting consumers with z(i) above the average of μ = 1307€ results in slight improvements in effectiveness across LEVEL (+3%) and SPEED (−4%). The z+ subgroup has a high adoption probability. As more consumers adopt early, awareness spreads faster. Vice versa, other targeting options—especially a regional focus (area)—slow down the circulation of interactions and jeopardize inducements. Efficiency rises, as the results for PCLVL (−20%) and PCSPD (−15%) show. Optimized targeting enables higher effectiveness at lower costs. The savings potential is high, even if additional execution costs to define and reach the specific consumer segment may jeopardize targeting benefits in the real world.

4.2.2. Monetary grants

z− targeting is the best option with monetary grants. PCOST slumps as the number of grant receivers drops from 6100 to 3400. Again, z− is the only option that increases policy effectiveness: it tackles consumers with a low willingness-to-pay who would not adopt otherwise and induces them to adopt. Targeting consumers with a low adoption probability increases the effectiveness of monetary grants in terms of LEVEL (+5%) and SPEED (−6%). Efficiency gains are enormous: PCLVL slumps by −56% and PCSPD by −54%. The huge impact on efficiency in the context of monetary grants is not surprising: grants that are paid out to recipients who would adopt in any event are inefficient. Regulators can operationalize z− targeting, for example, by paying out grants to residential households and excluding commercial sectors.

4.2.3. Combined interventions

The results for info policies (z+) and purchase bonuses (z−) contradict each other. Combined interventions confront regulators with the challenge of de-coupling the targeting of both interventions. If this is not possible, lawmakers should focus solely on z− targeting, because the grant-related z− benefits outperform the info policy-related z+ benefits. Furthermore, z+ causes strong negative effects when applied with monetary grants. We performed data mining on micro-level characteristics to gain insights on targeting drivers. In line with Rogers [12] and Mahajan et al. [13], we measured adopter heterogeneity through discrepancies in both Probit and Epidemic characteristics. Fig. 4 visualizes these statistics per adopter category in the C-scenario. The figure explains why z(i) targeting options affect diffusion more than n(i) options: the price threshold z(i) shows a huge gap between Innovators and Early Adopters, while the same gap in the number of neighbors n(i) is much smaller. Even if optimized n(i) targeting brings advantages, the impact on diffusion is smaller compared to optimized z(i) targeting.
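A small sketch of how such targeting rules can be operationalized on the consumer population; the distributions below are placeholders chosen only to match the reported means (μ = 1307€ and μ = 5.5 from Table 1), not the model's empirical calibration:

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(1307.0, 400.0, 10_000)   # price thresholds; mean as in Table 1, spread assumed
n = rng.poisson(5.5, 10_000)            # numbers of neighbors; mean as in Table 1, distribution assumed

subgroups = {
    "z-": z < z.mean(),    # low willingness-to-pay, low adoption probability
    "z+": z >= z.mean(),   # high willingness-to-pay, high adoption probability
    "n-": n < n.mean(),    # few neighbors, unlikely to receive word-of-mouth
    "n+": n >= n.mean(),   # many neighbors, potential opinion leaders
}

# Example: pay the 200€ grant only to the z- subgroup
G = 200.0
z_effective = np.where(subgroups["z-"], z + G, z)
```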

4.3. Third dimension: timing of intervention

Policy makers need to decide about the policy start time and duration. Sensitivity analyses yield clear results across all scenarios in our simulation: early interventions are superior in effectiveness and efficiency. Almost linear correlations between start time and effectiveness ratios exist.

Table 4
Impact of targeting options in competitive markets.

               Information policy                                                          Monetary grant
               LEVEL           SPEED            PCLVL           PCSPD                      LEVEL           SPEED            PCLVL           PCSPD
Targeting      in %   Δ in %   in years Δ in %  in T€   Δ in %  in T€   Δ in %             in %   Δ in %   in years Δ in %  in T€   Δ in %  in T€   Δ in %
Without        55     0        13       0       152     0       95      0                  61     0        16       0       111     0       245     0
z−             54     −1       13       4       198     30      99      4                  64     5        15       −6      49      −56     114     −54
z+             56     3        12       −4      122     −20     81      −15                49     −19      21       32      −594    −633    n.a.    n.a.
n−             55     0        12       −1      151     −1      84      −12                56     −9       18       14      100     −10     200     −18
n+             53     −3       15       21      304     99      152     60                 56     −9       18       13      99      −11     198     −19
area           52     −5       18       40      523     243     348     266                53     −14      20       21      97      −13     290     18

Notes: The "delta" columns display the relative deviation from the predefined scenario results CI and CG, respectively. Negative SPEED delta values are advantageous (acceleration).

[Fig. 4. Characteristics per adopter category in scenario C: average price threshold z(i) in €, average number of neighbors n(i), and average share of awareness spread from innovation (mass media) versus imitation (word-of-mouth), each broken down by Innovators, Early Adopters, Early Majority, Late Majority, and Laggards.]

On the contrary, duration correlates in a non-linear fashion with LEVEL and SPEED. In general, short durations increase effectiveness, while long durations are counterproductive in terms of efficiency. Early policy start times increase effectiveness and efficiency, and short policy durations increase efficiency. Keeping policy costs in mind, lawmakers do best if they intervene early and over a short timeframe. For instance, an info policy issued at the beginning and executed over a period of only ten years boosts SPEED from 33 to 24 years with 65% less policy costs compared to the default MI-scenario. We measured two exceptions to this proposition: LEVEL remains unchanged in the MI-scenario, so timing has no impact there. The second exception is efficiency in the CG-scenario, in which policy costs can be reduced via long durations and small-scale interventions. We will pick up the aspect of scalability in the next sub-section.

The start-time findings are in line with earlier publications on Epidemic models that stress external influence through mass media to create initial awareness impulses [13]. Fig. 4 displays this empirically validated [7] shift from innovation to imitation: Innovators primarily receive mass media interactions (> 90%) and spark off the awareness of others through word-of-mouth. Timing of interventions is more important in competitive markets compared to monopolies. Synergies originate from more intense competition if interventions are released before the Industry Lifecycle reaches its climax. The CI, CG, and CIG Industry Lifecycles in Fig. 3 indicate these synergies: early timing shifts the rivalry peak to an earlier and higher maximum. These effects are crucial with respect to market liberalization. Late liberalization reduces the market potential for entrants. Early liberalization (t < 9) engages three or four rivals to compete (measured via MAXSU), whereas late liberalization (t > 11) reduces MAXSU to two suppliers.

4.4. Fourth dimension: scale of intervention

An appropriate scale is important due to trade-offs between policy costs and policy effectiveness. Lower magnitudes are cheaper, but could be ineffective; large scales might be wasteful. Correlations between scalability, effectiveness, and efficiency differ depending on market structure and KPI. Fig. 5 shows the results for different intervention scales in MI, CI, MG, and CG. The x-axis is calibrated to the default configurations of the predefined scenarios: 0% represents the values M = 225 and G = 200€. No relevant synergies were measured in the combined scenarios; MIG and CIG results are simply the sum of their non-combined equivalents and are not displayed in Fig. 5.
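The scale sensitivity shown in Fig. 5 is, in essence, a parameter sweep over the policy inputs. A sketch of such a sweep, assuming a hypothetical run_scenario() wrapper around the simulation that returns the KPIs of Table 2 (the wrapper, its signature, and the sampled grid are ours, not the study's actual tooling):

```python
M_DEFAULT, G_DEFAULT, REPETITIONS = 225, 200.0, 1000

def sweep_scale(run_scenario, policy, scales=(-1.0, -0.5, -0.25, 0.0, 0.5, 1.0, 2.0)):
    """Sweep one policy lever: policy = 'info' scales M (MI/CI-type runs),
    policy = 'grant' scales G (MG/CG-type runs).
    run_scenario(M, G, repetitions) is assumed to return averaged KPIs, e.g.
    {'LEVEL': ..., 'SPEED': ..., 'PCLVL': ..., 'PCSPD': ...}."""
    results = {}
    for scale in scales:                                       # -1.0 = -100%, 2.0 = +200% on the x-axis of Fig. 5
        M = round(M_DEFAULT * (1 + scale)) if policy == "info" else 0
        G = G_DEFAULT * (1 + scale) if policy == "grant" else 0.0
        results[scale] = run_scenario(M=M, G=G, repetitions=REPETITIONS)
    return results
```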

4.4.1. Information policies

The simulation outcomes confirm the type-dimension conclusions of Sub-section 4.1. The missing slope of MI as well as the slight incline of CI confirm that, with respect to LEVEL, scaling info policies is not (MI) or only marginally (CI) effective. MI is ineffective and therefore inefficient, because any inducement is a waste of money (not displayed in PCLVL). The LEVEL-related efficiency of CI is limited: an unsteady and slightly inclining slope characterizes the PCLVL of CI. The increase in LEVEL-efficiency between a CI-scale of −10% and 120% is caused by intensifying competition due to another market entrant, but this change in rivalry is almost irrelevant, as the infinitesimal change in the LEVEL slope indicates. The results are different with regard to SPEED. The declining slopes of MI and CI indicate good scalability for triggering SPEED in both market structures. High efficiency is confirmed by the steady and slowly inclining PCSPD curves. In sum, regulators are able to scale information policies if their primary focus is adoption speed. In situations where the adoption level is the primary objective, not only are educational inducements the wrong type of policy, regulators will also be unable to scale these policies effectively or efficiently.

4.4.2. Monetary grants

We measured almost opposite results for scenarios MG and CG. As the steadily inclining slope in terms of LEVEL shows, the scalability of monetary grants is highly effective in both market structures if saturation is the primary objective. As expected, efficiency is limited to the competitive market: monetary grants induce competition and cause comparatively strong jumps in LEVEL at relatively small policy costs. Vice versa, the monopolist profits most in closed markets. Even worse, the PCLVL outcomes for small bonuses (scale between −100% and −25%) indicate that grants are paid out to a huge number of consumers compared to the small increase in LEVEL. There is a minimum grant size required to compensate for the price increase of the monopolist.


Fig. 5. Scale-sensitivity for LEVEL, SPEED, PCLVL, and PCSPD. (Four panels plot LEVEL in %, SPEED in years, PCLVL in T€, and PCSPD in T€ against the intervention-scale from −100% to 200% for the scenarios MI, CI, MG, and CG.)

SPEED improves in both market structures, but monetary grants are less scalable in this respect than info policies. Furthermore, efficiency is low, as the rising PCSPD-curves show. Grant-based SPEED-inducements are possible, but they will be very expensive compared to the alternative of educational policies.
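The scale results in Fig. 5 come from varying each policy's magnitude around its default. A minimal sketch of how such a sweep could be organised is shown below. The defaults M = 225 and G = 200 € are the scenario values stated above; `run_simulation` is a hypothetical stub standing in for the full agent-based model, and its response numbers are invented purely for illustration.

```python
# Sketch of a scale-sensitivity sweep. run_simulation() is a toy stub that
# stands in for the full agent-based model; its numbers are illustrative only.

M_DEFAULT = 225      # default mass-media contacts per period (info policy)
G_DEFAULT = 200.0    # default grant per adopter in EUR (monetary grant)

def run_simulation(scenario, info_contacts, grant_eur):
    """Toy stub returning illustrative KPIs (LEVEL in %, SPEED in years,
    policy cost in thousand EUR). Not the authors' calibrated model."""
    competitive = scenario.startswith("C")
    level = 55 + (0.05 * grant_eur if competitive else 0.01 * grant_eur)
    speed = max(10.0, 33 - 0.03 * info_contacts - 0.01 * grant_eur)
    cost = (0.4 * info_contacts + 0.9 * grant_eur) * (1.0 if competitive else 1.2)
    return {"LEVEL": level, "SPEED": speed, "COST": cost}

def sweep(scenario, scales=(-1.0, -0.5, 0.0, 0.5, 1.0, 2.0)):
    """Vary the intervention magnitude relative to the default (0% = defaults)."""
    rows = []
    for s in scales:
        m = M_DEFAULT * (1 + s) if "I" in scenario else 0
        g = G_DEFAULT * (1 + s) if "G" in scenario else 0.0
        rows.append((f"{s:+.0%}", run_simulation(scenario, m, g)))
    return rows

if __name__ == "__main__":
    for scenario in ("MI", "CI", "MG", "CG"):
        print(scenario)
        for scale, kpi in sweep(scenario):
            print(f"  scale {scale:>5}: LEVEL {kpi['LEVEL']:5.1f} %  "
                  f"SPEED {kpi['SPEED']:4.1f} y  cost {kpi['COST']:6.1f} T EUR")
```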

4.4.3. Combined interventions

MIG results were similar to MI and MG, and CIG results to CI and CG. As already mentioned in Sub-section 4.1, we measured synergies only in liberalized markets. Policy makers should base their decisions on their primary objective (SPEED or LEVEL) and then combine different types, and potentially different scales, only where effectiveness and efficiency are medium or high. For instance, if SPEED is the primary objective, a combined intervention can accelerate diffusion more than an isolated monetary inducement.

4.5. Summary of all dimensions

Based on the results from the previous sub-sections, we can answer our primary research question: How effectively and how efficiently can regulators induce the diffusion of Smart Meters in Germany? Table 5 presents a high-level summary of all results. Taking intervention type, targeting, timing, and scale into account, market liberalization outperforms any other inducement in effectiveness and efficiency. Liberalized markets outperform monopolies, and policy makers should consider market liberalization a prerequisite for any other inducement. SPEED is best induced via educational policies, LEVEL via monetary grants. It is therefore critical that policy makers decide on their primary objective first, SPEED or LEVEL, and then pick the best-suited intervention. Authorities can rely on the effectiveness and scalability of monetary inducements in terms of LEVEL and, vice versa, on the effectiveness and scalability of info policies in terms of SPEED.

Table 5
Summary of type, targeting, timing, and scalability results.

Type | Effectiveness LEVEL | Effectiveness SPEED | Efficiency PCLVL | Efficiency PCSPD | Targeting
Liberalization | ++/n.a. | ++/n.a. | ++/n.a. | ++/n.a. | n.a.
Info policy | −/+ | ++/+ | −/+ | ++/++ | z+ (consumers with a high adoption probability due to their high willingness-to-pay)
Monetary grant | +/++ | +/+ | +/++ | −/− | z− (consumers with a low adoption probability due to their low willingness-to-pay)
Info & grant | +/++ | ++/++ | −/+ | ++/− | z− (grant-related z− benefits outperform info policy-related z+ benefits; z+ causes strong negative effects when applied with grants)

Type | Timing | Scalability LEVEL | Scalability SPEED | Scalability PCLVL | Scalability PCSPD
Liberalization | As early as possible | n.a. | n.a. | n.a. | n.a.
Info policy | Early and over a short timeframe | −/− | ++/++ | −/− | ++/++
Monetary grant | Effectiveness: early and over a short timeframe; efficiency: early and over a long timeframe | ++/++ | +/+ | −/++ | −/−
Info & grant | Early and over a short timeframe | +/+ | ++/++ | −/+ | +/+

Legend: "−" means no or low impact, "+" means medium impact, "++" means high impact. The evaluation includes both market structures, monopoly/competition; e.g. +/++ means medium impact in monopoly and high impact in competition.


This applies at any stage in the innovation lifecycle, as the timing results showed. The inefficiencies we measured as rising, upward-sloping PCLVL and PCSPD curves are scalability trade-offs every lawmaker needs to consider. Targeting should be applied to minimize costs and boost efficiency. In general, regulators need to take large surplus imbalances into account in non-competitive markets: The monopolist profits massively from large-scale interventions.

5. Discussion

The successful reproduction of adoption drivers confirms that agent-based modeling is a promising methodology for diffusion research, in line with the recent literature [32–37]. We used it to predict the implications of inducement stimuli based on effectiveness and efficiency ratios. The agent-based paradigm is well suited to combining key diffusion drivers, namely network effects, adopter heterogeneity, and competition. Nevertheless, its explanatory power depends on rigorous implementation of functional requirements. We regard the following modeling requirements as critical:

− KISS: Simplicity avoids over-parameterization and allows readers to interpret results. Simple micro-level behavior already causes complex and unforeseeable macro-level outcomes [46]. Many published models fall far short of the KISS requirement. How should readers understand a model's functioning if its input parameters have to be explained in the appendix because their number and complexity are too high for the main text?

− Empirical validation: Simulations are pre-configured input–output calculations based on a priori defined rules. Input drives output; garbage in, garbage out. Without validation against empirical data and solid frameworks (Cournot competition, the General Bass Model), simulations are mere conglomerates of assumptions, toys rather than tools [41].

− KPIs: Tracking end-to-end KPIs is crucial for interpreting how micro-level actors cause aggregate outcomes. A majority of published innovation diffusion models reduce their analysis to macro-level metrics, typically the diffusion level. Our study presents a balanced set of metrics, for example welfare (CPLUS, SPLUS) and competition ratios (FIRST, MAXSU).

Our endogenous demand–supply integration is perhaps the most valuable extension of current diffusion research. Published diffusion models typically focus on Epidemic drivers; changes in price, adoption ratios, and other Probit aspects are often based on randomness and are rarely linked to demand-side evolution. We presented a methodical approach to link demand with supply, Epidemic theory with Probit theory, and diffusion research with Industrial Organization. This allows us to suggest how policy makers can use these links in a co-evolutionary context. We thereby contribute to research issues at the boundaries of market evolution and competitive dynamics [6,17]. Our model is capable of simulating the bi-directional linkage between the two: Awareness and heterogeneity of consumers drive market potential, which determines supplier conduct. Vice versa, supplier entry/exit decisions and pricing trigger adoption ratios and therefore the emergence of market potential.
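This bi-directional linkage can be condensed into a per-period loop. The sketch below is a deliberately stylised illustration of that loop, not the calibrated model: the awareness probabilities, the willingness-to-pay distribution, the entry rule, and the use of the highest remaining willingness-to-pay as the demand intercept are all assumptions made for the example. Only the overall structure follows the description above: Epidemic awareness, Cournot pricing against the aware market potential, Probit-style adoption, and entry in response to remaining potential.

```python
# Stylised sketch of the demand–supply co-evolution loop: aware consumers form
# the market potential, suppliers set a symmetric Cournot price against it,
# the resulting price triggers adoption, and adoption feeds back into awareness.
import random

random.seed(1)
N = 1_000                                   # consumers
consumers = [{"aware": False, "adopted": False,
              "wtp": random.uniform(50, 300)} for _ in range(N)]
marginal_cost = 60.0                        # identical cost for all suppliers
suppliers = 1                               # the market starts as a monopoly

for year in range(1, 21):
    # Epidemic awareness: external (mass media) plus internal (word-of-mouth).
    adopters = sum(c["adopted"] for c in consumers)
    p_aware = 0.03 + 0.4 * adopters / N
    for c in consumers:
        if not c["aware"] and random.random() < p_aware:
            c["aware"] = True

    # Market potential = aware non-adopters; symmetric Cournot price with
    # n identical firms, anchoring the linear inverse demand at the highest
    # remaining willingness-to-pay (a simplification for this sketch).
    potential = [c for c in consumers if c["aware"] and not c["adopted"]]
    intercept = max((c["wtp"] for c in potential), default=marginal_cost)
    price = (intercept + suppliers * marginal_cost) / (suppliers + 1)

    # Probit-style adoption: adopt once the price reaches the threshold.
    for c in potential:
        if price <= c["wtp"]:
            c["adopted"] = True

    # Stylised entry rule: another supplier enters while potential stays large.
    if len(potential) > 0.2 * N and suppliers < 5:
        suppliers += 1

    print(f"year {year:2d}: price {price:6.1f}  suppliers {suppliers}  "
          f"adopters {sum(c['adopted'] for c in consumers):4d}")
```

Running the loop reproduces the qualitative feedback described in the text: rising awareness enlarges the market potential, entry pushes the price down, and the lower price in turn accelerates adoption.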


Put simply, suppliers have the ability to create their own demand. Our results confirm Vettas' propositions [20]. In the context of commercial launches, firms should embed co-evolutionary impulses into their competitive strategies: Intensive communication, lead-user targeting, and low pricing lead to a quick uptake in sales. Market potential evolves quickly, resulting in economies of scale, lock-in effects, and entry barriers for rivals. But this strategy encourages many competitors to step in, leading to intensive competition and a slump in profits. This makes first movers extraordinarily profitable, while followers enjoy free-rider effects because the innovation is already well known [22]. Firms could also choose the opposite strategy, skimming: They scale communication down and profit from high mark-ups over a long period of time, until market potential slowly evolves and new rivals appear. We measured various impacts of such skimming strategies on supplier and consumer surplus in our model. Effectiveness and efficiency are typically lower in closed markets, because missing rivalry allows the monopolist to skim profits.

Fig. 6 highlights these findings in relation to order of entry and market structure. It plots a typical profit and survival pattern in the competitive scenario CI as well as average periodic profits across the eight predefined scenarios (see also Fig. 3). In line with Klepper's propositions [24,25], firm survival in our model depends on the order of entry: Two first movers earn superior profits. Two followers earn less and survive only a few years. Firm 5 enters during the awareness peak, earns negligible profits, and exits after one year. Policy makers may utilize these constraints to induce diffusion by intensifying competition and to avoid welfare losses from money transfers to a monopolist.

The use of an awareness-driven exit threshold instead of fixed simulation runtimes, as in previously published models, extends the analysis to diffusion speed. An important finding is the correlation between SPEED and LEVEL across all policy types: Any intervention that accelerates diffusion also increases its level, and vice versa. Regardless of this coherence, policy designers need to settle on the overall inducement objective (SPEED or LEVEL?), because efficiency differs remarkably per stimulus. Market liberalization is effective and efficient. We propose that regulators set the stage for competition first and then add other inducements if necessary. Interventions are amplified through demand–supply synergies in competitive markets, and inducements that stimulate rivalry are superior in efficiency. The Industry Lifecycle is a helpful model to support intervention timing, for example to issue policies in the growth phase of the "S"-curve.

In the real world, monetary grants are the most frequent inducement option. A typical reason is supply-side lobbying; our measurements of supplier surpluses in these scenarios explain why (see Fig. 6). We propose that regulators extend their stimulus arsenal with information policies. In competitive market structures, they induce diffusion at lower cost and risk. However, if monetary grants are required, they should be paid out to consumers directly. Real-world "Cash-for-Clunkers" programs are good examples of how policy designers can use optimized targeting, timing, and scale to improve policy effectiveness and efficiency. In addition, these programs are typically accompanied by awareness boosts due to intensive media coverage.
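To see why later entrants earn so little, it helps to recall how profits behave in the symmetric Cournot setting the model builds on. The short sketch below computes the textbook n-firm Cournot equilibrium under a linear inverse demand assumption; the demand intercept, slope, and marginal cost are invented for illustration and are not the calibrated values of the model.

```python
# Symmetric Cournot with linear inverse demand p = a - b*Q and constant
# marginal cost c: per-firm profit shrinks as rivals enter, which is one way
# to read why late entrants in Fig. 6 earn little and exit quickly.
# Parameter values are illustrative, not calibrated to the paper's model.

def cournot_per_firm_profit(n: int, a: float, b: float, c: float) -> float:
    """Equilibrium profit of each of n identical firms."""
    q = (a - c) / (b * (n + 1))          # individual equilibrium quantity
    price = a - b * n * q                # market-clearing price
    return (price - c) * q

if __name__ == "__main__":
    a, b, c = 300.0, 0.2, 60.0           # assumed demand intercept, slope, cost
    for n in range(1, 6):
        print(f"{n} firm(s): profit per firm = "
              f"{cournot_per_firm_profit(n, a, b, c):8.1f}")
```

Per-firm profit falls with 1/(n+1)^2, so a fifth entrant earns only a small fraction of what the monopolist earned, consistent with the short survival of Firm 5 in Fig. 6.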


Fig. 6. Survival and profitability in a CI iteration and periodic supplier profits in predefined scenarios. (Two panels: cumulative profit in thousand € of Firms 1–5 over 16 years in a CI iteration, and periodic profit in thousand € over 28 years for the scenarios M, MI, MG, MIG, C, CI, CG, and CIG.)

Policy outcomes present typical "S"-curves across all scenarios. Bottom-up analyses confirm both Epidemic and Probit aspects as important adoption drivers. In line with Geroski's review [2], we measured the Epidemic spread of awareness to be the primary driver of the "S"-curve. The attainment of Probit thresholds is secondary, as the results in monopolies without price reductions show. This ranking may differ for other innovations. Researchers may replicate our model to validate the expected barriers of the following three "green" technologies:

− Organic fuel E10: Diffusion stagnates despite savings for car drivers. Consumers are unsure whether their car might suffer damage. Awareness is the primary barrier, and negative press boosts resistance. Educational policies are best suited to induce adoption.

− Electric vehicles: Media coverage is high and consumers' awareness is high, but Probit thresholds constitute adoption barriers. Many countries have introduced purchase premiums to induce diffusion; these are best suited to induce adoption.

− Smart Metering: Digital online metering is a critical component of intelligent energy networks ("Smart Grids"). Furthermore, direct consumption feedback reduces energy consumption and contributes to CO2 abatement. Obstacles exist in the form of missing awareness and mismatched Probit thresholds: Innovative energy tariffs are unknown and expensive.

Two limitations restrict the explanatory power of our model. First, the use of Cournot competition reduces the heterogeneity of suppliers. Competitors incorporate identical cost curves, entry barriers, and competitive strategies. This approach reduces complexity and utilizes a widely accepted economic model, but other publications have pointed out the relevance of firm heterogeneity [24,25,29]. Order of entry is one root cause of heterogeneity: Early movers face different challenges than followers (see Fig. 6), and heterogeneity creates competitive advantages and determines competitive strategies. Second, consumer adoption is simplified to a two-stage awareness/decision-making process that is performed only once per agent. Non-adopters do not become aware again in later periods. In the real world, consumers may receive several interactions before they ultimately adopt. Rixen and Weigand present an adoption model that factors in a complex, repetitive purchase-decision routine [19].
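To make the second limitation concrete, the sketch below spells out the simplified two-stage consumer process: an agent first becomes aware (through mass media or word-of-mouth) and then takes a one-shot Probit-style decision against the current price. All probabilities, the willingness-to-pay distribution, and the fixed price are invented for the illustration; a comment marks where the repeated-decision extension of [19] would hook in.

```python
# Sketch of the simplified two-stage consumer process discussed above:
# stage 1 makes the agent aware (mass media or word-of-mouth), stage 2 is a
# one-shot Probit-style decision against the current price. Each agent passes
# through the decision at most once; all parameter values are invented.
import random

class Consumer:
    def __init__(self, willingness_to_pay: float):
        self.wtp = willingness_to_pay
        self.aware = False
        self.decided = False          # the decision is taken only once
        self.adopted = False

    def maybe_become_aware(self, p_mass_media: float, p_word_of_mouth: float):
        if not self.aware:
            self.aware = random.random() < (p_mass_media + p_word_of_mouth)

    def decide(self, price: float):
        if self.aware and not self.decided:
            self.decided = True
            self.adopted = price <= self.wtp
        # A repeat-purchase extension (as in [19]) would revisit this step
        # in later periods instead of fixing the outcome permanently.

if __name__ == "__main__":
    random.seed(7)
    agents = [Consumer(random.uniform(50, 300)) for _ in range(1_000)]
    for year in range(10):
        share_adopted = sum(a.adopted for a in agents) / len(agents)
        for a in agents:
            a.maybe_become_aware(p_mass_media=0.05,
                                 p_word_of_mouth=0.3 * share_adopted)
            a.decide(price=180.0)
    print(f"aware: {sum(a.aware for a in agents)}, "
          f"adopted: {sum(a.adopted for a in agents)}")
```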

6. Conclusion and future research

Epidemic and Probit diffusion theories were combined in an agent-based model to simulate policy-induced innovation adoption. We extended current diffusion research by adding Cournot-based supplier behavior to capture demand–supply co-evolution, showcasing Soberman's and Gatignon's dependencies between market evolution and competitive dynamics [6]. Scenario and sensitivity analyses identified the primary adoption drivers for effective and efficient policy design. Effectiveness was measured via diffusion speed and level, efficiency via cost and welfare impacts.

The simulation results underline that one policy does not fit all situations. Market liberalization is a dominant strategy: Intensifying competition is an effective and efficient adoption driver, while closed markets primarily favor the monopolist. Information policies typically accelerate adoption; monetary grants boost both speed and level. Policy makers must not underestimate synergies across inducements, as well as supply and demand endogeneity, if they want to keep control over policy costs.

Our model's focus was on Smart Metering technology, which differs from other innovations in that adoption barriers exist across both drivers: First, Smart Metering is unknown to most consumers, so awareness is virtually zero. Second, the available tariffs are too expensive. Other innovations have better starting points for diffusion (see Section 5 for examples), and our simulation results for regulatory interventions give detailed insights into diffusion pathways for innovations with fewer barriers. In general, higher awareness accelerates the speed of diffusion even if the product is expensive; Apple products are good examples, especially the iPad and its impact on the tablet PC market. Vice versa, higher perceived value drives saturation. Products that are free of charge, for instance cellphone software like WhatsApp and Shazam, attain excellent saturation levels without advertising campaigns due to the absence of monetary adoption barriers.


Future research should address our limitations and extend our model with supply-side heterogeneity and Schumpeterian dynamics. Creative destruction and leapfrogging could be added through R&D and disruptive innovations [4,5,29]. Incumbents would then need continuous improvements to stay competitive across business cycles and changing consumer requirements [1,23]. Looking beyond this paper, the flexibility of agent-based modeling allows several further microeconomic paradigms to be tackled, for instance individual cost curves, economies of scale, mergers and acquisitions, competitive strategies, and firm survival patterns (see [21] for a review of demand–supply co-evolution building blocks).

Acknowledgments

The authors thank Tim Kochanski, Jan Abrell, and Jonas Egerer for their helpful comments. The paper has benefitted from discussions during presentations at the Young Energy Economists and Engineers Seminar Spring 2011.

References

[1] C.M. Banbury, W. Mitchell, The effect of introducing important incremental innovations on market share and business survival, Strateg. Manag. J. 16 (1995) 161–182.
[2] P.A. Geroski, Models of technology diffusion, Res. Policy 29 (2000) 603–625.
[3] I. Diaz-Rainey, Induced Diffusion: Definition, Review and Suggestions for Further Research, 2009, available from http://ssrn.com/abstract=1339869 (cited 27 September 2012).
[4] C.M. Christensen, The Innovator's Dilemma, 1st HarperBusiness ed., HarperBusiness, New York, 2000.
[5] C.M. Christensen, F.F. Suárez, J.M. Utterback, Strategies for survival in fast-changing industries, Manag. Sci. 44 (1998) S207–S220.
[6] D. Soberman, H. Gatignon, Research issues at the boundary of competitive dynamics and market evolution, Mark. Sci. 24 (2005) 165–174.
[7] F.M. Bass, A new product growth for model consumer durables, Manag. Sci. 15 (1969) 215–227.
[8] F.M. Bass, T.V. Krishnan, D.C. Jain, Why the Bass model fits without decision variables, Mark. Sci. 13 (1994) 203–223.
[9] V. Mahajan, E. Muller, Innovation diffusion and new product growth models in marketing, J. Mark. 43 (1979) 55–68.
[10] V. Mahajan, E. Muller, F.M. Bass, Diffusion of new products: empirical generalizations and managerial uses, Mark. Sci. 14 (1995) G79.
[11] S.W. Davies, I. Diaz-Rainey, The patterns of induced diffusion: evidence from the international diffusion of wind energy, Technol. Forecast. Soc. Chang. 78 (2011) 1227–1241.
[12] E.M. Rogers, Diffusion of Innovations, 5th ed., Free Press, New York, 2003.
[13] V. Mahajan, E. Muller, R.K. Srivastava, Determination of adopter categories by using innovation diffusion models, J. Mark. Res. 27 (1990) 37–50.
[14] M. Kleijnen, N. Lee, M. Wetzels, An exploration of consumer resistance to innovation and its antecedents, J. Econ. Psychol. 30 (2009) 344–357.
[15] H. Gatignon, T.S. Robertson, Technology diffusion: an empirical test of competitive effects, J. Mark. 53 (1989) 35–49.
[16] T.S. Robertson, H. Gatignon, Competitive effects on technology diffusion, J. Mark. 50 (1986) 1–12.
[17] B.L. Bayus, W. Kang, R. Agarwal, Creating growth in new markets: a simultaneous model of firm entry and price, J. Prod. Innov. Manag. 24 (2007) 139–155.
[18] S. Cantono, G. Silverberg, A percolation model of eco-innovation diffusion: the relationship between diffusion, learning economies and subsidies, Technol. Forecast. Soc. Chang. 76 (2009) 487–496.
[19] M. Rixen, J. Weigand, Agent-based simulation of consumer demand for Smart Metering tariffs, Int. J. Innov. Technol. Manag. 10 (5) (2013), http://dx.doi.org/10.1142/S0219877013400208.
[20] N. Vettas, Demand and supply in new markets: diffusion with bilateral learning, RAND J. Econ. 29 (1998) 215–233.
[21] K. Safarzyńska, J.C.J.M. van den Bergh, Demand-supply coevolution with multiple increasing returns: policy analysis for unlocking and system transitions, Technol. Forecast. Soc. Chang. 77 (2010) 297–317.


[22] G.L. Lilien, E. Yoon, The timing of competitive market entry: an exploratory study of new industrial products, Manag. Sci. 36 (1990) 568–585.
[23] D.A. Aaker, G.S. Day, The perils of high-growth markets, Strateg. Manag. J. 7 (1986) 409–421.
[24] S. Klepper, Entry, exit, growth, and innovation over the product life cycle, Am. Econ. Rev. 86 (1996) 562–583.
[25] S. Klepper, Firm survival and the evolution of oligopoly, RAND J. Econ. 33 (2002) 37–61.
[26] P.M. Parker, H. Gatignon, Order of entry, trial diffusion, and elasticity dynamics: an empirical case, Mark. Lett. 7 (1996) 95–109.
[27] M. Gort, S. Klepper, Time paths in the diffusion of product innovations, Econ. J. 92 (1982) 630–653.
[28] E.M. Rogers, New product adoption and diffusion, J. Consum. Res. 2 (1976) 290–301.
[29] H. Dawid, Agent-based models of innovation and technological change, in: L. Tesfatsion, K.L. Judd (Eds.), Handbook of Computational Economics, Elsevier, 2006, pp. 1235–1272.
[30] T.J. Gordon, A simple agent model of an epidemic, Technol. Forecast. Soc. Chang. 70 (2003) 397–417.
[31] H. Rahmandad, J. Sterman, Heterogeneity and network structure in the dynamics of diffusion: comparing agent-based and differential equation models, Manag. Sci. 54 (2008) 998–1014.
[32] B. Zenobia, C. Weber, T. Daim, Artificial markets: a review and assessment of a new venue for innovation research, Technovation 29 (2009) 338–350.
[33] M.W. Macy, R. Willer, From factors to actors: computational sociology and agent-based modeling, Annu. Rev. Sociol. 28 (2002) 143–166.
[34] W. Rand, R.T. Rust, Agent-based modeling in marketing: guidelines for rigor, Int. J. Res. Mark. 28 (2011) 181–193.
[35] R. Garcia, Uses of agent-based modeling in innovation/new product development research, J. Prod. Innov. Manag. 22 (2005) 380–398.
[36] J.D. Bohlmann, R.J. Calantone, Z. Meng, The effects of market network heterogeneity on innovation diffusion: an agent-based modeling approach, J. Prod. Innov. Manag. 27 (2010) 741–760.
[37] N. Schwarz, A. Ernst, Agent-based modeling of the diffusion of environmental innovations — an empirical approach, Technol. Forecast. Soc. Chang. 76 (2009) 497–511.
[38] B. Heath, R. Hill, F. Ciarallo, A survey of agent-based modeling practices, J. Artif. Soc. Social Simul. 12 (2009) 9.
[39] E. Bonabeau, Agent-based modeling: methods and techniques for simulating human systems, Proc. Natl. Acad. Sci. U. S. A. 99 (2002) 7280–7287.
[40] B.P. Zeigler, H. Praehofer, T.G. Kim, Theory of Modeling and Simulation, 2nd ed., Academic Press, Amsterdam, Heidelberg, 2005.
[41] B. Edmonds, Agent-based social simulation and its necessity for understanding socially embedded phenomena, 2010, available from http://bruce.edmonds.name/abss/abss.pdf (cited 27 September 2012).
[42] B. Edmonds, The use of models, in: Multi-Agent-Based Simulation (MABS), Springer, Berlin, 2001, pp. 15–32.
[43] J.H. Holland, J.H. Miller, Artificial adaptive agents in economic theory, Am. Econ. Rev. 81 (1991) 365.
[44] J.M. Epstein, Why model?, J. Artif. Soc. Social Simul. 11 (2008) 12.
[45] J.M. Epstein, R. Axtell, Growing Artificial Societies: Social Science from the Bottom Up, Brookings Institution Press, Washington, D.C., 1996.
[46] R. Axelrod, The Complexity of Cooperation, 1st ed., Princeton University Press, Princeton, 1997.
[47] R. Leombruni, M. Richiardi, Why are economists sceptical about agent-based simulations?, Physica A 355 (2005) 103–109.
[48] J.M. Galán, L.R. Izquierdo, S.S. Izquierdo, J.I. Santos, R. del Olmo, A. López-Paredes, B. Edmonds, Errors and artefacts in agent-based modelling, J. Artif. Soc. Social Simul. 12 (2009) 1.
[49] K.M. Carley, Validating Computational Models, 1996, available from http://www2.econ.iastate.edu/tesfatsi/empvalid.carley.pdf (cited 27 September 2012).
[50] D. Besanko, Economics of Strategy, 4th ed., J. Wiley & Sons, Hoboken, NJ, 2007.
[51] S.F. Railsback, S.L. Lytinen, S.K. Jackson, Agent-based simulation platforms: review and development recommendations, Simulation 82 (2006) 609–623.

Dr. Martin Rixen earned his doctorate at WHU – Otto Beisheim School of Management (Department of Microeconomics and Industrial Organization) in Vallendar, Germany. He is a freelance consultant specializing in business transformation and technology, e.g. data management and data quality.

Prof. Dr. Jürgen Weigand holds the Chair of Microeconomics and Industrial Organization at WHU – Otto Beisheim School of Management. Furthermore, he is the Associate Dean for International Programs and Academic Director of the university's MBA programs.