Network structure, games, and agent dynamics




Journal of Economic Dynamics & Control 47 (2014) 225–238



Allen Wilhite
Department of Economics, University of Alabama in Huntsville, Huntsville, AL 35899, USA


Abstract

Article history: received 29 April 2013; received in revised form 25 June 2014; accepted 10 August 2014; available online 15 August 2014.

Consider a group of agents embedded in a network, repeatedly playing a game with their neighbors. Each agent acts locally, but through the links of the network local decisions percolate to the entire population. Past research shows that such a system converges either to an absorbing state (a fixed distribution of actions that once attained does not change) or to an absorbing set (a set of action distributions that may cycle in finite populations or behave chaotically in unbounded populations). In many network games, however, it is uncertain which situation emerges. In this paper I identify two fundamental network characteristics, boundary consistency and neighborhood overlap, that determine the outcome of all symmetric, binary-choice, network games. In quasi-consistent networks these games converge to an absorbing state regardless of the initial distribution of actions, and the degree to which neighborhoods overlap impacts the number and composition of those absorbing states. © 2014 Elsevier B.V. All rights reserved.

JEL classification: D85; C70; D23.
Keywords: Games; Networks; Local interaction; Network structure; Agent dynamics.

1. Introduction

Sometimes our decisions influence and are influenced by others, and game theory has proven to be a powerful tool for modeling such situations. Likewise, many of our economic decisions are channeled through a few specific individuals or institutions with whom we interact on a regular basis, and networks are a useful way to map such relationships. The combination of the two, embedding games in networks, allows us to investigate how different types of economic organization (different networks) might affect interdependent decisions made under a variety of economic situations (different games). For this reason the theoretical literature on network games is especially deep, with contributions coming from physics, biology, sociology, and economics.

Much of what we know about the evolution of local decision making and its percolation through a population comes from analyzing particular games played on particular networks. To mention only a few, Axelrod (1984) and Nowak and May (1992, 1993) investigate prisoners' dilemma games played on a grid, while Eshel et al. (1998) model that game on a ring. They find that the shape of some networks allows cooperation or altruism to survive in a prisoners' dilemma game, suggesting there may be a spatial explanation for why cooperation survives in the wild. Bramoullé and Kranton (forthcoming) investigate local public goods in networks and find that social network structure influences agents' investment in information and their willingness to share. Ellison (1993) and Young (1998, 2002) explore games of

E-mail address: [email protected]
http://dx.doi.org/10.1016/j.jedc.2014.08.008
0165-1889/© 2014 Elsevier B.V. All rights reserved.



coordination on grids and rings, demonstrating that in most situations the population eventually settles on one of the two conformist equilibria.

Reviewing much of the network game literature, Goyal (2007) comments that networks are rich and complicated objects that make it "difficult to obtain tight and general predictions regarding their effects on individual behavior" (p. 54). However, he proceeds to generalize games of coordination, showing that in any connected network, a social coordination game with a best response decision rule and random perturbations of actions will eventually converge to a Nash equilibrium. In another comprehensive review of games on graphs, Szabo and Fath (2007) also note that network effects have been investigated for only limited games and network structures, and that recent research (such as Hauert et al., 2006) is trying to generalize across more games. In their conclusion they write, "We hope that future efforts will reveal the detailed relationships between these internal properties" (Szabo and Fath, 2007, p. 196).

This paper continues that search for general relationships that tell us about agent dynamics in network games. We identify two fundamental network characteristics that influence the distribution of actions in all symmetric, binary-choice games played on any network using imitation-type updating rules. In short, the neighborhood configuration of a group's boundary determines whether a game converges to a stable distribution of actions or instead enters a cyclical pattern of decisions. Neighborhood configurations also affect the potential number of those distributions and the mix of actions within each. To ensure convergence to an absorbing state, a network must have quasi-consistent boundaries (defined below). This is a strong result.
Any binary-choice game played on any network with quasi-consistent boundaries will converge to a fixed state of actions: any binary-choice game, any set of payoffs, any initial distribution of actions, all the time. The opposite is also demonstrated: in a network lacking this boundary characteristic, there is always some distribution of strategies and some combination of payoffs that will trigger cycles of behavior.

Beyond the dynamics of game play, there is parallel interest in the relationship between network structure and the composition of the absorbing set or state of actions. To that end we show that neighborhood overlap, the extent to which neighborhoods have common neighbors, affects the distribution of actions taken by players; it also affects the number of payoff combinations that trigger a phase transition, the sudden adjustment from one distribution of decisions to another. To my knowledge no one has identified network structures so fundamental that they apply to all binary-choice games and all networks.

Knowing more about the pivotal role that organizational structure plays in interdependent decision making can help us understand many economic institutions more deeply. For example, supply chains, boards of directors, management's organizational chart of a firm, the committee structure in Congress, firms adopting similar or competing technologies, bureaucracies, sports leagues, street gangs, and other such social and economic organizations involve agents interacting within an organization, making decisions whose consequences are related to the decisions of others. Recognizing the degree to which these networks possess the boundary characteristics discussed below can help us predict aggregate evolutionary behavior, construct testable hypotheses about that behavior, and illuminate the actions that can nudge a group toward a particular result.
Section 2 formally defines the games, networks, and decision-making rules considered in this paper, and Section 3 presents some exploratory virtual experiments that guide the analytical study in the remaining sections. Section 4 addresses the dynamics of play, examining how network structure affects convergence to absorbing sets or states, while Section 5 examines the composition of those sets or states. Section 6 broadens the applicability of the results, and Section 7 concludes.

2. Network games and decision making

2.1. Games on networks

Symmetric binary-choice games can be represented with the familiar payoff matrix in which players select an action, A or B, to receive payoffs a, b, c, or d (the row player's payoff is listed first):

            A         B
    A     a, a      b, c
    B     c, b      d, d

An ordering of payoffs creates a particular game. For example, a > b > c > d establishes a game in which decision A becomes the dominant decision for both players, resulting in the play "AA" and payoffs of "a" for each. Changing the relative magnitudes of these payoffs creates other games; for example, c > a > d > b defines a prisoners' dilemma game in which A is the cooperative choice and B is defection. All possible combinations yield 4! = 24 different orderings, half of which are mirror images of other games.¹ We study all of these games.²

¹ For example, the ordering d > c > b > a mirrors the first example, with BB being the dominant equilibrium.
² Rapoport (1966) provides a complete taxonomy of binary-choice games including asymmetric payoffs. However, incorporating asymmetric games into networks requires an a priori assignment of who is player #1 and player #2 for each pairing. This artificial assignment is avoided by focusing on symmetric games.
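As a quick check of this counting argument, the orderings and their mirrors can be enumerated; the snippet below is an illustrative sketch, not code from the paper. Relabeling the two actions A and B swaps payoffs a with d and b with c, so each strict ordering pairs with exactly one mirror.

```python
# Enumerate the 4! = 24 strict payoff orderings and pair each with its
# mirror under the relabeling A <-> B (which swaps a <-> d and b <-> c).
from itertools import permutations

SWAP = {"a": "d", "d": "a", "b": "c", "c": "b"}

def mirror(order):
    """Ordering of the same game after relabeling the two actions."""
    return tuple(SWAP[p] for p in order)

orderings = list(permutations("abcd"))
games = {frozenset({o, mirror(o)}) for o in orderings}

print(len(orderings))  # 24
print(len(games))      # 12
```

Since the swap map has no fixed symbol, no ordering is its own mirror, and the 24 orderings collapse to 12 distinct games, consistent with the footnote's example (a > b > c > d pairs with d > c > b > a).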



A traditional binary-choice game becomes a network game when the matching of players is determined by their position in a network, so that the network defines who plays whom. To define a network, start with a set of nodes, N = {1, 2, …, n}, where n is finite, and add relationships between nodes that are binary, i.e., they exist or do not exist. Let g_ij ∈ {0, 1} indicate a relationship between nodes i and j, such that g_ij = 1 if a link or edge exists between i and j and g_ij = 0 otherwise. The set of nodes and the links between them defines a network, denoted G, and the collection of all possible networks is labeled 𝒢. In this paper edges between nodes are undirected (if g_ij = 1 then g_ji = 1) and only one edge connects any linked pair. Edges are unweighted in that no edge has an intrinsic value (length or quality) that differs from other edges. We focus on networks that are connected, meaning any node can be reached from any other node by following a path made up of a finite number of edges; unconnected networks can be addressed by treating each connected component as a separate network. Each node is occupied by an agent, and so agents and nodes are indistinguishable.

In a connected network G, every node has a neighborhood consisting of the nodes or agents who share a link. Define the neighborhood of agent i in graph G as N_i = {i} ∪ {j | g_ij = 1}, and define the neighbors of i as N_i ∖ {i}. Thus agent i is a member of his own neighborhood but is not a neighbor to himself. The degree of node i is the number of his neighbors in graph G, or η_i = |N_i ∖ {i}|. It will be useful to distinguish between the immediate neighborhood of a node and more distant neighborhoods that include nodes more than one link away. This extended neighborhood (or x-neighborhood) is defined inductively, following Goyal (2007, p. 10). Let x be a nonnegative integer; then

    N_i^1 = N_i   and   N_i^x = N_i^(x-1) ∪ ( ∪_{j ∈ N_i^(x-1)} N_j ).
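These definitions are easy to operationalize. Below is a small sketch (our own toy example on a six-node ring, not the paper's code) that builds the x-neighborhood inductively from the immediate neighborhoods.

```python
# Neighborhoods and x-neighborhoods on an assumed 6-node ring
# 0-1-2-3-4-5-0; `adj` maps each node to its set of linked nodes.

def neighborhood(adj, i):
    """N_i: agent i together with everyone i is linked to."""
    return {i} | adj[i]

def x_neighborhood(adj, i, x):
    """N_i^0 = {i};  N_i^x = N_i^(x-1) ∪ (∪_{j in N_i^(x-1)} N_j)."""
    nbhd = {i}
    for _ in range(x):
        nbhd = set().union(*(neighborhood(adj, j) for j in nbhd))
    return nbhd

ring = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}

print(sorted(x_neighborhood(ring, 0, 1)))  # [0, 1, 5]
print(sorted(x_neighborhood(ring, 0, 2)))  # [0, 1, 2, 4, 5]
```

Because each N_j contains j itself, the union step automatically keeps the previous layer, matching the inductive definition above.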

By extension, the "zero" neighborhood is the individual node, N_i^0 = {i}.

In this study networks do not evolve; the topology of the network is fixed at the beginning of a game and remains unchanged throughout. In practice networks do evolve, although in many social and economic situations the structure of an organization evolves more slowly than the economic decisions taking place within it. In those cases the network's structure is, for practical purposes, fixed. For three distinctive reviews of network formation and evolution see Goyal (2007), Vriend (2006), and Jackson (2004, 2008).³

2.2. Decision making

The primary interest of this study is the evolution of choices under local interaction and repeated play. In each period t = 1, 2, …, agents select an action and play that action with all of their neighbors. Over time they adjust their actions to account for their neighbors' play using an updating algorithm. In most network games updating algorithms fall into two categories: best response and imitation. Best response models assume agents know they are in a game: they know the rules, the payoffs, and their opponents, and they may remember their own and their opponents' past choices. At each time step they consider what has occurred, calculate their opponents' next round of play, and act accordingly. Imitation envisions a simpler environment in which agents have less information: agents see what actions others have taken, observe the consequences of those choices, and copy what seems most appealing. In the last decade or two imitation has become more prevalent in economic models of decision making. Vega-Redondo (1997) demonstrates how imitation can lead to Walrasian behavior; Schlag (1998) shows that imitation is 'optimal' when individual information is severely restricted; Rogers (1988) suggests individuals imitate to avoid search costs; and Banerjee (1992) suggests imitation is rational if one suspects others have more information.
In more recent experimental studies, Apesteguia et al. (2007) find subjects turning to imitation, especially when they have a chance to imitate their direct competitors, and Rendell et al. (2010) run a simulation tournament among various types of learning models in which the most successful models all relied heavily on imitation. Their results suggest social learning is advantageous because successful individuals inadvertently filter information for copiers. So there are theoretical and esthetic reasons to use imitation: it is a simple rule that is mathematically tractable and plausible, and it is a sensible behavior we recognize in ourselves and others.

There are a variety of imitation rules in the existing literature. Nowak and May (1992, 1993) have agents adopt the most successful action in their neighborhood. Eshel et al. (1998) assume agents adopt the action with the most successful average neighborhood return from each round of play. Ellison and Fudenberg (1993) employ a popularity weighting in which agents favor the most popular option. The present analysis begins under the assumption that agents imitate the most successful agent in their neighborhood, the agent earning the highest returns. But, as shown below, the results hold for any of these imitation rules.

The timing of decision making matters as well. Synchronous updating assumes every agent updates in every period of play. Asynchronous updating means only a portion of the population updates at any particular time, or, in the extreme, only a single agent updates in each round. It is well documented that the timing of updates alters the evolution of play and the eventual distribution of choices (Huberman and Glance, 1993; Page, 1997; and others). However, timing does not affect the veracity of the propositions presented here. Specifically, the timing of updates does not impact the fundamental relationship

³ Fosco and Mengel (2011) allow decisions and network structure to evolve through imitation in a prisoners' dilemma game. Their absorbing states constitute a distribution of actions and a network structure.
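The imitate-the-best rule described above can be sketched as follows. The data layout, the A = 0 / B = 1 coding, and the demo graph are our own assumptions, and ties among equally successful neighbors are broken arbitrarily here.

```python
# A minimal sketch (not the paper's code) of imitate-the-best updating:
# each agent adopts the current action of the highest-earning agent in
# its neighborhood, itself included. `adj` maps agents to neighbor sets.

def total_payoff(actions, adj, i, a, b, c, d):
    """Agent i's summed payoff from playing its action against each neighbor."""
    table = {(0, 0): a, (0, 1): b, (1, 0): c, (1, 1): d}
    return sum(table[(actions[i], actions[j])] for j in adj[i])

def imitate_best(actions, adj, a, b, c, d):
    """One synchronous round: every agent copies its best-earning neighbor."""
    score = {i: total_payoff(actions, adj, i, a, b, c, d) for i in adj}
    return {i: actions[max({i} | adj[i], key=score.__getitem__)]
            for i in adj}

# Demo on an 8-agent ring in which every agent has four neighbors.
adj = {i: {(i - 2) % 8, (i - 1) % 8, (i + 1) % 8, (i + 2) % 8}
       for i in range(8)}
state = {i: (0 if i < 4 else 1) for i in range(8)}
print(imitate_best(state, adj, a=0, b=-1.1, c=0.4, d=-1))
```

Note that a state in which every agent takes the same action is absorbing under this rule: every agent's best-earning neighbor then takes that same action.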



between network structure and the convergence of actions to an absorbing set or state, even though it alters the composition of actions within that absorbing set or state. For simplicity I begin by assuming all agents update simultaneously; a corollary to Proposition 2 then generalizes to asynchronous updating.

3. Results of some simple experiments

It is useful to probe the effects of network structure with some virtual experimentation. Consider three networks in which every agent has the same number of neighbors (η_i = 4) but different architectures: the ring, the tree, and the grid. Create several games by selecting a range of payoffs for the game matrix given above. As an initial probe, we fix three of the payoffs (a = 0, b = −1.1, and d = −1) and let payoff c vary over 0 < c < 1. Creating virtual populations of 1600 agents and randomly assigning an initial action to each agent, we let these computational agents play, update, and replay until their decisions settle into a pattern. The resulting distributions of actions for these games are presented in Table 1.

Reading across the top row of Table 1, we see how the dynamics of play lump into groups. Starting on the left side of the table, consider a particular network, an initial distribution of actions, and a payoff c that lies between zero and 1/3. In this range the network game converges to a particular distribution of choices, and changing payoff c has no effect on the terminal distribution as long as c stays in the defined range. In this example, with a given initial distribution of actions, play follows an identical path to an identical result whether c = 0.06 or c = 0.32. However, increase payoff c a bit more, say to c = 0.35, and play changes abruptly. But once c lies in this new range, 1/3 < c < 0.45, alternative values of c in the range again lead to the same path of decision making and the same distribution of actions.
In the derivation of Table 1 only payoff c was allowed to change, but similar results apply to changes in payoffs a, b, and d. These abrupt changes in the course of play at certain critical combinations of payoffs are phase transitions that result from decisions being influenced by a network of decision makers. The experimental results displayed in Table 1 suggest that these critical combinations hold for all three networks, i.e., all three networks share the same phase-transition points. This is our first clue to disentangling the dynamics of play, and we shall see that it holds generally: all networks with neighborhoods of a particular size share the same set of potential critical values. Practical differences exist, however, because some networks have fewer actual phase transitions than that potential allows. The existence of these phase-transition points and their economic importance are addressed in Section 5.

Second, Table 1 suggests that different networks display different behavior within each specified parameter range. For example, suppose c = 2/5. Reading down the center column, we see that a population playing this game on a ring network has 96.7% of its members choosing A, on average. The same population distributed on a grid chooses A about 40% of the time, and a tree yields a population in which only 23.3% of the agents play A. In addition, the nature of a particular network's long-term, steady-state distribution of actions can differ markedly from another network's distribution: there are static distributions and there are cycles. Returning to Table 1 and reading down the center column, a ring yields 3-period cycles, a grid develops complex cycles that span many periods, while a tree network has a stable, long-term, steady-state distribution of choices. Games that converge to a single distribution of actions are said to converge to an absorbing state.
Games that have cycles of play (or irregular play that continues indefinitely in unbounded networks) are said to converge to an absorbing set of actions.

This is a remarkable outcome. An identical game is played on each network. Each agent uses an identical updating algorithm and interacts with only his four neighbors. An agent knows nothing about the greater network: he knows his neighbors' decisions and payoffs, but does not know whether he is located on a ring, a tree, a grid, or on a network at all; nor does he even need to know that he is engaged in a game with those neighbors. Yet depending on the pattern in which agents are connected (a pattern he cannot observe), he behaves differently. These outcomes (shared phase-transition points, different aggregate dynamics, and different emerging compositions of actions across networks) depend on the topology of the network that defines an agent's neighbors. The balance of the paper systematically examines this topology to identify two general network characteristics that affect these outcomes for all symmetric, binary-choice games played on any network. Our first task is to describe the decision process formally. We approach this by investigating the circumstances under which a particular action will spread. To derive a more general

Table 1
Distribution of actions and dynamics on three networks (percentage of agents playing A).

Value of c   0 < c < 1/3              1/3 < c < 0.45          0.45 < c < 0.8        0.8 < c < 1
Ring         98%, 3-cycles            96.7%, 3-cycles         1.06%, stable         1.06%, stable
Grid         74.3%, stable/cycles(a)  39.6%, complex cycles   20%, various cycles   0.5%, stable
Tree         25.0%, stable            23.3%, stable           11.2%, stable         11.5%, stable

Each cell contains the percentage of agents playing the cooperative action (A) over the last 100 periods, averaged across 20 different experiments.
(a) 45% of these experiments ended in a 3-period cycle; the rest were stable.
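A drastically scaled-down version of this experiment can be sketched as follows. The population size (60 rather than 1600), the seed, and the run length are our own choices, so the percentages in Table 1 are not reproduced; the sketch only illustrates the protocol: fix a = 0, b = −1.1, d = −1, pick a value of c, and iterate synchronous imitate-the-best updating on a ring in which each agent has four neighbors.

```python
# Small-scale sketch (assumed parameters, not the paper's code) of the
# Table 1 protocol: random initial actions, synchronous
# imitate-the-best updating, and a recorded history of action profiles.
import random

N, T = 60, 300   # population size and number of rounds (our choices)
A, B = 0, 1

def run(c, seed=1, a=0.0, b=-1.1, d=-1.0):
    adj = {i: {(i - 2) % N, (i - 1) % N, (i + 1) % N, (i + 2) % N}
           for i in range(N)}
    pay = {(A, A): a, (A, B): b, (B, A): c, (B, B): d}
    rng = random.Random(seed)
    act = {i: rng.choice((A, B)) for i in range(N)}
    history = []
    for _ in range(T):
        score = {i: sum(pay[(act[i], act[j])] for j in adj[i])
                 for i in range(N)}
        # every agent copies the best earner in its neighborhood
        act = {i: act[max({i} | adj[i], key=lambda j: score[j])]
               for i in range(N)}
        history.append(tuple(act[i] for i in range(N)))
    return history

hist = run(c=0.4)
print(f"share playing A at the end: {hist[-1].count(A) / N:.2f}")
print("cycling" if hist[-1] != hist[-2] else "possibly absorbed")
```

Comparing the last few entries of the history gives a crude distinction between an absorbing state (the profile stops changing) and an absorbing set (the profile keeps revisiting a cycle of states).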




Fig. 1. Examples of two coalitions of shaded nodes, one coalition of non-shaded nodes and the 2-neighborhood of agent i (dotted line).

expression in our formal development, we drop the restrictions on the values of the payoffs and allow networks to have neighborhoods of any size, η_i = k. When examples are useful, we return to the η_i = 4 applications used in Table 1.

4. Actions, payoffs and coalitions

Depending on the actions adopted by players, the payoff matrix, and how players react to those payoffs, a particular action can converge to an absorbing state by spreading across the network, by contracting until it disappears, or by stabilizing at some distribution, neither spreading nor contracting. Alternatively, choices can converge to an absorbing set of actions consisting of some cyclical behavior that repeats indefinitely. The first task is to explore the relationship between network structure and the convergence of decisions to an absorbing state or an absorbing set of actions. We need to define more precisely the payoff a particular action elicits and how that payoff determines the spread or contraction of a particular choice.

Each player takes an action, s_i ∈ {A, B}, ∀ i ∈ N. A profile of actions for a network is s = (s_1, s_2, …, s_n), and the set of all possible action profiles is S^N. In general the payoff π_i to player i depends on his action and the actions of others, i.e., on the network's action profile; thus the set of possible payoffs is Π : S^N × 𝒢 → ℝ. This paper focuses on local interaction, agents interacting with their neighbors, so the agent's action and the actions of his immediate neighbors determine his total payoff. Agent i's neighborhood is N_i, and so we can express the action profile of agent i's neighborhood as s_{N_i} = (s_{i'})_{i' ∈ N_i}. Given a set of payoffs and a neighborhood defined by the network, player i faced with action profile s_{N_i} has payoff π_i = π_i(s_{N_i}).

Agents consider altering their decision only if they have neighbors taking actions different from their own. A useful way to characterize these fungible agents is to define coalitions: groups of connected agents taking the same action.
Using the definition of a component from Jackson (2008), a coalition C_i(g|s) ⊆ N is defined formally as a set of agents for which there is a sequence of links i_1 i_2, i_2 i_3, …, i_{K−1} i_K between nodes i and i' such that g_{i_{k−1} i_k} = 1 and s_{i_{k−1}} = s_{i_k} for each k ∈ {2, …, K}, with i_1 = i and i_K = i'. In other words, a coalition C_i is a distinct maximal connected sub-network in which all agents take the same action. Coalitions and neighborhoods are therefore definitively distinct: neighborhoods and extended neighborhoods are defined by the topology of the network, while coalitions are determined by the actions taken by agents. If a network has k coalitions then C_1 ∪ C_2 ∪ ⋯ ∪ C_k = N and C_i ∩ C_j = ∅ ∀ i, j, i ≠ j.

Fig. 1 visually demonstrates the difference between a neighborhood and a coalition. Let action A be represented by shaded nodes and action B by non-shaded nodes; Fig. 1 contains two coalitions playing action A (one with 14 agents, one with five) and a coalition of 44 agents playing B (the non-shaded nodes). Also shown is the 2-neighborhood of agent i, consisting of all agents within two steps of agent i; it is outlined with a dashed line and contains 13 nodes or agents.

Every coalition in a connected network has a boundary that separates agents who are part of the coalition from their neighbors who are not; a coalition's boundary agents thus have at least one neighbor who takes an action different from their own. Formally, define the set of boundary agents B_i of coalition C_i(g|s) as B_i(g|s) = {b | b ∈ C_i, g_bj = 1, j ∉ C_i}. Each boundary agent's neighborhood has internal members, who belong to the coalition and play the same action as agent b, and external neighbors, who do not belong to the coalition and take the other action.
Formally, the internal neighborhood of boundary agent b is I_b = N_b ∩ C_i, and the external neighborhood of boundary agent b is J_b = N_b ∖ C_i.⁴ By the definition of I and J, I_b ∪ J_b = N_b and I_b ∩ J_b = ∅. The number of interior neighbors of agent b is η_b^I = |I_b ∖ {b}| (agent b is a member of his neighborhood but is not counted as his own neighbor), and the number of exterior neighbors of agent b is η_b^J = |J_b|. Finally, define s_{I_i} = (s_{i'})_{i' ∈ I_i(g)} as the action profile of the internal neighborhood of agent i ∈ B_i, and define s_{J_i} = (s_j)_{j ∈ J_i(g)} as the action profile of the external neighborhood of agent i. Then the payoff to boundary agent i is π_i = π(s_{I_i}, s_{J_i}).

⁴ Throughout the manuscript, agents i will be taking one action, typically action A, and agents j the other.
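The coalition and boundary definitions translate directly into code. The sketch below (on an assumed toy path graph, not an example from the paper) partitions agents into same-action connected components and extracts a coalition's boundary agents.

```python
# Coalitions = maximal connected sets of agents taking the same action;
# boundary agents = coalition members with a neighbor outside it.

def coalitions(adj, actions):
    """Partition the nodes into same-action connected components."""
    seen, parts = set(), []
    for start in adj:
        if start in seen:
            continue
        comp, stack = {start}, [start]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v not in comp and actions[v] == actions[start]:
                    comp.add(v)
                    stack.append(v)
        seen |= comp
        parts.append(comp)
    return parts

def boundary(adj, coalition):
    """B_i: members with at least one neighbor outside the coalition."""
    return {b for b in coalition if adj[b] - coalition}

# Path 0-1-2-3-4 with actions A A B B B (coded A = 0, B = 1).
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
acts = {0: 0, 1: 0, 2: 1, 3: 1, 4: 1}
parts = coalitions(adj, acts)
print(sorted(sorted(p) for p in parts))  # [[0, 1], [2, 3, 4]]
print(sorted(boundary(adj, {2, 3, 4})))  # [2]
```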



The eventual distribution of actions across the network depends on the circumstances under which a particular agent's choice spreads to his neighbors. So, when does an action spread? Consider a coalition and focus on one of its boundary agents, agent i; without loss of generality suppose he takes action A. Because i is a boundary agent, some agents in his neighborhood take the other action. Consider one of agent i's external neighbors, agent j ∈ J_i. Under what circumstances will agent j switch his action from s_j to s_i? Under the imitation rule based on the most successful neighbor, agent j copies agent i if i earns more than agent j and all of j's neighbors. Define j* as the highest-earning agent in j's neighborhood who takes the same action as j (including agent j himself); formally, j* = arg max {π_{j'} | j' ∈ N_j, s_{j'} = s_j}. The decision of agent i spreads to agent j if π_i > π_{j*}. Recall that each coalition boundary agent has η^I internal neighbors (who take one action) and η^J external neighbors (who take the other). Given a set of payoffs, and assuming agent i currently takes action A, agent j switches if π_i > π_{j*}. In binary-choice games this occurs if

    a η_i^I + b η_i^J > c η_{j*}^J + d η_{j*}^I.    (1)

Action A does not spread if the inequality does not hold, but action B may or may not spread. Specifically, agent i will copy agent j and switch to action B if agent j earns more than the most successful member of i's neighborhood taking action A. Defining i* symmetrically to j*, and labeling agent i's most successful neighbor taking action B as j, agent i switches to action B if π_j > π_{i*}, or

    c η_j^J + d η_j^I > a η_{i*}^I + b η_{i*}^J.    (2)
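For concreteness, the two conditions can be evaluated directly from the four boundary counts. The payoff values and neighborhood counts below are hypothetical illustrations, not cases analyzed in the paper.

```python
# Inequality (1): A spreads from i to j when i's total payoff beats
# that of j*, j's best same-action neighbor. Inequality (2) is the
# symmetric condition for B.

def a_spreads(a, b, c, d, etaI_i, etaJ_i, etaI_js, etaJ_js):
    """Inequality (1): pi_i > pi_{j*}."""
    return a * etaI_i + b * etaJ_i > c * etaJ_js + d * etaI_js

def b_spreads(a, b, c, d, etaI_is, etaJ_is, etaI_jb, etaJ_jb):
    """Inequality (2): the best B-player beats i*, i's best A-playing neighbor."""
    return c * etaJ_jb + d * etaI_jb > a * etaI_is + b * etaJ_is

# Hypothetical eta = 4 example: each boundary agent has 3 interior and
# 1 exterior neighbor; payoffs a = 0, b = -1.1, c = 0.4, d = -1.
print(a_spreads(0, -1.1, 0.4, -1, 3, 1, 3, 1))  # True: -1.1 > -2.6
print(b_spreads(0, -1.1, 0.4, -1, 3, 1, 3, 1))  # False
```

Note that these helpers take arbitrary counts; the mutual exclusivity of (1) and (2) discussed next relies on i* and j* actually being neighborhood maximizers, which the functions themselves do not enforce.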

The third possibility is that neither agent switches: agent i continues to take action A and agent j continues to take action B. The key to the dynamics of change across networks is that these two criteria, inequalities (1) and (2), are not symmetric. They can involve four different neighborhoods, N_i, N_{i*}, N_j, and N_{j*}, but the spread of A depends on N_i and N_{j*} while the spread of B depends on N_{i*} and N_j. Consequently, for any specific pair of neighbors with g_ij = 1, inequality (1) can be true or false, inequality (2) can be true or false, and both can be false, but both inequalities cannot be true.⁵ These combinations give rise to the spread or contraction of a particular action. Specifically, if (1) is true and (2) is false, action s_i spreads from agent i to agent j; if (2) is true and (1) is false, action s_j spreads from agent j to agent i; and if both are false, both agents retain their current play.

4.1. Consistent neighborhoods and consistent networks

Similar to the definition of the boundary agents of a coalition, we can define the set of boundary agents of an x-neighborhood. Consider the x-neighborhood of agent i, N_i^x. Its boundary agents, B_{N_i^x}, are those agents with neighbors both inside and outside the x-neighborhood. Let agent i' be a boundary agent of agent i's x-neighborhood. Agent i' has interior neighbors, those within the x-neighborhood, N_{i'} ∩ N_i^x, and exterior neighbors, those outside it, N_{i'} ∖ N_i^x. Our primary interest is the number of interior neighbors, η_{i'}^I = |N_{i'} ∩ N_i^x|, and the number of exterior neighbors, η_{i'}^J = |N_{i'} ∖ N_i^x|. If all of the boundary agents of an x-neighborhood have the same neighborhood configuration (the same number of interior neighbors and the same number of exterior neighbors), we say that the x-neighborhood has a consistent boundary; for short, the x-neighborhood is consistent. Again, let i' be a boundary agent of agent i's x-neighborhood.
Formally, a consistent neighborhood boundary is one in which η_{i'}^I = k ∀ i' ∈ B_{N_i^x} and η_{i'}^J = k' ∀ i' ∈ B_{N_i^x}. Furthermore, an entire section of a network (a component) is consistent if it consists of a series of consistent x-neighborhoods, i.e., η_{i'}^I = k and η_{i'}^J = k' ∀ i' ∈ B_{N_i^x} for x ∈ {m, m+1, …, p}, and the entire network is consistent if m = 0 and p = n. Thus a consistent component of a network consists of a string of consistent x-neighborhoods with a constant number of interior and exterior neighbors for each boundary agent across those x-neighborhoods. This definition implies that consistent networks are regular (agents have the same number of neighbors), but regular networks are not necessarily consistent.

Consistency is related to Young's (1998) measure of close-knittedness (the ratio of internal links to all links in a group) and Morris' (2000) concept of cohesion (the proportion of links to members of the group relative to nonmembers). Close-knittedness and cohesiveness reflect how closely agents are related or how dense their connections are; consistency refers to the regularity of this relationship across neighborhoods. Thus tightly-knit neighborhoods and loosely-knit neighborhoods can both be consistent. Applying the Young and Morris measures to x-neighborhoods, consistent networks are those in which all agents are equally close-knit or equally cohesive across x-neighborhoods.

Consistency directly affects the dynamics of game play. To begin, we look at a special case showing that games played on consistent networks always converge to an absorbing state, in any symmetric binary-choice game, if the initial distribution of actions aligns with a neighborhood. Specifically, suppose a coalition C_i(g) conforms to an x-neighborhood, so that C_i(g) = N_i^x(g). Then we can show Proposition 1.

Proposition 1.
In a consistent network, all symmetric binary-choice games converge to an absorbing state if the initial distribution of actions aligns with an x-neighborhood.

⁵ To demonstrate this claim, suppose (1) and (2) are both true: inequality (1) means π_i > π_{j*}, and inequality (2) means π_j > π_{i*}. Since i* and j* are the highest earners among same-action players in their respective neighborhoods, π_{i*} ≥ π_i and π_{j*} ≥ π_j; together these imply π_j > π_{i*} ≥ π_i > π_{j*} ≥ π_j, a contradiction.



Proof. The proof is inductive. Without loss of generality, let the coalition's (and thus the x-neighborhood's) choice be A, s_{C_i} = s_{N_i^x} = A. Focus on boundary agent i and one of his exterior neighbors, agent j.

(i) Suppose payoffs a, b, c, and d are such that inequality (1) is true. Action A spreads from agent i to agent j because π_i > π_{j*}. Because the neighborhood boundary is consistent, π_i > π_{j*} ∀ i ∈ B_{N_i^x}, j* ∈ N_j, g_ij = 1. Thus all exterior neighbors follow suit, and after one period s_{N_i^(x+1)} = A. By the consistency of the network, the boundary counts carry over from one x-neighborhood to the next (η^I and η^J are the same for the boundary agents of N_i^(x+1) as for those of N_i^x); thus π_i > π_{j*} ∀ i ∈ B_{N_i^(x+1)}, ∀ x ∈ {0, …, n}, and ∀ j ∈ N_i(g), g_ij = 1. Consistency guarantees that the boundary of the (x+1)-neighborhood faces the same payoffs as the boundary of the x-neighborhood in the previous period, and so action A continues to spread until it engulfs the network: an absorbing state.

(ii) Suppose payoffs are such that inequality (2) holds. Then action B spreads (action A shrinks) to engulf the network in a similar fashion: an absorbing state.

(iii) Suppose payoffs are such that neither (1) nor (2) holds. Then neither agent i nor agent j changes his action, ∀ i ∈ B_{N_i^x}, g_ij = 1: an absorbing state. □

Intuitively, if a coalition aligns with a neighborhood in a consistent network, then all boundary agents earn the same payoff and face the same choices. If one switches, all switch. This relationship then holds in the subsequent (x+1)-neighborhood, so that everyone switches again, and the process continues until the entire network follows suit. Whether the initial decision spreads, contracts, or stabilizes depends on the magnitude of payoffs to agents i, agents j, and their most successful neighbors, agents i* and j*.

This proposition extends easily to imitation based on the highest average return in the neighborhood. Let aπ^A_j be the average return in agent j's neighborhood to agents playing action A and aπ^B_j the average return in j's neighborhood to agents playing B. Agent j switches to action A if aπ^A_j > aπ^B_j, and if that is true then, given neighborhood consistency, aπ^A_j > aπ^B_j ∀ i, j ∈ N_j, g_ij = 1. Similarly, if the (x+1)-neighborhood is also consistent then A continues to spread. Imitation based on popularity follows suit: let η^I_j be the number of agents adopting action A in agent j's neighborhood and η^J_j be the number of agents using B in j's neighborhood. If action A spreads to a specific j because η^I_j > η^J_j, then by neighborhood consistency η^I_j > η^J_j ∀ i, j ∈ N_j. Again, because of network consistency the most popular decision continues to spread neighborhood by neighborhood.
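The spread described in Proposition 1 can be sketched in a short simulation. The code below is an illustrative sketch, not the author's implementation: it plays a symmetric binary-choice game on a simple ring (a consistent network, η_i = 2) with synchronous updating, each agent copying the highest-earning agent in its closed neighborhood; the coordination payoffs (a = 3, b = 0, c = 0, d = 1) and the tie-breaking rule are assumptions chosen for illustration.

```python
# Illustrative sketch (assumed payoffs and tie-breaking, not from the
# paper): imitate-the-best on a simple ring, a consistent network.
# Payoffs: a = A meets A, b = A meets B, c = B meets A, d = B meets B.

def payoff(action, nbr_actions, a, b, c, d):
    """Total payoff from playing `action` against each neighbor."""
    if action == 'A':
        return sum(a if s == 'A' else b for s in nbr_actions)
    return sum(c if s == 'A' else d for s in nbr_actions)

def step(state, a, b, c, d):
    """One synchronous round: every agent copies the highest-earning
    agent in its closed neighborhood (itself and its two ring
    neighbors), keeping its own action on ties."""
    n = len(state)
    pay = [payoff(state[i], [state[(i - 1) % n], state[(i + 1) % n]],
                  a, b, c, d) for i in range(n)]
    new = []
    for i in range(n):
        hood = [(i - 1) % n, i, (i + 1) % n]
        best = max(hood, key=lambda j: (pay[j], j == i))
        new.append(state[best])
    return new

def run(state, a, b, c, d, rounds=100):
    """Iterate until an absorbing state (a fixed point) is reached."""
    for _ in range(rounds):
        nxt = step(state, a, b, c, d)
        if nxt == state:
            return state
        state = nxt
    return state

# A coalition of A-players aligned with a neighborhood of the ring
# spreads until it engulfs the network, as Proposition 1 predicts.
final = run(['A'] * 3 + ['B'] * 9, a=3, b=0, c=0, d=1)
print(final)  # all twelve agents end up playing A
```

With these payoffs inequality (1) holds at the coalition boundary, so the coalition grows by one agent on each side per round; payoffs satisfying neither (1) nor (2) would instead freeze the initial coalition in place.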

4.2. Quasi-consistency

Since consistent networks converge to an absorbing state, Proposition 1 implies that absorbing sets of action profiles, cyclical distributions of actions, and more complex dynamics arise only in inconsistent networks. However, some inconsistent networks still converge to an absorbing state regardless of a game's payoffs and initial distribution of actions, and we can identify the network attribute that leads to this result. In short, a network game on an inconsistent network converges to an absorbing state if the external neighbors of boundary agents are themselves neighbors, or, using the language of networks, if the external neighborhoods of different degree are clustered.

To demonstrate this result we need to formally define a quasi-consistent neighborhood and a quasi-consistent network (or component of a network). Consider the boundary of an inconsistent x-neighborhood. By definition some members of the boundary have different neighborhood configurations; that is, they have different numbers of interior neighbors and different numbers of exterior neighbors. Suppose there are k different neighborhood configurations in an inconsistent x-neighborhood's boundary. Order those agents by the size of their interior neighborhoods and label them accordingly, i.e., let i1, i2, …, ik ∈ BN^x_i represent the different neighborhood configurations such that η^I_{i1} > η^I_{i2} > ⋯ > η^I_{ik}. An x-neighborhood is quasi-consistent if ∀ i ∈ BN^x_i: I^x_{i1} ⊇ I^x_{i2} ⊇ ⋯ ⊇ I^x_{ik} and J^x_{i1} ⊆ J^x_{i2} ⊆ ⋯ ⊆ J^x_{ik}. In other words, in a quasi-consistent neighborhood every boundary agent with a particular neighborhood configuration is a neighbor to an interior boundary agent of every other neighborhood configuration, and the exterior neighbors of these boundary agents are also neighbors.
Inductively, a quasi-consistent component of a network consists of a series of quasi-consistent neighborhoods, so η^I_{i1} > η^I_{i2} > ⋯ > η^I_{ik}, I^x_{i1} ⊇ I^x_{i2} ⊇ ⋯ ⊇ I^x_{ik}, and J^x_{i1} ⊆ J^x_{i2} ⊆ ⋯ ⊆ J^x_{ik} ∀ i ∈ BN^x_i, x ∈ {m, m+1, …, p}. Finally, note that by definition a set is a subset of itself, so I^x_{i1} ⊆ I^x_{i1} and J^x_{i1} ⊆ J^x_{i1}, which means all consistent neighborhoods and all consistent networks are also quasi-consistent. Thus quasi-consistency is the more general attribute of interest in this manuscript. In Proposition 2 we show that games played on these more general networks also converge to an absorbing state of actions regardless of the payoffs in the game. In addition, Proposition 2 demonstrates that this convergence occurs for any initial distribution of actions.

Proposition 2. All symmetric binary-choice games played on a quasi-consistent network converge to an absorbing state, regardless of the initial distribution of actions.

Proof. To begin, consider a coalition that conforms to an x-neighborhood in a quasi-consistent portion of a network. Suppose the coalition is taking action si and focus on the neighboring boundary agents i1, i2, …, ik. Without loss of generality


Fig. 2. Agents i and i' are boundary agents with different neighborhood configurations, but their interior and exterior neighbors are themselves neighbors.

assume agents i currently take action A and that the payoffs are such that ∂π_i/∂η^I_i > 0. Quasi-consistency implies π_{i1} > π_{i2} > ⋯ > π_{ik}. Two cases arise: (i) π_{ik} > π_{j*k}, and (ii) π_{ih} > π_{j*h} but π_{i(h+1)} ≤ π_{j*(h+1)}.

In case (i), because π_{i1} > π_{i2} > ⋯ > π_{ik}, π_{ik} > π_{j*k}, and J^x_{i1} ⊆ J^x_{i2} ⊆ ⋯ ⊆ J^x_{ik}, then π_{ib} > π_{j*b} ∀ b ∈ {1, 2, …, k}. In other words, if the lowest-earning boundary agent's action spreads then all of his neighbors (who earn more than he does and face a subset of his neighbors) will see their action spread as well. And inductively, in a quasi-consistent network the (x+1)-neighborhood contains the same boundary neighborhood configurations and action si = A spreads across the network.

In case (ii) only some of the interior boundary agents see their action spread, but because π_{i1} > ⋯ > π_{ih} > π_{i(h+1)} > ⋯ > π_{ik}, if π_{ih} > π_{j*h} then π_{ib} > π_{j*b} ∀ b ∈ {1, 2, …, h}. However, since π_{i(h+1)} ≤ π_{j*(h+1)}, then π_{ib'} ≤ π_{j*b'} ∀ b' ∈ {h+1, …, k} and the remaining exterior neighbors will not copy their action, so only some switch. At this point the coalition of agents playing action A does not align with an x-neighborhood; however, it is still true that I_{i1} ⊇ I_{i2} ⊇ ⋯ ⊇ I_{ik} and J_{i1} ⊆ J_{i2} ⊆ ⋯ ⊆ J_{ik}, so the new boundary is quasi-consistent. Consider agents ih and i(h+1). For agent ih, π_{ih} > π_{j*h} and his action spreads to his exterior neighbors; thus agents j ∈ J^x_h switch their action to A. By quasi-consistency, J^x_h ⊆ J^x_{h+1}, and therefore in the next round the interior and exterior neighborhoods for agent i(h+1) have changed. Specifically,

I^{x+1}_{h+1} = I^x_{h+1} ∪ J^x_h and J^{x+1}_{h+1} = J^x_{h+1} \ J^x_h.

Similarly,

I^{x+1}_{h+2} = I^x_{h+2} ∪ J^x_{h+1}, …, I^{x+1}_k = I^x_k ∪ J^x_{k−1}, and J^{x+1}_{h+2} = J^x_{h+2} \ J^x_{h+1}, …, J^{x+1}_k = J^x_k \ J^x_{k−1}.

Thus in the new coalition I^{x+1}_{i1} ⊇ I^{x+1}_{i2} ⊇ ⋯ ⊇ I^{x+1}_{ik} and J^{x+1}_{i1} ⊆ J^{x+1}_{i2} ⊆ ⋯ ⊆ J^{x+1}_{ik}, and the coalition boundary is also quasi-consistent. Thus in the next round of play either no one switches their action (an absorbing state) or cases (i) or (ii) repeat. □

Intuitively, as the action played by a quasi-consistent neighborhood boundary spreads, it either spreads to all of its exterior neighbors, aping the dynamics of the consistent neighborhood in Proposition 1, or it spreads to only some neighbors. However, the new coalition boundary is still quasi-consistent. In the next round, the lower-earning neighborhoods have added members of the spreading coalition to their interior neighborhoods; they now include agents who have switched actions (J_{i1} ⊆ J_{i2} ⊆ ⋯ ⊆ J_{ik}). This raises their earnings (in this particular case) and in the next round of play their action can continue to spread. Or it stops. Either leads to an absorbing state.

An important corollary of Proposition 2 is demonstrated in the second case. When the first round of decisions spreads to only some of the neighboring agents, the coalition no longer aligns with an x-neighborhood. However, since the underlying network is quasi-consistent, so is the coalition boundary. In general, a quasi-consistent network means J_{i1} ⊆ J_{i2} ⊆ ⋯ ⊆ J_{ik}. If the action taken by any of these exterior agents were arbitrarily switched to action A (as might be the case with a random initial distribution of actions), that agent would be removed from the exterior neighborhood J_{i'} and from all supersets that had previously included it. Thus the ordering of subsets, and therefore quasi-consistency, remains intact. Consequently we do not have to assume that our initial distribution of actions aligns with a neighborhood; the convergence to an absorbing state occurs with any initial distribution of actions.

A ready example of a quasi-consistent network is a ring in which η_i = 4 (the simple ring, η_i = 2, is consistent). Fig. 2 shows such a portion of a ring network with a shaded neighborhood.
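The structure of Fig. 2 can be checked mechanically. The snippet below is a sketch (the network size and node labels are illustrative): it builds the 1-neighborhood of an agent on a twelve-agent degree-4 ring, where each agent links to the two nearest agents on each side, and verifies the nesting J_{i1} ⊆ J_{i2} of exterior neighborhoods that quasi-consistency requires.

```python
# Sketch of the Fig. 2 structure: a degree-4 ring (links to the two
# nearest agents on each side).  The "inner" boundary agent of a
# 1-neighborhood has three interior and one exterior neighbor; the
# "outer" boundary agent has two of each, so the boundary is not
# consistent, but the exterior neighborhoods nest.

N = 12

def neighbors(i):
    return {(i + k) % N for k in (-2, -1, 1, 2)}

hood = {0} | neighbors(0)            # the 1-neighborhood of agent 0
inner, outer = 1, 2                  # the two boundary configurations
                                     # on one side of the neighborhood

I_inner = neighbors(inner) & hood    # interior neighbors of agent 1
J_inner = neighbors(inner) - hood    # exterior neighbors of agent 1
I_outer = neighbors(outer) & hood
J_outer = neighbors(outer) - hood

print(len(I_inner), len(J_inner))    # 3 1: differs from the outer 2 2
print(J_inner <= J_outer)            # True: J_i1 is nested in J_i2
```

The two boundary configurations differ, so the ring is not consistent, yet the exterior neighborhood of the inner agent is a subset of the outer agent's, exactly the quasi-consistency condition.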
Notice that the ring is not consistent; η^I_i ≠ η^I_{i'}, but the exterior neighbors of the "inner" boundary agent i' are neighbors of the "outer" boundary agent's exterior neighbors. It is quasi-consistent.

But quasi-consistent networks are the only networks that converge to an absorbing state for any set of payoffs and any initial distribution of actions. In all other networks there is some combination of payoffs and distribution of actions that will converge to an absorbing set of action profiles. This is demonstrated in Proposition 3.

Proposition 3. In a network that is not quasi-consistent there exists some combination of payoffs and initial distribution of actions that leads to an absorbing set.

Proof. Consider a neighborhood that is not quasi-consistent. There exist (by definition) two boundary agents, i and i', for which η^I_i ≠ η^I_{i'}. Because payoffs are unconstrained there exists a set of payoffs such that si spreads (inequality (1) is true) and si' shrinks (inequality (2) is true). After an arbitrary number of rounds, t, this leads to one of two situations:

(i) η^{I(x+t)}_i ≠ η^{I(x+t)}_{i'} for t = 1, 2, …, or
(ii) η^{I(x+h)}_i = η^{I(x+h)}_{i'} for some h ≤ t.

If (i) η^{I(x+t)}_i ≠ η^{I(x+t)}_{i'} for t = 1, 2, …, then subsequent payoffs lead to successive rounds of expansion for si and contraction for si'; by definition the absence of an absorbing state. If (ii) η^{I(x+h)}_i = η^{I(x+h)}_{i'}, then either η^{I(x+h)}_i ≠ η^{I(x+h−1)}_i or η^{I(x+h)}_{i'} ≠ η^{I(x+h−1)}_{i'}, thus there exists a set of payoffs that creates a cycle between the (x+h−1)- and the (x+h)-neighborhoods. □
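A concrete cycle of the kind Proposition 3 predicts is easy to construct. The sketch below uses assumed payoffs, chosen for illustration: it seeds a simple ring with a single A-player, so the 0-neighborhood's boundary configuration differs from the 1-neighborhood's, and under synchronous imitate-the-best updating play never settles; the lone A expands to its 1-neighborhood and then collapses back, a two-period absorbing set.

```python
# Sketch (assumed payoffs): a cyclical absorbing set on a six-agent
# ring under synchronous imitate-the-best updating.  A single A-player
# is a 0-neighborhood whose boundary configuration differs from the
# 1-neighborhood's, and these payoffs exploit that inconsistency.

def payoff(action, nbr_actions, a, b, c, d):
    if action == 'A':
        return sum(a if s == 'A' else b for s in nbr_actions)
    return sum(c if s == 'A' else d for s in nbr_actions)

def step(state, a, b, c, d):
    n = len(state)
    pay = [payoff(state[i], [state[(i - 1) % n], state[(i + 1) % n]],
                  a, b, c, d) for i in range(n)]
    return [state[max([(i - 1) % n, i, (i + 1) % n],
                      key=lambda j: (pay[j], j == i))]
            for i in range(n)]

a, b, c, d = 0, 5, 6, 1          # chosen so that 2b > c + d > a + b
state = ['A'] + ['B'] * 5        # a single deviant: a 0-neighborhood
history = [state]
for _ in range(4):
    state = step(state, a, b, c, d)
    history.append(state)

print(history[1])  # the A spreads to its whole 1-neighborhood
print(history[0] == history[2] == history[4])  # True: a 2-cycle
```

Here 2b > c + d makes the lone A the highest earner in each neighbor's neighborhood, while c + d > a + b makes the grown coalition's boundary revert, so the distribution of actions cycles rather than absorbing.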

Quasi-consistency is therefore a necessary and sufficient condition for convergence to an absorbing state across all payoffs and all initial distributions of actions. Or, absent quasi-consistency, there is some payoff structure and distribution of actions that leads to a cycle of choices.

4.3. Asynchronous updating

Propositions 1–3 are demonstrated assuming synchronous updating, but in many economic situations asynchronous updating, in which only a proportion of the population updates in any period, may be the more common routine. In some situations it makes sense to assume that only a single agent updates in each period, a protocol called continuous updating. And the timing of updates has been shown to alter the aggregate distribution of actions as well as the system dynamics in certain games on specific networks (Huberman and Glance, 1993; Nowak et al., 1996; Page, 1997; Wilhite, 2006). However, asynchronicity does not change the conclusions of Propositions 1–3. This conclusion comes from Proposition 2 and is therefore stated as a corollary.

Corollary to Proposition 2. With asynchronous updating a quasi-consistent network still behaves consistently.

Proof. Suppose agent i is playing A and consider his exterior neighbors (playing B). Recall that in a quasi-consistent neighborhood these exterior neighbors are neighbors of each other. As in Proposition 2, order those exterior neighborhoods by the number of neighbors playing B such that J_{i1} ⊆ J_{i2} ⊆ ⋯ ⊆ J_{ik}. Suppose payoffs are such that some portion of those exterior neighbors are willing to switch to the other action, but with asynchronous updating only one (or a subset) of the exterior neighbors updates in this period. Say agent ĵ ∈ J_{i2} switches. This removes ĵ from the set of neighbors J_{i2} and also removes ĵ from every superset of J_{i2} in the ordered exterior neighborhoods; thus J_{i1} ⊆ J_{i2} \ {ĵ} ⊆ ⋯ ⊆ J_{ik} \ {ĵ} and the new coalition boundary is still quasi-consistent. □
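The corollary can be checked with continuous, one-agent-at-a-time updating. The code below is a sketch with an assumed deterministic round-robin order; on the same simple ring and coordination payoffs used in the earlier sketch, sequential updating reaches the same all-A absorbing state as synchronous play.

```python
# Sketch (assumed round-robin order): continuous updating, one agent
# per period, on a simple ring with coordination payoffs.

def payoff(action, nbr_actions, a, b, c, d):
    if action == 'A':
        return sum(a if s == 'A' else b for s in nbr_actions)
    return sum(c if s == 'A' else d for s in nbr_actions)

def async_run(state, a, b, c, d, max_sweeps=50):
    """Update agents one at a time in index order until no agent
    changes its action during a full sweep (an absorbing state)."""
    n = len(state)
    for _ in range(max_sweeps):
        changed = False
        for i in range(n):
            hood = [(i - 1) % n, i, (i + 1) % n]
            pay = {j: payoff(state[j],
                             [state[(j - 1) % n], state[(j + 1) % n]],
                             a, b, c, d)
                   for j in hood}
            best = max(hood, key=lambda j: (pay[j], j == i))
            if state[best] != state[i]:
                state[i] = state[best]
                changed = True
        if not changed:
            return state
    return state

final = async_run(['A'] * 3 + ['B'] * 9, a=3, b=0, c=0, d=1)
print(final)  # the same absorbing state as under synchronous play
```

The update order changes the path (here the A-coalition sweeps around the ring within a single pass) but not the destination, as the corollary asserts.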

Thus quasi-consistency is an axiomatic characteristic of networks. Quasi-consistent networks always converge to an absorbing state of decisions for all payoff combinations in binary-choice games and for any initial distribution of actions, when updating is imitation, even if that updating is asynchronous.

4.4. Limits to quasi-consistency

No connected network, n > 1, is quasi-consistent. More precisely, there is a lower bound on consistency; Proposition 4 states this formally.

Proposition 4. Except for the empty network, no network with undirected links is quasi-consistent from the 0-neighborhood to the 1-neighborhood.

Proof. Let agent i be the single agent in a 0-neighborhood. By definition, η^J_i = η_i = k. Now consider any boundary agent i' of i's 1-neighborhood. Agent i' has at least one internal neighbor, because g_{ii'} = 1; thus η^{J(x+1)}_{i'} ≤ k − 1. □

Together, Propositions 3 and 4 establish an important property of network games. There exists a set of payoffs in all networks that leads to a "blinker", a cycle of actions in which the number of agents in a 1-neighborhood "blinks" from everyone taking one action, to only a single agent taking that action, back to everyone playing it, and so forth. Such cycles have been observed in several different networks and games (Eshel et al., 1998; Nowak et al., 1996; Goyal, 2007; Szabo and Fath, 2007), but Propositions 3 and 4 demonstrate their existence in all connected networks.

5. Properties of the convergent action profiles

Propositions 1–3 show that the topology of a network (quasi-consistency) determines whether a game converges to an absorbing state or an absorbing set of action profiles. This section examines the composition of that absorbing set or state.
Much of our understanding about the composition of emergent action profiles in network games comes from introducing mutations (small, irregular perturbations to decision making) and concentrating on the long-run results (see, for example, Foster and Young, 1990; Young, 1998; Kandori et al., 1993).^6 Introducing mutation to game play allows the system to be modeled as a stochastic dynamical system with an ergodic process, but "this long run (asymptotic) behavior can differ radically from the corresponding deterministic process" (Young, 1998, p. 47). For example, consider a game of coordination with two Nash equilibria, all play A or all play B. Absent mutation, this system can converge to a host of different action profiles, some absorbing states and others absorbing sets. Introducing a low mutation rate shakes out these smaller, local

^6 Benaim and Weibull (2003) show that the solution to a stochastic process enters a basin of attraction in finite time with probability approaching one as the population grows to infinity, or it approaches one for a finite population as time goes to infinity.


results and narrows the pertinent choices to the two Nash equilibria: all A or all B. Young (1998) also shows that over time continued mutation will eventually shift the system from one equilibrium to the other, and he defines a "stochastically stable equilibrium" as the equilibrium that is most frequently visited and/or most often occupied. Importantly, it can be shown that the stochastically stable equilibrium in a game of coordination is the risk-dominant equilibrium (Young, 1998; Kandori et al., 1993).

Mutation also imposes a cost in that it masks how decisions percolate through a network. With mutation, networks simply do not matter. The dynamics of decision making, which are central here, are unimportant in long-run, stochastically stable equilibria. Young (1998) recognizes this and even writes that stochastically stable convergence "does not depend on the idea that actions spread by diffusion (which they may indeed do)" (p. 102). Market decisions are often of much shorter deliberation and duration; we face decisions that matter today, next week, and next month, and they depend on what others are doing right now, not on what they may eventually do once the system has settled into its long-run, stochastically stable equilibrium. Thus this manuscript focuses on action profiles that emerge in the intermediate term, when low mutation rates have not had sufficient time to nullify the network's influence on decision making.

5.1. The composition of the absorbing sets or states

Quasi-consistency determines convergence to an absorbing set or state, but the composition of the eventual action profile is affected by neighborhood overlap. Overlap refers to the intersection of the neighborhoods of two neighbors. Formally, if g_ij = 1 then overlap O_ij(g) = N_i ∩ N_j. Neighborhood overlap refers to the common neighbors of two consecutive neighborhoods, and it affects the composition of the emergent distribution of actions.
Consider the neighborhood of agent i and suppose every member of that neighborhood is taking the same action (say action A). Focus on a boundary agent of i; that boundary agent has η^I_i internal neighbors and η^J_i external neighbors. Since agent i is a neighbor of his or her boundary agent, the number of internal neighbors, η^I_i, increases with greater neighborhood overlap, and the size of the external neighborhood, η^J_i, decreases with neighborhood overlap. A parallel argument applies to a neighborhood taking action B; overlap increases η^I_i and decreases η^J_i, ceteris paribus.

This has a direct impact on total payoffs and on inequalities (1) and (2). For a given set of payoffs, boundary agents in networks with greater overlap have more internal neighbors and fewer external neighbors. The payoff from continuing to take the same action as the rest of their internal neighborhood is therefore augmented, and the return to switching to copy their external neighbors declines. Consequently, neighborhood overlap tends to make a system "tippy": it pushes the final distribution of actions to the extremes, more playing A or more playing B. Conversely, in networks with little neighborhood overlap the benefit of an agent switching actions to agree with his external neighbors rises, at the margin and in a relative sense, because he has relatively more external neighbors. This result emerged in our sample simulations shown in Table 1; the ring, which has more neighborhood overlap than the grid or tree, has more agents choosing the same action.
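Neighborhood overlap is easy to compute directly. The sketch below (node labels and network sizes are illustrative assumptions) contrasts a degree-4 ring, where adjacent agents share two common neighbors, with a fragment of a 4-regular tree, where adjacent agents share none; this is the overlap difference behind the ring's "tippier" behavior.

```python
# Sketch: overlap O_ij = N_i intersect N_j for two degree-4 structures.

def ring_neighbors(i, n=12):
    """Degree-4 ring: links to the two nearest agents on each side."""
    return {(i + k) % n for k in (-2, -1, 1, 2)}

def overlap(n_i, n_j):
    return n_i & n_j

# Adjacent agents on the ring share two common neighbors:
# N_0 = {10, 11, 1, 2} and N_1 = {11, 0, 2, 3} intersect in {2, 11}.
ring_o = overlap(ring_neighbors(0), ring_neighbors(1))

# Adjacent agents in a tree share none: a node's other children are
# not linked to one another.  (The adjacency list is an illustrative
# fragment of a 4-regular tree.)
tree = {0: {1, 2, 3, 4}, 1: {0, 5, 6, 7}}
tree_o = overlap(tree[0], tree[1])

print(len(ring_o), len(tree_o))  # 2 0
```

With the same degree, the ring's boundary agents therefore carry more internal and fewer external neighbors than the tree's, which is exactly what tilts inequalities (1) and (2) toward the interior action.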
For example, during the financial crisis of 2007 we moved quickly from a period of financial prosperity to the near collapse of the world's financial structure. There was no discernible change in policy or regulation that triggered this crisis, but imperceptible changes eventually stressed some institutions to a critical point, and because of the underlying connections between financial institutions this trouble spread rapidly around the world. This near collapse might be explained as a phase transition in a network. In addition, between these critical points we see ranges of payoff adjustment for which the organization's behavior does not change: a reflection of institutional inertia. In general, knowing the structure of underlying organizational networks can illuminate the extent of such institutional inertia and show when it is punctuated with the occasional abrupt transition.

How does neighborhood overlap affect the potential number of phase transitions? Start with the games. Rather than posting a list of payoff combinations for all 12 unique binary-choice games, we can present the games graphically, similar to the presentations by Stark (2010) and Eshel et al. (1998). Fix two payoffs, setting payoff a = 0 and payoff d = −1, and define the remaining payoff space (b, c) by measuring b on the horizontal axis and c on the vertical; this gives a graphical representation of the games. As we move around in this space the payoffs b and c become larger or smaller relative to the fixed payoffs a and d, defining different games. In this fashion we can recreate all symmetric games. Fig. 3 sub-divides this (b, c) space into 12 shaded regions indicating the different permutations of the payoffs in symmetric games (named games are so indicated). On top of this game space we can map network influences. Recall that if inequality (1) is true a coalition's decision spreads, if inequality (2) is true the action declines, and if neither is true neither action spreads.
Solving those inequalities for payoff c yields

c < (a·η^I_i(g) + b·η^J_i(g) − d·η^I_{j*}(g)) / η^J_{j*}(g)    (3)

[Figure 3 here: the (b, c) payoff space divided into 12 numbered regions, among them 1. prisoners' dilemma, 2. chicken, 3. Leader, 4. Battle of the Sexes, 5. Stag Hunt, and 12. Deadlock, with sample regions x and y marked.]
Fig. 3. Payoff combinations of the 12 symmetric binary-choice games.

and

c > (a·η^I_{i*}(g) + b·η^J_{i*}(g) − d·η^I_j(g)) / η^J_j(g).    (4)

For a given neighborhood size, η_i = k, we can determine combinations of payoffs that just satisfy (3) or (4), separating a spreading action from a stable one. For example, suppose η_i = 4, η^I_i = 1 and η^J_{j*} = 4. Using (3), the combination of critical values that differentiates a spreading action A from a stable action A is c = (a + 3b − d(0))/4 = (1/4)a + (3/4)b. Similarly, if η^J_j = 2 and η^I_{i*} = 3 the demarcation of a spreading action B is defined by (4), or c = (3/2)a + (1/2)b − d. These combinations of critical payoff values can be calculated for all possible neighborhood configurations of N_i, N_{i*}, N_j, and N_{j*} given a particular neighborhood size. Imposing the conditions that a = 0 and d = −1 and ignoring uniform neighbor configurations (where everyone plays the same action), we can graph all possible combinations in the (b, c) space of Fig. 3.

Each resulting "region" defined by the intersecting lines in Fig. 3 constitutes a set of payoffs that yields the same dynamics and the same absorbing action profile, given an initial distribution of actions and a particular network. Thus the size of each region reflects the amount of institutional inertia attached to each decision distribution. Consider the region defined by x. As we move within region x the payoffs b and c change, but play remains the same. However, if we cross one of those bounds and move into another region, say region y, those payoffs lead to a different dynamic path and a different absorbing action profile. Other points within that new region y, however, identify payoffs that converge to that new and different terminal action profile. Thus moving from region x to region y triggers a phase transition, as seen in Table 1. Neighborhood overlap impacts this mapping. The many regions shown in Fig. 3 identify potential combinations that lead to all possible phase transitions in networks with η_i = 4. However, neighborhood overlap reduces the total number of actual phase transitions.
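The critical lines can be enumerated mechanically. The sketch below is illustrative: it normalizes a = 0 and d = −1 and expresses each boundary condition from (3) (and, with the roles of the two agents swapped, (4)) as a line c = slope·b + intercept in the (b, c) space of Fig. 3 for degree-4 neighborhoods; the enumeration ranges are an assumption about which boundary configurations are admissible.

```python
# Sketch: critical lines for eta_i = 4 with a = 0, d = -1.
# Inequality (3) gives c = (a*etaI_i + b*etaJ_i - d*etaI_jstar) /
# etaJ_jstar; inequality (4) has the same form with the roles of the
# spreading and resisting agents swapped, so one function covers both.
from fractions import Fraction

def critical_line(eta_I_spreader, eta_I_resister, degree=4, a=0, d=-1):
    """Return (slope, intercept) of the line c = slope*b + intercept
    separating a spreading action from a stable one."""
    eta_J_spreader = degree - eta_I_spreader
    eta_J_resister = degree - eta_I_resister
    slope = Fraction(eta_J_spreader, eta_J_resister)
    intercept = Fraction(a * eta_I_spreader - d * eta_I_resister,
                         eta_J_resister)
    return slope, intercept

# The worked examples from the text:
line_a = critical_line(1, 0)  # slope 3/4, intercept 0: c = (3/4)b
line_b = critical_line(3, 2)  # slope 1/2, intercept 1: c = b/2 + 1

# All distinct lines for degree-4 boundaries (assumption: the
# spreading side needs at least one interior and one exterior
# neighbor; the resister may face zero to three interior neighbors):
lines = {critical_line(s, r) for s in range(1, 4) for r in range(0, 4)}
print(len(lines))  # 12 distinct critical lines slice the (b, c) space
```

Restricting the admissible configurations, as a ring does, simply deletes some of these lines, which is the mechanism behind the next paragraph's comparison of rings and trees.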
For example, in a ring network each boundary agent has at least two or three internal neighbors and only one or two external neighbors (see Fig. 2). Consequently a neighborhood configuration in which a boundary agent has only one internal neighbor and three external neighbors does not exist in a ring network, and the payoff combination c = (a + 3b − d(0))/4 = (1/4)a + (3/4)b is not pertinent. However, one internal neighbor and three external neighbors is a plausible neighborhood configuration in a tree network, so trees have more phase transitions than rings. Visually, the potential phase transitions for the tree network are defined by the solid and dashed lines crossing the payoff space in Fig. 3, but only the solid lines pertain to the ring network because of its restricted neighborhood configurations. Thus the ring has fewer potential phase transitions than the tree network; conversely, ring networks have greater institutional inertia than trees. In general, as neighborhood overlap increases there are fewer boundary neighborhood configurations, which leads to fewer potential phase transitions and greater institutional inertia.

6. Irregular networks

While irregular networks are not quasi-consistent, the level of boundary consistency and neighborhood overlap provides significant information about games played on these structures. For example, we know there exists some combination of


payoffs for which these games will converge to an absorbing set of actions, while other combinations may lead to an absorbing state. Using Eqs. (1) and (2) we can examine those different regimes. Consider a network in which nodes are of different degree. Label the node with the highest degree η^max_i, the node of lowest degree η^min_i, and let their difference be (η^max_i − η^min_i) = 2ĥ. We can then define the middle-most node degree, η^mid_i = η^min_i + ĥ, so that ĥ represents the variation in node degree around that middle-most node. Thus the number of edges connecting any particular agent i can be written as η^mid_i ± h, h ∈ {0, …, ĥ}. While there may be consistent neighborhoods in such a network, the network itself will not be consistent. Still, Proposition 2 tells us about the network's dynamics. With neighborhood size defined as η^mid_i ± h, h ∈ {0, …, ĥ}, we can express the configuration of actions in each neighborhood accordingly: the number of agent i's neighbors who choose action A equals η^mid_i ± h_i, the number of j*'s neighbors who choose B is η^mid_{j*} ± h_{j*}, and so forth. Assuming s_i = A, s_j = B and g_ij = 1, inequality (1) can be rewritten such that action A will spread from i to j if

a(η^mid_i − h_i)^I + b(η^mid_i − h_i)^J > c(η^mid_{j*} + h_{j*})^J + d(η^mid_{j*} + h_{j*})^I.    (5)

Rewriting inequality (2), action B spreads to agent i if

c(η^mid_j − h_j)^J + d(η^mid_j − h_j)^I > a(η^mid_{i*} + h_{i*})^I + b(η^mid_{i*} + h_{i*})^J.    (6)

Consider an x-neighborhood and two boundary agents. Let η_i = η^mid + h and η_{i'} = η^mid − h, i, i' ∈ BN^x_i. The parameter h measures node degree variation in the x-neighborhood's boundary. Following Proposition 3 and inequalities (5) and (6), an absorbing set of actions exists if the payoffs to action B lie in the interval [a(η^mid − h_{i'})^I + b(η^mid − h_{i'})^J, a(η^mid + h_i)^I + b(η^mid + h_i)^J), because π_{i'} < π_{j*} ≤ π_i. Payoffs outside that interval converge to an absorbing state (following Proposition 2). Furthermore, because ∂π_{i'}/∂h < 0 and ∂π_i/∂h > 0, this interval increases with h: as the level of node degree variation rises, the range of payoffs that leads to an absorbing state declines and more games converge to cyclical patterns of actions.^7 In many cases parts of the network will converge and other parts will not.

If the variation in neighborhood size is small (h ≪ η^mid_i) then most games converge to an absorbing state. There are important networks with minimal variation in node degree, most notably the "small-world" networks pioneered by Watts and Strogatz (1998); thousands of citations to their work demonstrate the theoretical and empirical importance of these networks. Watts and Strogatz generate small worlds by rewiring regular networks (often quasi-consistent rings), cutting and relocating edges. One of their fundamental findings is that it takes surprisingly few rewired edges (typically less than 10%) for small-world properties (a high degree of clustering and short paths) to emerge. Thus the dynamics of most games played on these resulting small-world networks are readily explained by our propositions; since there is little variation in node degree, most games converge to an absorbing state. Another network of practical importance has nodes of cascading degree.
Consider a network with nodes ordered by degree and label the degree of the x-neighborhood η^x_i, the degree of the (x+1)-neighborhood η^{x+1}_i, and so forth. Suppose these nodes are linked such that η^x_i ≥ η^{x+1}_i ≥ ⋯ ≥ η^{x+t}_i. This network is not consistent, nor is any particular component likely to be consistent. Once again, however, rewriting Eqs. (1) and (2) provides insight into this structure. Suppose all payoffs are positive and label η̃^J_{j*} as the minimum size of the exterior neighborhood that just keeps action A from spreading for a given set of payoffs. Define η̃^J_j symmetrically as the maximum size of the exterior neighborhood under which action B will not spread, given a set of payoffs. Solving inequality (1) for η^J_{j*} and inequality (2) for η^J_j we get

η^J_{j*} < (a·η^I_i + b·η^J_i − d·η^I_{j*}) / c = η̃^J_{j*}    (7)

η^J_j > (a·η^I_{i*} + b·η^J_{i*} − d·η^I_j) / c = η̃^J_j    (8)

If η_i ≥ η_{j*}, the partial derivatives of η̃^J_{j*} with respect to η^I_i, η^J_i, and η^I_{j*} show that action A spreads more easily from a hub to its neighboring smaller neighborhoods as payoffs a and b increase and payoffs c and d decrease.^8 Specifically, the partial ∂η̃^J_{j*}/∂η^I_i > 0 implies that as the number of interior neighbors playing action A increases, action A is more likely to spread. Similarly, ∂η̃^J_{j*}/∂η^J_i > 0 implies that as the number of exterior neighbors taking action B increases, action A is also more likely to spread. These seemingly conflicting results mean that agents in larger neighborhoods earn more when payoffs are positive. Since a particular agent's choice is more likely to be copied when that agent earns more, agents with more neighbors have more influence. This is the main effect in these networks of decreasing degree. In short, with positive payoffs the action adopted by hubs is likely to flow downhill. Derivations of η̃^J_j (inequality (8)) show that action B can climb uphill, but as these neighborhoods get larger the payoff differential for action B must increase to offset the population disadvantage. If neighborhood sizes vary significantly these payoff adjustments have to be substantial, thus there is a growing advantage for the action of the largest neighborhood to permeate the network.

^7 This trait is shared with Morris' (2000) contagion threshold.
^8 Negative payoffs simply flip the direction of these effects.
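The threshold η̃^J_{j*} of inequality (7) can be evaluated directly. The sketch below uses assumed positive payoffs and illustrative degree splits to show the hub effect: a higher-degree agent i generates a larger threshold, so his action tolerates larger exterior neighborhoods and spreads more easily downhill.

```python
# Sketch (assumed payoffs and degrees): the threshold from inequality
# (7), eta_tilde = (a*etaI_i + b*etaJ_i - d*etaI_jstar) / c.  Action A
# spreads from i whenever j*'s exterior neighborhood is smaller than
# this threshold.

def eta_tilde(eta_I_i, eta_J_i, eta_I_jstar, a, b, c, d):
    assert c > 0, "the derivation assumes positive payoffs"
    return (a * eta_I_i + b * eta_J_i - d * eta_I_jstar) / c

# A degree-8 hub (five interior, three exterior neighbors) versus a
# degree-4 node (two and two), facing the same j* with two interior
# neighbors, under illustrative payoffs a=2, b=1, c=1, d=1:
hub = eta_tilde(5, 3, 2, a=2, b=1, c=1, d=1)    # (10 + 3 - 2)/1 = 11.0
small = eta_tilde(2, 2, 2, a=2, b=1, c=1, d=1)  # (4 + 2 - 2)/1 = 4.0
print(hub > small)  # True: the hub's action spreads more easily
```

With positive payoffs every term in the numerator grows with agent i's degree, which is the formal sense in which hubs have more influence.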


These cascading-degree neighborhoods arise in scale-free networks, networks in which the distribution of node degree follows a power law: a few nodes with many edges occupy one end of the degree distribution while the other end is populated with many nodes of small degree (Barabási and Albert, 1999). Scale-free networks do not necessarily have their hubs ordered by degree, but in most cases this trait is a byproduct of their evolution; hubs have more edges and are therefore likely to attach to other hubs. Examples of scale-free networks include the internet, the World Wide Web, and the network of scientific citations. In these important networks decisions tend to flow downhill: once a central hub converges on an action, that action is likely to overtake the network.

7. Conclusions

That network characteristics affect the outcome of games played on those networks is well established, but generalizing these effects across games and networks has proven to be difficult. To make progress researchers have focused on particular games, particular networks, or on specific parts of a network to identify important edges, weak links, critical nodes, and hubs. This manuscript looks at the general weave of a network's fabric to identify how this overall pattern can affect game play. Two rather simple features, neighborhood overlap and quasi-consistent boundaries, are central to the dynamics of network play and to the resulting distribution of actions.

Quasi-consistency is a sufficient condition (Proposition 2) for convergence to an absorbing state for all symmetric binary-choice games and all initial distributions of actions. Further, Proposition 3 shows that only quasi-consistent networks converge to this state for any set of payoffs, so the condition is necessary as well. This is a general result in a theoretical sense: if a network fits this condition then it converges, for all games and all payoff combinations, regardless of the initial distribution of actions; and if a network does not fit this condition then there exists some combination of payoffs and initial distribution of strategies that converges to an absorbing set. However, quasi-consistency is a restrictive condition and perfectly quasi-consistent networks are probably rare in nature. Since it is uncommon for naturally forming networks to evolve consistent or quasi-consistent boundaries, the possibility of cyclical play is likely to arise in most networks. Still, inconsistent networks can have regions in which most neighborhoods are quasi-consistent, and local dynamics there will be influenced by the propositions given above.
Perhaps the most practical use of this insight is that the degree of inconsistency indicates how likely a particular game is to converge over a relevant range of payoffs, or how extreme payoffs would need to be to avoid cyclical play. We also saw how neighborhood overlap determines the neighborhood composition of actions, and hence the presence or absence of phase transitions. These results apply to all networks. Networks with inconsistent boundaries further slice up the payoff space in Fig. 3, which in general reduces institutional inertia and is likely to reduce the severity of the phase transitions as payoffs change. Such differences can help us understand how different parts of an organization respond differently to similar incentives. Consider an organization in which one department consists of sparse neighborhoods, agents having few neighbors, while another department is densely linked. An institutional change can then easily cross a "payoff combination boundary" in the densely linked department but not in the sparsely linked one, so one part of the organization might respond to an incentive program or policy change while another may not.

The applicability of these results extends beyond binary-choice games. Consider a game in which agents choose among three actions, A, B, or C. Define the payoff to agent i as π_i = π(s_{I_i}, s_{J_B^i}, s_{J_C^i}), where s_{J_B^i} and s_{J_C^i} are the actions played by his
external neighbors. If s_i ≠ s_j ≠ s_k, π_i > π_j* and π_i > π_k* ∀ i ∈ BN_x^i, ∀ x ∈ {0, …, n}, then these games converge to an absorbing state in consistent networks. However, if payoffs are not transitive, for example π_i > π_j* and π_j > π_k* but π_{i'} < π_k* with g_{ii'} = 0, then some consistent networks can converge to an absorbing distribution of actions in which the number of players taking each action is stable but individual players cycle through their choices. For example, when rock-paper-scissors is played on a ring, agents' decisions can chase each other around the ring while the number of players of each type stays constant. Whether this generalizes to all non-transitive games is an open question.

Quasi-consistency and neighborhood overlap are fundamental network properties that affect any network activity with local interaction. They are purely network attributes; that is, quasi-consistency and overlap are completely determined by the topology of the underlying network and are independent of any activity that might be occurring on the network. This is a concrete example of institutional influence, a situation in which organizational structure affects decision making. Thus the strategic decisions of firms, interactions between members of a supply chain, policy initiatives moving through political committees, and the popularity of a new fashion among friends and family are all affected by the organization of those groups. Knowing the underlying structure, we can foresee much of the adoption process.

References

Apesteguia, J., Huck, S., Oechssler, J., 2007. Imitation—theory and experimental evidence. J. Econ. Theory 136, 217–235.
Axelrod, R., 1984. The Evolution of Cooperation. Basic Books, New York.
Banerjee, A.V., 1992. A simple model of herd behavior. Q. J. Econ. 107 (3), 797–817.
Barabási, A., Albert, R., 1999. Emergence of scaling in random networks. Science 286, 509–512.
Benaim, M., Weibull, J.W., 2003. Deterministic approximation of stochastic evolution in games. Econometrica 71 (3), 873–903.
Bramoullé, Y., Kranton, R., 2014. Local public goods in networks. J. Econ. Theory (forthcoming).
Ellison, G., 1993. Learning, local interaction, and coordination. Econometrica 61, 1047–1071.

Ellison, G., Fudenberg, D., 1993. Rules of thumb for social learning. J. Polit. Econ. 101, 612–643.
Eshel, I., Samuelson, L., Shaked, A., 1998. Altruists, egoists, and hooligans in a local interaction model. Am. Econ. Rev. 88 (1), 157–179.
Fosco, C., Mengel, F., 2011. Cooperation through imitation and exclusion in networks. J. Econ. Dyn. Control 35, 641–658.
Foster, D., Young, H.P., 1990. Stochastic evolutionary game dynamics. Theor. Popul. Biol. 38, 219–232.
Goyal, S., 2007. Connections: An Introduction to the Economics of Networks. Princeton University Press, Princeton, NJ.
Hauert, C., Michor, F., Nowak, M., Doebeli, M., 2006. Synergy and discounting of cooperation in social dilemmas. J. Theor. Biol. 239, 195–202.
Huberman, B., Glance, N., 1993. Evolutionary games and computer simulations. Proc. Natl. Acad. Sci. 90, 7716–7718.
Jackson, M., 2004. A survey of models of network formation: stability and efficiency. In: Demange, G., Wooders, M. (Eds.), Group Formation in Economics: Networks, Clubs, and Coalitions. Cambridge University Press, Cambridge, UK.
Jackson, M., 2008. Social and Economic Networks. Princeton University Press, Princeton, NJ.
Kandori, M., Mailath, G.J., Rob, R., 1993. Learning, mutation, and long run equilibria in games. Econometrica 61 (1), 29–56.
Morris, S., 2000. Contagion. Rev. Econ. Stud. 67, 57–78.
Nowak, M., May, R., 1992. Evolutionary games and spatial chaos. Nature 359, 826–829.
Nowak, M., May, R., 1993. The spatial dilemmas of evolution. Int. J. Bifurc. Chaos 3 (1), 35–78.
Nowak, M., Bonhoeffer, S., May, R.M., 1996. Robustness of cooperation—reply. Nature 379, 125–126.
Page, S.E., 1997. On incentives and updating in agent based models. Comput. Econ. 10, 67–87.
Rapoport, A., 1966. Two-person Game Theory: The Essential Ideas. University of Michigan Press, Ann Arbor, MI.
Rendell, L., Boyd, R., Cownden, D., Enquist, M., Eriksson, K., Feldman, M., Fogarty, L., Ghirlanda, S., Lillicrap, T., Laland, K., 2010. Why copy others? Insights from the Social Learning Strategies Tournament. Science 328, 208–213.
Rogers, A., 1988. Does biology constrain culture? Am. Anthropol. 90 (4), 819–831.
Schlag, K.H., 1998. Why imitate, and if so, how? J. Econ. Theory 78, 130–156.
Stark, H., 2010. Dilemmas of partial cooperation. Evolution 64 (8), 2458–2465.
Szabo, G., Fath, G., 2007. Evolutionary games on graphs. Phys. Rep. 446, 97–216.
Vega-Redondo, F., 1997. The evolution of Walrasian behavior. Econometrica 65 (2), 375–384.
Vriend, N., 2006. ACE models of endogenous interactions. In: Tesfatsion, L., Judd, K. (Eds.), Handbook of Computational Economics, vol. II. North-Holland, Amsterdam.
Watts, D., Strogatz, S.H., 1998. Collective dynamics of 'small-world' networks. Nature 393, 440–442.
Wilhite, A., 2006. Economic activity on fixed networks. In: Tesfatsion, L., Judd, K. (Eds.), Handbook of Computational Economics, vol. II. North-Holland, Amsterdam.
Young, H.P., 1998. Individual Strategy and Social Structure. Princeton University Press, Princeton, NJ.
Young, H.P., 2002. The Diffusion of Innovations in Social Networks. Johns Hopkins University, Brookings Institution, and Santa Fe Institute, unpublished.