Quantum-like dynamics of decision-making


Physica A 391 (2012) 2083–2099


Masanari Asano (a), Irina Basieva (b), Andrei Khrennikov (b,*), Masanori Ohya (a), Yoshiharu Tanaka (a)

(a) Department of Information Sciences, Tokyo University of Science, Yamasaki 2641, Noda-shi, Chiba, 278-8510, Japan
(b) International Center for Mathematical Modeling in Physics and Cognitive Sciences, Linnaeus University, S-35195, Växjö, Sweden

Article history:
Received 5 June 2011
Received in revised form 16 November 2011
Available online 27 November 2011

Keywords: Decision making; Quantum-like model; Game theory; Prisoner's dilemma; Behavioral economics; Quantum channels; Lifting; Mental state

Abstract. In cognitive psychology, experiments on games have demonstrated that real players often do not use the ''rational strategy'' provided by classical game theory and based on the notion of Nash equilibrium. This psychological phenomenon is called the disjunction effect. Recently, we proposed a model of decision making which can explain this effect (the ''irrationality'' of players), Asano et al. (2010, 2011) [23,24]. Our model is based on the mathematical formalism of quantum mechanics, because the psychological fluctuations inducing the irrationality are formally represented as quantum fluctuations, Asano et al. (2011) [55]. In this paper, we reconsider the process of quantum-like decision-making more closely and redefine it as a well-defined quantum dynamics by using the concept of the lifting channel, an important concept in quantum information theory. We also present a numerical simulation of this quantum-like mental dynamics, which is non-Markovian by its nature. Stabilization to the steady-state solution (determining subjective probabilities for decision making) is based on the collective effect of mental fluctuations collected in the working memory of the decision maker. © 2011 Elsevier B.V. All rights reserved.

1. Introduction

Game theory is a domain of applied mathematics used in various disciplines: social science, economics, political science, international relations, computer science, and philosophy. This theory analyzes the interdependency of decision-making entities (players) under certain institutional conditions. In the normal form of a game, players have strategies to choose from and obtain payoffs assigned to the results of their choices. The following three assumptions on players' decision-making are typically required:

• Players know the rules of the game; each player knows all selectable strategies and payoffs.
• A player behaves ''rationally'' so as to maximize his own payoff.
• Each ''rational'' player recognizes that the other players are ''rational''.

What is ''rational'' decision-making in game theory? To explain this, we consider an example of a game called the prisoner's dilemma (PD); see the following pay-off table, in which the notations 0A,B and 1A,B represent the strategies chosen by the players A and B.

A\B     0B      1B
0A      4/4     2/5
1A      5/2     3/3

The values in the table are the pay-offs assigned to the results (0A, 0B), (0A, 1B), (1A, 0B) and (1A, 1B). We explain player A's ''rational'' decision-making for the above game by the diagram of Fig. 1. The two solid



Correspondence to: International Center for Mathematical Modeling in Physics, Linnaeus University, S-35195, Växjö, Sweden. E-mail address: [email protected] (A. Khrennikov).

0378-4371/$ – see front matter © 2011 Elsevier B.V. All rights reserved. doi:10.1016/j.physa.2011.11.042


Fig. 1. The diagram of ‘‘rational’’ decision-making process.

Fig. 2. The decision-making process with additional effect providing violation of Nash equilibrium.

arrows in the diagram explain that A prefers the result (1A, 0B) to (0A, 0B) and prefers (1A, 1B) to (0A, 1B). The two dotted arrows explain that A reasons that B will prefer (0A, 1B) to (0A, 0B) and will prefer (1A, 1B) to (1A, 0B). These arrows make a ''flow'' reaching the consequence (1A, 1B). Such a terminal point is called a Nash equilibrium in game theory. In this game, the consequence (1A, 1B) is the normative solution that ''rational'' players A and B should achieve. In general, if a game has a unique Nash equilibrium [1,2], the selection of this equilibrium by the players is considered the rational strategy. (This is the case in the PD game: there is a unique Nash equilibrium.)

This approach to rationality is basic for economics and evolutionary biology (see e.g. [3,4]). The payoff in economics is utility (or sometimes money), and in evolutionary biology gene transmission; both are the fundamental bottom line of survival. Researchers who apply game theory in these fields claim that strategies failing to maximize these quantities will, for whatever reason, be competed out of the market or the environment, which are ascribed the ability to test all strategies. The usage of the Nash equilibrium by ''players'' is the fundamental assumption of these theories. We also remark that in the period of the cold war the notion of rationality based on the Nash equilibrium played an important role in the decision-making policy of both the USA and the Soviet Union in conflict situations [5].

However, we may wonder whether such a description of the decision-making process is enough to explain a real player's behavior in PD-type games. Actually, there are experiments on the PD game in which real players frequently do not achieve the rational solution [6–10]. These results show that in some cases a real player's decision-making process is essentially different from a ''rational'' one.
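The best-response reasoning traced by the two kinds of arrows can be checked mechanically. The sketch below enumerates mutual best replies for the table above; only the pay-offs 4/4, 2/5, 5/2, 3/3 come from the text, the helper names are ours.

```python
# Best-response check for the PD pay-off table above.
payoff = {  # (A's move, B's move) -> (A's pay-off, B's pay-off)
    (0, 0): (4, 4), (0, 1): (2, 5),
    (1, 0): (5, 2), (1, 1): (3, 3),
}

def best_response_A(b):
    """A's pay-off-maximizing reply to B's fixed move b."""
    return max((0, 1), key=lambda a: payoff[(a, b)][0])

def best_response_B(a):
    """B's pay-off-maximizing reply to A's fixed move a."""
    return max((0, 1), key=lambda b: payoff[(a, b)][1])

# A profile is a Nash equilibrium when the moves are mutual best replies.
nash = [(a, b) for a in (0, 1) for b in (0, 1)
        if best_response_A(b) == a and best_response_B(a) == b]
print(nash)  # the unique equilibrium (1A, 1B)
```

The enumeration recovers (1, 1) as the only profile of mutual best replies, in agreement with the ''flow'' of the diagram.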
Here, we consider a PD game with extreme pay-offs, as given in the table below.

A\B     0B                    1B
0A      100 000/100 000       2/100 001
1A      100 001/2             3/3

It is expected that for this game many real players will choose the strategy 0, because such a player will feel strongly that the payoff 100 000 at the result (0A, 0B) is very attractive compared with the payoff 3 at (1A, 1B). Moreover, he will think ''the other player will feel the same thing''. His expectation is shifted from (1A, 1B) to (0A, 0B). As seen in the diagram of Fig. 2, the consequence (1A, 1B) is then no longer the terminal point; that is, the Nash equilibrium is violated. His mental state will oscillate between the choice of 1A and the choice of 0A. Note that the intention to choose 1A increases the player's payoff by only 1, while the intention to choose 0A increases it by 100 000 − 3. It is clear that the intention to choose 0A is dominant. For any PD-type game, game theory concludes that the choice of 1A is ''rational'' and 0A is ''irrational''. Nevertheless, in the above case the choice of the ''irrational'' 0A will be ''natural'' for many real players. Conversely, there are cases where the choice of the ''rational'' 1A seems ''natural'' for real players; this is seen, for example, in the original prisoner's dilemma game.¹ It is very difficult to answer the question of what rationality is in the true sense. At least, in some PD games seen in the real world or in experiments, a real player cannot ignore the effect of the additional shift in his decision-making process and retains the possibility of choosing a strategy which is ''irrational'' in the sense of classical game theory.

¹ Two prisoners can choose between two strategies: 0 (cooperate) or 1 (defect). If both prisoners choose 0, they are jailed for one year. If both choose 1, they are jailed for 5 years. If one chooses 0 and the other chooses 1, the one choosing 0 is jailed for 100 years, and the other goes free.


The additional shift might seem strange in the framework of game theory. Conventionally, player A's preferences are explained in contexts such as ''if player B chose xB ∈ {0B, 1B}, then I will prefer to choose yA ∈ {0A, 1A}'', and player B's preferences, as reasoned by A, are explained in the same form. Such a context consists of two parts: a part of assumption and a part of analysis. In the first part, player A assumes one of player B's choices. In the second part, player A analyzes his preference on the basis of this assumption. Note that such an analysis is contextually the same as a posterior analysis by someone who already knows the result of B's choice. All of the shifts seen in Fig. 1 arise from such posterior analyses. On the other hand, the additional shift in Fig. 2 is never induced by a posterior analysis, since player B's choice is not fixed in this shift.

In principle, neither player is informed of the other player's choice; that is, each player feels uncertainty about the other player's choice. A prior analysis should be done in the presence of this uncertainty. The prior analysis in conventional game theory is just a mixture of the posterior analyses based on a mixed strategy, a probability distribution over the other player's choices: the ''rational'' player A will consider posterior analyses mixed with probabilities P(0B) and P(1B). In each posterior analysis, player A can estimate the probabilities of his own choices, P(iA|0B) and P(iA|1B), i = 0, 1. For the mixed strategy, player A decides his own choice with probability

P(iA) = P(iA|0B)P(0B) + P(iA|1B)P(1B).

For example, in the PD game, P(1A) always equals 1. As mentioned above, the effect of the additional shift in Fig. 2 is not induced by posterior analyses; therefore, if the additional shift affects the decision-making, the law of total probability will be violated:

P(iA) ≠ P(iA|0B)P(0B) + P(iA|1B)P(1B).

Actually, such a non-Kolmogorovian property can be seen in the statistical data obtained in some experiments, see Refs. [6–10]. The additional effect comes from a ''deeper'' uncertainty, in which the player's view of the other player's actions always fluctuates and is never fixed. Such psychological fluctuations cannot be represented in terms of classical probability theory. Quantum probability is one of the most advanced mathematical models of non-classical probability, and several authors have pointed to the possibility of applying the mathematical formalism of quantum mechanics to cognitive psychology, see Refs. [11–22]. We also maintain that the mathematical formalism of quantum mechanics is useful for representing the psychological uncertainty that a real player feels. In order to represent this psychological uncertainty, we use a state with quantum fluctuations; this is the mathematical representation of the ''mental state'' in our model, see Section 3. We assume that the procedure of decision-making is described as a dynamics of the mental state.

Recently we proposed [23] a model of decision making based on a simple system of differential equations for the equilibrium mental state. In Ref. [24], we embedded the equilibrium equation of Ref. [23] into the standard quantum formalism, the Gorini–Kossakowski–Sudarshan–Lindblad (GKSL) equation, which describes an open quantum system dynamics providing the equilibrium quantum state. In this paper, we reconstruct the process of quantum-like decision-making in a simpler and more general form of quantum dynamics. As discussed in Section 4, the dynamics is represented as a sequence of lifting channels [25], each of which is specified by a time evolution operator e^{−iHτ}.
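The violation of the law of total probability described above can be illustrated with a toy calculation. The sketch below uses made-up numbers and the generic quantum-probability interference term 2 cos(φ)√(...), not the specific model developed later in this paper:

```python
import math

# Classical mixture over the posterior analyses (illustrative numbers):
#   P(1A) = P(1A|0B) P(0B) + P(1A|1B) P(1B).
p_0B, p_1B = 0.5, 0.5
p_1A_given_0B = 0.9
p_1A_given_1B = 0.9

p_classical = p_1A_given_0B * p_0B + p_1A_given_1B * p_1B

# A quantum-like model adds an interference term with a phase phi:
#   P(1A) = classical mixture + 2 cos(phi) sqrt(P(1A|0B)P(0B) P(1A|1B)P(1B)).
phi = 2.5  # illustrative phase; cos(phi) = 0 recovers the classical formula
interference = 2 * math.cos(phi) * math.sqrt(
    p_1A_given_0B * p_0B * p_1A_given_1B * p_1B)
p_quantum = p_classical + interference

print(round(p_classical, 3), round(p_quantum, 3))
```

Even though both conditional probabilities equal 0.9, a nonzero interference phase makes P(1A) deviate from the classical mixture, which is the kind of non-Kolmogorovian statistics reported in the cited experiments.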
The Hamiltonian H discussed in Section 5 is very important in our model, because the construction of H summarizes all of the player's thinking about the game, of course including the additional effects discussed above. In Section 6, a continuous limit of the dynamics is discussed; in this limit, the differential equations for the open system dynamics are derived. In Section 7, we present a numerical analysis of the PD game.

One may speculate that it is not necessary to use such mathematical models to explain why real players sometimes behave irrationally: the punishment with the smallest pay-off is not so painful, so the player has no fear of being punished and can act irrationally to experiment with the other player. However, this is just a qualitative explanation of irrationality (and it may be incomplete, since in concrete situations other mental parameters may play an important role). Our model provides the possibility of a quantitative description, which is needed for a goal of game theory: the analysis of various interdependencies of players in the real world.

We emphasize that our model has no direct relation to models of the ''quantum physical brain'', e.g., Penrose [26,27] and Hameroff [28,29]. We consider the brain formally as a ''processor of information'' using the rules described by quantum information theory; we do not try to reduce such information processing to quantum physical processes in the brain. One may wonder about the possibility of creating the quantum-like representation in the ''classical brain''. Although this (exciting) problem does not belong directly to the subject of this paper,² we shall discuss it briefly in the Appendix, see also Ref. [30].

2. Quasi-magical and quantum-like thinking

The disjunction effect has been a hot topic in cognitive science, psychology, and behavioral economics for many years. Many authors have contributed to this problem and proposed various models trying to explain the statistical data.
The main paradoxical issue in the experiments is the ''irrationally high level'' of cooperation. This issue is by itself an important topic of research. It was studied in great detail in relation to another game, the so-called public good game [31,32]:

² Our quantum-like model can be considered as a formal mathematical model of decision making, without any coupling to real neurophysiological activity in the brain.


In the public good game, each of n players is given 20 tokens that can be redeemed for cash. Player j chooses a discrete number of tokens 0 ≤ sj ≤ 20 to contribute to a common pool. Total contributions to the pool are then multiplied by some factor mn > 1 and distributed equally back to all players. Whatever the other players do, a player is always better off contributing nothing. The principle of dominance states that under such circumstances ''rational'' players should contribute nothing. In experimental situations, however, players consistently contribute around half their available funds to the common pool [31].

We briefly discuss the most important attempts to explain such statistical data. We are not able to go into detail here, and the reader can find an excellent analysis of the variety of possible approaches in Masel's paper [32]. The three basic explanations of the ''irrationally high level'' of cooperation are learning, reciprocity and altruism.

2.1. Learning, reciprocity, altruism

In this section we follow [32]: ''One possible explanation for observed cooperation is that players do not immediately understand the incentives of an artificial game and need repetition in order to learn. In agreement with the learning hypothesis, contributions fall when the game is repeated and players observe the choices made by others in previous rounds [33,34,31]. Nevertheless, by the final round of a repeated game, some contributions are still made.'' In any event, additional analysis of the possible impact of learning on the disjunction effect, see Refs. [35–40], clearly demonstrated that learning could not explain the statistical data on the disjunction effect. We also remark that the learning argument cannot be applied to single-game experiments (in particular, to the one-shot PD). The ''disjunction-data'' could also be explained by strategic play involving some form of reciprocity. According to this hypothesis, it is not common knowledge that all players are playing rationally.
In this case, free-riding may no longer be the best option, as it may encourage other players to free-ride in subsequent rounds [41]. However, subsequent experimental studies demonstrated that reciprocity cannot explain the ''disjunction-data'' [35,42,43,37,44]. Another explanation is that some players are genuinely altruistic rather than selfish. Altruism [45] or warm-glow effects [46] exist when a player's true utility function is not simply equal to the player's personal monetary reward. However, as is clear from the analysis presented in Ref. [32], altruistic behavior could not completely explain the statistical data.

2.2. Quasi-magical thinking

The terms ''magical thinking'' and the ''illusion of control'' refer to erroneous beliefs that people can influence outcomes through their actions. The term ''quasi-magical thinking'' describes ''cases in which people act as if they erroneously believe that their action influences the outcome, even though they do not really hold that belief'', see Shafir and Tversky [6, p. 463]. Quasi-magical thinking can explain the ''disjunction-data'': a player acts as though she believes that her cooperation makes her partner more likely to cooperate, even though she knows that this cannot be true. The quasi-magical thinking approach to the disjunction effect was formalized by Masel [32], who studied the public good game. In that paper the framework of conditional expected utility (CEU) was used, see Ref. [32, p. 217]: ''In conventional economic theory, rational players maximize their expected utility function EU as a function of their strategy choice s. Expected utility may also depend on a set of perceived probabilities p(θ) that the mean level of contribution in the population is equal to θ. In other words, they maximize

EU(s) = ∫ p(θ) U(s, θ) dθ.   (1)

An alternative idea is to maximize conditional expected utility (CEU) [47]. In this case, perceived probabilities p(θ|s) vary according to the strategy chosen, whether or not there exists a causal link. Players maximize

CEU(s) = ∫ p(θ|s) U(s, θ) dθ.   (2)

Players who maximize CEU exhibit quasi-magical thinking, since they behave as if all conditional probabilities had a causal basis, even though they sometimes know that none exists. If a player assumes that other players may be like her, then she will assume a positive correlation between her own contribution s and the mean contribution θ of the other players. A player maximizing CEU will contribute to the public good if she believes this correlation is sufficiently large.'' This model invoked a non-causal perceived correlation in order to explain cooperation, but the extent of the perceived correlation was arbitrary [48]. In Masel's paper [32] the mentioned CEU-model was extended by describing a natural way to calculate conditional probabilities in the public good game. Our quantum-like model describing the disjunction effect has some commonality with the aforementioned CEU-models and quasi-magical thinking in general. As is well known, in quantum theory it is not probabilities but complex probability amplitudes (wave functions, quantum states) that play the fundamental role. Therefore we operate not with conditional probabilities p(θ|s), but with quantum-like states describing entanglement between Alice's possible decisions and her perception of Bob's possible states, see (5)–(7). The conditional probabilities p(θ|s) in the CEU model [32] correspond to the


complex amplitudes α0 and α1 (β0 and β1) in our prediction state, see again (5)–(7). However, our quantum-like model differs crucially from the CEU-model. The main difference from the CEU-approach is that, for a fixed value of Alice's parameter s (we consider PD-type games, so s = 0, 1), she perceives Bob's state as a superposition of all his possible states. Moreover, she also perceives her own state as a superposition of all her possible states. The combination of these perceptions is represented as a general entangled state. In other words, the CEU model explains the disjunction effect as coming from ''quasi-magical thinking'', that is, from the difference between p(θ|s = 0) and p(θ|s = 1); the disjunction effect does not appear if p(θ|s = 0) = p(θ|s = 1). In our quantum-like model, on the other hand, the disjunction effect appears even if α0 = α1 (β0 = β1) in the prediction state, i.e., even if the probabilities (which are squared amplitudes) coincide. Here an essential cause of the disjunction effect is the interference effect, which has no counterpart in classical probability theory. This is the crucial difference between our model and the classical CEU model [32]. An essential cause of the disjunction effect is the ''deep uncertainty'' producing the effect of interference.

3. Pay-off table of two-player game

Our model is designed to describe a player's decision-making process in a two-player game with two strategies. Two players, ''Alice'' and ''Bob'', can each choose between two strategies denoted by ''0'' and ''1''.

A\B     0B      1B
0A      a\a     b\c
1A      c\b     d\d

The above table is called the ''pay-off table''; here a, b, c and d denote the pay-offs assigned to the four possible consequences of the game. The relations c > a > d > b define a game of the prisoner's dilemma (PD) type. For Alice, her pay-off is a or c if Bob chooses ''0'', and b or d if Bob chooses ''1''. Conventional game theory explains that, in the case of the PD game, Alice, who wants to maximize her own payoff, will choose ''1'' because c > a and d > b. In this sense the choice of ''1'' is ''rational''. However, the above discussion does not completely explain the process of decision-making in a real player's mind. Actually, as seen in the statistical data of some experiments, real players frequently behave ''irrationally''. Our model is an attempt to explain such real players' behavior in a ''quantum-like model'' derived from basic concepts of quantum mechanics.

4. Prediction state and mental state

Our model describes the state of Alice's mentality when she makes a decision in a two-player game, and uses the mathematical formalism of quantum mechanics. First, we have to consider a basic rule of the game: Alice is not informed of which action Bob chose. We assume that in this situation Alice holds uncertainty about Bob's action. In the sense of classical game theory, this uncertainty would be explained as ''Alice predicts that Bob will choose 0 with a certain probability P and choose 1 with probability 1 − P''. Note that this does not mean ''Alice judges Bob's action to be 0B or 1B with probability P or 1 − P'': Alice essentially cannot judge Bob's action. In our model, this situation is represented by the following state vector:

|φB⟩ = α|0B⟩ + β|1B⟩ ∈ B = C^2,   |α|^2 + |β|^2 = 1.   (3)

The orthogonal basis vectors |0B⟩ and |1B⟩ represent two states of mentality judging Bob's action. We call such a state a ''mind'' of judging and call the coefficients α and β the ''weights'' assigned to the minds |0B⟩ and |1B⟩; |α|^2 and |β|^2 correspond to the probabilities P and 1 − P. The opposing minds coexist in the form of a quantum superposition. We call the state σB ≡ |φB⟩⟨φB| the ''prediction state''. Next, we introduce another basis, denoted by |0A⟩ and |1A⟩, defined on another Hilbert space A = C^2. Each of these represents Alice's mind of choosing her strategy, 0A or 1A. With this basis and the prediction state vector of Eq. (3), we define the following state vectors on A ⊗ B = C^2 ⊗ C^2.

|Φ0A⟩ = |0A⟩ ⊗ |φB⟩ = α|0A 0B⟩ + β|0A 1B⟩,
|Φ1A⟩ = |1A⟩ ⊗ |φB⟩ = α|1A 0B⟩ + β|1A 1B⟩.   (4)

We call them ''alternative state vectors''. For Alice, who determines a strategy xA, the possible consequences of the game are xA0B and xA1B. Alice's estimation of the possibilities of xA0B and xA1B is represented in the form of the alternative state vector |ΦxA⟩. In general, Alice may change her predictions depending on her own choices. In this case, which is not discussed in conventional game theory, the alternative state vectors are written as

|Φ0A⟩ = |0A⟩ ⊗ |φB^0⟩,
|Φ1A⟩ = |1A⟩ ⊗ |φB^1⟩,   (5)

with different prediction state vectors,

|φB^0⟩ = α0|0B⟩ + β0|1B⟩,
|φB^1⟩ = α1|0B⟩ + β1|1B⟩,   |αi|^2 + |βi|^2 = 1.   (6)
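A minimal numerical sketch of the constructions (4)–(6), with illustrative amplitudes (the helper kron2 and the chosen numbers are ours, not the paper's):

```python
import math

def kron2(u, v):
    """Kronecker product of two 2-dimensional vectors, basis order
    |0A 0B>, |0A 1B>, |1A 0B>, |1A 1B>."""
    return [u[0] * v[0], u[0] * v[1], u[1] * v[0], u[1] * v[1]]

# prediction state vectors |phi_B^0>, |phi_B^1> of Eq. (6)
alpha0, beta0 = 1 / math.sqrt(2), 1 / math.sqrt(2)
alpha1, beta1 = math.sqrt(0.3), math.sqrt(0.7)

ket_0A, ket_1A = [1.0, 0.0], [0.0, 1.0]
Phi0 = kron2(ket_0A, [alpha0, beta0])   # |Phi_0A> = |0A> (x) |phi_B^0>
Phi1 = kron2(ket_1A, [alpha1, beta1])   # |Phi_1A> = |1A> (x) |phi_B^1>

def norm(v):
    return sum(abs(c) ** 2 for c in v)

print(norm(Phi0), norm(Phi1))
```

Both alternative state vectors are normalized, and they are orthogonal because their A-components |0A⟩ and |1A⟩ are orthogonal, whatever prediction states are attached.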


Using the set of alternative state vectors, we define the state

Θ = Σ_{i,j=0,1} xij |ΦiA⟩⟨ΦjA|,   (7)

where the coefficients {xij} are chosen so that Θ is positive and tr(Θ) = 1. We call Θ the ''mental state''. The probabilities of choosing 0A and 1A are defined by

P0A = tr(Θ |Φ0A⟩⟨Φ0A|),   P1A = tr(Θ |Φ1A⟩⟨Φ1A|).
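Since ⟨Φ0A|Φ1A⟩ = 0, in the basis {|Φ0A⟩, |Φ1A⟩} the mental state reduces to an ordinary 2 × 2 density matrix whose diagonal gives the choice probabilities. A sketch with illustrative coefficients:

```python
# The mental state of Eq. (7) in the orthonormal basis {|Phi_0A>, |Phi_1A>};
# all coefficients below are illustrative.
x00, x11 = 0.4, 0.6
x01 = 0.2 + 0.1j            # off-diagonal coefficients carry interference
x10 = x01.conjugate()       # Hermiticity of Theta
theta = [[x00, x01], [x10, x11]]

trace = theta[0][0] + theta[1][1]          # must equal 1
P0A = theta[0][0].real                     # P_0A = tr(Theta |Phi_0A><Phi_0A|)
P1A = theta[1][1].real                     # P_1A = tr(Theta |Phi_1A><Phi_1A|)
det = (x00 * x11 - x01 * x10).real         # >= 0 for a positive 2x2 matrix

print(P0A, P1A, trace, det)
```

The trace condition fixes P0A + P1A = 1, while the determinant condition bounds how large the off-diagonal ''interference'' coefficients can be.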

5. Dynamics of decision-making

In our model, the process of decision-making is the dynamics changing the mental state Θ.

5.1. Initial mental state

Before the decision-making, Alice's mental state at time t = 0, Θ(t = 0) = Θ0, is given as

Θ0 = |Φ(0)⟩⟨Φ(0)|,   (8)

where

|Φ(0)⟩ = x0|Φ0A⟩ + x1|Φ1A⟩.

Here x0 and x1 are complex numbers with |x0|^2 + |x1|^2 = 1. The values of x0 and x1 are not so important, because Alice has not yet started her thinking for decision-making; in general, they are determined randomly. To simplify the discussion, we assume x0 = 1, that is, Θ0 = |Φ0A⟩⟨Φ0A|.

5.2. Shift of mind

What is the source generating the dynamics? For this problem, we assume that Alice holds two ''thinkings'' about her own choices. One is ''the choice of 1 is better than the choice of 0'', and the other is ''the choice of 0 is better than the choice of 1''. We denote them by T(0 → 1) and T(1 → 0). These two thinkings oppose each other. If Alice has only one of them, she will choose 0A or 1A with probability 1. However, T(0 → 1) and T(1 → 0) generally coexist in Alice's mentality while she oscillates over her decision. Through T(0 → 1) (respectively T(1 → 0)), the mind of choosing 0 (1) is decreased and the mind of choosing 1 (0) is developed. Such a ''shift of mind'' generates the dynamics of decision-making. The mathematical description of the shift is discussed in the next subsection. We assume:

1. The shift occurs in a certain time interval τ.
2. The shift is repeated, and the mental state becomes stable.
3. All of the repeated shifts are memorized by Alice herself.

5.3. Memory space and branching map

We represent the shift of mind by T(0 → 1) by the branching diagram of Fig. 3, which shows that the mind of choosing 0A is partially transferred to the mind of choosing 1A (the solid path) and partially remains (the dotted path). By the third assumption of the previous subsection, Alice memorizes these two paths. We introduce a new space M = C^2, called the ''memory space'', and describe the above branching as a vector on A ⊗ B ⊗ M:

|Φ0→1⟩ = √(1 − Δτ) |Φ0A⟩ ⊗ |R⟩ + √(Δτ) |Φ1A⟩ ⊗ |T⟩ = √(1 − Δτ) |Φ0A R⟩ + √(Δτ) |Φ1A T⟩.

Here {|R⟩, |T⟩} is a CONS of M, and |R⟩ and |T⟩ represent the dotted path and the solid path, respectively. The value of Δτ (0 ≤ Δτ ≤ 1) is the degree of ''weight'' assigned to the choice of 1; √(Δτ) denotes a complex number satisfying |√(Δτ)|^2 = Δτ. The


Fig. 3. The branching by T (0 → 1).

value of √(Δτ) specifies the shift by T(0 → 1) quantitatively. In a similar way, the shift by T(1 → 0) is represented by the vector

|Φ1→0⟩ = √(1 − Δ̃τ) |Φ1A⟩ ⊗ |R⟩ + √(Δ̃τ) |Φ0A⟩ ⊗ |T⟩ = √(1 − Δ̃τ) |Φ1A R⟩ + √(Δ̃τ) |Φ0A T⟩.

The index τ in the parameters Δτ and Δ̃τ is the time interval of the shift. Here, we introduce the map Zτ : A ⊗ B → A ⊗ B ⊗ M defined by

Zτ |Φ0A⟩ = |Φ0→1⟩,   Zτ |Φ1A⟩ = |Φ1→0⟩,   (9)

and call it the ''branching map''. The branching map Zτ is decomposed as Vτ ∘ M. The map M : A ⊗ B → A ⊗ B ⊗ M has the role of generating ''memory'',

M |Φ0A⟩ = |Φ0A R⟩,   M |Φ1A⟩ = |Φ1A R⟩,   (10)

and Vτ : A ⊗ B ⊗ M → A ⊗ B ⊗ M is a unitary operator generating the shift,

Vτ |Φ0A R⟩ = |Φ0→1⟩,   Vτ |Φ1A R⟩ = |Φ1→0⟩.   (11)

Further, we assume that the unitary Vτ has the form of a time evolution operator,

Vτ = e^{−iλHτ}.   (12)

The operator H is a Hamiltonian of the interaction between the mental state and the memory, and it represents the source of Alice's thinkings. In Section 6, we discuss the construction of H in detail. The parameter λ in e^{−iλHτ} determines the strength of the interaction.

5.4. Dynamics represented by lifting map

By the branching map Zτ, the initial state Θ(0) = Θ0 is mapped into a state on A ⊗ B ⊗ M:

Θ0 → Zτ Θ0 Zτ∗ . The new mental state at t = τ , Θ (τ ) = Θ1 , is given by

Θ1 = trM (Zτ Θ0 Zτ∗ ). As mentioned in the third assumption in the Section 4.2, the shift is repeated, that is, the operation by Zτ is repeated. In the next time interval τ , the following state is obtained;

(Zτ ⊗ I )Zτ Θ0 Zτ∗ (Zτ∗ ⊗ I ) ∈ S (A ⊗ B ⊗ M ⊗ M).    When the initial mental state is given by Θ0 = Φ0A Φ0A , (Zτ ⊗ I )Zτ Θ0 Zτ∗ (Zτ∗ ⊗ I ) = |Φ (2τ )⟩ ⟨Φ (2τ )| where

           ˜ Φ0A ⊗ |TT ⟩ + ∆(1 − ∆ ˜ ) Φ1A ⊗ |TR⟩ + ∆(1 − ∆) Φ1A ⊗ |RT ⟩ . |Φ (2τ )⟩ = (1 − ∆) Φ0A ⊗ |RR⟩ + ∆∆ In the above form, |RR⟩, |TT ⟩, |TR⟩ and |RT ⟩ on M ⊗ M represent the all paths in the branching of Fig. 4.
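The branching over two time steps can be reproduced by bookkeeping the memory strings directly. In the sketch below the shift weights Δ and Δ̃ are illustrative, and each application of the map appends one memory letter:

```python
import math

Delta, Delta_t = 0.3, 0.2   # Delta_tau and tilde-Delta_tau (illustrative)

def branch(state):
    """One application of the branching map Z_tau. A state is a dict
    memory-string -> (mind, amplitude), mind 0 for |Phi_0A>, 1 for |Phi_1A>;
    the new memory letter (R = remain, T = transit) is appended."""
    out = {}
    for mem, (mind, amp) in state.items():
        if mind == 0:    # T(0 -> 1)
            out[mem + "R"] = (0, amp * math.sqrt(1 - Delta))
            out[mem + "T"] = (1, amp * math.sqrt(Delta))
        else:            # T(1 -> 0)
            out[mem + "R"] = (1, amp * math.sqrt(1 - Delta_t))
            out[mem + "T"] = (0, amp * math.sqrt(Delta_t))
    return out

state = branch(branch({"": (0, 1.0)}))   # two shifts from |Phi_0A>
print({mem: round(amp, 4) for mem, (_, amp) in state.items()})
```

The four paths RR, TT, TR, RT reproduce the amplitudes 1 − Δ, √(ΔΔ̃), √(Δ(1 − Δ̃)) and √(Δ(1 − Δ)) of the formula above, and the squared amplitudes sum to 1.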


Fig. 4. The branching at t = 2τ .

In general, the state Θ(nτ) = Θn, obtained after n shifts, is represented as

Θn = tr_{Mn ⊗ ··· ⊗ M1}( E*n( E*n−1( ··· E*2( E*1(Θ0)) ··· ))).   (13)

Here Ej is the lifting map [25] from B(A ⊗ B ⊗ Mj−1 ⊗ ··· ⊗ M2 ⊗ M1) to B(A ⊗ B ⊗ Mj ⊗ Mj−1 ⊗ ··· ⊗ M2 ⊗ M1) defined by

E*j(·) = (Z ⊗ I ⊗ ··· ⊗ I) · (Z* ⊗ I ⊗ ··· ⊗ I).   (14)

5.5. Stabilization of mental state

From the relation

Θn = tr_{Mn}(Z Θn−1 Z*),

we obtain

Θn = Q Θn−1 Q* + Q′ Θn−1 Q′*,   (15)

where

Q = √(1 − Δτ) |Φ0A⟩⟨Φ0A| + √(1 − Δ̃τ) |Φ1A⟩⟨Φ1A|,
Q′ = √(Δτ) |Φ1A⟩⟨Φ0A| + √(Δ̃τ) |Φ0A⟩⟨Φ1A|.   (16)

Noting that the mental state Θ is expanded only in the basis |Φ0A⟩ and |Φ1A⟩ (see the definition in Eq. (7)), we replace the mental state Θn = Σ_{i,j=0,1} xij(n) |ΦiA⟩⟨ΦjA| with the state on C^2

ρn = Σ_{i,j=0,1} xij(n) |i⟩⟨j|,   (17)

where {|0⟩, |1⟩} is a CONS in C^2. Then, Eq. (15) is rewritten as

ρn = Γ ρn−1 Γ* + Γ′ ρn−1 Γ′* ≡ Λ*τ(ρn−1),   (18)

where

Γ  = ( √(1 − Δτ)        0
           0        √(1 − Δ̃τ) ),

Γ′ = (     0        √(Δ̃τ)
        √(Δτ)          0    ).

Note that Γ*Γ + Γ′*Γ′ = I is satisfied, which means that the channel Λ*τ defined in Eq. (18) has a Kraus representation and is therefore linear, trace-preserving and completely positive. From Eq. (18), the components of

ρn = ( x00(n)   x01(n)
       x10(n)   x11(n) )


satisfy

x00(n) = (1 − Δτ) x00(n − 1) + Δ̃τ x11(n − 1),
x01(n) = √((1 − Δτ)(1 − Δ̃τ)) x01(n − 1) + (√(Δτ Δ̃τ))* x10(n − 1),
x10(n) = √((1 − Δτ)(1 − Δ̃τ)) x10(n − 1) + √(Δτ Δ̃τ) x01(n − 1),
x11(n) = (1 − Δ̃τ) x11(n − 1) + Δτ x00(n − 1).   (19)
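The channel (18) and the recursion (19) are easy to verify numerically. The sketch below (pure Python, with illustrative weights) checks the Kraus condition Γ*Γ + Γ′*Γ′ = I and applies one step to Θ0 = |Φ0A⟩⟨Φ0A|:

```python
import math

Delta, Delta_t = 0.3, 0.2   # illustrative shift weights

G  = [[math.sqrt(1 - Delta), 0.0], [0.0, math.sqrt(1 - Delta_t)]]  # Gamma
Gp = [[0.0, math.sqrt(Delta_t)], [math.sqrt(Delta), 0.0]]          # Gamma'

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dag(A):  # conjugate transpose (entries here are real)
    return [[A[j][i] for j in range(2)] for i in range(2)]

def channel(rho):
    """rho -> Gamma rho Gamma* + Gamma' rho Gamma'*   (Eq. (18))."""
    a, b = mul(mul(G, rho), dag(G)), mul(mul(Gp, rho), dag(Gp))
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

# Kraus / trace-preservation condition: Gamma*Gamma + Gamma'*Gamma' = I
K, Kp = mul(dag(G), G), mul(dag(Gp), Gp)
identity_check = [[K[i][j] + Kp[i][j] for j in range(2)] for i in range(2)]

rho1 = channel([[1.0, 0.0], [0.0, 0.0]])   # one step from |Phi_0A><Phi_0A|
print(identity_check, rho1)
```

One application reproduces the first step of (19): x00(1) = 1 − Δτ and x11(1) = Δτ when x00(0) = 1.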

The diagonal parts x00 and x11 corresponds to the probabilities of choices P0A and P1A . From the above equations, one can obtain the results,

P0A(n) = (1 − ∆τ − ∆̃τ)^n P0A(0) + (1 − (1 − ∆τ − ∆̃τ)^n) ∆̃τ/(∆τ + ∆̃τ)
       = (1 − ξ)^n P0A(0) + (1 − (1 − ξ)^n) P0A^S,

P1A(n) = (1 − ∆τ − ∆̃τ)^n P1A(0) + (1 − (1 − ∆τ − ∆̃τ)^n) ∆τ/(∆τ + ∆̃τ)
       = (1 − ξ)^n P1A(0) + (1 − (1 − ξ)^n) P1A^S.   (20)

Here, ∆τ + ∆̃τ = ξ, ∆̃τ/(∆τ + ∆̃τ) = P0A^S and ∆τ/(∆τ + ∆̃τ) = P1A^S. Noting 0 ≤ ξ ≤ 2, we can find that the probabilities P0A(n) and P1A(n) approach the stable values P0A^S and P1A^S for large n. Also, note that the non-diagonal parts x01(n) and x10(n) approach zero. Thus, the process of decision-making in our model stabilizes the mental state to

ρE = [ P0A^S     0    ]
     [   0     P1A^S  ].   (21)
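The stabilization described by Eqs. (18)–(21) is easy to check numerically. The sketch below is our illustration, not the authors' code; the values of ∆τ and ∆̃τ are assumed for the demonstration.

```python
import numpy as np

# Illustrative values of Delta_tau and tilde-Delta_tau (assumed, not from the paper)
Delta, Delta_t = 0.3, 0.1

# Kraus operators Gamma, Gamma' of Eq. (18)
Gamma = np.array([[np.sqrt(1 - Delta), 0.0],
                  [0.0, np.sqrt(1 - Delta_t)]])
Gamma_p = np.array([[0.0, np.sqrt(Delta_t)],
                    [np.sqrt(Delta), 0.0]])

# Trace preservation: Gamma* Gamma + Gamma'* Gamma' = I
assert np.allclose(Gamma.conj().T @ Gamma + Gamma_p.conj().T @ Gamma_p, np.eye(2))

rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)   # initial mental state
for _ in range(500):                                       # repeated application of Eq. (18)
    rho = Gamma @ rho @ Gamma.conj().T + Gamma_p @ rho @ Gamma_p.conj().T

# Stable values of Eq. (20): P0S = Delta~/(Delta + Delta~), P1S = Delta/(Delta + Delta~)
P0S, P1S = Delta_t / (Delta + Delta_t), Delta / (Delta + Delta_t)
print(np.round(rho.real, 4))   # ~ diag(P0S, P1S) = diag(0.25, 0.75), Eq. (21)
```

The off-diagonal elements decay geometrically under the iteration, so the state converges to the diagonal equilibrium ρE of Eq. (21).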

6. Construction of Hamiltonian H

In Eq. (11), we introduced the operator Vτ that plays the main role in the branching operator Zτ, and in Eq. (12), we defined Vτ as the time evolution operator e^{−iλHτ}. In this section, we discuss a way of constructing H.

Alice determines a strategy by comparing all possible consequences, {xA xB} = {0A0B, 0A1B, 1A0B, 1A1B}. We see the following matter in a comparison between 1A xB and 0A x′B: how much does Alice prefer the consequence 1A xB to 0A x′B? (Or how much does Alice prefer 0A x′B to 1A xB?) To specify such a degree of ‘‘preference’’, we introduce a positive quantity denoted by µ_{xB x′B} (or µ̃_{xB x′B}). Generally, these values are determined by the pay-offs of the game. (In Section 7, we show examples given in the PD game.) In a comparison with µ_{xB x′B} > µ̃_{xB x′B} (µ_{xB x′B} < µ̃_{xB x′B}), Alice is encouraged to choose 1A (0A).

There are four kinds of comparisons, which are specified by the sets of parameters (µ00, µ̃00), (µ01, µ̃01), (µ10, µ̃10) and (µ11, µ̃11). These comparisons should be totally reflected in Alice's decision-making. In order to describe this, we introduce the operator G ∈ B(A ⊗ B) defined by

G = Σ_{xB,x′B=0,1} ( µ_{xB x′B} |1A xB⟩⟨0A x′B| + µ̃_{xB x′B} |0A x′B⟩⟨1A xB| ),

which is represented as the matrix

G = [  0      0     µ̃00   µ̃01 ]
    [  0      0     µ̃10   µ̃11 ]
    [ µ00    µ10     0      0  ]
    [ µ01    µ11     0      0  ]   (22)

in the basis {|0A0B⟩, |0A1B⟩, |1A0B⟩, |1A1B⟩}. We call this the ‘‘comparison operator’’. Here we consider the values given by

µ = ⟨Φ1A| G |Φ0A⟩,
µ̃ = ⟨Φ0A| G |Φ1A⟩.

Recall the form of the alternative state vector,

|ΦxA⟩ = αxA |xA 0B⟩ + βxA |xA 1B⟩,

which represents Alice's estimation of the possibilities of the consequences xA0B and xA1B. The µ and µ̃ are rewritten as

µ = α0α1* µ00 + β0α1* µ10 + α0β1* µ01 + β0β1* µ11,
µ̃ = α0*α1 µ̃00 + β0*α1 µ̃10 + α0*β1 µ̃01 + β0*β1 µ̃11.   (23)

Alice's preference for choosing 1A is represented by this µ, in which the four kinds of comparisons totally encourage choosing 1A. Similarly, µ̃ specifies the preference for choosing 0A.
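As a concrete illustration (all numerical values below are our assumptions, not taken from the paper), one can build the matrix of Eq. (22) and confirm that ⟨Φ1A|G|Φ0A⟩ and ⟨Φ0A|G|Φ1A⟩ reproduce the expansions of Eq. (23):

```python
import numpy as np

# Illustrative comparison parameters mu_{xB x'B} and mu~_{xB x'B}
m00, m01, m10, m11 = 1.0, 0.4, 0.7, 2.0
t00, t01, t10, t11 = 0.3, 0.9, 1.5, 0.2

# Matrix of Eq. (22) in the basis {|0A0B>, |0A1B>, |1A0B>, |1A1B>}
G = np.array([[0,   0,   t00, t01],
              [0,   0,   t10, t11],
              [m00, m10, 0,   0  ],
              [m01, m11, 0,   0  ]], dtype=float)

a0, b0 = 0.6, 0.8                 # |Phi_0A> = a0|0A0B> + b0|0A1B> (real weights)
a1, b1 = 0.28, 0.96               # |Phi_1A> = a1|1A0B> + b1|1A1B>
Phi0 = np.array([a0, b0, 0, 0])
Phi1 = np.array([0, 0, a1, b1])

mu = Phi1 @ G @ Phi0              # preference for choosing 1_A
mu_t = Phi0 @ G @ Phi1            # preference for choosing 0_A

# Agreement with the expansions of Eq. (23)
assert np.isclose(mu, a0*a1*m00 + b0*a1*m10 + a0*b1*m01 + b0*b1*m11)
assert np.isclose(mu_t, a0*a1*t00 + b0*a1*t10 + a0*b1*t01 + b0*b1*t11)
```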


By using the ‘‘preferences’’ µ and µ̃, the Hamiltonian H is constructed;

H = µ |Φ1A T⟩⟨Φ0A R| + µ̃ |Φ0A T⟩⟨Φ1A R| + h.c.

From Eq. (23), this H is rewritten as

H = |Φ1A T⟩⟨Φ1A T| K |Φ0A R⟩⟨Φ0A R| + |Φ0A T⟩⟨Φ0A T| K |Φ1A R⟩⟨Φ1A R| + h.c.,   (24)

where K = G ⊗ |T⟩⟨R|. One can check that the vectors

|Ψ±⟩ = (1/√2) |Φ0A R⟩ ± (e^{iθ}/√2) |Φ1A T⟩   (25)

are eigenvectors of H for the eigenvalues ±|µ| = ±ω (e^{iθ} is equal to µ/|µ|), and the vectors

|Ψ′±⟩ = (1/√2) |Φ1A R⟩ ± (e^{iθ̃}/√2) |Φ0A T⟩   (26)

are eigenvectors for the eigenvalues ±|µ̃| = ±ω̃ (e^{iθ̃} = µ̃/|µ̃|). Further, one can check that the time evolution operator Vτ = e^{−iλHτ} transforms |Φ0A R⟩ and |Φ1A R⟩ as

Vτ Φ0A R = cos(ωλτ ) Φ0A R − ieiθ sin(ωλτ ) Φ1A T ,













˜ Vτ Φ1A R = cos(ωλτ ˜ ) Φ1A R − ieiθ sin(ωλτ ˜ ) Φ0A T .













    Here, we used the relations Φ0A R = √1 |Ψ+ ⟩ + √1 |Ψ− ⟩ and Φ1A R = 2 2      √   Vτ Φ0A R = 1 − ∆τ Φ0A R + ∆τ Φ1A T ,         ˜ τ Φ0A R + ∆ ˜ τ Φ1A T , Vτ Φ1A , R = 1 − ∆  √ ˜ τ | as we give | ∆τ | and | ∆     √  ∆ ˜ τ  = sin(ωλτ | ∆τ | = sin(ωλτ ), ˜ ).  

(27) √1 2

 ′ Ψ + +

√1 2

 ′ Ψ . In the definitions, −

(28)
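The eigen-relations (25)–(26) and the action (27) can be verified numerically. In the sketch below, H of Eq. (24) is written as a 4×4 matrix in the ordered basis {|Φ0A R⟩, |Φ1A T⟩, |Φ1A R⟩, |Φ0A T⟩}; this ordering and the values of µ, µ̃, λ, τ are our illustrative assumptions.

```python
import numpy as np

mu = 0.8 * np.exp(0.4j)       # preference mu (illustrative)
mu_t = 1.3 * np.exp(-1.1j)    # preference mu~ (illustrative)
lam, tau = 1.0, 0.7

# H = mu |Phi_1A T><Phi_0A R| + mu~ |Phi_0A T><Phi_1A R| + h.c.
H = np.zeros((4, 4), dtype=complex)
H[1, 0], H[0, 1] = mu, np.conj(mu)
H[3, 2], H[2, 3] = mu_t, np.conj(mu_t)

# V_tau = exp(-i lam H tau), built from the spectral decomposition of the Hermitian H
w, U = np.linalg.eigh(H)
V = U @ np.diag(np.exp(-1j * lam * w * tau)) @ U.conj().T

omega, theta = abs(mu), np.angle(mu)
lhs = V[:, 0]                                            # V_tau |Phi_0A R>
rhs = np.array([np.cos(omega * lam * tau),
                -1j * np.exp(1j * theta) * np.sin(omega * lam * tau), 0, 0])
assert np.allclose(lhs, rhs)                             # first line of Eq. (27)
print(abs(lhs[1]), np.sin(omega * lam * tau))            # |sqrt(Delta_tau)| = sin(omega*lam*tau)
```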

7. Continuous limit for dynamics

As discussed in Section 4.5, the dynamics of the mental state is represented by the channel Λ*τ of Eq. (18);

ρn = Λ*τ (ρn−1),   (29)

and for the components of ρn−1 and ρn,

x00(n) = (1 − ∆τ) x00(n−1) + ∆̃τ x11(n−1),
x01(n) = √((1 − ∆τ)(1 − ∆̃τ)) x01(n−1) + √(∆τ* ∆̃τ) x10(n−1),
x10(n) = √((1 − ∆τ)(1 − ∆̃τ)) x10(n−1) + √(∆τ ∆̃τ*) x01(n−1),
x11(n) = (1 − ∆̃τ) x11(n−1) + ∆τ x00(n−1)

are satisfied. Here, as seen in Eq. (28),

|√∆τ| = sin(ωλτ),   |√∆̃τ| = sin(ω̃λτ),

where λ is the strength of the interaction. In this section, we consider a continuous limit for the above discrete dynamics such that t = nτ is fixed with τ → 0 and n → ∞. Note that we consider the scaling of the strength λ at the same time; λ²τ = 1 is assumed in the limit τ → 0. This regime of limit is similar to the van Hove limit. However, it is not a weak coupling limit such as λ → 0; rather λ = 1/√τ becomes large, and the effect of the interaction becomes sensitive for τ → 0. (Such a regime was proposed and discussed in the paper [49].) As a result, the following differential equations are obtained.

(d/dt) x00 = −k x00 + k̃ x11,
(d/dt) x01 = −(1/2)(k + k̃) x01 + √(k k̃) e^{−i(θ−θ̃)} x10,
(d/dt) x10 = −(1/2)(k + k̃) x10 + √(k k̃) e^{i(θ−θ̃)} x01,
(d/dt) x11 = −k̃ x11 + k x00.   (30)

Here k = ω² and k̃ = ω̃². From these differential equations, one can check







lim_{t→∞} ρ(t) = [ k̃/(k + k̃)        0       ]
                 [     0         k/(k + k̃)  ],   (31)

that is, the dynamics is a kind of decoherence procedure and has an equilibrium state ρE. It should be noted that the differential equations for the diagonal parts x00 and x11, which are the probabilities of the choices 0A and 1A, are similar to the equations given for a chemical reaction. The parameters k and k̃ are interpreted as the reaction velocities;

Choice of 0 ⇌ Choice of 1,   (32)

with forward velocity k and backward velocity k̃.
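The relaxation to the equilibrium (31) can be checked by a simple Euler integration of the diagonal part of Eq. (30) (a sketch; k and k̃ are illustrative, and the off-diagonal equations are omitted since the populations decouple from them):

```python
# k: velocity of "Choice of 0 -> Choice of 1"; k_t: the reverse velocity k~
k, k_t = 2.0, 0.5                 # illustrative values
x00, x11 = 1.0, 0.0               # start fully on the choice "0"

dt = 1e-3
for _ in range(20000):            # integrate Eq. (30) up to t = 20
    dx00 = (-k * x00 + k_t * x11) * dt
    dx11 = (k * x00 - k_t * x11) * dt
    x00, x11 = x00 + dx00, x11 + dx11

# equilibrium of Eq. (31): (k~/(k+k~), k/(k+k~)) = (0.2, 0.8)
print(round(x00, 4), round(x11, 4))   # -> 0.2 0.8
```

The total probability x00 + x11 is conserved by the rate equations, and the stationary point is exactly the diagonal of Eq. (31).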







Actually, k = ω² = |µ|² and k̃ = ω̃² = |µ̃|² represent the degrees of preference for choosing 1A and 0A, as mentioned in Section 5. From Eq. (23), k and k̃ are written as

k = Σ_{i=1}^{4} |ci|² ki + Σ_{i≠j} ci cj* µi µj,

k̃ = Σ_{i=1}^{4} |ci|² k̃i + Σ_{i≠j} ci* cj µ̃i µ̃j,   (33)

where {ci} = {α0α1*, β0α1*, α0β1*, β0β1*}, {µi} = {µ00, µ10, µ01, µ11}, {µ̃i} = {µ̃00, µ̃10, µ̃01, µ̃11} and ki = µi², k̃i = µ̃i². Each ki is interpreted as a reaction velocity which specifies one comparison:

0A0B ⇌ 1A0B  (velocities k1, k̃1),      0A1B ⇌ 1A0B  (velocities k2, k̃2),
0A0B ⇌ 1A1B  (velocities k3, k̃3),      0A1B ⇌ 1A1B  (velocities k4, k̃4).   (34)
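The decomposition of Eq. (33) into a mixture part and an interference part is just the expansion of k = |µ|²; a numerical sanity check (with illustrative real weights and µi of our choosing) is:

```python
import numpy as np

# Real weights alpha, beta and real mu_i (all values illustrative)
alpha0, beta0 = 0.6, 0.8
alpha1, beta1 = 0.28, 0.96
c = np.array([alpha0 * alpha1, beta0 * alpha1, alpha0 * beta1, beta0 * beta1])
mu = np.array([1.0, 0.5, 0.3, 2.0])          # {mu_00, mu_10, mu_01, mu_11}

k_direct = abs(c @ mu) ** 2                  # k = |mu|^2, mu as in Eq. (23)
mixture = float(np.sum(c**2 * mu**2))        # sum_i |c_i|^2 k_i, with k_i = mu_i^2
interference = sum(c[i] * c[j] * mu[i] * mu[j]
                   for i in range(4) for j in range(4) if i != j)

# Eq. (33): k = mixture + interference
assert np.isclose(k_direct, mixture + interference)
```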

∑ |ci |2 ki and 4i=1 |ci |2 k˜ i of Eq. (33) represents a simple mixture of the four comparisons. We call the second ∑ 4 ∗ ∗

terms i,j=1 ci cj ui uj and i,j=1 ci cj ui uj ‘‘interference terms’’ which represent a kind of interdependency between the four comparisons. Such interpretation for the decision-making was explained conceptually in the paper [23]. In the paper [24], we showed that the differential equations of Eq. (30) can be represented by the form of GKSL (Gorini–Kossakowski–Sudarshan–Lindblad) equation, that is, as a open system dynamics. Further, in the paper [50], we analyzed the decision-making process, which seems like a decoherence procedure, by calculating the entropy of ρ(t ). 8. Numerical analysis of dynamics In this section, we demonstrate numerical analyses of the dynamics of decision-making in prisoner’s dilemma (PD) games. 8.1. Setting of parameters The decision-making process is summarized in the form,

ρn = Λ∗τ (ρn−1 ),


see Eq. (18) in Section 4.5. The diagonal components of the mental state ρn correspond to the probabilities of choosing strategies, P0A(n) and P1A(n). For an initial PxA(0), the probability PxA(n), which is obtained after n operations of Λ*τ, is given by

P0A(n) = (1 − ∆τ − ∆̃τ)^n P0A(0) + (1 − (1 − ∆τ − ∆̃τ)^n) ∆̃τ/(∆τ + ∆̃τ),
P1A(n) = (1 − ∆τ − ∆̃τ)^n P1A(0) + (1 − (1 − ∆τ − ∆̃τ)^n) ∆τ/(∆τ + ∆̃τ).

Here,

∆τ = sin²(λωτ),   ∆̃τ = sin²(λω̃τ).

The parameter λ represents the strength of the interaction between the mental state and the memory, and ω and ω̃ are given by

ω = |⟨Φ1A| G |Φ0A⟩| = |α0α1* µ1 + β0α1* µ2 + α0β1* µ3 + β0β1* µ4|,
ω̃ = |⟨Φ0A| G |Φ1A⟩| = |α0*α1 µ̃1 + β0*α1 µ̃2 + α0*β1 µ̃3 + β0*β1 µ̃4|,   (35)

where

|Φ0A⟩ = α0 |0A0B⟩ + β0 |0A1B⟩,
|Φ1A⟩ = α1 |1A0B⟩ + β1 |1A1B⟩

are the alternative vectors with ‘‘weights’’ α0,1, β0,1 ∈ C, and G is the comparison operator described as

G = [  0     0    µ̃1   µ̃3 ]
    [  0     0    µ̃2   µ̃4 ]
    [ µ1    µ2    0     0  ]
    [ µ3    µ4    0     0  ].   (36)

In order to demonstrate numerical analyses, we have to set the parameters τ, λ, α0, α1, {µi} and {µ̃i}. Recall that a set of reaction velocities (ki = µi², k̃i = µ̃i²) specifies a comparison of consequences, see Eq. (34). We assume that the velocities (ki, k̃i) are determined by the pay-offs of a game, in the following way. Let us give the game with the following pay-off table.

A/B    0B      1B
0A     a/a′    b/b′
1A     c/c′    d/d′

In the comparisons

0A0B ⇌ 1A0B  (velocities k1, k̃1)   and   0A1B ⇌ 1A1B  (velocities k4, k̃4),

Bob's strategy is fixed at 0B or 1B, and then (k1, k̃1) and (k4, k̃4) are determined only by Alice's pay-offs;

(k1, k̃1) = (c − a, 0)   if a < c,
          = (0, a − c)   if a > c;
(k4, k̃4) = (d − b, 0)   if b < d,
          = (0, b − d)   if b > d.   (37)

The other two comparisons

0A1B ⇌ 1A0B  (velocities k2, k̃2)   and   0A0B ⇌ 1A1B  (velocities k3, k̃3)

provide the additional effects which are discussed in the introduction. In these comparisons, Bob's strategy is not fixed, so (k2, k̃2) and (k3, k̃3) depend not only on Alice's pay-offs but also on Bob's pay-offs. We consider the following forms.

(k2, k̃2) = (√((c − b)(c′ − b′)), 0)    if b < c and b′ < c′,
          = (0, √((b − c)(b′ − c′)))    if b > c and b′ > c′,
          = (0, 0)                       in other cases;
(k3, k̃3) = (√((d − a)(d′ − a′)), 0)    if a < d and a′ < d′,
          = (0, √((a − d)(a′ − d′)))    if a > d and a′ > d′,
          = (0, 0)                       in other cases.   (38)
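The rules (37)–(38) can be summarized in a small function (a sketch; the function name and the flat argument order are ours, not the paper's):

```python
import math

def velocities(a, b, c, d, a_, b_, c_, d_):
    """Map a pay-off table (Alice: a,b,c,d; Bob: a',b',c',d') to (k_i, k~_i)."""
    k1 = (c - a, 0) if a < c else (0, a - c)                  # Eq. (37)
    k4 = (d - b, 0) if b < d else (0, b - d)
    if b < c and b_ < c_:                                      # Eq. (38)
        k2 = (math.sqrt((c - b) * (c_ - b_)), 0)
    elif b > c and b_ > c_:
        k2 = (0, math.sqrt((b - c) * (b_ - c_)))
    else:
        k2 = (0, 0)
    if a < d and a_ < d_:
        k3 = (math.sqrt((d - a) * (d_ - a_)), 0)
    elif a > d and a_ > d_:
        k3 = (0, math.sqrt((a - d) * (a_ - d_)))
    else:
        k3 = (0, 0)
    return k1, k2, k3, k4

# PD game I of Table 1: a/a' = 4/4, b/b' = 2/5, c/c' = 5/2, d/d' = 3/3
print(velocities(4, 2, 5, 3, 4, 5, 2, 3))
# -> ((1, 0), (0, 0), (0, 1.0), (1, 0)), i.e. k1 = 1, k4 = 1, k~3 = 1
```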


Table 1
PD game I.

A/B    0B     1B
0A     4/4    2/5
1A     5/2    3/3

Fig. 5. Probabilities of irrational choice in PD game I.

In the case of the PD game, the values of the pay-offs satisfy c > a > d > b and b′ > a′ > d′ > c′, so the velocities are given as

k1 = c − a,   k4 = d − b,   k̃3 = √((a − d)(a′ − d′)),   (39)

while k̃1, k2, k̃2, k3 and k̃4 are zero.

8.2. Analysis in PD game

With the velocities (ki, k̃i) defined in Eq. (39), we analyze the process of decision-making in prisoner's dilemma (PD) games and discuss Alice's ‘‘irrational’’ behaviors. Firstly, we consider the PD game I of Table 1. Fig. 5 shows the probabilities of ‘‘irrationality’’ P0A under the four conditions of parameters (α0², α1²) = (1 − β0², 1 − β1²) = (0, 0), (0.5, 0.5), (0.75, 0.25) and (1, 0). Here, α0,1 (β0,1) are assumed to be real. The strength of interaction λ and the time interval τ are set as λ = 1/max(ω, ω̃) and τ = 4π/5. We assume that before the decision-making, at n = 0, Alice does not feel the possibility of the irrational choice ‘‘0’’, that is, P0A(n = 0) = 0. Under the condition (α0², α1²) = (0, 0), Alice does not feel uncertainty about Bob's action; she can judge ‘‘Bob will choose the rational 1’’. Alice chooses ‘‘1’’ with probability 1, since the pay-off 3 is larger than 2. Under the condition (α0², α1²) = (1, 0), Alice judges ‘‘Bob's choice will be the same as my choice’’. Then, Alice will choose ‘‘0’’ by comparing the pay-offs 4 and 3. As seen in Fig. 5, the probability P0A develops and approaches 1. Under a condition such as (α0², α1²) = (0.5, 0.5) or (0.75, 0.25), Alice holds uncertainty and chooses 0 with 0 < P0A < 1. In the above calculation, as n becomes large, the probability P0A approaches

P0A^E = lim_{n→∞} P0A(n)
      = sin²((4π/5)(ω̃/ω)) / [ sin²(4π/5) + sin²((4π/5)(ω̃/ω)) ]   (if ω ≥ ω̃)
      = sin²(4π/5) / [ sin²(4π/5) + sin²((4π/5)(ω/ω̃)) ]           (if ω̃ ≥ ω).

When ω = ω̃, the probability P0A^E is 1/2, and then the parameters α0 and α1 satisfy

α1 = sin( arctan( 1 − √(1 − α0²)/α0 ) ).


Fig. 6. The boundary between rational solutions and irrational solutions in PD game I.

(We use Eq. (35) to derive this.) In a sense, this function gives a boundary between rational solutions (P0A^E < 1/2) and irrational solutions (P0A^E > 1/2), see Fig. 6. If Alice makes her decision with (α0, α1) on the boundary, she may strongly feel the ‘‘dilemma’’: see Fig. 7, which shows P0A(n) at n = 1, 2, 3, 4, 7, 8 and 19. One can see that the probability P0A(n) fluctuates intensively near the boundary. The irrational choice in PD comes from the fact that the pay-off assigned to the consequence 0A0B is larger than the one for 1A1B. In the case of game I, the difference of pay-offs between these consequences is only 4 − 3 = 1. Therefore, as seen in Fig. 6, the domain of irrational solutions is relatively small. In Fig. 8, we introduce two games, denoted game II and game III, whose pay-offs assigned to 0A0B are 6 and 10. The boundaries in Fig. 8 are given by







II:  α1 = sin( arctan( 3 − √(1 − α0²)/α0 ) ),
III: α1 = sin( arctan( 10 − √(1 − α0²)/α0 ) ),

and these domains are larger than the one in game I. It is expected that the domain is even larger in a PD with extreme pay-offs as in the introduction. Let us consider Alice at (α0, α1) ≈ (0, 0), for which Alice feels that Bob will choose 1B with probability nearly 1. (Alice may be a believer in classical game theory.) When game I is given, Alice will choose the rational ‘‘1A’’ with high probability, because the boundary of Fig. 6 is far from Alice's point (α0, α1) ≈ (0, 0). In game III, or a game with extreme pay-offs, Alice may touch the boundary and develop the possibility of the irrational choice.
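The equilibrium probability for PD game I can be reproduced directly from the formulas of Section 8.1: here µ1 = µ4 = µ̃3 = 1 and all other entries vanish (Eq. (39)), so with real weights ω = |α0α1 + β0β1| and ω̃ = α0β1. The sketch below (our illustration) uses λ = 1/max(ω, ω̃) and τ = 4π/5:

```python
import numpy as np

def P0E(alpha0, alpha1):
    """Equilibrium probability of the irrational choice 0_A in PD game I."""
    beta0, beta1 = np.sqrt(1 - alpha0**2), np.sqrt(1 - alpha1**2)
    w = abs(alpha0 * alpha1 + beta0 * beta1)   # omega
    wt = alpha0 * beta1                        # omega~
    lam, tau = 1 / max(w, wt), 4 * np.pi / 5
    D = np.sin(lam * w * tau) ** 2             # Delta_tau
    Dt = np.sin(lam * wt * tau) ** 2           # tilde-Delta_tau
    return Dt / (D + Dt)

# The two extreme conditions discussed in Section 8.2
print(P0E(0.0, 0.0), P0E(1.0, 0.0))   # -> 0.0 1.0 (rational vs. irrational)
# Intermediate uncertainty, (alpha0^2, alpha1^2) = (0.5, 0.5), gives 0 < P0E < 1
print(round(P0E(np.sqrt(0.5), np.sqrt(0.5)), 3))
```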

Appendix. On quantum-like representation of classical oscillations in the brain

We stress again that the discussion in this section need not be directly coupled to our quantum-like model of decision making. One may, but need not, consider the possibility that classical wave processes in the brain might produce the quantum-like representation of information. In general, our model is simply a formal mathematical model describing the process of decision making by using quantum mathematics, and nothing more. Some authors claim that quantum-like effects in cognitive science and psychology, in particular the interference-type effects, can also be produced by non-quantum oscillators as wave sources. For example, Acacio de Barros and Suppes emphasized the role of chemical waves in the brain [51]; Geissler [52], Geissler et al. [53] and Geissler and Kompass [54] developed a quantum-like model of the brain's functioning based on oscillations in neuronal ensembles; recently, Khrennikov [30] presented a quantum-like model of the information processing in the brain based on classical electromagnetic signals.


Fig. 7. The probability P0A (n) fluctuated in the process of decision-making.


Fig. 8. The boundaries in PD game II and III.

References

[1] J. Nash, Equilibrium points in n-person games, Proceedings of the National Academy of Sciences 36 (1) (1950) 48–49.
[2] J. Nash, Non-cooperative games, The Annals of Mathematics 54 (2) (1951) 286–295.
[3] D. Bernheim, B. Peleg, M.D. Whinston, Coalition-proof Nash equilibria I, concepts, Journal of Economic Theory 42 (1) (1987) 1–12.
[4] A.J. McKenzie, Evolutionary game theory, in: Edward N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy, 2009.
[5] T. Schelling, The Strategy of Conflict, Harvard University Press, 1960, 1980.
[6] E. Shafir, A. Tversky, Thinking through uncertainty: nonconsequential reasoning and choice, Cognitive Psychology 24 (1992) 449–474.
[7] A. Tversky, E. Shafir, The disjunction effect in choice under uncertainty, Psychological Science 3 (1992) 305–309.
[8] R. Croson, The disjunction effect and reasoning-based choice in games, Organizational Behavior and Human Decision Processes 80 (1999) 118–133.
[9] D.R. Hofstadter, Dilemmas for superrational thinkers, leading up to a luring lottery, Scientific American 6 (1983).
[10] D.R. Hofstadter, Metamagical Themas: Questing for the Essence of Mind and Pattern, Basic Books, New York, 1985.
[11] A. Khrennikov, On quantum-like probabilistic structure of mental information, Open Systems and Information Dynamics 11 (3) (2004) 267–275.
[12] A. Khrennikov, Quantum-like brain: interference of minds, BioSystems 84 (2006) 225–241.
[13] K.-H. Fichtner, L. Fichtner, W. Freudenberg, M. Ohya, On a quantum model of the recognition process, QP-PQ: Quantum Probability and White Noise Analysis 21 (2008) 64–84.
[14] J.B. Busemeyer, Z. Wang, J.T. Townsend, Quantum dynamics of human decision making, Journal of Mathematical Psychology 50 (2006) 220–241.
[15] J.R. Busemeyer, M. Matthews, Z. Wang, A quantum information processing explanation of disjunction effects, in: R. Sun, N. Miyake (Eds.), The 29th Annual Conference of the Cognitive Science Society and the 5th International Conference of Cognitive Science, 2006, pp. 131–135.
[16] J.R. Busemeyer, E. Santuy, A. Lambert-Mogiliansky, Comparison of Markov and quantum models of decision making, in: P. Bruza, W. Lawless, K. van Rijsbergen, D.A. Sofge, B. Coeke, S. Clark (Eds.), Quantum Interaction: Proceedings of the Second Quantum Interaction Symposium, 2008, pp. 68–74.
[17] L. Accardi, A. Khrennikov, M. Ohya, The problem of quantum-like representation in economy, cognitive science, and genetics, in: L. Accardi, W. Freudenberg, M. Ohya (Eds.), Quantum Bio-Informatics II: From Quantum Information to Bio-Informatics, WSP, Singapore, 2008, pp. 1–8.
[18] L. Accardi, A. Khrennikov, M. Ohya, Quantum Markov model for data from Shafir–Tversky experiments in cognitive psychology, Open Systems and Information Dynamics 16 (2009) 371–385.
[19] E. Conte, A. Khrennikov, O. Todarello, A. Federici, J.P. Zbilut, Mental states follow quantum mechanics during perception and cognition of ambiguous figures, Open Systems and Information Dynamics 16 (2009) 1–17.
[20] A. Khrennikov, E. Haven, Quantum mechanics and violations of the sure-thing principle: the use of probability interference and other concepts, Journal of Mathematical Psychology 53 (2009) 378–388.
[21] A. Khrennikov, Ubiquitous Quantum Structure: From Psychology to Finance, Springer, Heidelberg–Berlin–New York, 2010.
[22] A. Khrennikov, Contextual Approach to Quantum Formalism, in: Fundamental Theories of Physics, Springer, Heidelberg–Berlin–New York, 2009.
[23] M. Asano, M. Ohya, A. Khrennikov, Quantum-like model for decision making process in two players game, Foundations of Physics 41 (2010) 538–548.
[24] M. Asano, M. Ohya, Y. Tanaka, A. Khrennikov, I. Basieva, On application of Gorini–Kossakowski–Sudarshan–Lindblad equation in cognitive psychology, Open Systems & Information Dynamics 18 (2011) 55–69.
[25] L. Accardi, M. Ohya, Compound channels, transition expectations and liftings, Applied Mathematics and Optimization 39 (1999) 33–59.
[26] R. Penrose, The Emperor's New Mind, Oxford University Press, New York, 1989.
[27] R. Penrose, Shadows of the Mind, Oxford University Press, Oxford, 1994.
[28] S. Hameroff, Quantum coherence in microtubules, a neural basis for emergent consciousness? Journal of Consciousness Studies 1 (1994) 91–118.
[29] S. Hameroff, Quantum computing in brain microtubules? The Penrose–Hameroff Orch OR model of consciousness, Philosophical Transactions of the Royal Society, London A (1994) 1–28.
[30] A. Khrennikov, Quantum-like model of processing of information in the brain based on classical electromagnetic field, BioSystems 105 (3) (2011) 250–262.
[31] J.O. Ledyard, Public goods: a survey of experimental research, in: J.H. Kagel, A.E. Roth (Eds.), The Handbook of Experimental Economics, Princeton University Press, Princeton, 1995, pp. 111–194.
[32] J. Masel, A Bayesian model of quasi-magical thinking can explain observed cooperation in the public good game, Journal of Economic Behavior and Organization 64 (2007) 216–231.
[33] R.M. Isaac, K.F. McCue, C.R. Plott, Public-goods provision in an experimental environment, Journal of Public Economics 26 (1985) 51–74.
[34] O. Kim, M. Walker, The free rider problem: experimental evidence, Public Choice 43 (1984) 3–24.
[35] J. Andreoni, Why free ride? Strategies and learning in public-goods experiments, Journal of Public Economics 37 (1988) 291–304.
[36] R. Burlando, J.D. Hey, Do Anglo-Saxons free-ride more? Journal of Public Economics 64 (1997) 41–60.
[37] R.T.A. Croson, Partners and strangers revisited, Economics Letters 53 (1996) 25–32.
[38] R.M. Isaac, J.M. Walker, Communication and free-riding behavior: the voluntary contribution mechanism, Economic Inquiry 26 (1988) 585–608.
[39] R.M. Isaac, J.M. Walker, S.H. Thomas, Divergent evidence on free riding: an experimental examination of possible explanations, Public Choice 43 (1984) 113–149.


[40] R.M. Isaac, J.M. Walker, A.W. Williams, Group size and the voluntary provision of public goods: experimental evidence utilizing large groups, Journal of Public Economics 54 (1994) 1–36.
[41] D.M. Kreps, P. Milgrom, J. Roberts, R. Wilson, Rational cooperation in the finitely repeated Prisoners' Dilemma, Journal of Economic Theory 27 (1982) 245–252.
[42] J. Brandts, A. Schram, Cooperation and noise in public goods experiments: applying the contribution function approach, Journal of Public Economics 79 (2001) 399–427.
[43] J. Weimann, Individual behavior in a free riding experiment, Journal of Public Economics 54 (1994) 185–200.
[44] C. Keser, F. van Winden, Conditional cooperation and voluntary contributions to public goods, Scandinavian Journal of Economics 102 (2000) 23–39.
[45] G.S. Becker, A theory of social interactions, Journal of Political Economy 82 (1974) 1063–1093.
[46] J. Andreoni, Giving with impure altruism: applications to charity and Ricardian equivalence, Journal of Political Economy 97 (1989) 1447–1458.
[47] R.C. Jeffrey, The Logic of Decision, University of Chicago Press, 1983.
[48] J. Orbell, R.M. Dawes, A cognitive miser theory of cooperators' advantage, American Political Science Review 85 (1991) 515–528.
[49] S. Attal, A. Joye, Weak coupling and continuous limits for repeated quantum interactions, Journal of Statistical Physics 126 (2007) 1241–1283.
[50] M. Asano, M. Ohya, Y. Tanaka, I. Basieva, A. Khrennikov, Quantum-like model of brain's functioning: decision making from decoherence, Journal of Theoretical Biology 281 (2011) 56–64.
[51] J. Acacio de Barros, P. Suppes, Quantum mechanics, interference, and the brain, Journal of Mathematical Psychology 53 (2009) 306–313.
[52] H.-G. Geissler, The temporal architecture of central information processing: evidence for a tentative time-quantum model, Psychological Research 49 (1987) 99–106.
[53] H.-G. Geissler, F.-U. Schebera, R. Kompass, Ultra-precise quantal timing: evidence from simultaneity thresholds in long-range apparent movement, Perception and Psychophysics 6 (1999) 707–726.
[54] H.-G. Geissler, R. Kompass, Temporal constraints in binding? Evidence from quantal state transitions in perception, Visual Cognition 8 (2001) 679–696.
[55] M. Asano, M. Ohya, Y. Tanaka, A. Khrennikov, I. Basieva, Quantum uncertainty and decision-making in game theory, QP-PQ: Quantum Probability and White Noise Analysis XXVIII (2011) 51–60.