Physica A 390 (2011) 2582–2592

Contents lists available at ScienceDirect

Physica A journal homepage: www.elsevier.com/locate/physa

A network model of knowledge accumulation through diffusion and upgrade✩

Enyu Zhuang a,∗, Guanrong Chen b, Gang Feng a

a Department of Manufacturing Engineering and Engineering Management, City University of Hong Kong, Hong Kong, China
b Department of Electronic Engineering, City University of Hong Kong, Hong Kong, China

Article history: Received 15 October 2010; Received in revised form 13 January 2011; Available online 11 March 2011

Keywords: Knowledge accumulation; Knowledge diffusion; Knowledge upgrade; Multi-agent network

Abstract

In this paper, we introduce a model to describe knowledge accumulation through knowledge diffusion and knowledge upgrade in a multi-agent network. Here, knowledge diffusion refers to the distribution of existing knowledge in the network, while knowledge upgrade means the discovery of new knowledge. It is found that the population of the network and the number of each agent's neighbors affect the speed of knowledge accumulation. Four different policies for updating the neighboring agents are thus proposed, and their influence on the speed of knowledge accumulation and on the topology evolution of the network is also studied.

© 2011 Elsevier B.V. All rights reserved.

1. Introduction

Needless to say, knowledge is important for human beings to achieve a higher-level civilization. This has led to a growing interest in knowledge management in the past few years [1], and a full understanding of the evolution of knowledge is a key step toward knowledge management. On another front of research, the theory of complex networks provides a useful tool for investigating the dynamics of various kinds of networks [2–6], including computer networks [7,8], social networks [9,10], systems biology [11,12] and many others. Some works have addressed the development of knowledge using complex networks as a tool [13–16].

In this paper, we establish a new model based on the framework of multi-agent networks and consider two kinds of knowledge evolution: knowledge diffusion and knowledge upgrade, which together lead to knowledge accumulation. In our model, there are agents with different levels of knowledge and relationship edges connecting them. Knowledge diffusion happens when an agent absorbs the knowledge of another agent through the edge between them, and knowledge upgrade takes place when an agent discovers new knowledge based on the knowledge it already has. We adopt a mechanism whereby every agent updates its neighborhood before starting the next period of knowledge evolution by replacing a worst neighbor with a new one. There are four optional policies for choosing a new neighbor: the new neighbor could be a randomly selected agent, the best neighbor of the agent's best neighbor, the best among its worst neighbor and the neighbors of its worst neighbor, or the best among a randomly chosen agent and the neighbors of this randomly chosen agent.

We simulate the behavior of the new network model with different populations, numbers of neighbors and neighborhood-update policies. We find that the per-capita knowledge in the network grows in an exponential form; it rises when the population of the network or the number of an agent's neighbors increases. The neighborhood-update policy significantly changes

✩ This research was supported by the Hong Kong Research Grants Council (grant CityU1117/10E).
∗ Corresponding author. Tel.: +852 3442 2016.
E-mail addresses: [email protected] (E. Zhuang), [email protected] (G. Chen), [email protected] (G. Feng).

doi:10.1016/j.physa.2011.02.043


the topology of the network and thus affects the process of knowledge accumulation; it has significant influences on the different phases of the knowledge evolution. Regarding the degree distribution and the path length distribution, which describe the topology of a network, we show that they gradually reach a relatively stable state. The time they need to become stable depends on the population of the network and the number of an agent's neighbors. The shapes of the stable degree distribution and path length distribution both depend on the neighborhood-update policy.

The rest of the paper is organized as follows. Section 2 describes the model. Section 3 presents the numerical simulation results. Section 4 concludes the paper with some discussion and remarks.

2. The new model

The new model to describe knowledge accumulation is a network consisting of agents and directed edges. Every agent has a certain level of knowledge discovered by itself. Among the agents there are directed edges through which knowledge spreads. During every period of evolution time, knowledge diffusion and knowledge upgrade take place in the network. Agent A can absorb the knowledge discovered by agent B through the directed edge from B to A; an agent's neighbors are those agents from which it can absorb knowledge. An agent can discover new knowledge based on the existing knowledge it already has. Moreover, every agent updates its neighborhood before starting the next period of knowledge evolution, and there are four optional policies for this neighborhood update. All these are detailed subsequently.

2.1. The network

Let V = {v_1, v_2, ..., v_N} denote a set of N agents. If there is a directed edge from agent j to agent i at discrete time t, then agent i can absorb knowledge discovered by agent j at time t. Thus, at any time t, all the directed edges make up a set of edges E(t), and the directed graph G(t) = (V, E(t)) represents the topology of the network. We assume that every agent publishes the knowledge it discovers and every agent can choose M, and only M, agents whose knowledge it wants to absorb. So, there are always M edges pointing to every agent, and thus the total number of edges in the network is N × M at any time.

2.2. Knowledge diffusion

We suppose that knowledge diffusion follows the equation

$$K_i(t) = Ks_i(t) + \sum_{j \neq i} D_{ij}(t)\, Ks_j(t-1), \qquad i = 1, 2, \ldots, N, \; t = 1, 2, \ldots \tag{1}$$

where Ks_i(t) denotes the amount of accumulative knowledge that agent i has discovered by time t, and K_i(t) denotes the amount of knowledge that agent i has at time t, which contains two different components: the knowledge possessed and discovered by itself, Ks_i(t), and the knowledge it absorbs from its neighboring agents, ∑_{j≠i} D_ij(t) Ks_j(t−1). Here D_ij(t) is the weighting coefficient of the knowledge that agent i obtains from agent j; it is determined by the topology of the network and changes correspondingly as follows. Normally, the knowledge one agent gains through its knowledge upgrade cannot be fully absorbed by others. So, even though agent j is a neighbor of agent i at time t, D_ij(t) is typically smaller than 1. Here, we suppose that the absorption capability of every agent is fixed, and its value is defined as a constant CA. As a result, if there is an edge from agent j to agent i at time t, then D_ij(t) equals CA; otherwise, it is determined by

$$D_{ij}(t) = D_{ij}(t-1) \times \frac{Ks_j(t-2)}{Ks_j(t-1)}, \qquad i = 1, 2, \ldots, N, \; t = 2, 3, \ldots \tag{2}$$

Since the amount of knowledge always satisfies Ks_j(t−2) ≤ Ks_j(t−1), and D_ij(t−1) ≤ CA, we have D_ij(t) ≤ CA for all i, j and all t.

2.3. Knowledge upgrade

Besides knowledge diffusion, each agent upgrades its knowledge level during every period of time [t−1, t). The amount of new knowledge discovered by an agent depends on the knowledge it already has. Notice that this existing knowledge includes two parts: the knowledge possessed and discovered by the agent itself and the knowledge it has absorbed from others. The more an agent has accumulated, the more new knowledge it can further discover. Moreover, the knowledge upgrade is assumed to be a stochastic process: normally it does not happen right after knowledge diffusion, but after a process of knowledge growth. We suppose that the knowledge upgrade is governed by the following equation

$$Ks_i(t) = Ks_i(t-1) + U_i(t)\, K_i(t-1), \qquad i = 1, 2, \ldots, N, \; t = 1, 2, \ldots \tag{3}$$

where U_i(t) represents the possibility and the scale of the knowledge discovered by agent i at time t, which has a maximal value CU; U_i(t) is supposed to follow a uniform distribution, U_i(t) ∼ U[0, CU], i = 1, 2, ..., N.
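To make Eqs. (1)–(3) concrete, one period of knowledge evolution can be sketched in Python as follows. This is a minimal illustration of our reading of the model, not the authors' code; the array layout, the helper name evolve_one_step, and the convention that the neighbor sets are supplied externally are our own assumptions.

```python
import numpy as np

def evolve_one_step(Ks_prev, Ks_prev2, K_prev, D, neighbors, CA, CU, rng):
    """One period of knowledge evolution, following Eqs. (1)-(3).

    Ks_prev, Ks_prev2 -- Ks(t-1) and Ks(t-2), arrays of length N
    K_prev            -- K(t-1), array of length N
    D                 -- N x N array of coefficients D_ij(t-1); D[i, i] stays 0
    neighbors         -- neighbors[i]: set of agents j with an edge j -> i at time t
    """
    N = len(Ks_prev)

    # Eq. (3): stochastic knowledge upgrade with U_i(t) ~ U[0, CU]
    U = rng.uniform(0.0, CU, size=N)
    Ks = Ks_prev + U * K_prev

    # Eq. (2): coefficients of former neighbors decay as Ks_j grows ...
    D = D * (Ks_prev2 / Ks_prev)[np.newaxis, :]
    # ... while current neighbors are absorbed at the full capability CA
    for i in range(N):
        D[i, list(neighbors[i])] = CA

    # Eq. (1): total knowledge = own discoveries + weighted knowledge of others
    K = Ks + D @ Ks_prev
    return Ks, K, D
```

Applying the decay of Eq. (2) to all columns and then overwriting the current neighbors with CA yields the same result as treating the two cases separately, since Ks_j(t−2)/Ks_j(t−1) ≤ 1.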


The ability of knowledge upgrade is thus captured by a uniform distribution on the interval between 0 and a constant CU. As long as the increment of knowledge is nonnegative at all times, the knowledge upgrade defined here can be extended to a complex combination of several knowledge movements, including discovery, elimination, depreciation and so on.

2.4. Neighborhood update

We suppose that before every period of knowledge evolution, each agent updates its neighboring agents in order to improve the aggregate knowledge level of its neighborhood: it selects one of its neighbors and replaces it by a new one. In so doing, the number of each agent's neighbors remains unchanged. The knowledge that can be absorbed from an agent is proportional to the knowledge that was discovered by this agent, so we consider one neighbor to be better than another if it has discovered more knowledge by itself. Since each agent typically has only a few nodes as its neighbors in the network and knows all of them, it is easy to select one of them for replacement, preferably the worst neighbor, i.e. the one that discovers the least knowledge and hence has the smallest value of Ks. However, the agent to replace the worst neighbor is much more difficult to choose, because searching the whole network for the most suitable one has a high cost, even if it is possible at all. So, we propose four policies for finding a suitable new agent.

The first policy is totally random: the agent selects a new neighbor randomly from all the agents that are not its immediate neighbors, including the one that has just been selected to be replaced. This policy is the most cost-effective. The second policy is that the agent selects the best neighbor of its best neighbor. The agent's best neighbor also has neighbors, and their number is not large, so it is not very costly to search for the best neighbor of the agent's best neighbor; moreover, it is likely that the best neighbor of one's best neighbor has a higher level of knowledge. The third policy is to select the best among the agent's worst neighbor and the neighbors of its worst neighbor. If all the neighbors of the agent have different categories of knowledge, then when the worst neighbor is replaced, the agent loses the knowledge diffusion of the category which the worst one has; in this case, the third policy is probably a good choice for keeping knowledge relevant to that category. The last policy is to select the best among a randomly chosen agent and the neighbors of this randomly chosen agent. In terms of cost, this policy is almost equivalent to the third one, more costly than the first one and cheaper than the second. Yet, compared with the first policy, it is more likely to select a better new neighbor. The aforementioned four policies will be referred to as RA, BB, WB and RB, respectively.
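As an illustration, the four policies might be implemented as in the following sketch. It is a plausible reading of the rules above, not the authors' code; in particular, how ties are broken and how already-connected candidates are handled is not specified in the text and is assumed here, and the helper names are hypothetical.

```python
import random

def best(agents, Ks):
    """The agent with the largest self-discovered knowledge Ks."""
    return max(agents, key=lambda a: Ks[a])

def pick_new_neighbor(i, neighbors, Ks, policy, all_agents):
    """Return (worst, candidate): agent i's worst neighbor and its proposed
    replacement under policy RA, BB, WB or RB."""
    worst = min(neighbors[i], key=lambda a: Ks[a])
    if policy == "RA":
        # a random agent that is not currently a neighbor; the replaced
        # agent itself may be drawn again
        pool = [a for a in all_agents if a != i and a not in neighbors[i]]
        return worst, random.choice(pool + [worst])
    if policy == "BB":
        # the best neighbor of i's best neighbor
        return worst, best(neighbors[best(neighbors[i], Ks)], Ks)
    if policy == "WB":
        # the best among the worst neighbor and the worst neighbor's neighbors
        return worst, best({worst} | set(neighbors[worst]), Ks)
    if policy == "RB":
        # the best among a random agent and that agent's neighbors
        r = random.choice([a for a in all_agents if a != i])
        return worst, best({r} | set(neighbors[r]), Ks)
    raise ValueError(f"unknown policy {policy!r}")
```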

3. Numerical analysis

3.1. Settings

We choose N, M/N and the policy P as the variable parameters of the model. N is the population of the network and is set to 500, 1000 or 2000 in the simulations. M/N represents the percentage of neighbors one agent has with respect to the whole network and is set to 0, 0.01, 0.05 or 0.10; assuming a limited capability of absorption, the larger the N, the smaller the M/N. P is the policy of choosing a new neighbor and is taken to be RA, BB, WB or RB; the network can adopt only one policy and then keeps it forever. The four policies are simulated respectively and then compared. We also fix the absorption capability CA of every agent to 0.2 and the maximal knowledge-upgrade value CU to 0.01: in general, knowledge diffusion is much easier than knowledge upgrade, so we let CA be much larger than CU. Regarding initialization, at time t = 0 we let Ks_i(0) ∼ U[0, 1], and at time t = 1 the neighbors of every agent are randomly selected from the whole network. For every possible parameter setting, we repeat the simulation 20 times and then take the average to analyze the results.

3.2. Performance

At a certain time t, the total amount of knowledge of the whole network can be obtained by adding up the knowledge discovered by all the agents, that is, ∑_{i=1}^{N} Ks_i(t). To exclude the variation of knowledge from agent to agent within the population, we take the average knowledge of all agents at time t to measure the performance of the network. Thus, the performance index is defined as

$$\overline{Ks}(t) = \frac{1}{N} \sum_{i=1}^{N} Ks_i(t), \qquad t = 1, 2, \ldots \tag{4}$$
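Under these settings, a complete simulation run might be organized as in the sketch below, reusing the evolve_one_step and pick_new_neighbor helpers sketched earlier. The exact ordering of the neighborhood update relative to the evolution step and the initialization K(0) = Ks(0) are our assumptions.

```python
import random
import numpy as np

def run(N=2000, ratio=0.10, policy="RB", T=1000, CA=0.2, CU=0.01, seed=0):
    """One realization of the model; returns the per-capita knowledge
    of Eq. (4) for t = 1, ..., T."""
    rng = np.random.default_rng(seed)
    random.seed(seed)
    M = int(ratio * N)
    all_agents = list(range(N))

    Ks = rng.uniform(0.0, 1.0, size=N)          # Ks_i(0) ~ U[0, 1]
    neighbors = [set(random.sample([a for a in all_agents if a != i], M))
                 for i in range(N)]             # random neighborhoods at t = 1
    D = np.zeros((N, N))
    Ks_prev2, Ks_prev, K_prev = Ks.copy(), Ks.copy(), Ks.copy()

    perf = []
    for t in range(1, T + 1):
        # each agent replaces its worst neighbor before the evolution step
        if M > 0 and t > 1:
            for i in all_agents:
                worst, new = pick_new_neighbor(i, neighbors, Ks_prev, policy,
                                               all_agents)
                if new != i and new not in neighbors[i]:
                    neighbors[i].discard(worst)
                    neighbors[i].add(new)
        Ks_new, K_new, D = evolve_one_step(Ks_prev, Ks_prev2, K_prev, D,
                                           neighbors, CA, CU, rng)
        Ks_prev2, Ks_prev, K_prev = Ks_prev, Ks_new, K_new
        perf.append(Ks_new.mean())              # Eq. (4)
    return perf
```

Averaging perf over 20 independent seeds would reproduce the averaging protocol described above.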

From the simulation results we find, first, that the performance index follows an exponential form as time t increases, no matter what parameter set is applied to the network. Fig. 1 indicates the regularity of the exponential form, where the network has 2000 agents and each agent has 200 neighbors. Also, we find that as long as M/N is larger than 0, the per-capita knowledge increases as N increases. Fig. 2 shows the simulation results on the relationship between the per-capita knowledge and the population of the network. Note that when M/N is 0, the network has no knowledge diffusion: every agent upgrades its knowledge depending only on the knowledge discovered by itself.


Fig. 1. Per-capita knowledge versus time t, where N = 2000 and M/N = 10%. The knowledge accumulation follows an exponential form.

Fig. 2. Per-capita knowledge versus time t, where P = RB. For M/N = 0 or larger than 0, N has different influences on the per-capita knowledge.

In that case, the population of the network does not change the knowledge outcome. As soon as M/N becomes larger than 0, however, the population influences the outcome remarkably. This is because the larger the population, the more knowledge an agent can absorb from the network; as a result, more knowledge can be discovered by each agent, which raises the per-capita knowledge of the whole network. For instance, considering only the knowledge diffusion, with all agents having the same level of self-discovered knowledge Ks, when t is large enough the maximal knowledge an agent can absorb from the network is CA × (N − 1) × Ks. One can see that the population of the network affects the knowledge diffusion significantly. For the same reason, it is not difficult to predict that the larger the M/N, the more knowledge spreads over the network. Thus, the per-capita knowledge increases as M/N increases, as shown in Fig. 3.
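The bound CA × (N − 1) × Ks quoted above is a direct consequence of Eq. (1) together with D_ij(t) ≤ CA; under the stated assumption that every agent holds the same self-discovered knowledge Ks,

$$K_i(t) - Ks_i(t) \;=\; \sum_{j \neq i} D_{ij}(t)\, Ks_j(t-1) \;\le\; CA \sum_{j \neq i} Ks \;=\; CA \times (N-1) \times Ks.$$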

The influence of a policy is much more complicated than that of the parameters N and M/N. Fig. 4 shows, over 1000 steps of knowledge evolution, how the policies change the per-capita knowledge of the network. First, one can see that under all possible parameter settings, policies BB and WB have similar outcomes. These two policies have one property in common: they always try to find the best neighbor from among an agent's neighbors. More specifically, they focus only locally on the agent's neighborhood rather than globally on the whole network. Thus, as time t increases, their performance cannot catch up with that of the networks under the other two policies, as demonstrated by Fig. 4 for many cases.


Fig. 3. Per-capita knowledge versus time t, where N = 2000. The larger the M/N, the larger the per-capita knowledge.

Fig. 4. Per-capita knowledge versus time t.

Fig. 5. The evolutionary process of the degree distribution, where N = 1000, M/N = 10%.

However, some cases in the figure do not seem to follow this rule. For example, when the network has 2000 agents and each agent has 10% of the agents of the network as its neighbors, the network with policy BB or WB appears to perform no worse than with policy RA or RB. But when time t is increased to 3000 steps, the result changes, as shown in Fig. 1. Choosing a new neighbor that has never been a neighbor of any agent significantly enhances the knowledge of the choosing agent and, afterwards, improves its knowledge upgrade. Thus, in the long run, and considering the performance of the whole network, policies RA and RB tend to become better than policies BB and WB.

In Fig. 4, one can see that at time t = 1000 it is difficult to distinguish the performances of policies RA and RB. When N and M/N are both small, policy RA works better than policy RB; however, as N or M/N becomes larger, policy RB becomes better than policy RA. At small time t, policy RB is more likely to choose an agent with a higher knowledge level. So, at the very beginning, policy RB always works better than policy RA and obtains a dominant level of knowledge. The dominant level of knowledge, which increases in an exponential form, allows the network to keep its leading position if it is not caught up by another within a short period of time at the beginning. However, policy RA has the advantage of being more likely to choose a new neighbor that has never been a neighbor of any agent before. When N and M/N are both small, this advantage appears at an early time and pushes the network to outperform policy RB; this can be clearly seen in the diagrams at the top-left of Fig. 4. When N or M/N is large, this advantage is no longer prominent, and therefore the network with policy RB always keeps the leading position that it takes at the very beginning.

3.3. Degree distribution

In the model, the edges along which knowledge spreads from one agent to others are directed. Every agent has the same in-degree M but a varying out-degree: the in-degree shows how many neighbors it can absorb knowledge from, while the out-degree indicates how many agents absorb knowledge from it. For brevity, only the out-degree distribution is demonstrated and analyzed here; from now on, degree means out-degree.

Fig. 5 shows the evolutionary process of the degree distribution as time increases, where the population of the network is 1000 and each agent has in-degree 100. We find that the degree distribution of the network changes drastically when time t is small but gradually becomes relatively stable. When time t is larger than 721, the degree distribution shows no big changes and stays stable. At time t = 1, the degree distributions of all policies are the same because all the neighbors are randomly selected.

Fig. 6. The evolutionary process of the degree distribution, where N = 2000, M/N = 10%.

Before reaching a stable state, the degree distribution of the network with policy RA spreads out as time t increases. Although the maximal degree becomes much larger than that at initialization, it stops rising after a certain point, far from the maximal degree that can be achieved theoretically. For the same network with the other three policies, before reaching a stable state the degree distribution spreads out as well, and the maximal degree reaches the theoretical maximum N − 1. Moreover, Figs. 6 and 7 both correspond to a larger N and a smaller M/N than those of Fig. 5. The larger the N, the more time the network needs to reach a stable state. When M/N is small, the time for becoming stable increases as M/N increases; for large M/N, it behaves in the opposite way, because the increase of a large M/N can be regarded as the decrease of a small 1 − M/N.

From Figs. 5–7, we also find that the network with policy BB or WB reaches a stable state quicker than with policies RA and RB. Policies BB and WB have smaller ranges for searching new neighbors than policies RA and RB, so they reach a stable state faster.

Another way to investigate the evolutionary process of the degree distribution is shown in Fig. 8, which gives the probability of having nodes of degree 0. With policy BB or WB, once an agent has degree 0, it will remain at degree 0 forever; but the number of agents with nonzero degree has a lower bound equal to the number of neighbors of each agent. Therefore, the probability of having nodes of degree 0 is nondecreasing and bounded by 1 − M/N. For policy RB, an agent with degree 0 has a chance to be connected again: if agent A randomly chooses agent B when updating its neighborhood, and agent B has discovered more knowledge than those of B's neighbors that are not neighbors of agent A, then agent B will become a new neighbor of agent A regardless of its degree. So, the probability of having nodes of degree 0 is normally lower than under policies BB and WB. And it is easy to understand that policy RA has the lowest probability of having nodes of degree 0, due to its random selection mechanism.

Fig. 9 shows the degree distribution at time t = 1000. The diagrams on the top-left side correspond to networks that have reached stable states, while the diagrams at the opposite positions have not. Obviously, when the degree distribution is stable, it is more even and has a smaller maximal degree in the case of policy RA than with the other three policies. Under policies BB, WB and RB, most agents have extremely low degrees near 0 and some agents have extremely high degrees near N − 1; among these three, however, policy RB has a slightly more even degree distribution, and its probabilities of extremely low degrees near 0 or extremely high degrees near N − 1 are smaller than those of the other two. This indicates that in the network with policy RA the knowledge of most agents spreads to other agents, whereas in the network with policy BB or WB the knowledge of most agents hardly spreads to other agents while a small number of agents have their knowledge spread widely over the whole network; the network with policy RB falls in between.
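The out-degree statistics discussed in this subsection are easy to extract from the neighbor sets used in the earlier sketches. The following helper, under our hypothetical convention that an edge j → i exists when j ∈ neighbors[i], computes the empirical degree distribution together with the fraction of 0-degree nodes plotted in Fig. 8:

```python
from collections import Counter

def degree_distribution(neighbors, N):
    """Empirical out-degree distribution P(k) and the probability of
    0-degree nodes. An edge j -> i (j in neighbors[i]) adds one to the
    out-degree of j."""
    deg = [0] * N
    for i in range(N):
        for j in neighbors[i]:
            deg[j] += 1
    counts = Counter(deg)
    dist = {k: c / N for k, c in sorted(counts.items())}
    return dist, dist.get(0, 0.0)
```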

Fig. 7. The evolutionary process of the degree distribution, where N = 1000, M/N = 5%.

Fig. 8. The probability of having nodes with 0 degree.


Fig. 9. Degree distribution at time t = 1000.

3.4. Path length

To a certain extent, the path length can measure the connectivity of a network, so we use it to analyze the connectivity of the various networks in our model. Because the edges are directed, the path length from agent A to agent B is normally not the same as that from B to A, and the number of possible path lengths in the network is N × (N − 1). Like the degree distribution, the path length distribution changes drastically at small time t and gradually reaches a relatively stable state, as shown in Fig. 10, where the population of the network is 2000 and each agent has in-degree 200. In the figure, the reciprocal of the path length is used instead of the path length itself, owing to the existence of infinite path lengths: for an infinite path length the reciprocal is zero, and otherwise the reciprocal belongs to (0, 1]. Here, an infinite path length from agent A to agent B means that there are no directed edges connecting agent A to agent B, directly or indirectly. At any time t, the probability of path length 1 is always the same and equals M/(N − 1), due to the fact that every agent has M and only M neighbors. At time t = 1, since all the neighbors are randomly selected, the path lengths of all policies are the same and there exist no infinite path lengths. However, when the knowledge evolution begins, the average path length of the pairs with finite path lengths increases and the probability of pairs with infinite path lengths becomes larger. The probability of pairs with infinite path lengths is quite stable when time t is large, but the distribution of the finite path lengths is not as stable. Considering the influences of N and M/N, we reach the same conclusion as for the degree distribution: the larger the N, the more time the path length distribution of the network needs to reach a stable state; when M/N is small, the time for achieving a stable state increases as M/N increases, while for large M/N it behaves in the opposite way.

Fig. 11 presents the path length distribution at time t = 1000. Comparing the networks that reach stable states, the network with policy RA always has a larger probability at short path lengths than the other policies, and it has a much smaller probability of pairs with infinite path lengths. The probability of pairs with infinite path lengths is almost the same for policies BB and WB, and that of policy RB is a little smaller. In a word, the connectivity of the network with policy RA is much better than the others; policies BB and WB share the worst connectivity, while policy RB is somewhat better than them. So, in the network with policy RA the knowledge distribution is the most complete, the next one is policy RB, and policies BB and WB are not good for knowledge diffusion. The difference among the path length distributions of the four policies is easy to explain: it is caused by the range of searching for new neighbors. Policy RA has the largest searching range, policies BB and WB have the same limited searching range, and policy RB falls in between.
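Since the edges are directed, the reciprocal path lengths of Figs. 10 and 11 can be obtained by a breadth-first search from every agent. A sketch, again under the hypothetical convention that an edge j → i exists when j ∈ neighbors[i]:

```python
from collections import deque

def reciprocal_path_lengths(neighbors, N):
    """Reciprocal shortest directed path length for all N*(N-1) ordered
    pairs; 0.0 encodes an infinite path length."""
    # successors[j]: agents reachable from j in one hop (edges j -> i)
    successors = [[] for _ in range(N)]
    for i in range(N):
        for j in neighbors[i]:
            successors[j].append(i)

    recips = []
    for s in range(N):
        dist = {s: 0}
        queue = deque([s])
        while queue:                      # plain BFS over directed edges
            u = queue.popleft()
            for w in successors[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        recips.extend(1.0 / dist[v] if v in dist else 0.0
                      for v in range(N) if v != s)
    return recips
```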

Fig. 10. The evolutionary process of the path length distribution, where N = 2000, M/N = 10%.

Fig. 11. The path length distribution at time t = 1000.


4. Conclusions

This paper models knowledge evolution in a multi-agent network in terms of knowledge diffusion and knowledge upgrade, and reveals that knowledge accumulation follows an exponential form. Both the population of the network and the number of neighbors from which an agent can absorb knowledge affect the speed of knowledge accumulation: the larger these two factors, the faster the knowledge accumulation. Moreover, the model captures how the way an agent updates its neighboring agents influences the outcome of the whole network. At the beginning, absorbing knowledge from good neighbors can improve the outcome significantly; in the long run, however, absorbing knowledge globally from the whole network is more efficient than absorbing knowledge locally from good neighbors. As a real-world example, the rapid development of interdisciplinary research supports this observation.

For a deeper understanding of the knowledge evolution, the degree distribution and the path length distribution of the network were further investigated. Both distributions finally reach a relatively stable state, no matter what strategy is used for the neighborhood update. However, when the strategy focuses locally on a small range of the network rather than globally on the whole network, the network reaches a stable state faster and then remains stable. Besides, if the strategy pays attention only to the best agents, the knowledge of those best agents spreads widely while the knowledge of most agents is not absorbed by the others at all.

There are many issues arising from this model that are worth further investigation. For example, this paper has analyzed the average knowledge, but the variance and its impact have not been discussed. For the process of absorbing knowledge from neighbors, only the case of the same absorption capability has been considered, but different absorption capabilities for different agents or different neighbors would be more realistic; this will be studied in our future research. For the neighborhood update, the number of neighbors to be replaced by each agent is fixed at 1, whereas a variable number would be more practical. These are interesting but challenging topics for future studies.

References

[1] M. Alavi, D.E. Leidner, Review: knowledge management and knowledge management systems: conceptual foundations and research issues, MIS Quarterly 25 (1) (2001) 107–136.
[2] R. Albert, A.-L. Barabási, Statistical mechanics of complex networks, Reviews of Modern Physics 74 (2002) 47–97.
[3] S.N. Dorogovtsev, J.F.F. Mendes, Evolution of networks, Advances in Physics 51 (4) (2002) 1079–1187.
[4] M.E.J. Newman, The structure and function of complex networks, SIAM Review 45 (2003) 167–224.
[5] L.D.F. Costa, F.A. Rodrigues, G. Travieso, P.R.V. Boas, Characterization of complex networks: a survey of measurements, Advances in Physics 56 (1) (2007) 167–242.
[6] S.N. Dorogovtsev, A.V. Goltsev, J.F.F. Mendes, Critical phenomena in complex networks, Reviews of Modern Physics 80 (4) (2008) 1275–1335.
[7] R. Pastor-Satorras, A. Vázquez, A. Vespignani, Dynamical and correlation properties of the internet, Physical Review Letters 87 (25) (2001) 3–6.
[8] G. Chen, Z. Fan, X. Li, Modelling the Complex Internet Topology, Springer, Berlin, Heidelberg, 2005, pp. 213–234.
[9] F. Liljeros, C.R. Edling, L.A. Amaral, H.E. Stanley, Y. Aberg, The web of human sexual contacts, Nature 411 (2001) 907–908.
[10] M.E.J. Newman, The structure of scientific collaboration networks, Proceedings of the National Academy of Sciences of the United States of America 98 (2) (2001) 404–409.
[11] A.-L. Barabási, Z.N. Oltvai, Network biology: understanding the cell's functional organization, Nature Reviews Genetics 5 (2004) 101–113.
[12] L. Costa, F. Rodrigues, A. Cristino, Complex networks: the key to systems biology, Genetics and Molecular Biology 31 (3) (2008) 591–601.
[13] C. Chen, D. Hicks, Tracing knowledge diffusion, Scientometrics 59 (2) (2004) 199–211.
[14] R. Andergassen, F. Nardini, M. Ricottilli, The Emergence of Paradigm Setters through Firms' Interaction and Network Formation, vol. 567, Springer, Berlin, 2006, pp. 93–106.
[15] R. Cowan, N. Jonard, Network structure and the diffusion of knowledge, Journal of Economic Dynamics and Control 28 (8) (2004) 1557–1575.
[16] I. Licata, A dynamical model for information retrieval and emergence of scale-free clusters in a long term memory network, Emergence: Complexity and Organization 11 (1) (2009) 48–57.