Effect of spike-timing-dependent plasticity on neural assembly computing


Elahe Eskandari a, Arash Ahmadi a,*, Shaghayegh Gomar b

a Electrical Engineering Department, Razi University, 67149-67346 Kermanshah, Iran
b Electrical and Computer Engineering Department, University of Windsor, Ontario, Canada

* Corresponding author. E-mail addresses: [email protected] (E. Eskandari), [email protected], [email protected] (A. Ahmadi), [email protected] (S. Gomar).

Article info

Article history: Received 17 September 2014; received in revised form 4 January 2016; accepted 12 January 2016. Communicated by J. Torres.

Keywords: Izhikevich model; neural assembly computing (NAC); spike timing-dependent plasticity (STDP); spiking neural network (SNN)

Abstract

Spiking neural networks (SNNs) are practical and realistic neural models, which have attracted considerable attention, and many valuable physical realizations have been implemented. Neural assembly computing (NAC) is a new approach to SNNs, which is known to be a promising mechanism for explaining large-scale neural behavior, and it has been used to examine and explore the computational activities of neural cell assemblies. To obtain algorithms based on NAC, neural coalitions are considered to be responsible for detecting patterns, memorizing them, and controlling their hierarchical relationships. In addition, spike timing-dependent plasticity (STDP) can be employed as a synaptic plasticity learning rule to modify the synaptic weights in neural networks, thereby allowing the convergence of neural activities to a spatiotemporal neuron-network pattern. Thus, applying STDP to NAC can be a powerful tool in neuroscience computing. Investigations in this area are also useful for understanding biological systems. In this study, we investigated the effect of applying STDP to NAC. Our simulation results showed that applying the STDP rule increased the average number of firings for each event in neural assemblies by adjusting the weights of the connections. Moreover, the firing of neural assemblies resulted in a sequence of events in a closed loop. We computed correlations to measure the similarity between the patterns in each assembly, which showed that with STDP the network fired with more distinctive and more consistent patterns. In addition, after memorizing a pattern, the frequency of events increased and the firing patterns became faster. Therefore, STDP can improve and accelerate the overall NAC process. Thus, our simulation results demonstrate that STDP can help NAC to obtain a more distinctive pattern in the network output.

© 2016 Elsevier B.V. All rights reserved.

1. Introduction

The functionality of the nervous system and neural networks, which comprise the most powerful natural computer, has been investigated in many respects. Artificial neural networks were developed based on this remarkable computational system and they have a wide range of applications [1]. Despite their capabilities, the restricted functionality of neurons and their interconnections in artificial neural networks have limited the utility of these networks, thereby encouraging researchers to develop a new generation of neural networks called spiking neural networks (SNNs) [2,3]. The human brain is one of the largest biological neural networks in terms of functionality and cognition, comprising about 10^11 neurons. Emulating the brain has high potential in many processing applications such as pattern recognition, speech processing, and complex logic reasoning.

Despite the complexity of the brain and various limitations in this field of study, such as our limited knowledge of brain functionalities, there are similarities between brain processing mechanisms and SNNs. Thus, SNNs emulate the behavior of biological nervous systems and they are considered to be powerful tools for brain-related studies. Many different neuron models have been proposed for spiking neurons, which range from simple to highly complex, such as the Hodgkin–Huxley model, the spike response model, the integrate-and-fire model, and the Izhikevich model [4]. These models differ in terms of their computational costs and their degree of biological plausibility. In SNNs, neurons communicate with each other via trains of spikes, which occur when the membrane potential of a neuron increases suddenly. Thus, information is encoded in individual spikes, thereby adding new features to the functionality of traditional neural networks, i.e., time, frequency, and phase dimensions. Many different schemes are available for encoding


Fig. 1. Polychronization concept. (a) A, B, and C are presynaptic neurons and D is a postsynaptic neuron. The different conduction delays are 4 ms, 2 ms, and 5 ms. (b) Optimal firing pattern where presynaptic neurons excite postsynaptic neuron D.

information into sequences of spikes, e.g., time-to-first-spike codes, rank order codes, and latency codes. In addition, spiking network architectures can be classified into different categories, i.e., feedforward networks, recurrent networks, and hybrid networks. In hybrid networks, some subpopulations may be feedforward, whereas others have recurrent structures; therefore, several hybrid network topologies are possible, such as synfire chains and reservoir computing [5].

A new mechanism for SNNs is neural assembly computing (NAC). NAC is an approach that investigates basic computation using “neural cell assemblies” [6]. From this viewpoint, spiking neural groups can perform several computational operations when the majority of the neurons in one group fire together. In fact, these activities represent logical functions (such as AND, OR, and NOT). In addition, assemblies may act like flip-flops in digital circuits to memorize data. By associating logical functions and flip-flops, it was shown that NAC can implement finite state automata, which are important for creating sequential machines [7]. NAC can also implement gates and data samplers, as well as decision-making circuits [8], which are important for constructing more complex spiking neural assembly structures. As a dynamic system, a single neuron may be affected by noise and can exhibit unstable behavior, whereas neuron assemblies are more tolerant of noise.

Neural coalitions can exhibit patterns, memorize them, and control their hierarchical relationships to generate algorithms, although the details of this process using spiking neurons are still unclear. However, the concept of neural assemblies comprising patterns of neuron groups is supported by some evidence; e.g., during muscle activity, the amount of force produced by a muscle can be changed by increasing or decreasing the number of active motor units. In the 1930s, Sherrington discussed this hypothesis in terms of neurons cooperating with each other to perform a complex task, and it was also claimed [9] that motor neurons only assist a specific muscle from a group of potentially active cells. Another example involves the basal ganglia, a set of nuclei located in the forebrain, where this brain area is related to action selection and reinforcement learning. The neural structure of the basal ganglia comprises several neural groups, which employ a mechanism similar to neural assemblies [10]. Furthermore, various terms are used to describe neural assemblies in neuroscience, such as persistent activity [11], neural synchrony [12], neural cell assemblies [12,13], spatiotemporal firing patterns [14], and oscillating networks, where assemblies of neurons have functional relations defined by synchronized oscillations, which is referred to as “binding by


Fig. 2. Bistable neural assembly or bistable polychronization group.

synchrony” [15]. This model has gained substantial support from experimental observations [16–18].

Polychronization has been proposed as a subtype of assembly formation in SNNs [19] in order to study a number of cognitive phenomena, such as associative memory, attention, and cross-modal binding. In a polychronous group (PG), the communication between neurons is characterized by the timing of spikes, which follow a precise temporal pattern that depends on conduction delays. In biological nervous systems, the strength of synapses changes based on the relative timing of pre- and postsynaptic spikes. Thus, spike timing-dependent plasticity (STDP) [20] rules have been introduced, which are now accepted as a foundation of synaptic plasticity. A detailed method has been proposed for spiking NAC [6], but the effects of plasticity mechanisms such as STDP on adjusting the synaptic weights have not been tested, whereas in a biological neural network, synaptic weights regulate the amount of excitation/inhibition that spikes generated by one neuron deliver to other neurons [21].

In this study, we investigated the effect of applying STDP in NAC. In Section 2.1, we describe the neuron model employed. We explain PGs in Section 2.2 and introduce NAC in Section 3. In Section 4, we discuss the STDP rules. In Section 5, we present our simulations and results, and we give our conclusion in Section 6.


Fig. 3. Raster plot representing the interactions among three polychronous groups (PGs). PG P causes PG A to fire and PG A causes PG B to fire. Due to the recurrent connections from group B to group A, a bistable neural loop based on two events is established [6].

Fig. 4. Hierarchical relationships among several neural assemblies that control the state of the agent, e.g., HUNGRY and FORAGING. This figure shows how neural assemblies can compute stochastic logic functions [6]. (The assemblies include ATTENTION, ATT_VISION, ATT_SMELL, HUNGRY, FORAGING, MOV_LEGS, and MOV_HEAD, together with groups V1, V2, S, A1, A2, I, F1, F2, L1, L2, H1, and H2.)

2. Background

2.1. Neuron model

The Izhikevich model [22] was employed in this study, which is considered to be as realistic as the Hodgkin–Huxley model while representing a compromise between accuracy and computability. The model is defined by two differential equations:

$$\frac{dv}{dt} = 0.04v^{2} + 5v + 140 - u + I, \qquad \frac{du}{dt} = a(bv - u) \qquad (1)$$

with the auxiliary after-spike resetting:

$$\text{if } v \ge 30 \text{ mV, then } \begin{cases} v \leftarrow c \\ u \leftarrow u + d \end{cases} \qquad (2)$$

where v is the membrane potential, u is the recovery variable, and I represents the input current. The membrane potential rises when a small pulse of current is applied. If the current is sufficiently strong, the membrane voltage crosses its apex (30 mV), after which the membrane potential and the recovery variable are reset according to the auxiliary equations (2).

Fig. 5. STDP rule [19]. The two exponential decay curves characterize variations in the STDP value for positive and negative time intervals between pre- and postsynaptic spikes.


Fig. 6. Pseudo-code employed to examine the effect of the STDP rule in NAC.

In this study, a, b, c, and d are dimensionless parameters, which can be described as follows:

a: time scale of the recovery variable u, with inverse-time (1/s) dimension; a lower value results in slower recovery.
b: sensitivity of the recovery variable u to sub-threshold fluctuations in the membrane potential v; the dimension of this parameter is 1/s.
c: after-spike reset value of the membrane potential v, caused by fast high-threshold K+ conductances.
d: after-spike reset of the recovery variable u, caused by slow high-threshold Na+ and K+ conductances.

A minimal simulation sketch of this model is given below.
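The following Python sketch integrates Eqs. (1) and (2); the regular-spiking parameter set and the 1 ms Euler step (with two half-steps for the membrane equation) are assumptions taken from Izhikevich [22], so this is an illustration rather than the simulation code used in this paper.

```python
import numpy as np

# Regular-spiking parameters from Izhikevich [22] (an assumed choice here).
a, b, c, d = 0.02, 0.2, -65.0, 8.0

def simulate(I, T=1000.0, dt=1.0):
    """Integrate the model for T ms under a constant input current I."""
    v, u = c, b * c                    # initial membrane potential and recovery
    spike_times = []
    for t in np.arange(0.0, T, dt):
        if v >= 30.0:                  # apex crossed: auxiliary reset, Eq. (2)
            spike_times.append(t)
            v, u = c, u + d
        # Eq. (1), membrane equation integrated in two half-steps for stability
        v += 0.5 * dt * (0.04 * v**2 + 5.0 * v + 140.0 - u + I)
        v += 0.5 * dt * (0.04 * v**2 + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)      # Eq. (1), recovery variable
    return spike_times

print(simulate(I=10.0)[:5])            # first few spike times (ms)
```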


2.2. Polychronous group

The PG concept was introduced by Izhikevich in 2005 [19]. Polychronization is a process represented in SNNs with propagation delays, and it illustrates the memory capacity of SNNs. The basic idea of polychronization is that presynaptic neurons may fire at different times, but because of the various conduction delays, the spikes generated by these neurons can arrive at a common target postsynaptic neuron simultaneously, which makes it fire. The output spikes from some of these postsynaptic neurons may arrive at other neurons at the same time, and thus new groups appear. A group of neurons with a particular time-locked firing pattern is known as a PG. The notion of polychronization is shown in Fig. 1, where the presynaptic neurons A, B, and C are connected to the postsynaptic neuron D with different conduction delays. The spiking pattern where neuron A fires at 1 ms, neuron B fires at 3 ms, and neuron C fires at 0 ms is effective in exciting neuron D because the spikes arrive at this neuron simultaneously. According to this figure, only specific firing patterns of the input neurons will arrive at the output neuron at the same time.
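As a quick check of this arrival-time condition, the sketch below reproduces the Fig. 1 example in Python; the delay and firing-time values are taken from the figure, while the coincidence test itself is an illustrative simplification.

```python
# Polychronization condition from Fig. 1: spikes from A, B, and C reach the
# postsynaptic neuron D simultaneously only for particular firing patterns.
delays = {"A": 4, "B": 2, "C": 5}        # conduction delays to D (ms)
fire_times = {"A": 1, "B": 3, "C": 0}    # the effective firing pattern (ms)

arrivals = {n: fire_times[n] + delays[n] for n in delays}
print(arrivals)                          # every spike arrives at t = 5 ms
coincident = len(set(arrivals.values())) == 1
print("D fires:", coincident)            # simultaneous arrival excites D
```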

3. Neural assembly computing

The concept of a bistable neural assembly (BNA) was first described by Ranhel [6]. In Fig. 2 [6], there are two small PGs, A and B, each comprising three neurons. The neurons in PG A are fully connected to PG B, and the neurons in PG B are fully connected to PG A. In this figure, the neurons in group A receive input stimuli from another group, such as PG P. If the axonal conduction delays between the neurons in PG P and neuron A1 (A2 or A3) are such that the spikes from neurons P1–P3 arrive at neuron A1 (A2 or A3) at the same time, then this neuron fires a spike. Therefore, PG P causes the neurons in PG A to fire spikes. This phenomenon is repeated from PG A to PG B, and since there are connections from PG B to PG A, group B makes PG A fire and a BNA loop (or bistable PG (BPG)) is established. Therefore, PG P detects a pattern. In Fig. 3, the raster plot shows that PG A and PG B maintain their internal states. Thus, the BNA can memorize an occurrence that establishes its state, and the firing of neurons in one group can also trigger other neural groups, so a hierarchical flow of causal events is established.

In the following, we explain how neural assemblies must be related to each other to obtain static logical functions and to control a sequence of actions. An example of an agent's neural network [6] is shown in Fig. 4, which comprises several neural groups. When the agent is hungry, it forages in the external environment and finds food using its senses of vision and smell. An assembly of the agent's neurons is related to the hungry state; thus, when the agent has a low energy level, the majority of the neurons in the group fire spikes and a persistent firing pattern occurs, after which a flow of causal activities follows. After the hungry state is detected, BNAs are established for attention and foraging. The attention and foraging states are memorized via the interactions between the respective neural groups (F1 and F2 for foraging, and A1 and A2 for attention). V1 and S are assemblies that correspond to vision and smell, which are triggered by spikes from A1. Group S is also triggered by A2, which means that S can occur for a brief moment via A1 and at another time via A2. Thus, group S fires due to A1 or A2, which is denoted as S = A1 + A2. Event S is an isolated event and it does not establish another event to create a BNA, but V1 triggers V2 and a new BNA loop is created between these groups. Fig. 4 also shows that L1 is triggered by spikes from F1 and F2. Therefore, the spikes from F1 and F2 must coincide to initiate group L1 (represented as L1 = F1·F2) and then group L2. The two assemblies H1 and H2 are both responsible for a common output and they are caused by the

Fig. 7. Raster plots representing the activity of random neural groups in the BPG shown in Fig. 2. (a–d) The STDP rule has been used to adjust the synaptic weights during the simulation time. (e) The STDP rule has not been used and all of the synaptic weights remained fixed during the simulation.



Fig. 8. Diagram of a typical neural network with six neural groups. The relationships among the neural groups are indicated.

same stimuli, but each is considered to be a distinct branch (H1 = F1·F2 and H2 = F2·F1).
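To make these assembly relations concrete, here is a hedged Python sketch that treats assembly firings as timestamped events; the function names and the 1 ms coincidence window are illustrative assumptions, since the paper realizes these relations with spiking assemblies rather than explicit logic.

```python
# Event logic of Fig. 4, with assembly firings represented by spike-volley
# times in ms (None = the assembly did not fire).
def fires_OR(t_a1, t_a2):
    """S = A1 + A2: S is triggered by a volley from either assembly."""
    return t_a1 is not None or t_a2 is not None

def fires_AND(t_f1, t_f2, window=1.0):
    """L1 = F1.F2: L1 fires only when volleys from F1 and F2 coincide."""
    return (t_f1 is not None and t_f2 is not None
            and abs(t_f1 - t_f2) <= window)

print(fires_OR(10.0, None))      # True: A1 alone triggers S
print(fires_AND(12.0, 12.5))     # True: coincident volleys initiate L1
print(fires_AND(12.0, 30.0))     # False: volleys too far apart
```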

4. Spike timing-dependent plasticity

Synaptic plasticity is a process that modifies the connections (synapses) between neurons to change the strength of a synapse. Therefore, a synapse can be strengthened or weakened based on its activity. From a biological viewpoint, repetitive electrical activities can change the synaptic weights. These increases or decreases in synaptic strength are referred to as long-term potentiation or long-term depression when the changes occur over the long term (timescale of hours), whereas they are known as short-term potentiation and short-term depression when the changes occur in the short term (timescale of seconds or minutes).

STDP [20] is a synaptic modification model based on the relative timing of presynaptic and postsynaptic spikes. According to STDP, if the presynaptic spike arrives at the postsynaptic neuron before the postsynaptic neuron fires, the synapse is potentiated, whereas the opposite relationship leads to depression of the synapse. Therefore, the precise timing of the arrival of spikes from presynaptic to postsynaptic neurons is very important [23–25]. Multiple STDP models have been proposed [26]. In the model illustrated in Fig. 5 [19], increases and decreases in the weights of the synaptic connections correspond to the positive and negative parts of the STDP curves shown. The synaptic weight increases until its value equals the cut-off value, after which it is fixed. In Fig. 5, the two exponential decay curves characterize variations in the STDP value for positive and negative time intervals t, which are described by the following formulae:

$$A_{+}\,e^{-t/\tau_{+}} \qquad (3)$$

$$A_{-}\,e^{t/\tau_{-}}, \qquad (4)$$

where $A_{+} = 0.1$, $A_{-} = 0.12$, and the time constants are $\tau_{+} = \tau_{-} = 20$ ms (the values of these parameters were obtained from Izhikevich [19]).
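A minimal Python sketch of this window follows, assuming the usual convention that the depression branch of Eq. (4) enters the weight update with a negative sign, as in the negative lobe of Fig. 5.

```python
import math

# STDP window of Eqs. (3)-(4); parameter values from Izhikevich [19].
A_PLUS, A_MINUS = 0.10, 0.12
TAU_PLUS = TAU_MINUS = 20.0          # time constants (ms)

def stdp(dt_ms):
    """Weight change for the interval dt = t_post - t_pre (ms)."""
    if dt_ms >= 0:                   # pre arrived before post fired: potentiate
        return A_PLUS * math.exp(-dt_ms / TAU_PLUS)
    return -A_MINUS * math.exp(dt_ms / TAU_MINUS)   # otherwise: depress

print(stdp(5.0), stdp(-5.0))         # ~ +0.078 and ~ -0.093
```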

5. Simulation and results

In a previous study [6], investigations were performed to address the basic issues related to digital NAC, but the synaptic weights of all the connections were fixed in the simulations. In the present study, we investigated the effect of applying the STDP rule to several different types of SNNs, where each network comprised a number of neural assemblies. The structures of these neural networks differed in terms of the interactions among the assemblies, the number of neurons in the assemblies, and other parameters. Different Izhikevich neuron types were also selected randomly for each network.

The pseudo-code employed for the simulation is presented in Fig. 6, which shows that some parameters were defined initially, where D is the average conduction delay between two neural groups. The axonal conduction delays in the mammalian neocortex may vary from 0.1 ms to 44 ms depending on the type and location of the neuron [19], so D was assumed to be 40 ms in these simulations. In addition, w0 is the initial synaptic weight for connections, which we determined empirically to allow different neuron types to perform their functions correctly. When the STDP rule was not used, this parameter was treated as a fixed synaptic weight for all connections during the simulation (w = w0), but when the STDP rule was applied, it was only the initial synaptic weight and each connection's weight could change during the simulation (w ≠ w0). Another parameter, sm, is the maximum synaptic strength; the values of the weights were kept in a range between 0 and sm, which was typically less than 10 mV [19]. The noise factor for the synaptic input current (nA) was also assumed to be 0.05.

After defining the topology of the neural network, including the neuron types, the relationships among assemblies, the precise axonal conduction delay for each connection based on the average conduction delay between two neural groups (D), and other parameters, we simulated the neural networks and the STDP rule was employed to adjust the synaptic weights. As mentioned in the previous section, the STDP rule can potentiate or depress synaptic weights according to the relative timing of the presynaptic and postsynaptic spikes. According to Eqs. (3) and (4), to approximate the exponential curve of the STDP in each time step (1 ms), we have:

$$STDP(t+1) \leftarrow e^{-\frac{1}{20}} \times STDP(t), \qquad (5)$$

and thus

$$STDP(t+1) \leftarrow 0.95 \times STDP(t). \qquad (6)$$

We determined the timing of the last spike's arrival from the presynaptic neurons at each postsynaptic neuron, and the synaptic weights were adjusted based on the positive or negative parts of the STDP curve. Instead of varying the synaptic weights Wij directly, the simulator changed their derivatives ΔWij. The value of ΔWij for each Wij was stored in the variable sd and the weights were


updated using the following equations:

$$W \leftarrow W + 0.01 + sd \qquad (7)$$

$$sd \leftarrow 0.9\,sd, \qquad (8)$$

where 0.01 is the activity-independent increase in the synaptic weight, which is required to potentiate synapses connected to silent neurons [27].

Fig. 9. Raster plots representing the activity and timing of random neural groups in the neural network shown in Fig. 8. (a–d) The STDP rule has been used to adjust the synaptic weights during the simulation time. (e) The STDP rule has not been used and all of the synaptic weights remained fixed during the simulation.

The simulation results obtained for the BNA are presented in Fig. 7. This BNA was initialized by another neural group, such as PG P shown in Fig. 2. The average conduction delay between two neural groups was assumed to be 40 ms for this bistable loop, and each group comprised 100 neurons. Fig. 7(a–d) shows the results of simulating this BPG while the STDP rule was applied; the synaptic weights changed according to STDP and converged to fixed values, as shown in Fig. 7(d). Fig. 7(e) shows the raster plot obtained with fixed synaptic weights, i.e., when STDP was not applied during the simulation.

The second set of simulations was performed for a neural network with six neural assemblies. Each assembly comprised 50 excitatory Izhikevich neurons of randomly selected types (inhibitory neurons were omitted from this selection). The topology of the network is depicted in Fig. 8, where D is the average conduction delay between two assemblies (assumed to be 40 ms) and w0 is the initial value of the synaptic weight for each connection; these values remained fixed during the simulation when STDP was not used. In this neural network, assemblies 1 and 2 formed one BNA. Assembly 4 could fire 0.5D after assembly 2 or 0.5D after assembly 3, because the synaptic weights (w0) of the connections between assembly 2 and assembly 4, or between assembly 3 and assembly 4, were sufficiently strong to trigger assembly 4 (assembly 4 = assembly 2 + assembly 3). It should be noted that this network was used to investigate the effects of STDP in NAC, but the approach is also applicable to other neural networks with different structures. The simulation results obtained for this network when the STDP rule was applied are presented in Fig. 9(a–d); the synaptic weights evolved according to STDP and converged to their new values, as shown in Fig. 9(d). Fig. 9(e) shows the raster plot obtained using fixed values for the synaptic weights, without the STDP rule.

In the following, we present another example to illustrate the effects of STDP in NAC, as shown in Fig. 10. This network comprised seven neural assemblies, with 25 excitatory Izhikevich neurons of randomly selected types in each assembly. Two BNAs were formed in this network, i.e., assembly 2–assembly 3 and assembly 5–assembly 6. It should be noted that neither assembly 3 nor assembly 5 could trigger assembly 6 on its own, because the synaptic weight between assembly 3 and assembly 6 was 0.5w0 and the synaptic weight between assembly 5 and assembly 6 was 0.5w0, which were not sufficiently strong to fire assembly 6. To excite the neurons in assembly 6, assembly 3 and assembly 5 had to fire spikes together and the spikes needed to arrive at assembly 6 at the same time; the conduction delay values were therefore selected as shown in Fig. 10. The simulation results obtained for this network with and without the STDP rule are shown in Fig. 11(a) and (b), respectively. In addition, Fig. 11(a) shows the final firing pattern for this network. It is clear that the synaptic weights converged to their final values after some time had elapsed when the STDP rule was applied.
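Combining Eqs. (5)–(8), the simulator's per-millisecond bookkeeping can be sketched as follows; the array names (w, sd, stdp_trace) and the clipping to [0, sm] are assumptions consistent with the text and with [19], not the authors' actual code.

```python
import numpy as np

SM = 10.0                          # maximum synaptic strength sm (mV)

def step(w, sd, stdp_trace):
    """One 1 ms update of weights w, weight derivatives sd, and STDP traces.

    On pre/post spikes (not shown), sd would additionally be incremented or
    decremented by the STDP curve values of Eqs. (3)-(4).
    """
    stdp_trace *= 0.95             # Eqs. (5)-(6): e^(-1/20) trace decay per ms
    w += 0.01 + sd                 # Eq. (7): activity-independent 0.01 plus sd
    np.clip(w, 0.0, SM, out=w)     # weights kept in the range [0, sm]
    sd *= 0.9                      # Eq. (8): derivative decay
    return w, sd, stdp_trace
```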
In order to perform a quantitative comparison of the two simulation states (with and without the STDP rule), we defined a formula for computing the average number of firings (ANF) by assemblies during an interval, given in Eq. (9) below.



Fig. 10. Diagram of a typical neural network with seven neural groups. The relationships among the neural groups are indicated.

Fig. 11. Raster plots representing the activity of random neural groups in the neural network shown in Fig. 10. (a) The STDP rule has been used to adjust the synaptic weights. (b) The STDP rule has not been used and all of the synaptic weights were fixed during the simulation.


Table 1. Summary of the simulation results obtained for two states: without (A) and with (B) the STDP rule.

Case study   ANF_stateA   ANF_stateB   Diff_a1   Diff_r1 (%)   Freq. of state A (Hz)   Freq. of state B (Hz)   Diff_a2   Diff_r2 (%)
Network 1    68           309          241       77.99         22.7                    83.3                    60.6      72.74
Network 2    30           140          110       78.57         11.5                    40                      28.5      71.25
Network 3    21           54           33        61.11         14.8                    34.3                    19.5      56.85

Table 2. Correlation coefficients of patterns for two states: without (A) and with (B) the STDP rule.

            Assembly number         Ave_Corr for state A   Ave_Corr for state B
Network 1   1 (neurons 101–200)     0.8867                 0.9546
            2 (neurons 201–300)     0.9547                 0.9679
            Average                 0.9207                 0.9612
Network 2   2 (neurons 101–150)     0.9401                 0.9673
            3 (neurons 151–200)     0.8907                 0.9566
            4 (neurons 201–250)     0.9018                 0.9512
            5 (neurons 251–300)     0.7845                 0.9624
            Average                 0.8793                 0.9594
Network 3   2 (neurons 51–75)       0.9537                 0.9818
            3 (neurons 76–100)      0.9183                 0.9700
            4 (neurons 101–125)     –                      –
            5 (neurons 126–150)     0.8510                 0.9331
            6 (neurons 151–175)     0.8480                 0.9723
            Average                 0.8927                 0.9643

$$ANF = \frac{\sum_{run=1}^{10} \sum_{i=1}^{num_{Asm}} NF(i)}{10 \times num_{Asm}}, \quad \text{for intervals of 1000 ms} \qquad (9)$$

where num_Asm is the number of assemblies in a neural network and NF(i) is the number of spikes in the ith assembly of the neural network during an interval of 1000 ms. This average was calculated over 10 runs of the program. ANF was calculated for the two states, without and with the STDP rule, for each neural network; the first state is referred to as A and the second as B in Table 1. Two criteria were used to quantify the difference between the two states:

$$Diff_{a} = \left| ANF_{stateA} - ANF_{stateB} \right| \qquad (10)$$

$$Diff_{r} = \frac{\left| ANF_{stateA} - ANF_{stateB} \right|}{ANF_{stateB}} \times 100 \qquad (11)$$
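The following Python sketch implements Eqs. (9)–(11); spike_counts is a hypothetical runs × assemblies array of per-assembly spike counts over 1000 ms windows, with 10 runs as in the text.

```python
import numpy as np

def anf(spike_counts):
    """Eq. (9): average number of firings over runs and assemblies."""
    runs, num_asm = spike_counts.shape    # e.g., 10 runs x numAsm assemblies
    return spike_counts.sum() / (runs * num_asm)

def diffs(anf_a, anf_b):
    diff_a = abs(anf_a - anf_b)           # Eq. (10): absolute difference
    diff_r = diff_a / anf_b * 100.0       # Eq. (11): relative difference (%)
    return diff_a, diff_r

print(diffs(68, 309))                     # Network 1 in Table 1: (241, ~77.99)
```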

In this table, the first and second columns show the ANF calculated for the three networks without and with STDP. The next two columns indicate the difference between the two states A and B: Diff_a1 and Diff_r1 are the absolute and relative differences in ANF between states A and B. In addition, Diff_a2 and Diff_r2 represent the absolute and relative differences between the firing frequencies of states A and B, respectively. The synaptic weights were fixed in all of the simulations for state A, whereas in state B the synaptic weights changed according to the STDP rule. Table 1 indicates that the ANF values for the three networks (respectively) were (309, 140, 54) in state B, which were higher than those in state A (68, 30, 21). In addition, the frequencies for state B were (83.3, 40, 34.3) Hz, which were higher than those for state A (22.7, 11.5, 14.8) Hz in the three networks (respectively). These higher frequencies indicate that the network operated faster when STDP was used in the assemblies of the neural networks.

Under the STDP rule, synaptic weight depression occurs if the spike of the presynaptic neuron arrives just after the postsynaptic neuron fires, and the synaptic weight then decreases slowly to zero. This type of connection with a zero weight is not required and it should be removed. Furthermore, if the spike of the presynaptic

neuron arrives at the postsynaptic neuron before it fires, then the synaptic connection is potentiated slowly. This type of connection, which makes the postsynaptic neuron fire, should be strengthened. Therefore, the STDP rule creates stronger causal interactions in these neural networks. Based on this mechanism, as demonstrated in Table 1, STDP increases the average number of firings by fine-tuning the connection weights in the neural assemblies, even when different randomly selected Izhikevich neuron types are used in the networks. Moreover, when an event process occurs that fires the neural assemblies, the STDP rule accelerates the process, which is reflected in the firing frequency during events.

In a PG, the communication between neurons is characterized by the timing of spikes, which generate a precise temporal pattern. Depending on the delays involved, these characteristic patterns must be repeated. Correlations can measure the similarities between the spike timings of neurons [28], and they have been used here to investigate the effects of STDP on the repeated temporal pattern of a PG in NAC networks. In NAC, a neural assembly fires at certain times; e.g., as shown in Fig. 9(e), assembly 2 (neurons 151–200) was activated two times during a period of 200 ms. When a neural assembly is activated, the neuron firing pattern in the assembly can be described by a matrix. In order to quantify the similarity between these patterns, the correlation coefficient can be calculated between the related matrices. The correlation coefficient for two matrices containing two repeated patterns of one assembly can be calculated using Eq. (12), where mean2(·) is the average of the matrix elements:

$$r = \frac{\sum_{m}\sum_{n}\left(A_{mn} - \bar{A}\right)\left(B_{mn} - \bar{B}\right)}{\sqrt{\left(\sum_{m}\sum_{n}\left(A_{mn} - \bar{A}\right)^{2}\right)\left(\sum_{m}\sum_{n}\left(B_{mn} - \bar{B}\right)^{2}\right)}}, \qquad (12)$$

where $\bar{A} = \mathrm{mean2}(A)$ and $\bar{B} = \mathrm{mean2}(B)$.
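A direct Python transcription of Eq. (12) follows, assuming the firing-pattern matrices are neuron × time-bin arrays of equal shape; the shapes and values below are illustrative only.

```python
import numpy as np

def corr2(A, B):
    """Eq. (12): 2-D correlation coefficient between pattern matrices A and B."""
    A = A - A.mean()                     # subtract mean2(A)
    B = B - B.mean()                     # subtract mean2(B)
    return (A * B).sum() / np.sqrt((A**2).sum() * (B**2).sum())

rng = np.random.default_rng(0)
A = rng.integers(0, 2, size=(50, 40))    # e.g., 50 neurons x 40 time bins
print(corr2(A, A))                       # identical patterns give r = 1.0
```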

Table 2 shows the calculated correlation coefficients for our examples, where we calculated the correlation for each pair of patterns in an assembly. To increase the precision of the computations, the program was run five times and we calculated the correlation coefficients for 10 pairs in each run, before computing the average according to Eq. (13):

$$Ave\_Corr = \frac{\sum_{run=1}^{5} \sum_{i=1}^{10} r_{i}}{5 \times 10} \qquad (13)$$
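Continuing the sketch above, Eq. (13) is a plain average over the 5 × 10 correlation coefficients; corrs is a hypothetical array of r_i values.

```python
import numpy as np

corrs = np.full((5, 10), 0.96)        # placeholder r_i values (5 runs x 10 pairs)
ave_corr = corrs.sum() / (5 * 10)     # Eq. (13); equivalent to corrs.mean()
print(ave_corr)
```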

This process was repeated for the three networks, and the results are shown in Table 2, which compares the correlation coefficients between the patterns for states A and B. According to these results, the correlations were higher in state B, where the patterns in the assemblies correlated more strongly with each other. For example, in network 2, the average correlation was 0.9594 in state B, whereas it was 0.8793 in state A. Thus, the STDP rule is a synaptic plasticity mechanism that creates more distinct events and improves the behavior of NAC networks.

6. Conclusion

In NAC, the interactions among neural assemblies generate a flow of causal events. One of these interactions is the BNA (BPG), which is capable of memorizing internal states. This mechanism may be present in biological neural networks, which have been shaped by evolution over time, but it is difficult to prove this hypothesis. In this study, we investigated a digital assembly that operates in two well-defined states: ON, where all or most of the neurons fire together; and OFF, where none of the neurons in the assembly fires. Based on previous studies, we examined dynamical effects in SNNs that are due mainly to plasticity mechanisms. Real biological systems employ complex mechanisms, which have been modeled theoretically by STDP. In fact, learning and cognitive processes are partly explained by these dynamics, which is why exploiting dynamics such as STDP is important in SNNs and NAC. Thus, we investigated the effect of applying the STDP rule in NAC networks, and our simulation results showed that the STDP rule is a synaptic plasticity mechanism that improves the behavior of NAC networks. We computed correlation coefficients to measure the similarities among events with and without the STDP rule, which demonstrated that distinct events became stronger and occurred at a higher frequency when the STDP rule was applied. Our results are consistent with the natural behavior of biological neural systems, whose reactions when learning or memorizing a repeated pattern may become faster and more accurate. This was expected, because STDP strengthens the connections in a network that are effective for firing neurons, whereas it depresses other connections.

References

[1] Pankaj Mehra, Benjamin W. Wah, Artificial Neural Networks: Concepts and Theory, IEEE Computer Society Press, Los Alamitos, 1992.
[2] Gérard Dreyfus, Neural Networks: Methodology and Applications, Springer Science & Business Media, Berlin, Germany, 2005.
[3] Hélene Paugam-Moisy, Sander Bohte, Computing with spiking neuron networks, in: Handbook of Natural Computing, Springer, Berlin, Heidelberg, 2012, pp. 335–376.
[4] Wulfram Gerstner, Werner M. Kistler, Spiking Neuron Models: Single Neurons, Populations, Plasticity, Cambridge University Press, Cambridge, 2002.
[5] Jilles Vreeken, Spiking neural networks, an introduction, Technical Report UUCS-2003-2008, Institute for Information and Computing Sciences, Utrecht University, 2002.
[6] João Ranhel, Neural assembly computing, IEEE Trans. Neural Netw. Learn. Syst. 23 (6) (2012) 916–927.
[7] João Ranhel, Neural assemblies and finite state automata, in: Proceedings of the 2013 BRICS Congress on Computational Intelligence and 11th Brazilian Congress on Computational Intelligence (BRICS-CCI & CBIC), IEEE, 2013.
[8] Jose Rodrigues de Oliveira-Neto, et al., Magnitude comparison in analog spiking neural assemblies, in: Proceedings of the IEEE International Joint Conference on Neural Networks (IJCNN), 2014.
[9] Richard Stephen Creed, et al., Reflex Activity of the Spinal Cord, Oxford University Press, United Kingdom, 1932.
[10] Javier Baladron, Fred H. Hamker, A spiking neural network based on the basal ganglia functional anatomy, Neural Netw. 67 (2015) 1–13.
[11] Xiao-Jing Wang, Synaptic reverberation underlying mnemonic persistent activity, Trends Neurosci. 24 (8) (2001) 455–463.
[12] György Buzsáki, Neural syntax: cell assemblies, synapsembles, and readers, Neuron 68 (3) (2010) 362–385.
[13] Gilles Laurent, Olfactory network dynamics and the coding of multidimensional signals, Nat. Rev. Neurosci. 3 (11) (2002) 884–895.
[14] Qiang Yu, Temporal Coding and Learning in Spiking Neural Networks, Ph.D. dissertation, 2014.
[15] Wolf Singer, Time as coding space, Curr. Opin. Neurobiol. 9 (2) (1999) 189–194.
[16] Wolf Singer, Neuronal synchrony: a versatile code for the definition of relations, Neuron 24 (1) (1999) 49–65.
[17] Francisco Varela, et al., The brainweb: phase synchronization and large-scale integration, Nat. Rev. Neurosci. 2 (4) (2001) 229–239.
[18] Peter Uhlhaas, et al., Neural synchrony in cortical networks: history, concept and current status, Front. Integr. Neurosci. 3 (2009) 17.
[19] Eugene M. Izhikevich, Polychronization: computation with spikes, Neural Comput. 18 (2) (2006) 245–282.
[20] Sen Song, Kenneth D. Miller, Larry F. Abbott, Competitive Hebbian learning through spike-timing-dependent synaptic plasticity, Nat. Neurosci. 3 (9) (2000) 919–926.
[21] Gayle M. Wittenberg, Samuel S.-H. Wang, Malleability of spike-timing-dependent plasticity at the CA3–CA1 synapse, J. Neurosci. 26 (24) (2006) 6610–6617.
[22] Eugene M. Izhikevich, Simple model of spiking neurons, IEEE Trans. Neural Netw. 14 (6) (2003) 1569–1572.
[23] Larry F. Abbott, Sacha B. Nelson, Synaptic plasticity: taming the beast, Nat. Neurosci. 3 (2000) 1178–1183.
[24] Richard Kempter, Wulfram Gerstner, J. Leo van Hemmen, Hebbian learning and spiking neurons, Phys. Rev. E 59 (4) (1999) 4498.
[25] Guo-qiang Bi, Mu-ming Poo, Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type, J. Neurosci. 18 (24) (1998) 10464–10472.
[26] Natalia Caporale, Yang Dan, Spike timing-dependent plasticity: a Hebbian learning rule, Annu. Rev. Neurosci. 31 (2008) 25–46.
[27] Niraj S. Desai, et al., Critical periods for experience-dependent synaptic scaling in visual cortex, Nat. Neurosci. 5 (8) (2002) 783–789.
[28] Shaghayegh Gomar, Arash Ahmadi, Digital multiplierless implementation of biological adaptive-exponential neuron model, IEEE Trans. Circuits Syst. I: Regul. Pap. 61 (4) (2014) 1206–1219.

Elahe Eskandari received the B.Sc. and M.Sc. degrees from the Department of Electrical Engineering, Razi University, Kermanshah, Iran, in 2011 and 2013, respectively. Her research interests include high-performance computing, neural networks, neuromorphic engineering, and brain simulation.

Arash Ahmadi received the B.Sc. and M.Sc. degrees in electronics engineering from Sharif University of Technology and Tarbiat Modarres University, Tehran, Iran, in 1993 and 1997, respectively, and the Ph.D. degree in electronics from the University of Southampton, U.K., in 2008. He was with Razi University, Kermanshah, Iran, as a faculty member from 1997 to 2014, and was a Fellow Researcher with the University of Southampton from 2008 to 2010. He is currently with the Electrical and Computer Engineering Department, University of Windsor. His current research interests include hardware design and implementation, high-level synthesis, bio-inspired computing, neuromorphic systems, and memristors.

Shaghayegh Gomar received the B.Sc. and M.Sc. degrees from the Department of Electrical Engineering, Razi University, Kermanshah, Iran, in 2011 and 2013, respectively. Her research interests include neuromorphic engineering, brain simulation, digital implementation, and signal processing.
