Accepted Manuscript
Spiking Neural P Grey Wolf Optimization System: Novel Strategies for Solving Non-determinism Problems
Moustafa Zein, Ammar Adl, Aboul Ella Hassanien
PII: S0957-4174(18)30797-8
DOI: https://doi.org/10.1016/j.eswa.2018.12.029
Reference: ESWA 12370
To appear in: Expert Systems With Applications
Received date: 7 May 2018
Revised date: 12 November 2018
Accepted date: 17 December 2018
Please cite this article as: Moustafa Zein, Ammar Adl, Aboul Ella Hassanien, Spiking Neural P Grey Wolf Optimization System: Novel Strategies for Solving Non-determinism Problems, Expert Systems With Applications (2018), doi: https://doi.org/10.1016/j.eswa.2018.12.029
This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.
Highlights
• Novel strategies for solving the non-determinism problem of Spiking Neural P systems.
• Design of a new mathematical model of the grey wolf optimization algorithm.
• A method to control the spike-copying process among neurons is developed.
• A time control approach is considered to avoid non-determinism inside neurons.
Spiking Neural P Grey Wolf Optimization System: Novel Strategies for Solving Non-determinism Problems

Moustafa Zein (a), Ammar Adl (b), Aboul Ella Hassanien (a,c,∗)

(a) Faculty of Computers and Information, Cairo University, Egypt.
(b) Faculty of Computers and Information, Beni-Suef University, Egypt.
(c) Scientific Research Group in Egypt (SRGE)

Abstract
Spiking neural P systems (SN P systems, for short) are the latest branch of membrane computing, inspired by the biological behavior of spiking neurons. They are true distributed and parallel systems, modeled to address the time-consumption problem and to bring the concept of parallelism into the computing field. This paper proposes novel strategies for solving the non-determinism problem of SN P systems. The proposed algorithm relies on the parallelism feature to simulate the social hierarchy, tracking, encircling, and attacking behaviors of the grey wolf optimizer. It is modeled as a collaboration between a set of SN P systems in order to obtain a feasible solution in polynomial time. Moreover, a new method named the power of signal is proposed to control the spike-copying process between neurons and to differentiate between arithmetic operations. Additionally, a time control approach is proposed to avoid non-determinism inside neurons by applying the determinism feature when firing rules. Theoretical and empirical experiments proved that the algorithm halts successfully, and showed the effectiveness of the proposed neural systems in obtaining an optimal solution in a reasonable time. As a result, this study counts as a significant advancement in intelligent and optimization systems, as it has a direct impact on enhancing the performance of these systems and their applications.

∗ Corresponding author. Address: 5 Ahmed Zewail, Ad Doqi, Giza, Egypt. Tel.: +20 02 33381687
Email addresses: [email protected] (Moustafa Zein), [email protected] (Ammar Adl), [email protected] (Aboul Ella Hassanien)
Preprint submitted to Expert Systems with Applications, December 18, 2018
Keywords: SN P systems, Grey Wolf Optimizer, Power of Signal, Bio-inspired Computing, Membrane Computing.

1. Introduction
Natural computing is instantiated mainly from the behavior, functions, and structure of biological and natural systems. It is a wide field that includes many research trends, such as membrane computing (Păun, 2000), artificial neural networks (Wang et al., 2013; Wilusz, 1995; Livingstone, 2008; Juang et al., 2011), and DNA-based molecular computing (Arto Salomaa & Rozenberg, 1999). Membrane computing is a theoretical research field that simulates the functionality and structure of membranes on three levels: cells, tissues, and neurons. It is employed to solve many computing problems because it obtains an optimal solution in a reasonable time (Zhang et al., 2013; Ramanujan & Krithivasan, 2013; Cabarle et al., 2011, 2012). There are three classes of P systems: cell-like, tissue-like, and neural-like P systems (Păun, 2000). The third class is presented in spiking neural form under the name Spiking Neural P systems (SN P systems for short).

SN P systems aim at developing computational models based on the neurobiological behavior of spiking neurons (Ionescu et al., 2006b; Păun et al., 2007; Amin & Fujii, 2004). According to the definition of SN P systems, the third generation of neural networks embraces SN P systems (Jiang et al., 2016; Maass, 1997; Gerstner & Kistler, 2002; Maass & Bishop, 2001). Consequently, they provide an innovative perspective from which to investigate spiking neural networks and are reported to be powerful computing models (Jiang et al., 2016; Song & Pan, 2016; Song et al., 2015, 2013; Zhang et al., 2014). SN P systems have novel features for dealing with various real applications and for solving problems. These features depend on the parallelism and realistic representation of biological spiking neurons. SN P systems are represented as a directed graph, where nodes refer to spiking neurons and edges correspond to synapses between neurons. Spiking neurons have a set of rules of two types: forgetting rules and spiking rules (Krithivasan et al., 2011). Every spiking neuron sends signals as spike trains over synapses that carry encoded
information, incorporating this spatial and temporal information into the computation processes. Additionally, neurons use the incoming spike trains to encode their responses or messages (Jiang et al., 2016; Peng et al., 2017). Timing and spike trains are two important factors because they keep spiking, encoding, and giving specific information (Ionescu et al., 2006a; Freund & Kogler, 2010).

Many expert and intelligent systems in different areas have used membrane computing to introduce new methods and attempts, leading to better theoretical advancement. For instance, an SN P system is used to tackle a challenge in computational biology, namely identifying the nuclear export signal from high-throughput data of amino acid sequences (Chen et al., 2018). Moreover, a new intelligent ranking system in cheminformatics has been introduced based on the computational power of P systems (Adl et al., 2016). Recently, SN P systems have been considered an impressive solution developed in many intelligent applications, such as the semantics of deductive database systems (Diaz-Pernil & Gutiérrez-Naranjo, 2018) and a parallel multiplier (Diaz et al., 2017). Therefore, SN P systems have a powerful participation in expert systems and applications from both the mathematical and theoretical sides (Wu et al., 2018).
One of the most important problems is optimization in general. An optimization problem is defined as a problem that cannot be solved to optimality by any deterministic method within a reasonable or specific polynomial time limit (Coello, 2004). Many optimization methods, strategies, and algorithms have been introduced to solve this type of problem (Mirjalili et al., 2014; Coello, 2004; Brownlee, 2011; Passino, 2010; Bansal et al., 2014), but most of these studies focused on reaching the best result without serious consideration of the time limit. One of the motivations of this study is to define a novel solution with deliberation on the obstacles of the optimization problem: the time limit and the optimal solution. On the other hand, spikes are copied from the source neuron to all connected neurons, not only to the destination, and this is not suitable for intelligent systems. Therefore, this paper includes a new solution to this problem, which is evaluated and used in a meta-heuristic algorithm.
Time consumption is considered a major problem in optimization algorithms. It comes from the sequential modeling of optimization algorithms. In contrast, the parallelism feature contributes powerful added value to any algorithm or system. The objective of this study is to propose SN P systems as an optimization algorithm with new definite strategies (determinism control and power of signal), named the Spiking Neural P Grey Wolf Algorithm (SPG). SN P systems provide the parallelism feature that can simulate the natural movement behavior of swarms, which is the core of the SPG construction. Moreover, the parallelism feature is employed to solve the time-consumption problem through SPG. On the other hand, the power of signal is a new concept inspired by biology, where spikes are short electrical pulses whose voltages represent the signal power of a spike train. The remaining sections are organized as follows: Section 2 recalls the definition and previous work on the Grey Wolf Optimizer and SN P systems; Section 3 discusses the proposed model construction; Sections 4.1 and 4.2 articulate the experiments and the discussion of results. Finally, Section 5 outlines future work and concludes this study.
2. Related Work
In this section, related studies on two topics, the Grey Wolf Optimizer (GWO) and SN P systems, are discussed. The mathematical model and natural background of GWO are presented; moreover, the definition of SN P systems and their applications are reviewed.
2.1. Grey Wolf Optimizer
GWO is a meta-heuristic, nature-inspired algorithm, inspired by the social dominance hierarchy and hunting behavior of grey wolves (Mirjalili et al., 2014). The social hierarchy is distributed among four classes: the alpha, beta, delta, and omega levels. Every level has a main role in the grey wolf pack, starting with alpha and ending with omega. Alpha has the upper hand in taking decisions about hunting and managing the pack. Beta helps alpha take decisions, and delta wolves have to submit to alphas and betas but dominate the omegas. Omega submits information to all the other dominant wolves (Mirjalili et al., 2014). The hunting process has three main steps:
1. Tracking, chasing, and approaching the prey;
2. Pursuing, encircling, and harassing the prey;
3. Attacking the prey.
The mathematical model of the social hierarchy is defined as follows: the fittest solution is considered the alpha α, and the second- and third-best solutions are named beta β and delta δ. The rest of the candidate solutions are considered to be omega ω, and the hunting process is considered the optimization procedure (Medjahed et al., 2016). The mathematical model of GWO is defined according to the following steps. The algorithm commences with encircling a prey, and the encircling behavior is modeled as follows:
D = |C · xp(t) − x(t)|,    (1)
x(t + 1) = xp(t) − A · D,    (2)
where t indicates the current iteration; A and C are coefficient vectors; xp is the position vector of the prey; and x is the position vector of a grey wolf (all of these are vectors) (Mirjalili et al., 2014).
The vectors A and C are calculated as follows:

A = 2a · r1 − a,    (3)
C = 2 · r2,    (4)
where a is linearly decreased from 2 to 0 over the iterations, and r1, r2 are random vectors in [0, 1]. In this paper, the generation equation a = 2 − t · (2/Nmax) is employed to produce the value of a at every iteration, taken from the implementation of GWO (Mirjalili et al., 2014), where Nmax is the maximum number of iterations. The hunting behavior process is mathematically modeled as follows:

Dα = |C1 · xα − x|,    (5)
Dβ = |C2 · xβ − x|,    (6)
Dδ = |C3 · xδ − x|,    (7)
x1 = xα − A1 · Dα,    (8)
x2 = xβ − A2 · Dβ,    (9)
x3 = xδ − A3 · Dδ,    (10)
x(t + 1) = (x1 + x2 + x3)/3.    (11)
The final position lies at a random point within a circle defined by the positions of alpha, beta, and delta in the search space. That position is introduced as the optimal solution.
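As a concrete reading of Eqs. (1)–(11), the following is a minimal NumPy sketch of one iteration of the canonical (sequential) GWO; the sphere objective, pack size, and search bounds are illustrative assumptions, not values fixed by the paper:

```python
import numpy as np

def gwo_step(wolves, fitness_fn, a, rng):
    """One GWO iteration: rank the pack, then move every wolf toward
    alpha, beta, and delta following Eqs. (1)-(11)."""
    order = np.argsort([fitness_fn(w) for w in wolves])
    leaders = wolves[order[:3]]                 # x_alpha, x_beta, x_delta
    new_wolves = np.empty_like(wolves)
    for i, x in enumerate(wolves):
        candidates = []
        for x_lead in leaders:
            r1 = rng.random(x.size)
            r2 = rng.random(x.size)
            A = 2 * a * r1 - a                  # Eq. (3)
            C = 2 * r2                          # Eq. (4)
            D = np.abs(C * x_lead - x)          # Eqs. (5)-(7)
            candidates.append(x_lead - A * D)   # Eqs. (8)-(10)
        new_wolves[i] = sum(candidates) / 3.0   # Eq. (11)
    return new_wolves

def gwo(fitness_fn, dim=2, n_wolves=8, n_iter=50, seed=0):
    """Run GWO and return the best wolf found."""
    rng = np.random.default_rng(seed)
    wolves = rng.uniform(-5.0, 5.0, size=(n_wolves, dim))
    for t in range(n_iter):
        a = 2 - t * (2.0 / n_iter)              # a decays linearly from 2 to 0
        wolves = gwo_step(wolves, fitness_fn, a, rng)
    return min(wolves, key=fitness_fn)
```

On the sphere function, for instance, the pack contracts around the origin; this sequential loop is exactly the behavior that the SPG construction later distributes over parallel SN P systems.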
GWO has become very popular and widely used in different research fields, so enhancing GWO performance is a considerable topic in optimization. Experienced GWO was introduced to set the right parameters for the algorithm based on reinforcement learning and neural networks (Emary et al., 2018). Another study discussed increasing the possibility of the exploration process over exploitation to obtain better hunting performance, applied to a grid-connected permanent magnet synchronous generator driven by a variable-speed wind turbine (Qais et al., 2018). Many real-world applications have been implemented and solved by GWO, such as power system protection coordination (Kim et al., 2018), fuzzy blocking flow shop scheduling (Yang & Liu, 2018), prediction of surface roughness in ball-end milling (Sekulic et al., 2018), and real power dispatch with non-linear constraint problems (Venkatakrishnan et al., 2018). The scope of improvement for GWO in those studies covered the algorithm behavior or parameters, ignoring the speed of the algorithm, which is an important criterion in distributed systems and applications. Keeping the best performance in algorithm speed while getting the optimal solution is the aim of this paper.

2.2. Spiking Neural P systems
In this section, we introduce the formal definition of SN P systems and a brief overview of their applications (Ionescu et al., 2006b; Wu et al., 2016). The reader is assumed to be familiar with membrane computing and theoretical computer science (Păun, 2006). SN P systems have two advantages: they are timing systems, and they represent how the brain works, so they are realistic systems. The first advantage keeps up synchronization and allows the computational power of the system to be calculated. Generally, many applications have been built on SN P systems (Bi & Zang, 2016). Those applications include arithmetic operations (Liu et al., 2015; Metta et al., 2012), logic gates (Ionescu & Sburlan, 2012), sorting (Ionescu & Sburlan, 2012; Metta & Kelemenová, 2015), factorial calculation (Liu et al., 2014), and others (Bi & Zang, 2016; Zhang & Pan, 2016).
Definition 1. A Spiking Neural P system of degree m ≥ 1 is a construct (Ionescu et al., 2006b)

Π = (O, σ1, ..., σm, syn, io),
where:

1. O = {a} is the singleton alphabet (a is called a spike);

2. σ1, ..., σm are neurons of the form σi = (ni, Ri), 1 ≤ i ≤ m, where:

• ni is the number of spikes contained in neuron σi;

• Ri is a set of rules of the following two forms: E/ac → ap, where E is a regular expression over a and c ≥ p ≥ 1 (spiking rules); and as → λ for some s ≥ 1, with the restriction that as ∉ L(E) for any rule E/ac → ap of type (1) from Ri (forgetting rules);

3. syn ⊆ {1, 2, ..., m} × {1, 2, ..., m}, with (i, i) ∉ syn for 1 ≤ i ≤ m (synapses among cells);

4. io ∈ {1, 2, ..., m} indicates the output neuron.

SN P systems have two types of rules: firing (spiking) rules and forgetting rules. A spiking rule E/ac → ap is applied if the neuron contains k spikes such that ak ∈ L(E) and k ≥ c; when it fires, c spikes are consumed and p spikes are produced. The forgetting rules are applied as follows: if a neuron contains exactly s spikes, then the rule as → λ can be used, and all s spikes are removed from the neuron. An SN P system can have more than one configuration, and it moves from one configuration to another according to the state of its neurons.

SN P systems and their variants have different applications in many areas because they provide desirable performance based on their parallelism feature. Many variants of SN P systems have been introduced to propose further improvements, such as a small SN P system with communication on request that builds a Turing-universal SN P system with only 14 neurons (Pan et al., 2018). This solution decreases the number of neurons needed to build this type of system. One of the newest appropriate variants is the extended SN P system with white hole rules, which sends the complete contents of a neuron to other neurons (Alhazov et al., 2018). White hole rules can easily simulate register machines and reduce the extra work inside neurons to send spikes. Other variants have been presented based on changes to the SN P system construction, such as Spiking Neural P systems with polarizations, which avoid using regular expressions to describe firing conditions (Wu et al., 2017), multiple channels and anti-spikes inside neurons (Song et al., 2018), neurons connected with scheduled synapses (Cabarle et al., 2017), and anti-spikes without annihilating priority (Wang et al., 2017). In this paper, a new variant of SN P systems is introduced to handle the process of sending spikes over synapses to a specific neuron as a destination, which has not been handled by any previous study of SN P systems. This feature is very important for real-world applications and intelligent systems, such as nature-inspired optimization systems, because swarm behaviors in real life are systematic and at the same time parallelized. Therefore, the proposed variant is employed to simulate this case.
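As an illustration of the two rule types defined above (a toy sketch under our own encoding assumptions, not the authors' implementation), a neuron holding n spikes can be stepped as follows: a spiking rule E/ac → ap fires when a^n matches the regular expression E and n ≥ c, consuming c spikes and emitting p; a forgetting rule as → λ fires when the neuron holds exactly s spikes and erases them all:

```python
import re

def step_neuron(n_spikes, spiking_rules, forgetting_rules):
    """Apply one SN P computation step to a neuron holding n_spikes spikes.

    spiking_rules:    list of (E, c, p) with E a regex over 'a'; the rule
                      fires when a^n matches E, consuming c and emitting p.
    forgetting_rules: list of s; a^s -> lambda fires when n == s.
    Returns (new_spike_count, emitted_spikes).
    """
    word = "a" * n_spikes
    for E, c, p in spiking_rules:
        if re.fullmatch(E, word) and n_spikes >= c:
            return n_spikes - c, p          # c spikes consumed, p sent out
    for s in forgetting_rules:
        if n_spikes == s:                   # all s spikes are removed
            return 0, 0
    return n_spikes, 0                      # no rule applicable this step

# (aa)+ / a^2 -> a : emit one spike whenever the spike count is even
rules = [("(aa)+", 2, 1)]
forget = [1]                                # a -> lambda
print(step_neuron(4, rules, forget))        # (2, 1): two consumed, one emitted
print(step_neuron(1, rules, forget))        # (0, 0): forgetting rule erases it
```

Note that when several rules are applicable at once the real model chooses non-deterministically; this sketch simply takes the first match, which is precisely the kind of arbitrariness the paper's time control strategy is designed to remove.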
3. Material and methods
In this section, SPG is presented to simulate the social hierarchy and hunting behavior of a grey wolf pack. SPG simulates the GWO stages based on the mathematical model of the social hierarchy and of tracking, encircling, and attacking a prey. The SPG construction is formed by a collection of SN P systems (arithmetic operations, sorting, objective function, a-generator, random-generator, and grey wolf behavior) in the form

ΠSPG = (Πadd, Πsub, Πmulti, Πdiv, Πsort, Πrand, Πa, Πfobj, ΠGW),

where Πadd indicates the addition SN P system, Πsub refers to the subtraction SN P system, Πmulti represents the multiplication SN P system, Πdiv is the division SN P system (Păun, 2006; Păun et al., 2007), Πsort is the sorting SN P system used to get the optimal solution (Ionescu & Sburlan, 2012), Πrand represents the random SN P system, Πa refers to the SN P system that linearly generates a, Πfobj is the objective function SN P system, and ΠGW indicates the SN P Grey Wolf System. Πrand is constructed based on the most popular formula used in random generators, x = (p1 · randseed + p2) mod f. Furthermore, the mathematical formula mod(a, f) = a − f · ⌊a/f⌋ is also constructed. Fig. 1 shows a structural model of SPG, including the SN P systems mentioned above and the interactions between them. The following formal definitions cover the SN P grey wolf system and the (random-generator, a-generator, and objective function) SN P systems. All SN P systems in SPG except ΠGW are named outer SN P systems.

3.1. SN P Grey Wolf System
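Before the formal definition, the formula that Πrand realizes is easiest to read as a plain linear congruential generator: x = (p1 · randseed + p2) mod f, with mod(a, f) = a − f · ⌊a/f⌋. The sketch below uses glibc-style constants purely as placeholders; the paper does not fix p1, p2, or f:

```python
def lcg(seed, p1=1103515245, p2=12345, f=2**31):
    """Linear congruential generator realizing x = (p1*seed + p2) mod f,
    the formula Pi_rand is built around; yields successive values."""
    x = seed
    while True:
        a = p1 * x + p2
        x = a - f * (a // f)    # mod(a, f) = a - f * floor(a / f)
        yield x

gen = lcg(42)
r1 = next(gen) / 2**31          # scale into [0, 1) for the r1, r2 vectors
```

The explicit `a - f * (a // f)` form is kept instead of Python's `%` operator because it mirrors the mod construction that ΠSPG builds out of its arithmetic SN P systems.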
Definition 2. We consider an SN P Grey Wolf system ΠGW of degree m ≥ 3, in the form

Π = (O, σ, syn, iin, io),

where:

1. O is the singleton alphabet (called spike).

2. σ is the set of grey wolf hierarchy and in/out neurons, σ = σd ∪ σα ∪ σβ ∪ σδ ∪ σc ∪ iin ∪ io, where σ = σ1, ..., σm; iin refers to the input neuron, io indicates the output neuron, and σd is a dispatcher neuron. The sets σα, σβ, σδ, and σc refer to a grey wolf pack. ΠGW focuses on the three most important levels (alpha, beta, and delta), as in GWO, and σ emulates the same social hierarchy as GWO. The set σ is divided into levels that represent the grey wolf hierarchy (alpha, beta, and delta) and its central neurons. Alpha is represented by σα, σβ is beta, σδ is delta, and σc is a central neuron for each wolf that contains the updated value of the wolf position and the three optimal solutions. ΠGW passes through three configurations: the initial configuration Cini, the hunting behavior Chunt, and the halting
[Figure 1 appears here.]

Figure 1: An illustrative representation of the SPG algorithm. Normal arrows denote spikes sent to request a value; dashed arrows denote spikes sent to deliver a value. The SPG workflow proceeds by numbers, starting from process 0 until reaching process 12. All numbers are placed according to the ΠSPG design.
configuration Chalt. The initial configuration shows the instantiation of the system (e.g., the initial spikes in neurons). In contrast, the halting configuration is the termination state of the neurons. The core of the SN P system lies in the hunting configuration. All neurons fire their rules according to the system transition and the current configuration. The transition between configurations is performed based on the algorithm steps. Every neuron fires a set of rules in every configuration to get the best three optimal solutions, in the forms
σc = (nc, Rc), 1 ≤ c ≤ m,
σαi = (ni, Ri), 1 ≤ i ≤ m,
σβj = (nj, Rj), 1 ≤ j ≤ m,
σδk = (nk, Rk), 1 ≤ k ≤ m,
iin = (nin, Rin), and io = (no, Ro).
As defined above, the set of neurons σc is responsible for updating the value of the grey wolf position and keeping the best three optimal solutions; nc is the initial number of spikes in σc. For σαi, σα indicates the neurons (wolves) and ni ≥ 0 is the initial number of spikes contained by neurons in the alpha level, where i refers to the index of neuron σα. For σβj, σβ is a set of neurons and nj ≥ 0 is the initial number of spikes in neurons of the beta level, where j denotes the neuron index in the beta level. For σδk, σδ indicates the neurons and nk ≥ 0 is the initial number of spikes contained by the neurons, where k is the index of every neuron in the delta level. Fig. 1 shows the neuron structure and represents the hierarchy levels of grey wolves (alpha, beta, and delta). Every wolf has four neurons: three neurons to calculate its position in alpha, beta, and delta, and one central neuron to get the updated value from the previous three neurons. The term iin represents the input neuron and io indicates the output neuron.
3. The synapses are syn = {(σc, σc), (σc, σαi), (σc, σβj), (σc, σδk), (σc, σiin), (σc, σio), (σiin, σio), (σiin, Πa), (σc, Πsort), (σc, Πfobj), (σc, Πdiv), (σc, Πadd), (σαi, Πrand), (σβj, Πrand), (σδk, Πrand), (σαi, Πmulti), (σβj, Πmulti), (σδk, Πmulti), (σαi, Πsub), (σβj, Πsub), (σδk, Πsub)}.

There are synapses between the central neuron σc and the three social levels σαi, σβj, and σδk, where c, i, j, and k are equal for each wolf. There are also synapses between the outer SN P
systems and ΠGW. Fig. 1 shows the synapses between ΠGW and (Πa, Πfobj, and Πrand); moreover, it depicts the neuron structure of ΠGW, Πa, and Πrand.

4. The input neuron iin is responsible for receiving the initial values of the algorithm parameters (the maximum time limit tmax, the maximum number of iterations Nmax, and the components of a) and the position vector of each wolf.

5. The dispatcher neuron σd is responsible for distributing the position values to all central neurons σc and, further, for broadcasting the value of the three optimal solutions and the current iteration. The need to control the distribution of values and to avoid copying values over all synapses made constructing a dispatcher and a broadcaster in SPG mandatory. ΠGW depends on assigning every grey wolf position value to a central neuron, but SN P systems copy one value to all synapses, and this contrasts with the correct operation of ΠGW. The power of signal S of spikes is added in the current system to control the lifetime of spikes among synapses. The idea of the dispatcher comes from controlling the power of
the signal of the spike train, where S depends on the index of the position value. Each time σd receives position-value spikes, the power of those spikes is determined by the index of that value: the further a central neuron is from σd, the more powerful the spike signal needed to reach that neuron. The second part of applying the power of signal S in neuron σd is a broadcaster. Some values, such as the three optimal solutions and the current iteration value, should be sent to all central neurons σc, so σd sends spikes of those values and gives them the power of signal needed to reach all central neurons σc.

6. The output neuron io has the responsibility of halting the system or keeping the work going. To perform this responsibility, io checks the optimal solution and the maximum number of iterations. io sends the best three optimal solutions to the environment and fires forgetting rules to terminate the system or re-instantiate its work.
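One way to picture the dispatcher's two modes (an interpretive sketch; the paper specifies the behavior, not an implementation) is to treat the power of signal as a hop budget: a targeted spike train gets exactly enough power to expire at the central neuron whose index matches the value, while a broadcast gets enough power to reach every σc:

```python
def dispatch(values, n_central):
    """Give each position value a signal power equal to the number of
    synapse hops to its central neuron; the spike dies when its power
    reaches zero, so only the intended neuron still sees the value."""
    deliveries = {}
    for idx, value in enumerate(values):
        power = idx + 1                      # farther neuron -> stronger signal
        hop = 0
        while power > 0 and hop < n_central:
            power -= 1                       # one unit consumed per synapse hop
            hop += 1
        deliveries[hop - 1] = value          # spike expires exactly at index idx
    return deliveries

def broadcast(value, n_central):
    """Broadcast values (optimal solutions, current iteration) carry full
    power, so every central neuron receives the same spikes."""
    return {i: value for i in range(n_central)}

print(dispatch([10.5, 3.2, 7.7], 3))   # {0: 10.5, 1: 3.2, 2: 7.7}
print(broadcast(99, 3))                # {0: 99, 1: 99, 2: 99}
```

The key contrast with plain SN P semantics is visible in the two return values: `dispatch` delivers each value to exactly one neuron, whereas unmodified spike copying would behave like `broadcast` for everything.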
3.1.1. Central Neurons

The central neurons have a finite set of rules R that characterize the state of performing the SPG algorithm stages. The idea of representing a wolf by neurons was built by mapping every wolf behavior to four neurons (central, alpha, beta, and delta) in order to simulate the behavior of hunting a prey as in (Mirjalili et al., 2014). Every central neuron has a specific set of rules Rc to update the wolf positions, in the form σc = (nc, Rc), where:

nc = {a_c^{z3}},

Rc = { a_c^{a} / a_c^{a} → a_c^{a} tmx,
a_c^{Ncurr} → a_c^{Ncurr} tmx,
a_c^{pcurr} (a_c^{pcurr})+ → a_c^{pcurr} tmx,
(a_c^{f})+ → a_c^{f} tmx,
(a_c^{Ascore})+ / a_c^{Ascore} → a_c^{Ascore} tmx,
(a_c^{Bscore})+ / a_c^{Bscore} → a_c^{Bscore} tmx,
(a_c^{Dscore})+ / a_c^{Dscore} → a_c^{Dscore} tmx,
a_c^{pcurr} (a_c^{fnew})+ (a_c^{f})+ → a_c^{Apos} tmx,
(a_c^{fnew})+ (a_c^{Ascore})+ / a_c^{fnew} → a_c^{Ascorenew} tmx,
a_c^{pcurr} (a_c^{fnew})+ (a_c^{f})+ → a_c^{Bpos} tmx,
(a_c^{fnew})+ (a_c^{Bscore})+ / a_c^{fnew} → a_c^{Bscorenew} tmx,
a_c^{pcurr} (a_c^{fnew})+ (a_c^{f})+ / a_c^{f} → a_c^{Dpos} tmx,
(a_c^{fnew})+ (a_c^{Dscore})+ / a_c^{fnew} → a_c^{Dscorenew} tmx,
a_c^{Apos} (a_c^{Ascore})+ → a_c^{Apos} tmx,
a_c^{Bpos} (a_c^{Bscore})+ → a_c^{Bpos} tmx,
a_c^{Dpos} (a_c^{Dscore})+ → a_c^{Dpos} tmx,
(a_c^{x1})^{s1} / a_c^{x1} → (a_c^{x1})^{s1} tmx,
(a_c^{x2})^{s1} / a_c^{x2} → (a_c^{x2})^{s1} tmx,
(a_c^{x3})^{s1} / a_c^{x3} → (a_c^{x3})^{s1} tmx,
(a_c)^{s1} / a_c → (a_c)^{s1} tmx,
(a_c^{z3})^{s4} → (a_c^{z3})^{s4} tmx,
(a_c^{η})^{s4} / a_c^{η} → (a_c^{η})^{s4} tmx,
a_c^{pnew} / a_c^{pnew} → a_c^{pnew} tmx,
a_c^{v} → λ [t_max^{curr}, t_min^{next}] },

where v ∈ {fnew, η, , pnew}, tmx = [t_max^{prev}, t_min^{next}], t ∈ {t1, ..., t23}, S = {s1, s4}, and 1 ≤ c ≤ m.
For nc, each central neuron σc initially contains three spikes, given by a_c^{z3}. These spikes are instantiated in the initial configuration Cini at each algorithm iteration. The set of rules Rc is fired by σc to fulfill the responsibilities of σc, which include:

1. Receiving the values of the algorithm parameters and the wolf position, and sending them to α, β, and δ.

2. Performing the objective function and calculating the optimal solutions.

3. Updating the current wolf position with a new value.

Power of Signal Strategy. To handle the problem of choosing the type of arithmetic operation
through the algorithm steps, the power of signal S is employed to control and define the arithmetic operation type. It has a specific value, which is carried on the sent spikes; according to this value, the spikes are directed to a specific arithmetic-operation SN P system. S = {s1, s2, s3, s4} is a symbolic representation of the power needed to send spikes to the arithmetic-operation SN P systems, where s1, s2, s3, and s4 are employed to send spikes to Πadd, Πsub, Πmulti, and Πdiv, respectively.
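Read operationally, S acts as a dispatch key: the value carried by the spikes selects which arithmetic SN P system processes the operands. The table below mirrors the s1–s4 assignment in the text; treating the operands as plain numbers rather than spike trains is an illustrative simplification:

```python
# s1..s4 route a spike train to Pi_add, Pi_sub, Pi_multi, Pi_div respectively
OPERATIONS = {
    "s1": lambda x, y: x + y,   # Pi_add
    "s2": lambda x, y: x - y,   # Pi_sub
    "s3": lambda x, y: x * y,   # Pi_multi
    "s4": lambda x, y: x / y,   # Pi_div
}

def route(signal_power, x, y):
    """Direct the encoded operands to the arithmetic-operation SN P system
    selected by the power of signal carried on the spikes."""
    return OPERATIONS[signal_power](x, y)

print(route("s1", 6, 3))   # 9, handled by Pi_add
print(route("s4", 6, 3))   # 2.0, handled by Pi_div
```

This is why the strategy removes a source of non-determinism: the same spikes can no longer be claimed by more than one arithmetic system, because only the system matching the carried power accepts them.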
Time Control Strategy. The time factor t is employed to control the non-determinism state when firing neuron rules. Applying the time factor in the rule construction is inspired by timed SN P systems (TSN P systems) (Peng et al., 2010; Pan et al., 2011; Song et al., 2014), but here every neuron fires a rule, or fires a rule and consumes all spikes, within a single time unit; moreover, a forgetting rule is fired according to a specific time value. The value of time is assigned according to the neuron states in ΠSPG. As reported for TSN P systems, this firing mechanism is introduced to substitute the original firing and delay mechanisms of SN P systems (Peng et al., 2010). In addition, t in ΠSPG includes a delay factor, because some outer SN P systems affect its value.
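The effect of the time control strategy can be sketched by giving every rule its own firing slot, so the clock, rather than a non-deterministic choice among simultaneously enabled rules, decides what fires; the slot assignments and rule encoding below are illustrative, not taken from the paper:

```python
def fire_deterministically(schedule, clock, spikes):
    """schedule maps a time slot to the single rule allowed to fire then,
    as a (name, consumed, produced) triple. Restricting each slot to one
    rule removes non-determinism: the clock decides what fires."""
    rule = schedule.get(clock)
    if rule is None:
        return spikes, None                  # no rule scheduled at this time
    name, consumed, produced = rule
    if spikes >= consumed:
        return spikes - consumed, (name, produced)
    return spikes, None                      # not enough spikes: rule skipped

# t1: send cod(a); t2: send cod(Ncurr); t3: send position spikes
schedule = {1: ("send_a", 1, 1), 2: ("send_Ncurr", 1, 1), 3: ("send_pos", 2, 1)}
spikes = 4
for clock in (1, 2, 3):
    spikes, fired = fire_deterministically(schedule, clock, spikes)
    print(clock, fired, spikes)
```

Running the loop fires exactly one rule per tick in a fixed order, which is the determinism property the strategy claims for the central neurons.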
Firing Process. σc starts by receiving the spikes of the initial positions from σd, and a transition Cini ⇒ Chunt is made to change the system state. σc fires a_c^{a} / a_c^{a} → a_c^{a} to send the spikes of the component cod(a) to σα, σβ, and σδ at time t = t1, where cod refers to the information encoded in the spikes. σc consumes the spikes of the current value a and awaits the next value in the next iteration. σc fires a_c^{Ncurr} → a_c^{Ncurr} to send the spikes of the current iteration value cod(Ncurr) to σα, σβ, and σδ at time t = t2. After that, σc sends the position spikes, encoded as cod(pcurr), to σα, σβ, and σδ by firing a_c^{pcurr} (a_c^{pcurr})+ → a_c^{pcurr} at time unit t = t3, where c is the index of a neuron and of its position value. At the same time, σc sends the positions to Πfobj to calculate and return the objective value a_c^{f} for the search agent in that neuron. Before the objective-value rule (a_c^{f})+ → a_c^{f} is fired, a forgetting rule a_c^{v} → λ is fired to destroy the spikes that do not belong to this neuron, where v = f. When Πfobj sends the objective-value spikes, the spikes are copied to all central neurons.

To differentiate between the objective values and send the right objective value to its central neuron, the forgetting rule is fired within a temporal interval that starts at the maximum time tmax of the previous rule and ends at the minimum time tmin of the next rule. Receiving spikes takes no time (Ionescu et al., 2006b), but other operations take time. Therefore, no central neuron will receive the result spikes at the same maximum time as the fired
rule. The central neuron σc forgets any existing spikes by firing a_c^{v} → λ with a maximum time equal to the minimum time t_min^{next} of the next rule, where v is encoded with the spikes of the next rule. At the current state in σc, the current spikes are encoded as cod(f). σc fires (a_c^{f})+ → a_c^{f} at time t = t4, after receiving the spikes of a_c^{f} from Πfobj, and sends the spikes cod(f) to Πsort. After that, σc sends the spikes of the best three optimal solutions, with the encoded forms cod(Ascore), cod(Bscore), and cod(Dscore), by firing (a_c^{Ascore})+ / a_c^{Ascore} → a_c^{Ascore} at time t5, (a_c^{Bscore})+ / a_c^{Bscore} → a_c^{Bscore} at time t6, and (a_c^{Dscore})+ / a_c^{Dscore} → a_c^{Dscore} at time t7, respectively, whereas a_c^{Ascore}, a_c^{Bscore}, and a_c^{Dscore} are consumed in σc to remove these solutions from the current iteration. When Πsort receives a_c^{Ascore}, a_c^{Bscore}, a_c^{Dscore}, and a_c^{f} from all neurons, it sends back a sorted list of these values.
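Functionally, the role of Πsort at this step reduces to ranking the objective values and exposing the three best as the alpha, beta, and delta scores; the sketch below assumes a minimization objective, which is an interpretation on our part rather than an explicit statement in the paper:

```python
def sort_and_select(fitness_by_neuron):
    """Mimic Pi_sort's output: sort all objective values ascending and
    report the three best as Ascore, Bscore, Dscore plus the full ranking."""
    ranked = sorted(fitness_by_neuron.items(), key=lambda kv: kv[1])
    a_score, b_score, d_score = (v for _, v in ranked[:3])
    return {"Ascore": a_score, "Bscore": b_score, "Dscore": d_score,
            "sorted": ranked}

result = sort_and_select({0: 4.1, 1: 0.7, 2: 2.5, 3: 9.0})
print(result["Ascore"], result["Bscore"], result["Dscore"])  # 0.7 2.5 4.1
```

The `sorted` ranking is what every central neuron consumes in the comparison step that follows, matching its own fitness against the incoming best values.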
When Πsort sends the sorted list, which consists of the three optimal solutions and the fitness values, every central neuron σc receives it. σc accepts the spikes of the first fitness value a_c^{fnew}, and all other spike trains of that type are destroyed on the synapses or in the neuron until a^{Ascore} is received. Every neuron compares the current fitness value with the incoming fitness value. If they are equal, the current position a_c^{Apos} achieved the best fitness value in the current iteration; moreover, a_c^{Apos} is sent to all central neurons σc by firing a_c^{pcurr} (a_c^{fnew})+ (a_c^{f})+ → a_c^{Apos} at time t = t8. The same comparison is applied in the cases of a_c^{Bpos} and a_c^{Dpos} at times t = t10 and t = t12, respectively. All central neurons are fully connected in order to get the values of a_c^{Apos}, a_c^{Bpos}, and a_c^{Dpos} from the central neurons that achieve the highest fitness values. After getting the three optimal solutions, σc performs its second responsibility, which is sending them to io. All central neurons send the same values of the three optimal solutions to io, and the output neuron io fires its rules with the first three optimal solutions it receives.
As a third responsibility, every σc sends the three optimal positions (alpha, beta, and delta) to its alpha, beta, and delta neurons. σc fires aApos c (aAscore c)+ → aApos c at time t = t14 to send the spikes of the optimal alpha position Apos to an alpha neuron σα and consume the current spikes of cod(Apos) in σc. After that, σc fires aBpos c (aBscore c)+ → aBpos c at time t = t15 to send the spikes of the optimal beta position Bpos to a beta neuron σβ and consume the spikes of cod(Bpos). At time t = t16, σc fires aDpos c (aDscore c)+ → aDpos c to send the spikes of the optimal delta position Dpos to a delta neuron σδ and consume the spikes of cod(Dpos).
The last responsibility is updating the wolf positions and sending some values to the output neuron. When σc receives the spikes of cod(X1), cod(X2), and cod(X3), the rule (ax1 c)s1/ax1 c → (ax1 c)s1 is fired at time t = t17 and the rule (ax2 c)s1/ax2 c → (ax2 c)s1 at time t = t18. The firing steps continue, contacting Πadd and Πdiv, until the updated position apnew c is obtained at t = t22. σc re-fires av c → λ one last time to destroy the spikes of updated position values that do not fit the current central neuron. When σc receives the spikes of its updated position value, it fires the last rule apnew c/apnew c → apnew c to send the spikes of the updated position value to i0 at time t = t23. Every central neuron σc then enters the halt configuration, meaning that σc waits until it receives the position values.
3.1.2. Alpha, Beta, and Delta Neurons
In the following, we show the workflow of the alpha, beta, and delta neurons. The three types of neurons have the same set of rules (Ri for alpha neurons, Rj for beta neurons, and Rk for delta neurons); they differ only in how the wolf position is calculated according to the mathematical model of the wolf social hierarchy, as in the forms:
σαi = (ni, Ri), where:
ni = {az2 i},
Ri = {aNcurr i → aNcurr i tmx,
(az2 i)s3 → (az2 i)s3 tmx,
(a⃗a i)s3 → (a⃗a i)s3 tmx,
(a i)s3 → (a i)s3 tmx,
(ar1 i)s3 → (ar1 i)s3 tmx,
(aη i)s2/ar1 i → (aη i)s2 tmx,
(a⃗a i)s2/a i → (a⃗a i)s2 tmx,
(ar2 i)s3/ar2 i → (ar2 i)s3 tmx,
(aC1 i)s3/aη i → (aC1 i)s3 tmx,
(az2 i)s3 → (az2 i)s3 tmx,
(aApos i)s3/aC1 i → (aApos i)s3 tmx,
(aω i)s2/aω i → (aω i)s2 tmx,
(apcurr i)s2 apcurr i → (apcurr i)s2 tmx,
(aDAlpha i)s3/aDAlpha i → (aDAlpha i)s3 tmx,
(aA1 i)s3/aA1 i → (aA1 i)s3 tmx,
(aApos i)s2/aApos i → (aApos i)s2 tmx,
(aϕ i)s2/aϕ i → (aϕ i)s2 tmx,
(aX1 i) → (aX1 i) tmx,
av i → λ [tcurr_max, tnext_min]},
v ∈ {r1, r2, , η, ω, C1, ϕ, X1},
tmx = [tprev_max, tnext_min], t ∈ {t1, ..., t18},
S = {s2, s3}, and 1 ≤ i ≤ m.
σβj = (nj, Rj), where:
nj = {az2 j},
Rj = {aNcurr j → aNcurr j tmx,
(az2 j)s3 → (az2 j)s3 tmx,
(a⃗a j)s3 → (a⃗a j)s3 tmx,
(a j)s3 → (a j)s3 tmx,
(ar1 j)s3 → (ar1 j)s3 tmx,
(aη j)s2/ar1 j → (aη j)s2 tmx,
(a⃗a j)s2/a j → (a⃗a j)s2 tmx,
(az2 j)s3 → (az2 j)s3 tmx,
(aC2 j)s3/aη j → (aC2 j)s3 tmx,
(ar2 j)s3/ar2 j → (ar2 j)s3 tmx,
(aBpos j)s3/aC2 j → (aBpos j)s3 tmx,
(aω j)s2/aω j → (aω j)s2 tmx,
(apcurr j)s2 apcurr j → (apcurr j)s2 tmx,
(aDBeta j)s3/aDBeta j → (aDBeta j)s3 tmx,
(aA2 j)s3/aA2 j → (aA2 j)s3 tmx,
(aBpos j)s2/aBpos j → (aBpos j)s2 tmx,
(aϕ j)s2/aϕ j → (aϕ j)s2 tmx,
(aX2 j) → (aX2 j) tmx,
av j → λ [tcurr_max, tnext_min]},
v ∈ {r1, r2, , η, ω, C2, ϕ, X2},
tmx = [tprev_max, tnext_min], t ∈ {t1, ..., t18},
S = {s2, s3}, and 1 ≤ j ≤ m.
σδk = (nk, Rk), where:
nk = {az2 k},
Rk = {aNcurr k → aNcurr k tmx,
(az2 k)s3 → (az2 k)s3 tmx,
(a⃗a k)s3 → (a⃗a k)s3 tmx,
(a k)s3 → (a k)s3 tmx,
(ar1 k)s3 → (ar1 k)s3 tmx,
(aη k)s2/ar1 k → (aη k)s2 tmx,
(a⃗a k)s2/a k → (a⃗a k)s2 tmx,
(az2 k)s3 → (az2 k)s3 tmx,
(ar2 k)s3/ar2 k → (ar2 k)s3 tmx,
(aC3 k)s3/aη k → (aC3 k)s3 tmx,
(aω k)s2/aω k → (aω k)s2 tmx,
(aDpos k)s3/aC3 k → (aDpos k)s3 tmx,
(apcurr k)s2 apcurr k → (apcurr k)s2 tmx,
(aDDelta k)s3/aDDelta k → (aDDelta k)s3 tmx,
(aA3 k)s3/aA3 k → (aA3 k)s3 tmx,
(aDpos k)s2/aDpos k → (aDpos k)s2 tmx,
(aϕ k)s2/aϕ k → (aϕ k)s2 tmx,
(aX3 k) → (aX3 k) tmx,
av k → λ [tcurr_max, tnext_min]},
v ∈ {r1, r2, , η, ω, C3, ϕ, X3},
tmx = [tprev_max, tnext_min], t ∈ {t1, ..., t18},
S = {s2, s3}, and 1 ≤ k ≤ m.
For nα, nβ, or nδ, the neurons in the social levels σα, σβ, or σδ have initial spikes of the number two,
az2, at the initial configuration Cini. At Chunt, every neuron in a social level starts calculating the value of A1, A2, or A3 by firing some rules from t = t1 until t = t6. At the maximum time of t6, the rule av → λ is fired to destroy the spikes cod(A1), cod(A2), or cod(A3) that do not belong to that neuron. Calculating the value of A1, A2, or A3 is the first stage of the work of σα, σβ, or σδ, from t = t1 until t = t7. The second stage is calculating the value of C1, C2, or C3. The calculation starts by firing (az2)s3 → (az2)s3 at time t = t8 to send the spikes cod(z2) of the number two to Πmulti with signal power s3.
The third stage is calculating DAlpha, DBeta, and DDelta in σα, σβ, and σδ, respectively. The first rule in this stage is (aC)s3/aη → (aC)s3, fired at t = t10 to send the spikes cod(C) to Πmulti with signal power s3. (aApos)s3/aC1 → (aApos)s3 is fired to send the spikes cod(Apos) to Πmulti with signal power s3 in a neuron σα, and (aBpos)s3/aC2 → (aBpos)s3 is fired to send the spikes cod(Bpos) in a neuron σβ. In the last step of this stage, (apcurr)s2 apcurr → (apcurr)s2 is fired at time t = t13 to send the spikes of a wolf position value cod(pcurr) to Πsub with signal power s2. Πsub sends back the result of the subtraction operation, encoded as cod(DAlpha), cod(DBeta), or cod(DDelta).
When every neuron has its DAlpha, DBeta, or DDelta, the neurons start calculating the wolf position in the three social levels (σα, σβ, or σδ) as the last stage of their work. At time t = t14, the spikes of cod(DAlpha) are sent to Πmulti by firing (aDAlpha i)s3/aDAlpha i → (aDAlpha i)s3 in a neuron σα. Similarly, the spikes of cod(DBeta) are sent to Πmulti with signal power s3 by firing (aDBeta j)s3/aDBeta j → (aDBeta j)s3 in a neuron σβ. The firing process continues: (aDpos k)s2/aDpos k → (aDpos k)s2 is fired to send the spikes of cod(Dpos) to Πsub with signal power s2. The rule (aϕ)s2/aϕ → (aϕ)s2 is fired at time t = t16 to send the spikes cod(ϕ) from σα, σβ, or σδ, and a rule av → λ is fired to destroy the spikes cod(X) that do not belong to the neuron σα, σβ, or σδ. In the last step in σα, σβ, or σδ, the rule aX → aX is fired to send the spikes of the wolf position cod(X) in every social level σα, σβ, or σδ to its central neuron. At that time, a transition Chunt ⇒ Chalt is performed to move the current neurons to the halting configuration.
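The stages above correspond to the standard GWO update equations of Mirjalili et al. (2014): A = 2a·r1 − a, C = 2·r2, D = |C·leader − p|, X = leader − A·D, with the new position averaged over the three leaders. A sketch with plain floats standing in for spike encodings (the function name and argument layout are ours):

```python
def update_position(p, a, leaders, rand_pairs):
    """p: current wolf position (one dimension); a: component a-vector,
    decreasing from 2 to 0; leaders: (Apos, Bpos, Dpos);
    rand_pairs: three (r1, r2) pairs from the random generator."""
    xs = []
    for leader, (r1, r2) in zip(leaders, rand_pairs):
        A = 2 * a * r1 - a              # A1 / A2 / A3
        C = 2 * r2                      # C1 / C2 / C3
        D = abs(C * leader - p)         # D_Alpha / D_Beta / D_Delta
        xs.append(leader - A * D)       # X1 / X2 / X3
    return sum(xs) / 3                  # p_new, via Πadd and Πdiv

# With a = 1 and r1 = 0.5, every A is 0, so p_new is the leaders' mean:
p_new = update_position(0.5, 1.0, (0.2, 0.4, 0.9), [(0.5, 0.5)] * 3)
print(round(p_new, 9))  # -> 0.5 (mean of 0.2, 0.4 and 0.9)
```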
3.1.3. Input Neuron
The input neuron iin sends the spikes of some parameters to the dispatcher neuron σd and others to the output neuron io, according to the following rules:
iin = (nin, Rin), where:
nin = {aNcurr, aNmax, a⃗a},
Rin = {aNmax/aNmax → aNmax tmx,
aNcurr → aNcurr tmx,
(apg)+/apg → apg tmx,
a⃗a/aNcurr a⃗a → a⃗a tmx,
aAscore/aAscore → aAscore tmx,
aBscore/aBscore → aBscore tmx,
aDscore/aDscore → aDscore tmx},
tmx = [tprev_max, tnext_min], t ∈ {t1, ..., t7},
and 1 ≤ g ≤ m, Ncurr ≤ Nmax.
The input neuron iin has the initial spikes nin holding the values of the algorithm parameters (aNmax, aNcurr, and a⃗a), whereas aNmax is the maximum number of iterations, aNcurr represents the current iteration, and a⃗a indicates the value of the component ⃗a. These variables are initialized at Cini. In the rule set Rin, when the input neuron receives the spikes cod(Nmax) and cod(Ncurr), it sends them to io and Π⃗a, respectively, by firing aNmax/aNmax → aNmax at time t = t1 and aNcurr → aNcurr at time t = t2. After firing those two rules, the spikes cod(Nmax) are consumed. The input neuron sends the spikes cod(Ncurr) to Π⃗a in every iteration to get the value of ⃗a, and consumes the previous value aNcurr. When iin receives the spikes of wolf position values from io or from the environment, iin fires (apg)+/apg → apg at time t = t3 to send the spikes of the wolf positions to the dispatcher neuron σd and consume the current position values apg. After firing a⃗a/aNcurr a⃗a → a⃗a, the spikes of cod(⃗a) are sent to the dispatcher neuron σd at time t = t4. At every algorithm iteration, iin asks for the value of ⃗a; this happens when the output neuron io sends the current-iteration spikes cod(Ncurr). iin receives the initial values of the three optimal solutions in the encoded forms cod(Ascore), cod(Bscore), and cod(Dscore), respectively. It fires aAscore/aAscore → aAscore, aBscore/aBscore → aBscore, and aDscore/aDscore → aDscore to send the spikes of the initial values of the three optimal solutions to σd and to consume the current spikes cod(Ascore), cod(Bscore), and cod(Dscore), while waiting for the incoming spikes of better optimal solutions from i0. Rin is applied in Chunt; when iin starts to send spikes, a transition Cini ⇒ Chunt happens.
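The configuration transitions used throughout the construction (Cini for the initial state, Chunt for active firing, Chalt for waiting) form a small cycle, which can be sketched as a state machine; the event names below are illustrative, not from the paper:

```python
# Cini -> Chunt when a neuron starts sending spikes; Chunt -> Chalt when
# all rules have fired; Chalt -> Cini when io sends the wolf positions.
TRANSITIONS = {
    ("Cini", "start_firing"): "Chunt",
    ("Chunt", "rules_done"): "Chalt",
    ("Chalt", "positions_sent"): "Cini",
}

def step(state, event):
    # Unknown (state, event) pairs leave the configuration unchanged.
    return TRANSITIONS.get((state, event), state)

state = "Cini"
for event in ("start_firing", "rules_done", "positions_sent"):
    state = step(state, event)
print(state)  # -> Cini (one full iteration cycle)
```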
3.1.4. Dispatcher Neuron
The neuron σd applies broadcasting and dispatching functions by the form:
σd = (nd, Rd), where:
nd = {},
Rd = {(aNcurr)s0/aNcurr → (aNcurr)s0 tmx,
(apg)sg/apg → (apg)sg tmx,
(a⃗a)s0/a⃗a → (a⃗a)s0 tmx,
(aAscore)s0/aAscore → (aAscore)s0 tmx,
(aBscore)s0/aBscore → (aBscore)s0 tmx,
(aDscore)s0/aDscore → (aDscore)s0 tmx},
tmx = [tprev_max, tnext_min], t ∈ {t1, ..., t6},
S = {s0, sg}, and 1 ≤ g ≤ m.
The dispatcher neuron σd has no initial spikes nd at Cini.
Power of Signal Strategy. The signal power value s0 refers to the power of signal needed to send spikes of some parameter values over the synapses between σd and σc. The dispatcher function is performed by applying the signal power sg to the spikes apg on their way to σc, where the signal power increases with the index g and g = c.
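The power-of-signal strategy can be pictured as an addressing scheme: broadcast spikes carry power s0 and reach every central neuron, while a position spike carries power sg and is accepted only by the central neuron with c = g. A toy sketch (the data-structure choices are ours):

```python
def dispatch(spikes, central_ids):
    """spikes: (value, power) pairs, where power 0 stands for the
    broadcast signal s0 and power g addresses central neuron g."""
    inbox = {c: [] for c in central_ids}
    for value, power in spikes:
        targets = central_ids if power == 0 else [power]
        for c in targets:
            inbox[c].append(value)
    return inbox

# Ncurr is broadcast; each position pg reaches only the neuron with c = g.
print(dispatch([("Ncurr", 0), ("p1", 1), ("p2", 2)], [1, 2]))
# -> {1: ['Ncurr', 'p1'], 2: ['Ncurr', 'p2']}
```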
Firing Process. σd starts by receiving the spikes cod(Ncurr) of the current-iteration value. The neuron σd plays an important role in ΠGW: it distributes the input parameters to the central neurons σc according to the signal criterion. To perform the broadcasting function, a signal power value s0 is assigned to aNcurr, a⃗a, aAscore, aBscore, and aDscore. When σd receives the spikes cod(Ncurr) from iin, the work starts by firing (aNcurr)s0/aNcurr → (aNcurr)s0 from the rule set Rd at times t1 and t2. A rule (apg)sg/apg → (apg)sg is fired to send the spikes of the position value cod(pg) to σc according to the signal power sg; this rule continues firing until all incoming spikes of type cod(pg) have been sent. A rule (a⃗a)s0/a⃗a → (a⃗a)s0 is fired to send the spikes cod(⃗a) to all σc. Moreover, the spikes cod(Ascore), cod(Bscore), and cod(Dscore) are sent to all σc by firing (aAscore)s0/aAscore → (aAscore)s0, (aBscore)s0/aBscore → (aBscore)s0, and (aDscore)s0/aDscore → (aDscore)s0 at times t4, t5, and t6, respectively. The neuron's work is applied in Chunt; when the neuron finishes firing all its rules, a transition Chunt ⇒ Chalt happens, stopping its work until Ncurr is received at the next iteration.

3.1.5. Output Neuron

All responsibilities of io are carried out by Ro as
io = (no, Ro), where:
no = {az1},
Ro = {aNcurr → aNcurr tmx,
(az1)s1/aNcurr → (az1)s1 tmx,
aNnew/aNnew → aNcurr tmx,
aNmax → aNmax tmx,
(apgnew)+/apgnew → apgnew tmx,
(aAscore)+/aNcurr → aAscore tmx,
(aBscore)+/aAscore → aBscore tmx,
(aDscore)+/aBscore → aDscore tmx,
(aApos)+/aDscore → aApos tmx,
(aBpos)+/aApos → aBpos tmx,
(aDpos)+/aBpos aDpos → aDpos tmx,
(aNcurr)+ a → λ},
tmx = [tprev_max, tnext_min],
t ∈ {t1, ..., t12}, and 1 ≤ gnew ≤ m.
The output neuron io holds the spikes cod(z1), carrying the value of the constant 1, at the initial configuration. In each algorithm iteration, according to the rule set Ro, io fires aNcurr → aNcurr at time t = t1 and (az1)s1/aNcurr → (az1)s1 at time t = t2 to send spikes of the number 1 to Πadd with signal power s1, in order to increase the value of cod(Ncurr) by one. io consumes the current spikes cod(Ncurr) in the same rule and waits for a new value cod(Nnew); after receiving the spikes cod(Nnew), they are assigned to cod(Ncurr). io checks whether the current iteration number exceeds the maximum number of iterations by sending the spikes to Πsort at times t = t3 and t = t4, respectively. When io receives the updated value of a wolf position, it fires (apgnew)+/apgnew → apgnew at time t = t5 to send the spikes of the updated values to iin. Sending the spikes of the position vector is the sign to start the next iteration: once the input neuron iin receives the spikes of the updated position values, it starts working. io receives the spikes of cod(Ncurr) from Πsort as the value larger than cod(Nmax). At the same time, io receives from σc the spikes of the best three objective values in the encoded forms cod(Ascore), cod(Bscore), and cod(Dscore), and the three best positions in the encoded forms cod(Apos), cod(Bpos), and cod(Dpos). When the spikes of the optimal solutions are available and cod(Nmax) comes from Πsort before cod(Ncurr), the rules (aAscore)+/aNcurr → aAscore, (aBscore)+/aAscore → aBscore, (aDscore)+/aBscore → aDscore, (aApos)+/aDscore → aApos, (aBpos)+/aApos → aBpos, and (aDpos)+/aBpos aDpos → aDpos are fired from t = t6 to t = t11 to send the values of the three optimal solutions (objective values and positions) to the environment, and cod(Ncurr) is consumed. After that, io fires the forgetting rule (aNcurr)+ a → λ to forget all spikes in the neuron and stop the system's work. io applies Chalt when firing these rules; the system moves from the previous configuration to the halting configuration, Chunt ⇒ Chalt. If io sends spikes of wolf positions, the transition Chalt ⇒ Cini is performed.
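The bookkeeping io performs reduces to incrementing the iteration counter by the constant one carried in cod(z1) and comparing it against Nmax. In ordinary code (function and variable names are ours):

```python
def next_iteration(n_curr, n_max):
    """Advance the iteration counter and test the stopping criterion."""
    n_new = n_curr + 1        # Πadd: Ncurr + z1, with z1 = 1
    stop = n_new > n_max      # the comparison delegated to Πsort
    return n_new, stop

print(next_iteration(1, 2))   # -> (2, False): keep iterating
print(next_iteration(2, 2))   # -> (3, True): halt and emit the results
```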
3.1.6. Multi-Dimension Problems

The current GW SN P system construction supports multi-dimension problems with some modifications. The modifications include some rules in io, iin, and σd, and duplicating the central, alpha, beta, and delta neurons. The modification in the neurons io, iin, and σd includes an input rule for every position dimension, whatever the number of dimensions, such as the rule ap(ap)+ → ap, to perform the algorithm step on these dimensions. Then, in the neuron structure and for every dimension, there are central, alpha, beta, and delta neurons; moreover, the dispatcher neuron σd distributes the position values to all central neurons in all dimensions. The rules for position values change in the same way in io and iin.

3.2. Outer Systems Construction

Definition 3. Consider a ⃗a linear-generation SN P system of degree m = 1, constructed of the form
Π⃗a = (O, σ1, syn, iin, io), where:
1. O = {a} is the singleton alphabet (called spike).
2. σ1 is a ⃗a neuron of the form
σ1 = (n1, R1), where:
n1 = {az1, az2, aNmax},
R1 = {(az2)s4/aNcurr → (az2)s4 tmx,
(aNmax)s4 → (aNmax)s4 tmx,
(a)s3/a → (a)s3 tmx,
(az1)s3 → (az1)s3 tmx,
(az2)s2 → (az2)s2 tmx,
(aη)s2/aη → (aη)s2 tmx,
a⃗a/a⃗a → a⃗a tmx},
S = {s2, s3, s4}, tmx = [tprev_max, tnext_min], and t ∈ {t1, ..., t7}.
The term n1 is the set of initial spikes in σ1, and R1 is the set of rules used to calculate the value of ⃗a. The component ⃗a is decreased linearly over the algorithm iterations; Π⃗a is constructed based on the equation for the component ⃗a given in (Mirjalili et al., 2014).
3. The synapses syn = {(σ1, Πmulti), (σ1, Πsub), (σ1, Πdiv), (σ1, ΠGW)}. There are synapses between σ1 and some outer systems (Πdiv, Πmulti, and Πsub).
4. The input and output neurons are a single neuron σ1, where iin, io ∈ {1, ..., m} and m = 1.
Π⃗a includes only one neuron, σ1, used as an in/out neuron. At the initial configuration Cini, σ1 contains spikes for constant numbers (az1, az2, and aNmax): az1 for the number one, az2 for the number two, and aNmax for the maximum number of algorithm iterations. At time t1, a transition Cini ⇒ Chunt is done and Π⃗a starts calculating the value of ⃗a, when σ1 receives the spikes of the current-iteration value cod(Ncurr) from iin in ΠGW. At time t1, the rule az2/aNcurr → az2 is fired to send the spikes of the number two to Πdiv with signal power s4 and to consume the spikes aNcurr, ready for the next iteration. At time t2, the rule aNmax → aNmax is fired to send the spikes cod(Nmax) to Πdiv with signal power s4. When Πdiv receives the spikes az2 and aNmax, it calculates the result and sends the spikes of that result back to Π⃗a. Π⃗a fires az1/az1 → az1 at time t3 and fires a → a at time t4 to send the spikes az1 and a to Πmulti with signal power s3, obtaining the multiplication result aη. In the last step of calculating ⃗a, σ1 fires az2 → az2 to send the spikes of the number two at time t5 and fires aη/aη → aη at time t6; az2 and aη are sent to Πsub with signal power s2. Πsub sends the spikes a⃗a back to σ1, and at time t7, σ1 sends the spikes of the component ⃗a to iin in ΠGW by firing a⃗a/a⃗a → a⃗a, consuming the current value of ⃗a. At this time, a transition Chunt ⇒ Chalt is done to move Π⃗a to the halt configuration. The time factor t is a set of time units whose values are fixed before the system starts working; a set of time units is instantiated because the free timed SN P systems (Πdiv, Πmulti, or Πsub) are involved.
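Tracing the operations above (two and Nmax into Πdiv, the quotient through Πmulti, then a subtraction from two in Πsub; the exact routing of Ncurr through Πmulti is abbreviated in the text) recovers the familiar linear decrease of ⃗a from 2 to 0 used in GWO (Mirjalili et al., 2014):

```python
def component_a(n_curr, n_max):
    """The a-vector component decreases linearly over the iterations:
    a = 2 - Ncurr * (2 / Nmax), built from the Πdiv, Πmulti and Πsub steps."""
    return 2 - n_curr * (2 / n_max)

print(component_a(1, 2))   # -> 1.0, halfway through a 2-iteration run
print(component_a(2, 2))   # -> 0.0, at the final iteration
```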
Definition 4. Consider a random-generator SN P system of degree m = 2, of the form
Πrand = (O, σ1, syn, iin, io), where:
1. O = {a} is the singleton alphabet (called spike).
2. A neuron σ1 is employed to implement the mathematical formula of the random generator and the mathematical formula of the modulus, according to the form
σ1 = (n1, R1), where:
n1 = {ap1, ap2, arseed},
R1 = {(ap1)s3 (aNcurr)+/aNcurr → (ap1)s3 tmx,
(arseed)s3/arseed → (arseed)s3 tmx,
(a)s1/a → (a)s1 tmx,
(ap2)s1 → (ap2)s1 tmx,
aη/aη → aη tmx,
ar → ar1 tmx,
ar → arseed tmx,
av → λ tmx},
tmx = [tprev_max, tnext_min], t ∈ {t1, ..., t7},
v ∈ {, η, r}, and s = {s1, s3}.
σ2 = (n2, R2), where:
n2 = {ap1, ap2, arseed},
R2 = {(ap1)s3 (aNcurr)+/aNcurr → (ap1)s3 tmx,
(arseed)s3/arseed → (arseed)s3 tmx,
(a)s1/a → (a)s1 tmx,
(ap2)s1 → (ap2)s1 tmx,
aη/aη → aη tmx,
ar → ar2 tmx,
ar → arseed tmx,
av → λ tmx},
tmx = [tprev_max, tnext_min], t ∈ {t1, ..., t7},
v ∈ {, η, r}, and s = {s1, s3}.
σ3 = (n3, R3), where:
n3 = {ag},
R3 = {(aη)s4 → (aη)s4 tmx,
(ag)s4 → (ag)s4 tmx,
(aθ)s3/aθ → (aθ)s3 tmx,
(ag)s3 → (ag)s3 tmx,
(aη)s2/aη → (aη)s2 tmx,
(aρ)s2/aρ → (aρ)s2 tmx,
ar → ar tmx},
tmx = [tprev_max, tnext_min], t ∈ {t1, ..., t7},
and s = {s2, s3, s4}.
3. The synapses in the random-generator SN P system are syn = {(σ1, ΠGW), (σ2, ΠGW), (σ1, σ3), (σ2, σ3), (σ1, Πmulti), (σ2, Πmulti), (σ3, Πmulti), (σ1, Πsum), (σ2, Πsum), (σ3, Πdiv), (σ3, Πsub)}.
4. The input neurons σ1 and σ2 indicate iin and connect with ΠGW, awaiting spikes to start the work. Furthermore, when the workflow of these neurons is finished, σ1 and σ2 send two random numbers to ΠGW, so σ1 and σ2 are also considered the output neurons.
The terms n1 and n2 are the sets of initial spikes in the neurons σ1 and σ2, respectively, where σ1 and σ2 are responsible for generating two random numbers encoded as cod(r1) and cod(r2). Moreover, n3 is the set of initial spikes in the neuron σ3, which is responsible for calculating the modulus. R1, R2, and R3 are the sets of rules in σ1, σ2, and σ3 employed to generate two random numbers within the range [0, 1]. The random-generator neurons start receiving the spikes of cod(Ncurr) from ΠGW through σ1 and σ2.
The rule ap1(aNcurr)+/aNcurr → ap1 is fired at time t1 to send the spikes of cod(p1) to Πmulti with signal power s3 and consume the spikes of Ncurr while waiting for the next iteration. The spikes of cod(rseed) are sent to Πmulti with signal power s3 by firing arseed/arseed → arseed at time t2. Πmulti multiplies the values of rseed and p1 and sends the spikes of the multiplication result to σ1 or σ2. Before σ1 or σ2 receives the result of the multiplication, the rule av → λ is fired at the maximum time of t2 to revoke a result that does not fit that neuron. At time t3, σ1 or σ2 fires a/a → a to send the multiplication-result spikes to Πsum with signal power s1, and fires ap2 → ap2 to send the spikes of p2 to Πsum at time t4 with signal power s1. At the maximum time of t4, a rule av → λ is fired, where v = η, to destroy the spikes of a summation result that does not belong to the working neuron. At time t5, aη/aη → aη is fired to send the spikes of cod(η) to σ3. At this point the process in σ3 starts and σ1 or σ2 pauses its work. The rule aη → aη is fired at time t1 in σ3 to send the spikes of cod(η) to Πdiv with signal power s4, and at time t2, ag → ag is fired to send the spikes cod(g) to Πdiv with signal power s4, waiting for the results.
When σ3 receives the spikes cod(θ) of the division result, at time t3, aθ/aθ → aθ is fired to send the spikes cod(θ) to Πmulti with signal power s3, and Πmulti receives the spikes cod(g) at time t4 by firing ag → ag with signal power s3. The last calculation in σ3 subtracts the multiplication result cod(ρ) from cod(η) by firing aη/aη → aη at time t5 and aρ/aρ → aρ at time t6. Πsub sends the spikes of cod(r) to σ3, and σ3 fires ar → ar at time t7 to send the spikes of cod(r) to σ1 or σ2. The work of σ3 is then finished, and cod(r) is the generated random number. Before the neuron stops, at the maximum time of t5, av → λ is fired to revoke a result cod(r) that is not related to that neuron. The work of σ1 or σ2 is resumed: at time t6 the rule ar → ar is fired to send the spikes cod(r1) or cod(r2) to ΠGW, and the value of cod(rseed) is updated at time t7 by firing ar → arseed, to be used in the next iteration.
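The arithmetic carried out across σ1/σ2 and σ3 amounts to one step of a linear-congruential scheme: η = seed·p1 + p2, followed by the modulus r = η mod g computed as η − (η // g)·g, exactly the Πdiv, Πmulti, and Πsub sequence above. Normalizing by g to land in [0, 1) is our reading of the stated range [0, 1]; the paper does not spell out that final division:

```python
def lcg_next(seed, p1, p2, g):
    """One step of the linear-congruential random generator."""
    eta = seed * p1 + p2        # Πmulti, then Πsum
    theta = eta // g            # Πdiv (integer quotient, cod(θ))
    rho = theta * g             # Πmulti (cod(ρ))
    r = eta - rho               # Πsub: the modulus, cod(r)
    return r, r / g             # new seed material and a value in [0, 1)

print(lcg_next(seed=7, p1=5, p2=3, g=16))  # -> (6, 0.375)
```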
Definition 5. Consider the objective-function SN P systems of degree m ≥ 1, of the form
Πfobj = (O, σi, syn, iin, io), where:
1. O = {a} is the singleton alphabet (called spike).
2. A set of neurons σi of the form
σi = (ni, Ri), 1 ≤ i ≤ m, where ni ≥ 0 is the set of initial spikes in the neuron σi. The initial spikes are instantiated according to the mathematical model of the employed objective function. Ri is the set of rules used in every neuron σi to apply the objective function to the wolf position value. σi starts when the central neuron σc in ΠGW sends the spikes cod(apc) of the wolf position, to be received by the input neuron iin of Πfobj. The output neuron io sends the spikes cod(afc) of the objective value to the central neurons σc.
3. The synapses syn = {(iin, ΠGW), ..., (io, ΠGW)}. There are synapses between σ1 and the outer systems (Πdiv, Πmulti, and Πsub), and σ1 is connected with the input neuron iin in ΠGW.
4. The input neuron iin receives the spikes of cod(apc) from ΠGW, and the output neuron io sends the spikes of the fitness value cod(afc) to ΠGW.
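Πfobj is a placeholder for whichever objective the optimization problem supplies; its neurons are instantiated per problem. As an illustrative stand-in (the sphere benchmark is our choice, not one named by the paper), the mapping from a position cod(apc) to a fitness value cod(afc) is just a function evaluation:

```python
def sphere(position):
    """A classic benchmark objective: f(x) = sum of x_i^2,
    minimized at the origin with value 0."""
    return sum(x * x for x in position)

print(sphere([1.0, -2.0, 0.5]))  # -> 5.25
```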
Theorem 3.1. The social hierarchy of grey wolves and the tracking, encircling, and attacking of prey are simulated by applying the characterizations of SN P systems based on GWO.
Proof. Given ΠSP G, the mathematical model of GWO is emulated based on SN P systems. ΠSP G respects the sequence of GWO, covers all GWO calculations (arithmetic operations, sorting, random generation, and the objective function), and implements the core of GWO from Definitions 2 to 5. As a result, SN P systems can be formulated as an optimization algorithm. Therefore, the theorem holds.
The pseudo-code of the proposed SPG algorithm can be found in Algorithm 1, which states the sequence of firing the system rules. The algorithm starts by initializing the input parameters based on the type of dataset; it is designed to accept the same datasets as GWO and its applications. During instantiation, the algorithm invokes the functionalities of the outer and arithmetic-operation SN P systems. In each iteration, the firing process inside the neurons is executed according to the sequence of rules defined in Definition 2. At the end of each iteration, the maximum number of iterations and the time limit are checked, until the optimal solution is obtained within the maximum limits of these variables. The result is the values of Apos and Ascore, where Apos is the best position and Ascore is the best objective value.
Algorithm 1 Pseudo-code of the SPG Algorithm
Input: Π = {σ, iin, io}, Nmax, pg and Tmax {SN P system input configuration}
Output: Apos, Ascore and T
1: Ncurr ← 1 and T ← 0
2: Step 1: Import Πadd, Πsub, Πmulti, Πdiv and Πsort
3: Step 2: Invoke Π⃗a, Πrand and Πfobj {call Definitions 3, 4 and 5}
4: while Ncurr ≤ Nmax and T ≤ Tmax do {call Definition 2}
5:   Calculate ⃗a in Π⃗a
6:   Pass pg, ⃗a, Ascore from iin to σd
7:   Dispatch pg, ⃗a, Ascore to σc {call the power-of-signal strategy}
8:   Fire the central neuron σc rules in parallel
9:   Call the Πfobj objective-function rules
10:  Pass pg, f, Ascore, Bscore, Dscore, Apos, Bpos, Dpos to σα, σβ, σδ by σc rules
11:  Calculate X1, X2, X3 in parallel by calling the σα, σβ and σδ rules
12:  Return X1, X2, X3 in parallel to σc by calling the σα, σβ and σδ rules
13:  Calculate pnew by Πadd and Πdiv {call σc rules}
14:  Pass pnew in parallel to io and Πsort {call σc rules}
15:  Update Ascore, Bscore, Dscore, Apos, Bpos and Dpos {call io rules to get the new values}
16:  Check Ncurr and T {call io rules}
17: end while
18: return Apos, Ascore {call io rules}
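Read sequentially, Algorithm 1 is the standard GWO loop; the SN P system's contribution is executing the per-wolf work in parallel across neurons. A plain sequential rendering (the sphere objective, one-dimensional positions, and a seeded random generator are illustrative choices of ours):

```python
import random

def spg_like_gwo(objective, positions, n_max, seed=1):
    rng = random.Random(seed)
    for n_curr in range(1, n_max + 1):
        a = 2 - n_curr * (2 / n_max)                  # the Π⃗a component
        scores = [objective(p) for p in positions]    # Πfobj (parallel in SPG)
        ranked = [p for _, p in sorted(zip(scores, positions))]
        leaders = ranked[:3]                          # Πsort: Apos, Bpos, Dpos
        for i, p in enumerate(positions):             # σc with σα, σβ, σδ
            xs = []
            for leader in leaders:
                r1, r2 = rng.random(), rng.random()   # Πrand
                A, C = 2 * a * r1 - a, 2 * r2
                xs.append(leader - A * abs(C * leader - p))
            positions[i] = sum(xs) / 3                # Πadd and Πdiv
    best = min(positions, key=objective)
    return best, objective(best)                      # Apos, Ascore

apos, ascore = spg_like_gwo(lambda p: p * p, [3.0, -2.0, 1.5, 0.5], n_max=20)
```

As a decays toward zero, the A coefficients shrink and the wolves contract toward the mean of the three leaders, mirroring the attacking phase of the model.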
4. Experiments and Discussion

In this section, we present a theoretical and an empirical evaluation of the proposed SPG with the main P system and swarm evaluation methods.

4.1. Numerical Experiments

A numerical example is introduced to discuss the workflow and computational power of
SPG. Here, suppose that an optimization problem has m grey wolves with initial position values 4p. Some algorithm parameters are initialized, such as the maximum number of iterations, Nmax = 2, and the initial three optimal solutions with values Ascore, Bscore, and Dscore. These optimal solutions can be used as a threshold and considered a stopping criterion. The time configuration includes assigning time units to all rules according to a global clock; the time factor can be used as a stopping criterion, similarly to Nmax and the optimal solutions. Table 1 shows the setup configuration of the numerical experiment, including the neurons (wolves), the available outer systems, the stopping criteria, and the I/O neuron conditions. In this experiment, the minimum requirements of outer systems are configured, which is considered the worst case.
Tables 2 and 3 show the time configurations, the application of the stopping criteria, and mainly the tracing scenario of SPG, which is discussed in the following lines. The time configuration is divided into two factors: time units and time steps. The idea of time units comes from the consumed time values in the system neurons, as mentioned in the SPG construction. The time step, on the other hand, is a unique number used to calculate the time consumption over the whole system, including all time units in SPG; therefore, the time-step factor changes incrementally according to the time units. The first step in SPG is instantiating the system neurons with the initial spikes at time step t = 0. This instantiation state is the initial configuration Cini, and this configuration applies at every iteration according to the SPG construction.
The trace scenario embraces time management, the grey wolf neurons, the I/O neurons, and the outer SN P systems. Every time step includes the time units used by the neurons in parallel, which means the time step is determined by the largest time-unit value in the current step. In this example, the SPG workflow depends on grey wolf movement to reach the optimal value across two iterations. The synchronization between neurons accessing the outer systems is built on the power of signal and the concept of first come, first served. The tracing scenario fires the rules of the SPG systems in parallel while respecting the sequential rules within a single neuron. The numerical example uses a system of every type of outer system (random, the component of
Table 1: Numerical experiment configuration.
1. Wolves count: 4 (involved neurons: σc, σα, σβ, σδ, and σd).
2. Nmax = 2: the maximum number of iterations (involved neurons: iin, io, and Π⃗a).
3. Optimal solutions: Ascore, Bscore, and Dscore (three optimal fitness values) and Apos, Bpos, and Dpos (three optimal position values), as the initial optimal solutions.
4. I/O parameters: wolf positions 4p (involved neurons: iin, io, and σd).
5. Time factor: time units t, the maximum time of an iteration in the neurons.
6. Outer systems: one system for every operation (Πadd, Πsub, Πmulti, Πdiv, Πsort, Πrand, and Πfobj).
⃗a, objective function, addition, subtraction, multiplication, and division). From the numerical example, several points can be deduced, such as the computational power of SPG and some possible enhancements to SPG. The computational power comes from the true parallelism in SN P systems; with that parallelism, the grey wolf behavior is simulated as in nature. Nature and biological systems work in parallel, so simulating these systems with their behavior is the best way to solve a complex problem in linear time, as in SPG.
When any meta-heuristic algorithm works on optimization problems, it needs a large number of iterations and deals with multiple dimensions in large spaces. Therefore, some enhancements are applied to SPG to give it the ability to handle these cases. The enhancements include two points: time configuration and outer-system optimization. The most important and applicable enhancement is outer-system distribution. Using a single system for every arithmetic operation, the random generator, and the objective-function system causes sequential working and affects the time needed to find an optimal solution; the solution is to utilize many systems, including these systems at every grey wolf or neuron level. This modification will optimize the running time of the system, so we can redistribute the outer systems over every neuron to perform the neuron's calculations independently. It is easy to apply this
Table 2: The scenario of the SPG Algorithm before reaching (X1, X2 and X3), employed to update the wolf position. [Trace columns: time step, time units, spike contents of the wolf neurons σc, σα, σβ and σδ, the I/O neurons iin and io, and the outer systems Π (Πadd, Πsub, Πmulti, Πdiv, Πsort, Πrand, Πfobj and Π→a), from time step 0 up to the halt of ΠGW.]
Table 3: The calculation processes of optimal solutions in the SPG Algorithm. [Trace columns as in Table 2; steps from time step (1) to time step (58) are repeated per iteration to update the fitness values Ascore_new, Bscore_new and Dscore_new, the optimal positions Apos, Bpos and Dpos, and the new wolf position pnew, up to time step 110, where the system halts.]
modification because the change will only be in the synapses and in an increased number of outer systems.

4.2. Empirical Experiments

SPG has the same mathematical form as GWO, but it is emulated by SN P systems. Based on this point, SPG can guarantee the convergence curve and reach the optimal solution as GWO does. In the numerical experiment, we focused on the time complexity and on how the algorithm can halt. In this section, several experiments are performed to evaluate the proposed algorithm on some objective functions. SPG is implemented using Java concurrency programming. SPG is benchmarked on popular benchmark functions reported in many previous works (He et al., 2004; Mirjalili et al., 2014). To be able to evaluate our results, a set of benchmark functions is chosen and listed in Table 4, which contains the mathematical formula, optimum value, dimensionality, and search-space boundary of each function.

SPG was tested with 30 runs on each benchmark function. Different sets of algorithm parameters were tested and gave the results in Figures 2 and 3. From those figures, we found that the algorithm can find the nearest optimal values for the tested benchmark functions. The best results are achieved when the number of iterations is greater than 50.

Based on the results of the experiment stated in Table 5, the algorithm provides competitive results compared with GWO (Mirjalili et al., 2014).

According to the experiments and results, some experimental outcomes define the computational power and solution feasibility of the proposed SPG. These outcomes include the need for timing methods, the effect of the outer systems on the algorithm's performance, and reaching the optimal solution; they are explained in the following paragraphs. In this work, a timing method is employed to overcome the non-determinism of firing neuron rules. Every rule takes enough time units to be executed. The parallelism and the number of outer SN P systems are two factors affecting the running time of the SPG Algorithm. From the previous numerical example, the number of outer SN P systems affects the number of time steps. The numerical example utilized the minimum number of these systems, so the count of time
Table 4: Test benchmark functions.

| Function | Space range | Dimensions | fmin |
|---|---|---|---|
| f1(x) = Σ_{i=1}^{n} x_i^2 | [−100, 100] | 30 | 0 |
| f2(x) = Σ_{i=1}^{n−1} [100(x_{i+1} − x_i^2)^2 + (x_i − 1)^2] | [−5, 10] | 30 | 0 |
| f3(x) = Σ_{i=1}^{n} (Σ_{j=1}^{i} x_j)^2 | [−100, 100] | 30 | 0 |
| f4(x) = Σ_{i=1}^{n} ([x_i + 0.5])^2 | [−100, 100] | 30 | 0 |
| f5(x) = Σ_{i=1}^{n} i·x_i^4 + random[0, 1) | [−1.28, 1.28] | 30 | 0 |
| f6(x) = Σ_{i=1}^{n} −x_i·sin(√|x_i|) | [−500, 500] | 30 | −418.983·n |
| f7(x) = max_i {|x_i|, 1 ≤ i ≤ n} | [−100, 100] | 30 | 0 |
| f8(x) = Σ_{i=1}^{n} [x_i^2 − 10·cos(2πx_i) + 10] | [−5.12, 5.12] | 30 | 0 |
| f9(x) = −Σ_{i=1}^{n} sin(x_i)·(sin(i·x_i^2/π))^{2m}, m = 10 | [0, π] | 30 | −4.687 |
| f10(x) = (1/500 + Σ_{j=1}^{25} 1/(j + Σ_{i=1}^{2} (x_i − a_ij)^6))^{−1} | [−65, 65] | 2 | 1 |
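Three of the functions in Table 4 can be written directly from their formulas. The sketch below uses the standard definitions of the sphere, Schwefel, and Rastrigin functions (the function names and code are ours for illustration, not the authors' Java implementation):

```python
import math

# Hypothetical re-implementations of three benchmark functions from Table 4,
# following their common textbook definitions.

def f1_sphere(x):
    # f1(x) = sum(x_i^2); minimum 0 at the origin
    return sum(xi ** 2 for xi in x)

def f6_schwefel(x):
    # f6(x) = sum(-x_i * sin(sqrt(|x_i|))); minimum approx. -418.983 * n
    return sum(-xi * math.sin(math.sqrt(abs(xi))) for xi in x)

def f8_rastrigin(x):
    # f8(x) = sum(x_i^2 - 10*cos(2*pi*x_i) + 10); minimum 0 at the origin
    return sum(xi ** 2 - 10 * math.cos(2 * math.pi * xi) + 10 for xi in x)

if __name__ == "__main__":
    origin = [0.0] * 30
    print(f1_sphere(origin))     # 0.0
    print(f8_rastrigin(origin))  # 0.0
```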
Table 5: A statistical analysis of benchmark function results.

| F | SPG Avg | SPG Var | SPG Std | GWO Avg | GWO Var | GWO Std |
|---|---|---|---|---|---|---|
| F1 | 5.280E−04 | 2.965E−18 | 2.744E−07 | 6.149E−05 | 4.571E−21 | 3.114E−06 |
| F2 | 1.059E−08 | 3.904E−14 | 2.241E−08 | 9.105E−05 | 29.99E−11 | 6.993E−06 |
| F3 | 1.998E−08 | 2.784E−14 | 4.218E−07 | 3.129E−06 | 5.734E−14 | 7.128E−04 |
| F4 | 1.250E−07 | 1.355E−16 | 1.839E−06 | 1.21679 | 89.34339 | 1.50412 |
| F5 | 2.081E−05 | 2.688E−13 | 4.352E−07 | 8.944E−02 | 7.468E−09 | 3.170E−03 |
| F6 | 5.293E−07 | 6.318E−21 | 2.386E−11 | 2.933E−06 | 8.471E−17 | 2.132E−09 |
| F7 | 6.943E−07 | 60.601876 | 8.005E−4 | 5.061E−07 | 31.07288 | 6.254E−04 |
| F8 | 2.701E−04 | 6.704E−19 | 2.635E−07 | 6.225E−04 | 5.644E−14 | 9.332E−06 |
| F9 | 7.704E−3 | 6.861E−14 | 2.335E−11 | 3.175E−06 | 9.002E−13 | 5.730E−04 |
| F10 | 9.708E−5 | 9.14015E−22 | 4.177E−15 | 3.404E−03 | 2.550E−19 | 2.229E−15 |
Figure 2: The results of three benchmark functions. Every function is shown with its 2-D version, the history of population convergence, the first search agent (position), and the fitness function. The tendencies of the convergence and position-history charts reflect the benchmark function's performance during running time.
steps is expanded to cover the synchronization between neurons to access the outer systems. If the number of outer systems per neuron is increased, SPG processes will be completed in parallel and the time steps will decrease. Working with huge search spaces requires increasing the number of outer systems to handle a large number of neuron requests; this is clear in Fig. 4, which contains a network consisting of a large number of neurons (m = 503). Fig. 4 shows the complexity of the spiking neural network, the necessity of time configuration, and the power of signal to control the flow of spike trains. In addition, Fig. 5 illustrates the structure of the neurons at runtime.
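The neuron-to-outer-system synchronization described above, where neuron requests are served in arrival order by a shared pool of outer systems, can be sketched with threads and a thread-safe queue. The authors implemented SPG with Java concurrency; the following is an illustrative Python analogue with hypothetical names, standing in for the spike-level mechanism rather than reproducing it:

```python
import queue
import threading

# Illustrative sketch: neuron threads submit requests to a shared pool of
# "outer system" workers; a thread-safe queue serves them first come, first
# served, mirroring the synchronization described above.

def outer_system(request_q, results):
    while True:
        item = request_q.get()
        if item is None:               # poison pill: shut the worker down
            request_q.task_done()
            break
        neuron_id, a, b = item
        results[neuron_id] = a + b     # stand-in for one arithmetic operation
        request_q.task_done()

def run(num_outer_systems, requests):
    request_q = queue.Queue()
    results = {}
    workers = [threading.Thread(target=outer_system, args=(request_q, results))
               for _ in range(num_outer_systems)]
    for w in workers:
        w.start()
    for r in requests:                 # enqueued in arrival order (FCFS)
        request_q.put(r)
    request_q.join()                   # wait until all requests are served
    for _ in workers:
        request_q.put(None)
    for w in workers:
        w.join()
    return results

if __name__ == "__main__":
    print(run(2, [(0, 1, 2), (1, 3, 4), (2, 5, 6)]))
```

Increasing `num_outer_systems` lets independent requests be served concurrently, which is the effect the paper attributes to distributing outer systems per neuron.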
Figure 3: 2-D version, search history, and convergence curve of benchmark functions.
Theorem 4.1. For every SPG runtime, the time complexity of the random, arithmetic-operation, and objective-function SN P systems is T_OS = 6n/z, where n is the number of time units per outer system and z is the number of outer systems. n has the same value for all outer systems and is the highest time-unit value consumed by one of the outer systems.

Proof. Consider any Π_SPG that has at least the arithmetic operations (summation, subtraction, multiplication, and division), the random generator, and the objective-function SN P systems Π. Given that each Π consumes n time units, every six Π will consume 6n time units. Based on the previous numerical example, we ran the algorithm many times; in every run, the count of outer systems is doubled with the number z, as in Fig. 6, which shows the state of the time factor when increasing or decreasing the outer-systems count z. It can be observed that the runtime is decreased
Figure 4: The state of ΠGW at runtime (dispatcher neuron and output neuron labeled).
by the value of z with respect to the value of n, so the theorem holds.
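The bound of Theorem 4.1 can be checked numerically by doubling z, matching the trend in Fig. 6. The values of n and z below are illustrative, not taken from the paper:

```python
# Numerical check of Theorem 4.1: T_OS = 6n / z, where n is the number of
# time units per outer system and z the number of outer systems.
# Doubling z halves the outer-system time.

def outer_system_time(n, z):
    return 6 * n / z

if __name__ == "__main__":
    n = 10  # illustrative value
    for z in (1, 2, 4, 8):
        print(z, outer_system_time(n, z))  # 60.0, 30.0, 15.0, 7.5
```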
Theorem 4.2. The time complexity of the SPG Algorithm is (2 · T_GW + T_OS + T_i/o + T_sort) · N, where T_GW is the time units inside the wolf neurons, T_OS refers to the total time steps needed in the outer systems, T_i/o is the consumed time steps in the in/out neurons including the time of Π→a, T_sort represents the consumed time steps in the sorting SN P system, and N is the maximum number of algorithm iterations.

Proof. Let SPG have m wolves, which need 4m neurons: one central neuron plus three neurons for α, β, and δ per wolf. The Alpha, Beta, and Delta neurons fire their rules in parallel with each other and sequentially with their central neuron; therefore, the needed time is T_GW for the central neuron and T_GW for the remainder of the social hierarchy. The time steps consumed by the m wolves depend on T_OS. T_i/o is computed from the time units in the in/out neurons and the time of Π→a. The true parallelism in spiking neural P systems contributes with a computational power of solving an
Figure 5: The structure of the central, alpha, beta, and delta neurons in ΠGW representing five wolves at runtime (dispatcher neuron and the wolf neurons {Central, Alpha, Beta, Delta} labeled).
optimization problem in polynomial time with respect to the number of outer systems. Therefore, the SPG time complexity depends on the maximum number of algorithm iterations and T_OS.
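The bound of Theorem 4.2 can likewise be sketched as a direct computation; all component times below are illustrative placeholders, not measurements from the paper:

```python
# Sketch of the overall bound from Theorem 4.2:
#   T_SPG = (2 * T_GW + T_OS + T_i/o + T_sort) * N
# T_GW: time units inside wolf neurons; T_OS: outer-system time steps;
# T_i/o: in/out neuron time steps; T_sort: sorting-system time steps;
# N: maximum number of algorithm iterations.

def spg_time(t_gw, t_os, t_io, t_sort, n_iterations):
    return (2 * t_gw + t_os + t_io + t_sort) * n_iterations

if __name__ == "__main__":
    print(spg_time(t_gw=5, t_os=12, t_io=3, t_sort=4, n_iterations=50))  # 1450
```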
The time complexity of GWO is evaluated in Big-O notation, and this experiment is performed many times on different cases, as in Fig. 7. The time complexity of SPG in Fig. 7 is calculated based on the tracing in the numerical example.
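For reference, the underlying GWO position update that SPG traces at spike level (Mirjalili et al., 2014) can be sketched as follows. The function name is ours, and the sketch stands in for the spike-level encoding rather than reproducing it:

```python
import random

# Standard GWO position update (Mirjalili et al., 2014): each dimension of a
# wolf is pulled toward the alpha, beta, and delta leaders, and the new
# position is the average X = (X1 + X2 + X3) / 3.

def gwo_update(wolf, alpha, beta, delta, a):
    new_pos = []
    for j in range(len(wolf)):
        xs = []
        for leader in (alpha, beta, delta):
            r1, r2 = random.random(), random.random()
            A = 2 * a * r1 - a                # coefficient A = 2*a*r1 - a
            C = 2 * r2                        # coefficient C = 2*r2
            D = abs(C * leader[j] - wolf[j])  # distance to the leader
            xs.append(leader[j] - A * D)      # the X1, X2, X3 components
        new_pos.append(sum(xs) / 3.0)         # X = (X1 + X2 + X3) / 3
    return new_pos
```

With a = 0 the update collapses to the mean of the three leaders, which makes the averaging step easy to verify in isolation.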
Figure 6: The effect of increasing the number of outer systems on the running time (time usage in Π_SPG, measured in consumed time steps).

Figure 7: A time complexity comparison between GWO and SPG. The count of grey wolves increases linearly within the range [10, 10000] over five experiments.
5. Conclusions

In this paper, an optimization algorithm is constructed based on SN P systems, regarding their computational power of solving problems in polynomial time. The proposed algorithm relied on the parallelism feature to implement the mathematical model of GWO, simulating the social hierarchy, tracking, encircling, and attacking behaviors. SPG presented novel strategies for solving the non-determinism problem of SN P systems inside and among neurons. The first strategy, named the power of signal, controls the copying-spikes process over synapses and is used to differentiate between the arithmetic operations. The time control approach is the second strategy, applied during the firing of rules inside neurons. SPG is designed to deal with optimization problems within unknown search spaces and achieves satisfying results in getting a feasible solution in a limited time within the context of SN P systems, defining accordingly the elements of the model: neuron structure, spikes, rules, and their behavior. The theoretical and empirical experiments implied the successful running of the algorithm by applying a full optimization scenario. Moreover, the experiments proved the effectiveness of the parallelism feature on the running time. The strengths of the proposed algorithm include some strategies that contributed to the computational power of SPG, as follows:

• SN P systems work sequentially on the neuron level and in parallel on the system level. This point fits the natural behavior of grey wolves and of any other swarm, and it distinguishes the current study from previous studies regarding swarms.

• A determinism strategy based on time control is applied to handle the sequencing of the mathematical model of GWO. This strategy keeps up the correct working of GWO and is considered a definite solution for any application working with SPG.

• The concept of the power of signal is a new idea in SN P systems, proposed to handle the flow of spike trains among neurons. This concept solved a critical problem of copying spikes for all synapses: the power of signal protects spike trains from destruction before reaching the destination and is used in the dispatching base.

For future work, several research directions can be recommended, and limitations could be overcome:
• This study is considered a starting point for representing nature swarms via the parallelism feature instead of sequential processing. Swarms behave in a parallel and systematic way, so computational modeling of these mechanisms needs to simulate them as in nature to guarantee correct working, the best results, and reasonable time.

• Building bio/nature-inspired optimization algorithms via membrane computing is possible, and a unified mapping between these algorithms and membrane computing is another interesting future work. This idea appears as a possibility from the SPG formulation criteria (neuron structure, rules construction, the power of signal, avoiding non-determinism, and the parallelism feature).

• There is no strategy for assigning time units to rules in the proposed system, so we need to build a time-assigning strategy to control the distribution of time units within a maximum time limit. Timed SN P systems depend on a global clock to configure the time factor inside neurons. In the case of SPG, there are several SN P systems and timing is common between them, so a time-assigning strategy can be used as a timing system to handle the time units inside the SPG systems and to calculate the actual running time of the algorithm.

• SN P systems are considered a theoretical computational model and cannot be implemented by current devices. In contrast, we developed SPG using parallel programming to prove that the algorithm can reach an optimal solution.
Author Contributions

The main contributions of this paper are:
• Proposing novel strategies for solving the non-determinism problem of SN P systems.

• A new method named the power of signal is proposed to control the copying-spikes process between neurons and to differentiate between the arithmetic operations.

• A time control approach is proposed to avoid non-determinism inside neurons; it is applied to preserve the determinism feature during the firing of rules.

• The theoretical and empirical experiments proved that the algorithm can successfully halt, in addition to the effectiveness of the proposed neural systems in getting an optimal solution in a reasonable time.

References
Adl, A., Zein, M., & Hassanien, A. E. (2016). PQSAR: The membrane quantitative structure-activity relationships in cheminformatics. Expert Systems with Applications, 54, 219–227.

Alhazov, A., Freund, R., Ivanov, S., Oswald, M., & Verlan, S. (2018). Extended spiking neural P systems with white hole rules and their red–green variants. Natural Computing, 17, 297–310.

Amin, H., & Fujii, R. (2004). Spike train decoding scheme for a spiking neural network. In Proceedings of the 2004 IEEE International Joint Conference on Neural Networks (pp. 477–482). IEEE, volume 1.

Salomaa, A., Păun, Gh., & Rozenberg, G. (1999). DNA computing: New computing paradigms. Computers and Mathematics with Applications, 37, 134.

Bansal, J. C., Sharma, H., Jadon, S. S., & Clerc, M. (2014). Spider monkey optimization algorithm for numerical optimization. Memetic Computing, 6, 31–47.

Bi, W., & Zang, W. (2016). The research review of spiking neural membrane system. In Human Centered Computing (pp. 855–860). Springer Nature, volume 9567.

Brownlee, J. (2011). Clever algorithms: Nature-inspired programming recipes. Jason Brownlee.

Cabarle, F., Adorna, H., Martínez-del-Amor, M. A., & Pérez-Jiménez, M. J. (2012). Improving GPU simulations of spiking neural P systems. Romanian Journal of Information Science and Technology, 15, 5–20.

Cabarle, F. G. C., Adorna, H., & Martínez, M. A. (2011). A spiking neural P system simulator based on CUDA. In Membrane Computing (pp. 87–103). Springer Science + Business Media, volume 7184.

Cabarle, F. G. C., Adorna, H. N., Jiang, M., & Zeng, X. (2017). Spiking neural P systems with scheduled synapses. IEEE Transactions on NanoBioscience, 16, 792–801.

Chen, Z., Zhang, P., Wang, X., Shi, X., Wu, T., & Zheng, P. (2018). A computational approach for nuclear export signals identification using spiking neural P systems. Neural Computing and Applications, 29, 695–705.

Coello, C. A. C. (2004). Metaheuristics for multiobjective optimization. In Metaheuristics (pp. 308–384). Wiley-Blackwell.

Diaz, C., Frias, T., Sanchez, G., Perez, H., Toscano, K., & Duchen, G. (2017). A novel parallel multiplier using spiking neural P systems with dendritic delays. Neurocomputing, 239, 113–121.

Diaz-Pernil, D., & Gutiérrez-Naranjo, M. A. (2018). Semantics of deductive databases with spiking neural P systems. Neurocomputing, 272, 365–373.

Emary, E., Zawbaa, H. M., & Grosan, C. (2018). Experienced gray wolf optimization through reinforcement learning and neural networks. IEEE Transactions on Neural Networks and Learning Systems, 29, 681–694.

Freund, R., & Kogler, M. (2010). Computationally complete spiking neural P systems without delay: Two types of neurons are enough. In Membrane Computing (pp. 198–207). Springer Science + Business Media, volume 6501.

Gerstner, W., & Kistler, W. M. (2002). Spiking neuron models: Single neurons, populations, plasticity. Cambridge University Press.

He, S., Wu, Q., Wen, J., Saunders, J., & Paton, R. (2004). A particle swarm optimizer with passive congregation. Biosystems, 78, 135–147.

Ionescu, M., Păun, A., Păun, Gh., & Pérez-Jiménez, M. J. (2006a). Computing with spiking neural P systems: Traces and small universal systems. In DNA Computing (pp. 1–16). Springer Science + Business Media, volume 4287.

Ionescu, M., Păun, Gh., & Yokomori, T. (2006b). Spiking neural P systems. Fundamenta Informaticae, 71, 279–308.

Ionescu, M., & Sburlan, D. (2012). Some applications of spiking neural P systems. Computing and Informatics, 27, 515–528.

Jiang, K., Chen, W., Zhang, Y., & Pan, L. (2016). Spiking neural P systems with homogeneous neurons and synapses. Neurocomputing, 171, 1548–1555.

Juang, C.-F., Chen, T.-C., & Cheng, W.-Y. (2011). Speedup of implementing fuzzy neural networks with high-dimensional inputs through parallel processing on graphic processing units. IEEE Transactions on Fuzzy Systems, 19, 717–728.

Kim, C.-H., Khurshaid, T., Wadood, A., Farkoush, S. G., & Rhee, S.-B. (2018). Gray wolf optimizer for the optimal coordination of directional overcurrent relay. Journal of Electrical Engineering & Technology, 13, 1043–1051.

Krithivasan, K., Metta, V. P., & Garg, D. (2011). On string languages generated by spiking neural P systems with anti-spikes. International Journal of Foundations of Computer Science, 22, 15–27.

Liu, X., Li, Z., Liu, J., Liu, L., & Zeng, X. (2015). Implementation of arithmetic operations with time-free spiking neural P systems. IEEE Transactions on NanoBioscience, 14, 617–624.

Liu, X., Li, Z., Suo, J., Liu, J., & Min, X. (2014). A uniform solution to integer factorization using time-free spiking neural P system. Neural Computing and Applications, 26, 1241–1247.

Livingstone, D. J. (2008). Artificial neural networks: Methods and applications (Methods in Molecular Biology). Humana Press.

Maass, W. (1997). Networks of spiking neurons: The third generation of neural network models. Neural Networks, 10, 1659–1671.

Maass, W., & Bishop, C. M. (2001). Pulsed neural networks. MIT Press.

Medjahed, S., Saadi, T. A., Benyettou, A., & Ouali, M. (2016). Grey wolf optimizer for hyperspectral band selection. Applied Soft Computing, 40, 178–186.

Metta, V. P., & Kelemenová, A. (2015). Sorting using spiking neural P systems with anti-spikes and rules on synapses. In Membrane Computing (pp. 290–303). Springer Science + Business Media, volume 9504.

Metta, V. P., Krithivasan, K., & Garg, D. (2012). Computability of spiking neural P systems with anti-spikes. New Mathematics and Natural Computation, 8, 283–295.

Mirjalili, S., Mirjalili, S. M., & Lewis, A. (2014). Grey wolf optimizer. Advances in Engineering Software, 69, 46–61.

Pan, L., Zeng, X., & Zhang, X. (2011). Time-free spiking neural P systems. Neural Computation, 23, 1320–1342.

Pan, T., Shi, X., Zhang, Z., & Xu, F. (2018). A small universal spiking neural P system with communication on request. Neurocomputing, 275, 1622–1628.

Passino, K. M. (2010). Bacterial foraging optimization. International Journal of Swarm Intelligence Research, 1, 1–16.

Păun, Gh. (2000). Computing with membranes. Journal of Computer and System Sciences, 61, 108–143.

Păun, Gh. (2006). Introduction to membrane computing. In Applications of Membrane Computing (pp. 1–42). Springer.

Păun, Gh., Pérez-Jiménez, M. J., & Salomaa, A. (2007). Spiking neural P systems: An early survey. International Journal of Foundations of Computer Science, 18, 435–455.

Peng, H., Wang, J., Zhang, G., & Gheorghe, M. (2010). Timed spiking neural P systems. In 2010 IEEE Fifth International Conference on Bio-Inspired Computing: Theories and Applications (BIC-TA) (pp. 591–595). IEEE.

Peng, H., Yang, J., Wang, J., Wang, T., Sun, Z., Song, X., Luo, X., & Huang, X. (2017). Spiking neural P systems with multiple channels. Neural Networks, 95, 66–71.

Qais, M. H., Hasanien, H. M., & Alghuwainem, S. (2018). Augmented grey wolf optimizer for grid-connected PMSG-based wind energy conversion systems. Applied Soft Computing.

Ramanujan, A., & Krithivasan, K. (2013). Control languages associated with tissue P systems. In International Conference on Unconventional Computing and Natural Computation (pp. 186–197). Springer, volume 7956.

Sekulic, M., Pejic, V., Brezocnik, M., Gostimirovic, M., & Hadzistevic, M. (2018). Prediction of surface roughness in the ball-end milling process using response surface methodology, genetic algorithms, and grey wolf optimizer algorithm. Advances in Production Engineering & Management, 13, 18–30.

Song, T., Liu, X., & Zeng, X. (2015). Asynchronous spiking neural P systems with anti-spikes. Neural Processing Letters, 42, 633–647.

Song, T., Macías-Ramos, L. F., Pan, L., & Pérez-Jiménez, M. J. (2014). Time-free solution to SAT problem using P systems with active membranes. Theoretical Computer Science, 529, 61–68.

Song, T., & Pan, L. (2016). Spiking neural P systems with request rules. Neurocomputing, 193, 193–200.

Song, T., Pan, L., Jiang, K., Song, B., & Chen, W. (2013). Normal forms for some classes of sequential spiking neural P systems. IEEE Transactions on NanoBioscience, 12, 255–264.

Song, X., Wang, J., Peng, H., Ning, G., Sun, Z., Wang, T., & Yang, F. (2018). Spiking neural P systems with multiple channels and anti-spikes. Biosystems.

Venkatakrishnan, G., Rengaraj, R., & Salivahanan, S. (2018). Grey wolf optimizer to real power dispatch with non-linear constraints. CMES-Computer Modeling in Engineering & Sciences, 115, 25–45.

Wang, J., Shi, P., Peng, H., Pérez-Jiménez, M. J., & Wang, T. (2013). Weighted fuzzy spiking neural P systems. IEEE Transactions on Fuzzy Systems, 21, 209–220.

Wang, X., Song, T., Zheng, P., Hao, S., & Ma, T. (2017). Spiking neural P systems with anti-spikes and without annihilating priority. Romanian Journal of Science and Technology, 20, 32–41.

Wilusz, T. (1995). Neural networks - a comprehensive foundation. Neurocomputing, 8, 359–360.

Wu, T., Păun, A., Zhang, Z., & Pan, L. (2017). Spiking neural P systems with polarizations. IEEE Transactions on Neural Networks and Learning Systems, (pp. 1–12).

Wu, T., Wang, Y., Jiang, S., Su, Y., & Shi, X. (2018). Spiking neural P systems with rules on synapses and anti-spikes. Theoretical Computer Science, 724, 13–27.

Wu, T., Zhang, Z., Păun, Gh., & Pan, L. (2016). Cell-like spiking neural P systems. Theoretical Computer Science, 623, 180–189.

Yang, Z., & Liu, C. (2018). A hybrid multi-objective gray wolf optimization algorithm for a fuzzy blocking flow shop scheduling problem. Advances in Mechanical Engineering, 10, 1687814018765535.

Zhang, G., Cheng, J., Gheorghe, M., & Meng, Q. (2013). A hybrid approach based on differential evolution and tissue membrane systems for solving constrained manufacturing parameter optimization problems. Applied Soft Computing, 13, 1528–1542.

Zhang, X., Zeng, X., Luo, B., & Pan, L. (2014). On some classes of sequential spiking neural P systems. Neural Computation, 26, 974–997.

Zhang, Z., & Pan, L. (2016). Numerical P systems with thresholds. International Journal of Computers Communications & Control, 11, 292.