On languages generated by spiking neural P systems with weights

Xiangxiang Zeng (a), Lei Xu (b), Xiangrong Liu (a), Linqiang Pan (c, *)

(a) Department of Computer Science, Xiamen University, Xiamen 361005, Fujian, China
(b) Department of Computer Science, University of Oxford, Wolfson Building, Parks Road, Oxford OX1 3QD, UK
(c) Key Laboratory of Image Information Processing and Intelligent Control, School of Automation, Huazhong University of Science and Technology, Wuhan 430074, Hubei, China
(*) Corresponding author. Tel.: +86 2787556070. E-mail: [email protected] (L. Pan).

Article history: Received 23 February 2013; received in revised form 24 February 2014; accepted 5 March 2014.

Keywords: Natural computing; Membrane computing; Spiking neural P system; Turing universality; Recursively enumerable language

Abstract

Spiking neural P systems with weights (WSN P systems, for short) are a class of distributed parallel computing devices inspired by the way neurons communicate by means of spikes. It has been proved that WSN P systems can generate/recognize Turing computable sets of numbers (i.e., they are Turing universal as number generators/recognizers). In this work, we investigate the language generation power of WSN P systems, where the set of spike trains of halting computations of a given WSN P system constitutes the language generated by that system. Several relationships of the families of languages generated by WSN P systems with the family of finite languages and the family of regular languages are obtained. The family of recursively enumerable languages is characterized by projections of inverse-morphic images of languages generated by WSN P systems.

© 2014 Elsevier Inc. All rights reserved.

1. Introduction

Membrane computing is one of the recent branches of natural computing and has developed rapidly (already in 2003, ISI designated membrane computing a fast emerging research area in computer science; see http://esi-topics.com). The aim is to abstract computing ideas (data structures, operations with data, ways to control operations, computing models, etc.) from the structure and the functioning of a single cell and from complexes of cells, such as tissues and organs, including the brain. The various types of membrane systems are known as P systems, after Gheorghe Păun, who first conceived the model in 1998 [14] (the paper was first circulated as Turku Center for Computer Science (TUCS) Report 208, 1998). Three main classes of P systems have been investigated: cell-like P systems [14], tissue-like P systems [10], and neural-like P systems. Many variants of all these systems have been considered [1,5]; an overview of the field can be found in [15,17], with up-to-date information available at the membrane computing website (http://ppage.psystems.eu). For an introduction to membrane computing, one may consult [16,17].

The present work deals with a class of neural-like P systems, called spiking neural P systems (SN P systems, for short), introduced in [6]. SN P systems are a class of distributed and parallel computing models inspired by spiking neurons, which are currently much investigated in neural computing; see, e.g., [4,9]. In short, an SN P system consists of a set of neurons placed in the nodes of a directed graph, where neurons send signals (spikes, denoted by the symbol a in what follows) along synapses (arcs of the graph). The neurons contain spiking rules for emitting spikes and forgetting rules for removing spikes.

Spiking rules are of the form E/a^c → a; d, where E is a regular expression over {a} and c, d are natural numbers, c ≥ 1, d ≥ 0. If a neuron contains k spikes such that a^k ∈ L(E), k ≥ c, then it can consume c spikes and produce one spike, after a delay of d steps. This spike is sent to all neurons connected by an outgoing synapse to the neuron where the rule was applied. Forgetting rules are of the form a^s → λ, with the meaning that s ≥ 1 spikes are forgotten if the neuron contains exactly s spikes.

To date, over 100 papers have been devoted to the study of SN P systems. Among this research, SN P systems have been used for computing functions (e.g., [13,19]), generating sets of numbers (e.g., [3,6]), generating/recognizing languages (e.g., [2,26]), and solving computationally hard problems (e.g., [7]). Many variants of SN P systems have also been investigated (e.g., [12,18,21,25]).

In SN P systems, the applicability of a spiking rule is determined by checking the number of spikes in the neuron against the regular set associated with the rule. It was proved in [8] that it is at least NP-hard to decide whether a rule can be applied. In order to decide the applicability of rules more easily, spiking neural P systems with weights (WSN P systems, for short) were introduced as a variant of SN P systems [24]. Instead of counting spikes as in a usual SN P system, each neuron in a WSN P system contains a potential, expressed by a computable real number. A neuron fires when its potential equals a given value (called the threshold). The execution of a rule consumes part of the potential and produces a unit potential, which passes to neighboring neurons multiplied by the weights of the corresponding synapses.

WSN P systems can be used as number generators and as language generators. As a number generator, the output neuron is requested to fire exactly twice during the computation, and the result of a computation is the number of steps elapsed between the two moments when the output neuron fires. It was proved in [24] that WSN P systems as number generators can generate all Turing computable sets of numbers, and that a characterization of semilinear sets can be obtained under certain restrictive conditions. When WSN P systems are used as language generators, the following question remains open: what is their language generation power?

In this work, we address the language generation power of WSN P systems. Investigating the language generation power of a computation model is an important issue in the area of artificial intelligence [22,23], since it opens the possibility of understanding and using a computation model to accomplish meaningful linguistic goals systematically. With any halting computation of a WSN P system, a string is associated in the following way: the symbol 1 is associated with a step at which the output neuron fires, and the symbol 0 with a step at which the output neuron does not fire. The set of strings associated with all halting computations of a WSN P system is called the language generated by that system. We obtain some relationships of the families of binary languages generated by WSN P systems with the family of finite languages and the family of regular languages. The family of recursively enumerable languages is characterized by projections of inverse-morphic images of languages generated by WSN P systems.

The paper is organized as follows: in Section 2 we introduce the necessary prerequisites. The computation model investigated in the paper, spiking neural P systems with weights, is defined in Section 3. The language generation power of WSN P systems is investigated in Section 4. Finally, conclusions and some open problems for future research are presented in Section 5.

2. Preliminaries

It is useful for readers to have some familiarity with (basic elements of) language theory, e.g., from [20], as well as with basic membrane computing [15]. We introduce here only the necessary prerequisites.

By N, Z, Q, R_c we denote the sets of natural, integer, rational, and computable real numbers, respectively. For an alphabet V, V^* denotes the set of all finite strings of symbols from V, the empty string is denoted by λ, and the set of all nonempty strings over V is denoted by V^+. When V = {a}, we write simply a^* and a^+ instead of {a}^*, {a}^+.

A regular expression over an alphabet V is defined as follows: (i) λ and each a ∈ V is a regular expression; (ii) if E1, E2 are regular expressions over V, then (E1)(E2), (E1) ∪ (E2), and (E1)^+ are regular expressions over V; and (iii) nothing else is a regular expression over V. With each regular expression E we associate a language L(E), defined in the following way: (i) L(λ) = {λ} and L(a) = {a}, for all a ∈ V; (ii) L((E1) ∪ (E2)) = L(E1) ∪ L(E2), L((E1)(E2)) = L(E1)L(E2), and L((E1)^+) = (L(E1))^+, for all regular expressions E1, E2 over V. Unnecessary parentheses can be omitted when writing a regular expression, and (E)^+ ∪ {λ} can also be written as E^*.

By FIN, REG, RE we denote the families of finite languages, regular languages, and recursively enumerable languages, respectively. By NRE we denote the family of length sets of recursively enumerable languages.

In the characterization of recursively enumerable languages given in this work, the notion of counter machine is used. A counter machine is a construct M = (m, H, l_0, l_h, R), where m is the number of counters, H is the set of instruction labels, l_0 is the start label, l_h is the halt label (assigned to instruction HALT), and R is the set of instructions; each label from H labels only one instruction from R, thus precisely identifying it. The instructions are of the following forms (a small simulator sketch follows the list):

- l_i : (ADD(r), l_j, l_k) (add 1 to counter r, then go to one of the instructions with labels l_j, l_k),
- l_i : (SUB(r), l_j, l_k) (if counter r is non-zero, then subtract 1 from it and go to the instruction with label l_j; otherwise, go to the instruction with label l_k),
- l_h : HALT (the halt instruction).
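To make these definitions concrete, here is a minimal Python sketch of such a counter machine; the instruction encoding and the name run_counter_machine are our own illustrative choices, not notation from the paper.

```python
import random

def run_counter_machine(m, instructions, l0, lh, max_steps=10_000):
    """Generate one number: start at label l0 with all m counters at zero
    and return the value of counter 1 when the HALT label lh is reached."""
    counters = [0] * (m + 1)      # counters are 1-indexed; index 0 is unused
    label = l0
    for _ in range(max_steps):
        if label == lh:           # lh: HALT -- the result is in the first counter
            return counters[1]
        op, r, lj, lk = instructions[label]
        if op == "ADD":           # add 1, then go non-deterministically to lj or lk
            counters[r] += 1
            label = random.choice((lj, lk))
        else:                     # "SUB": subtract 1 and go to lj, or go to lk if zero
            if counters[r] > 0:
                counters[r] -= 1
                label = lj
            else:
                label = lk
    return None                   # did not halt within the step budget

# A one-instruction machine generating an arbitrary positive number in counter 1:
prog = {"l0": ("ADD", 1, "l0", "lh")}
print(run_counter_machine(1, prog, "l0", "lh"))
```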

A counter machine M computes (generates) a number n in the following way. The counter machine starts with all counters empty (i.e., storing the number zero). It applies the instruction with label l_0 and proceeds to apply instructions as indicated by the labels (and, in the case of SUB instructions, by the contents of the counters). If the counter machine reaches the halt instruction, then the number n stored at that time in the first counter is said to be computed by M. The set of all numbers computed by M is denoted by N(M). It is known that counter machines compute all sets of numbers which are Turing computable, hence they characterize NRE [11]. Without loss of generality, it can be assumed that l_0 labels an ADD instruction, that in the halting configuration all counters different from the first one are empty, and that the output counter is never decremented during the computation (its content is only added to).

We use the following convention. When we compare the power of two number generating/accepting devices D1 and D2, the number zero is ignored; that is, N(D1) = N(D2) if and only if N(D1) − {0} = N(D2) − {0} (this corresponds to the usual practice of ignoring the empty string in formal language and automata theory). For the convenience of readers, Table 1 lists the mathematical symbols used in this paper, together with their meanings.

3. Spiking neural P systems with weights

Spiking neural P systems with weights (WSN P systems, for short) were introduced in [24]; we recall their definition here. A WSN P system, of degree m ≥ 1, is a construct of the form

Π = (σ_1, ..., σ_m, syn, in, out), where:

- σ_1, ..., σ_m are neurons, of the form σ_i = (p_i, R_i), 1 ≤ i ≤ m, where:

(a) p_i ∈ R_c (the set of computable real numbers) is the initial potential in σ_i;
(b) R_i is a finite set of spiking rules of the form T_i/d_s → 1, s = 1, 2, ..., n_i, for some n_i ≥ 1, where T_i ∈ R_c, T_i ≥ 1, is the firing threshold potential of neuron σ_i, and d_s ∈ R_c with 0 < d_s ≤ T_i;

- syn ⊆ {1, 2, ..., m} × {1, 2, ..., m} × R_c is the set of synapses between neurons, where i ≠ j and w ≠ 0 for each (i, j, w) ∈ syn, and for each pair (i, j) ∈ {1, 2, ..., m} × {1, 2, ..., m} there is at most one synapse (i, j, w) in syn;
- in, out ∈ {1, 2, ..., m} indicate the input and output neurons, respectively.

The spiking rules are applied as follows. Assume that at a given moment neuron σ_i has potential p. If p = T_i, then any rule T_i/d_s → 1 ∈ R_i can be applied. The execution of this rule consumes an amount d_s of the potential (thus leaving the potential T_i − d_s) and prepares one unit potential (also called a spike) to be delivered to all neurons σ_j such that (i, j, w) ∈ syn. Specifically, each such neuron σ_j receives a quantity of potential equal to w, which is added to the potential existing in σ_j. Note that w can be positive or negative, hence the potential of the receiving neuron is increased or decreased, depending on w. The potential emitted by a neuron σ_i passes immediately to all neurons σ_j with (i, j, w) ∈ syn; that is, the transmission of potential takes no time. If a neuron σ_i fires and has no outgoing synapse, then the potential it emits is lost.

Note that each neuron σ_i has only one fixed threshold potential T_i. If a neuron has potential equal to its firing threshold, then all rules associated with this neuron are enabled, and one of them is non-deterministically chosen to be applied. In each step (a global clock is assumed, marking the time for the whole system, hence the functioning of the system is synchronized), each neuron uses at most one rule, non-deterministically chosen among its rules, provided that its potential equals the firing threshold; all neurons that have applicable rules must choose and apply a rule. If neuron σ_i has a potential p < T_i, then it returns to the resting potential 0. If neuron σ_i has a potential p > T_i, then the potential p remains unchanged.

Table 1. The list of notations used in this paper, with their meanings.

  Notation   Meaning
  N, Z, Q    The sets of natural, integer, and rational numbers
  R_c        The set of computable real numbers
  FIN        The family of finite languages
  REG        The family of regular languages
  RE         The family of recursively enumerable languages
  NRE        The family of length sets of recursively enumerable languages
  λ          The empty string


To sum up, if neuron σ_i has potential p and receives potential k at step t, then at step t + 1 it has potential p′, where:

  p′ = k,              if p < T_i;
  p′ = p − d_s + k,    if p = T_i and a rule T_i/d_s → 1 is applied;        (1)
  p′ = p + k,          if p > T_i.
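As a reading aid, the following Python sketch implements one synchronous transition exactly as in Eq. (1); the dictionary encoding of neurons and synapses is an assumption made for illustration, not the paper's notation.

```python
import random

def step(potentials, neurons, syn):
    """One synchronous transition of a WSN P system, following Eq. (1).
    potentials: dict i -> current potential p_i
    neurons:    dict i -> (T_i, [d_1, ..., d_n])  threshold and consumable amounts
    syn:        iterable of synapses (i, j, w) with real weights w"""
    incoming = dict.fromkeys(potentials, 0.0)
    consumed = {}                            # i -> amount d_s chosen at this step
    for i, p in potentials.items():
        T_i, ds = neurons[i]
        if p == T_i:                         # every rule T_i/d_s -> 1 is enabled
            consumed[i] = random.choice(ds)  # non-deterministic choice of one rule
            for src, j, w in syn:
                if src == i:
                    incoming[j] += w         # one unit potential, scaled by weight w
    new = {}
    for i, p in potentials.items():
        T_i, _ = neurons[i]
        k = incoming[i]
        if i in consumed:                    # p = T_i and a rule was applied
            new[i] = p - consumed[i] + k
        elif p < T_i:                        # below threshold: reset, keep only input
            new[i] = k
        else:                                # above threshold: potential accumulates
            new[i] = p + k
    return new, set(consumed)

# The output neuron of Fig. 1: threshold 2, rules 2/2 -> 1 and 2/1 -> 1.
pots, fired = step({"out": 2.0}, {"out": (2.0, [2.0, 1.0])}, syn=[])
```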

The configuration of the system Π is described by the distribution of potentials in the neurons of Π. Thus, the initial configuration of the system is the tuple ⟨p_1, p_2, ..., p_m⟩. Using the rules as described above, one can define transitions among configurations. Any sequence of transitions starting from the initial configuration is called a computation. A computation halts if it reaches a configuration where no rule can be applied.

Computation results can be defined in several ways. In this work, with any halting computation, a spike train is associated: the binary sequence with occurrences of 1 indicating the time instances when the output neuron sends one unit potential (a spike) out of the system (we also say that the system itself fires at that time), and with occurrences of 0 indicating the time instances when the output neuron does not fire. We consider the spike train itself as the result of a computation. The set of all spike trains generated by a system is called the language generated by the system. For a WSN P system Π, we denote by L(Π) the language generated by Π, and by LW_X SNP_m the family of all languages generated by WSN P systems with at most m ≥ 1 neurons, using weights, thresholds, and amounts of consumed potential in the rules taken from the set X, for X ∈ {N, Z, Q, R_c}. When the number of neurons is not bounded, the subscript m is replaced with ∗.

In the next sections, WSN P systems are represented graphically, which may be easier to follow than a symbolic representation. An oval with the initial potential and spiking rules inside represents a neuron, and arrows between these ovals represent the synapses; numbers on the arrows indicate the weights. The input neuron has an incoming arrow and the output neuron has an outgoing arrow, suggesting their communication with the environment. When the weight on a synapse is one, it is omitted in the graphical representation.

4. The language generation power of WSN P systems

In this section, we investigate the language generation power of WSN P systems, including the relationships with finite languages, regular languages, and recursively enumerable languages.

4.1. Relationships with finite languages

Theorem 4.1. There is no WSN P system that can generate a language of the form {0x, 1y} (x and y are arbitrary strings over {0, 1}).

Proof. In order to generate a string 1y, the output neuron must be activated in the initial configuration. In such a case, no string of the form 0x can be generated, because the output neuron always fires at the first step. So, no language of the form {0x, 1y} belongs to LW_{R_c} SNP_∗. □

The above theorem shows that there are simple languages that cannot be generated by WSN P systems, which motivates us to modify the way a computation result is defined.

Definition 4.1. Let Π be a WSN P system; we consider only the computations which fire at least once. Let C be such a computation in Π, and let 0^b 1x (b ≥ 0) be the spike train generated by C. The result of the computation C is defined as x.

In Definition 4.1, the prefix of the spike train up to and including the first 1 is discarded from the result of a computation. The language generated by a WSN P system Π under Definition 4.1 is denoted by L_dis(Π), and the corresponding family of languages generated by WSN P systems is denoted by L_dis W_X SNP_m, where X and m have the same meanings as in Section 3.

We give the following example to clarify Definition 4.1. The WSN P system in Fig. 1 generates the language {0, 1}. Neuron σ_out non-deterministically chooses rule 2/2 → 1 or 2/1 → 1. If the former is applied, the spike train emitted by the output neuron is 10; otherwise, the emitted spike train is 11. By Definition 4.1, the part of the spike train up to and including the first 1 is discarded; so the leading symbol 1 is removed from the spike trains 10 and 11, and the language generated by the system is {0, 1} (a one-line helper implementing this convention is sketched below).

This example shows that modifying the way a computation result is defined can change the language generated by a WSN P system, and thus has the potential to improve the language generation power of WSN P systems. In what follows, we investigate the language generation power of WSN P systems under Definition 4.1.
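The convention of Definition 4.1 is easy to state in code; the name dis below is our own, chosen to echo the notation L_dis.

```python
def dis(spike_train: str):
    """Result of a computation under Definition 4.1: discard the prefix up to
    and including the first 1; a computation that never fires yields no result."""
    first = spike_train.find("1")
    return spike_train[first + 1:] if first != -1 else None

# The system of Fig. 1 emits the spike trains 10 and 11, hence it generates:
assert {dis("10"), dis("11")} == {"0", "1"}
```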

Fig. 1. The WSN P system generating language {0, 1}.

Theorem 4.2. Let L = {x}, x ∈ {0, 1}^+, |x|_1 = r ≥ 0. Then L ∈ L_dis W_X SNP_{r+4}, where X ∈ {Z, Q, R_c}.

Proof. Given a string x which contains r occurrences of 1, assume that these occurrences are at positions n_1, n_2, ..., n_r (with n_i < n_{i+1} for all i > 0), that is, x = 0^{n_1−1} 1 0^{n_2−n_1−1} 1 ... 1 0^{|x|−n_r}. The string x can be generated by the system in Fig. 2. Initially, neuron σ_{a1} has potential 1, neuron σ_{ti} (1 ≤ i ≤ r) has potential n_i, neuron σ_{t_{r+1}} has potential |x|, and neuron σ_out has potential 1. The system works as follows. At the first step, neuron σ_out fires; according to Definition 4.1, from this step on the spike train is counted into the result. Neuron σ_{a1} fires at each step, decreasing the potential in σ_{ti} (1 ≤ i ≤ r + 1) by one; thus neuron σ_{ti} (1 ≤ i ≤ r) fires at step n_i, which leads to the firing of neuron σ_out at step n_i + 1. Finally, neuron σ_{t_{r+1}} fires at step |x|, sending potential 1 to neurons σ_{a1} and σ_{a2}, which stop working at the next step; in this way, the system halts at step |x| + 1. By Definition 4.1, the language generated by the system is {x} (the firing schedule is replayed in the sketch after Theorem 4.4). □

Actually, WSN P systems can generate arbitrary finite languages.

Theorem 4.3. Let L ⊆ {0, 1}^+ be any finite language. Then L ∈ L_dis W_X SNP_∗, for X ∈ {Z, Q, R_c}.

Proof. Given a language L = {x_1, ..., x_s}, x_i ∈ {0, 1}^+, 1 ≤ i ≤ s, we show that the WSN P system Π shown in Fig. 3 generates L. Specifically, for each string x_i we have a "subsystem" M_i (in a dashed box) composed of neurons σ_{a1}^{(i)}, σ_{a2}^{(i)}, σ_{t1}^{(i)}, σ_{t2}^{(i)}, ..., σ_{out}^{(i)}, with the same synapses and rules as those in Fig. 2. Initially, all neurons contain the resting potential 0, except that neuron σ_{d1} has potential 1, neuron σ_{d2} has potential 2, neuron σ_{b1}^{(i)} (1 ≤ i ≤ s) has potential i + 1, and the neurons σ_{t1}^{(i)}, σ_{t2}^{(i)}, ... (1 ≤ i ≤ s) in the dashed boxes have potentials as in Fig. 2.

The system works as follows. All neurons behave deterministically except for neuron σ_{d2}. As long as neuron σ_{d2} uses its rule 2/1 → 1, it can fire again in the next step: one unit of potential remains inside and a further one is received from neuron σ_{d1}, hence its potential is restored to 2, which equals its firing threshold. In turn, as long as σ_{d2} fires, the potential in neuron σ_{b1}^{(i)} (1 ≤ i ≤ s) is decreased by one unit at each step. At step i, neuron σ_{b1}^{(i)} has potential one and fires, sending one unit of potential to neuron σ_{b2}^{(i)}. However, neuron σ_{b2}^{(i)} can fire if and only if neuron σ_{d2} fires at step i and does not fire at step i + 1; this happens exactly when neuron σ_{d2} uses its rule 2/2 → 1 at step i (rule 2/2 → 1 consumes the two units of potential in neuron σ_{d2}, and only one unit is received from neuron σ_{d1}, which is then removed, so σ_{d2} remains idle from step i + 1 on). Therefore, non-deterministically, exactly one of the neurons σ_{b2}^{(i)} (1 ≤ i ≤ s) is activated, sending one unit of potential to neurons σ_{a1}^{(i)}, σ_{a2}^{(i)}, σ_{out}^{(i)}. Having potential one in neurons σ_{a1}^{(i)}, σ_{a2}^{(i)}, σ_{out}^{(i)}, the corresponding subsystem in the dashed box starts to work in the same way as the system in Fig. 2, hence it generates the string x_i. Therefore, L_dis(Π) = {x_1, ..., x_s}, and L ∈ L_dis W_X SNP_∗. □

Theorem 4.4. L_dis W_X SNP_∗ − FIN ≠ ∅, for X ∈ {Z, Q, R_c}.

Proof. It is not difficult to see that the WSN P system shown in Fig. 4 generates the infinite language L(1^*). □
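The firing schedule used in the proof of Theorem 4.2 can be replayed in a few lines; this is an illustrative reconstruction of the behavior of the Fig. 2 system, with our own function names.

```python
def firing_positions(x: str):
    """The positions n_1 < ... < n_r of the 1-bits of x, counted from 1, used
    to initialize the potentials of the neurons t_1, ..., t_r in Fig. 2."""
    return [i + 1 for i, b in enumerate(x) if b == "1"]

def emitted_train(x: str):
    """Output of the Fig. 2 system: the output neuron fires at step 1 and at
    step n_i + 1 for each n_i, and the system halts at step |x| + 1;
    Definition 4.1 then strips the prefix ending at the first 1."""
    fires = {1} | {n + 1 for n in firing_positions(x)}
    train = "".join("1" if t in fires else "0" for t in range(1, len(x) + 2))
    return train[1:]

assert emitted_train("0110") == "0110"   # the result is x itself
```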

Fig. 2. The WSN P system generating L = {x}, x ∈ {0, 1}^+, |x|_1 = r ≥ 0.

Fig. 3. The WSN P system generating a finite language.


Fig. 4. The WSN P system generating an infinite language.

4.2. Relationships with regular languages

We first consider the language generation power of WSN P systems with natural numbers as synapse weights, thresholds, and potentials.

Theorem 4.5. L_dis W_N SNP_∗ ⊆ REG.

Proof. The inclusion L_dis W_N SNP_∗ ⊆ REG follows from the following observation. Because all weights are positive, the potential accumulated in a neuron can decrease only if it is smaller than or equal to the firing threshold of that neuron (see Eq. (1) in Section 3). In other words, if a neuron σ_i accumulates a potential strictly larger than the threshold T_i, then the potential remains larger than T_i forever (no rule can ever be applied in σ_i again). Therefore, the configuration of a WSN P system Π = (σ_1, ..., σ_m, syn, out) can be described by a vector C = ⟨s_1, ..., s_m⟩, where s_i ∈ {0, 1, ..., T_i} ∪ {T̄_i}, with T̄_i a symbol indicating that the potential of σ_i is greater than the threshold T_i. If a new amount of potential is brought to a neuron whose content is already described by T̄_i, then the same symbol T̄_i describes the potential of that neuron at the next step. The set C of such configuration vectors is finite. Let C_0 be the initial configuration of the system Π.

For a WSN P system Π, we construct the right-linear grammar G = (N, {0, 1}, (C_0, 0), P), where N = C × {0, 1} and P contains the following rules:

- (C, 0) → (C′, 0), for C, C′ ∈ C such that there is a transition C ⇒ C′ in Π during which the output neuron does not fire;
- (C, 0) → (C′, 1), for C, C′ ∈ C such that there is a transition C ⇒ C′ in Π during which the output neuron fires;
- (C, 1) → 0(C′, 1), for C, C′ ∈ C such that there is a transition C ⇒ C′ in Π during which the output neuron does not fire;
- (C, 1) → 1(C′, 1), for C, C′ ∈ C such that there is a transition C ⇒ C′ in Π during which the output neuron fires;
- (C, 0) → λ, for C ∈ C which is a halting configuration in Π;
- (C, 1) → λ, for C ∈ C which is a halting configuration in Π.

The way the derivation is controlled by the nonterminals from C × {0, 1} ensures that L_dis(Π) is the language generated by the right-linear grammar G. Therefore, L_dis(Π) is regular (a sketch of the finiteness argument follows the proof). □
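The finiteness argument at the core of this proof can be phrased as a small abstraction function; the following fragment is a sketch under the theorem's assumption of natural-number weights, with our own naming.

```python
def abstract(potentials, thresholds):
    """Abstraction from the proof of Theorem 4.5: with positive weights, a
    potential strictly above T_i can never decrease again, so all such values
    collapse to the single symbol 'OVER'; each component then ranges over the
    finite set {0, 1, ..., T_i} plus 'OVER'."""
    return tuple(p if p <= T else "OVER" for p, T in zip(potentials, thresholds))

# At most prod_i (T_i + 2) abstract configurations exist, so the transitions
# between them form a finite automaton whose moves are exactly the
# right-linear productions listed above.
assert abstract((0, 3, 7), thresholds=(2, 3, 5)) == (0, 3, "OVER")
```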

If synapse weights, thresholds, and potentials of WSN P systems may be integers, then the language generation power of WSN P systems strictly increases.

Theorem 4.6. There exists a language L ∈ L_dis W_Z SNP_∗ such that L ∉ REG.

Proof. The system Π depicted in Fig. 5 generates the non-regular language L_dis(Π) = {0^n 1^n | n ≥ 2}. Initially, neuron σ_{a1} has potential 2, neuron σ_{a2} has potential 1, neuron σ_d has potential 3, and neuron σ_out has potential 1. The system works as follows. If neuron σ_{a1} applies rule 2/1 → 1, it can fire again in the next step: one unit of potential remains inside and a further one is received from σ_{a2}, hence the potential of neuron σ_{a1} is restored to 2 (which equals its threshold). In turn, as long as σ_{a2} fires, neurons σ_{c1} and σ_{c2} cannot fire, and two units of potential are sent to neuron σ_d at each step. Assume that neuron σ_{a1} applies rule 2/1 → 1 for n − 2 (n ≥ 2) steps. At step n − 1, rule 2/2 → 1 is applied, consuming the two units of potential in neuron σ_{a1}; only one unit of potential is received from neuron σ_{a2}, which is then removed, and σ_{a1} remains idle from step n on. At step n, neuron σ_{a2} fires while neuron σ_{a1} does not, so neurons σ_{c1} and σ_{c2} receive potential one

Fig. 5. The WSN P system for {0^n 1^n | n ≥ 2}.


and fire at step n + 1. During the next n + 1 steps (from step n + 1 to step 2n + 1), neurons σ_{c1} and σ_{c2} feed each other, sending potential one to neuron σ_out and potential −2 to neuron σ_d, thus activating neuron σ_out and decreasing the potential in neuron σ_d by two at each step. At step 2n, neuron σ_d has only potential one and fires. Receiving potential one from neuron σ_{c2} and potential −1 from neuron σ_d, neuron σ_out has potential 0 and does not fire at step 2n + 1. From the above description, we can see that the spike train emitted by neuron σ_out is 1 0^n 1^n. Thus, the language generated by the system Π under Definition 4.1 is {0^n 1^n | n ≥ 2}. □

4.3. A characterization of recursively enumerable languages

In this subsection, we give a characterization of recursively enumerable languages by WSN P systems.

Theorem 4.7. For an alphabet V = {e_1, e_2, ..., e_k} and any language L ⊆ V^*, L ∈ RE, there are a morphism h_1 : (V ∪ {y, z})^* → {0, 1}^* and a projection h_2 : (V ∪ {y, z})^* → V^* such that there is a WSN P system Π with integer weights satisfying L = h_2(h_1^{−1}(L_dis(Π))).

Proof. Let V = {e_1, e_2, ..., e_k}, and let L ⊆ V^* be a recursively enumerable language. Consider the morphisms defined as follows: h_1(e_i) = 1 0^i 1 (1 ≤ i ≤ k), h_1(y) = 0, h_1(z) = 01, and h_2(e_i) = e_i (1 ≤ i ≤ k), h_2(y) = h_2(z) = λ.

For a string x ∈ V^*, by val_k(x) we denote the value of x in base k + 1; for example, for the string s = e_2 e_4 e_1, val_k(s) is 2(k+1)^2 + 4(k+1)^1 + 1(k+1)^0. In this way, the symbols e_1, e_2, ..., e_k are treated as the digits 1, 2, ..., k, and a string x corresponds to the natural number val_k(x). We extend this notation in the natural way to sets of strings. A language L ⊆ V^* is recursively enumerable if and only if val_k(L) is a recursively enumerable set of numbers; in turn, a set of numbers is recursively enumerable if and only if it can be accepted by a deterministic counter machine. Therefore, L is recursively enumerable if and only if val_k(L) can be accepted by a deterministic counter machine (val_k and the morphisms h_1, h_2 are written out in the code sketch below).

Let M be a counter machine such that N(M) = val_k(L). A WSN P system Π that generates L is shown in Fig. 6. Before showing how the system Π works, it is worth noting that the construction relies on the fact that a counter machine can be simulated by a WSN P system. In Fig. 6, the subsystem M is designed to simulate the counter machine M, which accepts the recursively enumerable set of numbers. The subsystem M′ corresponds to another counter machine M′, whose role is to produce the number val_k(x) and put it in the common counter c_1, for each x ∈ L. After val_k(x) is loaded in counter c_1, the counter machine M is triggered, and the system passes to the phase of checking whether the number val_k(x) stored in counter c_1 is accepted.

Let us write M = (m, H, l_0, l_h, I) and M′ = (m′, H′, l′_0, l′_h, I′), where H ∩ H′ = ∅. As in the usual way of simulating counter machines by WSN P systems, each counter r of M is associated with a neuron σ_r in the subsystem M, and each counter r′ of M′ is associated with a neuron σ_{r′} in the subsystem M′; if a counter contains the number n, then the associated neuron has potential 2n + 2. Each label of the two counter machines is associated with a neuron, and some auxiliary neurons are also considered.
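For concreteness, val_k and the morphisms h_1, h_2 of the proof can be written out as follows; representing the symbols e_i by the strings "e1", "e2", ... is an assumption made for illustration.

```python
def val_k(x, alphabet):
    """Base-(k+1) value of a string over V = {e_1, ..., e_k}, reading e_i as
    the digit i; e.g. with k = 4, val_k(e2 e4 e1) = 2*5**2 + 4*5 + 1 = 71."""
    k = len(alphabet)
    v = 0
    for sym in x:
        v = v * (k + 1) + alphabet.index(sym) + 1
    return v

def h1(w, k):
    """The morphism h_1 of Theorem 4.7: e_i -> 1 0^i 1, y -> 0, z -> 01."""
    img = {f"e{i}": "1" + "0" * i + "1" for i in range(1, k + 1)}
    img.update({"y": "0", "z": "01"})
    return "".join(img[s] for s in w)

def h2(w):
    """The projection h_2: erase the auxiliary symbols y and z."""
    return [s for s in w if s not in ("y", "z")]

w = ["e2", "y", "e4", "y", "e1", "y", "z"]   # an h1-preimage of a spike train X
assert h1(w, 4) == "1001" "0" "100001" "0" "101" "0" "01"
assert h2(w) == ["e2", "e4", "e1"]
assert val_k(h2(w), ["e1", "e2", "e3", "e4"]) == 71
```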
In general, the system Π works as follows, by means of the five operations described below.

Fig. 6. The WSN P system from Theorem 4.7.


1. The output neuron fires in the first time unit. In the initial configuration, all neurons have potential 0, except that neurons σ_out, σ_{a1}, σ_{a2}, σ_{a3} have potentials 1, k, 2, 1, respectively, and the neurons associated with the counters have potential 2. In the first step, neuron σ_out fires and sends one unit of potential to the environment.

2. Introduce an arbitrary number i (1 ≤ i ≤ k) in neuron σ_{c0} and output 0^i 1. Neurons σ_{a1}, σ_{a2}, σ_{a3} load neuron σ_{c0} non-deterministically with an arbitrary potential 2i (corresponding to the natural number i). The non-determinism originates from the choice between the rules 2/1 → 1 and 2/2 → 1 in neuron σ_{a2}. If rule 2/1 → 1 is applied at step t, neuron σ_{a2} can fire again at step t + 1: one unit of potential remains inside and a further one is received from neuron σ_{a3}, hence the potential of neuron σ_{a2} is restored to its threshold 2. In turn, as long as σ_{a2} fires, the potential in neuron σ_{c0} is increased by two units at each step, neuron σ_out cannot fire, and the potential in neuron σ_{a1} is decreased by one unit at each step. The work of neurons σ_{a2} and σ_{a3} stops when neuron σ_{a2} uses rule 2/2 → 1 or when neuron σ_{a1} fires (at each step, neuron σ_{a1} receives potential −1; it reaches potential 1 after k − 1 steps and then fires, which ensures that neuron σ_{a2} can fire for at most k steps). After that, only neuron σ_{a3} fires for one more step: neuron σ_out receives potential 1 and fires, neuron σ_{l′_0} receives potential 1 and fires at the next step, and in this way the system starts to simulate the counter machine M′. So that readers can check this process step by step, the potentials of the relevant neurons at each step are shown in Table 2.

3. Multiply the number stored in neuron σ_{c1} by k + 1, and add the number from neuron σ_{c0}. The subsystem corresponding to the counter machine M′ performs the following operations: multiply the number stored in neuron σ_{c1} by k + 1, then add the number from neuron σ_{c0} (initially, these two neurons each have potential 2, meaning that their corresponding counters both hold the number 0). Specifically, if neuron σ_{c0} holds potential 2i + 2 and neuron σ_{c1} holds potential 2n + 2, for some i ≥ 0, n ≥ 0, then this operation ends with potential 2(n(k+1) + i) + 2 in neuron σ_{c1} and potential 2 in neuron σ_{c0}.

4. Load the number val_k(x) in neuron σ_{c1}. In order to produce the number val_k(x) in the common neuron σ_{c1}, the system either repeats operations 1, 2, and 3, or, non-deterministically, stops increasing the potential of neuron σ_{c1} and passes to operation 5. To this aim, we use the non-deterministic choice between the rules 2/2 → 1 and 2/1 → 1 in σ_{a5}. Because neuron σ_{a5} has potential 2 (received from neuron σ_{l′_h} at the previous step), it must non-deterministically choose one of these rules (assume this happens at step t). There are two cases.

(1) If rule 2/2 → 1 is applied at step t, then neuron σ_{a5} consumes all of its potential when spiking. Receiving potential 1 from neuron σ_{a4} at step t, neuron σ_{a5} has potential 1 at step t + 1, which is less than its threshold 2, hence it returns to the resting potential. At step t, neuron σ_{a7} receives potential −1 from neuron σ_{a5}; its potential is below its firing threshold 1, so it returns to the resting potential at step t + 1. At step t, neuron σ_{a8} receives potential 0 in total (potential −1 from neuron σ_{a4} and potential 1 from neuron σ_{a5}), hence its potential is still 0. At step t, neuron σ_{a6} receives potential 2 (one unit from neuron σ_{a4} and one from neuron σ_{a5}), and it fires at step t + 1. Receiving potential 1 from neuron σ_{a6}, neuron σ_{a7} spikes at step t + 2. At step t + 3, neurons σ_out, σ_{a1}, σ_{a2}, σ_{a3} receive potentials 1, k, 2, 1, respectively. In this way, the system returns to a configuration identical to the initial one except for neuron σ_{c1}, so the system Π can repeat operations 1, 2, and 3. The evolution of the potentials during this process is shown in Table 3.

(2) If rule 2/1 → 1 is applied at step t, then neuron σ_{a5} consumes potential 1 when spiking. Receiving one unit of potential from neuron σ_{a4} at step t, neuron σ_{a5} still has potential 2 at step t + 1, and fires again. At step t, neuron σ_{a7} receives potential −1 from neuron σ_{a5}; its potential is below its threshold, so it returns to the resting potential. At step t + 1, neuron σ_{a7} receives potential 0 in total (−1 from neuron σ_{a5} and 1 from neuron σ_{a6}), so its potential is still zero. At step t, neuron σ_{a8} receives potential 0 in total (potential −1 from neuron σ_{a4} and potential 1 from neuron σ_{a5}), so its potential remains 0. At step t + 1, neuron σ_{a8} receives potential 1 from neuron σ_{a5}, and it fires at step t + 2. Receiving potential 1 from neuron σ_{a8} at step t + 2, neuron σ_{l_0} becomes active, and the system Π starts to simulate the counter machine M. The evolution of the potentials during this process is shown in Table 4.

From the above description, we see that the system Π passes non-deterministically to operation 1 or to operation 5. After the last increase of the potential in neuron σ_{c1}, it holds val_k(x) for a string x ∈ V^+ such that the string produced by the system up to this point is of the form 1 0^{i_1} 1 0^{j_1} 1 0^{i_2} 1 0^{j_2} 1 ... 1 0^{i_m} 1 0^{j_m}, with 1 ≤ i_n ≤ k and j_n ≥ 1, for all 1 ≤ n ≤ m (this loading loop is compressed into a short sketch below).
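Operations 1–4 amount to the following loading loop (an illustrative compression of the construction; in the system itself the value val is represented as the potential 2·val + 2 of the common neuron c_1):

```python
import random

def load_val(k, x=None):
    """Repeatedly choose a digit i in 1..k (operations 1-2), set
    val := val*(k+1) + i (operation 3), and non-deterministically either loop
    again or stop (operation 4).  Fixing the digit sequence x reproduces
    val_k(x) before the system passes to operation 5."""
    val = 0
    digits = iter(x) if x is not None else None
    while True:
        i = next(digits, None) if digits is not None else random.randint(1, k)
        if i is None:
            break                        # the fixed digit sequence is exhausted
        val = val * (k + 1) + i
        if digits is None and random.random() < 0.5:
            break                        # pass to operation 5
    return val

assert load_val(4, [2, 4, 1]) == 2 * 5**2 + 4 * 5 + 1
```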

Table 2. The potentials in the neurons during operations 1 and 2, assuming that neuron σ_{a2} uses rule 2/2 → 1 at step i.

  Neuron      Step 1   Step 2   ...   Step i   Step i+1   Step i+2
  σ_{a1}      k        k−1      ...   k−i+1    k−i        k−i−1
  σ_{a2}      2        2        ...   2        1          0
  σ_{a3}      1        1        ...   1        1          0
  σ_out       1        0        ...   0        0          1
  σ_{c0}      0        2        ...   2i−2     2i         2i
  σ_{l′_0}    0        0        ...   0        0          1


Table 3. The potentials in the neurons during operation 4, in the case that neuron σ_{a5} uses rule 2/2 → 1 at step t.

  Neuron      t−1   t    t+1   t+2   t+3
  σ_{l′_h}    1     0    0     0     0
  σ_{a4}      0     1    0     0     0
  σ_{a5}      0     2    1     0     0
  σ_{a6}      0     0    2     0     0
  σ_{a7}      0     0    −1    1     0
  σ_{a8}      0     0    0     0     0
  σ_{l_0}     0     0    0     0     0
  σ_{a1}      0     0    0     0     k
  σ_{a2}      0     0    0     0     2
  σ_{a3}      0     0    0     0     1
  σ_out       0     0    0     0     1

Table 4. The potentials in the neurons during operation 4, in the case that neuron σ_{a5} uses rule 2/1 → 1 at step t.

  Neuron      t−1   t    t+1   t+2   t+3
  σ_{l′_h}    1     0    0     0     0
  σ_{a4}      0     1    0     0     0
  σ_{a5}      0     2    2     0/1   0
  σ_{a6}      0     0    2     1     0
  σ_{a7}      0     0    −1    0     0
  σ_{a8}      0     0    0     1     0
  σ_{l_0}     0     0    0     0     1
  σ_{a1}      0     0    0     0     0
  σ_{a2}      0     0    0     0     0
  σ_{a3}      0     0    0     0     0
  σ_out       0     0    0     0     0
5. Recognize the number val_k(x) in neuron σ_{c1}. After the number val_k(x) is stored in neuron σ_{c1}, the system starts to simulate the work of the counter machine M in recognizing the number val_k(x). During this process the system outputs no spike; one final spike is emitted if and only if the machine M halts, i.e., if and only if the number val_k(x) is accepted by the counter machine M, which means that x ∈ L. After emitting this last spike, the whole system halts. Therefore, the previous string 1 0^{i_1} 1 0^{j_1} 1 0^{i_2} 1 0^{j_2} 1 ... 1 0^{i_m} 1 0^{j_m} is continued with a suffix of the form 0^s 1, for some s ≥ 1.

From the previous description of the work of Π, it is clear that the system Π halts if and only if x ∈ L. The spike train produced by the system Π is of the form X = 1 0^{i_1} 1 0^{j_1} 1 0^{i_2} 1 0^{j_2} 1 ... 1 0^{i_m} 1 0^{j_m} 0^s 1. Moreover, it is easy to see that x = h_2(h_1^{−1}(X)): we have h_1^{−1}(X) = e_{i_1} y^{j_1} e_{i_2} y^{j_2} ... e_{i_m} y^{j_m+s−1} z (this is the only way to correctly cover the string X with blocks of the forms h_1(e_i), h_1(y), h_1(z)); the projection h_2 then removes the auxiliary symbols y, z.

In order to complete the proof, we have to show how the two counter machines are simulated, using the common neuron σ_{c1} but without mixing the computations. To this aim, we need to consider how their ADD and SUB instructions are simulated; the corresponding modules are given in Figs. 7 and 8. Because the constructions are similar to those used in the proof of Theorem 6.1 of [24], we do not enter into details here. However, since the counter machines M and M′ have the common counter c_1, we have to ensure that no neuron used for simulating an instruction (ADD or SUB) of M can fire while the system simulates an instruction of M′, and vice versa. Because it is easy to see that there is no interference between the neurons used in the ADD and the SUB modules, we only need to consider the simulations of SUB instructions of the two counter machines acting on the same counter. Specifically, if there are a SUB instruction l_i : (SUB(c_1), l_j, l_k) of M and a SUB instruction l′_i : (SUB(c_1), l′_j, l′_k) of M′, then neurons σ_{l′_{i,2}} and σ_{l′_{i,3}} receive potentials 1 and −1 from neuron σ_{c1}, respectively, while the system is simulating the instruction l_i : (SUB(c_1), l_j, l_k).

Fig. 7. Module ADD (simulating l_i : (ADD(r), l_j)).


Fig. 8. Module SUB (simulating l_i : (SUB(r), l_j, l_k)).

After receiving these potentials, neurons σ_{l′_{i,2}} and σ_{l′_{i,3}} have potentials that are less than their corresponding firing thresholds, so both of them return to the resting potential 0 at the next step. Consequently, the interference between the two subsystems causes no undesired steps.

With the above explanations, readers can check that an arbitrary language L ⊆ V^*, L ∈ RE, can be generated by the system Π, which completes the proof. □

5. Conclusions and remarks

In this work, we have investigated the language generation power of WSN P systems. We have proved that WSN P systems with natural numbers as synapse weights, thresholds, and potentials cannot generate non-regular languages, whereas WSN P systems with integers as synapse weights, thresholds, and potentials can. It is open, and of interest, whether regular languages can be characterized by WSN P systems under certain restrictive conditions.

In a "standard" membrane system, the language generated by the system can be defined as the set of traces of a distinguished object. In WSN P systems, we can likewise distinguish a spike by "marking" it and follow its path through the neurons of the system, thus obtaining a language. The language generation power of WSN P systems deserves to be investigated under this definition as well.

In the definition of WSN P systems and in the above proofs, the following fact is assumed and essentially used: a global clock marks the time for the whole system, so the functioning of the system is synchronized, and in each time unit all neurons that have applicable rules must choose and apply a rule. Synchronization is in general a powerful feature, useful in controlling the work of a computing device. However, both from a mathematical point of view and from a neurobiological point of view, it is rather natural to consider non-synchronized systems, and it is worth investigating the language generation power of WSN P systems working in a non-synchronized manner.

Acknowledgements

The work of X. Zeng, X. Liu and L. Pan was supported by the National Natural Science Foundation of China (61202011, 61272152, 61033003, 91130034 and 61320106005), the Ph.D. Programs Foundation of the Ministry of Education of China (20120121120039 and 20120142130008), and the Natural Science Foundation of Hubei Province (2011CDA027).

References

[1] C. Buiu, C. Vasile, O. Arsene, Development of membrane controllers for mobile robots, Inform. Sci. 187 (2012) 33–51.
[2] H. Chen, R. Freund, M. Ionescu, G. Păun, M.J. Pérez-Jiménez, On string languages generated by spiking neural P systems, in: M.A. Gutiérrez-Naranjo, G. Păun, A. Riscos-Núñez, F.J. Romero-Campero (Eds.), Fourth Brainstorming Week on Membrane Computing, Sevilla, January 30–February 3, 2006, vol. I, Fénix Editora, 2006, pp. 169–194.
[3] H. Chen, M. Ionescu, T. Ishdorj, A. Păun, G. Păun, M. Pérez-Jiménez, Spiking neural P systems with extended rules: universality and languages, Natural Comput. 7 (2008) 147–166.
[4] W. Gerstner, W. Kistler, Spiking Neuron Models: Single Neurons, Populations, Plasticity, Cambridge University Press, 2002.
[5] L. Huang, I.H. Suh, A. Abraham, Dynamic multi-objective optimization based on membrane computing for control of time-varying unstable plants, Inform. Sci. 181 (2011) 2370–2391.
[6] M. Ionescu, G. Păun, T. Yokomori, Spiking neural P systems, Fundam. Inform. 71 (2006) 279–308.
[7] A. Leporati, G. Mauri, C. Zandron, G. Păun, M. Pérez-Jiménez, Uniform solutions to SAT and Subset Sum by spiking neural P systems, Natural Comput. 8 (2009) 681–702.
[8] A. Leporati, C. Zandron, C. Ferretti, G. Mauri, On the computational power of spiking neural P systems, Int. J. Unconvent. Comput. 5 (2009) 459–473.
[9] W. Maass, Computing with spikes, Spec. Iss. Found. Inform. Process. TELEMATIK 8 (2002) 32–36.
[10] C. Martín-Vide, G. Păun, J. Pazos, A. Rodríguez-Patón, Tissue P systems, Theor. Comput. Sci. 296 (2003) 295–326.
[11] M. Minsky, Computation: Finite and Infinite Machines, Prentice Hall, Englewood Cliffs, NJ, 1967.
[12] L. Pan, J. Wang, H.J. Hoogeboom, Spiking neural P systems with astrocytes, Neural Comput. 24 (2012) 805–825.
[13] L. Pan, X. Zeng, A note on small universal spiking neural P systems, Lect. Notes Comput. Sci. 5957 (2010) 436–447.
[14] G. Păun, Computing with membranes, J. Comput. Syst. Sci. 61 (2000) 108–143.
[15] G. Păun, Membrane Computing: An Introduction, Springer-Verlag, Berlin, 2002.
[16] G. Păun, G. Rozenberg, A guide to membrane computing, Theor. Comput. Sci. 287 (2002) 73–100.


[17] G. Păun, G. Rozenberg, A. Salomaa (Eds.), The Oxford Handbook of Membrane Computing, Oxford University Press, 2010.
[18] H. Peng, J. Wang, M.J. Pérez-Jiménez, H. Wang, J. Shao, T. Wang, Fuzzy reasoning spiking neural P system for fault diagnosis, Inform. Sci. 235 (2013) 106–116.
[19] A. Păun, G. Păun, Small universal spiking neural P systems, BioSystems 90 (2007) 48–60.
[20] G. Rozenberg, A. Salomaa (Eds.), Handbook of Formal Languages: Word, Language, Grammar, vol. 1, Springer-Verlag, 1997.
[21] T. Song, L. Pan, G. Păun, Asynchronous spiking neural P systems with local synchronization, Inform. Sci. 219 (2013) 197–207.
[22] B. Stilman, Discovering the discovery of the hierarchy of formal languages, Int. J. Mach. Learn. Cybernet. (2013), http://dx.doi.org/10.1007/s13042-012-0146-0.
[23] B. Stilman, V. Yakhnis, O. Umanskiy, The primary language of ancient battles, Int. J. Mach. Learn. Cybernet. 2 (2011) 157–176.
[24] J. Wang, H.J. Hoogeboom, L. Pan, G. Păun, M.J. Pérez-Jiménez, Spiking neural P systems with weights, Neural Comput. 22 (2010) 2615–2646.
[25] X. Zeng, X. Zhang, L. Pan, Homogeneous spiking neural P systems, Fundam. Inform. 97 (2009) 275–294.
[26] X. Zhang, X. Zeng, L. Pan, On languages generated by asynchronous spiking neural P systems, Theor. Comput. Sci. 410 (2009) 2478–2488.
