Digital Signal Processing 56 (2016) 93–99
Stochastic resonance in parallel concatenated turbo code decoding

Qiqing Zhai a, Youguo Wang b,c,*

a College of Telecommunications and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing 210003, PR China
b College of Science, Nanjing University of Posts and Telecommunications, Nanjing 210023, PR China
c Jiangsu Innovative Coordination Center of Internet of Things, Nanjing 210003, PR China
Article info
Article history: Available online 3 June 2016
Keywords: Iterative decoding; Stochastic resonance; Threshold devices; Turbo code
Abstract
In an array of threshold devices, we examine the effect of noise in improving the performance of turbo code decoding. Such a noise-enhanced effect is termed stochastic resonance (SR). When the signal is subthreshold, SR is observed with Gaussian noise during iterative decoding: the minimal bit error ratio (BER) is achieved at some non-zero noise intensity. Moreover, the larger the number of threshold devices, the more pronounced the SR effect becomes; when the noise intensity is near optimal, the BER approaches zero after a few decoding iterations. Furthermore, when Gaussian mixture noise is used, suprathreshold stochastic resonance (SSR) occurs in turbo decoding. These results show the beneficial effect of noise in channel coding and decoding.
© 2016 Elsevier Inc. All rights reserved.
1. Introduction

Stochastic resonance (SR) is a physical phenomenon in which noise can optimize the performance of some nonlinear systems [1]. In the fields of signal processing and information transmission in particular, much attention has been paid to such noise-enhanced effects. On one hand, suitable noise can be added to the input data to make a more reliable decision at the receiver. In [2], Kay showed that adding white Gaussian noise to the received data can improve detectability. Chen et al. then presented sufficient conditions for the improvability or non-improvability of detection under the Neyman–Pearson criterion [3]; they also derived the optimal SR noise that maximizes the probability of detection under a constraint on the probability of false alarm. Recently, building on [3], Liu et al. considered the dual problem of minimizing the probability of false alarm under a constraint on the detection probability [4]. Noise benefits have also been demonstrated in joint detection and estimation systems [5]. On the other hand, additive noise can improve the mutual information between input and output signals in both nonlinear dynamical systems [6] and static threshold systems [7,8]. Moreover, to investigate the specific information about a particular stimulus, Duan et al. used a measure of stimulus-specific information (SSI) to evaluate encoding efficiency [9] and illustrated noise benefits based on SSI.
* Corresponding author. E-mail address: [email protected] (Y. Wang).
http://dx.doi.org/10.1016/j.dsp.2016.05.013
1051-2004/© 2016 Elsevier Inc. All rights reserved.
In practice, techniques such as modulation and channel coding are often employed in information transmission and communication. Noise plays a significant role in these scenarios, and SR occurs as well. In [10], Duan and Abbott demonstrated that noise can enhance the performance of binary signal detection when the signals are modulated in amplitude, frequency or phase, and Liu et al. considered the reception of binary pulse-amplitude-modulated signals based on SR [11]. As for channel coding, Sugiura et al. investigated a serially concatenated turbo code in a single comparator and showed that a rather low bit error ratio (BER) can be achieved when an appropriate amount of noise is added and the number of iterations is sufficient [12]. In this paper, following [12], we also study the noise-enhanced effect in channel coding. Unlike [12], however, we consider the parallel concatenated turbo code introduced in [13], which has been applied to communication systems such as a CDMA mobile radio system [14] and third-generation (3G) mobile radio systems [15]. In addition, an array of threshold devices is employed instead of a single component. One reason is that the information gain of the array exceeds that of a single element [16]; another is that such an array is a simplified scheme of repetition coding and spatial diversity [17] in wireless communication. It should be mentioned that, in some sense, SR is not a technique but a phenomenon that can be observed in some nonlinear systems [18], and the motivation of this paper is to explore whether noise can enhance the performance of channel coding and decoding in such a nonlinear system. The remainder of this paper is organized as follows. The parallel concatenated turbo coding scheme in the array of threshold components is presented in Section 2, and Section 3 shows the performance of turbo decoding enhanced by additive noise.
In Section 4, we discuss the occurrence and the effect of SR under various parameters; in particular, it is shown that in the suprathreshold regime symmetric scaled noise cannot improve performance, whereas a noise-enhanced effect exists for Gaussian mixture noise. Finally, Section 5 concludes the paper.

2. Turbo coding scheme

The model of the turbo coding system is given in Fig. 1(a), where the nonlinear system, an array of threshold devices, is specifically depicted in Fig. 1(b). Such an array has been widely used to study SR [7,16,19]. The turbo encoder is made up of two recursive systematic convolutional (RSC) encoders of rate 1/L, connected by a random interleaver in a parallel concatenation. The turbo decoder combines two sub-decoders so that iterative decoding can be conducted [13,20]. Using the turbo encoder, whose overall rate is 1/(2L − 1), the source information d (a sequence of N binary symbols $(d_1, d_2, \ldots, d_N)$) is encoded into the sequence c of N code words of length 2L − 1, $c = (c_{11}, \ldots, c_{1(2L-1)}, \ldots, c_{jl}, \ldots, c_{N1}, \ldots, c_{N(2L-1)})$. The sequence c of 0's and 1's is then mapped by binary phase-shift keying (BPSK) modulation into

$$b = (b_{11}, \ldots, b_{1(2L-1)}, \ldots, b_{jl}, \ldots, b_{N1}, \ldots, b_{N(2L-1)}),$$

where $b_{jl} \in \{+1, -1\}$, j = 1, 2, ..., N, l = 1, 2, ..., 2L − 1. Since the encoder is systematic, the $b_{j1}$ are information bits and the other $b_{jl}$ (l = 2, 3, ..., 2L − 1) are parity bits. When the symbols $b_{jl}$ pass through the nonlinear system, the corresponding output r is received, and the original symbols may be recovered by the turbo decoder.

Fig. 1. (a) The model of the turbo coding system, consisting of the encoder, modulator, nonlinear system and decoder. (b) An array of threshold devices acting as the nonlinear part of the turbo coding system.

2.1. An array of threshold devices

The array in Fig. 1(b) has M components, each of which receives the input $b_{jl}$ plus independent and identically distributed noise $n_i$ (i = 1, 2, ..., M). The output of each component is

$$y_{jli} = \begin{cases} -1, & b_{jl} + n_i < U, \\ +1, & b_{jl} + n_i \ge U, \end{cases} \qquad (1)$$

where U is the threshold of each component. Finally, the output of the array, $r_{jl} = \frac{1}{M}\sum_{i=1}^{M} y_{jli}$, i.e., the mean of all the $y_{jli}$, is passed to the receiver. The larger the number of components M, the less information is lost during quantization [16], so the decision performance here may be superior to that in [12], where a single component is employed. We assume that the additive noise $n_i$ in each component has a continuous probability density function (pdf) $p_n(n)$ and a cumulative distribution function (cdf) $F_n(\cdot)$. Since $b_{jl} \in \{+1, -1\}$, we have

$$\Pr\{y_{jli} = 1 \mid b_{jl} = +1\} = \Pr\{n_i + 1 \ge U\} = 1 - F_n(U - 1), \qquad (2)$$
$$\Pr\{y_{jli} = 1 \mid b_{jl} = -1\} = \Pr\{n_i - 1 \ge U\} = 1 - F_n(U + 1). \qquad (3)$$

Consequently, for a given $b_{jl}$, $r_{jl}$ follows a binomial distribution, and its conditional probability can be calculated as

$$\Pr\{r_{jl} \mid b_{jl} = +1\} = C_M^k \,(1 - F_n(U - 1))^k (F_n(U - 1))^{M-k}, \qquad (4)$$
$$\Pr\{r_{jl} \mid b_{jl} = -1\} = C_M^k \,(1 - F_n(U + 1))^k (F_n(U + 1))^{M-k}, \qquad (5)$$

when $r_{jl} = \frac{2k - M}{M}$, k = 0, 1, ..., M; otherwise $\Pr\{r_{jl} \mid b_{jl} = \pm 1\} = 0$.
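The mapping from a BPSK symbol to the array output in Eqs. (1)–(5) can be made concrete with a short numerical sketch. The snippet below is a minimal illustration assuming zero-mean Gaussian device noise of standard deviation sigma; the function and variable names are ours, not from the paper.

```python
import numpy as np
from scipy.stats import binom, norm

def threshold_array(b, U, sigma, M, rng):
    """Pass BPSK symbols b (entries in {-1,+1}) through the M-device array of Eq. (1)."""
    n = rng.normal(0.0, sigma, size=(M,) + np.shape(b))   # i.i.d. Gaussian device noise
    y = np.where(b + n >= U, 1.0, -1.0)                   # per-device outputs y_jli
    return y.mean(axis=0)                                 # r_jl: mean of the M outputs

def cond_pmf(r, b, U, sigma, M):
    """P(r_jl | b_jl) of Eqs. (4)-(5), with F_n the Gaussian cdf of the device noise."""
    k = np.rint((np.asarray(r) * M + M) / 2).astype(int)  # invert r = (2k - M)/M
    p_fire = norm.sf(U - b, scale=sigma)                  # P{y_jli = +1 | b_jl} = 1 - F_n(U - b)
    return binom.pmf(k, M, p_fire)

# Example: r = threshold_array(np.ones(1024), U=1.5, sigma=0.9, M=7,
#                              rng=np.random.default_rng(0))
```

For a subthreshold signal (U > 1), the firing probability is tiny without noise, which is why some noise is required before the array output conveys useful information about $b_{jl}$.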
2.2. Iterative decoding in the SR decoder

For iterative decoding of turbo codes, a soft-in soft-out (SISO) decoding algorithm is used. With soft decisions, extrinsic information is exchanged between the two sub-decoders, so that little information is lost during decoding. An essential SISO decoding algorithm is the maximum a posteriori (MAP) algorithm; here we use its logarithmic version, the Log-MAP algorithm [20], to iteratively detect the output signals at the receiver. In most studies on turbo decoding, the symbols are assumed to be transmitted over an additive white Gaussian noise (AWGN) channel [20,21]. In this paper, however, an array of threshold devices is considered instead, so we need to derive the log-likelihood ratio (LLR) that governs the exchange of information between the two sub-decoders. Let the jth sequence received by a sub-decoder be $R_j = (r_{j1}, r_{j2}, \ldots, r_{jL})$, and let the corresponding information bit be $e_j = b_{j1}$. According to [21], the a posteriori LLR is

$$L(e_j \mid R_j) = \ln \frac{\Pr\{e_j = +1 \mid R_j\}}{\Pr\{e_j = -1 \mid R_j\}} = L(e_j) + L_c(e_j, r_{j1}) + L_e(e_j). \qquad (6)$$

The first term $L(e_j) = \ln \frac{\Pr\{e_j = +1\}}{\Pr\{e_j = -1\}}$ is the a priori LLR; since the a priori probabilities of $e_j = +1$ and $e_j = -1$ are both equal to $\frac{1}{2}$, the initial a priori LLR is zero. The second term $L_c(e_j, r_{j1}) = \ln \frac{\Pr\{r_{j1} \mid e_j = +1\}}{\Pr\{r_{j1} \mid e_j = -1\}}$ is the conditional LLR and depends on the channel or system. In the AWGN-channel scenario it is proportional to $r_{j1}$ [21], whereas for the nonlinear system in Fig. 1(b), using Eqs. (4)–(5), this conditional LLR is

$$L_c(e_j, r_{j1}) = \ln \frac{(1 - F_n(U - 1))^k (F_n(U - 1))^{M-k}}{(1 - F_n(U + 1))^k (F_n(U + 1))^{M-k}} \qquad (7)$$

for $r_{j1} = \frac{2k - M}{M}$, k = 0, 1, ..., M. The last term $L_e(e_j)$ is the extrinsic LLR, which plays a central role in iterative decoding.
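For fixed cdf values, Eq. (7) is simply a linear function of k. The sketch below evaluates it for the quantized outputs $r_{j1} = (2k - M)/M$, again assuming Gaussian device noise; the function name is ours.

```python
import numpy as np
from scipy.stats import norm

def conditional_llr_exact(r, U, sigma, M):
    """Conditional LLR of Eq. (7) for r = (2k - M)/M and zero-mean Gaussian noise of std sigma."""
    k = np.rint((np.asarray(r, dtype=float) * M + M) / 2).astype(int)
    Fm = norm.cdf(U - 1.0, scale=sigma)   # F_n(U - 1)
    Fp = norm.cdf(U + 1.0, scale=sigma)   # F_n(U + 1)
    # ln[(1 - Fm)^k Fm^(M-k)] - ln[(1 - Fp)^k Fp^(M-k)]
    return (k * (np.log1p(-Fm) - np.log1p(-Fp))
            + (M - k) * (np.log(Fm) - np.log(Fp)))
```

Note that for very small sigma the cdf values approach 0 or 1 and the logarithms become numerically delicate, a regime that is revisited in Section 4.1.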
Fig. 2. The diagram of iterative decoding [15,21].
Fig. 2 shows the iterative decoding process in the SR decoder. In the first sub-decoder, the extrinsic LLR $L_e^{(1)}(e_j)$ is determined as $L^{(1)}(e_j \mid R_j) - L^{(1)}(e_j) - L_c^{(1)}(e_j, r_{j1})$ and is interleaved to provide the a priori LLR $L^{(2)}(e_j)$ for the second sub-decoder. Similarly, the extrinsic LLR obtained from the second sub-decoder is $L_e^{(2)}(e_j) = L^{(2)}(e_j \mid R_j) - L^{(2)}(e_j) - L_c^{(2)}(e_j, r_{j1})$, which is de-interleaved and acts as the a priori LLR for the first sub-decoder, and the iterative decoding process continues. After an appropriate number of iterations, we take $\hat{e} = (\hat{e}_1, \hat{e}_2, \ldots, \hat{e}_N)$ as the sign of the a posteriori LLR, i.e., $\hat{e}_j = \mathrm{sgn}\, L(e_j \mid R_j)$, j = 1, 2, ..., N. The polarity of $\hat{e}$ then determines the estimated information $\hat{d}$: $\hat{d}_j = 0$ if $\hat{e}_j = -1$ and $\hat{d}_j = 1$ if $\hat{e}_j = +1$. Error bits accumulate when $\hat{d}$ is compared with the source information d, and the BER is calculated as the number of error bits divided by the total number of information bits.
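The bookkeeping of Fig. 2 can be summarized in a few lines. The following sketch uses our own naming; the constituent Log-MAP decoder is passed in as a callable rather than implemented here, and the parity-bit inputs of each sub-decoder are omitted, so it only illustrates how the extrinsic LLRs are formed, interleaved and fed back.

```python
import numpy as np

def turbo_iterate(Lc, siso_decode, perm, n_iter=6):
    """Extrinsic-information exchange of Fig. 2 (sketch only).

    Lc          : array of conditional LLRs L_c(e_j, r_j1), from Eq. (7) or (10)
    siso_decode : callable (La, Lc) -> a posteriori LLRs of one constituent Log-MAP
                  decoder (assumed available; not implemented in this sketch)
    perm        : interleaver permutation of the parallel concatenation
    """
    La1 = np.zeros(len(Lc))                 # initial a priori LLRs are zero
    inv = np.argsort(perm)                  # de-interleaver
    for _ in range(n_iter):
        Lp1 = siso_decode(La1, Lc)          # first sub-decoder
        Le1 = Lp1 - La1 - Lc                # extrinsic LLR L_e^(1)(e_j)
        La2 = Le1[perm]                     # interleave -> a priori LLR for decoder 2
        Lp2 = siso_decode(La2, Lc[perm])    # second sub-decoder
        Le2 = Lp2 - La2 - Lc[perm]          # extrinsic LLR L_e^(2)(e_j)
        La1 = Le2[inv]                      # de-interleave -> a priori LLR for decoder 1
    e_hat = np.where(Lp2[inv] >= 0, 1, -1)  # sign of the final a posteriori LLR
    return (e_hat + 1) // 2                 # d_hat: 1 if e_hat = +1, 0 if e_hat = -1
```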
3. Noise-enhanced performance of turbo decoding

In this section, we examine whether adding noise is beneficial to the decoding of turbo codes. As a specific case, Gaussian noise with zero mean and variance σ² is considered. With Gaussian noise, the signal-to-noise ratio (SNR) may not be improvable under certain conditions [22]; moreover, within the SR framework SNR has some limitations as a metric [3], and in an information-theoretic scenario it is not an appropriate measure [18]. We will show that SR does occur when Gaussian noise is considered. Two identical RSC(2, 1, 3) codes with generator polynomial (7, 5)_8 are used for the turbo encoder, so L = 2. We set the threshold U to 1.5 and choose M = 7 threshold devices. Each block of source information d has length N = 1024, and 1600 blocks are simulated, so that more than 1.6 × 10^6 bit symbols are used to guarantee the reliability of the simulation. Fig. 3 shows the transmission performance, measured by BER, as the Gaussian noise intensity σ increases.

Fig. 3 shows that SR occurs during turbo code decoding in the nonlinear system of Fig. 1(b). When too little or too much noise is added, there is no significant difference between decoding iterations, and the BER stays at a comparatively high level, meaning that iterative decoding is largely ineffective in this case. However, at moderate noise intensities (e.g. around unity), the BER decreases with each iteration, which is the typical signature of SR. In addition, when σ is near optimal, the BER falls sharply towards zero as the number of iterations increases: as shown in Fig. 3, for σ ∈ [0.7, 1.0], approximately error-free performance is achieved after 3 iterations. This noise-enhanced effect is stronger than that in [12], where more than 20 iterations are necessary to obtain near-error-free performance.
Fig. 3. BER against Gaussian noise intensity σ. The threshold is U = 1.5, the number of threshold devices is M = 7, and the number of iterations is 6. The asterisks mark the uncoded case (UC); the solid curve is the theoretical uncoded BER.
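The theoretical uncoded curve in Fig. 3 follows from the binomial model of Section 2.1 (it is the quantity analysed in the Appendix, Eq. (A.2)). A minimal sketch, assuming Gaussian device noise and odd M, with our own naming:

```python
import numpy as np
from scipy.stats import binom, norm

def uncoded_ber(sigma, U=1.5, M=7):
    """Closed-form uncoded BER of the M-device array (cf. Eq. (A.2)), odd M, Gaussian noise."""
    q_p = norm.sf(U - 1.0, scale=sigma)     # P{y_i = +1 | b = +1} = 1 - F_n(U - 1)
    q_m = norm.sf(U + 1.0, scale=sigma)     # P{y_i = +1 | b = -1} = 1 - F_n(U + 1)
    k = np.arange((M - 1) // 2 + 1)         # k = 0, ..., (M - 1)/2
    err_p = binom.pmf(k, M, q_p).sum()      # r_j1 < 0 although b_j1 = +1
    err_m = binom.pmf(M - k, M, q_m).sum()  # r_j1 > 0 although b_j1 = -1
    return 0.5 * (err_p + err_m)

# e.g. [uncoded_ber(s) for s in (0.3, 0.9, 2.0)] traces the UC curve of Fig. 3
```

For the subthreshold setting U = 1.5, this expression reproduces the SR-type shape of the UC curve: the uncoded BER tends to 1/2 as σ approaches zero and reaches its minimum at an intermediate noise intensity.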
On the other hand, since the noise intensity is robust in our nonlinear system, nearly optimal performance can be obtained without determining the exactly optimal noise intensity: as mentioned above, any intensity σ ∈ [0.7, 1.0] yields near-optimal turbo decoding.

4. Discussions

Noise-enhanced turbo decoding may improve or degrade as various parameters are changed. We examine the SR effect in different cases below.

4.1. Noise-enhanced effect under different numbers of threshold devices M

The number of quantization levels of the array grows with the number of threshold devices M, so more information is retained in the output sequences, which helps the soft decision. In this subsection we set M to 3 and 15 and compare the performance with Fig. 3, where M = 7. As in the case M = 7, SR occurs for M = 3 and M = 15. As depicted in Fig. 4, for a small number of threshold devices (M = 3) noise helps to decrease the BER, but the effect is not satisfactory because the achievable BER is still above 10^{-2} after 6 iterations. In contrast, for large M (M = 15) a significant improvement in BER is obtained: Fig. 4(b) shows a stronger noise effect than Fig. 4(a) and Fig. 3, and after 2 iterations nearly error-free performance is achieved when noise of appropriate intensity (σ ∈ [0.5, 1.7]) is added to the array. Moreover, as M grows (from 3 to 7 and 15) the noise becomes more robust, and a much wider range of noise intensities yields approximately optimal performance. When M = 1, the array degenerates to a single comparator; in this case we found the decoding effect to be poor, with no significant distinction between the iterations over the whole range of noise intensities. An alternative way to improve turbo decoding is to employ a lower-rate turbo code. Here a rate-1/9 code is adopted, and the noise-enhanced decoding performance is displayed in Fig. 5. Compared with Fig. 4(a) (where the rate of the original turbo code is 1/3 and M = 3), Fig. 5 (rate 1/9, M = 1) shows similar BER performance. This means that, to achieve high decoding efficiency, there is a tradeoff between the complexity of the nonlinear system and the code rate.
Fig. 4. BER against Gaussian noise intensity σ. The threshold is U = 1.5 and the number of threshold devices is (a) M = 3, (b) M = 15.
Fig. 5. BER against Gaussian noise intensity σ in a single-threshold system (M = 1). The threshold is U = 1.5. Here, two identical RSC(5, 1, 3) codes are concatenated in parallel to form a turbo code of rate 1/9.
Fig. 6. BER against Gaussian noise intensity σ. The threshold is U = 1.5, the number of threshold devices is M = 127, and the number of iterations is 6.
Furthermore, as M increases to very large values, iterative decoding will probably perform exceedingly well, but the decoding procedure becomes somewhat cumbersome if Eq. (7) is used directly. Since M is very large, the central limit theorem can be invoked: the output of the array, $r_{jl} = \frac{1}{M}\sum_{i=1}^{M} y_{jli}$, is approximately Gaussian conditioned on a fixed input signal $b_{jl}$. Consequently, Eqs. (4)–(5) can be replaced by

$$\Pr\{r_{jl} \mid b_{jl} = +1\} = \frac{1}{\sqrt{2\pi \sigma_{+1}^2}} \exp\!\left(-\frac{(r_{jl} - \mu_{+1})^2}{2\sigma_{+1}^2}\right), \qquad (8)$$
$$\Pr\{r_{jl} \mid b_{jl} = -1\} = \frac{1}{\sqrt{2\pi \sigma_{-1}^2}} \exp\!\left(-\frac{(r_{jl} - \mu_{-1})^2}{2\sigma_{-1}^2}\right), \qquad (9)$$

where $\mu_{+1} = E[r_{jl} \mid b_{jl} = +1] = 1 - 2F_n(U - 1)$, $\mu_{-1} = E[r_{jl} \mid b_{jl} = -1] = 1 - 2F_n(U + 1)$, $\sigma_{+1}^2 = \mathrm{Var}[r_{jl} \mid b_{jl} = +1] = \frac{4}{M}\bigl(F_n(U - 1) - F_n^2(U - 1)\bigr)$ and $\sigma_{-1}^2 = \mathrm{Var}[r_{jl} \mid b_{jl} = -1] = \frac{4}{M}\bigl(F_n(U + 1) - F_n^2(U + 1)\bigr)$. Based on Eqs. (8)–(9), the conditional LLR is

$$L_c(e_j, r_{j1}) = \ln \frac{\Pr\{r_{j1} \mid e_j = +1\}}{\Pr\{r_{j1} \mid e_j = -1\}} = -\frac{1}{2}\ln \frac{\sigma_{+1}^2}{\sigma_{-1}^2} - \frac{(r_{j1} - \mu_{+1})^2}{2\sigma_{+1}^2} + \frac{(r_{j1} - \mu_{-1})^2}{2\sigma_{-1}^2}. \qquad (10)$$
Using Eq. (10), the iterative decoding procedure can be implemented simply when the number of threshold devices M is very large. Specifically, for M = 127, Fig. 6 shows the turbo decoding performance enhanced by noise.
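Eq. (10) amounts to comparing two Gaussian likelihoods whose moments follow from Eqs. (2)–(3). A minimal sketch, assuming Gaussian device noise and with our own naming:

```python
import numpy as np
from scipy.stats import norm

def conditional_llr_gaussian(r, U, sigma, M):
    """Conditional LLR of Eq. (10) under the central-limit approximation (large M)."""
    Fm = norm.cdf(U - 1.0, scale=sigma)            # F_n(U - 1)
    Fp = norm.cdf(U + 1.0, scale=sigma)            # F_n(U + 1)
    mu_p, mu_m = 1.0 - 2.0 * Fm, 1.0 - 2.0 * Fp    # conditional means of r_jl
    var_p = 4.0 / M * (Fm - Fm**2)                 # conditional variances of r_jl
    var_m = 4.0 / M * (Fp - Fp**2)
    r = np.asarray(r, dtype=float)
    return (-0.5 * np.log(var_p / var_m)
            - (r - mu_p)**2 / (2.0 * var_p)
            + (r - mu_m)**2 / (2.0 * var_m))
```

For small sigma the conditional variances approach zero and the approximation becomes degenerate, which is consistent with the limitation discussed next.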
Fig. 7. BER against Gaussian noise intensity σ. The number of threshold devices is M = 7 and the threshold level is (a) U = 0.8, (b) U = 2.4.
It should be mentioned that when the Gaussian noise intensity σ is small (e.g., 3σ < U + 1), the probabilities in Eqs. (2)–(3) tend to zero, and according to [23] the Gaussian approximation to the binomial distribution may then degrade (recall that $r_{jl}$ has a binomial distribution for a given $b_{jl}$). As a result, the iterative decoding procedure becomes unreliable. For U = 1.5 we find that the Gaussian approximation works well when σ > 0.7, whereas for σ ≤ 0.7 the decoding based on Eq. (10) is severely impaired, so in that regime we use Eq. (7), the exact form of the conditional LLR. As illustrated in Fig. 3 and Fig. 4, a large number of threshold devices drives the BER towards zero and widens the range of near-optimal noise intensities; Fig. 6 demonstrates these statements further.

4.2. Noise-enhanced effect under different threshold levels U

In a single-threshold system, SR can be observed only when the threshold level is higher than the signal amplitude. In an array of threshold devices, by contrast, SR can also occur for suprathreshold signals; this phenomenon is termed suprathreshold stochastic resonance (SSR) [7]. For the array in Fig. 1(b), it is therefore necessary to explore whether SR or SSR exists. The decoding performance for a small threshold (U = 0.8 < 1) and a large one (U = 2.4 > 1.5) is displayed in Fig. 7. As shown in Fig. 7(a), noise deteriorates turbo decoding when the signal is suprathreshold (U = 0.8 < 1), so SR does not occur, even though SR based on mutual information has been observed in such arrays in the suprathreshold regime [7,16]. Here, since the signal is binary rather than continuous, the arrayed system cannot guarantee the occurrence of SR. In the Appendix we show that, in this array, any symmetric scaled noise (see [24] for the distinction between scaled and non-scaled noise) cannot enhance uncoded performance when the signal is suprathreshold. To some degree this predicts that scaled noise brings no improvement in iterative detection either; that is, SR will not occur during turbo decoding. Additionally, Fig. 7(a) shows that when the noise is not strong (e.g., σ < 1.7), a comparatively low BER can still be achieved by iterative decoding; this reflects the advantage of the turbo code rather than a noise-enhanced effect. Fig. 7(b) does show noise-enhanced decoding when the threshold is somewhat large (U = 2.4), but the minimal BER is so high that communication or signal transmission is not reliable, so the SR effect is weak compared with that in Fig. 3. The reason may be that when the threshold is set too high, more noise is needed to help the signal cross the threshold, and this stronger noise in turn worsens turbo decoding. In Fig. 3 the optimal noise intensity is around 1 for U = 1.5, whereas for U = 2.4 in Fig. 7(b) it is around 1.5, which degrades the decoding performance. Finally, in the suprathreshold scenario, we use non-scaled noise instead of Gaussian noise to improve decoding in the array. A type of Gaussian mixture noise discussed in [25] is considered; its pdf is
$$p_n(n) = \frac{1}{2\sqrt{2\pi(\sigma^2 - \mu^2)}}\left[\exp\!\left(-\frac{(n - \mu)^2}{2(\sigma^2 - \mu^2)}\right) + \exp\!\left(-\frac{(n + \mu)^2}{2(\sigma^2 - \mu^2)}\right)\right], \qquad \sigma > \mu, \qquad (11)$$
where σ denotes the intensity of the Gaussian mixture noise and μ is a fixed parameter. In the simulation, the system parameters are the same as in Fig. 7(a), i.e., U = 0.8 and M = 7, and μ is chosen as 1.8. The resulting BER of iterative decoding is presented in Fig. 8. Clearly, when the signal is suprathreshold, Gaussian mixture noise improves both the uncoded case and iterative decoding; that is, SSR occurs when this kind of non-scaled noise is added. Although the turbo decoding performance is not yet fully satisfactory, this exploration shows that non-scaled noise has some applicability for suprathreshold signals. Moreover, by decreasing the code rate, the SSR-based effect of Gaussian mixture noise becomes remarkable: when two RSC(5, 1, 3) codes are employed, the BER is nearly zero after 5 iterations for Gaussian mixture noise intensities σ ∈ [2.2, 2.4]. In this case, the noise effect in iterative decoding is practically satisfactory.

Fig. 8. BER against Gaussian mixture noise intensity σ with μ = 1.8. The threshold is U = 0.8 and the number of threshold devices is M = 7. 51200 bit symbols are used.
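Samples from the mixture in Eq. (11) can be generated by drawing a component sign at random and adding Gaussian jitter of variance σ² − μ², so that the overall standard deviation equals σ. A minimal sketch with our own naming:

```python
import numpy as np

def gaussian_mixture_noise(size, sigma, mu=1.8, rng=None):
    """Samples from the symmetric two-component mixture of Eq. (11); requires sigma > mu.

    The components sit at +/- mu with variance sigma**2 - mu**2, so the overall
    noise intensity (standard deviation) is exactly sigma.
    """
    rng = np.random.default_rng() if rng is None else rng
    signs = rng.choice(np.array([-1.0, 1.0]), size=size)   # pick a mixture component
    return signs * mu + rng.normal(0.0, np.sqrt(sigma**2 - mu**2), size=size)
```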
5. Conclusion

Within the framework of channel coding, we have explored how noise enhances the decoding of parallel concatenated turbo codes. In a nonlinear system, an array of threshold devices, Gaussian noise is introduced and the log-likelihood ratio used in the iterative decoding algorithm is derived. When the threshold is larger than the signal amplitude, SR does occur and noise helps to decrease the bit error ratio. As the number of threshold devices grows, noise becomes more beneficial and nearly error-free performance can be achieved; when the number is very large, the central limit theorem applies and a remarkable improvement in decoding performance is obtained. On the other hand, when the threshold is smaller than the signal amplitude, noise deteriorates turbo decoding in the array, i.e., SR does not exist. However, for suprathreshold signals, when non-scaled noise (Gaussian mixture noise) is used, the bit error ratio may decrease as the noise intensity increases, i.e., SSR occurs. These results may be useful for research on communication in nonlinear systems. Furthermore, other channel codes, such as low-density parity-check (LDPC) codes [20], can be examined, and channel coding and decoding in more general nonlinear systems as in [26] can also be considered in future studies.

Acknowledgments

The authors warmly thank the reviewers for their helpful suggestions. This work was supported by the National Natural Science Foundation of China (No. 61179027), the Qinglan Project of Jiangsu Province (No. QL06212006) and the Research and Innovation Project for Postgraduates of Jiangsu Province (No. KYLX15_0829).

Appendix A

In the array of threshold devices (Fig. 1(b)), when the signal is suprathreshold (−1 < U < 1), symmetric scaled noise does not enhance the uncoded performance.

Proof. We assume that the number of devices M is odd; for even M the argument is similar. Any scaled noise $n = \sigma n_e$ has pdf $p_n(n) = \frac{1}{\sigma} p_{n_e}\!\left(\frac{n}{\sigma}\right)$ and intensity σ, where $n_e$ has zero mean and unit variance with pdf $p_{n_e}(\cdot)$. Let

$$q_{+1} = \Pr\{y_{jli} = 1 \mid b_{jl} = +1\} = \int_{U-1}^{\infty} p_n(n)\,dn = \int_{(U-1)/\sigma}^{\infty} p_{n_e}(n_e)\,dn_e$$

and

$$q_{-1} = \Pr\{y_{jli} = 1 \mid b_{jl} = -1\} = \int_{U+1}^{\infty} p_n(n)\,dn = \int_{(U+1)/\sigma}^{\infty} p_{n_e}(n_e)\,dn_e. \qquad (A.1)$$

Since −1 < U < 1, $q_{+1}$ decreases and $q_{-1}$ increases monotonically as the noise intensity σ increases. In addition, $\frac{1}{2} \le q_{+1} \le 1$ and $0 \le q_{-1} \le \frac{1}{2}$ because of the symmetry of $p_n(\cdot)$. Using Eqs. (4)–(5), the bit error probability in the uncoded case can be calculated as

$$\begin{aligned} P_b(\sigma) &= \Pr\{b_{j1} = +1\}\Pr\{r_{j1} < 0 \mid b_{j1} = +1\} + \Pr\{b_{j1} = -1\}\Pr\{r_{j1} > 0 \mid b_{j1} = -1\} \\ &= \frac{1}{2}\left[\sum_{k_1=0}^{(M-1)/2} C_M^{k_1} q_{+1}^{k_1}(1 - q_{+1})^{M-k_1} + \sum_{k_2=(M+1)/2}^{M} C_M^{k_2} q_{-1}^{k_2}(1 - q_{-1})^{M-k_2}\right] \\ &= \frac{1}{2}\sum_{k=0}^{(M-1)/2} C_M^{k}\left[q_{+1}^{k}(1 - q_{+1})^{M-k} + (1 - q_{-1})^{k} q_{-1}^{M-k}\right], \end{aligned} \qquad (A.2)$$

where the a priori probabilities of $b_{j1} = \pm 1$ are both equal to 1/2. Let $h_k(x) = x^k(1 - x)^{M-k}$, $k \le (M-1)/2$, $1/2 \le x \le 1$; $h_k(x)$ is monotone decreasing in x for fixed k. Each term in Eq. (A.2) can then be rewritten as

$$q_{+1}^{k}(1 - q_{+1})^{M-k} + (1 - q_{-1})^{k} q_{-1}^{M-k} = h_k(q_{+1}) + h_k(1 - q_{-1}),$$

which increases as the noise intensity σ increases, because both $q_{+1}$ and $1 - q_{-1}$ decrease with σ while remaining in [1/2, 1]. As a result, $P_b(\sigma)$ is a monotone increasing function of σ; that is, any symmetric scaled noise deteriorates the uncoded performance when −1 < U < 1. □

References

[1] L. Gammaitoni, P. Hänggi, P. Jung, F. Marchesoni, Stochastic resonance, Rev. Mod. Phys. 70 (1) (1998) 223–287.
[2] S. Kay, Can detectability be improved by adding noise?, IEEE Signal Process. Lett. 7 (1) (2000) 8–10.
[3] H. Chen, P.K. Varshney, S.M. Kay, J.H. Michels, Theory of the stochastic resonance effect in signal detection: Part I—fixed detectors, IEEE Trans. Signal Process. 55 (7) (2007) 3172–3184.
[4] S. Liu, T. Yang, X. Zhang, X. Hu, L. Xu, Noise enhanced binary hypothesis-testing in a new framework, Digit. Signal Process. 41 (2015) 22–31.
[5] A.B. Akbay, S. Gezici, Noise benefits in joint detection and estimation problems, Signal Process. 118 (2016) 235–247.
[6] S. Mitaim, B. Kosko, Adaptive stochastic resonance in noisy neurons based on mutual information, IEEE Trans. Neural Netw. 15 (6) (2004) 1526–1540.
[7] N.G. Stocks, Suprathreshold stochastic resonance in multilevel threshold systems, Phys. Rev. Lett. 84 (11) (2000) 2310–2313.
[8] M.D. McDonnell, P.-O. Amblard, N.G. Stocks, Stochastic pooling networks, J. Stat. Mech. Theory Exp. (2009) P01012.
[9] F. Duan, F. Chapeau-Blondeau, D. Abbott, Encoding efficiency of suprathreshold stochastic resonance on stimulus-specific information, Phys. Lett. A 380 (2016) 33–39.
[10] F. Duan, D. Abbott, Binary modulated signal detection in a bistable receiver with stochastic resonance, Physica A 376 (2007) 173–190.
[11] J. Liu, Z. Li, L. Guan, L. Pan, A novel parameter-tuned stochastic resonator for binary PAM signal processing at low SNR, IEEE Commun. Lett. 18 (3) (2014) 427–430.
[12] S. Sugiura, A. Ichiki, Y. Tadokoro, Stochastic-resonance based iterative detection for serially-concatenated turbo codes, IEEE Signal Process. Lett. 19 (10) (2012) 655–658.
[13] C. Berrou, A. Glavieux, P. Thitimajshima, Near Shannon limit error-correcting coding and decoding: turbo-codes, in: Proc. IEEE Int. Conf. Commun., 1993, pp. 1064–1070.
[14] P. Jung, M. Naßhan, J. Blanz, Application of turbo-codes to a CDMA mobile radio system using joint detection and antenna diversity, in: 44th Vehicular Technology Conference, IEEE, 1994, pp. 770–774.
[15] L. Hanzo, J.P. Woodard, P. Robertson, Turbo decoding and detection for wireless applications, Proc. IEEE 95 (6) (2007) 1178–1200.
[16] N.G. Stocks, Information transmission in parallel threshold arrays: suprathreshold stochastic resonance, Phys. Rev. E 63 (4) (2001) 041114.
[17] D. Tse, P. Viswanath, Fundamentals of Wireless Communication, Cambridge University Press, NY, USA, 2005.
[18] M.D. McDonnell, N.G. Stocks, C.E.M. Pearce, D. Abbott, Stochastic Resonance: From Suprathreshold Stochastic Resonance to Stochastic Signal Quantization, Cambridge University Press, NY, USA, 2008.
[19] Y. Wang, L. Wu, Stochastic resonance and noise-enhanced Fisher information, Fluct. Noise Lett. 5 (3) (2005) L435–L442.
[20] S. Lin, D.J. Costello Jr., Error Control Coding, second edition, Pearson Prentice-Hall, NJ, USA, 2004.
[21] L. Hanzo, T.H. Liew, B.L. Yeap, R.Y.S. Tee, S.X. Ng, Turbo Coding, Turbo Equalisation and Space–Time Coding: EXIT-Chart-Aided Near-Capacity Designs for Wireless Channels, second edition, John Wiley & Sons and IEEE Press, NY, USA, 2011.
[22] S. Zozor, P.-O. Amblard, Stochastic resonance in locally optimal detectors, IEEE Trans. Signal Process. 51 (12) (2003) 3177–3181.
[23] G.E.P. Box, J.S. Hunter, W.G. Hunter, Statistics for Experimenters: Design, Innovation, and Discovery, second edition, Wiley, 2005.
[24] F. Duan, F. Chapeau-Blondeau, D. Abbott, Stochastic resonance with colored noise for neural signal detection, PLoS ONE 9 (3) (2014) e91345.
[25] Y. Wang, L. Wu, Nonlinear signal detection from an array of threshold devices for non-Gaussian noise, Digit. Signal Process. 17 (2007) 76–89.
[26] M.D. McDonnell, X. Gao, M-ary suprathreshold stochastic resonance: generalization and scaling beyond binary threshold nonlinearities, Europhys. Lett. 108 (2014) 60003.
Qiqing Zhai received the B.S. degree in information and computing science from Nanjing University of Posts and Telecommunications (NUPT), Nanjing, Jiangsu, China in 2013. Currently, he is a Ph.D. candidate with the College of Telecommunications and Information Engineering, NUPT. His research interests include stochastic resonance and statistical signal processing.

Youguo Wang received the B.S. degree in mathematics from Nanjing Normal University, Nanjing, Jiangsu, China in 1989, and the M.S. degree in mathematics from Zhejiang University, Hangzhou, Zhejiang, China in 1992. In 2006, he received the Ph.D. degree in signal and information processing from Southeast University, Nanjing, Jiangsu, China. Currently, he is a professor at NUPT. His research interests include information theory and statistical signal processing.