Chaotic signal processing: information aspects

Yuri V. Andreyev a,*, Alexander S. Dmitriev a, Elena V. Efremova a, Antonios N. Anagnostopoulos b

a Institute of Radio Engineering and Electronics, Russian Academy of Sciences, Mokhovaya 11/7, Moscow 101999, Russia
b Department of Physics, Aristotle University of Thessaloniki, 54006 Thessaloniki, Greece

Chaos, Solitons and Fractals 17 (2003) 531–544

Abstract

One of the features of chaotic signals that makes them different from other types of signals is their special information properties. In this paper, we investigate the effect of these properties on the procedures of chaotic signal processing. Using the examples of cleaning chaotic signals of noise, chaotic synchronization, and separation of chaotic signals, we demonstrate the existence of basic limits imposed by information theory on chaotic signal processing, independent of concrete algorithms. Relations of these limits to the Second law, Shannon's theorems, and Landauer's principle are discussed. © 2002 Elsevier Science Ltd. All rights reserved.

1. Introduction

Dynamic chaos has a number of nontrivial features that stimulate interest in applying it in various fields. The internal beauty of chaotic attractors, their scale invariance, sensitivity to initial conditions, and the generation of information by chaotic systems all suggest that the behavior of chaotic systems under various processing procedures is essentially determined by these nontrivial features, which makes it different from the behavior of other kinds of systems and signals. This paper is devoted to an analysis of the effect of the information properties of chaotic signals on processing procedures and their results.

Before we proceed to concrete problem statements, let us pose a question: what determines the efficiency of algorithms for chaotic signal processing? Is it determined only by the art of method design, or are there basic reasons that determine the efficiency and capabilities of the methods? Indeed, a bad algorithm is itself a reason for low processing efficiency, e.g., in the case of cleaning chaotic signals it results in a high level of residual noise. However, the question is different: are there limits to the quality of processing achievable by "good" algorithms, and if so, what are the reasons for them?

In this paper, we develop and validate the point of view that the processing quality is closely connected with the properties of chaotic signals, especially with the information properties that impose fundamental limits on the procedures of chaotic signal processing. These limits stem from the fact that chaotic signals contain information. If the loss of the information present in chaotic signals can be avoided during processing, then, at least in principle, one can expect high quality. If this is not so, and information is essentially corrupted or lost, then the capabilities of high-quality processing shrink accordingly.

However, if the matter were only the limited processing capabilities, the idea would not be very constructive. The other side of the coin is that taking the information properties of chaotic signals into account and using them can significantly improve the processing quality, since it helps determine the conditions under which, and to what extent, high-quality processing is possible. For example, as is found below, the procedure of cleaning chaotic signals of noise can converge at an exponential rate, which is unattainable when working with ordinary signals.

* Corresponding author. E-mail address: [email protected] (Y.V. Andreyev).



The paper layout is as follows. First, we consider the information properties of chaotic signals. Then, we discuss the linear interaction of chaos, information signals, and interference; this interaction is shown to be the basis for a number of interesting problems connected with chaotic signal processing. Further analysis is given on the examples of three such problems: cleaning, synchronization, and separation of chaotic signals. After that, we analyze the range of applicability of the information approach. Finally, we discuss possible relations of these limitations to other important basic restrictions on information processes, such as the Second law, Shannon's theorems on channel capacity, and Landauer's principle on the energy requirements of computation.

2. Information properties of chaotic signals

Generation or production of information (entropy) in the process of chaotic signal generation and, consequently, the presence of this information in the chaotic signal, is an inherent feature of dynamic chaos [1]. Consider the production and disappearance of information in nonlinear systems with complicated behavior using the example of 1D maps of the unit interval (0, 1) into itself,

$$x(k+1) = f(x(k), \mu), \qquad (1)$$

where $\mu$ is a parameter. The initial value $x(0)$ can only ever be known with a certain accuracy $\varepsilon$, hence it contains $I = -\log_2(\varepsilon)$ bits of information. Information production by an iteration of map (1) at point $x$ is determined by the slope of the function $f$ at this point:

$$\Delta I = \log_2 \left| \frac{df}{dx} \right|. \qquad (2)$$

For example, in the case of the Bernoulli shift map

$$x(k+1) = (2x(k)) \bmod 1, \qquad (3)$$

the change of information about the point location is $\Delta I = \log_2(2\varepsilon) - \log_2(\varepsilon) = \log_2 2 = 1$ bit. After $n \approx -\log_2(\varepsilon)$ map iterates, the initial uncertainty $\varepsilon$ grows to cover the entire interval (0, 1) and the knowledge about the initial point location is lost. This is due to the generation of the corresponding amount of information by the system itself.

In general, the average information production $I$ of map (1) per iteration is

$$I = \int_0^1 P(x) \log_2 \left| \frac{df}{dx} \right| dx, \qquad (4)$$

where $P(x)$ is the probability density of the values of $x$ on the interval $[0, 1]$. The value of $I$ can be determined even if $P(x)$ is unknown. To do this, one iterates the map starting from some initial point and calculates the mean value of the slope logarithm

$$I = \lim_{n \to \infty} \frac{1}{n} \sum_{i=1}^{n} \log_2 \left| \frac{df}{dx}(x_i) \right|. \qquad (5)$$

For an ergodic map, the sum is effectively weighted with the probability density $P(x)$ by the very process of iteration. Eq. (5) coincides with the expression for the Lyapunov exponent $\lambda$ of a 1D map, with the only difference that the natural (base-e) logarithm is used in the expression for the Lyapunov exponent instead of the base-2 logarithm. Consequently, the Lyapunov exponent may be treated as the information production rate expressed in base-e units. To convert it to bits per iteration, one must multiply $\lambda$ by $\log_2 e$:

$$I = (\log_2 e)\, \lambda. \qquad (6)$$

Information $I$ is easily calculated for some simple maps, for instance, for the family of "tent" maps, described by Eq. (1) with the right-hand side

$$x(k+1) = \begin{cases} x(k)/\mu, & x(k) < \mu, \\ (1 - x(k))/(1 - \mu), & x(k) > \mu. \end{cases} \qquad (7)$$

The maximum rate of information production in (7) is equal to one bit per iterate and is achieved in the symmetrical map with $\mu = 0.5$. The expression for $I$ is

$$I = -[\mu \log_2 \mu + (1 - \mu) \log_2(1 - \mu)]. \qquad (8)$$
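As a quick numerical check (ours, not from the original paper), the sketch below estimates $I$ for the asymmetric tent map by time-averaging the slope logarithm along a trajectory, Eq. (5), and compares it with the closed form (8). Python is assumed; all names are illustrative.

```python
import math
import random

MU = 0.3  # tent map parameter

def tent(x, mu=MU):
    """Tent map (7)."""
    return x / mu if x < mu else (1.0 - x) / (1.0 - mu)

def tent_slope(x, mu=MU):
    """|df/dx| of the tent map: 1/mu on the left branch, 1/(1-mu) on the right."""
    return 1.0 / mu if x < mu else 1.0 / (1.0 - mu)

def info_rate(n=200_000):
    """Estimate I (bits/iteration) via the time average (5)."""
    x = random.random()
    total = 0.0
    for _ in range(n):
        total += math.log2(tent_slope(x))
        x = tent(x)
    return total / n

closed_form = -(MU * math.log2(MU) + (1 - MU) * math.log2(1 - MU))  # Eq. (8)
print(f"time average (5): {info_rate():.4f} bits/iter")
print(f"closed form  (8): {closed_form:.4f} bits/iter")  # ~0.8813 for mu = 0.3
```

The two values should agree to a few decimals; by Eq. (6), multiplying the Lyapunov exponent $\lambda$ (computed with natural logarithms) by $\log_2 e$ gives the same number.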

As a second example, consider the logistic parabola map

$$x(k+1) = \mu x(k)(1 - x(k)). \qquad (9)$$

In the case of $\mu = 4$, this map is ergodic and $I$ is equal to one bit per iterate. The slope of the map $y = \mu x(1 - x)$ is not everywhere greater than one, so the absence of stable periodic points cannot be guaranteed. This leads to a complicated interplay (for $\mu < 4$) between chaotic and stable periodic modes.

2.1. Fluctuations of information production rate

Both the mean rate of information production and the fluctuations of this rate in time are of interest. Indeed, at different time moments information is produced at different rates. This difference, as will be shown below, plays an important role in processing chaotic signals. Consider the information production fluctuations in the discussed examples.

The slope of the Bernoulli shift map is constant, so the mean rate of information production coincides with the rate of its production on any iteration; the distribution density of this rate is a $\delta$-function (Fig. 1a). The information production of an asymmetric tent map in one iteration can take two values:

$$I = \begin{cases} -\log_2 \mu, & x < \mu, \\ -\log_2(1 - \mu), & x > \mu. \end{cases} \qquad (10)$$

Since the probability density of the variable is constant within the interval (0, 1), the number of iterates with each of the two values of $I$ is proportional to $\mu$ and $1 - \mu$, respectively. Therefore, the probability density of the information production has the following form (Fig. 1b):

$$P(I) = \mu\,\delta(I + \log_2 \mu) + (1 - \mu)\,\delta(I + \log_2(1 - \mu)). \qquad (11)$$
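A minimal numerical illustration (ours, not the paper's): tallying the per-iteration information values along a tent-map trajectory recovers the two-peak distribution (10) and (11) with weights $\mu$ and $1 - \mu$.

```python
import math
import random
from collections import Counter

mu = 0.3
x = random.random()
counts = Counter()
n = 100_000
for _ in range(n):
    # per-iteration information, Eq. (10): one of two values for the tent map
    counts[round(math.log2(1 / mu if x < mu else 1 / (1 - mu)), 6)] += 1
    x = x / mu if x < mu else (1 - x) / (1 - mu)

for value, cnt in sorted(counts.items()):
    print(f"I = {value:.3f} bits with empirical weight {cnt / n:.3f}")
# expected: I ~ 0.515 bits with weight ~0.7, I ~ 1.737 bits with weight ~0.3
```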

For the logistic map $x(k+1) = \mu x(k)(1 - x(k))$, the information production distribution density varies strongly with the parameter $\mu$. For example, the shape of the distribution for $\mu = 3.7$ is shown in Fig. 1c.

2.2. Restrictions imposed by information theory on transmission of chaotic signals

A dynamical system generating chaos can be treated as a specific source of information messages with the mean rate of information production $I$. In accordance with the Shannon theorem [2], in order to transmit information volume $I$ per unit time, the channel capacity must satisfy the relation

$$C \geq I. \qquad (12)$$

Relation (12) does not give a procedure for cleaning or synchronizing chaotic signals. The sense of (12) is that the "communication channel" must possess some minimum capacity to pass chaotic signals; in other words, the noise level must not exceed a certain level. The quantitative analysis is based on the theorem on the information-carrying capacity of a noisy channel [2]. According to the theorem, the capacity of a channel with frequency bandwidth $W$, white Gaussian noise of power $N$, and average transmitted signal power $S$ is equal to

$$C = W \log_2 \frac{S + N}{N}. \qquad (13)$$

That is, by means of encoding, one can transmit signals at the rate $W \log_2((S + N)/N)$ bps with an arbitrarily small error rate.


Fig. 1. Per-iteration distribution density of information produced by (a) the Bernoulli shift map, (b) the tent map with μ = 0.3, and (c) the logistic map with μ = 3.7.

3. Linear interaction of chaos, information signals and noise

Classical information theory deals with two types of sources producing information (entropy): message sources and noise sources. A message source can generate a finite amount of information per unit time, as takes place in the case of a finite alphabet. It can also generate information in the range from zero to infinity if the signals take continuous values that can be measured with unlimited precision. However, finite-precision measurements result in a limited number of states, since only the states that can be distinguished with the given measurement precision make sense; hence, the information that can be received from the source also becomes finite. Noise (interference) in the communication channel can also be a reason for a finite amount of perceived information. A standard model of a noise source is a white Gaussian noise source. White noise in discrete-time sources is a sequence of normally distributed independent samples. Formally, such noise has infinite entropy.

Chaotic oscillators can also be treated as information sources, namely, as sources with very special features:

• signals of such sources take continuous values;
• the average information production rate is finite and, in the case of 1D maps, is determined by relation (5).


The scope of information theory is the analysis of the interaction of information (entropy) sources with each other and with receivers. Actually, in the case of message and noise sources, one may talk only about transmission of information in the absence or in the presence of interference, i.e., either about the interaction of several message sources, or about the interaction of message sources with noise sources. Adding chaotic sources to this pair essentially extends and complicates the whole picture. Now the following combinations of entropy (information) sources are possible: chaotic source–noise source; chaotic source–chaotic source; message source–chaotic source; and, finally, message source–chaotic source–noise source. Let us find what interesting information and communication problems correspond to these types of interaction.

3.1. Chaotic source–noise source

Cleaning chaotic signals of noise: Filtering (cleaning) is a typical problem of signal processing. It will be shown below that this problem has very specific features in the case of chaotic signals.

Chaotic synchronization in the presence of noise: Chaotic synchronization may be treated as a process of transmitting information through a noisy channel from a chaotic source to a receiver that must reproduce the transmitted signals either exactly or with an admissible level of distortion. What is important here is that, regardless of the physical nature of the synchronization process, it can be established and maintained only if the channel has sufficient information-carrying capacity (throughput). Moreover, the necessary channel capacity is determined by the degree of chaoticity of the "transmitter". Actually, in this case we deal with a certain generalization of the synchronization concept, because synchronization is understood here not as imposing the behavior of one system on the other, but as the construction of an exact or close copy of the signal by the receiver (the copy also implies behavior synchronous in time, up to the time of signal propagation and processing). Also important is that treating synchronization as a process of transmitting information makes sense only in the case of chaotic signals, since the amount of information in periodic signals, or in irregular signals predictable over large time intervals, is zero.

Other examples of the interaction of chaotic signals (systems) and noise are radio, acoustic and optical sensing using chaotic signals; distinguishing chaotic and noise signals [3,4]; and extraction of chaotic signals from interference [5].

3.2. Chaotic source–message source

Information transmission using chaos: Beginning from 1992, a number of methods for transmitting information using chaotic dynamics were proposed, such as chaotic masking [6,7], switching of chaotic modes [8–10], nonlinear mixing [11–13], use of inverse systems [14–16], chaos control [17], etc.

Chaotic masking: Let there be a message source whose information is to be received by its "own" user (users) and must not be received by other observers. In this case, a chaotic source may be used for masking the information signal; the intended users are supplied with compensators of the masking signal.

3.3. Chaotic source–chaotic source

Separation of chaotic signals: Let there be two, in general different, ("drive") dynamic systems generating chaotic oscillations. These oscillations are summed, and the sum signal is fed to a pair of other ("response") dynamic systems that may be connected to each other. Is the inverse problem, separation of the two oscillating processes in the pair of response systems, solvable under these conditions? This is the problem of chaotic signal separation.

3.4. Message source–chaotic source–noise source

As examples of such interaction, one can mention systems for information transmission using chaos operating in real conditions with interference; multiple-access communication systems employing chaotic carriers and retrieving one's own message against the background of other users' signals; and, finally, a radio environment with many users employing ordinary as well as chaotic signals and producing interference to each other.

Below, we consider from the information viewpoint three typical problems of chaotic signal processing: cleaning chaotic signals of noise, chaotic synchronization, and separation of chaotic signals. The aim of the paper is to analyze the information aspects, to find the limiting potential efficiency of chaotic signal processing, and to investigate concrete algorithms in view of maximum efficiency.


4. Cleaning chaotic signals

The problem of cleaning (filtering) chaotic signals is of great interest in many applications, and it also deserves attention from a theoretical point of view. In the simplest form, it can be stated as follows. Let there be a chaotic source (CS) sending a signal $x(k)$ into a communication channel where noise $w(k)$ is added to the signal. The mixture $z(k)$ of the chaotic signal and noise is fed to a device called a chaos receiver (CR), whose purpose is to obtain an estimate $\hat{x}(k)$ as close as possible to the CS output signal $x(k)$ (Fig. 2).

In this statement, the problem of chaotic signal cleaning is a version of the classical problem of signal transmission over a noisy communication channel with subsequent estimation [2,18], where the chaotic signal plays the role of the information signal. On the other hand, it has some common features with the problem of obtaining a synchronous chaotic response, or the problem of synchronization of the CS with the CR when the signal in the channel is disturbed by interference. Finally, this problem can be discussed as the problem of obtaining, at the output of the chaos receiver, a copy of the signal generated by the CS with maximum precision.

The problem of cleaning chaotic signals was analyzed in a number of publications by means of different approaches that did not take into account the information properties of the chaotic signal [5,19]. The significance of the information aspect of the problem was noted in [20,21]; however, the relation between the cleaning efficiency and the information properties of chaotic signals was not considered in detail in these publications. Here, we demonstrate the practical possibility of cleaning chaotic signals of noise taking their information properties into account [22].

4.1. The first approach

Let the chaotic dynamic system be the Bernoulli shift map

$$f(x, \mu) = (\mu x) \bmod 1. \qquad (14)$$

For $\mu = 2$, this map shifts the mantissa of the binary representation $x(1) = 0.a_1 a_2 \ldots$, $a_i \in \{0, 1\}$, to the left by one position. Let us divide the interval $[0, 1]$ into two parts, $[0, 1/2)$ and $[1/2, 1]$, and set the symbolic variable $S(k)$ to 0 if $x(k) < 1/2$ and to 1 if $x(k) \geq 1/2$. This means that $S(k) = a_k$, and the symbolic sequence $S(1), \ldots, S(n)$ contains all information about $x(1)$ to within $n$ binary digits.

The mixture of the chaotic and noise signals, $z(k) = x(k) + w(k)$, is fed to the response system. If $\langle x^2 \rangle \gg \delta^2 = \langle w^2 \rangle$, then for the majority of iterates the integer parts of the doubled values $z(k)$ and $x(k)$ are equal to the symbolic values:

$$\mathrm{int}(2x(k)) = \mathrm{int}(2z(k)) = S(k). \qquad (15)$$

Thus, $x(1)$ can be recovered to within $n$ binary digits from the first $n$ samples. In other words, at the response system output, we can obtain a signal $\hat{x}(1)$ coinciding with the drive system signal to good accuracy. Of course, relation (15) breaks down from time to time, but the noise level in the signal at the response system output is considerably lower than that in the channel signal.

Note two circumstances. First, in the response system we use only very rough information about the chaotic samples, namely, only one bit (0 or 1). This is exactly the amount of information produced by the CS in one iterate, and using so little information increases the probability of obtaining the correct value of the sample estimate. Second, $x(1)$ can be recovered with $n$-bit accuracy only after the series of $n$ samples $z(1), \ldots, z(n)$ is received. That is, $x(k)$ is recovered with a delay with respect to the reception of the sample $z(k)$ in the response system; the stricter the accuracy requirement, the longer the delay.
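A minimal sketch of this first approach (our illustration; names and parameters are ours, not the paper's): extract the symbols $S(k)$ from the noisy samples by Eq. (15) and rebuild $x(1)$ from its recovered binary digits.

```python
import random

def clean_first_approach(z):
    """Recover x(1) from noisy Bernoulli-map samples z(1..n) via Eq. (15)."""
    # symbol estimate: S(k) = int(2 z(k)), clipped to {0, 1} in case noise
    # pushes z outside [0, 1)
    symbols = [min(1, max(0, int(2.0 * zk))) for zk in z]
    # x(1) = 0.S(1)S(2)... in binary
    return sum(s * 2.0 ** (-(k + 1)) for k, s in enumerate(symbols))

random.seed(1)
n, delta = 40, 0.05                    # 40 samples, channel noise std 0.05
x = random.random()
xs, zs = [], []
for _ in range(n):                     # drive system: x(k+1) = 2 x(k) mod 1
    xs.append(x)
    zs.append(x + random.gauss(0.0, delta))
    x = (2.0 * x) % 1.0

x1_hat = clean_first_approach(zs)
print(f"x(1) = {xs[0]:.12f}, estimate = {x1_hat:.12f}")
# the estimate is typically far more accurate than any single noisy sample;
# occasional symbol errors (when some x(k) lies within ~delta of 1/2) degrade it
```

The procedure of signal cleaning can be modified as follows.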

Fig. 2. Scheme for cleaning chaotic signals of noise: CS, chaotic source; CR, chaos receiver; x(k), chaotic signal; w(k), noise; z(k) = x(k) + w(k); x̂(k), chaotic signal estimate.


4.2. The second approach

Let us introduce the map inverse to the Bernoulli shift map. Note that it is a two-valued contracting map (the contraction factor is equal to 2). Fix a time moment $k_0 \geq 0$ and a natural number $n \geq 1$. To get an estimate $\hat{x}(k_0)$, we use the following algorithm. Consider a piece $\{z(k)\}_{k=k_0}^{k_0+n}$ of the observed trajectory. The idea of the cleaning is that map (1) is strictly contracting under backward iteration, i.e., iterating the sample $z(k_0+n)$ backward $n$ times allows us to approach the true sample $x(k_0)$ closely enough. For the Bernoulli shift map, the contraction per backward iteration is equal to 2, so the deviation of the estimate from the true value is equal to $w(k_0+n)\,2^{-n}$.

A direct implementation of this approach is impossible, since in map (14) two argument values correspond to a single function value. This problem can be solved by using the information present in the observed piece $\{z(k)\}_{k=k_0}^{k_0+n}$ of the trajectory in order to select one of the two branches of the inverse map. To take the correct branch at each backward iterate, we can use the symbolic sequence $S(k)$: if $S(k-1) = 0$, we take the lower branch of the inverse map, and the upper branch if $S(k-1) = 1$.

As in the first approach, the estimation accuracy of the sample value $x(k)$ increases by a factor of two with each backward iteration step. However, in the case of small noise with $\delta^2 \ll 1$, the initial uncertainty is equal to 1/2 for the first approach and $\delta$ for the second. Therefore, for the same estimation accuracy, the second approach requires fewer iterations (a shorter delay).

4.3. The third approach

In the third approach to cleaning the Bernoulli shift map signal, the branch for the backward iterate of the current sample $z(k)$ is chosen using the previous sample $z(k-1)$ instead of the symbolic sequence element $S(k-1)$ [21]. The idea of the approach is as follows. At the first step, we choose the one of the two pre-images of the point $z(k_0+n)$ that is nearest to the sample $z(k_0+n-1)$; we denote it $y(k_0+n-1)$. At the second step, of the two pre-images of the sample $y(k_0+n-1)$ we take the one nearest to $z(k_0+n-2)$ and denote it $y(k_0+n-2)$. The process is repeated until we obtain the series of samples $\{y(k)\}_{k=k_0}^{k_0+n-1}$. These samples approximate the "true" trajectory, and the value $y(k_0)$ is taken as the estimate $\hat{x}(k_0)$ of $x(k_0)$.

The discussed procedures of cleaning chaotic signals of noise for map (14) can be generalized to other 1D maps (with a finite number of inverse map branches). In general, the cleaning quality for each piece of the trajectory is determined by the local Lyapunov exponent (the local rate of information production) on this segment. The "backward analysis depth" $n$ is a free parameter of the procedure. If the "true" trajectory is chosen correctly, the cleaning quality increases exponentially with $n$ (as $2^n$ in the case of Bernoulli shift map (14)). However, an increase of $n$ can also increase the probability of a false choice of the "true" trajectory, which degrades the cleaning quality.

In order to estimate the efficiency of the discussed methods, we simulate the cleaning of chaotic signals generated by Bernoulli shift map (14) with Gaussian noise in the channel.
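As an illustration of the third approach (our sketch, with hypothetical names; the paper's actual simulation details may differ), backward iteration of the noisy trajectory with the pre-image chosen nearest to the preceding noisy sample:

```python
import random

def clean_third_approach(z, k0, n):
    """Estimate x(k0) by n backward iterations of the inverse Bernoulli map,
    choosing at each step the pre-image nearest to the previous noisy sample."""
    y = z[k0 + n]
    for k in range(k0 + n - 1, k0 - 1, -1):
        lower, upper = y / 2.0, (y + 1.0) / 2.0   # the two pre-images of y
        y = lower if abs(lower - z[k]) < abs(upper - z[k]) else upper
    return y

random.seed(2)
N, delta = 60, 0.02
x = random.random()
xs, zs = [], []
for _ in range(N):                     # drive system: x(k+1) = 2 x(k) mod 1
    xs.append(x)
    zs.append(x + random.gauss(0.0, delta))
    x = (2.0 * x) % 1.0

k0, n = 10, 10
print(f"x(k0)       = {xs[k0]:.10f}")
print(f"noisy z(k0) = {zs[k0]:.10f}")
print(f"estimate    = {clean_third_approach(zs, k0, n):.10f}")
# with correct branch choices the residual error is about w(k0+n) * 2**-n
```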

Fig. 3. Comparison of the efficiency of cleaning methods 1–3 for the Bernoulli shift map (n = 10, Gaussian noise): cleaning coefficient as a function of the noise level in the channel.


In the simulation of the second and third methods, the initial error was equal to $\delta$, while for the first approach it was equal to 1/2. Each backward iterate decreases the noise level by 6 dB. Hence, to get comparable results, the number of backward iteration steps in the first approach must be greater than in the second and third approaches by

$$\Delta n = [20 \lg(1/2\delta)]/6 \approx 3.3 \lg(1/2\delta).$$

The numerical results are shown in Fig. 3. Analysis of the plots shows that the third method (curve 3) is the most efficient and that it continues functioning at higher noise levels. The first and the second methods (curves 1 and 2) are nearly equivalent for a proper number of steps.

5. Synchronization of chaotic systems

The phenomenon of chaotic synchronization of two systems with unidirectional coupling implies, as a rule, a direct effect of a physical process $x(k)$ in one system on the other system, in the presence, in general, of noise $w(k)$ in the channel. The case where two systems are synchronized not directly through the physical signals generated by them, but through a number of signal transformations, is also typical. The physical nature of the signal in the channel, after the transformer, may differ from that of the signal at the output of the drive system. For example, the drive and response systems may be mechanical or electric, with an optical signal in the channel; or analog systems may be connected through a discrete channel. So, a more realistic synchronization circuit is that of Fig. 4a. Here, along with the drive and response systems, there are a transformer (Tr) of the physical signal from one form to another, an inverse transformer (Tr⁻¹), and a channel that carries the signal in its new form $y(k)$ from the transformer to the inverse transformer. The powers of the signals at the outputs of the drive system and the transformer can also differ. However, at the response system input we must again have a physical signal $u(k)$ characteristic of the synchronized systems; it is reconstructed in the receiver from the signal that comes from the channel. This means that we substitute the direct physical effect of one system on the other by its imitation, i.e., we actually replace the physical effect by information about this effect; in other words, we replace physical interaction by information interaction [23,24].

Definition. Let there be a drive and a response system. We say that the response system is synchronized with the drive system if the signal at the output of the response system is a copy of the signal at the output of the drive system, irrespective of whether they interact directly through the physical signal generated by the drive system, through a signal transformed to some other form and then transformed back, or with the help of information delivered from the drive to the response system.

Treating synchronization as an information process, let us discuss two synchronization methods, using the Bernoulli shift map and the tent map as chaotic sources.

Fig. 4. Synchronization circuits: (a) conventional; (b) first synchronization method using symbolic sequences; (c) second synchronization method using symbolic sequences. CS, chaotic source; CR, chaos receiver; Tr, transformer; Tr⁻¹, inverse transformer; Q₂, quantizer.


Fig. 5. The map inverse to the tent map.

Consider the map

$$x(k) = f^{-1}(x(k+1)) \qquad (16)$$

inverse to map (1). For the symmetric tent map (7), for example, this is the map depicted in Fig. 5. Let a symbolic sequence $S(1), \ldots, S(k), \ldots, S(k+n), \ldots$ be fed to the receiver of chaos instead of the original chaotic sequence $x(k)$. A term $S(k+n)$ of the symbolic sequence localizes the value of the chaotic sample to within the interval $[0, 1/2)$ for $S(k+n) = 0$ or $[1/2, 1]$ for $S(k+n) = 1$. Let us apply map (16) to either interval. As a result, we obtain two segments, each of size 1/4. Of the two, we take the segment corresponding to the element $S(k+n-1)$ of the symbolic sequence; for example, if $S(k+n-1) = 1$, we take the segment corresponding to the upper branch of map (16). Thus, we obtain an estimate of $x(k+n-1)$ to within an interval of length 1/4. Repeating the procedure $n$ times, we obtain an estimate of $x(k)$ to within $2^{-(n+1)}$.

The methods for synchronization in the presence of channel noise discussed below differ in the circuit forming the symbolic sequence. In the first method (Fig. 4b), the drive system generates a signal in the range $[0, 1]$. The signal is then transformed according to

$$y(k) = 2(x(k) - 1/2) \qquad (17)$$

and is sent through the communication channel, where noise $w(k)$ is added. In the second method (Fig. 4c), the signal $x(k)$ from the chaotic source output is fed to a two-level quantizer (Q₂), where it is transformed into an element of the symbolic sequence, $S(k) = 0$ if $x(k) < 1/2$ or $S(k) = 1$ if $x(k) \geq 1/2$, which in turn is transformed into a physical signal $S'(k) = -1$ if $S(k) = 0$ or $S'(k) = +1$ if $S(k) = 1$. This physical signal is sent to the noisy channel. A priori, the second method should be less sensitive to errors than the first one.
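The core of both methods is the reconstruction of $x(k)$ from the received symbols by backward iteration of the inverse tent map. Below is a compact sketch of that step (ours; variable names are illustrative), assuming the symmetric tent map with $\mu = 0.5$:

```python
def reconstruct_from_symbols(S):
    """Estimate x(k) from symbols S(k), ..., S(k+n) by iterating the inverse
    symmetric tent map, choosing the branch indicated by each symbol."""
    # S[-1] localizes the last sample to [0, 1/2) or [1/2, 1]; take the midpoint
    x_est = 0.25 if S[-1] == 0 else 0.75
    for s in reversed(S[:-1]):
        # inverse tent map branches: lower y/2 (s = 0), upper 1 - y/2 (s = 1)
        x_est = x_est / 2.0 if s == 0 else 1.0 - x_est / 2.0
    return x_est

# demo: forward tent trajectory, then recover the first sample from symbols
x, xs, symbols = 0.3123456789, [], []
for _ in range(30):   # a short run avoids finite-precision orbit degeneracy
    xs.append(x)
    symbols.append(0 if x < 0.5 else 1)
    x = 2.0 * x if x < 0.5 else 2.0 * (1.0 - x)

print(f"x(1) = {xs[0]:.10f}, estimate = {reconstruct_from_symbols(symbols):.10f}")
# the error is bounded by 2**-(n+1); here n = 29
```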

Fig. 6. Bit error rate (BER) for the case of the tent map: 1, first method; 2, second method.


Actually, in the case of quantizing the signal to two levels, a signal with the well-separated values −1 and +1 is sent to the channel, and a much greater noise level is necessary to push the signal from −1 to +1 or vice versa, which makes errors in the receiver quantizer less probable. In Fig. 6, the calculated probabilities of single errors for the discussed synchronization methods are presented (first method, curve 1; second method, curve 2) using the tent map as a chaotic source, which confirms this conclusion. Thus, effective methods for transmitting the information generated by the map through a noisy channel substantially improve the noise resistance of synchronization and allow us to obtain synchronization of chaotic systems at a noise level close to the theoretical limit.

6. Chaotic signal separation

Let there be two (or more) sources of chaotic signals $x_j(k)$, $j = 1, 2$. On the path (a channel) to an observer, the signals $x_j(k)$ are summed. In general, the sum signal is also contaminated by additive noise $w(k)$ (Fig. 7). The observer has to separate the individual signals from the sum. Let the chaotic sources be 1D maps described by the equations

$$x_1(k+1) = f_1(x_1(k)), \qquad x_2(k+1) = f_2(x_2(k)). \qquad (18)$$

The signal in the channel is

$$u(k) = x_1(k) + x_2(k) + w(k). \qquad (19)$$

So, the problem can be rigorously stated as follows. Given a sequence of samples of the sum signal $\{u(k)\}$, $k = 1, 2, \ldots, N$; knowing the dynamics of the systems generating the chaotic signals (here, the functions $f_1$ and $f_2$); and given (good) estimates $\tilde{x}_1(N)$ and $\tilde{x}_2(N)$ at the $N$th time moment, obtain estimates $\tilde{x}_1(k)$ and $\tilde{x}_2(k)$, $k = 1, 2, \ldots, N$, of the signals $x_1(k)$ and $x_2(k)$ on the entire time interval, satisfying the dynamics of sources (18) and as close as possible to $x_1(k)$ and $x_2(k)$, respectively.

For definiteness, let the chaotic sources be described by logistic parabola maps $f(x) = \mu x(1-x)$:

$$x_1(k+1) = \mu_1 x_1(k)(1 - x_1(k)), \qquad x_2(k+1) = \mu_2 x_2(k)(1 - x_2(k)) \qquad (20)$$

with parameters $\mu_1 = 3.7$ and $\mu_2 = 3.8$. They generate $I_1 = 0.51$ and $I_2 = 0.62$ bits per iteration, respectively. These values are, however, averages: the amount of generated information differs from iteration to iteration.

According to expression (19), the observer receives the sum of the chaotic signals $x_1(k)$ and $x_2(k)$ distorted by noise $w(k)$. This can be treated as a model of a "communication channel" with Gaussian noise through which a signal $x(k) = x_1(k) + x_2(k)$ is transmitted. On each iteration step, the sum of the signals contains a certain amount of information, whose distribution density is presented in Fig. 8. In order to separate the signals $x_1$ and $x_2$, it is necessary that this information is not lost through contamination of the signal sum by the noise $w$ [25–27]. According to the Shannon theorem [2], the per-iteration information-carrying capacity of the channel is

$$C = \frac{1}{2} \log_2 \left( 1 + \frac{\langle x^2(k) \rangle}{\langle w^2(k) \rangle} \right) = \frac{1}{2} \log_2(1 + \mathrm{SNR_{in}}), \qquad (21)$$

where $\mathrm{SNR_{in}}$ is the signal-to-noise ratio in the channel. The maximum amount of information going through this channel is determined by the right boundary $I_{max}$ of the distribution density in Fig. 8. This gives a necessary condition for the signal separation:

$$C > I_{max}, \qquad (22)$$

Fig. 7. Separation of chaotic signals.


Fig. 8. Per-iteration distribution density of information produced simultaneously by two logistic maps with μ₁ = 3.7 and μ₂ = 3.8.

and consequently

$$\mathrm{SNR_{in}} > 2^{2 I_{max}} - 1. \qquad (23)$$

In the discussed case, $I_{max} \approx 3.4$ bits per iteration, hence

$$\mathrm{SNR_{in}}\,[\mathrm{dB}] = 10 \lg(2^{2 I_{max}} - 1) > 20\ \mathrm{dB}. \qquad (24)$$
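The threshold (24) is simple arithmetic; here is a short check (ours), under the assumption $I_{max} = 3.4$ bits:

```python
import math

I_max = 3.4                                       # bits per iteration (Fig. 8)
snr_min_db = 10 * math.log10(2 ** (2 * I_max) - 1)
print(f"SNR_in must exceed {snr_min_db:.1f} dB")  # ~20.4 dB, matching Eq. (24)
```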

Thus, information theory gives us a lower bound on the channel SNR needed for separation in the case of two logistic maps. However, it gives no hint about the procedure, which must be derived from other considerations.

The idea of separation is the same as that of signal cleaning. The observer simultaneously iterates the maps $f^{-1}$ inverse to those that generate the chaotic signals,

$$x_1(k-1) = f_1^{-1}(x_1(k)), \qquad x_2(k-1) = f_2^{-1}(x_2(k)), \qquad (25)$$

starting from arbitrary initial conditions for each map. Since inverse maps (25) are contracting, the estimates of the signals $x_1$ and $x_2$ become more and more accurate with each iteration (on the average). The only problem is to take the proper branches of the inverse maps. It is solved at each moment $k$ by minimizing the deviation of the sum of the estimates

$$u_{ij}(k-1) = \tilde{x}_1^i(k-1) + \tilde{x}_2^j(k-1), \qquad i, j = 1, 2, \qquad (26)$$

from the sum signal $u(k-1)$ in the channel, as illustrated in Fig. 9. If $\lambda$ is the Lyapunov exponent of a map, then the average contraction factor of the inverse map is $e^{-\lambda}$. So, the estimate errors $d_1(l) = |\tilde{x}_1(l) - x_1(l)|$ and $d_2(l) = |\tilde{x}_2(l) - x_2(l)|$ of the separated signals $\tilde{x}_1(l)$ and $\tilde{x}_2(l)$ decrease exponentially (on the average):

$$d_1(l) = d_1(N) \exp(-\lambda_1 (N - l)), \qquad d_2(l) = d_2(N) \exp(-\lambda_2 (N - l)), \qquad (27)$$
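Here is a sketch of the single-branch separation algorithm just described (our illustration; the paper does not give implementation details). It iterates the inverse logistic maps backward from the endpoint estimates, choosing among the four branch combinations (26) the one whose sum best matches $u(k-1)$; a noise-free channel is assumed for simplicity.

```python
import math
import random

MU1, MU2 = 3.7, 3.8

def logistic(x, mu):
    return mu * x * (1.0 - x)

def preimages(y, mu):
    """Two branches of the inverse logistic map: x = (1 +/- sqrt(1 - 4y/mu))/2."""
    s = math.sqrt(max(0.0, 1.0 - 4.0 * y / mu))
    return (0.5 * (1.0 - s), 0.5 * (1.0 + s))

def separate_single_branch(u, e1, e2):
    """Backward pass: refine endpoint estimates e1, e2 into full trajectories."""
    x1h, x2h = [e1], [e2]
    for k in range(len(u) - 1, 0, -1):
        candidates = [(a, b) for a in preimages(x1h[0], MU1)
                             for b in preimages(x2h[0], MU2)]
        a, b = min(candidates, key=lambda c: abs(c[0] + c[1] - u[k - 1]))
        x1h.insert(0, a)
        x2h.insert(0, b)
    return x1h, x2h

random.seed(3)
N = 200
x1, x2 = random.random(), random.random()
t1, t2, u = [], [], []
for _ in range(N):
    t1.append(x1); t2.append(x2); u.append(x1 + x2)   # noise-free channel
    x1, x2 = logistic(x1, MU1), logistic(x2, MU2)

# slightly perturbed endpoint estimates (assumed given in the problem statement)
s1, s2 = separate_single_branch(u, t1[-1] + 1e-3, t2[-1] - 1e-3)
mid = N // 2
print(f"error at k={mid}: {abs(s1[mid] - t1[mid]):.2e}, {abs(s2[mid] - t2[mid]):.2e}")
# errors shrink roughly as exp(-lambda (N-k)), Eq. (27), down to machine precision
```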

Fig. 9. Choice between combinations of inverse map branches. l is discrete time, u_{ij}(l) = x̃₁ⁱ(l) + x̃₂ʲ(l), i, j = 1, 2. The values of the sum signal u(l) are denoted by asterisks.


where $d_1(N) = |\tilde{x}_1(N) - x_1(N)|$ and $d_2(N) = |\tilde{x}_2(N) - x_2(N)|$ are the initial estimate errors, and $\lambda_1$ and $\lambda_2$ are the Lyapunov exponents of the trajectories of the first and second systems, respectively. In agreement with expression (27), the accuracy of the estimates $\tilde{x}_1$ and $\tilde{x}_2$ improves exponentially with each step of inverse iteration and eventually reaches the limit of calculation accuracy. In our numerical experiments with accuracy $\varepsilon$, the limiting attainable closeness, i.e., the separation accuracy, is achieved after at most $p = -\log(\varepsilon d)/\lambda$ steps, where $d$ is the initial estimate error. Thus, the described procedure allows one to separate the signals, given on the interval $(1, N)$, nearly on the entire interval $(1, N - p)$ to within machine accuracy, with a few less accurate samples at the end of the separated signal sequences. The lower accuracy of the last $p$ samples can be explained by the lack of information on the interval $(N - p, N)$.

In the above algorithm for signal separation, only one branch is tracked back at each step, so it is called the single-branch algorithm. It proved efficient in the case of low channel noise; however, the noise boundary of efficient separation with this method was $\mathrm{SNR_{in}} = 65$ dB, which is rather far from the theoretical value of 20 dB. So, another (multi-branch) algorithm was proposed, whose efficiency was improved by using nonlocal information at each iteration. Several branches were backtracked simultaneously besides the one optimal in the sense of condition (26), and the best branch was then chosen among them by minimizing the average deviation from the channel signal. Due to the evident, inevitable restrictions on computational capabilities, memory resources, etc., we restrict the number of backtracked branches to some $M$ "best" in a certain sense, discarding the least probable ones. Besides, the specific dynamics of chaotic systems must be taken into account: since the backtracked branches tend to converge due to relation (27), from time to time we remove the "stuck" ones in order to keep the branches distinct. When the entire interval $(1, N)$ is processed, the separated signals $\tilde{x}_1$ and $\tilde{x}_2$ are obtained from the condition of minimum average deviation of the sum of the signal estimates from the channel signal $u(k)$.

Since in the absence of noise the signals of two logistic maps were efficiently separated, further investigation concentrated on the resistance of the method to channel noise. The signal recovery error $x_i(k) - \tilde{x}_i(k)$, $i = 1, 2$, was treated as residual noise in the separated signals. The signal-to-noise ratios $\mathrm{SNR_{out}}$ of the separated signals were calculated as functions of the signal-to-noise ratio of the channel signal $u(k)$:

$$\mathrm{SNR_{out}} = \langle x_i^2 \rangle / \langle (x_i - \tilde{x}_i)^2 \rangle, \quad i = 1, 2; \qquad \mathrm{SNR_{in}} = \langle (x_1 + x_2)^2 \rangle / \langle w^2 \rangle \qquad (28)$$
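A compact beam-search variant of the multi-branch idea described above (our sketch; the value of $M$, the pruning rule, and the scoring are illustrative assumptions, not the paper's exact algorithm), reusing preimages(), MU1 and MU2 from the previous sketch:

```python
def separate_multi_branch(u, e1, e2, M=16, eps=1e-9):
    """Track up to M branch histories, scored by the cumulative deviation of
    the estimate sum from the channel signal u; return the best pair."""
    beam = [(0.0, [e1], [e2])]              # (score, x1-history, x2-history)
    for k in range(len(u) - 1, 0, -1):
        expanded = []
        for score, h1, h2 in beam:
            for a in preimages(h1[0], MU1):
                for b in preimages(h2[0], MU2):
                    d = abs(a + b - u[k - 1])
                    expanded.append((score + d, [a] + h1, [b] + h2))
        expanded.sort(key=lambda t: t[0])
        beam, seen = [], []
        for cand in expanded:               # keep the M best, drop "stuck" twins
            if all(abs(cand[1][0] - s) > eps for s in seen):
                beam.append(cand)
                seen.append(cand[1][0])
            if len(beam) == M:
                break
    return beam[0][1], beam[0][2]
```

Against a noisy $u(k)$, the cumulative score plays the role of the average deviation minimized in the paper, and pruning near-identical leading samples keeps the surviving branches distinct, as required by the remark above.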

(all signals were normalized to zero mean). The calculation results are presented in Fig. 10. The results for the single-branch algorithm are represented by curve 1, which shows that the region of effective separation extends to $\mathrm{SNR_{in}}$ of 65 dB. The results for the multi-branch algorithm are presented by curve 2 (16 branches were tracked back); they indicate that with this algorithm the chaotic signals are separated at $\mathrm{SNR_{in}} \approx 25$–$30$ dB. The diagonal is the virtual border of effective separation, i.e., above the diagonal the noise in the separated signals is lower than that in the channel. Curve 3, given for comparison, corresponds to the results for the method of chaotic synchronization [28]; as can be seen in Fig. 10, the noise in the signals separated by that method is always higher than the channel noise.

Note that curves 1 and 2 exhibit thresholds (separation boundaries). To the right of the corresponding boundaries ($\mathrm{SNR_{in}} \approx 65$ dB for curve 1 and 30 dB for curve 2), the noise in the separated signals rapidly decreases according to relations (27), and its value is eventually determined only by the number of backward iteration steps and by the machine calculation accuracy. This means that the signals are not only separated but also cleaned of noise.

Fig. 10. Signal-to-noise ratio of the separated signals, SNR_out, as a function of the channel SNR_in. Curves are shown for (1) the single-branch algorithm, (2) the algorithm with 16 branches, and (3) the method of [28]. The results are averaged over 10,000-sample chaotic sequences.


As can be seen from the comparison of the single- and multi-branch algorithms in Fig. 10, the latter is 35–40 dB better in noise resistance, and its characteristic is much closer to the theoretical separation limit (24).

7. Conclusions

The above analysis shows that the capabilities and efficiency of processing procedures involving chaotic signals are, at the basic level, determined by the information properties of dynamic chaos. For example, at the same noise level, complete cleaning of one chaotic signal with high information content (and, consequently, high chaoticity) can prove impossible, while another signal with lower information content can be cleaned to any required precision. Thus, fundamental restrictions on the capabilities of chaotic signal processing exist. These restrictions are determined by the information properties of chaotic signals rather than by concrete methods.

At present, the question of quantum computers is extensively discussed. One of the premises for the development of this research direction was a paper by Landauer [29], in which the relations between computation, information transmission and measurement were established. The main result of that paper is the conclusion that energy restrictions on information processing exist, which can be expressed as follows: (i) computation by itself does not require any, even minimal, energy expenditure, and an ideal computing system can in principle function without any thermal losses; (ii) erasing information always produces heat that dissipates into the environment.

Restrictions on processing chaotic signals are similar to restrictions of the kind of the Second law, Shannon's theorems and Landauer's principle in the sense that they do not depend on concrete methods or algorithms; they are limits. The physical essence of information is exhibited in the existence of fundamental principles, relations and restrictions. It is a cornerstone of research in the field of quantum computation, determining the limiting efficiency of measurements, computations and information transmission. The results presented here indicate that chaotic signal processing is based on the same principles. This can have far-reaching consequences.

Acknowledgements

The work was supported in part by NATO LG (Nr PST.CLG 977018). The authors are grateful to G. Kassian and A. Khilinsky for useful discussions and help in preparing the paper.

References

[1] Shaw R. Z Naturforsch 1981;36A:80.
[2] Shannon C. Bell Syst Tech J 1948;27:379–623.
[3] Eckmann J-P, Ruelle D. Rev Mod Phys 1985;57:617.
[4] Afraimovich VS, Reiman AM. Nonlinear waves. Dynamics and evolution. Gorkii: Gorkii University; 1989. p. 238 (in Russian).
[5] Abarbanel HDI. Analysis of observed chaotic data. New York, Berlin: Springer-Verlag; 1996.
[6] Kocarev L, Halle KS, Eckert K, Chua L, Parlitz U. Int J Bif Chaos 1992;2(3):709.
[7] Cuomo MK, Oppenheim AV, Strogatz SH. IEEE Trans Circ Syst 1993;40(10):626.
[8] Parlitz U, Chua L, Kocarev L, Halle K, Shang A. Int J Bif Chaos 1992;2(4):973.
[9] Belskii YL, Dmitriev AS. Radiotekhnika i Elektronika 1993;38(7):1310 (in Russian).
[10] Dedieu H, Kennedy M, Hasler M. IEEE Trans Circ Syst 1993;40(10):634–42.
[11] Volkovskii AR, Rulkov NV. Pis'ma v GTF 1993;9(3):71.
[12] Dmitriev AS, Panas AI, Starkov SO. Int J Bif Chaos 1995;5(3):371.
[13] Dmitriev AS, Panas AI, Starkov SO, Kuzmin LV. Int J Bif Chaos 1997;7(7):1014.
[14] Halle KS, Wu CW, Itoh M, Chua LO. Int J Bif Chaos 1993;3(2):469.
[15] Hasler M, Dedieu H, Kennedy M, Schweizer J. In: Proc Int Symp on Nonlinear Theory and Appl, Hawaii, 1993. p. 87.
[16] Bohme F, Feldman U, Schwartz W, Bauer A. In: Proc Workshop NDES'94, Krakow, 1994. p. 163.
[17] Hayes S, Grebogi C, Ott E. Phys Rev Lett 1993;70(20):3031.
[18] Kotelnikov VA. The theory of optimal noise immunity. New York, Toronto, London: McGraw-Hill; 1959.
[19] Kantz H, Schreiber T. Nonlinear time series analysis. New York: Cambridge University Press; 1997.
[20] Stojanovski T, Kocarev L, Harris R. IEEE Trans Circ Syst 1997;44(10):1014.
[21] Rosa E, Hayes S, Grebogi C. Phys Rev Lett 1997;78(7):1247.
[22] Dmitriev AS, Kassian G, Khilinsky A. In: Proc 7th Int Workshop NDES-99, Rome, 1999. p. 187.
[23] Dmitriev AS. Izvestiya Vuzov Radiofizika 1998;41(12):1497 (in Russian).
[24] Dmitriev AS, Kassian G, Khilinsky A. Int J Bif Chaos 2000;10(4):749.
[25] Andreyev YuV, Dmitriev AS, Efremova EV. In: Proc ISCAS-2000, vol. 4, Geneva, 2000. p. 441.
[26] Andreyev YuV, Dmitriev AS, Efremova EV, Pustovoit VI. Doklady Rossiyskoi Akademii Nauk 2000;372(1):36 (in Russian).
[27] Andreyev YuV, Dmitriev AS, Efremova EV. Phys Rev E 2002;65:046220.
[28] Tsimring LS, Sushchik MM. Phys Lett A 1996;213:155.
[29] Landauer R. IBM J Res Develop 1961;5(3):183.