PHYSICS LETTERS A

Physics Letters A 208 (1995) 117-126

20 November 1995
A note on optimal ergodic invariant measures for piecewise linear maps on fractal repellers

K. Gustafson
Department of Mathematics, University of Colorado, Boulder, CO 80309-0395, USA

Received 22 February 1995; revised manuscript received 14 September 1995; accepted for publication 19 September 1995
Communicated by A.R. Bishop
Abstract

We answer a question about uniqueness of maximum information dimension invariant measures on invariant fractal sets, and we interpret the maximum information dimension as an instance of a general regularity principle. New expressions for the information dimension, Lyapunov exponent, and chaotic trajectory decay rate are derived.
1. Introduction
In this first section I recall the setting and question of Ref. [1]. In Section 2 I (a) answer the question in the negative, and (b) relate it to a more general regularity preference principle. In Section 3 I emphasize the points: (a) that uniqueness questions/statements are dependent upon the candidate class specified; and (b) that the uncountably infinite set of optimal measures do possess some interesting common properties: a symmetry, no lost information under the original interval map, and certain related conjugation rules among the parameters. These properties are then seen to link the criterion of maximal information dimension proposed in Ref. [1] to my more general regularity preference principle. Then, elaborating, in Section 4 I revisit optimality and information dimension and derive upper bounds and new expressions for the information dimension and the Lyapunov exponent from the multiparameter viewpoint. A useful new notion of Gibbs discrepancy is introduced. In Section 5 I present a new perspective on the decay rate as discussed by Grassberger, Kadanoff, and others. In Section 6 I revisit regularity and make a few observations concerning its relationship to decay, physicality, differentiation intertwining, and Fourier theory. In Section 7 I recall an interesting physical motivation for the De Rham maps.

In Ref. [1] a family of one-dimensional piecewise linear maps admitting fractal invariant sets and uncountably many invariant measures is constructed and shown to be ergodic (exact). An optimal measure is selected according to a criterion of maximal information dimension. Here I will observe that: (a) the optimal measure is not unique within the mapping family; (b) the maximal information dimension criterion is an instance of a general regularity principle. These conclusions follow readily from those of Ref. [1] when one takes my more general point of view of treating the family as a three-parameter (rather than one-parameter) family.
0375-9601/95/$09.50 © 1995 Elsevier Science B.V. All rights reserved. SSDI 0375-9601(95)00754-7
For brevity of this note I refer to Ref. [1] directly. The maps of the unit interval are (see Fig. 1 of Ref. [1])

$$x_{n+1} = S(x_n), \qquad S(x) = \begin{cases} \beta x, & 0 \le x \le \beta^{-1}, \\ \beta' x + (1 - \beta'), & 1 - \beta'^{-1} \le x \le 1. \end{cases}$$

It is assumed that β > 1, β′ > 1, and β⁻¹ + β′⁻¹ < 1. An example of these maps is the well-known singular Cantor map β = β′ = 3. The invariant Borel measures μ_α are obtained corresponding to their associated distribution functions F_α(x) ≡ μ_α(0, x], which are found as fixed points of the functional equations

$$T_\alpha f(x) = \begin{cases} \alpha f(\beta x), & 0 \le x \le \beta^{-1}, \\ \alpha, & \beta^{-1} \le x \le 1 - \beta'^{-1}, \\ (1-\alpha)\, f(\beta'(x-1) + 1) + \alpha, & 1 - \beta'^{-1} \le x \le 1. \end{cases}$$
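To make the functional equation concrete, here is a minimal numerical sketch (my own illustration, not from Ref. [1]; the parameter values are arbitrary): the fixed point F_α of T_α can be evaluated by recursing on its three branches.

```python
import math

def F(x, alpha, beta, betap, depth=60):
    """Evaluate the distribution function F_alpha(x), the fixed point of
    T_alpha, by recursing on its three-branch self-similar definition."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    if depth == 0:               # truncate the recursion
        return x
    if x <= 1.0 / beta:          # left branch: F(x) = alpha * F(beta * x)
        return alpha * F(beta * x, alpha, beta, betap, depth - 1)
    if x < 1.0 - 1.0 / betap:    # the gap: F is constant there
        return alpha
    # right branch: F(x) = (1 - alpha) * F(beta'(x - 1) + 1) + alpha
    return (1.0 - alpha) * F(betap * (x - 1.0) + 1.0, alpha, beta, betap, depth - 1) + alpha

# Cantor-map example beta = beta' = 3 with alpha = 1/2: F is the Cantor function
print(F(1.0 / 3.0, 0.5, 3.0, 3.0))   # boundary of the left branch: 0.5
```

The recursion makes visible that F_α is monotone, constant across the gap, and singular for generic parameters.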
It is assumed that 0 < α < 1. Then maximization of the information dimension,

$$D_\alpha = \frac{h}{\lambda} = \frac{\text{Kolmogorov--Sinai entropy}}{\text{Lyapunov exponent}} = \frac{-\alpha\ln\alpha - (1-\alpha)\ln(1-\alpha)}{\alpha\ln\beta + (1-\alpha)\ln\beta'}\,,$$

is proposed as the criterion for the optimal (natural) invariant measure. This generalizes the Sinai–Ruelle–Bowen absolutely continuous measures [2] to the singular case. It is shown in Ref. [1] that D_α has a unique maximum as a function of the parameter α and that this maximum coincides with the Hausdorff dimension of the maximal invariant set of the map S. The question of whether maximization of information dimension generally determines a unique invariant measure is raised in Ref. [1].

2. Optimal measures and a regularity principle

Proposition. (a) The optimal measure is not unique within the mapping family. (b) The maximal information dimension is an instance of a general regularity principle [3,4].

Proof. (a) We consider the expression D_{α,β,β′}, first more generally allowing β and β′ also to vary, and then immediately constrain the uniqueness question to the class of measures μ_{α,β,β′} with β⁻¹ + β′⁻¹ = 1 and α = β⁻¹. Then the (conjugate) relations become
$$\beta = \frac{1}{\alpha}\,, \qquad \beta' = \frac{1}{1-\alpha} = \frac{\beta}{\beta - 1}\,, \tag{1}$$

from which

$$D_{\alpha,\beta,\beta'} = \frac{-\alpha\ln\alpha - (1-\alpha)\ln(1-\alpha)}{\alpha\ln\alpha^{-1} + (1-\alpha)\ln(1-\alpha)^{-1}} = 1.$$
Thus all members of this class of measures μ_{α,β,β′} ≡ μ_{β⁻¹,β,β′} maximize D. (b) As pointed out in Ref. [1], the corresponding distribution function F_α has zero derivative almost everywhere unless β⁻¹ + β′⁻¹ = 1 and αβ = 1. In terms of this note, that means that the measures in the complement of the class μ_{β⁻¹,β,β′} are more singular. Thus the optimal measure, optimal from the point of view of maximal information dimension, is seen to be least singular with respect to Lebesgue measure. The authors of Ref. [1] also use the terms "richest" and "natural" for such measures. From the point of view of the present
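As a quick numerical illustration (mine, not from Ref. [1]) of the proposition, the information dimension is one line of code, and D = 1 can be confirmed across the conjugate class (1):

```python
import math

def info_dim(alpha, beta, betap):
    """Information dimension D = (KS entropy) / (Lyapunov exponent)."""
    h = -alpha * math.log(alpha) - (1.0 - alpha) * math.log(1.0 - alpha)
    lam = alpha * math.log(beta) + (1.0 - alpha) * math.log(betap)
    return h / lam

# singular Cantor map beta = beta' = 3 with alpha = 1/2: D = ln 2 / ln 3
print(round(info_dim(0.5, 3.0, 3.0), 4))               # 0.6309

# conjugate class (1): beta = 1/alpha, beta' = 1/(1 - alpha) gives D = 1
for a in (0.2, 0.5, 0.9):
    assert abs(info_dim(a, 1.0 / a, 1.0 / (1.0 - a)) - 1.0) < 1e-12
```

Every α in (0, 1) generates an optimal measure under the conjugate relations, which is the non-uniqueness claim of the proposition.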
author, and the interest that drove this note, these considerations are an instance of a very general regularity preference principle that I have developed elsewhere [3,4]. This general regularity principle is wide-ranging and may be regarded as a very extensive irreversibility law. Therefore, it is no surprise that a relative maximum entropy criterion such as the maximum information dimension used in Ref. [1] should fall within its confines.
3. Remarks

(a) As a general rule [3], uniqueness propositions often depend critically on specifying the class of objects under consideration. This applies especially when dealing with regularity properties, and pertains to both parameter space and state space considerations. In the present instance [1] I simply extended the parameter space, but in a natural way within context, since none of β, β′, α is a priori to be favored. The permitted three-parameter domain could be useful in applications.

(b) Notice that the conjugate selections (1), which afford us an uncountably infinite set of optimal measures μ_{α,β,β′} for which D_{α,β,β′} = 1, exhibit some further interesting properties with respect to the original piecewise linear maps. First, β⁻¹ and β′⁻¹ then effect a symmetric (about x = 1/2) partition of the unit interval. This is true independent of whether β′⁻¹ is to the right (β > 2) or to the left (β < 2) of β⁻¹. Second, the point 1 − β′⁻¹, by coinciding with the point β⁻¹, implements a situation of "no lost information" under the mapping S, i.e., the iterated system Sⁿ remains defined on the whole unit interval and the fractal dimension remains 1. This property appears again in the Radon–Nikodym derivative of the measure μ_α (see Ref. [1], Eqs. (10)). Third, the conjugate role of α = β⁻¹ is less consequential within the three-parameter domain, even though it does of course maximize (see Fig. 2 of Ref. [1]) the information dimension D_α once the parameters β and β′ have been chosen. More interestingly, its conjugate role relative to the conjugate roles of β and β′ appears in the Frobenius–Perron operator,

$$U_{\alpha,\beta,\beta'} f(x) = \alpha f(x/\beta) + (1-\alpha)\, f\bigl((x + \beta' - 1)/\beta'\bigr) = \beta^{-1} f(\beta^{-1} x) + \beta'^{-1} f(\beta'^{-1} x + \beta^{-1}),$$

as again reflecting the symmetric partition of the unit interval referred to above. These three properties are intimately related to the fact that the criterion of maximum information dimension proposed for "natural" invariant measure selection in Ref. [1] also selects the "most regular" invariant measure relative to Lebesgue measure. This is an instance of a more general regularity preference principle [3,4]. Without further elaboration, let me mention that I was led to this principle in Ref. [3], where I show that the Rankine–Hugoniot and entropy criteria for the unique downstream continuation of a solution in gas dynamics, both analytical and numerical, are determined more generally by the regularity preference principle. I have expanded on this principle in Ref. [4] within the context of general irreversibility considerations in statistical and quantum mechanics.
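The flattening action of the Frobenius–Perron operator in the gap-free conjugate case can be watched numerically. The following sketch (my own illustration; the particular density, parameters, and grid are arbitrary choices) applies U repeatedly to a smooth density and tracks the sup-deviation from the flat invariant density:

```python
import math

# conjugate choice: alpha = 1/beta, 1 - alpha = 1/beta', so 1/beta + 1/beta' = 1
alpha, beta, betap = 0.4, 2.5, 1.0 / 0.6

def U(f):
    """Frobenius-Perron operator U_{alpha,beta,beta'} acting on densities."""
    return lambda x: alpha * f(x / beta) + (1.0 - alpha) * f((x + betap - 1.0) / betap)

f = lambda x: 1.0 + math.cos(2.0 * math.pi * x)   # smooth density with integral 1
g = f
devs = []
for _ in range(8):
    g = U(g)
    # sup-deviation from the flat density, sampled on a grid
    devs.append(max(abs(g(j / 400.0) - 1.0) for j in range(401)))

print(devs[-1] < 0.05 < devs[0])   # True: the deviation shrinks geometrically
```

Each application of U contracts the derivative of the density by the factor α/β + (1 − α)/β′ < 1, which is why the smooth initial deviation dies out toward the flat invariant density.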
4. Revisiting measure optimality and information dimension
Whereas the approach of Ref. [1] (see also Ref. [5]) emphasized fixed β and β′ and then looked at measure optimality as a question in the parameter α, my point of view in Sections 2 and 3 was a three-parameter one. In this section I want to revisit the one-parameter and three-parameter views, and give some indications about generalizations to n parameters. First I want to address the question of obtaining upper bounds for the information dimension. Actually, as will be seen below, we may solve this problem exactly in the one-parameter case, and in principle exactly in the three- and n-parameter cases, but by seeking these upper bounds we gain a useful new perspective.
Lemma 4.1 (upper bounds). For arbitrary β > 1, β′ > 1, 1/β + 1/β′ < 1, 0 < α < 1, the information dimension D_{α,β,β′} has upper bounds

(a) $$D_{\alpha,\beta,\beta'} \le 1 - \frac{1 - (\beta^{-1} + \beta'^{-1})}{\alpha\ln\beta + (1-\alpha)\ln\beta'}\,,$$

(b) $$D_{\alpha,\beta,\beta'} \le \frac{-\alpha\ln\alpha - (1-\alpha)\ln(1-\alpha)}{-\alpha\ln\alpha - (1-\alpha)\ln(1-\alpha) + 1 - (\beta^{-1} + \beta'^{-1})}\,.$$
The second upper bound is sharper than the first.

Proof. (a) We may write

$$D_{\alpha,\beta,\beta'} = \frac{-\alpha\ln\alpha - (1-\alpha)\ln(1-\alpha)}{\alpha\ln\beta + (1-\alpha)\ln\beta'} = \frac{[\alpha - \alpha\ln\alpha] + [(1-\alpha) - (1-\alpha)\ln(1-\alpha)] - 1}{\alpha\ln\beta + (1-\alpha)\ln\beta'}$$
$$\le \frac{[\beta^{-1} - \alpha\ln(\beta^{-1})] + [\beta'^{-1} - (1-\alpha)\ln(\beta'^{-1})] - 1}{\alpha\ln\beta + (1-\alpha)\ln\beta'} = 1 - \frac{1 - (\beta^{-1} + \beta'^{-1})}{\alpha\ln\beta + (1-\alpha)\ln\beta'}\,,$$

where we have twice used the Gibbs inequality, a − a ln a ≤ b − a ln b for positive quantities a and b.

(b) In a like manner, but now focusing attention on the denominator rather than the numerator, we have

$$\alpha\ln\beta + (1-\alpha)\ln\beta' = [\beta^{-1} - \alpha\ln(\beta^{-1})] + [\beta'^{-1} - (1-\alpha)\ln(\beta'^{-1})] - (\beta^{-1} + \beta'^{-1})$$
$$\ge [\alpha - \alpha\ln\alpha] + [(1-\alpha) - (1-\alpha)\ln(1-\alpha)] - (\beta^{-1} + \beta'^{-1}) = -\alpha\ln\alpha - (1-\alpha)\ln(1-\alpha) + [1 - (\beta^{-1} + \beta'^{-1})]\,,$$

from which the second upper bound follows.

That the second upper bound in Lemma 4.1 is sharper than the first will follow trivially from Lemma 4.2 below, but to motivate Lemma 4.2, let us directly prove the stated relationship here. For that and later purposes, it is convenient to call the quantity 1 − (β⁻¹ + β′⁻¹) the "gap". Considering the first upper bound minus the second upper bound, we have (in obvious shorthand, n = numerator, d = denominator, g = gap)

$$\frac{d-g}{d} - \frac{n}{n+g} = \frac{(d-g)(n+g) - nd}{d(n+g)} = \frac{dg - g^2 - gn}{d(n+g)} = \frac{g}{d(n+g)}\,[\,d - (n+g)\,].$$
Thus the second upper bound is less than or equal to the first upper bound if and only if d ≥ n + g. This may be verified directly, e.g., as in the proofs of (a) and (b) above. More to the point is the following interesting lemma, which is sharp. For the lemma and later purposes, it is now convenient to introduce and discuss the notations:

g, the gap (information gap, Cantor gap): 1 − (β⁻¹ + β′⁻¹); more generally, 1 − Σᵢ₌₁ⁿ qᵢ, where each qᵢ > 0 and Σᵢ₌₁ⁿ qᵢ = q ≤ 1. The gap represents the second property mentioned in Section 3(b), that of "lost information" in the sense of failure of base-interval matchup for the original map S. Although I will not detail the more general (n-piece) maps S which could be associated with the more general gap formulation just given, the gap 1 − q measures the failure of the slopes of the inverse map S⁻¹, interpreted as probabilities, to be optimal. It also measures the total "Cantor effect" of interval removal caused by the repellers of the invariant set of the map S.

h, the entropy (the H_metric of Ref. [6], the Kolmogorov–Sinai entropy): −α ln α − (1 − α) ln(1 − α); more generally, −Σᵢ₌₁ⁿ pᵢ ln pᵢ, where each pᵢ > 0 and Σᵢ₌₁ⁿ pᵢ = 1. We could also introduce an incomplete entropy
here, where the pᵢ do not sum to one, which would introduce an "entropy gap" similar to the "information gap" just discussed; however, for simplicity we will not do so. The roles of the pᵢ will be similar to those of α and 1 − α; notably, α may be thought of as the "slope" of S⁻¹ with respect to the measure μ_α, dS/dμ_α = α⁻¹, see Ref. [1]. The most important operational role of α in these theories, it seems to me, is to preserve measure additivity as expressed in the distribution function F_α(x),

$$F_\alpha(x) = \begin{cases} \alpha F_\alpha(\beta x), & 0 \le x < \beta^{-1}, \\ \alpha, & \beta^{-1} \le x < 1 - \beta'^{-1}, \\ (1-\alpha)\, F_\alpha(\beta' x + 1 - \beta') + \alpha, & 1 - \beta'^{-1} \le x \le 1, \end{cases}$$

which solves the De Rham equation (see Refs. [1,5,7]),

$$F(x) = \mu_\alpha(0,x] = \mu_\alpha\bigl(S^{-1}(0,x]\bigr) = \mu_\alpha(0,\,\beta^{-1}x] + \mu_\alpha(1-\beta'^{-1},\,\beta'^{-1}x + 1-\beta'^{-1}]$$
$$= F(\beta^{-1}x) + F(\beta'^{-1}x + 1-\beta'^{-1}) - F(1-\beta'^{-1}).$$

It is by this decomposition that the distribution function F_α(x) satisfies the De Rham equation. The placement of the value α in the gap g preserves the additivity of the measure μ_α.

λ, the Lyapunov exponent: α ln β + (1 − α) ln β′; more generally, −Σᵢ₌₁ⁿ pᵢ ln qᵢ, where the pᵢ and qᵢ are as defined above. The Lyapunov exponents are of course related to decay rates and trajectory divergences, and I will return to this point below.

G, the Gibbs discrepancy: [β⁻¹ − α ln(β⁻¹) + β′⁻¹ − (1 − α) ln(β′⁻¹)] − [α − α ln α + (1 − α) − (1 − α) ln(1 − α)]; more generally, Σᵢ₌₁ⁿ [(qᵢ − pᵢ ln qᵢ) − (pᵢ − pᵢ ln pᵢ)], where the pᵢ and qᵢ are as defined above. This is a new notion growing out of the proof of the upper bounds of Lemma 4.1 above. Its key role is apparent in the following lemma, which sharpens those bounds.
Lemma 4.2 (Lyapunov decomposition). λ = g + G + h.

Proof. For the special case, we may write

$$\alpha\ln\beta + (1-\alpha)\ln\beta' = -(\beta^{-1} + \beta'^{-1}) + [\beta^{-1} - \alpha\ln(\beta^{-1}) + \beta'^{-1} - (1-\alpha)\ln(\beta'^{-1})]$$
$$-\,[\alpha - \alpha\ln\alpha + (1-\alpha) - (1-\alpha)\ln(1-\alpha)] + (h + 1).$$

For the general case the argument is the same: in shorthand, in terms of the numerator and denominator of the information dimension D = n/d,

$$d = -\sum_{i=1}^{n} p_i \ln q_i = \Bigl[1 - \sum_{i=1}^{n} q_i\Bigr] + \sum_{i=1}^{n} \bigl[(q_i - p_i\ln q_i) - (p_i - p_i\ln p_i)\bigr] + \Bigl[-\sum_{i=1}^{n} p_i\ln p_i\Bigr] = g + G + h.$$
Theorem 4.3 (Information optimality). D = D_{α,β,β′}, or more generally D = D_{p₁,…,pₙ,q₁,…,qₙ}, may be expressed as

$$D = \frac{h}{g + G + h}\,.$$
Proof. Immediate from Lemma 4.2.

Note that the requirements for D to be optimal (D = 1), even in the many-parameter case, are quite evident from the theorem: D = 1 if and only if G = g = 0. Moreover, the upper bounds of Lemma 4.1 are now immediate from the theorem. The first upper bound means

$$D = \frac{h}{g+G+h} \le \frac{h+G}{g+G+h}\,,$$
and the second upper bound means

$$D = \frac{h}{g+G+h} \le \frac{h}{g+h}\,.$$

The error of the first upper bound is caused by the presence of the Gibbs discrepancy in the numerator. The error of the second upper bound is caused by the absence of the Gibbs discrepancy in the denominator. The right way to think of departure from optimality, from this information dimension view, is

$$D = \frac{1}{1 + G/h + g/h}\,.$$
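The decomposition of Lemma 4.2, the expression of Theorem 4.3, and the upper bounds of Lemma 4.1 can all be spot-checked numerically in a few lines (an illustrative sketch; the parameter values are my own choices):

```python
import math

def decomposition(p, q):
    """Given branch probabilities p_i (summing to 1) and inverse slopes q_i
    (summing to q <= 1), return (lam, g, G, h) with lam = g + G + h."""
    g = 1.0 - sum(q)                                        # the gap
    h = -sum(pi * math.log(pi) for pi in p)                 # KS entropy
    G = sum((qi - pi * math.log(qi)) - (pi - pi * math.log(pi))
            for pi, qi in zip(p, q))                        # Gibbs discrepancy
    lam = -sum(pi * math.log(qi) for pi, qi in zip(p, q))   # Lyapunov exponent
    return lam, g, G, h

# alpha = 0.6425 with beta = 2, beta' = 5 (the example parameters of Ref. [1])
lam, g, G, h = decomposition([0.6425, 0.3575], [0.5, 0.2])
D = h / (g + G + h)                                  # Theorem 4.3
assert abs(lam - (g + G + h)) < 1e-12                # Lemma 4.2
assert abs((1.0 - D) * lam - (g + G)) < 1e-12        # decay rate = g + G (Section 5)
assert D <= h / (h + g) <= 1.0 - g / lam             # Lemma 4.1, both bounds ordered
print(round(D, 4))                                   # ≈ 0.6387
```

The same few lines also anticipate Section 5: the quantity (1 − D)λ coincides exactly with g + G.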
We remark that clearly versions of Lemmas 4.1, 4.2, and Theorem 4.3 also hold with the finite sums replaced by convergent infinite sums or integrals, so that a countable or uncountable infinity of parameters can be accommodated. Other upper bounds are obtainable from D as expressed in the theorem (one will occur for us in the next section). Lower bounds (poor) may be obtained, for example, by deleting specific numerator terms.

Next let us turn to the suboptimal case: β⁻¹ + β′⁻¹ = q < 1. What should α be, to maximize D_{α,β,β′}? This is the one-parameter view of Ref. [1]; see also Refs. [5,7]. The fact that D_α, for fixed β, β′, has a unique maximum in α is illustrated there (see Ref. [1], Fig. 2) by two D_α curve plots, one optimal, one suboptimal. With an eye toward further generalization, let us first derive here the exact α value at which D_α is maximized. For that purpose, let x = α, y = β⁻¹, z = β′⁻¹, a = ln y, b = ln z, c = ln(y/z), and let us also use, as above, the shorthand n and d for the numerator and denominator of D. Then, writing out all terms, the numerator of dD/dx is

$$d\,\frac{dn}{dx} - n\,\frac{dd}{dx} = d\ln\Bigl(\frac{1-x}{x}\Bigr) - n\ln\Bigl(\frac{z}{y}\Bigr)$$
$$= ax\ln x - ax\ln(1-x) + b\ln x - bx\ln x - b\ln(1-x) + bx\ln(1-x) - cx\ln x - c\ln(1-x) + cx\ln(1-x)$$
$$= (a-b-c)\,x\ln x + (-a+b+c)\,x\ln(1-x) + b\ln x + (-b-c)\ln(1-x) = b\ln x - a\ln(1-x),$$

where we have made use of the fortuitous cancellation c = a − b. Thus dD/dx = 0 if and only if x satisfies the (generally transcendental) equation

$$x^{\ln\beta'} = (1-x)^{\ln\beta}.$$

As the exponents are both positive, the one-parameter optimal solution x = α is exactly the intersection of two power curves. The value of D has the interesting general expression

$$D = \frac{\ln[\,\alpha^\alpha (1-\alpha)^{1-\alpha}\,]}{(a-b)\alpha + b} \,,$$

which at the maximizing α reduces, via the optimality condition just derived, to −ln α/ln β = −ln(1 − α)/ln β′. For the example plotted in Ref. [1], Fig. 2, where β = 2, β′ = 5, the equation for α becomes (approximately)

$$x^{2.3219287} + x - 1 = 0,$$

from which x ≈ 0.6425 and D ≈ 0.63873. The second derivative is also easily found; at the critical point,

$$\frac{d^2D}{dx^2} = \frac{1}{d^2}\ln\bigl(z^{1/x}\, y^{1/(1-x)}\bigr) = -\frac{1}{d^2}\ln\bigl(\beta'^{1/x}\, \beta^{1/(1-x)}\bigr) < 0,$$
which proves the strict concavity downward at the extremum for all the D(α) curves. Now, to generalize the above considerations to the three-parameter view, we may employ Lagrange multipliers. We wish to find the extrema of w = D + Λ(y + z − q). We form the system

$$w_x = \frac{1}{d^2}\,[\,b\ln x - a\ln(1-x)\,] = 0,$$
$$w_y = \frac{n}{d^2}\,\frac{x}{y} + \Lambda = 0, \qquad w_z = \frac{n}{d^2}\,\frac{1-x}{z} + \Lambda = 0,$$
$$w_\Lambda = y + z - q = 0.$$
From the first equation we have x^{ln β′} = (1 − x)^{ln β} again, as above; from the second and third equations we have z/y = (1 − x)/x; the fourth equation is just the gap constraint equation. The first equation is actually oversimplified; before cancellations it may be written as

$$d\ln\Bigl(\frac{1-x}{x}\Bigr) - n\ln\Bigl(\frac{z}{y}\Bigr) = 0,$$

which now tells us that for an extremum we must necessarily have

$$\Bigl(\frac{1-x}{x}\Bigr)^{d} \Bigl(\frac{y}{z}\Bigr)^{n} = 1.$$

There are now two cases. First, when d = n, any x will suffice. But that is the optimal case D = 1, and we know from Theorem 4.3 that then the gap g = 0 (so β⁻¹ + β′⁻¹ = 1 necessarily) and the Gibbs discrepancy G = 0 (so α = β⁻¹ and 1 − α = β′⁻¹ necessarily). Second, when d ≠ n, necessarily 1 − x = x, so α = 1/2, and z = y, so β = β′ = 2/q. The three-parameter relative maximum is thus

$$D = \frac{\ln 2}{\ln 2 - \ln q}\,.$$
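Both the one-parameter optimum and the constrained three-parameter maximum can be reproduced numerically (an illustrative sketch of mine; the bisection tolerance and grid resolution are arbitrary choices):

```python
import math

def info_dim(alpha, beta, betap):
    h = -alpha * math.log(alpha) - (1.0 - alpha) * math.log(1.0 - alpha)
    return h / (alpha * math.log(beta) + (1.0 - alpha) * math.log(betap))

def optimal_alpha(beta, betap, tol=1e-12):
    """Bisect the optimality condition x^{ln beta'} = (1 - x)^{ln beta}.
    f below is strictly increasing on (0, 1), so the root is unique."""
    f = lambda x: math.log(betap) * math.log(x) - math.log(beta) * math.log(1.0 - x)
    lo, hi = 1e-9, 1.0 - 1e-9
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

# one-parameter view: beta = 2, beta' = 5 fixed
a_star = optimal_alpha(2.0, 5.0)
print(round(info_dim(a_star, 2.0, 5.0), 5))          # ≈ 0.63873

# three-parameter view on the constraint 1/beta + 1/beta' = q = 0.7:
# brute-force grid search over (alpha, y = 1/beta)
q = 0.7
best = max(
    (info_dim(a, 1.0 / y, 1.0 / (q - y)), a, y)
    for a in (i / 200.0 for i in range(1, 200))
    for y in (q * j / 200.0 for j in range(1, 200))
)
D, a, y = best
print(round(D, 5), round(a, 2), round(1.0 / y, 3))   # ≈ 0.66025, 0.5, 2.857
```

The first printed value matches the quoted D ≈ 0.63873 for β = 2, β′ = 5; the grid search recovers the relative maximum D = ln 2/(ln 2 − ln q) ≈ 0.66025 at α = 1/2, β = β′ = 2/q.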
The n + 1 parameter case can be similarly treated (not here), in which ln 2 will change to ln n. For the example of Ref. [1], we found above that α ≈ 0.6425 maximized D for β = 2 and β′ = 5, and then D ≈ 0.63873. The above considerations show that for β⁻¹ + β′⁻¹ = q = 0.7 fixed, the maximal D = 0.66025 is achieved at α = 0.5, β = β′ = 2.857143. In principle then, Lagrange multipliers will enable exact maximization of the information dimension in suboptimal cases for n parameters. Second derivative tests should also be employed to guarantee downward concavity at the local extrema, and of course the behavior at the boundaries of the constraint regions should also be checked, as insurance.

5. Decay rate, Lyapunov exponent, and Gibbs discrepancy
In Ref. [6] the relationship (note that α here is not our previous α = p₁, but we retain the notation of Ref. [6] for convenience)

$$\alpha = (1 - D)\lambda$$

was derived for special classes of models, and it was conjectured that it holds much more generally. Here α is the decay rate (i.e., relaxation constant) for long-time chaotic transients: e^{−αn} measures the probability that a uniformly distributed x₀ remains in the [0,1] interval. As discussed in Ref. [6], this decay constant α is independent of the initial distribution, provided that it is smooth. See Ref. [6] for more discussion and background. In particular, from this relation one has [6] the expression for the Lyapunov exponent λ = h + α. This may be heuristically interpreted [6,8] as follows: from a flux λ of incoming digits (some significant, some not), the fraction h/λ leads to unpredictable motion on the repeller, and the fraction α/λ pushes the trajectories away from the repeller. In these connections, the following may be interesting to compare.
5.1
(Decay
components).
CY= g -t- G.
PI-ou$ Lemma 4.2 and the relationship
above derived in Ref. [ 61.
Corollary 5.1 suggests that, in the heuristic interpretation of λ just mentioned, one may go further. The fraction α/λ in fact decomposes to

$$\frac{\alpha}{\lambda} = \frac{g}{\lambda} + \frac{G}{\lambda}\,,$$

so that the pushing away of the trajectories from the repeller is caused by two agents: the information loss, i.e., Cantor extraction loss, due to the failure of the β, β′ (more generally the qᵢ) to map the whole interval; and the Gibbs loss, due to the relative incompatibility of the pᵢ with the qᵢ.

A second consequence of Corollary 5.1 should be mentioned. In Ref. [6], when discussing higher-dimensional systems in the context of the Kaplan–Yorke [9] overall information dimension estimate D = Σᵢ Dᵢ, it is conjectured that the total decay rate will be

$$\alpha = \sum_{i}{}' \lambda_i (1 - D_i),$$
where the sum is over unstable directions only (i.e., those with positive Lyapunov exponents λᵢ and suboptimal information dimensions Dᵢ < 1). I might interject that this interpretation is nice because it is analogous to the stable, center, unstable manifold picture current in fluid dynamics research. We may write this expression as

$$\alpha = \sum_{i}{}' \alpha_i$$

and take each αᵢ = gᵢ + Gᵢ as in Corollary 5.1, and then note that, along the lines of the discussion following Theorem 4.3 above, the expression

$$\frac{\alpha}{\lambda} = \frac{g + G}{h + g + G}$$

can now be used to provide us with another upper bound for the information dimension D. That is, applied to each ith unstable direction, we see that the conjecture of Ref. [6] expressed as Dᵢ = 1 − αᵢ/λᵢ now has a precise meaning in terms of the Gibbs discrepancy and gap information loss along unstable directions.

A related third observation is the following. In the discussion at the end of Ref. [6], it is pointed out that "In practical applications, it will in general be rather difficult to estimate the codimensions 1 − Dᵢ. Thus the main application of Eq. (3.4) might consist in providing lower bounds for them: 1 − Dᵢ ≥ α/λᵢ (5.1) for all directions with positive λ". Without explicitly exploring this point here, we note that the decomposition of the decay rate α in terms of its gap and Gibbs discrepancies may serve to refine that statement and its implementations. Stated another way, our methods for upper bounds for Dᵢ provide lower bounds for the codimensions 1 − Dᵢ.

Finally, in Ref. [6] the conjecture of Kadanoff and Tang [10] that
$$\alpha = -\lim_{n\to\infty} \frac{1}{n} \ln \sum_{x\in F^{(n)}} \bigl|\det\bigl(1 - Df^{(n)}(x)\bigr)\bigr|^{-1}$$

is compared to the conjecture [6] that

$$\alpha = -\lim_{n\to\infty} \frac{1}{n} \ln \sum_{x\in F^{(n)}} \exp\Bigl(-\sum_{i}{}' \lambda_i(x)\Bigr),$$

where Σ′ denotes summation over positive λᵢ(x) only, as in our discussion above. In these two expressions conjectured for α, F⁽ⁿ⁾ denotes all fixed points of the iterated interval map f⁽ⁿ⁾, and Df⁽ⁿ⁾(x) is the derivative matrix of f⁽ⁿ⁾ at x. Our interpretation (Corollary 5.1) of the decay rate α in terms of the map gap loss g and Gibbs discrepancy G provides a third expression for α which involves no Jacobian derivatives and no assumptions of total hyperbolicity.
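The decay rate itself is easy to observe for the present maps. For a uniform initial ensemble each step removes exactly the gap fraction, so the survival probability decays like (β⁻¹ + β′⁻¹)ⁿ; and for the measure with pᵢ = qᵢ/q one can check that g + G = −ln(β⁻¹ + β′⁻¹), consistent with Corollary 5.1. A Monte Carlo sketch (parameters and sample sizes are my own choices):

```python
import math
import random

beta, betap = 2.0, 5.0           # slopes; gap = 1 - (1/2 + 1/5) = 0.3

def S(x):
    """One step of the open map; returns None when x falls in the gap."""
    if x <= 1.0 / beta:
        return beta * x
    if x >= 1.0 - 1.0 / betap:
        return betap * x + (1.0 - betap)
    return None                  # escaped through the gap

random.seed(0)
n_steps, n_points = 12, 200_000
survivors = 0
for _ in range(n_points):
    x = random.random()
    for _ in range(n_steps):
        x = S(x)
        if x is None:
            break
    else:
        survivors += 1

rate = -math.log(survivors / n_points) / n_steps
print(rate)                      # close to -ln(0.7) ≈ 0.357
```

The observed rate approximates −ln(β⁻¹ + β′⁻¹), the escape rate of a Lebesgue-uniform ensemble; no Jacobians or periodic-orbit sums are needed.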
6. Revisiting regularity and physicality
The connection of regularity to the present context, as stated in Section 2, was an attempt to put more meaning into the proposal of Ref. [1] to use maximization of the information dimension as the criterion for optimal invariant measure selection. In short, I would prefer the term "most regular" to the terms "most natural" or "richest" used in Ref. [1]. As presented in Refs. [3,11,4], the regularity ansatz is that nature generally prefers to increase regularity in evolutions, when possible. Of course, we need more physically-taken dynamical systems than those maps usually considered now in these theories, e.g., as in Refs. [1,2,5-7], to better test this regularity tendency of physical evolutions. Here I want to restrict my discussion to just two quick points about regularity and physicality.

As discussed in Refs. [1,6], in the suboptimal case of D < 1, the maximum value of D(α) corresponds to a Cantor-like set with fractal dimension D satisfying (β^D)⁻¹ + (β′^D)⁻¹ = 1. In other words, the point of view [1,6] is to scale up the β and β′ to β^D and β′^D so that there is no gap g, and then one-parameter optimize by taking α = (β^D)⁻¹ and 1 − α = (β′^D)⁻¹. Then "the natural invariant measure is the one whose information dimension is equal to the Hausdorff dimension of the maximal invariant set" [1]. My first point is that "natural" here really means rescaling the β and β′ to remove first the gap discrepancy, and then the Gibbs discrepancy, and that these may be viewed as "regularizing" actions.

My second point is that the increased decay need not be thought of as physicality or as irreversibility. Thus in Refs. [7,12,13], by use of the intertwining relation between differentiation and the Frobenius–Perron operator of maps similar to those discussed here and in Refs.
[1,7,6] and elsewhere, the slowest decaying left eigenstates are studied, and it is found that when the initial probability densities are required to be smoother, e.g., to possess m derivatives, the essential spectral radius decreases as e^{−mλ}, where λ is the Lyapunov exponent. My points are simple and two-fold, and are offered for discussion rather than strong difference with those of Refs. [7,12,13]. First, the Frobenius–Perron operators intertwining with differentiation for piecewise linear maps S are really just a scale change by the slopes of the maps S. Thus imposing degrees of differentiability on the domains of the operators, i.e., on the initial distributions to be iteratively mapped, although a regularity, need not be a physically interpretable condition. The decay rate of an eigenmode, although found by using this intertwining differentiation, really still only reflects the iterated slopes of the map S. Whether or not these eigenmodes decay sufficiently fast to be physical depends on, and only on, the specific physical situation being treated. Second, repeated iterations U⁽ⁿ⁾ of the Frobenius–Perron operators are not a time evolution in the usual operator sense, i.e., not an exponential operator evolution e^{At} for some infinitesimal generator A. Rather, U⁽ⁿ⁾ρ is an n-fold convolution of the initial density ρ. Each iteration by U involves [12,13] a discrete Fourier transform. As is well known, higher differentiability of a function transcribes into faster falloff of its Fourier transform. This is a Fourier property and not automatically a physical one.
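The final Fourier remark can be illustrated directly: a periodic profile with a jump has coefficients falling off like 1/k, while a merely continuous (kinked) profile falls off like 1/k². A small sketch (the particular functions and sample count are my own illustrative choices):

```python
import cmath
import math

def fourier_coeff(f, k, n=4096):
    """k-th Fourier coefficient of a 1-periodic f, via an n-point DFT sum."""
    return sum(f(j / n) * cmath.exp(-2j * math.pi * k * j / n) for j in range(n)) / n

saw = lambda x: x                          # jump at the period boundary: |c_k| ~ 1/k
tri = lambda x: 1.0 - abs(2.0 * x - 1.0)   # continuous with kinks: |c_k| ~ 1/k^2

# compare odd harmonics k = 9 and k = 27 (the triangle's even harmonics vanish)
c_saw = [abs(fourier_coeff(saw, k)) for k in (9, 27)]
c_tri = [abs(fourier_coeff(tri, k)) for k in (9, 27)]
print(round(c_saw[0] / c_saw[1], 1), round(c_tri[0] / c_tri[1], 1))  # ≈ 3.0 and 9.0
```

Tripling k divides the sawtooth coefficients by 3 (1/k decay) but the triangle coefficients by 9 (1/k² decay): one extra degree of continuity, one extra power of falloff, exactly the Fourier property invoked above.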
7. Revisiting De Rham

Finally, I would like to close this commentary with an interesting historical reflection. Most papers, e.g., Refs. [1,5,7,12,13], refer to De Rham's paper [14] when using his mapping theory in these considerations. De Rham's original motivation [15] was as follows. It seems De Rham was watching someone make a round broom handle, starting with a beam of square cross-section. The corners would be bevelled successively. What was the limit of these iterations? De Rham's answer, as we know now, is a continuous but generally nondifferentiable curve. In Figs. 1 and 2 I take the liberty of showing the figures of Ref. [16]. In Ref. [16] De Rham dedicates his presentation to M. André Ammann, a student at l'École des Arts et Métiers in Geneva, who asked for the "equation" of the limiting curve.
Fig. 1. Trisection of a polygon. (From Ref. [16].)

Fig. 2. The polygons P₀ and P₁ (solid line), Q₀ and Q₁ (dotted line), defining the four triangles of order 0 and the eight triangles of order 1. (From Ref. [16].)
Acknowledgement

The author thanks Professor I. Antoniou, who showed him their work [1] in Brussels in January 1994; Professors Hasegawa and Driebe, who made available their work in Nice in July 1994; and Professor Lester Dubins, who in a conversation in Beijing in May 1994 put the author on the trail of De Rham's original motivation.
References

[1] S. Tasaki, Z. Suchanecki and I. Antoniou, Phys. Lett. A 179 (1993) 103.
[2] J.P. Eckmann and D. Ruelle, Rev. Mod. Phys. 57 (1985) 617.
[3] K. Gustafson, Partial differential equations, 3rd Ed. (Kaigai, Tokyo, 1992) [in Japanese]; English translation (International Services, Calcutta, India, 1993).
[4] K. Gustafson, Math. Comput. Model., to be published.
[5] S. Tasaki, I. Antoniou and Z. Suchanecki, Chaos Solitons Fractals 4 (1994) 227.
[6] H. Kantz and P. Grassberger, Physica D 17 (1985) 75; T. Bohr and D. Rand, Physica D 25 (1987) 387.
[7] S. Tasaki, I. Antoniou and Z. Suchanecki, Phys. Lett. A 179 (1993) 97.
[8] R. Shaw, Z. Naturforsch. 36a (1981) 80; D. Farmer, Z. Naturforsch. 37a (1982) 1304.
[9] P. Frederickson, J. Kaplan and J. Yorke, J. Diff. Equ. 49 (1983) 185.
[10] L. Kadanoff and C. Tang, Proc. Nat. Acad. Sci. 81 (1984) 1276.
[11] K. Gustafson, Partial differential equations, 1st Ed. (Wiley, New York, 1980); 2nd Ed. (Wiley, New York, 1987).
[12] H. Hasegawa and D. Driebe, Phys. Lett. A 176 (1993) 193.
[13] H. Hasegawa and D. Driebe, Phys. Rev. E 50 (1994) 1781.
[14] G. De Rham, Rend. Sem. Mat. Torino 16 (1957) 101.
[15] L. Dubins, private communication, June 1994.
[16] G. De Rham, Elem. Math. 2 (1947) 73.