Learning in recurrent neural networks




102 Abstracts

the basis of the mathematics of computability theory, does help to sever the situationally determined link between individual rationality and predictability. Such a result resurrects, analytically, the enlightened individualism of Smithian economics and relegates the role of predictability to group and social phenomena. This is fully compatible with the rich results of probability theory (even when computationally constrained). It is also a result and a methodology that libertarians should welcome.


Learning in Recurrent Neural Networks

Halbert White, Department of Economics and Institute for Neural Computation, University of California, San Diego, CA, USA

This presentation focuses on a class of artificial neural networks known as recurrent neural networks. These networks possess a rich internal dynamic structure and have been used to study linguistic structure, recognize speech, control robot movement, and forecast time series. Like humans, such networks learn from experience by a process of trial and error. The learning methods in current use perform fairly well, but theory describing the convergence of recurrent network learning algorithms has previously been missing. In this talk I present some recent results that provide the missing theory, demonstrating that appropriate learning rules for recurrent neural networks converge to network weights with certain optimality properties. Interestingly, the theory underlying these convergence results has applications to the study of economic agents learning about a system under their partial or complete control, to the related engineering problem of a nonlinear dynamic controller learning to adaptively control an unknown plant, and also to statistical estimation of time-series models containing dynamic latent variables, such as ARMA and bilinear models.
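To make the kind of learning rule discussed above concrete, the sketch below trains a one-unit Elman-style recurrent network online by gradient descent on a next-step forecasting task, using a truncated gradient that treats the previous hidden state as a constant. This is a minimal illustrative sketch only, not the specific algorithm whose convergence is analyzed in the talk; the network size, learning rate, initial weights, and the sine-wave task are all assumptions made for the example.

```python
import math

def step(params, h_prev, x):
    """One forward step of a scalar Elman-style recurrent unit."""
    w_x, w_h, w_o, b = params
    h = math.tanh(w_x * x + w_h * h_prev + b)  # recurrent hidden state
    y = w_o * h                                # linear output (forecast)
    return h, y

def train(T=3000, lr=0.05):
    """Online truncated-gradient learning: forecast x(t+1) from x(t).

    The gradient is truncated: h_prev is treated as a constant, so the
    update ignores how earlier weight changes propagated into the state
    (full recurrent gradients would use RTRL or BPTT instead).
    """
    params = [0.1, 0.1, 0.1, 0.0]  # w_x, w_h, w_o, b (arbitrary small init)
    h = 0.0
    errors = []
    for t in range(T):
        x = math.sin(0.3 * t)            # illustrative input series
        target = math.sin(0.3 * (t + 1))  # next-step forecasting target
        h_prev = h
        h, y = step(params, h, x)
        e = y - target
        errors.append(e * e)
        w_x, w_h, w_o, b = params
        g = e * w_o * (1.0 - h * h)  # error signal through tanh'
        params = [w_x - lr * g * x,
                  w_h - lr * g * h_prev,
                  w_o - lr * e * h,
                  b - lr * g]
    return params, errors
```

Truncating the gradient keeps each update O(1) in the trial-and-error spirit of the abstract; the squared forecast errors recorded in `errors` shrink as the weights adapt, which is the behavior the convergence theory makes precise.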