Neural Networks, Vol. 10, No. 1, pp. 111-123, 1997
Copyright © 1996 Elsevier Science Ltd. All rights reserved. Printed in Great Britain.
0893-6080/97 $17.00 + .00
PII: S0893-6080(96)000767
CONTRIBUTED ARTICLE
Pattern Categorization and Generalization with a Virtual Neuromolecular Architecture

JONG-CHEN CHEN1 AND MICHAEL CONRAD2

1National Yunlin Institute of Technology and 2Wayne State University

(Received 6 December 1994; accepted 23 May 1996)
Abstract—A multilevel neuromolecular computing architecture has been developed that provides a rich platform for evolutionary learning. The architecture comprises a network of neuron-like modules with internal dynamics modeled by cellular automata. The dynamics are motivated by the hypothesis that molecular processes operative in real neurons (in particular processes connected with second messenger signals and cytoskeleton-membrane interactions) subserve a signal integrating function. The objective is to create a repertoire of special purpose dynamic pattern processors through an evolutionary search algorithm and then to use memory manipulation algorithms to select combinations of processors from the repertoire that are capable of performing coherent pattern recognition/neurocontrol tasks. The system consists of two layers of cytoskeletally controlled (enzymatic) neurons and two layers of memory access neurons (called reference neurons) divided into a collection of functionally comparable subnets. Evolutionary learning can occur at the intraneuronal level through variations in the cytoskeletal structures responsible for the integration of signals in space and time, through variations in the location of elements that represent readin or readout proteins, and through variations in the connectivity of the neurons. The memory manipulation algorithms that orchestrate the repertoire of neuronal processors also use evolutionary search procedures. The network is capable of performing complicated pattern categorization tasks and of doing so in a manner that balances specificity and generalization. Copyright © 1996 Elsevier Science Ltd.
Keywords—Neuromolecular computing, Evolutionary learning, Reference neurons, Pattern recognition, Generalization, Intraneuronal dynamics, Cytoskeleton, Cellular automata.

Acknowledgements: This material is based on work supported by the National Science Foundation under Grant No. ECS-9409780. The authors are indebted to V. L. Dunin-Barkovskii, K. Kirby, and H. Szu for helpful comments on the manuscript and to members of the Biocomputing Group at Wayne State University for stimulating discussion. Jong-Chen Chen is at the Department of Management Information Systems, National Yunlin Institute of Technology, Touliu, Taiwan. E-mail: [email protected]. Requests for reprints should be sent to Michael Conrad, Department of Computer Science, Wayne State University, Detroit, MI 48202, USA; e-mail: [email protected].
1. INTRODUCTION
Substantial evidence now exists that the input–output behaviour of neurons can be rather complicated and that a variety of internal chemical and molecular mechanisms contribute to this complication (Drummond, 1983; Dudai, 1987; Greengard, 1978; Hameroff, 1987; Liberman, Minina, & Golubtsov, 1975; Liberman, Minina, Shklovsky-Kordy, & Conrad, 1982a, 1982b; Liberman, Minina, Mjakotina, Shklovsky-Kordy, & Conrad, 1985; Matsumoto, 1984; Matsumoto & Sakai, 1979; Triestman & Levitan, 1976). The neuron itself could conceivably be a sophisticated signal integration system, capable of transforming spatiotemporal input patterns impinging on it into temporal patterns of impulse activity. We have developed an artificial neuromolecular architecture (to be referred to as the ANM architecture) with multiple levels of processing that is motivated by this hypothesis. This architecture is purposely elaborate, the objective being to create a rich platform for evolutionary learning. The motivating hypothesis is that the cytoskeleton, modelled as a cellular automaton, makes a significant contribution to signal integration within the neuron (Conrad, 1990; Conrad et al., 1989; Hameroff, Dayhoff, Lahoz-Beltra, Samsonovich, & Rasmussen, 1992; Koruga, 1990; Liberman et al., 1985; Rasmussen, Karampurwala, Vaidyanath, Jensen, & Hameroff, 1990; Werbos, 1992).
Signals impinging on the postsynaptic membrane of these neurons are converted to cytoskeletal signals which are combined in space and time in a manner that depends on the structure of the cytoskeletal network. The neurons fire when the convergence of signals is sufficient to activate readout enzymes that control ion channel activity and thereby trigger firing activity. The (abstract) neurons that we use in our model may accordingly be thought of as cytoskeletally controlled enzymatic neurons. Evolutionary algorithms are used to generate a repertoire of such cytoskeletally controlled neurons with different pattern processing capabilities. Memory manipulation mechanisms are then used to select neurons from this repertoire and to group (or orchestrate) them in combinations suitable for more complicated pattern processing tasks. The architecture is thus open to many levels of evolutionary learning.

At present the architecture comprises two levels of such pattern processing neurons, with one level having initially narrow receptive fields and the other having initially broad receptive fields. These are orchestrated by two levels of memory access neurons, called reference neurons. Evolution can occur at five levels: at the level of readout enzymes that respond to cytoskeletal signals, at the level of cytoskeletal structure, at the level of modulating proteins associated with the cytoskeleton, at the level of connections between sensory neurons and enzymatic neurons, and at the level of reference neurons that orchestrate the repertoire of enzymatic neurons.

The ANM architecture, despite its physiological and evolutionary motivations, has been primarily designed using operational effectiveness as a guide. Thus the cellular automaton dynamics within the neuron represents the signal integration hypothesis in an extremely abstract manner. Many molecular, biochemical, and structural processes within the neuron could contribute to signal integration. The biophysical mechanisms and their functional significance are largely open questions at the present time. The ANM architecture should not be viewed as an empirically based model, but rather as an effort to understand how intraneuronal signal integration could contribute to brain function by using a constructive method of building a system that addresses pattern recognition/neurocontrol problems in a technologically useful manner.

Evolutionary techniques have been utilized by a number of investigators in recent years for training connectionist neural nets (e.g., Reeke & Edelman, 1988; Smalz & Conrad, 1991, 1994; Spiessens & Torreele, 1992; Whitley & Hanson, 1989), and have also been used for training neural nets in which the neurons have internal dynamics (Conrad, Kampfner, & Kirby, 1988; Conrad et al., 1989; Kampfner & Conrad, 1983; Kirby & Conrad, 1986).
As noted above, the features of the ANM architecture are designed to allow evolutionary operators to act on networks of neurons to create a repertoire of different neuronal types and then to select different groupings of neurons from the repertoire for the performance of different perception–action tasks. In nature the repertoire of neuronal types and its orchestration would presumably be largely evolved on a phylogenetic time scale and then tuned on an ontogenetic time scale. The ANM architecture can be run in a manner that segregates the different levels of evolution in time, and can be interpreted abstractly as representing processes in a population or in an individual, or as a combination of these.

2. DESCRIPTION OF THE ARCHITECTURE

2.1. Overall Structure
The ANM architecture comprises four types of neuronal components: reference neurons, cytoskeletally controlled enzymatic neurons, receptor neurons, and effector neurons (Figure 1). The enzymatic neurons are divided into eight subnets, initially with two layers in each subnet. Each layer consists of 16 neurons. The neurons of the first layer will be called low level, since they respond to a narrow receptive field. The neurons of the second layer are referred to as high level, since they respond to a wide receptive field. If the connectivity of the neurons is allowed to evolve, as in some of the experiments to be reported here, the sharpness of the distinction between wide and narrow fields may of course be lost.

Reference neurons control (or provide) access to enzymatic neurons or other reference neurons (Figure 2).
FIGURE 1. Overall flow of information in the ANM architecture (from Chen & Conrad, 1994b). Information flows from the environment to receptor neurons. The outputs of the latter are integrated by enzymatic neurons that utilize cytoskeletal dynamics to combine signals in space and time. These in turn control effector neurons. Reference neurons select subsets of enzymatic neurons for such control.
FIGURE 2. Schematic illustration of the reference neuron scheme. The enzymatic neurons (denoted by Ei and controlled by cytoskeletal dynamics in the ANM architecture) are responsible for primary processing. One or a few reference neurons (denoted by Ri) fire at any given time. These "load" all the enzymatic neurons that fire at about the same time. "Loading" means that the synaptic contacts between the reference neurons and the enzymatic neurons are facilitated. Later firing of the reference neuron activates (or rekindles) all the primaries that it loaded.
The terms "load" and "rekindle" can be used to describe the control process. Reference neurons can load (or modify) neurons that they contact and do so if they fire at the same time. This just means that the synaptic connection between the reference neuron and the neuron it contacts is strengthened, or facilitated. Later firing of the reference neuron refires (or rekindles) all the neurons it has loaded. The scheme is basically of the Hebbian type (Hebb, 1949), but with hierarchical controls (Conrad, 1974, 1976a, 1976b, 1977; cf. also Minsky, 1980 and Teyler & DiScenna, 1986 for conceptually related models). In the present implementation synaptic connections are turned from completely off to completely on whenever loading occurs; connections that are turned on remain on while the network processes input patterns, but in general change in the variation-selection phase of learning.

Figure 3 depicts the connections between the reference neurons and enzymatic neurons for two of the subnets. Two layers of reference neurons, corresponding to high and low levels of control, are required to efficiently combine into a single system the variation-selection process that creates the different neuronal types and the variation-selection process that orchestrates these types into coherent functional groups. Each high level reference neuron controls a collection of low level reference neurons. Each of the low level reference neurons in turn controls a bundle of enzymatic neurons that are comparable in the sense that they constitute a single population with respect to the variation-selection operations that act at the neuronal level (as distinct from the orchestration process).
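To make the load/rekindle bookkeeping concrete, the sketch below gives one possible reading of this scheme. It is our own illustration rather than the authors' implementation; the class and method names (ReferenceNeuron, load, rekindle) are hypothetical, and synapses are treated, as in the text, as contacts that are simply switched from off to on when loading occurs.

```python
class ReferenceNeuron:
    """Minimal sketch of a reference neuron with all-or-none (Hebbian-style) synapses."""

    def __init__(self, name):
        self.name = name
        self.loaded = set()  # neurons whose synapse from this reference neuron is "on"

    def load(self, coactive_neurons):
        # Loading: facilitate contacts with every neuron firing at about the same time.
        self.loaded |= set(coactive_neurons)

    def rekindle(self):
        # Later firing of the reference neuron refires everything it loaded.
        for neuron in self.loaded:
            neuron.fire()


class EnzymaticNeuron:
    def __init__(self, label):
        self.label = label

    def fire(self):
        print(f"enzymatic neuron {self.label} fires")


if __name__ == "__main__":
    e1, e2, e3 = (EnzymaticNeuron(k) for k in ("E1", "E2", "E3"))
    r1 = ReferenceNeuron("R1")
    r1.load([e1, e3])   # E1 and E3 happened to fire while R1 fired
    r1.rekindle()       # reactivates exactly that grouping
```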
FIGURE 3. Connections between the reference and cytoskeletally controlled enzymatic neuron layers (reprinted from Chen and Conrad, 1994b). The low level reference neuron LR11 contacts neurons at position (1,1) of each subnet and the low level reference neuron LR21 contacts neurons at position (2,1) of each subnet. The high level reference neuron HRa contacts two low level reference neurons: LR11 and LR21. When HRa fires, it will cause LR11 and LR21 to fire. This in turn causes all enzymatic neurons in positions (1,1) and (2,1) of each subnet to fire. The current implementation comprises eight redundant subnets, each consisting of 32 enzymatic neurons. To simplify the figure, only two of these redundant subnets are shown, each consisting of eight neurons.
Thus when a high level reference neuron fires it will cause all low level reference neurons loaded by it to fire. This in turn will fire a particular combination of bundles of enzymatic neurons (i.e., the same subset of enzymatic neurons in each subnet is fired). The activation of each subnet in the sequence could be controlled by a third set of reference neurons, but this is not explicitly included in the implementation. Signals from this set of reference neurons would inhibit these subnets in a manner that allows only neurons of one subnet to be active at any instant of time (i.e., at most one neuron in each bundle is allowed to fire at one time).

The reference neurons in the present study are strictly used for orchestrating different combinations of enzymatic neurons through a variation and selection process. In the full reference neuron scheme enzymatic neurons would also load reference neurons that fire at about the same time. Memory manipulation is achieved by using reference neurons to reconstruct patterns of primary neuron firing, or new combinations of patterns, and then reloading these under the control of a new reference neuron. Content-ordered, time-ordered, and associative memories are easily implemented (Conrad et al., 1989; Jeffries & Conrad, 1994; Trenary & Conrad, 1987).
Memory manipulation processes of this type are obviously of enormous physiological importance. We have disallowed such processes in the present study, however, since their inclusion would make it more difficult to ascertain how far it is possible to go with purely evolutionary learning algorithms.

2.2. Cytoskeletal Model

As noted earlier, the enzymatic neuron model used in the ANM architecture abstracts features of the cytoskeleton (cf. Kirkpatrick, 1977; Matsumoto, Tsukita, & Arai, 1989; Matus & Riederer, 1986; Selden & Pollard, 1983; Vallee et al., 1984). The neurons are simulated with two-dimensional cellular automata (Figure 4). States of the grid units represent three types of components, corresponding to building blocks of microtubules, microfilaments, or neurofilaments (components are denoted by C1, C2, and C3 in the figure). A readin enzyme (or receptor–enzyme complex) transduces signals impinging on the postsynaptic membrane of the model neuron to cytoskeletal signals. Thus, when activated, the readin enzyme will activate a cytoskeletal component sharing the same site, which in turn activates the next immediate neighboring components of the same type, and so on. After activation a component will enter a refractory state for a certain amount of time. This ensures unidirectional propagation.
FIGURE 4. Cytoskeletal model. Each grid location has at most one of three types of component, labelled C1, C2, and C3. These correspond in a somewhat arbitrary manner to components of the three types of fibres in the cytoskeleton (microtubules, microfilaments, and neurofilaments). Each grid location contains at most one type of component. Some locations may contain no cytoskeletal component at all. Readin enzymes, which initiate signal propagation, can reside at any occupied site. Readout enzymes are only allowed to reside at sites occupied by C1 components. Each site has eight neighboring sites. The neighbours of an edge site are determined in wrap-around fashion. Two neighbouring components of different type may be linked by an associated protein (or MAP) that controls signal flow between them. Strictly speaking MAPs should refer to microtubule associated proteins, but in the model we use them to refer to all proteins that bridge different fibres.
An activated component will affect the state of its neighboring components of different type only if there is an associated protein linking them together. For simplicity we will refer to all such signal controlling proteins associated with the cytoskeleton as MAPs, even though they may be associated with microfilament or neurofilament rather than with microtubule, and even though strictly speaking readin and readout proteins are also cytoskeleton associated proteins. The assumptions about these interactions are intended to capture the idea that the cytoskeleton serves to integrate signals in space and time.

The interactions between two different types of neighboring components are asymmetric. For example, when a C1 component is activated it will change the state of its neighboring C2 or C3 components, thereby initiating signal flow along C2 or C3 fibres. However, a signal from an activated C2 or C3 component is only able to change the state of a neighboring C1 component to a more active state, but is not sufficient to initiate signal flow. Different types of components transmit signals at different speeds. For example, C1 components transmit signals at the slowest speed, but with highest activation capacity. This allows signal flow along C2 or C3 components to modulate the pathway of signal flow along C1 components (a feature that enhances the evolution friendliness of the structure). When a requisite spatiotemporal combination of cytoskeletal signals arrives at the site of a readout enzyme, the neuron will fire.
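The propagation rules can be pictured with a small cellular automaton sketch. This is our own illustrative reading, not the authors' code: the state names, the one-step refractory period, and the omission of MAP-mediated cross-fibre coupling and type-dependent transmission speeds are simplifying assumptions.

```python
import numpy as np

# Minimal, illustrative cellular-automaton sketch of intraneuronal signal flow.
# An active component excites resting neighbours of the same fibre type and then
# becomes refractory for one step, which keeps propagation unidirectional.

RESTING, ACTIVE, REFRACTORY = 0, 1, 2

def step(types, states):
    """One synchronous update of an N x N cytoskeletal grid (wrap-around edges)."""
    n = types.shape[0]
    new_states = states.copy()
    for r in range(n):
        for c in range(n):
            if states[r, c] == ACTIVE:
                new_states[r, c] = REFRACTORY
                for dr in (-1, 0, 1):            # eight-neighbour (Moore) neighbourhood
                    for dc in (-1, 0, 1):
                        if dr == dc == 0:
                            continue
                        rr, cc = (r + dr) % n, (c + dc) % n
                        same_fibre = types[rr, cc] == types[r, c] != 0
                        if same_fibre and states[rr, cc] == RESTING:
                            new_states[rr, cc] = ACTIVE
            elif states[r, c] == REFRACTORY:
                new_states[r, c] = RESTING
    return new_states

def neuron_fires(states, readout_sites):
    """The model neuron fires when a cytoskeletal signal reaches a readout enzyme."""
    return any(states[r, c] == ACTIVE for r, c in readout_sites)

# Example: a short C1 fibre with a readin enzyme at one end and a readout at the other.
types = np.zeros((8, 8), dtype=int)
types[4, 1:7] = 1                    # a row of C1 components
states = np.zeros_like(types)
states[4, 1] = ACTIVE                # readin enzyme injects a signal at (4, 1)
for t in range(8):
    states = step(types, states)
    if neuron_fires(states, readout_sites=[(4, 6)]):
        print(f"neuron fires at step {t + 1}")
        break
```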
2.3. Multilevel Evolutionary Learning

The differently specialized enzymatic neurons are generated by allowing variation operators to act on readout enzymes, signal control proteins (MAPs), cytoskeletal structure per se, readin enzymes, and on connections between cytoskeletal and sensory neurons, and then selecting the three subnets with the best performance. In the present implementation the variation operators are allowed to act on only one of these parameters (or levels) at a time. Evolution of readout enzymes is implemented by copying (with addition or deletion mutations) readout enzymes in each neuron of best-performing subnets to all comparable neurons of lesser-performing subnets. Evolution of MAPs is implemented by copying (with mutation) the proteins that control this flow of signals between cytoskeletal fibres in each neuron of best-performing subnets to all comparable neurons of lesser-performing subnets. Evolution of the cytoskeletal structure is implemented by replacing cytoskeletal fibres (or parts of cytoskeletal fibres) with different fibres, or by adding or deleting cytoskeletal fibres of random lengths.
Changing cytoskeleton associated proteins or cytoskeletal structure can alter the pattern of signal flow in the neurons, whereas varying the readout enzymes only alters the manner in which the pattern of signal flow is interpreted. Evolution of the readin proteins is also implemented through addition and deletion mutations. Such mutations alter the connectivity between sensory and enzymatic neurons, though in some of the experiments to be reported the readin and connectivity mutations are decoupled (in which case the connections between neurons stay the same, but the locus of contact between them changes).

The orchestration process is implemented by copying (with variation) the low level reference neurons loaded by the reference neurons assigned highest fitness to less fit high level reference neurons (Figure 5). The copy process is implemented by activating the most fit high level reference neurons, which in turn reactivate the pattern of low level reference neuron firing. This pattern is then loaded by a less fit high level reference neuron.
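The copy-with-variation step at the reference neuron level (Figure 5) can be summarized in a few lines. The function below is a hypothetical sketch: the data layout (a mapping from each high level reference neuron to the set of bundles it has loaded), the mutation_rate parameter, and the 32-bundle default are our own assumptions.

```python
import random

def orchestrate(ref_groupings, fitness, mutation_rate=0.05, all_bundles=range(32)):
    """Illustrative copy-with-variation step at the reference neuron level.

    ref_groupings maps each high level reference neuron to the set of low level
    reference neurons (equivalently, bundles of enzymatic neurons) it has loaded.
    The grouping loaded by the fittest reference neuron is copied, with noise,
    to the less fit reference neurons (selection, reproduction, variation).
    """
    best = max(ref_groupings, key=fitness)
    for ref in ref_groupings:
        if ref == best:
            continue
        copied = set(ref_groupings[best])
        for bundle in all_bundles:          # noisy copy: occasionally add or drop a bundle
            if random.random() < mutation_rate:
                copied ^= {bundle}
        ref_groupings[ref] = copied
    return ref_groupings
```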
Each variation operator was turned on for 16 generations, at which point it was turned off and another operator turned on (Figure 6). An initial repertoire of neuronal types is generated at random and is therefore available at the first cycle. The reference neuron variation (or orchestration) process is turned on first for the fixed time of 16 generations. The subsequent sequence of features subject to variation is: sensory to cytoskeletal neuron connections, reference neurons, cytoskeleton, again reference neurons, MAPs, yet again reference neurons, and finally readouts. The sequence is then repeated.
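A minimal sketch of this alternating schedule is given below; the operator labels and the apply_operator and evaluate routines are placeholders, with the 16-generation block length taken from the text.

```python
# Illustrative scheduling loop for the alternating evolutionary operators:
# orchestration (reference neuron level) first, then each intraneuronal level
# followed by a further orchestration block, with the cycle then repeated.
SCHEDULE = ["reference", "connections", "reference", "cytoskeleton",
            "reference", "maps", "reference", "readouts"]

def run(apply_operator, evaluate, generations_per_level=16, cycles=10):
    for _ in range(cycles):
        for level in SCHEDULE:
            for _ in range(generations_per_level):
                apply_operator(level)   # vary only this level in the current block
                evaluate()              # select best-performing subnets / groupings
```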
FIGURE 5. Schematic illustration of the orchestration process. (a) The cytoskeletally controlled enzymatic neurons (denoted by Ei) previously loaded by each of the reference neurons are activated in sequence. The performance (or fitness) of each such grouping is then evaluated. This corresponds to the selection process. (b) Suppose that the group loaded by reference neuron R2 is most fit. Reference neurons R1 and R3 then attempt to load this grouping. This corresponds to the copy or reproduction process. (c) Since the copy step is subject to noise, R1 and R3 actually load slightly different groupings than those activated by R2. This corresponds to the variation process.
FIGURE 6. Sequence of evolutionary learning operations used in the ANM architecture. The orchestration of neurons into coherent groupings is mediated by evolution at the reference neuron level. The creation of the neuronal repertoire is mediated by evolutionary operators acting on various intraneuronal parameters, referred to as levels. The application of the different intraneuronal operators is in each case followed by the orchestration operation at the reference neuron level. Readin level changes alter the connectivity of receptor and enzymatic neurons (except for some experiments in which these two levels of change are decoupled).
The variation operators described above are utilized to generate and orchestrate a large repertoire of neuronal types. Clearly we have made no attempt to model the genetic-developmental mechanisms that are operative in natural organisms. For example, the variation operator that modifies cytoskeletal structure makes no attempt to model the growth dynamics of the cytoskeleton, or to distinguish between phylogenetic and ontogenetic learning processes.

2.4. Problem Domain
The core system described above may be coupled to different problem domains by supplying it with a receptor–effector interface that allows it to receive information about the particular environment and to act on this environment. Originally the model was tested on navigation tasks in an artificial maze-like environment (Chen & Conrad, 1994a, 1994b). The organisms had 16 groups of receptor neurons, with each group consisting of four neurons, and 32 effector neurons, divided into four groups: north, east, south, west. In the present study we have dispensed with the geometrical representation of the environment and have abstracted the problem domain to a bit pattern categorization problem. The patterns are sixty-four bits in length. These must be grouped into four categories (corresponding to the original categories of N, S, E, and W). All four choices are always available to the ANM system. Thus the task is to learn the functions

f: {0,1}^N → {N, E, S, W}, where N = 64.

In some of the experiments fewer choices are relevant (i.e., the proper choice might always be between N or S, but the system still has the option of giving E or W as wrong answers). The problem becomes more difficult as the number of possible types of correct response increases. For example, it should be simpler for the system to learn to respond to all input patterns with a single output (say move N) than to learn to partition the set into two different categories (e.g., move N for some, S for others). In the one category case three wrong responses are possible, but these are not punished. Thus, the number of categories is defined as the number of responses that are rewarded. Reward was always directly proportional to the number of correct responses. Training sets were constructed either randomly or in accordance with biasing constraints to be described. Test sets were developed by imposing different levels of random noise or by systematically changing particular bit patterns.
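As a concrete reading of the task and its reward structure, the sketch below scores a candidate classifier on a training set. The representation (64-bit tuples, category labels N/E/S/W) and the function names are our own illustrative choices, not the authors' implementation.

```python
import random

CATEGORIES = ("N", "E", "S", "W")

def random_pattern(n_bits=64):
    return tuple(random.randint(0, 1) for _ in range(n_bits))

def fitness(classify, training_set):
    """Reward is directly proportional to the number of correct responses;
    wrong responses are not punished."""
    return sum(1 for pattern, target in training_set if classify(pattern) == target)

# Example: a training set of 30 random 64-bit patterns with arbitrary target categories,
# scored against a (deliberately naive) classifier that always answers "N".
training_set = [(random_pattern(), random.choice(CATEGORIES)) for _ in range(30)]
print(fitness(lambda p: "N", training_set), "of", len(training_set), "correct")
```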
The different components of the architecture are linked together by a discrete events modelling technique (Zeigler, 1984). This approach, which orders processes according to a time-ordered events list, provides flexibility by allowing easy replacement of submodels by different ones that express alternative dynamics or learning strategies. When the clock time is the same as the occurrence time of the first event on the list, that event is removed and the associated routine activated. The processing of an event might cause the cancellation of some other scheduled events, the activation of some new events, or the rescheduling of the events list.

An early version of the implementation is described in detail elsewhere (Chen, 1993; Chen & Conrad, 1994b). Variation operators that act on cytoskeletal structure and on readin enzymes have been added to the present implementation. This addition leads to a significant increase in the variability within bundles of comparable neurons. Another difference is that variations at the level of connectivity were precluded in the original implementation. When the latter variations are allowed the distinction between enzymatic neurons initially assigned broad and narrow receptive fields can become altered in the course of evolution.
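The time-ordered events list can be pictured with a small event-queue sketch. This is not the authors' simulator; the heap-based queue, the entry layout, and the lazy cancellation flag are illustrative assumptions consistent with the description above.

```python
import heapq
from itertools import count

class EventList:
    """Minimal time-ordered event list in the spirit of discrete event simulation."""

    def __init__(self):
        self._heap, self._ids, self.clock = [], count(), 0.0

    def schedule(self, time, routine):
        entry = [time, next(self._ids), routine, True]   # last field: still valid?
        heapq.heappush(self._heap, entry)
        return entry

    def cancel(self, entry):
        entry[3] = False                                 # lazily invalidate a scheduled event

    def run(self):
        while self._heap:
            time, _, routine, valid = heapq.heappop(self._heap)
            if valid:
                self.clock = time
                routine(self)                            # a routine may schedule further events

# Example: a receptor event that schedules an enzymatic-neuron update one time unit later.
events = EventList()
events.schedule(1.0, lambda ev: ev.schedule(2.0, lambda ev: print("neuron update at", ev.clock)))
events.run()
```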
3. EXPERIMENTAL RESULTS

3.1. Preliminary Remarks

Previous work with the system has shown that the combination of repertoire generation and orchestration modes of learning yields recognition/action capabilities that exceed those that can be obtained with either of these modes taken in isolation (Chen & Conrad, 1994b). These capabilities are enhanced as the number of parameters that are subject to variation operations increases, with the degree of enhancement becoming more prominent as task difficulty increases. Increasing the number of components in the cytoskeleton and increasing the number of modulating interactions speeds up the learning process and often makes it possible to learn tasks that are refractory to versions of the system with lower complexity (Chen & Conrad, 1994a).

The experiments to be described below are directed primarily to the categorization and generalization capabilities of the system, and to some extent to the effect of enlarging the set of evolutionary operators. A long-term objective is to create an evolutionary system that can exhibit open-ended evolutionary improvement. Ideally runs would last for an indefinitely long amount of time. Running 1,000 generations on a SPARC-10 workstation requires about a day. Our choice of the 64 bit pattern categorization problem for the present study is a compromise between problem complexity and the computer time required to obtain a statistically meaningful ensemble of results.
3.2. Categorization Capabilities

3.2.1. General Characterization. Nine sets of 30 patterns were randomly constructed as a training set. Each pattern comprised 64 bits. The test set consists of 2^64 possible patterns. The system must classify samples from the test set into four categories. Some of the test patterns are chosen by randomly imposing varying degrees of random noise. Others were obtained by systematically altering different bits. Runs were terminated when the rate of learning showed a significant slowdown, possibly corresponding to a stagnation point. In this section we describe experiments that characterize the system's ability to learn to categorize training sets with different structures. In the next section we address the system's ability to generalize to the test set.

The data in Figure 7 is illustrative of the general capabilities of the system. Note that the number of training patterns learned at the time of termination ranged from 23 to 28. All cases, except for one, learned 24 or more (80% or more). Since there are only four possible responses the initial correctness score should on the average be 25%. Five out of the nine cases could learn to the 50% level in 16 cycles through reference neuron evolution alone acting on the initially supplied repertoire (not shown in the figure). Evolution of intraneuronal parameters became more important at later stages.
Clearly the rate of learning is quite variable, and more difficult at the later stages. Closer examination of the data showed that the rate of learning depends on how similar or different the patterns in the training set are with respect to one another. If the system is required to respond to similar patterns (say differing by one bit) in different ways the problem is more difficult, as would be expected. We utilized this feature in some of the experiments to be described below.

3.2.2. Effect of Task Difficulty. As noted earlier, the outputs can be interpreted as different motions. Figure 8 illustrates the effect of altering the number of such motions (or output categories) into which the system is required to partition the training set. In this case the size of the training set was increased to 50 patterns and the experiment run on a single training set. As expected, the task was most difficult when four possible motions were required, and least difficult when only one motion (some response) was required. Forty of the patterns could be learned in the four category case, corresponding to the 80% achievement level described above (where 30 patterns were used for the training set). In the three-category case 41 were learned, and 47 were learned in the two-category case. All could be learned in the trivial one-category case in only 25 cycles (not shown in the figure).

We re-did this experiment, but made it more difficult by selecting the training set in a way that ensured that it contained many similar patterns (Figure 9). An initial bit string consisting only of 0s was subject to a 50% chance of mutation at 16 locations that were divided into four equally spaced groups of four. Thus no two patterns in the training set could differ by more than 16 bits. The average difference was about eight. Again the task was most difficult when four possible motions were required, and least difficult when only one motion was required.
FIGURE 7. Pattern recognition capacity of the ANM system. The training set comprises nine sets of 30 patterns, each with 64 bit positions. The nine curves represent the performance of the system on each of the nine different data sets.
FIGURE 8. Dependence of pattern recognition performance on the number of required categorizations. The training set comprised 50 randomly generated 64-bit training patterns.
Thirty-three of the patterns could be learned in the four-category case, 42 in the two- and three-category cases, and all could be learned in the one-category case. Reference neuron evolution used in isolation was sufficient to do the job in three cycles in the latter case. At first sight the last result appears to contradict the assumption that the task was made more difficult by increasing the similarity of the training patterns.
FIGURE 9. Dependence of pattern recognition performance on the number of required categorizations for a difficult training set. The training set comprised 50 64-bit training patterns, but the patterns were selected in a way that made them more similar than for the random case illustrated in Figure 8.
Learning was slower in the one-category case when the training set patterns were chosen at random and could be consummated solely by variation acting at the reference neuron level. The reason is that similarity of the patterns made it unnecessary to expand the initial neuronal repertoire. But as soon as the task is made more difficult by increasing the number of categories (desired motions), the initial repertoire becomes inadequate and the possibility for conflicting responses enters. Intraneuronal variation operators are then necessary. The effect of increasing the similarity of the training set patterns as a function of number of response categories is illustrated in Figures 10a–10d. The difficulty of learning similar patterns as compared to random patterns is greatest in the four-category case, both in terms of the speed of learning and the number of patterns learned. The disparity decreases in the three- and two-category cases, and reverses in the one-category case (as described above).
3.3. Noise Tolerance and Generalization

3.3.1. Randomly Introduced Noise. Noise tolerance experiments were performed with all of the training sets described above.
FIGURE 10. Effect of increasing the similarity of training set patterns. The graphs compare the data illustrated in Figures 8 and 9 for four required categorizations, corresponding to four motions (a), three required categorizations (b), and two required categorizations (c).
The test sets were generated from the training set by imposing different noise levels. At any given noise level each bit had an equal probability of being switched. The average result over the nine data sets for the case of four required categorizations demonstrates that the system combines a fairly high degree of selectivity with tolerance to low noise levels (see Figure 11). Thus at the 2.5% level the system recognizes about 89% of the patterns. This means that the patterns are in most cases classified correctly if one or two bits are changed. At the other extreme, at the 100% noise level, the system necessarily correctly classifies around 25% of the derived patterns even though they are completely reversed (since the groupings of patterns are arbitrary at this point and any response has a 25% chance of being correct in the four-category case).

The same experiment was also performed on the 50 training pattern case for different numbers of required categorizations (or motions). Recall that two cases were considered: random training sets and biased training sets that include a greater number of similar patterns and which are therefore more difficult to learn. Figure 12 shows that in the random training set case the noise tolerance (or correct classification rate) degrades gracefully with increasing noise, with the four motion case falling off most rapidly and the less difficult two motion case least rapidly. In the one category case (one motion rewarded, none punished) the best strategy for the system is to respond to every pattern. The corresponding data for the more difficult biased training set case is illustrated in Figure 13. The degradation of noise tolerance clearly is much sharper than in the random training set case, except for the trivial one-category case. The rapid fall-off suggests that the system is sensitive to the structure of the training set.
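For reference, a noisy test set of the kind described here can be generated as in the following sketch (our own illustration; patterns are assumed to be tuples of 0/1 bits, as in the earlier sketches).

```python
import random

def add_noise(pattern, noise_level):
    """Derive a test pattern by switching each bit independently with probability
    noise_level (e.g., 0.025 for the 2.5% level discussed in the text)."""
    return tuple(bit ^ 1 if random.random() < noise_level else bit for bit in pattern)

def noisy_test_set(training_patterns, noise_level):
    return [add_noise(p, noise_level) for p in training_patterns]
```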
FIGURE 12. Dependence of noise tolerance on the number of required categorizations for a random training set. The training set comprised 50 randomly constructed 64-bit patterns (the same used to generate the learning time data in Figure 8). The selectivity is greatest when four categorizations are required. The trivial case of one category (not shown) exhibits no selectivity, since the system is never punished for responding incorrectly and consequently it is selected for maximal responsiveness. Except for this case the performance degrades gracefully with increasing noise.
When the training set has more closely spaced patterns the system is rewarded on this basis, and appropriately develops a higher degree of selectivity. The selection exerted on the system is less as the number of categories decreases (since it is rewarded in fewer instances and never punished). This is consistent with the fact that the selectivity is less, both for the random and biased training set cases, when fewer categorizations are required.
FIGURE 11. Effect of randomly introduced noise on pattern recognition performance. The curve represents an average over the nine sets of 30 64-bit training patterns used for the learning time experiments summarized in Figure 7. The data for each individual set is indicated with different markings. The system is required to classify the inputs into four categories. Consequently the 25% correct classification level corresponds to a completely random response. The noise tolerance thus degrades gracefully from the 0 to 100% noise levels.
FIGURE 13. Dependence of noise tolerance on the number of required categorizations for a difficult training set. The training set was the same as that used for the learning data in Figure 9. The performance degrades rapidly with increasing noise, indicating that building the training set out of similar patterns allows the system to develop a high degree of selectivity.
Figure 13 seems to suggest that putting structure into the training set increases the selectivity relative to nearby patterns in the two- and three-category cases without increasing it relative to more distant patterns. But we cannot say whether this effect would hold up if as many experiments were performed as in the 30 pattern training set case above (which would be very time consuming).

3.3.2. Systematic Alteration of the Training Set. Test sets were also generated by systematically changing particular bit patterns in the training set. Both the biased and randomly generated sets of 50 training patterns were used. A first test set was generated by altering the first bit of each training pattern, a second test by altering the second bit, and so forth. Altogether this yields 64 test sets, each comprising 50 patterns. In effect, in this experiment the ANM system is given 64 different exams, each with a slight variation on 50 precursor questions.

The results for the random and biased training set cases are shown in Figures 14a and 14b. The random case shows higher tolerance for the bit change than does the biased case, consistent with the conclusion of the previous section that the biased set allows for the evolution of a greater sensitivity to structure change. In other words, if the ANM system is trained on an exam which has built in structure it has a greater likelihood of failing when this structure is slightly altered. The effect is most prominent when four categorizations are required (i.e., when the number of possible answers that can be rewarded is greatest), but is still present even when only one categorization is required.

Certain bit positions are much more important than others. Thus in the biased case a single bit alteration in some positions has no effect on the response, whereas in other positions it can lead to almost complete failure. The importance of a position has its origin in the structure of the training set. If a bit at a particular location can always be used to give a correct response to all training set patterns, the system learns to treat it as more significant. The significant bit will in general change as the number of response categories increases (a bit definitive of a move in the N direction might then become associated in some cases with an S move and in others with an N move). Different training sets will of course possess different significant bit positions. The biased sets, since they comprise more similar patterns, are more likely to possess such significant bits. If such a significant bit is altered the pattern will not be recognized. Conversely, the system tends to ignore positions whose bit value is not significant in the above sense. Note that if the test sets were generated by introducing two or more systematic bit changes rather than one it would become increasingly difficult to separate significant from insignificant bits, since there is an increasing chance that both significant and insignificant bits would be changed at the same time. This is the motivation for generating the test sets by systematically altering single bits.
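The 64 systematic test sets can be generated as in the sketch below (again our own illustration, with patterns assumed to be tuples of 0/1 bits).

```python
def single_bit_test_sets(training_patterns, n_bits=64):
    """Build the 64 systematic test sets described above: test set k is the training
    set with bit position k flipped in every pattern."""
    test_sets = []
    for k in range(n_bits):
        flipped = [p[:k] + (p[k] ^ 1,) + p[k + 1:] for p in training_patterns]
        test_sets.append(flipped)
    return test_sets
```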
FIGURE 14. Effect of systematic alterations of the training set on pattern recognition performance. (a) The test set was constructed by making single alterations at each bit position in each training pattern of the 50 randomly constructed 64-bit patterns used to generate the learning data in Figure 8. (b) The test set was constructed by making single alterations at each bit position in each training pattern of the 50 difficult (i.e., similar) 64-bit patterns used to generate the learning data in Figure 9. Comparison of (a) and (b) shows that most of the test set can still be recognized when a random training set is used but not when a training set comprising similar patterns is used. This demonstrates that the system develops a much higher degree of selectivity when the training patterns are similar to each other.
The issues of generalization and noise tolerance are, strictly speaking, not the same. Noise tolerance most naturally refers to the ability to handle random changes in a pattern that might in practice correspond to, say, image imperfections. Generalization might refer to the ability of a system to group different patterns in a natural way in accordance with some underlying structural or functional principle (such as equivalence under distortion). In the present case, where we are dealing with uninterpreted bit patterns, the ability of the system to use its dynamical capabilities to generalize from a training set is indistinguishable from its ability to use these capabilities to recognize a set of bit patterns derivable from the initial set by either random or systematic changes. If the alterations could be used to vary the parameters of some functionally significant structure, the ability of the system to group the patterns would have the appearance of bona fide generalization. The ability of a system to distinguish significant from less significant bits would allow it to generalize to all structures whose characteristics can be varied by altering the latter. But it should be emphasized that this is a very low level (purely dynamical) approach to generalization. Cognitive processes with a more procedural aspect would also have to be taken into account in a more comprehensive approach.

3.4. Related Experimental Observations

We have performed a number of variations on the experiments described above. Many of these have been of an exploratory nature. Here we briefly note three results that put the performance characteristics of the system in broader perspective.

3.4.1. Effect of Increasing the Number of Evolvable Parameters. As noted earlier, preliminary experiments indicated that opening more parameters to evolution appears to increase the rate of learning and to increase the number of training patterns that can be learned. We originally allowed evolutionary operators to act on reference neurons, MAPs, and readout enzymes, and subsequently added cytoskeleton structure, readin enzymes, and finally connections between sensory neurons and enzymatic neurons. Each addition increased the rate of learning and processing capability as measured by the number of patterns learned at the point of termination.

We recently investigated this effect with the 50 random test patterns described in Section 3.2.2. When all levels are subject to evolution in an alternating (or nonsimultaneous) manner, 44 of the 50 patterns were learned within 5,000 cycles. When the number of levels was reduced from six to five by suppressing evolution of connectivity, the system could learn only 41 in the same time frame. When readin rather than connectivity evolution was suppressed the number of learned patterns decreased to 36. The greater slowdown in this case is probably due to the fact that mutating readins does not necessarily alter connectivity, but mutating connectivity necessarily alters readins (since readins are always associated with the contact point).
As discussed previously (Figure 8), when readins and connections are both allowed to evolve, but required to do so simultaneously, 40 of the patterns could be learned, indicating that the enhancing effect of increasing the number of levels is greater when the levels are allowed to evolve independently in time. Experiments performed under a wide variety of conditions support these conclusions.

3.4.2. Order of Evolutionary Operators. Originally we expected that the evolution would be fastest when each level was evolved for an equal amount of time in an alternating manner. However, it appears that it is more efficient to follow the action of each individual intraneuronal operator by a period of reference neuron evolution. Apparently it is useful to reorchestrate the repertoire of neurons as soon as neurons with new capabilities are added to it. The importance of alternating between different levels suggests that evolutionary developments at one level open up new opportunities for evolutionary change at other levels. These effects become more pronounced as the task becomes more difficult. For very simple tasks reference neuron evolution acting on an initially fixed repertoire can be most efficient. The importance of evolving the neuronal signal integration capabilities becomes more pronounced as the problem domain becomes more challenging.

3.4.3. Evolution of Intraneuronal Organization. The evolution exhibited a curious self-organization of readin and readout regions. At the inception of learning neurons typically contain two or three readouts. When learning is completed the number of readouts generally ranges from five to 15. On the average 50% of the readouts would reside in the same compartment as the readin enzymes if the organization of the neuron were completely random. In fact this number is much lower, around 20%, indicating that readins and readouts tend to separate from each other in the course of evolution. Also, about 10-20% of input lines impinge on the same or nearby cytoskeletal compartments. This is suggestive of the specialization of real neurons into dendritic input zones and an axon hillock zone responsible for generating output. If the readin and readout zones were not separated the neuron could not selectively integrate features of the input vector that are separated in space and time, since the firing behaviour would be overly controlled by individual input lines.

4. CONCLUSIONS

The balance between information processing at the individual neuron level and at the level of neuron interconnectivity is an open question. Clearly networks with significant pattern processing capabilities can be built up out of neurons with extremely simple input–output dynamics.
The ANM architecture is motivated by the hypothesis that at least some neurons in the brain use mechanisms internal to the unit to perform sophisticated input–output transforms. The experimental results demonstrate that it is possible to achieve strong pattern recognition performance with networks built up out of such signal integrating neurons.

The results reported here are primarily directed to partitioning 64-bit patterns into up to four categories. The ANM system on the average learned the four category classification task to the 84% level at the point at which the evolution was terminated. The performance increased as the number of required partitions decreased, reaching the 94% level in the two category case. Noise tolerance degraded gracefully with increasing noise. If the training set is constructed so that it contains many similar patterns the performance achieved at the time of termination decreased somewhat, but the noise tolerance behaviour of the system showed that it developed greater sensitivity to the structure of the patterns. In general the processing power and both the rate and quality of learning increased as the number of evolvable parameters was increased. This effect became more pronounced as the difficulty of the tasks was increased, and could only be reversed by reducing the difficulty of the task to the point where the extra capacity became gratuitous.

The ANM architecture, as stated at the outset, is deliberately complex and organized so that extra features can continually be added. This strategy is concomitant to the goal of building a rich platform for evolutionary learning. As noted above, increasing the richness and openness of this platform has in each case increased the problem solving power of the system. This type of organization would in principle also be suitable for elaborating the system in the direction of greater physiological realism. But even if such openness is introduced in a way that has no specific or well founded physiological correlate its presence captures an important feature of biological organizations, namely their structural and functional transformability.

The intraneuronal signal integration in our representation has been attributed to mechanisms associated with the cytoskeleton. But of course many other processes, including in particular processes in the dendritic arborization, could support significant pattern processing at the neuronal level. The model cannot by itself provide evidence for any particular mechanism. But as stated above, it constructively demonstrates that signal integration at the level of the neuronal unit can be effectively employed at the network level in at least one network architecture. The much richer network structure of the brain would presumably open up many other possibilities for synergies between inter- and intraneuronal processing.
REFERENCES

Chen, J. C. (1993). Computer Experiments on Evolutionary Learning in a Multilevel Neuromolecular Architecture. Unpublished doctoral dissertation, Department of Computer Science, Wayne State University.
Chen, J. C., & Conrad, M. (1994a). A Multilevel Neuromolecular Architecture that uses the Extradimensional Bypass Principle to Facilitate Evolutionary Learning. Physica D, 75, 417-437.
Chen, J. C., & Conrad, M. (1994b). Learning Synergy in a Multilevel Neuronal Architecture. BioSystems, 32, 111-142.
Conrad, M. (1974). Molecular Information Processing in the Central Nervous System, Parts I and II. In M. Conrad, W. Güttinger, & M. Dal Cin (Eds.), Physics and Mathematics of the Nervous System (pp. 82-127). Heidelberg: Springer-Verlag.
Conrad, M. (1976a). Complementary Molecular Models of Learning and Memory. BioSystems, 8, 119-138.
Conrad, M. (1976b). Molecular Information Structures in the Brain. Journal of Neuroscience Research, 2, 233-254.
Conrad, M. (1977). Principle of superposition free memory. Journal of Theoretical Biology, 67, 213-219.
Conrad, M. (1990). Molecular Computing. In M. Yovits (Ed.), Advances in Computers (pp. 235-324). New York: Academic Press.
Conrad, M., Kampfner, R. R., & Kirby, K. G. (1988). Neuronal Dynamics and Evolutionary Learning. In M. Kochen and H. M. Hastings (Eds.), Advances in Cognitive Science (pp. 169-189). Boulder, CO: Westview Press.
Conrad, M., Kampfner, R. R., Kirby, K. G., Rizki, E., Schleis, G., Smalz, R., & Trenary, R. (1989). Towards an Artificial Brain. BioSystems, 23, 175-218.
Drummond, G. I. (1983). Cyclic Nucleotides in the Nervous System. In P. Greengard & G. A. Robinson (Eds.), Advances in Cyclic Nucleotide Research, Vol. 15 (pp. 373-494). New York: Raven Press.
Dudai, Y. (1987). The cAMP Cascade in the Nervous System: Molecular Sites of Action and Possible Relevance to Neuronal Plasticity. Critical Reviews of Biochemistry, 22, 221-281.
Greengard, P. C. (1978). Cyclic Nucleotides, Phosphorylated Proteins and Neuronal Function. New York: Raven Press.
Hameroff, S. R. (1987). Ultimate Computing. Amsterdam: North-Holland.
Hameroff, S. R., Dayhoff, J. E., Lahoz-Beltra, R., Samsonovich, A. V., & Rasmussen, S. (1992). Models for Molecular Computation: Conformational Automata in the Cytoskeleton. Computer, 25(11), 30-40.
Hebb, D. O. (1949). The Organization of Behavior. New York: Wiley.
Jeffries, J., & Conrad, M. (1994). Face Recognition as a Task Environment for the Reference Neuron Model of Memory. BioSystems, 33, 155-166.
Kampfner, R. R., & Conrad, M. (1983). Computational Modeling of Evolutionary Learning Processes in the Brain. Bulletin of Mathematical Biology, 45, 969-980.
Kirby, K. G., & Conrad, M. (1986). Intraneuronal Dynamics as a Substrate for Evolutionary Learning. Physica D, 22, 205-215.
Kirkpatrick, F. H. (1978). New Models of Cellular Control: Membrane Cytoskeletons, Membrane Curvature Potential, and Possible Interactions. BioSystems, 11, 85-92.
Koruga, D. (1990). Molecular Networks as a Sub-neural Factor of Neural Networks. BioSystems, 23, 297-303.
Liberman, E. A., Minina, S. V., & Golubtsov, K. V. (1975). The study of the metabolic synapse: II: comparison of cyclic 3',5'-AMP and cyclic 3',5'-GMP effects. Biophysics, 22, 75-81.
Liberman, E. A., Minina, S. V., Shklovsky-Kordy, N. E., & Conrad, M. (1982a). Change of Mechanical Parameters as a Possible Means for Information Processing by the Neuron (in Russian). Biophysics, 27, 863-870.
Liberman, E. A., Minina, S. V., Shklovsky-Kordy, N. E., & Conrad, M. (1982b). Microinjection of Cyclic Nucleotides Provides Evidence for a Diffusional Mechanism of Intraneuronal Control. BioSystems, 15, 127-132.
Liberman, E. A., Minina, S. V., Mjakotina, O. L., Shklovsky-Kordy, N. E., & Conrad, M. (1985). Neuron Generator Potentials Evoked by Intracellular Injection of Cyclic Nucleotides and Mechanical Distension. Brain Research, 338, 33-44.
Matsumoto, G. (1984). A Proposed Membrane Model for Generation of Sodium Currents in Squid Giant Axons. Journal of Theoretical Biology, 107, 649-666.
Matsumoto, G., & Sakai, H. (1979). Microtubules inside the Plasma Membrane of Squid Giant Axons and their Possible Physiological Function. Journal of Membrane Biology, 50, 1-14.
Matsumoto, G., Tsukita, S., & Arai, T. (1989). Organization of the Axonal Cytoskeleton: Differentiation of the Microtubule and Actin Filament Arrays. In Cell Movement, Vol. 2: Kinesin, Dynein, and Microtubule Dynamics (pp. 335-356). New York: Alan R. Liss.
Matus, A., & Riederer, B. (1986). Microtubule-associated Proteins in the Developing Brain. Annals of the New York Academy of Sciences, 466, 167-179.
Minsky, M. L. (1980). K-lines: A Theory of Memory. Cognitive Science, 4, 117-133.
Rasmussen, S., Karampurwala, H., Vaidyanath, R., Jensen, K. S., & Hameroff, S. (1990). Computational Connectionism within Neurons: a Model of Cytoskeletal Automata Subserving Neural Networks. Physica D, 42, 428-449.
Reeke, G. N., & Edelman, G. M. (1988). Selective Networks and Recognition Automata. In M. Kochen and H. M. Hastings (Eds.), Advances in Cognitive Science (pp. 50-71). Boulder, CO: Westview Press.
Selden, S. C., & Pollard, T. D. (1983). Phosphorylation of Microtubule-associated Proteins Regulates their Interaction with Actin Filaments. Journal of Biological Chemistry, 258, 7064-7071.
Smalz, R., & Conrad, M. (1991). A Credit Apportionment Algorithm for Evolutionary Learning with Neural Networks. In A. V. Holden and V. I. Kryukov (Eds.), Neurocomputers and Attention II: Connectionism and Neurocomputers (pp. 663-673). Manchester, UK: Manchester University Press.
Smalz, R., & Conrad, M. (1994). Combining Evolution with Credit Apportionment: A New Learning Algorithm for Neural Nets. Neural Networks, 7, 341-351.
Spiessens, P., & Torreele, J. (1992). Massively Parallel Evolution of Recurrent Networks: An Approach to Temporal Processing. In F. J. Varela and P. Bourgine (Eds.), Toward a Practice of Autonomous Systems: Proceedings of the First European Conference on Artificial Life (pp. 70-77). Cambridge, MA: The MIT Press.
Teyler, T. J., & DiScenna, P. (1986). The hippocampal memory indexing theory. Behavioral Neuroscience, 100, 147-154.
Trenary, R., & Conrad, M. (1987). A Neuron Model of a Memory System for Autonomous Exploration of an Environment. In L. O. Hertzberger and F. C. A. Groen (Eds.), Intelligent Autonomous Systems (pp. 601-609). Amsterdam: North-Holland.
Triestman, S. N., & Levitan, I. B. (1976). Alteration of Electrical Activity in Molluscan Neurons by Cyclic Nucleotides and Peptide Factors. Nature, 261, 62-64.
Vallee, R. B., Bloom, G. S., & Theurkauf, W. E. (1984). Microtubule-associated Proteins: Subunits of the Cytomatrix. Journal of Cell Biology, 99, 38s-44s.
Werbos, P. (1992). The Cytoskeleton: Why it May be Crucial to Human Learning and Neurocontrol. Nanobiology, 1, 75-96.
Whitley, D., & Hanson, T. (1989). Optimizing Neural Networks Using Faster, More Accurate Genetic Search. In Proceedings of the 3rd International Conference on Genetic Algorithms (pp. 157-255). Palo Alto, CA: Kaufmann.
Zeigler, B. P. (1984). Multifacetted Modelling and Discrete Event Simulation. New York: Academic Press.