Neurocomputing 2 (1990/91) 181-182
Elsevier
Scanning the issue
In Multilayer perceptrons for classification and regression, F. Murtagh describes the multilayer perceptron (MLP) as a supervised classification method. The relationship to statistical and pattern recognition approaches to the problems of classification and regression is outlined. Different learning rules (weight update algorithms) are discussed, and the application of multilayer perceptrons to forecasting sunspots, to the classification of Fisher's Iris data, and to other examples is presented. In the forecasting example the final architecture was a three-layer MLP (3-5-10), and the conjugate gradient method due to Barnard and Cole was used. After 200 iterations the percentage ('approximate') correct was 58.33% (93.75%). This represented a higher performance than K-Nearest Neighbors Discriminant Analysis with K = 1 (which performed better than K = 2); on the same test and training data the latter delivered a percentage ('approximate') correct of 49.58% (85.53%). In the classification example several learning rules with a 4-15-3 MLP worked well, among them the conjugate gradient, Quickprop, and cascade-correlation algorithms.
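As a rough illustration of the classification example, the following is a minimal sketch of a 4-15-3 MLP trained on Fisher's Iris data. Plain batch gradient descent on a squared-error loss stands in for the conjugate gradient, Quickprop, and cascade-correlation rules discussed in the paper; only the layer sizes are taken from the summary above, everything else (initialization, learning rate, loss) is an assumption.

    import numpy as np
    from sklearn.datasets import load_iris

    # Minimal 4-15-3 MLP for Fisher's Iris data; a sketch, not the
    # paper's method (which uses conjugate-gradient-type learning rules).
    rng = np.random.default_rng(0)
    X, y = load_iris(return_X_y=True)
    X = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize the 4 inputs
    T = np.eye(3)[y]                           # one-hot targets, 3 classes
    n = len(X)

    W1 = rng.normal(0.0, 0.5, (4, 15)); b1 = np.zeros(15)
    W2 = rng.normal(0.0, 0.5, (15, 3)); b2 = np.zeros(3)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for epoch in range(3000):
        H = sigmoid(X @ W1 + b1)               # 15 hidden units
        O = sigmoid(H @ W2 + b2)               # 3 output units
        dO = (O - T) * O * (1.0 - O)           # output deltas
        dH = (dO @ W2.T) * H * (1.0 - H)       # backpropagated deltas
        W2 -= 0.5 * (H.T @ dO) / n; b2 -= 0.5 * dO.mean(axis=0)
        W1 -= 0.5 * (X.T @ dH) / n; b1 -= 0.5 * dH.mean(axis=0)

    print("training accuracy:", (O.argmax(axis=1) == y).mean())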
In Simulation of backpropagation networks on transputers - suitable topologies and performance analysis, R. Straub, D. Schwarz and E. Schöneburg describe the parallel implementation and performance of the simulation of backpropagation networks on parallel computer topologies using transputers. The computation of a backpropagation network is first divided into partial functions of different complexity, which can be implemented as parallel processes mapped onto an optimal transputer network. For transputer networks, especially because of the restricted number of hardware communication links (four per transputer), an optimal ratio between computation and communication times should be achieved by minimizing communication time. In this approach a master transputer can be connected to up to three slave transputer pipelines, and each pipeline can in principle be built up of any number of transputers. In each slave transputer a multiplexer process passes messages through the pipeline between the master and the slaves not directly connected to the master. Different distributions of the parallel processes on the transputer network are analyzed. For a four-layered test network (3-150-150-1), six T800 computing transputers deliver slightly more than 200 KCUPS.
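For orientation, CUPS counts connection (weight) updates per second. The quoted figure can be put into perspective with a back-of-the-envelope check; the arithmetic below is an illustration, not a computation from the paper.

    # CUPS = connection updates per second; a quick consistency check
    # on the figure quoted above.
    connections = 3*150 + 150*150 + 150*1   # 23,100 weights in 3-150-150-1
    reported_kcups = 200                    # about 200 KCUPS on six T800s
    full_updates_per_s = reported_kcups * 1000 / connections
    print(f"{connections} connections -> {full_updates_per_s:.1f} full updates/s")

That is, roughly nine complete weight updates of the test network per second across the six computing transputers.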
W. Poechmueller and M. Glesner present an Evaluation of state-of-the-art neural network customized hardware. For the VLSI realisation of networks of considerable size and complexity, three architectures are principally possible: small networks without on-chip learning, large networks using Wafer Scale Integration (WSI), and network distribution over several chips using cascadable architectures. The most promising architecture seems to be the cascadable one. Cascadability can be arbitrary or limited. Limited cascadability, which enables larger networks with more layers and more neurons per layer, but not more synapses per neuron, seems the more promising for VLSI realisation. Neural network architectures are classified in this paper as network-, neuron-, and synapse-oriented architectures (in order of degree of parallelism, from none to the highest, respectively).
Existing and proposed designs are then evaluated. Network-oriented architectures presented: the conventional von Neumann machine (25,000-250,000 CUPS) and a conventional machine with an acceleration board (1.5-3 MCUPS). Neuron-oriented architectures presented: Digital Signal Processors (567 MCUPS with 256 processor boards), transputer networks (275 KCUPS with 4 transputers), RISC arrays, linear systolic arrays (32 MCUPS with a 20-cell system), small two-dimensional systolic arrays (8.8 MCUPS), the Neural Bit Slice (10 MCUPS per chip), and the BACCHUS architecture (1 billion CUPS, only with binary values and associative memories). Synapse-oriented architectures presented: large two-dimensional systolic arrays (51.4 MCUPS), an analog implementation of associative memories (4.84 billion CUPS), SIMD arrays (1.6 billion CUPS), the ETANN chip (2 billion CUPS per chip), and Pulse Stream Chips.
In A stock selection strategy using fuzzy neural networks, F.S. Wong and P.Z. Wang introduce and apply the concept of neural gates and describe the design of an Intelligent Stock Selection System (ISSS) based on this extension of the artificial neural network approach. The overall ISSS architecture includes knowledge acquisition, (Boolean, fuzzy, and probabilistic) data processing, integration of expert knowledge with investor preference, fuzzy reasoning, and automatic learning. The system rule base consists of company-, industry-, attributes-, and country-based rules, which are set up by the knowledge acquisition subsystem. The system database consists of company-, industry-, socio-economic-, and country-based data, which is handled by the data maintenance subsystem. The neural forecaster and the similarly structured country and stock selection module (the FuzzyNet module) are core system components. The FuzzyNet module consists of three submodules: a Membership Function Generator (MFG), a Fuzzy Information Processor (FIP), and a Backpropagation Neural Network (BPN). Neural gates are generalized artificial neurons for processing Boolean information (Generalized Boolean Neural Gate, GBN), probabilistic information (Probability Neural Gate, PN), and fuzzy information (Fuzzy Neural Gate, FN). Neural gates are used in the FIP and BPN submodules of the FuzzyNet.
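A hypothetical sketch of one FuzzyNet path may help picture this: a membership function fuzzifies raw indicators, and a gate aggregates the resulting grades. The actual gate definitions (GBN, PN, FN) are given in the paper; the functions, indicator names, and numbers below are illustrative assumptions only.

    import numpy as np

    # Hypothetical stand-in for an MFG plus a fuzzy neural gate (FN);
    # not the paper's definitions.
    def triangular(x, a, b, c):
        # MFG-style triangular membership function over [a, c], peak at b
        return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

    def fuzzy_gate(grades, weights):
        # weighted aggregation of membership grades, clipped to [0, 1]
        return float(np.clip(np.dot(weights, grades), 0.0, 1.0))

    pe_low = triangular(8.0, 0.0, 5.0, 15.0)       # "P/E is low" grade
    growth_high = triangular(0.25, 0.1, 0.3, 0.5)  # "growth is high" grade
    score = fuzzy_gate([pe_low, growth_high], [0.6, 0.4])
    print(f"stock attractiveness grade: {score:.2f}")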
E. Schöneburg presents in Neural networks hunt computer viruses an innovative neural network-based approach to the detection of mutated computer viruses. First, computer-virus code sequences and theoretical limitations of computer-virus identification are reviewed. Then, two approaches to identifying mutations of computer viruses are presented, using characteristic sequences of the 39 most well-known virus families separated into 34 classes. A backpropagation network (three layers, 40-15-34 neurons) is used in both cases; it is trained on the scaled sequences (each sequence consists of 40 pairs of two hex digits). With the first approach, mutations of up to 20% (noise) are identified with about 70% classification accuracy. Difficulties of the first approach with virus mutations produced by non-stochastic processes (e.g. functional modifications including insertion, deletion, exchange, and replacement of byte series) are attacked in the second approach using a generalized input transformation (scaling), which obtains 95% classification accuracy for up to 40% virus mutations (up to 75% when supplementary information is available in practice).
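The input encoding can be pictured with a minimal sketch of scaling a characteristic sequence of 40 hex byte pairs into the 40 network inputs. The signature string and the scaling into [0, 1] are assumptions for illustration; the paper's actual transformation may differ.

    # Scale a characteristic code sequence of 40 hex byte pairs into a
    # 40-dimensional input vector for a 40-15-34 backpropagation network.
    # The example signature is made up.
    signature = "EB2C904D5A" * 8          # 80 hex digits = 40 byte pairs
    bytes_ = [int(signature[i:i+2], 16) for i in range(0, len(signature), 2)]
    inputs = [b / 255.0 for b in bytes_]  # scale each byte into [0, 1]
    assert len(inputs) == 40              # one input neuron per byte pair
    print(inputs[:5])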
My review of Advanced neural computers, edited by R. Eckmiller, completes this last issue of the second volume of Neurocomputing - An International Journal. This Elsevier book brings together the papers presented at the International Symposium on Neural Networks for Sensory and Motor Systems (NSMS) held in Neuss (FRG) on March 22-24, 1990. I wish to acknowledge the cooperation of all those who allowed us to consider their work for inclusion in this issue.

V. David Sánchez A.
Editor-in-Chief