A connectionist architecture for neural networks

Kamal S. Ali, Dia L. Ali, Adel L. Ali
Departments of Computer Science & Engineering Technology, University of Southern Mississippi, Hattiesburg, MS 39406-5106 USA

In this paper we introduce an architecture for neurocomputing. In this architecture all the sorting and searching for data, conventionally carried out by the operating system, will be eliminated. This is done by implementing specialized hardware to carry out the sorting of data, both in the input and output stages. Hardware and software aspects of the architecture are discussed.

To simulate a Neural Network on a conventional computer, one faces many problems. Although a Neural Network can, in theory, be simulated on a Von Neumann machine, serially calculating all the inputs and outputs for all processing elements, and recalculating until the system stabilizes, requires infeasible amounts of CPU time. Some simplified models, or small-scale Neural Networks, have been simulated on conventional machines, yielding some promising results. However, once the simulated network is increased in size, the demand for computer time increases dramatically. To overcome this difficulty, new architectures better suited for Neural Networks are utilized.

The architecture we are building is based on the assumption that a neuron that does not receive an input (excitatory or inhibitory) will not produce an action potential; in other words, its output does not need to be calculated. Eliminating the zero-input processing elements from the calculations will speed up the system (see the sketch below). This, however, might not increase the system's speed drastically, since it is likely that most processing elements will have a non-zero input. In the proposed architecture all the sorting and searching for data, conventionally carried out by the operating system, will be eliminated. This is done by implementing specialized hardware to carry out the sorting of data, both in the input and output stages. Clearly this feature will cut down on the system's need to access memory, increasing the system's overall performance.

The system consists of Slab Processor boards (SPs) and a Connection control Processor board (CP). The slab boards are all identical. Each slab board will be responsible for the calculation of the output of the neurons in that slab only. The Connection control Processor board, on the other hand, is responsible for the regulation of data transfer among Slab Processor boards. The CP will also be responsible for communication with the host machine.

The software for this system is divided into two distinct units, the CP's and the SP's operating systems. The CP's operating system should be capable of all communications with the host machine, and with the outside world if needed. The CP, running its operating system, should be able to read the initial network topology from the host machine and distribute the information to the appropriate SPs. In other words, the CP should be able to access M2 directly.
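
As an illustration only (the paper gives no code), the following C sketch shows how a slab update might skip zero-input processing elements so that their outputs are never recalculated. The slab_t structure, the threshold activation, and all names are assumptions made here for clarity, not the authors' implementation.

    #include <stddef.h>
    #include <stdio.h>

    /* Hypothetical slab of processing elements; not the authors' data layout. */
    typedef struct {
        size_t pe_count;    /* number of processing elements in this slab */
        float *net_input;   /* summed excitatory/inhibitory input per PE  */
        float *output;      /* calculated activation per PE               */
    } slab_t;

    /* Example activation function: a simple hard threshold. */
    static float activation(float net) {
        return net > 0.0f ? 1.0f : 0.0f;
    }

    /* Update one slab, evaluating only PEs that actually received input;
     * zero-input PEs are skipped, as argued above. */
    static void slab_update(slab_t *slab) {
        for (size_t i = 0; i < slab->pe_count; i++) {
            if (slab->net_input[i] == 0.0f)
                continue;                  /* no input: no action potential */
            slab->output[i] = activation(slab->net_input[i]);
        }
    }

    int main(void) {
        float in[4]  = { 0.0f, 0.7f, 0.0f, -0.2f };
        float out[4] = { 0.0f, 0.0f, 0.0f, 0.0f };
        slab_t slab  = { 4, in, out };

        slab_update(&slab);
        for (size_t i = 0; i < 4; i++)
            printf("PE %zu: output %.1f\n", i, out[i]);
        return 0;
    }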
After such a transfer is completed, the CP should read the input, or activation, from the host and direct it to the appropriate slab. When the system starts calculations, the prime task of the CP becomes bus arbitration. Finally, the CP should be able to read the SPs' final output, or result, and send it out to the host machine.

The SP's operating system starts the calculations once an input is received, and only when the calculation of all inputs is completed would data transfer be allowed. This is to assure that the network's timing is not compromised. The operating systems will be written initially to support simplified learning rules and a simplified connection scheme. However, once the hardware is tested, the operating systems can be expanded to support different learning algorithms and connection schemes.
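
Again purely as a sketch (the paper describes behaviour, not code), the following self-contained C fragment mimics the SP rule stated above: inputs are processed as soon as they are received, and data transfer is permitted only once every pending input has been calculated, preserving the network's timing. The counters and function names are hypothetical.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    #define PE_COUNT 4

    static float net_input[PE_COUNT] = { 0.3f, 0.0f, 1.2f, 0.0f };
    static float output[PE_COUNT];
    static size_t pending_inputs = PE_COUNT;   /* inputs not yet processed */

    /* Process one input as soon as it is received. */
    static void process_input(size_t pe) {
        output[pe] = net_input[pe] > 0.0f ? 1.0f : 0.0f;
        pending_inputs--;
    }

    /* Data transfer to other slabs is allowed only after the
     * calculation of all inputs is completed. */
    static bool transfer_allowed(void) {
        return pending_inputs == 0;
    }

    int main(void) {
        for (size_t pe = 0; pe < PE_COUNT; pe++) {
            process_input(pe);
            printf("PE %zu done, transfer allowed: %s\n",
                   pe, transfer_allowed() ? "yes" : "no");
        }
        return 0;
    }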
