Tetrahedron Computer Methodology, Vol. 3, No. 3/4, pp. 119 to 128, 1990. Printed in Great Britain.
0898-5529/90 $3.00+.00 Pergamon Press plc
Neural Network Technology and its Application in Chemical Research

Mark E. Lacy

Product Development, Norwich Eaton Pharmaceuticals, Inc., A Procter & Gamble Company, P.O. Box 191, Norwich, New York 13815 USA
Received 11 September 1990; Revised 26 October 1990; Accepted 26 October 1990
Key words: Neural network; Computer methodology; Protein structure; Organic chemistry; Spectroscopy; Nucleic acid

Abstract: Neural network technology is finding new applications in chemical research. This paper briefly describes neural network technology, reviews recent applications in chemistry, and introduces the papers appearing in this special issue of Tetrahedron Computer Methodology.
INTRODUCTION

Neural network technology is the design and use of neural networks to solve computational problems. The roots of this technology were established in the 1940s and 1950s with the work of McCulloch & Pitts1 on networks of idealized neuron-like elements, the development of the perceptron element by Frank Rosenblatt at Cornell, and the development of the "adaptive linear element" by Widrow.2 Funding for neural network research was sharply curtailed after Minsky & Papert's analysis of the perceptron and description of its limitations.3 Several groups continued to work on improving neural networks and their capabilities, but it was not until the 1980s, when Hopfield4 published a clear description of a neural computing system whose interconnected elements seek an energy minimum, that interest in neural networks was revitalized. Suddenly neural network technology received widespread attention. The outstanding progress in computing power that had taken place since the 1960s, as well as the progress made in artificial intelligence on the expert systems front, each played a role in neural network technology becoming an established and sustained research focus. Neural network technology is now a flourishing field, with a number of journals (e.g., Neural Networks and International Journal of Neural Systems) and professional societies (e.g., the International Neural Network Society) devoted to it.

The purpose of this paper is to (1) provide some information on the fundamentals of neural networks for those who are not familiar with the technology, (2) review neural network applications in chemistry that have been published over the last few years, and (3) briefly introduce the papers presented in this issue. Readers will find more extensive explanation of some neural network concepts in the articles in this issue, including
pointers to other sources of information. A number of texts on neural networks have also recently been introduced on the market (see, for example, Refs. 5-8; Ref. 6 comes with software).

BACKGROUND ON NEURAL NETWORK TECHNOLOGY

What is a Neural Network?

A neural network is a collection of processing elements, interconnected in a specified way, that takes a group of inputs and transforms them into a group of outputs. Such a network of elements, when given certain characteristics, can be used as a computational means of answering problems that are less tractable with other computer methods. Presently, most neural network research involves software simulation of these networks; hardware implementations of neural networks that can be programmed and reconfigured dynamically are still mainly in the developmental stage.

Neural networks are referred to by many different names, including connectionist models, neurocomputers, artificial neural systems, parallel associative networks, and parallel distributed processors. While there are nuances to the use of each of these synonyms, broadly speaking they all refer to neural network technology. The term "artificial" is frequently used to distinguish neural networks from their biological counterparts. Artificial neural networks are inspired by biological networks, but for the purposes of applications the emphasis is on computational power rather than close similarity to true biological systems. Instead of a highly interconnected network of neurons, an artificial neural network is made up of a number of highly interconnected processing elements. The network is a computing system that stores information by changing its state (the properties of the network) in response to external inputs.
Features of a Neural Network
There are several characteristic features of neural networks (Figs. 1-3). These include processing elements, interconnects/connections, and layers, referring to, respectively, the parts of the network, how they interact, and how they are organized.
Fig. 1. A processing element (PE) for a neural network, with inputs 1 through n and a single output.

A processing element (PE; Fig. 1) is analogous to a neuron in a biological neural network. It is a simple device that receives a number of input signals and applies a mathematical function to transform them, producing an output that is then passed to other processing elements.
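To make this concrete, here is a minimal sketch of a PE in Python. The sigmoid transfer function and the particular weights are illustrative assumptions; the text does not prescribe a specific transfer function.

```python
import math

def processing_element(inputs, weights, bias=0.0):
    """Sum the weighted input signals and apply a transfer function.

    The sigmoid used here is one common choice; others (step,
    linear, tanh) appear in the literature as well.
    """
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

# Three hypothetical input signals and their connection weights:
print(processing_element([0.8, 0.2, 0.5], [0.4, -0.7, 1.1]))
```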
An interconnect, or connection, between two processing elements (Fig. 2) is a path along which a signal can flow. Each interconnect is unidirectional; it can only pass a signal one way, from the output of one processing element to the input of another processing element. Each interconnect is also characterized by a connection strength, or weight, which is applied to the signal that the connection carries.
Fig. 2. An interconnect in a neural network.

Processing elements are generally arranged in "layers" (Fig. 3). The sets of elements that receive input from and provide output to the outside world comprise the input and output layers, respectively. Each input element receives a certain kind of input data; each output element produces a certain type of result. There are usually one or more layers between the input and output layers; these are referred to as "hidden layers". It is sometimes (but not always) possible to attribute a physical meaning to the elements in a hidden layer.
Fig. 3. Layers in a hypothetical neural network.
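This layered organization can be expressed compactly as matrix operations, with one weight matrix per pair of adjacent layers. The following sketch is an illustration rather than anything from the paper; the layer sizes and the random weights are arbitrary assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, w_hidden, w_output):
    """Propagate an input vector through one hidden layer to the output layer."""
    hidden = sigmoid(w_hidden @ x)     # activations of the hidden-layer PEs
    return sigmoid(w_output @ hidden)  # activations of the output-layer PEs

rng = np.random.default_rng(0)
x = rng.random(4)                      # 4 input elements (arbitrary)
w_hidden = rng.normal(size=(3, 4))     # 3 hidden PEs, each connected to every input
w_output = rng.normal(size=(2, 3))     # 2 output PEs, each connected to every hidden PE
print(forward(x, w_hidden, w_output))
```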
Neural networks differ not only in the number of elements they use and how these elements are interconnected, but also in several characteristics that govern their implementation, including the rules used to determine how a processing element responds to its inputs, and the procedures by which the networks are trained (see below). These differences make some networks more suitable for certain problems than others. A pattern-recognition problem may be approached using a network that uses the back-propagation algorithm5,9 for learning, while a problem in constrained optimization (such as finding a near-optimal solution to the "traveling salesperson" problem) may be approached using a Hopfield net.10 Choosing the right architecture and properties of a neural network for a particular problem is the major task in applying neural network technology; while some heuristics are in use, there are no "cookbook" instructions that can be followed.
An Example Network; Training a Network

A simple example of a neural network is shown in Fig. 4. The purpose of this network is to take information on the characteristics of a hang-gliding site and determine whether the site is better for training or soaring. The inputs are ratings for soaring windspeed, suitability as a launch area, suitability as a landing area, suitability of the set-up area, and flying enjoyment. Three PEs have been arbitrarily chosen to make up the hidden layer, while two PEs serve to provide output on the training site rating and the soaring site rating.
Fig. 4. An example neural network.

Before a neural network such as this can be used, however, it must be trained. The result of training is a complex internal representation of the training cases that were presented to the network. The process of training, in which the strengths of the connections among the elements are selectively adjusted, is accomplished through the use of a learning procedure. The learning procedure involves an algorithm for repeatedly presenting a training set of inputs and outputs to the network and adjusting connection strengths until the root-mean-square (or mean-square) error is reduced below a selected value. Depending on the learning procedure used, this presentation of input data may need to be done 10 times or 10,000 times. The learning step is termed "supervised" if a desired set of outputs is provided to the network for it to use as a goal, the network adjusting itself to bring its output in line with the goal output. If no desired set of outputs is provided, and the network is simply allowed to follow a rule for modifying its connection strengths in response to its input examples, the learning is termed "unsupervised".
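As an illustration of supervised learning, the sketch below trains a network with the topology of Fig. 4 (five inputs, three hidden PEs, two outputs) by back-propagation with gradient descent. The site ratings in the training set are invented for this example, and back-propagation is only one of the learning procedures the text mentions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical training cases: each row rates windspeed, launch area,
# landing area, set-up area, and enjoyment (0-1). Targets are the
# desired (training-site, soaring-site) ratings for each case.
X = np.array([[0.2, 0.9, 0.9, 0.8, 0.5],
              [0.9, 0.4, 0.3, 0.5, 0.9],
              [0.3, 0.8, 0.7, 0.9, 0.4],
              [0.8, 0.3, 0.4, 0.4, 0.8]])
T = np.array([[0.9, 0.1],
              [0.1, 0.9],
              [0.8, 0.2],
              [0.2, 0.9]])

rng = np.random.default_rng(1)
W1 = rng.normal(scale=0.5, size=(5, 3))  # inputs -> 3 hidden PEs
W2 = rng.normal(scale=0.5, size=(3, 2))  # hidden -> 2 output PEs
lr = 0.5                                 # learning rate

for epoch in range(10_000):
    H = sigmoid(X @ W1)                  # hidden-layer activations
    Y = sigmoid(H @ W2)                  # network outputs
    err = Y - T
    if np.sqrt(np.mean(err ** 2)) < 0.05:  # stop below the selected RMS error
        break
    # Back-propagate the error and adjust the connection strengths.
    d_out = err * Y * (1 - Y)
    d_hid = (d_out @ W2.T) * H * (1 - H)
    W2 -= lr * H.T @ d_out
    W1 -= lr * X.T @ d_hid
```

After training, feeding a new input row through the same forward pass yields the network's training-site and soaring-site ratings for that site.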
Once the network in Fig. 4 has been trained with a sufficient number of training cases, it can be used to provide "instantaneous" predictions of training and soaring suitability in response to new sets of input data (i.e., data that were not part of the original training set).
Differences from Other Programming Paradigms

There are a number of fundamental differences between neural networks and other programming paradigms, including conventional programming and expert systems.11
Knowledge. Knowledge is represented symbolically in expert systems, but non-symbolically in neural networks. In neural networks it is distributed throughout the net as a set of numerical parameters (connection strengths).
Processing. Processing of data or information in conventional computer programs or expert systems is usually sequential: the flow of data can be followed step by step from one instruction to the next. Neural networks, on the other hand, are theoretically parallel systems: data flows along multiple interconnects simultaneously. In practice, software simulations of neural networks use sequential processing.

Problem solving. Conventional programs and expert systems are algorithmic, in that step-by-step descriptions can be given of how the input leads to the output. Neural networks, however, are non-algorithmic. While the operation of a given element can be described algorithmically, the operation of the network as a whole cannot. It is consequently easier to explain how an ordinary program or expert system comes up with an answer than to explain how a neural network reaches its conclusion.

Neural networks have certain advantages over ordinary programming and expert systems. They can generalize to find the "best-fit" answer to a problem. Through the use of learning procedures they self-organize to form their own internal knowledge representations. As tools in pattern recognition, they are adept at discovering non-obvious features in data. They can obtain near-optimal answers very quickly. They do not require complete data in order to arrive at an answer to a problem, and they can guess at answers to problems for which they have not been trained.
Implementation

Presently most neural network applications involve software simulations. A variety of commercial software packages are available for setting up and experimenting with neural networks, primarily for IBM PCs and PC clones, Macintoshes, and Sun workstations. Special coprocessor boards are also available to accelerate simulations. Some neural network coprocessors allow ordinary procedural programming and neural network processing to be combined; the neural networks run as host-called procedures on the coprocessor. Work is underway as well to construct hardware implementations of neural networks; true neurocomputers may be made from analog VLSI chips or optoelectronics.

Many of the available software packages have been the subjects of software reviews (see Ref. 12 and recent issues of the magazine PC AI). The NeuralWorks Professional II package is one which runs on a variety of platforms. This issue of Tetrahedron Computer Methodology includes an up-to-date review13 of NeuralWorks Professional II, and a demonstration of the system is included with the electronic media for this issue.

APPLICATIONS OF NEURAL NETWORKS IN CHEMISTRY: REVIEW

The first practical applications of neural network technology have been mainly outside of scientific research, with the exception of modeling biological neural networks. These applications span many fields. Applications in decision support systems include risk assessment for mortgage insurers, the design of
manufacturing processes, and project management. Applications involving expert systems include optimizing seating on airlines, recommending financial buying strategies, and evaluating loan applicants. Applications in pattern recognition, an area that may be one of the most fruitful, include systems that recognize handwritten characters and systems used for face/fingerprint/retina identification. Other applications include machine vision and control of processes and machines.

The earliest appearance of neural networks in applications to chemistry can be found in the extensive work of Peter Jurs and co-workers in chemical applications of pattern recognition. Among the nonparametric methods which these workers used to develop discriminants of compounds with regard to chemical and/or biological properties is the linear learning machine (or perceptron). A review of Professor Jurs' contributions is beyond the scope of this paper; the reader is directed to the text by Jurs & Isenhour14 and a number of publications appearing in Analytical Chemistry in the late 1960s and early 1970s. The limitations of the perceptron inhibited more widespread application; now, more elaborate networks with hidden layers are able to provide greater analytical power.

The first symposium on neural network applications in chemistry was held at the national meeting of the American Chemical Society in April 1989. Several papers were presented on applications that included organic chemistry, protein structure, and correlation of chemical and biological properties. This symposium was the basis for a cover article for Chemical & Engineering News,15 and neural network technology was the focus of a radio show produced by the ACS. Since 1988, a number of publications have appeared describing chemical applications of neural networks. These applications have included predicting protein structure, relating chemical properties to biological activity, classifying and identifying chemical spectra, and analyzing nucleic acid sequences.

Predicting Protein Structure

By far the greatest number of publications in chemical applications of neural network technology have concerned the problem of predicting secondary and tertiary protein structure. The earliest of these reports were those of Qian & Sejnowski,16 Bohr et al.,17 and Holley & Karplus.18 Sejnowski & Rosenberg19 applied neural network technology to the problem of speech synthesis and achieved remarkable success by treating the problem as one in which a set of input PEs could be used to capture a "window" of contextual information in the alphabetic letters that precede and follow a letter whose sound is desired. The principles behind "NETtalk" were used by Qian & Sejnowski16 to infer the contribution of an amino acid to the secondary structure of a protein by using a window that looks at the amino acid sequence for several residues to each side of the residue of interest. Using a back-propagation network with input and output layers and a single hidden layer, these researchers achieved a success rate of 60-65% for predicting the secondary structure of a set of globular proteins. While this was low enough for them to conclude that local information in the protein sequence is insufficient for predicting structure, it was still higher than that obtained using other methods reported in the literature.
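The window idea can be illustrated with a short encoding sketch. The details here are assumptions for illustration: Qian & Sejnowski used a 13-residue window with a small group of input units per position (including a spacer unit for positions that run off the ends of the chain), whereas this sketch simply allots 20 units per position, one per amino acid, and leaves off-the-end positions at zero.

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 one-letter residue codes

def encode_window(sequence, center, half_width=6):
    """One-hot encode a window of residues around position `center`.

    A 13-residue window (half_width=6) with 20 units per position
    gives a 260-element input vector for the network.
    """
    vec = []
    for i in range(center - half_width, center + half_width + 1):
        one_hot = [0.0] * len(AMINO_ACIDS)
        if 0 <= i < len(sequence):     # positions off the ends stay all-zero
            one_hot[AMINO_ACIDS.index(sequence[i])] = 1.0
        vec.extend(one_hot)
    return np.array(vec)

# Encode the context around residue 10 of a made-up sequence:
x = encode_window("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", center=10)
print(x.shape)  # (260,)
```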
Similar methodology and results were reported by Holley & Karplus,18 who later used their approach to predict the secondary structure of the principal neutralizing determinant of human immunodeficiency virus HIV-1.20 The paper by Bohr et al.17 discusses a study of the alpha-helices in rhodopsin using perceptron-like networks with back-propagation. As in the work by Qian & Sejnowski,16 the input to the networks was an amino acid sequence, but unlike Qian & Sejnowski,16 three separate networks were used for predicting α-helix, β-sheet, and random coil structure. The α-helix network achieved a success rate nearing 73%. These researchers went on to report21 the generation of a three-dimensional structure using a neural network to predict distance matrices, followed by the use of a minimization fitting procedure. McGregor et al.22 have used neural networks to improve the prediction of β-turns over what is predicted by the Chou-Fasman method.

The Hopfield net4 is noted for displaying associative memory, i.e., an entire pattern of data stored or "remembered" by the network can be retrieved through association by supplying the network with an appropriate piece of the pattern. Hopfield's use of an energy function for this network is related closely to
concepts in statistical mechanics, namely spin systems and Ising models. The statistical mechanics approach to protein structure, as characterized by these concepts, has been described most recently by Bryngelson, Friedrichs, Hopfield, and Wolynes.23-25 The preliminary work of Bryngelson & Hopfield24 involved using a neural network to learn the values for a model Hamiltonian which could be used to predict secondary structure from an amino acid sequence. Friedrichs & Wolynes25 reported the use of an associative memory Hamiltonian based on hydrophobicity patterns in the recognition of protein tertiary structure.

Finally, it should be noted that a hybrid approach combining neural networks with other methods may be able to draw on the strengths of each. Viswanadhan et al.26 used a combination of neural network models, information theory, homology modeling, and the Chou-Fasman method to predict the secondary structure of human folate binding protein.

Relating Chemical Properties to Biological Activity

Given the potential for use of neural networks in QSAR work, it is surprising how little has been published on this type of application. Stubbs27 described a neural network that used plasma half-life, pKa, daily dose, and molecular weight as inputs to predict gastrointestinal side effects of non-steroidal anti-inflammatory drugs. Aoyama et al.28 used neural networks to examine structure-activity relationships for a series of anticarcinogenic carboquinone derivatives and a series of antihypertensive arylacryloylpiperazine derivatives; they found that the neural network gave better results than the ALS method.29 A thorough comparison of neural networks against traditional statistical methods in QSAR is still needed.

Classifying and Identifying Chemical Spectra
One of the first reported uses of neural networks in spectroscopy was the incorporation of the SAVVY Signal Recognition System in the SACHEM software product offered by Sprouse Scientific Systems, Inc., and announced at the 1988 Pittsburgh Conference. SACHEM is chemical analysis software with provisions for spectral file normalization, archiving and searching, and mathematical manipulation and analysis of spectra. The SAVVY neural network was used in the searching of the spectral database. Because other Sprouse proprietary algorithms, specifically designed for examining infrared spectra and not based on general neural network technology, were able to outperform the SAVVY system,30 SACHEM now uses these algorithms instead of the SAVVY neural network.

Neural networks have also been used in pattern recognition of the proton NMR spectra of sugar alditols.31 Aoyama et al.32 found that using a neural network to predict the conformation of norbornene derivatives from 13C-NMR chemical shifts was more reliable than linear learning machine or cluster analysis methods.

Analyzing Nucleic Acid Sequences
Neural networks are being used in research involving the identification of specific functional segments of DNA sequences.33 Ezhov et al.34 and Lukashin et al.35 have used neural networks to recognize promoter regions on DNA sequences. Towell et al.36 tested a hybrid learning system called KBANN (Knowledge-Based Artificial Neural Networks) on promoter recognition and achieved superior results over other pattern recognition methods, including a standard back-propagation neural network. While training networks to recognize pre-messenger RNA splicing signals, Brunak et al.37,38 investigated anomalous results with several genes from the EMBL nucleotide sequence databank and found errors in the databank sequences. This led them to propose using neural networks for proofreading of databank entries.
APPLICATIONS OF NEURAL NETWORKS IN CHEMISTRY: PREVIEW FOR THIS ISSUE

The reports included in this issue describe further work in using neural networks to predict protein structure, as well as new applications in classifying chemical spectra and planning synthetic strategies in organic chemistry. In the area of protein structure prediction, Bryngelson et al.39 report further work on the use of a neural network to learn the parameters for a predictive energy model, while Friedrichs & Wolynes40 report further work using associative memory Hamiltonians for tertiary structure recognition. Wilcox et al.41 report preliminary results with very large neural networks to predict protein secondary and tertiary structure, using distance matrices and a heterologous training set. Curry & Rumelhart42 provide a thorough and extensive analysis of a complex neural network used to classify mass spectra.

Newly reported applications of neural networks in organic chemistry include a hybrid neural network/expert system developed and described by Luce & Govind,43 which performs retrosynthetic analysis for organic reactions. The neural network system under development by Elrod et al.44 also predicts organic reactions, but uses a different type of structural representation for training the network. Finally, Kirk et al.45 explain in a short communication how they have used a neural network to optimize an enzymatic synthesis.

CONCLUSION

Neural network technology is receiving renewed attention, and new applications in chemistry are rapidly being developed. The technology and its applications are not yet mature, and many applications are only in their early stages, but already we are learning how neural networks compare to other approaches in their ease of use, their predictive power, and their computational demands. Because a substantial number of researchers are involved in theoretical exploration of the technology, and because successful applications are being developed in non-scientific fields, we will continue to see cross-fertilization of techniques in new application areas such as chemistry. Future research in chemical applications can be expected to show more clearly the limitations of this technology and where it can most profitably be applied, and it will hopefully provide new insights into chemical research problems.

ACKNOWLEDGMENT

The suggestions of the referees are gratefully acknowledged. I wish to thank my colleagues at Procter & Gamble (Eric Suchanek, Franz Dill, Bill Laidig, and Morgan Griffith) for their support. The technical assistance at Norwich Eaton of Mr. Don Windsor and Ms. Roseann Randall is also greatly appreciated.

REFERENCES
1. McCulloch, W. S.; Pitts, W. H. "A Logical Calculus of the Ideas Immanent in Nervous Activity". Bull. Math. Biophys. 1943, 5, 115-133.
2. Widrow, B.; Smith, F. W. "Pattern-Recognizing Control Systems". In Computer and Information Sciences Symposium Proceedings; Spartan Books: Washington, DC, 1963.
3. Minsky, M. L.; Papert, S. A. Perceptrons; MIT Press: Cambridge, MA, 1969.
4. Hopfield, J. J. "Neural Networks and Physical Systems with Emergent Collective Computational Abilities". Proc. Natl. Acad. Sci. USA 1982, 79, 2554-2558.
5. Rumelhart, D. E.; McClelland, J. L.; PDP Research Group. Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vols. 1 & 2; MIT Press: Cambridge, MA, 1986.
6. McClelland, J. L.; Rumelhart, D. E. Explorations in Parallel Distributed Processing: A Handbook of Models, Programs, and Exercises; MIT Press: Cambridge, MA, 1988.
7. Patten, C.; Harston, C.; Maren, A.; Pap, R., Eds. Handbook of Neural Computing Applications; Academic Press: San Diego, CA, 1990.
8. Hecht-Nielsen, R. Neurocomputing; Addison-Wesley: Reading, MA, 1990.
9. Parker, D. B. "Learning-Logic". Report TR-47; Center for Computational Research in Economics and Management Science, Massachusetts Institute of Technology: Cambridge, MA.
10. Hopfield, J. J.; Tank, D. W. "'Neural' Computation of Decisions in Optimization Problems". Biol. Cybern. 1985, 52, 141-152.
11. Michalski, G. P. "Conventional, Symbolic & Neural Computing: A Primer". Artificial Intelligence Research; New Science Associates, Inc.: November 7, 1988; pp. 834-836.
12. Schwartz, D.; Jurik, M. "Neural Nets on a Personal Computer". PC AI 1988, 37-39, 66-71.
13. Suchanek, E. G. "Software Review: NeuralWorks Professional II". Tetrahedron Comput. Methodol., this issue.
14. Jurs, P. C.; Isenhour, T. L. Chemical Applications of Pattern Recognition; Wiley: New York, 1975.
15. Borman, S. "Neural Network Applications in Chemistry Begin to Appear". Chem. Eng. News 1989, April 24, 24-28.
16. Qian, N.; Sejnowski, T. J. "Predicting the Secondary Structure of Globular Proteins Using Neural Network Models". J. Mol. Biol. 1988, 202, 865-884.
17. Bohr, H.; Bohr, J.; Brunak, S.; Cotterill, R. M. J.; Lautrup, B.; Noerskov, L.; Olsen, O. H.; Petersen, S. B. "Protein Secondary Structure and Homology by Neural Networks. The α-Helices in Rhodopsin". FEBS Lett. 1988, 241, 223-228.
18. Holley, L. H.; Karplus, M. "Protein Secondary Structure Prediction with a Neural Network". Proc. Natl. Acad. Sci. USA 1989, 86, 152-156.
19. Sejnowski, T. J.; Rosenberg, C. R. "Parallel Networks That Learn to Pronounce English Text". Complex Systems 1987, 1, 145-168.
20. LaRosa, G. L.; Davide, J. P.; Weinhold, K.; Waterbury, J. A.; Profy, A. T.; Lewis, J. A.; Langlois, A. J.; Dreesman, G. R.; Boswell, R. N.; Shadduck, P.; Holley, L. H.; Karplus, M.; Bolognesi, D. P.; Matthews, T. J.; Emini, E. A.; Putney, S. D. "Conserved Sequence and Structural Elements in the HIV-1 Principal Neutralizing Determinant". Science 1990, 249, 932-935.
21. Bohr, H.; Bohr, J.; Brunak, S.; Cotterill, R. M. J.; Fredholm, H.; Lautrup, B.; Petersen, S. B. "A Novel Approach to Prediction of the 3-Dimensional Structures of Protein Backbones by Neural Networks". FEBS Lett. 1990, 261, 43-46.
22. McGregor, M. J.; Flores, T. P.; Sternberg, M. J. E. "Prediction of β-Turns in Proteins Using Neural Networks". Protein Eng. 1989, 2, 521-526.
23. Bryngelson, J. D.; Wolynes, P. G. "Spin Glasses and the Statistical Mechanics of Protein Folding". Proc. Natl. Acad. Sci. USA 1987, 84, 7524-7528.
24. Bryngelson, J. D.; Hopfield, J. J. "Learning a Hamiltonian for Protein Folding". Abstract COMP-28, 197th ACS National Meeting, Dallas, 1989.
25. Friedrichs, M. S.; Wolynes, P. G. "Toward Protein Tertiary Structure Recognition by Means of Associative Memory Hamiltonians". Science 1989, 246, 371-373.
26. Viswanadhan, V. N.; Weinstein, J. N.; Elwood, P. C. "Secondary Structure of the Human Membrane-Associated Folate Binding Protein Using a Joint Prediction Approach". J. Biomol. Struct. Dyn. 1990, 7, 985-1001.
27. Stubbs, D. F. "A Neurocomputer Using Chemical Properties to Predict Drug Safety". Abstract COMP-30, 197th ACS National Meeting, Dallas, 1989.
28. Aoyama, T.; Suzuki, Y.; Ichikawa, H. "Neural Networks Applied to Structure-Activity Relationships". J. Med. Chem. 1990, 33, 905-908.
29. Moriguchi, I. In Structure-Activity Relationship - Quantitative Approaches; Fujita, T., Ed.; Nankodo: Tokyo, Japan, 1986; Ch. 9.
30. Jim Sprouse, president of Sprouse Scientific Systems, personal communication.
31. Thomsen, J. U.; Meyer, B. "Pattern Recognition of the 1H NMR Spectra of Sugar Alditols Using a Neural Network". J. Magn. Reson. 1989, 84, 212-217.
32. Aoyama, T.; Suzuki, Y.; Ichikawa, H. "Neural Networks Applied to Pharmaceutical Problems. I. Method and Application to Decision Making". Chem. Pharm. Bull. 1989, 37, 2558-2560.
33. Lapedes, A.; Barnes, C.; Burks, C.; Farber, R.; Sirotkin, K. "Application of Neural Networks and Other Machine Learning Algorithms to DNA Sequence Analysis". In Computers and DNA, Santa Fe Institute Studies in the Science of Complexity VII; Addison-Wesley: Reading, MA, 1989.
34. Ezhov, A. A.; Kalambet, Yu. A.; Cherny, D. I. "Neuron Network for the Recognition of E. coli Promoters". Stud. Biophys. 1989, 129, 183-192.
35. Lukashin, A. V.; Anshelevich, V. V.; Amirikyan, B. R.; Gragerov, A. I.; Frank-Kamenetskii, M. D. "Neural Network Models for Promoter Recognition". J. Biomol. Struct. Dyn. 1989, 6, 1123-1133.
36. Towell, G. G.; Shavlik, J. W.; Noordewier, M. O. "Refinement of Approximate Domain Theories by Knowledge-Based Neural Networks". Proceedings AAAI-90, Vol. 2, 861-866, 1990.
37. Brunak, S.; Engelbrecht, J.; Knudsen, S. "Letter: Cleaning up Gene Databases". Nature 1990, 343, 123.
38. Brunak, S.; Engelbrecht, J.; Knudsen, S. "Computerized Proofreading of Genetic Databank Entries". Preprint.
39. Bryngelson, J. D.; Hopfield, J. J.; Southard, S. N., Jr. "A Protein Structure Predictor Based on an Energy Model with Learned Parameters". Tetrahedron Comput. Methodol., this issue.
40. Friedrichs, M. S.; Wolynes, P. G. "Molecular Dynamics of Associative Memory Hamiltonians for Protein Tertiary Structure Recognition". Tetrahedron Comput. Methodol., this issue.
41. Wilcox, G. L.; Poliac, M.; Liebman, M. N. "Neural Network Analysis of Protein Tertiary Structure". Tetrahedron Comput. Methodol., this issue.
42. Curry, B.; Rumelhart, D. E. "MSnet: A Neural Network Which Classifies Mass Spectra". Tetrahedron Comput. Methodol., this issue.
43. Luce, H. H.; Govind, R. "Neural Network Applications in Synthetic Organic Chemistry: I. A Hybrid System Which Performs Retrosynthetic Analysis". Tetrahedron Comput. Methodol., this issue.
44. Elrod, D. W.; Maggiora, G. M.; Trenary, R. G. "Application of Neural Networks in Chemistry. 2. A General Connectivity Representation for the Prediction of Regiochemistry". Tetrahedron Comput. Methodol., this issue.
45. Kirk, O.; Barfoed, M.; Björkling, F. "Application of a Neural Network in the Optimization of an Enzymatic Synthesis". Tetrahedron Comput. Methodol., this issue.