Very large scale integration

facilities and to describe a PDP-8. Three refinement techniques for hardware design (structural decomposition, data flow refinement and data representation refinement) were explained by C.K.C. Leung (M.I.T., U.S.A.), who used a model in which hardware is organized into modules that communicate by sending packets to each other. In the lecture "On A Top-Down Design Methodology for Packet Systems", these techniques were illustrated in a top-down design example using ADL as the design language.

S. Klein and S. Sastry (USC Information Sciences Institute, U.S.A.) presented a simple formalism that effectively unifies structural and simulation-oriented hardware descriptions. In their lecture, "Parameterized Modules and Interconnections in Unified Hardware Descriptions", they additionally showed an algorithmic method for handling classes of interconnections and modules and included several examples (an illustrative sketch appears later in this report, at the end of this section).

The system "Object Oriented Description Environment for Computer Hardware" (OODE) was presented by A. Takeuchi (Mitsubishi Electric Corporation, Japan). OODE is based on object-oriented languages and achieves a high degree of module structure. Takeuchi described every module from three points of view: behavioral, structural and conceptual.

(5) Applications and Comparison of CHDLs

In the first section of the lecture entitled "The CAP/DSDL System: Simulator and Case Study", R.J. Dachauer (Siemens AG, F.R.G.), K. Groening, K.-D. Lewke and F.J. Rammig (University of Dortmund, F.R.G.) showed how the CAP/DSDL system can be used to support the design of an imaginary 16-bit computer in an early design phase. After this introduction to the capabilities of the language, they discussed some principles of an advanced CHDL simulator, principles which were used in particular to implement a CAP/DSDL simulator.

The MICroprogrammed filter Engine (MICE) is a fast, microprogrammable processor built with ECL bit slices, intended primarily as an on-line data filtering engine for high energy physics experiments. A. van Dam (Brown University, U.S.A.), M. Barbacci (Carnegie-Mellon University, U.S.A.), C. Halatsis (NRC Democritos, Greece), J. Joosten and M. Letheren (CERN, Switzerland) demonstrated the advantages of high level software tools and the thought processes that permitted the successful development of a MICE model in a very short period of time.

A.K. Singh and J.H. Tracey (Kansas State University, U.S.A.) identified a set of language features that can be useful in making comparisons among the many CHDLs which have been developed in recent years. They chose a representative set of CHDLs to illustrate the use of this comparison method. Their lecture was entitled "Development of Comparison Features for Computer Hardware Description Languages".
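The idea behind parameterized modules and algorithmic handling of interconnection classes can be conveyed with a small sketch. This is not the Klein/Sastry formalism: the module representation and all names below are invented for illustration.

```python
# Illustrative sketch only: a parameterized structural description in the
# spirit of unified hardware descriptions. Modules are plain dictionaries;
# one rule wires an entire class of interconnections (the carry chain).

def full_adder(name):
    """Return a leaf module as (name, ports)."""
    return {"name": name, "ports": ["a", "b", "cin", "sum", "cout"]}

def ripple_adder(n):
    """Parameterized module: instantiate n full adders and generate the
    carry-chain interconnections algorithmically."""
    cells = [full_adder(f"fa{i}") for i in range(n)]
    nets = [(f"fa{i}.cout", f"fa{i+1}.cin") for i in range(n - 1)]
    return {"name": f"adder{n}", "cells": cells, "carry_chain": nets}

if __name__ == "__main__":
    adder = ripple_adder(4)
    for src, dst in adder["carry_chain"]:
        print(src, "->", dst)
```

The point of the parameter n is that a single description covers the whole family of adders; the interconnection rule, not an explicit netlist, is what the designer writes.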

(6) Microprogramming and Control

M. Mezzalama and P. Prinetto (Politecnico di Torino, Italy) reported on a microprogram assembler, i.e. a meta-microassembler, which can assemble code for many classes of user-defined target machines described in an ad hoc firmware description language. According to the lecturers, its most important feature is a 'definition phase' in which the description of the actual microinstruction set of the target machine at hand is given (a small sketch of this two-phase scheme appears at the end of this report).

In the lecture "Statistical Studies of Horizontal Microprograms", P. Marwedel (Christian-Albrechts-Universität, F.R.G.) discussed properties which are relevant to the design of microarchitectures. He quantified the speedup obtained by microinstruction pipelining and by different timing mechanisms, and gave some guidelines for the implementation of condition logic. A software system for the design of digital processors, based on a computer hardware design language, was used to find these statistical properties.

P. Keresztes and D. Pacher (Hungarian Academy of Sciences, Hungary) discussed the "Step-Assignment Method Using Decomposition for Realizing AHPL-Like Control Sequence Descriptions". They showed that, by introducing a new definition of partition pairs on the set of steps of a parallelism-free control sequence description, the classical decomposition methods of switching theory can be applied.

The proceedings of this Symposium have been edited by M. Breuer and R. Hartenstein and published by North-Holland Publishing Company under the title "Computer Hardware Description Languages and Their Applications" (1981. 306 pages. ISBN 0-444-86279-X. Price: US $36.25/Dfl. 85.00).
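The 'definition phase' reported by Mezzalama and Prinetto can be conveyed with a minimal sketch. This is not their tool: the microinstruction field layout below is an invented example target machine, and the packing scheme is a generic one.

```python
# Minimal sketch of a meta-microassembler's two phases. The field names,
# widths and encodings below are an invented example target machine.

FIELDS = {            # definition phase: field -> (bit offset, width, codes)
    "alu": (0, 3, {"NOP": 0, "ADD": 1, "SUB": 2, "AND": 3}),
    "src": (3, 2, {"R0": 0, "R1": 1, "MEM": 2}),
    "dst": (5, 2, {"R0": 0, "R1": 1}),
}

def assemble(micro_op):
    """Assembly phase: pack symbolic micro-operations into one word."""
    word = 0
    for field, symbol in micro_op.items():
        offset, width, codes = FIELDS[field]
        code = codes[symbol]
        assert code < (1 << width), f"{symbol} does not fit in {field}"
        word |= code << offset
    return word

print(bin(assemble({"alu": "ADD", "src": "R1", "dst": "R0"})))  # 0b1001
```

In a real meta-microassembler the definition phase would itself be parsed from a firmware description language; here it is simply a Python table.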

Very Large Scale Integration

VLSI 81, the first European Conference dedicated to all the subjects involved in the exploitation of silicon as a systems implementation medium, was held in Edinburgh from August 18-21, 1981. According to John P. Gray, the Chairman of the Programme Committee, it has only recently become apparent, due to the pioneering work of Mead, that this emerging area of research embraces a very wide range of disciplines, from device physics to branches of discrete mathematics. Organized by the University of Edinburgh Departments of Computer Science and Electrical Engineering and the Wolfson Microelectronics Institute, with the assistance of CEP Consultants, Ltd., VLSI 81 aimed to reflect this diversity by putting together a broad programme. The fields of interest covered were the application of discrete mathematics to VLSI systems, novel architectures, design methodologies, design tools, applications of VLSI systems and the design of circuits. Special emphasis was given to theoretical aspects of the subject and to work which bridges the gap between disciplines. A report on VLSI 81 is presented below.

Session 1

In the first lecture presented at the conference, entitled "VLSI and Technological Innovations", C.A. Mead (California Institute of Technology, U.S.A.) discussed what he believed to be the most important opportunity since the industrial revolution, a circumstance created by the emerging VLSI technology, with which enormously complex digital electronic systems can be fabricated on a single chip of silicon one-tenth the size of a postage stamp.

In the lecture "Generalized IC Layout Rules and Layout Representations", C.H. Séquin (University of California, Berkeley, U.S.A.) presented a simple set of generalized rules that are formalized independently of a particular fabrication sequence, emphasizing those tolerances that are common to many different processes.

E.E. Barton (Inmos Ltd., U.K.) illustrated "A Non-Metric Design Methodology for VLSI", a self-timed design methodology built on abstraction, which limits the number of details on hand during the stages of a design.

A description of a top-down design technique, used on a 100,000-transistor 32-bit CPU, which provides accurate information about interconnect wiring and optimum use of chip area right from the start of the design process, was given by R.H. Krambeck et al. (Bell Laboratories, U.S.A.).

In the lecture "Synthesis and Control of Signal Processing Architectures Based on Rotations", H.M. Ahmed and M. Morf (Stanford University, U.S.A.) presented a microprogram control strategy for a dual-processor speech and signal processing chip which utilizes the CORDIC algorithms (a sketch of the underlying rotation iteration follows this session's summary).

J.A. Marques (INESC, Portugal), in his lecture entitled "MOSAIC: A Modular Architecture for VLSI System Circuits", described a distributed architecture for implementing the special class of system circuits, a fundamental part of the MOSAIC methodology.
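For background, the CORDIC algorithms mentioned above compute rotations using only shifts, adds and a small table of arctangents, which is what makes them attractive for a signal processing chip. The following is the standard textbook iteration, not the Ahmed/Morf control strategy.

```python
import math

def cordic(angle, iterations=32):
    """Rotate (1, 0) through `angle` radians (within roughly +/-1.74 rad)
    using shift-and-add steps; returns (cos(angle), sin(angle))."""
    atans = [math.atan(2.0 ** -i) for i in range(iterations)]
    k = 1.0                                  # accumulated gain correction
    for i in range(iterations):
        k *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = 1.0, 0.0, angle
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0          # rotate toward residual angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * atans[i]
    return x * k, y * k

print(cordic(math.pi / 6))   # approximately (0.8660, 0.5000)
```

In hardware the multiplications by 2**-i are wired shifts, so each iteration costs only additions; that is the property the lecture's architectures exploit.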

Session 2

The thesis of the lecture "The VLSI Challenge: Complexity Bridling" by M. Rem (Eindhoven University, The Netherlands) was that complexity control, a conditio sine qua non, will lead to hierarchical structures. Effective reasoning about hierarchical structures, he said, requires us to learn how to specify the net effects of components in a formalism that is independent of the way in which they realize those net effects.

In a lecture entitled "Recognize Regular Languages with Programmable Building-Blocks", M.J. Foster and H.T. Kung (Carnegie-Mellon University, U.S.A.) introduced a new programmable building-block for the recognition of regular languages.
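The flavour of cell-based recognition can be conveyed in software. This sketch is purely illustrative: the Foster/Kung building-blocks are hardware cells that compose to handle arbitrary regular expressions, whereas this version hard-wires a simple chain of programmable character classes (the shift-and scheme).

```python
# Software analogue of a chain of programmable recognizer cells: cell i
# ANDs cell i-1's previous state with its own character test, so a True
# in the last cell marks the end of a match.

def recognize(pattern, text):
    classes = [set(c) for c in pattern]   # one "programmed" cell per position
    state = [False] * len(classes)
    hits = []
    for pos, ch in enumerate(text):
        for i in reversed(range(len(classes))):   # shift the chain
            left = state[i - 1] if i > 0 else True
            state[i] = left and ch in classes[i]
        if state[-1]:
            hits.append(pos)                      # a match ends here
    return hits

print(recognize("aba", "cabababc"))   # [3, 5]
```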


The description of "A Very Simple Model of Sequential Behaviour of nMOS", a notation strongly oriented to behaviour that depends on the storage of charge on electrically isolated gates, was provided by M. Gordon (University of Cambridge, U.K.).

H. Chang et al. (IBM T.J. Watson Research Center and Carnegie-Mellon University, U.S.A.) presented "Magnetic-Bubble VLSI Integrated Systems", embodied in a systolic-array string-pattern matching chip, which receives and generates signals in the form of bubbles. They claim that it is also possible to implement all-bubble conservative logic, intelligent memory chips, parallel and pipelined processors, PLAs, etc.

Session 3

B. Collins and A. Gray (Inmos Ltd., U.K.) described a hardware description language and simulation system, currently in use at Inmos, which are intended to support the development of very large MOS circuits. They also gave some of the considerations influencing their design.

In their lecture, "A Pragmatic Approach to Topological Symbolic IC Design", N. Weste and B. Ackland (Bell Laboratories, U.S.A.) examined a proven design system (MULGA) for the design-rule free symbolic layout and verification of MOS integrated circuits. Special attention was given to the chip assembly phase of the design process.

The architectural methodology presented by R.F. Lyon (Xerox Palo Alto Research Center, U.S.A.) is built on top of the logic, circuit, timing and layout levels of the VLSI system design methodology presented by Mead and Conway (1980). It includes a large component that is independent of the underlying technology; a description was provided in the lecture "A Bit-Serial VLSI Architectural Methodology for Signal Processing".

J.-P. Banâtre, P. Frison and P. Quinton (IRISA, France) studied a word-spotting algorithm that allows the comparison of words of a given vocabulary with the outputs of the phonemic analysis of a pronounced sentence. Their lecture was entitled "A Network for the Detection of Words in Continuous Speech".

According to P.B. Denyer (University of Edinburgh, U.K.) and D.J. Myers (British Telecom Research Laboratories, U.K.), it is well known that parallel multiply functions may be implemented in a regular array of carry-save adders. In their lecture they showed an extension of the concept to very large carry-save arrays that implement complete filter functions on high-speed parallel data.
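The carry-save principle underlying such arrays is easy to state: three addends are compressed to a sum word and a carry word without propagating carries, so one partial product can be absorbed per array row and only a single carry-propagate addition is needed at the end. A minimal software sketch, with the per-bit hardware behaviour mimicked by integer bit operations:

```python
# Carry-save reduction for an unsigned multiply, written as bit-twiddling
# rather than as a hardware description.

def carry_save_add(x, y, z):
    """Compress three addends into a (sum, carry) pair: no carry ripple."""
    s = x ^ y ^ z                             # per-bit sum
    c = ((x & y) | (x & z) | (y & z)) << 1    # per-bit carries, shifted
    return s, c

def csa_multiply(a, b, bits=8):
    s, c = 0, 0
    for i in range(bits):
        pp = (a if (b >> i) & 1 else 0) << i  # i-th partial product row
        s, c = carry_save_add(s, c, pp)
    return s + c                              # final carry-propagate add

print(csa_multiply(13, 11), 13 * 11)          # both print 143
```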

Session 4

R.C. Mosteller (California Institute of Technology, U.S.A.) described "REST - A Leaf Cell Design System", a simple design system in which the interfaces between parts use standard text files.

"An Algebraic Approach to VLSI Design" was proposed by L. Cardelli and G. Plotkin (University of Edinburgh, U.K.). VLSI networks were described by expressions of a many-sorted nMOS algebra, and the algebraic operators were designed to support a structural methodology.

J. Batali et al. (MIT Artificial Intelligence Laboratory, U.S.A.) examined "The DPL/Daedalus Design Environment", an interactive VLSI design system implemented at the MIT Artificial Intelligence Laboratory.

In the lecture entitled "Regular Programmable Control Structures", D.J. Kinniment (University of Newcastle upon Tyne, U.K.) presented techniques which enable the implementation of writeable PLAs in NMOS technology, together with regular structures for parallel asynchronous control systems.
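For context, a PLA computes each output as an OR of programmable AND terms over the inputs; making the planes writeable amounts to making the two 'personality' tables below loadable at run time. The sketch is a generic PLA model with an invented personality, not Kinniment's NMOS circuits.

```python
# Generic two-plane PLA model: the AND plane selects product terms over
# the inputs, the OR plane combines terms into outputs. The example
# personality computes XOR and AND of two inputs.

AND_PLANE = [                    # one row per product term
    {"a": 1, "b": 0},            # a AND (NOT b)
    {"a": 0, "b": 1},            # (NOT a) AND b
    {"a": 1, "b": 1},            # a AND b
]
OR_PLANE = {"xor": [0, 1], "and": [2]}   # output <- OR of term indices

def pla(inputs):
    terms = [all(inputs[name] == want for name, want in row.items())
             for row in AND_PLANE]
    return {out: any(terms[i] for i in rows)
            for out, rows in OR_PLANE.items()}

print(pla({"a": 1, "b": 0}))   # {'xor': True, 'and': False}
```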

Session 5

J.C. Mudge (CSIRO, Australia), in his lecture "VLSI Chip Design at the Crossroads", discussed (1) formal composition systems, (2) design representations and (3) the interface between design and fabrication, areas in which there will have to be significant progress if we are to harness the technology available to us.

In the lecture "A Hierarchical Design Analysis Front End", T. Whitney (California Institute of Technology, U.S.A.) presented a design style aimed at reducing the complexity of designing a VLSI circuit, and then presented an algorithm which exploits this design style in order to reduce the computational complexity of analyzing a large design.

In the lecture that followed, C.R. Rupp (Digital Equipment Corporation, U.S.A.) described an experimental silicon compiler system called DEA (DEsign Architecture) which emphasizes the similarities and dissimilarities between hardware (silicon) and software compilation.

An "Overview of the CHiP Computer" was presented by L. Snyder (Purdue University, U.S.A.), who also discussed the capabilities and limitations of the components of CHiP computers in detail.

K.F. Smith (University of Utah, U.S.A.) examined the "Implementation of SLA's in NMOS Technology". The technique he studied allows the system designer to visualize the logical description of a system in terms of the physical layout of the IC, a technique which he calls 'visual perception' of a logical design.

Under the assumption that it is more effective to design a machine whose architecture takes advantage of some of the special characteristics of physical design algorithms than to wait for a new generation of high speed general purpose computers, S.J. Hong, R. Nair and E. Shapiro (IBM T.J. Watson Research Center, U.S.A.) considered the wiring process in their lecture, "A Physical Design Machine".

Session 6

B. Chazelle and L. Monier (Carnegie-Mellon University, U.S.A.) investigated the actual performance of well-known circuits in new models and suggested designs which meet criteria of "Optimality in VLSI". They showed in particular that many complicated schemes falsely believed to be efficient can be advantageously replaced by simpler and higher performance designs.

In the lecture entitled "Chip Bandwidth Bounds by Logic-Memory Tradeoffs", R.H. Kuhn (Northwestern University, U.S.A.) explored the limitations imposed on chip architecture by increasing the scale of integration.

F.T. Leighton and G.L. Miller (Massachusetts Institute of Technology, U.S.A.) described in their lecture techniques for finding good layouts for small shuffle-exchange graphs (the defining connections of these graphs are sketched after this session's summary). They commented that although the techniques do not yet constitute a general procedure for finding truly optimal layouts for all shuffle-exchange graphs, they can be used to find very good layouts for small ones.

W.E. Donath (IBM T.J. Watson Research Center, U.S.A.) and W.F. Mikhail (IBM General Technology Division, U.S.A.) treated the mathematical model and derivations, and then the experimental results, of "Wiring Space Estimation for Rectangular Gate Arrays", an area of crucial importance in VLSI chip design.
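As promised above, the shuffle-exchange graph itself is easy to define: on n = 2**k nodes, each node connects to its left bit-rotation (shuffle) and to the node differing in the lowest bit (exchange). A small generator, purely for reference; the layout problem the lecture addresses is of course the hard part.

```python
def shuffle_exchange_edges(k):
    """Edges of the shuffle-exchange graph on n = 2**k nodes: each node v
    links to rot-left(v) (shuffle) and to v with bit 0 flipped (exchange)."""
    n = 1 << k
    shuffle = [(v, ((v << 1) | (v >> (k - 1))) & (n - 1)) for v in range(n)]
    exchange = [(v, v ^ 1) for v in range(n)]
    return shuffle, exchange

s, e = shuffle_exchange_edges(3)    # the 8-node graph
print(s)    # e.g. (3, 6): 011 -> 110 under a left rotation
```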

Session 7

In the first lecture of this session, "Impact of Technology on the Development of VLSI", M.W. Larkin (Plessey Solid State Division, U.K.) reviewed the progress made over the past 20 years, which has brought us from discrete semiconductor devices to VLSI components containing up to 100,000 active elements. It is possible, he concluded, that future developments in VLSI may be more constrained by computer-aided design and testing than by the physical limitations of processing.

K.D. Mueller-Glaser and L. Lerach (Siemens AG, F.R.G.) presented "A General Cell Approach for Special Purpose VLSI-Chips", a methodology designed by the Siemens Data Processing Department for the economic design of customized VLSI chips containing 5000 to 20,000 gate functions plus on-chip RAM or ROM, produced in relatively low volumes of several thousand pieces a year.

"A Switch-Level Model of MOS Logic Circuits" was presented by R.E. Bryant (California Institute of Technology, U.S.A.). This new logic model more closely matches MOS circuit technology and hence can describe the logical behavior of a wide variety of MOS logic circuits in a very direct way.
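The contrast with gate-level models can be seen in a toy evaluator in which transistors are bidirectional switches and values spread through closed switches. This is a drastic simplification of Bryant's model, which additionally handles signal strengths, stored charge and the unknown value X.

```python
# Toy switch-level evaluation: a node's value comes from any driven node
# it is connected to through closed switches; there are no logic gates.

def evaluate(transistors, driven):
    """transistors: list of (gate, src, drn) n-type switches;
    driven: dict of externally forced node values (supplies, inputs)."""
    values = dict(driven)
    changed = True
    while changed:                       # propagate to a fixed point
        changed = False
        for gate, src, drn in transistors:
            if values.get(gate) == 1:    # switch is closed
                for a, b in ((src, drn), (drn, src)):
                    if a in values and b not in values:
                        values[b] = values[a]
                        changed = True
    return values

# A pass transistor: out follows inp while the clock is high.
print(evaluate([("clk", "inp", "out")], {"clk": 1, "inp": 0}))  # out -> 0
```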

In the lecture entitled "Failure Mechanisms, Fault Hypotheses and Analytical Testing of LSI-NMOS (HMOS) Circuits", B. Courtois (Laboratoire Informatique et Mathématiques Appliquées, France) was concerned with a classification of failure mechanisms, followed by its application at both the electrical and the logical level for N-MOS (H-MOS) technology.

J.P. Roth (IBM T.J. Watson Research Center, U.S.A.) presented methods of automatic logic design, methods for verifying a logic design and a new method of design which obviates the need for LSSD for diagnosis. In his lecture, "Automatic Synthesis, Verification and Testing", he also gave a method for partitioning a logic design which converts a VLSI diagnosis job into an LSI diagnosis job.

A formal method for the design of testable hardware was proposed by A.C. Parker (University of Southern California, U.S.A.) and L.J. Hafer (Carnegie-Mellon University, U.S.A.) in their lecture, "Automating the Design of Testable Hardware". The method uses an algebraic model of register-transfer (RT) behavior which the lecturers have developed.

The proceedings of VLSI 81 have been edited by John P. Gray and are available from Academic Press Inc., Ltd., 24-28 Oval Road, London NW1 7DX, England (1981. xiv + 364 pages. ISBN 0-12-296860-3. Price: £15/$36).

Computer Program Testing

The Sogesta Summer School, held this year from June 29 to July 3, 1981 in Urbino, Italy, concentrated on computer program testing. The lectures delivered at the school provided a comprehensive, tutorial discussion of the current state of the art as well as of research directions in the area of testing computer programs. They covered a wide spectrum of topics: from theoretical notions, through practical issues in testing programs and large software systems, to integrated environments and tools for performing a variety of tests. The lecturers were all active and recognized researchers and practitioners in the field. A report on the Summer School on Computer Program Testing is featured below.

Introductory Concepts

In the first lecture, entitled "Computer Program Testing - An Introduction", S.H. Zweben (The Ohio State University, U.S.A.) presented a general overview of basic terminology and philosophies in computer program testing. Some problems associated with many of the commonly suggested strategies were also discussed.

L.J. White (The Ohio State University, U.S.A.) surveyed some of the "Basic Mathematical Definitions and Results in Testing". In order to study concepts of program structure, digraphs were introduced as a potential model. White also reviewed the papers of Goodenough/Gerhart and Howden, which most researchers in the area of program testing agree comprise the basis for a theory of testing.
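The digraph model White refers to represents a program's statements (or basic blocks) as vertices and possible control transfers as arcs, so candidate test paths are walks from entry to exit. A minimal sketch over an invented graph:

```python
# A program's control flow as a digraph; path-based test strategies then
# select entry-to-exit walks. The tiny if-then-else graph is illustrative.

CFG = {
    "entry": ["test"],
    "test": ["then", "else"],
    "then": ["exit"],
    "else": ["exit"],
    "exit": [],
}

def paths(node, goal, prefix=()):
    """Enumerate all acyclic entry-to-exit paths (test path candidates)."""
    prefix = prefix + (node,)
    if node == goal:
        yield prefix
        return
    for succ in CFG[node]:
        if succ not in prefix:           # keep paths acyclic
            yield from paths(succ, goal, prefix)

for p in paths("entry", "exit"):
    print(" -> ".join(p))
```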

Aspects of Program Testing

A tutorial introduction to static program checking was given by C. Ghezzi (Università di Padova and Politecnico di Milano, Italy). He showed how programming languages influence the amount of checking that can be done statically on programs, and also how recent programming languages are designed with the express goal of supporting extensive static checking. In his lecture, "Levels of Static Program Validation", he also introduced higher order forms of static checking provided by data flow analyzers and symbolic executors. Ghezzi finally discussed how static checking tools can be integrated in a coherent development system.

Data flow analysis has been shown to be a useful tool in demonstrating the presence or absence of certain significant classes of programming errors. According to L.J. Osterweil, L.D. Fosdick (University of Colorado, U.S.A.) and R.N. Taylor (University of Victoria, Canada), it is an important software verification technique, as it is inexpensive and dependably detects a well defined and useful class of anomalies. Work to this point has been directed at the analysis of a small but diverse assortment of errors and anomalies. The lecturers believe, however, that a larger assortment could be studied, and they presented a conceptual framework for doing so in their lecture entitled "Error and Anomaly Diagnosis through Data Flow Analysis".

In the lecture "Symbolic Evaluation Methods - Implementations and Applications", L.A. Clarke and D.J. Richardson (University of Massachusetts, U.S.A.) described symbolic evaluation, a program analysis method that concisely represents a program's computations and input domain by symbolic expressions. The general concepts were explained and three related methods of symbolic evaluation were described in detail. Examples of all three methods were given, and each method's implementation approach, applications and limitations were explained, as well as the status of current research in the area.

L.J. White, E.I. Cohen and S.J. Zeil (The Ohio State University, U.S.A.) presented a testing strategy designed to detect errors in the control flow of a computer program, and characterized the conditions under which this strategy is reliable. In their lecture, "A Domain Strategy for Computer Program Testing", they described a new method for deciding whether an additional path should be tested when a number of paths have already been tested, or whether no additional information can be gained by testing this path.

According to W.E. Howden (University of California, San Diego, U.S.A.), the basic steps in functional testing are the identification of the functions that are supposed to be computed by a program or system, the identification of the important classes of input and output data for the functions, and the selection of test data. In his lecture, "Errors, Design Properties and Functional Program Tests", he attempted to set firm guidelines for the application of the method.

T.A. Budd (University of Arizona, U.S.A.) gave an introduction to the ideas of mutation analysis, a method for measuring test data quality. The method was
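Mutation analysis, as introduced in Budd's lecture, measures test data quality by the fraction of small syntactic program variants ("mutants") the data can distinguish from the original. A minimal sketch, with an invented toy function and hand-written mutants:

```python
# Minimal mutation-analysis sketch: score test data by how many mutants
# it "kills", i.e. distinguishes from the original program.

def original(a, b):
    return a + b

mutants = [                      # each mutant perturbs one operator
    lambda a, b: a - b,
    lambda a, b: a * b,
    lambda a, b: a + b + 1,
]

def mutation_score(tests):
    killed = sum(
        any(m(*t) != original(*t) for t in tests)   # some test exposes m
        for m in mutants)
    return killed / len(mutants)

print(mutation_score([(0, 0)]))          # weak data kills only 1 of 3
print(mutation_score([(0, 0), (2, 3)]))  # stronger data kills all 3
```

A low score signals that the test data would also miss comparable real faults, which is the sense in which the method measures quality.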