National Computer Conference — personal computers give it a new look


The largest NCC ever was held in the largest covered exhibition centre in the world in the second largest state in the USA, yet the small computer held the attention. Simon Middelboe reports.


7-10 June 1982. Astrodomain, Houston, Texas, USA. Organized by AFIPS (the American Federation of Information Processing Societies). The NCC (National Computer Conference) has once again shown the speed of movement in the dynamic computing industry. Two years ago, personal computers were nowhere to be found. Last year in Chicago, they were there, but separated from the mainstream. This year, the personal computer dominated the show. The most popular conference sessions were in the personal computing stream and the most popular stands were those showing personal computers, and they were great in number and size. Nearly 100 000 people were entertained to a feast of new personal computers, with over 20 of them among the 650-odd exhibitors. They hailed from different countries and different types of companies, often from those recognized in other fields of electronics. DEC and Wang have joined IBM in the professional ($3000) market, and the Japanese component manufacturers Hitachi and Toshiba have joined Sony in the same 16-bit microcomputer market.

Lots of cheap storage

Though the vibrant inexpensive microcomputer market appears to have dominated the NCC, the proliferation of manufacturers offering cheap mass storage on floppy and Winchester discs was of equal importance. The industry's forecasters are in no doubt that the professional microcomputer will become an essential part of the office of the future. However, their assumptions depend as much on the ability to provide inexpensive data storage which can be accessed very quickly as they do on the availability of cheaper and better business computers. The strategy in office automation is now one which affects manufacturers of mainframes, minis and micros alike. One of the elements which will affect how the strategy proceeds, and one which provoked much discussion again at the NCC, is the issue of local area networks.

Network dilemma

The argument, broadband against baseband, which blew up last year with the prediction that Ethernet would fail and take Xerox, DEC and Intel with it, still rages, though the threat to Xerox and the others has all but subsided. Many of the new professional computers have built-in network interfaces. Some favour baseband and Ethernet, others favour broadband and Wangnet. The IEEE standards committee are equally undecided. Their draft (Standard 802) recognizes broadband and baseband, standardizing for both. One of the reasons that nobody really wants to commit themselves is IBM. If IBM does release a local network, it will almost certainly become a de facto standard, and IBM is not telling.

Conference sessions

There was a comprehensive array of sessions and speakers at the four-day conference. Divided into eight streams running concurrently, there were over 100 different sessions, each lasting about one and a half hours. Within each session there were on average about three speakers, giving different points of view on the same subject, in some cases provoking intense discussion. There were four streams of particular interest: hardware/computer architecture, software engineering, personal computing and office systems. In the first two streams, sessions covered single-chip microcomputer advances, distributed 16-bit processing, systems in CMOS, microprogramming and firmware engineering, software engineering and parallel programming.

Hardware/computer architecture

The pace of development of computer hardware continues unabated. These sessions cover developments from the smallest systems that can fit on a single chip all the way to the largest of the supermicrocomputers. The single-chip microcomputer is becoming increasingly important in the computing world since it represents the potential for placing large amounts of computing power at everyone's fingertips. The position the single-chip microcomputer holds in the chip manufacturers' list of priorities was emphasized by the presence of Motorola, Rockwell International and Texas Instruments at the session, chaired by Motorola applications engineering manager, Charlie Melear. Higher performance does not merely mean improved memory map expansion, said Motorola's Ed Peatrowsky. It also means improved features and functions, along with increased versatility in applications. Requirements in industrial control, communications, automotive and other applications demand more and more powerful single-chip computers, said Peatrowsky.

Family architecture

For Motorola, the way to meet this demand realistically, both in time and cost, is by way of an ever-expanding family of microcomputers. The advantage of a family is in the software and the increased integration. The software is always upwards compatible, so higher level single-chip microcomputers can always execute existing code. By having a family of chips which use many of the same components, it is possible to have increased integration. This not only gives increased reliability, because of the reduced number of chips on board, but also reduces the total system cost. As an example, Peatrowsky detailed the 6801 family of single-chip microcomputers. There are, however, three major factors which limit advancement, warned Rockwell applications engineer, Randy Dumse.


Though there is a headlong rush to put functions on a chip, at which 'we have just touched the surface', the factors of time, system sophistication and cost are limiting. Time moderates progress in two ways, said Dumse. There is the unavoidable delay between concept and finished product, and there is the delay caused by the gulf between the industry and its supply of engineers from the educational system. Industry has found more ways, said Dumse, of condensing features and functions on a piece of silicon than educational facilities have found to cram equal amounts of understanding of the use of these features into a single human head. System sophistication, by necessity, is more of a problem in single-chip than in multichip applications. There are only so many CPU, ROM, RAM and special-purpose devices that can be placed on a single easily-manufactured silicon die with current technology; and though technology will increase production capabilities, it is unreasonable, said Dumse, to expect, say, a single-chip microcomputer with over 1 Mbyte of RAM within the next two years.

SCAT is the answer

SCAT (strip chip architectural topology) is the answer, according to Texas Instruments engineer, Jerry Corbin. The novelty of this design process, which is used in the TMS7000 series, is that the layout is of primary importance, rather than taking 'the back seat' as in conventional VLSI design. TI strive to minimize their interconnect problems. The SCAT design philosophy gives much greater architectural flexibility, said Corbin. Because the same pieces of one chip can be used on another, new versions can be brought out more quickly. TI developed their 4k ROM version of the 7000 family from their 2k version without redesigning the chip: they simply added another 2k block to the original version and the rest remained the same. Another feature that TI think is important in single-chip microcomputer technology is microprogrammability. They replace the traditional programmed logic array and associated random logic with a control ROM to implement internal control of the microcomputer. This feature will also allow the user to have his instruction set modified (TI hope to be able to let the user do this himself soon) for a specific application. In a different session, Michael Shapiro, also of Texas Instruments, went into more detail on SCAT and its implementation in the TMS7000 design process.

Von Neumann organization

Weighing up the advantage of ease of single-chip programming over cheaper hardware costs, Motorola engineer, Bill Huston, described the common-memory Von Neumann organization that owes much of its architectural heritage to large computers. The chief benefit of the Von Neumann architecture is the inherent ability to operate upon addresses as easily as data. Program and data table pointers can be saved in RAM, and indexing and other address calculations can also be included. However, all address and data elements must be standardized to the bus width.

These benefits have come into more general use as the thinking behind programming has changed. In the past, programs were short and it was considered sufficient simply to have very low-cost programmable ICs. Now programs are not just written and then forgotten. Today they are changed, in some cases many times. The Von Neumann architectural concept allows this to be done more cheaply. Program changeability costs should be considered when amortizing program costs, said Huston.
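Huston's point about operating upon addresses as easily as data is easiest to see in code. The short C listing below is our own illustration, not material from the session: in a common-memory machine a pointer into a data table is held in memory like any other value, indexed by simple arithmetic and reassigned at will.

/* A minimal sketch (ours, not from Huston's talk) of treating addresses as
 * ordinary data in a common-memory machine: pointers are saved in RAM,
 * indexed and recalculated just like any other value. */
#include <stdio.h>

int main(void)
{
    int table[4] = { 10, 20, 30, 40 };

    /* a data-table pointer held in memory, exactly like a data value */
    int *entry = table;

    /* address arithmetic: indexing is just addition on the saved pointer */
    for (int i = 0; i < 4; i++)
        printf("table[%d] = %d (at offset %d words)\n", i, *(entry + i), i);

    /* the pointer itself can be stored, passed around and modified */
    entry = &table[2];
    printf("entry now points at the value %d\n", *entry);

    return 0;
}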

Distributed processing

There is a definite trend towards distributed processing when solving complex problems. In the past, performance was limited by the ability of the central CPU to bring in data, process it and output it in some useable form. The resulting input/output bottlenecks produce a slow, nonoptimum data processing system. Distributed processing, in as much as it delegates some of the processing tasks to intelligent substations, offers a simpler and easier-to-use system. Not only are costs lower with the distributed approach, but the time needed to implement such systems is substantially less.

Distributed STAR network described by Tony Zingale of Intel (processors, minicomputers or computers linked through a central computer, with terminals attached)


The session on distributed processing saw representation from five major chip manufacturers: Motorola, National Semiconductor, Zilog, Intel and Texas Instruments. The processors and peripherals of tomorrow will be more performance-orientated and will have to be well thought out so that they can be upwardly expanded without requiring major systems redesign, said Motorola's John Stockton. Talking about the company's 16-bit processor, the 68000, Stockton said that it is important to design in a migration path from existing single-bus systems to the higher performance multiple-local-bus systems of the future.

Build in upgradeability

Designers should thus include all the necessary hooks in the processor to support multiple architectures. The solution to increasing performance will rely heavily on multiple processors, said Stockton. To take advantage of this solution, microprocessor vendors must build this upgradeability early into the design of their microprocessor families. Stockton talked about the 68000 range of processors, showing there to be a path along which a user can go when upgrading his application. A microprocessor-based distributed system can give better performance at lower cost than a timeshared mini or mainframe system, said National Semiconductor engineer, Leslie Kohn. He described the NS16000 as being well suited to distributed processing because of its 32-bit architecture and demand-paged virtual memory support. Once again, as a family of upgradeable processors, the 16000 has been designed to support large software systems. The 16000 operating system was also designed with distributed processing in mind, said Kohn. It supports transparent distributed processing, which allows the tasks of a software system to be placed on different nodes without any change in the tasks. An operating system component such as a file manager can be located far from another operating system component, such as the memory manager. This mechanism is used to implement transparent access to nonlocal network resources such as discs. Thus expensive hardware resources, such as hard discs or printers, may be shared by several 16000 processors.
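What 'transparent' placement means in practice is easiest to show with a sketch. The C listing below is our own illustration, not the 16000 operating system's actual interface: callers name a service rather than a node, and a hypothetical routing table decides whether the request is handled locally or forwarded, so a component such as the file manager can move to another node without touching the code that calls it.

/* Minimal sketch (ours, not the NS16000 software) of location-transparent
 * service requests: the caller names a service, not a node, so tasks can be
 * moved between nodes without changing the calling code. */
#include <stdio.h>
#include <string.h>

typedef struct { const char *service; int node; } route_entry;

/* hypothetical routing table: which node hosts which OS component */
static const route_entry routes[] = {
    { "file_manager",   2 },   /* remote node */
    { "memory_manager", 0 },   /* local node  */
};

#define LOCAL_NODE 0

static void deliver_local(const char *service, const char *request)
{
    printf("local %s handles: %s\n", service, request);
}

static void deliver_remote(int node, const char *service, const char *request)
{
    /* stand-in for a network send over the cluster interconnect */
    printf("forwarding to node %d for %s: %s\n", node, service, request);
}

/* callers use this one entry point; they never know where the service runs */
static void request_service(const char *service, const char *request)
{
    for (size_t i = 0; i < sizeof routes / sizeof routes[0]; i++) {
        if (strcmp(routes[i].service, service) == 0) {
            if (routes[i].node == LOCAL_NODE)
                deliver_local(service, request);
            else
                deliver_remote(routes[i].node, service, request);
            return;
        }
    }
    printf("unknown service: %s\n", service);
}

int main(void)
{
    request_service("file_manager", "open report.dat");
    request_service("memory_manager", "allocate 4 kbyte");
    return 0;
}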


Zilog's design philosophy envisions a distributed processing approach for many Z8000 applications, said Zilog engineer, Janak Pathak. The family of components are all connected together by the company's Z-bus and, while each component has a different use (CPU, support circuitry or peripheral), together they form a powerful distributed system. The sorts of VLSI peripherals that go to form this distributed system include a memory management unit, DMA transfer controller, FIFO input/output interface unit, counter/timer and parallel I/O unit, and serial communications controller.

Cooperative transactions

Another essential element of the Z8000 distributed processing concept is the use of cooperative transactions, i.e. devices which each have a specific capability. The theory here is that if a task requires a combination of capabilities, it is better to allow several devices to participate in the task than to replicate capabilities in several devices. By combining a limited number of peripheral support components and memory with an iAPX 186, said Intel's Tony Zingale, you can achieve a condensed, cost-effective system on one board, making the company's 16-bit processor an optimal processor for distributed processing nodes. The level of integration is achieved using the HMOS II silicon gate technology. (The new 286 processor, which uses HMOS III, is compatible with and is used as master for the 186.) The data processing node, according to Zingale, must be more cost-effective than other similar approaches, easy to implement, compatible with existing hardware and capable of high speed execution rates. Zingale believes that the 186 has these qualities. On board, it has a standard 8 MHz clock generator; two independent 16-bit timers and a third programmable 16-bit timer; two high speed DMA channels; an interrupt controller that can accept interrupts from five external sources; chip select/ready generation; and CPU internal registers.

Practical CMOS

There is a long-standing impression that CMOS is too slow for many microprocessor uses. The CMOS-is-too-slow image is no longer valid, said Motorola's Bill Huston. A selection of CMOS microprocessors is available at speeds matching NMOS devices, he said. Past hesitancy to use CMOS chips has been as much because of the narrow choice as because of low speed. Until recently, said Huston, there were only two CMOS chips available. Now there are 8080 and 6800 derivatives as well as the traditional 1802. (According to Dataquest figures, the 1802 is fifth behind the 8080, 6800, Z80 and 6502 in world production volume.) Volume production allows costs to be lowered and a multichip all-CMOS microprocessor system is now practical, said Huston. He reported on such a system, based on the mid-range MC146805E2, whose instruction set is a control-optimized derivative of the 6800 with low power standby instructions.

Software implications

Though hardware has been developed to allow the exploitation of distributed/multimicroprocessor systems, the application of this hardware to real-world problems and the development of software to solve these problems is developing at a slower rate. Terrance McKelvey and Dharma Agrawal, from Wayne State University, presented some of the current thinking on the subject of software design methodologies for distributed microprocessor systems. Parallelism, i.e. computation that can be done in parallel, may be introduced into a system at various levels. For example, it may be done within separate processor modules, thus obtaining the speed and reliability advantages offered by distributed systems. There are three important levels at which parallelism can be detected, according to McKelvey and Agrawal: algorithm, source language and machine language level. Detection and exploitation of parallelism is the key to the effective design of distributed microprocessor systems, they continued. The conclusion drawn is that an overall net parallelism detection of less than a fivefold factor over the strictly sequential execution of machine language instructions is theoretically possible.
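A concrete independence test makes the source-language level easier to picture. The C listing below is our own illustration, using the standard Bernstein conditions rather than anything taken from McKelvey and Agrawal's paper: two statements may execute in parallel if neither writes a variable that the other reads or writes.

/* Small illustration (ours) of source-level parallelism detection using
 * Bernstein's conditions: statements are independent if the write set of
 * each is disjoint from the read and write sets of the other. */
#include <stdio.h>
#include <string.h>

typedef struct {
    const char *text;
    const char *reads;   /* variables read, one character per name    */
    const char *writes;  /* variables written, one character per name */
} stmt;

static int disjoint(const char *a, const char *b)
{
    for (; *a; a++)
        if (strchr(b, *a))
            return 0;
    return 1;
}

static int independent(const stmt *s1, const stmt *s2)
{
    return disjoint(s1->writes, s2->reads) &&
           disjoint(s1->writes, s2->writes) &&
           disjoint(s1->reads,  s2->writes);
}

int main(void)
{
    stmt a = { "x = y + z", "yz", "x" };
    stmt b = { "w = y * 2", "y",  "w" };   /* touches none of a's results */
    stmt c = { "y = x - 1", "x",  "y" };   /* reads x, which a writes     */

    printf("'%s' || '%s'? %s\n", a.text, b.text, independent(&a, &b) ? "yes" : "no");
    printf("'%s' || '%s'? %s\n", a.text, c.text, independent(&a, &c) ? "yes" : "no");
    return 0;
}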

Petri net model of a computer outlined by McKelvey and Agrawal (computation overlapped with tape I/O: input queue, tape ready, computing, I/O on tape, rewind tape, output queue)

Multitasking

Multitasking also offers a design methodology for the detection and exploitation of parallelism for distributed processing. Though a concept initially conceived to take advantage of CPU idle time, it is a valid method for software system design, said McKelvey and Agrawal. There is no reason why a problem must be programmed and executed step by step from start to finish. Though most people tend to think sequentially, multitasking offers advantages in efficiency of resource use and overall speed of execution, and is a natural design methodology.
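The idea echoes the Petri net figure above: rather than compute, then do tape I/O, then compute again, the two activities can overlap. The C listing below is our own minimal illustration, using POSIX threads (which postdate the systems discussed here), of running an I/O task and a compute task concurrently.

/* Minimal sketch (ours, not from McKelvey and Agrawal) of the multitasking
 * idea: overlap I/O with computation instead of running the two phases
 * strictly one after the other.  Build with: cc -pthread overlap.c */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *do_io(void *arg)
{
    (void)arg;
    puts("I/O task: reading next block from tape...");
    sleep(1);                              /* stands in for slow device I/O */
    puts("I/O task: block ready");
    return NULL;
}

static void *do_compute(void *arg)
{
    long sum = 0;
    (void)arg;
    for (long i = 0; i < 10000000L; i++)   /* stands in for useful work */
        sum += i;
    printf("compute task: result %ld\n", sum);
    return NULL;
}

int main(void)
{
    pthread_t io, cpu;

    /* both tasks proceed concurrently; neither waits for the other */
    pthread_create(&io, NULL, do_io, NULL);
    pthread_create(&cpu, NULL, do_compute, NULL);

    pthread_join(io, NULL);
    pthread_join(cpu, NULL);
    return 0;
}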


The term software engineering has traditionally been applied to extremely diverse activities, ranging from system programming to managing programmer teams. ADA, says TeleSoft software engineer, Kenneth Bowles, appears destined to be the first widely used programming language designed to bring the diverse activities together in ways supported by both programmers and engineers. Among the important aspects, said Bowles, are its orientation to system construction using interchangeable building-block packages and the strong standardization in the interest of program portability. These aspects should foster the emergence of a new kind of software component industry, he said. Major machine-independent software systems will emerge, and hardware will increasingly be regarded as an added value.

How broad is broad enough?

The local area network will become an integral part of any office system of the future, said Dale Kutnick of the Yankee Group. But he posed the question: would it be a baseband or a broadband network? There is no real controversy between baseband and broadband, according to Wangnet product line manager, Mike Stahlman. Most vendors will offer both, he said. Though Wang are using the broadband principle in their network, Stahlman said he thought that baseband made sense in certain specific applications. The problem that people will have, he said, is making up their minds between the two. The baseband representative, in the shape of DEC's Ralph DeMent, was not quite as fair to broadband as Stahlman had been to baseband. DeMent said that Ethernet had been chosen because the network architecture was the key element and the Ethernet standard was the only one in the running.

Network criteria

DEC's four criteria for network choice were economy, performance, reliability and flexibility. Ethernet satisfies all four, said DeMent. With Ethernet, he said, they were striving to 'make the cost:performance ratio so low that it is no longer an issue'. The main cost difference between baseband and broadband comes about because the latter needs an RF modulator, which is expensive. Ian Killen, chairman of Strategic Inc., who was the person who said last year that Ethernet would fail, has qualified his statement by saying that it is only cost-effective for small-scale usage. With a large number of nodes, the cost per node of broadband's modulator becomes purely nominal. With baseband, each node needs its own transceiver to access the network. With the general consensus being that both baseband and broadband can live harmoniously together, it is not surprising that the IEEE standards committee is attempting to standardize on both. Standard 802, says committee member Jerry McDowell, supports both broadband and baseband. The main point in having the standard is to 'provide interconnection of different microsystems'.

Pros and cons

From a neutral position, McDowell analysed the pros and cons of the two systems.


CSMA (carrier sense multiple access) is the protocol whereby baseband nodes access the network. It is simple to implement but, said McDowell, there is no guarantee of connection. He reckons there is only a 98 per cent certainty of getting through. The token passing system of broadband, on the other hand, is 100 per cent certain, but the technology is not well understood by the computing fraternity.
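The lack of a guarantee comes from contention. The toy C simulation below is our own illustration, not the access algorithm in the 802 draft: each ready station transmits in a slot with probability one half, and when two pick the same slot they collide and must back off, so delivery is probable but never assured within a fixed time.

/* Toy model (ours, not IEEE 802) of contention access: ready stations
 * transmit after sensing the channel idle; simultaneous transmissions
 * collide and both senders must back off and retry. */
#include <stdio.h>
#include <stdlib.h>

#define STATIONS 2
#define SLOTS    10

int main(void)
{
    int pending[STATIONS] = { 1, 1 };   /* both stations hold a queued frame */
    srand(7);                           /* fixed seed so the run is repeatable */

    for (int slot = 0; slot < SLOTS; slot++) {
        int senders = 0, who = -1;

        for (int s = 0; s < STATIONS; s++) {
            if (pending[s] && (rand() % 2)) {   /* backoff expired: transmit */
                senders++;
                who = s;
            }
        }

        if (senders == 1) {
            printf("slot %2d: station %d delivered its frame\n", slot, who);
            pending[who] = 0;
        } else if (senders > 1) {
            printf("slot %2d: collision, all senders back off\n", slot);
        }
    }
    return 0;
}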

Network competition

Ralph DeMent supports the competition between baseband and broadband. 'The more compatibility you have, the less innovation and forward movement there is.' McDowell, however, summed up the feeling in the computing world. While it makes sense for everyone to use the same architecture as a de facto standard, senior management will resist the use of someone else's architecture. He warned, though, that while IBM threaten to bring out a network, which would almost certainly get universal acceptance, no one is prepared to risk too much in developing their own standard, which may be different to IBM's. No one, however, knows if and when IBM are going to make their move.

Micros in the office

The microcomputer that will succeed in the office environment will be an intuitive system, according to Advanced Office Concepts' President, Amy Wohl. An intuitive system is one which you can use without feeling like a fool, she said. With so many microcomputers being released into the professional environment, Wohl was giving her interpretation of the qualities a successful system must have. Systems should not be entirely menu-operated, since with some systems it is impossible to do anything in real time. A system must be so logical that by reading the display you will be able to do anything, Wohl said. A touch screen is also useful in a workstation designed for managers and executives: managers are used to pointing, she said. Wohl did, however, warn that too much pointing was tiring, and therefore using a touch-sensitive screen to do a lot of programming did not make much sense.

Natural language essential

Natural language and a common interface are also essential for the office of the future, Wohl said. By natural language, she meant the ability of a system to understand a variety of similar words to motivate the system. A common interface allows 'the user to speak (sic) to the computer in a language more like our own'. Wohl anticipated that systems like this would begin to appear within the next few years. Software is a problem, according to Wohl. Future software needs to be integrated, i.e. it should be able to operate at different levels for different requirements and still 'talk' to other systems. 'We are not going to be able to educate 50M (white collar workers) with the present software.' Since much of the available software, especially on CP/M, comes from personal computer packages or 'garages in California' which do not have a strong future, the industry will have to rely on major hardware vendors for their software.
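By way of illustration only (this is our own sketch, not a description of any product Wohl mentioned), 'understanding a variety of similar words' can be as simple as a table that maps several user verbs onto one underlying command.

/* Toy sketch (ours) of a forgiving command interface: several similar
 * verbs from the user all motivate the same underlying system command. */
#include <stdio.h>
#include <string.h>

static const char *synonyms[][2] = {
    { "delete", "DELETE-FILE" }, { "remove", "DELETE-FILE" },
    { "erase",  "DELETE-FILE" }, { "print",  "PRINT-FILE"  },
    { "list",   "PRINT-FILE"  },
};

static const char *interpret(const char *verb)
{
    for (size_t i = 0; i < sizeof synonyms / sizeof synonyms[0]; i++)
        if (strcmp(verb, synonyms[i][0]) == 0)
            return synonyms[i][1];
    return "UNKNOWN";
}

int main(void)
{
    printf("erase -> %s\n", interpret("erase"));
    printf("list  -> %s\n", interpret("list"));
    return 0;
}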



Local nets - logical part of the office

Local computing power in the office, with 125k of internal RAM and hard disc storage a 'fact of life', means that there has to be a way to connect them up. Local area networks will 'logically be seen more as a part of a system of information processing tools'. On the issue of broadband versus baseband, Wohl said 'it is not an issue in office automation'. Baseband can handle data and text transfer, but is no good for image and voice transfer, where broadband's greater channel capacity wins out. She also suspects that baseband is not robust enough for the office environment. Standardization is important, though. Easy interconnection of diversely-manufactured components into a single system is essential. However, it is likely, she said, that multiple standards will be accepted. To sum up on networks, she said that you must design a building to accommodate the cable. It would have cost $1M to retrofit $20 000 of cable into a hospital she recently looked at.

FORTRAN pioneer day

Twenty-five years on, FORTRAN, the first widely-used high level language, was honoured at the National Computer Conference. It was the language that made computers more accessible to noncomputing professionals, like scientists and engineers. The silver jubilee celebrations comprised two sessions, 'The early days of FORTRAN' and 'The institutionalization of FORTRAN', and an exhibition and film put together by IBM. Speakers at the sessions included Robert Bemer on 'Computing prior to FORTRAN'; Richard Goldberg on 'Register allocation in FORTRAN I'; Herbert Bright on 'An early FORTRAN user's experience'; and William Heising on 'The emergence of FORTRAN IV from FORTRAN II'.

How it came to be

Ten people (nine men and a woman) developed FORTRAN (a mathematical formula translating system) at IBM. It was released on 15 April 1957. It is all the more remarkable because the team were working in a vacuum of knowledge, since no software benchmarks or other means of comparing performance existed. The group, led by John Backus, who conceived the idea of FORTRAN, lived 'like a small family', according to one of the pioneers. 'Whenever someone asked us when we would finish our project, we would always say six months,' Backus recalled on film. 'We did not know it would actually take us three years.' A model of the operator's console and CPU of the IBM 704 electronic data processing machine, for which the FORTRAN compiler was written, dominated the exhibit. About the same size as an IBM 3081, the 704 contained some 1200 vacuum tubes and offered only a fraction of the 3081's processing power.
