Corporate networking and the PC by MIKE BEVAN
There is a piece of management theory known as the 'Nolan Stages Hypothesis' which may usefully be applied to the current state of the personal computer market. The hypothesis, offered as a piece of working methodology by the US consulting practice, Nolan Norton & Co., assists organizational learning of the factors affecting the rate of ingestion of new technology. Those familiar with the theory may have noted that the PC market appears to be moving out of Stage 2 (Contagion) into Stage 3 (Coordination and Control). The explosive growth experienced over the last two or three years is falling away rapidly, and the additional stimulus offered by occasional new PC variants will produce only short-term and smaller-scale effects.
Abstract: Corporate networks are no longer items to be bought from the mainframe supplier, as personal computers become an important part of the networking problem. IBM is, for marketing reasons, unlikely to make its networks open to other suppliers. Open systems interconnection is also unlikely to provide the hoped-for solutions. Most organizations will, for now, attempt a bottom-up solution at a localized level. Distributed PC networks might later be incorporated into an architecture using conversion or assimilation techniques. Keywords: data processing, computer communications, networks, personal computers.

Mike Bevan is managing director of Xionics Ltd.
In this particular instance of the Control Stage, the process of exploiting the technology already procured, and of laying the basis for further growth, is very likely to involve networking.
Background

For even the largest organizations, a network architecture or system architecture is something which they have bought almost unconsciously from a principal computer manufacturer. For the great majority of users, such architectures are constrained within the mainframe/front-end/cluster controller/terminal hierarchical structure. Point-to-point or X.25 links may have been provided between multiple computers, but the functions of such links are typically limited to file transfer or interhost terminal switching. While 'networking' at the multiplexer/PAD/modem level is far from being a business for amateurs, it is relatively trivial in its complexity by comparison with the highly-distributed, organizationally homogeneous, multivendor networks which are going to be needed to exploit the new low-cost information technology products now available.
Network scope

Most organizations of any size are likely to find themselves reconciling the IBM world with the non-IBM world. In Figure 1, this is the principal vertical subdivision. For convenience, these two segments each subdivide into two further horizontal subsegments. In the case of IBM, the upper subsegment is the traditional mainly SNA domain, with constituents like 3270, 2780/3780, and, more recently, DCA/DIA (IBM's Document Architecture standards). In the lower IBM subsegment are all the new and emerging things - PC/XT/AT, PC/DOS, PC-NET, Sytek, Xenix, etc. The non-IBM world can be thought of as comprising international standards in the upper quadrant (X.25, X.400, telex, and so forth), plus important 'other manufacturer' phenomena in the lower quadrant, for example, VT100 and Macintosh. Like all such convenient representations, it somewhat oversimplifies matters, but at least three of the four principal subdivisions identifiably exist within most larger organizations.

Figure 1. Reconciling the IBM world with the non-IBM world: system and network architecture shells (addressing, routing, flow control, network management, database homogeneity, auditability, networked applications, system management) spanning IBM domains (PC/XT/AT, PC/MSDOS, PC-NET, etc.) and others (VT100, Apple, Wang, etc.).
Network architecture

For any specific collection of equipment within these categories, some form of network architecture will be needed, covering, for example, physical addressing, physical and logical routing and alternative routing, flow control, network monitoring and management, and security. The new dimensions to be accommodated nowadays under these headings stretch from the severe complexities of duplicated paths, nodes, and resources, to such requirements as terminal subaddressing (to permit differentiation between traffic streams where multiple communications tasks are executed concurrently within a single PC). Some aspects of the network architecture will only be under the control of the user organization to the extent of its ability to select from a limited set of offered alternatives. Nevertheless, any organization of size which embarks upon PC networking in a piecemeal manner, without trying to understand how the eventual architecture should function, will certainly run into major difficulties.
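By way of illustration only - not any vendor's actual scheme - the following sketch shows what terminal subaddressing amounts to: a network address extended with a subaddress, so that traffic for several concurrent tasks within one PC can be told apart. All names here are hypothetical.

    # Illustrative sketch of terminal subaddressing: one physical station
    # (a PC) carries several logical sessions, distinguished by subaddress.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class NetworkAddress:
        node: str        # physical station, e.g. a PC on a LAN segment
        subaddress: int  # logical session/task within that station

    def route(frames, sessions):
        """Deliver each frame to the session owning its subaddress."""
        for frame in frames:
            handler = sessions.get(frame["dest"])
            if handler is None:
                print("no session for", frame["dest"])  # would raise an alarm
            else:
                handler(frame["payload"])

    # Two concurrent tasks on the same PC, separated only by subaddress.
    pc = "accounts-pc-07"
    sessions = {
        NetworkAddress(pc, 1): lambda p: print("3270 emulator got:", p),
        NetworkAddress(pc, 2): lambda p: print("file transfer got:", p),
    }
    frames = [
        {"dest": NetworkAddress(pc, 1), "payload": "screen update"},
        {"dest": NetworkAddress(pc, 2), "payload": "next file block"},
    ]
    route(frames, sessions)

Without the subaddress, the two streams arriving at the same station would be indistinguishable.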
System architecture

As a working simplification, the network architecture can be thought of as being mainly concerned with internal and invisible comms-related activities. The system architecture on the other hand is perhaps more concerned with how the entire collection of computers, databases, and networks will appear to the user. It deals, for example, in uniformity of access methods, distributed database homogeneity, user identity conventions and mapping electronic mail systems across numerous dissimilar messaging mechanisms. Most important, it can and should deal with certain aspects of the man/machine interface. For example, a PC user should be able simply to request an item of data by specifying its name, or some identifying characteristic, and the 'system' should then deliver that data to him intelligibly, provided he is authorized to see it, regardless of where it is located, or on what type of computer, or in what format.
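A minimal sketch of what such location transparency might look like from the PC user's side. The directory, hosts, formats and names are all invented for illustration; a real system architecture would of course hide far more.

    # Hypothetical sketch: the user names an item; a directory maps the
    # name to a location and format, and the result arrives intelligibly.
    DIRECTORY = {
        # name: (host, stored format, authorized users)
        "sales-1985-q2": ("mainframe-a", "EBCDIC", {"jones", "patel"}),
        "budget-draft":  ("dept-server", "WP",     {"jones"}),
    }

    def retrieve_from(host, name):
        return f"<contents of {name} on {host}>"   # stand-in for real transfer

    def convert_to_local(raw, fmt):
        return f"{raw} (converted from {fmt})"     # stand-in for real conversion

    def fetch(name, user):
        entry = DIRECTORY.get(name)
        if entry is None:
            return "no such item"
        host, fmt, authorized = entry
        if user not in authorized:
            return "not authorized"
        raw = retrieve_from(host, name)    # comms detail hidden from the user
        return convert_to_local(raw, fmt)  # format detail hidden too

    print(fetch("sales-1985-q2", "patel"))
    print(fetch("budget-draft", "patel"))

The point is that the user supplies a name and an identity; location, computer type and format never surface.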
Changing attitudes

The idea expressed above of system homogeneity is a very natural one for nontechnical users. It is what they want, and they feel entitled to regard its nonavailability as symptomatic of incompetence on the part of the computer industry. Many have become disillusioned, because the 'integration' so smoothly claimed in television advertising has so often turned out to be unavailable even within a single manufacturer's product range. 'Standardizing' on a particular type of personal computer has revealed for most the unpalatable truth that within a single simple product subfamily, dozens of different applications packages can find dozens of different and incompatible ways of representing data.
Magic wands

Many, perhaps most, management services departments are rather hoping that the mess will be sorted out for them by some external agency. The two favourite 'magic wands' are IBM and the International Standards Organization (ISO). It may seem heretical to say so, but IBM has not actually built much of its success on advanced networking. Its occasional sorties into distributed processing have failed by a mile to produce the market domination it achieves elsewhere. Its current PC networking products are limited and unsophisticated. At the system architecture level, the user-perceived degree of integration between its various computer product ranges is small. All of these problems can be overcome of course, given time and money, and nothing will separate most committed IBM users from their understandable faith in the ultimate competence and professionalism of that corporation. What is perhaps less defensible is the belief that IBM will espouse genuine open networking. In general it is the job of the management of a commercial enterprise to preserve and enlarge the customer base (in this case 20 000+ SNA sites), not to find expensive means of opening it up to the competition. If IBM creates a situation in which any of its customers can buy any computing product from a competitor - with no risk that the technical goalposts will be moved - then it will be time for IBM's stockholders to look closely at the senior management.

Open systems interconnection

Much has been written elsewhere regarding the way in which proliferating standards have become a bigger headache than the incompatibilities they were supposed to address. Let us charitably suppose that a single set of internationally-agreed standards came into existence, immune to technology changes and with no ambiguities or options. Let us further suppose that the millions of items of nonconforming information technology already installed and in use were somehow not a problem. Surely no one imagines that interconnecting networks would spontaneously come into existence, with the right quantities of cabling in the right places, flow control problems solved automatically by dynamic buffer distribution, physical addresses self-allocated and a virtual monitoring and management system mapping itself into place. The manufacturer-independent standards initiatives have produced some useful tools - V.24/RS232, X.25, ASCII, for example - and more are coming: X.400 and X.32 look promising candidates. However, they cannot define a network architecture blueprint for a particular user organization. Still less do they yet have any contribution to make at the system architecture level, where current standards initiatives are not seriously expected to produce stable and substantial results this century.
More to come

If all this sounds a bit gloomy, it should be noted that we may only be looking so far at the tip of a very large iceberg of further complexity. Already very visible on the short-term horizon is document image processing. Add a low-cost scanner and high-resolution CRT to a PC, and you have the basis for the storage, retrieval and display of facsimile images of documents. Entire files can then be held electronically, and not merely the material created by word processing or data processing. Bear in mind, however, that a compressed image of an A4 page, at medium (200 x 200) scanning resolution, occupies some 30 000 bytes. Consider also the typical office workers, who will expect to browse through files of such material at the same speed as for paper files, and who in doing so will create at peak times network traffic loads considerably beyond current network capacities. And then there are all the happenings on the computer-aided design, engineering and manufacturing fronts, leading to an entirely new palette of graphics standards gaining relevance to commercial information processing, not to mention further potential heavy traffic loads.

Some pointers

It is very difficult to avoid the conclusion that corporate network architectures and systems architectures are no longer things to be bought from your mainframe supplier. The only theoretical exception to this is where the user organization is prepared to select a single mainframe supplier, veto the procurement of equipment from elsewhere (even 'compatible' equipment), and limit its rate of procurement to that containable within whatever cohesive product architecture the supplier is able currently to support. What most organizations will do 'top-down' is to wait and see. What most organizations will do 'bottom-up' is to try to solve localized, tactical networking problems as they arise. In one sense, a bottom-up implementation method must be right. The massive traffic levels which will accumulate as these corporate networks grow can only be sustained by distributing databases or file servers away from the centre (see Figure 2). The 80:20 principle appears to be valid for work group (department, team) access to data, and if 80% of their retrieval traffic can be kept away from other work groups, then the overall problem of loading becomes more manageable. The idea of implementing numerous small PC networks, with their own file servers, is therefore not a bad one in principle. How possible it will be in due course to fit these clusters into some homogeneous corporate architecture is less clear.

Figure 2. Distributing databases or file servers away from the centre (mainframe, gateways, etc.).
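A rough back-of-envelope calculation makes both points at once - the image traffic problem and the value of keeping retrieval local. The page size comes from the text above; the browsing rate and user count are pure assumptions for illustration.

    # Illustrative arithmetic only; every figure except the page size
    # (30 000 bytes, from the text) is an assumption.
    PAGE_BYTES = 30_000    # compressed A4 image at 200 x 200 resolution
    PAGES_PER_SEC = 2      # a user flicking through an electronic file
    USERS_AT_PEAK = 25     # one work group's concurrent browsers

    peak_bits_per_sec = PAGE_BYTES * 8 * PAGES_PER_SEC * USERS_AT_PEAK
    print(f"peak browse load: {peak_bits_per_sec / 1e6:.1f} Mbit/s")  # 12.0

    # If 80% of that retrieval traffic stays on the work group's own
    # file server, only 20% ever crosses to the rest of the network.
    crossing_load = peak_bits_per_sec * 0.2
    print(f"load reaching other groups: {crossing_load / 1e6:.1f} Mbit/s")  # 2.4

Set against the shared networks of the period, the first figure alone exceeds current capacities, while the second is at least containable - which is the case for distributed file servers.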
Retroactive architectures

There are basically two approaches worth consideration, which might be labelled 'conversion' and 'assimilation'. The conversion approach regards all clusters and subsystems as independent domains in a federated architecture. When one domain communicates with another, formats and protocols are converted into the best approximation intelligible to the receiving domain. For more than a few different domain types, this becomes inefficient, because of the knowledge each must hold about all others. The assimilation approach requires the existence of some form of higher structure, through which all domains intercommunicate, and are monitored and managed. This structure (see Figure 3), which has been termed a Virtual Network Architecture, holds information about domain characteristics (SNA, VT100, MS/DOS, WP formats, addressing structures, HDLC variants, logon sequences, etc.) so that apparent homogeneity can be superimposed. New domain types can be added as need arises. It has been a common misunderstanding for some years that this is the purpose of a local area network (LAN). In reality, the LAN deals only with the comparatively straightforward business of physical/logical interconnection. The summit of its ambitions is to enable different types of computer system to exchange mutually unintelligible information. The intelligence and knowledge of a Virtual Network Architecture is necessarily transportable between hardware sets, but will typically take the form of network nodes, either separate units, or add-on cards and software for PCs and file servers.

Figure 3. Virtual Network Architecture: user device translators/controllers (PCs, VDUs, portables, graphics workstations, WPs); resource location/categorization (physical/logical, internal/external, processing/database/service); network intelligence; dialog with external domains (IBM, DEC, bureau, telex).
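The inefficiency of conversion has a simple combinatorial form: n domain types need n(n-1) directed pairwise converters, whereas assimilation through one canonical intermediate form needs only 2n (one converter into and one out of the canonical form per domain). A small sketch, with invented domain names:

    # Illustrative comparison of the two approaches.
    DOMAINS = ["SNA", "VT100", "MSDOS", "WP", "X25"]
    n = len(DOMAINS)

    # Conversion: every domain must know how to talk to every other.
    pairwise = n * (n - 1)

    # Assimilation: each domain converts only to and from one canonical
    # form held by the higher structure (the Virtual Network Architecture).
    via_canonical = 2 * n

    print(pairwise, "pairwise converters vs", via_canonical, "via canonical form")
    # With 5 domain types: 20 vs 10; the gap widens quadratically.

Adding a sixth domain type costs ten new pairwise converters but only two under assimilation, which is why new domain types "can be added as need arises".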
Strategic considerations

Those who have implemented substantial multivendor networks have the considerable advantage of recognizing now what they should have paid more attention to at the start. Some of the common themes are:

• Availability and traceability. With large user populations, multiple file servers, mainframe and minicomputer interfaces, and numerous external gateways, it can be very difficult to diagnose transient oddities - lost transactions, spurious messages and so forth. Life can be made a little easier by ensuring, for example, that intercluster traffic, or mainframe access, is recorded in some form of control record (a sketch of such a record follows this list).
• Network management. The tools provided for this crucial activity seldom approach adequacy. The gathering and analysis of statistical data is particularly important where the network is continuously growing and is required to provide consistently sharp response times.
• Resilience. Organizational dependency on information technology generally is increasing. It seems obvious that the corporate data network should offer virtually guaranteed availability comparable to the telephone network.
• Expansibility. It is at best frustrating to be the 25th user of a network cluster limited to 24 users.
• Reconfigurability. People move offices; departments change shape. It is helpful if logical work group clustering can (subject to the traffic load implications referred to earlier) be decoupled from physical clustering.
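The control record mentioned in the first point might, purely hypothetically, look like this - the field names are invented, not any vendor's layout:

    # Hypothetical control record, written at each gateway for intercluster
    # or mainframe traffic, so transient oddities can be traced afterwards.
    import datetime

    def control_record(source, destination, kind, bytes_sent, status):
        return {
            "timestamp": datetime.datetime.now().isoformat(),
            "source": source,            # originating cluster or station
            "destination": destination,  # receiving cluster, host or gateway
            "kind": kind,                # e.g. "file transfer", "3270 session"
            "bytes": bytes_sent,
            "status": status,            # "ok", "timeout", "rejected", ...
        }

    log = []
    log.append(control_record("sales-cluster", "mainframe-a",
                              "3270 session", 1820, "ok"))
    print(log[-1])

Even so sparse a record, kept consistently at every boundary, turns "a transaction vanished somewhere" into "it left the sales cluster and never reached mainframe-a".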
Summary

Corporate networks, which include large numbers of PCs, will characterize organizational data processing for the foreseeable future. As soon as these networks move beyond the tactical, experimental stage, they will present severe technical and operational problems unless they can be fitted within some cohesive higher-level architecture. Generalized architectural solutions will emerge to help the many user organizations who run into difficulties through delaying strategic action because of uncertainty. Some of the deficiencies arising from lack of strategic preparation will however be impossible wholly to remove subsequently.
Xionics Ltd, 45 Mortimer St, London W1N 7TD, UK.