Workstation environment for image processing in nuclear medicine

Dirk Tombeur*, Axel Bossuyt† and Frank Deconinck*

In this paper we describe the image processing environment for nuclear medicine applications that we are developing on general purpose graphical UNIX workstations. The environment described in this paper has a truly distributed, object-oriented architecture. The basic building blocks are formed by the InterViews [1], Allegro [2] and NIH Class [3] (formerly OOPS) toolkits. These toolkits are developed in C++, and together they provide the basic functions for constructing a distributed object-oriented environment. They were developed as separate, independent libraries, although InterViews and Allegro were developed by the same group. The fact that it was easy to integrate these rather large libraries must be attributed to the use of OO techniques. We elaborate on the functionality contained in these libraries below. The components, implemented partly on the basis of these libraries, are also described.

Keywords: image processing, object-oriented techniques, nuclear medicine, X Window System, C++, UNIX, distributed computing

The Experimental Medical Imaging Laboratory at the Vrije Universiteit Brussel has a major interest in problems related to medical imaging. Currently we are investigating practical and theoretical aspects of algorithms, software environments and hardware architectures for applications in nuclear medicine. This research is done in close cooperation with the nuclear medicine department of the Academic Hospital of the VUB, which conducts both patient-care-related examinations and clinically oriented research.

We reported [4] on the first version of this environment elsewhere. It is based on UNIX and C++ [5] as development platforms. UNIX is the primary operating system for graphical workstations, and continues to be the major development platform for innovative technologies (e.g. the X Window System [6]). The C++ language is an object-oriented (OO) successor of C, with the same efficiency and portability. Seamless integration with utility libraries developed in C or FORTRAN remains easy.

In our new version we are working to overcome the following shortcomings:

• we identified [4] major units which are functionally independent from each other (colour manager, file manager, image overview, image processing). Nevertheless, we were forced to put them into one huge, monolithic process. According to the UNIX philosophy, functions should be put in separate programs; UNIX provides tools (the shell) to combine these programs into larger entities;
• we spent much time implementing a device-independent graphics layer. This approach has been superseded with the breakthrough of the X Window System as the universal computer graphics standard;
• only the higher layers of the environment were built with an object-oriented flavour. Despite the mechanisms provided by C++, we experienced great problems deciding how to concentrate functionality into different objects and how to organize these objects.

We identified [4] four objectives to be realized in the next version of the system:

• migration to the X Window System
• a more modular approach by switching to a distributed architecture
• a cleaner object-oriented implementation at all levels
• stability.

*Experimental Medical Imaging Laboratory, Vrije Universiteit Brussel, Laarbeeklaan 101, B-1090 Brussels, Belgium
†Department of Nuclear Medicine, AZ-VUB (NUGE)
Paper received: 10 November 1992; revised paper received: 6 April 1993

0262-8856/93/080522-08 © 1993 Butterworth-Heinemann Ltd
Image and Vision Computing Volume 11 Number 8 October 1993

GENERAL ARCHITECTURAL CONCEPTS

Functional requirements of a nuclear medicine environment

From the very beginning we decided that we did not want to build a general purpose image processing environment. Such an environment would have to satisfy the often contradictory needs of a broad range of fields, ranging from computer vision over microscopy to remote sensing. Our focus is nuclear medicine, and we investigated how the typical requirements of this field constrain and delineate the required functionality of the environment. These requirements also have repercussions on the architectural organization of the environment:

• images are always grouped into related sets: in tomographic studies, a pile of 2D slices constitutes a 3D image volume; in dynamic studies, a sequence of images represents spatio-temporally varying data. The images themselves are small (from 64 up to 256 pixels square) and sets are equally small (16-64 images). Both graphical and non-graphical functions have to take into account the fact that images are never isolated. At the same time, they should take advantage of the fact that images are small. This is a good indication that the communication overhead of shuffling images around (which is inherent in a distributed application) will not create unacceptable delays;
• the functions and the organization of the environment should be adaptable to the way in which medical specialists examine images. Visualization of image volumes requires real-time extraction of 2D slices along the main body axes (sagittal, coronal and transverse), as well as oblique slices, from the reconstructed image volume. Different simultaneous, synchronized views on the same data have to be supported. Specialized functions such as phase-amplitude decomposition must be available to allow analysis of a time series of images;
• image processing tasks have to be specified interactively and executed in real time. Bulk data processing operations, such as tomographic reconstructions, can be done off-line;
• the user interface should be intuitive and consistent. This rules out a command line interface as the primary interface, in favour of a graphical user interface;
• integration hooks must be provided to integrate the system with a hospital information system and, in the future, with a PACS (Picture Archiving and Communication System) environment;
• communication functionalities should be exploited to create distributed applications. Decomposing tasks into relatively small cooperating, but independent, programs is a software implementation of the way medical specialists want to interact with their data: integrated facilities which support multiple simultaneous views on images and related data, combined with flexible tools to operate on the data.
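The phase-amplitude decomposition mentioned above reduces, for each pixel, a periodic time-activity curve to the amplitude and phase of its first Fourier harmonic. A minimal sketch, under the assumption that the curve spans exactly one cycle; the names are our own illustration, not the system's code:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Illustrative sketch: first-harmonic phase-amplitude decomposition of
// one pixel's time-activity curve, as used in gated cardiac studies.
struct PhaseAmplitude {
    double amplitude;   // strength of the periodic component
    double phase;       // timing of the component, in radians
};

PhaseAmplitude firstHarmonic(const std::vector<double>& curve) {
    const std::size_t n = curve.size();
    const double twoPi = 2.0 * std::acos(-1.0);
    double a = 0.0, b = 0.0;
    for (std::size_t k = 0; k < n; ++k) {
        const double angle = twoPi * static_cast<double>(k) / static_cast<double>(n);
        a += curve[k] * std::cos(angle);
        b += curve[k] * std::sin(angle);
    }
    a *= 2.0 / static_cast<double>(n);   // a = A cos(phi)
    b *= 2.0 / static_cast<double>(n);   // b = A sin(phi)
    return { std::sqrt(a * a + b * b), std::atan2(b, a) };
}
```

Applied per pixel over a gated study, the amplitudes and phases form the two parametric images the specialist inspects.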

Image processing in a classical UNIX environment

Image processing traditionally consists of numerical processing of the data, followed by the visualization of the results. UNIX provides an elegant mechanism to integrate these processes, based on a traditional command line interface. In a UNIX environment, each numerical or graphical function would exist as a separate program. Programs execute I/O operations on standard logical channels. They are combined in shell scripts, which also support control structures such as for, while and if. Pipes may be used to interconnect the I/O channels, so that programs can exchange data. The display programs can visualize the intermediate and final results of such processing chains. This approach leads to a clean, modular organization of functions. Also, the commands in a pipeline operate (pseudo-)concurrently. If computers are connected through a network, the processing load can be distributed across different machines. It is clear, however, that this approach has a number of drawbacks:

• it is only suitable for bulk data processing which does not require intensive user interaction. Complex command scripts generally do not have interactive response times;
• the UNIX shell(s) only provide a user interface based on textual commands. These are prone to various lexical and syntactic errors;
• only a fraction of the capabilities of a modern graphical workstation can be exploited in this scenario. Interaction with visualized results is very difficult to integrate into the shell framework.

Graphical User Interfaces

The success of Graphical User Interfaces (GUIs) such as the Macintosh interface and MS Windows has convincingly demonstrated that applications which allow interaction through a GUI are in general much easier to operate than their classical command line driven counterparts. Less obvious, however, is the fact that a GUI is not always synonymous with flexibility. It is not sufficient to provide pop-up menus and dialogue windows if they do not clearly reflect the functions and the organization of the application! Another problem is caused by the fact that most GUI toolkits only offer commonly used interaction elements (buttons, menus, ...). For specialized application domains such as image processing, the core elements for image visualization or colour manipulation must be built from scratch. Due to the way these environments evolved and the kind of hardware platform they are intended for, they generally suffer from four major drawbacks:

• they use the communication functions provided by modern network technology solely for data transfers;
• most application programs integrate lots of functions into one monolithic block, which is contrary to modularity requirements;

• they have generally only limited data exchange possibilities, based on some global clipboard function;
• they only address the needs of common application areas (text processing, ...).

Distributed computing

We do not want to lose the elegance and modularity of the UNIX approach, but at the same time we want to operate our system through a GUI. Specifically, we want to create a distributed environment, where functions are divided over several cooperating programs. Distribution can exist at three levels:

• the user interface
• data access
• data processing.

Until recently, only distributed data access was supported by industrial grade solutions. The industry standard Network File System (NFS) [7] from Sun Microsystems enables file sharing over networks consisting of heterogeneous machines. It enables transparent access to files on remote disks, without having to adapt existing programs. This is possible because NFS operates at the operating system kernel level, which is invisible to programs.

Distributed graphical user interfaces are now also a reality, with the universal acceptance of the X Window System as a network transparent graphical environment. X allows programs to execute on a computer which is different from the one on which the visualization takes place. X is not integrated in the kernel, but this is not a problem, because existing programs have to be rewritten anyway to use the (very extensive) X functions.

Distributed data processing is still an open-ended issue. The Sun RPC (Remote Procedure Call) and XDR (External Data Representation) facilities [8, 9] are de facto industry standards, which support remote computations using a client-server approach. The server implements the actual processing functions, whereas the RPC package allows a client process to issue a call to the server over a network connection. An RPC call also supports arguments and possibly returns results. The details of transporting arguments from client to server, and results the other way round, are handled by the RPC package. XDR does the necessary data representation conversions for different machine architectures. RPC/XDR has the advantage of being a de facto standard, but it is not an operating system service. This implies that programs have to be specifically written or adapted to use RPC/XDR functions. Therefore we opted for the solution described below.

SYSTEM LIBRARIES

InterViews toolkit

InterViews is an object oriented user interface (UI) building toolkit. It provides a convenient object-oriented interface to low level X facilities. This simplifies the construction of new UI objects, where these raw facilities are used. InterViews also provides a rich set of UI elements (buttons, menus, ...) and contains classes to support the composition of these elements into complex graphical interfaces. We use InterViews both to build interfaces and as an experimenter's workbench for distributed GUIs. The OO architecture of InterViews makes adaptation and extension straightforward. The UI objects were redesigned according to the Open Look guidelines, and classes for visual representation of images and colour support were added.

Allegro toolkit

Allegro is a C++ library that provides a mechanism which allows transparent sharing of objects across process and machine boundaries. These objects can be anything that can be encoded in a C++ class. Clients can execute operations on remote objects using an OO Remote Procedure Call (RPC) model. Thus Allegro provides the basic mechanisms for distributed data processing. Objects which logically belong together are clustered in a single UNIX process, which is termed an object space. Object spaces register with the Allegro name server, of which one has to be active per host. Object spaces are located by sending a message, containing an identification (generally, a name encoded as a string), to a well known IPC port managed by the name server. If the lookup succeeds, the client receives the Allegro identification for the object space. This ID is then used instead of the string name in subsequent calls. Once an object space has been located, objects inside it, which have registered themselves in the space's dictionary, can be referenced by remote objects through a handle. Through that reference it is possible to execute calls on the remote object. Remote calls which do not return a value can be executed asynchronously (the caller does not wait). When a return value is expected, the caller is blocked in suspended mode and continues execution only after receiving the value (synchronous execution). This strategy for data sharing is clearly optimal when the majority of the remote calls do not expect return values and when the size of the returned data is small. As an example, consider the rotation of an image volume around an arbitrarily oriented axis. This operation is a good candidate to be executed on a fast remote CPU, if one is available. The original Allegro package only provided communication based on Berkeley socket IPC. This is clearly wasteful when clients and servers are both running on a single host and when they want to share bulk data.
The obvious remedy was to add a shared memory facility to the underlying Allegro mechanisms. Externally, shared memory segments are accessed through Allegro calls, but they do not have the communication overhead of the socket based mechanism. Evidently, data residing in shared memory does not have to be replicated in a client's address space.
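The Allegro calls themselves are not reproduced here; as a hedged illustration of the underlying System V mechanism, a segment holding an image buffer can be created and attached as follows. SharedImage and its members are our own names, not Allegro's:

```cpp
#include <cstddef>
#include <sys/ipc.h>
#include <sys/shm.h>

// Illustrative sketch (not the Allegro API): an image buffer placed in
// a System V shared memory segment, so that a client and a server on
// the same host can access the pixels without socket copies.
struct SharedImage {
    int id;                   // System V segment identifier
    unsigned char* pixels;    // mapped into this process

    SharedImage() : id(-1), pixels(nullptr) {}

    bool create(std::size_t bytes) {
        id = shmget(IPC_PRIVATE, bytes, IPC_CREAT | 0600);
        if (id < 0) return false;
        void* addr = shmat(id, nullptr, 0);
        if (addr == reinterpret_cast<void*>(-1)) return false;
        pixels = static_cast<unsigned char*>(addr);
        return true;
    }

    void destroy() {
        if (pixels) shmdt(pixels);                 // unmap
        if (id >= 0) shmctl(id, IPC_RMID, nullptr); // remove segment
        pixels = nullptr;
        id = -1;
    }
};
```

A second process would attach the same segment by its identifier instead of calling shmget with IPC_PRIVATE.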


From a programmer's point of view, operations on remote object stubs should be indistinguishable from local calls. In C++, derived classes are type compatible with their parent classes, so code that manipulates derived objects only has to be aware of the base types. However, it remains necessary to handle the IPC aspects of application objects manually. This technology has not yet reached the maturity of remote file sharing and virtual memory, which are totally invisible to application programs.

NIH Class library

The NIH Class library provides a rich collection of so-called foundation classes. It comprises high level data structures (dynamic arrays, dictionaries, sets, ...), as well as support mechanisms (transmission and storage of complex object clusters, and synchronization among interdependent objects). We described a garbage collector (GC) for automatic memory compacting [4]. It proved to be of no use in practice, however, because the use of the GC introduced more complexity than manual deallocation of data structures. We have started to eliminate duplicate functionality from InterViews and Allegro, by replacing their foundation classes whenever possible with the NIH Class counterparts.

SYSTEM BUILDING BLOCKS

The main high level components which we have implemented are an input-output manager, a file manager, an overview facility for large collections of images, a colour manager, and an interactive image processing tool that we call the Image Calculator (ImCa). These tools are distributed over different processes. They can transparently request services from other processes using Allegro facilities. This approach has a lot in common with the classical UNIX shell approach, but unlike the latter it does not preclude the incorporation of a powerful GUI.

File Manager

The File Manager (FM) provides interactive browsing facilities for the hierarchical UNIX file system. This includes browsing files (naming, deleting, ...) and operations on the directory tree. It can be thought of as the graphical equivalent of the UNIX ls command. In Figure 1, the content of a directory is shown in an iconic representation, with the right window showing more detailed information about the selected file. At the same time, the FM provides services to the other processes when they request a filename to open data files or save results on disk. The actual reading and writing of files is done by the component which requested the service. The fact that we have only one file and directory browsing tool ensures that it is consistent, and the code is not replicated in each program that needs assistance for file operations.


The file operations in the other components rely on the file manager. In a first implementation we provided file operations which executed operations on stub objects. The stubs relayed these requests to the file manager. This design introduced unnecessary and unwanted interdependencies among components. Our new approach is more orthogonal: file selection is done through a text dialogue, but this dialogue can cooperate with the selection process in the file manager. We now have to check for a valid filename when it is typed in manually. If the name was generated by the file manager, a flag is set which indicates that validity checks are not necessary.

The file manager operates on a memory resident representation of the UNIX directory hierarchy. This allows us to cache the part of the directory tree that was already visited. Because only the FM process contains a memory representation of the tree, no replication overhead is involved. To avoid inconsistencies with the actual state of the tree on disk, we compare the timestamps of the disk version and the memory structures to decide whether to update the latter. This is done in an incremental fashion when the user visits a directory.
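The timestamp test can be sketched as follows; DirNode and cacheIsStale are hypothetical names introduced for illustration, using the standard POSIX stat() interface rather than the file manager's actual code:

```cpp
#include <ctime>
#include <string>
#include <sys/stat.h>

// Illustrative sketch of the cache-invalidation check: a cached
// directory listing is rebuilt only when the on-disk modification time
// is newer than the moment the listing was last scanned.
struct DirNode {
    std::string path;        // directory this node mirrors
    std::time_t scannedAt;   // when the cached listing was built
};

// True when the cached listing must be refreshed from disk.
bool cacheIsStale(const DirNode& node) {
    struct stat st;
    if (stat(node.path.c_str(), &st) != 0)
        return true;                    // directory vanished or unreadable
    return st.st_mtime > node.scannedAt;
}
```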

Image format handling and input-output manager

Currently, no consensus exists about image storage formats among researchers or manufacturers of (medical) imaging equipment. This implies that for each particular storage format, procedures must be designed to do I/O operations in the required format. To isolate this problem from the rest of the environment, we designed a mechanism which enables different formats to be treated through a common interface. This is fairly easy using the virtual functions and inheritance mechanisms of C++. First, a common interface is defined as a set of empty, virtual functions in a top level class (open, close, extract, ...). For each actual storage format, a class can be derived from the top level one, which effectively implements the interface functions.

The problem remains then to select the correct interface. We used a descriptive header file per image in which a type name was recorded [4]. Opening an image consisted then of reading the type field from the header file and using a switch statement to select the appropriate interface. This solution lacks elegance, because each time an interface for a new format was added, the file which contained the code to select the correct interface functions had to be recompiled, and the whole application had to be relinked. Our solution to this problem consists of two parts. The switch has been replaced by a dictionary data structure which associates type names with an interface; the corresponding file is only compiled once. (We now also encode the file type in a single Desktop file per directory, but this is not essential.) For each format we have an associated decoding object, which is created after the interface dictionary and then registered in it. The decoding classes can be developed and tested as separate entities. Their object files can then be linked with the I/O module, without having to touch the code for the interface dictionary.

We load the entire file content into a virtual memory buffer on which all operations, such as slice extraction, are done. Because we have multiple buffer representations, one interface was defined through which the clients request services (slices or ranges of slices). The connection between this memory buffer and the original file is maintained throughout the life of the buffer. If the buffer is discarded, the application allows the user to merge the buffer with the original file, or rather to record it in a separate file. This extra level of indirection is much safer than operating directly on files. Because the image buffers are managed inside a single object space, the organization is straightforward. Utility functions which operate on the image volume are also implemented in this object space.
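The mechanism can be sketched in outline as follows. The class and format names are our own illustrations, not the system's code: a base class of virtual functions, one stub decoder, and the dictionary that replaces the switch. A real decoder would parse the vendor's file layout:

```cpp
#include <map>
#include <memory>
#include <string>
#include <vector>

// Common interface: empty virtual functions in a top level class.
class ImageFormat {
public:
    virtual ~ImageFormat() = default;
    virtual std::string name() const = 0;
    virtual std::vector<unsigned char> extractSlice(int index) const = 0;
};

// The dictionary that replaces the switch statement: type names map to
// decoder objects, so adding a format never touches this code.
class FormatRegistry {
public:
    void add(std::unique_ptr<ImageFormat> format) {
        table[format->name()] = std::move(format);
    }
    const ImageFormat* lookup(const std::string& typeName) const {
        auto it = table.find(typeName);
        return it == table.end() ? nullptr : it->second.get();
    }
private:
    std::map<std::string, std::unique_ptr<ImageFormat>> table;
};

// One concrete decoder (a stub); its object file can be developed,
// tested and linked in separately.
class InterfileFormat : public ImageFormat {
public:
    std::string name() const override { return "interfile"; }
    std::vector<unsigned char> extractSlice(int) const override {
        return std::vector<unsigned char>(64 * 64, 0);  // stub 64x64 slice
    }
};
```

Each decoder registers itself in the dictionary at start-up; opening an image then reduces to one lookup by type name.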

Image overview component

After an image is transferred to a buffer, the user can display an overview of the content of the buffers (backgrounds of Figures 2, 3 and 4). The images in this 'default' view have their natural size (the dimensions of the buffer). They provide an initial impression of the image quality and the image features. The user can open additional views on the same buffer, which may contain enlarged or reduced pictures. The overview facility can cooperate with other tools to specify ranges which have to be loaded from the buffer space. This cooperation is non-modal, in the sense that the communication is never initiated by the other application. It promotes orthogonality and functional segregation, and it decreases dependencies among components.

Originally, we used one ImageView object to visualize one image slice. This approach is wasteful with respect to X server resources. We then implemented an ImageView which can display multiple slices in a tabular row-column format using a single X window. The related SliceView object may be used to display a single slice. The next step in the development process was to separate the display policy from the display mechanisms in the ImageView. This allows us to use an ImageView to display a visual representation of the LIFO stack. In a stack arrangement, placing slices does not follow a simple row-column strategy, because adding a new image slice 'pushes' the slices which were already present to the right. The separation of mechanism and policy is a central theme in our design.

Also shown are three derived applications. They offer the possibility to inspect a data volume with other viewing techniques. The Volume Rendering Component (Figure 2) renders a shaded pseudo-3D view of the entire data volume. The picture shows a volume that has been rendered using constant depth shading. More sophisticated shading algorithms are currently being implemented. A slightly modified version of the rendering algorithm allows the extraction of arbitrary oblique slices from the data volume (Figure 3). Also shown is the stereographic viewing component (Figure 5), which supports stereographic projections of arbitrary spherical subsets of a data volume. The bounding box of the surface being projected can be interactively manipulated on three simultaneous orthogonal views. This results in a kind of geographical map. The screen shot shows a HMPAO SPECT study of a patient who suffered from a stroke.

Sometimes coordinate transforms highlight special symmetries or features in an image. This is shown in the MRI image in Figure 6, where the central brain fissure clearly shows up at two discrete angular positions (horizontal axis) after polar transformation. Also shown are the central axes of the images, which were calculated from first order moments. The polar view suggests that geometrical registration between similar images which differ by a rotation might be easier in polar than in Cartesian form. A coworker in our laboratory has actually implemented a novel registration method based upon this observation, although his approach was mainly derived from theoretical considerations.
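The separation of display policy from display mechanism in the ImageView can be sketched as a small class hierarchy; the names below are ours, not the system's. The view asks an interchangeable policy object where each slice belongs, so that tabular and stack arrangements share one display mechanism:

```cpp
#include <cstddef>

// Illustrative sketch of policy/mechanism separation for slice layout.
struct Cell { std::size_t row, column; };

class LayoutPolicy {
public:
    virtual ~LayoutPolicy() = default;
    // Where does slice number i go in a view that is `columns` wide?
    virtual Cell place(std::size_t i, std::size_t columns) const = 0;
};

// Plain tabular placement: fill rows left to right.
class RowColumnLayout : public LayoutPolicy {
public:
    Cell place(std::size_t i, std::size_t columns) const override {
        return { i / columns, i % columns };
    }
};

// Stack placement: the newest slice occupies the first cell and pushes
// the older slices one position to the right.
class StackLayout : public LayoutPolicy {
public:
    explicit StackLayout(std::size_t count) : total(count) {}
    Cell place(std::size_t i, std::size_t columns) const override {
        std::size_t shifted = total - 1 - i;   // newest first
        return { shifted / columns, shifted % columns };
    }
private:
    std::size_t total;   // number of slices currently on the stack
};
```

The view itself never changes; swapping the policy object turns a tabular overview into a stack visualization.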


Colour table manager

This tool allows modifications to be made to the screen colours. Presently, we expect a visual which supports at least 256 different colours through a lookup table (LUT). This is not in accordance with the X philosophy that any well behaved X client should be able to run on any server, but image processing is a very specialized field anyway. Colour table management is complex under X, and it is very difficult to come up with a general strategy which allows cooperating clients to satisfy their mutual colour preferences. X allows up to one colour map per individual window. Because most servers control only one hardware colourmap, X internally supports virtual colourmaps which are all mapped onto a single physical LUT. According to the X Window System's ICCCM conventions, the window manager client is responsible for installing and uninstalling colourmaps for the individual clients' top windows. When a client has the focus, it is free to execute its own policy on its subwindows. We decided not to follow these guidelines, because that would force us to incorporate colour handling functions in all clients which need that functionality. In our approach, the colour table is managed by a single client, which is responsible for satisfying the needs of the others. The colour table is divided into three parts:

• system area (32 entries)
• scratch area (32 entries)
• lookup table area (192 entries).

The system area contains entries for the user interface components. This ensures the use of consistent palettes and avoids disturbing 'Las Vegas' effects. Colours in this area can be shared among clients and are loaded using the X Resource Manager. They can be referenced by their English name. A browser which shows colours and their names can be used to select appropriate entries.
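The three-part partition above might be represented as follows; only the area sizes are taken from the text, while the types and the grey-palette loader are our own illustrative sketch:

```cpp
#include <array>
#include <cstddef>

// Illustrative sketch of the 256-entry colour table partition.
struct RGB { unsigned char r, g, b; };

constexpr std::size_t kSystemEntries  = 32;   // user interface colours
constexpr std::size_t kScratchEntries = 32;   // exclusive writeable cells
constexpr std::size_t kLutEntries     = 192;  // image colour scale

using ColourTable =
    std::array<RGB, kSystemEntries + kScratchEntries + kLutEntries>;

// Load a linear grey scale into the lookup table area only, leaving the
// system and scratch areas untouched.
void loadGreyPalette(ColourTable& table) {
    const std::size_t base = kSystemEntries + kScratchEntries;
    for (std::size_t i = 0; i < kLutEntries; ++i) {
        unsigned char level =
            static_cast<unsigned char>((i * 255) / (kLutEntries - 1));
        table[base + i] = { level, level, level };
    }
}
```

The Inverted grey and Cyclic palettes mentioned below would be loaded into the same 192-entry region by analogous functions.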


[Figures 1-7 appear here: screen shots of the components described in the text]

The scratch area can be used when an application needs an exclusive writeable entry. An example is the colour-name palette browser itself, when it is used to interactively create a new colour using sliders to specify an RGB (Red, Green, Blue) mixture.

The lookup table area contains the colour scale which the image visualization tools use. The user can load a number of predefined palettes, such as Grey, Inverted grey or Cyclic (for phase images). It is also possible to compose a new palette, using a browser that allows part or whole of the LUT area to be changed (Figure 7). Essentially, one specifies graphically the relation (entry, RGB triple) by interactively drawing the graph of the relation (red line for the R component, and so on). The available range can be split into a number of separately controllable regions. This allows the user to change the effective dynamic range of the LUT or to introduce highlighting effects. The parameters for the LUT are placed in the X server as a globally accessible entity (an X property) that is consulted by the visualization clients.

Image calculator

One of the main problems in interactive image processing is the organization of calculations. Users must be able to control, in an intuitive and consistent way, which operations are to be applied to which images. The ergonomics of the user interface are crucial to solving these problems. In our system, images are selected for processing by putting them on an image calculation stack. Operations on image stacks are modelled after the popular RPN calculators from Hewlett-Packard. They include control operations (push, pop, last, swap, hop, clear, roll) and mathematical operations on images (convolution and transcendental functions, filters, arithmetic, ...). Only images of the same size and dimension are allowed on one stack. This is no limitation, since multiple stacks may be active simultaneously, and transfers of images which reside on different stacks are supported by using the subsampling or enlargement functions defined in the image classes. On the screen, each calculation stack has an associated view. These views are kept in synchronism with the contents of the stacks.

Using stacks to organize images considerably simplifies control structures. Unary operations are always executed on the top element of the stack. Binary operations are executed on the first two elements on the stack: they are popped off the stack, and the result of the operation is pushed on the stack again. This approach is rigid, but it solves most problems we encountered in other systems:

• it excludes syntactical ambiguities (only correct command sequences can be entered), because the stack paradigm automatically amounts to postfix specification of all operations;
• the screen organization is unambiguous; no user interaction is required to specify where on the screen results will be placed;
• undo is automatically available, because discarded slices are pushed from the calculation stack onto an undo stack (up to a certain level).
This approach can easily be extended to curves, but our experience has shown that the stack visualization scheme is not optimal to represent curves:

• numerical operations among curves are less common (especially multiplication and division);
• values on the curve frequently need to be inspected interactively, using some kind of rubber-banding cursor to pick points on the curve;
• overlaying multiple curves is a common operation. Visual overlaying could also be done on a stack view, but it is at odds with the requirement that the stack view should be the exact equivalent of the in-memory data structures;
• interactive range operations (e.g. zooming) are frequent.
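The stack discipline described above (binary operations pop two images, push the result, and move the discarded operands to an undo stack) can be sketched as follows; ImageStack and its members are hypothetical names, with an image reduced to a flat pixel vector for illustration:

```cpp
#include <cstddef>
#include <functional>
#include <stdexcept>
#include <utility>
#include <vector>

using Image = std::vector<double>;   // flat pixel values, for illustration

class ImageStack {
public:
    void push(Image img) { stack.push_back(std::move(img)); }

    // Apply a pixel-wise binary operation to the two topmost images.
    void binaryOp(const std::function<double(double, double)>& op) {
        if (stack.size() < 2) throw std::runtime_error("need two images");
        Image b = pop(), a = pop();
        Image result(a.size());
        for (std::size_t i = 0; i < a.size(); ++i)
            result[i] = op(a[i], b[i]);
        undo.push_back(std::move(a));   // discarded operands are kept,
        undo.push_back(std::move(b));   // which gives undo "for free"
        push(std::move(result));
    }

    // Reverse the last binary operation: drop the result and restore
    // the two operands in their original order.
    void undoLast() {
        if (undo.size() < 2 || stack.empty()) return;
        stack.pop_back();
        Image b = undo.back(); undo.pop_back();
        Image a = undo.back(); undo.pop_back();
        push(std::move(a));
        push(std::move(b));
    }

    const Image& top() const { return stack.back(); }
    std::size_t depth() const { return stack.size(); }

private:
    Image pop() {
        Image i = std::move(stack.back());
        stack.pop_back();
        return i;
    }
    std::vector<Image> stack, undo;
};
```

Because operands are specified purely by stack position, every entered sequence is a valid postfix expression; no placement dialogue is ever needed.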

Therefore we are now implementing a spreadsheet which allows operations on mixtures of images and curves. It essentially uses the same visualization and computational classes as the stack calculator. Only the controlling part is not based on a stack but on a graph: nodes represent images or curves, and links represent operations which express the relations between the nodes.

CONCLUSION

The system described in this paper is still under development. In its current state it is suitable for research purposes, but it is not stable and robust enough to be used in a clinical NM department. Our object-oriented framework, supported by the tools that we discussed, has led to a truly modular, manageable system. The use of the X Window System based InterViews toolkit enabled us to create a distributed graphical user interface 'for free'. We then used the distributed object management tool Allegro to achieve the same effect for non-graphical services. Services which belong logically together are grouped inside a single process. This promotes modularity, while Allegro allows us to integrate these processes in a consistent, homogeneous environment.

In the near future we plan to add XDR support to Allegro to achieve data representation independence. We are also thinking about tools to let an end-user interactively specify the exact connections between the different object spaces. That part is now handled inside the application programs, with only limited configurability. It will also be necessary to extend the basic tools that we have already developed. The following points are on our list of wishes:

• a macro language to specify repetitive operations. We have done experiments with 'replaying' user interaction through synthetic events; this is a most effective approach for demonstration purposes. A more comprehensive solution would be to integrate a 'small' interpretive language into the calculation tool;
• integration of additional bulk image processing operations, such as tomographic reconstruction;
• support for graphical 3D representations;
• support for data fusion between data from both nuclear medicine and radiological modalities (MRI, X-ray, CT);
• removing the need to relink the image loader when a new format is introduced. We are currently experimenting with a novel approach to this problem which does not rely on dynamic linking. This approach will also solve the problem of accessing data which is logically a single entity but which is physically represented by separate entities.


REFERENCES

1 Linton, M A and Calder, P R 'The design and implementation of InterViews', USENIX Proceedings C++ Workshop, Santa Fe, NM (1987)
2 Linton, M A, Quong, R W and Calder, P R 'The design of the Allegro programming environment', USENIX Proceedings C++ Workshop, Santa Fe, NM (1987)
3 Gorlen, K E 'An object oriented class library for C++ programs', USENIX Proceedings C++ Workshop, Santa Fe, NM (1987)
4 Tombeur, D and Deconinck, F 'The design of a UNIX workstation environment for medical image processing', EUUG Spring Conference, Brussels, Belgium (1989)
5 Stroustrup, B The C++ Programming Language, Addison-Wesley, Wokingham (1986)
6 Scheifler, R W and Gettys, J 'The X Window System', Commun. ACM, Vol 29 No 3 (March 1986) pp 184-201
7 Network File System Protocol Specification, Sun Microsystems (1986)
8 Remote Procedure Call Programming Guide; Remote Procedure Call Protocol Specification, Sun Microsystems (1986)
9 External Data Representation Protocol Specification, Sun Microsystems (1986)
