Developing technologies for broad-network concurrent computing


Journal of Systems Architecture 45 (1999) 1279–1291

www.elsevier.com/locate/sysarc

Paul A. Gray *, Vaidy S. Sunderam 1

Emory University, Department of Mathematics and Computer Science, North Decatur Building /100, 1784 North Decatur Road, Atlanta, GA 30322, USA

Abstract

Recent developments in networking infrastructures, computer workstation capabilities, software tools, and programming languages have motivated new approaches to broad-network concurrent computing. This paper describes extensions to concurrent computing which blend new and evolving technologies to extend users' access to resources beyond their local network. The result is a concurrent programming environment which can dynamically extend over network and file system boundaries to envelope additional resources, to enable multiple-user collaborative programming, and to achieve a more optimal process mapping. Additional aspects of the derivative environment feature extended portability and support for the accessing of legacy codes and packages. This paper describes the advantages of such a design and how they have been implemented in the environment termed "IceT". © 1999 Elsevier Science B.V. All rights reserved.

Keywords: Distributed virtual machine; Distributed computing; Java; Native methods; Heterogeneous cluster computing

1. Blending existing and evolving technologies

In recent years, the advances in both raw computing ability of personal workstations and communication technologies have given support to the viability of harnessing networked clusters of workstations for concurrent, parallel computing. At the same time, programming languages, such as Java and Perl, and the

* Corresponding author. Tel.: +1 404 7271971; fax: +1 404 7275611; e-mail: [email protected]
1 E-mail: [email protected]

1383-7621/99/$ – see front matter © 1999 Elsevier Science B.V. All rights reserved.
PII: S1383-7621(98)00068-X


growing popularity of the Internet have led to vast pools of programs accessible for anonymous utilization. Even though there are several ongoing projects working to utilize Internet resources for concurrent computing, Internet environments which are well suited for anonymous utilization and long-term interaction amongst computationally intensive concurrent process collectives have yet to be realized. In contrast, existing environments which allow distributed, parallel programming amongst clustered workstations (such as PVM and MPI [1,2]) are designed for environments consisting of resources located within a local network. The IceT project incorporates the advantages of Internet technologies with the established techniques of network cluster computing, while maintaining a deference to what each discipline is best suited for. For example, many projects are looking to Java as a high-performance language ([3], for example); however, Java has yet to be established as such, 2 whereas Fortran and its variants appear to be immovable as standard mathematical package languages. On the other hand, lack of security, architectural dependence and lack of standardized implementations prevent languages like C, C++, and Fortran from realizing the portability inherent in Java. Therefore, the emphasis in developing the IceT distributed computing environment has been on utilizing Java for its portability and dynamic loadability features while permitting utilization of native 3 code where problem size, computational speed or communication aspects are of utmost importance. The aspects of the IceT environment addressed within this paper describe how leveraging the portability of Java with established distributed computing message-passing paradigms leads to enhancements in distributed computing across local network and file system boundaries.
The end result is an environment that supports broad-network concurrent computing paradigms and the use of system-dependent codes in order to achieve high-performance computations. In general terms, IceT presents a message-passing environment based on established models of cooperating sequential processes interacting via explicit messages. However, the IceT framework significantly extends the scope of this model by providing greatly increased dynamism and flexibility in (1) locating and migrating static processes, and (2) dynamically merging and splitting virtual environments. To the user, the latter property provides a means of augmenting personal computational resources through the merging of local and external resources, or for collaborating users 4 to pool together individual resources. The former property of IceT is thus needed to provide the mechanism to ferry processes across virtual environmental boundaries for subsequent execution. It is also valuable as a means of dynamically uploading computational tasks to remote locations, a methodology for "soft installation", and for on-the-fly software updates. The general setting of IceT is depicted in Fig. 1, where individual users have used IceT to merge their local environments to form a multi-level, time-shared virtual resource pool. Upon the combined resources, processes can be freely accessed and executed subject to the security restrictions imposed by the owner of the resource. In Fig. 1, there is a single concurrent process which is running in concert over several of the computational resources within the greater environment.

2 A further discussion on considerations and the future viability of Java for high-performance programming is given in Ref. [4].
3 Throughout this paper, the term "native" will refer to code written in C, C++, or Fortran.
4 For details of IceT relating to the programming of collaborative tools, see Ref. [5].


Fig. 1. IceT allows users to merge their local computational resources in order to create a distributed virtual machine.

The remaining portion of this paper discusses the implementation considerations required to support this scenario, namely communication, static process migration and other topics associated with concurrent computing across network boundaries. Section 2 discusses details of the IceT daemons and their contribution to the distributed environment. Section 3 illustrates aspects of the IceT API which facilitate communication amongst concurrent processes by way of a Java-based distributed computation example. Details relating to the encapsulation of native (C/Fortran) code into portable shared libraries are given in Section 4.

2. The IceT implementation

The IceT software subsystem is built upon three major components: daemons, consoles, and tasks. Daemons are responsible for maintaining communication links between individual resources and for initiating process creation; consoles provide individual users direct access to the functionality of the daemons; and tasks are processes which connect to and utilize the IceT environment for the passing and receiving of messages in order to perform their intended function. Details on the IceT console and user interaction with the IceT environment can be found in Ref. [5]. Tasks and the IceT API are presented in detail in Section 3.

The glue which holds the distributed environment together is the connection between the IceT daemons. In order for resources to be utilized by a concurrent computation, some connection must exist between the processes and resources which permits communication. The daemons provide this fundamental link, supplying communication and routing within the computational resource pool. As was shown in Fig. 1, this joining together of resources creates a virtual computational group upon which distributed computations may run. In addition, the daemons are responsible for creating instances of processes on the local resource. For this, the daemon needs to locate and resolve static dependencies of the pending process, to locate and link external shared libraries, and to manage system- and security-dependent instantiation of the process. IceT processes are written in Java as the base programming language. These Java-based programs may, in turn, access C, C++, or Fortran code through Java-based wrappers to shared libraries (using the Java Native Interface (JNI) specification [6]). In order for a daemon to instantiate an IceT


Fig. 2. The static form of an IceT process is a Java-based "class" file which consists of interfaces, dependency classes and perhaps shared libraries.

process, it must locate the base class of the Java program, all of the system- and user-defined classes which are used by the base class, the interfaces implemented by the class, and the shared library form appropriate for the local architecture and operating system. Fig. 2 illustrates a typical IceT process' static composition.

In the setting of a distributed virtual machine such as proposed in Fig. 1, a major obstacle concerns the creation of a remote process. For example, if just two systems are joined together in a distributed virtual machine, the issue being addressed here is how to "spawn" a process on the remote resource when the process itself is not originally located on the remote system. In creating a remote instance of a process, the remote daemon is contacted with a request to create the process. If the process is not found on the remote system, the daemon responds with a request for the byte code representation of the process' Java-based front end, subsequently referred to as the process' "primary class". The daemon parses through the byte code representation of the primary class and generates a first-level list of class and shared library dependencies by analyzing the methods list and "Constant Pool" entries in the byte code representation of the primary class. The main objective in this parsing of the byte code is to detect use of native library calls and to detect behavior of the process which would compromise security protocols. In the simplest setting, when an IceT process does not depend upon any native library calls, the installation of the process is achieved by IceT in a manner somewhat similar to that of a web browser which loads and resolves the dependency classes of a specified applet.
However, if the pending class makes use of native calls and such use is allowed by the security protocols in place, the shared library suited to the local architecture and operating system is located and, if necessary, soft-installed upon the shared library path of the daemon. Unlike a web browser, which locates, downloads and resolves dependency classes only when they are actually accessed during execution, the classes in the first-level dependency list are obtained and recursively analyzed for dependencies and shared library use prior to the daemon's attempt at instantiating a new instance of the class. The term "soft-installed" reflects the self-contained loading in of the specified process' Java-based classes, with access of the local file system occurring only if the pending class depends upon shared library calls. After the process terminates, the representation of the class is garbage-collected and any shared libraries introduced to the system are removed. For more
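The recursive dependency analysis described above can be sketched in plain Java. This is an illustrative stand-in, not IceT's actual code: the dependency map here is a hypothetical substitute for what the daemon would extract from a class file's Constant Pool, and the class names are invented.

```java
import java.util.*;

// Sketch: compute the transitive closure of first-level class dependencies
// before instantiation, mirroring the daemon's recursive byte-code analysis.
// The `deps` map stands in for Constant Pool extraction (an assumption here).
class DependencyResolver {
    static Set<String> closure(String primary, Map<String, List<String>> deps) {
        Set<String> resolved = new LinkedHashSet<>();
        Deque<String> pending = new ArrayDeque<>();
        pending.push(primary);
        while (!pending.isEmpty()) {
            String cls = pending.pop();
            if (!resolved.add(cls)) continue;            // already analyzed
            for (String d : deps.getOrDefault(cls, List.of()))
                pending.push(d);                         // analyze before instantiation
        }
        return resolved;
    }

    public static void main(String[] args) {
        // Hypothetical dependency graph for the example driver of Section 3.
        Map<String, List<String>> deps = Map.of(
            "GaussJacobiDriver", List.of("TaskProtocols", "TaskElement"),
            "TaskProtocols", List.of("TaskElement"));
        System.out.println(closure("GaussJacobiDriver", deps));
    }
}
```

Note that the visited-set check also terminates cleanly on cyclic dependency graphs, which real class graphs routinely contain.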


details relating to the loading in and instantiating of classes using the Java ClassLoader class, see Ref. [7]. For details pertaining to specific extensions to the ClassLoader class which enable the functionality described above, see Ref. [8].

3. Programming example

This section describes the general Java binding of the API used to connect to and interact with the IceT environment, by way of a distributed computation. The example below shows the fundamental calls which one might use to perform the task of solving the linear system Ax = b over combined local and remote resources using Gauss–Jacobi iterations, where A ∈ R^(n×n) is a diagonally dominant tridiagonal matrix and b, x ∈ R^n. The example illustrates distribution of the computation using a master/slave paradigm. The master program equally distributes the computation over a given number of slave processes and gathers in the final results.

Given below is a listing of the source for the "primary" class file of the IceT program GaussJacobiDriver. As shown here, a typical IceT program begins by importing the IceT package (line 1). This allows the program to utilize objects associated with the IceT environment, such as Buffers (lines 30–34 and subsequently line 43, for example), Ids (lines 14, 43), and TaskElements (lines 21, 22, and 73 in the next listing).

     1  import IceT.*;
     2  public class GaussJacobiDriver extends TaskProtocols {
     3    final static int NUM_SLAVES = 10;
     4    final static int SIZE = 200000;
     5    static final int ERRORPORTION = 1,
     6                     LOWERXVALUE = 2,
     7                     TOTALERROR = 4,
     8                     RESULTS = 5,
     9                     BASICINFO = 888;
    10    ...
    14    static Id lastSlave;
    15    static TaskElement[] taskArray;
          ...
    18    public void run() {
    19      try {
    20        TaskElement myTaskElement = IceT();
    21        taskArray = new TaskElement[NUM_SLAVES];
    22        numSpawned = spawn("GaussJacobiSlave", NUM_SLAVES, taskArray);
    23        if (numSpawned != NUM_SLAVES)
    24          ...

    25          throw new IceTException("Error spawning slave tasks");
    26        // All slaves are on board.
    27        // Send info to the slaves.
    28        for (int i = 0; i < NUM_SLAVES; i++) {
              ...
    30          buf.reset();
    31          buf.pack(SIZE);
    32          buf.pack(blockSize);
    33          buf.pack(i + 1);
    34          buf.pack(TOLERANCE);
              ...
    43          send(buf, taskArray[i].myId, BASICINFO);
    44        }
    45        ...

Additionally, all IceT programs must extend the TaskProtocols class (line 2). The TaskProtocols class contains the methods which directly interact with the IceT environment, such as spawn (line 22), send (line 43), and IceT() (which registers the task with the IceT environment, line 20). Extending the TaskProtocols class is necessary for maintaining portability of processes within IceT. The TaskProtocols class is itself a subclass of java.lang.Thread, and therefore its execution can be governed by a remote host accordingly. The process under which the TaskProtocols classes are run may stop, start, destroy, or re-prioritize the thread, or even serialize it to another host for subsequent execution or serialize the process to disk for archival. 5

The run method (line 18) is the entry point of an IceT process. In this example, the task registers with the IceT environment (line 20) and spawns NUM_SLAVES copies of the GaussJacobiSlave process (line 22). There are variations of the "spawn" command which allow the user to specify specific mappings of tasks to resources; however, the call illustrated here defaults to a host chosen by an IceT-generated load balancing strategy. The spawn command returns the number of slaves successfully created. If all the slaves were successfully created, a buffer packed with the requisite program parameters is sent to each of the respective slave processes (lines 29–44). The driver program maintains responsibility for the tail portion of the Gauss–Jacobi iterations and initializes working vectors (code omitted) while the slave processes compute Gauss–Jacobi iterations on their respective portions of the solution vector, communicating dependent information amongst themselves. The driver program then collects the computed error portions from the slaves, calculates the global error estimate of the iteration, and passes the result along to the slaves (lines 72–82). Collection of the results occurs once the error is less than the prescribed tolerance.
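The TaskProtocols-as-Thread design described above can be illustrated with a minimal stand-in. This sketch is hypothetical (MiniTask is not an IceT class): it shows only the relevant property, namely that a task is a Thread subclass whose run() is its entry point, so a hosting environment can start it, wait on it, or adjust its priority.

```java
// Illustrative stand-in for the TaskProtocols idea: the task body lives in run(),
// and the host controls the thread's lifecycle and scheduling from outside.
class MiniTask extends Thread {
    volatile boolean finished = false;

    public void run() {
        // A real IceT task would register with the environment here and
        // exchange messages; this sketch just marks completion.
        finished = true;
    }

    public static void main(String[] args) throws InterruptedException {
        MiniTask t = new MiniTask();
        t.setPriority(Thread.MIN_PRIORITY); // host-controlled re-prioritization
        t.start();                          // host-controlled start
        t.join();                           // host waits for the task
        System.out.println("finished = " + t.finished);
    }
}
```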

5 Object Serialization details may be found in Ref. [9].

    72        for (i = 0; i < NUM_SLAVES; i++) {
    73          buf = recv(taskArray[i].myId, ERRORPORTION + 5*count);
    74          presentError += buf.unpackDouble();
    75        }
    76        error = Aitken(presentError, lastError, timeBeforeLastError);
    77        buf.reset();
    78        buf.pack(error);
    79        for (i = 0; i < NUM_SLAVES; i++)
    80          send(buf.bufferClone(), taskArray[i].myId, TOTALERROR + 5*count);
    81        if (error <= TOLERANCE || count > MAXITER) break;
    82        ...
    83        ...
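The Aitken routine invoked above is not listed in the paper; presumably it refers to Aitken's Δ² extrapolation, a standard technique for accelerating a linearly converging sequence of error estimates. A sketch of that standard formula (this is an assumption about what the driver's Aitken helper computes, not the paper's actual code):

```java
// Aitken's delta-squared extrapolation: given three successive members of a
// (roughly geometrically) converging sequence a, b, c, estimate its limit as
//   c - (c - b)^2 / ((c - b) - (b - a)).
class Aitken {
    static double delta2(double a, double b, double c) {
        double d1 = b - a, d2 = c - b;
        double denom = d2 - d1;
        if (Math.abs(denom) < 1e-300) return c; // sequence already (nearly) converged
        return c - d2 * d2 / denom;
    }

    public static void main(String[] args) {
        // For the geometric sequence 1, 0.5, 0.25 (limit 0), the estimate is exact.
        System.out.println(delta2(1.0, 0.5, 0.25));
    }
}
```

For an exactly geometric sequence the extrapolated value equals the true limit, which is why such an estimate is useful for deciding convergence of the global error.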

When the error tolerance has been met, the driver collects the computed results from the slave processes and outputs the result appropriately (lines 94–104).

          ...
    94      double[] results = new double[SIZE];
    95      for (i = 0; i < NUM_SLAVES; i++) {
    96        buf = recv(taskArray[i].myId, RESULTS + 5*count);
    97        buf.unpackDouble(); // throw away leader.
    98        for (int j = i*blockSize; j < (i+1)*blockSize; j++)
    99          results[j] = buf.unpackDouble();
   100      }
   101      for (i = 1; i < x.length - 1; i++)
   102        results[tail_start + i - 1] = x[i];
   103      showResults(results);
   104      }
   105    }
   106    catch (IceTException ite) {
   107      TLog("IceTException thrown: " + ite);
   108    }
   109    finally {
   110      exitIceT();
   111    }
   112  }
   113  }

Unanticipated errors resulting from within IceT method calls may result in the throwing of one or more IceTExceptions. Lines 106–108 catch this exception if thrown and write the details of the exception to the error log. The task finishes by separating itself from the IceT environment by calling exitIceT() (lines 109–111).
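The numerical kernel distributed over the slaves can be sketched in plain Java, stripped of all IceT message passing. This single-process version is an illustration of the Gauss–Jacobi iteration itself, not IceT code; the matrix A is given by its three diagonals and is assumed strictly diagonally dominant so the iteration converges.

```java
// Gauss-Jacobi iteration for a tridiagonal system Ax = b, single process.
// lower[i], diag[i], upper[i] hold A's sub-, main and super-diagonal in row i
// (lower[0] and upper[n-1] are unused).
class GaussJacobi {
    static double[] solve(double[] lower, double[] diag, double[] upper,
                          double[] b, double tol, int maxIter) {
        int n = diag.length;
        double[] x = new double[n], xNew = new double[n];
        for (int it = 0; it < maxIter; it++) {
            double err = 0.0;
            for (int i = 0; i < n; i++) {
                double s = b[i];
                if (i > 0)     s -= lower[i] * x[i - 1];
                if (i < n - 1) s -= upper[i] * x[i + 1];
                xNew[i] = s / diag[i];
                err = Math.max(err, Math.abs(xNew[i] - x[i]));
            }
            System.arraycopy(xNew, 0, x, 0, n);
            if (err < tol) break;   // global error below the prescribed tolerance
        }
        return x;
    }

    public static void main(String[] args) {
        // Diagonally dominant test system: tridiag(-1, 4, -1), b = 2.
        int n = 5;
        double[] lo = new double[n], d = new double[n], up = new double[n], b = new double[n];
        for (int i = 0; i < n; i++) { lo[i] = -1; d[i] = 4; up[i] = -1; b[i] = 2; }
        double[] x = solve(lo, d, up, b, 1e-12, 1000);
        System.out.printf("x[0] = %.6f%n", x[0]);
    }
}
```

In the IceT version, each slave owns a block of x and the two accesses to x[i-1] and x[i+1] at block boundaries become the messages exchanged between neighboring slaves.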


The GaussJacobiSlave program spawned in line 22 would follow similar registration, communication and termination methodologies. As such, the primary Java class listing for the slave process is omitted here. An aspect of both the driver and slave processes most worth noting is that neither is assumed to exist initially on the remote host upon which it will be executed. In the absence of finding a process locally, the IceT daemon will query the IceT environment to locate the static representation of the process and will soft-install the process(es) locally, to be garbage-collected upon termination.

4. Access to and portability of native code

The above section illustrated the fundamentals of program interaction with the IceT environment by way of a distributed linear system solver. Unfortunately, the aspects and features which provide Java with such universal portability have a significant cost; namely, speed. Java is an interpreted language by nature and, as such, suffers a significant performance hit when compared to highly optimized C or Fortran programs which perform compute-intensive calculations. Details which benchmark Java-based IceT applications against equivalent PVM/C implementations and against IceT/C blendings which accomplish the same task were presented in Ref. [10]. The results therein show that a mixing of Java and native C programs within the IceT environment can lead to performance on par with PVM/C in high-grain computations (large computation-to-communication ratios). However, the results in Ref. [10] also involve the message-passing overhead, which is currently a significant bottleneck in the Java-based message-passing implementation of IceT. This section serves to illustrate how the portability of processes within IceT and the raw computational speed of C and even Fortran can be coupled to form a uniquely abled, broad-network concurrent computing environment.

4.1. Performance issues

Table 1
Time (in seconds) to compute the solution to the tridiagonal linear system Ax = b

Problem size              Fully-optimized           Pure Java
(main diagonal length)    Java/C/Fortran blend      implementation
1550                      0.001027                  0.011955
6250                      0.003339                  0.05061
25 000                    0.01514                   0.21004
100 000                   0.06902                   0.8442
400 000                   0.2760                    3.3700

Benchmarks used the beta 2 release of the JDK version 1.2 on a Sun UltraSPARC and Sun's f77 optimizing compiler.

To refine the issue of speed and to give further motivation for using C or Fortran in lieu of Java, consider the results given in Table 1. The results depicted in Table 1 contrast the performance in computing the solution to the tridiagonal linear system Ax = b on a single workstation (hence, with no communication overhead). A Java-only implementation is compared to an equivalent implementation which consists of a Java front end to a


Java/C/Fortran shared library blending, where the computational components are ultimately performed using Fortran. The results in the table show that, for larger systems, the penalty for using unoptimized Java asymptotically approaches roughly tenfold. While Java's reputation for portability precedes the work presented here, IceT permits the porting of the more efficient form, which utilizes a Fortran back end to perform the computationally intense calculations, across heterogeneous architectures, distinct file systems and networks. The remaining portion of this section is devoted to the details of generating this Java/C/Fortran shared library blending which provided the computational scaffolding for these benchmarks.

The emphasis in presenting the components of the Java/C/Fortran shared library blending is in illustrating the repercussions on the ability to create a process arbitrarily, on any machine in a heterogeneous resource collective. For example, a user on a Sparc-based Solaris workstation may spawn a computation on another user's Windows NT workstation and vice versa. In addition, the resources involved need not share a common file system (NFS-mounted, for example), nor need they be restricted to a local network. The only requirement is that appropriate shared library formats for the native code be available for soft-installation on the resource where the process is to be created. The underlying motivation of the remainder of this section is not only to show the additional portability gained by the blending of languages, but also to show how the large amount of existing (legacy) code already written in Fortran and C, such as LINPACK, LAPACK++, etc., can be coupled with IceT's paradigm for portability, and thus extended to concurrent computing environments which reach beyond the local networked resources, the local file system, and the single-user paradigm.
The example here, albeit elementary, completely encapsulates the issues which need to be addressed to achieve such broad-scale distribution. The code listings below show how Java and IceT can be used to "handshake" and communicate across virtual environments. These communication channels can be utilized to pass messages between distinct processes which are running collaboratively. Further, these communication channels can be used to move chunks of system-dependent code between systems, which can subsequently be accessed through Java wrapper classes according to the established JNI framework.

4.2. The Java/C/Fortran shared library

The tridiagonal LU decomposition solver of this section consists of several components: (1) the IceT front end; (2) the Fortran-based tridiagonal system code (compiled as a shared library); (3) a C-based transitional code segment which acts as the intermediary between the Java and Fortran code; and (4) the Java wrapper class to the C code. These components together form what will be referred to as the program collective. The IceT front end, written in Java, provides the initial portability to the program collective. The Fortran-based tridiagonal solver supplies the computational engine of the collective, and the other portions of the collective are used to mesh these primary components together.

The Fortran source for the tridiagonal solver is given below in Fig. 3. Note that this subroutine is self-contained in the sense that it makes no use of external "blas", 6 which makes it ideally suited for illustrating the portability aspects of this section.

6 "Blas" are the basic linear algebra subroutines; matrix-vector multiplication, for example.


Fig. 3. Fortran subroutine which will serve as the computational engine for a distributed computation.
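The Fortran source of Fig. 3 is not reproduced in this text. As an illustrative stand-in, a self-contained tridiagonal LU solver of the kind described (forward elimination followed by back substitution, i.e. the Thomas algorithm) can be sketched in Java; the names and structure here are assumptions, not the paper's actual tridiag subroutine.

```java
// Thomas-algorithm sketch for a tridiagonal system Ax = d.
// a = sub-diagonal (a[0] unused), b = main diagonal, c = super-diagonal
// (c[n-1] unused). Assumes no pivoting is needed (e.g. diagonal dominance).
class Tridiag {
    static double[] solve(double[] a, double[] b, double[] c, double[] d) {
        int n = b.length;
        double[] cp = new double[n], dp = new double[n];
        // Forward elimination (the LU factorization pass).
        cp[0] = c[0] / b[0];
        dp[0] = d[0] / b[0];
        for (int i = 1; i < n; i++) {
            double m = b[i] - a[i] * cp[i - 1];
            cp[i] = c[i] / m;
            dp[i] = (d[i] - a[i] * dp[i - 1]) / m;
        }
        // Back substitution.
        double[] x = new double[n];
        x[n - 1] = dp[n - 1];
        for (int i = n - 2; i >= 0; i--)
            x[i] = dp[i] - cp[i] * x[i + 1];
        return x;
    }

    public static void main(String[] args) {
        // A = tridiag(-1, 2, -1); right-hand side chosen so that x = [1, 2, 3].
        double[] x = solve(new double[]{0, -1, -1}, new double[]{2, 2, 2},
                           new double[]{-1, -1, 0}, new double[]{0, 0, 4});
        System.out.printf("%f %f %f%n", x[0], x[1], x[2]);
    }
}
```

The solver runs in O(n) time and, like the Fortran subroutine described, calls no external routines, which is the property that makes it convenient to package as a single shared library.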

The task at hand will be to port the optimized, system-dependent compiled binary representation of the shared library associated with the Fortran code to a remote resource and to interact with it using an IceT driver program. The IceT driver program for the Fortran code shares many similarities with the example given in the previous section in that it imports the IceT package, extends the TaskProtocols class, and invokes IceT() to register itself with the IceT environment. However, a notable difference in the driver program is in its interface to the shared library calls which will actually manage the computations. In the driver program, the actual "tridiag" subroutine call is accessed through the NativeDemoClass class (line 6), and the invoking of the tridiagonal solver occurs on line 26. NativeDemoClass is a Java-based class definition which defines the single method doFortranTask as native (Fig. 4). The native method "doFortranTask" (line 2 in Fig. 4) is a method in the shared library libnative.so, 7 which is loaded and linked through the System.loadLibrary("native") directive (line 6 in Fig. 4). In general, a library of several mathematical routines would be encapsulated in such a Java-based wrapper class, where the native method calls of the library would be prototyped in a similar manner. The result of "nativeDemo.doFortranTask(lower, diag, upper, work, 0);" in line 26 of the driver program NativeTridiagLUDriver (Fig. 5) causes control to be passed to the C-based intermediate
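A wrapper class in the style of NativeDemoClass can be sketched as follows. This is a hypothetical reconstruction, not the listing from Fig. 4: the fallback flag and main method are additions for illustration, showing how a JNI wrapper declares a native method and loads its shared library, and how an application can detect that the library ("native", i.e. libnative.so or native.dll) is not installed on the local resource.

```java
// Hypothetical JNI wrapper in the style of NativeDemoClass: the native method is
// only declared here; its body lives in the shared library loaded below.
class TridiagWrapper {
    static boolean nativeAvailable;

    static {
        try {
            System.loadLibrary("native"); // libnative.so on Unix, native.dll on Windows
            nativeAvailable = true;
        } catch (UnsatisfiedLinkError e) {
            nativeAvailable = false;      // library not soft-installed on this host
        }
    }

    // Declaration only; invoking this without the library throws UnsatisfiedLinkError.
    public static native void doFortranTask(double[] lower, double[] diag,
                                            double[] upper, double[] work, int flag);

    public static void main(String[] args) {
        System.out.println("native available: " + nativeAvailable);
    }
}
```

In IceT's setting, the daemon's byte-code analysis would notice the native declaration and the loadLibrary call, and arrange for the architecture-appropriate library to be soft-installed before the class is instantiated.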

7 Or as the dynamic link library file "native.dll" on a Windows-based workstation.


Fig. 4. NativeDemoClass is the wrapper to the system-dependent method call "doFortranTask", which is located in the shared library "native".

Fig. 5. Differences in the IceT front end to the Fortran computational substrate are found on lines 6 and 26 of this excerpt from the Java source of the driver program.

processing unit of the program collective, which is listed in Fig. 6. The purpose of this code is to take the system-independent Java objects and convert them to system-dependent representations of arrays of doubles and an integer-valued parameter (lines 23–30 in Fig. 6), and then cast the results of the Fortran subroutine back into the appropriate Java objects to return to the invoking Java process (lines 31–33 in Fig. 6).

The ability to detect a pending process' use of such encapsulated native code is attributed to the Java byte code representation of the requisite Java-based front end to the program collective (as was described in Section 3). For details on the analysis of the front end's byte code, refer to Ref. [10]. When encapsulated in this manner, native code can be loaded along with the Java-based front end across networks as easily as an applet can be loaded in an arbitrary web browser. The only issue is whether a binary version of the encapsulated native code can be provided for the remote architecture. These issues and others are being addressed in furthering the functionality of IceT, with encouraging initial results.

5. Conclusions and work-in-progress

The initial offerings of the IceT project depicted here show great potential in advancing the state of broad-network concurrent computing, in addition to collaborative programming and programming aspects


Fig. 6. This C-based portion of the program collective is responsible for taking Java representations of the input parameters and casting them into system-dependent representations which are passed to the externally linked Fortran subroutine.

heretofore unexplored. Furthermore, the coupling of system-independent "handshaking" and subsequent soft-installation of system-dependent shared libraries has been shown to have tremendous advantages over a pure Java approach in raw computational speed, and lends itself to effective reuse of existing codes.

References

[1] G.A. Geist, V.S. Sunderam, The PVM system: supercomputer level concurrent computation on a heterogeneous network of workstations, in: Proceedings of the Sixth Distributed Memory Computing Conference, IEEE, 1991, pp. 258–261.
[2] M. Snir, S.W. Otto, S. Huss-Lederman, D.W. Walker, J. Dongarra, MPI: The Complete Reference, MIT Press, Cambridge, November 1995.
[3] B. Carpenter, Y.J. Chang, G. Fox, D. Leskiw, X. Li, Experiments with `HP Java', Concurrency: Practice and Experience 9 (6) (1997) 633–648.
[4] G.C. Fox, W. Furmanski, Java for parallel computing and as a general language for scientific and engineering simulation and modeling, Concurrency: Practice and Experience 9 (6) (1997) 415–426.
[5] P. Gray, V. Sunderam, The IceT environment for parallel and distributed computing, in: Scientific Computing in Object-Oriented Parallel Environments, no. 1343, Springer, New York, December 1997, pp. 275–282.
[6] S. Liang, The Java Native Method Interface: Programming Guide and Reference, Addison-Wesley, Reading, MA, 1998.
[7] J. Gosling, B. Joy, G. Steele, The Java Language Specification, 1st ed., Addison-Wesley, Reading, MA, 1996.
[8] P. Gray, V. Sunderam, IceT: Distributed computing and Java, Concurrency: Practice and Experience 9 (11) (1997) 1161–1168.
[9] Sun Microsystems, Java Object Serialization Specification, Technical Report, JavaSoft, February 1997, Revision 1.3.
[10] P. Gray, V. Sunderam, Native language-based distributed computing across network and file system boundaries, Concurrency: Practice and Experience 10 (1) (1999).


Paul Gray is a Visiting Assistant Professor at Emory University, Atlanta, GA, USA. His background is in numerical analysis with an emphasis on distributed computing. He is the primary developer of the "IceT" project, which aims to provide a distributed virtual environment across networks and file systems for extending computation and collaboration. Recently his research activities have included distributing computational simulations of superconductivity phenomena, analyzing and parallelizing the GMRES algorithm, and analyzing aspects of the Java programming language which are suitable or detrimental for the purpose of high-performance computing.

Vaidy Sunderam is Professor of Computer Science at Emory University, Atlanta, USA. His current and recent research focuses on aspects of distributed and concurrent computing in heterogeneous networked environments. He is the principal architect of the PVM system, in addition to several other software tools and systems for parallel and distributed computing. He has received several awards for teaching and research, including the IEEE Gordon Bell prize for parallel processing. Recently his research activities have included novel techniques for multithreaded concurrent computing, input–output models and methodologies for distributed systems, and integrated computing frameworks for collaboration.