INFORMATION SCIENCES
Journal of Information Sciences 106 (1998) 197-199
Guest editorial
A special issue on parallel and distributed processing

Hamid R. Arabnia a, Keqin Li b

a Chair, PDPTA '96 Program Committee, Department of Computer Science, University of Georgia, Athens, GA 30602-7404, USA
b Member, PDPTA '96 Program Committee, Department of Mathematics and Computer Science, State University of New York, New Paltz, NY 12561-2499, USA
The 1996 International Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA '96) was held in August (9th-11th) at the Sunnyvale Hilton, Sunnyvale, California. The conference was sponsored by Computer Science Research, Education, and Applications Tech. (CSREA) in cooperation with Alta Technology Corporation, Pacific Sierra Research Corporation, Transtech Parallel Systems Corporation, and Morgan Kaufmann Publishers, Inc.

As the chair (Hamid Arabnia) and a member (Keqin Li) of the PDPTA '96 Program Committee, we would like to take this opportunity to thank Professors Edward K. Blum (University of Southern California), Kai Hwang (University of Hong Kong), John Koza (Stanford University), and Peter H. Welch (University of Kent, UK) for presenting the four keynote lectures of the PDPTA '96 Conference. The Program Committee presented the PDPTA Outstanding Achievement Award to Professor Kai Hwang in recognition and appreciation of his dedicated and outstanding contributions to the fields of parallel and distributed computing and applications.

In response to its call for papers, the PDPTA '96 Program Committee received 302 submissions (excluding the invited papers and those that were directly submitted to session proposers) from 37 countries. The submitted papers ranged from 1 to 135 pages. Approximately 36% of the submitted papers (excluding the invited papers and those that were directly submitted to session proposers) were accepted as Regular Papers, and 21% of the remaining papers as Short Papers.
0020-0255/98/$19.00 © 1998 Elsevier Science Inc. All rights reserved. PII: S0020-0255(97)10001-9
We are grateful to the many colleagues who helped referee/review the submissions. In particular, we would like to thank the following, who each refereed/reviewed a number of papers: Dr. Hamid Abachi (Monash University), Dr. Tarek Abdelrahman (University of Toronto), Dr. Suchi Bhandarkar (University of Georgia), Dr. Mark Clement (Brigham Young University), Dr. Glenn Gibson (University of Texas at El Paso), Dr. David Kaeli (Northeastern University), Dr. David Lowenthal (University of Arizona), Dr. Kia Makki (University of Nevada), Dr. Yi Pan (University of Dayton), Dr. Nikki Pissinou (University of Southwestern Louisiana), Dr. Don Potter (University of Georgia), Dr. Gary Rommel (Eastern Connecticut State University), Dr. Jeffrey Smith (University of Georgia), Dr. Dyke Stiles (Utah State University), Dr. Faramarz Valafar (University of Georgia), and Dr. Barry Wilkinson (University of North Carolina at Charlotte).

Soon after the conference, the participants were asked (by e-mail) to select the papers that they thought were the most informative. On that basis, the Program Committee selected the six papers included in this special issue.

The first paper is entitled "A parallel implementation of genetic programming that achieves super-linear performance" by D. Andre and J.R. Koza. This paper describes a successful parallel implementation of genetic programming targeted at a network of processing nodes using a transputer-based architecture. The implementation takes full advantage of the processors by distributing the population among them. The authors show that their algorithm works more efficiently with multiple sub-populations, thereby achieving super-linear performance.

The second paper is entitled "Generational scheduling for dynamic task management in heterogeneous computing systems" by B.R. Carter, D.W. Watson, R.F. Freund, E. Keith, F. Mirabile, and H.J. Siegel.
The paper describes a method of scheduling tasks among a collection of heterogeneous computers to achieve maximum execution speed. The method, termed Generational Scheduling (GS), is a repetitive method that allows task scheduling to be performed dynamically during the computation. The authors found that the proposed GS technique is competitive in solution quality, and superior in time requirements, to two near-optimal heuristic combinatorial scheduling methods.

The third paper is entitled "Linear array with a reconfigurable pipelined bus system - Concepts and applications" by Y. Pan and K. Li. The authors perform a comprehensive study of a new computational model, called the linear array with a reconfigurable pipelined bus system (LARPBS). The LARPBS model is developed based on recent advances in optical interconnections. The paper demonstrates how a number of primitive data communication operations can be implemented efficiently on the LARPBS. It is shown that these basic operations and the reconfigurability of an LARPBS system can support the implementation of many fast parallel algorithms.
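To give a flavour of the generational idea described above — rescheduling only the tasks that are ready at each point in the computation, on machines of differing speeds — the following minimal sketch assigns each ready task to the machine that would complete it earliest. The function names, the minimum-completion-time heuristic, and the numbers are our own illustrative assumptions, not the authors' formulation.

```python
# Illustrative sketch of generational scheduling on heterogeneous machines.
# Each "generation" is the batch of tasks that became ready at that point;
# only those tasks are scheduled, so the plan adapts as the computation runs.

def schedule_generation(ready_tasks, machine_free_at, speeds):
    """Assign every currently ready task to the machine that would
    finish it earliest (a simple per-generation heuristic)."""
    assignment = {}
    for task, work in ready_tasks:
        # Estimated finish time of `task` on each machine m.
        best = min(range(len(speeds)),
                   key=lambda m: machine_free_at[m] + work / speeds[m])
        assignment[task] = best
        machine_free_at[best] += work / speeds[best]
    return assignment

def generational_schedule(generations, speeds):
    """Schedule one generation at a time; later generations see the
    machine loads left behind by earlier ones."""
    machine_free_at = [0.0] * len(speeds)
    plan = {}
    for batch in generations:
        plan.update(schedule_generation(batch, machine_free_at, speeds))
    return plan, max(machine_free_at)

# Two machines, the second twice as fast; two generations of tasks.
plan, makespan = generational_schedule(
    [[("a", 4.0), ("b", 2.0)], [("c", 6.0)]],
    speeds=[1.0, 2.0])
```

In this toy run, tasks "a" and "c" land on the fast machine and "b" on the slow one; the real GS method layers this kind of per-generation decision under dynamic task arrival and completion events.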
The fourth paper is entitled "Addressing the shortcomings of traditional formal reasoning methods for concurrent programs: New tools and techniques for source code correctness" by R.J. Shaw and R.A. Olsson. This paper proposes an architecture for "source code" reasoning with multiple levels of abstraction, which aims to capture the "familiar" mental execution of programs as used by programmers. The proposal is indeed general and ambitious. The authors propose that, instead of starting from the mathematical foundations of programming, one should take a bottom-up view by building an environment for reasoning about "code" that is rooted in programmers' "intuitions" and "habits".

The fifth paper is entitled "The ParaStation project: Using workstations as building blocks for parallel computing" by T.M. Warschko, J.M. Blum, and W.F. Tichy. The authors describe a system, named ParaStation, whose high-speed communication network provides efficient parallel computing on workstation clusters. Efficiency is achieved by removing the kernel from the communication path and performing all interface procedures at user level.

The last paper is entitled "The Fortran Parallel Transformer and its programming environment" by E.H. D'Hollander, F. Zhang, and Q. Wang. The paper presents a parallel programming environment, called FPT, for Fortran-77 programs. FPT is used for the automatic parallelization of loops, program transformations, dependence analysis, performance tuning, and code generation.

As the guest editors of this special issue, we would like to thank the Editor-in-Chief of the journal, Professor Paul P. Wang, who kindly supported and encouraged the publication of this special issue; we are very grateful.