On a data-driven environment for multiphysics applications


Future Generation Computer Systems 21 (2005) 953–968

J. Michopoulos a,∗, P. Tsompanopoulou b, E. Houstis b,c, C. Farhat d, M. Lesoinne d, J. Rice c, A. Joshi e

a US Naval Research Laboratory, Special Projects Group, Code 6303, Washington, DC 20375, USA
b Department of Computer Engineering and Telecommunications, University of Thessaly, 38221 Volos, Greece
c Computer Sciences Department, Purdue University, West Lafayette, IN 47906, USA
d Department of Aerospace Engineering Sciences, University of Colorado at Boulder, Boulder, CO 80309-0429, USA
e Department of Computer Science and Electronics Engineering, University of Maryland, Baltimore County, Baltimore, MD 21250, USA

Received 12 September 2003; received in revised form 29 October 2003
Available online 28 February 2004

Abstract

The combination of recent advances in computational and distributed sensor network technologies provides a unique opportunity for focused efforts on high confidence modelling and simulation of multiphysics systems. Responding to this opportunity, we present in this paper the architecture of a data-driven environment for multiphysics applications (DDEMA) as a multidisciplinary problem solving environment (MPSE). The goal of this environment is to support the automated identification and efficient prediction of the behavioral response of multiphysics continuous interacting systems. The design takes into consideration heterogeneous and distributed information technologies, coupled multiphysics sciences, and sensor-originating data to drive and steer adaptive modelling and simulation of the underlying systemic behavior. The design objectives and proposed software architecture are described in the context of two multidisciplinary applications related to material structure design of supersonic platforms and to fire/material/environment interaction monitoring, assessment and management. These applications of DDEMA will be distributed over highly heterogeneous networks that extend from light and ubiquitous resources (thin portable devices/clients) to heavy GRID-based computational infrastructure.

© 2004 Elsevier B.V. All rights reserved.

Keywords: Data-driven; Multiphysics; Multidisciplinary problem solving environment; Distributed computing; Distributed sensors; Heterogeneous networks

1. Introduction



Corresponding author. E-mail address: [email protected] (J. Michopoulos). 0167-739X/$ – see front matter © 2004 Elsevier B.V. All rights reserved. doi:10.1016/j.future.2003.12.023

Continuous multidomain interacting systems under the influence of coupled multifield loading can exhibit static and dynamic behavior that is highly variable in space and time, spans multiple scales, and sometimes


has intense nonlinear interaction and response. This is especially true for material/structural systems embedded in host continua, as in the case of structures interacting with surrounding fluids under generalized loading conditions. The development of the science, engineering and technology that allows predicting the behavior of such interacting systems is very important from the perspective of their applications when driven by real human needs. An example is the design of materials and structures for the civil and defense infrastructure from the perspectives of life extension and maintainability. Candidate technologies for this application are smart materials and systems that can self-adjust when a scheduled or unexpected restriction of their state space occurs, such as structural composites that can change their mechanical properties when exposed to sudden temperature gradients due to explosion or mechanical damage. Furthermore, specific vertical industries must comply with consumer-driven quality standards and/or demands that introduce implicit requirements for reducing the uncertainty of simulated systemic behavior. This has increased the demand for the validation and verification of predictions rendered by the computational environments used for this purpose. A focal motivating point of our efforts is that experimental and monitoring data can be utilized automatically to pursue modelling and simulation activities that intrinsically contain validation of simulated behavior. In response to these needs, the main objective of our project is the creation of a framework for developing and applying specific multidisciplinary problem solving environment (MPSE) technologies to applications focused on the multifield behavior of interacting continuum systems.
Accurate behavior prediction of fully identified systems will be achieved by applying inverse approaches along with massive automated data acquisition during tests of continuous systems; these tests correspond to exhaustive scanning of the multidimensional stimulus-response spaces of the systems at hand. Such a data-driven framework will allow the experimental capturing of the multiphysics behavioral characteristics of the continuous system, thus intrinsically ensuring the validity of simulations based on massive and automated testing. The present paper describes the current status of the architecture of the data-driven environment for multiphysics applications (DDEMA) as it has evolved from its initially described state [1,2].

The structure of the present paper is organized as follows. In Section 2, we present the motivating effects that data-driven methodologies can have on the qualification, validation and verification (QV&V) of the modelling and simulation (M&S) process. Additionally, in this section we characterize data-driven modelling methodologies in terms of their attributes and present an overview of our MPSE efforts, along with a description of the system specifications and requirements of the proposed DDEMA environment, followed by a discussion of the existing infrastructure to be incorporated in the implementation of DDEMA. Section 3 presents the proposed architectures as the means to satisfy these requirements for the two time modalities, prior-time and real-time. Section 4 provides some detail on the middleware implementation involved in the architecture. Section 5 contains a description of two applications (one for each time modality) of DDEMA that will be utilized to validate its feasibility. Finally, conclusions are presented in Section 6.

2. MPSE overview and DDEMA specifications

2.1. Qualification, validation and verification of data-driven modelling and simulation

Data-driven modelling and simulation (M&S) can have an entirely different nature when compared to traditional M&S practices from the perspective of uncertainty. Extensive utilization of data, as promoted by this and other similar efforts, has important motivating characteristics. In order to address the issues of simulation fidelity, accuracy, and in general high confidence predictions of physical system behavior, contemporary M&S practices are subject to the necessary model qualification, verification and validation (QV&V) activities. There are many descriptions of how QV&V relates to modelling and simulation. Fig. 1 depicts the modelling and simulation process along with the associated QV&V activities, unifying the abstractions defined by many organizations such as the AIAA [3], ASME [4], DoD's Defense Modelling and Simulation Office (DMSO) [5] and DOE Defense Programs (DOE/DP) Accelerated Strategic


Fig. 1. Traditional QV&V for the M&S process.

Computing Initiative (ASCI) [6]. The dotted arrows represent human activities that are implemented with various degrees of computational automation and allow transitioning from the physical system to the system's conceptual model through analysis, to the computational system model through programming, and back to the physical system through simulation. The conceptual model representation is constructed through analysis of the observable behavioral structure of the physical system within the application context. The derived conceptual model can be a set of partial differential equations (PDEs) combined with the necessary constitutive equations. For many investigators this equational representation corresponds to what is known as an analytical model, and it encapsulates the conceptual model that can reproduce the behavior of the physical system. Another form of this model may be in terms of rule-based technologies (e.g. cellular automata or genetic algorithms) or input–output associator technologies (e.g. neural nets, perceptrons). The conceptual model introduces the uncertainty or error associated with the differences between the implied idealization of the analytical/mathematical representation of the conceptual model and the actual physical system.

The computational model is encapsulated by the software that implements the conceptual model on the computational infrastructure. It is constructed by a variety of programming techniques with various degrees of computational automation, from manual code development [7,8], to automated code synthesis that exploits the ability of some commercial design tools (e.g. Mathematica) used in the conceptual model building process to automatically generate code or code stubs, extending to custom approaches [9,10] and to problem solving environment (PSE) based compositional approaches [11,12]. The computational model introduces additional uncertainty associated with the space and time discretization errors incurred in converting the differential operators of the involved PDEs to systems of algebraic equations, and with the corresponding algorithms for solving these systems.

The formal definitions for QV&V used in the diagram of Fig. 1 are as follows [3]:

• Qualification is the process of determining that a conceptual model implementation correctly represents a real physical system.
• Verification is the process of determining that a computational model implementation correctly represents a conceptual model of the physical system.
• Validation is the process of determining the degree to which a computer model is an accurate representation of the real physical system from the perspective of the intended uses of the model.

Both validation and qualification attempt to answer the question of the representational fidelity of the conceptual (qualification) and computational (validation) models relative to the real physical system. For this reason qualification can be thought


of as another type of validation. This consideration explains why most of the bibliography has narrowed the discussion to only the verification and validation (V&V) notions. Verification attempts to answer the question of closeness (error, accuracy) between two models (conceptual and computational). Fig. 1 depicts the QV&V concepts in terms of comparing the behavioral data corresponding to the systemic behaviors of the physical system (through experimentation) with those of the conceptual and computational systems (through simulation prediction). The most critical issues associated with this data-driven behavioral comparison among the various systemic representations are those of complexity and weak termination, as described below.

Complexity: The comparison is usually performed on data instantiating behavioral field state variables defined over vector spaces spanned by the input–output vectors (e.g. load–displacement, stress–strain) of the modelled system, which often correspond to vector fields defined over volumes, areas or lines associated with the geometrical models of the involved continua. In multifield multicontinua applications (coupled and/or uncoupled) the dimensionality of these spaces can be high, especially when electromagnetic fields are included alongside those of classical continuum thermomechanics, thus increasing problem complexity. In practical terms it appears that today's PSEs still cannot address realistic engineering applications, since they are not used by engineering professionals; they mostly exist in the realm of academic proof-of-concept validation articles. This project will attempt to bridge this gap, since it is directed by investigators from both the PSE and engineering applications communities.
Weak termination: Unfortunately, the recursive process of minimizing the differences between model-predicted datasets and physical system behavior datasets is not guaranteed to terminate for a user-defined accuracy (as a measure of fidelity of simulation) in finite time, nor is it guaranteed that there is a unique model producing a predicted behavior dataset that converges to the physical system behavior dataset with a desired speed of convergence. This situation forces users, and domain experts in general, to adopt "engineering approximation" attitudes of accepting models within "acceptable bounds" for specific applications, which consequently leads to a plethora of low-confidence specialized models and solutions that

vary according to the personal modelling assumptions of the respective investigator/expert.

The complexity issue has been one of the main motivators for the successful development of various MPSE frameworks and corresponding applications. The pluralism of implementation methodologies and domain-of-application sensitivities has led to a long list of such systems that end up being used mostly by those who developed them. This is another secondary yet critical issue associated with PSE technology. To address the weak termination issue, as well as to move towards an approach that guarantees high confidence reality approximation with high fidelity in a built-in or embedded-in-the-model manner, we are not following the approach of finding a solution that alleviates the issue; rather, we are establishing the conditions that give rise to it and subsequently considering removing them such that the issue ceases to exist. The pursued alternative is to implement an M&S methodology that utilizes the following three elements: (1) data-streams of controlled continuous system behavior (stimulus-response pairs of all observables regardless of their field or non-field nature); (2) analytical representations of the model that can accurately reproduce a subset of the acquired data via system identification through successive dynamic model adaptation with the help of optimization techniques, thus encoding intrinsically the validity of the derived models; and (3) derived models to simulate the predictive response of the originally modelled system or of any other system that shares the identified continuous behavior with the original system. Fig. 2 shows the proposed data-driven M&S process as a modification of the original Fig. 1. The modules inside the dashed line area constitute an alternative conceptual system model that is embodied by the optimization scheme described.
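As a hedged illustration of element (2) — successive model adaptation driven by acquired data — the sketch below (all names and data are invented for illustration) identifies a single elastic modulus E from synthetic stress-strain pairs, terminating exactly when the data misfit falls below a prescribed tolerance:

```python
def identify_modulus(strain, stress, tol=1e-6, max_iter=2000):
    """Fit the model stress = E * strain to acquired data by gradient descent,
    terminating as soon as the mean-squared misfit drops below tol -- a
    built-in 'strong termination' criterion on the data discrepancy."""
    E, lr = 0.0, 0.1
    for _ in range(max_iter):
        residuals = [E * x - y for x, y in zip(strain, stress)]
        misfit = sum(r * r for r in residuals) / len(strain)
        if misfit < tol:
            break
        grad = 2.0 * sum(r * x for r, x in zip(residuals, strain)) / len(strain)
        E -= lr * grad
    return E, misfit

# Synthetic test-machine data-stream for a linear material with E = 2.0.
strain = [0.0, 0.1, 0.2, 0.3, 0.4]
stress = [0.0, 0.2, 0.4, 0.6, 0.8]
E, misfit = identify_modulus(strain, stress)
print(E, misfit)  # E close to 2.0, misfit below 1e-6
```

Because the loop exits precisely when the misfit criterion is met, the model returned is valid with respect to the acquired data by construction, mirroring the intrinsic-validity argument of the text.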
This approach effectively moves the physical data of the experimentally observed behavior of the actual system inside the conceptual model of the system, thus guaranteeing the intrinsic validity of the conceptual model. For continuous systems, this model usually refers to the set of algebraic and partial differential or integral equations representing the constitutive behavior along with the conservation laws involved. Thus, this methodology eliminates the weak termination problem. Termination is strong and embedded


Fig. 2. Evolved QV&V for data-driven M&S process.

since the adaptively computed model is trained on the behavioral data and has to terminate as soon as it satisfies the optimization criteria, including the acceptable error tolerance between actual and simulated data; this occurs prior to using the model for prediction of behavior that has not been used in the training phase.

2.2. Attribute space of data-driven environments

The main feature of DDEMA that differentiates it from existing MPSE efforts will be its utilization of data and its data-driven attributes in general. The attribute space of a data-driven information technology application/environment such as DDEMA can be viewed as a four-dimensional space spanned by four bases:

(1) time modality: time of utilization versus time of acquisition of data (static versus dynamic, or prior-time versus real-time);
(2) place of data utilization: onboard (computational operations and decision support environment (CODSE)) versus outboard (computational design environment (CDE));
(3) implementation topology: localized versus distributed;
(4) building blocks implementation economy: custom versus commercial off-the-shelf (COTS) technologies.

Fig. 3 shows a two-dimensional subspace of this space, spanned by the basis attributes of time modality and place of data utilization. The two values per basis define four particular applications of a data-driven environment.

They are described counterclockwise, starting from the outboard-static case. When pre-existing data have been captured and are used to construct a (mathematical and computational) model of a systematically observed physical system, and the computational environment that utilizes them is located outside the modelled physical system (outboard), then our data-driven environment can be utilized as a computational design environment (CDE). When the environment just described is actually located within (onboard) the physical system it models, then our data-driven environment can be utilized as a computational operation and decision support

Fig. 3. Special cases of data-driven environments defined in the “place of data utilization” vs. “time modality” context space.


(CODSE) system to aid the crew within the system (e.g. a submarine, destroyer, or aircraft carrier). To the extent that we allow the previous CODSE to be driven by live real-time data originating from sensor networks, it becomes a dynamic CODSE (D-CODSE). Finally, if we take some or all of its implementation modules and distribute them in remote locations so that they are no longer on-board, then our data-driven environment becomes a remotely assisted dynamic CODSE (RAD-CODSE). This taxonomy of potential systems is generic and will not be discussed in detail here due to lack of space. Special emphasis will be given, however, to the transition from a prior-time (static) to a real-time (dynamic) environment. All other cases are expected to be almost trivially producible from these two.

2.3. Background of multiphysics problem solving environments

Progress towards the solution of coupled multiphysics and multicontinua interaction, from both the constitutive and the field equation perspectives, has been scarce. There is significant progress, however, where focus has been given to computational implementations for a small number (mostly three) of interacting fields, and a number of active projects exist. Among them one can include our development of the GasTurbnLab [13,14] multidisciplinary problem solving environment, which addresses the interactions between a stator, rotor, and combustor in a gas turbine engine and has been implemented on a mobile agent platform, utilizing a collaborating partial differential equations methodology [15,16]. In the area of fluid–structure–control interaction, we can mention the advanced AERO computational platform. This platform comprises the AERO-F, AERO-S, AERO-C and MATCHER suite of codes [7,8], developed at the University of Colorado for the solution of nonlinear transient aeroelastic problems. They are portable, and run on a large variety of computing platforms ranging from Unix workstations to shared- as well as distributed-memory massively parallel computers. A generalization of AERO to an aerothermoelastic implementation was initiated recently [17]. The case of aero-structural coupling with an energy-dissipative nonlinear material response in a virtual wind tunnel environment has also been initiated recently [18,19].

2.4. Requirements

The DDEMA system will use network-based distributed computing infrastructures and data, and will provide a web-accessible user environment. It will also have to respond to the following requirements:

(1) Computation should be partitioned into coarse-grain and fine-grain components, executed in a loosely coupled and distributed manner for the lightweight implementation, and in a tightly coupled fashion for the heavyweight implementation, perhaps by exploiting web services technologies.
(2) The system should be able to distribute the various components dynamically in order to adapt to resource availability and performance variations and/or demands.
(3) Automatic adaptive modelling should be available, and the system should minimize the difference between stored behavioral datasets and predicted behavioral datasets in a design optimization fashion, where the objective function and the potential constraints will be user or system definable.
(4) To enable optimal user customization and history maintenance of user–system interaction, a visual problem definition editor (VPDE) should be developed, and its user-invoked states should be archivable and unarchivable upon request.
(5) To enable collective and dynamic knowledge transfer among distributed points of entry, and to achieve auto-enhancement of simulation due to historical collective user experience, abstractions of successful simulations and/or problem solutions, as well as their least upper bound (as a representation of the non-conflicting historically asserted common knowledge), should be maintainable for all users with appropriate authorization/authentication.
(6) The system should be able to deal with machine and programming language heterogeneity while preserving platform-independent component communication, by leveraging efficient and ubiquitous data exchange formats and methodologies.
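Requirement (6) hinges on ubiquitous, platform-neutral data exchange formats. As a purely illustrative sketch — the message schema below is invented, not part of DDEMA — a JSON encoding of a sensor reading that any language or platform can reconstruct might look like:

```python
import json

# Hypothetical platform-neutral sensor message (field names are invented).
reading = {
    "sensor_id": "hull-strain-017",
    "timestamp": "2004-01-01T12:00:00Z",
    "field": "strain",
    "units": "microstrain",
    "values": [412.5, 413.1, 415.0],
}

wire = json.dumps(reading)          # serialize for transport between components
restored = json.loads(wire)         # any JSON-capable component can decode it
print(restored == reading)          # True: lossless round trip
```

XML-based metadata (as cited for GRID integration in Section 2.5) plays the same role; the point is only that the wire format is independent of machine and language.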


2.5. Existing technologies integration

The work under development borrows design concepts from, or builds on top of: (1) existing agent systems [20,21]; (2) communications tools and protocols that can provide interoperability between different platforms (e.g. Java-RMI-IIOP [22]); (3) symbolic algebra systems and subsystems for automating the field equation generation process (e.g. MathTensor [23]); (4) flexible APIs for visual editing of user-controlled problem definition as well as 2D and 3D visualization (e.g. Ptolemy-II [24]); (5) high performance Java implementations for parallel computation over distributed and shared memory architectures, focused on intra-paradigm compatibility (e.g. ProactivePDC [25]) or message passing efficiency (e.g. Hyperion [26]); (6) resources allowing GRID integration (e.g. web service XML-based metadata integration [27] and GSiB [28,29]); and finally (7) heterogeneous device implementations (from thin PDAs and mobile phones to heavyweight high performance computing and GRID resources) to support a ubiquitous computing infrastructure [30]. Many of these technologies have already been proven appropriate for rapid prototyping involving traditional multiphysics and multiscale models developed by different disciplines and organizations. What is crucially important for our project is the clean, simple and flexible programming model of mobile or static component systems, which is expected to facilitate the development of the overall envisioned system. Besides making necessary adjustments/extensions to such systems, our IT development work will also focus on transforming proven but largely monolithic legacy codes into a set of well-integrated, interoperable components (via wrappers based on the Java Native Interface (JNI) [31]) that can be distributed over the network.

Very recently we have established that the web-services-based approaches followed by a few groups [27–29] share a common perspective with our team regarding computational technology utilization and philosophy of integration, especially the GSiB project [29], which has created a crucially important resource for automatic JNI wrapping of legacy codes through JACAW [32]. From the data-driven perspective, we have also very recently established that other groups [33,34] are following a similar philosophy and approach.
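The JNI/JACAW route above targets Java; as a loosely analogous sketch in Python (a stand-in for illustration, not the authors' toolchain), the same idea of exposing a compiled legacy library through a thin wrapper can be shown with `ctypes`, here loading the C math library in place of a real solver:

```python
import ctypes
import ctypes.util

# Load a "legacy" native library; libm stands in for a compiled Fortran/C solver.
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")

# Declare the native signature so calls are type-safe across the boundary.
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

print(libm.cos(0.0))  # 1.0 -- the native routine, driven from the wrapper
```

The wrapper layer is thin by design: the legacy code remains untouched, while the surrounding environment sees an ordinary component interface.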


3. Design specifications of DDEMA for two data-driven application scenarios

The application domains for the validation of DDEMA's utility will be: (a) composite material structures in their linear and nonlinear behavioral regimes, interacting with surrounding fluids under multiphysics loading conditions, primarily for the material/structural design of supersonic platforms; and (b) a fire–material–environment interaction monitoring, assessment and management system. The two application scenarios considered can be characterized by the prior-time and real-time modalities identified in the overview section and introduced in the discussion of Fig. 3. In the following, we present the user and system architectural design objectives of the DDEMA framework for the two identified application scenarios.

3.1. Real-time system design objectives

In this system configuration, we assume that the onboard user has many operational objectives, some of which are:

• Monitor the health state of the area of interest (e.g. entire platform, substructure, component) by observing continuous field distributions guided by incoming sensor datastreams originating from a distributed sensor network that can be monitored from multiple points of entry.
• Simulate potential future states ("what-if" scenarios) of the system given certain hypothetical battle or situational escalation scenarios. This requires automatic conversion of actual operating conditions to corresponding boundary/initial conditions, and exploitation of pre-computed field states to synthesize potential field evolutions quickly for "what-if" scenarios.
• Based on input from the previous two items, the onboard users will be able to make decisions and implement them by activating actuation mechanisms. These require a dynamic analysis of simulated scenarios based on the controllability of the systems, an impact analysis via a system that implements heuristic knowledge management techniques, and an archival capability for future re-usage and for recording evaluative experiences of implemented decisions.
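The "what-if" synthesis from pre-computed field states mentioned above can, under the linearity assumption discussed later for the solution-synthesis approach, be sketched as a superposition of basis solutions (all data here are invented for illustration):

```python
def synthesize(basis_solutions, coefficients):
    """Linear superposition of pre-computed basis field solutions.

    Valid only when the stimulus-response relation is linear (or has been
    linearized), so that field evolutions can be composed from basis cases.
    """
    n = len(basis_solutions[0])
    combined = [0.0] * n
    for coeff, field in zip(coefficients, basis_solutions):
        for i in range(n):
            combined[i] += coeff * field[i]
    return combined

# Hypothetical basis cases: field samples for two unit loading events.
basis = [
    [1.0, 2.0, 3.0],   # unit loading event A
    [0.5, 0.0, 1.5],   # unit loading event B
]
# Real-time sensor data would map the current event to participation factors.
what_if = synthesize(basis, [2.0, 4.0])
print(what_if)  # [4.0, 4.0, 12.0]
```

Because no PDE is solved at query time, such a synthesis can respond at interactive rates, which is what makes it attractive for onboard decision support.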


Fig. 4. A real-time system architecture.

There are two general approaches that, depending on computational resource availability and the complexity and size of the modelling, can be utilized in the real-time modality context. One is using real-time data to dynamically steer a simulation based on real-time solution of the corresponding PDEs. The other is using real-time data-streams to select user-defined proportions of pre-computed basis solutions [35], provided the stimulus space of the system at hand is parameterizable and decomposable into basis cases, and that linear superposition can be exploited because the inherent stimulus-response state variables are linearly related or the system can be linearized. Though we plan to explore a solution for the first case, here we present a high level schematic view of the so-called real-time system architecture for the second case of solution synthesis in Fig. 4, where the coarse-grain functionality modules have been shown from a data association perspective only. This diagram should be implementable within the VPDE. A short description of each of the modules presented follows.

Solution generator: This module encompasses the functionality required to prepare the data for the codes performing fluid-structural analysis. It is responsible for converting mission-specific events to input data for the PDE solvers involved in the modules below.

Multiphysics/multidomain solvers: These modules accept fluid properties, flight conditions, and structural properties, and can compute any fluid- or structure-related field results. They are the solvers of the multifield and multidomain PDEs involved in the modelling of the corresponding physical systems. The aerostructural problem will utilize the AERO suite of codes, while an additional suite for reactive flow has to be incorporated in order to address the fire-related problems.

Pre-computed basis case solutions: This module is a database that stores all basis case field solutions that the user generates through the corresponding analysis module for the domain(s) involved.

Stimulus specification controller (Middleware 2): This module establishes the stimulus vector space and its bases, and produces parametric representations of desired loading paths applied onto the structure, via explicit loading conditions or implicit ones through loading event parameterization. It also establishes the material and geometry specification for the system at hand as a function of both user decisions and sensor data.

Labor distributor (Middleware 1): This module distributes labor among the various solvers in a manner consistent with the restrictions imposed by the rest of the modules connected to it, and provides its output to the solution composer module.

Solution specification composer: This module combines all the selections from the controllers for loading,


material and geometry path definitions, and synthesizes from the basis case solutions database the desired combined solution.

Simulation composer: This module utilizes the output specification from the solution composer module together with the pre-computed fields stored in the pre-computed basis case solutions database module to synthesize the solution consistent with the user/sensor data specification.

Display module: The user interaction and visualization module.

The focal domain of application of this real-time approach will mainly address the fire assessment, evaluation and control scenario depicted in Fig. 5. This scenario involves utilizing sensor network data along with a user-specified query processed by the routing software on a base station environment, which is capable of requesting a "GRID"-implemented solution of the corresponding PDE solvers to formulate a "microfuture" prediction of a "what-if" specification (i.e. what is going to happen if the fire personnel opens a door).

Fig. 5. A fire scenario implementation of DDEMA's architecture.

3.2. Prior-time system design objectives

In this system configuration, we assume that the laboratory or off-board user, who can be a structural designer or system qualifier who needs accurate systemic state behavior prediction, has similar operational objectives to those of the real-time modality. The

Fig. 6. A prior-time implementation of DDEMA's architecture for a CDE application.


only difference will be that the real-time component will not be present at the time of simulation, but it will be present at the time of model formation. The role of the incoming data will be played by the multidimensional material testing data-streams used for identifying the constitutive behavior of materials, which may also guide model selection and implementation. This requirements specification represents the structural system design and qualification utilization of the system. The high level schematic view of the so-called prior-time (static) system architecture depicted in Fig. 6 was produced by appropriate adjustments of Fig. 4.
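A minimal sketch of the prior-time substitution just described — replaying a recorded material testing data-stream in place of a live sensor feed — might look like the following (the file format and field names are invented for illustration):

```python
import csv
import io

# Hypothetical recorded multidimensional material test; a CSV stand-in for a
# testing-machine data-stream. In prior-time mode this archive replaces the
# live sensor feed of the real-time configuration.
recorded = io.StringIO("strain,stress\n0.0,0.0\n0.1,0.21\n0.2,0.39\n")

def replay(stream):
    """Yield (strain, stress) pairs as if they arrived from a live sensor."""
    for row in csv.DictReader(stream):
        yield float(row["strain"]), float(row["stress"])

samples = list(replay(recorded))
print(samples)  # [(0.0, 0.0), (0.1, 0.21), (0.2, 0.39)]
```

Because the replayed stream presents the same interface as a live one, the downstream identification and simulation modules need not distinguish the two modalities.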

4. Implementation issues for DDEMA environment

Fig. 7. Three-layered abstractions of DDEMA in terms of their distance from the user's perspective and the abstract functional refactoring.

In this section we discuss the implementation of DDEMA for the two identified application scenarios.

4.1. Design overview

The plethora of available choices of computational infrastructure (hardware, software, networking; see Section 2), along with the specific domain of application and user-specific backgrounds and needs, introduces a large number of research issues associated with the computational implementation of any MPSE, and of DDEMA in particular. The major issues are: (1) ability and optimization for computation over distributed resources, (2) ability for dynamic migrating component distribution, (3) adaptive modelling capability, (4) user-dependent customization, (5) collaborative knowledge representation and interactivity, (6) dealing with heterogeneity of legacy and new code, and finally (7) ability to sustain component fault tolerance.

To address as many of the above issues as possible, in the context of the data-driven requirements and the two time-modality scenarios considered, the three-layered design shown in Fig. 7 is implemented. The surfaceware layer implements the user-interface view of DDEMA. It consists of actors with a visual abstraction in a two-dimensional visual-language context that allows the user to use various semantics (i.e. data flow, flow of control, etc.) to implement problem-solving composites of schemes and workflows. The middleware layer serves as the interface between the surfaceware and the deepware; it essentially is capable of encapsulating instantiations of the deepware modules and projecting them to their visual incarnations available at the surfaceware level. The deepware layer corresponds to the runtime environment where the legacy codes run on GRID or conventional resources. The symbolic modules associated with the PDE models themselves are also implemented at this level.

From a logical-instantiation perspective, an actor can contain the functionality of one or more agents, while an agent can contain the functionality of one or more deepware units. From an implementation perspective, an actor is a wrapper providing a view to an agent composite, while an agent is a wrapper on a deepware composite. It is worthwhile to emphasize here that the simultaneous presence of actors and agents is not merely a matter of implementation granularity. The two abstractions are necessary to satisfy the need for functional composability between subbehaviors in the pre-runtime mode (actors), and for runtime mobility in the runtime mode (agents) of the environment. Below is a more detailed description of the three layers.
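The containment relations described above (an actor wraps a composite of agents, an agent wraps a composite of deepware units) can be sketched in plain Java. The class names and the simple pipelining semantics are illustrative assumptions of this sketch, not DDEMA's actual interfaces.

```java
import java.util.ArrayList;
import java.util.List;

/** Sketch of the three-layer containment: a surfaceware actor wraps a
 *  composite of middleware agents, and each agent wraps a composite of
 *  deepware units. Names and pipeline semantics are illustrative only. */
interface DeepwareUnit {            // deepware: a wrapped legacy code or solver
    String run(String input);
}

class Agent {                       // middleware: composable and (in DDEMA) mobile
    private final List<DeepwareUnit> units = new ArrayList<>();
    Agent add(DeepwareUnit u) { units.add(u); return this; }
    String execute(String input) {
        String data = input;
        for (DeepwareUnit u : units) data = u.run(data);  // chain the units
        return data;
    }
}

class Actor {                       // surfaceware: visual wrapper over agents
    private final List<Agent> agents = new ArrayList<>();
    Actor add(Agent a) { agents.add(a); return this; }
    String fire(String input) {     // what a visual editor's director would invoke
        String data = input;
        for (Agent a : agents) data = a.execute(data);
        return data;
    }
}
```

For example, an actor composed of one agent wrapping two "legacy" steps behaves as a single visual block: `new Actor().add(new Agent().add(s -> s + "|solve").add(s -> s + "|post")).fire("mesh")` yields `"mesh|solve|post"`.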

4.2. Surfaceware

DDEMA's realization will be based on two main operational modes: the application-design and the application-use modes. In the application-design mode the designer will utilize the VPDE for designing the actual application


architecture in terms of a data-flow and message-passing diagram. In particular, a visual representation of the linguistic components available for composition through their visual incarnations within the VPDE will be used. This is exactly why we plan to utilize an appropriately modified version of the "Vergil" visual editor paradigm provided by the Ptolemy-II [24] system, which already contains a well-documented specification for creating new actors and directors. In this mode of usage, DDEMA's GUI will be that of the VPDE based on Ptolemy-II. It is important to realize that the Ptolemy-II infrastructure guarantees that actors are composable but static, in the sense that the runtime JVM environment used to create the corresponding composition diagram resides on the machine used by the user to author it.

In the application-use mode, the user will take action to initiate the automatic generation of Java source code or bytecode that implements the intended application functionality. Upon execution of the bytecode, the user will be presented with a stand-alone custom-made application, the result of the previous mode, to perform the activities associated with the corresponding usage requirements. During this phase, secondary processes will be spawned on the appropriate lightweight and heavyweight computational infrastructure available at hand. In order to address issues of parallelism, distribution, concurrency, security and process mobility,


we will first attempt to introduce an intermediate layer between the actual mission-functional and actor-view layers that provides a unified set of resources for this goal, such as INRIA's ProActive library [25]. In this case the GUI will be the one defined by the user in terms of the previous stage.

Based on the issues, requirements, and data associations between functional abstractions as described above, we propose the following set of actors to capture, as an example, the Finite Element Tearing and Interconnecting (FETI) methodology for solving field PDEs: a Director (or General Control Actor (GCA)), a Finite Element Tearing and Interconnecting Actor (FETIA), a Communication Actor (CA), Domain Actors (DA), a Database Actor (DBA), and several I/O Actors (e.g. Display Actor, Printer Actor, Sensor Actor, User Actor); see Fig. 8. Each actor has specific tasks and interactions with other actors. However, we will not discuss the individual responsibilities of each actor/agent here, due to lack of space. Instead, the reader can infer some of this functionality through the associations depicted in Fig. 8. In most cases the actors will have to be constructed such that they encapsulate existing codes like the AERO suite [7,8], or their aggregations and/or subcomponents, through the corresponding intermediate agent-middleware wrapping.

Fig. 8. Example of surfaceware utilization of a FETI model for DDEMA in Ptolemy’s Vergil used as a VPDE.
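To make the tearing-and-interconnecting idea behind the FETIA concrete, the sketch below tears the 1D Poisson problem −u″ = 1 on [0, 1] with u(0) = u(1) = 0 into two subdomains at x = 0.5, solves each half independently for a trial interface value g, and then solves the (here scalar) interface problem that restores compatibility. This is a toy primal illustration of the interface-problem idea, not the AERO suite's actual FETI solver; all names and discretization choices are ours.

```java
/** Toy "tear and interconnect": split -u'' = 1, u(0)=u(1)=0 at x = 0.5,
 *  solve each half for a trial interface value g, and pick the g that
 *  satisfies the discrete equation at the interface node. A scalar
 *  stand-in for the interface problem FETI-type methods solve. */
public class TornPoisson1D {
    static final int M = 8;                 // intervals per subdomain
    static final double H = 0.5 / M;        // mesh size

    /** Interior values of -u'' = 1 on a subdomain with given end values. */
    static double[] solveSubdomain(double leftBc, double rightBc) {
        int n = M - 1;                      // interior unknowns
        double[] d = new double[n], rhs = new double[n];
        for (int i = 0; i < n; i++) { d[i] = 2.0; rhs[i] = H * H; }  // f = 1
        rhs[0] += leftBc;
        rhs[n - 1] += rightBc;
        for (int i = 1; i < n; i++) {       // Thomas elimination for (-1, 2, -1)
            d[i] -= 1.0 / d[i - 1];
            rhs[i] += rhs[i - 1] / d[i - 1];
        }
        double[] u = new double[n];
        u[n - 1] = rhs[n - 1] / d[n - 1];
        for (int i = n - 2; i >= 0; i--) u[i] = (rhs[i] + u[i + 1]) / d[i];
        return u;
    }

    /** Residual of the finite-difference equation at the interface node. */
    static double interfaceResidual(double g) {
        double[] uL = solveSubdomain(0.0, g);   // left half, u(0) = 0
        double[] uR = solveSubdomain(g, 0.0);   // right half, u(1) = 0
        return (-uL[M - 2] + 2.0 * g - uR[0]) / (H * H) - 1.0;
    }

    /** The residual is affine in g, so two probe solves close the system. */
    static double solveInterface() {
        double r0 = interfaceResidual(0.0);
        double r1 = interfaceResidual(1.0);
        return -r0 / (r1 - r0);
    }
}
```

For this problem the exact solution is u = x(1 − x)/2, so `solveInterface()` recovers u(0.5) = 0.125 to machine precision. In real FETI the scalar g becomes a vector of interface unknowns (dually, Lagrange multipliers) and the probe solves become preconditioned Krylov iterations over the subdomain solvers.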


4.3. Middleware

The design of the middleware implementation for DDEMA is based on our experience from previous project implementations, such as GasTurbnLab [13]. It is presented in the context of an agent-based middleware in order to be able to address the large spectrum of hardware heterogeneity involved, which ranges from portable PDAs and low-power intelligent sensors to large-scale shared-memory machines, GRID-based computing and web-services-based computing resources. Agent systems are composable by default, in the sense that the container abstraction is the most common characteristic of most of these systems. However, they can also be mobile, in that they can migrate to various runtime environments hosted by various physical hosts. This feature is particularly desirable for the cases where the hardware may fail for unforeseen reasons (i.e. fire damage), while its runtime presence can migrate onto healthy hardware.

Since there is a plethora of agent-based systems, we needed to establish an informed perspective as to which features were most important from the standpoint of DDEMA's design-needs specification, in order to downselect a particular agent middleware infrastructure. We followed a strategy of identifying a set of knockout and performance criteria similar to that of [36]. The systems that survived the knockout-criteria evaluation were further evaluated against a set of performance criteria pertinent not only to their general behavior but also to their ability to be integrated under the needs of the DDEMA project. In a forthcoming publication we describe the process and criteria, along with the evaluation results, in detail. This effort concluded that there are three potential systems that can be used for DDEMA.
They are the Java Agent Development Framework (JADE) [20], the GRASSHOPPER mobile agent system [21], and the ProActive Java library for parallel, distributed, concurrent computing with security and mobility [25]. At the present stage it has not been decided which of the three systems will be utilized for the needs of DDEMA. However, there is an effort to use the FIPA [37] compatibility of JADE and GRASSHOPPER to use both of them in conjunction with ProActive.
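The knockout-then-performance evaluation strategy just described can be sketched as a two-stage filter: boolean knockout criteria eliminate candidates outright, and a weighted sum over performance criteria ranks the survivors. The criteria names, weights, and scores below are illustrative placeholders, not the project's actual evaluation data from [36].

```java
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.Optional;

/** Two-stage downselect: boolean knockout criteria first, then a weighted
 *  sum over performance criteria. All criteria here are illustrative. */
class AgentPlatform {
    final String name;
    final boolean knockoutPass;              // e.g. standards compliance, mobility
    final Map<String, Double> scores;        // performance criteria in [0, 1]

    AgentPlatform(String name, boolean knockoutPass, Map<String, Double> scores) {
        this.name = name;
        this.knockoutPass = knockoutPass;
        this.scores = scores;
    }
}

class Downselect {
    /** Weighted performance score of one surviving candidate. */
    static double weightedScore(AgentPlatform p, Map<String, Double> weights) {
        return weights.entrySet().stream()
                .mapToDouble(e -> e.getValue() * p.scores.getOrDefault(e.getKey(), 0.0))
                .sum();
    }

    /** Survivors of the knockout stage, ranked by weighted performance. */
    static Optional<AgentPlatform> best(List<AgentPlatform> pool,
                                        Map<String, Double> weights) {
        return pool.stream()
                .filter(p -> p.knockoutPass)         // knockout pass
                .max(Comparator.comparingDouble(p -> weightedScore(p, weights)));
    }
}
```

The two stages mirror the separation in the text: knockout criteria are non-negotiable requirements, while performance criteria trade off against each other through the weights.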

4.4. Deepware

The main components of DDEMA's deepware are the AERO suite modules capable of handling fluid–structure interaction in conjunction with heat transfer and fluid mesh deformation, as described elsewhere [17]. Additional components will be the codes capable of solving the reactive-flow problem associated with fire conditions in one of the two demonstration applications of DDEMA. Potential candidates for this task are NIST's Fire Dynamics Simulator [38] or a general-purpose PDE solver such as freeFEM [39] or FlexPDE [40]. Finally, the following modules, essential for our project applications, will be developed as part of the deepware layer of DDEMA: (1) a symbolic constitutive-theory and PDE-model creator infrastructure, (2) a data-driven automatic model generator based on information theory and/or standard optimization techniques, and (3) a real-time solution constructor based on data-driven approaches, either for steering real-time computation for simulation purposes or for computing solutions from pre-computed basis solutions. These applications will be projected onto the surfaceware level through the appropriate agent-based middleware wrapping, and will be available for user graphical synthesis via the actor wrapping of the corresponding agents.
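The real-time solution constructor listed as deepware item (3) admits a simple hedged sketch: given a handful of pre-computed basis solution fields sampled at the sensor locations, find the linear combination that best matches the live sensor readings in the least-squares sense. The normal-equation formulation and all names are our illustration, not DDEMA's actual module.

```java
/** Sketch of real-time solution construction from pre-computed basis
 *  solutions: fit coefficients c so that sum_j c[j]*basis[j] matches the
 *  sensor readings in the least-squares sense. Illustrative only. */
public class BasisCombiner {

    /** basis[j][i] = value of pre-computed solution j at sensor location i. */
    static double[] fit(double[][] basis, double[] sensor) {
        int k = basis.length;
        double[][] a = new double[k][k];     // normal-equation matrix B B^T
        double[] b = new double[k];          // right-hand side B y
        for (int r = 0; r < k; r++) {
            for (int c = 0; c < k; c++)
                for (int i = 0; i < sensor.length; i++)
                    a[r][c] += basis[r][i] * basis[c][i];
            for (int i = 0; i < sensor.length; i++)
                b[r] += basis[r][i] * sensor[i];
        }
        return gauss(a, b);
    }

    /** Reconstruct the full field from the fitted coefficients. */
    static double[] combine(double[][] fullFields, double[] coef) {
        double[] out = new double[fullFields[0].length];
        for (int j = 0; j < coef.length; j++)
            for (int i = 0; i < out.length; i++)
                out[i] += coef[j] * fullFields[j][i];
        return out;
    }

    /** Gaussian elimination with partial pivoting for the small k x k system. */
    static double[] gauss(double[][] a, double[] b) {
        int n = b.length;
        for (int p = 0; p < n; p++) {
            int max = p;
            for (int i = p + 1; i < n; i++)
                if (Math.abs(a[i][p]) > Math.abs(a[max][p])) max = i;
            double[] t = a[p]; a[p] = a[max]; a[max] = t;
            double tb = b[p]; b[p] = b[max]; b[max] = tb;
            for (int i = p + 1; i < n; i++) {
                double f = a[i][p] / a[p][p];
                b[i] -= f * b[p];
                for (int j = p; j < n; j++) a[i][j] -= f * a[p][j];
            }
        }
        double[] x = new double[n];
        for (int i = n - 1; i >= 0; i--) {
            double s = b[i];
            for (int j = i + 1; j < n; j++) s -= a[i][j] * x[j];
            x[i] = s / a[i][i];
        }
        return x;
    }
}
```

Because the fit is a small dense solve over k pre-computed fields, it can run in real time on lightweight hardware even when each basis solution itself required a heavyweight GRID computation to produce.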

5. Validation case studies

5.1. Coupled multiphysics of continua: material/structural design of a supersonic platform

Our experience with continuous multiphysics field theories [41,42] suggests that the design process of a structure often requires a tool such as DDEMA for establishing optimal material characteristics, such as fiber–matrix properties, fiber orientation, and laminate layup properties, and for the shape tailoring of an aircraft—or an aircraft component such as a fuselage, a wing, or a control surface—under the high-temperature and mechanical loading conditions imposed by a supersonic mission requirement.


Intrinsic to the validity and confidence of any aero-structural simulation are the continuous models of material behavior it utilizes. It is exactly in this area that our robotic testing capability, along with exhaustive historical data pertaining to the identification of material behavior, will be used to launch an effort for automating model selection, implementation and verification methodologies. This will be done by considering all intrinsic and extrinsic, wanted and unwanted factors (uncontrolled biasing, noise, repeatability, history dependence, etc.) affecting data quality for the intended usage of model formation. Information-theoretic, statistical and deterministic techniques for model selection and/or generation, in the context of continuous system identification and inverse approaches, will be considered, and ultimately some of them will be encapsulated in DDEMA. This will exercise the prior-time data-driven aspect of the system.

We have already demonstrated that it is possible to deal with most of these issues from a single-physics rapid-modelling perspective [18,19], in conjunction with the development of a supersonic aircraft shaping technology for reducing the initial shock pressure rise characterizing the ground signature. However, such a Quiet Supersonic Platform (QSP) requires a lightweight airframe and will exhibit aero-structural behavior. Its feasibility depends not only on the ability to design a shape that generates a low-boom ground signature, but mainly on the ability to build a lightweight, high-stiffness and damage-tolerant structure that can withstand the expected aerodynamic and thermal loadings for long-range missions at a sustained rate. Therefore, the final design of a QSP will have to rely on a multidisciplinary aero-thermo-structural optimization methodology.

5.2. Monitoring, assessment and management of fire–environment interaction

The ability to deal with the consequences of fire extends over a wide range of application areas that have a direct effect on the survivability, reparability, maintainability, life extension and mission-goal attainment of the environments and structural platforms affected by fire. Some of these application areas involve time-critical situations that, when they arise, demand decision-making support comprising an accurate monitoring capability supplemented by fire-damage assessment, management, and control-countermeasure capabilities. A case in point are navy vessels, built with a great variety of materials under extremely demanding threat conditions, such as possible catastrophic events due to fuel-tank proximity, limited oxygen supply (i.e. submarines), fire-byproduct toxicity, and structural and atmospheric damage.

To demonstrate the value of the core ideas behind the development of DDEMA, we plan to develop a proof-of-concept instantiation of DDEMA within the context of managing all required activities of an accidental fire-induced crisis scenario, as depicted in Fig. 4. More specifically, this application of DDEMA should be able to assist its users through the following features: (1) ability to receive real-time data streams from multiple redundant distributed sensor networks, allowing the capture of multimodal point or continuous distributions of various field quantities such as area/volume of live flames, temperature, chemical byproduct concentrations, etc.; (2) ability for multipoint-of-entry monitoring, accomplished in multiple locations synchronously or asynchronously; (3) co-presentation of reactive-flow and reactive-phase-transformation simulation capability with multiphysics fluid–structure interaction simulation capability, allowing "what-if" prediction exploration in order to evaluate the validity of decisions and alternative countermeasures; (4) a decision-support subsystem that combines sensor inputs, simulation-results analysis, and user choices based on experience and knowledge, to form a course of countermeasure actions that can itself be simulated; (5) an interface to the control system of an existing countermeasure distributed actuation network, in order to implement the decided management strategy. A system with these capabilities allows for portability and wide applicability.
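Features (3) and (4) above amount to a loop that pushes each candidate countermeasure through a forward ("microfuture") simulation and recommends the one with the lowest predicted severity. The sketch below uses a toy severity function in place of the PDE solvers; every name and the severity model itself are invented illustrations, not DDEMA components.

```java
import java.util.Map;
import java.util.function.ToDoubleFunction;

/** Sketch of the what-if decision loop: each candidate action is evaluated
 *  by a forward simulation of the sensed state, and the lowest predicted
 *  severity wins. The severity model is a toy stand-in for the solvers. */
public class FireDecisionSupport {

    /** Recommend the action whose simulated outcome has the lowest severity. */
    static String recommend(Map<String, Double> sensedState,
                            Map<String, ToDoubleFunction<Map<String, Double>>> actions) {
        String best = null;
        double bestSeverity = Double.POSITIVE_INFINITY;
        for (Map.Entry<String, ToDoubleFunction<Map<String, Double>>> e
                : actions.entrySet()) {
            double severity = e.getValue().applyAsDouble(sensedState);  // what-if run
            if (severity < bestSeverity) {
                bestSeverity = severity;
                best = e.getKey();
            }
        }
        return best;
    }
}
```

In the envisioned system each `ToDoubleFunction` stand-in would be a full reactive-flow/fluid–structure simulation dispatched to GRID resources, with the recommendation fed to the countermeasure actuation interface of feature (5).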

6. Conclusions We have presented an overview of the data-driven motivations for creating DDEMA as a system where data representing discrete factual encapsulation are used for embedding validation and qualification within the modelling and simulation aspects of systemic behavior prediction.


The requirements of such an environment have been established, and the major details of the abstract architecture of the software abstractions required for DDEMA's implementation have been determined. Two application scenarios for validating the architecture, implementation and utility of this system have also been described. The first is the usage of DDEMA as a computational design environment for a nonlinear multiphysics structure such as a Quiet Supersonic Platform, and the second is its usage as a computational decision-support system for fire monitoring and management.

Acknowledgements

The authors acknowledge the support of the National Science Foundation under grants ITR-0205663 and EIA-0203958.

References

[1] J. Michopoulos, P. Tsompanopoulou, E. Houstis, J. Rice, C. Farhat, M. Lesoinne, F. Lechenault, DDEMA: a data driven environment for multiphysics applications, in: P.M.A. Sloot, et al. (Eds.), Proceedings of the International Conference on Computational Science (ICCS'03), Part IV, LNCS 2660, Melbourne, Australia, June 2–4, Springer-Verlag, Heidelberg, 2003, pp. 309–318.
[2] J. Michopoulos, P. Tsompanopoulou, E. Houstis, J. Rice, C. Farhat, M. Lesoinne, F. Lechenault, Design architecture of a data driven environment for multiphysics applications, in: Proceedings of DETC'03, ASME DETC2003/CIE, Chicago, IL, Paper No. DETC2003/CIE-48268, September 2–6, 2003.
[3] Guide for the Verification and Validation of Computational Fluid Dynamics Simulations, American Institute of Aeronautics and Astronautics Standards Program, AIAA Report G-077-1998, 1998.
[4] ASME-JFE, Editorial policy statement on control of numerical accuracy, J. Fluids Eng. 115 (3) (1993) 339–340.
[5] DoD, Verification, Validation, and Accreditation (VV&A) Recommended Practices Guide, Defense Modeling and Simulation Office, Office of the Director of Defense Research and Engineering. http://www.dmso.mil/docslib.
[6] M. Pilch, T. Trucano, J. Moya, G. Froelich, A. Hodges, D. Peercy, Guidelines for Sandia ASCI Verification and Validation Plans—Content and Format, Version 2.0, Sandia Report SAND2000-3101, January 2001.
[7] C. Farhat, M. Lesoinne, Two efficient staggered procedures for the serial and parallel solution of three-dimensional nonlinear transient aeroelastic problems, Comput. Meth. Appl. Mech. Eng. 182 (2000) 499–516.
[8] C. Farhat, M. Lesoinne, P. Stern, S. Lanteri, High performance solution of three-dimensional nonlinear aeroelastic problems via parallel partitioned algorithms: methodology and preliminary results, Adv. Eng. Software 28 (1997) 43–61.
[9] E. Kant, F. Daube, W. MacGregor, J. Wald, Scientific programming by automated synthesis, in: M. Lowry, R. McCartney (Eds.), Automatic Software Design, AAAI Press/MIT Press, Menlo Park, CA, 1991, Chapter 8, pp. 169–205.
[10] R. van Engelen, L. Wolters, G. Cats, Ctadel: a generator of efficient code for PDE-based scientific applications, Technical Report 95-26, Department of Computer Science, Leiden University, 1995.
[11] H. Fujio, S. Doi, FEEL: a problem-solving environment for finite element analysis, NEC Res. Dev. 39 (4) (1998) 491–496.
[12] E.N. Houstis, J.R. Rice, Parallel ELLPACK: a development and problem solving environment for high performance computing machines, in: P. Gaffney, E. Houstis (Eds.), Programming Environments for High-Level Scientific Problem Solving, North-Holland, Amsterdam, 1992, pp. 229–241.
[13] S. Fleeter, E. Houstis, J. Rice, C. Zhou, A. Catlin, GasTurbnLab: a problem solving environment for simulating gas turbines, in: Proceedings of the 16th IMACS World Congress, 2000, No. 104-5.
[14] E.N. Houstis, A.C. Catlin, P. Tsompanopoulou, D. Gottfried, G. Balakrishnan, K. Su, J.R. Rice, GasTurbnLab: a multidisciplinary problem solving environment for gas turbine engine design on a network of non-homogeneous machines, J. Comput. Eng. Math. 149 (2002) 83–100.
[15] J.R. Rice, P. Tsompanopoulou, E.A. Vavalis, Interface relaxation methods for elliptic differential equations, Appl. Num. Math. 32 (1999) 219–245.
[16] P. Tsompanopoulou, Collaborative PDEs solvers: theory and practice, PhD Thesis, Mathematics Department, University of Crete, Greece, 2000.
[17] H. Tran, C. Farhat, An integrated platform for the simulation of fluid–structure–thermal interaction problems, AIAA Paper 2002-1307, 43rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, Denver, CO, April 22–25, 2002.
[18] J. Michopoulos, R. Badaliance, T. Chwastyk, L. Gause, P. Mast, C. Farhat, M. Lesoinne, Coupled multiphysics simulation of composite material softening in a virtual wind tunnel environment, in: Proceedings of the Sixth US National Congress on Computational Mechanics, US Association for Computational Mechanics, Dearborn, MI, 2001, p. 521.
[19] J. Michopoulos, C. Farhat, M. Lesoinne, P. Mast, R. Badaliance, T. Chwastyk, L. Gause, Material softening issues in a multiphysics virtual wind tunnel environment, in: Proceedings of the 40th Aerospace Sciences Meeting and Exhibit, Reno, NV, AIAA Paper 2002-1095, 2002.
[20] JADE, Java Agent Development Framework. http://jade.cselt.it.
[21] The GRASSHOPPER Agent Platform, IKV++ GmbH, Kurfurstendamm, Berlin, Germany. http://www.ikv.de.
[22] Java-RMI-IIOP. http://java.sun.com/products/rmi-iiop/.
[23] L. Parker, S.M. Christensen, MathTensor: A System for Doing Tensor Analysis by Computer, Addison-Wesley, Reading, MA, 1994.
[24] J. Davis II, C. Hylands, B. Kienhuis, E.A. Lee, J. Liu, X. Liu, L. Muliadi, S. Neuendorffer, J. Tsay, B. Vogel, Y. Xiong, Heterogeneous concurrent modeling and design in Java, Memorandum UCB/ERL M01/12, EECS, University of California, Berkeley, CA, March 15, 2001. http://ptolemy.eecs.berkeley.edu/ptolemyII/.
[25] D. Caromel, W. Klauser, J. Vayssiere, Towards seamless computing and metacomputing in Java, in: G.C. Fox (Ed.), Concurrency: Practice and Experience, vol. 10, no. 11–13, Wiley, New York, September–November 1998, pp. 1043–1061. http://www-sop.inria.fr/oasis/proactive/.
[26] G.L. Antoniu, L. Bouge, P. Hatcher, M. MacBeth, K. McGuigan, R. Namyst, Compiling multithreaded Java bytecode for distributed execution, in: Euro-Par 2000: Parallel Processing, Lecture Notes in Computer Science, vol. 1900, Springer-Verlag, Munich, Germany, 2000, pp. 1039–1052.
[27] C. Youn, M. Pierce, G. Fox, Building problem solving environments with application web service toolkits, in: P.M.A. Sloot, et al. (Eds.), Proceedings of the International Conference on Computational Science (ICCS'03), Part IV, LNCS 2660, Melbourne, Australia, June 2–4, Springer-Verlag, Heidelberg, 2003, pp. 403–412.
[28] Y. Huang, The role of Jini in a service-oriented architecture for grid computing, PhD Thesis, Department of Computer Science, Cardiff University, April 2003.
[29] Y. Huang, GSiB: PSE infrastructure for dynamic service-oriented grid applications, in: P.M.A. Sloot, et al. (Eds.), Proceedings of the International Conference on Computational Science (ICCS'03), Part IV, LNCS 2660, Melbourne, Australia, June 2–4, Springer-Verlag, Heidelberg, 2003, pp. 430–439.
[30] T. Drashansky, S. Weerawarana, A. Joshi, R. Weerasinghe, E. Houstis, Software architecture of ubiquitous scientific computing environments, ACM–Baltzer Mobile Networks and Nomadic Applications (MONET) 1 (4) (1996).
[31] Java Native Interface Specification. http://web2.java.sun.com/products/jdk/1.1/docs/guide/jni.
[32] JACAW: a Java-C automatic wrapper tool. http://www.wesc.ac.uk/projects/jacaw/.
[33] C. Douglas, Y. Efendiev, R. Ewing, R. Lazarov, M. Cole, C. Jones, C. Johnson, Virtual telemetry for dynamic data-driven application simulations, in: P.M.A. Sloot, et al. (Eds.), Proceedings of the International Conference on Computational Science (ICCS'03), Part IV, LNCS 2660, Melbourne, Australia, June 2–4, Springer-Verlag, Heidelberg, 2003, pp. 279–288.
[34] D. Knight, Data driven design optimization methodology: a dynamic data driven application system, in: P.M.A. Sloot, et al. (Eds.), Proceedings of the International Conference on Computational Science (ICCS'03), Part IV, LNCS 2660, Melbourne, Australia, June 2–4, Springer-Verlag, Heidelberg, 2003, pp. 329–336.
[35] J.G. Michopoulos, P.W. Mast, R. Badaliance, I. Wolock, Health monitoring of smart structures by the use of dissipated energy, in: G.P. Carman, E. Garcia (Eds.), Proceedings of the 93 WAM on Adaptive Structures and Material Systems, AD-vol. 35, ASME, New York, 1993, pp. 457–462.
[36] M. Grabner, F. Gruber, L. Klug, W. Stockner, J. Altmann, W. Esmayr, EvalAgents: Evaluation Report, Software Competence Center Hagenberg, Technical Report SCCH-TR-64/2000, 2000.
[37] The Foundation for Intelligent Physical Agents, FIPA Specifications. http://www.fipa.org/specifications/index.html.
[38] NIST Fire Dynamics Simulator and Smokeview. http://fire.nist.gov/fds/.
[39] freeFEM Family of Codes. http://www.freefem.org/.
[40] FlexPDE. http://www.pdesolutions.com/index.html.
[41] G.C. Sih, J.G. Michopoulos, S.C. Chou, Hygrothermoelasticity, Martinus Nijhoff (now Kluwer Academic Publishers), Dordrecht, 1986.
[42] J.G. Michopoulos, G.C. Sih, Coupled theory of temperature moisture deformation and electromagnetic fields, Institute of Fracture and Solid Mechanics Report IFSM-84-123, Lehigh University, 1984.

J. Michopoulos is a Senior Research Scientist and Mechanical Engineer at the Special Projects Group of the Naval Research Laboratory in Washington, DC, USA, and directs the Computational Multiphysics Systems Lab. He holds a PhD in Applied Mathematics and Theoretical and Experimental Mechanics (1983) from the National Technical University of Athens, Greece, and completed his postdoctoral work (1984) at Lehigh University, Bethlehem, PA, USA. His research interests are in the areas of computational theoretical and experimental coupled multiphysics, data-driven robotic characterization, identification and simulation of constitutive response of materials, heterogeneous computing, automation of research, automated theory and model generation through symbolic algebra, development and characterization of biomimetic systems, and whole platform/structure simulation-based design. He has authored or coauthored more than 65 publications in these areas, is the recipient of over 40 awards, and is a member of various professional societies. P. Tsompanopoulou is a Visiting Assistant Professor in the Department of Computer and Communication Engineering of the University of Thessaly, Greece. Her research interests are scientific computing, numerical analysis, parallel computing, agent computing and problem solving environments. She has been a Postdoctoral Research Fellow at Simula Research Laboratory, Norway, and a Postdoctoral Research Associate in the Department of Computer Sciences, Purdue University. She received her MS in Applied Mathematics from the University of Crete, Greece, in January 1995, her MS in Computer Sciences from Purdue University in May 1999, and her PhD in Numerical Analysis from the University of Crete, Greece, in July 2000.


E. Houstis is a Full Professor in the Computer Engineering and Communications Department at the University of Thessaly, Greece, and at Purdue University, USA. Most of his academic career is associated with Purdue University, where he has been a Professor of Computer Science and Director of the Computational Science and Engineering Program. He is on the editorial board of several journals and is a member of the IFIP WG 2.5 Working Group on Numerical Software. His current research interests are in the areas of problem solving environments (PSEs), network computing, enterprise systems, computational intelligence, computational finance, e-business, and e-learning. He is one of the principal designers of several domain-specific PSEs (e.g., Parallel ELLPACK, PDELab, WebPDELab, GasTurbnLab, SoftLab, PYTHIA) and has conducted numerous performance evaluation studies of PDE software and parallel architectures. C. Farhat is a Professor and Chair of Aerospace Engineering Sciences, and Director of the Center for Aerospace Structures, at the University of Colorado at Boulder. He holds a PhD in Civil Engineering from the University of California, Berkeley (1987). He is the recipient of several prestigious awards, including the Institute of Electrical and Electronics Engineers (IEEE) Computer Society Gordon Bell Award (2002), the International Association for Computational Mechanics (IACM) Computational Mechanics Award (2002), the IACM Award in Computational Mechanics for Young Investigators (1998), the USACM R.H. Gallagher Special Achievement Award for Young Investigators (1997), the IEEE Computer Society Sidney Fernbach Award (1997), the IBM Sup'Prize Achievement Award (1995), the American Society of Mechanical Engineers (ASME) Aerospace Structures and Materials Best Paper Award (1994), the Society of Automotive Engineers (SAE) Arch T. Colwell Merit Award (1993), and many others. He is the Vice Chair of the Society for Industrial and Applied Mathematics' Activity Group on Supercomputing (2003–2006), and a fellow of various professional societies. He is the author of over 200 refereed publications on aeroelasticity, computational fluid dynamics on moving grids, computational structural mechanics, computational acoustics, supercomputing, and parallel processing. M. Lesoinne is an Assistant Professor in the Aerospace Department at the University of Colorado. His research interests are in high-performance computing and computational aeroelasticity. He has made contributions to the FETI algorithm, leading to the development of the FETI-DP version. He is a co-recipient of a 2002 Gordon Bell Award. He received his BS in Aerospace Engineering from the University of Liege in Belgium, an MS in aerospace vehicle design from Cranfield University in the UK, and an MS and PhD in Aerospace Engineering from the University of Colorado at Boulder in 1991 (MS) and 1994 (PhD).

J. Rice is the W. Brooks Fortune Distinguished Professor at Purdue University. He studied Mathematics at the California Institute of Technology (PhD, 1959) and came to Purdue University in 1964. His early research work was mostly in mathematics; he now works in the areas of computer security, computational science and engineering, and problem solving environments. He was founder and Editor-in-Chief of the ACM Transactions on Mathematical Software (1975–1993). His professional honors include the 1975 Forsythe Distinguished Lectureship, Fellowship of the AAAS, Fellowship of the ACM, and election to the National Academy of Engineering.

A. Joshi is an Associate Professor of Computer Science and Electrical Engineering at UMBC. He obtained a BTech degree in Electrical Engineering from IIT Delhi in 1989, and a Masters and PhD in Computer Science from Purdue University in 1991 and 1993, respectively. His research interests are in the broad area of networked computing and intelligent systems. His primary focus has been on data management for mobile computing systems in general, and most recently on data management and security in pervasive computing and sensor environments. He has created agent-based middleware to support discovery, composition, and secure access of services/data over both infrastructure-based and ad hoc wireless networks, as well as systems that integrate sensors with the grid. He is also interested in the semantic web and data/web mining, where he has worked on personalizing the web space using a combination of agents and soft computing. His other interests include networked HPCC. He has published over 50 technical papers, presented tutorials at conferences, served as guest editor of special issues of IEEE Personal Communications, Communications of the ACM, etc., and served as an Associate Editor of IEEE Transactions on Fuzzy Systems from 1999 to 2003. He is a member of IEEE, IEEE-CS, and ACM.