Hybrid models in agent-based environmental decision support


Applied Soft Computing 11 (2011) 5243–5258


Marina V. Sokolova a,b,c, Antonio Fernández-Caballero a,b,c,∗

a Instituto de Investigación en Informática de Albacete (I3A), Universidad de Castilla-La Mancha, 02071 Albacete, Spain
b Kursk State Technical University, Kursk, ul. 50 Let Oktyabrya, 305040, Russia
c Departamento de Sistemas Informáticos, Escuela de Ingenieros Industriales de Albacete, Universidad de Castilla-La Mancha, 02071 Albacete, Spain

Article info

Article history: Received 13 September 2010; received in revised form 10 March 2011; accepted 4 May 2011; available online 23 May 2011.

Keywords: Hybrid models; Human health; Environmental pollution; Decision support systems; Multi-agent systems; Simulation

Abstract

Providing informational support in decision making is one of the priority directions of research in the sphere of public health management. Modern approaches suggest wide usage of intelligent data mining methods and Web services, but only a few enable the study of a complex system from an interdisciplinary point of view. In this paper an agent-based decision support system (ADSS), which embodies the principles of the interdisciplinary approach and facilitates a multi-focal view and examination of the "Environment–Public health" system, is introduced. The detailed design of the system, with emphasis on the roles, scenarios and its implementation, is presented. The data mining procedures used for data preparation, modeling and simulation, which include statistics, methods of artificial intelligence and hybrid models in the form of cascade committee machines, are described. Then, the advantages of the proposed hybrid models over "singular" modeling methods are demonstrated. Finally, a case study for a selected region is presented and its results are discussed.

© 2011 Elsevier B.V. All rights reserved.

1. Introduction

Health care decision support is faced with the challenges of complex and diverse data and knowledge, the lack of standardized terminology compared to the basic sciences, stringent performance and accuracy requirements, and the prevalence of legacy systems [20]. Capturing domain knowledge has proven to be one of the largest challenges for expert system builders [10]. It has become evident, through research into intelligent decision support systems (DSS), that expert knowledge needs to be supplemented with facts gleaned from machine learning processes to solve more difficult problems. Due to the stochastic and complex nature of most real world systems, simulation models of these systems are themselves difficult to build as well as time consuming to execute. In many cases, decision makers cannot afford to explore a large area of the decision variable space or to conduct a lengthy search for the best set of decision variables [4]. One feasible alternative is to build a meta-model [19]. So, due to the growing complexities and uncertainties in decision making situations, model-driven DSS have become increasingly important to decision makers [25].

∗ Corresponding author at: Instituto de Investigación en Informática de Albacete (I3A), Universidad de Castilla-La Mancha, 02071 Albacete, Spain. Tel.: +34 967599200; fax: +34 967599224. E-mail addresses: [email protected] (M.V. Sokolova), [email protected] (A. Fernández-Caballero). doi:10.1016/j.asoc.2011.05.035

Model-driven DSS can assist decision makers in applying quantitative models to support the decision-making process. The simulation model is an important type of quantitative model used in model-driven DSS. A simulation model can imitate the behavior of an actual or anticipated human or physical system. It can capture much more detail about a specific system than algebraic models, and it can capture the underlying mechanisms and dynamics of a system, which enables decision makers to effectively manage daily operations and make long term plans. It also provides a test-bed to assess changes in operations and managerial policies [11]. Decision making involves processing or applying information and knowledge, and the appropriate information/knowledge mix depends on the characteristics of the decision making context [38]. Intelligent data analysis may be defined as "encompassing statistical, pattern recognition, machine learning, data abstraction and visualization tools to support the analysis of data and discovery of principles that are encoded within the data" [18]. The authors state that the principal difference between intelligent data analysis and knowledge discovery in databases is that the techniques used are those of artificial intelligence [7] rather than purely traditional statistical methods. Intelligent data analysis refers to all methods that are devoted to supporting the transformation of data into information by exploiting the knowledge available on the domain. Very recently, a novel framework for the construction of augmented fuzzy cognitive maps based on fuzzy rule-extraction methods for decisions in medical informatics has been presented [24]. Environment is a clear example of a complex domain, composed of numerous self-organized subsystems.



If interactions of humans within the environment are studied, the level of complexity of such a system greatly increases [27,32]. A very recent paper describes the design of a fuzzy decision support system using a multi-criteria analysis approach for selecting the best plan alternatives or strategies in environment watershed [5].

It is a fact that environment affects human health. Climate change together with the growing anthropogenic impact intensifies interactions within the "environment–human health" system. Humans are affected by this global imbalance and react with direct and indirect health problems; examples include "excessive heat-related illnesses, vector- and waterborne diseases, increased exposure to environmental toxins, exacerbation of cardiovascular and respiratory diseases due to declining air quality, and mental health stress. Vulnerability to these health risks will increase as elderly and urban populations increase and are less able to adapt to climate change. In addition, the level of vulnerability to certain health problems varies by location. As a result, strategies to address climate change must include health as a strategic component on a regional level. Improving health while addressing climate change will contribute to public health infrastructure today, while reducing the negative consequences of a changing climate for future generations" [36].

This complex system can be decomposed into "Air pollutants", "Climate change", "Water", "Ecological", and "Social/economic" sub-systems, and each of the sub-systems affects human health. The strength and dynamics of health outcomes can be measured with statistical indicators: mortality, morbidity, birth defects rate, etc. The "Health" concept represents a complex system which includes physical, social, mental, spiritual and biological well-being, spanning all the spheres of human lives. Environmental pollution, as one of the factors with a dominant and obvious influence upon human health, causes direct and latent harmful effects, which must be evaluated in order to create a set of preventive health-preserving solutions. That is why linking all the named components into one system and studying it leads to the analysis of potential and present health problems, to the retrieval of new ones, and to an in-depth view of situation development, strategies, and activities oriented to situation management and control.

Numerous studies have shown an adverse relationship between environmental hazards and human health, and some research works have aimed at discovering the detailed mechanisms of these relationships [13,31]. For example, a known and aggressive air contaminant is ambient fine particulate matter (PM2.5). It has been demonstrated that it increases cardiovascular risks, with a stronger impact on heart failure [12], and causes premature death if it is locally emitted [23]. Ref. [34] examines and proves the presence of associations between carbon monoxide (CO), nitrogen dioxide (NO2), ozone (O3), sulfur dioxide (SO2), and particulate matter (PM10 and PM2.5), and visits for angina/myocardial infarction, heart failure, dysrhythmia/conduction disturbance, asthma, chronic obstructive pulmonary disease, and respiratory infections. Another paper [35] describes research in which adverse respiratory health effects on children caused by a petrochemical refinery's emissions are studied; the emissions studied include SO2, particles and oxides of nitrogen, as well as fugitive emissions consisting of numerous aliphatic and aromatic hydrocarbons.
Some studies of prenatal exposure to solvents, including tetrachloroethylene, have shown increases in the risk of certain congenital anomalies among exposed offspring [1]. Some environmental pollutants play the role of carcinogens. For example, two types of environmental exposures are related to lung cancer: radon in homes and arsenic in drinking water [2]. Also, in [22] a causal effect of traffic pollution that leads to increased respiratory illnesses in children is studied. In the case of environmental impact assessment (EIA), all the advantages of intelligent agents become crucial. Indeed, the intelligent agent and multi-agent system (MAS) approach is an excellent

technique that can help to reduce the complexity of a system by creating modular components which solve private subtasks that together achieve the whole goal. Every agent utilizes the most effective technique for solving its subtask instead of applying a general approach that is often acceptable for the system as a whole, but not optimal for a concrete subtask [29]. According to a recent paper [6], "the applications of agents and multi-agent systems in the health care and clinical management environments are becoming a reality. Most agent-based applications are related to the use of this technology in patient monitoring, treatment supervision and data mining."

EIA is an indicator which enables evaluation of the negative impact upon human health caused by environmental pollution. Environmental pollution, a factor with dominant and obvious influence, causes direct and latent harm, which must be evaluated and simulated in order to create a set of preventive health-preserving solutions. Large amounts of raw data describe the "Environment–Human health" system, but not all the information is used. It is transformed from the initial "raw" state to the "information" state, which suggests organized data sets, models and dependencies, and, finally, to the "new information" state, which is represented as a set of recommendations, risk assessments and forecast values.

1.1. Notations

This paper focuses on the description of an agent-based decision support system and its application to the environmental domain. The nomenclature and abbreviations used throughout the paper are provided in Table 1.

2. Description of the agent-based DSS

The proposed framework consists of three phases, which are reflected in the architecture of the agent-based DSS (ADSS). The proposed system is logically and functionally divided into three layers: the first is dedicated to meta-data creation (information fusion), the second is aimed at knowledge discovery (data mining), and the third layer provides real-time generation of alternative scenarios for decision making [30,28]. The levels do not have strongly fixed boundaries, because the agents construct a community in which the agents' spheres of competence can overlap, so the boundaries are smooth. The ADSS asserts the main points of a traditional decision making process, and includes the following steps:

1. Problem definition.
2. Information gathering.
3. Alternative actions identification.
4. Alternatives evaluation.
5. Selection of decision.
6. Decision implementation.

The first and second stages are performed during the initial step, when the expert information and the initial retrospective data are gathered; stages three, four and five are solved by means of the MAS; and the sixth stage is supposed to be carried out by the decision maker. Although the goals of the ADSS are determined and remain constant across various domains, the goals of the case study are not visible and clear at first sight. That is why the domain of interest has been studied with the creation of a goal tree.

2.1. Goals and scenarios

The principal goals of the proposed multi-agent system follow the logical sequence of the main stages of the DeciMaS framework.

Table 1
Nomenclature and abbreviations.

Abbreviation | Meaning
AA | ANN agent
ADSS | Agent-based decision support system
BOD | Biochemical oxygen demand
BP | Back-propagation algorithm
CA | Correlation agent
CMA | Committee Machine agent
CO | Carbon monoxide
COD | Chemical oxygen demand
CSA | Computer Simulation agent
DA | Decomposition agent
DAA | Data Aggregation agent
DOA | Domain Ontology agent
DPA | Data Preprocessing agent
DSA | Data Smoothing agent
DSS | Decision support system
EA | Evaluation agent
EIA | Environmental impact assessment
FAA | Function Approximation agent
FHX | Farfield Human Exposure
GA | Genetic algorithms
GAA | Gaps and Artifacts Check agent
GANN | Neural network trained with genetic algorithms
GMDH | Group Method of Data Handling
GMDHA | GMDH agent
ICD | International Statistical Classification of Diseases and Related Health Problems
ICIDH | International Classification of Functioning and Disability
MAS | Multi-agent system
MFA | Mining Data Fusion agent
MoFA | Morbidity Data Fusion agent
NA | Normalization agent
NO2 | Nitrogen dioxide
O3 | Ozone
PFA | Petroleum Data Fusion agent
PM2.5 | Ambient fine particulate matter
PM10 | Particulate matter
RA | Regression agent
RPROP | Resilient propagation algorithm
SO2 | Sulfur dioxide
TFA | Traffic Pollution Fusion agent
WDFA | Waste Data Fusion agent
WFA | Water Data Fusion agent

For this reason, the process of the system design starts with the identification of general goals, which are divided into subgoals and then refined. The final goal is Create recommendation, and it is achieved as a result of three parallel goals: Make forecast, Make sensitivity analysis and Check for alarm. The goals Make forecast and Make sensitivity analysis use parts of the same knowledge. Both sensitivity analysis and forecasting are based on models received as a result of the Create models and Select the best models goals. The goal Check environmental impact is independent from the other goals of the second logical layer, although its outputs are used for making recommendations on the third logical layer. The goal Preprocess data is the initial goal for all the data mining procedures. The correlation between goals is shown in Fig. 1. A scenario represents a purposeful behavioral model of collective activity [33]. A scenario serves to achieve a practical goal and may include sub-scenarios as well. It involves, at least, two agents performing particular roles. A scenario is a composition of agents' scenarios oriented to achieve goals, which, in their turn, may be achieved by sets of problem-solving activities that include agent roles. Every role played by an agent includes actions and plans. Tables 2–6 present five scenarios that the system should enact. Each scenario is described as a sequence of steps, where each step has its type, name, role, and the type of the data it uses and produces. The type term can be a goal, an action, a percept, or a sub-scenario.


The scenario for achieving the Retrieve and fuse data goal is provided in Table 2. It is initiated by the Data Aggregation agent, and the Domain Ontology and data mining agents participate in it. It supposes collaboration with the actor Expert. The scenario has seven steps: the first step supposes receiving the percept "Obtain expert knowledge", and the remaining steps are actions. The scenario to achieve the Preprocess data goal is shown in Table 3; it describes the interaction of the Data Aggregation agent with the Data Preprocessing agent and its team. This scenario has one goal, Prepare data for modeling, and performs five actions, namely Eliminate artifacts, Normalize data, Smooth data, Parametric correlation and Non-parametric correlation. The Check Environmental Impact scenario has two actions, Create neural networks and Evaluate impact assessment, and invokes the Function Approximation agent and the Artificial Neural Network agent. This scenario is presented in Table 4. The Create models scenario has 11 steps, which are actions to be completed by agents from the Function Approximation agent team, as shown in Table 5. The Create recommendation scenario supposes collaboration with the external actor User/Decision maker and contains ten steps. These steps are carried out within two roles: Computer Simulation and Decision Making. Table 6 provides a detailed view of this scenario.

2.2. Actors

The proposed ADSS supposes communication with two actors. One actor, Expert, embodies the external entity which possesses the information about the problem area. In more detail, the Expert contains the knowledge of the domain of interest represented as an ontology, and delivers this knowledge to the ADSS. As a result of the interaction within the Retrieve and fuse data scenario, the raw information is read from the "Heterogeneous Data Sources" data storage, and the "Pollutants" and "Morbidity" data sources are created. The second actor, named User/Decision maker, is involved in an interactive process of generating alternative decisions in order to choose the optimal one or ones. Through the Simulate models scenario, this actor interacts with the knowledge base and gets recommendations if they have been previously simulated and stored, or creates and simulates new alternative decisions. The actor communicates with the agents by passing a message stating the model, the values to simulate, the prediction periods, the levels of variable change, etc., and accepts the best alternative in accordance with its beliefs and the MAS. The flows of work, which are essential for decision making, include three sub-scenarios: the Simulate models scenario, the Create recommendation scenario and the Search for the adequate model scenario. Additionally, there are three goals, related to these scenarios, which have similar names. Each goal has a number of activities, and within each scenario resources in the form of data sources are used, modified or created.

2.3. Roles of the proposed MAS

Scenarios focus on how a multi-agent system achieves goals, interaction models define the agents' outgoing communications, and the actors represent external entities which interact with the system [33]. The detailed behavior of an agent is represented by roles and plans, and the communication between agents is shown in acquaintance models. Roles represent an agent's functions, responsibilities and expectations. A role enables pooling together the goals of the system in accordance with the different types of behavior that an agent assumes when achieving a goal or a series of goals. Table 7 shows a view of the corresponding logical levels and the roles which are played on them.



Fig. 1. Goals tree.

Table 2
Scenario for achieving the "Retrieve and fuse data" goal.

Name: Retrieve and fuse data scenario
Actors: Expert
Initiator: The Data Aggregation agent
Trigger: The "Read ontology" message to the Domain Ontology agent

Step # | Type | Name | Role | Data used | Data created
1 | Percept | Obtain expert knowledge | Data Fusion | Domain Ontology agent beliefs | Domain Ontology agent beliefs
2 | Action | Read ontology and data sources | Data Fusion | External data sources | Domain Ontology agent beliefs
3 | Action | Fuse morbidity data | Data Fusion | External data sources | Domain Ontology agent beliefs
4 | Action | Fuse transport data | Data Fusion | External data sources | Domain Ontology agent beliefs
5 | Action | Fuse data on mines | Data Fusion | External data sources | Domain Ontology agent beliefs
6 | Action | Fuse petroleum usage data | Data Fusion | External data sources | Domain Ontology agent beliefs
7 | Action | Fuse data on wastes | Data Fusion | External data sources | Domain Ontology agent beliefs

Table 3
Scenario for achieving the "Preprocess data" goal.

Name: Preprocess data scenario
Actors: No
Initiator: The Data Preprocessing agent
Trigger: The "ready" message from the Data Aggregation agent

Step # | Type | Name | Role | Data used | Data created
1 | Action | Eliminate artifacts | Data Clearing | Domain Ontology agent beliefs | Morbidity, Pollutants
2 | Action | Normalize data | Data Clearing | External data sources | Morbidity, Pollutants
3 | Action | Smooth data | Data Clearing | Morbidity, Pollutants | Morbidity, Pollutants
4 | Goal | Prepare data for modeling | Data Clearing | Morbidity, Pollutants | dataX, dataY
5 | Action | Parametric correlation | Data Clearing | dataX, dataY | Correlation table
6 | Action | Non-parametric correlation | Data Clearing | dataX, dataY | Correlation table



Table 4
Scenario for achieving the "Check Environmental Impact" goal.

Name: Check Environmental Impact scenario
Actors: No
Initiator: The Function Approximation agent
Trigger: The "Start Data Mining" message from the Function Approximation agent

Step # | Type | Name | Role | Data used | Data created
1 | Action | Create neural networks | Impact Assessment | dataX, dataY, Correlation table | IAResults
2 | Action | Evaluate impact assessment | Impact Assessment | IAResults | IAResults

Table 5
Scenario for achieving the "Create models" goal.

Name: Create models scenario
Actors: No
Initiator: The Function Approximation agent
Trigger: The "ready" message from the Data Preprocessing agent

Step # | Type | Name | Role | Data used | Data created
1 | Action | Decomposition | Decomposition | dataX, dataY, Correlation table, rangs | Groupings table
2 | Action | Create univariate regression models | Function Approximation | dataX, dataY, Correlation table | Models table
3 | Action | Create multiple regression models | Function Approximation | dataX, dataY, Correlation table | Models table
4 | Action | Create neural network models | Function Approximation | dataX, dataY, Correlation table | Models table
5 | Action | Create GMDH-models | Function Approximation | Models table, dataX, dataY | Models table
6 | Action | Evaluate univariate regression models | Function Approximation | Models table | Models table
7 | Action | Evaluate multiple regression models | Function Approximation | Models table | Models table
8 | Action | Evaluate neural network models | Function Approximation | Models table | Models table
9 | Action | Accept models | Function Approximation | Models table | Models table
10 | Action | Create committee machines | Function Approximation, Impact Assessment | Models table | Final models
11 | Action | Creation of reports | Function Approximation, Impact Assessment, Decomposition | IAResults, Final models, CS results, Grouping table | —

The distribution of roles among agents determines the agents' specialization and knowledge. One of the intentions of the system design was to assign one role to each agent or agent team. This requirement is met for the roles Data Fusion and Data Clearing, which are carried out by the teams of the Data Aggregation agent and of the Data Preprocessing agent, respectively. Moreover, the Function Approximation agent manages three data mining roles (Impact Assessment, Decomposition and Function Approximation), and the Computer Simulation agent takes on the Computer Simulation, Decision Making and Data Distribution roles.

With regard to the proposed multi-agent architecture, and in order to speed up the recommendation generation process and optimize interactions between agents, local agent teams are used. The teams coordinate and supervise task execution and the utilization of resources. Agent teams synchronize the work of the system, execute plans in a concurrent mode, and strengthen the internal management by local decision making. There are four agent teams defined within the system: two on the first level, and one each on the second and third levels. Each "main" agent plays several roles. Once the multi-agent system notions have been defined and its logical architecture has been determined, with the set of goals, scenarios, interaction models and data usage, a global view of the system's layers and a description of the agent teams can be provided.

2.4. Description of the agents

2.4.1. The Data Aggregation agent and its team

The Data Aggregation agent (DAA) is the principal agent acting within the first logical layer. It has a number of subordinate agents under its control: one of them is oriented to read the Domain Ontology, and the others retrieve information from the identified data sources. These are:

1. The Domain Ontology agent (DOA).
2. The fusion agents:
• Water Data Fusion agent (WFA),
• Petroleum Data Fusion agent (PFA),
• Mining Data Fusion agent (MFA),
• Traffic Pollution Fusion agent (TFA),
• Waste Data Fusion agent (WDFA),
• Morbidity Data Fusion agent (MoFA).

Table 6
Scenario for achieving the "Create recommendation" goal.

Name: Create recommendation scenario
Actors: User/decision maker
Initiator: The Computer Simulation agent
Trigger: The "ready" message from the Function Approximation agent

Step # | Type | Name | Role | Data used | Data created
1 | Percept | Obtain preferences for simulation | — | — | Data for simulation
2 | Goal | Make forecast | Computer Simulation | Models tables, dataX, dataY | —
3 | Action | Forecasting | Computer Simulation | dataX, dataY, Correlation table | CS Results table
4 | Goal | Make sensitivity analysis | Computer Simulation | Models table, dataX, dataY | CS Results table
5 | Action | Models simulation | Computer Simulation | CS Results table, IAResults | CS Results table
6 | Percept | Preferences for decision | Decision Making | — | Data for decision
7 | Action | Criteria application | Decision Making | — | Final models
8 | Goal | Check for alarm | Decision Making | — | —
9 | Action | Alarm generation | Decision Making | Alarm levels | Final models
10 | Action | Creation of reports | Decision Making | IAResults, Final models, CS results | —



Table 7
Roles played in the multi-agent system.

Logical level | Main agent | Subordinate agents | Role
Data Fusion | Data Aggregation agent | Domain Ontology agent; Traffic Pollution Fusion agent; Water Data Fusion agent; Petroleum Data Fusion agent; Mining Data Fusion agent; Morbidity Data Fusion agent; Waste Data Fusion agent | Data Fusion
Data Fusion | Data Preprocessing agent | Normalization agent; Correlation agent; Data Smoothing agent; Gaps and Artifacts Check agent | Data Clearing
Data Mining | Function Approximation agent | Regression agent; ANN agent; GMDH agent; Committee Machine agent; Decomposition agent; Evaluation agent | Impact Assessment; Decomposition; Function Approximation
Decision Making | Computer Simulation agent | Forecasting agent; View agent; Alarm agent | Computer Simulation; Decision Making; Data Distribution

The process of information fusion requires working with multiple data sources, some of which vary significantly in their format and internal data structure. These are the reasons why the Data Aggregation agent has several subordinate agents at its disposal: they facilitate data retrieval because each of them specializes in a particular type of pollutant. The Data Aggregation agent must achieve the following goals:

1. Obtain information from the ontology of the domain.
2. Search for information sources, which may contain information of interest stored in the ontology of the domain.
3. Retrieve information from the found sources.
4. Transform the retrieved information in order to avoid heterogeneity.
5. Fuse information.

2.4.2. The Data Preprocessing agent and its team

The Data Preprocessing agent (DPA) aims to prepare the initial data for further modeling. It manages a number of subordinate agents, which make up its team. Each subordinate agent specializes in a different data clearing technique:

• The Gaps and Artifacts Check agent (GAA) clears the fused raw information from missing and inconsistent values and fills the gaps.
• The Data Smoothing agent (DSA) carries out exponential and moving average smoothing procedures.
• The Normalization agent (NA) normalizes data sets.
• The Correlation agent (CA) calculates correlation matrices.

2.4.3. The Function Approximation agent and its team

The Function Approximation agent (FAA) has a hierarchical team of subordinate agents, which serve to carry out the roles

"Impact Assessment", "Decomposition" and "Function Approximation". The FAA has under its control a number of subordinate agents:

• Data mining agents incorporate well-known techniques already applied to more or less recent decision support systems (e.g. [17,3,9,37], among others). In our case they work in a concurrent mode and create models of the following types:
– The Regression agent (RA) creates regression models for given independent and dependent variables. It reads information from belief data sets for the independent variables X and the dependent variables Y. The Regression agent has two plans:
* simpleRegression, used for the generation of univariate regression models;
* multipleRegression, used for the generation of multiple regression models.
Each plan permits the creation of linear and non-linear models: hyperbolic, exponential and power models.
– The Artificial Neural Network agent (AA) creates different types of models based on neural networks. For this reason, it executes several plans:
* evaluateImpactAssessment, aimed to calculate the environmental impact assessment and to select the most influential factors X for every dependent variable Y;
* neuralNetwork, used for the generation of approximation and autoregression models of the forms Y = F(X), Y = F(t) and X = F(t).
Feed-forward neural networks trained with the backpropagation algorithm are calculated within the evaluateImpactAssessment plan. When an agent chooses the neuralNetwork plan, it creates feed-forward neural network models trained with:
* the backpropagation algorithm (BP),
* genetic algorithms (GA),
* the resilient backpropagation algorithm (RPROP).
– The GMDH agent (GMDHA) creates polynomial models with the group method of data handling.
• The Evaluation agent (EA) calculates evaluation criteria for the models. It evaluates the received models, checks the adequacy of each model against the experimental data, and returns the list of accepted models, while the others are banned and deleted. The Evaluation agent has four plans for model evaluation.



extractCSV.plan
Input: Data Sources *.csv. Output: Retrieved information C and P.
(1) Read the file line by line and analyze tokens. Search for elements C and P. Extract found concepts/properties and "remember" their positions in the file.
(2) Search for concepts/properties as intersections of found concepts.
(3) Change found properties if their scales are different from the standard ones.
(4) Write retrieved concepts C and P to an internal file/DB and send it to the DAA.

extractDOC_XLC.plan
Input: Data Sources *.doc, *.xls. Output: Retrieved information C and P.
(1) Search for elements from C and P in rows or/and columns. Mark found concepts/properties.
(2) Search for concepts/properties as intersections of found rows and columns.
(3) Change found properties if their scales are different from the standard ones.
(4) Send retrieved concepts C and P to the DAA.

Fig. 2. Plans for data extraction from the CSV, XLS and DOC files.
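As a rough illustration of step (1) of the extractCSV plan, the following Python sketch scans a CSV file for ontology concepts and remembers their positions; the vocabulary in concepts and the file name are hypothetical stand-ins, not part of the original system.

import csv

# Illustrative ontology vocabulary; in the real system the concepts (C)
# and properties (P) come from the Domain Ontology agent's beliefs.
concepts = {"Morbidity", "Nitrites", "Fuel oil"}

def extract_csv(path):
    # Step (1): read the file line by line, analyze tokens and
    # "remember" the positions of the found concepts in the file.
    found = []
    with open(path, newline="", encoding="utf-8") as fh:
        for r, row in enumerate(csv.reader(fh)):
            for c, token in enumerate(row):
                if token.strip() in concepts:
                    found.append((r, c, token.strip()))
    # Steps (2)-(4) -- intersection search, rescaling and hand-over
    # to the DAA -- are omitted here for brevity.
    return found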

These plans are:
– evaluateANN, which is used when the Evaluation agent receives the StartEvaluation message from the Artificial Neural Network agent;
– evaluateSimpleRegression, which is used when the Evaluation agent receives the StartEvaluation message indicating the "univariate" parameter as a regression type from the Regression agent;
– evaluateMultipleRegression, which is used when the Evaluation agent receives the StartEvaluation message indicating the "multiple" parameter as a regression type from the Regression agent;
– acceptModels, which is used for each data mining agent in order to check the adequacy of a model against the initial data.
• The Committee Machine agent (CMA) creates hybrid models, using the models created by the RA and the AA and evaluated by the EA.
• The Decomposition agent (DA) carries out the decomposition procedure. It provides a relevant plan, Decomposition, to rank the inputs of the neural model by their importance, and illustrates how the model output may change in response to the variation of an input.

2.4.4. The Computer Simulation agent

The Computer Simulation agent interacts with the user and performs a set of tasks within the Computer Simulation, Decision Making and Data Distribution roles. Its subordinate agents are the following:

• The Forecasting agent, which is used to create forecasts and predictions of dependent and independent variables.
• The Alarm agent, which is used to identify values produced by the Forecasting agent that exceed the permissible levels.
• The View agent, which is used to organize the computer–user interaction and create textual, graphical, and other types of documents.

The Computer Simulation agent asks for the user's preferences: more precisely, for the diseases and pollutants of interest, the period of the forecast, and the ranges of the value changes.

Once the information from the user is received, the Computer Simulation agent sends a message to the Forecasting agent, which reasons and executes one of its plans: Forecasting, ModelSimulation, or CriterionApplication. When the alternative is created, the Computer Simulation agent sends another message to the Alarm agent. The Alarm agent compares the simulation and forecast data from the Forecasting agent with the permitted and alarm levels for the corresponding indicators. If they exceed the levels, the Alarm agent generates alarm alerts.

3. Data for experiment

In order to evaluate the proposed agent-based DSS, retrospective data from 1989 until 2007 for Castilla-La Mancha (a Spanish region) is used. Indeed, resources offered by the Instituto Nacional de Estadística (Spanish Statistics Institute) [16] and by the Instituto de Estadística de Castilla-La Mancha (Castilla-La Mancha Statistics Institute) are used for the research. The factors that describe the "Environmental pollution–Human health" system are used as indicators of human health and as influencing environmental factors that can cause negative effects upon those health indicators. The factors used in the experiment are presented in Table 8. Morbidity, classified by sex and age, is accepted as the indicator to evaluate human health. Table 9 provides a list of diseases examined in this case study. The diseases included in the research are chosen in accordance with the International Statistical Classification of Diseases and Related Health Problems [14]. The sex groups include "males", "females" and "total"; and the age groups consist of "all the ages", "under 1 year", "1–4 years", "5–14 years", "15–24 years", "25–34 years", "35–44 years", "45–54 years", "55–64 years", "65–74 years", "75–84 years" and "85 years and over".

4. Results of the experiment

4.1. Data retrieval

The experimental data described above comprises 148 data files that contain information of interest.



Table 8
Pollutants studied in the research.

# | Pollutant class | Factors
1 | Transport | Number of lorries, buses, autos, tractors, motorcycles, others
2 | Usage of petroleum products | Petroleum liquid gases; petroleum autos; petroleum; kerosene; gasohol; fuel-oil
3 | Water characteristics | Chemical oxygen demand (COD); biochemical oxygen demand (BOD5); solids in suspension; nitrites
4 | Wastes | Non-dangerous chemical wastes; other non-dangerous chemical wastes; non-dangerous metal wastes; wastes from paper industry; dangerous wastes of glass; dangerous wastes of rubber; dangerous solid wastes; dangerous vitrified wastes; wastes from used equipment; metallic and phosphorus wastes
5 | Principal miner products | Hull; mercury; kaolin; salt; thenardite; diatomite; gypsum; rock; others

These files are used to extract the information. After extraction, data is placed in data arrays in accordance with the ontology. The fusing agents, which are supervised by the Data Aggregation agent, work with CSV, XLS and DOC files. The algorithm of data extraction for these types of files is shown in Fig. 2. The Domain Ontology agent reads the OWL file that contains the ontology of the system. Firstly, the agent creates the hierarchy of classes:

Class :Pollution
Class :Water pollution
Class :Solar radiation
Class :Transport
Class :Dangerous wastes
Class :Urban waste products
Class :Industrial waste products
Class :Industry
Class :Minery products
Class :individuals pollution
Class :Morbidity
Class :Exogeneous
Class :Endogeneous
Class :Ontology of Environment
Class :Data
Class :Region
Class :Pollution is a super-class of Class :Water pollution
Class :Pollution is a super-class of Class :Transport
Class :Pollution is a super-class of Class :Dangerous wastes
Class :Pollution is a super-class of Class :Industry
Class :Urban waste products is a sub-class of Class :Dangerous wastes

The part of the output given above shows the names of the classes from the OWL file. Next, the hierarchical links between the classes, shown with the "sub-class" and "super-class" properties, are also retrieved. For example, the line "Class :Pollution is a super-class of Class :Water pollution" shows one such relation. The restrictions and properties of each class, generated by the Domain Ontology agent, are shown next:

Class :Ontology of Interactions is a sub-class of owl:Thing
  is a sub-class of Restriction with ID a-5 on property :has initiator some values from Class :Ontology of Agents
  is a sub-class of Restriction with ID a-6 on property :has receiver some values from Class :Ontology of Agents
Class :Ontology of Agents is a sub-class of owl:Thing
  is a sub-class of Restriction on property :has believes some values from Class :Data
  is a sub-class of Restriction with ID a-8 on property :has desires some values from Class :Methods
  is a sub-class of Restriction with ID a-9 on property :has intentions some values from Class :Methods
Class :Ontology of TASKS is a sub-class of owl:Thing
  is a sub-class of Restriction on property :has method some values from Class :Methods
  is a super-class of Class :Methods

The ontology data is converted into agents’ beliefs, which are used for further data retrieval.
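The paper does not state which OWL parser the Domain Ontology agent uses; purely as an illustrative equivalent, the rdflib Python library can reproduce the "sub-class of" listing above (the file name ontology.owl is a placeholder).

from rdflib import Graph
from rdflib.namespace import RDFS

g = Graph()
g.parse("ontology.owl", format="xml")   # placeholder OWL file

# Print every explicit class/super-class pair, mirroring the
# "Class :X is a sub-class of Class :Y" lines shown above.
for child, parent in g.subject_objects(RDFS.subClassOf):
    print(child, "is a sub-class of", parent)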

4.2. Information fusion and preprocessing

4.2.1. Detection and elimination of artifacts

Data is checked for the presence of missing values and outliers, which can be caused by registration errors or misprints. First, the data sets are checked for the presence of missing values.

Table 9
Diseases studied in the research.

# | Disease class | Factors
1 | Endogenous diseases | Certain conditions originating in the perinatal period; congenital malformations, deformations and chromosomal abnormalities
2 | Exogenous diseases | Certain infectious and parasitic diseases; neoplasm; diseases of the blood and blood-forming organs and certain disorders involving the immune mechanism; endocrine, nutritional and metabolic diseases; mental and behavioral disorders; diseases of the nervous system; diseases of the eye and adnexa; diseases of the ear and mastoid process; diseases of the circulatory system; diseases of the respiratory system; diseases of the digestive system; diseases of the skin and subcutaneous tissue; diseases of the musculoskeletal system and connective tissue; diseases of the genitourinary system; pregnancy, childbirth and the puerperium; symptoms, signs and abnormal clinical and laboratory findings, not elsewhere classified; external causes of morbidity and mortality



Table 10
Outcomes of missing values and outliers detection.

Factor | Description | Missing values (%) | Outliers (%)
X0 | Petroleum liquid gases | 38 | 0
X1 | Petroleum for general use | 78 | 0
X2 | Petroleum for other purposes | 38 | 0
X3 | Petroleum for cars | 40 | 0
X4 | Gasohol | 87 | 0
X5 | Kerosene | 87 | 0
X6 | Production of asphalts | 60 | 0
X7 | Fuel oil | 0 | 0
X8 | Biochemical demand of oxygen in water | 0 | 0
X9 | Chemical demand of oxygen in water | 0 | 0
X10 | Nitrites in water | 30 | 0
X11 | Concentration of solids in suspension | 0 | 0
... | ... | ... | ...
X55 | Dangerous metallic and phosphorus wastes | 37 | 0
X57 | Dangerous wastes of paper, glass and rubber | 18 | 0
X59 | Dangerous solid and vitrified wastes | 25 | 0
X61 | Other non-dangerous chemical wastes | 25 | 0
X62 | Number of lorries | 25 | 0
X63 | Number of buses | 25 | 0
X64 | Number of cars | 25 | 0
X65 | Number of motorcycles | 25 | 0
Y0 | Total number of diseases | 12 | 0
Y1 | Certain infectious and parasitic diseases | 12 | 0
Y2 | Neoplasm | 0 | 6
Y3 | Diseases of the blood and blood-forming organs and certain disorders involving the immune mechanism | 0 | 6
Y4 | Endocrine, nutritional and metabolic diseases | 8 | 0
Y5 | Mental and behavioral disorders | 0 | 0
... | ... | ... | ...

It is discovered that 13 out of 65 factors have more than 50% of gaps, as these factors were not registered until several years ago. For example, the data for the pollutants "Solids in suspension" and "Nitrites" are only available since 1996, and data for some types of wastes, such as "Dangerous wastes of paper and carton" and "Dangerous chemical wastes", have no records from 1989 to 1998. As a result, the number of valid pollutants for further processing decreases from 65 to 52, and the excluded data sets are removed from the analysis. Next, the pollution indicators are checked for the presence of outliers. The results are provided in Table 10. The human health indicators appear to be more homogeneous, and there are more data sources containing information of interest. These data do not contain missing values.

4.2.2. Filling of missing values

Data sets are checked for the presence of artifacts. Artifacts are identified with a method of outlier detection based on the interquartile range (IQR), a measure of variability calculated as IQR = Q3 − Q1 that represents the spread of the middle 50% of the data. A value is an artifact if:

• it is lower than Q1 − 1.5·IQR, or
• it is higher than Q3 + 1.5·IQR,

where Q1 is the 25th percentile and Q3 is the 75th percentile. If a value is an artifact, it is eliminated from the data set. As a result, data sets contain missing values after artifact detection. The presence of missing values skews the data and may lead to incorrect or unreliable conclusions. In the current study, some data sets have many gaps. In order to fill missing values, the "golden ratio" method is used: a gap is filled with 0.62 of the previous value plus 0.38 of the next value. The bar chart provided in Fig. 3 visualizes some data sets before (in red) and after (in blue) the filling gaps procedure.
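A minimal Python sketch of the two procedures just described — IQR-based artifact elimination and "golden ratio" gap filling — assuming a one-dimensional NumPy series (the function names are illustrative):

import numpy as np

def remove_artifacts(x):
    # A value outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR] is an artifact
    # and is eliminated (marked as missing).
    q1, q3 = np.nanpercentile(x, [25, 75])
    iqr = q3 - q1
    x = np.asarray(x, dtype=float).copy()
    x[(x < q1 - 1.5 * iqr) | (x > q3 + 1.5 * iqr)] = np.nan
    return x

def fill_gaps_golden(x):
    # "Golden ratio" rule from the text: 0.62 of the previous value
    # plus 0.38 of the next value.
    x = x.copy()
    for i in range(1, len(x) - 1):
        if np.isnan(x[i]) and not np.isnan(x[i - 1]) and not np.isnan(x[i + 1]):
            x[i] = 0.62 * x[i - 1] + 0.38 * x[i + 1]
    return x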

Fig. 3. Bar chart exemplifying the filling gaps procedure: data before (in red) and after (in blue) filling the gaps. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of the article.)



4.2.3. Smoothing the results

The reason to apply smoothing is to homogenize the data after the management of missing values. Exponential smoothing with a coefficient α equal to 0.15 is used, as this value of α provides a "light" smoothing.
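For reference, a short sketch of this step, assuming simple exponential smoothing of a NumPy series with the reported coefficient:

import numpy as np

def exponential_smoothing(x, alpha=0.15):
    # s_t = alpha * x_t + (1 - alpha) * s_(t-1); alpha = 0.15 gives
    # the "light" smoothing used in the case study.
    s = np.empty(len(x))
    s[0] = x[0]
    for t in range(1, len(x)):
        s[t] = alpha * x[t] + (1 - alpha) * s[t - 1]
    return s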

4.2.4. Normalizing the results

Data is normalized using two methods: Z-score standardization and min–max normalization. The example in Fig. 4 shows the results of the normalization. The column "Normalized X" contains values normalized within the interval [0, 1]. The extreme values of the real data are shown below it as the minimum and the maximum values of the data set.
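Both methods are standard; a one-line sketch of each (the helper names are illustrative):

import numpy as np

def min_max(x):
    # Maps a series into [0, 1], as in the "Normalized X" column of Fig. 4.
    return (x - np.min(x)) / (np.max(x) - np.min(x))

def z_score(x):
    # Z-score standardization: zero mean, unit standard deviation.
    return (x - np.mean(x)) / np.std(x)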

4.2.5. The results of the correlation analysis

Unfortunately, the data sets fused from the data sources are short and only contain 32 values. Owing to this fact, non-parametric correlation coefficients are calculated using Kendall's τ statistic. For the given data sets the critical value of Kendall's τ is 0.6226. The correlation analysis has shown the following results. In the same data pool, the variables correlating significantly with morbidity from "Neoplasm" for the age group "under 1 year" are "Water characteristics" and "Wastes from used equipment"; for the age group "more than 85 years", apart from the same factors, "Wastes: non-dangerous and dangerous chemical waste" (Kendall's τ = −0.726 and −0.983); for the age group "more than 1 and less than 4 years", significant correlation is found with "Usage of petroleum products: gases" (Kendall's τ = −0.650) and with "Wastes" (Kendall's τ = −0.850 and −1.0). In relation to endogenous diseases, the outcomes for "Certain conditions originating in the perinatal period" demonstrate relations with "Water characteristics" (Kendall's τ = −0.650), "Principal miner products" (Kendall's τ = −0.750), "Non-dangerous waste from chemical substances" (Kendall's τ = −0.733), and "Metallic and phosphorus wastes" (Kendall's τ = −0.750). The data from the class "Congenital malformations, deformations and chromosomal abnormalities" correlates with "Principal miner products" (Kendall's τ = −1.0), "Dangerous wastes of paper, glass and rubber" (Kendall's τ = −1.0), and with "Dangerous solid and vitrified wastes" (Kendall's τ = −0.767). The closer the Kendall's τ value is to 1, the stronger the agreement between the two rankings of the analyzed variables; the closer it is to −1, the stronger the disagreement. Since the absolute values of these Kendall's τ coefficients exceed the critical value 0.6226, the calculated non-parametric correlations prove the existence of statistical associations between the above-mentioned variables.
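A sketch of the non-parametric test using SciPy; the two series below are synthetic stand-ins for a pollutant and a morbidity indicator of length 32, and the critical value 0.6226 is taken from the text:

import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(0)
pollutant = rng.random(32)                          # stand-in X series
morbidity = 0.8 * pollutant + 0.2 * rng.random(32)  # stand-in Y series

tau, p_value = kendalltau(pollutant, morbidity)
# Significant if |tau| exceeds the critical value for this series length.
print(tau, abs(tau) > 0.6226)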

4.2.6. Decomposition results

The decomposition of the studied complex system, "Environmental pollution–Human health", is carried out by means of correlation analysis. As correlation between variables can impede the correct execution of data mining procedures and lead to false results, a set of non-correlated independent variables X is created for each dependent variable Y. The independent variables (pollutants) that show insignificant correlation with the dependent variable (disease) are also included in the set. The mutual correlation between the variables of a model is also studied: variables with a mutual correlation coefficient greater than 0.7 are marked for exclusion from the model. This procedure is applied for regressions, artificial neural networks, and so on. The results are offered in Table 11.
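A greedy sketch of this mutual-correlation screening (the 0.7 threshold is from the text; the selection order used here is an illustrative simplification, not the paper's exact algorithm):

import numpy as np

def select_predictors(X, threshold=0.7):
    # Keep a column only if its absolute correlation with every
    # already-kept column does not exceed the threshold.
    kept = []
    for j in range(X.shape[1]):
        if all(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) <= threshold
               for k in kept):
            kept.append(j)
    return kept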

During the testing, an interface window is created, which contains the final results of data cleaning. Fig. 4 shows the changes in data after the detection of outliers, the filling of gaps, and normalization (above). The bar chart shows the data set before and after filling the gaps.

In conclusion, knowledge about the domain of interest is successfully retrieved from the OWL file. Next, data is successfully retrieved from various containers and, as a result, the agents' beliefs are filled with data. With regard to data quality and initial preprocessing, several data sets related to pollutants appear to have many missing values and outliers (between 40 and 87%). Therefore, the factors that correspond to those data sets are eliminated from the study. Elimination of these factors, which are components of the "Environmental pollution–Human health" system, does not significantly affect the description of the system: the eliminated factors are represented by data so scarce that it cannot be interpolated, and, hence, their presence in the study could only impede correct data treatment. Thus, the data sets that contain fewer gaps are preprocessed. First, the outliers are eliminated. Second, missing values are filled by using the "golden ratio" method. Third, data sets are smoothed with exponential smoothing and, finally, normalized. As a result of these procedures, the initial information is transformed and prepared for knowledge discovery.

4.3. Knowledge discovery results

The ADSS recovers data from plain files which contain the information about the factors of interest and pollutants. The files are fused in agreement with the ontology of the problem area. Some necessary changes of data properties (scalability, etc.) and their preprocessing are assumed. The ADSS has a wide range of methods and tools for modeling, including regression, neural networks, GMDH, and hybrid models. The Function Approximation agent selects the best models, which include simple regression (43 models), multiple regression (24 models), neural networks (4098 models) and GMDH (1409 models). The selected models are included into the committee machines. Next, the values for diseases and pollutants are extrapolated for a period of 10 years with a six-month step. This extrapolation allows visualizing the dynamics of the factors and detecting whether their values overcome the critical levels. Control over the "significant" factors that impact health indicators could decrease some types of diseases. As a result, the MAS provides all the necessary steps for the standard decision making procedure by using intelligent agents. The levels of the system architecture, logically and functionally connected, have been presented. Real-time interaction with the user provides a range of possibilities in choosing one course of action from several alternatives, which are generated by the system through guided data mining and computer simulation. The system is designed for regular usage to achieve adequate and effective management by responsible municipal and state government authorities. Also, both traditional data mining techniques and other hybrid and specific methods, chosen with respect to the nature of the data (incomplete data, short data sets, etc.), were used. The combination of different tools enabled us to improve the quality and precision of the obtained models and, hence, of the recommendations based on them. The retrieved dependencies and associations between the factors and the dependent variables help to refine recommendations and avoid errors.

4.3.1. Regression models

For every class of disease, plotting the morbidity value against one or several pollutants, simple and multiple, linear and non-linear regressions are performed. As a result, regression models of least-squares, power, exponential and hyperbolic types are created. Each model is evaluated with the Fisher F-value, and the models that do not satisfy the F-test are eliminated from the list of accepted models. The critical F-value for (m − 1) = 2 − 1 = 1 and 2(n − 1) = 2(16 − 1) = 30 degrees of freedom is 4.35. Generally, the number of accepted regression models is low. Indeed, the predictability of the best performing univariate regression models ranges from 0.48 to 0.82 for the discrimination coefficient.



Fig. 4. Outcomes of data clearing: elimination of outliers, filling gaps and normalization, where X1 is the data before and X2 is the data after filling the gaps.

Table 11
Decomposition results.

# | Dependent variable | Independent variables
1 | Y1 | X27, X35, X39, X40, X42, X54, X59, X60, X61, X62, X63, X64
2 | Y7 | X28, X29, X30, X32, X33, X36, X37, X40, X42, X48, X54, X55, X57, X60, X61, X62, X63, X64
3 | Y20 | X21, X24, X26, X27, X28, X29, X30, X31, X33, X35, X37, X38, X39, X40, X42, X44, X54, X55, X50, X60, X61, X62
4 | Y35 | X8, X9, X12, X60, X61, X62, X63, X64
5 | Y96 | X59, X60, X61, X62, X63, X64
6 | Y100 | X26, X27, X28, X29, X30, X31, X33, X35, X37, X38, X39, X40, X42, X49, X54, X55, X59, X60, X61, X62
7 | Y181 | X6, X61, X64

Figs. 5 and 6 show examples of regression models and their approximation to real data. The model shown in Fig. 5 is a univariate regression model that constructs the function Y0 = f(X1) and is equal to Y0 = 6.42X1 − 0.068. The red line represents the initial data, and the blue line represents the data approximated with the model. A visual analysis shows that the model does not fit the data set well. The same conclusion is drawn by analyzing its statistical criteria: the correlation coefficient R = 0.48, the determination coefficient D = 0.23 and the F-criterion F = 4.4. The regression model shown in Fig. 6, which models the dependency Y14 = f(X44) and has the form Y14 = 4.43X44 − 0.144, shows better results in fitting the initial line, as well as better statistical criteria: the correlation coefficient R = 0.82, the determination coefficient D = 0.68 and the F-criterion F = 30.09. In general, univariate regression models for the current case study are characterized by low values of the statistical indicators and cannot be used for modeling. Nonetheless, multiple regression models show better performance. For example, the multiple regression model for Y15 is given in Fig. 7. The model is written as Y15 = 0.022X14 + 0.001X4 + 0.012, and its statistical criteria are: the correlation coefficient R = 0.77, the determination coefficient D = 0.59 and the F-criterion F = 20.69. This means that the explanatory variables, X4 and X14, explain the dependent variable Y15 in 59% of cases. In other words, in 59% of cases the model would give a correct result, and in 41% of cases the model would give an incorrect result.
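The reported criteria R, D and F for a univariate model can be reproduced with a short least-squares sketch (the acceptance rule uses the critical F-value 4.35 quoted above; the helper name is illustrative):

import numpy as np

def evaluate_univariate(x, y):
    a, b = np.polyfit(x, y, 1)          # least-squares line y = a*x + b
    y_hat = a * x + b
    # D is the determination coefficient, R its square root.
    d = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - np.mean(y)) ** 2)
    n, k = len(y), 1
    f = (d / k) / ((1.0 - d) / (n - k - 1))
    return np.sqrt(max(d, 0.0)), d, f   # R, D, F

# A model is accepted only if its F-value exceeds the critical 4.35.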

Fig. 5. Univariate linear regression to model Y0 = f(X1 ).

Fig. 6. Univariate regression to model Y14 = f(X44 ).

4.3.2. Neural network models

Neural network-based models, calculated for the experimental data sets, have shown high performance. The Encog library [15] is used to create and to train the neural network models.



Fig. 7. Multiple regression to model Y15 = f(X4 , X14 ).

Fig. 10. Neural network trained with backpropagation algorithm, learning rate 0.9, momentum 0.3, number of epochs 5500.

Fig. 8. An example of BP model for Y35 .

The Encog library contains classes which allow creating a wide variety of networks (feedforward neural networks, Hopfield neural networks, Radial Basis Function networks, etc.) and training them with various algorithms (backpropagation, resilient propagation, genetic algorithms, etc.). The library also includes support classes to normalize and process data for the neural networks. Networks trained with resilient propagation and with backpropagation have similar architectures, and the training and testing procedures are equivalent. Before modeling, some preliminary experiments are carried out: the training parameters are varied and the outputs of the neural network models are evaluated. These experiments help to determine the optimal values of the parameters. The best results are obtained from networks with a limited number of hidden layers and neurons. In fact, a neural network with one hidden layer appears to be the optimal architecture for working with short data sets, which is the case in our experiments. For feedforward networks trained with the backpropagation algorithm, the values of the learning rate and the momentum are varied within the interval [0, 0.99]. The best results are obtained for a learning rate within the interval [0.85, 0.99] and a momentum within the range [0.3, 0.4]. An example is shown in Fig. 8. Feedforward neural networks trained with the resilient propagation algorithm show a high performance with the zero tolerance equal to 10⁻¹⁵, the initial update value within the range [0.05, 0.15] and the maximum step equal to 50. Cross-validation is used to evaluate the performance of the neural networks: the first sample is used to fit a model and the second one is used to estimate the expected discrepancy of the model. With respect to the selection of the number of observations in the validation data set, there is a practical recommendation of 75% and 25% for the training and validation data sets, respectively [8].
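The system itself uses Encog, a Java library; purely as an illustration of the training setup described here — a one-hidden-layer sigmoid network trained by backpropagation with the reported learning rate and momentum — a from-scratch Python sketch follows:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_bp(X, y, hidden=5, lr=0.9, momentum=0.3, epochs=5500, seed=0):
    # One-hidden-layer feedforward network trained with plain
    # backpropagation; lr/momentum/epochs follow the reported ranges.
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, 1));          b2 = np.zeros(1)
    vW1 = np.zeros_like(W1); vW2 = np.zeros_like(W2)
    y = y.reshape(-1, 1)
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)                  # forward pass
        out = sigmoid(h @ W2 + b2)
        g2 = (out - y) * out * (1.0 - out)        # output-layer delta
        g1 = (g2 @ W2.T) * h * (1.0 - h)          # hidden-layer delta
        vW2 = momentum * vW2 - lr * (h.T @ g2) / len(X)
        vW1 = momentum * vW1 - lr * (X.T @ g1) / len(X)
        W2 += vW2; b2 -= lr * g2.mean(axis=0)     # momentum-smoothed update
        W1 += vW1; b1 -= lr * g1.mean(axis=0)
    return W1, b1, W2, b2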

Fig. 9. Neural network trained with backpropagation algorithm, learning rate 0.95, momentum 0.4, number of epochs 5500.

Fig. 11. Neural network trained with resilient propagation algorithm, zero tolerance 10−15 , initial update 0.1, and maximum step 50.

Figs. 9–12 show the training error functions for some neural network models for the variable Y0. The models are neural networks trained with the backpropagation algorithm and neural networks trained with the resilient propagation algorithm. The learning parameters are provided in the legends, and the activation function is the sigmoid function. Two stopping criteria are used: the training error should not exceed 0.1 and the number of epochs should not be less than 5500. Neural networks are also trained with genetic algorithms (GANN), with the following training parameters:

• population size: the size of the population used for training,
• mutation percent: the percent of the population to which the mutation operator is applied,
• percent to mate: the part of the population to which the crossover operator is applied.

A sketch of this training scheme is given below. Weight optimization occurs after several populations have been created, so the training error curves have a step-like form. The error curves help to set optimal parameters for GANN training; the determination coefficient is taken as the fitness function. Fig. 13 shows how the form of the training error curve changes depending on the values of the parameters mentioned above. The analysis of the evaluation criteria shows that GANN provides better results for the given type of data series. The variation of the number of iterations (900 and 1500 populations created, see Fig. 13(a) and (b)) influences the value of the training error (≈0.30 versus ≈0.25). Various combinations of the mutation percent and the percent to mate are also tested. Charts (c) and (d) in Fig. 13 show that the combination of a high mutation percent (0.7) and a high percent to mate (0.7) does not significantly affect the training error.
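The following sketch illustrates the GANN training scheme under the parameters listed above: a genetic algorithm evolves the flattened weight vector of a fixed-topology network, with the determination coefficient D as the fitness function. It is a hypothetical illustration, not the Encog-based implementation; the data, network size and operator details are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(size=(40, 3))                        # synthetic input factors
y = np.tanh(X @ np.array([0.5, -0.3, 0.8]))          # synthetic target

N_IN, N_HIDDEN = 3, 3
DIM = N_IN * N_HIDDEN + N_HIDDEN                     # flattened weight vector size

def predict(w, X):
    """Decode a flat weight vector into a 3-3-1 network and evaluate it."""
    W1 = w[:N_IN * N_HIDDEN].reshape(N_IN, N_HIDDEN)
    W2 = w[N_IN * N_HIDDEN:]
    h = 1.0 / (1.0 + np.exp(-(X @ W1)))              # sigmoid hidden layer
    return h @ W2

def fitness(w):
    """Determination coefficient D of the decoded network (to be maximized)."""
    res = y - predict(w, X)
    return 1.0 - np.sum(res ** 2) / np.sum((y - y.mean()) ** 2)

pop_size, mutation_percent, percent_to_mate = 100, 0.3, 0.2
pop = rng.normal(0.0, 1.0, (pop_size, DIM))
for generation in range(300):
    order = np.argsort([fitness(w) for w in pop])[::-1]
    pop = pop[order]                                 # best individuals first
    n_mate = max(2, int(percent_to_mate * pop_size))
    for i in range(n_mate, pop_size):                # offspring replace the worst
        p1, p2 = pop[rng.integers(n_mate)], pop[rng.integers(n_mate)]
        cut = rng.integers(1, DIM)                   # one-point crossover
        pop[i, :cut], pop[i, cut:] = p1[:cut], p2[cut:]
    n_mut = int(mutation_percent * pop_size)
    for i in rng.integers(1, pop_size, n_mut):       # index 0 keeps the elite
        pop[i] += rng.normal(0.0, 0.1, DIM)
print(f"best D after training: {fitness(pop[0]):.3f}")
```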

Fig. 12. Neural network trained with resilient propagation algorithm, zero tolerance 10−15 , initial update 0.1, and maximum step 50.



Fig. 14. The GMDH model for Y23 .


Fig. 13. Error functions for neural network training with genetic algorithms under different parameters. For each chart, the training parameters (population size, mutation percent, percent to mate) are: (a) 900, 0.1, 0.3; (b) 1500, 0.1, 0.3; (c) 1500, 0.7, 0.7; (d) 1500, 0.3, 0.2.

However, the combination of a lower mutation percent (0.3) and percent to mate (0.2) offers better performance indicators for the models: the training error is reduced to ≈0.105.

4.3.3. Models received with the group method of data handling

An important feature of the iterative GMDH algorithm is its ability to identify both linear and non-linear polynomial models using the same approach. The results of the GMDH modeling and the best approximation models obtained are provided in Table 12. The best results are obtained by models number 3 and 1. In general, GMDH-based models obtain high performance and efficiency when working with short data sets. The models are obtained with a combinatorial algorithm, where combinations of the following polynomial terms are used: X, X², X1X2, X1X2², X1²X2, X1²X2², 1/X, 1/(X1X2). The choice of these polynomial terms follows well-known recommendations [21]. More GMDH algorithms and polynomial terms of higher order will be used in future enhancements of the system. The selection of models stops when the regularity criterion ceases to improve; a sketch of this combinatorial search is given below. The GMDH model provided in Fig. 14 is created for the variable Y23 (Disease: Mental and behavioural disorders; age group: under 1 year) and has the form Y23 = 4.153X42² + 1.156X2² − 2.014. The statistical features of this model are the correlation coefficient R = 0.95, the determination coefficient D = 0.91 and the F-criterion F = 50.825. This model fits the experimental data well and has high values of the statistical parameters. The model Y14 = 5.292X64² + 0.161X60² − 0.813 for the variable Y14 (Disease: Certain conditions originating in the prenatal period; age group: all ages), which is shown in Fig. 15, obtains very high statistical performance results (see Table 12). The visual analysis shows that this model fits the initial data almost perfectly. Models with characteristics similar to the one shown in Fig. 15 are candidates for inclusion into the hybrid committee for the given variable.
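The following sketch illustrates one combinatorial GMDH step as described above: for every pair of inputs, a model built from the listed polynomial terms is fitted by least squares and ranked with an external (regularity) criterion computed on held-out data. It is an illustrative simplification on synthetic data; the multi-layer iteration and the full stopping rule are omitted, and all names are hypothetical.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
X = rng.uniform(0.5, 1.5, size=(30, 5))             # 5 candidate input factors
y = 5.3 * X[:, 4] ** 2 + 0.16 * X[:, 3] ** 2 - 0.8  # synthetic target

def terms(x1, x2):
    """Candidate polynomial terms listed in the text, for a pair of inputs."""
    return np.column_stack([
        x1, x2, x1 ** 2, x2 ** 2,
        x1 * x2, x1 * x2 ** 2, x1 ** 2 * x2, (x1 * x2) ** 2,
        1.0 / x1, 1.0 / x2, 1.0 / (x1 * x2),
        np.ones_like(x1),                            # intercept
    ])

split = int(0.75 * len(X))                           # fit on part A, judge on part B
best = None
for i, j in combinations(range(X.shape[1]), 2):
    A = terms(X[:, i], X[:, j])
    coef, *_ = np.linalg.lstsq(A[:split], y[:split], rcond=None)
    crit = np.mean((A[split:] @ coef - y[split:]) ** 2)  # regularity criterion
    if best is None or crit < best[0]:
        best = (crit, (i, j))
crit, (i, j) = best
print(f"best pair: X{i}, X{j}; regularity criterion = {crit:.4f}")
```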

4.3.4. Model selection

After the modeling, several models for each variable of interest are created, and the best evaluated models are selected to be included into a committee machine. Table 13 shows the accepted models for the variable Y0 = "Total morbidity from respiratory diseases". The columns are R (correlation coefficient), D (determination coefficient) and F (Fisher criterion); RM stands for regression models, GMDH for group method of data handling models, and RPROP, BP and GANN for neural network models trained with resilient propagation, backpropagation and genetic algorithms, respectively. In accordance with Table 13, neural network models and GMDH models perform better than regression models. Compare the values of the determination coefficient D for the two age groups: 0.46 and 0.30 for the regression models, versus 0.91 and 0.81 for networks trained with the RPROP algorithm, 0.77 and 0.82 for BP networks, and 0.76 and 0.79 for GANN. In general, neural network models and models created with GMDH show better results for all the variables. Thus, 32 models are created for the variable Y0 = "Total morbidity from respiratory diseases", of which 28 are neural networks, 1 is a GMDH model and 3 are regression models; for the variable Y35 = "External causes of death" the number of models is 37, of which 25 are neural networks, 8 are multiple regression models, 1 is a GMDH model and 3 are simple regressions. With respect to the execution time of the different data mining methods, there is no doubt that the fastest are the regression methods. The time for generating GMDH models greatly depends on the data samples and on the order of the polynomials used for model creation. Neural network models, that is, BP and RPROP models, also train quickly. Finally, GANN-based models require the most execution time. Evidently, the simultaneous execution of these methods in concurrent mode significantly reduces the overall execution time, as sketched below.
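A hypothetical sketch of such concurrent execution is shown here: the different model-building methods run in parallel, so the elapsed time approaches that of the slowest method rather than the sum of all. The fit_* functions are placeholders standing in for the real fitting routines.

```python
from concurrent.futures import ProcessPoolExecutor
import time

def fit_regression(data):
    time.sleep(0.5)                      # stand-in for the real fitting work
    return ("RM", 0.46)

def fit_gmdh(data):
    time.sleep(1.0)
    return ("GMDH", 0.57)

def fit_gann(data):
    time.sleep(2.0)                      # GANN is the slowest method
    return ("GANN", 0.76)

if __name__ == "__main__":
    data = None                          # placeholder for the prepared data set
    start = time.perf_counter()
    with ProcessPoolExecutor() as pool:  # run all methods concurrently
        futures = [pool.submit(f, data)
                   for f in (fit_regression, fit_gmdh, fit_gann)]
        results = [f.result() for f in futures]
    elapsed = time.perf_counter() - start
    print(results, f"elapsed ~ {elapsed:.1f}s")  # ~ max of the times, not the sum
```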

Fig. 15. The GMDH model for Y14 .



Table 12
GMDH models: several examples (best results: models 1 and 3).

#  Model                                                   R     D
1  Y23 = 4.153X42² + 1.156X2² − 2.014                      0.95  0.92
2  Y14 = 5.44X31 + 0.171X34 + 0.26X61 − 0.727              0.67  0.44
3  Y14 = 5.292X64² + 0.161X60² − 0.813                     0.98  0.98
4  Y7 = 3.704X28 + 0.301X36 + 0.29X38 − 1.197              0.54  0.29
5  Y9 = 4.67X28 − 0.051X30 − 0.222X31 − 0.54X36 − 0.259    0.77  0.60
6  Y24 = 0.01X1X0 − 69.01                                  0.54  0.30

Table 13
Models for disease class Y0 = "Total morbidity from respiratory diseases" and for age groups "All ages" and "Less than 1 year".

Age group "All ages"
Model type  Model                                     R     D     F
RM          Y0 = 4.12X5 − 0.81                        0.68  0.46  12.2
GMDH        Y0 = 4.18X5 − 0.019X2 − 0.051X6 − 0.025   0.76  0.57  5.30
RPROP       Y0 = f(X1, X2, X4, X7)                    0.95  0.91  27.9
BP          Y0 = f(X3, X5, X6)                        0.88  0.77  13.73
GANN        Y0 = f(X0, X2, X6)                        0.87  0.76  12.45

Age group "Less than 1 year"
Model type  Model                                                      R     D     F
RM          Y0 = 4.11X5 − 0.52                                         0.55  0.30  5.98
GMDH        Y0 = −0.11X0 − 0.08X6² − 0.05X5² + 6.67/X4 + 7.78X5 − 1.73 0.85  0.72  5.41
RPROP       Y0 = f(X0, X1, X2, X3)                                     0.90  0.81  11.72
BP          Y0 = f(X4, X5, X6)                                         0.91  0.82  18.22
GANN        Y0 = f(X1, X3, X4)                                         0.89  0.79  15.05

4.3.5. Committee machines

A committee machine is tested as the final model for every variable. As an example of a committee machine, the modeling outputs for the variable of interest Y35 = "Disease: External causes of death. Age group: all ages" are discussed. First, after the decomposition has reduced the number of variables (pollutants) that could be included, the models of interest for Y35 include the following factors: X8, X9, X12, X60, X61, X62, X63, X64. Several models that include these factors are created for the variable Y35 and then evaluated. The models with the highest values of the correlation coefficient R and the determination coefficient D are selected. The best models obtained are:

1. Multiple regression model Y35 = f1(X9, X61).
2. Neural network trained with the backpropagation algorithm, Y35 = f2(X8, X63, X9) (see Fig. 16).
3. Neural network trained with the RPROP algorithm, Y35 = f3(X60, X62, X12) (see Fig. 16).
4. Neural network trained with genetic algorithms, Y35 = f4(X64, X12).


The final model generated by the committee machine is:

Y35 = [f1(X9, X61)·Rf1 + f2(X8, X63, X9)·Rf2 + f3(X60, X62, X12)·Rf3 + f4(X64, X12)·Rf4] / (Rf1 + Rf2 + Rf3 + Rf4)    (1)

where fi is a model included into the committee machine and Rfi is the correlation coefficient of the i-th model, i ∈ {1, . . ., n}, with n the number of models. Fig. 16 provides a graphical representation of the models. The factual information covers 28 years, given with a six-month step; it starts at mark "0" and finishes at mark "27.5". The forecast is made for 10 years and includes the marks from "28" to "37.5". To perform the forecast, autoregressive neural network models are calculated for all the factors in the formula of the committee machine (see Eq. (1)). An autoregressive model predicts an output y(n) of a system based on the previous outputs, y(n − 1), y(n − 2), . . ., and inputs, x(n), x(n − 1), x(n − 2), . . .. For the current case, each autoregressive model is calculated as x(t) = f(x(t − 1), x(t − 2), . . ., x(t − 4)), where t represents time and takes the values 1, 2, . . ., n; n is the length of the data set and x(t) is the value of the factor at step t. Each autoregressive neural network model is of the feedforward type and is trained with the RPROP algorithm; its structure includes an input layer with five input neurons, a hidden layer with three or four neurons, and an output layer with one neuron. When the predictions for the factors of the committee machine formula are obtained, they are used to calculate the forecast for Y35. In accordance with the forecast, the morbidity from external causes has a tendency to decrease (Fig. 17). Over the prediction period, all the models produce similar forecasts that are not strongly dispersed; this similarity in the predictions of the different models supports the predicted tendency. The outputs obtained by the committee machine are plotted in blue in Fig. 17 and, in accordance with Eq. (1), the response is a composition of the best models.
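A minimal sketch of this scheme is given below: the committee response is the correlation-weighted average of the individual model outputs, as in Eq. (1), and a small helper shows the lag-window construction used to train the autoregressive networks. The model outputs and correlation coefficients here are hypothetical placeholders, not the values obtained in the case study.

```python
import numpy as np

def committee(predictions, R):
    """Eq. (1): correlation-weighted composition of the model outputs."""
    predictions, R = np.asarray(predictions), np.asarray(R)
    return (R[:, None] * predictions).sum(axis=0) / R.sum()

def lag_windows(series, n_lags=4):
    """Rows (x(t-4), ..., x(t-1)) with target x(t), for AR network training."""
    X = np.column_stack([series[k:len(series) - n_lags + k]
                         for k in range(n_lags)])
    return X, series[n_lags:]

# hypothetical outputs of the four accepted models f1..f4 over three forecast steps
preds = [
    [0.42, 0.41, 0.40],   # f1: multiple regression
    [0.45, 0.43, 0.41],   # f2: BP-trained network
    [0.40, 0.39, 0.38],   # f3: RPROP-trained network
    [0.44, 0.42, 0.40],   # f4: GANN
]
R = [0.71, 0.88, 0.84, 0.80]              # illustrative correlation coefficients
print("committee forecast:", committee(preds, R))

x_factor = np.sin(np.linspace(0.0, 6.0, 56))  # a stand-in pollutant series
X_ar, y_ar = lag_windows(x_factor)            # training pairs for one AR model
print("AR training pairs:", X_ar.shape, y_ar.shape)
```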

Fig. 16. Accepted models for the variable Y35 “External causes of death”, age group “under 1 year”. Approximation of real data by BP-trained (above) and RPROP-trained (below) neural networks.



Fig. 17. Models for variable Y35 and prognosis for the determined period. Dependent variables are X8, X9, X12, X60, X61, X62, X63 and X64. The data received by the committee machine is in blue, the data received by the neural network trained with the resilient propagation algorithm is in red, the data received by the neural network trained with the backpropagation algorithm is in green, the data received by the neural network trained with genetic algorithms is in yellow, and the data received by the multiple regression model is in magenta. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of the article.)

Table 14
Results of impact assessment for selected diseases (disease class: pollutants which influence the disease).

1. Neoplasm: Nitrites in water; miner products; BOD5; asphalts; dangerous chemical wastes; fuel-oil; petroleum liquid gases; water: solids in suspension; non-dangerous chemical wastes.
2. Diseases of the blood and blood-forming organs, the immune mechanism: BOD5; miner products; fuel-oil; nitrites in water; dangerous wastes of the paper industry; water: solids in suspension; dangerous metallic wastes.
3. Pregnancy, childbirth and the puerperium: Kerosene; petroleum; petroleum autos; petroleum liquid gases; gasohol; fuel-oil; asphalts; water: COD, BOD5, solids in suspension, nitrites.
4. Certain conditions originating in the prenatal period: Non-dangerous wastes: general, mineral, construction, textile, organic and metal wastes; dangerous oil wastes.
5. Congenital malformations, deformations and chromosomal abnormalities: Gasohol; fuel-oil; COD in water; producing asphalts; petroleum; petroleum autos; kerosene; petroleum liquid gases; water: BOD5, nitrites, solids in suspension.

4.3.6. Environmental impact assessment results

The impact assessment shows dependencies between water characteristics and neoplasm, complications of pregnancy, childbirth and the puerperium, and congenital malformations, deformations and chromosomal abnormalities. Table 14 presents the output of the impact assessment for several variables of interest (classes of diseases); it shows that, apart from water pollutants, the most important factors include indicators of petroleum usage, miner output products and some types of wastes. The results are compared to other studies. Thus, in [22] a causal effect of traffic pollution upon respiratory illness is reported; in our research we have come to similar conclusions, as we have detected links between such indirect indicators (lorries, motorcycles, cars, etc.) and diseases of the respiratory system. As reported in [2], water pollution may lead to cancer, and we have discovered similar relations through the impact assessment (see Table 14).

5. Conclusions

The proposed ADSS for the study of the "Environment–Public health" complex system includes all the necessary stages of the decision making process, leaving to the user the possibility to simulate possible outcomes and to take a weighted decision based on the previous analysis and expert knowledge.

The ADSS has a strictly organized structure which, nevertheless, can be modified and extended according to user requirements. The fundamental scenarios ("retrieve and fuse data", "preprocess data", "check environmental impact", "create models" and "create recommendation") provide the execution of all the data mining procedures carried out by the system and facilitate reaching its aims. In other words, the proposed system may be used as a "black-box" system by non-specialists in data processing, or as a "white-box" system by specialists who can modify it in agreement with their needs. Hierarchical agents, each with a team of subordinate agents, are used; this decision facilitates distributing the control over the whole system. Moreover, if some additional functionality has to be added to a role, this can often be achieved by adding a plan or a capability to the subordinate or principal agent. The implementation is performed within the JACK Development Environment. Some plug-ins are used to code the data fusion procedures, to read the ontology from the OWL files, and to generate the neural network models and the visualized results. Practical results and some theoretical outcomes have been described for an experiment for the region of Castilla-La Mancha (Spain). Retrospective information has been fused and preprocessed. The results of data cleaning have been provided and explained in detail with graphics and numerical results. Regression, neural network and GMDH models have been created, and then the best models for each variable of interest have been included into committee machines.



The example described in the article shows a committee machine for the "External causes of death" variable, together with the simulation results obtained. The impact assessment procedure has helped to discover hidden dependencies between health outcomes and environmental pollutants, which were represented by indirect indicators. The obtained results can help in better understanding the possible hazards for regional public health, and could be used to correct and improve the policies of public health institutions.

Acknowledgements

This work was partially supported by Spanish Ministerio de Ciencia e Innovación TIN2010-20845-C03-01 grant, and by Junta de Comunidades de Castilla-La Mancha PII2I09-0069-0994 and PEII09-0054-9581 grants.

References

[1] A. Aschengrau, J. Weinberg, P. Janulewicz, L. Gallagher, M. Winter, V. Vieira, T. Webster, D. Ozonoff, Prenatal exposure to tetrachloroethylene-contaminated drinking water and the risk of congenital anomalies: a retrospective cohort study, Environmental Health 8 (1) (2009) 814–830.
[2] I. Brüske-Hohlfeld, Environmental and occupational risk factors for lung cancer, Methods in Molecular Biology 472 (2008) 3–23.
[3] Q. Cao, M.E. Parry, Neural network earnings per share forecasting models: a comparison of backward propagation and the genetic algorithm, Decision Support Systems 47 (1) (2009) 32–41.
[4] F. Castro, A. Nebot, F. Mugica, On the extraction of decision support rules from fuzzy predictive models, Applied Soft Computing (2011).
[5] V.Y.C. Chen, H.P. Lien, C.H. Liu, J.J.H. Liou, G.H. Tzeng, L.S. Yang, Fuzzy MCDM approach for selecting the best environment-watershed plan, Applied Soft Computing 11 (1) (2011) 265–275.
[6] J.M. Corchado, J. Bajo, Y. de Paz, D.I. Tapia, Intelligent environment for monitoring Alzheimer patients, agent technology for health care, Decision Support Systems 44 (2) (2008) 382–396.
[7] O. Cordón, A. Fernández-Caballero, J.A. Gámez, F. Hoffmann, The impact of soft computing for the progress of artificial intelligence, Applied Soft Computing 11 (2) (2011) 1491–1492.
[8] P. Giudici, S. Figini, Applied Data Mining for Business and Industry, John Wiley & Sons, 2009.
[9] W. Fan, P. Pathak, M. Zhou, Genetic-based approaches in ranking function discovery and optimization in information retrieval—a framework, Decision Support Systems 47 (4) (2009) 398–407.
[10] E. Feigenbaum, P. McCurduck, The Fifth Generation, Pan Books, London, 1984.
[11] A.G. Greenwood, S. Vanguri, B. Eksioglu, P. Jain, T.W. Hill, J.W. Miller, C.T. Walden, Simulation optimization decision support system for ship panel shop operations, in: Proceedings of the 37th Winter Simulation Conference (WSC), vol. 1, 2005, pp. 2078–2086.
[12] V. Haley, T. Talbot, H. Felton, Surveillance of the short-term impact of fine particle air pollution on cardiovascular disease hospitalizations in New York State, Environmental Health 8 (1) (2009) 42–52.
[13] I. Holman, M. Rounsevell, S. Shackley, P. Harrison, R. Nicholls, P. Berry, E. Audsley, A regional, multi-sectoral and integrated assessment of the impacts of climate and socio-economic change in the UK, Climatic Change 71 (1) (2005) 9–41.
[14] International Classification of Diseases, 2008, http://www.who.int/classifications/icd/en/ [accessed on July 19, 2008].
[15] Encog home page, version from November 20, 2009, http://www.heatonresearch.com/encog [accessed on June 2009].

[16] Instituto de Estadistica de Castilla-La Mancha, 2008, http://www.ine.es/en/welcome_en.htm [accessed on July 19, 2008].
[17] R.J. Kuo, L.M. Lin, Application of a hybrid of genetic algorithm and particle swarm optimization algorithm for order clustering, Decision Support Systems 49 (4) (2010) 451–462.
[18] N. Lavrac, E. Keravnou, B. Zupan, Intelligent data analysis in medicine, Encyclopaedia of Computer Science and Technology 42 (2000) 113–157.
[19] Y.F. Li, S.H. Ng, M. Xie, T.N. Goh, A systematic comparison of metamodeling techniques for simulation optimization in decision support systems, Applied Soft Computing 10 (4) (2010) 1257–1273.
[20] O.R. Liu-Sheng, Decision support for healthcare in a new information age, Decision Support Systems 30 (2) (2000) 101–103.
[21] H.R. Madala, A.G. Ivakhnenko, Inductive Learning Algorithms for Complex Systems Modeling, CRC Press Inc., 1994.
[22] E. Migliore, G. Berti, C. Galassi, N. Pearce, F. Forastiere, R. Calabrese, L. Armenio, A. Biggeri, L. Bisanti, M. Bugiani, E. Cadum, E. Chellini, V. Dell'Orco, G. Giannella, P. Sestini, G. Corbo, R. Pistelli, G. Viegi, G. Ciccone, Respiratory symptoms in children living near busy roads and their relationship to vehicular traffic: results of an Italian multicenter study SIDRIA 2, Environmental Health 8 (1) (2009) 27–42.
[23] H. Orru, E. Teinemaa, T. Lai, T. Tamm, M. Kaasik, V. Kimmel, K. Kangur, E. Merisalu, B. Forsberg, Health impact assessment of particulate pollution in Tallinn using fine spatial resolution and modeling techniques, Environmental Health 8 (1) (2009) 98–105.
[24] E.I. Papageorgiou, A new methodology for decisions in medical informatics using fuzzy cognitive maps based on fuzzy rule-extraction techniques, Applied Soft Computing 11 (1) (2011) 500–513.
[25] D.J. Power, R. Sharda, Model-driven decision support systems: concepts and research directions, Decision Support Systems 43 (3) (2007) 1044–1061.
[27] M.V. Sokolova, A. Fernández-Caballero, F.J. Gómez, Agent-based interdisciplinary framework for decision making in complex systems, in: Proceedings of the International Conference on Agents and Artificial Intelligence (ICAART), vol. 2, 2010, pp. 96–103.
[28] M.V. Sokolova, A. Fernández-Caballero, Modeling and implementing an agent-based environmental health impact decision support system, Expert Systems with Applications 36 (2) (2009) 2603–2614.
[29] M.V. Sokolova, A. Fernández-Caballero, Multi-agent-based system technologies in environmental issues, Information Technologies in Environmental Engineering (2009) 549–562.
[30] M.V. Sokolova, A. Fernández-Caballero, A multi-agent architecture for environmental impact assessment: information fusion, data mining and decision making, in: Proceedings of the Ninth International Conference on Enterprise Information Systems, vol. DISI, 2007, pp. 219–224.
[31] M.V. Sokolova, A. Fernández-Caballero, An agent-based decision support system for ecological-medical situation analysis, Nature Inspired Problem-Solving Methods in Knowledge Engineering (2007) 511–520.
[32] M.V. Sokolova, A. Fernández-Caballero, A meta-ontological framework for multi-agent systems design, Nature Inspired Problem-Solving Methods in Knowledge Engineering (2007) 521–530.
[33] L. Sterling, K. Taveter, The Art of Agent-Oriented Modeling, The MIT Press, 2009.
[34] D. Stieb, M. Szyszkowicz, B. Rowe, J. Leech, Air pollution and emergency department visits for cardiac and respiratory conditions: a multi-city time-series analysis, Environmental Health 8 (1) (2009) 75–90.
[35] N. White, J. teWaterNaude, A. van der Walt, G. Ravenscroft, W. Roberts, R. Ehrlich, Meteorologically estimated exposure but not distance predicts asthma symptoms in schoolchildren in the environs of a petrochemical refinery: a cross-sectional study, Environmental Health 8 (1) (2009) 45–59.
[36] World Health Organization, Protecting Health from Climate Change: Global Research Priorities, WHO Press, 2009.
[37] K. Yada, E. Ip, N. Katoh, Is this brand ephemeral? A multivariate tree-based decision analysis of new product sustainability, Decision Support Systems 44 (1) (2007) 223–234.
[38] M.H. Zack, The role of decision support systems in an indeterminate world, Decision Support Systems 43 (4) (2007) 1664–1674.