
Neurocomputing 71 (2008) 2409–2410, doi:10.1016/j.neucom.2008.03.010


Editorial

International Conference on Artificial Neural Networks (ICANN 2006)

This special issue of Neurocomputing features recent advances in neural network research topics addressed during the International Conference on Artificial Neural Networks (ICANN 2006), held on September 10–14, 2006 in Athens, Greece. The ICANN conference is organized annually by the European Neural Network Society in cooperation with the International Neural Network Society, the Japanese Neural Network Society and the IEEE Computational Intelligence Society. It is the premier European event covering all topics concerned with neural networks and related areas. The ICANN series of conferences was initiated in 1991 and soon became the major European gathering for experts in these fields. In 2006 the conference was organized by the Intelligent Systems Laboratory and the Image, Video and Multimedia Systems Laboratory of the National Technical University of Athens.

From the 475 papers submitted to the conference, the International Program Committee selected, following a thorough peer-review process, 208 papers for publication and presentation in 21 regular and 10 special sessions. The quality of the papers received was in general very high. After the conference, the authors of a number of highly innovative papers were invited to submit extended versions for this special issue of Neurocomputing. Each paper was required to be substantially extended with additional unpublished original computational contributions. The extended papers were reviewed using the journal's normal reviewing process. The 20 papers included in this issue are those that passed the review successfully. The accepted papers cover a variety of topics and can be clustered into three major groups.

The first group comprises five papers, which deal with several aspects of machine cognition and cognitive systems. In the first paper, Hartley and Taylor propose an approach for modelling reasoning using forward and inverse internal cognition models.
Bader, Hitzler and Hölldobler present a method for connectionist model generation using recurrent networks that integrate feed-forward networks for encoding background knowledge. In the paper by Apolloni, Bassis, Malchiodi and Pedrycz, a featured non-linear regression method is introduced, which uses SVMs and granular computing. In their paper, Matsuka, Sakamoto, Chouchourelou and Nickerson focus on a new framework for descriptive models of human learning that offers qualitatively plausible interpretations of cognitive behaviours. Chortaras, Stamou and Stafylopatis derive connectionist models that are able to represent weighted fuzzy logic programs in the framework of imperfect knowledge representation and reasoning.

The second group includes eight papers, which focus on advances in neural network learning algorithms. In his paper, Duch presents a class of search-based training algorithms for feed-forward neural networks, focusing on the variable step search algorithm. Schaefer, Udluft and Zimmermann show that time-delay recurrent neural networks unfolded in time and formulated as state space models are capable of learning long-term inter-temporal dependencies. In the paper by Constantinopoulos and Likas, a semi-supervised learning method for probabilistic RBF networks is proposed, which uses labelled and unlabelled observations concurrently to implement an incremental active learning scheme. Dorronsoro presents a natural conjugate gradient method for multilayer perceptron training, by introducing and simplifying a Riemannian formulation. The paper by Achbany, Fouss, Yen, Pirotte and Saerens focuses on a model allowing tuning of continual exploration in reinforcement learning by integrating exploration and exploitation in a common optimization framework. Martínez-Muñoz, Sánchez-Martínez, Hernández-Lobato and Suárez analyse and evaluate class-switching ensembles composed of neural networks, focusing especially on generalization capability. Valls, Galván and Isasi describe a lazy strategy for training radial basis neural networks, based on the dynamic selection of training patterns by means of different kernel functions. In their paper, López and Oñate present an extended class of multilayer perceptrons that is tested on problems from optimal control theory.

The third group comprises seven papers, which deal with neural network methodologies and applications. Ishii, Ashihara and Abe propose and evaluate two feature selection criteria for two-class problems based on kernel discriminant analysis.
Caridakis, Karpouzis and Kollias present an approach for the recognition of the emotional states of users in human-computer interaction, which uses neural network architectures to detect the need for knowledge adaptation and to apply an efficient adaptation procedure. In their paper, Wysoski, Benuskova and Kasabov describe a spiking neural network for adaptive multi-view visual pattern recognition. Sjöberg, Laaksonen, Honkela and Pöllä use self-organizing maps trained on low-level multimedia features and textual data for content-based retrieval. The paper by Angelides and Sofokleous focuses on the use of self-organizing neural networks (SONNs) to rank video segments through collaborative and content clustering, as an add-on to an MPEG-7 semantic meta-data modelling and filtering system. Kwak, Kim and Kim extend standard algorithms for independent component analysis (ICA) to extract attributes for regression problems and apply the proposed method for dimensionality reduction in real-world problems. In the paper by Tikka and Hollmén, a sequential input selection algorithm is proposed for long-term prediction of time series.

We wish to thank Tom Heskes, Editor-in-Chief of this journal, for providing us with the opportunity to compile this special issue, and for his continuous support and motivation. We also thank Vera Kamphuis and the editorial staff of Neurocomputing for their valuable assistance in this effort. Finally, we wish to thank the authors of this issue (and those whose papers we were unable to include), as well as the reviewers, who contributed significantly to its high quality.

Stefanos Kollias, Andreas Stafylopatis
National Technical University of Athens, School of Electrical and Computer Engineering, 157 80 Zographou, Athens, Greece
E-mail addresses: [email protected] (S. Kollias), [email protected] (A. Stafylopatis)

Włodzisław Duch
Nicolaus Copernicus University, Department of Informatics, ul. Grudziądzka 5, 87-100 Toruń, Poland
E-mail address: [email protected]