Information Fusion 2 (2001) 161–162
www.elsevier.com/locate/inffus
Editorial
Fusion system: How good is it?

When you use information from one source, it's plagiarism; when you use information from many, it's information fusion.

In the previous issue, we pondered the questions of what, where, why, when and how in the context of fusion systems. Assuming that one has successfully navigated these questions and designed a fusion system, the next question is naturally 'how good is this system?' Like any other system design effort, a fusion system design effort cannot be considered complete until a measure of its performance has been determined. Obviously such a step requires first a definition of a measure of performance (MOP) or measure of effectiveness (MOE). This is an aspect that has received relatively little attention given its importance. The absence of such measures in the context of image data fusion has been highlighted in a recent report on digital video standards for future military avionics systems [1], although there have been some preliminary attempts in this direction, for example, in the work of Xydeas and Petrovic [2], who offer an objective evaluation of fusion performance (OEFP). Another example is the work of Ulug and McCullough [3], who put forth a quantitative metric for comparison of night vision fusion algorithms.

Admittedly, it is very difficult to conceive a global measure of performance of a fusion system. It has to be driven by the objective of the fusion system: for example, is the fusion system addressing the 'what is it?' issue, i.e., the object identification problem, or the 'where is it?' issue, i.e., the object tracking problem, or some other higher-level objective? It is also affected by the mode (for example, data in-data out, features in-decision out, decisions in-decision out and so on [4]) in which the information fusion is being accomplished. In the context of decisions in-decision out fusion, which also includes multi-classifier fusion, a beginning has been made in this direction in terms of defining a Fusion Algorithm Measure of Effectiveness (FAME) [5]. FAME is defined as the ratio of the performance of the fusion system under consideration to the theoretical performance of an ideal fusion model. The ideal fusion model is a theoretical process that generates the correct recognition label of the object if at least one of the input labels is correct.
FAME = (Performance of the fusion process under consideration) / (Performance of an ideal fusion process).
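As a minimal illustration (a sketch, not code from this editorial or from [5]), the following Python fragment computes FAME for a decisions in-decision out experiment, assuming that the ground-truth labels, the labels produced by each contributing classifier, and the fused labels are all available; the ideal-fusion baseline counts a sample as correct whenever at least one input label matches the truth, per the definition above. The function name and data layout are purely illustrative.

    from typing import Sequence

    def fame(truth: Sequence[int],
             fused: Sequence[int],
             inputs: Sequence[Sequence[int]]) -> float:
        # Illustrative FAME sketch: fused correct-recognition count divided by
        # the count achieved by an ideal fuser that is correct whenever at
        # least one input label is correct (hypothetical interpretation).
        fused_correct = sum(f == t for f, t in zip(fused, truth))
        ideal_correct = sum(
            any(inp[i] == t for inp in inputs) for i, t in enumerate(truth)
        )
        if ideal_correct == 0:
            return float('nan')  # no input was ever correct; ratio undefined
        return fused_correct / ideal_correct

    # Toy example: three classifiers, five samples.
    truth = [0, 1, 1, 0, 2]
    c1 = [0, 1, 0, 0, 1]
    c2 = [1, 1, 1, 0, 2]
    c3 = [0, 0, 1, 1, 2]
    fused = [0, 1, 1, 0, 1]
    print(fame(truth, fused, [c1, c2, c3]))  # 4 fused correct / 5 ideal correct = 0.8

In this reading, a FAME value close to 1 suggests the fusion process extracts nearly all of the information available in its inputs, while values well below 1 point to room for improvement in the fusion process itself, independent of the quality of the individual inputs.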
FAME is not intended to portray the effectiveness of the fusion system as a recognition system, but rather serves as a measure of the effectiveness of the fusion process itself. FAME hence offers insight into the potential for improvement of the fusion process, since a fusion process cannot realistically be expected to generate a correct response if all the inputs to the process are incorrect. For example, such potential for improvement is seen [5] to vary inversely with the signal-to-noise ratio (SNR) and is also sensitive to a possible lack of training at the operating SNR conditions. While FAME was conceived in the context of evaluating the effectiveness of identity fusion algorithms, one could visualize extending the concept to other types of fusion system scenarios as well. Various metrics for evaluating tracker performance abound in the literature, but they have to be modified or adapted to provide a metric for assessing the effectiveness of a track fusion system as a fusion mechanism rather than its performance as a tracker.

Scanning the current issue, we are offering here six regular papers. The first paper addresses the classifier-level fusion problem, using a clustering and selection approach to determine the regions in which each of the component classifiers performs best. The study of multi-classifier systems is gaining popularity as a topic of research in the traditional pattern recognition community, as evidenced by the recent special workshops dedicated to this topic [6,7]. Incidentally, a special issue on this topic in the Information Fusion Journal has also been planned and the call for papers has already been issued. The next two studies investigate various aspects of the image fusion problem: the first takes on the challenge of fusing images with different focuses, and the second examines the color distortion problem in the context of the family of intensity-hue-saturation (IHS)-like approaches to image fusion. The fourth offering compares four different methods of decision-level fusion in the context of the landmine detection problem, an application area that has attracted a lot of attention over the past few years (as exemplified by the annual SPIE conferences devoted specifically to this problem area). The fifth presentation tackles a relatively new problem in the fusion domain,
namely, sound localization, through fusion of audio and visual information. The last paper is indeed very much off the beaten path: it looks at the morphological clique problem in a mathematically formal fashion and views system synthesis as the fusion of local decisions into a global one. The issue thus caters to the thirst of the mathematically sophisticated as well as of the practically inclined among the readership.

Continuing our series on 'FUN in FUsioN' in a lighter vein, we again pose a fusion problem with multiple interpretations of the word fusion. We continue to hope for and welcome feedback from the readership, whether bouquets or brickbats, on this as well as other aspects of the Journal. We would also be receptive to suggestions from the readership regarding suitable topics for special issues of the Journal.
kaayena vaachaa manasendriyairvaa
budhyaatmanaavaa prakR^ite svabhaavaat
karomi yadyat sakalaM parasmai
shriiman naaraayaNaayeti samarpayaami

Be it with my body, or with my mind,
With words, or organs of any kind,
With my intellect, or with my soul,
Or by force of Nature pushing me to my goal,
Whatever it is, with all these I do,
Oh! Supreme Lord! I surrender to you.
References

[1] Avionic Systems Standardisation Committee, Video Systems Subcommittee, www.era.co.uk/assc/Vid6.pdf, Doc. No. ASSC/130/2/134, May 2000.
[2] C.S. Xydeas, V.S. Petrovic, Objective pixel-level image fusion performance measure, in: Proceedings of the SPIE Conference on Sensor Fusion: Architectures, Algorithms, and Applications IV, vol. 4051, Orlando, FL, April 2000, pp. 80–88.
[3] M.E. Ulug, C.L. McCullough, Quantitative metric for comparison of night vision fusion algorithms, in: Proceedings of the SPIE Conference on Sensor Fusion: Architectures, Algorithms, and Applications IV, vol. 4051, Orlando, FL, April 2000, pp. 89–98.
[4] B.V. Dasarathy, Sensor fusion potential exploitation – innovative architectures and illustrative applications, Proceedings of the IEEE – Special Issue on Sensor Fusion 85 (1) (1997) 24–38 (invited paper).
[5] B.V. Dasarathy, Information fusion benefits delineation under off-nominal scenarios, in: Proceedings of the SPIE: Sensor Fusion: Architectures, Algorithms, and Applications III, vol. 3719, Orlando, FL, April 1999, pp. 2–13.
[6] 1st International Workshop on Multi-classifier Systems, Cagliari, Sardinia, Italy, June 2000 (http://www.diee.unica.it/mcs/mcs2000/).
[7] 2nd International Workshop on Multi-classifier Systems, Robinson College, Cambridge, UK, July 2001 (http://bode.diee.unica.it:80/mcs/).
B.V. Dasarathy
Dynetics Inc., P.O. Box 5500
Huntsville, AL 35814-5500, USA
E-mail addresses: [email protected], [email protected]