A visual query language for dynamic processes applied to a scenario-driven environment


Journal of Visual Languages and Computing 18 (2007) 315–338


Karin Camara1, Erland Jungert

Swedish Defence Research Agency (FOI), Box 1165, S-581 11 Linköping, Sweden

Abstract

Query languages for multi-sensor data sources generally deal with spatial/temporal data that in many applications are of geographical type. Such applications are quite often concerned with dynamic activities where the collected sensor data are streaming in from multiple sensors. Data uncertainty is one of the most important issues the query language must deal with. Other aspects of concern are sensor data fusion and the association of multiple object observations. Demonstration of the dynamic aspects is generally difficult, as real-time scenarios cannot easily be set up, tested and run realistically. To overcome this problem the query language ΣQL (sigma query language) has been attached to a simulation framework. Together with this framework, scenarios can be set up to form the basis for tests and dynamic illustration of the query language; eventually the query language can be used to support decision making as well. Within the simulation framework, input data come from sensor models that can eventually be replaced by real sensors. Services can be integrated with the information system, used for various purposes and supported by the various capabilities of the query language. A consequence of this approach is that the information delivered by the services, including the query language, can be used as input to an operational picture that can demonstrate on-going dynamic processes. In this work, an extension to ΣQL, called VisualΣQL, is discussed together with some other relevant services useful in dynamic situations as complements to the query language. Furthermore, the use of the system is illustrated and discussed by means of a scenario that has been run in the simulation environment.

© 2007 Elsevier Ltd. All rights reserved.

Keywords: Query language; Sensor data sources; Simulation framework; Scenario-driven

Corresponding author. Tel.: +46 13 37 83 37; fax: +46 13 37 80 58.
E-mail addresses: [email protected] (K. Camara), [email protected] (E. Jungert).
1 Formerly Karin Silvervarg.
doi:10.1016/j.jvlc.2007.02.004


1. Introduction

Query languages dealing with dynamic processes in a geographical environment are hard to test and to demonstrate under realistic forms. This is further complicated by the fact that input data generally must come from various types of sensors that cannot produce exact data, due to limitations inherent in all sensors. Sensors are, furthermore, specialized in the sense that the data they deliver never include complete information about the objects they observe. A simple example is that a video camera can be used to determine the colour of an object whereas an infrared (IR) camera cannot. Thus, obtaining a complete description of an object, in which a user can have a high degree of confidence, requires not a single sensor but a set of sensors of different types. Due to the uncertainty in the sensor data, the results from the multiple sensors must be subject to sensor data fusion [1]. The main reason for this is that it must be determined, up to some reasonable level of certainty, that the target object is the same object that has been observed by all the sensors; this is a classical problem in sensor data fusion, called the association problem. VisualΣQL has capabilities for handling multiple sensor types and has therefore been equipped with a method for data fusion [2].

Sensor data systems cannot be considered usable if they require sensor and sensor data competence of their users, so it is of utmost importance to make them independent of the sensor data. This will be discussed further subsequently and can be seen as an important aspect of VisualΣQL that must be of concern in all types of systems that use sensor data as input. Clearly, this is further motivated by the fact that an end-user should not have to be a sensor expert and has better things to do than to keep track of which sensor(s) the system uses. Another related aspect is that no sensor can be successfully used under all weather and light conditions. Thus sensor data independence is motivated to let the users concentrate on their main activities instead of fussing over the sensors, which can better be handled by the system. Furthermore, any tool intended to determine what goes on in a particular dynamic process in geographic space must, from the user's perspective, be applicable in a general and dynamic way. One type of tool with this capability is clearly a query language.

Sensor data are generally collected to determine what is going on during specific dynamic processes. These processes are quite often very complex in nature and cannot be run without spending large resources unless they are simulated in some way. A simulation framework has been used to demonstrate how a number of decision support tools (services), of which the query language is the most powerful and general, can be used to monitor dynamic processes. The simulation framework used in this work is called MOSART (MOdulation and SimulAtion Research Test bed) [3]. A central part of this framework is a scenario engine that keeps track of a large number of events. It can, for example, monitor a large number of moving objects, basically vehicles of different types. The input data to the simulation framework come from a set of sensor models. That means that for each sensor type there is a piece of software that simulates the real sensor and generates data corresponding to the ongoing situation. As a consequence, a scenario with a number of ongoing events can be run while the various decision support tools are applied over time. The information used for the decision making originally comes from the sensor models that observe the area of the scenario. This makes it possible to observe the various events as they occur and, in parallel, build up an operational picture of the scenario to support decision making.

The operational picture is maintained so that the users of the system can get an understanding of what is going on, i.e. to gain good situational awareness. On the basis of the operational picture they can make their decisions in order to solve the problems present in the scenario. A fairly similar approach to the one discussed in this work is presented by Louvieris et al. [4].

Many attempts have been made to build visual interfaces for SQL, but only a few have touched the issues of spatial and/or temporal queries, and especially the particular case where the input data emanate from sensors. An attempt to handle temporal data is made by Dionisio et al. with the tool MQuery [5], but this approach is very simple and does not, for instance, handle Allen's [6] time intervals. Chittaro et al. [7], on the other hand, have a system for handling those time intervals, but it is by no means a complete query language. Abdelmoty et al. [8] have made a representation of visual queries in spatial databases with a filter-based approach. Bonhomme et al. [9] have proposed a visual representation for spatial/temporal queries based on a query-by-example approach. Chang [10] has made an approach that uses c-gestures for making spatial/temporal queries in what he calls a sentient map. All of these relate to the approach taken in this work and have partly influenced our approach to a visual query language for both spatial and temporal queries.

The aspects of uncertainty are, due to the role of uncertainty in the sensor data, of fundamental importance to VisualΣQL. These uncertainties have, in particular, effects on the queries applied for determination of spatial relations between extended geographical objects [11], or on spatial relations between man-made objects, or between man-made objects and extended objects where the latter can be part of the background context. A general approach to handling the uncertainties in these problems is to represent the object with what is sometimes called a broad boundary. The object thus has an uncertainty component that depends on extension and position. This representation includes an interior of the object that contains the parts that definitely belong to the object. The broad boundary, on the other hand, includes parts that might be part of the object. Consequently, the broad boundary mirrors the uncertainties of the object. This way of representing the uncertainties is in many ways related to the theories of rough sets [12] and egg yolk [13]. Since the degree of uncertainty in sensor data primarily depends on the type of sensor, different sensors under different circumstances deliver uncertainties of different character, which has consequences for how the query language should handle the actual uncertainties. The solution to this problem is discussed further in Section 3.6 for small man-made objects in relation to larger objects that are part of the background.

The structure of this work is as follows. The problem of the work, including its context, is discussed in Section 2. The query language is presented in Section 3, while Section 4 discusses the simulation framework, including the sensor models as well as the planning and association services available in the system. Section 5 describes the background for the scenario used in the demonstration that has been carried out. Section 6 discusses the events that took place in the scenario and how the tools were used to aid a decision maker. Finally, Section 7 presents the conclusions of the work.

2. The problem and its context

The main problem in this work has been how to query multiple sensor data sources while observing a dynamic environment, in a way that makes it possible for end-users to determine events of dynamic processes and eventually get an understanding of the situation that can be used for relevant decision making.

To achieve this goal many complex subtasks must be solved, for instance analysis and fusion of the sensor data, but also selection of the sensors. There are also other important aspects of the problem, such as how a user should apply queries to the system, i.e. the design of the user interface. The concern here is not only which input information the user should give, but also how it should be given. The question of how the information returned by the system should be presented arises as well. These problems are of particular importance considering that the queries are applied in order to determine events in a complex dynamic process. When formulating the research problem it is also important to remember that most input and output data are of spatial/temporal type.

The user must also consider what the query concerns when applying a query. The general answer to this problem is that various types of objects can be extracted from the sensor data by the analysis programs. The objects may, for instance, correspond to vehicles, but it is not sufficient to just return the object types, since the user may also be interested in different spatial and temporal relationships, status and attribute values. Of concern is also the fact that, since the input data are generated by sensors of various types, data uncertainties must always be considered by the system. As the activities of the observed objects correspond to events that are parts of a dynamic process, means for demonstrating the dynamic aspect of the query language must be available as well. The approach taken to demonstrate the dynamic aspects of the query language has been to attach the query language to a simulation framework with a built-in scenario generator, where the sensors are simulated by sensor models.

3. The query language, ΣQL

The query language ΣQL [14–16] can be seen not just as a query language but also as a query system, as it includes capacity for multi-sensor data fusion, interoperability with respect to sensor types and visual user interaction. Originally, ΣQL was developed as a query language for querying a single sensor image at a time. However, it has later evolved into a query language for multiple sensor data sources with capabilities for sensor data fusion and sensor data independence. Sensor data independence here means that an end-user should be able to apply queries without being aware of which sensors are involved in the process of answering the queries. This is especially important as the use of a particular sensor depends on weather and light conditions, and it is difficult for a user to determine which sensor is best under the given circumstances. Furthermore, the various sensors may be of different types and generate heterogeneous sensor data images. For this reason, a large number of algorithms for sensor data analysis [17] must be available to the system. In this paper we are basically concerned with how to apply visual queries; this can be seen as an extension to the original version of ΣQL, simply called VisualΣQL.

A prerequisite in this work has been that selection of sensors and sensor data algorithms must be carried out autonomously. For this reason, means for such selection have been implemented. In ΣQL this is controlled by an ontological knowledge-based system [2,18]. The automatic selection of sensors and sensor data by the query system is motivated by the need for sensor data independence. Another motivation is to allow the user to concentrate on the work at hand without any knowledge of the sensors and their data types, i.e. the user should not need to be an expert on the sensors used and their data.


A third motivation is to make repetitive queries possible without interference from the users. In this way, the query can be repeated during longer or shorter time intervals, and as the light and weather conditions change this may lead to the selection of different sensor types. For example, in daylight a video camera can be selected, but as day shifts to night the video camera needs to be replaced, perhaps by an IR-camera.

The query processor allows classification of objects, but it also allows cuing, i.e. detection of possible candidates as a first step towards the classification. Sensors used for cuing can, for example, be a ground sensor network or a synthetic aperture radar (SAR), while sensors for classification can be an IR-camera, a laser radar or a CCD-camera (charge-coupled device camera). Sensors of all these types can be used. Observe that in this context classification means determination of the object class, not strictly determination of the identity of a specific object. A distinction between the two classes of sensors is that the former generally cover a much larger area than the latter. Thus the former can search through the area of interest (AOI) for object candidates much faster than the sensors used for classification. The detection sensors can also be used to determine much smaller search areas for the classification sensors. That is, the detection sensors can scale down the search areas for the classification sensors, in which case the classification step is carried out much faster. Clearly, this depends on the differences in coverage and resolution: the classification sensors have low coverage and high resolution, as opposed to the sensors used for detection.

The basic functionality of the query language can be described as follows. A query is entered by the user and the input is fed into the query processor, which in a dialogue with the ontological knowledge structure generates a set of sub-queries, one for each of the selected sensors. The sub-queries are then refined and executed by the query processor. Once the sub-queries have been executed, instances of their results are created in a database to support data fusion in a subsequent step. The execution of the sub-queries goes on until the set is empty, i.e. until there is no further sensor data available to process. In the final step data fusion is, if applicable, carried out using the results of the sub-queries as input, and then the process terminates. This process is further discussed in [15].

Sensor data fusion [2] is another, quite unique, property of ΣQL that does not occur in traditional query systems. The motivation for sensor data fusion is to allow information from multiple sensors to support identification of the requested objects, but also to complement the information, since different sensors can register different object properties. For instance, from an image of a CCD-camera it can be determined that a car is blue, while an IR-camera does not allow this; instead it may be possible to determine that the engine of the car is running or has been running recently. Thus, we can use multiple sensors to compile more complete object information. A serious question that depends on the uncertainty of the acquired information is how to interpret the fused result of a query. The approach taken here has been to associate a belief value to the result of each sub-query. Belief values are determined by the involved sensor data analysis algorithms, and a common value is determined by the fusion process and forwarded to the user as a part of the query result. All belief values are normalized, and a high value means that a user may have a high confidence in the result. This is applied not only to assist the fusion process, but also to give the user a sense of how strong a belief or confidence he/she may have in the result. This is an important aspect of the system, introduced to give the users a higher degree of trust in the query result as well as in the query system.
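To make the processing loop described above concrete, the following is a minimal runnable sketch. All names (the ontology mapping, run_subquery, the noisy-OR belief combination) are our illustrative assumptions, not the actual ΣQL implementation:

```python
"""Illustrative sketch of the described query loop: ontology-driven sensor
selection, one sub-query per selected sensor, and fusion of the sub-query
results into one answer with a combined, normalized belief value."""
from dataclasses import dataclass

@dataclass
class SubResult:
    sensor: str
    objects: list      # object classes reported by this sensor's analysis
    belief: float      # normalized belief in [0, 1] from the algorithm

def fuse(results):
    # Toy fusion: intersect the reported classes and combine beliefs with
    # a noisy-OR rule; the real system uses a dedicated fusion module [2].
    classes = set.intersection(*(set(r.objects) for r in results))
    disbelief = 1.0
    for r in results:
        disbelief *= 1.0 - r.belief
    return classes, 1.0 - disbelief

def process_query(object_type, ontology, run_subquery):
    # The ontology maps object types to the sensors able to observe them.
    selected = ontology.get(object_type, [])
    results = [SubResult(s, *run_subquery(s, object_type)) for s in selected]
    if not results:                 # no sensor data available to process
        return set(), 0.0
    return fuse(results)            # fused result + belief, shown to the user

# Minimal usage with stubbed sensor analysis:
ontology = {"vehicle": ["IR-camera", "CCD-camera"]}
def run_subquery(sensor, object_type):
    return (["truck", "car"] if sensor == "IR-camera" else ["car"], 0.7)

print(process_query("vehicle", ontology, run_subquery))  # ({'car'}, ~0.91)
```

The noisy-OR combination reflects the text's intent that agreement between several sensors should raise the user's confidence; the rule actually used by the fusion module is described in [2].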


3.1. The query system

An overview of the structure of the query system is given in Fig. 1. The system is divided into a visual user interface, a query processor and the sensor nodes to which the sensors are attached. The query processor includes a knowledge system that operates in conjunction with an ontology. This part of the system supports automatic selection of both the sensors and the algorithms for sensor data analysis. In a first set-up of the query system the actual sensors were a digital camera, an IR-camera and a laser radar for classification, and a SAR for object detection. However, the system is not limited to these sensor types; others can be attached on demand, i.e. the system can facilitate interoperability with respect to sensor types. The sensor nodes include means for target recognition, i.e. object classification. For this purpose a database containing a library of target models is attached to the ΣQL processor. The target models stored in this library are used by the image analysis processes to recognize objects found in the sensor data inquired by the users. A meta-database containing general descriptions of the available information that has been registered by the sensors is also attached to the query processor, to guide the system in the data search process. The query system includes, contrary to conventional query languages, a sensor data fusion module. The purpose of this module is primarily to fuse the information that corresponds to the results of the sub-queries, where the original data most of the time emanate from the attached sensors.
Fig. 1. An overview of the information system.

3.2. Elementary queries

As indicated above, it should be enough for a user to have a general understanding of the problems associated with the sensor data; e.g. the user should not have to bother with sensor data uncertainty or with the fact that not all sensors can measure all possible attributes. Thus sensor data independence has to be achieved. To achieve this, the user should not work with concepts related to the sensors, but with what is relevant to the user. The basic questions that should be answered in a spatial/temporal environment are where?, when? and what? We call the concepts that answer these questions the area of interest (AOI), the time interval of interest (IOI) and the object types. Hence, a user must indicate the AOI in a map and supply the IOI by providing the start and end points in time, see Fig. 2. The IOI can be more advanced: by setting only the starting time, continuous repetition is allowed, i.e. the same query is answered several times but with different IOIs that are consecutive over time. Object types can in their simplest form just be chosen from a list that mirrors the actual object ontology. The elementary queries are also described in [19].

Fig. 2. The interface for elementary queries for selection of AOI, IOI and object type.
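As an illustration only, the three elementary concepts can be captured in a small data structure; the class and field names below are our assumptions, not part of ΣQL:

```python
"""A minimal sketch of an elementary query: where? (AOI), when? (IOI) and
what? (object type). Requires Python 3.10+ for the `| None` annotation."""
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ElementaryQuery:
    aoi: list[tuple[float, float]]  # area of interest as a map polygon
    ioi_start: datetime             # interval of interest: start time...
    ioi_end: datetime | None        # ...and end; None = repeat continuously
    object_type: str                # picked from the object ontology

# Example: vehicles in a rectangular AOI during the evening of a given day.
q = ElementaryQuery(
    aoi=[(58.40, 15.55), (58.45, 15.55), (58.45, 15.65), (58.40, 15.65)],
    ioi_start=datetime(2005, 11, 11, 18, 0),
    ioi_end=datetime(2005, 11, 11, 23, 0),
    object_type="vehicle",
)
```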

3.3. Object attributes

Objects belong to all kinds of object types that can be registered by the sensors. All objects may have properties that vary over time; examples of such object types are vehicles and people. Objects can also be of geographical type, e.g. roads and towns. The properties of the objects are important. They can be categorized as spatial, temporal or other. The spatial attributes, or status values, are attributes that relate to geography, e.g. position or direction. Temporal attributes are related to time, e.g. the point of time when an object was observed or the time interval during which it existed. Other attributes concern width, length, colour, velocity, etc., i.e. all conceivable attributes that relate neither to space nor to time.

3.4. Object relations

All objects can be affected by relations. Object relations can be of various types; in this context most relations are of spatial or temporal type, see also [19,20]. Clearly, means for determination of these relations must be available in the query system. Consequently, this is the way to specify the details of the query. The type of objects is generally already set in the elementary part of the query, but the relations delimit the answer to include only those objects for which the relations are true.

The relations can be unary or binary. Unary relations are for example "colour equals blue" or "velocity is greater than 50 km/h". The binary relations can be either undirected or directed. Directed means that the order of the involved objects matters; for instance, the relation "before" gives a different result depending on the order of the involved objects, whereas the result of "equals" does not. The spatial relations can be of topological and of directional types, where the latter may be either global or local. The global directional spatial relations include members of the set {north_of, south_of, …, north_west_of}, while the local directional spatial relations include members of the set {in_front_of, behind, …}. All these directional spatial relations are of binary type. The temporal relations typically correspond to Allen's [6] 13 relations, e.g. before, starts, equals, etc., but there are also relations that relate to a specific time, e.g. before 2 o'clock. The temporal relations are discussed further in Section 3.5 and only briefly in this section. Finally, there are also relations that are both spatial and temporal. They relate to movements and changes of the objects over time, e.g. get an ⟨object⟩ that was at ⟨position⟩ at t1 but not at t2.

Visualization of the parts where?, when? and of the simple part of what? of the query is relatively straightforward. So, in this paper we have focused on the more advanced part of what? The user interface functionality is built up around a work area and palettes. The work area is the space where the object types and the relations are placed and set in relation to each other in the current work session. The palettes contain the object types and the relations, organized according to the kind of attribute that is of concern, for the purpose of simple navigation. When the user selects an object type and places it in the workspace it is visualized as a box with the type of object written inside. Then the user can select the relations that put restrictions on the objects. The relations are also visualized as boxes. When possible, the relation is explained with an icon rather than in text, since that is often simpler for a human to understand [9] (Fig. 3). Objects and relations are connected to each other with arrows. Everything passed between a pair of such "boxes" is represented as a set of tuples. Output from an object box corresponds to a tuple of size one that simply contains an item from the set described in the object, e.g. a vehicle.

In a binary relation two different tuples are related to each other, and the resulting output tuples may contain more than one element. Which elements to include in the resulting tuple is specified in the settings of the relation, see Fig. 4b.

Fig. 3. The user has selected the object types vehicle and road, and the relation inside.

Fig. 4. (a) A relation where one of the tuples has more than one element (coming from an earlier part of the query) and where the result contains only a part of all possible elements. (b) Settings of that relation.

In the resulting tuple the user can choose to include any parts from the input tuples. For example, in Fig. 4a we have the relation inside. One of the participating sets contains cities and the other could be the result of a relation relating cars to nearby rivers. In this case we could relate the cars to the cities to find out which cars are in a city; in this case probably only car1. The resulting tuple from the relation can contain car, river and city, or any subset thereof, i.e. car and river; car and city; river and city; car; river; city. In Fig. 4b the resulting set is shown where the tuple consists of river and city.
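The tuple-passing dataflow just described can be sketched as follows; the dict-based tuple representation and all names are illustrative assumptions rather than the VisualΣQL internals:

```python
"""Sketch of a binary relation box: relate tuples from two incoming sets and
emit combined tuples containing only the elements the user chose to keep."""

def apply_relation(left, right, predicate, left_key, right_key, keep):
    # left/right: lists of tuples, each tuple a dict of named elements;
    # predicate: the relation, tested on one element from each side;
    # keep: names of the elements to include in the resulting tuples.
    out = []
    for lt in left:
        for rt in right:
            if predicate(lt[left_key], rt[right_key]):
                merged = {**lt, **rt}
                out.append({k: merged[k] for k in keep})
    return out

# Toy usage mirroring Fig. 4: relate <car, river> tuples to city areas with
# 'inside' and keep river and city in the result.
def inside(point, rect):
    (x, y), (x0, y0, x1, y1) = point, rect
    return x0 <= x <= x1 and y0 <= y <= y1

cars_near_rivers = [{"car": (2.0, 3.0), "river": "river1"}]
cities = [{"city": "city1", "area": (0.0, 0.0, 5.0, 5.0)}]
print(apply_relation(cars_near_rivers, cities, inside,
                     "car", "area", keep=("river", "city")))
# -> [{'river': 'river1', 'city': 'city1'}]
```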

The query language also includes some set operations, i.e. union, intersection and set difference. They are treated similarly to relations, but function a bit differently. The union operator simply produces a set containing all tuples from both incoming sets. The only restriction is that all the resulting tuples must have the same number of elements. To make this meaningful, the user pairs elements from the incoming tuples. If some paired elements do not contain the same type of objects, the ontology is used to find the object type that is closest above both objects in the object hierarchy, which becomes the resulting type. This situation may occur in queries where objects of similar types are asked for. For instance, the incoming tuples may hold the elements ⟨car, road⟩ and ⟨road, truck, river⟩; then the result may contain ⟨vehicle, road⟩, which is more meaningful than other possible combinations.

Intersection is a bit different from union, because in intersection only a single element in each of the input tuples is chosen, just like in our normal relations. These elements are used to compute the intersection just like in normal set operations. The resulting tuple can include elements from all incoming tuples. If, for instance, ⟨car, river⟩ is intersected with ⟨car, road⟩ and car is chosen in both tuples, then the result could be ⟨car, road, river⟩.

Set difference is similar to intersection in the respect that only one element in each of the participating tuples is selected. Contrary to intersection, set difference is a directed relation, so all elements that are in the first but not in the second tuple are kept as the result. Similarly to intersection, the resulting tuple may contain elements from both participating tuples.

The not operator can be applied to all relations. It inverts the relation, so that all results for which the relation would hold are excluded, and vice versa. Not is visually denoted by drawing a line diagonally across the icon, see Fig. 5.

3.5. Temporal conditions

A classical work on temporal relations is that of Allen [6], who identified 13 binary relations between pairs of time intervals. These relations are: before, after, meets, metBy, overlaps, overlappedBy, finishes, finishedBy, starts, startedBy, contains, during and equals. We have chosen to use the same icons for these relations as Hibino et al. [21]; the only difference is that we use squares instead of circles at the ends of the intervals, see Fig. 6. An extended discussion of the temporal aspects of VisualΣQL can be found in [20].

Fig. 5. Not applied to the inside relation.

Fig. 6. Visualization of Allen’s 13 temporal relations.
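For illustration, the following sketch classifies a pair of closed intervals, given as plain numeric endpoints, into one of Allen's 13 relations; it is our own minimal rendering, not code from the system:

```python
"""Classify two closed time intervals into one of Allen's 13 relations [6]."""

def allen(a_start, a_end, b_start, b_end):
    # Each case matches the endpoint pattern of one of Allen's relations.
    if a_end < b_start:                       return "before"
    if b_end < a_start:                       return "after"
    if a_end == b_start:                      return "meets"
    if b_end == a_start:                      return "metBy"
    if (a_start, a_end) == (b_start, b_end):  return "equals"
    if a_start == b_start:
        return "starts" if a_end < b_end else "startedBy"
    if a_end == b_end:
        return "finishes" if a_start > b_start else "finishedBy"
    if b_start < a_start and a_end < b_end:   return "during"
    if a_start < b_start and b_end < a_end:   return "contains"
    return "overlaps" if a_start < b_start else "overlappedBy"

print(allen(12, 18, 15, 20))   # overlaps
print(allen(0, 5, 5, 9))       # meets
```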

Fig. 7. Find all vehicles close to a power plant at a certain time.

In analogy to the spatial queries, there is also a need for relating objects to fixed points in time or time intervals. Thus, functionality for creation of a temporal attribute entity (TAE) has been included, see Fig. 7. If only a discrete point in time is needed, the start and end of the time interval are set equal. Consequently, Allen's relations [6] can be used to relate objects to fixed times as well.

Determination of tracks is a quite common task in most sensor data systems. Producing tracks from sensor data by means of a query language requires special attention. When general attribute relations are combined with either spatial or temporal relations, it is quite simple to determine tracks. However, when general attribute relations are combined with both spatial and temporal relations, the system must keep track of which observations fulfil both the temporal and the spatial relations, since otherwise simultaneousness may be lost.

The objects related through temporal relations, just like the spatial relations, require certain properties. While the spatial relations may have up to three different properties, the temporal relations have just a single one, i.e. has_Time. The reason is, of course, that time is 1D while space, so far in our system, is 2D.
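The point about simultaneousness can be made concrete with a small sketch; the observation record and function names are illustrative assumptions:

```python
"""Sketch: when spatial and temporal conditions are combined, both must hold
for the *same* observation; two independent filtering passes could match an
object that was in the AOI at one time and in the IOI at another."""
from dataclasses import dataclass

@dataclass
class Observation:
    obj_id: str
    t: float                  # observation time
    pos: tuple                # observed position (x, y)

def track(observations, in_aoi, in_ioi):
    # Keep observations fulfilling the spatial AND the temporal relation
    # simultaneously, then order them by time to form a track.
    hits = [o for o in observations if in_aoi(o.pos) and in_ioi(o.t)]
    return sorted(hits, key=lambda o: o.t)

obs = [Observation("v1", 10.0, (1, 1)), Observation("v1", 99.0, (9, 9))]
print(track(obs, in_aoi=lambda p: p[0] < 5, in_ioi=lambda t: t < 50))
# -> only the observation that was inside the AOI *during* the IOI
```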

3.6. Uncertain relations

Topological relations concern the way in which two geometrical objects can relate to each other in two dimensions. In [22], Egenhofer identifies eight atomic topological relations for areas: disjoint, contains, inside, equals, meet, covers, coveredBy and overlap, see Fig. 8. These relations assume that the exact extension of the objects is known. When uncertainties are introduced these relations are not sufficient, since the exact positions of the edges of the surfaces are not known. One way to handle this is to add a broad boundary [23]. Then the centre of the area signifies the absolutely certain minimum area, and the outer boundary signifies, for instance, the 80% certainty of the area. The resulting number of topological relations between areas with broad boundaries is not eight but 44, or 52 [11] if we allow holes in the areas. However, in the work described here this number has been considerably reduced due to the object types that are the subject of the study; that is, in this case some obvious simplifications have been made, which are discussed further subsequently. The uncertainty aspect of VisualΣQL is further discussed in [24].

Fig. 8. The eight topological relations. Contains/inside and covers/coveredBy each stand for two cases, since it depends on which area surrounds which.

Fig. 9. The only two relevant topological relations when relating mobile artefacts to mobile artefacts.

In our system the focus is on mobile objects acquired from sensor data. Mobile objects can have uncertainties concerning both their size and their location. These uncertainties could be handled with broad boundaries. A consequence of limiting ourselves to mobile objects is that the uncertainties in size are limited as well. If we are able to identify the type of the objects, for example car or truck, then the uncertainty in size becomes even more limited. The magnitude of the uncertainty in size is negligible compared to the possible uncertainty in position; even the size of the objects can be neglected compared to the usual uncertainty in position. Consequently, we can approximate a mobile object with just an area that equals the uncertainty in position.

The topological relations between two areas are limited to the eight basic relations described in Fig. 8, but we should consider that since we are talking about mobile objects they cannot in practice overlap, not even a little. Thus the relations contains, inside, equals, covers, coveredBy and overlap have no equivalence in reality for this type of objects and may for this reason be excluded. Mobile objects may only be close to each other, which is called proximity and is equivalent to the topological relations meet, contains, inside, equals, covers, coveredBy and overlap; or they can be distant, which corresponds to the relation disjoint. Thus these eight relations can be reduced to just two, see Fig. 9.

As was described earlier, mobile objects with uncertain positions can be approximated with an area corresponding to the uncertainty in position. Background objects, for instance forests, lakes and cities, on the other hand are better approximated with an area including a broad boundary. The kernel of the area corresponds to the part which with certainty belongs to the object, and the broad boundary corresponds to the uncertain part. The number of possible topological relations between two simple areas is eight according to Egenhofer [22]. If the areas have broad boundaries the number of possible relations is 44 according to Clementini et al. [23]. Another aspect to consider is that, since the boundaries of the areas are based on uncertainty, it does not seem realistic to include a relation that requires the borders to coincide exactly, like the classical relation meet, see Fig. 8. This gives two rules:

(1) The borders of the objects never coincide exactly.
(2) Since the simple area is only an approximation of a point object, it can never cover or overlap another area.

Fig. 10. The nine topological relations between a simple area and an area with a broad boundary when the areas do not contain any holes.

Fig. 11. The three additional topological relations between a simple area and an area with a broad boundary containing holes.

When reviewing the 44 relations and removing all variants that break at least one of these two rules, the result is nine topological variations. These have been grouped into five groups: inside, probably inside, possibly inside/possibly outside, probably outside and outside, see Fig. 10. In effect, the result is only four possible relations.

In [11] Clementini et al. introduced relations between two area objects with broad boundaries that may also have holes. This gives an additional eight relations. When evaluating those relations we add one rule to our set:

(3) The point object may not have a hole in its area.

The review of these eight relations results in three remaining relations that are applicable to our case. All three of these relations can be put into the existing group possibly inside/possibly outside, see Fig. 11.
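A small sketch of how the five groups of Fig. 10 could be decided for a point object approximated by an uncertainty disc; the concentric-disc geometry and the thresholds are purely illustrative assumptions, not the classification used in [24]:

```python
"""Classify a mobile point object (approximated by its positional-uncertainty
disc) against a background area with a certain kernel and a broad boundary."""
import math

def classify(point, radius, kernel_r, broad_r, centre=(0.0, 0.0)):
    # Background area approximated as concentric discs around `centre`:
    # kernel_r = radius of the certain kernel, broad_r = outer radius of
    # the broad boundary; (point, radius) is the object's uncertainty disc.
    d = math.dist(point, centre)
    if d + radius <= kernel_r:
        return "inside"                            # disc fully in the kernel
    if d - radius <= kernel_r:
        return "probably inside"                   # disc straddles the kernel edge
    if d + radius <= broad_r:
        return "possibly inside/possibly outside"  # disc within broad boundary
    if d - radius <= broad_r:
        return "probably outside"                  # disc straddles the outer edge
    return "outside"                               # disjoint from the whole area

print(classify((3.0, 4.0), 1.0, kernel_r=4.0, broad_r=7.0))
# -> 'probably inside' (distance 5, disc overlaps the kernel boundary)
```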


4. The simulation framework

4.1. MOSART

MOSART [3] is a platform for integration of research results. The primary objective of MOSART is to simplify integration of research results into larger simulations and demonstrators. MOSART has a modular software environment which provides basic simulation functionality and enables an efficient integration of in-house and commercial software. It contains four main parts:

• Software for integration.
• Basic features for simulation.
• Integrated results from research projects.
• Gateway to reality.

These main parts make it easier for research projects to evaluate results in a larger context by setting up more advanced simulations and demonstrators. This is accomplished by integration of the personal research results, the basic features and the other research results integrated in the system (Fig. 12).

Fig. 12. An overview of the MOSART system.

Modules and integration: The software for integration of modules decreases the complexity of developing large simulations and demonstrators, since it minimizes the demand for user knowledge in the area of distributed simulation (high level architecture, HLA). MOSART also provides a number of simulation support features, such as a scenario editor and engine, visualization in 2D and 3D, maps and high-resolution synthetic environments, real sensor data, logging, and a manager for federation and scenario management. In addition, there are the modules that research projects have integrated into MOSART, which can be reused by other research projects.

4.2. Sensor models

MOSART includes a set of sensor models, of which the ones described below have been used in the scenario that demonstrates the dynamic aspects of ΣQL.

CARABAS: CARABAS (Coherent All RAdio BAnd Sensing) [25] is a SAR [26] carried by an aeroplane. Compared to traditional radars, CARABAS uses a very long wavelength, typically 3–15 m, unlike for example the much more common microwave SAR, which uses wavelengths on the order of centimetres.

Fig. 13. Objects in a forest as seen by CARABAS.

Fig. 14. A simulated IR image.

Objects that are much smaller than the wavelength do not significantly affect the result of the radar. The effect of this is that CARABAS can see through vegetation, i.e. tree trunks and branches in a forest do not prevent the radar from seeing what is on the ground. Objects that have been hidden in a forest [27] and are detected by CARABAS in spite of that can be seen in Fig. 13. In addition to the objects it is also possible to see detections of tree trunks. CARABAS typically flies at a distance of 12 km and at an altitude of 6 km.

IR-sensor and processing: For this scenario we have used a simulated IR-sensor that has been developed for use in simulations concerning network centric warfare [28]. In the simulation, the sensor model is made to be carried by a helicopter or an unmanned aerial vehicle (UAV).

Fig. 15. Overview of an unattended ground sensor network.

It is fast, but still gives realistic results, see Fig. 14. The images produced by the IR-sensor are processed by an algorithm that uses particle filtering, which makes it possible to track moving vehicles seen by the IR-sensor.

Ground sensor net: A sensor network for ground surveillance can be used for detection, tracking and classification of vehicles. It is made up of arrays of either acoustic or seismic sensors, i.e. microphones and geophones. There is signal processing in each of the sensor nodes. All sensors know their position and orientation. In a multi-hop radio network the nodes can communicate with each other and thus associate and fuse their combined data. The unattended ground sensor network used here has not yet been put into commercial practice, but it exists as a simulation connected to MOSART. The simulation has been verified with data from real microphones and geophones [29], see Fig. 15.

4.3. Tools for decision support

SB-plan: SB-plan (Simulation-Based support for resource allocation and mission PLANning) [30] is a tool for aiding and improving the planning work of a decision maker. Nowadays much planning is done with pen and paper, so the idea of SB-plan is to help the decision maker transfer this work into a computer. One possible use for SB-plan is sensor management. In this case SB-plan can aid the decision maker in calculating an optimal path for a UAV, which is the platform for some types of sensors, and which places the UAV needs to visit along a specified path. It can also calculate where it would be optimal to place stationary sensors, e.g. ground sensor networks, in order to monitor movements in an area, considering what is known about the enemy positions and assumed goals.

Association analysis: Association analysis is a tool for associating observations of vehicles that follow roads [31]. Association is the process of deciding which observations are observations of the same vehicle. There are several reasons for making associations in this way. In general, association, when it is possible to make, gives fewer tracks and thus a less complex situation to analyse and understand. One important question to answer is how many vehicles have been observed; without association they can seem to be many more than they really are. With association the possibilities for visualization also increase, since it is possible to make guesses about which route each vehicle has taken. One important objective is to give an estimate of the movement patterns in the situation.

The method has two main components (a schematic code sketch follows the lists below):

• Estimation of the possibility of association between each pair of observations.
• Global optimization to find the combination of associations that gives the highest total probability.

The estimate of the possibility of association between each pair of observations is based on the following factors:

• Difference in time between the observations.
• Length of the shortest route between the observations.
• A probability distribution of the estimated velocity of the vehicle.
• Possible classification of the types of the observations.
• The degree of rationality that the route between the observations would imply.
• An a priori distribution of the probability of different degrees of rational choice of route.
• Estimated density of vehicles.

5. The scenario description

5.1. Background

In the country of Crisendo live two ethnic groups, the A-people and the B-people. The A-people are a minority living in small enclaves. In this particular scenario we are concerned with an enclave that contains the mountain Great Wredski Peak. This mountain was the site of a big battle between the two ethnic groups in 1455. The battle ended on the 12th of November with the defeat of the A-people. A small group of B-people annually celebrates the victory by marching to a place a bit south of the peak. During the last few years the celebration has been rather small and only a small group of nationalists has participated. Because of the increasing tension between the two ethnic groups, the enclave is now under the protection of the United Nations (UN) (Fig. 16). This year there have been rumours that the nationalists have gained the sympathy of many more people, and thus there may be problems in protecting the enclave. The scenario begins at midnight on the 12th of November 2005, the year of the 550th anniversary of the battle.

5.2. Purpose and goal

The purpose of this scenario is to show how it is possible to gather information about the situation and present it to decision makers in order to assist them in the decision process. The goal is to give the decision makers information that is as correct as possible, i.e. to give them the possibility to make correct decisions about which measures to take before and during the upcoming event.

Fig. 16. The initial situation on the morning of the 12th of November 2005.

5.3. Resources available to the UN troops

The UN troops have half a platoon deployed as an observation post at a point near the route to the peak. The UN company is stationed a few kilometres south of the mountain. The company has a UAV, which carries an IR-sensor and can carry and deploy six small ground sensor networks. In total, the UN company has 10 ground sensor networks available. Two permanent ground sensor networks have been deployed at the probable approach routes to the peak. The road leading north from the peak is covered by a minefield, preventing vehicles and people from entering the area by that route. If needed, an aircraft carrying a CARABAS system can be requested.

5.4. Tools

The scenario is simulated in the simulation environment MOSART, which uses several sensor models to produce realistic data. Three different tools for aiding the decision process have been used: the query language ΣQL for searching the data, the association analysis tool that uses the data found by ΣQL, and SB-plan, used to decide where to deploy ground sensor nets (Fig. 17).

Fig. 17. The result returned from ΣQL at 2:37.

Fig. 18. The estimated location of the vehicles that arrived at 2:30.

6. Course of events

The scenario begins at midnight on 12 November 2005. The UN troops have received rumours that there will be a larger demonstration than usual the next day, but the rumours are unconfirmed. With the help of ΣQL the area is monitored, and incoming data are used to search for vehicles in the area. A simple query is applied (sketched in code below):

AOI: a rather large area covering the vicinity of the peak, but none of the nearby towns.
Time: from midnight and continuously forward.
Object type: vehicles.

At half past two in the morning the passage of three vehicles, in an east-westerly direction, is registered by the southern stationary ground sensor net. As the observation post has not detected any vehicles on the big road going south, the conclusion is that the vehicles have stopped between the sensor network and the big road, see Fig. 18. The vehicles have been classified by the net as one large vehicle, i.e. a truck or bus, and two small vehicles, i.e. passenger cars.
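Expressed with the hypothetical ElementaryQuery sketch from Section 3.2, this monitoring query could look as follows (the coordinates are invented):

```python
from datetime import datetime

# Continuous monitoring query of the scenario: vehicles in the vicinity of
# the peak, from midnight onward (open-ended IOI = continuous repetition).
monitor = ElementaryQuery(           # class from the sketch in Section 3.2
    aoi=[(58.10, 15.20), (58.20, 15.20), (58.20, 15.35), (58.10, 15.35)],
    ioi_start=datetime(2005, 11, 12, 0, 0),
    ioi_end=None,
    object_type="vehicle",
)
```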

Fig. 19. The known situation at 6:12.

Fig. 20. The vehicles on roads that CARABAS found, presented by ΣQL.

At dawn, 6:12, the observation post reports that a roadblock has been set up during the night, north-east of the observation post. It consists of one truck and three passenger cars (Fig. 19). The conclusion is that the vehicles south-east of the observation post that arrived during the night form a sort of roadblock as well. Together they are probably meant to keep the UN troops locked in, unable to patrol the area. Since the UN troops have flying reconnaissance available, that will be used instead of forcing a confrontation: CARABAS is requested to fly across the area to find further information about the situation.

When the detections from CARABAS are delivered they are found to be too many; it is not possible to take a closer look at them all (>100 observations) (Fig. 20). With ΣQL, vehicles on roads can be found. The query to ΣQL was:

AOI: the area covered by CARABAS, but excluding the nearby towns.
Time: the time of the CARABAS detections.
Object type: vehicles.
Condition: vehicles on road.

Fig. 21. The route determined by SB-plan.

Since CARABAS only delivers detections and no detailed information about the observed objects, it is time to use another sensor. The UN troops have a UAV with an IR-camera available; it is sent out to take a closer look at all these observations. Since there are obviously activities going on in the area, the UAV is also used to drop additional sensor nets. To find the best positions for these nets the tool SB-plan is used. SB-plan contains information about the roads in the area. The commander also adds information about the most probable sources of "enemy traffic", which in this case are the nearby towns, and the most probable goal, which in this case is the peak. The tool then finds places, usually crossings, that will cover as much of the possible traffic as possible. Then SB-plan is used to find a route between the places for the new nets and the observations from the CARABAS system (Fig. 21).

Around 8:30 detections from the new nets start to come into the system. This time it is larger vehicles, like trucks or buses, that are observed. The result presented by ΣQL can be seen to the left in Fig. 22. It is not easy to know from the information given by ΣQL how many trucks or buses there are in the area; ΣQL only gives the detections from all possible sources, in this case not only the sensor nets but also the UAV. To facilitate the interpretation of the detected vehicles the data are fed into the tool for situation analysis. The tool tries to determine which observations correspond to the same vehicle detected several times, and approximates the route that the vehicle has taken between these points. The tool animates the tracks to make it easy for a human to understand what is going on; it colours the verified detections red and the approximations yellow, see the right part of Fig. 22.

Between 10 and 10:30 additional vehicles are detected and visualized by ΣQL. The observed objects are then associated by means of the association tool. By now a lot of possible demonstrators have arrived, and in addition some vehicles that can be assumed to contain armed persons.

Fig. 22. The vehicles that were detected between 8:30 and 8:45. On the left the presentation in ΣQL, on the right the presentation in the tool for situation analysis.

Fig. 23. The end of the scenario as given by the tools (left) and as set up in the scenario (right).

That is the end of the simulation, since the UN troops now have enough information to decide what actions to take. The situation as presented with the combined information from the tools can be seen to the left in Fig. 23. The truth, as given by the simulation engine, is presented to the right in Fig. 23. The figures are schematic, but the tools show the correct number of vehicles in all cases except for one bus that was missed by the sensor nets because it managed to choose a route that was not covered. The situation as presented by the tools is thus very close to the "real" ground-truth situation in the simulation. An abbreviated description of the scenario described above can also be found in [32].

7. Conclusions

The system described in this work can basically be seen as a system that supports decision making in dynamic processes taking place in geographic environments, where the data sources correspond to multiple sensors of various types. The system contains a set of services, of which the query language is the dominant one. However, the system can be used not only for test and demonstration purposes but also for system development purposes, in a recursive way, where the aim is to develop and apply new services under controlled forms. As demonstrated, the query language can systematically be used over long periods of time, and consequently it can also be subject to future research on extended query-language facilities, e.g. how to apply queries about various types of time-dependent object behaviour. Applications of interest for a dynamic set-up of the kind described in this work primarily concern the integration of services, in particular the query language, into, e.g., military and civilian command and control systems, where crisis management applications can be of special concern.

Future research activities where the scenario-driven environment and its dynamic capacity can be used include extending the query language so that it can be used for querying object behaviour, that is, to determine how dynamic objects may behave in various situations. This could involve behaviour that is abnormal and takes place at odd places and time intervals. Clearly, this requires knowledge about what is normal and a general formalism for describing object behaviour in the query language. Another issue that must be studied further is how to quickly generate reliable and realistic scenarios, but this will require an extension of the simulation framework.

References

[1] D.L. Hall, J. Llinas (Eds.), Handbook of Multisensor Data Fusion, CRC Press, New York, 2001.
[2] T. Horney, E. Jungert, M. Folkesson, An ontology controlled data fusion process for query language, in: Proceedings of the International Conference on Information Fusion 2003 (Fusion'03), Cairns, Australia, July 8–11, 2003.
[3] M. Tyskeng, MOSART—instructions for use, FOI-R-1098-SE, December 2003.
[4] P. Louvieris, N. Mashanovich, S. Henderson, G. White, M. Petrou, R. O'Keefe, Smart decision support system using parsimonious information fusion, in: Proceedings of the International Conference on Information Fusion, Philadelphia, PA, USA, July 2005.
[5] J.D.N. Dionisio, A.F. Cardenas, MQuery: a visual query language for multimedia, timeline and simulation data, Journal of Visual Languages and Computing 7 (4) (1996) 377–401.
[6] J.F. Allen, Maintaining knowledge about temporal intervals, Communications of the ACM 26 (11) (1983) 832–843.
[7] L. Chittaro, C. Combi, Visualizing queries on databases of temporal histories: new metaphors and their evaluation, Data & Knowledge Engineering 44 (2) (2003) 239–264.
[8] A. Abdelmoty, B. El-Geresy, Qualitative tools to support visual querying in large spatial databases, in: Proceedings of the Workshop on Visual Language and Computing, Miami, Florida, September 24–26, 2003, pp. 300–305.
[9] C. Bonhomme, M.-A. Aufaure, C. Trépied, Metaphors for visual querying of spatio-temporal databases, in: Advances in Visual Information Systems, Fourth International Conference, VISUAL 2000, Proceedings, Lecture Notes in Computer Science, vol. 1929, 2000, pp. 140–153.
[10] S.K. Chang, The sentient map, Journal of Visual Languages and Computing 11 (4) (2000) 455–474.
[11] E. Clementini, P. Di Felice, A spatial model for complex objects with a broad boundary supporting queries on uncertain data, Data & Knowledge Engineering 37 (2001) 285–305.


[12] O. Ahlqvist, J. Keukelaar, K. Oukbir, Rough and fuzzy geographical data integration, International Journal of Geographical Information Science 14 (5) (2000) 475–496.
[13] F. Lehmann, A.G. Cohn, The egg-yolk reliability hierarchy: semantic data integration using sorts with prototypes, in: Proceedings of the Third International Conference on Information and Knowledge Management, ACM Press, New York, 1995, pp. 272–279.
[14] S.-K. Chang, E. Jungert, Query languages for multimedia search, in: M.S. Lew (Ed.), Principles of Visual Information Retrieval, Springer, Berlin, 2001, pp. 199–217.
[15] S.-K. Chang, G. Costagliola, E. Jungert, Multi-sensor information fusion by query refinement, in: Recent Advances in Visual Information Systems, Lecture Notes in Computer Science, vol. 2314, Springer, Berlin, 2002, pp. 1–11.
[16] S.-K. Chang, G. Costagliola, E. Jungert, F. Orciuoli, Querying distributed multimedia databases and data sources for sensor data fusion, IEEE Transactions on Multimedia 6 (5) (2004) 687–702.
[17] T. Horney, J. Ahlberg, E. Jungert, M. Folkesson, K. Silvervarg, F. Lantz, J. Fransson, C. Grönwall, L. Klasén, M. Ulvklo, An information system for target recognition, in: Proceedings of the SPIE Conference on Defense and Security, Orlando, Florida, April 12–16, 2004.
[18] T. Horney, Design of an ontological knowledge structure for a query language for multiple data sources, FOI, Scientific Report, May 2002, FOI-R-0498-SE.
[19] K. Silvervarg, E. Jungert, Visual specification of spatial/temporal queries in a sensor data independent information system, in: Proceedings of the 10th International Conference on Distributed Multimedia Systems, San Francisco, California, September 8–10, 2004, pp. 263–268.
[20] K. Silvervarg, E. Jungert, A visual query language for uncertain spatial and temporal data, in: Proceedings of the Conference on Visual Information Systems 2005 (VISUAL 2005), Amsterdam, The Netherlands, July 2005, pp. 163–176.
[21] S. Hibino, E.A. Rundensteiner, User interface evaluation of a direct manipulation temporal visual query language, in: Proceedings of ACM Multimedia 97, Seattle, WA, USA, November 9–13, 1997, pp. 99–107.
[22] M. Egenhofer, Deriving the composition of binary topological relations, Journal of Visual Languages and Computing 5 (2) (1994) 133–149.
[23] E. Clementini, P. Di Felice, Approximate topological relations, International Journal of Approximate Reasoning 16 (2) (1997) 173–204.
[24] K. Silvervarg, E. Jungert, Uncertain topological relations for mobile point objects in terrain, in: Proceedings of the 11th International Conference on Distributed Multimedia Systems, Banff, Canada, September 5–7, 2005, pp. 40–45.
[25] H. Hellsten, L.M.H. Ulander, A. Gustavsson, B. Larsson, Development of VHF CARABAS-II SAR, in: Proceedings of Radar Sensor Technology, SPIE, vol. 2747, 1996, pp. 48–60.
[26] W.H. Carrara, R.M. Majewski, R.S. Goodman, Spotlight Synthetic Aperture Radar: Signal Processing Algorithms, Artech House, 1995, ISBN 0890067287.
[27] L.M.H. Ulander, B. Flood, P. Follo, P. Frölind, A. Gustavsson, T. Jonsson, B. Larsson, M. Lundberg, Stenström, CARABAS-II campaign Vidsel 2002—flight report, FOI, FOI-R-1002-SE, 2003.
[28] S. Nyberg, IR-sensormodell för markspaning med flygande farkost [IR-sensor model for ground reconnaissance with an airborne vehicle], FOI-R-1224-SE, 2003 (in Swedish).
[29] M. Brännström, R.K. Lennartsson, A. Lauberts, H. Habberstad, E. Jungert, M. Holmberg, Distributed data fusion in a ground sensor network, in: Proceedings of the Seventh International Conference on Information Fusion, Stockholm, Sweden, June 28–July 1, 2004.
[30] C. Mårtenson, P. Svenson, Evaluating sensor allocations using equivalence classes of multi-target paths, in: Proceedings of the Eighth International Conference on Information Fusion (FUSION 2005), Philadelphia, USA, July 25–29, 2005, IEEE, Piscataway, NJ, 2005, Paper B9-1, pp. 1–8.
[31] D.J. Salmond, N.J. Gordon, Group tracking with limited sensor resolution and finite field of view, in: Proceedings of SPIE, vol. 4048, Signal and Data Processing of Small Targets 2000, April 2000.
[32] K. Silvervarg, E. Jungert, A scenario driven decision support system, in: Proceedings of the 12th International Conference on Distributed Multimedia Systems, Grand Canyon, AZ, USA, August 30–September 1, 2006, pp. 187–192.