
Copyright © IFAC Transportation Systems, Chania, Greece, 1997

AIDDS: A SYSTEM FOR DEVELOPING AND TESTING INCIDENT DETECTION ALGORITHMS

John Hourdakis, MS

Research Fellow, University of Minnesota

Abstract: Incident management is a major problem in traffic control; traffic incidents are the cause of more than half of all traffic delays. The research undertaken at the University of Minnesota has reached a point where good knowledge of different incident detection techniques has been achieved. The computer program presented in this paper is designed to assist researchers in testing incident detection algorithms. The primary gains from using this program are the reduction of the time needed for algorithm testing and the flexibility in designing the test site. With this program the user can assign individual threshold sets to every section and use multiple algorithms simultaneously. The algorithms included in the version presented in this paper are DELOS(3,3), CALIFORNIA, Alg. #7, and Alg. #8. Some of the features that make this program unique are its ability to combine measurements from the field to create "pseudo" detectors, its capability to automatically judge whether a detection is valid, and its ability to combine incident detection algorithms to improve detection performance.

Keywords: Detection algorithms, Traffic control, Computer aided testing, Filtering techniques, Pattern recognition.

1. INTRODUCTION

Interest in automatic incident detection (AID) methods has recently been renewed. With the advance of technology, Intelligent Transportation Systems (ITS) are no longer a fantasy; the first Advanced Traffic Management and Information Systems (ATMIS) have passed the prototype stage. While ATMIS can address the predictable effects of the operational and geometric bottlenecks that characterize recurring congestion, their effectiveness is truly tested during periods of nonrecurring congestion caused by incidents.

Responding to the need for high-performance AID systems that can be integrated with ATMIS, a number of research projects have been initiated around the world to develop and evaluate advanced techniques for incident detection. Despite the substantial research, algorithm implementation has been hampered by limited performance reliability, substantial implementation needs, and strong data requirements. Primarily, the high number of false alarms has discouraged traffic engineers from integrating these algorithms into automated traffic operations. Instead, algorithm alarms typically trigger the operator's attention; the operator verifies the validity of the alarm and decides on the appropriate incident response (Chassiakos and Stephanedes, 1993). The negative impressions that accompany incident detection algorithms are mainly due to inadequate knowledge of their attributes.

The algorithms are usually developed and tested on a limited database describing a specific test site. In order to better understand an incident detection algorithm, a test site with wider diversity in terms of geometric characteristics, bottlenecks, weaving areas, and detector spacing and location with respect to ramps is needed, along with great diversity in the incident set with regard to incident type, severity, and location. For the past three years, the University of Minnesota has placed emphasis on creating a test site with the above characteristics. With the collaboration of MnDOT and other organizations such as the University of California, the creation of a large database was made possible. The database currently contains data from three sources. The largest part consists of two years of loop detector data from all the freeways in the Minneapolis/St. Paul, Minnesota, area. Machine vision is used to collect measurements from the fully instrumented part of freeway I-394 in Minneapolis. Finally, data from freeway I-880 in California were added courtesy of the University of California. This large amount of data is accompanied by the corresponding incident database as the traffic management center (TMC) operators created it. More details about the test sites can be found in the work by Stephanedes and Hourdakis (1996).

AID algorithm research and development has always been very active at the University of Minnesota; a number of highly acclaimed incident detection algorithms, such as DELOS (Detection Logic with Smoothing) (Stephanedes and Chassiakos, 1993a,b), were developed there. During work previously introduced by the author (Stephanedes and Hourdakis, 1996), a number of incident detection algorithms were evaluated using a small part of the aforementioned database. During this work the absence of an algorithm testing system became apparent; the time and effort available for testing actually dictated its extent.

The purpose of this paper is to present the Automatic Incident Detection Development System (AIDDS), a computer program designed to assist researchers in testing incident detection algorithms. The primary benefit of this program is the reduction of the time needed for algorithm testing. A large gain also comes from the flexibility in the design of the test site. The rest of this paper is divided into two parts. In the first part a brief description is given of the incident detection algorithms currently included in the system. In the second part the system's features and specifications are presented.

2. INCIDENT DETECTION ALGORITHMS

Automatic incident detection (AID) involves two major elements: a traffic detection system that provides the traffic information necessary for detection, and an incident detection algorithm that interprets the information and ascertains the presence or absence of a capacity-reducing incident.

A number of AID algorithms have been developed based on typical loop detector data. The most important include the comparative or pattern recognition algorithms, which seek predefined incident patterns in the traffic flow (Payne and Tignor, 1978; Levin and Krause, 1978); the type employing statistical forecasting of traffic behavior and declaring incidents when actual traffic deviates strongly from the forecasts (time-series algorithms); and the type separating the flow-occupancy diagram into areas corresponding to different states of traffic conditions and detecting incidents after observing short-term changes of the traffic state. These algorithms operate on detector output of 30-60 s occupancy and volume data. Additional methods involve detection of stationary or slow-moving vehicles; macroscopic traffic flow modeling to describe the evolution of traffic variables; filtering to reduce the undesired effects of traffic disturbances; and neural networks to take advantage of learning processes. In the AIDDS version presented in this paper four algorithms have been implemented: DELOS with exponential smoothing, CALIFORNIA, Alg. #7, and Alg. #8. A short description of each algorithm follows.

2.1 DELOS

DELOS algorithms involve smoothing occupancy measurements to distinguish short-duration traffic inhomogeneities from incidents. Although smoothing may conceal the patterns of some non-severe incidents, this is of lower priority to users. Furthermore, in a manner similar to, but more effective than, previous algorithms, the algorithms attempt to distinguish recurrent from incident congestion on the basis of the slow or fast evolution of congestion, respectively. In particular, the distinguishing logic is based on temporal comparison of the control variable, the occupancy difference between adjacent stations.

Two smoothed values are considered for the control variable, one representing current traffic conditions and one past conditions. For an incident occurring at time t, define OCC_i(t+k), the smoothed occupancy at station i from k occupancy values after t, and OCC_i(t), the smoothed occupancy at station i from n occupancy values prior to t, where k and n represent the window sizes used to smooth the data for the current and past periods, respectively. An incident is likely to create congestion at the upstream station i and reduce flow at the downstream station i+1, leading to a high value of the occupancy difference ΔOCC(t+k), as described in

ΔOCC(t+k) = OCC_i(t+k) - OCC_{i+1}(t+k)    (1)

Further, to distinguish from bottleneck congestion, we compare the occupancy difference ΔOCC(t+k) for the current period to the corresponding value ΔOCC(t) from the past period, where

ΔOCC(t) = OCC_i(t) - OCC_{i+1}(t)    (2)

Both tests, congestion and incident, are normalized by the higher of the two occupancies, upstream and downstream, as in

maxOCC(t) = max[ OCC_i(t), OCC_{i+1}(t) ]    (3)

This reflects changes with respect to the conditions existing prior to the incident, and the normalization increases the potential for algorithm transferability across locations. In summary, the detection logic involves two tests, the congestion test (eqn 4) and the incident test (eqn 5), where T_c and T_i are the respective thresholds:

ΔOCC(t+k) / maxOCC(t) > T_c    (4)

[ΔOCC(t+k) - ΔOCC(t)] / maxOCC(t) > T_i    (5)

The major concerns in selecting a smoothing technique are related to its effectiveness in eliminating undesirable false alarm sources, the extent to which smoothing distorts the information content of incident patterns, and the detection delay imposed by the need to obtain a number of measurements while an incident is in progress. Exponential smoothing is an effective smoothing technique, extensively used in determining data trends. The general form of the smoother is

OCC_i(t) = a O_i(t) + (1 - a) OCC_i(t-1)    (6)

where O_i(t), the occupancy measurement at time t and detector station i, is smoothed via a, the smoothing factor (Stephanedes and Hourdakis, 1996).
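To make the logic above concrete, the following minimal sketch (in Python, with hypothetical names, default values, and section handling) applies the exponential smoother of eqn (6) and evaluates the congestion and incident tests of eqns (4) and (5). It is an illustration of the equations only, not code from AIDDS.

```python
# Illustrative sketch of the DELOS two-station tests (eqns 1-6).
# Names and the way the two tests are combined are assumptions.

def exp_smooth(raw_occ, prev_smoothed, a=0.3):
    """Exponential smoother, eqn (6): OCC_i(t) = a*O_i(t) + (1-a)*OCC_i(t-1)."""
    return a * raw_occ + (1.0 - a) * prev_smoothed

def delos_tests(occ_up_cur, occ_dn_cur, occ_up_past, occ_dn_past, t_c, t_i):
    """Return (congestion_test, incident_test) for one freeway section.

    occ_*_cur  -- smoothed occupancies over the window after time t
    occ_*_past -- smoothed occupancies over the window before time t
    t_c, t_i   -- congestion and incident thresholds
    """
    d_occ_cur = occ_up_cur - occ_dn_cur            # eqn (1)
    d_occ_past = occ_up_past - occ_dn_past         # eqn (2)
    max_occ = max(occ_up_past, occ_dn_past, 1e-6)  # eqn (3), guarded against zero
    congestion = d_occ_cur / max_occ > t_c                # eqn (4)
    incident = (d_occ_cur - d_occ_past) / max_occ > t_i   # eqn (5)
    return congestion, incident
```

In this reading an incident alarm would be raised only when both tests pass; the paper itself does not spell out the combination, so that detail is an assumption.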

2.2 CALIFORNIA

The basic structure of the California algorithm is a decision tree consisting of a set of states, each corresponding to a predefined traffic condition. The occupancy, which is the percentage of time a roadway sensor is occupied, is the basic measurement variable employed in the algorithm. An incident message is produced after the following three predefined conditions are satisfied:

1. OCCDF(i,t) > T_1 (spatial difference of occupancies)
2. OCCRDF(i,t) > T_2
3. DOCCTD(i,t) > T_3

where

OCCDF(i,t) = OCC(i,t) - OCC(i+1,t)    (7)

OCCRDF(i,t) = OCCDF(i,t) / OCC(i,t)    (8)

DOCCTD(i,t) = [OCC(i+1,t-2) - OCC(i+1,t)] / OCC(i+1,t-2)    (9)

OCC(i,t) is the occupancy at station i during time interval t (in percent), and T_1, T_2, T_3 are predefined station-specific thresholds.

2.3 Algorithm #7

Algorithm #7 is a small variation of the CALIFORNIA algorithm, with three differences. Whereas the CALIFORNIA algorithm produces an incident signal whenever OCCDF, OCCRDF, and DOCCTD are greater than the associated thresholds, Algorithm #7 replaces DOCCTD with the downstream occupancy DOCC, suppresses incident signals after the initial detection, and contains a persistence requirement that OCCRDF be greater than its threshold for two consecutive intervals. The specific conditions therefore involve OCCDF(i,t), OCCRDF(i,t) (tested over two consecutive intervals), and DOCC(i+1,t), each compared against its station-specific threshold.

2.4 Algorithm #8

Algorithm #8 can in turn be considered a variation of Algorithm #7; the difference is that Algorithm #8 incorporates a five-interval compression-wave check. Its conditions again involve OCCDF(i,t), OCCRDF(i,t), and DOCCTD(i,t), given by equations 7, 8 and 9 respectively.

More details and the decision trees of CALIFORNIA, Alg. #7, and Alg. #8 can be found in the Freeway Incident Management Handbook (Reiss and Dunn, 1991).
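The three CALIFORNIA conditions can likewise be sketched directly from eqns (7)-(9). The function below uses placeholder names and omits the decision-tree state handling, the Algorithm #7 persistence and signal-suppression logic, and the Algorithm #8 compression-wave check; it is only an illustration of the basic test.

```python
def california_conditions(occ_up_t, occ_dn_t, occ_dn_t_minus2, t1, t2, t3):
    """CALIFORNIA incident conditions on raw station occupancies (percent).

    occ_up_t        -- OCC(i, t), upstream occupancy in interval t
    occ_dn_t        -- OCC(i+1, t), downstream occupancy in interval t
    occ_dn_t_minus2 -- OCC(i+1, t-2), downstream occupancy two intervals earlier
    t1, t2, t3      -- station-specific thresholds
    """
    occdf = occ_up_t - occ_dn_t                               # eqn (7)
    occrdf = occdf / occ_up_t if occ_up_t > 0 else 0.0        # eqn (8)
    docctd = ((occ_dn_t_minus2 - occ_dn_t) / occ_dn_t_minus2  # eqn (9)
              if occ_dn_t_minus2 > 0 else 0.0)
    return occdf > t1 and occrdf > t2 and docctd > t3
```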

121

3. AIDDS ARCHITECTURE

The Automatic Incident Detection Development System (AIDDS) is an object-oriented application written in Visual C++ 4.0, running under the Windows 95 or Windows NT operating systems. A complex data structure is used to manage the data necessary for detection and to store the information describing the test site. The system is built from independent modules, so design changes or upgrades are relatively easy. In the following sections the most important parts of AIDDS are described.

3.1 Data file input interface

The most common sources of historic data are ASCII files. AIDDS is equipped with an interface capable of reading CSV-formatted ASCII files, and the interface can be adapted to accommodate any format. The format currently used reads volume/occupancy files with the columns separated by commas (the standard CSV output of Microsoft Excel). The file must include a header line before the data, indicating the detector IDs.

The interface initially reads all detector IDs and informs the system of their number and names, so that the necessary data structures can be created dynamically. The system then accesses the rest of the file, line by line, upon request. This design allows input files of unlimited length.
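A rough sketch of this header-first, line-by-line reading pattern is shown below; the column layout, value types, and function name are assumptions made for illustration, not the actual AIDDS interface code.

```python
import csv

def read_detector_file(path):
    """Read a CSV volume/occupancy file whose first line holds the detector
    IDs, yielding one {detector_id: value} mapping per data line so the file
    can be processed line by line regardless of its length."""
    with open(path, newline="") as f:
        reader = csv.reader(f)
        detector_ids = [name.strip() for name in next(reader)]  # header line
        for row in reader:
            yield dict(zip(detector_ids, (float(v) for v in row)))
```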

3.2 Pseudo Detector Stations

Pseudo detector stations are detectors that do not actually exist in the field but are created by the user. Sometimes it is necessary to combine the data of two detectors in order to enhance detection performance. Figure 1 shows an example of such a case.

Fig. 1. Freeway split

In the case of Figure 1, the user plans to establish incident detection along the I-35 freeway. At the point shown, the freeway splits into two parts: the left part continues as I-35 and the right joins HW66. If detectors 1 and 2 define the section, detection performance will be low because a significant part of the traffic is moving to HW66. To compensate for this, the user can create a pseudo detector station named PD1 that combines the data from DET2 and DET3, so that

PD1_occ = DET2_occ + DET3_occ    (10)

The possibilities for pseudo detector stations are the following:

• Detector Addition: PD = Det1 + Det2
• Detector Subtraction: PD = Det1 - Det2
• Detector Percentage: PD = x% of Det1

After their creation, pseudo detector stations are treated as ordinary detector stations.

3.3 Section database

Freeway sections are the centerpiece of AIDDS. The definition of a freeway section depends on the detection logic: in two-detector station logic, such as the algorithms described earlier, a section is the segment of the freeway between two detector stations; in single-detector station logic, a section is the segment of the freeway around the detector where detection is possible. AIDDS incorporates a database designed to store all the user-defined information regarding sections, and this information can be stored in a file for later retrieval. Figure 2 displays the section information dialog of AIDDS.

Fig. 2. Section Information dialog

In the "Section Information" dialog the user can enter a large amount of information. Some of the information entered through this dialog is not of immediate importance; it was included in order to enhance the output of the system and for future needs. Information such as the algorithm thresholds is vital for the detection process. A detailed description of every field in the "Section Information" dialog follows.


Freeway Corridor: In this entry field the user enters the freeway location of the section.

Section ID: The ID is the name the user gives to the section; it identifies the section in the system.

# of Detector stations: A section can have one or two detector stations, selected according to the algorithms the user wishes to run in this section.

Upstream and downstream detector stations: These two identical groups contain the entry fields describing the upstream and downstream detector stations respectively. The fields contained are:

Name: In the name field the user chooses the detector ID from a list. From the moment the user opens the data file, the system knows the IDs of all the detectors available in the file. In addition to the real detector IDs, the list also contains the pseudo detector IDs.

Location: The location of the detector station.

Type: In this field the user can choose one of the predefined detector types, e.g. AUTOSCOPE, loop detector, radar, pseudo. The user can also define a new detector type.

# of Lanes: The number of lanes at the location of the detector station.

Detection Algorithms group: The detection algorithms group contains all entry fields related to the detection process. Inside the group there are five sections, four for the already installed algorithms and one reserved for a future algorithm. Each algorithm comes with its threshold entry fields, and the user has the choice of activating or deactivating an algorithm in the section.

Global Thresholds Button: The user has the option to "Assign these thresholds to all Sections". If the user wishes to run the detection algorithms with the same set of thresholds on all sections, the system provides this shortcut.

3.4 Individual section thresholds

As indicated earlier, the intent of the designer was to treat the freeway sections as individual objects. One feature that facilitates this is the option of having individual threshold sets for each section. If the elements under detection can be isolated, the operation of the system does not have to be confined to a specific site, and individual section characteristics can be addressed. Although the feature makes it possible to assign different thresholds to each section, caution is advised, since threshold optimization becomes much more difficult.

3.5 Combined detection logic

In the detection cycle, multiple algorithms are already being used. These algorithms work independently and each one produces individual alarms. In order to enhance the overall system performance, it is possible to combine the algorithm outputs into one single alarm visible to the user. In AIDDS the four algorithms are combined in a weighted voting process, as sketched below. The weights in the process represent the effect the incident had on each of the algorithms: each algorithm is assigned a weight calculated from the Euclidean distance between its test results and its preset thresholds. The logic raises the alarm if the cumulative vote is higher than a user-defined threshold. With this combined detection logic it is possible to reduce the false alarms caused by the idiosyncrasies of a single algorithm.
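One plausible reading of this weighted voting scheme is sketched below. The paper does not specify how the distance-based weights are normalized or accumulated, so the data layout and the details of the vote are assumptions.

```python
import math

def combined_alarm(per_algorithm_results, vote_threshold):
    """Combine individual algorithm outputs into one alarm by weighted voting.

    per_algorithm_results -- iterable of (test_values, thresholds) pairs,
                             one pair per algorithm, where test_values and
                             thresholds are equal-length numeric sequences.
    vote_threshold        -- user-defined cumulative vote threshold.
    """
    cumulative_vote = 0.0
    for test_values, thresholds in per_algorithm_results:
        fired = all(v > t for v, t in zip(test_values, thresholds))
        # Weight: Euclidean distance between the algorithm's test results
        # and its preset thresholds, as described in the text.
        weight = math.dist(test_values, thresholds)
        if fired:
            cumulative_vote += weight
    return cumulative_vote > vote_threshold
```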

3.6 Automatic detection validation

Automatic Detection Validation (ADV) is a feature that improves the time efficiency of the system. ADV allows the user to load the system with incident data; during the detection cycle the system can then evaluate whether an alarm is a legitimate or a false detection. With this feature there is no need for the user to go through lengthy output files in order to calculate the evaluation measures of the algorithms. ADV can validate a detection that occurs inside the user-defined section within a user-defined time frame.

The "Incident data" dialog of AIDDS contains the information necessary to describe a reported incident. The available fields are:

Incident ID: This field contains the user-defined ID of the incident entry.

Section ID: In this field the user enters the ID of the section where the incident was reported. The user can either type the ID or choose one from the drop-down list box available.

Time: Refers to the time the incident was reported in the incident log (ground truth).

Time Frame: The user has the ability to enter a time frame around the reported time. Any detection reported inside the time frame is considered validated. The time frame is defined as the reported time of the incident plus or minus a user-defined constant in minutes.

Although ADV information is part of the AIDDS database, it is not stored in a file. Detector information, pseudo detectors and sections can be stored for later use; ADV information is not, since there is a possibility of the user validating the wrong incidents. Every time the system is initialized, the user has to enter the incident information.
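The ADV check reduces to a section match plus a time-window comparison, roughly as in the sketch below; the function and field names are illustrative, not taken from AIDDS.

```python
from datetime import timedelta

def is_valid_detection(alarm_section, alarm_time, incidents, window_minutes):
    """Automatic Detection Validation: an alarm counts as a legitimate
    detection if some reported incident lies in the same section and within
    the reported time plus or minus the user-defined constant (in minutes).

    incidents -- iterable of (section_id, reported_time) pairs.
    """
    window = timedelta(minutes=window_minutes)
    return any(section == alarm_section and abs(alarm_time - reported) <= window
               for section, reported in incidents)
```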

3.7 Output

AIDDS incorporates a detailed screen output, in which the evaluation measures and the time and place of the last alarm are displayed for each algorithm. Although this is adequate for creating the performance curves of the algorithms, it is not detailed enough on its own, so a file output is also produced while the program executes. The information displayed in the two forms of output is the following:

Detections: The number of detected incidents. When ADV is enabled, this field contains the number of legitimate detections the algorithm has performed; in the case of 100% performance, this number equals the number of incidents entered in the database. When ADV is deactivated, this number represents the number of alarms the algorithm has produced so far.

False Alarms: The number of false alarms created by the algorithm. This field takes a non-zero value only if ADV is activated, since ADV decides whether an alarm is a true detection or a false alarm.

# of Decisions: The total number of decisions the algorithm has performed. The algorithm performs one decision (alarm vs. no alarm) for each section in every detection cycle. The number of decisions is needed to calculate the false alarm rate.

Detection rate (%): The percentage of detected incidents over the total number of incidents that actually happened in the test period.

False Alarm rate (%): The percentage of false decisions (alarms) over the total number of decisions the algorithm made in the test period.

Last alarm in section: The ID of the section where the last alarm for this algorithm was reported.

Last alarm time: The time of the last reported alarm for this algorithm, measured according to the time information included in the data file.

The second form of output is contained in an ASCII file, created and updated during detection. It contains the following fields:

Event: A short description of the event the algorithm produced. The events are not confined to alarms, but include incident termination, incident pending, and generally any change in the state of the algorithm.

Algorithm: The name of the algorithm that produced the event.

Section: The ID of the section where the event was produced.

Time: The time of the event.
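The two evaluation measures defined above follow directly from the counters kept by the system; a minimal illustration:

```python
def detection_rate(detected_incidents, total_incidents):
    """Detection rate (%): detected incidents over incidents in the test period."""
    return 100.0 * detected_incidents / total_incidents if total_incidents else 0.0

def false_alarm_rate(false_alarms, total_decisions):
    """False alarm rate (%): false alarms over all decisions in the test period."""
    return 100.0 * false_alarms / total_decisions if total_decisions else 0.0
```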

4. CONCLUSIONS

In this paper a new computer program for AID algorithm testing has been presented. The Automatic Incident Detection Development System (AIDDS) is a Windows-based application with a number of unique characteristics. Designed to assist in research, its modular design allows easy upgrades. AIDDS incorporates a section database and features such as pseudo detector stations, Automatic Detection Validation (ADV), individual thresholds, and combined detection logic. The application succeeds in reducing the time needed for algorithm testing and provides the researcher with tools for detailed test design. The next step is the evaluation of the effectiveness of AIDDS by testing it with a large amount of historic data. Additional algorithms can be added to the system, and the combination formula will be expanded to accommodate them. Future plans include the incorporation of AIDDS into a larger incident management system.

REFERENCES

Chassiakos A.P., and Stephanedes Y.J. (1993). Smoothing algorithms for incident detection. In: Transportation Research Record 1394, TRB, Washington, DC.

Levin M., and Krause G.M. (1978). Incident detection: A Bayesian approach. In: Transportation Research Record 682, TRB, Washington, DC.

Payne H.J., and Tignor S.C. (1978). Freeway incident detection algorithms based on decision trees with states. In: Transportation Research Record 682, TRB, Washington, DC.

Reiss R.A., and Dunn W.M. Jr. (1991). Freeway Incident Management Handbook. Report FHWA-SA-91-056.

Stephanedes Y.J., and Chassiakos A.P. (1993a). Application of Filtering Techniques for Incident Detection. ASCE Journal of Transportation Engineering, Vol. 119, No. 1.

Stephanedes Y.J., and Chassiakos A.P. (1993b). Freeway Incident Detection Through Filtering. Transportation Research, Vol. 1, No. 3.

Stephanedes Y.J., and Hourdakis J. (1996). Transferability of Freeway Incident Detection Algorithms. Preprints of the Transportation Research Board Meeting, 1996.