Tracking Convective Cells in the Upper Rhine Valley


Phys. Chem. Earth (B), Vol. 25, No. 10-12, pp. 1317-1322, 2000. © 2000 Elsevier Science Ltd. All rights reserved. 1464-1909/00/$ - see front matter



Pergamon

PII: S1464-1909(00)00201-X

J. Handwerker, J. Reßing and K. D. Beheng

Institut für Meteorologie und Klimaforschung, Forschungszentrum Karlsruhe/Universität Karlsruhe, Karlsruhe, Germany

Received 14 June 2000; accepted 29 June 2000

Correspondence to: Jan Handwerker ([email protected])

Abstract. A tracking algorithm for convective cells is presented. TRACE3D merely uses the three-dimensional polar volume scan data of a radar to track convective cells. Cell tracking allows switching from an Eulerian to a Lagrangian representation. Cell tracking is performed in two steps. At first, convective cells have to be identified in the radar data. Then these convective cells are followed from one volume scan to the next. To this aim an estimated position of a certain storm in the next radar image is determined. Some different procedures to find that position are discussed. The performance of TRACE3D is investigated by comparison with tracking results obtained by eye. An overview of some results from 1999 is presented. © 2000 Elsevier Science Ltd. All rights reserved.

1 Introduction

Operational radars provide volume-filling information about reflectivity with a high temporal and spatial resolution. These data are commonly evaluated to estimate precipitation intensity at a given point in space (and time), thus adopting an Eulerian point of view. However, data gathered by a radar can also be used in a Lagrangian framework if objects can be found which can be identified and tracked in consecutive volume data sets. Such a tracking is here applied to convective cells. To this end, a tracking algorithm TRACE3D is developed which (i) identifies convective cells by searching for large reflectivity values and (ii) looks for these cells in the next radar image. It only uses the three-dimensional raw data of a volume scan as input, and no information other than that provided by the radar is taken into account. If a bright-band structure occurs, the input data have to be corrected by a special algorithm (cf. e.g. Gysi et al., 1997). The number of free parameters is reduced to a minimum, but some thresholds for the cell definition as well as tolerance values regarding the mismatch between the estimated and the real position of tracked cells are not avoidable. This paper first shows how cells are defined and how they are tracked. Then an evaluation of results obtained for very different situations is given. Finally, some applications of TRACE3D to radar data covering the whole year 1999 are presented.

2 Definition of cells

TRACE3D relies on reflectivity data measured by a radar to identify convective cells. A first approach might be to define a unique threshold by which convective cells are separated from the background, as shown in Fig. 1a for a one-dimensional example. This often leads to very large single cells which are recognized by eye to consist of several smaller cells. For example, a squall line of more than 150 km in length made up of very many cells would be classified as a single cell, which is not desirable. The next idea could be to take only the center of each cell as long as its maximum exceeds a certain limit (Fig. 1b). But then secondary maxima may reach into their more intense primary maxima, as can be seen with cells 5 and 6 in Fig. 1b. In the two- or three-dimensional case this procedure might even lead to "cells" with holes inside at locations where stronger cells appear. In order to circumvent these problems, a third kind of cell definition was chosen. In this method two different thresholds are applied: one absolute value (th_abs, typically 35 dBZ) and one relative value (th_rel, typically 10 dB). At first, "regions of precipitation" are searched for as contiguous regions with all reflectivity values larger than th_abs. (In Fig. 1c there are three regions of precipitation.) If the maximum within a certain region of precipitation does not exceed th_abs + th_rel, that region is rejected. In a second step, only beam volume elements with a reflectivity value above the maximum reflectivity in a certain region minus th_rel are considered.


Table 1. Overview of the free parameters of TRACE3D.

Description                                                            Typical value

1. Cell definition
Minimal reflectivity within a region of precipitation                  35 dBZ
Difference between the maximum reflectivity within a region of
  precipitation and the minimum accepted reflectivity in a cell        10 dB
Minimum distance between two cells to remain two separate cells        1 km or 2 volume elements
Minimum number of beam volume elements in one cell                     10

2. Cell tracking
Maximum time delay between two scans                                   30 min
Maximum mismatch for a cell position to be accepted                    0.6 |δt v_j(t)| or min_m(|r_j − r_m|)/2
Maximum distance to a nearest neighbor                                 10 km
Minimum distance of a parent beam volume element to indicate
  the occurrence of splitting                                          3 km

Fig. 1. Schematic sketch of possible methods to define cells, demonstrated with a single one-dimensional data set (reflectivity plotted against range in km): (a) Each beam volume element with a reflectivity above a certain threshold (here 35 dBZ) is part of a convective cell. (b) "The upper 10 dB" are taken as the cell. This might lead to cells that include the lower regions of more prominent cells, as can be seen with cells 5 and 6. (c) In a first step contiguous regions with a reflectivity value above a threshold (35 dBZ) are searched for. Within each region a second threshold, 10 dB beneath the maximum reflectivity within that region, is used to define the cells. In that way TRACE3D defines cells.

Convective cells are then formed by grouping them to contiguous objects. For a one-dimensional example this procedure is shown in Fig. 1c. In the vicinity of the radar the spatial resolution of the radar data is very fine. Sometimes this leads to a lot of very small "convective cells", found by the procedure described, which are obviously not independent meteorological objects. To merge them in a reasonable way, two different convective cells are put together and marked as one single cell if their minimum distance is smaller than a certain value, typically 1 km. Accordingly, two cells are also merged when their minimum distance is smaller than two beam volume elements. Cells with fewer than 10 beam volume elements are neglected because they are too small to be of any interest. A list of the parameters used by the cell identification scheme of TRACE3D is given in Table 1. A cell is then characterized by (i) the (polar and Cartesian) coordinates of its beam volume elements, (ii) the corresponding reflectivity values, (iii) its size (in cubic meters and in volume elements) and (iv) its center position, calculated as the center of rainfall intensity. Rainfall intensity is calculated by a unique Z/R relation. Each cell is identified by an individual number, e.g. c_j(t) with j = cell number and t = time of volume scan. These cells have to be tracked from one volume scan to the next. Identifying convective cells only by their reflectivity values is not unambiguous. To emphasize that not necessarily each object identified as a convective cell by TRACE3D is indeed a convective cell, the objects identified by TRACE3D will be called "reflectivity cores" in the following. The hope is that nearly always reflectivity cores are convective cells and vice versa.
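To illustrate the cell definition described above, the following minimal Python sketch identifies reflectivity cores on a two-dimensional Cartesian reflectivity field. It is not the authors' code: the array layout, the use of scipy.ndimage connected-component labelling, the Z = 300 R^1.4 relation and the function name are assumptions, and the polar beam-volume geometry as well as the 1 km merging step of TRACE3D are only indicated in the comments.

import numpy as np
from scipy import ndimage

def find_reflectivity_cores(dbz, th_abs=35.0, th_rel=10.0, min_elements=10):
    """Two-threshold core identification on a reflectivity field (dBZ).

    A hypothetical, simplified 2-D stand-in for the 3-D polar volume used
    by TRACE3D: (1) find contiguous "regions of precipitation" above
    th_abs, (2) within each region keep only elements within th_rel of
    the regional maximum, (3) relabel the remainder into contiguous
    cores and drop those smaller than min_elements.
    """
    regions, n_regions = ndimage.label(dbz >= th_abs)
    keep = np.zeros_like(dbz, dtype=bool)
    for lab in range(1, n_regions + 1):
        region = regions == lab
        zmax = dbz[region].max()
        if zmax < th_abs + th_rel:          # region rejected: no strong core inside
            continue
        keep |= region & (dbz >= zmax - th_rel)
    cores, n_cores = ndimage.label(keep)
    result = []
    for lab in range(1, n_cores + 1):
        mask = cores == lab
        if mask.sum() < min_elements:       # too small to be of interest
            continue
        # centre of rainfall intensity; rain rate here from an assumed
        # Z/R relation Z = 300 R^1.4 (Z in mm^6/m^3, R in mm/h)
        rain = (10.0 ** (dbz[mask] / 10.0) / 300.0) ** (1.0 / 1.4)
        ij = np.argwhere(mask)
        centre = (rain[:, None] * ij).sum(axis=0) / rain.sum()
        result.append({"mask": mask, "centre": centre, "size": int(mask.sum())})
    # Note: TRACE3D additionally merges cores whose minimum distance is
    # smaller than about 1 km (two beam volume elements); omitted here.
    return result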

3 Tracking

Tracking is performed in two steps. At first all possible children of a certain parent reflectivity core are searched for. Thereafter the assignments are reduced to the more probable ones. The position of a core c_j(t) is determined by its center of precipitation r_j(t) as described above. For each parent core an estimated position at the time of the successive scan (t + δt, where δt is the time step between two consecutive scans) is calculated with regard to the velocity v_j(t) of this core. The velocity of a core is calculated from the former time steps during which the storm is observed. The simplest assumption about the velocity of a core within the current time step is to use just the same velocity as was observed in the last time step. This is the way TRACE3D works. Using the additional information available in the history of the storm should produce more accurate results; this is discussed in section 4. A newborn reflectivity core starts with the mean velocity of all cores from the former time step, whereas the very first cores get their velocity from the data of the VVP algorithm. Near the estimated position of the parent core, given by r_j(t) + δt v_j(t), candidates for childhood of that core are searched for.
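As a small illustration of this step, the sketch below computes the search position and the start velocity of a core; the function names and the vector layout (numpy arrays, e.g. in km and km per minute) are assumptions and not taken from the original implementation.

import numpy as np

def predicted_search_position(r_core, v_core, dt):
    """Estimated position of a parent core one scan later."""
    return r_core + dt * v_core

def start_velocity(previous_core_velocities, vvp_wind):
    """Velocity assigned to a newborn core.

    Newborn cores inherit the mean velocity of all cores of the previous
    time step; if there were none (the very first cores), the wind from
    the VVP analysis is used instead, as described in the text.
    """
    if len(previous_core_velocities) > 0:
        return np.mean(previous_core_velocities, axis=0)
    return np.asarray(vvp_wind)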


A core c_k(t + δt) is accepted as a candidate if the mismatch between its actual position r_k(t + δt) and the estimated parent position is (i) smaller than a certain fraction f_1 of the distance the parent travelled during one time step (i.e. f_1 δt |v_j(t)|, where f_1 is typically 0.6) or (ii) smaller than a fraction f_2 of the distance between the parent core and its nearest neighbor core (f_2 min_m |r_j(t) − r_m(t)|, where f_2 is typically 0.5). If in the second case the distance to the nearest neighbor exceeds a certain limit (typically 10 km), this limit is used instead.
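A minimal sketch of this acceptance test, using the typical values f_1 = 0.6, f_2 = 0.5 and the 10 km cap quoted above, could look as follows; the function name, argument layout and units are invented for illustration and do not reproduce the original implementation.

import numpy as np

def is_candidate(r_child, r_parent, v_parent, dt, r_nearest_neighbour,
                 f1=0.6, f2=0.5, max_nn_dist_km=10.0):
    """Accept a core at r_child as a possible child of a parent core.

    r_* are 2-D position vectors in km, v_parent in km/min, dt in min.
    The mismatch between the child's actual position and the parent's
    extrapolated position must satisfy criterion (i) or (ii) from the text.
    """
    r_estimated = r_parent + dt * v_parent
    mismatch = np.linalg.norm(r_child - r_estimated)

    # (i) a fraction of the distance the parent travelled in one time step
    limit_i = f1 * dt * np.linalg.norm(v_parent)

    # (ii) a fraction of the distance to the nearest neighbouring core,
    #      with that distance capped at 10 km if the neighbour is very far away
    nn_dist = min(np.linalg.norm(r_parent - r_nearest_neighbour), max_nn_dist_km)
    limit_ii = f2 * nn_dist

    return mismatch < limit_i or mismatch < limit_ii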

The relatively generous tolerance for the mismatch between the predicted and the real position of a reflectivity core is due to the fact that convective cells not only translate (i.e. move with the mean wind) but also propagate (i.e. change their position by growing and decaying). The center of rather large cells sometimes even moves in the opposite direction of the translation. An additional mechanism reflectivity cores can experience is splitting and merging. To detect splitting, each parent core is virtually moved as a rigid body to the position of the center of precipitation of all its identified children. Moved to that location, it is checked for beam volume elements that, on the one hand, belong to the parent core but, on the other hand, are more than (typically) 3 km apart from the nearest volume element of all children. If such beam volume elements exist, a splitting event might have taken place. All cores surrounding these beam volume elements are accepted as further candidates for childhood if their center of precipitation is closer than half the distance the parent moved in the last time step. A merging is interpreted as a splitting on a reversed time scale. Thus, the same routine which detects splitting is applied a second time after changing the roles of parents and children, i.e. by reversing time. This completes the collection of candidates for childhood.

In a further step a selection is performed to reduce the assignments to the most probable ones. Until now assignments are accepted regardless of whether they lead to crossings with other assignments or not. Crossings of reflectivity cores must not occur. To avoid crossings, groups of assignments which undergo mutual crossings are created. The average velocity of such a group is assumed to be a good estimate of the local wind speed. Hence, assignments leading to velocities which are similar to the corresponding mean velocity are more probable than those which lead to very different velocities. The selection procedure then starts with the best fitting assignment. The acceptance of a certain assignment leads to the rejection of all assignments which would lead to mutual crossings. This procedure is repeated until for the first time an already rejected assignment would be the next to be accepted. All further assignments which produce crossings are rejected. In very few cases, when the distance between reflectivity cores in one scan is too small (as happens in a squall line, e.g.), there are still too many assignments. To identify these situations the reflectivity cores are grouped again: all interconnected reflectivity cores are taken as one group. If a group consists of 3 or more reflectivity cores from the first radar image and 3 or more from the second radar image, these storms are called a "heap". Within a heap it is not possible to reliably detect splitting and merging; therefore splitting and merging do not occur by definition. Each core may then have at most one successor and one predecessor. That core fitting best with the average group velocity is taken.
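The splitting test can be sketched as follows. This is an illustrative reading of the description above, not the original code: cores are assumed to be given as arrays of element positions in km, and the joint centre of the children is approximated here by the plain mean of their centres rather than by a precipitation-weighted centre.

import numpy as np

def leftover_parent_elements(parent_elems, parent_centre,
                             child_elems_list, child_centres, d_split_km=3.0):
    """Return parent beam volume elements that hint at a splitting event.

    parent_elems: (N, 2) array of element positions (km);
    child_elems_list: list of (M_i, 2) arrays for the identified children.
    The parent is shifted as a rigid body so that its centre coincides with
    the joint centre of its children; elements that then lie more than
    d_split_km from every child element indicate a possible splitting.
    """
    if not child_elems_list:
        return parent_elems
    joint_child_centre = np.mean(np.vstack(child_centres), axis=0)
    shifted = parent_elems + (joint_child_centre - parent_centre)
    all_child_elems = np.vstack(child_elems_list)
    # distance of every shifted parent element to its nearest child element
    d = np.linalg.norm(shifted[:, None, :] - all_child_elems[None, :, :], axis=2)
    nearest = d.min(axis=1)
    # cores surrounding these elements would then be examined as further
    # candidates for childhood, as described in the text
    return parent_elems[nearest > d_split_km]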

4 Estimating the position of a reflectivity core in the next radar image

The position of a reflectivity core is determined by its center of precipitation. To relocate a reflectivity core from one radar image to the next, it is necessary to estimate the new position of the core. Near this estimated position TRACE3D searches for the core in the next radar image. Different methods to estimate the velocity of a reflectivity core are investigated:

Persistence. The present version of TRACE3D uses the velocity observed in the last time step as the current velocity of a reflectivity core:

v_e(t + δt) = v_m(t)    (1)

where the subscript e means 'estimated' and m means 'measured' (the measured velocity is the tracked one). Searching for a better method to estimate the next position of a reflectivity core, additional methods have been applied.

Weighted average. For a weighted average, all previous velocities of the core are considered recursively. Thereby, the weight of a velocity value decreases with the time lag since it was observed:

v_e(t + δt) = k v_e(t) + (1 − k) v_m(t)    (2)

The subscripts are the same as in equation (1). If k is chosen as 0, equation (2) reduces to equation (1). At the beginning, when there is no former time step with an estimated velocity, the estimated velocity is set equal to the measured one.

Regression. Another method to include the history of a core is a polynomial fit to the trajectory. With a least-squares algorithm a function of first or higher order approximates the observed path. Extrapolating this function to the next time step gives the estimated position of the core. (All three methods are illustrated in the code sketch below.)

As discussed in section 5, TRACE3D was applied to all radar images of the year 1999. Using each of the three algorithms, each position of each core was predicted and the result was compared with the measured position (obtained by TRACE3D). Three different sets of trajectories are distinguished:

1. A selection of unambiguous trajectories that could be observed for an hour or longer and with only one core (without any splitting or merging) was picked out. This set of trajectories was inspected by eye to exclude errors or very untypical cases. This led to 122 trajectories, for which 729 predictions of the respective next core positions were made.

2. All data sets with a temporal resolution of 10 minutes, from the beginning of the year 1999 until 10.08.1999. Here 8579 predictions were made.


3. Data sets with a temporal resolution of 5 minutes, for the rest of the year 1999. The number of predictions was 7975.

The absolute difference between the estimated and the actual position as given by TRACE3D is taken as a measure of the quality of the extrapolation procedure. The mean values and standard deviations of these differences for the four different extrapolation techniques and the three different data sets are given in Table 2. As can be seen, the weighted average produces the best results for all data sets, although there is only a very small benefit compared to the use of the latest time step. Accordingly, there is only a weak dependence of the errors on the weighting parameter k. The regression analyses show no advantages relative to the other methods.
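The three extrapolation variants compared in Table 2 can be written compactly as in the sketch below; the function names and data layout are assumptions, the persistence and weighted-average rules follow equations (1) and (2), and the regression variant fits each coordinate of the observed track with a least-squares polynomial (numpy.polyfit) and evaluates it at the next scan time.

import numpy as np

def estimate_velocity_persistence(v_measured):
    """Eq. (1): the velocity of the last time step is reused unchanged."""
    return v_measured[-1]

def estimate_velocity_weighted(v_measured, k=0.7):
    """Eq. (2): recursive weighted average of all previous velocities."""
    v_e = v_measured[0]                    # start with the first measured value
    for v_m in v_measured[1:]:
        v_e = k * v_e + (1.0 - k) * v_m
    return v_e

def estimate_position_regression(times, positions, t_next, order=1):
    """Least-squares polynomial fit of the track, extrapolated to t_next.

    times: (N,) observation times; positions: (N, 2) core centres (km).
    """
    return np.array([np.polyval(np.polyfit(times, positions[:, i], order), t_next)
                     for i in range(positions.shape[1])])

With k = 0 the weighted average reduces to persistence, which is consistent with the weak dependence on k noted above.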

Table 2. Mean and standard deviation of the absolute difference between estimated and measured coordinates in km.

data set     persistent    weighted    regression    regression
             k = 0         k = 0.7     1st order     2nd order
1: mean      1.9           1.6         2.3           2.4
1: std       1.5           1.3         1.5           1.8
2: mean      2.5           2.4         2.1           2.9
2: std       2.3           2.1         2.4           3.4
3: mean      1.9           1.8         2.1           2.2
3: std       2.1           2.1         2.3           2.6

Table 3. Cases used for evaluation. For each time span the temporal resolution, the severity, the total number of reflectivity cores and the total number of assignments found in the "true" data set are given.

Date        Time           Resol.   Sever.    Refl. cores   Assignm.
07.04.99    14:04-19:34    slow     easy      202           127
13.06.99    11:04-15:54    slow     medium    418           332
30.06.99    6:34- 9%       slow     hard      376           216
05.10.99    13:00-17:00    fast     easy      287           208
25.09.99    4:00- 6:40     fast     medium    411           309
31.10.99    0:00- 1:50     fast     hard      610           450

A further investigation shows that the estimated positions scatter randomly around the real positions, i.e. there is no significant preference for a certain direction of deviation. The predictions made for the second part of the year, with a temporal resolution of 5 min (group 3 in Table 2), are significantly better than those for the first part with a temporal resolution of 10 min (group 2). This emphasizes the need for a short scan interval to improve the tracking quality of an automatic cell tracking algorithm. Nevertheless, the errors seem to be quite large. A mismatch of 1.8 km in 5 min corresponds to an error of 6 m/s in the velocity of a reflectivity core. This is due to the fact that a reflectivity core not only translates but also propagates (cf. section 3).

5 Evaluation

To test the accuracy of TRACE3D, results from the algorithm were compared with the results of tracking by eye. Four different persons tracked the reflectivity cores identified by TRACE3D during six different sequences by eye. Two of them are very familiar with the algorithm whereas the other two are experienced meteorologists. Each of them independently analysed grayscale SRI images with the reflectivity cores superposed (given by their center of precipitation as well as by their full extension). They could switch within each pair of images as often as they wanted in order to identify and write down the assignments.

Six time sequences were chosen which differ (i) with respect to their severity, i.e. the number of reflectivity cores identified in the radar data (labeled easy: fewer than 10 reflectivity cores in each data set, medium: 10 to 20 reflectivity cores, hard: more than 20 reflectivity cores) and (ii) with respect to the temporal resolution (labeled slow: 10 minutes from data set to data set, fast: 5 minutes). Table 3 gives an overview of the temporal resolution and the severity of the sequences as well as of the number of reflectivity cores observed during the respective period and the number of assignments identified as "truth" during this test.

This procedure led to five data sets of assignments, one from TRACE3D and four from the test persons. An assignment is said to be true if and only if it is found in at least three of the five data sets of assignments. By this method a "true" data set is formed. This truth is finally used to rate the quality of the tracking results obtained by the algorithm as well as by the four test persons. This rule implies that the truth depends on the results of the algorithm. There are four possibilities to assess assignments in the five data sets in comparison with the "true" data set:

1. An assignment is found in a data set and it is found in the truth. This is called a hit (h).

2. An assignment is not found in a data set although it is found in the truth. This is called a miss or an error of the first kind (e_1).

3. An assignment is found in a data set but it is not found in the truth. This is called a false alarm or an error of the second kind (e_2).

4. An assignment is neither found in a data set nor in the truth.

To quantify the quality of the tracking results, three different assessment measures are used. The probability of detection is the fraction of the true assignments that are actually found: POD = h/(h + e_1); in the optimum POD yields 1. The false alarm rate is the fraction of the found assignments that are wrong: FAR = e_2/(h + e_2); in the optimum FAR yields 0. The critical success index is the ratio of hits to the number of all correctly or falsely identified assignments: CSI = h/(h + e_1 + e_2); in the optimum CSI yields 1.
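The construction of the "truth" and the three measures can be reproduced with a few lines; representing an assignment as a hashable tuple (e.g. scan time, parent core id, child core id) and the function names are assumptions, whereas the three-of-five majority rule and the formulas follow the definitions above.

from collections import Counter

def build_truth(assignment_sets, min_votes=3):
    """An assignment is 'true' if it occurs in at least min_votes of the sets."""
    votes = Counter(a for s in assignment_sets for a in set(s))
    return {a for a, n in votes.items() if n >= min_votes}

def scores(assignments, truth):
    """Probability of detection, false alarm rate and critical success index."""
    found = set(assignments)
    h  = len(found & truth)          # hits
    e1 = len(truth - found)          # misses (errors of the first kind)
    e2 = len(found - truth)          # false alarms (errors of the second kind)
    pod = h / (h + e1) if h + e1 else 0.0
    far = e2 / (h + e2) if h + e2 else 0.0
    csi = h / (h + e1 + e2) if h + e1 + e2 else 0.0
    return pod, far, csi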


Table 4. Tracking results resolved for the six different situations given in Tab. 3. "Persistent" means that the estimated speed of a core is given by its speed in the latest time step, whereas "weighted" indicates the use of a weighted average of the velocities based on all former time steps.

Date (severity)      tracked by               POD    FAR    CSI
07.04.99 (easy)      TRACE3D (persistent)     0.90   0.06   0.85
                     TRACE3D (weighted)       0.94   0.04   0.91
                     test person (average)    0.97   0.06   0.91
13.06.99 (medium)    TRACE3D (persistent)     0.88   0.04   0.84
                     TRACE3D (weighted)       0.90   0.04   0.87
                     test person (average)    0.98   0.04   0.94
30.06.99 (hard)      TRACE3D (persistent)     0.91   0.12   0.81
                     TRACE3D (weighted)       0.93   0.09   0.85
                     test person (average)    0.96   0.04   0.92
05.10.99 (easy)      TRACE3D (persistent)     0.98   0.01   0.96
                     TRACE3D (weighted)       0.97   0.01   0.96
                     test person (average)    1.00   0.00   0.99
25.09.99 (medium)    TRACE3D (persistent)     0.87   0.04   0.84
                     TRACE3D (weighted)       0.92   0.04   0.88
                     test person (average)    0.96   0.03   0.93
31.10.99 (hard)      TRACE3D (persistent)     0.85   0.09   0.79
                     TRACE3D (weighted)       0.87   0.07   0.81
                     test person (average)    0.97   0.03   0.94
all                  TRACE3D (persistent)     0.89   0.06   0.84
                     TRACE3D (weighted)       0.91   0.05   0.87
                     test person (best)       0.98   0.02   0.95
                     test person (average)    0.97   0.03   0.94
                     test person (worst)      0.96   0.04   0.93

It should be pointed out that these measures strongly depend on the way the evaluation is made. As Wilson et al. (1998) emphasize, a comparison with other evaluation procedures is hardly possible as long as they use a different evaluation scheme. Table 4 shows the three assessment measures for the results of the test persons and those of TRACE3D. Additionally, the six cases are weighted according to the number of assignments found in each period ("all"). This means that the very severe case of Oct. 31st contributes much more strongly than the easy case of Oct. 5th. The quality of the algorithm, of course, never reaches the quality achieved by the test persons, but the results are encouraging. The benefit of using an advanced method ("weighted") instead of persistence to calculate the velocity of a reflectivity core is evident: the weighted average always produces better tracking results than the simple use of the velocity from the latest time step. Moreover, TRACE3D tends to be more sensitive to severity than human eyes. Whereas the results are comparable with those of the test persons on 07.04.99 and 05.10.99 (the easy cases), there is a significant reduction in quality on 30.06.99 and 31.10.99 (the hard cases).


Table 5. Distribution of the number of reflectivity cores within one radar image.

reflectivity cores   percentage    percentage of         severity
per image            of pictures   reflectivity cores
0                    70%           0%
1-5                  20%           29%                   easy
6-10                 5%            25%                   easy
11-15                2%            19%                   medium
16-20                1%            13%                   medium
21-25                              8%                    hard
26 or more                         6%                    hard

Keeping in mind that during less than 1 % of the time of 1999 more than 20 reflectivity cores were observed in one radar image (containing less than 6 % of all reflectivity cores, see Table 5), the poorer impression of the performance of TRACE3D in hard cases is put into perspective. The results of TRACE3D depend on the parameters given in Table 1; the cited values are achieved with the best known parameter set. Varying the parameters influences the results in all cases in the same sense, so no compromise between good performance in easy cases and in hard cases is necessary.

6 Cell tracking in 1999

The C-band radar at the Forschungszentrum Karlsruhe (Germany) is sited in the Upper Rhine Valley. In 1999 it operated with a temporal resolution of 10 min until 10.08.99; then the resolution was increased to one data set every 5 min. Roughly 32000 radar images are stored, 17300 at 10 min resolution and 14700 at 5 min. Radar images with (nearly) no precipitation are not considered. About 50000 reflectivity cores are identified within these radar images (30000 at 10 min, 20000 at 5 min). The distribution of the number of reflectivity cores within one radar image (the severity) is given in Table 5. Note that more than 50 % of all reflectivity cores are found in "easy" cases (up to 10 reflectivity cores within one radar image). Thus the performance of TRACE3D averaged over a year should be quite good. In most cases tracking connects the observed reflectivity cores to a single storm. But sometimes the storm splits or merges, producing a more complicated path. Especially in cases that are tracked over a rather long time span, not only single storms but huge systems of interconnected storms are formed, and it is not easy to (automatically) separate them into single storms. Therefore such interconnected reflectivity cores are called a family (of reflectivity cores). In most cases a family comprises a thunderstorm. Fig. 2 shows a not too complicated example of a family. A core entered the range of the radar on Sept. 6th at 11:54 LT (pos. 1). It then moved to the north-east and split and merged again twice. At 12:29 LT it split a third time (pos. 8). The western part died at 12:39 (pos. 10a) whereas the eastern part ceased at 12:44 (pos. 11b).
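Grouping the tracked reflectivity cores into families amounts to finding connected components over all parent-child assignments. The following union-find sketch is one possible illustration; the data layout and names are invented and not taken from TRACE3D.

def group_into_families(core_ids, assignments):
    """Collect interconnected reflectivity cores into 'families'.

    core_ids: iterable of core identifiers; assignments: iterable of
    (parent_id, child_id) pairs, including those created by splitting
    and merging. Returns a list of sets, one set per family.
    """
    core_ids = list(core_ids)
    parent = {c: c for c in core_ids}

    def find(c):                     # find with path compression
        while parent[c] != c:
            parent[c] = parent[parent[c]]
            c = parent[c]
        return c

    for a, b in assignments:         # union the two ends of every assignment
        parent[find(a)] = find(b)

    families = {}
    for c in core_ids:
        families.setdefault(find(c), set()).add(c)
    return list(families.values())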


Fig. 2. An example of a complex family. The lines do not represent the movement of any particle; they connect the centers of the cores. Splitting and merging changes the number of cores. The center of the combined system of reflectivity cores 4a and 4b is somewhere in between these two.

Fig. 4. Spatial distribution of locations where storms are first detected. Each square represents 6 × 6 km². The grayscale gives the number of births per square.

Nearly 8200 families are identified in 1999. On average a family is observed for 21 minutes. Fig. 3 gives the distribution of the lifetimes of families. Roughly 5 % of the families last for more than one hour; only 33 families last for more than 120 minutes. This result points to a severe problem for investigating the growth and decay of typical thunderstorms in detail: with a repetition rate of one radar image every 5 min, a thunderstorm is captured on average in only about 4 observations, which is information too crude for a profound analysis. Fig. 4 gives the distribution of the locations where the families are observed for the first time ("location of birth"). Obviously there is a dependence of the number of births on the distance from the radar that cannot be physically sound.

There are several reasons for this finding. First, attenuation may reduce the average reflectivity with distance from the radar, so the thresholds used in the cell detection algorithm are exceeded less often. Second, a reflectivity core has to consist of at least 10 beam volume elements, which demands a larger storm at larger distances than in closer regions. Third, reduced visibility due to orographic obstacles probably handicaps the identification of cells. This has to be studied in detail in the future. Besides the reduction of the number of births with distance, an increase is visible at the outer range. This is due to thunderstorms entering the range of the radar: they seem to be born near the outer limit. This effect is most clearly seen in the south-west because this is the prevailing wind direction in situations with deep convection in the Upper Rhine Valley.

Fig. 3. Distribution of lifetimes of families.

Acknowledgements. The authors want to thank Axel Seifert and Ulrich Blahak for producing reference values by their tracking by eye.

References

Gysi, H., Hannesen, R., and Beheng, K. D., A method for bright-band correction in horizontal rain intensity distributions, in Proceedings of the 28th Radar Conference, pp. 214-215, AMS, Austin, USA, 1997.

Wilson, J. W., Crook, N. A., Mueller, C. K., Sun, J., and Dixon, M., Nowcasting thunderstorms: A status report, Bull. Amer. Meteor. Soc., 79, 2079-2099, 1998.