
Job search monitoring intensity, unemployment exit and job entry: Quasi-experimental evidence from the UK☆

Duncan McVicar
School of Management and Economics, Queen's University Belfast, Belfast BT7 1NN, United Kingdom

Labour Economics 15 (2008) 1451–1468
Received 29 August 2006; received in revised form 18 January 2008; accepted 18 February 2008; available online 4 March 2008

☆ Thanks to the Northern Ireland Department for Social Development for financial support and access to data. Thanks also to all those that have helped assemble the data or have offered useful comments and suggestions over the course of the research and on earlier drafts. The views expressed are those of the author. Tel.: +44 0 2890 973297. E-mail address: [email protected]. doi:10.1016/j.labeco.2008.02.002

Abstract

Because unemployment benefit reforms tend to package together changes to job search requirements, monitoring and assistance, few existing studies have been able to empirically isolate the effects of job search monitoring intensity on the behaviour of unemployment benefit claimants. This paper exploits periods where monitoring has been temporarily withdrawn during a series of Benefit Office refurbishments — with the regime otherwise unchanged — to allow such identification. During these periods of zero monitoring the hazard rates for exits from claimant unemployment and for job entry both fall.

JEL: J64; J68
Keywords: Unemployment; Monitoring; Search assistance; Hazard rates; JSA

1. Introduction

Job search monitoring is the process of checking whether unemployed workers engage in sufficient search activity to qualify for receipt of unemployment benefits. Its purpose is to counteract the search disincentive effect of such benefits. Johnson and Klepinger (1994), Fredriksson and Holmlund (2005) and Manning (2005) present models in which search effort increases with the threshold search level required for eligibility. Intuitively, increasing the intensity of job search monitoring, which can be interpreted as the degree to which such search requirements are enforced, will have similar effects (for a discussion see Klepinger et al., 2002), and will therefore reduce the duration of unemployment spells and boost job entry rates.
van den Berg and van der Klaauw (2006), however, introduce some ambiguity to this prediction. They present a model which differentiates between formal job search and informal job search. Formal job search is the label given to all those job search activities that are monitored by the benefits agency, e.g. visits to the employment service office or time spent reading newspaper job advertisements. Informal job search, on the other hand, covers those search activities that are not monitored by the benefits agency, e.g. search through social networks. In this case, more intense job search monitoring leads to increased formal job search but reduced informal job search, with an ambiguous overall impact on unemployment duration and job entry rates depending on which type of job search is the more effective. Manning (2005) introduces ambiguity in a different way by showing that if search requirements are set too high, unemployed workers may respond by reducing search effort, ceasing to claim unemployment benefits, and moving into unregistered (non-claimant) unemployment or inactivity rather than into employment.

Even without these theoretical ambiguities, there is a clear need for empirical evidence on the effects of job search monitoring because of its widespread use (e.g. see Martin and Grubb, 2001). The theoretical ambiguities outlined above make such evidence even more crucial. That evidence, however, is rather thin on the ground. The main reason for this is that benefit reforms, although extensively evaluated, have tended to package together changes to job search monitoring with other changes, e.g. to job search requirements, job search assistance, or benefit rates, so preventing separate identification of monitoring impacts. Further, in the few cases where studies have looked for such impacts, they have found contrasting results.

This paper exploits exogenous periods where job search monitoring was temporarily suspended, during a series of sometimes lengthy Benefit Office refurbishments across one region of the UK (Northern Ireland), to provide new quasi-experimental evidence on the impact of monitoring intensity on male unemployment durations and on the flow of unemployed men into employment and into other non-employment states including education or training and inactivity. Although job search monitoring was completely suspended during these periods, job search requirements, job search assistance services, and all other benefit characteristics were unchanged. So, these refurbishments represent a rare opportunity to identify the impact of monitoring intensity. The resulting estimates show that the suspension of monitoring increased average unemployment duration and reduced the hazard rate for job entry. In the context of van den Berg and van der Klaauw (2006), this suggests the positive impact of monitoring on formal search dominates the negative impact on informal search, at least for the monitoring intensities and benefit claimants considered here. Suspension of monitoring also affects the hazards for exits to non-employment states as Manning (2005) suggests, although the evidence in this respect is more mixed.

The remainder of this paper is set out as follows. The following section briefly reviews the existing empirical literature on the impacts of job search monitoring.
Section 3 provides details of the Benefit Office refurbishment programme and Section 4 discusses identification of the monitoring impacts. Section 5 describes the data and the hazard functions to be estimated. Section 6 presents and discusses the estimation results and Section 7 concludes.

2. Existing Empirical Literature

Unemployment benefit systems, and changes to unemployment benefit systems, tend to couple together job search requirements and monitoring (the ‘stick’) with job search assistance
(the ‘carrot’). Consequently, evaluations of such changes are rarely able to separately identify the impact of monitoring from that of job search requirements and/or assistance (for US reviews see Meyer, 1995; Blank, 2002; for the Netherlands see Gorter and Kalb, 1996). Although much of this evaluation literature finds a positive impact of search assistance together with search requirements and/or monitoring on the unemployment exit probability, it is not clear whether this is because of increased monitoring, search assistance, search requirements, or some combination. The same is true for studies of earlier reforms in the UK (e.g. Dolton and O'Neill, 1996; Manning, 2005). A related literature on benefit sanctions generally finds that imposition of a sanction increases the probability of exiting unemployment benefits (e.g. van den Berg et al., 2004; Abbring et al., 2005; Lalive et al., 2005). Although individuals tend to be more closely monitored after imposition of a sanction, the impact of this tougher monitoring is again not separately identified from the impact of any associated additional job search assistance and/or the impact of the benefit sanction itself.

Two empirical studies, however, have been able to identify the effects of job search monitoring separately from those of other changes. Both use US experimental data but draw different conclusions. Klepinger et al. (2002) examine the impacts of the Maryland Work Search Demonstration, which features one experiment where search monitoring of Unemployment Insurance (UI) claimants is scrapped and another where it is intensified by introducing telephone verification of job contacts. They find the former leads to a marginally significant increase in UI claim duration of half of one week and the latter leads to a significant reduction in UI claim duration of one week. Anderson (2001), commenting on an earlier draft, notes that this represents a 10% difference in average UI claim duration between the zero monitoring and tough monitoring regimes. Ashenfelter et al. (2005), on the other hand, find no significant impact on UI claim duration of intensified monitoring — again in the form of increased telephone verification of job contacts — from another experimental reform conducted across four US states. They argue that the UI experiment effects found by Meyer (1995) and others are therefore likely driven by changes in job search assistance rather than job search monitoring. Anderson (2001), again commenting on an earlier draft, argues that the Ashenfelter et al. experiments may have involved monitoring changes that were just too minor to pick up. Similarly, Klepinger et al. (2002) suggest that right-censoring and small sample size in the Ashenfelter et al. experiments reduce the chances of finding significant impacts.

Outside of the US, what little evidence exists is more in line with Klepinger et al. (2002) than Ashenfelter et al. (2005). Lalive et al. (2005) find ex ante effects of benefit sanctions in Switzerland, with claim duration shorter for individuals in areas where there is a greater threat of sanctions being imposed than in others. They interpret the rate at which sanction warning letters are issued as a proxy for the variation in monitoring regime toughness, but warn that this may be correlated with other regime differences, including the level of job search assistance provided.
McVicar (2006) uses aggregate local area administrative data from Northern Ireland to show that suspension of monitoring — during the same refurbishments examined in the current paper — increases the number of registered unemployed.

3. The Institutional Background and the Benefit Office Refurbishments

There are two variants of unemployment benefit (called Jobseeker's Allowance (JSA)) in the UK. Insurance-based JSA — the UK version of UI — is paid to unemployed workers with
sufficient work history, for up to six months, at a rate otherwise independent of previous earnings level. Income-based JSA — essentially social assistance for the unemployed — is paid to unemployed workers who meet the means testing criteria and who are ineligible for Insurance-based JSA because of insufficient work history or because they have exhausted their current period of eligibility. Unemployed workers on both forms of JSA make up the ‘registered unemployed’ and are subject to exactly the same regime of job search requirements and monitoring and have access to the same public job search assistance services.

Prior to the Benefit Office refurbishments considered in this paper, the relevant institutional details of JSA were as follows. JSA claimants were required and assisted to draw up a ‘Jobseeker's Agreement’ (JSAg) at the start of their claim committing them to a programme of job search, e.g. specifying search methods, use of public search assistance services, and the number of employer contacts per week (see the appendix to Manning (2005) for a template). These agreements were drawn up in the local Benefit Office — of which there are 35 spread across Northern Ireland — and can be thought of as setting out the job search requirements for JSA claimants. This semi-contractual approach was enforced through fortnightly face-to-face interviews requiring JSA claimants to report back to their local Benefit Office to provide evidence of job search activity in line with their agreed programme, or else face benefit sanctions. These fortnightly interviews therefore performed the job search monitoring function. Public job search assistance services, e.g. display of registered vacancies, registered vacancy database searching or help with making applications, were offered in the same town/area but by a different set of staff and at a physically separate office location called a Job Centre.

Beginning with two pilot areas in 1999 and then rolling out area by area across Northern Ireland between 2001 and 2008, a reform called Jobs and Benefits (J&B) has gradually been introduced with the aim of further strengthening the association between job search and benefit receipt. The reform co-locates the previously separate Benefit Offices (where job search requirements were set and fortnightly monitoring took place) and local Job Centres (where job search assistance services were offered) in a single local ‘Jobs and Benefits Office’ (JBO).1 Under the new regime JSAgs continue to be set as before, but more time is assigned to each fortnightly monitoring interview than was previously the case. The additional time is to allow the fortnightly interviews to include some elements of job search assistance, e.g. with advisors now able to suggest and submit electronic applications to suitable registered vacancies during the interview. Advisors can now also use the same online system to check the number of applications made to registered vacancies over the fortnight during the interview. In addition, JSA claimants now have quarterly in-depth search monitoring and assistance meetings with a ‘personal advisor’ assigned for the duration of the claim. All other job search assistance services continue to be offered as before, albeit in the new JBOs rather than the old Job Centres. Taken together, the J&B reforms represent a strengthening of job search monitoring and a boost to job search assistance services for JSA claimants.

1 J&B has similarities with reforms introduced around the same time in the rest of the UK under the labels ‘ONE’ and ‘Jobcentre Plus’ (for more details see Karagiannaki, 2006).
Although estimates of the impact of the J&B reforms are presented in this paper, they are not its primary focus because, like many previous reforms, they package together changes in monitoring intensity with changes to job search assistance. This paper focuses instead on the run-up to the implementation of the new J&B regime in each local area. Moving from a situation where a given area's job search assistance and job search monitoring services take place in two separate locations to one where they are delivered in a single location requires refurbishment of
existing Benefit Office buildings. Because of the disruption caused during these refurbishments, normal fortnightly monitoring interviews have been suspended during the work. On average across the different local areas these periods of suspended fortnightly interviews have lasted for eight months. Because these fortnightly interviews serve the purpose of job search monitoring, their suspension amounts to a reduction in monitoring intensity. In fact, because no substitute postal or telephone job search monitoring was put in place during these refurbishments, parts of Northern Ireland were running an unemployment benefit regime with zero monitoring of job search.

The work of Benefit Offices has not, however, been entirely suspended during these refurbishments. The drawing up of JSAgs at the start of new claims has been unaffected. Similarly, existing claimants were covered by their existing JSAgs, so no claimant experienced a change to job search requirements. Also, because under the original regime all job search assistance was delivered through physically separate Job Centres, no claimant experienced a change to the nature or amount of job search assistance services offered during refurbishments. So the refurbishment periods in the run-up to implementation of J&B present a rare opportunity — too good to miss — to examine the impact of moving from a regime with regular face-to-face job search monitoring to one with no monitoring at all.

4. Identification of the Impacts of Monitoring Intensity

The area-by-area staging of the Benefit Office refurbishments discussed above offers a rare opportunity to obtain unbiased estimates of monitoring impacts. To obtain such estimates we require that these periods of zero monitoring can be plausibly treated as exogenous, i.e. that we have a natural experiment. In other words, we require that the assignment of zero monitoring (the ‘treatment’) is not itself determined by the behaviour of the unemployed, and further, that it is uncorrelated with omitted variables that are themselves correlated with search behaviour.

For the purposes of delivery of services to JSA claimants, Northern Ireland is divided into 35 administrative areas, each served originally by a separate Benefit Office and a separate Job Centre. J&B has so far (at the time of writing) been implemented in 25 of these 35 areas, with each now served by a single JBO. This has been achieved by means of a staged roll-out, area by area, starting with two pilot areas in 1999. Table 1 presents the roll-out schedule, with start and end dates for the periods of zero monitoring and the subsequent introduction of J&B. Because the treatment is assigned at the area level and not the individual level — with all JSA claimants treated equally within a given area at a given time — we can be confident that there is no direct relationship between individual claimant characteristics and behaviour and treatment assignment. Contrast this with, say, an enhanced monitoring regime targeted at claimants who have already been sanctioned for insufficient search. This is the first point to support the exogeneity of the refurbishments.

There are three possible ways, however, in which such an area-level assignment of treatment could still be endogenous. First, assignment or the timing of assignment might depend on the actions of the Benefit Offices themselves, e.g. if managers are required to ‘apply’ for treatment in some way.
This can be quickly ruled out in this case: the roll-out structure was imposed from above at the regional (Northern Ireland) level by the relevant government department ex ante. Second, these regional policy makers might have deliberately targeted refurbishments (or structured the ordering of the refurbishment roll-out) on areas with particular characteristics or where claimants display particular behaviours, e.g. low (or high) average unemployment durations. Again, this can be plausibly ruled out: there was no declared relation between area or average claimant characteristics and treatment status or roll-out order, and none of the relevant policy makers that we spoke to during the course of the research knew of any such undeclared relation.
Table 1. The Roll-out Schedule, with Zero Monitoring Dates and Durations
Columns: JBO Area | Start of Zero Monitoring Period (day/month/year) | End of Zero Monitoring Period/Start of J&B (day/month/year) | Duration of Zero Monitoring (days)

Dungannon | 01/01/1999 | 01/03/1999 | 59
Lisburn | 01/01/1999 | 01/03/1999 | 59
Lisnagelvin | 01/10/2001 | 25/02/2002 | 157
Magherafelt | 05/11/2001 | 19/03/2002 | 134
Ballymoney | 14/01/2002 | 13/05/2002 | 119
Portadown | 15/12/2001 | 03/06/2002 | 170
Foyle | 14/01/2002 | 03/06/2002 | 139
Knockbreda | 25/03/2002 | 23/09/2002 | 182
Falls Road | 25/02/2002 | 14/10/2002 | 231
Newtownabbey | 18/02/2002 | 20/01/2003 | 336
Omagh | 18/08/2002 | 30/06/2003 | 316
Kilkeel | 02/01/2003 | 06/07/2003 | 185
Newry | 11/08/2002 | 26/08/2003 | 380
Shankill Road | 09/01/2003 | 13/10/2003 | 277
Enniskillen | 02/01/2003 | 24/11/2003 | 326
Limavady | 21/04/2003 | 02/02/2004 | 287
Antrim | 07/04/2003 | 23/02/2004 | 322
Shaftesbury Square | 04/08/2003 | 29/03/2004 | 237
Lurgan | 09/06/2003 | 19/04/2004 | 314
Holywood Road | 28/10/2002 | 13/09/2004 | 675
Larne | 08/12/2003 | 01/11/2004 | 327
Carrickfergus | 17/11/2003 | 01/11/2004 | 348
Banbridge | 16/06/2004 | 21/02/2005 | 250
Armagh | 17/05/2004 | 30/03/2005 | 317
Coleraine | 04/05/2005 | 20/06/2005 | 47

Note: Periods of zero monitoring end on the day before the start of J&B.

Taken together, these two points further support the exogeneity of the refurbishments and suggest the correct model for estimating their impacts is a single-equation one (see Section 5) and not one requiring an additional structural equation for treatment assignment (see Abbring and van den Berg (2003) for a contrasting example).

The third way in which area-level assignment might lead to endogenous treatments in the empirical model set out in Section 5 is if the treatments are coincidentally or unknowingly (on the part of the policy makers) correlated with omitted area or average claimant characteristics or time factors, which might themselves be correlated with claimant behaviour. This is less easily ruled out. A guide is to compare characteristics that we can observe. Fig. 1 and Table 2 present such evidence. To simplify the analysis only the ten local areas that do not suspend monitoring during the period covered by the data are counted as comparison areas, the period prior to zero monitoring is taken as running until 1st October 2001 when monitoring was suspended in the first of the non-pilot areas (Lisnagelvin), and the pilot areas (Dungannon and Lisburn) are omitted. (Because different areas suspend monitoring at different times, all other areas yet to suspend monitoring can also act as comparison areas for each particular treatment.)2

2 Even if there are ‘anticipation effects’ in the weeks prior to suspension of monitoring, or spillover effects between adjacent treatment and control areas, this will not affect more than one or two of the comparison areas for any specific treatment.
Fig. 1. Kaplan-Meier Hazard Functions, Treatment Areas v. Comparison Areas Prior to Zero Monitoring. Notes: Because different areas introduce zero monitoring at different times, the ‘start’ of zero monitoring is taken here as the start of zero monitoring for the first non-pilot area, Lisnagelvin. (The pilots Lisburn and Dungannon are omitted from the above estimates.) Spells ending after that date are treated as right censored. Analysis time is measured in days.

Fig. 1 shows very similar Kaplan-Meier hazard functions for all spells on JSA in the treatment areas prior to refurbishment and all spells in the comparison areas over the same period. Table 2, however, shows statistically significant differences between treatment and comparison area sample means for some observed covariates, most notably between the proportions in the two groups seeking managerial or professional jobs. Remember that those treatment areas yet to suspend monitoring are acting as further comparison areas for each of the treatments, so Table 2 will overstate such differences. Nevertheless, as a test of robustness the main models (as set out in Tables 3 and 4) are re-estimated on a subsample of the areas which omits Benefit Offices in the urban centres of Belfast and Londonderry. This alternative sample displays no significant differences in observed covariate means between treatment and comparison areas. The results are insensitive to estimation on this alternative sample (see Table 5).
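The pre-treatment hazard comparison behind Fig. 1 can be reproduced with standard survival tools. The sketch below is illustrative only: it assumes a hypothetical spell-level file with columns duration (days), exited (1 if the spell ended before censoring) and treated_area (1 for areas later refurbished), and it computes a simple life-table (discrete-time) hazard by group rather than the smoothed Kaplan-Meier-based estimate plotted in the paper.

```python
import pandas as pd

def empirical_hazard(df, width=14):
    """Discrete-time hazard: exits in [t, t + width) divided by spells still at risk at t."""
    out, t = [], 0
    while (df["duration"] >= t).any():
        at_risk = (df["duration"] >= t).sum()
        exits = ((df["duration"] >= t) & (df["duration"] < t + width)
                 & (df["exited"] == 1)).sum()
        out.append({"interval_start": t, "at_risk": at_risk, "hazard": exits / at_risk})
        t += width
    return pd.DataFrame(out)

# spells.csv is a hypothetical extract: one row per pre-October-2001 JSA spell,
# with the duration, exited and treated_area columns described above.
spells = pd.read_csv("spells.csv")
for flag, group in spells.groupby("treated_area"):
    print("treated_area =", flag)
    print(empirical_hazard(group).head(10))
```

Plotting the two hazard series against analysis time (in days, as in Fig. 1) gives the visual balance check used in the paper.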

Table 2. Covariate Means (Standard Deviations)
Columns: All Spells | All (Non-pilot) Offices, Prior to Zero Monitoring | 23 (Non-pilot) Treatment Offices, Prior to Zero Monitoring | 10 Comparison Offices, Prior to Zero Monitoring

Age, Years | 30.85 (11.45) | 30.39 (11.35) | 30.46 (11.34) | 30.23⁎⁎ (11.37)
Married | .241 | .259 | .258 | .261
Managerial or Professional Job Sought (SOC1 or SOC2) | .087 | .087 | .091 | .077⁎⁎
Other Skilled Job Sought (SOC3-SOC8) | .604 | .634 | .639 | .622⁎⁎
Duration, Days | 182.85 (240.48) | 137.98 (169.22) | 136.94 (168.24) | 140.52⁎⁎ (171.57)
No. Spells | 423182 | 178458 | 126653 | 51805

Notes: Because different areas introduce zero monitoring at different times, the ‘start’ of zero monitoring is treated here as the start of zero monitoring for the first non-pilot area, Lisnagelvin. (The pilots Lisburn and Dungannon are omitted.) ⁎⁎ Denotes significant difference in means between treatment and control groups at the 99% confidence level and ⁎ at the 95% confidence level.
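The treatment/comparison contrasts in Table 2 are differences in covariate means with two-sample significance tests (the ⁎/⁎⁎ markers). A minimal sketch of such a balance check is given below; the file and column names (treated_area, age, married, and so on) are hypothetical stand-ins for the administrative spell data, not the variables' actual names.

```python
import pandas as pd
from scipy import stats

# Hypothetical spell-level data for the pre-zero-monitoring period,
# restricted to non-pilot offices, with a treated_area indicator.
spells = pd.read_csv("spells_pre_period.csv")
covariates = ["age", "married", "managerial_prof_sought", "other_skilled_sought"]

for var in covariates:
    treat = spells.loc[spells["treated_area"] == 1, var].dropna()
    comp = spells.loc[spells["treated_area"] == 0, var].dropna()
    t_stat, p_val = stats.ttest_ind(treat, comp, equal_var=False)  # Welch two-sample t-test
    print(f"{var}: treatment mean {treat.mean():.3f}, "
          f"comparison mean {comp.mean():.3f}, p = {p_val:.3f}")
```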

Table 3. Single Risk Models, Coefficients (Standard Errors), All Exits from Unemployment
Columns: (1) Weibull Baseline, Gamma Distributed Unobserved Heterogeneity | (2) Weibull Baseline, Gamma Distributed Unobserved Heterogeneity | (3) Cox, w/o Unobserved Heterogeneity | (4) Piecewise Constant Baseline, w/o Unobserved Heterogeneity

Zero Monitoring | −.218⁎⁎ (.009) | −.172⁎⁎ (.010) | −.147⁎⁎ (.009) | −.111⁎⁎ (.013)
J&B | .216⁎⁎ (.007) | .310⁎⁎ (.010) | .267⁎⁎ (.010) | .375⁎⁎ (.016)
Age | −.020⁎⁎ (.0002) | −.020⁎⁎ (.0002) | −.019⁎⁎ (.0002) | −.017⁎⁎ (.0003)
Married | .108⁎⁎ (.006) | .110⁎⁎ (.006) | .083⁎⁎ (.005) | .128⁎⁎ (.007)
Managerial or Professional Job Sought (SOC1-SOC2) | .497⁎⁎ (.008) | .497⁎⁎ (.008) | .455⁎⁎ (.006) | .448⁎⁎ (.009)
Other Skilled Job Sought (SOC3-SOC8) | .230⁎⁎ (.005) | .231⁎⁎ (.005) | .195⁎⁎ (.004) | .193⁎⁎ (.005)
Area Fixed Effects | Yes⁎⁎ | Yes⁎⁎ | Yes⁎⁎ | Yes⁎⁎
Time Fixed Effects (Quarterly Dummies) | Yes⁎⁎ | Yes⁎⁎ | Yes⁎⁎ | No
Area-Specific Time Quadratics (Months) | No | Yes⁎⁎ | Yes⁎⁎ | Yes⁎⁎
Weibull p | 1.047⁎ (.002) | 1.047⁎⁎ (.002) | — | —
Gamma θ | .253⁎⁎ (.003) | .254⁎⁎ (.003) | — | —
Number of Spells | 388359 | 388359 | 388359 | 253786
Number of Exits | 387771 | 387771 | 387771 | 177567
Number of Claimants | 171598 | 171598 | 171598 | 124271

Notes: ⁎ Denotes statistical significance at the 95% confidence level and ⁎⁎ at the 99% confidence level. The coefficients are interpretable as semi-elasticities. In the case of the binary treatment dummies they indicate the percentage impact on the hazard rate (not percentage point impact) of treatment. For the age variable the coefficients indicate the percentage impact on the hazard rate of a one year increase in claimant age. The omitted category for job sought is unskilled occupations (SOC9). Piecewise constant estimates are estimated on a restricted sample taken from every other area in the roll-out schedule, with monthly duration groups, with zero monitoring and J&B start and end dates assigned to the nearest month, and with spells treated as right-censored after 30 months.

Table 4. Independent Competing Risks, Weibull Baseline and Gamma Distributed Unobserved Heterogeneity, Coefficients (Standard Errors)
Columns: Exits to Employment | Exits to Education and Training | Exits to Other Benefits | Exits to Other Destinations

Zero Monitoring | −.257⁎⁎ (.016) | .356⁎⁎ (.031) | −.078⁎⁎ (.029) | −.286⁎⁎ (.016)
J&B | −.032 (.018) | .889⁎⁎ (.033) | .530⁎⁎ (.029) | .362⁎⁎ (.016)
Age | −.021⁎⁎ (.0004) | −.055⁎⁎ (.0008) | .019⁎⁎ (.0006) | −.029⁎⁎ (.0004)
Married | .252⁎⁎ (.011) | −.143⁎⁎ (.021) | .077⁎⁎ (.014) | .019 (.010)
Managerial or Professional Job Sought (SOC1-SOC2) | .739⁎⁎ (.013) | .359⁎⁎ (.027) | −.818⁎⁎ (.031) | .462⁎⁎ (.013)
Other Skilled Job Sought (SOC3-SOC8) | .384⁎⁎ (.008) | .134⁎⁎ (.014) | −.151⁎⁎ (.012) | .167⁎⁎ (.007)
Area Fixed Effects | Yes⁎⁎ | Yes⁎⁎ | Yes⁎⁎ | Yes⁎⁎
Time Fixed Effects (Quarterly Dummies) | Yes⁎⁎ | Yes⁎⁎ | Yes⁎⁎ | Yes⁎⁎
Area-Specific Time Quadratics (Months) | Yes⁎⁎ | Yes⁎⁎ | Yes⁎⁎ | Yes⁎⁎
Weibull p | 1.100⁎⁎ (.002) | 1.207⁎⁎ (.005) | 1.194⁎⁎ (.005) | 1.006⁎⁎ (.002)
Gamma θ | 1.161⁎⁎ (.010) | .535⁎⁎ (.021) | .562⁎⁎ (.018) | .514⁎⁎ (.007)
Number of Spells | 388359 | 388359 | 388359 | 388359
Number of Exits | 170370 | 30124 | 39827 | 147450
Number of Claimants | 171598 | 171598 | 171598 | 171598

See notes to Table 3.

Table 5. Sensitivity Analysis, Impact of Zero Monitoring, Coefficients (Standard Errors)
Columns: All Exits | Exits to Employment | Exits to Education or Training | Exits to Other Benefits | Other Exits

Proportional Hazards Metric
Weibull baseline, gamma distributed unobserved heterogeneity, independent competing risks | −.172⁎⁎ (.010) | −.257⁎⁎ (.016) | .356⁎⁎ (.031) | −.078⁎⁎ (.029) | −.286⁎⁎ (.016)
Weibull baseline, w/o unobserved heterogeneity, no clustering | −.196⁎⁎ (.009) | −.286⁎⁎ (.014) | .322⁎⁎ (.031) | −.140⁎⁎ (.028) | −.281⁎⁎ (.015)
Weibull baseline, w/o unobserved heterogeneity, errors clustered by area | −.196⁎⁎ (.038) | −.286⁎⁎ (.026) | .322⁎⁎ (.069) | −.140⁎⁎ (.041) | −.281⁎⁎ (.071)
Weibull baseline, gamma distributed unobserved heterogeneity, block bootstrap | −.172⁎⁎ (.052) | −.257⁎⁎ (.039) | .356⁎⁎ (.096) | −.078 (.065) | −.286⁎⁎ (.095)
Cox, w/o unobserved heterogeneity, no clustering | −.147⁎⁎ (.009) | −.118⁎⁎ (.014) | .060⁎ (.030) | −.146⁎⁎ (.028) | −.254⁎⁎ (.015)
Cox, w/o unobserved heterogeneity, errors clustered by area | −.147⁎⁎ (.033) | −.118⁎⁎ (.023) | .060 (.053) | −.146⁎⁎ (.041) | −.254⁎⁎ (.069)
Piecewise constant baseline, w/o unobserved heterogeneity, no clustering | −.111⁎⁎ (.013) | −.041⁎ (.019) | .171⁎⁎ (.045) | −.198⁎⁎ (.041) | −.205⁎⁎ (.020)
Weibull baseline, gamma distributed unobserved heterogeneity, dependent competing risks | −.218⁎⁎ (.009) | −.253⁎⁎ (.013) | .215⁎⁎ (.027) | −.028 (.025) | −.323⁎⁎ (.014)
Weibull baseline, gamma distributed unobserved heterogeneity, independent competing risks, alternative sample | −.119⁎⁎ (.014) | −.239⁎⁎ (.022) | .283⁎⁎ (.050) | −.089⁎ (.044) | −.111⁎⁎ (.024)

Accelerated Failure Time Metric
Weibull baseline, gamma distributed unobserved heterogeneity, independent competing risks | .163⁎⁎ (.009) | .234⁎⁎ (.014) | −.295⁎⁎ (.026) | .123⁎⁎ (.024) | .284⁎⁎ (.016)
Lognormal baseline, gamma distributed unobserved heterogeneity, independent competing risks | .119⁎⁎ (.012) | .121⁎⁎ (.017) | −.380⁎⁎ (.034) | −.056 (.030) | .299⁎⁎ (.019)
Lognormal baseline, w/o unobserved heterogeneity | .123⁎⁎ (.012) | .185⁎⁎ (.018) | −.372⁎⁎ (.033) | −.049 (.030) | .309⁎⁎ (.020)
Lognormal baseline, w/o unobserved heterogeneity, errors clustered by area | .123⁎⁎ (.036) | .185⁎⁎ (.029) | −.372⁎⁎ (.073) | −.049 (.048) | .309⁎⁎ (.085)

Notes: Piecewise constant estimates are estimated on a restricted sample taken from every other area in the roll-out schedule, with monthly duration groups, with zero monitoring and J&B start and end dates assigned to the nearest month, and with spells treated as right-censored after 30 months. The alternative sample for the Weibull model omits the urban centres of Belfast and Londonderry (see Section 4).

Even with the full data set, differences between treatment and comparison areas will only bias estimated treatment effects to the extent that they are correlated with hazard rates and omitted from the model. Trivially, therefore, we are not concerned with differences in the proportion of unemployed seeking managerial or professional jobs per se because these are controlled for in the model set out in Section 5. But of course such observed differences may signal important unobserved differences between areas, e.g. in terms of the underlying skill levels of the unemployed or available vacancies.

This is where the area-panel nature of the data comes in useful (see Section 5). To the extent that any unobserved differences between treatment and comparison areas are time-invariant they can be controlled for by inclusion of area fixed effects (see Cipollone and Rosolia (2006) for a recent application of this argument). Similarly, if there are common time-specific unobserved factors that influence the behaviour of the registered unemployed then they can be controlled for with time fixed effects. Further, by including area-specific quadratics in time, time-varying differences between areas such as unobserved trends or area-specific shocks are at least partly controlled for. Additional control in this respect is given by the fact that there are 25 treatment areas undergoing treatments at different times — in contrast to the more usual case with one treatment area — which will act to ‘average out’ any remaining area-specific shocks not otherwise controlled for, and also by the inclusion of an individual-level term for unobserved heterogeneity in the model set out in Section 5. This high degree of control for unobserved factors provides further support for treating the periods of zero monitoring as plausibly exogenous.

5. Data and Approach

The paper uses administrative data on all recorded male JSA claims in Northern Ireland starting between 1st September 1997 and 12th January 2006. There are 406,196 recorded spells starting over this period, with many claimants observed for multiple spells. The average age of a claimant is 30 years, and the mean duration of a spell is 183 days. For each spell we observe the JBO or Benefit Office where the claim is made, the recorded destination on ending the claim, the type of job sought, and the age and marital status of the claimant. Because each claimant's JBO is known, dummy variables can be constructed for zero monitoring and J&B which take the value 1 for all covered spells or parts of spells and 0 otherwise. Data are not otherwise collected at JBO/Benefit Office level, so Eq. (1) below includes area-level fixed effects and area-specific quadratics in time. Quarterly time fixed effects are also included to control for regional-level common time factors. Covariate means are presented in Table 2.

In common with much of the empirical unemployment duration literature the paper adopts a reduced-form mixed proportional hazards (MPH) approach to estimation. Specifically, the paper estimates the following single risk hazard function for JSA exit:

h(t_j \mid \alpha) = \alpha \, h_0(t) \exp(M_{jt}\delta + X_{jt}\beta),    (1)

for individuals j = 1,…,N, where α ~ d(1,φ) captures the multiplicative effect of individual-level unobserved heterogeneity on the hazard rate; h_0(t) denotes the baseline hazard (showing how the hazard rate varies with the duration of the spell); M_jt is a binary dummy indicating a zero monitoring regime; and X_jt is the set of observed characteristics, fixed effects and time quadratics discussed above, including a dummy for periods covered by the J&B regime, all of which essentially act to shift the baseline hazard function up or down depending on their sign. The daily data are treated as continuous.

Using the information on the destination of JSA claimants on ending a claim also allows estimation of a competing risks version of the MPH model as given by (2), where k denotes the exit destination. Given the theoretical predictions of monitoring impacts on search behaviour, the
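The key data step implied by Eq. (1) is turning each spell into episodes on which the zero monitoring and J&B dummies are constant, so that they can enter the hazard model as time-varying covariates. A minimal sketch of that episode splitting is given below; the file names, column names and the split_spell helper are hypothetical stand-ins, not the author's actual processing code.

```python
import pandas as pd

# Hypothetical inputs: one row per spell (start/end dates, office code, exit indicator,
# covariates) and one row per office giving its zero monitoring and J&B dates (Table 1).
spells = pd.read_csv("jsa_spells.csv", parse_dates=["spell_start", "spell_end"])
offices = pd.read_csv("office_schedule.csv",
                      parse_dates=["zero_mon_start", "zero_mon_end", "jb_start"])
df = spells.merge(offices, on="office")

def split_spell(row):
    """Cut one spell at the dates where the local regime changes."""
    cuts = sorted({row["spell_start"], row["spell_end"],
                   *[d for d in (row["zero_mon_start"], row["zero_mon_end"], row["jb_start"])
                     if row["spell_start"] < d < row["spell_end"]]})
    episodes = []
    for start, stop in zip(cuts[:-1], cuts[1:]):
        episodes.append({
            "spell_id": row["spell_id"],
            # analysis time measured in days from the start of the spell
            "t_start": (start - row["spell_start"]).days,
            "t_stop": (stop - row["spell_start"]).days,
            "zero_monitoring": int(row["zero_mon_start"] <= start < row["zero_mon_end"]),
            "jb": int(start >= row["jb_start"]),
            # the exit indicator is switched on only for the final episode of the spell
            "exited": int(stop == row["spell_end"] and row["exited"] == 1),
        })
    return episodes

episodes = pd.DataFrame([e for _, r in df.iterrows() for e in split_spell(r)])
# `episodes` is now in (t_start, t_stop, exited] counting-process form and can be passed
# to any proportional hazards routine that accepts time-varying covariates.
```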

main interest here is in exits to employment (job entry). This interest is reinforced by the fact that no existing empirical study — including Klepinger et al. (2002) — has found theory-consistent evidence of such an impact specifically on job entry. A possible explanation for this fact, coupled with the positive monitoring impact on (non-specified) exits from unemployment found by Klepinger et al. (2002), is provided by Manning's (2005) model, where tough search requirements might lead to increased outflows from registered unemployment to unregistered unemployment or inactivity. The data here also allow a partial test of this hypothesis, with, in addition to exits to employment, separate identification of exits to education and training, exits to other benefits (these are exits to inactivity), and exits to ‘other destinations’. This last category includes many claims that end because the claimant failed to turn up to a fortnightly monitoring interview. It is generally believed that some — perhaps as many as half — of these cases represent exits to employment which are not explicitly recorded as such, although further information on this is not available for the period of interest in Northern Ireland. Other cases, however, are likely to represent what Manning has in mind for exits from registered unemployment to unregistered unemployment.

For tractability, the conventional assumption that the competing risks are independent conditional on X_j is adopted (see e.g. Katz and Meyer, 1990). In other words, it is assumed that the unobserved heterogeneity term is uncorrelated across exit destinations, although sensitivity to this assumption is discussed later. The cause-specific hazards are then

h_k(t_j \mid \alpha_k) = \alpha_k \, h_{0k}(t) \exp(M_{jt}\delta_k + X_{jt}\beta_k).    (2)
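Under the independence assumption, each destination-specific model in Eq. (2) can be estimated exactly like the single risk model, treating exits to all other destinations as right-censored. A short illustrative sketch of how the event indicators are built (hypothetical column names again):

```python
import pandas as pd

# Hypothetical episode-level data as in the previous sketch, with a `destination`
# column recorded on the final episode of each spell.
episodes = pd.read_csv("jsa_episodes.csv")
destinations = ["employment", "education_training", "other_benefits", "other"]

for k in destinations:
    data_k = episodes.copy()
    # An exit counts as an event for risk k only if it went to destination k;
    # exits to any other destination are treated as censoring times.
    data_k["event_k"] = ((data_k["exited"] == 1)
                         & (data_k["destination"] == k)).astype(int)
    # data_k can now be passed to the same hazard model routine used for all exits,
    # giving destination-specific coefficients of the kind reported in Table 4.
```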

van den Berg (2001) argues that the MPH model has been widely applied in previous research because of the parsimonious manner in which it combines the various elements of the hazard rate, and that it continues to be attractive because the properties of the model are well understood and because new results can be readily compared with existing findings. He notes, on the other hand, that the assumption of proportionality does not follow in general from job search models, that the βs are not structural parameters from the point of view of the theory, and that estimates might be sensitive to functional form assumptions, e.g. regarding the nature of the baseline hazard. Again, for reasons of tractability and also for ease of interpretation, the following section starts by estimating an MPH model assuming a Weibull baseline hazard and gamma distributed individual-level unobserved heterogeneity (sometimes referred to as gamma ‘frailty’). Sensitivity to functional form and other assumptions is then discussed.

6. Results and Discussion

Column two of Table 3 presents results from estimation of the MPH model for all JSA exits assuming a Weibull baseline hazard and gamma distributed unobserved heterogeneity, and including area and time fixed effects. Column three presents results from the same model with the addition of area-specific time quadratics. These are jointly significant at the 99% level, suggesting the presence of time-varying area-specific unobserved factors that affect the hazard rate. Subsequent discussion (and all other reported results) therefore focuses on models including these time quadratics. Note that the Weibull index parameter is greater than one, suggesting a gently upward-sloping hazard function. The gamma term captures significant unobserved heterogeneity at the level of the individual.

The estimated effects of the policy dummies and covariates are presented in coefficient form, i.e. the βs from Eq. (1), and are interpretable as semi-elasticities. So, in the case of the binary treatment dummies, they indicate the percentage impact on the hazard rate (not the percentage point impact) of treatment. For the age covariate the coefficients indicate
the percentage impact on the hazard rate of a one-year increase in claimant age. An alternative interpretation — giving the multiplicative effect of treatment on the hazard rate — is given by taking the exponential of the reported coefficients, bearing in mind that exp(β) ≈ 1 + β for small β. The fixed effects are jointly significant and the control variables act in the expected directions, e.g. with hazard rates lower for older and single men and for those seeking unskilled employment (probably proxying for skills and/or qualifications).

According to these single risk Weibull estimates, suspension of job search monitoring significantly reduces the hazard rate for JSA exit. So, in contrast to the findings of Ashenfelter et al. (2005), job search monitoring appears to matter. The magnitude of the effect — reducing the hazard rate by 17% — suggests an associated increase in average claim duration of 16%, somewhat larger than the 10% change in UI claim duration found by Klepinger et al. (2002). In other words, unemployment spells would on average last for 16% longer in a regime with no monitoring compared to a regime with the original level of monitoring under JSA. J&B — the new regime of tougher monitoring coupled with enhanced job search assistance — increases the hazard rate for JSA exits by an estimated 31%, which implies a reduction in average claim duration of almost one third.

These are big estimated treatment effects, but how confident can we be that they are robust? The assumption of a Weibull baseline, although commonly adopted, imposes monotonicity, which, according to Fig. 1, may be inappropriate. Incorrectly imposing such a restriction can lead to biased estimates of coefficients on time-varying covariates (Narendranathan and Stewart, 1993). This is important here because both the zero monitoring and J&B dummies are time-varying (the other covariates are measured at the start of the spell). To check sensitivity to this, Narendranathan and Stewart recommend estimating Cox Proportional Hazard (CPH) models with unrestricted baselines. The corresponding results are presented in the fourth column of Table 3. Note that unobserved heterogeneity cannot be included in a CPH model without the presence of multiple integrals of the same order as the number of individuals in the risk set (Han and Hausman, 1990). So, given the size of the data set here, the CPH model is estimated without controlling for unobserved heterogeneity. Encouragingly, the CPH results are very similar to the Weibull MPH results, with zero monitoring leading to an estimated 15% fall in the hazard rate and J&B leading to an estimated 27% rise.

Results for a piecewise constant model (see Meyer, 1990) — estimated as a further test of the robustness of the results to the assumed form of the baseline hazard — are presented in column five of Table 3. In this case the daily duration data are aggregated into monthly groups and zero monitoring start and end dates are assigned to the nearest months. Estimation of such a model requires expansion of the data so that each month of each spell is represented by a separate row in the data array. To keep things manageable the piecewise constant model is therefore estimated on a half sample of the data consisting of all spells on JSA taken from every second local area in the roll-out schedule. Zero monitoring is again estimated to have a negative impact on the hazard rate for unemployment exit, although with a slightly smaller magnitude — close to that found by Klepinger et al. (2002) — compared to the Weibull and Cox estimates. J&B again has a positive impact on the hazard rate. Similar estimates are obtained using the other half sample.
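As a rough guide to how hazard coefficients of this kind map into changes in average claim duration — a sketch using standard Weibull proportional hazards algebra, conditional on the frailty term and for a regime applied over the whole spell, not a calculation taken from the paper — the relevant relationships are:

h(t \mid x) = p\,\lambda\,t^{p-1} e^{x\beta}, \qquad S(t \mid x) = \exp\!\left(-\lambda t^{p} e^{x\beta}\right), \qquad \mathbb{E}[T \mid x] = \Gamma(1 + 1/p)\,\left(\lambda e^{x\beta}\right)^{-1/p}

so a treatment coefficient δ multiplies the hazard by e^{δ} and expected duration by e^{−δ/p} ≈ 1 − δ/p for small δ. With the Weibull shape parameter p close to one, as estimated here, a negative zero monitoring coefficient of moderate size therefore translates into a duration increase of broadly similar magnitude.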
So, where Anderson (2001) notes a 10% difference in average UI duration between Klepinger et al.'s (2002) zero monitoring and ‘tough’ monitoring regimes, here the estimated difference in average claim duration implied by the monitoring suspension estimates presented in Table 3 ranges from 10% to 16%. Two factors might contribute to these apparently larger monitoring impacts. First, the standard monitoring regime under JSA, involving fortnightly face-to-face interviews with Benefit Office staff, could be viewed by claimants as tougher than either of the (mail-based) monitoring regimes in the Maryland Work Search Demonstration, with suspension
of monitoring therefore representing a more significant regime change in Northern Ireland than in Maryland. Second, where the Maryland experiment randomly assigned a sample of UI claimants in each area to reduced or increased monitoring, in Northern Ireland it was the population of claimants in each area that was subjected to zero monitoring. This suggests greater scope for social interaction effects between claimants under the Northern Ireland zero monitoring regime, e.g. through leisure complementarities, to reinforce the ‘direct’ treatment effect on individual claimants.3

3 Thanks to an anonymous referee for suggesting this argument.

Table 4 presents estimates from the independent competing risks Weibull MPH model. Almost half (44%) of all exits from JSA are recorded by Benefit Office staff as exits to employment, 8% as exits to education or training and 10% as exits to other benefits — mostly incapacity benefits and other means-tested social welfare — with payment unconditional or less conditional upon job search. The remaining 38% are recorded under various categories, including failure to turn up to a fortnightly monitoring interview and exit to unknown destination, which are here classified as exits to ‘other destinations’. As for the single risk models, the fixed effects are jointly significant and control variables act on the hazard rates in the expected directions.

According to these estimates suspension of monitoring reduces the hazard rate for exits to employment (job entry) by 26%. Although we do not observe job search directly, this effect is consistent with a significant reduction in average job search effort. In the context of van den Berg and van der Klaauw (2006) the suggestion is that unemployed workers are not substituting for reduced formal search with increased informal search, or that any additional informal search is less effective than the lost formal search. To put the result in the more usual way, the hazard rate for job entry is increasing with the degree of monitoring, and significantly so in both the statistical and economic senses. In contrast, Klepinger et al. (2002) find no significant impact on employment entry. This, then, is the first such result reported in the literature, and to the extent that it can be generalized beyond time and place, it provides strong support for the standard theory and for policy reforms that seek to tighten search monitoring.

What of other kinds of exits from unemployment? Remember the prediction of Manning (2005) that making the unemployment benefit regime tougher could drive some claimants out of registered unemployment into unregistered unemployment or inactivity. Table 4 shows that the hazard rate for exits to education and training is increased during zero monitoring by an estimated 36%, albeit from a low base. The explanation for this apparent effect is not immediately clear. If we naively interpret education as a form of inactivity — a view that is not uncommon amongst parents of students — then Manning's (2005) model implies that removal of job search monitoring would make JSA more attractive and, if anything, would reduce the hazard rate for such exits. It may be that the threat of tougher monitoring to come under the J&B regime, with suspension of monitoring always preceding the implementation of J&B, drives this apparent effect, i.e. it is anticipatory (see Black et al., 2003).
It could also be that some unemployed workers respond to reduced monitoring of job search by increased search for education or training opportunities, in the spirit of van den Berg and van der Klaauw (2006). Although this estimated impact is robustly non-negative, it is not robustly significant, as shown in the sensitivity analysis presented in Table 5.

Suspension of job search monitoring reduces the hazard rate for exits to other benefits by an estimated 8%. This is the only category that unambiguously corresponds to exits to inactivity and is therefore a better test of Manning's (2005) prediction that unemployed workers might respond to a tougher regime by exiting registered unemployment into unregistered unemployment or
inactivity, i.e. by moving further from the labour market. The results are suggestive of such an effect, although it is small compared to the job entry effect. Table 5 also suggests sensitivity of this particular estimate to functional form and other assumptions.

Finally, suspension of job search monitoring reduces the hazard rate for exits to ‘other destinations’ by 29%. Because this category of exits includes those that are removed from the JSA Register simply because they fail to turn up to a fortnightly monitoring interview, there will inevitably be a negative impact on the hazard rate when all monitoring interviews are in any case suspended. Unfortunately, because of data constraints this somewhat ‘mechanical’ impact of suspension of monitoring cannot be separated here from what Manning (2005) has in mind when he predicts some of the claimant unemployed might respond to a tougher regime by ceasing to claim JSA but nevertheless remaining ‘non-claimant unemployed’. (But of course the finding that zero monitoring affects the hazards for the other competing risks shows that the overall effect of monitoring on unemployment duration reflects a ‘real’ impact and not just this ‘mechanical’ impact.)

Now consider the subsequent implementation of the new J&B regime combining tougher monitoring and enhanced job search assistance. The single risk estimates presented in Table 3 suggest that J&B increases the hazard rate for exits from unemployment by 31%. Interestingly, the competing risks estimates presented in Table 4 show that this overall effect is driven by positive impacts on the hazards for exits to education and training, to other benefits and to other destinations, and not by a positive impact on the hazard for exits to employment. In trying to explain this zero job entry impact we are constrained by the fact that we cannot separately identify the effects of monitoring changes from the effects of job search assistance changes making up the overall J&B package. We can, however, suggest a number of possible scenarios that could drive these results. First, a positive impact of enhanced monitoring might counteract a negative impact of enhanced job search assistance. But this requires enhanced job search assistance to have a counter-intuitive impact on job entry (for a model, see van den Berg (1994)). Second, it may be that monitoring is not tougher in practice under the new regime than under the old regime and that the zero J&B impact on the job entry hazard reflects a zero impact of enhanced job search assistance. This also seems unlikely, however, given the nature of the reforms and given that the hazards for exits to education, other benefits and other destinations are all significantly increased after the introduction of J&B. The most interesting scenario is that moving from an already tough monitoring regime to an even tougher regime might in fact lead to meaningful substitution between formal and informal search à la van den Berg and van der Klaauw (2006), or to substitution of exits to non-employment for exits to employment à la Manning (2005). Both forms of substitution are reconcilable with the zero job entry effect and positive non-employment exit effects of J&B. Again we see the need for more empirical studies — covering different levels of intensity and different directional changes in intensity — of the impact of search monitoring on job entry.
Table 5 presents estimated coefficients on the zero monitoring dummy from variations of the model in order to test the sensitivity of the results to particular modelling assumptions.

First consider clustering of errors. The Weibull models discussed above allow for individual-specific unobserved heterogeneity and associated clustering of errors. Despite area fixed effects and time quadratics, however, there may still be residual correlation within JBO areas which, if ignored, could lead to downward bias in the standard errors reported in Tables 3 and 4 (see Moulton, 1990; Bertrand et al., 2004). In contrast to many of the evaluations discussed by Bertrand et al. (2004), the clear economic significance of the estimated treatment effects here, and the amount of leeway in terms of statistical significance, e.g. with t-ratios ranging from −9 to −24 for the various estimated single risk treatment effects in Table 3, suggests any such bias is unlikely
to materially affect our conclusions. Nevertheless, sensitivity to this is examined in two ways: first by estimating the Weibull model allowing for area-level clustering (for comparison purposes the Weibull model is also estimated without area-level clustering and omitting the individual unobserved heterogeneity term)4; second by using a bootstrap technique with area-level clustering, similar to that suggested by Bertrand et al. (2004), to re-estimate the Weibull-with-unobserved-heterogeneity model.5 The standard errors for the zero monitoring coefficients are slightly smaller in the version of the model with no clustering than in the standard version. In the model allowing for area-based clustering the standard errors are up to 4.5 times larger than in the version with no clustering. In no case, however, does this make any qualitative difference to the statistical significance of the results: all estimates continue to be significant at the 99% level, and the coefficients are of course unaffected. Similarly, the bootstrapped standard errors are up to six times larger than in the standard Weibull case, with the only qualitative difference in results concerning the statistical significance of the impact of zero monitoring on exits to other benefits.

4 The Stata command for estimating MPH models does not permit clustering of errors at the area level together with individual unobserved heterogeneity (see StataCorp, 2003).
5 Because the data set is rather large, the bootstrap is limited to 200 replications in each case.
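The area-level block bootstrap referred to above resamples whole JBO areas (rather than individual spells) with replacement and re-estimates the model on each resample, so that any within-area dependence is preserved. A schematic sketch is given below, with 200 replications as in footnote 5; fit_zero_monitoring_coef is a hypothetical stand-in for whatever routine estimates the hazard model and returns the zero monitoring coefficient.

```python
import numpy as np
import pandas as pd

def block_bootstrap_se(episodes, fit_zero_monitoring_coef, n_reps=200, seed=1):
    """Cluster (block) bootstrap over areas, in the spirit of Bertrand et al. (2004)."""
    rng = np.random.default_rng(seed)
    areas = episodes["area"].unique()
    coefs = []
    for _ in range(n_reps):
        # Draw areas with replacement and keep every spell in each drawn area.
        drawn = rng.choice(areas, size=len(areas), replace=True)
        sample = pd.concat(
            [episodes[episodes["area"] == a] for a in drawn], ignore_index=True
        )
        coefs.append(fit_zero_monitoring_coef(sample))
    # Standard deviation across replications is the clustered bootstrap standard error.
    return np.std(coefs, ddof=1)
```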
Second consider the specification of the baseline hazard. Table 3 presents estimates from single risk models with various baseline specifications. Table 5 extends this to the competing risks estimates, with the key results presented for Weibull, Cox, piecewise constant and lognormal variants of the model. (Note that the lognormal version of the model — included because it does not impose monotonicity like the Weibull model — is estimated as an Accelerated Failure Time (AFT) model. The equivalent AFT estimates with Weibull baselines, which themselves correspond to the estimates for the standard MPH model presented in Tables 3 and 4, are presented for comparison purposes. Briefly, the AFT model is given by

\ln t_j = M_{jt}\delta + X_{jt}\beta + z_j

for spells j = 1,…,N, and results are presented in the form of coefficients indicating the impact of a one unit change in the covariate on the log spell duration, with negative values indicating effects that shorten durations and vice versa. For more details see van den Berg (2001) and StataCorp (2003).) The precise estimates of the zero monitoring impact do vary somewhat across the different versions of the model, i.e. there is some sensitivity in terms of magnitudes, but the Weibull, Cox, piecewise constant and lognormal specifications differ qualitatively only with respect to the statistical significance of the impact of zero monitoring on the education hazard and the other benefits hazard. There is no ambiguity in terms of signs and there is no ambiguity in terms of the negative impact of monitoring suspension on the hazards for job entry and for exits to ‘other destinations’. Again, the picture doesn't change when clustering of errors is accounted for.

Sensitivity to the assumption of the independence of the competing risks is examined by estimating a dependent competing risks Weibull model where, broadly following the approach of Eberwein et al. (1997), the unobserved heterogeneity term is assumed to be perfectly correlated, rather than uncorrelated, across the risks. In other words, α_k = d_k α, with α assumed to be distributed according to a gamma distribution as before. The only qualitative difference between the independent and dependent competing risk estimates concerns the statistical significance or otherwise of the zero monitoring impact on the hazard for exits to other benefits. We already have cause to question the significance of this particular estimate, however, and the other results stand up well.

Finally, because of statistically significant differences between covariate means for treatment and comparison areas (see Section 4) the Weibull single risk and independent competing risks models are re-estimated on a sub-sample of local areas omitting all Benefit Offices from the urban centres of Belfast and Londonderry. These city Benefit Offices tend to display the most extreme
covariate means and their exclusion removes any significant contrast in observables between the treatment and comparison areas. Again, the results are robust. Not only does this increase our confidence that we are indeed identifying the impacts of monitoring and not something unobserved and not otherwise controlled for, but it also gives little indication that suspension of monitoring might affect behaviour differently in urban and rural contexts.

To sum up, the overall picture is that suspension of monitoring has a robust negative impact on the single risk hazard rate for exits from unemployment, corresponding to an increase in average unemployment duration of between 10% and 19%. There are similarly robust impacts on the hazards for exits to employment and exits to other destinations. The impact on the hazard rate for exits to education and training is robustly non-negative but not always statistically significant, and the impact on the hazard rate for exits to other benefits is neither robustly non-positive nor always statistically significant.

7. Concluding Remarks

Search theory shows that unemployment benefits can reduce search effort on the part of (registered) unemployed workers. A common policy response to this has been to set minimum requirements in terms of expected search activity, to monitor whether these requirements are met, and to impose benefit sanctions on those not displaying appropriate search behaviour. If properly enforced, search requirements that are higher than the existing level of search effort chosen by the unemployed will, in theory, increase search effort and therefore increase the unemployment outflow rate and the job entry rate. An increase in the intensity of monitoring — stricter enforcement of existing search requirements — will intuitively have a similar effect. An additional level of subtlety is added by van den Berg and van der Klaauw (2006), who suggest stricter monitoring might increase formal search at the expense of informal search, and by Manning (2005), who suggests stricter search requirements might drive some unemployed workers off unemployment benefits not into employment but into ‘unregistered’ unemployment or inactivity.

The widespread application of search monitoring in practice is sufficient cause to require empirical estimates of its effects. Put this together with a degree of theoretical ambiguity regarding its impacts and such evidence becomes even more crucial. Empirical studies have rarely been able to identify monitoring impacts, however, because most benefit reforms package together changes to monitoring with changes to job search assistance and other measures. In contrast, this paper exploits convenient periods of monitoring suspension during Benefit Office refurbishments in part of the UK — with the regime otherwise unchanged — to identify such impacts, and in doing so adds new quasi-experimental evidence from outside the US to a small existing literature mostly based on experimental evidence from within the US and giving no clear indication thus far as to the existence or otherwise of monitoring impacts on the behaviour of the unemployed. Consistent with Klepinger et al. (2002) but in contrast to Ashenfelter et al. (2005), this paper shows that plausibly exogenous periods of suspension of job search monitoring led to significantly lower exit rates from registered unemployment and increased average claim duration. In short, the paper shows that monitoring matters.
More specifically, the suspension of monitoring led to a robust and significant reduction in job entry amongst the male unemployed. This is the first paper to show such an effect, and in this particular respect the results here differ from those of Klepinger et al. (2002). There is also suggestive evidence, although it should be put no more strongly than that, that suspension of monitoring leads to fewer exits from registered unemployment to states other than employment, broadly as Manning (2005) suggests.

The policy implications of the paper are readily apparent. Policymakers and/or benefit delivery agencies can influence the behaviour of unemployment benefit recipients solely by altering the intensity with which job search behaviour is monitored. Specifically, increasing the intensity of monitoring — at least over the range considered here — leads to increased job entry rates, although it may also increase entry to other non-employment states that are further from the labour market, not closer to it. Both types of effect reduce registered unemployment.

References

Abbring, J.H., van den Berg, G., 2003. The nonparametric identification of treatment effects in duration models. Econometrica 71 (5), 1491–1517.
Abbring, J.H., van den Berg, G., van Ours, J., 2005. The effect of Unemployment Insurance sanctions on the transition rate from unemployment to employment. Economic Journal 115 (505), 602–630.
Anderson, P., 2001. Monitoring and assisting active job search. In: OECD Proceedings: Labour Market Policies and the Public Employment Service. OECD, Paris.
Ashenfelter, O., Ashmore, D., Deschenes, O., 2005. Do Unemployment Insurance recipients actively seek work? Evidence from randomized trials in four US states. Journal of Econometrics 125, 53–75.
Bertrand, M., Duflo, E., Mullainathan, S., 2004. How much should we trust differences-in-differences estimates? Quarterly Journal of Economics 119 (1), 249–275.
Black, D.A., Smith, J.A., Berger, M.C., Noel, B.J., 2003. Is the threat of reemployment services more effective than the services themselves? Evidence from random assignment in the UI system. American Economic Review 93 (4), 1313–1327.
Blank, R.M., 2002. Evaluating welfare reform in the United States. Journal of Economic Literature 40 (4), 1105–1166.
Cipollone, P., Rosolia, A., 2006. Social interactions in high school: lessons from an earthquake. American Economic Review, forthcoming.
Dolton, P.J., O'Neill, D., 1996. Unemployment duration and the Restart effect: some experimental evidence. Economic Journal 106, 387–400.
Eberwein, C., Ham, J., Lalonde, R.J., 1997. The impact of being offered and receiving classroom training on the employment histories of disadvantaged women: evidence from experimental data. Review of Economic Studies 64, 655–682.
Fredriksson, P., Holmlund, B., 2005. Optimal unemployment insurance design: time limits, monitoring, or workfare? Institute for Labor Market Policy Evaluation Working Paper 2005:13.
Gorter, C., Kalb, G., 1996. Estimating the effect of counseling and monitoring the unemployed using a job search model. Journal of Human Resources 31 (3), 590–610.
Han, A., Hausman, J.A., 1990. Flexible parametric estimation of duration and competing risk models. Journal of Applied Econometrics 5, 1–28.
Johnson, T.R., Klepinger, D.H., 1994. Experimental evidence on Unemployment Insurance work-search policies. Journal of Human Resources 29 (3), 695–717.
Karagiannaki, E., 2006. Exploring the effects of integrated benefit systems and active labour market policies: evidence from Jobcentre Plus in the UK. CASE Working Paper 107. Centre for the Analysis of Social Exclusion, London.
Katz, L., Meyer, B., 1990. Unemployment Insurance, recall expectations, and unemployment outcomes. Quarterly Journal of Economics 105 (4), 973–1002.
Klepinger, D.H., Johnson, T.R., Joesch, J.M., 2002. Effects of Unemployment Insurance work-search requirements: the Maryland experiment. Industrial and Labor Relations Review 56 (1), 3–22.
Lalive, R., van Ours, J., Zweimuller, J., 2005. The effects of benefit sanctions on the duration of unemployment. Journal of the European Economic Association 3 (4), 1386–1417.
Manning, A., 2005. You can't always get what you want: the impact of the UK Jobseeker's Allowance. Centre for Economic Performance, LSE, London.
Martin, J.P., Grubb, D., 2001. What works for whom? A review of OECD countries' experiences with active labour market policies. OECD Working Paper 14.
McVicar, D., 2006. I'll (not) be watching you: does job search monitoring affect unemployment? Paper presented at the 5th Transatlantic IZA/SOLE Meeting of Labor Economists, Buch/Ammersee, Germany, May 18–21.
Meyer, B., 1990. Unemployment insurance and unemployment spells. Econometrica 58 (4), 757–782.
Meyer, B., 1995. Lessons from the US Unemployment Insurance experiments. Journal of Economic Literature 33 (1), 91–131.
Moulton, B., 1990. An illustration of a pitfall in estimating the effects of aggregate variables on micro units. Review of Economics and Statistics 72, 334–338.
Narendranathan, W., Stewart, M., 1993. Modelling the probability of leaving unemployment: competing risks models with flexible baseline hazards. Journal of the Royal Statistical Society Series C 41, 63–83.
StataCorp, 2003. Stata Statistical Software: Release 8.0. Stata Corporation, College Station, TX.
van den Berg, G., 1994. The effects of changes of the job offer arrival rate on the duration of unemployment. Journal of Labor Economics 12, 478–498.
van den Berg, G., 2001. Duration models: specification, identification and multiple durations. In: Heckman, J., Leamer, E. (Eds.), Handbook of Econometrics, vol. 5. Elsevier/North Holland, Amsterdam.
van den Berg, G., van der Klaauw, B., van Ours, J., 2004. Punitive sanctions and the transition rate from welfare to work. Journal of Labor Economics 22 (1), 211–241.
van den Berg, G., van der Klaauw, B., 2006. Counseling and monitoring of unemployed workers: theory and evidence from a controlled social experiment. International Economic Review 47, 895–936.