Abstracts and Reviews


IM: INSURANCE MATHEMATICS

IM01: MODELLING GENERAL 393001 (IM01) Kendall Distributions and Dependence Orderings. Fountain Robert L., Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. Most of the recently-defined notions of "positive" or "negative" dependence rely upon a variety of orderings of bivariate random vectors. These orderings are generally partial orders, and thus there are many pairs of random vectors which are not comparable. By using a weakened version of stochastic dominance and the concepts of Kendall distributions and bivariate integral transforms, an entirely new class of orderings, in which the comparability issue is resolved, is created. Each ordering in this class can be used to construct a measure of dependence. Examples will be shown using bivariate data from stock market indicators and from credit evaluations. Keywords: Dependence, Dependence measures.
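
For orientation, the Kendall distribution of a bivariate vector (X,Y) with copula C is K(t) = P(C(U,V) ≤ t), and it can be estimated from data through the probability integral transform of the empirical copula. The following sketch is only an illustration of that estimator; the function and variable names are ours and the abstract does not prescribe this implementation.

import numpy as np

def empirical_kendall_distribution(x, y, t_grid):
    # W_i = (1/n) * #{j : X_j < X_i and Y_j < Y_i} is the pseudo-observation of C(U_i, V_i)
    x, y = np.asarray(x), np.asarray(y)
    w = np.array([np.mean((x < xi) & (y < yi)) for xi, yi in zip(x, y)])
    # K_n(t) = empirical distribution function of the W_i
    return np.array([np.mean(w <= t) for t in t_grid])

# usage (hypothetical data): K = empirical_kendall_distribution(stock_index, credit_score, np.linspace(0, 1, 101))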

IM02: DESCRIPTIVE STATISTICS AND TABLES 393002 (IM02) A practical tool for the generation of weather data scenarios. Rosemberg Chloé, Vignal Bertrand, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. Many economic activities are weather-dependent. For instance, energy demand is highly temperature-dependent. For an energy company such as ours, temperature is one of the key factors of its market risk (for the volumetric risk) at a yearly horizon. To be able to assess this risk, we develop a practical approach for generating daily weather data time series and we apply it to temperature time series. The goal of the generator is to describe the range of possible temperatures in the future for a one-year window. Our approach consists first in fitting an appropriate time series model on a set of historical data. This model includes a seasonal component, a trend and an AR non-Gaussian random process. The innovation of the model deduced from the historical data, hereafter the historical innovation, can be assumed to be a white noise. The standard approach is then to use a random generator and the model to simulate temperatures. We propose to use instead the observed innovations associated with the historical data. Innovation scenarios are built by seasonal circular permutations of the observed innovation time series. These innovation scenarios are re-injected into the model to compute the simulated temperatures. To apply our approach, we use a historical data set of 121 years of daily French temperatures. Our approach allows us to go from 121 one-year historical scenarios of temperature to about 10000 one-year scenarios of temperature. The results of the simulation show first that month-by-month distributions of daily temperature are well described. Comparable results are obtained with our approach and with the standard approach. Second, the frequency and duration of cold spells in winter and heatwaves in summer are well represented, and much better with our approach than with the standard approach. This result is important since, for instance, a cold spell has a deeper impact on the results of an energy company than the same number of isolated cold days. Keywords: Weather data.
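
One plausible reading of the scenario-generation step, stated here as our own sketch rather than the authors' exact procedure, is that the historical innovation series is circularly shifted by whole years so that each innovation keeps its seasonal position, and each shifted series is re-injected into the fitted model:

import numpy as np

def seasonal_circular_scenarios(innovations, n_scenarios, rng=None):
    # innovations: array of shape (n_years, days_per_year) of fitted-model residuals
    rng = np.random.default_rng(rng)
    shifts = rng.integers(1, innovations.shape[0], size=n_scenarios)
    # each scenario is the historical innovation panel rotated by a whole number of years
    return [np.roll(innovations, k, axis=0) for k in shifts]

def reinject(innov_year, seasonal, trend, phi):
    # assumed AR(1) deviation around seasonality plus trend, with parameters from the fitted model
    dev = np.zeros_like(innov_year)
    for t in range(1, len(innov_year)):
        dev[t] = phi * dev[t - 1] + innov_year[t]
    return seasonal + trend + dev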

IM10: PROBABILITY THEORY AND MATHEMATICAL STATISTICS IN INSURANCE, GENERAL AND MISCELLANEOUS 393003 (IM10) A Flexible Approach to Multivariate Risk Modelling with a New Class of Copulas. van der Hoek John, Sherris Michael, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. We present a new class of copulas constructed using piece-wise linear distortions of some standard copulas. The method of construction of these copulas allows them to be readily calibrated by fitting to empirical multivariate risk data. We derive properties of this new class of copulas and present results from applying our distortions to a range of copulas including the Gaussian and Archimedean copulas. We consider tail dependence measures and show how distorted copulas can model various forms of tail dependence. The new form of distorted copula is convenient for numerical computation in insurance and financial risk modelling including risk measurement and management of portfolios. Gaussian copulas are often used in modelling credit risk portfolios

and for many risk modelling applications in practice. We show how our approach can be applied to Gaussian copulas and derive properties of the distorted copulas. We illustrate the results by discussing the application of the approach to multivariate risk modelling in insurance and finance and compare the approach to other methods. Keywords: Multivariate risk modelling. 393004 (IM10) A mixed family of loss distributions obtained from a gamma distribution. Calderín-Ojeda Enrique, Gómez-Déniz Emilio, Sarabia José María, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. Mixtures of distributions occur frequently in statistical theory and in actuarial science for deriving new probability density functions used to model data in actuarial settings. In this paper we propose a new family of continuous distributions obtained by mixing a gamma with a generalized inverse Gaussian distribution. It is shown that this formulation provides closed forms for both the probability density and the cumulative distribution functions. Several properties of this family of distributions and their applications are also given. Keywords: Mixtures of distributions. 393005 (IM10) Approximations of compound distributions via gamma-type operators. Sangüesa Carmen, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. In this work we deal with approximations of compound distributions, i.e., distribution functions of random sums. More specifically, we obtain a discrete compound distribution by replacing each summand in the initial random sum by a discrete random variable whose probability mass function is given in terms of the Laplace transform of the initial one and its successive derivatives. This discretization method has been studied by several authors in the unidimensional case. Our aim is to show the advantages this method has in the context of compound distributions. In particular, we give accurate error bounds for the distance between the initial random sum and its approximation under some initial conditions. We apply these results in the context of insurance risk theory by considering risk models in which the individual claim sizes are mixtures of gamma r.v.’s. Keywords: Compound distributions.
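
As a rough Monte Carlo illustration of a gamma mixture of this type (the mixing scheme and all parameter values below are our assumptions, not the authors' formulation), the gamma scale can be drawn from a generalized inverse Gaussian law:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100_000
# mixing variable: generalized inverse Gaussian in scipy's (p, b) parametrization, illustrative values
theta = stats.geninvgauss.rvs(p=-0.5, b=1.0, size=n, random_state=rng)
# claim sizes: gamma with fixed shape and GIG-distributed scale (one possible way to mix)
x = stats.gamma.rvs(a=2.0, scale=theta, size=n, random_state=rng)
print(x.mean(), np.quantile(x, 0.995))   # noticeably heavier tail than a plain gamma with the same mean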


393006 (IM10) Archimedean Copulas: Prescriptions and Proscriptions. Nelsen Roger B., Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. Archimedean copulas have proven to be remarkably useful for modeling dependence in a variety of settings. In this talk we will survey important aspects of the theory of Archimedean copulas that make them well suited for dependence modeling. We will discuss methods for constructing one and two parameter families, dependence properties (e.g., tail dependence), applications (e.g., extreme value theory, Schur-constant survival models), simulation techniques, etc. We will also discuss cautions about and limitations to the use of these copulas. We conclude with several open problems. Keywords: Dependence modelling, Copulas. 393007 (IM10) Bivariate Density Classification by the Geometry of Marginals. Kolev Nikolai, Fernandez Mariela, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. In Insurance and Finance, as well as in other sciences, one needs to model high-dimensional populations. But marginal processes are usually better known than the dependence structure. Practitioners often prefer to implement their marginal models in information systems, and the idea is to use them for the analysis of more complex multivariate models. We are interested in bivariate density classification with simple assumptions about the "geometry" of marginal behavior. We refer to the "geometry" of the marginal for the frequently observed practical cases when the marginal density is a constant, increasing or decreasing continuous function, having a single minimum or maximum on the support [a,b] ⊂ (−∞, ∞). According to Weierstrass' approximation theorem, a continuous bivariate density h(x,y) with a compact support [a,b]×[a,b] can be approximated by the exponent of a polynomial in two variables, i.e. h(x,y) ≈ exp{Σ_{i,j=0}^{n} λ_ij x^i y^j}, as n→∞. Our objective is to determine the possible values of the polynomial's coefficients λ_ij in the simplest cases, n=1 or 2, given the "geometry" of the marginals. Several typical examples (and related graphics) will be presented in order to illustrate the classification obtained. Keywords: Dependence, Bivariate density.
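
A standard member of the Archimedean class discussed above is the Clayton copula with generator φ(t) = (t^(−θ) − 1)/θ and lower tail dependence 2^(−1/θ). A small sampling sketch using the usual gamma-frailty construction (illustrative only):

import numpy as np

def sample_clayton(n, theta, rng=None):
    # Clayton copula, theta > 0: U = (1 + E/V)^(-1/theta) with V ~ Gamma(1/theta), E ~ Exp(1)
    rng = np.random.default_rng(rng)
    v = rng.gamma(shape=1.0 / theta, scale=1.0, size=n)
    e = rng.exponential(size=(n, 2))
    return (1.0 + e / v[:, None]) ** (-1.0 / theta)

u = sample_clayton(50_000, theta=2.0)
print(2 ** (-1 / 2.0))                                       # theoretical lower tail dependence, about 0.71
print(np.mean((u[:, 0] < 0.01) & (u[:, 1] < 0.01)) / 0.01)   # crude empirical counterpart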


393008 (IM10) Detecting automobile insurance fraud using a skewed link model. Bermúdez Lluís, Pérez-Sánchez J.M., Ayuso Mercedes, Gómez-Déniz Emilio, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. Standard logit models have been developed to describe the behavior of consumers when they are faced with binary choices (McFadden, 1974, 1981). In this paper, we present a binary-choice model for fraud behavior and we estimate the influence of some insured and claim characteristics on the probability of committing fraud. In this case, we show that the overall fit can be improved by the use of an asymmetric link. An asymmetric link model may be more appropriate when the number of 1’s differs greatly from the number of 0’s, as is often observed in fraud data. Our work, based on that developed by Chen et al. (1999), includes the symmetric and asymmetric link models from a Bayesian point of view. A database from the Spanish insurance market is used to compare the results obtained by an asymmetric link with those obtained by the standard logit model. Keywords: Logit models, Bayesian statistics, Fraud. 393009 (IM10) Extreme Behavior of Multivariate Phase-type Distributions. Asimit Alexandru V., Jones Bruce, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. We investigate the limiting distributions of the componentwise maxima and minima of suitably normalized iid multivariate phase-type random vectors. In the case of maxima, a large parametric class of multivariate extreme value (MEV) distributions is obtained. The flexibility of this new class is exemplified in the bivariate setup. For minima, it is shown that the dependence structure of the Marshall-Olkin class arises in the limit. Keywords: Multivariate extreme value. 393010 (IM10) Joint modelling of the total amount and the number of claims by conditionals. Sarabia José María, Guillén Montserrat, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. In the risk theory context, let us consider the classical collective model. The aim of this paper is to obtain a flexible bivariate joint distribution for modelling the couple (X,N), where N is a count variable and X = X1+...+XN is the total claim amount. Our proposal is a generalization of a classical hierarchical model, where now we assume that the conditional distributions of X|N and N|X belong to some prescribed parametric families. A basic theorem of existence and a general result for exponential families are given. We describe in detail the extension of two classical collective models, which we now call the Gamma-Poisson and the Binomial-Poisson conditionals models. Other conditional models are proposed: the Gamma conditionals and the Inverse Gaussian conditionals. An estimation method is proposed and some numerical illustrations are given. Keywords: Collective risk model, Parametric families. 393011 (IM10) Large Deviations and Ruin Probabilities for Solutions to Stochastic Recurrence Equations with Heavy-Tailed Innovations. Konstantinides Dimitrios G., Mikosch Thomas, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. In this presentation we consider the stochastic recurrence equation Yt = At Yt−1 + Bt for an iid sequence of pairs (At,Bt) of non-negative random variables, where we assume that Bt is regularly varying with index κ>0 and E[At^κ] < 1. We show that the stationary solution (Yt) to this equation has regularly varying finite-dimensional distributions with index κ. This implies that the partial sums Sn = Y1+...+Yn of this process are regularly varying. In particular, the relation P(Sn>x) ~ c1 n P(Y1>x) as x → ∞ holds for some constant c1>0. For κ > 1, we also study the large deviation properties P(Sn−ESn>x), x ≥ xn, for some sequence xn→∞ whose growth depends on the heaviness of the tail of the distribution of Y1. We see that the relation P(Sn−ESn>x) ~ c2 n P(Y1>x) holds uniformly for x≥xn and some constant c2>0. Then we apply the large deviation result to derive bounds for the ruin probability ψ(u) = P[sup_{n≥1}((Sn−ESn)−µn) > u] for any µ>0. We see that ψ(u) ~ c3 u P(Y1>u) / (µ(κ−1)) for some constant c3>0. In contrast to the case of iid regularly varying Yt’s, where the above results hold with c1 = c2 = c3 = 1, here the constants c1, c2 and c3 are different from 1. Keywords: Large deviations, Ruin probabilities, Stochastic recurrence equations, Heavy-tailed innovations.
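
The heavy-tail mechanism in the last abstract can be illustrated numerically; the following minimal simulation of Yt = At Yt−1 + Bt uses Pareto innovations with index κ and At uniform on (0,1), so that E[At^κ] < 1 (all choices are our own illustration, not the authors' values):

import numpy as np

rng = np.random.default_rng(1)
kappa, n = 1.5, 200_000
a = rng.uniform(0.0, 1.0, size=n)                    # E[A^kappa] = 1/(kappa+1) < 1
b = (1.0 - rng.uniform(size=n)) ** (-1.0 / kappa)    # Pareto(kappa): regularly varying innovations
y = np.empty(n)
y[0] = b[0]
for t in range(1, n):
    y[t] = a[t] * y[t - 1] + b[t]
# for a power tail of index kappa, doubling the threshold divides the exceedance frequency by about 2**kappa
x0 = np.quantile(y, 0.99)
print(np.mean(y > 2 * x0) / np.mean(y > x0), 2.0 ** (-kappa))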


393012 (IM10) Modeling Fuzzy Random Variables. Shapiro Arnold F., Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. Fuzzy random variables (FRVs) are random variables whose values are fuzzy numbers. They can be applied in actuarial modeling to accommodate intrinsic features of uncertainty like imprecision and vagueness. This article explores these FRVs. Topics addressed include: differentiating between randomness and fuzziness, intuitive descriptions and formal definitions of FRVs, conceptualizing FRVs and computational methods for implementing them, descriptive statistics for FRVs, and actuarial applications of FRVs. The goal is to model FRVs and, in doing so, to show how naturally compatible and complementary randomness and fuzziness are, and to illustrate the power of combining the two. Keywords: Fuzzy random variables. 393013 (IM10) On a discrete-time risk model with delayed claims and a constant dividend barrier. Wu Xueyuan, Li Shuanming, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. In this paper a compound binomial risk model with a constant dividend barrier is considered. Two types of individual claims, the main claims and by-claims, are defined, where by-claims are produced by the main claims and may be delayed for one time period with a certain probability. Some prior work on these time-correlated claims has been done by Yuen & Guo (2001), among others. Formulae for the expected present value of dividend payments up to the time of ruin are obtained for discrete-type individual claims, together with some other results of interest. Explicit expressions for the corresponding results are derived in a special case, for which a comparison is also made to the original discrete model of De Finetti (1957). Keywords: Delayed claims, Constant dividend barrier. 393014 (IM10) Seeking a Compound Geometric Structure of the Perturbed Sparre Andersen Risk Model. Ren Jiandong, Stanford David, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. The compound geometric structure of the perturbed classical risk model has been commented on and exploited by several authors. Recently, some authors have made interesting discoveries for subsets of the perturbed Sparre Andersen model, suggesting that some sort of compound geometric structure may be present there. By pursuing a probabilistic argument, we explore the structure of the surplus process to seek what sort of compound geometric results can be established. Keywords: Perturbed Sparre Andersen risk model. 393015 (IM10) Some diagonal properties of the empirical copula and the construction of families of absolutely continuous copulas with given restrictions. Erdely Arturo, Gonzalez-Barrios Jose M., Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. The problem of determining whether two random variables are independent or not has been widely studied in the statistical literature, and in many cases independence tests have been proposed. The development of the theory of copulas has had great impact in the study of dependence, especially in the case of continuous random variables. In the present work we study the problem of independence of two continuous random variables using the fact that there exists a unique copula that characterizes independence, and that this copula is of the Archimedean type. Sungur and Yang (1996) have already shown that in the case of Archimedean copulas the diagonal section contains all the information about the copula. In the case of two random variables, this reduces the dimension of the estimation from two to one. We analyze some properties of the empirical diagonal and its use in building nonparametric independence tests under the assumption that the underlying copula is of the Archimedean type. We also deal with the question of the existence of non-Archimedean copulas that have the same diagonal section as the independence copula. Fredricks and Nelsen (1997, 1997a, 2002) show how to build copulas with a given diagonal section, and in all cases the resulting copulas are singular and symmetric. We provide a family of absolutely continuous copulas with a fixed diagonal, which can differ from another absolutely continuous copula almost everywhere with respect to Lebesgue measure. It is important to mention that the asymmetry in the proposed methodology is not an issue. Keywords: Copulas, Archimedean copula.
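
A minimal sketch of the empirical diagonal used in such independence tests, assuming a sample of n continuous observations (all names below are illustrative): the diagonal of the empirical copula is compared with t^2, the diagonal section of the independence copula.

import numpy as np
from scipy.stats import rankdata

def empirical_diagonal(x, y):
    # delta_n(i/n) = C_n(i/n, i/n), computed from the ranks of the two samples
    n = len(x)
    rx, ry = rankdata(x), rankdata(y)
    t = np.arange(1, n + 1) / n
    delta = np.array([np.mean((rx <= k) & (ry <= k)) for k in range(1, n + 1)])
    return t, delta

t, delta = empirical_diagonal(np.random.randn(500), np.random.randn(500))
print(np.max(np.abs(delta - t ** 2)))   # small under independence; a test statistic can be built on this distance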


393016 (IM10) The Expected Discounted Penalty Functions for a Markov-modulated Risk Model. Lu Yi, Li Shuanming, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. We consider a Markov-modulated risk model in which the claim inter-arrivals, amounts and premiums are influenced by an external Markovian environment process. A system of integro-differential equations for the expected discounted penalty (Gerber-Shiu) functions, given the initial environment state, is established. In the two-state model, by using the Laplace transform approach, explicit formulas for the expected discounted penalty function at ruin are derived when the initial reserve is zero or when both claim amount distributions are from the rational family. As an illustration, the explicit results for the joint probability density of the surplus before ruin and the deficit at ruin, given the initial state, when both claim size distributions are exponential, are obtained. The diffusion approximation is also discussed. Keywords: Expected discounted penalty function, Markov-modulated risk model, Laplace transform. 393017 (IM10) Using the NIG distribution for modelling actuarial data. Gómez-Déniz Emilio, Pérez Jose, Vázquez-Polo F.J., Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. The normal-inverse Gaussian distribution (NIG, henceforth) introduced by Barndorff-Nielsen (1997) has recently been widely used in a variety of disciplines such as physics, finance and actuarial science, among others. This distribution is a case of the generalized hyperbolic distribution derived by mixing a normal distribution with a generalized inverse Gaussian distribution. The NIG distribution depends on four parameters and has support on the whole real line. Although the probability density function has a complicated expression, the moment generating function has, nevertheless, a simple form, which makes it suitable for estimating the parameters of the distribution. In this paper a new discrete probability function is obtained by mixing a Poisson distribution with the NIG distribution through an appropriate re-parametrization of the parameter of the Poisson distribution. The model obtained is applied to fitting automobile insurance claim data and comparative analyses with alternative models are carried out. Keywords: Mixture, Poisson distribution, Normal distribution, Inverse Gaussian distribution.

IM11: STOCHASTIC MODELS FOR CLAIMS FREQUENCY, CLAIM SIZE AND AGGREGATE CLAIMS 393018 (IM11) A recursion principle for dependent bivariate compound variables of phase-type. Eisele Karl-Theodor, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. We investigate dependent bivariate discrete random variables T = (T1,T2) of phase-type and their compound counterparts S = (S1, S2), where S1 = X1 + ... + X_{T1} and S2 = Y1 + ... + Y_{T2}, and where (Xi) and (Yi) are two independent families of iid random variables. For the joint probability P(S1=j, S2=k) we give a Panjer-like recursion principle. In particular, the discrete case where the Xi and Yi take values in the natural numbers allows for an explicit algorithm to calculate these probabilities. The result is based on the rationality of the common generating function. Keywords: Recursion formula.
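
For context, the classical univariate Panjer recursion, of which the abstract announces a bivariate phase-type analogue, computes a compound Poisson probability mass function recursively; a minimal sketch of that univariate case (our own illustration):

import numpy as np

def compound_poisson_pmf(lam, f, smax):
    # g[s] = P(S = s) for S = X_1 + ... + X_N with N ~ Poisson(lam) and X_i iid with pmf f on {0, 1, 2, ...}
    g = np.zeros(smax + 1)
    g[0] = np.exp(-lam * (1.0 - f[0]))
    for s in range(1, smax + 1):
        j = np.arange(1, min(s, len(f) - 1) + 1)
        g[s] = (lam / s) * np.sum(j * f[j] * g[s - j])
    return g

g = compound_poisson_pmf(lam=2.0, f=np.array([0.0, 0.5, 0.3, 0.2]), smax=40)
print(g.sum())   # close to 1 once smax is large enough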

393019 (IM11) A two-level dividend strategy in a ruin process with Markovian arrivals. Badescu Andrei, Drekic Steve, Landriault David, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. In my talk, I will propose a generalization of Ahn et al. (2006) in the class of risk models with a Markovian arrival process for the claim arrival and phase-type distributed claim sizes. In Ahn et al. (2006), a constant dividend strategy is implemented in this class of risk models where the entire premium rate is paid as a dividend rate as soon as the surplus level reaches a constant dividend threshold. In this paper, a more general dividend strategy is considered, the one for which only a proportion of the total premium rate is paid as a dividend, allowing the surplus level to exceed the constant dividend threshold. Such a dividend strategy has been first considered by Lin and Sendova (2006) in the framework of the classical compound Poisson risk model. For this dividend-modified surplus process, explicit expressions of the Laplace transform of the time to ruin as well as the discounted joint density of the time

to ruin, the surplus prior to ruin and the deficit at ruin are derived. The analysis is performed via the connection between fluid flow processes and surplus processes. To conclude, numerical examples are carried out to illustrate the impact of the dividend strategy on various ruin-related quantities. Keywords: Dividend strategy, Compound Poisson. 393020 (IM11) On the Application of Fractional Brownian Motion in Insurance as a Modelling Tool for Long Range Dependence. Frangos N., Vrontos Spyridon, Yannacopoulos A., Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. Insurance claims often present long range dependence. This phenomenon is likely to have an impact on various issues related to a large class of insurance problems such as solvency, pricing and optimal retention. A commonly used model for long range dependence in insurance is the fractional Brownian motion (FBM). This model presents great interest both from the point of view of theory and of applications. Using the recently developed stochastic calculus for FBM, we present analytic results on the ruin probability of an insurance firm facing claims of FBM type, on the pricing of insurance and reinsurance policies for such claims, and on optimal insurance and reinsurance for firms facing claims of FBM type. Keywords: Fractional Brownian Motion, Long range dependence. 393021 (IM11) On the Expected Discounted Penalty Function for a Perturbed Risk Process Driven by a Subordinator. Morales Manuel, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. The Expected Discounted Penalty Function (EDPF) was introduced in a series of now classical papers [Gerber and Shiu (1997), (1998a), (1998b)]. Later, Gerber and Landry (1998) extended the concept to the perturbed case. Recent papers have extended these results in more general settings [for instance Tsai and Willmot (2002), Li and Lu (2005) and Li and Garrido (2005)]. In this note we present yet another generalization that has not been considered before in the literature. We present a perturbed risk process with a subordinator as the model for the aggregate claims. We generalize existing results [Tsai and Willmot (2002)] on the EDPF for the subordinator case. Keywords: Expected discounted penalty function, Perturbed risk process. 393022 (IM11) Premium updating in two integer autoregressive processes. Franco Maria Aparecida de Paiva, Fernandes Guilherme Barreto, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. In this work, INAR(1) (e.g., Al-Osh and Alzaid, 1987) and CINAR(1) (e.g., Paiva, 2003) models are considered for a discrete-parameter process (Y(t)), t=1,2,..., where Y(t) is the number of events of some kind, such as claims in a policy, occurring in the time interval (t−1,t]. Both processes define Y(t) as the sum of an innovation plus the result of a Steutel and van Harn operator applied to Y(t−1). Gourieroux and Jasiak (2004) considered INAR(1) processes with innovations distributed as Poisson, with parameter given by cm, where m is a gamma distributed variable. Similarly, we make the same hypothesis on the parameter of the innovations for the CINAR(1) process. We prove that the conditional expectation of m given the history of past values of Y in CINAR(1) is still a weighted mean of expectations of some gamma distributions. A general expression is obtained for the premium to be paid after a given history of past claims for both models, and some comparisons are shown. Keywords: Premium rating, Autoregressive processes. 393023 (IM11) Risk models with two-step premium rate. Brito Margarida, Leite Jorge Daniel, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. In the classical risk model it is assumed that interclaim times are independent of claim amounts and that the premium rate is constant. We propose an extension of the classical model to a dependence setting. Here, we are concerned with ruin problems. We derive the ruin probability in terms of its Laplace transform and give numerical illustrations for particular cases. A simulation study is also presented. Keywords: Classical risk model, Dependence. 393024 (IM11) Ruin probabilities and aggregate claims distributions for shot noise Cox processes. Albrecher Hansjoerg, Asmussen S., Proceedings of the


10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. We consider a risk process where the claim arrival process is a superposition of a homogeneous Poisson process and a Cox process with a Poisson shot noise intensity process, capturing the effect of sudden increases of the claim intensity due to external events such as catastrophes. The distribution of the aggregate claim size is investigated under these assumptions. For both light-tailed and heavy-tailed claim size distributions, asymptotic estimates for infinite-time and finite-time ruin probabilities are derived. Moreover, we discuss an extension of the model to an adaptive premium rule that is dynamically adjusted according to past claims experience. Keywords: Cox Process, Asymptotic estimates.
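
A discrete-grid sketch of the claim-arrival intensity in such a model, with a baseline Poisson component plus a shot-noise component in which each catastrophe adds an exponentially decaying jump (grid approximation and all parameter values are our own illustration):

import numpy as np

rng = np.random.default_rng(2)
T, dt = 10.0, 0.01
grid = np.arange(0.0, T, dt)
lam0, rho, delta, jump_mean = 5.0, 0.5, 2.0, 20.0    # baseline rate, catastrophe rate, decay, mean jump size

n_cat = rng.poisson(rho * T)                          # catastrophe epochs and jump sizes
tau = np.sort(rng.uniform(0.0, T, size=n_cat))
jumps = rng.exponential(jump_mean, size=n_cat)

# lambda(t) = lam0 + sum_k jumps_k * exp(-delta * (t - tau_k)) for t >= tau_k
decay = np.exp(-delta * (grid[:, None] - tau[None, :])) * (grid[:, None] >= tau[None, :])
lam = lam0 + decay @ jumps

claims = rng.poisson(lam * dt)                        # claim counts per grid cell; severities can be attached on top
print(claims.sum(), lam.mean())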

393025 (IM11) The Gerber-Shiu expected discounted penalty function for the classical risk model with interest and a constant dividend barrier. Yuen Kam Chuen, Wang Guojing, Li Wai Keung, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. In this paper, we consider the classical surplus process with interest and a constant dividend barrier. Under constant interest, we derive an integro-differential equation for the Gerber-Shiu expected discounted penalty function, and obtain the solution to the equation in the form of an infinite series. In some special cases with exponential claims, we are able to find closed-form expressions for the Gerber-Shiu expected discounted penalty function. Finally, we extend the integro-differential equation to the case where the surplus is invested in an investment portfolio with stochastic return on investments. Keywords: Expected discounted penalty function, Classical risk model, Dividend barrier.

393026 (IM11) Time-varying effects when analysing customer lifetime duration in non-life insurance. Guillén Montserrat, Nielsen Jens Perch, Scheike Thomas, Perez-Marin Ana Maria, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. The Cox model is widely applied in a great number of fields including biostatistics, actuarial science and economics. One of the model assumptions is that the regression coefficients are time invariant. Nevertheless, when these parameters vary over time, the temporal covariate effects on the failure time are of great interest. In this paper we illustrate how the Cox model can be used to manage one of the most important problems recently affecting the insurance sector: customer loyalty. Additionally, in our empirical application to real non-life insurance data, evidence of time-varying effects is found. The application of the extended version of the Cox model, where the effects of some covariates are allowed to vary over time, provides the insurance manager with a deeper understanding of the factors influencing customer lifecycle dynamics. Keywords: Cox model.

IM12: MODELLING OF PORTFOLIOS AND COLLECTIVES

393027 (IM12) Dividend maximization under consideration of the time value of ruin. Albrecher Hansjoerg, Thonhauser Stefan, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. In the Cramer-Lundberg model and its diffusion approximation, it is a classical problem to find the optimal dividend payment strategy that maximizes the expected value of the discounted dividend payments. One often raised disadvantage of this approach is the fact that such a strategy does not take the lifetime of the controlled process into account. In this paper we introduce a value function which considers both expected dividends and the time value of ruin. In the case of the diffusion model, a barrier strategy again turns out to be optimal. Extensions of this optimal control problem to the Cramer-Lundberg model with and without perturbation are discussed. Keywords: Dividend maximization, Time value of ruin.

393028 (IM12) Pricing risks when standard deviation principle is applied for the portfolio. Otto Wojciech, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. When the risk loading for the whole portfolio is set proportionally to the standard deviation, the problem of coherent pricing of individual risks arises. A similar problem was considered by Borch (1962), who proposed to make use of the Shapley solution for the n-person game. However, the explicit solution is suited only for small n, rather reflecting the game played by a few companies negotiating the merger of their portfolios. When we wish to use the same approach for individual risk pricing, the explicit solution is no longer adequate, for two reasons: (i) it is no longer feasible, because the computational time increases with n!, and (ii) the lack of an analytical formula keeps the properties of the resulting premiums hidden. The paper presents arguments that justify allocating the whole portfolio loading proportionally to the variance of the individual risk (to the covariance in a more general setting). This is of course nothing new. However, the essence of the paper is to show that this is quite a good approximation to the Shapley value. More precisely, rigorous considerations show what the necessary preconditions for using this approximation are, and when the approximation could depart substantially from the exact solution. Keywords: Risk pricing. 393029 (IM12) Semiparametric Regression Models for Claims Reserving and Credibility: the Mixed Model Approach. Antonio Katrien, Beirlant Jan, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. Verrall (1996) and England & Verrall (2001) considered the use of smoothing methods in the context of claims reserving, by applying two smoothing procedures in a likelihood-based way, namely the locally weighted regression smoother (‘loess’) and the cubic smoothing spline smoother. Using the statistical methodology of semiparametric regression and its connection with mixed models (see e.g. Ruppert et al., 2003), this paper revisits smoothing models for loss reserving and considers their use in an example from credibility. Besides the flexibility of a semiparametric regression model, the advantages of the presented approach are threefold. Firstly, because the constructed semiparametric models have an interpretation as (generalized) linear mixed models ((G)LMMs), standard statistical theory and software for (G)LMMs can be used. Secondly, a Bayesian implementation of these smoothing models is relatively straightforward and allows simulation from the full predictive distribution of quantities of interest. Since actuaries are interested in predictions, this is a major advantage. Thirdly, more complicated statistical models, dealing for example with semicontinuous data or

extensive longitudinal data, can be handled within the same framework. Throughout this work, data examples illustrate these different aspects. Evidently, the methodology is not restricted to the problems discussed in this paper, but is relevant for other kinds of actuarial regression problems. Keywords: Semiparametric regression, Linear mixed models. 393030 (IM12) Some New Characterizations and Results on Quasi-copulas. Quesada-Molina Jose Juan, Rodriguez-Lallena Jose Antonio, Ubeda-Flores Manuel, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. In 1959, Sklar introduced the concept of a copula, as a function that joins a multivariate distribution function to its one-dimensional margins. Since then, the theory of copulas has been developed and widely used in statistics and probability theory. In particular, copulas have been applied to the study of dependence and measures of association, and to the construction of families of multivariate distributions. In 1993, C. Alsina, R.B. Nelsen and B. Schweizer introduced the notion of a quasi-copula — a concept more general than that of a copula — in order to show that a certain class of operations on univariate distribution functions is not derivable from corresponding operations on random variables defined on the same probability space. Later, some characterizations and properties of quasi-copulas have been obtained. In this work, we present some new characterizations and results on multivariate quasi-copulas, and provide several examples to illustrate them. Keywords: Quasi-copulas. 393031 (IM12) The compound Poisson risk model with multiple thresholds. Lin X. Sheldon, Sendova Kristina P., Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. We consider a multi-threshold compound Poisson risk model. A piecewise integro-differential equation is derived for the Gerber-Shiu discounted penalty function. We then provide a recursive approach to obtain general solutions to the integro-differential equation and its generalizations. Finally, we use the probability of ruin to illustrate the applicability of the approach. Keywords: Compound Poisson.


IM13: RUIN AND OTHER STABILITY CRITERIA 393032 (IM13) Collapse at Interest. Aspandiiarov S., Belitsky Vladimir, Pechersky E., Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. We use the large deviation technique developed by Dobrushin and Pechersky (see "Large deviations for tandem queuing systems", J. Appl. Math. Stoch. Analysis, 7 (1994) and "Large deviations for random processes with independent increments on infinite intervals", Problemy peredachi informacii 34 (1998)) to estimate the ruin probability of an insurance company that earns interest on its available capital. To state it exactly, the only difference between the model we study and the classical Cramer model is the company's additional income, whose amount during an infinitesimal time interval ∆t is rC(t)∆t, where r is the bank interest rate and C(t) is the company's capital available at time t. Keywords: Large deviation techniques, Ruin probability. 393033 (IM13) Numerical evaluation of continuous time ruin probabilities for a risk process with credibility-based premiums. Afonso Lourdes B., Egídio dos Reis Alfredo D., Waters Howard, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. We present a method for the numerical evaluation of ruin probabilities in continuous and finite time for a classical risk process where the premium can change from year to year. Our method is based on the simulation of the annual aggregate claims and then on the calculation of the ruin probability for a given surplus at the start and at the end of each year. We calculate the within-year ruin probability assuming first a Brownian motion approximation and, secondly, a translated gamma distribution approximation for the aggregate claim amount. We consider this approach in the case where the annual premium is updated according to one of the standard credibility models, such as the Bühlmann-Straub model. We also explore the case where the premium at the start of each year is a function of the surplus level at that time. Keywords: Ruin probability, Brownian motion, Credibility.
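
Under the Brownian motion approximation mentioned above, one standard ingredient (stated here as general background, not as the authors' exact formula) is the bridge crossing probability: if the surplus is u0 > 0 at the start of a year and u1 > 0 at its end, and the aggregate claims have variance sigma^2 over the year, the within-year ruin probability is approximately exp(−2·u0·u1/sigma^2); averaging this factor over simulated year-end surpluses gives a finite-time ruin probability.

import numpy as np

def within_year_ruin(u0, u1, sigma):
    # probability that a Brownian bridge from u0 to u1 over one year drops below zero (both endpoints positive)
    return np.exp(-2.0 * u0 * u1 / sigma ** 2)

print(within_year_ruin(u0=50.0, u1=60.0, sigma=40.0))   # illustrative numbers only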

393034 (IM13) On the infinite-time ruin and the distribution of the time to ruin. Kaishev Vladimir, Dimitrova Dimitrina, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. We consider a compound Poisson collective risk model in which individual claim amounts W1, W2, ... have any (discrete) joint distribution. The premium income to the insurance company up to time t is modelled by a nondecreasing real-valued function h(t). An expression for the infinite-time ruin probability under this model has been recently obtained by Ignatov and Kaishev (2006), assuming certain conditions on the function h(t) and the joint distribution of the claim amounts. In the special case of the classical ruin probability model, i.e., when W1, W2, ... are assumed i.i.d. r.v.’s and h(t) = u+ct, this formula has been shown to coincide with the infinite horizon non-ruin probability formulae of Gerber (1988a,b) and Shiu (1987, 1989) and also with the formula of Picard and Lefèvre (2001). Here, we give a further refinement of this new ruin probability formula and apply it in order to obtain the distribution of the time to ruin. Numerical illustrations are also presented both for dependent and independent claim amounts. Keywords: Finite ruin probability. 393035 (IM13) On the Time Value of Absolute Ruin with Debit Interest. Cai Jun, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. Assume that the surplus of an insurer follows a compound Poisson surplus process. When the surplus is below zero, we assume that the insurer will borrow money at a rate of debit interest. In such a model, absolute ruin is said to occur at a certain time when the negative surplus cannot return to a positive level after that time; this time is called the absolute ruin time. We define the Gerber-Shiu function at absolute ruin and derive a system of integro-differential equations satisfied by the Gerber-Shiu function. Furthermore, we consider explicit solutions for the time value of absolute ruin when the initial surplus is zero or claim sizes are exponentially distributed. Keywords: Gerber-Shiu function, Time value of ruin. 393036 (IM13) Ruin probabilities for a risk model with two lines of business. Guo Junyi, Lv Tongling, Meng Qingbin, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. We consider risk models with two lines of dependent business. For the two-dimensional risk process, a diffusion approximation is obtained and the PDE satisfied by the survival probability of the limiting process is analysed and solved. For the risk process which is the sum of the two lines of business, a Markovianization technique is used to obtain an exponential martingale and an upper bound for the probability of ruin. Keywords: Survival probability, Lines of business. 393037 (IM13) The joint density of the surplus before and after ruin in the Sparre Andersen model. Pitts Susan M., Politis Konstadinos, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. Gerber and Shiu (1997) have studied the joint density of the time of ruin, the surplus immediately before ruin, and the deficit at ruin in the classical model of collective risk theory. More recently, their results have been generalised for risk models where the interarrival density for claims is non-exponential, but belongs to the Erlang family. Here we obtain generalisations of the Gerber-Shiu (1997) results that are valid in a general Sparre Andersen model, i.e. for any interclaim density. In particular, we obtain a generalisation of the key formula in that paper. Our results are made more concrete for the case where the density between claim arrivals is phase-type or the integrated tail distribution associated with the claim size distribution belongs to the class of subexponential distributions. Further, we obtain conditions for finiteness of the joint moments of the surplus before ruin and the deficit at ruin in the Sparre Andersen model. Keywords: Sparre-Andersen model, Surplus prior to ruin, Deficit at ruin, Ladder height, Wiener-Hopf factors, Phase-type distribution, Subexponential distribution.

IM30: PREMIUM, PREMIUM PRINCIPLES, ORDERING OF RISKS

393038 (IM30) Optimal management of an insurer’s exposure under premium control. Emms Paul, Proceedings of the 10th Congress on


Insurance Mathematics and Economics, Leuven, July 2006. The qualitative behaviour of the optimal premium strategy is determined for an insurer in a finite and an infinite market where the volume of sales is deterministic. The optimisation problem leads to a system of forward-backward differential equations obtained from Pontryagin’s Maximum Principle. Phase diagrams are used to characterise the optimal control as a function of the model parameters. Two types of strategy are identified: loss-leading and market withdrawal. There are analytical optimal premium strategies for particular demand functions when the backward equations uncouple from the forward equations. For the fully coupled problem the optimal strategy depends on the current state of the insurer as well as the model parameters. In this case the equilibrium point of the state and adjoint equations gives a criterion for whether the insurer should actively sell insurance or leave the market. Keywords: Optimal control, Premiums. 393039 (IM30) Optimal pricing of a heterogeneous portfolio for a given risk level. Zaks Yaniv, Frostig Esther, Levikson Benny, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. Consider a portfolio containing heterogeneous risks, where the policyholders’ premiums to the insurance company might not cover the claim payments. This risk has to be taken into consideration in the premium pricing. On the other hand, the premium that the insureds pay has to be fair. This fairness is measured by the distance between the risk and the premium paid. We apply a non-linear programming formulation to find the optimal premium for each class so that the risk is below a given level and the weighted distance between the risk and the premium is minimized. We consider also the dual problem: minimizing the risk level for a given weighted distance between risks and premium. Keywords: Optimal pricing, Heterogeneous portfolio. 393040 (IM30) Pricing general insurance in a reactive and competitive market. Emms Paul, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. A simple parameterisation is introduced which represents


the insurance market’s response to an insurer adopting a strategy determined via optimal control theory. Claims are modelled using a lognormally distributed mean claim size rate and the market average premium is determined via the expected value principle. If the insurer maximises its expected net wealth then the resulting Bellman equation has a free boundary in state space which determines when it is optimal to stop selling insurance. Three finite difference schemes are used to verify the existence of a solution to the Bellman equations when there is market reaction. All the schemes use a front-fixing transformation. If the market reacts it is found that the optimal strategy is altered so that premiums are raised if the strategy is of loss-leading type and lowered if it is optimal for the insurer to set a relatively high premium and sell little insurance. Keywords: Optimal control theory, Bellman equations. 393041 (IM30) Risk aggregation and subjective tail dependence. Tsanakas Andreas, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. Motivated by the process of modelling dependencies in the presence of uncertainty, a generalisation of Pearson’s correlation coefficient is introduced that explicitly reflects the economic consequences of pooling risks. The economic framework is provided by a class of convex risk measures (Foellmer and Schied, 2002; Tsanakas and Desli, 2003). These risk measures, similarly to the additive risk measures of the actuarial literature (Gerber, 1974), utilise an exponential utility function to produce a penalty for the aggregation of dependent risks, which is absent in coherent risk measures (Artzner et al, 1999). An example is presented, with risks that derive their dependence structure from asymmetric normal mixtures (Tsanakas and Smith, 2005). Keywords: Tail dependence, Risk aggregation.
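
For background, exponential-utility risk measures of the kind referred to above are of entropic type, rho_gamma(X) = (1/gamma)·log E[exp(gamma·X)]; the difference rho_gamma(X+Y) − rho_gamma(X) − rho_gamma(Y) vanishes for independent risks and is positive for suitably positively dependent risks, which is the aggregation penalty mentioned in the abstract. A small numerical sketch with bivariate normal risks (illustrative parameters only):

import numpy as np

def entropic_rho(sample, gamma):
    # empirical entropic risk measure (1/gamma) * log E[exp(gamma * X)]
    return np.log(np.mean(np.exp(gamma * sample))) / gamma

rng = np.random.default_rng(3)
gamma, corr = 0.5, 0.8
x, y = rng.multivariate_normal([0.0, 0.0], [[1.0, corr], [corr, 1.0]], size=200_000).T
penalty = entropic_rho(x + y, gamma) - entropic_rho(x, gamma) - entropic_rho(y, gamma)
print(penalty)   # for standard normal margins this is about gamma * Cov(X, Y) = 0.4 here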

IM31: EXPERIENCE RATING, CREDIBILITY THEORY, BONUS-MALUS SYSTEMS 393042 (IM31) Credibility Theory for Generalized Linear Models. Garrido Jose, Zhou Jun, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. Generalized linear models (GLM) are gaining popularity as a statistical analysis method for insurance data. For segmented portfolios, as in car insurance, the question of credibility arises naturally: how many observations are needed in a risk class before the GLM estimator can be considered? This paper considers the extension of the Hachemeister regression credibility model and results to GLMs. Keywords: Generalized linear models, Credibility, Hachemeister model. 393043 (IM31) On the use of the weighted balanced loss function in credibility theory. Gómez-Déniz Emilio, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. The weighted balanced loss function (WBLF) is a generalized loss function which includes as a particular case the weighted quadratic loss function. Credibility premiums can be obtained by using weighted quadratic loss functions in actuarial science. In this paper, the WBLF is used, first, to obtain generalized credibility premiums that contain as particular cases other credibility premiums in the literature and, secondly, to obtain credibility formulas under the distribution-free approach. The premiums obtained under this new approach are richer than the previous credibility premiums in the literature and they satisfy desirable properties which a premium principle should satisfy. Keywords: Credibility, Net, Esscher, Distribution free approach. 393044 (IM31) The expected premium in the non-homogeneous motor insurance portfolio. Kryszen Barbara, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. The paper focuses on the expected premium for a bonus-malus system in heterogeneous portfolios. The heterogeneous portfolio is considered as an aggregation of a priori sub-portfolios. It is assumed that the number of claims for an individual insured has a Poisson distribution and that the differentiation of risks in the portfolio and sub-portfolios is significant. In relation to these assumptions, the family of mixed Poisson distributions is used for frequency modeling. The research examines the relationship between the chosen model and the premium characteristics. To analyze the expected premium features, non-realistic systems were eliminated. The bonus-malus system classification presented in the paper allows us to distinguish those present on the competitive market, later called "fair". The paper proves that in the fair systems the expected premium for the individual driver is a non-increasing function of the claim frequency. The basic condition for a correct a posteriori risk assessment is a non-positive change of the expected premium level while the claim frequency is increasing. The aim of the research is to describe the conditions determining the above theorem and to present the formal proof. The fair systems are commonly used in practice. Moreover, the actuarial literature assumes that the Poisson distribution describes the number of claims for the individual insured. Based on that, the proven theorem relates to a wide category of systems used in the market. A fundamental problem of frequency modeling in the heterogeneous portfolio is allowing for the a priori classification and the resulting differences in risk characteristics between sub-portfolios. The article introduces different approaches to frequency modeling and draws conclusions about premiums in the non-homogeneous portfolio. The result of the analysis is a classification of approaches based on the level of the portfolio's expected premium. In practice these results can be applied to assess the potential risk connected with each approach. This is particularly important for insurance companies because it allows the bonus-malus system construction to be adjusted to the portfolio characteristics as well as future cash flows to be projected. Keywords: Bonus-malus systems.
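
As standard background for this kind of a posteriori assessment (a classical result, not the paper's own derivation): if the individual claim frequency λ has a Gamma(α, β) prior and yearly claim counts are Poisson(λ), then after observing k claims in t years the posterior expected frequency is (α + k)/(β + t), the negative binomial credibility result on which many bonus-malus premium scales are built.

def posterior_expected_frequency(alpha, beta, k, t):
    # posterior mean of a Gamma(alpha, beta) frequency (rate parametrization) after k Poisson claims in t years
    return (alpha + k) / (beta + t)

print(posterior_expected_frequency(1.5, 10.0, k=2, t=3))   # a claim-free driver (k=0) would instead get 1.5/13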

IM42: LOSS RESERVES (INCL. I.B.N.R.) 393045 (IM42) A probabilistic IBNR estimator using policy data. Kubrusly Jessica, Lopes Hélio, Veiga Álvaro, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. The main objective of this work is to present a new probabilistic IBNR estimator that considers not only the reported claims but also the insurance policy data, which include the insured value, the exposure period and the number of policies. The hypotheses of the model are verified by statistical tools. To estimate the parameters and the probabilities we use some statistical and simulation techniques, such as the EM algorithm and Monte Carlo methods. We compare the results with some other methods and we also perform back tests to analyse its performance. Keywords: IBNR, Simulation. 393046 (IM42) Predictive Distributions for Reserves which Separate True IBNR and IBNER Claims. Verrall Richard, Liu Huijuan, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. This paper considers the model suggested by Schnieper (1991), which separates the true IBNR claims from the IBNER. Stochastic models are defined, using both recursive and non-recursive procedures, within the framework of the models described in England and Verrall (2002). Expressions for the prediction errors of the reserves are derived analytically. A bootstrapping procedure is also described which allows the prediction errors to be estimated straightforwardly. The full predictive distribution of reserves is also estimated using the bootstrapping method. Some extensions to the original Schnieper model are also discussed, together with other possible applications of this type of model. Keywords: Bootstrapping, Bornhuetter-Ferguson, Chain-ladder, Claims reserving, Predictive distribution. 393047 (IM42) Which chain-ladder method reserves the best? Brys Guy, Van Wouwe Martine, Verdonck Tim, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. In recent decades, a wide variety of stochastic methods for estimating outstanding claim reserves have been developed. The uncertainty of such point estimates may be calculated analytically for some models, but often this is cumbersome or impossible to do. Here, we follow a bootstrap approach to estimate the standard error of the claim reserves. We apply the bootstrap to some well-known stochastic methods, namely the distribution-free model of Mack and the log-normal model. Besides this, we incorporate these methods in a multivariate context to retrieve an aggregate reserve range for several lines of business. Moreover, we investigate the influence of outliers and of correlation occurring, respectively, in and between the triangles. We then present a method to compare the different reserving techniques. To this end, we strip off the diagonal from the regular triangle. In this way all information about the most recent year is lost, but this information will be used to discuss the quality of the prediction. As financial analysts do not always adopt the same attitude

towards risk, our comparison covers different value-at-risk percentiles. As such, the answer to the question of which chain-ladder method reserves the best is left to the reader. Keywords: Chain-ladder, Bootstrap.
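
A compact sketch of the deterministic chain-ladder projection on which such bootstrap comparisons are built (triangle layout and names are illustrative; a bootstrap would resample residuals and rerun this projection many times):

import numpy as np

def chain_ladder_reserve(tri):
    # tri: cumulative run-off triangle as an (n, n) array with np.nan below the latest diagonal
    n = tri.shape[0]
    full = tri.copy()
    for j in range(n - 1):
        known = ~np.isnan(tri[:, j + 1])
        f = np.nansum(tri[known, j + 1]) / np.nansum(tri[known, j])   # development factor for column j -> j+1
        missing = np.isnan(full[:, j + 1])
        full[missing, j + 1] = full[missing, j] * f
    latest = np.array([row[~np.isnan(row)][-1] for row in tri])       # latest observed cumulative claims per origin year
    return float(np.sum(full[:, -1] - latest))

tri = np.array([[100., 150., 160.],
                [110., 170., np.nan],
                [120., np.nan, np.nan]])
print(chain_ladder_reserve(tri))   # an ODP bootstrap would resample Pearson residuals of the incremental claims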

IM43: FLUCTUATION RESERVES, SOLVENCY MARGINS 393048 (IM43) An investigation of the Value-at-Risk with incomplete information about the underlying distribution. De Schepper Ann, Heijnen Bart, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. One of the key problems in financial and actuarial research, particularly in the field of risk management, is the choice of models so as to avoid systematic biases in the measurement of risk. An alternative could consist of working with incomplete information, by fixing only a number of parameters instead of a complete distribution. Such an approach does not lead to unique numbers, but the results will be bounds and/or approximations for the measure at hand. In the present paper, we want to contribute to this research field by showing how to derive upper and lower bounds for the Value-at-Risk. We consider the case where the information about the underlying distribution is restricted to the knowledge of successive moments, if desired in combination with the mode. The results are achieved by means of a transformation of similar bounds for tail probabilities. Keywords: Value-at-risk, Incomplete information. 393049 (IM43) On the risk measures and ordering for the surplus process. Tsai Cary Chi-Liang, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. In this paper we study some risk measures and orderings of the random variables for the maximal aggregate loss. We then propose theorems for ordering ruin probabilities and some interesting quantities based on these risk measures for two surplus processes associated with different claim size random variables of equal mean. We also find that an exponentially distributed random variable is smaller, in the sense of the first stop-loss order, than a mixture of n exponentials with the same mean. Finally, a numerical example is given to illustrate these theorems. Keywords: Risk measures, Surplus process. 393050 (IM43) Optimal Premium Control (OPC) and Dynamic Continuous Solvency Interaction (DCSI) within a group of insurance companies. Zimbidis Alexandros A., Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. In this paper, we consider a system of insurance companies and investigate the basic problem of optimal premium determination along with the design of the mode for the solvency interaction amongst them. The claims for each insurance company are assumed to be driven by a Brownian motion and consequently, the total input of the system is described by a multi-dimensional Brownian motion. The optimal solution is obtained under a quadratic objective functional and via the application of the Hamilton-Jacobi-Bellman equation. Further analysis is also directed towards the exploration of the stability and other qualitative properties of the system. Keywords: Optimal premium control, Solvency.
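
An example of the kind of distribution-free bound obtainable from two moments alone (a classical Cantelli-type inequality, given here as background rather than as the paper's sharper results): for any risk X with mean mu and standard deviation sigma, the Value-at-Risk at level p satisfies VaR_p(X) ≤ mu + sigma·sqrt(p/(1−p)); adding higher moments or the mode, as in the abstract, tightens such bounds.

import numpy as np

mu, sigma, p = 0.0, 1.0, 0.995
print(mu + sigma * np.sqrt(p / (1 - p)))   # about 14.1, versus 2.58 for a normal: two-moment bounds are conservative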

IM51: RISK SHARING ARRANGEMENTS 393051 (IM51) Comonotonic approximations for VAR(p) process. Ahcan Ales, Masten Igor, Perman Mihael, Polanec Saso, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. In this paper the existing methodology of conditioning Taylor approximation is used to solve a general model from the area of finance. More specifically, we search for the optimal multi-period investment strategy when the log-returns of assets follow a vector autoregressive VAR(p) model, with the optimization criterion set to VaR (Value-at-Risk). We show by means of a numerical illustration that the solutions of the approximate procedure closely match the results of Monte Carlo simulation. Keywords: Portfolio allocation, VaR, Analytical approximation, Comonotonic approximations.
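Abstract 393051 benchmarks its comonotonic approximations against Monte Carlo simulation. The sketch below illustrates only that Monte Carlo benchmark (not the comonotonic approximation itself), for a VAR(1) log-return model; the coefficients, horizon, portfolio weights and confidence level are invented assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative VAR(1) model for the yearly log-returns of two assets:
# r_t = c + A r_{t-1} + e_t,  e_t ~ N(0, Sigma).
c = np.array([0.04, 0.02])
A = np.array([[0.10, 0.05],
              [0.02, 0.15]])
Sigma = np.array([[0.0400, 0.0060],
                  [0.0060, 0.0100]])

weights = np.array([0.6, 0.4])   # fixed portfolio weights, rebalanced yearly
horizon = 5                      # investment horizon in years
n_sims = 100_000
level = 0.95                     # VaR confidence level

wealth = np.ones(n_sims)
r_prev = np.tile(c, (n_sims, 1))                 # initialise the lag at the intercept (arbitrary but harmless)
for _ in range(horizon):
    shocks = rng.multivariate_normal(np.zeros(2), Sigma, size=n_sims)
    r = c + r_prev @ A.T + shocks                # VAR(1) recursion for the log-returns
    wealth *= weights @ np.exp(r).T              # portfolio gross return for the year
    r_prev = r

loss = 1.0 - wealth                              # loss relative to initial wealth
var_estimate = np.quantile(loss, level)
print(f"{level:.0%} VaR of the terminal loss: {var_estimate:.3f}")
```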

IE: INSURANCE ECONOMICS

IE10: INSURANCE RELATED MATHEMATICAL ECONOMICS, GENERAL AND MISCELLANEOUS 393052 (IE10) Optimal Dividends in the Dual Model. Avanzi Benjamin, Gerber Hans U., Shiu Elias S.W., Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. In a company which specializes in inventions and discoveries, the surplus process is modeled as U(t) = u − ct + S(t), that is, the jumps are upwards. The optimal dividend problem is analyzed in this model. In particular, specific results are obtained for exponential and mixed exponential jump amount distributions. Keywords: Optimal dividend problem.
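For the dual model of abstract 393052, a standard textbook way of writing the optimal dividend objective (generic notation, not necessarily that of the paper) is

\[
V(u) \;=\; \sup_{D}\; E\!\left[\int_{0}^{\tau} e^{-\delta t}\, dD_t \;\middle|\; U(0)=u\right],
\qquad
\tau \;=\; \inf\{t \ge 0 : U(t) - D(t) < 0\},
\]

where U(t) = u − ct + S(t) is the surplus with upward jumps, D an admissible cumulative dividend process, δ > 0 a discount rate, and τ the ruin time of the dividend-adjusted surplus.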

IE11: EQUILIBRIUM THEORY 393053 (IE11) Liquidity Preferences as Rational Behaviour under Uncertainty. Mierzejewski Fernando, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. According to Keynes’s liquidity preference proposition (1936), the demand for cash is negatively affected by the level of interest rates. A functional expression is provided by Tobin (1958), by assuming that risk-averse decision makers demand equity for precautionary purposes — in order to prevent insolvency. An extension of the model is presented in this paper, which allows risks to be characterised by any distribution function — not only those belonging to the Gaussian class — and which is applicable to markets where investors confront liquidity constraints and hold different expectations about risks. Under these conditions, a level of surplus exists that maximises the value of the portfolio, and in such a way an optimal exchange of uncertainty and sure return is determined. Liquidity preference is therefore impelled by rational behaviour — even risk-neutral and risk-loving agents demand liquidity — while the demand for cash is simultaneously determined by precautionary and speculative purposes. Keywords: Liquidity preferences, Uncertainty.

IE12: UTILITY THEORY 393054 (IE12) Optimal expected exponential utility of dividend payments in a Brownian risk model. Grandits Peter, Hubalek Friedrich, Schachermayer Walter, Zigo Mislav, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. We consider the following optimization problem for an insurance company: $\max_{C} E\!\left[U\!\left(\int e^{-\beta t}\, dC_t\right)\right]$. Here $U(x) = (1 - e^{-\gamma x})/\gamma$ denotes an exponential utility function with risk aversion parameter $\gamma$, $C$ denotes the accumulated dividend process, and $\beta$ a discounting factor. We show that — under the assumption that a certain representation of the barrier function as an infinite series converges — the optimal strategy is a barrier strategy. The barrier function is time-dependent. We also discuss a different approach to the problem, where we derive an integral equation for the barrier function. Keywords: Expected exponential utility, Dividend payments, Brownian risk model.

IE13: PORTFOLIO THEORY 393055 (IE13) A Note on the Dividends-Penalty Identity and the Optimal Dividend Barrier. Gerber Hans U., Lin X. Sheldon, Yang Hailiang, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. For a general class of risk models, the dividends-penalty identity is derived by probabilistic reasoning. This identity is the key for understanding and determining the optimal dividend barrier, which maximizes the difference between the expected present value of all dividends until ruin and the expected discounted value of a penalty at ruin (which is typically a function of the deficit at ruin). As an illustration, the optimal barrier is calculated in two classical models, for different penalty functions and a variety of parameter values. Keywords: Dividends-penalty identity, Optimal dividend barrier. 393056 (IE13) Optimal portfolio strategy under regime switching model. Yang Hailiang, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July

2006. In this talk, I will first give an overview of the optimal investment and consumption problem and then present some recent results. We consider the optimal investment and consumption problem of a risk-averse investor in a discrete-time model. Assume that the return of a risky asset depends on the economic environments and that the economic environments are ranked and described using a Markov chain with an absorbing state which represents the bankruptcy state. We formulate the investor's decision as an optimal stochastic control problem. We obtain analytical expressions for the optimal investment and consumption strategies. In addition, we investigate the impact of credit risk on the optimal strategy. We employ some tools from stochastic orders to obtain the properties of the optimal strategy. Keywords: Optimal portfolio strategy, Regime switching model. 393057 (IE13) Safety-first Dynamic Asset and Liability Management. Chiu Mei Choi, Li Duan, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. In portfolio theory, safety-first investors minimize the chance of disaster with a target expected return. Roy (1952, Econometrica) suggests that it is useful to minimize the upper bound of the disaster probability derived from the Chebyshev inequality. When uncontrollable liabilities are taken into account, however, investors should set criteria based on the surplus instead of the return. It is therefore more realistic to minimize the upper bound of the disaster probability, which measures the likelihood of the final surplus falling below a disaster level, subject to an expected final surplus. This latter problem is referred to as the safety-first asset-liability (AL) management problem. We solve this problem in both continuous-time and multi-period settings, connect it to the mean-variance AL problem of Chiu and Li (2006, Insurance: Mathematics and Economics), and give geometric interpretations using the mean-variance AL efficient frontier. Furthermore, the optimal trading rule and optimal initial AL allocation are obtained. We also discuss the optimal strategy of safety-first greedy and non-greedy investors. Keywords: ALM. 393058 (IE13) Tails of Multivariate Archimedean Copulas.

Charpentier Arthur, Segers Johan, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. The tail behavior of multivariate Archimedean copulas is derived in all corners of the unit hypercube. Special attention is devoted to the limiting distribution of a random vector with an Archimedean copula given that some of its components tend to the upper or lower boundaries of their respective supports. This question is relevant, for instance, when studying the joint distribution of the returns on the prices of all the stocks in a portfolio conditionally on some of these returns being extremely large, in the positive or negative sense. Keywords: Copulas, Portfolio. 393059 (IE13) Valuation of participating contracts and risk capital assessment: the importance of market modelling. Ballotta Laura, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. The aim of this paper is to provide an assessment of alternative frameworks for the fair valuation of participating contracts with minimum guarantee, in terms of impact on the market consistent price of the contracts and the options embedded therein, and on the capital requirements for the insurer. In particular, we model the dynamics of the log-returns of the reference fund using the so-called Merton process (Merton, 1976), which is given by the sum of an arithmetic Brownian motion and a compound Poisson process, and the Variance Gamma (VG) process introduced by Madan and Seneta (1990), and further refined by Madan and Milne (1991) and Madan et al. (1998). Although with the Merton process closed analytical formulae can be obtained for certain smoothing mechanisms (see Ballotta (2005) for example), the same does not apply when the VG process is adopted. Hence, we consider suitable simulation procedures based on stratified Monte Carlo/Quasi Monte Carlo with bridges, as proposed by Ribeiro and Webber (2004) and Avramidis and L’Ecuyer (2006), and adapt them to the specifics of the chosen participating policy, and to the calculation not only of the contract fair value, but also of some relevant risk measures, such as VaR and TVaR. Keywords: Fair valuation, Monte Carlo, Simulation, Embedded options.
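A small numerical companion to abstract 393058 (a generic illustration only; the Clayton family, its parameter and the frailty sampler below are assumptions, not the authors' analysis): an Archimedean copula of Clayton type can be sampled through its Gamma frailty representation, and its empirical lower-tail behaviour compared with the known tail-dependence limit $2^{-1/\theta}$.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_clayton(theta, n, d=2):
    """Sample n points from a d-dimensional Clayton copula (theta > 0)
    via the Marshall-Olkin frailty construction."""
    v = rng.gamma(shape=1.0 / theta, scale=1.0, size=(n, 1))   # Gamma frailty
    e = rng.exponential(size=(n, d))                           # independent unit exponentials
    return (1.0 + e / v) ** (-1.0 / theta)

theta = 2.0
u = sample_clayton(theta, n=1_000_000)

# Empirical lower-tail dependence P(U2 <= q | U1 <= q) for small q,
# compared with the theoretical limit 2 ** (-1 / theta).
for q in (0.05, 0.01, 0.001):
    both = np.mean((u[:, 0] <= q) & (u[:, 1] <= q))
    print(f"q = {q:6.3f}: empirical {both / q:.3f}   limit {2 ** (-1 / theta):.3f}")
```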

IE43: RISK MANAGEMENT 393060 (IE43) Aggregate Loss Distributions and Risk Measures for non-life insurance company. Cerchiara Rocco Roberto, Esposito Giorgia, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. The development of Aggregate Loss Models for non-life insurance companies is one of the key issues in classical actuarial mathematics and, more recently, in the Solvency II project. In this paper we are interested in the calculation of the Aggregate Loss moments by considering the "1991 SOA Group of Medical Insurance Large Claims" Database. These data, provided by the Society of Actuaries (SOA), are also analysed by taking into account the previous study by Cebrian et al. [2003]. The goal of this work is to use different distributions, such as the composite Lognormal-Pareto distribution (as shown in Cooray and Ananda [2005]), the Lognormal distribution, or mixture models based on the Empirical Distribution and Extreme Value Theory (see Embrechts et al. [1997] and Gigante et al. [2002]). Different choices of distribution can in fact give different outcomes, especially in the right tail. Finally, by taking into account some aspects of recent Solvency II assessment models, as shown in the IAA Insurer Solvency Assessment Working Party Report [2004] and the Swiss Solvency Test developed by FOPI [2004], we analyze the effects of different distributions on some risk measures such as VaR and TailVaR. Keywords: Extreme value theory, Solvency II. 393061 (IE43) Bounds for Quantile-Based Measures of Dependent Risks' Functions. Goncalves Marcelo, Kolev Nikolai, Fabris Antonio Elias, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. This paper introduces two techniques for computing bounds of several quantile-based measures to counteract the main disadvantages of using VaR as a tool for risk analysis: its non-convex, non-subadditive and optimistic nature. These techniques use stochastic ordering and bounds for the cumulative distribution of functions of dependent risks, offering better control over the real portfolio's risk and making its evaluation more accurate. The results are obtained under the hypothesis of given marginals and unknown joint distributions.

Due to the variety of economic scenarios it is impossible to find a unique risk measure which can be applied effectively in every situation. One of the most important measures is the Tail-Value-at-Risk — TVaR. Some nice properties of TVaR include sub-additivity, monotonicity, homogeneity and translation invariance, and it is a solution of an optimization problem. TVaR and VaR are special cases of a wider class of risk measures Dg(X), defined as an integral of a distortion function g applied to the risk X's survival function. In practice it is difficult to know the risks' joint distribution. On the other hand, information about the marginal distribution of each risk is not uncommon. Usually, one is interested in obtaining bounds for a given function ψ of the risks. We use the marginal knowledge to compute bounds for the risk measure of a class of functions ψ, giving examples applied to an insurance context by using stochastic ordering in the bidimensional and multidimensional cases. A guideline is suggested for choosing between the two approaches in order to compute the bounds of the risk measures. Keywords: Distortion, VaR.
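The class of distortion risk measures referred to in abstract 393061 can be made explicit with the standard textbook definition (stated here only to fix notation, for a non-negative risk $X$ and a non-decreasing distortion function $g$ with $g(0)=0$ and $g(1)=1$):

\[
D_{g}(X) \;=\; \int_{0}^{\infty} g\bigl(\bar F_{X}(x)\bigr)\,dx ,
\qquad \bar F_{X}(x) = \Pr(X > x),
\]

where $g(u) = \mathbf{1}\{u > 1-p\}$ recovers $\operatorname{VaR}_{p}$ and $g(u) = \min\{u/(1-p),\,1\}$ recovers $\operatorname{TVaR}_{p}$.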

IE46: SUPERVISION 393062 (IE46) Evaluation of Insurance Products with Guarantee in Incomplete Markets. Consiglio Andrea, De Giovanni Domenico, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. Life insurance products are usually equipped with minimum guarantee and bonus provision options. The pricing of such claims is of vital importance for the insurance industry. Risk management, strategic asset allocation, and product design depend on the correct evaluation of the options written. Regulators are also interested in such issues, since they have to be aware of the possible scenarios that the overall industry will face. Pricing techniques based on the Black & Scholes paradigm are often used; however, the hypotheses underlying this model are rarely met. To overcome the limitations of Black & Scholes, we develop a stochastic programming model to determine the fair price of the minimum guarantee and bonus provision options. We show that such a model covers the most

relevant sources of incompleteness accounted for in the financial and insurance literature. We provide extensive empirical analysis to highlight the effect of incompleteness on the fair value of the option, and show how the whole framework can be used as a valuable normative tool for insurance companies and regulators. Keywords: Black and Scholes.

393063 (IE46) The Use of Mixed Models for Supervision of Solvency and Risk-Based Capital. Pitselis Georgios, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. The main purpose of this article is to provide a statistical methodology for insurance supervision and to investigate insolvency based on some financial characteristics of insurance companies. More specifically, methods of statistical mixed models are used to predict the amount of capital needed to absorb the underwriting risks of insurance companies. Some suggestions are provided for obtaining a formula for Risk-Based Capital. An example with data from the Greek insurance industry is also provided. Keywords: Mixed models, Supervision, Solvency, Risk-Based Capital.

IE50: FINANCE, GENERAL AND MISCELLANEOUS 393064 (IE50) A new technique of portfolio insurance. Duarte Elisabete F.M., da Fonseca José A.S., Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. Portfolio insurance is a management strategy that aims to guarantee to an investor a minimum value, the floor, in bear markets, or the value of a portfolio fully invested in the risky asset, in bull markets. To achieve this goal, the portfolio is composed of stocks and the riskless asset. The composition is reviewed periodically until maturity, according to the rules defined by the different techniques of portfolio insurance. Taking advantage of the link between CPPI and OBPI, this paper develops a new technique called Combined Portfolio Insurance (CPI). This technique has shown better performance than the original ones when applied to empirical data. The new technique uses the path dependency usually observed in empirical applications of portfolio insurance to assure results that are better adapted to the evolution of market quotations. In order to show the potential of the CPI strategy, an empirical application is made to the PSI-20 Index (the Portuguese stock market index) and to the Eurostoxx 50 index. Keywords: Portfolio insurance.

393065 (IE50) Assessing the Market Value of Safety Loadings. Bernard Carole, Le Courtois Olivier, Quittard-Pinon François, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. This article aims at linking conceptually the default puts of (risk-neutral) option finance to the safety loadings of (historical) actuarial theory that typically serve to reduce bankruptcy risk. We illustrate this study by detailing the contractual provisions underlying typical participating contracts (described and priced by Jorgensen [2002] or Bernard, Le Courtois and Quittard-Pinon [2005], to quote only a few). Our analysis aims at extending and applying the ideas proposed by Buhlmann [2004], and follows the famous and fundamental works of Merton [1974], Black and Cox [1976], Longstaff and Schwartz [1995]... Beyond the chosen examples, the ultimate goal of this work is to clarify and detail some links and similarities between contingent-claims-based pricing and standard actuarial safety loadings. Keywords: Default puts, Actuarial safety loadings.

393066 (IE50) Bounds for Asian basket options. Deelstra Griselda, Diallo Ibrahima, Vanmaele Michèle, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. In this paper we propose some pricing methods for European-style discrete arithmetic Asian basket options in a Black & Scholes framework. An Asian basket option is an option whose payoff depends on the average value of the prices of a portfolio (or basket) of assets (stocks) at different dates. In particular, an arithmetic Asian basket call option with exercise date T, m averaging dates with weights $b_j$ and exercise price K on a basket of n risky assets $S_l$ with weights $a_l$ generates at time T the payoff
$$\Bigl(\sum_{j=0}^{m-1} b_j \sum_{l=1}^{n} a_l\, S_l(T-j) - K\Bigr)_{+}.$$
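A brute-force Monte Carlo estimate of the discounted risk-neutral expectation of this payoff, the kind of benchmark the bounds of abstract 393066 are measured against, can be sketched as follows; all parameter values are invented for illustration, and the scheme makes no attempt at the bounding or variance-reduction techniques of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two assets, three yearly averaging dates T-2, T-1, T (illustrative figures only).
S0 = np.array([100.0, 95.0])
sigma = np.array([0.25, 0.20])
corr = np.array([[1.0, 0.5],
                 [0.5, 1.0]])
r, T, K = 0.03, 3.0, 100.0
a = np.array([0.5, 0.5])            # asset weights a_l
b = np.array([1/3, 1/3, 1/3])       # date weights b_j (b_j multiplies S_l(T-j))
dates = np.array([1.0, 2.0, 3.0])   # observation times T-2, T-1, T, chronological order
b_chrono = b[::-1]                  # weights re-indexed to chronological order

cov = np.outer(sigma, sigma) * corr
L = np.linalg.cholesky(cov)
dt = np.diff(np.concatenate(([0.0], dates)))
n_sims = 100_000

payoffs = np.empty(n_sims)
for k in range(n_sims):
    logS = np.log(S0).copy()
    avg = 0.0
    for step, h in enumerate(dt):
        z = L @ rng.standard_normal(len(S0))
        logS += (r - 0.5 * sigma**2) * h + np.sqrt(h) * z   # correlated risk-neutral GBM step
        avg += b_chrono[step] * (a @ np.exp(logS))          # weighted basket value at this date
    payoffs[k] = max(avg - K, 0.0)

price = np.exp(-r * T) * payoffs.mean()
stderr = np.exp(-r * T) * payoffs.std(ddof=1) / np.sqrt(n_sims)
print(f"crude MC price: {price:.3f} +/- {1.96 * stderr:.3f}")
```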

Determining the price of the basket option is not a trivial task, because we do not have an explicit analytical expression for the distribution of the weighted sum of the assets. By assuming that the assets follow correlated geometric Brownian motion processes, one can use Monte Carlo simulation techniques to obtain a numerical estimate of the price. In the literature, other techniques are suggested whose main goal is to approximate the real distribution of the payoff by one which is easier to treat mathematically (see e.g. Beisser (2001) and references therein). In this paper, we generalize methods used for evaluating basket and Asian options to the Asian basket case. Firstly, we derive upper bounds by applying techniques such as approximating the arithmetic sum and conditioning on some random variable, as in Thompson (1999) and Lord (2005). Using the ideas of Lord (2005) we include one more parameter in the method of Thompson, and an optimization over this parameter leads to very good results. We also compare this method numerically in the basket option case. Secondly, we generalize the approach for deriving upper and lower bounds for stop-loss premiums of sums of dependent random variables, as in Kaas et al. (2000) or Dhaene et al. (2002). As in Deelstra et al. (2004) and Vanmaele et al. (2006), we also adapt the error bounds proposed by Rogers and Shi (1995) and Nielsen and Sandmann (2003) to the case of Asian basket options. Finally, we propose a lower bound by using an optimization procedure. Several numerical examples are included. In general, the generalized lower bounds lead to excellent results. The methods of Thompson and Lord lead to very sharp upper bounds. In the in-the-money case the generalized Rogers and Shi method performs very well. The "Partial Exact Comonotonic Upper Bound" leads to very good results only in the far-out-of-the-money case. Keywords: Asian basket options, Comonotone bounds. 393067 (IE50) Copula theory applied for fixing the premium of CAT-Bonds. Pérez Fructuoso María José, López Victoria Rivas, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. The main goal of this paper is to include copula theory in the process of fixing the premium of CAT-Bond products. The steps followed in this work are: the selection of a marginal distribution function in terms of the AIC criterion; the selection of the copula distribution

function in terms of the maximum likelihood; and, finally, the proposal of an analytic expression for the premium in which the previous functions appear explicitly. Copula theory helps to treat the dependence between risks. It also allows non-linear correlation factors to be included in the risks analysed. Besides, copula theory makes it possible to consider the relationship between the marginal distributions and the multivariate copula function, as Sklar's theorem indicates. Consequently, these advantages are incorporated in the process of fixing the premium of a CAT-Bond in this work. Keywords: Copula theory, CAT-Bonds. 393068 (IE50) Dynamic convex bounds, with applications to pricing in incomplete markets. Courtois Cindy, Denuit Michel, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. Extremal distributions have been extensively used in the actuarial literature in order to derive bounds on functionals of the underlying risks, such as stop-loss premiums or ruin probabilities, for instance. In actuarial science as well as in finance, the pricing of most products is based on some prespecified underlying stochastic process. For example, useful stochastic processes could count the number or the severity of claims in a given portfolio, or could simply be a financial stock price process. In this talk, we extend the static notion of stochastic extrema to a dynamic setting (inter alia, convex bounds on multiplicative processes are derived) and we obtain bounds on prices for quantities of interest in actuarial science and in finance. For instance, we use this new pricing method to bound option prices within the context of incomplete markets. We see that, despite their relative simplicity, the extremal processes produce reasonably accurate bounds on option prices in the classical trinomial model for incomplete markets. Keywords: Extremal distributions, Incomplete markets, Convex bounds. 393069 (IE50) Multivariate Elliptical Processes with Application in Finance. Bingham N.H., Kiesel Rüdiger, Schmidt Rafael, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. A family of multidimensional stochastic processes — so-

called elliptical processes — is considered, whose (contemporaneous) margins are based on elliptically contoured distributions. Elliptically contoured distributions are characterized by a location vector, a dispersion matrix, and a one-dimensional random variable which we call the risk-driver. In order to increase the distributional flexibility of elliptical distributions, we consider grouped elliptical distributions by assigning different risk-drivers to the lower-dimensional marginal distributions. The contemporaneous dependence structure of elliptical processes will be studied by means of copulas. In this general model we discuss various estimators for the dispersion matrix and propose a procedure for retrieving the risk-drivers. The advantages of the model are its easy calibration and simulation of random numbers. Applications in financial risk management are given, where we examine the above risk-drivers. Keywords: Elliptical processes. 393070 (IE50) On Redington's Theory of Immunization. Shiu Elias S.W., Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. About half a century ago, the British actuary F.M. Redington published the paper "Review of the Principles of Life-Office Valuations," which is probably the most cited actuarial paper in the finance literature. In this paper, Redington suggested the principle that there should be equal and parallel treatment in the valuation of assets and liabilities. His theory of immunization for insulating a portfolio against interest rate fluctuations is a consequence of this principle. This talk will discuss the historical context and current applications of Redington's theory, and present the problem of interest rate risk management in terms of stochastic inequalities. Keywords: Redington immunization. 393071 (IE50) On Transition Densities for General Diffusion Processes. Goovaerts Marc J., Laeven Roger J.A., Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. This paper presents accurate and easily computable approximations to the transition density function for general diffusion processes. Maximum likelihood estimation is investigated. Applications to the valuation of derivative securities are briefly discussed.

Keywords: Diffusion process, Derivative securities. 393072 (IE50) Pricing Participating Products Under a Generalized Jump-Diffusion with a Markov-switching Compensator. Siu Tak Kuen, Lau John W., Yang Hailiang, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. We consider the fair valuation problem of participating life assurance policies with bonus distributions and rate guarantees when the dynamics of the market value of the reference asset are driven by a generalized jump-diffusion model with a Markov-switching compensator. In particular, we suppose that the jump component is specified by a completely random measure process with the compensator switching over time according to the states of an economy described by a continuous-time hidden Markov chain model. This can model the impact of the economic states on the jumps of the price dynamics of the reference asset. Since insurance products are relatively long-dated compared with financial products and the states of the economy can change substantially over a long period of time, it is of practical importance to incorporate the effect of changes in the economic states in the valuation of insurance products. A completely random measure process is defined by an infinite mixture of Poisson random measures and encompasses a general jump-type process, namely the generalized gamma process, which includes the weighted gamma process and the inverse Gaussian process as particular cases. We consider the use of the product Esscher transform to determine an equivalent martingale measure for fair valuation in the incomplete market setting. We conduct a simulation experiment and investigate the consequences, for the fair valuation of participating policies, of various specifications of the jump component by the completely random measure process. Keywords: Pricing, Generalized jump-diffusion, Markov-switching compensator, Esscher transform. 393073 (IE50) Risk management of a bond portfolio using options. Annaert Jan, Heyman Dries, Deelstra Griselda, Vanmaele Michèle, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. In this paper, we elaborate a formula for determining the optimal strike price for a bond put option, used to hedge

a position in a bond. This strike price is optimal in the sense that it minimizes, for a given budget, either Value-at-Risk or Tail Value-at-Risk. Formulas are derived for both zero-coupon and coupon bonds, which can also be understood as a portfolio of bonds. These formulas are valid for any short rate model that implies an affine term structure model and, in particular, a lognormal distribution of future zero-coupon bond prices. As an application, we focus on the Hull-White one-factor model, which is calibrated to a set of cap prices. We illustrate our procedure by hedging a Belgian government bond, and take into account the possibility of divergence between theoretical option prices and real option prices. This paper can be seen as an extension of the work of Ahn et al. (1999), who consider the same problem for an investment in a share. Keywords: Bond put option, Hull-White. 393074 (IE50) Risk minimizing strategies for life insurance contracts with surrender option. Barbarin Jérôme, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. We study hedging strategies for insurance contracts with a surrender option. In contrast to the usual description of the surrender time as an optimal stopping time (see Bacinello, Grosen and Jorgensen...), we assume the surrender time is not a stopping time with respect to the filtration generated by the financial prices. This assumption leads to an incompleteness of the insurance market which does not appear in the usual surrender models. It then becomes relevant to study the hedging strategy an insurer should follow to minimize its exposure to this surrender risk. In this paper, we choose to study risk-minimizing hedging strategies, which have already been applied with success in insurance by Moller. We extend his results in two ways. Firstly, the random times of payment (here the surrender times) depend on the financial market and are accordingly not independent of each other. Secondly, the financial market is not the Black and Scholes model and is not necessarily complete. We only assume that the prices of the tradable financial assets follow continuous semimartingales. In this framework, we derive the form of the risk-minimizing strategies and generalize Moller's conclusion by showing that we can still combine risk-minimizing strategies and diversification to reduce the relative risk of a portfolio, up to a limit related to the degree of incompleteness of the financial market.

Keywords: Hedging strategy, Black and Scholes. 393075 (IE50) Risk-Neutral and Actual Default Probabilities with an Endogenous Bankruptcy Jump-Diffusion Model. Le Courtois Olivier, Quittard-Pinon François, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. This paper focuses on historical and risk-neutral default probabilities in a structural model in which the firm's asset dynamics are modeled by a double exponential jump-diffusion process. Relying on the Leland [1994a, 1994b] or Leland and Toft [1996] endogenous structural approaches, as formalized by Hilberink and Rogers [2002], this article gives a coherent construction of historical default probabilities. The risk-neutral world in which the firm's assets, modeled by a geometric Kou process, evolve is constructed based on the Esscher measure, yielding useful and new analytical relations between historical and risk-neutral probabilities. We carry out a complete numerical analysis of the predictions of our framework, and compare these predictions with actual data. In particular, this new framework displays enhanced predictive power with respect to current Gaussian endogenous structural models. Keywords: Default probabilities, Esscher measure. 393076 (IE50) Static Super-Replicating Strategies for Exotic Options. Chen Xinliang, Deelstra Griselda, Dhaene Jan, Vanmaele Michèle, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. In this paper, we investigate super-replicating strategies for European-type call options written on a weighted sum of asset prices. This class of exotic options includes Asian options and basket options among others. We assume that there exists a market where the plain vanilla options on the different assets are traded, and hence their prices can be observed in the market. In contrast, the option on the weighted sum is not traded in the market, so its market price cannot be observed. We show how to construct a portfolio consisting of the plain vanilla options on the different assets whose time-zero value is an upper bound for the price of the exotic option. Moreover, this upper bound is model-free in the sense that it is expressed in terms of the observed option prices on the assets. We further show that this portfolio is an optimal super-replicating strategy that involves

only investments in the assets and traded calls on the assets. This work is a generalization of the work of Laurence and Wang (2004) and of Hobson et al. (2005), who considered this problem for the particular case of a basket option and who used a Lagrange optimization. The proofs in this paper are based on the theory of stochastic orders and on the theory of comonotonic risks, which allow for a more general interpretation in terms of stop-loss premiums. Keywords: Comonotonic risks, Super-replicating strategy. 393077 (IE50) Trading Noise in Equity Price and Corporate Pricing Models. Leong U Man, Wong Hoi Ying, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. In the credit risk modeling literature, it is always assumed that the trading noise on equity is negligible and it is therefore ignored in corporate bond pricing. In this paper, we show empirically that trading noise is present in the equity price after applying a non-linear filtering scheme. We then investigate whether the trading noise plays a substantial role in corporate bond pricing models, such as the extended Merton and Longstaff and Schwartz models. The result shows that extraction of noise from the equity price does not improve the accuracy of bond pricing models significantly. This is consistent with the belief that market participants have already taken the trading noise in the market into consideration when they invest. Keywords: Credit risk, Bond pricing. 393078 (IE50) Worst Case Risk Measurement. Laeven Roger J.A., Goovaerts Marc J., Kaas Rob, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. This paper studies the problem of finding best-possible upper bounds on risk measures under incomplete probabilistic information. Both the univariate and the multivariate case are considered. Furthermore, we aim to identify the probability distributions that give rise to the worst-case scenarios. Special attention is paid to the Value-at-Risk as a measure of risk. Keywords: Risk measures.

IE51: RATES OF INTEREST 393079 (IE51) Estimating a VAR model for the term structure of interest rates in Brazil. Vereda Luciano, Lopes Hélio, Fukuda Regina, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. Pension insurance companies manage huge investment funds with mainly two purposes: (i) extracting the maximum profitability of policyholders’ pension insurance premiums, and (ii) avoiding substantial risk levels to guarantee that future liabilities will be paid off. These objectives call for realistic econometric models, which take into account the evolution of key macroeconomic variables, to produce forecasts for investment returns and liability flows. Typically the vector autoregression technique (VAR) is applied to describe the development of several asset classes like stocks, short and long term bonds, real estate, wage indices and inflation. In our paper we still use the VAR approach to model Brazilian economic and financial variables but, in contrast with other papers in the literature, we try to explain bond yields of several maturities (as in Evans and Marshall (1998)) instead of restricting ourselves to a very limited number of interest rates. Departing from the traditional approach can bring about a sizable development since (i) pension insurance companies’ portfolios are not made exclusively of shares on fixed-income funds whose returns are given by the indices appearing in more conventional models, (ii) they usually contain individual bonds of various maturities and (iii) measuring the current and future solvency status of a pension insurance company necessarily involves marking to market all assets and liabilities, which is possible only if discount rates are available. We also believe that the yield curve is mostly determined by current and future expected economic conditions, especially those related to the course of monetary policy. Therefore, our model is founded on a macroeconomic block which predicts the evolutions of macroeconomic variables influencing monetary policy and the monetary policy instrument itself. Given the future path of these variables, we predict the behavior of yields of several maturities. Another special feature is the method used for choosing the most realistic model. Instead of applying pure statistical or econometric information criteria, we propose judging each alternative on the basis of economic knowledge. More specifically, we select our

preferred model as the one whose impulse response functions best reproduce the stylized facts about the links between macroeconomic variables. This is especially important when one is trying to model the behavior of macroeconomic data from a country like Brazil, where different high-inflation regimes succeeded each other until price stabilization was achieved in 1994. These characteristics of the data discourage working with the sample sizes typically used for developed countries and make prior knowledge of economic mechanisms especially important for extracting meaningful and coherent results from the VAR model. Keywords: Vector autoregression technique, Macroeconomic data.
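The core estimation step behind abstract 393079, fitting a vector autoregression to yields and macroeconomic series, reduces to equation-by-equation least squares. The sketch below (synthetic data, lag order 1, no lag selection, identification or impulse-response analysis) is only meant to show that mechanic.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for a (T x k) data matrix, e.g. two yields and one macro series.
T, k = 200, 3
true_A = np.array([[0.80, 0.05, 0.02],
                   [0.10, 0.70, 0.05],
                   [0.00, 0.05, 0.90]])
y = np.zeros((T, k))
for t in range(1, T):
    y[t] = y[t - 1] @ true_A.T + 0.1 * rng.standard_normal(k)

# VAR(1) by OLS: y_t = c + A y_{t-1} + e_t, estimated equation by equation.
Y = y[1:]                                         # (T-1) x k responses
X = np.hstack([np.ones((T - 1, 1)), y[:-1]])      # intercept + lagged values
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)      # (k+1) x k coefficient matrix
c_hat = beta[0]
A_hat = beta[1:].T

resid = Y - X @ beta
Sigma_hat = resid.T @ resid / (T - 1 - (k + 1))   # residual covariance matrix

print("estimated intercepts:", np.round(c_hat, 3))
print("estimated A:\n", np.round(A_hat, 2))
print("residual covariance:\n", np.round(Sigma_hat, 4))
```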

IE52: ACCOUNTANCY 393080 (IE52) A Binomial Tree Model for the Fair Valuation of Participating Insurance Policies with Interest Rate Guarantees. Kleinow Torsten, Willder Mark, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. In this paper we consider how an insurer should invest in order to hedge the maturity guarantees inherent in participating policies. Many papers have considered the case where the guarantee is increased each year according to the performance of a fixed reference portfolio subject to some guaranteed rate. In this paper we will consider the more realistic case whereby the reference portfolio is replaced by the insurer’s own investments. Hence in our case any change in the hedging portfolio leads to a change in the underlying. We use a binomial tree model to show how this risk can be hedged, and hence calculate the fair value of the contract at outset. Keywords: Binomial Tree Model, Fair valuation, Interest rate guarantees. 393081 (IE52) Estimating default barriers from market information. Wong Hoi Ying, Choi Tze Wang, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. Brockman and Turtle (2003, Journal of Financial Economics) develop a barrier option framework to show that default barriers are significantly positive. Most implied barriers are typically larger than the book value

of corporate liabilities. We show theoretically and empirically that this result is biased due to the approximation of the market value of corporate assets by the sum of the market value of equity and the book value of liabilities. This approximation leads to a significant overestimation of the default barrier. To remove this bias, we propose a maximum likelihood (ML) estimation approach to estimate the asset values, asset volatilities, and default barriers. The proposed framework is applied to empirically examine the default barriers of a large sample of industrial firms. This paper documents that default barriers are positive but not very significant. In our sample, most of the estimated barriers are lower than the book values of corporate liabilities. In addition to the problem with the default barriers, we find significant biases in the estimation of asset value and asset volatility by Brockman and Turtle (2003). Keywords: Default barriers, Market information. 393082 (IE52) Fair Value, Mortality, Expenses and Interest Rate: the case of a life insurance portfolio. Simões Onofre Alves, Dias Eduardo, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. Since the IASB took on the process of creating a standard for the reporting of insurance liabilities, fair value accounting has become one of the most topical subjects in the insurance industry. In the actuarial and accounting literature, a number of theories and techniques for the fair valuation of insurance liabilities have proliferated. The purpose of this work is to develop a practical approach to perform fair value calculations. A real insurance portfolio is used, and the IASB's desire for "high quality, transparent and comparable information in financial statements" is taken into account. Stress tests are performed to measure the sensitivity of the results. Keywords: IASB, Stress tests. 393083 (IE52) Modelling the fair value of annuities contracts: the impact of interest rate risk and mortality risk. Ballotta Laura, Esposito Giorgia, Haberman Steven, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. A full implementation of the fair value accounting system will be carried out as phase II of the IASB project becomes operative. This accounting framework is also considered as the benchmark for the new

solvency regime, "Solvency II", which is currently being discussed by the EU. There remain many open issues concerning phase II and in particular the fair valuation of insurance liabilities. At this stage of the development of the IASB project, the clarification of practical principles is needed in order to guarantee the reliability and comparability of financial statements. The purpose of this paper is to analyze the problem of the fair valuation of annuities contracts. The market valuation of these products requires a pricing framework which includes the two main sources of risk affecting the value of the annuity, i.e. interest rate risk and mortality risk. The IASB has not set any specific guidelines as to which models are the most appropriate in order to consider the mortality risk and interest rate risk, and so we consider a range of different models based on coherence with historical data and actuarial judgment. We calculate the fair value of the annuity, using standard contingent claim theory, as the expected value of the future payments, where the expectation is taken under the risk neutral probability measure. Consequently, the annuity may be regarded as a portfolio of zero coupon bonds, each with maturity set equal to the date of the annuity payments; the weights in the portfolio being given by the survival probabilities. Finally, we focus on the additional information provided by our valuation models when compared to traditional mathematical reserves. In particular, we use stochastic simulations in order to obtain the distribution of the annuity values which are then used for the definition of a suitable market risk margin. Keywords: Fair value, Market risk margin. 393084 (IE52) Selection bias and auditing policies on insurance claims. Pinquet Jean, Guillén Montserrat, Ayuso Mercedes, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. Selection bias results from a discrepancy between the range of estimation of a statistical model and its range of application. This the case for fraud risk models, which are estimated on audited claims but applied on incoming claims in the design of auditing strategies. Now audited claims are a minority within the parent sample since they are chosen after a severe selection performed by claim adjusters. This paper presents a statistical approach which counteracts selection bias without using a random auditing strategy. A two

equation model on audit and fraud (a bivariate probit model with censoring) is estimated on a sample of claims where the experts are left free to take the audit decision. The expected overestimation of fraud risk derived from a single equation model is corrected. Results are rather close to those obtained with a random auditing strategy, at the expense of some instability with respect to the regression components set. Then we compare auditing policies derived from the different approaches. Keywords: Auditing policies.
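A standard way of writing a two-equation audit/fraud model with censoring of the kind described in abstract 393084 (generic notation, not necessarily the authors' exact specification) is

\[
a_i^{*} = x_i'\beta_a + u_i,
\qquad
f_i^{*} = x_i'\beta_f + v_i,
\qquad
(u_i, v_i) \sim N\!\left(0,\begin{pmatrix}1 & \rho\\ \rho & 1\end{pmatrix}\right),
\]

where the audit indicator $a_i = \mathbf{1}\{a_i^{*}>0\}$ is observed for every claim, the fraud indicator $f_i = \mathbf{1}\{f_i^{*}>0\}$ is observed only when $a_i = 1$, and the correlation $\rho$ captures the selection effect that a single-equation fraud model ignores.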

IE53: INVESTMENT 393085 (IE53) Backwards recursion in the multidimensional binomial model and valuation of multivariate contingent claims with early exercise features. Kyng Timothy James, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. This paper presents a multivariate extension of the well-known binomial option pricing model of Cox, Ross and Rubinstein. It also develops a method for indexing the multidimensional array used to store the payoffs and a backwards recursion method to value contingent claims with early exercise features where the payoff depends on multiple sources of risk. An affine transformation of a multivariate binomial distribution is used to numerically approximate a correlated multidimensional geometric Brownian motion process. This process is the driving process which generates the amounts paid under the contingent claim being valued. The amounts paid can be stored in a multidimensional tree/array. A method is developed for indexing/referencing the values in the tree so as to be able to use a backwards recursion approach to valuing the contingent claim. This permits the valuation of multivariate contingent claims with early exercise features. Multivariate contingent claims with early exercise features are intractable when it comes to deriving or using an analytic approach to valuation. The methodology developed here can be applied to a wide range of option pricing and contingent claims valuation problems. In particular, it is applicable to asset-liability studies, the valuation of executive share option schemes with performance hurdles and early exercise features, and the valuation of "real options" involving more than one

asset. The backwards recursion method can be applied to both American and Bermudan types of early exercise. This numerical method is easy to program and is efficient and practical. Various examples are included to illustrate the method. Keywords: Binomial option pricing model, ALM. 393086 (IE53) Buy and Hold Strategies in Optimal Portfolio Selection Problems: Comonotonic Approximations. Marín-Solano Jesús, Bosch-Príncep M., Dhaene Jan, Ribas C., Roch O., Vanduffel S., Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. Strategic portfolio selection is the process used to identify the best allocation of wealth among a basket of securities for an investor with a given consumption/saving behaviour over a given investment horizon. The basket of available securities is a selection of risky assets (such as stocks) and risk-free components (such as bonds). The individual investor or the asset manager chooses an initial asset mix and a particular tactical trading strategy, within a given set of strategies, for the whole time period under consideration. In this paper we address multiperiod optimal portfolio selection problems in a lognormal setting. As the time unit that we consider is long (typically 1 year), the central limit theorem suggests that it is appropriate to assume a Gaussian model. In the optimal investment problem, we work with risk measures. In particular, we look for strategies optimizing the Value-at-Risk of the distribution function of final wealth for a given probability. We study the case where the investor has to choose the optimal investment strategy within a class of buy and hold strategies. More precisely, the investor decides which proportion is invested in each asset, and he invests the same proportions in each period of time. No rebalancing takes place. As the terminal wealth is a sum of dependent lognormal random variables, its distribution function cannot be determined analytically and is too cumbersome to work with. Therefore, it becomes convenient to work with accurate analytic approximations for the distribution function at hand. The first approximation that we consider for the distribution of terminal wealth is the so-called comonotonic upper bound, which is an upper bound for the exact distribution in the convex order sense. A much better approximation is given by the comonotonic lower (in convex order) bound. These

comonotonic approximations reduce the multivariate randomness of the multiperiod problem to univariate randomness. The results obtained for buy and hold strategies are compared with those in Dhaene et al. (Journal of Risk and Insurance (2005) 72, 253-301) for constant mix portfolios. Keywords: Optimal portfolio selection, Comonotonic approximations. 393087 (IE53) Fair Valuation of Participating Insurance Policies under Management Discretion. Kleinow Torsten, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. We consider the fair valuation of maturity guarantees for participating insurance policies. Such a policy can be seen as a path-dependent option whose underlying security is the investment portfolio of the insurance company which sold the policy. We consider the case in which the payoff of the policy depends on the development of the entire portfolio of the insurer. This means that the insurer can not set up a separate portfolio to hedge the risk associated with the policy, since any hedge portfolio would become part of the insurer’s investment portfolio and would therefore change the underlying security of the contract. Instead the insurer can use its discretion about its investment strategies to reduce or eliminate the risk associated with the policy. In that sense, the insurer’s investment portfolio serves simultaneously as the underlying security and as the hedge portfolio. We will show how a risk-neutral price of these contracts can be calculated and how the management of the insurer can use its discretion about its investment strategy to hedge the contract if the financial market satisfies some assumptions. In contrast to Kleinow and Willder (2005) we consider a general continuous-time financial market. Keywords: Fair valuation, Participating insurance policies, Management discretion. 393088 (IE53) Numerical Simulation for Asset-Liability Management in Life Insurance. Gerstner Thomas, Griebel Michael, Goschnick Ralf, Haep Marcus, Holtz Markus, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. Due to increased competition, regulatory decrees and

increasing globalization, stochastic asset-liability management models are becoming more and more important for life insurance companies. In this talk, we propose a time-discrete asset-liability management model for the simulation of simplified balance sheets of life insurance products. It includes a stochastic capital market, lapse effects, as well as management rules for bonus declarations. For the numerical simulation of this model, we propose deterministic integration schemes, such as Quasi-Monte Carlo and Sparse Grid methods, as alternatives to the commonly used Monte Carlo simulation. Numerical examples demonstrate that these deterministic methods outperform Monte Carlo simulation even for long time horizons. The success of the deterministic integration schemes can be explained using the concept of the effective dimension and the concentration of measure phenomenon. Keywords: Quasi Monte Carlo, Simulation, ALM. 393089 (IE53) Optimal Dividends and ALM under Unhedgeable Risk. Pelsser Antoon A.J., Laeven Roger J.A., Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. In this paper we develop a framework for optimal investment decisions for insurance companies under unhedgeable risk. The perspective that we choose is that of an insurance company that maximizes the stream of dividends paid to its shareholders. The policy instruments that the company has are the dividend policy and the investment policy. The insurance company can continue to pay dividends until bankruptcy, and hence the time of bankruptcy is also endogenously controlled by the dividend and investment policies. Using stochastic control theory, we determine simultaneously the optimal investment policy and the optimal dividend policy, taking the insurance risks as given. Keywords: ALM, Unhedgeable risk. 393090 (IE53) Portfolio optimization with shortfall, expected shortfall and other positive homogeneous risk measures for the elliptical family. Landsman Zinoviy, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. It is a widespread belief that for elliptical risks the use of any positive homogeneous and translation invariant risk measure leads to the same optimal portfolio

management as does the use of the mean-variance risk measure, and hence the solution is equivalent to the Markowitz portfolio (see, for example, McNeil, Frey and Embrechts (2005), Quantitative Risk Management, Section 6.1.5, where the problem is well documented). It should be noted that such a conclusion is true only for a special type of linear constraint on the portfolio, namely when the expected portfolio risk is certain. For other types of linear constraint on the portfolio, the solution is distinct from the solution obtained with the mean-variance risk measure. We give the condition under which the portfolio management problem has a finite solution under a linear constraint of general form, and provide the explicit closed-form solution. As a main illustration we solve the problem of minimizing the shortfall and expected shortfall risk measures. As a corollary we show that if the minimum shortfall solution is finite, the expected shortfall solution is automatically finite. Keywords: Portfolio optimization, Risk measures, Elliptical family. 393091 (IE53) The younger the riskier: not always true. How the market structure affects the optimal portfolio with stochastic time horizon. Menoncin Francesco, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. In this paper we consider the asset allocation problem for an investor whose CRRA preferences are defined over his whole life (with a deterministic force of mortality) and over his intertemporal consumption. In a complete financial market with n assets and s state variables, we present a quasi-explicit solution to the optimal consumption and investment problem. We show that the optimal portfolio is computed under a particular probability measure different from both the historical probability and the equivalent martingale measure. The optimal portfolio is formed by two components: (i) a speculative component and (ii) a hedging component. In order to find an explicit solution to the optimization problem and to carry out some numerical simulations, we compute the optimal portfolio when the financial market contains: (i) only one state variable, given by the stochastic instantaneously riskless interest rate, which follows a mean-reverting process with affine drift and volatility, (ii) a riskless asset, (iii) a constant-time-to-maturity zero-coupon bond, and (iv) a stock. Under the

hypothesis that the square of the market price of risk is affine in the riskless interest rate, we find a closed-form solution for the asset allocation. In this particular case the speculative component of the optimal portfolio does not depend on time, while the hedging component explicitly depends on time via the force of mortality, which is not constant through time. The portfolio riskiness is computed as the total percentage of wealth invested in risky assets (zero-coupon bond and stock). By computing the derivative of this riskiness measure with respect to time, we show that the riskiness may either increase or decrease through time according to a condition that must hold on both the interest rate parameters and the force of mortality. In particular, if the force of mortality is constant, then the optimal portfolio does not depend on time but just on the value of the state variable. Finally, we carry out some numerical simulations. Keywords: Optimal portfolio, Stochastic time horizon. 393092 (IE53) Using Genetic Algorithms for Optimal Investment Allocation Decision in Defined Contribution Pension Schemes. Senel Kerem, Pamukcu A. Bulent, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. In this paper, we demonstrate that genetic algorithms may provide an alternative solution for the optimal investment allocation decision in defined contribution pension schemes. Most of the previous research papers, including our previous studies, attempt to solve the problem analytically. The problem with analytical solutions is that they make numerous restricting assumptions such as lognormal distributions, time-invariant covariance matrices, or short-selling restrictions that are not (or, rather, that cannot be) incorporated into the model for the sake of mathematical tractability. Although some of these restricting assumptions can be relaxed, as we have previously demonstrated by relaxing the assumption of a time-invariant covariance matrix in one of our previous studies, such improvements come at the expense of increased mathematical complexity. Genetic algorithms provide numerical solutions that are not bound by such restricting assumptions. For instance, asset returns can be simulated via a bootstrap method, so that the genetic algorithm can work with any distribution and not just with the lognormal distribution. Similarly, short-selling restrictions can easily be incorporated in the genetic algorithm. This study focuses on the relative
performance of genetic algorithms in solving the asset allocation problem for defined contribution pension schemes. In particular, we will compare the simulation results from a standard analytical model with results from a genetic algorithm in order to analyse the effect of short-selling restrictions. This comparison will also shed light on the degree of suboptimality due to the restricting assumptions used in analytical models. Keywords: Defined contribution, Genetic algorithms, Simulation approach, Life insurance mathematics, Stochastic processes.
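As a rough illustration of the kind of approach the abstract describes, and not the authors' implementation, the following Python sketch evolves a long-only asset allocation with a simple genetic algorithm, using bootstrapped returns as the fitness environment. The return data, the fitness criterion (probability of reaching a fund target) and the GA settings are all hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical historical annual returns for 3 asset classes (rows = years).
    hist = rng.normal([0.07, 0.04, 0.02], [0.18, 0.08, 0.01], size=(30, 3))

    def bootstrap_paths(n_paths=500, horizon=20):
        """Resample historical return years with replacement (no distributional assumption)."""
        idx = rng.integers(0, hist.shape[0], size=(n_paths, horizon))
        return hist[idx]                       # shape (n_paths, horizon, 3)

    PATHS = bootstrap_paths()

    def fitness(w, target=2.5):
        """Illustrative fitness: probability that a unit fund, rebalanced to weights w,
        reaches a hypothetical target after the horizon (higher is better)."""
        growth = np.prod(1.0 + PATHS @ w, axis=1)
        return np.mean(growth >= target)

    def project(w):
        """Enforce long-only weights summing to one (no short selling)."""
        w = np.clip(w, 0.0, None)
        s = w.sum()
        return w / s if s > 0 else np.full_like(w, 1.0 / len(w))

    def genetic_algorithm(pop_size=40, generations=60, mutation=0.05):
        pop = np.array([project(rng.random(3)) for _ in range(pop_size)])
        for _ in range(generations):
            scores = np.array([fitness(w) for w in pop])
            order = np.argsort(scores)[::-1]
            parents = pop[order[: pop_size // 2]]            # selection: keep best half
            children = []
            for _ in range(pop_size - len(parents)):
                a, b = parents[rng.integers(len(parents), size=2)]
                child = np.where(rng.random(3) < 0.5, a, b)  # uniform crossover
                child = child + rng.normal(0, mutation, 3)   # mutation
                children.append(project(child))
            pop = np.vstack([parents, children])
        scores = np.array([fitness(w) for w in pop])
        return pop[np.argmax(scores)], scores.max()

    best_w, best_score = genetic_algorithm()
    print("allocation:", np.round(best_w, 3), "P(reach target):", round(best_score, 3))

Because the fitness is evaluated purely by simulation, any constraint (here the no-short-selling projection) or any bootstrapped return distribution can be swapped in without changing the optimiser, which is the flexibility the abstract emphasises.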

IE54: PROFIT-SHARING ARRANGEMENTS 393093 (IE54) Methods to Estimate the Optimal Dividend Barrier and the Probability of Ruin. Gerber Hans U., Shiu Elias S.W., Smith Nathaniel, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. In a practical situation, complete information about the claim amount distribution may not be available. This paper presents approximate methods to estimate the optimal dividend barrier and the probability of ruin. In particular, the De Vylder methods and the diffusion approximations are analyzed and illustrated numerically. Keywords: Optimal dividend barrier, Probability of ruin. 393094 (IE54) Some Optimal Dividend Problems in a Markov-modulated Risk Model. Li Shuanming, Lu Yi, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. In this paper, we derive some results on the dividend payments prior to ruin in a Markov-modulated risk model in which the claim inter-arrivals, claim sizes and premiums are influenced by an external Markovian process. A system of integro-differential equations with boundary conditions satisfied by the n-th moment of the present value of the total dividends prior to ruin, given the initial environment state, is derived and solved. We show that both the probabilities that the surplus process attains a dividend barrier from the initial surplus without first falling below zero and the Laplace transforms of the time that the surplus process first hits a barrier without ruin occurring can be expressed in terms of the solution of the above mentioned system of integro-differential equations.
In the two-state model, explicit results are obtained when the claim amounts in both states are exponentially distributed. Finally, a numerical comparison with the results obtained from the associated averaged compound Poisson risk model is also given. Keywords: Optimal dividend problem, Markov-modulated risk model.
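For readers who want a feel for the quantity studied, the following Python sketch estimates by simulation the expected present value of dividends paid under a barrier strategy in a two-state Markov-modulated compound Poisson model with exponential claims. All parameter values are hypothetical, and the simulation is only an illustration of the first moment, not a substitute for the integro-differential approach of the paper.

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical two-state environment: claim arrival rates, exponential claim
    # means and premium rates per state, and intensities of leaving each state.
    lam   = np.array([1.0, 2.0])     # Poisson claim rates
    mu    = np.array([1.0, 1.5])     # mean claim sizes (exponential)
    prem  = np.array([1.5, 3.5])     # premium income rates
    alpha = np.array([0.5, 0.5])     # switching intensities out of state 0 / state 1
    delta = 0.03                     # force of interest for discounting dividends

    def discounted_dividends(u, b, horizon=200.0):
        """One path: PV of dividends under barrier b, starting from surplus u in state 0."""
        t, x, state, pv = 0.0, u, 0, 0.0
        while t < horizon:
            t_claim  = rng.exponential(1.0 / lam[state])
            t_switch = rng.exponential(1.0 / alpha[state])
            dt = min(t_claim, t_switch, horizon - t)
            c = prem[state]
            # time (within dt) spent at the barrier, where premiums flow out as dividends
            t_hit = max(0.0, (b - x) / c)
            if t_hit < dt:
                # integral of c * exp(-delta*(t+s)) ds over s in [t_hit, dt]
                pv += c * (np.exp(-delta * (t + t_hit)) - np.exp(-delta * (t + dt))) / delta
            x = min(b, x + c * dt)
            t += dt
            if t >= horizon:
                break
            if t_claim <= t_switch:
                x -= rng.exponential(mu[state])
                if x < 0:
                    break                     # ruin: dividend payments stop
            else:
                state = 1 - state             # environment switches
        return pv

    paths = [discounted_dividends(u=2.0, b=6.0) for _ in range(5000)]
    print("estimated E[PV of dividends]:", round(float(np.mean(paths)), 3))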

IM00: GENERAL AND MISCELLANEOUS 342001 (IM00) On the Prudence of the Actuary and the Courage of the Gambler (Entrepreneur). Bühlmann Hans, Giornale dell’Istituto Italiano degli Attuari, Anno LXV - N. 1-2; 1° e 2° semestre 2002. Since the middle of the 19th century, the actuarial profession has been the promoter of a solid basis for life insurance; this in contrast to earlier times, when life insurance was conducted as a very speculative business: for example, in Great Britain, the following statistics for the period 1844-1853 speak for themselves: 335 new insurance companies were planned, 148 new companies were actually founded, and only 59 of these newly founded companies survived the ten-year period. The empirical probability of ruin of the newly founded companies was hence about 60% in 10 years (89 of the 148 founded companies failed, i.e. 89/148 ≈ 0.60)! On this historical background, it is evident that the main task of the actuary has always been and still is the avoidance of the ruin of his company. Or, in other words: the actuary’s endeavour is above all to make sure that insurance works. The coat of arms of the oldest actuarial organization, the Institute of Actuaries, embraces this key role with the motto certum ex incertis. Our Anglo-Saxon colleagues also have an excellent terminology to describe this attitude: The prudence of the Actuary. This prudence of the Actuary as a basic attitude is often in conflict with another attitude, namely that of the Entrepreneur, who basically wants to maximize future profits and hence the Value of the Firm. My motivation for writing this paper is not a moral one, namely to judge between these attitudes. I rather want to show the consequences implied by each of these attitudes for the operational activity of an insurance company. My method is based on the standard model in Mathematical Risk Theory. Hence, by the use of mathematics, I aim to be more precise and more objective. Keywords: Life insurance, Probability of ruin.

IB: INSURANCE BRANCH CATEGORIES

IB10: LIFE 393095 (IB10) A Lifetime Model Using Human Life Value Concept as a Basis for Life Insurance Coverage Computations. Mohamad Hasim Haslifah, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. The human life value concept has been used by quite a number of professionals, such as actuaries, life insurance consultants, financial planners and lawyers claiming accident compensation for their clients. Basically, human life value is based on the individual’s earning ability: it represents the loss of income that the person’s family would suffer, income that they used to receive during the breadwinner’s lifetime. In the application of human life value, its interactive feature allows assessment of the value of an individual’s life to his dependents. Financially, this means the exact amount of money that would have to be invested today to allow the family to maintain their current standard of living after the person’s death. The purpose of this article is to develop a lifetime model using the human life value concept under Malaysian scenario assumptions. Four major areas are emphasized in this model: the analysis of total income, taxes, expenses, and income after the death of the breadwinner. Keywords: Human life value. 393096 (IB10) Adverse selection spirals. de Jong Piet, Ferris Shauna, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. This article discusses risk classification and develops a framework for estimating the effects of restrictions on risk classification. It is shown that expected losses due to adverse selection depend only on the means, variances and covariances of insurance factors and on the rates of uptake of insurance. Percentage loadings required to avoid losses are displayed. Correlated information, such as family history, is also incorporated, and it is seen how such information limits losses and decreases required loadings. Although the evidence suggests that adverse selection is not, at present, a severe
problem for insurers, this might change if the authorities impose restrictions on risk classification and/or customers gain an informational advantage (such as better knowledge of their own risk levels). Application is made to unisex annuity pricing in the UK insurance market. Keywords: Annuity pricing, Risk classification. 393097 (IB10) Dynamic Asset Allocation with Annuity Risk. Koijen Ralph S.J., Nijman Theo E., Werker Bas J.M., Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. We study a dynamic asset allocation problem over the investor’s life-cycle, taking into account annuity risk at the moment of retirement. Optimally, the investor allocates wealth at retirement to nominal, inflation-linked, and variable annuities and conditions the annuity choice on the state of the economy. We also consider the case in which there are, either for behavioral or institutional reasons, limitations in the types of annuities that are available at retirement. Subsequently, we determine how the investor optimally anticipates the retirement choice in the period before retirement. We show in particular that i) conditioning information is important for the optimal annuity choice, ii) additional hedging demands induced by the annuity demand due to inflation risk and time-varying risk premia are economically significant, while the additional demand to hedge real interest rate risk is negligible in welfare terms, and iii) restricting the annuity menu to nominal or inflation-linked annuities is costly for both conservative and more aggressive investors. More specifically, the welfare costs of not exploiting conditioning information are estimated to be 4%-9%. Even though the optimal conditional annuity strategy turns out to be a complex function of the state variable, we show that it is possible to design a simple linear portfolio rule which reduces the welfare costs by 75%-90%. The welfare costs caused by not hedging annuity risk in the period before retirement range from 2% to over 10%, depending on the risk preferences of the investor and the annuity strategy implemented at retirement. These results are obtained in a financial market model which allows for stochastic interest and inflation rates as well as time-variation in equity and bond risk premia. Keywords: Asset allocation, Annuity risk.
393098 (IB10) Dynamic life tables. Age-period-cohort models. Debón Ana, Martínez Francisco, Montes Francisco, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. Dynamic tables arose from the need to incorporate the effect of the period (year of death) in the estimation of mortality. They represent an important advance with respect to static tables because they allow historical trends in mortality to be captured. Therefore, they have become a suitable tool for actuarial science. The book by Tabeau et al. (2001) describes recently developed models and emphasizes the need for actuaries to adopt models that have already given good results in other fields, such as epidemiology. Pitacco (2004) collects the latest contributions and pays special attention to the risk that longevity implies for the insurance company. Some of the most outstanding models, and also those most frequently used by actuaries, are the models based on reduction factors (Continuous Mortality Investigation Reports, CMIR 17, 1999), time-dependent Gompertz-Makeham functions (Renshaw et al., 1996) and, more recently, the Lee-Carter model (Lee and Carter, 1992; Brouhns et al., 2002) or the P-splines method (Currie et al., 2004). The age-period-cohort models constitute an evolution of the dynamic models, as they incorporate the influence of the year of birth (cohort). More recently, Renshaw and Haberman (2006) investigate the feasibility of extending the reduction-factor methodology to the modelling and projection of age-period-cohort effects. The aim of our work is to use the model suggested by Holford (1983) and Clayton and Schifflers (1987a, b) with mortality data from Spain, Sweden and Czechoslovakia, analyzing the data by age, year of death and year of birth, with special emphasis on the cohort effect in the different countries. Keywords: Mortality, Lee-Carter. 393099 (IB10) Fair value and demographic aspects of the insured loans. Coppola Mariarosaria, D’Amato Valeria, Sibillo Marilena, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. The insured loan is a contractual form of particular interest in a financial system in which the interlacing of banking and life
insurance activities is increasingly common. In this setting, the paper deals with liability valuation in compliance with the fair value requirements for financial assets and liabilities, as mapped out by the international boards working on this topic. The aim of the paper is to examine two different aspects inherent in the fair valuation problem in the specific contractual case of the insured loan. First, we propose a closed form for the fair valuation of the mathematical provision in a framework in which the randomness in mortality is considered together with the financial risk component. The approach we follow implies that the mathematical provision is calculated at current values, that is, at current interest rates and at current mortality rates. These two variables contain the basic risk drivers of a life insurance business and, broadly speaking, the many-sided risk system consists, in its most relevant characteristics, in the choice of the "right" discounting process and of the "right" mortality table for forecasting future scenarios. Regarding this second aspect, it should be observed that, like all mortality-dependent contracts, the insured loan is not tradeable in the market in the complete sense of the word, since no secondary market exists for this kind of product. As a consequence, the market we refer to is incomplete with respect to the demographic component and, for practical fair valuation purposes, it gives no indication about the dynamics of the mortality measure. This aspect is treated in the second part of the paper, in which the impact of the risk connected to the choice of the mortality table (table risk) on the fair value of the mathematical provision is quantified using a measurement tool based on conditional stochastic calculation. With interest rates described by means of a stochastic pricing model based on the no-arbitrage principle, and under different mortality scenarios, several numerical applications of the models are provided, with graphical illustrations of the results, both for the fair valuation and for the table risk quantification. Keywords: Fair value, Life insurance. 393100 (IB10) Longevity Risk in Pension Annuities. Hári Norbert, De Waegenaere Anja, Melenberg Bertrand, Nijman Theo E., Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. In modeling and forecasting mortality the Lee-Carter approach is the benchmark methodology. In many
empirical applications the Lee-Carter approach results in a model that describes the log central death rate by means of a linear trend, where different age groups have different trends. However, due to the volatility in mortality data, the estimation of these trends, and thus the forecasts based on them, is rather sensitive to the sample period employed. We allow for time-varying trends, depending on a few underlying factors, to make the estimates of the future trends less sensitive to the sampling period. We formulate our model in a state-space framework, and use the Kalman filtering technique to estimate it. We illustrate our model using Dutch mortality data. We analyze the importance of longevity risk for the solvency of a portfolio of pension annuities. We distinguish two types of mortality risk. Micro-longevity risk quantifies the risk related to uncertainty in the time of death if survival probabilities are known with certainty, while macro-longevity risk is due to uncertain future survival probabilities. We use our generalized two-factor Lee-Carter mortality model to produce forecasts of future mortality rates, and to assess the relative importance of micro- and macro-longevity risk for funding ratio uncertainty. The results show that even if uncertainty in future lifetime is the only source of uncertainty, pension funds are already exposed to a substantial amount of risk. For large portfolios, systematic deviations from expected survival probabilities and parameter risk imply that buffers that reduce the probability of underfunding to an acceptable level at a 5-year horizon have to be of the order of magnitude of 4.2% to 6.3% of the value of the initial liabilities. Alternatively, longevity risk could be hedged by means of a stop loss reinsurance contract. We use the mortality forecast model to price these contracts. Keywords: Longevity, Survival probability. 393101 (IB10) Management of a pension fund under stochastic mortality and interest rates. Hainaut Donatien, Devolder Pierre, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. The purpose of this article is to analyze the dividend policy and the asset allocation of a portfolio of life insurance policies. We consider a financial market composed of three assets: cash, stocks and a rolling bond. Interest rates are defined by a Vasicek model, whereas the mortality of the insured population is modelled by a Poisson process. The fund manager’s aim is to maximize the utility of dividends and of a terminal
surplus under the constraint that all deflated intermediate cash flows are at most equal to the current wealth. The incompleteness of the insurance market, generated by the stochastic mortality, entails the non-uniqueness of the deflator. However, closed-form formulae are obtained when the actuarial part of the deflator is defined by a constant process. In particular, results are developed for CRRA and CARA utility functions. The method of resolution is based both on the Cox and Huang approach and on dynamic programming. This work is concluded by an application of our results to the management of a portfolio of life annuities. Keywords: Stochastic mortality, Dynamic programming. 393102 (IB10) Markov Aging Process and Phase-type Law of Mortality. Lin X. Sheldon, Liu Xiaoming, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. Modelling mortality risk plays a fundamental role in actuarial science. In this talk, we will discuss the general principle of mortality modelling and introduce a dynamic Markov mortality modelling framework. In our approach, a finite-state continuous-time Markov process is used to model the physiological aging process of a life. The time of death of the life then follows a phase-type distribution. We emphasize the relations between mortality and physiological status. The advantages of the model include 1) that mortality is linked with biologically meaningful factors, 2) that it provides a framework under which the relationship between mortality and physiological variables can be investigated, 3) that the heterogeneity or frailty effect of a cohort can be investigated, and 4) that the matrix-analytic method developed for phase-type distributions becomes applicable and closed-form expressions are available for the premiums of certain insurance and annuity products. The model has been fitted to various types of mortality data and the results show that the fit is statistically satisfactory. We will also discuss the applications of this model in the valuation of mortality-related insurance products. Keywords: Markov aging process, Phase-type distribution. 393103 (IB10) Modeling uncertainty in mortality trends within a Poisson framework. Olivieri Annamaria, Pitacco Ermanno, Proceedings of
the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. This paper aims at analyzing the riskiness borne by an annuity provider, arising from both random fluctuations and systematic deviations of the numbers of survivors. Systematic deviations are caused by uncertainty in mortality trends and constitute the so-called "uncertainty risk". In recent literature, several contributions have focussed on stochastic modeling of mortality trends, in particular with regard to adult and old-adult ages. Alternative approaches have been discussed. Olivieri (2001) and Olivieri and Pitacco (2002, 2003) refer to a given mortality law and adopt a finite set of possible outcomes for the relevant parameters; a "static" approach towards mortality evolution follows, given that the set of outcomes is chosen just once (at the starting point of the valuation). Biffis (2005), Biffis and Millossovich (2006) and Schrager (2006) adopt affine processes for modeling the force of mortality, allowing for dynamic changes in the mortality pattern; however, this requires the adoption of a risk-neutrality argument. Similar settings are developed, for example, by Cairns et al. (2005) and Luciano and Vigna (2005). The approach based on diffusion processes is valuable because of its dynamical features; on the other hand, the approach based on a discrete set of alternative mortality trends does not require the hypothesis of risk-neutrality, which is far from being realized when mortality is addressed, due to incomplete markets for mortality risks. In this paper we attack the problem from a different perspective, focussing on the annual number of deaths in the population of annuitants. The random numbers of deaths are assumed to be Poisson distributed, conditional on a given age pattern of mortality and a specified trend assumption. While random fluctuations naturally follow from the Poisson assumption, the systematic deviation risk is introduced by allowing for uncertainty in the Poisson parameter. Several choices for modeling parameter uncertainty can be envisaged, in particular depending on the population addressed: e.g. a single cohort of annuitants versus a set of cohorts, a run-off versus a going-concern perspective. Keywords: Mortality, Diffusion process. 393104 (IB10) Modelling stochastic bivariate mortality. Luciano Elisa, Spreeuw Jaap, Vigna Elena, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006.
Stochastic mortality, i.e. modelling death arrival via a jump process with stochastic intensity, is gaining increasing acceptance as a way to represent mortality risk. This paper represents a first attempt to model the mortality risk of couples of individuals according to the stochastic intensity approach. On the theoretical side, we extend the Cox process set-up to couples, i.e. the idea that mortality is driven by a jump process whose intensity is itself a stochastic process, specific to a particular generation within each gender. Dependence between the survival times of the members of a couple is captured by an Archimedean copula. On the calibration side, we fit the joint survival function by calibrating separately the (analytical) copula and the (analytical) margins. First, we select the best-fit copula according to the methodology of Wang and Wells (2000) for censored data. Then, we provide a sample-based calibration for the intensity, using a time-homogeneous, non-mean-reverting, affine process: this gives the analytical marginal survival functions. Coupling the best-fit copula with the calibrated margins we obtain, on a sample generation, a joint survival function which incorporates the stochastic nature of mortality improvements and is far from representing independence. On the contrary, since the best-fit copula turns out to be a Nelsen copula, dependence increases with age and long-term dependence exists. Keywords: Stochastic bivariate mortality. 393105 (IB10) Risk Profiles of Life Insurance Participating Policies: measurement and application perspectives. Orlando Albina, Politano Massimiliano, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. The paper deals with the calculation of suitable risk indicators for Life Insurance Participating Policies in a fair value context. In particular, the aim of the work is to determine the Value at Risk of the mathematical reserve for Life Insurance Participating Policies. The VaR calculation poses both methodological and numerical problems: for this reason the paper analyses both the choice of the VaR models and the calculation technique. Numerical applications illustrate the results. Keywords: Value at Risk, Fair value, Participating policies, Mathematical reserve. 393106 (IB10) Systematic risk of mortality on an annuity plan.
Planchet Frédéric, Juillard Marc, Faucillon Laurent, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. The purpose of this article is to propose a realistic and operational model to quantify the systematic part of the mortality risk included in a pension commitment. The model presented is constructed on the basis of a Lee-Carter model, which is one of those most used by actuaries to establish life tables, and is also the simplest. Using it, we first create the mortality surface and then distort this surface in order to analyse how the pension commitment evolves in the case of a deviation from the expected future mortality. So, contrary to current models, which assume that future mortality will not differ from the assessed one, several mortality evolutions are simulated, and with them several values of the reserve that has to be formed. Then, thanks to the Monte Carlo technique, it is possible to assess the reserve distribution. The stochastic prospective tables constructed in this way allow one to project the evolution of the random rates in the future and to quantify the non-mutualisable part of the risk in the commitment of an annuity plan. The interest of the proposed model lies in the fact that it complies with the Solvency II project, which will compel actuaries to give up deterministic models in favour of stochastic ones in order to assess the fair value of portfolios. The results show that the impact on the commitment of considering stochastic mortality, compared with the deterministic case, is small. However, beyond a high number of policies it becomes necessary to take the systematic risk of mortality into account in order to obtain a prudent reserve. Keywords: Mortality, Annuity plan. 393107 (IB10) Valuation of Life Insurance Products Under Stochastic Interest Rates. Gaillardetz Patrice, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. In this talk, we will introduce a consistent pricing method for life insurance products whose benefits are contingent on the level of interest rates. Since these products involve mortality as well as financial risks, we present an approach that introduces stochastic models for insurance products through stochastic interest rate models. Similar to Black, Derman and Toy (1990), we
assume that the prices and volatilities of standard insurance products are given exogenously. We then derive martingale probabilities that will evolve with the stochastic interest rates. Numerical examples on Variable Annuities are provided to illustrate the implementation of this method. Keywords: Mortality, Financial risk.
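As a stylised illustration of pricing an interest-sensitive life product on a binomial short-rate lattice, the Python sketch below values a pure endowment by backward induction, combining hypothetical one-year survival probabilities with a hypothetical recombining rate tree. The calibration to exogenous prices and volatilities described in the abstract is not reproduced, and all parameter values are invented.

    import numpy as np

    # Hypothetical recombining binomial short-rate lattice: r[t][j] is the one-year
    # rate at time t in node j (up-moves scale the rate by u, down-moves by d).
    def rate_lattice(r0=0.04, u=1.2, d=0.9, n=5):
        return [[r0 * (u ** j) * (d ** (t - j)) for j in range(t + 1)] for t in range(n)]

    # Hypothetical one-year survival probabilities for the insured over the term.
    p_survive = [0.990, 0.988, 0.986, 0.983, 0.980]

    def pure_endowment(benefit=100.0, n=5, q_up=0.5):
        """Time-0 value of 'benefit' paid at time n if the insured is then alive,
        by backward induction over the rate lattice (risk-neutral probability q_up
        for rate moves, mortality assumed independent of interest rates)."""
        rates = rate_lattice(n=n)
        survival_to_n = float(np.prod(p_survive[:n]))
        values = np.full(n + 1, benefit * survival_to_n)   # payoff in every terminal node
        for t in range(n - 1, -1, -1):
            values = np.array([
                (q_up * values[j + 1] + (1 - q_up) * values[j]) / (1 + rates[t][j])
                for j in range(t + 1)
            ])
        return float(values[0])

    print("pure endowment value:", round(pure_endowment(), 3))

Because mortality is assumed independent of the rate process here, the survival probability simply scales the terminal payoff; a benefit that itself depends on the prevailing rate would instead be made node-dependent before the backward induction.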

IB20: HEALTH 393108 (IB20) A dynamic family history model of hereditary nonpolyposis colorectal cancer. Lu Li, Macdonald Angus, Waters Howard, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. Hereditary Nonpolyposis Colorectal Cancer (HNPCC) is characterised by the familial aggregation of cancer of the colon and rectum (CRC). It may be caused by any of five mutations in DNA mismatch repair (MMR) genes or by non-genetic factors, such as life style. However, it accounts for only about 2% of CRC, which is a very common cancer. Previous actuarial models of diseases with only genetic causes assumed that a family history of the disease shows mutations to be present, but this is not true of HNPCC. This is a significant limitation, since the best information available to an underwriter (especially if the use of genetic test results is banned) is likely to be knowledge of a family history of CRC. We present a Markov model of CRC and HNPCC, which includes the presence of a family history of CRC as a state, and estimate its intensities allowing for MMR genotype. Using this we find the MMR mutation probabilities for an insurance applicant with a family history of CRC. Our model greatly simplifies the intensive computational burden of finding such probabilities by integrating over complex models of hidden family structure. We estimate the costs of critical illness insurance given the applicant’s genotype or the presence of a family history. We then consider what the cost of adverse selection might be if insurers are unable to use genetic test or family history information. We also consider the effect of using alternative definitions of a family history in underwriting. Keywords: Cancer, Genotype modelling. 393109 (IB20) Multivariate Prediction of Health Care Expenditures. Frees Edward W., Gao Jie, Rosenberg Marjorie A., Proceedings of the 10th Congress on Insurance
Mathematics and Economics, Leuven, July 2006. In an atmosphere of searching for ways to constrain resources spent on healthcare, a decision on whether a healthcare program is successful requires an estimate of costs. Methods for predicting health care expenditures in complex situations are becoming increasingly important. To motivate consideration of one such complex situation, we examine public use data from the Medical Expenditure Panel Survey (MEPS) monitored by the US Agency for Healthcare Research and Quality. These data can provide nationally representative estimates of healthcare use by the civilian non-institutionalized population. The data are collected at the person level and contain detailed demographic information, health conditions, health status, use of medical services and their payments, as well as information about income and employment. Expenditures are the sum of payments for care from all sources, including out-of-pocket, private health insurance, Medicare, or Medicaid, and are also classified by type as either hospital inpatient or outpatient services. We show that inpatient and outpatient expenditures can be modeled by a bivariate aggregate loss model. For each type of expenditure, claim frequency and amounts are represented as regression models using demographic, education, regional and economic factors for prediction. We find that these predictors are more useful for predicting frequency than amounts. A random effects specification helps to explain correlations of claims within a subject. A simple predictor is developed that can be used for both univariate and bivariate modeling. Keywords: Healthcare, Regression. 393110 (IB20) Predicting the personal injury compensation awarded by courts. Ayuso Mercedes, Santolino Miguel, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. Calculating reserves for bodily injury (BI) claims is a main task for automobile insurers. Traditionally, insurance companies either assess individual claims directly from their own medical reports, or estimate aggregate claims by means of statistical methods (e.g. IBNR). Both techniques perform only partially well and may produce unsatisfactory solutions. In this paper we estimate the individual monetary compensation for automobile personal damages awarded in judicial sentences. We take into account that the insurer gradually expands the information regarding
its BI claim records during the life of the claim. Autocorrelation and heteroscedasticity are also considered. The former occurs when more than one claimant (BI victim) is involved in the judicial sentence, and the latter is due to particular characteristics of medical valuations. Using a real dataset from a Spanish insurance company, we show that the application of alternative econometric models leads to more accurate predictions of the compensation payment awarded in the judicial sentence, compared to the direct insurer's assessment. Keywords: Heteroscedasticity, Medical valuation. 393111 (IB20) Stochastic Modelling of Income Protection Insurance of Breast Cancer. Lu Baopeng, Macdonald Angus, Waters Howard, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. Income Protection (IP) insurance is designed to cover a proportion of the applicant's salary should illness or disability prevent them from working. A standard IP model was fully described and discussed in Continuous Mortality Investigation Reports (C.M.I.R.) 12. In the current work, we expand the basic model to estimate various insurance values for women at risk of breast cancer. We establish a semi-Markov model of the whole life history of a woman with breast cancer, in which common breast cancer events such as diagnosis, treatment, recovery and recurrence are represented explicitly as transitions between states. By incorporating the onset of other sicknesses, the model fully depicts the necessary elements for IP business. Premium ratings and the comparison with C.M.I.R. 12 are performed for the general population, and extra premiums are estimated for women with genetic mutations. Finally, the costs of adverse selection are estimated under different moratoria on the use of genetic information. Keywords: Income protection, Breast cancer, Semi-Markov model.
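To make the multi-state idea behind such models concrete, here is a minimal Python sketch of a discrete-time three-state model (healthy, sick, dead) that computes the expected present value of a disability annuity of 1 per year while sick and the level net premium payable while healthy. The transition probabilities, interest rate and term are invented for illustration; the paper's continuous-time semi-Markov, breast-cancer-specific structure is not reproduced.

    import numpy as np

    # States: 0 = healthy, 1 = sick, 2 = dead. Hypothetical one-year transition matrix.
    P = np.array([
        [0.92, 0.06, 0.02],
        [0.35, 0.60, 0.05],
        [0.00, 0.00, 1.00],
    ])
    v = 1.0 / 1.03          # annual discount factor at a hypothetical 3% interest rate
    n = 30                  # policy term in years

    def epv(cashflow_by_state, start_state=0):
        """EPV of a cash flow of cashflow_by_state[s] paid at each year-start in state s."""
        occupancy = np.zeros(3)
        occupancy[start_state] = 1.0
        total = 0.0
        for t in range(n):
            total += (v ** t) * occupancy @ cashflow_by_state
            occupancy = occupancy @ P            # one-year state occupancy update
        return total

    benefit_epv = epv(np.array([0.0, 1.0, 0.0]))   # 1 per year while sick
    annuity_epv = epv(np.array([1.0, 0.0, 0.0]))   # 1 per year while healthy (premium-paying)
    print("net level premium per unit sickness benefit:", round(benefit_epv / annuity_epv, 4))

In a semi-Markov version of the same calculation the transition probabilities would also depend on the time already spent in the current state, which is the refinement the abstract relies on to represent diagnosis, treatment, recovery and recurrence realistically.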

IB41: THIRD PARTY LIABILITY 393112 (IB41) Bonus-Malus System as a Competition Tool. Cieslik Barbara, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. Bonus-malus systems (BMSs) are used in many
countries all over the world as a simple method of modifying the individual drivers' premiums depending on their claim history. There are many countries, including Poland, where each company can construct its own BMS, which can, in this situation, be treated as a competition tool. The European Commission, in cases C-346/02 and C-347/02, argued that the obligation for all companies in a given country to use the same bonus-malus system infringed the principle of freedom to set premium rates, established by directive 92/49. In this situation new tools for analysing the interactions between various systems are needed. The traditional literature on BMSs offers almost no tools for analysing the coexistence of different merit rating systems. Eva Fels (1995) showed that the coexistence of different BMSs can lead to an unfavourable reallocation of risks between companies. Krupa Subramanian (later K.S. Viswanathan) further explored this field (1998, 2005 with J. Lemaire). Our paper follows this direction of research and introduces a new concept for modelling competition between various systems. The objective of this paper is to present a model that provides grounds for analysing the coexistence of BMSs that differ in the number of classes, transition rules and premium coefficient vector in the automobile insurance market. Traditional probabilistic methods are used to describe and analyse BMSs. For the purpose of modelling the coexistence of two (or more) different BMSs on a market with the use of Markov chain theory, a two- (or more-) index probability space S* and an associated matrix M* are defined. The model facilitates predictions of the market share, financial situation and portfolio quality of all insurers, depending on their decisions about the basic premium and on the insureds' decisions about moving to another insurer. Keywords: Bonus-malus, Merit rating, Rating freedom, Automobile insurance market and competition. 393113 (IB41) The calculation of Aggregate Loss distribution in the Third Party Motor Liability insurance. Simulation or Fast Fourier Transform? Cerchiara Rocco Roberto, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. The analysis of Bonus-Malus Systems (BMS) has been developed both by Markov Chain Theory (see for example Lemaire [1995]) and by recursive procedures (Dufresne F. [1988] and Picech et al. [1996]).
In this paper the calculation of the Aggregate Loss distribution in Third Party Motor Liability insurance (Italian market) has been developed by using and comparing two approaches from Collective Risk Theory: Monte Carlo simulation (see for example Beard et al. [1994]) and the Fast Fourier Transform (as shown in Robertson [1983] and Wang [1998]), on a one-year time horizon. Both models have been developed using Matlab software. For a portfolio of policies with homogeneous a priori characteristics, the approach is based on the use of a variable claim frequency for each BMS class, derived from the claim numbers observed in the Italian market in 2003 (source: ANIA, with reference only to "regular" passenger cars). These frequencies and the Poisson distribution are used to model the claim number distribution. For the claim size random variable, the Lognormal distribution is used, fitted to the ANIA statistics. Keywords: Bonus-Malus, Simulation.
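A minimal Python sketch of the two approaches the abstract compares, for a compound Poisson aggregate loss with a discretised Lognormal severity. The claim frequency and severity parameters are invented, no calibration to the ANIA data is attempted, and the discretisation step and grid size are chosen only so that the example runs quickly.

    import numpy as np
    from math import erf, sqrt, log

    rng = np.random.default_rng(2)

    lam_total = 800.0            # hypothetical expected number of claims in the portfolio
    mu, sigma = 8.0, 1.2         # hypothetical Lognormal severity parameters

    def severity_cdf(x):
        return 0.5 * (1.0 + erf((log(x) - mu) / (sigma * sqrt(2.0)))) if x > 0 else 0.0

    # --- Monte Carlo simulation of the aggregate loss --------------------------
    def aggregate_mc(n_sims=10_000):
        totals = np.empty(n_sims)
        for i in range(n_sims):
            n_claims = rng.poisson(lam_total)
            totals[i] = rng.lognormal(mu, sigma, size=n_claims).sum()
        return totals

    # --- Fast Fourier Transform on a discretised severity ----------------------
    def aggregate_fft(step=200.0, n_points=2 ** 16):
        # discretise the severity: mass of cell k is F((k + 1/2)h) - F((k - 1/2)h)
        uppers = (np.arange(n_points) + 0.5) * step
        cdf = np.array([severity_cdf(u) for u in uppers])
        sev_pmf = np.diff(np.insert(cdf, 0, 0.0))
        phi = np.fft.fft(sev_pmf)
        # compound Poisson transform: exp(lambda * (phi(severity) - 1))
        agg_pmf = np.fft.ifft(np.exp(lam_total * (phi - 1.0))).real
        return np.arange(n_points) * step, np.clip(agg_pmf, 0.0, None)

    mc = aggregate_mc()
    grid, pmf = aggregate_fft()
    q99_mc = float(np.quantile(mc, 0.99))
    q99_fft = float(grid[np.searchsorted(np.cumsum(pmf), 0.99)])
    print(f"99% quantile of aggregate loss  MC: {q99_mc:,.0f}   FFT: {q99_fft:,.0f}")

The trade-off the abstract investigates is visible even in this toy version: the FFT gives the whole distribution at once for a fixed discretisation, while Monte Carlo handles arbitrary severity models directly but needs many simulations to pin down tail quantiles.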

IB81: PENSION 393114 (IB81) Adjusting the contribution level along with the fluctuations in the valuation rate of investment return. Economou Maria, Haberman Steven, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. Two parameters that have an important effect on the development of the fund for a defined benefit pension scheme are the annual investment returns earned and the amortization level of the Unfunded Liability. In real life, the fund fluctuates above and below its target level; thus, the actuary has to deal with these fluctuations in order to exclude the following cases: (1) the fund ultimately runs out of the assets that pay benefits, (2) the fund grows exponentially out of control. The parameter "lambda", which determines the level of amortization of the Unfunded Liability, is often treated as though it were constant over a long period. This restricts our flexibility to either strengthen or weaken the fund value at certain time points according to the gains or losses that have been experienced. We therefore propose to model "lambda" as a variable. We propose to tackle this problem by treating the two elements, the cost amortisation variable "lambda" and the rate of investment return "i", as correlated random variables. This leads to the
question of determining optimal methods of funding that will reduce the variability of both contributions and fund levels by controlling the spreading of surpluses and deficiencies within a specific time frame. Keywords: Spread parameter, Defined benefit scheme, Optimal spread period. 393115 (IB81) Dynamic Pension Funding Models: Deterministic and Stochastic Approaches. Khalil Dalia, Haberman Steven, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. Dynamic pension funding models for a defined benefit scheme are set up using deterministic and stochastic approaches on a discrete time basis for a specified time horizon. This work is an extension of the earlier work of Haberman and Sung (1994). Two cost-induced performance indexes are formulated within the context of dynamic programming and risk control theory. The optimal contribution rate is derived by minimising the two main risks facing defined benefit pension schemes: the contribution rate risk and the solvency risk. A cross-product term, first introduced by Cairns (2000), is included in the performance index. Adding this term to the performance index means that the interests of the employer and the members are considered simultaneously rather than separately. Thus, instead of weighing up the conflicting interests of the employer, who is seeking the stability of the contribution rate on one side, and the trustees and the members, who are seeking the security of the pension fund on the other side, these interests can be viewed as mutual and dependent. Since, on discontinuance, the employer is responsible for compensating his employees and members and meeting their rights, he should be keen to keep his company and the scheme solvent. On the other side, the members should seek the stability of the contribution rate in order to increase the probability that the required contributions will be paid at the specified times. Computational experiments are carried out to reveal the underlying properties of the stochastic model. The behaviour of both the expected optimal contribution rate and the fund level is examined by applying a sensitivity analysis with different values of the parameters of the model. We focus on examining the effect of changing the weighting risk factors of the contribution rate risk and the solvency risk with different values of the weighting risk factor of the cross-product term. The
results show different behaviours of the expected optimal contribution rate and fund level according to the different weights given to the risk factors in the model. Keywords: Pension funding, Defined benefit. 393116 (IB81) The effects on the funding and contribution variance using the Modified Spreading Model. Haberman Steven, Owadally M. Iqbal, Gomez Denise, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. The present work analyses the effects on the funding and the contribution rate when the only source of unpredictable experience is volatile rates of return. The funding of a defined benefit scheme has been one of the main concerns of the sponsors of a pension plan. A volatile fund is not desirable, as it might imply volatile contributions. It is shown that the modified spreading model, developed by Owadally (2003), eliminates gains and losses, arising from favourable or unfavourable experience, by paying off a specific amount of the unfunded liability over time. A comparison of the modified spreading model is made with the spreading model developed by Dufresne (1988). The modified spreading model is shown to be more efficient than the spreading model, as it minimises the variance of the fund and the contribution and leads to a smoother fund and contribution rate. Real investment rates of return of the pension fund are assumed to be represented by two stochastic models: a bootstrap sampling method using historical data, and the IID special case of the autoregressive model. The bootstrap sampling analysis considers two different assets, a high-risk asset given by UK equities and a low-risk asset given by gilts. Also, six different asset allocations, three different periods of time over which to project the value of the fund, and three scenarios for the actuarial assumptions on the rates of return are considered and analysed. The basis of our work is found mainly in Owadally and Haberman (2004) and Owadally (2003). Keywords: Modified spreading, Bootstrap.
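To illustrate the mechanics being compared, the Python sketch below projects a defined benefit fund under the classical spreading recursion (surpluses and deficiencies amortised over a fixed spread period) with bootstrapped investment returns, and reports the variability of the fund and of the contribution. All figures are invented, and the modified spreading rule of Owadally (2003) is deliberately not reproduced; only the standard spreading of the unfunded liability is shown.

    import numpy as np

    rng = np.random.default_rng(3)

    # Hypothetical historical real annual returns to bootstrap from.
    hist_returns = rng.normal(0.05, 0.12, size=40)

    AL, NC, B = 100.0, 3.0, 8.0     # actuarial liability, normal cost, annual benefit outgo
    i_val = 0.05                    # valuation rate of return
    m = 5                           # spread period in years
    a_m = (1 - (1 + i_val) ** -m) / (1 - (1 + i_val) ** -1)   # annuity factor used to spread

    def project(n_years=50, n_sims=2_000):
        funds = np.full(n_sims, AL)            # start fully funded
        fund_paths, contrib_paths = [], []
        for _ in range(n_years):
            unfunded = AL - funds
            contribs = NC + unfunded / a_m     # spreading of the unfunded liability
            returns = rng.choice(hist_returns, size=n_sims, replace=True)   # bootstrap
            funds = (funds + contribs - B) * (1.0 + returns)
            fund_paths.append(funds.copy())
            contrib_paths.append(contribs.copy())
        return np.array(fund_paths), np.array(contrib_paths)

    fund_paths, contrib_paths = project()
    print("long-run fund:         mean %.1f  sd %.1f" % (fund_paths[-1].mean(), fund_paths[-1].std()))
    print("long-run contribution: mean %.2f  sd %.2f" % (contrib_paths[-1].mean(), contrib_paths[-1].std()))

Re-running the projection with different spread periods m, or with a modified rule for amortising gains and losses, is the kind of comparison of fund and contribution variance that the abstract reports.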

IB90: REINSURANCE 393117 (IB90) Experience and Exposure Rating for Property per Risk Excess of Loss Reinsurance Revisited. Desmedt Stijn, Walhin Jean-François, Proceedings of the 10th Congress on Insurance Mathematics and
Economics, Leuven, July 2006. Experience and exposure rating are traditionally considered to be independent but complementary methods for pricing property per risk excess of loss reinsurance. Strengths and limitations of these techniques are well known. In practice, both methods often lead to quite different estimates. In this paper, we show that the limitations of traditional experience rating can be overcome by taking into account historical profile information by means of exposure curves. For pricing unused or rarely used capacity, we propose to use exposure rating, calibrated on the experience of a working layer. We compare the methods presented with more traditional methods based on the information which is generally available to the reinsurer. Keywords: Excess-of-loss, Reinsurance. 393118 (IB90) On Excess-Loss Reinsurance. Teugels Jozef L., Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. We deal with an excess-of-loss reinsurance situation with both a lower and an upper bound on the individual claims. For the number of such claims we base our considerations upon a transparent use of point processes. For the size of the reinsured amount we make a distinction between light and heavy claim sizes. Keywords: Excess-loss reinsurance. 393119 (IB90) Optimal retention levels in dynamic reinsurance markets. Biffis Enrico, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. We consider the problem of determining optimal retention levels for insurers willing to mitigate their risk exposure by purchasing proportional reinsurance. We revisit De Finetti's classical results in continuous time and allow reinsurance premiums and retention levels to change dynamically in response to claims experience and market performance. We also take up some ideas from dynamic reinsurance markets to intertwine De Finetti's work and Markowitz's mean-variance portfolio theory. Keywords: Reinsurance, Mean-variance. 393120 (IB90) The optimal reinsurance policy in terms of the adjustment coefficient criterion.
Guerra Manuel, Centeno Maria de Lourdes, Proceedings of the 10th Congress on Insurance Mathematics and Economics, Leuven, July 2006. This paper is concerned with the optimal form of reinsurance from the ceding company's point of view, when the cedent seeks to maximize the adjustment coefficient of the retained risk. We start by studying the problem under the assumption that the premium calculation principle is a convex functional and that some other quite general conditions are fulfilled. We show that the problem has at most one solution and provide a necessary condition for the maximizer. We apply the results to the cases where the reinsurance premium is calculated according to the expected value principle, the standard deviation principle and the variance principle.
We show that when the reinsurance premium is calculated according to the expected value principle the optimal form of reinsurance is a stop loss contract. This is a generalization of the result obtained by Hesselager (SAJ, 1990). In his paper he proved the result under the constraint that the feasible reinsurance solutions had a fixed expected value. We give the optimal solution for the other two cases. In those cases, for an insurance loss of y, the optimal reinsurance arrangement Z(y) satisfies RZ(y) = max(0, Ry−ln(Z(y)+A)+B), where R is the adjustment coefficient and A and B are real constants. In both cases, the values of R, A and B can be obtained by simple mathematical programming techniques. Keywords: Optimal reinsurance.
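As a numerical companion to the expected value principle case, the following Python sketch computes the adjustment coefficient of the risk retained under a stop-loss treaty with retention M, for a compound Poisson portfolio with exponential claims and loaded premiums. All parameter values are hypothetical, and the sketch does not implement the paper's general convex-premium optimisation; it only solves the classical equation lambda*(E[exp(R*min(X,M))] - 1) = c_net*R for a given retention.

    from math import exp

    lam     = 10.0    # hypothetical Poisson claim arrival rate
    beta    = 1.0     # exponential claim sizes with mean 1/beta
    c_gross = 12.0    # gross premium income rate (20% loading on expected aggregate claims)
    theta_r = 0.4     # reinsurer's loading under the expected value principle

    def mgf_retained(r, M):
        """E[exp(r * min(X, M))] for X ~ Exp(beta)."""
        a = beta - r
        if abs(a) < 1e-12:
            return beta * M + 1.0
        return beta / a * (1.0 - exp(-a * M)) + exp(-a * M)

    def adjustment_coefficient(M):
        """Positive root R of lam*(E[exp(R*min(X,M))] - 1) = c_net*R for retention M."""
        c_net = c_gross - (1.0 + theta_r) * lam * exp(-beta * M) / beta  # net of stop-loss premium
        if c_net <= lam * (1.0 - exp(-beta * M)) / beta:                 # net profit condition fails
            return float("nan")
        g = lambda r: lam * (mgf_retained(r, M) - 1.0) - c_net * r
        lo, hi = 1e-9, 1e-3
        while g(hi) < 0.0:                  # expand the bracket until g changes sign
            lo, hi = hi, hi * 2.0
        for _ in range(200):                # bisection
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if g(mid) < 0.0 else (lo, mid)
        return 0.5 * (lo + hi)

    for M in (1.0, 1.5, 2.0, 3.0, 5.0):
        print(f"retention M = {M:>3}: adjustment coefficient R = {adjustment_coefficient(M):.4f}")

Scanning the printed values over M gives a crude picture of how the retained risk's adjustment coefficient responds to the retention level in this stylised setting; the paper itself characterises the optimal treaty analytically rather than by such a grid search.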