Risk measures and return performance: A critical approach




European Journal of Operational Research 155 (2004) 268–275 www.elsevier.com/locate/dsw

Piera Mazzoleni, Università Cattolica del Sacro Cuore, Largo Gemelli 1, 20123 Milan, Italy

Abstract

When stating a risk measure, attention has to be concentrated on the amount of money required to guarantee the feasibility of the final value of the position under examination. But this has to be done in relative terms: with respect to a lower bounding threshold if the aim is safety, and with respect to a stimulating target if the objective is to improve profitability. Moreover, the attitudes towards potential losses on the demand and offer sides have to be modelled in different ways. The aim of this paper is to review the current literature and to show how one-side and intertemporal elements have to be explicitly included in the definition of a risk measure, so as to give risk managers a flexible policy instrument. A new one-side dynamic measure is then defined according to suitable coherence conditions.

© 2003 Elsevier B.V. All rights reserved.

Keywords: Coherent risk measures; Subjective risk measures; Lower partial moments; Dynamic method

1. Introduction

The subjective approach is certainly very appealing, and it is more and more frequently applied in decision theory. A wider use should be promoted in risk theory, and in particular in the process of risk measurement, in order to optimize investment and insurance strategies. The application of either a utility function to the decision variables or a distortion function to the probability distribution is the analytical tool used to formalize the different viewpoints of the decision makers.

E-mail address: [email protected] (P. Mazzoleni).

Another way to approach the problem is to refer the risk definition to a loss in subjective terms, according to some preassigned target. Such a target may be either a liability to be actually settled at a fixed date $T$ or some prudential threshold not to be violated; correspondingly, the risk problem is set on a finite horizon $[0, T]$. It is especially the development of one-side risk measures by Fishburn (1977) that introduces an explicit reference to subjective targets and thresholds. We therefore face several approaches to setting the subjective point of view in risk measures. Section 2 reviews some of the risk measures proposed in the literature that, in the opinion of the author, emphasize the subjective approach. The same process is followed in Section 3 for the dynamic generalization. In this framework, a further aspect can be considered: it is possible to



measure risk with respect to the final value (Cvitanik and Karatzas, 1999); alternatively, one might compare the results for the single periods (Siu and Yang, 1999). But it is worth stating a link between subsequent periods and giving a recursive definition of risk measures (Wang, 1999). The aim of this paper is to develop a model of one-side dynamic risk (Section 4), to optimize the behaviour of risk managers and to guarantee prudence requirements for the saving sector in a recursive framework. The new measure will be proved to satisfy suitable coherence conditions.


2. The subjective approach to static risk measures

Several risk measures have been proposed in the literature, from different viewpoints. A pure economic definition of risk is given, under no assumption of market completeness, in Artzner et al. (1999): it is not related to changes in values between two dates, but rather refers to the extra cash an investor requires in order to have sufficient current net worth to support the maximum expected loss. Suppose a reference instrument with return $r$ is assigned and consider the risky position $\tilde{x}$, belonging to a preassigned feasible set $G$, as in the mentioned paper. A risk measure is the expected value of the margin $-\tilde{x}/r$, optimized with respect to the family $\mathcal{P}$ of probability measures:

$$\rho(\tilde{x}) = \sup\{E_P[-\tilde{x}/r] : P \in \mathcal{P}\}.$$

Such a measure applies an optimizing procedure, thus taking the objectives of the decision maker explicitly into account, and it introduces a subjective element represented by the prudent investment to be used as a reference. But the main contribution of the above mentioned paper by Artzner et al. is the set of properties giving the coherence foundation to risk measures: we recall subadditivity, scale and translation invariance. Measures that are very common in practice apply approximation formulas to $\rho(\tilde{x})$ under a specified probability $P$: one possible approximation is based on variance, $\rho(\tilde{x}) = -E[\tilde{x}] + a\,\sigma_P(\tilde{x})$; when we consider semi-variance we write $\rho(\tilde{x}) = -E[\tilde{x}] + \sigma_P\big((E_P[\tilde{x}] - \tilde{x})^+\big)$. But both these measures fail the coherence conditions; in particular, they are not subadditive. Therefore, new lines of development started from the Artzner et al. leading idea. Since we are mainly interested in the introduction of subjective points of view, we mention only the following measures, which are particularly relevant in the author's opinion and which keep the structure of an expected value of suitable random variables (a numerical sketch of both follows the list):

• the mean excess function (Embrechts et al., 1997), with respect to threshold $s$,

$$e(s) = E[\,|\tilde{x} - s| : \tilde{x} > s\,];$$

• the distorted measure (Wirch and Hardy, 1999). The subjective point of view is represented by an increasing function $g : [0, 1] \to [0, 1]$, with $g(0) = 0$ and $g(1) = 1$, which is usually supposed to be also concave, and it is applied to modify the probability distribution:

$$E_g[\tilde{x}] = \int_0^{+\infty} g[P(\tilde{x} > x)]\,dx.$$
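As a purely numerical illustration of these two measures, the following Python sketch estimates the mean excess function and the distorted expectation from a simulated sample of a non-negative position. The lognormal sample, the power distortion $g(p) = p^{0.7}$, the threshold and the helper names are illustrative assumptions introduced only for this sketch, not constructions taken from the cited papers.

import numpy as np

def mean_excess(sample, s):
    # Empirical mean excess e(s): average of (x - s) over the observations with x > s.
    exceed = sample[sample > s]
    return float(np.mean(exceed - s)) if exceed.size else float("nan")

def distorted_expectation(sample, g):
    # Empirical counterpart of E_g[x] = int_0^inf g(P(x > t)) dt for a
    # non-negative sample: integrate g of the empirical survival function,
    # which is a step function between consecutive order statistics.
    x = np.sort(np.asarray(sample, dtype=float))
    n = x.size
    prev, total = 0.0, 0.0
    for i, xi in enumerate(x):
        surv = (n - i) / n            # empirical P(X > t) on the interval [prev, xi)
        total += g(surv) * (xi - prev)
        prev = xi
    return total

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    losses = rng.lognormal(mean=0.0, sigma=0.5, size=10_000)   # illustrative data
    g = lambda p: p ** 0.7            # concave distortion with g(0) = 0, g(1) = 1
    print("plain mean        :", losses.mean())
    print("mean excess e(1.5):", mean_excess(losses, 1.5))
    print("distorted mean    :", distorted_expectation(losses, g))

Since $g$ is concave, $g(p) \ge p$ on $[0, 1]$ and the distorted mean exceeds the plain mean, which is the prudential loading effect such distortion measures are meant to produce.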

A wide discussion has been developed to compare complete and one-side risk measures. The subjective approach naturally leads to only either downside or upside measures, according to the different policy roles. Investments are selected by applying suitable objectives, and choices are guided by the introduction of leading targets. The first examples of one-side risk measures trace back to Markowitz himself (1959), who was not completely persuaded by the use of variance and suggested considering semivariance, which measures the downside variability around the expected value. But the main contribution is the introduction of the Lower Partial Moments in the risk measures. Bawa (1975) defines the risk that the investment $R_t$ does not reach the preassigned target $s$, at degree $a$, as

$$\mathrm{LPM}(a, s) = \frac{1}{K}\sum_{t=1}^{K} \max[0,\, s - R_t]^{a}.$$
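A minimal empirical sketch of this definition follows. The sample returns, the target $s = 0$ and the degrees are illustrative choices, and the power-$a$ penalty is the reading of "degree $a$" consistent with the Fishburn index $(s - x)^a$ discussed below.

import numpy as np

def lower_partial_moment(returns, target, degree):
    # Empirical LPM(a, s) = (1/K) * sum_t max(0, s - R_t)^a.
    shortfall = np.maximum(0.0, target - np.asarray(returns, dtype=float))
    return float(np.mean(shortfall ** degree))

if __name__ == "__main__":
    R = np.array([0.04, -0.02, 0.01, -0.05, 0.03, 0.00, -0.01, 0.02])  # illustrative returns
    s = 0.0                                    # target: do not fall below a 0% return
    for a in (0.5, 1.0, 2.0):
        print(f"LPM(a={a}, s={s}): {lower_partial_moment(R, s, a):.6f}")
    # a = 1 gives the expected shortfall below the target, a = 2 the target
    # semivariance; low degrees emphasize how often the target is missed,
    # high degrees how badly it is missed.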



A general theory of downside risk has been developed by Fishburn (1977), who reconciles this theory both with the stochastic dominance approach and with utility theory. A downside risk with respect to target $s$ is defined as

$$\rho(F) = \int_{-\infty}^{s} u(s - x)\,dF(x),$$

in which $u(y) \ge 0$ for $y \ge 0$, $u(0) = 0$, is a nondecreasing function of $y$ representing the subjective attitude, and the domain is bounded by $s$. A mean-risk utility model is set if and only if there is a real-valued function $U$ such that distribution $F_1$ is preferred to $F_2$ if and only if $U(E[F_1], \rho(F_1)) > U(E[F_2], \rho(F_2))$, with $U$ increasing in the expected value and decreasing in risk. Indeed, a decision maker's preferences may satisfy a mean-risk utility model without even satisfying the von Neumann and Morgenstern axioms for expected utility. But the wide use of such a model has stimulated Fishburn to state the equivalence of this approach with the classical one,

$$\int_{-\infty}^{+\infty} u(x)\,dF_1(x) > \int_{-\infty}^{+\infty} u(x)\,dF_2(x),$$

when referred to the particular utility index

$$u(x) = x \quad \text{for } x \ge s; \qquad u(x) = x - k\,u(s - x) \quad \text{for } x \le s;$$

or more simply $u(s - x) = (s - x)^a$. Low values of $a$ strengthen the probability of reaching the threshold, high values give greater importance to the deviation amount. A three-parameter model has also been developed, by separating domain and target values: the random amount of wealth $\tilde{w}$ under distribution function $F$ faces risk that is measured by

$$L(w; k, A) = \left[\int_{-\infty}^{A} |\tilde{w} - w_0|^{k}\,dF(w)\right]^{1/k},$$

where $w_0$ is the target wealth, $A$ the deviation range, and $k$ the parameter weighting the deviations. Finally, we should like to emphasize the role of one-side risk measures:

• they are goal oriented and compare the behaviour with suitable reference levels;
• they treat separately the upper and lower sides of risk (see the sketch after this list);
• they allow us to distinguish targets and value ranges;
• they are coherent with the utility approach, but they go beyond the normal distribution and quadratic utility assumptions;
• they take into account the subjective approach of decision makers.
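To make the second point concrete, here is a small numerical sketch; the two return samples and the helper name are illustrative assumptions. The samples share the same mean and standard deviation but mirror each other's skew, so a symmetric dispersion measure cannot separate them, while the second-order lower partial moment around the zero target does.

import numpy as np

def lpm2(returns, target=0.0):
    # Second-order lower partial moment (target semivariance).
    return float(np.mean(np.maximum(0.0, target - np.asarray(returns)) ** 2))

r_down = np.array([-0.08, 0.01, 0.02, 0.02, 0.03])   # rare large loss
r_up = -r_down                                        # mirror image: rare large gain

for name, r in (("downside-skewed", r_down), ("upside-skewed", r_up)):
    print(f"{name:15s} mean={r.mean():+.4f} std={r.std():.4f} LPM(2,0)={lpm2(r):.5f}")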

3. The dynamic approach to risk measures

Up to now we have measured risk in a one-period perspective, but investments and insurance contracts usually cover a longer time horizon, and time has to be treated explicitly in modelling risk definition and measurement. Cvitanik and Karatzas (1999) set the model for risk measurement in an active and passive framework, still keeping the downside perspective of Artzner et al. Consider a finite time interval $[0, T]$ and suppose that the target is a liability $C$ with maturity $T$ which the agent cannot afford to cover at time $t = 0$. In a complete financial market under the no-arbitrage assumption, the expectation $E$ is calculated with respect to the unique risk-neutral equivalent martingale measure. An investment strategy $z$ has to be chosen in a predefined feasible set $Z(x)$, depending on the initial capital $x$, so that its wealth $\tilde{x}^{x,z}(T)$ is devoted to hedging the liability $C$. Under no perfect matching conditions, the liability $C$ is a risk to be measured with respect to a reference risk-free prudential instrument with price $S_0(T)$. If the model depends on a specified probability distribution, the trading strategies $z$ have to be chosen in order to minimize the expected value of the end-of-period discounted net loss,

$$\rho_C(\tilde{x}) = \inf_{z(\cdot) \in Z(x)} E_0\big(\big[(C - \tilde{x}^{x,z}(T))/S_0(T)\big]^+\big),$$

$x$ being the initial capital. Cvitanik and Karatzas specify the feasible set $Z(x)$ analytically by dynamic constraints and solve the optimal control problem: we refer to the above mentioned paper


for the details, but we should like to emphasize the importance of the optimizing approach. This measure verifies the coherence conditions, provided that the investment strategy $\tilde{x}$ and the liability $C$ vary in the same proportion. Suppose that uncertainty is represented by a family of probability distribution functions $F \in \mathcal{P}$; then an interval of possible one-side risk measures is set: a lower risk measure

$$\rho_m(\tilde{x}) = \sup_{F \in \mathcal{P}}\ \inf_{z(\cdot) \in Z(x)} E\big(\big[(C - \tilde{x}^{x,z}(T))/S_0(T)\big]^+\big),$$

which corresponds to the maximal risk; an upper risk measure

$$\rho_M(\tilde{x}) = \inf_{z(\cdot) \in Z(x)}\ \sup_{F \in \mathcal{P}} E\big(\big[(C - \tilde{x}^{x,z}(T))/S_0(T)\big]^+\big),$$

which corresponds to the attempt of the agent to contain the worst losses.

Siu and Yang (1999) take into account both the dynamic development and the subjective point of view. Compared with the Cvitanik and Karatzas model, the horizon $[0, T]$ is no longer considered as a whole, but it is subdivided into periods $[n, n+1]$, $n = 0, \ldots, N$, and $\Delta\tilde{x}_{n+1}$ denotes the change of the market value of a portfolio on such an interval. The model by Siu and Yang follows the Artzner et al. proposal, but each period is treated separately. A group of traders $V$ is considered to represent the subjective point of view: each member $v$ of the group $V$ chooses an a priori probability distribution function $F^v$ in the set $\mathcal{P}_n$. Then, according to Bayesian theory, the information included in the market observations $\Delta x_n$ is added, leading to the conditional expected value. The risk measure is obtained by taking the supremum over the family of probability distributions period by period,

$$\rho_n(\Delta\tilde{x}_{n+1} \mid \Delta x_n) = \sup\{E_P[-\Delta\tilde{x}_{n+1}/r_n \mid \Delta x_n] : F^v \in \mathcal{P}_n,\ v \in V\}, \qquad n = 0, \ldots, N,$$

where $r_n$ is the return of the reference instrument.

The two previous models consider time either as an end-of-period balance or in a period-by-period perspective. But an intertemporal measure of risk has to take the dynamic development explicitly into account. Therefore, we briefly review Wang's model (1999), due to its truly dynamic nature. Indeed, by applying the recursive approach under the assumption of time separability of risk, the author proves that the recursive expression to measure risk on a two-period horizon can be expressed as a generalized mean. Let $X$ be the state space: for any risk profile $(x, d)$, where $x$ describes the current position and $d$ summarizes all the future forecasted risk, the risk measure $\tilde{V}(d) : X \to \mathbb{R}$ unifies the current and the future position:

$$\tilde{V}(d)(x) = V[\tilde{x}(x), \tilde{d}_1(x)].$$

As a first step, the certainty equivalent of the future risk is calculated and the multiperiod risk is transferred into a one-period equivalent,

$$V(d) = l(\tilde{V}(d)),$$

by a function $l$ such that

$$l(x) = x \quad \text{for any } x \in \mathbb{R},$$

$$l(\tilde{x}) \ge l(\tilde{y}) \quad \text{if } \tilde{x} \ge \tilde{y}.$$

As a second step, a time aggregator $W(x, n)$, $W : \mathbb{R} \times \mathbb{R} \to \mathbb{R}$, represented by a continuous and strictly monotone function, is applied:

$$V(x, d) = W(x, l(\tilde{V}(d))),$$

so that the usual static coherence conditions are verified. But additional conditions have to be added in order to guarantee intertemporal coherence:

• Future independence. If the inequality $V(x_1, x_2, D) \ge V(x_1', x_2', D)$ holds for initial positions $(x_1, x_2)$, $(x_1', x_2')$ and deterministic losses $D$, the same inequality holds for new losses $D'$: $V(x_1, x_2, D') \ge V(x_1', x_2', D')$. Then the risk measure of the current position is independent of the future losses.

• Timing indifference. Such a property says that two future risks $d$ and $d'$ with the same collection of worse-than sets will have the same risk measure, $V(d) = V(d')$.

The future independence assumption characterizes $W$ as

$$W(x, n) = u^{-1}(u(x) + b\,u(n)),$$



$b$ being the discount factor; the certainty equivalent function takes its analytical form from the hypothesis of timing indifference,

$$l[\tilde{x}] = (w \circ u)^{-1}\left(\int w \circ u(\tilde{x})\,dm\right),$$

where $u$ and $w$ are strictly increasing functions with $u(0) = w(0) = 0$, and $m$ is a monotone set function. This way it is possible to give the analytical expression for the risk measure:

$$V(x, d) = u^{-1}\left(u(x) + b\,w^{-1}\left(\int w[u(\tilde{V}(d))]\,dm\right)\right).$$
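This two-step construction can be worked through numerically. In the Python sketch below, the exponential-type $u$, the power $w$, the distortion $g$ defining the monotone set function $m(A) = g(P(A))$, and the discrete loss scenarios are illustrative assumptions made here for concreteness; they are not choices taken from Wang's paper.

import numpy as np

b = 0.95                                   # discount factor (assumption)
u = lambda t: 1.0 - np.exp(-t)             # strictly increasing, u(0) = 0 (assumption)
u_inv = lambda v: -np.log(1.0 - v)
w = lambda t: t ** 0.8                     # strictly increasing, w(0) = 0 (assumption)
w_inv = lambda v: v ** (1.0 / 0.8)
g = lambda p: p ** 0.7                     # distortion defining m(A) = g(P(A)) (assumption)

def choquet(values, probs):
    # Discrete integral of non-negative values with respect to the monotone
    # set function m = g o P: sum over the decreasing rearrangement.
    order = np.argsort(values)[::-1]
    v, p = np.asarray(values)[order], np.asarray(probs)[order]
    m = g(np.minimum(np.cumsum(p), 1.0))   # m of the sets of the i largest outcomes
    weights = np.diff(np.concatenate(([0.0], m)))
    return float(np.dot(weights, v))

def certainty_equivalent(values, probs):
    # l[x~] = (w o u)^{-1}( integral of (w o u)(x~) dm ).
    return u_inv(w_inv(choquet(w(u(np.asarray(values))), probs)))

def two_period_risk(x_now, future_values, future_probs):
    # V(x, d) = u^{-1}( u(x) + b * w^{-1}( integral of w[u(V~(d))] dm ) ).
    inner = w_inv(choquet(w(u(np.asarray(future_values))), future_probs))
    return u_inv(u(x_now) + b * inner)

if __name__ == "__main__":
    future = [0.0, 0.3, 0.6]               # equally likely future loss scenarios
    probs = [1 / 3, 1 / 3, 1 / 3]
    print("certainty equivalent of future risk:", round(certainty_equivalent(future, probs), 4))
    print("two-period risk V(x, d) for x = 0.2:", round(two_period_risk(0.2, future, probs), 4))

The choquet helper evaluates the integral with respect to the monotone set function by summing over the decreasing rearrangement of its argument, which is the standard discrete form of such integrals.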

4. A dynamic one-side risk measure

Risk management of both financial and insurance portfolios states moving targets to stimulate efficiency and moving thresholds to prevent bankruptcy: indeed, very aggressive investment policies exhibit higher risks and have to be controlled. The value-at-risk is measured by the cumulative difference between the current and the target process, under a truncated expected value. Two methods can be followed to find it:

• the no-arbitrage approach is a theoretical equilibrium one, and the equivalent martingale measure is applied to give the risk-neutral probability distribution for the expected value;

• a more realistic approach optimizes the decision process with respect to a whole family of probability distributions.

A more direct approach is the analytical one, which takes both the position and the reference level as functions of a variable $y$ representing the market risk factors. We are going to develop a new risk measure to emphasize the different roles of the demand and offer sides and the time influence over the optimizing process: such a measure will satisfy suitable coherence conditions.

Let us suppose that the different scenarios can be represented by suitable variables $y$, so that the usual optimizing process can also involve the reference levels: then we can formalize the so-called rebalancing process of the benchmark portfolio. Let us denote by $s_S(y)$ the upper level meant to promote an efficient investment strategy. The subjective point of view can be emphasized by weighting the difference by the loss function $k_S$: this way we encompass the class of statistical risk measures. Let $Z(x)$ be the set of feasible investment strategies $z(\cdot)$ that are allowed with the initial capital $x$, and let $G(\tilde{x}) = \{(z, y) : z \in Z(\tilde{x})\}$ be the feasible set for the joint investment and target policies, $\tilde{x}$ denoting the risk position. Then the maximum loss can be controlled by suitably choosing the optimal strategy with respect to the target $s_S(y)$, and the relative measure can be formalized as the optimal value function for the optimization problem

$$\rho^M_S(\tilde{x}) = \inf_{(z,y) \in G(\tilde{x})}\ \sup_{y}\ E\!\left[\frac{k_S\big(s_S(y) - \tilde{x}^{x,z}(y)\big)^+}{r}\right]. \qquad (4.1)$$

The coherence conditions introduced by Artzner et al. are easily verified also for the relative risk measure. Let us consider the case of the identity evaluation function, $k_S = \mathrm{id}$: in this case the only caution concerns positive homogeneity. Indeed, it is no longer possible to apply the same unit of measure to both the pre- and the post-risk evaluation if the target level is not modified accordingly: we face

$$\rho^M_S(k\tilde{x}, s) \ne k\,\rho^M_S(\tilde{x}, s),$$

unless $s$ is multiplied by $k$ too. Then it is worth choosing targets having the same nature as the random process under examination. When we allow loss functions $k_S$ making the downside theory compatible with the utility theory,

$$k(s - x) = (s - x)^a,$$

we no longer face a linear case and we need to add suitable assumptions.

Definition 4.1. A risk measure is weakly coherent if it satisfies the following conditions:


• translation and scale reduction:

$$\rho(\tilde{x} + ar, s) \le \rho(\tilde{x}, s); \qquad \rho(k\tilde{x}, ks) \le k\,\rho(\tilde{x}, s), \quad k > 1;$$

• subadditivity:

$$\rho(\tilde{x}_1 + \tilde{x}_2, s) \le \rho(\tilde{x}_1, s) + \rho(\tilde{x}_2, s).$$

Then the following theorem holds.

Theorem 4.1. The optimal risk measure $\rho^M_S(\tilde{x}) = \rho(\tilde{x}, s)$ satisfies the coherence conditions of Definition 4.1 if and only if the function $k_S$ is concave in $\tilde{x}$ and the feasible set $G(\tilde{x})$ for problem (4.1) is a convex multifunction on a convex set of random variables $\tilde{x} \in X$.

Proof. Due to the assumptions concerning problem (4.1), the optimal value function $\rho^M_S(\tilde{x})$ is convex (Siu and Yang, 1999): then $\rho^M_S(\tilde{x})$ turns out to be quasi-homogeneous and subadditive (Kuczma, 1985), as required by Definition 4.1 for weak coherence. □

Remark. The explicit introduction of risk attitude is no longer compatible with dimension neutrality.

A more general setting in the static case takes into account both the downside and the upside position, thus leading to an explicitly constrained problem. Suppose that $s_D(y)$ is the lower bound used to control the maximum loss and that the difference $s - x$ is weighted by the function $k_D$. Then we can set a problem like (4.1) under the constraint of a limited maximum loss with respect to the lower threshold $s_D(y)$:

$$G(\tilde{x}) = \left\{(z, y) : \rho^m_D(\tilde{x}) = \sup_{y}\ \inf_{z(\cdot) \in Z(x)} E\!\left[\frac{k_D\big(s_D(y) - \tilde{x}^{x,z}(y)\big)^+}{r}\right] \le h\right\}.$$

Such a parametric problem gives the constrained version of the biobjective optimization problem, leading to the risk measure interval: indeed, $\rho^M_S$


denotes the effort to improve the investment towards the target, while the demand side is protected against the worst loss by $\rho^m_D$.

We now have to represent the time dependence of losses explicitly, to overcome the one-period perspective in measuring risk. Suppose that the risk measure is separable in time, and let us follow the recursive approach by Wang, but apply the optimization process of dynamic programming, so that risk is measured in all subperiods. The manager's attitude towards losses with respect to the assigned target is given by a measurable function $L : G(\tilde{x}) \to \mathbb{R}_+$, which is assumed to be continuous, strictly increasing and convex. Any risk profile $d = (x_0, d) \in G(\tilde{x})$ represents a possible tree built on the initial capital $x_0$ and giving the future risk development, and it is stated in relative terms as a difference with respect to the target, $d_{1S} = s_S(y) - \tilde{x}^{x,z}(y)$. Then the function $\tilde{L}[d](x) = L(\tilde{x}(x), \tilde{d}(x))$ gives the value-at-risk of $d$ if the state $x \in X$ is realized. A synthetic measure of the whole future risk is calculated as the certainty equivalent

$$n = L(d) = l(\tilde{L}(d)).$$

Suppose there exists a time aggregator $W(x, n)$, allowing us to relate the current capital $x$ and the future losses $n$ by stating a recursive relation for the loss function,

$$L(x, d) = W(x, l(\tilde{L}[d])).$$

Then the optimal problem for the risk management is given by

$$J(\tilde{x}) = \inf_{z}\ \sup_{y}\ W(x_0, l(\tilde{L}(d))) \qquad (4.2)$$

on the feasible set $G(\tilde{x})$ for the risk profiles, which is supposed to be a convex multifunction, so that problem (4.2) turns out to be convex. The time aggregator $W(x, n)$ is supposed to satisfy the following conditions:

• $W$ is continuous with $W(0, 0) = 0$;
• $W(x, n)$ is strictly increasing and strictly convex both in $x$ and in $n$;
• $W(x, n)$ is submodular in the pair.



The time-varying worst-case loss of period $t$ is defined as

$$J_t(\tilde{x}_t) = \inf_{z}\ \sup_{y}\ \{W(\tilde{x}_t, l(\tilde{L}[d_t])) : (z, y) \in G_t(\tilde{x}_t)\},$$

$\tilde{x}_t$ denoting the worst-case current position and $d_t$ the future estimation of losses towards the target, while $G_t(\tilde{x}_t)$ denotes the feasible set referred to the future subperiod starting at $t$. Then the optimality equation is

$$J_t(\tilde{x}_t) = \inf_{z}\ \sup_{y}\ \{W(\tilde{x}_t, l(J_{t+1}(\tilde{x}_{t+1})[d])) : (z, y) \in p_t(G_t(\tilde{x}))\},$$

where $p_t(G_t(\tilde{x}))$ is the $t$-time feasible set obtained with the decomposition process. The risk profile develops over time, so that risk is recursively optimized both on every subperiod and on the whole horizon, and it is measured by the optimal value functional $J_t(\tilde{x}_t)$. The dynamic risk measure is thus formalized as the optimal value function of a convex problem, and sufficient conditions are verified so that it is concave with respect to the position $\tilde{x}$. Correspondingly, the coherence conditions can be stated also in the dynamic framework.

Theorem 4.2. Suppose problem (4.2) is convex, so that the optimal value functional $J(\tilde{x})$ is convex in the position $\tilde{x}$. Then it is a weakly coherent dynamic risk measure, according to Definition 4.1.

Let us now suppose that Wang's conditions are satisfied and only the monotonicity of $W$ is assumed. Moreover, we add the assumption of vector timing indifference with respect to the targets $s_S$, $s_D$: it requires that the risks $\tilde{x}_1$, $\tilde{x}_2$ take the same number of distinct values, say $k$, and that the following condition holds:

$$\{y \in G_y(\tilde{x}) : s_S(y) - \tilde{x}_1(y) \ge x_{1S},\ s_D(y) - \tilde{x}_1(y) \ge x_{1D}\} = \{y \in G_y(\tilde{x}) : s_S(y) - \tilde{x}_2(y) \ge x_{2S},\ s_D(y) - \tilde{x}_2(y) \ge x_{2D}\},$$

that is, the worst-case positions are the same for the two profiles being compared. $G_y$ denotes the restriction of the feasible set to the market variables $y$.
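A hedged computational sketch of this optimality equation on a toy two-period tree follows. All the numerical choices below (the strategy grid, the scenario returns, the moving targets, the worst-case treatment of the scenarios playing the role of $l$, and the linear aggregator $W(x, n) = x + b\,n$, which meets the monotonicity and submodularity requirements only weakly) are purely illustrative assumptions, not constructions taken from the paper.

import numpy as np

b = 0.95                                    # discount factor (assumption)
targets = [1.02, 1.04, 1.06]                # moving targets s_S for t = 0, 1, 2
strategies = np.linspace(0.0, 1.0, 5)       # fraction invested in the risky asset
risky = np.array([-0.10, 0.02, 0.12])       # scenario returns y (assumption)
riskfree = 0.01

def W(current_loss, future_loss):
    # Illustrative time aggregator: current loss plus discounted future loss.
    return current_loss + b * future_loss

def shortfall(wealth, target):
    # Relative loss of the position with respect to the moving target.
    return max(0.0, target - wealth)

def J(t, wealth, horizon=2):
    # Backward recursion J_t = inf over z of sup over y of W(loss_t, J_{t+1}).
    if t == horizon:
        return shortfall(wealth, targets[t])
    best = np.inf
    for z in strategies:                    # inf over feasible strategies
        worst = -np.inf
        for r in risky:                     # sup over market scenarios
            next_wealth = wealth * (1.0 + z * r + (1.0 - z) * riskfree)
            worst = max(worst, W(shortfall(wealth, targets[t]),
                                 J(t + 1, next_wealth, horizon)))
        best = min(best, worst)
    return best

if __name__ == "__main__":
    print("worst-case dynamic risk J_0(1.0):", round(J(0, 1.0), 6))

Because the supremum over the scenarios is taken inside the recursion, the sketch is conservative: with these numbers the all-risk-free strategy minimizes the worst-case loss, and the residual risk comes only from the targets growing faster than the risk-free return.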

Recall that the composition of a convex and a monotone function is, in general, only quasiconvex. Then we are able to weaken the coherence conditions further.

Definition 4.2. A risk measure is quasi-coherent if it satisfies the quasiconvexity condition

$$\rho(\tilde{x}_1 + \tilde{x}_2, s) \le \max\{\rho(\tilde{x}_1, s), \rho(\tilde{x}_2, s)\}$$

instead of subadditivity.

Theorem 4.3. Suppose that problem (4.2) is quasiconvex, so that the optimal value functional $J(\tilde{x})$ is quasiconvex. Then $J(\tilde{x})$ represents a quasi-coherent dynamic measure according to Definition 4.2.

Therefore, by aggregating the risk of the current period with the future estimation, we guarantee that the worst-case loss is less than the worse of the two terms,

$$J(\tilde{x}_1 + \tilde{x}_2) \le \max\{J(\tilde{x}_1), J(\tilde{x}_2)\}.$$

The dynamic development of the risk measures has led to weaker coherence conditions. Indeed, at each step in time we can measure risk no longer in mean, but according to the worst-case approach. We can now conclude that the risk measure is calculated by solving an optimization problem and not just by aggregating the risk value over time: the optimal targets and thresholds are thus stated as the result of an optimization process and depend both on the risk attitude and on prudent bounding conditions. The recursive approach allows us to measure risk also at any time, and not only globally on the whole time horizon.

5. Concluding remarks

Following the Artzner et al. approach, the definition of risk measures as the maximum expected shortfall has received a theoretical foundation through suitable coherence conditions. This paper has shown that by applying an optimization approach, rather than simply weighting the risky amounts over time, we are able to weaken the coherence requirements. To this main result we add two further possible lines of development:


• to introduce upper and lower bounds simultaneously;
• to formalize the rebalancing process of the benchmark by making it depend on decision variables that operate simultaneously on the portfolio and on the targets.

Acknowledgements

The research has been partially supported by the National Research Council and the Italian Ministry of Education.

References

Artzner, P., Delbaen, F., Eber, J.M., Heath, D., 1999. Coherent risk measures. Mathematical Finance 9, 203–228.


Bawa, V.S., 1975. Optimal rules for ordering uncertain prospects. Journal of Financial Economics 2, 95–121.
Cvitanik, J., Karatzas, I., 1999. On dynamic measures of risk. Finance and Stochastics 3, 451–482.
Embrechts, P., Klüppelberg, C., Mikosch, T., 1997. Modelling Extremal Events for Insurance and Finance. Springer-Verlag, Berlin.
Fishburn, P.C., 1977. Mean-risk analysis with risk associated with below-target returns. The American Economic Review 67, 116–126.
Kuczma, M., 1985. An Introduction to the Theory of Functional Equations and Inequalities. University Press, Warszawa.
Markowitz, H.M., 1959. Portfolio Selection, first ed. Wiley, New York (2nd ed. Blackwell, Cambridge, 1991).
Siu, T.K., Yang, H., 1999. Subjective risk measures: Bayesian predictive scenarios analysis. Insurance: Mathematics and Economics 25, 157–169.
Wang, T., 1999. A class of dynamic risk measures. Working Paper, September.
Wirch, J.L., Hardy, M.R., 1999. A synthesis of risk measures for capital adequacy. Insurance: Mathematics and Economics 25, 337–347.