
Showing papers in "Scandinavian Actuarial Journal in 2023"


Journal Article
TL;DR: In this paper, the authors solve a Stackelberg differential game for insurance under model ambiguity and show that, under a mean-variance premium principle, the seller's robust optimal premium rule equals the net premium under the buyer's optimally distorted probability.
Abstract: We solve a Stackelberg differential game between a buyer and a seller of insurance policies, in which both parties are ambiguous about the insurable loss. Both the buyer and seller maximize their expected wealth, plus a penalty term that reflects ambiguity, over an exogenous random horizon. Under a mean-variance premium principle and a general divergence that measures the players' ambiguity, we obtain the Stackelberg equilibrium semi-explicitly. Our main results are that the optimal variance loading equals zero and that the seller's robust optimal premium rule equals the net premium under the buyer's optimally distorted probability. Both of these important results generalize those we obtained in [Cao, J., Li, D., Young, V. R. & Zou, B. (2022). Stackelberg differential game for insurance under model ambiguity. Insurance: Mathematics and Economics, 106, 128–145.] under squared-error divergence.
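For context, a mean-variance premium principle of the kind used here prices the insurable loss Y with a mean loading plus a variance loading; in notation that is ours rather than the paper's:

```latex
\pi \;=\; (1+\theta)\,\mathbb{E}[Y] \;+\; \eta\,\mathrm{Var}[Y],
\qquad \theta \ge 0,\ \eta \ge 0 .
```

The first main result stated in the abstract says the equilibrium variance loading vanishes, $\eta^{*}=0$; the second says the resulting robust premium is the net premium $\mathbb{E}^{\mathbb{Q}^{*}}[Y]$ under the buyer's optimally distorted measure $\mathbb{Q}^{*}$.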

2 citations


Journal Article
TL;DR: In this paper, the authors review the main characteristics of cyber risk and consider the three layers of cyberspace: hardware, software, and the psycho-cognitive layer. This wide exploration pictures a science in the making and points out the questions to be solved for building a resilient society.
Abstract: Not a day goes by without news about a cyber attack. Fear spreads out and lots of wrong ideas circulate. This survey aims at showing how all these uncertainties about cyber can be transformed into manageable risk. After reviewing the main characteristics of cyber risk, we consider the three layers of cyber space: hardware, software and psycho-cognitive layer. We ask how this risk differs from others, how modelling has been tackled and needs to evolve, and what the multi-faceted aspects of cyber risk management are. This wide exploration pictures a science in the making and points out the questions to be solved for building a resilient society.

1 citation


Journal Article
TL;DR: In this paper, the authors present a general procedure for constructing a distribution-free locally unbiased predictor of the risk premium based on any initially suggested predictor; the resulting predictor is piecewise constant, corresponding to a partition of the covariate space, and by construction auto-calibrated.
Abstract: We study non-life insurance pricing and present a general procedure for constructing a distribution-free locally unbiased predictor of the risk premium based on any initially suggested predictor. The resulting predictor is piecewise constant, corresponding to a partition of the covariate space, and by construction auto-calibrated. Two key issues are the appropriate partitioning of the covariate space and the handling of randomly varying durations, acknowledging possible early termination of contracts. A basic idea in the present paper is to partition the predictions from the initial predictor, which as a by-product defines a partition of the covariate space. Two different approaches to create partitions are discussed in detail using (i) duration-weighted equal-probability binning, and (ii) binning by duration-weighted regression trees. Given a partitioning procedure, the size of the partition to be used is obtained using cross-validation. In this way we obtain an automatic data-driven tariffication procedure, where the number of tariff cells corresponds to the size of the partition. We illustrate the procedure based on both simulated and real insurance data, using both simple GLMs and GBMs as initial predictors. The resulting tariffs are shown to have a rather small number of tariff cells while maintaining or improving the predictive performance compared to the initial predictors.
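The binning-and-recalibration idea behind approach (i) can be sketched as follows. This is a minimal illustration assuming frequency-type data; the function names and the simple claim model are ours, not the paper's:

```python
import random
from bisect import bisect_right

def duration_weighted_bins(pred, dur, n_bins):
    """Bin edges on the predictions such that each bin carries
    (approximately) equal total duration -- approach (i) in the abstract."""
    order = sorted(range(len(pred)), key=lambda i: pred[i])
    total = sum(dur)
    edges, cum, k = [], 0.0, 1
    for i in order:
        cum += dur[i]
        if k < n_bins and cum >= k * total / n_bins:
            edges.append(pred[i])
            k += 1
    return edges

def autocalibrate(pred, claims, dur, edges):
    """Replace each prediction by the duration-weighted empirical claim
    frequency of its bin: the result is piecewise constant and locally
    unbiased (the duration-weighted average in each cell matches the data)."""
    n_bins = len(edges) + 1
    claim_sum = [0.0] * n_bins
    dur_sum = [0.0] * n_bins
    for p, c, d in zip(pred, claims, dur):
        b = bisect_right(edges, p)
        claim_sum[b] += c
        dur_sum[b] += d
    level = [cs / ds if ds > 0 else 0.0 for cs, ds in zip(claim_sum, dur_sum)]
    return [level[bisect_right(edges, p)] for p in pred]
```

By construction, summing the recalibrated predictor times duration over any tariff cell reproduces the observed claim total in that cell, which is the local unbiasedness property; choosing `n_bins` would be done by cross-validation as the abstract describes.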

1 citation



Journal Article
TL;DR: Convex risk measures for the aggregation of multiple information sources and applications in insurance are discussed in this article. However, the authors do not consider the impact of the aggregation on the overall system.
Abstract: This article refers to: Convex risk measures for the aggregation of multiple information sources and applications in insurance



Journal Article
TL;DR: In this paper, the authors consider an optimal reinsurance contract under a mean-variance criterion in a Stackelberg game theoretical framework, in which the reinsurer adopts the role of a social planner balancing its own interests with those of the insurer.
Abstract: In this paper, we consider an optimal reinsurance contract under a mean-variance criterion in a Stackelberg game theoretical framework. The reinsurer is the leader of the game and decides on an optimal reinsurance premium to charge, while the insurer is the follower of the game and chooses an optimal per-loss reinsurance to purchase. The objective of the insurer is to maximize a given mean-variance criterion, while the reinsurer adopts the role of social planner balancing its own interests with those of the insurer. That is, we assume that the reinsurer determines the reinsurance premium by maximizing a weighted sum of the insurer's and reinsurer's mean-variance criteria. Under the general mean-variance premium principle, we derive the optimal reinsurance contract by solving the extended Hamilton–Jacobi–Bellman (HJB) systems. Moreover, we provide an intuitive way to set the weight of each party in the reinsurer's objective. Finally, we consider some special cases to illustrate our main results.
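The social-planner premium choice described above can be summarized as maximizing a weighted sum of the two parties' mean-variance criteria; the weight symbol $\omega$ and the functionals $J$ are our notation, not the paper's:

```latex
p^{*} \;\in\; \arg\max_{p}\;\Big\{\,\omega\, J^{\text{insurer}}(p) \;+\; (1-\omega)\, J^{\text{reinsurer}}(p)\,\Big\},
\qquad \omega \in [0,1],
```

with $\omega = 0$ recovering a purely self-interested reinsurer and $\omega = 1$ a reinsurer acting entirely in the insurer's interest; the paper discusses an intuitive way to set this weight.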

Journal Article
TL;DR: In this paper, the authors develop a dynamic equilibrium model of insurance pricing in a competitive market of heterogeneous insurance companies, in which insurers hold different beliefs on the expected loss rate of an underlying risk process and the belief divergences are stochastic.
Abstract: We develop a dynamic equilibrium model of insurance pricing in a competitive market consisting of heterogeneous insurance companies. The insurers have different beliefs on the expected loss rate of an underlying risk process, and the belief divergences are stochastic. The insurers select optimal insurance market shares to maximize their individual utilities. The equilibrium insurance price is formulated when the insurance market is cleared. We provide a general equilibrium framework with a continuum of insurers in the market and then solve for the equilibrium insurance price explicitly in the case of N insurers. We find that the stochastic heterogeneity brings extra volatility to the insurance price and makes it dynamic. The mean-reverting divergences of insurers may explain cycles of insurance business documented by empirical studies. Compared to the previous literature of optimal insurance, this paper introduces an asset pricing framework of general equilibrium to the research of insurance pricing.

Journal Article
TL;DR: In this paper, a refracted Lévy risk model with delayed dividend pullbacks triggered by a certain Poissonian observation scheme is proposed, and an explicit expression for the expected (discounted) dividend payouts net of penalties is derived.
Abstract: The threshold dividend strategy, under which dividends are paid only when the insurer's surplus exceeds a pre-determined threshold, has received considerable attention in risk theory. However, in practice, it seems rather unlikely that an insurer will immediately pull back the dividend payments as soon as its surplus level drops below the dividend threshold. Hence, in this paper, we propose a refracted Lévy risk model with delayed dividend pullbacks triggered by a certain Poissonian observation scheme. Leveraging the extensive literature on fluctuation identities for spectrally negative Lévy processes, we obtain explicit expressions for two-sided exit identities of the proposed insurance risk process. Also, penalties are incorporated into the analysis of dividend payouts as a mechanism to penalize for the volatility of the dividend policy and account for an investor's typical preference for more stable cash flows. An explicit expression for the expected (discounted) dividend payouts net of penalties is derived. The criterion for the optimal threshold level that maximizes the expected dividend payouts is also discussed. Finally, several numerical examples are considered to assess the impact of dividend delays on ruin-related quantities. We numerically show that dividend strategies with more steady dividend payouts can be preferred (over the well-known threshold dividend strategy) when penalty fees become too onerous.

Journal Article
TL;DR: In this paper, a modified Transformer architecture is proposed for predicting mortality rates in major countries around the world; through its multi-head attention mechanism and positional encoding, the model extracts key features effectively and thus achieves better performance in time-series forecasting.
Abstract: Predicting mortality rates is a crucial issue in life insurance pricing and demographic statistics. Traditional approaches, such as the Lee-Carter model and its variants, predict the trends of mortality rates using factor models, which explain the variations of mortality rates from the perspective of ages, gender, regions, and other factors. Recently, deep learning techniques have achieved great success in various tasks and shown strong potential for time-series forecasting. In this paper, we propose a modified Transformer architecture for predicting mortality rates in major countries around the world. Through the multi-head attention mechanism and positional encoding, the proposed Transformer model extracts key features effectively and thus achieves better performance in time-series forecasting. By using empirical data from the Human Mortality Database, we demonstrate that our Transformer model has higher prediction accuracy of mortality rates than the Lee-Carter model and other classic neural networks. Our model provides a powerful forecasting tool for insurance companies and policy makers.
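The paper's exact architecture is not reproduced here, but the positional encoding it mentions builds on the standard sinusoidal scheme of the original Transformer. A dependency-free sketch of that scheme (the function name is ours):

```python
import math

def positional_encoding(seq_len, d_model):
    """Standard sinusoidal positional encoding (Vaswani et al., 2017):
    even dimensions get sin, odd dimensions get cos, with geometrically
    spaced wavelengths, so attention layers can recover relative positions."""
    pe = [[0.0] * d_model for _ in range(seq_len)]
    for pos in range(seq_len):
        for i in range(0, d_model, 2):
            angle = pos / (10000 ** (i / d_model))
            pe[pos][i] = math.sin(angle)
            if i + 1 < d_model:
                pe[pos][i + 1] = math.cos(angle)
    return pe
```

In a mortality-forecasting setting, this matrix would be added to the embedded input sequence, e.g. a window of past age-specific mortality rates, before the attention layers.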

Journal Article
TL;DR: In this paper, the authors investigate the optimal decision making of an insurer towards a new insurable business whose risk is independent of the existing risk faced by the insurer, and show that a stop-loss reinsurance contract is optimal when the solvency risk is quantified by the conditional value at risk.
Abstract: In this paper, we investigate the optimal decision making of an insurer towards a new insurable business, whose risk is independent of the existing risk faced by the insurer. We assume that the insurer, with the objective of maximizing the expected utility of its final wealth, together with the solvency constraint and the availability of reinsurance as a risk transfer mechanism, is deciding if it is viable to underwrite a new insurance business risk. If this new business is underwritten, it is shown that a stop-loss reinsurance contract is optimal when the solvency risk is quantified by the conditional value at risk. If the regulatory regime changes to the value at risk, the optimal reinsurance form becomes more complicated; it can be either stop-loss or two-layer under the assumption that the new risk has a strictly decreasing probability density function. Numerical examples are provided to illuminate the insurer's decision making and the optimal form of the reinsurance strategy.
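A stop-loss contract with retention d pays the excess max(x − d, 0) to the reinsurer and leaves the insurer min(x, d). A small sketch of the retained loss and an empirical conditional value at risk, to make the connection concrete (helper names and the empirical estimator are ours, not the paper's machinery):

```python
def stop_loss_retained(x, d):
    """Insurer's retained loss under a stop-loss treaty with retention d:
    the reinsurer covers the excess max(x - d, 0)."""
    return min(x, d)

def empirical_cvar(losses, alpha):
    """Empirical conditional value at risk at level alpha: the average of
    the worst (1 - alpha) fraction of the loss sample."""
    s = sorted(losses)
    k = int(alpha * len(s))
    tail = s[k:] or s[-1:]
    return sum(tail) / len(tail)
```

Capping the retained loss at d bounds the retained tail, which is the intuition for why stop-loss pairs naturally with a CVaR solvency constraint, whereas a VaR constraint only controls a single quantile and admits the more complicated two-layer shapes mentioned in the abstract.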

Journal Article
TL;DR: In this article, the authors consider a two-barrier renewal risk model in which the surplus is inspected only at claim arrival times, and study the joint distribution of the time, the number of claims, and the total claim amount until the surplus process falls below zero (ruin) or reaches a safety level.
Abstract: We consider a two-barrier renewal risk model assuming that insurer's income is modeled via a Brownian motion, and the surplus is inspected only at claim arrival times. We are interested in the joint distribution of the time, number of claims and the total claim amount until the surplus process falls below zero (ruin) or reaches a safety level. We obtain a general formula for the respective joint generating function which is expressed via the distributions of the undershoot (deficit at ruin) and the overshoot (surplus exceeding safety level). We offer explicit results in the classical Poisson model, and we also study a more general renewal model assuming mixed Erlang distributed claim amounts and inter-arrival times. Our methodology is based on tilted measures and Wald's likelihood ratio identity. We finally illustrate the applicability of our theoretical results by presenting appropriate numerical examples in which we derive the distributions of interest and compare them with the ones estimated using Monte Carlo simulation.
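As an illustration of the kind of Monte Carlo comparison mentioned at the end of the abstract, here is a minimal path simulator for the classical Poisson case with Brownian income, with the surplus inspected only at claim arrival times. All parameters and the exponential claim distribution are illustrative choices, not the paper's calibration:

```python
import math
import random

def simulate_until_exit(u, b, lam, c, sigma, mean_claim, rng, max_claims=100_000):
    """Simulate the surplus u + c*t + sigma*W(t) - S(t) between claim
    arrivals (Poisson rate lam, exponential claims) and inspect it only
    at claim times; stop at ruin (< 0) or at the safety level (>= b).
    Returns (outcome, number of claims, total claim amount)."""
    surplus, n, total = u, 0, 0.0
    while n < max_claims:
        t = rng.expovariate(lam)  # inter-arrival time
        # premium income plus Brownian perturbation accumulated over (0, t]
        surplus += c * t + sigma * math.sqrt(t) * rng.gauss(0.0, 1.0)
        claim = rng.expovariate(1.0 / mean_claim)
        surplus -= claim
        n += 1
        total += claim
        if surplus < 0:
            return "ruin", n, total
        if surplus >= b:
            return "safe", n, total
    return "censored", n, total
```

Averaging the outcomes over many paths gives crude estimates of the exit probabilities and of the joint quantities whose generating function the paper derives in closed form.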

Journal Article
TL;DR: In this article, the authors consider the problem of maximizing dividends in a diffusion environment with a ruin penalty motivated by a constraint on dividend strategies, and derive an explicit equilibrium dividend strategy and the associated value function.
Abstract: We consider the dividend maximization problem including a ruin penalty in a diffusion environment. The additional penalty term is motivated by a constraint on dividend strategies. Intentionally, we use different discount rates for the dividends and the penalty, which causes time-inconsistency. This allows us to study different types of constraints. For the diffusion approximation of the classical surplus process we derive an explicit equilibrium dividend strategy and the associated value function. Inspired by duality arguments, we can identify a particular equilibrium strategy such that for a given initial surplus the imposed constraint is fulfilled. Furthermore, we illustrate our findings with a numerical example.