
Showing papers in "Journal of Risk and Insurance in 2005"


Journal Article
TL;DR: Sornette argues that a stock market crash is not the result of short-term exogenous events, but rather involves a long-term endogenous buildup, with exogenous events acting merely as triggers.
Abstract: Why Stock Markets Crash: Critical Events in Complex Financial Systems, by Didier Sornette, 2003, Princeton, NJ: Princeton University Press Consider the following events: a pressure tank within a rocket propulsion system fails during a launch; tectonic plates shift, causing the first significant earthquake in a locale for several decades; a stock market experiences a crash after a prolonged run-up in price levels. The commonality here is that all of these events are ultimately characterized by a "rupture" in the underlying system, following a buildup of "pressure" over a period of time. The recognition of certain engineering and geologic events as analogous in this way to financial market crashes was the impetus for the interesting and enjoyable new book Why Stock Markets Crash: Critical Events in Complex Financial Systems, by Didier Sornette. The major thesis of this book is that a stock market crash is not the result of short-term exogenous events, but rather involves a long-term endogenous buildup, with exogenous events acting merely as triggers. In particular, Sornette examines financial crashes within the framework of the "spontaneous emergence of extreme events in self-organizing systems," noting that "extreme events are characteristic of many... 'complex systems.'" The author employs mathematical tools-specifically, log-periodic power laws-to study the prebubble or precrash buildup in a financial system to its critical point. Efforts by nonfinancial people to analyze and explain financial phenomena using quantitative techniques from the hard and engineering sciences can be of tremendous use and interest to those of us in the financial community-provided that the mathematical techniques are applied by an author with an exposure to and understanding of the financial instruments, processes, and markets that are being analyzed. 
The author of Why Stock Markets Crash has done an admirable job of understanding and appreciating the financial world and its nuances. Didier Sornette is a professor of geophysics at UCLA, as well as a research director at the National Center of Scientific Research in France. He specializes in the prediction of catastrophic events within a complex system framework. In this book, as well as in a portion of his hundreds of journal articles, he takes his previous work in the physical and geological sciences and exports his mathematical modeling and prediction skills to the financial markets. In the first chapter, Sornette places historical extreme financial events-in particular, market crashes-in a complex, self-organizing system framework. This is followed by two chapters devoted, respectively, to the basic concepts and characteristics of financial markets, and to some statistical analyses demonstrating that financial crashes are essentially outliers. …
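The log-periodic power law that Sornette fits to prebubble price paths has a standard closed form, p(t) = A + B(tc − t)^m [1 + C cos(ω ln(tc − t) + φ)], where tc is the critical (crash) time. A minimal sketch follows; the parameter values are hypothetical illustrations, not fits from the book.

```python
import math

def lppl(t, tc, A, B, m, C, omega, phi):
    """Log-periodic power law (LPPL) for a precrash price path.

    p(t) = A + B*(tc - t)**m * (1 + C*cos(omega*ln(tc - t) + phi)),
    valid for t < tc, where tc is the critical (crash) time. With B < 0
    and 0 < m < 1, the price accelerates toward A as t approaches tc,
    overlaid with oscillations whose frequency increases in log-time.
    """
    dt = tc - t
    return A + B * dt**m * (1.0 + C * math.cos(omega * math.log(dt) + phi))

# Illustrative (hypothetical) parameters: a price path that rises toward
# its critical time tc = 100 with log-periodic wobbles along the way.
path = [lppl(t, tc=100, A=500, B=-25, m=0.5, C=0.1, omega=6.0, phi=0.0)
        for t in range(0, 100, 10)]
```

Fitting such a curve to real index data (the hard part of Sornette's program) requires nonlinear estimation of tc, m, and ω, which this sketch does not attempt.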

426 citations


Journal ArticleDOI
TL;DR: In this paper, the authors study mortality-based securities, such as mortality bonds and swaps, and price the proposed mortality securities, focusing on individual annuity data, although some of the modeling techniques could be applied to other lines of annuity or life insurance.
Abstract: The purpose of this article is to study mortality-based securities, such as mortality bonds and swaps, and to price the proposed mortality securities. We focus on individual annuity data, although some of the modeling techniques could be applied to other lines of annuity or life insurance.

316 citations


Journal Article
TL;DR: Duffie and Singleton's Credit Risk: Pricing, Measurement, and Management covers the modeling, measurement, and pricing of credit risk, and its last chapter proposes an original way of integrating credit and market risks in a portfolio model.
Abstract: Credit Risk: Pricing, Measurement, and Management, by Darrell Duffie and Kenneth J. Singleton, 2003, Princeton, NJ: Princeton University Press. Credit risk is the major challenge for risk managers and market regulators. International regulation of banks' credit risk was put in place in 1988, and since that time there has been no consensus on how to improve that regulatory framework. Part of the explanation resides in the complexity of this risk. Banks, regulators, and central banks do not agree on how to measure credit risk and, more particularly, on how to compute the optimal capital that is necessary for protecting the different partners that share this risk. For example, what proportion of yield spreads on corporate bonds is explained by credit risk? Is it 30 percent, 50 percent, or even 90 percent? Is the credit risk proportion of the observed spreads solely a function of variations in the default probability, or is it also explained by variations in the recovery rate over time or across cycles? Are macroeconomic cycles themselves, or default risk premia, market liquidity, and even market risk, significant determinants of yield spreads? 
These questions are important because some models, such as CreditMetrics, use the entire yield spread to compute the capital for credit risk. If credit risk explains only a small fraction of yield spreads, these models compute too much capital for regulation and even for credit risk management (Dionne et al., 2004, and references therein). Asking banks to keep too much capital in reserve to cover credit risk can be a source of market distortion in risk management behavior (Allen and Gale, 2003; Dionne and Harchaoui, 2003). For example, it may generate some asset substitution activities that increase the risky position of banks, in order to set the level of risk at its optimal rather than regulatory level. All these issues arise in part because credit risk is not well understood. So the book by Duffie and Singleton will be welcomed by the academics, regulators, and practitioners who consult it. The book has 13 chapters, three appendices (two on affine processes), a comprehensive list of references, and an index (authors and subjects). It covers all subjects related to credit risk. It is designed for three broad audiences: academics and graduate students; those involved in the measurement and control of financial risks; and those involved in trading and marketing products with significant credit risk. The main focus is modeling credit risk: measuring portfolio credit risk and pricing different securities exposed to credit risk. The focus on credit risk management is less important in the book. The introduction (indeed the entire book) is very well written and presents the subjects treated with clarity. Credit risk is distinguished from other sources of risk such as market risk, liquidity risk, operational risk, systemic risk, and regulatory and legal risk. The distinctions take many dimensions, such as time horizon, liquidity, the parties implicated, methodology, and information asymmetries. However, the authors insist on the fact that this does not mean that all these different risks 
should be managed separately. These different risks may be correlated over time, so integrated frameworks for measuring and pricing them are necessary, particularly for market, credit, and liquidity risks. For example, factors underlying changes in credit risk are often correlated with those underlying market risk, and changes in liquidity risk can be viewed as a component of market risk and may generate credit risk. The last chapter proposes an original way of integrating credit and market risks in a portfolio model. The introduction also provides an overview of the book. The chapters are organized to highlight the major topics related to credit risk, such as "Definition and Management" (Chapter 2), "Default and Transition" (Chapters 3 and 4), "Valuation" (including valuation of credit derivatives, Chapters 5-9), "Default Correlation" and "Portfolio Valuation" (Chapters 10 and 11), and "Credit Risk in OTC Derivatives Positions" and "Portfolio Risk Measurement" (Chapters 12 and 13). …

282 citations


Journal ArticleDOI
TL;DR: Securitization, one of the most important innovations of modern finance, is the process of isolating a pool of assets or rights to a set of cash flows and repackaging them into securities that are traded in capital markets.
Abstract: INTRODUCTION Securitization is one of the most important innovations of modern finance. The securitization process involves the isolation of a pool of assets or rights to a set of cash flows and the repackaging of the asset or cash flows into securities that are traded in capital markets. The trading of cash flow streams enables the parties to the contract to manage and diversify risk, to take advantage of arbitrage opportunities, or to invest in new classes of risk that enhance market efficiency. The cash flow streams to be traded often involve contingent payments as well as more predictable components which may be subject to credit and other types of counterparty risk. Securitization provides a mechanism whereby contingent and predictable cash flow streams arising out of a transaction can be unbundled and traded as separate financial instruments that appeal to different classes of investors. In addition to facilitating risk management, securitization transactions also add to the liquidity of financial markets, replacing previously untraded on-balance-sheet assets and liabilities with tradeable financial instruments. The securitization era began in the 1970s with the securitization of mortgage loans by the government sponsored enterprises (GSEs) Fannie Mae, Ginnie Mae, and Freddie Mac, which were created by the federal government with the objective of facilitating home ownership by providing a reliable supply of home mortgage financing. The securitization process enabled mortgage originators such as banks, thrift institutions, and insurers to move mortgage loans off their balance sheets, freeing up funds for additional lending. In the process, a new class of highly rated, liquid securities was created, enhancing portfolio opportunities for investors. The next major development in securitization was the introduction of asset-backed securities (ABS) based on other types of assets. 
This market began in 1985 with the securitization of approximately $1 billion in automobile loans and later expanded to include credit card receivables, home equity loans, aircraft-backed loans, student loans, and numerous other asset classes. In 2003, new issue volume of mortgage-backed and nonmortgage-backed ABS reached $2.1 trillion and $585 billion, respectively. (1) Although the insurance industry in the United States accounts for approximately $4 trillion in assets with corresponding liabilities and equity capital that would seem to be candidates for securitization, securitization has been relatively slow to catch on in this industry. The first U.S. insurance securitizations took place in 1988 and involved sales of rights to emerging profits from blocks of life insurance policies and annuities (Millette et al., 2002). Insurance linked securitizations accelerated during the 1990s with the development of catastrophic risk (CAT) bonds and options and a growing volume of life insurance and annuity securitizations. However, the volume of insurance transactions remains small in comparison with other types of ABS. Securitization has the potential to improve market efficiency and capital utilization in the insurance industry, enabling insurers to compete more effectively with other financial institutions. Through securitization insurers can reduce their cost of capital, increase return on equity, and improve other measures of operating performance. Securitization offers insurers the opportunity to unlock the embedded profits in blocks of insurance presently carried on balance sheet and to provide an alternative source of financing in an industry where traditional financing mechanisms are often restricted due to regulation. Securitized transactions also permit insurers to achieve liquidity goals and can add transparency to many on-balance-sheet assets and liabilities traditionally characterized by illiquidity, complexity, and informational opacity. 
Securitization also offers new sources of risk capital to hedge against underwriting risk more efficiently than traditional techniques such as reinsurance and letters of credit. …

209 citations


Journal ArticleDOI
TL;DR: In this article, the authors examine the relationship between market structure and performance in property-liability insurers over the period 1992-1998 using data at the company and group levels, and find that cost-efficient firms charge lower prices and earn higher profits, in conformance with the efficient structure (ES) hypothesis.
Abstract: This study examines the relationship between market structure and performance in property-liability insurers over the period 1992-1998 using data at the company and group levels. Three specific hypotheses are tested: traditional structure-conduct-performance, relative market power, and efficient structure (ES). The results provide support for the ES hypothesis. The ES hypothesis posits that more efficient firms can charge lower prices than competitors, enabling them to capture larger market shares and economic rents, leading to increased concentration. Both revenue and cost efficiency are used in the analysis, and this is the first study to use revenue efficiency in this type of analysis. The results for the sample period as a whole and by year are consistent. The overall results suggest that cost-efficient firms charge lower prices and earn higher profits, in conformance with the ES hypothesis. On the other hand, prices and profits are found to be higher for revenue-efficient firms. Revenue X-efficiency is derived from activities such as cross-selling and may rely heavily on the use of detailed information from customer databases to identify potential customers. The implications of this research are that regulators should be more concerned with efficiency (both cost and revenue) than with the market power that arises from the consolidation activity taking place in insurance.

182 citations


Journal ArticleDOI
TL;DR: In this paper, the authors provide a formal analysis of payout adjustments from a longevity risk-pooling fund, an arrangement referred to as group self-annuitization (GSA), where the annuitants bear their systematic risk, but the pool shares idiosyncratic risk.
Abstract: This article provides a formal analysis of payout adjustments from a longevity risk-pooling fund, an arrangement we refer to as group self-annuitization (GSA). The distinguishing risk diffusion characteristic of GSAs in the family of longevity insurance instruments is that the annuitants bear their systematic risk, but the pool shares idiosyncratic risk. This obviates the need for an insurance company, although such instruments could be sold through a corporate insurer. We begin by deriving the payout adjustment for a single entry group with a single annuity factor and constant expectations. We then show that under weak requirements a unique solution to payout paths exists when multiple cohorts combine into a single pool. This relies on the harmonic mean of the ratio of realized to expected survivorship rates across cohorts. The case of evolving expectations is also analyzed. In all cases, we demonstrate that the periodic-benefit payment in a pooled annuity fund is determined based on the previous payment adjusted for any deviations in mortality and interest from expectations. GSA may have considerable appeal in countries which have adopted national defined contribution schemes and/or in which the life insurance industry is noncompetitive or poorly developed.
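The adjustment rule described in this abstract, under which each periodic benefit equals the previous payment corrected for deviations of mortality and interest from expectations, can be sketched in a stylized form. The function below is an illustration of that mechanism, not the article's exact derivation, and the rates and survival probabilities in the example are hypothetical.

```python
def gsa_benefit(prev_benefit, realized_return, assumed_return,
                expected_survival, realized_survival):
    """Next-period benefit in a stylized group self-annuitization (GSA)
    pool: the previous payment adjusted by interest and mortality
    experience relative to expectations.

    Benefits rise when realized returns beat the assumed rate or when
    fewer annuitants survive than expected (survivors share the pool),
    and fall in the opposite cases.
    """
    interest_adj = (1 + realized_return) / (1 + assumed_return)
    mortality_adj = expected_survival / realized_survival
    return prev_benefit * interest_adj * mortality_adj

# Hypothetical year: the fund earns 6% against a 4% assumed rate, and
# 97% of the pool survives against 96% expected. The interest adjustment
# raises the benefit; better-than-expected survivorship trims it slightly.
b1 = gsa_benefit(10_000, 0.06, 0.04, 0.96, 0.97)
```

When experience exactly matches expectations, both adjustment factors equal one and the benefit is unchanged, which is the self-funding property that lets the pool operate without an insurer bearing the risk.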

153 citations


Journal ArticleDOI
TL;DR: In this article, the authors examined the efficiency of the marketing distribution channel and organizational structure for insurance companies from a framework that views the insurer as a financial intermediary rather than as a "production entity" which produces "value added" through loss payments.
Abstract: An examination of the efficiency of the marketing distribution channel and organizational structure for insurance companies is presented from a framework that views the insurer as a financial intermediary rather than as a "production entity" which produces "value added" through loss payments. Within this financial intermediary approach, solvency can be a primary concern for regulators of insurance companies, claims-paying ability can be a primary concern for policyholders, and return on investment can be a primary concern for investors. These three variables (solvency, financial return, and claims-paying ability) are considered as outputs of the insurance firm. The financial intermediary approach acknowledges that interests potentially conflict, and the strategic decision makers for the firm must balance one concern versus another when managing the insurance company. Accordingly, we investigate the efficiency of insurance companies using data envelopment analysis (DEA) having as insurer output an appropriately selected (for the firm under investigation) combination of solvency, claims-paying ability, and return on investment as outputs. These efficiency evaluations are further examined to study stock versus mutual form of organizational structure and agency versus direct marketing arrangements, which are examined separately and in combination. Comparisons with the "value-added" or "production" approach to insurer efficiency are presented. A new DEA approach and interpretation is also presented. INTRODUCTION This article uses the nonparametric properties of data envelopment analysis (DEA) coupled with distribution-free rank-order statistics to study the relative efficiency of the different organizational structures used by U.S. property and liability insurance companies (cross classified by their marketing distribution systems). 
Additionally, this article extends the interpretation of DEA toward a goal-directing technique with the goals as outputs rather than simply having a "product" as an output. This provides another focus and interpretation for DEA analysis in the insurance literature. We also use a form of DEA (the Range-Adjusted Measure, or RAM, model), new to the insurance literature, which is able to provide ordinal level efficiency scoring that allows for subsequent nonparametric statistical analysis such as regression, rank statistical analysis, etc. to be performed incorporating efficiency score as an explanatory variable in subsequent analysis. (1) We dichotomize our results by organizational form into mutual versus stock companies to examine whether these two organizational structures might have differential managerial strategic focus in terms of goals, and have different efficiency and slack variables when using solvency propensity, return on investment, and claims-paying ability as output goals. One might expect potential differences in efficiency between stock and mutual insurers due to the different incentive structures inherent in the two types of organizational forms; in stock companies return on shareholder investment dominates incentives, whereas solvency and claims-paying ability considerations can dominate considerations of mutual insurance company decision makers. Possible efficiency differences between mutual and stock types of organization are intrinsically intertwined with the use of the agency versus direct sales type of marketing distribution systems (2) and these dichotomies are also correlated to emphasis in commercial versus personal lines of insurance. Finally, the differences that can occur by using different DEA formulations (production approach considering losses as the output versus the financial intermediary approach of this article) are explored and discussed. 
THE RAM DEA MODEL (3) There is a theoretical problem in using DEA efficiency numbers from the standard CCR or BCC models for subsequent statistical analysis because, while DEA evaluates the efficiency of each firm, the comparison set for each firm may be different, producing potentially nonmetric-level data. …
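The RAM model itself is a slack-based linear program, but the basic DEA idea the article builds on can be illustrated without a solver: in the single-input, single-output constant-returns (CCR) case, a firm's efficiency score reduces to its productivity ratio relative to the best observed ratio. The example data are hypothetical.

```python
def ccr_efficiency(inputs, outputs):
    """Single-input, single-output CCR (constant-returns) DEA scores.

    Each decision-making unit's output/input ratio is divided by the
    best observed ratio, so efficient units score 1.0 and dominated
    units score strictly below 1.0. This is a special case; multi-input,
    multi-output DEA (including RAM) requires linear programming.
    """
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Three hypothetical insurers: the second produces the same output as the
# first but uses twice the input, so it scores 0.5.
scores = ccr_efficiency(inputs=[1.0, 2.0, 2.0], outputs=[1.0, 1.0, 2.0])
```

The article's point about nonmetric scores arises because, in the general multi-output setting, each firm is benchmarked against a different facet of the frontier, so scores from the CCR/BCC models are not directly comparable in the way this one-dimensional ratio is.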

115 citations


Journal ArticleDOI
TL;DR: In this paper, the authors build a multiperiod principal-agent model of the reinsurance transaction to identify moral hazard in the traditional reinsurance market; the empirical results are consistent with the model's predictions.
Abstract: This article attempts to identify moral hazard in the traditional reinsurance market. We build a multiperiod principal-agent model of the reinsurance transaction from which we derive predictions on premium design, monitoring, loss control, and insurer risk retention. We then use panel data on U.S. property liability reinsurance to test the model. The empirical results are consistent with the model's predictions. In particular, we find evidence for the use of loss-sensitive premiums when the insurer and reinsurer are not affiliates (i.e., not part of the same financial group), but little or no use of monitoring. In contrast, we find evidence for the extensive use of monitoring when the insurer and reinsurer are affiliates, where monitoring costs are lower. INTRODUCTION Insurance companies whose book of business is exposed to high risk, such as hurricane or earthquake losses or class action product liability lawsuits, have traditionally hedged the right tail of this exposure through reinsurance. Like primary insurance, reinsurance contracts encounter moral hazard. It is costly for the reinsurer to monitor the underwriting activities of the primary insurer and how the latter settles claims with its own policyholders. Consequently, reinsurance relaxes the incentive for the primary insurer to engage in careful underwriting and loss mitigation. This problem can be especially severe after a natural catastrophe where the primary insurer is overwhelmed with flood or earthquake claims and so is able to pass on the cost of settlements to the reinsurer. Traditional reinsurance includes price controls against moral hazard, including deductibles, co-payments, and "ex post settling up," which is a retrospective adjustment of the premium based on losses incurred during the policy period that is also known as "retrospective rating." Less formal and longer-term controls are also used. Reinsurance is usually conducted as a long-term relationship. 
Experience bonds parties together and increases the cost of opportunistic behavior. The primary insurer gets continuity of access to reinsurance, whereas the reinsurer can use the relationship's duration to increase the effectiveness of its monitoring, and can use experience to set future prices and terms. (1) Controlling moral hazard via long-term relationships can be costly. Froot and O'Connell (1997) have documented the costs of catastrophe reinsurance and show that the ratio of premium to expected loss increases dramatically at higher layers of coverage (i.e., for reinsurance in the right-hand tail of the loss distribution). Since moral hazard will increase in intensity the greater the level of reinsurance, this pricing pattern is quite consistent with unanticipated moral hazard. (2) Moreover, the sheer size of these premium loadings suggests that addressing moral hazard in this way is expensive. (3) These large premium loads are relevant today as both insured property and insured claims have increased significantly in the past few decades. (4) Monitoring can also redress moral hazard. (5) In his transaction-cost-based model of firms, Williamson (1985) argued that, whereas markets use price incentives to resolve agency conflicts between separate organizations, monitoring can be a more efficient way to resolve conflicts within organizations where there is greater access to information. Conflicts within organizations are not fully internalized without monitoring since it is difficult to observe the contribution that each worker makes across all affiliated groups; hence, each agent is typically compensated according to his or her readily observed output. The models of vertical integration by Riordan (1990a,b) and Cremer (1993, 1995) show that for transactions within firms, where monitoring is relatively cheap, more emphasis should be placed on monitoring and less on contractual incentives. 
The opposite is true for transactions between firms where monitoring costs are higher. …

85 citations


Journal ArticleDOI
TL;DR: In this paper, the authors presented new evidence on the cost of equity capital by line of insurance for the property-liability insurance industry and used the full-information industry beta (FIB) methodology to decompose the cost by line.
Abstract: This article presents new evidence on the cost of equity capital by line of insurance for the property-liability insurance industry. To do so we obtain firm beta estimates and then use the full-information industry beta (FIB) methodology to decompose the cost of capital by line. We obtain full-information beta estimates using the standard one-factor capital asset pricing model and extend the FIB methodology to incorporate the Fama-French three-factor cost of capital model. The analysis suggests the cost of capital for insurers using the Fama-French model is significantly higher than the estimates based upon the CAPM. In addition, we find evidence of significant differences in the cost of equity capital across lines. INTRODUCTION Cost of capital estimation is becoming increasingly important for insurers. First introduced during the 1970s in regulatory proceedings, the application of financial methods in pricing, reserving, and other types of financial decision making has grown rapidly over the past two decades. (1) Recent developments include asset-liability management techniques (Panjer, 1998), methodologies to allocate equity capital by line of business (e.g., Myers and Read, 2001), market-based project evaluation techniques such as risk-adjusted return on capital (RAROC), and the projected introduction of fair value accounting for insurer liabilities (Girard, 2002; Dickinson, 2003). These and other changes have intensified the need to find reliable methods to estimate the cost of capital for insurance firms. The use of an incorrect cost of capital in capital budgeting, pricing, and other applications can have serious consequences, with the firm losing market share to competitors if the cost of capital is overestimated and losing market value if the cost of capital is underestimated. Essentially, using incorrect cost of capital estimates can lead to the firm's investing in negative net present value projects that destroy firm value. 
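The two cost-of-equity models the article compares can be written down directly. The sketch below shows why three-factor estimates typically exceed CAPM estimates when size and value loadings are positive; all loadings and factor premia here are hypothetical illustrations, not the article's estimates.

```python
def capm_cost_of_equity(rf, beta, market_premium):
    """One-factor CAPM: r_e = r_f + beta * (E[r_m] - r_f)."""
    return rf + beta * market_premium

def ff3_cost_of_equity(rf, b_mkt, s_smb, h_hml,
                       mkt_premium, smb_premium, hml_premium):
    """Fama-French three-factor model: adds size (SMB) and value (HML)
    factor premia to the market factor."""
    return rf + b_mkt * mkt_premium + s_smb * smb_premium + h_hml * hml_premium

# Hypothetical insurer: positive SMB and HML loadings push the
# three-factor cost of equity above the one-factor CAPM figure,
# the direction of the difference the article reports.
capm = capm_cost_of_equity(rf=0.04, beta=0.9, market_premium=0.06)
ff3 = ff3_cost_of_equity(0.04, b_mkt=0.9, s_smb=0.3, h_hml=0.4,
                         mkt_premium=0.06, smb_premium=0.03, hml_premium=0.04)
```

The full-information beta step (regressing firm-level betas on line-of-business participation weights to recover by-line betas) sits on top of either model and is omitted here.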
Choosing the appropriate cost of capital for specific projects is often a challenging task. The cost of capital varies significantly across industries, and cost of capital research has shown that there is a significant industry factor for insurance (Fama and French, 1997). Although insurance is a diverse industry, encompassing numerous lines of business with very different risk characteristics, little progress has been made in estimating costs of capital by line of business within the insurance industry. The objective of the present article is to remedy this deficiency in the existing literature by developing cost of capital models that reflect the line of business characteristics of firms in the property-liability insurance industry to assist insurers in making decisions that maximize firm value. In addition to providing valuable information for financial decision making, estimating the cost of capital by line also contributes to the literature on explaining cross-sectional price differences in the insurance industry (e.g., Sommer, 1996; Phillips, Cummins, and Allen, 1998; Froot, 2003). The issue addressed in this article has been studied in the financial literature as the problem of estimating the cost of capital for divisions of conglomerate firms. Because the conglomerate firm itself rather than the division is traded in the capital market, market value data can be used to estimate the overall cost of capital for the conglomerate but not for the individual divisions comprising the firm. The classic approach for estimating the divisional cost of capital is the pure-play approach (Fuller and Kerr, 1981) that approximates the divisional cost of capital as the average cost of capital for publicly traded "pure-play" firms that specialize in the same product as the division under consideration. The pure-play technique performs well when a relatively large number of pure-play firms of various sizes can be found. 
However, in many industries, there are only a few true specialist firms in some product lines and they often tend to be relatively small (Ibbotson Associates, 2002). …

75 citations


Journal ArticleDOI
TL;DR: This article used the complete property-casualty insurance files of the National Association of Insurance Commissioners from 1984 to 1991 to assess the effect of medical malpractice reforms pertaining to damages levels and the degree to which these damages are insurable.
Abstract: This article uses the complete property-casualty insurance files of the National Association of Insurance Commissioners from 1984 to 1991 to assess the effect of medical malpractice reforms pertaining to damages levels and the degree to which these damages are insurable. Limits on noneconomic damages were most influential in affecting insurance market outcomes. Several punitive damages variables specifically affected the medical malpractice insurance market, including limits on punitive damage levels, prohibitions of the insurability of punitive damages, and prohibition of punitive damages awards. Estimates for insurance losses, premiums, and loss ratios indicate effects of reform in the expected directions, where the greatest constraining effects were for losses. The quantile regression analysis of losses indicates that punitive damages reforms and limits were most consequential for firms at the high end of the loss spectrum. Tort reforms also enhanced insurer profitability during this time period.

70 citations


Journal ArticleDOI
TL;DR: Artis, Ayuso, and Guillen (2002, Journal of Risk and Insurance 69: 325-340; henceforth AAG) estimate a modified logit model allowing for the possibility that some claims classified as honest might actually be fraudulent.
Abstract: Recently, Artis, Ayuso, and Guillen (2002, Journal of Risk and Insurance 69: 325-340; henceforth AAG) estimated a logit model using claims data. Some of the claims are categorized as "honest" and other claims are known to be fraudulent. Using the approach of Hausman, Abrevaya, and Scott-Morton (1998, Journal of Econometrics 87: 239-269), AAG estimate a modified logit model allowing for the possibility that some claims classified as honest might actually be fraudulent. Applying this model to data on Spanish automobile insurance claims, AAG find that 5 percent of the fraudulent claims go undetected. The purpose of this article is to estimate the model of AAG using a logit model with missing information; a constrained version of this model is used to reexamine the Spanish insurance claim data. We show that both this approach and the AAG approach can be used to probabilistically identify misclassified claims.
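The Hausman, Abrevaya, and Scott-Morton device referenced above mixes the usual logit probability with misclassification rates: the probability of an observed "fraudulent" label is P(y_obs = 1 | x) = α1 + (1 − α0 − α1)·F(x'β). A minimal sketch, with hypothetical parameter values:

```python
import math

def logistic(z):
    """Logistic CDF, the F(.) of the latent logit model."""
    return 1.0 / (1.0 + math.exp(-z))

def misclassified_logit_prob(xb, alpha0, alpha1):
    """Observed-outcome probability under a Hausman-Abrevaya-Scott-Morton
    style misclassification model:

        P(y_obs = 1 | x) = alpha1 + (1 - alpha0 - alpha1) * F(x'b)

    alpha1: P(labeled fraudulent | truly honest)   -- false positive rate
    alpha0: P(labeled honest | truly fraudulent)   -- false negative rate
    """
    return alpha1 + (1.0 - alpha0 - alpha1) * logistic(xb)

# Hypothetical claim with latent index x'b = 0 (even odds of fraud) and a
# 5 percent chance that fraud goes undetected (alpha0 = 0.05, alpha1 = 0).
p = misclassified_logit_prob(0.0, alpha0=0.05, alpha1=0.0)
```

Estimation then proceeds by maximum likelihood over β, α0, and α1 jointly, which is how the 5 percent undetected-fraud figure in the abstract is recovered from labeled claims data.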

Journal ArticleDOI
TL;DR: It is shown that a higher coinsurance rate may lead to either less fraud in the market and a lower probability of patients searching for second opinions or more fraud and more searches.
Abstract: We study the impact of variations in the degree of insurance on the amount of fraud in a physician-patient relationship. In a market for credence goods, where prices are regulated by an authority, physicians act as experts. Due to their informational advantage, physicians have an incentive to cheat by pretending to perform inappropriately high treatment levels, thereby overcharging patients. We focus on how both patients' and physicians' incentives change when the proportional degree of coinsurance varies. It is shown that a higher coinsurance rate may lead either to less fraud in the market and a lower probability of patients searching for second opinions, or to more fraud and more searches.

Journal ArticleDOI
TL;DR: In this paper, the nonparametric frontier method was used to examine differences in efficiency for three unique organizational forms in the Japanese nonlife insurance industry: keiretsu firms, nonspecialized independent firms, and specialized independent firms.
Abstract: This article uses the nonparametric frontier method to examine differences in efficiency for three unique organizational forms in the Japanese nonlife insurance industry—keiretsu firms, nonspecialized independent firms (NSIFs), and specialized independent firms (SIFs). It is not possible to reject the null hypothesis that efficiencies are equal, with one exception. Keiretsu firms seem to be more cost-efficient than NSIFs. The results have important implications for the stakeholders of the NSIFs. An examination of the productivity changes across the different organizational forms reveals deteriorating efficiency for all three types of firms throughout the 1985–1994 sample period. Finally, the evidence also suggests that the value-added approach and the financial intermediary approach provide different but complementary results.
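The nonparametric frontier scores referred to above come from solving one small linear program per firm (data envelopment analysis). A minimal input-oriented, constant-returns sketch on made-up data (three firms, one input, one output, not the Japanese insurer data) might look like:

```python
import numpy as np
from scipy.optimize import linprog

# Synthetic firm data (assumed): one input and one output per firm.
X = np.array([[2.0], [4.0], [3.0]])   # inputs
Y = np.array([[2.0], [2.0], [3.0]])   # outputs

def ccr_efficiency(k):
    """Input-oriented CCR-DEA score for firm k: minimize theta such that
    a nonnegative combination of peers uses at most theta * inputs_k
    while producing at least outputs_k."""
    n = len(X)
    # decision variables: [theta, lambda_1, ..., lambda_n]
    c = np.concatenate([[1.0], np.zeros(n)])
    # input constraints:  sum_j lambda_j x_j - theta * x_k <= 0
    A_in = np.hstack([-X[k].reshape(-1, 1), X.T])
    # output constraints: -sum_j lambda_j y_j <= -y_k
    A_out = np.hstack([np.zeros((Y.shape[1], 1)), -Y.T])
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(X.shape[1]), -Y[k]])
    bounds = [(None, None)] + [(0, None)] * n
    return linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds).fun

scores = [ccr_efficiency(k) for k in range(len(X))]
print(scores)
```

Here firms 1 and 3 sit on the frontier (score 1), while firm 2, with half their output-to-input ratio, scores 0.5.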

Journal ArticleDOI
TL;DR: In this article, the authors investigate multi-period portfolio selection problems in a Black & Scholes type market where a basket of 1 risk free and m risky securities are traded continuously and propose accurate approximations based on the concept of comonotonicity, as studied in Dhaene, Denuit, Goovaerts, Kaas & Vyncke.
Abstract: We investigate multiperiod portfolio selection problems in a Black & Scholes type market where a basket of 1 risk-free and m risky securities are traded continuously. We look for the optimal allocation of wealth within the class of "constant mix" portfolios. First, we consider the portfolio selection problem of a decision maker who invests money at predetermined points in time in order to obtain a target capital at the end of the time period under consideration. A second problem concerns a decision maker who invests some amount of money (the initial wealth or provision) in order to be able to fulfill a series of future consumptions or payment obligations. Several optimality criteria and their interpretation within Yaari's dual theory of choice under risk are presented. For both selection problems, we propose accurate approximations based on the concept of comonotonicity, as studied in Dhaene, Denuit, Goovaerts, Kaas & Vyncke (2002a,b). Our analytical approach avoids simulation, and hence reduces the computing effort drastically.
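The comonotonic approach mentioned above exploits the fact that quantiles are additive for a comonotonic sum: the p-quantile of the comonotonic bound is simply the sum of the marginal p-quantiles, so no simulation is needed. A toy illustration with assumed lognormal terminal values (the parameters are purely illustrative, not from the article):

```python
import numpy as np
from scipy.stats import lognorm

# Illustrative (assumed) drifts, volatilities, and payment dates.
mus = np.array([0.04, 0.05, 0.06])
sigmas = np.array([0.10, 0.15, 0.20])
horizons = np.array([1.0, 2.0, 3.0])

p = 0.95
# Marginal p-quantiles of each lognormal accumulation factor.
q_marginal = lognorm.ppf(p, s=sigmas * np.sqrt(horizons),
                         scale=np.exp(mus * horizons))
# Additivity of quantiles under comonotonicity: the p-quantile of the
# comonotonic sum is the sum of the marginal p-quantiles.
q_comonotonic = q_marginal.sum()
print(q_comonotonic)
```

By contrast, the quantile of the true (dependent) sum has no closed form; the comonotonic sum serves as an analytically tractable, convex-order upper bound.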

Journal ArticleDOI
TL;DR: In this article, a simple decomposition of the value of an insurance company is presented, based on the assumption that the risk of a company defaulting on its obligations is captured by the put option.
Abstract: INTRODUCTION One of the fundamental tenets of financial economics is that insurance companies, just as other financial and non-financial firms, have a very strong incentive to maximize current shareholder value. (1) This seemingly simple observation leads to a wide variety of managerial behavior. (2) In this article we will review a simple decomposition of the value of an insurance company. The decomposition illuminates the motivation for a variety of strategies that can be observed in practice. These strategies run the gamut from accounting manipulation through risk transfer schemes to positive net present value (NPV) "project" selection. We will review these in some detail below. Then we will illustrate these techniques from our experience with every major U.S. life insurer insolvency since Baldwin United in 1984. COMPONENTS OF FIRM VALUE The market value of insurance company owners' equity is defined as the difference between the market value of assets and the market value of liabilities. For purposes of valuation, it is helpful to partition more finely the components of equity value. In this section, we will partition the value of insurance company owners' equity, or stock in the case of a stock company, into its four major components: franchise value, market value of tangible assets, present value of liabilities, and put option value (see Figure 1). The first two of these components are clearly assets. The third component is related to liability value. The put option value can be treated as part of the liability value as a contra-liability or as an asset. We will discuss each component in turn. [FIGURE 1 OMITTED] The franchise value stems from what economists call "economic quasi-rents." It is the present value of the "quasi-rents" that an insurer is expected to garner because it has scarce resources, scarce capital, charter value, licenses, a distribution network, personnel, reputation, and so forth. It includes renewal business.
(3) Franchise value is dependent on firm insolvency risk. The less insolvency risk there is, the more likely the firm is to remain solvent long enough to capture all the available economic rents arising from its renewal business, its distribution network, its reputation, and so forth. (4) The market value of liabilities includes pricing of all contingencies within the insurance liabilities. In our decomposition we have elected to separate out the risk of the insurance company defaulting on its obligations. The risk of such a default is captured by the put option. Other contingencies, including such risks as interest rate contingencies, mortality contingencies, or equity market contingencies, are included in the present value of liabilities. Thus, the present value of liabilities is the present value of all promised liability cash flows discounted at Treasury rates rather than discounting at rates that would reflect the possibility of default. In the case of interest sensitive cash flows, it is the value of the Treasury security portfolio (including derivatives as needed) that fully defeases the liabilities with all non-default contingencies fully hedged. This quantity will be larger than the market value of liabilities where the promised liability cash flows are discounted at rates that reflect the risk of failure to make the promised cash flows. Merton (1974) initially observed that the value of a debt obligation could be expressed as the value of a default-risk free bond minus a put option. The put option represents the value to the issuer of the possibility of defaulting on the obligation. The same holds true for insurance liability obligations. The market value of liabilities can be expressed as the present value of liabilities minus the put option that represents the value to the issuer of potentially defaulting on the obligations. 
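Merton's observation can be made concrete with a standard Black-Scholes put on the insurer's assets, struck at the promised payment. The numbers below are purely illustrative assumptions, not taken from the article: $100 of assets backing a single $90 promise due in five years.

```python
import math
from statistics import NormalDist

def merton_put(assets, promised, r, sigma, T):
    """Black-Scholes value of the insolvency put: the issuer's option to
    default on a single promised payment at horizon T (Merton, 1974)."""
    N = NormalDist().cdf
    d1 = (math.log(assets / promised) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return promised * math.exp(-r * T) * N(-d2) - assets * N(-d1)

# Present value of liabilities: promised cash flow discounted at the
# default-free (Treasury) rate, per the decomposition in the text.
pv_liabilities = 90 * math.exp(-0.04 * 5)
put = merton_put(assets=100, promised=90, r=0.04, sigma=0.25, T=5)
# Market value of liabilities = present value of liabilities - put option.
mv_liabilities = pv_liabilities - put
print(round(put, 2), round(mv_liabilities, 2))
```

The put value is the wedge between the default-free present value of the promises and what the market would actually pay for them given the possibility of insurer default.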
(5) The market value of tangible assets and the present value of liabilities can be netted together, producing what we will call "net tangible value. …

Journal ArticleDOI
TL;DR: In this paper, the authors hypothesize that third-party insurers use general damage awards to reduce the incentive to submit exaggerated claims for specific damages for injuries and lost wages, and find evidence using data on over 17,000 closed bodily injury claims that special damage claims that exceed their expected value receive proportionally lower general damages awards than claims that do not.
Abstract: Awards for pain and suffering and other noneconomic losses account for over half of all damages awarded under third-party auto insurance bodily injury settlements. This article hypothesizes that third-party insurers use general damage awards to reduce the incentive to submit exaggerated claims for specific damages for injuries and lost wages. Consistent with this hypothesis, the article finds evidence using data on over 17,000 closed bodily injury claims that special damage claims that exceed their expected value receive proportionally lower general damage awards than claims that do not. Among the implications of this research is the possibility that insurers will be less zealous in challenging fraudulent special damage claims under a third-party insurance regime than they will be under a first-party insurance regime in which access to general damages is limited.

Journal ArticleDOI
TL;DR: In this article, the authors examined data for the year ended December 31, 1997 for 80 publicly traded property-liability insurers that have Best financial strength ratings of their consolidated insurance-operating subsidiaries.
Abstract: We examine data for the year ended December 31, 1997, for 80 publicly traded property-liability insurers that have Best financial strength ratings of their consolidated insurance-operating subsidiaries. These firms employ a holding company structure, in which a parent owns the stock of multiple insurance-operating subsidiaries. The operating subsidiaries prepare a consolidated annual report using the Statutory Accounting Principles (SAP), and an analogous set of financial statements based on the Generally Accepted Accounting Principles (GAAP) is released by the parent. We find that the financial characteristics important in determining ratings at the individual firm level-capitalization, liquidity, profitability, and size-are also important at the group level. Further, financial ratios from holding company statements are incrementally useful in the ratings process, after group-level ratios have been taken into account. Robustness tests based on a subsample of holding companies with minimal investment outside of the property-liability industry reinforce our conclusion that parent company statements influence consolidated group ratings. However, our data do not allow us to separate the relative contribution of the GAAP model and underlying transactions to the ratings decision.

Journal ArticleDOI
TL;DR: In this paper, the authors developed an index for tracking the dynamic behavior of life (pension) annuity payouts over time, based on the concept of self-annuitization.
Abstract: I develop an index for tracking the dynamic behavior of life (pension) annuity payouts over time, based on the concept of self-annuitization. Our implied longevity yield (ILY) value is defined equal to the internal rate of return (IRR) over a fixed deferral period that an individual would have to earn on their investable wealth if they decided to self-annuitize using a systematic withdrawal plan. A larger ILY number indicates a greater relative benefit from immediate annuitization. I use age 65--with a 10-year period certain--compared against the same annuity at age 75 as the standard benchmark for the index, and calibrate to a comprehensive time series of weekly (Canadian) life annuity quotes from 2000 through 2004. I find that during this period the ILY varied from 5.45 percent to 6.90 percent for males and from 5.00 percent to 6.42 percent for females and was highly correlated with a duration-weighted average yield of 10-year and long-term Government of Canada bonds. I believe our ILY metric can help promote and explain the benefits of acquiring lifetime payout annuities by translating the abstract-sounding longevity insurance into more concrete and measurable financial rates of return. INTRODUCTION In this article, I develop a financial metric and index for tracking the time series behavior of life annuity payouts. Indeed, as North American baby boomers approach age 65 and their so-called retirement years there is a growing interest in pension and annuity issues, especially given the apparent liability crises in defined benefit (DB) pension plans. Most retirees lack the actuarial intuition needed to understand the longevity insurance benefits of annuitization compared with traditional alternatives in the market. It is also difficult to position the rate of return from life annuities within a portfolio's risk and return context. 
I therefore believe that a properly designed annuity payout index might contribute to a greater appreciation and intuition for these products. Against this demographic backdrop, a number of recent articles in the pensions, insurance, and actuarial literature (1) have explored the properties of self-annuitization. This retirement strategy is a consumption and investment plan that attempts to closely mimic the payout from a generic life annuity while allocating investable assets to minimize or limit the probability of lifetime ruin. This plan is not necessarily optimal within a classical life-cycle model with no bequest motives--in which continuously renegotiated tontine annuities are available--as originally demonstrated by Yaari (1965) and recently extended by Davidoff, Brown, and Diamond (2003). However, as pointed out by Yagi and Nishigaki (1993) and others, incomplete annuity markets are just one of the many theoretical justifications for consumers who shun annuitization. In practice, the popularity of and interest in "drawdown" and "annuity alternative" strategies continues to grow among practitioners. Our proposed index goes beyond a (trivial) cross-sectional average of life annuity payouts offered by different insurance companies. Rather, our contemporaneous index value is defined equal to the internal rate of return (IRR) that an individual would have to earn on their financial portfolio during a deferral period, if they choose to self-annuitize instead of purchasing a life annuity. I define this IRR--which is based on the current term structure of annuity payouts--as the implied longevity yield (ILY) at a given age and for a given deferral period. Later, I discuss the relationship between ILY values and the traditional actuarial concept of mortality credits. The (unique) ILY value solves a nonlinear equation that is at the core of the article.
I also present an approximation that provides a relatively simple and intuitive expression for the ILY as the root of a quadratic equation. From a practical perspective, I suggest using age 65 against age 75 as the standard benchmark for the ongoing index, since this appears to be a common age range at which annuitization decisions are made. …
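The underlying nonlinear equation is straightforward to solve numerically. The sketch below is a simplified annual-frequency version under assumed annuity prices (not the author's exact calibration or the Canadian quote data): invest the age-65 premium, withdraw the same $1 annual income during the 10-year deferral, and find the return that leaves exactly the age-75 premium.

```python
from scipy.optimize import brentq

def ily(a65, a75, deferral=10):
    """Implied longevity yield: the return r at which self-annuitizing the
    age-65 premium, while withdrawing the same $1 annual income, leaves
    exactly the age-75 premium at the end of the deferral period."""
    def shortfall(r):
        wealth = a65
        for _ in range(deferral):
            wealth = wealth * (1 + r) - 1.0   # grow, then withdraw $1 income
        return wealth - a75
    return brentq(shortfall, -0.5, 1.0)       # bracket and solve for the root

# Illustrative (assumed) annuity factors: $14.20 buys $1/yr for life at 65,
# $10.50 buys the same income stream starting at 75.
r = ily(a65=14.20, a75=10.50)
print(f"ILY = {r:.2%}")
```

With these assumed prices the ILY comes out near 5 percent, within the ballpark of the 5.0-6.9 percent range the article reports; a larger ILY means the self-annuitizer must beat a higher hurdle rate, making immediate annuitization relatively more attractive.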

Journal ArticleDOI
TL;DR: In this paper, a dynamic version of the Rothschild and Stiglitz model is used to investigate the nature of dynamic insurance contracts by considering both conditional and unconditional dynamic contracts, and it is shown that dynamic contracts yield a welfare improvement only if they are conditional on past performance.
Abstract: We take a dynamic perspective on insurance markets under adverse selection and study a dynamic version of the Rothschild and Stiglitz model. We investigate the nature of dynamic insurance contracts by considering both conditional and unconditional dynamic contracts. An unconditional dynamic contract has insurance companies offering contracts where the terms of the contract depend on time, but not on the occurrence of past accidents. Conditional dynamic contracts make the actual contract also depend on individual past performance (such as in car insurances). We show that dynamic insurance contracts yield a welfare improvement only if they are conditional on past performance. With conditional contracts, the first-best can be approximated if the contract lasts long. Moreover, this is true for any fraction of low-risk agents in the population.

Journal ArticleDOI
TL;DR: It is suggested that the insurance premium increases paid by Americans as a result of firearm violence are probably of the same order of magnitude as the total medical costs due to gunshots or the increased cost of administering the criminal justice system due to gun crime.
Abstract: The United States remains far behind most other affluent countries in terms of life expectancy. One of the possible causes of this life expectancy gap is the widespread availability of firearms and the resulting high number of U.S. firearm fatalities: 10,801 homicides in 2000. The European Union experienced 1,260 homicides, Japan only 22. Using multiple decrement techniques, I show that firearm violence shortens the life of an average American by 104 days (151 days for white males, 362 days for black males). Among all fatal injuries, only motor vehicle accidents have a stronger effect. I estimate that the elimination of all firearm deaths in the United States would increase the male life expectancy more than the total eradication of all colon and prostate cancers. My results suggest that the insurance premium increases paid by Americans as a result of firearm violence are probably of the same order of magnitude as the total medical costs due to gunshots or the increased cost of administering the criminal justice system due to gun crime.
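The multiple decrement logic can be illustrated with a toy cause-deleted life table. Everything below is synthetic (a Gompertz-like mortality curve and an assumed constant firearm share of deaths), so the resulting figure is not comparable to the article's 104 days.

```python
import numpy as np

# Synthetic all-cause mortality probabilities q_x (assumed Gompertz-like).
ages = np.arange(0, 111)
q = np.minimum(1.0, 0.0003 * np.exp(0.09 * ages))
# Assumed share of deaths at each age due to firearms (illustrative).
f_firearm = np.full_like(q, 0.005)

def life_expectancy(q):
    """Period life expectancy at birth from a vector of q_x values,
    with a rough half-year correction for deaths within the year."""
    survival = np.cumprod(np.concatenate([[1.0], 1.0 - q[:-1]]))
    return survival.sum() - 0.5

# First-order multiple-decrement approximation: deleting one cause
# rescales q_x by the share of deaths NOT due to that cause.
e0_baseline = life_expectancy(q)
e0_deleted = life_expectancy(q * (1 - f_firearm))
gain_days = 365.25 * (e0_deleted - e0_baseline)
print(f"life expectancy gained: {gain_days:.0f} days")
```

The article's calculation uses observed U.S. cause-of-death shares by age, which are heavily concentrated among young adult males, rather than the flat share assumed here.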

Journal ArticleDOI
TL;DR: Ippolito et al. as discussed by the authors show that firms that switch from a traditional defined-benefit pension plan to a defined-contribution-type plan typically are not poor performers.
Abstract: Firms that wish to switch from a traditional defined-benefit pension plan to a defined-contribution-type plan have a choice between converting to a cash-balance plan or replacing the defined-benefit plan with a full-fledged defined-contribution plan. According to Ippolito and Thompson's (1999; Industrial Relations, 39: 228-245) excise tax avoidance hypothesis, a number of firms have switched to cash-balance plans because conversion allows the firm to avoid excise taxes on its excess pension assets. In contrast to existing studies, our evidence supports the excise tax avoidance hypothesis. Cash-balance plan conversions also have been criticized for imposing pension losses on older employees. The implicit contract theory of pensions predicts that poorly performing firms would be the ones that would impose losses on employees. However, our evidence indicates that firms converting to cash-balance plans typically are not poor performers. INTRODUCTION The number of employees enrolled in traditional defined-benefit pension plans has declined dramatically in the past two decades relative to enrollment in defined-contribution-type plans (see, e.g., Ippolito, 1995). An interesting aspect of this transition is the large number of sponsors that converted traditional defined-benefit plans into cash-balance plans during the latter part of the 1990s. Cash-balance plans are similar to defined-contribution plans from an employee's perspective--for example, cash-balance plans have individual employee account balances that are portable. However, cash-balance plans operate like defined-benefit plans from a sponsor's perspective and are treated as defined-benefit plans for regulatory purposes. Ippolito (2002) reports that about 20 percent of defined-benefit plans, weighted by participation, have converted to cash-balance plans. 
(1) The objective of this article is to present evidence on two separate, not mutually exclusive hypotheses about why firms convert to cash-balance plans. One hypothesis relates to the avoidance of excise taxes, and the other relates to the implicit contract theory of pensions. Promoters of cash-balance plans argue that these plans provide a defined-contribution-type plan that is more valuable than a defined-benefit plan for most employees, especially younger employees who are likely to switch jobs frequently during their career. (2) If a defined-contribution-type plan is preferred, the natural question is why not simply terminate the defined-benefit plan and adopt a full-fledged defined-contribution plan. Ippolito and Thompson (1999) suggest (without providing evidence) that the answer lies in the tax code (also see Ippolito, 2001a and 2001b). Congress imposed an excise tax on reverted excess pension assets in the late 1980s and increased it under some circumstances to 50 percent in 1990. Thus, if a firm terminates an overfunded defined-benefit plan in favor of a full-fledged defined-contribution plan in the 1990s, it will lose a substantial part of the excess assets to excise taxes. If instead the firm converts to a cash-balance plan, the firm avoids the excise tax. (3) The avoidance of the excise tax is not costless, however. First, greater administrative costs are likely to be incurred in managing a cash-balance plan than a full-fledged defined-contribution plan, because a cash-balance plan must meet the regulatory requirements of defined-benefit plans (Clark and McDermed, 1990; Ippolito, 1997). Second, when a firm converts to a cash-balance plan, it does not immediately gain access to the excess pension assets. Instead, the excess assets go into the cash-balance plan and must be used to fund future retirement benefits.
Consequently, when deciding whether to convert to a cash-balance plan or switch to a full-fledged defined-contribution plan, a firm with an overfunded plan must consider the tradeoff between the excise taxes on the excess pension assets and the cost of restricting the use of those excess assets. …

Journal ArticleDOI
TL;DR: In this article, the authors assess empirically what impact introduction of the bonus-malus system (BMS) has had on road safety in Tunisia and conclude that the BMS reduced the probability of reported accidents for good risks but had no effect on bad risks.
Abstract: The objective of this study is to assess empirically what impact the introduction of the bonus-malus system (BMS) has had on road safety in Tunisia. The results of the Tunisian experiment are of particular importance since, during the last decade, many European countries decided to eliminate their mandatory bonus-malus schemes. These results indicate that the BMS reduced the probability of reported accidents for good risks but had no effect on bad risks. Moreover, the reform's overall effect on reported accident rates is not statistically significant, but the exit variable has a positive effect on the number of reported accidents. To avoid any potential selectivity bias, we also made a joint estimate of the reported accident and selection equations. The reform has a positive effect on the exit variable but still does not affect the accidents reported. This indicates that policyholders who switch companies are those attempting to skirt the incentive effects imposed by the new rating policy. Some of the control variables are statistically significant in explaining the number of reported accidents: the vehicle's horsepower, the policyholder's place of residence, and the coverages under which policyholders are insured.

Journal ArticleDOI
TL;DR: In this article, the authors examined the conditions under which different types of risks can optimally be covered by a single insurance policy and argued that the case for umbrella policies under multiple moral hazard is limited in practice.
Abstract: Under certain cost conditions the optimal insurance policy offers full coverage above a deductible, as Arrow and others have shown. However, many insurance policies currently provide coverage against several losses although the possibilities for the insured to affect the loss probabilities by several prevention activities (multiple moral hazard) are substantially different. This article shows that optimal contracts under multiple moral hazard generally call for complex reimbursement schedules. It also examines the conditions under which different types of risks can optimally be covered by a single insurance policy and argues that the case for umbrella policies under multiple moral hazard is limited in practice.

Journal Article
TL;DR: Public Finance and Public Policy in the New Century, edited by Sijbren Cnossen and Hans-Werner Sinn as mentioned in this paper is a tribute to honor the 90th birthday of Richard Musgrave, providing a moral view of government that attempts to maximize societal welfare through its influences on the allocation and distribution of resources, and the stabilization of economic activity.
Abstract: Public Finance and Public Policy in the New Century, edited by Sijbren Cnossen and Hans-Werner Sinn. Public Finance and Public Policy in the New Century is a tribute to honor the 90th birthday of Richard Musgrave. The book evolved from a series of presentations at a 2001 conference of the Center for Economic Studies of the Ludwig-Maximilians-Universitat in Munich. Until the 1980s, every student of public finance studied from Musgrave's Theory of Public Finance, and the collection of public finance specialists contributing to this volume of essays is a who's who of academic leaders in the field of public finance over the past decades.1 Musgrave provided a moral view of government that attempts to maximize societal welfare through its influences on the allocation and distribution of resources, and the stabilization of economic activity. Government tax and expenditure policies are now viewed as relatively clumsy short-run stabilization tools, but the role of government in the maximization of societal welfare is a theme of the book and reflects the essence of Musgrave's writings, especially the purpose of the conceptual separation of the allocative and distributive functions. "In ideal circumstances, the Allocative Branch should be concerned with taking the economy to the society's utility possibilities frontier by exploiting all gains from trade, while the redistributive branch alone need be concerned with choosing the ethically preferred point" (Boadway et al., p. 333). Because the focus of interest for this journal is risk management and insurance, most risk specialists will find section III of the book closest to their interests. The section, titled "The Welfare State in an Integrating World," contains two chapters on social insurance and one on the use of medical care by the self-employed. Other risk and insurance topics are sprinkled through the book.
For example, a chapter on fiscal federalism and intergovernmental risk sharing provides a discussion of moral hazard issues in a scheme to allocate tax revenue and auditing intensity among the layers of government. The focus of section III is the fiscal pressure of aging populations on government social insurance and medical programs. Hans-Werner Sinn begins the social insurance discussion with a focus on Germany's pension insurance system. Drawing an analogy to the ability of Roman slaves to save during their captivity to purchase their freedom, Prince Otto von Bismarck proposed the 1881 social insurance legislation that would eliminate the need for the elderly to beg for support from their children. As in many countries, German demographics have shifted since Bismarck's time, from a situation in which four workers support each retiree toward a projected 2030 world in which two workers will support one retiree. The relative decline in the young and the increasing longevity of the population are problems faced by many industrial countries, and Germany's reaction is relevant to the current U.S. debate. In 1992, the Bundestag defined a comprehensive program of sacrifices. It replaced the tying of pension benefits to gross wages with a tie to net wages; it eliminated early retirement; it abolished pensions for occupational invalidity (due to a decline in earning capacity); it reduced the benefit from 70 percent of gross to 64 percent of net wages over a phase-in period extending to 2030 (a provision that was later abolished by a new Bundestag majority); and it made a host of other changes. Ultimately, Sinn recommends obligatory private savings with a variable rate set to stabilize the sum of this rate and the country's pay-as-you-go contribution rate. Largely agreeing with Sinn's analysis, the comments by Georges de Menil ask questions that are often avoided in the current U.S. debate.
He believes the only way to protect the credibility of the existing PAYGO system is to scale back entitlements, but his discussion provokes questions that demonstrate the commingling reality of the taxation and allocation functions of government. …

Journal ArticleDOI
TL;DR: In this paper, the authors consider the case in which government assistance takes the form of guaranteeing some minimum wealth level, such as via direct government transfers following a loss, and examine its effects within an insurance market subject to adverse selection.
Abstract: We consider a competitive insurance market with adverse selection. Unlike the standard models, we assume that individuals receive the benefit of some type of potential government assistance that guarantees them a minimum level of wealth. For example, this assistance might be some type of government-sponsored relief program, or it might simply be some type of limited liability afforded via bankruptcy laws. Government assistance is calculated ex post of any insurance benefits. This alters the individuals' demand for insurance coverage. In turn, this affects the equilibria in various insurance models of markets with adverse selection. INTRODUCTION Governments often help in protecting their citizenry against risks. This may take many forms. For example, governments might offer public insurance; they might reinsure particularly problematic catastrophic risks; or they might provide for low-interest loans following a severe loss. Oftentimes, this relief takes the form of guaranteeing some minimum level of wealth. Poorer families suffering a loss might receive direct transfer payments to bring them up to some predetermined minimum wealth level. Or consider bankruptcy laws that allow for one to shield some prespecified level of wealth against creditors. In this article, we consider only the case in which government assistance takes the form of guaranteeing some minimum wealth level, such as via direct government transfers following a loss. Our focus is not on the merits of having this type of assistance program in place; rather we examine its effects within an insurance market subject to adverse selection. In particular, we already know that adverse selection itself imposes some welfare costs, since any type of efficiency that is obtained must be "second-best" in nature. In this article, we examine how such second-best insurance contracting is affected by the existence of the government assistance. 
We pay particular attention to how the welfare costs (often referred to as "signaling costs") associated with the adverse selection are affected. Setting aside noneconomic considerations, such as personal pride, government assistance can act as a substitute for market-based insurance: the possibility of government assistance might lower the demand for insurance. Or perhaps not. Consider the simple case of a two-state loss versus no-loss model. Since government subsidy programs are typically written to be "excess" of insurance coverage, government benefits are calculated only after all insurance indemnities have been paid out. For instance, purchasing a level of insurance that would leave one at a wealth level equal to the government-guaranteed minimum in the loss state would be totally redundant. One could receive the same wealth level in the loss state via government assistance, without the purchase of insurance. Indeed, one would be better off with government assistance, since there would not be any insurance premium to pay in the no-loss state of the world. The government essentially provides "free insurance." On the other hand, if insurance prices were actuarially fair, one would want to purchase full coverage insurance in the absence of any governmental assistance programs. Of course, this "fair premium" is based upon the full amount of the loss. By paying the premium for full coverage, an individual must give up the value of the government assistance. This creates a type of fixed cost for purchasing full insurance. Put differently, the individual would need to weigh the relative benefit of receiving the minimal governmental level of "insurance" for a zero premium versus paying a market-based insurance premium in return for a full-coverage insurance contract. Now suppose that, for some reason, insurance coverage available via the marketplace were limited. In this case, the tradeoff between government assistance and market insurance would be different.
For example, in the extreme, if the level of insurance available in the marketplace left the individual with no more wealth in the loss state than he or she would have via government assistance, there would be a zero demand for market insurance. …
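The fixed-cost tradeoff described above is easy to see in a two-state numerical example. All parameters below are assumed for illustration: a government floor applied after insurance pays out, and log utility as the risk-averse preference.

```python
import math

# Assumed parameters: initial wealth, loss size, loss probability,
# and the government-guaranteed minimum wealth level.
w0, loss, p, w_min = 100.0, 80.0, 0.10, 50.0
u = math.log   # illustrative risk-averse utility

# No insurance: the floor tops wealth up to w_min in the loss state.
eu_floor = p * u(max(w0 - loss, w_min)) + (1 - p) * u(w0)

# Full coverage at the actuarially fair premium p * loss: wealth is
# certain, but the floor never binds, so its value is forgone.
eu_full = u(w0 - p * loss)

print(eu_floor, eu_full)
```

With these numbers the floor dominates actuarially fair full coverage (expected utility of roughly 4.54 versus 4.52), illustrating how a generous assistance floor can crowd out demand for market insurance even at fair prices.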

Journal Article
TL;DR: Cummins and Santomero as discussed by the authors present a survey of the U.S. life insurance industry, focusing on the relationship between business strategies and efficiency by correlating insurer efficiency DEA scores (cost and revenue) with the business practices of life insurers who participated in the WFIC survey.
Abstract: Changes in the Life Insurance Industry: Efficiency, Technology, and Risk Management, edited by J. David Cummins and Anthony M. Santomero, Series on Innovations in Financial Markets and Institutions (IFMI), 1999, Kluwer Academic Publishers, 369 pages. This volume is organized in 10 independent but related chapters devoted to enhancing our understanding of the challenges facing life insurers in their quest for a successful strategy. A total of 13 authors wrote or coauthored the chapters, including the leading researchers engaged in two major projects coordinated by the Wharton Financial Institutions Center (WFIC) during the second half of the 1990s. One project was a major survey of the industry sponsored by the Sloan Foundation, with a special focus on key drivers of performance, namely technology and labor. The other was a field-based investigation of the financial risk management practices of life insurers. The book's writing style clearly signals that it is targeting a hybrid audience of academics and practitioners. The price is that the more technical material is kept to a minimum, which may leave specialized readers unsatisfied. Nevertheless, the book has many nice features, including charts, tables, and illustrations that help the reader understand the material. Chapter 1, "Life Insurance: The State of the Industry" by Anthony M. Santomero, reviews the changing landscape of the financial services industry, in general, and of the U.S. life insurance sector, in particular. Its discussion of the threats and opportunities faced by life insurers leads to the identification of the determinants of success or failure in this line of business. It ends with a nice outline of the remaining chapters. Chapter 2, "The Industry Speaks: Results of the WFIC Insurance Survey" by James F. Moore and Anthony M.
Santomero, reports survey results on numerous topics, including (1) threats and opportunities as seen by participants, (2) product offerings, (3) distribution channels, (4) strategic choices, (5) use of technology, (6) human resource (HR) practices, and (7) performance measures and standards. Their main conclusion is that not all life insurers are alike. Chapter 3, "Efficiency in the U.S. Life Insurance Industry: Are Insurers Minimizing Costs and Maximizing Revenues?" by J. David Cummins, relies on the frontier approach to analyze the performance of specific life insurance firms by comparing them to efficient frontiers consisting of best-practice firms in the industry. Data envelopment analysis (DEA) is applied to estimate efficiency frontiers based on a yearly sample of about 750 firms for which financial data on outputs, inputs, output prices, and input prices were available over the period from 1988 through 1995. The author measures life insurance outputs through variables that correlate highly with the value added by life insurers in their supply of services. Yearly average efficiencies and interfirm performance are discussed. Cummins also looks at the characteristics of the industry's efficiency leaders, the issue of economies of scale, and whether there appears to be a link between distribution and efficiency. Several conclusions emerge from Cummins' analysis of the U.S. life insurance sector. In particular, he finds that efficiency scores are relatively low and widely dispersed among life insurance firms in comparison with other financial industries. Chapter 4, "Efficiency and Competitiveness in the U.S. Life Insurance Industry: Corporate, Product, and Distribution Strategies" by Roderick M. Carr, J. David Cummins, and Laureen Regan, aims at increasing our understanding of the best practices associated with life insurer efficiency.
It focuses on the relationships between business strategies and efficiency by correlating insurer efficiency DEA scores (cost and revenue) with the business practices of life insurers who participated in the 1995-1996 WFIC survey. …
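The DEA frontier comparison described above is, in general, computed by solving a linear program for each firm. As a minimal sketch (function name and data are hypothetical), the single-input, single-output special case of the input-oriented, constant-returns model reduces to a productivity-ratio comparison; the chapter's actual multi-input, multi-output models require an LP solver:

```python
def dea_ccr_single(inputs, outputs):
    """Input-oriented CCR efficiency for the one-input, one-output case.

    With a single input and output, the constant-returns frontier is the
    ray through the most productive firm, so each firm's efficiency is
    its productivity ratio divided by the best ratio in the sample.
    """
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Hypothetical firms: input = expenses, output = value added.
expenses = [2.0, 4.0, 5.0]
value_added = [1.0, 1.0, 2.0]
print(dea_ccr_single(expenses, value_added))  # firm 1 defines the frontier
```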

Journal Article
TL;DR: This book provides a much more realistic theory of demand for health insurance than the one economists routinely use, but without dropping the core of the normative economic approach to demand.
Abstract: The Theory of Demand for Health Insurance, by John A. Nyman, 2003, Stanford Economics and Finance, Stanford, California: Stanford University Press. This book presents a new theory of demand for health insurance. Not only is this theory really new (even radically new), but it is highly welcome as well. If this theory is taken seriously by health economists, the dialogue between them and physicians, policy makers, health services researchers, and even the general public will be easier and more fruitful. In a nutshell, this book provides a much more realistic theory of demand for health insurance than the one economists routinely use, but without dropping the core of the normative economic approach to demand. In a way, it is an elegant solution to a long-standing controversy between the conventional welfarist conception of demand for health insurance, which takes the demand for health care of the uninsured as the "true" willingness to pay for health care, on the one hand, and the extra-welfarist approach, which takes the guidelines written by medical experts as the "norm" of health-care consumption, on the other. Not only is the solution elegant, it is well written, and the author provides several examples to illustrate the theoretical considerations, which helps the reader understand the key concepts. What does the new theory have to say? First, consumers do not demand (health) insurance in order to reduce financial uncertainty; the conventional theory assumes that, due to the concavity of the utility of wealth, individuals always prefer a certain financial loss to an uncertain one of the same expected magnitude. An abundant literature has shown, however, that this assumption is contradicted by empirical results (in experiments, individuals prefer uncertain losses) and yields paradoxical results. Here, Prof.
Nyman uses a result (due to Rabin, 2000; Rabin and Thaler, 2001) that is not specific to health insurance but applies to attitudes toward risk in general: there is no link between the shape of the utility function of wealth and the attitude toward risk. Second, consumers demand (health) insurance because they exhibit a concave utility of wealth, even if this concavity is not related to risk aversion. To understand the crux of the idea, it is useful to work through it in sequence: when deciding to buy an insurance contract, I voluntarily forgo part of my income. In case I am ill (or the event occurs), my income first decreases by the amount of health-care (damage-repair) expenditures needed to treat the illness (repair the damage caused by the event); however, because I am insured, I then receive an income transfer that raises my income above this lower level. If the utility of wealth is concave, and if the income transfer is greater than the premium (i.e., if the probability of the event is less than one), I derive more expected utility from the income transfer when ill than the utility I lose from the forgone income (the premium).1 It can even be shown that the smaller the probability, the greater the gain. Third, the normative consequences of this alternative motive for demanding health insurance depart dramatically from the traditional ones. Formally, if one uses the same utility function of wealth, both theories predict the same demand for insurance.
But they disagree on how to interpret a specific feature of insurance for health care, namely, the firmly established empirical fact that the insured spend more to treat their illnesses than the uninsured.2 Both theories agree that this overconsumption by the insured stems from the price payoff mechanism specific to health insurance: whereas most insurance contracts provide a lump sum of money if the event occurs (one speaks of "contingent claims contracts"), health insurance usually pays off by reducing the price of health care. Both theories also agree on the cause of this specificity, which turns on the difficulty, for an insurer, of precisely monitoring the lump-sum cost of a given illness. …
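The income-transfer argument in the review can be checked numerically. The sketch below uses log utility and illustrative wealth, cost, and probability figures (all assumptions, not values from the book), and measures the gain per premium dollar, which is the sense in which smaller probabilities yield greater gains:

```python
import math

def gain_per_premium_dollar(p, wealth, care_cost, u=math.log):
    """Expected-utility gain from fair full coverage over no insurance,
    per dollar of premium, in Nyman's income-transfer framing: pay
    p*care_cost in both states, receive a transfer of care_cost when ill."""
    premium = p * care_cost
    eu_insured = u(wealth - premium)              # same wealth either way
    eu_uninsured = p * u(wealth - care_cost) + (1 - p) * u(wealth)
    return (eu_insured - eu_uninsured) / premium

# Illustrative numbers (assumptions, not from the book).
w, c = 100.0, 50.0
for p in (0.5, 0.1, 0.01):
    print(p, round(gain_per_premium_dollar(p, w, c), 4))
# The gain per premium dollar rises as the probability of illness falls.
```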

Journal ArticleDOI
TL;DR: Gomez et al. as mentioned in this paper presented a new methodology for obtaining a premium based on a broad class of conjugate prior distributions, assuming lognormal claims, which is a new class of prior distributions arise in a natural way, using the conditional specification technique introduced by Arnold, Castillo, and Sarabia (1998, 1999).
Abstract: In this article, a new methodology for obtaining a premium based on a broad class of conjugate prior distributions, assuming lognormal claims, is presented. The new class of prior distributions arises in a natural way, using the conditional specification technique introduced by Arnold, Castillo, and Sarabia (1998, 1999). The new family of prior distributions is very flexible and contains, as particular cases, many other distributions proposed in the literature. Along with its flexibility, the main advantage of this distribution is that, because it depends on a large number of hyperparameters, it allows a wide range of prior information to be incorporated. Several methods for hyperparameter elicitation are proposed. Finally, some examples with real and simulated data are given. INTRODUCTION Actuaries have found (Hewitt and Lefkowitz, 1979; Hogg and Klugman, 1984; among others) that the lognormal distribution is an important model for claim distributions. In many actuarial problems, it is found that although losses do not fit normal curves, mainly because they are skewed toward the upper boundaries, log losses provide a good fit. The lognormal distribution thus appears as a natural distribution to incorporate into credibility theory. Also, the concept of log credibility has been used in several recent papers and applications (see, for example, Landsman and Makov, 1999; Klugman et al., 1998, chapter 5), where the model assumes that claims follow a lognormal distribution. A careful study of the lognormal distribution is contained in Johnson, Kotz, and Balakrishnan (1994, chapter 14).
Therefore, assume that $\theta$ is a risk parameter characterizing a member of a risk collective, and that the distribution of his claim $X$ given $\theta$ is lognormal, with probability density function

$$f(x \mid \theta) = \frac{\tau^{1/2}}{x\sqrt{2\pi}} \exp\Big\{-\frac{\tau}{2}(\log x - \mu)^2\Big\} I(x > 0), \qquad (1)$$

where $\theta = (\mu, \tau)$, $\tau = 1/\sigma^2$ is the precision parameter, and $I$ is the indicator function. This distribution is denoted by $X \sim LN(\mu, \tau)$. If $x = (x_1, \ldots, x_n)$ is a lognormal data sample, the likelihood becomes

$$L(\mu, \tau \mid x) \propto \tau^{n/2} \exp\Big\{-\frac{\tau}{2} \sum_{i=1}^{n} (\log x_i - \mu)^2\Big\}. \qquad (2)$$

From a Bayesian framework we have the model

$$X \mid \mu, \tau \sim LN(\mu, \tau), \qquad (3)$$
$$(\mu, \tau) \sim \pi(\mu, \tau), \qquad (4)$$

and a prior density $\pi(\mu, \tau)$ has to be specified for $(\mu, \tau)$. If conjugate prior distributions are adopted, the classical solution specifies the normal-gamma prior (DeGroot, 1970; Jewell, 1974; Padgett and Wei, 1977):

$$\mu \mid \tau \sim N(m, k\tau^{-1}), \qquad (5)$$
$$\tau \sim G(\alpha, \beta), \qquad (6)$$

where $N(\mu, \sigma^2)$ denotes a normal distribution with mean $\mu$ and variance $\sigma^2$, and $G(\alpha, \beta)$ denotes a gamma distribution with probability density function proportional to $x^{\alpha-1} e^{-\beta x}$. This prior distribution has two important properties: (1) it is a conjugate prior for Equation (1); (2) only four parameters ($m$, $k$, $\alpha$, and $\beta$) require elicitation. The first property yields a congruent model, offers important computational advantages and, in addition, has a tradition in actuarial practice. Along with this mathematical convenience, however, the use of conjugate structure functions can lead to unrealistic practical situations.
For this reason, it is always advisable to evaluate the model over a class of prior densities in a robustness analysis, to check that these distributions are not exerting undue influence on the overall conclusions (Gomez, Hernandez, and Vazquez-Polo, 2000, 2002a; Gomez et al., 2002b). With respect to the second property, four parameters can be insufficient in practice. …
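Equations (5)-(6) describe the classical normal-gamma conjugate prior that the article generalizes. The following is a minimal sketch of that textbook baseline (the standard conjugate update, per DeGroot, 1970), applied to the logs of lognormal claims, with the prior parameterized as in the text, $\mu \mid \tau \sim N(m, k/\tau)$. The function name and numbers are hypothetical; this is not the article's conditionally specified prior:

```python
import math

def normal_gamma_update(m, k, alpha, beta, claims):
    """Posterior hyperparameters for the normal-gamma prior of
    Equations (5)-(6), with mu | tau ~ N(m, k/tau), updated on the
    logs of lognormal claims. Returns (m, k, alpha, beta) in the
    same parameterization as the prior."""
    y = [math.log(x) for x in claims]
    n = len(y)
    ybar = sum(y) / n
    kappa0 = 1.0 / k                          # prior "pseudo-sample size"
    kappa_n = kappa0 + n
    m_n = (kappa0 * m + n * ybar) / kappa_n   # credibility-weighted mean
    alpha_n = alpha + n / 2.0
    ss = sum((yi - ybar) ** 2 for yi in y)
    beta_n = beta + 0.5 * ss + kappa0 * n * (ybar - m) ** 2 / (2.0 * kappa_n)
    return m_n, 1.0 / kappa_n, alpha_n, beta_n

# Hypothetical prior and claims: four identical claims with log-claim 2.
post = normal_gamma_update(m=1.0, k=1.0, alpha=2.0, beta=1.0,
                           claims=[math.exp(2.0)] * 4)
print(post)
```

Note how the posterior mean shrinks the sample log-mean toward the prior mean with weight $1/k$, which is the credibility-weighting at the heart of the premium calculation.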

Journal ArticleDOI
TL;DR: In this article, the authors derived cost-benefit rules for automobile safety regulation when drivers may adapt their risk-taking behavior in response to changes in the quality of the road network, and established that road safety measures are Pareto improving if their monetary cost is lower than the difference between their (adjusted for risk aversion) direct welfare gain with unchanged behavior and the induced variation in insured losses due to drivers' behavioral adaptation.
Abstract: It is sometimes argued that road safety measures or automobile safety standards fail to save lives because safer highways or safer cars induce more dangerous driving. A similar but less extreme view is that ignoring the behavioral adaptation of drivers would bias the cost-benefit analysis of a traffic safety measure. This article derives cost-benefit rules for automobile safety regulation when drivers may adapt their risk-taking behavior in response to changes in the quality of the road network. The focus is on the financial externalities induced by accidents because of the insurance system, as well as on the consequences of drivers' risk aversion. We establish that road safety measures are Pareto improving if their monetary cost is lower than the difference between their direct welfare gain (adjusted for risk aversion) with unchanged behavior and the induced variation in insured losses due to drivers' behavioral adaptation. The article also shows how this rule can be extended to take other accident external costs into account.
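The abstract's verbal criterion can be restated as a one-line check. This is only a paraphrase of the rule as summarized above, with hypothetical numbers; the article's formal statement derives the risk-adjusted welfare terms, which are simply taken as given here:

```python
def pareto_improving(cost, direct_welfare_gain, delta_insured_losses):
    """Rule as summarized in the abstract: a road safety measure is
    Pareto improving when its monetary cost is below the risk-adjusted
    direct welfare gain (computed at unchanged behavior) minus the rise
    in insured losses caused by drivers' behavioral adaptation."""
    return cost < direct_welfare_gain - delta_insured_losses

# Hypothetical numbers: a gain of 10 is partly offset when safer roads
# induce riskier driving that raises insured losses by 3.
print(pareto_improving(cost=6.0, direct_welfare_gain=10.0,
                       delta_insured_losses=3.0))  # → True
print(pareto_improving(cost=8.0, direct_welfare_gain=10.0,
                       delta_insured_losses=3.0))  # → False
```

The second call shows the bias the authors warn about: ignoring behavioral adaptation (setting the loss term to zero) would wrongly pass a measure whose cost exceeds its adjusted benefit.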

Journal ArticleDOI
TL;DR: In this paper, the authors investigated the effect of group health insurance plan choice on insurance unit price and found that lower unit prices are related to an increase in indemnity benefits and that the reduction in the unit price is greater for lower risks.
Abstract: This study investigates the effect of group health insurance plan choice on insurance unit price. The empirical findings suggest that the unit price of insurance, as measured by the ratio of the premium to expected indemnity benefits, is lower in group plans that offer employees a choice of different insurance options and require a premium contribution than it is in plans lacking at least one of these two features. The analyses suggest that lower unit prices are related to an increase in indemnity benefits and that the reduction in the unit price is greater for lower risks. The findings indicate that although subsidization of high risks by low risks occurs with group health insurance, the degree of subsidization is less when employees are offered a choice of health insurance plans.
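The study's price measure is a simple ratio, sketched below with hypothetical plan figures (the article's actual comparisons come from empirical analysis of group plans, not from this arithmetic):

```python
def unit_price(premium, expected_indemnity):
    """Unit price of insurance as defined in the study: the ratio of
    the premium to expected indemnity benefits. Values above 1 reflect
    loadings; choice-with-contribution plans showed lower ratios."""
    return premium / expected_indemnity

# Hypothetical plans: one with employee choice and premium contribution,
# one lacking at least one of those features.
print(unit_price(premium=1200.0, expected_indemnity=1000.0))  # → 1.2
print(unit_price(premium=1320.0, expected_indemnity=1000.0))  # → 1.32
```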