
Showing papers on "Inefficiency published in 1995"


Journal ArticleDOI
TL;DR: In this paper, a stochastic frontier production function is defined for panel data on firms, in which the nonnegative technical inefficiency effects are assumed to be a function of firm-specific variables and time.
Abstract: A stochastic frontier production function is defined for panel data on firms, in which the non-negative technical inefficiency effects are assumed to be a function of firm-specific variables and time. The inefficiency effects are assumed to be independently distributed as truncations of normal distributions with constant variance, but with means which are a linear function of observable variables. This panel data model is an extension of recently proposed models for inefficiency effects in stochastic frontiers for cross-sectional data. An empirical application of the model is obtained using up to ten years of data on paddy farmers from an Indian village. The null hypotheses, that the inefficiency effects are not stochastic or do not depend on the farmer-specific variables and time of observation, are rejected for these data.
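The specification described above can be illustrated with a small simulation in the spirit of this class of models (a sketch, not the authors' code; all parameter values and variable names are invented):

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)
n = 500

# Hypothetical data: a log input x and a firm-specific covariate z (e.g., schooling)
x = rng.uniform(0.0, 2.0, n)
z = rng.uniform(0.0, 1.0, n)

# Inefficiency u_i >= 0: truncation at zero of N(delta0 + delta1*z_i, sigma_u^2),
# so the mean of inefficiency is a linear function of the observable z_i
delta0, delta1, sigma_u = 0.5, -0.4, 0.3
mu = delta0 + delta1 * z
u = truncnorm.rvs((0.0 - mu) / sigma_u, np.inf, loc=mu, scale=sigma_u,
                  random_state=rng)

# Stochastic frontier: log output = frontier + symmetric noise - inefficiency
v = rng.normal(0.0, 0.2, n)
y = 1.0 + 0.6 * x + v - u

te = np.exp(-u)   # technical efficiency, in (0, 1]
```

Estimation would then recover the frontier parameters and the delta coefficients jointly by maximum likelihood; the simulation only shows how the inefficiency effects depend on the observable covariate.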

5,783 citations


Journal ArticleDOI
TL;DR: In this paper, the authors identify the characteristics that make individual U.S. banks more likely to fail or be acquired and use bank-specific information to estimate competing-risks hazard models with time-varying covariates.
Abstract: This paper seeks to identify the characteristics that make individual U.S. banks more likely to fail or be acquired. We use bank-specific information to estimate competing-risks hazard models with time-varying covariates. We use alternative measures of productive efficiency to proxy management quality, and find that inefficiency increases the risk of failure while reducing the probability of a bank's being acquired. Finally, we show that the closer to insolvency a bank is (as reflected by a low equity-to-assets ratio) the more likely is its acquisition.

703 citations


Journal ArticleDOI
TL;DR: In this article, measures of technical and scale efficiency are derived for the Italian banking industry by implementing non-parametric Data Envelopment Analysis on a cross section of 174 Italian banks observed in 1991.
Abstract: Measures of technical and scale efficiency are derived for the Italian banking industry by implementing non-parametric Data Envelopment Analysis on a cross section of 174 Italian banks observed in 1991. The methodologies of the parametric and non-parametric approaches to measuring efficiency are discussed. The existence of both technical and allocative efficiency is established. This result is robust to modifications in the specification of inputs and outputs suggested by the Intermediation Approach and by the Asset Approach. In implementing both the Intermediation and the Asset Approach, the traditional specification of inputs is modified to allow an explicit role for financial capital. In addition, regression analysis is used on a bank-specific measure of inefficiency to investigate the determinants of banks' efficiency. Efficiency is best explained by productive specialization, size and, to a lesser extent, location.
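The envelopment form of the input-oriented, constant-returns DEA program can be sketched with a toy dataset (the bank names and numbers below are invented and have nothing to do with the paper's 174-bank sample):

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: 4 hypothetical banks, 2 inputs (staff, branches), 1 output (loans)
X = np.array([[20.0, 5.0], [30.0, 8.0], [40.0, 6.0], [25.0, 9.0]])  # inputs
Y = np.array([[100.0], [120.0], [160.0], [90.0]])                   # outputs

def ccr_efficiency(o):
    """Input-oriented CCR score for bank o (1.0 = technically efficient)."""
    n, m = X.shape                 # n banks, m inputs
    s = Y.shape[1]                 # number of outputs
    c = np.zeros(1 + n)
    c[0] = 1.0                     # minimize theta; decision vars = [theta, lambdas]
    # Input constraints: sum_j lambda_j * x_ij <= theta * x_io
    A_in = np.hstack([-X[o].reshape(m, 1), X.T])
    b_in = np.zeros(m)
    # Output constraints: sum_j lambda_j * y_rj >= y_ro
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    b_out = -Y[o]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.hstack([b_in, b_out]),
                  bounds=[(0, None)] * (1 + n))
    return res.x[0]

scores = [ccr_efficiency(o) for o in range(len(X))]
```

Each score measures how far an observed bank's inputs could be scaled down while still producing its output inside the frontier spanned by its peers; a score of 1.0 places the bank on the frontier.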

435 citations


Journal ArticleDOI
TL;DR: In this paper, a heteroscedastic cost-frontier model is developed and estimated using bank cost data similar to that used by Ferrier and Lovell, and the results show dramatic changes in the estimated cost frontier and in the inefficiency measures when accounting for heteroScedasticity in the estimation process.
Abstract: The purpose of this article is to illustrate a straightforward and useful method for addressing the problem of heteroscedasticity in the estimation of frontiers. A heteroscedastic cost-frontier model is developed and estimated using bank cost data similar to that used by Ferrier and Lovell. Our results show dramatic changes in the estimated cost frontier and in the inefficiency measures when heteroscedasticity is accounted for in the estimation process. We find that the rankings of firms by their inefficiency measures are affected markedly by the correction for heteroscedasticity but not by alternative distributional assumptions about the one-sided error term.

411 citations


Book ChapterDOI
TL;DR: In this article, the authors explore cognitive and motivational processes that impede mutually beneficial exchanges of concessions and render seemingly tractable conflicts refractory to negotiated resolution, and present a detailed examination of five particular psychological barriers, namely, dissonance arising from the past, optimistic overconfidence, loss aversion, divergent construal and reactive devaluation.
Abstract: Publisher Summary The barriers of special concern in this chapter are psychological. The chapter explores cognitive and motivational processes that impede mutually beneficial exchanges of concessions and render seemingly tractable conflicts refractory to negotiated resolution. In such cases, the failure to achieve significant progress represents a kind of “market inefficiency,” in much the same sense that there is a failure or inefficiency when a motivated buyer and seller are unable to consummate a deal under conditions where the buyer's maximum purchase price exceeds the seller's minimum selling price. The chapter distinguishes psychological barriers from two other kinds of impediments—those that are essentially products of strategic calculation and those that arise from “impersonal” organizational, institutional, or structural factors having little to do either with calculation or with the psychological biases exhibited by individual actors. The chapter presents a detailed examination of five particular psychological barriers—namely, dissonance arising from the past, optimistic overconfidence, loss aversion, divergent construal and reactive devaluation. The chapter examines three broader theoretical perspectives that speak to sources of resentment, misunderstanding, misattribution, and distrust in the negotiation process.

359 citations


Journal ArticleDOI
TL;DR: The Chinese experience shows that increasing per-person expenditure on health care through user fees and insurance has not produced a commensurate improvement in health status; the paper draws some lessons for less developed nations.

304 citations


Journal ArticleDOI
TL;DR: In this article, the overall technical inefficiency is decomposed into a persistent component and a residual component, and a multistep procedure is used to estimate the parameters of the production function as well as persistent and residual technical inefficiencies.
Abstract: This paper introduces a new specification of technical inefficiency in panel data models. First, the overall technical inefficiency is decomposed into a persistent component and a residual component. Second, a multistep procedure is used to estimate the parameters of the production function as well as persistent and residual technical inefficiency. The advantage of this multistep procedure is that the parameter estimates are robust to distributional assumptions on the error components. Distributional assumptions are required in the final stage to estimate the residual component of technical inefficiency. The model is used to examine technical efficiency in Swedish dairy farms during the period 1976 to 1988.
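The persistent/residual decomposition can be illustrated with a simulated panel (a sketch of the idea only, not the paper's multistep estimator; all distributions and numbers are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
firms, years = 50, 10

# True inefficiency = persistent (firm-specific) part + time-varying residual part
eta = rng.exponential(0.2, firms)             # persistent component, fixed over time
u = rng.exponential(0.1, (firms, years))      # residual component, varies by year
total = eta[:, None] + u                      # total inefficiency per firm-year

# Averaging over time washes out the residual part; subtracting the best firm's
# mean normalizes the most efficient firm to zero estimated persistent inefficiency
firm_means = total.mean(axis=1)
persistent_est = firm_means - firm_means.min()

corr = np.corrcoef(persistent_est, eta)[0, 1]
```

The point of the exercise: with enough periods per firm, time-averaging recovers the persistent component without any distributional assumption, which is why distributional assumptions are needed only for the residual part.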

216 citations


Journal ArticleDOI
TL;DR: In this paper, Schmidt and Kumbhakar used a cost function approach and combined the concepts of technical and allocative efficiency in the cost relationship to obtain a farm-specific measure of allocative inefficiency.
Abstract: Farm efficiency, and the question of how to measure it, is an important subject in developing countries' agriculture. There are three distinct approaches to measurement, based on cost, profit, and production functions. Farrell developed the concept of technical efficiency based on input and output relationships. Technical inefficiency arises when actual or observed output from a given input mix is less than the maximum possible; allocative inefficiency arises when the input mix is not consistent with cost minimization. Advances have been made in the frontier cost function literature using flexible functional forms (Bauer; Greene 1980; Forsund, Knox Lovell, and Schmidt; and Schmidt). An approach using systems of equations based on cost, profit, and distance functions was developed to incorporate all available information on multiproduct farms (Kumbhakar; Schmidt; Fare, Grosskopf, and Nelson; Hayes). The present study uses a cost function approach and combines the concepts of technical and allocative efficiency in the cost relationship. Any errors in the production decision translate into higher costs for the producer. At the same time, the stochastic nature of production implies that the theoretical cost function is stochastic. Two distinct approaches are used. First, we use the stochastic frontier approach, in which a translog cost function is specified and estimated without using share equations. The derived measure of inefficiency is then related to socioeconomic, demographic, and farm size variables. Second, we use a behavioral approach in which the translog cost function is again estimated, but this time as a function of the shadow prices of the inputs. The cost share equations are estimated jointly with the cost equation, incorporating all cross-equation restrictions. Moreover, input-use inefficiency is introduced through the parameters, and these are related to the structural variables. This approach permits us to obtain a farm-specific measure of allocative inefficiency.

197 citations


Journal ArticleDOI
TL;DR: In this paper, a stochastic frontier cost function is used to specify the cost of inefficiency of publicly and privately owned urban water utilities in terms of their different ownership structures and firm-specific characteristics.

188 citations


Journal ArticleDOI
TL;DR: In this article, the authors argue that if an entrant has market power and the seller's cost of production is observable but not verifiable, then privately stipulated damages are set at a socially excessive level to facilitate the extraction of the entrant's surplus.
Abstract: Two roles for stipulated damage provisions have been debated in the literature: protecting relationship-specific investments and inefficiently excluding competitors. Aghion and Bolton (1987) formally demonstrate the latter effect in a model without investment or renegotiation. Although introducing renegotiation alone destroys their result, introducing both renegotiation and investment restores it. In particular, if an entrant has market power and the seller's cost of production is observable but not verifiable, then privately stipulated damages are set at a socially excessive level to facilitate the extraction of the entrant's surplus. In contrast, if the entrant prices competitively (as typically is assumed in the law and economics literature on breach), then private stipulation is efficient. Whereas a simple legal restriction on the contract corrects for any inefficiency, standard court-imposed remedies do not.

185 citations


Journal ArticleDOI
Stuart Ogden
TL;DR: In this paper, the authors investigate the ways in which accounting and accounting information have contributed to and shaped processes of organizational change in one area of the public sector, the ten Regional Water Authorities of England and Wales.
Abstract: The U.K. Government's belief in the innate inefficiency of traditional public sector provision of goods and services has inspired a number of initiatives which have resulted in the management of public sector enterprises being confronted by an increasingly commercial environment, tighter financial controls, increased competition, and in some cases transfer to the private sector through privatization. This paper is concerned with investigating the ways in which accounting and accounting information have contributed to and shaped processes of organizational change in one area of the public sector, the ten Regional Water Authorities of England and Wales. In the early 1980s, the Water Authorities were subject to pressures from new Government financial controls and performance aims to become more efficient. These pressures intensified when the Government announced its intention to privatize them in 1986, and continued up to 1989 when privatization took effect. Since privatization the Water Authorities have been subject to "yardstick" competition under a new regulatory framework, and to comparative judgements by the financial markets. In considering these changes, the paper examines the constitutive role of accounting in articulating changing organizational priorities, and in promoting first a vocabulary of costs and subsequently a vocabulary of profits as languages of organizational motive.

Book
01 Jun 1995
TL;DR: In this paper, the authors investigate a set of adaptability variables that have not been previously researched and, therefore, take an alternative focus on adaptive capability, finding that companies with high adaptive capability seemingly perform better than low adapters, despite the implication of high costs and inefficiency.
Abstract: In the literature it is proposed that high adaptive capability is associated with high costs and internal inefficiency, despite the potential benefits to be gained from being adaptive. Investigates a set of adaptability variables that have not been previously researched and, therefore, takes an alternative focus on adaptive capability. Identifies two distinct degrees of high and low adaptive capability in an empirical UK study. Suggests that companies with high adaptive capability seemingly perform better than low adapters, despite the implication of high costs and inefficiency. High adapters also seem to have more comprehensive market orientation and decision‐making style, although they appear to operate in more turbulent external environments. The results extend the current adaptive capability literature, and directions for further research are proposed.

Journal ArticleDOI
TL;DR: In this article, the authors investigate the nature and extent of efficiency and productivity growth in Japanese banking by using nonparametric frontier techniques which allow for technical inefficiency and technological progress/regress.
Abstract: The purpose of this study is to investigate the nature and extent of efficiency and productivity growth in Japanese banking by using nonparametric frontier techniques which allow for technical inefficiency and technological progress/regress. Here, productivity change indexes (the Malmquist productivity, efficiency change and technological change indexes) as well as technical efficiency measures are computed from a 1989-1991 sample of panel data. The study indicates at least five key findings which are summarized in the last section of this paper.

Journal ArticleDOI
TL;DR: In this paper, the estimation of frontier production functions in panel data models is considered and a multi-stage method is proposed to obtain estimates of the parameters of a flexible input requirement function and technical inefficiency decomposed into time-invariant (firm-specific), time-varying, and the residual components.
Abstract: This paper considers the estimation of frontier production functions in panel data models. It proposes a multi-stage method to obtain estimates of (1) the parameters of a flexible input requirement function and (2) technical inefficiency decomposed into time-invariant (firm-specific), time-varying, and residual components. The proposed method is used to analyse the labour-use efficiency of Swedish local social insurance offices on the basis of a large panel of observations during the time period 1974–84. Empirical results show: (1) substantial variations in labour-use efficiency among these offices, with mean efficiencies declining over time; (2) the presence of economies of scale, implying that most of the offices were of suboptimal size.

01 Jan 1995
TL;DR: In this paper, the effect of early exercise on the put-call parity condition was investigated using European options and they found violations that are much less frequent and smaller than the studies using American options.
Abstract: Existing empirical studies of the put-call parity condition report frequent, substantial violations. An important problem in interpreting these results is that these studies all investigate American options. While some of these studies attempt to reduce the effects of possible early exercise on their tests, they cannot fully account for the effect of early exercise. Therefore, it is not possible to conclude from these studies whether, or to what extent, observed put-call parity violations are due to market inefficiency or due to the value of early exercise. We avoid the early exercise problem by testing put-call parity using European options. We find violations that are much less frequent and smaller than in the studies using American options. Moreover, these violations reflect premia for liquidity.
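The no-arbitrage condition being tested is European put-call parity, C - P = S - K e^(-rT); a minimal numeric check of one option pair looks like this (all quotes below are hypothetical, not data from the study):

```python
import math

# Hypothetical quotes for a European call/put pair with the same strike and expiry
S, K, r, T = 100.0, 95.0, 0.05, 0.5      # spot, strike, cont. comp. rate, years
call, put = 9.73, 2.40                   # observed option premiums (invented)

# European put-call parity: C - P = S - K * exp(-r * T)
parity_rhs = S - K * math.exp(-r * T)
violation = (call - put) - parity_rhs    # nonzero => apparent mispricing
```

A nonzero `violation` is only an apparent arbitrage: as the abstract notes, small residual gaps can reflect liquidity premia and transaction costs rather than genuine market inefficiency.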

Journal ArticleDOI
TL;DR: In this paper, the authors investigate the sensitivity of efficiency measures to broadly different conceptions of how banks operate and find substantial differences in mean efficiency across models and low, though statistically significant, correspondence in the rankings of banks by efficiency scores across models.
Abstract: In the past 15 years, the banking industry has faced growing competition from other financial service firms and financial markets and, at the same time, has undergone substantial deregulation and change. Proponents of further deregulation, such as the removal of barriers to the commingling of commercial and investment banking, argue that such changes would enhance the efficiency and viability of American banks. The impact of competitive and regulatory changes on banks can be judged by gross measures of performance, such as profitability and failure rates. Economists are also interested in how such changes affect the efficiency with which banks transform resources into various financial services. Inefficiency implies that resources are wasted, that is, that firms are producing less than the feasible level of output from the resources employed, or are using relatively costly combinations of resources to produce a particular mix of products or services. Thus, a goal of policymakers, as well as stockholders and managers, is to devise policies that improve the efficiency of commercial banks. Unfortunately, economists do not agree upon the appropriate methodology for measuring the efficiency of banks. Several estimation techniques have been proposed, each with advantages and disadvantages. The problem is complicated by the myriad of different services that commercial banks perform. Researchers deal with complex issues in measuring bank production: Is a deposit an input to the production process, or an output? Should outputs be measured in terms of the number of a bank's accounts, the number of transactions it processes or the dollar amounts of its loans or deposits? Perhaps not surprisingly, estimates of commercial bank inefficiency vary considerably across studies that use different techniques, conceptions of bank production and data samples.
This article investigates the sensitivity of efficiency measures to broadly different conceptions of how banks operate. We use a single estimation technique and a common pool of banks to compare efficiency measures based on alternative views of bank production. We find substantial differences in mean efficiency across models and low, though statistically significant, correspondence in the rankings of banks by efficiency scores across models. First, we discuss why measuring commercial bank efficiency is useful, some alternative measures of efficiency and techniques for estimating efficiency. A description of the approach we take, our data and our results follows.

Journal ArticleDOI
TL;DR: This paper examines the implications of utilizing the sample clustering and sample weights in the analysis of survey data and two controversial analyses previously published in medical references are demonstrated with real survey data.
Abstract: SUMMARY Large scale health surveys offer an opportunity to study associations between risk factors and outcomes in a population-based setting. Their complicated multistage sampling designs, with differential probabilities of sampling individuals, can make their analysis far from straightforward. Classical 'design-based' methods that yield approximately unbiased estimators of associations and standard errors can be highly inefficient. Model-based methods require assumptions which, if wrong, can lead to biased estimators of associations and standard errors. This paper examines the implications of utilizing the sample clustering and sample weights in the analysis of survey data. The approach is to estimate the inefficiency of using these aspects of the sampling design in a design-based analysis when actually it was unnecessary to do so. If the inefficiency is small, then that aspect of the design is used in a design-based fashion. Otherwise, additional modelling assumptions are incorporated into the analysis. By focusing attention on risk factor-outcome associations in large health surveys, specific recommendations for practitioners are given. The issues are demonstrated with real survey data, including two controversial analyses previously published in medical references.
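One standard way to quantify the inefficiency of applying unequal sampling weights when they are not needed is Kish's design effect, which is not necessarily the measure used in this paper; the weights below are purely illustrative:

```python
import numpy as np

# Kish's design effect: variance inflation of a weighted mean relative to an
# equal-weight mean of the same sample size (weights are made-up)
w = np.array([1.0, 1.0, 2.0, 4.0, 4.0, 8.0])
n = len(w)
deff_w = n * np.sum(w ** 2) / np.sum(w) ** 2
# deff_w > 1 means the weighted estimator is less efficient (larger variance)
```

If the design effect is close to 1, little efficiency is lost by keeping the weights in a design-based analysis; a large value is the kind of inefficiency that motivates switching to model-based assumptions.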

Journal ArticleDOI
TL;DR: In this paper, the authors give a slightly new perspective on three related questions, namely, whether price changes are informational or allocational, and whether incomplete equitization causes trade.
Abstract: Markets have an allocational role; even in the absence of news about payoffs, prices change to facilitate trade and allocate resources to their best use. Allocational price changes create noise in the signal extraction process, and markets where such trading is important are markets in which we may expect to find a failure of informational efficiency. An important source of allocational trading is the use of dynamic trading strategies caused by the incomplete equitization of risks. Incomplete equitization causes trade. Trade implies the inefficiency of passive strategies, thus requiring investors to determine whether price changes are informational or allocational. I WILL TRY TO GIVE a slightly new perspective on three related questions.

Journal ArticleDOI
TL;DR: In this article, the authors decompose traditional measures of productive efficiency into a management and a regulatory component, and apply it to European railways, where the management is responsible for just managerial inefficiency whereas governments are responsible for slacks in regulatory efficiency.

Journal ArticleDOI
TL;DR: In this paper, the authors introduce efficiency measures that can be used to find the efficiency of a group of firms and pinpoint whether the group inefficiency is due to inefficiency inside or outside individual firms.
Abstract: While the conventional Farrell-Fare approach to efficiency measurement can identify the most inefficient firms, it fails to consider the efficiency of a group of firms thoroughly This paper introduces efficiency measures that can be used to find the efficiency of a group of firms and pinpoint whether the group inefficiency is due to inefficiency inside or outside individual firms Furthermore, a new way of finding the revenue maximum shadow price vector is introduced to compute the allocative efficiency of individual firms when price data are not available

Journal ArticleDOI
TL;DR: In this article, a Bayesian approach is used to investigate a sample's information about a portfolio's degree of inefficiency, and the data indicate that the NYSE-AMEX market portfolio is rather inefficient in the presence of a riskless asset, although this conclusion is justified only after an analysis using informative priors.
Abstract: A Bayesian approach is used to investigate a sample's information about a portfolio's degree of inefficiency. With standard diffuse priors, posterior distributions for measures of portfolio inefficiency can concentrate well away from values consistent with efficiency, even when the portfolio is exactly efficient in the sample. The data indicate that the NYSE-AMEX market portfolio is rather inefficient in the presence of a riskless asset, although this conclusion is justified only after an analysis using informative priors. Including a riskless asset significantly reduces any sample's ability to produce posterior distributions supporting small degrees of inefficiency.

Posted Content
TL;DR: In this article, the determinants of individual bank failures and acquisitions in the United States during 1984-1993 were examined, focusing especially on the role of management quality, as reflected in alternative measures of x-efficiency and find that inefficiency increases the risk of failure, while reducing the probability of a bank's being acquired.
Abstract: This paper examines the determinants of individual bank failures and acquisitions in the United States during 1984-1993. We use bank-specific information suggested by examiner CAMEL-rating categories to estimate competing-risks hazard models with time-varying covariates. We focus especially on the role of management quality, as reflected in alternative measures of x-efficiency, and find that inefficiency increases the risk of failure while reducing the probability of a bank's being acquired. Finally, we show that the closer to insolvency a bank is, as reflected by a low equity-to-assets ratio, the more likely is its acquisition.

Journal ArticleDOI
TL;DR: In this paper, a modified linear-programming method for estimating technical efficiency that distinguishes discretionary inputs and socioeconomic variables affecting public production is presented, which is applied to examine the level and possible causes of technical inefficiency in the provision of public education in New York State.
Abstract: Local governments (including school districts) face increasing fiscal stress due to rising costs and little growth in intergovernmental aid and property tax revenues. In this tight fiscal environment, the measurement of, and identification of causes for, technical inefficiency in the provision of local public services becomes all the more important. Although there is a growing body of literature on the estimation of technical efficiency using linear programming methods, the production models underlying these estimates often are not consistent with the process of local public service provision. In particular, previous studies have not properly controlled for socioeconomic variables that affect service outcomes but are beyond the control of local government officials. Further, the public choice literature theoretically modeling inefficient behavior by bureaucrats devotes scant attention to empirical analysis of the underlying causes of technical inefficiency in the public sector. The major objective of this article is to present a modified linear-programming method for estimating technical efficiency that distinguishes discretionary inputs and socioeconomic variables affecting public production. This method is applied to examine the level and possible causes of technical inefficiency in the provision of public education in New York State.

Journal ArticleDOI
TL;DR: In this article, the authors used nonparametric methods to get upper and lower bounds on the levels of technical and overall cost efficiency in the US steel industry during the period 1958-1986.

Journal ArticleDOI
TL;DR: In this paper, the determinants of cost inefficiency of several publicly operated passenger-bus transportation companies in India in terms of their ownership structure as well as other firm-specific characteristics are estimated.
Abstract: This paper estimates the determinants of cost inefficiency of several publicly operated passenger-bus transportation companies in India in terms of their ownership structure as well as other firm-specific characteristics. A panel data on publicly operated passenger-bus transportation companies is used to estimate a translog cost system with inefficiency. Inefficiency is specified in such a way that both its mean and variance are firm- and time-specific. For the estimation of production technology and cost inefficiency we have used a multi-step estimation procedure instead of the single-step maximum likelihood (ML) method. In the first step we estimate the translog cost system with heteroskedastic cost function without using any distributional assumptions on the error terms. The second stage uses the ML method to estimate the parameters associated with inefficiency, conditional on the parameter estimates obtained from the first stage. Finally, the residual of the cost function is decomposed to obtain firm-and time-specific measures of cost inefficiency, with ownership type and other firm-specific characteristics as explanatory variables.

Journal ArticleDOI
TL;DR: In this paper, an economic model of ecotourism as the utilisation of open access to renewable natural sites is presented, and management solutions to the open access problem are examined.
Abstract: Ecotourism refers to tourists travelling to a nature site because of the amenity and recreational value derived from having contact with some aspect of the natural world. While ecotourism is a rapidly growing phenomenon, much of this growth is unsustainable. This article reviews why this unsustainability arises and how it can be avoided. The first section sets out an economic model of ecotourism as the utilisation of open access to renewable natural sites. This model is used to demonstrate how open access can lead to both economic and environmental inefficiency. The second section examines management solutions to the open access problem. This involves determining an owner of the site: either the state, the local community, or a private group. This owner must then choose policy instruments to restrict open access. This involves choosing between price and quantity instruments, deciding how to reduce rent dissipation, and determining whether to restrict the total number of tourists or the damage done per tourist.

Journal ArticleDOI
TL;DR: In this article, the authors examined the correct interpretation of inefficiency scores in the additive model of data envelopment analysis and defined a region of stability that identifies sufficient conditions for altering a technical inefficiency classification to that of technical efficiency.
Abstract: Data Envelopment Analysis is an analytical tool for evaluating the relative technical efficiency of a set of organizations with the same multiple inputs and outputs. This paper examines the correct interpretation of inefficiency scores in the Additive model of Data Envelopment Analysis. A contrived numerical example is offered to demonstrate that certain computational statements appearing in recent literature are not entirely correct. As rectification, a region of stability is defined that identifies sufficient conditions for altering a technical inefficiency classification to that of technical efficiency. Finally, this region of stability technique is applied to bank branch operating efficiencies to demonstrate managerial interpretations and policy implications.

Journal ArticleDOI
TL;DR: In this article, the authors examine the determinants of a particularly important dimension of local aggregate performance: the technical efficiency with which hospital outputs are produced, measured using the data envelopment analysis (DEA) technique.
Abstract: Using aggregates of acute care hospitals in given metropolitan areas as decision making units (DMUs), the hospital industry's technical efficiency in 319 U.S. metropolitan areas was evaluated. The performance of hospital aggregates may be as important to solving hospital cost inefficiency and waste problems as the performance of individual hospitals themselves. This study examines the determinants of a particularly important dimension of local aggregate performance: the technical efficiency with which hospital outputs are produced. Aggregate technical efficiencies are measured using the data envelopment analysis (DEA) technique. Results indicate that at least 3% of health care costs in the gross domestic product (GDP) are due to inefficiencies created by the excessive buildup of providers. Potential planning priorities for eliminating such waste in each local hospital market are recommended.

Journal ArticleDOI
TL;DR: Given the allocative inefficiency of raising taxes to pay for the program, the impact of the proposal on allocative efficiency would be at least as good at the lower bound estimate of monopoly costs while substantially improving efficiency at or near the upper bound estimate.
Abstract: Traditionally, monopoly power in the pharmaceutical industry has been measured by profits. An alternative method estimates the deadweight loss of consumer surplus associated with the exercise of monopoly power. Although upper and lower bound estimates for this inefficiency are far apart, they at least suggest a dramatically greater welfare loss than measures of industry profitability would imply. A proposed system would have the U.S. government employing its power of eminent domain to "take" and distribute pharmaceutical patents, providing as "just compensation" the present value of the patent's expected future monopoly profits. Given the allocative inefficiency of raising taxes to pay for the program, the impact of the proposal on allocative efficiency would be at least as good at our lower bound estimate of monopoly costs while substantially improving efficiency at or near our upper bound estimate.
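The deadweight-loss calculation behind such estimates is the standard welfare triangle under linear demand; the sketch below uses invented prices and quantities, not the paper's bounds:

```python
# Welfare accounting for a move from competitive to monopoly pricing
# under linear demand (all prices and quantities are hypothetical)
p_comp, q_comp = 10.0, 1000.0    # competitive price and quantity
p_mono, q_mono = 15.0, 600.0     # monopoly price and reduced quantity

# The markup on units still sold is a transfer to the monopolist, not a loss;
# the surplus lost on units no longer traded is the deadweight-loss triangle
transfer = (p_mono - p_comp) * q_mono
deadweight_loss = 0.5 * (p_mono - p_comp) * (q_comp - q_mono)
```

This distinction is why the abstract argues that profits understate the welfare cost of monopoly: the transfer rectangle shows up as profit, while the deadweight-loss triangle appears in no one's accounts.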

Journal ArticleDOI
TL;DR: In this paper, a general-equilibrium model is used to analyze agricultural policy incidence in the presence of distortionary income taxation, and the incidence of lump-sum transfers to farmers, supply control through input retirement, and production subsidies are considered.