Journal ArticleDOI

A DEA and random forest regression approach to studying bank efficiency and corporate governance

TL;DR: In this paper, the authors employ Data Envelopment Analysis to estimate the new technical, new cost, and new profit efficiency of Indian banks over the period 2008-2018, and use Random Forest Regression to examine the impact of corporate governance (Board Size, Board Independence, Duality, Gender Diversity, and Board Meetings), bank characteristics (Return on Assets, Size, and Equity to Total Assets), and other characteristics (Ownership and Years) on bank efficiency.
Abstract: We employ Data Envelopment Analysis to estimate the new technical, new cost, and new profit efficiency of Indian banks over the period 2008–2018. Then, we use Random Forest Regression to examine the impact of corporate governance (Board Size, Board Independence, Duality, Gender Diversity, and Board Meetings), bank characteristics (Return on Assets, Size, and Equity to Total Assets), and other characteristics (Ownership and Years) on bank efficiency. Among others, we found that board characteristics play a significant role particularly in new profit efficiency; therefore, policymakers and regulators should consider Board Size, Board Independence, Board Meetings, and Duality while framing guidelines for enhancing bank new profit efficiency. We also found that Board Independence plays a vital role in bank new cost efficiency, while Gender Diversity contributes to both new technical and new cost efficiency. This study makes a methodological contribution by employing Machine Learning-based Random Forest Regression in tandem with Data Envelopment Analysis in a two-phase model to examine corporate governance and bank efficiency, a pioneering attempt.
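The two-phase design described above can be sketched in code. The following is a minimal illustration, not the authors' exact model: phase one solves the standard input-oriented CCR linear program per bank (the paper estimates new technical, cost, and profit efficiency, which use different formulations), and phase two would feed the resulting scores to a Random Forest Regression on the governance variables. The toy bank data is invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR efficiency score for each decision-making unit.
    X: (n_dmus, n_inputs) input matrix; Y: (n_dmus, n_outputs) output matrix."""
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        # Decision variables: [theta, lambda_1, ..., lambda_n]; minimize theta.
        c = np.r_[1.0, np.zeros(n)]
        # Inputs:  sum_j lambda_j * x_ij <= theta * x_io
        A_in = np.hstack([-X[o].reshape(m, 1), X.T])
        # Outputs: sum_j lambda_j * y_rj >= y_ro  (negated into A_ub <= form)
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(m), -Y[o]],
                      bounds=[(0, None)] * (n + 1))
        scores.append(res.fun)
    return np.array(scores)

# Hypothetical toy data: three banks, one input (cost), one output (loans).
X_toy = np.array([[2.0], [4.0], [4.0]])
Y_toy = np.array([[2.0], [4.0], [2.0]])
eff = dea_ccr_input(X_toy, Y_toy)  # banks 0 and 1 efficient, bank 2 at 0.5
```

In the second phase, `eff` would become the target of a Random Forest Regression on the governance and bank-characteristic features (e.g. scikit-learn's `RandomForestRegressor`).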
Citations
Journal ArticleDOI
TL;DR: In this article, the authors bring together nearly 40 years of DEA in a concise format by discussing the popular DEA models, their advantages and shortcomings, and different applications of DEA, and provide a brief bibliometric analysis highlighting the development of DEA over the years in terms of publication trends, highly cited papers, journal citations, etc.
Abstract: DEA, introduced in the 1980s, has emerged as a popular decision-making technique for determining the efficiency of similar units. Due to its simplicity and applicability, DEA has gained the attention of scientists and researchers working in diverse areas, which has contributed to a rich literature both in terms of theoretical development and different applications. This paper brings together nearly 40 years of DEA in a concise format by discussing the popular DEA models, their advantages and shortcomings, and different applications of DEA. It also provides a brief bibliometric analysis to highlight the development of DEA over the years in terms of publication trends, highly cited papers, journal citations, etc.

8 citations

Journal ArticleDOI
TL;DR: In this article, the authors employed a comprehensive sample of European banks between 2011 and 2021, using a panel fixed-effects regression model to study the relationship between the adoption of financial technology and bank profitability.

7 citations

Journal ArticleDOI
TL;DR: In this article, the authors investigated the impacts of environmental, social and governance (ESG) activity and its components on global bank profitability, considering the COVID-19 outbreak, using a system generalized method of moments (GMM).
Abstract: Purpose: This study investigated the impacts of the environment, social and governance (ESG) and its components on global bank profitability considering the COVID-19 outbreak. Design/methodology/approach: This study used a system generalized method of moments (GMM) proposed by Arellano and Bover (1995) to investigate the relationship between ESG and bank profitability using an unbalanced sample of 487 banks from 51 countries from 2006 to 2021. Findings: The findings generally found that ESG activities may reduce bank profitability, thus supporting the trade-off hypothesis that adopting ESG standards could increase bank costs while lowering profitability. In addition, there is a U-shaped relationship between ESG and bank profitability, suggesting that ESG activities can help improve bank performance in the long term. Such an effect is observed for the first time in the global banking sector. This study's results are robust across different models and settings (e.g. developed vs developing countries, different levels of profitability, and samples with vs without US banks). Practical implications: This study provides empirical evidence to support the sustainable development policy which is implemented by many countries. It also provides empirical incentives for bank managers to be more ESG-oriented in their activities. Originality/value: This study provides a better understanding of the roles of ESG activity and its components in the global banking system, considering the recent crises.

7 citations

Journal ArticleDOI
TL;DR: In this article, a slacks-based data envelopment analysis technical efficiency (TE) measure, a variable returns to scale cost efficiency model and Malmquist productivity index are employed to determine TE, cost efficiency and productivity change, respectively.
Abstract: Purpose: Against the backdrop of an Indian banking sector that finds itself entangled in the triple deadlock of increasing competition, technological changes and strict regulatory compliance, the study aims to examine the need for reinforcing stringent corporate and risk governance mechanisms as an instrument for improving efficiency and productivity levels. Design/methodology/approach: The authors construct three separate indices, namely, supervisory board index, audit index and risk governance index to measure the governance practices of commercial banks. A slacks-based data envelopment analysis technical efficiency (TE) measure, a variable returns to scale cost efficiency model and Malmquist productivity index are employed to determine TE, cost efficiency and productivity change, respectively. A two-step system-generalized method of moments estimation accounts for the dynamic relationship between governance and efficiency. Findings: The authors show that strict audit and risk governance mechanisms are associated with better efficiency and productivity levels. However, consistent with the free-rider hypothesis, large, independent and diverse boards lead to cost inefficiencies. Strict risk governance structures circumvent the negative effects of high regulatory capital and improve efficiency and total factor productivity. However, friendly boards do not perform efficiently in the presence of regulatory capital, implying that incentives arising from maintaining high levels of equity capital make them more susceptible to risk-taking, and board composition is unable to sidestep this behaviour. Originality/value: The paper contributes to the literature that explores the linkages between governance, efficiency and productivity. The inferences hold relevance in the post-COVID world, as regulators try to circumvent the additional stress on the banking system by adopting sound corporate and risk governance mechanisms.

3 citations

Journal ArticleDOI
TL;DR: In this article, a sample of 2352 Clean Development Mechanism (CDM) projects was selected from a United Nations Framework Convention on Climate Change (UNFCCC) database and analyzed first with a two-stage data envelopment analysis (DEA) model that allowed the evaluation of the financial return and environmental efficiency of these projects.
Abstract: The main purpose of this paper is to analyze the efficiency of Clean Development Mechanism (CDM) projects implemented worldwide and understand their main characteristics, seeking to support investment decisions and expand the scientific understanding of the financial and environmental impact of these projects. To achieve this goal, a sample of 2352 CDM projects was selected from a United Nations Framework Convention on Climate Change (UNFCCC) database and analyzed first with a two-stage data envelopment analysis (DEA) model that allowed the evaluation of the financial return and environmental efficiency of these projects. DEA results provide an efficiency ranking that was then analyzed with a classification tree (CHAID algorithm), revealing the main characteristics of the most efficient projects, such as their size, location, and CDM type. CDM projects demand a significant quantity of resources and effort to produce the expected outcomes, so it is crucial for public managers and investors to know the project profiles that generate the best financial and environmental results. In this sense, this study presents a completely original methodology for this kind of analysis and reveals important insights for these agents and researchers in this field.

2 citations

References
Journal ArticleDOI
01 Oct 2001
TL;DR: Internal estimates monitor error, strength, and correlation, and are used to show the response to increasing the number of features used in the splitting; these ideas are also applicable to regression.
Abstract: Random forests are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest. The generalization error for forests converges a.s. to a limit as the number of trees in the forest becomes large. The generalization error of a forest of tree classifiers depends on the strength of the individual trees in the forest and the correlation between them. Using a random selection of features to split each node yields error rates that compare favorably to Adaboost (Y. Freund & R. Schapire, Machine Learning: Proceedings of the Thirteenth International Conference, 148–156), but are more robust with respect to noise. Internal estimates monitor error, strength, and correlation and these are used to show the response to increasing the number of features used in the splitting. Internal estimates are also used to measure variable importance. These ideas are also applicable to regression.
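The construction described above — bootstrap replicates of the data plus a random feature subset per tree, with predictions averaged across trees — can be sketched in a few lines of numpy. This is a toy illustration, not a faithful reimplementation (real random forests grow deep trees and resample features at every node; here each tree is a depth-one stump, and the function names and toy task are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_stump(X, y, feats):
    """Depth-one regression tree restricted to a random feature subset."""
    best_err, best = np.inf, (0, 0.0, y.mean(), y.mean())
    for f in feats:
        for t in np.unique(X[:, f])[:-1]:          # candidate thresholds
            m = X[:, f] <= t
            if m.all() or not m.any():
                continue
            lm, rm = y[m].mean(), y[~m].mean()
            err = ((y[m] - lm) ** 2).sum() + ((y[~m] - rm) ** 2).sum()
            if err < best_err:
                best_err, best = err, (f, t, lm, rm)
    return best  # (feature, threshold, left_mean, right_mean)

def predict_stump(stump, X):
    f, t, lm, rm = stump
    return np.where(X[:, f] <= t, lm, rm)

def random_forest(X, y, n_trees=50, n_feats=1):
    """Breiman-style ensemble: bootstrap the rows, subsample the features."""
    trees = []
    for _ in range(n_trees):
        idx = rng.integers(0, len(X), len(X))              # bootstrap replicate
        feats = rng.choice(X.shape[1], n_feats, replace=False)
        trees.append(fit_stump(X[idx], y[idx], feats))
    return trees

def forest_predict(trees, X):
    # Averaging over trees is what reduces variance without adding bias.
    return np.mean([predict_stump(t, X) for t in trees], axis=0)
```

For classification the averaging step becomes a plurality vote; everything else is unchanged.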

79,257 citations

Journal ArticleDOI
TL;DR: In this article, the authors draw on recent progress in the theory of property rights, agency, and finance to develop a theory of ownership structure for the firm, which casts new light on and has implications for a variety of issues in the professional and popular literature.

49,666 citations

Journal ArticleDOI
TL;DR: This article gives an introduction to the subject of classification and regression trees by reviewing some widely available algorithms and comparing their capabilities, strengths, and weaknesses in two examples.
Abstract: Classification and regression trees are machine-learning methods for constructing prediction models from data. The models are obtained by recursively partitioning the data space and fitting a simple prediction model within each partition. As a result, the partitioning can be represented graphically as a decision tree. Classification trees are designed for dependent variables that take a finite number of unordered values, with prediction error measured in terms of misclassification cost. Regression trees are for dependent variables that take continuous or ordered discrete values, with prediction error typically measured by the squared difference between the observed and predicted values. This article gives an introduction to the subject by reviewing some widely available algorithms and comparing their capabilities, strengths, and weaknesses in two examples. © 2011 John Wiley & Sons, Inc. WIREs Data Mining Knowl Discov 2011, 1, 14–23. DOI: 10.1002/widm.8
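The recursive partitioning described above can be sketched directly: greedily pick the split that most reduces squared error, recurse on each side, and fit the mean in each leaf. This is a minimal CART-style sketch with invented names, omitting the pruning step real implementations use:

```python
import numpy as np

def grow_tree(X, y, depth=3, min_leaf=5):
    """Recursively partition the data; each leaf predicts the mean of y."""
    if depth == 0 or len(y) < 2 * min_leaf:
        return float(y.mean())                        # leaf node
    best, best_err = None, ((y - y.mean()) ** 2).sum()  # error of not splitting
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f])[:-1]:
            m = X[:, f] <= t
            if m.sum() < min_leaf or (~m).sum() < min_leaf:
                continue
            err = (((y[m] - y[m].mean()) ** 2).sum()
                   + ((y[~m] - y[~m].mean()) ** 2).sum())
            if err < best_err:
                best_err, best = err, (f, t)
    if best is None:
        return float(y.mean())
    f, t = best
    m = X[:, f] <= t
    return (f, t, grow_tree(X[m], y[m], depth - 1, min_leaf),
            grow_tree(X[~m], y[~m], depth - 1, min_leaf))

def tree_predict(node, x):
    # Internal nodes are (feature, threshold, left, right); leaves are floats.
    while isinstance(node, tuple):
        f, t, left, right = node
        node = left if x[f] <= t else right
    return node
```

A classification tree differs only in the impurity measure (misclassification cost or Gini instead of squared error) and in predicting the majority class at each leaf.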

16,974 citations

Journal ArticleDOI
01 Aug 1996
TL;DR: Tests on real and simulated data sets using classification and regression trees and subset selection in linear regression show that bagging can give substantial gains in accuracy.
Abstract: Bagging predictors is a method for generating multiple versions of a predictor and using these to get an aggregated predictor. The aggregation averages over the versions when predicting a numerical outcome and does a plurality vote when predicting a class. The multiple versions are formed by making bootstrap replicates of the learning set and using these as new learning sets. Tests on real and simulated data sets using classification and regression trees and subset selection in linear regression show that bagging can give substantial gains in accuracy. The vital element is the instability of the prediction method. If perturbing the learning set can cause significant changes in the predictor constructed, then bagging can improve accuracy.

16,118 citations