Proceedings ArticleDOI
An Explainable Hybrid Model for Bankruptcy Prediction Based on the Decision Tree and Deep Neural Network
Tsung-Nan Chou
pp. 122–125
TL;DR
A hybrid approach integrating a decision tree with a deep neural network was proposed as a compromise solution for investors: the decision tree provides interpretability, the deep neural network improves predictive accuracy, and the hybrid model raised overall accuracy to 91%.
Abstract
For investors seeking to optimize a portfolio of assets to generate profits and minimize losses, choosing a sophisticated model to evaluate the risk of corporate financial distress is crucial to support their asset management and investment decisions. Both machine learning and the more recently developed deep learning techniques have been employed to construct bankruptcy prediction models for decades. However, applying deep learning models may increase predictive accuracy at the cost of model interpretability, because their structure and parameters do not readily provide accountability to investors. In this study, a hybrid approach integrating a decision tree with a deep neural network was proposed to offer investors a compromise solution. The decision tree was adopted as the primary model to provide explainability, while the deep neural network was chosen to improve predictive accuracy. The decision fusion of the two models was designed with compensatory and non-compensatory approaches. The hybrid model was implemented by concatenating the deep neural network to the selected branches of the decision tree that showed poor predictive accuracy during model training. The empirical results showed that the predictive accuracies of the deep neural network and the decision tree were 80% and 87% respectively, and the hybrid model improved the overall accuracy to 91%.
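The routing idea described in the abstract can be sketched as follows. This is not the authors' code: the dataset, models, and the 0.9 leaf-accuracy threshold are illustrative assumptions. A decision tree is trained as the primary, explainable model; leaves whose training accuracy falls below the threshold are treated as the "weak branches", and samples landing in them are re-routed to a neural network.

```python
# Hedged sketch of the hybrid decision-tree / DNN idea (illustrative,
# not the paper's implementation): the tree predicts by default, and a
# neural network takes over in tree leaves with poor training accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for a bankruptcy dataset (assumption).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                    random_state=0).fit(X_tr, y_tr)

# Per-leaf training accuracy: leaves below the (assumed) threshold are
# the weak branches that the neural network will handle instead.
leaves_tr = tree.apply(X_tr)
tree_pred_tr = tree.predict(X_tr)
weak_leaves = {
    leaf for leaf in np.unique(leaves_tr)
    if (tree_pred_tr[leaves_tr == leaf] == y_tr[leaves_tr == leaf]).mean() < 0.9
}

def hybrid_predict(X_new):
    """Tree prediction, except in weak leaves, where the DNN decides."""
    pred = tree.predict(X_new)
    in_weak = np.isin(tree.apply(X_new), list(weak_leaves))
    if in_weak.any():
        pred[in_weak] = net.predict(X_new[in_weak])
    return pred

acc = (hybrid_predict(X_te) == y_te).mean()
print(f"hybrid test accuracy: {acc:.2f}")
```

Samples that stay in strong leaves keep a fully traceable decision path, which is the non-compensatory side of the fusion; only the weak-branch samples give up interpretability for the network's accuracy.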
Citations
Journal ArticleDOI
Explainable Artificial Intelligence (XAI): What we know and what is left to attain Trustworthy Artificial Intelligence
Shaukat Ali, Tamer AbuHmed, Shaker El-Sappagh, Khan Muhammad, J. Alonso-Moral, Roberto Confalonieri, Riccardo Guidotti, Javier Del Ser, Natalia Díaz-Rodríguez, Francisco Herrera +9 more
TL;DR: XAI has become a popular research subject within the AI field in recent years, driven by the need for eXplainable AI (XAI) methods that improve trust in AI models.
Journal Article
Performance Evaluation of Explainable Machine Learning on Non-Communicable Diseases
TL;DR: It is demonstrated how model-agnostic eXplainable AI (XAI) methods can help explain black-box models on non-communicable disease (NCD) datasets.
Book ChapterDOI
Machine Learning in Finance: Towards Online Prediction of Loan Defaults Using Sequential Data with LSTMs
V. A. Kandappan, A. G. Rekha +1 more
TL;DR: In this article, two approaches based on LSTM (Long Short-Term Memory), along with a hybrid neural network architecture, are proposed to capture the context between financial transactions and loan defaults; the novelty of the proposed methods lies in how they handle structured data and the associated temporal data.
Proceedings ArticleDOI
Hybrid Explainable Smart House Control System
TL;DR: In this paper, a hybrid method for a smart house control system is proposed, in which control is learned by training a neural network on user habit data and then applying a fuzzy rule set and a computing-with-words engine to explain to the user why a change to lighting, heating, or ventilation was made.
Journal ArticleDOI
Applications of Explainable Artificial Intelligence in Finance—a systematic review of Finance, Information Systems, and Computer Science literature
TL;DR: This article provides an overview of explainable Artificial Intelligence (XAI) in finance through a systematic literature review screening 2,022 articles from leading Finance, Information Systems, and Computer Science outlets.
References
Proceedings ArticleDOI
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
TL;DR: In this article, the authors propose LIME, a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem.
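The core LIME idea summarized above can be sketched without the authors' library: perturb the instance being explained, weight the perturbations by proximity, and fit a sparse linear surrogate whose coefficients serve as the local explanation. The black-box model, kernel width, and sample count below are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch of the LIME idea (illustrative, not the reference
# implementation): a locally weighted linear surrogate approximates a
# black-box classifier around one instance, and its coefficients act
# as per-feature explanations.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)  # assumed model

def lime_explain(x, n_samples=1000, kernel_width=1.0):
    """Return per-feature weights of a local linear surrogate at x."""
    rng = np.random.default_rng(0)
    # Perturb the instance using the data's per-feature scale.
    Z = x + rng.normal(scale=X.std(axis=0), size=(n_samples, x.size))
    probs = black_box.predict_proba(Z)[:, 1]           # black-box outputs
    dists = np.linalg.norm(Z - x, axis=1)
    weights = np.exp(-(dists ** 2) / kernel_width ** 2)  # proximity kernel
    surrogate = Ridge(alpha=1.0).fit(Z, probs, sample_weight=weights)
    return surrogate.coef_

coef = lime_explain(X[0])
print("local feature weights:", np.round(coef, 3))
```

The published method additionally enforces sparsity and selects a non-redundant set of such explanations via submodular optimization; this sketch only shows the local-surrogate step.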
Journal ArticleDOI
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
Amina Adadi, Mohammed Berrada +1 more
TL;DR: This survey provides an entry point for interested researchers and practitioners to learn key aspects of the young and rapidly growing body of research related to XAI; it reviews the existing approaches, discusses surrounding trends, and presents major research trajectories.
Journal ArticleDOI
Bankruptcy prediction using neural networks
Rick L. Wilson, Ramesh Sharda +1 more
TL;DR: The study indicates that neural networks perform significantly better than discriminant analysis at predicting firm bankruptcies, and implications for the accounting professional, neural networks researcher and decision support system builders are highlighted.
Journal ArticleDOI
Toward Human-Understandable, Explainable AI
TL;DR: The author introduces XAI concepts, and gives an overview of areas in need of further exploration—such as type-2 fuzzy logic systems—to ensure such systems can be fully understood and analyzed by the lay user.
Journal ArticleDOI
Deep Neural Network Initialization With Decision Trees
TL;DR: By combining the user-friendly features of decision tree models with the flexibility and scalability of deep neural networks, DJINN offers an attractive algorithm for training predictive models on a wide range of complex data sets.