EXPLAINABLE ARTIFICIAL INTELLIGENCE FOR DETECTING FINANCIAL FRAUD: IMPROVING TRANSPARENCY AND TRUST IN PREDICTIVE MODELS
Abstract
Financial fraud continues to undermine the stability of economies and institutions worldwide, with billions lost annually
despite conventional prevention strategies. Modern fraud schemes are often too complex for traditional rule-based systems,
so more sophisticated approaches are required. This paper presents a methodology for financial fraud detection based on
explainable artificial intelligence (XAI), evaluated on the IEEE-CIS dataset. The methodology incorporates data preprocessing,
feature encoding, PCA-based dimensionality reduction, and class imbalance handling via random oversampling (sketched in code
below). An Artificial Neural Network (ANN) model is implemented and compared against Decision Tree, LightGBM, Logistic
Regression, and CNN classifiers. Results indicate that while the ANN achieves high accuracy (ACC, 97.56%), recall (REC)
remains moderate, reflecting the difficulty of detecting minority fraud cases. To enhance transparency, interpretability
methods such as SHAP and LIME are applied, offering clear insights into model decision-making (see the second sketch below).
A comparative analysis shows that the CNN delivers the best overall balance across metrics, while LightGBM demonstrates
superior precision. The study helps bridge the gap between predictive performance and interpretability, supporting reliable
and regulation-compliant fraud detection, and it provides a framework adaptable to diverse financial datasets, enabling
future improvements in fraud prevention strategies.
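What follows is a minimal, illustrative sketch of the pipeline the abstract describes (scaling, PCA, random oversampling, then an ANN), assuming a scikit-learn / imbalanced-learn stack. The synthetic data, component count, and network architecture are placeholders, not the paper's actual IEEE-CIS configuration.

```python
# Illustrative pipeline: scale -> PCA -> random oversampling -> ANN.
# Synthetic imbalanced data stands in for the IEEE-CIS dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import classification_report
from imblearn.over_sampling import RandomOverSampler

# ~3% positive (fraud) class, mirroring heavy class imbalance.
X, y = make_classification(n_samples=20_000, n_features=50,
                           weights=[0.97, 0.03], random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

# Scale, then reduce dimensionality with PCA (component count is illustrative).
scaler = StandardScaler().fit(X_tr)
pca = PCA(n_components=20).fit(scaler.transform(X_tr))
X_tr_p = pca.transform(scaler.transform(X_tr))
X_te_p = pca.transform(scaler.transform(X_te))

# Oversample the training split only, so the test set keeps its
# natural class distribution.
X_res, y_res = RandomOverSampler(random_state=42).fit_resample(X_tr_p, y_tr)

# A small feed-forward ANN; the layer sizes are placeholders.
ann = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=100, random_state=42)
ann.fit(X_res, y_res)
print(classification_report(y_te, ann.predict(X_te_p), digits=4))
```

Note the ordering choice: oversampling happens after the train/test split and only on the training data, which avoids leaking duplicated fraud cases into the evaluation set.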
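The interpretability step can be sketched the same way. Below, SHAP's TreeExplainer is applied to the LightGBM baseline for per-feature attributions, and LIME explains a single prediction locally; the model, data, and parameters are again illustrative assumptions, not the paper's reported setup.

```python
# Illustrative SHAP + LIME explanations for a LightGBM fraud classifier.
import shap
import lightgbm as lgb
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5_000, n_features=20,
                           weights=[0.97, 0.03], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
model = lgb.LGBMClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# SHAP: per-feature contributions for every prediction. Depending on the
# shap version, binary classifiers may return a list [class 0, class 1].
shap_values = shap.TreeExplainer(model).shap_values(X_te)
sv = shap_values[1] if isinstance(shap_values, list) else shap_values
shap.summary_plot(sv, X_te, show=False)  # global importance view

# LIME: a local explanation for one "transaction" (test row 0).
lime_explainer = LimeTabularExplainer(X_tr, mode="classification")
explanation = lime_explainer.explain_instance(X_te[0], model.predict_proba,
                                              num_features=5)
print(explanation.as_list())  # top feature ranges driving this prediction
```

SHAP yields consistent additive attributions per prediction (useful for global summaries), while LIME fits a local surrogate model around one instance; the abstract's "clear insights into model decision-making" covers both views.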

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.