Bankruptcy prediction

The importance of the area stems in part from its relevance to creditors and investors, who must evaluate the likelihood that a firm will go bankrupt.

The history of bankruptcy prediction includes the application of numerous statistical tools as they gradually became available, along with a deepening appreciation of various pitfalls in early analyses.

Bankruptcy prediction has been a subject of formal analysis since at least 1932, when FitzPatrick published a study of 20 pairs of firms, one failed and one surviving, matched by date, size and industry, in The Certified Public Accountant.

In 1967, William Beaver applied t-tests to evaluate the discriminating power of individual accounting ratios within a similar pair-matched sample.
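A univariate test of this kind can be sketched as follows. The ratio values below are entirely hypothetical, and the cash-flow-to-total-debt ratio is used only as an illustrative example of the kind of ratio Beaver examined; the sketch computes Welch's two-sample t statistic to ask whether the ratio differs between surviving and failed firms.

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances assumed)."""
    na, nb = len(a), len(b)
    se = math.sqrt(variance(a) / na + variance(b) / nb)
    return (mean(a) - mean(b)) / se

# Hypothetical cash-flow/total-debt ratios for matched firm pairs
surviving = [0.42, 0.35, 0.51, 0.38, 0.45, 0.40]
failed    = [0.08, 0.15, 0.02, 0.11, 0.05, 0.09]

t = welch_t(surviving, failed)
print(t > 2.0)  # a large |t| suggests the ratio separates the two groups
```

A large t statistic on a given ratio is evidence that the ratio, taken alone, distinguishes failed from surviving firms; a full analysis would also compute the degrees of freedom and a p-value.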

More recent studies have sought to introduce deep learning models for corporate bankruptcy forecasting using textual disclosures.

One proposed procedure consists of four stages. First, sequential forward selection was used to extract the most important features. Second, a rule-based model was chosen to fit the dataset, since its rules can be interpreted directly. Third, a genetic ant colony algorithm (GACA) was introduced; a fitness-scaling strategy and a chaotic operator were incorporated into GACA, forming a new algorithm, fitness-scaling chaotic GACA (FSCGACA), which was used to seek the optimal parameters of the rule-based model. Finally, stratified K-fold cross-validation was used to improve the generalization of the model.
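The first stage, sequential forward selection, can be sketched in a few lines: greedily add whichever remaining feature most improves a scoring function, stopping when no candidate helps. The feature names and the toy scoring function below are hypothetical, chosen only to make the greedy loop observable.

```python
def sequential_forward_selection(features, score, k):
    """Greedy sequential forward selection: repeatedly add the feature
    that most improves `score` on the selected subset, up to k features."""
    selected = []
    remaining = list(features)
    while remaining and len(selected) < k:
        best = max(remaining, key=lambda f: score(selected + [f]))
        if score(selected + [best]) <= score(selected):
            break  # no remaining feature improves the score
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy score: reward two hypothetical "useful" ratios, penalize subset size
useful = {"cash_flow/debt", "net_income/assets"}
score = lambda subset: len(useful & set(subset)) - 0.1 * len(subset)

chosen = sequential_forward_selection(
    ["cash_flow/debt", "sales/assets", "net_income/assets"], score, 2)
print(chosen)  # → ['cash_flow/debt', 'net_income/assets']
```

In the actual pipeline the scoring function would be the cross-validated performance of the rule-based model on each candidate subset, which is why the later stages (parameter search and stratified K-fold validation) interact with this one.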

Some financial data providers have begun applying machine learning models to such datasets in an attempt to predict future bankruptcy risk.
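A minimal sketch of this approach is a logistic regression trained on accounting ratios, with bankruptcy as the binary label. The features and firm data below are invented for illustration; real providers would use far larger feature sets and proper out-of-sample validation.

```python
import math

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Toy logistic regression fit by plain stochastic gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted bankruptcy probability
            g = p - yi                        # gradient of the log loss
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, xi):
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1 if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else 0

# Hypothetical features: [cash_flow/debt, net_income/assets]; label 1 = bankrupt
X = [[0.05, -0.02], [0.10, 0.01], [0.45, 0.08], [0.50, 0.10]]
y = [1, 1, 0, 0]
w, b = train_logistic(X, y)
print([predict(w, b, xi) for xi in X])
```

Because the toy data are linearly separable, the fitted model recovers the training labels; the point of the sketch is the shape of the pipeline (ratios in, bankruptcy probability out), not its predictive power.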