Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/251197 
Year of Publication: 2022
Series/Report no.: Deutsche Bundesbank Discussion Paper No. 04/2022
Publisher: Deutsche Bundesbank, Frankfurt a. M.
Abstract:
The transformation of credit scores into probabilities of default plays an important role in credit risk estimation. Linear logistic regression has become a standard calibration approach in the banking sector. With the advent of machine learning techniques in the discriminatory phase of credit risk models, however, this standard calibration approach has come under scrutiny again; in particular, the assumptions behind linear logistic regression provide critics with a target. Previous literature has converted the calibration problem into a regression task without any loss of generality. In this paper, we draw on recent academic results to suggest two new one-parametric families of differentiable functions as candidates for this regression. Both families are derived from the maximum entropy principle and therefore rely on a minimum number of assumptions. We compare the performance of four calibration approaches on a real-world data set and find that one of the new one-parametric families outperforms linear logistic regression. Furthermore, we develop an approach to quantify the part of the overall estimation error of probabilities of default that stems from the statistical dispersion of the discriminatory power.
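For readers unfamiliar with the calibration step described in the abstract, the following is a minimal sketch of score-to-PD calibration with a linear logistic regression. The simulated scores, the assumed score-PD relationship, and all variable names are hypothetical illustrations; they do not reproduce the paper's data set or the two maximum-entropy families it proposes.

```python
# Minimal sketch: calibrating credit scores to probabilities of default (PDs)
# with a linear logistic regression. All data below are simulated for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical portfolio: higher score = better credit quality, lower default risk.
scores = rng.uniform(300, 850, size=5000)
true_pd = 1.0 / (1.0 + np.exp(0.02 * (scores - 600)))  # assumed latent score-PD relationship
defaults = rng.binomial(1, true_pd)                     # observed default indicators (0/1)

# Standard calibration approach: fit P(default | score) with a linear logistic regression.
calibrator = LogisticRegression()
calibrator.fit(scores.reshape(-1, 1), defaults)

# Map a grid of scores to calibrated PDs.
grid = np.array([[400.0], [550.0], [700.0]])
pd_hat = calibrator.predict_proba(grid)[:, 1]
for s, p in zip(grid.ravel(), pd_hat):
    print(f"score {s:.0f} -> estimated PD {p:.3%}")
```

In the paper's framing, the logistic link used in this regression would presumably be replaced by one of the proposed one-parametric families derived from the maximum entropy principle; the sketch above only shows the standard approach against which those candidates are compared.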
Subjects: Calibration; credit score; cumulative accuracy profile; logistic regression; margin of conservatism; probability of default
JEL: G17, G21, G33
ISBN: 978-3-95729-870-6
Document Type: Working Paper
