Are the Probabilities Right? Dependent Defaults and the Number of Observations Required to Test for Default Rate Accuracy
Roger M. Stein
Volume 4, Number 2, Second Quarter 2006
Users of default prediction models often wish to know how accurate the estimated probabilities are. There are a number of mechanisms for testing this, but one that has found favor due to its intuitive appeal is the examination of goodness of fit between expected and observed default rates. Large data sets are required to test these estimates, particularly when probabilities are small, as in the case of higher-credit-quality borrowers, and the question of how large often arises. In this short note, we demonstrate, based on simple statistical relationships, how a lower bound on the size of a sample may be calculated for such experiments. Where the sample size is fixed, this approach also provides a means for sizing the minimum difference between predicted and empirical default rates that must be observed before concluding that the assumed probability and the observed default rate differ. When firms are not independent (correlation is non-zero), adding more observations does not necessarily produce a confidence bound that narrows quickly. We show how a simple simulation approach can be used to characterize this behavior. To provide some guidance on how severely correlation may affect the confidence bounds of an observed default rate, we suggest an approach that makes use of the limiting distribution of Vasicek (1991) to evaluate the degree to which confidence bounds can be reduced, even with infinite data availability. The main result of the paper is not so much that one can define a likely error bound on an estimate (one can), but that, under realistic conditions, the error bound is necessarily large, implying that it can be exceedingly difficult to validate the levels of default probability estimates using observed data.
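To make the independent-firm case concrete, the sketch below computes the sample-size lower bound implied by the normal approximation to the binomial, together with the minimum detectable difference for a fixed sample size. This is a minimal sketch of the standard relationship, not the paper's own code; function names and parameter values are illustrative.

```python
# A minimal sketch, assuming independent defaults and the normal
# approximation to the binomial; names and values are illustrative.
from scipy.stats import norm

def min_sample_size(p, delta, alpha=0.05):
    """Smallest n needed to detect a difference of `delta` between an
    assumed default probability `p` and the observed default rate at
    two-sided confidence level 1 - alpha."""
    z = norm.ppf(1 - alpha / 2)              # e.g. 1.96 for 95% confidence
    return z ** 2 * p * (1 - p) / delta ** 2

def min_detectable_difference(p, n, alpha=0.05):
    """For a fixed sample of size n, the smallest gap between predicted
    and empirical default rates that is distinguishable from noise."""
    z = norm.ppf(1 - alpha / 2)
    return z * (p * (1 - p) / n) ** 0.5

# Testing a 50 bp default probability to within 10 bp at 95% confidence
# requires roughly 19,000 independent observations.
print(min_sample_size(p=0.005, delta=0.001))         # ~19,111
print(min_detectable_difference(p=0.005, n=19_111))  # ~0.001
```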
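For the dependent case, a simulation along the following lines can characterize how correlation widens the distribution of observed default rates. The one-factor Gaussian setup below is one common way to induce dependence; the paper's exact simulation design may differ, and the correlation and portfolio sizes here are illustrative.

```python
# A minimal simulation sketch, assuming a one-factor Gaussian model of
# dependence; the asset correlation `rho` and other values are illustrative.
import numpy as np
from scipy.stats import norm

def simulate_default_rates(p, rho, n_firms, n_trials, seed=0):
    """Draw n_trials observed default rates for n_firms firms whose
    latent asset values share a single common factor."""
    rng = np.random.default_rng(seed)
    barrier = norm.ppf(p)                            # default threshold
    z = rng.standard_normal((n_trials, 1))           # systematic factor
    eps = rng.standard_normal((n_trials, n_firms))   # idiosyncratic shocks
    assets = np.sqrt(rho) * z + np.sqrt(1 - rho) * eps
    return (assets < barrier).mean(axis=1)           # default rate per trial

# With rho = 0.2, the 90% band around a 1% default rate remains wide even
# for a large portfolio, unlike the independent (rho = 0) case.
rates = simulate_default_rates(p=0.01, rho=0.2, n_firms=2_000, n_trials=2_000)
print(np.percentile(rates, [5, 95]))
```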
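Finally, the limiting distribution of Vasicek (1991) puts a floor under the confidence bounds: even as the number of firms grows without bound, the observed default rate retains the spread sketched below. The quantile function follows the standard closed form for the one-factor model; names and values are illustrative, not taken from the paper.

```python
# A sketch of the Vasicek (1991) limiting distribution of the default
# rate; its quantiles bound what any finite sample can achieve.
from scipy.stats import norm

def vasicek_quantile(p, rho, q):
    """q-th quantile of the limiting default-rate distribution for
    default probability p and asset correlation rho."""
    return norm.cdf((norm.ppf(p) + rho ** 0.5 * norm.ppf(q))
                    / (1 - rho) ** 0.5)

# Even with infinite data, p = 1% and rho = 0.2 leave a 90% band of
# roughly [0.03%, 3.8%] around the observed default rate.
print(vasicek_quantile(0.01, 0.2, 0.05))  # ~0.0003
print(vasicek_quantile(0.01, 0.2, 0.95))  # ~0.038
```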