
Feature quantization for parsimonious and interpretable predictive models



Document pages: 9 pages

Abstract: For regulatory and interpretability reasons, logistic regression is still widely used. To improve prediction accuracy and interpretability, a preprocessing step quantizing both continuous and categorical data is usually performed: continuous features are discretized and, if numerous, levels of categorical features are grouped. An even better predictive accuracy can be reached by embedding this quantization estimation step directly into the predictive estimation step itself. But doing so, the predictive loss has to be optimized over a huge set. To overcome this difficulty, we introduce a specific two-step optimization strategy: first, the optimization problem is relaxed by approximating discontinuous quantization functions by smooth functions; second, the resulting relaxed optimization problem is solved via a particular neural network. The good performance of this approach, which we call glmdisc, is illustrated on simulated and real data from the UCI library and Crédit Agricole Consumer Finance (a major European historic player in the consumer credit market).
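
One way to picture the relaxation described in the abstract: the discontinuous bin-membership indicators of a continuous feature can be approximated by a softmax over affine scores, and this soft quantization can be fitted jointly with the logistic regression by gradient descent. The sketch below is an illustrative assumption in PyTorch, not the authors' reference implementation of glmdisc; the names (SoftQuantizer, GlmdiscLike, n_bins) and the exact parameterization are hypothetical.

```python
# Minimal sketch (assumption, not the paper's code): smooth quantization of one
# continuous feature via softmax over affine scores, fitted jointly with a
# logistic regression by stochastic gradient descent.
import torch
import torch.nn as nn

class SoftQuantizer(nn.Module):
    """Smooth relaxation of discretization: softmax(a*x + b) ~ one-hot bin membership."""
    def __init__(self, n_bins: int):
        super().__init__()
        self.a = nn.Parameter(torch.randn(n_bins))
        self.b = nn.Parameter(torch.randn(n_bins))

    def forward(self, x):                      # x: (batch, 1) continuous feature
        scores = x * self.a + self.b           # (batch, n_bins) affine scores
        return torch.softmax(scores, dim=1)    # soft bin-membership probabilities

class GlmdiscLike(nn.Module):
    """Logistic regression on the soft-quantized feature; both parts are learned jointly."""
    def __init__(self, n_bins: int):
        super().__init__()
        self.quantizer = SoftQuantizer(n_bins)
        self.logreg = nn.Linear(n_bins, 1)     # one coefficient per (soft) level

    def forward(self, x):
        return self.logreg(self.quantizer(x)).squeeze(1)   # logit of P(y=1 | x)

# Toy usage: the response depends on x through a step-like (discretized) effect.
torch.manual_seed(0)
x = torch.rand(1000, 1) * 10
y = ((x.squeeze(1) > 3) & (x.squeeze(1) < 7)).float()

model = GlmdiscLike(n_bins=4)
opt = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.BCEWithLogitsLoss()
for _ in range(500):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

# After training, hard bins can be read off as the argmax of the softmax,
# recovering an interpretable, quantized logistic regression.
```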
