Methods Inf Med 2005; 44(05): 704-711
DOI: 10.1055/s-0038-1634028
Original Article
Schattauer GmbH

Improving Model Robustness with Bootstrapping

Application to Optimal Discriminant Analysis for Ordinal Responses (ODAO)
G. Le Teuff 1, C. Quantin 1, A. Venot 2, E. Walter 3, J. Coste 2

1   Department of Biostatistics and Medical Informatics, Dijon University Hospital, France
2   Department of Biostatistics and Medical Informatics, Cochin-Port Royal University Hospital, Paris, France
3   Laboratoire des Signaux et Systèmes, CNRS, Supélec, Université Paris-Sud, France

Publication History

Received: 20 April 2004

Accepted: 13 February 2005

Publication Date:
07 February 2018 (online)

Summary

Objective: Recent results published by Coste et al. on discriminant analysis with ordinal responses showed the superiority of optimal discriminant analysis for ordinal responses (ODAO), both in classification performance and in simplicity of implementation, over classical methods (Fisher’s discriminant analysis, logistic regression) applied to medical data (prognosis of burns) and to simulated data. Nevertheless, the solutions obtained by ODAO may be sensitive to re-sampling (i.e., the coefficients estimated by ODAO may show excessive sensitivity to the training sample). This study proposes solutions to control sampling fluctuations and to ensure model stability.

Methods: We used computationally intensive methods and bootstrapping at the outset of model building in order to reduce the sampling variability of the estimated coefficients. The coefficients were thus estimated not by minimizing a classification criterion on the training sample itself, but by minimizing an aggregate, over bootstrapped replications of the training sample, of that classification criterion. Five aggregate criteria were studied.
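The idea of replacing the training-sample criterion by an aggregate over bootstrap replications can be sketched as follows. This is a toy illustration only, not the authors’ ODAO implementation: it uses a single-threshold binary classifier, the misclassification rate as the criterion, and the mean as the aggregate; all function names and the data are hypothetical.

```python
import random

def misclassification_rate(theta, sample):
    # Classification criterion: fraction of points misclassified by
    # the rule "predict class 1 if x >= theta" (a toy stand-in for
    # the ODAO criterion).
    return sum((x >= theta) != y for x, y in sample) / len(sample)

def bootstrap_aggregate_criterion(theta, data, n_boot=200, rng_seed=0):
    # Aggregate the criterion over bootstrap replications of the
    # training sample; here the aggregate is the mean (the paper
    # studies five aggregates, e.g. other summaries of the replicates).
    rng = random.Random(rng_seed)
    rates = []
    for _ in range(n_boot):
        resample = [rng.choice(data) for _ in data]  # sample with replacement
        rates.append(misclassification_rate(theta, resample))
    return sum(rates) / len(rates)

# Usage: grid search for the threshold minimizing the aggregated
# criterion instead of the raw training-sample criterion.
data = [(x / 10, int(x / 10 >= 0.5)) for x in range(11)]  # toy separable data
best = min((t / 20 for t in range(21)),
           key=lambda t: bootstrap_aggregate_criterion(t, data))
```

In this sketch the minimizer of the aggregated criterion coincides with a perfect separator of the toy data; on noisy data the aggregate instead favors thresholds that remain good across resamples, which is the intended robustness effect.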

Results: The improvement in terms of robustness appeared in 30% of the test cases with moderate training sample size and 55% of those with small training sample size.

Conclusion: The simulated test cases showed that bootstrapping can help construct more robust models in difficult classification situations and with small training samples, which are particularly frequent in practice.