BazEkon - The Main Library of the Cracow University of Economics

Gatnar Eugeniusz (Akademia Ekonomiczna im. Karola Adamieckiego w Katowicach)
Implementacja metod łączenia modeli dyskryminacyjnych w programie R
The Implementation of Ensemble Methods in R
Prace Naukowe Uniwersytetu Ekonomicznego we Wrocławiu. Taksonomia (16), 2009, nr 47, s. 33-40, tab., bibliogr. 14 poz.
Research of Wrocław University of Economics
Issue title: Klasyfikacja i analiza danych - teoria i zastosowania
Keywords: Discriminant analysis, Decision making
The article presents a review of the available packages containing additional procedures written in the R language. Their use is also demonstrated in example programs written in R, together with the results of comparative analyses. (text fragment)

Model aggregation is a well-known technique used to improve classification accuracy in many applications. In this paper, we review a number of available packages in the R environment that can be used for aggregation of classification models. We also compare the CPU times required when procedures from the different packages are applied. The comparison was done for five data sets from the UCI Repository. (original abstract)
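A minimal R sketch of the kind of comparison the abstract describes: fitting bagging, random forest, and boosting ensembles and timing each procedure with `system.time`. This is not the paper's original code; the CRAN packages `ipred`, `randomForest`, and `adabag`, the `iris` data set, and the parameter values are illustrative assumptions.

```r
## Illustrative sketch only; assumes the CRAN packages below are installed.
library(ipred)          # bagging()      - Breiman (1996)
library(randomForest)   # randomForest() - Breiman (2001)
library(adabag)         # boosting()     - AdaBoost, Freund & Schapire (1996)

data(iris)  # stand-in for one of the UCI data sets

## Time each ensemble procedure on the same classification task
t.bag <- system.time(
  m.bag <- bagging(Species ~ ., data = iris, nbagg = 50)
)
t.rf <- system.time(
  m.rf <- randomForest(Species ~ ., data = iris, ntree = 50)
)
t.boost <- system.time(
  m.boost <- boosting(Species ~ ., data = iris, mfinal = 50)
)

## Tabulate elapsed CPU times for the three packages
rbind(bagging = t.bag, randomForest = t.rf, boosting = t.boost)[, "elapsed"]
```

Each call trains an ensemble of 50 base classifiers, so the elapsed times are roughly comparable across packages, though the base learners and internal defaults differ.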
Availability in libraries:
The Main Library of the Cracow University of Economics
The Library of University of Economics in Katowice
The Main Library of Poznań University of Economics and Business
The Main Library of the Wroclaw University of Economics
Full text
  1. Breiman L. (1996), Bagging predictors, "Machine Learning" no. 24, pp. 123-140.
  2. Breiman L. (2001), Random forests, "Machine Learning" no. 45, pp. 5-32.
  3. Condorcet, Marquis de (1785), Essai sur l'application de l'analyse à la probabilité des décisions rendues à la pluralité des voix, Paris.
  4. Dettling M., Bühlmann P. (2003), Boosting for tumor classification with gene expression data, "Bioinformatics" vol. 19, pp. 1061-1069.
  5. Freund Y., Schapire R. (1996), A decision-theoretic generalization of on-line learning and an application to boosting, "Journal of Computer and System Sciences" no. 55, pp. 119-139.
  6. Friedman J. (2002), Stochastic gradient boosting, "Computational Statistics and Data Analysis" vol. 38(4), pp. 367-378.
  7. Friedman J., Hastie T., Tibshirani R. (2000), Additive logistic regression: a statistical view of boosting, "Annals of Statistics" no. 28(2), pp. 337-407.
  8. Hansen L.K., Salamon P. (1990), Neural network ensembles, "IEEE Transactions on Pattern Analysis and Machine Intelligence" vol. 12, pp. 993-1001.
  9. Hothorn T., Lausen B. (2005), Bundling classifiers by bagging trees, "Computational Statistics & Data Analysis" vol. 49, pp. 1068-1078.
  10. Krogh A., Vedelsby J. (1995), Neural network ensembles, cross validation, and active learning, [in:] G. Tesauro, D. Touretzky, T. Leen (eds.), Advances in Neural Information Processing Systems, vol. 7, MIT Press, pp. 231-238.
  11. Ridgeway G. (1999), The state of boosting, "Computing Science and Statistics" vol. 31, pp. 172-181.
  12. Shapley L., Grofman B. (1984), Optimizing group judgmental accuracy in the presence of interdependencies, "Public Choice" no. 43, pp. 329-343.
  13. Therneau T.M., Atkinson E.J. (1997), An introduction to recursive partitioning using the RPART routines, Mayo Foundation, Rochester.
  14. Tumer K., Ghosh J. (1996), Analysis of decision boundaries in linearly combined neural classifiers, "Pattern Recognition" no. 29, pp. 341-348.