BazEkon - Biblioteka Główna Uniwersytetu Ekonomicznego w Krakowie


Piegat Andrzej (West Pomeranian University of Technology in Szczecin), Landowski Marek (Maritime University of Szczecin)
Specialized, MSE-Optimal m-Estimators of the Rule Probability Especially Suitable for Machine Learning
Control and Cybernetics, 2014, vol. 43, no. 1, pp. 133-160, figures, tables, bibliography pp. 158-160
Keywords
Machine learning, Data Mining, Decision tree
The paper presents an improved sample-based estimation of rule probability, an important indicator of rule quality and credibility in machine learning systems. It concerns rules obtained, e.g., with the use of decision trees or rough set theory. Particular rules are frequently supported by only a small or very small number of data pieces. Rule probability is mostly investigated with global estimators such as the frequency estimator, the Laplace estimator, or the m-estimator, constructed for the full probability interval [0,1]. The paper shows that the precision of rule-probability estimation can be considerably increased by the use of m-estimators specialized for the interval [phmin, phmax] given by the problem expert. The paper also presents a new interpretation of the m-estimator parameters that can be optimized in the estimators. (original abstract)
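The global estimators mentioned in the abstract can be sketched as follows. The frequency, Laplace, and m-estimator formulas below are the standard ones from the rule-learning literature; the `specialized_m_estimate` function is only an illustrative assumption of how an expert interval [phmin, phmax] might constrain the estimate (midpoint prior plus clipping), not the MSE-optimal construction derived in the paper itself.

```python
def frequency_estimate(nh, n):
    """Relative frequency nh/n; undefined (division by zero) for n == 0."""
    return nh / n

def laplace_estimate(nh, n):
    """Laplace correction for a two-outcome rule: (nh + 1) / (n + 2)."""
    return (nh + 1) / (n + 2)

def m_estimate(nh, n, m, p_prior):
    """Global m-estimator: shrinks the frequency toward the prior p_prior,
    with m acting as the equivalent number of prior samples."""
    return (nh + m * p_prior) / (n + m)

def specialized_m_estimate(nh, n, m, ph_min, ph_max):
    """Illustrative (assumed) interval-specialized variant: use the midpoint
    of the expert interval as the prior and clip the result into the interval."""
    p = m_estimate(nh, n, m, (ph_min + ph_max) / 2)
    return min(max(p, ph_min), ph_max)
```

For small samples the shrinkage matters: with nh = 0, n = 1 the frequency estimate is 0, while the m-estimate stays near the prior, which is the behavior the paper's specialized estimators refine.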
Available at
Biblioteka Główna Uniwersytetu Ekonomicznego w Krakowie
Biblioteka Szkoły Głównej Handlowej w Warszawie
Full text
  1. CESTNIK, B. (1990), Estimating probabilities: A crucial task in machine learning. In: L. C. Aiello (Ed.), ECAI'90. Pitman, London, 147-149.
  2. CESTNIK, B. (1991), Estimating probabilities in machine learning. Ph.D. thesis, University of Ljubljana, Faculty of Computer and Information Science.
  3. CHAWLA, N. V., CIEŚLAK, D. A. (2006), Evaluating calibration of probability estimation from decision trees. AAAI Workshop on the Evaluation Methods in Machine Learning, The AAAI Press, Boston, July 2006, 18-23.
  4. CICHOSZ, P. (2000), Systemy uczące się (Learning systems). Wydawnictwo Naukowo Techniczne, Warsaw, Poland.
  5. CUSSENS, J. (1993), Bayes and pseudo-Bayes estimates of conditional probabilities and their reliabilities. In: Proceedings of European Conference on Machine Learning, ECML-93. LNCS 667, 136-152.
  6. FÜRNKRANZ, J., FLACH, P. A. (2005), ROC 'n' rule learning - towards a better understanding of covering algorithms. Machine Learning, 58(1), 39-77.
  7. HAJEK, A. (2010), Website: Interpretations of probability. The Stanford Encyclopedia of Philosophy (E.N. Zalta ed.). Available from: http://plato.
  8. LAROSE, D. T. (2010), Discovering Statistics. W. H. Freeman and Company, New York.
  9. LUTZ, H., WENDT, W. (1998), Taschenbuch der Regelungstechnik. Verlag Harri Deutsch, Frankfurt am Main.
  10. MOZINA, M., DEMSAR, J., ZABKAR, J., BRATKO, I. (2006), Why is rule learning optimistic and how to correct it. In: European Conference on Machine Learning, ECML 2006. LNCS 4212, 330-340.
  11. PIEGAT, A., LANDOWSKI, M. (2012), Optimal estimator of hypothesis probability for data mining problems with small samples. Int. J. Appl. Math. Comput. Sci., 22, 3, 629-645.
  12. POLKOWSKI, L. (2002), Rough Sets. Physica-Verlag, Heidelberg, New York.
  13. ROKACH, L., MAIMON, O. (2008), Data mining with decision trees, theory and applications. Machine Perception and Artificial Intelligence, 69. World Scientific Publishing Co. Pte. Ltd, New Jersey, Singapore.
  14. SIEGLER, R. S. (1976), Three Aspects of Cognitive Development. Cognitive Psychology, 8, 481-520.
  15. SIEGLER, R. S. (1994), Balance Scale Weight & Distance Database. UCI Machine Learning Repository. Available from: ml/datasets/Balance+Scale.
  16. STARZYK, A., WANG, F. (2004), Dynamic probability estimator for machine learning. IEEE Transactions on Neural Networks, March 15(2), 298-308.
  17. SULZMANN, J. N., FÜRNKRANZ, J. (2009), An empirical comparison of probability estimation techniques for probabilistic rules. In: J. Gama, V. S. Costa, A. Jorge, P. Brazdil, Proceedings of the 12th International Conference on Discovery Science (DS-09), Porto, Portugal. Springer-Verlag, 317-331.
  18. SULZMANN, J. N., FÜRNKRANZ, J. (2010), Probability estimation and aggregation for rule learning. Technical Report TUD-KE-201-03, TU Darmstadt, Knowledge Engineering Group.
  19. WITTEN, I. H., FRANK, E. (2005), Data Mining. Second edition, Elsevier, Amsterdam.
  20. ZADROZNY, B., ELKAN, C. (2001), Learning and decision making when costs and probabilities are both unknown. In: Proceedings of the Seventh International Conference on Knowledge Discovery and Data Mining. San Francisco, August 2001. ACM, 204-213.
  21. ZHANG, Z. (1995), Parameter Estimation Techniques: A Tutorial with Application to Conic Fitting. M estimators. INRIA. Available from: http:// Estim/Main.html.
  22. ZIARKO, W. (1999), Decision making with probabilistic decision tables. In: N. Zhong, ed., RSFDGrC'99 Proceedings of the 7th International Workshop on New Directions in Rough Sets, Data Mining, and Granular-Soft Computing, Yamaguchi, Japan. Springer-Verlag, Berlin, Heidelberg, New York, 463-471.
  23. VON MISES, R. (1957), Probability, Statistics and the Truth. Macmillan, Dover, New York.