Non-Convex Potential Function Boosting Versus Noise Peeling: A Comparative Study

University essay from Uppsala universitet / Department of Statistics (Statistiska institutionen)

Author: Viktor Venema (2016)

Abstract: In recent decades, boosting methods have emerged as one of the leading ensemble learning techniques. Among the most popular boosting algorithms is AdaBoost, a highly influential algorithm noted for its excellent performance on many tasks. One of the most explored weaknesses of AdaBoost and many other boosting algorithms is their tendency to overfit to label noise, and consequently several more robust alternative algorithms have been proposed. Among the boosting algorithms designed to accommodate noisy instances, RobustBoost, which optimizes a non-convex potential function, has gained popularity following a recent result showing that every convex potential booster can be misled by random label noise. Contrasting this approach, Martinez and Gray (2016) propose a simple but reportedly effective remedy for the noise problems inherent in traditional AdaBoost: introducing peeling strategies into boosting. This thesis evaluates the robustness of these two alternatives on empirical and synthetic data sets for binary classification. The results indicate that both methods improve robustness over traditional convex potential function boosting algorithms, but not to a significant extent.
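To make the peeling idea concrete, the sketch below shows one plausible form of it: run AdaBoost, treat the instances that end up carrying the largest boosting weights as suspected label noise, remove ("peel") them, and refit on the remainder. The weight-based peeling criterion, the peel_frac parameter, and the scikit-learn decision stumps are illustrative assumptions, not the exact procedure of Martinez and Gray (2016) or of the thesis.

    # Minimal noise-peeling sketch around discrete AdaBoost.
    # Assumptions (not from the thesis): labels in {-1, +1},
    # decision-stump weak learners, and "largest final boosting
    # weight" as the noise criterion.
    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.tree import DecisionTreeClassifier

    def adaboost_weights(X, y, n_rounds=50):
        """Run discrete AdaBoost and return the final per-instance
        weights; points the ensemble keeps misclassifying accumulate
        weight and are candidate label noise."""
        n = len(y)
        w = np.full(n, 1.0 / n)
        for _ in range(n_rounds):
            stump = DecisionTreeClassifier(max_depth=1)
            stump.fit(X, y, sample_weight=w)
            pred = stump.predict(X)
            err = np.clip(w[pred != y].sum(), 1e-10, 1 - 1e-10)
            alpha = 0.5 * np.log((1 - err) / err)
            w *= np.exp(-alpha * y * pred)  # up-weight misclassified points
            w /= w.sum()
        return w

    def peel_and_boost(X, y, peel_frac=0.05, n_rounds=50):
        """Peel the peel_frac highest-weight instances, then refit
        plain AdaBoost on the remaining, presumably cleaner, data."""
        w = adaboost_weights(X, y, n_rounds)
        keep = np.argsort(w)[: int(len(y) * (1 - peel_frac))]
        model = AdaBoostClassifier(n_estimators=n_rounds)
        model.fit(X[keep], y[keep])
        return model

The design point this illustrates is that AdaBoost's exponential reweighting concentrates mass on persistently misclassified points, which under random label noise are disproportionately the mislabeled ones; peeling exploits this side effect of the convex loss rather than replacing the loss itself, which is the route RobustBoost takes with its non-convex potential function.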
