
Several Algorithms in Data Mining: AdaBoost

2010-04-06 21:36
7 AdaBoost
7.1 Description of the algorithm
Ensemble learning [20] deals with methods which employ multiple learners to solve a problem. The generalization ability of an ensemble is usually significantly better than that of a single learner, so ensemble methods are very attractive. The AdaBoost algorithm [24] proposed by Yoav Freund and Robert Schapire is one of the most important ensemble methods, since it has a solid theoretical foundation, very accurate prediction, great simplicity (Schapire said it needs "just 10 lines of code"), and wide and successful applications.
Let X denote the instance space and Y the set of class labels. Assume Y = {-1, +1}. Given a weak or base learning algorithm and a training set {(x_1, y_1), (x_2, y_2), ..., (x_m, y_m)} where x_i ∈ X and y_i ∈ Y (i = 1, ..., m), the AdaBoost algorithm works as follows. First, it assigns equal weights to all the training examples (x_i, y_i) (i ∈ {1, ..., m}). Denote the distribution of the weights at the t-th learning round as D_t. From the training set and D_t the algorithm generates a weak or base learner h_t : X → Y by calling the base learning algorithm. Then, it uses the training examples to test h_t, and the weights of the incorrectly classified examples will be increased. Thus, an updated weight distribution D_{t+1} is obtained. From the training set and D_{t+1} AdaBoost generates another weak learner by calling the base learning algorithm again. Such a process is repeated for T rounds, and the final model is derived by weighted majority voting of the T weak learners, where the weights of the learners are determined during the training process. In practice, the base learning algorithm may be a learning algorithm which can use weighted training examples directly; otherwise the weights can be exploited by sampling the training examples according to the weight distribution D_t. The pseudo-code of AdaBoost is shown in Fig. 5.

[Fig. 5 The AdaBoost algorithm]
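To make the description above concrete, the following is a minimal sketch of the procedure, assuming a numpy environment, labels in {-1, +1}, and depth-1 decision stumps as the base learner; the function names (train_stump, adaboost, and so on) are illustrative, not taken from the paper, whose authoritative statement is the Fig. 5 pseudo-code.

import numpy as np

def train_stump(X, y, w):
    # Exhaustively pick the threshold stump with the smallest weighted error.
    best = None
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = np.where(X[:, j] <= thr, sign, -sign)
                err = np.sum(w[pred != y])
                if best is None or err < best[0]:
                    best = (err, j, thr, sign)
    return best  # (weighted error, feature index, threshold, polarity)

def stump_predict(X, j, thr, sign):
    return np.where(X[:, j] <= thr, sign, -sign)

def adaboost(X, y, T=50):
    n = X.shape[0]
    D = np.full(n, 1.0 / n)              # D_1: equal weights on all examples
    learners, alphas = [], []
    for t in range(T):
        err, j, thr, sign = train_stump(X, y, D)
        if err >= 0.5:                    # weak learner no better than chance
            break
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
        pred = stump_predict(X, j, thr, sign)
        D *= np.exp(-alpha * y * pred)    # raise the weights of mistakes
        D /= D.sum()                      # renormalize to obtain D_{t+1}
        learners.append((j, thr, sign))
        alphas.append(alpha)
    return learners, alphas

def predict(X, learners, alphas):
    # Weighted majority vote of the T weak learners.
    score = sum(a * stump_predict(X, j, thr, s)
                for a, (j, thr, s) in zip(alphas, learners))
    return np.sign(score)

The exponential weight update is one common way to increase the weights of the incorrectly classified examples while shrinking those of the correctly classified ones, and the per-round weights alpha then serve as the voting weights in predict.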
In order to deal with multi-class problems, Freund and Schapire presented the AdaBoost.M1 algorithm [24], which requires that the weak learners be strong enough even on the hard distributions generated during the AdaBoost process. Another popular multi-class version of AdaBoost is AdaBoost.MH [69], which works by decomposing the multi-class task into a series of binary tasks. AdaBoost algorithms for dealing with regression problems have also been studied. Since many variants of AdaBoost have been developed during the past decade, Boosting has become the most important "family" of ensemble methods.
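As a rough illustration of decomposing a multi-class task into binary tasks, here is a one-vs-rest sketch that reuses the adaboost() and stump_predict() helpers from the Sect. 7.1 sketch; note that the actual AdaBoost.MH formulation is somewhat different (it works over example-label pairs with Hamming loss), so this shows only the general idea, and the function names are illustrative.

import numpy as np

def fit_one_vs_rest(X, y, T=50):
    # One binary AdaBoost ensemble per class: "is the label k, or not?"
    return {k: adaboost(X, np.where(y == k, 1, -1), T) for k in np.unique(y)}

def predict_multiclass(X, models):
    # Choose the class whose binary ensemble casts the largest weighted vote.
    classes = sorted(models)
    scores = []
    for k in classes:
        learners, alphas = models[k]
        scores.append(np.zeros(len(X)) +
                      sum(a * stump_predict(X, j, thr, s)
                          for a, (j, thr, s) in zip(alphas, learners)))
    return np.array(classes)[np.argmax(np.vstack(scores), axis=0)]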
7.2 Impact of the algorithm
As mentioned in Sect. 7.1, AdaBoost is one of the most important ensemble methods, so it is not surprising that its high impact can be observed in many areas. In this short article we only briefly introduce two issues, one theoretical and the other applied.
In 1988, Kearns and Valiant posed an interesting question: whether a weak learning algorithm that performs just slightly better than random guessing could be "boosted" into an arbitrarily accurate strong learning algorithm; in other words, whether the two complexity classes, weakly learnable and strongly learnable problems, are equal. Schapire [67] found that the answer to the question is "yes", and the proof he gave is a construction, which is the first Boosting algorithm. So, it is evident that AdaBoost was born with theoretical significance. AdaBoost has given rise to abundant research on theoretical aspects of ensemble methods, which can be easily found in the machine learning and statistics literature. It is worth mentioning that for their AdaBoost paper [24], Schapire and Freund won the Gödel Prize, one of the most prestigious awards in theoretical computer science, in 2003.
AdaBoost and its variants have been applied to diverse domains with great success. For example, Viola and Jones [84] combined AdaBoost with a cascade process for face detection. They regarded rectangular features as weak learners, and by using AdaBoost to weight the weak learners, they obtained very intuitive features for face detection. In order to get high accuracy as well as high efficiency, they used a cascade process (which is beyond the scope of this article). As a result, they reported a very strong face detector: on a 466 MHz machine, face detection on a 384 × 288 image took only 0.067 seconds, which was 15 times faster than the state-of-the-art face detectors of the time, with comparable accuracy. This face detector has been recognized as one of the most exciting breakthroughs in computer vision (in particular, face detection) during the past decade. It is not surprising that "Boosting" has become a buzzword in computer vision and many other application areas.
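To make the idea of rectangular features concrete, here is a minimal sketch, assuming numpy, of an integral image and a two-rectangle (Haar-like) feature; a thresholded feature value of this kind is what serves as a weak learner for AdaBoost to weight, and the cascade stage is omitted here, as in the text. The function names are illustrative.

import numpy as np

def integral_image(img):
    # ii[r, c] = sum of img[:r, :c]; the zero padding row/column means the
    # rectangle sums below need no special-casing at the image border.
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, top, left, height, width):
    # Sum of the pixels in a rectangle, read off with four array lookups.
    return (ii[top + height, left + width] - ii[top, left + width]
            - ii[top + height, left] + ii[top, left])

def two_rect_feature(ii, top, left, height, width):
    # A horizontal two-rectangle feature: left half minus right half.
    half = width // 2
    return (rect_sum(ii, top, left, height, half)
            - rect_sum(ii, top, left + half, height, width - half))

The integral image lets each feature, and hence each weak learner, be evaluated in constant time, which accounts for much of the detector's speed.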
7.3 Further research
Many interesting topics are worth further study. Here we only discuss one theoretical topic and one applied topic.
Many empirical studies show that AdaBoost often does not overfit, i.e., the test error of AdaBoost often tends to decrease even after the training error reaches zero. Many researchers have studied this and several theoretical explanations have been given, e.g., [38]. Schapire et al. [68] presented a margin-based explanation. They argued that AdaBoost is able to increase the margins even after the training error is zero, and thus it does not overfit even after a large number of rounds. However, Breiman [8] indicated that a larger margin does not necessarily mean better generalization, which seriously challenged the margin-based explanation. Recently, Reyzin and Schapire [65] found that Breiman considered the minimum margin instead of the average or median margin, which suggests that the margin-based explanation still has a chance to survive. If this explanation succeeds, a strong connection between AdaBoost and SVM could be established. It is obvious that this topic is well worth studying.
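For concreteness, the margin discussed in this line of work is usually defined as follows, using the notation of Sect. 7.1 and assuming that \alpha_t \ge 0 denotes the voting weight assigned to weak learner h_t (normalization conventions vary slightly across papers):

\mathrm{margin}(x_i, y_i) \;=\; \frac{y_i \sum_{t=1}^{T} \alpha_t h_t(x_i)}{\sum_{t=1}^{T} \alpha_t}

A correctly classified example has a positive margin, and its magnitude can be read as the confidence of the weighted vote; the debate summarized above concerns which statistic of the margin distribution (minimum versus average or median) governs generalization.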
Many real-world applications are born with high dimensionality, i.e., with a large number of input features. There are two paradigms that can help us deal with such data: dimension reduction and feature selection. Dimension reduction methods are usually based on mathematical projections, which attempt to transform the original features into an appropriate feature space. After dimension reduction, the original meaning of the features is usually lost. Feature selection methods directly select some of the original features to use, and therefore they can preserve the original meaning of the features, which is very desirable in many applications. However, feature selection methods are usually based on heuristics, lacking a solid theoretical foundation. Inspired by Viola and Jones's work [84], we think AdaBoost could be very useful in feature selection, especially considering that it has a solid theoretical foundation. Current research mainly focuses on images, yet we think general AdaBoost-based feature selection techniques are well worth studying.
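As a rough sketch of how AdaBoost-based feature selection could look, assuming the adaboost() helper from the Sect. 7.1 sketch with its single-feature stumps: each round spends one weak learner on one feature, so the feature indices chosen by the stumps, ranked by their accumulated voting weights, can be read off as the selected subset. The function name select_features is illustrative.

from collections import defaultdict

def select_features(X, y, T=50, k=10):
    learners, alphas = adaboost(X, y, T)   # each stump uses a single feature
    importance = defaultdict(float)
    for (j, thr, sign), a in zip(learners, alphas):
        importance[j] += a                 # accumulate voting weight per feature
    ranked = sorted(importance, key=importance.get, reverse=True)
    return ranked[:k]                      # indices of the k most useful features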
http://dminning.com/html/93/n-93.html