
The work of the last three weeks: Lane line detection, Online boosting, Multiboost

2013-10-09 15:25
Content

Lane line detection
Online boosting
Multiboost

        Long time no update, because I had to prepare for my GRE test and my application materials. Although I did not have much time for research, I still made some progress after all. I will separate it into three parts: lane line detection, online AdaBoost, and MultiBoost.

        Firstly, I finished a small program for lane detection together with a graduate student. As we know, lane lines have two useful characteristics: on the one hand, they show an obvious color contrast with the dark road surface; on the other hand, they have a regular shape, namely straight lines. Based on these observations, we first smooth the image with a Gaussian kernel to suppress noise pixels, and then extract the edges with the Canny algorithm (Figure 1).
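A minimal sketch of this preprocessing step with OpenCV in Python (the file name, kernel size, and Canny thresholds are assumed values for illustration, not the ones from our program):

import cv2

# Load the road image in grayscale; 'road.png' is a placeholder path.
gray = cv2.imread('road.png', cv2.IMREAD_GRAYSCALE)

# Smooth with a Gaussian kernel to suppress noise pixels.
blurred = cv2.GaussianBlur(gray, (5, 5), 0)

# Extract edges with the Canny algorithm (thresholds are assumed).
edges = cv2.Canny(blurred, 50, 150)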

    Once we have the edge map, we can clearly see the lane lines; the question is how to discriminate them from all the other edges. Here we use the simplest method: the Hough line transform. It may return many short segments, but we fit nearby short segments into a single long line, which gives us the position of the lane line (Figure 2). Furthermore, by putting limits on line angle and line length, we can successfully delete many false lines, so that only the true ones remain. This beta version of the program is neither accurate nor robust yet, and I am still optimizing it.
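Continuing the sketch above, OpenCV's probabilistic Hough transform returns line segments directly; the thresholds and the angle/length limits below are assumptions for illustration:

import math
import numpy as np
import cv2

# 'edges' is the Canny edge map from the previous step.
segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                           minLineLength=40, maxLineGap=20)

lane_candidates = []
if segments is not None:
    for x1, y1, x2, y2 in segments[:, 0]:
        angle = abs(math.degrees(math.atan2(y2 - y1, x2 - x1)))
        length = math.hypot(x2 - x1, y2 - y1)
        # Keep segments whose slope and length are plausible for a lane line
        # (the 20-70 degree band and the minimum length are assumed limits).
        if 20.0 < angle < 70.0 and length > 40.0:
            lane_candidates.append((x1, y1, x2, y2))

Nearby surviving segments can then be merged into one long line, for example by averaging their endpoints or refitting them with cv2.fitLine.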

   Secondly, my supervisor asked me to read a master's thesis from my lab, and from it I got to know something about online boosting. Inspired by the thesis, I think it is a good method to improve my previous program. As mentioned before, I have already developed a front-vehicle detector using Haar-like features + AdaBoost. Although it meets the basic requirements, false positives still appear sometimes. The reason is that the system may encounter all kinds of backgrounds, and we cannot collect every possible background as negative samples when training offline classifiers. With online AdaBoost training, which uses the detected targets as positive samples and the current background as negative samples, we no longer need to worry about this problem. On the one hand, the online negative samples (the background) are so specific that the false alarm rate drops rapidly. On the other hand, the objects detected initially also update the classifier, enhancing the hit rate in the following frames. Therefore, I carefully read Helmut Grabner's paper, On-line Boosting and Vision, and N. Oza's paper, Online Bagging and Boosting, and now understand the concrete procedure well. The first paper shows how to distribute the weight of each weak classifier, and the second shows how to update each weak classifier when new samples arrive.
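To make the update step concrete, here is a minimal sketch in the spirit of Oza's online boosting; the weak-learner interface (update/predict) and the use of ±1 labels are my own assumptions, not code from either paper:

import numpy as np

rng = np.random.default_rng()

class OnlineBooster:
    """Online boosting over a fixed set of weak learners (Oza-style sketch)."""

    def __init__(self, weak_learners):
        self.learners = weak_learners       # each needs update(x, y) and predict(x)
        m = len(weak_learners)
        self.lam_correct = np.zeros(m)      # accumulated weight of correct predictions
        self.lam_wrong = np.zeros(m)        # accumulated weight of wrong predictions

    def update(self, x, y):
        """Present one labelled example (y in {-1, +1}) to every weak learner."""
        lam = 1.0                           # importance weight of this example
        for i, h in enumerate(self.learners):
            for _ in range(rng.poisson(lam)):   # train k ~ Poisson(lam) times
                h.update(x, y)
            if h.predict(x) == y:
                self.lam_correct[i] += lam
                eps = self.lam_wrong[i] / (self.lam_correct[i] + self.lam_wrong[i])
                lam *= 1.0 / (2.0 * (1.0 - eps))  # correct: shrink the weight
            else:
                self.lam_wrong[i] += lam
                eps = self.lam_wrong[i] / (self.lam_correct[i] + self.lam_wrong[i])
                lam *= 1.0 / (2.0 * eps)          # wrong: boost the weight

    def predict(self, x):
        """Weighted majority vote, with log((1 - eps) / eps) weights as in AdaBoost."""
        score = 0.0
        for i, h in enumerate(self.learners):
            total = self.lam_correct[i] + self.lam_wrong[i]
            eps = np.clip(self.lam_wrong[i] / total if total else 0.5, 1e-6, 1 - 1e-6)
            score += np.log((1.0 - eps) / eps) * h.predict(x)
        return 1 if score >= 0.0 else -1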

      I downloaded Helmut Grabner's source code and ran it on my own PC. However, there are still several problems with the online boosting approach. First, training takes so much time that it cannot meet the real-time requirement; in addition, we cannot be sure when to search the whole image again to detect new targets. I am still trying to solve these problems. For the first problem, I want to replace AdaBoost with an SVM, because its training time is short and a small number of samples can already give a good classification result. Thus I have begun to read some papers about online SVMs.
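I have not chosen a concrete method yet, but as one illustration of the idea, a linear SVM can be trained incrementally with stochastic gradient descent; the sketch below uses scikit-learn's SGDClassifier with hinge loss, and the feature vectors are random placeholders:

import numpy as np
from sklearn.linear_model import SGDClassifier

# Hinge loss makes SGDClassifier behave like an online linear SVM.
clf = SGDClassifier(loss='hinge')

# Initial batch: the first call to partial_fit must list all classes.
X0 = np.random.rand(20, 64)            # placeholder feature vectors
y0 = np.random.randint(0, 2, 20)       # 0 = background, 1 = vehicle
clf.partial_fit(X0, y0, classes=[0, 1])

# Later frames: update the model with the newly labelled patches only.
X_new = np.random.rand(2, 64)
y_new = np.array([1, 0])
clf.partial_fit(X_new, y_new)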

       Finally, reading so many papers has also given me inspiration for another part of my project: traffic sign recognition. As mentioned before in my blog, I can now detect traffic signs and recognize the speed-limit signs, but I have no way to discriminate the other signs. Now I have found a new algorithm in machine learning: MultiBoost. AdaBoost is only useful for binary classification, so to recognize various kinds of signs we need a classifier that can handle multiclass problems while being as robust as AdaBoost. Well, MultiBoost is just the one! Djalel Benbouzid's paper, MultiBoost: A Multi-purpose Boosting Package, describes how to build such a classifier and provides source code that is free to download. It implements four types of strong classifiers: AdaBoost.MH (Schapire and Singer, 1999), FilterBoost (Bradley and Schapire, 2008), VJCascade (Viola and Jones, 2004), and SoftCascade (Bourdev and Brandt, 2005). I am still learning it, and next time I will write up my results in my blog.
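I have not run MultiBoost itself yet; as a stand-in illustration of what multiclass boosting looks like, scikit-learn's AdaBoostClassifier with the SAMME algorithm generalizes AdaBoost beyond two classes (the sign classes and feature vectors below are made up):

import numpy as np
from sklearn.ensemble import AdaBoostClassifier

# Placeholder data: 64-dim features for three sign classes
# (e.g. 0 = speed limit, 1 = stop, 2 = yield).
X = np.random.rand(300, 64)
y = np.random.randint(0, 3, 300)

# SAMME is a multiclass generalization of AdaBoost; the default weak
# learner is a depth-1 decision tree (a stump).
clf = AdaBoostClassifier(n_estimators=100, algorithm='SAMME')
clf.fit(X, y)
print(clf.predict(X[:5]))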

   All of the above is what I did in the last three weeks (only one week of actual work; the rest was spent on the GRE). I will keep making progress in the future. Thanks for reading ~