Decision Tree Code Implementation
2014-08-19 15:57
Code overview:
Function createDataSet(): build the toy dataset and its feature labels
Function calcShannonEnt(dataSet): compute the Shannon entropy of the class labels
Function splitDataSet(dataSet, axis, value): split the dataset on feature `axis` by `value`
Function chooseBestFeatureToSplit(dataSet): pick the feature to split on, by information gain
Function majorityCnt(classList): majority vote among class labels (used when no features remain)
Function createTree(dataSet, labels): recursively build the decision tree as nested dicts
Function classify(inputTree, featLabels, testVec): classify a sample with the built tree
```python
from math import log


def createDataSet():
    """Build the toy dataset and its feature labels."""
    dataSet = [[1, 1, 'yes'],
               [1, 1, 'yes'],
               [1, 0, 'no'],
               [0, 1, 'no'],
               [0, 1, 'no']]
    labels = ['no surfacing', 'flippers']
    return dataSet, labels


def calcShannonEnt(dataSet):
    """Compute the Shannon entropy of the class labels (last column)."""
    numEntries = len(dataSet)
    labelCounts = {}
    for featVec in dataSet:  # count the occurrences of each class label
        currentLabel = featVec[-1]
        if currentLabel not in labelCounts:
            labelCounts[currentLabel] = 0
        labelCounts[currentLabel] += 1
    shannonEnt = 0.0
    for key in labelCounts:
        prob = float(labelCounts[key]) / numEntries
        shannonEnt -= prob * log(prob, 2)  # log base 2
    return shannonEnt


def splitDataSet(dataSet, axis, value):
    """Return the rows whose feature `axis` equals `value`, with that column removed."""
    retDataSet = []
    for featVec in dataSet:
        if featVec[axis] == value:
            reducedFeatVec = featVec[:axis]  # chop out the axis used for splitting
            reducedFeatVec.extend(featVec[axis + 1:])
            retDataSet.append(reducedFeatVec)
    return retDataSet


def chooseBestFeatureToSplit(dataSet):
    """Pick the feature with the largest information gain."""
    numFeatures = len(dataSet[0]) - 1  # the last column holds the class labels
    baseEntropy = calcShannonEnt(dataSet)
    bestInfoGain = 0.0
    bestFeature = -1
    for i in range(numFeatures):  # iterate over all features
        featList = [example[i] for example in dataSet]
        uniqueVals = set(featList)  # unique values of this feature
        newEntropy = 0.0
        for value in uniqueVals:
            subDataSet = splitDataSet(dataSet, i, value)
            prob = len(subDataSet) / float(len(dataSet))
            newEntropy += prob * calcShannonEnt(subDataSet)
        infoGain = baseEntropy - newEntropy  # reduction in entropy
        if infoGain > bestInfoGain:  # keep the best gain seen so far
            bestInfoGain = infoGain
            bestFeature = i
    return bestFeature
```
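As a quick sanity check (not part of the original post), the base entropy that calcShannonEnt returns for this dataset can be computed directly: two of the five samples are 'yes' and three are 'no'.

```python
from math import log

# H = -(2/5)*log2(2/5) - (3/5)*log2(3/5) for 2 'yes' and 3 'no' labels
probs = [2 / 5, 3 / 5]
entropy = -sum(p * log(p, 2) for p in probs)
print(round(entropy, 6))  # 0.970951
```

This matches the 0.97095... value printed by the test script below.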
```python
import operator


def majorityCnt(classList):
    """Return the class label that occurs most often in classList."""
    classCount = {}
    for vote in classList:
        if vote not in classCount:
            classCount[vote] = 0
        classCount[vote] += 1
    sortedClassCount = sorted(classCount.items(),
                              key=operator.itemgetter(1), reverse=True)
    return sortedClassCount[0][0]


def createTree(dataSet, labels):
    """Recursively build the decision tree as nested dicts."""
    classList = [example[-1] for example in dataSet]
    if classList.count(classList[0]) == len(classList):
        return classList[0]  # stop splitting when all classes are equal
    if len(dataSet[0]) == 1:  # stop splitting when no features remain
        return majorityCnt(classList)
    bestFeat = chooseBestFeatureToSplit(dataSet)
    bestFeatLabel = labels[bestFeat]
    myTree = {bestFeatLabel: {}}
    del labels[bestFeat]
    featValues = [example[bestFeat] for example in dataSet]
    uniqueVals = set(featValues)
    for value in uniqueVals:
        subLabels = labels[:]  # copy labels so recursion doesn't clobber them
        myTree[bestFeatLabel][value] = createTree(
            splitDataSet(dataSet, bestFeat, value), subLabels)
    return myTree


def classify(inputTree, featLabels, testVec):
    """Walk the tree with testVec until a leaf label is reached."""
    firstStr = next(iter(inputTree))  # the feature name at the root of this subtree
    secondDict = inputTree[firstStr]
    featIndex = featLabels.index(firstStr)
    key = testVec[featIndex]
    valueOfFeat = secondDict[key]
    if isinstance(valueOfFeat, dict):
        classLabel = classify(valueOfFeat, featLabels, testVec)
    else:
        classLabel = valueOfFeat
    return classLabel
```
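To illustrate how classify descends the nested-dict tree, here is an iterative rewrite of the same traversal (a sketch for illustration, not the post's function), applied to the tree this dataset produces:

```python
# The tree createTree builds for the sample dataset, written out by hand.
tree = {'no surfacing': {0: 'no', 1: {'flippers': {0: 'no', 1: 'yes'}}}}

def classify_dict(node, feat_labels, test_vec):
    """Descend the nested dict until a leaf label (a non-dict) is reached."""
    while isinstance(node, dict):
        feat = next(iter(node))                    # feature name at this node
        value = test_vec[feat_labels.index(feat)]  # this sample's value for it
        node = node[feat][value]                   # follow the matching branch
    return node

print(classify_dict(tree, ['no surfacing', 'flippers'], [1, 0]))  # no
print(classify_dict(tree, ['no surfacing', 'flippers'], [1, 1]))  # yes
```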
Test code: save the functions above into a file named trees.py (the script imports the module as `trees`).
```python
import trees

myDat, labels = trees.createDataSet()
print(myDat)
print(labels)
print(trees.calcShannonEnt(myDat))
# myDat[0][-1] = 'maybe'
print(myDat)
print(trees.calcShannonEnt(myDat))
print(trees.splitDataSet(myDat, 0, 1))
print(trees.splitDataSet(myDat, 0, 0))
print(trees.splitDataSet(myDat, 1, 1))
print(trees.splitDataSet(myDat, 1, 0))
tree = trees.createTree(myDat, labels)
print(tree)
myDat, labels = trees.createDataSet()  # createTree mutated labels, so rebuild them
print(labels)
print(trees.classify(tree, labels, [1, 1]))
```

Result:
```
[[1, 1, 'yes'], [1, 1, 'yes'], [1, 0, 'no'], [0, 1, 'no'], [0, 1, 'no']]
['no surfacing', 'flippers']
0.970950594455
[[1, 1, 'yes'], [1, 1, 'yes'], [1, 0, 'no'], [0, 1, 'no'], [0, 1, 'no']]
0.970950594455
[[1, 'yes'], [1, 'yes'], [0, 'no']]
[[1, 'no'], [1, 'no']]
[[1, 'yes'], [1, 'yes'], [0, 'no'], [0, 'no']]
[[1, 'no']]
{'no surfacing': {0: 'no', 1: {'flippers': {0: 'no', 1: 'yes'}}}}
['no surfacing', 'flippers']
yes
```
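The tree roots at 'no surfacing' because splitting on it yields the larger information gain; the two gains can be checked by hand (a small verification sketch, not part of the original post):

```python
from math import log

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    counts = {}
    for lab in labels:
        counts[lab] = counts.get(lab, 0) + 1
    n = len(labels)
    return -sum((c / n) * log(c / n, 2) for c in counts.values())

base = entropy(['yes', 'yes', 'no', 'no', 'no'])  # the 0.9710 printed above

# Feature 0 ('no surfacing'): value 1 -> [yes, yes, no], value 0 -> [no, no]
gain0 = base - (3/5) * entropy(['yes', 'yes', 'no']) - (2/5) * entropy(['no', 'no'])

# Feature 1 ('flippers'): value 1 -> [yes, yes, no, no], value 0 -> [no]
gain1 = base - (4/5) * entropy(['yes', 'yes', 'no', 'no']) - (1/5) * entropy(['no'])

print(round(gain0, 4), round(gain1, 4))  # 0.42 0.171
```

Since gain0 > gain1, chooseBestFeatureToSplit returns index 0, which is why 'no surfacing' is the root of the tree printed above.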