
Machine Learning: the K-Nearest Neighbors Algorithm

2018-03-22 21:04
    After nearly failing Prof. Sun's PRML (Pattern Recognition and Machine Learning) course last semester, I decided to work back through the classic machine learning algorithms this semester to lay the groundwork for choosing a thesis direction. After two weeks of procrastination I have finally written up this kNN post. Articles like this are all over the internet, but I still wanted to organize my own thinking.
    First, a definition of the k-nearest neighbors algorithm. It works as follows: we have a collection of samples, the training set, in which every sample carries a label, so we know which class each sample in the set belongs to. When a new, unlabeled sample arrives, we compare each of its features with the corresponding features of the samples in the training set, and the algorithm extracts the class labels of the most similar samples (the nearest neighbors). Typically we consider only the k most similar samples in the training set; this is where the k in k-nearest neighbors comes from, and k is usually an integer no larger than 20. Finally, the class that appears most often among those k nearest neighbors is assigned to the new sample.
    Below, kNN is implemented on the dating example from *Machine Learning in Action*. The idea is simple, and k-nearest neighbors is among the simplest and most effective classification algorithms. Its drawbacks are that it must keep the entire training set around, which costs a lot of storage when the training set is large, and that it must compute a distance to every sample in the set, which makes it slow in practice.

from numpy import *
import operator
import matplotlib
import matplotlib.pyplot as plt
from os import listdir
def classify0(inX, dataSet, labels, k):
    dataSetSize = dataSet.shape[0]
    # difference between inX and every training sample
    diffMat = tile(inX, (dataSetSize, 1)) - dataSet
    sqDiffMat = diffMat ** 2
    sqDistances = sqDiffMat.sum(axis=1)
    distances = sqDistances ** 0.5            # Euclidean distances
    sortedDistIndicies = distances.argsort()
    classCount = {}
    # vote among the k nearest neighbors
    for i in range(k):
        voteIlabel = labels[sortedDistIndicies[i]]
        classCount[voteIlabel] = classCount.get(voteIlabel, 0) + 1
    # sort classes by vote count, descending, and return the winner
    sortedClassCount = sorted(classCount.items(), key=operator.itemgetter(1), reverse=True)
    return sortedClassCount[0][0]
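
    As a quick sanity check, classify0 can be run on a tiny hand-made data set (the arrays below are illustrative, similar to the book's createDataSet example, and are not part of the dating data):

group = array([[1.0, 1.1], [1.0, 1.0], [0.0, 0.0], [0.0, 0.1]])
labels = ['A', 'A', 'B', 'B']
print(classify0([0.0, 0.0], group, labels, 3))  # prints 'B': the three nearest points vote B, B, A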

def file2matrix(filename):
    fr = open(filename)
    arrayOfLines = fr.readlines()
    numberOfLines = len(arrayOfLines)      # number of lines in the file
    returnMat = zeros((numberOfLines, 3))  # feature matrix to return
    classLabelVector = []                  # class labels to return
    index = 0
    for line in arrayOfLines:
        line = line.strip()
        listFromLine = line.split('\t')
        returnMat[index, :] = listFromLine[0:3]          # first three fields are features
        classLabelVector.append(int(listFromLine[-1]))   # last field is the label
        index += 1
    return returnMat, classLabelVector
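
    Note that file2matrix assumes each line of the input file holds three tab-separated numeric features followed by an integer class label (1, 2, or 3 in the dating data), i.e. lines of the form shown below (the values are illustrative, not taken from the actual file):

# 40920	8.326976	0.953952	3
# 14488	7.153469	1.673904	2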

def autoNorm(dataSet):
    minVals = dataSet.min(0)   # column-wise minima
    maxVals = dataSet.max(0)   # column-wise maxima
    ranges = maxVals - minVals
    m = dataSet.shape[0]
    normDataSet = dataSet - tile(minVals, (m, 1))
    normDataSet = normDataSet / tile(ranges, (m, 1))  # element-wise divide
    return normDataSet, ranges, minVals
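
    autoNorm rescales every feature to [0, 1] via min-max normalization, newValue = (oldValue - min) / (max - min), so that a large-range feature such as flier miles does not dominate the Euclidean distance. A minimal check (the array values are illustrative):

testMat = array([[0.0, 10.0], [5.0, 20.0], [10.0, 30.0]])
normed, ranges, minVals = autoNorm(testMat)
print(normed)   # [[0. 0.], [0.5 0.5], [1. 1.]]
print(ranges)   # [10. 20.]
print(minVals)  # [0. 10.]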

def datingClassTest():
    hoRatio = 0.50  # hold out 50% of the data for testing
    datingDataMat, datingLabels = file2matrix(r'D:\Program Files\PycharmProjects\MLIA\KNN\datingTestSet2.txt')  # load data set from file
    normMat, ranges, minVals = autoNorm(datingDataMat)
    m = normMat.shape[0]
    numTestVecs = int(m * hoRatio)
    errorCount = 0.0
    for i in range(numTestVecs):
        # classify each held-out sample against the remaining training samples
        classifierResult = classify0(normMat[i, :], normMat[numTestVecs:m, :], datingLabels[numTestVecs:m], 3)
        print("the classifier came back with: %d, the real answer is: %d" % (classifierResult, datingLabels[i]))
        if classifierResult != datingLabels[i]:
            errorCount += 1.0
    print("the total error rate is: %f" % (errorCount / float(numTestVecs)))
    print(errorCount)
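
    Since k is the only hyperparameter, it is worth trying a few values. The helper below is a hypothetical addition (not part of the original code) that repeats the same hold-out experiment for several k:

def datingKSweep(ks=(1, 3, 5, 7), hoRatio=0.50):
    datingDataMat, datingLabels = file2matrix(r'D:\Program Files\PycharmProjects\MLIA\KNN\datingTestSet2.txt')
    normMat, ranges, minVals = autoNorm(datingDataMat)
    m = normMat.shape[0]
    numTestVecs = int(m * hoRatio)
    for k in ks:
        # count misclassified hold-out samples for this k
        errorCount = sum(classify0(normMat[i, :], normMat[numTestVecs:m, :],
                                   datingLabels[numTestVecs:m], k) != datingLabels[i]
                         for i in range(numTestVecs))
        print("k=%d, error rate: %f" % (k, errorCount / float(numTestVecs)))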

if __name__ == '__main__':
    resultList = ['not at all', 'in small doses', 'in large doses']
    percentTats = float(input("percentage of time spent playing video games?"))
    ffMiles = float(input("frequent flier miles earned per year?"))
    iceCream = float(input("liters of ice cream consumed per year?"))
    datingDataMat, datingLabels = file2matrix(r'D:\Program Files\PycharmProjects\MLIA\KNN\datingTestSet2.txt')
    inArr = array([ffMiles, percentTats, iceCream])
    normMat, ranges, minVals = autoNorm(datingDataMat)
    # normalize the query point with the training statistics before classifying
    classifierResult = classify0((inArr - minVals) / ranges, normMat, datingLabels, 3)
    print("You will probably like this person: ", resultList[classifierResult - 1])
# fig = plt.figure()
# ax = fig.add_subplot(111)
# ax.scatter(datingDataMat[:,1],datingDataMat[:,2],15.0*array(datingLabels),15.0*array(datingLabels))
# plt.show()
# print(normMat)
# print(ranges)
# print(minVals)
#datingClassTest()
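
    As a cross-check, the same hold-out experiment can be reproduced with scikit-learn's KNeighborsClassifier (a sketch, assuming scikit-learn is installed and the same file path as above):

from sklearn.neighbors import KNeighborsClassifier

datingDataMat, datingLabels = file2matrix(r'D:\Program Files\PycharmProjects\MLIA\KNN\datingTestSet2.txt')
normMat, ranges, minVals = autoNorm(datingDataMat)
numTest = int(normMat.shape[0] * 0.50)
clf = KNeighborsClassifier(n_neighbors=3)              # same k as above
clf.fit(normMat[numTest:, :], datingLabels[numTest:])  # train on the back half
errorRate = 1.0 - clf.score(normMat[:numTest, :], datingLabels[:numTest])
print("sklearn error rate: %f" % errorRate)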