
From Getting Started to Giving Up: k-means Clustering in Python

2017-12-27 15:33
Principle

Pseudocode

Code

Relation to the EM algorithm

k-means is a clustering algorithm. Its goal is to group similar objects into the same cluster without any prior class labels, so it is an unsupervised learning method.

Principle

k-means minimizes the within-cluster sum of squared errors:

$$\underset{S}{\arg\min}\sum_{i=1}^{k}\sum_{x\in S_i}\lVert x-\mu_i\rVert^2\tag{1}$$
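As a quick sanity check, the objective in (1) can be evaluated directly with NumPy. This is a minimal sketch; the points, centroids, and assignments below are made-up toy values:

```python
import numpy as np

# Toy data: four 2-D points, two made-up centroids, and an assignment vector
X = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
mu = np.array([[0.0, 0.5], [5.0, 5.5]])   # one centroid per cluster
labels = np.array([0, 0, 1, 1])           # which cluster each point belongs to

# Sum of squared distances from each point to its assigned centroid
J = sum(np.sum((X[labels == i] - mu[i]) ** 2) for i in range(len(mu)))
print(J)  # each point is 0.5 from its centroid: 4 * 0.25 = 1.0
```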

Pseudocode

1. randomly initialize k cluster centroids
2. repeat {
   2.1. assign each sample to its nearest centroid
   2.2. recompute the centroid of each cluster
} until convergence


2.1. $x_j$ is assigned to the cluster $S_i$ with $i=\arg\min_i\lVert x_j-\mu_i\rVert^2$

2.2. $\mu_i=\dfrac{1}{|S_i|}\sum_{x\in S_i}x$

Code

import numpy as np

def calDistance(vec1, vec2):
    """Euclidean distance between two vectors."""
    return np.sqrt(np.sum(np.power(vec1 - vec2, 2)))

def randInit(matrix, k):
    """Randomly initialize k centroids inside the data's bounding box."""
    n = np.shape(matrix)[1]
    cent = np.zeros([k, n])
    for col in range(n):
        colMin = matrix[:, col].min()
        colMax = matrix[:, col].max()
        cent[:, col] = colMin + (colMax - colMin) * np.random.random(k)
    return cent

def kMeans(matrix, k):
    quantity = matrix.shape[0]  # number of samples
    clusterRecord = np.zeros([quantity, 2])  # each sample's cluster index and distance
    cent = randInit(matrix, k)
    iteration = True
    while iteration:
        iteration = False
        for i in range(quantity):  # visit every sample
            minDist = np.inf
            minIndex = -1
            for j in range(k):  # visit every cluster
                dist = calDistance(cent[j, :], matrix[i, :])
                if dist < minDist:
                    minDist = dist
                    minIndex = j
            if clusterRecord[i, 0] != minIndex:
                iteration = True  # an assignment changed, so keep iterating
            clusterRecord[i, :] = minIndex, minDist
        for c in range(k):  # recompute the centroid of each cluster
            index = clusterRecord[:, 0]
            samples = matrix[np.nonzero(index == c)[0]]
            if len(samples) > 0:  # guard against an empty cluster
                cent[c, :] = np.mean(samples, axis=0)
    return cent, clusterRecord
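The double loop over samples and centroids can also be collapsed with NumPy broadcasting. Below is a condensed, self-contained sketch of the same algorithm (the function and variable names here are my own, not from the original post), followed by a run on two made-up, well-separated blobs:

```python
import numpy as np

def kmeans_vec(X, k, iters=100, seed=0):
    """Vectorized k-means: returns centroids and per-sample cluster labels."""
    rng = np.random.default_rng(seed)
    cent = X[rng.choice(len(X), k, replace=False)]  # init from random samples
    for _ in range(iters):
        # distance of every sample to every centroid, shape (n, k)
        d = np.linalg.norm(X[:, None, :] - cent[None, :, :], axis=2)
        labels = d.argmin(axis=1)  # step 2.1: nearest centroid
        new = np.array([X[labels == c].mean(axis=0) if np.any(labels == c)
                        else cent[c] for c in range(k)])  # step 2.2
        if np.allclose(new, cent):  # converged: centroids stopped moving
            break
        cent = new
    return cent, labels

# Two tight blobs around (0, 0) and (10, 10); the two learned centroids
# should land near the blob centers
X = np.vstack([np.random.default_rng(1).normal(0, 0.1, (20, 2)),
               np.random.default_rng(2).normal(10, 0.1, (20, 2))])
cent, labels = kmeans_vec(X, 2)
```

The broadcasting trick (`X[:, None, :] - cent[None, :, :]`) builds the full sample-by-centroid difference tensor in one expression, which replaces both inner loops of the version above at the cost of extra memory.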


Relation to the EM algorithm

To be written. Reference:

http://www.cnblogs.com/jerrylead/archive/2011/04/06/2006910.html
Tags: python, algorithms