
Recommendation Algorithm Based on the Latent Factor Model --- *Recommender Systems Practice* --- Python Source Code (11)

2018-03-30 09:39  591 views
1. Basic Latent Factor Model

The model predicts interest as
$$r_{ui}=\sum_{f=1}^{F}p_{u,f}q_{i,f}$$
where $r_{ui}$ is user $u$'s interest in item $i$, $p_{u,f}$ measures the relation between user $u$ and latent class $f$, and $q_{i,f}$ measures the relation between item $i$ and latent class $f$. The matrices $p$ and $q$ must be trained on the data set.

The training cost function is
$$C = \sum_{(u,i)\in K} \left(r_{ui}-\sum_{f=1}^{F}p_{u,f}q_{i,f}\right)^{2}+\lambda \left \| p_{u} \right \|^{2}+\lambda \left \| q_{i} \right \|^{2}$$

The cost function minimizes the squared error and adds regularization; $p$ and $q$ are found by stochastic gradient descent. $(u,i)\in K$ denotes the generated sample set of user-item pairs, which contains positive samples (user $u$ is interested in item $i$) as well as negative samples (user $u$ is not interested in item $i$).
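Differentiating $C$ with respect to $p_{u,f}$ and $q_{i,f}$ yields the per-sample update rules used later in `LearningLFM`, with prediction error $e_{ui}$ and learning rate $\alpha$:
$$e_{ui} = r_{ui}-\sum_{f=1}^{F}p_{u,f}q_{i,f}$$
$$p_{u,f} \leftarrow p_{u,f}+\alpha\left(e_{ui}\,q_{i,f}-\lambda\,p_{u,f}\right)$$
$$q_{i,f} \leftarrow q_{i,f}+\alpha\left(e_{ui}\,p_{u,f}-\lambda\,q_{i,f}\right)$$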
2. Functions

| Function | Description |
| --- | --- |
| SelectNegativeSample | Generates training samples. Negatives are taken from the most popular items first, and the number of negative samples equals the number of positive samples. |
| CreateItemsPool | Builds an item pool used for sample generation; items are sorted by popularity, most popular first. |
| InitLFM | Initializes P and Q. |
| LearningLFM | Trains P and Q on the data. |
| TestLFM | Generates a top-10 recommendation list for each user. |
| Precision | Measures the precision of the recommendations. |
3. Parameters

| Parameter | Value |
| --- | --- |
| Number of latent factors F | 100 |
| Learning rate alpha | 0.02 |
| Regularization parameter ld | 0.01 |
4. Experimental Results

1. After running this code, the recommendation precision is only 1.5%.
2. Possible reasons:
(1) The book does not say how to initialize p and q; I used the initialization from the rating-prediction code in the last chapter.
(2) For negative samples, the book suggests taking popular items the user has not rated, but the code given in the book actually picks negative samples at random.
(3) The book suggests making the number of negative samples equal to the number of positive samples, but in the code it gives, the count is bounded by 3 times the number of positives.
(4) The exact method for building the item pool is not described.
(5) The number of iterations is not mentioned.
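For comparison, the random-sampling variant described in points (2) and (3) can be sketched as follows. This is my own reconstruction, not the book's code verbatim; the function name `random_select_negative_sample` and the `ratio` parameter are hypothetical, and the 3x attempt bound mirrors the behavior described above (so the final negative count is usually below `ratio * len(user_items)`):

```python
import random

def random_select_negative_sample(items_pool, user_items, ratio=3):
    """Sketch of the book's variant: positives are the user's rated items;
    negatives are drawn uniformly at random from the item pool, with at most
    ratio * len(user_items) sampling attempts."""
    ret = {item: 1 for item in user_items}   # positive samples
    for _ in range(len(user_items) * ratio):
        item = random.choice(items_pool)
        if item in ret:                      # skip rated or already-drawn items
            continue
        ret[item] = 0                        # negative sample
    return ret
```

Because draws that hit already-chosen items are discarded rather than retried, this yields fewer negatives than the nominal bound, which matches observation (3).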
5. Python Source Code
# LFM: p and q are determined by stochastic gradient descent
import random as rd, math as mt, operator as op

def SplitData(data, M, k, seed):
    """Split the (user, item) pairs so that roughly 1/M of them go to test."""
    test = []
    train = []
    rd.seed(seed)
    for user, item in data:
        if rd.randint(0, M - 1) == k:  # uniform random integer in [0, M-1]
            test.append([user, item])
        else:
            train.append([user, item])
    return train, test

def list2dic(listdata):
    """Convert a list of (user, item) pairs into a dict: user -> list of items."""
    dicdata = dict()
    for user, item in listdata:
        dicdata.setdefault(user, []).append(item)
    return dicdata

def CreateItemsPool(train):
    """Build an item pool sorted by popularity, most popular first."""
    items = dict()
    items_pool = []
    for u, i in train:
        items[i] = items.get(i, 0) + 1
    for item, pop in sorted(items.items(), key=op.itemgetter(1), reverse=True):
        items_pool.append(item)
    return items_pool

def SelectNegativeSample(items_pool, trainu):
    """Positives: items the user rated. Negatives: the most popular unrated
    items, up to the number of positive samples."""
    ret = dict()
    for i in trainu:  # positive samples
        ret[i] = 1
    n = 0
    for i in items_pool:  # negative samples, popular items first
        if i in ret:
            continue
        ret[i] = 0
        n += 1
        if n >= len(trainu):
            break
    return ret

def InitLFM(train, F):
    """Initialize P and Q with small random values scaled by 1/sqrt(F)."""
    p = dict()
    q = dict()
    for u, i in train:
        if u not in p:
            p[u] = [rd.random() / mt.sqrt(F) for x in range(0, F)]
        if i not in q:
            q[i] = [rd.random() / mt.sqrt(F) for x in range(0, F)]
    return p, q

#def RMSE(test, p, q):
#    error = 0
#    for u, i, rui in test.items():
#        error += pow(Predict(u, i, p, q) - rui, 2)
#    return error / len(test)

def LearningLFM(items_pool, train, F, n, alpha, ld):
    """Train p and q by stochastic gradient descent for n iterations."""
    p, q = InitLFM(train, F)
    train = list2dic(train)
    for step in range(0, n):
        for u in train.keys():
            samples = SelectNegativeSample(items_pool, train[u])
            for item, rui in samples.items():
                eui = rui - sum(p[u][f] * q[item][f] for f in range(0, F))
                for f in range(0, F):
                    puf = p[u][f]  # keep the old value for the q update
                    p[u][f] += alpha * (q[item][f] * eui - ld * puf)
                    q[item][f] += alpha * (puf * eui - ld * q[item][f])
        alpha *= 0.9  # decay the learning rate after each iteration
    return p, q

def TestLFM(train, p, q, N):
    """Generate a top-N recommendation list for every user in train."""
    Allrank = dict()
    for u in train.keys():
        rank = dict()
        for i in q.keys():
            if i not in train[u]:  # only recommend items the user has not rated
                rank[i] = sum(pf * qf for pf, qf in zip(p[u], q[i]))
        Allrank[u] = []
        for item, pop in sorted(rank.items(), key=op.itemgetter(1), reverse=True)[0:N]:
            Allrank[u].append(item)
    return Allrank

def Precision(allrank, test, N):
    """Precision = hits / (N * number of test users that got recommendations)."""
    hit = 0
    total = 0
    for user in test.keys():
        if user in allrank:
            tu = test[user]
            for item in allrank[user]:
                if item in tu:
                    hit += 1
            total += N
    return hit / (total * 1.0)
'''
main function
'''
filestring = '/home/sysu-hgavin/文档/ml-1m/ratings.dat'
f = open(filestring, 'r')
data = []
while 1:
    line = f.readline()  # each line: UserID::MovieID::Rating::Timestamp
    if not line:
        break
    line = line.split("::")[:2]
    line[0] = int(line[0])
    line[1] = int(line[1])
    data.append(line)
f.close()

M = 8         # number of parts the data is split into
seed = 3
N = 10        # top-N
F = 100       # number of latent factors
alpha = 0.02  # learning rate
ld = 0.01     # regularization parameter
n = 100       # number of iterations

train, test = SplitData(data, M, 1, seed)  # generate train and test data
items_pool = CreateItemsPool(train)        # items sorted by popularity
p, q = LearningLFM(items_pool, train, F, n, alpha, ld)
test = list2dic(test)
train = list2dic(train)
allrank = TestLFM(train, p, q, N)
pre = Precision(allrank, test, N)
print("precision is\n", pre)