
Using jieba for Chinese word segmentation and stopword removal in Python 2.7

2017-03-05 20:55
# -*- coding: utf-8 -*-
import jieba
import jieba.analyse
import sys
import codecs

# Python 2 hack: force the default string encoding to utf-8 so that
# implicit str/unicode conversions do not raise UnicodeDecodeError
reload(sys)
sys.setdefaultencoding('utf-8')

# Alternative: read the stopword list with an explicit encoding
#stoplist = codecs.open('../../file/stopword.txt', 'r', encoding='utf8').readlines()
#stoplist = set(w.strip() for w in stoplist)

# The stopword file is utf-8 encoded; build a set for fast membership tests
stoplist = set(line.strip() for line in open('../../file/stopword.txt'))

# jieba.cut yields unicode tokens; encode them to utf-8 so they
# compare equal to the utf-8 entries in the stopword set
segs = jieba.cut('北京附近的租房', cut_all=False)
segs = [word.encode('utf-8') for word in segs]

# Drop every token that appears in the stopword list
segs = [word for word in segs if word not in stoplist]

for seg in segs:
    print seg
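
Since Python 2 has reached end of life, here is a minimal Python 3 sketch of the same pipeline, assuming the same stopword file path. Python 3 strings are Unicode by default, so the setdefaultencoding hack and the explicit encode() calls go away:

import jieba

# Stopword file is utf-8; build a set of unicode stopwords
with open('../../file/stopword.txt', encoding='utf-8') as f:
    stoplist = set(line.strip() for line in f)

# Tokenize and filter in one pass; jieba.cut already yields str (unicode)
segs = [w for w in jieba.cut('北京附近的租房', cut_all=False) if w not in stoplist]
for seg in segs:
    print(seg)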
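The script above imports jieba.analyse but never uses it. For TF-IDF keyword extraction, jieba.analyse has a built-in stopword hook, so no manual filtering is needed. A short sketch in the same Python 2.7 setting as the script above (the sentence and topK=5 are just illustrations):

import jieba.analyse

# Tell the TF-IDF extractor to ignore words from our stopword file
jieba.analyse.set_stop_words('../../file/stopword.txt')

# Extract the top 5 keywords together with their TF-IDF weights
for word, weight in jieba.analyse.extract_tags(u'北京附近的租房', topK=5, withWeight=True):
    print word, weight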