Natural Language 20: The corpora with NLTK

2016-11-19 11:30




https://www.pythonprogramming.net/nltk-corpus-corpora-tutorial/?completed=/lemmatizing-nltk-tutorial/

The corpora with NLTK

Code for locating the NLTK data path:

# -*- coding: utf-8 -*-
import nltk, os, sys

# Print the location of the NLTK module's __init__.py.
print(nltk.__file__)

# Append the common nltk_data locations to NLTK's data search path.
# (NLTK looks up its data in nltk.data.path, not in sys.path.)
if sys.platform.startswith('win'):
    # Common locations on Windows:
    nltk.data.path += [
        r'C:\nltk_data', r'D:\nltk_data', r'E:\nltk_data',
        os.path.join(sys.prefix, 'nltk_data'),
        os.path.join(sys.prefix, 'lib', 'nltk_data'),
        os.path.join(os.environ.get('APPDATA', 'C:\\'), 'nltk_data')
    ]
else:
    # Common locations on UNIX & OS X:
    nltk.data.path += [
        '/usr/share/nltk_data',
        '/usr/local/share/nltk_data',
        '/usr/lib/nltk_data',
        '/usr/local/lib/nltk_data'
    ]




The NLTK corpus is a collection of data sets for all kinds of languages. Most corpora are stored as plain TXT text; a few are XML or other formats.

In this part of the tutorial, I want us to take a moment to peek into the corpora we all downloaded! The NLTK corpus is a massive dump of all kinds of natural language data sets that are definitely worth taking a look at.

Almost all of the files in the NLTK corpus follow the same rules for accessing them by using the NLTK module, but nothing is magical about them. These files are plain text files for the most part, some are XML and some are other formats, but they are all accessible by you manually, or via the module and Python. Let's talk about viewing them manually.

Depending on your installation, your nltk_data directory might be hiding in a multitude of locations. To figure out where it is, head to your Python directory, where the NLTK module is. If you do not know where that is, use the following code:

import nltk
print(nltk.__file__)

Run that, and the output will be the location of the NLTK module's __init__.py. Head into the NLTK directory, and then look for the data.py file.

The important blurb of code is:

if sys.platform.startswith('win'):
    # Common locations on Windows:
    path += [
        str(r'C:\nltk_data'), str(r'D:\nltk_data'), str(r'E:\nltk_data'),
        os.path.join(sys.prefix, str('nltk_data')),
        os.path.join(sys.prefix, str('lib'), str('nltk_data')),
        os.path.join(os.environ.get(str('APPDATA'), str('C:\\')), str('nltk_data'))
    ]
else:
    # Common locations on UNIX & OS X:
    path += [
        str('/usr/share/nltk_data'),
        str('/usr/local/share/nltk_data'),
        str('/usr/lib/nltk_data'),
        str('/usr/local/lib/nltk_data')
    ]

There, you can see the various possible directories for the nltk_data. If you're on Windows, chances are it is in your AppData folder (the Roaming part of it). To get there, you will want to open your file browser, go to the top, and type in %appdata%

Next click on Roaming, and then find the nltk_data directory. In there, you will have your corpora directory. The full path on Windows is something like:

C:\Users\yourname\AppData\Roaming\nltk_data\corpora
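
If you would rather not hunt for it by hand, you can also print the list of directories NLTK actually searches at runtime. It lives in nltk.data.path and includes the defaults from the data.py blurb above plus your per-user directory:

import nltk

# The directories NLTK searches for nltk_data, in order.
print(nltk.data.path)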



Within here, you have all of the available corpora, including things like books, chat logs, movie reviews, and a whole lot more.
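
For a quick look at what you have locally, here is a minimal sketch that lists the corpora directory (it assumes the default Roaming location on Windows shown above; adjust corpora_dir for your own install):

import os

# Assumed default corpora location on Windows; change this to match your machine.
corpora_dir = os.path.expandvars(r'%APPDATA%\nltk_data\corpora')
print(sorted(os.listdir(corpora_dir)))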

Now, we're going to talk about accessing these documents via NLTK. As you can see, these are mostly text documents, so you could just use normal Python code to open and read them. That said, the NLTK module has a few nice methods for handling the corpus, so you may find it useful to use their methodology. Here's an example of us opening the Gutenberg Bible, and reading the first few lines:

The Gutenberg Bible, also known as the 42-line Bible, is a printed edition of the accepted Latin translation of the Bible, produced by Johannes Gutenberg in Mainz, Germany, between 1454 and 1455 using movable type. It is the most famous incunabulum, and its production marked the beginning of mass book production in the West.

# -*- coding: utf-8 -*-
from nltk.tokenize import sent_tokenize
from nltk.corpus import gutenberg

# Load the raw text of the King James Bible from the Gutenberg corpus.
sample = gutenberg.raw("bible-kjv.txt")

# Split the text into sentences and print the first five.
tok = sent_tokenize(sample)
for x in range(5):
    print(tok[x])




What is AppData?

It holds files that programs need at runtime, so deleting it outright is not recommended.

AppData has three subfolders: Local, LocalLow, and Roaming. If you extract an archive without specifying a path, the system unpacks it under Local\Temp, and installers (especially some large graphics packages) pull their data from there, so it can take up a lot of space. LocalLow stores shared data. Files under these two folders can generally be cleaned with a disk-cleanup utility. The Roaming folder also stores data that programs generate as you use them, such as cached music data or saved login accounts, which cleanup utilities will not remove. You can open Roaming, select everything, and delete it (skipping whatever cannot be deleted), but the folder will grow and cache data again as soon as you use those programs.
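
Rather than digging through Roaming by hand, you can also ask NLTK itself to resolve a corpus location from its search paths with nltk.data.find (the corpus must already be downloaded, or this raises a LookupError):

import nltk

# Resolve the on-disk location of the Gutenberg corpus.
print(nltk.data.find('corpora/gutenberg'))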



One of the more advanced data sets in here is "wordnet." Wordnet is a collection of words, definitions, examples of their use, synonyms, antonyms, and more. We'll dive into using wordnet next.
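
As a small preview, here is a minimal sketch of looking up a word in wordnet (the exact synset returned first depends on your installed WordNet data):

from nltk.corpus import wordnet

# Look up the synsets for a word and inspect the first one.
syns = wordnet.synsets("program")
print(syns[0].name())        # e.g. 'plan.n.01'
print(syns[0].definition())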

