A Python tweepy example: PS4
2013-11-29 00:20
First, make sure Python and tweepy are installed correctly and that your network connection works.
Then go to http://dev.twitter.com, log in, open My applications, and get your ckey/csecret/atoken/asecret (the consumer key/secret and access token/secret).
Then create the .py file, PS4.py:
from tweepy import Stream
from tweepy import OAuthHandler
from tweepy.streaming import StreamListener
import time

ckey = 'VUHR4W*******1ZSDEKQ'
csecret = 'BqJSl0dbI*************48qdmWKJ1CKmxKbl8JUw5k'
atoken = '2207681958******************BMx6UWBzEonhKkhkkeDEOJC'
asecret = 'GQTeOedbv*****************GnrsUrZR0ItMJ6BarJnKorI3'

class listener(StreamListener):

    def on_data(self, data):
        try:
            # Pull each field out of the raw JSON string by splitting on
            # the key names that surround it.
            date = data.split('created_at":"')[1].split('","id')[0]
            tweet = data.split(',"text":"')[1].split('","source')[0]
            screen_name = data.split(',"screen_name":"')[1].split('","location')[0]
            location = data.split(',"location":"')[1].split('","url')[0]
            followers_count = data.split(',"followers_count":')[1].split(',"friends_count')[0]
            # Join the fields with ':::' so each tweet becomes one record.
            saveThis = date + ':::' + tweet + ':::' + screen_name + ':::' + location + ':::' + followers_count
            print saveThis
            # Uncomment to append each record to a file instead:
            # saveFile = open('twitDB11.txt', 'a')
            # saveFile.write(saveThis)
            # saveFile.write('\n')
            # saveFile.close()
            return True
        except BaseException, e:
            print 'failed ondata,', str(e)
            time.sleep(5)

    def on_error(self, status):
        print status

auth = OAuthHandler(ckey, csecret)
auth.set_access_token(atoken, asecret)
twitterStream = Stream(auth, listener())
twitterStream.filter(track=["PS4"])
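Splitting the raw JSON string on key names is fragile: it breaks whenever a field is missing, reordered, or the text itself contains one of the marker strings. A more robust sketch of the same extraction (Python 3 here, parsing the payload with the standard json module; the field names match the ones the listener uses, and `extract_fields` is a hypothetical helper, not part of tweepy):

```python
import json

def extract_fields(data):
    """Parse one raw status payload and return the ':::'-joined record,
    or None if the payload is not valid JSON or a field is missing."""
    try:
        tweet = json.loads(data)
        user = tweet['user']
        parts = [
            tweet['created_at'],
            tweet['text'],
            user['screen_name'],
            user.get('location') or '',   # location may be null
            str(user['followers_count']),
        ]
        return ':::'.join(parts)
    except (ValueError, KeyError):
        return None

# Minimal example payload containing only the fields the listener reads:
raw = json.dumps({
    "created_at": "Wed Nov 27 03:47:29 +0000 2013",
    "text": "Does anyone else have a PS4?",
    "user": {"screen_name": "CodyP_Texas54", "location": None,
             "followers_count": 61},
})
print(extract_fields(raw))
# Wed Nov 27 03:47:29 +0000 2013:::Does anyone else have a PS4?:::CodyP_Texas54::::::61
```

Because a missing `location` becomes an empty field, the record keeps the same `::::::` shape as the string-splitting version above.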
Then we run it. It collects data like this:
Wed Nov 27 03:47:29 +0000 2013:::Gamma Blue 11s or PS4?:::JVanlo_ST::::::105
Wed Nov 27 03:47:29 +0000 2013:::DuniaIndo: Shenmue developer doing GDC postmortem, PS4 architect Mark Cerny translating: The full schedule of s... http:\/\/t.co\/sdlHKPJMke:::SHENMUE_MEGA_RT::::::9
Wed Nov 27 03:47:29 +0000 2013:::Does anyone else have a PS4?:::CodyP_Texas54::::::61
Wed Nov 27 03:47:29 +0000 2013:::Wish #oomf would come over and play ps4 with me. :(:::rgvheat3::::::303
Wed Nov 27 03:47:30 +0000 2013:::ayuna_rachim: Shenmue developer doing GDC postmortem, PS4 architect Mark Cerny translating: The full schedule o... http:\/\/t.co\/riqqgFRZXj:::SHENMUE_MEGA_RT::::::9
Wed Nov 27 03:47:31 +0000 2013:::\u6700\u8fd1\u30c6\u30a4\u30eb\u30ba\u3068\u304bFF\u3068\u304bKH\u30b7\u30ea\u30fc\u30ba\u306e\u65b0\u4f5c\u767a\u8868\u3042\u308b\u3068\u5b09\u3057\u3044\u3068\u3068\u3082\u306bPS4\u304b\u3082\u3057\u308c\u306a\u3044\u4e0d\u5b89\u304c\u62bc\u3057\u5bc4\u305b\u3066\u304f\u308b:::xyuyuch18::::::279
Wed Nov 27 03:47:31 +0000 2013:::PlaneteGamers: #Games #News Shenmue developer doing GDC postmortem, PS4 architect Mark Cerny translating ... http:\/\/t.co\/m6DBhuQ92E:::SHENMUE_MEGA_RT::::::9
Wed Nov 27 03:47:31 +0000 2013:::RT @OFWG_Sheed: @Bflakes78 lol i might get ps4 i miss yall so much:::Bflakes78::::::1328
Wed Nov 27 03:47:31 +0000 2013:::\u3069\u3046\u305bPS4\u306a\u3093\u3060\u308d\u899a\u609f\u306f\u3067\u304d\u3066\u3044\u308b:::tana1003:::\u4eac\u90fd:::219
Wed Nov 27 03:47:32 +0000 2013:::onlinegmg: Shenmue developer doing GDC postmortem, PS4 architect Mark Cerny translating http:\/\/t.co\/lhEuMWPvel http:\/\/t.co\/hNrss8yfAD:::SHENMUE_MEGA_RT::::::9
Wed Nov 27 03:47:33 +0000 2013:::I liked a @YouTube video from @nobodyepic http:\/\/t.co\/w0aZnLUeow NobodyEpic 1,000,000 Subscriber Q&A Video (Battlefield 4: PS4:::GabGonzaRom::::::93
Wed Nov 27 03:47:34 +0000 2013:::PS4\u3060\u308d\uff1f:::eyck:::\u30a8\u30aa\u30eb\u30bc\u30a2:::859
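Since each collected line uses ':::' as the field separator, the records can be split back into columns for later analysis. A minimal Python 3 sketch (`parse_record` is a hypothetical helper; note it breaks if the tweet text itself happens to contain ':::'):

```python
def parse_record(line):
    # Split one saved record back into its five fields.
    # An empty location shows up as '' between the two separators.
    date, text, screen_name, location, followers = line.split(':::')
    return {'date': date, 'text': text, 'screen_name': screen_name,
            'location': location, 'followers': int(followers)}

rec = parse_record("Wed Nov 27 03:47:29 +0000 2013:::"
                   "Does anyone else have a PS4?:::CodyP_Texas54::::::61")
print(rec['screen_name'], rec['followers'])
# CodyP_Texas54 61
```

The same function applied over every line of the saved file gives a list of dicts ready for counting or filtering.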