
Python 3: Heibanke Crawler Challenge, Level 4

2016-02-02 14:10 · 519 views
This level is considerably harder than level 3; the main new ingredient is concurrent programming.

The password is 100 digits long, spread across 13 pages, and each page takes around 15 seconds to load, so concurrency is the obvious approach. It turns out, however, that requests from the same IP must be spaced at least 8 seconds apart or the server returns a 404, so the best solution would be proxies combined with concurrency.
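Since under-spaced requests simply come back as 404, one defensive pattern is to catch that error and retry after waiting out the interval. This is only a sketch: `fetch_with_backoff` is a hypothetical helper, not part of the original script, and it assumes the 404 really does mean "too fast" rather than a missing page.

```python
import time
import urllib.error as ue

def fetch_with_backoff(opener, url, retries=3, wait=8):
    """Fetch `url` with `opener`; on a 404 (rate limit), wait and retry."""
    for attempt in range(retries):
        try:
            return opener.open(url).read().decode('utf-8')
        except ue.HTTPError as e:
            # Treat 404 as the server's rate-limit signal and back off
            if e.code == 404 and attempt < retries - 1:
                time.sleep(wait)
            else:
                raise
```

With `retries=3` the helper tolerates two rate-limit 404s before giving up and re-raising the error.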

I haven't set up proxies yet and will rewrite this later when I have time. Because the password positions appear at random, far more than 13 pages have to be fetched; with the request interval held at 8 seconds, collecting all 100 digits took 670 seconds in total.
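Why so many more than 13 fetches? This is a coupon-collector situation: each page reveals a random subset of positions, so later pages mostly repeat digits you already have. A quick simulation (assuming, hypothetically, each fetch reveals about 100/13 ≈ 8 uniformly random positions) gives a feel for the number of fetches needed:

```python
import random

def estimate_fetches(total=100, per_page=8, trials=200):
    """Average number of random pages needed to see every position at least once."""
    counts = []
    for _ in range(trials):
        seen = set()
        fetches = 0
        while len(seen) < total:
            # Each fetch reveals `per_page` distinct random positions
            seen.update(random.sample(range(total), per_page))
            fetches += 1
        counts.append(fetches)
    return sum(counts) / len(counts)
```

Under these assumptions the estimate comes out well above 13 fetches, which, at 8 seconds apiece, is the same order of magnitude as the ~670 s run reported above.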

import urllib.request as ur
import urllib.parse as up
import http.cookiejar as hc
from threading import Thread
import time
import urllib.error as ue
from bs4 import BeautifulSoup

# A 100-slot password, empty to start with
pwd = ['' for i in range(100)]

class GetPwdThread(Thread):
    def __init__(self, url, opener):
        self.url = url
        self.opener = opener
        super(GetPwdThread, self).__init__()

    def run(self):
        try:
            start = time.time()
            resp = self.opener.open(self.url).read().decode('utf-8')
            print(self.url)
            soup = BeautifulSoup(resp, 'html.parser')
            password_pos = soup.select('td[title="password_pos"]')
            password_val = soup.select('td[title="password_val"]')
            # Each matched pair of <td> cells gives a 1-based position and its digit
            for i, j in zip(password_pos, password_val):
                pwd[int(i.string) - 1] = j.string
            print(pwd, 'pwd length:', len(''.join(pwd)),
                  'use time:%s' % (time.time() - start))
        except ue.HTTPError as e:
            # Requests spaced too closely come back as 404
            print(self.url, e.code)

url = 'http://www.heibanke.com/accounts/login/?next=/lesson/crawler_ex03/'
cookie = hc.MozillaCookieJar()
handler = ur.HTTPCookieProcessor(cookie)
opener = ur.build_opener(handler)
req = ur.Request(url)
res = opener.open(req)
# Save the first-visit cookies locally
cookie.save('cookie.txt', ignore_discard=True, ignore_expires=True)
# Extract the csrfmiddlewaretoken value from the cookies
for i in cookie:
    token = i.value
value = {'csrfmiddlewaretoken': token, 'username': 'fangjun', 'password': '19870716'}
data = up.urlencode(value).encode('utf-8')
# Reload the cookies and log in to the site
cookie.load('cookie.txt', ignore_discard=True, ignore_expires=True)
handler = ur.HTTPCookieProcessor(cookie)
opener = ur.build_opener(handler)
req = ur.Request(url, data)
res = opener.open(req)
# Login succeeded; save the session cookies again
cookie.save('cookie.txt', ignore_discard=True, ignore_expires=True)
# Start collecting the password
cookie.load('cookie.txt', ignore_discard=True, ignore_expires=True)
handler = ur.HTTPCookieProcessor(cookie)
opener = ur.build_opener(handler)
url = 'http://www.heibanke.com/lesson/crawler_ex03/pw_list/?page='
starttime = time.time()
threads = []
# Keep fetching until every position has been filled in
while '' in pwd:
    # Positions appear at random, so in theory refreshing just the first
    # page over and over would eventually yield every digit
    for i in range(1, 3):
        newthread = GetPwdThread(url + str(i), opener)
        threads.append(newthread)
        newthread.start()
    # Requests spaced less than 8 seconds apart return 404
    time.sleep(8)
for t in threads:
    t.join()
print('password:', ''.join(pwd), 'total time:%s' % (time.time() - starttime))
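One caveat worth noting: all the worker threads write into the shared `pwd` list. A single item assignment like `pwd[k] = v` is effectively atomic under CPython's GIL, which is why the script above gets away without synchronization, but any compound operation (say, read-then-write, or counting filled slots while writers are active) should hold a lock. A minimal sketch, with hypothetical `record`/`filled` helpers that are not part of the original script:

```python
import threading

# Shared 100-slot password and a lock guarding it (illustrative names)
pwd = [''] * 100
pwd_lock = threading.Lock()

def record(pos, val):
    """Store digit `val` at 1-based position `pos` under the lock."""
    with pwd_lock:
        pwd[pos - 1] = val

def filled():
    """Count filled slots, also under the lock, so no writer interleaves."""
    with pwd_lock:
        return sum(1 for d in pwd if d != '')
```

Each thread would then call `record(...)` instead of assigning into `pwd` directly, and the main loop could poll `filled() == 100` instead of scanning the list unsynchronized.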


The test output:

#... http://www.heibanke.com/lesson/crawler_ex03/pw_list/?page=2 ['9', '3', '1', '4', '7', '7', '9', '2', '5', '5', '7', '9', '3', '0', '3', '3', '2', '6', '5', '3', '1', '4', '9', '0', '1', '6', '3', '6', '6', '8', '7', '7', '4', '5', '6', '6', '2', '8', '8', '1', '7', '3', '6', '1', '8', '5', '3', '8', '2', '5', '3', '6', '7', '5', '2', '6', '4', '9', '7', '2', '3', '3', '8', '1', '8', '3', '0', '8', '6', '7', '4', '4', '5', '1', '3', '5', '3', '5', '7', '7', '4', '8', '9', '5', '0', '6', '9', '5', '2', '4', '4', '3', '2', '9', '5', '4', '8', '9', '4', '6'] pwd length: 100 use time:15.125
password: 9314779255793033265314901636687745662881736185382536752649723381830867445135357748950695244329548946 total time:667.8639998435974
[Finished in 670.7s]


  