
Web Scraping for Beginners: Scraping bcy.net (半次元)'s Top 100 Hot Ranking

2017-10-23 13:26
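The script below logs in to bcy.net with Selenium, opens the weekly top-100 illustration ranking, pulls the link of every work on the page with a regular expression, visits each work's detail page to collect the image URLs, and saves the pictures to disk with urllib. It then steps the date back one day at a time and repeats the process for the previous days' rankings.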
from selenium import webdriver
import urllib.request
import re

# Download the image at `url` and save it as F:/pic/<n>.jpg
# (the F:/pic directory has to exist already).
def ImgSave(url, n):
    u = urllib.request.urlopen(url)
    data = u.read()
    with open("F://pic//" + str(n) + ".jpg", "wb") as f:
        f.write(data)

n = 0
driver = webdriver.Chrome()

# Log in to bcy.net (replace the placeholders with your own account).
driver.get("https://bcy.net/login")
elem_name = driver.find_element_by_id('email')
elem_pwd = driver.find_element_by_id('password')
elem_name.send_keys("your_email@example.com")
elem_pwd.send_keys("your_password")
driver.find_element_by_xpath('//input[@class="btn_green_w121"]').click()

# Open the illustration channel, then the weekly top-100 ranking.
driver.find_element_by_xpath("//a[@href='/illust']").click()
driver.find_element_by_xpath("//a[@href='/illust/toppost100']").click()

q = 20171021
while q > 20171016:
    print("Scraping the ranking for " + str(q))

    # Collect the detail-page link of every work on the current ranking page.
    s = driver.page_source
    pattern = re.compile('work-thumbnail__topBd.*?<a href="(.*?)" target', re.S)
    imgs = re.findall(pattern, s)

    for i in imgs:
        # Open each work's detail page and pull the image URLs out of it.
        url = 'https://bcy.net' + i
        driver.get(url)
        s = driver.page_source
        p = re.compile('<img class="detail_std detail_clickable" src="(.*?)"', re.S)
        ms = re.findall(p, s)

        for m in ms:
            n = n + 1
            ImgSave(m, n)

    # Step back one day and load that day's ranking page.
    q = q - 1
    driver.get("https://bcy.net/illust/toppost100?type=week&date=" + str(q))

driver.close()
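One weak spot in the script above is the bare urllib.request.urlopen() call in ImgSave: a single slow or rejected image download will raise an exception and abort the whole run. Below is a minimal sketch of a more defensive download step; the save_image name, the User-Agent value, and the 10-second timeout are my own choices, not part of the original script.

import urllib.request

# Hypothetical replacement for ImgSave: sends a browser-like User-Agent,
# applies a timeout, and skips failed downloads instead of crashing.
def save_image(url, n, folder="F://pic//"):
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            data = resp.read()
    except Exception as e:
        print("skipping " + url + ": " + str(e))
        return
    with open(folder + str(n) + ".jpg", "wb") as f:
        f.write(data)

Swapping ImgSave(m, n) for save_image(m, n) in the inner loop is enough to use it.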
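Another thing to keep in mind: the script reads driver.page_source right away after get() or click(), so on a slow connection the regexes may run against a page that has not finished loading. If the result lists come back empty, an explicit wait before scraping usually helps; this is only a sketch under that assumption (the detail_std class name comes from the script's own regex, the 10-second timeout is arbitrary).

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Wait up to 10 seconds for the detail image to show up before reading page_source.
WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.CLASS_NAME, "detail_std"))
)
s = driver.page_source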
Tags: web crawler, selenium