
Using a crawler proxy in Python

2019-02-28 17:25

Scrapy middleware
Create a middlewares.py file in the project (./project_name/middlewares.py):

# -*- coding: utf-8 -*-
import base64
import sys
import random

PY3 = sys.version_info[0] >= 3

def base64ify(bytes_or_str):
    # On Python 3, encode str to bytes before base64-encoding;
    # bytes (and Python 2 str) pass through unchanged.
    if PY3 and isinstance(bytes_or_str, str):
        input_bytes = bytes_or_str.encode('utf8')
    else:
        input_bytes = bytes_or_str

    output_bytes = base64.urlsafe_b64encode(input_bytes)
    if PY3:
        # Return str so it can be concatenated into a header value
        return output_bytes.decode('ascii')
    else:
        return output_bytes

class ProxyMiddleware(object):
    def process_request(self, request, spider):
        # Proxy server
        proxyHost = "t.16yun.cn"
        proxyPort = "31111"

        # Proxy tunnel credentials
        proxyUser = "username"
        proxyPass = "password"

        request.meta['proxy'] = "http://{0}:{1}".format(proxyHost, proxyPort)

        # Add the Basic auth header for the proxy
        encoded_user_pass = base64ify(proxyUser + ":" + proxyPass)
        request.headers['Proxy-Authorization'] = 'Basic ' + encoded_user_pass

        # IP rotation header (adjust as needed): a fresh random value on
        # each request asks the tunnel proxy to switch the exit IP
        tunnel = random.randint(1, 10000)
        request.headers['Proxy-Tunnel'] = str(tunnel)
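If several consecutive requests need to exit through the same IP (for example, to keep a login session alive), a common variant is to pin one tunnel value for the whole run instead of randomizing it per request. A minimal sketch, assuming the provider keeps the exit IP stable while the Proxy-Tunnel value repeats (check your vendor's documentation; StickyProxyMiddleware is a hypothetical name):

class StickyProxyMiddleware(ProxyMiddleware):
    def __init__(self):
        # One tunnel value for the lifetime of the middleware; assumes
        # the proxy vendor pins the exit IP to a repeated Proxy-Tunnel value
        self.tunnel = str(random.randint(1, 10000))

    def process_request(self, request, spider):
        # Let the parent set the proxy, auth header and a random tunnel...
        super(StickyProxyMiddleware, self).process_request(request, spider)
        # ...then overwrite the tunnel header with our fixed value
        request.headers['Proxy-Tunnel'] = self.tunnel

Register this class in settings.py in place of ProxyMiddleware if you want the sticky behavior.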

Modify the project settings file (./project_name/settings.py):

DOWNLOADER_MIDDLEWARES = {
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 110,
    'project_name.middlewares.ProxyMiddleware': 100,
}

The process_request hooks run in ascending priority order, so ProxyMiddleware (100) sets the proxy and headers before Scrapy's built-in HttpProxyMiddleware (110) handles the request.
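To verify that traffic actually goes through the proxy, a quick sketch of a throwaway spider that fetches an IP-echo endpoint and logs the answer (httpbin.org/ip is just an example target, and the spider name is arbitrary):

import scrapy

class ProxyCheckSpider(scrapy.Spider):
    name = 'proxy_check'
    start_urls = ['https://httpbin.org/ip']  # echoes the caller's IP as JSON

    def parse(self, response):
        # With the middleware active this should show the proxy's
        # exit IP, not your own machine's address
        self.logger.info('Exit IP response: %s', response.text)

Run it with `scrapy crawl proxy_check`; with the random Proxy-Tunnel header above, repeated requests should report different exit IPs.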