
【Python】Basic Usage of urllib 01

2017-06-14 06:29

urllib

urllib is the package of URL-handling modules that ships with Python 3.x; with it you can easily simulate a user visiting a web page in a browser.


Usage steps

1. Import the request module from the urllib package

from urllib import request


2. Request the URL and get back a response object

response = request.urlopen('http://www.baidu.com')


3. Read the data from the response object

print(response.read().decode('utf-8'))


# -*- coding: utf-8 -*-
'''
Created on 2017-06-13

@author: v_huxiaoting
'''

from urllib import request

req = request.urlopen('http://www.baidu.com')

res = req.read().decode('utf-8')

print(res)


Simulating a real browser

Sending a User-Agent header

Some websites check the User-Agent field in our request headers to decide whether we are a normal browser or a crawler.



req = request.Request(url)
req.add_header(key, value)  # key is 'User-Agent', value is its string
resp = request.urlopen(req)
print(resp.read().decode('utf-8'))


An example

from urllib import request
req = request.Request('http://www.baidu.com')
# set the User-Agent
req.add_header('User-Agent', 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36')
# send the request
rep = request.urlopen(req)

print(rep.read().decode('utf-8'))
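As an alternative to calling add_header afterwards, the headers can also be passed to the Request constructor as a dict. A minimal sketch (the URL and User-Agent string are just placeholders):

```python
from urllib import request

# Headers may be supplied when the Request is constructed;
# this is equivalent to calling add_header() on the object.
req = request.Request(
    'http://www.baidu.com',
    headers={'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64)'}
)

# urllib normalizes header names to capitalized form internally,
# so the stored key is 'User-agent'.
print(req.get_header('User-agent'))
```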


Using POST

For example: submitting form data

Import parse from the urllib package

from urllib import parse


Use urlencode to generate the POST data

postData = parse.urlencode([
    (key1, val1),
    (key2, val2),
    (key3, val3)
])
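For instance (a quick sketch with sample keys and values), urlencode turns the list of tuples into a standard percent-encoded form string:

```python
from urllib import parse

# urlencode percent-encodes each key and value and joins the
# pairs with '&', producing application/x-www-form-urlencoded data.
postData = parse.urlencode([
    ('SearchDate', '2017/06/14'),
    ('SearchTime', '08:00'),
])
print(postData)  # → SearchDate=2017%2F06%2F14&SearchTime=08%3A00
```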


Sending the request

Use postData to send the POST request

request.urlopen(req, data=postData.encode('utf-8'))


Getting the response status

resp.status


Getting the reason phrase returned by the server (e.g. 'OK')

resp.reason
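To see status and reason in action without hitting a real site, here is a self-contained sketch that spins up a local http.server in a background thread and POSTs the encoded data to it (the handler and its reply are made up purely for this demo):

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib import request, parse

class EchoHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get('Content-Length', 0))
        self.rfile.read(length)        # consume the posted form data
        self.send_response(200)        # reason phrase defaults to 'OK'
        self.end_headers()
        self.wfile.write(b'ok')

    def log_message(self, *args):      # silence per-request logging
        pass

server = HTTPServer(('127.0.0.1', 0), EchoHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

postData = parse.urlencode([('key', 'value')])
resp = request.urlopen(
    'http://127.0.0.1:%d/' % server.server_address[1],
    data=postData.encode('utf-8')
)
print(resp.status, resp.reason)  # → 200 OK
server.shutdown()
```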


Case study

1. Capture the request's header information (as shown in the browser's developer tools)

Response Headers
Cache-Control:private
Content-Length:102751
Content-Type:text/html; charset=utf-8
Date:Tue, 13 Jun 2017 22:01:16 GMT
X-AspNet-Version:2.0.50727
X-AspNetMvc-Version:2.0
X-Frame-Options:SAMEORIGIN
X-Powered-By:ASP.NET
Request Headers
Accept:text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Accept-Encoding:gzip, deflate
Accept-Language:zh-CN,zh;q=0.8
Cache-Control:max-age=0
Connection:keep-alive
Content-Length:196
Content-Type:application/x-www-form-urlencoded
Cookie:ASP.NET_SessionId=tvm5ghvc4g1hs145f2pvmm55; __utmt=1; TS016ae6c3=013b146f106a8f5bed0ae05f795eaf9c86525fd6df4ecbd985d4d7389a42be853f0fadbadb8223ac3046f69ae172ada147f1b79dc0; __utma=214205650.1291320816.1497390244.1497390244.1497390244.1; __utmb=214205650.6.10.1497390244; __utmc=214205650; __utmz=214205650.1497390244.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none)
Host:www.thsrc.com.tw
Origin:http://www.thsrc.com.tw
Referer:http://www.thsrc.com.tw/tw/TimeTable/SearchResult
Upgrade-Insecure-Requests:1
User-Agent:Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36
Form Data
StartStation:977abb69-413a-4ccf-a109-0272c24fd490
EndStation:a7a04c89-900b-4798-95a3-c01c455622f4
SearchDate:2017/06/14
SearchTime:08:00
SearchWay:DepartureInMandarin
RestTime:
EarlyOrLater:


Of the headers above, the ones we need to pay attention to are User-Agent and Origin.

Next, use Postman to perform the same request.

Then click Send to issue the request.

After that you will get the response data.



from urllib.request import urlopen
from urllib.request import Request
from urllib import parse
req = Request('http://www.thsrc.com.tw/tw/TimeTable/SearchResult')

postData = parse.urlencode([
    ("StartStation", "977abb69-413a-4ccf-a109-0272c24fd490"),
    ("EndStation", "a7a04c89-900b-4798-95a3-c01c455622f4"),
    ("SearchDate", "2017/06/14"),
    ("SearchTime", "08:00"),
    ("SearchWay", "DepartureInMandarin")
])
req.add_header('Origin','http://www.thsrc.com.tw')
req.add_header('User-Agent', 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36')

resp = urlopen(req,data=postData.encode('utf-8'))

print(resp.read().decode('utf-8'))


Tips:

1. Some websites use fields in the request headers, such as Origin and User-Agent, to decide whether the client is a crawler;

2. If these headers are missing, some websites will return an error;

3. The data passed to parse.urlencode is a list whose elements are tuples.
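On the third tip: a list of tuples (rather than a dict) keeps the field order and also allows repeated keys, which some forms require. A quick sketch:

```python
from urllib import parse

# A dict cannot hold two values under the same key, but a list of
# tuples can, and urlencode preserves the order it is given.
data = parse.urlencode([('tag', 'python'), ('tag', 'url')])
print(data)  # → tag=python&tag=url
```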
Tags: python url