
A Fix for Scrapy Always Failing to Retrieve a Site's Page Source

2017-11-06 16:35
Running scrapy crawl gupiao produces the following log:
2017-11-06 16:28:19 [scrapy.utils.log] INFO: Scrapy 1.4.0 started (bot: gupiaospider)
2017-11-06 16:28:19 [scrapy.utils.log] INFO: Overridden settings: {'BOT_NAME': 'gupiaospider', 'NEWSPIDER_MODULE': 'gupiaospider.spiders', 'SPIDER_MODULES': ['gupiaospider.spiders']}
2017-11-06 16:28:19 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.logstats.LogStats']
2017-11-06 16:28:20 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-11-06 16:28:20 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-11-06 16:28:20 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2017-11-06 16:28:20 [scrapy.core.engine] INFO: Spider opened
2017-11-06 16:28:20 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-11-06 16:28:20 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-11-06 16:28:21 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://stock.10jqka.com.cn/> (referer: None)
2017-11-06 16:28:21 [scrapy.core.engine] INFO: Closing spider (finished)
2017-11-06 16:28:21 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 217,
 'downloader/request_count': 1,
 'downloader/request_method_count/GET': 1,
 'downloader/response_bytes': 278,
 'downloader/response_count': 1,
 'downloader/response_status_count/200': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2017, 11, 6, 8, 28, 21, 353422),
 'log_count/DEBUG': 2,
 'log_count/INFO': 7,
 'response_received_count': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2017, 11, 6, 8, 28, 20, 834393)}
2017-11-06 16:28:21 [scrapy.core.engine] INFO: Spider closed (finished)
Note that there is no error: the request returned HTTP 200, yet 'downloader/response_bytes' is only 278 bytes, far too small to be the real page. The server most likely recognized Scrapy's default User-Agent and returned a stub page instead of the actual source. Solution: add request headers in settings.py.
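The original post does not include the settings it added, so the following is a minimal sketch of the usual fix, assuming a typical desktop-browser User-Agent string; USER_AGENT and DEFAULT_REQUEST_HEADERS are standard Scrapy settings:

# settings.py
# A minimal sketch: the exact headers used in the original post were not
# preserved, so a common browser User-Agent string is assumed here.
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36'

# Optional extra headers sent with every request. The User-Agent itself is
# taken from the USER_AGENT setting by Scrapy's UserAgentMiddleware.
DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'zh-CN,zh;q=0.8,en;q=0.6',
}

After adding these settings, rerunning scrapy crawl gupiao should report a much larger 'downloader/response_bytes' value, and the response body should now contain the page's real HTML source.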