Web Crawler 1
2015-06-17 19:22
# -*- coding: utf-8 -*-
# Python 2: crawl stock quotes from 10jqka page by page and insert them into MySQL.
import json
import urllib

import MySQLdb

conn = MySQLdb.connect(host="localhost", user="root", passwd="sf123456",
                       port=3306, charset="utf8")
cur = conn.cursor()
# cur.execute('create database if not exists db_stock')
conn.select_db("db_stock")

# Column names in the JSON response, in the same order as the INSERT below.
FIELDS = ("cje", "cjl", "hsl", "jk", "jlr", "rtime", "stockcode",
          "stockid", "stockname", "zde", "zdf", "zdj", "zgj", "zs", "zxj")

# Parameterized INSERT: lets the driver escape values instead of building
# the SQL with string formatting.
sql = ("INSERT INTO stock_information1 (cje, cjl, hsl, jk, jlr, rtime, "
       "stockcode, stockid, stockname, zde, zdf, zdj, zgj, zs, zxj) "
       "VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)")

# Use a separate name for the page counter: the original reused `i` for both
# the page loop and the record loop, so the outer index was overwritten.
for page in range(1, 57):
    print page
    url = "http://q.10jqka.com.cn/interface/stock/fl/zdf/desc/" + str(page) + "/hsa/quote"
    content = urllib.urlopen(url).read()

    # Save the raw response, then parse it back from disk (as in the original).
    with open("E:\\data\\stock\\stock0617.json", "w") as f:
        f.write(content)
    with open("E:\\data\\stock\\stock0617.json") as f:
        s = json.load(f)

    for record in s["data"]:
        cur.execute(sql, tuple(record[k] for k in FIELDS))
    conn.commit()

conn.close()
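The inner loop above just maps fifteen fixed JSON fields onto an INSERT. That extraction step can be checked without a network or a database; below is a minimal sketch using a made-up payload in the same shape as the crawler's `s['data']` list (all values here are illustrative, not real quotes):

```python
import json

# Sample payload shaped like the 10jqka response; values are made up.
sample = json.loads("""
{"data": [
  {"cje": "1.2e8", "cjl": "900000", "hsl": "3.1", "jk": "10.0",
   "jlr": "5.0e6", "rtime": "0617", "stockcode": "600000",
   "stockid": "1", "stockname": "demo", "zde": "0.5",
   "zdf": "5.0", "zdj": "9.5", "zgj": "10.5", "zs": "10.0", "zxj": "10.5"}
]}
""")

FIELDS = ("cje", "cjl", "hsl", "jk", "jlr", "rtime", "stockcode",
          "stockid", "stockname", "zde", "zdf", "zdj", "zgj", "zs", "zxj")

def to_rows(payload):
    """Turn parsed JSON into tuples ready for cur.execute()/executemany()."""
    return [tuple(item[k] for k in FIELDS) for item in payload["data"]]

rows = to_rows(sample)
print(len(rows))   # one tuple per stock record
print(rows[0][6])  # stockcode of the first record
```

Building the tuples separately also makes it easy to switch to `cur.executemany(sql, rows)`, which inserts a whole page of records in one driver call.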