Scraping computer-category books from JD.com
1. Tools: requests, PyCharm, Scrapy, MongoDB
2. Page extraction: XPath
1. Analyze the JD pages:
Open the JD site and view the page source: the book list is present in the static HTML (not rendered dynamically by JavaScript), and the entries are plain lists, so they are easy to process. Start the analysis.
We only need to extract the book title, the link, the publisher, the author, the comment count, and the price.
Note that the price and the comment count are not in the page source, which means they are loaded via AJAX requests; so capture the traffic with the browser's devtools and see whether they show up there.
The comment count can be found in the captured traffic:
url: https://club.jd.com/comment/productCommentSummaries.action?my=pinglun&referenceIds=11936238
referenceIds is the book's id; the response is JSON.
Next, find the price:
It can also be found in the captured traffic; in the response, m is the book's original price and p is the current price:
url: https://p.3.cn/prices/mgets?ext=11000000&pin=&type=1&area=1_72_4137_0&skuIds=J_11936238
skuIds is the book's id; the response is JSON (a quick sanity check of both endpoints with requests is shown below).
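Before wiring these endpoints into the spider, it is worth checking the JSON shapes by hand. A minimal sketch with requests, using the example sku 11936238 from the URLs above; the field names (CommentsCount/CommentCount, p, m) are the ones the spider below relies on:

# coding: utf-8
import requests

sku = "11936238"  # example book id from the URLs above
headers = {'Accept-Language': 'zh-CN,zh;q=0.8,en-US;q=0.5,en;q=0.3'}

# comment count: keyed by referenceIds, returns JSON
comment_url = ('https://club.jd.com/comment/productCommentSummaries.action'
               '?my=pinglun&referenceIds=%s' % sku)
comments = requests.get(comment_url, headers=headers).json()
print(comments['CommentsCount'][0]['CommentCount'])

# price: keyed by skuIds=J_<id>, returns a JSON list; p = current price, m = original price
price_url = ('https://p.3.cn/prices/mgets?ext=11000000&pin=&type=1'
             '&area=1_72_4137_0&skuIds=J_%s' % sku)
prices = requests.get(price_url, headers=headers).json()
print(prices[0]['p'])
print(prices[0]['m'])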
2. Write the code:
scrapy startproject jd
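For reference, scrapy startproject jd generates roughly the following layout (Scrapy's standard template; exact files may vary slightly by version). The spider goes under jd/spiders/ and the pipeline into pipelines.py:

jd/
    scrapy.cfg
    jd/
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/
            __init__.py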
1. Write the spider:
# coding: utf-8
import time
from scrapy.selector import Selector
from scrapy.http import Request
from scrapy.spiders import Spider
import requests
from requests.packages.urllib3.exceptions import InsecureRequestWarning  # suppress insecure-request warnings (verify=False is used below)
requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
'''
time: 2018-05-19
by: jianmoumou233
Crawl JD books, IT category
'''


class Page(Spider):
    name = "jd"
    mongo_collections = "jd"  # MongoDB collection the pipeline writes to
    headers = {
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
        'Accept-Language': 'zh-CN,zh;q=0.8,en-US;q=0.5,en;q=0.3',
        "Upgrade-Insecure-Requests": "1",
        "Connection": "keep-alive",
        "Cache-Control": "max-age=0",
    }

    def start_requests(self):
        # the IT book category list spans pages 1-279
        for i in xrange(1, 280):
            url = 'https://list.jd.com/list.html?cat=1713,3287,3797&page=%d' % i
            yield Request(url, dont_filter=True)

    def parse(self, response):
        '''
        :param url: url
        :param title: book's name
        :param author: book's author
        :param shop: shop's name
        :param _id: book id and mongodb's _id
        :param price: book's price
        :param old_price: book's original price
        :param comment_count: book's number of comments
        '''
        xbody = Selector(response)
        item = dict()
        _li = xbody.xpath("//*[@id='plist']/ul/li")
        for i in _li:
            item['url'] = i.xpath("./div/div[1]/a/@href").extract_first()
            item['title'] = i.xpath("./div/div[contains(@class,'p-name')]/a/em/text()").extract_first()
            item['author'] = i.xpath("./div/div[contains(@class,'p-bookdetails')]//span[contains(@class,'author_type_1')]/a/text()").extract_first()
            item['shop'] = i.xpath("./div/div[contains(@class,'p-bookdetails')]/span[contains(@class,'p-bi-store')]/a/@title").extract_first()
            item["_id"] = i.xpath("./div/@data-sku").extract_first()
            item["spidertime"] = time.strftime("%Y-%m-%d %H:%M:%S")
            for k, v in item.items():
                if v:
                    item[k] = str(v).strip()
            if item.get('_id'):
                try:
                    item['price'], item["old_price"] = self.price(item['_id'], self.headers)
                    time.sleep(2)
                    item['comment_count'] = self.buy(item['_id'], self.headers)
                except Exception as e:
                    print e
            if not str(item['url']).startswith("http"):
                # list-page hrefs are protocol-relative (//item.jd.com/...), so prepend the scheme
                item['url'] = "https:" + item['url']
            yield item

    @staticmethod
    def price(id, headers):
        # price API: p is the current price, m is the original price
        url = "https://p.3.cn/prices/mgets?ext=11000000&pin=&type=1&area=1_72_4137_0&skuIds=J_%s&pdbp=0&pdtk=&pdpin=&pduid=15229474889041156750382&source=list_pc_front" % id
        data = requests.get(url, headers=headers, verify=False).json()
        return data[0].get('p'), data[0].get("m")

    @staticmethod
    def buy(id, headers):
        # comment-count API, keyed by the book id (referenceIds)
        url = 'https://club.jd.com/comment/productCommentSummaries.action?my=pinglun&referenceIds=%s' % id
        data = requests.get(url, headers=headers, verify=False).json()
        return data.get('CommentsCount')[0].get("CommentCount")
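The list-page XPaths are the fragile part. If you want to verify them interactively before a full crawl, scrapy shell works well; a quick check using the same selectors as parse() above (assuming the page structure is unchanged):

scrapy shell "https://list.jd.com/list.html?cat=1713,3287,3797&page=1"

# then, inside the shell:
li = response.xpath("//*[@id='plist']/ul/li")
li[0].xpath("./div/@data-sku").extract_first()
li[0].xpath("./div/div[contains(@class,'p-name')]/a/em/text()").extract_first()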
2. Write the MongoDB pipeline (pipelines.py):
# -*- coding: utf-8 -*-
import sys

import pymongo

reload(sys)
sys.setdefaultencoding("utf-8")


class Mongo(object):
    mongo_uri = None
    mongo_db = None
    client = None
    db = None

    def __init__(self, mongo_uri, mongo_db):
        self.mongo_uri = mongo_uri
        self.mongo_db = mongo_db

    @classmethod
    def from_crawler(cls, crawler):
        # pull the connection settings defined in settings.py
        return cls(
            mongo_uri=crawler.settings.get('MONGO_URI'),
            mongo_db=crawler.settings.get('MONGO_DATABASE', 'test'),
        )

    def open_spider(self, spider):
        self.client = pymongo.MongoClient(host=self.mongo_uri)
        self.db = self.client[self.mongo_db]

    def close_spider(self, spider):
        self.client.close()

    def process_item(self, item, spider):
        try:
            # _id is the book's sku, so inserting a duplicate raises and is silently skipped
            self.db[spider.mongo_collections].insert(dict(item))
        except Exception as e:
            pass
        # return item
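Once a few items have been written, a quick pymongo check outside Scrapy confirms they landed in the database and collection used above (a sketch, matching the pymongo version era of the pipeline's insert()):

import pymongo

client = pymongo.MongoClient("mongodb://127.0.0.1:27017")
books = client["jd"]["jd"]   # database "jd", collection "jd" (spider.mongo_collections)
print(books.count())         # how many books have been stored so far
print(books.find_one())      # print one stored book document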
3. Configure settings.py:
MONGO_URI = "mongodb://127.0.0.1:27017"
MONGO_DATABASE = "jd"

# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'jd.pipelines.Mongo': 300,
}
4. Run the spider:
scrapy crawl jd
5. Results:
The url field needs the https:// scheme; I got the concatenation wrong at first, since the list-page hrefs are protocol-relative and need "https:" prepended.
Summary:
I crawled several thousand books and have not been blocked yet; add a download delay and include as many request headers as you can.
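A minimal sketch of what that advice could look like in settings.py; DOWNLOAD_DELAY, RANDOMIZE_DOWNLOAD_DELAY, DEFAULT_REQUEST_HEADERS and USER_AGENT are standard Scrapy settings, and the values here are only examples (the headers are taken from the spider above):

# settings.py (example values)
DOWNLOAD_DELAY = 2               # seconds between requests to the same site
RANDOMIZE_DOWNLOAD_DELAY = True  # jitter the delay a bit
DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'zh-CN,zh;q=0.8,en-US;q=0.5,en;q=0.3',
    'Connection': 'keep-alive',
}
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'  # browser-like UA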