This post scrapes the Anjuke (安居客) site for rental listings in Shanghai's Changning district. It is adapted from an article on a WeChat public account.
As before, the crawler is built with the Scrapy framework. The steps are (the project layout these files live in is sketched right after this list):
1. Analyze the web page
2. items.py
3. spiders.py
4. pipelines.py
5. settings.py
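If you are starting from scratch, Scrapy can generate the project skeleton that the files above live in. The project name anjukeSpider below is an assumption taken from the import paths used later in this post:

```
scrapy startproject anjukeSpider          # creates items.py, pipelines.py, settings.py and a spiders/ directory
cd anjukeSpider
scrapy genspider -t crawl anjuke sh.zu.anjuke.com   # optional: a CrawlSpider stub named 'anjuke'
```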
Rental listings for Changning district, Shanghai: https://sh.zu.anjuke.com/fangyuan/changning/
items.py defines the fields that will hold the scraped information, one scrapy.Field() per attribute, for example:
```python
price = scrapy.Field()
```
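Only the price field survives in the snippet above. Based on the fields the spider fills in below, the complete items.py would look roughly like this (a reconstruction, not copied verbatim from the original project):

```python
import scrapy

class AnjukespiderItem(scrapy.Item):
    price = scrapy.Field()          # monthly rent
    rent_type = scrapy.Field()      # whole flat / shared
    house_type = scrapy.Field()     # layout
    area = scrapy.Field()           # floor area in square metres
    towards = scrapy.Field()        # orientation
    floor = scrapy.Field()          # floor
    decoration = scrapy.Field()     # decoration / furnishing
    building_type = scrapy.Field()  # building type
    community = scrapy.Field()      # residential complex
```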
Next is the spider file, which tells Scrapy what to crawl and how to crawl it.
```python
import scrapy
from scrapy.spiders import Rule
from scrapy.linkextractors import LinkExtractor
from anjukeSpider.items import AnjukespiderItem

# Spider class
class anjuke(scrapy.spiders.CrawlSpider):
    # spider name
    name = 'anjuke'
    # start URL
    start_urls = ['https://sh.zu.anjuke.com/fangyuan/changning/']
    # crawl rules
    rules = (
        # the listing pages have a "next page" button, so follow=True crawls every page
        Rule(LinkExtractor(allow=r'fangyuan/p\d+/'), follow=True),
        # detail pages; the page also shows "recommended" listings that are not necessarily
        # in Changning, so follow=False keeps the crawl from wandering off
        Rule(LinkExtractor(allow=r'https://sh.zu.anjuke.com/fangyuan/\d{10}'), follow=False, callback='parse_item'),
    )

    # callback: mostly XPath extraction, already covered in the previous post, so not repeated here
    def parse_item(self, response):
        item = AnjukespiderItem()
        # rent
        item['price'] = int(response.xpath("//ul[@class='house-info-zufang cf']/li[1]/span[1]/em/text()").extract_first())
        # rental type
        item['rent_type'] = response.xpath("//ul[@class='title-label cf']/li[1]/text()").extract_first()
        # layout
        item['house_type'] = response.xpath("//ul[@class='house-info-zufang cf']/li[2]/span[2]/text()").extract_first()
        # floor area (strip the '平方米' unit)
        item['area'] = int(response.xpath("//ul[@class='house-info-zufang cf']/li[3]/span[2]/text()").extract_first().replace('平方米', ''))
        # orientation
        item['towards'] = response.xpath("//ul[@class='house-info-zufang cf']/li[4]/span[2]/text()").extract_first()
        # floor
        item['floor'] = response.xpath("//ul[@class='house-info-zufang cf']/li[5]/span[2]/text()").extract_first()
        # decoration
        item['decoration'] = response.xpath("//ul[@class='house-info-zufang cf']/li[6]/span[2]/text()").extract_first()
        # building type
        item['building_type'] = response.xpath("//ul[@class='house-info-zufang cf']/li[7]/span[2]/text()").extract_first()
        # residential complex
        item['community'] = response.xpath("//ul[@class='house-info-zufang cf']/li[8]/a[1]/text()").extract_first()
        yield item
```
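Before committing XPath expressions to parse_item, it helps to try them interactively in the Scrapy shell (step 1, analyzing the page). A quick sketch; the listing id in the URL is a placeholder, substitute any detail-page URL found on the listing page:

```python
# From the command line:  scrapy shell 'https://sh.zu.anjuke.com/fangyuan/<listing-id>'
# Inside the shell, test the selectors before putting them into parse_item:
response.xpath("//ul[@class='house-info-zufang cf']/li[1]/span[1]/em/text()").extract_first()   # rent
response.xpath("//ul[@class='title-label cf']/li[1]/text()").extract_first()                    # rental type
```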
pipelines.py saves the scraped data; here it is only written out as JSON.
Strictly speaking this part is optional: you can skip the pipeline entirely and pass export options on the command line instead: scrapy crawl anjuke -o anjuke.json -t json
The general form is: scrapy crawl <spider name> -o <output file> -t <format>
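The same feed export can also be configured in settings.py rather than on the command line. The exact setting names depend on the Scrapy version (FEED_URI/FEED_FORMAT in the 1.x series used here; newer versions use the FEEDS dict), so treat this as a sketch:

```python
# settings.py -- feed export configured in settings instead of on the command line
FEED_URI = 'anjuke.json'
FEED_FORMAT = 'json'
FEED_EXPORT_ENCODING = 'utf-8'   # keep Chinese text readable instead of \u escape sequences
```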
```python
from scrapy.exporters import JsonItemExporter

class AnjukespiderPipeline(object):
    def __init__(self):
        # output file path
        self.file = open('zufang_shanghai.json', 'wb')
        self.exporter = JsonItemExporter(self.file, ensure_ascii=False)
        self.exporter.start_exporting()

    def process_item(self, item, spider):
        print('write')
        self.exporter.export_item(item)
        return item

    def close_spider(self, spider):
        print("close")
        self.exporter.finish_exporting()
        self.file.close()
```
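If one JSON object per line is preferable (easier to append to and stream through other tools), scrapy.exporters also provides JsonLinesItemExporter, which drops in with minimal changes. A sketch under that assumption; the class and file names here are illustrative:

```python
from scrapy.exporters import JsonLinesItemExporter

class AnjukespiderJsonLinesPipeline(object):
    """Variant of the pipeline above that writes one JSON object per line."""
    def __init__(self):
        self.file = open('zufang_shanghai.jl', 'wb')
        self.exporter = JsonLinesItemExporter(self.file, ensure_ascii=False)
        self.exporter.start_exporting()

    def process_item(self, item, spider):
        self.exporter.export_item(item)
        return item

    def close_spider(self, spider):
        self.exporter.finish_exporting()
        self.file.close()
```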
Finally, edit settings.py so the pipeline takes effect, and set a download delay so the crawl does not hit the site fast enough to get blocked.
```python
ITEM_PIPELINES = {
    'anjukeSpider.pipelines.AnjukespiderPipeline': 300,
}

DOWNLOAD_DELAY = 2
```
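Two other settings that are commonly adjusted for sites like this are ROBOTSTXT_OBEY and USER_AGENT. Whether they are needed depends on how the site responds at crawl time, so the values below are only an illustration:

```python
# settings.py -- optional extras
ROBOTSTXT_OBEY = False   # skip robots.txt if it disallows the listing pages
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'   # browser-like UA instead of the Scrapy default
```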
The crawl made 61 requests and scraped 60 listings (see the stats below); the JSON file is generated at the path configured in the pipeline.
```
2018-10-22 09:02:55 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 40861,
 'downloader/request_count': 61,
 'downloader/request_method_count/GET': 61,
 'downloader/response_bytes': 1925879,
 'downloader/response_count': 61,
 'downloader/response_status_count/200': 61,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2018, 10, 22, 1, 2, 55, 245128),
 'item_scraped_count': 60,
 'log_count/DEBUG': 122,
 'log_count/INFO': 9,
 'request_depth_max': 1,
 'response_received_count': 61,
 'scheduler/dequeued': 61,
 'scheduler/dequeued/memory': 61,
 'scheduler/enqueued': 61,
 'scheduler/enqueued/memory': 61,
 'start_time': datetime.datetime(2018, 10, 22, 1, 0, 29, 555537)}
2018-10-22 09:02:55 [scrapy.core.engine] INFO: Spider closed (finished)
```
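A quick way to confirm the export worked is to load the file back in Python. JsonItemExporter writes a single JSON array, so the standard json module is enough (field names come from items.py above):

```python
import json

with open('zufang_shanghai.json', encoding='utf-8') as f:
    listings = json.load(f)

print(len(listings))                                   # number of scraped listings
print(listings[0]['community'], listings[0]['price'])  # spot-check one record
```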
That completes the crawler. The scraped data is not very readable on its own, though, so it still needs to be visualized (with the pyecharts module); that part will be covered in a separate post on using pyecharts.
pyecharts official documentation: http://pyecharts.org/#/zh-cn/