Scrapy in Practice: Crawling Paginated Data
import scrapy
from hello_world.items import HelloWorldItem


class MirSpider(scrapy.Spider):
    name = "MirSpider"
    start_urls = ["http://mirtarbase.cuhk.edu.cn/php/search.php?opt=species&org=bta&sort=id&order=asc&page=1"]

    def parse(self, response):
        domain = 'http://mirtarbase.cuhk.edu.cn'
        for row in response.xpath('//table/tr'):
            item = HelloWorldItem()
            res = [col.extract() for col in row.xpath('td/text()')]
            # Skip header rows and rows for other species
            if not res or res[0] != 'Bos taurus':
                continue
            item['species'] = res[0]
            item['miRNA'] = res[2]
            item['target'] = res[3]
            item['total'] = res[4]
            item['papers'] = res[5]
            yield item
        # Follow every pagination link; with dont_filter=False the
        # scheduler's built-in duplicate filter avoids re-visiting pages
        for url in response.xpath('//a/@href').extract():
            if 'page' in url:
                yield scrapy.Request(domain + url, callback=self.parse, dont_filter=False)
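The spider above builds absolute URLs by string concatenation (`domain + url`), which only works when every extracted href is an absolute path. A more robust alternative is URL resolution with the standard library's `urllib.parse.urljoin` (Scrapy's `response.urljoin` does the same thing relative to the current page). A minimal sketch, with illustrative query strings:

```python
from urllib.parse import urljoin

# Base is the page currently being parsed
base = 'http://mirtarbase.cuhk.edu.cn/php/search.php'

# A relative href (query only) is resolved against the base page
print(urljoin(base, '?opt=species&org=bta&page=2'))
# http://mirtarbase.cuhk.edu.cn/php/search.php?opt=species&org=bta&page=2

# An absolute path replaces everything after the host
print(urljoin(base, '/php/search.php?page=3'))
# http://mirtarbase.cuhk.edu.cn/php/search.php?page=3
```

Either form of href resolves to a correct absolute URL, so the `'page' in url` filter can be applied to hrefs of any shape.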
2. Item Pipeline
Some of the downloaded items may be duplicates. In that case you can process each item in pipelines.py and drop the repeats. The deduplication code looks like this:
from itemadapter import ItemAdapter
from scrapy.exceptions import DropItem


class HelloWorldPipeline:
    def __init__(self):
        self.link_set = set()

    def process_item(self, item, spider):
        # A (miRNA, target) pair uniquely identifies a record
        link = item['miRNA'] + item['target']
        if link in self.link_set:
            raise DropItem(item)
        self.link_set.add(link)
        return item
The process_item method uses a set object to achieve deduplication. Note that pipelines are disabled by default: after writing the code, you must enable the corresponding pipeline in settings.py as follows
ITEM_PIPELINES = {
    'hello_world.pipelines.HelloWorldPipeline': 300,
}
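The set-based deduplication in process_item can be exercised without running a crawl. The sketch below reproduces the same logic on plain dicts (the sample records are made up for illustration):

```python
def dedupe(items):
    """Keep only the first item for each (miRNA, target) pair,
    mirroring the logic of HelloWorldPipeline.process_item."""
    seen = set()
    unique = []
    for item in items:
        key = item['miRNA'] + item['target']
        if key in seen:
            continue  # the pipeline would raise DropItem here
        seen.add(key)
        unique.append(item)
    return unique

items = [
    {'miRNA': 'bta-miR-21', 'target': 'PTEN'},
    {'miRNA': 'bta-miR-21', 'target': 'PTEN'},   # duplicate, dropped
    {'miRNA': 'bta-miR-155', 'target': 'SOCS1'},
]
print(len(dedupe(items)))  # → 2
```

Because the set lives on the pipeline instance, it persists across all pages of the crawl, so duplicates are caught even when the same record appears on different pages.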
For standard multi-page table data, the code above handles pagination with very little effort.