python - Scrapy CrawlSpider and LinkExtractor rules not working for pagination

Tags: python web-crawler scrapy web-scripting

I can't figure out why my Scrapy CrawlSpider doesn't follow pagination, even though I have set up a rule for it.

However, if I change start_urls to http://bitcoin.travel/listing-category/bitcoin-hotels-and-travel/ and comment out parse_start_url, I do scrape more items for that page.

My goal is to crawl all the categories. Does anyone know what I'm doing wrong?

import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

from bitcointravel.items import BitcointravelItem



class BitcoinSpider(CrawlSpider):
    name = "bitcoin"
    allowed_domains = ["bitcoin.travel"]
    start_urls = [
        "http://bitcoin.travel/categories/"
    ]

    rules = (

        # Extract pagination links ("next" page-numbers anchors) and parse them with parse_items
        Rule(LinkExtractor(allow=('.+/page/\d+/$'), restrict_xpaths=('//a[@class="next page-numbers"]'),),
             callback='parse_items', follow=True),
    )

    def parse_start_url(self, response):
        for sel in response.xpath("//ul[@class='maincat-list']/li"):
            url = sel.xpath('a/@href').extract()[0]
            if url == 'http://bitcoin.travel/listing-category/bitcoin-hotels-and-travel/':
            # url = 'http://bitcoin.travel/listing-category/bitcoin-hotels-and-travel/'
                yield scrapy.Request(url, callback=self.parse_items)


    def parse_items(self, response):
        self.logger.info('Hi, this is an item page! %s', response.url)
        for sel in response.xpath("//div[@class='grido']"):
            item = BitcointravelItem()
            item['name'] = sel.xpath('a/@title').extract()
            item['website'] = sel.xpath('a/@href').extract()
            yield item

This is the result:

{'downloader/request_bytes': 574,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 2,
 'downloader/response_bytes': 98877,
 'downloader/response_count': 2,
 'downloader/response_status_count/200': 2,
 'dupefilter/filtered': 3,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2016, 2, 15, 13, 44, 17, 37859),
 'item_scraped_count': 24,
 'log_count/DEBUG': 28,
 'log_count/INFO': 8,
 'request_depth_max': 1,
 'response_received_count': 2,
 'scheduler/dequeued': 2,
 'scheduler/dequeued/memory': 2,
 'scheduler/enqueued': 2,
 'scheduler/enqueued/memory': 2,
 'start_time': datetime.datetime(2016, 2, 15, 13, 44, 11, 250892)}
2016-02-15 14:44:17 [scrapy] INFO: Spider closed (finished)

The item count should be 55, not 24.

Best Answer

For http://bitcoin.travel/listing-category/bitcoin-hotels-and-travel/, the HTML source contains links that match the pattern '.+/page/\d+/$' in your rule:
<a class='page-numbers' href='http://bitcoin.travel/listing-category/bitcoin-hotels-and-travel/page/2/'>2</a>
<a class='page-numbers' href='http://bitcoin.travel/listing-category/bitcoin-hotels-and-travel/page/3/'>3</a>

http://bitcoin.travel/categories/ contains no such links; it mainly contains links to other category pages:

...
<li class="cat-item cat-item-227"><a href="http://bitcoin.travel/listing-category/bitcoin-food/bitcoin-coffee-tea-supplies/" title="The best Coffee &amp; Tea Supplies businesses where you can spend your bitcoins!">Coffee &amp; Tea Supplies</a>  </li>
<li class="cat-item cat-item-50"><a href="http://bitcoin.travel/listing-category/bitcoin-food/bitcoin-cupcakes/" title="The best Cupcakes businesses where you can spend your bitcoins!">Cupcakes</a>  </li>
<li class="cat-item cat-item-229"><a href="http://bitcoin.travel/listing-category/bitcoin-food/bitcoin-distilleries/" title="The best Distilleries businesses where you can spend your bitcoins!">Distilleries</a>  </li>
...
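You can verify the mismatch directly with Python's `re` module (Scrapy applies `allow` patterns with a regex search against each extracted URL): the pagination pattern matches the `page/2/` links found on a category page, but none of the category links on /categories/, so the rule never fires from the start page.

```python
import re

# The pattern from the spider's only rule.
pattern = r'.+/page/\d+/$'

# A pagination link found on a category page -- matches.
print(bool(re.search(
    pattern,
    'http://bitcoin.travel/listing-category/bitcoin-hotels-and-travel/page/2/')))  # True

# A category link found on http://bitcoin.travel/categories/ -- no match,
# so the rule extracts nothing from the start page.
print(bool(re.search(
    pattern,
    'http://bitcoin.travel/listing-category/bitcoin-food/bitcoin-cupcakes/')))  # False
```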

If you want to scrape more category pages, you need to add a rule that crawls those category pages.

For "python - Scrapy CrawlSpider and LinkExtractor rules not working for pagination", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/35411059/
