python - Scrapy+Splash returns 403 for every website

Tags: python web-scraping scrapy splash-screen

For some reason, I get a 403 error for every request when I use Splash. What am I doing wrong?

I followed https://github.com/scrapy-plugins/scrapy-splash and set up all the settings:

SPLASH_URL = 'http://localhost:8050'
DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}
SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}
DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
HTTPCACHE_STORAGE = 'scrapy_splash.SplashAwareFSCacheStorage'

I start Splash with Docker:

sudo docker run -p 8050:8050 scrapinghub/splash
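
If Splash is up, you can check it independently of Scrapy by calling its render.html endpoint directly. A minimal sanity check (assuming the requests package is installed; example.com is just a placeholder target):

import requests

# Ask Splash to render a known-good page; a 200 here means the
# container itself works, so any 403 is coming from the target site.
resp = requests.get(
    "http://localhost:8050/render.html",
    params={"url": "https://example.com", "wait": 0.5},
)
print(resp.status_code)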

Spider code:

import scrapy

from scrapy import Selector
from scrapy_splash import SplashRequest


class VestiaireSpider(scrapy.Spider):
    name = "vestiaire"
    base_url = "https://www.vestiairecollective.com"
    rotate_user_agent = True

    def start_requests(self):
        urls = ["https://www.vestiairecollective.com/men-clothing/jeans/"]
        for url in urls:
            # scrapy-splash expects Splash arguments via args=, not meta
            yield SplashRequest(url=url, callback=self.parse, args={"wait": 0.5})

    def parse(self, response):
        data = Selector(response)
        category_name = data.xpath('//h1[@class="campaign campaign-title clearfix"]/text()').extract_first().strip()
        self.log(category_name)

Then I run the spider:

scrapy crawl vestiaire

and it returns a 403 for the requested URL:

2017-12-19 22:55:17 [scrapy.utils.log] INFO: Scrapy 1.4.0 started (bot: crawlers)
2017-12-19 22:55:17 [scrapy.utils.log] INFO: Overridden settings: {'DUPEFILTER_CLASS': 'scrapy_splash.SplashAwareDupeFilter', 'CONCURRENT_REQUESTS': 10, 'NEWSPIDER_MODULE': 'crawlers.spiders', 'SPIDER_MODULES': ['crawlers.spiders'], 'ROBOTSTXT_OBEY': True, 'COOKIES_ENABLED': False, 'BOT_NAME': 'crawlers', 'HTTPCACHE_STORAGE': 'scrapy_splash.SplashAwareFSCacheStorage'}
2017-12-19 22:55:17 [scrapy.middleware] INFO: Enabled extensions: ['scrapy.extensions.telnet.TelnetConsole', 'scrapy.extensions.logstats.LogStats', 'scrapy.extensions.memusage.MemoryUsage', 'scrapy.extensions.corestats.CoreStats']
2017-12-19 22:55:17 [scrapy.middleware] INFO: Enabled downloader middlewares: ['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware', 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware', 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware', 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware', 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware', 'scrapy.downloadermiddlewares.retry.RetryMiddleware', 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware', 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware', 'scrapy_splash.SplashCookiesMiddleware', 'scrapy_splash.SplashMiddleware', 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware', 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware', 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-12-19 22:55:17 [scrapy.middleware] INFO: Enabled spider middlewares: ['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware', 'scrapy_splash.SplashDeduplicateArgsMiddleware', 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware', 'scrapy.spidermiddlewares.referer.RefererMiddleware', 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware', 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-12-19 22:55:17 [scrapy.middleware] INFO: Enabled item pipelines: ['scrapy.pipelines.images.ImagesPipeline']
2017-12-19 22:55:17 [scrapy.core.engine] INFO: Spider opened
2017-12-19 22:55:17 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-12-19 22:55:17 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-12-19 22:55:20 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.vestiairecollective.com/robots.txt> (referer: None)
2017-12-19 22:55:22 [scrapy.core.engine] DEBUG: Crawled (403) <GET http://localhost:8050/robots.txt> (referer: None)
2017-12-19 22:55:23 [scrapy.core.engine] DEBUG: Crawled (403) <GET https://www.vestiairecollective.com/men-clothing/jeans/ via http://localhost:8050/render.html> (referer: None)
2017-12-19 22:55:23 [scrapy.spidermiddlewares.httperror] INFO: Ignoring response <403 https://www.vestiairecollective.com/men-clothing/jeans/>: HTTP status code is not handled or not allowed
2017-12-19 22:55:23 [scrapy.core.engine] INFO: Closing spider (finished)
2017-12-19 22:55:23 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 1254,
 'downloader/request_count': 3,
 'downloader/request_method_count/GET': 2,
 'downloader/request_method_count/POST': 1,
 'downloader/response_bytes': 2793,
 'downloader/response_count': 3,
 'downloader/response_status_count/200': 1,
 'downloader/response_status_count/403': 2,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2017, 12, 19, 20, 55, 23, 440598),
 'httperror/response_ignored_count': 1,
 'httperror/response_ignored_status_count/403': 1,
 'log_count/DEBUG': 4,
 'log_count/INFO': 8,
 'memusage/max': 53850112,
 'memusage/startup': 53850112,
 'response_received_count': 3,
 'scheduler/dequeued': 2,
 'scheduler/dequeued/memory': 2,
 'scheduler/enqueued': 2,
 'scheduler/enqueued/memory': 2,
 'splash/render.html/request_count': 1,
 'splash/render.html/response_count/403': 1,
 'start_time': datetime.datetime(2017, 12, 19, 20, 55, 17, 372080)}
2017-12-19 22:55:23 [scrapy.core.engine] INFO: Spider closed (finished)

Best answer

The problem is the User-Agent header: the log shows the 403 coming back from the target site through Splash's render.html, and many websites refuse requests that don't send a browser-like User-Agent. The easiest way to access the site and avoid getting banned is to randomize the User-Agent with this library: https://github.com/cnu/scrapy-random-useragent
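
For reference, a minimal settings.py sketch based on the scrapy-random-useragent README (the list path is a placeholder; merge the middleware entries with the scrapy-splash ones shown above):

# settings.py
DOWNLOADER_MIDDLEWARES = {
    # disable Scrapy's built-in User-Agent middleware...
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
    # ...and let scrapy-random-useragent pick a random UA per request
    'random_useragent.RandomUserAgentMiddleware': 400,
}
# text file with one User-Agent string per line
USER_AGENT_LIST = "/path/to/useragent_list.txt"

To first confirm that the User-Agent really is the culprit, simply setting a single browser-like USER_AGENT string in settings.py should already turn the 403 into a 200, since scrapy-splash forwards request headers (including User-Agent) to Splash for render requests.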

Regarding python - Scrapy+Splash returns 403 for every website, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/47895141/

Related articles:

python - Accept only numbers as input in Python

python - TurboGears 2: authentication, passwords in a separate table, feedback on update

python - Scraping an HTML table with Python lxml

python - When and how to use multiple spiders in one Scrapy project

python - Scrapy - FormRequest sends a GET request when the method is POST

python - How to make matplotlib figures look this professionally done?

javascript - Python web scraper - results per page limited by the page's JavaScript

python - Content view inspection in Python

python - How to create a URL request with square brackets in Scrapy?

python - Extracting constant values from Python via Boost.Python