I am using some proxies to crawl a few websites. Here is what I have in settings.py:
# Retry many times since proxies often fail
RETRY_TIMES = 10
# Retry on most error codes since proxies fail for different reasons
RETRY_HTTP_CODES = [500, 503, 504, 400, 403, 404, 408]
DOWNLOAD_DELAY = 3  # 3 seconds of delay between requests
DOWNLOADER_MIDDLEWARES = {
    'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware': None,
    'myspider.comm.rotate_useragent.RotateUserAgentMiddleware': 100,
    'scrapy.contrib.downloadermiddleware.retry.RetryMiddleware': 200,
    'myspider.comm.random_proxy.RandomProxyMiddleware': 300,
    'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 400,
}
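(Note: on newer Scrapy releases, 1.0 and later, the scrapy.contrib.* paths were removed; on such a version the same setup would presumably use the scrapy.downloadermiddlewares.* module paths instead, with the two custom middlewares unchanged:

DOWNLOADER_MIDDLEWARES = {
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
    'myspider.comm.rotate_useragent.RotateUserAgentMiddleware': 100,
    'scrapy.downloadermiddlewares.retry.RetryMiddleware': 200,
    'myspider.comm.random_proxy.RandomProxyMiddleware': 300,
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 400,
}
)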
I also have a proxy downloader middleware with the following methods:
def process_request(self, request, spider):
    # 'proxy' is set elsewhere in this middleware (only an excerpt is shown here)
    log('Requesting url %s with proxy %s...' % (request.url, proxy))

def process_response(self, request, response, spider):
    log('Response received from request url %s with proxy %s' % (request.url, proxy if proxy else 'nil'))
    # process_response must return the response (or a new request) to keep the chain going
    return response

def process_exception(self, request, exception, spider):
    log_msg('Failed to request url %s with proxy %s with exception %s' % (request.url, proxy if proxy else 'nil', str(exception)))
    # retry again
    return request
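For reference, a minimal sketch of what a random-proxy middleware like this typically does before the logging shown above; the PROXY_LIST values and class body are placeholders, not the actual myspider.comm.random_proxy code:

import random

PROXY_LIST = ['http://10.0.0.1:8080', 'http://10.0.0.2:8080']  # placeholder proxies

class RandomProxyMiddleware(object):
    def process_request(self, request, spider):
        # 'proxy' in request.meta is the key that Scrapy's HttpProxyMiddleware reads
        request.meta['proxy'] = random.choice(PROXY_LIST)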
Because the proxies are not always stable, process_exception often reports a lot of failed requests. The problem is that those failed requests are never retried.
As shown above, I have set the RETRY_TIMES and RETRY_HTTP_CODES settings, and I also return the request for retrying from the proxy middleware's process_exception method.
Why does Scrapy never retry the failed requests, and how can I make sure every request is attempted at least RETRY_TIMES times as configured in settings.py?
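One detail that may explain the symptom: a request returned from process_exception short-circuits the remaining process_exception handlers and is rescheduled directly, so with the ordering above RetryMiddleware never sees the failure and its RETRY_TIMES counter never applies to it. If you do want to re-queue from your own middleware, a hedged sketch of capping the retries yourself could look like this (the 'proxy_retry_times' meta key and the limit of 10 are made up for illustration):

def process_exception(self, request, exception, spider):
    retries = request.meta.get('proxy_retry_times', 0) + 1
    if retries > 10:
        return None  # give up and let the failure propagate
    retryreq = request.copy()
    retryreq.meta['proxy_retry_times'] = retries
    retryreq.dont_filter = True  # keep the scheduler's dupefilter from dropping the retried request
    return retryreq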
Best answer
Thanks to @nyov on the Scrapy IRC channel for the help.
'scrapy.contrib.downloadermiddleware.retry.RetryMiddleware': 200,
'myspider.comm.random_proxy.RandomProxyMiddleware': 300,
Here the Retry middleware runs first, so it retries the request before it ever reaches the Proxy middleware. In my case, Scrapy needs the proxies to crawl the website at all, otherwise it just times out endlessly.
So I reversed the priority of these two downloader middlewares:
'scrapy.contrib.downloadermiddleware.retry.RetryMiddleware': 300,
'myspider.comm.random_proxy.RandomProxyMiddleware': 200,
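With the swap applied, the relevant part of DOWNLOADER_MIDDLEWARES in settings.py would read as follows (the other entries are unchanged):

DOWNLOADER_MIDDLEWARES = {
    'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware': None,
    'myspider.comm.rotate_useragent.RotateUserAgentMiddleware': 100,
    'myspider.comm.random_proxy.RandomProxyMiddleware': 200,
    'scrapy.contrib.downloadermiddleware.retry.RetryMiddleware': 300,
    'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 400,
}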
A similar question about Python Scrapy not retrying timed-out connections can be found on Stack Overflow: https://stackoverflow.com/questions/20533614/