I am trying to deploy a scrapy project with scrapyd, but it gives me an error......
sudo scrapy deploy default -p eScraper
Building egg of eScraper-1371463750
'build/scripts-2.7' does not exist -- can't clean it
zip_safe flag not set; analyzing archive contents...
eScraperInterface.settings: module references __file__
eScraper.settings: module references __file__
Deploying eScraper-1371463750 to http://localhost:6800/addversion.json
Server response (200):
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/scrapyd/webservice.py", line 18, in render
    return JsonResource.render(self, txrequest)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/utils/txweb.py", line 10, in render
    r = resource.Resource.render(self, txrequest)
  File "/usr/local/lib/python2.7/dist-packages/twisted/web/resource.py", line 250, in render
    return m(request)
  File "/usr/local/lib/python2.7/dist-packages/scrapyd/webservice.py", line 66, in render_POST
    spiders = get_spider_list(project)
  File "/usr/local/lib/python2.7/dist-packages/scrapyd/utils.py", line 65, in get_spider_list
    raise RuntimeError(msg.splitlines()[-1])
RuntimeError: OSError: [Errno 20] Not a directory: '/tmp/eScraper-1371463750-Lm8HLh.egg/images'
Earlier I was able to deploy this project without any problem, but now it fails...... However, if I run the spider directly with scrapy crawl spiderName, it works fine...... Can someone help me......
Best answer
Try the following two things:
1. You may have deployed too many versions; try deleting some of the old ones.
2. Before deploying, delete the build folder and the setup file.
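A minimal sketch of step 2, run in a scratch directory so it is safe to execute as-is. In a real project you would run the rm lines from the directory containing scrapy.cfg (the artifact names below match the ones setuptools typically leaves behind; adjust them to your project) and then re-run `scrapy deploy`:

```shell
# Simulate a project root with stale packaging artifacts from an earlier deploy.
proj=$(mktemp -d)
mkdir -p "$proj/build/scripts-2.7" "$proj/eScraper.egg-info"
touch "$proj/setup.py"

# The cleanup the answer recommends: remove stale setuptools output.
rm -rf "$proj/build" "$proj"/*.egg-info
rm -f "$proj/setup.py"   # the deploy command regenerates setup.py

ls "$proj"   # prints nothing: the directory is clean again

# For step 1 (deleting old deployed versions), scrapyd exposes a
# delversion.json endpoint; a hypothetical call would look like:
#   curl http://localhost:6800/delversion.json -d project=eScraper -d version=1371463750
```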
As for running spiders: if you schedule a spider under any arbitrary name, even one you have not deployed, scrapyd will still return an "OK" response along with a job ID.
Regarding "python - Scrapy deploy stopped working", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/17145126/