I have two spiders in the same project. One of them depends on the other running first, and they use different pipelines. How can I make sure they run sequentially?
Best answer
From the documentation: https://doc.scrapy.org/en/1.2/topics/request-response.html
The same example, but running the spiders sequentially by chaining the deferreds:
    import scrapy
    from twisted.internet import reactor, defer
    from scrapy.crawler import CrawlerRunner
    from scrapy.utils.log import configure_logging

    class MySpider1(scrapy.Spider):
        # Your first spider definition
        ...

    class MySpider2(scrapy.Spider):
        # Your second spider definition
        ...

    configure_logging()
    runner = CrawlerRunner()

    @defer.inlineCallbacks
    def crawl():
        # Each yield waits for the previous crawl to finish before starting the next
        yield runner.crawl(MySpider1)
        yield runner.crawl(MySpider2)
        reactor.stop()

    crawl()
    reactor.run()  # the script will block here until the last crawl call is finished
Regarding "python - Scrapy: how to make two spiders run in sequence?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/27408880/