I have multiple spiders in one project. The problem is that right now I define LOG_FILE in SETTINGS as:

LOG_FILE = "scrapy_%s.log" % datetime.now()

What I want is scrapy_SPIDERNAME_DATETIME, but I have no way to include the spider name in the log file name.

I found

scrapy.log.start(logfile=None, loglevel=None, logstdout=None)

and called it in each spider's __init__ method, but it doesn't work.

Any help would be appreciated.
Best Answer
A spider's __init__() is not early enough to call log.start() yourself, because the log observer is already running by that point; therefore, you need to reset the logging state to trick Scrapy into (re)starting it.

In your spider class file:
from datetime import datetime

from scrapy import log
from scrapy.spider import BaseSpider


class ExampleSpider(BaseSpider):
    name = "example"
    allowed_domains = ["example.com"]
    start_urls = ["http://www.example.com/"]

    def __init__(self, name=None, **kwargs):
        LOG_FILE = "scrapy_%s_%s.log" % (self.name, datetime.now())
        # remove the current log
        # log.log.removeObserver(log.log.theLogPublisher.observers[0])
        # re-create the default Twisted observer which Scrapy checks
        log.log.defaultObserver = log.log.DefaultObserver()
        # start the default observer so it can be stopped
        log.log.defaultObserver.start()
        # trick Scrapy into thinking logging has not started
        log.started = False
        # start the new log file observer
        log.start(LOG_FILE)
        # continue with the normal spider init
        super(ExampleSpider, self).__init__(name, **kwargs)

    def parse(self, response):
        ...
The output file name might look like this:
scrapy_example_2012-08-25 12:34:48.823896.log
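Note that the timestamp produced by str(datetime.now()) contains spaces, a colon, and microseconds, which can be awkward in file names on some filesystems. As a small sketch (the log_filename helper is hypothetical, not part of Scrapy), the timestamp can be formatted with strftime before building the name:

```python
from datetime import datetime


def log_filename(spider_name, now=None):
    # Build a name like scrapy_SPIDERNAME_DATETIME, using strftime
    # so the timestamp contains no spaces, colons, or microseconds.
    now = now or datetime.now()
    return "scrapy_%s_%s.log" % (spider_name, now.strftime("%Y-%m-%d_%H-%M-%S"))


print(log_filename("example", datetime(2012, 8, 25, 12, 34, 48)))
# scrapy_example_2012-08-25_12-34-48.log
```

The resulting string can then be passed to log.start() in place of LOG_FILE above.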
Regarding this Python Scrapy logging issue, a similar question was found on Stack Overflow: https://stackoverflow.com/questions/12049770/