python - Scrapy crashes randomly with Celery in Django

Tags: python web-crawler scrapy celery twisted

I am running my Scrapy project from Django on an Ubuntu server. The problem is that Scrapy crashes at random, even when only a single spider is running.

Below is a snippet of the traceback. Not being an expert, I googled

_SIGCHLDWaker Scrappy

but could not make sense of the solutions I found for the following snippet:

--- <exception caught here> ---
  File "/home/b2b/virtualenvs/venv/local/lib/python2.7/site-packages/twisted/internet/posixbase.py", line 602, in _doReadOrWrite
    why = selectable.doWrite()
exceptions.AttributeError: '_SIGCHLDWaker' object has no attribute 'doWrite'

I am not familiar with Twisted; I would like to learn it, but it seems quite unapproachable to me.

Here is the full traceback:

[2015-10-10 14:17:13,652: INFO/Worker-4] Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, RandomUserAgentMiddleware, ProxyMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
[2015-10-10 14:17:13,655: INFO/Worker-4] Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
[2015-10-10 14:17:13,656: INFO/Worker-4] Enabled item pipelines: MadePipeline
[2015-10-10 14:17:13,656: INFO/Worker-4] Spider opened
[2015-10-10 14:17:13,657: INFO/Worker-4] Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
Unhandled Error
Traceback (most recent call last):
  File "/home/b2b/virtualenvs/venv/local/lib/python2.7/site-packages/twisted/python/log.py", line 101, in callWithLogger
    return callWithContext({"system": lp}, func, *args, **kw)
  File "/home/b2b/virtualenvs/venv/local/lib/python2.7/site-packages/twisted/python/log.py", line 84, in callWithContext
    return context.call({ILogContext: newCtx}, func, *args, **kw)
  File "/home/b2b/virtualenvs/venv/local/lib/python2.7/site-packages/twisted/python/context.py", line 118, in callWithContext
    return self.currentContext().callWithContext(ctx, func, *args, **kw)
  File "/home/b2b/virtualenvs/venv/local/lib/python2.7/site-packages/twisted/python/context.py", line 81, in callWithContext
    return func(*args,**kw)
--- <exception caught here> ---
  File "/home/b2b/virtualenvs/venv/local/lib/python2.7/site-packages/twisted/internet/posixbase.py", line 602, in _doReadOrWrite
    why = selectable.doWrite()
exceptions.AttributeError: '_SIGCHLDWaker' object has no attribute 'doWrite'

Here is how I run the task, following Scrapy's documentation:

from celery import shared_task
from celery.result import AsyncResult
from django.utils import timezone
from scrapy.crawler import CrawlerRunner
from scrapy.utils.project import get_project_settings
from twisted.internet import reactor

# Project, SearchTerm, Source, Bot, ComberSpider and MadeSpider come from
# the Django app; their imports were omitted in the original post.

@shared_task
def run_spider(**kwargs):
    task_id = run_spider.request.id
    status = AsyncResult(str(task_id)).status
    source = kwargs.get("source")

    pro, created = Project.objects.get_or_create(name="b2b")
    query, _ = SearchTerm.objects.get_or_create(term=kwargs['query'])
    src, _ = Source.objects.get_or_create(term=query, engine=kwargs['source'])

    b, _ = Bot.objects.get_or_create(project=pro, query=src, spiderid=str(task_id),
                                     status=status, start_time=timezone.now())

    runner = CrawlerRunner(get_project_settings())

    if source == "amazon":
        d = runner.crawl(ComberSpider, query=kwargs['query'], job_id=task_id)
    else:
        d = runner.crawl(MadeSpider, query=kwargs['query'], job_id=task_id)
    d.addBoth(lambda _: reactor.stop())
    reactor.run()

I also tried something along the lines of this tutorial, but it led to a different problem, and I could not get a traceback for it.

For completeness, here is a snippet of my spider:

# Scrapy 1.0-era import paths, matching the Python 2.7 setup in the question
from scrapy import signals
from scrapy.signalmanager import SignalManager
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from scrapy.xlib.pydispatch import dispatcher


class ComberSpider(CrawlSpider):

    name = "amazon"
    allowed_domains = ["amazon.com"]
    rules = (Rule(LinkExtractor(allow=r'corporations/.+/-*50/[0-9]+\.html', restrict_xpaths="//a[@class='next']"),
                  callback="parse_items", follow=True),
             )

    def __init__(self, *args, **kwargs):
        super(ComberSpider, self).__init__(*args, **kwargs)
        self.query = kwargs.get('query')
        self.job_id = kwargs.get('job_id')
        SignalManager(dispatcher.Any).connect(self.closed_handler, signal=signals.spider_closed)
        self.start_urls = (
            "http://www.amazon.com/corporations/%s/------------"
            "--------50/1.html" % self.query.strip().replace(" ", "_").lower(),
        )

Best Answer

This is a known Scrapy issue. See the issue report thread for details and possible workarounds.
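One workaround that is commonly suggested for this class of crash (a sketch on my part, not taken verbatim from the linked thread): run each crawl in a fresh child process, so the Twisted reactor is created and torn down exactly once per process. Celery's prefork pool reuses worker processes, and a reactor cannot be started a second time in the same process once it has stopped. `ComberSpider` and the project settings are assumed from the question.

```python
from multiprocessing import get_context


def run_in_child(target, *args):
    """Run target(*args) in a fresh process and wait for it to finish.

    A new process means a brand-new Twisted reactor, which sidesteps any
    state left behind by a previous crawl in a reused Celery worker.
    """
    ctx = get_context("fork")  # the question runs on Ubuntu, so fork is available
    p = ctx.Process(target=target, args=args)
    p.start()
    p.join()
    return p.exitcode


def crawl(query, job_id):
    # Imported inside the child so the Celery worker process itself never
    # touches the reactor. ComberSpider is the spider from the question.
    from scrapy.crawler import CrawlerProcess
    from scrapy.utils.project import get_project_settings
    process = CrawlerProcess(get_project_settings())
    process.crawl(ComberSpider, query=query, job_id=job_id)
    process.start()  # blocks until the crawl finishes, then stops its reactor
```

Inside the `run_spider` task you would then call `run_in_child(crawl, kwargs['query'], task_id)` instead of driving the reactor directly.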

Regarding "python - Scrapy crashes randomly with Celery in Django", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/33060257/
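A blunter mitigation that is often mentioned alongside this kind of reactor crash (an assumption on my part that it applies here, since the problem involves process reuse): have Celery recycle each worker process after a single task, so no process ever has to restart a reactor. The setting name below is the pre-4.0 spelling, matching the Python 2.7 / Celery 3.x era of the question.

```python
# Celery settings module: recycle every worker process after one task,
# so each crawl starts in a clean process with a fresh reactor.
CELERYD_MAX_TASKS_PER_CHILD = 1
```

The trade-off is that forking a new worker for every task adds per-task overhead, so this is best treated as a stopgap rather than a fix.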
