python - I can't get Celery to work properly (AWS Elastic Beanstalk)

Tags: python django celery django-celery amazon-elastic-beanstalk

I am moving a Django application to Python 3 / Django 1.10. Part of the process also involves a new deployment; we use AWS Elastic Beanstalk.

The Celery tasks worked fine before the migration, but now I can't get them to work at all.

Packages:

...
celery==3.1.23
Django==1.10.6
django-celery==3.2.1
...

Python:

Python 3.4.3

In the supervisor configuration I added programs to run the Celery worker and beat:

[program:celeryd-workers]
; Set full path to celery program if using virtualenv
command=/opt/python/run/venv/bin/python /opt/python/current/app/manage.py celery worker -A app --app=app.celery_app:app -l DEBUG -c 4
directory=/opt/python/current/app
user=nobody
numprocs=1
autostart=true
autorestart=true
startsecs=10
stopwaitsecs = 600
killasgroup=true    
stdout_logfile=/var/log/celery-worker.log
stderr_logfile=/var/log/celery-worker.log
environment=PYTHONPATH="/opt/python/current/app/:",PATH="/opt/python/run/venv/bin/:%(ENV_PATH)s",DJANGO_SETTINGS_MODULE="settings.qa"

[program:celeryd-beat]
; Set full path to celery program if using virtualenv
command=/opt/python/run/venv/bin/python /opt/python/current/app/manage.py celery beat -A app --app=app.celery_app:app --loglevel=DEBUG --workdir=/tmp --pidfile=/tmp/celerybeat.pid -s /tmp/celerybeat-schedule.db
directory=/opt/python/current/app
user=nobody
numprocs=1
autostart=true
autorestart=true
startsecs=10
stopwaitsecs = 600
killasgroup=true

stdout_logfile=/var/log/celery-beat.log
stderr_logfile=/var/log/celery-beat.log
environment=PYTHONPATH="/opt/python/current/app/:",PATH="/opt/python/run/venv/bin/:%(ENV_PATH)s",DJANGO_SETTINGS_MODULE="settings.qa"

My celery_app.py is very simple:

from __future__ import unicode_literals, absolute_import

from celery import Celery
from django.conf import settings

app = Celery()

app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)

Settings

On the EC2 instance, the Celery settings are:

BROKER_URL = 'the aws elastic cache redis url'
CELERY_RESULT_BACKEND = 'djcelery.backends.database:DatabaseBackend'

BROKER_TRANSPORT_OPTIONS = {
    'visibility_timeout': 600,
}

BROKER_POOL_LIMIT = 1

CELERY_ACCEPT_CONTENT = ['pickle', 'json', 'msgpack']

CELERY_DEFAULT_QUEUE = 'default'
CELERY_QUEUES = {
    'default': {
        'exchange': 'default',
        'exchange_type': 'topic',
        'binding_key': 'tasks.#'
    }
}

CELERY_ALWAYS_EAGER = True

From the instance I verified the settings are loaded (from django.conf import settings), and I checked that the instance can connect to Redis using redis-cli.

What isn't working?

Basically, if I run a task, even the simplest add(x, y) task from the Celery documentation, nothing shows up in ./manage.py celery events. I left the events screen open for hours and tried running tasks from the application, but nothing happens; it seems stuck.

 No task selected                                                                                                                                                                                  │
│  Workers online: celery@ip-xx-xx-xx-xx                                                                                                                                                            │
│  Info: events: 2187 tasks:0 workers:1/1
│  Keys: j:down k:up i:info t:traceback r:result c:revoke ^c: quit

The strange thing is that if I run the task from my application, following the documentation:

In [1]: from app.core.tasks import add

In [2]: result = add.delay(2,2)

In [3]: result.get()
Out[3]: 4

the result comes back, but I don't see any task in events, and if I check celery inspect stats:

...
       "pool": {
            "max-concurrency": 4,
            "max-tasks-per-child": "N/A",
            "processes": [
                11363,
                11364,
                11365,
                11366
            ],
            "put-guarded-by-semaphore": false,
            "timeouts": [
                0,
                0
            ],
            "writes": {
                "all": "",
                "avg": "0.00%",
                "inqueues": {
                    "active": 0,
                    "total": 4
                },
                "raw": "",
                "total": 0
            }
        },
....

nothing seems to work.

The processes are running, of course:

[ec2-user@xxx]$ ps aux | grep "celery"
nobody   11350  0.0  3.7 251484 77408 ?        S    08:11   0:01 /opt/python/run/venv/bin/python /opt/python/current/app/manage.py celery beat -A app --app=app.celery_app:app --loglevel=DEBUG --workdir=/tmp --pidfile=/tmp/celerybeat.pid -s /tmp/celerybeat-schedule.db
nobody   11351  0.1  3.8 247804 79848 ?        S    08:11   0:07 /opt/python/run/venv/bin/python /opt/python/current/app/manage.py celery worker -A app --app=app.celery_app:app -l DEBUG -c 4
nobody   11363  0.0  3.4 243876 70956 ?        S    08:11   0:00 /opt/python/run/venv/bin/python /opt/python/current/app/manage.py celery worker -A app --app=app.celery_app:app -l DEBUG -c 4
nobody   11364  0.0  3.4 243876 71024 ?        S    08:11   0:00 /opt/python/run/venv/bin/python /opt/python/current/app/manage.py celery worker -A app --app=app.celery_app:app -l DEBUG -c 4
nobody   11365  0.0  3.4 243876 71024 ?        S    08:11   0:00 /opt/python/run/venv/bin/python /opt/python/current/app/manage.py celery worker -A app --app=app.celery_app:app -l DEBUG -c 4
nobody   11366  0.0  3.4 243876 71024 ?        S    08:11   0:00 /opt/python/run/venv/bin/python /opt/python/current/app/manage.py celery worker -A app --app=app.celery_app:app -l DEBUG -c 4

And the logs:

[ec2-user@xxxx log]$ tail  /var/log/celery-worker.log
[2017-03-14 20:34:45,126: DEBUG/MainProcess] pidbox received method enable_events() [reply_to:None ticket:None]
[2017-03-14 20:34:50,124: DEBUG/MainProcess] pidbox received method enable_events() [reply_to:None ticket:None]
[2017-03-14 20:34:55,125: DEBUG/MainProcess] pidbox received method enable_events() [reply_to:None ticket:None]
[2017-03-14 20:35:00,124: DEBUG/MainProcess] pidbox received method enable_events() [reply_to:None ticket:None]
[2017-03-14 20:35:05,125: DEBUG/MainProcess] pidbox received method enable_events() [reply_to:None ticket:None]
[2017-03-14 20:35:10,124: DEBUG/MainProcess] pidbox received method enable_events() [reply_to:None ticket:None]
[2017-03-14 20:35:15,125: DEBUG/MainProcess] pidbox received method enable_events() [reply_to:None ticket:None]
[2017-03-14 20:35:20,124: DEBUG/MainProcess] pidbox received method enable_events() [reply_to:None ticket:None]
[2017-03-14 20:35:25,125: DEBUG/MainProcess] pidbox received method enable_events() [reply_to:None ticket:None]
[2017-03-14 20:35:30,124: DEBUG/MainProcess] pidbox received method enable_events() [reply_to:None ticket:None]

Celery beat:

[ec2-user@xxxx log]$ tail  /var/log/celery-beat.log
>>>> Testing: False
celery beat v3.1.23 (Cipater) is starting.
__    -    ... __   -        _
Configuration ->
    . broker -> redis://qa-redis.xxxx:6379//
    . loader -> celery.loaders.app.AppLoader
    . scheduler -> celery.beat.PersistentScheduler
    . db -> /tmp/celerybeat-schedule.db
    . logfile -> [stderr]@%DEBUG
    . maxinterval -> now (0s)

Do you know what I'm doing wrong?

Best Answer

Your settings file has CELERY_ALWAYS_EAGER = True, which basically means Celery will skip the queue and run the task locally, synchronously. That is why you get a result without seeing any events. See http://docs.celeryproject.org/en/latest/userguide/configuration.html#std:setting-task_always_eager
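As a rough illustration (plain Python, no Celery required, all names hypothetical), eager mode replaces the broker round-trip with a direct in-process call:

```python
# Conceptual sketch of CELERY_ALWAYS_EAGER -- not Celery's actual
# implementation. In eager mode, .delay() never serializes the task or
# talks to the broker; it just calls the function and wraps the result.

class FakeEagerResult:
    """Stands in for the AsyncResult-like object .delay() returns."""
    def __init__(self, value):
        self._value = value

    def get(self, timeout=None):
        # The value was computed synchronously when the task was
        # "sent", so there is nothing to wait for -- and no message
        # or event ever reached the broker.
        return self._value


def eager_delay(func, *args, **kwargs):
    # Run locally and wrap the return value, mimicking eager .delay().
    return FakeEagerResult(func(*args, **kwargs))


def add(x, y):
    return x + y


result = eager_delay(add, 2, 2)
print(result.get())  # prints 4, yet the broker never saw a message
```

This matches the behaviour above: result.get() returns 4 in the shell, while the worker, events, and inspect stats all show zero tasks.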

I would try removing that setting and working from there.
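A minimal sketch of the change, assuming settings.qa is where the flag lives (the file name is taken from the supervisor config above):

```python
# settings/qa.py (hypothetical location): let tasks go to the Redis
# broker in deployed environments. False is Celery's default, so the
# line can also simply be deleted.
CELERY_ALWAYS_EAGER = False

# Eager mode is still handy in unit tests, where it is usually paired
# with exception propagation:
# CELERY_ALWAYS_EAGER = True
# CELERY_EAGER_PROPAGATES_EXCEPTIONS = True
```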

Regarding "python - I can't get Celery to work properly (AWS Elastic Beanstalk)", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/42782365/
