apache-spark - Airflow fails to add an EMR step with EmrAddStepsOperator when a HadoopJarStep arg ends with .json

Tags: apache-spark airflow amazon-emr

Airflow seems to have a bug when a templated operator argument contains any string ending in .json. Does anyone know how to work around it? Below is my DAG; note the "--files", "s3://dummy/spark/application.json" pair in the STEPS variable.

from datetime import timedelta
from airflow import DAG
from airflow.providers.amazon.aws.operators.emr_create_job_flow import EmrCreateJobFlowOperator
from airflow.providers.amazon.aws.operators.emr_terminate_job_flow import EmrTerminateJobFlowOperator
from airflow.providers.amazon.aws.operators.emr_add_steps import EmrAddStepsOperator
from airflow.providers.amazon.aws.sensors.emr_job_flow import EmrJobFlowSensor
from airflow.utils.dates import days_ago

DEFAULT_ARGS = {
    'owner': 'Commscope',
    'depends_on_past': False,
    'email': ['[email protected]'],
    'email_on_failure': False,
    'email_on_retry': False
}


JOB_FLOW_OVERRIDES = {
    'Name': 'PiCalc',
    'ReleaseLabel': 'emr-5.29.0',
    'Instances': {
        'InstanceGroups': [
            {
                'Name': 'Master node',
                'Market': 'SPOT',
                'InstanceRole': 'MASTER',
                'InstanceType': 'm1.medium',
                'InstanceCount': 1,
            }
        ],
        'KeepJobFlowAliveWhenNoSteps': True,
        'TerminationProtected': False,
    },
    'JobFlowRole': 'EMR_EC2_DefaultRole',
    'ServiceRole': 'EMR_DefaultRole',
}

STEPS = [{
    "Name": "Process data",
    "ActionOnFailure": "CONTINUE",
    "HadoopJarStep": {
        "Jar": "command-runner.jar",
        "Args": [
            "--class", "com.dummy.Application",
            "--files", "s3://dummy/spark/application.json",
            "--driver-java-options",
            "-Dlog4j.configuration=log4j.properties",
            "--driver-java-options",
            "-Dconfig.resource=application.json",
            "--driver-java-options"
            "s3://dummy/spark/app-jar-with-dependencies.jar",
            "application.json"
        ]
    }
}]

with DAG(
        dag_id='data_processing',
        default_args=DEFAULT_ARGS,
        dagrun_timeout=timedelta(hours=2),
        start_date=days_ago(2),
        schedule_interval='0 3 * * *',
        tags=['inquire', 'bronze'],
) as dag:
    job_flow_creator = EmrCreateJobFlowOperator(
        task_id='launch_emr_cluster',
        job_flow_overrides=JOB_FLOW_OVERRIDES,
        aws_conn_id='aws_default',
        emr_conn_id='emr_default'
    )

    job_flow_sensor = EmrJobFlowSensor(
        task_id='check_cluster',
        job_flow_id="{{ task_instance.xcom_pull(task_ids='launch_emr_cluster', key='return_value') }}",
        target_states=['RUNNING', 'WAITING'],
        aws_conn_id='aws_default'
    )

    proc_step = EmrAddStepsOperator(
        task_id='process_data',
        job_flow_id="{{ task_instance.xcom_pull(task_ids='launch_emr_cluster', key='return_value') }}",
        aws_conn_id='aws_default',
        steps=STEPS,
    )

    job_flow_terminator = EmrTerminateJobFlowOperator(
        task_id='terminate_emr_cluster',
        job_flow_id="{{ task_instance.xcom_pull(task_ids='launch_emr_cluster', key='return_value') }}",
        aws_conn_id='aws_default',
        trigger_rule="all_done"
    )

    job_flow_creator >> job_flow_sensor >> proc_step >> job_flow_terminator

The cluster launches successfully, but Airflow fails with the following error:

[2020-08-21 15:06:42,307] {taskinstance.py:1145} ERROR - s3://dummy/spark/application.json
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 964, in _run_raw_task
    self.render_templates(context=context)
...
...
  File "/usr/local/lib/python3.7/site-packages/jinja2/loaders.py", line 187, in get_source
    raise TemplateNotFound(template)
jinja2.exceptions.TemplateNotFound: s3://dummy/spark/application.json

Best Answer

Airflow tries to render every value passed through an operator's template_fields. In your case, since you are using EmrAddStepsOperator, its template_fields are ['job_flow_id', 'job_flow_name', 'cluster_states', 'steps'], so the whole steps structure gets rendered recursively.

Source: https://github.com/apache/airflow/blob/47c6657ce012f6db147fdcce3ca5e77f46a9e491/airflow/providers/amazon/aws/operators/emr_add_steps.py#L48

This behavior was added in https://github.com/apache/airflow/pull/8572
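Because EmrAddStepsOperator also declares template_ext = ('.json',), any string anywhere inside steps that ends in .json is treated as the path of a Jinja template file to load, not as a literal value. Here is a minimal sketch of that resolution logic; it is illustrative only, not the actual Airflow source, and render_value and the DAG-folder path are names I made up for the example:

from jinja2 import Environment, FileSystemLoader

def render_value(env: Environment, value, context, template_ext=(".json",)):
    # Sketch of how Airflow resolves templated fields (illustrative only).
    if isinstance(value, str):
        if value.endswith(tuple(template_ext)):
            # The string is looked up as a template FILE via the Jinja loader;
            # "s3://dummy/spark/application.json" does not exist on disk, so
            # this raises jinja2.exceptions.TemplateNotFound.
            return env.get_template(value).render(**context)
        # Otherwise the string itself is rendered as an inline template.
        return env.from_string(value).render(**context)
    if isinstance(value, (list, tuple)):
        return [render_value(env, v, context, template_ext) for v in value]
    if isinstance(value, dict):
        return {k: render_value(env, v, context, template_ext) for k, v in value.items()}
    return value

env = Environment(loader=FileSystemLoader("/path/to/dags"))  # hypothetical DAG folder
render_value(env, STEPS, context={})  # raises TemplateNotFound: s3://dummy/spark/application.json

That is why the task fails inside render_templates, before any step is ever submitted to EMR.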

You can work around this in two ways:

  1. Sidestep the lookup by adding a trailing space after the .json value, e.g. "s3://dummy/spark/application.json ". This works because Airflow checks whether each string element of the iterable ends with .json (a full example follows the operator snippet below).
  2. Subclass EmrAddStepsOperator and override its template_ext field. Example:

from airflow.providers.amazon.aws.operators.emr_add_steps import EmrAddStepsOperator

class FixedEmrAddStepsOperator(EmrAddStepsOperator):
    template_ext = ()

Then you can use this operator:

    proc_step = FixedEmrAddStepsOperator(
        task_id='process_data',
        job_flow_id="{{ task_instance.xcom_pull(task_ids='launch_emr_cluster', key='return_value') }}",
        aws_conn_id='aws_default',
        steps=STEPS,
    )
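And here is the first workaround applied to the Args from the question. This is a sketch only: the trailing spaces are deliberate, and you should verify that your application tolerates them when it reads these values.

# Values ending in ".json" get a trailing space so they no longer match
# template_ext; Airflow then renders them as inline strings.
ARGS = [
    "--class", "com.dummy.Application",
    "--files", "s3://dummy/spark/application.json ",                   # trailing space
    "--driver-java-options", "-Dlog4j.configuration=log4j.properties",
    "--driver-java-options", "-Dconfig.resource=application.json ",    # trailing space
    "s3://dummy/spark/app-jar-with-dependencies.jar",
    "application.json ",                                               # trailing space
]

Of the two, the subclass is usually the cleaner fix: it disables the file lookup for every .json value at once instead of patching each argument individually.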

This Q&A is based on a similar question on Stack Overflow: https://stackoverflow.com/questions/63525443/
