amazon-web-services - Error connecting PySpark to AWS Redshift

Tags: amazon-web-services apache-spark pyspark connection amazon-redshift

I have been trying to connect Spark 2.2.1 on my EMR 5.11.0 cluster to our Redshift store.

The approach I followed was -

  • Use the built-in Redshift JDBC driver
    pyspark --jars /usr/share/aws/redshift/jdbc/RedshiftJDBC41.jar
    
    from pyspark.sql import SQLContext
    sc
    sql_context = SQLContext(sc)
    
    redshift_url = "jdbc:redshift://HOST:PORT/DATABASE?user=USER&password=PASSWORD"
    
    redshift_query  = "select * from table;"
    
    redshift_query_tempdir_storage = "s3://personal_warehouse/wip_dumps/"        
    
    # Read data from a query
    df_users = sql_context.read \
        .format("com.databricks.spark.redshift") \
        .option("url", redshift_url) \
        .option("query", redshift_query) \
        .option("tempdir", redshift_query_tempdir_storage) \
        .option("forward_spark_s3_credentials", "true") \
        .load()
    

    This gave me the following error -

    Traceback (most recent call last):
      File "<stdin>", line 7, in <module>
      File "/usr/lib/spark/python/pyspark/sql/readwriter.py", line 165, in load
        return self._df(self._jreader.load())
      File "/usr/lib/spark/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py", line 1133, in __call__
      File "/usr/lib/spark/python/pyspark/sql/utils.py", line 63, in deco
        return f(*a, **kw)
      File "/usr/lib/spark/python/lib/py4j-0.10.4-src.zip/py4j/protocol.py", line 319, in get_return_value
    py4j.protocol.Py4JJavaError: An error occurred while calling o63.load.
    : java.lang.ClassNotFoundException: Failed to find data source: com.databricks.spark.redshift. Please find packages at http://spark.apache.org/third-party-projects.html
        at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:546)
        at org.apache.spark.sql.execution.datasources.DataSource.providingClass$lzycompute(DataSource.scala:87)
        at org.apache.spark.sql.execution.datasources.DataSource.providingClass(DataSource.scala:87)
        at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:302)
        at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:178)
        at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:146)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
        at py4j.Gateway.invoke(Gateway.java:280)
        at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
        at py4j.commands.CallCommand.execute(CallCommand.java:79)
        at py4j.GatewayConnection.run(GatewayConnection.java:214)
        at java.lang.Thread.run(Thread.java:748)
    Caused by: java.lang.ClassNotFoundException: com.databricks.spark.redshift.DefaultSource
        at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$22$$anonfun$apply$14.apply(DataSource.scala:530)
        at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$22$$anonfun$apply$14.apply(DataSource.scala:530)
        at scala.util.Try$.apply(Try.scala:192)
        at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$22.apply(DataSource.scala:530)
        at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$22.apply(DataSource.scala:530)
        at scala.util.Try.orElse(Try.scala:84)
        at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:530)
        ... 16 more



    Can someone help point out what I have missed / where I have made a silly mistake?

    Thanks!

    Best Answer

    I had to include 4 jar files in the EMR spark-submit options to get this working.

    List of jar files (the Redshift JDBC driver, the spark-redshift data source that the error above fails to find, plus its minimal-json and spark-avro dependencies):

    1. RedshiftJDBC41-1.2.12.1017.jar

    2. spark-redshift_2.10-2.0.0.jar

    3. minimal-json-0.9.4.jar

    4. spark-avro_2.11-3.0.0.jar

    You can download the jar files, store them in an S3 bucket, and point to them in the spark-submit options, for example:

    --jars s3://<pathToJarFile>/RedshiftJDBC41-1.2.10.1009.jar,s3://<pathToJarFile>/minimal-json-0.9.4.jar,s3://<pathToJarFile>/spark-avro_2.11-3.0.0.jar,s3://<pathToJarFile>/spark-redshift_2.10-2.0.0.jar
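
    If you are working from the interactive shell instead, as in the question, the same comma-separated list can be passed to pyspark on the EMR master node. A sketch, assuming the jars have been copied to /home/hadoop (a hypothetical local path):

    pyspark --jars /home/hadoop/RedshiftJDBC41-1.2.12.1017.jar,/home/hadoop/spark-redshift_2.10-2.0.0.jar,/home/hadoop/minimal-json-0.9.4.jar,/home/hadoop/spark-avro_2.11-3.0.0.jar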
    

    Then, finally, query your Redshift from your Spark code as in this example: spark-redshift-example.
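
    For reference, once those jars are on the classpath, a minimal read/write sketch against the com.databricks.spark.redshift data source looks like the following (HOST/PORT/DATABASE/USER/PASSWORD, the table names, and the S3 temp prefix are placeholders modeled on the question, not prescriptions):

    from pyspark.sql import SQLContext

    sql_context = SQLContext(sc)  # sc is provided by the pyspark shell / spark-submit driver

    redshift_url = "jdbc:redshift://HOST:PORT/DATABASE?user=USER&password=PASSWORD"
    tempdir = "s3://personal_warehouse/wip_dumps/"

    # Read: the connector UNLOADs the query result to tempdir and loads it from there
    df_users = sql_context.read \
        .format("com.databricks.spark.redshift") \
        .option("url", redshift_url) \
        .option("query", "select * from users") \
        .option("tempdir", tempdir) \
        .option("forward_spark_s3_credentials", "true") \
        .load()

    # Write: rows are staged in tempdir as Avro (hence the spark-avro jar) and COPYed into Redshift
    df_users.write \
        .format("com.databricks.spark.redshift") \
        .option("url", redshift_url) \
        .option("dbtable", "users_copy") \
        .option("tempdir", tempdir) \
        .option("forward_spark_s3_credentials", "true") \
        .mode("append") \
        .save()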

    Regarding "amazon-web-services - Error connecting PySpark to AWS Redshift", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/48257351/
