apache-spark - EMR PySpark: LZO Codec not found

Tags: apache-spark hdfs pyspark emr

I am running PySpark on EMR (Hadoop 2.4.0) with Spark (1.4.0) in YARN mode, through an IPython notebook launched with:

IPYTHON_OPTS="notebook --no-browser" nohup /usr/lib/spark/bin/pyspark --master yarn-client --num-executors 2 --executor-memory 512m --executor-cores 1 > /mnt/var/log/python_notebook.log 2> /mnt/var/log/python_notebook_err.log &

I placed a simple CSV file in HDFS and tried to read it with:
sc.textFile('/tmp/text.csv').first()

However, this gives me Caused by: java.lang.ClassNotFoundException: Class com.hadoop.compression.lzo.LzoCodec not found.

In context:
Py4JJavaError                             Traceback (most recent call last)
<ipython-input-54-e39168c6841b> in <module>()
----> 1 sc.textFile('/tmp/text.csv').first()

/usr/lib/spark/python/pyspark/rdd.py in first(self)
   1293         ValueError: RDD is empty
   1294         """
-> 1295         rs = self.take(1)
   1296         if rs:
   1297             return rs[0]

/usr/lib/spark/python/pyspark/rdd.py in take(self, num)
   1245         """
   1246         items = []
-> 1247         totalParts = self.getNumPartitions()
   1248         partsScanned = 0
   1249 

/usr/lib/spark/python/pyspark/rdd.py in getNumPartitions(self)
    353         2
    354         """
--> 355         return self._jrdd.partitions().size()
    356 
    357     def filter(self, f):

/usr/lib/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py in __call__(self, *args)
    536         answer = self.gateway_client.send_command(command)
    537         return_value = get_return_value(answer, self.gateway_client,
--> 538                 self.target_id, self.name)
    539 
    540         for temp_arg in temp_args:

/usr/lib/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    298                 raise Py4JJavaError(
    299                     'An error occurred while calling {0}{1}{2}.\n'.
--> 300                     format(target_id, '.', name), value)
    301             else:
    302                 raise Py4JError(

Py4JJavaError: An error occurred while calling o159.partitions.
: java.lang.RuntimeException: Error in configuring object
    at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109)
    at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:75)
    at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
    at org.apache.spark.rdd.HadoopRDD.getInputFormat(HadoopRDD.scala:190)
    at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:203)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
    at org.apache.spark.api.java.JavaRDDLike$class.partitions(JavaRDDLike.scala:65)
    at org.apache.spark.api.java.AbstractJavaRDDLike.partitions(JavaRDDLike.scala:47)
    at sun.reflect.GeneratedMethodAccessor30.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379)
    at py4j.Gateway.invoke(Gateway.java:259)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:207)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.GeneratedMethodAccessor31.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:106)
    ... 25 more
Caused by: java.lang.IllegalArgumentException: Compression codec com.hadoop.compression.lzo.LzoCodec not found.
    at org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:135)
    at org.apache.hadoop.io.compress.CompressionCodecFactory.<init>(CompressionCodecFactory.java:175)
    at org.apache.hadoop.mapred.TextInputFormat.configure(TextInputFormat.java:45)
    ... 29 more
Caused by: java.lang.ClassNotFoundException: Class com.hadoop.compression.lzo.LzoCodec not found
    at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1980)
    at org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:128)
    ... 31 more
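Why a plain CSV read trips over LZO at all: TextInputFormat.configure (visible in the trace above) builds a CompressionCodecFactory, which eagerly loads every class named in io.compression.codecs from the cluster's core-site.xml, and on EMR that list includes com.hadoop.compression.lzo.LzoCodec. A hedged stop-gap, assuming no LZO-compressed input actually needs to be read, is to override that list on the live context; the codec class names below are the stock Hadoop ones, not copied from this cluster's configuration:

# Drop LzoCodec from the list CompressionCodecFactory will try to load.
sc._jsc.hadoopConfiguration().set(
    "io.compression.codecs",
    "org.apache.hadoop.io.compress.DefaultCodec,"
    "org.apache.hadoop.io.compress.GzipCodec,"
    "org.apache.hadoop.io.compress.BZip2Codec")

sc.textFile('/tmp/text.csv').first()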

I have tried following the instructions here and did:
os.environ['SPARK_LIBRARY_PATH'] = "/usr/lib/hadoop-lzo/lib/native/"
os.environ['SPARK_CLASSPATH'] = "/usr/lib/hadoop-lzo/lib/"

However, this does not seem to help.
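A likely reason: both variables are read when the Spark JVM starts, so setting them from inside an already-running notebook comes too late. A minimal sketch of the pre-launch equivalent, using the spark.driver/spark.executor extraClassPath and extraLibraryPath properties that superseded SPARK_CLASSPATH; the hadoop-lzo paths are the assumed ones from the attempt above:

from pyspark import SparkConf, SparkContext

# Settings applied before the JVM exists; paths assumed, not verified.
conf = (SparkConf()
        .setMaster("yarn-client")
        .set("spark.driver.extraClassPath", "/usr/lib/hadoop-lzo/lib/*")
        .set("spark.executor.extraClassPath", "/usr/lib/hadoop-lzo/lib/*")
        .set("spark.driver.extraLibraryPath", "/usr/lib/hadoop-lzo/lib/native")
        .set("spark.executor.extraLibraryPath", "/usr/lib/hadoop-lzo/lib/native"))
sc = SparkContext(conf=conf)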

Best Answer

I know this question is old, but I've been dealing with this for the past week, so I thought I'd post our solution in case anyone else runs into it. Our setup is an EC2 instance running as a driver outside of EMR, which can then create EMR clusters and communicate with the master node. The cluster runs Spark 2.2.0 and the EMR release is 5.9.0.

The solution was to clone the Twitter Hadoop-Lzo GitHub repo onto the Spark driver, then add the path to hadoop-lzo.jar to the spark-submit arguments: SUBMIT_ARGS='--jars /opt/hadoop-lzo/target/hadoop-lzo-0.4.21-SNAPSHOT.jar'. Just replace the path to the .jar with the path you cloned the repo into.
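For a PySpark driver that builds its own context (as in the EC2-driver setup above), the same jar can be handed over through the PYSPARK_SUBMIT_ARGS environment variable before the SparkContext is created. A minimal sketch, reusing the jar path above and the question's yarn-client master:

import os
from pyspark import SparkConf, SparkContext

# Must be set before SparkContext launches the JVM; the trailing
# "pyspark-shell" token is required when setting PYSPARK_SUBMIT_ARGS by hand.
os.environ["PYSPARK_SUBMIT_ARGS"] = (
    "--jars /opt/hadoop-lzo/target/hadoop-lzo-0.4.21-SNAPSHOT.jar pyspark-shell"
)

sc = SparkContext(conf=SparkConf().setMaster("yarn-client"))
print(sc.textFile('/tmp/text.csv').first())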

Regarding apache-spark - EMR PySpark: LZO Codec not found, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/32212906/
