hadoop - Decompressing from S3

Tags: hadoop apache-spark pyspark

I am reading data with Spark (pyspark) and have run into trouble because some of my data is now in .gz format.

%pyspark
data = sc.textFile("s3://mybucket.file.gz")
data.first()


Traceback (most recent call last):
  File "/tmp/zeppelin_pyspark-5205743886772607083.py", line 267, in <module>
    raise Exception(traceback.format_exc())
Exception: Traceback (most recent call last):
  File "/tmp/zeppelin_pyspark-5205743886772607083.py", line 260, in <module>
    exec(code)
  File "<stdin>", line 1, in <module>
  File "/usr/lib/spark/python/pyspark/rdd.py", line 1041, in count
    return self.mapPartitions(lambda i: [sum(1 for _ in i)]).sum()
  File "/usr/lib/spark/python/pyspark/rdd.py", line 1032, in sum
    return self.mapPartitions(lambda x: [sum(x)]).fold(0, operator.add)
  File "/usr/lib/spark/python/pyspark/rdd.py", line 906, in fold
    vals = self.mapPartitions(func).collect()
  File "/usr/lib/spark/python/pyspark/rdd.py", line 809, in collect
    port = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())
  File "/usr/lib/spark/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py", line 1133, in __call__
    answer, self.gateway_client, self.target_id, self.name)
  File "/usr/lib/spark/python/pyspark/sql/utils.py", line 63, in deco
    return f(*a, **kw)
  File "/usr/lib/spark/python/lib/py4j-0.10.4-src.zip/py4j/protocol.py", line 319, in get_return_value
    format(target_id, ".", name), value)
Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 31.0 failed 4 times, most recent failure: Lost task 0.3 in stage 31.0 (TID 270, ip-172-16-238-231.us-west-1.compute.internal, executor 17): java.io.IOException: incorrect header check
    at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.inflateBytesDirect(Native Method)

Any ideas on how to read this in and decompress it?

Best Answer

Strangely, all I had to do was remove the "gz" from the end of the path, and it worked. The likely reason: Hadoop (and therefore sc.textFile) selects a decompression codec from the file-name suffix, so an object named *.gz that is not actually gzip-compressed fails with "incorrect header check", while a path without the suffix is read as plain bytes with no codec applied.
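For anyone hitting the same error, here is a minimal sketch of both cases; the bucket and object names below are hypothetical, not the asker's actual paths:

%pyspark
# Hadoop picks the decompression codec from the file-name suffix.

# Case 1: the object really is gzip-compressed -- keep the .gz suffix
# so the GzipCodec is applied automatically while reading.
gz_data = sc.textFile("s3://mybucket/file.gz")

# Case 2: the object is plain text despite its name -- read a copy
# stored under a key without the suffix so no codec is applied.
plain_data = sc.textFile("s3://mybucket/file")
plain_data.first()

If the object cannot be renamed in place, copying it to a key without the suffix (for example with the AWS CLI's aws s3 cp) achieves the same effect.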

Regarding hadoop - Decompressing from S3, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/42302477/
