python - AttributeError: 'DataFrame' object has no attribute '_data'

Tags: python apache-spark pyspark databricks azure-databricks

Azure Databricks throws an execution error when parallelizing over a Pandas DataFrame. The code is able to create the RDD, but breaks when .collect() is executed. Setup:

import pandas as pd
# initialize list of lists 
data = [['tom', 10], ['nick', 15], ['juli', 14]] 
  
# Create the pandas DataFrame 
my_df = pd.DataFrame(data, columns = ['Name', 'Age']) 

def testfn(i):
  return my_df.iloc[i]
test_var=sc.parallelize([0,1,2],50).map(testfn).collect()
print (test_var)
Error:
Py4JJavaError                             Traceback (most recent call last)
<command-2941072546245585> in <module>
      1 def testfn(i):
      2   return my_df.iloc[i]
----> 3 test_var=sc.parallelize([0,1,2],50).map(testfn).collect()
      4 print (test_var)

/databricks/spark/python/pyspark/rdd.py in collect(self)
    901         # Default path used in OSS Spark / for non-credential passthrough clusters:
    902         with SCCallSiteSync(self.context) as css:
--> 903             sock_info = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())
    904         return list(_load_from_socket(sock_info, self._jrdd_deserializer))
    905 

/databricks/spark/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py in __call__(self, *args)
   1303         answer = self.gateway_client.send_command(command)
   1304         return_value = get_return_value(
-> 1305             answer, self.gateway_client, self.target_id, self.name)
   1306 
   1307         for temp_arg in temp_args:

/databricks/spark/python/pyspark/sql/utils.py in deco(*a, **kw)
    125     def deco(*a, **kw):
    126         try:
--> 127             return f(*a, **kw)
    128         except py4j.protocol.Py4JJavaError as e:
    129             converted = convert_exception(e.java_exception)

/databricks/spark/python/lib/py4j-0.10.9-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    326                 raise Py4JJavaError(
    327                     "An error occurred while calling {0}{1}{2}.\n".
--> 328                     format(target_id, ".", name), value)
    329             else:
    330                 raise Py4JError(

Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 16 in stage 3845.0 failed 4 times, most recent failure: Lost task 16.3 in stage 3845.0 : org.apache.spark.api.python.PythonException: 'AttributeError: 'DataFrame' object has no attribute '_data'', from <command-2941072546245585>, line 2. Full traceback below:
Traceback (most recent call last):
  File "/databricks/spark/python/pyspark/worker.py", line 654, in main
    process()
  File "/databricks/spark/python/pyspark/worker.py", line 646, in process
    serializer.dump_stream(out_iter, outfile)
  File "/databricks/spark/python/pyspark/serializers.py", line 279, in dump_stream
    vs = list(itertools.islice(iterator, batch))
  File "/databricks/spark/python/pyspark/util.py", line 109, in wrapper
    return f(*args, **kwargs)
  File "<command-2941072546245585>", line 2, in testfn
  File "/databricks/python/lib/python3.7/site-packages/pandas/core/indexing.py", line 1767, in __getitem__
    return self._getitem_axis(maybe_callable, axis=axis)
  File "/databricks/python/lib/python3.7/site-packages/pandas/core/indexing.py", line 2137, in _getitem_axis
    self._validate_integer(key, axis)
  File "/databricks/python/lib/python3.7/site-packages/pandas/core/indexing.py", line 2060, in _validate_integer
    len_axis = len(self.obj._get_axis(axis))
  File "/databricks/python/lib/python3.7/site-packages/pandas/core/generic.py", line 424, in _get_axis
    return getattr(self, name)
  File "/databricks/python/lib/python3.7/site-packages/pandas/core/generic.py", line 5270, in __getattr__
    return object.__getattribute__(self, name)
  File "pandas/_libs/properties.pyx", line 63, in pandas._libs.properties.AxisProperty.__get__
  File "/databricks/python/lib/python3.7/site-packages/pandas/core/generic.py", line 5270, in __getattr__
    return object.__getattribute__(self, name)
AttributeError: 'DataFrame' object has no attribute '_data'
Version details:
Spark: '3.0.0'
Python: 3.7.6 (default, Jan 8 2020, 19:59:22)
[GCC 7.3.0]

Best Answer

I've seen errors like this when the driver and the executors have different versions of Pandas installed. In my case the driver had Pandas 1.1.0 (via databricks-connect) while the executors were on Databricks Runtime 7.3 with Pandas 1.0.1. Pandas 1.1.0 changed a lot internally (among other things, the DataFrame's internal block manager attribute was renamed from _data to _mgr), so the objects the driver ships to the executors break when a Pandas 1.0.x worker tries to use them. You need to check that your executors and driver have the same version of Pandas (you can find the Pandas version used by a Databricks Runtime in the release notes). You can use the following script to compare the versions of the Python libraries on the executors and the driver.
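The script linked in the original answer isn't reproduced here, but a minimal sketch of the idea is below. It assumes an active SparkContext named sc (as in the question) and that each worker can import pandas; it simply collects the Pandas version seen on every executor so it can be compared with the driver's:

import pandas as pd

def pandas_version_on_worker(_):
    # Runs on an executor: report the host name and the Pandas version it imports.
    import pandas
    import socket
    return (socket.gethostname(), pandas.__version__)

# Spread a handful of dummy tasks across the cluster and gather the versions
# reported by the executors.
executor_versions = set(
    sc.parallelize(range(sc.defaultParallelism), sc.defaultParallelism)
      .map(pandas_version_on_worker)
      .collect()
)

print("driver pandas  :", pd.__version__)
print("executor pandas:", executor_versions)

If the versions differ, align them, for example by pinning the client environment used by databricks-connect to the Pandas version listed in the runtime's release notes.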

Regarding "python - AttributeError: 'DataFrame' object has no attribute '_data'", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/65474079/
