pyspark - Functions from a custom module don't work in PySpark, but work when entered in interactive mode

Tags: pyspark apache-spark-sql

I wrote a module containing functions that operate on PySpark DataFrames. They transform columns of a DataFrame and return a new DataFrame. Here is a code sample, shortened to include only one of the functions:

from pyspark.sql import functions as F
from pyspark.sql import types as t

import pandas as pd
import numpy as np

metadta = pd.read_csv("metadata.csv")  # this contains metadata on my dataset; read_csv already returns a DataFrame

def str2num(text):
    # Missing or null-like values map to 0.
    if text is None or text == '' or text == 'NULL' or text == 'null':
        return 0
    elif len(text) == 1:
        return ord(text)
    else:
        # Concatenate each character's code point, e.g. 'ab' -> int('97' + '98') == 9798
        newnum = ''
        for lettr in text:
            newnum = newnum + str(ord(lettr))
        return int(newnum)

str2numUDF = F.udf(lambda s: str2num(s), t.IntegerType())  # note: created at module level, i.e. at import time

def letConvNum(df):    # df is a PySpark DataFrame
    #Get a list of columns that I want to transform, using the metadata Pandas DataFrame
    chng_cols=metadta[(metadta.comments=='letter conversion to num')].col_name.tolist()
    for curcol in chng_cols:
        df=df.withColumn(curcol, str2numUDF(df[curcol]))
    return df

That's my module; call it mymodule.py. If I start a PySpark shell, I do the following:

import mymodule as mm
df = sqlContext.sql("select * from tablename limit 10")

I inspected df (a PySpark DataFrame) and everything looked fine. To verify that mymodule was actually imported, I tried the str2num function:

mm.str2num('a')
97

So the module really is being imported. Then, if I try this:

df2=mm.letConvNum(df)

and run the following to check whether it worked:

df2.show()

it attempts the operation but then crashes:

    16/03/10 16:10:44 ERROR Executor: Exception in task 0.0 in stage 1.0 (TID 365)
    org.apache.spark.api.python.PythonException: Traceback (most recent call last):
      File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/worker.py", line 98, in main
        command = pickleSer._read_with_length(infile)
      File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 164, in _read_with_length
        return self.loads(obj)
      File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 422, in loads
        return pickle.loads(obj)
      File "test2.py", line 16, in <module>
        str2numUDF=F.udf(lambda s: str2num(s), t.IntegerType())
      File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/sql/functions.py", line 1460, in udf
        return UserDefinedFunction(f, returnType)
      File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/sql/functions.py", line 1422, in __init__
        self._judf = self._create_judf(name)
      File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/sql/functions.py", line 1430, in _create_judf
        pickled_command, broadcast_vars, env, includes = _prepare_for_python_RDD(sc, command, self)
      File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 2317, in _prepare_for_python_RDD
        [x._jbroadcast for x in sc._pickled_broadcast_vars],
    AttributeError: 'NoneType' object has no attribute '_pickled_broadcast_vars'

            at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:166)
            at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:207)
            at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:125)
            at org.apache.spark.sql.execution.BatchPythonEvaluation$$anonfun$doExecute$1.apply(python.scala:397)
            at org.apache.spark.sql.execution.BatchPythonEvaluation$$anonfun$doExecute$1.apply(python.scala:362)
            at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$17.apply(RDD.scala:710)
            at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$17.apply(RDD.scala:710)
            at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
            at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
            at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
            at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
            at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
            at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
            at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
            at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
            at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
            at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
            at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
            at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
            at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
            at org.apache.spark.scheduler.Task.run(Task.scala:88)
            at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
            at java.lang.Thread.run(Thread.java:745)
    16/03/10 16:10:44 ERROR TaskSetManager: Task 0 in stage 1.0 failed 1 times; aborting job
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/usr/hdp/2.3.4.0-3485/spark/python/pyspark/sql/dataframe.py", line 256, in show
        print(self._jdf.showString(n, truncate))
      File "/usr/hdp/2.3.4.0-3485/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py", line 538, in __call__
      File "/usr/hdp/2.3.4.0-3485/spark/python/pyspark/sql/utils.py", line 36, in deco
        return f(*a, **kw)
      File "/usr/hdp/2.3.4.0-3485/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value
    py4j.protocol.Py4JJavaError: An error occurred while calling o7299.showString.
    : org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 365, localhost): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
      File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/worker.py", line 98, in main
        command = pickleSer._read_with_length(infile)
      File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 164, in _read_with_length
        return self.loads(obj)
      File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 422, in loads
        return pickle.loads(obj)
      File "test2.py", line 16, in <module>
        str2numUDF=F.udf(lambda s: str2num(s), t.IntegerType())
      File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/sql/functions.py", line 1460, in udf
        return UserDefinedFunction(f, returnType)
      File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/sql/functions.py", line 1422, in __init__
        self._judf = self._create_judf(name)
      File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/sql/functions.py", line 1430, in _create_judf
        pickled_command, broadcast_vars, env, includes = _prepare_for_python_RDD(sc, command, self)
      File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 2317, in _prepare_for_python_RDD
        [x._jbroadcast for x in sc._pickled_broadcast_vars],
    AttributeError: 'NoneType' object has no attribute '_pickled_broadcast_vars'

            at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:166)
            at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:207)
            at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:125)
            at org.apache.spark.sql.execution.BatchPythonEvaluation$$anonfun$doExecute$1.apply(python.scala:397)
            at org.apache.spark.sql.execution.BatchPythonEvaluation$$anonfun$doExecute$1.apply(python.scala:362)
            at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$17.apply(RDD.scala:710)
            at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$17.apply(RDD.scala:710)
            at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
            at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
            at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
            at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
            at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
            at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
            at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
            at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
            at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
            at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
            at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
            at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
            at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
            at org.apache.spark.scheduler.Task.run(Task.scala:88)
            at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
            at java.lang.Thread.run(Thread.java:745)

    Driver stacktrace:
            at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1283)
            at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1271)
            at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1270)
            at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
            at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
            at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1270)
            at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
            at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
            at scala.Option.foreach(Option.scala:236)
            at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:697)
            at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1496)
            at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1458)
            at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1447)
            at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
            at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:567)
            at org.apache.spark.SparkContext.runJob(SparkContext.scala:1824)
            at org.apache.spark.SparkContext.runJob(SparkContext.scala:1837)
            at org.apache.spark.SparkContext.runJob(SparkContext.scala:1850)
            at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:215)
            at org.apache.spark.sql.execution.Limit.executeCollect(basicOperators.scala:207)
            at org.apache.spark.sql.DataFrame$$anonfun$collect$1.apply(DataFrame.scala:1385)
            at org.apache.spark.sql.DataFrame$$anonfun$collect$1.apply(DataFrame.scala:1385)
            at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:56)
            at org.apache.spark.sql.DataFrame.withNewExecutionId(DataFrame.scala:1903)
            at org.apache.spark.sql.DataFrame.collect(DataFrame.scala:1384)
            at org.apache.spark.sql.DataFrame.head(DataFrame.scala:1314)
            at org.apache.spark.sql.DataFrame.take(DataFrame.scala:1377)
            at org.apache.spark.sql.DataFrame.showString(DataFrame.scala:178)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:497)
            at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
            at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379)
            at py4j.Gateway.invoke(Gateway.java:259)
            at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
            at py4j.commands.CallCommand.execute(CallCommand.java:79)
            at py4j.GatewayConnection.run(GatewayConnection.java:207)
            at java.lang.Thread.run(Thread.java:745)
    Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last):
      File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/worker.py", line 98, in main
        command = pickleSer._read_with_length(infile)
      File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 164, in _read_with_length
        return self.loads(obj)
      File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 422, in loads
        return pickle.loads(obj)
      File "test2.py", line 16, in <module>
        str2numUDF=F.udf(lambda s: str2num(s), t.IntegerType())
      File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/sql/functions.py", line 1460, in udf
        return UserDefinedFunction(f, returnType)
      File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/sql/functions.py", line 1422, in __init__
        self._judf = self._create_judf(name)
      File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/sql/functions.py", line 1430, in _create_judf
        pickled_command, broadcast_vars, env, includes = _prepare_for_python_RDD(sc, command, self)
      File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 2317, in _prepare_for_python_RDD
        [x._jbroadcast for x in sc._pickled_broadcast_vars],
    AttributeError: 'NoneType' object has no attribute '_pickled_broadcast_vars'

            at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:166)
            at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:207)
            at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:125)
            at org.apache.spark.sql.execution.BatchPythonEvaluation$$anonfun$doExecute$1.apply(python.scala:397)
            at org.apache.spark.sql.execution.BatchPythonEvaluation$$anonfun$doExecute$1.apply(python.scala:362)
            at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$17.apply(RDD.scala:710)
            at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$17.apply(RDD.scala:710)
            at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
            at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
            at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
            at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
            at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
            at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
            at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
            at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
            at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
            at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
            at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
            at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
            at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
            at org.apache.spark.scheduler.Task.run(Task.scala:88)
            at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
            ... 1 more

As a check, I opened a clean shell and, instead of importing the module, defined the str2num function and the UDF directly in the interactive shell. I then typed in the body of the last function and ran the same final check:

df2.show()

This time, I got the transformed DataFrame I expected.

Why does it work when the functions are typed in interactively, but not when they are read in from a module? I know the module is being read, because the plain function str2num works.

Best answer

I ran into the same error and traced through the stack trace.

In my case, I was building an egg file and then passing it to Spark via the --py-files option.

As for the error, I think it comes down to this: when F.udf(str2num, t.IntegerType()) is called at module level, a UserDefinedFunction instance is created before Spark is running, so it holds an empty reference to the SparkContext, call it sc. When the UDF runs, sc._pickled_broadcast_vars is dereferenced, which raises the AttributeError in your output. You can see this in your traceback: pickle.loads re-imports the module on the worker, which re-executes the module-level F.udf call (test2.py, line 16), and there is no live SparkContext on the worker.
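
To make the timing concrete, here is a minimal sketch of the failing pattern next to a safe one (hypothetical file and names, untested):

# badmodule.py - minimal sketch of the failing pattern
from pyspark.sql import functions as F
from pyspark.sql import types as t

# Runs at import time. When a worker unpickles the UDF it re-imports this
# module, this line runs again, and there is no live SparkContext on the
# worker - hence the AttributeError on sc._pickled_broadcast_vars.
bad_udf = F.udf(lambda s: len(s or ''), t.IntegerType())

def safe_transform(df, colname):
    # Runs only when called on the driver, where a SparkContext exists.
    good_udf = F.udf(lambda s: len(s or ''), t.IntegerType())
    return df.withColumn(colname, good_udf(df[colname]))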

My workaround was to avoid creating the UDF before Spark is running (and therefore before there is a live SparkContext). In your case, you just need to change the definition of letConvNum:
def letConvNum(df):    # df is a PySpark DataFrame
    #Get a list of columns that I want to transform, using the metadata Pandas DataFrame
    chng_cols=metadta[(metadta.comments=='letter conversion to num')].col_name.tolist()

    str2numUDF = F.udf(str2num, t.IntegerType()) # create UDF on demand
    for curcol in chng_cols:
        df=df.withColumn(curcol, str2numUDF(df[curcol]))
    return df
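
With that change, the shell session from the question should work as expected (a quick sketch using the same names):

import mymodule as mm

df = sqlContext.sql("select * from tablename limit 10")
df2 = mm.letConvNum(df)   # the UDF is now created here, with a live SparkContext
df2.show()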

Note: I haven't actually tested the code above, but the change in my own code was similar, and everything worked.
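
If rebuilding the UDF on every call bothers you, one possible refinement (my own sketch, not part of the original fix) is to create it lazily and cache it at module level; the module-level statement then only assigns None, which is harmless when a worker re-imports the module:

_str2numUDF = None  # plain assignment at import time; no SparkContext needed

def get_str2num_udf():
    # Build the UDF on first use, when a live SparkContext exists on the driver.
    global _str2numUDF
    if _str2numUDF is None:
        _str2numUDF = F.udf(str2num, t.IntegerType())
    return _str2numUDF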

Also, for interested readers, see the Spark code for UserDefinedFunction.

For "pyspark - Functions from a custom module don't work in PySpark, but work when entered in interactive mode", see the original question on Stack Overflow: https://stackoverflow.com/questions/35923775/
