python - Spark installation problem - TypeError: an integer is required (got type bytes) - spark-2.4.5-bin-hadoop2.7, hadoop 2.7.1, python 3.8.2

Tags: python apache-spark hadoop pyspark apache-spark-sql


I am trying to install Spark on my 64-bit Windows machine. I have Python 3.8.2 installed and pip version 20.0.2. I downloaded spark-2.4.5-bin-hadoop2.7, set the HADOOP_HOME and SPARK_HOME environment variables, and added pyspark to the PATH variable. When I run pyspark from cmd, I see the error given below:

C:\Users\aa>pyspark
Python 3.8.2 (tags/v3.8.2:7b3ab59, Feb 25 2020, 23:03:10) [MSC v.1916 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
Traceback (most recent call last):
  File "C:\Users\aa\Downloads\spark-2.4.5-bin-hadoop2.7\spark-2.4.5-bin-hadoop2.7\python\pyspark\shell.py", line 31, in <module>
    from pyspark import SparkConf
  File "C:\Users\aa\Downloads\spark-2.4.5-bin-hadoop2.7\spark-2.4.5-bin-hadoop2.7\python\pyspark\__init__.py", line 51, in <module>
    from pyspark.context import SparkContext
  File "C:\Users\aa\Downloads\spark-2.4.5-bin-hadoop2.7\spark-2.4.5-bin-hadoop2.7\python\pyspark\context.py", line 31, in <module>
    from pyspark import accumulators
  File "C:\Users\aa\Downloads\spark-2.4.5-bin-hadoop2.7\spark-2.4.5-bin-hadoop2.7\python\pyspark\accumulators.py", line 97, in <module>
    from pyspark.serializers import read_int, PickleSerializer
  File "C:\Users\aa\Downloads\spark-2.4.5-bin-hadoop2.7\spark-2.4.5-bin-hadoop2.7\python\pyspark\serializers.py", line 72, in <module>
    from pyspark import cloudpickle
  File "C:\Users\aa\Downloads\spark-2.4.5-bin-hadoop2.7\spark-2.4.5-bin-hadoop2.7\python\pyspark\cloudpickle.py", line 145, in <module>
    _cell_set_template_code = _make_cell_set_template_code()
  File "C:\Users\aa\Downloads\spark-2.4.5-bin-hadoop2.7\spark-2.4.5-bin-hadoop2.7\python\pyspark\cloudpickle.py", line 126, in _make_cell_set_template_code
    return types.CodeType(
TypeError: an integer is required (got type bytes)

I want to import pyspark in my Python code, but after running my code file in PyCharm I also get the same TypeError: an integer is required (got type bytes). I uninstalled Python 3.8.2 and tried Python 2.7, but in that case I got a deprecation error. I received the error given below and then updated the pip installer:

Could not find a version that satisfies the requirement pyspark (from versions: )
No matching distribution found for pyspark 

I then ran python -m pip install --upgrade pip to update pip, but I ran into the TypeError: an integer is required (got type bytes) problem again.

C:\Users\aa>python --version
Python 3.8.2

C:\Users\aa>pip --version
pip 20.0.2 from c:\users\aa\appdata\local\programs\python\python38\lib\site-packages\pip (python 3.8)

C:\Users\aa>java --version
java 14 2020-03-17
Java(TM) SE Runtime Environment (build 14+36-1461)
Java HotSpot(TM) 64-Bit Server VM (build 14+36-1461, mixed mode, sharing)

How can I solve and get past this problem? Currently I have spark-2.4.5-bin-hadoop2.7 and Python 3.8.2. Thanks in advance!

Best Answer

It is a compatibility problem between Python 3.8 and this Spark version; see https://github.com/apache/spark/pull/26194.
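Some background on the incompatibility (not stated in the original answer, but visible in the traceback and the linked PR): Spark 2.4's bundled cloudpickle constructs a types.CodeType using the Python 3.7 argument order, while Python 3.8 inserted posonlyargcount as the constructor's second parameter. Every later argument therefore shifts one slot, and co_code (a bytes object) lands where the integer flags belongs, which is exactly the reported error. A minimal sketch reproducing the mismatch on Python 3.8:

import types

def f():
    pass

co = f.__code__
# Spark 2.4's cloudpickle passes CodeType arguments in the Python 3.7
# order. Python 3.8 expects 'posonlyargcount' in second position, so
# co_code (bytes) ends up in the slot of the integer 'flags'.
try:
    types.CodeType(
        co.co_argcount, co.co_kwonlyargcount, co.co_nlocals,
        co.co_stacksize, co.co_flags, co.co_code, co.co_consts,
        co.co_names, co.co_varnames, co.co_filename, co.co_name,
        co.co_firstlineno, co.co_lnotab, co.co_cellvars, (),
    )
except TypeError as e:
    print(e)  # on Python 3.8: an integer is required (got type bytes)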

To make it work (to an extent), you need to replace the cloudpickle module bundled with pyspark with a Python 3.8-compatible release, keeping the print_exec helper below, which pyspark's serializers.py still calls:

import sys
import traceback

# Keep this helper: pyspark's serializers call cloudpickle.print_exec().
def print_exec(stream):
    ei = sys.exc_info()
    traceback.print_exception(ei[0], ei[1], ei[2], None, stream)

Then you can import pyspark.
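For reference: the way newer cloudpickle releases survive Python 3.8 is that, on modern interpreters, they no longer hand-build a CodeType at all, because Python 3.7 made closure cells directly writable. A minimal sketch of that idea (not Spark's exact code):

import sys

def make_cell(value):
    # Build a genuine closure cell holding `value`.
    def inner():
        return value
    return inner.__closure__[0]

if sys.version_info >= (3, 7):
    def cell_set(cell, value):
        # Direct assignment is allowed since Python 3.7, which makes
        # the CodeType construction that breaks on 3.8 unnecessary.
        cell.cell_contents = value
else:
    raise RuntimeError("pre-3.7 interpreters need the CodeType-based trick")

cell = make_cell(1)
cell_set(cell, 42)
print(cell.cell_contents)  # 42

In practice, the simpler routes are downgrading to Python 3.7 or moving to a Spark release that already ships the updated cloudpickle.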

A similar question can be found on Stack Overflow: https://stackoverflow.com/questions/61348860/
