apache-spark - java.lang.UnsatisfiedLinkError: jep.Jep.init(Ljava/lang/ClassLoader;ZZ) on Google Cloud Dataproc

Tags: apache-spark google-cloud-platform google-cloud-dataproc jep

First of all, I don't understand why this question is being downvoted; if it can be improved, please explain how and I will clarify further. That would be useful feedback for me. I am new here, but I am not asking without having put in the effort.

I am trying to run a Spark job, written in Scala and using the jep interpreter, on a Google Cloud Platform Dataproc cluster.

I have added jep as a dependency:

What is a complete, concise way to get jep working from Scala on Google Cloud Platform Dataproc?

"black.ninia" % "jep" % "3.9.0"

In the install.sh script I wrote:
sudo -E pip install jep    
export JEP_PATH=$(pip show jep | grep "^Location:" | cut -d ':' -f 2,3 | cut -d ' ' -f 2)

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$JEP_PATH/jep

I still get the following error (no jep in java.library.path):
20/01/07 09:07:23 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 4.0 in stage 9.0 (TID 74, fs-xxxx-xxx-xxxx-test-w-1.c.xx-xxxx.internal, executor 1): java.lang.UnsatisfiedLinkError: no jep in java.library.path
    at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1867)
    at java.lang.Runtime.loadLibrary0(Runtime.java:870)
    at java.lang.System.loadLibrary(System.java:1122)
    at jep.MainInterpreter.initialize(MainInterpreter.java:128)
    at jep.MainInterpreter.getMainInterpreter(MainInterpreter.java:101)
    at jep.Jep.<init>(Jep.java:256)
    at jep.SharedInterpreter.<init>(SharedInterpreter.java:56)
    at dunnhumby.sciencebank.SubsCommons$$anonfun$getUnitVecEmbeddings$1.apply(SubsCommons.scala:33)
    at dunnhumby.sciencebank.SubsCommons$$anonfun$getUnitVecEmbeddings$1.apply(SubsCommons.scala:31)
    at org.apache.spark.sql.execution.MapPartitionsExec$$anonfun$6.apply(objects.scala:196)
    at org.apache.spark.sql.execution.MapPartitionsExec$$anonfun$6.apply(objects.scala:193)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:108)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:338)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
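
For context, the interpreter is created inside a mapPartitions closure, so libjep has to be loadable on every executor, not only on the driver. Below is a minimal sketch of that kind of call site (the getUnitVecEmbeddings frame in the stack trace points at code of this shape; the Python snippet and names here are illustrative assumptions, not the real SubsCommons code):

import jep.SharedInterpreter
import org.apache.spark.sql.{Dataset, SparkSession}

// Hypothetical sketch: score each token through a Python expression via jep,
// one interpreter per partition, running on the executors.
def scoreTokens(spark: SparkSession, tokens: Dataset[String]): Dataset[(String, Double)] = {
  import spark.implicits._
  tokens.mapPartitions { it =>
    // Executed on the executor: this constructor is where jep loads the native
    // libjep and throws UnsatisfiedLinkError if it is not on java.library.path.
    val interp = new SharedInterpreter()
    try {
      val scored = it.map { token =>
        interp.set("token", token)
        interp.eval("score = float(len(token))") // placeholder Python; a real job would call a model
        (token, interp.getValue("score").asInstanceOf[Double])
      }.toList // materialize before the interpreter is closed
      scored.iterator
    } finally {
      interp.close()
    }
  }
}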

(Edited):-

1.) I have seen answers that are specific to a local machine, but none specific to Google Cloud Platform.

2.) I found https://github.com/ninia/jep/issues/141 but it did not help.

3.) I also found this answer, but it is unaccepted and does not address Google Cloud Platform either. I even performed all the steps from it.

4.) If the question is missing any snapshots, I will attach them. But please provide some input.

(Edit 08012020: adding the install.sh I used)
#!/bin/bash

set -x -e

# Disable ipv6 since it seems to cause intermittent SocketTimeoutException when collecting data
# See CENG-1268 in Jira
printf "\nnet.ipv6.conf.default.disable_ipv6=1\nnet.ipv6.conf.all.disable_ipv6=1\n" >> /etc/sysctl.conf
sysctl -p

if [[ $(/usr/share/google/get_metadata_value attributes/dataproc-role) == Master ]]; then
    config_bucket="$(/usr/share/google/get_metadata_value attributes/dataproc-cluster-configuration-directory | cut -d'/' -f3)"
    dataproc_cluster_name="$(/usr/share/google/get_metadata_value attributes/dataproc-cluster-name)"
    hdfs dfs -mkdir -p gs://${config_bucket}/${dataproc_cluster_name}/spark_events
    systemctl restart spark-history-server.service
fi

tee -a /etc/hosts << EOM
$(/usr/share/google/get_metadata_value attributes/preprod-mjr-dataplatform-metrics-mig-ip) influxdb
EOM

echo "[global]
index-url = https://cs-anonymous:XXXXXXXX@artifactory.xxxxxxxx.com/artifactory/api/pypi/pypi-remote/simple" >/etc/pip.conf

PIP_REQUIREMENTS_FILE=gs://preprod-xxx-dpl-artif/dataproc/requirements.txt
PIP_TRANSITIVE_REQUIREMENTS_FILE=gs://preprod-xxx-dpl-artif/dataproc/transitive-requirements.txt

gsutil cp ${PIP_REQUIREMENTS_FILE} .
gsutil cp ${PIP_TRANSITIVE_REQUIREMENTS_FILE} .
gsutil -q cp gs://preprod-xxx-dpl-artif/dataproc/apt-transport-https_1.4.8_amd64.deb /tmp/apt-transport-https_1.4.8_amd64.deb

export http_proxy=http://preprod-xxx-securecomms.preprod-xxx-securecomms.il4.us-east1.lb.dh-xxxxx-media-55595.internal:3128
export https_proxy=http://preprod-xxx-securecomms.preprod-xxx-securecomms.il4.us-east1.lb.dh-xxxxx-media-55595.internal:3128
export no_proxy=google.com,googleapis.com,localhost
echo "deb https://cs-anonymous:Welcome123@artifactory.xxxxxxxx.com/artifactory/debian-main-remote stretch main" >/etc/apt/sources.list.d/main.list
echo "deb https://cs-anonymous:Welcome123@artifactory.xxxxxxxx.com/artifactory/maria-db-debian stretch main" >>/etc/apt/sources.list.d/main.list
echo 'Acquire::CompressionTypes::Order:: "gz";' > /etc/apt/apt.conf.d/02update
echo 'Acquire::http::Timeout "10";' > /etc/apt/apt.conf.d/99timeout
echo 'Acquire::ftp::Timeout "10";' >> /etc/apt/apt.conf.d/99timeout
sudo dpkg -i /tmp/apt-transport-https_1.4.8_amd64.deb
sudo apt-get install --allow-unauthenticated -y /tmp/apt-transport-https_1.4.8_amd64.deb
sudo -E apt-get update --allow-unauthenticated -y -o Dir::Etc::sourcelist="sources.list.d/main.list" -o Dir::Etc::sourceparts="-" -o APT::Get::List-Cleanup="0"

sudo -E apt-get --allow-unauthenticated -y install python-pip gcc python-dev python-tk curl
#requires index-url specifying because the version of pip installed by previous command
#installs an old version that doesn't seem to recognise pip.conf
sudo -E pip install --index-url https://cs-anonymous:xxxxxxx@artifactory.xxxxxxxx.com/artifactory/api/pypi/pypi-remote/simple --ignore-installed pip setuptools wheel

sudo -E pip install jep

sudo -E pip install gensim

JEP_PATH=$(pip show jep | grep "^Location:" | cut -d ':' -f 2,3 | cut -d ' ' -f 2)

cat << EOF >> /etc/spark/conf/spark-env.sh

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$JEP_PATH/jep
export LD_PRELOAD=$LD_PRELOAD:$JEP_PATH/jep
EOF

tee -a /etc/spark/conf/spark-defaults.conf << EOM
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$JEP_PATH/jep
export LD_PRELOAD=$LD_PRELOAD:$JEP_PATH/jep
EOM

tee -a /etc/*bashrc << EOM
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$JEP_PATH/jep
export LD_PRELOAD=$LD_PRELOAD:$JEP_PATH/jep
EOM

source /etc/*bashrc

sudo -E apt-get install --allow-unauthenticated -y \
  pkg-config \
  freetype* \
  python-matplotlib \
  libpq-dev \
  libssl-dev \
  libcrypto* \
  python-dev \
  libtext-csv-xs-perl \
  libmysqlclient-dev \
  libfreetype* \
  libzmq3-dev \
  libzmq3*


sudo -E pip install -r ./requirements.txt 

Best Answer

Assuming you are using install.sh as a Dataproc init action, your export commands only export those environment variables in the local shell session that runs the init action; they are not persisted for the Spark processes launched afterwards.

The way to have Spark pick up custom environment variables is to add them to /etc/spark/conf/spark-env.sh. Here is a spark user discussion about how to set java.library.path in Spark.

Essentially, in your init action you can wrap the part that exports the environment variables in a heredoc that appends to spark-env.sh. However, as https://issues.apache.org/jira/browse/SPARK-1719 shows, environment variables are not enough to propagate the library path into the executors under YARN; spark explicitly sets the library path rather than propagating LD_LIBRARY_PATH, so we also have to set spark.executor.extraLibraryPath in spark-defaults.conf:

JEP_PATH=$(pip show jep | grep "^Location:" | cut -d ':' -f 2,3 | cut -d ' ' -f 2)

# spark-env.sh for driver process.
cat << EOF >> /etc/spark/conf/spark-env.sh
# Note the backslash before $LD_LIBRARY_PATH on the right-hand side;
# it is important that the variable is evaluated inside spark-env.sh rather
# than clobbered with the local $LD_LIBRARY_PATH of the process running
# the init action.
export LD_LIBRARY_PATH=\$LD_LIBRARY_PATH:$JEP_PATH/jep
EOF

# For executor processes
cat << EOF >> /etc/spark/conf/spark-defaults.conf
spark.executor.extraLibraryPath=$JEP_PATH/jep
EOF
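
Separately from the environment-variable fix above, jep also lets you point it at the native library explicitly from code, which can help confirm whether the path on the workers is correct. This is only a hedged sketch: the path below is an assumption and must match where pip actually installed jep on the executors, and the call has to run in each executor JVM before the first interpreter is created.

import jep.MainInterpreter

// Assumed location of libjep.so on the workers; verify it with `pip show jep`
// on a worker node. Call once per JVM, before any SharedInterpreter is constructed.
MainInterpreter.setJepLibraryPath("/usr/local/lib/python2.7/dist-packages/jep/libjep.so")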

Regarding apache-spark - java.lang.UnsatisfiedLinkError: jep.Jep.init(Ljava/lang/ClassLoader;ZZ) on Google Cloud Dataproc, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/59626224/
