scala - Error when executing an Apache Spark ML pipeline

Tags: scala apache-spark apache-spark-sql spray spark-dataframe

We are using Apache Spark 1.6, Scala 2.10.5, and sbt 0.13.9.

When executing a simple pipeline:

def buildPipeline(): Pipeline = {
    // Split the "Summary" text column into words
    val tokenizer = new Tokenizer()
    tokenizer.setInputCol("Summary")
    tokenizer.setOutputCol("LemmatizedWords")

    // Hash the tokenized words into term-frequency feature vectors
    val hashingTF = new HashingTF()
    hashingTF.setInputCol(tokenizer.getOutputCol)
    hashingTF.setOutputCol("RawFeatures")

    // Chain the two stages into a single pipeline
    val pipeline = new Pipeline()
    pipeline.setStages(Array(tokenizer, hashingTF))
    pipeline
}
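For context, here is a minimal sketch of how the pipeline above would typically be invoked; the DataFrame `df` and its `Summary` column are assumptions, not part of the original code:

```scala
// Minimal sketch (assumes sqlContext and a DataFrame df with a
// string column named "Summary"):
val df = sqlContext.createDataFrame(Seq(
  (0, "spark ml pipeline example"),
  (1, "another summary line")
)).toDF("id", "Summary")

val pipeline = buildPipeline()
val model = pipeline.fit(df)          // the error below is thrown during fit/transform
val transformed = model.transform(df) // adds LemmatizedWords and RawFeatures columns
```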

The following error occurs when the ML pipeline's fit method is executed. Any comment on what might be happening would be helpful.

**java.lang.RuntimeException: error reading Scala signature of org.apache.spark.mllib.linalg.Vector: value linalg is not a package**

[error] org.apache.spark.ml.feature.HashingTF$$typecreator1$1.apply(HashingTF.scala:66)
[error] org.apache.spark.sql.catalyst.ScalaReflection$class.localTypeOf(ScalaReflection.scala:642)

[error] org.apache.spark.sql.catalyst.ScalaReflection$.localTypeOf(ScalaReflection.scala:30)
[error] org.apache.spark.sql.catalyst.ScalaReflection$class.schemaFor(ScalaReflection.scala:630)
[error] org.apache.spark.sql.catalyst.ScalaReflection$.schemaFor(ScalaReflection.scala:30)
[error] org.apache.spark.sql.functions$.udf(functions.scala:2576)
[error] org.apache.spark.ml.feature.HashingTF.transform(HashingTF.scala:66)
[error] org.apache.spark.ml.PipelineModel$$anonfun$transform$1.apply(Pipeline.scala:297)
[error] org.apache.spark.ml.PipelineModel$$anonfun$transform$1.apply(Pipeline.scala:297)
[error] org.apache.spark.ml.PipelineModel.transform(Pipeline.scala:297)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
    at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)

build.sbt

scalaVersion in ThisBuild := "2.10.5"
scalacOptions := Seq("-unchecked", "-deprecation", "-encoding", "utf8")  

val sparkV = "1.6.0"
val sprayV = "1.3.2"
val specs2V = "2.3.11"
val slf4jV = "1.7.5"
val grizzledslf4jV = "1.0.2"
val akkaV = "2.3.14"

libraryDependencies in ThisBuild ++= { Seq(
  ("org.apache.spark" %% "spark-mllib" % sparkV) % Provided,  
  ("org.apache.spark" %% "spark-core" % sparkV) % Provided, 
  "com.typesafe.akka" %% "akka-actor" % akkaV,
  "io.spray" %% "spray-can" % sprayV,
  "io.spray" %% "spray-routing" % sprayV,
  "io.spray" %% "spray-json" % sprayV, 
  "io.spray" %% "spray-testkit" % "1.3.1" % "test", 
  "org.specs2" %% "specs2-core" % specs2V % "test",
  "org.specs2" %% "specs2-mock" % specs2V % "test",
  "org.specs2" %% "specs2-junit" % specs2V % "test",
  "org.slf4j" % "slf4j-api" % slf4jV,
  "org.clapper" %% "grizzled-slf4j" % grizzledslf4jV
) }

Best Answer

You should try using

org.apache.spark.ml.linalg.Vector

instead of what is currently being used,

org.apache.spark.mllib.linalg.Vectors

Hope this solves your problem.
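Concretely, the suggested change amounts to swapping the import. Note one caveat: the `org.apache.spark.ml.linalg` package only exists from Spark 2.0 onward, so on Spark 1.6 this change also implies upgrading the `sparkV` dependency in build.sbt:

```scala
// Before (Spark 1.6 era, mllib-based linear algebra types):
// import org.apache.spark.mllib.linalg.Vectors

// After (requires Spark 2.0+, where the ml package provides its own types):
import org.apache.spark.ml.linalg.Vector
```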

Regarding "scala - Error when executing an Apache Spark ML pipeline", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/35517090/
