java - Using the Scala Flume Sink with Spark

Tags: java scala apache-spark sbt flume

My Flume configuration:

source_agent.sources = tail
source_agent.sources.tail.type = exec
source_agent.sources.tail.command = python loggen.py
source_agent.sources.tail.batchSize = 1
source_agent.sources.tail.channels = memoryChannel
#memory-channel
source_agent.channels = memoryChannel
source_agent.channels.memoryChannel.type = memory
source_agent.channels.memoryChannel.capacity = 10000
source_agent.channels.memoryChannel.transactionCapacity=10000
source_agent.channels.memoryChannel.byteCapacityBufferPercentage = 20
source_agent.channels.memoryChannel.byteCapacity = 800000
# Send to Flume collector via the Spark sink
source_agent.sinks = spark
source_agent.sinks.spark.type=org.apache.spark.streaming.flume.sink.SparkSink
source_agent.sinks.spark.batchSize=100
source_agent.sinks.spark.channel = memoryChannel
source_agent.sinks.spark.hostname=localhost
source_agent.sinks.spark.port=1234
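With this configuration saved to a file (the filename `spark-sink.conf` below is an assumption), the agent can be started with the standard `flume-ng` launcher. Note that the pull-based `SparkSink` also needs the `spark-streaming-flume-sink_2.10` jar (plus `scala-library` and `commons-lang3`) on Flume's classpath, e.g. under its `plugins.d` directory:

```shell
# Launch the agent defined above; --name must match the
# agent prefix used in the config file (source_agent).
flume-ng agent \
  --conf conf \
  --conf-file spark-sink.conf \
  --name source_agent \
  -Dflume.root.logger=INFO,console
```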

My Spark/Scala code:

package com.thanga.twtsteam

import org.apache.spark.SparkConf
import org.apache.spark.streaming._
import org.apache.spark.streaming.StreamingContext._
import org.apache.spark.streaming.flume._

object SampleStream {
  def main(args: Array[String]) {
    val conf = new SparkConf().setMaster("local[2]").setAppName("SampleStream")
    val ssc = new StreamingContext(conf, Seconds(1))
    val flumeStream = FlumeUtils.createPollingStream(ssc, "localhost", 1234)
    ssc.stop()
  }
}
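As an aside, the snippet above stops the context immediately without ever starting it, so even once the class-loading issue is fixed it would not poll anything. A minimal working driver normally attaches an output operation and then blocks; a sketch (the `count().print()` output and the message text are illustrative assumptions, not part of the original code):

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.flume.FlumeUtils

object SampleStream {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("local[2]").setAppName("SampleStream")
    val ssc = new StreamingContext(conf, Seconds(1))

    // Pull events from the SparkSink running inside the Flume agent.
    val flumeStream = FlumeUtils.createPollingStream(ssc, "localhost", 1234)

    // At least one output operation is required, otherwise Spark has nothing to run.
    flumeStream.count().map(c => s"Received $c flume events").print()

    ssc.start()             // actually start the streaming job
    ssc.awaitTermination()  // block until stopped externally
  }
}
```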

I build the jar with SBT; my SBT configuration is as follows:

name := "Flume"

version := "1.0"

scalaVersion := "2.10.4"
publishMavenStyle := true
libraryDependencies += "org.apache.spark" % "spark-core_2.10" % "1.4.1"
libraryDependencies += "org.apache.spark" % "spark-streaming_2.10" % "1.4.1" 
libraryDependencies += "org.apache.spark" % "spark-streaming-flume_2.10" % "1.4.1"
libraryDependencies += "org.apache.spark" % "spark-streaming-flume-sink_2.10" % "1.4.1"
libraryDependencies += "org.scala-lang" % "scala-library" % "2.10.4"
resolvers += "Akka Repository" at "http://repo.akka.io/releases/"

The problem is that the jar now builds without any errors, but at runtime I get the following error:

16/04/11 19:52:56 INFO BlockManagerMaster: Registered BlockManager
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/spark/streaming/flume/FlumeUtils$
        at com.thagna.twtsteam.SampleStream$.main(SampleStream.scala:10)
        at com.thanga.twtsteam.SampleStream.main(SampleStream.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:665)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:170)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:193)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:112)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: org.apache.spark.streaming.flume.FlumeUtils$
        at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
        ... 11 more
16/04/11 19:52:56 INFO SparkContext: Invoking stop() from shutdown hook

Can anyone help resolve this?

Best Answer

If you run the job with spark-submit, you can use the --jars option.

For example:

spark-submit --jars ....../lib/spark-streaming_2.10-1.2.1.2.2.6.0-2800.jar

Alternatively, add the dependency to your SBT configuration:

libraryDependencies += "org.apache.spark" %% "spark-streaming-flume" % "2.1.0"

https://spark.apache.org/docs/latest/streaming-flume-integration.html
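To expand on the answer: a plain `sbt package` produces a thin jar containing only the project's own classes, so `spark-streaming-flume` never reaches the driver, hence the `NoClassDefFoundError`. Besides `--jars`, a common fix is to bundle the dependency into a fat jar with the sbt-assembly plugin. A sketch, assuming sbt-assembly is available (the plugin version and merge strategy below are illustrative, not taken from the question):

```scala
// project/assembly.sbt -- plugin version is an assumption
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.10")

// build.sbt additions: mark the Spark runtime as "provided" so it is
// supplied by spark-submit, but keep the flume integration in the jar.
libraryDependencies += "org.apache.spark" % "spark-core_2.10" % "1.4.1" % "provided"
libraryDependencies += "org.apache.spark" % "spark-streaming_2.10" % "1.4.1" % "provided"
libraryDependencies += "org.apache.spark" % "spark-streaming-flume_2.10" % "1.4.1"

// Resolve duplicate files when merging dependency jars.
assemblyMergeStrategy in assembly := {
  case PathList("META-INF", xs @ _*) => MergeStrategy.discard
  case _                             => MergeStrategy.first
}
```

Building with `sbt assembly` and submitting the resulting fat jar makes `FlumeUtils` available at runtime.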

Regarding "java - Using the Scala Flume Sink with Spark", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/36551531/
