apache-spark - How to store Spark streaming data to HDFS in Hortonworks?

Tags: apache-spark apache-kafka hdfs spark-streaming hortonworks-sandbox

I am streaming data from a Kafka topic with Spark. Here is the code I have tried; at the moment it only prints the streaming data to the console. I want to store this data as text files in HDFS.

import _root_.kafka.serializer.DefaultDecoder
import _root_.kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils
import org.apache.spark.storage.StorageLevel
object StreamingDataNew {
  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf().setAppName("Kafka").setMaster("local[*]")
    val ssc = new StreamingContext(sparkConf, Seconds(10))
    val kafkaConf = Map(
      "metadata.broker.list" -> "localhost:9092",
      "zookeeper.connect" -> "localhost:2181",
      "group.id" -> "kafka-streaming-example",
      "zookeeper.connection.timeout.ms" -> "200000"
    )
    val lines = KafkaUtils.createStream[Array[Byte], String, DefaultDecoder, StringDecoder](
      ssc,
      kafkaConf,
      Map("topic-one" -> 1), // subscripe to topic and partition 1
      StorageLevel.MEMORY_ONLY
    )
    println("printing" + lines.toString())
    val words = lines.flatMap { case (x, y) => y.split(" ") }
    words.print()

    ssc.start()
    ssc.awaitTermination()

  }
}

I found that a DStream can be written out with "saveAsTextFiles". But can someone clearly spell out the steps for connecting to Hortonworks and storing the data in HDFS with the Scala code above?
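
A minimal sketch of what that could look like with the words DStream from the code above; the NameNode host, port, and output directory are assumptions (on a Hortonworks sandbox the default filesystem is typically exposed on port 8020), not something confirmed by the question:

    // Sketch only: replace the NameNode URI and directory with your cluster's values.
    // saveAsTextFiles writes one directory per batch, named <prefix>-<batch time in ms>.<suffix>.
    words.saveAsTextFiles("hdfs://sandbox-hdp.hortonworks.com:8020/user/spark/kafka-words", "txt")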

Best answer

I found the answer; this code works for me.

package com.spark.streaming

import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkContext
import org.apache.spark.sql._
import org.apache.spark.streaming.StreamingContext
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent

object MessageStreaming {
  def main(args: Array[String]): Unit = {
    println("Message streaming")

    val conf = new org.apache.spark.SparkConf().setMaster("local[*]").setAppName("kafka-streaming")
    val context = new SparkContext(conf)
    val ssc = new StreamingContext(context, org.apache.spark.streaming.Seconds(10))
    val kafkaParams = Map(
      "bootstrap.servers" -> "kafka.kafka-cluster.com:9092",
      "group.id" -> "kafka-streaming-example",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "auto.offset.reset" -> "latest",
      "zookeeper.connection.timeout.ms" -> "200000"
    )
    val topics = Array("cdc-classic")
    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      PreferConsistent,
      Subscribe[String, String](topics, kafkaParams))

    val content = stream.filter(x => x.value() != null)
    val sqlContext = new org.apache.spark.sql.SQLContext(context)
    import sqlContext.implicits._

    stream.map(_.value).foreachRDD { rdd =>
      // Print the batch contents, then append non-empty batches to HDFS as JSON.
      rdd.foreach(println)
      if (!rdd.isEmpty()) {
        rdd.toDF("value")
          .coalesce(1)
          .write
          .mode(SaveMode.Append)
          .json("hdfs://dev1a/user/hg5tv0/hadoop/MessagesFromKafka")
      }
    }

    ssc.start()
    ssc.awaitTermination()
  }
}
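
If plain text files are wanted instead of JSON (as the question originally asked), the same foreachRDD pattern can write each micro-batch with saveAsTextFile. This is a sketch, not part of the accepted answer; the HDFS URI and output directory are placeholders:

    // Sketch: write each non-empty micro-batch as text files in a per-batch directory.
    // The HDFS URI and directory below are placeholders; adjust to your NameNode and a writable path.
    stream.map(_.value).foreachRDD { (rdd, batchTime) =>
      if (!rdd.isEmpty()) {
        rdd.coalesce(1)
          .saveAsTextFile(s"hdfs://dev1a/user/hg5tv0/hadoop/MessagesFromKafkaText/batch-${batchTime.milliseconds}")
      }
    }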

Regarding apache-spark - How to store Spark streaming data to HDFS in Hortonworks?, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/50460451/
