scala - Formatting nested maps inside a map over a Spark RDD

Tags: scala, apache-spark

I have a text file that looks like this:

1007|CNSMR_CARD|1|1|1|1|1|1|1
1007|CNSMR_LOCL_IM_CHKG|1|1|1|1|1|1|1
1009|CNSMR_DIRCT_CHKG|4|4|4|4|4|1|1
1009|CNSMR_DIRCT_OTHR|4|4|4|4|4|1|1
1009|CNSMR_DIRCT_SAVG|4|4|4|4|4|1|1
1009|CNSMR_LOCL_IM_CHKG|4|4|4|4|4|1|1
1010|CNSMR_LOCL_IM_CHKG|1|1|1|1|1|1|1
1012|CNSMR_LOCL_IM_CHKG|1|1|1|1|1|1|1
1033|CNSMR_DIRCT_CHKG|1|1|1|1|2|1|1

I then create an RDD like this:

val custFile = sc.textFile("custInfo.txt").map(line => line.split('|'))

val custPrd = custFile.map(a => (a(0), ((a(1)), Map("PRVCY_MAIL: " -> a(2), "PRVCY_CALL: " -> a(3), "PRVCY_SWP: " -> a(4), "PRVCY_FCRA: " -> a(5), "PRVCY_GLBA: " -> a(6), "PRVCY_PIPE: " -> a(7), "PRVCY_AFIL: " -> a(8)))))

val custGrp = custPrd.groupByKey

val custPrdGrp = custGrp.map{case (k, vals) => {val valsString = vals.mkString(", "); s"'$k' | {$valsString}" }}

Which gives me this result:

res4: Array[String] = Array(
'106' | {(CNSMR_LOCL_IM_CHKG,Map(PRVCY_MAIL:  -> 4, PRVCY_GLBA:  -> 4, PRVCY_FCRA:  -> 4, PRVCY_AFIL:  -> 1, PRVCY_PIPE:  -> 1, PRVCY_CALL:  -> 4, PRVCY_SWP:  -> 4))}, 
'107' | {(CNSMR_DIRCT_CHKG,Map(PRVCY_MAIL:  -> 1, PRVCY_GLBA:  -> 1, PRVCY_FCRA:  -> 1, PRVCY_AFIL:  -> 1, PRVCY_PIPE:  -> 1, PRVCY_CALL:  -> 4, PRVCY_SWP:  -> 1)), (CNSMR_DIRCT_SAVG,Map(PRVCY_MAIL:  -> 1, PRVCY_GLBA:  -> 1, PRVCY_FCRA:  -> 1, PRVCY_AFIL:  -> 1, PRVCY_PIPE:  -> 1, PRVCY_CALL:  -> 4, PRVCY_SWP:  -> 1))}

But I want an array that looks like this:

'106' | {'CNSMR_LOCL_IM_CHKG': {PRVCY_MAIL: 4, PRVCY_GLBA: 4, PRVCY_FCRA: 4, PRVCY_AFIL: 1, PRVCY_PIPE: 1, PRVCY_CALL: 4, PRVCY_SWP: 4}}
'107' | {'CNSMR_DIRCT_CHKG': {PRVCY_MAIL: 1, PRVCY_GLBA: 1, PRVCY_FCRA: 1, PRVCY_AFIL: 1, PRVCY_PIPE: 1, PRVCY_CALL: 4, PRVCY_SWP: 1}}, {'CNSMR_DIRCT_SAVG': {PRVCY_MAIL: 1, PRVCY_GLBA: 1, PRVCY_FCRA: 1, PRVCY_AFIL: 1, PRVCY_PIPE: 1, PRVCY_CALL: 4, PRVCY_SWP: 1}}

To format the second map, I tried something like the following, but got an error:

    val custPrdGrp = custGrp.map{case (k, vals) => {val valsString = vals map { case (val1, val2, val3, val4, val5, val6, val7) => {val sets = vals.mkString(", "); s"$val1, $val2, $val3, $val4, $val5, $val6, $val7"}}.mkString(", "); s"'$k' | {$valsString}" }}

<console>:27: error: missing parameter type for expanded function
The argument types of an anonymous function must be fully known. (SLS 8.5)
Expected type was: ?
       val custPrdGrp = custGrp.map{case (k, vals) => {val valsString = vals map { case (val1, val2, val3, val4, val5, val6, val7) => {val sets = vals.mkString(", "); s"$val1, $val2, $val3, $val4, $val5, $val6, $val7"}}.mkString(", "); s"'$k' | {$valsString}" }}
                                                                                 ^

How do I format a nested map inside a map in Spark?

Best Answer

Let's start with a simple Map[String, String]:

val m: Map[String,String] = Map(
   "PRVCY_MAIL" -> "1", "PRVCY_GLBA" -> "1",
   "PRVCY_FCRA" -> "1", "PRVCY_AFIL" -> "1",
   "PRVCY_PIPE" -> "1", "PRVCY_CALL" -> "1",
   "PRVCY_SWP" -> "1"
)

Note that I've dropped formatting elements like the trailing `: ` and whitespace from the keys. I don't think they buy you anything, and the data is cleaner without them.

Now we can define a small helper:

def formatMap(sep: String = ": ",
    left: String = "{", right: String = "}")(m: Map[String, String]) = {
  val items = m.toSeq.map{case (k, v) => s"$k$sep$v"}.mkString(", ")
  s"$left$items$right"
}

Let's check how it works:

scala> formatMap()(m)
res50: String = {PRVCY_CALL: 1, PRVCY_SWP: 1, PRVCY_MAIL: 1, PRVCY_AFIL: 1, PRVCY_FCRA: 1, PRVCY_PIPE: 1, PRVCY_GLBA: 1}

scala> formatMap(sep="=")(m)
res51: String = {PRVCY_CALL=1, PRVCY_SWP=1, PRVCY_MAIL=1, PRVCY_AFIL=1, PRVCY_FCRA=1, PRVCY_PIPE=1, PRVCY_GLBA=1}

scala> formatMap(sep="|", left="[", right="]")(m)
res52: String = [PRVCY_CALL|1, PRVCY_SWP|1, PRVCY_MAIL|1, PRVCY_AFIL|1, PRVCY_FCRA|1, PRVCY_PIPE|1, PRVCY_GLBA|1]

Now let's clean up what you already have. First, extract the key names:

val keys = Array(
   "PRVCY_MAIL", "PRVCY_CALL", "PRVCY_SWP", "PRVCY_FCRA",
   "PRVCY_GLBA", "PRVCY_PIPE", "PRVCY_AFIL"
)

Rewrite the mapping:

val custPrd = custFile.map(a => (a(0), (a(1), keys.zip(a.drop(2)).toMap)))
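As a plain-Scala sanity check (no Spark required; the sample row is copied from the input above), here is what `keys.zip(a.drop(2)).toMap` produces for a single line:

```scala
val keys = Array(
   "PRVCY_MAIL", "PRVCY_CALL", "PRVCY_SWP", "PRVCY_FCRA",
   "PRVCY_GLBA", "PRVCY_PIPE", "PRVCY_AFIL"
)

// One row from the input file; drop(2) skips the id and product columns,
// and zip pairs the remaining seven values with their key names.
val a = "1007|CNSMR_CARD|1|1|1|1|1|1|1".split('|')
val m = keys.zip(a.drop(2)).toMap

println(m("PRVCY_MAIL"))  // 1
println(m.size)           // 7
```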

Group as before:

val custGrp = custPrd.groupByKey

and map:

val custPrdGrp = custGrp.map{case (k, vals) => {
  val valsString = vals.map{case (id, m) => {
    val fmtM = formatMap()(m)
    s"'$id': $fmtM"
  }}.mkString(", ")
  s"'$k' | {$valsString}"
}}

A quick check:

scala> custPrdGrp.first
res56: String = '1012' | {'CNSMR_LOCL_IM_CHKG': {PRVCY_CALL: 1, PRVCY_SWP: 1, PRVCY_MAIL: 1, PRVCY_AFIL: 1, PRVCY_FCRA: 1, PRVCY_PIPE: 1, PRVCY_GLBA: 1}}

You should probably extract the anonymous functions used above into named helpers, the same way I did with formatMap, but I'll leave that as an exercise for you.
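One possible extraction (a sketch of my own, not code from the answer above): pull the per-entry formatting into named helpers alongside formatMap, so the final Spark map stays a one-liner:

```scala
def formatMap(sep: String = ": ",
    left: String = "{", right: String = "}")(m: Map[String, String]) = {
  val items = m.toSeq.map { case (k, v) => s"$k$sep$v" }.mkString(", ")
  s"$left$items$right"
}

// Formats a single (product, attributes) pair as 'PRODUCT': {K: V, ...}
def formatEntry(id: String, m: Map[String, String]): String =
  s"'$id': ${formatMap()(m)}"

// Formats one grouped record as 'KEY' | {entry, entry, ...}
def formatRecord(k: String, vals: Iterable[(String, Map[String, String])]): String = {
  val valsString = vals.map { case (id, m) => formatEntry(id, m) }.mkString(", ")
  s"'$k' | {$valsString}"
}
```

With these in place, the final step becomes `val custPrdGrp = custGrp.map { case (k, vals) => formatRecord(k, vals) }`.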

This Q&A on formatting nested maps within a map in a Spark RDD is adapted from a question on Stack Overflow: https://stackoverflow.com/questions/32123418/
