java - Map of bad elements

Tags: java, scala, apache-spark, distributed-computing, rdd

I am implementing k-means and I want to create the new centroids. But the map is missing one element! However, when K is small, e.g. 15, it works fine.

Based on that code I have:

val K = 25 // number of clusters
val data = sc.textFile("dense.txt").map(
     t => (t.split("#")(0), parseVector(t.split("#")(1)))).cache()
val count = data.count()
println("Number of records " + count)

var centroids = data.takeSample(false, K, 42).map(x => x._2)
do {
  var closest = data.map(p => (closestPoint(p._2, centroids), p._2))
  var pointsGroup = closest.groupByKey()
  println(pointsGroup)
  pointsGroup.foreach { println }
  var newCentroids = pointsGroup.mapValues(ps => average(ps.toSeq)).collectAsMap()
  //var newCentroids = pointsGroup.mapValues(ps => average(ps)).collectAsMap() this will produce an error
  println(centroids.size)
  println(newCentroids.size)
  for (i <- 0 until K) {
    tempDist += centroids(i).squaredDist(newCentroids(i))
  }
  ..

In the for loop, I get an error that the element cannot be found (it is not always the same element; it depends on K):

java.util.NoSuchElementException: key not found: 2

Output right before the error occurs:

Number of records 27776
ShuffledRDD[5] at groupByKey at kmeans.scala:72
25
24            <- IT SHOULD BE 25

What is the problem?


>>> println(newCentroids)
Map(23 -> (-0.0050852959701492536, 0.005512245104477607, -0.004460964477611937), 17 -> (-0.005459583045685268, 0.0029015278781725795, -8.451635532994901E-4), 8 -> (-4.691649213483123E-4, 0.0025375451685393366, 0.0063490755505617585), 11 -> (0.30361112034069937, -0.0017342255382385204, -0.005751167731061906), 20 -> (-5.839587918939964E-4, -0.0038189763756820145, -0.007067070459859708), 5 -> (-0.3787612396704685, -0.005814121628643806, -0.0014961713117870657), 14 -> (0.0024755681263616547, 0.0015191503267973836, 0.003411769193899781), 13 -> (-0.002657690932944597, 0.0077671050923225635, -0.0034652379980563263), 4 -> (-0.006963114731610361, 1.1751361829025871E-4, -0.7481135105367823), 22 -> (0.015318187079953534, -1.2929035958285013, -0.0044176372190034684), 7 -> (-0.002321059060773483, -0.006316359116022083, 0.006164669723756913), 16 -> (0.005341800955165691, -0.0017540737037037035, 0.004066574093567247), 1 -> (0.0024547379611650484, 0.0056298656504855955, 0.002504618082524296), 10 -> (3.421068671121009E-4, 0.0045169004751299275, 5.696239049740164E-4), 19 -> (-0.005453716071428539, -0.001450277556818192, 0.003860007248376626), 9 -> (-0.0032921685273631807, 1.8477108457711313E-4, -0.003070412228855717), 18 -> (-0.0026803160958904053, 0.00913904078767124, -0.0023528013698630146), 3 -> (0.005750011594202901, -0.003607098309178754, -0.003615918896940412), 21 -> (0.0024925166025641056, -0.0037607353461538507, -2.1588444871794858E-4), 12 -> (-7.920202960526356E-4, 0.5390774232894769, -4.928884539473694E-4), 15 -> (-0.0018608492323232324, -0.006973787272727284, -0.0027266663434343404), 24 -> (6.151173211963486E-4, 7.081812613784045E-4, 5.612962808842611E-4), 6 -> (0.005323933953732931, 0.0024014750473186123, -2.969338590956889E-4), 0 -> (-0.0015991676750160377, -0.003001317289659613, 0.5384176139563245))

Question with a related error: spark scala throws java.util.NoSuchElementException: key not found: 0 exception


EDIT:

After Zero323 observed that two of the centroids were identical, I changed the code so that all the centroids are unique. However, the behaviour stayed the same. For that reason, I suspect that closestPoint() may return the same index for two centroids. Here is the function:

  def closestPoint(p: Vector, centers: Array[Vector]): Int = {
    var bestIndex = 0
    var closest = Double.PositiveInfinity
    for (i <- 0 until centers.length) {
      val tempDist = p.squaredDist(centers(i))
      if (tempDist < closest) {
        closest = tempDist
        bestIndex = i
      }
    }
    bestIndex
  }
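That suspicion can be checked without a cluster. The sketch below uses Vec, a hypothetical minimal stand-in for Spark's Vector (only squaredDist is reproduced): with two identical centroids, the strict < comparison always keeps the lower index, so the higher index never wins and no point is ever assigned to it.

```scala
// Hypothetical stand-in for Spark's Vector, just enough to exercise closestPoint.
case class Vec(xs: Array[Double]) {
  def squaredDist(o: Vec): Double =
    xs.zip(o.xs).map { case (a, b) => (a - b) * (a - b) }.sum
}

def closestPoint(p: Vec, centers: Array[Vec]): Int = {
  var bestIndex = 0
  var closest = Double.PositiveInfinity
  for (i <- centers.indices) {
    val tempDist = p.squaredDist(centers(i))
    if (tempDist < closest) {   // strict '<': ties keep the earlier index
      closest = tempDist
      bestIndex = i
    }
  }
  bestIndex
}

// Centroids 0 and 1 are identical, so index 1 can never be returned.
val centers = Array(Vec(Array(0.0, 0.0)), Vec(Array(0.0, 0.0)), Vec(Array(5.0, 5.0)))
val points  = Seq(Vec(Array(0.1, 0.0)), Vec(Array(0.0, 0.2)), Vec(Array(5.1, 4.9)))
val assigned = points.map(p => closestPoint(p, centers)).toSet
println(assigned)   // index 1 is missing from the assignments
```

After groupByKey, the missing index produces no group, which is exactly why newCentroids ends up one entry short.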

How can I get out of this situation? I am running the code as described in Spark cluster.

Best answer

In the "E-step" (assigning points to cluster indices is analogous to the E-step of the EM algorithm), it can happen that one of your indices is not assigned any points. If that happens, you need a way of associating that index with some point, otherwise you will end up with fewer clusters after the "M-step" (assigning centroids to indices is analogous to the M-step of the EM algorithm). Something like this should probably work:

val newCentroids = {
  val temp = pointsGroup.mapValues(ps => average(ps.toSeq)).collectAsMap()
  val nMissing = K - temp.size 
  val sample = data.takeSample(false, nMissing, seed)
  var c = -1
  (for (i <- 0 until K) yield {
   val point = temp.getOrElse(i, {c += 1; sample(c) })
   (i, point)
  }).toMap      
}   

Just substitute that code for the line you are currently using to compute newCentroids.
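The fill-in logic in the answer can be seen in isolation on plain Scala collections. The sketch below uses placeholder string values in place of centroids; temp is missing key 2, and getOrElse pulls a replacement from sample, advancing the counter c only when a key is actually missing:

```scala
// Plain-Scala sketch of the answer's fill-in pattern (hypothetical sample values).
val K = 4
val temp   = Map(0 -> "c0", 1 -> "c1", 3 -> "c3")  // cluster 2 received no points
val sample = Array("s0")                           // K - temp.size replacement points
var c = -1
val newCentroids = (for (i <- 0 until K) yield {
  val point = temp.getOrElse(i, { c += 1; sample(c) })  // side effect runs only on a miss
  (i, point)
}).toMap
// newCentroids now has all K keys; key 2 maps to the sampled replacement "s0".
println(newCentroids.size)
```

Because the second argument of getOrElse is evaluated by name, sample(c) is only consumed for the missing keys, so a sample of size K - temp.size is exactly enough.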

There are other ways of dealing with this problem, and the one above may not be the best (is it a good idea to call takeSample multiple times, once per iteration of the k-means algorithm? what if data contains a lot of duplicate values? etc.), but it is a simple starting point.

By the way, you might want to think about how you could replace groupByKey with reduceByKey.
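The idea is that groupByKey ships every point of a cluster to one place before averaging, while the same mean can be computed incrementally by carrying a (sum, count) pair per key, e.g. closest.mapValues(p => (p, 1L)).reduceByKey(...) on the RDD. A sketch of that aggregation on local Scala collections (a local illustration of the semantics, not Spark code):

```scala
// Emulate the (sum, count) aggregation that reduceByKey would perform per key.
def addVec(a: Array[Double], b: Array[Double]): Array[Double] =
  a.zip(b).map { case (x, y) => x + y }

// Stand-in for the (clusterIndex, point) pairs of the 'closest' RDD.
val closest: Seq[(Int, Array[Double])] = Seq(
  0 -> Array(1.0, 2.0),
  0 -> Array(3.0, 4.0),
  1 -> Array(10.0, 0.0)
)

val newCentroids = closest
  .groupBy(_._1)
  .map { case (k, kv) =>
    val (sum, n) = kv.map(_._2).foldLeft((Array(0.0, 0.0), 0L)) {
      case ((s, c), v) => (addVec(s, v), c + 1L)   // combine, as reduceByKey would
    }
    k -> sum.map(_ / n)                            // centroid = sum / count
  }
println(newCentroids(0).toSeq)   // mean of (1,2) and (3,4)
```

On an RDD this avoids materializing whole groups: the pairwise combine runs map-side before the shuffle, which is the usual reason to prefer reduceByKey (or aggregateByKey) over groupByKey.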

Note: for the curious, here is a reference describing the similarities between the EM algorithm and the k-means algorithm: http://papers.nips.cc/paper/989-convergence-properties-of-the-k-means-algorithms.pdf

Regarding java - Map of bad elements, a similar question was found on Stack Overflow: https://stackoverflow.com/questions/35373478/
