java - Pure Java/Scala code for writing TensorFlow TFRecords data files

Tags: java scala apache-spark guava tensorflow

I am trying to write a pure Java/Scala implementation of TensorFlow's RecordWriter class, so that I can convert a Spark DataFrame into a TFRecords file. According to the documentation, each record in a TFRecords file has the following layout:

uint64 length
uint32 masked_crc32_of_length
byte   data[length]
uint32 masked_crc32_of_data

and each CRC is masked with:

masked_crc = ((crc >> 15) | (crc << 17)) + 0xa282ead8ul

At the moment I compute the CRC with the following code, based on Guava's CRC32C implementation:

import com.google.common.hash.Hashing

object CRC32 {
  val kMaskDelta = 0xa282ead8

  def hash(in: Array[Byte]): Int = {
    val hashing = Hashing.crc32c()
    hashing.hashBytes(in).asInt()
  }

  def mask(crc: Int): Int = {
    ((crc >> 15) | (crc << 17)) + kMaskDelta
  }
}

The rest of my code is as follows.

The data-encoding part is done with this snippet:

import java.io.ByteArrayOutputStream
import com.google.common.io.LittleEndianDataOutputStream

object LittleEndianEncoding {
  def encodeLong(in: Long): Array[Byte] = {
    val baos = new ByteArrayOutputStream()
    val out = new LittleEndianDataOutputStream(baos)
    out.writeLong(in)
    baos.toByteArray
  }

  def encodeInt(in: Int): Array[Byte] = {
    val baos = new ByteArrayOutputStream()
    val out = new LittleEndianDataOutputStream(baos)
    out.writeInt(in)
    baos.toByteArray
  }
}
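
For reference, the same little-endian encoding can also be done with the JDK's java.nio.ByteBuffer instead of Guava's stream wrapper; a minimal equivalent sketch (the object name LittleEndianNio is made up for illustration):

import java.nio.{ByteBuffer, ByteOrder}

object LittleEndianNio {
  // Equivalent to LittleEndianEncoding above, without the intermediate streams.
  def encodeLong(in: Long): Array[Byte] =
    ByteBuffer.allocate(8).order(ByteOrder.LITTLE_ENDIAN).putLong(in).array()

  def encodeInt(in: Int): Array[Byte] =
    ByteBuffer.allocate(4).order(ByteOrder.LITTLE_ENDIAN).putInt(in).array()
}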

The records are generated with protocol buffers:

import com.google.protobuf.ByteString
import org.tensorflow.example._

import collection.JavaConversions._
import collection.mutable._

object TFRecord {

  def int64Feature(in: Long): Feature = {

    val valueBuilder = Int64List.newBuilder()
    valueBuilder.addValue(in)

    Feature.newBuilder()
      .setInt64List(valueBuilder.build())
      .build()
  }
  def floatFeature(in: Float): Feature = {
    val valueBuilder = FloatList.newBuilder()
    valueBuilder.addValue(in)
    Feature.newBuilder()
      .setFloatList(valueBuilder.build())
      .build()
  }

  def floatVectorFeature(in: Array[Float]): Feature = {
    val valueBuilder = FloatList.newBuilder()
    in.foreach(valueBuilder.addValue)

    Feature.newBuilder()
      .setFloatList(valueBuilder.build())
      .build()
  }

  def bytesFeature(in: Array[Byte]): Feature = {
    val valueBuilder = BytesList.newBuilder()
    valueBuilder.addValue(ByteString.copyFrom(in))
    Feature.newBuilder()
      .setBytesList(valueBuilder.build())
      .build()
  }

  // putAllFeature expects a java.util.Map; the JavaConversions import
  // above supplies the implicit conversion from the Scala mutable HashMap.
  def makeFeatures(features: HashMap[String, Feature]): Features = {
    Features.newBuilder().putAllFeature(features).build()
  }

  def makeExample(features: Features): Example = {
    Example.newBuilder().setFeatures(features).build()
  }

}

Here is an example of how I combine all of the above to produce my TFRecords file:

import java.io.{File, FileOutputStream}
import scala.collection.mutable.HashMap

val label = TFRecord.int64Feature(1)
val feature = TFRecord.floatVectorFeature(Array[Float](1, 2, 3, 4))
val features = TFRecord.makeFeatures(HashMap[String, Feature]("feature" -> feature, "label" -> label))
val ex = TFRecord.makeExample(features)
val exSerialized = ex.toByteArray()
val length = LittleEndianEncoding.encodeLong(exSerialized.length)
val crcLength = LittleEndianEncoding.encodeInt(CRC32.mask(CRC32.hash(length)))
val crcEx = LittleEndianEncoding.encodeInt(CRC32.mask(CRC32.hash(exSerialized)))

val out = new FileOutputStream(new File("test.tfrecords"))
out.write(length)
out.write(crcLength)
out.write(exSerialized)
out.write(crcEx)
out.close()
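
For debugging, one can read the record back and recompute both CRCs; a sketch of such a check follows (TFRecordCheck is a hypothetical helper; note that it only verifies the file against this implementation's own hash/mask, so it cannot catch a mask that is wrong in the same way on both write and read):

import java.io.{DataInputStream, FileInputStream}
import java.nio.{ByteBuffer, ByteOrder}

object TFRecordCheck {
  def main(args: Array[String]): Unit = {
    val in = new DataInputStream(new FileInputStream("test.tfrecords"))

    // uint64 length, stored little-endian.
    val lengthBytes = new Array[Byte](8)
    in.readFully(lengthBytes)
    val length = ByteBuffer.wrap(lengthBytes).order(ByteOrder.LITTLE_ENDIAN).getLong.toInt

    // readInt is big-endian, so reverse bytes to recover the little-endian value.
    val storedLengthCrc = Integer.reverseBytes(in.readInt())
    println(s"length crc ok: ${CRC32.mask(CRC32.hash(lengthBytes)) == storedLengthCrc}")

    // byte data[length], followed by its masked CRC.
    val data = new Array[Byte](length)
    in.readFully(data)
    val storedDataCrc = Integer.reverseBytes(in.readInt())
    println(s"data crc ok: ${CRC32.mask(CRC32.hash(data)) == storedDataCrc}")

    in.close()
  }
}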

When I try to read the resulting file back into TensorFlow with a TFRecordReader, I get the following error:

W tensorflow/core/common_runtime/executor.cc:1076] 0x24cc430 Compute status: Data loss: corrupted record at 0

I suspect that either the CRC mask is computed incorrectly, or that the byte ordering of the file generated from Java differs from the one C++ produces.

Best Answer

FWIW, the TensorFlow team has provided utility code for reading/writing TFRecords, which can be found in the ecosystem repo.
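
As for the "corrupted record at 0" error itself: one real difference between the C++ reference formula and the Scala port above is the right shift in mask. In C++ the CRC is an unsigned 32-bit value, so crc >> 15 is a logical shift, whereas Scala's Int is signed and >> is an arithmetic shift that copies the sign bit. A minimal sketch of the masking using the unsigned shift operator >>>, assuming that is the culprit:

import com.google.common.hash.Hashing

object CRC32 {
  val kMaskDelta = 0xa282ead8

  def hash(in: Array[Byte]): Int =
    Hashing.crc32c().hashBytes(in).asInt()

  // '>>>' is the unsigned (logical) shift; '>>' would smear the sign bit
  // into the rotated value whenever the CRC has its top bit set.
  def mask(crc: Int): Int =
    ((crc >>> 15) | (crc << 17)) + kMaskDelta
}

The wrap-around of the final addition is harmless: Int arithmetic in Scala is modulo 2^32, which matches the C++ uint32 behavior.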

Regarding java - Pure Java/Scala code for writing TensorFlow TFRecords data files, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/34711264/
