hadoop - Mahout minhash: org.apache.hadoop.io.LongWritable cannot be cast to org.apache.hadoop.io.Text

Tags: hadoop mahout minhash

I am using:

hadoop-1.2.1和mahout-distribution-0.8

When I try to run the MinHash method with the following command:

$MAHOUT_HOME/bin/mahout org.apache.mahout.clustering.minhash.MinHashDriver -i tce-data/cv.vec -o tce-data/out/cv/minHashDriver/ -ow

I get this error:
tce@osy-Inspiron-N5110:~$ $MAHOUT_HOME/bin/mahout org.apache.mahout.clustering.minhash.MinHashDriver  -i  tce-data/cv.vec  -o tce-data/out/cv/minHashDriver/ -ow
Warning: $HADOOP_HOME is deprecated.

Running on hadoop, using /home/tce/app/hadoop-1.2.1/bin/hadoop and HADOOP_CONF_DIR=
MAHOUT-JOB: /home/tce/app/mahout-distribution-0.8/mahout-examples-0.8-job.jar
Warning: $HADOOP_HOME is deprecated.

13/09/10 18:17:46 WARN driver.MahoutDriver: No org.apache.mahout.clustering.minhash.MinHashDriver.props found on classpath, will use command-line arguments only
13/09/10 18:17:46 INFO common.AbstractJob: Command line arguments: {--endPhase=[2147483647], --hashType=[MURMUR], --input=[tce-data/cv.vec], --keyGroups=[2], --minClusterSize=[10], --minVectorSize=[5], --numHashFunctions=[10], --numReducers=[2], --output=[tce-data/out/cv/minHashDriver/], --overwrite=null, --startPhase=[0], --tempDir=[temp], --vectorDimensionToHash=[value]}
13/09/10 18:17:48 INFO input.FileInputFormat: Total input paths to process : 1
13/09/10 18:17:50 INFO mapred.JobClient: Running job: job_201309101645_0031
13/09/10 18:17:51 INFO mapred.JobClient:  map 0% reduce 0%
13/09/10 18:18:27 INFO mapred.JobClient: Task Id : attempt_201309101645_0031_m_000000_0, Status : FAILED
java.lang.ClassCastException: org.apache.hadoop.io.LongWritable cannot be cast to org.apache.hadoop.io.Text
    at org.apache.mahout.clustering.minhash.MinHashMapper.map(MinHashMapper.java:30)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:364)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)

I'd appreciate any ideas.

Best Answer

Cross-check a few things: your job.setOutputKeyClass, job.setOutputValueClass, job.setMapOutputKeyClass and job.setMapOutputValueClass should match the reducer key, reducer value, mapper key and mapper value classes, respectively.

Your stack trace shows that the mismatch is in the Mapper. Your MinHashMapper should extend Mapper<A, B, C, D>, where C and D are the same classes you pass to job.setMapOutputKeyClass(C) and job.setMapOutputValueClass(D). Likewise, A and B must match the key and value types that the configured InputFormat actually produces; here the input is being read with keys of type LongWritable, while MinHashMapper.map expects Text keys, hence the ClassCastException.
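To see why the error surfaces only when map() runs, here is a minimal sketch in plain Java (no Hadoop dependency; LongWritable and Text are stand-in stub classes, not the real Hadoop ones). Because Java generics are erased at runtime, the framework hands the mapper whatever key object the InputFormat produced, and the implicit cast inside map() is what fails:

```java
// Minimal sketch of the erasure-driven ClassCastException.
// LongWritable and Text below are simplified stand-ins for the
// org.apache.hadoop.io classes of the same names.
class CastDemo {
    static class LongWritable { final long v; LongWritable(long v) { this.v = v; } }
    static class Text { final String s; Text(String s) { this.s = s; } }

    // A "mapper" declared to consume Text keys, like MinHashMapper.
    // The framework can only pass Object, so the cast happens at runtime.
    static class TextKeyMapper {
        void map(Object rawKey) {
            Text key = (Text) rawKey;  // fails if the InputFormat produced LongWritable
            System.out.println("mapped key: " + key.s);
        }
    }

    public static void main(String[] args) {
        TextKeyMapper m = new TextKeyMapper();
        m.map(new Text("doc-1"));            // fine: key type matches the declaration
        try {
            m.map(new LongWritable(42L));    // byte-offset key, e.g. from TextInputFormat
        } catch (ClassCastException e) {
            System.out.println("ClassCastException: " + e.getMessage());
        }
    }
}
```

This mirrors the failing job: the mapper's declared key type and the key type the input path actually yields must agree, which is why the driver's set*Class calls and the InputFormat have to be checked together.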

Regarding "hadoop - Mahout minhash org.apache.hadoop.io.LongWritable cannot be cast to org.apache.hadoop.io.Text", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/18730808/
