I have implemented secondary sort on Hadoop, but I don't really understand how the framework behaves.
I created a composite key that contains the original key plus part of the value, which is used for sorting.
For this I implemented my own partitioner:
public class CustomPartitioner extends Partitioner<CoupleAsKey, LongWritable> {
    @Override
    public int getPartition(CoupleAsKey couple, LongWritable value, int numPartitions) {
        // Mask the sign bit so the partition number is never negative.
        return (Long.hashCode(couple.getKey1()) & Integer.MAX_VALUE) % numPartitions;
    }
}
and my own grouping comparator:
public class GroupComparator extends WritableComparator {
    protected GroupComparator() {
        super(CoupleAsKey.class, true);
    }

    @Override
    public int compare(WritableComparable w1, WritableComparable w2) {
        // Group on the first half of the composite key only.
        CoupleAsKey c1 = (CoupleAsKey) w1;
        CoupleAsKey c2 = (CoupleAsKey) w2;
        return Long.compare(c1.getKey1(), c2.getKey1());
    }
}
and I defined the couple class as follows:
public class CoupleAsKey implements WritableComparable<CoupleAsKey> {

    private long key1;
    private long key2;

    public CoupleAsKey() {
    }

    public CoupleAsKey(long key1, long key2) {
        this.key1 = key1;
        this.key2 = key2;
    }

    public long getKey1() {
        return key1;
    }

    public void setKey1(long key1) {
        this.key1 = key1;
    }

    public long getKey2() {
        return key2;
    }

    public void setKey2(long key2) {
        this.key2 = key2;
    }

    @Override
    public void write(DataOutput output) throws IOException {
        output.writeLong(key1);
        output.writeLong(key2);
    }

    @Override
    public void readFields(DataInput input) throws IOException {
        key1 = input.readLong();
        key2 = input.readLong();
    }

    @Override
    public int compareTo(CoupleAsKey o2) {
        // Sort by key1 first, then by key2 (the secondary sort).
        int cmp = Long.compare(key1, o2.getKey1());
        if (cmp != 0)
            return cmp;
        return Long.compare(key2, o2.getKey2());
    }

    @Override
    public String toString() {
        return key1 + "," + key2 + ",";
    }
}
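A side note, not part of the original question: since the job registers a custom partitioner, hashCode() is never consulted here, but Writable keys conventionally override hashCode()/equals() so they also behave correctly under the default HashPartitioner. A minimal sketch of what could be added to CoupleAsKey:

    @Override
    public int hashCode() {
        // Combine both halves; Long.hashCode avoids boxing.
        return 31 * Long.hashCode(key1) + Long.hashCode(key2);
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof CoupleAsKey)) return false;
        CoupleAsKey other = (CoupleAsKey) o;
        return key1 == other.key1 && key2 == other.key2;
    }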
And here is the driver:
Configuration conf = new Configuration();
Job job = Job.getInstance(conf); // the Job(conf) constructor is deprecated
job.setJarByClass(SSDriver.class);
job.setMapperClass(SSMapper.class);
job.setReducerClass(SSReducer.class);
job.setMapOutputKeyClass(CoupleAsKey.class);
job.setMapOutputValueClass(LongWritable.class);
job.setPartitionerClass(CustomPartitioner.class);
job.setGroupingComparatorClass(GroupComparator.class);
FileInputFormat.addInputPath(job, new Path("/home/marko/WORK/Whirlpool/input.csv"));
FileOutputFormat.setOutputPath(job, new Path("/home/marko/WORK/Whirlpool/output"));
job.waitForCompletion(true);
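For completeness (this class is illustrative and not part of the original post): the sort order inside each partition already comes from CoupleAsKey.compareTo() via the default comparator, so the driver sets no explicit sort comparator. Spelled out as its own comparator, registered with job.setSortComparatorClass(SortComparator.class), it would look like this:

public class SortComparator extends WritableComparator {
    protected SortComparator() {
        super(CoupleAsKey.class, true);
    }

    @Override
    public int compare(WritableComparable w1, WritableComparable w2) {
        // Full composite ordering: key1 first, then key2.
        return ((CoupleAsKey) w1).compareTo((CoupleAsKey) w2);
    }
}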
Now, this works, but here is the really strange part: when iterating over one key in the reducer, the second part of the key (the value part) changes on every iteration. Why, and how?
@Override
protected void reduce(CoupleAsKey key, Iterable<LongWritable> values, Context context) throws IOException, InterruptedException {
    for (LongWritable value : values) {
        // key.key2 changes during iterations, why?
        context.write(key, value);
    }
}
Best answer
By definition, "if you want all rows that are related within a data partition to be sent to a single reducer, you must implement a grouping comparator." This only guarantees that those keys are dispatched to one single reduce() call; it does not rewrite the composite key into just the part that the grouping is done on.
However, as you iterate over the values, the corresponding key changes as well. The framework reuses a single key instance and, on each step of the iterator, deserializes the next record's full composite key into it. We normally never observe this happening, because by default records are grouped on the whole (non-composite) key, so even as the value changes, the contents of the key stay the same.
You can check this in the reducer yourself: print the key on each iteration and you will see key2 change along with the value, while printing System.identityHashCode(key) shows it is the very same object being overwritten in place on every iteration.
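A practical consequence: if you need to hold on to a particular composite key beyond one step of the iteration, copy it instead of storing the reference. A minimal sketch using the question's classes (WritableUtils.clone is one standard way to deep-copy a Writable):

@Override
protected void reduce(CoupleAsKey key, Iterable<LongWritable> values, Context context)
        throws IOException, InterruptedException {
    CoupleAsKey first = null;
    for (LongWritable value : values) {
        if (first == null) {
            // Deep-copy the current key; the 'key' instance itself is
            // reused and overwritten by the framework on every iteration.
            first = WritableUtils.clone(key, context.getConfiguration());
        }
        context.write(key, value);
    }
    // 'first' still holds the composite key of the group's first record.
}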
Alternatively, you can try the experiment of applying a grouping comparator to plain IntWritable keys (you would have to write the logic for that yourself), so that, say, keys 1 and 2 form one group and keys 3 and 4 another:
Group 1:
1 a
1 b
2 c
Group 2:
3 c
3 d
4 a
You will see that with each iteration over the values, the key changes as well (from 1 to 2 within the first group, and from 3 to 4 within the second).
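For illustration only (none of this is in the original post): a grouping comparator producing exactly those groups could map each IntWritable key to a bucket, e.g. (k - 1) / 2, so that 1 and 2 compare equal, as do 3 and 4:

public class PairGroupComparator extends WritableComparator {
    protected PairGroupComparator() {
        super(IntWritable.class, true);
    }

    @Override
    public int compare(WritableComparable a, WritableComparable b) {
        // Keys 1,2 -> bucket 0; keys 3,4 -> bucket 1.
        int ka = ((IntWritable) a).get();
        int kb = ((IntWritable) b).get();
        return Integer.compare((ka - 1) / 2, (kb - 1) / 2);
    }
}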
Regarding "hadoop - part of the key changes while iterating over values when using a composite key", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/30079822/