Java Hadoop MapReduce with multiple values

Tags: java hadoop mapreduce

I want to build a movie recommendation system and have been following this site: LinkHere

def count_ratings_users_freq(self, user_id, values):
    """
    For each user, emit a row containing their "postings"
    (item, rating pairs).
    Also emit the user's rating sum and count for use in later steps.

    output:
    userid, number of movies rated by the user, sum of ratings, (movieid, movie rating)

    17    1,3,(70,3)
    35    1,1,(21,1)
    49    3,7,(19,2 21,1 70,4)
    87    2,3,(19,1 21,2)
    98    1,2,(19,2)
    """
    item_count = 0
    item_sum = 0
    final = []
    for item_id, rating in values:
        item_count += 1
        item_sum += rating
        final.append((item_id, rating))

    yield user_id, (item_count, item_sum, final)

Is it possible to convert the above code to Java using Hadoop Map and Reduce, with userid as the key and (number of movies rated by the user, sum of ratings, (movieid, movie rating)) as the value? Thanks!

Best Answer

Yes, you can convert this into a MapReduce program.

Mapper logic:

  1. Assuming the input is in the format (user ID, movie ID, movie rating) (e.g. 17,70,3), you split each line on the comma (,) and emit the "user ID" as the key and (movie ID, movie rating) as the value. For example, for the record (17,70,3), you emit key (17) and value (70,3).

Reducer logic:

  1. You keep 3 variables: movieCount (integer), movieRatingCount (integer), and movieValues (string).
  2. For each value, you parse it to get the "movie rating". For example, for the value (70,3), you parse out movie rating = 3.
  3. For each valid record, you increment movieCount, add the parsed "movie rating" to movieRatingCount, and append the value to the movieValues string (a worked example follows this list).
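
For example, for key 49 the reducer receives the values (70,4), (21,1), and (19,2), in no guaranteed order. After the loop, movieCount = 3, movieRatingCount = 4 + 1 + 2 = 7, and movieValues = "70,4 21,1 19,2", so the reducer writes 49 → 3,7,(70,4 21,1 19,2).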

This gives you the desired output.

Here is code that implements this:

package com.myorg.hadooptests;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;

public class MovieRatings {


    public static class MovieRatingsMapper
            extends Mapper<LongWritable, Text , IntWritable, Text>{

        public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {

            String valueStr = value.toString();
            int index = valueStr.indexOf(',');

            if(index != -1) {
                try
                {
                    // Split "userID,movieID,rating" at the first comma and emit
                    // the user ID as the key and "movieID,rating" as the value.
                    IntWritable keyUserID = new IntWritable(Integer.parseInt(valueStr.substring(0, index)));
                    context.write(keyUserID, new Text(valueStr.substring(index + 1)));
                }
                catch(Exception e)
                {
                    // Skip malformed records (e.g. a NumberFormatException when
                    // the user ID is not a valid integer).
                }
            }
        }
    }

    public static class MovieRatingsReducer
            extends Reducer<IntWritable, Text, IntWritable, Text> {

        public void reduce(IntWritable key, Iterable<Text> values,
                           Context context) throws IOException, InterruptedException {

            int movieCount = 0;        // number of movies rated by this user
            int movieRatingCount = 0;  // sum of this user's ratings
            String movieValues = "";   // space-separated "movieID,rating" pairs

            for (Text value : values) {
                String[] tokens = value.toString().split(",");
                if(tokens.length == 2)
                {
                    movieRatingCount += Integer.parseInt(tokens[1].trim()); // may throw NumberFormatException on bad input
                    movieCount++;
                    movieValues = movieValues.concat(value.toString() + " ");
                }
            }

            // Emit: userID <TAB> movieCount,ratingSum,(movieID,rating pairs)
            context.write(key, new Text(movieCount + "," + movieRatingCount + ",(" + movieValues.trim() + ")"));
        }
    }

    public static void main(String[] args) throws Exception {

        Configuration conf = new Configuration();

        Job job = Job.getInstance(conf, "MovieRatings");
        job.setJarByClass(MovieRatings.class);
        job.setMapperClass(MovieRatingsMapper.class);
        job.setReducerClass(MovieRatingsReducer.class);

        // Map output and final output types match, so setting the
        // output classes once covers both the map and reduce stages.
        job.setOutputKeyClass(IntWritable.class);
        job.setOutputValueClass(Text.class);

        // Input and output paths are hardcoded; adjust them to your HDFS layout.
        FileInputFormat.addInputPath(job, new Path("/in/in2.txt"));
        FileOutputFormat.setOutputPath(job, new Path("/out/"));

        System.exit(job.waitForCompletion(true) ? 0 : 1);

    }
}
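
To try it out, compile the class against the Hadoop jars, package it, and run the job on the cluster. Below is a minimal sketch, assuming a Hadoop 2.x installation with the hadoop and hdfs commands on the PATH; the jar name hadooptests.jar and the local input file name in2.txt are illustrative, not part of the original answer:

javac -classpath "$(hadoop classpath)" -d classes MovieRatings.java
jar cf hadooptests.jar -C classes .

hdfs dfs -mkdir -p /in
hdfs dfs -put in2.txt /in/in2.txt
hadoop jar hadooptests.jar com.myorg.hadooptests.MovieRatings

hdfs dfs -cat /out/part-r-00000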

For the input:

17,70,3
35,21,1
49,19,2
49,21,1
49,70,4
87,19,1
87,21,2
98,19,2

I got the following output (note that the order of the (movieid,rating) pairs inside the parentheses can differ from the Python example, since Hadoop does not guarantee the order in which values arrive at a reducer):

17      1,3,(70,3)
35      1,1,(21,1)
49      3,7,(70,4 21,1 19,2)
87      2,3,(21,2 19,1)
98      1,2,(19,2)

Regarding Java Hadoop MapReduce with multiple values, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/34263288/
