hadoop - Number of map and reduce tasks in pseudo-distributed mode

Tags: hadoop, mapreduce

I am new to Hadoop. I have successfully configured a Hadoop setup in pseudo-distributed mode. Now I would like to know the logic behind choosing the number of map and reduce tasks. What do we base that on?

Thanks

Best Answer

You cannot generalize how many mappers/reducers should be set.

Number of mappers: You cannot explicitly set the number of mappers to a certain value (there is a parameter for it, but it does not take effect). It depends on the number of input splits Hadoop creates for the given input set. You can control this by setting the mapred.min.split.size parameter. For more information, read the InputSplit section here. If a large number of mappers are being spawned because of a large number of small files, and you want to reduce the mapper count, then you will need to combine data from multiple files. Read this: How to combine input files to get to a single mapper and control number of mappers.
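For illustration only (this snippet is not from the original answer; the 256 MB value and the class name are arbitrary), raising the minimum split size in a driver built on the old mapred API might look like this:

```java
import org.apache.hadoop.mapred.JobConf;

public class SplitSizeExample {
    public static void main(String[] args) {
        JobConf conf = new JobConf(SplitSizeExample.class);
        // Request input splits of at least 256 MB (example value);
        // larger splits mean fewer map tasks for the same input.
        conf.setLong("mapred.min.split.size", 256L * 1024 * 1024);
    }
}
```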

Quoting the wiki page:

The number of maps is usually driven by the number of DFS blocks in the input files. Although that causes people to adjust their DFS block size to adjust the number of maps. The right level of parallelism for maps seems to be around 10-100 maps/node, although we have taken it up to 300 or so for very cpu-light map tasks. Task setup takes awhile, so it is best if the maps take at least a minute to execute.

Actually controlling the number of maps is subtle. The mapred.map.tasks parameter is just a hint to the InputFormat for the number of maps. The default InputFormat behavior is to split the total number of bytes into the right number of fragments. However, in the default case the DFS block size of the input files is treated as an upper bound for input splits. A lower bound on the split size can be set via mapred.min.split.size. Thus, if you expect 10TB of input data and have 128MB DFS blocks, you'll end up with 82k maps, unless your mapred.map.tasks is even larger. Ultimately the InputFormat determines the number of maps.

The number of map tasks can also be increased manually using the JobConf's conf.setNumMapTasks(int num). This can be used to increase the number of map tasks, but will not set the number below that which Hadoop determines via splitting the input data.
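To make the quoted example concrete, here is a sketch (my own, reusing the wiki's 10 TB / 128 MB figures) of the arithmetic and of conf.setNumMapTasks, which acts only as a hint:

```java
import org.apache.hadoop.mapred.JobConf;

public class MapCountHint {
    public static void main(String[] args) {
        // Wiki example: 10 TB of input split against 128 MB DFS blocks.
        long inputBytes = 10L * 1024 * 1024 * 1024 * 1024; // 10 TB
        long blockBytes = 128L * 1024 * 1024;               // 128 MB
        long expectedMaps = inputBytes / blockBytes;        // 81,920, i.e. "82k maps"
        System.out.println("Expected map tasks: " + expectedMaps);

        JobConf conf = new JobConf(MapCountHint.class);
        // A hint to the InputFormat only; Hadoop will not go below
        // the number of splits it derives from the input data.
        conf.setNumMapTasks((int) expectedMaps);
    }
}
```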

Number of reducers: You can explicitly set the number of reducers. Just set the parameter mapred.reduce.tasks. There are guidelines for setting this number, but usually the default number of reducers should be good enough. At times a single report file is required; in those cases you may want to set the number of reducers to 1.
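For instance (a minimal sketch, not part of the original answer), producing a single report file by forcing one reducer:

```java
import org.apache.hadoop.mapred.JobConf;

public class SingleReducerExample {
    public static void main(String[] args) {
        JobConf conf = new JobConf(SingleReducerExample.class);
        // Roughly the same effect as passing -D mapred.reduce.tasks=1:
        // all map output goes to one reducer, yielding a single part file.
        conf.setNumReduceTasks(1);
    }
}
```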

Quoting the wiki again:

The right number of reduces seems to be 0.95 or 1.75 * (nodes * mapred.tasktracker.tasks.maximum). At 0.95 all of the reduces can launch immediately and start transferring map outputs as the maps finish. At 1.75 the faster nodes will finish their first round of reduces and launch a second round of reduces doing a much better job of load balancing.

Currently the number of reduces is limited to roughly 1000 by the buffer size for the output files (io.buffer.size * 2 * numReduces << heapSize). This will be fixed at some point, but until it is it provides a pretty firm upper bound.

The number of reduces also controls the number of output files in the output directory, but usually that is not important because the next map/reduce step will split them into even smaller splits for the maps.

The number of reduce tasks can also be increased in the same way as the map tasks, via JobConf's conf.setNumReduceTasks(int num).
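As a sketch of the 0.95 heuristic from the quote (the 10-node / 2-slot cluster here is an assumed example, not from the original post):

```java
import org.apache.hadoop.mapred.JobConf;

public class ReduceCountHeuristic {
    public static void main(String[] args) {
        int nodes = 10;             // assumed cluster size
        int reduceSlotsPerNode = 2; // the tasktracker's per-node reduce task maximum, per the quote
        int numReduces = (int) (0.95 * nodes * reduceSlotsPerNode); // 19

        JobConf conf = new JobConf(ReduceCountHeuristic.class);
        conf.setNumReduceTasks(numReduces);
        System.out.println("Reduce tasks: " + numReduces);
    }
}
```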

A similar question about the number of map and reduce tasks in pseudo-distributed Hadoop can be found on Stack Overflow: https://stackoverflow.com/questions/16414664/
