I'm trying to run a Python MapReduce wordcount program, taken from "Writing a Hadoop MapReduce Program in Python", just to understand how it works, but the job never succeeds.
I'm on the Cloudera VM, running mapper.py and reducer.py through this streaming jar:
/usr/lib/hadoop-0.20-mapreduce/contrib/streaming/hadoop-streaming-2.6.0-mr1-cdh5.12.0.jar
The command I execute (note: `mapred.reduce.tasks` is the actual Hadoop property name; the original post had it garbled as `-Dmaperd.reduce, tasks=1`):

hadoop jar /usr/lib/hadoop-0.20-mapreduce/contrib/streaming/hadoop-streaming-2.6.0-mr1-cdh5.12.0.jar \
  -D mapred.reduce.tasks=1 \
  -file wordcount/mapper.py \
  -mapper mapper.py \
  -file wordcount/reducer.py \
  -reducer reducer.py \
  -input myinput/test.txt \
  -output output
Best answer
The problem is in the paths: mapper.py and reducer.py must be given as local filesystem paths, but the input file must be an HDFS path.
First, test the Python code locally:
cat <input file> | python <local path>/mapper.py | sort | python <local path>/reducer.py
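The question doesn't show mapper.py and reducer.py themselves; a minimal sketch of the usual streaming word-count logic they implement (function names here are illustrative, not the tutorial's exact code) that you can sanity-check before touching Hadoop:

```python
def mapper(lines):
    # mapper.py stage: emit one "word\t1" pair per word, as it would
    # write to stdout under Hadoop Streaming.
    for line in lines:
        for word in line.strip().split():
            yield "%s\t1" % word

def reducer(pairs):
    # reducer.py stage: sum counts per word. Hadoop sorts the mapper
    # output by key before the reducer sees it, so equal words are adjacent.
    current_word, current_count = None, 0
    for pair in pairs:
        word, count = pair.rsplit("\t", 1)
        if word == current_word:
            current_count += int(count)
        else:
            if current_word is not None:
                yield "%s\t%d" % (current_word, current_count)
            current_word, current_count = word, int(count)
    if current_word is not None:
        yield "%s\t%d" % (current_word, current_count)

# Simulate the local pipeline: cat input | mapper | sort | reducer
sample = ["hello world", "hello hadoop"]
print(list(reducer(sorted(mapper(sample)))))  # ['hadoop\t1', 'hello\t2', 'world\t1']
```

If this local pipeline already misbehaves, the problem is in the Python code, not in the Hadoop invocation.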
Then run it against HDFS:

hadoop jar /usr/lib/hadoop-0.20-mapreduce/contrib/streaming/hadoop-streaming-2.6.0-mr1-cdh5.12.0.jar \
  -D mapred.reduce.tasks=1 \
  -file <local path>/mapper.py \
  -mapper "python <local path>/mapper.py" \
  -file <local path>/reducer.py \
  -reducer "python <local path>/reducer.py" \
  -input <hdfs path>/myinput/test.txt \
  -output output
Regarding "python - Hadoop MapReduce Wordcount python execution error", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/47042703/