I'm currently taking a big data course, and one of my projects is to run a mapper/reducer on a Hadoop cluster I set up locally.
I've been using Python with the mrjob library for the class.
Here is my current mapper/reducer Python code:
from mrjob.job import MRJob
from mrjob.step import MRStep
import re
import os

WORD_RE = re.compile(r"[\w']+")


class MRPrepositionsFinder(MRJob):

    def steps(self):
        return [
            MRStep(mapper=self.mapper_get_words),
            MRStep(reducer=self.reducer_find_prep_word)
        ]

    def mapper_get_words(self, _, line):
        # load the indicator words, lowercased and stripped of whitespace
        word_list = set(w.lower().strip() for w in open("/hdfs/user/user/indicators.txt"))
        # the name of the file this line came from
        file_name = os.environ['map_input_file']
        # iterate through each word in the line
        for word in WORD_RE.findall(line):
            # if the word is an indicator, yield the file name as the key
            if word.lower() in word_list:
                choice = file_name.split('/')[5]
                yield (choice, 1)

    def reducer_find_prep_word(self, choice, counts):
        # each input is (choice, count), so summing the counts
        # gives the total number of hits per file
        yield (choice, sum(counts))


if __name__ == '__main__':
    MRPrepositionsFinder.run()
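As an aside, the mapper above re-reads indicators.txt once per input line, which gets expensive on large inputs. The matching logic can be factored into small standalone functions; this is just a sketch, and load_indicators and count_indicator_hits are hypothetical names, not part of the original job:

```python
import re

WORD_RE = re.compile(r"[\w']+")


def load_indicators(path):
    # Read the indicator words once, lowercased and stripped of whitespace.
    with open(path) as f:
        return {w.strip().lower() for w in f if w.strip()}


def count_indicator_hits(line, indicators):
    # Count how many words in the line appear in the indicator set.
    return sum(1 for word in WORD_RE.findall(line) if word.lower() in indicators)
```

With mrjob, the one-time load could go in a mapper_init method (passed to MRStep alongside the mapper), so the file is read once per task instead of once per line.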
When I try to run the code on the Hadoop cluster, I use the following command:
python hrc_discover.py /hdfs/user/user/HRCmail/* -r hadoop --hadoop-bin /usr/bin/hadoop > /hdfs/user/user/output
Unfortunately, every time I run it I get the following error:
No configs found; falling back on auto-configuration
STDERR: Error: JAVA_HOME is not set and could not be found.
Traceback (most recent call last):
  File "hrc_discover.py", line 37, in <module>
    MRPrepositionsFinder.run()
  File "/usr/lib/python3.5/site-packages/mrjob-0.6.0.dev0-py3.5.egg/mrjob/job.py", line 432, in run
    mr_job.execute()
  File "/usr/lib/python3.5/site-packages/mrjob-0.6.0.dev0-py3.5.egg/mrjob/job.py", line 453, in execute
    super(MRJob, self).execute()
  File "/usr/lib/python3.5/site-packages/mrjob-0.6.0.dev0-py3.5.egg/mrjob/launch.py", line 161, in execute
    self.run_job()
  File "/usr/lib/python3.5/site-packages/mrjob-0.6.0.dev0-py3.5.egg/mrjob/launch.py", line 231, in run_job
    runner.run()
  File "/usr/lib/python3.5/site-packages/mrjob-0.6.0.dev0-py3.5.egg/mrjob/runner.py", line 437, in run
    self._run()
  File "/usr/lib/python3.5/site-packages/mrjob-0.6.0.dev0-py3.5.egg/mrjob/hadoop.py", line 346, in _run
    self._find_binaries_and_jars()
  File "/usr/lib/python3.5/site-packages/mrjob-0.6.0.dev0-py3.5.egg/mrjob/hadoop.py", line 361, in _find_binaries_and_jars
    self.get_hadoop_version()
  File "/usr/lib/python3.5/site-packages/mrjob-0.6.0.dev0-py3.5.egg/mrjob/hadoop.py", line 198, in get_hadoop_version
    return self.fs.get_hadoop_version()
  File "/usr/lib/python3.5/site-packages/mrjob-0.6.0.dev0-py3.5.egg/mrjob/fs/hadoop.py", line 117, in get_hadoop_version
    stdout = self.invoke_hadoop(['version'], return_stdout=True)
  File "/usr/lib/python3.5/site-packages/mrjob-0.6.0.dev0-py3.5.egg/mrjob/fs/hadoop.py", line 172, in invoke_hadoop
    raise CalledProcessError(proc.returncode, args)
subprocess.CalledProcessError: Command '['/usr/bin/hadoop', 'version']' returned non-zero exit status 1
I've looked around the internet and found that I need to export the JAVA_HOME variable, but I don't want to set anything that might break my setup.
Any help would be much appreciated, thanks!
Best answer
It turned out the problem was in the etc/hadoop/hadoop-env.sh script file.
There, the JAVA_HOME environment variable was configured as:
export JAVA_HOME=$(JAVA_HOME)
In a shell script, $(JAVA_HOME) is command substitution (it tries to run a command named JAVA_HOME), so the variable ends up empty. I went ahead and changed it to the following:
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk
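Rather than hard-coding a path, JAVA_HOME can also be derived from wherever java actually lives. This is only a sketch: the sample path below is an assumption, so substitute the output of `readlink -f "$(command -v java)"` from your own machine.

```shell
# Sample path is illustrative; on your machine, resolve it with:
#   readlink -f "$(command -v java)"
java_bin=/usr/lib/jvm/java-8-openjdk/bin/java

# Strip the trailing /bin/java to get the JDK root directory
JAVA_HOME=${java_bin%/bin/java}
export JAVA_HOME
echo "$JAVA_HOME"   # prints /usr/lib/jvm/java-8-openjdk
```

This keeps hadoop-env.sh working even if the JDK package is upgraded to a different path, as long as the resolution step is re-run.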
I tried running the following command again, hoping it would work:
python hrc_discover.py /hdfs/user/user/HRCmail/* -r hadoop --hadoop-bin /usr/bin/hadoop > /hdfs/user/user/output
Thankfully, mrjob picked up the JAVA_HOME environment variable and gave the following output:
No configs found; falling back on auto-configuration
Using Hadoop version 2.7.3
Looking for Hadoop streaming jar in /home/hadoop/contrib...
Looking for Hadoop streaming jar in /usr/lib/hadoop-mapreduce...
Hadoop streaming jar not found. Use --hadoop-streaming-jar
Creating temp directory /tmp/hrc_discover.user.20170306.022649.449218
Copying local files to hdfs:///user/user/tmp/mrjob/hrc_discover.user.20170306.022649.449218/files/...
..
To fix the Hadoop streaming jar issue, I added the following switch to the command:
--hadoop-streaming-jar /usr/lib/hadoop/share/hadoop/tools/lib/hadoop-streaming-2.7.3.jar
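The jar's exact location varies between distributions and install methods. One way to locate it is a filesystem search; this is a sketch that assumes the install lives under /usr/lib/hadoop, so adjust the search root to match your layout.

```shell
# Search the install tree for the streaming jar. The default root below
# is an assumption - override HADOOP_ROOT to match your installation.
HADOOP_ROOT=${HADOOP_ROOT:-/usr/lib/hadoop}
find "$HADOOP_ROOT" -name 'hadoop-streaming-*.jar' 2>/dev/null || true
```

Whatever path this prints is what goes after --hadoop-streaming-jar.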
The full command looked like this:
python hrc_discover.py /hdfs/user/user/HRCmail/* -r hadoop --hadoop-streaming-jar /usr/lib/hadoop/share/hadoop/tools/lib/hadoop-streaming-2.7.3.jar --hadoop-bin /usr/bin/hadoop > /hdfs/user/user/output
which produced the following result:
No configs found; falling back on auto-configuration
Using Hadoop version 2.7.3
Creating temp directory /tmp/hrc_discover.user.20170306.022649.449218
Copying local files to hdfs:///user/user/tmp/mrjob/hrc_discover.user.20170306.022649.449218/files/...
It looked like the issue was resolved and Hadoop was processing my job.
A similar question about "python - How to run MRJob in a local Hadoop cluster with Hadoop Streaming?" can be found on Stack Overflow: https://stackoverflow.com/questions/42615934/