I'm very new to the Hadoop platform and to defining MapReduce functions, and I'm having trouble understanding why the mapper in my MapReduce script isn't working. I'm trying to parse a collection of pages stored as strings in a .txt file, where each "line" represents one <page>...</page> element. What is wrong with this script? Thanks for your help!
from mrjob.job import MRJob
from mrjob.step import MRStep
from mrjob.compat import jobconf_from_env
from lxml import etree
import mwparserfromhell
import heapq
import re
class MRParser(MRJob):

    def mapper(self, _, line):
        bigString = ''.join(re.findall(r'(<text xml:space="preserve">.*</text>)', line))
        root = etree.fromstring(bigString.decode('utf-8'))
        if not (bigString == ''):
            bigString = etree.tostring(root, method='text', encoding='UTF-8')
            wikicode = mwparserfromhell.parse(bigString)
            bigString = wikicode.strip_code()
            yield None, bigString

    def steps(self):
        return [
            MRStep(mapper=self.mapper)
        ]
Best Answer

You're missing a reducer. You need to pass each line from the mapper to the reducer as the "key" (with no value). Try this:
    def mapper(self, _, line):
        bigString = ''.join(re.findall(r'(<text xml:space="preserve">.*</text>)', line))
        # Only parse when the regex actually matched something;
        # etree.fromstring raises on an empty string.
        if bigString != '':
            root = etree.fromstring(bigString.decode('utf-8'))
            bigString = etree.tostring(root, method='text', encoding='UTF-8')
            wikicode = mwparserfromhell.parse(bigString)
            bigString = wikicode.strip_code()
            yield bigString, None

    def reducer(self, key, values):
        yield key, None

    def steps(self):
        return [
            MRStep(mapper=self.mapper, reducer=self.reducer)
        ]
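To see what the mapper is doing to each line independently of Hadoop, here is a minimal, mrjob-free sketch of the same extraction step. It uses only the standard library: xml.etree.ElementTree stands in for lxml, and a crude regex stands in for mwparserfromhell's strip_code(). The sample line and the markup-stripping rule are illustrative assumptions, not part of the original script.

```python
import re
import xml.etree.ElementTree as ET

def extract_plain_text(line):
    # Pull out the <text xml:space="preserve">...</text> span, if present.
    match = re.search(r'<text xml:space="preserve">.*</text>', line)
    if match is None:
        return None
    # Parse the XML fragment and flatten it to its text content.
    root = ET.fromstring(match.group(0))
    raw = ''.join(root.itertext())
    # Very rough stand-in for mwparserfromhell.strip_code():
    # drop [[...]] link brackets and '''bold'''/''italic'' quote runs.
    raw = re.sub(r"\[\[|\]\]|'{2,}", '', raw)
    return raw.strip()

sample = '<page><text xml:space="preserve">Hello [[World]]!</text></page>'
print(extract_plain_text(sample))  # -> Hello World!
print(extract_plain_text('no wiki markup on this line'))  # -> None
```

Running this on a few real lines from the input file is a quick way to confirm whether the regex matches at all, which is worth checking before blaming the MapReduce plumbing.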
Regarding "python - Parsing HTML .txt files in Hadoop with MapReduce using Python", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/43691302/