hadoop - Unable to run EMR Hadoop Streaming job with custom executables

Tags: hadoop amazon-web-services hadoop-streaming amazon-emr emr

Edit:

Looking through the namenode logs, I noticed that an exception is raised periodically. Could it be related?

2013-04-10 19:23:50,613 WARN org.apache.hadoop.security.ShellBasedUnixGroupsMapping (IPC Server handler 43 on 9000): got exception trying to get groups for user job_201304101854_0005
org.apache.hadoop.util.Shell$ExitCodeException: id: job_201304101854_0005: No such user

    at org.apache.hadoop.util.Shell.runCommand(Shell.java:255)
    at org.apache.hadoop.util.Shell.run(Shell.java:182)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:375)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:461)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:444)
    at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:78)
    at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:53)
    at org.apache.hadoop.security.Groups.getGroups(Groups.java:79)
    at org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1037)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.<init>(FSPermissionChecker.java:50)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5218)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkTraverse(FSNamesystem.java:5201)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:2030)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.getFileInfo(NameNode.java:850)
    at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:573)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1132)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
2013-04-10 19:23:50,614 INFO org.apache.hadoop.security.ShellBasedUnixGroupsMapping (IPC Server handler 43 on 9000): add job_201304101854_0005 to shell userGroupsCache
2013-04-10 19:23:50,614 WARN org.apache.hadoop.security.UserGroupInformation (IPC Server handler 43 on 9000): No groups available for user job_201304101854_0005
2013-04-10 19:23:55,886 WARN org.apache.hadoop.security.UserGroupInformation (IPC Server handler 46 on 9000): No groups available for user job_201304101854_0005

We have built custom binaries to perform the map and reduce steps, and we have verified that they operate correctly using the common-sense "cat file | map | sort | reduce > output" pattern. We made a point of compiling the binaries statically to pull in as many dependencies as possible, and we also confirmed that the binaries run on Amazon's EMR AMI by uploading them to the master node by hand. In case it is relevant, our language of choice is Haskell, and the compilation result is a plain native binary executable.
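For illustration only (this is not the asker's actual code), a single Haskell binary that dispatches on a map/reduce argument, as the "Program map" / "Program reduce" invocations below imply, might look roughly like this minimal word-count sketch:

    module Main where

    import Data.Function (on)
    import Data.List (groupBy)
    import System.Environment (getArgs)
    import qualified Data.ByteString.Char8 as B

    -- Dispatch on the first argument, mirroring "Program map" / "Program reduce".
    main :: IO ()
    main = do
      args <- getArgs
      case args of
        ["map"]    -> B.interact mapStep
        ["reduce"] -> B.interact reduceStep
        _          -> error "usage: Program (map|reduce)"

    -- Map step: emit "word<TAB>1" for every whitespace-separated word on stdin.
    mapStep :: B.ByteString -> B.ByteString
    mapStep = B.unlines . map (`B.append` B.pack "\t1") . B.words

    -- Reduce step: streaming hands the reducer its input sorted by key,
    -- so adjacent lines sharing a key can be grouped and their counts summed.
    reduceStep :: B.ByteString -> B.ByteString
    reduceStep =
      B.unlines . map combine . groupBy ((==) `on` fst) . map parse . B.lines
      where
        parse line =
          let (k, v) = B.break (== '\t') line
          in  (k, maybe 0 fst (B.readInt (B.drop 1 v)))
        combine grp =
          B.concat [fst (head grp), B.pack "\t", B.pack (show (sum (map snd grp)))]

Static linking, as described above, would then be a matter of building with something like "ghc -O2 -optl-static" (the exact flags depend on the GHC version and platform).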

Take the simplest possible case:

bin/hadoop jar contrib/streaming/hadoop-streaming.jar \
    -input s3n://path/to/input \
    -output s3n://path/to/output \
    -mapper "s3n://path/to/Program map" \
    -reducer "s3n://path/to/Program reduce" 

The job does indeed start, but then it sits at map 0% and never moves. It makes no progress beyond that point, and none of the logs seem to indicate anything useful. Each map task eventually gets killed for "failing to report" within 600 seconds. While showing 0% complete, each mapper displays something like the following as its status:

s3n://path/to/file.csv.gz:0+38175575

The Counters section shows roughly 17.5 KB read from s3n.
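(An aside on the 600-second figure: that is Hadoop's mapred.task.timeout, which defaults to 600000 ms. Raising it when submitting the job, as sketched below, does not fix the underlying hang, it only buys the task more time before being killed, but it can be handy while inspecting a stuck task. Note that the generic -D option must precede the streaming options:)

    bin/hadoop jar contrib/streaming/hadoop-streaming.jar \
        -D mapred.task.timeout=1800000 \
        -input s3n://path/to/input \
        -output s3n://path/to/output \
        -mapper "s3n://path/to/Program map" \
        -reducer "s3n://path/to/Program reduce"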

If we now modify the job to the following for testing:

bin/hadoop jar contrib/streaming/hadoop-streaming.jar \
    -input s3n://path/to/input \
    -output s3n://path/to/output \
    -mapper s3n://elasticmapreduce/samples/wordcount/wordSplitter.py \
    -reducer aggregate

then the mapper stage completes at 100%, but the reducer throws an exception:

java.io.IOException: exception in uploadSinglePart
    at org.apache.hadoop.fs.s3native.MultipartUploadOutputStream.uploadSinglePart(MultipartUploadOutputStream.java:163)
    at org.apache.hadoop.fs.s3native.MultipartUploadOutputStream.close(MultipartUploadOutputStream.java:219)
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:70)
    at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:96)
    at org.apache.hadoop.mapred.TextOutputFormat$LineRecordWriter.close(TextOutputFormat.java:109)
    at org.apache.hadoop.mapred.ReduceTask$OldTrackingRecordWriter.close(ReduceTask.java:475)
    at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:539)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:429)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1132)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: java.lang.RuntimeException: exception in putObject
    at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.storeFile(Jets3tNativeFileSystemStore.java:128)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:83)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    at org.apache.hadoop.fs.s3native.$Proxy3.storeFile(Unknown Source)
    at org.apache.hadoop.fs.s3native.MultipartUploadOutputStream.uploadSinglePart(MultipartUploadOutputStream.java:160)
    ... 12 more
Caused by: Status Code: 403, AWS Service: Amazon S3, AWS Request ID: 8220819721FFE29E, AWS Error Code: AccessDenied, AWS Error Message: Access Denied, S3 Extended Request ID: TekkBZzgaBlK0e8SkoC7bcBsu1w7Nbpy2U7hPCGp5IPrrsqaPTxUg7QQ09xTXRYC
    at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:619)
    at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:317)
    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:170)
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:2943)
    at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1123)
    at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.storeFile(Jets3tNativeFileSystemStore.java:121)
    ... 20 more

Frustratingly, Hive running on the same kind of EMR cluster, for example, seems to have no trouble at all creating new externally mapped tables and files on S3.

Having already tried several ideas, I would be very grateful if someone could point us in the right direction to get our setup working.

Thanks, OA

Best Answer

I think this is most likely your problem:

-mapper "s3n://path/to/Program map"

Most likely it is the space that is giving you trouble. I would try building two separate binaries, one for map and one for reduce, which you can invoke directly instead of passing an argument. At the very least that will help you pinpoint the problem.
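For example, with two hypothetical split-out binaries named ProgramMap and ProgramReduce, the invocation would become:

    bin/hadoop jar contrib/streaming/hadoop-streaming.jar \
        -input s3n://path/to/input \
        -output s3n://path/to/output \
        -mapper s3n://path/to/ProgramMap \
        -reducer s3n://path/to/ProgramReduce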

Otherwise, this sounds like an S3 permissions or MIME type issue. I would check the permissions on your bucket to verify that the credentials you are using for the EMR job can access it.
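A quick sanity check is to try writing a small test object to the same output location using the same credentials; with s3cmd (test.txt being any local file) that might look like:

    $ s3cmd put test.txt s3://path/to/output/test.txt

If this also fails with a 403, the problem is in the bucket policy or the credentials rather than in Hadoop.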

Once you have confirmed that, I would check the permissions and attributes of the binary itself; I have run into odd problems when the S3 MIME type was set incorrectly. For example, here is the info for wordSplitter:

$ s3cmd info s3://elasticmapreduce/samples/wordcount/wordSplitter.py
s3://elasticmapreduce/samples/wordcount/wordSplitter.py (object):
File size: 294
Last mod:  Wed, 29 Feb 2012 01:50:25 GMT
MIME type: text/x-python
MD5 sum:   f5b4829658cfbcd5fa5eb32c58163fa8

Your binaries may be defaulting to a MIME type that somehow gets in the way of execution.
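If that turns out to be the case, re-uploading the binary with an explicit, neutral MIME type should rule it out; with s3cmd (assuming a version that supports the --mime-type option) that would be something like:

    $ s3cmd put --mime-type=application/octet-stream Program s3://path/to/Program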

Regarding "hadoop - Unable to run EMR Hadoop Streaming job with custom executables", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/15933818/
