hadoop - Datanode not starting when running TeraSort

Tags: hadoop mapreduce hdfs bigdata master-slave

I have 4 slaves (including the master). When I run TeraSort, one of my slaves shows the error below. The DataNode was started before the run, but once the job is running, one of my DataNodes drops out and the computation is done by the remaining 3 slaves:

INFO org.apache.hadoop.hdfs.server.datanode.DataNode: writeBlock blk_-5677299757617064640_1010 received exception java.io.IOException: Connection reset by peer

2015-03-12 16:42:06,835 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(192.168.0.115:50010, storageID=DS-518613992-192.168.0.115-50010-1426203432424, infoPort=50075, ipcPort=50020):DataXceiver

java.io.IOException: Connection reset by peer (first error, same log and run)

2015-03-12 16:42:09,809 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(192.168.0.115:50010, storageID=DS-518613992-192.168.0.115-50010-1426203432424, infoPort=50075, ipcPort=50020): Exception writing block blk_2791945666924613489_1015 to mirror 192.168.0.112:50010

java.io.IOException: Broken pipe (second error)

2015-03-12 16:42:09,824 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: writeBlock blk_2791945666924613489_1015 received exception java.io.EOFException: while trying to read 65557 bytes (third error, same run)

I am stuck. Any help is appreciated!

TaskTracker log:
 WARN org.apache.hadoop.mapred.TaskTracker: Failed validating JVM
java.io.IOException: JvmValidate Failed. Ignoring request from task: attempt_201503121637_0001_m_000040_0, with JvmId: jvm_201503121637_0001_m_-2136609016
        at org.apache.hadoop.mapred.TaskTracker.validateJVM(TaskTracker.java:3278)
        at org.apache.hadoop.mapred.TaskTracker.statusUpdate(TaskTracker.java:3348)
        at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
2015-03-12 16:43:02,577 WARN org.apache.hadoop.mapred.DefaultTaskController: Exit code from task is : 143
2015-03-12 16:43:02,577 INFO org.apache.hadoop.mapred.DefaultTaskController: Output from DefaultTaskController's launchTask follows:
2015-03-12 16:43:02,577 INFO org.apache.hadoop.mapred.TaskController:
2015-03-12 16:43:02,577 INFO org.apache.hadoop.mapred.JvmManager: JVM : jvm_201503121637_0001_m_1555953113 exited with exit code 143. Number of tasks it ran: 1
2015-03-12 16:43:02,599 INFO org.apache.hadoop.mapred.TaskTracker: LaunchTaskAction (registerTask): attempt_201503121637_0001_m_000054_0 task's state:UNASSIGNED
2015-03-12 16:43:02,599 INFO org.apache.hadoop.mapred.TaskTracker: Received commit task action for attempt_201503121637_0001_m_000048_0
2015-03-12 16:43:02,599 INFO org.apache.hadoop.mapred.TaskTracker: Trying to launch : attempt_201503121637_0001_m_000054_0 which needs 1 slots
2015-03-12 16:43:02,600 INFO org.apache.hadoop.mapred.TaskTracker: TaskLauncher : Waiting for 1 to launch attempt_201503121637_0001_m_000054_0, currently we have 0 free slots
2015-03-12 16:43:03,618 INFO org.apache.hadoop.mapred.TaskTracker: JVM with ID: jvm_201503121637_0001_m_1496188144 given task: attempt_201503121637_0001_m_000051_0

Best answer

The TaskTracker log is more descriptive. Can you show what is in it?

Also check whether the server is running on the correct port.

One thing you can try: copy the Hadoop core jar from a working datanode to the failing datanode, replace the old one, and then restart the MapReduce services there.
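
For illustration only, a minimal sketch of that step in Python, assuming a Hadoop 1.x layout and passwordless SSH between nodes; HADOOP_HOME, the jar file name, and the node addresses below are placeholders (the IPs are taken loosely from the logs above), not a verified configuration:

import subprocess

# Hypothetical cluster details -- adjust to your own layout.
GOOD_NODE = "192.168.0.112"   # a working datanode (appears in the logs)
BAD_NODE = "192.168.0.115"    # the failing datanode (appears in the logs)
HADOOP_HOME = "/usr/local/hadoop"                   # placeholder install dir
CORE_JAR = HADOOP_HOME + "/hadoop-core-1.2.1.jar"   # placeholder jar name

def run(cmd):
    # Echo the command, then run it and fail loudly if it returns non-zero.
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Copy the core jar from the working node over the one on the failing node.
run(["scp", GOOD_NODE + ":" + CORE_JAR, BAD_NODE + ":" + CORE_JAR])

# Restart the MapReduce daemon (TaskTracker) on the failing node.
run(["ssh", BAD_NODE, HADOOP_HOME + "/bin/hadoop-daemon.sh stop tasktracker"])
run(["ssh", BAD_NODE, HADOOP_HOME + "/bin/hadoop-daemon.sh start tasktracker"])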

One more thing to check: run netstat on a working datanode to see which port the TaskTracker is listening on, then verify that the TaskTracker service on the failing node is listening on the same port.

I believe the default TaskTracker port is 50060.
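
As a rough illustration of that check, here is a small port-probe sketch; the host list is an assumption, and the ports are the default TaskTracker HTTP port (50060) plus the DataNode ports that appear in the log lines above (50010, 50075, 50020):

import socket

# Hosts to probe: the failing node and a working node from the logs above;
# replace with your own slave list.
HOSTS = ["192.168.0.115", "192.168.0.112"]

# Ports to probe: default TaskTracker HTTP port plus the DataNode
# data/HTTP/IPC ports visible in the log lines above.
PORTS = {
    "tasktracker": 50060,
    "datanode-data": 50010,
    "datanode-http": 50075,
    "datanode-ipc": 50020,
}

for host in HOSTS:
    for name, port in PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(2)
            status = "open" if sock.connect_ex((host, port)) == 0 else "CLOSED"
        print(host + ":" + str(port) + " (" + name + ") -> " + status)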

So, if the ports are fine, a "Connection reset by peer" happens when a request issued by a reduce task is not served or the result comes back truncated; it can also happen when the expected file cannot be found (permissions can cause this as well).

On the topic of "hadoop - Datanode not starting when running TeraSort", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/29023493/
