hadoop - Hadoop not running tasks

Tags: hadoop mapreduce

I have a cluster with 1 master and 1 slave that are connected and "supposedly" communicating. I followed several guides to install and set up the cluster; almost all of them were alike, the only difference being the amount of memory and the number of cores allocated.
Both my master and my slave have 8 vcores and 32 GB of RAM, each with roughly 600 GB of SSD.
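For reference, the guides differed mainly in the YARN resource settings. This is roughly what those settings look like in yarn-site.xml for nodes of this size; the property names are standard YARN settings, but the values below are my own placeholders, not copied from any one guide:

    <!-- yarn-site.xml (excerpt): resources each NodeManager advertises.
         Values are placeholders sized for an 8-vcore / 32 GB node. -->
    <property>
      <name>yarn.nodemanager.resource.memory-mb</name>
      <value>28672</value>
    </property>
    <property>
      <name>yarn.nodemanager.resource.cpu-vcores</name>
      <value>8</value>
    </property>
    <property>
      <name>yarn.scheduler.maximum-allocation-mb</name>
      <value>28672</value>
    </property>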
However, when I try to run a Hadoop job, I get the following output:

    hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar wordcount /input /output
20/11/03 15:51:34 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
20/11/03 15:51:35 INFO client.RMProxy: Connecting to ResourceManager at master/master:8032
20/11/03 15:51:36 INFO input.FileInputFormat: Total input paths to process : 1
20/11/03 15:51:36 INFO mapreduce.JobSubmitter: number of splits:1
20/11/03 15:51:36 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1604418534431_0001
20/11/03 15:51:36 INFO impl.YarnClientImpl: Submitted application application_1604418534431_0001
20/11/03 15:51:36 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1604418534431_0001/
20/11/03 15:51:36 INFO mapreduce.Job: Running job: job_1604418534431_0001
20/11/03 15:51:43 INFO mapreduce.Job: Job job_1604418534431_0001 running in uber mode : false
20/11/03 15:51:43 INFO mapreduce.Job:  map 0% reduce 0%
20/11/03 15:51:46 INFO mapreduce.Job: Task Id : attempt_1604418534431_0001_m_000000_0, Status : FAILED
Exception from container-launch.
Container id: container_1604418534431_0001_01_000002
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)
        at org.apache.hadoop.util.Shell.run(Shell.java:479)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
        at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)


Container exited with a non-zero exit code 1

20/11/03 15:51:49 INFO mapreduce.Job: Task Id : attempt_1604418534431_0001_m_000000_1, Status : FAILED
Exception from container-launch.
Container id: container_1604418534431_0001_01_000003
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)
        at org.apache.hadoop.util.Shell.run(Shell.java:479)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
        at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)


Container exited with a non-zero exit code 1

20/11/03 15:51:52 INFO mapreduce.Job: Task Id : attempt_1604418534431_0001_m_000000_2, Status : FAILED
Exception from container-launch.
Container id: container_1604418534431_0001_01_000004
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)
        at org.apache.hadoop.util.Shell.run(Shell.java:479)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
        at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)


Container exited with a non-zero exit code 1

20/11/03 15:51:57 INFO mapreduce.Job:  map 100% reduce 100%
20/11/03 15:51:58 INFO mapreduce.Job: Job job_1604418534431_0001 failed with state FAILED due to: Task failed task_1604418534431_0001_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0

20/11/03 15:51:58 INFO mapreduce.Job: Counters: 16
        Job Counters
                Failed map tasks=4
                Killed reduce tasks=1
                Launched map tasks=4
                Other local map tasks=3
                Rack-local map tasks=1
                Total time spent by all maps in occupied slots (ms)=3946
                Total time spent by all reduces in occupied slots (ms)=0
                Total time spent by all map tasks (ms)=3946
                Total time spent by all reduce tasks (ms)=0
                Total vcore-milliseconds taken by all map tasks=3946
                Total vcore-milliseconds taken by all reduce tasks=0
                Total megabyte-milliseconds taken by all map tasks=4845688
                Total megabyte-milliseconds taken by all reduce tasks=0
        Map-Reduce Framework
                CPU time spent (ms)=0
                Physical memory (bytes) snapshot=0
                Virtual memory (bytes) snapshot=0
What I am trying to do is the following:
echo "hello world hello Hello" > ~/Downloads/test.txt

hadoop fs -mkdir /input

hadoop fs -put ~/Downloads/test.txt /input

hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar wordcount /input /output
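Since the job output above only reports "Exit code: 1" without the underlying error, one way to see the actual launch failure is to pull the aggregated container logs with the standard YARN CLI, using the application id from the output above (this requires log aggregation to be enabled; otherwise the same information is in the NodeManager's local container log directory):

    yarn logs -applicationId application_1604418534431_0001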

Best answer

I solved the problem by changing the hostnames in the hosts file: I had to add an extra hostname (besides master) to help resolve the hostname to an IP.
Example:

<master.ip> master <hostname>
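As a concrete sketch, this is roughly what the hosts file ends up looking like on both nodes. The IP addresses and the second hostname here are placeholders; the point is that master and the machine's actual hostname both resolve to the master's real IP:

    # placeholder addresses and names; substitute your own
    192.168.1.10    master    node-master
    192.168.1.11    slave1    node-slave1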
Thanks everyone for the help!

Regarding "hadoop - Hadoop not running tasks", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/64666408/
