hadoop - Issue when running Hadoop Pipes in hadoop-1.2.1

Tags: hadoop pipe hadoop-streaming hadoop-plugins hadoop-partitioning

 Hello everybody,

I have been getting an error while running C++ binaries in Hadoop:

 syscon@syscon-OptiPlex-3020:~/uday/hadoop-1.2.1$ bin/hadoop pipes -D hadoop.pipes.java.recordreader=true -D hadoop.pipes.java.recordwriter=true -input /dft1 -output dft1 -program /bin/wordcount

14/08/16 11:11:12 WARN mapred.JobClient: No job jar file set.  User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
14/08/16 11:11:12 INFO util.NativeCodeLoader: Loaded the native-hadoop library
14/08/16 11:11:12 WARN snappy.LoadSnappy: Snappy native library not loaded
14/08/16 11:11:12 INFO mapred.FileInputFormat: Total input paths to process : 0
14/08/16 11:11:12 INFO mapred.JobClient: Running job: job_201408161011_0003
14/08/16 11:11:13 INFO mapred.JobClient:  map 0% reduce 0%
14/08/16 11:11:20 INFO mapred.JobClient: Task Id : attempt_201408161011_0003_r_000000_0, Status : FAILED
java.io.IOException
    at org.apache.hadoop.mapred.pipes.OutputHandler.waitForAuthentication(OutputHandler.java:188)
    at org.apache.hadoop.mapred.pipes.Application.waitForAuthentication(Application.java:194)
    at org.apache.hadoop.mapred.pipes.Application.<init>(Application.java:149)
    at org.apache.hadoop.mapred.pipes.PipesReducer.startApplication(PipesReducer.java:81)
    at org.apache.hadoop.mapred.pipes.PipesReducer.close(PipesReducer.java:107)
    at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:532)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:421)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Unknown Source)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)

After several failed attempts like the one above, the job terminated:
attempt_201408161011_0003_r_000000_2: Server failed to authenticate. Exiting
14/08/16 11:11:37 INFO mapred.JobClient: Job complete: job_201408161011_0003
14/08/16 11:11:37 INFO mapred.JobClient: Counters: 6
14/08/16 11:11:37 INFO mapred.JobClient:   Job Counters 
14/08/16 11:11:37 INFO mapred.JobClient:     Launched reduce tasks=4
14/08/16 11:11:37 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=2096
14/08/16 11:11:37 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
14/08/16 11:11:37 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
14/08/16 11:11:37 INFO mapred.JobClient:     Failed reduce tasks=1
14/08/16 11:11:37 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=21447
14/08/16 11:11:37 INFO mapred.JobClient: Job Failed: # of failed Reduce Tasks exceeded allowed limit. FailedCount: 1. LastFailedTask: task_201408161011_0003_r_000000
Exception in thread "main" java.io.IOException: Job failed!
    at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1357)
    at org.apache.hadoop.mapred.pipes.Submitter.runJob(Submitter.java:248)
    at org.apache.hadoop.mapred.pipes.Submitter.run(Submitter.java:479)
    at org.apache.hadoop.mapred.pipes.Submitter.main(Submitter.java:494)
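
For context, the binary passed via -program above is a Hadoop Pipes C++ program. Below is a minimal sketch of such a wordcount, modeled on the wordcount-simple example in the hadoop-1.2.1 source tree; it is an assumption about what the submitted binary looks like, not the asker's actual code:

// A minimal Hadoop Pipes wordcount (a sketch modeled on the example that
// ships with hadoop-1.2.1; the binary in the question may differ).
#include "hadoop/Pipes.hh"
#include "hadoop/TemplateFactory.hh"
#include "hadoop/StringUtils.hh"
#include <string>
#include <vector>

class WordCountMap : public HadoopPipes::Mapper {
public:
    WordCountMap(HadoopPipes::TaskContext& context) {}
    void map(HadoopPipes::MapContext& context) {
        // Split each input line into words and emit (word, "1") pairs.
        std::vector<std::string> words =
            HadoopUtils::splitString(context.getInputValue(), " ");
        for (unsigned int i = 0; i < words.size(); ++i) {
            context.emit(words[i], "1");
        }
    }
};

class WordCountReduce : public HadoopPipes::Reducer {
public:
    WordCountReduce(HadoopPipes::TaskContext& context) {}
    void reduce(HadoopPipes::ReduceContext& context) {
        // Sum the emitted counts for each word and emit (word, total).
        int sum = 0;
        while (context.nextValue()) {
            sum += HadoopUtils::toInt(context.getInputValue());
        }
        context.emit(context.getInputKey(), HadoopUtils::toString(sum));
    }
};

int main(int argc, char* argv[]) {
    // runTask hands control to the Pipes framework, which communicates with
    // the Java side over the socket protocol that fails to authenticate above.
    return HadoopPipes::runTask(
        HadoopPipes::TemplateFactory<WordCountMap, WordCountReduce>());
}
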
To solve this issue, I searched Google for a solution. While searching:

         * I found the link below to be useful for fixing the issue.

         * As per the given solution, I tried running all the steps mentioned in that link:

              http://www.linuxquestions.org/questions/showthread.php?p=5221898#post5221898

         * Everything was clear up to the fourth step. When I proceeded with the fifth step, it threw an error like this:

syscon@syscon-OptiPlex-3020:~/uday/hadoop-1.2.1/src/c++/pipes$ ./configure
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for gawk... no
checking for mawk... mawk
checking whether make sets $(MAKE)... yes
checking for style of include used by make... GNU
checking for gcc... gcc
checking for C compiler default output file name... a.out
checking whether the C compiler works... yes
checking whether we are cross compiling... no
checking for suffix of executables...
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking dependency style of gcc... gcc3
checking for special C compiler options needed for large files... no
checking for _FILE_OFFSET_BITS value needed for large files... no
checking how to run the C preprocessor... gcc -E
checking for grep that handles long lines and -e... /bin/grep
checking for egrep... /bin/grep -E
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking pthread.h usability... yes
checking pthread.h presence... yes
checking for pthread.h... yes
checking for pthread_create in -lpthread... yes
checking for HMAC_Init in -lssl... no
configure: error: Cannot find libssl.so
./configure: line 4809: exit: please: numeric argument required
./configure: line 4809: exit: please: numeric argument required



syscon@syscon-OptiPlex-3020:~/uday/hadoop-1.2.1/src/c++/pipes$ locate libssl.so
/home/syscon/uday/hadoop-1.2.1/c++/Linux-amd64-64/lib/libssl.so
/lib/x86_64-linux-gnu/libssl.so.0.9.8
/lib/x86_64-linux-gnu/libssl.so.1.0.0
/usr/lib/libssl.so
/usr/lib/x86_64-linux-gnu/libssl.so
/usr/lib/x86_64-linux-gnu/libssl.so.0.9.8
/usr/local/bin/libssl.so

Note: I have already installed libssl.so on my PC... but it still throws the error...

Where do I need to change the configure file to make this work?

Can anybody help me with this?

Best Answer

I don't think there is any problem with your configure file, provided you have made it executable by running:
$ sudo chmod +x configure
That could be one thing. Also set the LIBS variable to -lcrypto by doing either of the following:
$ export LIBS=-lcrypto   # this is temporary
or edit the .bashrc file. Follow these steps:
$ cd
$ sudo nano .bashrc
Add "export LIBS=-lcrypto" as the last line.
Save and close the file.
Then run:
$ source .bashrc
or restart your computer.
Run your configure again and see how it works.
Voila!
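
For reference, the check that fails above ("checking for HMAC_Init in -lssl... no") is essentially a link test: configure builds a tiny program that references HMAC_Init and tries to link it with -lssl. HMAC_Init is actually provided by libcrypto, which is why putting -lcrypto into LIBS makes the test pass. A rough sketch of such a test program, assuming OpenSSL 1.0.x headers (the file name hmac_check.cpp is hypothetical):

// hmac_check.cpp -- a rough equivalent of configure's link test for HMAC_Init.
// Build with:  g++ hmac_check.cpp -o hmac_check -lcrypto
// Linking with only -lssl can fail when libssl does not pull in libcrypto,
// which produces the "Cannot find libssl.so" failure seen above.
#include <openssl/hmac.h>
#include <openssl/evp.h>
#include <cstdio>

int main() {
    HMAC_CTX ctx;
    HMAC_CTX_init(&ctx);                    // initialize the HMAC context
    HMAC_Init(&ctx, "key", 3, EVP_sha1());  // symbol resolved at link time from libcrypto
    HMAC_CTX_cleanup(&ctx);
    std::puts("HMAC_Init linked and ran successfully");
    return 0;
}

If this compiles and links with -lcrypto but not with -lssl alone, the LIBS fix above is the right one.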

Regarding "hadoop - Issue when running Hadoop Pipes in hadoop-1.2.1", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/25337363/
