I've run into a strange problem, and I assure you I've already Googled it extensively.
I'm running a set of AWS Elastic MapReduce clusters, and I have a Hive table with roughly 16 partitions. They were created via emr-s3distcp (since the original S3 bucket holds about 216K files), using --groupBy with the target size capped at 64 MiB (the DFS block size in this case); the files are just JSON objects, one per line, read with a JSON SerDe.
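The distcp step described above can be sketched roughly as follows. This is a hedged sketch, not the exact command: the jar path reflects EMR installs of that era, and the bucket, destination path, and --groupBy regex are placeholders; only the flags (--src, --dest, --groupBy, --targetSize) and the 64 MiB target correspond to the setup described.

```shell
# Sketch: consolidate ~216K small S3 files into ~64 MiB files on HDFS.
# Bucket name, dest path, and regex are hypothetical placeholders.
hadoop jar /home/hadoop/lib/emr-s3distcp-1.0.jar \
  --src  s3://my-bucket/json-logs/ \
  --dest hdfs:///data/json-logs/ \
  --groupBy '.*/(\w+)/.*\.json' \
  --targetSize 64
```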
When I run this script, it takes a very long time and then gives up because of some IPC connection issue.
Initially, the strain s3distcp put on HDFS was so severe that I took some measures (namely: resized the cluster to higher-capacity machines, set HDFS to 3x replication since it's a small cluster, and set the block size to 64 MiB). That worked, and the number of under-replicated blocks went to zero (the EMR default for clusters smaller than 3 nodes is 2, but I changed it to 3).
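One way to verify the under-replicated count reached zero, assuming the Hadoop 1.x-era CLI that EMR shipped at the time (output formatting varies by version):

```shell
# Report filesystem health; the summary includes the number of
# under-replicated blocks (should read 0 after the change).
hadoop fsck / | grep -i 'under-replicated'

# The replication factor and block size mentioned above correspond to
# the standard hdfs-site.xml settings:
#   dfs.replication = 3
#   dfs.block.size  = 67108864   (64 MiB)
```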
Looking at /mnt/var/log/apps/hive_081.log yields lines like the following:
2013-05-12 09:56:12,120 DEBUG org.apache.hadoop.ipc.Client (Client.java:<init>(222)) - The ping interval is 60000ms.
2013-05-12 09:56:12,120 DEBUG org.apache.hadoop.ipc.Client (Client.java:<init>(265)) - Use SIMPLE authentication for protocol ClientProtocol
2013-05-12 09:56:12,120 DEBUG org.apache.hadoop.ipc.Client (Client.java:setupIOstreams(551)) - Connecting to /10.17.17.243:9000
2013-05-12 09:56:12,121 DEBUG org.apache.hadoop.ipc.Client (Client.java:sendParam(769)) - IPC Client (47) connection to /10.17.17.243:9000 from hadoop sending #14
2013-05-12 09:56:12,121 DEBUG org.apache.hadoop.ipc.Client (Client.java:run(742)) - IPC Client (47) connection to /10.17.17.243:9000 from hadoop: starting, having connections 2
2013-05-12 09:56:12,125 DEBUG org.apache.hadoop.ipc.Client (Client.java:receiveResponse(804)) - IPC Client (47) connection to /10.17.17.243:9000 from hadoop got value #14
2013-05-12 09:56:12,126 DEBUG org.apache.hadoop.ipc.RPC (RPC.java:invoke(228)) - Call: getFileInfo 6
2013-05-12 09:56:21,523 INFO org.apache.hadoop.ipc.Client (Client.java:handleConnectionFailure(663)) - Retrying connect to server: domU-12-31-39-10-81-2A.compute-1.internal/10.198.130.216:9000. Already tried 6 time(s).
2013-05-12 09:56:22,122 DEBUG org.apache.hadoop.ipc.Client (Client.java:close(876)) - IPC Client (47) connection to /10.17.17.243:9000 from hadoop: closed
2013-05-12 09:56:22,122 DEBUG org.apache.hadoop.ipc.Client (Client.java:run(752)) - IPC Client (47) connection to /10.17.17.243:9000 from hadoop: stopped, remaining connections 1
2013-05-12 09:56:42,544 INFO org.apache.hadoop.ipc.Client (Client.java:handleConnectionFailure(663)) - Retrying connect to server: domU-12-31-39-10-81-2A.compute-1.internal/10.198.130.216:9000. Already tried 7 time(s).
And so on, until one of the clients hits its retry limit.
What do I need to do to fix this in Hive under Elastic MapReduce?
Thanks
Best answer
After a while I noticed: the offending IP address wasn't even in my cluster, so it was a stale Hive metastore. I fixed this as follows:
CREATE TABLE whatever_2 LIKE whatever LOCATION <hdfs_location>;
ALTER TABLE whatever_2 RECOVER PARTITIONS;
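After the RECOVER PARTITIONS step, a quick sanity check (a sketch, run from the master node) is to confirm Hive now sees all ~16 partitions under the new location:

```shell
# List the partitions Hive registered for the rebuilt table; after
# RECOVER PARTITIONS this should show every partition directory that
# exists under the HDFS location given in the CREATE TABLE statement.
hive -e 'SHOW PARTITIONS whatever_2;'
```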
Hope that helps.
Regarding hadoop - Reduced Hive query performance under AWS Elastic MapReduce, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/16506267/