Here I am trying to run MapReduce operations through HiveQL. It works for plain select queries, but it throws exceptions for some aggregation and filter operations. Please help me resolve this. I have already added the mongo-hadoop JARs in the appropriate places.
hive> select * from users;
OK
1 Tom 28
2 Alice 18
3 Bob 29
hive> select * from users where age>=20;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Kill Command = /home/administrator/hadoop-2.2.0//bin/hadoop job -kill job_1398687508122_0002
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2014-05-05 12:08:41,195 Stage-1 map = 0%, reduce = 0%
2014-05-05 12:08:57,723 Stage-1 map = 100%, reduce = 0%
Ended Job = job_1398687508122_0002 with errors
Error during job, obtaining debugging information...
Examining task ID: task_1398687508122_0002_m_000000 (and more) from job job_1398687508122_0002
Task with the most failures(4):
-----
Task ID:
task_1398687508122_0002_m_000000
-----
Diagnostic Messages for this Task:
Error: java.io.IOException: java.io.IOException: Couldn't get next key/value from mongodb:
at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:276)
at org.apache.hadoop.hive.ql.io.HiveRecordReader.doNext(HiveRecordReader.java:79)
at org.apache.hadoop.hive.ql.io.HiveRecordReader.doNext(HiveRecordReader.java:33)
at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.next(HiveContextAwareRecordReader.java:108)
at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:197)
at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:183)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:52)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:429)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157)
Caused by: java.io.IOException: Couldn't get next key/value from mongodb:
at com.mongodb.hadoop.mapred.input.MongoRecordReader.nextKeyValue(MongoRecordReader.java:93)
at com.mongodb.hadoop.mapred.input.MongoRecordReader.next(MongoRecordReader.java:98)
at com.mongodb.hadoop.mapred.input.MongoRecordReader.next(MongoRecordReader.java:27)
at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:274)
... 13 more
Caused by: com.mongodb.MongoException$Network: Read operation to server localhost/127.0.0.1:12345 failed on database test
at com.mongodb.DBTCPConnector.innerCall(DBTCPConnector.java:253)
at com.mongodb.DBTCPConnector.call(DBTCPConnector.java:216)
at com.mongodb.DBApiLayer$MyCollection.__find(DBApiLayer.java:288)
at com.mongodb.DBApiLayer$MyCollection.__find(DBApiLayer.java:273)
at com.mongodb.DBCursor._check(DBCursor.java:368)
at com.mongodb.DBCursor._hasNext(DBCursor.java:459)
at com.mongodb.DBCursor.hasNext(DBCursor.java:484)
at com.mongodb.hadoop.mapred.input.MongoRecordReader.nextKeyValue(MongoRecordReader.java:80)
... 16 more
Caused by: java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at com.mongodb.DBPort._open(DBPort.java:223)
at com.mongodb.DBPort.go(DBPort.java:125)
at com.mongodb.DBPort.call(DBPort.java:92)
at com.mongodb.DBTCPConnector.innerCall(DBTCPConnector.java:244)
... 23 more
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Job 0: Map: 1 HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
Accepted Answer
In Hive, "select * from table" works differently from any other, more complex query. That query runs inside the Hive client itself, in a single JVM. The rationale is that the query ultimately has to print everything to the console from a single thread anyway, so doing all the work from that one thread is no worse. Everything else, including even a simple filter, runs as one or more MapReduce jobs.
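This difference is visible in the query plan. A rough sketch of what EXPLAIN would show (output abbreviated; the exact plan text depends on your Hive version):

```sql
hive> EXPLAIN SELECT * FROM users;
-- plan contains only a Fetch Operator: no MapReduce job is launched,
-- so the data is read directly by the Hive client JVM

hive> EXPLAIN SELECT * FROM users WHERE age >= 20;
-- plan contains a Map Reduce stage (map-only here, since a simple
-- filter needs no reducer), executed on the cluster's task nodes
```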
When you run the query without a filter, I'm guessing you are doing so on the same machine that runs MongoDB, so the client can connect to localhost:12345. But when you run a MapReduce job, a different machine is doing the connecting: a task node. The mapper tries to connect to "localhost:12345" to fetch data from Mongo and cannot. Perhaps Mongo is not running on that machine, or perhaps it is running on a different port; I don't know how your cluster is configured.
In any case, you should specify the location of the MongoDB instance in a way that every machine in the cluster can reach. A reasonably static internal IP address would work, but ideally do it via a hostname and DNS resolution.
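Concretely, that means pointing `mongo.uri` in the table definition at a resolvable hostname rather than localhost. A minimal sketch using the mongo-hadoop Hive storage handler; the hostname, port, and field mapping below are hypothetical placeholders to adjust for your cluster and schema:

```sql
-- Hypothetical host and schema: the key point is that the URI must be
-- resolvable and reachable from every task node, not just the Hive client.
CREATE EXTERNAL TABLE users (id INT, name STRING, age INT)
STORED BY 'com.mongodb.hadoop.hive.MongoStorageHandler'
WITH SERDEPROPERTIES ('mongo.columns.mapping' = '{"id":"_id","name":"name","age":"age"}')
TBLPROPERTIES ('mongo.uri' = 'mongodb://mongo-host.example.com:27017/test.users');
```

With a table defined this way, the mappers on every task node connect to the same reachable address, so filter and aggregation queries can read from Mongo just like the client-side `select *` does.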
Regarding "mongodb - Hive table mapping with MongoDB", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/23466517/