hadoop - Why does this sqoop command throw an exception? Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster

Tags: hadoop mapreduce sqoop

I have a question about Sqoop and would appreciate any help.
I am running a Sqoop command from my local machine to export data from HDFS to an Oracle database. I am using hadoop-3.3.0 and Sqoop 1.4.7 on the local machine.
The error is:
Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster
The Sqoop command:

sqoop export --connect "jdbc:oracle:thin:@(description=(address=(protocol=tcp)(host=172.16.49.30)(port=1521))(connect_data=(service_name=stgdb)))" --table CORE_ETL.DEPOSIT_TURNOVER --username username --password password  --export-dir /tmp/merged_deposit_turnover/sqoop/ --input-fields-terminated-by "," --input-lines-terminated-by '\n'
yarn-site.xml:
 <configuration>
  <property>
    <name>yarn.acl.enable</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.admin.acl</name>
    <value>*</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>cluster.com:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>cluster.com:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>cluster.com:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>cluster.com:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>cluster.com:8088</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.https.address</name>
    <value>cluster.com:8090</value>
  </property>
  <property>
    <name>yarn.resourcemanager.client.thread-count</name>
    <value>50</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.client.thread-count</name>
    <value>50</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.client.thread-count</name>
    <value>1</value>
  </property>
  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>1024</value>
  </property>
  <property>
    <name>yarn.scheduler.increment-allocation-mb</name>
    <value>512</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>2048</value>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>2048</value>
  </property>
  <property>
    <name>yarn.scheduler.minimum-allocation-vcores</name>
    <value>1</value>
  </property>
  <property>
    <name>yarn.scheduler.increment-allocation-vcores</name>
    <value>1</value>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-vcores</name>
    <value>2</value>
  </property>
  <property>
    <name>yarn.resourcemanager.amliveliness-monitor.interval-ms</name>
    <value>1000</value>
  </property>
  <property>
    <name>yarn.am.liveness-monitor.expiry-interval-ms</name>
    <value>600000</value>
  </property>
  <property>
    <name>yarn.resourcemanager.am.max-attempts</name>
    <value>2</value>
  </property>
  <property>
    <name>yarn.resourcemanager.container.liveness-monitor.interval-ms</name>
    <value>600000</value>
  </property>
  <property>
    <name>yarn.resourcemanager.nm.liveness-monitor.interval-ms</name>
    <value>1000</value>
  </property>
  <property>
    <name>yarn.nm.liveness-monitor.expiry-interval-ms</name>
    <value>600000</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.client.thread-count</name>
    <value>50</value>
  </property>
  <property>
    <name>yarn.application.classpath</name>
    <value>$HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/*,$HADOOP_COMMON_HOME/lib/*,$HADOOP_HDFS_HOME/*,$HADOOP_HDFS_HOME/lib/*,$HADOOP_YARN_HOME/*,$HADOOP_YARN_HOME/lib/*,$HADOOP_COMMON_HOME/share/hadoop/common/*,$HADOOP_COMMON_HOME/share/hadoop/common/lib/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*,$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*,$HADOOP_YARN_HOME/share/hadoop/yarn/*,$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.resource-calculator</name>
    <value>org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator</value>
  </property>
  <property>
    <name>yarn.resourcemanager.max-completed-applications</name>
    <value>10000</value>
  </property>
  <property>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>/tmp/logs</value>
  </property>
  <property>
    <name>yarn.nodemanager.remote-app-log-dir-suffix</name>
    <value>logs</value>
  </property>
</configuration>
Environment variables:
export HADOOP_HOME=/etc/hadoop
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_YARN_HOME=$HADOOP_HOME
export YARN_CONF_DIR=/etc/hadoop/etc/hadoop
export HADOOP_CONF_DIR=/etc/hadoop/etc/hadoop
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///data/dfs/nn</value>
  </property>
  <property>
    <name>dfs.namenode.servicerpc-address</name>
    <value>cluster.com:8022</value>
  </property>
  <property>
    <name>dfs.https.address</name>
    <value>cluster.com:9871</value>
  </property>
  <property>
    <name>dfs.https.port</name>
    <value>9871</value>
  </property>
  <property>
    <name>dfs.namenode.http-address</name>
    <value>cluster.com:9870</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.blocksize</name>
    <value>67108864</value>
  </property>
  <property>
    <name>dfs.client.use.datanode.hostname</name>
    <value>false</value>
  </property>
  <property>
    <name>fs.permissions.umask-mode</name>
    <value>022</value>
  </property>
  <property>
    <name>dfs.client.block.write.locateFollowingBlock.retries</name>
    <value>7</value>
  </property>
  <property>
    <name>dfs.namenode.acls.enabled</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.client.read.shortcircuit</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.domain.socket.path</name>
    <value>/var/run/hdfs-sockets/dn</value>
  </property>
  <property>
    <name>dfs.client.read.shortcircuit.skip.checksum</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.client.domain.socket.data.traffic</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.datanode.hdfs-blocks-metadata.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.support.append</name>
    <value>true</value>
  </property>
</configuration>
mapred-site.xml:
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.staging-dir</name>
    <value>/user</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.env</name>
    <value>HADOOP_MAPRED_HOME=/etc/hadoop</value>
  </property>
  <property>
    <name>mapreduce.map.env</name>
    <value>HADOOP_MAPRED_HOME=/etc/hadoop</value>
  </property>
  <property>
    <name>mapreduce.reduce.env</name>
    <value>HADOOP_MAPRED_HOME=/etc/hadoop</value>
  </property>
  <property>
    <name>mapreduce.application.classpath</name>
    <value>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*,$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*,$HADOOP_MAPRED_HOME/share/hadoop/common/*,$HADOOP_MAPRED_HOME/share/hadoop/common/lib/*,$HADOOP_MAPRED_HOME/share/hadoop/yarn/*,$HADOOP_MAPRED_HOME/share/hadoop/yarn/lib/*,$HADOOP_MAPRED_HOME/share/hadoop/hdfs/*,$HADOOP_MAPRED_HOME/share/hadoop/hdfs/lib/*</value>
  </property>
</configuration>
Sqoop error output:
Warning: /usr/lib/sqoop/../hbase does not exist! HBase imports will fail.
Please set $HBASE_HOME to the root of your HBase installation.
Warning: /usr/lib/sqoop/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /usr/lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Warning: /usr/lib/sqoop/../zookeeper does not exist! Accumulo imports will fail.
Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.
2020-08-22 17:56:24,879 INFO sqoop.Sqoop: Running Sqoop version: 1.4.7
2020-08-22 17:56:25,173 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
2020-08-22 17:56:25,492 INFO oracle.OraOopManagerFactory: Data Connector for Oracle and Hadoop is disabled.
2020-08-22 17:56:25,579 INFO manager.SqlManager: Using default fetchSize of 1000
2020-08-22 17:56:25,579 INFO tool.CodeGenTool: Beginning code generation
2020-08-22 17:56:27,694 INFO manager.OracleManager: Time zone has been set to GMT
2020-08-22 17:56:27,883 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM CORE_ETL.DEPOSIT_TURNOVER t WHERE 1=0
2020-08-22 17:56:28,188 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /etc/hadoop
Note: /tmp/sqoop-hatef/compile/dc629ada72d032251eb72d68f8f68c85/CORE_ETL_DEPOSIT_TURNOVER.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
2020-08-22 17:56:33,829 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hatef/compile/dc629ada72d032251eb72d68f8f68c85/CORE_ETL.DEPOSIT_TURNOVER.jar
2020-08-22 17:56:33,902 INFO mapreduce.ExportJobBase: Beginning export of CORE_ETL.DEPOSIT_TURNOVER
2020-08-22 17:56:33,902 INFO Configuration.deprecation: mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
2020-08-22 17:56:34,381 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
2020-08-22 17:56:36,685 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2020-08-22 17:56:38,545 INFO manager.OracleManager: Time zone has been set to GMT
2020-08-22 17:56:38,638 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
2020-08-22 17:56:38,645 INFO Configuration.deprecation: mapred.map.tasks.speculative.execution is deprecated. Instead, use mapreduce.map.speculative
2020-08-22 17:56:38,647 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
2020-08-22 17:56:38,996 INFO client.DefaultNoHARMFailoverProxyProvider: Connecting to ResourceManager at hdp-name1-esxi12.sdb247.com/172.16.49.10:8032
2020-08-22 17:56:40,130 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /user/airflow/.staging/job_1597060731030_0459
2020-08-22 18:01:01,798 INFO input.FileInputFormat: Total input files to process : 1
2020-08-22 18:01:01,885 INFO input.FileInputFormat: Total input files to process : 1
2020-08-22 18:01:02,817 INFO mapreduce.JobSubmitter: number of splits:4
2020-08-22 18:01:02,999 INFO Configuration.deprecation: mapred.map.tasks.speculative.execution is deprecated. Instead, use mapreduce.map.speculative
2020-08-22 18:01:05,962 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1597060731030_0459
2020-08-22 18:01:05,962 INFO mapreduce.JobSubmitter: Executing with tokens: []
2020-08-22 18:01:08,561 INFO conf.Configuration: resource-types.xml not found
2020-08-22 18:01:08,562 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2020-08-22 18:01:08,901 INFO impl.YarnClientImpl: Submitted application application_1597060731030_0459
2020-08-22 18:01:09,086 INFO mapreduce.Job: The url to track the job: http://hdp-name1-esxi12.sdb247.com:8088/proxy/application_1597060731030_0459/
2020-08-22 18:01:09,088 INFO mapreduce.Job: Running job: job_1597060731030_0459
2020-08-22 18:01:11,442 INFO mapreduce.Job: Job job_1597060731030_0459 running in uber mode : false
2020-08-22 18:01:11,444 INFO mapreduce.Job:  map 0% reduce 0%
2020-08-22 18:01:11,671 INFO mapreduce.Job: Job job_1597060731030_0459 failed with state FAILED due to: Application application_1597060731030_0459 failed 2 times due to AM Container for appattempt_1597060731030_0459_000002 exited with  exitCode: 1
Failing this attempt.Diagnostics: [2020-08-22 18:03:19.337]Exception from container-launch.
Container id: container_1597060731030_0459_02_000001
Exit code: 1

[2020-08-22 18:03:19.338]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster

Please check whether your etc/hadoop/mapred-site.xml contains the below configuration:
<property>
  <name>yarn.app.mapreduce.am.env</name>
  <value>HADOOP_MAPRED_HOME=${full path of your hadoop distribution directory}</value>
</property>
<property>
  <name>mapreduce.map.env</name>
  <value>HADOOP_MAPRED_HOME=${full path of your hadoop distribution directory}</value>
</property>
<property>
  <name>mapreduce.reduce.env</name>
  <value>HADOOP_MAPRED_HOME=${full path of your hadoop distribution directory}</value>
</property>

[2020-08-22 18:03:19.339]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster

Please check whether your etc/hadoop/mapred-site.xml contains the below configuration:
<property>
  <name>yarn.app.mapreduce.am.env</name>
  <value>HADOOP_MAPRED_HOME=${full path of your hadoop distribution directory}</value>
</property>
<property>
  <name>mapreduce.map.env</name>
  <value>HADOOP_MAPRED_HOME=${full path of your hadoop distribution directory}</value>
</property>
<property>
  <name>mapreduce.reduce.env</name>
  <value>HADOOP_MAPRED_HOME=${full path of your hadoop distribution directory}</value>
</property>

For more detailed output, check the application tracking page: http://cluster.com:8088/cluster/app/application_1597060731030_0459 Then click on links to logs of each attempt.
. Failing the application.
2020-08-22 18:01:11,780 INFO mapreduce.Job: Counters: 0
2020-08-22 18:01:11,916 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
2020-08-22 18:01:11,921 INFO mapreduce.ExportJobBase: Transferred 0 bytes in 273.1812 seconds (0 bytes/sec)
2020-08-22 18:01:12,013 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
2020-08-22 18:01:12,015 INFO mapreduce.ExportJobBase: Exported 0 records.
2020-08-22 18:01:12,015 ERROR mapreduce.ExportJobBase: Export job failed!
2020-08-22 18:01:12,016 ERROR tool.ExportTool: Error during export: 
Export job failed!
    at org.apache.sqoop.mapreduce.ExportJobBase.runExport(ExportJobBase.java:445)
    at org.apache.sqoop.manager.OracleManager.exportTable(OracleManager.java:465)
    at org.apache.sqoop.tool.ExportTool.exportTable(ExportTool.java:80)
    at org.apache.sqoop.tool.ExportTool.run(ExportTool.java:99)
    at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
    at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:243)
    at org.apache.sqoop.Sqoop.main(Sqoop.java:252)

Best Answer

You mention that you have a cluster installed with Cloudera, but it is not clear where Sqoop is actually running or where it is picking up these XML files.
If you have a full Cloudera cluster installation, Sqoop should already be installed and configured there so that it runs without many problems (you may need an extra JDBC driver, but that should be all).
Otherwise, if you are trying to set up Sqoop (and Hadoop) externally, you need to copy the $HADOOP_HOME/conf folder from a worker node of the Hadoop cluster, so that all the client configurations are identical.
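A minimal sketch of that approach, assuming the cluster's Hadoop distribution is installed under /opt/hadoop and a worker node is reachable as worker1.cluster.com (both the path and the hostname are assumptions; substitute your own values):

# Copy the cluster's client configuration to the local gateway machine,
# so the local Sqoop/Hadoop client uses the same settings as the cluster.
scp -r hadoop@worker1.cluster.com:/opt/hadoop/etc/hadoop/* /etc/hadoop/etc/hadoop/

# HADOOP_MAPRED_HOME (and the value referenced by yarn.app.mapreduce.am.env,
# mapreduce.map.env and mapreduce.reduce.env) must point at the full Hadoop
# distribution directory, not a bare conf directory, because MRAppMaster is
# loaded from the jars under share/hadoop/mapreduce.
export HADOOP_MAPRED_HOME=/opt/hadoop   # assumed install location
ls "$HADOOP_MAPRED_HOME"/share/hadoop/mapreduce/hadoop-mapreduce-client-app-*.jar

If that ls finds the hadoop-mapreduce-client-app jar on the nodes where the AM container launches, the "Could not find or load main class ... MRAppMaster" error should disappear once mapred-site.xml points HADOOP_MAPRED_HOME at that same directory.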

Regarding "hadoop - Why does this sqoop command throw an exception? Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/63536899/
