java - HDFS filesystem replication error

Tags: java linux hadoop mapreduce hdfs

I wrote the following bash script:

#!/bin/bash
cd /export/hadoop-1.0.1/bin
./hadoop namenode -format
./start-all.sh
./hadoop fs -rmr hdfs://192.168.1.8:7000/export/hadoop-1.0.1/bin/output
./hadoop fs -rmr hdfs://192.168.1.8:7000/export/hadoop-1.0.1/bin/input
./hadoop fs -mkdir hdfs://192.168.1.8:7000/export/hadoop-1.0.1/input
./readwritepaths
./hadoop fs -put /export/hadoop-1.0.1/bin/input/paths.txt hdfs://192.168.1.8:7000/export/hadoop-1.0.1/bin/input/paths.txt
./hadoop jar /export/hadoop-1.0.1/bin/ParallelIndexation.jar org.myorg.ParallelIndexation /export/hadoop-1.0.1/bin/input /export/hadoop-1.0.1/bin/output -D mapred.map.tasks=1 1> resultofexecute.txt 2>&1

As a result of the script executing this command:

./hadoop fs -put /export/hadoop-1.0.1/bin/input/paths.txt hdfs://192.168.1.8:7000/export/hadoop-1.0.1/bin/input/paths.txt

I got the following message:

13/04/28 10:13:15 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /export/hadoop-1.0.1/bin/input/paths.txt could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1556)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)

    at org.apache.hadoop.ipc.Client.call(Client.java:1066)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
    at $Proxy1.addBlock(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    at $Proxy1.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3507)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3370)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2700(DFSClient.java:2586)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2826)

13/04/28 10:13:15 WARN hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
13/04/28 10:13:15 WARN hdfs.DFSClient: Could not get block locations. Source file "/export/hadoop-1.0.1/bin/input/paths.txt" - Aborting...
put: java.io.IOException: File /export/hadoop-1.0.1/bin/input/paths.txt could only be replicated to 0 nodes, instead of 1
13/04/28 10:13:15 ERROR hdfs.DFSClient: Exception closing file /export/hadoop-1.0.1/bin/input/paths.txt : org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /export/hadoop-1.0.1/bin/input/paths.txt could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1556)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)

org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /export/hadoop-1.0.1/bin/input/paths.txt could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1556)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)

    at org.apache.hadoop.ipc.Client.call(Client.java:1066)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
    at $Proxy1.addBlock(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    at $Proxy1.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3507)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3370)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2700(DFSClient.java:2586)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2826)

Here is also the datanode log from one of the slave nodes (the log on the second slave node contains similar errors):

2013-04-28 11:10:40,634 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = myhost2/192.168.1.10
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 1.0.1
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1243785; compiled by 'hortonfo' on Tue Feb 14 08:15:38 UTC 2012
************************************************************/
2013-04-28 11:10:40,948 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-04-28 11:10:40,982 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2013-04-28 11:10:40,983 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-04-28 11:10:40,983 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2013-04-28 11:10:41,285 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2013-04-28 11:10:41,308 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2013-04-28 11:10:42,811 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 0 time(s).
2013-04-28 11:10:43,811 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 1 time(s).
2013-04-28 11:10:44,813 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 2 time(s).
2013-04-28 11:10:45,814 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 3 time(s).
2013-04-28 11:10:46,814 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 4 time(s).
2013-04-28 11:10:47,814 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 5 time(s).
2013-04-28 11:10:48,815 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 6 time(s).
2013-04-28 11:10:49,815 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 7 time(s).
2013-04-28 11:10:50,816 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 8 time(s).
2013-04-28 11:10:51,818 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 9 time(s).
2013-04-28 11:10:51,822 INFO org.apache.hadoop.ipc.RPC: Server at 192.168.1.8/192.168.1.8:7000 not available yet, Zzzzz...
2013-04-28 11:10:53,824 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 0 time(s).
2013-04-28 11:10:54,825 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 1 time(s).
2013-04-28 11:10:55,826 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 2 time(s).
2013-04-28 11:10:56,828 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 3 time(s).
2013-04-28 11:10:57,828 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 4 time(s).
2013-04-28 11:10:58,829 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 5 time(s).
2013-04-28 11:10:59,829 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 6 time(s).
2013-04-28 11:11:00,830 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 7 time(s).
2013-04-28 11:11:01,831 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 8 time(s).
2013-04-28 11:11:02,831 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 9 time(s).
2013-04-28 11:11:02,833 INFO org.apache.hadoop.ipc.RPC: Server at 192.168.1.8/192.168.1.8:7000 not available yet, Zzzzz...
2013-04-28 11:11:04,834 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 0 time(s).
2013-04-28 11:11:05,834 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 1 time(s).
2013-04-28 11:11:06,835 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 2 time(s).
2013-04-28 11:11:07,836 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 3 time(s).
2013-04-28 11:11:08,837 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 4 time(s).
2013-04-28 11:11:09,837 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 5 time(s).
2013-04-28 11:11:40,381 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /tmp/hadoop-hadoop/dfs/data: namenode namespaceID = 454531810; datanode namespaceID = 345408440
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:232)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:147)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:385)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:299)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1582)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1521)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1539)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1665)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1682)

2013-04-28 11:11:40,383 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at myhost2/192.168.1.10
************************************************************/

Please help me fix the replication error. @ChrisWhite, here is the namenode log:

2013-04-28 10:10:38,310 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = one/192.168.1.8
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 1.0.1
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1243785; compiled by 'hortonfo' on Tue Feb 14 08:15:38 UTC 2012
************************************************************/
2013-04-28 10:10:38,579 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-04-28 10:10:38,594 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2013-04-28 10:10:38,596 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-04-28 10:10:38,596 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2013-04-28 10:11:08,818 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2013-04-28 10:11:08,825 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2013-04-28 10:11:08,831 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
2013-04-28 10:11:08,832 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
2013-04-28 10:11:08,852 INFO org.apache.hadoop.hdfs.util.GSet: VM type       = 32-bit
2013-04-28 10:11:08,854 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 19.33375 MB
2013-04-28 10:11:08,854 INFO org.apache.hadoop.hdfs.util.GSet: capacity      = 2^22 = 4194304 entries
2013-04-28 10:11:08,855 INFO org.apache.hadoop.hdfs.util.GSet: recommended=4194304, actual=4194304
2013-04-28 10:11:08,977 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop
2013-04-28 10:11:08,977 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2013-04-28 10:11:08,977 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2013-04-28 10:11:08,983 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
2013-04-28 10:11:08,983 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
2013-04-28 10:11:09,088 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
2013-04-28 10:11:09,129 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times 
2013-04-28 10:11:09,143 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 1
2013-04-28 10:11:09,147 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 0
2013-04-28 10:11:09,147 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 112 loaded in 0 seconds.
2013-04-28 10:11:09,147 INFO org.apache.hadoop.hdfs.server.common.Storage: Edits file /tmp/hadoop-hadoop/dfs/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
2013-04-28 10:11:09,149 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 112 saved in 0 seconds.
2013-04-28 10:11:09,157 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 112 saved in 0 seconds.
2013-04-28 10:11:09,160 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
2013-04-28 10:11:09,160 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 192 msecs
2013-04-28 10:11:09,176 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of blocks = 0
2013-04-28 10:11:09,177 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid blocks = 0
2013-04-28 10:11:09,177 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of under-replicated blocks = 0
2013-04-28 10:11:09,177 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of  over-replicated blocks = 0
2013-04-28 10:11:09,177 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode termination scan for invalid, over- and under-replicated blocks completed in 15 msec
2013-04-28 10:11:09,177 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 0 secs.
2013-04-28 10:11:09,177 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
2013-04-28 10:11:09,177 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
2013-04-28 10:11:09,192 INFO org.apache.hadoop.util.HostsFileReader: Refreshing hosts (include/exclude) list
2013-04-28 10:11:09,204 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source FSNamesystemMetrics registered.
2013-04-28 10:11:09,223 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcDetailedActivityForPort7000 registered.
2013-04-28 10:11:09,223 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcActivityForPort7000 registered.
2013-04-28 10:11:09,225 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: one/192.168.1.8:7000
2013-04-28 10:11:09,245 INFO org.apache.hadoop.ipc.Server: Starting SocketReader
2013-04-28 10:11:09,247 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 0 msec
2013-04-28 10:11:09,247 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 0 msec processing time, 0 msec clock time, 1 cycles
2013-04-28 10:11:09,248 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 0 msec
2013-04-28 10:11:09,248 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 0 msec processing time, 0 msec clock time, 1 cycles
2013-04-28 10:11:39,379 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2013-04-28 10:11:39,559 INFO org.apache.hadoop.http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2013-04-28 10:11:39,574 INFO org.apache.hadoop.http.HttpServer: dfs.webhdfs.enabled = false
2013-04-28 10:11:39,582 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50070
2013-04-28 10:11:39,583 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50070 webServer.getConnectors()[0].getLocalPort() returned 50070
2013-04-28 10:11:39,583 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50070
2013-04-28 10:11:39,583 INFO org.mortbay.log: jetty-6.1.26
2013-04-28 10:11:40,093 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50070
2013-04-28 10:11:40,093 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at: 0.0.0.0:50070
2013-04-28 10:11:40,111 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2013-04-28 10:11:40,170 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 7000: starting
2013-04-28 10:11:40,171 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 7000: starting
2013-04-28 10:11:40,171 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 7000: starting
2013-04-28 10:11:40,171 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on 7000: starting
2013-04-28 10:11:40,171 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 7000: starting
2013-04-28 10:11:40,171 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 7000: starting
2013-04-28 10:11:40,171 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 7000: starting
2013-04-28 10:11:40,172 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 7000: starting
2013-04-28 10:11:40,172 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 7000: starting
2013-04-28 10:11:40,172 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 7000: starting
2013-04-28 10:11:40,172 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 7000: starting
2013-04-28 10:11:41,177 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hadoop cause:java.io.IOException: File /tmp/hadoop-hadoop/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
2013-04-28 10:11:41,180 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 7000, call addBlock(/tmp/hadoop-hadoop/mapred/system/jobtracker.info, DFSClient_1259183364, null) from 192.168.1.8:37770: error: java.io.IOException: File /tmp/hadoop-hadoop/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
java.io.IOException: File /tmp/hadoop-hadoop/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1556)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)

Best Answer

You need to configure the dfs.name.dir and dfs.data.dir properties in hdfs-site.xml; otherwise they most likely default to temporary directories, which (as @rVr pointed out in his answer) are deleted when the system restarts.

As for suitable values, that depends on your system, but generally you should create one directory for dfs.name.dir (on your namenode server) and another directory for dfs.data.dir on each datanode (on most production clusters this is a comma-separated list of directories on different disks).
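
For illustration, here is a minimal hdfs-site.xml sketch; the /data/hadoop/... paths are hypothetical placeholders that you should replace with persistent directories that actually exist on your machines:

<?xml version="1.0"?>
<configuration>
  <!-- Where the namenode keeps the filesystem image and edit log.
       /data/hadoop/name is a hypothetical path; use any persistent
       directory outside /tmp on the namenode host. -->
  <property>
    <name>dfs.name.dir</name>
    <value>/data/hadoop/name</value>
  </property>
  <!-- Where each datanode stores its blocks. On production clusters this
       is usually a comma-separated list of directories on different disks,
       e.g. /disk1/hdfs/data,/disk2/hdfs/data. -->
  <property>
    <name>dfs.data.dir</name>
    <value>/data/hadoop/data</value>
  </property>
</configuration>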

Once you have created the directories and configured the values, make sure the hdfs-site.xml file is distributed across your cluster. Then reformat your namenode and finally start your HDFS services with the scripts in the bin folder (make sure you run the start script from the machine your namenode runs on).
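
As a sketch of that sequence, assuming the same /export/hadoop-1.0.1 layout used above (run from the namenode host):

#!/bin/bash
cd /export/hadoop-1.0.1/bin
# Stop all daemons before touching on-disk state.
./stop-all.sh
# Reformatting assigns the namenode a new namespaceID. Datanodes that still
# hold data written under the old ID will abort with "Incompatible
# namespaceIDs" (exactly the error in the datanode log above), so clear the
# old default data directory on every datanode first, e.g.:
#   rm -rf /tmp/hadoop-hadoop/dfs/data
./hadoop namenode -format
./start-all.sh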

Regarding this HDFS filesystem replication error, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/16260403/
