I want the HDFS command to fail when I create a subdirectory whose parent directory does not exist. When I use any of the FileSystem#mkdirs
overloads, I find that no exception is raised; instead, the missing parent directories are silently created:
import java.util.UUID
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}
// host and port are assumed to be defined elsewhere and point at the target NameNode.
val conf = new Configuration()
conf.set("fs.defaultFS", s"hdfs://$host:$port")
val fileSystem = FileSystem.get(conf)
val cwd = fileSystem.getWorkingDirectory
// Guarantee non-existence by appending two UUIDs.
val dirToCreate = new Path(cwd, new Path(UUID.randomUUID.toString, UUID.randomUUID.toString))
// Succeeds and silently creates every missing parent, like mkdir -p.
fileSystem.mkdirs(dirToCreate)
How can I force HDFS to raise an exception when the parent directory does not exist, without the burden of tedious existence checks?
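For reference, here is a minimal sketch of the manual check I would like to avoid (it reuses the fileSystem and dirToCreate values from the snippet above, and it is also racy: the parent could disappear between the exists call and mkdirs):
import java.io.FileNotFoundException
val parent = dirToCreate.getParent
if (!fileSystem.exists(parent)) {
  // Refuse to create the directory when its parent is missing.
  throw new FileNotFoundException(s"Parent directory doesn't exist: $parent")
}
fileSystem.mkdirs(dirToCreate)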
Best answer
The FileSystem API does not support this type of behavior. Instead, you should use FileContext#mkdir,
whose third argument is an explicit createParent flag. For example:
import java.util.UUID
import org.apache.hadoop.fs.{FileContext, Path}
import org.apache.hadoop.fs.permission.FsPermission
val files = FileContext.getFileContext()
val cwd = files.getWorkingDirectory
val permissions = new FsPermission("644") // octal mode for the new directory
val createParent = false // fail instead of creating missing parents
// Guarantee non-existence by appending two UUIDs.
val dirToCreate = new Path(cwd, new Path(UUID.randomUUID.toString, UUID.randomUUID.toString))
// Throws FileNotFoundException because the parent directory does not exist.
files.mkdir(dirToCreate, permissions, createParent)
The example above will throw:
java.io.FileNotFoundException: Parent directory doesn't exist: /user/erip/f425a2c9-1007-487b-8488-d73d447c6f79
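If you would rather handle the failure than let it propagate, here is a minimal sketch of catching the exception (it assumes the files, permissions, createParent, and dirToCreate values from the example above):
import java.io.FileNotFoundException
import scala.util.{Failure, Success, Try}
Try(files.mkdir(dirToCreate, permissions, createParent)) match {
  case Success(_) => println(s"Created $dirToCreate")
  case Failure(e: FileNotFoundException) => println(s"Parent is missing: ${e.getMessage}")
  case Failure(e) => throw e // propagate unrelated I/O errors
}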
Regarding "scala - How to prevent Hadoop's HDFS API from creating parent directories?", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/47857456/