Hadoop error "too many failed volumes" while running command to get file permissions : ExitCodeException exitCode=-1073741515
Question
While running DFS on my Windows machine I am getting this error:
-- file path: C:/hadoop/data/datanode
2020-05-05 11:44:59,230 WARN checker.StorageLocationChecker: Exception checking StorageLocation [DISK]file:/C:/hadoop/data/datanode
java.lang.RuntimeException: Error while running command to get file permissions : ExitCodeException exitCode=-1073741515:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:1009)
at org.apache.hadoop.util.Shell.run(Shell.java:902)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1227)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:1321)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:1303)
at org.apache.hadoop.fs.FileUtil.execCommand(FileUtil.java:1343)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfoByNonNativeIO(RawLocalFileSystem.java:726)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:717)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:678)
at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:233)
at org.apache.hadoop.util.DiskChecker.checkDirInternal(DiskChecker.java:141)
at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:116)
at org.apache.hadoop.hdfs.server.datanode.StorageLocation.check(StorageLocation.java:239)
at org.apache.hadoop.hdfs.server.datanode.StorageLocation.check(StorageLocation.java:52)
at org.apache.hadoop.hdfs.server.datanode.checker.ThrottledAsyncChecker$1.call(ThrottledAsyncChecker.java:142)
at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfoByNonNativeIO(RawLocalFileSystem.java:766)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:717)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:678)
at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:233)
at org.apache.hadoop.util.DiskChecker.checkDirInternal(DiskChecker.java:141)
at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:116)
at org.apache.hadoop.hdfs.server.datanode.StorageLocation.check(StorageLocation.java:239)
at org.apache.hadoop.hdfs.server.datanode.StorageLocation.check(StorageLocation.java:52)
at org.apache.hadoop.hdfs.server.datanode.checker.ThrottledAsyncChecker$1.call(ThrottledAsyncChecker.java:142)
at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
2020-05-05 11:44:59,243 ERROR datanode.DataNode: Exception in secureMain
org.apache.hadoop.util.DiskChecker$DiskErrorException: Too many failed volumes - current valid volumes: 0, volumes configured: 1, volumes failed: 1, volume failures tolerated: 0
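As far as I can tell, the negative exit code is a 32-bit Windows NTSTATUS value printed as a signed int. A minimal sketch to decode it (plain Java, nothing Hadoop-specific):

```java
public class ExitCodeDecode {
    public static void main(String[] args) {
        int exitCode = -1073741515;
        // Integer.toHexString treats the int as unsigned 32 bits:
        // -1073741515 -> "c0000135", i.e. NTSTATUS 0xC0000135.
        System.out.println("0x" + Integer.toHexString(exitCode).toUpperCase());
    }
}
```

0xC0000135 is STATUS_DLL_NOT_FOUND ("Unable To Locate Component"), i.e. the command Hadoop spawned to read file permissions could not load a DLL it depends on.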
Has anyone faced this issue?
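In case it helps with diagnosis, the failing step can be reproduced outside the DataNode. The sketch below re-runs the same kind of permission probe via org.apache.hadoop.util.Shell; Shell.execCommand appears in the trace above, while getGetPermissionCommand is my assumption for how the probe command is built, and the directory is just the path from the log. On Windows the probe resolves to winutils.exe, so if winutils.exe or one of its DLL dependencies (typically the Visual C++ runtime) is missing, this should fail with the same exit code:

```java
import java.io.IOException;
import java.util.Arrays;

import org.apache.hadoop.util.Shell;

public class WinutilsPermissionProbe {
    public static void main(String[] args) throws IOException {
        // Path taken from the log above; override via args[0] if needed.
        String dir = args.length > 0 ? args[0] : "C:/hadoop/data/datanode";

        // Assumption: Shell.getGetPermissionCommand() yields
        // "<HADOOP_HOME>\\bin\\winutils.exe ls -F" on Windows, "ls -ld" on Unix.
        String[] base = Shell.getGetPermissionCommand();
        String[] cmd = Arrays.copyOf(base, base.length + 1);
        cmd[base.length] = dir;

        System.out.println("Running: " + Arrays.toString(cmd));
        // Shell.execCommand is the call visible at Shell.java:1321 in the trace.
        System.out.println(Shell.execCommand(cmd));
    }
}
```

The "volume failures tolerated: 0" in the last log line just reflects the default dfs.datanode.failed.volumes.tolerated of 0, so with a single configured volume any failure is fatal; the root cause here is the permission command failing, not the disk itself.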