Hadoop error "too many failed volumes" while running command to get file permissions : ExitCodeException exitCode=-1073741515

While running dfs on my Windows machine, I am getting this error:

-- file path: C:/hadoop/data/datanode
2020-05-05 11:44:59,230 WARN checker.StorageLocationChecker: Exception checking StorageLocation [DISK]file:/C:/hadoop/data/datanode
java.lang.RuntimeException: Error while running command to get file permissions : ExitCodeException exitCode=-1073741515:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:1009)
at org.apache.hadoop.util.Shell.run(Shell.java:902)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1227)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:1321)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:1303)
at org.apache.hadoop.fs.FileUtil.execCommand(FileUtil.java:1343)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfoByNonNativeIO(RawLocalFileSystem.java:726)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:717)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:678)
at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:233)
at org.apache.hadoop.util.DiskChecker.checkDirInternal(DiskChecker.java:141)
at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:116)
at org.apache.hadoop.hdfs.server.datanode.StorageLocation.check(StorageLocation.java:239)
at org.apache.hadoop.hdfs.server.datanode.StorageLocation.check(StorageLocation.java:52)
at org.apache.hadoop.hdfs.server.datanode.checker.ThrottledAsyncChecker$1.call(ThrottledAsyncChecker.java:142)
at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfoByNonNativeIO(RawLocalFileSystem.java:766)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:717)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:678)
at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:233)
at org.apache.hadoop.util.DiskChecker.checkDirInternal(DiskChecker.java:141)
at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:116)
at org.apache.hadoop.hdfs.server.datanode.StorageLocation.check(StorageLocation.java:239)
at org.apache.hadoop.hdfs.server.datanode.StorageLocation.check(StorageLocation.java:52)
at org.apache.hadoop.hdfs.server.datanode.checker.ThrottledAsyncChecker$1.call(ThrottledAsyncChecker.java:142)
at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
2020-05-05 11:44:59,243 ERROR datanode.DataNode: Exception in secureMain
org.apache.hadoop.util.DiskChecker$DiskErrorException: Too many failed volumes - current valid volumes: 0, volumes configured: 1, volumes failed: 1, volume failures tolerated: 0
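If it helps to decode it: the exit code in the RuntimeException is a Windows NTSTATUS value in disguise. Reinterpreted as an unsigned 32-bit number, -1073741515 is 0xC0000135, which is STATUS_DLL_NOT_FOUND. A minimal sketch of the conversion (the class name is mine, purely illustrative):

```java
public class ExitCodeDecoder {
    public static void main(String[] args) {
        int exitCode = -1073741515; // taken from the log above

        // Reinterpret the signed 32-bit exit code as an unsigned NTSTATUS value.
        long ntStatus = Integer.toUnsignedLong(exitCode);
        System.out.printf("NTSTATUS = 0x%08X%n", ntStatus); // prints: NTSTATUS = 0xC0000135

        // 0xC0000135 is STATUS_DLL_NOT_FOUND: the child process that
        // Shell.runCommand spawned (winutils.exe on Windows) exited before it
        // could do any work, because a DLL it depends on could not be loaded.
    }
}
```

So the permission check never actually ran; the helper executable Hadoop shells out to failed to start, which on Windows is commonly reported to be a missing Visual C++ runtime DLL (e.g. MSVCR100.dll) on the machine.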

Has anyone faced this issue?
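For completeness, the final DiskChecker error is just the consequence of the first one: a single data directory is configured, and dfs.datanode.failed.volumes.tolerated defaults to 0, so the one volume whose check crashed exceeds the (zero) failure budget and aborts DataNode startup. A minimal hdfs-site.xml sketch matching the numbers in the log (values are illustrative, taken from the paths above):

```xml
<!-- hdfs-site.xml sketch matching "volumes configured: 1, volume failures
     tolerated: 0" from the log above; values are illustrative -->
<configuration>
  <property>
    <!-- The single storage location reported as [DISK]file:/C:/hadoop/data/datanode -->
    <name>dfs.datanode.data.dir</name>
    <value>/C:/hadoop/data/datanode</value>
  </property>
  <property>
    <!-- Defaults to 0: any single volume failure is fatal at startup.
         Raising it would not fix the underlying DLL problem. -->
    <name>dfs.datanode.failed.volumes.tolerated</name>
    <value>0</value>
  </property>
</configuration>
```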
