No FileSystem for scheme: hdfs / No FileSystem for scheme: file
2016-04-12 23:42
483 views
Why this happened to us
Different JARs (hadoop-commons for LocalFileSystem, hadoop-hdfs for DistributedFileSystem) each contain a different file called org.apache.hadoop.fs.FileSystem in their META-INF/services directory. This file lists the canonical class names of the filesystem implementations they want to declare (this is called a Service Provider Interface; see org.apache.hadoop.fs.FileSystem, around line 2116).
When we use maven-assembly, it merges all our JARs into one, and all the META-INF/services/org.apache.hadoop.fs.FileSystem files overwrite each other. Only one of these files remains (the last one that was added). In this case, the FileSystem list from hadoop-commons overwrote the list from hadoop-hdfs, so DistributedFileSystem was no longer declared.
How we fixed it
After loading the Hadoop configuration, but just before doing anything FileSystem-related, we call this:
hadoopConfig.set("fs.hdfs.impl", org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());
hadoopConfig.set("fs.file.impl", org.apache.hadoop.fs.LocalFileSystem.class.getName());
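A build-time alternative (a sketch, assuming the fat JAR is built with the maven-shade-plugin rather than maven-assembly): the ServicesResourceTransformer concatenates the META-INF/services files from all JARs instead of letting the last one win, so both FileSystem lists survive the merge:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <transformers>
          <!-- Merges META-INF/services entries (e.g. org.apache.hadoop.fs.FileSystem)
               from all JARs instead of keeping only the last one added -->
          <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
        </transformers>
      </configuration>
    </execution>
  </executions>
</plugin>
```

With this transformer in place, the programmatic hadoopConfig.set(...) workaround above should no longer be needed.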
------------------------------------
I've got a similar problem with "java -jar xx.jar" in hadoop-2.0.5-alpha:
java.io.IOException: No FileSystem for scheme: file
but it works well when running with "hadoop jar".
When I add the following config into core-default.xml, it works with "java -jar":
<property>
<name>fs.file.impl</name>
<value>org.apache.hadoop.fs.LocalFileSystem</value>
<description>The FileSystem for file: uris.</description>
</property>
<property>
<name>fs.hdfs.impl</name>
<value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
<description>The FileSystem for hdfs: uris.</description>
</property>
So maybe it is not a problem of missing required dependencies. I don't know why, but it works!
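Note that core-default.xml ships inside the Hadoop JARs and is not meant to be edited by hand; the same two properties can instead go into a core-site.xml on the application's classpath, which overrides core-default.xml (a sketch of the equivalent file):

```xml
<?xml version="1.0"?>
<!-- core-site.xml placed on the application classpath;
     its values override those in core-default.xml -->
<configuration>
  <property>
    <name>fs.file.impl</name>
    <value>org.apache.hadoop.fs.LocalFileSystem</value>
  </property>
  <property>
    <name>fs.hdfs.impl</name>
    <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
  </property>
</configuration>
```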