
No FileSystem for scheme: hdfs,No FileSystem for scheme: file

2016-04-12 23:42
Why this happened to us

Different JARs (hadoop-commons for LocalFileSystem, hadoop-hdfs for DistributedFileSystem) each contain a different file called org.apache.hadoop.fs.FileSystem in their META-INF/services directory. This file lists the canonical class names of the filesystem implementations the JAR wants to declare (this mechanism is a Service Provider Interface; see org.apache.hadoop.fs.FileSystem, around line 2116).

When we use maven-assembly, it merges all our JARs into one, and all the META-INF/services/org.apache.hadoop.fs.FileSystem files overwrite each other. Only one of these files remains (the last one that was added). In this case, the FileSystem list from hadoop-commons overwrites the list from hadoop-hdfs, so DistributedFileSystem was no longer declared.
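The overwrite can be reproduced outside Maven with plain files. A minimal sketch (the directory names are made up for illustration; real JARs carry longer service lists):

```shell
# Simulate the service files shipped in two different Hadoop JARs.
mkdir -p hdfs-jar/META-INF/services commons-jar/META-INF/services
echo "org.apache.hadoop.hdfs.DistributedFileSystem" \
  > hdfs-jar/META-INF/services/org.apache.hadoop.fs.FileSystem
echo "org.apache.hadoop.fs.LocalFileSystem" \
  > commons-jar/META-INF/services/org.apache.hadoop.fs.FileSystem

# What maven-assembly effectively does: the last copy wins.
cp hdfs-jar/META-INF/services/org.apache.hadoop.fs.FileSystem merged
cp commons-jar/META-INF/services/org.apache.hadoop.fs.FileSystem merged
cat merged    # only LocalFileSystem is left; the hdfs: scheme is no longer declared

# What a correct merge would do: concatenate the lists.
cat hdfs-jar/META-INF/services/org.apache.hadoop.fs.FileSystem \
    commons-jar/META-INF/services/org.apache.hadoop.fs.FileSystem > merged
cat merged    # both implementations declared
```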

How we fixed it

After loading the hadoop configuration, but just before doing anything Filesystem-related, we call this:
hadoopConfig.set("fs.hdfs.impl", org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());
hadoopConfig.set("fs.file.impl", org.apache.hadoop.fs.LocalFileSystem.class.getName());
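An alternative fix at build time (not what we did above, but commonly suggested for this exact problem) is to switch from maven-assembly to the maven-shade-plugin and add its ServicesResourceTransformer, which concatenates the META-INF/services files from all JARs instead of letting them overwrite each other. A sketch of the relevant pom.xml fragment:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <transformers>
          <!-- Merges META-INF/services files from all JARs instead of overwriting them -->
          <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
        </transformers>
      </configuration>
    </execution>
  </executions>
</plugin>
```

With this in place, the fat JAR's service file declares both LocalFileSystem and DistributedFileSystem, and no explicit fs.hdfs.impl / fs.file.impl settings are needed.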


------------------------------------
I've got a similar problem with "java -jar xx.jar" in hadoop-2.0.5-alpha:
java.io.IOException: No FileSystem for scheme: file

but it works well when running with "hadoop jar".

When I add the following config to core-default.xml, it works with "java -jar":

<property>
  <name>fs.file.impl</name>
  <value>org.apache.hadoop.fs.LocalFileSystem</value>
  <description>The FileSystem for file: uris.</description>
</property>

<property>
  <name>fs.hdfs.impl</name>
  <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
  <description>The FileSystem for hdfs: uris.</description>
</property>


So maybe it is not a problem of missing required dependencies. I don't know why, but it works!