Accessing HDFS through Hadoop's C API
2011-11-23 17:34
When compiling and running code that accesses HDFS through Hadoop's C API, I ran into quite a few problems. Here is a summary:
System: Ubuntu 11.04, Hadoop-0.20.203.0
The sample code is the one provided in the official documentation:
#include "hdfs.h" int main(int argc, char **argv) { hdfsFS fs = hdfsConnect("default", 0); const char* writePath = "/tmp/testfile.txt"; hdfsFile writeFile = hdfsOpenFile(fs, writePath, O_WRONLY|O_CREAT, 0, 0, 0); if(!writeFile) { fprintf(stderr, "Failed to open %s for writing!\n", writePath); exit(-1); } char* buffer = "Hello, World!"; tSize num_written_bytes = hdfsWrite(fs, writeFile, (void*)buffer, strlen(buffer)+1); if (hdfsFlush(fs, writeFile)) { fprintf(stderr, "Failed to 'flush' %s\n", writePath); exit(-1); } hdfsCloseFile(fs, writeFile); }
Compilation: the official site describes it like this:
See the Makefile for hdfs_test.c in the libhdfs source directory (${HADOOP_HOME}/src/c++/libhdfs/Makefile) or something like:
gcc above_sample.c -I${HADOOP_HOME}/src/c++/libhdfs -L${HADOOP_HOME}/libhdfs -lhdfs -o above_sample
But I tried both approaches and neither worked. It turned out the link line was missing the following (libhdfs is a JNI wrapper, so the program must also link against libjvm.so):
LIB = -L$(HADOOP_INSTALL)/c++/Linux-i386-32/lib/
libjvm = /usr/lib/jvm/java-6-openjdk/jre/lib/i386/client/libjvm.so
So the complete Makefile is:
HADOOP_INSTALL = /home/fzuir/hadoop-0.20.203.0
PLATFORM = Linux-i386-32
JAVA_HOME = /usr/lib/jvm/java-6-openjdk/
CPPFLAGS = -I$(HADOOP_INSTALL)/src/c++/libhdfs
LIB = -L$(HADOOP_INSTALL)/c++/$(PLATFORM)/lib/
libjvm = /usr/lib/jvm/java-6-openjdk/jre/lib/i386/client/libjvm.so
LDFLAGS += -lhdfs

testHdfs: testHdfs.c
	gcc testHdfs.c $(CPPFLAGS) $(LIB) $(LDFLAGS) $(libjvm) -o testHdfs

clean:
	rm testHdfs
Good: with this Makefile, make testHdfs compiles cleanly. But running ./testHdfs produced the following errors:
1.
./testHdfs: error while loading shared libraries: xxx.so.0: cannot open shared object file: No such file or directory
Fix: add the directory containing xxx.so.0 (for this setup, most likely the libjvm and libhdfs directories referenced in the Makefile) to /etc/ld.so.conf, then run /sbin/ldconfig -v.
2.
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/commons/configuration/Configuration
...
Call to org.apache.hadoop.fs.Filesystem::get(URI, Configuration) failed!
Exception in thread "main" java.lang.NullPointerException
Call to get configuration object from filesystem failed!
Fix: libhdfs launches a JVM through JNI, and that JVM takes its classpath from the CLASSPATH environment variable, so edit /etc/profile and add the required jars to CLASSPATH:
HADOOP_HOME=/home/fzuir/hadoop-0.20.203.0
export PATH=$HADOOP_HOME/bin:$PATH
export CLASSPATH=.:$HADOOP_HOME/lib/commons-lang-2.4.jar:$HADOOP_HOME/hadoop-core-1.0.1.jar:$HADOOP_HOME/lib/commons-logging-api-1.0.4.jar:$HADOOP_HOME/lib/commons-configuration-1.6.jar:$JAVA_HOME/lib:$JRE_HOME/lib:$HADOOP_HOME/contrib/streaming/hadoop-streaming-1.0.1.jar:$CLASSPATH
Finally: congratulations, the problem is solved.