
HDFS Command-Line Operations

2013-08-15 15:04
Once the cluster is running, Hadoop can be used from the command line.

(1) All commands (first add $HADOOP_HOME/bin to the $PATH variable in .bashrc)
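
A minimal sketch of that .bashrc change; the install path here is an assumption (matching the hadoop-0.21.0 directory in the prompts below), so adjust it to your environment:

# Append to ~/.bashrc:
export HADOOP_HOME=/home/hadoop/hadoop-0.21.0   # assumed install path
export PATH=$PATH:$HADOOP_HOME/bin

# Then reload it in the current shell:
source ~/.bashrc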


[hadoop@node14 hadoop-0.21.0]$ ll $HADOOP_HOME/bin
total 88
-rwxr-xr-x 1 hadoop hadoop 4131 Aug 17  2010 hadoop
-rwxr-xr-x 1 hadoop hadoop 8658 Aug 17  2010 hadoop-config.sh
-rwxr-xr-x 1 hadoop hadoop 3841 Aug 17  2010 hadoop-daemon.sh
-rwxr-xr-x 1 hadoop hadoop 1242 Aug 17  2010 hadoop-daemons.sh
-rwxr-xr-x 1 hadoop hadoop 4130 Aug 17  2010 hdfs
-rwxr-xr-x 1 hadoop hadoop 1201 Aug 17  2010 hdfs-config.sh
-rwxr-xr-x 1 hadoop hadoop 3387 Aug 17  2010 mapred
-rwxr-xr-x 1 hadoop hadoop 1207 Aug 17  2010 mapred-config.sh
-rwxr-xr-x 1 hadoop hadoop 2720 Aug 17  2010 rcc
-rwxr-xr-x 1 hadoop hadoop 2058 Aug 17  2010 slaves.sh
-rwxr-xr-x 1 hadoop hadoop 1367 Aug 17  2010 start-all.sh
-rwxr-xr-x 1 hadoop hadoop 1018 Aug 17  2010 start-balancer.sh
-rwxr-xr-x 1 hadoop hadoop 1778 Aug 17  2010 start-dfs.sh
-rwxr-xr-x 1 hadoop hadoop 1255 Aug 17  2010 start-mapred.sh
-rwxr-xr-x 1 hadoop hadoop 1359 Aug 17  2010 stop-all.sh
-rwxr-xr-x 1 hadoop hadoop 1069 Aug 17  2010 stop-balancer.sh
-rwxr-xr-x 1 hadoop hadoop 1277 Aug 17  2010 stop-dfs.sh
-rwxr-xr-x 1 hadoop hadoop 1163 Aug 17  2010 stop-mapred.sh
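
The start-*.sh and stop-*.sh scripts above manage the daemons; for example, the whole cluster can be brought up and down with:

start-all.sh    # start the HDFS and MapReduce daemons on all configured nodes
stop-all.sh     # stop them again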

(2) The hadoop command


[hadoop@node14 hadoop-0.21.0]$ hadoop
Usage: hadoop [--config confdir] COMMAND
where COMMAND is one of:
  fs                   run a generic filesystem user client
  version              print the version
  jar <jar>            run a jar file
  distcp <srcurl> <desturl>   copy file or directories recursively
  archive -archiveName NAME -p <parent path> <src>* <dest>   create a hadoop archive
  classpath            prints the class path needed to get the
                       Hadoop jar and the required libraries
  daemonlog            get/set the log level for each daemon
 or
  CLASSNAME            run the class named CLASSNAME

Most commands print help when invoked w/o parameters.
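
Two of the simpler subcommands make a quick smoke test of the installation:

hadoop version      # print the Hadoop version
hadoop classpath    # print the classpath needed for the Hadoop jar and its libraries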

(3) hadoop fs


[hadoop@node14 hadoop-0.21.0]$ hadoop fs
Usage: java FsShell
           [-ls <path>]
           [-lsr <path>]
           [-df [<path>]]
           [-du [-s] [-h] <path>]
           [-dus <path>]
           [-count[-q] <path>]
           [-mv <src> <dst>]
           [-cp <src> <dst>]
           [-rm [-skipTrash] <path>]
           [-rmr [-skipTrash] <path>]
           [-expunge]
           [-put <localsrc> ... <dst>]
           [-copyFromLocal <localsrc> ... <dst>]
           [-moveFromLocal <localsrc> ... <dst>]
           [-get [-ignoreCrc] [-crc] <src> <localdst>]
           [-getmerge <src> <localdst> [addnl]]
           [-cat <src>]
           [-text <src>]
           [-copyToLocal [-ignoreCrc] [-crc] <src> <localdst>]
           [-moveToLocal [-crc] <src> <localdst>]
           [-mkdir <path>]
           [-setrep [-R] [-w] <rep> <path/file>]
           [-touchz <path>]
           [-test -[ezd] <path>]
           [-stat [format] <path>]
           [-tail [-f] <file>]
           [-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
           [-chown [-R] [OWNER][:[GROUP]] PATH...]
           [-chgrp [-R] GROUP PATH...]
           [-help [cmd]]

Generic options supported are
-conf <configuration file>     specify an application configuration file
-D <property=value>            use value for given property
-fs <local|namenode:port>      specify a namenode
-jt <local|jobtracker:port>    specify a job tracker
-files <comma separated list of files>    specify comma separated files to be copied to the map reduce cluster
-libjars <comma separated list of jars>    specify comma separated jar files to include in the classpath.
-archives <comma separated list of archives>    specify comma separated archives to be unarchived on the compute machines.

The general command line syntax is
bin/hadoop command [genericOptions] [commandOptions]
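
The generic options work with hadoop fs as well; for example (the namenode host/port and the property value here are illustrative, not taken from this cluster's configuration):

hadoop fs -fs hdfs://node14:9000 -ls /                        # address an explicit namenode
hadoop fs -D dfs.replication=2 -put test.txt /tmp/test.txt    # override one property for this command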

(4) HDFS operations


hadoop fs
hadoop fs -ls
hadoop fs -mkdir firstdir                        // create a directory in HDFS
hadoop fs -rmr firstdir                          // remove a directory from HDFS
hadoop fs -put test.txt first.txt                // copy a file from the local filesystem into HDFS
hadoop fs -cat first.txt
hadoop fs -df
hadoop fs -get first.txt FirstTXTfromHDFS.txt    // fetch a file from HDFS to the local filesystem
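
Putting these together, a quick round trip verifies that writes and reads work end to end (using the same file names as above):

echo "hello hdfs" > test.txt
hadoop fs -put test.txt first.txt
hadoop fs -cat first.txt                         # should print: hello hdfs
hadoop fs -get first.txt FirstTXTfromHDFS.txt
diff test.txt FirstTXTfromHDFS.txt               # no output means the copy is identical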

If writing a file to HDFS fails with an exception, work through the following checks:

(0) Check that the IP-to-hostname mapping is correct

node14 has both an external and an internal IP, and /etc/hosts contains two entries mapping these IPs to the hostname. If the external IP is listed before the internal one, netstat -npl shows that ports 9000 and 9001 are bound to the external IP; the internal-IP entry should therefore come before the external-IP entry in /etc/hosts. Alternatively, use IP addresses instead of hostnames throughout the configuration files under conf.
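
A sketch of the /etc/hosts ordering described above; the addresses are placeholders, not node14's real IPs:

10.0.0.14      node14    # internal IP — must come first so the daemons bind to it
203.0.113.14   node14    # external IP

# Check which address ports 9000 and 9001 are actually bound to:
netstat -npl | grep -E ':9000|:9001'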

(1) Turn off the firewall

sudo /etc/init.d/iptables stop

(2) Check that there is enough free disk space

df -hl

(3) Check that the working directories are intact

hadoop.tmp.dir defaults to /tmp/hadoop-${user.name}.

Delete the files under /tmp, rerun hadoop namenode -format, and restart all the daemons.
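
As a sketch of that reset (it erases all HDFS data and metadata, so only do this on a disposable cluster; assumes the default hadoop.tmp.dir):

stop-all.sh
rm -rf /tmp/hadoop-${USER}      # remove the default hadoop.tmp.dir: /tmp/hadoop-<user>
hadoop namenode -format         # rebuild the namenode metadata from scratch
start-all.sh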

(4) Start each daemon individually

Start the corresponding daemon on the namenode and on each datanode:

$ hadoop-daemon.sh start namenode
$ hadoop-daemon.sh start datanode
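
Once started, jps (shipped with the JDK) is a quick way to confirm the daemons are running:

jps    # expect NameNode on the namenode host and DataNode on each datanode host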