
Running wordcount in Spark on a file read from HDFS

2014-11-30 16:09
1. Environment configuration

Hadoop nodes: sg202 (NameNode, SecondaryNameNode), sg206 (DataNode), sg207 (DataNode), sg208 (DataNode)

Spark nodes: sg201 (Master), sg211 (Worker)

2. Reading the file from HDFS and running wordcount

a. Log in to the Hadoop master node sg202 and upload the file to be word-counted to HDFS

[root@sg202 hadoop-1.0.4]# hadoop fs -put /home/hadoop-1.0.4/README.txt input
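
Since `input` is a relative path, the file lands under the running user's HDFS home directory, `/user/root/input` here. To confirm the upload succeeded before moving on, a quick listing works (the path reflects this cluster's layout):

[root@sg202 hadoop-1.0.4]# hadoop fs -ls /user/root/input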

b. Log in to the Spark Master node (sg201) and start spark-shell

[root@sg201 spark-0.7.3]# MASTER=spark://172.16.48.201:7077 ./spark-shell
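
The MASTER environment variable is what points spark-shell at the standalone cluster instead of local mode; 172.16.48.201:7077 matches sg201's Master from the setup above. Once the prompt appears, the shell provides a ready-made SparkContext, so a minimal sanity check is simply to evaluate it (the exact printed value varies by Spark version):

scala> sc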

c. Run wordcount

scala> val file=sc.textFile("hdfs://172.16.48.202:9000/user/root/input/README.txt")
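
textFile only records where the data lives; nothing is read until an action runs. The hdfs:// URI must match the NameNode address configured in the cluster's fs.default.name (sg202:9000 here). A cheap way to fail fast on a bad URI or path is to pull a single record (if your Spark build lacks first(), take(1) does the same job):

scala> file.first()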


scala> val count=file.flatMap(line => line.split(" ")).map(word => (word,1)).reduceByKey(_+_)
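
This chains the three classic wordcount steps: flatMap splits each line into words, map turns every word into a (word, 1) pair, and reduceByKey sums the counts per word. As a small optional extension (sortByKey is a standard pair-RDD operation, but check that your Spark version has it), the most frequent words can be pulled by flipping each pair and sorting descending:

scala> count.map { case (w, c) => (c, w) }.sortByKey(false).take(10)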


scala> count.collect()
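
collect() is the action that triggers the whole job and ships every (word, count) pair back to the driver, which is fine for a small README but risky for large inputs. A sketch of writing the result back to HDFS instead (the output path is an assumption, and it must not already exist):

scala> count.saveAsTextFile("hdfs://172.16.48.202:9000/user/root/output")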