
Spark: study notes + WordCount (word count) -- 22

2015-04-07 23:46
1. Using an RDD's .filter to keep only the lines containing "ERROR"

-----------------------------------------------------------------

// 'file' is assumed to be an RDD of lines, e.g. created with sc.textFile(...)
val errors = file.filter(line => line.contains("ERROR"))
errors.count() // action: count the matching lines


2. Spark's goal: writing a distributed program should feel like writing a single-machine program

3. Distributed data abstraction: the two ways to create a Resilient Distributed Dataset (RDD) (see the sketch after this list)

----------------------------------------------------------------

a: create it from a Hadoop file system (or other Hadoop-compatible storage)

b: derive a new RDD by transforming a parent RDD
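
A minimal sketch of both creation paths; the local master URL and HDFS path below are illustrative assumptions, not taken from the original post:

val conf = new SparkConf().setAppName("RDDCreation").setMaster("local[*]") // hypothetical local master
val sc = new SparkContext(conf)
// a: create an RDD from a Hadoop-compatible file system (path is an example)
val fromHdfs = sc.textFile("hdfs://localhost:9000/datatnt/textworda.txt")
// b: derive a new RDD by transforming the parent RDD
val nonEmpty = fromHdfs.filter(line => line.nonEmpty)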

4. DSM: the traditional distributed shared memory model (in contrast to RDDs)

5. Akka: the Scala-based communication framework used by Spark

6. Fault tolerance: Spark chooses to record updates via lineage (the alternative is data checkpointing)

Lineage mechanism, checkpoint mechanism, shuffle mechanism (see the sketch below)
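
A hedged sketch of lineage versus checkpointing, reusing the SparkContext from above; the checkpoint directory is a hypothetical path:

// Lineage: Spark only records the chain of transformations that produced an RDD
val logs = sc.textFile("hdfs://localhost:9000/datatnt/textworda.txt")
val errorLines = logs.filter(line => line.contains("ERROR"))
// Checkpointing: optionally materialize an RDD to stable storage to cut a long lineage chain
sc.setCheckpointDir("hdfs://localhost:9000/checkpointtnt") // hypothetical directory
errorLines.checkpoint()
errorLines.count() // the action triggers execution; lost partitions are recomputed from lineage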

-----------------------------------------------------------------------------------------------

-----------------------------------------------------------------------------------------------

WordCount: counting word frequencies in a file

package ymhd

import org.apache.log4j.{Level, Logger}
import org.apache.spark._
import org.apache.spark.SparkContext._
import scala.collection.mutable.ListBuffer

/**
 * Created by sendoh on 2015/4/6.
 */
object WordCount {
  def main(args: Array[String]): Unit = {
    // Reduce log noise from Spark and Jetty
    Logger.getLogger("org.apache.spark").setLevel(Level.WARN)
    Logger.getLogger("org.eclipse.jetty.server").setLevel(Level.OFF)
    // Expect three arguments: dependency jars, input path, output path
    if (args.length != 3) {
      println("Usage: java -jar code.jar dependency_jars file_location save_location")
      System.exit(0)
    }
    // Collect the comma-separated dependency jars
    val jars = ListBuffer[String]()
    args(0).split(',').foreach(jars += _)
    // Build the Spark configuration (Spark home and master URL match the local setup)
    val conf = new SparkConf()
      .setAppName("WordCount")
      .setSparkHome("/usr/local/spark-1.2.0-bin-hadoop2.4")
      .setJars(jars)
      .setMaster("spark://192.168.30.129:7077")
    val sc = new SparkContext(conf)
    // Read the input, split each line into words, count each word, and save the result
    val textRDD = sc.textFile(args(1)) // e.g. hdfs://localhost:9000/datatnt/textworda.txt
    textRDD.flatMap(line => line.split(" "))
      .map(word => (word, 1))
      .reduceByKey(_ + _)
      .saveAsSequenceFile(args(2)) // e.g. hdfs://localhost:9000/outputtnt/wordcount
    sc.stop()
  }
}
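
As a usage note (not from the original post), assuming the program is packaged as code.jar and dep1.jar/dep2.jar stand in for the dependency jars, a job like this is typically launched against the same standalone master with spark-submit:

spark-submit --class ymhd.WordCount \
  --master spark://192.168.30.129:7077 \
  code.jar \
  dep1.jar,dep2.jar \
  hdfs://localhost:9000/datatnt/textworda.txt \
  hdfs://localhost:9000/outputtnt/wordcount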
