Spark RDD countByValue
2015-01-18 14:45
package com.latrobe.spark

import org.apache.spark.{SparkContext, SparkConf}

/**
 * Created by spark on 15-1-18.
 * Count the number of occurrences of each element in the collection.
 */
object CountByValue {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("spark-demo").setMaster("local")
    val sc = new SparkContext(conf)
    val xx = sc.parallelize(List(1, 1, 1, 1, 2, 2, 3, 6, 5, 9))
    // Prints: Map(2 -> 2, 5 -> 1, 1 -> 4, 9 -> 1, 3 -> 1, 6 -> 1)
    println(xx.countByValue())
    sc.stop()
  }
}
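Note that countByValue is an action: it collects the per-value counts back to the driver as a local Map, so it is only appropriate when the number of distinct values is small. Semantically it is equivalent to pairing each element with 1 and reducing by key. A minimal pure-Scala sketch of the same semantics (no Spark dependency; the object name is illustrative):

```scala
object CountByValueSketch {
  def main(args: Array[String]): Unit = {
    val xs = List(1, 1, 1, 1, 2, 2, 3, 6, 5, 9)
    // Group identical values, then take each group's size -- the same
    // result countByValue() would return as a driver-side Map[T, Long].
    val counts: Map[Int, Long] =
      xs.groupBy(identity).map { case (v, occ) => v -> occ.size.toLong }
    println(counts) // 1 -> 4, 2 -> 2, 3 -> 1, 5 -> 1, 6 -> 1, 9 -> 1 (order unspecified)
  }
}
```

For large cardinalities, a safer pattern is to keep the aggregation distributed, e.g. `xx.map((_, 1L)).reduceByKey(_ + _)`, and only collect if the result is known to fit in driver memory.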