spark--transform operator--cogroup
2017-07-17 17:56
import org.apache.spark.{SparkConf, SparkContext}

/**
  * Created by yz02 on 2017/6/16.
  */
object T_cogroup {

  System.setProperty("hadoop.home.dir", "F:\\hadoop-2.6.5")

  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("cogroup_test").setMaster("local")
    val sc = new SparkContext(conf)

    val rdd  = sc.parallelize(List(("A", 1), ("B", 2), ("C", 3)))
    val rdd1 = sc.parallelize(List(("A", 4)))
    val rdd2 = sc.parallelize(List(("A", 4), ("A", 5)))

    // Called on datasets of type (K, V) and (K, W); returns a dataset of
    // (K, (Iterable[V], Iterable[W])) tuples, one per distinct key from either side.
    rdd.cogroup(rdd1).foreach(println)
    rdd.cogroup(rdd2).foreach(println)

    sc.stop()
  }
}
Output:
(B,(CompactBuffer(2),CompactBuffer()))
(A,(CompactBuffer(1),CompactBuffer(4)))
(C,(CompactBuffer(3),CompactBuffer()))
(B,(CompactBuffer(2),CompactBuffer()))
(A,(CompactBuffer(1),CompactBuffer(4, 5)))
(C,(CompactBuffer(3),CompactBuffer()))
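Because cogroup pairs each key with the full collection of values from each input, it is a convenient building block for custom join or aggregation logic. The snippet below is a minimal sketch along those lines (the object name CogroupSum and the per-key summation are illustrative additions, not part of the original example): it sums the values for each key across both RDDs, and keys present in only one RDD simply contribute an empty Iterable on the other side.

import org.apache.spark.{SparkConf, SparkContext}

object CogroupSum {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("cogroup_sum").setMaster("local"))

    val rdd  = sc.parallelize(List(("A", 1), ("B", 2), ("C", 3)))
    val rdd2 = sc.parallelize(List(("A", 4), ("A", 5)))

    // cogroup keeps every key from either RDD; a key missing from one side
    // yields an empty Iterable there, so summing both sides still works.
    rdd.cogroup(rdd2)
      .mapValues { case (left, right) => left.sum + right.sum }
      .foreach(println)
    // Expected output (partition order may vary): (A,10) (B,2) (C,3)

    sc.stop()
  }
}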