spark -- transform operator -- repartition
2017-07-19 09:39
import org.apache.spark.{SparkConf, SparkContext}

import scala.collection.mutable.ArrayBuffer

/**
  * Created by liupeng on 2017/6/16.
  */
object T_repartition {
  System.setProperty("hadoop.home.dir", "F:\\hadoop-2.6.5")

  // Prefix each element with the index of the partition it lives in,
  // so we can see which partition each element ends up on.
  def fun_index(index: Int, iter: Iterator[String]): Iterator[String] = {
    val list = ArrayBuffer[String]()
    while (iter.hasNext) {
      val name: String = iter.next()
      val fs = index + ":" + name
      list += fs
      println(fs)
    }
    list.iterator
  }

  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("repartition_test").setMaster("local")
    val sc = new SparkContext(conf)

    // repartition increases or decreases the number of partitions of an RDD.
    // A classic use case: when Spark SQL queries data from Hive, it decides
    // how many partitions to load based on the number of HDFS blocks backing
    // the Hive table, and that default partition count cannot be configured.
    // Sometimes the automatically chosen count is too low; to raise the level
    // of parallelism, apply the repartition operator to the RDD.
    val nameList: List[String] = List(
      "liupeng1", "liupeng2", "liuipeng3", "liupeng4",
      "liupeng5", "liupeng6", "liupeng7", "liupeng8",
      "liupeng9", "liupeng10", "liupeng11", "liupeng12"
    )

    val nameRDD = sc.parallelize(nameList, 3)
    val nameRDD2 = nameRDD.mapPartitionsWithIndex(fun_index)
    val nameRDD3 = nameRDD2.repartition(6)
    // The same indexing function can be reused after the repartition.
    val nameRDD4 = nameRDD3.mapPartitionsWithIndex(fun_index)
    val info: Array[String] = nameRDD4.collect()

    sc.stop()
  }
}
Output:
0:liupeng1
0:liupeng2
0:liuipeng3
0:liupeng4
1:liupeng5
1:liupeng6
1:liupeng7
1:liupeng8
2:liupeng9
2:liupeng10
2:liupeng11
2:liupeng12
0:1:liupeng7
0:2:liupeng10
1:0:liupeng1
1:1:liupeng8
1:2:liupeng11
2:0:liupeng2
2:2:liupeng12
3:0:liuipeng3
4:0:liupeng4
4:1:liupeng5
5:1:liupeng6
5:2:liupeng9
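Note that in Spark's API, repartition(n) is simply coalesce(n, shuffle = true), so growing the partition count always involves a shuffle. During that shuffle each source partition's elements are spread over the target partitions in a roughly round-robin fashion (Spark also adds a random starting offset per source partition), which is why the output above is interleaved rather than contiguous. Below is a minimal plain-Scala sketch of that round-robin redistribution; it needs no Spark, and the helper name `redistribute` and exact placement are illustrative assumptions, not Spark's actual shuffle code:

```scala
object RepartitionSketch {
  // Simulate redistributing source partitions into `target` partitions
  // with plain round-robin. Spark's real repartition also applies a random
  // starting offset per source partition, so exact placement differs.
  def redistribute(parts: Seq[Seq[String]], target: Int): Seq[Seq[String]] = {
    val buckets = Array.fill(target)(scala.collection.mutable.ArrayBuffer.empty[String])
    for (part <- parts; (elem, i) <- part.zipWithIndex)
      buckets(i % target) += elem          // element i of each partition goes to bucket i mod target
    buckets.map(_.toSeq).toSeq
  }

  def main(args: Array[String]): Unit = {
    // The 12 names in 3 source partitions, as in the example above
    val source = Seq(
      Seq("liupeng1", "liupeng2", "liuipeng3", "liupeng4"),
      Seq("liupeng5", "liupeng6", "liupeng7", "liupeng8"),
      Seq("liupeng9", "liupeng10", "liupeng11", "liupeng12"))

    val result = redistribute(source, 6)
    result.zipWithIndex.foreach { case (p, i) => println(s"$i: ${p.mkString(", ")}") }

    // Every element survives the redistribution exactly once
    assert(result.flatten.sorted == source.flatten.sorted)
  }
}
```

The key observation is that repartition preserves every element while rebalancing them, trading a full shuffle for better downstream parallelism.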