
How to explain reduceByKey clearly

2017-07-03 20:42 · 295 views

I am new to Spark and Scala. I am confused about the way the reduceByKey function works in Spark. Suppose we have the following code:
val lines = sc.textFile("data.txt")
val pairs = lines.map(s => (s, 1))
val counts = pairs.reduceByKey((a, b) => a + b)


The map function is clear: s is the key and it refers to a line from data.txt, and 1 is the value.

However, I don't get how reduceByKey works internally. Does "a" point to the key? Alternatively, does "a" point to "s"? Then what does a + b represent? How are they filled?


Accepted answer (25 votes):
Let's break it down to discrete methods and types. That usually exposes the intricacies for new devs:
pairs.reduceByKey((a, b) => a + b)


becomes
pairs.reduceByKey((a: Int, b: Int) => a + b)


and renaming the variables makes it a little more explicit
pairs.reduceByKey((accumulatedValue: Int, currentValue: Int) => accumulatedValue + currentValue)
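
To make the original question concrete: both a and b are values (the Ints), never the key itself. A minimal local trace, in plain Scala without Spark, for one key that occurs three times (so its values are 1, 1, 1) might look like this:

// a hypothetical trace for a single key whose values are List(1, 1, 1)
// call 1: a = 1 (first value),       b = 1 (second value) => 2
// call 2: a = 2 (the result so far), b = 1 (third value)  => 3
List(1, 1, 1).reduce((a, b) => a + b)  // 3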


So we can now see that we are simply taking an accumulated value for the given key and summing it with the next value for that key. Now, let's break it down further so we can understand the key part, and visualize the method more like this:
// Conceptually (written as a local foldLeft so the types line up; this is a mental model, not something you would run on a large RDD):
pairs.collect().foldLeft(List.empty[(String, Int)]) { (accumulatedValue, currentValue) =>
  // Turn the accumulated value into a true key -> value mapping
  val accumAsMap = accumulatedValue.toMap
  // Try to get the key's current value if we've already encountered it
  accumAsMap.get(currentValue._1) match {
    // If we have encountered it, add the new value to the existing value and overwrite the old
    case Some(value: Int) => (accumAsMap + (currentValue._1 -> (value + currentValue._2))).toList
    // If we have NOT encountered it, simply add it to the list
    case None => currentValue :: accumulatedValue
  }
}


So you can see that reduceByKey takes care of the boilerplate of finding the key and tracking it, so that you don't have to worry about managing that part yourself.
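
For completeness, here is a minimal runnable sketch of the original example; the contents of data.txt are hypothetical (three short lines, two identical) so the result is easy to check:

// hypothetical stand-in for sc.textFile("data.txt"): three lines, two of them identical
val lines  = sc.parallelize(Seq("spark", "scala", "spark"))
val pairs  = lines.map(s => (s, 1))
val counts = pairs.reduceByKey((a, b) => a + b)
counts.collect()  // Array((spark,2), (scala,1)) -- one pair per distinct line; ordering is not guaranteed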

Deeper, truer if you want

All that being said, this is a simplified version of what actually happens, as there are some optimizations involved. This operation is associative, so the Spark engine performs these reductions locally first (often termed a map-side combine) and then once again after the shuffle, on the reducer side. This saves network traffic: instead of shipping all the data and then performing the operation, Spark reduces each partition's data as far as it can and only sends that reduced result over the wire.
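
As a rough sketch of why that matters, compare reduceByKey with the groupByKey equivalent; both are standard RDD operations and produce the same counts, but only the first combines values per key inside each partition before the shuffle:

// reduceByKey: partial sums are computed map-side, so little data crosses the network
val byReduce = pairs.reduceByKey(_ + _)
// groupByKey: every single (key, 1) pair is shuffled first, then summed
val byGroup  = pairs.groupByKey().mapValues(_.sum)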

Another answer:

One requirement for the reduceByKey function is that it must be associative. To build some intuition on how reduceByKey works, let's first see how an associative function helps us in a parallel computation:



As we can see, we can break the original collection into pieces and, by applying the associative function, accumulate a total. The sequential case is trivial; we are used to it: 1+2+3+4+5+6+7+8+9+10.
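
A minimal plain-Scala sketch of that idea (no Spark involved); summing sequentially and summing chunk by chunk give the same total precisely because + is associative:

val xs = (1 to 10).toList
val sequential = xs.reduce(_ + _)                                   // 1+2+...+10 = 55
val chunked    = xs.grouped(3).map(_.reduce(_ + _)).reduce(_ + _)   // partial sums per chunk, then combined
assert(sequential == chunked)                                       // both are 55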

Associativity lets us use that same function in sequence and in parallel. reduceByKey uses that property to compute a result out of an RDD, which is a distributed collection consisting of partitions.

Consider the following example:
// collection of the form ("key",1),("key",2),...,("key",20) split among 4 partitions
val rdd = sparkContext.parallelize((1 to 20).map(x => ("key", x)), 4)
rdd.reduceByKey(_ + _).collect()
> Array[(String, Int)] = Array((key,210))


In Spark, data is distributed into partitions. In the next illustration, the (4) partitions are on the left, enclosed in thin lines. First, we apply the function locally within each partition, sequentially inside the partition, but running all 4 partitions in parallel. Then the results of each local computation are aggregated by applying the same function again, and we finally arrive at a result.
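
To make the two phases visible, here is a peek using mapPartitions on the same rdd; this sketch only exposes the per-partition partial sums for inspection, while reduceByKey does the equivalent work internally without ever collecting to the driver:

// one partial sum per partition (for 1..20 over 4 even partitions: 15, 40, 65, 90)
val localSums = rdd.mapPartitions(it => Iterator(it.map(_._2).sum)).collect()
// combining the partial results gives the same 210 that reduceByKey reports for "key"
val total = localSums.sum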



reduceByKey is a specialization of aggregateByKey. aggregateByKey takes 2 functions: one that is applied to each partition (sequentially) and one that is applied among the results of each partition (in parallel). reduceByKey uses the same associative function in both cases: to do a sequential computation on each partition and then combine those results into a final result, as we have illustrated here.
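
For instance, the reduceByKey(_ + _) above can be written out with aggregateByKey; for a plain sum, the per-partition seqOp and the cross-partition combOp end up being the same function, and the zero value 0 is the only extra piece:

val viaAggregate = rdd.aggregateByKey(0)(
  (acc, v) => acc + v,  // seqOp: applied sequentially inside each partition
  (a, b)   => a + b     // combOp: merges the per-partition results
)
viaAggregate.collect()  // Array((key,210)), same as rdd.reduceByKey(_ + _).collect()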

