Spark: Optimizing the RDD[(K, Iterable[V])] Produced by groupByKey
2016-11-21
wangqiaoshi
RDD Trigger Mechanism
In Spark, RDD actions are triggered by the SparkContext (via runJob), while transformations are evaluated lazily, implemented as chained Scala Iterators, as the RDD source shows:
/**
 * Return a new RDD by applying a function to all elements of this RDD.
 */
def map[U: ClassTag](f: T => U): RDD[U] = withScope {
  val cleanF = sc.clean(f)
  new MapPartitionsRDD[U, T](this, (context, pid, iter) => iter.map(cleanF))
}

/**
 * Return a new RDD by first applying a function to all elements of this
 * RDD, and then flattening the results.
 */
def flatMap[U: ClassTag](f: T => TraversableOnce[U]): RDD[U] = withScope {
  val cleanF = sc.clean(f)
  new MapPartitionsRDD[U, T](this, (context, pid, iter) => iter.flatMap(cleanF))
}

/**
 * Return a new RDD containing only the elements that satisfy a predicate.
 */
def filter(f: T => Boolean): RDD[T] = withScope {
  val cleanF = sc.clean(f)
  new MapPartitionsRDD[T, T](
    this,
    (context, pid, iter) => iter.filter(cleanF),
    preservesPartitioning = true)
}
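As these implementations show, chained narrow transformations compose into a single Iterator pipeline per partition. A minimal plain-Scala sketch (no Spark required) of that composition:

val pipeline = Iterator(1, 2, 3, 4)
  .map    { x => println(s"map: $x"); x * 2 }    // nothing runs yet
  .filter { x => println(s"filter: $x"); x > 2 }
// Each element flows through the whole chain, one at a time, on demand:
pipeline.foreach(x => println(s"sink: $x"))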
GroupByKey Analysis
groupByKey is a very resource-hungry operation: after the shuffle, all values grouped under each key are held in memory as an Iterable[V] (in practice a CompactBuffer[V]).
def groupByKey(): RDD[(K, Iterable[V])] = self.withScope {
  groupByKey(defaultPartitioner(self))
}

def groupByKey(partitioner: Partitioner): RDD[(K, Iterable[V])] = self.withScope {
  // groupByKey shouldn't use map side combine because map side combine does not
  // reduce the amount of data shuffled and requires all map side data be inserted
  // into a hash table, leading to more objects in the old gen.
  val createCombiner = (v: V) => CompactBuffer(v)
  val mergeValue = (buf: CompactBuffer[V], v: V) => buf += v
  val mergeCombiners = (c1: CompactBuffer[V], c2: CompactBuffer[V]) => c1 ++= c2
  val bufs = combineByKeyWithClassTag[CompactBuffer[V]](
    createCombiner, mergeValue, mergeCombiners, partitioner, mapSideCombine = false)
  bufs.asInstanceOf[RDD[(K, Iterable[V])]]
}
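To make the memory cost concrete, here is a plain-Scala sketch of the reduce-side aggregation performed above, with ArrayBuffer standing in for CompactBuffer: every value of a key accumulates in one in-memory buffer before any downstream code sees the group.

import scala.collection.mutable

def groupLikeGroupByKey[K, V](records: Iterator[(K, V)]): Map[K, Iterable[V]] = {
  val combiners = mutable.Map.empty[K, mutable.ArrayBuffer[V]]
  records.foreach { case (k, v) =>
    combiners.get(k) match {
      case Some(buf) => buf += v                              // mergeValue
      case None      => combiners(k) = mutable.ArrayBuffer(v) // createCombiner
    }
  }
  combiners.toMap  // every group is fully resident in memory at this point
}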
If you then flatMap over the resulting RDD[(K, Iterable[V])], for example to emit one aggregate result per 10 records, a problem appears:
import org.apache.spark.{SparkConf, SparkContext}
import scala.collection.mutable

val conf = new SparkConf().setAppName("demo").setMaster("local[1]")
val sparkContext = new SparkContext(conf)

val rdd = sparkContext
  .makeRDD(Seq(("wang", 25), ("wang", 26), ("wang", 18), ("wang", 15), ("wang", 7), ("wang", 1)))
  .groupByKey()
  .flatMap { kv =>
    // kv._2 is a strict Iterable (a CompactBuffer): map builds the whole
    // mapped collection before the action sees a single element.
    kv._2.map { r =>
      println(r)
      r
    }
  }

sparkContext.runJob(rdd, add _)

// Consume the partition's iterator, printing batches of (roughly) three.
def add(list: Iterator[Int]): Unit = {
  var i = 0
  val items = new mutable.MutableList[Int]()
  while (list.hasNext) {
    items += list.next()
    if (i >= 2) {
      println(items.mkString(","))
      items.clear()
      i = 0
    } else if (!list.hasNext) {
      println(items.mkString(","))
    }
    i = i + 1
  }
}
Output (all six elements are printed by flatMap before add prints any batch, because map on the strict Iterable materializes the whole result up front):
25
26
18
15
7
1
25,26,18
15,7
1
The second run is identical except that the Iterable is converted to an Iterator before mapping:

val rdd = sparkContext
  .makeRDD(Seq(("wang", 25), ("wang", 26), ("wang", 18), ("wang", 15), ("wang", 7), ("wang", 1)))
  .groupByKey()
  .flatMap { kv =>
    // toIterator makes the per-key values lazy: each element flows through
    // map and into the action as soon as it is requested.
    kv._2.toIterator.map { r =>
      println(r)
      r
    }
  }

sparkContext.runJob(rdd, add _)  // add is the same batching function as above
Output (the printing now interleaves: add consumes and batches the elements as flatMap produces them):
25
26
18
25,26,18
15
7
15,7
1
1
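The mechanism distills to plain Scala with no Spark involved: map on a strict collection builds its whole result immediately, while map on an Iterator defers work until elements are pulled.

val strict = Seq(1, 2, 3).map { x => println(s"strict: $x"); x }
// "strict: 1", "strict: 2", "strict: 3" have already printed here.

val lazyOne = Seq(1, 2, 3).toIterator.map { x => println(s"lazy: $x"); x }
// Nothing printed yet: the Iterator is lazy.
lazyOne.foreach(_ => ())  // prints "lazy: 1", "lazy: 2", "lazy: 3" on demand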
Conclusion
If flatMap over an RDD[(K, Iterable[V])] works on the Iterable directly, the downstream action has no control over evaluation: it can only run after flatMap has finished processing all of a key's data. Converting the Iterable to an Iterator (toIterator) first keeps the per-key pipeline lazy, so downstream operators receive elements one at a time.
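As a sketch of the "one result per N records" use case, reusing the same sparkContext and data as above with a hypothetical batch size of 3, Iterator.grouped emits one batch at a time:

val partialSums = sparkContext
  .makeRDD(Seq(("wang", 25), ("wang", 26), ("wang", 18), ("wang", 15), ("wang", 7), ("wang", 1)))
  .groupByKey()
  .flatMap { case (_, values) =>
    // toIterator keeps the pipeline lazy; grouped(3) then yields one batch
    // at a time instead of materializing all values first.
    values.toIterator.grouped(3).map(_.sum)
  }
partialSums.collect().foreach(println)  // 69, then 23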