Troubleshooting a join that would not finish in Spark 2.3
Last week a Spark job got stuck on a statement like this:
select cc_base_part1.*,cc_base_part1.nsf_cards_ratio * 1.00 / cc_rank_agg.nsf_on_entry as nsf_ratio_to_pop
| from cc_base_part1
| left join cc_rank_agg
| on trim(cc_base_part1.country) = trim(cc_rank_agg.cntry_code);
It ran for a very long time without finishing, and my first guess was a skew-join problem. Let's analyze the join first: the left table (cc_base_part1) is large, while the right table is small (few records, one row of country information per country, roughly 10+ countries). The whole query computes some per-country statistics. I expected a broadcast join (put the small table in memory and join the big table against it), but the execution plan showed a sort merge join, which makes it much less surprising that the job hung:
EXPLAIN COST select
| cc_base_part1.*,cc_base_part1.nsf_cards_ratio * 1.00 / cc_rank_agg.nsf_on_entry as nsf_ratio_to_pop
| from cc_base_part1
| left join cc_rank_agg
| on trim(cc_base_part1.country) = trim(cc_rank_agg.cntry_code);
== Optimized Logical Plan ==
Project [cust_id#97, country#98, cc_id#99, bin_hmac#100, credit_card_created_date#101, card_usage#102, cc_category#103, cc_size#104, nsf_risk#105, nsf_cards_ratio#106, dt#107, CheckOverflow((promote_precision(cast(CheckOverflow((promote_precision(nsf_cards_ratio#106) * 1.00), DecimalType(22,4)) as decimal(38,16))) / promote_precision(nsf_on_entry#111)), DecimalType(38,6)) AS nsf_ratio_to_pop#94], Statistics(sizeInBytes=4.27E+37 B, hints=none)
+- Join LeftOuter, (trim(country#98, None) = trim(cntry_code#108, None)), Statistics(sizeInBytes=4.68E+37 B, hints=none)
:- HiveTableRelation `fpv3653`.`cc_base_part1`, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, [cust_id#97, country#98, cc_id#99, bin_hmac#100, credit_card_created_date#101, card_usage#102, cc_category#103, cc_size#104, nsf_risk#105, nsf_cards_ratio#106], [dt#107], Statistics(sizeInBytes=8.0 EB, hints=none)
+- Project [cntry_code#108, nsf_on_entry#111], Statistics(sizeInBytes=4.4 EB, hints=none)
+- HiveTableRelation `fpv3653`.`cc_rank_agg`, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, [cntry_code#108, num_tot_cards#109L, num_nsf_cards#110L, nsf_on_entry#111], [dt#112], Statistics(sizeInBytes=8.0 EB, hints=none)
Solutions
1. Enlarge spark.sql.autoBroadcastJoinThreshold (the default is 10 MB). That does not look workable here: in the Optimized Logical Plan the small side is estimated as
- Project [cntry_code#108, nsf_on_entry#111], Statistics(sizeInBytes=4.4 EB, hints=none)
so the threshold would have to be set above 4.4 EB for Spark to turn the sort merge join into a broadcast join. (What an EB is, is noted at the end.) A minimal sketch of setting the threshold follows below.
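For reference, a minimal sketch of raising the threshold for the current session (the 100 MB value is an arbitrary example, not from the original job):

// Raise the auto-broadcast threshold (spark-shell). The default is 10 MB
// (10485760); -1 disables auto broadcast entirely. No sane value here would
// reach the bogus 4.4 EB estimate, so this knob alone cannot fix this plan.
spark.conf.set("spark.sql.autoBroadcastJoinThreshold", 100L * 1024 * 1024)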
2. Use the broadcast hint syntax to mark the small table cc_rank_agg explicitly:
select /*+ BROADCAST (cc_rank_agg) */ cc_base_part1.*, cc_base_part1.nsf_cards_ratio * 1.00 / cc_rank_agg.nsf_on_entry as nsf_ratio_to_pop
from cc_base_part1
left join cc_rank_agg
on trim(cc_base_part1.country) = trim(cc_rank_agg.cntry_code);
Checking the new physical plan after the change: it is indeed a broadcast join now, and the job finally went through! (An equivalent DataFrame-API form is sketched after the plan.)
== Physical Plan ==
*(1) Project [cust_id#89, country#90, cc_id#91, bin_hmac#92, credit_card_created_date#93, card_usage#94, cc_category#95, cc_size#96, nsf_risk#97, nsf_cards_ratio#98, dt#99, CheckOverflow((promote_precision(cast(CheckOverflow((promote_precision(nsf_cards_ratio#98) * 1.00), DecimalType(22,4)) as decimal(38,16))) / promote_precision(nsf_on_entry#103)), DecimalType(38,6)) AS nsf_ratio_to_pop#86]
+- *(1) BroadcastHashJoin [trim(country#90, None)], [trim(cntry_code#100, None)], LeftOuter, BuildRight
:- HiveTableScan [cust_id#89, country#90, cc_id#91, bin_hmac#92, credit_card_created_date#93, card_usage#94, cc_category#95, cc_size#96, nsf_risk#97, nsf_cards_ratio#98, dt#99], HiveTableRelation `fpv3653`.`cc_base_part1`, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, [cust_id#89, country#90, cc_id#91, bin_hmac#92, credit_card_created_date#93, card_usage#94, cc_category#95, cc_size#96, nsf_risk#97, nsf_cards_ratio#98], [dt#99]
+- BroadcastExchange HashedRelationBroadcastMode(List(trim(input[0, string, true], None)))
+- HiveTableScan [cntry_code#100, nsf_on_entry#103], HiveTableRelation `fpv3653`.`cc_rank_agg`, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, [cntry_code#100, num_tot_cards#101L, num_nsf_cards#102L, nsf_on_entry#103], [dt#104]
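As an aside, the same hint can be expressed through the DataFrame API with the broadcast() function. A sketch, assuming the tables are loaded via spark.table (the variable names are illustrative):

import org.apache.spark.sql.functions.{broadcast, trim}

// Mark the small side with broadcast() so the planner builds the hash
// relation from cc_rank_agg regardless of its (broken) size estimate.
val base = spark.table("fpv3653.cc_base_part1")
val rank = spark.table("fpv3653.cc_rank_agg")
val joined = base
  .join(broadcast(rank), trim(base("country")) === trim(rank("cntry_code")), "left")
  .withColumn("nsf_ratio_to_pop",
    base("nsf_cards_ratio") * 1.00 / rank("nsf_on_entry"))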
3. The root cause: the Hive table statistics were wrong. How could a table this small possibly be 8.0 EB?
+- HiveTableRelation `fpv3653`.`cc_rank_agg`, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, [cntry_code#108, num_tot_cards#109L, num_nsf_cards#110L, nsf_on_entry#111], [dt#112], Statistics(sizeInBytes=8.0 EB, hints=none)
Spark provides ANALYZE TABLE ... COMPUTE STATISTICS to recompute a table's statistics:
analyze table fpv3653.cc_rank_agg compute statistics noscan;
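NOSCAN collects only size-level statistics (sizeInBytes), which is enough for the broadcast decision; without NOSCAN Spark also scans the table for a row count. A sketch of recomputing and then verifying what the catalog reports (DESCRIBE TABLE EXTENDED prints a Statistics row once stats exist):

spark.sql("ANALYZE TABLE fpv3653.cc_rank_agg COMPUTE STATISTICS NOSCAN")
spark.sql("DESCRIBE TABLE EXTENDED fpv3653.cc_rank_agg").show(100, truncate = false)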
After recomputing, printing the execution plan again shows the small table cc_rank_agg at only 2.1 KB, and Spark automatically turns the join into a broadcast join:
== Optimized Logical Plan ==
Project [cust_id#48, country#49, cc_id#50, bin_hmac#51, credit_card_created_date#52, card_usage#53, cc_category#54, cc_size#55, nsf_risk#56, nsf_cards_ratio#57, dt#58, CheckOverflow((cast(CheckOverflow((nsf_cards_ratio#57 * 1.00), DecimalType(22,4)) as decimal(38,16)) / nsf_on_entry#62), DecimalType(38,20)) AS nsf_ratio_to_pop#45], Statistics(sizeInBytes=1.01E+22 B, hints=none)
+- Join LeftOuter, (trim(country#49) = trim(cntry_code#59)), Statistics(sizeInBytes=1.10E+22 B, hints=none)
:- CatalogRelation `fpv3653`.`cc_base_part1`, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, [cust_id#48, country#49, cc_id#50, bin_hmac#51, credit_card_created_date#52, card_usage#53, cc_category#54, cc_size#55, nsf_risk#56, nsf_cards_ratio#57], [dt#58], Statistics(sizeInBytes=8.0 EB, hints=none)
+- Project [cntry_code#59, nsf_on_entry#62], Statistics(sizeInBytes=1198.0 B, hints=none)
+- CatalogRelation `fpv3653`.`cc_rank_agg`, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, [cntry_code#59, num_tot_cards#60L, num_nsf_cards#61L, nsf_on_entry#62], [dt#63], Statistics(sizeInBytes=2.1 KB, hints=none)
== Physical Plan ==
*Project [cust_id#48, country#49, cc_id#50, bin_hmac#51, credit_card_created_date#52, card_usage#53, cc_category#54, cc_size#55, nsf_risk#56, nsf_cards_ratio#57, dt#58, CheckOverflow((cast(CheckOverflow((nsf_cards_ratio#57 * 1.00), DecimalType(22,4)) as decimal(38,16)) / nsf_on_entry#62), DecimalType(38,20)) AS nsf_ratio_to_pop#45]
+- *BroadcastHashJoin [trim(country#49)], [trim(cntry_code#59)], LeftOuter, BuildRight
:- HiveTableScan [cust_id#48, country#49, cc_id#50, bin_hmac#51, credit_card_created_date#52, card_usage#53, cc_category#54, cc_size#55, nsf_risk#56, nsf_cards_ratio#57, dt#58], CatalogRelation `fpv3653`.`cc_base_part1`, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, [cust_id#48, country#49, cc_id#50, bin_hmac#51, credit_card_created_date#52, card_usage#53, cc_category#54, cc_size#55, nsf_risk#56, nsf_cards_ratio#57], [dt#58]
+- BroadcastExchange HashedRelationBroadcastMode(List(trim(input[0, string, true])))
+- HiveTableScan [cntry_code#59, nsf_on_entry#62], CatalogRelation `fpv3653`.`cc_rank_agg`, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, [cntry_code#59, num_tot_cards#60L, num_nsf_cards#61L, nsf_on_entry#62], [dt#63]
What is an EB?
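EB stands for exabyte: 1 EB = 1024 PB = 2^60 bytes in the binary sense that Spark's plan printer uses. The suspicious 8.0 EB above corresponds to Long.MaxValue (2^63 − 1) bytes, the default sizeInBytes Spark assigns to a relation when it has no statistics for it, deliberately huge so that such tables are never picked for broadcast.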