Spark on YARN: Dynamic Resource Allocation

2018-05-27  金刚_30bf

Configuration file (spark-defaults.conf):

spark.default.parallelism=40
#spark.executor.memory=1536m
#spark.executor.memoryOverhead=512m
#spark.driver.cores=1
#spark.driver.memory=1g
#spark.executor.instances=3
spark.serializer=org.apache.spark.serializer.KryoSerializer
spark.dynamicAllocation.enabled=true
spark.shuffle.service.enabled=true
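
The two flags above only turn the feature on; in practice the allocation is usually bounded as well. A sketch of commonly tuned dynamic-allocation properties (the values here are illustrative, not from the original setup):

```properties
# Lower/upper bounds on the number of executors
spark.dynamicAllocation.minExecutors=1
spark.dynamicAllocation.maxExecutors=20
# Starting point before any ramp-up
spark.dynamicAllocation.initialExecutors=1
# How long tasks must be backlogged before new executors are requested
spark.dynamicAllocation.schedulerBacklogTimeout=1s
# How long an executor may sit idle before it is released
spark.dynamicAllocation.executorIdleTimeout=60s
```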

yarn-site.xml

<!-- mapreduce_shuffle commented out, replaced by spark_shuffle
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>

  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property> -->
<!--  for spark on yarn : spark_shuffle -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>spark_shuffle</value>
  </property>

  <property>
    <name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
    <value>org.apache.spark.network.yarn.YarnShuffleService</value>
  </property>
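
For the spark_shuffle auxiliary service to load, the Spark YARN shuffle jar must also be on every NodeManager's classpath, and the NodeManagers restarted afterwards. A sketch, assuming a typical layout (the exact paths vary by Spark/Hadoop version and install):

```shell
# Copy the jar that contains org.apache.spark.network.yarn.YarnShuffleService
# onto the NodeManager classpath (paths below are assumptions -- adjust them).
cp $SPARK_HOME/yarn/spark-*-yarn-shuffle.jar $HADOOP_HOME/share/hadoop/yarn/lib/

# Restart each NodeManager so the auxiliary service is picked up.
$HADOOP_HOME/sbin/yarn-daemon.sh stop nodemanager
$HADOOP_HOME/sbin/yarn-daemon.sh start nodemanager
```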

yarn-env.sh (increase the NodeManager heap, since the external shuffle service runs inside the NodeManager JVM):

export YARN_HEAPSIZE=1000

After launching spark-shell --master yarn, the web UI shows no executors initially.
Once an action is triggered, executors are requested almost immediately:
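
The ramp-up seen in the web UI follows dynamic allocation's exponential request policy: while tasks stay backlogged past spark.dynamicAllocation.schedulerBacklogTimeout, each round requests double the number of executors added in the previous round, capped by what the pending tasks need and by maxExecutors. A rough Python model of the cumulative targets (a sketch for illustration, not Spark's actual code):

```python
def executors_requested_per_round(pending_tasks, tasks_per_executor, max_executors):
    """Return the cumulative executor target after each request round.

    Models the exponential ramp-up: 1 executor is added in the first
    round, then the increment doubles each round, capped at what the
    backlog needs and at max_executors.
    """
    needed = -(-pending_tasks // tasks_per_executor)  # ceil division
    target, step, rounds = 0, 1, []
    while target < min(needed, max_executors):
        target = min(target + step, needed, max_executors)
        step *= 2
        rounds.append(target)
    return rounds

# 100 pending tasks, 10 tasks per executor, cap of 8 executors:
print(executors_requested_per_round(100, 10, 8))  # → [1, 3, 7, 8]
```

The cap means the final round stops at maxExecutors even though the backlog would justify more, which is exactly why bounding the allocation matters on a shared cluster.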


[Screenshot: Spark web UI showing the newly allocated executors]