
Spark on K8s: setting SPARK_LOCAL_DIRS via hostPath

2019-09-25  Kent_Yao

Preface

spark.local.dir / SPARK_LOCAL_DIRS points at the scratch space Spark uses for shuffle temporary files, on-disk RDD persistence, and similar data. It accepts a comma-separated list of paths, so the load can be spread across multiple disks. With Spark on YARN, this setting is overridden by the YARN cluster's LOCAL_DIRS configuration. With Spark on K8s, it defaults to the location backed by an emptyDir volume, which can loosely be thought of as the /tmp directory that java.io.tmpdir points to.
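As a hedged illustration (the master URL, class name, jar, and disk paths below are placeholders, not from the original post), spreading scratch space across two disks with a comma-separated spark.local.dir looks like:

```shell
# Illustrative sketch only: all names and paths are placeholders.
# Each comma-separated entry should sit on a different physical disk.
spark-submit \
  --master spark://master-host:7077 \
  --conf spark.local.dir=/mnt/disk1/spark-tmp,/mnt/disk2/spark-tmp \
  --class com.example.MyApp \
  my-app.jar
```

Note that, as described above, this flag is ignored on YARN (LOCAL_DIRS wins) and on K8s it is the volume configuration below that determines where local dirs actually land.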

Configuration

spark.kubernetes.driver.volumes.[VolumeType].spark-local-dir-[VolumeName].mount.path=<mount path>
spark.kubernetes.driver.volumes.[VolumeType].spark-local-dir-[VolumeName].options.path=<host path>

[VolumeType] - hostPath, emptyDir, ...
[VolumeName] - the volume name; the spark-local-dir- prefix marks the volume for use as a Spark local directory
<mount path> - the mount path inside the pod
<host path> - the directory on the host machine
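Putting the template together, a hedged end-to-end sketch (API server address, image name, and example application are placeholders) that wires one hostPath-backed local dir into both driver and executor pods might look like:

```shell
# Sketch only: <k8s-apiserver> and <spark-image> are placeholders.
# One host disk (/mnt/dfs/0) is mounted at /opt/dfs/0 in each pod and,
# because the volume name starts with "spark-local-dir-", Spark uses it
# as a local directory.
spark-submit \
  --master k8s://https://<k8s-apiserver>:6443 \
  --deploy-mode cluster \
  --conf spark.kubernetes.container.image=<spark-image> \
  --conf spark.kubernetes.driver.volumes.hostPath.spark-local-dir-0.mount.path=/opt/dfs/0 \
  --conf spark.kubernetes.driver.volumes.hostPath.spark-local-dir-0.options.path=/mnt/dfs/0 \
  --conf spark.kubernetes.executor.volumes.hostPath.spark-local-dir-0.mount.path=/opt/dfs/0 \
  --conf spark.kubernetes.executor.volumes.hostPath.spark-local-dir-0.options.path=/mnt/dfs/0 \
  --class org.apache.spark.examples.SparkPi \
  local:///opt/spark/examples/jars/spark-examples.jar
```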

Driver

spark.kubernetes.driver.volumes.hostPath.spark-local-dir-0.mount.path=/opt/dfs/0
spark.kubernetes.driver.volumes.hostPath.spark-local-dir-1.mount.path=/opt/dfs/1
spark.kubernetes.driver.volumes.hostPath.spark-local-dir-2.mount.path=/opt/dfs/2
spark.kubernetes.driver.volumes.hostPath.spark-local-dir-3.mount.path=/opt/dfs/3
spark.kubernetes.driver.volumes.hostPath.spark-local-dir-4.mount.path=/opt/dfs/4
spark.kubernetes.driver.volumes.hostPath.spark-local-dir-5.mount.path=/opt/dfs/5
spark.kubernetes.driver.volumes.hostPath.spark-local-dir-6.mount.path=/opt/dfs/6
spark.kubernetes.driver.volumes.hostPath.spark-local-dir-7.mount.path=/opt/dfs/7
spark.kubernetes.driver.volumes.hostPath.spark-local-dir-8.mount.path=/opt/dfs/8
spark.kubernetes.driver.volumes.hostPath.spark-local-dir-9.mount.path=/opt/dfs/9
spark.kubernetes.driver.volumes.hostPath.spark-local-dir-10.mount.path=/opt/dfs/10
spark.kubernetes.driver.volumes.hostPath.spark-local-dir-11.mount.path=/opt/dfs/11

spark.kubernetes.driver.volumes.hostPath.spark-local-dir-0.options.path=/mnt/dfs/0
spark.kubernetes.driver.volumes.hostPath.spark-local-dir-1.options.path=/mnt/dfs/1
spark.kubernetes.driver.volumes.hostPath.spark-local-dir-2.options.path=/mnt/dfs/2
spark.kubernetes.driver.volumes.hostPath.spark-local-dir-3.options.path=/mnt/dfs/3
spark.kubernetes.driver.volumes.hostPath.spark-local-dir-4.options.path=/mnt/dfs/4
spark.kubernetes.driver.volumes.hostPath.spark-local-dir-5.options.path=/mnt/dfs/5
spark.kubernetes.driver.volumes.hostPath.spark-local-dir-6.options.path=/mnt/dfs/6
spark.kubernetes.driver.volumes.hostPath.spark-local-dir-7.options.path=/mnt/dfs/7
spark.kubernetes.driver.volumes.hostPath.spark-local-dir-8.options.path=/mnt/dfs/8
spark.kubernetes.driver.volumes.hostPath.spark-local-dir-9.options.path=/mnt/dfs/9
spark.kubernetes.driver.volumes.hostPath.spark-local-dir-10.options.path=/mnt/dfs/10
spark.kubernetes.driver.volumes.hostPath.spark-local-dir-11.options.path=/mnt/dfs/11

Executor

spark.kubernetes.executor.volumes.hostPath.spark-local-dir-0.mount.path=/opt/dfs/0
spark.kubernetes.executor.volumes.hostPath.spark-local-dir-1.mount.path=/opt/dfs/1
spark.kubernetes.executor.volumes.hostPath.spark-local-dir-2.mount.path=/opt/dfs/2
spark.kubernetes.executor.volumes.hostPath.spark-local-dir-3.mount.path=/opt/dfs/3
spark.kubernetes.executor.volumes.hostPath.spark-local-dir-4.mount.path=/opt/dfs/4
spark.kubernetes.executor.volumes.hostPath.spark-local-dir-5.mount.path=/opt/dfs/5
spark.kubernetes.executor.volumes.hostPath.spark-local-dir-6.mount.path=/opt/dfs/6
spark.kubernetes.executor.volumes.hostPath.spark-local-dir-7.mount.path=/opt/dfs/7
spark.kubernetes.executor.volumes.hostPath.spark-local-dir-8.mount.path=/opt/dfs/8
spark.kubernetes.executor.volumes.hostPath.spark-local-dir-9.mount.path=/opt/dfs/9
spark.kubernetes.executor.volumes.hostPath.spark-local-dir-10.mount.path=/opt/dfs/10
spark.kubernetes.executor.volumes.hostPath.spark-local-dir-11.mount.path=/opt/dfs/11

spark.kubernetes.executor.volumes.hostPath.spark-local-dir-0.options.path=/mnt/dfs/0
spark.kubernetes.executor.volumes.hostPath.spark-local-dir-1.options.path=/mnt/dfs/1
spark.kubernetes.executor.volumes.hostPath.spark-local-dir-2.options.path=/mnt/dfs/2
spark.kubernetes.executor.volumes.hostPath.spark-local-dir-3.options.path=/mnt/dfs/3
spark.kubernetes.executor.volumes.hostPath.spark-local-dir-4.options.path=/mnt/dfs/4
spark.kubernetes.executor.volumes.hostPath.spark-local-dir-5.options.path=/mnt/dfs/5
spark.kubernetes.executor.volumes.hostPath.spark-local-dir-6.options.path=/mnt/dfs/6
spark.kubernetes.executor.volumes.hostPath.spark-local-dir-7.options.path=/mnt/dfs/7
spark.kubernetes.executor.volumes.hostPath.spark-local-dir-8.options.path=/mnt/dfs/8
spark.kubernetes.executor.volumes.hostPath.spark-local-dir-9.options.path=/mnt/dfs/9
spark.kubernetes.executor.volumes.hostPath.spark-local-dir-10.options.path=/mnt/dfs/10
spark.kubernetes.executor.volumes.hostPath.spark-local-dir-11.options.path=/mnt/dfs/11
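The 48 properties above (12 disks, driver and executor, mount.path and options.path each) are entirely mechanical, so rather than maintaining them by hand, a small shell loop (a sketch, not part of the original post) can generate the equivalent --conf flags:

```shell
# Generate the 48 --conf flags above:
# 2 roles x 12 disks x {mount.path, options.path}.
gen_local_dir_confs() {
  for role in driver executor; do
    for i in $(seq 0 11); do
      echo "--conf spark.kubernetes.${role}.volumes.hostPath.spark-local-dir-${i}.mount.path=/opt/dfs/${i}"
      echo "--conf spark.kubernetes.${role}.volumes.hostPath.spark-local-dir-${i}.options.path=/mnt/dfs/${i}"
    done
  done
}
gen_local_dir_confs
```

The generated flags can be passed straight to spark-submit, e.g. `spark-submit $(gen_local_dir_confs) ...`.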

Performance testing

With the default emptyDir, Spark on K8s shows a large performance gap against Spark on YARN on identically configured hardware, and the problem becomes more pronounced as the data size grows.

emptyDir

Host-level metrics showed the sda disk fully saturated while the other 11 disks sat idle, which in turn kept the host's CPU utilization from climbing.


(figure: disk bottleneck)

Re-running TeraSort with the configuration above, Spark on K8s improves dramatically, especially at larger data volumes.


(figure: hostPath)

Host metrics now show all 12 disks being utilized: Spark's shuffle capacity improves, and CPU utilization takes a qualitative leap compared with before.


(figure: metrics of kubelet)