Spark: Accessing HDFS in HA Mode

2019-05-27  麦子星星

sc.hadoopConfiguration.set("fs.defaultFS", "hdfs://nameservice1")

sc.hadoopConfiguration.set("dfs.nameservices", "nameservice1")

sc.hadoopConfiguration.set("dfs.ha.namenodes.nameservice1", "nn1,nn2")

sc.hadoopConfiguration.set("dfs.namenode.rpc-address.nameservice1.nn1", "master:8020")

sc.hadoopConfiguration.set("dfs.namenode.rpc-address.nameservice1.nn2", "slave1:8020")

sc.hadoopConfiguration.set("dfs.client.failover.proxy.provider.nameservice1",

  "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider")
