Flink (1.13) Deployment: Standalone Mode

2021-08-17 · 万事万物

Make a fresh copy of the Flink directory so that the original installation stays untouched:

 cp -r flink flink-standalone
 vim flink-conf.yaml

Configure the following:

# Master address (the current server's IP/hostname can be used)
jobmanager.rpc.address: hadoop102
# Port for internal communication between the master and the workers
jobmanager.rpc.port: 6123

 vim workers

Configure the following. There is only one master, but there can be multiple workers:

hadoop102
hadoop103
hadoop104
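
start-cluster.sh launches a taskexecutor on every host listed in workers, so the flink-standalone directory must exist on all three machines. A minimal sketch for distributing it with rsync, assuming passwordless SSH and a hypothetical install path of /opt/module (adjust to your layout):

 # Push the configured directory to the worker hosts
 # (/opt/module is an assumed path, not from the original setup)
 for host in hadoop103 hadoop104; do
     rsync -av /opt/module/flink-standalone/ ${host}:/opt/module/flink-standalone/
 done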

Start the cluster

[admin@hadoop102 flink-standalone]$ bin/start-cluster.sh
Starting cluster.
Starting standalonesession daemon on host hadoop102.
Starting taskexecutor daemon on host hadoop102.
Starting taskexecutor daemon on host hadoop103.
Starting taskexecutor daemon on host hadoop104.
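
Once the cluster is up, the JobManager serves a web UI on port 8081 (hadoop102:8081 here). A quick sanity check is to submit one of the example jobs bundled with the distribution:

 # Submit the bundled streaming WordCount job; with no arguments it
 # uses built-in sample data and writes results to the taskmanager logs
 bin/flink run examples/streaming/WordCount.jar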

Standalone vs. YARN

The difference comes down to who manages the resources: in Standalone mode Flink manages its own resources, while in YARN mode resource management is handed over to YARN.
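
For comparison, the equivalent session cluster on YARN would be started through the YARN session script instead of start-cluster.sh (a sketch; it requires HADOOP_CLASSPATH to be set, as configured further below):

 # Start a Flink session cluster managed by YARN, detached from the shell
 bin/yarn-session.sh -d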

High Availability

At any time there is one leader JobManager and one or more standby JobManagers, so that a standby can take over the cluster if the leader fails. This eliminates the single point of failure: as soon as a standby JobManager takes over, jobs keep running normally. There is no explicit distinction between leader and standby instances; every JobManager can act as either the leader or a standby.

vim flink-conf.yaml

Configure the following:

# High availability relies on ZooKeeper, which is responsible for leader election
high-availability: zookeeper
# The ZooKeeper quorum to connect to
high-availability.zookeeper.quorum: hadoop102:2181,hadoop103:2181,hadoop104:2181
# Where JobManager metadata is persisted (must be a durable store such as HDFS)
high-availability.storageDir: hdfs://hadoop102:9820/flink-standalone/ha/
# Root znode under which Flink stores its data in ZooKeeper
high-availability.zookeeper.path.root: /flink-standalone
# Cluster ID (sub-znode under the root)
high-availability.cluster-id: /cluster_atguigu
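
HA needs a running ZooKeeper quorum before the Flink cluster starts. A sketch, assuming a ZooKeeper installation with zkServer.sh on the PATH of each node:

 # Run on hadoop102, hadoop103 and hadoop104
 zkServer.sh start
 zkServer.sh status   # one node reports Mode: leader, the others Mode: follower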

The HDFS port (9820) in high-availability.storageDir must match the fs.defaultFS setting in Hadoop's core-site.xml:

        <!-- NameNode address -->
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://hadoop102:9820</value>
        </property>
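
You can also read the effective value without opening the file:

 hdfs getconf -confKey fs.defaultFS
 # hdfs://hadoop102:9820

Stop the cluster before restarting with the HA configuration:
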
[admin@hadoop102 flink-standalone]$ bin/stop-cluster.sh
Stopping taskexecutor daemon (pid: 10745) on host hadoop102.
Stopping taskexecutor daemon (pid: 6284) on host hadoop103.
Stopping taskexecutor daemon (pid: 6243) on host hadoop104.
Stopping standalonesession daemon (pid: 10376) on host hadoop102.

Add the following to the environment (Flink needs the Hadoop classpath to reach HDFS):

# Required by Flink to find the Hadoop classes
export HADOOP_CLASSPATH=`hadoop classpath`

source the profile file so the change takes effect immediately.
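
A sketch, assuming the export was added to /etc/profile (an assumption; any shell profile works):

 source /etc/profile
 echo $HADOOP_CLASSPATH   # should print the Hadoop jar and config paths

Then restart the cluster: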

[admin@hadoop102 flink-standalone]$ bin/start-cluster.sh
Starting HA cluster with 1 masters.
Starting standalonesession daemon on host hadoop102.
Starting taskexecutor daemon on host hadoop102.
Starting taskexecutor daemon on host hadoop103.
Starting taskexecutor daemon on host hadoop104.

Note: although HA is now configured, the startup log still shows only one master:

Starting HA cluster with 1 masters.

Edit the masters file so that all three hosts run a JobManager (one host:port entry per master, where 8081 is the web UI port):

 vim masters

hadoop102:8081
hadoop103:8081
hadoop104:8081

Restart the cluster and all three masters come up:

Starting HA cluster with 3 masters.

Ask ZooKeeper which JobManager currently holds leadership:

[zk: localhost:2181(CONNECTED) 5] get /flink-standalone/cluster_atguigu/leader/dispatcher_lock
tcp://flink@hadoop102:39417/user/rpc/dispatcher

To test failover, find the leader's JVM with jps and kill it; one of the standbys takes over:

16237 StandaloneSessionClusterEntrypoint
 kill -9 16237
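
The killed master can later be rejoined as a standby. A sketch using the per-host script that start-cluster.sh itself relies on, run on the host that was killed:

 bin/jobmanager.sh start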