
Kylo Cluster + NiFi Cluster Setup

2018-11-09  by 夜空最亮的9星

Kylo cluster setup guide

The "Clustering Kylo" section of the Kylo documentation is somewhat vague, but it covers the broad steps.

The goal of clustering Kylo is high availability (HA), so the two nodes share one database. In this setup, both nodes share the MySQL, ActiveMQ, Elasticsearch, and NiFi instances running on node 122.

121  122  Service
 N    Y   MySQL
 N    Y   ActiveMQ
 N    Y   Elasticsearch
 Y    Y   NiFi
 Y    Y   Kylo

ModeShape Configuration

Edit the metadata-repository.json file:

vim /opt/kylo/kylo-services/conf/metadata-repository.json

Append the following after the last entry:

,"clustering": {
    "clusterName": "kylo-modeshape-cluster",
    "configuration": "modeshape-jgroups-config.xml",
    "locking": "db"
}
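Since a stray comma or brace here will prevent kylo-services from starting, it is worth validating the edited file before restarting. A minimal check, shown against a stand-alone copy of the fragment in /tmp (for the real check, point json.tool at /opt/kylo/kylo-services/conf/metadata-repository.json):

```shell
# Write a stand-alone copy of the clustering object and check that it parses as JSON.
# In the real file this object is appended inside the existing top-level object.
cat > /tmp/clustering-check.json <<'EOF'
{"clustering": {
    "clusterName": "kylo-modeshape-cluster",
    "configuration": "modeshape-jgroups-config.xml",
    "locking": "db"
}}
EOF
python3 -m json.tool /tmp/clustering-check.json > /dev/null && echo "clustering fragment: OK"
```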

Preview after editing: (screenshot omitted)

Kylo Configuration

Run the following in the /opt/kylo/kylo-services/conf/ directory:

<!-- The empty echo statements here are required -->

echo " " >>  application.properties

echo " " >>  application.properties

echo "kylo.cluster.nodeCount=2" >>  application.properties

echo "kylo.cluster.jgroupsConfigFile=kylo-cluster-jgroups-config.xml" >>  application.properties

sed -i 's|jms.activemq.broker.url=.*|jms.activemq.broker.url=tcp://10.88.88.122:61616|'  application.properties

sed -i 's|config.elasticsearch.jms.url=.*|config.elasticsearch.jms.url=tcp://10.88.88.122:61616|'  application.properties
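The two blank-line echoes above are not decorative: if application.properties happens to end without a trailing newline, the first appended property would be glued onto the last existing line. A self-contained demonstration of the pitfall, using throwaway files in /tmp:

```shell
# A properties file that ends WITHOUT a trailing newline
printf 'nifi.rest.port=8079' > /tmp/demo.properties

# Appending directly corrupts the last line...
cp /tmp/demo.properties /tmp/bad.properties
echo "kylo.cluster.nodeCount=2" >> /tmp/bad.properties

# ...while appending a blank line first keeps the property intact
cp /tmp/demo.properties /tmp/good.properties
echo " " >> /tmp/good.properties
echo "kylo.cluster.nodeCount=2" >> /tmp/good.properties

echo "bad:  $(grep -c '^kylo.cluster.nodeCount=2$' /tmp/bad.properties || true)"   # -> bad:  0
echo "good: $(grep -c '^kylo.cluster.nodeCount=2$' /tmp/good.properties)"          # -> good: 1
```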

Point nifi.rest.host on both nodes at the same NiFi node:

On 121:

vim /opt/kylo/kylo-services/conf/application.properties 

nifi.rest.host=10.88.88.122

nifi.rest.port=8079

On 122:

vim /opt/kylo/kylo-services/conf/application.properties 

nifi.rest.host=10.88.88.122

nifi.rest.port=8079

Edit elasticsearch-rest.properties on both 121 and 122:

search.rest.host=10.88.88.122

search.rest.port=9200
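Following the same sed pattern used for application.properties, these two settings can be applied on both nodes; a sketch, assuming elasticsearch-rest.properties sits in the same conf directory:

```shell
cd /opt/kylo/kylo-services/conf
sed -i 's|search.rest.host=.*|search.rest.host=10.88.88.122|' elasticsearch-rest.properties
sed -i 's|search.rest.port=.*|search.rest.port=9200|' elasticsearch-rest.properties
```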

Quartz Scheduler Configuration

Review the configuration files under the /opt/kylo/setup/config/kylo-cluster directory: (screenshot omitted)

On 121:

Copy quartz-cluster-example.properties to the /opt/kylo/kylo-services/conf/ directory and rename it quartz.properties. No content changes are needed.

Copy kylo-cluster-jgroups-config-example.xml to the /opt/kylo/kylo-services/conf/ directory and rename it kylo-cluster-jgroups-config.xml.
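The two copy-and-rename steps can be scripted as follows (paths as given above):

```shell
cd /opt/kylo/setup/config/kylo-cluster
cp quartz-cluster-example.properties       /opt/kylo/kylo-services/conf/quartz.properties
cp kylo-cluster-jgroups-config-example.xml /opt/kylo/kylo-services/conf/kylo-cluster-jgroups-config.xml
```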

Then modify the parameters as follows.

kylo-cluster-jgroups-config.xml on 122:

<!-- bind_port: leave unchanged -->
<!-- bind_addr: this node's IP -->

 <TCP bind_port="7900"
       bind_addr="10.88.88.122"
       
       ....

<!-- initial_hosts: the IPs of the Kylo nodes; the port is always 7900 -->

<TCPPING timeout="3000" async_discovery="true" num_initial_members="2"
           initial_hosts="10.88.88.122[7900],10.88.88.121[7900]"
           

kylo-cluster-jgroups-config.xml on 121:

<!-- bind_port: leave unchanged -->
<!-- bind_addr: this node's IP -->

 <TCP bind_port="7900"
       bind_addr="10.88.88.121"
       
       ....

<!-- initial_hosts: the IPs of the Kylo nodes; the port is always 7900 -->

<TCPPING timeout="3000" async_discovery="true" num_initial_members="2"
           initial_hosts="10.88.88.121[7900],10.88.88.122[7900]"
           

Preview after editing: (screenshot omitted)

Copy modeshape-local-test-jgroups-config.xml to the /opt/kylo/kylo-services/conf/ directory and rename it modeshape-jgroups-config.xml.
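As with the previous copies, this can be done with a single command:

```shell
cp /opt/kylo/setup/config/kylo-cluster/modeshape-local-test-jgroups-config.xml \
   /opt/kylo/kylo-services/conf/modeshape-jgroups-config.xml
```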

modeshape-jgroups-config.xml on 121:

 <TCP bind_port="7800"
       bind_addr="10.88.88.121"

        ...
              
<TCPPING timeout="3000" async_discovery="true" num_initial_members="2"
           initial_hosts="10.88.88.121[7800],10.88.88.122[7801]"

modeshape-jgroups-config.xml on 122:

 <TCP bind_port="7801"
       bind_addr="10.88.88.122"

        ...
              
<TCPPING timeout="3000" async_discovery="true" num_initial_members="2"
           initial_hosts="10.88.88.122[7801],10.88.88.121[7800]"

Preview after editing: (screenshot omitted)

Test (this step can be skipped):

On 121, run:

java -Djava.net.preferIPv4Stack=true -cp /opt/kylo/kylo-services/conf:/opt/kylo/kylo-services/lib/*:/opt/kylo/kylo-services/plugin/* org.jgroups.tests.McastReceiverTest -bind_addr 10.88.88.121 -port 7900


On 122, run:

java -Djava.net.preferIPv4Stack=true -cp /opt/kylo/kylo-services/conf:/opt/kylo/kylo-services/lib/*:/opt/kylo/kylo-services/plugin/* org.jgroups.tests.McastSenderTest -bind_addr 10.88.88.122 -port 7900
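Note that McastReceiverTest/McastSenderTest exercise multicast, while the jgroups configurations above use TCP with TCPPING. It may therefore also be worth confirming plain TCP reachability on the jgroups ports between the nodes, for example with nc (assuming it is installed):

```shell
# From 121: check that 122's jgroups ports accept TCP connections
nc -zv 10.88.88.122 7900   # kylo cluster channel
nc -zv 10.88.88.122 7801   # modeshape channel on 122
```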

Modify the run-kylo-services.sh file:

java $KYLO_SERVICES_OPTS -Djava.net.preferIPv4Stack=true -cp /opt/kylo/kylo-services/conf ....

Reason: if you get the "Network is unreachable" error shown below, you may need the change above:

SEVERE: JGRP000200: failed sending discovery request
java.io.IOException: Network is unreachable
    at java.net.PlainDatagramSocketImpl.send(Native Method)
    at java.net.DatagramSocket.send(DatagramSocket.java:693)
    at org.jgroups.protocols.MPING.sendMcastDiscoveryRequest(MPING.java:295)
    at org.jgroups.protocols.PING.sendDiscoveryRequest(PING.java:62)
    at org.jgroups.protocols.PING.findMembers(PING.java:32)
    at org.jgroups.protocols.Discovery.findMembers(Discovery.java:244)
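If you prefer not to edit the script by hand, the flag can be injected with sed. A sketch, demonstrated on a throwaway copy in /tmp; for the real change, point sed at run-kylo-services.sh (its location under /opt/kylo/kylo-services is assumed):

```shell
# Throwaway copy with the same java invocation shape as run-kylo-services.sh
printf 'java $KYLO_SERVICES_OPTS -cp /opt/kylo/kylo-services/conf ...\n' > /tmp/run-kylo-services.sh

# Insert the IPv4 flag right after $KYLO_SERVICES_OPTS
sed -i 's|java \$KYLO_SERVICES_OPTS |java $KYLO_SERVICES_OPTS -Djava.net.preferIPv4Stack=true |' \
    /tmp/run-kylo-services.sh

cat /tmp/run-kylo-services.sh
# -> java $KYLO_SERVICES_OPTS -Djava.net.preferIPv4Stack=true -cp /opt/kylo/kylo-services/conf ...
```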

Recreate the kylo database

Log in to MySQL:

drop database kylo; create database kylo;

Then:

cd /opt/kylo/setup/sql/mysql

sh setup-mysql.sh 10.88.88.122 kylo kylo

cd /opt/kylo/setup/sql

sh generate-update-sql.sh
<!-- This generates two files in the current directory -->

Log in to MySQL:

use kylo;
source /opt/kylo/setup/sql/mysql/kylo-db-update-script.sql;

Download Quartz

Download and extract the Quartz distribution (http://d2zwv9pap9ylyd.cloudfront.net/quartz-2.2.3-distribution.tar.gz); you only need it for the database scripts.

Run the Quartz database scripts for your database, found under docs/dbTables.

Log in to MySQL again:

use kylo;

source ~/quartz-2.2.3/docs/dbTables/tables_mysql.sql;

Start Kylo

Visit http://10.88.88.122:8400/index.html#!/admin/cluster to check the cluster status. (screenshot omitted)

NiFi cluster setup

The Kylo documentation covers this in detail; just follow the official guide.

Here, NiFi is installed on both 121 and 122.

Then run the following commands:

On 121:

sed -i "s|nifi.web.http.host=.*|nifi.web.http.host=10.88.88.121|" /opt/nifi/current/conf/nifi.properties

sed -i 's|nifi.cluster.is.node=.*|nifi.cluster.is.node=true|' /opt/nifi/current/conf/nifi.properties

sed -i 's|nifi.cluster.node.address=.*|nifi.cluster.node.address=10.88.88.121|' /opt/nifi/current/conf/nifi.properties

sed -i 's|nifi.cluster.node.protocol.port=.*|nifi.cluster.node.protocol.port=8078|' /opt/nifi/current/conf/nifi.properties

sed -i 's|nifi.zookeeper.connect.string=.*|nifi.zookeeper.connect.string=10.88.88.121:2181|' /opt/nifi/current/conf/nifi.properties
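Node 122 needs the symmetric changes; a sketch, assuming its nifi.properties lives at the same path and that ZooKeeper stays on 121:

```shell
# On 122 (mirrors the 121 commands, with 122's own address)
sed -i 's|nifi.web.http.host=.*|nifi.web.http.host=10.88.88.122|' /opt/nifi/current/conf/nifi.properties
sed -i 's|nifi.cluster.is.node=.*|nifi.cluster.is.node=true|' /opt/nifi/current/conf/nifi.properties
sed -i 's|nifi.cluster.node.address=.*|nifi.cluster.node.address=10.88.88.122|' /opt/nifi/current/conf/nifi.properties
sed -i 's|nifi.cluster.node.protocol.port=.*|nifi.cluster.node.protocol.port=8078|' /opt/nifi/current/conf/nifi.properties
sed -i 's|nifi.zookeeper.connect.string=.*|nifi.zookeeper.connect.string=10.88.88.121:2181|' /opt/nifi/current/conf/nifi.properties
```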


NiFi's cluster mode is now configured. Next, configure the interaction between Kylo and NiFi.

Modify NiFi's ActiveMQ configuration

vi /opt/nifi/ext-config/config.properties 

# This must point at the ActiveMQ master node
jms.activemq.broker.url=tcp://10.88.88.121:61616

Or, equivalently:

sed -i "s|jms.activemq.broker.url=.*|jms.activemq.broker.url=tcp://10.88.88.121:61616|" /opt/nifi/ext-config/config.properties 

Modify the nifi.properties configuration file


<!-- Increase the timeouts to avoid 500 errors when importing templates -->
sed -i "s|nifi.cluster.node.connection.timeout=.*|nifi.cluster.node.connection.timeout=25 sec|"  /opt/nifi/nifi-1.6.0/conf/nifi.properties
    
sed -i "s|nifi.cluster.node.read.timeout=.*|nifi.cluster.node.read.timeout=25 sec|"  /opt/nifi/nifi-1.6.0/conf/nifi.properties

Add the monitoring plugin

Copy kylo-service-monitor-kylo-cluster-0.9.1.jar from /opt/kylo/setup/plugins to the /opt/kylo/kylo-services/plugin/ directory.
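As a single command:

```shell
cp /opt/kylo/setup/plugins/kylo-service-monitor-kylo-cluster-0.9.1.jar /opt/kylo/kylo-services/plugin/
```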

After restarting NiFi and Kylo, importing the test template (data_ingest.zip) initially failed; the cause was the ActiveMQ configuration, which needed the changes described above.


After that, running a feed in Kylo succeeds.
