ClickHouse Cluster Deployment
2020-03-27
济南打工人
Cluster node information
192.168.175.212 ch01
192.168.175.213 ch02
192.168.175.214 ch03
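The hostnames ch01, ch02, and ch03 are assumed to resolve to the IPs above. If you have no DNS entries for them, a minimal sketch is to add the mapping to /etc/hosts on every node:
cat >> /etc/hosts <<'EOF'
192.168.175.212 ch01
192.168.175.213 ch02
192.168.175.214 ch03
EOF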
Setting up a ZooKeeper cluster
ClickHouse data replication requires ZooKeeper.
- Download the installation package
ZooKeeper official site: https://zookeeper.apache.org/releases.html#download
- Extract
tar zxf apache-zookeeper-3.6.0-bin.tar.gz -C /apps/
- Rename
mv apache-zookeeper-3.6.0-bin zookeeper
- Edit the configuration file
Enter ZooKeeper's conf directory and copy zoo_sample.cfg to zoo.cfg: cp zoo_sample.cfg zoo.cfg
Edit zoo.cfg as follows:
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/apps/zookeeper/data/zookeeper
clientPort=2181
autopurge.purgeInterval=0
globalOutstandingLimit=200
server.1=ch01:2888:3888
server.2=ch02:2888:3888
server.3=ch03:2888:3888
- Create the required directory
mkdir -p /apps/zookeeper/data/zookeeper
After configuration, scp the zookeeper directory to the other two nodes (a sketch follows).
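A sketch of that distribution step, assuming root SSH access from ch01 to the other nodes:
scp -r /apps/zookeeper ch02:/apps/
scp -r /apps/zookeeper ch03:/apps/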
- Set myid
vim /apps/zookeeper/data/zookeeper/myid  # 1 on ch01, 2 on ch02, 3 on ch03
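Instead of editing the file by hand, a minimal sketch that writes the id directly (run the matching line on its own node):
echo 1 > /apps/zookeeper/data/zookeeper/myid   # on ch01
echo 2 > /apps/zookeeper/data/zookeeper/myid   # on ch02
echo 3 > /apps/zookeeper/data/zookeeper/myid   # on ch03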
- Add environment variables
Append the following to /etc/profile:
export ZOOKEEPER_HOME=/apps/zookeeper
export PATH=$PATH:$ZOOKEEPER_HOME/bin
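Reload the profile on each node so the variables take effect in the current shell:
source /etc/profile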
- Enter ZooKeeper's bin directory and start the ZooKeeper service; this must be done on every node
zkServer.sh start
- After starting, check the status of each node
root@ch01:~# zkServer.sh status
/usr/bin/java
ZooKeeper JMX enabled by default
Using config: /apps/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: leader
root@ch02:~# zkServer.sh status
/usr/bin/java
ZooKeeper JMX enabled by default
Using config: /apps/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower
root@ch03:~# zkServer.sh status
/usr/bin/java
ZooKeeper JMX enabled by default
Using config: /apps/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower
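As an optional extra check, zkCli.sh (shipped in the same bin directory) can connect to the ensemble; inside its shell, ls / should list at least the zookeeper znode:
zkCli.sh -server ch01:2181
ls /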
Installing single-node ClickHouse
Official quick start: https://clickhouse.tech/#quick-start
sudo apt-get install dirmngr
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv E0C56BD4
echo "deb http://repo.clickhouse.tech/deb/stable/ main/" | sudo tee \
/etc/apt/sources.list.d/clickhouse.list
sudo apt-get update
sudo apt-get install -y clickhouse-server clickhouse-client
sudo service clickhouse-server start
clickhouse-client
- Create the data directories
mkdir -p /data/clickhouse /data/clickhouse/tmp/ /data/clickhouse/user_files/
- Configure /etc/clickhouse-server/config.xml
<log>/var/log/clickhouse-server/clickhouse-server.log</log>
<path>/data/clickhouse/</path>
<tmp_path>/data/clickhouse/tmp/</tmp_path>
<user_files_path>/data/clickhouse/user_files/</user_files_path>
<errorlog>/var/log/clickhouse-server/clickhouse-server.err.log</errorlog>
- Change ownership of the directories above to the clickhouse user
chown -R clickhouse:clickhouse /data
- Start ClickHouse
/etc/init.d/clickhouse-server start
- Verify single-node ClickHouse
root@ch01:~# clickhouse-client --password
ClickHouse client version 20.3.4.10 (official build).
Password for user (default):
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 20.3.4 revision 54433.
ch01 :) show databases;
SHOW DATABASES
┌─name────┐
│ default │
│ system │
└─────────┘
2 rows in set. Elapsed: 0.004 sec.
ch01 :)
Configuring the cluster
Edit the configuration file: vim /etc/clickhouse-server/config.xml
Uncomment <listen_host>::</listen_host> so the server listens on all interfaces.
- Create the configuration file: vim /etc/metrika.xml (see the note after the file for how it is loaded)
<yandex>
<clickhouse_remote_servers>
<perftest_3shards_1replicas>
<shard>
<internal_replication>true</internal_replication>
<replica>
<host>192.168.175.212</host>
<port>9000</port>
</replica>
</shard>
<shard>
<internal_replication>true</internal_replication>
<replica>
<host>192.168.175.213</host>
<port>9000</port>
</replica>
</shard>
<shard>
<internal_replication>true</internal_replication>
<replica>
<host>192.168.175.214</host>
<port>9000</port>
</replica>
</shard>
</perftest_3shards_1replicas>
</clickhouse_remote_servers>
<!-- ZooKeeper configuration -->
<zookeeper-servers>
<node index="1">
<host>192.168.175.212</host>
<port>2181</port>
</node>
<node index="2">
<host>192.168.175.213</host>
<port>2181</port>
</node>
<node index="3">
<host>192.168.175.214</host>
<port>2181</port>
</node>
</zookeeper-servers>
<macros>
<replica>192.168.175.212</replica>
</macros>
<networks>
<ip>::/0</ip>
</networks>
<clickhouse_compression>
<case>
<min_part_size>10000000000</min_part_size>
<min_part_size_ratio>0.01</min_part_size_ratio>
<method>lz4</method>
</case>
</clickhouse_compression>
</yandex>
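A note on how this file is picked up: in ClickHouse releases of this era, /etc/metrika.xml is the default substitutions file (the default value of <include_from>), and elements in config.xml that carry an incl attribute are filled in from it. If your config.xml lacks those incl references, the relevant lines look roughly like the sketch below; verify the exact names against your own config.xml:
<include_from>/etc/metrika.xml</include_from>
<remote_servers incl="clickhouse_remote_servers" />
<zookeeper incl="zookeeper-servers" optional="true" />
<macros incl="macros" optional="true" />
<compression incl="clickhouse_compression" optional="true" />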
The only part of the above configuration that differs between the three nodes is:
<macros>
<replica>192.168.175.212</replica>
</macros>
Change it to the IP of the current node.
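For example, on ch02 the block would read:
<macros>
<replica>192.168.175.213</replica>
</macros>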
- Restart the ClickHouse service
/etc/init.d/clickhouse-server restart
Verification
Start the clickhouse client on each node (exactly as on a single node) and query the cluster information:
select * from system.clusters;
[Screenshot: output of the system.clusters query]
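Beyond system.clusters, a quick end-to-end sketch of writing through the cluster; the table names test_local and test_all are made up for illustration, and the insert through the Distributed table is forwarded to the shards asynchronously, so the final count may take a moment to reach 3:
CREATE TABLE test_local ON CLUSTER perftest_3shards_1replicas (id UInt64) ENGINE = MergeTree() ORDER BY id;
CREATE TABLE test_all ON CLUSTER perftest_3shards_1replicas AS test_local ENGINE = Distributed(perftest_3shards_1replicas, default, test_local, rand());
INSERT INTO test_all VALUES (1), (2), (3);
SELECT count() FROM test_all;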