Deploying a ClickHouse High-Availability Cluster with Multiple Instances per Node

2021-08-10  codeMan_6616

Environment

OS: Ubuntu 16.04
Node 1: 192.168.18.61 (ubuntu61)
    Runs two clickhouse-server instances: cs01-01 and cs01-02
Node 2: 192.168.18.63 (ubuntu63)
    Runs two clickhouse-server instances: cs02-01 and cs02-02
Node 3: 192.168.18.62 (ubuntu62)
    Runs ZooKeeper

cat /etc/hosts
192.168.18.61 ubuntu61
192.168.18.62 ubuntu62
192.168.18.63 ubuntu63
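Before going further it is worth confirming that each name maps to the address you expect. A minimal sketch of such a check, run here against an inline copy of the table above rather than the real /etc/hosts, so it is self-contained:

```shell
# Print the IP recorded for a hostname in a hosts-format file.
# On a real node, pass /etc/hosts as the first argument instead.
lookup() {
  awk -v name="$2" '$2 == name {print $1}' "$1"
}

hosts_file=$(mktemp)
cat > "$hosts_file" <<'EOF'
192.168.18.61 ubuntu61
192.168.18.62 ubuntu62
192.168.18.63 ubuntu63
EOF

lookup "$hosts_file" ubuntu61   # prints 192.168.18.61
```

(`getent hosts ubuntu61` on each node checks the same thing through the system resolver.)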

Install ClickHouse on each node first.

Running multiple instances on one node

Copy the configuration files:

sudo cp /etc/clickhouse-server/config.xml /etc/clickhouse-server/config9001.xml
sudo cp /etc/clickhouse-server/users.xml /etc/clickhouse-server/users9001.xml
sudo cp /etc/systemd/system/clickhouse-server.service /etc/systemd/system/clickhouse-server9001.service

sudo chown clickhouse:clickhouse /etc/clickhouse-server/config9001.xml
sudo chown clickhouse:clickhouse /etc/clickhouse-server/users9001.xml

Edit config9001.xml so the second instance gets its own ports, log files, and data paths (none may clash with the default instance):

<log>/var/log/clickhouse-server/clickhouse-server9001.log</log>
<errorlog>/var/log/clickhouse-server/clickhouse-server9001.err.log</errorlog>
<http_port>8124</http_port>
<tcp_port>9001</tcp_port>
<mysql_port>9014</mysql_port>
<postgresql_port>9015</postgresql_port>
<interserver_http_port>9019</interserver_http_port>
<interserver_http_host>ubuntu61</interserver_http_host>
<listen_host>::</listen_host>
<path>/var/lib/clickhouse9001/</path>
<tmp_path>/var/lib/clickhouse9001/tmp/</tmp_path>
<user_files_path>/var/lib/clickhouse9001/user_files/</user_files_path>
<users_xml>
    <path>users9001.xml</path>
</users_xml>
<local_directory> 
    <path>/var/lib/clickhouse9001/access/</path>
</local_directory>
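The edits above are mechanical, so they can be scripted. A sketch assuming POSIX sed, run here against a throwaway miniature config rather than the real /etc/clickhouse-server/config.xml; per-node values such as interserver_http_host still need to be set by hand:

```shell
# Derive a second-instance config by rewriting ports, the log file name,
# and the data path. A miniature stand-in keeps the sketch self-contained.
src=$(mktemp)
dst=$(mktemp)
cat > "$src" <<'EOF'
<clickhouse>
    <log>/var/log/clickhouse-server/clickhouse-server.log</log>
    <http_port>8123</http_port>
    <tcp_port>9000</tcp_port>
    <path>/var/lib/clickhouse/</path>
</clickhouse>
EOF

sed -e 's|clickhouse-server\.log|clickhouse-server9001.log|' \
    -e 's|<http_port>8123</http_port>|<http_port>8124</http_port>|' \
    -e 's|<tcp_port>9000</tcp_port>|<tcp_port>9001</tcp_port>|' \
    -e 's|/var/lib/clickhouse/|/var/lib/clickhouse9001/|' \
    "$src" > "$dst"

grep -E '9001|8124' "$dst"
```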

Note:
Each node puts its own hostname here:

<interserver_http_host>ubuntu61</interserver_http_host>

Before the first start, also create the new instance's data directory, since the server cannot create it under /var/lib by itself:

sudo mkdir -p /var/lib/clickhouse9001
sudo chown -R clickhouse:clickhouse /var/lib/clickhouse9001

Edit the new service unit:

vi /etc/systemd/system/clickhouse-server9001.service
ExecStart=/usr/bin/clickhouse-server --config=/etc/clickhouse-server/config9001.xml --pid-file=/run/clickhouse-server/clickhouse-server9001.pid
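Only the config path and pid file in ExecStart change, so this edit can be scripted as well. A sketch against an inline stand-in for the copied unit file:

```shell
# Rewrite the ExecStart line of the copied unit for the second instance.
unit=$(mktemp)
patched=$(mktemp)
cat > "$unit" <<'EOF'
[Service]
ExecStart=/usr/bin/clickhouse-server --config=/etc/clickhouse-server/config.xml --pid-file=/run/clickhouse-server/clickhouse-server.pid
EOF

sed -e 's|config\.xml|config9001.xml|' \
    -e 's|clickhouse-server\.pid|clickhouse-server9001.pid|' \
    "$unit" > "$patched"

grep ExecStart "$patched"
```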

Reload systemd (required after adding a unit file) and start the new instance:

sudo systemctl daemon-reload
sudo systemctl start clickhouse-server9001
sudo systemctl status clickhouse-server9001

Cluster configuration

vi /etc/clickhouse-server/config.xml

Add the sections below (add them to config9001.xml as well, since both instances on a node join the cluster):

<remote_servers>
    <cluster_3s_1r>
        <shard>
            <internal_replication>true</internal_replication>
            <replica>
                <host>ubuntu61</host>
                <port>9000</port>
                <user>default</user>
                <password>123456</password>
            </replica>
            <replica>
                <host>ubuntu63</host>
                <port>9001</port>
                <user>default</user>
                <password></password>
            </replica>
        </shard>
        <shard>
            <internal_replication>true</internal_replication>
            <replica>
                <host>ubuntu63</host>
                <port>9000</port>
                <user>default</user>
                <password></password>
            </replica>
            <replica>
                <host>ubuntu61</host>
                <port>9001</port>
                <user>default</user>
                <password></password>
            </replica>
        </shard>
    </cluster_3s_1r>
</remote_servers>
<zookeeper>
    <node>
        <host>ubuntu62</host>
        <port>2181</port>
    </node>
</zookeeper>
<macros>
        <layer>01</layer>
        <shard>01</shard>
        <replica>cluster01-01-1</replica>
</macros>

The configuration differs between instances only in the macros block:
ubuntu61, port 9000 instance (shard 1, replica 1):

<macros>
        <layer>01</layer>
        <shard>01</shard>
        <replica>cluster01-01-1</replica>
</macros>

ubuntu61, port 9001 instance (shard 2, replica 2):

<macros>
        <layer>01</layer>
        <shard>02</shard>
        <replica>cluster01-02-2</replica>
</macros>

ubuntu63, port 9000 instance (shard 2, replica 1):

<macros>
        <layer>01</layer>
        <shard>02</shard>
        <replica>cluster01-02-1</replica>
</macros>

ubuntu63, port 9001 instance (shard 1, replica 2):

<macros>
        <layer>01</layer>
        <shard>01</shard>
        <replica>cluster01-01-2</replica>
</macros>
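Since only the shard and replica numbers vary across the four macros blocks, the snippet can be templated instead of edited by hand four times. A sketch with hypothetical `emit_macros` helper taking the shard and replica numbers as arguments:

```shell
# Emit the macros block for one instance; the shard number and replica
# number are the only inputs, everything else is fixed.
emit_macros() {
  shard="$1"
  replica="$2"
  cat <<EOF
<macros>
        <layer>01</layer>
        <shard>${shard}</shard>
        <replica>cluster01-${shard}-${replica}</replica>
</macros>
EOF
}

emit_macros 01 1   # ubuntu61, port 9000
emit_macros 02 2   # ubuntu61, port 9001
```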

After restarting all instances, check that the cluster is visible:

clickhouse-client --port 9000 --password 123456
select * from system.clusters;

Testing the cluster

1. Log in to each instance with clickhouse-client and create the database:

CREATE DATABASE monchickey;
show databases;

2. On each instance, create the replicated table, passing the ZooKeeper path and replica name that match that instance's macros:

CREATE TABLE monchickey.image_label ( label_id UInt32,  label_name String,  insert_time Date) ENGINE = ReplicatedMergeTree('/clickhouse/tables/01-01/image_label','cluster01-01-1',insert_time, (label_id, insert_time), 8192);

CREATE TABLE monchickey.image_label ( label_id UInt32,  label_name String,  insert_time Date) ENGINE = ReplicatedMergeTree('/clickhouse/tables/01-02/image_label','cluster01-02-2',insert_time, (label_id, insert_time), 8192);

CREATE TABLE monchickey.image_label ( label_id UInt32,  label_name String,  insert_time Date) ENGINE = ReplicatedMergeTree('/clickhouse/tables/01-02/image_label','cluster01-02-1',insert_time, (label_id, insert_time), 8192);

CREATE TABLE monchickey.image_label ( label_id UInt32,  label_name String,  insert_time Date) ENGINE = ReplicatedMergeTree('/clickhouse/tables/01-01/image_label','cluster01-01-2',insert_time, (label_id, insert_time), 8192);
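The four statements above use the legacy ReplicatedMergeTree argument form (date column, sorting key, and index granularity as engine arguments), which only older ClickHouse releases accept. On current releases the macros defined earlier let one identical DDL serve all four instances, since {layer}, {shard}, and {replica} are substituted per instance; a sketch, where the PARTITION BY choice is an assumption rather than from the original:

```shell
# The same statement can be sent to every instance: each one substitutes
# its own macros into the ZooKeeper path and replica name.
sql=$(cat <<'SQL'
CREATE TABLE monchickey.image_label
(
    label_id UInt32,
    label_name String,
    insert_time Date
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{layer}-{shard}/image_label', '{replica}')
PARTITION BY toYYYYMM(insert_time)
ORDER BY (label_id, insert_time)
SQL
)
printf '%s\n' "$sql"
# To apply on a node: printf '%s\n' "$sql" | clickhouse-client --port 9000 --password 123456
```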

3. Create the distributed table.
A distributed table stores no data itself; it acts as a query engine, forwarding the SQL to every shard of the cluster and aggregating the results before returning them to the client.
It does not need to exist on every node; create it only on the nodes where queries will actually be issued.

use monchickey;
CREATE TABLE image_label_all AS image_label ENGINE = Distributed(cluster_3s_1r, monchickey, image_label, rand());
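The fourth Distributed argument, rand(), is the sharding expression: on insert, each row goes to the shard whose weight interval contains the expression's value modulo the total weight. With two shards of equal weight that reduces to key % 2; a toy illustration in plain shell arithmetic (not ClickHouse code):

```shell
# With 2 equal-weight shards, a row whose sharding value is k lands on
# shard (k % 2) + 1 (ClickHouse numbers shards from 1).
route() { echo $(( $1 % 2 + 1 )); }

for k in 10 11 12 13; do
  echo "sharding value $k -> shard $(route "$k")"
done
```

Because rand() is uniform, rows spread roughly evenly across shards; a column expression such as intHash64(label_id) would instead pin each key to a fixed shard.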

4. Insert data to verify replication and distribution.
On node 1, shard 1 (ubuntu61, port 9000), run:

use monchickey;
INSERT INTO monchickey.image_label (label_id,label_name,insert_time) VALUES (1,'aaa','2021-07-13');
INSERT INTO monchickey.image_label (label_id,label_name,insert_time) VALUES (2,'bbb','2021-07-13');

The rows should now be visible when querying from the other nodes, which confirms the cluster works.
Insert on the other shard's instances as well:

INSERT INTO monchickey.image_label (label_id,label_name,insert_time) VALUES (3,'ccc','2021-07-14');
INSERT INTO monchickey.image_label (label_id,label_name,insert_time) VALUES (4,'ddd','2021-07-15');
INSERT INTO monchickey.image_label (label_id,label_name,insert_time) VALUES (5,'eee','2021-07-16');

Check the results on different instances and observe how the rows are distributed:

select * from image_label;
select * from image_label_all;