
Deploying Databend: Single-Node Mode (for Testing)

2022-08-10  神奇的考拉

This article walks through deploying Databend in single-node mode, mainly for testing purposes.

OS: macOS Monterey 12.5
CPU: dual-core Intel Core i5 / 2.3 GHz
RAM: 16 GB
rustup 1.25.1
rustc 1.64.0-nightly
Download the release package:

curl -LJO https://github.com/datafuselabs/databend/releases/download/v0.7.158-nightly/databend-v0.7.158-nightly-x86_64-apple-darwin.tar.gz

Next, extract the archive. Create a databend directory to hold the extracted files:

mkdir -p ~/databend
tar -zxvf databend-v0.7.158-nightly-x86_64-apple-darwin.tar.gz -C ~/databend
After extraction:

About databend-meta.toml
# Log directory (change it as needed)
log_dir            = "./logs"
# Listen addresses
admin_api_address  = "0.0.0.0:28101"
grpc_api_address   = "0.0.0.0:9191"

# Raft configuration
[raft_config]
id            = 1
raft_dir      = "./logs/meta"
raft_api_port = 28103

# Assign raft_{listen|advertise}_host in test config.
# This allows you to catch a bug in unit tests when something goes wrong in raft meta nodes communication. 
raft_listen_host = "127.0.0.1"
raft_advertise_host = "localhost"

# Cluster mode for this deployment
# Start up mode: single node cluster
single        = true
About databend-query.toml
# Query configuration
[query]
max_active_sessions = 256
wait_timeout_mills = 5000

# Ports used by the query service; make sure they do not conflict with anything else
# For flight rpc.
flight_api_address = "0.0.0.0:9091"

# Databend Query http address.
# For the admin REST API.
admin_api_address = "0.0.0.0:8081"

# Databend Query metrics REST API.
metric_api_address = "0.0.0.0:7071"

# Databend Query MySQL Handler.
mysql_handler_host = "0.0.0.0"
mysql_handler_port = 3307

# Databend Query ClickHouse Handler.
clickhouse_http_handler_host = "0.0.0.0"
clickhouse_http_handler_port = 8125

# Databend Query HTTP Handler.
http_handler_host = "0.0.0.0"
http_handler_port = 8001

tenant_id = "test_tenant"
cluster_id = "test_cluster"

table_engine_memory_enabled = true
database_engine_github_enabled = true

table_cache_enabled = true
table_memory_cache_mb_size = 1024
table_disk_cache_root = "_cache"
table_disk_cache_mb_size = 10240

# Logging configuration
[log]
level = "ERROR"
dir = "./databend/logs"
query_enabled = true

# Meta service configuration
[meta]
# To enable embedded meta-store, set address to "".
embedded_dir = "./databend/meta_embedded_1"
address = "0.0.0.0:9191"
username = "root"
password = "root"
client_timeout_in_second = 60
auto_sync_interval = 60

# Data storage location; fs is used here since this is only for testing
# Storage config.
[storage]
# fs | s3 | azblob
type = "fs"

# Set a local folder to store your data.
# Comment out this block if you're NOT using local file system as storage.
[storage.fs]
data_path = "./databend/stateless_test_data"
1. Start meta

./bin/databend-meta -c configs/databend-meta.toml > logs/meta.log 2>&1 &

Verify that meta started successfully:

curl -I  http://127.0.0.1:28101/v1/health
or
ps -ef | grep -v grep | grep databend

2. Start query

./bin/databend-query -c configs/databend-query.toml > logs/query.log 2>&1 &

Verify that query started successfully:

curl -I  http://127.0.0.1:8081/v1/health
or
ps -ef | grep -v grep | grep databend

3. Connect with the MySQL client

mysql -h 127.0.0.1 -P3307 -uroot

Run a quick test:

use default;
create table if not exists table_v1(f_a int);
insert into table_v1 values (1), (2), (3);

select * from table_v1;
Test results:

A SQL string is parsed by the Parser into an AST, then the Binder attaches catalog and other information to produce a logical plan; a series of optimizer passes turn it into a physical plan, and finally the physical plan is traversed to build the corresponding execution logic. The modules involved in query are:
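The stages above can be sketched as a chain of transformations. This is a deliberately simplified, hypothetical model; none of these types or function names are Databend's real API.

```rust
// Hypothetical sketch of the query pipeline described above:
// SQL string -> AST -> logical plan -> physical plan -> execution.

#[derive(Debug)]
struct Ast(String); // parsed syntax tree (stub)
#[derive(Debug)]
struct LogicalPlan(String); // plan with catalog info bound
#[derive(Debug)]
struct PhysicalPlan(String); // optimized, executable plan

// Parser: SQL text to AST.
fn parse(sql: &str) -> Ast {
    Ast(format!("AST({sql})"))
}

// Binder: attach catalog information, producing a logical plan.
fn bind(ast: Ast) -> LogicalPlan {
    LogicalPlan(format!("Logical({})", ast.0))
}

// Optimizer passes: logical plan to physical plan.
fn optimize(plan: LogicalPlan) -> PhysicalPlan {
    PhysicalPlan(format!("Physical({})", plan.0))
}

// Traverse the physical plan and build the execution logic.
fn execute(plan: PhysicalPlan) -> String {
    format!("executed {}", plan.0)
}

fn main() {
    let result = execute(optimize(bind(parse("select 1"))));
    println!("{result}");
}
```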

query

The query service; the overall entry point is bin/databend-query.rs. It contains a number of submodules; the more important ones are introduced below.

common/ast

The new SQL parser, implemented on top of nom_rule.

common/datavalues

Definitions of the various Column types, representing the in-memory layout of data; this will gradually be migrated to common/expressions.

common/datablocks

A DataBlock represents a Vec<Column> collection and wraps a number of commonly used methods; this will gradually be migrated to common/expressions.
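Conceptually, the structure looks something like the following sketch. The type and method names here are illustrative stand-ins, not Databend's actual definitions.

```rust
// Hypothetical sketch of a DataBlock as a collection of columns.

#[derive(Debug, Clone)]
enum Column {
    Int32(Vec<i32>),
    Utf8(Vec<String>),
}

impl Column {
    fn len(&self) -> usize {
        match self {
            Column::Int32(v) => v.len(),
            Column::Utf8(v) => v.len(),
        }
    }
}

// A DataBlock is essentially Vec<Column> plus convenience methods.
struct DataBlock {
    columns: Vec<Column>,
}

impl DataBlock {
    fn new(columns: Vec<Column>) -> Self {
        DataBlock { columns }
    }
    fn num_columns(&self) -> usize {
        self.columns.len()
    }
    // All columns share the same row count; read it off the first one.
    fn num_rows(&self) -> usize {
        self.columns.first().map_or(0, |c| c.len())
    }
}

fn main() {
    let block = DataBlock::new(vec![
        Column::Int32(vec![1, 2, 3]),
        Column::Utf8(vec!["a".into(), "b".into(), "c".into()]),
    ]);
    println!("{} cols x {} rows", block.num_columns(), block.num_rows());
}
```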

Implementation and registration of scalar functions, aggregate functions, and so on.

common/hashtable

Implements a linear-probing hashtable, mainly used for group-by aggregation, joins, and similar scenarios.

common/formats

Handles serialization and deserialization of external data formats such as CSV, TSV, and JSON.
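The linear-probing technique mentioned for common/hashtable can be illustrated with a minimal sketch: on a hash collision, the table simply steps to the next slot until it finds the key or an empty slot. This is a toy fixed-capacity version for illustration (no resizing), not Databend's actual implementation.

```rust
// Minimal linear-probing hash table counting occurrences per key,
// the kind of structure used for group-by aggregation.

const CAPACITY: usize = 16; // power of two, so we can mask instead of mod

#[derive(Clone)]
struct Slot {
    key: u64,
    count: u64,
    used: bool,
}

struct ProbeTable {
    slots: Vec<Slot>,
}

impl ProbeTable {
    fn new() -> Self {
        ProbeTable {
            slots: vec![Slot { key: 0, count: 0, used: false }; CAPACITY],
        }
    }

    // Simple multiplicative hash, masked down to a slot index.
    fn slot_index(key: u64) -> usize {
        (key.wrapping_mul(0x9E37_79B9_7F4A_7C15) as usize) & (CAPACITY - 1)
    }

    // Increment the count for `key`, probing linearly on collision.
    fn add(&mut self, key: u64) {
        let mut idx = Self::slot_index(key);
        loop {
            let slot = &mut self.slots[idx];
            if !slot.used {
                *slot = Slot { key, count: 1, used: true };
                return;
            }
            if slot.key == key {
                slot.count += 1;
                return;
            }
            idx = (idx + 1) & (CAPACITY - 1); // next slot, wrapping around
        }
    }

    fn get(&self, key: u64) -> u64 {
        let mut idx = Self::slot_index(key);
        loop {
            let slot = &self.slots[idx];
            if !slot.used {
                return 0; // hit an empty slot: key is absent
            }
            if slot.key == key {
                return slot.count;
            }
            idx = (idx + 1) & (CAPACITY - 1);
        }
    }
}

fn main() {
    let mut t = ProbeTable::new();
    for k in [1u64, 2, 1, 3, 1] {
        t.add(k); // group-by-style count per key
    }
    println!("count(1) = {}", t.get(1));
}
```

Open addressing with linear probing keeps all slots in one contiguous allocation, which is cache-friendly compared to chained buckets, one reason it suits hot aggregation loops.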

opensrv

See the code.
