681. [Database Benchmark] Cloudwave 4.0 Cluster Edition (4-Node)

2023-05-25  by 七镜

1. Benchmark Results


Conclusion 1: On a 4-node cluster of four 64-core / 256 GB Alibaba Cloud servers, with Hadoop 3.2.2 as the distributed storage layer, Cloudwave 4.0 outperforms StarRocks 3.0 by roughly 36% overall (7.602 s vs. 10.397 s total) on the SSB1000 international standard benchmark.
Conclusion 2: In the multi-table join scenario, Cloudwave 4.0's response time is close to zero (about 0.01 s per query, versus several seconds for StarRocks 3.0).

[Appendix] Results of the 13 standard test SQL queries:

| Database      | Dataset | Response time (s) | Peak CPU utilization  | Storage compression ratio | Data import time |
| ------------- | ------- | ----------------- | --------------------- | ------------------------- | ---------------- |
| Cloudwave 4.0 | ssb1000 | 7.602             | 90% (5763% / 6400%)   | 59% (360 GB / 606 GB)     | 58 minutes       |
| StarRocks 3.0 | ssb1000 | 10.397            | 66.6% (4266% / 6400%) | 169% (1024 GB / 606 GB)   | 112 minutes      |

[Appendix] Results of the 2 extended test SQL queries:

| Database      | Dataset | Extended SQL 1 response time (s) | Extended SQL 1 peak CPU | Extended SQL 2 response time (s) | Extended SQL 2 peak CPU |
| ------------- | ------- | -------------------------------- | ----------------------- | -------------------------------- | ----------------------- |
| Cloudwave 4.0 | ssb1000 | 0.012                            | 0.0935% (6% / 6400%)    | 0.014                            | 0.118% (7.6% / 6400%)   |
| StarRocks 3.0 | ssb1000 | 2.79                             | 78.7% (5037% / 6400%)   | 4.8                              | 90.5% (5797% / 6400%)   |

2. Test Environment

| Table     | Rows       | Description          |
| --------- | ---------- | -------------------- |
| lineorder | 6 billion  | SSB orders table     |
| customer  | 30 million | SSB customer table   |
| part      | 2 million  | SSB part table       |
| supplier  | 2 million  | SSB supplier table   |
| dates     | 2,556      | Date dimension table |

| Hardware                                          | JDK version | MySQL version | Hadoop version | Cloudwave version | StarRocks version |
| ------------------------------------------------- | ----------- | ------------- | -------------- | ----------------- | ----------------- |
| 4 × Alibaba Cloud servers, 64 cores / 256 GB each  | –           | –             | 3.2.2          | 4.0               | 3.0               |

3. Test Method

Multi-table join test: in addition to the 13 standard SSB queries, 2 extended multi-table join SQL queries are run against both databases; their results are in the extended-SQL table above, and an illustrative query sketch follows below.
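The two extended join queries themselves are not reproduced in this post. Purely to illustrate the style of query being timed (a hypothetical example, not the actual benchmark SQL), a multi-table join over the SSB schema can be piped into each database's client the same way the appendix test scripts do:

# Hypothetical example only -- not the actual extended SQL used in the benchmark.
QUERY="select d_year, c_nation, s_nation, count(*) as cnt
       from lineorder, dates, customer, supplier
       where lo_orderdate = d_datekey
         and lo_custkey = c_custkey
         and lo_suppkey = s_suppkey
       group by d_year, c_nation, s_nation;"

# Cloudwave: pipe the query through the bundled client.
echo "use ssb1000; $QUERY" | ./cplus.sh

# StarRocks: the same query over the MySQL protocol.
echo "use ssb1000; $QUERY" | mysql -uroot -P 9030 -h 127.0.0.1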

4. Running the Test [cloudwave]

  1. Check the storage space prepared for Hadoop


./sync_scripts.sh 'df -h' | grep home
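sync_scripts.sh itself is not included in the post. A minimal sketch of such a helper, assuming it simply runs the given command on every node over ssh (node IPs taken from the BE list later in this post), might look like:

#!/bin/bash
# Hypothetical sketch of sync_scripts.sh: run the same command on all four nodes.
for node in 172.17.161.30 172.17.161.31 172.17.161.32 172.17.161.33
do
    echo "===== ${node} ====="
    ssh "${node}" "${1}"
done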
  2. Format HDFS (NameNode)


hdfs namenode -format
  3. Start HDFS and check the service status


start-dfs.sh 
./sync_scripts.sh 'jps'
  4. Create the ssb1000 upload directory in HDFS and upload the data


hdfs dfs -mkdir /cloudwave
hdfs dfs -mkdir /cloudwave/uploads
hdfs dfs -put ssb1000 /cloudwave/uploads/

  5. Verify the uploaded data


hdfs dfs -du -h /
du -sh /home/cloudwave/ssb-poc-0.9.3/ssb-poc/output/data_dir/ssb1000
  6. Start the Cloudwave database and import the ssb1000 data


./start-all-server.sh
./cplus_go.bin -s 'loaddata ssb1000'
  7. [cloudwave] Run the 13 standard test SQL queries
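The command for this step is not shown in the original; assuming the sql_ssb.sql and cplus.sh used by the Cloudwave test script in the appendix, a single pass over the 13 queries looks like:

# One pass over the 13 standard SSB queries (the appendix script repeats this 19 times).
cat sql_ssb.sql | ./cplus.sh > n1.txt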


  8. [cloudwave] Run the 2 extended multi-table join SQL queries (see the illustrative sketch in the test-method section above)



5. Comparison Test [starrocks]

  1. Clear HDFS
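The command is not included in the original. Assuming the goal is to free the space taken by the earlier Cloudwave upload, one way to clear it from HDFS is:

# Remove the ssb1000 upload directory from HDFS (bypassing the trash), then verify.
hdfs dfs -rm -r -skipTrash /cloudwave
hdfs dfs -du -h /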


  2. [starrocks] Start the StarRocks 3.0 FE


./fe/bin/start_fe.sh --daemon
  3. [starrocks] Add the StarRocks 3.0 BEs


mysql -uroot -h127.0.0.1 -P9030
ALTER SYSTEM ADD BACKEND "172.17.161.33:9050"; 
ALTER SYSTEM ADD BACKEND "172.17.161.32:9050"; 
ALTER SYSTEM ADD BACKEND "172.17.161.31:9050"; 
ALTER SYSTEM ADD BACKEND "172.17.161.30:9050"; 
  4. [starrocks] Start the StarRocks 3.0 BEs


./sync_scripts.sh "cd $(pwd)/be/bin && ./start_be.sh --daemon &&ps -ef | grep starrocks_be"
  5. [starrocks] Verify the cluster status; all 4 nodes should report Alive=true.
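The command is not shown in the original; one way to check backend status from the MySQL client (a sketch, not necessarily what the author ran) is:

# List the BEs and confirm every node reports Alive: true.
mysql -uroot -h127.0.0.1 -P9030 -e "SHOW BACKENDS\G" | grep -E "IP|Alive"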



  6. [starrocks] Create the tables
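The DDL is not shown in the original. The SSB table definitions normally come with the ssb-poc toolkit referenced in the data path earlier; a sketch of creating the target database and sourcing a DDL file (the file name below is a placeholder, not a confirmed path) would be:

# Create the target database, then load the SSB DDL (placeholder file name).
mysql -uroot -h127.0.0.1 -P9030 -e "CREATE DATABASE IF NOT EXISTS ssb1000;"
mysql -uroot -h127.0.0.1 -P9030 -D ssb1000 < ssb_create_table.sql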


  7. [starrocks] Import the data; the ssb1000 import took 112 minutes (see the results table above)


date && ./bin/stream_load.sh data_dir/ssb30 && date
  8. [starrocks] Check the ssb1000 compression ratio. The raw ssb1000 data is 606 GB, yet after import it occupies about 1 TB of distributed storage, i.e. the data grew instead of being compressed (a way to check the size is sketched after this list).
  9. [starrocks] Run the 13 standard test SQL queries (using the StarRocks test script in the appendix)
  10. [starrocks] Analyze the test results (using the analysis script in the appendix)
  11. [starrocks] Run the 2 extended multi-table join SQL queries
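For the compression check in item 8, the original does not show how the StarRocks storage size was measured. One option is StarRocks' SHOW DATA statement (a sketch; the author may instead have measured the BE storage directories with du):

# Report the on-disk size of every table in the ssb1000 database.
mysql -uroot -h127.0.0.1 -P9030 -e "USE ssb1000; SHOW DATA;"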


6. Appendix

  1. Cloudwave test script
#!/bin/bash
# Program:
#       test ssb
# History:
# 2023/03/17    junfenghe.cloud@qq.com  version:0.0.1

rm -rf ./n*txt
for ((i=1; i<20; i++))
do

    cat sql_ssb.sql |./cplus.sh > n${i}.txt

done

  2. StarRocks test script
#!/bin/bash
# Program:
#       test ssb
# History:
# 2023/03/17    junfenghe.cloud@qq.com  version:0.0.1

rm -rf ./n*txt
for ((i=1; i<20; i++))
do

    cat sql_ssb.sql | mysql -uroot -P 9030 -h 127.0.0.1 -v -vv -vvv >n${i}.txt

done

  3. Analysis script
#!/bin/bash
#Program:
#       analysis cloudwave/starrocks logs of base compute
#History:
#2023/02/20     junfenghe.cloud@qq.com  version:0.0.1

PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/sbin:/usr/local/bin:~/bin
export PATH



suff="(s)#####"

if [ -z "${1}" ]
then

        echo "Please input database'name"
        exit -1

fi

if [ -z "$2" ]
then

        echo "Please input times of scanner"
        exit -f
fi

if [ -n "${3}" ]
then
        suff=${3}
fi

for current in ${2}
do
        result_time=""

        if [ "${1}" == "starrocks" ]
        then
            for time in $( cat ${current} | grep sec  | awk -F '('  '{print $2}' | awk -F ' ' '{print $1}' )
            do
                result_time="${result_time}${time}${suff}"
            done
        elif [ "${1}" == "cloudwave" ]
        then
            for time in $( cat ${current} | grep Elapsed | awk '{print $2}'| sed 's/:/*60+/g'| sed 's/+00\*60//g ; s/+0\*60//g ; s/^0\*60+//g' )
            do
                result_time="${result_time}${time}${suff}"
            done
        fi

        echo ${result_time%${suff}*}

done


exit 0
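Usage is not spelled out in the original. Assuming the analysis script above is saved as analysis.sh (a placeholder name), extracting the per-query times from the Cloudwave result files would look like:

# Argument 1: database name (cloudwave | starrocks); argument 2: result files to scan (quoted glob).
./analysis.sh cloudwave 'n*.txt'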
  4. sql_ssb.sql
use ssb1000;
select sum(lo_revenue) as revenue from lineorder,dates where lo_orderdate = d_datekey and d_year = 1993 and lo_discount between 1 and 3 and lo_quantity < 25;
select sum(lo_revenue) as revenue from lineorder,dates where lo_orderdate = d_datekey and d_yearmonthnum = 199401 and lo_discount between 4 and 6 and lo_quantity between 26 and 35;
select sum(lo_revenue) as revenue from lineorder,dates where lo_orderdate = d_datekey and d_weeknuminyear = 6 and d_year = 1994 and lo_discount between 5 and 7 and lo_quantity between 26 and 35;
select sum(lo_revenue) as lo_revenue, d_year, p_brand from lineorder ,dates,part,supplier where lo_orderdate = d_datekey and lo_partkey = p_partkey and lo_suppkey = s_suppkey and p_category = 'MFGR#12' and s_region = 'AMERICA' group by d_year, p_brand order by d_year, p_brand;
select sum(lo_revenue) as lo_revenue, d_year, p_brand from lineorder,dates,part,supplier where lo_orderdate = d_datekey and lo_partkey = p_partkey and lo_suppkey = s_suppkey and p_brand between 'MFGR#2221' and 'MFGR#2228' and s_region = 'ASIA' group by d_year, p_brand order by d_year, p_brand;
select sum(lo_revenue) as lo_revenue, d_year, p_brand from lineorder,dates,part,supplier where lo_orderdate = d_datekey and lo_partkey = p_partkey and lo_suppkey = s_suppkey and p_brand = 'MFGR#2239' and s_region = 'EUROPE' group by d_year, p_brand order by d_year, p_brand;
select c_nation, s_nation, d_year, sum(lo_revenue) as lo_revenue from lineorder,dates,customer,supplier where lo_orderdate = d_datekey and lo_custkey = c_custkey and lo_suppkey = s_suppkey and c_region = 'ASIA' and s_region = 'ASIA'and d_year >= 1992 and d_year <= 1997 group by c_nation, s_nation, d_year order by d_year asc, lo_revenue desc;
select c_city, s_city, d_year, sum(lo_revenue) as lo_revenue from lineorder,dates,customer,supplier where lo_orderdate = d_datekey and lo_custkey = c_custkey and lo_suppkey = s_suppkey and  c_nation = 'UNITED STATES' and s_nation = 'UNITED STATES' and d_year >= 1992 and d_year <= 1997 group by c_city, s_city, d_year order by d_year asc, lo_revenue desc;
select c_city, s_city, d_year, sum(lo_revenue) as lo_revenue from lineorder,dates,customer,supplier where lo_orderdate = d_datekey and lo_custkey = c_custkey and lo_suppkey = s_suppkey and (c_city='UNITED KI1' or c_city='UNITED KI5') and (s_city='UNITED KI1' or s_city='UNITED KI5') and d_year >= 1992 and d_year <= 1997 group by c_city, s_city, d_year order by d_year asc, lo_revenue desc;
select c_city, s_city, d_year, sum(lo_revenue) as lo_revenue from lineorder,dates,customer,supplier where lo_orderdate = d_datekey and lo_custkey = c_custkey and lo_suppkey = s_suppkey and (c_city='UNITED KI1' or c_city='UNITED KI5') and (s_city='UNITED KI1' or s_city='UNITED KI5') and d_yearmonth  = 'Dec1997' group by c_city, s_city, d_year order by d_year asc, lo_revenue desc;
select d_year, c_nation, sum(lo_revenue) - sum(lo_supplycost) as profit from lineorder,dates,customer,supplier,part where lo_orderdate = d_datekey and lo_custkey = c_custkey and lo_suppkey = s_suppkey and lo_partkey = p_partkey and c_region = 'AMERICA' and s_region = 'AMERICA' and (p_mfgr = 'MFGR#1' or p_mfgr = 'MFGR#2') group by d_year, c_nation order by d_year, c_nation;
select d_year, s_nation, p_category, sum(lo_revenue) - sum(lo_supplycost) as profit from lineorder,dates,customer,supplier,part where lo_orderdate = d_datekey and lo_custkey = c_custkey and lo_suppkey = s_suppkey and lo_partkey = p_partkey and c_region = 'AMERICA'and s_region = 'AMERICA' and (d_year = 1997 or d_year = 1998) and (p_mfgr = 'MFGR#1' or p_mfgr = 'MFGR#2') group by d_year, s_nation, p_category order by d_year, s_nation, p_category;
select d_year, s_city, p_brand, sum(lo_revenue) - sum(lo_supplycost) as profit from lineorder,dates,customer,supplier,part where lo_orderdate = d_datekey and lo_custkey = c_custkey and lo_suppkey = s_suppkey and lo_partkey = p_partkey and c_region = 'AMERICA'and s_nation = 'UNITED STATES' and (d_year = 1997 or d_year = 1998) and p_category = 'MFGR#14' group by d_year, s_city, p_brand order by d_year, s_city, p_brand;