Druid in Practice (2): Installation and Configuration

2017-08-22  zfylin

Preparation

Packages

The production Hadoop cluster runs Java 7, while the official binaries are built with Java 8, so we download the Druid source and rebuild it with Java 7.
This installation uses Druid 0.9.2 + Imply 2.0.0.
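
A minimal rebuild sketch, assuming Maven 3 and a JDK 7 on the PATH (the exact steps may differ for your tree):

git clone https://github.com/druid-io/druid.git
cd druid
git checkout druid-0.9.2
mvn clean package -DskipTests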

Environment

External dependencies

Druid depends on three external components, all configured later in this document: ZooKeeper for coordination, a metadata store (MySQL here), and deep storage (HDFS here).

Planning and deployment

Druid has a distributed design in which each node type has its own job, so the node types need to be planned as a whole before the cluster is deployed. Functionally they fall into three groups: Master, Query, and Data nodes.

In practice, deploy at least two Master nodes for mutual fail-over. Since Druid scales horizontally and machine resources are limited, the Master and Query roles can be co-located on the same physical machines. To speed up queries over hot data, a Historical node can also run there, using Druid's tier feature to keep a small set of hot datasources on the Master machines (sketched just below).
For hardware, choose multi-core, large-memory machines for the Master and Query nodes; Data nodes hold historical segments and local segment caches, so they need more disk space. SSDs are recommended.
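
As a hedged sketch of that tiering: a Historical declares its tier in its runtime.properties, and the Coordinator then pins hot datasources to that tier via load rules (edited in the Coordinator console). The tier name "hot" and the priority below are our own choices:

# On the Master machines' Historical (runtime.properties)
druid.server.tier=hot
# Higher priority so queries prefer this tier when segments are replicated
druid.server.priority=10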

Master node supervise config:

cp conf/supervise/master-no-zk.conf conf/supervise/master-with-query.conf
vim conf/supervise/master-with-query.conf

The file contents:

:verify bin/verify-java
:verify bin/verify-node

broker bin/run-druid broker conf
historical bin/run-druid historical conf
pivot bin/run-pivot conf
coordinator bin/run-druid coordinator conf
!p80 overlord bin/run-druid overlord conf

Start it with:

nohup ./bin/supervise -c conf/supervise/master-with-query.conf > master-with-query.log &
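
An optional sanity check (hedged; ports are taken from the configs below): every Druid service exposes a /status endpoint.

curl http://druid-01:8081/status   # coordinator
curl http://druid-01:8082/status   # broker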

Data node supervise config:

vim conf/supervise/data.conf

The file contents:

:verify bin/verify-java

historical bin/run-druid historical conf
middleManager bin/run-druid middleManager conf

# Uncomment to use Tranquility Server
#!p95 tranquility-server bin/tranquility server -configFile conf/tranquility/server.json

# Uncomment to use Tranquility Kafka
#!p95 tranquility-kafka bin/tranquility kafka -configFile conf/tranquility/kafka.json

Start it with:

nohup ./bin/supervise -c conf/supervise/data.conf > data.log &
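
The matching check on a data machine (hedged; ports from the configs below):

curl http://druid-02:8083/status   # historical
curl http://druid-02:8091/status   # middleManager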

Basic configuration

Common dependency configuration

The config file is conf/druid/_common/common.runtime.properties.

Note: when HDFS is used as deep storage, offline batch ingestion tasks use MapReduce to speed up writes, so the production Hadoop client config files core-site.xml, hdfs-site.xml, yarn-site.xml, and mapred-site.xml must be copied into conf/druid/_common.
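
A minimal sketch of that copy, assuming the Hadoop client configs live under /etc/hadoop/conf (adjust to your cluster):

cp /etc/hadoop/conf/core-site.xml /etc/hadoop/conf/hdfs-site.xml \
   /etc/hadoop/conf/yarn-site.xml /etc/hadoop/conf/mapred-site.xml \
   conf/druid/_common/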

Data node tuning

The concrete values are in the Data machines' jvm.config and runtime.properties files below.

Query node tuning

Likewise, see the Broker and Historical configs under the Master machines section below.

Node configuration

Node layout

Machine    IP           Roles
druid-01   10.1.12.76   master, query, pivot
druid-02   10.1.12.77   data
druid-03   10.1.12.78   master, query
druid-04   10.1.12.79   data
druid-05   10.1.12.80   data

Global common configuration (conf/druid/_common/common.runtime.properties):

#
# Extensions
#

druid.extensions.directory=dist/druid/extensions
druid.extensions.hadoopDependenciesDir=dist/druid/hadoop-dependencies
druid.extensions.loadList=["druid-histogram","druid-datasketches","mysql-metadata-storage","druid-hdfs-storage"]

#
# Logging
#

# Log all runtime properties on startup. Disable to avoid logging properties on startup:
druid.startup.logging.logProperties=true

#
# Zookeeper
#

druid.zk.service.host=datanode1:2181,datanode2:2181,datanode3:2181
druid.zk.paths.base=/druid

#
# Metadata storage
#

# For Derby server on your Druid Coordinator (only viable in a cluster with a single Coordinator, no fail-over):
#druid.metadata.storage.type=derby
#druid.metadata.storage.connector.connectURI=jdbc:derby://master.example.com:1527/var/druid/metadata.db;create=true
#druid.metadata.storage.connector.host=master.example.com
#druid.metadata.storage.connector.port=1527

# For MySQL:
druid.metadata.storage.type=mysql
druid.metadata.storage.connector.connectURI=jdbc:mysql://druid-01:3306/druid?characterEncoding=utf8&useSSL=false&serverTimezone=UTC
druid.metadata.storage.connector.user=username
druid.metadata.storage.connector.password=password

# For PostgreSQL:
#druid.metadata.storage.type=postgresql
#druid.metadata.storage.connector.connectURI=jdbc:postgresql://db.example.com:5432/druid
#druid.metadata.storage.connector.user=...
#druid.metadata.storage.connector.password=...

#
# Deep storage
#

# For local disk (only viable in a cluster if this is a network mount):
#druid.storage.type=local
#druid.storage.storageDirectory=var/druid/segments

# For HDFS:
druid.storage.type=hdfs
druid.storage.storageDirectory=hdfs://ip:port/druid/segments

# For S3:
#druid.storage.type=s3
#druid.storage.bucket=your-bucket
#druid.storage.baseKey=druid/segments
#druid.s3.accessKey=...
#druid.s3.secretKey=...

#
# Indexing service logs
#

# For local disk (only viable in a cluster if this is a network mount):
#druid.indexer.logs.type=file
#druid.indexer.logs.directory=var/druid/indexing-logs

# For HDFS:
druid.indexer.logs.type=hdfs
druid.indexer.logs.directory=hdfs://ip:port/druid/indexing-logs

# For S3:
#druid.indexer.logs.type=s3
#druid.indexer.logs.s3Bucket=your-bucket
#druid.indexer.logs.s3Prefix=druid/indexing-logs

#
# Service discovery
#

druid.selectors.indexing.serviceName=druid/overlord
druid.selectors.coordinator.serviceName=druid/coordinator

#
# Monitoring
#

druid.monitoring.monitors=["com.metamx.metrics.JvmMonitor"]
druid.emitter=logging
druid.emitter.logging.logLevel=debug
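
Before the first start-up, the metadata database and deep-storage paths referenced above must already exist. A hedged setup sketch (host name and credentials are the placeholders from the config):

mysql -h druid-01 -u root -p -e "CREATE DATABASE druid DEFAULT CHARACTER SET utf8;"
hadoop fs -mkdir -p /druid/segments /druid/indexing-logs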

Master machine configuration

There are two Master machines; they back each other up for HA and also serve part of the query load.

Coordinator jvm.config:

-server
-Xms3g
-Xmx3g
-Duser.timezone=UTC
-Dfile.encoding=UTF-8
-Djava.io.tmpdir=var/tmp
-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
-Dderby.stream.error.file=var/druid/derby.log

Coordinator runtime.properties:

druid.host=druid-01
druid.service=druid/coordinator
druid.port=8081

druid.coordinator.startDelay=PT30S
druid.coordinator.period=PT30S

Overlord jvm.config:

-server
-Xms3g
-Xmx3g
-Duser.timezone=UTC
-Dfile.encoding=UTF-8
-Djava.io.tmpdir=var/tmp
-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager

Overlord runtime.properties:

druid.host=druid-01
druid.service=druid/overlord
druid.port=8090

druid.indexer.queue.startDelay=PT30S

druid.indexer.runner.type=remote
druid.indexer.storage.type=metadata

Broker jvm.config:

-server
-Xms12g
-Xmx12g
-XX:MaxDirectMemorySize=3072m
-Duser.timezone=UTC
-Dfile.encoding=UTF-8
-Djava.io.tmpdir=var/tmp
-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager

Broker runtime.properties:

druid.host=druid-01
druid.service=druid/broker
druid.port=8082

# HTTP server threads
druid.broker.http.numConnections=5
druid.server.http.numThreads=25

# Processing threads and buffers
druid.processing.buffer.sizeBytes=268435456
druid.processing.numMergeBuffers=2
druid.processing.numThreads=3
druid.processing.tmpDir=var/druid/processing

# Query cache disabled -- push down caching and merging instead
druid.broker.cache.useCache=false
druid.broker.cache.populateCache=false

# SQL
druid.sql.enable=true
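
A quick check on the Broker's direct memory, using the sizing rule Druid enforces: -XX:MaxDirectMemorySize must cover druid.processing.buffer.sizeBytes × (numThreads + numMergeBuffers + 1). Here that is 256 MB × (3 + 2 + 1) = 1.5 GB, comfortably within the 3072 MB set in jvm.config.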

Historical jvm.config (the hot-tier Historical co-located on the Master machines):

-server
-Xms4g
-Xmx4g
-XX:MaxDirectMemorySize=3072m
-Duser.timezone=UTC
-Dfile.encoding=UTF-8
-Djava.io.tmpdir=var/tmp
-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager

Historical runtime.properties:

druid.host=druid-01
druid.service=druid/historical
druid.port=8083

# HTTP server threads
druid.server.http.numThreads=20

# Processing threads and buffers
druid.processing.buffer.sizeBytes=268435456
druid.processing.numMergeBuffers=2
druid.processing.numThreads=3
druid.processing.tmpDir=var/druid/processing

# Segment storage
druid.segmentCache.locations=[{"path":"var/druid/segment-cache","maxSize":130000000000}]
druid.server.maxSize=130000000000
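
# Note (hedged guidance): druid.server.maxSize should match the total of the
# segmentCache maxSize values, i.e. 130 GB here.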

# Query cache
druid.historical.cache.useCache=true
druid.historical.cache.populateCache=true
druid.cache.type=caffeine
druid.cache.sizeInBytes=2000000000

Data machine configuration

There are three Data machines, acting as data nodes that handle data processing in a shared-nothing architecture.

Historical jvm.config:

-server
-Xms4g
-Xmx4g
-XX:MaxDirectMemorySize=3072m
-Duser.timezone=UTC
-Dfile.encoding=UTF-8
-Djava.io.tmpdir=var/tmp
-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager

Historical runtime.properties:

druid.host=druid-02
druid.service=druid/historical
druid.port=8083

# HTTP server threads
druid.server.http.numThreads=20

# Processing threads and buffers
druid.processing.buffer.sizeBytes=268435456
druid.processing.numMergeBuffers=2
druid.processing.numThreads=3
druid.processing.tmpDir=var/druid/processing

# Segment storage
druid.segmentCache.locations=[{"path":"var/druid/segment-cache","maxSize":130000000000}]
druid.server.maxSize=130000000000

# Query cache
druid.historical.cache.useCache=true
druid.historical.cache.populateCache=true
druid.cache.type=caffeine
druid.cache.sizeInBytes=2000000000

MiddleManager jvm.config:

-server
-Xms64m
-Xmx64m
-Duser.timezone=UTC
-Dfile.encoding=UTF-8
-Djava.io.tmpdir=var/tmp
-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager

MiddleManager runtime.properties:

druid.host=druid-02
druid.service=druid/middlemanager
druid.port=8091

# Number of tasks per middleManager
druid.worker.capacity=3
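
# Note (hedged): with capacity 3 and -Xmx2g per peon (see javaOpts below),
# plan for roughly 3 x 2 GB of task heap on top of this 64 MB middleManager heap.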

# Task launch parameters
druid.indexer.runner.javaOpts=-server -Xmx2g -Duser.timezone=UTC -Dfile.encoding=UTF-8 -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
druid.indexer.task.baseTaskDir=var/druid/task
druid.indexer.task.restoreTasksOnRestart=true

# HTTP server threads
druid.server.http.numThreads=40

# Processing threads and buffers
druid.processing.buffer.sizeBytes=268435456
druid.processing.numMergeBuffers=2
druid.processing.numThreads=2
druid.processing.tmpDir=var/druid/processing

# Hadoop indexing
druid.indexer.task.hadoopWorkingPath=var/druid/hadoop-tmp
druid.indexer.task.defaultHadoopCoordinates=["org.apache.hadoop:hadoop-client:2.3.0"]
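
The Hadoop client pinned above defaults to 2.3.0. If the production cluster runs a different version, each batch job can override it in its task spec; a hedged fragment of an index_hadoop task (2.7.3 is only an example version):

{
  "type": "index_hadoop",
  "hadoopDependencyCoordinates": ["org.apache.hadoop:hadoop-client:2.7.3"],
  "spec": { ... }
}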
