Elasticsearch Cluster
I. Environment preparation:
server1: 10.4.4.151
server2: 10.4.4.152
server3: 10.4.4.153
Install the JDK:
https://www.jianshu.com/p/4aedf8e134e2
II. Download and install Elasticsearch
1. Download the package:
$ cd /usr/local/services/src/
$ wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.5.4.tar.gz
2. Install
$ tar xvf elasticsearch-6.5.4.tar.gz -C ../
$ cd ../elasticsearch-6.5.4/
3. Configure
$ cat config/elasticsearch.yml
# ---------------------------------- Cluster -----------------------------------
# Use a descriptive name for your cluster:
cluster.name: es-sa
# ------------------------------------ Node ------------------------------------
# Use a descriptive name for the node:
node.name: es-sa-1
# ----------------------------------- Paths ------------------------------------
# Path to directory where to store the data (separate multiple locations by comma):
path.data: /data/elasticsearch/es-sa/data
# Path to log files:
path.logs: /data/elasticsearch/es-sa/logs
# ----------------------------------- Memory -----------------------------------
# Lock the memory on startup:
bootstrap.memory_lock: true
# ---------------------------------- Network -----------------------------------
# Set the bind address to a specific IP (IPv4 or IPv6):
network.host: 10.4.4.151
# Set a custom port for HTTP:
http.port: 9200
# --------------------------------- Discovery ----------------------------------
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
discovery.zen.ping.unicast.hosts: ["10.4.4.151", "10.4.4.152", "10.4.4.153"]
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
discovery.zen.minimum_master_nodes: 2
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
thread_pool.bulk.queue_size: 3000
indices.query.bool.max_clause_count: 10240
action.search.shard_count.limit: 3000
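The file above is server1's; server2 and server3 share all cluster-wide settings and differ only in node.name and network.host. A sketch for server2 (the node names es-sa-2 and es-sa-3 are assumptions following the es-sa-1 pattern — the source only shows server1's file):

```yaml
# config/elasticsearch.yml on server2 -- only the per-node fields change
cluster.name: es-sa
node.name: es-sa-2            # assumed naming scheme, following es-sa-1
network.host: 10.4.4.152
# everything else (paths, memory lock, http.port, discovery hosts,
# minimum_master_nodes, thread pool settings) stays identical to server1
```

Note that discovery.zen.minimum_master_nodes: 2 follows the quorum formula in the comment: 3 master-eligible nodes / 2 + 1 = 2. Also make sure the path.data and path.logs directories exist and are owned by the user that will run Elasticsearch.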
4. Systemd service
$ cat /lib/systemd/system/elasticsearch-sa.service
[Unit]
Description=Elasticsearch
Documentation=http://www.elastic.co
Wants=network-online.target
After=network-online.target
[Service]
LimitMEMLOCK=infinity
RuntimeDirectory=elasticsearch
Environment=JAVA_HOME=/usr/local/services/jdk1.8.0_91
Environment=ES_HOME=/usr/local/services/elasticsearch-6.5.4
Environment=ES_PATH_CONF=/usr/local/services/elasticsearch-6.5.4/config
Environment=PID_DIR=/usr/local/services/elasticsearch-6.5.4/logs
#EnvironmentFile=-/etc/sysconfig/elasticsearch
WorkingDirectory=/usr/local/services/elasticsearch-6.5.4
User=user_00
Group=users
ExecStart=/usr/local/services/elasticsearch-6.5.4/bin/elasticsearch -p ${PID_DIR}/elasticsearch-sa.pid --quiet
# StandardOutput is configured to redirect to journalctl since
# some error messages may be logged in standard output before
# elasticsearch logging system is initialized. Elasticsearch
# stores its logs in /var/log/elasticsearch and does not use
# journalctl by default. If you also want to enable journalctl
# logging, you can simply remove the "quiet" option from ExecStart.
StandardOutput=journal
StandardError=inherit
# Specifies the maximum file descriptor number that can be opened by this process
LimitNOFILE=102400
# Specifies the maximum number of processes
LimitNPROC=102400
# Specifies the maximum size of virtual memory
LimitAS=infinity
# Specifies the maximum file size
LimitFSIZE=infinity
# Disable timeout logic and wait until process is stopped
TimeoutStopSec=0
# SIGTERM signal is used to stop the Java process
KillSignal=SIGTERM
# Send the signal only to the JVM rather than its control group
KillMode=process
# Java process is never killed
SendSIGKILL=no
# When a JVM receives a SIGTERM signal it exits with code 143
SuccessExitStatus=143
[Install]
WantedBy=multi-user.target
# Built for packages-6.5.4 (packages)
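With the unit file in place, the service can be enabled and started on each node, then the cluster verified over HTTP. A sketch — the JSON below is a hand-written sample of a healthy response, not output captured from this cluster; the pipeline just pulls out the status field:

```shell
# On each node (requires root):
#   sudo systemctl daemon-reload
#   sudo systemctl enable elasticsearch-sa
#   sudo systemctl start elasticsearch-sa
# Then check cluster health from any node, e.g.:
#   curl -s 'http://10.4.4.151:9200/_cluster/health?pretty'
# Sample response for a healthy 3-node cluster (illustrative, not captured):
response='{"cluster_name":"es-sa","status":"green","number_of_nodes":3,"number_of_data_nodes":3}'
# Extract the status field; "green" means all primary and replica shards are allocated
status=$(echo "$response" | sed -n 's/.*"status":"\([a-z]*\)".*/\1/p')
echo "cluster status: $status"   # prints: cluster status: green
```

A "yellow" status on a fresh cluster usually means replicas are not yet allocated; "red" means at least one primary shard is missing.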
Startup errors:
ERROR: [3] bootstrap checks failed
[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]
[2]: memory locking requested for elasticsearch process but memory is not locked
[3]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
Problem 1 analysis: file descriptors
https://www.elastic.co/guide/en/elasticsearch/reference/current/file-descriptors.html
Elasticsearch uses a large number of file descriptors (file handles). Running out of file descriptors can be disastrous and will most likely lead to data loss. Make sure the limit on open file descriptors for the user running Elasticsearch is raised to 65,536 or higher.
Fix: add the following lines to /etc/security/limits.conf:
* soft nproc 102400
* hard nproc 102400
* soft nofile 102400
* hard nofile 102400
* soft core unlimited
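limits.conf only applies to new login sessions, so the check below must run from a fresh login of the service user; the values in the comments are what the configuration above should produce, not system defaults:

```shell
# Run as user_00 after logging in again -- limits.conf does not
# affect shells that were already open when it was edited.
ulimit -Sn   # soft open-files limit; 102400 after the change above
ulimit -Hn   # hard open-files limit; 102400 after the change above
ulimit -u    # max user processes (nproc)
```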
Problem 2 analysis: memory locking
https://www.elastic.co/guide/en/elasticsearch/reference/current/_memory_lock_check.html
https://www.elastic.co/guide/en/elasticsearch/reference/current/setting-system-settings.html
Fix: add the following lines to /etc/security/limits.conf:
user_00 soft memlock unlimited
user_00 hard memlock unlimited
When Elasticsearch runs as a systemd service, limits.conf does not apply to it; instead add the following parameter under the [Service] section of the unit file, then reload systemd:
LimitMEMLOCK=infinity
$ sudo systemctl daemon-reload
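To confirm the memlock limit actually changed (a sketch; the limits.conf route needs a fresh login, and the systemd LimitMEMLOCK= route needs daemon-reload plus a service restart):

```shell
# As user_00, from a fresh login session:
ulimit -l    # max locked memory; "unlimited" after the change above
```

Once a node is running, `curl 'http://10.4.4.151:9200/_nodes?filter_path=**.mlockall&pretty'` should report mlockall: true for every node; false means the memory lock was still refused.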
Problem 3 analysis: max virtual memory areas
Elasticsearch uses an mmapfs directory by default to store its indices. The default operating-system limit on mmap counts is likely to be too low, which may result in out-of-memory exceptions.
https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html
Fix: add the following line to /etc/sysctl.conf, then run sysctl -p to apply it:
vm.max_map_count=655360
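sysctl -p only reloads /etc/sysctl.conf; the live value can also be read straight from /proc (a sketch, assuming a Linux host):

```shell
# Apply immediately without a reboot (equivalent to sysctl -p after editing the file):
#   sudo sysctl -w vm.max_map_count=655360
# Read back the running value; the bootstrap check requires at least 262144:
cat /proc/sys/vm/max_map_count
```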