
# Cluster Setup (Kafka + Hadoop + Spark + Elasticsearch)

2016-10-13  谜碌小孩

Cluster setup (the procedure for two nodes is the same as for more; Hadoop is deployed without an HA scheme)


1. vim /etc/hosts (edit on every node)

10.128.7.39          hostname1
10.128.7.84          hostname2
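
A quick sanity check that the names resolve, run from each node:

    ping -c 1 hostname1    # should answer from 10.128.7.39
    ping -c 1 hostname2    # should answer from 10.128.7.84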

2. Install the JDK and configure the environment variables for each component
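
Step 2 is terse, so here is a minimal sketch of the environment variables, assuming everything is unpacked under /data/mdware (the prefix the Elasticsearch section below uses) and an illustrative JDK path; adjust both to your layout:

    # append to /etc/profile on every node, then run: source /etc/profile
    export JAVA_HOME=/usr/java/jdk1.8.0_101          # illustrative path
    export HADOOP_HOME=/data/mdware/hadoop
    export SPARK_HOME=/data/mdware/spark
    export KAFKA_HOME=/data/mdware/kafka
    export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$SPARK_HOME/bin:$KAFKA_HOME/bin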

3. Configure SSH (on every node)

ssh-keygen -t rsa
ssh-copy-id -i /root/.ssh/id_rsa.pub hostname1
ssh-copy-id -i /root/.ssh/id_rsa.pub hostname2
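
After copying the keys, each node should reach the others without a password prompt:

    ssh hostname2 hostname    # should print "hostname2" without asking for a password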

## 4. Cluster configuration

### 4.1 Hadoop cluster configuration (every node)

cd $HADOOP_HOME/etc/hadoop/
- vim core-site.xml (add inside the <configuration> element)
      <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hostname1:9000</value>
        <description>NameNode URI.</description>
      </property>
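
For a minimal two-node setup you will usually also want hdfs-site.xml next to it, pinning the replication factor and the storage directories; a sketch, where the two paths are assumptions:

    <!-- vim hdfs-site.xml, also inside the <configuration> element -->
    <property>
      <name>dfs.replication</name>
      <value>2</value>                              <!-- two datanodes -->
    </property>
    <property>
      <name>dfs.namenode.name.dir</name>
      <value>/data/mdware/hadoop/dfs/name</value>   <!-- assumed path -->
    </property>
    <property>
      <name>dfs.datanode.data.dir</name>
      <value>/data/mdware/hadoop/dfs/data</value>   <!-- assumed path -->
    </property>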

- vim slaves (lists the DataNode hosts)
      hostname1
      hostname2

### 4.2 Kafka cluster configuration (every node)
- vim config/server.properties
      broker.id=0                 (unique per machine: 0, 1, 2, …)
      host.name=hostname          (this machine's own hostname)
      zookeeper.connect=hostname1:2181,hostname2:2181

- vim producer.properties
      metadata.broker.list=hostname1:9092,hostname2:9092
      producer.type=async

- vim consumer.properties
      zookeeper.connect=hostname1:2181,hostname2:2181

- vim zookeeper.properties
Set dataDir={kafka install dir}/zookeeper/logs/
Comment out maxClientCnxns=0
Append the following lines at the end of the file:
      tickTime=2000
      initLimit=5
      syncLimit=2
      server.1=hostname1:2888:3888
      server.2=hostname2:2888:3888
Create a myid file in the dataDir directory:
      echo 1 > myid
On the other machines set the value to 2, 3, 4, and so on.
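
Concretely, for this two-node setup the myid value must match the server.N entry for the host:

    # on hostname1 (matches server.1 in zookeeper.properties)
    mkdir -p {kafka install dir}/zookeeper/logs/
    echo 1 > {kafka install dir}/zookeeper/logs/myid
    # on hostname2 (matches server.2)
    mkdir -p {kafka install dir}/zookeeper/logs/
    echo 2 > {kafka install dir}/zookeeper/logs/myid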

### 4.3 Spark cluster configuration (every node)
    cd /<install dir>/spark/conf/
- vim spark-env.sh
      export HADOOP_CONF_DIR=/<install dir>/hadoop/etc/hadoop
      export SPARK_MASTER_PORT=7077
      export SPARK_MASTER_IP=10.128.7.39
- cp slaves.template slaves
- vim slaves
    hostname1
    hostname2

### 4.4 Elasticsearch cluster configuration (every node)
cd /data/mdware/es/config/
- vim elasticsearch.yml 
      node.name: "kone1"                (unique per node)
      network.host: 10.128.7.39         (this node's own IP)
      discovery.zen.ping.multicast.enabled: false
      discovery.zen.fd.ping_timeout: 100s
      discovery.zen.ping.timeout: 100s
      discovery.zen.ping.unicast.hosts: ["10.128.7.39:9300","10.128.7.84:9300"]   (list every node)

## 5. Start the cluster
- Initialize HDFS (on the NameNode, first run only)
      bin/hdfs namenode -format
- Start Hadoop
      sbin/hadoop-daemon.sh start namenode
      sbin/hadoop-daemons.sh start datanode
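
jps should now show a NameNode on hostname1 and a DataNode on both machines; a quick smoke test:

    bin/hdfs dfsadmin -report        # should report 2 live datanodes
    bin/hdfs dfs -mkdir /smoke-test  # illustrative path: write, then list
    bin/hdfs dfs -ls /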

- Start Spark (from the master node)
      sbin/start-master.sh
      sbin/start-slaves.sh
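
The master web UI (http://10.128.7.39:8080 by default) should list both workers as ALIVE; alternatively, attach a shell to the master:

    bin/spark-shell --master spark://10.128.7.39:7077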

- Start Kafka (on every machine: ZooKeeper first, then the broker)
      nohup bin/zookeeper-server-start.sh config/zookeeper.properties >> logs/zookeeper.log &
      nohup bin/kafka-server-start.sh config/server.properties >> logs/kafka.log &
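
Once the brokers are up, an end-to-end check; the topic name is arbitrary, and the --zookeeper form matches the ZooKeeper-based configuration above:

    # one ZooKeeper node should report Mode: leader, the other Mode: follower
    echo srvr | nc hostname1 2181
    # create a replicated topic and confirm both brokers hold partitions
    bin/kafka-topics.sh --create --zookeeper hostname1:2181,hostname2:2181 \
        --replication-factor 2 --partitions 2 --topic test
    bin/kafka-topics.sh --describe --zookeeper hostname1:2181,hostname2:2181 --topic test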

- Start Elasticsearch (every node)
      cd /data/mdware/es
      ulimit -n 655360
      export ES_HEAP_SIZE=16g          (about 1/4 of physical RAM)
      bin/elasticsearch -Des.insecure.allow.root=true -d
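
A cluster health check from either node should show both members once they have discovered each other:

    curl 'http://10.128.7.39:9200/_cluster/health?pretty'
    # expect "number_of_nodes" : 2 and, with no unassigned shards, "status" : "green"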