Setting Up a Hadoop-HA + ZooKeeper + YARN + Hive Environment

2019-01-22  上杉丶零

Prerequisite: a working Hadoop-HA + ZooKeeper + YARN environment.

Cluster roles across node01–node04:

- NameNode ×3
- DataNode ×3
- JournalNode ×3
- ZooKeeper ×3
- ZooKeeperFailoverController ×3
- ResourceManager ×2
- NodeManager ×3
- MySQL Server on node01, MetaStore Server on node03, Hive CLI on node04
  1. Configure the MySQL service on node01

Install MySQL:
yum install mysql-server -y
Start the MySQL service:
service mysqld start
Log in to MySQL:
mysql -u root -p
Then adjust the privileges (run inside the mysql shell; note that the DELETE removes every account whose host is not '%', including root@localhost):
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY '123123' WITH GRANT OPTION;
DELETE FROM mysql.user WHERE host != '%';
FLUSH PRIVILEGES;
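The privilege statements above can also be collected into a script and fed to the client in one non-interactive call (a sketch; the file name init_hive_db.sql is arbitrary, and the actual mysql invocation is shown commented out because it needs the running server):

```shell
# Collect the privilege statements into a script file
# (mysql.user is the system table holding the per-host account entries):
cat > init_hive_db.sql <<'EOF'
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY '123123' WITH GRANT OPTION;
DELETE FROM mysql.user WHERE host != '%';
FLUSH PRIVILEGES;
EOF

# Run it in one shot (prompts for the password):
# mysql -u root -p < init_hive_db.sql
```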

  2. Install Hive on node03 and node04

tar -zxvf apache-hive-2.3.4-bin.tar.gz -C /opt/hive/

  3. Configure Hive on node03 and node04

On node03, edit /opt/hive/apache-hive-2.3.4-bin/conf/hive-site.xml:
cp /opt/hive/apache-hive-2.3.4-bin/conf/hive-default.xml.template /opt/hive/apache-hive-2.3.4-bin/conf/hive-site.xml
vim /opt/hive/apache-hive-2.3.4-bin/conf/hive-site.xml
Set the following properties:

<configuration>
  <property>  
    <name>hive.metastore.warehouse.dir</name>  
    <value>/hive</value>  
  </property>  
  <property>  
    <name>javax.jdo.option.ConnectionURL</name>  
    <value>jdbc:mysql://node01:3306/hive?createDatabaseIfNotExist=true</value>  
  </property>  
  <property>  
    <name>javax.jdo.option.ConnectionDriverName</name>  
    <value>com.mysql.jdbc.Driver</value>  
  </property>     
  <property>  
    <name>javax.jdo.option.ConnectionUserName</name>  
    <value>root</value>  
  </property>  
  <property>  
    <name>javax.jdo.option.ConnectionPassword</name>  
    <value>123123</value>  
  </property>
</configuration>
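Because hive-site.xml is copied from hive-default.xml.template, it contains every default property. An alternative that avoids editing a huge template copy is a minimal file holding only the five properties above; Hive treats anything absent from hive-site.xml as the built-in default. A sketch (it writes to ./hive-site.xml here rather than into the conf directory):

```shell
# Generate a minimal hive-site.xml containing only the metastore settings
# from the text (warehouse dir, JDBC URL/driver/user/password):
cat > hive-site.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/hive</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://node01:3306/hive?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>123123</value>
  </property>
</configuration>
EOF
```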

On node04, edit /opt/hive/apache-hive-2.3.4-bin/conf/hive-site.xml:
cp /opt/hive/apache-hive-2.3.4-bin/conf/hive-default.xml.template /opt/hive/apache-hive-2.3.4-bin/conf/hive-site.xml
vim /opt/hive/apache-hive-2.3.4-bin/conf/hive-site.xml
Set the following properties:

<configuration>
  <property>  
    <name>hive.metastore.warehouse.dir</name>  
    <value>/hive</value>  
  </property>  
  <property>  
    <name>hive.metastore.uris</name>  
    <value>thrift://node03:9083</value>  
  </property> 
</configuration>
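The same shortcut works on the client side: node04 needs only the warehouse directory and the metastore URI, so a minimal file can be generated instead of editing the full template copy (a sketch; writes to ./hive-site.xml here):

```shell
# Generate a minimal client-side hive-site.xml pointing at the
# metastore service on node03 (thrift://node03:9083, as in the text):
cat > hive-site.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/hive</value>
  </property>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://node03:9083</value>
  </property>
</configuration>
EOF
```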
  4. Add the MySQL JDBC driver on node03:

mv mysql-connector-java-5.1.32-bin.jar /opt/hive/apache-hive-2.3.4-bin/lib/

  5. Configure environment variables on node03 and node04

On node03 and node04, edit /etc/profile:
vim /etc/profile
Add:

export HIVE_HOME=/opt/hive/apache-hive-2.3.4-bin
export PATH=$PATH:$HIVE_HOME/bin

On node03 and node04, run:
. /etc/profile
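To make the profile edit repeatable (for example from a provisioning script run on both nodes), the two lines can be appended only when they are not already present. A sketch; it writes to a local file named profile.local here instead of /etc/profile:

```shell
# Append the Hive environment variables idempotently:
PROFILE=profile.local
touch "$PROFILE"
grep -q 'HIVE_HOME' "$PROFILE" || cat >> "$PROFILE" <<'EOF'
export HIVE_HOME=/opt/hive/apache-hive-2.3.4-bin
export PATH=$PATH:$HIVE_HOME/bin
EOF
# Running the snippet twice adds the block only once.
```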

  6. Initialize the metastore database

On node03, run:
schematool -dbType mysql -initSchema

  7. Start the Hive metastore service

On node03, run:
hive --service metastore
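`hive --service metastore` stays in the foreground and occupies the terminal. A common pattern is to background it with nohup, keep its log, and record the PID for later checks. A self-contained sketch, where `sleep 300` stands in for the real service command:

```shell
# Background a long-running service, capturing its log and PID.
# In real use, replace "sleep 300" with: hive --service metastore
nohup sleep 300 > metastore.log 2>&1 &
echo $! > metastore.pid

# Later: check it is alive, or stop it, via the recorded PID.
kill -0 "$(cat metastore.pid)" && echo "metastore is running"
kill "$(cat metastore.pid)"        # stop the service
```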

  8. Start the Hive client

On node04, run:
hive

  9. Configure Hadoop on node01 through node04

On node01, node02, node03, and node04, edit /opt/hadoop/hadoop-3.1.1/etc/hadoop/core-site.xml:
vim /opt/hadoop/hadoop-3.1.1/etc/hadoop/core-site.xml
Add:

<property>
  <name>hadoop.proxyuser.root.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.root.hosts</name>
  <value>*</value>
</property>
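The two wildcards let the root proxy user impersonate other users from any host and on behalf of any group, which is convenient for a test cluster but broad. A hedged alternative is to narrow the host list to the machines that actually run Hive services (keep the wildcard form if unsure):

```xml
<!-- Only the metastore/HiveServer2 hosts may impersonate: -->
<property>
  <name>hadoop.proxyuser.root.hosts</name>
  <value>node03,node04</value>
</property>
<property>
  <name>hadoop.proxyuser.root.groups</name>
  <value>*</value>
</property>
```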
  10. Refresh the Hadoop configuration

No full restart is needed; refresh the proxy-user settings on each NameNode (the command below targets node01, so repeat it with node02 and node03 in the URL):
hdfs dfsadmin -fs hdfs://node01:8020 -refreshSuperUserGroupsConfiguration

  11. Start HiveServer2

On node03, run:
hiveserver2

  12. Start the Beeline client

On node04, run:
beeline
!connect jdbc:hive2://node03:10000 root 1
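The same connection can be made in one shot instead of via the interactive `!connect` prompt. A sketch that assembles the JDBC URL from its parts (host and port as used above) and prints the non-interactive Beeline command:

```shell
# Build the HiveServer2 JDBC URL used by Beeline
# (node03:10000 is HiveServer2's default binary-mode port):
HS2_HOST=node03
HS2_PORT=10000
JDBC_URL="jdbc:hive2://${HS2_HOST}:${HS2_PORT}"

# -u takes the URL, -n the user name; run this on node04:
echo "beeline -u ${JDBC_URL} -n root"
```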

  13. Check the running processes

On node01, node02, node03, and node04, run:
jps
