Hive from Beginner to Expert 2: Setting Up Hive in Local MySQL Mode

2020-04-02  金字塔下的小蜗牛

Hive has three installation modes: 1. single-user mode (also called local or embedded mode); 2. local MySQL mode; 3. remote MySQL mode. They differ as follows:

1. Single-user (embedded) mode stores the metastore in Hive's built-in Derby database, can serve only one user at a time, and is meant for testing Hive itself.
2. Local MySQL mode stores the metastore in a MySQL instance on the same machine, can serve multiple users at the same time, and is suitable for developing and testing Hive programs.
3. Remote MySQL mode stores the metastore in a remote MySQL instance, can also serve multiple users at the same time, and is intended for production environments.

This article walks through setting up Hive in local MySQL mode.

Installation packages used in this article:

mysql-connector-java-5.1.46.tar.gz (extraction code: mdl7)
apache-hive-3.1.0-bin.tar.gz (extraction code: 993d)

1. Prepare the Linux Environment

One host: turn off the firewall, set the IP address, hostname, and hosts entries, and configure SSH key authentication.

bigdata 192.168.126.110

Both MySQL and Hive will be installed on bigdata.
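A minimal sketch of these preparation steps, assuming CentOS 7 with systemd (static IP configuration is omitted since it depends on your network setup):

# stop and disable the firewall
[root@bigdata ~]# systemctl stop firewalld
[root@bigdata ~]# systemctl disable firewalld
# set the hostname and map it in /etc/hosts
[root@bigdata ~]# hostnamectl set-hostname bigdata
[root@bigdata ~]# echo "192.168.126.110 bigdata" >> /etc/hosts
# passwordless SSH to the host itself
[root@bigdata ~]# ssh-keygen -t rsa
[root@bigdata ~]# ssh-copy-id root@bigdata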

2. Install MySQL

Install the MySQL client:

[root@bigdata ~]# yum -y install mysql

Install the MySQL server:

[root@bigdata ~]# yum -y install mysql-server

Note: if yum reports "No package mysql-server available.", refer to the article "MySQL Common Problems" for a fix.

Start the MySQL server:

[root@bigdata ~]# service mysqld start
Redirecting to /bin/systemctl start mysqld.service

As the redirect message suggests, you can also use:

systemctl start mysqld.service
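Optionally, check the service status and enable it to start on boot (an extra step beyond the original walkthrough):

[root@bigdata ~]# systemctl status mysqld.service
[root@bigdata ~]# systemctl enable mysqld.service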

Set the MySQL root (administrator) password:

[root@bigdata ~]# mysqladmin -uroot password 123456
Warning: Using a password on the command line interface can be insecure.

Test that the login works:

[root@bigdata ~]# mysql -uroot -p123456
Warning: Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.6.39 MySQL Community Server (GPL)

Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>

Grant privileges to the root user:

mysql> grant all privileges on *.* to root@'%' identified by '123456' with grant option;
Query OK, 0 rows affected (0.06 sec)
mysql> grant all privileges on *.* to root@'bigdata' identified by '123456' with grant option;
Query OK, 0 rows affected (0.06 sec)
mysql> grant all privileges on *.* to root@'localhost' identified by '123456' with grant option;
Query OK, 0 rows affected (0.06 sec)
mysql> flush privileges;
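To confirm the grants took effect, you can list the accounts in the user table (a quick verification, not part of the original steps):

mysql> select user, host from mysql.user;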

3. Upload the Hive Installation Package

Upload the Hive tarball to the /root/tools/ directory:

[root@bigdata tools]# pwd
/root/tools
[root@bigdata tools]# ls
apache-hive-3.1.0-bin.tar.gz

4. Extract the Hive Installation Package

Extract the Hive tarball into the installation directory /root/trainings/:

[root@bigdata tools]# tar -zxvf apache-hive-3.1.0-bin.tar.gz -C /root/trainings/

5. Modify the Hive Configuration Files

[root@bigdata conf]# pwd
/root/trainings/apache-hive-3.1.0-bin/conf
[root@bigdata conf]# echo $HADOOP_HOME
/root/trainings/hadoop-2.7.3

Edit the hive-env.sh configuration file:

[root@bigdata conf]# cp hive-env.sh.template hive-env.sh
[root@bigdata conf]# vim hive-env.sh
HADOOP_HOME=/root/trainings/hadoop-2.7.3
export HIVE_CONF_DIR=/root/trainings/apache-hive-3.1.0-bin/conf
export HIVE_AUX_JARS_PATH=/root/trainings/apache-hive-3.1.0-bin/lib

Create the following directories for later use.

Create the directory on HDFS:

[root@bigdata ~]# hdfs dfs -mkdir -p /hive/warehouse
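Optionally, make the warehouse directory group-writable and confirm it exists (an extra check; not strictly required when Hive runs as root as in this walkthrough):

[root@bigdata ~]# hdfs dfs -chmod g+w /hive/warehouse
[root@bigdata ~]# hdfs dfs -ls /hive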

Create the local directories:

[root@bigdata ~]# cd /root/trainings/apache-hive-3.1.0-bin/
[root@bigdata apache-hive-3.1.0-bin]# mkdir logs
[root@bigdata apache-hive-3.1.0-bin]# mkdir tmpdir

Edit the hive-site.xml configuration file:

[root@bigdata conf]# cp hive-default.xml.template hive-site.xml
[root@bigdata conf]# vim hive-site.xml

Modify the following properties (the directory values are the ones created above):

<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/hive/warehouse</value>
  <description>location of default database for the warehouse</description>
</property>
<property>
  <name>hive.querylog.location</name>
  <value>/root/trainings/apache-hive-3.1.0-bin/logs</value>
  <description>Location of Hive run time structured log file</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://bigdata:3306/hive?createDatabaseIfNotExist=true</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>root</value>
  <description>Username to use against metastore database</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>123456</value>
  <description>password to use against metastore database</description>
</property>
<property>
  <name>hive.metastore.schema.verification</name>
  <value>false</value>
</property>
<property>
  <name>datanucleus.schema.autoCreateAll</name>
  <value>true</value>
</property>

Add two new properties:

<property>
  <name>system:java.io.tmpdir</name>
  <value>/root/trainings/apache-hive-3.1.0-bin/tmpdir</value>
  <description>temporary directory</description>
</property>
<property>
  <name>system:user.name</name>
  <value>root</value>
  <description>user name</description>
</property>
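These two properties are needed because the copied hive-site.xml references ${system:java.io.tmpdir} and ${system:user.name} as placeholders in several path values (for example the local scratch directory); defining them here lets Hive resolve those paths at startup. You can see where they are referenced with a quick grep (output omitted):

[root@bigdata conf]# grep -n 'system:java.io.tmpdir' hive-site.xml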

6. Install the MySQL JDBC Driver

Upload the MySQL connector to the /root/tools directory:

[root@bigdata tools]# ls
mysql-connector-java-5.1.46.tar.gz

Extract the MySQL connector:

[root@bigdata tools]# tar -zxvf mysql-connector-java-5.1.46.tar.gz

Copy mysql-connector-java-5.1.46-bin.jar into Hive's lib directory:

[root@bigdata tools]# cp mysql-connector-java-5.1.46/mysql-connector-java-5.1.46-bin.jar /root/trainings/apache-hive-3.1.0-bin/lib/
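To confirm the jar is in place (a quick check, not part of the original steps):

[root@bigdata tools]# ls /root/trainings/apache-hive-3.1.0-bin/lib/ | grep mysql-connector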

7. Set Environment Variables

[root@bigdata tools]# vim /root/.bash_profile

Append the following lines:

HIVE_HOME=/root/trainings/apache-hive-3.1.0-bin/
export HIVE_HOME
PATH=$HIVE_HOME/bin:$PATH
export PATH

Apply the environment variables:

[root@bigdata tools]# source /root/.bash_profile
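To confirm the new PATH is active (an optional check):

[root@bigdata tools]# which hive
[root@bigdata tools]# hive --version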

8. Initialize the Metastore Database

This step is optional: the first time Hive starts, it initializes the metastore automatically based on the configuration (datanucleus.schema.autoCreateAll is set to true above). If Hive fails to start because the metastore was not initialized successfully, you can run the following command to initialize it manually.

[root@bigdata ~]# schematool -dbType mysql -initSchema
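After initialization, you can confirm that the metastore tables were created in MySQL (a verification step; the database name hive comes from the JDBC URL configured above):

[root@bigdata ~]# mysql -uroot -p123456 -e "use hive; show tables;"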

9. Start Hive

Make sure the Hadoop cluster has started successfully:

[root@bigdata ~]# jps
2209 NameNode
2535 SecondaryNameNode
2808 NodeManager
4345 Jps
2347 DataNode
2699 ResourceManager

Make sure MySQL is running and that you can connect:

[root@bigdata ~]# mysql -uroot -p123456
Warning: Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 66
Server version: 5.6.40 MySQL Community Server (GPL)

Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>

Start Hive:

[root@bigdata ~]# hive
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/root/trainings/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/root/trainings/apache-hive-3.1.0-bin/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/root/trainings/apache-hive-3.1.0-bin/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/root/trainings/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Hive Session ID = fe24628b-0047-4440-b448-058ef1be8fb2

Logging initialized using configuration in jar:file:/root/trainings/apache-hive-3.1.0-bin/lib/hive-common-3.1.0.jar!/hive-log4j2.properties Async: true
Hive Session ID = 21beccee-7bc1-4ce7-93a0-98c376b8e0af
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
hive>

Note: if errors occur during startup, refer to the following article for solutions:

www.linux-man.com/archives/464

Because this is a multi-user mode, multiple users can be logged in to Hive at the same time.
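As a quick smoke test, you can create a database in Hive and check that its metadata shows up in MySQL (the database name demo is just an example, and the DBS table assumes the standard Hive metastore schema):

hive> create database demo;
hive> show databases;

Then, from a shell:

[root@bigdata ~]# mysql -uroot -p123456 -e "select NAME, DB_LOCATION_URI from hive.DBS;"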

At this point, the Hive local MySQL mode development environment is fully set up. Have fun!
