Hadoop Primer (3): Hive with Derby

master 192.168.179.129
worker1 192.168.179.130
worker2 192.168.179.131
OS: Ubuntu 18.04.2
All of the steps below assume the cluster is already running via start-all.sh; see Hadoop Primer (2): HDFS Cluster.

tar -xzvf apache-hive-2.3.5-bin.tar.gz
sudo vim /etc/profile

export JAVA_HOME=/usr/java/jdk1.8.0_221
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
export HADOOP_HOME=/home/njupt4145438/Downloads/hadoop-3.1.2
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HIVE_HOME=/home/njupt4145438/Downloads/apache-hive-2.3.5-bin
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$HIVE_HOME/bin:$HADOOP_HOME/bin:$PATH
source /etc/profile
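
A quick sanity check that the new variables took effect (hive --version is available in Hive 2.x):

echo $HIVE_HOME
hive --version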

1. Edit the configuration files under conf

cd apache-hive-2.3.5-bin/conf

hive-env.sh

As with HDFS, set up the environment first: HDFS is built on Java, and Hive is built on HDFS.

cp hive-env.sh.template hive-env.sh
sudo vim hive-env.sh
(screenshot: hive-env.sh settings)
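
The screenshot itself is not recoverable, but a minimal hive-env.sh, assuming the same paths as /etc/profile above, would look like:

# Point Hive at the Hadoop installation and at its own conf directory
HADOOP_HOME=/home/njupt4145438/Downloads/hadoop-3.1.2
export HIVE_CONF_DIR=/home/njupt4145438/Downloads/apache-hive-2.3.5-bin/conf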

hive-site.xml

# cp hive-default.xml.template hive-site.xml   (skipped; see note below)
sudo vim hive-site.xml

This setup uses Derby. The full hive-default.xml.template kept producing errors, so I simply deleted everything in it and kept only the following:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:derby:;databaseName=metastore_db;create=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>org.apache.derby.jdbc.EmbeddedDriver</value>
  </property>
  <property>
    <name>hive.metastore.local</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/home/njupt4145438/Downloads/apache-hive-2.3.5-bin/warehouse</value>
  </property>
  <property>
    <name>datanucleus.schema.autoCreateAll</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.server2.authentication</name>
    <value>NONE</value>
  </property>
</configuration>
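
Hive resolves hive.metastore.warehouse.dir against the default filesystem (HDFS here), so it helps to create the warehouse and /tmp directories up front with group write permission. A sketch, mirroring the path configured above:

hdfs dfs -mkdir -p /tmp
hdfs dfs -mkdir -p /home/njupt4145438/Downloads/apache-hive-2.3.5-bin/warehouse
hdfs dfs -chmod g+w /tmp
hdfs dfs -chmod g+w /home/njupt4145438/Downloads/apache-hive-2.3.5-bin/warehouse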

2. HDFS proxy-user permissions: add the following to hadoop-3.1.2/etc/hadoop/core-site.xml

<property>
  <name>hadoop.proxyuser.root.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.root.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.<username>.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.<username>.groups</name>
  <value>*</value>
</property>
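
After restarting HDFS you can confirm the values were picked up; a sketch, assuming <username> was replaced with this series' cluster user njupt4145438:

hdfs getconf -confKey hadoop.proxyuser.njupt4145438.hosts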

Restart HDFS:

sbin/stop-all.sh
sbin/start-all.sh

3. Initialize the metastore

Note: schematool may leave behind a broken metastore_db directory; move it out of the way so that bin/hive can auto-create a clean schema (datanucleus.schema.autoCreateAll=true handles this).

cd apache-hive-2.3.5-bin
bin/schematool -dbType derby -initSchema
mv metastore_db metastore_db.tmp
bin/hive
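
As a quick smoke test, Hive statements can also be run non-interactively (test_tbl is just a made-up name):

bin/hive -e "show databases;"
bin/hive -e "create table test_tbl (id int); show tables;"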

4. Start the service

bin/hiveserver2
This will block in the foreground; you can run it as a daemon instead.
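One way to daemonize it (a minimal sketch; the log path is arbitrary):

nohup bin/hiveserver2 > /tmp/hiveserver2.log 2>&1 &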
Open another SSH session and use beeline to connect to the database.

First, check that the default port 10000 is listening:

netstat -anp |grep 10000
ps -ef | grep hive
bin/beeline
Then, inside beeline, connect:
!connect jdbc:hive2://192.168.179.129:10000
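
beeline prompts for a username and password; with hive.server2.authentication set to NONE, the HDFS user and an empty password should work. The same connection can also be made in one step:

bin/beeline -u jdbc:hive2://192.168.179.129:10000 -n njupt4145438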

5. Using JDBC

pom.xml

<dependency>
    <groupId>org.apache.hive</groupId>
    <artifactId>hive-jdbc</artifactId>
    <version>2.3.5</version>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>3.1.2</version>
</dependency>
import java.sql.*;

public class HiveTest {

    // JDBC driver class for HiveServer2
    private static String driverName = "org.apache.hive.jdbc.HiveDriver";

    public static void main(String[] args) throws SQLException {
        try {
            Class.forName(driverName);
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
            System.exit(1);
        }
        // Connect to HiveServer2 on the master node (user/password as configured)
        Connection con = DriverManager.getConnection(
                "jdbc:hive2://192.168.179.129:10000/", "njupt4145438", "123456");
        Statement stmt = con.createStatement();
        String sql = "show databases";
        ResultSet res = stmt.executeQuery(sql);
        // Print every database name, not just the first two rows
        while (res.next()) {
            System.out.println(res.getString(1));
        }
        res.close();
        stmt.close();
        con.close();
    }
}
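
To compile and run the class through Maven (a sketch; it assumes HiveTest sits in the default package and the exec-maven-plugin is resolvable from Maven Central):

mvn -q compile exec:java -Dexec.mainClass=HiveTest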