Reading Hive from Spark

2018-01-11  無敵兔八哥

In Spark 2.0+, use SparkSession in place of HiveContext.

1. Add Maven dependencies

<!-- https://mvnrepository.com/artifact/mysql/mysql-connector-java -->
       <dependency>
           <groupId>mysql</groupId>
           <artifactId>mysql-connector-java</artifactId>
           <version>5.1.35</version>
       </dependency>

       <dependency>
           <groupId>org.apache.spark</groupId>
           <artifactId>spark-hive_2.11</artifactId>
           <version>2.1.1</version>
       </dependency>

       <!-- https://mvnrepository.com/artifact/org.apache.spark/spark-sql -->
       <dependency>
           <groupId>org.apache.spark</groupId>
           <artifactId>spark-sql_2.11</artifactId>
           <version>2.1.1</version>
       </dependency>


       <!-- https://mvnrepository.com/artifact/org.apache.spark/spark-core -->
       <dependency>
           <groupId>org.apache.spark</groupId>
           <artifactId>spark-core_2.11</artifactId>
           <version>2.1.1</version>
       </dependency>
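
As a side note (not part of the original post): if the job is packaged into a jar and submitted with spark-submit, the Spark artifacts are already present on the cluster, so a common variant marks them with provided scope to keep them out of the assembled jar. A minimal sketch for one of the dependencies:

       <dependency>
           <groupId>org.apache.spark</groupId>
           <artifactId>spark-hive_2.11</artifactId>
           <version>2.1.1</version>
           <!-- provided: supplied by the cluster at runtime, excluded from the jar -->
           <scope>provided</scope>
       </dependency>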

2. Create the SparkSession

package com.hualala.bi

import java.io.File

import org.apache.spark.sql.SparkSession

/**
  *
  * @author jiaquanyu 
  *
  */
object SparkSqlApp {

  def main(args: Array[String]): Unit = {

    // Location of the default warehouse directory for managed tables
    val warehouseLocation = new File("spark-warehouse").getAbsolutePath

    // enableHiveSupport() connects the session to the Hive metastore
    // configured in hive-site.xml
    val spark = SparkSession
      .builder()
      .appName("Spark SQL on Hive")
      .master("spark://192.168.4.4:7077")
      .config("spark.sql.warehouse.dir", warehouseLocation)
      .enableHiveSupport()
      .getOrCreate()

    // Sanity check: list the databases known to the metastore
    spark.sql("show databases").collect().foreach(println)

    spark.stop()
  }
}
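
Once the session is up, any table registered in the Hive metastore can be queried as a DataFrame. A minimal sketch (the database and table name bi.orders are hypothetical, not from the original post):

// Query a Hive table into a DataFrame; bi.orders is a made-up example table
val orders = spark.sql("SELECT * FROM bi.orders LIMIT 10")
orders.show()

// The same table is also reachable through the table API
spark.table("bi.orders").printSchema()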

3. Save hive-site.xml under the project's resources directory (create the directory if it does not exist).
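
For reference, a minimal hive-site.xml usually only needs to point Spark at the metastore service. A sketch, where the Thrift host and port are assumptions rather than values from the original post:

<?xml version="1.0"?>
<configuration>
    <!-- Address of the Hive metastore service; host/port here are assumptions -->
    <property>
        <name>hive.metastore.uris</name>
        <value>thrift://192.168.4.4:9083</value>
    </property>
</configuration>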

Note: for debugging, it is recommended to experiment in spark-shell, as sketched below.
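
A sketch of such a session (the master URL follows the code above):

$ spark-shell --master spark://192.168.4.4:7077
scala> spark.sql("show databases").show()

In a Spark 2.x spark-shell, the pre-built `spark` SparkSession already has Hive support when the Hive classes and hive-site.xml are on the classpath, so queries can be tried line by line.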
