187. Spark 2.0 Dataset Development in Detail: Actions

2019-02-11  ZFH__ZJ

Common action operations

collect

collect: fetches all the data of a distributed dataset (such as a Dataset), which is stored across the cluster, back to the driver.
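A minimal sketch, assuming the employeeDF DataFrame that the Code section below builds; keep in mind that collect materializes the whole Dataset in driver memory, so it can overwhelm the driver on large data:

import org.apache.spark.sql.Row

// collect() pulls the rows of every partition back to the driver as one local array
val rows: Array[Row] = employeeDF.collect()
// from here on this is ordinary local Scala running on the driver
rows.foreach(println)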

count

count: counts the number of records in the Dataset.
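A one-line sketch under the same employeeDF assumption; count is an action, so calling it triggers a job over the whole Dataset:

// count() returns the total number of records as a Long
val n: Long = employeeDF.count()
println(s"records: $n") // 7 for the employee.json below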

first

first: returns the first record in the Dataset.
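Under the same employeeDF assumption:

// first() is an alias for head(); on a DataFrame the first record is a Row
val firstRow = employeeDF.first()
println(firstRow)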

foreach

foreach: iterates over each record in the Dataset and applies an operation to it. This is different from collect: collect first pulls the data back to the driver and operates on it there, whereas foreach pushes the computation out to the cluster and executes it in a distributed fashion.
Something like foreach(println(_)) is therefore of little use when the job actually runs on a cluster: the output is printed on the distributed executors, so we never see it on the driver.
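A small sketch contrasting the two, assuming the employeeDF DataFrame built in the Code section below:

// runs on the driver: the rows are first collected into a local array,
// so the println output shows up in the driver's console
employeeDF.collect().foreach(println)

// runs on the executors: each partition is processed in place on the cluster;
// in cluster mode the println output lands in the executors' stdout logs,
// not on the driver (with master "local" you still see it, because driver
// and executor share a single JVM)
employeeDF.foreach(row => println(row))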

reduce

reduce: aggregates all the records in the Dataset, combining many records into one.
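reduce needs a Dataset whose element type it knows how to combine, so a common pattern is to map each record to a number first. A sketch that sums the salaries, assuming the same employeeDF and that import sparkSession.implicits._ is in scope (it provides the Long encoder that map needs):

// spark.read.json infers whole-number fields such as salary as Long;
// map each row to its salary, then fold the values pairwise into a single total
val totalSalary: Long = employeeDF
  .map(row => row.getAs[Long]("salary"))
  .reduce(_ + _)
println(s"total salary: $totalSalary")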

show

show: prints the first 20 rows of the Dataset by default.
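show has a couple of useful overloads beyond the default; a quick sketch under the same employeeDF assumption:

employeeDF.show()          // first 20 rows, long cell values truncated
employeeDF.show(3)         // only the first 3 rows
employeeDF.show(3, false)  // first 3 rows, without truncating cell values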

take

take: retrieves the specified number of records from the Dataset.
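Under the same employeeDF assumption:

// take(n) fetches the first n records to the driver as a local array
// (equivalent to head(n)); unlike collect, it does not load the whole Dataset
val firstThree: Array[org.apache.spark.sql.Row] = employeeDF.take(3)
firstThree.foreach(println)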

Data

employee.json

{"name": "Leo", "age": 25, "depId": 1, "gender": "male", "salary": 20000}
{"name": "Marry", "age": 30, "depId": 2, "gender": "female", "salary": 25000}
{"name": "Jack", "age": 35, "depId": 1, "gender": "male", "salary": 15000}
{"name": "Tom", "age": 42, "depId": 3, "gender": "male", "salary": 18000}
{"name": "Kattie", "age": 21, "depId": 3, "gender": "female", "salary": 21000}
{"name": "Jen", "age": 30, "depId": 2, "gender": "female", "salary": 28000}
{"name": "Jen", "age": 19, "depId": 2, "gender": "female", "salary": 8000}

Code

import org.apache.spark.sql.SparkSession

object ActionOperation {
  def main(args: Array[String]): Unit = {
    val sparkSession = SparkSession
      .builder()
      .master("local")
      .appName("ActionOperationScala")
      .getOrCreate()

    // needed for the encoder that map() below uses
    import sparkSession.implicits._

    val employeePath = this.getClass.getClassLoader.getResource("employee.json").getPath

    val employeeDF = sparkSession.read.json(employeePath)

    // collect: bring all rows back to the driver, then print them locally
    employeeDF.collect().foreach(println)
    // first: the first record of the dataset
    println(employeeDF.first())
    // count: the total number of records
    println(employeeDF.count())
    // foreach: print on the executors (visible here only because master is "local")
    employeeDF.foreach(println(_))
    // reduce: map every record to 1 and sum, i.e. count the records by hand
    println(employeeDF.map(employee => 1).reduce(_ + _))
    // show: print the first 20 rows in tabular form
    employeeDF.show()
    // take: fetch the first 3 records to the driver
    employeeDF.take(3).foreach(println)

    sparkSession.stop()
  }
}
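For reference, with the seven-record employee.json above, count() prints 7, and the map(employee => 1).reduce(_ + _) line computes the same number by hand: it maps every record to a 1 and then sums them, so it prints 7 as well.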
