Submitting a Job to the Cluster

2018-06-24  阿发贝塔伽马

This follows the Hadoop 2.7.4 + Spark 2.2.0 distributed cluster set up on DiDi Cloud in the earlier walkthrough.
Here we build a Scala + Spark application with IDEA and sbt that counts word frequencies in an English text.
The code is simple:

import org.apache.spark.{SparkConf, SparkContext}

object WordCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("wordcount")
    val sc = new SparkContext(conf)
    // Read the input file passed as the first argument
    val input = sc.textFile(args(0))
    // flatMap splits each line into words on whitespace and punctuation;
    // the '-' is placed last in the character class so it is taken literally
    val lines = input.flatMap(x => x.split("[ ,.'?/\\|><:;\"+_=()*&^%$#@!`~-]+"))
    val count = lines.map(word => (word, 1)).reduceByKey(_ + _)
    // Save the (word, count) pairs to the output directory (second argument);
    // saveAsTextFile returns Unit, so there is nothing to assign
    count.saveAsTextFile(args(1))
    sc.stop()
  }
}
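Since the project is built with sbt, a minimal build definition is needed. The following build.sbt is a sketch; the project name and the Scala version are assumptions you should match to your environment (Spark 2.2.0 is built against Scala 2.11):

```scala
// build.sbt -- minimal sketch; name and versions are assumptions
name := "wordcount"
version := "0.1"
scalaVersion := "2.11.8"
// "provided" keeps spark-core out of the jar; the cluster supplies it at runtime
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.2.0" % "provided"
```

Running `sbt package` then produces the jar under `target/scala-2.11/`.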

Package the application as wordcount.jar and upload it to the Master node:

scp wordcount.jar dc2-user@116.85.9.118:
spark-submit --master spark://114.55.246.88:7077 --class \
WordCount  wordcount.jar  \
hdfs://Master:9000/Hadoop/Input/Jane.txt  \
hdfs://Master:9000/Hadoop/Output
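Note that the Output directory must not already exist, or saveAsTextFile will fail. Once the job finishes, the result can be inspected on HDFS with the standard commands (paths as used above):

```shell
# List and print the (word, count) pairs written by saveAsTextFile
hdfs dfs -ls hdfs://Master:9000/Hadoop/Output
hdfs dfs -cat hdfs://Master:9000/Hadoop/Output/part-*
```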
