
Spark Learning

2015-07-12  一只小青鸟


I. Introduction to Spark

II. Spark's Built-in Libraries

III. RDD

IV. Ways to Use Spark

V. Configuring the Spark Environment

Software           Version
Operating system   Linux Mint 16 (64-bit)
Hadoop             2.6.0
Spark              1.4.0
Scala              2.11.6
Mode               Spark on YARN [Cluster]
$ tar -xzvf spark-1.4.0.tar.gz
$ sudo chmod 777 -R spark-1.4.0/
$ sudo mv spark-1.4.0/ /usr/
$ sudo vi /etc/profile

# Add the following three lines
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export SPARK_HOME=/usr/spark-1.4.0/
export PATH="$PATH:$SPARK_HOME/bin"
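# Reload the profile so the new variables take effect in the current shell,
# and optionally verify them:
$ source /etc/profile
$ echo $SPARK_HOME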
$ cd /usr/spark-1.4.0/conf

$ sudo vi slaves
# Add the worker node(s)
node

$ sudo cp log4j.properties.template log4j.properties

$ sudo cp spark-defaults.conf.template spark-defaults.conf
$ sudo vi spark-defaults.conf
# Add the following lines
[
spark.yarn.am.waitTime 10
spark.yarn.submit.file.replication 0
spark.yarn.preserve.staging.files false
spark.yarn.scheduler.heartbeat.interval-ms 5000
spark.yarn.max.executor.failures 6
spark.yarn.historyServer.address node:10020
spark.yarn.executor.memoryOverhead 512
spark.yarn.driver.memoryOverhead 512
]


$ sudo cp spark-env.sh.template spark-env.sh
$ sudo vi spark-env.sh
# Add the following lines
[
export SCALA_HOME=/usr/scala-2.11.6
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64/

#standalone
SPARK_MASTER_IP=node
SPARK_WORKER_MEMORY=512M

#yarn
export HADOOP_HOME=/usr/hadoop-2.6.0
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
SPARK_EXECUTOR_INSTANCES=1
SPARK_EXECUTOR_CORES=1
SPARK_EXECUTOR_MEMORY=256M
SPARK_DRIVER_MEMORY=256M
SPARK_YARN_APP_NAME="Spark 1.4.0"
]

Make sure Hadoop is already running.

$ cd /usr/spark-1.4.0/sbin
$ ./start-all.sh

After starting, run the jps command; the Master and Worker processes should both appear.


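For reference, jps on a single-node setup where Hadoop is already running might print something roughly like the following (the process IDs are illustrative, and the exact set of Hadoop daemons depends on your configuration):

$ jps
2561 NameNode
2690 DataNode
2873 ResourceManager
3010 NodeManager
3350 Master
3421 Worker
3502 Jps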

(1) Run the example program

$ cd /usr/spark-1.4.0/bin
$ ./run-example SparkPi
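The bundled SparkPi example estimates π by Monte Carlo sampling. As a rough sketch of what such a job looks like in Scala under the Spark 1.x API (a simplified stand-in, not the bundled source), consider:

import org.apache.spark.{SparkConf, SparkContext}
import scala.math.random

object MySparkPi {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("MySparkPi"))
    // Number of partitions; taken from the first argument, default 2.
    val slices = if (args.length > 0) args(0).toInt else 2
    val n = 100000 * slices
    // Sample random points in the unit square and count those inside the unit circle.
    val count = sc.parallelize(1 to n, slices).map { _ =>
      val x = random * 2 - 1
      val y = random * 2 - 1
      if (x * x + y * y < 1) 1 else 0
    }.reduce(_ + _)
    println("Pi is roughly " + 4.0 * count / n)
    sc.stop()
  }
}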

(2) Run the example program with spark-submit (yarn-cluster and local modes)

$ cd /usr/spark-1.4.0/bin 

# yarn-cluster mode
spark-submit --class org.apache.spark.examples.JavaSparkPi --master yarn-cluster --driver-memory 256m  --executor-memory 256m --executor-cores 1 ../lib/spark-examples-1.4.0-hadoop2.6.0.jar 10

# local mode
spark-submit --class org.apache.spark.examples.SparkPi --master local --driver-memory 128m --executor-memory 128m --executor-cores 1 /usr/spark-1.4.0/lib/spark-examples-1.4.0-hadoop2.6.0.jar 10
Running a job that consumes Kafka (with the Kafka processes already running)

After building the application, execute the following command:

spark-submit  --master local --driver-memory 128m  --executor-memory 128m --executor-cores 1 --jars /home/zhy/spark-lib/zkclient-0.5.jar /home/zhy/spark-app/testSpark.jar
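The source of testSpark.jar is not shown here. Purely as an illustration, a minimal receiver-based Kafka word-count job under the Spark 1.4 streaming API might look like the sketch below; the topic name "test", group id "test-group", and ZooKeeper address "node:2181" are assumptions.

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

object TestSpark {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("TestSpark")
    val ssc = new StreamingContext(conf, Seconds(5))
    // Receiver-based Kafka stream (the code path seen in the stack trace below).
    // ZooKeeper quorum, consumer group and topic map are placeholder values.
    val lines = KafkaUtils.createStream(ssc, "node:2181", "test-group", Map("test" -> 1)).map(_._2)
    val counts = lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)
    counts.print()
    ssc.start()
    ssc.awaitTermination()
  }
}

Note that the Kafka and ZooKeeper client classes must be available on the executors, which is presumably why the zkclient jar is passed with --jars in the command above.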

Error message:

 ERROR ReceiverTracker: Deregistered receiver for stream 0: Error starting receiver 0 - java.lang.NoSuchMethodError: scala.Predef$.ArrowAssoc(Ljava/lang/Object;)Ljava/lang/Object;
        at kafka.consumer.ZookeeperConsumerConnector.<init>(ZookeeperConsumerConnector.scala:107)
        at kafka.consumer.ZookeeperConsumerConnector.<init>(ZookeeperConsumerConnector.scala:143)
        at kafka.consumer.Consumer$.create(ConsumerConnector.scala:94)
        at org.apache.spark.streaming.kafka.KafkaReceiver.onStart(KafkaInputDStream.scala:100)
        at org.apache.spark.streaming.receiver.ReceiverSupervisor.startReceiver(ReceiverSupervisor.scala:125)
        at org.apache.spark.streaming.receiver.ReceiverSupervisor.start(ReceiverSupervisor.scala:109)
        at org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverLauncher$$anonfun$8.apply(ReceiverTracker.scala:308)
        at org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverLauncher$$anonfun$8.apply(ReceiverTracker.scala:300)
        at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1765)
        at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1765)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:63)
        at org.apache.spark.scheduler.Task.run(Task.scala:70)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:744)

Error analysis: this is most likely a version-compatibility problem between the open-source components; the NoSuchMethodError on scala.Predef$.ArrowAssoc suggests that the Kafka client jar was compiled against a different Scala binary version than the one Spark is running on.
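One common way out is to build the application against the same Scala binary version and matching Spark/Kafka artifacts as the cluster. A minimal build.sbt sketch along those lines (the Scala version below is an assumption; use whatever version your Spark distribution was actually built with):

name := "testSpark"

version := "1.0"

// Must match the Scala binary version of the Spark and Kafka jars on the cluster.
scalaVersion := "2.10.4"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core"            % "1.4.0" % "provided",
  "org.apache.spark" %% "spark-streaming"       % "1.4.0" % "provided",
  "org.apache.spark" %% "spark-streaming-kafka" % "1.4.0"
)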

IX. Spark + Streaming

X. Starting the Hadoop and Spark Clusters

# 1. In the virtual machine, run the following command to get the VM's IP address
$ ifconfig

# 2. In Windows, open a Cmd prompt and run the following command, where yourIPAddress is the IP address reported by ifconfig
$ ping yourIPAddress

# 3. Confirm that the ping succeeds
$ ifconfig
$ sudo vi /etc/hosts
# Change the line "172.20.10.4 node" to the content inside the quotes below (without the quotes), where yourIPAddress is the IP address reported by ifconfig that the host and VM can reach from each other:
"yourIPAddress node"
$ cd /usr/hadoop-2.*.*/sbin
$ ./start-all.sh
$ cd /usr/spark-1.4.0/sbin
$ ./start-all.sh
# Run the example program
$ cd /usr/spark-1.4.0/bin
$ ./run-example SparkPi

# Run the example program via spark-submit
$ cd /usr/spark-1.4.0/bin
$ spark-submit --class org.apache.spark.examples.SparkPi --master local --driver-memory 128m --executor-memory 128m --executor-cores 1 /usr/spark-1.4.0/lib/spark-examples-1.4.0-hadoop2.*.*.jar 10
$ cd /usr/spark-1.4.0/bin
# --class            the main class to run
# --master           the master URL / run mode
# --driver-memory    memory to allocate to the driver for this task
# --executor-memory  memory to allocate to each executor for this task
# --executor-cores   number of cores to allocate to each executor for this task
# ***.jar            the last option is the path to the jar; everything after it is passed to the task as arguments
# arg0 arg1          optional arguments for the task

$ spark-submit --class YourClass --master local --driver-memory 128m --executor-memory 128m --executor-cores 1 ***.jar arg0 arg1
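For reference, "YourClass" above is a placeholder for your own driver class; the arguments after the jar path arrive unchanged in its main method. A minimal sketch (the argument meanings and defaults here are purely illustrative):

import org.apache.spark.{SparkConf, SparkContext}

object YourClass {
  def main(args: Array[String]): Unit = {
    // arg0 and arg1 from the spark-submit command line.
    val inputPath = if (args.length > 0) args(0) else "input.txt"
    val numLines  = if (args.length > 1) args(1).toInt else 10

    val sc = new SparkContext(new SparkConf().setAppName("YourClass"))
    // Example use of the arguments: read a file and print the first N lines.
    sc.textFile(inputPath).take(numLines).foreach(println)
    sc.stop()
  }
}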

XI. Running Spark Jobs Remotely from a Client

For a Windows development environment, running Spark jobs remotely requires the following steps:

Download an SSH client for Windows. MobaXterm is used here; its portable edition can be downloaded from:

[Download] -> http://mobaxterm.mobatek.net/MobaXterm_v7.7.zip

After downloading, just unzip it and it is ready to use. The interface looks like this:

[Screenshot: MobaXterm main window]

(1) In MobaXterm, click the Session sidebar on the left, right-click "Saved Sessions", and choose "New Session" from the pop-up menu to reach the following screen:

[Screenshot: MobaXterm New Session dialog]

(2) Click SSH, enter the node's IP address in "Remote Host", check "Specify username" and enter the user name, then click "OK".

[Screenshot: SSH session settings]

An SSH connection is then attempted automatically; enter the password to connect. In the screenshot below, the left pane is MobaXterm's built-in graphical FTP tool and the right pane is the SSH command line:

[Screenshot: MobaXterm FTP pane and SSH terminal]

In the FTP pane, navigate to the folder you want to upload to, then click the button highlighted in the red box below to select and upload files:

[Screenshot: file-upload button]

In the command line on the right, run the Spark job with a command in roughly the following format:

$ spark-submit --class YourClass --master local --driver-memory 128m --executor-memory 128m --executor-cores 1 ***.jar arg0 arg1