imooc Spark SQL Log Analysis - 2. Setting Up the Spark Environment
Official site: http://spark.apache.org/
1. Building Spark from Source
1.1 Downloading the Source
Download: http://spark.apache.org/downloads.html

1.2 Building
Docs: http://spark.apache.org/docs/latest/building-spark.html
Prerequisites:
- Building Spark using Maven requires Maven 3.3.9 or newer and Java 8+
- export MAVEN_OPTS="-Xmx2g -XX:ReservedCodeCacheSize=512m"
Maven build command:
// Prerequisite: some familiarity with the source tree is assumed
./build/mvn -Pyarn -Phadoop-2.7 -Dhadoop.version=2.7.3 -DskipTests clean package
// Build a runnable, distributable package
./dev/make-distribution.sh --name custom-spark --pip --r --tgz -Psparkr -Phadoop-2.7 -Phive -Phive-thriftserver -Pmesos -Pyarn -Pkubernetes
// What the -D properties in the commands above correspond to in the pom
<hadoop.version>2.2.0</hadoop.version>
<protobuf.version>2.5.0</protobuf.version>
<yarn.version>${hadoop.version}</yarn.version>
// What the -P profiles in the commands above correspond to in the pom
<profile>
  <id>hadoop-2.6</id>
  <properties>
    <hadoop.version>2.6.4</hadoop.version>
    <jets3t.version>0.9.3</jets3t.version>
    <zookeeper.version>3.4.6</zookeeper.version>
    <curator.version>2.6.0</curator.version>
  </properties>
</profile>
The command we actually use:
./build/mvn -Pyarn -Phadoop-2.6 -Phive -Phive-thriftserver -Dhadoop.version=2.6.0-cdh5.7.0 -DskipTests clean package
or (recommended):
./dev/make-distribution.sh --name 2.6.0-cdh5.7.0 --pip --r --tgz -Psparkr -Phadoop-2.6 -Phive -Phive-thriftserver -Pmesos -Pyarn -Pkubernetes -Dhadoop.version=2.6.0-cdh5.7.0
After the build completes, the resulting package is named:
spark-$VERSION-bin-$NAME
spark-2.2.0-bin-2.6.0-cdh5.7.0.tgz
Unpack the built spark-2.2.0-bin-2.6.0-cdh5.7.0.tgz; the extracted directory contains:
drwxr-xr-x 2 root root 4096 Aug 23 23:09 bin # client-side scripts
drwxr-xr-x 2 root root 4096 Aug 23 23:09 conf # configuration files
drwxr-xr-x 5 root root 4096 Aug 23 23:09 data # sample data for testing
drwxr-xr-x 4 root root 4096 Aug 23 23:09 examples # Spark's bundled examples; well worth studying closely (see the sketch after this listing)
drwxr-xr-x 2 root root 16384 Aug 23 23:09 jars # the jars Spark ships with (best practices covered later)
-rw-r--r-- 1 root root 17881 Aug 23 23:09 LICENSE
drwxr-xr-x 2 root root 4096 Aug 23 23:09 licenses
-rw-r--r-- 1 root root 24645 Aug 23 23:09 NOTICE
drwxr-xr-x 6 root root 4096 Aug 23 23:09 python
-rw-r--r-- 1 root root 3809 Aug 23 23:09 README.md
-rw-r--r-- 1 root root 136 Aug 23 23:09 RELEASE
drwxr-xr-x 2 root root 4096 Aug 23 23:09 sbin # server-side scripts: start/stop the cluster, etc.
drwxr-xr-x 2 root root 4096 Aug 23 23:09 yarn # YARN-related jars, used for dynamic resource allocation
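Since the examples directory is flagged above as worth a close look, here is a simplified sketch in the spirit of the bundled SparkPi example (the object name and sample count are illustrative, not taken from the course): it estimates Pi by sampling random points in the unit square.

import scala.math.random
import org.apache.spark.sql.SparkSession

// Simplified SparkPi-style sketch: estimate Pi by counting how many random
// points in the unit square fall inside the unit circle.
object SparkPiSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("SparkPiSketch").getOrCreate()
    val n = 100000 // number of sample points (illustrative value)
    val inside = spark.sparkContext.parallelize(1 to n).map { _ =>
      val x = random * 2 - 1
      val y = random * 2 - 1
      if (x * x + y * y <= 1) 1 else 0
    }.reduce(_ + _)
    println(s"Pi is roughly ${4.0 * inside / n}")
    spark.stop()
  }
}

The real bundled version can be run directly against the unpacked distribution with bin/run-example SparkPi.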
Common issues:
If the build output is hard to interpret, append -X to the Maven command to get more detailed debug output.
Notes from others who hit build issues: https://blog.csdn.net/chen_1122/article/details/77935149?locationNum=3&fps=1
[Issue 1]
[ERROR] Failed to execute goal on project spark-launcher_2.11: Could not resolve
dependencies for project org.apache.spark:spark-launcher_2.11:jar:2.2.0:
Failure to find org.apache.hadoop:hadoop-client:jar:2.6.0-cdh5.7.0 in
https://repo1.maven.org/maven2 was cached in the local repository, resolution
will not be reattempted until the update interval of central has elapsed or
updates are forced
This happens because Maven resolves against the Apache repositories by default, while we asked for a CDH version of Hadoop, so the Cloudera repository has to be added. Open the pom.xml in the Spark source directory and add it:
vi /usr/local/spark-test/app/spark-2.2.0/pom.xml and add the following:
<repository>
  <id>cloudera</id>
  <name>cloudera Repository</name>
  <url>https://repository.cloudera.com/artifactory/cloudera-repos</url>
</repository>
[Issue 2]
The build runs out of memory. Give the Maven JVM more memory:
export MAVEN_OPTS="-Xmx2g -XX:ReservedCodeCacheSize=512m -XX:MaxPermSize=512M"
Some students build on Alibaba Cloud machines with limited memory; give the machine at least 2-4 GB of RAM (8 GB is recommended).
[Issue 3]
If you need to build against Scala 2.10, switch the Scala version first:
./dev/change-scala-version.sh 2.10
[Issue 4]
was cached in the local repository....
- Delete all the xxx.lastUpdated files from the local repository and re-run the Maven command
- Or append -U to the build command to force dependency updates
2. Setting Up the Spark Environment
- Local mode
bin/spark-shell --master local[2]
Pitfall encountered with zsh: https://blog.csdn.net/u012675539/article/details/52079013
http://spark.apache.org/docs/latest/submitting-applications.html

- Standalone mode
Similar to Hadoop/YARN:
1 master + n slaves
# spark-env.sh
SPARK_MASTER_HOST=xxxx
SPARK_WORKER_CORES=2
SPARK_WORKER_MEMORY=2g
SPARK_WORKER_INSTANCES=2
# In a distributed setup, just add the other machines (which run the worker processes) to the conf/slaves file
# Start the cluster
sbin/start-all.sh

# Start the shell
spark-shell --master spark://host:port
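Once the shell is attached to the master, a few quick checks inside it confirm the connection; this is just a sanity-check sketch, not part of the course notes (spark://host:7077 assumes the default standalone master port).

// Run inside the spark-shell started above
spark.sparkContext.master               // e.g. spark://host:7077 in standalone mode, local[2] in local mode
spark.sparkContext.defaultParallelism   // parallelism reported by the cluster
spark.version                           // the Spark version the shell is running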
3. Basic Spark Usage
val file = spark.sparkContext.textFile("file:///Users/gaowenfeng/Documents/学习资料/Spark SQL慕课网日志分析/data/wc.txt")
val wordCounts = file.flatMap(line => line.split(",")).map(word => (word, 1)).reduceByKey(_ + _)
wordCounts.collect
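As a quick follow-up in the same spark-shell session, the counts can be sorted and written out; the output path below is only an illustrative assumption.

// Continue from the wordCounts RDD above: sort by count in descending order and look at the top entries
val sorted = wordCounts.sortBy(_._2, ascending = false)
sorted.take(3).foreach(println)
// Write the result to a directory (illustrative path; the target directory must not already exist)
sorted.saveAsTextFile("file:///tmp/wc_output")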