Installing and Configuring Spark in Three Modes

2021-09-09  大数据ZRL

Environment Setup

1. Local Mode

Install the JDK

wget https://download.oracle.com/otn/java/jdk/8u301-b09/d3c52aa6bfa54d3ca74e617f18309292/jdk-8u301-linux-x64.tar.gz?AuthParam=1631169458_b753f63069d375ab0a6a52e1d9cd9013
tar xzvf jdk-8u301-linux-x64.tar.gz -C ../software/
export JAVA_HOME=/root/***/software/jdk1.8.0_301
export PATH=$PATH:$JAVA_HOME/bin
java -version
java version "1.8.0_301"
Java(TM) SE Runtime Environment (build 1.8.0_301-b09)
Java HotSpot(TM) 64-Bit Server VM (build 25.301-b09, mixed mode)
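The exports above only last for the current shell session. To make them permanent, one option (a sketch assuming bash and the same install path) is to append them to ~/.bashrc:

```shell
# Persist the Java environment across logins (paths as installed above).
cat >> ~/.bashrc <<'EOF'
export JAVA_HOME=/root/***/software/jdk1.8.0_301
export PATH=$PATH:$JAVA_HOME/bin
EOF
# Reload the file so the current shell picks up the change too.
. ~/.bashrc
```

The same pattern applies to the SCALA_HOME and SPARK_HOME exports in the following sections.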

Install Scala

wget https://downloads.lightbend.com/scala/2.11.8/scala-2.11.8.tgz
tar xzvf scala-2.11.8.tgz -C ../software/
export SCALA_HOME=/root/***/software/scala-2.11.8
export PATH=$PATH:$SCALA_HOME/bin
scala
Welcome to Scala 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_301).
Type in expressions for evaluation. Or try :help.

Install Spark

wget https://archive.apache.org/dist/spark/spark-2.4.8/spark-2.4.8-bin-hadoop2.7.tgz
tar xzvf spark-2.4.8-bin-hadoop2.7.tgz -C ../software/
export SPARK_HOME=/root/***/software/spark-2.4.8-bin-hadoop2.7
export PATH=$PATH:$SPARK_HOME/bin
spark-shell
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.4.8
      /_/
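With SPARK_HOME on the PATH, local mode can be smoke-tested without any cluster. A sketch using the SparkPi example bundled with the distribution:

```shell
# Run the bundled SparkPi example in local mode; the trailing argument
# is the number of partitions (parallel tasks) to use.
run-example SparkPi 10

# Or open an interactive shell pinned to two local cores:
spark-shell --master local[2]
```

If the install is healthy, SparkPi prints an approximation of pi in its log output.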

2. Standalone Mode

hostname    role
bigdata112  master
bigdata113  worker
bigdata114  worker
bigdata115  worker
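Both the scp distribution step and sbin/start-slaves.sh below assume password-free SSH from the master to every worker. A typical one-time setup (a sketch assuming OpenSSH on all hosts):

```shell
# On bigdata112: generate a key pair once, with no passphrase...
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
# ...then push the public key to each worker.
for host in bigdata113 bigdata114 bigdata115; do
  ssh-copy-id "$host"
done
```

Afterwards, `ssh bigdata113` from the master should log in without prompting for a password.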
cd ../software/spark-2.4.8-bin-hadoop2.7/conf/
cp spark-env.sh.template spark-env.sh
vim spark-env.sh 
  export JAVA_HOME=/root/***/software/jdk1.8.0_301
  export SCALA_HOME=/root/***/software/scala-2.11.8
  export SPARK_HOME=/root/***/software/spark-2.4.8-bin-hadoop2.7
  export SPARK_EXECUTOR_MEMORY=5G
  export SPARK_EXECUTOR_CORES=2
  export SPARK_WORKER_CORES=2
cp slaves.template slaves
vim slaves 
  bigdata113
  bigdata114
  bigdata115
scp -r /root/***/software/spark-2.4.8-bin-hadoop2.7 bigdata113:/root/***/software/
scp -r /root/***/software/spark-2.4.8-bin-hadoop2.7 bigdata114:/root/***/software/
scp -r /root/***/software/spark-2.4.8-bin-hadoop2.7 bigdata115:/root/***/software/
sbin/start-master.sh
starting org.apache.spark.deploy.master.Master, logging to /root/***/software/spark-2.4.8-bin-hadoop2.7/logs/spark-root-org.apache.spark.deploy.master.Master-1-***2021.out
sbin/start-slaves.sh
bigdata113: starting org.apache.spark.deploy.worker.Worker, logging to /root/***/software/spark-2.4.8-bin-hadoop2.7/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-***2021.out
bigdata114: starting org.apache.spark.deploy.worker.Worker, logging to /root/***/software/spark-2.4.8-bin-hadoop2.7/logs/spark-root-org.apache.spark.deploy.worker.Worker-2-***2021.out
bigdata115: starting org.apache.spark.deploy.worker.Worker, logging to /root/***/software/spark-2.4.8-bin-hadoop2.7/logs/spark-root-org.apache.spark.deploy.worker.Worker-3-***2021.out
spark-shell --master spark://***2021:7077
Spark context available as 'sc' (master = spark://***2021:7077, app id = app-20210909163213-0001).
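Beyond the interactive shell, batch jobs go to the standalone master via spark-submit. A sketch using the examples jar shipped with this build (the jar name is assumed from the standard Spark 2.4.8 / Scala 2.11 layout):

```shell
spark-submit \
  --master spark://***2021:7077 \
  --class org.apache.spark.examples.SparkPi \
  $SPARK_HOME/examples/jars/spark-examples_2.11-2.4.8.jar 100
```

The master's web UI (port 8080 by default) shows the registered workers and running applications.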

3. Spark On Yarn Mode

export HADOOP_CONF_DIR=/root/***/software/hadoop-2.7.6/etc/hadoop
export YARN_CONF_DIR=/root/***/software/hadoop-2.7.6/etc/hadoop
spark-shell --master yarn
Spark context available as 'sc' (master = yarn, app id = application_1560334779290_0001).
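spark-shell on YARN only runs in client mode, with the driver on the submitting machine. For production jobs, cluster mode via spark-submit is the usual choice (same assumed examples jar as above):

```shell
# In cluster mode the driver runs inside a YARN ApplicationMaster,
# so the job survives the submitting terminal closing.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class org.apache.spark.examples.SparkPi \
  $SPARK_HOME/examples/jars/spark-examples_2.11-2.4.8.jar 100
```

Note that no workers need to be started in this mode: YARN's NodeManagers host the executors, so the standalone master/worker daemons stay out of the picture.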