Installing Hadoop 2.9
2019-03-19 · 任嘉平生愿
Since I want to play with Flink, and my old Hadoop install was getting dated, I'm redoing the setup from scratch.
Environment: CentOS 7, JDK 8, Hadoop 2.9
1. Configure environment variables and map host names to IPs
vi /etc/profile
JAVA_HOME=/home/java/jdk8
JRE_HOME=/home/java/jdk8/jre
HADOOP_HOME=/home/bigdata/hadoop
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib
PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin:$HADOOP_HOME/bin
export JAVA_HOME JRE_HOME CLASSPATH PATH HADOOP_HOME
source /etc/profile
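A quick sanity check that the variables took effect (Hadoop itself isn't unpacked until step 2, so only the JDK can be verified at this point):
java -version
echo $HADOOP_HOME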
vi /etc/hosts
192.168.229.133 master
192.168.229.138 slave1
192.168.229.139 slave2
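To confirm the mappings resolve, ping each host by name:
ping -c 1 slave1
ping -c 1 slave2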
2. Extract Hadoop and rename the directory
tar -zxvf hadoop-2.9.2.tar.gz
mv hadoop-2.9.2 hadoop
Enter Hadoop's configuration directory:
cd hadoop/etc/hadoop
3. vi hadoop-env.sh
# line 27
export JAVA_HOME=/home/java/jdk8
4. vi core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/tmp</value>
  </property>
</configuration>
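hadoop.tmp.dir must point at an existing, writable directory on every node; assuming the path above, create it first:
mkdir -p /home/hadoop/tmp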
5. vi hdfs-site.xml
<configuration>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>master:50090</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
6. mv mapred-site.xml.template mapred-site.xml
vi mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
7. vi yarn-site.xml
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
8. Cluster setup
vi slaves
slave1
slave2
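Cloning in step 9 carries this configuration to the slaves automatically; if you change any file afterwards, push the config directory out again, for example (paths as above):
scp -r /home/bigdata/hadoop/etc/hadoop slave1:/home/bigdata/hadoop/etc/
scp -r /home/bigdata/hadoop/etc/hadoop slave2:/home/bigdata/hadoop/etc/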
9. Clone this machine twice to create slave1 and slave2
10. Configure passwordless SSH login between the nodes; plenty of guides exist online, and a minimal sketch follows.
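Run this on the master as the same user, accepting the defaults; copy the key to every node, master included, because the start scripts also ssh to the local machine:
ssh-keygen -t rsa
ssh-copy-id master
ssh-copy-id slave1
ssh-copy-id slave2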
11. Format the NameNode (run on the master node)
hadoop namenode -format
12. Start Hadoop
cd /home/bigdata/hadoop/sbin
./start-all.sh
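Note that start-all.sh is deprecated in Hadoop 2.x; the equivalent, more explicit pair of commands is:
./start-dfs.sh
./start-yarn.sh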
13. Check that each node is running the correct processes with jps
Master node: NameNode, SecondaryNameNode, ResourceManager
Two slave nodes: DataNode, NodeManager
At this point Hadoop has been installed successfully.
(If QuorumPeerMain also appears in the jps output, that is a ZooKeeper process, not part of Hadoop.)
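As a final check, Hadoop 2.x also serves web UIs; from a browser on a machine that resolves the host names above:
http://master:50070 (HDFS NameNode UI)
http://master:8088 (YARN ResourceManager UI)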