
Hadoop Setup [Fully Distributed]

2018-11-05  57山本无忧

Environment Preparation

Hostname IP address
s101 192.168.200.101
s102 192.168.200.102
s103 192.168.200.103
s104 192.168.200.104
[root@hadoop-master ~]# uname -r
2.6.32-358.el6.x86_64
[root@hadoop-master ~]# uname -m
x86_64
[root@hadoop-master ~]# cat /etc/redhat-release 
CentOS release 6.4 (Final)
[root@hadoop-master ~]# getenforce 
Disabled
Hostname resolution (append the same entries to /etc/hosts on every node):

192.168.200.101 s101
192.168.200.102 s102
192.168.200.103 s103
192.168.200.104 s104
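Since the mappings must be identical on all four machines, it helps to stage them in one file and append that same file everywhere. A small sketch (the cluster-hosts filename is just for illustration):

```shell
# Stage the hostname mappings once; the same file is then appended to
# /etc/hosts (as root) on each of the four machines.
cat > cluster-hosts <<'EOF'
192.168.200.101 s101
192.168.200.102 s102
192.168.200.103 s103
192.168.200.104 s104
EOF
# on each node, as root:
#   cat cluster-hosts >> /etc/hosts
wc -l cluster-hosts
```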

Configure passwordless SSH for root

Log in to the s101 server and switch to the root user: su - root

  1. Generate a key pair on the s101 host
[root@s101 ~]# ssh-keygen

Press Enter through all of the prompts.
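If you prefer not to press Enter repeatedly, ssh-keygen can be run non-interactively. A sketch, writing into a scratch directory so it has no side effects; on the real host the target would be /root/.ssh/id_rsa:

```shell
# -t rsa: RSA key pair; -N '': empty passphrase (what pressing Enter
# gives you); -q: quiet; -f: output path. mktemp -d is only for the sketch.
keydir=$(mktemp -d)
ssh-keygen -t rsa -N '' -q -f "$keydir/id_rsa"
ls "$keydir"    # id_rsa (private key) and id_rsa.pub (public key)
```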

  2. Configure passwordless login to the local machine
[root@s101 ~]# cd /root/.ssh/
[root@s101 .ssh]# cp id_rsa.pub authorized_keys

Test passwordless login to the local machine; if you are logged in directly without a password prompt, the configuration succeeded.

[root@s101 ~]# ssh s101
  3. Copy the public key of s101 to hosts s102–s104
[root@s101 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@s102

Type yes, then enter the root password of s102; repeat for s103 and s104.

[root@s101 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@s103
[root@s101 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@s104

Test passwordless login to s102–s104 one by one; if you are logged in directly without a password prompt, the configuration succeeded.

[root@s101 ~]# ssh s102
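All four logins can also be checked in one pass. The check_login helper below is hypothetical (not part of the original walkthrough); BatchMode=yes makes ssh return an error instead of falling back to a password prompt:

```shell
# Print the remote hostname for each node; any node still asking for a
# password is reported as FAILED instead of blocking on a prompt.
check_login() {
  for h in "$@"; do
    ssh -o BatchMode=yes -o ConnectTimeout=3 "root@$h" hostname \
      || echo "passwordless login to $h FAILED"
  done
}
check_login s101 s102 s103 s104
```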

Create the hadoop user and configure passwordless login for it

[root@s101 ~]# useradd hadoop && echo 123456 | passwd --stdin hadoop
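The hadoop user must exist on all four nodes before the ssh-copy-id steps below can work. Since root already has passwordless SSH everywhere, a hypothetical helper such as provision_user (not in the original) can create it cluster-wide from s101; note that passwd --stdin is a CentOS/RHEL extension:

```shell
# Create the hadoop user (password 123456, as above) on each node,
# skipping nodes where it already exists.
provision_user() {
  for h in "$@"; do
    echo "provisioning $h"
    ssh -o BatchMode=yes -o ConnectTimeout=3 "root@$h" \
      'id hadoop >/dev/null 2>&1 || { useradd hadoop && echo 123456 | passwd --stdin hadoop; }' \
      || echo "provisioning $h failed"
  done
}
provision_user s101 s102 s103 s104
```

Afterwards, switch to the hadoop user on s101 (su - hadoop) for the key setup that follows.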
[hadoop@s101 ~]$ ssh-keygen
[hadoop@s101 ~]$ cd /home/hadoop/.ssh/
[hadoop@s101 .ssh]$ cp id_rsa.pub authorized_keys
[hadoop@s101 .ssh]$ ssh s101
[hadoop@s101 .ssh]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub hadoop@s102
[hadoop@s101 .ssh]$ ssh s102
[hadoop@s101 .ssh]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub hadoop@s103
[hadoop@s101 .ssh]$ ssh s103
[hadoop@s101 .ssh]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub hadoop@s104
[hadoop@s101 .ssh]$ ssh s104

Install and configure the JDK for the hadoop user (the /app directory used below is created, as root, at the start of the next section)

[hadoop@s101 app]$ tar zxf /app/jdk-8u144-linux-x64.tar.gz -C /app/
[hadoop@s101 app]$ ln -s /app/jdk1.8.0_144 /app/jdk
[hadoop@s101 app]$ echo -e '################## JAVA environment variables #############\nexport JAVA_HOME=/app/jdk\nexport JRE_HOME=$JAVA_HOME/jre\nexport CLASSPATH=$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH\nexport PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH' >> ~/.bash_profile && source ~/.bash_profile && tail -5 ~/.bash_profile
[hadoop@s101 app]$ java -version
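For reference, the long echo -e one-liner appends the following lines to ~/.bash_profile (shown here unescaped):

```shell
# JAVA environment variables appended to ~/.bash_profile
export JAVA_HOME=/app/jdk
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
```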

Install Hadoop

[root@s101 ~]# mkdir /app
[root@s101 ~]# chown -R hadoop:hadoop /app
[root@s101 ~]# ll -d /app/
drwxr-xr-x 2 hadoop hadoop 4096 Nov  5 15:42 /app/
[root@s101 home]# su - hadoop
[hadoop@s101 ~]$ cd /app/
[hadoop@s101 app]$ wget --no-check-certificate https://mirrors.cnnic.cn/apache/hadoop/common/hadoop-2.9.0/hadoop-2.9.0.tar.gz
[hadoop@s101 app]$ tar zxf /app/hadoop-2.9.0.tar.gz -C /app/
[hadoop@s101 app]$ ln -s /app/hadoop-2.9.0 /app/hadoop
[hadoop@s101 app]$ echo -e '###### Hadoop environment variables ######\nexport HADOOP_HOME=/app/hadoop\nexport PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH' >> /home/hadoop/.bash_profile && source /home/hadoop/.bash_profile && tail -3 /home/hadoop/.bash_profile
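Likewise, the Hadoop one-liner appends these two lines; both bin and sbin go onto PATH so that hadoop, hdfs, and the start-*.sh scripts all resolve. Afterwards, hadoop version should report 2.9.0:

```shell
# Hadoop environment variables appended to ~/.bash_profile
export HADOOP_HOME=/app/hadoop
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
```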

Edit the Hadoop cluster configuration files (all under /app/hadoop/etc/hadoop)

hadoop-env.sh (point Hadoop at the JDK):

export JAVA_HOME=/app/jdk
slaves (the DataNode/NodeManager hosts, one per line):

s102
s103
s104
core-site.xml (the default filesystem, i.e. the NameNode address):

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://s101:9000</value>
    </property>
</configuration>
hdfs-site.xml (HDFS block replication factor):

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
</configuration>
mapred-site.xml (run MapReduce on YARN):

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
yarn-site.xml (shuffle service and ResourceManager host):

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>s101</value>
    </property>
</configuration>
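In the Hadoop 2.x layout all of the files above live under $HADOOP_HOME/etc/hadoop. The slaves file is plain text, one worker per line, so it can be generated in one step. A sketch, written to the current directory here rather than straight into the config directory:

```shell
# Generate the worker list; copy it to /app/hadoop/etc/hadoop/slaves.
printf '%s\n' s102 s103 s104 > slaves
cat slaves
```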

Distribute the files to s102–s104

Note: each target node needs an /app directory owned by hadoop first (created as root, the same way as on s101), since the hadoop user cannot create /app at the filesystem root itself.

[hadoop@s101 /]$ rsync -lr app hadoop@s102:/
[hadoop@s101 /]$ rsync -lr app hadoop@s103:/
[hadoop@s101 /]$ rsync -lr app hadoop@s104:/
[hadoop@s101 /]$ cd /home/hadoop/
[hadoop@s101 ~]$ rsync -lr .bash_profile hadoop@s102:/home/hadoop/
[hadoop@s101 ~]$ rsync -lr .bash_profile hadoop@s103:/home/hadoop/
[hadoop@s101 ~]$ rsync -lr .bash_profile hadoop@s104:/home/hadoop/
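The six rsync calls can be folded into a loop. The push_node helper is hypothetical; -l copies the /app/hadoop and /app/jdk symlinks as symlinks and -r recurses (unlike -a, this does not preserve permissions or timestamps). BatchMode=yes keeps a broken key setup from hanging on a password prompt:

```shell
# Sync /app and the hadoop user's .bash_profile to one node.
push_node() {
  echo "syncing to $1"
  rsync -lr -e 'ssh -o BatchMode=yes -o ConnectTimeout=3' /app "hadoop@$1:/" \
    && rsync -lr -e 'ssh -o BatchMode=yes -o ConnectTimeout=3' \
         /home/hadoop/.bash_profile "hadoop@$1:/home/hadoop/"
}
for h in s102 s103 s104; do
  push_node "$h" || echo "sync to $h failed"
done
```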
[hadoop@s101 /home/hadoop]$ ll /app/
[hadoop@s101 /home/hadoop]$ cat /home/hadoop/.bash_profile

Start the Hadoop cluster

Format the HDFS filesystem (the command prints a long log; look for "successfully formatted" in the output)

[hadoop@s101 /home/hadoop]$ hdfs namenode -format

Start the cluster

Run start-dfs.sh

[hadoop@s101 /home/hadoop]$ start-dfs.sh
Starting namenodes on [s101]
s101: starting namenode, logging to /app/hadoop-2.9.0/logs/hadoop-hadoop-namenode-s101.out
s103: starting datanode, logging to /app/hadoop-2.9.0/logs/hadoop-hadoop-datanode-s103.out
s104: starting datanode, logging to /app/hadoop-2.9.0/logs/hadoop-hadoop-datanode-s104.out
s102: starting datanode, logging to /app/hadoop-2.9.0/logs/hadoop-hadoop-datanode-s102.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /app/hadoop-2.9.0/logs/hadoop-hadoop-secondarynamenode-s101.out
[hadoop@s101 /home/hadoop]$ jps
24392 NameNode
24589 SecondaryNameNode
[hadoop@s102 /home/hadoop]$ jps
3969 DataNode
[hadoop@s103 /home/hadoop]$ jps
3896 DataNode
[hadoop@s104 /home/hadoop]$ jps
3936 DataNode

Run start-yarn.sh

[hadoop@s101 /home/hadoop]$ start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /app/hadoop-2.9.0/logs/yarn-hadoop-resourcemanager-s101.out
s104: starting nodemanager, logging to /app/hadoop-2.9.0/logs/yarn-hadoop-nodemanager-s104.out
s103: starting nodemanager, logging to /app/hadoop-2.9.0/logs/yarn-hadoop-nodemanager-s103.out
s102: starting nodemanager, logging to /app/hadoop-2.9.0/logs/yarn-hadoop-nodemanager-s102.out
[hadoop@s101 /home/hadoop]$ jps
24392 NameNode
24589 SecondaryNameNode
24797 ResourceManager
[hadoop@s102 /home/hadoop]$ jps
3969 DataNode
4105 NodeManager
[hadoop@s103 /home/hadoop]$ jps
4032 NodeManager
3896 DataNode
[hadoop@s104 /home/hadoop]$ jps
3936 DataNode
4072 NodeManager
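With both scripts running, the cluster can be smoke-checked from s101. The smoke_check helper below is hypothetical; in Hadoop 2.x you can also browse the NameNode web UI at http://s101:50070 and the ResourceManager UI at http://s101:8088:

```shell
# Summarize HDFS health: "hdfs dfsadmin -report" prints one "Name:" line
# per live datanode (three are expected in this cluster).
smoke_check() {
  if hdfs dfsadmin -report >/tmp/dfs-report.txt 2>/dev/null; then
    echo "live datanodes: $(grep -c '^Name:' /tmp/dfs-report.txt)"
  else
    echo "HDFS not reachable"
  fi
}
smoke_check
```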