
[Big Data Engineer] [Hadoop] Installing Hadoop on CentOS 7

2019-07-12  炼狱腾蛇Eric

1. Official Website

https://hadoop.apache.org/

2. Download

https://hadoop.apache.org/releases.html

3. Environment

The cluster consists of three CentOS 7 machines: node1 (192.168.56.201), node2 (192.168.56.202), and node3 (192.168.56.203), with all commands run as root.

4. Configure the System Environment

Add the following host mappings to /etc/hosts on each node:

192.168.56.201 node1
192.168.56.202 node2
192.168.56.203 node3
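
These mappings are needed on every node; they can be pushed out with scp (this still prompts for the root password, since key-based login is only configured in the next step):

$ scp /etc/hosts node2:/etc/hosts
$ scp /etc/hosts node3:/etc/hosts

Next, generate a key pair on node1 so it can reach the other nodes without a password: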
[root@node1 etc]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:vSs9kZ7yxb16okc8e40afXJkICUQRI4Ig0xP3sh2F7c root@node1
The key's randomart image is:
+---[RSA 2048]----+
| o..+   .o*o. .  |
|  o= = . = . o   |
|    * + o E . .  |
|   . . . .   . . |
|        S .o    o|
|          oo+o o |
|         o.++o+oo|
|        o =o+.+=.|
|         +++o*.  |
+----[SHA256]-----+

[root@node1 etc]# ssh-copy-id node2
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'node2 (192.168.56.202)' can't be established.
ECDSA key fingerprint is SHA256:S+fUIkc5tzfLcZ8OKjyo5Gj89fYbiM9Q/r+K6k9LZkQ.
ECDSA key fingerprint is MD5:52:96:7a:52:d1:e7:3b:99:d9:b7:a0:2a:87:71:78:f3.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@node2's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'node2'"
and check to make sure that only the key(s) you wanted were added.
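
The same key must also be installed on node3, and on node1 itself so that the start scripts can SSH to the local host:

$ ssh-copy-id node3
$ ssh-copy-id node1

Passwordless login can then be spot-checked with ssh node2 hostname.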
Install OpenJDK on every node, then add the Java environment variables (the path matches the build that yum installs) to /etc/profile:

$ yum -y install java java-devel

export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.212.b04-0.el7_6.x86_64
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
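
The JAVA_HOME path above depends on the exact OpenJDK build yum installed; it can be confirmed by resolving the java binary (JAVA_HOME is that path with the trailing jre/bin/java removed):

$ readlink -f /usr/bin/java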

5. Install and Configure Hadoop

$ cd /opt
$ wget https://archive.apache.org/dist/hadoop/common/hadoop-3.2.0/hadoop-3.2.0.tar.gz
$ tar xf hadoop-3.2.0.tar.gz

Then create /etc/profile.d/hadoop.sh (the same file that is copied to node2 and node3 later) and add:
export HADOOP_HOME=/opt/hadoop-3.2.0
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_YARN_HOME=$HADOOP_HOME

export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_LIBEXEC_DIR=$HADOOP_HOME/libexec
export JAVA_LIBRARY_PATH=$HADOOP_HOME/lib/native:$JAVA_LIBRARY_PATH
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop

export HDFS_DATANODE_USER=root
export HDFS_DATANODE_SECURE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export HDFS_NAMENODE_USER=root

export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
$ source /etc/profile
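
A quick sanity check that the variables took effect:

$ hadoop version
$ echo $HADOOP_CONF_DIR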
Create the data directories referenced by the configuration files below:

$ mkdir -p /data/hadoop/tmp /data/hadoop/hdfs/name /data/hadoop/hdfs/data
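
node2 and node3 will run DataNodes and need the same directories; with the passwordless SSH set up earlier they can be created remotely:

$ ssh node2 'mkdir -p /data/hadoop/tmp /data/hadoop/hdfs/data'
$ ssh node3 'mkdir -p /data/hadoop/tmp /data/hadoop/hdfs/data'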

Edit $HADOOP_HOME/etc/hadoop/core-site.xml:

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://node1:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/data/hadoop/tmp</value>
    </property>
</configuration>

Edit hdfs-site.xml:

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/data/hadoop/hdfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/data/hadoop/hdfs/data</value>
    </property>
    <property>
        <name>dfs.namenode.http-address</name>
        <value>node1:50070</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>node2:50090</value>
    </property>
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
</configuration>

Edit mapred-site.xml:

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.application.classpath</name>
        <value>
            /opt/hadoop-3.2.0/etc/hadoop,
            /opt/hadoop-3.2.0/share/hadoop/common/*,
            /opt/hadoop-3.2.0/share/hadoop/common/lib/*,
            /opt/hadoop-3.2.0/share/hadoop/hdfs/*,
            /opt/hadoop-3.2.0/share/hadoop/hdfs/lib/*,
            /opt/hadoop-3.2.0/share/hadoop/mapreduce/*,
            /opt/hadoop-3.2.0/share/hadoop/mapreduce/lib/*,
            /opt/hadoop-3.2.0/share/hadoop/yarn/*,
            /opt/hadoop-3.2.0/share/hadoop/yarn/lib/*
        </value>
    </property>
</configuration>
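
As a cross-check of the hard-coded JAR locations above, the hadoop client can print the classpath it actually resolves:

$ hadoop classpath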
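
A yarn-site.xml is also needed before the YARN daemons can start. A minimal sketch, assuming node1 hosts the ResourceManager (both properties are standard YARN settings; the values are assumptions for this cluster):

<configuration>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>node1</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>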

List the worker nodes in $HADOOP_HOME/etc/hadoop/workers:

node1
node2
node3

Distribute the environment script and the Hadoop installation to the other nodes:

$ scp /etc/profile.d/hadoop.sh node2:/etc/profile.d/hadoop.sh
$ scp /etc/profile.d/hadoop.sh node3:/etc/profile.d/hadoop.sh
$ scp -r /opt/hadoop-3.2.0 node2:/opt/
$ scp -r /opt/hadoop-3.2.0 node3:/opt/
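
To confirm the copies landed and the environment resolves on the remote nodes:

$ ssh node2 'source /etc/profile && hadoop version'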

Format the NameNode (on node1, run once only):

$ hdfs namenode -format

Start (and later stop) all HDFS and YARN daemons from node1:

$ $HADOOP_HOME/sbin/start-all.sh
$ $HADOOP_HOME/sbin/stop-all.sh
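
Once start-all.sh finishes, jps on each node should list its Java daemons (NameNode and ResourceManager on node1, DataNode and NodeManager on the workers, SecondaryNameNode on node2 as configured in hdfs-site.xml):

$ jps

The NameNode web UI should then be reachable at http://node1:50070, as set in hdfs-site.xml.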