
Step by Step: Building an Enterprise-Grade Secure Big Data Platform on Cloudera 5.8.2

2017-02-20  大数据之心

To avoid conceptual ambiguity and misunderstanding, please read this post together with the previous one in the series, "Step by Step: Building an Enterprise-Grade Secure Big Data Platform on Cloudera 5.8.2 - Transport-Layer Encryption - Cloudera Manager Components".

The Role of Hadoop Services in TLS/SSL Encryption

When a server-side process starts, it loads its JKS keystore. When a client connects, the server presents its certificate, and the client checks it against its local trust store to authenticate the server.
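
From the client side, this exchange can be observed directly with openssl s_client. This is only a sketch: ${SERVICE_HOSTNAME} and ${SERVICE_PORT} stand for any of the TLS-enabled services configured below, and -CAfile points at the PEM trust bundle that is built later in this post:

openssl s_client -connect ${SERVICE_HOSTNAME}:${SERVICE_PORT} \
    -CAfile /opt/cloudera/security/x509/cms.pem.hue.public -showcerts </dev/null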

The relevant files are listed below; "server" and "client" here refer to roles in the TLS/SSL sense:

A HUE-based functional test should be performed after each configuration change.

Certificate Compatibility

Python-based components use PEM: HUE, Impala;

Java-based components use JKS: HDFS, YARN, Hive, HBase, Oozie (a quick way to inspect each format is sketched below).
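
To tell the two formats apart, or to check what a given store contains, both toolchains can be inspected directly. This is a sketch: the keystore and PEM bundle paths are the ones created later in this post, and PASSWD is the JKS password from the previous post:

# JKS (Java components): list the keystore entries
${JAVA_HOME}/bin/keytool -list -keystore /opt/cloudera/security/jks/cms.keystore -storepass ${PASSWD}
# PEM (Python components): print the subject and validity of the first certificate
openssl x509 -in /opt/cloudera/security/x509/cms.pem.hue.public -noout -subject -dates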

Configuring HDFS Encryption

Prerequisites and notes:

On every node, copy cms.keystore.${HOSTNAME} to a common name so that it matches the HDFS configuration:

pscp -h list_all rename_jks.sh /tmp
pssh -h list_all "sudo /usr/bin/bash /tmp/rename_jks.sh"

The script rename_jks.sh is as follows:

#!/bin/bash
HOSTNAME=`hostname -f`
sudo cp /opt/cloudera/security/jks/cms.keystore.${HOSTNAME} /opt/cloudera/security/jks/cms.keystore
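
As an optional sanity check, the copied keystore should be listable on any node with the JKS password from the previous post:

sudo ${JAVA_HOME}/bin/keytool -list \
    -keystore /opt/cloudera/security/jks/cms.keystore -storepass ${PASSWD}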

Modify the HDFS configuration as follows and restart. PASSWD is the JKS password set in the previous post. Note that dfs.data.transfer.protection = privacy requires the DataNode to listen on a non-privileged port (1024 or above), which is why dfs.datanode.address is changed:

ssl.server.keystore.location=${BASE_SECURITY_PATH}/jks/cms.keystore
ssl.server.keystore.password=${PASSWD}
ssl.server.keystore.keypassword=${PASSWD}
ssl.client.truststore.location=${JAVA_HOME}/jre/lib/security/jssecacerts.public
ssl.client.truststore.password=${PASSWD}
hadoop.ssl.enabled=true
dfs.datanode.address = 1024
dfs.data.transfer.protection = privacy
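
After the restart, a simple smoke test is to query the NameNode web UI over HTTPS. This is a sketch: 50470 is the default dfs.namenode.https-address port in CDH 5, ${NAMENODE_HOSTNAME} is a placeholder, --negotiate applies only if SPNEGO authentication is enabled on the web UIs, and --cacert points at the PEM bundle built in the HUE section below (use -k until it exists):

# Should return NameNode JMX data over TLS
curl --negotiate -u : --cacert /opt/cloudera/security/x509/cms.pem.hue.public \
    "https://${NAMENODE_HOSTNAME}:50470/jmx?qry=Hadoop:service=NameNode,name=NameNodeInfo"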

Configuring YARN Encryption

Modify the YARN configuration as follows and restart. PASSWD is the JKS password set in the previous post:

ssl.server.keystore.location=${BASE_SECURITY_PATH}/jks/cms.keystore
ssl.server.keystore.password=${PASSWD}
ssl.server.keystore.keypassword=${PASSWD}
ssl.client.truststore.location=${JAVA_HOME}/jre/lib/security/jssecacerts.public
ssl.client.truststore.password=${PASSWD}
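
The ResourceManager can be checked the same way. This is a sketch: 8090 is the default yarn.resourcemanager.webapp.https.address port and ${RESOURCEMANAGER_HOSTNAME} is a placeholder:

# Should return cluster info from the RM REST API over TLS
curl --negotiate -u : --cacert /opt/cloudera/security/x509/cms.pem.hue.public \
    "https://${RESOURCEMANAGER_HOSTNAME}:8090/ws/v1/cluster/info"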

Configuring HBase Encryption

Modify the HBase configuration as follows and restart. PASSWD is the JKS password set in the previous post:

hadoop.ssl.enabled, hbase.ssl.enabled = true
ssl.server.keystore.location = ${BASE_SECURITY_PATH}/jks/cms.keystore
ssl.server.keystore.password=${PASSWD}
ssl.server.keystore.keypassword=${PASSWD}
 
hbase.rest.ssl.enabled = true
hbase.rest.ssl.keystore.store = ${BASE_SECURITY_PATH}/jks/cms.keystore
hbase.rest.ssl.keystore.password = ${PASSWD}
hbase.rest.ssl.keystore.keypassword = ${PASSWD}
 
hbase.thrift.ssl.enabled = true
hbase.thrift.ssl.keystore.store = ${BASE_SECURITY_PATH}/jks/cms.keystore
hbase.thrift.ssl.keystore.password = ${PASSWD}
hbase.thrift.ssl.keystore.keypassword = ${PASSWD}
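
With hbase.rest.ssl.enabled turned on, the REST endpoint makes a convenient check. This is a sketch: 20550 is the usual CDH REST server port (verify it in your configuration), ${HBASE_REST_HOSTNAME} is a placeholder, and --negotiate applies only if the REST server requires SPNEGO:

# Should return the cluster version over TLS
curl --negotiate -u : --cacert /opt/cloudera/security/x509/cms.pem.hue.public \
    "https://${HBASE_REST_HOSTNAME}:20550/version/cluster"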

Configuring HiveServer2 Encryption

Compared with SASL QOP transport encryption, SSL performs better under heavy data transfer, so we use SSL. Modify the Hive configuration as follows and restart. PASSWD is the JKS password set in the previous post; remember to substitute BASE_SECURITY_PATH and PASSWD:

hive.server2.enable.SSL, hive.server2.use.SSL = true
hive.server2.keystore.path = ${BASE_SECURITY_PATH}/jks/cms.keystore
hive.server2.keystore.password =${PASSWD}
hive.server2.webui.keystore.password = ${PASSWD}
hive.server2.webui.keystore.path = ${BASE_SECURITY_PATH}/jks/cms.keystore

At this point, Hive jobs launched from the Oozie Editor in HUE will fail; we can test HiveServer2 directly with beeline:

kdestroy
kinit hive/hive_admin

HIVE_SERVER2_HOSTNAME=192.168.1.3
beeline -u "jdbc:hive2://${HIVE_SERVER2_HOSTNAME}:10000/default;principal=hive/${HIVE_SERVER2_HOSTNAME}@DOMAIN.COM;ssl=True;sslTrustStore=${JAVA_HOME}/jre/lib/security/jssecacerts.public;trustStorePassword=${PASSWD};"

Configuring HUE Encryption

HUE as a Client

HUE acts as a client when it talks to HBase, Oozie, HDFS and YARN, so it must hold the certificates of all of these services in its trust store. Because HUE is written in Python, the trust store is in PEM format.

Run the script below to export a PEM-format certificate on every host:

pscp -h list_all generate_pem_ca.sh /tmp
pssh -h list_all "sudo /usr/bin/bash /tmp/generate_pem_ca.sh"

The script generate_pem_ca.sh is as follows; assign JAVA_HOME and PASSWD (the JKS password) before running it:

#!/bin/bash
 
# Fill in JAVA_HOME and PASSWD (the JKS password) before running.
JAVA_HOME=${JAVA_HOME}
PASSWD=${PASSWD}
HOSTNAME=`hostname -f`
BASE_SECURITY_PATH=/opt/cloudera/security
 
# Export this host's certificate from its JKS keystore (DER format)
sudo ${JAVA_HOME}/bin/keytool -exportcert -keystore ${BASE_SECURITY_PATH}/jks/cms.keystore.${HOSTNAME} -alias cms.${HOSTNAME} -storepass ${PASSWD} -file ${BASE_SECURITY_PATH}/jks/cms.keystore.hue.${HOSTNAME}
 
# Convert DER to PEM and install it under the x509 directory
sudo openssl x509 -inform der -in ${BASE_SECURITY_PATH}/jks/cms.keystore.hue.${HOSTNAME} > /tmp/cms.pem.hue.${HOSTNAME}
sudo cp /tmp/cms.pem.hue.${HOSTNAME} ${BASE_SECURITY_PATH}/x509/
sudo chown cloudera-scm.cloudera-scm ${BASE_SECURITY_PATH}/x509/cms.pem.hue.${HOSTNAME}

Merge the per-host certificates into a single public trust bundle:

for agent in `cat list_agents_hostname`;do scp ${agent}:/tmp/cms.pem.hue.${agent} /tmp;done;

#!/bin/bash
# Build the HUE public trust bundle from the per-host PEM files collected in /tmp
for agent in `cat list_agents_hostname`
do
    pem_list="${pem_list} /tmp/cms.pem.hue.${agent}"
done
cat ${pem_list} > /tmp/cms.pem.hue.public
 
# Distribute the public trust bundle cms.pem.hue.public
pscp -h list /tmp/cms.pem.hue.public /tmp
pssh -h list "sudo cp /tmp/cms.pem.hue.public /opt/cloudera/security/x509"
pssh -h list "sudo chown cloudera-scm.cloudera-scm /opt/cloudera/security/x509/cms.pem.hue.public"

Modify the HUE configuration:

ssl_cacerts = /opt/cloudera/security/x509/cms.pem.hue.public

Because the certificates are self-signed, we also need to set an environment variable via the Hue Service Environment Advanced Configuration Snippet:

REQUESTS_CA_BUNDLE = /opt/cloudera/security/x509/cms.pem.hue.public
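
Because the certificates are self-signed, a quick way to confirm that the bundle actually covers a given host is openssl verify (a self-signed certificate verifies successfully when it is itself present in the -CAfile bundle):

openssl verify -CAfile /opt/cloudera/security/x509/cms.pem.hue.public \
    /opt/cloudera/security/x509/cms.pem.hue.${HOSTNAME}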

Restart the HUE service. Note that Hive jobs in HUE's Oozie Editor will still fail at this point.

HUE as a Server

We do not use HUE's built-in Load Balancer; we use Nginx instead. On the HUE Server, run:

sudo -u cloudera-scm cp /opt/cloudera/security/x509/cms.key.${HOSTNAME} /opt/cloudera/security/x509/cms.key.hue-server
sudo -u cloudera-scm cp /opt/cloudera/security/x509/cms.pem.${HOSTNAME} /opt/cloudera/security/x509/cms.pem.hue-server

Modify the HUE configuration as follows and restart; remember to substitute PASSWD:

Enable TLS/SSL for Hue = true
ssl_certificate = /opt/cloudera/security/x509/cms.pem.hue-server
ssl_private_key = /opt/cloudera/security/x509/cms.key.hue-server
ssl_password = ${PASSWD}

For installing Nginx, refer to any online tutorial. Assuming the Nginx server (192.168.1.1) already has its keys generated as in the previous post, we strip the passphrase from its private key so that Nginx can serve TLS in front of the HUE Server without prompting for it:

sudo cp /opt/cloudera/security/x509/cms.key.${HOSTNAME} /opt/cloudera/security/x509/cms.key.nginx
sudo cp /opt/cloudera/security/x509/cms.key.nginx /opt/cloudera/security/x509/cms.key.nginx.bak
sudo openssl rsa -in /opt/cloudera/security/x509/cms.key.nginx.bak -out /opt/cloudera/security/x509/cms.key.nginx
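
To confirm that the passphrase really was removed (so Nginx can start without prompting), the key can be checked non-interactively:

# Prints "RSA key ok" without asking for a pass phrase
sudo openssl rsa -in /opt/cloudera/security/x509/cms.key.nginx -check -noout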

Configure Nginx as the load balancer:

server {
    server_name 192.168.1.1;
    charset utf-8;
 
    listen 8889 ssl;
    ssl_certificate /opt/cloudera/security/x509/cms.pem.nginx;
    ssl_certificate_key /opt/cloudera/security/x509/cms.key.nginx;
 
    client_max_body_size 0;
    location / {
        proxy_pass https://hue;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
 
    location /static/ {
        alias /opt/cloudera/parcels/CDH/lib/hue/build/static/;
 
        expires 30d;
        add_header Cache-Control public;
    }
}
 
upstream hue {
    ip_hash;
 
    # List all the Hue instances here for high availability.
    server HUE_HOSTNAME1:8888 max_fails=3;
    server HUE_HOSTNAME2:8888 max_fails=3;
    ...
}
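
A minimal smoke test of the proxy, using -k because the certificate is self-signed (replace 192.168.1.1 with your Nginx host):

# Should complete a TLS handshake and return the HUE login page (or a redirect to it)
curl -kv https://192.168.1.1:8889/ -o /dev/null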

Next, configure encryption between the HUE Server and HiveServer2 by appending the following to hue.ini through the Cloudera Manager Admin Console, under Hue Service Advanced Configuration Snippet (Safety Valve) for hue_safety_valve.ini:

[beeswax]
    [[ssl]]
    enabled = true
    cacerts = /opt/cloudera/security/x509/cms.pem.hue.public
    validate = true

Restart the HUE service.

Configuring Impala Encryption

Encrypting Traffic Between HUE and ImpalaD

In this scenario HUE is the client and ImpalaD is the server. As with HUE, we need the server-side certificate material and a client-side public trust bundle:

#!/bin/bash
HOSTNAME=`hostname -f`
BASE_SECURITY_PATH=/opt/cloudera/security
 
# Server-side certificate and key for ImpalaD
sudo -u cloudera-scm cp ${BASE_SECURITY_PATH}/x509/cms.key.${HOSTNAME} ${BASE_SECURITY_PATH}/x509/cms.key.impala
sudo -u cloudera-scm cp ${BASE_SECURITY_PATH}/x509/cms.pem.${HOSTNAME} ${BASE_SECURITY_PATH}/x509/cms.pem.impala
 
# Client-side public trust bundle
sudo -u cloudera-scm cp ${BASE_SECURITY_PATH}/x509/cms.pem.hue.public ${BASE_SECURITY_PATH}/x509/cms.pem.impala.public

Modify the Impala configuration as follows and restart; remember to substitute BASE_SECURITY_PATH and PASSWD:

client_services_ssl_enabled = true
ssl_server_certificate = webserver_certificate_file = ${BASE_SECURITY_PATH}/x509/cms.pem.impala
ssl_private_key = webserver_private_key_file = ${BASE_SECURITY_PATH}/x509/cms.key.impala
ssl_private_key_password_cmd = webserver_private_key_password_cmd = ${PASSWD}
ssl_client_ca_certificate = ${BASE_SECURITY_PATH}/x509/cms.pem.impala.public

Modify the HUE configuration by adding the Impala transport-encryption settings to hue.ini, again under Hue Service Advanced Configuration Snippet (Safety Valve) for hue_safety_valve.ini:

[impala]
    [[ssl]]
    enabled = true
    cacerts = /opt/cloudera/security/x509/cms.pem.hue.public
    validate = true

Restart the HUE service. SQL statements can then be run in the Impala Editor to confirm that everything still works after the encryption changes; an impala-shell check is sketched below.
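
Besides the Impala Editor, an impala-shell connection straight to an ImpalaD exercises the same TLS settings. This is a sketch: 21000 is the default impala-shell port, ${IMPALA_DAEMON_HOSTNAME} is a placeholder, and -k assumes Kerberos authentication:

impala-shell -k --ssl --ca_cert=/opt/cloudera/security/x509/cms.pem.hue.public \
    -i ${IMPALA_DAEMON_HOSTNAME}:21000 -q "SELECT 1"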

Configuring StateStore Transport Encryption

Modify the Impala configuration as follows and restart; remember to substitute BASE_SECURITY_PATH and PASSWD:

webserver_certificate_file = ${BASE_SECURITY_PATH}/x509/cms.pem.impala
webserver_private_key_file = ${BASE_SECURITY_PATH}/x509/cms.key.impala
webserver_private_key_password_cmd = ${PASSWD}

Encrypting Traffic Between ImpalaD and LDAP

In this scenario ImpalaD is the client and LDAP is the server. Modify the Impala configuration as follows and restart; remember to substitute BASE_SECURITY_PATH:

ldap_ca_certificate = ${BASE_SECURITY_PATH}/x509/cms.pem.impala

Configuring HAProxy for Front-End Forwarding and SSL Pass-Through

The reasons for using SSL pass-through, and for not load balancing at the HUE layer, are:

Edit /etc/haproxy/haproxy.cfg:

frontend f_impala_jdbc
    bind 0.0.0.0:21050
    mode tcp
    default_backend b_impala_jdbc
 
backend b_impala_jdbc
    mode tcp
    balance roundrobin
    stick-table type ip size 200k expire 30m
    stick on src
    server b_impala_jdbc_01 ${IMPALA_DAEMON_01}:21050
    server b_impala_jdbc_02 ${IMPALA_DAEMON_02}:21050
    ...

The configuration takes effect after restarting HAProxy. Assuming HAProxy runs on 192.168.1.1, clients can now reach the ImpalaD service through 192.168.1.1:21050 over SSL.
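
Because HAProxy runs in TCP mode (SSL pass-through), the certificate a client sees on 192.168.1.1:21050 is the backend ImpalaD's own certificate, not the proxy's, which can be confirmed with openssl:

# The printed subject should be one of the ImpalaD hosts
openssl s_client -connect 192.168.1.1:21050 \
    -CAfile /opt/cloudera/security/x509/cms.pem.hue.public </dev/null | openssl x509 -noout -subject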

Configuring Oozie Encryption

Modify the configuration as follows and restart; remember to substitute BASE_SECURITY_PATH, PASSWD and JAVA_HOME:

Enable TLS/SSL for Oozie = true
Oozie TLS/SSL Server JKS Keystore File Location = ${BASE_SECURITY_PATH}/jks/cms.keystore
Oozie TLS/SSL Server JKS Keystore File Password = ${PASSWD}
Oozie TLS/SSL Certificate Trust Store File = ${JAVA_HOME}/jre/lib/security/jssecacerts.public
Oozie TLS/SSL Certificate Trust Store Password = ${PASSWD}
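
The Oozie CLI can confirm that the server is reachable over HTTPS. This is a sketch: 11443 is the default Oozie TLS port, ${OOZIE_SERVER_HOSTNAME} is a placeholder, and OOZIE_CLIENT_OPTS points the client JVM at the same trust store used above:

export OOZIE_CLIENT_OPTS="-Djavax.net.ssl.trustStore=${JAVA_HOME}/jre/lib/security/jssecacerts.public"
# Should print: System mode: NORMAL
oozie admin -oozie https://${OOZIE_SERVER_HOSTNAME}:11443/oozie -status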

Configuring HDFS HttpFS Encryption

Modify the configuration as follows and restart; remember to substitute BASE_SECURITY_PATH, PASSWD and JAVA_HOME:

Enable TLS/SSL for HttpFS = true
HttpFS TLS/SSL Server JKS Keystore File Location = ${BASE_SECURITY_PATH}/jks/cms.keystore
HttpFS TLS/SSL Server JKS Keystore File Password = ${PASSWD}
HttpFS TLS/SSL Certificate Trust Store File = ${JAVA_HOME}/jre/lib/security/jssecacerts.public
HttpFS TLS/SSL Certificate Trust Store Password = ${PASSWD}
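
HttpFS keeps serving the WebHDFS REST API, now over HTTPS on its usual port 14000, so a curl call makes a convenient end-to-end check. ${HTTPFS_HOSTNAME} is a placeholder, and --negotiate assumes Kerberos is enabled as elsewhere in this series:

# Should return the authenticated user's home directory
curl --negotiate -u : --cacert /opt/cloudera/security/x509/cms.pem.hue.public \
    "https://${HTTPFS_HOSTNAME}:14000/webhdfs/v1/?op=GETHOMEDIRECTORY"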

Summary

This post walked through configuring transport-layer encryption for the core Hadoop components. After each configuration step, the most direct test is a black-box integration test of the corresponding feature in HUE: after enabling HDFS encryption, browse files with the File Browser; after enabling HBase encryption, run queries against HBase; after enabling HiveServer2 encryption, run queries through the Hive Editor and Impala Editor.
