
Playing with Big Data Computing: Hue

2017-07-09  编程回忆录

Hue version: this article uses Hue 3.11.0.

Hue Installation

First, stop the running Hadoop services:

 $HADOOP_HOME/sbin/stop-all.sh

Modify core-site.xml and add the following, so that Hadoop allows the hue user to act as a proxy user:

        <property>
                <name>hadoop.proxyuser.hue.hosts</name>
                <value>*</value>
        </property>
        <property>
                <name>hadoop.proxyuser.hue.groups</name>
                <value>*</value>
        </property>

Modify hdfs-site.xml and add the following to enable WebHDFS:

        <property>
                <name>dfs.webhdfs.enabled</name>
                <value>true</value>
        </property>

Then restart the Hadoop services:

 $HADOOP_HOME/sbin/start-all.sh &
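
A quick check (my addition, assuming the NameNode web UI is on the default port 50070, matching the webhdfs_url used later) that the dfs.webhdfs.enabled change took effect once the services are back up:

curl "http://localhost:50070/webhdfs/v1/?op=LISTSTATUS"   # should return a JSON listing of the HDFS root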

Hue is built from source, so install the libraries the build needs (here via Homebrew on macOS) and point the build at Homebrew's OpenSSL:

brew install gmp
brew install mysql
brew install openssl
export LDFLAGS=-L/usr/local/opt/openssl/lib && export CPPFLAGS=-I/usr/local/opt/openssl/include

Unpack the archive, enter the unpacked directory, and edit the configuration file:

tar zxvf hue-3.11.0.tgz
cd hue-3.11.0
vim desktop/conf/hue.ini
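
The article does not show the build command itself; assuming a standard source build (the step documented in the Hue 3.x README), running make apps inside hue-3.11.0 creates the build/env virtualenv that the hue and supervisor commands below rely on:

make apps    # assumed build step, not shown in the original; compiles the Hue apps and creates build/env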

Refer to the official guide: http://gethue.com/how-to-configure-hue-in-your-hadoop-cluster/
Modify the [desktop], [hadoop], [beeswax], and [database] sections as follows:

[desktop] configuration:

[desktop]

  # Set this to a random string, the longer the better.
  # This is used for secure hashing in the session store.
  secret_key=

  # Execute this script to produce the Django secret key. This will be used when
  # 'secret_key' is not set.
  ## secret_key_script=

  # Webserver listens on this address and port
  http_host=0.0.0.0
  http_port=8888

  # Time zone name
  time_zone=Asia/Chongqing

  # Enable or disable Django debug mode.
  django_debug_mode=false

  # Enable or disable database debug mode.
  ## database_logging=false

  # Whether to send debug messages from JavaScript to the server logs.
  ## send_dbug_messages=false

  # Enable or disable backtrace for server error
  http_500_debug_mode=false

  # Enable or disable memory profiling.
  ## memory_profiler=false

  # Server email for internal error messages
  ## django_server_email='hue@localhost.localdomain'

  # Email backend
  ## django_email_backend=django.core.mail.backends.smtp.EmailBackend

  # Webserver runs as this user
  server_user=hue
  server_group=hue

  # This should be the Hue admin and proxy user
  default_user=hue

  # This should be the hadoop cluster admin
  default_hdfs_superuser=hdfs
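
secret_key is left empty above, and Hue warns about that at startup. One way (my addition, not from the article) to generate a random string to paste into secret_key:

python -c "import random,string; print(''.join(random.choice(string.ascii_letters+string.digits) for _ in range(48)))"   # prints a 48-character random value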

[hadoop] configuration:

[hadoop]

  # Configuration for HDFS NameNode
  # ------------------------------------------------------------------------
  [[hdfs_clusters]]
    # HA support by using HttpFs

    [[[default]]]
      # Enter the filesystem uri
      fs_defaultfs=hdfs://localhost:9000

      # NameNode logical name.
      ## logical_name=

      # Use WebHdfs/HttpFs as the communication mechanism.
      # Domain should be the NameNode or HttpFs host.
      # Default port is 14000 for HttpFs.
      webhdfs_url=http://localhost:50070/webhdfs/v1

      # Change this if your HDFS cluster is Kerberos-secured
      ## security_enabled=false

      # In secure mode (HTTPS), if SSL certificates from YARN Rest APIs
      # have to be verified against certificate authority
      ## ssl_cert_ca_verify=True

      # Directory of the Hadoop configuration
      hadoop_conf_dir=$HADOOP_HOME/etc/hadoop

  # Configuration for YARN (MR2)
  # ------------------------------------------------------------------------
  [[yarn_clusters]]

    [[[default]]]
      # Enter the host on which you are running the ResourceManager
      resourcemanager_host=localhost

      # The port where the ResourceManager IPC listens on
      ## resourcemanager_port=8032

      # Whether to submit jobs to this cluster
      submit_to=True

      # Resource Manager logical name (required for HA)
      ## logical_name=

      # Change this if your YARN cluster is Kerberos-secured
      ## security_enabled=false

      # URL of the ResourceManager API
      resourcemanager_api_url=http://localhost:8088

      # URL of the ProxyServer API
      ## proxy_api_url=http://localhost:8088

      # URL of the HistoryServer API
      history_server_api_url=http://localhost:19888
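
Optional sanity checks (my addition, assuming the default ports configured above) that the ResourceManager and JobHistory Server REST APIs are reachable:

curl http://localhost:8088/ws/v1/cluster/info     # ResourceManager REST API
curl http://localhost:19888/ws/v1/history/info    # JobHistory Server REST API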

[beeswax] configuration:

[beeswax]

  # Host where HiveServer2 is running.
  # If Kerberos security is enabled, use fully-qualified domain name (FQDN).
  hive_server_host=localhost

  # Port where HiveServer2 Thrift server runs on.
  hive_server_port=10000

  # Hive configuration directory, where hive-site.xml is located
  hive_conf_dir=$HIVE_HOME/conf
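
The Hive editor assumes HiveServer2 is already listening on hive_server_host:hive_server_port; the article does not show starting it. If it is not running yet, the standard Hive commands below start it and check the Thrift port (my addition, assuming Hive lives under $HIVE_HOME):

$HIVE_HOME/bin/hiveserver2 &
$HIVE_HOME/bin/beeline -u jdbc:hive2://localhost:10000 -e "show databases;"   # quick connectivity check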

[[database]] configuration (this subsection sits under [desktop] in hue.ini):

[[database]]
    # Database engine is typically one of:
    # postgresql_psycopg2, mysql, sqlite3 or oracle.
    #
    # Note that for sqlite3, 'name', below is a path to the filename. For other backends, it is the database name
    # Note for Oracle, options={"threaded":true} must be set in order to avoid crashes.
    # Note for Oracle, you can use the Oracle Service Name by setting "host=" and "port=" and then "name=<host>:<port>/<service_name>".
    # Note for MariaDB use the 'mysql' engine.
    engine=mysql
    host=localhost
    port=3306
    user=root
    password=hive123456
    # Execute this script to produce the database password. This will be used when 'password' is not set.
    ## password_script=/path/script
    name=hue

Log in to MySQL as root and execute the following statements in order:

create database hue;
grant all privileges on hue.* to root@'%' identified by 'hive123456';
flush privileges;
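
If you prefer not to open an interactive mysql session, the same statements can be passed on the command line with -e (this assumes a MySQL 5.x server, where GRANT ... IDENTIFIED BY is still valid):

mysql -u root -p -e "create database hue; grant all privileges on hue.* to 'root'@'%' identified by 'hive123456'; flush privileges;"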

Then switch back to the Hue installation directory and run:

build/env/bin/hue syncdb
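
The official Hue 3.x configuration guide also runs a schema migration after syncdb; if syncdb alone leaves tables missing, run (my addition):

build/env/bin/hue migrate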

Finally, we can start the Hue service:

build/env/bin/supervisor  
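
With supervisor running, the web UI should be reachable on the http_host/http_port set in [desktop] (http://localhost:8888 here); the first account created at the login screen becomes the Hue superuser. A quick reachability check:

curl -I http://localhost:8888/   # expect an HTTP response (typically a redirect to the login page)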

Follow-up articles will use Hue to query Hive, HBase, Spark, and more.
