
5 Using Linux Environment Variables

2018-06-05  7125messi

The bash shell uses a feature called environment variables to store information about the shell session and the working environment (which is where the name comes from). This feature lets you store data in memory so that programs or scripts running in the shell can easily access it. It is also a convenient way to store persistent data.

In the bash shell, environment variables come in two types:

- global variables
- local variables

1 Global Variables

The Linux system sets a number of global environment variables when you start a bash session. To view the global environment variables, use the env or printenv command.

[root@server106 ~]# env
TOMCAT_HOME=/opt/tomcat
XDG_SESSION_ID=824
SPARK_HOME=/data/spark
HOSTNAME=server106
HADOOP_LOG_DIR=/opt/hadoopdata/logs/hadoop
TOMCAT_VERSION=8.5.5
TERM=xterm
SHELL=/bin/bash
HADOOP_HOME=/opt/hadoop
HISTSIZE=1000
KYLIN_VERSION=1.5.3
YARN_PID_DIR=/opt/hadoopdata/pids
SSH_CLIENT=192.168.123.44 63602 22
HADOOP_PID_DIR=/opt/hadoopdata/pids
SQOOP_HOME=/opt/sqoop
HCAT_HOME=/opt/hive/hcatalog
SSH_TTY=/dev/pts/0
QT_GRAPHICSSYSTEM_CHECKED=1
HBASE_HOME=/opt/hbase
FLUME_VERSION=1.7.0
PYSPARK_PYTHON=python3
PYSPARK_DRIVER_PYTHON=ipython3
FLUME_HOME=/opt/flume
MAIL=/var/spool/mail/root
PATH=/opt/anaconda3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/opt/hadoop/bin:/opt/hadoop/sbin:/data/spark/bin:/data/spark/sbin:/opt/sqoop/bin:/opt/flume/bin:/opt/hive/bin:/opt/hbase/bin:/opt/kylin/bin:/opt/tomcat/bin:/usr/lib/scala-2.10.4/bin:/opt/maven-3.3.9/bin:/opt/sbt-launcher-packaging-0.13.13/bin:/opt/kafka-manager-1.3.3.4/bin:/root/bin
SPARK_VERSION=1.6.2
HIVE_HOME=/opt/hive
PWD=/root
HADOOP_VERSION=2.7.2
JAVA_HOME=/usr/lib/jvm/java
HADOOP_CONF_DIR=/opt/hadoop/etc/hadoop
LANG=en_US.UTF-8
YARN_LOG_DIR=/opt/hadoopdata/logs/yarn
SQOOP_VERSION=1.4.6
HISTCONTROL=ignoredups
SHLVL=1
HOME=/root
HBASE_VERSION=1.2.2
KYLIN_HOME=/opt/kylin
LOGNAME=root
SSH_CONNECTION=192.168.123.44 63602 192.168.111.106 22
PYSPARK_DRIVER_PYTHON_OPTS=notebook --NotebookApp.open_browser=False --allow-root --NotebookApp.ip='192.168.111.106' --NotebookApp.port=8889
LESSOPEN=||/usr/bin/lesspipe.sh %s
SCALA_HOME=/usr/lib/scala-2.10.4
XDG_RUNTIME_DIR=/run/user/0
DISPLAY=localhost:10.0
HIVE_VERSION=2.1.0
_=/usr/bin/env

To display the value of an individual environment variable, use the printenv command; the env command cannot do this.
You can also display a variable's value with echo. When referencing an environment variable this way, you must place a dollar sign ($) in front of the variable name.

[root@server106 ~]# printenv HADOOP_HOME
/opt/hadoop

[root@server106 ~]# echo $HADOOP_HOME
/opt/hadoop

[root@server106 ~]# ls $HADOOP_HOME
bin  etc  include  lib  libexec  LICENSE.txt  logs  NOTICE.txt  README.txt  sbin  share

2 Local Variables

The Linux system also defines some standard local environment variables by default. You can define your own local variables as well; as you might expect, these are called user-defined local variables.

The set command displays global variables, local variables, and user-defined variables, and it sorts the results alphabetically.

The difference between env/printenv and set is that the first two do not sort the variables, and they do not show local variables or user-defined variables.

$ echo $my_variable
$ my_variable=Hello
$ echo $my_variable
Hello

$ unset my_variable
$ echo $my_variable
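
A user-defined variable like my_variable above is local: set shows it, but printenv does not, and child shells do not inherit it. Exporting it turns it into a global (environment) variable. A minimal sketch of that difference (my_variable is just an illustrative name):

$ my_variable=Hello              # local: visible to set, not to printenv
$ set | grep my_variable
my_variable=Hello
$ printenv my_variable           # prints nothing: not an environment variable yet
$ export my_variable             # promote it to a global environment variable
$ printenv my_variable
Hello
$ bash -c 'echo $my_variable'    # child shells now inherit it
Hello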

3 Default Shell Environment Variables

By default, the bash shell uses a number of specific environment variables to define the system environment. These variables are already set up on your Linux system, so you can simply use them.
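
For example, a few of these defaults can be read back directly; the values below simply match the env output shown earlier in this session:

$ printenv HOME PWD SHELL
/root
/root
/bin/bash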

4 Setting the PATH Environment Variable

The PATH environment variable defines the directories in which the shell searches for commands and programs.

[root@server106 ~]# echo $LANG
en_US.UTF-8

[root@server106 ~]# echo $PATH
/opt/anaconda3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/opt/hadoop/bin:/opt/hadoop/sbin:/data/spark/bin:/data/spark/sbin:/opt/sqoop/bin:/opt/flume/bin:/opt/hive/bin:/opt/hbase/bin:/opt/kylin/bin:/opt/tomcat/bin:/usr/lib/scala-2.10.4/bin:/opt/maven-3.3.9/bin:/opt/sbt-launcher-packaging-0.13.13/bin:/opt/kafka-manager-1.3.3.4/bin:/root/bin

The output shows 20 directories that the shell uses to look up commands and programs. The directories in PATH are separated by colons.
If a command's or program's location is not included in the PATH variable, the shell cannot find it unless you use an absolute path.
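
A long PATH like this is easier to read if you split it on the colons; a small sketch using the standard tr and wc utilities:

$ echo $PATH | tr ':' '\n' | wc -l    # number of directories in PATH
20
$ echo $PATH | tr ':' '\n' | tail -3  # e.g. show the last few entries
/opt/sbt-launcher-packaging-0.13.13/bin
/opt/kafka-manager-1.3.3.4/bin
/root/bin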

For example, the following starts spark-shell; because PATH contains /data/spark/bin:/data/spark/sbin, it starts correctly:

[root@server106 ~]# spark-shell
18/06/05 21:22:02 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/06/05 21:22:02 INFO SecurityManager: Changing view acls to: root
18/06/05 21:22:02 INFO SecurityManager: Changing modify acls to: root
18/06/05 21:22:02 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
18/06/05 21:22:02 INFO HttpServer: Starting HTTP Server
18/06/05 21:22:02 INFO Utils: Successfully started service 'HTTP class server' on port 46654.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.6.2
      /_/

Using Scala version 2.10.5 (OpenJDK 64-Bit Server VM, Java 1.8.0_121)
Type in expressions to have them evaluated.
Type :help for more information.
18/06/05 21:22:05 INFO SparkContext: Running Spark version 1.6.2
18/06/05 21:22:05 INFO SecurityManager: Changing view acls to: root
18/06/05 21:22:05 INFO SecurityManager: Changing modify acls to: root
18/06/05 21:22:05 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
18/06/05 21:22:05 INFO Utils: Successfully started service 'sparkDriver' on port 46844.
18/06/05 21:22:06 INFO Slf4jLogger: Slf4jLogger started
18/06/05 21:22:06 INFO Remoting: Starting remoting
18/06/05 21:22:06 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@192.168.111.106:35474]
18/06/05 21:22:06 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 35474.
18/06/05 21:22:06 INFO SparkEnv: Registering MapOutputTracker
18/06/05 21:22:06 INFO SparkEnv: Registering BlockManagerMaster
18/06/05 21:22:06 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-d4f4157f-1a21-41d3-9dcd-b181f96a6dfc
18/06/05 21:22:06 INFO MemoryStore: MemoryStore started with capacity 3.4 GB
18/06/05 21:22:06 INFO SparkEnv: Registering OutputCommitCoordinator
18/06/05 21:22:06 INFO Utils: Successfully started service 'SparkUI' on port 4040.
18/06/05 21:22:06 INFO SparkUI: Started SparkUI at http://192.168.111.106:4040
18/06/05 21:22:06 INFO Executor: Starting executor ID driver on host localhost
18/06/05 21:22:06 INFO Executor: Using REPL class URI: http://192.168.111.106:46654
18/06/05 21:22:06 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 35459.
18/06/05 21:22:06 INFO NettyBlockTransferService: Server created on 35459
18/06/05 21:22:06 INFO BlockManager: external shuffle service port = 7337
18/06/05 21:22:06 INFO BlockManagerMaster: Trying to register BlockManager
18/06/05 21:22:06 INFO BlockManagerMasterEndpoint: Registering block manager localhost:35459 with 3.4 GB RAM, BlockManagerId(driver, localhost, 35459)
18/06/05 21:22:06 INFO BlockManagerMaster: Registered BlockManager
18/06/05 21:22:12 INFO EventLoggingListener: Logging events to hdfs://meihui/spark/eventLog/local-1528204926654.snappy
18/06/05 21:22:12 INFO SparkILoop: Created spark context..
Spark context available as sc.
18/06/05 21:22:13 INFO HiveContext: Initializing execution hive, version 1.2.1
18/06/05 21:22:13 INFO ClientWrapper: Inspected Hadoop version: 2.6.0
18/06/05 21:22:13 INFO ClientWrapper: Loaded org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.6.0
18/06/05 21:22:13 INFO HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
18/06/05 21:22:13 INFO ObjectStore: ObjectStore, initialize called
18/06/05 21:22:14 INFO Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
18/06/05 21:22:14 INFO Persistence: Property datanucleus.cache.level2 unknown - will be ignored
18/06/05 21:22:14 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
18/06/05 21:22:14 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
18/06/05 21:22:18 INFO ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
18/06/05 21:22:18 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
18/06/05 21:22:18 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
18/06/05 21:22:21 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
18/06/05 21:22:21 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
18/06/05 21:22:22 INFO MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
18/06/05 21:22:22 INFO ObjectStore: Initialized ObjectStore
18/06/05 21:22:22 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
18/06/05 21:22:22 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
18/06/05 21:22:22 INFO HiveMetaStore: Added admin role in metastore
18/06/05 21:22:22 INFO HiveMetaStore: Added public role in metastore
18/06/05 21:22:23 INFO HiveMetaStore: No user is added in admin role, since config is empty
18/06/05 21:22:23 INFO HiveMetaStore: 0: get_all_databases
18/06/05 21:22:23 INFO audit: ugi=root  ip=unknown-ip-addr  cmd=get_all_databases   
18/06/05 21:22:23 INFO HiveMetaStore: 0: get_functions: db=default pat=*
18/06/05 21:22:23 INFO audit: ugi=root  ip=unknown-ip-addr  cmd=get_functions: db=default pat=* 
18/06/05 21:22:23 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table.
18/06/05 21:22:23 INFO SessionState: Created local directory: /tmp/root
18/06/05 21:22:23 INFO SessionState: Created local directory: /tmp/0d77f204-0d10-4ab0-bafe-36bb5da85aeb_resources
18/06/05 21:22:23 INFO SessionState: Created HDFS directory: /tmp/hive/root/0d77f204-0d10-4ab0-bafe-36bb5da85aeb
18/06/05 21:22:23 INFO SessionState: Created local directory: /tmp/root/0d77f204-0d10-4ab0-bafe-36bb5da85aeb
18/06/05 21:22:23 INFO SessionState: Created HDFS directory: /tmp/hive/root/0d77f204-0d10-4ab0-bafe-36bb5da85aeb/_tmp_space.db
18/06/05 21:22:23 INFO HiveContext: default warehouse location is /user/hive/warehouse
18/06/05 21:22:23 INFO HiveContext: Initializing HiveMetastoreConnection version 1.2.1 using Spark classes.
18/06/05 21:22:23 INFO ClientWrapper: Inspected Hadoop version: 2.6.0
18/06/05 21:22:23 INFO ClientWrapper: Loaded org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.6.0
18/06/05 21:22:24 INFO HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
18/06/05 21:22:24 INFO ObjectStore: ObjectStore, initialize called
18/06/05 21:22:24 INFO Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
18/06/05 21:22:24 INFO Persistence: Property datanucleus.cache.level2 unknown - will be ignored
18/06/05 21:22:24 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
18/06/05 21:22:24 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
18/06/05 21:22:25 INFO ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
18/06/05 21:22:26 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
18/06/05 21:22:26 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
18/06/05 21:22:26 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
18/06/05 21:22:26 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
18/06/05 21:22:26 INFO Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
18/06/05 21:22:26 INFO MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
18/06/05 21:22:26 INFO ObjectStore: Initialized ObjectStore
18/06/05 21:22:26 INFO HiveMetaStore: Added admin role in metastore
18/06/05 21:22:26 INFO HiveMetaStore: Added public role in metastore
18/06/05 21:22:26 INFO HiveMetaStore: No user is added in admin role, since config is empty
18/06/05 21:22:26 INFO HiveMetaStore: 0: get_all_databases
18/06/05 21:22:26 INFO audit: ugi=root  ip=unknown-ip-addr  cmd=get_all_databases   
18/06/05 21:22:26 INFO HiveMetaStore: 0: get_functions: db=default pat=*
18/06/05 21:22:26 INFO audit: ugi=root  ip=unknown-ip-addr  cmd=get_functions: db=default pat=* 
18/06/05 21:22:26 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table.
18/06/05 21:22:26 INFO SessionState: Created local directory: /tmp/22f6e7fa-94d5-4ada-8157-ec55ef23a6be_resources
18/06/05 21:22:26 INFO SessionState: Created HDFS directory: /tmp/hive/root/22f6e7fa-94d5-4ada-8157-ec55ef23a6be
18/06/05 21:22:26 INFO SessionState: Created local directory: /tmp/root/22f6e7fa-94d5-4ada-8157-ec55ef23a6be
18/06/05 21:22:26 INFO SessionState: Created HDFS directory: /tmp/hive/root/22f6e7fa-94d5-4ada-8157-ec55ef23a6be/_tmp_space.db
18/06/05 21:22:26 INFO SparkILoop: Created sql context (with Hive support)..
SQL context available as sqlContext.

scala> 

If the shell cannot find the specified command or program, it produces an error message:

$ myprog
-bash: myprog: command not found

The directory where the application keeps its executables is not among the directories in the PATH environment variable. The fix is to make sure the PATH environment variable contains all of the directories in which your applications reside.
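
Before changing PATH it can help to confirm how (or whether) the shell resolves a name. type and command -v are standard ways to do that in bash (myprog is the same hypothetical program as in the error above):

$ type myprog              # reports "not found", since no PATH directory contains it
$ command -v spark-shell   # prints the file the shell actually runs
/data/spark/bin/spark-shell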

Adding a new application directory to PATH:
$ echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
$ PATH=$PATH:/home/christine/Scripts
$ echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/home/christine/Scripts
$ myprog
The factorial of 5 is 120.

5 Locating System Environment Variables

When you log into a Linux system, the bash shell starts as a login shell. A login shell reads commands from five different startup files:

- /etc/profile
- $HOME/.bash_profile
- $HOME/.bashrc
- $HOME/.bash_login
- $HOME/.profile

5.1 Login Shell

(1) The /etc/profile file is the main default startup file for the bash shell on the system. Every user on the system executes this startup file when logging in.

Each distribution's /etc/profile file has different settings and commands. For example:

The Ubuntu distribution's /etc/profile file references a file called /etc/bash.bashrc, which contains the system environment variables.

The CentOS distribution's /etc/profile file does not reference such a file.

On an Ubuntu Linux system, the /etc/profile.d directory contains the following files:

$ ls -l /etc/profile.d
total 12
-rw-r--r-- 1 root root 40 Apr 15 06:26 appmenu-qt5.sh
-rw-r--r-- 1 root root 663 Apr 7 10:10 bash_completion.sh
-rw-r--r-- 1 root root 1947 Nov 22 2013 vte.sh

On a CentOS system, the /etc/profile.d directory contains even more files:

$ ls -l /etc/profile.d
total 80
-rw-r--r--. 1 root root 1127 Mar 5 07:17 colorls.csh
-rw-r--r--. 1 root root 1143 Mar 5 07:17 colorls.sh
-rw-r--r--. 1 root root 92 Nov 22 2013 cvs.csh
-rw-r--r--. 1 root root 78 Nov 22 2013 cvs.sh
-rw-r--r--. 1 root root 192 Feb 24 09:24 glib2.csh
-rw-r--r--. 1 root root 192 Feb 24 09:24 glib2.sh
-rw-r--r--. 1 root root 58 Nov 22 2013 gnome-ssh-askpass.csh
-rw-r--r--. 1 root root 70 Nov 22 2013 gnome-ssh-askpass.sh
-rwxr-xr-x. 1 root root 373 Sep 23 2009 kde.csh
-rwxr-xr-x. 1 root root 288 Sep 23 2009 kde.sh
-rw-r--r--. 1 root root 1741 Feb 20 05:44 lang.csh
-rw-r--r--. 1 root root 2706 Feb 20 05:44 lang.sh
-rw-r--r--. 1 root root 122 Feb 7 2007 less.csh
-rw-r--r--. 1 root root 108 Feb 7 2007 less.sh
-rw-r--r--. 1 root root 976 Sep 23 2011 qt.csh
-rw-r--r--. 1 root root 912 Sep 23 2011 qt.sh
-rw-r--r--. 1 root root 2142 Mar 13 15:37 udisks-bash-completion.sh
-rw-r--r--. 1 root root 97 Apr 5 2012 vim.csh
-rw-r--r--. 1 root root 269 Apr 5 2012 vim.sh
-rw-r--r--. 1 root root 169 May 20 2009 which2.sh
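
Both distributions pull these per-application settings into every login shell the same way: /etc/profile loops over /etc/profile.d and sources each *.sh file it finds. The loop looks roughly like this (a sketch of the common pattern, not the verbatim contents of any particular /etc/profile):

for i in /etc/profile.d/*.sh ; do
    if [ -r "$i" ]; then
        . "$i"        # source each script so its settings reach the login shell
    fi
done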

(2) Startup files in the $HOME directory
These provide a user-specific startup file that defines the environment variables used by that user. Most Linux distributions use only one or two of the following four startup files:

- $HOME/.bash_profile
- $HOME/.bashrc
- $HOME/.bash_login
- $HOME/.profile

Note: Linux distributions differ a great deal in their environment files. Not every user has all of the $HOME files listed in this section; for example, some users may only have a $HOME/.bash_profile file. That is normal.

$HOME refers to a user's home directory. It serves the same purpose as the tilde (~).
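
A quick check of that equivalence (the /root value matches the HOME variable in the env output earlier):

$ echo $HOME
/root
$ echo ~
/root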

The shell runs the first of the following files that it finds, in this order, and ignores the rest:

$HOME/.bash_profile
$HOME/.bash_login
$HOME/.profile

Note that $HOME/.bashrc is not in this list. That is because the .bashrc file is usually run from one of the other files.

The contents of the .bash_profile file on a CentOS Linux system are shown below.
The .bash_profile startup file first checks whether the $HOME directory also contains a startup file called .bashrc. If it does, it runs the commands in that file first.

[root@server106 ~]# cat $HOME/.bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin

export PATH

5.2 Interactive Shell Processes

If your bash shell is not started when you log into the system (for example, you type bash at a command-line prompt), the shell you start is called an interactive shell. An interactive shell does not behave like a login shell, but it still provides a command-line prompt for entering commands.

If bash is started as an interactive shell, it does not read the /etc/profile file; it only checks the .bashrc file in the user's HOME directory.

The .bashrc file does two things: it checks for the system-wide bashrc file in the /etc directory, and it gives the user a place to define personal command aliases and private script functions.

So an interactive tool such as spark-shell needs its application settings written into the .bashrc file in the HOME directory, for example:

export SPARK_HOME=/data/spark
export SPARK_VERSION=1.6.2
export PATH=$PATH:/data/spark/bin:/data/spark/sbin

On a CentOS Linux system, this file looks like the following:

[root@server106 ~]# cat $HOME/.bashrc
# .bashrc

# User specific aliases and functions

alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'

# Source global definitions
if [ -f /etc/bashrc ]; then
    . /etc/bashrc
fi
export JAVA_HOME=/usr/lib/jvm/java
export HADOOP_HOME=/opt/hadoop
export HADOOP_VERSION=2.7.2
export HADOOP_CONF_DIR=/opt/hadoop/etc/hadoop
export HADOOP_PID_DIR=/opt/hadoopdata/pids
export YARN_PID_DIR=/opt/hadoopdata/pids
export HADOOP_LOG_DIR=/opt/hadoopdata/logs/hadoop
export YARN_LOG_DIR=/opt/hadoopdata/logs/yarn
export PATH=$PATH:/opt/hadoop/bin:/opt/hadoop/sbin
export SPARK_HOME=/data/spark
export SPARK_VERSION=1.6.2
export PATH=$PATH:/data/spark/bin:/data/spark/sbin
export SQOOP_VERSION=1.4.6
export SQOOP_HOME=/opt/sqoop
export PATH=$PATH:/opt/sqoop/bin
export FLUME_VERSION=1.7.0
export FLUME_HOME=/opt/flume
export PATH=$PATH:/opt/flume/bin
export HIVE_HOME=/opt/hive
export HIVE_VERSION=2.1.0
export PATH=$PATH:/opt/hive/bin
export HBASE_HOME=/opt/hbase
export HBASE_VERSION=1.2.2
export PATH=$PATH:$HBASE_HOME/bin
export KYLIN_HOME=/opt/kylin
export HCAT_HOME=/opt/hive/hcatalog
export KYLIN_VERSION=1.5.3
export PATH=$PATH:/opt/kylin/bin
export TOMCAT_HOME=/opt/tomcat
export TOMCAT_VERSION=8.5.5
export PATH=$PATH:/opt/tomcat/bin
#export LD_LIBRARY_PATH=/opt/hadoop/lib/native/lzo
export SCALA_HOME=/usr/lib/scala-2.10.4
export PATH=$PATH:/usr/lib/scala-2.10.4/bin:/opt/maven-3.3.9/bin:/opt/sbt-launcher-packaging-0.13.13/bin:/opt/kafka-manager-1.3.3.4/bin

# added by Anaconda3 4.0.0 installer
export PATH="/opt/anaconda3/bin:$PATH"
export PYSPARK_PYTHON="python3"
export PYSPARK_DRIVER_PYTHON="ipython3"
export PYSPARK_DRIVER_PYTHON_OPTS="notebook --NotebookApp.open_browser=False --allow-root --NotebookApp.ip='192.168.111.106' --NotebookApp.port=8889"
#export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/lib/oracle/11.2/client64/lib


alias hals='hdfs dfs -ls'
alias hatail='hdfs dfs -tail'
alias hacount='hdfs dfs -count'
alias hacat='hdfs dfs -cat'
alias startHS='nohup hive --service hiveserver2 > /opt/hive-data/logs/hs2/hs2.log 2>&1 &'
alias pglog='cd /var/lib/pgsql/9.4/data/pg_log'
alias yarnlog='yarn logs -applicationId'
alias hadu='hdfs dfs -du -h'

5.3 Persisting Environment Variables

After modifying the environment variables in .bashrc, reload the file so the changes take effect:

source ~/.bashrc

For global environment variables (variables that every user on the Linux system needs), you may be tempted to put new or modified settings in the /etc/profile file, but that is not a good idea: if you upgrade your distribution, this file gets updated as well, and all of your customized settings are lost.

It is better to create a file ending in .sh in the /etc/profile.d directory and place all new or modified global environment variable settings in that file.
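
For example, a small drop-in file could look like the following; the file name bigdata.sh is just an illustration, and any file ending in .sh under /etc/profile.d will be picked up:

# /etc/profile.d/bigdata.sh  (hypothetical example file)
export SPARK_HOME=/data/spark
export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin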

On most distributions, the place to store an individual user's persistent bash shell variables is the $HOME/.bashrc file. This applies to all types of shell processes.

You can also put your own alias definitions in the $HOME/.bashrc startup file to make them permanent.

Viewing and modifying the PATH environment variable on Linux

View PATH: echo $PATH
The methods below use adding the mongodb server's bin directory as an example.

Method 1:

export PATH=/usr/local/mongodb/bin:$PATH
# You can check the result afterwards with echo $PATH.

Takes effect: immediately
Duration: temporary; it only applies to the current terminal window, and the original PATH is restored once that window is closed
Scope: current user only

Method 2:

Edit the .bashrc file:
nano ~/.bashrc
# Add the following line at the end:
export PATH=/usr/local/mongodb/bin:$PATH

Takes effect (either of the following works):
1. Close the current terminal window and open a new one
2. Run "source ~/.bashrc" to apply it immediately
Duration: permanent
Scope: current user only

Method 3:

Edit the profile file:
nano /etc/profile
# Find the line that sets PATH and add:
export PATH=/usr/local/mongodb/bin:$PATH

Takes effect: after a system restart
Duration: permanent
Scope: all users

Method 4:

Edit the environment file:
nano /etc/environment
# In the line PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games", append ":/usr/local/mongodb/bin"

Takes effect: after a system restart
Duration: permanent
Scope: all users
