About Activation Functions: All You Need to Know

2021-03-09  IntoTheVoid

Activation Functions

Why activation functions matter: an activation function adds a non-linear connection between layers, increasing the model's expressive power. Without non-linearity between layers, even a very deep stack of layers is equivalent to a single layer.
For example, given f(x)=2x+3 \quad and \quad g(x)=5x+1, composing these two linear functions just yields another linear function: f(g(x))=2(5x+1)+3=10x+5.
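
To see this concretely, here is a minimal NumPy sketch (my addition, not from the original article): composing the two linear functions above collapses into a single linear function, while putting a non-linearity between them does not.

# Composing two linear functions is still linear (illustrative sketch)
import numpy as np

def f(x):
    return 2 * x + 3   # f(x) = 2x + 3

def g(x):
    return 5 * x + 1   # g(x) = 5x + 1

x = np.linspace(-3, 3, 7)
print(np.allclose(f(g(x)), 10 * x + 5))   # True: f(g(x)) = 10x + 5, another linear function

# Inserting a non-linearity (here ReLU) between the two breaks this collapse:
h = f(np.maximum(g(x), 0))                # no single linear function reproduces h for all x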

1.Sigmoid

Advantages

  • Smooth gradient: prevents jumps in the output values
  • Output bounded between 0 and 1: normalizes the output of each neuron
  • Clear predictions: for X greater than 2 or less than -2, the Y value (prediction) is pushed toward the edge of the curve, very close to 1 or 0.

Disadvantages

  • Vanishing gradient: the function is only sensitive to inputs roughly between -4 and 4; for very high or very low X values the prediction barely changes, so the gradient vanishes. This can cause the network to stop learning, or to learn too slowly to become accurate.
  • Computationally expensive
  • Not zero-centered: cannot readily model input data that mixes strongly negative, neutral, and strongly positive values.

f(x)=\sigma(x)=\frac{1}{1+e^{-x}} \\ f^{\prime}(x)=f(x)(1-f(x))

[Figure: Sigmoid]
# Sigmoid function
import numpy as np
def sigmoid_function(x):
    return 1 / (1 + np.exp(-x))

# Sigmoid derivative
def sigmoid_derivative(x):
    return sigmoid_function(x) * (1 - sigmoid_function(x))
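
As a quick check of the vanishing-gradient point above (my addition, using the sigmoid_function and sigmoid_derivative defined here): the derivative peaks at 0.25 at x = 0 and is tiny for large |x|.

# Sigmoid derivative check: gradients vanish for large |x|
for x in [-10, -2, 0, 2, 10]:
    print(x, sigmoid_function(x), sigmoid_derivative(x))
# x = 0 gives the maximum derivative 0.25; x = +/-10 gives roughly 4.5e-05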

2.Tanh

Advantages

  • Zero-centered: makes it easier to model input data that mixes strongly negative, neutral, and strongly positive values.
  • Smooth gradient: prevents jumps in the output values
  • Output bounded between -1 and 1: normalizes the output of each neuron

Disadvantages

  • Vanishing gradient: the function is only sensitive to inputs roughly between -2 and 2; for very high or very low X values the prediction barely changes, so the gradient vanishes. This can cause the network to stop learning, or to learn too slowly to become accurate.
  • Computationally expensive

f(x)=\tanh (x)=\frac{\left(e^{x}-e^{-x}\right)}{\left(e^{x}+e^{-x}\right)} \\ f^{\prime}(x)=1-f(x)^{2}

[Figure: Tanh]
# Tanh function
import numpy as np
def tanh_function(x):
    return (2 / (1 + np.exp(-2 * x))) - 1

# Tanh derivative
def tanh_derivative(x):
    return 1 - (tanh_function(x))**2
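
As a sanity check (my addition), the closed form above matches NumPy's built-in np.tanh, which is the more idiomatic choice in practice:

# Verify tanh_function and tanh_derivative against np.tanh (illustrative)
x = np.linspace(-5, 5, 11)
print(np.allclose(tanh_function(x), np.tanh(x)))            # True
print(np.allclose(tanh_derivative(x), 1 - np.tanh(x)**2))   # True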

3.ReLU

Advantages

  • Computationally cheap: allows the network to converge faster
  • Non-linear piecewise function: although it looks like a linear function, ReLU has a derivative and allows backpropagation

Disadvantages

  • Dying ReLU problem: when inputs approach zero or are negative, the gradient of the function becomes zero, so the network cannot backpropagate through those units and cannot learn.

f(x)=\left\{\begin{array}{ll} 0 & \text { for } x \leq 0 \\ x & \text { for } x>0 \end{array}\right. \\ f^{\prime}(x)=\left\{\begin{array}{ll} 0 & \text { for } x \leq 0 \\ 1 & \text { for } x>0 \end{array}\right.

[Figure: ReLU]
# ReLU function
def relu_function(x):
    if x < 0:
        return 0
    else:
        return x

# ReLU derivative (defined as 0 at x = 0, matching the formula above)
def relu_derivative(x):
    if x > 0:
        return 1
    else:
        return 0
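
The scalar functions above handle one number at a time; here is a vectorized sketch (my addition, with hypothetical names relu_vec / relu_derivative_vec) that behaves the same elementwise on NumPy arrays:

# Vectorized ReLU and derivative (equivalent to the scalar versions above)
import numpy as np

def relu_vec(x):
    return np.maximum(x, 0)

def relu_derivative_vec(x):
    return (x > 0).astype(float)   # 0 for x <= 0, 1 for x > 0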

4.Leaky ReLU

Advantages

  • Prevents the Dying ReLU problem: this variant of ReLU has a small positive slope in the negative region, so it can still backpropagate even for negative input values
  • Computationally cheap: allows the network to converge faster
  • Non-linear piecewise function: although it looks like a linear function, it has a derivative and allows backpropagation

Disadvantages

  • Inconsistent results: Leaky ReLU does not provide consistent predictions for negative input values.

f(x)=\left\{\begin{array}{ll} 0.01x & \text { for } x \leq 0 \\ x & \text { for } x>0 \end{array}\right. \\ f^{\prime}(x)=\left\{\begin{array}{ll} 0.01 & \text { for } x \leq 0 \\ 1 & \text { for } x>0 \end{array}\right.

[Figure: Leaky ReLU]
# Leaky ReLU function
def leaky_relu_function(x):
    if x >= 0:
        return x
    else:
        return 0.01 * x

# Leaky ReLU derivative
def leaky_relu_derivative(x):
    if x >= 0:
        return 1
    else:
        return 0.01
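
A vectorized Leaky ReLU sketch for NumPy arrays (my addition, with hypothetical names); the 0.01 slope matches the formula above:

# Vectorized Leaky ReLU (equivalent to the scalar versions above)
import numpy as np

def leaky_relu_vec(x, slope=0.01):
    return np.where(x >= 0, x, slope * x)

def leaky_relu_derivative_vec(x, slope=0.01):
    return np.where(x >= 0, 1.0, slope)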

5.Parametric ReLU

Advantages

  • Allows the negative slope to be learned: unlike Leaky ReLU, this function takes the slope of the negative part as a parameter α, so backpropagation can learn the most suitable value of α (a short sketch of the corresponding α-gradient follows the code below).

Disadvantages

  • May perform differently for different problems.

f(x)=\left\{\begin{array}{ll} \alpha x & \text { for } x \leq 0 \\ x & \text { for } x>0 \end{array}\right. \\ f^{\prime}(x)=\left\{\begin{array}{ll} \alpha & \text { for } x \leq 0 \\ 1 & \text { for } x>0 \end{array}\right.

[Figure: Parametric ReLU (PReLU)]
# Parametric ReLU function
def parametric_function(x, alpha):
    if x >= 0:
        return x
    else:
        return alpha*x

# Parametric ReLU derivative (with respect to x)
def parametric_derivative(x, alpha):
    if x >= 0:
        return 1
    else:
        return alpha
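
Because α itself is learned, backpropagation also needs the gradient of the output with respect to α; here is a minimal sketch alongside the functions above (my addition, hypothetical helper name):

# Gradient of Parametric ReLU with respect to alpha (needed to learn alpha)
def parametric_derivative_alpha(x, alpha):
    if x >= 0:
        return 0    # the positive branch does not depend on alpha
    else:
        return x    # d(alpha * x) / d(alpha) = x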

6.Softmax

Advantages

  • Handles multiple classes: other activation functions handle only a single class; Softmax normalizes the output for each class to between 0 and 1, giving the probability that the input belongs to a particular class.
  • Commonly used for output neurons: Softmax is usually used only in the output layer, in networks that need to classify inputs into multiple categories.

\sigma\left(z_{i}\right)=\frac{e^{z_{i}}}{\sum_{j=1}^{K} e^{z_{j}}}

Derivation:
\begin{array}{l} \text{Define} \quad S_{i}=\sigma\left(x_{i}\right), \quad g_{i}=e^{x_{i}} \quad \text{and} \quad h_{i}=\sum_{k=1}^{N} e^{x_{k}} \end{array}


\begin{aligned} \text{Main formula} &: \frac{\partial S_{i}}{\partial x_{j}}=\frac{\frac{\partial g_{i}}{\partial x_{j}} h_{i}-\frac{\partial h_{i}}{\partial x_{j}} g_{i}}{\left[h_{i}\right]^{2}} \\ \text{Sub-formula (1)} &: \frac{\partial g_{i}}{\partial x_{j}}=\left\{\begin{array}{ll} e^{x_{j}}, & \text { if } i=j \\ 0, & \text { otherwise } \end{array}\right. \\ \text{Sub-formula (2)} &: \frac{\partial h_{i}}{\partial x_{j}}=\frac{\partial\left(e^{x_{1}}+e^{x_{2}}+\ldots+e^{x_{N}}\right)}{\partial x_{j}}=e^{x_{j}} \end{aligned}


\begin{aligned} \text{if } i=j:\\ \frac{\partial \frac{e^{x_{i}}}{\sum_{k=1}^{N} e^{x_{k}}}}{\partial x_{j}} &=\frac{e^{x_{j}} \left[\sum_{k=1}^{N} e^{x_{k}}\right]-e^{x_{j}} e^{x_{i}}}{\left[\sum_{k=1}^{N} e^{x_{k}}\right]^{2}} \\ &=\frac{e^{x_{j}}\left(\left[\sum_{k=1}^{N} e^{x_{k}}\right] -e^{x_{i}}\right)}{\left[\sum_{k=1}^{N} e^{x_{k}}\right]^{2}} \\ &=\frac{e^{x_{j}}}{\sum_{k=1}^{N} e^{x_{k}}} \frac{\left[\sum_{k=1}^{N} e^{x_{k}}\right]-e^{x_{i}}}{\sum_{k=1}^{N} e^{x_{k}}} \\ &=\frac{e^{x_{j}}}{\sum_{k=1}^{N} e^{x_{k}}}\left(\frac{\sum_{k=1}^{N} e^{x_{k}}}{\sum_{k=1}^{N} e^{x_{k}}}-\frac{e^{x_{i}}}{\sum_{k=1}^{N} e^{x_{k}}}\right) \\ &=\frac{e^{x_{j}}}{\sum_{k=1}^{N} e^{x_{k}}}\left(1-\frac{e^{x_{i}}}{\sum_{k=1}^{N} e^{x_{k}}}\right) \\ &=\sigma\left(x_{j}\right)\left(1-\sigma\left(x_{i}\right)\right) \\ \text{else } i \ne j:\\ \frac{\partial \frac{e^{x_{i}}}{\sum_{k=1}^{N} e^{x_{k}}}}{\partial x_{j}} &=\frac{0-e^{x_{j}} e^{x_{i}}}{\left[\sum_{k=1}^{N} e^{x_{k}}\right]^{2}} \\ &=0-\frac{e^{x_{j}}}{\sum_{k=1}^{N} e^{x_{k}}} \frac{e^{x_{i}}}{\sum_{k=1}^{N} e^{x_{k}}} \\ &=0-\sigma\left(x_{j}\right) \sigma\left(x_{i}\right) \end{aligned}

To make the code easier to write, this can be rewritten in the form of a Jacobian matrix.

Background: the Jacobian matrix

Suppose f_1, f_2, ..., f_n are all functions of x_1, x_2, ..., x_m, and the partial derivatives with respect to each variable exist. Then T is defined as:
T=\frac{\partial\left(f_{1}, f_{2}, \cdots, f_{n}\right)}{\partial\left(x_{1}, x_{2}, \cdots, x_{m}\right)}=\left(\begin{array}{cccc} \frac{\partial f_{1}}{\partial x_{1}} & \frac{\partial f_{1}}{\partial x_{2}} & \cdots & \frac{\partial f_{1}}{\partial x_{m}} \\ \frac{\partial f_{2}}{\partial x_{1}} & \frac{\partial f_{2}}{\partial x_{2}} & \cdots & \frac{\partial f_{2}}{\partial x_{m}} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial f_{n}}{\partial x_{1}} & \frac{\partial f_{n}}{\partial x_{2}} & \cdots & \frac{\partial f_{n}}{\partial x_{m}} \end{array}\right)

\begin{aligned} \frac{\partial S}{\partial \mathbf{x}} & = \left[\begin{array}{ccc} \frac{\partial S_{1}}{\partial x_{1}} & \cdots & \frac{\partial S_{1}}{\partial x_{N}} \\ \cdots & \frac{\partial S_{i}}{\partial x_{j}} & \cdots \\ \frac{\partial S_{N}}{\partial x_{1}} & \cdots & \frac{\partial S_{N}}{\partial x_{N}} \end{array}\right] \\ \frac{\partial S}{\partial \mathbf{x}} & = \left[\begin{array}{ccc} \sigma\left(x_{1}\right)-\sigma\left(x_{1}\right) \sigma\left(x_{1}\right) & \ldots & 0-\sigma\left(x_{1}\right) \sigma\left(x_{N}\right) \\ \ldots & \sigma\left(x_{j}\right)-\sigma\left(x_{j}\right) \sigma\left(x_{i}\right) & \ldots \\ 0-\sigma\left(x_{N}\right) \sigma\left(x_{1}\right) & \ldots & \sigma\left(x_{N}\right)-\sigma\left(x_{N}\right) \sigma\left(x_{N}\right) \end{array}\right] \\ \frac{\partial S}{\partial \mathbf{x}} & = \left[\begin{array}{ccc} \sigma\left(x_{1}\right) & \ldots & 0 \\ \ldots & \sigma\left(x_{j}\right) & \ldots \\ 0 & \ldots & \sigma\left(x_{N}\right) \end{array}\right]-\left[\begin{array}{ccc} \sigma\left(x_{1}\right) \sigma\left(x_{1}\right) & \ldots & \sigma\left(x_{1}\right) \sigma\left(x_{N}\right) \\ \ldots & \sigma\left(x_{j}\right) \sigma\left(x_{i}\right) & \ldots \\ \sigma\left(x_{N}\right) \sigma\left(x_{1}\right) & \ldots & \sigma\left(x_{N}\right) \sigma\left(x_{N}\right) \end{array}\right] \\ \frac{\partial S}{\partial \mathbf{x}} & = \operatorname{diag}(\sigma(\mathbf{x})) - \sigma(\mathbf{x}) \cdot \sigma(\mathbf{x})^{T} \end{aligned}

# Softmax function
import numpy as np
def softmax_function(arr):
    """
    input: an array of raw scores
    return: an array of probabilities that sums to 1
    """
    exps = np.exp(arr)
    sums = np.sum(exps)
    return np.divide(exps, sums)

# Softmax derivative (Jacobian matrix)
def softmax_derivative(arr):
    """
    input: an array already passed through softmax
    output: n*n matrix, n = len(arr)
    """
    s = arr.reshape(-1, 1)
    return np.diagflat(s) - np.dot(s, s.T)
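
A quick usage check (my addition): the outputs sum to 1, and the Jacobian returned by softmax_derivative matches the closed form diag(σ(x)) - σ(x)σ(x)^T derived above. Note that softmax_derivative expects the softmax output, not the raw scores.

# Check the softmax Jacobian against the closed form (illustrative)
x = np.array([1.0, 2.0, 3.0])
s = softmax_function(x)
print(np.sum(s))                                        # 1.0
J = softmax_derivative(s)
print(np.allclose(J, np.diagflat(s) - np.outer(s, s)))  # True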