Different regularization methods

2016-12-11  阿o醒

Different regularization methods have different effects on the learning process.

For example,

L2 regularization penalizes large weight magnitudes (via the squared norm of the weights), shrinking all weights toward zero.

L1 regularization penalizes the absolute values of the weights, so any weight that is not exactly zero incurs a cost; this drives many weights to exactly zero (sparsity). The sketch below contrasts the two penalties.
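
To make the contrast concrete, here is a minimal NumPy sketch (my own illustration, not code from the original note) of the two penalty terms and the gradients they contribute to the weight update:

```python
import numpy as np

def l2_penalty(w, lam=1e-2):
    """0.5 * lam * ||w||^2; its gradient lam * w shrinks large weights hardest."""
    return 0.5 * lam * np.sum(w ** 2), lam * w

def l1_penalty(w, lam=1e-2):
    """lam * ||w||_1; its gradient lam * sign(w) charges every nonzero weight equally."""
    return lam * np.sum(np.abs(w)), lam * np.sign(w)

w = np.array([3.0, 0.05, -0.5])
print(l2_penalty(w))  # penalty dominated by the large weight 3.0
print(l1_penalty(w))  # every nonzero weight pays the same per-unit cost
```

The gradients explain the difference in effect: the L2 gradient shrinks in proportion to the weight, so small weights are barely touched, while the L1 gradient has constant magnitude, so small weights get pushed all the way to zero.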

Adding noise to the weights during learning encourages the learned hidden representations to take extreme (saturated) values, since saturated units are the ones whose outputs stay stable under the perturbation.
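
As a rough sketch of how weight noise has this effect (the Gaussian noise, noise level sigma, and layer sizes here are my own assumptions; the note does not specify them):

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_forward(x, W, sigma=0.5):
    # Perturb the weights with fresh Gaussian noise on every forward pass.
    W_noisy = W + sigma * rng.standard_normal(W.shape)
    return 1.0 / (1.0 + np.exp(-(x @ W_noisy)))  # sigmoid hidden units

# For the output to stay stable under random weight perturbations, training
# favors large-magnitude pre-activations, i.e. saturated (near 0 or 1) units.
x = rng.standard_normal((4, 8))   # hypothetical batch of 4 inputs
W = rng.standard_normal((8, 3))   # hypothetical 8 -> 3 hidden layer
print(noisy_forward(x, W))
```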

Sampling the hidden representations regularizes the network by forcing them to be binary during the forward pass, which limits the modeling capacity of the network.
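
A minimal sketch of what sampling the hidden representation could look like, assuming sigmoid units treated as Bernoulli probabilities (my assumption; the note does not name the unit type):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_hidden(x, W):
    p = 1.0 / (1.0 + np.exp(-(x @ W)))           # Bernoulli probabilities
    h = (rng.random(p.shape) < p).astype(float)  # sampled binary code: 0 or 1
    return h, p

x = rng.standard_normal((4, 8))
W = rng.standard_normal((8, 3))
h, p = sample_hidden(x, W)
print(h)  # downstream layers only ever see binary values
```

In a full network the sampling step itself is not differentiable, so gradients are typically passed through the probabilities p (a straight-through style estimator); that detail is outside the scope of this sketch.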
