[tf]LSTM

2018-12-16  VanJordan

Creating a simple LSTM

In TensorFlow, a complete LSTM cell can be created with a single call:

lstm = tf.nn.rnn_cell.BasicLSTMCell(lstm_hidden_size)

Use the zero_state method to initialize the LSTM state to all zeros:

state = lstm.zero_state(batch_size, tf.float32)
for i in range(num_steps):
    # Call the cell directly (not .call) to advance one time step;
    # inputs has shape [batch_size, num_steps, input_size].
    output, state = lstm(inputs[:, i, :], state)
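Under the hood, each such call advances the cell by one time step. Below is a minimal NumPy sketch of the gate arithmetic inside an LSTM step (the names lstm_step, W, and b are illustrative, the gate ordering is one common convention, and TensorFlow additionally adds a forget_bias of 1.0 to the forget gate, which is omitted here):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM time step. W has shape (input_size + hidden_size, 4 * hidden_size),
    so a single affine map produces all four gates at once."""
    z = np.concatenate([x, h_prev], axis=1) @ W + b
    i, g, f, o = np.split(z, 4, axis=1)                # input, candidate, forget, output
    c = sigmoid(f) * c_prev + sigmoid(i) * np.tanh(g)  # new cell state
    h = sigmoid(o) * np.tanh(c)                        # new hidden state (the cell output)
    return h, c

# Toy usage: batch of 2, input size 3, hidden size 4.
rng = np.random.default_rng(0)
batch, input_size, hidden = 2, 3, 4
W = rng.standard_normal((input_size + hidden, 4 * hidden)) * 0.1
b = np.zeros(4 * hidden)
h = np.zeros((batch, hidden))
c = np.zeros((batch, hidden))
for t in range(5):
    x = rng.standard_normal((batch, input_size))
    h, c = lstm_step(x, h, c, W, b)
print(h.shape)  # (2, 4)
```

The cell state c carries long-range information; the hidden state h doubles as the per-step output, which is why the loop above returns both.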

Creating a multi-layer LSTM

A deep recurrent network is built by stacking cells with MultiRNNCell, and can likewise be initialized with zero_state:

# Use a factory so every layer gets its own cell; reusing one cell
# object across layers is a common bug.
def lstm_cell():
    return tf.nn.rnn_cell.BasicLSTMCell(lstm_hidden_size)

stacked_lstm = tf.nn.rnn_cell.MultiRNNCell(
    [lstm_cell() for _ in range(number_of_layers)])
state = stacked_lstm.zero_state(batch_size, tf.float32)
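Conceptually, MultiRNNCell chains the layers so that layer k's output at a time step becomes layer k+1's input, with each layer keeping its own state. A toy sketch of one time step, using a plain tanh cell as a stand-in for the LSTM (all names here are illustrative):

```python
import numpy as np

def rnn_step(x, h, W_x, W_h):
    # Minimal stand-in cell: a single tanh recurrence.
    return np.tanh(x @ W_x + h @ W_h)

rng = np.random.default_rng(0)
num_layers, hidden, input_size, batch = 3, 4, 4, 2
params = [(rng.standard_normal((input_size if k == 0 else hidden, hidden)) * 0.1,
           rng.standard_normal((hidden, hidden)) * 0.1)
          for k in range(num_layers)]
# zero_state: one zeroed state per layer.
states = [np.zeros((batch, hidden)) for _ in range(num_layers)]

x = rng.standard_normal((batch, input_size))
for k, (W_x, W_h) in enumerate(params):
    states[k] = rnn_step(x, states[k], W_x, W_h)
    x = states[k]   # this layer's output feeds the next layer
print(x.shape)  # (2, 4)
```

The final x is the top layer's output, which is what the stacked cell returns; the per-layer states list mirrors the tuple of states MultiRNNCell threads through time.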

Using dropout in an LSTM

tf.nn.rnn_cell.DropoutWrapper(
    cell,
    input_keep_prob=1.0,
    output_keep_prob=1.0,
    state_keep_prob=1.0,
    variational_recurrent=False,
    input_size=None,
    dtype=None,
    seed=None,
    dropout_state_filter_visitor=None
)
def lstm_cell():
    return tf.nn.rnn_cell.BasicLSTMCell(lstm_hidden_size)

stacked_lstm = tf.nn.rnn_cell.MultiRNNCell(
    [tf.nn.rnn_cell.DropoutWrapper(lstm_cell(), output_keep_prob=keep_prob)
     for _ in range(number_of_layers)])
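The wrapper applies dropout to the tensors flowing into and out of the wrapped cell, controlled by the keep_prob arguments above. A minimal sketch of that operation on a cell output, assuming the standard inverted-dropout rescaling (so the expected activation is unchanged):

```python
import numpy as np

def dropout(x, keep_prob, rng):
    # Inverted dropout: zero each unit with probability 1 - keep_prob,
    # and scale survivors by 1 / keep_prob.
    mask = rng.random(x.shape) < keep_prob
    return np.where(mask, x / keep_prob, 0.0)

rng = np.random.default_rng(0)
h = np.ones((2, 8))                       # pretend this is a cell's output
h_drop = dropout(h, keep_prob=0.5, rng=rng)
print(h_drop)  # surviving entries are scaled to 2.0, the rest are 0.0
```

Note that dropout is usually applied between layers (input/output) rather than on the recurrent state, since dropping the state at every step destroys long-range memory; variational_recurrent=True exists for the case where you do want a single mask reused across time steps.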

BiLSTM

tf.nn.bidirectional_dynamic_rnn(
    cell_fw, 
    cell_bw, 
    inputs, 
    initial_state_fw=None, 
    initial_state_bw=None, 
    sequence_length=None, 
    dtype=None, 
    parallel_iterations=None, 
    swap_memory=False, 
    time_major=False, 
    scope=None
)

Returns a pair (outputs, output_states): outputs is a tuple (output_fw, output_bw) containing the per-step outputs of the forward and backward directions, and output_states is a tuple (output_state_fw, output_state_bw) with each direction's final state.
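To see how the two directions combine, here is a toy sketch with a tanh cell standing in for the LSTMs (all names illustrative): the backward pass runs over the reversed sequence and is reversed again so both outputs align per time step, after which callers typically concatenate them along the feature axis:

```python
import numpy as np

def rnn_layer(xs, W_x, W_h):
    """Run a toy tanh RNN over a [time, batch, input] sequence; return all per-step outputs."""
    h = np.zeros((xs.shape[1], W_h.shape[0]))
    outs = []
    for x in xs:
        h = np.tanh(x @ W_x + h @ W_h)
        outs.append(h)
    return np.stack(outs)

rng = np.random.default_rng(0)
T, batch, input_size, hidden = 5, 2, 3, 4

def make_weights():
    return (rng.standard_normal((input_size, hidden)) * 0.1,
            rng.standard_normal((hidden, hidden)) * 0.1)

xs = rng.standard_normal((T, batch, input_size))
fw = rnn_layer(xs, *make_weights())             # forward direction
bw = rnn_layer(xs[::-1], *make_weights())[::-1] # backward direction, re-aligned in time
outputs = np.concatenate([fw, bw], axis=-1)     # combine (output_fw, output_bw)
print(outputs.shape)  # (5, 2, 8)
```

At each time step the concatenated vector sees the whole sequence: the forward half summarizes the past, the backward half the future.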

dynamic_rnn

tf.nn.dynamic_rnn(
    cell, 
    inputs, 
    initial_state=None, 
    sequence_length=None, 
    dtype=None, 
    parallel_iterations=None, 
    swap_memory=False, 
    time_major=False, 
    scope=None
)

Inputs: cell is any RNNCell instance; inputs has shape [batch_size, max_time, ...] (or [max_time, batch_size, ...] when time_major=True); the optional sequence_length gives each sequence's true length so computation past it is skipped.

Output: a pair (outputs, state), where outputs contains the cell output at every time step and state is the final state after the last valid step.
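A sketch of what dynamic_rnn does with sequence_length, again using a toy tanh cell (all names illustrative): past each sequence's true length, the state is carried through unchanged and the emitted output is zero:

```python
import numpy as np

def dynamic_rnn_sketch(xs, seq_len, W_x, W_h):
    """Unroll a toy tanh cell over xs of shape [batch, time, input].
    Past each sequence's length, freeze the state and emit zeros."""
    batch, T, _ = xs.shape
    hidden = W_h.shape[0]
    h = np.zeros((batch, hidden))
    outputs = np.zeros((batch, T, hidden))
    for t in range(T):
        valid = (t < seq_len)[:, None]                # which sequences are still running
        h_new = np.tanh(xs[:, t] @ W_x + h @ W_h)
        h = np.where(valid, h_new, h)                 # freeze state after the end
        outputs[:, t] = np.where(valid, h_new, 0.0)   # zero outputs after the end
    return outputs, h

rng = np.random.default_rng(0)
xs = rng.standard_normal((2, 4, 3))                   # batch 2, 4 steps, input size 3
outputs, state = dynamic_rnn_sketch(xs, np.array([4, 2]),
                                    rng.standard_normal((3, 5)) * 0.1,
                                    rng.standard_normal((5, 5)) * 0.1)
print(outputs.shape, state.shape)  # (2, 4, 5) (2, 5)
```

This is why passing sequence_length matters for padded batches: the returned state for the second sequence above is its state after step 2, not after processing two steps of padding.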
