Lecture 14 | (3/5) Recurrent Neural Networks
2019-11-02
Ysgc
https://www.youtube.com/watch?v=ItYyu3KQvOQ
code generated by a character-level RNN
input: an (n-1) × 100 one-hot matrix — n-1 characters over a 100-symbol vocabulary
each row has a single nonzero entry, so only 1% of the space is used
very inefficient representation!!!
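A minimal sketch of that sparsity, assuming a hypothetical 100-symbol vocabulary (the `one_hot` helper and the index 42 are illustrative, not from the lecture):

```python
import numpy as np

VOCAB_SIZE = 100  # assumed vocabulary size

def one_hot(index, size=VOCAB_SIZE):
    # One-hot vector: a single 1 at position `index`, zeros elsewhere.
    v = np.zeros(size)
    v[index] = 1.0
    return v

x = one_hot(42)
# Only 1 of 100 entries is nonzero -> 1% of the space is used.
sparsity = np.count_nonzero(x) / x.size
```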
one-hot coding is both an advantage and a disadvantage
fix: project from the N-dim one-hot space down to an M-dim subspace
the projection itself is a learnable transformation!!!
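A sketch of that learnable projection (an embedding layer), with assumed sizes N = 100 and M = 16. Multiplying a one-hot vector by the projection matrix just selects one of its rows, which is why frameworks implement it as a table lookup:

```python
import numpy as np

N, M = 100, 16  # assumed: N-dim one-hot input, M-dim subspace
rng = np.random.default_rng(0)
W = rng.normal(size=(N, M))  # learnable projection matrix (trained in practice)

def one_hot(i, n=N):
    v = np.zeros(n)
    v[i] = 1.0
    return v

i = 7
projected = one_hot(i) @ W  # N-dim -> M-dim
# The product equals row i of W: the projection is a lookup.
assert np.allclose(projected, W[i])
```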
time-delay neural network (TDNN)
the learned projections end up capturing some semantic relationships
only consider the error at the final time step
this strategy works for the "many to one" case
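A vanilla-RNN sketch of the "many to one" setup, with assumed dimensions and random weights (hypothetical, for illustration): the network is unrolled over the whole input, but only the final hidden state would feed the loss, which is then backpropagated through time:

```python
import numpy as np

D, H = 8, 4  # assumed input dim and hidden dim
rng = np.random.default_rng(1)
Wx = rng.normal(size=(H, D)) * 0.1  # input-to-hidden weights
Wh = rng.normal(size=(H, H)) * 0.1  # hidden-to-hidden (recurrent) weights

def run_rnn(xs):
    h = np.zeros(H)
    for x in xs:                      # unroll over time
        h = np.tanh(Wx @ x + Wh @ h)
    return h                          # only the final state is kept

xs = rng.normal(size=(5, D))          # a length-5 input sequence
h_final = run_rnn(xs)
# "many to one": the loss is computed from h_final alone;
# gradients still flow back through every time step.
```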
there are 2 problems
another problem: how to train
e.g. a recording of "hello" with no label for each time step: the alignment problem
solution: CTC (Connectionist Temporal Classification)
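The core of CTC's answer to the alignment problem is a many-to-one collapse rule: a per-frame path is mapped to a label sequence by merging repeated symbols and then dropping a special blank symbol. A minimal sketch (the `_` blank and the example path are illustrative):

```python
def ctc_collapse(path, blank="_"):
    # CTC collapse: merge consecutive repeats, then drop blanks.
    out = []
    prev = None
    for s in path:
        if s != prev and s != blank:
            out.append(s)
        prev = s
    return "".join(out)

print(ctc_collapse("hh_e_ll_llo"))  # -> "hello"
```

Note that a blank between the two `ll` groups is what allows a genuine double letter to survive the repeat-merging step.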