Note 2: ELMo

2020-07-11  qin7zhen

Deep contextualized word representations

Peters et al, 2018
  1. ELMo (Embeddings from Language Models) learns a task-specific linear combination of the vectors stacked above each input word, which markedly improves performance over using only the top LSTM layer.
    • Higher layers capture the context-dependent aspects of word meaning.
    • Lower layers capture basic syntax.
    • Unlike traditional word embeddings, ELMo word representations are functions of the entire input sentence.


      [Figure from Devlin et al., 2019]

2. Bidirectional language models (biLM)

Given a sequence of N tokens [t_1, t_2, \ldots, t_N], the forward LM models the probability of each token given its history, and the backward LM models it given the future context:

p(t_1, t_2, \ldots, t_N) = \prod_{k=1}^{N} p(t_k \mid t_1, \ldots, t_{k-1}) \quad \text{(forward)}

p(t_1, t_2, \ldots, t_N) = \prod_{k=1}^{N} p(t_k \mid t_{k+1}, \ldots, t_N) \quad \text{(backward)}

The biLM jointly maximizes the log likelihood of both directions, sharing the token embedding parameters \Theta_x and the softmax parameters \Theta_s between them:

\sum_{k=1}^{N} \left( \log p(t_k \mid t_1, \ldots, t_{k-1}; \Theta_x, \overrightarrow{\Theta}_{LSTM}, \Theta_s) + \log p(t_k \mid t_{k+1}, \ldots, t_N; \Theta_x, \overleftarrow{\Theta}_{LSTM}, \Theta_s) \right)
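The joint objective above is just the sum of per-token log probabilities from both directions. A minimal numpy sketch, using made-up conditional probabilities rather than outputs of a real trained biLM:

```python
import numpy as np

# Hypothetical per-token conditional probabilities for a 3-token sequence
# (assumed values for illustration, not real model outputs).
forward_probs = np.array([0.30, 0.25, 0.40])   # p(t_k | t_1, ..., t_{k-1})
backward_probs = np.array([0.35, 0.20, 0.45])  # p(t_k | t_{k+1}, ..., t_N)

# The biLM objective sums the log likelihoods of both directions.
joint_log_likelihood = np.sum(np.log(forward_probs)) + np.sum(np.log(backward_probs))
print(round(joint_log_likelihood, 4))  # -> -6.9643
```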

3. ELMo

For each token t_k, an L-layer biLM computes 2L + 1 representations: the token-layer embedding x_k^{LM} = h_{k,0}^{LM}, plus the concatenated forward and backward hidden states h_{k,j}^{LM} at each layer j. ELMo collapses them into a single vector with task-specific weights:

ELMo_k^{task} = \gamma^{task} \sum_{j=0}^{L} s_j^{task} h_{k,j}^{LM}

where s^{task} are softmax-normalized layer weights and \gamma^{task} is a scalar that scales the whole ELMo vector for the end task.
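The task-specific combination can be sketched in a few lines of numpy; the layer representations, weights, and gamma below are made-up toy values, not parameters from a trained model:

```python
import numpy as np

def elmo_combine(layer_reps, s_logits, gamma):
    """Collapse the (L+1) biLM layer representations of one token into a
    single ELMo vector: gamma * sum_j softmax(s)_j * h_j."""
    s = np.exp(s_logits - s_logits.max())
    s = s / s.sum()                          # softmax-normalized layer weights
    return gamma * (s[:, None] * layer_reps).sum(axis=0)

# Made-up layer representations for one token: (L+1) = 3 layers, dim = 2.
layer_reps = np.array([[1.0, 0.0],
                       [0.0, 1.0],
                       [1.0, 1.0]])
s_logits = np.zeros(3)                       # equal logits -> each weight 1/3
elmo_vec = elmo_combine(layer_reps, s_logits, gamma=1.5)
print(elmo_vec)                              # -> [1. 1.]
```

With equal weights the result is simply gamma times the mean of the layers; a downstream task would instead learn s and gamma jointly with its own parameters, while the biLM weights stay frozen.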


Reference

Peters, M. E., Neumann, M., Iyyer, M., Gardner, M., Clark, C., Lee, K., & Zettlemoyer, L. (2018). Deep contextualized word representations. arXiv preprint arXiv:1802.05365.
Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
