A Collection of Papers on Knowledge Distillation and Adversarial Training in NLP
2020-04-29
top_小酱油
Sharing some papers I have collected and summarized while reading. Continuously updated; everyone is welcome to contribute!
Adversarial Training (Adversarial Examples)
- Explaining and harnessing adversarial examples 2014 (the seminal work)
- Distributional smoothing with virtual adversarial training 2015
- Adversarial training methods for semi-supervised text classification (FGM) 2016 code1 code2
- Seq2Sick: Evaluating the Robustness of Sequence-to-Sequence with Adversarial Examples 2018
- Interpretable adversarial perturbation in input embedding space for text 2018
- Generating Natural Language Adversarial Examples 2018
- Towards Deep Learning Models Resistant to Adversarial Attacks (PGD) ICLR 2018
- Freelb: enhanced adversarial training for language understanding 2019
- A Survey: Towards a Robust Deep Neural Network in Text 2019
- Adversarial attacks on deep learning models in natural language: A survey 2019
- Technical report on Conversational Question Answering 2019 (combines adversarial examples and knowledge distillation on top of RoBERTa for question answering)
- SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization 2019
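Several of the papers above (notably the FGM and PGD entries) perturb word embeddings along the loss gradient, normalized so the perturbation has a fixed norm. A minimal NumPy sketch of the FGM perturbation rule, `r_adv = ε · g / ‖g‖₂` (the function name and toy values are illustrative, not from any paper's released code):

```python
import numpy as np

def fgm_perturbation(grad, epsilon=1.0):
    """Fast Gradient Method: r_adv = epsilon * g / ||g||_2.

    grad: gradient of the loss w.r.t. the word embeddings.
    Returns the adversarial perturbation to add to the embeddings.
    """
    norm = np.linalg.norm(grad)
    if norm == 0.0:  # flat gradient: no adversarial direction
        return np.zeros_like(grad)
    return epsilon * grad / norm

# Toy example: a 3-dim embedding gradient with ||g|| = 5
g = np.array([3.0, 0.0, 4.0])
r = fgm_perturbation(g, epsilon=1.0)
# r points along g and has L2 norm equal to epsilon
```

PGD (the ICLR 2018 entry) iterates this step several times, projecting back into an ε-ball after each update, which makes the attack stronger at the cost of extra forward/backward passes.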
Knowledge Distillation
- DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter 2019
- TinyBERT: Distilling BERT for Natural Language Understanding 2019
- Distilling Task-Specific Knowledge from BERT into Simple Neural Networks 2019
- Improving Multi-Task Deep Neural Networks via Knowledge Distillation for Natural Language Understanding 2019 code
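The distillation papers above all build on the soft-target loss of Hinton et al. (2015): the student is trained to match the teacher's temperature-softened output distribution, with the loss scaled by T² to keep gradient magnitudes comparable across temperatures. A NumPy sketch of that loss (a simplified illustration, not code from any of the listed papers):

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-softened softmax; higher T gives a flatter distribution."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Cross-entropy between teacher and student soft targets,
    scaled by T^2 as in Hinton et al. (2015)."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    return -(T ** 2) * np.sum(p_teacher * np.log(p_student + 1e-12))

# The loss is minimized when the student reproduces the teacher's logits:
teacher = np.array([2.0, 1.0, 0.1])
loss_match = distillation_loss(teacher, teacher, T=2.0)
loss_off = distillation_loss(np.array([0.1, 1.0, 2.0]), teacher, T=2.0)
# loss_match < loss_off
```

In practice this soft-target term is combined with the ordinary hard-label cross-entropy, weighted by a mixing coefficient that each paper tunes separately.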