PAPER-REVIEW
Contents
Deep Learning Methodology
- Finding Structure in Time(vanilla RNN), 1990| PDF|Github|[PPT][]
- Long Short-Term Memory(LSTM), 1997| PDF|Github|[PPT][]
- Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation(GRU), 2014| PDF|Github|[PPT][]
- Bidirectional Recurrent Neural Networks(BRNN), 1997| PDF|Github|[PPT][]
Computer Vision
Natural Language Processing
- Language Models are Unsupervised Multitask Learners| PDF|Github|[PPT][]
- XLNet: Generalized Autoregressive Pretraining for Language Understanding| PDF|Github|[PPT][]
- Emotion-Cause Pair Extraction: A New Task to Emotion Analysis in Texts| PDF|Github|[PPT][]
- Transferable Multi-Domain State Generator for Task-Oriented Dialogue Systems| PDF|Github|[PPT][]
- Ordered Neurons: Integrating Tree Structures into Recurrent Neural Networks| PDF|Github|[PPT][]
- Probing the Need for Visual Context in Multimodal Machine Translation| PDF|Github|[PPT][]
- Bridging the Gap between Training and Inference for Neural Machine Translation| PDF|Github|[PPT][]
- On Extractive and Abstractive Neural Document Summarization with Transformer Language Models| PDF|Github|[PPT][]
- CTRL: A Conditional Transformer Language Model for Controllable Generation| PDF|Github|[PPT][]
- ALBERT: A Lite BERT for Self-supervised Learning of Language Representations| PDF|Github|[PPT][]
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding| PDF|Github|[PPT][]
- Sequence Classification with Human Attention| PDF|Github|[PPT][]
- Phrase-Based & Neural Unsupervised Machine Translation| PDF|Github|[PPT][]
- What you can cram into a single vector: Probing sentence embeddings for linguistic properties| PDF|Github|[PPT][]
- SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference| PDF|Github|[PPT][]
- Deep contextualized word representations| PDF|Github|[PPT][]
- Meta-Learning for Low-Resource Neural Machine Translation| PDF|Github|[PPT][]
- Linguistically-Informed Self-Attention for Semantic Role Labeling| PDF|Github|[PPT][]
- A Hierarchical Multi-task Approach for Learning Embeddings from Semantic Tasks| PDF|Github|[PPT][]
- Know What You Don’t Know: Unanswerable Questions for SQuAD| PDF|Github|[PPT][]
- An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling| PDF|Github|[PPT][]
- Universal Language Model Fine-tuning for Text Classification| PDF|Github|[PPT][]
- Improving Language Understanding by Generative Pre-Training| PDF|Github|[PPT][]
- Dissecting Contextual Word Embeddings: Architecture and Representation| PDF|Github|[PPT][]
Image Captioning
- Attention on Attention for Image Captioning| PDF|Github|[PPT][]
- Image Captioning with Semantic Attention| PDF|Github|[PPT][]