References
Dataset
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev and Percy Liang. 2016, SQuAD: 100,000+ Questions for Machine Comprehension of Text
Pranav Rajpurkar, Robin Jia and Percy Liang. 2018, Know What You Don’t Know: Unanswerable Questions for SQuAD
Victor Zhong, Caiming Xiong and Richard Socher. 2017, Seq2SQL: Generating Structured Queries from Natural Language using Reinforcement Learning
Model
Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi and Hannaneh Hajishirzi. 2016, Bidirectional Attention Flow for Machine Comprehension
Danqi Chen, Adam Fisch, Jason Weston and Antoine Bordes. 2017, Reading Wikipedia to Answer Open-Domain Questions
Christopher Clark and Matt Gardner. 2017, Simple and Effective Multi-Paragraph Reading Comprehension
Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi and Quoc V. Le. 2018, QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension
Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova. 2018, BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Xiaojun Xu, Chang Liu and Dawn Song. 2017, SQLNet: Generating Structured Queries From Natural Language Without Reinforcement Learning
Token
Yoon Kim. 2014, Convolutional Neural Networks for Sentence Classification
Bryan McCann, James Bradbury, Caiming Xiong and Richard Socher. 2017, Learned in Translation: Contextualized Word Vectors
Piotr Bojanowski, Edouard Grave, Armand Joulin and Tomas Mikolov. 2016, Enriching Word Vectors with Subword Information
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee and Luke Zettlemoyer. 2018, Deep Contextualized Word Representations
Other Frameworks
Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz and Luke S. Zettlemoyer. 2017, AllenNLP: A Deep Semantic Natural Language Processing Platform
Guillaume Klein, Yoon Kim, Yuntian Deng, Vincent Nguyen, Jean Senellart and Alexander M. Rush. 2018, OpenNMT: Neural Machine Translation Toolkit