@misc{tang2017question,
abstract = {We study the problem of joint question answering (QA) and question generation (QG).
Our intuition is that QA and QG are intrinsically connected and that these two tasks can improve each other.
On the one hand, the QA model judges whether a question generated by the QG model is relevant to the answer.
On the other hand, the QG model provides the probability of generating a question given the answer, which is useful evidence that in turn facilitates QA.
In this paper, we regard QA and QG as dual tasks.
We propose a training framework that trains the QA and QG models simultaneously and explicitly leverages their probabilistic correlation to guide the training of both models.
We implement a QG model based on sequence-to-sequence learning and a QA model based on recurrent neural networks.
As all components of the QA and QG models are differentiable, all parameters of the two models can be conventionally learned with back-propagation.
We conduct experiments on three datasets. Empirical results show that our training framework improves both the QA and QG tasks.
The improved QA model performs comparably with strong baseline approaches on all three datasets.},
added-at = {2018-01-07T21:11:19.000+0100},
author = {Tang, Duyu and Duan, Nan and Qin, Tao and Yan, Zhao and Zhou, Ming},
biburl = {https://www.bibsonomy.org/bibtex/22e35ebc0658bf578a93a13d1f7b1fa98/defeatnelly},
keywords = {question_generation},
note = {cite arxiv:1706.02027},
timestamp = {2018-01-07T21:11:19.000+0100},
title = {Question Answering and Question Generation as Dual Tasks},
url = {http://arxiv.org/abs/1706.02027},
year = 2017
}