Abstract
In this paper, we propose a method to learn a joint multimodal embedding
space. We compare the effects of various constraints using paired text and video
data. Additionally, we propose a method to improve the joint embedding space
using an adversarial formulation with unpaired text and video data. In addition
to testing on publicly available datasets, we introduce a new, large-scale
text/video dataset. We experimentally confirm that learning such a shared
embedding space benefits three difficult tasks: (i) zero-shot activity
classification, (ii) unsupervised activity discovery, and (iii) unseen activity
captioning.
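
As a rough illustration of the kind of objective such a method combines, the sketch below (PyTorch, not the authors' implementation) pairs a triplet-style ranking loss on paired video/text embeddings with a modality discriminator applied to unpaired embeddings; all module names, layer sizes, and the particular losses are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Maps a modality-specific feature vector into the shared embedding space."""
    def __init__(self, in_dim, embed_dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU(),
                                 nn.Linear(512, embed_dim))
    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)  # unit-length embeddings

class Discriminator(nn.Module):
    """Predicts whether an embedding came from the video or the text encoder."""
    def __init__(self, embed_dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(embed_dim, 128), nn.ReLU(),
                                 nn.Linear(128, 1))
    def forward(self, z):
        return self.net(z).squeeze(-1)

def paired_loss(zv, zt, margin=0.2):
    """Ranking loss on a batch of paired video/text embeddings: true pairs
    should score higher than mismatched pairs by at least the margin."""
    sim = zv @ zt.t()                         # cosine similarities (normalized inputs)
    pos = sim.diag().unsqueeze(1)             # similarity of the true pairs
    cost = (margin + sim - pos).clamp(min=0)
    mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    return cost.masked_fill(mask, 0.0).mean()

def adversarial_losses(disc, zv_unpaired, zt_unpaired):
    """The discriminator learns to tell modalities apart; the text encoder is
    trained to fool it, aligning the unpaired embedding distributions."""
    # Discriminator step: embeddings are detached so only disc receives gradients.
    logits_v = disc(zv_unpaired.detach())
    logits_t = disc(zt_unpaired.detach())
    d_loss = (F.binary_cross_entropy_with_logits(logits_v, torch.ones_like(logits_v))
              + F.binary_cross_entropy_with_logits(logits_t, torch.zeros_like(logits_t)))
    # Encoder step: label unpaired text embeddings as "video" to fool the discriminator.
    logits_t_enc = disc(zt_unpaired)
    g_loss = F.binary_cross_entropy_with_logits(logits_t_enc, torch.ones_like(logits_t_enc))
    return d_loss, g_loss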