Abstract
In this work, we release COVID-Twitter-BERT (CT-BERT), a transformer-based
model pretrained on a large corpus of Twitter messages on the topic of
COVID-19. Our model shows a 10-30% marginal improvement over its base model,
BERT-Large, on five different classification datasets, with the largest
improvements on datasets from the target domain. Pretrained transformer
models such as CT-BERT are trained on text from a specific target domain and
can then be used for a wide variety of natural language processing tasks,
including classification, question answering and chatbots. CT-BERT is
optimised for use on COVID-19 content, in particular social media posts from
Twitter.
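
As a concrete illustration of the kind of downstream use described above, the sketch below loads a released checkpoint for sequence classification with the Hugging Face transformers library. The model identifier digitalepidemiologylab/covid-twitter-bert is an assumption about the published checkpoint name, and the label count and example tweet are hypothetical placeholders.

```python
# Minimal sketch: loading a CT-BERT checkpoint for classification with
# Hugging Face transformers. The model identifier is an assumed checkpoint
# name; num_labels and the example tweet are hypothetical.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "digitalepidemiologylab/covid-twitter-bert"  # assumed checkpoint id

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=3)

# Tokenise a single (hypothetical) tweet and run a forward pass.
inputs = tokenizer(
    "Wearing a mask helps reduce the spread of COVID-19.",
    return_tensors="pt",
    truncation=True,
    max_length=128,
)
with torch.no_grad():
    logits = model(**inputs).logits

# Index of the highest-scoring class; the classification head is randomly
# initialised here and would need fine-tuning on a labelled dataset.
predicted_class = logits.argmax(dim=-1).item()
print(predicted_class)
```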