
MarkupLM: Pre-training of Text and Markup Language for Visually Rich Document Understanding.

, , , and . ACL (1), pp. 6078-6087. Association for Computational Linguistics, (2022)


Other publications by persons with the same name

STEP: Sequence-to-Sequence Transformer Pre-training for Document Summarization., , , , and . CoRR, (2020)
LongNet: Scaling Transformers to 1,000,000,000 Tokens., , , , , , and . (2023). cite arxiv:2307.02486. Comment: Work in progress.
Constrained Text Coclustering with Supervised and Unsupervised Constraints., , , , , and . IEEE Trans. Knowl. Data Eng., 25 (6): 1227-1239 (2013)
BEiT: BERT Pre-Training of Image Transformers., , and . CoRR, (2021)
DeltaLM: Encoder-Decoder Pre-training for Language Generation and Translation by Augmenting Pretrained Multilingual Encoders., , , , , , , , and . CoRR, (2021)
LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding., , , , , , , , , and 2 other author(s). CoRR, (2020)
GTrans: Grouping and Fusing Transformer Layers for Neural Machine Translation., , , , , , , and . IEEE ACM Trans. Audio Speech Lang. Process., (2023)
Coooolll: A Deep Learning System for Twitter Sentiment Classification., , , , and . SemEval@COLING, pp. 208-212. The Association for Computer Linguistics, (2014)
UnihanLM: Coarse-to-Fine Chinese-Japanese Language Model Pretraining with the Unihan Database., , , and . AACL/IJCNLP, pp. 201-211. Association for Computational Linguistics, (2020)
DiT: Self-supervised Pre-training for Document Image Transformer., , , , , and . ACM Multimedia, pp. 3530-3539. ACM, (2022)