Using Pre-Trained Language Models for Abstractive DBPEDIA Summarization: A Comparative Study.

, , , , and . SEMANTiCS, volume 56 of Studies on the Semantic Web, pp. 19-37. IOS Press, (2023)

Other publications by persons with the same name

Using Pre-Trained Language Models for Abstractive DBPEDIA Summarization: A Comparative Study., , , , and . SEMANTiCS, volume 56 of Studies on the Semantic Web, pp. 19-37. IOS Press, (2023)
Multilingual Serviceability Model for Detecting and Ranking Help Requests on Social Media during Disasters., and . ICWSM, pp. 1571-1584. AAAI Press, (2024)
Emotion Detection for Spanish by Combining LASER Embeddings, Topic Information, and Offense Features., and . IberLEF@SEPLN, volume 2943 of CEUR Workshop Proceedings, pp. 78-85. CEUR-WS.org, (2021)
Efficient Detection of Multilingual Hate Speech by Using Interactive Attention Network with Minimal Human Feedback., , and . WebSci, pp. 130-138. ACM, (2021)
Construction of Hyper-Relational Knowledge Graphs Using Pre-Trained Large Language Models., , , and . CoRR, (2024)
Unraveling Code-Mixing Patterns in Migration Discourse: Automated Detection and Analysis of Online Conversations on Reddit., , , , and . CoRR, (2024)
Comparison of Social Media in English and Russian During Emergencies and Mass Convergence Events., and . ISCRAM, ISCRAM Association, (2019)
Cross-Lingual Query-Based Summarization of Crisis-Related Social Media: An Abstractive Approach Using Transformers., and . HT, pp. 21-31. ACM, (2022)
Single-Sentence Readability Prediction in Russian., , and . AIST, volume 436 of Communications in Computer and Information Science, pp. 91-100. Springer, (2014)
DBpedia Abstractive Summaries., , , , and . (December 2022)