Deutscher Wortschatz contains data generated from newspapers and web resources that are publicly available. The data were collected per language and encompass statistics about co-occurrences of words in randomly selected sentences.
Web 0.1: Weather, football live tickers, TV listings: 16 million Germans press a button on the remote control every day to get to the information in Videotext. (tags: tv text)
Textuality is often thought of in linguistic terms; for instance, the talk and writing that circulate in the classroom. In this paper I take a multimodal perspective on textuality and context. I draw on illustrative examples from school Science and English to examine how image, colour, gesture, gaze, posture and movement—as well as writing and speech—are mobilized and orchestrated by teachers and students, and how this shapes learning contexts. Throughout the paper I discuss the issues raised by a multimodal perspective for the conceptualization of text and learning context, and how this approach can contribute to learning and pedagogy more generally. I suggest that attending to the full ensemble of communicative modes involved in learning contexts enables a richer view of the complex ways in which curriculum knowledge (and policy) is mediated and articulated through classroom practices.
Now you can easily create tables in plain text that can be copied into any text file. Multi-line cell contents are supported, as are multi-row and multi-column spanning of cells.
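The kind of bordered plain-text table such a tool produces can be sketched in a few lines of Python (the `ascii_table` helper below is illustrative only, not the tool's actual API, and omits multi-row/multi-column spanning):

```python
def ascii_table(rows):
    """Render rows (lists of cell values) as a plain-text table with +-| borders."""
    widths = [max(len(str(r[i])) for r in rows) for i in range(len(rows[0]))]
    sep = "+" + "+".join("-" * (w + 2) for w in widths) + "+"
    lines = [sep]
    for row in rows:
        cells = " | ".join(str(c).ljust(w) for c, w in zip(row, widths))
        lines.append("| " + cells + " |")
        lines.append(sep)  # horizontal rule after every row
    return "\n".join(lines)

print(ascii_table([["Name", "Lang"], ["wav2vec", "Python"]]))
```

The output is pure ASCII, so it can be pasted into any text file or code comment unchanged.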
Today, speech technology is only available for a small fraction of the thousands of languages spoken around the world because traditional systems need to be trained on large amounts of annotated speech audio with transcriptions. Obtaining that kind of data for every human language and dialect is almost impossible.
Wav2vec works around this limitation by requiring little to no transcribed data. The model uses self-supervision to push the boundaries by learning from unlabeled training data. This enables speech recognition systems for many more languages and dialects, such as Kyrgyz and Swahili, which don’t have a lot of transcribed speech audio. Self-supervision is the key to leveraging unannotated data and building better systems.
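The core of this self-supervision is a contrastive objective: the model masks parts of the audio's latent representation and learns to pick the true latent for a masked time step out of a set of distractors. A minimal NumPy sketch of such an InfoNCE-style loss (a simplified illustration, not wav2vec's actual implementation; all names here are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def contrastive_loss(context, target, distractors, temperature=0.1):
    """InfoNCE-style loss: the context vector at a masked time step should be
    more similar (cosine) to the true latent than to sampled distractors."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    logits = np.array([cos(context, target)] + [cos(context, d) for d in distractors])
    logits /= temperature
    # softmax cross-entropy with the true latent at index 0
    logits -= logits.max()
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])

# toy check: a context vector close to the true latent yields a low loss,
# a random context vector yields a high one
target = rng.normal(size=16)
good_context = target + 0.01 * rng.normal(size=16)
distractors = rng.normal(size=(10, 16))
low = contrastive_loss(good_context, target, distractors)
high = contrastive_loss(rng.normal(size=16), target, distractors)
print(low < high)
```

Because the loss needs only the audio itself (masked targets and distractors come from the same utterance), no transcriptions are required at this stage; labeled data enters only in a later fine-tuning step.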
P. Moreira, Y. Bizzoni, K. Nielbo, I. Lassen, and M. Thomsen. Proceedings of the 5th Workshop on Narrative Understanding, pp. 25–35. Toronto, Canada, Association for Computational Linguistics, (July 2023)