Textuality is often thought of in linguistic terms: for instance, as the talk and writing that circulate in the classroom. In this paper I take a multimodal perspective on textuality and context. I draw on illustrative examples from school Science and English to examine how image, colour, gesture, gaze, posture and movement—as well as writing and speech—are mobilized and orchestrated by teachers and students, and how this shapes learning contexts. Throughout the paper I discuss the issues that a multimodal perspective raises for the conceptualization of text and learning context, and how this approach can contribute to learning and pedagogy more generally. I suggest that attending to the full ensemble of communicative modes involved in learning contexts enables a richer view of the complex ways in which curriculum knowledge (and policy) is mediated and articulated through classroom practices.
The first goal of this work is to collect and present a compilation of gestural interactions in Sci-Fi movies, providing researchers with a catalog as a resource for future discussions. The second goal is to classify the collected data according to a series of criteria.
A micro, modular, object-oriented, and concise JavaScript library that simplifies HTML document traversal, event handling, and Ajax interactions for rapid mobile web development. It allows you to write powerful, flexible, cross-browser code with its elegant, well-documented, and coherent micro API.
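The "traversal and event handling with a concise API" style described above typically rests on a chainable wrapper object. The sketch below illustrates that pattern in plain JavaScript; the `$`, `Wrapper`, `on`, and `addClass` names are illustrative assumptions, not the library's actual API, and plain objects stand in for DOM elements (in a browser, the wrapper would hold the result of `document.querySelectorAll(selector)`).

```javascript
// Hypothetical sketch of the chainable wrapper pattern used by
// micro DOM libraries. Names ($, Wrapper) are illustrative only.
class Wrapper {
  constructor(items) {
    this.items = items; // in a browser: a list of DOM elements
  }
  // Register a handler for an event name on every wrapped item;
  // returns `this` so calls can be chained.
  on(event, handler) {
    this.items.forEach(function (item) {
      (item.handlers = item.handlers || {})[event] = handler;
    });
    return this;
  }
  // Add a class name to every wrapped item (stored as a plain
  // string here; a real library would update element.classList).
  addClass(name) {
    this.items.forEach(function (item) {
      item.className = ((item.className || '') + ' ' + name).trim();
    });
    return this;
  }
}

// Entry point: wrap a collection so methods can be chained.
function $(items) {
  return new Wrapper(items);
}

// Usage: event binding and class manipulation in one expression.
const buttons = [{}, {}];
$(buttons).on('click', function () {}).addClass('active');
console.log(buttons[0].className); // → "active"
```

Returning `this` from every mutating method is what makes the one-expression, jQuery-like call chains possible while keeping the API surface small.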
J. Lee, M. Wu, C. Liu, and Y. Chuang. Proceedings of the 2nd International Conference on Biomedical Signal and Image Processing, pages 13–17. New York, NY, USA: Association for Computing Machinery (ACM), 2017.
C. Liu, G. Clark, and J. Lindqvist. Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pages 374–386. New York, NY, USA: Association for Computing Machinery (ACM), 2017.