Abstract

Learners are expected to stay wakeful and focused while interacting with e-learning platforms. Although learners’ wakefulness strongly relates to educational outcomes, detecting drowsy learning behaviors from log data alone is difficult. In this study, we describe the results of our research on modeling learners’ wakefulness based on multimodal data generated from heart rate, seat pressure, and face recognition. We collected multimodal data from learners in a blended informatics course and conducted two types of analysis on the data. First, we clustered features according to wakefulness labels generated by human raters and ran a statistical analysis. This analysis helped us generate insights from multimodal data that can inform learner and teacher feedback in multimodal learning analytics. Second, we trained machine learning models with multiclass Support Vector Machine (SVM), Random Forest (RF), and CatBoost Classifier (CatBoost) algorithms to recognize learners’ wakefulness states automatically. We achieved an average macro-F1 score of 0.82 in automated user-dependent models with CatBoost. We also showed that, compared to unimodal data from each sensor, multimodal sensor data improve the accuracy of models predicting learners’ wakefulness states while they interact with e-learning platforms.
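The abstract does not include the authors’ pipeline, so the following is only a minimal sketch of the user-dependent classification setup it describes: the three classifiers (SVM, RF, CatBoost) trained on a concatenated multimodal feature matrix and evaluated with macro-F1. The feature dimensions, label set, and hyperparameters are illustrative assumptions, and the random placeholder data stand in for the real sensor features.

```python
# Hypothetical sketch of the wakefulness classification setup; not the
# authors' code. Features, labels, and hyperparameters are placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from catboost import CatBoostClassifier

rng = np.random.default_rng(0)

# Placeholder multimodal features: assume heart-rate, seat-pressure, and
# face-recognition features are concatenated per time window (12 columns here).
X = rng.normal(size=(600, 12))
# Assumed wakefulness states from human raters: 0=awake, 1=drowsy, 2=asleep.
y = rng.integers(0, 3, size=600)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

# Scale features for the SVM; the tree ensembles do not require scaling.
scaler = StandardScaler().fit(X_train)

models = {
    "SVM": SVC(kernel="rbf"),  # sklearn handles the multiclass case internally
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "CatBoost": CatBoostClassifier(iterations=300, verbose=0, random_seed=0),
}

for name, model in models.items():
    Xtr = scaler.transform(X_train) if name == "SVM" else X_train
    Xte = scaler.transform(X_test) if name == "SVM" else X_test
    model.fit(Xtr, y_train)
    pred = np.ravel(model.predict(Xte))  # flatten CatBoost's (n, 1) output
    # Macro-F1 averages per-class F1 scores, matching the reported metric.
    print(f"{name}: macro-F1 = {f1_score(y_test, pred, average='macro'):.2f}")
```

On these random placeholder labels the scores will sit near chance (about 0.33 for three classes); the 0.82 macro-F1 reported in the abstract was obtained on the authors’ real multimodal sensor data.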
