Using rubrics to assess information literacy: An examination of methodology and interrater reliability
M. Oakleaf. Journal of the American Society for Information Science and Technology, 60 (5):
969--983 (2009)
DOI: 10.1002/asi.21030
Abstract
Academic librarians seeking to assess information literacy skills often focus on testing as a primary means of evaluation. Educators have long recognized the limitations of tests, and these limitations cause many educators to prefer rubric assessment to test-based approaches to evaluation. In contrast, many academic librarians are unfamiliar with the benefits of rubrics. Those librarians who have explored the use of information literacy rubrics have not taken a rigorous approach to methodology and interrater reliability. This article seeks to remedy these omissions by describing the benefits of a rubric-based approach to information literacy assessment, identifying a methodology for using rubrics to assess information literacy skills, and analyzing the interrater reliability of information literacy rubrics in the hands of university librarians, faculty, and students. Study results demonstrate that Cohen's κ can be effectively employed to check interrater reliability. The study also indicates that rubric training sessions improve interrater reliability among librarians, faculty, and students.
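The reliability statistic named in the abstract, Cohen's κ, compares observed rater agreement with the agreement expected by chance: κ = (p_o − p_e) / (1 − p_e). A minimal sketch of that computation for two raters, using hypothetical score vectors (the function name and sample data are illustrative, not from the study):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same items.

    rater_a, rater_b: equal-length sequences of category labels.
    Assumes the raters are not in perfect chance-level lockstep
    (i.e., expected agreement < 1), so the denominator is nonzero.
    """
    assert len(rater_a) == len(rater_b) and len(rater_a) > 0
    n = len(rater_a)

    # p_o: proportion of items on which the raters agree
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

    # p_e: chance agreement from each rater's marginal label frequencies
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)

    return (observed - expected) / (1 - expected)

# Hypothetical example: two raters score 8 artifacts pass (1) / fail (0)
kappa = cohens_kappa([1, 1, 0, 1, 0, 0, 1, 1],
                     [1, 1, 0, 0, 0, 0, 1, 1])
print(round(kappa, 2))  # → 0.75
```

A κ of 1.0 indicates perfect agreement, 0 indicates chance-level agreement; the study's training sessions aimed to raise this value among librarian, faculty, and student raters.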
Description
Using rubrics to assess information literacy: An examination of methodology and interrater reliability - Oakleaf - 2009 - Journal of the American Society for Information Science and Technology - Wiley Online Library
%0 Journal Article
%1 ASI:ASI21030
%A Oakleaf, Megan
%D 2009
%I Wiley Subscription Services, Inc., A Wiley Company
%J Journal of the American Society for Information Science and Technology
%K Bewertung Informationskompetenz evaluation information_literacy rubrics
%N 5
%P 969--983
%R 10.1002/asi.21030
%T Using rubrics to assess information literacy: An examination of methodology and interrater reliability
%U http://dx.doi.org/10.1002/asi.21030
%V 60
%X Academic librarians seeking to assess information literacy skills often focus on testing as a primary means of evaluation. Educators have long recognized the limitations of tests, and these limitations cause many educators to prefer rubric assessment to test-based approaches to evaluation. In contrast, many academic librarians are unfamiliar with the benefits of rubrics. Those librarians who have explored the use of information literacy rubrics have not taken a rigorous approach to methodology and interrater reliability. This article seeks to remedy these omissions by describing the benefits of a rubric-based approach to information literacy assessment, identifying a methodology for using rubrics to assess information literacy skills, and analyzing the interrater reliability of information literacy rubrics in the hands of university librarians, faculty, and students. Study results demonstrate that Cohen's κ can be effectively employed to check interrater reliability. The study also indicates that rubric training sessions improve interrater reliability among librarians, faculty, and students.
@article{ASI:ASI21030,
abstract = {Academic librarians seeking to assess information literacy skills often focus on testing as a primary means of evaluation. Educators have long recognized the limitations of tests, and these limitations cause many educators to prefer rubric assessment to test-based approaches to evaluation. In contrast, many academic librarians are unfamiliar with the benefits of rubrics. Those librarians who have explored the use of information literacy rubrics have not taken a rigorous approach to methodology and interrater reliability. This article seeks to remedy these omissions by describing the benefits of a rubric-based approach to information literacy assessment, identifying a methodology for using rubrics to assess information literacy skills, and analyzing the interrater reliability of information literacy rubrics in the hands of university librarians, faculty, and students. Study results demonstrate that Cohen's κ can be effectively employed to check interrater reliability. The study also indicates that rubric training sessions improve interrater reliability among librarians, faculty, and students.},
added-at = {2013-08-20T12:13:12.000+0200},
author = {Oakleaf, Megan},
biburl = {https://www.bibsonomy.org/bibtex/2ab662c301e0b646b198068b757481d8e/blostben},
description = {Using rubrics to assess information literacy: An examination of methodology and interrater reliability - Oakleaf - 2009 - Journal of the American Society for Information Science and Technology - Wiley Online Library},
doi = {10.1002/asi.21030},
interhash = {ccb333cd27c46f3e912ae5df1859a8d9},
intrahash = {ab662c301e0b646b198068b757481d8e},
issn = {1532-2890},
journal = {Journal of the American Society for Information Science and Technology},
keywords = {Bewertung Informationskompetenz evaluation information_literacy rubrics},
number = 5,
pages = {969--983},
publisher = {Wiley Subscription Services, Inc., A Wiley Company},
timestamp = {2013-08-20T12:13:12.000+0200},
title = {Using rubrics to assess information literacy: An examination of methodology and interrater reliability},
url = {http://dx.doi.org/10.1002/asi.21030},
volume = 60,
year = 2009
}