@article{citeulike:14601712,
abstract = {{Purpose: Surveys are widely used in health professions education (HPE) research, yet little is known about the quality of the instruments employed. Poorly designed survey tools containing unclear or poorly formatted items can be difficult for respondents to interpret and answer, yielding low-quality data. This study assessed the quality of published survey instruments in HPE.

Method: In 2017, the authors performed an analysis of HPE research articles published in three high-impact journals in 2013. They included articles that employed at least one self-administered survey. They designed a coding rubric addressing five violations of established best practices for survey item design and used it to collect descriptive data on the validity and reliability evidence reported and to assess the quality of available survey items.

Results: Thirty-six articles met inclusion criteria and included the instrument for coding, with one article using two surveys, yielding 37 unique surveys. Authors reported validity and reliability evidence for 13 (35.1\%) and 8 (21.6\%) surveys, respectively. Results of the item-quality assessment revealed that a substantial proportion of published survey instruments violated established best practices in the design and visual layout of Likert-type rating items. Overall, 35 (94.6\%) of the 37 survey instruments analyzed contained at least one violation of best practices.

Conclusions: The majority of articles failed to report validity and reliability evidence, and a substantial proportion of the survey instruments violated established best practices in survey design. The authors suggest areas of future inquiry and provide several improvement recommendations for HPE researchers, reviewers, and journal editors.}},
author = {Artino, Anthony R. and Phillips, Andrew W. and Utrankar, Amol and Ta, Andrew Q. and Durning, Steven J.},
comment = {(private-note) First, researchers often refer to such instruments as ``validated surveys,'' which is inaccurate because validity and reliability are properties of the survey scores and their proposed interpretations in a given context, not properties of the survey instrument itself [16].},
doi = {10.1097/acm.0000000000002002},
issn = {1040-2446},
journal = {Academic Medicine},
keywords = {methods, qualitative, statistics, survey},
month = mar,
number = 3,
pages = {456--463},
title = {{``The Questions Shape the Answers''}},
url = {http://dx.doi.org/10.1097/acm.0000000000002002},
volume = 93,
year = 2018
}