Prompting for Comprehension: Exploring the Intersection of Explain in Plain English Questions and Prompt Writing
D. Smith, P. Denny, and M. Fowler. Proceedings of the Eleventh ACM Conference on Learning @ Scale, pages 39–50. New York, NY, USA, Association for Computing Machinery, (July 15, 2024)
DOI: 10.1145/3657604.3662039
Abstract
Learning to program requires the development of a variety of skills including the ability to read, comprehend, and communicate the purpose of code. In the age of large language models (LLMs), where code can be generated automatically, developing these skills is more important than ever for novice programmers. The ability to write precise natural language descriptions of desired behavior is essential for eliciting code from an LLM, and the code that is generated must be understood in order to evaluate its correctness and suitability. In introductory computer science courses, a common question type used to develop and assess code comprehension skill is the 'Explain in Plain English' (EiPE) question. In these questions, students are shown a segment of code and asked to provide a natural language description of that code's purpose. The adoption of EiPE questions at scale has been hindered by: 1) the difficulty of automatically grading short answer responses and 2) the ability to provide effective and transparent feedback to students. To address these shortcomings, we explore and evaluate a grading approach where a student's EiPE response is used to generate code via an LLM, and that code is evaluated against test cases to determine if the description of the code was accurate. This provides a scalable approach to creating code comprehension questions and enables feedback both through the code generated from a student's description and the results of test cases run on that code. We evaluate students' success in completing these tasks, their use of the feedback provided by the system, and their perceptions of the activity.
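The grading pipeline the abstract describes (student description → LLM-generated code → test-case evaluation) can be illustrated with a minimal sketch. This is not the authors' implementation: generate_code below is a hypothetical stand-in for an LLM call, and the harness simply executes the generated code against instructor-written test cases to produce the pass/fail feedback the paper discusses.

from typing import Callable, List, Tuple

def generate_code(description: str) -> str:
    """Hypothetical stand-in for an LLM call that turns a student's
    plain-English description into code; returns a canned answer so
    the sketch runs without network access."""
    return "def solution(xs):\n    return sum(xs)\n"

def grade_response(description: str,
                   test_cases: List[Tuple[tuple, object]]) -> Tuple[bool, List[str]]:
    """Generate code from the description, run it against the test
    cases, and return a pass/fail verdict plus per-case feedback."""
    code = generate_code(description)
    namespace: dict = {}
    exec(code, namespace)  # define solution() from the generated code
    fn: Callable = namespace["solution"]
    feedback = []
    all_passed = True
    for args, expected in test_cases:
        actual = fn(*args)
        passed = actual == expected
        all_passed = all_passed and passed
        status = "PASS" if passed else f"FAIL (expected {expected!r})"
        feedback.append(f"solution{args!r} -> {actual!r}: {status}")
    return all_passed, feedback

if __name__ == "__main__":
    tests = [(([1, 2, 3],), 6), (([],), 0)]
    ok, report = grade_response("Return the sum of the numbers in the list.", tests)
    print("Description accepted." if ok else "Description rejected; see feedback.")
    print("\n".join(report))

In the actual system, the generated code itself is also shown to the student, so feedback is transparent: the student sees both how the LLM interpreted their description and which test cases that interpretation failed.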
Description
Prompting for Comprehension: Exploring the Intersection of Explain in Plain English Questions and Prompt Writing | Proceedings of the Eleventh ACM Conference on Learning @ Scale
%0 Conference Paper
%1 Smith2024
%A Smith, David H.
%A Denny, Paul
%A Fowler, Max
%B Proceedings of the Eleventh ACM Conference on Learning @ Scale
%C New York, NY, USA
%D 2024
%I Association for Computing Machinery
%K code-comprehension las2024 llm progtutor
%P 39–50
%R 10.1145/3657604.3662039
%T Prompting for Comprehension: Exploring the Intersection of Explain in Plain English Questions and Prompt Writing
%U https://doi.org/10.1145/3657604.3662039
%X Learning to program requires the development of a variety of skills including the ability to read, comprehend, and communicate the purpose of code. In the age of large language models (LLMs), where code can be generated automatically, developing these skills is more important than ever for novice programmers. The ability to write precise natural language descriptions of desired behavior is essential for eliciting code from an LLM, and the code that is generated must be understood in order to evaluate its correctness and suitability. In introductory computer science courses, a common question type used to develop and assess code comprehension skill is the 'Explain in Plain English' (EiPE) question. In these questions, students are shown a segment of code and asked to provide a natural language description of that code's purpose. The adoption of EiPE questions at scale has been hindered by: 1) the difficulty of automatically grading short answer responses and 2) the ability to provide effective and transparent feedback to students. To address these shortcomings, we explore and evaluate a grading approach where a student's EiPE response is used to generate code via an LLM, and that code is evaluated against test cases to determine if the description of the code was accurate. This provides a scalable approach to creating code comprehension questions and enables feedback both through the code generated from a student's description and the results of test cases run on that code. We evaluate students' success in completing these tasks, their use of the feedback provided by the system, and their perceptions of the activity.
%@ 9798400706332
@inproceedings{Smith2024,
abstract = {Learning to program requires the development of a variety of skills including the ability to read, comprehend, and communicate the purpose of code. In the age of large language models (LLMs), where code can be generated automatically, developing these skills is more important than ever for novice programmers. The ability to write precise natural language descriptions of desired behavior is essential for eliciting code from an LLM, and the code that is generated must be understood in order to evaluate its correctness and suitability. In introductory computer science courses, a common question type used to develop and assess code comprehension skill is the 'Explain in Plain English' (EiPE) question. In these questions, students are shown a segment of code and asked to provide a natural language description of that code's purpose. The adoption of EiPE questions at scale has been hindered by: 1) the difficulty of automatically grading short answer responses and 2) the ability to provide effective and transparent feedback to students. To address these shortcomings, we explore and evaluate a grading approach where a student's EiPE response is used to generate code via an LLM, and that code is evaluated against test cases to determine if the description of the code was accurate. This provides a scalable approach to creating code comprehension questions and enables feedback both through the code generated from a student's description and the results of test cases run on that code. We evaluate students' success in completing these tasks, their use of the feedback provided by the system, and their perceptions of the activity.},
added-at = {2024-07-18T18:02:04.000+0200},
address = {New York, NY, USA},
author = {Smith, David H. and Denny, Paul and Fowler, Max},
biburl = {https://www.bibsonomy.org/bibtex/2cf73b05d64f670e1f7173ffd0cc675aa/brusilovsky},
booktitle = {Proceedings of the Eleventh ACM Conference on Learning @ Scale},
day = 15,
description = {Prompting for Comprehension: Exploring the Intersection of Explain in Plain English Questions and Prompt Writing | Proceedings of the Eleventh ACM Conference on Learning @ Scale},
doi = {10.1145/3657604.3662039},
interhash = {12b749628de15dc9261ccf16518f58a8},
intrahash = {cf73b05d64f670e1f7173ffd0cc675aa},
isbn = {9798400706332},
keywords = {code-comprehension las2024 llm progtutor},
location = {Atlanta, GA, USA},
month = jul,
pages = {39--50},
publisher = {Association for Computing Machinery},
series = {L@S '24},
timestamp = {2024-07-18T18:02:04.000+0200},
title = {Prompting for Comprehension: Exploring the Intersection of Explain in Plain English Questions and Prompt Writing},
url = {https://doi.org/10.1145/3657604.3662039},
year = 2024
}