Generating Educational Explanations using Assertion-Enhanced Few-Shot Learning
Designed an effective prompting technique for Large Language Models (LLMs) to generate accurate explanations in problem-solving domains, reducing hallucinations by 15% in an Intelligent Tutoring System compared with traditional chain-of-thought prompting. Evaluated and compared the quality of the generated educational explanations through analysis of survey data from in-service middle school teachers.
The work is currently available as a preprint: Paper link
In this project, we proposed a method to include conceptual knowledge, expressed as assertions, in prompts for Large Language Models (LLMs), kept separate from the few-shot demonstrations.
We conducted qualitative ablation studies to evaluate the effectiveness of this approach. Our findings revealed that embedding assertions inside the few-shot demonstrations, or simply increasing the number of few-shot examples, resulted in poorer explanation quality. In contrast, separating assertions from the few-shot demonstrations in the prompt led to higher-quality explanations.
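To make the prompt structure concrete, here is a minimal sketch of how assertions can be kept in their own block, separate from the few-shot demonstrations. The specific assertions, demonstrations, and section labels below are hypothetical illustrations, not the actual prompt used in the study.

```python
# Hypothetical conceptual assertions (domain facts the explanation must respect).
ASSERTIONS = [
    "A fraction a/b is equivalent to (a*k)/(b*k) for any nonzero k.",
    "To add fractions, first rewrite them with a common denominator.",
]

# Hypothetical few-shot demonstrations: worked problem/explanation pairs.
FEW_SHOT_DEMOS = [
    {
        "problem": "Add 1/2 + 1/3.",
        "explanation": "Rewrite with denominator 6: 3/6 + 2/6 = 5/6.",
    },
]

def build_prompt(problem: str) -> str:
    """Assemble a prompt with assertions in a dedicated block,
    placed separately from (not inside) the demonstrations."""
    assertion_block = "Assertions:\n" + "\n".join(f"- {a}" for a in ASSERTIONS)

    demo_block = "\n\n".join(
        f"Problem: {d['problem']}\nExplanation: {d['explanation']}"
        for d in FEW_SHOT_DEMOS
    )

    return (
        f"{assertion_block}\n\n"
        f"Examples:\n{demo_block}\n\n"
        f"Problem: {problem}\nExplanation:"
    )

prompt = build_prompt("Add 2/5 + 1/4.")
print(prompt)
```

The key design point is that the assertion block is a standalone section of the prompt rather than being interleaved into each demonstration, which is the variant the ablation studies found to produce higher-quality explanations.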
Our proposed prompt reduced hallucinations by 15% compared to traditional chain-of-thought prompting, and in a survey of in-service middle school teachers it received significantly higher quality ratings, averaging 4 out of 5 across five metrics.