Within the learning-by-teaching paradigm, students, whom we refer to as tutors, often tend to dictate what they know or what to do rather than reflecting on their knowledge when assisting a teachable agent (TA). It is therefore vital to explore more effective ways of fostering tutor reflection and enhancing the learning experience. While TAs can employ static follow-up questions, such as "Can you clarify or explain more in detail?", to encourage reflective thinking, the question arises: can Large Language Models (LLMs) generate more adaptive and contextually driven questions to deepen tutor engagement and facilitate their learning process? In this paper, we propose ExpectAdapt, a novel questioning framework for the TA that uses three stacked LLMs to promote reflective thinking in tutors, thereby facilitating tutor learning. ExpectAdapt generates questions by directing tutors toward an expected response while adapting to the tutor’s contributions, using the conversation history as a contextual guide. Our empirical study with 42 middle-school students demonstrates that adaptive follow-up questions facilitated tutor learning by effectively increasing problem-solving accuracy in the learning-by-teaching environment, compared to tutors answering static follow-up questions or no follow-up questions.
What and How You Explain Matters: Inquisitive Teachable Agent Scaffolds Knowledge-Building for Tutor Learning
Tasmia Shahriar, and Noboru Matsuda
In International Conference on Artificial Intelligence in Education, 2023
Students learn by teaching a teachable agent, a phenomenon called tutor learning. The literature suggests that tutor learning happens when students (who tutor the teachable agent) actively reflect on their knowledge when responding to the teachable agent’s inquiries (aka knowledge-building). However, most students lean toward delivering what they already know instead of reflecting on their knowledge (aka knowledge-telling). This knowledge-telling behavior weakens the effect of tutor learning. We hypothesize that the teachable agent can help students commit to knowledge-building by being inquisitive and asking follow-up inquiries when students engage in knowledge-telling. Despite the known benefits of knowledge-building, no prior work has operationalized the identification of knowledge-building and knowledge-telling features in students’ responses to a teachable agent’s inquiries and used them to steer students toward knowledge-building. We propose a Constructive Tutee Inquiry that provides follow-up inquiries to guide students toward knowledge-building when they commit to a knowledge-telling response. Results from an evaluation study show that students who received Constructive Tutee Inquiry not only outperformed those who did not but also learned to engage in knowledge-building without the aid of follow-up inquiries over time.
“Can you clarify what you said?”: Studying the impact of tutee agents’ follow-up questions on tutors’ learning
Tasmia Shahriar, and Noboru Matsuda
In Artificial Intelligence in Education: 22nd International Conference, AIED 2021, Utrecht, The Netherlands, June 14–18, 2021, Proceedings, Part I, acceptance rate: 24% out of 168 submissions, 2021
Students learn by teaching others as tutors. Advancement in the theory of learning by teaching has given rise to many pedagogical agents. In this paper, we exploit a known cognitive theory stating that if a tutee asks deep questions in a peer-tutoring environment, the tutor benefits from it. Little is known, however, about a computational model of such deep questions. This paper aims to formalize deep tutee questions and proposes a generalized model of inquiry-based dialogue, called the Constructive Tutee Inquiry, which asks follow-up questions to have tutors reflect on their current knowledge (aka knowledge-building activity). We conducted a Wizard of Oz study to evaluate the proposed Constructive Tutee Inquiry. The results showed that the Constructive Tutee Inquiry was particularly effective in helping students with low prior knowledge learn conceptual knowledge.
Continuous obstructed detour queries
Rudra Ranajee Saha, Tanzima Hashem, Tasmia Shahriar , and 1 more author
In 10th International Conference on Geographic Information Science (GIScience 2018), 2018
In this paper, we introduce Continuous Obstructed Detour (COD) Queries, a novel query type in spatial databases. COD queries continuously return the nearest points of interest (POIs), such as a restaurant, an ATM, or a pharmacy, with respect to the current location and the fixed destination of a moving pedestrian in the presence of obstacles like a fence, a lake, or a private building. The path toward a destination is typically not predetermined, and the nearest POIs can change over time as a pedestrian’s current location moves toward the fixed destination. The distance to a POI is measured as the sum of the obstructed distance from the pedestrian’s current location to the POI and the obstructed distance from the POI to the pedestrian’s destination. Evaluating the query for every change in a pedestrian’s location would incur extremely high processing overhead. We develop an efficient solution for COD queries and verify the effectiveness and efficiency of our solution in experiments.
Preprints
Assertion Enhanced Few-Shot Learning: Instructive Technique for Large Language Models to Generate Educational Explanations