Mehul MOTANI1,2*, Kei Sen FONG1, and John Chong Min TAN1
1Department of Electrical and Computer Engineering,
College of Design and Engineering (CDE),
2Institute of Data Science, Institute for Digital Medicine (WisDM), N.1 Institute for Health,
National University of Singapore (NUS)
Motani, M., Fong, K. S., & Tan, J. C. M. (2024). A tool for learning via productive struggle using generative AI [Paper presentation]. Higher Education Conference in Singapore (HECS) 2024, 3 December, National University of Singapore. https://blog.nus.edu.sg/hecs/hecs2024-motani-et-al/
SUB-THEME
Opportunities from Generative AI
KEYWORDS
Student learning, generative AI, symbolic regression, large language models
CATEGORY
Paper Presentation
EXTENDED ABSTRACT
Learning is often facilitated through struggle. This principle is embodied in an approach called productive struggle (PS), in which students are persuaded that struggling is part of learning and should be embraced. Productive struggle is related to productive failure (PF), in which students are given a task without prior instruction on how to solve it and allowed (even encouraged) to fail (Kapur, 2016). The key idea is that the initial struggle and the experience of failure can enhance learning and understanding when the correct solutions and underlying principles are subsequently taught. In this work, we adopt PS and combine it with generative AI tools, e.g., large language models and symbolic regression, to help students make progress on a learning task by providing fast, personalised feedback (Peng et al., 2019). We demonstrate our ideas on a specific learning task, namely building students’ intuition about the structure of mathematical equations, and show how this could work via a prototype system. Our work contributes to the broader movement to explore the positive impacts of AI in education (Chen et al., 2020).
METHODS
In this work, we present an interactive tool with graphing and large language model (LLM) capabilities to build students’ intuition about the structure of mathematical equations. We name this tool Guess the Equation (GTE). GTE starts with a set of equations designed by the instructor. For each equation, GTE displays a plot of points sampled from the equation, and the student’s task is to guess the functional form of the equation without any prior knowledge (see Figure 1).
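To make the task setup concrete, the following is a minimal Python sketch of how one such task instance could be generated; the hidden equation, sampling range, and noise level are illustrative assumptions, not GTE’s actual configuration.

```python
# Illustrative sketch: build one GTE-style task instance by sampling points
# from a hidden, instructor-designed equation and plotting them for the
# student. All parameters here are assumptions for illustration.
import numpy as np
import matplotlib.pyplot as plt

def make_task(true_fn, x_min=-3.0, x_max=3.0, n_points=40, noise_sd=0.05, seed=0):
    """Sample (optionally noisy) points from the hidden equation."""
    rng = np.random.default_rng(seed)
    x = np.sort(rng.uniform(x_min, x_max, n_points))
    y = true_fn(x) + rng.normal(0.0, noise_sd, n_points)
    return x, y

# Hidden equation the student must guess (kept out of the student's view).
true_fn = lambda x: x**2 + np.sin(3 * x)

x, y = make_task(true_fn)
plt.scatter(x, y, s=15)
plt.xlabel("x")
plt.ylabel("y")
plt.title("Guess the equation behind these points")
plt.show()
```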
The student first makes an initial guess in natural-language text. GTE then explains, qualitatively, how the guess differs from the true equation, and displays a plot of the guess alongside the sampled points (see Figure 2). The student then modifies the guess, which GTE evaluates, providing qualitative hints that guide the student towards the answer. This iteration repeats until the student arrives at the correct equation. Throughout this process, the student experiences multiple failures, each accompanied by a qualitative hint towards the right answer.
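The loop below is a minimal sketch of this interaction. The helpers get_student_guess and llm_hint are hypothetical stand-ins for GTE’s user interface and LLM components, and, for simplicity, guesses are assumed to arrive in equation form (in GTE, the LLM also interprets natural-language guesses).

```python
# Illustrative sketch of GTE's guess-feedback loop (not GTE's actual code).
import numpy as np
import sympy as sp

def parse_guess(text):
    """Turn a guessed expression, e.g. 'x**2 + sin(3*x)', into a callable."""
    x = sp.symbols("x")
    return sp.lambdify(x, sp.sympify(text), "numpy")

def close_enough(f_guess, f_true, x, tol=1e-2):
    """Compare guess and truth by mean squared error on the sampled points."""
    return np.mean((f_guess(x) - f_true(x)) ** 2) < tol

def run_round(f_true, x, get_student_guess, llm_hint):
    while True:
        guess_text = get_student_guess()        # student's next attempt
        f_guess = parse_guess(guess_text)
        if close_enough(f_guess, f_true, x):
            return "Correct! " + guess_text
        # Each failure is paired with a qualitative hint, never the answer.
        print(llm_hint(guess_text, f_guess(x), f_true(x)))

# Demo with a scripted "student" and a trivial placeholder hint function.
x_pts = np.linspace(-3, 3, 50)
guesses = iter(["x**3", "x**2 + sin(3*x)"])
print(run_round(lambda v: v**2 + np.sin(3 * v), x_pts,
                get_student_guess=lambda: next(guesses),
                llm_hint=lambda g, yg, yt: f"'{g}' grows too fast; revisit the curvature."))
```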
In developing GTE, we utilise two main technical components:
- TaskGen (Tan et al., 2024), a generative AI agentic framework that breaks a complex task down into subtasks. In place of the free-form text output common to LLMs, TaskGen uses a structured output format, StrictJSON, at each step of the process. StrictJSON is an LLM output parser for the JSON format with type checking, ensuring extractable outputs that are compatible with downstream tasks such as graph plotting and code execution (a sketch of the StrictJSON idea follows this list).
- Symbolic Regression (SR) (Fong et al., 2023), an approach that learns closed-form functional expressions from data. SR is used to suggest incorrect yet reasonably well-fitting equations, allowing GTE to provide meaningful hints without divulging the true answer. SR is the key component that generates “near-successes”, which serve as productive negative examples (a sketch of near-success generation follows this list).
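To illustrate the StrictJSON idea, the sketch below parses an LLM reply against a fixed output format with type checking. It mimics the behaviour described above but is not the actual TaskGen/StrictJSON API, and the keys in OUTPUT_FORMAT are illustrative.

```python
# Minimal sketch of StrictJSON-style parsing: enforce that an LLM reply
# contains exactly the expected keys with the expected types, so downstream
# steps (graph plotting, code execution) receive reliable fields.
import json

OUTPUT_FORMAT = {"hint": str, "guess_is_correct": bool, "plot_expression": str}

def parse_strict(llm_reply: str, output_format=OUTPUT_FORMAT):
    """Parse the reply as JSON and check key presence and value types."""
    data = json.loads(llm_reply)
    for key, expected_type in output_format.items():
        if key not in data:
            raise ValueError(f"Missing key: {key}")
        if not isinstance(data[key], expected_type):
            raise TypeError(f"'{key}' should be of type {expected_type.__name__}")
    return data

reply = ('{"hint": "Your curve grows too fast for large x.", '
         '"guess_is_correct": false, "plot_expression": "x**2"}')
print(parse_strict(reply))
```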
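The sketch below shows how SR could supply such near-successes; it uses the open-source PySR library as a stand-in for our SR method (Fong et al., 2023), fitting candidate equations to the sampled points and selecting a well-fitting candidate other than the overall best.

```python
# Sketch of near-success generation with symbolic regression (PySR used as
# an illustrative stand-in, not the method of Fong et al., 2023).
import numpy as np
from pysr import PySRRegressor

# Points sampled from the hidden equation (same toy example as above).
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 40)
y = x**2 + np.sin(3 * x)

model = PySRRegressor(
    niterations=30,
    binary_operators=["+", "-", "*"],
    unary_operators=["sin", "cos"],
)
model.fit(x.reshape(-1, 1), y)

# model.equations_ is a DataFrame of candidate equations; pick a low-loss
# candidate that is NOT the best fit, so the hint does not reveal the answer.
candidates = model.equations_.sort_values("loss")
near_miss = candidates.iloc[1]["equation"]
print("Near-success to base hints on:", near_miss)
```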
DISCUSSION
GTE provides reasoning in the form of Thoughts (Figure 3, green text) and a Summary of Conversation (Figure 3, purple text), which helps instructors troubleshoot and verify the hints given to students. This also reduces the chance of LLM hallucinations (Ji et al., 2023).
GTE is robust and provides appropriate responses to diverse student inputs (see Figures 4 and 5). Note that in Figure 2, the input is not even in equation form. This is an improvement over traditional learning tools, which require manual planning and design of edge cases.
CONCLUSION
This study presents a novel PS-based approach that uses generative AI and SR to enhance student learning. By utilising state-of-the-art AI tools, GTE gives students fast, iterative, automated, and personalised feedback, allowing them to gain more experience with failure. Future work includes a quantitative comparison with conventional teaching methods. We note that our approach can be generalised to learning tasks in other areas, such as machine learning, engineering, and physics, demonstrating the potential of AI in education (Chen et al., 2020).
REFERENCES
Kapur, M. (2016). Examining productive failure, productive success, unproductive failure, and unproductive success in learning. Educational Psychologist, 51(2), 289-299. https://doi.org/10.1080/00461520.2016.1155457
Peng, H., Ma, S., & Spector, J. M. (2019). Personalized adaptive learning: an emerging pedagogical approach enabled by a smart learning environment. Smart Learning Environments, 6(1), 1-14. https://doi.org/10.1186/s40561-019-0089-y
Chen, L., Chen, P., & Lin, Z. (2020). Artificial intelligence in education: A review. IEEE Access, 8, 75264-75278. https://doi.org/10.1109/ACCESS.2020.2988510
Tan, J. C. M., Saroj, P., Runwal, B., Maheshwari, H., Sheng, B. L. Y., Cottrill, R., … & Motani, M. (2024). TaskGen: A task-based, memory-infused agentic framework using StrictJSON. arXiv preprint arXiv:2407.15734. https://doi.org/10.48550/arXiv.2407.15734
Fong, K. S., Wongso, S., & Motani, M. (2023). Rethinking symbolic regression: Morphology and adaptability in the context of evolutionary algorithms. In The Eleventh International Conference on Learning Representations.
Ji, Z., Yu, T., Xu, Y., Lee, N., Ishii, E., & Fung, P. (2023). Towards mitigating LLM hallucination via self reflection. In Findings of the Association for Computational Linguistics: EMNLP 2023 (pp. 1827-1843). https://ar5iv.labs.arxiv.org/html/2310.06271