Contextually Relevant Question Generation with Large Language Models

Jean ONG Hui Fang* and YEO Wee Kiang

School of Computing, National University of Singapore

* e0949099@u.nus.edu

Ong, J. H. F., & Yeo, W. K. (2024). Contextually Relevant Question Generation with Large Language Models [Lightning Talk]. In Higher Education Conference in Singapore (HECS) 2024, 3 December, National University of Singapore. https://blog.nus.edu.sg/hecs/hecs2024-ong-and-yeo

SUB-THEME

Opportunities from Generative AI

KEYWORDS

Generative AI, Bloom’s Taxonomy, Large Language Models, Question Generation, Cognitive Levels

CATEGORY

Lightning Talk

EXTENDED ABSTRACT

Question generation, the task of generating questions from various inputs (Rus et al., 2008), is a critical aspect of the educational process. Questions encourage learners to engage, recall information, identify misconceptions, focus on key material, and reinforce concepts (Thalheimer, 2003). Research has shown that incorporating questions in teaching is highly beneficial, as it encourages students to engage in self-explanation (Chi et al., 1994). Despite its benefits, crafting questions remains a manual and complex process, requiring training, experience, and resources. Automatic question generation (AQG) offers a promising solution in education and has garnered increasing interest across various research communities (Kurdi et al., 2020). Effective AQG allows educators to spend more time on other important instructional activities while enhancing the efficiency and scalability of producing quality questions for various purposes. Past reviews of AQG systems highlight persistent challenges: producing questions aimed at high cognitive levels, controlling question difficulty, and providing constructive feedback to learners (Zhang et al., 2021).

Angel for material-based Q&A

In 2023, the “Angel” approach emerged as a notable advancement in AQG, addressing key challenges in the field. This method leverages advanced prompting, automated curation, and thorough evaluation metrics, integrating educational frameworks such as Bloom’s Taxonomy to guide Large Language Models (LLMs) in creating higher-order cognitive questions. The Angel approach follows a three-step process:

  1. Question and Answer Generation: Employs advanced prompt-based methods to produce questions and answers of varying difficulty.
  2. Self-Augmentation: Uses a high temperature setting (0.9) to generate diverse question-answer pairs for each educational paragraph.
  3. Q&A Self-Curation: Selects questions that promote higher-order thinking skills, as identified by an LLM during generation, based on Bloom’s Taxonomy.

“Angel” demonstrates the potential of LLMs to generate high-quality question-answer pairs covering a diverse range of cognitive skills (Blobstein et al., 2023).
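The three-step process above can be sketched in Python. The `generate_qa` function below is a hypothetical stand-in for an actual LLM call (the original study prompts a real model); only the pipeline shape — high-temperature sampling followed by Bloom’s-Taxonomy-based self-curation — mirrors the Angel method.

```python
import random

# Hypothetical stand-in for an LLM call. A real implementation would
# prompt a large language model with the paragraph and a Bloom's-level
# instruction, sampling at the given temperature; here we fake diverse
# outputs purely for illustration.
def generate_qa(paragraph, temperature):
    levels = ["Remember", "Understand", "Apply",
              "Analyze", "Evaluate", "Create"]
    level = random.choice(levels)
    return {"question": f"[{level}-level question about: {paragraph[:30]}...]",
            "answer": "[model answer]",
            "bloom_level": level}

# Levels conventionally treated as higher-order in Bloom's Taxonomy.
HIGHER_ORDER = {"Analyze", "Evaluate", "Create"}

def angel_pipeline(paragraph, n_samples=8, temperature=0.9):
    # Steps 1 & 2: generate diverse Q&A pairs at high temperature (0.9).
    candidates = [generate_qa(paragraph, temperature)
                  for _ in range(n_samples)]
    # Step 3: self-curation — keep only pairs the model itself labeled
    # as targeting higher-order skills on Bloom's Taxonomy.
    return [qa for qa in candidates if qa["bloom_level"] in HIGHER_ORDER]
```

The function and variable names here are our own; the sketch shows only how the three steps compose.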

Zero-shot Angel for advanced question generation

With an emphasis on question generation (QG), we extend the experiment in the original study with specific modifications that address existing limitations and broaden its applicability and utility within the educational sector. In one modification, we adapt the “Angel” method from its original few-shot approach into a zero-shot framework, allowing LLMs to generate questions for each paragraph without needing sample questions. Comparing Figure 2 to Figure 1 below, our findings show that the zero-shot method is as effective as the few-shot original when a suitably large model is used. An example of a question generated by the Zero-Shot Angel method is: “Discuss and propose a sustainable consumption plan for future generations. How can we ensure responsible consumption of exhaustible natural resources like coal, petroleum, and natural gas?” In contrast, a question generated without this method is: “What country has vast reserves of natural gas?”
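The practical difference between the two settings lies in prompt construction: the zero-shot variant drops the worked examples. The sketch below is illustrative only — the prompt wording and function names are our own, not the exact prompts used in either study.

```python
def build_zero_shot_prompt(paragraph):
    # Zero-shot: no worked examples, just an instruction that names the
    # higher-order levels of Bloom's Taxonomy. (Wording is illustrative.)
    return (
        "You are an educator. Read the paragraph below and write one "
        "question that targets a higher-order level of Bloom's Taxonomy "
        "(Analyze, Evaluate, or Create), along with a model answer.\n\n"
        f"Paragraph:\n{paragraph}"
    )

def build_few_shot_prompt(paragraph, examples):
    # Few-shot: prepend sample paragraph-question pairs, as in the
    # original Angel study, before the same instruction.
    demos = "\n\n".join(f"Paragraph:\n{p}\nQuestion: {q}"
                        for p, q in examples)
    return demos + "\n\n" + build_zero_shot_prompt(paragraph)
```

Either prompt would then be sent to the LLM; the zero-shot variant removes the burden of authoring sample questions for each new topic.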


Figure 1. Bloom’s Taxonomy scores including Few-Shot Angel (Original Study)



Figure 2. Bloom’s Taxonomy scores including Zero-Shot Angel

Contextual relevance to learning objectives

To improve the relevance of generated questions and their alignment with learning objectives, it is crucial to integrate contextual information and learning outcomes into the question formulation process. We experiment with various retrieval methods and explore the practicality of incorporating LLMs into an AQG system, including the use of LLM-based evaluation methods. Our findings compare LLM-based and human evaluations, highlighting their effectiveness and reliability for question generation.
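As a minimal illustration of what retrieval for contextual relevance can mean, the toy function below scores paragraphs by lexical overlap with a learning objective and returns the best matches, which would then be passed to the question generator. The retrieval methods compared in our study are more sophisticated; this sketch, with names of our own choosing, only shows where retrieval slots into the pipeline.

```python
def retrieve_for_objective(objective, paragraphs, top_k=1):
    # Toy lexical-overlap retriever: score each paragraph by how many
    # words from the learning objective it contains, then keep the
    # top_k highest-scoring paragraphs. (Illustrative only; real
    # systems typically use embedding-based similarity.)
    obj_words = set(objective.lower().split())
    scored = sorted(paragraphs,
                    key=lambda p: len(obj_words & set(p.lower().split())),
                    reverse=True)
    return scored[:top_k]
```

The retrieved paragraphs, rather than the full source material, would then serve as the context for generating questions aligned with that objective.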

REFERENCES

Blobstein, A., Yifal, T., Izmaylov, D., Levy, M., & Segal, A. (2023). Angel: A new generation tool for learning material based questions and answers. NeurIPS’23 Workshop on Generative AI for Education (GAIED). http://gaied.org/9_paper.pdf

Rus, V., Cai, Z., & Graesser, A. (2008). Question generation: Example of a multi-year evaluation campaign. Proceedings of the Workshop on the Question Generation Shared Task and Evaluation Challenge. https://www.researchgate.net/profile/Zhiqiang-Cai/publication/228948043_Question_Generation_Example_of_A_Multi-year_Evaluation_Campaign/links/560d4cb708aeed9d13751bd2/Question-Generation-Example-of-A-Multi-year-Evaluation-Campaign.pdf

Thalheimer, W. (2003). The learning benefits of questions. Work Learning Research.

Chi, M. T. H., De Leeuw, N., Chiu, M.-H., & LaVancher, C. (1994). Eliciting self-explanations improves understanding. Cognitive Science, 18(3), 439-477. https://doi.org/10.1016/0364-0213(94)90016-7

Kurdi, G., Leo, J., Parsia, B., Sattler, U., & Al-Emari, S. (2020). A systematic review of automatic question generation for educational purposes. International Journal of Artificial Intelligence in Education, 30, 121-204. https://doi.org/10.1007/s40593-019-00186-y

Zhang, R., Guo, J., Chen, L., Fan, Y., & Cheng, X. (2021). A review on question generation from natural language text. ACM Transactions on Information Systems (TOIS), 40(1), 1-43. https://doi.org/10.1145/3468889
