Exploring the Role of Generative AI as a Training Tool for Medical Undergraduates in Discharge Summary Writing – Methodology and Study Design

Nathasha LUKE2,*, CHUA Chun En1, and Desmond B. TEO1

1Department of Medicine, NUHS
2Department of Physiology, Yong Loo Lin School of Medicine, NUS

*nathasha@nus.edu.sg

Luke, N., Chua, C. E., & Teo, D. B. (2024). Exploring the role of generative AI as a training tool for medical undergraduates in discharge summary writing – Methodology and study design [Lightning Talk]. In Higher Education Conference in Singapore (HECS) 2024, 3 December, National University of Singapore. https://blog.nus.edu.sg/hecs/hecs2024-luke-et-al

SUB-THEME

Opportunities from Generative AI

KEYWORDS

Discharge summary, Generative AI, Chatbot, Large Language Models

CATEGORY

Lightning Talk

INTRODUCTION

A discharge summary is a permanent record of a patient’s hospitalisation, which should be concise yet contain adequate and accurate information about the hospitalisation (Ando et al., 2022). Substandard discharge summaries result in gaps in subsequent patient follow-up, clinical coding of data, hospital subvention, and medical insurance (Sukanya, 2017). Globally, discharge summaries are typically authored by junior doctors, yet there is little formal teaching or quality assessment in most training programmes. An initial audit of 100 discharge summaries within the Department of Medicine, National University Hospital, in January 2021 revealed that only 21% had complete information.

 

To address this gap, a teaching programme was implemented to train medical students in discharge summary writing, with hands-on, case-based sessions in which students drafted discharge summaries and tutors provided feedback. This programme demonstrated an improvement in the quality of discharge summaries over the years (Chua & Teo, 2023). However, running the programme was challenging because of the limited number of facilitators available to conduct the sessions and provide one-to-one feedback. Hence, we planned a project to evaluate the capability of Generative Artificial Intelligence (Gen AI) to provide feedback in discharge summary writing training.

METHODOLOGY AND WORKFLOW

To ensure sustainability without the need for facilitator manpower, this project centres on an interactive e-learning module complemented by Gen AI, which provides feedback on discharge summaries written by students in response to case scenarios. The Gen AI will assess the accuracy and quality of each discharge summary against a rubric and provide individualised feedback.
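
As an illustration of this workflow, a minimal sketch of rubric-based Gen AI feedback is shown below; the rubric items, model name, and prompt wording are assumptions for illustration only, not the rubric or platform used in this study.

```python
# Illustrative sketch only: rubric items, model name, and prompts are assumptions,
# not the rubric or platform used in this study.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

RUBRIC = """Assess the discharge summary on each criterion (0-2) and justify briefly:
1. Reason for admission and principal diagnosis
2. Key investigations and results
3. Treatment provided and procedures performed
4. Discharge medications and changes made
5. Follow-up plan and outstanding issues"""

def rubric_feedback(case_scenario: str, student_summary: str) -> str:
    """Ask the model to mark the summary against the rubric and return individualised feedback."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        temperature=0,   # deterministic marking
        messages=[
            {"role": "system", "content": "You are a senior physician marking a student's discharge summary."},
            {"role": "user", "content": f"Case scenario:\n{case_scenario}\n\n"
                                        f"Student discharge summary:\n{student_summary}\n\n"
                                        f"Rubric:\n{RUBRIC}\n\n"
                                        "Give a score per criterion and specific, constructive feedback."},
        ],
    )
    return response.choices[0].message.content
```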

 

This study will be conducted in two phases. In the initial phase, researchers will evaluate different Gen AI platforms to identify the platform best suited to providing feedback. In the subsequent phase, students will interact directly with the selected platform to receive feedback, and the researchers will evaluate their learning experience.

 

In the first phase, an e-learning module will be implemented to train students, followed by a formative assessment component in which students create and submit their discharge summaries through the learning management system (LMS). Each discharge summary will receive feedback from five arms: (1) an experienced clinician, and four generative AI platforms, namely (2) Llama 3, (3) Gemini, (4) Copilot, and (5) a GPT-4-powered chatbot. The feedback from these five arms will then be evaluated objectively by a blinded expert to identify the best platform.

 

In the second phase, students will interact directly with the selected platform, guided by the study team, to receive feedback on their discharge summaries. The generative AI outputs and student feedback will be evaluated to determine efficacy and to identify the best strategies for implementing the programme.

FIGURES AND TABLES


Figure 1. Methodology for Phase 1

 

Figure 2. Methodology for Phase 2

 

REFERENCES

Ando, K., Okumura, T., Komachi, M., Horiguchi, H., & Matsumoto, Y. (2022). Is artificial intelligence capable of generating hospital discharge summaries from inpatient records? PLOS Digital Health, 1(12). https://doi.org/10.1371/journal.pdig.0000158

Chua, C. E., & Teo, D. B. (2023). Writing a high‐quality discharge summary through structured training and assessment. Medical Education, 57(8), 773–774. https://doi.org/10.1111/medu.15102

Sukanya, C. (2017). Validity of principal diagnoses in discharge summaries and ICD-10 coding assessments based on national health data of Thailand. Healthcare Informatics Research, 23(4), 293-303. https://doi.org/10.4258/hir.2017.23.4.293

Contextually Relevant Question Generation with Large Language Models

Jean ONG Hui Fang* and YEO Wee Kiang

School of Computing, National University of Singapore

* e0949099@u.nus.edu

Ong, J. H. F., & Yeo, W. K. (2024). Contextually Relevant Question Generation with Large Language Models [Lightning Talk]. In Higher Education Conference in Singapore (HECS) 2024, 3 December, National University of Singapore. https://blog.nus.edu.sg/hecs/hecs2024-ong-and-yeo

SUB-THEME

Opportunities from Generative AI

KEYWORDS

Generative AI, Bloom's Taxonomy, Large Language Models, Question Generation, Cognitive Levels

CATEGORY

Lightning Talk

EXTENDED ABSTRACT

Question generation, the task of generating questions from various inputs (Rus et al., 2008), is a critical aspect of the educational process. Questions encourage learners to engage, recall information, identify misconceptions, focus on key material, and reinforce concepts (Thalheimer, 2003). Research has shown that incorporating questions in teaching is highly beneficial, as it encourages students to engage in self-explanation (Chi et al., 1994). Despite its benefits, crafting questions remains a manual and complex process, requiring training, experience, and resources. Automatic question generation (AQG) offers a promising solution in education and has gathered increasing interest across various research communities (Kurdi et al., 2020). Effective AQG allows educators to spend more time on other important instructional activities while enhancing the efficiency and scalability of producing quality questions for various purposes. Past reviews of AQG systems highlight persistent challenges: producing questions aimed at high cognitive levels, controlling question difficulty, and providing constructive feedback to learners (Zhang et al., 2021).

Angel for material-based Q&A

In 2023, the “Angel” approach emerged as a notable advancement in AQG, addressing key challenges in the field. This method leverages advanced prompting, automated curation, and thorough evaluation metrics, integrating educational frameworks such as Bloom’s Taxonomy to guide Large Language Models (LLMs) in creating higher-order cognitive questions. The Angel approach follows a three-step process:

  1. Question and Answer Generation: Employs advanced prompt-based methods to produce questions and answers of varying difficulty.
  2. Self-Augmentation: Uses a high-temperature setting (0.9) to generate diverse question-answer pairs for each educational paragraph.
  3. Q&A Self-Curation: Questions that promote higher-order thinking skills, as identified by an LLM during generation, are selected based on Bloom’s Taxonomy.

“Angel” demonstrates the potential of LLMs to generate high-quality question-answer pairs that cover a diverse range of cognitive skills (Blobstein et al., 2023).
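
The generate-then-curate idea can be sketched as follows; the prompts, model name, and JSON schema are illustrative assumptions and do not reproduce the Angel implementation.

```python
# Illustrative sketch of the generate-then-curate idea behind Angel.
# Prompts, model name, and JSON schema are assumptions, not the authors' implementation.
import json
from openai import OpenAI

client = OpenAI()
HIGHER_ORDER = {"analyze", "evaluate", "create"}  # upper levels of Bloom's Taxonomy

def generate_qa_pairs(paragraph: str, n: int = 5) -> list[dict]:
    """Steps 1 & 2: high-temperature generation of diverse Q&A pairs, each labelled with a Bloom level."""
    response = client.chat.completions.create(
        model="gpt-4o",      # placeholder model name
        temperature=0.9,     # high temperature for diversity, as in the Angel approach
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": "You write exam questions from learning material."},
            {"role": "user", "content": f"Paragraph:\n{paragraph}\n\n"
                                        f"Write {n} question-answer pairs of varying difficulty. "
                                        'Return JSON: {"pairs": [{"question": ..., "answer": ..., "bloom_level": ...}]}'},
        ],
    )
    return json.loads(response.choices[0].message.content)["pairs"]

def curate(pairs: list[dict]) -> list[dict]:
    """Step 3: keep only pairs the model labelled with higher-order Bloom levels."""
    return [p for p in pairs if p.get("bloom_level", "").lower() in HIGHER_ORDER]
```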

Zero-shot Angel for advanced question generation

With an emphasis on question generation (QG), we extend the experiment in the original study with specific modifications to address existing limitations and to broaden its applicability and utility within the educational sector. In one modification, we adapt the ‘Angel’ method from its original few-shot approach into a zero-shot framework, which allows LLMs to generate questions for each paragraph without needing sample questions. Comparing Figure 2 with Figure 1 below, our findings show that the zero-shot method is just as effective when a suitably large model is used. An example of a question generated by the Zero-Shot Angel method is: “Discuss and propose a sustainable consumption plan for future generations. How can we ensure responsible consumption of exhaustible natural resources like coal, petroleum, and natural gas?” In contrast, a question generated without this method is: “What country has vast reserves of natural gas?”


Figure 1. Bloom’s Taxonomy scores including Few-Shot Angel (Original Study)

 


Figure 2. Bloom’s Taxonomy scores including Zero-Shot Angel
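
To make the few-shot versus zero-shot contrast concrete, a hypothetical prompt pair is sketched below; the wording is an assumption and does not reproduce the prompts used in either study.

```python
# Hypothetical illustration of the few-shot vs zero-shot prompt difference; not the study's prompts.
PARAGRAPH = "Coal, petroleum, and natural gas are exhaustible natural resources..."

FEW_SHOT_PROMPT = (
    "Here are example questions of increasing Bloom level:\n"
    "1. (Remember) What is an exhaustible resource?\n"
    "2. (Evaluate) Judge whether current coal usage is sustainable.\n\n"
    f"Using these examples as a guide, write questions for:\n{PARAGRAPH}"
)

ZERO_SHOT_PROMPT = (
    "Write questions for the paragraph below, targeting the higher levels of "
    "Bloom's Taxonomy (analyse, evaluate, create). Do not ask simple recall questions.\n\n"
    f"{PARAGRAPH}"
)
```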

Contextual relevance to learning objectives

To improve the relevance of generated questions and align them with learning objectives, it is crucial to integrate contextual information and learning outcomes into the question formulation process. We experiment with various retrieval methods and explore the practicality of incorporating LLMs into an AQG system, including the use of LLM-based evaluation methods. Our findings compare LLM-based and human evaluations, highlighting their effectiveness and reliability in question generation.
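
One way such objective-aware retrieval could look is sketched below; the embedding model and scoring are assumptions for illustration, not the retrieval methods evaluated in this work.

```python
# A minimal sketch of objective-aware retrieval before question generation.
# The embedding model name and top-k choice are assumptions for illustration.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder embedding model

def retrieve_context(learning_objective: str, paragraphs: list[str], k: int = 2) -> list[str]:
    """Return the k paragraphs most relevant to the stated learning objective."""
    objective_vec = model.encode([learning_objective])[0]
    paragraph_vecs = model.encode(paragraphs)
    scores = paragraph_vecs @ objective_vec / (
        np.linalg.norm(paragraph_vecs, axis=1) * np.linalg.norm(objective_vec)
    )
    top = np.argsort(scores)[::-1][:k]
    return [paragraphs[i] for i in top]

# The retrieved paragraphs, together with the learning objective, would then be
# inserted into the question-generation prompt (see the earlier sketch).
```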

REFERENCES

Blobstein, A., Yifal, T., Izmaylov, D., Levy, M., & Segal, A. (2023). Angel: A new generation tool for learning material based questions and answers. NeurIPS’23 Workshop on Generative AI for Education (GAIED). http://gaied.org/9_paper.pdf

Rus, V., Cai, Z., & Graesser, A. (2008). Question generation: Example of a multi-year evaluation campaign. Proc WS on the QGSTEC. https://www.researchgate.net/profile/Zhiqiang-Cai/publication/228948043_Question_Generation_Example_of_A_Multi-year_Evaluation_Campaign/links/560d4cb708aeed9d13751bd2/Question-Generation-Example-of-A-Multi-year-Evaluation-Campaign.pdf

Thalheimer, W. (2003). The learning benefits of questions. Work Learning Research.

Chi, M. T. H., Leeuw, N. D., Chiu, M.-H., & Lavancher, C. (1994). Eliciting self-explanations improves understanding. Cognitive Science, 18(3), 439-477. https://doi.org/10.1016/0364-0213(94)90016-7

Kurdi, G., Leo, J., Parsia, B., Sattler, U., & Al-Emari, S. (2020). A systematic review of automatic question generation for educational purposes. International Journal of Artificial Intelligence in Education, 30, 121-204. https://doi.org/10.1007/s40593-019-00186-y

Zhang, R., Guo, J., Chen, L., Fan, Y., & Cheng, X. (2021). A review on question generation from natural language text. ACM Transactions on Information Systems (TOIS), 40(1), 1-43. https://doi.org/10.1145/3468889

A Language Model-enhanced Network-centric Approach to Career Skills Enhancement

ZHOU Caishen1* and YEO Wee Kiang2

1, 2 Department of Information Systems and Analytics, School of Computing

*e1132296@nus.edu.sg

 

Zhou, C., & Yeo, W. K. (2024).  A Language Model-enhanced Network-centric Approach to Career Skills Enhancement [Lightning Talk]. In Higher Education Conference in Singapore (HECS) 2024, 3 December, National University of Singapore. https://blog.nus.edu.sg/hecs/hecs2024-zhou-yeo

 

SUB-THEME

Opportunities from Generative AI

 

KEYWORDS

Generative AI, Personalised Learning Pathways, Large Language Models, Graph-Enhanced Dynamic Training, Competency-Based Learning.

 

CATEGORY

Lightning Talk

 

INTRODUCTION

Adult continuing education is changing rapidly to meet the needs of today’s fast-evolving job market and the unique learning preferences of adults. Our project uses Large Language Models (LLMs) and skills knowledge graphs to create personalised learning paths, making career changes and skill development easier and more effective (Knowles, 1970; Hase & Kenyon, 2000).

 

BACKGROUND AND MOTIVATION

Our system employs an LLM to facilitate natural language conversations, making interactions more intuitive and user-friendly. Within this platform, the system performs detailed skills assessments: it evaluates an individual’s current skills and career goals and identifies skill gaps by analysing a skills knowledge graph specific to the user’s desired job role (Shou et al., 2023). This network-centric approach addresses the limitations of traditional linear educational pathways, which often fail to accommodate the diverse and dynamic needs of adult learners (Romero & Ventura, 2007). Unlike the traditional approach, which revisits redundant content and lacks focus, it offers personalised and efficient learning tailored to specific career goals.


Figure 1. Comparison of linear learning path and complex interconnected skills network for personalised learning

 

Figure 1 shows two learning pathways. The left diagram is a linear path from A to C. The right diagram displays a complex network of skills, highlighting multiple pathways and progression routes. This illustrates the difference between traditional linear educational pathways and network-centric approaches, emphasising the system’s adaptability in mapping personalised learning paths. This adaptability is evident in the journey of Anand, a mid-level software developer proficient in Python, aiming to specialise in Artificial Intelligence (AI) and Machine Learning, particularly Generative AI. His journey contrasts two learning approaches: a network-centric approach that offers personalised, efficient learning aligned with his goals, and a traditional approach that revisits redundant content and lacks focus.

 

METHODOLOGY

Our approach uses the graph database Neo4j and Retrieval-Augmented Generation (RAG) techniques to improve the learning experience. Neo4j models skill relationships based on data from the Jobs-Skills Dashboard – SDFE23/24 by SkillsFuture Singapore (SkillsFutureSG, 2023), and skill gaps are then identified using Cypher queries generated by an LLM from user input. RAG enhances information retrieval and response generation, ensuring coherent and optimised learning pathways (Guo & Berkhahn, 2016; Lewis et al., 2020).
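
A minimal sketch of such a skill-gap query, using the Neo4j Python driver, is shown below; the node labels, relationship types, and connection details are assumptions for illustration and do not reflect the project's actual schema.

```python
# Illustrative only: node labels, relationship types, and credentials are assumed,
# not the schema used in this project.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

# Skills required by the target role that the learner does not yet have.
SKILL_GAP_QUERY = """
MATCH (u:User {id: $user_id}),
      (role:JobRole {name: $target_role})-[:REQUIRES]->(s:Skill)
WHERE NOT (u)-[:HAS_SKILL]->(s)
RETURN s.name AS missing_skill
"""

def skill_gaps(user_id: str, target_role: str) -> list[str]:
    """Return the skills the user still needs for the target role."""
    with driver.session() as session:
        result = session.run(SKILL_GAP_QUERY, user_id=user_id, target_role=target_role)
        return [record["missing_skill"] for record in result]

# In the full system, an LLM would generate a query like this from the user's
# natural-language description of their background and career goal.
```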

 

SIGNIFICANCE OF THE PROJECT

The network-centric approach significantly enhances the learning experience by allowing individuals to bypass introductory courses and focus directly on new areas pertinent to their career goals. This personalised learning trajectory enables learners to build on their existing skills efficiently, specialising in areas critical to their desired career paths. Our approach not only improves learning efficiency but also boosts learner engagement by offering choices that resonate with learners’ personal interests and career aspirations (Vaswani et al., 2017; Zawacki-Richter et al., 2019). By tailoring the learning process to the specific needs and goals of each learner, we ensure that the education provided is directly relevant and immediately applicable, thereby facilitating smoother and more effective career transitions.

CONCLUSION

In conclusion, our network-centric approach, supported by Large Language Models and skills knowledge graphs, offers personalised and efficient learning pathways for adult learners. This method overcomes the limitations of traditional learning by recognising prior knowledge and focusing on practical, job-relevant skills, making it a valuable tool for career enhancement in a fast-changing job market (Bates, 2019; Bai & Che, 2021).

 

REFERENCES

Bai, J., & Che, L. (2021). Construction and application of database micro-course knowledge graph based on Neo4j. Association for Computing Machinery, 1-5, 68. https://dl.acm.org/doi/10.1145/3448734.3450798

Bates, A. W. (2019). Teaching in a digital age: Guidelines for designing teaching and learning (2nd ed.). BCcampus. https://pressbooks.bccampus.ca/teachinginadigitalagev2/

Guo, C., & Berkhahn, F. (2016). Entity embeddings of categorical variables. Cornell University. 1-9. https://doi.org/10.48550/arXiv.1604.06737

Hase, S. & Kenyon, C. (2000). From andragogy to heutagogy. Southern Cross University. 5(3), 1-10.

Knowles, M. S. (1970). The modern practice of adult education from pedagogy to andragogy. Association Press.

Lewis, P., Perez, E., Piktus, A., Petroni, F., Karpukhin, V., Goyal, N., Küttler, H., Lewis, M., Yih, W.-T., Rocktäschel, T., Riedel, S., & Kiela, D. (2020). Retrieval-augmented generation for knowledge-intensive NLP tasks. Cornell University. https://doi.org/10.48550/arXiv.2005.11401

Romero, C., & Ventura, S. (2007). Educational data mining: A survey from 1995 to 2005. Expert Systems with Applications, 33(1), 135-146. https://doi.org/10.1016/j.eswa.2006.04.005

Shou, Z., Chen, Y., Wen, H., Liu, J., Mo, J., & Zhang, H. (2023). A knowledge concept recommendation model based on tensor decomposition and transformer reordering. Electronics, 12(7), 1593. https://doi.org/10.3390/electronics12071593

SkillsFutureSG. (2023). Jobs-Skills Dashboard – SDFE23/24 by SkillsFutureSG. https://public.tableau.com/app/profile/skillsfuturesg/viz/JobsSkillsTalentInsight-SDFE_17001475553270/Overview

Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., & Polosukhin, I. (2017). Attention is all you need. https://doi.org/10.48550/arXiv.1706.03762

Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education – where are the educators? International Journal of Educational Technology in Higher Education, 16, 1-27, 39. https://doi.org/10.1186/s41239-019-0171-0

A Tool for Learning via Productive Struggle Using Generative AI

Mehul MOTANI1,2*, Kei Sen FONG1, and John Chong Min TAN1

1Department of Electrical and Computer Engineering,
College of Design and Engineering (CDE),
2Institute of Data Science, Institute for Digital Medicine (WisDM), N.1 Institute for Health,
National University of Singapore (NUS)

*motani@nus.edu.sg

 

Motani, M., Fong, K. S., & Tan, J. C. M. (2024). A tool for learning via productive struggle using generative AI [Paper presentation]. In Higher Education Conference in Singapore (HECS) 2024, 3 December, National University of Singapore. https://blog.nus.edu.sg/hecs/hecs2024-motani-et-al/

SUB-THEME

Opportunities from Generative AI

 

KEYWORDS

Student learning, generative AI, symbolic regression, large language models

 

CATEGORY

Paper Presentation 

 

EXTENDED ABSTRACT

Learning is often facilitated through struggle. This principle is embodied by an approach called productive struggle (PS), in which students are persuaded that struggling is part of learning and should be embraced. Productive struggle is related to an idea called productive failure (PF), in which students are given a task without prior instruction on how to solve it and allowed (even encouraged) to fail (Kapur, 2016). The key idea is that the initial struggle and the experience of failure can enhance learning and understanding when the correct solutions and underlying principles are subsequently taught. In this work, we adopt PS and combine it with generative AI tools, e.g., large language models and symbolic regression, to help students make progress on a learning task by providing fast, personalised feedback (Peng et al., 2019). We demonstrate our ideas on a specific learning task, namely building students’ intuition about the structure of mathematical equations, and show how this could work via a prototype system. Our work contributes to the broader movement to explore the positive impacts of AI in education (Chen et al., 2020).

 

METHODS

In this work, we present an interactive tool, with graphing and large language model (LLM) capabilities, to build students’ intuition about the structure of mathematical equations. We name this tool Guess the Equation (GTE). GTE starts with a set of equations designed by the instructor. For each of these equations, we provide a plot of points sampled from the equation, and the student’s task is to guess the functional form of the equation without any prior knowledge (see Figure 1).

Figure 1. GTE prototype of student user interface
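
The core mechanic can be illustrated with a short plotting sketch; the hidden equation and the student guess below are invented examples, not equations used in GTE.

```python
# Minimal sketch of the core GTE mechanic: plot points sampled from a hidden equation
# and overlay a student's guessed functional form. The equations here are made up.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0.1, 5, 40)
true_y = 2 * np.log(x) + 1          # hidden instructor-designed equation
guess_y = 0.5 * x                   # student's current guess

plt.scatter(x, true_y, label="sampled points (hidden equation)")
plt.plot(x, guess_y, color="red", label="student's guess: y = 0.5x")
plt.legend()
plt.show()
```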

 

The student will first have to make an initial guess using natural language text. Then, GTE explains, qualitatively, the differences between the guess and the true equation, along with displaying a graphical plot of the guess and the sampled points (see Figure 2). The student then modifies the guess, which GTE evaluates, providing qualitative hints to guide the student towards the answer. This iteration repeats until the student arrives at the answer. Throughout the process, the student experiences multiple failures, each supplemented with a qualitative hint towards the right answer.

Figure 2. GTE provides fast personalised feedback to the student

 

In developing GTE, we utilise two main technical components:

  1. TaskGen (Tan et al., 2024), a generative AI framework that breaks a complex task down into subtasks. As an improvement over the free-form text output common in LLMs, TaskGen uses a structured output format, StrictJSON, for each part of the process. StrictJSON is an LLM output parser for the JSON format with type checking, ensuring extractable outputs that are compatible with downstream tasks, e.g., graph plotting and code execution.
  2. Symbolic Regression (SR) (Fong et al., 2023), an approach that learns closed-form functional expressions from data. SR is used to suggest incorrect yet reasonably well-fitting equations to the student, allowing GTE to provide meaningful hints without divulging the true answer. SR is the key component that generates “near-successes”, which function as productive negative examples.
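
A simplified sketch of how these components could combine in one feedback step is shown below; it uses a plain JSON-constrained chat call rather than the actual TaskGen/StrictJSON API, and the prompts, model name, and near-miss equation are illustrative assumptions.

```python
# Simplified sketch of one GTE feedback step. It does not use the actual TaskGen/StrictJSON
# API or the authors' symbolic regression system; prompts and model name are assumptions.
import json
from openai import OpenAI

client = OpenAI()
HIDDEN_EQUATION = "y = 2*log(x) + 1"  # known to the tool, never shown to the student

def hint_for(student_guess: str, near_miss: str) -> dict:
    """Ask the LLM for structured, qualitative feedback without revealing the true equation."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        response_format={"type": "json_object"},  # structured output in the spirit of StrictJSON
        messages=[
            {"role": "system", "content": "You coach a student guessing a hidden equation. "
                                          "Never reveal the true equation."},
            {"role": "user", "content": f"Hidden equation: {HIDDEN_EQUATION}\n"
                                        f"Student guess: {student_guess}\n"
                                        f"Close-but-wrong equation (e.g., from symbolic regression): {near_miss}\n"
                                        'Return JSON: {"thoughts": "...", "hint": "..."}'},
        ],
    )
    return json.loads(response.choices[0].message.content)

# One iteration of the loop; the tool repeats this until the student's guess matches.
feedback = hint_for(student_guess="y = 0.5 * x", near_miss="y = 1.9 * log(x) + 1.2")
print(feedback["hint"])
```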

 

DISCUSSION

GTE provides reasoning in the form of Thoughts (Figure 3, green text) and a Summary of Conversation (Figure 3, purple text), which help instructors troubleshoot and verify the hints given to students. This also reduces the chance of LLM hallucinations (Ji et al., 2023).

 

GTE is robust and provides appropriate responses to diverse student input (see Figures 4 and 5). Note that in Figure 2, the input is not even in equation form. This is an improvement over traditional learning tools, which require manual planning and design for edge cases.

Figure 3. GTE provides extra information, such as Thoughts and Summary of Conversation.

 

Figure 4. GTE responds appropriately even when no equations are provided.

 

Figure 5. GTE handles exceptions with explanations, in contrast to traditional tools which simply dismiss such responses without constructive feedback

 

CONCLUSION

This study presents a novel PS-based approach that uses generative AI and SR to enhance student learning. By utilising state-of-the-art AI tools, GTE allows students to obtain fast, iterative, automated, and personalised feedback, letting them gain more experience with failure. Future work involves a quantitative comparison with conventional teaching methods. We note that our approach can be generalised to learning tasks in other areas, such as machine learning, engineering, and physics, demonstrating the potential of AI in education (Chen et al., 2020).

 

REFERENCES

Kapur, M. (2016). Examining productive failure, productive success, unproductive failure, and unproductive success in learning. Educational Psychologist, 51(2), 289-299. https://doi.org/10.1080/00461520.2016.1155457

Peng, H., Ma, S., & Spector, J. M. (2019). Personalized adaptive learning: an emerging pedagogical approach enabled by a smart learning environment. Smart Learning Environments, 6(1), 1-14. https://doi.org/10.1186/s40561-019-0089-y

Chen, L., Chen, P., & Lin, Z. (2020). Artificial intelligence in education: A review. IEEE Access, 8, 75264-75278. https://doi.org/10.1109/ACCESS.2020.2988510

Tan, J. C. M., Saroj, P., Runwal, B., Maheshwari, H., Sheng, B. L. Y., Cottrill, R., … & Motani, M. (2024). TaskGen: A task-based, memory-infused agentic framework using StrictJSON. arXiv preprint arXiv:2407.15734. https://doi.org/10.48550/arXiv.2407.15734

Fong, K. S., Wongso, S., & Motani, M. (2023). Rethinking symbolic regression: Morphology and adaptability in the context of evolutionary algorithms. In The Eleventh International Conference on Learning Representations.

Ji, Z., Yu, T., Xu, Y., Lee, N., Ishii, E., & Fung, P. (2023). Towards mitigating LLM hallucination via self reflection. In Findings of the Association for Computational Linguistics: Empirical Methods in Natural Language Processing (pp. 1827-1843). Retrieved from https://ar5iv.labs.arxiv.org/html/2310.06271.

 
