Prakash S/O Perumal Haridas*, Teong Jin Tan, and Muhamad Faizal Bin Ibrahim
Centre for Teaching, Learning, and Technology
Prakash P. H., Tan, T. J., & Muhamad Faizal Ibrahim. (2024). Enhancing Content Creation with Gen AI Tools for Teaching and Learning [Lightning Talk]. In Higher Education Conference in Singapore (HECS) 2024, 3 December, National University of Singapore. https://blog.nus.edu.sg/hecs/hecs2024-prakash-et-al
SUB-THEME
Opportunities from Generative AI
KEYWORDS
AI-Assisted Content Creation, Educational Technology, Multimedia, Video Production, Video Post-Production
CATEGORY
Lightning Talk
EXTENDED ABSTRACT
The massive rise and spread of Generative Artificial Intelligence (Gen AI) technologies have ushered in a new era: one of opportunity, creativity, and efficiency. This lightning talk will cover how educators can leverage this era of Gen AI to enhance their content creation pipeline and easily create a variety of multimedia collateral for teaching and learning. The talk will focus on three key AI-assisted applications for educational content creation: Image Generation, Video Generation, and Voice Generation.
The talk will give an overview of Image Generation and what it involves, showing examples of high-quality AI-assisted images alongside the prompts used to generate them. These examples will span a broad range of subject matter across disciplines and visual styles, demonstrating just how flexible this can be for educators. As Gen AI technology continues to improve, educators can simply describe what they want in natural language and receive multiple output versions from text-to-image generators (Costin, 2024). This is useful for creating visuals, illustrations, and diagrams that enhance learners' understanding and engagement, and such visuals can also suit different types of learners, making material accessible to more people. For educators comfortable with a little scripting, these generators can also be driven programmatically, as sketched below.
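As a minimal sketch, assuming the OpenAI Python SDK and its image generation endpoint as one example among many text-to-image services (the talk does not prescribe a specific tool; the model choice, prompt, and key handling here are illustrative):

```python
# Minimal text-to-image sketch, assuming the OpenAI Python SDK
# (pip install openai) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# An illustrative educational prompt; any discipline-specific
# description in plain natural language works the same way.
prompt = (
    "A clean, labelled cross-section diagram of a plant cell "
    "in a flat illustration style, suitable for a biology lecture slide"
)

# Request one 1024x1024 image; the API responds with a URL to the file.
response = client.images.generate(
    model="dall-e-3",
    prompt=prompt,
    n=1,
    size="1024x1024",
)
print(response.data[0].url)
```

Swapping in a different provider largely means changing the client call; the prompt-writing skill transfers directly.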
The next section, covering Video Generation, will be split into two parts: image-to-video and text-to-video. Example video snippets will be shown in the same format as the earlier image examples, across various subject matter, together with the prompts used to generate them. Image-to-video generation will be useful for educators in bringing static photographs and visuals to life; in animated form, these moving visuals further complement educational content and make it more visually appealing for the learner. Text-to-video is more of a wildcard in that generated results can be unpredictable (Weatherbed, 2024), but the potential of this technology (Dolak, 2024) and where it can go are worth a mention (Vynck et al., 2024). Some video snippets of what this technology can do once it reaches another level of maturity and photorealism (Germanidis, 2024) will be shown here, along with a sketch of how such a generation job might be scripted.
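Because video-generation APIs have not yet settled into a common standard, the sketch below is deliberately hypothetical: the endpoint, field names, and job states are placeholders, not any real provider's API. What it does illustrate is the submit-then-poll pattern most services use, since rendering a clip takes longer than a single HTTP request.

```python
# Hypothetical image-to-video sketch: the endpoint and field names are
# illustrative placeholders, not a real provider's API.
import time
import requests

API_URL = "https://api.example-video-service.com/v1/image-to-video"  # placeholder
API_KEY = "YOUR_API_KEY"  # placeholder

# Submit a source image plus a motion prompt as a generation job.
with open("lecture_photo.png", "rb") as f:
    job = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"image": f},
        data={"prompt": "slow pan across the scene, gentle camera motion"},
    ).json()

# Poll until the render finishes, then download the clip.
while True:
    status = requests.get(
        f"{API_URL}/{job['id']}",
        headers={"Authorization": f"Bearer {API_KEY}"},
    ).json()
    if status["state"] == "done":
        break
    time.sleep(5)

with open("animated_photo.mp4", "wb") as out:
    out.write(requests.get(status["video_url"]).content)
```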
Voice Generation involves AI-assisted voice cloning, allowing educators to generate and replicate voices that mimic human speech patterns and intonation (Ashworth, 2023). Audio snippet examples will be shown to give educators a better sense of the strength of this technology, which makes it easy to create personalised content for learners. With a generated synthetic voice, educators need only record their real voice once and can then leverage AI to create as much audio content as they need (Lee, 2023). This is a real time saver and helps educators work more efficiently; a sketch of generating narration from a cloned voice appears below.
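As a sketch, assuming a voice-cloning service with a REST text-to-speech endpoint; the URL below follows ElevenLabs' documented pattern, and the voice ID, API key, and script are placeholders for the educator's own:

```python
# Sketch of generating narration from a cloned voice. The endpoint follows
# ElevenLabs' text-to-speech REST pattern; VOICE_ID is a placeholder for a
# voice the educator has already cloned in their own account.
import requests

VOICE_ID = "YOUR_CLONED_VOICE_ID"  # placeholder
API_KEY = "YOUR_ELEVENLABS_API_KEY"  # placeholder

script = (
    "Welcome back, everyone. In this week's video we will look at how "
    "photosynthesis converts light energy into chemical energy."
)

response = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
    headers={"xi-api-key": API_KEY, "Content-Type": "application/json"},
    json={"text": script},
)
response.raise_for_status()

# The endpoint returns raw audio bytes (MP3 by default).
with open("narration.mp3", "wb") as f:
    f.write(response.content)
```

Once the voice exists, producing a new narration is just a matter of changing the script string.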
To wrap up, the talk will highlight how Gen AI comes into play in the overall video production pipeline. An example flowchart will be shown to illustrate this process and the areas where Gen AI can create an impact (Sparrow, 2024); a simple textual outline of such a pipeline is sketched after this paragraph. Educators no longer need to worry too much about technicalities and can instead focus their energy on pedagogical innovation. By the end of the talk, educators will hopefully walk away more confident in their knowledge of what a content creation pipeline entails and how they can harness Gen AI tools to produce engaging, adaptable, and inclusive learning experiences.
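As an illustrative outline only (not the speakers' actual flowchart), the stages of a typical video production pipeline and the Gen AI assists discussed above might be summarised as:

```python
# Illustrative outline of a video production pipeline and where the Gen AI
# applications covered in this talk can assist; stage names are indicative.
PIPELINE = [
    ("Pre-production", "script drafting; storyboard visuals via text-to-image"),
    ("Production", "B-roll and cutaways via image-to-video / text-to-video"),
    ("Narration", "synthetic voiceover generated from a cloned voice"),
    ("Post-production", "AI-assisted editing tools (Sparrow, 2024)"),
]

for stage, ai_assist in PIPELINE:
    print(f"{stage:>15}: {ai_assist}")
```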
REFERENCES
Ashworth, B. (2023). AI can clone your favorite podcast host’s voice. Wired. https://www.wired.com/story/ai-podcasts-podcastle-revoice-descript/
Costin, A. (2024). Adobe advances creative ideation with the new Firefly Image 3 model. Adobe Blog. https://blog.adobe.com/en/publish/2024/04/23/adobe-advances-creative-ideation-with-new-firefly-image-3-model
Dolak, K. (2024). Toys “R” Us debuts first video ad using Sora, OpenAI’s text-to-video tool. The Hollywood Reporter. https://www.hollywoodreporter.com/business/digital/toys-r-us-ad-sora-openai-video-tool-reaction-1235932993/
Germanidis, A. (2024). Introducing Gen-3 Alpha: A new frontier for video generation. Runway. https://runwayml.com/blog/introducing-gen-3-alpha/
Lee, T. B. (2023). I cloned my voice with A.I. and my mother couldn’t tell the difference. Slate Magazine. https://slate.com/technology/2023/04/descript-playht-ai-voice-copy.html
Sparrow, M. (2024). Adobe announces new AI tools for Premiere Pro. Forbes. https://www.forbes.com/sites/marksparrow/2024/04/15/adobe-announces-new-ai-tools-for-premiere-pro/
Vynck, G. D., Elker, J., & Remmel, T. (2024). The future of AI video is here, super weird flaws and all. The Washington Post. https://www.washingtonpost.com/technology/interactive/2024/ai-video-sora-openai-flaws/
Weatherbed, J. (2024). Adobe Premiere Pro is getting generative AI video tools – and hopefully OpenAI’s Sora. The Verge. https://www.theverge.com/2024/4/15/24130804/adobe-premiere-pro-firefly-video-generative-ai-openai-sora