Last month, in the second post of a short series, I reported on how CAPDM and Artificial Intelligence Limited had seamlessly integrated ChatGPT into Moodle courses, allowing students to put questions directly to the Large Language Model (LLM) or to a custom ChatGPT that uses vector embeddings of Open Educational Resource (OER) texts for context. At the time I hinted that we could extend this functionality to let students generate practice quizzes at selected points, or potentially at any point, in a course.
I can now announce that we are able to offer this feature in any new or legacy course. The integration into Moodle is seamless and, for now at least, simple. The dynamically generated quizzes take their context from the particular 'hooks' that we use. These 'hooks' can be attached to pretty much any feature of the content, from individual paragraphs to whole sections or (in one of our cases) the context of Reflective Activities.
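To make the hook idea concrete, here is a minimal sketch of how a hook's text might be turned into a quiz prompt and how the model's reply might be checked before display. The function names, prompt wording, and JSON shape are all my own illustrative assumptions, not CAPDM's actual implementation.

```python
# Illustrative sketch only: how a 'hook' (a paragraph, section, or
# activity) could feed a dynamically generated Quick Quiz.
import json

def build_quiz_prompt(hook_text: str, n_questions: int = 3) -> str:
    """Compose an LLM prompt asking for a short multiple-choice quiz
    grounded only in the text of the hook."""
    return (
        f"Using ONLY the course material below, write {n_questions} "
        "multiple-choice questions as a JSON list. Each item needs "
        "'question', 'options' (4 strings) and 'answer' (the correct option).\n\n"
        f"Course material:\n{hook_text}"
    )

def parse_quiz(llm_reply: str) -> list:
    """Parse the model's JSON reply into quiz items, discarding any
    item that lacks the expected fields."""
    items = json.loads(llm_reply)
    return [q for q in items
            if {"question", "options", "answer"} <= set(q.keys())]

# A canned reply stands in here for a real LLM response:
reply = ('[{"question": "What does an OER licence permit?", '
         '"options": ["reuse", "nothing", "sale only", "none"], '
         '"answer": "reuse"}]')
quiz = parse_quiz(reply)
```

In a real deployment the prompt would be sent to the LLM and the parsed quiz rendered inline in the Moodle page; the point of the sketch is simply that each hook carries enough local context to ground a quiz.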
The resulting 'Quick Quizzes' are dynamically inserted into the current page at the point of focus, so there is no awkward navigation and no break in the user experience.
Why do this? A fair question, and worth answering. Firstly, it was a (small) technical challenge to integrate this generative technology into the virtual learning environment. More importantly, it seemed useful to put the power of an LLM directly into the hands of the potentially struggling student, so that they can get answers or additional information on a particular topic that might be unclear. The quizzes go a bit further and allow them to test their understanding.
In a sense this is akin to augmenting the course content provided, but it does so in the context of the precise needs of the struggling student.
No doubt many tutors will use LLMs to augment the core content that they assemble for a course (a perfectly reasonable thing to do), but it seemed more personal and useful to put control in the hands of the student. I know from experience that I've struggled with seemingly innocuous sections of text and would have loved to have had a personal tutor on hand, or even an AI to ask for a specific explanation. This is what we are trying to achieve, and we would welcome the approach being tested.
All of this is made possible because CAPDM 'engineers' all course pages (there is no hand building) from a set of XML single-source masters. Adding code to the publishing pipeline to include hooks to ChatGPT is therefore a minor task. It also relies on the fact that CAPDM uses its own tried and tested display module, very similar to the OU 'oucontent' Moodle module, giving full control over added, custom functionality, including these hooks to ChatGPT. Institutions hand building courses directly within a VLE will struggle to integrate such features into their student learning experience.
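The single-source approach can be illustrated with a toy transform: a small pass over an XML master that appends a hook element to every section, so the display module knows where to offer a Quick Quiz. The element and attribute names here are assumptions for illustration, not CAPDM's real schema.

```python
# Toy sketch: injecting quiz 'hooks' during publication of an XML
# single-source master. <section> and <quickquiz-hook> are assumed
# names, not the actual CAPDM schema.
import xml.etree.ElementTree as ET

def add_hooks(xml_master: str) -> str:
    """Append a hook element to every section of the master so the
    Moodle display module can render a Quick Quiz at that point."""
    root = ET.fromstring(xml_master)
    for section in root.iter("section"):
        hook = ET.SubElement(section, "quickquiz-hook")
        hook.set("context", "section")
    return ET.tostring(root, encoding="unicode")

master = ("<course><section><p>Marginal cost is the cost of one "
          "additional unit.</p></section></course>")
published = add_hooks(master)
```

Because every course flows through the same publishing code, one change like this reaches every page of every course, which is precisely what hand-built VLE content cannot do.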
What's next? The next stage of this collaborative piece of CAPDM research will be to build a 'local' LLM using only content from a reliable, content-rich domain as used by a programme of learning (e.g. an MBA), rather than relying on ChatGPT and the wider Internet. We will look to do this with OER content (e.g. OpenStax), though no doubt the major publishers will be looking to do similar with their extensive content domains.
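The essence of a closed-domain system is that the model only ever sees context retrieved from the programme's own content. The sketch below uses naive word-overlap scoring purely to illustrate the retrieval step; a real system would use proper vector embeddings, and the corpus shown is invented for the example.

```python
# Toy sketch of retrieval over a closed content domain (e.g. an
# OpenStax text) instead of the open Internet. Word-overlap scoring
# stands in for real vector-embedding similarity.
def score(query: str, chunk: str) -> float:
    """Fraction of the query's words that appear in the chunk."""
    q, c = set(query.lower().split()), set(chunk.lower().split())
    return len(q & c) / (len(q) or 1)

def retrieve(query: str, chunks: list, k: int = 1) -> list:
    """Return the k chunks of course content most relevant to the
    query; these become the only context the LLM may draw on."""
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]

corpus = [
    "Elasticity measures how demand responds to price changes.",
    "A balance sheet lists assets, liabilities and equity.",
]
best = retrieve("what is demand elasticity", corpus)
```

Grounding answers only in retrieved domain content is what distinguishes this planned 'local' approach from simply pointing students at the public ChatGPT.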