Thursday, October 23, 2025

AI-generated assessments for vocational education and training - webinar

Here are notes from the webinar on the ConCoVE Tūhura project, AI-generated assessments for VET.

The report provides a literature scan, details of the process undertaken to identify an appropriate AI for the task, and the processes put in place to ensure that the AI-generated assessments would meet moderation (quality assurance) requirements for assessing VET standards.

The work was undertaken by Stuart Martin from George Angus Consulting and Karl Hartley from Epic Learning. Both present in the webinar, which begins with introductions by Katherine Hall (CE of ConCoVE Tūhura) and Eve Price (project manager at ConCoVE).

In Katherine's introduction, the rationale for the project was shared, along with some of the journey taken to break new ground.

Eve Price provided the background of the project. Most projects focus on integrating AI into ako or on preventing the use of AI in assessment. This project wanted to help support the time-consuming 'back room' processes, including resource and assessment development.

Karl ran through the approaches to the product. The evaluation/review processes could not really keep up with the speed at which assessments can be developed when development is supported by AI.

Stuart shared reflections on how the process evolved: the various processes put in place were reviewed and then fed back into the AI-generation project. He explained how various quality markers were met to ensure the efficacy of the process.

Eve detailed the need to be specific about what needed to be achieved - assessment, feedback, etc. Selecting the correct AI is also important. Prompts are detailed in the project report. It is important to evaluate at each step.
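As an illustration only (not the project's actual workflow), here is a minimal sketch of what a stepwise generate-then-evaluate prompt loop could look like using the Anthropic Python SDK. The model name, prompt wording and review criteria below are assumptions for illustration, not details taken from the report.

```python
# Hypothetical sketch: iterative "generate, then evaluate" loop for drafting an
# assessment item with Claude. Model name, prompts and criteria are illustrative
# assumptions, not taken from the ConCoVE project report.
from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment
MODEL = "claude-3-5-sonnet-latest"  # placeholder model choice

def ask(prompt: str) -> str:
    """Send a single prompt and return the text of the reply."""
    reply = client.messages.create(
        model=MODEL,
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.content[0].text

# Step 1: be specific about what needs to be achieved.
draft = ask(
    "Draft one assessment task for a NZ vocational skills standard on "
    "workplace safety. Keep the questions few, short and at the standard's level."
)

# Step 2: evaluate before moving on (a human would review this output as well).
review = ask(
    "Review the following assessment task against these criteria: number of "
    "questions, length of expected answers, and level of the standard.\n\n" + draft
)

print(draft)
print(review)
```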

In the bigger picture, micro-credentials, skills standards and AI-generated assessments all add innovations to the VET ecosystem. Understanding the policies and processes used by WDCs and NZQA needs to always be part of the process, so that the various quality points are met.

Stuart summarised some of the challenges and how the project worked through these. 

Karl talked about the importance of people in the process when AI is generating the assessments. Firstly, it is important to understand some of the mechanics of AI - what is under the hood. Secondly, quality assurance must be focused on the concepts, not so much the grammar/spelling etc. Thirdly, the purpose of the assessment needs to be clear.

Next, academic integrity and ethics were discussed. It is important to ensure there is an understanding of the impact of AI on privacy and data sovereignty (including indigenous perspectives), and to train the AI to understand the Aotearoa context. Claude AI was selected due to its stance on human rights, ethics etc.

Findings included: assessments did not meet moderation but improved the opportunities for inclusiveness and personalisation of learning. Failing moderation added to the learnings from the project. The items involved too many questions, with answers that were too long and at too high a level.

Eve reiterated the need to 'define what good looks like' to the AI, so that human objectives/perspectives are taken into account. It is important to ensure principles of ethics etc. are maintained, as it is important to 'keep humans at the centre'.

Karl's learnings include the AI drawing in novel content through hallucination. The AI included assessor approaches in its assessments, and this caused him to consider the learner information that should be included to provide direction. The standardised US approaches to writing assessments seemed to permeate the assessments produced by the AI; this had to be overridden through careful prompting.

There is flexibility to allow for personalisation to industry (for example, a safety unit standard customised to a range of work roles/disciplines) and to learners (ESOL, neurodiverse learners etc.).

A Q & A session followed.

The webinar was recorded. 

Discussions revolved around practicalities, challenges and solutions.

All in all, good sharing that adds to everyone's learning about the roles of AI in supporting teaching and learning, the integration of practice/practical and cultural contexts, the need to be aware of the 'fish hooks' in using AI, how quickly AI is developing to meet user needs, and the need to continually learn so that an understanding of AI, ethics etc. forms the foundation for working with AI.

