Updated - 26th August
Attended part of a presentation organised through Scarlatti to present the work funded by the Food and Fibre CoVE. It is one of a range of projects on artificial intelligence.
The presentation shares work undertaken to use AI for oral assessments. A report is available, summarising the pilot and its findings. The various outputs from the project can be found at this site.
The presentation showcases the AI agent for learner oral assessment and shares perspectives from users as to the AI agent's efficacy. The guide to using the oral assessment chatbot is found here.
Began with the sharing of a 'case study' comparing the conventional assessment (paper-based, requiring travel to an assessment centre) to the piloted one (mobile, oral, AI-supported). Then a couple of warm-up sessions, followed by a short overview of the F & F CoVE. They have completed 109 projects over the past 5 years!! Also introduced Scarlatti, and the research team from Fruition, the F & F CoVE and the WDC.
Then came a sharing of how AI agents are being used across Oceania, followed by a demonstration of the AI agent.
AI agent oral assessments came about because written assessments were a barrier for many learners, and an AI agent needed to be tested as an alternative.
Began with sharing AI in Education articles - which was how the team found out what was already being done. AI agents are being used more by universities in Australia. Most agents were created using Cogniti or custom GPTs, and most were text-based. There is not as much use in vocational education or in NZ, and most were focused on learning rather than assessment.
Showcased the AI agent (VEVA) with the proviso that it is still a prototype :) tested in a dairy training context. The agent provides the answer if the assessee indicates they are unsure or unable to answer the question, so using it for formative assessment would be more pertinent. It works well if correct answers are provided but does fall over if the candidate is unprepared!
Transcripts are archived and can be used to grade the conversation. With the small number of candidates (14 + 11), moderation of the grading showed it to be accurate.
An important finding was that an off-the-shelf product would not work. This was because a controlled conversation was required, precise outputs were also required, and ongoing funding/payment was needed to keep the agent running.
Therefore, two agents were used: the 'examiner' agent ran on the OpenAI Realtime Voice API, and the 'assessor' agent ran on GPT-4o in text mode.
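To make the two-agent split concrete, here is a minimal sketch of the hand-off as I understood it: an examiner conducts the conversation and records a transcript, which an assessor then grades. The function names, the toy keyword-overlap grading, and the sample question are all my own invented stand-ins, not the pilot's actual code; the real examiner and assessor would be calls to the OpenAI Realtime Voice API and GPT-4o respectively.

```python
# Hypothetical sketch of the examiner/assessor split described above.
# The LLM calls are stubbed out; grading here is a toy keyword check.

def examiner_turn(question: str, learner_answer: str) -> dict:
    """Stand-in for the voice examiner (pilot used the OpenAI Realtime Voice API).
    Records one question/answer exchange for the archived transcript."""
    return {"question": question, "answer": learner_answer}

def assessor_grade(transcript: list[dict], model_answers: dict) -> dict:
    """Stand-in for the text assessor (pilot used GPT-4o in text mode).
    Scores each answer by overlap with a model answer's keywords."""
    results = {}
    for turn in transcript:
        expected = set(model_answers[turn["question"]].lower().split())
        given = set(turn["answer"].lower().split())
        results[turn["question"]] = len(expected & given) / len(expected)
    return results

# Toy run of the pipeline (invented dairy-training question)
model_answers = {"Why test milk for somatic cells?": "to detect mastitis early"}
transcript = [examiner_turn("Why test milk for somatic cells?",
                            "so you can detect mastitis early on")]
grades = assessor_grade(transcript, model_answers)
print(grades)  # each question scored between 0 and 1
```

Keeping the two roles separate means the grading step works from the archived transcript, which matches the moderation approach mentioned above.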
The demonstration was akin to our own experiences with Ako AI, in that responses from AI can sometimes be difficult to predict. The controlled conversation was primed with standardised questions. Prompting content included course materials, the assessment rubric and example answers which had been graded. RAG (retrieval-augmented generation) was used to search the course information, as there is too much of it to fit into the prompt. Ethical guard rails were included.
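The RAG idea mentioned above can be illustrated with a toy retriever: rather than putting all the course material into the prompt, only the snippets most relevant to the question are retrieved and included. The snippets and the simple word-overlap scoring below are my own invented illustration (real systems typically use embeddings), not the project's implementation.

```python
# Toy RAG illustration: retrieve the most relevant course snippets
# instead of placing all course material in the prompt.
# Snippets are invented; scoring is simple word overlap, not embeddings.

COURSE_SNIPPETS = [
    "Somatic cell counts indicate udder health and possible mastitis.",
    "Rotary milking sheds allow one operator to milk large herds.",
    "Effluent ponds must be managed to protect waterways.",
]

def retrieve(question: str, snippets: list[str], top_k: int = 1) -> list[str]:
    """Return the top_k snippets sharing the most words with the question."""
    q_words = set(question.lower().split())
    scored = sorted(snippets,
                    key=lambda s: len(q_words & set(s.lower().split())),
                    reverse=True)
    return scored[:top_k]

# Only the retrieved snippet goes into the prompt, keeping it small
context = retrieve("What does a high somatic cell count mean?", COURSE_SNIPPETS)
prompt = f"Context: {context[0]}\nQuestion: What does a high somatic cell count mean?"
print(context[0])
```

The payoff is the one noted above: the prompt stays within the model's limits while still being grounded in the course material.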
Shared the various ways in which the oral assessment AI agent could be used. These include summative and formative assessments, authentication of context and candidate, employer verification, pre-screening for readiness to be assessed and for recognition of prior learning (RPL).
Closed with where the project will move towards next.
A panel made up of Scarlatti staff (Adam Barker, Sam Cormack, Leater Hoare) and users - Jenny Sinclair from Dairy Training and Tiffany Andrews from Fruition - then answered a series of questions.
Overall, a good overview of the possibilities for using AI to support multimodal forms of assessments.