Friday, October 24, 2025

Aotearoa Tertiary AI network ATAIN - presentation from Dr. Simon McCallum

Notes taken from a presentation by Dr. Simon McCallum, Victoria University of Wellington, on 'Adapting to AI'.

The presentation is part of a fortnightly series organised through ATAIN, a SIG of the Flexible Learning Association of NZ (FLANZ).

Simon began with an introduction. He has been teaching game development since 2004 but has taught AI since 1991. He noted that Gen AI is everywhere and we use it unintentionally and unconsciously, but also consciously and strategically. Productivity benefits depend on training in the use of AI. AI agents work together to automate generic tasks.

Across industries, adoption is mixed - some fast, some very slow. The risks include programmers with AI automating other industries, and the rise of 'software / automation on demand'.

He revisited the two-lane approach to assessments. Students are using AI - at VUW, 66% admit to using it.

Covered the following:

AI literacy - core / domain-specific - compulsory for all students and staff, so they understand if and when to use AI and avoid the risk of thoughtless AI use.

Assessments need to move to testing understanding and learning rather than outputs. Test metacognition; use oral assessments.

All non-invigilated (lane 2) work should be treated as group work, using group-work assessment techniques - assess process, influence, delta, and the learning journey.

Assessments can embrace AI assistance: AI can select questions, create bespoke questions, and suggest oral assessment questions, with human markers determining the grade. He provided details of the process from his own context - using AI to generate questions from a student's submission for them to complete, to check the understanding they have presented in their essay!

Increase the quantity of group/team work so students can create human connections and gain more experience working with others. Increase the amount of work that is group work, but not the group mark.

Proposed that NZ universities fund an NZ-based server to assure AI sovereignty. This would be a more equitable approach, as all students would have access to high-quality models instead of having to pay for the upgraded ones.

Also proposed the creation of a position, reporting to Te Hiwa (the leadership group), to organise and manage AI across the entire university - teaching, learning, research and professional services.

Summarised agentic AI. This moves AI from being a chatbot to actually being able to 'do things': the AI will work out a plan and work through it to meet the prompt's objective. Swarm coding can be activated, with agents checking the outputs from each other. Therefore, AI is not a search engine; it is better to have the AI question us to work out what we want done - 'help me do ----'.

Summarised a project on how AI is used in NZ secondary schools. Practice is mixed across schools on strategy, current use, professional/student use, and community involvement. Schools would welcome clearer policies and guidelines. The challenges are similar to those at universities: assessments, professional development, etc.

Shifted to a summary of the extrinsic and intrinsic purposes of learning. If the motivation is just to pass the exam, then AI is an impediment. However, with a less constricted 'assessment', e.g. developing a game, AI accelerates capability and leads to extended learning.

Therefore it is important to engage learners to focus on intrinsic motivation, self-reflection, etc., and to hold them to account for what they want to learn. Teacher as NPC. He encouraged learner-negotiated assessments and rubrics, giving learners agency. For example, have learners establish the range of marks assigned to various aspects of an assessment. This encourages metacognition, helping students structure their learning, identify their strengths/weaknesses, etc.

How do we measure metacognition? Confidence is good when it is accurate; under- or over-confidence is a problem, and AI makes this worse. It is important to operationalise this and elicit accurate self-assessments from learners.

The session closed with an invitation to use the group's Discord to continue the conversation and share ideas.






Thursday, October 23, 2025

AI-generated assessments for vocational education and training - webinar

Here are notes from the webinar on the ConCoVE Tūhura project, AI-generated assessments for VET.

The report provides a literature scan, details of the process undertaken to identify appropriate AI for the task, and the processes put in place to ensure that the AI-generated assessments would meet moderation requirements (quality assurance) for assessing VET standards.

The work was undertaken by Stuart Martin from George Angus Consulting and Karl Hartley from Epic Learning. Both presented in the webinar, which began with introductions by Katherine Hall (CE of ConCoVE Tūhura) and Eve Price (project manager at ConCoVE).

In Katherine's introduction, the rationale for the project was shared, along with some of the journey taken by the project to break new ground.

Eve Price provided the background of the project. Most projects focus on integrating AI into ako or on preventing the use of AI in assessment. This project instead aimed to support the time-consuming 'back room' processes, including resource and assessment development.

Karl ran through the approaches to the product. The evaluation/review processes could not really keep up with the speed at which assessments can be developed when supported by AI.

Stuart shared reflections on how the process evolved: the various processes put in place were reflected on and then reintroduced into the AI-generation project. He explained how various quality pointers were met to ensure the efficacy of the process.

Eve detailed the need to be specific about what needed to be achieved - assessment, feedback, etc. Selecting the correct AI is also important. The prompts are detailed in the project report. It is important to evaluate at each step.

In the bigger picture, micro-credentials, skills standards and AI-generated assessments all add innovations to the VET ecosystem. Understanding the policies and processes used by WDCs and NZQA needs to always be part of the process, so that the various quality points are met.

Stuart summarised some of the challenges and how the project worked through these. 

Karl talked about the importance of people in the process when AI is generating the assessments. Firstly, it is important to understand some of the mechanics of AI - what is under the hood. Secondly, quality assurance must focus on the concepts, not so much on grammar/spelling. Thirdly, the purpose of the assessment needs to be clear.

Next, academic integrity and ethics were discussed. It is important to understand the impact of AI on privacy and data sovereignty (including indigenous perspectives), and to train the AI to understand the Aotearoa context. Claude AI was selected due to its stance on human rights, ethics, etc.

Findings included: the assessments did not meet moderation, but they improved the opportunities for inclusiveness and personalisation of learning. Failing moderation added to the learnings from the project. The items involved too many questions, with answers that were too long and pitched at too high a level.

Eve reiterated the need to 'define what good looks like' for the AI, so that human objectives/perspectives are taken into account. It is important that principles of ethics etc. are maintained, to 'keep humans at the centre'.

Karl's learnings included the AI drawing in novel content through hallucination. The AI built assessor approaches into its assessments, which caused him to consider the learner information that should be included to provide direction. The standardised US approaches to writing assessments seemed to permeate the assessments produced by the AI; this had to be superseded through careful prompting.

There is flexibility to allow for personalisation to industry (for example, a safety unit standard customised to a range of work roles/disciplines) and to learners (ESOL, neurodiverse learners, etc.).

A Q & A followed.

The webinar was recorded. 

Discussions revolved around practicalities, challenges and solutions.

All in all, good sharing that adds to everyone's learning about the roles of AI in supporting teaching and learning, the integration of practical and cultural contexts, the need to be aware of the 'fish hooks' in using AI, how quickly AI is developing to meet user needs, and the need to continually learn so that an understanding of AI, ethics, etc. forms the foundation for working with AI.


Monday, October 20, 2025

AI forum productivity report for New Zealand

This report - AI in Action: Exploring the impact of AI on NZ's productivity - is produced by the AI Forum NZ in partnership with Victoria University of Wellington, with PR powered by heft.

It is the third biannual report and collates an overview of the impact AI is having on productivity across NZ. Since the first report in 2023, there has been growth in the use of AI, with accompanying effects on work, the workforce, and contributions to the economy.

The three-page executive summary provides the main points. Key findings are then extended and discussed, followed by case studies.

Of the businesses surveyed, 91% reported productivity gains, and 50% view AI as contributing to cost savings, with 77% saying that there have been actual cost savings. However, fewer than 25% had savings over $50,000. Overall, then, productivity gains are consistent.

Workforce impacts include increased job losses, which reflect the country being in recession; 55% reported that new roles have been created; the costs of setting up AI have reduced; strategic policy investments have been attained; and, operationally, AI costs less.

AI's introduction requires building trust across the workforce, with AI literacy being key, along with the need to ensure inclusive engagement for all.

Overall, the data reports growing adoption, organisations settling into an understanding of how AI can be leveraged to increase efficiencies, and acceptance of AI as an inevitable part of current and future business activity.




Monday, October 13, 2025

Assignments in the AI era

In light of this article from Radio NZ, reporting that some universities in Aotearoa are no longer checking assessments using AI-detection platforms, a summary of ways to think about assessments in the AI age is timely. There has been much discussion on how assessments in higher education need to be evaluated and rethought, given the infiltration of AI into our work and study. This article in Times Higher Education distils many of the main discussion points in academia on how AI affects academic writing.

The work undertaken at my institute is focused on holistic / programme-wide assessment design, rather than on individual courses. The term 'programmatic assessment' is sometimes used to describe this approach.

Some of the other strategies we have used are summarised in this blog - NavigateAI (Dr. Ryan Baltrip). In summary: place greater weighting on recording the evidence of learning, rather than the product of learning. Therefore, portfolios and similar assessments are more useful than one-off invigilated exams or assignments.

In Aotearoa, Otago Polytechnic's Bruno Balducci has introduced the concept of AI-safe design, a framework for designing assessments that takes into account the influences of AI. It is useful for helping educators work through the many pitfalls involved in redesigning assessments so that they are authentic and relevant, but do not tempt learners into using AI to complete them.

The other concept we have used to help our teachers structure assessments in an AI age is the 'two lanes' assessment structure. Here, lane 1 assessments are assessments OF learning - summative, higher-stakes assessments. Lane 2 assessments are assessments FOR learning, taking formative approaches to inform learners as they progress through the course.

Therefore, it is important not to assume that current assessments will remain appropriate, but to undertake a stocktake to understand the purpose of each assessment, and to put in place relevant assessments that will meet that purpose, i.e. evidence that the learner has met the learning outcomes.








Monday, October 06, 2025

Guide to using AI - school context

Here is a useful guide for AI (in a schools / US context).

The guide begins with a section on how and why to use the guide.

The second section focuses on ethical issues, ensuring these are at the very front of any consideration of the use/integration of AI into teaching and learning.

Discussions on the impact on students, risks and benefits, and teacher perspectives follow.

The guide to determining AI policies is then introduced and discussed. The 'how to create an AI policy' section is useful, drawing on key principles and providing suggestions. The checklist for developing AI policies (page 18) sets out the many parameters that need to be thought through as AI is introduced into the school curriculum.

A series of case studies and discussion pieces follow, documenting the struggles, challenges and pragmatic approaches adopted, along with detailing various strategies. Discussions revolve around why, how, when, and the implications of introducing and using AI in schools. Strategies for assessments in the AI age are summarised (pages 30-31), including: designing engaging assessments, using paper-based materials, having in-class assignments and assessments, adding oral assessments, emphasising the learning process, helping students understand the implications of using AI, clearly spelling out what is and is not acceptable when using AI, and setting more frequent low-stakes assignments.

A range of curated resources are provided for follow up and reference.

All in all, a realistic documentation of how AI impacts day-to-day school systems and environments. The pros and cons are drawn from case studies. The teacher voice comes through well, and teachers' perspectives and experiences are valued. The principles derived are relevant across educational sectors.




Monday, September 29, 2025

Horizon Report - 2025

The 2025 Horizon report on Teaching and Learning was published earlier this year. 

As with previous reports, the horizon scan includes summaries of the social, technological, economic, environmental and political trends which impinge on the future of education.

The topics covered included:

AI tools for T & L; 'faculty' development for Gen AI; AI governance; cybersecurity; evolving teaching practice; and critical digital literacy. The report is therefore 'AI'-dominated, with a call for 'critical digital literacy' as a requirement for all, to ensure that AI utilisation and integration is grounded in ethical principles.

The case studies provided are mostly North American, but there are a few from Oz as well.

The report closed with a discussion of four scenarios: growth (AI and VR shift education into the virtual, but equitable access is still a challenge); constraint (government regulation draws on learning analytics for decision making but limits access for many); collapse (unregulated AI leads to the collapse of truth); and transformation (emphasis on workforce readiness at the expense of a liberal arts education). A mostly positive/optimistic stance is maintained throughout.




Friday, September 26, 2025

Enacting assessment reform in a time of AI - Tertiary Education Quality and Standards Agency (Australia)

The Australian Tertiary Education Quality and Standards Agency (TEQSA) has published a report providing guidance on assessment reform in the age of AI.

The main approaches are:

- taking a programme-wide approach to assessment reform, i.e. across the entire degree programme

- assuring learning in every "unit/subject", i.e. within each course

- implementing a combination of the above, by ensuring assessment mapping is constructively aligned between learning outcomes and assessments.

The strengths and challenges of each of the above are presented and discussed.