The New Zealand Qualifications Authority (NZQA) convened a symposium on the implications of AI for assessment. The main event was held at Te Papa, with many attending online. The event was also supported by the Ministry of Education, Universities NZ, the NZ Assessment Institute, the NZ Council for Educational Research, the Post Primary Teachers' Association, the Tertiary Education Union and the Network for Learning.
Notes from the online /streamed sessions follow:
The day begins with a mihi whakatau and introductions. Lee Kershaw Karaitiana MCs, offering the karakia and welcome.
Dr Simon McCallum, Senior Lecturer in Software Engineering, Victoria University of Wellington opens with the first keynote on 'the dawn of AI'. Begins with his pepeha (Māori introduction). Has been teaching game programming since 1999, and every year there is something new and the students coming in change. Went through what Generative AI is - what Large Language Models (LLMs) are, which grew out of training machines to translate. Much of language relies on our experiences. Explained how word vectors work to help machines represent words, and how these contribute to ChatGPT unravelling the nuances of language. Then explained how ChatGPT works to answer the prompts it is given, and the importance of prompt engineering. Provided the principles of prompt engineering, including how ChatGPT builds on context as the process of prompting continues. Current AI platforms - GPT-4 etc. - have added guardrails and other 'agent-based' systems to try to provide more authentic outputs. Explained the many processing methods used to evaluate what the output will be. GPT-4 is much more advanced and able to provide less stilted outputs, and the Scholar plug-in generates real citations - it costs US$20 a month, so there is an equity issue. AutoGPT (roughly US$20 per complex problem) uses Python to create a plan, with the ability to write code to solve the problem. Warning on privacy issues, as AutoGPT is able to make a plan with access to all the items in your (Google) account! Provided examples of AI image generation - DALL-E 2, Stable Diffusion, Nvidia AI Playground etc. The line between photographs and generated images is now very blurry, given images can be 'enhanced', sometimes without our knowledge (Samsung phones often provide a better version of a photo you take!).
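The word-vector idea from the keynote can be sketched in a few lines. This is a minimal illustration only: real embeddings are learned from data and have hundreds of dimensions, whereas the 3-D vectors below are invented for the example.

```python
import math

# Toy word vectors. These values are invented for illustration -
# real models learn embeddings with hundreds of dimensions from text data.
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.2, 0.8],
    "apple": [0.1, 0.9, 0.9],
}

def cosine(a, b):
    """Cosine similarity: 1.0 = pointing the same way, 0.0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Words used in similar contexts end up with similar vectors, which is how
# a model captures relatedness without ever being given definitions.
print(cosine(vectors["king"], vectors["queen"]))  # higher than king vs apple
print(cosine(vectors["king"], vectors["apple"]))
```

With these toy values, "king" sits closer to "queen" than to "apple", which is the geometric sense in which a model "understands" words it has never been given definitions for.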
Note - AI models language patterns, not the actual meaning of words. Assessments often draw on learners' use of language as a way to assess critical thinking etc. However, AI is now able to do similar, making it a challenge to how we assess students. Observation: groups of low-capability students make high use of AI but then do not learn :( High-ability students who learn to use AI progress quickly, though. Improving understanding is the key, not just using AI to replace the work learners have to do. Posits that presently ChatGPT is able to complete assessments at Level 3, but Bard and Bing are able to meet Levels 7 to 9 in some areas. Argues that all work is now group work. Need to assess learners' contribution to the group :)
Challenged us to think about how we prepare learners. AI can be used to 'augment', so the combined AI-and-human effort requires assessment. Suggests assessments be 'motivational' - agentic, intrinsic, relevant and covert - which works with small groups of highly motivated learners. Authentic assessments must connect task/time to assess complex reasoning/thought. How do we roll out a new approach to assessment, especially when the future in the world of AI is still unknown? Encouragement to use AI as a tutor, supporting personalised learning 24/7, able to translate concepts to different levels, provide customised explanations and form chains of thought. If AI is now a co-author, then author statements require being clear as to who has done the work, and justification for not using AI is now required!! We need to be the 'rider' of AI. Suggests flipped exams (assess the prompts, not the answers), AI to triage the work, and a rethink of what authentic assessments will look like. Finished with some (pessimistic) thoughts on what may happen in the future. Shifting from clever words to caring people; need to be aware of the apathy epidemic (people who no longer have to think!).
The keynote is followed by a short presentation by Dr George Slim, consultant advisor to the Prime Minister's Chief Science Advisor, who speaks on 'a science policy review'. Provided an Aotearoa science perspective on how AI has changed (increased/accelerated) research - biology (DNA, viruses etc.). A panel is being assembled to bring together a report on how to address the many challenges presented. Resources are also being provided to archive contemporary thinking as the technology moves on. Government is just beginning work on implications and a response. Do we ban it (Italy) or leave it to the market (USA)? The NZ Privacy Commission has begun work on guidelines and resources. It is important to understand and manage AI.
After morning tea, an online presentation from Dr Lenka Ucnik, Assistant Director, Higher Education Integrity Unit, Tertiary Education Quality and Standards Agency (Australia) provides the Australian context. Provided context and background on TEQSA - it regulates higher education but not vocational education. The key message on AI is that it is here to stay. It can be an assistive tool for students (especially those with disabilities), research and teaching. The main premise is to implement risk analysis and management to maintain academic integrity. AI affects academic integrity, and there are discipline-specific processes. Important to ensure learners/students attain the skills to work with AI (see the learner guide). Encouraged participants to think beyond the immediate and to evaluate/plan/strategise towards the future. There are opportunities, but it is also important to mitigate risks, and ongoing work is required to ensure the integrity of education. What is the most important objective of education, and how can the affordances of AI contribute?
Professor Cath Ellis from the University of New South Wales then presents on 'the link between cheating and assessment'. Shared an observation of a student generating a presentation using ChatGPT and attaining a good mark. Currently, learning is assessed via an artefact/performance - a proxy. Learning is embodied :) Assessments are pitched at being 'just good enough'. At the moment, ChatGPT has moved from producing work that is just good enough to work that is good. What needs to be done, and who does it now? 'Cheating is contextual and socially constructed' - the example of an e-bike for commuting (good) but in the Tour de France (bad). There is a plethora of sites which support essay writing.
We still need to ensure the authenticity of assessments - whose work is it when AI is available? We need to focus on finding evidence that learning has occurred, not on why cheating has occurred. Do we need to assess some things many times?? Education's role is really about making sure our learners are able to weed out the 'hallucinations' generated by AI. Conceptual frameworks on academic integrity and assessment security need to be discussed. We need to champion those learners who are able and willing to do the work, and not criminalise students who are unable or unwilling. The bulk of our energy needs to go into championing, not so much into criminalising. Encouraged a focus on the metacognitive rather than on content. Call for placing importance on critical AI studies.
Following on is Professor Margaret Bearman, Centre for Research in Assessment and Digital Learning, Deakin University (Australia), who presents on 'generative AI – the issues right here, right now'. Presented the short-term implications before moving on to the future. Defined assessments as both graded and non-graded, not necessarily marked by teachers. Assessments should not only assure learning but also promote learning. Educational institutions' responses to AI can be to ignore, ban, invigilate, embrace, design around or rethink. Uncertainties around AI include legal, ethical and access issues.
Designing around is probably the best option for the moment. Shared possibilities through ways of making knowledge requirements more specific - knowing your students, specifically requiring the assessment task to reference something that happened in class, designing more authentic assessments. Design to cheat-proof assessments. Invigilation is costly and stressful, tests a narrow band of capabilities often unrelated to the task, and cheating still goes on. Rethinking invigilation may be one option - move towards oral assessment, assess learning outcomes across tasks, invigilate only the common sorts of knowledge/skills. Need to rethink the curriculum to account for AI.
Then Associate Professor Jason Stephens from the University of Auckland on 'achieving academic integrity in academia: the aspirations and its obstacles'. Covered what it means to achieve with integrity, why it is important, and the obstacles to achieving integrity. Being honest is not (always) easy! Students need help (sometimes a lot) to achieve with integrity. Educators are obligated to design environments that mitigate dishonesty. Defined achievement with integrity as being honest and having strong moral principles, and the state of being whole and not divided. Shared a model of moral functioning in academia. A survey across 7 institutions in 2022 (before ChatGPT) reveals 15.1% of students used AI. Obstacles to achieving integrity include: thinking is costly; modern society moves fast and has high expectations; and a culture of cheating is sometimes seen to be supported, through contagion effects and 'the power of the situation'. It is harmful for well-being when students are afforded opportunities to cheat, so it is important to maintain academic integrity.
A local school-based perspective is presented by Claire Amos, Principal | Tumuaki, Albany Senior High School, and Kit Willett from Selwyn College. Claire briefly went through the context of her school, which has a well-established, well-embedded innovative curriculum model - tutorials, specialist subjects and impact projects. The approach to AI is to embrace its potential and rethink assessments. Use AI to reduce workload, support UDL, and support learner agency and self-direction. Working through addressing ethical issues, teaching critical thinking and addressing plagiarism. Shared examples of how teachers used AI - maths to generate practice tasks, photography to look for connotations and denotations in images, creating quick worksheets etc. Also shared how students use AI - supporting design learning approaches and spending time discussing how to use AI for good, not for cheating but as a coach :) Shared concerns about adding more to a busy curriculum, the compounding of digital inequity, and the need to support students to use AI in a critical manner. Summarised reflections - for example, what happens when we do not assess/rank/grade students?
Kit shared that there has been more plagiarism this semester than across the last few years. Students were briefed about the consequences, but many still did not take the advice. Kit works in a school with more traditional approaches, including invigilated assessments. Shared the challenges a teacher faces in meeting NZQA requirements. A more traditional approach! Shared how teachers could use AI to help lower their workload as well.
Panels and a forum occur after lunch.
The first, with perspectives on AI, is convened by Associate Professor Jason Stephens with student association representatives (four high school ākonga and two university ākonga), Claire Amos and Kit Willett. All acknowledged awareness of AI; it usually comes up more when assessments are handed out. None reported conversations with teachers as to how AI should (or should not) be used. High school students are cognisant of AI's capabilities and will use it as a resource, but know of others who use it to plagiarise. Student association representatives from higher education wanted better utilisation of AI to support equity in education and fairness with regard to invigilated assessments. Image-based disciplines need to really work on how to assess when there is so much available. AI use in creativity needs to be clarified - is AI augmenting or doing all the work?
Banning AI will only make the 'forbidden fruit' more attractive. People who want to cheat will do it. Inequities are exacerbated, as students able to afford AI remain advantaged.
Good points were brought up by all the ākonga. They are pragmatic: AI can support learning; however, assessments are still a grey area. An interesting discussion ensued around what learning is, the role of technology in supporting learning, and assessment philosophies. A call to look at updating an archaic education and assessment system to reflect technology affordances and present and future social/work/industry environments.
The second is the AI forum with the Aotearoa NZ perspective, with Gabriela Mazorra de Cos as convenor, Professor Michael Witbrock and Dr Karaitiana Taiuru. Michael overviewed 'Where are we going with AI?' As a country, we respond well to new developments. Summarised the history of AI from the 1940s and the current rapid improvement in its usability. Ran through the pluses and minuses of AI - automate everything to free humans from mundane work?? AI may come in an organisational/consolidated form rather than an individual form. Integration of natural and artificial intelligences with existing and new kinds of organisational intelligences needs to be considered. Education will be about how to help learners become the best humans :)
Karaitiana covered 'how do we embed Kaupapa Māori ethics and culture from the outset?' Spoke about the opportunities to turn back the effects of colonisation. Digital technologies, and now AI, confer affordances to support the revitalisation and increased use of Te Reo and Mātauranga Māori. Argues all data has Mātauranga Māori threaded through it; it is a taonga and must be used to empower Māori. AI is no exception. Still much to be done to ensure the integrity and ethics of how Mātauranga Māori is used. Important to plan towards the future to ensure ākonga are educated about Māori ethics. Also proposed the deployment of AI as personal learning assistants/tutors to assist with the shortage of Māori experts, but these must be developed in association with Māori.
A Q & A ensued, covering future possibilities for a small, generally well-educated country to leverage AI for the betterment of all.
Then a provider response panel chaired by Associate Professor Jenny Poskitt from Massey University, with Dr Mark Nichols from the Open Polytechnic / Te Pūkenga, Kit Willett, Dr Kevin Shedlock from Victoria University, and Sue Townsend from Le Cordon Bleu as the private training establishment representative. Mark posited that education is a way to treat ignorance, and AI may enhance understanding; both of these must be addressed with assessment. Four techniques: video practice, videoed randomised questions through interactive oral assessments, viva voce, and the use of AI tutors. Video is now more commonplace and allows for interpersonal assessments. Interactive oral assessments are useful as one approach.
Kevin argued that 'grey' aspects of life, where there are no 'correct' answers is something Māori find normal. The head of the fish can set the direction, but the tail and the body must also follow. Therefore, important to be collaborative when working with AI.
Sue ran through the context of Le Cordon Bleu. The impact of AI seems to be similar to that on schools and universities. Three main areas: AI will impact the types of work available; change is inevitable for assessment practices, and facilitators must be supported to shift; and flexibility and equity are needed so learners have access.
Kit (secondary school context) reiterated that tamariki need skills for the future. Spoke on personal growth, curiosity, intentionality and the challenges of assessing these. Rethinking assessments is key - reducing assessments and ensuring they are more focused.
Q & A followed.
The workshop closes with a panel of 'reflections' with AP Poskitt and Dr Grant Klinkum, CE of NZQA. Jenny summed up the day's discussions on the rise of AI and its many implications. Education which engages is truly a great experience. Rethinking and redesigning are required to leverage AI to address equity and inclusion. Being human requires reciprocity, empathy and relationships. Ethical challenges are posed by AI, and to move forward, dialogue is required to create new ways of doing.
Grant thanked presenters and participants and encouraged the conversations to continue. A whole-of-education approach is required to reap the advantages of AI and meet its challenges. Collaborative work between ākonga and kaiako and the system at large is required. Summarised the important themes across the day. Learning content, skills etc. are less important than ensuring our ākonga attain the cognitive, evaluative and critical-thinking skills to be agile/flexible as AI continues to evolve. The purpose of education is just as important as assessment design. Encouraged ongoing work as we move into the future.
Lee closes the symposium with karakia.