Wednesday, May 31, 2023

Assessment in the age of AI - NZQA symposium - notes

The New Zealand Qualifications Authority (NZQA) convened a symposium on the implications of AI for assessment. The main event was held at Te Papa, with many attending online. The event was also supported by the Ministry of Education, Universities NZ, the NZ Assessment Institute, the NZ Council for Educational Research, the Post Primary Teachers' Union, the Tertiary Education Union and Network for Learning.

Notes from the online/streamed sessions follow:

The day begins with a mihi whakatau and introduction. Lee Kershaw Karaitiana MCs, offering the karakia and welcome.

Dr Simon McCallum, Senior Lecturer in Software Engineering, Victoria University of Wellington, opens with the first keynote on ‘the dawn of AI’. Begins with his pepeha (Māori introduction). Has been teaching game programming since 1999, and every year there is something new and the students coming in change. Went through what generative AI is and what Large Language Models (LLMs) are, which grew out of training machines to translate. Much of language relies on our experiences. Explained how word vectors help machines represent the meaning of words and how these contribute to ChatGPT unravelling the nuances of language. Then explained how ChatGPT works to answer the prompts it is given and the importance of prompt engineering. Provided the principles of prompt engineering, including how ChatGPT's responses can be refined as the process of prompting continues. Currently, newer AI platforms (GPT-4 etc.) have added guardrails and other 'agent-based' systems to try to provide more authentic outputs. Explained the many processing methods used to evaluate what the output will be. GPT-4 is much more advanced and able to provide less stilted outputs, and the scholar plug-in generates real citations; at US$20 a month, there is an equity issue. AutoGPT (costing around $20 per complex problem) uses Python to create a plan, with the ability to write code to solve the problem. Warning on privacy issues, as AutoGPT is able to make a plan with access to all the items in your (Google) account! Provided examples of AI image generation - DALL-E 2, Stable Diffusion, Nvidia AI Playground etc. The line between photos and generated images is now very blurry, given images can be 'enhanced', sometimes without our knowledge (Samsung phones often provide a 'better' version of a photo you take!).
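To make the word-vector idea above concrete, here is a minimal, hypothetical Python sketch (not from the keynote) showing how cosine similarity between small embedding vectors can stand in for 'semantic closeness'. The words and vector values are made up purely for illustration; real LLM embeddings have hundreds or thousands of learned dimensions.

```python
import numpy as np

# Toy word vectors (hypothetical 4-dimensional embeddings, purely illustrative).
vectors = {
    "king":  np.array([0.80, 0.65, 0.10, 0.05]),
    "queen": np.array([0.78, 0.70, 0.12, 0.04]),
    "apple": np.array([0.10, 0.05, 0.90, 0.70]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two vectors: closer to 1.0 means more similar."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(vectors["king"], vectors["queen"]))  # high: related words
print(cosine_similarity(vectors["king"], vectors["apple"]))  # lower: unrelated words
```

The point is simply that words appearing in similar contexts end up with vectors pointing in similar directions, which is what lets a model treat 'king' and 'queen' as related concepts.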

Note - AI understands language but not actual words. Assessments often draw on learners' use of language as a way to assess critical thinking etc. However, AI is now able to do similar, making it a challenge to how we assess students. Observation that groups of low-capability students make high use of AI but then do not learn :( High-ability students who learn AI progress quickly though. Improving understanding is the key, not just using AI to replace the work learners have to do. Posits that presently ChatGPT is able to complete assessments at Level 3, but Bard and Bing are able to meet Levels 7 to 9 in some areas. Argues that all work is now group work. Need to assess learners' contribution to the group :)

Challenged us to think about how we prepare learners. AI can be used to 'augment', so the combined AI and human effort requires assessment. Suggests assessments that are 'motivational' - agentic, intrinsic, relevant and covert - which works with small groups of highly motivated learners. Authentic assessments must connect task and time to assess complex reasoning/thought. How do we roll out a new approach to assessment, especially when the future in the world of AI is still unknown? Encouragement to use AI as a tutor, supporting personalised learning 24/7, able to translate concepts to different levels, provide customised explanations and form chains of thought (see the sketch below). If AI is now a co-author, then author statements need to be clear about who did the work, and a justification for not using AI may now be required! We need to be the 'rider' of AI. Suggests flipped exams (it is the prompts, not the answers, that are assessed), AI to triage the work, and a rethink of what authentic assessments will look like. Finished with some thoughts on what may happen in the future (pessimistic). Shifting from clever words to caring people; need to be aware of the apathy epidemic (people who no longer have to think!).
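As an illustration of the 'AI as a 24/7 tutor that translates concepts to different levels' idea, here is a small, hypothetical sketch using the OpenAI Python client; the model name, prompt wording and topic are my own assumptions rather than anything shown in the keynote.

```python
# Hypothetical sketch: asking an LLM to explain one concept at several levels.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

topic = "how word vectors represent meaning"  # placeholder topic

for level in ["a 10-year-old", "a first-year university student", "a postgraduate"]:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "You are a patient tutor."},
            {"role": "user", "content": f"Explain {topic} to {level} in under 150 words."},
        ],
    )
    print(f"--- Explanation for {level} ---")
    print(response.choices[0].message.content)
```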

The keynote is followed by a short presentation by Dr. George Slim, consultant advisor to the Prime Minister's Chief Science Advisor, who speaks on 'a science policy review'. Provided an Aotearoa science perspective on how AI has changed (increased/accelerated) research - biology (DNA, viruses etc.). A panel is being assembled to bring together a report on how to address the many challenges presented. Resources are also being provided to archive contemporary thinking as the technology moves on. Government is just beginning work on implications and responses. Do we ban it (Italy) or leave it to the market (USA)? The NZ Privacy Commissioner has begun work on guidelines and resources. Important to understand and manage AI.

 

After morning tea, an online presentation from Dr Lenka Ucnik, Assistant Director, Higher Education Integrity Unit, Tertiary Education Quality and Standards Agency (Australia), provides the Australian context. Provided context and background on TEQSA - it does not regulate vocational education, only higher education. The key message on AI is that it is here to stay. It can be an assistive tool for students (especially those with disabilities), research and teaching. The main premise is to implement risk analysis and management to maintain academic integrity. AI affects academic integrity, and there are discipline-specific processes. Important to ensure learners/students attain the skills to work with AI (see the learner guide). Encouraged participants to think beyond the immediate and evaluate/plan/strategise towards the future. There are opportunities, but it is also important to mitigate risks, and ongoing work is required to ensure the integrity of education. What is the most important objective of education, and how can the affordances of AI contribute?

 

Professor Cath Ellis from the University of New South Wales then presents on ‘the link between cheating and assessment’. Shared an observation of a student generating a presentation using ChatGPT and attaining a good mark. Currently, learning is assessed via an artefact/performance - a proxy. Learning is embodied :) Assessments are pitched at being 'just good enough'. At the moment, ChatGPT has moved from producing work that is just good enough to work that is good. What needs to be done, and who does it now? 'Cheating is contextual and socially constructed' - the example of an e-bike for commuting (good) versus in the Tour de France (bad). There is a plethora of sites which support essay writing.

We still need to ensure the authenticity of assessments - whose work is it when AI is available? We need to focus on finding evidence that learning has occurred, not on why cheating has occurred. Do we need to assess some things many times? Education's role is really about making sure our learners are able to weed out 'hallucinations' generated by AI. Conceptual frameworks on academic integrity and assessment security need to be discussed. We need to champion those learners who are able and willing to do the work, and not criminalise students who are unable or unwilling. The bulk of the energy needs to go into championing, not so much into criminalising. Encouraged a focus on the metacognitive rather than on content. Call for placing importance on critical AI studies.

 

Following on is Professor Margaret Bearman, Centre for Research in Assessment and Digital Learning, Deakin University (Australia), who presents on ‘generative AI – the issues right here, right now’. Presented on the short-term implications and then moved on to the future. Defined assessments as both graded and non-graded, not necessarily marked by teachers. Assessments should not only assure learning but also promote learning. Educational institutions' responses to AI can be to ignore, ban, invigilate, embrace, design around or rethink. Uncertainties around AI include legal, ethical and access issues.

Designing around is probably the best option for the moment. Shared possibilities for making knowledge requirements more specific: knowing your students, requiring the assessment task to reference something that happened in class, and designing more authentic assessments. Design to cheat-proof assessments. Invigilation is costly, stressful, tests capabilities unrelated to the task, covers only a narrow band of capabilities, and cheating still goes on. Rethinking invigilation may be one option - move towards oral assessment, assessment of learning outcomes across tasks, and invigilating only the common sorts of knowledge/skills. Need to rethink the curriculum to account for AI.


Then Associate Professor Jason Stephens from the University of Auckland on 'achieving academic integrity in academia: the aspiration and its obstacles'. Covered what it means to achieve with integrity, why it is important, and the obstacles to achieving integrity. Being honest is not (always) easy! Students need help (sometimes a lot) to achieve with integrity. Educators are obligated to design environments that mitigate dishonesty. Defined integrity as being honest and having strong moral principles, and the state of being whole and undivided. Shared a model of moral functioning in academia. A survey across 7 institutions in 2022 (before ChatGPT) revealed that 15.1% of students used AI. Obstacles to achieving integrity include thinking being experienced as costly, modern society moving fast and having high expectations, and a culture of cheating that is sometimes seen to be supported through contagion effects and 'the power of the situation'. It is harmful for well-being when students are afforded opportunities to cheat, so it is important to maintain academic integrity.


A local school-based perspective is presented by Claire Amos, Principal | Tumuaki, Albany Senior High School, and Kit Willett from Selwyn College. Claire briefly went through the context of the school, which has a well-established, well-embedded innovative curriculum model - tutorials, specialist subjects and impact projects. The approach to AI is to embrace its potential and rethink assessments. Use AI to reduce workload, support UDL, and support learner agency and self-direction. Working through addressing ethical issues, teaching critical thinking and addressing plagiarism. Shared examples of how teachers used AI - in maths to generate practice tasks, in photography to look for connotations and denotations in images, to create quick worksheets etc. Also shared how students use AI - supporting design learning approaches and spending time discussing how to use AI for good, not for cheating, but as a coach :) Shared concerns about adding more to a busy curriculum, the compounding of digital inequity, and the need to support students to use AI in a critical manner. Summarised reflections - for example, what happens when we do not assess/rank/grade students?

Kit shared that there has been more plagiarism this semester than across the last few years. Students were briefed about the consequences, but many still did not take the advice. Kit works in a school with more traditional approaches, including invigilated assessments. Shared the challenges a teacher has to work through to meet NZQA requirements - a more traditional approach! Shared how teachers could use AI to help lower their workload as well.

 

Panels and forum occur after lunch.

The first, with student perspectives on AI, is convened by A.P. Jason Stephens with student association representatives (four high school ākonga and two university ākonga), Claire Amos and Kit Willett. All acknowledged knowledge of AI; it usually comes up more when assessments are handed out. They did not report conversations with teachers as to how AI should or should not be used. High school students are cognisant of AI capabilities and will use it as a resource, but know of others who use it to plagiarise. Student association representatives from higher education wanted better utilisation of AI to support equity in education and fairness with regard to invigilated assessments. Image-based disciplines need to really work on how to assess when there is so much available. AI use in creativity needs to be clarified - is AI augmenting or doing all the work?

Banning AI will only make the 'forbidden fruit' more attractive. People who want to cheat will do it. Inequities are exacerbated, as students able to afford AI remain advantaged.

Good points were brought up by all the ākonga. They are pragmatic. AI can support learning; however, assessments are still a grey area. An interesting discussion ensued around what learning is, the role of technology in supporting learning, and assessment philosophies. Call to look at updating an archaic education and assessment system to reflect technology affordances and the present and future social/work/industry environments.

The second is the AI Forum with the Aotearoa NZ perspective, with Gabriela Mazorra de Cos as convenor, Professor Michael Witbrock and Dr. Karaitiana Taiuru. Michael overviewed 'Where are we going with AI?' As a country, we respond well to new developments. Summarised the history of AI from the 1940s and the current rapid improvement in its usability. Ran through the pluses and minuses of AI. Automate everything?? To free humans from mundane work. AI may take an organisational/consolidated form rather than an individual form. The integration of natural and artificial intelligences with existing and new kinds of organisational intelligences needs to be considered. Education will be about how to help learners become the best humans :)

Karaitiana covered 'how do we embed Kaupapa Māori ethics and culture from the outset?' Spoke about the opportunities to turn back the effects of colonisation. Digital technologies, and now AI, confer affordances to support the revitalisation and growth of Te Reo and Mātauranga Māori. Argues all data has Mātauranga Māori threaded through it; it is a taonga and must be used to empower Māori. AI is no exception. There is still much to be done to ensure the integrity and ethics of how Mātauranga Māori is used. Important to plan towards the future to ensure ākonga are educated about Māori ethics. Also proposed the deployment of AI as personal learning assistants/tutors to help with the shortage of Māori experts, but these must be developed in association with Māori.

Q & A ensued, covering future possibilities: as a small, generally well-educated country, we can leverage AI for the betterment of all.

Then a provider response panel chaired by Associate Professor Jenny Poskitt from Massey University, with Dr. Mark Nichols from the Open Polytechnic / Te Pūkenga, Kit Willett, Dr. Kevin Shedlock from Victoria University and Sue Townsend from Le Cordon Bleu, the private training establishment representative. Mark posited that education is a way to treat ignorance and AI may enhance understanding; both of these must be addressed with assessment. Four techniques: video practice, videoed randomised questions through interactive oral assessments, viva voce, and the use of AI tutors. Video is now more commonplace and allows for interpersonal assessments. Interactive oral assessments are useful as one approach.

Kevin argued that the 'grey' aspects of life, where there are no 'correct' answers, are something Māori find normal. The head of the fish can set the direction, but the tail and the body must also follow. Therefore, it is important to be collaborative when working with AI.

Sue ran through the context of Le Cordon Bleu. The impact of AI seems to be similar to that in schools and universities. Three main areas: AI will impact the types of work available; change is inevitable for assessment practices and facilitators need support to shift; and flexibility and equity are needed so learners have access.

Kit (secondary school context) reiterated that tamariki need skills going into the future. Spoke on personal growth, curiosity, intentionality and the challenges of assessing these. Rethinking assessments is key - reducing assessments and ensuring they are more focused.

Q & A followed. 

The day closes with a panel on ‘reflections’ with AP Poskitt and Dr. Grant Klinkum, CE of NZQA. Jenny summed up the day's discussions on the rise of AI and its many implications. Education which engages is truly a great experience. Rethinking and redesigning are required to leverage AI to address equity and inclusion. Being human requires reciprocity, empathy and relationships. Ethical challenges are posed by AI, and to move forward, dialogue is required to create new ways of doing.

Grant thanked presenters and participants and encouraged the conversations to continue. A whole-of-education response is required to reap the advantages of AI and meet the challenges. Collaborative work between ākonga, kaiako and the system at large is required. Summarised the important themes across the day. Learning content, skills etc. are less important than ensuring our ākonga attain the cognitive, evaluative and critical thinking skills to be agile/flexible as AI continues to evolve. The purpose of education is just as important as assessment design. Encouraged ongoing work as we move into the future.

Lee closes the symposium with karakia. 


Monday, May 29, 2023

Lifelong learning - podcast - Singapore's SkillsFuture with Dr Gog Soon Joo

This came through Soon Joo Gog's LinkedIn post. She participated in a podcast from the '100 cases of lifelong learning for the workplace' series.

This podcast with Soon Joo focuses on Singapore's SkillsFuture, which represents a case study of a country's strategic response to the many challenges posed by the future of work for its citizens.

SkillsFuture brings together individual incentives for all Singaporeans and organisations to enable them to upskill, reskill, or just stay on top of rapid advances in technology. The government-funded, holistic response includes not only information and incentives but also the work of maintaining skills frameworks through ongoing research on skills demand. There is access to skills and training advisory services for individuals and organisations. For individuals, information is organised into early- and mid-career initiatives to inform and augment current and future skills.

Monday, May 22, 2023

Future of Jobs report 2023 - World Economic Forum

The World Economic Forum released the latest Future of Jobs report.

The key findings make for interesting reading, with no great surprises. The impact of technologies is high. Large corporates involved in the survey predict that 23% of jobs will be caught up in the 'churn' as some jobs become obsolete. However, the effect is a net positive, i.e. technologies will create more jobs than are lost. Still, it is up to the individual to retrain, move horizontally etc. to ride out the turbulence.

The greatest job growth is driven by technology, digitalisation and sustainability. Declining jobs are affected by the same drivers and include clerks in data entry, ticketing, postal services, cashiers etc.

There is predicted large-scale growth in education, agriculture, and digital commerce and trade. Analytical and creative thinking are now core skills. Also included are the self-efficacy skills - resilience, flexibility, agility, motivation and self-awareness, curiosity and lifelong learning. Dependability and attention to detail have risen into the top 10. Perhaps these reflect the post-pandemic era as people adjust to a new normal of physical engagement with workplaces after several years of 'work from home' arrangements and disruptions caused by illness.

Given the above, an estimated 44% of workers' skills will be disrupted in the next five years as the need for cognitive skills grows. However, core corporate upskilling strategies are not aligned to these needs. Investment in learning and on-the-job training is seen as essential, with almost half of the companies surveyed indicating the need to invest in skills training for their workers.

All in, the effects of AI's prominence and capability signal the rapid change facing the current workforce. Companies with the means and capability to advance will reap the immediate and future benefits.


Thursday, May 18, 2023

FLANZ presentation on impact of AI on online, flexible and distance learning

Notes from today's session, 'AI is here to stay: Its impact on online, flexible, and distance learning', offered by FLANZ (Flexible Learning Association NZ) in partnership with EdTech NZ, as part of TechWeek here in Aotearoa.

Panellists include Dr. Rebecca Marrone, Dr. Mark McConnell, Shanon O'Connor and Dr. Truman Pham, with facilitation from Bettina Schwenger.

Shanon O'Connor from Tōnui Collab starts things off. Shared experiences of how to provide equitable access for Māori, especially those who live in rural/isolated areas and those currently affected by damage to infrastructure from the cyclone. AI could be useful in providing personalised learning. At the school level, some are finding AI challenging. However, digital inequity is an important first consideration, as without hardware and adequate digital literacies, AI tools are not accessible. Cautioned on the use of AI which, due to its underlying epistemological sources, can be biased against non-mainstream cultures. Māori digital sovereignty is an important aspect to keep in mind (see the work of Dr. Karaitiana Taiuru).

Dr. Mark McConnell shared reflections as a 'front-line' university teacher. Background: all University of Auckland exams are now online, with most being open book and submitted online. The immediate challenge is how to deal with students using AI and how to find out whether students actually have the knowledge and skills to complete the assessment. Found that ChatGPT scored well on first-year assessment questions (multiple choice and short-answer legal questions). It was borderline for year 2 questions but failed with complex legal issues and concepts. Case studies, legislation etc. were often made up by ChatGPT, though this seems to have improved over time. Responded by putting all questions into ChatGPT and reworking questions where required (a rough sketch of this kind of check follows below). Discussed the challenge, with large numbers of students, of detecting AI content, and how AI tools do not always provide opportunities for students to learn essential thinking skills. Shared guidelines (as the decision was made not to ban ChatGPT): use at your own risk, verify the accuracy of answers, maintain academic integrity, and there is no expectation to use it. Strategies include using images and integrating ChatGPT into the question so students evaluate what it has produced for authenticity, validity etc.
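To illustrate the 'run your own questions through the model first' step, here is a minimal, hypothetical Python sketch using the OpenAI client; the model name and example questions are assumptions for illustration, not details from the presentation.

```python
# Rough sketch for stress-testing assessment questions against an LLM.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set;
# the model name and questions below are placeholders.
from openai import OpenAI

client = OpenAI()

questions = [
    "Explain the difference between a void and a voidable contract.",
    "Apply the relevant statute to the following fact scenario: ...",
]

for q in questions:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": q}],
    )
    answer = response.choices[0].message.content
    # Review each answer manually: if the model does well, the question may need
    # reworking (e.g. tie it to class discussion, or ask students to critique
    # the AI output itself).
    print(f"Q: {q}\nA: {answer}\n{'-' * 60}")
```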

Dr. Truman Pham from AcademyEX was next, with a discussion of the approach taken with students to ensure they attain all aspects of digital literacy, including AI literacies. AI needs to be used responsibly, but we must also ensure it is inclusive and equitable for all. The digital divide is very real. As with Shanon's presentation, digital equity needs to be attained as a first step before engagement with AI is possible.

A recording from Dr. Rebecca Marrone from the University of South Australia followed. Provided an example of an adaptive learning platform mediated by AI. Learning is personalised or differentiated as learners move through the platform. Discussed ethical considerations with regard to learning analytics. Presented on the opportunities for both learners and lecturers/teachers.

Q & A followed. 

Monday, May 15, 2023

Horizon report 2023 - overview

Not too many surprises in the latest Horizon Report for teaching and learning.

As per usual, the report begins with horizon scanning of various social, technological, economic, environmental and political trends. The effects of the pandemic will be a key influence across the coming years. Hence, the rise and integration of digital technologies into work, education and society have now become mainstream.

The key technologies and practices distilled include AI-enabled applications for predictive, personalised learning; the increasing sophistication and acceptance of generative AI; the blurring of boundaries between learning modalities; the increasing deployment of HyFlex delivery and microcredentials; and the increasing need to support students' sense of belonging and connectedness when learning is distributed and delivery is multi-channel and multimodal.

Speculation on scenarios for growth, constraint, collapse and transformation is presented. The report predicts growth of digital technologies for teaching and learning, constrained by the effects of climate change, with the possible collapse of the traditional, long, campus-based undergraduate education model, and challenges in the transformation of education to incorporate AI, flexible learning and personalised education.

A series of implication essays is then provided to encourage discussion on topics including learning spaces, equity and accessibility, digital connectivity, adult learners, innovation in research and learning, faculty challenges, and under-resourced institutions.

All in, a summation of where things are at in a post-pandemic world wrestling with the social, economic and political fall-out from several years of disruption and uncertainty. Changes in education move at glacial speed, but the pandemic showed that it is possible to shift rapidly when required. What is needed now is to work out which of the emergency changes could be retained, how to leverage the improved capabilities of faculty, and how to re-engage students now used to several years of disruptions and changed delivery.



Tuesday, May 09, 2023

International Handbook of Lifelong Learning (3rd edition)

The Third International Handbook of Lifelong Learning, edited by K. Evans, W-O. Lee and M. Zukas and published by Springer as a 'living reference', contains 63 chapters. The two previous volumes were published in 2001 and 2012 respectively, so the series provides a good overview of how lifelong learning, adult education and continuing education have evolved across three decades.

The chapters cover topical issues including lifelong learning's contribution to helping humans ameliorate the effects of climate change, country reports/perspectives on lifelong learning, research focuses and issues, neuroplasticity and adult learning, migrants and refugees and the importance of lifelong learning in helping them resettle, digital technologies and Industry 4.0, the importance of indigenous knowledge, and accommodating sexual and gender identities in societal, cultural and lifelong learning contexts.

All in, the chapters provide contemporary overviews of issues, challenges and recommendations to support, extend and highlight lifelong learning's importance and application.