Monday, September 18, 2017

Transition to a technology / AI driven world

I have been following with some interest the reports over the last couple of weeks coming out of Asia on collisions between US Navy ships and other vessels.

As a consequence of the latest collision a couple of weeks ago, the fourth accident this year, the US Navy relieved the commander of the 7th Fleet.

Today Online had an interesting article on how the collisions may be an outcome of an over-reliance on technology. The article surmises that this over-reliance on technology may have led to a decline in basic seamanship and other competencies.

One of the seminal readings on workplace learning is an article by Edwin Hutchins on 'learning to navigate'.

Hutchins, E. (1996). Learning to navigate. In S. Chaiklin & J. Lave (Eds.), Understanding practice: Perspectives on activity and context (pp. 35-63). Cambridge: Cambridge University Press. See also his book, Cognition in the Wild.
An overview and a more up-to-date (2002) analysis and discussion of distributed cognition is provided by Karasavvidis.

In short, the article presents how the knowledge and expertise required to run complex machinery, organisations, processes etc. are shared amongst workers. The context for Hutchins's study was a naval ship. The 'technology' of 20-plus years ago still required sailors to manually trace the ship's trajectory on nautical charts. Each seaman contributed a task or piece of knowledge, and the collection of all of these activities ensured the ship reached its destination safely.

Of note, in Hutchins's work, and also the work of Pea, is the notion of 'distributed intelligence'. See the seminal readings for these:

Hutchins, E., & Klausen, T. (1998). Distributed cognition in an airline cockpit. In Y. Engeström & D. Middleton (Eds.), Cognition and communication at work (pp. 15-34). Cambridge, UK: Cambridge University Press.

Pea, R.D. (1993). Practices of distributed intelligence and designs for education. In G. Salomon (Ed.), Distributed cognitions: Psychological and educational considerations (pp. 47-87). New York: Cambridge University Press.

When we add artificial intelligence into the mix, the need for greater levels of understanding amongst the 'users' of the information being generated takes on a whole different connotation. In short, the human 'overseers' will require some way to 'see the BIG picture'. Otherwise, decisions made by humans and AI, within already complex systems, become even more complicated. Especially given two recent cautionary examples: AI robots can be sexist or racist, as their programmers tend to come from perspectives of privilege (often WASP), and 'killer robot' warfare is closer than we think.
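To make the first caution concrete, here is a minimal sketch (using entirely hypothetical data and a deliberately trivial "model") of how an AI system trained on historically skewed decisions simply reproduces that skew. The group labels, outcomes and counts below are invented for illustration only.

```python
from collections import Counter

def train(examples):
    """For each group, 'learn' the most frequent historical outcome.

    A deliberately trivial stand-in for a real classifier: it shows
    that the learned rule is only as fair as the data behind it.
    """
    outcomes = {}
    for group, outcome in examples:
        outcomes.setdefault(group, Counter())[outcome] += 1
    return {g: c.most_common(1)[0][0] for g, c in outcomes.items()}

# Hypothetical, deliberately skewed historical decisions.
history = ([("A", "hire")] * 9 + [("A", "reject")] * 1
           + [("B", "hire")] * 2 + [("B", "reject")] * 8)

model = train(history)
print(model)  # the learned rule mirrors the historical skew against group B
```

Nothing in the algorithm itself is prejudiced; the bias lives in the training examples, which is why human overseers need to see the BIG picture behind the data.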

Therefore, education for all humans requires a BIG picture focus. Being skilled in occupational tasks will also require an understanding of WHY the task is required, WHERE the task fits into the larger scheme of things, what the implications are if any part of the whole becomes compromised, and HOW to correct, re-develop or re-configure things in a timely manner if something does go wrong. Simulations will need to embed these big picture focuses to provide authentic learning for novices and for others requiring upgrading or updating. There is therefore also a need for learners to understand HOW AI may work and the algorithms underlying its decision making. The human brain may make decisions which differ, due to the individualised nature of human learning. Hence, we not only need to be empathetic to the needs of others when we work in teams, but also need to be aware of what AI brings into our work processes.
