Explainable AI is essential in education

Published on October 18th, 2021

Edtech adoption in our schools has increased during the pandemic and, if reports are to be believed, adoption of Artificial Intelligence (AI) has also steadily grown. This is largely a positive development, but do school leaders and teachers really understand the technology? AI is often adaptive and self-learning, which means that what it already understands about a learner will lead it to draw particular conclusions about that learner's future needs.

As more schools use AI-powered technologies, it's increasingly important that teachers know how the technology makes decisions. Teachers need to understand not just what a child has learnt, but how they have learnt it. For this to be possible, providers of AI-enabled technology need to explain how it decides on a particular course of action. For example, AI designed to help teach a foreign language might recommend that a learner revises some words more frequently than others, based on data about which words they have forgotten the fastest.
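
To make the idea concrete, here is a minimal sketch of how such a recommendation could be made explainable. It assumes a hypothetical vocabulary app that simply tracks how often each word has been forgotten; the names WordRecord, forgetting_rate and recommend_revision are illustrative and do not describe any particular product's method.

```python
from dataclasses import dataclass

@dataclass
class WordRecord:
    """A learner's revision history for one vocabulary word (hypothetical schema)."""
    word: str
    times_seen: int
    times_forgotten: int

def forgetting_rate(record: WordRecord) -> float:
    """Share of encounters where the learner forgot the word."""
    return record.times_forgotten / max(record.times_seen, 1)

def recommend_revision(history: list[WordRecord], top_n: int = 3) -> list[tuple[str, str]]:
    """Pick the words to revise next, each paired with a plain-language reason
    so a teacher or learner can see why it was chosen."""
    ranked = sorted(history, key=forgetting_rate, reverse=True)
    return [
        (r.word, f"forgotten {r.times_forgotten} of the {r.times_seen} times seen")
        for r in ranked[:top_n]
    ]

history = [
    WordRecord("bibliothèque", times_seen=6, times_forgotten=4),
    WordRecord("chien", times_seen=6, times_forgotten=1),
    WordRecord("ordinateur", times_seen=5, times_forgotten=3),
]
for word, reason in recommend_revision(history, top_n=2):
    print(f"Revise '{word}' ({reason})")
```

The point is not the scoring rule itself but that every recommendation carries a reason a human can read back.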

AI ambiguity

It can be hard to explain how AI programmes make decisions, or why they are doing what they are doing. Even when an explanation is offered, it can still be unclear: a statement like "The AI made a decision because past data suggests this is an optimal choice" isn't going to help a teacher explain to a student why they now need to return to a topic they feel they already understand.

This form of AI, often referred to as ‘black box AI’, can be incredibly powerful and effective. But education is too important to be trusted to an inscrutable black box – instead, it needs explainable AI.

Explainable AI is not a new concept in AI circles; indeed, the EU is bringing forward new regulations in this area, but its importance needs to be understood by school leaders and teachers. Explainable AI means that the results from an AI-powered response can be understood by humans. This contrasts with black box AI where even the designers of a tool sometimes can’t explain why an AI programme has arrived at a specific decision. The choice made by AI doesn’t always have to be the same one a human would make, but a human should be able to understand the process by which the decision was made.

AI to elevate, not deprecate

For classroom resources like Sparx, this means that a teacher using AI should be able to ask “Why is this student being asked this question?” and receive a clear answer. If that’s not the case, then I question whether it has a place in the classroom. Teachers bring the professional judgement and the knowledge of their students that AI will always lack. Impactful and understandable AI needs to involve teachers and their expert insight.
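
As an illustration only (not Sparx's actual implementation), a question selector built on this principle would return a teacher-readable reason alongside every choice. The sketch below assumes a hypothetical per-topic mastery profile; TopicMastery and select_next_topic are invented names for the purpose of the example.

```python
from dataclasses import dataclass

@dataclass
class TopicMastery:
    """A student's record on one topic (hypothetical schema)."""
    topic: str
    correct: int
    attempted: int

def mastery(t: TopicMastery) -> float:
    """Fraction of attempts answered correctly; 0 if the topic hasn't been attempted."""
    return t.correct / t.attempted if t.attempted else 0.0

def select_next_topic(profile: list[TopicMastery], target: float = 0.7) -> tuple[str, str]:
    """Choose the topic for the next question and return it with the reason,
    so 'Why is this student being asked this question?' always has an answer."""
    weakest = min(profile, key=mastery)
    if mastery(weakest) < target:
        reason = (f"{weakest.correct} of {weakest.attempted} answers correct on "
                  f"'{weakest.topic}', below the {target:.0%} target")
    else:
        reason = f"all topics meet the {target:.0%} target; consolidating the weakest ('{weakest.topic}')"
    return weakest.topic, reason

profile = [
    TopicMastery("fractions", correct=3, attempted=8),
    TopicMastery("percentages", correct=7, attempted=9),
]
topic, why = select_next_topic(profile)
print(f"Next question: {topic} (because {why})")
```

Whatever the underlying model, the requirement is the same: the decision and its justification arrive together.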

The growth of AI in schools is to be embraced; it can save teachers time and provide a level of personalised learning that would be a challenge for any teacher. However, we need to ensure it’s built on principles which mean it can be trusted long-term. Teachers make decisions with their students’ best interests in mind, and they need to feel confident that any AI tool will amplify, rather than contradict, those decisions.

I encourage school leaders to challenge edtech companies using AI to describe how a teacher can understand, at an individual student level, why the AI has selected a particular question or resource. It’s a simple question, and the answer should be easy to understand; if it is, you can feel confident they are committed to explainable AI.

