I opened the conference and welcomed Ken Shelton onto the stage. The focus was on AI and ensuring we use it responsibly, and after the keynote I had a good chat with him. There is a lot going on in generative AI, and most teachers find it a great challenge. Teachers have divided into two camps: the gung-ho, who want to use it for everything, and those who resist change and only want AI detection to lock students out of using it in their work.

I discussed how we are using it in computing. The fact that code is relatively easy to test, so you can check whether the AI has got it right or wrong, is incredibly powerful and shows how computing is likely to develop over the next few years. When I have tested it in classroom environments, with full access to the tools, about 20% of the students reject using AI completely, because it makes too many mistakes!
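To illustrate the point, here is a minimal sketch of how a class might check an AI-drafted function against inputs and outputs they have worked out by hand. The function name and the test cases are illustrative, not from any real lesson:

```python
# A classroom-style check: students compare a function an AI produced
# against cases they worked out by hand before asking the AI.

def ai_generated_is_prime(n: int) -> bool:
    """Primality check as an AI chatbot might draft it (stand-in)."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

# Known inputs and expected outputs, decided by the students in advance.
known_cases = {0: False, 1: False, 2: True, 9: False, 17: True, 25: False}

for value, expected in known_cases.items():
    actual = ai_generated_is_prime(value)
    status = "OK" if actual == expected else "WRONG"
    print(f"is_prime({value}) -> {actual}, expected {expected}: {status}")
```

Running checks like this turns "did the AI get it right?" from a matter of opinion into something students can verify for themselves.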
There are numerous concerns with AI. AI detection is wholly unreliable, and a teacher recognizing the quality of a student's work is still by far the most reliable check of authenticity. It might be old fashioned, but assessing by examination remains the most secure way of checking knowledge. That does not mean exams all need to be closed book, and I would argue that there is still value in portfolios of work. I would also argue that students need to learn AI literacy, in the same way that students in the late 90s needed to learn how to research using the web.
Age restrictions vary, but in Malaysia and Thailand students are allowed to use most AI from age 13. This is true of both Gemini and ChatGPT. Schools with a Google education package have the assurance that their data will not be used in training. Also note that at the time of writing NotebookLM is only available to those aged 18 and over. Talking of terms and conditions, which I am sure you all read in detail: did you know that MagicSchool limits compensation for any data breach caused by their systems to just $100? This might not hold up in the courts, but it would do very little to help with any data mitigation problems, especially because it is a system where they encourage you to write reports and potentially disclose sensitive data.
MagicSchool and many other educational offerings are often just wrappers that connect to an engine such as ChatGPT. The value they add can be very variable, and you are effectively adding a middle person into the arrangement. Ken has likened this to microwaving a meal: would you not want to take the time and trouble to prepare your food properly? In other words, at least use AI prompts directly and learn which prompts are successful. Also, if you have a direct relationship with Google or OpenAI, then if there is a problem you have a direct contract.
Bias continues to be a massive problem with generative AI, and it highlights how important teachers are in helping to compensate for it. If you ask an AI for a doctor, who would it draw? To ensure equity, I suggested that teachers might need to prompt deliberately so that the pictures represent the classes in front of them. If you want to suggest that males and females can code, then you need to prompt the AI to generate images that show this. These issues are already affecting people: Ken himself kept getting told at Hong Kong airport that he was wearing a mask. He actually had to put his hand next to his face to give the camera the positioning to show he was not. In this case, the training of the model needed a broader range of faces to make fewer errors.
These issues underline the essential importance of teaching computational thinking and digital literacy. Teaching coding ensures that students can easily test an AI against known inputs and outputs. This is incredible learning that will help students not only be creators, but also more discerning consumers of AI products.