Please note that there is a small amount of offensive language in the clip.
Artificial intelligence is reshaping society faster than any previous technology, and in his talk The Dangers of Unregulated AI on Humanity and the Workforce, Tristan Harris warns that without guidance or regulation, it could erode the very structures that hold our communities together. He describes a future where decisions that once required human reflection are delegated to algorithms optimised for profit rather than for people. These systems, built by a small number of powerful companies, already influence what we see, believe, and value. Left unchecked, they risk deepening inequality, amplifying misinformation, and making moral choices without accountability. Harris's message is not only about better technology but about shared responsibility: governments, technologists, educators, and citizens all have a role in ensuring AI serves humanity rather than replacing it.
For educators, this message carries particular significance. Schools are where young people first learn to make sense of the digital world, and understanding AI is now as important as learning to read or write. Teaching students how to use AI tools is no longer enough; we must help them grasp how these systems work, where their data comes from, and why they sometimes make mistakes. This is the new literacy—one that blends curiosity with critical thinking. Just as students were once taught to question what they read online, they now need to question what AI produces and understand the values embedded within it.
Education also plays a vital part in preparing students for a world where many jobs will evolve or disappear entirely. The skills that remain essential are the ones AI cannot replicate: empathy, creativity, ethical judgment, and adaptability. These need to be nurtured across the curriculum, from discussions about bias and fairness in computing lessons to moral reasoning and civic debate in the humanities. Harris’s talk reminds us that progress without values is not education—it is training. By encouraging reflection and responsibility, schools can help future innovators understand both the promise and the cost of automation.
Teachers, too, shape how AI is understood by modelling its use. When educators use AI transparently, acknowledging its assistance, discussing its limits, and showing discernment, they demonstrate that technology is something to be guided, not obeyed. Schools can become laboratories for responsible innovation: spaces where students explore how technology can improve lives rather than manipulate attention or maximise profit. Clear policies, collaboration with parents, and a commitment to frameworks such as Relevant, Responsible, and Resilient technology use can help align this vision across the school community.
Harris concludes his talk not with despair but with hope. The future of AI, he suggests, is still in our hands. If we act with wisdom and foresight, it can be a force for creativity, inclusion, and human flourishing. For teachers, that means seeing the classroom not as a refuge from AI but as the place where society learns how to live thoughtfully with it. The goal is not to make students fear technology, but to help them recognise that the most powerful intelligence will always remain human.